“… one usually encounters a definition such as, ‘Testing is the process of confirming that a program is correct. It is the demonstration that errors are not present.’ The main trouble with this definition is that it is totally wrong; in fact, it almost defines the antonym of testing.”
– Glenford Myers,
Software Reliability: Principles & Practices, 1976
People keep telling me that testing is a validation activity — that the purpose of testing is to validate that the software meets all the specifications, has no errors, meets performance SLAs, meets the expectations of anonymous users, or achieves some other lofty goal.
I read about testing processes designed to validate software. I use testing tools built to support validation. I listen to service companies pitch testing services to validate software. I read about testing metrics built on the assertion that software systems can be proved correct. I attend testing presentations explaining the presenters’ best practices for validation.
The trouble is that we cannot prove software correct. We cannot prove the absence of bugs. We cannot test every possible state and input. We cannot evaluate every possible output. We cannot fully understand the desires of stakeholders. We cannot prove that customers will be happy. We cannot prove that a software product will solve the problems it was built to solve. If all this were possible, I suspect insurance companies would find a way to make a profit selling software quality insurance.
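To make "we cannot test every possible state and input" concrete, here is a back-of-envelope calculation. The numbers are illustrative assumptions, not from the original text: a pure function of two 32-bit integer inputs, tested at an optimistic one billion cases per second.

```python
# Back-of-envelope: exhaustively testing a pure function of two
# 32-bit integer inputs. All figures are illustrative assumptions.
cases = 2 ** 32 * 2 ** 32           # every (a, b) pair: 2^64 cases
rate = 10 ** 9                      # optimistic: one billion tests per second
seconds_per_year = 60 * 60 * 24 * 365

years = cases / (rate * seconds_per_year)
print(f"about {years:.0f} years")   # roughly 585 years of nonstop testing
```

And that is for a single stateless function of two inputs; real systems carry state, concurrency, and environments that multiply the space far beyond this.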
“If you think you can fully test a program without testing its response to every possible input, fine. Give us a list of your test cases. We can write a program that will pass all your tests but still fail spectacularly on an input you missed. If we can do this deliberately, our contention is that we or other programmers can do it accidentally.”
– Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
Testing Computer Software, Second Edition, 1999
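The Kaner, Falk, and Nguyen argument can be seen in miniature. The function and test list below are a made-up illustration, not from their book: a plausible, accidental bug that sails through a finite test suite and fails on the first input the suite missed.

```python
def scale(value, percent):
    """Scale value by a percentage.
    Buggy: integer division discards the fractional multiplier."""
    return value * (percent // 100)

# The test list we hand over — every case passes:
assert scale(10, 100) == 10
assert scale(10, 200) == 20
assert scale(10, 300) == 30

# The input we missed — fails spectacularly:
print(scale(10, 50))   # expected 5, but integer division yields 0
```

Nothing about the passing tests proves the function correct; they only report that no failure was observed on those three inputs.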
Now, thirty-two years after Glenford Myers called testing-to-prove-correctness the antonym of testing, we are still surrounded by testing practices and tools built on the premise of proving correctness. The myth of proving correctness is alive and well.
Activities designed to try to prove correctness are the antonym of testing.
So if testing is not validation, what is testing? Testing is investigation, and the communication of useful information about quality to decision makers.
“Testing is the process by which we explore and understand the status of the benefits and the risk associated with release of a software system.”
– James Bach,
James Bach on Risk-Based Testing, STQE Magazine, Nov 1999
“Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information.”
– Cem Kaner, James Bach, Bret Pettichord,
Lessons Learned In Software Testing: A Context-Driven Approach, 2002
“A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions.”
– Bret Pettichord,
Don’t Become the Quality Police, StickyMinds.com, 2002
Once we admit that we cannot prove the software correct, we can refocus our efforts on finding useful quality-related information. Instead of pretending to assure quality or validate correctness, we can gather and communicate useful information. Investigate the software. Find information about threats to the quality of the systems under investigation. Communicate that information in terms that matter to stakeholders. Help managers make informed decisions.