Problem 2: Inattentional Blindness
When executing the tests, the testers didn’t look for any errors other than the specific errors identified in the test case. For example, I randomly selected a passed test case for review. I executed the test case myself and scanned the actual results looking for the expected value (the value defined in the test case) and I found it. I then re-scanned the actual results looking for other information about the product. I found some funny stuff.
When all was said and done, we logged six defects on this "passed" test case. The only way I can explain this result is that the tester wasn’t brain-engaged. Because we had a script, all the tester did was follow it, turning off his or her powers of critical thinking and problem-solving. This was one example of many similar cases we found.
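The gap between the scripted check and an attentive review can be sketched in code. This is a hypothetical illustration, not the actual test case from the story: the report contents, field names, and helper functions below are all invented to show how a test that verifies only the expected value can "pass" while defects sit in plain view.

```python
# Hypothetical product output: a report with one field the script asks about
# and two defects it never mentions. All names/values are invented.
report = {
    "total": "42",             # the value the test case tells the tester to verify
    "customer_name": "",       # empty field the script never mentions
    "status": "ERR_NULL_REF",  # error text sitting in plain view
}

def scripted_check(report):
    """Follow the script: look only for the expected value, nothing else."""
    return report["total"] == "42"

def attentive_scan(report):
    """Also scan every field for obvious trouble: blanks and error markers."""
    issues = []
    for field, value in report.items():
        if value == "":
            issues.append(f"{field}: unexpectedly empty")
        elif value.startswith("ERR"):
            issues.append(f"{field}: error text '{value}'")
    return issues

print(scripted_check(report))  # True -> the test case "passes"
print(attentive_scan(report))  # two defects the script never asked about
```

Both functions look at exactly the same output; the only difference is what the reader is primed to notice, which is the essence of inattentional blindness.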
Computerworld has a great article in which Kathleen Melymuka interviewed Max H. Bazerman, a professor of business administration at Harvard Business School. In the article, Bazerman talks about how "bounded awareness" can cause you to ignore critical information when making decisions. The example Bazerman uses is the same one that Cem Kaner and James Bach use in their Black Box Software Testing course on test procedures and scripts. In the course video (the same video to which Bazerman refers), Kaner points out some of the implications of inattentional blindness for our reliance on scripts:
"The idea (the fantasy) is that the script specifies the test so precisely that the person following it can be a virtual robot, doing what a computer would do if the test were automated."
Inattentional blindness teaches us that unless we pay close attention, we can miss even the most conspicuous events that occur while we’re executing our well-planned tests. This means that perhaps our tests are not as powerful as we think. Joel Spolsky provides a wonderful example of this principle:
"All the testing we did, meticulously pulling down every menu and seeing if it worked right, didn’t uncover the showstoppers that made it impossible to do what the product was intended to allow. Trying to use the product, as a customer would, found these showstoppers in a minute.
And not just those. As I worked, not even exercising the features, just quietly trying to build a simple site, I found 45 bugs on one Sunday afternoon. And I am a lazy man, I couldn’t have spent more than 2 hours on this. I didn’t even try anything but the most basic functionality of the product."
Joel’s example also points to the relative power of a test (its ability to find defects). I don’t attribute all of the missed bugs to inattentional blindness, but assuming that Fog Creek has smart people doing its testing, my guess is that inattentional blindness played more than a small part.