Test Cases as Tours
Tourists might plan a vacation by selecting several attractions and deciding later which activities they will actually do. This tourist metaphor represents good advice for software testers; it lets us slice our application's functionality into test cases.
But here is where the cartography metaphor breaks down. Tourists select attractions and activities based on personal preferences and their specific interests. Such subjectivity isn't necessarily a good thing for Google testers, who want some optimization of their effort or, failing that, at least some guidance and repeatability. Do we simply leave test selection up to the individual, or do we attempt to build a knowledge base of techniques that work?
At Google, we call these overarching test design issues tours, extending the cartography metaphor into a tourist metaphor. Real tourists may plan an entire day based on tours. We'll take the Big Red Bus Tour of London, followed by a visit to the War Museum, and a guided ghost tour in the evening. The tourists’ day is fully planned, with attractions selected in advance and the activities mapped out, often with reservations made in advance. Tourists know where they'll stay, when they'll arrive, and how they will get around.
For guidance, tourists have the mother lode: tour guides, web sites, TripTiks, guidebooks, and racks of attraction brochures. Tourists use concierges, buses, taxis, walks, paths, and so on. Testers can take a page from this guidebook. Just as new tourists learn from the tourists who went before them, there are test cases we can learn from others. There are problematic features we can recall. There is timeless advice (“Get advance reservations,” “Keep your wallet in your front pocket,” “Don't drink the water”) that’s useful every time we venture out.
Guidance for testers has to come from testers. Keep a log of what works. Scrutinize test cases that find major bugs. What made them work so well? What combination of features and capabilities found an important bug? What reasoning went into defining that particular test case?
These are the testing tours: the collection of advice that worked for others and just might work for you. How long had tourists puzzled through the complexity of Paris before someone thought to write down all the best advice? How many testers must retest the same functionality before it becomes part of the standard reference?
A single slice of functionality might look something like Figure 3.
Figure 3. A tour through the capabilities of the application represents a test case.
But what makes any given slice a good tour/test case? Enter the common wisdom: the collective advice of dozens of good testers. We call this advice the testing tours. Others have called them tours, too; some have called them test patterns. In How to Break Web Software I called them “attacks.” It doesn't matter what you call them if the metaphor works for you. At Google, we settled on the tourism metaphor. A sampling of our tours appears below.
The Money Tour: Every tourist destination has a reason for its success. Orlando has theme parks, Las Vegas has casinos and shows, Amsterdam has red lights and coffee shops. These are the features that bring in the money.
The Morning Commute Tour and the After Hours Tour: Even tourist destinations have residents who must get to work every day. Every business day is flanked by a morning commute on one end and after-hours activity (meet at the bar after work?) on the other. The software parallel is the code that runs at startup and at shutdown. As testers, it is crucial that we pay special attention to these oft-overlooked aspects of our apps. How many different ways can our software be launched? What environment variables or system settings affect startup? Have we exercised every shutdown scenario (files saved, not saved, premature closure, network connections still open, and so on)?
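One way to make the Morning Commute Tour repeatable is to enumerate every combination of launch options and environment tweaks up front. A minimal sketch in Python, where the flags and environment variables are purely illustrative (any real app would substitute its own):

```python
import itertools

# Hypothetical launch matrix -- the flags and environment overrides
# below are examples, not any particular application's real options.
LAUNCH_FLAGS = [[], ["--safe-mode"], ["--restore-session"]]
ENV_OVERRIDES = [{}, {"LANG": "tr_TR.UTF-8"}, {"TMPDIR": "/nonexistent"}]

def startup_matrix():
    """Yield every (flags, env) pairing: the Morning Commute Tour's
    itinerary of distinct ways the app can wake up in the morning."""
    for flags, env in itertools.product(LAUNCH_FLAGS, ENV_OVERRIDES):
        yield flags, env
```

Driving each combination through the application under test (and recording which ones misbehave) turns "how many ways can our software be launched?" from a rhetorical question into a checklist.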
When I was at Microsoft, our Zune product suffered a nasty startup bug that struck on the last day of a leap year. Had the Morning Commute Tour been part of the testing repertoire for Zune, this bug might have been caught before it managed to brick people's music devices on one of the biggest party days of the year, New Year's Eve!
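The root cause, as publicly reported, was a year-conversion loop in the device's clock driver that never terminated when exactly 366 days remained in a leap year, i.e., on December 31, 2008. The sketch below is a Python reconstruction of the corrected logic, not the actual firmware C:

```python
def days_to_year(days_since_1980):
    """Convert a 1-based day count (day 1 = Jan 1, 1980) to (year, day-of-year).

    The reported firmware loop checked `days > 366` in leap years but never
    handled `days == 366`, so on Dec 31 of a leap year it spun forever.
    """
    def is_leap(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    year, days = 1980, days_since_1980
    while days > 365:
        if is_leap(year):
            if days > 366:
                days -= 366
                year += 1
            else:
                break  # the fix: Dec 31 of a leap year belongs to this year
        else:
            days -= 365
            year += 1
    return year, days
```

A Morning Commute Tour that deliberately varied the system clock at startup (first day of the year, last day, leap day, last day of a leap year) would have walked straight into this bug.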
Some tours make direct use of the maps we prepared while planning our “trip.” The Landmark Tour is one such tour. It picks slices of an application based on some set of landmarks. In this context, landmarks are capabilities that the software can perform; taking a tour through the capabilities means that we vary the paths we test and the order in which we exercise features. How many bugs have testers found simply by doing x before y instead of vice versa? Lots, and that's why the Landmark Tour is such an important one to plan in advance and execute often (both manually and with automation). The end result will be that testers are more conscious of which paths are problematic (and thus require additional testing) and which are not, just as tourists learn which attractions are worth visiting and which are not.
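Varying the order of landmarks is easy to generate mechanically. A minimal sketch, with hypothetical capability names standing in for a real application's feature list:

```python
from itertools import permutations

# Illustrative landmarks for an imaginary editor; substitute your app's own.
LANDMARKS = ["open_file", "edit_text", "spell_check", "save"]

def landmark_tours(landmarks, length):
    """Yield every ordered route through `length` landmarks, so that
    'x before y' and 'y before x' both get exercised."""
    yield from permutations(landmarks, length)
```

Even a short landmark list yields many routes (4 landmarks taken 3 at a time produce 24 orderings), which is exactly why automating the tour pays off: the orderings nobody thought to try by hand are often the ones that expose the bug.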
There are many more tours detailed in my new book Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design.
Enjoy your travels.