How would you test the program in Figure 1? It’s a simple time-clock application. You give it two times, and it records them. The program includes some simple error messaging for invalid value combinations.
Figure 1 Simple time-clock application.
James Bach once gave me a problem similar to this. I started with some basic quick tests. For example, when I changed screen resolution and browser size, I noticed that on low resolution the time drop-downs fell off the screen, and I couldn’t select all the values.
Once the easy stuff was over, I started my analysis of the functionality. The nice thing about a simple problem like this is that there are natural equivalence classes: AM/PM and start/end. There are also some interesting values to try: noon, midnight, and at least one half-hour value for each class.
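The equivalence-class analysis described above can be sketched as a small enumeration. Everything here is an assumption built from the description in the text, not the actual values tested; it just shows how few candidate values the analysis produces compared with the full 48-option lists.

```python
# Hypothetical sketch of the equivalence-class analysis. The class names and
# the "interesting" boundary times are taken from the article's description;
# they are illustrative, not the actual test data.
fields = ["start", "end"]          # one class per field
periods = ["AM", "PM"]             # one class per period
interesting_times = ["12:00", "12:30", "1:30"]  # noon/midnight plus half-hour values

# Cross the classes to get candidate values for each drop-down.
candidates = [(f, t, p) for f in fields for t in interesting_times for p in periods]
print(len(candidates))  # 2 fields x 3 times x 2 periods = 12 candidate values
```

Twelve candidate values is a big reduction from the 48 options in each drop-down, which is exactly the economy this kind of analysis is supposed to buy.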
After around 30 minutes, I stopped testing and told James that I was done. By that point I had executed a handful of tests based on my analysis, found a couple of problems, and was ready to move on. James pointed out that in the time I took to model the problem and run my tests, I could have executed all possible combinations for the two fields (48 × 48 tests).
To illustrate his point, James wrote a quick Perl script. He copied the selection values from the source HTML and used a regular expression to read them in (so he wouldn’t need to waste time formatting the data), and he ran all the tests. The total time to write and execute the script was about 10 minutes.
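James’s Perl script isn’t shown, but the same approach might look like this in Python. The sample markup, the regular expression, and the submit_timeclock stand-in are all assumptions for illustration; the real script drove the actual application against all 48 options per field.

```python
import re
from itertools import product

# Hypothetical fragment of the <select> markup copied from the page source.
# The real page would list all 48 options per drop-down; three are shown here.
html = """
<option value="12:00 AM">12:00 AM</option>
<option value="12:30 AM">12:30 AM</option>
<option value="1:00 AM">1:00 AM</option>
"""

# Pull the option values straight out of the markup with a regular
# expression, so no time is wasted reformatting the data by hand.
times = re.findall(r'<option value="([^"]+)"', html)

def submit_timeclock(start, end):
    # Stand-in for whatever actually drives the application under test.
    return f"tested {start} -> {end}"

# Run every combination of start and end time.
results = [submit_timeclock(s, e) for s, e in product(times, times)]
print(len(results))  # 3 options -> 9 combinations; 48 options would give 2304
```

The regex trick is the part worth copying: scraping the values out of the source HTML means the test data always matches what the page actually offers.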
One of the lessons of the exercise was that sometimes just trying something is cheaper than figuring out whether you should try it—automation can be faster than thinking about the problem. Instead of taking the time to analyze which values I wanted to test with, I could have executed all possible tests. That won’t always be possible, but we should constantly look for the opportunity.
Frameworks for Test Automation
Why didn’t I think of scripting the tests above? Partly because I have an automation bias. When I think of automation, I think of automation frameworks. I’ve done a lot of work implementing frameworks on projects, I talk about them, and I write about them. I’m constantly thinking about how we can leverage them in new ways, get more value out of them, make them more maintainable, and make them more powerful.
A framework is a set of assumptions, concepts, and practices for your test automation project. That’s the formal definition; when I talk about frameworks, I usually say something simpler: "It’s code that makes our automation more maintainable and sometimes makes scripts easier to write." Frameworks are most commonly used in large test automation efforts, where the focus is on creating regression test scripts. Common framework buzzwords include data-driven, keyword-driven, and object map.
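As a rough illustration of the data-driven idea: the test data lives in a table that testers can maintain, while the runner code stays fixed. The column names, expected messages, and run_case logic below are invented for this sketch; a real runner would drive the application rather than emulate its validation rule.

```python
import csv
import io

# Hypothetical data table; in practice this would be a CSV file testers edit.
data = io.StringIO("""start,end,expected
9:00 AM,5:00 PM,ok
5:00 PM,9:00 AM,error: end before start
""")

def run_case(start, end):
    # Stand-in for driving the real time-clock application; it emulates a
    # simple "end must not precede start" rule so the sketch is runnable.
    def minutes(t):
        clock, period = t.split()
        h, m = map(int, clock.split(":"))
        h = h % 12 + (12 if period == "PM" else 0)
        return h * 60 + m
    return "ok" if minutes(start) <= minutes(end) else "error: end before start"

# The runner never changes when new rows are added -- that separation of
# data from code is the point of the data-driven style.
for row in csv.DictReader(data):
    assert run_case(row["start"], row["end"]) == row["expected"]
print("all cases passed")
```

A keyword-driven framework takes the same idea a step further, putting the actions themselves (not just the data) into the table.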
I think frameworks are good. If you need to implement a large regression test effort, use a framework. But remember that frameworks are not the only way you can automate your tests. Sometimes it’s helpful to automate tests once and then throw them away.
When trying to figure out what type of test automation I need for a given problem, I ask myself the following questions:
- What’s the goal of my testing?
- What aspect of the application am I trying to cover with this test or set of tests?
- What risk(s) does this test address?
- What specifically am I looking for automation to solve?
- Where will this test run, and how long will I need to maintain it?
- Will someone other than me look at or maintain this code, and what (if anything) will he or she need to be able to do with it?
I do this for all automation, whether framework or ad hoc. If I can’t answer those questions, I don’t write any code. Those context-setting questions keep me focused on my testing and not my code (an occupational hazard). They tell me whether I need a framework or I can just start scripting.
Note that regardless of my initial conclusion, my decision is not set in stone. As I start writing code, I learn more about my test. At any point I might change my mind (based on my new understanding) and choose to move the script into a framework, or dump the framework and just run the test once. I have two specific examples of doing just that.