InformIT

Software Testing Automation on the Fly: Against the Frameworks

Date: Jun 23, 2006

Sometimes a quick throwaway script is all you need to analyze and test a program. Mike Kelly presents examples showing when ad hoc testing can do the trick, and when you really need to go the framework route.

How would you test the program in Figure 1? It’s a simple time-clock application. You give it two times, and it records them. The program includes some simple error messaging for invalid value combinations.

Figure 1 Simple time-clock application.

James Bach once gave me a problem similar to this. I started with some basic quick tests. For example, when I changed screen resolution and browser size, I noticed that on low resolution the time drop-downs fell off the screen, and I couldn’t select all the values.

Once the easy stuff was over, I started my analysis of the functionality. The nice thing about a simple problem like this is that there are natural equivalence classes: AM/PM and start/end. There are also some interesting values to try: noon, midnight, and at least one half-hour value for each class.

After around 30 minutes, I stopped testing and told James that I was done. I had executed a handful of tests based on my analysis by this point, found a couple of problems, and was ready to move on. James pointed out that in the time I took to model the problem and run my tests, I could have executed all possible combinations for the two fields (48 × 48 tests).

To illustrate his point, James wrote a quick Perl script. He copied the selection values from the source HTML and used a regular expression to read them in (so he wouldn’t need to waste time formatting the data), and he ran all the tests. The total time to write and execute the script was about 10 minutes.
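James's original script was Perl; what follows is a minimal Ruby sketch of the same idea. The HTML snippet is a trimmed sample, and the step that actually submits each pair is left as a comment (the hypothetical submit_pair helper stands in for whatever drives the application):

```ruby
# Sketch only: a trimmed sample of the 48 half-hour values on the real page.
html = <<~HTML
  <select name="start">
    <option>12:00 AM</option>
    <option>12:30 AM</option>
    <option>1:00 AM</option>
  </select>
HTML

# Pull the values straight out of the page source with a regular
# expression, so no time is wasted reformatting the data.
times = html.scan(%r{<option[^>]*>([^<]+)</option>}).flatten

# Every start/end combination: 48 x 48 tests on the real page.
times.product(times).each do |start_time, end_time|
  # submit_pair(start_time, end_time)  # hypothetical: drive the app, log any failure
end
puts "#{times.size * times.size} combinations generated"
```

The point of reading the values out of the source HTML is that the test data maintains itself: if the drop-downs change, the script picks up the new values on the next run.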

One of the lessons of the exercise was that sometimes just trying something is cheaper than figuring out whether you should try it; automation can be faster than thinking about the problem. Instead of taking the time to analyze which values I wanted to test, I could have simply executed all possible tests. Exhaustive testing isn't always feasible, but we should constantly look for the cases where it is.

Frameworks for Test Automation

Why didn’t I think of scripting the tests above? Partly because I have an automation bias. When I think of automation, I think of automation frameworks. I’ve done a lot of work implementing frameworks on projects, I talk about them, and I write about them. I’m constantly thinking about how we can leverage them in new ways, get more value out of them, make them more maintainable, and make them more powerful.

Formally, a framework is a set of assumptions, concepts, and practices for your test automation project. When I talk about frameworks, though, I usually say something more like, "It’s code that makes our automation more maintainable and sometimes makes scripts easier to write." Frameworks are most commonly used in large test automation efforts, where the focus is on creating regression test scripts. Common framework buzzwords include data-driven, keyword-driven, and object map.

I think frameworks are good. If you need to implement a large regression test effort, use a framework. But remember that frameworks are not the only way you can automate your tests. Sometimes it’s helpful to automate tests once and then throw them away.

When trying to figure out what type of test automation I need for a given problem, I ask myself the following questions:

I do this for all automation, whether framework or ad hoc. If I can’t answer those questions, I don’t write any code. Those context-setting questions keep me focused on my testing and not my code (an occupational hazard). They tell me whether I need a framework or I can just start scripting.

Note that regardless of my initial conclusion, my decision is not set in stone. As I start writing code, I start to learn more about my test. At any point I might change my mind (based on my new understanding) and choose to script to a framework, or dump the framework and just run the test once. I have two specific examples of doing just that.

 

Switching from Ad Hoc to Framework

On a recent project that tested a web service, we had a wonderful opportunity to run a large amount of production data through the web service before we released it to production. This was after most of our functional and performance testing was complete, but we had some suspicions that, given the nature of the service and the data, there might be some data abnormalities in production that we hadn’t anticipated.

I designed a one-time test, using Ruby to read an ASCII file dump of the production data line by line and submit each record through the web service. I looked only for the most basic application error codes and spectacular failures such as Java exceptions. We were quite surprised to find that this testing did in fact reveal many problems with our data of which we were unaware. Indeed, we were so happy with the results that we made the test a standard part of our testing going forward.
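A minimal sketch of that one-time script, with assumed names throughout: call_service stands in for the real web-service client, and the error patterns and canned input are purely illustrative (the real script read the production dump file):

```ruby
require "stringio"

# Illustrative patterns for "spectacular failures" in a response:
# Java exception names and basic application error codes.
ERROR_PATTERNS = [/java\.\w+(?:\.\w+)*Exception/, /\bERR-\d+\b/]

def suspicious?(response)
  ERROR_PATTERNS.any? { |pat| response.match?(pat) }
end

# Stand-in for the real web-service client, so the sketch runs on its own.
def call_service(record)
  record.include?("bad") ? "500 java.lang.NullPointerException" : "200 OK"
end

dump = StringIO.new("good record\nbad record\ngood record\n")  # stands in for the ASCII dump file
failures = []
dump.each_line do |line|
  response = call_service(line.chomp)
  failures << line.chomp if suspicious?(response)
end
puts "#{failures.size} suspect record(s)"
```

The useful property of a replay test like this is that it needs almost no test design: the production data supplies the inputs, and the script only has to recognize obviously broken responses.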

What I had originally written in around two hours was now going to be used by multiple people, for multiple data sets, on multiple services. At that point, it made sense to make some changes. I paired with another tester and we rewrote the script to be more modular. We added a command-line interface with options for running in different environments, put more comments in the code, and added the script (and test data) to source control. All the things I hadn't wanted to do when I thought it was a one-time test now made sense going forward.
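The command-line front end can be sketched with Ruby's standard OptionParser; the option names and environment list here are illustrative, not the ones we actually used:

```ruby
require "optparse"

# Sketch of a CLI for a shared replay script (illustrative options).
options = { env: "test" }
parser = OptionParser.new do |opts|
  opts.banner = "Usage: replay.rb [options] DATA_FILE"
  opts.on("-e", "--env ENV", %w[dev test prod], "Target environment") { |e| options[:env] = e }
  opts.on("-v", "--verbose", "Log every request and response") { options[:verbose] = true }
end

args = ["--env", "dev", "data.txt"]   # would normally be ARGV
parser.parse!(args)                   # consumes the options, leaves the data file
data_file = args.first
puts "environment=#{options[:env]} data=#{data_file}"
```

A small interface like this is most of what "making it reusable" amounts to: other testers can point the same script at their own data sets and environments without editing the code.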

Switching from Framework to Ad Hoc

On another project, I ran across an interesting bug that had to do with the way in which you traversed screens in the application. The application had more than 30 screens, and you could go from any screen to another, but typically a user would go from screen A to screen B to screen C, and so on, navigating in order from the first screen to the last. The bug I found was that if I navigated from screen G back up to screen B, the system would crash with a fatal exception. Oops.

Most of our testing focused on that primary path, and all of our automation focused on that primary path. We had built a data-driven test automation framework for regression testing that moved from the first screen to the last, entering the information supplied in the data files. Because we were looking for volume of data entry in our regression tests, it didn’t make sense to design the framework to navigate other paths easily. It could do so, but the resulting scripts would have been very difficult to maintain.

After finding the bug and logging the initial defect, I asked myself the following questions:

I answered these questions as best I could, and started writing the script using our standard framework. As I was working on the script (a slow process, since I was using the framework), the initial defect was resolved by the developer and sent back to me for retesting. Looking at the defect, I became suspicious from his resolution comments and gave him a call:

Mike: "Hey, Tim. I was looking at the defect you just sent back to me for retesting, and I saw a note you made that said the problem was related to the architectural changes we made for this release. Can you tell me more about that?"

Tim: "Yeah, we changed the way we were using Struts on some of the pages. The error you saw was because each page had an object on it with the same name, and it didn’t know how to handle that. You’ll only see that problem if two pages happen to have two objects with the same name. All I had to do was change the name."

Mike: "Interesting. Have you checked the other pages for a similar problem?"

Tim: "I thought about it, but for the most part we all used the naming conventions for each page when we developed them. The one you found just happened to be an exception where the naming convention failed us because the pages are so similar. For me to test all the pages would take hours."

Mike: "Is it something you can tell just by looking at the code?"

Tim: "Sure, but the way the code’s structured, it would be faster to test it."

Mike: "So what’s the likelihood of this problem coming back in a future release?"

Tim: "Slim to none. It’s not something we change. And even so, it should be caught in a code review."

Mike: "If I could write a script in about 30 minutes and leave it running in the background to test all the other pages, do you think that would be useful?"

Tim: "Yeah, if it only takes you half an hour, I do. Go ahead and do that before you close the defect. I think it’s a good idea just so we can be sure there are no other similar problems. But really, man, if it’s more than thirty minutes, forget about it."

Mike: "Cool, man. Thanks. I’ll let you know how it goes."

Armed with this new information, I scrapped the work I was doing in our regression test framework and instead turned out a quick Watir script that ran in the background for the rest of the day. No other problems were found. I closed the defect and threw away the script. Based on the information I had, it didn’t make sense to run that test for five hours on every release.
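The real check was a Watir script that visited each screen; its core, the duplicate-name detection, can be sketched in plain Ruby. The page HTML is canned here and the screen and field names are made up:

```ruby
# Canned page source standing in for what Watir would fetch per screen.
pages = {
  "screenB" => '<input name="customerId"><input name="orderDate">',
  "screenG" => '<input name="customerId"><input name="shipDate">',
}

# Map each name attribute to the screens it appears on.
names = Hash.new { |h, k| h[k] = [] }
pages.each do |page, html|
  html.scan(/name="([^"]+)"/).flatten.each { |n| names[n] << page }
end

# Any name shared by two or more screens is a candidate for Tim's bug.
duplicates = names.select { |_, where| where.uniq.size > 1 }
duplicates.each { |n, where| puts "#{n} appears on: #{where.join(', ')}" }
```

Scanning the rendered source is what makes this cheap enough to run in the background: the script never has to know what any field does, only what it's named.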

Getting Started with Ad Hoc Scripting

I find automation using a framework to be very different from ad hoc automation. This is typically because of the tools used (and the languages on which the tools are based), as well as the quality of my code. I find that when I’m throwing the code away, I have almost no comments, lots of typos, and a bad habit of implementing infinite loops by accident. Of course, when working in a framework, I don’t do any of that (except for the occasional infinite loop)—my code is clean, readable, and sometimes reviewed.

In terms of tools, typically I use an enterprise tool like Rational Robot or Rational Functional Tester (with a heavy language and a heavy IDE) for framework development, and a language like Ruby (with an intuitive language and a simple text editor) for ad hoc scripting. If you’re using an open source tool such as Watir for your framework automation, you won’t really have a problem switching back and forth.

If you’re new to scripting, I recommend Brian Marick’s book Scripting for Testers, due out late in 2006. I’ve read early versions of the book, and it’s an excellent reference for someone getting started. Bret Pettichord also has many great examples on his site—all worth a look.
