Using Performance Test Tools for High Volume Automated Testing
This article describes an experience I had using an everyday performance test tool to assist with functional testing. The story starts on a project where we were rewriting a mainframe legacy application as a web application. We wanted to run parallel tests: take data from the mainframe system, run it through the new system, and verify that the results matched. As luck would have it, a batch program (used for production maintenance-testing purposes) ran each Monday, writing all of that week's new transactions from the production system to a flat file. Every Monday, then, the test team received anywhere from 300–500 new datasets that had already been processed by the legacy application and were available for us to process with the new web application. In effect, each week gave us 300–500 new and interesting test cases to run through our system.
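To make the pipeline concrete, here is a minimal sketch of how such a weekly flat file might be parsed into individual test cases. The pipe-delimited layout and the field names are assumptions made for illustration; the actual mainframe extract used its own record format.

```python
# Hypothetical sketch: parse a weekly transaction flat file into test cases.
# The pipe-delimited layout and field names are assumed for illustration,
# not the real mainframe record format.

def parse_transactions(lines):
    """Turn raw flat-file lines into a list of test-case dictionaries."""
    fields = ("account", "txn_type", "amount", "expected_result")
    cases = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank records
        values = line.split("|")
        cases.append(dict(zip(fields, values)))
    return cases

sample = [
    "1001|DEPOSIT|250.00|OK",
    "1002|WITHDRAW|75.50|OK",
]
cases = parse_transactions(sample)
print(len(cases))            # 2
print(cases[0]["txn_type"])  # DEPOSIT
```

Each parsed case carries both the input data and the legacy system's result, which is what makes it usable as an expected value for parallel testing.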
The Traditional GUI Automation Approach
We had already developed a data-driven automation framework for the web application, using a well-known enterprise test tool. Therefore, our initial instinct was to leverage that framework for data entry. We looked at some simple methods of extracting the data, converting it to the tool's desired format, and then populating the web application using "traditional" GUI-based methods for input.
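The conversion step might look something like the following sketch, which writes the extracted transactions into a CSV data table that a data-driven GUI framework could iterate over. The column names and the CSV format are assumptions; the actual enterprise tool consumed its own proprietary data-table format.

```python
import csv
import io

# Hypothetical sketch: convert extracted transactions into a CSV data table
# for a data-driven GUI test framework. Column names are assumed here; the
# real tool's input format was tool-specific.

def to_data_table(transactions):
    """Write transaction dicts as CSV text a data-driven framework could read."""
    columns = ["account", "txn_type", "amount", "expected_result"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    writer.writerows(transactions)
    return buf.getvalue()

table = to_data_table([
    {"account": "1001", "txn_type": "DEPOSIT",
     "amount": "250.00", "expected_result": "OK"},
])
print(table.splitlines()[0])  # account,txn_type,amount,expected_result
```

With the data in a tabular format, the existing framework could treat each row as one GUI data-entry iteration.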
After some initial prototyping, we soon found that the execution time using this method of input was prohibitive given our limited software and hardware resources. Driving the GUI, it took around 10 minutes per transaction to enter the data, for a possible total execution time of around 80 hours on one machine. Even if we distributed the testing across 10 machines, a run would still take 8 hours (assuming no problems) and would tie up all of our machines for the duration.
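The back-of-the-envelope math behind those figures, using the upper end of the weekly batch (the article rounds the single-machine total down to "around 80 hours"):

```python
# Rough execution-time estimate for GUI-driven data entry.
transactions = 500       # upper end of the weekly batch
minutes_per_txn = 10     # observed GUI entry time per transaction

total_hours = transactions * minutes_per_txn / 60
print(round(total_hours, 1))             # 83.3 hours on one machine

machines = 10
print(round(total_hours / machines, 1))  # 8.3 hours spread across 10 machines
```

Even the distributed figure assumes every run succeeds on the first try, which is optimistic for GUI automation.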