
Improvements to Your Software Test Program

Using automated test tools can increase the depth and breadth of testing. Additional benefits are outlined in the table below, "Potential Benefits to Software Test Program with AST."[12]

Table Potential Benefits to Software Test Program with AST

Improved Quality of the Test Effort

  • Improved build verification testing (smoke testing)
  • Improved regression testing
  • Multiplatform compatibility and configuration testing
  • Improved execution of mundane tests
  • Improved focus on advanced test issues
  • Ability to reproduce software defects
  • Testing what manual testing can’t accomplish, such as security or memory leak testing
  • Enhancement of system expertise
  • After-hours “lights-out” testing
  • Improved requirements definition
  • Improved performance testing
  • Improved stress and endurance testing
  • Quality measurements and test optimization
  • Improved system development lifecycle
  • Improved documentation and traceability
  • Distributed workload and concurrency testing

Improved Build Verification Testing (Smoke Test)

The smoke test (build verification test) focuses on automating tests of the system components that make up the most important functionality. Instead of manually retesting everything whenever a new software build is received, a test engineer plays back the smoke test to verify that the major functionality of the system still works. An automated test tool lets the test engineer record the manual steps that would usually be taken to verify a software build or version. With such a tool, tests that verify all major functionality can be run before any unnecessary manual tests are performed. As an example, when delivering systems for one of our largest customers between 2003 and 2008, it was critical that every delivery to every platform be smoke-tested. The smoke test involved semiautomated overnight runs that were checked after a period of time, with reports then generated based on the outcome. The delivery goals (in numbers of platforms) for verified software were ambitious, and meeting them was necessary to bring needed capability and functionality to the customer. The point is that streamlining smoke testing through automation is of huge benefit and value to many of our customers. It is also a time and cost control: it keeps you from testing in depth a system that is not basically stable. This reduces rework and is another great cost-containment strategy.
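The flow above can be sketched as a minimal smoke-test harness: a small set of critical checks runs against a new build, and deeper testing proceeds only if they all pass. The check names and their trivially passing bodies are hypothetical stand-ins for real build-verification steps.

```python
# Hypothetical critical checks; a real harness would drive the application.
def check_login():          # stand-in for "user can log in"
    return True

def check_main_menu():      # stand-in for "main menu renders"
    return True

def check_save_record():    # stand-in for "a record can be saved"
    return True

SMOKE_CHECKS = [check_login, check_main_menu, check_save_record]

def run_smoke_suite(checks):
    """Run every check; return (passed_all, per-check report)."""
    report = {c.__name__: bool(c()) for c in checks}
    return all(report.values()), report

passed, report = run_smoke_suite(SMOKE_CHECKS)
print("BUILD OK" if passed else "BUILD BROKEN", report)
```

In practice a harness like this is wired to kick off automatically when a new build is delivered, so the stability verdict is waiting before any manual effort is spent.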

Improved Regression Testing

A regression test is a test or set of tests executed on a baselined system or product (baselined in a configuration management system) when a part of the total system or product environment has been modified. The test objective is to verify that the functions provided by the modified system or product are as specified and that there has been no unintended change to operational functions.

An automated test tool provides for simplified regression testing. Automated regression testing can verify that no new bugs were introduced into a new build. Experience shows that modifying an existing program is a more error-prone process (in terms of errors per statement written) than writing a new program.[13]

Regression testing should occur after each release of a previously tested application. The smoke test described previously is a small, rapid regression test of major functionality. Regression testing expands on the smoke test and covers all existing functionality that has already been proven viable. The regression test suite is the subset of all test procedures that exercises the basic functionality of the application; it may also include the test procedures with the highest probability of detecting errors. Regression testing should be done with an automated tool, since it is usually lengthy and tedious and thus prone to human error.
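The suite-selection rule just described can be sketched as code: core (basic-functionality) tests always belong to the regression set, and other tests qualify by their defect-finding history. The test names and defect counts below are illustrative assumptions.

```python
# Hypothetical test inventory: each entry records whether the test covers
# core functionality and how many defects it has historically uncovered.
all_tests = {
    "test_login_basic":    {"core": True,  "defects_found": 4},
    "test_checkout_basic": {"core": True,  "defects_found": 7},
    "test_report_export":  {"core": False, "defects_found": 6},
    "test_theme_colors":   {"core": False, "defects_found": 0},
}

def select_regression_suite(tests, min_defects=5):
    """Core tests always run; non-core tests qualify by defect history."""
    return sorted(
        name for name, info in tests.items()
        if info["core"] or info["defects_found"] >= min_defects
    )

suite = select_regression_suite(all_tests)
print(suite)   # test_theme_colors is excluded: not core, weak defect history
```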

Multiplatform and Configuration Compatibility Testing

Another example of the savings attributable to automated testing is the reuse of test scripts to support testing from one platform (hardware configuration) to another. Prior to the use of automated testing, a test engineer would have had to repeat each manual test required for a specific environment step by step when testing in a new environment. Now when test engineers create the test scripts for an AUT on platform x or configuration x, they can just play back the same scripts on platform y or configuration y, when using multiplatform-compatible tools. As a result, the test has been performed for the AUT on all platforms or configurations.
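The replay-across-environments idea can be sketched as a test matrix: one script is played back against every platform/configuration pair instead of repeating manual steps per environment. The platform and configuration names, and the stubbed playback result, are assumptions standing in for what a multiplatform-compatible tool would do.

```python
from itertools import product

# Hypothetical environments; a real tool would dispatch to actual machines.
PLATFORMS = ["windows-x64", "linux-x64", "macos-arm64"]
CONFIGS = ["en-US", "de-DE"]

def play_back(script, platform, config):
    # Placeholder for tool playback in the given environment.
    return {"script": script, "env": (platform, config), "passed": True}

def run_matrix(script):
    """Replay one recorded script on every platform/configuration pair."""
    return [play_back(script, p, c) for p, c in product(PLATFORMS, CONFIGS)]

results = run_matrix("login_script")
print(len(results), all(r["passed"] for r in results))
```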

Improved Execution of Mundane Tests

An automated test tool will eliminate the monotony of repetitious testing. Mundane repetitive tests are the source of many errors. A test engineer may get tired of testing the same monotonous steps over and over again. We call that tester fatigue or immunity to defects. Habituation is when you become used to the way the system works and don’t see the problems—it has become a habit to see the working solution without considering the negative, possibly nonworking, paths. A test script will run those monotonous steps over and over again and can automatically validate the results.
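A script never habituates: it can repeat the same monotonous steps hundreds of times and validate the result on every pass. The sketch below uses a hypothetical `form_under_test` stand-in for the real UI action being exercised.

```python
def form_under_test(value):
    # Stand-in for "enter a value, submit, read back the result".
    return value * 2

def repeat_and_validate(iterations=500):
    """Repeat the monotonous step and check the expected result every pass."""
    failures = []
    for i in range(iterations):
        if form_under_test(i) != i * 2:   # the expected result each pass
            failures.append(i)
    return failures

print("failing iterations:", repeat_and_validate())
```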

Improved Focus on Advanced Test Issues

Automated testing allows for simple repeatability of tests. A significant amount of testing effort goes into the basic user interface operations of an application and into comparing expected to actual outputs.

Automated testing presents the opportunity to move on more quickly and to perform a more comprehensive overall test within the schedule allowed. Automatic creation of user interface operability tests or automated test result output comparison gets these tests out of the way, allowing test teams to turn their creativity and effort to more advanced test problems and concerns.

Testing What Manual Testing Can’t Accomplish

Software systems and products are becoming more complex, and sometimes manual testing is not capable of supporting all desired tests. There are some types of testing analysis that simply can't be performed manually anymore, such as code coverage analysis, memory leak detection, and cyclomatic complexity measurement. Producing the cyclomatic complexity of the code for any large application by hand would require many staff-hours, and performing memory leak tests with manual methods would be nearly impossible.

Security testing of an application is almost impossible using manual techniques, and tools that automate security testing are now on the market. Consider, for example, tests that verify in a matter of seconds whether the application's Web links are up and running. Performing these tests manually would require hours or days, if it were feasible at all.
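The link-verification example can be sketched with the standard-library HTML parser. The "is it up?" probe is injected as a function (stubbed here with a lambda) so the sketch stays offline; a real checker would issue HTTP requests in its place.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every href from anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

def check_links(html, probe):
    """probe(url) -> True if reachable; injected so the sketch needs no network."""
    parser = LinkExtractor()
    parser.feed(html)
    return {url: probe(url) for url in parser.links}

page = '<a href="https://example.com/a">A</a> <a href="https://example.com/b">B</a>'
# Stub probe: pretend only the first link is reachable.
status = check_links(page, probe=lambda url: url.endswith("/a"))
print(status)
```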

Ability to Reproduce Software Defects

Test engineers often encounter the problem of having detected a defect, only to find later that the defect is not reproducible. With an automated test tool the application developer can simply play back the automated test script, without having to worry about whether all exact steps performed to detect the defect were properly documented, or whether all the exact steps can be re-created.

Enhancement of System Expertise

Many test managers have probably experienced a situation where the one resident functional expert on the test team is gone from the project for a week during a critical time of testing. The use of existing automated test scripts allows the test team to verify that the original functionality still behaves in the correct manner even without the expert. At the same time, the tester can learn more about the functionality of the AUT by watching the script execute the exact sequence of steps required to exercise the functionality.

After-Hours “Lights-Out” Testing

Automated testing allows for simple repeatability of tests. Since most automated test tools allow for scripts to be set up to kick off at any specified time, automated testing allows for after-hours testing without any user interaction. The test engineer can set up a test script program in the morning, for example, to be kicked off automatically by the automated test tool at, say, 11 that night, while the test team is at home sound asleep. The next day, when the test team returns to work, the team can review the test script output and conduct an analysis. Another convenient time for kicking off a script is when the test engineer goes to lunch, attends a meeting, or is about to depart for home at the end of the workday. Initiating tests at these times makes maximum use of the test lab and time.
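The timed kick-off can be sketched with Python's standard-library scheduler. The 11 p.m. delay is shortened to a fraction of a second here so the sketch finishes immediately; the nightly run itself is a stand-in that just records that it executed.

```python
import sched
import time

log = []

def nightly_test_run():
    # Stand-in for launching the real test suite after hours.
    log.append("test suite executed")

scheduler = sched.scheduler(time.time, time.sleep)
# In practice the delay would target 23:00; here 0.01 s keeps the demo fast.
scheduler.enter(0.01, 1, nightly_test_run)
scheduler.run()   # blocks until all scheduled events have fired
print(log)
```

The next morning, the team reviews whatever the scheduled run logged, exactly as described above.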

During these times automated testing can also take advantage of distributed testing, as described later in this section. After the engineers have gone home, multiple machines in the lab can be used for concurrency and distributed testing.

Improved Requirements Definition

If requirements management is automated as part of the software testing lifecycle, various benefits can be gained, such as:

  • The ability to keep historical records of any changes or updates, i.e., an audit trail of changes
  • An automated Requirements Traceability Matrix (RTM), i.e., linking requirements to all artifacts of the software development effort, including test procedures, pass/fail results, and defects; automated maintenance of the RTM is another major benefit
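An automatically maintained RTM can be sketched as a mapping from requirements to their linked test procedures, pass/fail status, and defects, from which gap reports fall out for free. The requirement, test, and defect IDs are invented for illustration.

```python
# Hypothetical RTM rows: requirement -> linked artifacts.
rtm = {
    "REQ-001": {"tests": ["TP-01", "TP-02"], "status": "pass", "defects": []},
    "REQ-002": {"tests": ["TP-03"],          "status": "fail", "defects": ["D-17"]},
    "REQ-003": {"tests": [],                 "status": None,   "defects": []},
}

def untested_requirements(matrix):
    """Coverage gap report: requirements with no linked test procedure."""
    return [req for req, row in matrix.items() if not row["tests"]]

def failing_requirements(matrix):
    """Requirements whose linked tests currently fail."""
    return [req for req, row in matrix.items() if row["status"] == "fail"]

print(untested_requirements(rtm), failing_requirements(rtm))
```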

Improved Performance Testing

Performance information and transaction timing data are no longer gathered with stopwatches. Yet even very recently, at one Fortune 100 company, performance testing was conducted with one test engineer sitting with a stopwatch, timing the functionality that another test engineer executed manually. This method of capturing performance measures is labor-intensive and highly error-prone, and it does not allow for automatic repeatability. Today, many performance- and load-testing tools are available, both open-source and vendor-provided, that allow the test engineer to test system/application response times automatically, producing timing numbers and graphs and pinpointing the bottlenecks and thresholds of the system. This genre of tool has the added benefit of traversing application functionality as part of gathering transaction timings; in other words, this type of test automation represents an end-to-end (ETE/E2E) test. A test engineer no longer needs to sit there with a stopwatch. Instead, the engineer initiates a test script that captures the performance statistics automatically and is freed up for more creative and intellectually challenging testing work.
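The stopwatch replacement can be sketched in a few lines: a high-resolution timer wraps each transaction and summary statistics are computed automatically. The timed "transaction" is a placeholder (a short sleep); a real tool would drive the application end to end and record each response time.

```python
import time

def transaction():
    # Stand-in for a real request/response round trip.
    time.sleep(0.01)

def time_transactions(fn, runs=5):
    """Time repeated runs of a transaction and summarize the results."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return {"min": min(timings), "max": max(timings),
            "avg": sum(timings) / len(timings)}

stats = time_transactions(transaction)
print(f"avg response: {stats['avg']:.4f}s")
```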

Improved Stress and Endurance Testing

It is expensive, difficult, inaccurate, and time-consuming to stress-test an application adequately using purely manual methods. This is because of the inability to reproduce a test when a large number of users and workstations are required for it. It is costly to dedicate sufficient resources to these tests, and it is difficult to orchestrate the necessary number of users and machines. A growing number of test tools provide an alternative to manual stress testing. These tools can simulate a large number of users interacting with the system from a limited number of client workstations. Generally, the process begins by capturing user interactions with the application and the database server within a number of test scripts. Then the testing software runs multiple instances of test scripts to simulate large numbers of users.
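The simulation step can be sketched with threads standing in for virtual users. Each "user session" here just performs a counted interaction under a lock; a real load tool would replay captured user interactions against the server.

```python
import threading

lock = threading.Lock()
completed = 0

def virtual_user(requests_per_user=10):
    """One simulated user replaying a fixed number of interactions."""
    global completed
    for _ in range(requests_per_user):
        # Stand-in for one captured user interaction with the AUT.
        with lock:
            completed += 1

def run_load(users=50):
    """Launch many virtual users concurrently from one machine."""
    threads = [threading.Thread(target=virtual_user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return completed

print("interactions completed:", run_load())
```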

A test tool that supports performance testing also supports stress testing. Stress testing is the process of running client machines and/or batch processes under high-volume scenarios, subjecting the system to extreme and maximum loads in order to find its thresholds: whether and where the system breaks, and what breaks first. It is important to identify the weak points of the system. System requirements should define these thresholds and describe how the system should respond to an overload. Stress testing verifies both that the system works when operated at its maximum design load and that it behaves as specified when subjected to an overload.

Many automated test tools come with a load simulator, which is a facility that lets the test engineer simulate hundreds or thousands of virtual users simultaneously working on the AUT. Nobody has to be present to kick off the tests or monitor them; a time can be set when the script will kick off and the test scripts can run unattended. Most tools produce a test log output listing the results of the stress test. The automated test tool can record any unexpected active window, such as an error dialog box, and test personnel can review the message contained in the unexpected window, such as an error message.

Quality Measurements and Test Optimization

Automated testing produces quality metrics and allows for test optimization. It produces results that can be measured and analyzed, and the testing process itself can be measured and repeated. Without automation it is difficult to repeat a test, and without repetition it is difficult to get any kind of measurement: with a manual process, the chances are good that the steps taken during the first iteration of a test will not be exactly the steps taken during the second. As a result, it is difficult to produce comparable quality measurements. With automated testing, the steps are repeatable and measurable.

Test engineers' analysis of quality measurements supports efforts to optimize tests, but only when tests are repeatable, and automation provides that repeatability. A test engineer can optimize a regression test suite by performing the following steps:

  1. Run the regression test set.
  2. If cases are discovered in which the regression test set ran OK but errors surfaced later, include the test procedures that uncovered those defects in the regression test set.
  3. Keep repeating these steps as the regression test set is optimized, using quality measurements (here, the metric would be the number of defects that escaped the regression test set).
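The optimization loop above can be sketched as code: when a defect escapes the regression set, the test procedure that later caught it is folded in, and the escape count is the quality metric that is driven downward. The test-procedure and defect IDs are invented for illustration.

```python
regression_set = {"TP-01", "TP-02"}

def optimize(current_set, escaped_defects):
    """escaped_defects maps defect -> test procedure that uncovered it."""
    escapes = 0
    for defect, finder in escaped_defects.items():
        if finder not in current_set:
            escapes += 1
            current_set.add(finder)   # step 2: fold the finder into the set
    return current_set, escapes      # metric: escapes per cycle

# Cycle 1: two defects escaped the set; their finders join it.
rs, escapes1 = optimize(set(regression_set), {"D-01": "TP-07", "D-02": "TP-09"})
# Cycle 2: the same defects would now be caught, so the metric improves.
rs, escapes2 = optimize(rs, {"D-01": "TP-07", "D-02": "TP-09"})
print(escapes1, escapes2)   # escapes drop from 2 to 0
```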

Improved System Development Lifecycle

AST can support each phase of the system development lifecycle, and various vendor-provided automated test tools are available to do just that. For example, there are tools for the requirements definition phase, which help produce test-ready requirements in order to minimize the test effort and cost of testing. Likewise, there are tools supporting the design phase, such as modeling tools, which can record the requirements within use cases. Use cases represent user scenarios that exercise various combinations of system-level (operational-oriented) requirements. Use cases have a defined starting point, a defined user (a person or an external system), a set of discrete steps, and defined exit criteria.

There are also tools for the programming phase, such as code checkers, static and dynamic analyzers, metrics reporters, code instrumentors, product-based test procedure generators, and many more. If requirements definition, software design, and test procedures have been prepared properly, application development may be the easiest activity of the bunch, and test execution will surely run more smoothly under these conditions.

Improved Documentation and Traceability

Test programs using AST will also benefit from improved documentation and traceability. The automated test scripts along with the inputs and expected results provide an excellent documentation baseline for each test. In addition, AST can provide exact records of when tests were run, the actual results, the configuration used, and the baseline that was tested. AST is a dramatic improvement over the scenario where the product from the test program is half-completed notebooks of handwritten test results and a few online logs of when tests were conducted.

Some of the benefits AST can provide to a test program include expanded test coverage, enabling tests to be run that cannot practically be run manually, repeatability, improved documentation and traceability, and freeing the test team to focus on advanced issues.

Distributed Workload and Concurrency Testing

It is almost impossible to conduct a distributed workload or concurrency test that provides useful results without some form of AST; this is one of the types of testing that benefits most from automation. Since hardware can be expensive and replicating a production environment is often costly, using virtual machines (VMs) along with an AST framework allows for the most effective implementation of this type of test.
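A concurrency test of this kind can be sketched as follows: many workers hit a shared resource at once, and the harness verifies that no work was lost or duplicated, a check that would be impractical to perform by hand. The shared resource here is a simple thread-safe queue standing in for the real system under test.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

results = queue.Queue()

def worker(item):
    # Stand-in for one concurrent transaction against the shared resource.
    results.put(item)

def concurrency_test(items=200, workers=20):
    """Run many concurrent transactions; verify none were lost or duplicated."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pool.map(worker, range(items))
    # The with-block waits for all tasks, so the queue holds every result.
    received = sorted(results.get() for _ in range(results.qsize()))
    return received == list(range(items))

print("concurrency test passed:", concurrency_test())
```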
