- 29.1 Three Grains of Rice
- 29.2 Understanding Has to Grow
- 29.3 First Day Automated Testing
- 29.4 Attempting to Get Automation Started
- 29.5 Struggling with (against) Management
- 29.6 Exploratory Test Automation: Database Record Locking
- 29.7 Lessons Learned from Test Automation in an Embedded Hardware-Software Computer Environment
- 29.8 The Contagious Clock
- 29.9 Flexibility of the Automation System
- 29.10 A Tale of Too Many Tools (and Not Enough Cross-Department Support)
- 29.11 A Success with a Surprising End
- 29.12 Cooperation Can Overcome Resource Limitations
- 29.13 An Automation Process for Large-Scale Success
- 29.14 Test Automation Isn't Always What It Seems
29.11 A Success with a Surprising End
George Wilkinson, United Kingdom
Test manager, trainer, and consultant
This anecdote describes some of my experiences on a large test automation project undertaken in 2007 and 2008. The project's goal was to automate the core processes of the system validation tests of the National Health Service (NHS) Care Records System (CRS) application as rolled out within England by a large health IT systems integration company. This work was part of the wider National Programme for IT (NPfIT). The study covers 8 months of continuous progress, though with a surprising end.
An automation team was formed from a number of locations, including the North of England, the Midlands, and the West Country. Rather than looking for an exact skills match, we wanted people experienced in the CRS application who were enthusiastic about getting involved in automation. Because the team was geographically distributed, we decided to meet most weeks in a geographically central location for 2 days.
29.11.1 Our Chosen Tool
TestStream was a commercial validation suite from Vedant Health, a United States company specializing in health-care test automation targeted at laboratory information systems (LIS) and health informatics systems (HIS). Our representative from Vedant traveled from the United States to get the project started and to run the training in the product set and the TestStream methodology.
One of the useful features of TestStream was called Scenario Builder. It provided a way to construct automated patient journeys, each made up of a number of predefined actions. The analyst simply pulled these actions together to create a longer test. There were over 600 actions for our CRS application system, including elements such as Register a Patient, Add Allergy, Schedule Surgery, and Check in a Patient. The Scenario Builder allowed the sequence of events to be defined and viewed as a true patient journey.
No scripting or further script development was required by either my team or Vedant Health, because the Scenario Builder's actions provided all the components our scenarios needed. The only requirements were a solid familiarity with the application under test and a thorough understanding of the test case (normally a patient journey).
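The anecdote does not show TestStream's internals, but the composition idea is easy to illustrate. The sketch below, with entirely hypothetical names (`Action`, `PatientJourney`, and the journey-building calls are my invention, not TestStream's API), shows how predefined actions can be chained into a longer patient-journey test without any per-test scripting:

```python
# Minimal sketch of the "patient journey" composition idea: predefined,
# named actions are chained into a longer test. All names here are
# hypothetical illustrations, not TestStream's actual API.
from dataclasses import dataclass, field


@dataclass
class Action:
    """A predefined test step, e.g. 'Register a Patient'."""
    name: str

    def run(self, patient: dict) -> None:
        # A real action would drive the application under test;
        # here we just record that the step was executed.
        patient.setdefault("history", []).append(self.name)


@dataclass
class PatientJourney:
    """An ordered sequence of actions forming one test case."""
    actions: list = field(default_factory=list)

    def add(self, action: Action) -> "PatientJourney":
        self.actions.append(action)
        return self  # allow fluent chaining

    def run(self, patient: dict) -> dict:
        for action in self.actions:
            action.run(patient)
        return patient


# The analyst pulls existing actions together -- no new scripting.
journey = (PatientJourney()
           .add(Action("Register a Patient"))
           .add(Action("Add Allergy"))
           .add(Action("Schedule Surgery"))
           .add(Action("Check in a Patient")))

result = journey.run({"nhs_number": "0000000000"})
print(result["history"])
```

The point of the design is that the test author composes from a fixed library of actions, so maintenance lives in one place (the action implementations) rather than in every test.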
We built a comprehensive library of automated scripts and devised standards and procedures about how they were to be stored and maintained. We developed a customized comparison and data collection tool, which we called CAT (collection analysis tool).
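The anecdote gives no detail of CAT's internals, so the following is only a guess at what one comparison step might look like: collecting expected and actual field values for a record and reporting the mismatches. The function and field names are hypothetical.

```python
# Hypothetical sketch of a record-comparison step such as a collection
# analysis tool might perform. Field names and function are illustrative.
def compare_records(expected: dict, actual: dict) -> list:
    """Return human-readable discrepancies between two patient records."""
    issues = []
    for field_name in sorted(set(expected) | set(actual)):
        exp = expected.get(field_name)
        act = actual.get(field_name)
        if exp != act:
            issues.append(f"{field_name}: expected {exp!r}, got {act!r}")
    return issues


issues = compare_records(
    {"surname": "Smith", "allergy": "penicillin"},
    {"surname": "Smith", "allergy": None},
)
print(issues)
```

Automating this kind of field-by-field comparison is what makes large data-collection runs practical: the tool flags only the discrepancies for a human to review.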
29.11.2 The Tool Infrastructure and an Interesting Issue as a Result
The product was installed and accessible by the user via a secured network to servers running virtual machines (VMs), as shown in Figure 29.3. Access to the VMs and thus to the test environments was provided to both the automation team running tests and company IT support staff.
Figure 29.3 TestStream infrastructure
Vedant’s access for support could be from anywhere in the world because some of the highly experienced Vedant support staff moved around the world assisting other clients. This required remote access to our infrastructure, but we soon discovered that it didn’t work. The system was so secure (in order to prevent fraudulent access to any test environment that might hold live patient data) that it prevented the remote access facility from working.
We resolved the issue by giving both companies independent access to a separate test system that was clean of any patient data. This solution was foolproof from a security perspective but supported only limited troubleshooting; we mitigated that by keeping the test system on the same application version as the majority of deployed systems in the field. The solution was not perfect, because the deployments were not always running the same system version, but it was a step in the right direction, and one on which we could make progress.
Looking back, we realized that no feasibility study had been conducted on support, which could have prevented the remote access issue from arising.
29.11.3 Going toward Rollout
Over the next 3 to 4 months, the team grew from 6 to 10, with an additional four part-time support members. We produced a catalog of the automation tests that were available to the deployment projects to build their own scenarios. As we progressed with the pilot, we identified data and configuration requirements that were localized to the individual projects as they moved away from a standard. This meant that our generic approach needed to be tailored for each deployment-specific test environment. We had created a process but lost some sight of our individual customers' requirements.
We ran a sample of the data collection and clinical ordering features of the CRS for a particular deployment. This was a great success because we found many defects that were thereby prevented from entering the live environment. We found between 10 and 100 defects on well-built and established test environments and thousands on other environments.
We published a report to the stakeholders showing how we added value to the current manual test approach. We found that we could automate tests for around 70 percent of the installed CRS functionality and save approximately 30 percent of our current testing effort.
We now decided to initiate some public relations for the tool. We scheduled several educational sessions to explain the program and what we had been doing, to give stakeholders the opportunity to ask questions, and to gather feedback from the teams working on customer sites.
I was quite surprised at how many people had an interpretation of the product set, its purpose, and software test automation itself that differed greatly from ours. Most people's experience of automated test tools was that they required constant scripting or maintenance to keep working. Fortunately, these sessions helped to convince people that our automation was an improvement on that.
We also dispelled some illusions and misperceptions about automation and set more realistic expectations. The public relations meeting also raised the team’s confidence and gave them some well-deserved recognition.
The automation team were elated by the results from the pilot project and the fact that we were now in the rollout stage. Their confidence was really growing; after all, they had made it work. TestStream was out there and making a real difference! We were positioning ourselves well, and the future, at last, after a good deal of effort, was looking more positive.
29.11.4 The Unexpected Happens
In late May 2008, shortly after we had discussed our success so far and the rollout plans, the overall project was cancelled because of a breakdown in the contract with the systems integration company. Therefore, our automation project was also cancelled. I gathered my team together for the last team meeting and officially announced the cancellation. They had worked extremely hard, but the automation project was over; all those many late evenings, weekends, sheer determination, and extra miles traveled to make this work were now history. What a heartbreaking end to what should have been a great success.