
Conventional Software Testing on a Scrum Team

Date: Sep 30, 2005


The Scrum methodology can pose a challenge for software testers who are used to more traditional waterfall-inspired development processes. Jonathan Kohl relates his experiences working on Scrum teams that found some clear advantages in changing their methods.

If you're a professional software tester, or work in quality assurance, I consider you to be (like me) a "conventional software tester." Increasingly, conventional software testers are finding themselves on teams using the Scrum development process. For testers unfamiliar with iterative lifecycles, this can be a real challenge. Having been on a variety of projects using Scrum as a conventional software tester, I'll share three stories that describe different approaches for testers.

What Is Scrum?

According to the Scrum web site, "Scrum is an agile, lightweight process that can be used to manage and control software and product development using iterative, incremental practices." The best place to start learning about Scrum is the About page on the Scrum web site. As a tester, it's a good idea to learn as much as possible about Scrum, particularly the principles and theory behind it. To explain the terminology used in this article, I'll summarize some aspects of Scrum.

Scrum is a software management process that uses an iterative lifecycle and focuses on communication, particularly feedback loops. Each iteration, called a Sprint, is a block of about four weeks in which a portion of the software is developed. Ideally, a complete vertical slice of the application is delivered at the end of each Sprint; when each "slice" is put together, you have the complete product. The Sprint also forms a longer feedback loop: at the end of each one, the software developed during the iteration is demonstrated to project sponsors and end users.

Short feedback loops also occur daily. These daily Scrum meetings last for 15 minutes, during which team members describe what they're working on and raise issues they might be encountering. The Scrum Master (the person who manages the administration of the project) runs the Scrum meetings.

Work to be done over the life of a project (a project contains several Sprints) is summed up in a Product Backlog. The Product Backlog contains a list of features needed for the project; this is the big picture for the project. Each Sprint also has a Sprint Backlog containing the tasks to meet a certain number of features in the Product Backlog. This describes the work for a particular Sprint. These documents are often written at a high level. Details can be filled in depending on the team's practices. How to execute tasks to meet the goals of the project is left to the team in Scrum, so it's common to see Scrum teams using Extreme Programming or other methodologies within Scrum.
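To make the relationship between the two backlogs concrete, here is a minimal sketch of how a team might represent them; the field names and example items are hypothetical illustrations I've added, not a format prescribed by Scrum.

from dataclasses import dataclass

@dataclass
class ProductBacklogItem:
    """One feature in the Product Backlog: the project-level big picture."""
    feature: str
    priority: int         # 1 = highest, set with the stakeholders
    estimate_days: float  # rough team estimate

@dataclass
class SprintBacklogItem:
    """One task in a Sprint Backlog: work planned for a single Sprint."""
    task: str
    owner: str
    feature: str          # the Product Backlog feature this task supports

# Hypothetical slice of each backlog, for illustration only.
product_backlog = [
    ProductBacklogItem("User login", priority=1, estimate_days=5),
    ProductBacklogItem("Reporting screen", priority=2, estimate_days=8),
]
sprint_backlog = [
    SprintBacklogItem("Design login form", owner="developer", feature="User login"),
    SprintBacklogItem("Plan login tests", owner="tester", feature="User login"),
]

In practice these usually live in a simple spreadsheet, as described below; the point is only that each Sprint Backlog task traces back to a Product Backlog feature.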

For the tester, the most important things to note about Scrum are its iterative lifecycle and frequent communication. Both can require some adjustment on the part of a conventional tester.

Getting Started

Several years ago, I was working in a quality assurance department on a software development team. We were looking at ways of improving our processes, and were encouraged to research new ways of developing and testing software. Several of us were big fans of iterative lifecycles. When a developer introduced us to Scrum at a lunch-and-learn session, we started using the daily Scrum meeting near the end of releases. We found these Scrum meetings to be useful because we all knew what everyone else was doing at any given time, and we were able to share information more freely. Instead of spinning our wheels for a couple of days tackling a problem in isolation, we could raise the problem at a Scrum meeting, and someone else would often pipe up and offer to talk it through offline.

Because we found we were more productive and effective using Scrum meetings, we sought management support to adopt the entire Scrum process on a pilot project. We started the project with a meeting to determine the Product Backlog. This meeting involved all the project stakeholders, including the development team and testers. On a whiteboard, we wrote the features that needed to be delivered; then we brainstormed and set general priorities. Because the pilot project was small and we testers were also working on other projects at the same time, we weren't as active as other team members, but we got a good idea of what we would be expected to test.

The Scrum Master gathered the feature wish list and recorded it as a rough Product Backlog in a spreadsheet that was sent to the team members for review. After taking time to review the Product Backlog and estimate how long each item might take to deliver, we met again; at that second meeting, the Product Backlog was reviewed and trimmed, and the final version was sent out again to each stakeholder. As testers, we decided to test the way we always had: at the end of the development cycle, once a complete product was ready. We would develop our test plan during the Sprints leading up to the end of the project, and we added a testing/bugfix Sprint to the end of the project.

Next, the team held a Sprint planning meeting to determine what features to develop in the first four-week Sprint. After selecting the features, each team member added high-level descriptions of the tasks they would perform during the Sprint. For the testers, there wasn't much to do yet, so we created planning tasks and attended design meetings to provide feedback. The entire team also selected a goal to reach by the end of the Sprint: creating working software with a limited number of features that could be demonstrated and tested. As with the Product Backlog, the Sprint Backlog contained line items in a spreadsheet describing the tasks we would complete in the next four weeks to help meet the Sprint goal. This spreadsheet was sent out to the entire team for review.

The first Sprint was quiet for the testers. We attended some design meetings and began planning our tests. It took a little while to adjust to the daily Scrum meetings. At first we thought, "Oh no, not another meeting," but since the meetings were so short and useful, they hardly felt like meetings at all. It was tempting to get into longer discussions during the meetings, but the Scrum Master kept us on track and limited the Scrum meetings to 15 minutes.

At the end of the first Sprint, we had working software that was ready to demo. As testers, we were part of the demo; we recorded both positive and negative feedback from stakeholders, and continued test planning with this information. Test planning was faster and easier with working software in front of us, rather than trying to visualize the software by using requirements documents.

We repeated this process for subsequent Sprints, with testers spending more time in design meetings, talking to developers about testing, and working on test planning. The more features we added, the more time we were able to spend thinking about test strategies. Coming down to the final Sprint, we had a very solid handle on what we were going to test. Since we could plan from working software, test planning was a dream. We were able to stretch out and look at new areas of testing on which we hadn't previously been able to spend any time.

For the final Sprint, we were extremely busy testing. Because the software was already familiar to us, we didn't have much of a learning curve when we started testing. The focus shifted from development to testing, and we had a lot to say in meetings. But many of the questions we would normally have had at this point in a project had already been answered, so we were able to concentrate on testing and on providing bug reports and feedback to the developers. We shipped the software with a lot more confidence, and with new ideas on testing for the next release.

At the end of the project, it was deemed a success, and everyone from management down was pleased with the results. As testers, we were pleased that we had access to the product sooner, to draw out our own information for testing during development instead of reading from a requirements document. We knew that what we were looking at was up to date and more detailed than requirements documents usually are. Because we focused on different areas of interest and risk for testing with this methodology, we had some diversity in our testing techniques.

Improving the Testing Feedback

At the retrospective meeting, the developers had one complaint: The feedback loop on testing information was too long, because they had to wait until the final Sprint to get feedback on features that were developed sometimes months earlier. When we planned for the next product release, we decided to try a new approach.

The first improvement we made was to test in a cycle one Sprint behind development throughout the project. Once the development group completed and demonstrated the software at the end of a Sprint, we tested those features while the developers got started on the next Sprint. At first this strategy worked well enough, but the further we got into the project, the more difficult it became to provide feedback on the emerging design while also testing the already developed features from prior Sprints.

As testers, we were much more engaged from the beginning of this second project, since we now had more responsibility during the development instead of waiting until the end. We were more effective at getting information early on because we were accountable to the team on a daily basis and at the end of each Sprint. And the developers were happy to get feedback more quickly—especially bug reports.

Once we reached the final testing and debugging Sprint, we had already tested much of the product's functionality. This took the pressure off: we were familiar with the software we were testing, and we knew the areas of risk and what was most important to the client. We were able to do much more testing on the project than we had done before, and could focus on risk areas more comfortably. This leeway inspired new test ideas at the end of the project, but unfortunately we had to trim down what we wanted to test because more testing time simply wasn't available.

While we were confident in the amount of testing we had done, we felt we could do more. This was hard to take. In a typical waterfall process, you simply don't have as much time using the software, so sometimes it's easier to let something go if it isn't tested well. In an iterative process, it's easier to spend more time testing throughout development and spend more time thinking about testing. It's common to generate a lot of testing ideas and feel responsible for areas you may not have the time to reach. We felt a bit of "tester guilt" at not getting to every area we had hoped to test, but still shipped the software with an unprecedented level of confidence.

The developers were pleased with the improvements to the feedback loop, but they still wanted even tighter feedback on defect reporting. On the next project, we added two days of testing at the end of each Sprint, prior to the demo. This gave the team more confidence in the demo before the customer saw the product, and it provided feedback on newly developed software within the Sprint itself.

Full Integration

After working in this way on different Scrum teams, I felt I could add feedback more quickly during development, so I began informally working directly with a developer during a Sprint for a couple of hours a week. Sometimes I tested something on which the developer was working; at other times, I provided a second opinion on a design. This setup provided even earlier feedback to the developer and helped me be more effective in earlier Sprints, where there wasn't yet much of a product to test.

Eventually, I moved on to a project with very experienced agile developers who were using Scrum. At this point, I was ready for full integration; as a tester, I would work side by side with the developers throughout the entire project. We agreed to work as closely together as possible, since I wanted to learn more about test-driven development, and they wanted to learn more about conventional software testing.

When we started out with a Product Backlog, the lead developer began repeating a mantra I would hear throughout the project: "Is this testable?" He wanted to be sure that whatever was developed was something I could test. This strategy forced me to be more involved in the planning discussions instead of waiting until I could look at the software. I was also asked to provide documentation on testing from Sprint to Sprint, rather than providing test plan documentation just prior to the testing and debugging Sprint near the end of the project. We weren't working on a "big design up front" project, and our Sprints were shortened from four weeks to two, so I was a bit reluctant to create what I thought might be wasteful documentation. However, the project stakeholders needed the information. I worked out a short two-page testing strategy document based on the information from the Sprint Backlog for each Sprint. The template included the Sprint goal, the features that would be developed, and an initial risk assessment with corresponding test techniques we would use to mitigate those risks. I drew heavily on James Bach's test planning articles for this work.
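To give a sense of that document's shape, here is a minimal sketch of how the per-Sprint strategy information might be captured; the structure, field names, and example entries are mine, added for illustration, and not the actual template from the project.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SprintTestStrategy:
    """Short per-Sprint test strategy (hypothetical structure for illustration)."""
    sprint_goal: str
    features: List[str]
    # Each entry pairs a risk with the test technique chosen to mitigate it.
    risks_and_techniques: List[Tuple[str, str]]

    def as_text(self) -> str:
        lines = ["Sprint goal: " + self.sprint_goal, "", "Features under test:"]
        lines += ["  - " + f for f in self.features]
        lines += ["", "Risks and test techniques:"]
        lines += ["  - " + risk + " -> " + technique
                  for risk, technique in self.risks_and_techniques]
        return "\n".join(lines)

# Hypothetical example for one two-week Sprint.
strategy = SprintTestStrategy(
    sprint_goal="A user can log in and view an account summary",
    features=["Login", "Account summary"],
    risks_and_techniques=[
        ("Invalid credentials are accepted", "boundary and negative tests on login"),
        ("Summary shows stale data", "scenario tests against a known data set"),
    ],
)
print(strategy.as_text())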

As I began to fill in the details of the document, it surprised me to find that the first Sprint goal we had brainstormed wasn't testable. It was too vague: even though we had thought hard about testability in our Sprint planning meeting, we had come up with something that could not be tested. I talked to the Scrum Master and the lead developer, and we rewrote it. I had a couple of use cases already, so I plugged those in and found that they were also vague. Despite all our planning discussions, it was only when the goals were committed to paper from a testing perspective that we found the problems. In the end, documenting our testing efforts produced more coherent design documentation and Sprint Backlogs. And later, when we added more testers, we were able to divide tasks and focus testing efforts much more easily.

Our focus was on speedy communication, as well as logging and fixing defects quickly in each Sprint. We deferred some defects to later Sprints, but we tried to keep the find/fix/verify feedback loop as tight as possible. A typical day involved pulling and installing a build for the testing team, running and developing automated tests, and executing manual tests. We would execute tests, verify bugs, and spend time talking to developers about emerging designs, test ideas, and areas to test. We designed automated tests to detect change, so we could quickly find out whether anything in the software had changed in ways we needed to worry about. Manual testing was unscripted and focused on risk, with new features tested first and changed areas (revealed by the automated tests) tested second.
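To illustrate the change-detection idea (this is a sketch of the technique, not the harness we actually used), an automated check can compare a screen's current output against a stored baseline and report any differences for a tester to assess. The get_screen_output hook and the file layout here are hypothetical.

import difflib
from pathlib import Path

def get_screen_output(screen_name):
    """Hypothetical hook: return the current text produced by one screen or
    report in the application under test (via its API, a UI driver, etc.)."""
    raise NotImplementedError("wire this up to the real application")

def check_against_baseline(screen_name, baseline_dir=Path("baselines")):
    """Compare current output to the stored baseline and return any differences.

    An empty list means nothing changed in this area; a non-empty list tells
    the tester that something changed and deserves a closer manual look."""
    baseline_file = baseline_dir / (screen_name + ".txt")
    current = get_screen_output(screen_name)
    if not baseline_file.exists():
        baseline_dir.mkdir(parents=True, exist_ok=True)
        baseline_file.write_text(current)  # first run: record the baseline
        return []
    baseline = baseline_file.read_text()
    return list(difflib.unified_diff(baseline.splitlines(),
                                     current.splitlines(), lineterm=""))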

Testing in this new way was enjoyable. I was testing on working software sooner and closely collaborating with developers to provide instant feedback on development work or to help with test idea generation. With developers, I was as likely to be working on test problems as design considerations, usability concerns, or tracking down bugs. To facilitate testing quickly, I developed scripts that automated parts of the application that I had already tested thoroughly. I could run these to get to an area in the application very quickly, and then take over and do exploratory testing manually.
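As a rough sketch of that kind of helper (assuming a hypothetical app_driver wrapper around the application's user interface), a script can replay steps that earlier Sprints have already tested thoroughly, leaving the tester at the area of interest to take over manually:

import app_driver  # hypothetical thin wrapper around the application's UI

def jump_to_order_entry(user, password):
    """Drive the application through well-tested ground to the order-entry screen."""
    session = app_driver.start()
    session.login(user, password)   # login was covered thoroughly in earlier Sprints
    session.open_menu("Orders")
    session.choose("New order")
    print("At the order-entry screen; exploratory testing can begin.")

if __name__ == "__main__":
    jump_to_order_entry("test_user", "secret")

The value is in the hand-off: the script does the repetitive, already-verified navigation quickly, and the human does the exploratory work that automation can't.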

We still retained the final testing Sprint to do complete end-to-end testing on the final product. Because of the thorough testing of the features from Sprint to Sprint, there were few surprises in this effort other than system integration problems.

Lessons Learned

We experienced great results with a fully integrated testing team working alongside developers from the beginning. When testers and developers worked together to ensure that designs were coherent and to quickly find and fix defects, development went quite smoothly. We were a single team working together, not two departments constantly at odds.

New testers found testing in an iterative environment to be a challenge. Not having a "big design up front" requirements document was a struggle, and having to be more proactive to get information on the features they were testing was an adjustment. The demand for rapid testing was also difficult for testers used to spending a long time pre-scripting test cases and writing detailed test plans; in an iterative environment, there simply isn't time for that.

In later Sprints, we found that the features already developed piled up and affected the new features we were testing. As a result, we had a lot more testing to do in later Sprints. In some cases, we would do less integration testing, incurring an "integration testing debt" that we covered in the last testing Sprint with a completed product. On smaller projects that kept up with bug fixing, this delay wasn't as difficult to handle; on projects that had more bug fixing later on, our testing duties were compounded.

There was a danger of burnout: Testers were testing new features as those features were developed in a Sprint, testing features developed in previous Sprints, looking after defects, getting information on testing from developers and customers, and supporting other team members who were testing. The upside was that a tremendous amount of testing was achieved, but we had to monitor testers to make sure that they weren't taking on too much. Because so much more testing was going on during development and the team needed rapid feedback, we sometimes needed more testers to help with testing tasks.

Communication and Fast Turnaround

To try testing on a Scrum team, learn about Scrum and the motivation behind the methodology. Realize that you will have to be proactive in getting test information; requirements won't necessarily be handed to you. Your teammates will expect quick feedback, so look at testing techniques that will help you get that information quickly. You can rely on your skills of observation and inference to apply different testing techniques to a product or feature for which you don't have a specification. This is a valuable testing skill, works well for gathering useful test information, and is a powerful way of uncovering hidden assumptions in a product. For more on testing without a map and executing tests quickly, check out James Bach's Rapid Software Testing course (also taught by Michael Bolton).

Focus on testing according to risk, and learn to plan on the fly. Above all, communicate and ask questions. If Scrum item lists are too vague for you to test, ask for clarification and work with your teammates; a more testable design is a more coherent design, so they'll thank you for it. Scrum teams can be a rewarding place for a tester, so give it a try.

