Conventional Software Testing on an Extreme Programming Team
Date: Oct 14, 2005
If you're a professional software tester, or work in quality assurance, I consider you to be (like me) a "conventional software tester." Lately, conventional software testers are finding themselves on Extreme Programming (XP) projects. XP is one of the best-known Agile development methodologies; it employs an iterative lifecycle, team collaboration, and customer involvement.
I'm often asked questions about the challenges faced by conventional software testers on XP teams. In my experience, conventional testers can end up on XP teams in many situations:
- A customer requests "QA people" on a project being conducted by an XP team.
- An XP project has such good results that a development lead from that team is put in charge of a QA group.
- A software company with an existing QA department tries XP.
- XP developers who want to learn more about testing request conventional testers on the project.
If you can relate to these scenarios, or if you're a conventional software tester who's curious about Extreme Programming, this article is for you. I'll discuss two experiences in which I worked as a conventional tester on an XP team.
Experience 1: Early Success
When I first read Kent Beck's book Extreme Programming Explained, I felt that it was the values of XP that were most important. I wanted to apply these values immediately in my testing, whether or not I was on an XP project. Yet, early on, Extreme Programming literature focused very much on activities for developers. Without explicit guidance on what to do, I thought the best thing would be to experiment with familiar testing activities and use the XP values as a guide for my behavior. I did testing as described by the context-driven testing school, including risk-based testing, exploratory testing, and whatever I could do to help provide feedback to the team. Testing was a service role, providing information to the project team.
By the time I found myself on a team with experienced XP developers, a lot more had been written about testing on XP projects. Much of the discussion focused on automated testing using a framework such as JUnit, and the general opinion was that conventional software testers weren't needed on an XP team: with the developers testing and the customer representative testing, most people believed there was no place for a dedicated tester. As a tester, I felt torn. I identified with the XP values and liked XP's communicative, collaborative, iterative lifecycle, so I wanted to work on an XP team, but I didn't want to be on a team where I couldn't provide value. To address this concern, I asked the developers to tell me if I wasn't providing value, in which case I would ask to be removed from the project and go to work on a team that needed my help. The developers promised that they would.
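For readers who haven't seen JUnit, here is a minimal sketch of the kind of automated unit test the XP literature of the time was describing, written in the JUnit 3 style and exercising a standard library class so the example stands alone (the scenario is mine, not from the project):

```java
import java.util.ArrayList;
import java.util.List;
import junit.framework.TestCase;

// A minimal JUnit 3-style unit test: each public testXxx() method
// is discovered and run by the framework. The List scenario is an
// illustration only; it is not code from the project in this article.
public class ListBehaviorTest extends TestCase {

    public void testNewListIsEmpty() {
        List<String> items = new ArrayList<String>();
        assertTrue(items.isEmpty());
    }

    public void testAddingAnElementGrowsTheList() {
        List<String> items = new ArrayList<String>();
        items.add("widget");
        assertEquals(1, items.size());
    }
}
```

Developers on XP teams run suites of tests like these on every build, which is why they can reasonably ask what a dedicated tester would add.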
So the developers and I began working together to figure out how I could add value to the team. I explained my view of testing as a service role, in which the tester provides information to the project team and decision makers. I plugged myself into the team, focusing on providing test ideas and on finding faults before we released the software. The developers on this team were well practiced in test-driven development, but they were interested in having my help with other kinds of tests. One developer wanted my help writing acceptance tests before writing the code that would pass them. Another wanted me to work with him on unit test development. The team was excited to have me involved in planning activities (known as the Planning Game) to provide feedback on project risk and testability.
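As a rough sketch of what "writing the acceptance test before the code" can look like, consider a test like the following, again in JUnit 3 style. FareCalculator and its behavior are hypothetical names invented for this illustration; the real project's stories were different:

```java
import junit.framework.TestCase;

// An acceptance-style test written before the production code exists.
// FareCalculator is a hypothetical class: until the team implements it,
// this test won't even compile. That's the point of test-first work;
// the test states the behavior the story requires, and the story is
// done when the test passes.
public class FareCalculationAcceptanceTest extends TestCase {

    public void testOffPeakFareIsDiscountedByTwentyPercent() {
        FareCalculator calculator = new FareCalculator();
        double fare = calculator.fareFor("ZONE_1", true /* off-peak */);
        assertEquals(2.40, fare, 0.001); // 20% off a hypothetical 3.00 base fare
    }
}
```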
Because most of the developers were focused on having me do more automated test development, it took some convincing to get them to see value in manual exploratory testing. I wrote some automated functional tests to complement their unit tests, and we did a lot of tester/developer pairing. I worked with one developer to set up a test environment in which I could pull a daily build and manually test the installation, the application working as a system, and the application itself. At first he was indifferent, because he thought I was duplicating the work of the automated unit tests that ran on automated builds throughout the day. Even so, I kept doing exploratory testing on the daily builds. I had my own doubts that I would find anything, because I had worked so much with the developers (especially on test ideas for unit tests) and had seen how much they were doing with automated unit tests. In the end, we were both wrong. The obvious bugs I found (installation errors, bad builds, input overflows) were relatively few, but most of the important, show-stopping bugs came out of my manual exploratory testing.
One developer was disappointed that I couldn't write test cases up front for the bugs I found during exploratory testing. He wondered why I wasn't thinking of these ideas at the beginning of the iteration. I realized then that exploratory testing, like Agile development itself, is adaptive rather than predictive. As Cem Kaner, James Bach, and others will tell you, exploratory testing is simultaneous learning, test design, and test execution. Much of the learning I needed in order to design new test cases came from actually using the software and observing its behavior. In fact, a huge benefit of an XP project is that a tester can start doing this very early in the project. The developer agreed that the information I provided early on was beneficial, even if I couldn't have predicted every fault with an up-front test case.
As we started to hit our stride on the project, we discovered something simple, seemingly obvious, and yet very profound: The less I worked on developer-like activities (such as test automation and design) and the more I focused on conventional testing activities (manual exploratory testing, thoughtful feedback, and test idea generation), the happier the developers were with my work. By engaging in conventional testing activities, I gave developers the information they needed most. This included test design ideas and what the developers were most pleased with: detailed bug reports.
Testing and development activities can require different thought processes. When thinking of useful testing activities within a programming methodology, the developers thought in terms of programming, because they had found automated testing so useful. I thought about testing in terms of assessing risk, asking questions of the product with tools and techniques, and comparing its behavior against a mechanism that tells me whether there is a problem.
When we realized the value that conventional testing was providing, the developers started worrying about having a specialist on the team; XP teams favor generalists, and they thought that having specialists would lead to problems. But when we did an inventory of my work, I wasn't working like a specialist at all: I was doing manual testing, basic programming for test automation, working with customers, writing documentation, and helping to administer test equipment. Had I worked the way they thought a generalist should, I would have worked only on automated test development and maintenance; to a conventional tester, that's a specialist role in automated testing.
As pragmatists, we let the issue of whether I was a generalist or specialist slide, and our testing strategy became a hybridized Agile/conventional testing approach that moved toward complementary tasks with a unified focus. Because I bought into the values of XP and used those values to guide my work, the developers didn't feel that the process was threatened. They also liked that I was willing to step up and do any job that needed to be done. Finally, my approach in reporting bugs wasn't belittling or accusatory, so they grew to respect and value what I had to say.
Of course, the developers are only part of the picture on an XP team. Another important role is the customer representative, the voice of the business and the user community. The customer representatives provide specialized subject matter expertise and also do testing, called user acceptance testing (UAT). I spent quite a bit of time working with the customer representatives. At first, this work mainly involved helping to set testing strategy during planning. I would work with them to assess the risk of each proposed feature, asking, "What is the potential cost to the business if this functionality is missing or doesn't work?" This risk assessment helped the customer determine which parts of the system were critical. At first, the customer wanted to include every feature that came to mind, but asking the right questions narrowed the list to the important features and helped prioritize the team's development work. Once we had the feature list, I also helped identify testing techniques that we could use to mitigate risk for those features. Finally, when testing time came, I coached the customer on testing the software.
Working with customers showed me that what techies consider bugs, customers usually think are their own mistakes made while using the software. If customers see an error message, they tend to blame themselves. It turned out that the most important thing I taught the customers was how to recognize a bug for what it is, rather than seeing it as a user error. We used Bret Pettichord's bug definition: "A bug is something that bugs someone." I worked with customers the way I do with any new tester, much the same way I was trained as a tester myself: I provided support, answered questions, and encouraged them to voice concerns over any issue that felt wrong. When they saw an error message and blamed themselves, I explained why it was actually a failure in the software. One customer representative was naturally gifted as a tester and caught on quickly with just a bit of coaching; she ended up logging excellent bugs that the rest of us had overlooked.
A typical day working as a conventional tester on this XP team might include the following tasks:
- Act as sounding board on an emerging design for developers working on a story.
- Pull a new build, install the software, and run automated acceptance tests against it.
- Lightly test software from a story in progress to provide initial feedback to developers.
- Heavily test software from a recently completed story to provide deeper feedback on it.
- Integration-test the software delivered to date.
- Work with a customer on new test ideas.
- Answer questions and provide support for customer testing.
- Add to or maintain the automated acceptance test suite (a sketch of such a suite follows this list).
- Provide testing status information at a standup meeting.
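To give a flavor of the acceptance-suite work in that list, here is a hedged sketch of a JUnit 3-style suite a tester might run against each daily build; the test classes named in it are placeholders, not the project's actual tests:

```java
import junit.framework.Test;
import junit.framework.TestSuite;
import junit.textui.TestRunner;

// A sketch of an acceptance suite to run against a freshly pulled and
// installed daily build. The suite members are hypothetical placeholders.
public class DailyBuildAcceptanceSuite {

    public static Test suite() {
        TestSuite suite = new TestSuite("Daily build acceptance tests");
        suite.addTestSuite(FareCalculationAcceptanceTest.class);
        suite.addTestSuite(ListBehaviorTest.class);
        return suite;
    }

    // Run from the command line after installing the build.
    public static void main(String[] args) {
        TestRunner.run(suite());
    }
}
```

Adding a new test class to such a suite is the maintenance task mentioned above; the suite grows as stories are completed.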
Toward the end of an iteration, more of my time might be spent working with customers on testing, or on risk assessments and testing strategy for emerging stories.
Looking back, I found that I was approaching testing from two fronts:
- Supporting the development team with testing and feedback
- Supporting customers by helping them assess risk and perform testing activities
These tasks fit nicely into what Cem Kaner and Brian Marick call business-facing and technology-facing activities. The information gathered in each of these areas helped the team to have more confidence in the software they were delivering, and helped customers to have more confidence in the product they were receiving.
When I did a retrospective with the developers, they gave me good feedback. They said I found bugs that they would never have caught with their own unit testing. They were pleased that someone was doing integration tests on an installed system daily, because those tests revealed new sources of errors. They said my exploratory testing was very effective, and that, unlike them, I was able to spend time tracking down intermittent errors. They were pleased to have someone on the team who thought about testing all the time, complementing their ideas. They liked my attitude about bug reporting: They didn't feel I was trying to humiliate them or catch them out when I found a bug, and bug reports were not the only way I provided feedback. They said I supported them and the customer, and provided feedback in areas I hadn't even been aware of, such as during initial design. They liked that I would work with customers and help them verbalize questions and concerns in terms everyone understood. Most importantly, the developers wanted to work with me again.
Overall, we found that my conventional software testing skills complemented what development and customers were doing. When I focused on areas the developers weren't covering with unit tests, such as installing the software in a test system daily and testing it, I found bugs that the automated tests didn't catch. When I worked with customers on testing ideas, I helped them learn how to test and taught them the technical vocabulary used by the development team. When a customer was the lone voice raising a concern, I was able to encourage her and back her up when facing down several developers. When the team came up with features to develop for an iteration (captured as story cards because they're written on index cards), I could help determine whether a story was testable and coherently written; incoherent stories were usually a sign of an incoherent design or of areas we might have missed.
Experience 2: Challenges
While my first XP testing experience was a success, another project posed many more challenges. In this case, the project sponsors required a "QA person" on the project, but the developers didn't buy into that requirement. I knew how to work within the methodology, but the developers made it very clear that I wasn't welcome, with comments like this: "There is no testing role in the 'white book' [Extreme Programming Explained]." Finally, after pressure from the customer, the developers decided that it might be okay if I wrote automated unit tests and did some user acceptance testing, but they really didn't think I could add value. When I explained that my conventional testing activities had provided value on a previous XP project, they weren't interested. They spent some time telling me how the last project I was on "wasn't really XP"; if it had been, I wouldn't have been on the team. They thought that I should just come in to test at the end of the project, rather than being involved throughout the process, because that was how they had dealt with QA people in the past. I responded that that wasn't the way I worked, and that the customer had asked for me to be on the team. I told them I'd work with them to see where testing could fit and help out.
My initial feedback was met with some suspicion from the developers. They even discouraged me from testing a functioning build when it was available: "We've already tested it with automated unit tests. Your retesting it is going to be a waste of time." When I explained that I could do testing that complemented theirs, they grudgingly agreed to let me work. When I found tricky problems due to system configuration issues that the automated tests couldn't account for, I was told not to log that kind of bug. They also discouraged me from doing any manual exploratory testing. Again, I stuck to my guns and kept logging important bugs. After a few days, the developers started to realize that they were missing whole classes of tests that a willing conventional tester could execute. Slowly, they started to like having testers around because of the feedback we could provide.
Once we started getting along well, the developers decided that I should do work in the source code. They hoped that I could take over some of their unit test development, but I didn't have their level of architectural and programming expertise; I couldn't just read the code and see the problem areas. I also felt that code-level testing was an important part of development work, because a testable design is a good design. This mismatch bothered us at first, but we worked together to find common ground. From my viewpoint, they were looking for a senior developer seasoned in design and test-driven development, while what I offered was conventional testing activities to complement their work within the XP development process. My knowledge of and experience with XP carried a lot of weight, so the developers decided to support my doing conventional testing activities, especially since the feedback I provided was different and useful.
On this project, I felt a bit isolated from the developers at first, so I sought them out with clarifying questions about the designs behind stories. Sometimes this approach helped us find problems with somewhat incoherent stories, and the designs firmed up a bit. I was logging bug reports as stories, but in general they were being ignored; the developers were indifferent to these bugs. When the time came to demo the software to the project stakeholders at the end of an iteration, though, things started to change. The developers started asking for more conventional testing feedback, and they began taking bug reports more seriously. I stuck to my service role, focusing on providing feedback when they needed it, but being firm about bugs that would have a significant impact on customers. With time, they started asking for feedback beyond bug reports, and took advantage of having a dedicated tester available to help them investigate bugs.
By project end, the developers had gone from saying "We don't need testers!" to "Please test this and give me feedback." Instead of dreading an unproductive period at the end of a project or iteration in which the QA team would begin to test the software and deem it worthy to be released at their discretion, the developers had testers whose job was to serve the team. They were enthusiastic about the feedback they got, particularly in the form of bug reports. They agreed that if the testers found problems before the users did, it was a good thing, so the more bugs we found, the happier they were. They said they were able to deliver the software with a great deal more confidence, and were surprised at how many bugs a conventional tester could find, even with all the automated unit tests the developers had written. In the end, they embraced the diversity of testing activities and were pleased to work with like-minded conventional testers.
Lessons Learned
Working on an XP team as a conventional tester has shown me that good, thoughtful, skilled conventional testing and Agile testing can be complementary. On an XP team, where most of the focus is on automated unit tests and other test automation activities, testers can plug conventional software testing activities into this environment. Testers need to adapt to a developer-centric methodology that values technical skill, rapid feedback, and close collaboration within an iterative lifecycle.
On an XP team, you literally all work together in a shared space, which poses special challenges for nonprogrammers. Developers know their role and have a lot of guidance on executing activities. Nonprogrammers usually have less support in XP, and it can sometimes be difficult for developers to think of nonprogramming activities for the testers.
When working as a tester with developers in the same workspace, I learned that it's important to understand what developers need. Timing is essential. When developers are brainstorming about a design idea, they may be looking to you as a sounding board, not for test ideas. If you overwhelm the developers with counterexample test ideas at this point, they'll be frustrated. If developers are working on a story and want quick initial feedback, it's counterproductive for you to point out every flaw you see, digging in deep and trying to make a feature fail. There is a time for that technique: when the developers have finished the story. As a tester, you need to communicate with developers and find out what they want. They'll rely on you for test ideas and expect you to try to break the software they've developed, but they'll also rely on you for positive support and for examples to help drive development.
Both sides are responsible for good communication; developers need to tell you when you're providing feedback that isn't useful. Unfortunately, neither side always notices this at the time, so it's a good idea to ask. For example, if I'm generating test ideas and get strange looks, I ask the developers whether this is the kind of feedback they want. After a while, the developers I worked with learned that they could tell me to hold off on test ideas and instead ask me to be a sounding board for a new idea. It's also a good idea to clarify terms such as test and tester that may appear to be part of a shared language; in XP circles, many testing terms mean something different from the definitions that conventional testers are used to.
For testers, adapting to XP teams can be a challenge, but the benefits can be enormous. The code is always available to be tested, developers and customers are also testing, and test work can be done at any time during a development project. Adapting to requirements coming in the form of story cards, getting used to writing bugs on story cards instead of in a fault-tracking database, and being willing to step up and work on tasks that need to get done can all be rewarding.
Don't be surprised if there's a lot of talk at first about the team not needing your skills. In my experience, there's a lot to be learned from focusing on adding value. If I'm providing value, I stay on. If not, I look at other options. In my experience, and that of other testers, most of the resistance to conventional testers disappears once developers and testers start working together on a project.
Integrating Conventional Software Testing on an XP Team
Following are some suggestions for making the most of your experience as a software tester on an XP team:
- Try to understand the XP development process and the motivations behind it. Read Kent Beck's Extreme Programming Explained. Use the online resources at XProgramming.com and XP123.
- For information on how to test in this environment, read Cem Kaner's papers "The Role of Testers in XP" and "The Ongoing Revolution in Software Testing."
- Let go of preconceived notions of how you think a development process should work. Drive out fear, and give the methodology a try.
- Use the XP values as an overall guide for your working attitude and behavior. Not every answer to every problem will be written down, but the values are there for guidance.
- Read XP-related testing publications, but don't be discouraged if some of what they say opposes conventional testing activities such as manual testing. Be open to new ideas, but have confidence in your skill as a tester. Experiment—and stick with what works, not what seems to be popular in publications.
- Don't expect to base your testing on a requirements document written at the beginning of the project. Much of the communication will be face to face, with requirements for an iteration written on index (story) cards.
- Be proactive in getting information on what needs to be tested. Question developers, customers, and other team members; then use your judgment and skill as a tester to figure out what to test.
- Beware of vague language. Words such as test or tester on an XP project often refer to developers and automated unit testing, and terms you seem to share with developers may carry different meanings in their context. Always ask for clarification to be sure that everyone is on the same page.
- Think of prewritten tests that guide development as examples, and tests designed to find potential problems as counterexamples. (A short sketch of both appears after this list.)
- Work on your timing. Find out what developers want when they ask for feedback:
- A sounding board to talk about ideas
- Test ideas to guide development
- Test ideas to firm up a design
- Light testing for quick feedback on a story under development
- Heavy testing on a finished story
- Work with customers to help them understand risk, and support them in areas where they feel uncomfortable. Discomfort is usually a sign of usability problems or incoherent stories.
- Don't worry about fault-tracking systems. You may simply write bug reports on story cards.
- Accept feedback. Listen to the developers' and customers' needs. Be empathetic and supportive; don't try to enforce a process.
- Use your expertise to look for answers and think of creative solutions. Much of what you already know as a tester is effective on any software project.
- Strive for continuous improvement, and be ready to try new testing activities to see what works well. Work on improving your skill as a tester.
- Get ready to be challenged—and to learn a tremendous amount.
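Finally, to make the example/counterexample distinction from the list above concrete, here is a small JUnit 3-style sketch. It uses a standard library method so that it stands alone; the distinction lives in the comments, not in the particular method under test:

```java
import junit.framework.TestCase;

// Illustrating "examples" versus "counterexamples" with a standard
// library method, Integer.parseInt, so the sketch is self-contained.
public class ParsingTest extends TestCase {

    // An "example": a prewritten test that guides development by
    // stating the behavior the code should exhibit.
    public void testParsesAWellFormedNumber() {
        assertEquals(42, Integer.parseInt("42"));
    }

    // A "counterexample": a test designed to find a potential
    // problem, here malformed input.
    public void testRejectsMalformedInput() {
        try {
            Integer.parseInt("forty-two");
            fail("expected a NumberFormatException for malformed input");
        } catch (NumberFormatException expected) {
            // the method correctly refused the bad input
        }
    }
}
```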