
Test-Driven Development from a Conventional Software Testing Perspective, Part 3

Date: May 4, 2006


After practicing TDD himself, Jonathan Kohl was ready to weigh in with his thoughts. In part 3 of this series, he talks about some of the controversy surrounding TDD and some TDD-related challenges, and provides thoughts for the future of testers working in this area.

The Need for Skepticism

After first learning about TDD from an expert programmer (described in part 1 of this series), and trying it out myself in test automation projects (described in part 2), I was in a position to think reflectively about what I had experienced.

In an industry that seems to look for silver-bullet solutions, it’s important for software testers to be skeptics. We should be skeptical of what we’re testing, but also of the methodologies, processes, and tools on which we rely. Most importantly, as testers we should be skeptical of our own testing practices, and strive for improvement.

One way in which we can strive to improve our own testing is by learning about other testing ideas. Test-driven development (TDD) is one area from which software testers of all backgrounds and skill sets can learn. I’ve seen some programmers make enormous improvements in the work they deliver by using TDD. But I’ve also seen programmers place too much trust in TDD alone for testing, and be shocked when their software fails basic, manual functional tests. Testers and developers can learn a lot from each other.

TDD is only part of the testing picture—it doesn’t encompass all testing, nor does it replace other testing techniques. Test-driven development requires skill, and as with any other activity, it can easily be performed badly.

I was once lamenting to a TDD developer about how hard it is to write automated functional tests. I complained about dependencies, buggy test code, timing issues, design considerations, how to test the test code, etc. The developer smiled and said, "All of those things exist in automated unit test development. It’s just as hard to do well as automated functional testing is."

Controversy

Test-driven development is often misunderstood. To be honest, it wasn’t until I spent some time learning basics with a developer, and then actually practiced it myself, that I felt I had a basic grasp of TDD. Seasoned TDD programmers will read my story and find obvious gaps, naïveté, and assumptions, and those who practice the craft have a lot of good ideas on how to do TDD well (and much better than I described). However, since test-driven development contains the word test, other stakeholders, particularly QA or testing departments, feel that they should somehow "own" these tests and this process. In some cases, development managers have asked me or the testing team to take responsibility for the unit tests, and to ensure that they’re developed properly. There are a couple of problems with this approach:

Another source of confusion is the nature of the activity itself: a lot of test-driven development is serious program design. Some people use this point to counter the "quality police" who try to become gatekeepers of TDD. Since TDD is about design, why have testers police it? Many developers say that TDD isn’t about testing at all; it’s a design technique.

Cem Kaner, Brian Marick, and others have pointed out that TDD is a lot like development using "examples." Maybe "example-driven development" would be a more accurate term. We write an example in the form of a test, and write the code to fit the example.
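To make the "example first" idea concrete, here is a minimal sketch in Python. The discount() function and the scenario are hypothetical, invented only for illustration; the point is that the example is written as a test before the code exists, and only then is just enough code written to satisfy it.

    import unittest

    # The "example" comes first, written as a test before discount() exists.
    # discount() is a hypothetical function used only for illustration.
    class DiscountExample(unittest.TestCase):
        def test_ten_percent_off_a_hundred_dollar_order(self):
            self.assertAlmostEqual(discount(order_total=100.00, rate=0.10), 90.00, places=2)

    # Only after watching the test fail do we write the simplest code
    # that makes the example pass.
    def discount(order_total, rate):
        return order_total * (1 - rate)

    if __name__ == "__main__":
        unittest.main()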

Still other developers argue that TDD is about testing: "Sure, it’s about design, but that design is driven out by the tests!"

My view is that TDD is about both design and testing, and while it’s more about design than testing, testing is still part of it. Some TDD practitioners have told me they feel that TDD is often 80% design and 20% testing, so I’m not alone in my views. However, TDD testing may look much different from what conventional software testers are used to. I certainly had some challenges starting out, and I consider myself to have basic programming skills. Modeling TDD with William Wake’s "generative test" and "elaborative test" phases helps me understand it as both design and testing.
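A rough sketch of how I picture Wake’s two phases, again in Python with hypothetical names: the generative test drives a new piece of design into existence, while the elaborative tests come afterward and probe the behavior of code that already exists, which is where conventional test ideas fit most naturally.

    import unittest

    # Hypothetical production code, shown here already written for brevity.
    def parse_price(text):
        """Convert a string such as '$19.99' into a dollar amount."""
        cleaned = text.strip().lstrip("$").replace(",", "")
        return float(cleaned)

    class PriceParsingTests(unittest.TestCase):
        # Generative: this test existed before parse_price() did, and drove
        # the decision that such a function should exist at all.
        def test_parses_a_simple_dollar_amount(self):
            self.assertAlmostEqual(parse_price("$19.99"), 19.99)

        # Elaborative: written after the code existed, exploring behavior a
        # tester would naturally poke at (whitespace, thousands separators).
        def test_ignores_surrounding_whitespace(self):
            self.assertAlmostEqual(parse_price("  $5.00  "), 5.00)

        def test_handles_thousands_separators(self):
            self.assertAlmostEqual(parse_price("$1,250.00"), 1250.00)

    if __name__ == "__main__":
        unittest.main()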

Cem Kaner has identified TDD as another school of thought in software testing, and I tend to agree. We can learn a lot from TDD practitioners, and there’s an enormous community of practice from which we can learn. While TDD is a different school of thought, it still requires programming skill to master. Many testers see TDD as programming at first, and it takes them a while to relate to testing in the code context. The TDD community of practice is often geared toward object-oriented programming and design patterns. Testers who want to do more TDD will need to gain a strong understanding of design as well as programming.

QA testers are often intimidated by programmer testing: "If the developers are testing now, what will we do?" However, TDD involves testing in only one context: the source code context. There’s still a lot of room for testing in other contexts. I’ve also found that working with the developers is a great source of test ideas, helps make the code more testable, and gets more people involved in testing the application.

Challenges

Jerry Weinberg has a heuristic he calls "The Rule of Three": For any idea you have, if you can’t think of three ways it can fail, you don’t understand the problem space. This isn’t meant to discourage ideas, but to help you prepare for those times when an activity doesn’t go as planned. It also helps you to become aware of tradeoffs you’ll make. Tradeoffs are especially important to understand in software testing. What do I gain by using this technique, and what do I potentially lose?

By practicing TDD, and by talking with colleagues, I’ve learned about some of the problems we can run into. TDD has an important role in software development today, but it isn’t a cure-all. Here are some of the challenges that are important for both conventional testers and software developers to be aware of:

Ideas for the Future

Where can conventional software testing and test-driven development meet and combine forces?

Pair Testing with Developers

I started pairing with developers during TDD because they wanted another perspective. The developers wanted to work with someone who thought about testing all the time, instead of working with other programmers who were thinking predominantly about programming. While it can be a bit intimidating to work in a technical environment where writing software code is the dominant activity, you don’t need development skills to pair with a developer doing TDD. All you need to bring is the testing knowledge you already have, a willingness to learn and be taught, and the confidence to share testing ideas. Combining a skilled software tester who has technical and non-technical experience with a highly technical code-level tester (the programmer) is powerful. For testers who want to do more with TDD, basic programming skills are required. To do pure TDD full-time, you’ll need to become a developer.

Developers told me that, as a tester, I provided a valuable service when I paired with them while they did TDD. I sometimes felt that I wasn’t providing a lot of value during the generative phase, but we felt that with practice I could spot "code smells" and provide the design feedback they wanted, even without becoming an expert programmer. Smart testers can learn how to spot these patterns in the code in their own way, even if they aren’t programmers.

If you’d like to start pairing with a developer, the elaborative phase of TDD is probably the easiest place to start. Developers are looking for test ideas at this point, and software testers are full of all sorts of test ideas from functional testing, customer testing, and a wealth of experience with defects and testing challenges. I’ve written an article that provides some ideas on how to start pair testing. In TDD, talk to the developer about the different phases, and agree on a good place for you to step in and start. If it works out, move to the generative phase of test-driven development, and learn how to spot design problems from a testability perspective.

The key thing to remember is that if you’re a skilled software tester of any kind, you can bring your ideas and learn with a TDD programmer, without any programming skill to start with. You’ll learn what you need as you go.

Why Limit Pairing to Testers?

Software tester Dana Spears points out that this activity of pairing with TDD programmers to help them as they develop a quality product need not be limited to testers. Systems analysts, designers, business analysts, and others with software expertise could also add a lot of value during TDD. I’ve witnessed these kinds of pairings, and the teams have said that they were very useful.

Exploratory Testing

I recently gave a talk on exploratory testing (simultaneous test design, execution, and learning). Most of my experience with exploratory testing has been through a graphical user interface, the way an end user would use the software. TDD work has opened my eyes to other testable interfaces, and I’ve done some exploratory testing through a program’s application programming interface. Some software I tested was used by other machines, not humans, so I didn’t have a GUI to rely on.

I’ve also written about interactive automated testing, which involves using automated testing tools to interact with an application through interfaces other than the GUI. In the TDD world, we’re always thinking about testable interfaces, and sometimes we do exploratory testing by changing the values of our test inputs based on the results of the last test run. We don’t do this very often; when we do, it’s usually on areas that are either critical for the product to work or that are giving us trouble.
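As a sketch of what this kind of below-the-GUI exploration can look like, here is a small Python loop against a hypothetical reserve_seats() function. The function, its 40-seat limit, and the booking scenario are all invented stand-ins for whatever programming interface the application under test exposes; the interesting part is the feedback loop, where the result of each call suggests the next input to try.

    # reserve_seats() is a stand-in so the sketch runs on its own; a real
    # session would call into the application under test instead.
    def reserve_seats(count):
        if count > 40:
            raise ValueError("no single booking may exceed 40 seats")
        return {"confirmed": count}

    def explore_booking_limits():
        count = 1
        while True:
            try:
                result = reserve_seats(count)
                print(f"{count} seats: {result}")
            except Exception as exc:
                # The failure itself is the interesting result, and it
                # suggests the next test ideas: probe around the limit.
                print(f"{count} seats rejected: {exc}")
                print(f"follow-up ideas: try {count - 1} and {count + 1} seats")
                break
            count *= 2  # let the last result drive the next input

    if __name__ == "__main__":
        explore_booking_limits()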

Several TDD developers attended my talk, and they asked me about combining exploratory testing with TDD more extensively. They felt that the TDD automated tests sometimes contributed to a brittle design, and hoped that doing more exploratory testing earlier in the process would help deal with this problem. Exploratory testing is a powerful way of thinking about testing, and the test idea generation, execution, and resulting learning would help improve both the design and the reliability of the product.

There’s a lot of room for testers and developers to collaborate by combining exploratory testing with TDD. Testers and developers could try using more exploratory testing in the generative phase, as well as in the elaborative phase, where it comes more easily. Like our rule of "every tenth time we run the suite, we run a functional test with a database in a real system," exploratory testing could be done in conjunction with the automated unit test runs. More frequent staging-type tests would contribute valuable feedback to the design early on, instead of that feedback arriving later in the form of bug reports.
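One way to sketch that "every tenth run" rule in code, assuming a simple counter file and placeholder tests (the file name, the tests, and the database check are all hypothetical, not something the article prescribes):

    import os
    import unittest

    COUNTER_FILE = ".suite_run_count"  # hypothetical location for the run counter

    def bump_run_count():
        """Read, increment, and persist the number of times the suite has run."""
        count = 0
        if os.path.exists(COUNTER_FILE):
            with open(COUNTER_FILE) as f:
                count = int(f.read().strip() or 0)
        count += 1
        with open(COUNTER_FILE, "w") as f:
            f.write(str(count))
        return count

    RUN_COUNT = bump_run_count()

    class FastUnitTests(unittest.TestCase):
        def test_runs_every_time(self):
            self.assertEqual(2 + 2, 4)  # placeholder for the ordinary unit tests

    @unittest.skipUnless(RUN_COUNT % 10 == 0,
                         "heavier functional test runs only every tenth suite run")
    class DatabaseFunctionalTest(unittest.TestCase):
        def test_against_a_real_database(self):
            # Placeholder: a real version would exercise the application
            # against an actual database in a staging-like environment.
            self.assertTrue(True)

    if __name__ == "__main__":
        unittest.main()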

Conclusions

I like what I see in test-driven development, and when writing software I find that TDD fits the way I think as a tester. Prior to learning TDD, I could paralyze myself when programming because I kept thinking of all the areas that could go wrong. With TDD, I can start programming by writing a test, which comes naturally to a tester. I just had to get over the fact that, starting out, I’m writing a test for something that doesn’t exist yet. I’ve found TDD to be a great complement to other kinds of testing. The more testing we can do, the more information we can gather to make decisions about the software we’re developing.

However, I will caution that TDD is not a cure-all. It doesn’t encompass all types of software testing, and it doesn’t replace testing at other interfaces, such as skilled manual exploratory testing at the user interface. I talk to too many developers who feel let down because TDD doesn’t solve all of their testing problems. Like anything we do in software development and testing, TDD involves tradeoffs, and we’ll know better in the long term how successful it is as a practice.

Conventional testers, try TDD with a developer and learn about it. Don’t be afraid to fail. Our failures tend to serve as wonderful lessons that stay with us. Be persistent, and don’t hesitate to ask a developer for help.

Developers, if you work with a conventional software tester, be prepared to learn other testing techniques that you can use in your own work. You might be surprised what you learn about your code by testing it from a different perspective.

I would like to thank John Kordyback for the TDD lessons, and Dana Spears, Michael Bolton, and Colin Kershaw for their review of and contributions to this series.
