
Test-Driven Development from a Conventional Software Testing Perspective, Part 1

Date: Apr 14, 2006


Jonathan Kohl, a conventional software tester, set out to learn the process of test-driven development (TDD), pairing with a TDD expert to work on an application. Was it difficult? Sometimes. He shares his challenges and lessons learned in part 1 of a three-part series.

Learning About TDD

If you’re a professional software tester, or you work in quality assurance, I consider you to be (like me) a "conventional software tester." Conventional software testers are often asked for opinions and expertise on a wide range of testing-related questions. One new area of thought in software development is test-driven development (TDD). Because it contains the word test, TDD is a topic on which conventional software testers are increasingly asked to weigh in. But since TDD is a programmer-testing activity, conventional software testers often find that they’re inadequately prepared to deal with it.

As a curious software tester, I wanted to learn more about test-driven development first-hand. I began learning about TDD through my typical process of inquiry:

  1. Read literature about the subject to gain an overall understanding.
  2. Explore by working closely with practitioners to gain more comprehension.
  3. Immerse myself in the subject by learning from an expert and through practicing on my own.
  4. Spend time reflecting on my experiences.

This series of articles describes some highlights from this process. In this article, I share my experience learning test-driven development from an expert programmer. In part 2, I share my experience applying what I learned by practicing test-driven development myself. In part 3, I share some reflections on what I learned.

A Unique Learning Opportunity

For the past couple of years, I had worked as a tester on software development teams that were doing test-driven development. I learned that, in TDD, you write tests while you’re developing code, instead of writing the code first and then testing after the fact. In fact, in TDD, you write a test first, and then write the code to satisfy the test. You only add new code when you have first written a test for it.

As a tester, I would pair with developers from time to time, but I had never spent full days working with them as they developed new software from scratch. I was to spend the next two weeks working full-time on a small project with John, an experienced TDD programmer.

John and I spent some time looking at the problem domain. He thought about possible design challenges, and I thought about possible tests. Together, we talked through the areas where we expected the most challenges. As a tester, I had experience with similar software and knew the areas where bugs tended to come up over and over. John found this interesting and took note of my experience. He talked to me about testable interfaces, bridging testing ideas with architectural design ideas. He wanted me to be completely prepared for when we started working together at the same computer doing test-driven development.

As we worked together sharing ideas during this preparation time, I had a breakthrough in my understanding of testable interfaces. I had viewed testing mostly through the user interface, because that was where my exploratory testing experience and much of my test automation work had been done. I tended to think of an application in terms of how the end user would see it; in other words, I modeled the application according to the visible, interactive user interface. I had also spent time programming, though, so I was familiar with modeling the application in terms of program code. When I worked with John and we talked about how he viewed the application, I realized that a testable interface is really any sort of doorway into the application that we could use to communicate with it. The graphical user interface was just one of many potential testable interfaces we could program into the application. These testable interfaces differed in how we communicated with them and in which area of the application each one reached.

This concept of many possible testable interfaces was something John wanted me to grasp. He told me that this was a key point to understand with test-driven development. He explained that a testable design is a good design; the two concepts are tightly related.

To prepare, I thought of test designs that might be applicable, brushed up on some testing skills, and gathered any related information we could use in development, such as data dictionaries. I wanted to be sure that I wasn’t holding John up when we were working together, so I prepared in order to be quick on the draw with test idea generation, which I thought was the most important contribution I could make.

Getting Started

The next morning, John and I paired together at one computer and began work. The first thing we did was model the first feature we needed to develop, breaking it down according to how we might test it. Once we had decided where to begin, we went over the integrated development environment (IDE), because the automated unit tests developed in TDD are usually written in the same language as the application code. John had prepared a demo for me, using an application that was familiar to both of us. He showed me how to run the automated unit tests from within the IDE. In this particular program, we could run a suite of automated unit tests, or we could run them individually by highlighting the test in the text editor and pressing a hotkey combination. Once I was familiar with the IDE, we began to write a test.

The IDE incorporated the xUnit test framework for us; all we had to do at this point was write code for an assertion. (Ron Jeffries says, "An assertion is a program statement that states what should be the case at that point in the program.") We started to translate my testing idea for the area where we needed to start coding, but soon hit a problem: My test was too complex. John helped me to design a simpler test, but it felt almost trivial to me. He explained that we needed to start with a test, and we would write more powerful tests as the design worked itself out. For now, we needed to start with a simple test that would call a method and then evaluate the result by using an assertion. We wrote the test code, and John stubbed in a name for the method that we hadn’t written yet. He then ran the test, and our xUnit tool in the IDE displayed exactly what we were looking for: a "red bar," meaning that the test had failed. This was correct behavior. The error message stated that the method we were calling was unknown.
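
To give a feel for that first step, here’s a minimal sketch in the Ruby Test::Unit style that the later snippets suggest. The class name, test name, and method name are placeholders of mine, not the ones John and I actually used.

require 'test/unit'

class OrderTest < Test::Unit::TestCase
  def test_new_order_is_empty
    # order_item_count doesn't exist yet, so running this test gives a
    # red bar: the framework reports the method as unknown.
    assert_equal(order_item_count, 0)
  end
end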

Now it was time to move to the next step: creating that missing method, the real production code that we would be using in the program.

This procedure felt a little backward to me at first, but working with John soon made it seem smooth. We wrote a method using the name we had supplied in the test case, and ran the test again. We got a red bar again, but this time it wasn’t because the method didn’t exist; it was because the assertion failed, since we didn’t yet have the code to satisfy it. So we went back to the code, filled in the method to satisfy the assertion, and reran the test. Now we had a green bar. Victory!
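
Continuing the same invented example, the production code went through two stages: an empty method that still produced a red bar (now from the failing assertion), and then just enough code to turn the bar green.

# First pass: the method exists but does nothing, so the bar is still
# red, this time because the assertion itself fails.
def order_item_count
end

# Second pass: just enough code to satisfy the assertion. Green bar.
def order_item_count
  0
end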

Now I started driving: I ran the test and got the green bar. I changed the data in the automated unit test and reran it. John then taught me a little trick: He had me add a second assertion that mirrored the first but tested the opposite condition. For example, if your first test did an assert true, you would pair it with an assert false. He said this was a good practice to try once in a while during development, because sometimes an assertion passes only because your method is returning something that the framework misinterprets.

Once I was comfortable with what I’d worked on so far, we wrote a new class and related methods. We decided on a simple test for a method we would eventually need to write: one that read in values, evaluated them, and returned a result. I dove in with confidence, had the IDE generate the xUnit automated unit test framework code for me, and decided what kind of assertion to use. As I was typing, though, I began to freeze up. What would I name this method that I hadn’t written yet? I hit a mental block trying to think of a name for it. John knew I was struggling and asked what was up. When I explained what I was grappling with, he said to just call it whatever came to mind; we would change the name later anyway, using the refactoring functionality of the IDE. He said this was standard practice; as the design tightened up, we would often find that our first choice was wrong. I named my to-be-developed method foo, and wrote an assertion that looked something like this:

assert_equal(foo(500,400),900)

I hadn’t written method foo yet, but knew that it needed to take two values, sum them, and return the result. The test code above reads like this: "Assert that the value returned from method foo is equal to 900 when I tell it to total 500 and 400." If this assertion fails, I either haven’t written the code to do it yet, or there’s a problem in my method.

This wasn’t easy to do at first. I struggled with thinking of how I would call code I hadn’t written yet, and then how I would test it with something tangible. We knew that the method would get more complex as we went on, but we needed to start coding and this was a simple place to start. I ran the test, it failed, we added the code to satisfy the assertion, and then added another assertion that looked something like this:

assert_not_equal(foo(500,400),0)

It was the opposite of the earlier assertion: the result would not equal zero, so it passed as well, and we were on track. We enhanced the tests to fit our design requirements for that method, altering them to use variables populated with test data instead of hard-coded values, and we used setup and teardown methods to prepare and clean up the test data before and after each test run.
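
Put together, the tests at this stage would have looked something like the sketch below, again in the Ruby Test::Unit style of the earlier snippets; the variable names and the placement of foo are a simplification of mine.

require 'test/unit'

# The production method under test: takes two values, sums them,
# and returns the result.
def foo(a, b)
  a + b
end

class FooTest < Test::Unit::TestCase
  def setup
    # Prepare the test data before each test run.
    @first_value  = 500
    @second_value = 400
    @expected_sum = 900
  end

  def teardown
    # Clean up the test data after each test run.
    @first_value = @second_value = @expected_sum = nil
  end

  def test_foo_returns_the_sum
    assert_equal(foo(@first_value, @second_value), @expected_sum)
  end

  def test_foo_result_is_not_zero
    assert_not_equal(foo(@first_value, @second_value), 0)
  end
end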

Now that the code behaved the way we needed it to behave and we had a handful of tests, we renamed the method I had struggled with naming. At this point, we had an idea for a better name than foo, so we opened a refactoring browser window and told the IDE to rename all instances of foo to our new method name, calculate_bonus.

In the past, when I had to do a method rename after a couple of hours of development, I would use find/replace and then run a few ad hoc unit tests, hoping that I hadn’t broken anything. Sometimes I might miss an instance. But now I saw a useful side effect of test-driven development: automated regression tests. Once John and I renamed our method, we reran our tests. The resulting green bar told us that we could move forward with confidence. Renaming the method didn’t cause any failures, so we knew that we hadn’t injected a bug in the renaming process. A suite of tests tested the method directly, along with other methods that called it. At the press of a button, the green bar and the passing tests it represented gave me a lot of confidence.

I now realized through experience how our test cases not only drove the design but served as a safety net for refactoring. This was something I had understood from working closely with TDD developers in the past, but until I experienced it firsthand on software I was helping to write, I didn’t fully grasp the power of the technique. John explained that he’d had a similar watershed moment after first learning to program in a procedural language: once he had tried object-oriented (OO) programming for a while, there came a moment when he really felt he understood it. When he tried to do procedural programming again, he found it difficult at first because he now thought of programming in OO terms. After moving to TDD, practicing the techniques, and finally "getting" it, he said he had trouble programming any other way. He now feared programming without tests, because not only did the tests help drive the design, but they gave him much more confidence in the code he was developing. He wasn’t the only developer who mentioned being fearful of not having automated unit tests in place when developing software.

Early Challenges

In the afternoon, as John and I moved on from the basics, I started to feel out of place. While I was familiar with the programming language and had basic skills, John had a far better grasp of design patterns and object-oriented programming than I did. I wasn’t always sure what he was doing, so I would ask clarifying questions, watch carefully, point out obvious syntax errors, and concentrate on generating test ideas based on what we were doing. Whenever John asked me a question, I responded with the test ideas that were on my mind. I noticed that I was slowing down progress, though, and I wasn’t giving John what he needed at the time. Sometimes we needed my test ideas, but at other times he was working through the design in his head and driving it with tests. At those moments my test ideas were too much; they slowed us down and broke his concentration. Eventually, as the design we were working on took shape, he was eager for test ideas again.

At this point, John told me we had a decision to make. We had put it off until now, but to continue we needed either to connect to a database or, preferably, to use a test double (in this case, a mock object) to simulate the database. The application would eventually use a production database, and we were working with a test version of it. John asked what I thought, and I told him, "Don’t rely completely on automated unit tests; be sure to run functional tests in a system as close to production as possible." I’d seen too many instances in the past where relying on mock objects went wrong: when we hooked up to the real system, we’d have problems. He explained that mocking the database was a popular practice with automated unit tests; connecting to the real database every time the tests ran in a continuous integration environment would be too slow, which would threaten the usefulness of our automated unit tests. We agreed on a compromise: We would do both. We would create a mock object for running the automated unit tests, but every tenth time we ran them we would also execute functional tests using the real database.
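
To illustrate the shape of that compromise, here’s an invented sketch: a hand-rolled test double that answers like the database layer, plus a simple switch that lets a periodic run go against the real database. None of these class or method names come from our actual project.

# A hand-rolled test double that answers like the database layer
# without opening a real connection.
class FakeBonusStore
  def bonus_rate_for(employee_id)
    # Canned data, plus the ability to raise the known database errors
    # that were hard to generate against the live test database.
    raise 'connection dropped' if employee_id == :simulate_failure
    0.05
  end
end

# RealBonusStore (not shown) would wrap the live database connection.
# The tests pick their collaborator from an environment switch, so most
# runs stay fast and every tenth run can exercise the real database.
def bonus_store
  ENV['USE_REAL_DB'] ? RealBonusStore.new : FakeBonusStore.new
end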

This strategy turned out to be invaluable as we were developing. Our mock object would throw known database errors that we couldn’t easily generate on the test database, and the database supplied us with real data through a live system. We were surprised that after nine successful test runs using the mock object, we would get a failure with our functional tests using the real database. This happened frequently enough that John said he was going to add this technique of "periodic functional testing in conjunction with automated unit test development" to his repertoire.

As the afternoon wore on, I could tell that something was bothering John. He kept asking questions, and I kept wanting to add more tests, and he would ask, "Are you sure?" I was pleased to be able to add a lot of different kinds of test ideas, even if they were small, almost trivial tests. It felt more like testing than a lot of what we had been doing that afternoon. I knew I was missing something, but it was the end of the day, and we both went home.

Remember the Design!

The next morning we had a retrospective on our experience from the first day. John asked whether I had figured out what the problem was at the end of the previous day. I was hung up on test idea generation, so I was sure I must be missing an important test. He told me that by the end of the day he knew we were in trouble from a design perspective. Sure, we had lots of tests, but there was a "smell" in the code that he wanted me to learn how to catch. Our tests were getting more complex, with a lot of setup required, and the code was becoming more awkward. In fact, we hadn’t realized it at first, but John pointed out that I hadn’t been driving (typing at the computer) at all for a couple of hours that afternoon. By that point, whenever we needed new tests, he had to write them; the code had become so complex that it was difficult for me to add tests myself.

We went over where I had gone wrong. I was only thinking about test idea generation, while John was simultaneously thinking of tests, improving the software design, and continuously improving the testability of the code. Because John was a good teacher, he had deliberately developed a "bad code smell" and tried to guide me into seeing it. But I was so focused on generating testing ideas that I missed it. John deliberately made the unit tests awkward and difficult to implement, but I simply trusted his design and took the complexity for granted as a technical issue that I didn’t understand. I didn’t realize that the fact that the tests were onerous and difficult to set up and code was a "bad test smell."

He explained that if we can add tests simply, it’s a sign of a good code design. Since the tests were awkward and I was dependent on the programmer to add them, I needed to be concerned. As a tester, part of my job when pairing in a TDD situation is to watch for bad test smells. Those bad smells in the tests are symptoms that something is wrong with the code. John pointed out that when it’s difficult to test, it’s time to improve the code. When testability is improved, one by-product is a better design.

Even though I didn’t have John’s architectural experience, he was confident that testers can learn to spot these smells in code. It doesn’t take a lot of programming skill to realize that some unit tests are awkward while others are simple and elegant—it just takes practice.
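
To make the contrast concrete, here’s an invented pair of tests, not code from our project: the same bonus calculation tested once where it takes plain values, and once where it’s buried inside an object that has to be handed extra machinery before it will answer. The second test isn’t wrong, but the amount of setup it needs is the smell.

require 'test/unit'

# Simple, testable design: the calculation takes plain values.
def calculate_bonus(sales, base)
  sales + base
end

class SimpleBonusTest < Test::Unit::TestCase
  def test_bonus
    assert_equal(calculate_bonus(500, 400), 900)
  end
end

# Awkward design: the same calculation buried behind extra structure.
Employee       = Struct.new(:id, :sales, :base)
PayrollContext = Struct.new(:employee, :period, :region, :audit_log)

class BuriedBonusCalculator
  def initialize(context)
    @context = context
  end

  def bonus
    @context.employee.sales + @context.employee.base
  end
end

class AwkwardBonusTest < Test::Unit::TestCase
  def test_bonus
    # All of this setup exists only to reach one assertion; when every
    # new test needs this much machinery, that's the bad test smell.
    employee = Employee.new(42, 500, 400)
    context  = PayrollContext.new(employee, '2006-Q1', 'west', [])
    assert_equal(BuriedBonusCalculator.new(context).bonus, 900)
  end
end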

We continued our development, but I noticed that some parts of our pairing sessions went smoothly while others felt difficult. Later on, I talked with William Wake about the areas where I had struggled as a tester pairing with a developer. He told me that there are two test activities in TDD:

  1. Generative tests, which are written first and drive out the design as the code is created.
  2. Elaborative tests, which are added as the design takes shape, to explore the code and pin down its behavior.

In the generative phase, testing felt much more like programming and design, and wasn’t a natural fit for me as a tester; in this phase, test idea generation could frustrate development. Elaborative testing felt more natural: the sky was the limit for test ideas we could generate and try out, and test failures either exposed weaknesses in the design that we could tighten up or provided ideas for the functional testing we would do later on.

Lessons Learned

As we progressed, we found that it was easier for me to work as a pair-partner in the elaborative phase. When I paired in the generative phase, I had to work on my timing and be sensitive to the creative work required in that phase. Constantly firing out test ideas could severely impede progress. James Bach has likened this process to reading a memo over someone’s shoulder as they type it, pointing out every grammatical error and typo as they type. It’s far more productive to let the writer get those initial ideas out, and edit from there. As John and I worked together, I learned to pick up on cues and listen more. I could recognize what phase we were in by the questions John would ask—often he would just want to bounce ideas off me—or by where we were in development. I learned to ask clarifying questions to be sure that I wasn’t overwhelming development. We used to joke that I would unleash the fire hose of testing ideas too soon, so I’d sometimes clarify when transitioning from the generative phase to the elaborative phase to see whether the programmer was ready to "drink from the test idea fire hose."

Some specific lessons I learned as a tester working with a developer during TDD:

True TDD looks like this:

  1. Write a test.
  2. Write enough code to get the test to pass.
  3. Write a new test.
  4. Write enough code to get that test to pass.
  5. Repeat.
  6. Look at testability in the design. If the new tests you’re writing are awkward, refactor the code, using the automated unit tests as a safety net (a rough sketch of this step follows the list).
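
As a rough illustration of that last step (and not the actual refactorings from our project), here’s what it looks like in code: the production method is restructured while the tests written in steps 1 through 5 stay untouched, and a green bar after the change is the safety net saying the behavior didn’t change.

require 'test/unit'

# Before the refactoring: the version that made the bar green.
def calculate_bonus(sales, base)
  sales + base
end

# After the refactoring (step 6): the same behavior restructured into a
# class, with calculate_bonus redefined to delegate to it. (Ruby simply
# replaces the earlier definition; both are shown here for contrast.)
class BonusCalculator
  def initialize(sales, base)
    @sales = sales
    @base  = base
  end

  def total
    @sales + @base
  end
end

def calculate_bonus(sales, base)
  BonusCalculator.new(sales, base).total
end

# The tests themselves don't change; rerunning them after the
# refactoring and seeing a green bar is the safety net.
class CalculateBonusTest < Test::Unit::TestCase
  def test_sums_the_two_values
    assert_equal(calculate_bonus(500, 400), 900)
  end

  def test_result_is_not_zero
    assert_not_equal(calculate_bonus(500, 400), 0)
  end
end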

It’s difficult to really understand test-driven development until you actually try it. Being trained by an expert programmer helped me to learn how to do TDD properly. Once I had experienced TDD with an expert, I was ready to practice it on my own. Part 2 of this series describes that experience.
