The next morning, John and I paired together at one computer and began work. The first thing we did was model the first feature we needed to develop, breaking it down according to how we might test it. Once we had decided where to begin, we went over the integrated development environment (IDE), because the automated unit tests developed in TDD are usually written in the same language as the application code. John had prepared a demo for me, using an application that was familiar to both of us. He showed me how to run the automated unit tests from within the IDE. In this particular program, we could run a suite of automated unit tests, or we could run them individually by highlighting the test in the text editor and pressing a hotkey combination. Once I was familiar with the IDE, we began to write a test.
The IDE incorporated the xUnit test framework for us; all we had to do at this point was write code for an assertion. (Ron Jeffries says, "An assertion is a program statement that states what should be the case at that point in the program.") We started to translate my testing idea into code for the area where we needed to start, but soon hit a problem: My test was too complex. John helped me design a simpler test, but it felt almost trivial to me. He explained that we needed to start with a test we could actually write, and that we would build more powerful tests as the design worked itself out. For now, we needed a simple test that would call a method and then evaluate the result with an assertion. We wrote the test code, and John stubbed in a name for the method that we hadn’t written yet. He then ran the test, and our xUnit tool in the IDE displayed exactly what we were looking for: a "red bar," meaning that the test had failed. This was correct behavior; the error message stated that the method we were calling was unknown.
Now it was time to move to the next step: creating that missing method, the real production code that we would be using in the program.
This procedure felt a little backward to me at first, but working with John soon made it feel smooth. We wrote a method using the name we had supplied in the test case, and ran the test again. We got a red bar again, but this time not because the method didn’t exist; the assertion failed because we didn’t yet have the code to satisfy it. So we went back to the code, filled in the method to satisfy the assertion, and reran the test. Now we had a green bar. Victory!
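In a Python-flavored xUnit framework, that cycle might be sketched like this (the names `total` and `TotalTest` are illustrative, not the ones from our session):

```python
import unittest

def run(case):
    # Illustrative stand-in for the IDE's red/green bar: run one TestCase
    # class and report whether every test passed.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(case)
    return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()

# First pass: a stub with the name the test case supplied, but no body yet.
def total(a, b):
    pass

class TotalTest(unittest.TestCase):
    def test_total(self):
        self.assertEqual(9, total(4, 5))

red = run(TotalTest)    # red bar: the stub returns None, so the assertion fails

# Second pass: fill in the method to satisfy the assertion and rerun.
def total(a, b):
    return a + b

green = run(TotalTest)  # green bar: the assertion is now satisfied
```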
Now I started driving. I ran the test and got the green bar, then changed the data in the automated unit test and reran it. John then taught me a little trick: He had me write a second assertion that made the same check as the first, but stated the opposite way. For example, if your first test did an assert true, you would pair it with an assert false. He said this was a good practice to try once in a while during development, because sometimes an assertion would pass only because your method was returning something the framework misinterpreted.
Once I was comfortable with what I’d worked on so far, we wrote a new class and related methods. We decided on a simple test for a method we would eventually need to write: reading in values, evaluating them, and returning a result. I dove in with confidence, had the IDE generate the xUnit test framework code for me, and decided what kind of assertion to use. As I was typing, though, I began to freeze up: What would I name this method that I hadn’t written yet? John saw I was struggling and asked what was up. When I explained, he said to just call it whatever came to mind; we would change the name later anyway, using the refactoring functionality of the IDE. This was standard practice, he said; as the design tightened up, we would often find that our first choice had been wrong. I named my to-be-developed method foo, and wrote an assertion that looked something like this:
I hadn’t written method foo yet, but knew that it needed to take two values, sum them, and return the result. The test code above reads like this: "Assert that the value returned from method foo is equal to 900 when I tell it to total 500 and 400." If this assertion fails, I either haven’t written the code to do it yet, or there’s a problem in my method.
This wasn’t easy to do at first. I struggled to think about calling code I hadn’t written yet, and then testing it against something tangible. We knew the method would get more complex as we went on, but this was a simple place to begin coding. I ran the test, it failed, we added the code to satisfy the assertion, and then we added another assertion that looked something like this:
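```python
import unittest

def foo(a, b):
    # The code we had just added to satisfy the first assertion.
    return a + b

class FooTest(unittest.TestCase):
    def test_foo(self):
        self.assertEqual(900, foo(500, 400))
        # The paired, opposite assertion: the total must not equal zero.
        # (Again a sketch in Python's unittest style.)
        self.assertFalse(foo(500, 400) == 0)
```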
It was the opposite of the earlier assertion: the result would not evaluate to zero, so it passed as well, and we were on track. We enhanced the tests to fit our design requirements for that method, altering them to use variables populated with test data instead of hard-coded values, and using setup and teardown methods to prepare and clean up the test data before and after each test run.
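With variables and setup/teardown, the test might have looked something like this (still a sketch in Python's unittest style, with the method still carrying its placeholder name):

```python
import unittest

def foo(a, b):
    # The method under test, still named foo at this point.
    return a + b

class FooTest(unittest.TestCase):
    def setUp(self):
        # Prepare the test data before each test run.
        self.first = 500
        self.second = 400
        self.expected = 900

    def tearDown(self):
        # Clean up the test data after each test run.
        del self.first, self.second, self.expected

    def test_foo_totals_the_values(self):
        result = foo(self.first, self.second)
        self.assertEqual(self.expected, result)
        self.assertFalse(result == 0)
```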
Now that the code behaved the way we needed it to behave and we had a handful of tests, we renamed the method I had struggled with naming. At this point, we had an idea for a better name than foo, so we opened a refactoring browser window and told the IDE to rename all instances of foo to our new method name, calculate_bonus.
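After the rename, the method and its tests might have looked like this sketch; the suite is unchanged except for the new name:

```python
import unittest

def calculate_bonus(first, second):
    # Formerly `foo`; the IDE's refactoring tool renamed every instance.
    return first + second

class CalculateBonusTest(unittest.TestCase):
    def test_calculate_bonus_totals_the_values(self):
        # Rerunning the suite after the rename: a green bar here means the
        # rename didn't break anything.
        result = calculate_bonus(500, 400)
        self.assertEqual(900, result)
        self.assertFalse(result == 0)
```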
In the past, when I had to rename a method after a couple of hours of development, I would use find/replace and then run a few ad hoc unit tests, hoping that I hadn’t broken anything. Sometimes I would miss an instance. Now I saw a useful side effect of test-driven development: automated regression tests. Once John and I renamed our method, we reran our tests. The resulting green bar told us we could move forward with confidence: renaming the method caused no failures, so we knew we hadn’t injected a bug in the process. A suite of tests exercised the method directly, along with the other methods that called it. At the press of a button, the green bar and the passing tests it represented gave me a lot of confidence.
I now understood through experience how our test cases not only drove the design but also served as a safety net for refactoring. I had picked this up from working closely with TDD developers in the past, but until I experienced it firsthand on software I was helping to write, I didn’t fully grasp the power of the technique. John explained that he had gone through a similar watershed when he moved from a procedural language to object-oriented (OO) programming: after trying OO for a while, he had a moment where he really felt he understood it, and when he then tried procedural programming again, he found it difficult at first because he now thought in OO terms. After moving to TDD, practicing the techniques, and finally getting it, he said he had trouble programming any other way. He now feared programming without tests, because the tests not only helped drive the design but gave him much more confidence in the code he was developing. He wasn’t the only developer who mentioned being fearful of developing software without automated unit tests in place.