- Reduced Time in Up-Front Design
- Refactoring Versus Up-Front Design
- Reduced Time Producing Non-Executable Documentation
- Reduced Time in Reading/Updating Materials
- Less Time Wasted Reading Inaccurate Materials
- Reduced Debugging Time
- Reduced Number of Defects
- Reduced "Mulling" Time
- Reduced Amount of Code and Increased Reuse
- In Conclusion
We’ve completed eleven episodes of test-driving the development of a Texas Hold ’Em application. In installment 12, you saw a testimonial from Jerry Jackson about the value of TDD.
It seems as if we’ve only just gotten started on building a poker application, even after eleven programming sessions. Quite a lot is left to be built! But had we undertaken this effort start to finish, without all my verbose commentary, we might have spent half a day completing those eleven sessions’ worth of development work.
We’ve probably invested a little more time doing things the TDD way, as opposed to just slamming out the code. What’s going to happen over the long haul? Writing tests seems like more work; indeed, we’re building a lot more code that we must maintain. But many benefits derive from doing TDD. In this final series installment, I’ll talk about why I think TDD is the best way to approach developing a system.
Reduced Time in Up-Front Design
If we weren’t doing TDD, we’d want to spend more time doing up-front design. Instead of sketching a design path, we would want to think much harder about the system design, putting a lot more detail into it. Choosing a poor up-front design would have costly ramifications, since we wouldn’t have tests to help us recover.
Unfortunately, all that additional up-front design time produces rapidly diminishing returns. It’s simply impossible to create a perfect design for any real-sized system. Can I back up this statement? Consider the article "Engineer Notebook: An Extreme Programming Episode." This piece is a record of two talented developers, Bob Martin and Bob Koss, pairing and using TDD to produce an application that scores a bowling game.
Prior to coding, Bob and Bob brainstormed a design. Sketching a design up front is a good way to get started: a quick UML sketch provides a shared visual understanding of what we think the system should look like. The design that Bob and Bob came up with was simple and straightforward. It was also similar to the design most other people come up with. It included a Game class, a Frame class (there are 10 frames per game), and a Throw class (most frames have two throws, with the tenth frame allowing three throws under certain conditions).
Bob and Bob’s article digs into dozens of pages’ worth of coding the application, complete with tests and discussions about refactoring. When complete, the TDD-built solution exhibited an interesting characteristic: Its design was nowhere near the sketched-out design for the system. The sketch ended up representing far more design than was necessary to solve the problem. Bob and Bob had produced an overblown design.
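To make the contrast concrete, here is a sketch of the kind of minimal design the TDD episode is known for having produced: a single small Game class tracking raw rolls, with no Frame or Throw classes at all. This is my own illustrative reconstruction, not code from Bob and Bob's article; the names Game, roll(), and score() are assumptions for the example.

```java
// Illustrative sketch: the whole bowling scorer as one class.
// Frames and throws exist only as indexing logic inside score(),
// not as separate classes.
public class Game {
    private final int[] rolls = new int[21]; // max rolls in one game
    private int currentRoll = 0;

    // Record the number of pins knocked down by one roll.
    public void roll(int pins) {
        rolls[currentRoll++] = pins;
    }

    // Walk the rolls frame by frame, applying strike/spare bonuses.
    public int score() {
        int score = 0;
        int frameIndex = 0;
        for (int frame = 0; frame < 10; frame++) {
            if (isStrike(frameIndex)) {
                // Strike: 10 plus the next two rolls.
                score += 10 + rolls[frameIndex + 1] + rolls[frameIndex + 2];
                frameIndex += 1;
            } else if (isSpare(frameIndex)) {
                // Spare: 10 plus the next roll.
                score += 10 + rolls[frameIndex + 2];
                frameIndex += 2;
            } else {
                score += rolls[frameIndex] + rolls[frameIndex + 1];
                frameIndex += 2;
            }
        }
        return score;
    }

    private boolean isStrike(int frameIndex) {
        return rolls[frameIndex] == 10;
    }

    private boolean isSpare(int frameIndex) {
        return rolls[frameIndex] + rolls[frameIndex + 1] == 10;
    }
}
```

Note how the tenth frame's extra throws need no special-case class; the rolls array simply holds them, and the bonus lookups in score() reach forward as needed. That collapse of Frame and Throw into plain indexing is exactly the sort of simplification the up-front sketch failed to anticipate.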
Overdesign costs money! Not only does excessive design usually take longer to realize, but it also makes all future efforts to comprehend the system more costly, due to the added complexity. Further, I’ve found that it’s frequently more difficult to refactor an overblown design to accommodate a new feature than it is to refactor a simple one.
The big lesson to me is that Bob Martin, one of the best and most recognized software designers out there, was unable to produce a "perfect" design for something as simple as a bowling game. Bob’s own design books offer a similar lesson: in later books, he admits that designs from his earlier works weren’t as good as they could have been, because they hadn’t been validated by tests. And if Bob Martin can’t produce a perfect design, I doubt that the rest of us can.
It’s still useful to produce a design as a roadmap, but we must realize that efforts to put a lot of detail into a design result in rapidly diminishing returns. TDD is a far better way of driving out the details and shaping the design.