Test-Driven Development from a Conventional Software Testing Perspective, Part 3
The Need for Skepticism
After first learning about TDD from an expert programmer (described in part 1 of this series), and trying it out myself in test automation projects (described in part 2), I was in a position to think reflectively about what I had experienced.
In an industry that seems to look for silver-bullet solutions, it’s important for software testers to be skeptics. We should be skeptical of what we’re testing, but also of the methodologies, processes, and tools on which we rely. Most importantly, as testers we should be skeptical of our own testing practices, and strive for improvement.
One way we can strive to improve our own testing is by learning about other testing ideas. Test-driven development (TDD) is one area from which software testers of all backgrounds and skill levels can learn. I’ve seen some programmers make enormous improvements in the work they deliver by using TDD. But I’ve also seen programmers place too much trust in TDD alone, then be shocked when their software fails basic, manual functional tests. Testers and developers can learn a lot from each other.
TDD is only part of the testing picture—it doesn’t encompass all testing, nor does it replace other testing techniques. Test-driven development requires skill, and as with any other activity, it can easily be performed badly.
I was once lamenting to a TDD developer about how hard it is to write automated functional tests. I complained about dependencies, buggy test code, timing issues, design considerations, how to test the test code, and so on. The developer smiled and said, "All of those things exist in automated unit test development. It’s just as hard to do well as automated functional testing."
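To make that conversation concrete, here is a minimal sketch of a TDD-style unit test in Python's standard unittest framework. The function `parse_price` and its behavior are hypothetical, invented for illustration; in TDD, the tests below would be written first, fail, and then drive the implementation. Note that even at this small scale, the tester's concerns surface: the test code itself can be buggy, and the tests are only as good as the cases someone thought to write.

```python
import unittest

# Hypothetical function under test -- illustrative only,
# not taken from the article.
def parse_price(text):
    """Parse a price string like '$1,299.99' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    # float() raises ValueError for non-numeric input,
    # which the tests below rely on.
    return float(cleaned)

class ParsePriceTest(unittest.TestCase):
    # In TDD, these tests exist before parse_price does;
    # they fail ("red") until the implementation passes them ("green").
    def test_plain_number(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    def test_dollar_sign_and_commas(self):
        self.assertEqual(parse_price("$1,299.99"), 1299.99)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_price("abc")

if __name__ == "__main__":
    unittest.main()
```

Even this toy example hides judgment calls: should negative prices be allowed? What about other currencies? A unit test suite that passes says nothing about the cases it never asks, which is exactly where a skeptical tester's perspective complements the developer's.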