Designing for Testability Saves Time
One good coding practice that helps testers is automated unit tests. In the test automation pyramid model suggested by Mike Cohn, author of Agile Estimating and Planning (see Figure 1), unit tests form the base of a team's regularly run regression suite. These automated unit tests give programmers the shortest possible feedback cycle, because they can check for unexpected side effects immediately.
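To make the base of the pyramid concrete, here is a minimal sketch of such a unit test in Python (the document's tools, such as JUnit, work the same way); the discount_cents function and its pricing rules are hypothetical, invented purely for illustration:

```python
# Hypothetical business rule under unit test; prices are integer cents.
def discount_cents(price_cents, percent):
    """Apply a percentage discount, rejecting invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

def test_typical_discount():
    assert discount_cents(8000, 25) == 6000  # $80.00 less 25% is $60.00

def test_zero_discount_leaves_price_unchanged():
    assert discount_cents(1999, 0) == 1999

def test_invalid_percent_is_rejected():
    try:
        discount_cents(1000, 150)
    except ValueError:
        return
    raise AssertionError("expected ValueError for percent > 100")

# Run on every check-in; a failure points straight at the side effect.
test_typical_discount()
test_zero_discount_leaves_price_unchanged()
test_invalid_percent_is_rejected()
```

Because tests like these run in milliseconds, a developer learns about a broken rule before the change ever reaches the testers.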
Figure 1 Test automation pyramid.
In his book xUnit Test Patterns: Refactoring Test Code, Gerard Meszaros talks about this feedback loop, pointing out how it gives programmers confidence in the software they write. If a team has a solid set of automated unit tests that run every time a developer checks in her code, I don't have to dread the word "refactor." Without those tests, I've seen well-intentioned developers try to clean up the code but end up breaking more than they fixed, because no automated tests told them otherwise; the unexpected side effects were found only by trial and error. When good tests are in place, I don't have to cringe, but can instead calmly ask what area of the application the developer is making better. Knowing that lets me do exploratory testing to check the functionality, and perhaps rerun some of the functional tests to make sure that they all pass. If the developers pay attention to automated unit tests, I don't need to plan a full regression test in that area.
If programmers practice test-driven development (TDD), they're also designing for testability. TDD helps testers immensely when they work with the development team to automate functional tests. For example, separating the presentation layer from the business logic allows more robust automation of the business logic, because the tests don't have to go through the ever-changing (and often brittle) UI layer.
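As a sketch of that separation, the business logic below lives in a plain function that tests can call directly, with the presentation layer reduced to a thin formatting wrapper; the shipping rules and function names are hypothetical, chosen only to illustrate the layering:

```python
# Hypothetical business rule: free standard shipping over $50,
# otherwise a flat rate; express is always a flat rate.
def shipping_cost_cents(order_total_cents, express):
    if express:
        return 1500
    return 0 if order_total_cents >= 5000 else 500

# The presentation layer only formats what the logic returns,
# so it stays trivial and rarely needs automated coverage.
def shipping_label(order_total_cents, express):
    cost = shipping_cost_cents(order_total_cents, express)
    return f"Shipping: ${cost / 100:.2f}"

# Tests exercise the logic directly -- no brittle UI automation needed.
assert shipping_cost_cents(6000, express=False) == 0
assert shipping_cost_cents(4000, express=False) == 500
assert shipping_cost_cents(4000, express=True) == 1500
assert shipping_label(6000, express=False) == "Shipping: $0.00"
```

When the UI changes, only the thin wrapper is affected; the automated tests against the business rules keep running unchanged.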
When developers think about testability, they can make a world of difference to the ease of automating functional tests. In one organization, we created new automated functional tests but spent a lot of time updating them to reflect new GUI objects. The developers were using the default object references, which changed every time a new object was inserted. Although they helpfully promised to start naming new objects they created, that didn't help us testers much, because many existing GUI objects still changed automatically. Metrics showed that we were spending about an hour a day updating tests. When the developers learned that statistic, they created a story to address the issue. It took them a couple of hours in total and saved us many.
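The failure mode can be sketched abstractly: a test that locates a control by its default, positional reference breaks whenever a new control is inserted, while one keyed on a stable name does not. None of the widget names below come from a real GUI toolkit; they are stand-ins for illustration:

```python
# Version 1 of a form: the test locates controls by position.
form_v1 = ["Save", "Cancel"]

# Developers insert a Help button first; every position shifts.
form_v2 = ["Help", "Save", "Cancel"]

def control_at(form, index):
    return form[index]

assert control_at(form_v1, 0) == "Save"   # passes against version 1
assert control_at(form_v2, 0) != "Save"   # same lookup now finds "Help"

# Giving each control a stable, explicit name keeps the lookup valid
# across versions, so the automated tests don't need daily repair.
form_v2_named = {"helpButton": "Help", "saveButton": "Save",
                 "cancelButton": "Cancel"}
assert form_v2_named["saveButton"] == "Save"
```

The developers' fix in the story above amounted to moving from the first lookup style to the second.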
When testers and programmers collaborate and understand the tests' purpose and coverage, the test suite has less duplication and fewer gaps. As a tester, I've worked with programmers to extend their unit tests to be more complete. The more I understand what the programmers have unit-tested, the less time I need to spend creating functional automated tests.
For example, the programmers may use JUnit for their unit testing, while the team chooses Fit for its API-level tests. The testers, working alongside the customers, can write the acceptance tests in spreadsheets or on a wiki with FitNesse, and give them to the programmers to drive acceptance test-driven development (ATDD). The programmers then write the fixtures that make the tests pass, and those tests become part of the regression suite. When this collaboration occurs, the functional tests are completed early, leaving time for the testers to do exploratory testing on edge conditions or bigger-picture workflows, giving more confidence in the correctness of the application.
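The shape of a Fit-style decision table and its fixture can be sketched as follows. Real Fit and FitNesse fixtures are typically Java classes wired to wiki tables; this is a hand-rolled Python approximation, and the loyalty rule and all names in it are hypothetical:

```python
# Hypothetical business rule that the customers' acceptance table exercises.
def loyalty_discount_percent(years_as_customer, order_total_cents):
    if years_as_customer >= 5 and order_total_cents >= 10000:
        return 10
    if years_as_customer >= 5:
        return 5
    return 0

# Each row mirrors one line of the customer-written table:
# years as customer | order total (cents) | expected discount %
acceptance_table = [
    (6, 20000, 10),
    (6, 5000, 5),
    (1, 20000, 0),
]

def run_table(table):
    """The 'fixture': feed each row to the code and record pass/fail."""
    return [loyalty_discount_percent(years, total) == expected
            for years, total, expected in table]

# All rows green means the story's acceptance criteria are met.
assert run_table(acceptance_table) == [True, True, True]
```

Because the customers own the table and the programmers own the fixture, a failing row is a shared, unambiguous signal rather than a bug report written after the fact.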