In this chapter, we examined a number of enterprise components and patterns, and we outlined the testing techniques and approaches that can be used to ensure that we develop a growing library of tests, both unit and functional. Different approaches were discussed to handle some common issues that get in the way of enterprise testing, such as expecting a heavyweight environment or a database with specific data.
The most important lesson we hope to impart is to take a practical, pragmatic approach to testing. We should not concern ourselves with coverage rates or with testing trivial code; every test should target potential issues in the code rather than basic Java semantics. Breaking down an enterprise component into more digestible pieces is a simple process, provided it is tackled one goal at a time rather than by converting the whole thing in one go. The value of our tests lies in their evolutionary nature, and that value is expected to increase over time. For that reason, it is vital not to be discouraged by starting with too small a set of tests.
In addition to allowing tests to grow organically, diversifying the types of tests is also important. A project with nothing but unit tests offers little confidence about how the code will behave in a production environment. Likewise, a suite consisting only of functional or integration tests makes it very hard to debug failures or isolate specific issues.
Furthermore, the order in which we write our tests is flexible. When tracking down a bug in a complex unfamiliar system, for example, a functional test might be easier to get us started. We can then break it down into unit tests as we focus on the specifics of the bug. When developing new functionality, on the other hand, starting off with unit tests will help ensure the component is designed correctly and lends itself to greater testability.
Many managers balk at the suggestion of spending two weeks writing tests. Deadlines have to be met, clients need to see new functionality, and a bunch of tests that prove everything works just as it currently does is a very hard sell. This isn't surprising and is in fact quite sensible. If testing is to be a goal, it needs to become part of development, not a separate concern that is tagged on at the end. Testing is just another facet of development, and it's crucial that developers adopt testing as part of both debugging and developing new functionality.
When it comes to integrating tests into an existing large code base, we should keep these ideas in mind. Spending weeks writing tests is unsatisfying and highly susceptible to test rot, where tests are written once and then forgotten and neglected. A far better approach is to introduce new tests as needed and let them grow organically over time. The history of test growth tells a far more compelling story than any sudden testing spike.
One of the most powerful tools available to us to promote testability is refactoring. Code that is easily testable is code that is better designed. It has more versatile and well-specified contracts and fewer dependencies—all of which happen as a side effect of increased testability.
A concrete application of these principles is when we find ourselves repeating a time-consuming task just to reproduce a bug, such as testing HttpSession and other similar objects. Stepping back and thinking about the functionality we're testing, it becomes obvious that all we care about, for example, is a Map implementation, so spending the 20 minutes to write an abstraction will save us hours of restarts and redeployments.
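As a sketch of this idea, the hypothetical `CartSummary` component below depends on a plain `Map<String, Object>` instead of `HttpSession` directly. The class name, the `"itemCount"` attribute key, and the methods are illustrative assumptions, not code from this chapter; the point is that a `HashMap` can stand in for the session in a test, with no container restart required.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical component that might otherwise depend on HttpSession.
// Since it only reads and writes named attributes, a Map abstraction
// captures everything it needs, making it trivially testable.
public class CartSummary {
    private final Map<String, Object> session;

    public CartSummary(Map<String, Object> session) {
        this.session = session;
    }

    // Returns the stored item count, defaulting to 0 if absent.
    public int itemCount() {
        Object count = session.get("itemCount");
        return count instanceof Integer ? (Integer) count : 0;
    }

    // Increments the stored item count by n.
    public void addItems(int n) {
        session.put("itemCount", itemCount() + n);
    }

    public static void main(String[] args) {
        // In a test, a HashMap stands in for the real session.
        Map<String, Object> fakeSession = new HashMap<>();
        CartSummary cart = new CartSummary(fakeSession);
        cart.addItems(3);
        cart.addItems(2);
        System.out.println(cart.itemCount()); // prints 5
    }
}
```

In production code, the same class would be handed an adapter backed by the real `HttpSession`, so the abstraction costs nothing at runtime while removing the container from the test loop.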
Ultimately, in an ideal world, development and testing go hand in hand. We must always think of them concurrently. Development testing's biggest payoffs manifest themselves only when it becomes an ingrained habit and an approach to development as well as maintenance.