A vital implementation of acceptance test-driven development includes at least two spoonfuls of wishful thinking. In the example at Major International Airport Corp., we saw Tony implement the tests without any prior knowledge of the details of the parking cost calculator.
Instead, Tony applied wishful thinking to automate the examples that the team had identified in the workshop. He avoided considering the available user interface; instead, he used the interface he wished he had. The examples clearly stated that different parking durations lead to different parking costs. Entry and exit dates did not play a role when the examples were written down with the business expert, so Tony didn't clutter up his examples with these unnecessary details.
Instead of programming against a real user interface, abstract from the GUI to the business cases behind your examples. As Tony demonstrated, you could have any interface for your tests. Dale Emery recommended writing your tests as if you already have the interface you wish you had. Use the most readable interface to automate your examples. If you hook your automation code directly to the application under test, you may find that you have to write a lot of code to get the application automated. If you listen to your tests [FP09], you will find that your application needs a different interface, at least for your automated tests.
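To make this concrete, here is a minimal sketch of what a wished-for test interface might look like. All names and the $18-per-day valet rate are invented for illustration; they are not the team's actual code. The point is that the test speaks only in durations and costs, with no entry or exit dates in sight:

```python
# Hypothetical sketch: a test written against the interface we wish
# we had -- durations in, costs out. ParkingCostCalculator and its
# rate are assumptions made up for this illustration.

class ParkingCostCalculator:
    """Minimal stand-in so the example runs. The $18/day valet
    rate is invented, not taken from the team's examples."""

    def cost_for(self, lot, duration_in_days):
        daily_rates = {"Valet": 18.0}
        return daily_rates[lot] * duration_in_days


def test_valet_parking_for_three_days():
    # The test expresses the business case directly: a duration
    # maps to a cost, without mentioning entry or exit dates.
    calculator = ParkingCostCalculator()
    assert calculator.cost_for("Valet", duration_in_days=3) == 54.0


test_valet_parking_for_three_days()
```

Notice how readable the test stays when the automation interface matches the language of the examples rather than the widgets of the GUI.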
Wishful thinking is especially powerful if you can apply it before any code is written. At the time you start implementing your production code, you can discover the interface your application needs in order to be testable. In our example, we saw that Tony and Alex started their work in parallel. The interface that Alex designed is sufficient for the discussed examples, but the inability to input parking durations directly forces the need for more test automation code.
The translation between parking durations and entry and exit dates and times is simple in this example. You may have noticed that all the examples start on the same date. Most testers and programmers faced with such hard-coded values feel uneasy about them. While it takes little effort to generate entry and exit dates on the fly as the tests execute, doing so increases the amount and complexity of the support code. As a software developer, I would love to write unit tests for this support code and drive its implementation using test-driven development.
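A sketch of such support code might look like the following. The function name and the fixed entry date are assumptions for illustration only; the hard-coded date mirrors the fixed start date in the examples, which is exactly the detail that makes people uneasy:

```python
# Hypothetical support code: translate a parking duration (what the
# examples talk about) into the entry and exit timestamps the user
# interface expects. Names and the fixed date are invented here.
from datetime import datetime, timedelta

# Mirrors the hard-coded start date shared by all the examples.
FIXED_ENTRY = datetime(2012, 2, 1, 10, 0)

def entry_and_exit_for(duration):
    """Map a timedelta duration to an (entry, exit) pair."""
    return FIXED_ENTRY, FIXED_ENTRY + duration


entry, exit_time = entry_and_exit_for(timedelta(days=3, hours=2))
```

Even this tiny translation layer is logic that can be wrong, which is why it deserves its own unit tests, driven test-first.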
This translation between durations and entry and exit dates and times is an early sign that something might be wrong. Maybe the user interface is wrong. But as a customer at an airport, I would probably like to enter my departure and arrival dates and times. So, the user interface seems to be correct, given the goals of the potential customers.
Another option could be that the tests point to a missing separation of concerns. Currently, the calculator first computes the parking duration and then the parking costs. The cost calculation could be extracted from the code so that it becomes testable separately, without the need to drive the examples through the user interface.
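Such a separation might be sketched as two small functions, one per concern. The function names and the $18-per-day rate are invented for illustration, not taken from the team's calculator:

```python
# Hypothetical separation of concerns: deriving the duration from
# entry/exit times is one responsibility, pricing a duration is
# another. The $18/day rate is invented for this sketch.
from datetime import datetime

def parking_duration_in_days(entry, exit_time):
    """Concern 1: turn entry and exit timestamps into a duration."""
    return (exit_time - entry).days

def valet_cost(days):
    """Concern 2: price a duration. Testable without any UI."""
    return 18.0 * days


# The cost rule can now be exercised directly, without driving
# the examples through the user interface:
days = parking_duration_in_days(datetime(2012, 2, 1),
                                datetime(2012, 2, 4))
assert valet_cost(days) == 54.0
```

With the cost rule isolated, the acceptance tests can target it directly, and only a thin layer of tests needs to cover the date-to-duration translation at the UI boundary.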
In the end, your tests make suggestions for your interface design. This applies to unit tests as well as acceptance tests. When testers and programmers work in isolation, the interface that emerges is often more problematic for test automation than when both work on the problem together.