Lean-Agile Acceptance Test-Driven Development: An Interview with Ken Pugh
There are a handful of books coming out on Acceptance Test Driven Development (ATDD). Yet the topic is so new that there have been very few long-term, sustained results. How could anyone possibly be qualified to write about the subject?
It turns out ATDD might not be as new as you thought. Bertrand Meyer coined the term "design by contract" in 1986. And while he might not have been using the right magic words, consultant Ken Pugh has been incorporating the principles and values of ATDD as part of the continual improvement of his programming practice since he started programming in the 1960s.
Ken wrote his first program in 1967. He worked as an electrical engineer and programmer before starting his own business doing custom programming and consulting. Along the way, he explored many methods and methodologies by putting them into practice and experimenting. Since the turn of the millennium, he has spent his time helping organizations while observing what is working for them ... and what is not.
Because Ken's book draws on those years of experience, we thought InformIT readers might be interested in its backstory: how Ken got involved with the intersection of lean and agile, what that intersection looks like, how it differs from some of those older ideas—and just what exactly he means by this mythical term "ATDD."
Matt Heusser: Thank you for finding time for this interview, Ken. Let's start with Agile. Can you give us the five-minute talk on how testers can adjust to agile? I know, for example, that lots of organizations revert to a mini-waterfall, crunching the "testing" into the end and "blowing" iteration deadlines. How can agile teams get around that problem?
Ken Pugh: The answer is not to create a testing story for every development story; that just lets developers be "done" with their story while the testing story gets put off until the next iteration. A better answer is to separate the authoring of a test from the execution of that test. If the authoring is done before the implementation is finished, then the developer can execute the test to determine whether he or she is done. This is a mindset switch for the entire team—developers doing testing, and testers creating tests that emphasize the functionality rather than the GUI. And it will take effort to make the switch, since testers will need time to catch up and reverse the sequence.
There will still be testing after a story is deemed done, such as usability and performance. And results from those tests may involve creating new stories.
Having tests first eliminates a lot of delay and looping. With tests last, there is a delay between developers finishing coding and testers testing it. When testers find defects, the issue has to loop back to the developer. Getting rid of delays follows lean principles.
Matt: So let's talk a little bit about Lean. What is it, and how did you start finding out about it?
Ken: I first discovered lean software development when I met Mary and Tom Poppendieck at a consultants' conference in Newport, Oregon. Part of the reason I joined Net Objectives was their emphasis on lean consulting.
Lean has seven principles which are nicely described in Mary and Tom’s books, so I won’t repeat them here. One major emphasis is eliminating waste and delivering quickly. You examine the value stream—the process by which you turn ideas into deliverables. Then you cut out delays. For example, putting the authoring of tests first eliminates delays.
Matt: So we have lean over here and agile over there. How do they work together? What does lean-agile testing look like?
Ken: One of the agile principles is that “working software is the primary measure of progress.” How do you know it works? It at least has to pass the tests. Getting the tests done without delay is the lean part.
Matt: Your book is not just on lean-agile testing—it is specifically on lean-agile Acceptance Test Driven Development (ATDD). That's quite a mouthful. Can you give us the five-minute overview of ATDD?
Ken: The first thing is that ATDD is not just about testing. It’s about communicating the requirements. And it’s not just for testers. It’s for the entire team—the customer, developer, and tester units. I refer to this group as the triad. Requirements and tests go together. You can’t have one without the other. Acceptance tests clarify the requirements and provide a check on when the requirement has been implemented.

The triad collaborates at each phase in the process—the initial creation of features, the breakdown into stories, and the development of scenarios. When features are initially created, acceptance criteria should be associated with them. These are general plans for how the feature will be tested. When the stories are created, more specific acceptance criteria are developed for them. Finally, when specific scenarios are established, then acceptance tests are created.

A story is not done until the acceptance tests pass for all scenarios. They provide the doneness measure for the story. If an acceptance test is created for a story after the story is declared done, then the test really represents a new requirement.
Matt: So do these acceptance tests have to be automated?
Ken: I suggest a stepped approach. First, you create the tests themselves to make sure that the requirement is well understood. Then you decide what you are going to automate and what framework to use—an acceptance-test framework such as FitNesse or a unit-test framework such as JUnit. Depending on the framework and how you structure the tests, the costs of automation can be relatively low. If you automate the tests, they can be used as regression tests to ensure that changes to the system do not cause unwanted effects. There may be some tests, particularly those involving the GUI, that don’t justify the cost, since GUI-based tests tend to be fragile. If you automate the tests in a framework that relates them to the requirement, then you have what many call an executable specification. This specification not only describes what a system does, but also provides the tests that demonstrate that the implementation actually does that.
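To make the idea of an executable specification concrete, here is a minimal sketch of a Fit/FitNesse column fixture in Java. The class names, the wiki table, and the 5%-discount-on-$500-orders rule are assumptions made up for this illustration rather than examples from the book:

```java
import fit.ColumnFixture;

// Hypothetical executable specification: "Orders of $500 or more receive a
// 5% discount."  In the FitNesse wiki page, the requirement is written as a
// table that drives this fixture:
//
//   |OrderDiscountFixture|
//   |orderTotal|discount()|
//   |499.99    |0.00      |
//   |500.00    |25.00     |
//   |1200.00   |60.00     |
//
// For each row, Fit sets the public field and compares the table's expected
// value with whatever the public method returns.
public class OrderDiscountFixture extends ColumnFixture {

    public double orderTotal;   // input column, filled in from each table row

    public double discount() {  // checked column, compared against the table
        // In a real project this would call the production business-rule
        // layer; a stand-in calculation is inlined here to keep the sketch
        // self-contained.
        return orderTotal >= 500.00 ? orderTotal * 0.05 : 0.00;
    }
}
```

Because the table lives with the requirement and the fixture ties it to the code, the same artifact can serve as specification, acceptance test, and regression test.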
Matt: Let's say we do choose to automate the tests—say it's a web-based application. Should we automate the driving of the GUI, the actual browser itself, or "just" test the business logic behind the GUI? It seems that you lean away from driving the GUI, but what kind of factors should we consider when making that decision?
Ken: Just beneath the GUI is usually the most effective place to automate. The tests will run faster than GUI-based tests; in many cases, much faster. They can help drive the application to be more testable by requiring test points that can be accessed for testing. If the GUI uses callbacks to the server, such as Ajax calls, the results of those calls can be tested. If you can run tests for business rules at this level, then the business rules can be kept out of the GUI.
Automated GUI-based tests can be fragile. They can also be lengthy. An example that I give in my book and repeat on the web shows how checking that the proper discount is given for an order can require many more test steps if performed through the GUI than if done at a lower level. You still need some GUI tests to ensure that the GUI is properly connected to the layer beneath. They just won’t be as extensive.
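As a rough sketch of that difference, here is what checking the same hypothetical discount rule just beneath the GUI might look like in JUnit. The PricingService class is an assumed stand-in for the business-rule layer, and the browser steps a GUI-driven test would need are listed only as comments:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountBelowTheGuiTest {

    // Stand-in for the business-rule layer that the GUI calls into
    // (hypothetical; reuses the 5%-on-$500 rule from the fixture sketch above).
    static class PricingService {
        double discountFor(double orderTotal) {
            return orderTotal >= 500.00 ? orderTotal * 0.05 : 0.00;
        }
    }

    // A GUI-driven version of this check would have to open the browser,
    // log in, search the catalog, add enough items to reach $500, proceed
    // to checkout, and then scrape the discount field off the page.
    // Below the GUI, it is a single call against the same business rule.
    @Test
    public void ordersOfFiveHundredDollarsOrMoreGetFivePercentDiscount() {
        PricingService pricing = new PricingService();
        assertEquals(25.00, pricing.discountFor(500.00), 0.001);
        assertEquals(0.00, pricing.discountFor(499.99), 0.001);
    }
}
```

A thin set of GUI tests would still confirm that the checkout page actually displays the value this layer computes.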
Matt: Tell us more about these acceptance tests. Experts disagree on the "right" number of tests (Pettichord, Pg 27), so how can we know when we have "enough" acceptance tests?
Ken: Acceptance tests are about understanding the requirement and its scenarios. If a test does not pass, then a customer requirement has not been met. They are necessary, but not sufficient. They emphasize what has to go right.
Tests are described in business domain terms. For example, you might have a speed in miles per hour or an email address. They are not described in programming terms, such as an int or a string. If you use corresponding objects in the implementation, then those objects can check whether the speed is non-negative or the email address is in a valid format. Other tests should not have to check whether values of those types are valid. A test would only specify that an input should be an email address or a speed. This cuts down on the number of tests from an acceptance point of view.
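For example, a small self-validating value type along these lines (the EmailAddress class and its format check are hypothetical, not taken from the book) keeps validity in one place, so acceptance tests can simply say "an email address":

```java
// Hypothetical domain value type: validity is checked once, at construction,
// so acceptance tests can speak in terms of "an email address" without each
// test re-proving that the format is correct.
public final class EmailAddress {

    private final String value;

    public EmailAddress(String value) {
        // A deliberately simple structural check; a real project would decide
        // for itself what "valid" means.
        if (value == null || !value.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+")) {
            throw new IllegalArgumentException("Not a valid email address: " + value);
        }
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}
```

A speed type could enforce being non-negative in the same way, and any acceptance test that mentions a speed or an email address inherits those checks for free.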
Exploratory testing, which is manual, involves testing from a different perspective–concurrent test design and execution. It needs to be a part of the development process to catch issues which the acceptance tests do not. Part of the emphasis in exploratory testing is on what can go wrong. For the examples of speed and email address, you might enter many possibilities in a GUI field to see if there are any side effects caused by the GUI.
Matt: Can you tell us a bit about how these automated tests fit into the whole spectrum of software tests—unit tests, exploratory testing, attribute testing—things like security and scalability testing?
Ken: On one side of the spectrum are tests that can be created prior to implementation. For example, a scenario test can provide values that are used by the unit tests. Acceptance tests are created prior to implementation and are implementation independent. Unit tests are created during implementation and are tightly implementation dependent. Acceptance tests are tests for the business intent of a system. Unit tests drive the underlying design of the code. Acceptance tests can form a context for the unit tests. Unit tests are definitely automated; acceptance tests usually are automated.
On the other side of the spectrum are the tests that cannot be done until some implementation has been developed. Some attributes such as performance can be tested automatically with tools. Exploratory testing, as already described, is manual. Usability testing requires a human. You can’t ask a computer whether a GUI is usable.
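Going back to the first side of the spectrum, here is a sketch of how an acceptance-test scenario's values can feed the unit tests for the class the developers design while implementing the story. The ShippingRule class and the $100 free-shipping threshold are invented for the illustration:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShippingRuleTests {

    // Values drawn from a hypothetical acceptance-test scenario:
    // "Orders of $100 or more ship free; otherwise shipping is $7.95."
    static final double FREE_SHIPPING_THRESHOLD = 100.00;
    static final double STANDARD_SHIPPING = 7.95;

    // Stand-in for the class developers design while implementing the story.
    static class ShippingRule {
        double shippingFor(double orderTotal) {
            return orderTotal >= FREE_SHIPPING_THRESHOLD ? 0.00 : STANDARD_SHIPPING;
        }
    }

    // Unit tests are tied to this particular class, but their expected values
    // come straight from the scenario, so the acceptance test provides the
    // context in which these finer-grained tests make sense.
    @Test
    public void orderJustUnderTheThresholdPaysStandardShipping() {
        assertEquals(STANDARD_SHIPPING, new ShippingRule().shippingFor(99.99), 0.001);
    }

    @Test
    public void orderAtTheThresholdShipsFree() {
        assertEquals(0.00, new ShippingRule().shippingFor(100.00), 0.001);
    }
}
```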
Matt: Thank you for participating. Where can we go for more?
Ken: I’m adding more information to my website on the subject: http://www.atdd.biz. You can also write me at ken.pugh@netobjectives.com