
Feature-Driven Development and Extreme Programming

Date: Mar 22, 2002



Superficial similarities between Feature-Driven Development (FDD) and Extreme Programming (XP) hide a number of very important differences between the two processes. This article provides a short comparison of FDD and XP.

Introduction

Software development process is news again. Extreme Programming (XP) is the cause of much debate on a number of popular discussion forums. Feature-Driven Development (FDD), pioneered by Jeff De Luca and Peter Coad, is another process attracting rapidly growing interest. Superficial similarities between FDD and XP hide a number of very important differences between the two processes. This article provides a short comparison of the two.

Feature-Driven Development is introduced in Chapter 6 of Java Modeling in Color with UML: Enterprise Components and Process [Coad]. The chapter is also available in electronic form at www.togethersoft.com/jmcu and in Together's online help (Users Guide—Part 1: Modeling with Together and 2. Introductions to Modeling).

Software development process is an emotional issue, so here are a few key quotes from the chapter to keep in mind when reading this article:

For enterprise-component modeling to be successful, it must live and breathe within a larger context, a software development process.

We think most process initiatives are silly. Well-intentioned managers and teams get so wrapped up in executing process that they forget that they are being paid for results, not process execution.

No amount of process over-specification will make up for bad people. Far better: Staff your project with good people, do whatever it takes to keep them happy, and use simple, well-bounded processes to guide them along the way.

FDD Summary

For those not familiar with FDD, I'll try to summarize in a few pictures and paragraphs (please do refer to [Coad] for a more detailed introduction). Those who are familiar with FDD might want to skip to the comparison.

FDD is a model-driven, short-iteration process. It begins with establishing an overall model shape. Then it continues with a series of two-week "design by feature, build by feature" iterations. The features are small, "useful in the eyes of the client" results. FDD consists of five processes or activities (see Figure 1), as discussed in the following sections.

Figure 1 The five processes of FDD.

1. Develop an Overall Model

For the first activity, domain and development members work together, under the guidance of an experienced component/object modeler (chief architect). Domain members present an initial high-level, highlights-only walkthrough of the scope of the system and its context. The domain and development members produce a skeletal model, the very beginnings of that which is to follow. Then the domain members present more detailed walkthroughs. Each time, the domain and development members work in small sub-teams (with guidance from the chief architect), present sub-team results, and merge the results into a common model (again with guidance from the chief architect), adjusting model shape along the way.

2. Build a Features List

Using the knowledge gathered during the initial modeling, the team next constructs as comprehensive a list of features as they can. A feature is a small piece of client-valued function expressed in the following form:

<action> <result> <object>

For example,

Calculate the total of a sale

Existing requirements documents, such as use cases or functional specs, are also used as input. Where they don't exist, the team notes features informally during the first activity. Features are clustered into sets by related function, and for large systems, these feature sets are themselves grouped into major feature sets.
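To make the features list concrete in code terms, here is a minimal Java sketch of one way a feature and a feature set might be represented. This is purely illustrative; FDD does not prescribe any such classes, and the names here are my own:

import java.util.ArrayList;
import java.util.List;

// A feature: a small piece of client-valued function, named as
// <action> <result> <object>, e.g. "Calculate the total of a sale".
class Feature {
    private final String action;  // e.g. "Calculate"
    private final String result;  // e.g. "the total"
    private final String object;  // e.g. "of a sale"

    Feature(String action, String result, String object) {
        this.action = action;
        this.result = result;
        this.object = object;
    }

    public String toString() {
        return action + " " + result + " " + object;
    }
}

// Features are clustered by related function into feature sets; on large
// systems, feature sets are grouped again into major feature sets.
class FeatureSet {
    private final String name;
    private final List<Feature> features = new ArrayList<Feature>();

    FeatureSet(String name) {
        this.name = name;
    }

    void add(Feature feature) {
        features.add(feature);
    }

    public String toString() {
        return name + ": " + features;
    }
}

public class FeaturesListDemo {
    public static void main(String[] args) {
        FeatureSet selling = new FeatureSet("Selling products");
        selling.add(new Feature("Calculate", "the total", "of a sale"));
        selling.add(new Feature("Calculate", "the tax", "for a sale"));
        // Prints: Selling products: [Calculate the total of a sale, Calculate the tax for a sale]
        System.out.println(selling);
    }
}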

3. Plan by Feature

The third activity is to sequence the feature sets or major feature sets (depending on the size of the system) into a high-level plan and assign them to chief programmers. Developers are also assigned to own particular classes identified in the overall object model.

4–5. Design by Feature / Build by Feature

Activities four and five are the development engine room. A chief programmer selects a small group of features to develop over the next 1–2 weeks and then executes the "Design by Feature (DBF)" and "Build by Feature (BBF)" activities. He or she identifies the classes likely to be involved, and the corresponding class owners become the feature team for this iteration (see Figure 2). This feature team works out detailed sequence diagrams for the features. Then the class owners write class and method prologs. Before moving into the BBF activity, the team conducts a design inspection. In the BBF activity, the class owners add the actual code for their classes, unit test, integrate, and hold a code inspection. Once the chief programmer is satisfied, the completed features are promoted to the main build. It's common for each chief programmer to be running 2–3 feature teams concurrently and for class owners to be members of 2–3 feature teams at any point in time.

Figure 2 Feature teams are formed dynamically as needed.
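As a rough sketch of how a feature team falls out of class ownership (my own illustration, not part of the FDD materials), the team for an iteration is simply the set of owners of the classes the chief programmer expects the selected features to touch. The class and developer names below are invented:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FeatureTeamDemo {
    public static void main(String[] args) {
        // Each class in the overall model has exactly one owner.
        Map<String, String> classOwner = new HashMap<String, String>();
        classOwner.put("Sale", "Anna");
        classOwner.put("SaleLineItem", "Boris");
        classOwner.put("Product", "Chen");
        classOwner.put("CashPayment", "Anna");

        // Classes the chief programmer expects this group of features to involve.
        List<String> classesInvolved =
                Arrays.asList("Sale", "SaleLineItem", "Product");

        // The owners of those classes become the feature team for this
        // design-by-feature / build-by-feature iteration.
        Set<String> featureTeam = new HashSet<String>();
        for (String cls : classesInvolved) {
            featureTeam.add(classOwner.get(cls));
        }
        // e.g. Anna, Boris, Chen (set order is not guaranteed)
        System.out.println("Feature team: " + featureTeam);
    }
}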

Track by Feature

With FDD, we can track and report progress with surprising accuracy. We begin by assigning a percentage weighting to each step in a DBF/BBF iteration (see Figure 3).

Figure 3 A set of sharp milestones and percentage weightings.

The chief programmers indicate when each step has been completed for each feature they're developing. Now we can easily see how much of a particular feature has been completed. Simply posting the list of features on a wall, color-coded green for "complete," blue for "in progress," and red for "requiring attention" provides a good visual feel for overall progress, with the ability to "zoom in" to read the detail by simply walking closer to the wall (see Figure 4).

Figure 4 Part of the Features wall chart.

Straightforward tools then roll these percentages up to the feature set and major feature set levels, providing highly accurate, color-coded progress reports for development leads, project managers, project sponsors, and upper management (see Figures 5 and 6).

Figure 5 Project status report.

Figure 6 Key to project status report.
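The arithmetic behind these roll-ups is simple. The following Java sketch is illustrative only: the milestone names and percentage weightings stand in for Figure 3 and are assumptions on my part, and the roll-up shown here is a plain average across features, which a real project might well refine:

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class TrackByFeatureDemo {
    // Illustrative milestone weightings (stand-ins for Figure 3); each
    // completed milestone contributes its weight to a feature's percent complete.
    static final Map<String, Integer> WEIGHTS = new LinkedHashMap<String, Integer>();
    static {
        WEIGHTS.put("Domain walkthrough", 1);
        WEIGHTS.put("Design", 40);
        WEIGHTS.put("Design inspection", 3);
        WEIGHTS.put("Code", 45);
        WEIGHTS.put("Code inspection", 10);
        WEIGHTS.put("Promote to build", 1);
    }

    // Percent complete for one feature, given the milestones its
    // chief programmer has reported as done.
    static int featurePercent(Iterable<String> milestonesDone) {
        int pct = 0;
        for (String milestone : milestonesDone) {
            pct += WEIGHTS.get(milestone);
        }
        return pct;
    }

    // Roll-up: here, a feature set's percent complete is simply the
    // average of its features' percentages.
    static int featureSetPercent(int[] featurePercents) {
        int sum = 0;
        for (int p : featurePercents) {
            sum += p;
        }
        return featurePercents.length == 0 ? 0 : sum / featurePercents.length;
    }

    public static void main(String[] args) {
        int f1 = featurePercent(Arrays.asList(
                "Domain walkthrough", "Design", "Design inspection"));   // 44
        int f2 = featurePercent(Arrays.asList(
                "Domain walkthrough", "Design", "Design inspection",
                "Code", "Code inspection", "Promote to build"));          // 100
        // Prints: Feature set: 72%
        System.out.println("Feature set: " + featureSetPercent(new int[]{f1, f2}) + "%");
    }
}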

Graphing these figures over time reveals trends and progress rates (see Figure 7).

Figure 7 Graph of features complete against time.

Short Comparison with XP

Reading the introductions to FDD and XP reveals many similar factors driving the development of the two processes:

Both FDD and XP are designed to enable teams to deliver results faster without compromising quality. Both processes are highly iterative and results-oriented. They're both people-focused instead of document-focused (no more thousand-page specifications to write). Both dismantle the traditional separation of domain and business experts/analysts from designers and implementers; analysts are dragged out of their abstractions and put in the same room as developers and users. These new processes, together with new tools and techniques, are enabling and encouraging analysis, design, code, test, and deployment to be done concurrently.

So where do FDD and XP differ?

Team Sizes

"XP is designed to work with projects that can be built by teams of two to ten programmers, that aren't sharply constrained by the existing computing environment, and where a reasonable job of executing tests can be done in a fraction of a day." [Beck]

FDD was first used with a team of 16–20 developers of varying abilities, cultural backgrounds, and experience: four chief programmers (CPs), sixteen class owners split into user interaction (UI), problem domain (PD), and data management (DM) teams. FDD is designed to scale to much larger team sizes. The limiting factor is the number of available CPs. Chief programmer teams have been proven in practice to scale well to much larger project teams (by the authors of FDD and independently [Brooks]).

Metaphor and Model

The XP process begins with the business writing stories on index cards. A story is something that the system needs to do. Development then estimates the time required to implement each story. The whole project is guided by a system metaphor, "an overall story that everyone—customers, programmers, and managers—can tell about how the system works" [Beck]. The business selects the subset of stories that will form the next release and development makes a delivery commitment. Development splits each of the stories into a number of tasks. Each developer accepts responsibility for a set of tasks.

Replace stories with domain walkthroughs and tasks with features, and it sounds very similar to the first three activities in FDD.

The enormous difference between XP and FDD is FDD's additional development of an overall domain object model. As developers learn of requirements, they start forming mental images of the system, making assumptions and estimating on that basis. Developing an overall domain object model forces those assumptions out into the open; misunderstandings are resolved, and a more complete, common understanding is formed.

XP uses the analogy of driving a car. Driving requires continual little course adjustments; you can't simply point the car in the right direction and press the accelerator. A domain object model is the map to guide the journey; it can prevent you from driving around in endless circles. The domain object model provides an overall shape to which to add function, feature by feature.

The domain object model enables feature teams to produce better designs for each group of features. This reduces the number of times that a team has to refactor their classes to add a new feature. Reducing the time spent refactoring increases the time that can be spent adding new features.

Collective Ownership or Class Ownership?

XP promotes collective ownership of code; any developer can add to or alter any piece of source code as needed. But collective ownership usually degenerates into non-ownership as the number of people involved grows. Small communes often work; larger communes rarely work for any length of time. XP claims three benefits from collective code ownership.

Feature teams address the same concerns while keeping the well-established benefits of individual code ownership.

XP also assumes that short integration and testing cycles mean a low rate of collisions from developers updating the same piece of source code. With larger numbers of developers and larger systems, this is obviously less likely to hold true.

Inspections and Pair Programming

Design and code inspections, when done well, are proven to remove more defects than testing, and they bring a number of secondary benefits as well.

XP uses pair programming to provide a continuous level of design and code inspection. All low-level design and coding is done in pairs. This is obviously better than individual developers delivering code without any form of inspection.

FDD promotes more formal inspections by feature teams; the level of formality is left to the chief programmer's discretion. This takes more time, but it has added advantages over pair programming.

There's no reason why members of feature teams can't pair up during coding when this is desirable. It's not unusual to see two members of a feature team working together where care is needed. One of the great things about feature teams is that a feature is complete only when the team is finished—and not when any one individual is finished. It's in the team members' own interest to help each other.

Testing

Correctness in XP is defined by the running of unit and functional tests. FDD takes unit testing almost for granted as part of Build by Feature. FDD doesn't define the mechanisms or level of formality for unit testing; it leaves the chief programmer to do what's appropriate.

It's acceptable to use XP unit testing techniques in an FDD environment. Where continuous or regular system builds are performed, it certainly makes sense to have a growing set of tests that can be run against a new build. Again, FDD doesn't specify this because technology and resources differ so much between projects. In some circumstances it's very difficult to produce a set of completely isolated, independent tests that run in a reasonable amount of time.
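For example, a class owner might fold an XP-style unit test into Build by Feature along these lines. The sketch below uses JUnit, and the Sale class is hypothetical, standing in for the "Calculate the total of a sale" feature; FDD itself mandates none of this:

import junit.framework.TestCase;

// Hypothetical problem-domain class behind the feature
// "Calculate the total of a sale".
class Sale {
    private double total;

    void addLineItem(double price, int quantity) {
        total += price * quantity;
    }

    double total() {
        return total;
    }
}

// A JUnit-style unit test, as a class owner might write it during
// Build by Feature before the feature is promoted to the main build.
public class SaleTest extends TestCase {
    public void testTotalOfASale() {
        Sale sale = new Sale();
        sale.addLineItem(2.50, 4);   // 10.00
        sale.addLineItem(1.25, 2);   //  2.50
        assertEquals(12.50, sale.total(), 0.001);
    }
}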

Reporting

XP leaves tracking to the project managers, encouraging them to minimize the overhead of collecting data and use large, visible wall charts. By contrast, Tracking by Feature in FDD describes a low-overhead, highly accurate means of measuring progress and provides the data to construct a large variety of practical, useful progress charts and graphs.

Bottom Line

It's important to discover what works for you and your organization. The name of the process you use is unimportant. What's important is the ability to repeatedly deliver frequent, tangible, working results on time, within budget, and with agreed function.

References

[Brooks] Frederick P. Brooks, Jr., The Mythical Man Month: Essays on Software Engineering, Anniversary Edition (Addison-Wesley, 1995, ISBN 0-201-83595-9).

[Beck] Kent Beck, Extreme Programming Explained: Embrace Change (Addison-Wesley, 2000, ISBN 0-201-61641-6).

[Coad] Peter Coad, Jeff De Luca, Eric LeFebvre, Java Modeling in Color with UML: Enterprise Components and Process (Prentice Hall PTR, 1999, ISBN 0-13-011510-X).

Acknowledgments

Kent Beck acknowledges, among others, the contributions of Ward Cunningham, Ron Jeffries, Martin Fowler, Erich Gamma, and Doug Beck in the development of XP. [Beck]

The main minds behind FDD are Jeff De Luca and Peter Coad, with contributions from M. A. Rajashima, Lim Bak Wee, Paul Szego, Jon Kern, and Stephen Palmer [Coad].

Stephen Palmer is the editor of The Coad Letter, a special report on new advances in building better object-oriented software, focusing on analysis and design issues. This article originally appeared in The Coad Letter 70.

TogetherSoft™ is a trademark of TogetherSoft Corporation. The Coad Letter® is a registered trademark of Object International, Inc.
