
Improving Software Economics, Part 5 of 7: Reducing Uncertainty: The Basis of Best Practice

In part 5 of this series, Walker Royce examines the scientific community's relationship with uncertainty.

The top 10 principles of iterative development resulted in many best practices, which are documented in the Rational Unified Process. [1] The Rational Unified Process includes practices for requirements management, project management, change management, architecture, design and construction, quality management, documentation, metrics, defect tracking, and many more. These best practices are also context-dependent. For example, a specific best practice used by a small research and development team at an independent software vendor isn't necessarily a best practice for an embedded application built to military standards. After several years of deploying these principles and capturing a framework of best practices, we began to ask a simple question: "Why are these practices best, and what makes them better?"

IBM research and the IBM Rational organization have been analyzing these questions for over a decade, and we have concluded that reducing uncertainty is the recurring theme that ties together techniques that we call best practices. Here is a simple story that Murray Cantor composed to illustrate this conclusion.

Suppose you're the assigned project manager for a software product that your organization needs delivered in 12 months to satisfy a critical business need. You analyze the project scope, develop an initial plan, and mobilize the resources estimated by your team. After running their empirical cost/schedule estimation models, they come back and tell you that the project should take 11 months. Excellent! What do you do with that information? As a savvy and scarred software manager, you know that the model's output is just a point estimate: the expected value of a more complex random variable. You would like to understand the variability among the input parameters and see the full distribution of possible outcomes, because you want to go into this project with a 95% chance of delivering within 12 months. Your team comes back and shows you the complete distribution, illustrated as the "baseline estimate" at the top of Figure 1. (I'll describe the three options in a moment.)
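To see how a single point estimate hides a distribution, here is a minimal Monte Carlo sketch. The toy cost model and all of its numbers are hypothetical (they are not from any real estimation model): instead of feeding the model single-valued inputs, we sample each uncertain input and collect the spread of outcomes.

```python
# Hypothetical sketch: turning a point estimate into a distribution by
# sampling the uncertain inputs of a (toy) cost/schedule model.
import random

random.seed(1)

def schedule_months(size_kloc, team_productivity):
    """Toy stand-in for an empirical cost/schedule model (illustrative only)."""
    return size_kloc / team_productivity

samples = []
for _ in range(10_000):
    size = random.gauss(110, 15)   # uncertain scope, in KLOC
    prod = random.gauss(10, 1.5)   # uncertain productivity, KLOC per month
    samples.append(schedule_months(size, prod))

samples.sort()
median = samples[len(samples) // 2]
p_on_time = sum(s <= 12 for s in samples) / len(samples)
print(f"median ~{median:.1f} months, P(<= 12 months) ~{p_on_time:.0%}")
```

The point estimate (110 KLOC at 10 KLOC/month, i.e., 11 months) looks comfortable against a 12-month deadline, but the simulated distribution shows a substantial fraction of outcomes running past it.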

Examining the baseline estimate, you realize that about half of the outcomes take longer than 12 months, so you have only about a 50% chance of delivering on time. The reason for this dispersion is the significant uncertainty in the input parameters, reflecting your team's limited knowledge of the scope, the design, the plan, and the team's capability. Consequently, the variance of the distribution is large. [2]

Now, as a project manager there are essentially three paths that you can take, which are also depicted in Figure 1:

  • Option 1: Ask the business to move out the target delivery date to 15 months to ensure that 95% of the outcomes complete in less time than that.
  • Option 2: Ask the business to re-scope the work, eliminating some of the required features or backing off on quality so that the median schedule estimate comes in by a couple of months. This ensures that 95% of the outcomes complete within 12 months.
  • Option 3: This is where most of us end up, and it is where successful project managers work with their teams to shrink the variance of the distribution. You must address and reduce the uncertainties in the scope, the design, the plans, the team, the platform, and the process. The effect of eliminating uncertainty is less dispersion in the distribution and consequently a higher probability of delivering by the target date.

Figure 1 A baseline estimate and alternatives in dealing with project management constraints.
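The three options can be made concrete with a small sketch. Assume, purely for illustration, that delivery time follows a lognormal distribution (right-skewed and never negative, a common choice for task durations); then the median plays the role of the schedule estimate and the log-space spread sigma plays the role of uncertainty, and each option manipulates one of them. All parameter values below are hypothetical.

```python
# Illustrative model of the three options, assuming a lognormal delivery time.
from math import log
from statistics import NormalDist

def p_on_time(deadline, median, sigma):
    """P(delivery <= deadline) for a lognormal distribution with the given
    median and log-space standard deviation sigma (a proxy for uncertainty)."""
    return NormalDist().cdf(log(deadline / median) / sigma)

# Baseline: median ~12 months, wide spread -> a coin flip at 12 months.
print(p_on_time(12, median=12.0, sigma=0.136))   # 0.50

# Option 1: keep the spread, move the deadline out to 15 months.
print(p_on_time(15, median=12.0, sigma=0.136))   # ~0.95

# Option 2: de-scope so the median comes in to ~9.6 months.
print(p_on_time(12, median=9.6, sigma=0.136))    # ~0.95

# Option 3: reduce uncertainty (shrink sigma) with the median near the
# model's 11-month point estimate.
print(p_on_time(12, median=11.0, sigma=0.053))   # ~0.95
```

Note how far sigma has to fall in Option 3: hitting 95% confidence at 12 months with an 11-month median requires cutting the spread to well under half its baseline value, which is why reducing uncertainty demands deliberate practices rather than optimism.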

The first two options are usually deemed unacceptable, leaving the third option as the only alternative—and the foundation of most of the iterative and Agile delivery best practices that have evolved in the software industry. If you examine the best practices for requirements management, use case modeling, architectural modeling, automated code production, change management, test management, project management, architectural patterns, reuse, and team collaboration, you will find methods and techniques to reduce uncertainty earlier in the lifecycle. If we retrospectively examine my top 10 principles of iterative development, we can easily conclude that many of them (specifically principles 1, 2, 3, 6, 8, and 9) make a significant contribution to addressing uncertainties earlier. The others (4, 5, 7, and 10) are more concerned with establishing feedback control environments for measurement and reporting. It was not obvious to me that the purpose of these principles was also to reduce uncertainty until I read Douglas Hubbard's book How to Measure Anything, [3] where I rediscovered the following definition:

Measurement: A set of observations that reduce uncertainty where the result is expressed as a quantity.

Voilà! The scientific community does not expect measurement to eliminate uncertainty completely; any significant reduction in uncertainty is enough to make a measurement valuable. With that context, I concluded that the primary discriminator of software delivery best practices is that they effectively reduce uncertainty and thereby increase the probability of success, even if success is defined as canceling a project earlier so that wasted cost is minimized. What remains to be assessed is how much better these practices work in various domains, and how best to instrument them. IBM research continues to invest in these important questions.


[1] Philippe Kruchten, The Rational Unified Process: An Introduction, Addison-Wesley, 1999, 2003.

[2] The variance of a random variable (i.e., a probability distribution or sample) is a measure of statistical dispersion. Technically, variance is defined as the average of the squared distance of all values from the mean. The mean describes the expected value, and the variance represents a measure of uncertainty in that expectation. The square root of the variance is called the standard deviation and is a more accepted measure, since it has the same units as the random variable.
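For concreteness, the footnote's definitions can be checked in a few lines of Python. The sample below is hypothetical (imagine it as simulated schedule outcomes in months).

```python
# The footnote's definitions, applied to a hypothetical sample of
# simulated schedule outcomes (months).
from statistics import mean, pvariance, pstdev

outcomes = [10, 11, 11, 12, 13, 15, 18]
m = mean(outcomes)        # the expected value (months)
v = pvariance(outcomes)   # average squared distance from the mean (months^2)
s = pstdev(outcomes)      # square root of the variance (months)
print(m, v, s)
```

Because the standard deviation is in months rather than months squared, it is the more natural number to quote when discussing how wide the schedule distribution is.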

[3] Douglas W. Hubbard, How to Measure Anything: Finding the Value of 'Intangibles' in Business, John Wiley & Sons, 2007.
