Local Safety

There is a recognized problem with estimating fine-grained tasks or deliverables, known as the "local safety" problem. It can lead to significantly inaccurate estimates, and its cause is rooted in well-understood human psychology.

If asked how long a small task will take, a developer will naturally not want to deliver it late. After all, it is a small task; delivering it late would look unprofessional. Hence, the estimate will include a buffer to absorb the developer's uncertainty about the task.

The estimate given by the developer will depend on how confident that developer is that he has understood the task and knows how to perform it. If there is uncertainty in the understanding of the task, then the estimated time to completion is likely to be significantly longer. No one wants to be late with such a small task.

Consider the graph in Figure 4–1, which shows confidence against the actual time to complete the task. It shows that there are a number of inexperienced developers who will underestimate due to overconfidence. However, most will overestimate, often by as much as 100% or 200%.

Walter A. Shewhart called this "uncontrolled" variation [Wheeler 1992, page 3].

Figure 4–1 Estimated delivery of a 12-day task.

Specifically, Figure 4–1 shows a task that is most likely to complete on the 12th day. Overconfidence can lead to estimates as low as 5 days; other estimates will be as long as 30 days. Goldratt predicts that developers seeking 80% confidence in the task duration will suggest around 20 days for the task. The 8-day difference between 20 days and 12 days is known as the local safety.

So a 12-day task may be estimated as up to 30 days. Imagine if this was to happen across a project with 2,000 tasks. The estimates for the whole project would be wildly inaccurate. In fact, the project might look so costly that it is cancelled before it is even started.

How can this problem be avoided?

Goldratt's first truly astute assertion about project management [Goldratt 1997] is that in an accurately planned project consisting of many tasks, approximately half the tasks will finish late. This is very counterintuitive, but it follows from the statistics: if a project is accurately planned and delivered on time, roughly half the tasks should finish early and half should finish late. Accepting this notion is hard, but once accepted, it is possible to consider the solution.

Accurate project planning techniques must avoid asking developers to estimate small tasks based on effort. The tasks to be performed must be broken out based on some form of (relative) complexity and risk. To work best, this requires a codification scheme for assessing the complexity and risk. An analysis method that enables small pieces of client-valued functionality to be codified for complexity or risk is required. This enables the inventory in the system of software production to be estimated without the psychological effect of local safety. By not asking the developer to estimate how long it will take, the local safety problem is removed. In Chapter 33, we will consider how effective various Agile and traditional methods are at codifying inventory to facilitate assessment of complexity in a repeatable fashion.

Hence, complexity and risk analysis of the inventory are the keys to providing estimates without local safety. There are several techniques for doing this: Function Points in structured analysis, Feature Complexity in FDD, and Story Size and Risk Points in XP.

It is possible to correlate the level of effort involved in a given unit of inventory through empirical measure. Historical data for recent projects can be used to estimate the level of effort involved in processing inventory of a given risk or complexity. This data can then be used to estimate how long it will take to process the remaining inventory through the software production system.
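As a minimal sketch of this idea, with entirely hypothetical numbers: historical items are recorded as (complexity points, actual developer-days) pairs, an empirical days-per-point rate is derived, and that rate is applied to the remaining inventory. No developer is ever asked to estimate a task's duration.

```python
# Sketch (hypothetical data): estimating remaining effort from an
# empirical days-per-complexity-point rate, rather than per-task guesses.

# Historical record: (complexity points, actual developer-days) per item
history = [(3, 6.5), (5, 11.0), (2, 3.8), (8, 17.5)]

total_points = sum(points for points, _ in history)
total_days = sum(days for _, days in history)
days_per_point = total_days / total_points  # empirical rate

remaining_inventory_points = 120  # complexity still to process (assumed)
estimate_days = remaining_inventory_points * days_per_point
print(f"{days_per_point:.2f} days/point -> {estimate_days:.0f} days remaining")
```

The rate improves as more historical items accumulate, and separate rates can be kept per risk or complexity band if the data supports it.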

Donald Reinertsen explains the statistical reduction of uncertainty through aggregation of sequential and parallel processes [1997, pp. 93 & 212]. The uncertainty (or variability) in a set of sequential processes reduces as a percentage of the total when the sequential processes are combined. Specifically, for sequential activities the total uncertainty is the square root of the sum of the squares.

Consider this equation for the 12-day activity shown in Figure 4–1, which had a local safety buffer of 8 days. If there were three such activities to run sequentially, the aggregate uncertainty equation would look like this:

√(8² + 8² + 8²) = √192 ≈ 13.9 days
This is a very significant result. The local safety for the three tasks adds to 24 days, but the required uncertainty buffer is approximately 14 days. Hence, the total time required for the three activities is

(3 × 12) + 14 = 50 days
The normal plan would have used the local estimates of 20 days for each activity, giving a total estimate of 60 days. The overestimation is 10 days, or 16.7%. The more activities in the plan, the worse this problem becomes.
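The square-root-of-sum-of-squares aggregation can be sketched directly, using the three 12-day activities with 8-day buffers from the example above:

```python
import math

# Reinertsen's aggregation rule for sequential activities: uncertainty
# buffers combine as the square root of the sum of squares, not linearly.
buffers = [8, 8, 8]    # local safety per task, in days (from Figure 4-1)
likely = [12, 12, 12]  # most-likely duration per task, in days

linear_buffer = sum(buffers)                          # naive sum: 24 days
rss_buffer = math.sqrt(sum(b * b for b in buffers))   # aggregated: ~13.9 days

total_required = sum(likely) + rss_buffer             # ~50 days
naive_plan = sum(d + b for d, b in zip(likely, buffers))  # 60 days
print(linear_buffer, round(rss_buffer, 1), round(total_required), naive_plan)
```

The gap between `naive_plan` and `total_required` is the waste introduced by summing local safety task by task, and it widens as more sequential activities are added.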

It is vital that local safety is eliminated from estimates of effort. Inclusion of local safety leads to estimates that are false, and false estimates undermine trust. Without trust it is impossible to properly manage a software production system.

For parallel dependent activities, the aggregate uncertainty is the single greatest uncertainty from the activities. For the example given, if all three tasks were undertaken in parallel, the aggregate buffer required would be only 8 days.
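The parallel case is simpler still; a one-line sketch using the same three tasks:

```python
# For parallel dependent activities, the aggregate buffer is simply the
# single largest uncertainty among them, not a combination.
buffers = [8, 8, 8]  # local safety per task, in days
aggregate_parallel_buffer = max(buffers)  # 8 days
print(aggregate_parallel_buffer)
```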

Uncertainty in software production is inevitable. There are five general constraints in software development—delivery date, budget, people, scope, and resources. Uncertainty can apply to any of the constraints.
