Trends in Software Economics
Over the past two decades, the software industry has moved progressively toward new methods for managing the ever-increasing complexity of software projects. We have seen evolutions and revolutions, with varying degrees of success and failure. Although software technologies, processes, and methods have advanced rapidly, software engineering remains a people-intensive process. Consequently, techniques for managing people, technology, resources, and risks have profound leverage.
The early software approaches of the 1960s and 1970s can best be described as craftsmanship, with each project using custom or ad-hoc processes and custom tools that were quite simple in their scope. By the 1980s and 1990s, the software industry had matured and was starting to exhibit signs of becoming more of an engineering discipline. However, most software projects in this era were still primarily exploring new technologies and approaches that were largely unpredictable in their results and marked by diseconomies of scale. In recent years, however, new techniques that aggressively attack project risk, leverage automation to a greater degree, and exhibit much-improved economies of scale have begun to grow in acceptance. Leading software organizations that use these approaches are already achieving much-improved software economics.
Let’s take a look at one successful model for describing software economics.
A Simplified Model of Software Economics
There are several software cost models in use today. The most popular, open, and well-documented model is the COnstructive COst MOdel (COCOMO), which has been widely used by the industry for 20 years. The latest version, COCOMO II, is the result of a collaborative effort led by the University of Southern California (USC) Center for Software Engineering, with the financial and technical support of numerous industry affiliates. The objectives of this team are threefold:
- To develop a software cost and schedule estimation model for the lifecycle practices of the post-2000 era
- To develop a software project database and tool support for improvement of the cost model
- To provide a quantitative analytic framework for evaluating software technologies and their economic impacts
COCOMO II allows its users to estimate cost to within 30% of actuals, 74% of the time. This level of unpredictability in the outcome of a software development process should be truly frightening to any software project investor, especially in view of the fact that few projects ever perform better than expected.
The COCOMO II cost model includes numerous parameters and techniques for estimating a wide variety of software development projects. For the purposes of this discussion, we will abstract COCOMO II into a function of four basic parameters:
Complexity. The complexity of the software solution is typically quantified in terms of the size of human-generated components (the number of source instructions or the number of function points) needed to develop the features in a usable product.
Process. This refers to the process used to produce the end product, and in particular its effectiveness in helping developers avoid “overhead” activities.
Team. This refers to the capabilities of the software engineering team, and particularly their experience with both the computer science issues and the application domain issues for the project at hand.
Tools. This refers to the software tools a team uses for development—that is, the extent of process automation.
The relationships among these parameters in modeling the estimated effort can be expressed as follows:
- Effort = (Team) × (Tools) × (Complexity)^(Process)
Schedule estimates are computed directly from the effort estimate and process parameters. Reductions in effort generally result in reductions in schedule estimates. To simplify this discussion, we can assume that the “cost” includes both effort and time. The complete COCOMO II model includes several modes, numerous parameters, and several equations. This simplified model enables us to focus the discussion on the more discriminating dimensions of improvement.
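The simplified relationship above can be sketched as a small function. This is only an illustration of the model's shape; the parameter values below are assumptions chosen for demonstration, not calibrated COCOMO II data.

```python
def estimated_effort(team, tools, complexity, process):
    """Simplified effort model: effort = team * tools * complexity ** process.

    team, tools  -- multipliers reflecting team capability and automation
    complexity   -- size of the human-generated components
    process      -- exponent reflecting process effectiveness (> 1.0 implies
                    a diseconomy of scale)
    All values here are illustrative, not calibrated COCOMO II parameters.
    """
    return team * tools * complexity ** process

# Hypothetical project: 100 units of complexity, nominal team and tools.
nominal = estimated_effort(1.0, 1.0, 100, 1.10)

# A more effective process moves the exponent toward 1.0, cutting effort.
improved = estimated_effort(1.0, 1.0, 100, 1.05)
assert improved < nominal
```

Note how a small change in the process exponent produces a disproportionate change in effort, which is why the later discussion treats process improvement as a high-leverage dimension.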
What constitutes a good software cost estimate is a very tough question. In our experience, a good estimate can be defined as one that has the following attributes:
- It is conceived and supported by a team accountable for performing the work, consisting of the project manager, the architecture team, the development team, and the test team.
- It is accepted by all stakeholders as ambitious but realizable.
- It is based on a well-defined software cost model with a credible basis and a database of relevant project experience that includes similar processes, similar technologies, similar environments, similar quality requirements, and similar people.
- It is defined in enough detail for both developers and managers to objectively assess the probability of success and to understand key risk areas.
Although several parametric models have been developed to estimate software costs, they can all be generally abstracted into the form given above. One very important aspect of software economics (as represented within today’s software cost models) is that the relationship between effort and size exhibits a diseconomy of scale. The software development diseconomy of scale is a result of the “process” exponent in the equation being greater than 1.0. In contrast to the economics for most manufacturing processes, the more software you build, the greater the cost per unit item. It is desirable, therefore, to reduce the size and complexity of a project whenever possible.
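The diseconomy of scale can be made concrete with a short sketch: when the process exponent exceeds 1.0, effort per unit of size grows as size grows. The exponent 1.1 below is an assumed value for illustration only.

```python
# With a process exponent above 1.0, effort grows faster than size,
# so the effort per unit item rises as the project gets larger.
def effort(size, process_exponent=1.1):
    # Illustrative diseconomy-of-scale curve; 1.1 is an assumed exponent.
    return size ** process_exponent

for size in (10_000, 100_000, 1_000_000):
    per_unit = effort(size) / size
    print(f"size={size:>9,}: effort per unit = {per_unit:.2f}")
```

Running this shows the per-unit effort climbing at each size step, the opposite of a manufacturing learning curve, which is exactly why reducing project size and complexity pays off.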