Date: Nov 22, 2002
Sample Chapter is provided courtesy of Addison-Wesley Professional.
Robert L. Glass explains why a software manager can't afford to forget the most important facts: that people are important, that technical hype does more harm than good, and that complexity is, well, complex.
To tell you the truth, I've always thought management was kind of a boring subject. Judging by the books I've read on the subject, it's 95 percent common sense and 5 percent warmed-over advice from yester-decade. So why am I leading off this book with the topic of management?
Because, to give the devil its due, most of the high-leverage, high-visibility things that happen in the software field are about management. Most of our failures, for example, are blamed on management. And most of our successes can be attributed to management. In Al Davis's wonderful book on software principles (1995), he says it very clearly in Principle 127: "Good management is more important than good technology." Much as I hate to admit it, Al is right.
Why do I hate to admit it? Early in my career, I faced the inevitable fork in the road. I could remain a technologist, continuing to do what I loved to do, building software, or I could take the other fork and become a manager. I thought about it pretty hard. The great American way involves moving up the ladder of success, and it was difficult to think of avoiding that ladder. But, in the end, two things made me realize I didn't want to leave my technology behind.
I wanted to do, not direct others to do.
I wanted to be free to make my own decisions, not become a "manager in the middle" who often had to pass on the decisions of those above him.
The latter thing may strike you as odd. How can a technologist remain more free to make decisions than his or her manager? I knew that, from my own experience, it was true, but it was tough explaining it to others. I finally wrote a whole book on the subject, The Power of Peonage (1979). The essence of that book, and of the belief that led to my remaining a technologist, is that those people who are really good at what they do and yet are at the bottom of a management hierarchy have a power that no one else in the hierarchy has. They can't be demoted. As peons, there is often no lower rank for them to be relegated to. It may be possible to threaten a good technologist with some sort of punishment, but being moved down the hierarchy isn't one of those ways. And I found myself using that power many times during my technical years.
But I digress. The subject here is why I, a deliberate nonmanager-type, chose to lead off this book with the topic of management. Well, what I want to say here is that being a technologist was more fun than being a manager. I didn't say it was more important. In fact, probably the most vitally important of software's frequently forgotten facts are management things. Unfortunately, managers often get so enmeshed in all that commonsense, warmed-over advice that they lose sight of some very specific facts that ought to be very memorable and are certainly vitally important.
Like things about people. How important they are. How some are astonishingly better than others. How projects succeed or fail primarily based on who does the work rather than how it's done.
Like things about tools and techniques (which, after all, are usually chosen by management). How hype about them does more harm than good. How switching to new approaches diminishes before it enhances. How seldom new tools and techniques are really used.
Like things about estimation. How bad our estimates so often are. How awful the process of obtaining them is. How we equate failure to achieve those bad estimates with other, much more important kinds of project failure. How management and technologists have achieved a "disconnect" over estimation.
Like things about reuse. How long we've been doing reuse. How little reuse has progressed in recent years. How much hope some people place (probably erroneously) on reuse.
Like things about complexity. How the complexity of building software accounts for so many of the problems of the field. How quickly complexity can ramp up. How it takes pretty bright people to overcome this complexity.
There! That's a quick overview of the chapter that lies ahead. Let's proceed into the facts that are so frequently forgotten, and so important to remember, in the subject matter covered by the term management.
Davis, Alan M. 1995. 201 Principles of Software Development. New York: McGraw-Hill.
Glass, Robert L. 1979. The Power of Peonage. Computing Trends.
The most important factor in software work is not the tools and techniques used by the programmers, but rather the quality of the programmers themselves.
People matter in building software. That's the message of this particular fact. Tools matter. Techniques also matter. Process, yet again, matters. But head and shoulders above all those other things that matter are people.
This message is as old as the software field itself. It has emerged from, and appears in, so many software research studies and position papers over the years that, by now, it should be one of the most important software "eternal truths." Yet we in the software field keep forgetting it. We advocate process as the be-all and end-all of software development. We promote tools as breakthroughs in our ability to create software. We aggregate a miscellaneous collection of techniques, call that aggregate a methodology, and insist that thousands of programmers read about it, take classes in it, have their noses rubbed in it through drill and practice, and then employ it on high-profile projects. All in the name of tools/techniques/process over people.
We even revert, from time to time, to anti-people approaches. We treat people like interchangeable cogs on an assembly line. We claim that people work better when too-tight schedules and too-binding constraints are imposed on them. We deny our programmers even the most fundamental elements of trust and then expect them to trust us in telling them what to do.
In this regard, it is interesting to look at the Software Engineering Institute (SEI) and its software process, the Capability Maturity Model. The CMM assumes that good process is the way to good software. It lays out a plethora of key process areas and a set of stair steps through which software organizations are urged to progress, all based on that fundamental assumption. What makes the CMM particularly interesting is that after a few years of its existence and after it had been semi-institutionalized by the U.S. Department of Defense as a way of improving software organizations and after others had copied the DoD's approaches, only then did the SEI begin to examine people and their importance in building software. There is now an SEI People Capability Maturity Model. But it is far less well known and far less well utilized than the process CMM. Once again, in the minds of many software engineering professionals, process is more important than people, sometimes spectacularly more important. It seems as if we will never learn.
The controversy regarding the importance of people is subtle. Everyone pays lip service to the notion that people are important. Nearly everyone agrees, at a superficial level, that people trump tools, techniques, and process. And yet we keep behaving as if it were not true. Perhaps it's because people are a harder problem to address than tools, techniques, and process. Perhaps it's like one of those "little moron" jokes. (In one sixty-year-old joke in that series, a little moron seems to be looking for something under a lamp post. When asked what he is doing, he replies "I lost my keys." "Where did you lose them?" he is asked. "Over there," says the little moron, pointing off to the side. "Then why are you looking under the lamp post?" "Because," says the little moron, "the light is better here.")
We in the software field, all of us technologists at heart, would prefer to invent new technologies to make our jobs easier, even if we know, deep down inside, that the people issue is a more important one to work.
The most prominent expression of the importance of people comes from the front cover illustration of Barry Boehm's classic book Software Engineering Economics (1981). There, he lays out a bar chart of the factors that contribute to doing a good job of software work. And, lo and behold, the longest bar on the chart represents the quality of the people doing the work. People, the chart tells us, are far more important than whatever tools, techniques, languages, and, yes, processes those people are using.
Perhaps the most important expression of this point is the also-classic book Peopleware (DeMarco and Lister 1999). As you might guess from the title, the entire book is about the importance of people in the software field. It says things like "The major problems of our work are not so much technological as sociological in nature" and goes so far as to say that looking at technology first is a "High-Tech Illusion." You can't read Peopleware without coming away with the belief that people matter a whole lot more than any other factor in the software field.
The most succinct expression of the importance of people is in Davis (1995), where the author states simply, "People are the key to success." The most recent expressions of the importance of people come from the Agile Development movement, where people say things like "Peel back the facade of rigorous methodology projects and ask why the project was successful, and the answer [is] people" (Highsmith 2002). And the earliest expressions of the importance of people come from authors like Bucher (1975), who said, "The prime factor in affecting the reliability of software is in the selection, motivation, and management of the personnel who design and maintain it," and Rubey (1978), who said, "When all is said and done, the ultimate factor in software productivity is the capability of the individual software practitioner."
But perhaps my favorite place where people were identified as the most important factor in software work was an obscure article on a vitally important issue. The issue was, "If your life depended on a particular piece of software, what would you want to know about it?" Bollinger responded, "More than anything else, I would want to know that the person who wrote the software was both highly intelligent, and possessed by an extremely rigorous, almost fanatical desire to make their program work the way it should. Everything else to me is secondary. . . ." (2001).
Boehm, Barry. 1981. Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall.
Bollinger, Terry. 2001. "On Inspection vs. Testing." Software Practitioner, Sept.
Bucher, D. E. W. 1975. "Maintenance of the Computer Sciences Teleprocessing System." Proceedings of the International Conference on Reliable Software, Seattle, WA, April.
Davis, Alan M. 1995. 201 Principles of Software Development. New York: McGraw-Hill.
DeMarco, Tom, and Timothy Lister. 1999. Peopleware. 2d ed. New York: Dorset House.
Highsmith, James A. 2002. Agile Software Development Ecosystems. Boston: Addison-Wesley.
Rubey, Raymond L. 1978. "Higher Order Languages for Avionics Software: A Survey, Summary, and Critique." Proceedings of NAECON.
The best programmers are up to 28 times better than the worst programmers, according to "individual differences" research. Given that their pay is never commensurate, they are the biggest bargains in the software field.
The point of the previous fact was that people matter in building software. The point of this fact is that they matter a lot!
This is another message that is as old as the software field. In fact, the sources I cite date mostly from 1968 to 1978. It is almost as if we have known this fundamental fact so well and for so long that it sort of slides effortlessly out of our memory.
The significance of this fact is profound. Given how much better some software practitioners are than others (and we will see numbers ranging from 5 times better to 28 times better), it is fairly obvious that the care and feeding of the best people we have is the most important task of the software manager. In fact, those 28-to-1 people, who probably are being paid considerably less than double their not-so-good peers, are the best bargains in software. (For that matter, so are those 5-to-1 folks.)
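The bargain claim is simple arithmetic: if a practitioner produces many times more but is paid only modestly more, the output per salary dollar dwarfs that of an average hire. A minimal sketch, with entirely hypothetical productivity and salary numbers chosen only to illustrate the ratio:

```python
def output_per_dollar(productivity: float, salary: float) -> float:
    """Relative output produced per unit of salary spent."""
    return productivity / salary

# Hypothetical figures: a 10x producer paid 1.5x the average salary.
average = output_per_dollar(1.0, 1.0)
star = output_per_dollar(10.0, 1.5)

print(star / average)  # ~6.7: the star yields several times more output per dollar
```

Even with much smaller productivity ratios (the 5-to-1 figure, say), the arithmetic still favors the better people as long as pay differences stay anywhere near "less than double."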
The problem is (and of course there is a problem, since we are not acting on this fact in our field) that we don't know how to identify those "best" people. We have struggled over the years with programmer aptitude tests and certified data processor exams and the ACM self-assessment programs, and the bottom line, after a lot of blood and sweat and perhaps even tears were spent on them, was that the correlation between test scores and on-the-job performance is nil. (You think that was disappointing? We also learned, about that time, that computer science class grades and on-the-job performance also correlated abysmally [Sackman 1968].)
The controversy surrounding this fact is simply that we fail to grasp its significance. I have never heard anyone doubt the truth of the matter. We simply forget that the importance of this particular fact is considerably more than academic.
I promised a plethora of "old-time" references in this matter. Here goes.
"The most important practical finding [of our study] involves the striking individual differences in programmer performance" (Sackman 1968). The researchers had found differences of up to 28 to 1 while trying to evaluate the productivity difference between batch and timesharing computer usage. (The individual differences made it nearly impossible for them to do an effective comparison of usage approaches.)
"Within a group of programmers, there may be an order of magnitude difference in capability" (Schwartz 1968). Schwartz was studying the problems of developing large-scale software.
"Productivity variations of 5:1 between individuals are common" (Boehm 1975). Boehm was exploring what he termed "the high cost of software."
"There is a tremendous amount of variability in the individual results. For instance, two people . . . found only one error, but five people found seven errors. The variability among student programmers is well known, but the high variability among these highly experienced subjects was somewhat surprising" (Myers 1978). Myers did the early definitive studies on software reliability methodologies.
These quotations and the data they contain are so powerful that I feel no need to augment what those early authors learned, except to say that I see no reason to believe that this particular finding (and fact) would have changed over time. But let me add a couple more quotations from the Al Davis book (1995): "Principle 132: A few good people are better than many less skilled people" and "Principle 141: There are huge differences among software engineers."
And here is a more recent source: McBreen (2002) suggests paying "great developers" "what they're worth" ($150K to $250K), and lesser ones much less.
Boehm, Barry. 1975. "The High Cost of Software." Practical Strategies for Developing Large Software Systems, edited by Ellis Horowitz. Reading, MA: Addison-Wesley.
Glass, Robert L. 1995. Software Creativity. Englewood Cliffs, NJ: Prentice-Hall.
McBreen, Pete. 2002. Software Craftsmanship. Boston: Addison-Wesley.
Myers, Glenford. 1978. "A Controlled Experiment in Program Testing and Code Walkthroughs/Inspections." Communications of the ACM, Sept.
Sackman, H., W. I. Erikson, and E. E. Grant. 1968. "Exploratory Experimental Studies Comparing Online and Offline Programming Performances." Communications of the ACM, Jan.
Schwartz, Jules. 1968. "Analyzing Large-Scale System Development." In Software Engineering Concepts and Techniques, Proceedings of the 1968 NATO Conference, edited by Thomas Smith and Karen Billings. New York: Petrocelli/Charter.
Adding people to a late project makes it later.
This is one of the classic facts of software engineering. In fact, it is more than a fact; it is a law: "Brooks's law" (1995).
Intuition tells us that, if a project is behind schedule, staffing should be increased to play schedule catch-up. Intuition, this fact tells us, is wrong. The problem is, as people are added to a project, time must be spent on bringing them up to speed. They have a lot to learn before they can become productive. But more important, those things they must learn typically must be acquired from the others on the project team. The result is that the new team members are very slow to contribute anything to the project at all, and while they are becoming productive, they are a drain on the time and attention of the existing project team.
Furthermore, the more people there are on a project, the more the complexity of its communications rises. Thus adding these new people when a project is late tends to make it even later.
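The growth in communication complexity can be made concrete with a bit of arithmetic commonly associated with Brooks's argument: among n people there are n(n-1)/2 possible pairwise communication paths, so channels grow much faster than headcount. A small illustrative sketch:

```python
def communication_paths(n: int) -> int:
    """Number of distinct pairwise communication channels among n people."""
    return n * (n - 1) // 2

# Adding 3 people to a 5-person team nearly triples the channels,
# even before counting the cost of bringing the newcomers up to speed.
for size in (5, 8, 15):
    print(size, communication_paths(size))  # 5->10, 8->28, 15->105
```

The point is not the exact formula but its shape: each added person must potentially coordinate with everyone already there, which is why late-project staffing tends to cost more than it contributes.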
Most people acknowledge the correctness of this fact. At the same time, it is possible to argue with some of the details. For example, what if the added people are already knowledgeable in this application domain, and perhaps even on this project? Then the learning curve problem diminishes, and the newcomers may end up contributing quite rapidly. Or what if the project is barely under way? In that case, there's not that much to bring the added people up to speed on.
The opposition to the fact is best articulated by McConnell (1999), who notes that (a) ignoring the fact "remains commonplace" in practice, and (b) the fact is valid only in "limited circumstances that are easily identified and avoided."
Still, few dispute the fundamental fact here. One must be very careful in adding staff to a behind-schedule project. (For that matter, one must be careful in adding staff to any project, late or not. But it's especially tempting, and especially dangerous, when the manager is trying to accelerate progress.)
This fact accounts for the title of a classic software engineering book. The book is called The Mythical Man-Month (Brooks 1995) because, although we tend to measure staffing in people per month, not all people contribute the same amount to a project, and thus not all man-months are equal. This is especially true for those people added to a late project, whose man-month contribution may very well be negative.
Brooks, Frederick P., Jr. 1995. The Mythical Man-Month. Anniversary ed. Reading MA: Addison-Wesley.
McConnell, Steve. 1999. "Brooks' Law Repealed." From the Editor. IEEE Software, Nov.
The working environment has a profound impact on productivity and product quality.
The tendency in software projects is to try to staff them with the best people available, enlist the support of an appropriate methodology, establish a process fairly high up on the SEI CMM food chain, and let 'er rip! The problem is, there's something important left out of that mix. The setting in which the systems analysts analyze and the designers design and the programmers program and the testers test: that environment matters a lot. A whole lot.
What it all boils down to is that software work is thought-intensive, and the environment in which it is done must be one that facilitates thinking. Crowding and the resulting (intentional or unintentional) interruptions are deadly to making progress.
How deadly? There's a whole classic book that focuses on this issue. Peopleware (DeMarco and Lister 1999) spends quite a bit of its time and space telling us just how, and in what ways, the environment matters. In it, the authors report on their own studies of the effect of the work environment on job performance. They took members of a project team and separated the top quartile of performers from the bottom quartile (the top quartile performed 2.6 times better than those at the bottom). They then examined the working environment of those people at the top and those at the bottom. The top people had 1.7 times as much workspace (measured in available floor space in square feet). Twice as often, they found their workspace "acceptably quiet." More than 3 times as often, they found it "acceptably private." Between 4 and 5 times as often, they could divert phone calls or silence their phone. They were interrupted by other people (needlessly) about half as often.
It is certainly true that the individual differences between people have a profound effect on software productivity, as we have already seen in Fact 2. But this fact tells us that there is something more needed. You must get good people, and then you must treat them well, especially in providing a workable environment.
The controversy here is underground. Hardly anyone will disagree with this fact publicly. And yet when the time comes to provide office space, the field seems to revert to its old "crowd them in as closely as possible" philosophy. The money spent on additional office space is easily measured and easily manipulated, whereas the cost to productivity and quality by crowding software people into too little space is much harder to measure.
Workers say things like "you never get anything done around here" (that's the title of one of Peopleware's chapters, in fact). Managers give authority on defining workspace matters to people who think as if they were "the furniture police" (that's another of its chapter titles). And yet very little seems to change. Even in the academic world, where thinking time is valued more than in most settings, the pressure of too little space for too many people often results in crowded or doubled-up offices.
There is an old saying, "the hard drives out the soft." That is, those things that are solidly measurable ("hard" things) tend to take attention away from those that are not (the "soft" ones). This truism is about much more than software, but it seems especially relevant here. Office space measurement is hard. Productive benefits are soft. Guess which one wins?
There is one burgeoning controversy connected to this fact. Advocates of Extreme Programming argue for a way of working called pair programming. In pair programming, two software workers stay in close communication and proximity while they are doing their work, even sharing use of the computer keyboard. Here, we see intentional crowding, yet with the attendant claim that productivity and especially quality benefit. The controversy between these two viewpoints has not yet been articulated in the software literature, but as Extreme Programming becomes better known, this controversy may hit the fan.
There are several sources of information on Extreme Programming, but here is the first and best known:
Beck, Kent. 2000. Extreme Programming Explained. Boston: Addison-Wesley.
One leading advocate of pair programming, Laurie Williams, has written about it in many places, including the following:
Williams, Laurie, Robert Kessler, Ward Cunningham, and Ron Jeffries. 2000. "Strengthening the Case for Pair Programming." IEEE Software 17, no. 4.
DeMarco, Tom, and Timothy Lister. 1999. Peopleware. 2d ed. New York: Dorset House.
Tools and Techniques
Hype is the plague on the house of software. Most software tool and technique improvements account for about a 5 to 35 percent increase in productivity and quality. But at one time or another, most of those same improvements have been claimed by someone to have "order of magnitude" benefits.
Time was, way back when, that new software engineering ideas were really breakthroughs. High-order programming languages. Automated tools like debuggers. General-purpose operating systems. That was then (the 1950s). This is now. The era of breakthrough techniques, the things that Fred Brooks (1987) referred to as silver bullets, is long since over.
Oh, we may have fourth-generation languages ("programming without programmers") and CASE tools ("the automation of programming") and object orientation ("the best way to build software") and Extreme Programming ("the future of the field") and whatever the breakthrough du jour is. But, in spite of the blather surrounding their announcement and advocacy, those things are simply not that dramatically helpful in our ability to build software. And, to paraphrase Brooks himself, the most rational viewpoint to take on breakthroughs is "not now, not ever." Or perhaps, "unlikely ever again."
In fact, there is some pretty solid data to that effect. Nearly all so-called breakthroughs, circa 1970 to today, are good for modest benefits (less than 35 percent) for software engineers. Considering that the breakthrough blather is making claims for "order of magnitude" improvements (that is, powers of 10), there is a huge gap between these claims and reality.
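One way to feel the size of that gap: even if every adoption delivered the top-end 35 percent gain, and those gains compounded perfectly, it would still take quite a few of them stacked together to reach a single claimed order-of-magnitude improvement. A back-of-the-envelope sketch (my arithmetic for illustration, not data from the studies):

```python
import math

# How many successive 35% productivity gains, compounding perfectly,
# would it take to reach a 10x ("order of magnitude") improvement?
gain_per_improvement = 1.35
target = 10.0

needed = math.log(target) / math.log(gain_per_improvement)
print(round(needed, 1))  # roughly 7.7 successive best-case improvements
```

In other words, one would need nearly eight consecutive best-case breakthroughs, each fully realized, to honestly deliver what a single hyped technique routinely claims.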
The evidence on this subject is quite strong. In my own longitudinal research, examining evaluative studies by objective researchers of the value of these improvements (Glass 1999), I have found
A serious lack of evaluative research, in that there are few such studies to draw on.
Enough studies to be able to draw some significant conclusions.
A nearly total lack of any evidence that any of these things has breakthrough benefits.
Fairly solid evidence that the benefits are indeed there, but at a far more modest level: 5 to 35 percent.
(The references and further readings in that paper can point you to the original studies that produced this objective evaluative data.)
These findings are echoed in a wonderful table in a best-of-practices book on software process improvement (Grady 1997), in which the author lists some of the various process changes that are part of a process improvement program and the benefits that may be achieved by them. What was the highest-benefit process change, you may ask? Reuse. The payoff for reuse, according to Grady, is 10 to 35 percent. Contrast that with the extravagant "order of magnitude" claims of the componentry zealots of today. Or the claims of any of the other zealots of yesteryear.
Why do we go through this cycle of hype-and-dashed-hopes again and again? It takes two kinds of people to sustain this cycle: the hypesters themselves and the true believers. The hypesters, as it turns out, almost always are nonobjective folks who have something to gain: product sales, high-priced courses, or funding for research projects. 'Twas always thus. Since the days of the Universal Elixir peddlers, there have always been people eager to make a fast buck on promises unsubstantiable by realities.
The ones who worry me, given that there will always be fast-buck pursuers, are those true believers. Why do those folks believe, again and again and again, the promises of the hypesters? Why are we subjected, again and again and again, to massive expenditure and training in new concepts that cannot possibly deliver what is claimed for them? Answering that question is one of the most important tasks of our field. If answering it were easy, we wouldn't have to ask it. But I will try to suggest answers in the material that follows.
I've never met anyone who disagreed with the notion that there is too much hype in the software field. But behavior, all too often, clashes with this belief. Nearly everyone, at the conceptual level, agrees with the Brooks notion that there is unlikely to be any silver bullet forthcoming. But so many people, when push really gets down to shove, leap eagerly aboard the latest software engineering breakthrough bandwagon.
Sometimes I think what is happening is a kind of "hardware envy." Our computer hardware brethren have made remarkable progress in the few decades that computer hardware has been produced. Cheaper/better/faster happens over and over again in that hardware world. Friends who study the history of technology tell me that progress in the computer hardware field is probably faster than that in any other field, ever. Perhaps we in software are so envious of that progress that we pretend that it is happeningor can happento us.
There is another thing going on, as well. Because the whole hardware/software thing has moved forward so dramatically and so rapidly, there is a great fear of being left behind and a great eagerness to participate in whatever is new. We draw up life cycles of new process/product innovation, and cheer for the "early adopters" while booing the "laggards." In the computing field there is a whole cultural thing that says new is better than old. Given all of that, who indeed would not want to embrace the new and step away from the old? In that emotional climate, buying into hype is a good thing, and stepping up in front of the hype steamroller is bad.
What a shame all of that is. The software field has been victimized so many times by its hypesters and their fellow travelers. And, to make matters worse, I'd be willing to bet that, as you are reading this, there is some other new idea parading down the pike, its zealots leading the way and claiming dramatic benefits, your colleagues dancing merrily along in their wake. The Pied Piper of hype is striking yet again!
The pronouncements of the fact of hype are severely outnumbered by the purveyors of faulty promise. I can't remember back far enough to identify the first published hypester, and I wouldn't want to give them "credit" here if I could. But it has all been going on for a very long time. My very first book (1976), for example, was titled The Universal Elixir, and Other Computing Projects Which Failed and told a few tales that ridiculed these computing elixir-selling hypesters. Those stories had originally been published in Computerworld (under an assumed name, Miles Benson) for a decade before they were gathered into that book.
There are far more sources that promise panaceas than there are voices of reason crying in this wilderness. But here, as well as in the References that follow, are a few of those voices:
Davis, Alan M. 1995. 201 Principles of Software Development. New York: McGraw-Hill. Principle 129 is "Don't believe everything you read."
Weinberg, Gerald. 1992. Quality Software Development: Systems Thinking. Vol. 1, p. 291. New York: Dorset House.
Brooks, Frederick P., Jr. 1987. "No Silver Bullet: Essence and Accidents of Software Engineering." IEEE Computer, Apr. This paper has been published in several other places, most notably in the Anniversary Edition of Brooks's best-known book, The Mythical Man-Month (Reading, MA: Addison-Wesley, 1995), where it is not only included in its original form but updated.
Glass, Robert L. 1976. "The Universal Elixir, and Other Computing Projects Which Failed." Computerworld. Republished by Computing Trends, 1977, 1979, 1981, and 1992.
Glass, Robert L. 1999. "The Realities of Software Technology Payoffs." Communications of the ACM, Feb.
Grady, Robert B. 1997. Successful Software Process Improvement. Table 4-1, p. 69. Englewood Cliffs, NJ: Prentice-Hall.
Learning a new tool or technique actually lowers programmer productivity and product quality initially. The eventual benefit is achieved only after this learning curve is overcome. Therefore, it is worth adopting new tools and techniques, but only (a) if their value is seen realistically and (b) if patience is used in measuring benefits.
Learning a new tool or technique, assuming that there is value associated with its use, is a good thing. But perhaps not as good as the early adopters might have us believe. There is a cost to learning to use new ideas. We must come to understand the new idea, see how it fits into what we do, decide how to apply it, and consider when it should and shouldn't be used. Being forced to think about things that previously have been pretty automatic for us slows us down.
Whether the new idea is using a test coverage analyzer tool for the first time and figuring out what that means to our testing process or trying out Extreme Programming and adjusting to all the new techniques it contains, the user of the new idea will be less efficient and less effective. That does not mean that these new ideas should be avoided; it simply means that the first project on which they are employed will go more slowly, not faster, than usual.
Improving productivity and product quality have been the holy grails of software process for the last couple of decades. The reason we adopt new tools and techniques is to improve productivity and quality. So it is an irony of the technology transfer process that productivity and quality initially go down, not up, when we change gears and try something new.
Not to worry. If there truly is benefit to the new thing, eventually it will emerge. But that brings up the question "how long?" What we are talking about is the learning curve. In the learning curve, efficiency and effectiveness dip at the outset, rise back past the norm, and eventually plateau at whatever benefit the new thing is capable of achieving. Given that, the "how long" question translates into several "how longs." How long will we have diminished benefits? How soon do we return to normal benefits? How long before we get the ultimate benefits?
At this point, what all of us would like to be able to say is something like "three months" and "six months." But, of course, none of us can say thator anything else, for that matter. The length of the learning curve is situation- and environment-dependent. Often, the higher the benefit at the end, the longer the learning curve. To learn object orientation superficially might take three months, but to become proficient at it might take two fully immersed years. For other things, the lengths would be totally different. The only predictable thing is that there will be a learning curve. One can draw the curve on a chart, but one cannot put any meaningful scale on that chart's axes.
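The dip-recover-plateau shape just described can be sketched as a toy model. Every number in it (the initial dip, the eventual plateau, the time constant) is invented purely for illustration, since no meaningful scale can be put on the real curve's axes.

```python
import math

def relative_productivity(t, dip=0.7, plateau=1.25, tau=3.0):
    """Toy learning-curve model (all parameters invented for
    illustration): productivity starts below the old norm of 1.0
    and rises exponentially toward the eventual plateau."""
    return plateau - (plateau - dip) * math.exp(-t / tau)

# With these made-up numbers, how many months until we are merely
# back to the old norm (relative productivity >= 1.0)?
months = next(m for m in range(1, 100) if relative_productivity(m) >= 1.0)
```

Changing `tau` (how quickly the new idea is assimilated) or `plateau` (how much benefit it ultimately delivers) stretches or compresses the curve, which is exactly why the "how long" questions have no general answer.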
The one remaining question is "how much?" How much benefit will the new idea bring us? That is just as unknowable as the answers to the "how long" questions. Except for one thing. Because of what we learned in the previous fact, it is most likely true that the benefit will be 5 to 35 percent higher than was achieved before the new idea was assimilated.
There is one more important factor to add. Although we cannot give any general answers to the "how long" and "how much" questions, there are answers. For any new concept (test coverage analyzers or Extreme Programming, for example), those with experience can help you with those questions. Find people who already have assimilated the concept you want to embrace, through your peer network, user group, professional society, and so on, and inquire about how long it took them (they are quite likely to know the answer to that question) and how much benefit they achieved (that's a tougher question, and unless they are users of software metrics approaches, they may not know). And don't forget, while you're asking, to find out what lessons they learned, pro and con, in their adoption process.
Be sure to avoid zealots when you ask those questions, of course. Zealots sometimes give the answers they believe, rather than the answers they experienced.
There shouldn't be any controversy over this matter. It is obvious that there is cost attached to learning something new. But, in fact, there often is controversy. The claims of zealots for huge benefits and quick learning curves all too often translate into management belief in those benefits and in that quick learning process (see Fact 5). Managers expect new approaches, when employed, to work right out of the box. Under those circumstances, cost and schedule estimates are made with the assumption that the benefits will be achieved from the beginning.
In a classic story to this effect, one pharmaceutical company, installing the SAP megapackage, bid low on some contracts because they assumed the benefits of SAP would be achieved immediately. That company disappeared, a learning curve period later, in a flame of bankruptcy and litigation. "Don't try this at home" might be the lesson learned here.
The learning curve and its impact on progress is described in many places, including
Weinberg, Gerald. 1997. Quality Software Management: Anticipating Change. Vol. 4, pp. 13, 20. New York: Dorset House.
Software developers talk a lot about tools. They evaluate quite a few, buy a fair number, and use practically none.
Tools are the toys of the software developer. Developers love to learn about new tools, try them out, even procure them. And then something funny happens. The tools seldom get used.
A colorful term emerged from this tendency a decade or so ago. During the height of the CASE tools movement, when everyone seemed to believe that CASE tools were the way of the software future, a way that might well automate the process of software development, lots and lots of those CASE tools were purchased. But so many of them were put on the figurative shelf and never used that the term shelfware was coined to describe the phenomenon.
I was a victim of that whole movement. At the time, I frequently taught a seminar on software quality. In that seminar, I expressed my own belief that those CASE tools were nowhere near the breakthrough technology that others claimed them to be. Some of my students sought me out after one session to tell me that I was out-of-date on the subject of CASE. They were convinced, as I was not, that these tools were indeed capable of automating the software development process.
I took those students seriously and immediately immersed myself in the CASE body of knowledge to make sure that I hadn't indeed missed something and become out-of-date. Time passed. And along with the passage of time came vindication for my point of view. CASE tools were indeed beneficial, we soon learned, but they were definitely not magical breakthroughs. The shelfware phenomenon resulted, among other things, from those dashed hopes.
But I digress. This fact is not about tools seen to be breakthroughs (we skewered that thought in Fact 5), but rather tools as successful productivity enhancers. And, for the most part, they are that. So why do these kinds of tools also end up on the shelf?
Remember Fact 6 about the learning curve? The one that says that trying out a new tool or technique, far from immediately improving productivity, actually diminishes it at the outset? Once the thrill of trying out a new tool has worn off, the poor schedule-driven software developer must build real software to real schedules. And, all too often, the developer reverts to what he or she knows best, the same tools always used. Compilers for the well-known programming language they know and love. Debuggers for that language. Their favorite (probably language-independent) text editors. Linkers and loaders that do their bidding almost without their thinking about it. Last year's (or last decade's) configuration management tools. Isn't that enough tools to fill a toolbox? Never mind coverage analyzers or conditional compilers or standards-conformance checkers or whatever the tool du jour might be. They might be fun to play with, these developers say to themselves, but they're a drag when it comes to being productive.
There is another problem about tools in the software field, in addition to the one we have already discussed. There is no "minimum standard toolset," a definition of the collection of tools that all programmers should have in their toolboxes. If there were such a definition, programmers would be much more likely to use at least the tools that made up that set. This is a problem that no one seems to be working on. (At one time, IBM proposed a toolset called AD/Cycle, but it made a dull thud in the marketplace (it was too expensive and too poorly thought through), and no one has tried to do that since.)
Often software practitioners are tagged with the "not-invented-here" (NIH) phenomenon. They are accused of preferring to do their own thing rather than building on the work of others.
There is, of course, some of that in the field. But there is not as much as many seem to think. Most programmers I know, given the choice of something new or something old, will use the something new, but only if they are sure that they can complete their tasks at hand more quickly if they do. Since that is seldom the case (there's that old learning curve problem again), they revert to the old, the tried, the true.
The problem here, I would assert, is not NIH. The problem is a culture that puts schedule conformance, using impossible schedules, above all else, a culture that values schedule so highly that there is no time to learn about new concepts. There are tools catalogs (for example, ACR) that describe the what/where/how of commercial tools. Few programmers are aware of the existence of many of the tools of our trade. Fewer still are aware of the catalogs that could lead to them. We will return to these thoughts in the facts that follow.
Regarding any controversy over a minimum standard toolset, so few are giving that any thought that there is no controversy whatsoever. Imagine what wonderful controversy could result if people began thinking about it.
ACR. The ACR Library of Programmer's and Developer's Tools, Applied Computer Research, Inc., P.O. Box 82266, Phoenix AZ 85071-2266. This was an annually updated software tools catalog, but has suspended publication recently.
Glass, Robert L. 1991. "Recommended: A Minimum Standard Software Toolset." In Software Conflict. Englewood Cliffs, NJ: Yourdon Press.
Wiegers, Karl. 2001. Personal communication. Wiegers says, "This fact is something I've said repeatedly. It has been published in an interview I once did, conducted by David Rubinstein and appearing in Software Development Times, Oct. 15, 2000."
One of the two most common causes of runaway projects is poor estimation. (For the other, see Fact 23, page 67.)
Runaway projects are those that spiral out of control. All too often, they fail to produce any product at all. If they do, whatever they produce will be well behind schedule and well over budget. Along the way, there is lots of wreckage, both in corporate and human terms. Some projects are known as "death marches." Others are said to operate in "crunch mode." Whatever they are called, whatever the result, runaway projects are not a pretty sight.
The question of what causes such runaways arises frequently in the software engineering field. The answer to the question, all too often, is based on the personal biases of the person who is answering the question. Some say a lack of proper methodology causes runaways; often, those people are selling some kind of methodology. Some say it is a lack of good tools (guess what those people do for a living?). Some say it is a lack of discipline and rigor among programmers (typically, the methodologies being advocated and often sold by those people are based on heavy doses of imposed discipline). Name an advocated concept, and someone is saying the lack of it is what causes runaway projects.
In the midst of this clamor and chaos, fortunately, there are some genuinely objective answers to the question, answers from which typically no one stands to gain by whatever comes of the answer. And those answers are fascinatingly consistent: The two causes of runaways that stand head and shoulders above all others are poor (usually optimistic) estimation and unstable requirements. One of them leads in some research studies, and the other in other studies.
In this section of the book, I want to focus on estimation. (I will cover unstable requirements later.) Estimation, as you might imagine, is the process by which we determine how long a project will take and how much it will cost. We do estimation very badly in the software field. Most of our estimates are more like wishes than realistic targets. To make matters worse, we seem to have no idea how to improve on those very bad practices. And the result is, as everyone tries to meet an impossible estimation target, shortcuts are taken, good practices are skipped, and the inevitable schedule runaway becomes a technology runaway as well.
We have tried all kinds of apparently reasonable approaches to improve on our ability to estimate. To begin with, we relied on "expert" people, software developers who had "been there and done that." The problem with that approach is it's very subjective. Different people with different "been there and done that" experiences produce different estimates. In fact, whatever it was that those people had been and done before was unlikely to be sufficiently similar to the present problem to extrapolate well. (One of the important factors that characterizes software projects is the vast differences among the problems they solve. We will elaborate on that thought later.)
Then we tried algorithmic approaches. Computer scientists tend to be mathematicians at heart, and it was an obvious approach to try, developing carefully conceived parameterized equations (usually evolved from past projects) that could provide estimation answers. Feed in a bunch of project-specific data, the algorithmists would say, turn the algorithmic crank, and out pop reliable estimates. It didn't work. Study after study (for example, dating back to Mohanty 1981) showed that, if you took a hypothetical project and fed its data into a collection of proposed algorithmic approaches, those algorithms would produce radically different (by a factor of two to eight) results. Algorithms were no more consistent in the estimates they produced than were those human experts. Subsequent studies have reinforced that depressing finding.
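To see how wide such algorithmic spreads can get, consider one model that actually was published, Basic COCOMO (Boehm, 1981), whose effort equation is effort = a × KLOC^b, with coefficients that depend on an assumed project "mode." The sketch below simply applies the published coefficients to one hypothetical 50-KLOC project; even this single model's three modes disagree by a factor of well over two, which hints at why independent models disagreed by factors of two to eight.

```python
# Basic COCOMO (Boehm, 1981): effort in person-months = a * KLOC**b.
# The (a, b) coefficients below are the published ones for the three
# project "modes"; the 50-KLOC project itself is hypothetical.
MODES = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def effort_person_months(kloc, mode):
    a, b = MODES[mode]
    return a * kloc ** b

# The same hypothetical 50-KLOC project, three defensible answers:
estimates = {m: round(effort_person_months(50, m)) for m in MODES}
```

And this assumes we can predict the 50 KLOC in the first place, which, as the next paragraph argues, may be the harder problem.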
If complex algorithms haven't done the job, some people have reasoned, perhaps simpler algorithmic approaches will. Many people in the field advocate basing an estimate on one or a few key pieces of data. One is the "lines of code" (LOC). People say that, if we can predict the number of lines of code (LOC) we expect the system to contain, then we can convert LOC into schedule and cost. (This idea would be laughable, in the sense that it is probably harder to know how many LOC a system will contain than what its schedule and cost will be, if it were not for the fact that so many otherwise bright computer scientists advocate it.) Another is the "function point" (FP). People say that we should look at key parameters, such as the number of inputs to and outputs from a system, and base the estimate on those. There is a problem with the FP approach, as well; in fact, there are a couple of problems. The first is that experts disagree on what should be counted and how the counting should happen. The second is that for some applications FPs may make sense, but for others (where, for example, the number of inputs and outputs is far less significant than the complexity of the logic inside the program) FPs make no sense at all. (Some experts supplement FPs with "feature points" for those applications where "functions" are obviously insufficient. But that begs the question, which no one seems to have answered: how many kinds of applications requiring how many kinds of "points" counting schemes are there?)
The bottom line is that, here in the first decade of the twenty-first century, we don't know what constitutes a good estimation approach, one that can yield decent estimates with good confidence that they will really predict when a project will be completed and how much it will cost. That is a discouraging bottom line. Amidst all the clamor to avoid crunch mode and end death marches, it suggests that so long as faulty schedule and cost estimates are the chief management control factors on software projects, we will not see much improvement.
It is important to note that runaway projects, at least those that stem from poor estimation, do not usually occur because the programmers did a poor job of programming. Those projects became runaways because the estimation targets to which they were being managed were largely unreal to begin with. We will explore that factor in several of the facts that follow.
There is little controversy about the fact that software estimates are poor. There is lots of controversy as to how better estimation might be done, however. Advocates of algorithmic approaches, for example, tend to support their own algorithms and disparage those of others. Advocates of FP approaches often say terrible things about those who advocate LOC approaches. Jones (1994) lists LOC estimation as responsible for two of the worst "diseases" of the software profession, going so far as to call its use "management malpractice."
There is, happily, some resolution to this controversy, if not to the problem of estimation accuracy. Most students of estimation approaches are beginning to conclude that a "belt and suspenders" approach is the best compromise in the face of this huge problem. They say that an estimate should consist of (a) the opinion of an expert who knows the problem area, plus (b) the output of an algorithm that has been shown, in the past and in this setting, to produce reasonably accurate answers. Those two estimates can then be used to bound the estimation space for the project in question. Those estimates are very unlikely to agree with each other, but some understanding of the envelope of an estimate is better than none at all.
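The "belt and suspenders" idea can be reduced to a few lines. This is my own sketch of the compromise just described, not a method from the text, and both input numbers are hypothetical: one comes from an expert who knows the problem area, the other from an algorithm calibrated in this setting.

```python
def estimation_envelope(expert_pm, model_pm):
    """Bound the estimation space with two independent estimates
    (a sketch of the 'belt and suspenders' compromise; the inputs
    are person-month figures from an expert and from a locally
    calibrated model)."""
    low, high = sorted((expert_pm, model_pm))
    return {"low": low, "high": high, "spread": high / low}

# Hypothetical inputs: the expert says 9 person-months, the model 14.
env = estimation_envelope(expert_pm=9.0, model_pm=14.0)
```

As the text says, the two numbers are very unlikely to agree; the point of the envelope is that knowing the spread is itself useful information when committing to a schedule.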
Some recent research findings suggest that "human-mediated estimation process can result in quite accurate estimates," far better than "simple algorithmic models" (Kitchenham et al. 2002). That's a strong vote for expert approaches. It will be worth watching to see if those findings can be replicated.
There are several studies that have concluded that estimation is one of the top two causes of runaway projects. The following two are examples, as are the three sources listed in the References section that follows.
Cole, Andy. 1995. "Runaway Projects: Causes and Effects." Software World (UK) 26, no. 3. This is the best objective study of runaway projects, their causes, their effects, and what people did about them. It concludes that "bad planning and estimating" were a prime causative factor in 48 percent of runaway projects.
Van Genuchten, Michiel. 1991. "Why Is Software Late?" IEEE Transactions on Software Engineering, June. This study concludes that "optimistic estimation" is the primary cause of late projects, at 51 percent.
There are several books that point out what happens when a project gets in trouble, often from faulty schedule targets.
Boddie, John. 1987. Crunch Mode. Englewood Cliffs, NJ: Yourdon Press.
Yourdon, Ed. 1997. Death March. Englewood Cliffs, NJ: Prentice Hall.
Jones, Capers. 1994. Assessment and Control of Software Risks. Englewood Cliffs, NJ: Yourdon Press. This strongly opinionated book cites "inaccurate metrics" using LOC as "the most serious software risk" and includes four other estimation-related risks in its top five, including "excessive schedule pressure" and "inaccurate cost estimating."
Kitchenham, Barbara, Shari Lawrence Pfleeger, Beth McCall, and Suzanne Eagan. 2002. "An Empirical Study of Maintenance and Development Estimation Accuracy." Journal of Systems and Software, Sept.
Mohanty, S. N. 1981. "Software Cost Estimation: Present and Future." Software Practice and Experience, Vol. 11, pp. 103-21.
Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that estimates are obtained before the requirements are defined and thus before the problem is understood. Estimation, therefore, usually occurs at the wrong time.
Why is estimation in the software field as bad as it is? We are about to launch a succession of several facts that can more than account for that badness.
This fact is about estimation timing. We usually make our software estimates at the beginning of a project, right at the very beginning. Sounds OK, right? When else would you expect an estimate to be made? Except for one thing. To make a meaningful estimate, you need to know quite a bit about the project in question. At the very least, you need to know what problem you are to solve. But the first phase of the life cycle, the very beginning of the project, is about requirements determination. That is, in the first phase we establish the requirements the solution is to address. Put more succinctly, requirements determination is about figuring out what problem is to be solved. How can you possibly estimate solution time and cost if you don't yet know what problem you are going to be solving?
This situation is so absurd that, as I present this particular fact at various forums around the software world, I ask my audiences if anyone can contradict what I have just said. After all, this must be one of those "surely I'm wrong" things. But to date, no one has. Instead, there is this general head-nodding that indicates understanding and agreement.
Oddly, there seems to be no controversy about this particular fact. That is, as I mentioned earlier, there seems to be general agreement that this fact is correct. The practice it describes is absurd. Someone should be crying out to change things. But no one is.
I suspect that, like urban legends and old wives' tales, the expression of this particular fact cannot be traced to its origins. Here are a couple of places, however, where the "wrong time" problem has been clearly identified:
In a Q&A segment of an article, Roger Pressman quotes a questioner as saying, "My problem is that delivery dates and budgets are established before we begin a project. The only question my management asks is 'Can we get this project out the door by June 1?' What's the point in doing detailed project estimates when deadlines and budgets are predefined?" (1992).
In a flyer for an algorithmic estimation tool (SPC 1998), the text presents this all-too-common conversation: "Marketing Manager (MM): 'So how long d'you think this project will take?' You (the project leader): 'About nine months.' MM: 'We plan to ship this product in six months, tops.' You: 'Six months? No way.' MM: 'You don't seem to understand . . . we've already announced its release date.'"
Note, in these quotes, that not only is estimation being done at the wrong time, but we can make a case for it being done by the wrong people. In fact, we will explore that fact next.
Pressman, Roger S. 1992. "Software Project Management: Q and A." American Programmer (now Cutter IT Journal), Dec.
SPC. 1998. Flyer for the tool Estimate Professional. Software Productivity Centre (Canada).
Most software estimates are made either by upper management or by marketing, not by the people who will build the software or their managers. Estimation is, therefore, done by the wrong people.
This is the second fact about why software estimation is as bad as it is. This fact is about who does the estimating.
Common sense would suggest that the people who estimate software projects ought to be folks who know something about building software. Software engineers. Their project leaders. Their managers. Common sense gets trumped by politics, in this case. Most often, software estimation is done by the people who want the software product. Upper management. Marketing. Customers and users.
Software estimation, in other words, is currently more about wishes than reality. Management or marketing wants the software product to be available in the first quarter of next year. Ergo, that's the schedule to be met. Note that little or no "estimation" is actually taking place under these circumstances. Schedule and cost targets, derived from some invisible process, are simply being imposed.
Let me tell you a story about software estimation. I once worked for a manager in the aerospace industry who was probably as brilliant as any I have ever known. This manager was contracting to have some software built by another aerospace firm. In the negotiations that determined the contract for building the software, he told them when he needed the software, and they told him that they couldn't achieve that date. Guess which date went into the contract? Time passed, and the wished-for date slid by. The software was eventually delivered when the subcontractor said it would be. But the contract date (I suspect that you guessed correctly which date that was) was not achieved, and there were contractual penalties to be paid by the subcontractor.
There are a couple of points to be made in this story. Even very bright upper managers are capable of making very dumb decisions when political pressure is involved. And there is almost always a price to be paid for working toward an unrealistic deadline. That price is most often paid in human terms (reputation, morale, and health, among others), but, as you can see in this story, there is likely a financial price to be paid as well.
Software estimation, this fact tells us, is being done by the wrong people. And there is serious harm being done to the field because of that.
This is another fact where controversy is warranted, and all too absent. Nearly everyone seems to agree that this is a fairly normal state of affairs. Whether it should be a desirable state of affairs is an issue that is rarely raised. But this fact does raise a major disconnect between those who know something about software and those who do not. Or perhaps this fact results from such a disconnect that already exists.
Let me explain. The disconnect that results is that software people may not know what is possible in estimating a project, but they almost always know what is not possible. And when upper management (or marketing) fails to listen to such knowledgeable words of warning, software people tend to lose faith and trust in those who are giving them direction. And they also lose a whole lot of motivation.
On the other hand, there is already a long-standing disconnect between software folks and their upper management. A litany of failed expectations has caused upper management to lose its own faith and trust in software folks. When software people say they can't meet a wish list estimate, upper management simply ignores them. After all, what would be the basis on which they might believe them, given that software people so seldom do what they say they will do?
All of this says that the software field has a major disconnect problem (we return to this thought in Fact 13). The controversy, in the case of this particular fact, is less about whether the fact is true and more about why the fact is true. And until that particular controversy gets solved, the software field will continue to be in a great deal of trouble.
There is actually a research study that explores precisely this issue and demonstrates this fact. In it (Lederer 1990), the author explores the issue of what he calls "political" versus "rational" estimation practices. (The choice of words is fascinating: political estimation is performed, as you might guess, by those upper managers and the marketing folks; rational estimation [I particularly love this choice of words], on the other hand, is performed by the software folk. And in this study, political estimation was the norm.)
In another reported study (CASE 1991), the authors found that most estimates (70 percent) are done by someone associated with the "user department," and the smallest number (4 percent) are done by the "project team." The user department may or may not be the same as upper management or marketing, but it certainly represents estimation by the wrong people.
There have been other similar studies in other application domains (note that these studies were about Information Systems). The two quotes in the source material for the previous fact, for example, not only are about doing estimation at the wrong time, but are also about the wrong people doing that estimation.
CASE. 1991. "CASE/CASM Industry Survey Report." HCS, Inc., P.O. Box 40770, Portland, OR 97240.
Lederer, Albert. 1990. "Information System Cost Estimating." MIS Quarterly, June.
Software estimates are rarely adjusted as the project proceeds. Thus those estimates done at the wrong time by the wrong people are usually not corrected.
Let's look at common sense again. Given how bad software estimates apparently are, wouldn't you think that, as a project proceeds and everyone learns more about what its likely outcome will be, those early and usually erroneous estimates would be adjusted to meet reality? Common sense strikes out again. Software people tend to have to live or die by those original faulty estimates. Upper management is simply not interested in revising them. Given that they so often represent wishes instead of realistic estimates, why should upper management allow those wishes to be tampered with?
Oh, projects may track how things are progressing and which milestones have to slide and perhaps even where the final milestones should be extended to. But when the time comes to total a project's results, its failure or success is usually measured by those first-out-of-the-box numbers. You remember, the numbers concocted at the wrong time by the wrong people?
This failure to revisit early estimates is obviously bad practice. But few fight against this in any concerted way. (There was, however, one study [NASA 1990] that advocated reestimation and even defined the points in the life cycle at which it ought to occur. But I am not aware of anyone following this advice.) Thus, although there should be enormous controversy about this particular fact, I know of none. Software folks simply accept as a given that they will not be allowed to revise the estimates under which they are working.
Of course, once a project has careened past its original estimation date, there is public hue and cry about when product is likely to become available. The baggage handling system at the Denver International Airport comes to mind. So does each new delivery of a Microsoft product. So there is controversy about the relationship between political estimates and real outcomes. But almost always, that controversy focuses on blaming the software folks. Once again, "blame the victim" wins, and common sense loses.
I know of no research study on this issue. There are plenty of anecdotes about failure to reestimate, however, in both the popular computing press and software management books. I rely on these things to substantiate this fact:
Examples such as those mentioned earlier (the Denver Airport and Microsoft products)
My own 40-something years of experience in the software field
The half-dozen books I've written on studies of failed software projects
The fact that when I present this fact in public forums and invite the audience to disagree with me ("please tell me that I'm wrong"), no one does
NASA. 1990. Manager's Handbook for Software Development. NASA-Goddard.
Since estimates are so faulty, there is little reason to be concerned when software projects do not meet estimated targets. But everyone is concerned anyway.
Last chance, common sense. Given how bad software estimates are (that wrong-time, wrong-people, no-change phenomenon we've just discussed), you'd think that estimates would be treated as relatively unimportant. Right? Wrong! In fact, software projects are almost always managed by schedule. Because of that, schedule is considered, by upper management at least, the most important factor in software.
Let me be specific. Management by schedule means establishing a bunch of short-term and long-term milestones (the tiny ones are sometimes called inch-pebbles) and deciding whether the project is succeeding or failing by what happens at those schedule points. You're behind schedule at milestone 26? Your project is in trouble.
How else could we manage software projects? Let me give just a few examples to show that management by schedule isn't the only way of doing business.
We could manage by product. We could proclaim success or failure by how much of the final product is available and working.
We could manage by issue. We could proclaim success or failure by how well and how rapidly we are resolving those issues that always arise during the course of a project.
We could manage by risk. We could proclaim success or failure by a succession of demonstrations that the risks identified at the beginning of the project have been overcome.
We could manage by business objectives. We could proclaim success or failure by how well the software improves business performance.
We could even manage (gasp!) by quality. We could proclaim success or failure by how many quality attributes the product has successfully achieved.
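The contrast between management by schedule and, say, management by product can be made concrete with a toy sketch. Everything here is hypothetical (the numbers, the thresholds, the function names); it exists only to show how the same project can look very different under the two regimes:

```python
# Toy sketch (all numbers and thresholds hypothetical): the same
# project judged two different ways.

def status_by_schedule(milestones_missed):
    """Management by schedule: any missed milestone means trouble."""
    return "in trouble" if milestones_missed > 0 else "on track"

def status_by_product(features_working, features_planned):
    """Management by product: judge by how much of the final
    product is available and working."""
    fraction = features_working / features_planned
    return "on track" if fraction >= 0.8 else "in trouble"

# A project behind at milestone 26, but with most of its product
# demonstrably working, gets opposite verdicts.
schedule_verdict = status_by_schedule(milestones_missed=3)
product_verdict = status_by_product(features_working=44, features_planned=50)
```

Under management by schedule this project is failing; under management by product it is succeeding. The point is not that either rule is right, but that the choice of yardstick decides the verdict.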
"How naive this guy is," I can hear you muttering under your breath, thinking about what I have just finished saying here. "In this fast-paced age, schedule really does matter more than anything else." Well, perhaps. But isn't there something wrong with managing to estimates, the least controllable, least correct, most questionable factor that managers handle?
Everyone, software people included, simply accepts that management by schedule is the way we do things. There is plenty of resentment of the aftereffects of management by schedule, but no one seems to be stepping up to the plate to do something about it.
Some new ideas on this subject may shake this status quo, however. Extreme Programming (Beck 2000) suggests that after the customer or user chooses three of the four factors (cost, schedule, features, and quality), the software developers get to choose the fourth. This nicely identifies the things at stake on a software project, and we can see clearly that only two of them are about estimates. It also proposes to change the power structure that is currently such a major contributor to poor estimation.
I know of no research on this matter. But in software workshops I have conducted little experiments that tend to illustrate this problem. Let me describe one of those experiments.
I ask the attendees to work on a small task. I deliberately give them too much to do and not enough time to do it. My expectation is that the attendees will try to do the whole job, do it correctly, and therefore will produce an unfinished product because they run out of time. Not so. To a person, these attendees scramble mightily to achieve my impossible schedule. They produce sketchy and shoddy products that appear to be complete but cannot possibly work.
What does that tell me? That, in our culture today, people are trying so hard to achieve (impossible) schedules that they are willing to sacrifice completeness and quality in getting there. That there has been a successful conditioning process, one that has resulted in people doing the wrong things for the wrong reasons. And finally, and most disturbingly, that it will be very difficult to turn all of this around.
Extreme Programming is best described by the source listed in the Reference section to follow.
Beck, Kent. 2000. eXtreme Programming Explained. Boston: Addison-Wesley.
There is a disconnect between management and their programmers. In one research study of a project that failed to meet its estimates and was seen by its management as a failure, the technical participants saw it as the most successful project they had ever worked on.
You could see this one coming. With all the estimation problems discussed earlier, it is hardly surprising that many technologists are trying very hard to pay no attention to estimates and deadlines. They don't always succeed, as I pointed out in my workshop experiment described in Fact 12. But it is tempting to those who understand how unreal most of our estimates are to use some factor other than estimation to judge the success of their project.
One research study doth not a groundswell make. But this particular study is so impressive and so important, I couldn't help but include it here as a possible future fact, an indicator of what may begin to happen more often.
This discussion focuses on a research project described in Linberg (1999). Linberg studied a particular real-world software project, one that management had called a failure, and one that had careened madly past its estimation targets. He asked the technologists on the project to talk about the most successful project they had ever worked on. The plot thickens! Those technologists, or at least five out of eight of them, identified this most recent project as their most successful. Management's failed project was a great success, according to its participants. What a bizarre illustration of the disconnect we have been discussing!
What on earth was this project about? Here are some of the project facts. It was 419 percent over budget. Its actual schedule was 193 percent of the estimate (27 months versus 14). It was over its original size estimates by 130 percent for its software and 800 percent for its firmware. But the project was successfully completed. It did what it was supposed to do (control a medical instrument). It fulfilled its requirement of "no postrelease software defects."
So there are the seeds of a disconnect. Estimated targets didn't even come close to being achieved. But the software product, once it was available, did what it was supposed to do and did it well.
Still, doesn't it seem unlikely that putting a working, useful product on the air would make this a "most successful" project? Wouldn't that be an experience you would have hoped these technologists had had many times before? The answer to those questions, for the study in question, was the expected "yes." There was something else that caused the technologists to see this project as a success; several something elses, in fact.
The product worked the way it was supposed to work (no surprise there).
Developing the product had been a technical challenge. (Lots of data show that, most of all, technologists really like overcoming a tough problem.)
The team was small and high performing.
Management was "the best I've ever worked with." Why? "Because the team was given the freedom to develop a good design," because there was no "scope creep," and because "I never felt pressure from the schedule."
And the participants added something else. Linberg asked them for their perceptions about why the project was late. They said this:
The schedule estimates were unrealistic (ta-da!).
There was a lack of resources, particularly expert advice.
Scope was poorly understood at the outset.
The project started late.
There is something particularly fascinating about these perceptions. All of them are about things that were true at the beginning of the project. Not along the way, but at the beginning. In other words, the die was cast on this project from day one. No matter how well and how hard those technologists had worked, they were unlikely to have satisfied management's expectations. Given that, they did what, from their point of view, was the next best thing. They had a good time producing a useful product.
There have been other studies of management and technologist perceptions that also reflect a major disconnect. For example, in one (Colter and Couger 1983), managers and technologists were asked about some characteristics of software maintenance. Managers believed that changes were typically large, involving more than 200 LOC. Technologists reported that changes actually involved only 50 to 100 LOC. Managers believed that the number of changed LOC correlated with the time to do the task; technologists said that there was no such correlation.
And on the subject of software runaways, there is evidence that technologists see the problem coming well before their management does (72 percent of the time) (Cole 1995). The implication is that what the technologists see coming is not passed on to management: the ultimate disconnect.
Perhaps the most fascinating comments on this subject came from a couple of articles on project management. Jeffery and Lawrence (1985) found that "projects where no estimates were prepared at all fared best on productivity" (versus projects where estimates were performed by technologists [next best] or their managers [worst]). Landsbaum and Glass (1992) found "a very strong correlation between level of productivity and a feeling of control" (that is, when the programmers felt in control of their fate, they were much more productive). In other words, control-focused management does not necessarily lead to the best project or even to the most productive one.
There are essentially two aspects to this fact: the problem of what constitutes project success and the problem of a disconnect between management and technologists.
With respect to the "success" issue, the Linberg research finding has not been replicated as of this writing, so there has been no time for controversy to have emerged about this fact. My suspicion is that management, upon reading this story and reflecting on this fact, would in general be horrified that a project so "obviously" a failure could be seen as a success by these technologists. My further suspicion is that most technologists, upon reading this story and reflecting on this fact, would find it all quite reasonable. If my suspicions are correct, there is essentially an unspoken controversy surrounding the issue this fact addresses. And that controversy is about what constitutes project success. If we can't agree on a definition of a successful project, then the field has some larger problems that need sorting out. My suspicion is that you haven't by any means heard the last of this particular fact and issue.
With regard to the disconnect issue, I have seen almost nothing commenting on it in the literature. Quotes like the ones from Jeffery and Landsbaum seem to be treated like the traditional griping of those at the bottom of a management hierarchy toward those above them, rather than information that may have real significance.
Another relevant source, in addition to those in the References section, is
Procaccino, J. Drew, and J. M. Verner. 2001. "Practitioner's Perceptions of Project Success: A Pilot Study." IEEE International Journal of Computer and Engineering Management.
Cole, Andy. 1995. "Runaway Projects: Causes and Effects." Software World (UK) 26, no. 3.
Colter, Mel, and Dan Couger. 1983. From a study reported in Software Maintenance Workshop Record. Dec. 6.
Jeffery, D. R., and M. J. Lawrence. 1985. "Managing Programmer Productivity." Journal of Systems and Software, Jan.
Landsbaum, Jerome B., and Robert L. Glass. 1992. Measuring and Motivating Maintenance Programmers. Englewood Cliffs, NJ: Prentice-Hall.
Linberg, K. R. 1999. "Software Developer Perceptions about Software Project Failure: A Case Study." Journal of Systems and Software 49, nos. 2/3, Dec. 30.
The answer to a feasibility study is almost always "yes."
The "new kid on the block" phenomenon affects the software field in a lot of different ways. One of the ways is that we "don't get no respect." There is the feeling among traditionalists in the disciplines whose problems we solve that they have gotten along without software for many decades, thank you very much, and they can get along without us just fine now.
That played out in a "theater of the absurd" event some years ago when an engineering manager on a pilotless aircraft project took that point of view. It didn't matter that a pilotless aircraft simply couldn't function at all without computers and software. This guy wanted to dump all of that troublesome technology overboard and get on with his project.
Another way the "new kid on the block" phenomenon hits us is that we seem to possess all-too-often incurable optimism. It's as if, since no one has ever been able to solve the problems we are able to solve, we believe that no new problem is too tough for us to solve. And, astonishingly often, that is true. But there are times when it's not, times when that optimism gets us in a world of trouble. When we believe that we can finish this project by tomorrow, or at least by a couple of tomorrows from now. When we believe we will instantly produce software without errors and then find that the error-removal phase often takes more time than systems analysis, design, and coding put together.
And then there is the feasibility study. This optimism really gets us in trouble when technical feasibility is an issue. The result is that, for those (all-too-few) projects in which a feasibility study precedes the actual system construction project, the answer to the feasibility study is almost invariably "yes, we can do that." And, a certain percentage of the time, that turns out to be the wrong answer. But we don't find that out until many months later.
There is such a time gap between getting the wrong answer to a feasibility study and the discovery that it really was the wrong answer that we rarely connect those two events. Because of that, there is less controversy about this fact than you might expect. The rarity of feasibility studies (they are all too seldom performed) is probably more controversial than the fact that they all too often give the wrong answer.
The source for this fact is particularly interesting to me. I was attending the International Conference on Software Engineering (ICSE) in Tokyo in 1987, and the famed Jerry Weinberg was the keynote speaker. As part of his presentation, he asked the audience how many of them had ever participated in a feasibility study where the answer came back "No." There was uneasy silence in the audience, then laughter. Not a single hand rose. All 1,500 or so of us realized at the same time, I think, that Jerry's question touched on an important phenomenon in the field, one we had simply never thought about before.
Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.
There is a tendency in the computing world to assume that any good idea that comes along must be a new idea. Case in point: reuse.
Truth to tell, the notion of reuse is as old as the software field. In the mid-1950s, a user organization for scientific applications of IBM "mainframes" (that term was not used in those days) was formed. One of its most important functions was serving as a clearinghouse for contributed software subroutines. The organization was called Share, appropriately enough, and the contributed routines became the first library of reusable software. The way to gain fame, back in those early computing days, was to be known as someone who contributed good-quality routines to the library. (It was not, however, a way of gaining fortune. Back in those days, software had no monetary value; it was given away free with hardware. Note, here, another good idea that is not new: open-source or freeware software.)
Now those early libraries of software routines contained what we today would call reuse-in-the-small routines. Math functions. Sorts and merges. Limited-scope debuggers. Character string handlers. All those wonderful housekeeping capabilities that most programmers needed (and still need) at one time or another. In fact, my first brush with (extremely limited!) fame came in contributing a debug routine addendum to the Share library.
Reuse was built into the software development process back in those days. If you were writing a program that needed some kind of common capability, you went first to the Share library to see if it already existed. (Other user groups, like Guide and Common, probably had their own libraries for their own application domains. I was not a business application programmer at that time, so I don't really know whether Guide and Common functioned like Share.) I remember writing a program that needed a random number generator and going to the Share library to find one I could use. (There were plenty of them, from your basic random number generator to those that generated random numbers fitting some prescribed pattern, like a normal curve.)
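To give the flavor of that second kind of Share-era routine in a modern notation, here is a sketch, purely illustrative and in Python rather than anything the Share library actually contained, of a generalized random number routine that produces numbers fitting a normal curve, using the Box-Muller transform:

```python
import math
import random

def normal_variates(n, mean=0.0, stddev=1.0, rng=random.random):
    """Generate n random numbers fitting a normal curve, via the
    Box-Muller transform: the kind of generalized, reusable routine
    a subroutine library of the Share era might have offered."""
    out = []
    while len(out) < n:
        u1, u2 = rng(), rng()
        if u1 <= 1e-12:          # avoid log(0)
            continue
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(mean + stddev * r * math.cos(2.0 * math.pi * u2))
        if len(out) < n:         # Box-Muller yields pairs; keep the second
            out.append(mean + stddev * r * math.sin(2.0 * math.pi * u2))
    return out
```

Note the generalization knobs (mean, spread, even the underlying uniform source): a special-purpose version for one program would have hard-coded all three.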
Reuse in those days was catch-as-catch-can, with no quality control on what was placed in the library. However, having your name attached to a Share library routine was a big deal, and you worked very hard to make sure your contribution was error-free before you submitted it. I don't remember any quality problems with reused Share routines.
Why this trip down memory lane? Because, in trying to understand the reuse phenomenon and its status today, it is important to realize that this is a very old and very successful idea. Following the success of reuse-in-the-small, and in spite of efforts to expand that concept into larger components, the state of reuse remained fairly constant over the years. Why that is will be discussed in Fact 16.
The primary controversy here is that too many people in the computing field think that reuse is a brand-new idea. As a result, there is enormous (and often hyped) enthusiasm for this concept, an enthusiasm that would be more realistic if people understood its history and its failure to grow over the years.
This memory of early days' reuse is very vivid for me. In fact, the best account of this phenomenon is in my own personal/professional reflection (Glass 1998) (he immodestly said). The Share organization (it still functions today) would be another place to find documentation of its early days (it actually produced what we would today call a tools and parts catalog, wherein potential users could find out what modules were available to them, organized by the problem those modules solved).
Glass, Robert L. 1998. "Software Reflections: A Pioneer's View of the History of the Field." In In the Beginning: Personal Recollections of Software Pioneers. Los Alamitos, CA: IEEE Computer Society Press.
Reuse-in-the-large (components) remains a mostly unsolved problem, even though everyone agrees it is important and desirable.
It is one thing to build useful small software components. It is quite another to build useful large ones. In Fact 15, we saw that the reuse-in-the-small problem was solved more than 40 years ago. But the reuse-in-the-large problem has remained unsolved over the intervening years.
Why is that? Because there are a lot of different opinions on this subject, I address this "why" question in the Controversy section that follows.
But the key word in understanding this problem is the word useful. It is not very difficult to build generalized, reusable routines. Oh, it is more difficult than building comparable special-purpose routines (some say three times more difficult; Fact 18 covers this), but that is not a prohibitive barrier. The problem is, once those reusable modules are built, they have to do something that truly matches a great variety of needs in a great variety of programs.
And there's the rub. We see in the discussion of the controversy surrounding this topic that (according to one collection of viewpoints, at least) a diverse collection of problems to be solved results in a diverse set of component needs: too diverse, at least at this time, to make reuse-in-the-large viable.
There is considerable controversy surrounding the topic of reuse-in-the-large. First, advocates see reuse-in-the-large as the future of the field, a future in which programs are screwed together from existing components (they call it component-based software engineering). Others, typically practitioners who understand the field better (there's no bias in that comment!), pooh-pooh the idea. They say that it is nearly impossible to generalize enough functions to allow finessing the development of special-purpose components fitted to the problem at hand.
The resolution of this particular controversy falls into a topic that might be called software diversity. If there are enough common problems across projects and even application domains, then component-based approaches will eventually prevail. If, as many suspect, the diversity of applications and domains means that no two problems are very similar to one another, then only those common housekeeping functions and tasks are likely to be generalized, and they constitute only a small percentage of a typical program's code.
There is one source of data to shed light on this matter. NASA-Goddard, which over the years has studied software phenomena at its Software Engineering Laboratory (SEL) and which services the very limited application domain of flight dynamics software, has found that up to 70 percent of its programs can be built from reused modules. Even the SEL, however, sees that fact as a function of having a tightly constrained application domain and does not anticipate achieving that level of success across more diverse tasks.
Second, there is a controversy in the field as to why reuse-in-the-large has never caught on. Many, especially academics, believe it is because practitioners are stubborn, applying the "not-invented-here" (NIH) syndrome to allow them to ignore the work of others. Most people who believe in NIH tend to view management as the problem, and the eventual solution. From that point of view, the problems of reuse-in-the-large are about will, not skill. It is management's task, these people say, to establish policies and procedures that foster reuse to create the necessary will.
In fact, few claim that there is a problem of skill in reuse. Although it is generally acknowledged that it is considerably more difficult to build a generalized, reusable version of a capability than its ad hoc alternative, it is also generally acknowledged that there is no problem in finding people able to do that job.
My own view, which contradicts both the NIH view and the will-not-skill view, is that the problem is close to being intractable. That is, because of the diversity problem mentioned earlier, it is the exception rather than the rule to find a component that would be truly generalizable across a multiplicity of applications, let alone domains. My reason for holding that view is that over the years one of the tasks I set for myself was to evolve reuse-in-the-small into reuse-in-the-large. I sought and tried to build reuse-in-the-large components that would have all the widespread usefulness of those reuse-in-the-small routines from the Share library. And I came to understand, as few today seem to understand, how difficult a task that really is. For example, knowing that one of the bread-and-butter tools in the Information Systems application domain was the generalized report generator, I tried to produce the analogous capability for the scientific/engineering domain. Despite months of struggle, I could never find enough commonality in the scientific/engineering report generation needs to define the requirements for such a component, let alone build one.
In my view, then, the failure of reuse-in-the-large is likely to continue. It is not an NIH problem. It is not a will problem. It is not even a skill problem. It is simply a problem too hard to be solved, one rooted in software diversity.
No one wants me to be correct, of course. Certainly, I don't. Screwed-together components would be a wonderful way to build software. So would automatic generation of code from a requirements specification. And neither of those, in my view, is ever likely to happen in any meaningful way.
There are plenty of sources of material on reuse-in-the-large, but almost all of them present the viewpoint that it is a solvable problem.
As mentioned earlier, one subset of this Pollyanna viewpoint consists of those who see it as a management problem and present approaches that management can use to create the necessary will. Two recent sources of this viewpoint are
IEEE Standard 1517. "Standard for Information Technology - Software Life Cycle Processes - Reuse Processes; 1999." A standard, produced by the engineering society IEEE, by means of which the construction of reusable componentry can be fostered.
McClure, Carma. 2001. Software Reuse: A Standards-Based Guide. Los Alamitos, CA: IEEE Computer Society Press. A how-to book for applying the IEEE standard.
Over the years, a few authors have been particularly realistic in their view of reuse. Any of the writings of Ted Biggerstaff, Will Tracz, and Don Reifer on this subject are worth reading.
Reifer, Donald J. 1997. Practical Software Reuse. New York: John Wiley and Sons.
Tracz, Will. 1995. Confessions of a Used Program Salesman: Institutionalizing Reuse. Reading, MA: Addison-Wesley.
Reuse-in-the-large works best in families of related systems and thus is domain-dependent. This narrows the potential applicability of reuse-in-the-large.
OK, so reuse-in-the-large is a difficult, if not intractable, problem. Is there any way in which we can increase the odds of making it work?
The answer is "yes." It may be nearly impossible to find components of consequence that can be reused across application domains, but within a domain, the picture improves dramatically. The SEL experience in building software for the flight dynamics domain is a particularly encouraging example.
Software people speak of "families" of applications and "product lines" and "family-specific architectures." Those are the people who are realistic enough to believe that reuse-in-the-large, if it is ever to succeed, must be done in a collection of programs that attacks the same kinds of problems. Payroll programs, perhaps even human resource programs. Data reduction programs for radar data. Inventory control programs. Trajectory programs for space missions. Notice the number of adjectives that it takes to specify a meaningful domain, one for which reuse-in-the-large might work.
Reuse-in-the-large, when applied to a narrowly defined application domain, has a good chance of being successful. Cross-project and cross-domain reuse, on the other hand, does not (McBreen 2002).
The controversy surrounding this particular fact is among people who don't want to give up on the notion of fully generalized reuse-in-the-large. Some of those people are vendors selling reuse-in-the-large support products. Others are academics who understand very little about application domains and want to believe that domain-specific approaches aren't necessary. There is a philosophical connection between these latter people and the one-size-fits-all tools and methodologists. They would like to believe that the construction of software is the same no matter what domain is being addressed. And they are wrong.
The genre of books on software product families and product architectures is growing rapidly. This is, in other words, a fact that many are just beginning to grasp, and a bandwagon of supporters is now forming. A couple of very recent books that address this topic in a domain-focused way are
Bosch, Jan. 2000. Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach. Boston: Addison-Wesley.
Jazayeri, Mehdi, Alexander Ran, and Frank van der Linden. 2000. Software Architecture for Product Families: Principles and Practice. Boston: Addison-Wesley.
McBreen, Pete. 2002. Software Craftsmanship. Boston: Addison-Wesley. Says "cross-project reuse is very hard to achieve."
There are two "rules of three" in reuse: (a) It is three times as difficult to build reusable components as single-use components, and (b) a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
There is nothing magic about the number three in reuse circles. In the two rules of three, those threes are rules of thumb, nothing more. But they are nice, memorable, realistic rules of thumb.
The first is about the effort needed to build reusable components. As we have seen, to construct reusable components is a complex task. Often, someone building a reusable component is thinking of a particular problem to be solved and trying to determine whether there is some more general problem analogous to this specific one. A reusable component, of course, must solve this more general problem in such a way that it solves the specific one as well.
Not only must the component itself be generalized, but the testing approach for the component must address the generalized problem. Thus the complexity of building a reusable component arises in the requirements ("what is the generalized problem?"), design ("how can I solve this generalized problem?"), coding, and testing portions of the life cycle. In other words, from start to finish.
It is no wonder that knowledgeable reuse experts say it takes three times as long. It is also worth pointing out that, although most people are capable of thinking about problems in a generalized way, it still requires a different mindset from simply solving the problem at hand. Many advocate the use of particularly skilled, expert generalizers.
The second rule of thumb is about being sure that your reusable component really is generalized. It is not enough to show that it solves your problem at hand. It must solve some related problems, problems that may not have been so clearly in mind when the component was being developed. Once again, the number threetry your component out in three different settingsis arbitrary. My guess is that it represents a minimum constraint. That is, I would recommend trying out your generalized component in at least three different applications before concluding that it truly is generalized.
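That second rule of thumb can be illustrated with a small sketch. Everything here is hypothetical (the component, the three "applications," the data), but it shows the spirit of the rule: a candidate library component gets exercised in three unrelated settings before being accepted as truly general:

```python
def summarize(records, key, value):
    """A hypothetical candidate for a reuse library: group records
    by a caller-supplied key and total a caller-supplied value.
    Generalized from one specific report, per the rule of three."""
    totals = {}
    for rec in records:
        k = key(rec)
        totals[k] = totals.get(k, 0) + value(rec)
    return totals

# Rule-of-three check: try the component in three different
# applications before admitting it to the reuse library.
payroll = summarize([("ops", 100), ("dev", 200), ("ops", 50)],
                    key=lambda r: r[0], value=lambda r: r[1])
inventory = summarize([{"sku": "a", "qty": 3}, {"sku": "a", "qty": 4}],
                      key=lambda r: r["sku"], value=lambda r: r["qty"])
telemetry = summarize([(1, 0.5), (2, 1.5), (1, 2.0)],
                      key=lambda r: r[0], value=lambda r: r[1])
```

If the component had quietly assumed, say, tuples of exactly two fields, the second trial application would have flushed that assumption out; that is precisely what the three trials are for.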
This fact represents a couple of rules of thumb, rules that few have reason to doubt. Everyone would acknowledge that reusable components are harder to develop and require more verification than their single-task brethren. The number three might be argued by some, but hardly anyone is likely to defend it to the death, since these are rules of thumb and nothing more.
This fact has come to be known over the years as "Biggerstaff's Rules of Three." There is a very early paper by Ted Biggerstaff that first mentions reuse rules of three. Unfortunately, the passage of time has eroded my ability to recall the specific reference, and my many attempts to use the Internet to overcome that memory loss have not helped. However, in the References section, I mention sources that document Biggerstaff's role.
I have a particular reason for remembering the rules of thumb and Biggerstaff, however. At the time Biggerstaff's material was published, I was working on a generalized report generator program for business applications (I mentioned it earlier in passing). I had been given three report generators (for very specific tasks) to program, and, since I had never written a report generator program before, I gave more than the usual amount of thought to the problem.
The development of the first of the three generators went quite slowly, as I thought about all the problems that, to me, were unique. Summing up columns of figures. Summing sums. Summing sums of sums. There were some interesting problems, very different from the scientific domain that I was accustomed to, to be solved here.
The second program didn't go much faster. The reason was that I was beginning to realize how much these three programs were going to have in common, and it had occurred to me that a generalized solution might even work.
The third program went quite smoothly. The generalized approaches that had evolved in addressing the second problem (while remembering the first) worked nicely. Not only was the result of the third programming effort the third required report generator, but it also resulted in a general-purpose report generator. (I called it JARGON. The origin of the acronym is embarrassing and slightly complicated, but forgive me while I explain it. The company for which I worked at the time was Aerojet. The homegrown operating system we used there was called Nimble. And JARGON stood for Jeneralized (ouch!) Aerojet Report Generator on Nimble.)
Now, I had already formed the opinion that thinking through all three specific projects had been necessary to evolve the generalized solution. In fact, I had formed the opinion that the only reasonable way to create a generalized problem solution was to create three solutions to specific versions of that problem. And along came Biggerstaff's paper. You can see why I have remembered it all these years.
Unfortunately, I can't verify the first rule of three, the one about it taking three times as long. But I am absolutely certain that, in creating JARGON, it took me considerably longer than producing one very specific report generator. I find the number three quite credible in this context, also.
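The summing-sums-of-sums generalization at the heart of a program like JARGON can be sketched as a recursive subtotal routine. This is illustrative modern Python, not anything resembling the original code; the data and the key functions are hypothetical:

```python
def subtotals(rows, group_keys, amount):
    """Sum a column at every grouping level: plain sums, sums of
    sums, and sums of sums of sums, for as many levels as there are
    keys.  Returns (total_for_these_rows, {group_value: nested_result})."""
    total = sum(amount(r) for r in rows)
    if not group_keys:
        return (total, {})
    head, rest = group_keys[0], group_keys[1:]
    groups = {}
    for r in rows:
        groups.setdefault(head(r), []).append(r)
    return (total, {k: subtotals(g, rest, amount)
                    for k, g in groups.items()})

# Usage: sales rows grouped by region, then by product (made-up data).
sales = [("west", "a", 10), ("west", "b", 5), ("east", "a", 7)]
grand_total, by_region = subtotals(
    sales,
    group_keys=[lambda r: r[0], lambda r: r[1]],
    amount=lambda r: r[2],
)
```

The three specific report generators each hard-coded one such grouping; the generalized version has to treat the grouping levels themselves as a parameter, which is exactly the kind of insight that took all three specific solutions to arrive at.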
Biggerstaff, Ted, and Alan J. Perlis, eds. 1989. Software Reusability. New York: ACM Press.
Tracz, Will. 1995. Confessions of a Used Program Salesman: Institutionalizing Reuse. Reading, MA: Addison-Wesley.
Modification of reused code is particularly error-prone. If more than 20 to 25 percent of a component is to be revised, it is more efficient and effective to rewrite it from scratch.
So reuse-in-the-large is very difficult (if not impossible), except for families of applications, primarily because of the diversity of the problems solved by software. Why not, then, just change the notion of reuse-in-the-large a little bit? Instead of reusing components as is, why not modify them to fit the problem at hand? Then, with appropriate modifications, we could get those components to work anywhere, even in totally unrelated families of applications.
As it turns out, that idea is a dead end also. Because of the complexity involved in building and maintaining significant software systems (we will return to this concept in future facts), modifying existing software can be quite difficult. Typically, a software system is built to a certain design envelope (the framework that enables but at the same time bounds the chosen solution) and with a design philosophy (different people will often choose very different approaches to building the same software solution). Unless the person trying to modify a piece of software understands that envelope and accepts that philosophy, it will be very difficult to complete a modification successfully.
Furthermore, often a design envelope fits the problem at hand very nicely but may completely constrain solving any problem not accommodated within the envelope, such as the one required to make a component reusable across domains. (Note that this is a problem inherent in the Extreme Programming approach, which opts for early and simple design solutions, making subsequent modification to fit an enhancement to the original solution potentially very difficult.)
There is another problem underlying the difficulties of modifying existing software. Those who have studied the tasks of software maintenance find that there is one task whose difficulties overwhelm all the other tasks of modifying software. That task is "comprehending the existing solution." It is a well-known phenomenon in software that even the programmer who originally built the solution may find it difficult to modify some months later.
To solve those problems, software people have invented the notion of maintenance documentation: documentation that describes how a program works and why it works that way. Often such documentation starts with the original software design document and builds on that. But here we run into another software phenomenon. Although everyone accepts the need for maintenance documentation, its creation is usually the first piece of baggage thrown overboard when a software project gets in cost or schedule trouble. As a result, the number of software systems with adequate maintenance documentation is nearly nil.
To make matters worse, during maintenance itself, as the software is modified (and modification is the dominant activity of the software field, as we see in Fact 42), whatever maintenance documentation exists is probably not modified to match. The result is that there may or may not be any maintenance documentation, but if there is, it is quite likely out-of-date and therefore unreliable. Given all of that, most software maintenance is done from reading the code.
And there we are back to square one. It is difficult to modify software. Things that might help are seldom employed or are employed improperly. And the reason for the lack of such support is often our old enemies, schedule and cost pressure. There is a Catch-22 here, and until we find another way of managing software projects, this collection of dilemmas is unlikely to change.
There is a corollary to this particular fact about revising software components:
It is almost always a mistake to modify packaged, vendor-produced software systems.
It is a mistake because such modification is quite difficult; that's what we have just finished discussing. But it is a mistake for another reason. With vendor-supplied software, there are typically rereleases of the product, wherein the vendor solves old problems, adds new functionality, or both. Usually, it is desirable for customers to employ such new releases (in fact, vendors often stop maintaining old releases after some period of time, at which point users may have no choice but to upgrade to a new release).
The problem with in-house package modifications is that they must be redone with every such new release. And if the vendor changes the solution approach sufficiently, the old modification may have to be redesigned totally to fit into the new version. Thus modifying packaged software is a never-ending proposition, one that continues to cost each time a new version is used. In addition to the unpleasant financial costs of doing that, there is probably no task that software people hate more than making the same old modification to a piece of software over and over again. Morale costs join dollar costs as the primary reason for accepting this corollary as fact.
There is nothing new about this corollary. I can remember, back in the 1960s, considering how to solve a particular problem and rejecting modification of vendor software on the grounds that it would be, long-term, the most disastrous solution approach. Unfortunately, as with many of the other frequently forgotten facts discussed in this book, we seem to have to keep learning that lesson over and over again.
In some research I did on the maintenance of Enterprise Resource Planning (ERP) systems (SAP's, for example), several users said that they had modified the ERP software in-house, only to back out of those changes when they realized to their horror what they had signed up for.
Note that this same problem has interesting ramifications for the open-source software movement. It is easy to access open-source code to modify it, but the wisdom of doing so is clearly questionable, unless the once-modified version of the open-source code is to become a new fork in the system's development, never to merge with the standard version again. I have never heard open-source advocates discuss this particular problem. (One solution, of course, would be for the key players for the open-source code in question to accept those in-house modifications as part of the standard version. But there is never any guarantee that they will choose to do that.)
To accept these facts, it is necessary to accept another fact: that software products are difficult to build and maintain. Software practitioners generally accept this notion. There is, unfortunately, a belief (typically among those who have never built production-quality software) that constructing and maintaining software solutions is easy. Often this belief emerges from those who have never seen the software solution to a problem of any magnitude, either because they have dealt only with toy problems (this is a problem for many academics and their students) or because their only exposure to software has been through some sort of computer literacy course wherein the most complicated piece of software examined was one that displayed "Hello, World" on a screen.
Because of the rampant naiveté inherent in that belief, there are many who simply will not accept the fact that modifying existing software is difficult. Those people, therefore, will continue to hold the belief that solution modification is the right approach to overcoming the diversity problems of reuse-in-the-large (and, I suppose, for tailoring vendor packages). There is probably nothing to be done for people who adhere to that belief, except to ignore them whenever possible.
The primary fact here was discovered in research studies of software errors and software cost estimation. The SEL of NASA-Goddard, an organization that we discuss frequently in this book, conducted studies of precisely the problem of whether modifying old code was more cost-effective than starting a new version from scratch (McGarry et al. 1984; Thomas 1997). Their findings were impressive and quite clear. If a software system is to be modified at or above the 20 to 25 percent level, then it is cheaper and easier to start over and build a new product. That percentage is low; surprisingly low, in fact.
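The SEL rule of thumb is simple enough to state as code. This trivial helper is my own encoding of it, not anything from the SEL studies; the default threshold is the conservative (20 percent) end of their reported range.

```python
def reuse_or_rewrite(fraction_to_modify, threshold=0.20):
    """Apply the SEL rule of thumb: if the planned modification touches
    roughly 20 to 25 percent or more of a component, rewriting it from
    scratch is likely cheaper and more effective than modifying it.

    fraction_to_modify -- estimated fraction of the component to be revised
    threshold -- cutoff fraction (default: the 20 percent end of the range)
    """
    return "rewrite" if fraction_to_modify >= threshold else "modify"
```

For example, a component needing a 30 percent revision falls on the "rewrite" side of the line, while a 10 percent revision falls on the "modify" side.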
You may recall that the SEL specializes in software for a very specific application domain: flight dynamics. You may also recall that the SEL has been extremely successful in using reuse-in-the-large to solve problems in their very specialized domain. One might choose to question its findings on the grounds that they might differ for other domains; but, on the other hand, my tendency is to accept them because (a) the SEL appears to be more than objective in its explorations of this (and other) subjects, (b) the SEL was quite motivated to make reuse-in-the-large work in whatever way it could be made to work, and (c) my own experience is that modifying software built by someone else is extremely difficult to get right. (Not to mention that famous quotation from Fred Brooks: "software work is the most complex that humanity has ever undertaken.")
Brooks, Frederick P., Jr. 1995. The Mythical Man-Month. Anniversary ed. Reading, MA: Addison Wesley.
McGarry, F., G. Page, D. Card, et al. 1984. "An Approach to Software Cost Estimation." NASA Software Engineering Laboratory, SEL-83-001 (Feb.). This study found the figure to be 20 percent.
Thomas, William, Alex Delis, and Victor R. Basili. 1997. "An Analysis of Errors in a Reuse-Oriented Development Environment." Journal of Systems and Software 38, no. 3. This study reports the 25 percent figure.
Design pattern reuse is one solution to the problems inherent in code reuse.
Up until now, this discussion of reuse has been pretty discouraging. Reuse-in-the-small is a well-solved problem and has been for over 45 years. Reuse-in-the-large is a nearly intractable problem, one we may never solve except within application families of similar problems. And modifying reusable components is often difficult and not a very good idea. So what's a programmer to do to avoid starting from scratch on each new problem that comes down the pike?
One thing that software practitioners have always done is to solve today's problem by remembering yesterday's solution. They used to carry code listings from one job to the next, until the time came that software had value (in the 1970s), and then various corporate contractual provisions and some laws made it illegal to do.
One way or another, of course, programmers still do. They may carry their prior solutions in their heads or they may actually carry them on disk or paper, but the need to reuse yesterday's solution in today's program is too compelling to quit doing it entirely. As a legal consultant, I have, on occasion, been called on to deal with the consequences of such occurrences.
Those transported solutions are often not reinstated verbatim from the old code. More often, those previous solutions are kept because of the design concepts that are embodied in the code. At a conference a couple of decades ago, Visser (1987) reported what most practitioners already know: "Designers rarely start from scratch."
What we are saying here is that there is another level at which to talk about reuse. We can talk about reusing code, as we have just finished doing. And we can talk about reusing design. Design reuse exploded dramatically in the 1990s. It was an idea as old as software itself; and yet, when it was packaged in the new form of "design patterns," suddenly it had new applicability and new respect. Design patterns, nicely defined and discussed in the first book on the subject (Gamma et al. 1995), gained immediate credibility in both the practitioner and academic communities.
What is a design pattern? It is a description of a problem that occurs over and over again, accompanied by a design solution to that problem. A pattern has four essential elements: a name, a description of when the solution should be applied, the solution itself, and the consequences of using that solution.
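As an illustration, here is one well-known pattern from the Gamma et al. catalog, Observer, sketched in Python. The comments mark the four essential elements; the class and method names are mine, not a prescribed form.

```python
class Subject:
    """Pattern name: Observer (from the Gamma et al. catalog).

    When to apply it: when a change to one object requires changing
    others, and you don't know in advance how many others there are.

    Consequences: subjects and observers stay loosely coupled, but
    notification order and update cascades can be hard to reason about.
    """

    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        """Register a callable to be invoked on every state change."""
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        # The solution itself: broadcast each change to all observers,
        # without the subject knowing anything about who they are.
        for observer in self._observers:
            observer(state)

log = []
subject = Subject()
subject.attach(log.append)   # any callable can observe
subject.set_state("ready")
subject.set_state("done")
print(log)                   # ['ready', 'done']
```

The value of the pattern description is not the dozen lines of code, which any practitioner could write, but the named, reusable design decision and its documented consequences.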
Why were patterns so quickly accepted by the field? Practitioners recognized that what was happening here was something they had always done, but now it was cloaked in new structure and new respectability. Academics recognized that patterns were in some ways a more interesting concept than code reuse, in that they involved design, something much more abstract and conceptual than code.
In spite of the excitement about patterns, it is not obvious that they have had a major impact in the form of changed practice. There are probably two reasons for that.
Practitioners, as I noted earlier, had already been doing this kind of thing.
Initially, at least, most published patterns were so-called housekeeping (rudimentary, nondomain-specific) patterns. The need to find domain-specific patterns is gradually being recognized and satisfied.
This particular fact has its own interesting corollary:
Design patterns emerge from practice, not from theory.
Gamma and his colleagues (1995) acknowledge the role of practice, saying things like "None of the design patterns in this book describes new or unproven designs . . . [they] have been applied more than once in different systems" and "expert designers . . . reuse solutions that have worked for them in the past." This is a particularly interesting case of practice leading theory. Practice provided the notion of, and tales of the success of, something that came to be called patterns. Discovering this, theory built a framework around this new notion of patterns and facilitated the documentation of those patterns in a new and even more useful way.
The notion of design patterns is widely accepted. There is an enthusiastic community of academics who study and publish ever-widening circles of patterns. Practitioners value their work in that it provides organization and structure, as well as new patterns with which they may not be familiar.
It is difficult, however, to measure the impact of this new work on practice. There are no studies of which I am aware as to how much of a typical application program is based on formalized patterns. And some say that the overuse of patterns (trying to wedge them into programs where they don't fit) can lead to "unintelligible . . . code, . . . decorators on top of facades generated by factories."
Still, since no one doubts the value of the work, it is safe to say that design patterns represent one of the most unequivocally satisfying, least forgotten, truths of the software field.
In recent years, a plethora of books on patterns has emerged. There are almost no bad books in this collection; anything you read on patterns is likely to be useful. Most patterns books, in fact, are actually a catalog of patterns collected on some common theme. The most important book on patterns, the pioneer and now-classic book, is that by Gamma et al.; its four authors have become known as the "Gang of Four." It is listed in the References section that follows.
Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design Patterns. Reading, MA: Addison-Wesley.
Visser, Willemien. 1987. "Strategies in Programming Programmable Controllers: A Field Study of a Professional Programmer." Proceedings of the Empirical Studies of Programmers: Second Workshop. Ablex Publishing Corp.
For every 25 percent increase in problem complexity, there is a 100 percent increase in complexity of the software solution. That's not a condition to try to change (even though reducing complexity is always a desirable thing to do); that's just the way it is.
This is one of my favorite facts. It is a favorite because it is so little known, so compellingly important, and so clear in its explanation. We've already learned that software is very difficult to produce and maintain. This fact explains why that is so.
It explains a lot of the other facts in this book, as well.
Why are people so important? (Because it takes considerable intelligence and skill to overcome complexity.)
Why is estimation so difficult? (Because our solutions are so much more complicated than our problems appear to be.)
Why is reuse-in-the-large unsuccessful? (Because complexity magnifies diversity.)
Why is there a requirements explosion (as we move from requirements to design, explicit requirements explode into the hugely more numerous implicit requirements necessary to produce a workable design)? (Because we are moving from the 25 percent part of the world to the 100 percent part.)
Why are there so many different correct approaches to designing the solution to a problem? (Because the solution space is so complex.)
Why do the best designers use iterative, heuristic approaches? (Because there are seldom any simple and obvious design solutions.)
Why is design seldom optimized? (Because optimization is nearly impossible in the face of significant complexity.)
Why is 100 percent test coverage rarely possible and, in any case, insufficient? (Because of the enormous number of paths in most programs and because software complexity leads to errors that coverage cannot trap.)
Why are inspections the most effective and efficient error removal approach? (Because it takes a human to filter through all that complexity to spot errors.)
Why is software maintenance such a time consumer? (Because it is seldom possible to determine at the outset all the ramifications of a problem solution.)
Why is "understanding the existing product" the most dominant and difficult task of software maintenance? (Because there are so many possible correct solution approaches to solving any one problem.)
Why does software have so many errors? (Because it is so difficult to get it right the first time.)
Why do software researchers resort to advocacy? (Perhaps because, in the world of complex software, it is too difficult to perform the desperately needed evaluative research that ought to precede advocacy.)
Wow! It wasn't until I began constructing this list that I really realized how important this one fact is. If you remember nothing else from reading this book, remember this: For every 25 percent increase in problem complexity, there is a 100 percent increase in the complexity of the software solution. And remember, also, that there are no silver bullets for overcoming this problem. Software solutions are complex because that's the nature of this particular beast.
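One quantitative way to read the 25/100 rule (my extrapolation, not Woodfield's): if every 1.25-fold growth in problem complexity doubles solution complexity, and the relationship is a power law, then the exponent is log 2 / log 1.25, roughly 3.1.

```python
import math

# Assume a power law S = P**k relating solution complexity S to
# problem complexity P. The rule says multiplying P by 1.25
# multiplies S by 2, so 2 = 1.25**k, giving k = log 2 / log 1.25.
k = math.log(2) / math.log(1.25)
print(round(k, 2))        # ~3.11

# Under that reading, doubling the problem's complexity would
# multiply the solution's complexity by 2**k:
print(round(2 ** k, 1))   # ~8.6
```

Whether or not the power-law reading is faithful to Woodfield's data, it conveys the essential message: solution complexity grows far faster than problem complexity.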
This particular fact is little known. If there were greater awareness, I suppose there would be controversy as to its truth, with some (especially those who believe that software solutions are easy) claiming that whatever solution complexity exists is caused by inept programmers, not inherent complexity.
Woodfield, Scott N. 1979. "An Experiment on Unit Increase in Problem Complexity." IEEE Transactions on Software Engineering (Mar.). Finding this source caused me more work than any other in this book. I looked through my old lecture notes and books (I was sure I had quoted this somewhere else), used search engines, and e-mailed so many colleagues I think I must have begun to annoy some of them (none had heard of this quote, but all of them said they wished they had). In the final analysis, it was Dennis Taylor of IEEE who found the correct citation and Vic Basili of the University of Maryland who got a copy of the paper for me. Thanks!
Eighty percent of software work is intellectual. A fair amount of it is creative. Little of it is clerical.
Through the years, a controversy has raged about whether software work is trivial and can be automated, or whether it is in fact the most complex task ever undertaken by humanity.
In the trivial/automated camp are noted authors of books like Programming without Programmers and CASE: The Automation of Software and researchers who have attempted or claimed to have achieved the automation of the generation of code from specifications. In the "most complex" camp are noted software engineers like Fred Brooks and David Parnas. In spite of the extremely wide diversity of these opinions, there have been few attempts to shed objective light on this vitally important matter. It was almost as if everyone had long ago chosen up sides and felt no need to study the validity of his or her beliefs. This fact, however, is about a study that did just that. (It is also, by the way, a wonderful illustration of another raging controversy in the field: Which is more important in computing research, rigor or relevance? I will return to that secondary controversy when I finish dealing with the first.)
How would you go about determining whether computing work was trivial/automatable or exceedingly complex? The answer to that question, for this piece of research at least, is to study programmers at work. Systems analysts were videotaped performing systems analysis (requirements definition) tasks. They were seated at a desk, analyzing the description of a problem that they were to solve later. I was the researcher who led this project, and examining those videotapes was a fascinating (and yet boring) experience. For vast quantities of time, the subject systems analysts did absolutely nothing (that was the boring part). Then, periodically, they would jot something down (this was also boring, but a light that made this whole thing fascinating was beginning to dawn).
After I had observed this pattern for some period of time, it became obvious that when the subjects were sitting and doing nothing, they were thinking; and when they were jotting something down, it was to record the result of that thinking. A bit more research consideration, and it became clear that the thinking time constituted the intellectual component of the task, and the jotting time constituted the clerical part.
Now things really began to get interesting. As the videotaped results for a number of subjects were analyzed, a pattern soon emerged. Subjects spent roughly 80 percent of their time thinking and 20 percent of their time jotting. Or, putting things another way, 80 percent of the systems analysis task, at least as these subjects were performing it, was intellectual, and the remaining 20 percent was what I came to call clerical. And these findings were relatively constant across a number of subjects.
Let's return for a moment to that rigor/relevance issue. This was not, as you can imagine, a terribly sophisticated research process. From a researcher point of view, it definitely lacked rigor. But talk about relevance! I could not imagine a more relevant research study than one that cast light on this issue. Nevertheless, my research colleagues on this study convinced me that a little more rigor was in order. We decided to add another facet to the research study. One weakness of the existing work was that it examined only systems analysis, not the whole of the software development task. Another was that it was empirical, relying on a small number of subjects who just happened to have been available when the study was run.
The second facet overcame those problems. We decided to look at the whole of software development by looking at taxonomies of its tasks. We decided to take those tasks and categorize them as to whether they were primarily intellectual or primarily clerical.
Now things get almost eerie. The categorization showed that 80 percent of those software development tasks were classified as intellectual and 20 percent were classified as clerical: the same 80/20 ratio that had emerged from the empirical study of systems analysis.
It would be wrong to make too much of the likeness of those 80/20s. The two facets of the study looked at very different things using very different research approaches. Likely, those 80/20s are more coincidental than significant. And yet, at least in the spirit of relevance if not rigor, it seems fair to say that it is quite possible that software development in general is 80 percent intellectual and 20 percent clerical. And that says something important, I would assert, about that trivial/automatable versus complex controversy. That which is clerical may be trivial and automatable, but that which is intellectual is unlikely to be.
There is a small addendum to this story. This research eventually evolved into an examination of the creative (not just intellectual) aspects of software development. As with the second facet of the first part of the research, we categorized the intellectual portion of those same tasks as to whether they were creative. After the amazing 80/20 finding of the first part of the research, we expected some similar lightning bolt result about how much of the software development job is creative.
We were to be disappointed. Our first problem was finding a useful and workable definition of creativity. But there is a creativity literature, and we were finally able to do that. The good news is that we did discover, according to our research, that roughly 16 percent of those tasks were classified as creative. But the bad news is that there were considerable differences among the classifiers; one thought only 6 percent of those tasks were creative, whereas another thought that 29 percent were. Regardless, it is certainly possible to say that a major portion of the work of software development is intellectual as opposed to clerical, and at least a significant but minor portion is even creative. And, to me at least, that clearly says that software work is quite complex, not at all trivial or automatable.
There is little left to say about the controversy involved here, since the entire Discussion section is about a pair of controversies. I would like to say that, in my mind at least, this study settled the first controversy: software construction is definitely more complex than it is trivial. It also gives what I consider to be an excellent example of why rigor in research is not enough. If I had to choose between a rigorous study that was not relevant or a relevant one that was not rigorous, I would frequently choose relevance as my major goal. That, of course, is a practitioner's view. True researchers see things very differently.
In spite of my strong beliefs resulting from these studies, I have to confess that both controversies continue to rage. And, quite likely, neither will (or perhaps even should) be resolved.
In fact, the latest instantiation of the first controversy, the one about trivial/automatable, takes a somewhat different tack. Jacobson (2002), one of object orientation's "three amigos" (the three methodologists who, at Rational Software, created the Unified Modeling Language (UML) object-oriented methodology), takes the point of view that most of software's work is "routine." (He does this in an article analyzing the relationship between agile software processes and his UML methodology.) He cites the numbers 80 percent routine and 20 percent creative as emerging from "discussions with colleagues . . . and . . . my own experience." Obviously his 20 percent creative tracks with this fact, but his 80 percent routine certainly does not. Note that Jacobson fails to take into account the intermediate category, "intellectual," something important between creative and routine.
The intellectual/clerical and creative/intellectual/clerical studies were published in several places, but they are both found in the following book:
Glass, Robert L. 1995. Software Creativity. Section 2.6, "Intellectual vs. Clerical Tasks." Englewood Cliffs, NJ: Prentice-Hall.
Jacobson, Ivar. 2002. "A Resounding 'Yes' to Agile Processes, but Also to More." Cutter IT Journal, Jan.