
Economics of Software Quality: An Interview with Capers Jones, Part 1 of 2 (Podcast Transcript)

In this two-part interview, Rex Black talks with Capers Jones about his book The Economics of Software Quality, in which leading software quality experts Jones and Olivier Bonsignour show how to measure the economic impact of quality and how to use this data to deliver exceptional business value.

Welcome to OnSoftware, conversations with the industry's leading experts on a wide range of programming and development topics. You can access related resources, recommended reading, special offers, and more when you visit the OnSoftware resource center at InformIT.com/softwarecenter.

Rex Black: Hello, and welcome to a special podcast. I'm Rex Black, president of RBCS, a worldwide testing and quality assurance firm, providing training, consulting, and outsourcing services to clients ranging from small startups to Fortune 20 global enterprises. I'm also the author of eight books on the topic of software testing and quality, which is why I'm really pleased today to have the opportunity to talk to another author, Capers Jones, about his latest book.

If you don't know him, Capers Jones is currently the president of Capers Jones & Associates LLC. He is also the founder and former chairman of Software Productivity Research LLC (SPR), where he holds the title of Chief Scientist Emeritus. Capers Jones founded SPR in 1984. Before founding SPR, he was Assistant Director of Programming Technology at the ITT Corporation's programming technology center in Stratford, Connecticut. Earlier, he was a manager and software researcher at IBM in California, where he designed IBM's first software cost-estimating tool in 1973.

Capers Jones is a well-known author and international public speaker. Some of his books have been translated into five languages. His two most recent books are Software Engineering Best Practices: Lessons from Successful Projects in the Top Companies [McGraw-Hill, 2009] and The Economics of Software Quality [Addison-Wesley Professional, 2011]. Among his other book titles are Patterns of Software Systems Failure and Success [International Thomson Computer Press, 1995], Applied Software Measurement: Global Analysis of Productivity and Quality [McGraw-Hill, 2008], Software Quality: Analysis and Guidelines for Success [International Thomson Computer Press, 2000], Estimating Software Costs: Bringing Realism to Estimating [McGraw-Hill, 2007], and Software Assessments, Benchmarks, and Best Practices [Addison-Wesley Professional, 2000].

Today, Capers and I will talk about The Economics of Software Quality. In that book, Capers Jones and Olivier Bonsignour show how to systematically measure the economic impact of quality and how to use this information to deliver far more business value. For more information, and to purchase, visit InformIT.com/ESQ.

So, Capers, let's talk about your new book. Can you give our listeners a brief description of the book and its main themes?

Capers Jones: Well, thank you, Rex. The book is about the economics of software quality. And the reason that I had to write such a book, together with Olivier, was that many companies don't actually measure quality, and when they do try to measure it, they end up using metrics such as "cost per defect" or "lines of code" that actually distort the economic value of quality and make it difficult to see where the real advantages are. For example, "cost per defect" tends to penalize high quality and achieves its lowest values where the number of bugs is greatest. And "lines of code" metrics, of course, can't be used to measure requirements and design defects, which outnumber coding defects, and they also tend to penalize high-level languages. So the bottom line is that, for a long time, the industry has suffered under a misapprehension that high quality comes with high cost. But when you measure properly—when you use, for example, function-point metrics for normalizing the data—you discover that high quality brings lower costs, shorter schedules, and significant reductions in maintenance costs, all at the same time. But in order to see these economic advantages, you have to know how to measure quality and get the numbers right.
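
To make the distortion Capers describes concrete, here is a minimal sketch in Python, using made-up numbers rather than anything from the book: a fixed cost to prepare and run the tests, plus a variable cost per repair, makes "cost per defect" rise as quality improves, while a size-normalized figure such as defect-removal cost per function point falls.

    # Illustrative sketch: why "cost per defect" penalizes high quality.
    # All figures below are hypothetical, chosen only to show the shape of the curve.
    FIXED_TEST_COST = 10_000   # writing and running the test suite (assumed)
    COST_PER_FIX = 200         # repairing one defect found (assumed)
    FUNCTION_POINTS = 100      # size of the application (assumed)

    for defects_found in (500, 50, 5):  # from low quality to high quality
        total_cost = FIXED_TEST_COST + COST_PER_FIX * defects_found
        print(f"{defects_found:>3} defects: "
              f"cost per defect = ${total_cost / defects_found:,.0f}, "
              f"removal cost per function point = ${total_cost / FUNCTION_POINTS:,.0f}")

    # As defects drop from 500 to 5, cost per defect climbs from $220 to $2,200,
    # while removal cost per function point falls from $1,100 to $110, which is
    # where the real economic advantage becomes visible.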

RB: Great. I really enjoyed reviewing the book. There are so many excellent points raised, such as the ones that you just mentioned, and they're all demonstrated with data. We see a lot of our clients, both large and small, making some really critical decisions about how to manage their software process and software quality without relying on data. So, do you have any thoughts on why we, as software engineers, don't rely more on data in our management decision-making?

CJ: Well, I wish the software engineering field did rely more on data, but unfortunately the level of sophistication in software engineering is roughly equivalent to that of the medical profession before sterile surgical practices were introduced—back when surgeons didn't sterilize their instruments prior to operating. What I mean by that is that a lot of companies do not pay attention to bugs before testing starts, which means that requirements and design bugs are still present, end up in the software, and are almost invisible to some kinds of testing. They don't do pretest static analysis or pretest inspections. They often don't use trained and certified testing personnel, so their test-case designs are somewhat amateurish. They let casually trained developers do most of the testing. The bottom line is that they deliver far too many bugs to clients, they spend far too much time in the test cycle because there are too many bugs when testing starts, and they end up with a very unbalanced combination of high cost and poor quality, primarily because they don't measure well enough to know what the real advantages are.

RB: Yeah, we saw an example of that with a client not too long ago, where, based on the data that they had (which was kind of sketchy, but reliable enough), we were able to estimate that they had excess defect costs somewhere in the neighborhood of $100 million to $250 million a year on a billion-dollar annual IT budget. And it's for exactly the reasons that you just outlined—letting this huge tsunami of bugs flood into the testing process and overwhelm it, which made it very difficult for them to recover once they got into that situation.

CJ: Yes, most projects that run late seem to be on time until testing starts. Then, because of the unexpected deluge of bugs, the testing schedule stretches out to two or three times what was anticipated, sometimes two or three shifts every day, and at that point it's too late to make an effective recovery because the bugs are already there. You have to keep them out before testing begins, in order to have a cost-effective development cycle.

RB: Yeah, what's really stunning sometimes in those situations is that companies, even when you point out that reality to them, refuse to accept it. We've had some clients where we explained, "Look, this is what's happening. You're not getting the bugs out soon enough, and this is why your testing processes are blowing up." And they say, "Well, yes, we understand that, but we don't have time to do design reviews and requirements reviews." [Chuckles.] Kind of difficult to explain, "No, the reason that you don't have time is that you don't have any time not to do those things," and it's sometimes hard for them to see that.

CJ: Yes, they get into a circular loop kind of situation. They think they don't have time, little realizing that if they did pretest inspections or static analysis, the testing cycle would be so much shorter that they would actually deliver early.

RB: [Laughs.] Yeah. I remember one of the things that really jumped out at me in your book was you had a chapter where you talked about accumulated technical debt, and just had some stunning figures on the accumulations of technical debt in some organizations. Can you maybe explain for our listeners a little bit more about that analysis that you did?

CJ: Yes. Technical debt is in a section that Olivier has written, too, so he'll talk more about it, I suspect, but the gist of the idea is this: Technical debt is the concept that if you skimp on defect removal and defect prevention while you're building software, when you finally deliver [the software], you're going to pay an ever-increasing amount of money for warranty repairs and fixing the bugs that you failed to eliminate before the software was delivered.

It's like paying interest on a loan; this technical debt stretches out for years and years, because in any given calendar year you're not likely to find more than, say, 25 or 30 percent of the bugs that were delivered. So you have at least a four- to six-year period, plus the fact that there's a situation called "bad-fix injection": Something like 7 percent of your attempts to fix a bug add a new bug that wasn't there before. So you have a kind of compound interest that stretches out downstream. The basic idea for reducing technical debt is to prevent or remove as many bugs as possible before testing begins, and then to achieve very-high-efficiency test cases and test stages by using certified test personnel and scientific methods for designing test cases, which will give you higher coverage with a reduced number of test cases. That combination should raise your overall defect-removal efficiency from today's average of maybe 85 percent up to 99 percent, which means that your technical debt, which today is millions of dollars every year, would be cut down to almost nothing.
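
As a rough illustration of that compounding effect, here is a small Python sketch using the figures Capers mentions (roughly 25 to 30 percent of latent bugs found per year, and about 7 percent bad-fix injection); the starting defect count and the repair cost are purely hypothetical.

    # Hypothetical illustration of technical debt being paid down over several years.
    delivered_defects = 1_000   # latent bugs present at delivery (assumed)
    find_rate = 0.30            # share of remaining latent bugs found each year
    bad_fix_rate = 0.07         # fixes that inject a brand-new bug
    cost_per_field_fix = 1_500  # dollars per post-release repair (assumed)

    remaining = delivered_defects
    for year in range(1, 7):
        found = remaining * find_rate
        injected = found * bad_fix_rate          # bad fixes add new latent bugs
        remaining = remaining - found + injected
        print(f"Year {year}: fixed {found:5.0f} bugs, "
              f"repair cost ${found * cost_per_field_fix:>9,.0f}, "
              f"{remaining:5.0f} still latent")

    # Even after six years a substantial residue of bugs remains, which is why the
    # cheapest strategy is to keep them out before testing and delivery, not after.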

RB: Nice suggestions there. So, now, to pick up on something that you mentioned: you said that, on average, about seven percent of defect fixes are bad fixes, which is sometimes referred to in the testing world as regression. Now, we've noticed something interesting with some of our clients when we analyze what percentage of their bugs are regression bugs or bad fixes. In some cases we've seen significantly more than seven percent, and I've generally attributed that to the presence of so-called defect clusters—highly unmaintainable code that also has a lot of bugs in it. Do you have any thoughts on that situation, and on what companies that find themselves with excess bad-fix injection can do to get out of it?

CJ: I do have some thoughts, and also some data—and my data confirms yours. It was discovered back in the late 1970s at IBM that bugs are not randomly distributed through the modules of big systems; they tend to clump in a small number of places IBM called error-prone modules. In one extreme example, a very buggy product had 425 modules, 300 of which were zero-defect modules that never got any bugs, while 57 percent of the entire customer-reported bug rate came from only 31 of the 425 modules. For those error-prone modules, the bad-fix injection rate can approach 50 percent, much higher than the seven percent average. They are often so complicated and so difficult to fix that they need to be surgically removed. In other words, you can't really repair them in place; you have to isolate them and then develop better, newer modules, using better techniques, to replace the ones that are error-prone.
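
The IBM finding is essentially a Pareto analysis, and it is easy to reproduce on your own defect data. The sketch below uses an invented, trivially small defect log (module names and counts are hypothetical) just to show the shape of the report: rank modules by customer-reported bugs and watch how quickly the cumulative share climbs.

    # Hypothetical Pareto analysis for spotting error-prone modules.
    from collections import Counter

    # One module name per customer-reported defect (assumed input format).
    defect_reports = ["billing.c", "billing.c", "parser.c", "billing.c", "ui.c",
                      "parser.c", "billing.c", "report.c", "billing.c", "parser.c"]

    counts = Counter(defect_reports)
    total = sum(counts.values())
    cumulative = 0
    for module, n in counts.most_common():
        cumulative += n
        print(f"{module:10s} {n:3d} bugs  ({cumulative / total:5.1%} cumulative)")

    # Modules at the top of the list with a disproportionate share of the defects
    # are the candidates for surgical replacement rather than repair in place.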

RB: Right. So basically reengineering or refactoring or whatever the buzzword du jour is for that, but get in there and cut that bad stuff out, and replace it with good code.

CJ: Yes, and one caveat: If you let the people try to do it who built the error-prone module in the first place, you may end up with a new error-prone module that's just as bad as the first one; so you have to be sure that you either use better people or better technologies on the replacement version, because you don't want a second-generation error-prone module.

RB: Right. Right. It's difficult enough to convince management in many situations of the need to do that. You know, schedules are so tight, and budgets are so tight, [if] you say, "Well, we need to have two or three of our best programmers go off and spend two or three months, or maybe more, completely reengineering this one part of this system," many times, I would expect managers would react by saying, "Well, gee, you know, that would be nice, but we really can't afford to do that right now. Maybe we'll do it later." And of course that's one of those "laters" that never comes. Do you have any advice to listeners about, when you're in that situation, how to convince managers with data—how to make an economic argument that investing that time really is time well spent?

CJ: Well, I've seen situations where, when managers of commercial software or outsourced projects ignored error-prone modules, the clients sued them. And when it was discovered in court that they had not paid sufficient attention to those things, obviously there was a serious penalty for those who didn't take proactive steps to remove them. If you don't have the kind of software where your clients might sue you for ignoring these things, it's more difficult to make the case.

But, in general, the higher management in the company—the president, the board of directors, and the senior executives, who are on top of the software people—would welcome lowering the overall maintenance costs and the warranty repair costs, because they are enormous. As a matter of fact, one of the problems—the social problems—brought on by poor quality is the fact that company presidents and boards of directors as a class do not actually respect their software communities. They think that the software communities are less professional than the other parts of the company, primarily because they don't understand quality economics; they deliver software with far too many bugs; they're frequently late when they deliver software; and often, which is even worse, a significant percentage of software projects—the big ones—will be canceled and never get delivered at all, primarily because poor quality turned the return on investment from positive to negative. So the plug was pulled, and the projects were never finished.

RB: Yeah. Indeed. I've been involved in a couple post-mortems on projects like that, and it's just amazing—and somewhat depressing—how easily preventable the disasters were, in many cases.

How about for people who are on projects that are still in progress, and are still savable, if you will, and they see the signs of the kind of problems that you're talking about? Not taking time to do design reviews, not taking time to do code reviews, not taking time to do requirements reviews—a sort of, you know, "We'll let all this stuff sort itself out in the system testing or system integration-testing phase." Any tips to people who find themselves in these kinds of emerging-but-still-preventable disasters on how they should educate their managers about the economics of putting the project back on the right path?

CJ: Well, there's kind of a carrot-and-stick sort of suggestion. I worked as an expert witness in 15 lawsuits, and what's interesting is that, in all of the lawsuits where poor quality was part of the case, the technical employees—the software engineers, the test personnel, and Quality Assurance—knew about it, wrote letters to management about these problems, and suggested that they be fixed. It was the project managers who did not seem to pass on the information to the clients and to higher management, because they naïvely hoped that those bugs and problems could be fixed in system test or at some later point, but they weren't. So when the situation finally exploded and ended up in court, it turned out that management's resistance to passing on critical information about quality to clients and higher management was the chief source of the problem. The technical workers, the testers, and the QA people all knew about it and wanted to fix the problems, but were actually prevented by management decisions.

Now, the "carrot" part (that was the "stick" part) is that, if you require as part of your monthly or weekly report for managers to higher management that the first section be called something like "Red Flag Items," and include quantifications of the number of bugs found compared to the number of bugs that were expected to be found, and the significance of those bugs on the status of the project; if you can change the reporting requirements for software projects so that bug counts and their significance are elevated to the very first thing that gets reported, then managers will begin to take quality seriously, and pretty soon the problems will go away. You won't have the same kind of problems, because managers are suddenly required to report information that they had been concealing.

RB: Right. Which is back to that old management aphorism of "what gets measured, gets done," basically, and its corollary that "what doesn't get measured, doesn't get done"; to the extent that organizations don't have good metrics about quality and defects, then that basically means that it won't get managed.

This concludes part 1 of Rex Black's interview with Capers Jones. In part 2, they conclude their discussion of Capers Jones' new book, The Economics of Software Quality.
