Economics of Software Quality: An Interview with Capers Jones, Part 2 of 2 (Podcast Transcript)

Rex Black and Capers Jones continue discussing Capers Jones' book The Economics of Software Quality. Listen to this podcast to learn some surprising and motivating facts about software quality and how to improve it.

This is a transcript of an audio podcast.

Welcome to OnSoftware, conversations with the industry's leading experts on a wide range of programming and development topics. You can access related resources, recommended reading, special offers, and more when you visit the OnSoftware resource center at InformIT.com/softwarecenter.

Rex Black and Capers Jones explore issues of software quality and how to measure it, as discussed in Jones' new book with Olivier Bonsignour, The Economics of Software Quality. In part 1 of this series, they covered the importance of learning how to measure quality in software development and why metrics for quality should be applied early in the process—long before testing begins.

Rex Black: Now, how do you think that we as a profession are doing, in terms of capturing information about defects? Not just steps to reproduce them, that allow the developers to then fix those defects, but also good classification information that allows people to really listen to the software process and its quality capabilities (or lack thereof) and make the kind of intelligent process-improvement decisions that need to be made?

Capers Jones: Well, as an overall industry, we're not doing very well. But there are some major exceptions to that—there are companies that are doing very well. For example, companies like IBM, Motorola, and Boeing—the companies that build large and expensive physical products, like airplanes, computers, and telephone switching systems—absolutely need state-of-the-art quality levels, because otherwise these expensive devices won't work and can't be sold. So the engineered products community is actually pretty sophisticated in both measuring quality and achieving it consistently, with defect-removal efficiencies in the high 90-percent range, as opposed to the more typical 85 percent.
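To make those percentages concrete, here is a minimal sketch in Python, assuming the definition Jones customarily uses for defect-removal efficiency (DRE): the percentage of total defects that are found and removed before release, with the remainder surfacing afterward. The defect counts below are illustrative, not figures from the book.

def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE: defects removed before release, as a percentage of all
    defects found (pre-release plus post-release)."""
    total_defects = found_before_release + found_after_release
    return 100.0 * found_before_release / total_defects

# An engineered-products shop catching 970 of 1,000 total defects
# before release achieves 97% DRE...
print(defect_removal_efficiency(970, 30))   # 97.0
# ...while a more typical organization catching 850 of 1,000 sits at 85%.
print(defect_removal_efficiency(850, 150))  # 85.0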

Where you run into major problems worldwide is with companies that did not have much in the way of quality assurance, and suddenly found themselves building software without adequate understanding of the need for quality. So I'm talking about insurance companies, banks, general manufacturing companies. The high-tech engineered products companies, like aircraft manufacturers and computer folks—they had formal quality assurance even before there were computers, because the physical devices needed high quality. But banks and insurance companies didn't have any quality assurance, because what they did was basically capable of being controlled by accountants and people who had the knowledge in their heads, and when software was suddenly added to the mix, they didn't realize how important quality was. So they're playing catch-up. What they need is to understand and model their quality control approaches on the best practices of the more sophisticated companies, like IBM and the defense contractors.

RB: Yeah, good point. One of the things that I've mentioned on a number of occasions is that, from a testing point of view, the state of the typical practice lags the state of the art by a good 30 to 40 years. What I mean by that is if you look at what the top companies—which are the same kind of companies that you were just talking about—are doing in terms of their testing practices (using certified testers, using formal techniques to develop their test cases, and so forth), and contrast that with the vast majority of companies, which are really kind of stuck back in the early '70s, before Glenford Myers' book The Art of Software Testing—I think that's a fair characterization. Would you say that that doesn't apply just to testing, but is actually true of software quality practices in general?

CJ: Well, I think that it's a general truth that applies to many human activities, and it certainly applies to software. My own data confirms your data, that the leaders are many years ahead of the laggards in terms of understanding and utilizing state-of-the-art methods. But if you look around you at other kinds of things that people do, you discover similar lags in other domains. For example, there was a big gap in adopting paddle wheels and screw propellers for naval boats. There was a big gap in adopting rifles in place of muzzle-loading muskets for armies—it took almost 50 years for rifles to replace muskets. There was a huge gap in accepting concepts like continental drift. So, in a lot of fields, you end up with these gaps.

In fact, there are several books on the scientific revolution, and there are also some psychological studies. They're interesting enough to mention briefly. The psychological studies say that ideas that contradict what you already believe will initially be disbelieved, because your mind can't accept the dissonance, the discontinuity between what you believe and the alternate facts. And so you tend to disbelieve evidence until it becomes overwhelming, and then you make a sudden switch to the new paradigm. Right now, when you try to tell someone that you need inspections and pre-test defect removal like static analysis, they probably will resist it, because in their head they have "Testing is the only thing we need to do." What needs to happen is that the evidence needs to become overwhelming, and then you can see a fairly abrupt transition to a new model, in which inspections, static analysis, formal testing, and certified test personnel become the norm, instead of being resisted.

RB: Yes. We've certainly seen this in testing quite a bit—that cognitive dissonance, and the confirmation bias that people have, where they prefer to hear information that confirms their existing beliefs, rather than accepting information that conflicts with what they expect. Exactly as you say, that disbelief, that misapprehension about what's going on generally persists long after sufficient data has built up on projects that any objective outsider would look at and say, "Whoa, this project is in a great deal of trouble." I also think that part of that is that quality professionals—and I'll include test professionals here—don't necessarily do the world's greatest job at presenting and analyzing the information that they do have: basically, turning raw data into information that non-quality and non-test people can understand. Do you have some thoughts on whether that's a fair observation, and, if so, what people can do to better convey to their fellow software professionals what their findings mean?

CJ: Well, I think it's a fair observation, but I don't have an easy solution to that problem. The companies that I work with, where the data is professionally analyzed and turned into useful information, often employ statisticians in their quality assurance and project offices, so they have professionals who are conversant with data collection and data conversion. That's not training that's provided to ordinary software engineers, and it really isn't provided to ordinary software managers. So, if you work in a small company, with just the normal complement of software engineers, a few testers, some quality assurance people, and a management cadre, you probably won't have the internal skills to do that. But if you work in a big company—an AT&T, a Boeing, a Microsoft, a Google—they have internal statisticians and people who know how to use that data. Unfortunately, smaller companies usually can't afford these specialists, so they end up without access to that kind of analysis.

RB: Yeah. It seems also from my perspective that some of the project management tools that we have, including test management tools, aren't really up to the job, either. They don't provide as much assistance as one could really hope. Often what we see is that people suffer from what I call the "fire hose of data" problem, which is that the test manager or quality assurance manager goes into project status meetings and just unleashes this torrent of raw data—test case counts, test case status, bug counts, bug arrival rates, and so forth—up to 20 or so graphs and charts of information that is far too tactical for the other people sitting in the room to understand, because it's really all just about test project status. None of it really relates back to what was to be tested and what the conclusions are in terms of quality. Do you have any thoughts on where we are in terms of tools to support the kind of economic analysis of software quality information that you talk about in your book? Do you see any promising trends there?

CJ: Well, I've seen the same phenomenon you have—that test managers and others present tons of data, but it doesn't actually show any conclusions, or make top executives aware of what the real problems are. In fact, that's one of the reasons why I wrote this book with Olivier on the economics of software quality: to try to clarify where the real value is from things like inspections, pre-test static analysis, certified testers, better test case design—and make that data easily available to people who may not have internal statisticians of their own. By publishing it, I thought I would provide the data to companies that aren't able to collect it themselves. Now, in terms of the kinds of tools that are needed to do that, there is a relatively sophisticated set of companies that build predictive estimating models that can predict quality as well as cost and schedules: companies like Galorath (SEER), PRICE Systems (TruePlanning), and my old company Software Productivity Research (KnowledgePLAN). All of these tools are relatively sophisticated in predicting quality and the cost of quality. The companies that use them can get a pretty good advance warning about what they need to do, because the tools are sensitive to the utilization of things like inspections and static analysis. But the problem there is that that entire sub-industry of companies that build these tools—the whole set of companies together—has probably reached only 25 to 30 percent of the potential client base, so there are still 70 to 75 percent of companies that don't use these tools.

RB: Right, right. Yeah. Just understanding the importance of being able to predict in advance the number of defects that you're gonna have to deal with on a project is certainly more exceptional than one would want it to be.

CJ: Well, it is exceptional, but there's a very easy rule of thumb that will give you a good first-order approximation. If you take the function-point total of an application, and raise that to the 1.25 power, that will give you an alarmingly good prediction of the probable number of bugs that are gonna be encountered over the life of the project.
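To make this rule of thumb concrete, here is a minimal sketch in Python; the function-point counts below are illustrative examples, not data from the book.

def predicted_defects(function_points):
    """Jones' rule of thumb: the probable number of bugs encountered
    over the life of a project is approximately the function-point
    total raised to the 1.25 power."""
    return function_points ** 1.25

# Roughly 316 defects at 100 FP, about 5,600 at 1,000 FP,
# and 100,000 at 10,000 FP.
for fp in (100, 1_000, 10_000):
    print(f"{fp:>6} function points -> ~{predicted_defects(fp):,.0f} defects")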

RB: I agree. It's not that it's impossible to do reasonably good estimations, and I've seen that particular rule of thumb work, as you say, quite well. And I have seen clients that have put together spreadsheets, based on historical data, that do a reasonable job of defect prediction. But it's surprising, given how relatively simple that [sort of prediction] is, how few people do it. That's one of the things that I really hope for from the publication of your book—that it will raise awareness that, hey, this is a solvable problem. This is not something we have to live with; we're not doomed to suffer from software quality problems. Is that something that you can see as an outcome of your book being published?

CJ: Well, I did publish the book because I wanted to point out that there were technologies available today, that, if they're used, can give you a very synergistic combination of higher quality, lower costs, and happier customers—and do that all at the same time. But, unfortunately, it's almost in a class like a self-help book, like a dieting book or an exercise book—it will help the people that read it and take it seriously, but millions of people may not read it, and therefore may not know about what the solutions are.

RB: Yeah. In addition to some of the things that we've discussed, one of the things that jumped out at me in reading your book is all the remarkable figures in there—shocking figures, often, about the economics of software quality. We've discussed a few of those. In addition to the ones that we've discussed, what would you say are some of the other really big "money pits" in terms of software quality? Do you have some thoughts about how software professionals can go about reducing losses [they] suffer?

CJ: Well, there are many kinds of quality, and they have different solutions. The form of quality that I tend to deal with is traditional functional quality, where you're looking at bugs and defects and the most effective ways of eliminating them. But there's also structural quality, which my coauthor Olivier dealt with, and which deals with whether your application works well in a multi-tiered environment, and how well it communicates and cooperates with the operating system, with related applications, and with the databases that are providing it the data. And then, finally, there is aesthetic quality. That's the cosmetic appearance of the screens and the reports, and the information that goes in and out of the application. That [quality] is normally measured using opinion surveys, because aesthetics are kind of outside the area that can be carefully quantified. But you need to put all of those things together to form a cohesive whole if you want to optimize functional quality, structural quality, and aesthetic quality at the same time.

Which brings up a related point: some of the same large companies that have statisticians, such as Microsoft, Google, and IBM, also have graphic artists and cognitive psychologists who help in designing optimal interfaces. The bottom line is that big, sophisticated companies have done a good job because they have specialists who can help them do a good job. Smaller companies, or companies in industries like insurance and banking that lack these internal specialists, are still groping for solutions.

RB: This sort of brings us back to the historical points that you were making earlier. It reminds me of the early Industrial Revolution, when small groups of people—sometimes just one or two—got together with great ideas and were able to put together machines that, for their time, were remarkable. But if you look back at, say, the first steam engines now, they are obviously very crudely engineered products. So it seems like the implication here is that we in the software engineering field are still in that early phase where people sort of bang things together, and they kind of work, but they don't have a lot of quality—whether we're talking about functional, structural, or aesthetic quality. Do you see us on a similar trajectory, as engineers, and can we expect that the next 50 years will bring the maturation of software engineering?

CJ: Well, I think we will mature over 50 years. But look at the history of computing and software from the 1960s forward. When I got into the industry in the late '60s, there were a lot of computer manufacturers: IBM, RCA, Honeywell, Bull, Control Data Corporation, Digital Equipment, and quite a few others. One of the more interesting large-scale industrial studies demonstrated that one of the reasons IBM tended to dominate the others was that IBM had better quality, and better customer service and customer support, than their major competitors. So they were able to ship complicated products with fewer bugs, and then fix them faster (when customers found those bugs and had trouble with them) than their competitors. And not only for computers, but for other kinds of manufactured products, such as automobiles or television sets, that same phenomenon tends to lead to increases in market share. In other words, the companies with the best quality, the lowest delivered defect rates, and the best customer support end up surviving and doing well, whereas companies with similar products that don't have good customer support don't do so well. In fact, a current proof of this phenomenon is Apple. Every time you look at something that evaluates customer support, Apple turns out to be near the top of the list—and look at their market capitalization and value. What's happened to Apple is primarily a result of building innovative products, but also of making sure that they work pretty well and have good support when they're released.

RB: Good point. Even having been in this field, testing and quality, for years myself, your book has certainly helped raise my awareness of the magnitude of the quality situation with software. I hope a lot of people read it, so that a lot more people do become aware. I know you mentioned that it's kind of like a self-help book—it can only help those who pick it up—and I hope a lot of people do. Have you seen signs over the span of your career, Capers, that we are as an industry gradually becoming more aware of the financial side of software quality?

CJ: Well, yes, I think in general we are becoming more aware of it. That doesn't mean that the trend is moving as fast as I would like to see it, but it certainly seems to be moving. And what may be a little bit alarming is that it tends to be moving faster in some [other] countries than it does here in the United States. For example, if you look overseas, India now has more companies certified to CMMI level 5 (that's the Capability Maturity Model Integration, developed by the Software Engineering Institute) than any other country—I believe including the U.S. So other countries are alert to the fact that quality is a critical factor for increased market share, and tend to be moving at least as fast as, and possibly faster than, we are.

RB: Yeah, we've seen that trend with certified testers, too. There are about 250 percent more certified testers in India than there are in the United States.

CJ: Well, we also saw, for example, the rapid rise of the Japanese economy not long after World War II, when, as a result of Deming and Juran and other people, they began to take quality seriously in their manufactured products, and suddenly catapulted ahead of the United States in the quality of automobiles, television sets, and consumer electronics—and therefore also catapulted ahead of the United States in terms of sales of those products.

RB: Yeah, it seems like human beings in general are not necessarily very good at extrapolating from lessons learned by other groups, in this case other industries, into their own group. Certainly if software professionals would look at the lesson offered by the situation you've just mentioned, the Japanese predominance in manufacturing in the '80s, a lot of the mistakes that we're making in terms of software quality would be clearer, and presumably would be resolved.

We've covered a lot of ground here, but are there any other things that you'd like to bring up to listeners before we close out this podcast about your book and things that they can glean from reading it?

CJ: Well, one small point. One final point. When I worked at ITT, I was part of the Corporate Quality Council, which had been started by Phil Crosby, whose famous book Quality Is Free makes exactly that point. And for software, quality is not only free; it actually has a positive return on investment. So by emphasizing quality in software, you get more than just a free ride. You actually gain market share, you lower your warranty costs, you shorten your development schedules, and you improve team morale. You get a relatively powerful return on investment for a relatively small outlay of cash and energy. So I think software can actually go beyond the concept of "quality is free," to the concept that "quality pays a handsome profit."

RB: That's a great summary, and a great point to end on. The sooner more people start thinking that way, the better off the software industry will be, certainly.

Well, thank you very much, Capers, for the opportunity to talk to you today. I've really enjoyed our discussion, and I hope you have as well.

CJ: Well, thank you, Rex. I enjoyed it. Thanks for the questions, and thanks for being a reviewer of the book and for the opportunity to speak with you.

RB: It was my pleasure.
