
What Is Quality?

In software development, quality goes well beyond having the right screens that generate the right results. Is a car "high quality" simply because it works effectively the day you drive it off the lot? (Quite a few car salesmen would like you to believe that.)

Many cars drive perfectly fine off the lot and operate well during their first 20,000 miles or so. But it would be patently silly to assume that, because a car works okay for the first 20,000 miles, it will continue to work fine for the next 200,000 miles. Quality isn't measured by how something works on its first day; it's measured by how much the product costs to maintain over its expected life.

In computer software, maintainability (the ability to change one area of a system confidently without worrying that unrelated parts will break) is the principal measure of quality. It's also the least frequently measured of all the metrics in a Statement of Work.

But Our Company Is Focused on Quality!

My article Don't 'Enron' Your Software Project covered how easy it is to create "off balance-sheet entities" that contribute to technical debt. It's very easy for a technologist to take short-term actions that optimize development cost while the project is under the terms of the Statement of Work, but lead to grave long-term expense in the years after the initial contract has been paid. Unless the Statement of Work somehow has a way to measure code quality (let's admit it, almost none do), it's not hard to imagine which virtue is the first to go when the team moves into "death march" mode.

Let's say that you're Good Consulting Firm, Inc., and you've decided on a concerted effort to make sure that all software you deliver is of very high quality. You go forth and make sure that you specify enough time in contracts not just to do it fast, but do it correctly, so that your client doesn't have a high maintenance load after the project is done.

This approach can work (sometimes) when there's a good understanding of quality—specifically, the risks and true costs of low quality—between the supplier and the client. However, projects aren't always conceived under those kinds of terms. When multiple firms are competing in an RFP or other competitive bid situation, temptations to make concessions that hurt quality (usually by overpromising on the timeline) are frequent. The more numerous and unscrupulous the competitors, the more likely that this problem will occur.

Especially in downturns, when competition intensifies and "price to win" becomes a strategy, quality is often the softest target for a sales team with mostly short-term incentives (that is, making quota) on the hunt to close a deal.

Measuring Quality Is Hard

Again, intentions are good. The reason that quality tends to lose isn't really competition, "evil companies," or any other more obvious factor. The reason is that quality is harder to measure than time, money, or scope. Despite Einstein's special relativity, time in our everyday experience is a purely objective measure. Exchange rates aside, the same can be said for money. Scope, while rarely fixed (even in projects that claim it is), tends to be highly visible, and that visibility makes it a basis for negotiation. Scope may not be objective, but at least it's visible.

Quality, on the other hand, is rarely measured, much less examined. Even when it is measured, quality usually is measured in terms of unresolved defect count, with defects being defined as things you see from the outside that don't work. While measuring defects is a good start, it doesn't even begin to touch on the costs that come from code that's hard to maintain.

What High-Quality Code Looks Like

High-quality code is loosely coupled. On code review, you see small classes that are, to the extent possible, blissfully unaware of the rest of the system. Changing a business rule in a high-quality code base means making the change in one place, not several. Such code has fewer interacting moving parts; for example, it uses read-only values rather than variables unless something truly needs to vary. Naming conventions are clear and accurately communicate intent.
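
To make this concrete, here is a minimal Python sketch (the discount rule and its numbers are purely illustrative) of a business rule that lives in exactly one small, read-only class, so changing the rule means touching one place:

```python
from dataclasses import dataclass

# frozen=True makes every field read-only after construction: there is
# nothing here that varies, so nothing here can be mutated by accident.
@dataclass(frozen=True)
class DiscountPolicy:
    """A hypothetical business rule, kept in exactly one place."""
    threshold: float = 100.0  # orders at or above this total get a discount
    rate: float = 0.10        # discount rate applied to qualifying orders

    def discount_for(self, order_total: float) -> float:
        """Return the discount amount for a given order total."""
        if order_total >= self.threshold:
            return order_total * self.rate
        return 0.0

policy = DiscountPolicy()
print(policy.discount_for(250.0))  # 25.0
print(policy.discount_for(50.0))   # 0.0
```

Callers depend only on this small class; if the discount rule changes, `DiscountPolicy` is the single place to edit, and nothing else in the system needs to know.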

In well-engineered code, you don't have single "God" classes that know everything about the system, with dozens of interacting moving parts that make the code look like an electronic version of a Rube Goldberg device. Well-engineered code has unit tests for the entire code base that get run prior to any code being checked in. In short, a high-quality code base invites change, making it easy to do the right thing. A low-quality code base makes new developers coming to the project run for cover.
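
The "unit tests run before check-in" habit can be sketched with Python's standard `unittest` module (the function under test is the same hypothetical discount rule; names are illustrative):

```python
import unittest

def apply_discount(total: float, rate: float = 0.10) -> float:
    """Hypothetical business rule: orders of $100 or more get a discount."""
    return total - total * rate if total >= 100.0 else total

class ApplyDiscountTests(unittest.TestCase):
    def test_large_order_is_discounted(self):
        self.assertAlmostEqual(apply_discount(200.0), 180.0)

    def test_small_order_is_unchanged(self):
        self.assertEqual(apply_discount(50.0), 50.0)

# Run the suite programmatically; a pre-check-in hook would refuse the
# commit if result.wasSuccessful() is False.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

In practice, the same suite is wired into a version-control hook or CI job so that no change reaches the shared code base without the whole suite passing.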

That Sounds Great, But How Do We Measure Quality?

Despite protests that quality is subjective, the facts contradict such a conclusion. In mechanical engineering, we have concepts such as tolerance, which specifies that a given dimension must fall within a small range around its target (for instance, 6 plus or minus 0.01). Such specifications are part of a blueprint, which is, literally, a "micro" version of a Statement of Work for a manufacturer. Clearly, we measure production quality in other realms of engineering and craftsmanship. Why not software? Certainly, we can use static analysis tools to determine whether our code meets naming standards, static design standards, and numerous other quality standards. We can use code coverage tools to determine whether we've reached a reasonable threshold of test coverage across the code base. Those two practices alone would be a great start, and I see no reason why such checks shouldn't be made conditions of delivery for software projects.
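
As a toy illustration of what a static-analysis rule looks like (this is a deliberately crude sketch, not a real tool; production tools such as pylint or flake8 implement far more careful versions of rules like this), here is a check built on Python's standard `ast` module that flags function names violating a snake_case naming standard:

```python
import ast

# Sample source to analyze; in a real pipeline this would be read from files.
SOURCE = '''
def computeTotal(x):
    return x

def compute_tax(x):
    return x
'''

def bad_function_names(source: str) -> list:
    """Return function names containing uppercase letters (a crude
    stand-in for a snake_case naming rule)."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and not node.name.islower()
    ]

print(bad_function_names(SOURCE))  # ['computeTotal']
```

A delivery contract could then require, mechanically, that such a check reports zero violations, the same way a blueprint requires a dimension to sit within tolerance.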

Of course, we need to go further. Unit tests don't guarantee correctness. Static analysis is, well, static; it can't exercise all the runtime conditions that more complex programs encounter. In my opinion, there's room in the market for organizations that do independent code auditing. This is already done in the space of security auditing of systems; doing the same from a code-quality standpoint certainly seems plausible.
