According to Philip Crosby's definition, quality is simply conformance to requirements—how often the software behaves as required. But as Gerald Weinberg points out, the question is: whose requirements?

Quality in software development has both an external and an internal face. A user talking about the quality of the system talks about the user interface, response time, reliability, and ease of use of the system. A developer talking about quality talks about elegance of design; ease of maintenance and enhancement; and compliance to standards, patterns, and conventions. Of course, these two faces are related. Low internal quality makes it much, much harder to maintain high external quality. This is especially true for incremental or iterative development, where after the first iteration or increment we're building on what we built before.

Recognizing when a design for an iteration is good enough is a very valuable skill. If we make the design too simple, we add to the rework that we must do in later iterations when we need to extend that functionality and when time pressure may be more intense. If we select a more general and complex design, we could be spending time building in flexibility that will never be used.

It often seems that developers have a natural complexity level; some developers make their designs too complex and others oversimplify. Recognizing these tendencies in themselves helps developers make the appropriate tradeoff decisions, as does a good knowledge of the problem domain. Whatever the tradeoff decisions, it's important that they're made visible to others on the team—that communication word again.

The Quality Spectrum

Splitting quality into external and internal views is really too simple a model; users and developers are not the only people with opinions about a system's quality. A software manager looks at quality in terms of ease of maintenance and enhancement, compliance to standards and conventions, and ability to deliver on time.

Project sponsors look at how well the system meets their business requirements. Does it allow them to adapt to a constantly changing business environment and be proactive in meeting the challenges that are ever present in the marketplace?

Add to this the views of testers, technical writers, technical support personnel, and so on...and we need to look at quality as a spectrum, with internal quality at one end and external quality at the other. Developers and software managers view the system more in terms of internal quality. System users and business sponsors tend to view the system more in terms of external quality.

Building in Quality

A naïve view associates quality with testing of code, and testing with a separate test team that goes to work once coding is complete. This model has severe problems. It's reminiscent of the waterfall approach to software development, in which you do all the analysis first, then all the design, then all the coding, and finally test the system. One of the best-documented problems with the waterfall approach over the last couple of decades is that mistakes made early in the process are often not found until very late in the process. The cost of finding an error later rather than sooner varies from study to study but is always high.

Observe that it is perhaps 100 times as costly to make a change to the requirements during system testing, as it is to make the change during requirements definition.

—Richard Fairley, Software Engineering Concepts (McGraw Hill, 1985)

One way to ease this problem is to split the project into iterations or increments so that the distance in time between analysis and test is reduced. If we test something earlier, we'll find the problem earlier, and the cost of fixing the problem is reduced.

A complementary approach is to use practices that increase quality throughout all four activities: analysis, design, coding, and testing. In other words, broaden the concept of quality so that it applies to more than just the testing of running code, using techniques such as inspections, audits, and metrics tools.

People typically want to do "quality" work. Each developer has his or her own idea about the acceptable level of quality of a system, and it's usually close to the best quality that he or she has achieved in the past.

It's better to ask those developers who have a lower idea of quality to reach for a higher standard than to ask those with higher standards to reduce their ideas of acceptable quality. Our self-esteem is linked to the quality of what we produce. If we're consistently forced to produce what we consider to be low-quality work, we'll lose self-esteem, resulting in low morale and lower productivity. On the other hand, asking developers to produce a higher-quality product than they would naturally do actually enhances their self-esteem...if they can do it.

Even if an organization standardizes on a level of acceptable internal quality, it may well be lower than individual developers' own ideas of acceptable quality. So at the beginning of a project, the development team needs to agree on what is an acceptable level of internal quality and what isn't. Obviously, it can't be lower than that of the organization (except in unusual circumstances), but it may be set higher. This level of quality is made public in published design and coding standards and enforced primarily by some sort of design and code inspection. (Automated source code audits and source code formatters help enforce code naming and layout standards.)
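A minimal sketch of the kind of automated source-code audit mentioned above: checking that function names in a Python source file follow an agreed convention (here, lower_snake_case). The convention itself is an assumption for the example; a real team would encode whatever its published standard says:

```python
# Minimal automated audit: flag function names that violate a published
# naming standard (assumed here to be lower_snake_case).
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def audit_function_names(source):
    """Return the names of functions that violate the naming standard."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and not SNAKE_CASE.match(node.name)
    ]
```

Run over a codebase as part of the build, a check like this makes the team's agreed standard enforceable rather than merely published.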

In iterative, object-oriented software development, where we want to reuse results, we need to ensure that internal quality is built in early so that we have quality to build on later. If we allow low-quality results at the start, we'll find ourselves in a vicious cycle of low quality resulting in more low quality.

A word of caution: Be careful not to confuse quality with optimization. Great results can be made from combinations of sub-optimal parts. Optimizing a small part may make no significant difference to the whole. On the other hand, an over-complicated, bug-ridden part or an incorrect implementation of a requirement can be the cause of many problems in other parts.
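The "optimizing a small part" point can be made precise with Amdahl's law, which bounds the whole-system speedup available from speeding up one component. The 5% figure in the comment is just an example:

```python
# Amdahl's law: if a component accounts for `fraction` of total running
# time and is made `local_speedup` times faster, the overall speedup is
# bounded by the time spent everywhere else. For example, a 10x speedup
# of a component that takes only 5% of the running time improves the
# whole by less than 5%.
def overall_speedup(fraction, local_speedup):
    """Whole-system speedup from speeding up one part (Amdahl's law)."""
    return 1 / ((1 - fraction) + fraction / local_speedup)
```

This is why a sub-optimal but correct part is usually harmless, while a bug-ridden or incorrect part radiates problems into everything built on top of it.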
