Achieving Quality Goals

The preceding concerns and observations have led to the understanding that goal setting, establishment of criteria, and, most important, measurement are critical in achieving quality goals. In terms of software, such considerations have given rise to much research and practice targeted toward the determination of goals and purposes for software development projects, the determination of software product-quality criteria, and software product and process metrics. Beyond that, a number of approaches and paradigms have been based on the premises that (a) measurement is a necessary element of quality determination, (b) measurement must be made against specific criteria, and (c) those criteria must themselves be selected in accordance with a specific purpose or goal. The Goal, Question, Metric (GQM) paradigm put forth by Basili and Selby (1987) is a good example of such effort. In terms of goals for IT development, Curtis (1989) identified the following seven reasons (neither orthogonal nor exhaustive) why organizations develop software and information technology:

  • An alternative system would not cope;

  • Cost savings;

  • The provision of better internal information for decision making;

  • The provision of competitive customer service;

  • The opportunities provided by new technology;

  • High-technology image; and

  • Change in legislation or regulations.

Any one of these reasons, or any combination of them, might be the context that can be used to determine the purpose or the goal with respect to which quality is to be assessed.
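To make the GQM paradigm concrete, the following is a minimal sketch of how such a goal, drawn here from one of Curtis's reasons (cost savings), might be decomposed into questions and the metrics that answer them. The specific goal, question, and metric names are illustrative assumptions, not examples taken from Basili and Selby (1987).

```python
# A minimal sketch of a Goal, Question, Metric (GQM) decomposition.
# The goal, question, and metric names below are illustrative
# assumptions, not examples from Basili and Selby (1987).

from dataclasses import dataclass, field
from typing import List


@dataclass
class Metric:
    name: str
    unit: str


@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)


@dataclass
class Goal:
    purpose: str  # e.g., one of Curtis's reasons, such as cost savings
    questions: List[Question] = field(default_factory=list)


goal = Goal(
    purpose="Reduce the maintenance cost of the billing system",
    questions=[
        Question(
            text="How defect-prone are the delivered modules?",
            metrics=[
                Metric("post-release defects per module", "defects/module"),
                Metric("mean time to repair a defect", "hours"),
            ],
        ),
    ],
)

# Walk the tree: every metric is collected for a reason that traces
# back, via a question, to the stated goal.
for question in goal.questions:
    print(question.text, "->", [m.name for m in question.metrics])
```

The point of the structure is directional: metrics are never chosen in isolation but are justified by a question, which in turn is justified by the goal.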

As noted by Davis (1994), the purpose of a system may be viewed differently by different individuals in an organization. For example, the purpose of a particular information system may be, according to the CEO of the firm, to provide a strategic alliance between the firm and the manufacturer of the system or the firm and its customers. To an administrative officer, the purpose of the system may be to assist with his daily administration. In a scenario like this, it is quite possible that the CEO would consider the system as fulfilling its purpose, yet the administrative officer, forced to battle with a poorly designed user interface, might not share this opinion.

The enigmatic nature of quality has contributed to considerable confusion in the investigation of improving software quality. The basic hurdles are as follows:

  • It is very difficult—if not impossible—to meet an undefined target. As mentioned earlier, software quality is very subjective. Despite many years of research in the field, there is still no universally agreed definition of software quality, let alone a standard measurement system with consensus validity and suitability. This makes defining and communicating a particular level of quality exceedingly challenging, resulting in what is ultimately a futile exercise of shooting in the dark.

  • Views differ as to what represents quality. This again is a consequence of the multiplicity of perceptions of the various stakeholders as to what is a quality product. These perceptions are impacted not only by technology, economics, and negotiating position, but also by culture, philosophy, and the psychology of the stakeholders. Determining what represents quality therefore becomes akin to searching in the dark.

  • Products and processes are not clearly related in a theoretical context. Neither researchers nor practitioners have yet established a direct, explicit, well-understood relationship between the characteristics of the development process to be employed and its capability for yielding a product of a given quality. In this sense, software development resembles stabbing in the dark. This is despite the existence of the Software Engineering Institute Capability Maturity Model (SEI CMM [Humphrey, 1989]) and many other approaches created to the same end (e.g., El-Emam et al., 1997; Koch, 1993) that provide mechanisms to assess the maturity and capability of software processes. These models are essentially reductionist, generally mapping many aspects of organizational and process characteristics (in the case of the CMM) onto a linear scale of discrete values, and they are fundamentally comparative in nature. Most (e.g., the CMM) do not explicitly consider the assessment of product quality from diverse human perspectives. Much further research remains to be done in all these areas.

Research into software and software process quality may be categorized into the three streams of product focus, the link between product and process, and process focus, which are discussed next.

Product Focus

Adherents of this view hold that, because the product is the main artifact of software development, ensuring a high-quality product requires concentrating solely on those features of the product that directly impact quality and ensuring that these features are implemented adequately. Extremists within this category would hold that research into process and process improvement is nonessential, as process quality does not impact product quality (Bach, 1995; Sanders, 1997).

This stance, at least in its extreme form, is unwarranted. As a practitioner and researcher in this field, I have very frequently encountered an argument put forward by the proponents of this camp (e.g., Bach, 1995; Sanders, 1997): there exist many organizations that possess a very well-developed "process" yet continue to fail to produce high-quality software; therefore, the argument continues, there is no relationship between product quality and the software development process. Convincing on the surface, this argument breaks down on closer scrutiny of the definition of the term process. As we said earlier, a software process is the collection of technologies, methodologies, people, and organizational influences that are utilized to produce software products. A common error, however, is to equate the software process with the methodology utilized (e.g., the Booch method; Booch, 1991) or the notation used (e.g., Unified Modeling Language [UML]; Rumbaugh et al., 1999). Because this more restricted definition ignores the influence of technology and people, a variety of shortcomings or strengths demonstrated within a software development activity can no longer be explained adequately, giving rise to questions such as the following: Why is it that Companies X and Y use the same "process" yet produce vastly different results in terms of product quality?

The answer might be that the software development technologies utilized (e.g., Computer-Aided Software Engineering [CASE] tools, PSEE, compilers, static analyzers, etc.) differ between the two firms, or that the organizational aspects are not comparable. For example, it may be that employees at Company X are better trained or more experienced than those at Company Y, or that they employ more stringent verification and validation procedures than those in effect at Company Y. Therefore, the correct way of asking this question is: Why is it that Companies X and Y use the same methodology (e.g., both use SADT), yet one produces a vastly superior product to the other?

A related question is: Why is it that our introduction of CASE as a means of process improvement did not have a significant effect? The answer may lie in other aspects of the process, such as inadequate staff training in the utilization of the technology introduced; training planning is an important organizational aspect of a relatively mature process (Paulk et al., 1993).

Similarly: Why did our ability to build high-quality software diminish significantly when X left the team, although we have not changed the process? The answer is that although the methodology may not have changed, the process obviously has: the departure of X altered the organizational context. This is a common feature of immature software development firms, in which expertise resides in individuals rather than in organizational procedures, and hence not within the process.

All, or at least the vast majority, of the issues just enumerated would be resolved if we expanded the definition of the software process to encompass the technological and organizational aspects. This is consistent with the bulk of informed current research, such as Bootstrap (Haas et al., 1994) and Cleanroom (Dyer, 1992; Mills et al., 1987). Only through the inclusion of these other dimensions can we begin to compare apples with apples.

To produce a better product, one must produce a product that has at least one new, different, or improved feature or characteristic. I cannot envisage otherwise, because to produce a different product, it is necessary to do at least one thing differently. Doing things differently defines a new process. If doing things differently (i.e., a different process) consistently yields a better, higher-quality product, then the new process is a better process, and we have managed to improve the process. We define a process that has thus been improved to be of a higher quality than its predecessor.

This does not mean, however, that work of primarily product focus is not of value. In fact, because process quality cannot be assessed without measurement of product quality, one must start with product quality. Valuable work has been done on many different aspects of the software product artifact. This work, among other things, focuses on the following:

  • Ways to clarify the features a product must possess to be deemed of quality;

  • Ways of providing a measurement basis for assessment of product quality features; and

  • Identification and measurement of the influence of product characteristics that impact each product quality feature.

By its nature, however, this type of work is largely a posteriori, because if anything can be said about a product with certainty, then that product must exist. A software product can therefore be subjected to such analysis only after it has been produced. This is in line with the traditional manufacturing view of software and the idea of output quality control (Taylor, 1911). According to this view, artifact quality is determined for the purpose of acceptance or rejection of that product after it has been built.

If, however, the aim is to ensure that a quality product is built in the first place, then focus must shift to how quality products are built. This necessitates the study of the link between product quality and the process by which such a product can be built. This stance is essentially at the core of the second category of work in software quality: the quest for establishing a firm and quantitative relationship between product quality and the software process. Central to this quest is the search for a predictive model of software product quality.

The Link Between Product and Process

To establish a link between the quality of the product and the process that is utilized to build that product, research must be done in areas covered in the following sections.

Determination of Product Quality Attributes

A large body of work has concentrated on the issue of determining which quality attributes are of importance (e.g., Fenton, 1991; Gillies, 1992). This is significant and fundamentally difficult research to conduct. The difficulty lies in the fact that quality is highly subjective; as such, determination of a static set of universally accepted quality attributes is problematic, if not impossible. Despite this, attempts have been made to arrive at a universally acceptable set of quality attributes through accommodated consensus. The ISO 9126 standard, with its six top-level characteristics (functionality, reliability, usability, efficiency, maintainability, and portability), is the outcome of one such attempt.

Measurement of Product Quality Attributes

Once a set of quality attributes is agreed on, it will be necessary to measure these attributes according to a scale that allows differentiation of various products of differing quality against these yardsticks. A large portion of research in software metrics (e.g., Fenton, 1991; Fenton & Pfleeger, 1996; McCabe, 1976) is targeted to this end.
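As a concrete illustration of such a yardstick, consider one of the oldest product metrics cited above, McCabe's (1976) cyclomatic complexity, which counts the linearly independent paths through a routine's control-flow graph as V(G) = E - N + 2P. The following is a minimal sketch; the example graph, a routine with a single if/else branch, is hypothetical.

```python
# A minimal sketch of McCabe's (1976) cyclomatic complexity,
# V(G) = E - N + 2P, computed over a control-flow graph. The example
# graph (a routine with a single if/else branch) is hypothetical.

def cyclomatic_complexity(edges, nodes, components=1):
    """Return V(G) = E - N + 2P for a control-flow graph with
    E edges, N nodes, and P connected components."""
    return len(edges) - len(nodes) + 2 * components


nodes = {"entry", "test", "then", "else", "exit"}
edges = {
    ("entry", "test"),
    ("test", "then"),   # branch taken
    ("test", "else"),   # branch not taken
    ("then", "exit"),
    ("else", "exit"),
}

# One binary decision yields V(G) = 5 - 5 + 2 = 2 independent paths.
print(cyclomatic_complexity(edges, nodes))  # -> 2
```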

This work has enjoyed a high profile but little practical adoption by industry (Henderson-Sellers, 1996; Pfleeger et al., 1997). One reason for this lack of adoption might be the "advocative" nature of many of the proposed measures. Very few quality metrics are built on a sound scientific basis and in observance of measurement theory (Fenton, 1991; Henderson-Sellers, 1996; Pfleeger et al., 1997), and this severely limits the utility and generality of a measure (Fenton, 1991). It is no surprise, therefore, that debate frequently rages over the effectiveness, appropriateness, and applicability of many metrics (e.g., see Fury & Kitchenham, 1997).

In a recent status report on software measurement, Pfleeger et al. (1997) still felt the need to devote much of the article to the need for a scientific (as opposed to advocative) basis for proposing metrics.

Determination of Process Components

Questions that are central to this particular domain include the following: Which elements must be present in a software process? How are they identified? How do they impact the process?

Work in this area also continues at several levels. It has generally been recognized that there are only a small number of principal activities that are necessary in the development of a software product. At the highest level they might be presented as discover, invent, and validate. The aims of these activities may be accomplished via a selection from a range of tasks (basic units of work). Tasks, in turn, may be accomplished using a selection from a set of techniques (Graham et al., 1997). This framework allows identification and classification of activities, tasks, and techniques and aids in the gradual but orderly introduction of new ones. As mentioned earlier, one such scheme with which I have been closely associated is the OPEN framework (Graham et al., 1997; Henderson-Sellers et al., 1998).
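This activity-task-technique layering can be pictured as a simple hierarchy, sketched below. Only the three top-level activities (discover, invent, validate) come from the text; the task and technique names are illustrative assumptions, not entries taken from the OPEN framework itself.

```python
# A minimal sketch of the activity -> task -> technique layering.
# Only the three top-level activities come from the text; the task
# and technique names are illustrative assumptions, not entries
# from the OPEN framework.

process_framework = {
    "discover": {
        "elicit requirements": ["stakeholder interviews",
                                "scenario walkthroughs"],
    },
    "invent": {
        "design the architecture": ["CRC cards",
                                    "UML class modeling"],
    },
    "validate": {
        "inspect the design model": ["Fagan-style inspection",
                                     "checklist-based review"],
    },
}

# Introducing a new technique is then a local, orderly change that
# leaves the rest of the classification untouched:
process_framework["validate"]["inspect the design model"].append(
    "perspective-based reading")
```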

Determination of Process Quality Attributes

This research aims at the determination of general quality attributes that effective software processes must possess. This work is at a much higher level of abstraction than the associated research that endeavors to establish specific quality attributes of components of the process (discussed later). This work concentrates only on attributes such as granularity, generality, understandability, and so on, of processes and must be distinguished from research into determination of process component quality, which looks into issues such as quality characteristics of the validation stage of the process, or the characteristics of a design model.

Determination of Process Component Quality Attributes

Once various process components are identified, it is possible to investigate the attributes required of those components for the process to be of high quality. This research, as mentioned earlier, deals with a lower level of granularity and quality characteristics of process components such as analysis models, design models, testing, coding style, and so on.

Measurement of Process Component Quality Attributes

Having determined the quality attributes of software process components, it is useful to establish a measurement for each one. It is only through establishment of a quantifiable scale that we can say anything meaningful and universally applicable about the relationship between product and process components.

Establishment of a Causal Relationship Between Process Component Quality Measures and Those of the Product

This is the "holy grail" of researchers in this stream of investigation. The aim is to establish firm and validated relationships between product and process attributes. This can only be done if there is an adequately quantifiable measurement associated with both dimensions of product and process.

Process Focus

This stream of research is based on the assumption that process quality does indeed underpin the quality of the product. As such, it is assumed that any process improvement will by necessity be a useful activity that enhances the quality of the artifact being developed. Researchers in this area have therefore focused on the software process as a whole. Attempts have been made to develop and validate frameworks for the determination of the level of capability and maturity of processes, leading to frameworks such as the Capability Maturity Model (CMM; Paulk et al., 1993) and Software Process Improvement and Capability Determination (SPICE; El-Emam, 1997), or the degree to which they conform to specific standards such as ISO 9000. Proponents of this approach argue that it is through the assessment of capability that process shortcomings can be identified and action plans developed to bridge those gaps, thus improving the process.

Arguments based on the requirements of the scientific approach and measurement theory show that, although potentially useful, these approaches have a number of limitations. Alternative (complementary) approaches to frameworks such as the CMM, based on a constructive approach and operating at a much lower level of granularity, are emerging.

Recent Developments

Irrespective of the approach taken, the question of software quality determination must start from the product perspective. As mentioned previously, the actual aim of all of this is to be able to determine the quality of the software product; determining software process quality is in many ways a means to that end.

As discussed earlier, determining product quality is not an easy task. In the past, attempts have been made to determine a number of quality attributes and then utilize these to assess the quality of the product. Although this approach is fundamentally reasonable, Dromey (1996) criticized it on several grounds, the most important of which are the issues of combinatorial explosion of attributes and subattributes and their nonorthogonality. These issues were also mentioned by Gillies (1992), among others.

An important development in the study of software from the product perspective has been the work of Dromey (1995) in which a possible solution to the problem of combinatorial explosion and nonorthogonality of software quality attributes is presented.

The fundamental axiom on which this approach is built states that "a product's tangible internal characteristics or properties determine its external quality attributes" (Dromey, 1996, p. 33). Based on this assumption, it is important to realize that high-level quality attributes cannot be built into a product directly. Rather, a consistent, complete, and compatible set of product properties must be selected and incorporated into the software, resulting in manifestations of these high-level attributes, such as reliability. This is—not accidentally—also a cornerstone on which the work presented in this book is based.

Dromey's model is therefore a general model for relating, through their properties, the components that compose a software product to the high-level attributes that determine its quality. This is illustrated in Figure 1.2.

Figure 1.2 The basis of the Dromey model. (Source: Dromey, 1996)

The advantage of this model is that it places only a single level (of quality-carrying properties) between the high-level product quality attributes and the components that compose the product. The model has been proposed in considerable detail, but it remains to be validated.
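The shape of this single-level mapping can be sketched as follows. The particular property and attribute names are illustrative assumptions, not Dromey's own catalog.

```python
# A minimal sketch of the shape of Dromey's model: components carry
# tangible, quality-carrying properties, and each property maps to
# the high-level attributes it influences. The property and attribute
# names here are illustrative assumptions, not Dromey's own catalog.

# Quality-carrying property -> high-level attributes it contributes to.
PROPERTY_TO_ATTRIBUTES = {
    "variables assigned before use": {"reliability", "correctness"},
    "loops have a provable bound": {"reliability"},
    "modules have a single responsibility": {"maintainability"},
}

# Component -> the quality-carrying properties it exhibits.
COMPONENT_PROPERTIES = {
    "billing_module": [
        "variables assigned before use",
        "modules have a single responsibility",
    ],
}


def attributes_of(component):
    """Aggregate the high-level attributes implied by a component's
    properties: the single intermediate level the model places
    between components and quality attributes."""
    attrs = set()
    for prop in COMPONENT_PROPERTIES.get(component, []):
        attrs |= PROPERTY_TO_ATTRIBUTES.get(prop, set())
    return attrs


print(attributes_of("billing_module"))
# -> {'correctness', 'reliability', 'maintainability'}
```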

What is important in all of this, from our perspective, is that recognition, prevention, identification, and ultimately removal of defects from software products are useful things to do. All the techniques we use should reflect the importance of this. In other words, at every step of software development we must be vigilant either to prevent defects or, if they are already extant, to remove them effectively. We can thus look at the entire discipline of software engineering from a defect management perspective.

Approaches to Defect Management

It has already been established that defects affect all external dimensions of software quality. Correctness (lack of defects) is therefore an essential feature of high-quality software (Sommerville, 1995), in that it is a logical assumption that the absence of defects in a software product is a desirable property. Effort expended to prevent the introduction of defects, or to assist in their removal once introduced, is thus a worthwhile process activity, provided that the efficacy, effectiveness, and efficiency (Checkland, 1981) of the activity are acceptable.

There are logically only two primary approaches to the development of low-defect software:

  • Preventing the introduction of defects into the software when being constructed (defect prevention), and

  • Removal of defects introduced during construction through employment of a defect detection and removal activity (defect removal).

In a balanced approach to defect management, both approaches are of central importance.

However, the definitions and terminology in this area need clarification. Defect management has traditionally been referred to as testing, at least by the majority of early writers. For example, one chapter of Hetzel's (1988) book on testing is titled "Testing Through Reviews," implying that activities that examine the source code for defect identification (e.g., reviews) can be considered testing. This is not the case, however, as testing is actually the activity of running the software for failure detection. Myers (1979) uses a similar classification, including inspections under the title of testing. More recent writers (e.g., Ghezzi et al., 1992; Musa & Ackerman, 1989; Sommerville, 1995; Wallace & Fujii, 1989) have adopted the term validation (often in conjunction with the term verification) to refer to those activities that attempt to identify and remove defects, whether through identification of failure during execution or through identification of defects in the source code. In this respect, validation refers both to testing and to activities such as inspections or walk-throughs.

Another source of confusion is the relationship between defect management and the stages of the life cycle. The confusion probably stems from the tradition of using the waterfall process model, which contained explicit testing phases within the last two stages of the life cycle, implying that testing was an activity to be practiced only in relation to the execution of the software code product (e.g., unit testing, system testing, etc.). The growth of this misconception meant that subsequent validation methods such as inspection were originally applied only to code; hence code inspection (Fagan, 1976). Despite some earlier recommendations (Ackerman et al., 1984), only more recently, in keeping with the principle of early defect detection (Boehm, 1984), has validation been applied to other stages of the process in a consistent manner (Strauss & Ebenau, 1995).

Even when taken in its broader context and applied to all the elements of the life cycle, defect detection is not the only approach to defect management. Another important approach is defect prevention. By this we mean the utilization of methods (e.g., program derivation; Dromey, 1989) that attempt to minimize the introduction of defects into the software in the first place.

Based on the preceding considerations, Appendix A presents a diagram that reflects the current state of defect management.
