
Metrics and Models in Software Quality Engineering, 2nd Edition




  • Sorry, this book is no longer in print.
Not for Sale


  • Copyright 2003
  • Edition: 2nd
  • Book
  • ISBN-10: 0-201-72915-6
  • ISBN-13: 978-0-201-72915-3

Our society has become increasingly reliant on software in the past decade; businesses have learned that measuring the effectiveness of software projects can impact the bottom line; and quality is no longer an advantage in the software marketplace (it is a necessity). For these reasons, the demand for quality in software engineering has taken center stage in the twenty-first century. In this new edition, Stephen Kan presents a thoroughly updated overview and implementation guide for software engineers faced with the challenge of ensuring quality. The book balances theory, techniques, and real-life examples to provide practical guidelines in the practice of quality. Although there are equations and formulas presented, the book's focus remains on helping the reader understand and apply the metrics and models. With this book as a map, readers can navigate through the complex field of quality, and benefit their organization by improving their processes and products.

Sample Content

Online Sample Chapters

In-Process Metrics for Software Testing

Software Quality Metrics Overview

Downloadable Sample Chapter

Click below for Sample Chapter(s) related to this title:
Sample Chapter 4

Sample Chapter 10

Table of Contents

Foreword to the Second Edition.

Foreword to the First Edition.


1. What Is Software Quality?

Quality: Popular Views.

Quality: Professional Views.

The Role of the Customer.

Software Quality.

Total Quality Management.

2. Software Development Process Models.

The Waterfall Development Model.

The Prototyping Approach.

The Spiral Model.

The Iterative Development Process Model.

The Object-Oriented Development Process.

The Cleanroom Methodology.

The Defect Prevention Process.

Process Maturity Framework and Quality Standards.

The SEI Process Capability Maturity Model.

The SPR Assessment.

The Malcolm Baldrige Assessment.

ISO 9000.

3. Fundamentals in Measurement Theory.

Definition, Operational Definition, and Measurement.

Level of Measurement.

Some Basic Measures.

Reliability and Validity.

Measurement Errors.

Assessing Reliability.

Correction for Attenuation.

Be Careful with Correlation.

Criteria for Causality.

4. Software Quality Metrics Overview.

Product Quality Metrics.

The Defect Density Metric.

Customer Problems Metric.

Customer Satisfaction Metrics.

In-Process Quality Metrics.

Defect Density During Machine Testing.

Defect Arrival Pattern During Machine Testing.

Phase-Based Defect Removal Pattern.

Defect Removal Effectiveness.

Metrics for Software Maintenance.

Fix Backlog and Backlog Management Index.

Fix Response Time and Fix Responsiveness.

Percent Delinquent Fixes.

Fix Quality.

Examples of Metrics Programs.



IBM Rochester.

Collecting Software Engineering Data.

5. Applying the Seven Basic Quality Tools in Software Development.

Ishikawa's Seven Basic Tools.


Pareto Diagram.


Run Charts.

Scatter Diagram.

Control Chart.

Cause-and-Effect Diagram.

Relations Diagram.

6. Defect Removal Effectiveness.

Literature Review.

A Closer Look at Defect Removal Effectiveness.

Defect Removal Effectiveness and Quality Planning.

Phase-Based Defect Removal Model.

Some Characteristics of a Special Case Two-Phase Model.

Cost Effectiveness of Phase Defect Removal.

Defect Removal Effectiveness and Process Maturity Level.

7. The Rayleigh Model.

Reliability Models.

The Rayleigh Model.

Basic Assumptions.


Reliability and Predictive Validity.

8. Exponential Distribution and Reliability Growth Models.

The Exponential Model.

Reliability Growth Models.

Jelinski-Moranda Model.

Littlewood Models.

Goel-Okumoto Imperfect Debugging Model.

Goel-Okumoto Nonhomogeneous Poisson Process Model.

Musa-Okumoto Logarithmic Poisson Execution Time Model.

The Delayed S and Inflection S Models.

Model Assumptions.

Criteria for Model Evaluation.

Modeling Process.

Test Compression Factor.

Estimating the Distribution of Total Defects Over Time.

9. Quality Management Models.

The Rayleigh Model Framework.

The Code Integration Pattern.

The PTR Submodel.

The PTR Arrival/Backlog Projection Model.

Reliability Growth Models.

Criteria for Model Evaluation.

In-Process Metrics and Reports.

Orthogonal Defect Classification.

10. In-Process Metrics for Software Testing.

In-Process Metrics for Software Testing.

Test Progress S Curve.

Testing Defect Arrivals Over Time.

Testing Defect Backlog Over Time.

Product Size Over Time.

CPU Utilization During Test.

System Crashes and Hangs.

Mean Time to Unplanned IPL.

Critical Problems: Show Stoppers.

In-Process Metrics and Quality Management.

Effort/Outcome Model.

Possible Metrics for Acceptance Testing to Evaluate Vendor-Developed Software.

How Do You Know Your Product Is Good Enough to Ship?

11. Complexity Metrics and Models.

Lines of Code.

Halstead's Software Science.

Cyclomatic Complexity.

Syntactic Constructs.

Structure Metrics.

An Example of Module Design Metrics in Practice.

12. Metrics and Lessons Learned for Object-Oriented Projects.

Object-Oriented Concepts and Constructs.

Design and Complexity Metrics.

Lorenz Metrics and Rules of Thumb.

Some Metrics Examples.

The CK OO Metrics Suite.

Validation Studies and Further Examples.

Productivity Metrics.

Quality and Quality Management Metrics.

Lessons Learned for OO Projects.

13. Availability Metrics.

Definition and Measurements of System Availability.

Reliability, Availability, and Defect Rate.

Collecting Customer Outage Data for Quality Improvement.

In-Process Metrics for Outage and Availability.

14. Measuring and Analyzing Customer Satisfaction.

Customer Satisfaction Surveys.

Methods of Survey Data Collection.

Sampling Methods.

Sample Size.

Analyzing Satisfaction Data.

Specific Attributes and Overall Satisfaction.

Satisfaction with Company.

How Good Is Good Enough?

15. Conducting In-Process Quality Assessments.

The Preparation Phase.

What Data Should I Look At?

Don't Overlook Qualitative Data.

The Evaluation Phase.

Quantitative Data.

Qualitative Data.

Evaluation Criteria.

The Summarization Phase.

Summarization Strategy.

The Overall Assessment.

Recommendations and Risk Mitigation.

16. Conducting Software Project Assessments.

Audit and Assessment.

Software Process Maturity Assessment and Software Project Assessment.

Software Process Assessment Cycle.

A Proposed Software Project Assessment Method.

Preparation Phase.

Facts Gathering Phase 1.

Questionnaire Customization and Finalization.

Facts Gathering Phase 2.

Possible Improvement Opportunities and Recommendations.

Team Discussions of Assessment Results and Recommendations.

Assessment Report.


17. Dos and Don'ts of Software Process Improvement.

Measuring Process Maturity.

Measuring Process Capability.

Staged versus Continuous—Debating Religion.

Measuring Levels Is Not Enough.

Establishing the Alignment Principle.

Take Time Getting Faster.

Keep It Simple—or Face Decomplexification.

Measuring the Value of Process Improvement.

Measuring Process Adoption.

Measuring Process Compliance.

Celebrate the Journey, Not Just the Destination.

18. Using Function Point Metrics to Measure Software Process Improvement.

Software Process Improvement Sequences.

Stage 0: Software Process Assessment and Baseline.

Stage 1: Focus on Management Technologies.

Stage 2: Focus on Software Processes and Methodologies.

Stage 3: Focus on New Tools and Approaches.

Stage 4: Focus on Infrastructure and Specialization.

Stage 5: Focus on Reusability.

Stage 6: Focus on Industry Leadership.

Process Improvement Economics.

Measuring Process Improvements at Activity Levels.

19. Concluding Remarks.

Data Quality Control.

Getting Started with a Software Metrics Program.

Software Quality Engineering Modeling.

Statistical Process Control in Software Development.

Measurement and the Future.

Appendix: A Project Assessment Questionnaire

Index.


Looking at software engineering from a historical perspective, the 1960s and earlier could be viewed as the functional era, the 1970s the schedule era, the 1980s the cost era, and the 1990s and beyond the quality and efficiency era. In the 1960s, we learned how to exploit information technology to meet institutional needs and began to link software with the daily operations of institutions. In the 1970s, as the industry was characterized by massive schedule delays and cost overruns, the focus was on planning and control of software projects. Phase-based life-cycle models were introduced, and analyses such as the mythical man-month emerged. In the 1980s, hardware costs continued to decline, and information technology permeated every facet of our institutions and became available to individuals. As competition in the industry became keen and low-cost applications became widely implemented, the importance of productivity in software development increased significantly. Various cost models in software engineering were developed and used. In the late 1980s, the importance of quality was also recognized.

The 1990s and beyond is certainly the quality era. With state-of-the-art technology now able to provide abundant functionality, customers demand high quality. Demand for quality is further intensified by the ever-increasing dependence of society on software. Billing errors, large-scale disrupted telephone services, and even missile failures during recent wars can all be traced to the issue of software quality. In this era, quality has been brought to the center of the software development process. From the standpoint of software vendors, quality is no longer an advantage factor in the marketplace; it has become a necessary condition if a company is to compete successfully.

Starting in the mid-1990s, two major factors emerged that have had an unprecedented impact not only on software engineering but also on the global business environment: business reengineering for efficiency and the Internet. Software development has to be more efficient, and the quality of delivered products has to be high, for products to meet requirements and succeed; this is especially the case for mission-critical applications. The adverse impact of poor quality is far more significant and far wider in scale; the quality “dikes” that software provides have never been more important. These factors will continue to affect software engineering for many years to come in this new millennium.

Measurement plays a critical role in effective and efficient software development, as well as provides the scientific basis for software engineering that makes it a true engineering discipline. This book describes the software quality engineering metrics and models: quality planning, process improvement and quality control, in-process quality management, product engineering (design and code complexity), reliability estimation and projection, and analysis of customer satisfaction data. Many measurement books take an encyclopedic approach, in which every possible software measurement is included, but this book confines its scope to the metrics and models of software quality. Areas such as cost estimation, productivity, staffing, and performance measurement, for which numerous publications exist, are not covered.

In this edition, seven new chapters have been added, covering in-process metrics for software testing, object-oriented metrics, availability metrics, in-process quality assessment, software project assessment, process improvement dos and don’ts, and measuring software process improvement. The chapter that described the AS/400 software quality management system has been eliminated. For the original chapters, updates and revisions have been made throughout, and new sections, figures, and tables were added.

Two of the new chapters are special contributions from two experts, a key feature of this edition. The chapter on the dos and don’ts of software process improvement was contributed by Patrick O’Toole. Patrick brings to this book a perspective on process improvement that I share as a practitioner: one based on practical experience, project-centric, and aligned with the strategic business imperatives of the organization. Patrick also brings humor to this otherwise serious subject, making the chapter so much more enjoyable to read. The chapter on measuring software process improvement is a special contribution by Capers Jones. A pioneer in software metrics, productivity research, software quality control, and software assessments, Capers is well known nationally and internationally, and his data-based, fact-based approach to software assessments and benchmarking studies is unparalleled. Drawing on experience and data from more than 10,000 projects, he brings readers a practical approach to software process improvement and the major quantitative findings related to process improvement at the project and activity levels. The chapter is a must-read for software process professionals interested in quantitative measurements.

Another new feature added to this edition is a set of recommendations for small teams that are starting to implement a metrics program, with minimum resources available. These recommendations are shown in the form of sidebar inserts in nine of the chapters. A number of examples in the book are based on small team projects and many methods and techniques are appropriate to large projects as well as small ones. This set of recommendations is from the perspective of small organizations using a small number of metrics, with the intent to effect improvement in their software development effort.

This book is intended for use by software quality professionals; software project managers; software product managers; software development managers; software engineers; software product assurance personnel; and students in software engineering, management information systems, systems engineering, and quality engineering and management. For students, it is intended to provide a basis for a course at the upper-division undergraduate or graduate level. A number of software engineering, computer science, and quality engineering programs in the United States and overseas used the first edition of this book as a text.

Themes of This Book

This book has several themes. First, balancing theory, techniques, and real-life examples, it provides practical guidelines for the practice of quality engineering in software development. Although equations and formulas are involved, the focus is on understanding and applying the metrics and models rather than on mathematical derivations. Throughout the book, numerous real-life examples are drawn from the software development laboratory at IBM Rochester, Minnesota, home of the AS/400 and the IBM eServer iSeries computer systems, and from other companies in the software industry. IBM Rochester won the Malcolm Baldrige National Quality Award in 1990. A number of the metrics described in this book have been in use since that time, and many have been developed and refined since. All metrics are substantiated by ample implementation experience. IBM Rochester develops and delivers numerous projects of different sizes and types every year, from very large and complex to small, ranging from firmware to operating systems, middleware, and applications.

Second, I attempt to provide complete coverage of the types of metrics and models in software quality engineering. In addition to general discussions of metrics and techniques, this book categorizes and covers four types of models: (1) quality management models; (2) software reliability and projection models; (3) complexity metrics and models; and (4) customer-view metrics, measurements, and models. These metrics and models cover the entire software development process from high-level design to testing and maintenance, as well as all phases of reliability. Furthermore, although this book is not about total quality management (TQM), TQM is a major consideration in the coverage of metrics. The philosophy of TQM is the linking of product quality with customer satisfaction in order to achieve long-term success. TQM is the reason for including two chapters on customer-view metrics and measurements—availability metrics and customer satisfaction—in addition to the many chapters on product and process metrics. The customer’s perspective is also included elsewhere in the book where appropriate.

Third, linking metrics and models to quality improvement strategies and improvement actions, we attempt to focus on using, not just describing, metrics. A framework for interpreting in-process metrics and assessing in-process quality status, the Effort/Outcome model, is presented. The direct link between a recommended quality strategy during development and the defect-removal model is shown. Examples of actions tied to specific metrics and analyses are given. Furthermore, many figures are used to illustrate the metrics, a direct reflection of the fact that in real-life project and quality management, a clear visual presentation often improves understanding and increases the effectiveness of the metrics.

Fourth, following up on quality and process improvement at a more general level rather than on specific metric discussions, the book continues with chapters that discuss the in-process quality assessment process, a method for conducting software project assessments, practical advice on process improvement dos and don’ts, and quantitative analysis of software process improvement. The common thread running through these chapters, as through the chapters on metrics and models, is practical experience with industry projects.

Organization of This Book

The following list details what each chapter in this book addresses.

  • Chapter 1, What Is Software Quality?, discusses the definition of quality and of software quality. The customer’s role in the definition is highlighted. Quality attributes and their interrelationships are discussed. The second part of the chapter covers the definition and framework of TQM and the customer’s view of quality, a key focus in this book.
  • Chapter 2, Software Development Process Models, reviews various development process models that are used in the software industry. It briefly describes two methods of software process maturity assessment—the Capability Maturity Model (CMM) of Carnegie Mellon University’s Software Engineering Institute (SEI) and the Software Productivity Research (SPR) assessment method. It summarizes two bodies of quality management standards—the Malcolm Baldrige National Quality Award assessment discipline and ISO 9000.
  • Chapter 3, Fundamentals in Measurement Theory, examines measurement theory fundamentals, which are very important for the practice of software measurement. The concept of operational definition and its importance in measurement are illustrated with an example. The level of measurement, some basic measures, and the concept of six sigma are discussed. The two key criteria of measurement quality, reliability and validity, and the related issue of measurement errors are examined and their importance is articulated. This chapter also provides a discussion on correlation and addresses the criteria needed to establish causality based on observational data.
  • Chapter 4, Software Quality Metrics Overview, presents examples of quality metrics for the three categories of metrics associated with the software life-cycle: end-product, in-process, and maintenance. It describes the metrics programs of several large software companies and discusses software engineering data collection.
  • Chapter 5, Applying the Seven Basic Quality Tools in Software Development, describes the application of the basic statistical tools for quality control, known as Ishikawa’s seven basic tools, in software development. The potentials and challenges of applying the control chart in software environments are discussed. In addition, a qualitative tool for brainstorming and for displaying complex cause-and-effect relationships—the relations diagram—is discussed.
  • Chapter 6, Defect Removal Effectiveness, is the first of five chapters about the models and metrics that describe the quality dynamics of software development. Through two types of models, quality management models and software reliability and projection models, the quality of software development can be planned, engineered, managed, and projected. This chapter examines the central concept of defect-removal effectiveness, its measurements, and its role in quality planning.
  • Chapter 7, The Rayleigh Model, describes the model and its implementation as a reliability and projection model. The Rayleigh Model’s use as a quality management model is discussed in Chapter 9.
  • Chapter 8, Exponential Distribution and Reliability Growth Models, discusses the exponential distribution and the major software reliability growth models. These models, like the Rayleigh Model, are used for quality projection before the software is shipped to customers, just before development is complete. The models are also used to model the failure pattern or the defect arrival patterns in the field, for maintenance planning.
  • Chapter 9, Quality Management Models, describes several quality management models that cover the entire development cycle. In-process metrics and reports that support the models are shown and discussed. A framework for interpreting in-process metrics and assessing in-process quality status—the Effort/Outcome model—is presented.
  • Chapter 10, In-Process Metrics for Software Testing, is a continuation of Chapter 9; it focuses on the metrics for software testing. The Effort/Outcome model, as it applies to metrics during the testing phase, is elaborated. Candidate metrics for acceptance testing to evaluate vendor-developed software, and the central question of how you know your product is good enough to ship, are also discussed.
  • Chapter 11, Complexity Metrics and Models, discusses the third type of metrics and models in software engineering. While quality management models and reliability and projection models serve project management and quality management, the objective of complexity metrics and models is to help software engineers improve the design and implementation of their software.
  • Chapter 12, Metrics and Lessons Learned for Object-Oriented Projects, covers design and complexity metrics, productivity metrics, quality and quality management metrics for object-oriented development, and lessons learned from the deployment and implementation of OO projects. The first section can be viewed as a continuation of the discussion on complexity metrics and models, while the other sections fall within the framework of quality and project management.
  • Chapter 13, Availability Metrics, discusses system availability and outage metrics, and explores the relationships among availability, reliability, and the traditional defect-rate measurement. Availability metrics and customer satisfaction measurements form the fourth type of metrics and models—customer-oriented metrics.
  • Chapter 14, Measuring and Analyzing Customer Satisfaction, discusses customer satisfaction data collection and measurements, and techniques and models for the analysis of customer satisfaction data. From Chapter 3 to this chapter, the entire spectrum of metrics and models is covered.
  • Chapter 15, Conducting In-Process Quality Assessments, describes in-process quality assessments as an integrated element of good project quality management. Quality assessments are based on both quantitative indicators, such as those discussed in previous chapters, and qualitative information.
  • Chapter 16, Conducting Software Project Assessments, takes the discussion yet another level higher; this chapter proposes a software project assessment method. The focus is at the project level, and the discussion is from a practitioner’s perspective.
  • Chapter 17, Dos and Don’ts of Software Process Improvement by Patrick O’Toole, offers practical advice for software process improvement professionals. It provides a link to the process maturity discussions in Chapter 2.
  • Chapter 18, Measuring Software Process Improvement by Capers Jones, discusses the six stages of software process improvement. Based on a large body of empirical data, it examines the costs and effects of process improvement. It shows the results of quantitative analyses with regard to costs, time, schedule, productivity, and quality. It provides a link to the process maturity discussions in Chapter 2.
  • Chapter 19, Concluding Remarks, provides several observations with regard to software measurement in general and software quality metrics and models in particular, and it offers a perspective on the future of software engineering measurement.
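The defect removal effectiveness metric that anchors Chapter 6 is easy to illustrate. The sketch below uses hypothetical phase names and defect counts; the formula (defects removed in a phase divided by the sum of removals and defects that escape to later phases, expressed as a percentage) follows the standard definition.

```python
# Sketch of phase-based defect removal effectiveness (DRE).
# Phase names and defect counts below are hypothetical illustrations.

def removal_effectiveness(removed: int, escaped: int) -> float:
    """DRE = removed / (removed + escaped), as a percentage."""
    return 100.0 * removed / (removed + escaped)

# Defects removed at each phase, and defects attributable to that
# phase but found later (escapes), for an imaginary project:
phases = {
    "design review": (120, 30),
    "code inspection": (200, 80),
    "testing": (90, 10),  # escapes from testing are field defects
}

for phase, (removed, escaped) in phases.items():
    print(f"{phase}: DRE = {removal_effectiveness(removed, escaped):.1f}%")

# Overall effectiveness before release: all defects removed during
# development versus those plus the defects found by customers.
dev_removed = 120 + 200 + 90
field_defects = 10
print(f"overall: {removal_effectiveness(dev_removed, field_defects):.1f}%")
```

A low DRE at an early phase (code inspection above) signals that defects are escaping into more expensive downstream removal activities, which is exactly the kind of quality-planning insight the chapter develops.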
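Similarly, the Rayleigh model of Chapter 7 can be sketched in a few lines. The parameter values below are hypothetical; in practice K (total lifetime defects) and t_m (the time at which defect arrivals peak) would be fitted to early in-process data.

```python
import math

# Sketch of a Rayleigh defect-arrival projection.
# K and t_m are hypothetical parameters for an imaginary project.

K = 1000.0   # projected total lifetime defects
t_m = 4.0    # month at which defect arrivals peak

def arrivals(t: float) -> float:
    """Defect arrival rate at time t (Rayleigh density scaled by K)."""
    return K * (t / t_m**2) * math.exp(-t**2 / (2 * t_m**2))

def cumulative(t: float) -> float:
    """Defects expected by time t (Rayleigh CDF scaled by K)."""
    return K * (1.0 - math.exp(-t**2 / (2 * t_m**2)))

for month in range(0, 13, 2):
    print(f"month {month:2d}: rate {arrivals(month):6.1f}, "
          f"cumulative {cumulative(month):7.1f}")
```

The curve rises to its peak at t_m and then tails off, so fitting it to the defect arrivals observed so far yields a projection of the defects still to come, which is the model's use for quality planning and projection.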

Suggested Ways to Read This Book

The chapters of this book are organized for reading from beginning to end. Concepts and discussions in earlier chapters are referenced in later chapters. At the same time, each chapter addresses a separate topic, and chapters within some groups are more closely coupled to each other than to others. Some readers may choose to read specific topics or to start at different points. For example, those who are not interested in discussions of quality definitions, process models, and measurement fundamentals can start with Chapter 4, Software Quality Metrics Overview. Those who want to get directly to the central topics of defect removal and of metrics and models for quality planning, management, and projection can start with Chapter 6, Defect Removal Effectiveness. In general, I recommend reading the chapters together in groups, as follows.

  • Chapters 1 to 3
  • Chapter 4
  • Chapter 5
  • Chapters 6 to 10
  • Chapters 11 and 12
  • Chapters 13 and 14
  • Chapters 15 to 18
  • Chapter 19




    Supplemental Privacy Statement for California Residents

    California residents should read our Supplemental privacy statement for California residents in conjunction with this Privacy Notice. The Supplemental privacy statement for California residents explains Pearson's commitment to comply with California law and applies to personal information of California residents collected in connection with this site and the Services.

    Sharing and Disclosure

    Pearson may disclose personal information, as follows:

    • As required by law.
    • With the consent of the individual (or their parent, if the individual is a minor)
    • In response to a subpoena, court order or legal process, to the extent permitted or required by law
    • To protect the security and safety of individuals, data, assets and systems, consistent with applicable law
    • In connection the sale, joint venture or other transfer of some or all of its company or assets, subject to the provisions of this Privacy Notice
    • To investigate or address actual or suspected fraud or other illegal activities
    • To exercise its legal rights, including enforcement of the Terms of Use for this site or another contract
    • To affiliated Pearson companies and other companies and organizations who perform work for Pearson and are obligated to protect the privacy of personal information consistent with this Privacy Notice
    • To a school, organization, company or government agency, where Pearson collects or processes the personal information in a school setting or on behalf of such organization, company or government agency.


    This web site contains links to other sites. Please be aware that we are not responsible for the privacy practices of such other sites. We encourage our users to be aware when they leave our site and to read the privacy statements of each and every web site that collects Personal Information. This privacy statement applies solely to information collected by this web site.

    Requests and Contact

    Please contact us about this Privacy Notice or if you have any requests or questions relating to the privacy of your personal information.

    Changes to this Privacy Notice

    We may revise this Privacy Notice through an updated posting. We will identify the effective date of the revision in the posting. Often, updates are made to provide greater clarity or to comply with changes in regulatory requirements. If the updates involve material changes to the collection, protection, use or disclosure of Personal Information, Pearson will provide notice of the change through a conspicuous notice on this site or other appropriate way. Continued use of the site after the effective date of a posted revision evidences acceptance. Please contact us if you have questions or concerns about the Privacy Notice or any objection to any revisions.

    Last Update: November 17, 2020