
Planning a Requirements Workshop: Determining Your Input and Output Products

Design a successful requirements workshop that meets the needs of stakeholders and decision-makers. Learn how to build a repository of workshop products; learn the most effective questions to ask to determine workshop input and feedback; and learn how to add visual value to deliverables.

Introduction

"You can't have everything. Where would you put it?"

—Steven Wright

Your workshop products include inputs and outputs. The outputs include requirements and supplemental deliverables such as statements of issues and follow-up actions. The inputs are any materials that jump-start workshop activities and prepare participants: posters, templates, documents, workshop activity instructions, draft or predecessor requirements models, and the results of participant pre-work (see Figure 7-1).

Figure 7-1 Workshop Inputs and Outputs

I discuss output products first because they are the basis for deciding which inputs you should use for your workshop.

Output Products

Your primary output products are the requirements models created by the participants. (Later in this chapter I discuss intangible workshop products, such as decisions about the state of the requirements.) Along with your core requirements deliverables, you'll create supplemental deliverables, such as posters or documents that list actions or next steps, issues, and decisions.

In some cases, you may want to create supplemental workshop products to accelerate follow-up activities. In one workshop, the participants created a communication plan for organizations to be affected by the software. In another workshop, the participants identified risks and then created a risk mitigation plan and a post-workshop implementation plan. As part of an activity to jump-start their requirements modeling work, another group created a set of posters depicting a vision of what work would be like after the system was in place.

Although this book focuses on the core deliverables—the requirements models—you should consider how supplemental deliverables can dovetail with project activities and help the team to communicate effectively. Creating visually rich supplemental deliverables is fast and efficient.

Making Deliverables Visually Rich

To add visual value to deliverables, either the facilitator or the participants can use a variety of visual tools in addition to the diagram-oriented requirements models. Table 7-1 shows some formats for framing information and ideas graphically.

Table 7-1 Visual Deliverables

  • Poster. Uses: display visions, draw themes or concepts, show the agenda. Limitation: describes a single point or concept.

  • List. Uses: brainstorm steps, define issues and expectations, identify parking lot items, name next steps. Limitation: hard to compare listed items.

  • Clusters, affinity groups. Uses: group related items, find themes, analyze options, associate items. Limitation: doesn't show dynamics; categories may be unclear.

  • Arrows, flows. Uses: cause and effect, sequence, logical progression. Limitation: implies a sequence that could be incorrect.

  • Grids, matrices. Uses: define missing elements, clarify choices, compare choices. Limitation: can compare only a few items at a time.

  • Drawings. Uses: visions, stories, road maps, history, plans, business concepts. Limitation: fear of not being skilled enough to draw.

  • Mind map. Uses: ideas, categories, text and image groups, hierarchies. Limitation: complex to read.

  • Circles, wheels, mandalas (circles symbolizing unity). Uses: unifying concepts, unstructured relationships, layers of relationships. Limitation: doesn't show details or sequence.


Note that a mind map is an unstructured diagram that shows groupings of ideas and concepts associated with a central theme or topic (Buzan, 1989). A mind map starts with a central image or idea, which forms a focus for both eye and brain. It then branches out organically, with each branch representing a grouping or category.

The requirements models created by participants are sources for other project activities (such as design, testing, and coding) and for additional workshops (if you decide to produce requirements iteratively in a series of workshops). For example, outputs such as detailed use cases, business rules, and sketches of screens would be inputs to design and coding. You can use high-level requirements models—such as context diagrams, event tables, stakeholder classes, and use cases produced in a workshop that follows the horizontal, or "width first," workshop strategy (see Chapter 10)—as inputs to another workshop that adds more depth to those models.

To determine which requirements models to deliver, you need to do the following:

  • Select model views that are aligned with the business problem domain.

  • Use multiple models to increase the quality of your requirements.

  • Mix text models with diagrammatic models to increase the speed of requirements definition and promote mutual learning.

  • Pick multiple views and focuses (see "Model Focus" in Chapter 2) to enhance the quality of your requirements.

  • Define the appropriate level of detail for each selected model.

  • Iteratively deliver the requirements.

  • Prioritize the requirements deliverables.

  • Decide whether you should partition requirements across multiple workshops.

  • Clarify your doneness tests for delivering "good enough" requirements in the workshop.

The following subsections explore these topics.

Select Models Aligned with the Business Problem

Model views—behavior, structural, dynamic, and control—provide differing perspectives of user requirements (see "Model Views" in Chapter 2). Even though each business problem is unique, the generic set of heuristics presented in Table 7-2 illustrates how the business domain influences the types of requirements models that most strongly express user requirements.

This table is intended to be representative, not all-inclusive. No one view can express user requirements completely, so it's important to draw from several views. For example, if users are handling the ordering task, behavior models are useful. At the same time, they're interacting with business concepts and domains such as orders, cancellations, and customers, which are best captured in structural models. As you can see from the table, a glossary should always be part of your requirements deliverables; in fact, a draft glossary should always be an input product (see "Draft Models" later in this chapter).

As you learn more about the models and use them in workshops, you'll begin to recognize which ones are more useful for the business problem you're trying to solve.

One project team asked me to review its requirements workshop plan. The business problem was to create a flexible hierarchy of salespeople and commission schemes to be used globally. The group's plan called for using use cases almost exclusively. I asked them a few questions, such as, "Who will directly interact with the system?" "How frequently will they interact with it?" "What kinds of decisions do you want the system to make?" "What information does the system use?" Their answers told me that little human interaction was needed and that the core characteristic of their business problem was to establish and manage commissions, salespeople, and zones (global groups). The problem was best expressed not with behavioral models but rather with structural and control models. Consequently, we refocused the model orientation from use cases to a data model, business policies, and business rules.

Use Multiple Models

No single user requirements model can fully express the functional requirements for your project. A solution is to use multiple models (see the Multi-Model collaboration pattern in the Appendix).

Delivering multiple models increases the comprehensiveness of your requirements because each model reveals aspects of another. In addition, you can use one model to test the correctness of another. (Chapter 9 describes how to weave testing into the workshop flow.) This testing is aided by a list of model quality assurance (QA) questions devised before the workshop (see "QA Checklist" later in this chapter). Possible questions include the following:

  • For each event in our event table, is there at least one associated use case?

  • Which use case would handle this scenario?

  • For each use case step, have all the business rules been defined?

  • What data is needed to support this business rule?

The specific questions depend on which models you deliver and their degrees of detail, but you can see how one model acts to trigger elements of another model. As illustrated in Figure 7-2, using multiple models allows you to thread one model to another within a single workshop.

Figure 7-2 Threading Multiple Models Through a Workshop

A structural model (such as a context diagram or the glossary) relates to a dynamic model (such as the event table); a behavioral model (such as a process map) provides clues for a control model (such as business policies). Figure 7-2 also illustrates the concept of "chunking" the workshop deliverables into iterations (see "Iteratively Delivering Requirements" later in this chapter).

You can arrange the sequence of your models differently, depending on what you have as a starting point (see "Draft Models" later in this chapter). Later in this chapter, I present a variety of sequences for arriving at business rules.

Mix Text and Diagrammatic Models

Plan for participants to deliver a combination of text and diagrammatic requirements models. Weave both styles of products into your process design (see Chapter 9). Text models are more precise and contribute to accuracy; diagrammatic models are fast to create and understand, which promotes overall speed. The two styles also suit our two-sided brains: the right side is more adept at visual and random input, whereas the left side is stronger at linear and analytical tasks.

To put it a different way, a picture may be worth a thousand words, but the question is, Which thousand, and what do you mean by them? You need words to answer that.

Consider mixing text with diagrams for each view you deliver. For example:

  • Use case text and a use case map

  • An actor table and an actor map

  • A glossary definition and a data model

Some models are strictly text-based and require visual models in order to elicit them. Business rules and business policies are good examples. You won't get too far asking business people, "What business rules do you need?" Instead, you need visual models that will call up the business rules. One way to overcome this problem is to represent business rules using a decision table or decision tree. Another way is to start harvesting business rules using other models. The BestClaims case study in Chapter 11 provides an example of using a visual model (a statechart diagram) to drive the specification of business rules.

Models for Harvesting Business Rules

You can use all or part of a model to thread to another model. For example, a single step of a use case gives you a thread to the data attributes needed by a data model and also to the business rules that must be enforced within that step. To arrive at business rules, you can use a variety of sequences, including the following:

  • Use case → events → business policies → business rules

  • Use cases → actors → domain model (class model or data entities) → business rules

  • Actor → decisions → business rules

  • Actors → use cases → domain model → business rules

  • Events → domain model → business rules

  • Events → use cases → business rules

  • Events → domain model → life cycles → business rules

  • Domain model → events → life cycles → business rules
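
To make the threading concrete, here is a minimal sketch that records these sequences as plain data. The model labels are paraphrased from the list above and are illustrative only, not a formal notation:

```python
# Each harvesting sequence is an ordered list of models to thread through.
HARVESTING_SEQUENCES = [
    ["use cases", "events", "business policies", "business rules"],
    ["use cases", "actors", "domain model", "business rules"],
    ["actors", "decisions", "business rules"],
    ["actors", "use cases", "domain model", "business rules"],
    ["events", "domain model", "business rules"],
    ["events", "use cases", "business rules"],
    ["events", "domain model", "life cycles", "business rules"],
    ["domain model", "events", "life cycles", "business rules"],
]

def sequences_starting_with(model):
    """Return the sequences you can follow given the model you already have."""
    return [seq for seq in HARVESTING_SEQUENCES if seq[0] == model]

# Every sequence, whatever its starting point, ends in business rules.
assert all(seq[-1] == "business rules" for seq in HARVESTING_SEQUENCES)
print(len(sequences_starting_with("events")))  # 3 sequences start from events
```

Holding the sequences as data like this lets a planner answer, for any draft model already in hand, which paths lead to the business rules.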

Mix Focuses and Views

Draw from various requirements model focuses (who, what, when, where, why, and how) as well as views. Here's a typical combination:

  • Use cases are a behavior view focused on how work gets done.

  • An actor map is also a behavior view, but it focuses on who does the work.

  • A data model or class model is a structural view focused on what information is needed.

  • Business rules are a control view focused on why—the decisions that software needs to make.

Eliciting a combination of models with different focuses helps you to detect requirements errors. In one of my workshops, for example, the use cases that were created made the customer realize that at least 20 business rules needed to be defined more precisely before the requirements could be closed. The model view can trigger creation of additional models that provide more focuses. For example, if you choose a behavioral view of your requirements (say, use cases), you can use those use cases to harvest related models.

Define the Level of Detail

Decide how precise each requirements deliverable should be (see Table 2-4 in Chapter 2).

Scope-level models are particularly important when there's a high risk of scope creep or conflict among users about requirements, or when the requirements aren't precise enough to support the start of design work. In that case, if users and developers expect to make the transition to design at the end of a workshop that delivers that level of detail, you'll have many unhappy project team members.

A useful strategy for moving from scope-level or high-level requirements down to detailed-level requirements is to use iterations. This involves working together to deliver a set of models at roughly the same level of precision, checking their quality, and then moving down to the next level of detail. This approach is a top-down horizontal strategy (see "Building a Horizontal Strategy: The Top-Down Approach" in Chapter 10). Iterating in a top-down manner can accelerate the group's mutual understanding of the requirements and reduce rework within the workshop.

But you may not always want to elicit your requirements models in that top-down order. If your project has a well-defined scope, the team should be ready to jump into high-level requirements. In some cases (particularly if you're replacing an existing system), you'll start at a lower level of detail, with prototype screens and use case steps. At other times, you'll iterate among the levels in a zigzag strategy (see "The Zigzag Strategy" in Chapter 10).

Each project is unique. To decide the best way to elicit user requirements in workshops, you'll need to consider factors such as team size, location, degree of customer and user involvement, past history with software development, existing documentation and software, and team modeling experience.

Iteratively Delivering Requirements

As you consider each requirements model you want to deliver, begin to partition the model into its component parts. For example, divide a detailed use case into its name, header information (such as the triggering event, event response, and initiating actor), and use case steps. Next, group elements from your various models at about the same level of detail.

Figure 7-3 shows how a use case and its related requirements can be grouped in an iterative fashion.

Figure 7-3 Iteratively Delivering Requirements in a Workshop

Although this example shows four iterations, you can use two or three iterations. This example moves from the high level to the detailed level, following what I call the vertical, how-first strategy (see Chapter 10), in which you drill down within one focus. Note that if you choose this strategy, time will constrain how many use cases (and related requirements) you can deliver.

In iteration 1, participants begin with a named use case; they then define the initiating actor, the triggering event, and the event response. In iteration 2, using the prior set of requirements, participants complete the use case header using a use case template (see "Templates" later in this chapter). They do this by writing a one-paragraph description of the use case, adding the locations of the actors, listing the associated business policies, and modeling the high-level domain model (class model or data entities).

In iteration 3, participants create a use case map to visually lay out the predecessor-successor relationships among the use cases. (Creating a process map puts the use cases in the context of the workflow, giving the team the big picture of the set of requirements. This map also allows the users to understand how the use cases can fit into their workflow. This work is accelerated if a process map has been created before the workshop.) Participants logically partition use cases into packages, which in turn they'll use to define releases or increments for delivery.

In iteration 4, the participants add detailed use case steps, define business rules for each step, list data attributes needed by the steps and their rules, and sketch prototypes for each use case.

At the end of each iteration—which should take one to three hours, depending on the number of use cases—participants should test the quality of the models they've delivered (see "Define Doneness Tests" later in this chapter) in order to reach closure on that set of requirements before moving on to another set.
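
As a planning aid, the iteration structure above can be held as simple data; this sketch (deliverable names paraphrased from the four iterations, purely for illustration) totals the one-to-three-hour range per iteration:

```python
# Deliverables per iteration, paraphrased from the text above.
ITERATIONS = {
    1: ["use case name", "initiating actor", "triggering event", "event response"],
    2: ["use case header", "description", "actor locations",
        "business policies", "high-level domain model"],
    3: ["use case map", "use case packages"],
    4: ["use case steps", "business rules", "data attributes", "prototypes"],
}

HOURS_PER_ITERATION = (1, 3)  # each iteration should take one to three hours

low = len(ITERATIONS) * HOURS_PER_ITERATION[0]
high = len(ITERATIONS) * HOURS_PER_ITERATION[1]
print(f"Plan for {low} to {high} hours of modeling, plus QA at each boundary.")
```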

Using a "divide and conquer" approach helps you to avoid workshop scope creep—getting off-track during the workshop by moving up and down to different levels of detail. It also gives you a basis for ordering group activities. You also save time by eliminating the step of cleaning models up.

Chapter 9 describes ways to assign work to subgroups, iterating among individual, subgroup, and whole group activities for maximum efficiency.

Prioritize the Deliverables

It's not possible to predict exactly how long it will take to deliver each model. Knowing what's important before you begin the workshop will help you to adjust the agenda during the session. Decide with your sponsor and planning team which of the planned requirements models are the most critical and which can be skipped or skimmed if you run out of time.

Estimating Time

There is no magic formula for knowing how long it will take to deliver your requirements in a workshop. You must consider people factors and product factors.

People factors include how "formed" the group is as you begin (see "Forming, Storming, Norming, and Performing" in Chapter 6). Newly formed groups need more time before they can be productive, whereas a group that has worked together will be able to get down to business more quickly.

Product factors include how "done" you need your requirements to be and how much of a head start you have before the workshop. You'll need more time if your requirements must be very precise and well tested or if you don't have models to serve as a starting point. Chapter 9 has generic guidelines for workshop timing by model.

I can offer a general heuristic from my years of experience: List the deliverables you think the group can deliver in the workshop, and then divide your expectations by 3. The result will be a more realistic estimate. For example, if you think you will deliver a revised glossary, stepwise use case descriptions, prototype screens, and a high-level data model, you're likely to deliver only about one-third of the content for each deliverable. This is why it's important to prioritize your requirements deliverables before the workshop.

Once participants have a good understanding of their requirements and are working well together, you should consult them about which deliverables are most important to work on together. Well-formed groups are very wise: The participants know what to do together and how to compensate for time pressures. For example, one group I facilitated decided to work through four high-priority use cases and then trust two of the participants to draft the remaining ones and return for a workshop to review, revise, and approve them.

Know whether you need to deliver more models with less precision or fewer models with more precision. This will influence the number of QA activities you build into the schedule. For one set of workshops I facilitated, the products included business rules, a data model, and life cycles for a well-defined scope. The work was to be done in four- to six-hour sessions within a four-week time frame. Anticipating that we might not complete all the deliverables, I needed to know which was more important: volume or quality. I asked the workshop sponsor what she wanted from the workshops: more business rules, or fewer, but more correct, business rules. She chose quality over volume. For that reason, I designed an agenda that added scenarios as a deliverable and also incorporated a process for testing each set of business rules with those scenarios before moving on to another set of rules.

Partition Requirements Across Workshops

If you're under tight time constraints or if your group is new and will need time to form, consider delivering your requirements iteratively across multiple workshops. An advantage of this approach is the efficient use of group time; participants take on post-workshop tasks, and the group uses that work as input to later workshops.

After one workshop in which we created high-level requirements, the participants went back to their business areas to ask questions of their colleagues and management so that they'd be prepared to provide details about the use cases and assess their priority. In another workshop, a list of business policies became the basis for research by a business analyst to determine which policies could be changed along with the new system. In yet another workshop, we used the high-level data model created by the participants to conduct data mapping for two possible software packages; in that way, we were able to provide details for selecting a software package in the next workshop.

Figure 7-4 shows an example. Each workshop delivers a predefined set of related requirements. These requirements then serve as inputs to another workshop occurring soon thereafter. I like to schedule iterative workshops no more than five working days apart.

Figure 7-4 Partitioning Requirements Across Workshops

There are numerous ways to arrange your sessions. You can conduct daily morning sessions and leave afternoons for post-workshop tasks, as described in the HaveFunds example in Chapter 11; you can use multiday workshops within a one- or two-week period; and so on. Use a schedule that optimizes the availability of people without exhausting them. Try to include time between workshops for tasks whose results will jump-start the next one.

Define Doneness Tests

A doneness test consists of one or more criteria that you use to determine whether a particular deliverable is "good enough" to reach closure on. Your doneness tests will be more or less precise depending on three factors:

  • The project's size (the number of people being coordinated)

  • The criticality of the systems being created

  • The priorities of the project (whether, for example, human lives are at stake or simply human comfort)

As Cockburn (2001) aptly points out, more correctness and documentation are needed by projects with a large number of team members producing critical systems using nondiscretionary funds. A "light" set of models would not do in that situation. If, however, you're building an application for internal consumption with a medium risk of monetary loss if you deliver defects, your doneness tests can be looser.

Different stakeholders have different views of doneness:

  • A customer might want to deliver the requirements and end product as quickly and cheaply as possible. Perhaps you need only do some scope requirements and a prototype. The desire is to deliver relatively little detail for fewer requirements models.

  • A user might be concerned with usability, so you'd focus on translating requirements associated with who and how focuses—such as actors, actor maps, prototypes, interface navigation diagrams, use cases, and use case maps—into more precise requirements.

  • A software designer or architect might be most interested in building a robust product. She would want requirements models that cross multiple views and focuses, thus offering a broader understanding of the whole project rather than only the current release.

Each of these perspectives presents a special challenge to the requirements facilitator, who must help the team determine the most appropriate doneness tests for the short term while also considering the long-term goals of the project and the needs of the various stakeholders.

Your doneness tests can involve the use of a tool or a process. Available tools and processes include the following:

  • A QA checklist for testing the models (see the next section)

  • Scenario- or prototype-based reviews integrated into your workshop process (see Chapter 9)

  • Matrices for analyzing one model or model element against another

  • Metaphors for testing doneness

QA Checklist A quality assurance (QA) checklist is a series of questions, usually stated in binary format, about a requirements model or its elements, along with questions about how one model cross-checks another. For example, if you're producing an actor table, use cases, and the glossary, these questions would apply for each use case:

  • Is the use case named as an actor's goal?

  • Does the use case name start with a strong action verb?

  • Does the use case name include a meaningful object?

  • Is the object in the use case name defined in the glossary?

The Web site for this book offers other QA checklist questions.
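
The mechanically checkable questions in such a checklist can be sketched as a small script. The glossary terms and action verbs below are hypothetical placeholders; a real workshop would substitute its own lists, and judgment questions (such as whether the name expresses an actor's goal) still need a human reviewer:

```python
# Hypothetical glossary and strong-verb list for illustration only.
GLOSSARY = {"order", "invoice", "customer"}
STRONG_VERBS = {"create", "cancel", "approve", "submit"}

def qa_checklist(use_case_name):
    """Apply the binary QA questions to one use case name; return failures."""
    words = use_case_name.lower().split()
    failures = []
    if not words or words[0] not in STRONG_VERBS:
        failures.append("name does not start with a strong action verb")
    if len(words) < 2:
        failures.append("name lacks a meaningful object")
    elif words[-1] not in GLOSSARY:
        failures.append("object is not defined in the glossary")
    return failures

print(qa_checklist("Cancel Order"))      # [] -- passes every check
print(qa_checklist("Order Processing"))  # fails the verb and glossary checks
```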

Using QA checklists provides more benefits than may be immediately obvious. The very process of creating and agreeing on the checklist helps the team—users, software team, perhaps even sponsors—to clarify and define expectations for each deliverable clearly and precisely. As with using reviews during a workshop (see Chapter 9), checklists push participants to create high-quality requirements in the first place. The checklist forces you to begin with the end in mind.

In one workshop I facilitated, the group created scope-level requirements in the form of an event table and a context diagram. I divided the group of 14 people into subgroups. Each subgroup received a copy of the same checklist.

As a facilitator, I've discovered that participants give you what you ask for. My experience is that taking a testing attitude toward deliverables helps workshop participants to find more defects and find them earlier. So I told them, "Find errors in what we created."

Each subgroup indeed found defects, which were shared with the larger group and corrected. For example, the group forgot that it had to get periodic updates from an employee database, and it realized that it would need someone to play the role of approving certain types of queries to sensitive data. After that, the group continued the workshop by defining detailed requirements for each event within the scope.

Table 7-3 Sample Matrix for Doneness Testing

             Actor01   Actor02   Actor03   Actor04
UseCase01       x                    x
UseCase02                                      x
UseCase03                            x
UseCase04       x

Matrices Matrices serve as a useful tool for checking doneness: by filling in a matrix, participants can detect incomplete, missing, or extraneous requirements.

In the matrix in Table 7-3, participants fill in the cells to indicate which actors initiate which use cases. An actor without an associated use case (such as Actor02) indicates either an extraneous actor or a missing use case. Having two initiating actors might be fine (as in UseCase01), or it might indicate that the two actors are truly the same and would benefit from a more generic name.
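
The same cross-check can be automated once the matrix is captured electronically. This sketch uses the placeholder actor and use case labels from Table 7-3:

```python
# Initiation matrix from Table 7-3: use case -> actors who initiate it.
MATRIX = {
    "UseCase01": {"Actor01", "Actor03"},
    "UseCase02": {"Actor04"},
    "UseCase03": {"Actor03"},
    "UseCase04": {"Actor01"},
}
ACTORS = {"Actor01", "Actor02", "Actor03", "Actor04"}

initiating = set().union(*MATRIX.values())
# An idle actor signals an extraneous actor or a missing use case.
idle_actors = ACTORS - initiating
# An orphan use case signals a missing actor or an extraneous use case.
orphan_use_cases = [uc for uc, actors in MATRIX.items() if not actors]

print(idle_actors)        # {'Actor02'}
print(orphan_use_cases)   # []
```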

Metaphors

A metaphor is a symbol, image, or figure of speech. You can use a metaphor as a loose form of doneness testing. In one workshop, we used a bull's-eye. I created a poster with a bull's-eye showing concentric circles with the label "100%" in the center. The sponsor and planning team declared before the workshop that the goal was to achieve 80 percent doneness for the models. At the end of each workshop day, I gave each participant a colored sticky dot and asked each of them to place the dot on the bull's-eye to represent where he believed our requirements deliverables were at the moment.

Each day, over four days, participants placed different dots on the bull's-eye. Each day, they got closer to the center. I also used each day's bull's-eye in leading a brief discussion about what they needed to do to get closer the next day.

Metaphors can also include wishes in the form of lists, scenarios, or visions. In one of my workshops, participants wrote stories of an ideal future, describing their work environment after all their requirements were met. In another, participants provided a list of ideal reports they could get if their requirements were met. For yet another, participants drew posters of their ideal operational environment.

Each of these metaphors serves as a doneness test because you can ask participants to return to their original metaphors or visions after they create their requirements to see whether their vision has been achieved.

Doneness Testing and Decision Making

No matter which doneness tests you use, they're not a substitute for your decision rule and decision-making process (see "Decision Rules" in Chapter 6). Your doneness tests check only the desired level of quality of your requirements, so you must follow up with your agreed-upon decision-making process. If a doneness test tells you that you have "uncooked" requirements, you can still make a decision on how to proceed.

Combining Pre-Work with Doneness Tests: An Example

For one workshop I facilitated, plant managers needed to provide high-level data requirements for financial analysis. There were many complaints about the existing data: It was inconsistent, incorrect, late, and more. The primary deliverable for the workshop would be a data model, supplemented by the questions that managers would need to ask to run their plants, streamline operations, and meet operating targets.

Using a spreadsheet, I created a template in which they would enter their plant-management questions along with the business reasons for the questions, the decisions they made based on the answers, and a list of specific data attributes they would need. During the workshop, a review of those questions led to the discovery of high-level data entities and attributes.

We also asked the plant managers to bring to the workshop a list of the top three reports they used to help run the business, and a list of their top five "wish list" reports.

Their pre-work served as doneness tests for their data requirements, which in the workshop were listed on wall posters. In subgroups, the managers responded to these questions:

  • Can you get the top three reports using the questions you've created? (If not, write a new question.)

  • Can you get the top three reports with the data shown on the walls? (If not, add all the missing data to the entity—we called them data groupings in the workshop—to which it belongs.)

  • Will your wish list reports answer the questions you've created? (If not, add one or more new questions to get answers.)

  • Will your wish list reports be answered with the data shown on the walls? (If not, add all of the data to the appropriate data grouping.)
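
The report-versus-data cross-check in those questions amounts to a coverage test, sketched below with invented report names and data attributes (the actual workshop used the managers' own pre-work, not these values):

```python
# Hypothetical pre-work: each report mapped to the data attributes it needs,
# plus the data attributes already posted on the workshop wall posters.
TOP_REPORTS = {
    "monthly cost summary": {"plant", "period", "operating cost"},
    "downtime by line": {"plant", "line", "downtime hours"},
}
WALL_DATA = {"plant", "period", "operating cost", "line"}

def missing_data(reports, wall_data):
    """For each report, list attributes not yet captured on the wall posters."""
    return {name: needed - wall_data
            for name, needed in reports.items() if needed - wall_data}

print(missing_data(TOP_REPORTS, WALL_DATA))
# {'downtime by line': {'downtime hours'}} -- add it to its data grouping
```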

In one of my workshops, the decision maker was the business project manager; he was also a subject matter expert. He decided to reach closure on a set of use cases, prototype screens, and business rules even though some of the QA answers indicated that the models were not complete. By following the decision-making process, he learned that the participants, including the software team, supported the current states of the models despite the flaws. He made the decision to declare them good enough, and we moved on to another set of requirements.
