This chapter is from the book
Why Is Together Exciting Technology?

Together is exciting for software development teams for a simple reason—it takes multiple steps out of your job while increasing rather than reducing the quality of the end result. It automates the "nausea" (those jobs you know you ought to do but hate doing because they are repetitive and error-prone) while leaving you maximum freedom to work creatively at the "hard" part of the job—thinking, inventing, modeling, and producing excellent software. As a result, you are much more likely to get "in the flow" and stay there, where you are more productive, less error-prone, and more satisfied with your job.

There are several aspects of Together that make this possible; we discuss each of them in this section:

  • Maintaining a single source (LiveSource)

  • Controlled collaboration through configuration control

  • Automation of the mundane (document generation, deploy, audit, code layout)

  • Disseminating expertise through patterns

  • Continuous monitoring (and feedback) of quality

Maintaining a single source (LiveSource)

There may be over 28 patents pending for Together technology, but even so, the secret of its success is not rocket science. If you look at what software development teams produce, it is not just software. There are many other related artifacts from statements of the requirements and how to test them, to documentation of the design and how to build, maintain, and evolve the system. In many cases information is duplicated in many documents and other artifacts surrounding the system software. What Together provides are multiple, editable views of the information—all related to just one underlying set of files where the information is stored and versioned.

The classic situation to see this in Together is the set of class diagrams that show the static structure of the software (its basic design) and the source code implementing this structure. Before Together, tools had for some time been able to generate code skeletons from design diagrams. A few had even been able to "reverse-engineer" from code back to design diagrams. The revolutionary step taken in Together was to choose only one storage format for these artifacts (in this case the source code).

Figure 1-1 shows a class diagram generated by Together showing three classes, Person, Employee, and Company. The diagram gives information about the attributes, operations, and associations of the classes, and about constraints on objects of the classes. For example, according to this model, an object that is an instance of Person may (or may not) be an employee and therefore associated with one or more Employee objects. This is shown by the 0..* multiplicity on the association. Every Employee is part of (aggregation being shown in UML by the open diamond) exactly one Company. The role of the company in this relationship is employer; the 1 multiplicity on the aggregation link shows that the Employee object must be associated with exactly one Company.

Figure 1–1 A simple UML class diagram.

The Java code associated with this class diagram contains all the information shown here—as well as quite a lot more, of course, if the design has been implemented. Some information, such as the multiplicity and role names on associations, is stored as javadoc tags. Together uses these tags to store design information in the source code files. For example, if you delete those tags, then the adornments on the diagram will disappear. So, we don't think of those tags as just comments; they are the actual source for multiple views of this information. Other information on the diagram comes directly from the compilable Java code—the names of attributes, operations, and so on. When changes are made, whether using Together or any other tool, Together keeps the diagrams continuously up to date.

Let's look at some of the source code for these classes. Here's an extract from Employee and Company.

 1 public class Employee {
 2
 3  /**
 4   * @supplierCardinality 1
 5   * @clientCardinality 0..*
 6   */
 7  private Person person;
 8
 9  private BigDecimal salary;
10
11  public String getName(){
12   return this.person.getName();
13  }
14  public String getAddress(){
15   return this.person.getAddress();
16  }
17  //other operations...
18 }

The comments on lines 3 through 6 store the multiplicity (or cardinality) of the association between this class and the class of the data member following the comments, in this case Person.
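The Person class that Employee delegates to is not shown in this extract. A minimal sketch consistent with the diagram and with the getName/getAddress delegation above (our illustration only; the fields and constructor are assumptions) might be:

```java
// Hypothetical sketch of the Person class implied by the diagram;
// not from the original source. Fields and constructor are assumed.
public class Person {
    private String name;
    private String address;

    public Person(String name, String address) {
        this.name = name;
        this.address = address;
    }

    public String getName() {
        return this.name;
    }

    public String getAddress() {
        return this.address;
    }
}
```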

19 public class Company {
20  /**
21   * @link aggregation
22   *  @associates <{Employee}>
23   * @supplierCardinality 0..*
24   * @clientRole employer
25   * @clientCardinality 1
26   */
27  private java.util.List employees =
28   new java.util.ArrayList();
29
30  public BigDecimal calculateTotalStaffCost(
31           java.util.Date from,
32           java.util.Date to) {
33   //implementation code here...
34  }
35  //other operations...
36 }

Similarly with the comments on lines 20 through 26: Here the comments additionally store the class of the objects (Employee) that will be held in the collection class, java.util.ArrayList. Operations such as those defined on line 30 are automatically kept in step with the diagram.

The diagram in Figure 1–1 is at a fairly high level, as in this instance we selected a view management level of Analysis in Together. If we wanted what Martin Fowler calls an implementation class diagram (Fowler 2000a), with the types and parameters of attributes and operations shown and the navigability indicators on the associations, we would not change the model at all, but merely select a different set of view management options for this diagram in Together.

Alternatively, we might create a new class diagram with different options, providing a different view of the same classes. This means that multiple diagrams can be produced at different levels of detail, appropriate to their readership, and all of them can be kept up to date simultaneously.

A Single Source for All Information—One Model

Behind this simple explanation of where Together stores model information is a "big idea," one that TogetherSoft refers to as LiveSource technology, or "simultaneous roundtrip." If all it meant was that Together could easily do forward and reverse engineering, then we wouldn't really have gained that much. Other tools are competing with Together by speeding up their roundtrip capabilities, but in many ways this misses the point. What makes Together special is that there is no reverse or forward engineering. If you change the diagram, the code changes instantaneously because they both come from the same source. On the other hand, if you are clear on the implementation and just want to blast some code in, that's fine too—the diagram will be updated immediately when you next look at it, again because both diagram and text are merely different projections of the same source information.

Together allows you to model using all the other UML diagram types, as well as several additional diagrams such as entity-relationship diagrams and Enterprise JavaBeans (EJB) assembly diagrams, and wherever appropriate, the same approach is applied. With some diagrams, of course, there is no corresponding code view (for example, a use case diagram), and here the diagram itself is its own single source. Overall, the source code and related artifacts are all stored as simple, modular files, which can all be stored and versioned in one place.

Why One Model Works

Some development processes are designed to result in a succession of documents and models. For example, Figure 1-2 shows a UML activity diagram of a waterfall-style development process. The succession of documents, or models, produced phase by phase in the process, is shown as a series of rectangles on the diagram, here named:

  • Domain Analysis Document,
  • Requirements Specification,
  • Design Documentation, and
  • Executable System.

Figure 1–2 Activity diagram of a waterfall-style process.

Regardless of the names of the models—different versions of the waterfall have different names for the artifacts—the common characteristic is that there is a set of models, produced in sequence, with any work on one model or artifact potentially discovering and necessitating rework of any or all of the earlier stages.

People often assert that the waterfall is not iterative, and that is where the problem with the process lies. This is not true—even in its earliest forms (Royce W.W. 1970, Royce W. 1998) a flow back to previous stages was included. The main point about such back flows is that they require reissue and sign-off of previous phases' outputs. It is this requirement that makes iterations very costly, especially those involving changes to the models of early phases.

It should be no surprise that a requirements change is always expensive when a waterfall-style process is strictly applied—and strictly applying it is the only way to ensure the models are kept in step with each other.

By contrast, we have already seen how Together uses LiveSource to keep design documentation and the source code for the executable system always in step. We have also seen how it can maintain multiple diagrams that display different levels of detail of the same elements. Together therefore makes it very straightforward to update multiple views simultaneously—in fact, to treat both the design and the implementation as a single model. Instead of viewing the development process as moving from one phase to another with different models (probably in different formats and notations being produced in each phase), Together allows us to see the set of all the artifacts that make up one integrated and interrelated model. When we make a change, Together will help us to update all the views together. Where this is not possible, we can nevertheless access all the diagrams and model elements of the single model from within Together, and it can report on outstanding issues of consistency or completeness.

The single model approach does not mean that every part of the model is automatically synchronized with every other part. For example, updating a use case diagram (discussed in Chapter 4, "The Stakeholder Step: Specify Requirements") or a features list (discussed in Chapter 5, "The Controlling Step: Feature-Centric Management") does not immediately result in generation of the implementing code—now that would be impressive code generation! What it does mean, however, is that information is as far as possible stored only once, and viewed—and, most importantly, updated—through multiple views.

A single model is also a significant change in team organization. In the past it was not unusual for teams to be organized by phases of a waterfall lifecycle. A single model, on the other hand, encourages a much more holistic view of the system's artifacts: the domain model, requirements, architecture, detailed design, and code. The goal of the team is to continually present a consistent set of these artifacts as the conclusion of one iteration and the starting point for the next.

Corollary: Why Maintaining Two Models Doesn't Work

Some methodologies make a virtue of the need to concurrently maintain different models of the system. Sometimes, these multiple models are maintained by different workers who focus on different stages of a phased lifecycle (analysis models, logical data models, physical models, etc.). While this gives freedom to the workers to modify the models without immediate reference to others, it results in the need for either a costly synchronization process to harmonize the models or an acceptance of different models that conflict with the reality of the delivered system. We prefer the simpler and more effective route of maintaining only one model.

We have been on many projects where the analysis and design/code repositories have been managed as completely separate entities. After the first phase of the project has been delivered (and for some reason, it always seems to be a little late—we don't think it's just us), the project manager is under pressure to catch up on the schedule. Any requests to go back to the analysis repository to make the updates and modifications that emerged while implementation was underway are usually given short shrift. As the project continues, the analysis and implementation views get further and further apart, and ultimately no one trusts the information in the analysis view at all.

It is also a reason why software maintenance documentation is so rarely read, even though it is produced at very great expense. The code is always likely to be different from what was documented. Since the maintainer of the code has to read the code—after all, it's the code that is compiled and executed in the computer, not the analysis and design documents—he or she will very often read only the code.

The "No Change" Directive

Andy Carmichael

On a famous, but to protect the innocent, nameless project to which I was assigned as a methods consultant a few years back, the project manager made a bold decision. He gathered the several hundred members of the project in the largest room on the site and placed his single slide on the overhead projector. It had two words on it: "No Change!"

The project, like nearly all large projects, was running late, and the deadline that was looming was not one that could be moved. The national and European law that affected this industry was being changed, and the businesses that required the new system would not be able to cope with the new legal framework with their existing systems. However, at this stage of the project, with software being delivered to system test, a phenomenon that worried the project manager immensely was observed. Modules that he thought had been finished and signed off were being checked out and changed again. Instead of the nice steady progress through the set of modules to be delivered, for every two modules getting through testing, at least one of them was being checked out again and changed. The project manager felt that something had to be done. As well as issuing this directive to the whole team, he decided to intensify the review process for what were considered essential changes to signed-off modules. His intention was that, first, his team would not try to change modules at all, and second, if they really felt it was essential to change them, the decision would be confirmed by several levels of review before the requirement for change was passed to an engineer for implementation.

I don't think you need to be a prophet to foresee what was likely to happen on this project. The directive did have the immediately desired effect in that signed-off modules were changed far less frequently. However, the changes that were needed (why else would they have been requested?) had to be made in other places—in modules that had not yet been signed off. This meant that the architecture and design of the system was being continuously compromised by "Band-Aid" modifications being made where they simply didn't belong. The code base became larger than it needed to be, and the design more complex. Changes that would have been relatively easy to make in the "right" place became harder, error-prone, and repetitive when applied in the "wrong" place. The true goal of the project manager—to deliver the system on time—was being threatened by a well-meaning but totally misinformed idea.

What is the moral of this tale?

First, rather than create an environment in which change of any kind is difficult and resisted, we should look to remove barriers to making change, enabling changes to be made as fast and as simply as possible. Protection against ill-advised change is one of the essential elements of such an environment—version control is a foundation of any development process. But developers should face the fewest possible technical and managerial constraints on carrying out even experimental changes.

Second, we should beware of thinking of the development lifecycle as a simple sequence of phases and a sequence of tasks to be completed within those phases. The analogy with evolution is more useful than the analogy of an assembly process. In manufacturing one follows a precisely defined set of steps only at the end of which is the completed article. In evolution, however, every step in the process must be an organism that will survive and provide the basis for the next generation. The project manager in this story felt that his problem arose because of poor discipline—developers changing things that were already "good enough." In fact, the problem was more likely to be related to the slow rate of change of certain parts of the software that were difficult for other developers to build on, or that were incomplete.

Controlled Collaboration Through Configuration Management

Another key reason that Together speeds development is that it is built on top of version-controlled files where all the information of the project is stored. Many modeling tools have their own proprietary databases or model file formats, and thus require a different mechanism for controlling collaboration from the one used for source code and other documents—namely, the configuration management or version control system.

In contrast, Together uses the same mechanism for models, documents, and code. There is therefore one place where all aspects of collaboration are defined and a much clearer mechanism for updating all artifacts. Together can be used with just about any commercially available configuration management system. This includes PVCS, SourceSafe, StarTeam, Continuus, ClearCase, and indeed any system that is SCC-compliant or controllable from a command-line interface. If there is no configuration management standard for your organization, then you can install the widely used open source system CVS, which is distributed with Together. The access to the version control system from within Together will look similar whichever version control system you use; Figure 1-3 shows an example.

Figure 1–3 Accessing Version Control from a Together Diagram.

Since all artifacts are held in the configuration management repository and are under version control, there is a single way to manage multiple developers and business analysts updating the evolving system and its requirements. While configuration control does require commitment and discipline from the team at the start of the project, once installed it becomes the core storage mechanism of the project, keeping current and historic versions of artifacts and allowing the team to make changes rapidly without losing track of previously tested and working versions. Configuration management systems are the best way for knowledge workers to collaborate, since individuals or pairs can work on aspects of the requirements, design, or code without affecting other workers. When their work is ready to be shared, others can be updated very rapidly.

Automation of the mundane

Imagination, invention, creativity, innovation, and lateral thinking: These are all characteristics that we value and nurture in software designers and developers. But how much of their time is spent in such creative work?

The answer is a disappointingly small proportion. Developers and designers instead will spend a lot of their time carrying out tasks that are repetitive, boring, error-prone, and time-consuming. These are the types of tasks that computers are good at. For creativity, we need humans; for most other tasks, there's software!

As far as possible, we want to use Together for the mundane tasks like documentation generation, regression testing, and EJB deployment. Throughout this book we'll show you examples of how Together supports this out-of-the-box, primarily through its J2EE deployment diagrams and wizards, its GenDoc (Document Generation) functionality, and its support for patterns and templates for quickly developing standard code. Furthermore, we'll be showing you just a few of the things you can do with Together's open API (the Java interface for customizing and extending Together). When you find that your team is doing repetitive tasks and these tasks are not already automated within Together, you can readily write a plug-in module to automate the job.

Disseminating expertise through Patterns

Patterns have been a hot topic in software engineering for some time, with the first papers on software patterns being published around 1992 (Coad 1992; Coplien 1992; Johnson 1992; Booch 1993; Anderson 1994) and books like the Gang of Four's bible of patterns (Gamma, et al. 1995) emerging a few years later. In fact, patterns have been part of the universal experience of human learning for untold centuries. A pattern is an example to follow, an encapsulation of how to solve a class of problem that others can easily apply.

Together was one of the first tools to provide automated support for both the definition of software patterns and their application. This support is a major building block of Together's success.

If you can capture the major patterns that are used in a particular architecture, you make that architecture almost trivially easy to use compared to trying to give developers all the rules in documents or Web pages. A good example of this is the set of J2EE patterns automated in Together and in its related scripts, plug-ins, and features for particular application servers.

There are three ways to define patterns in Together, ranging from code snippets and parameterized templates, which can be set up in seconds using menu options and cut-and-paste, to the Patterns API, which gives a Java programmer full access to the source code he or she wants set up by the pattern. We give some examples of using parameterized templates in Appendix D. The Patterns API is outside the scope of this book, but we do discuss some possible example patterns that could be implemented. You can also find patterns, including the source for one or two of the shipped patterns, at http://www.togethercommunity.com.
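To make the idea concrete, the GoF patterns cited above are typical of what automated pattern support can apply. Applying, say, Singleton to a class produces boilerplate along these lines (a generic sketch of the pattern itself; the class name is illustrative and the code Together actually generates may differ):

```java
// Generic Singleton boilerplate of the kind a pattern tool can
// apply automatically. The class name is illustrative only.
public class ConnectionManager {
    // The single shared instance, created lazily on first access.
    private static ConnectionManager instance;

    // Private constructor prevents direct instantiation.
    private ConnectionManager() {
    }

    // Synchronized so that lazy creation is safe across threads.
    public static synchronized ConnectionManager getInstance() {
        if (instance == null) {
            instance = new ConnectionManager();
        }
        return instance;
    }
}
```

Capturing this shape once as a pattern means no developer has to retype (or mistype) it again.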

Continuous monitoring (and feedback) of quality

Together's LiveSource technology builds up a complete internal object model of the code. Every feature of the code, be it an operation, the types and names of that operation's parameters, or even the test expression within an if statement, is modeled internally as an object.

With such detailed information in its internal repository, it should be no surprise that Together is able to provide sophisticated audits and metrics.
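Conceptually, such an internal model treats every source element—class, operation, parameter—as an object in its own right. A much-simplified sketch of that idea (ours; Together's actual open API classes differ):

```java
import java.util.ArrayList;
import java.util.List;

// Much-simplified sketch of a code model in which every source
// element is itself an object. Class names here are illustrative,
// not Together's real API.
class ModelElement {
    final String name;

    ModelElement(String name) {
        this.name = name;
    }
}

// A parameter of an operation: a named element with a type.
class Parameter extends ModelElement {
    final String type;

    Parameter(String name, String type) {
        super(name);
        this.type = type;
    }
}

// An operation: a named element owning a list of parameter objects.
class Operation extends ModelElement {
    final List<Parameter> parameters = new ArrayList<>();

    Operation(String name) {
        super(name);
    }
}
```

With a model of this shape, tools can query and transform code structurally rather than as text.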


Audits check for conformance with standards. The out-of-the-box audits fall into the following categories:

  • Coding Style
  • Critical Errors
  • Declaration Style
  • Documentation
  • Naming Style
  • Performance
  • Possible Errors
  • Superfluous Content

An example of a critical-error audit is "AHIA: Avoid Hiding Inherited Attributes"; hiding an inherited attribute is (most would agree) bad practice, since it makes the code more difficult to understand and modify. An example of a performance audit is "ADVIL: Avoid Declaring Variables Inside Loops"—a very bad idea if the loop is iterated through a large number of cycles.
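To illustrate what these two audits catch, here is a contrived fragment (our own example, not from Together's documentation) that violates both:

```java
// Contrived example containing one AHIA and one ADVIL violation.
class Vehicle {
    protected String name;
}

class Car extends Vehicle {
    // AHIA violation: this field hides Vehicle.name, so code that
    // reads "name" behaves differently depending on the static type
    // through which the object is accessed.
    protected String name;

    long sumOfSquares(int[] values) {
        long total = 0;
        for (int i = 0; i < values.length; i++) {
            // ADVIL violation: "squared" is declared inside the loop
            // body; the audit suggests declaring it once, before the
            // loop, which can matter in heavily iterated code.
            int squared = values[i] * values[i];
            total += squared;
        }
        return total;
    }
}
```

Both fragments compile cleanly, which is exactly why automated audits are useful: the compiler will not warn you about either.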

Not every audit is expected to be applied on every project, so custom sets of audits can be defined and applied to your code. A good time to apply them is every night as part of an overnight build; at a minimum, it should be done before integration or system testing.

If for some reason you have a code-based standard not already provided out-of-the-box, then Together's open API also allows for the definition of new audits. We show you how to do this in Chapter 6, "The Continuous Step: Measure the Quality."

There is also a key "automation of the mundane" feature with audits. Some of the audits now have an "auto-fix" feature, which optionally refactors the code automatically so that it passes the audit. It's not realistic for all audits to be able to sport this feature, but we can hope that a growing number will.


Metrics are packaged in Together alongside the audit functionality, though here the main user is likely to be a project manager, whereas with audits it might be the chief developer or anyone about to start a system test cycle. Metrics allow quantitative analysis of the code base, again in a number of categories:

  • Basic
  • Cohesion
  • Complexity
  • Coupling
  • Encapsulation
  • Halstead
  • Inheritance
  • Maximum
  • Polymorphism
  • Ratio

An example of a basic metric is "LOC: Lines of Code." An example of an inheritance metric is "DOIH: Depth of Inheritance Hierarchy." Again, if not all of these metrics are deemed necessary for your project, you can define the set of metrics to be taken daily or perhaps weekly. Together's open API allows custom metrics to be created in a similar manner to the creation of custom audits.
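As an illustration of what a DOIH-style metric actually measures (our sketch using plain reflection, not Together's metrics engine), inheritance depth can be computed by walking superclass links up to the root:

```java
// Sketch of a Depth of Inheritance Hierarchy (DOIH) metric: counts
// superclass links from a class up to java.lang.Object, which has
// depth 0. Interfaces are not counted in this simple version.
public class InheritanceDepth {
    public static int depthOf(Class<?> type) {
        int depth = 0;
        for (Class<?> c = type.getSuperclass(); c != null; c = c.getSuperclass()) {
            depth++;
        }
        return depth;
    }
}
```

For example, java.util.ArrayList has depth 3 (ArrayList → AbstractList → AbstractCollection → Object); deep hierarchies are one signal that a design may be hard to understand.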

We heard of one project manager who used the metrics facility in Together to publish a "Metric of the Week" on the project intranet, with a league table of software packages. This was not done with a "name-and-shame" attitude, but to focus attention on the different aspects of quality being sought in the project. It's an interesting idea.

Document Generation

Sometimes, there is no substitute for a paper-based version of a project's documentation as part of the review process. Together allows an RTF format (compatible with Microsoft Word and other word processors) or plain text document to be generated, using a template that can be edited and customized for your project.

Our preference for most purposes is to generate an HTML web site version of the project documentation, which Together can also produce, giving up-to-date online access to the software design and other artifacts. Many projects generate this each night, alongside the nightly build, so that all team members can always access the most current documentation set; it can even be reached from the Context Help menu on selections in Together's editor, if you set up the appropriate paths for the Help system.

Other Possibilities

Together's open API allows for the inspection of not just the code, but indeed any artifact of the repository. For example, if you set a standard that every sequence diagram should be hyperlinked to the use case that it realizes (both of which are accessible as objects in Together's repository, even though neither are code artifacts), then this can be enforced by iterating over the repository and checking for conformance.

This feature by itself means that virtually any quality standard can be monitored, taking the QA features far beyond mere coding standards. We give examples of this in Chapter 6.

So, Together is exciting technology because of LiveSource, its controlled collaboration through configuration management, its automation of the mundane, and its continuous monitoring of quality. We now consider the impact all this has on our development process.


Users can always make an informed choice as to whether they should proceed with certain services offered by InformIT. If you choose to remove yourself from our mailing list(s) simply visit the following page and uncheck any communication you no longer want to receive: www.informit.com/u.aspx.

Sale of Personal Information

Pearson does not rent or sell personal information in exchange for any payment of money.

While Pearson does not sell personal information, as defined in Nevada law, Nevada residents may email a request for no sale of their personal information to NevadaDesignatedRequest@pearson.com.

Supplemental Privacy Statement for California Residents

California residents should read our Supplemental privacy statement for California residents in conjunction with this Privacy Notice. The Supplemental privacy statement for California residents explains Pearson's commitment to comply with California law and applies to personal information of California residents collected in connection with this site and the Services.

Sharing and Disclosure

Pearson may disclose personal information, as follows:

  • As required by law.
  • With the consent of the individual (or their parent, if the individual is a minor)
  • In response to a subpoena, court order or legal process, to the extent permitted or required by law
  • To protect the security and safety of individuals, data, assets and systems, consistent with applicable law
  • In connection the sale, joint venture or other transfer of some or all of its company or assets, subject to the provisions of this Privacy Notice
  • To investigate or address actual or suspected fraud or other illegal activities
  • To exercise its legal rights, including enforcement of the Terms of Use for this site or another contract
  • To affiliated Pearson companies and other companies and organizations who perform work for Pearson and are obligated to protect the privacy of personal information consistent with this Privacy Notice
  • To a school, organization, company or government agency, where Pearson collects or processes the personal information in a school setting or on behalf of such organization, company or government agency.


This web site contains links to other sites. Please be aware that we are not responsible for the privacy practices of such other sites. We encourage our users to be aware when they leave our site and to read the privacy statements of each and every web site that collects Personal Information. This privacy statement applies solely to information collected by this web site.

Requests and Contact

Please contact us about this Privacy Notice or if you have any requests or questions relating to the privacy of your personal information.

Changes to this Privacy Notice

We may revise this Privacy Notice through an updated posting. We will identify the effective date of the revision in the posting. Often, updates are made to provide greater clarity or to comply with changes in regulatory requirements. If the updates involve material changes to the collection, protection, use or disclosure of Personal Information, Pearson will provide notice of the change through a conspicuous notice on this site or other appropriate way. Continued use of the site after the effective date of a posted revision evidences acceptance. Please contact us if you have questions or concerns about the Privacy Notice or any objection to any revisions.

Last Update: November 17, 2020