
7.2 OpenStack

A structured implementation of a private cloud would benefit from well-defined services, which are consumed by the virtual environments that self-service users deploy. One popular implementation of those services, along with the management tools necessary to deploy and use a private cloud, is OpenStack. The following subsections describe OpenStack briefly, and then discuss the integration of Oracle Solaris and OpenStack.

7.2.1 What Is OpenStack?

OpenStack is a community-based open-source project that forms a comprehensive management layer for creating and managing private clouds. The project began as a joint effort of Rackspace and NASA in 2010 and is now driven by the OpenStack Foundation. Since 2010, OpenStack has been the fastest-growing open-source project worldwide, with thousands of commercial and individual contributors spread across the globe. The community publishes two OpenStack releases per year.

OpenStack can be considered an operating system for cloud environments. It provides the foundation for Infrastructure as a Service (IaaS) clouds. Some new modules add features required in Platform as a Service (PaaS) clouds. OpenStack should not be viewed as layered software, however, but rather as an integrated infrastructure component. Thus, although the OpenStack community launches OpenStack releases, infrastructure vendors must integrate the open-source components into their own platforms to deliver the OpenStack functionality. Several operating system, network, and storage vendors offer OpenStack-enabled products.

OpenStack abstracts compute, network, and storage resources for the user, exposing those resources through a web portal with a single management pane. This integrated approach enables administrators to easily manage a variety of storage devices and hypervisors. The cloud services are based on a series of OpenStack modules that communicate with one another through defined RESTful APIs.

If a vendor plans to offer support for certain OpenStack services in its products, it must implement the functionality of those services and provide access to it through the REST APIs. This is done by delivering a service plugin, specialized for the product, that bridges the gap between the REST API definition and the existing product features.
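In essence, such a plugin is a class that implements the driver interface behind the service's REST API and translates each operation into calls to the product's own tooling. The following sketch is purely illustrative; the class and method names are hypothetical, not the actual OpenStack driver API:

```python
# Minimal sketch of a vendor storage plugin: the service defines a driver
# interface behind its REST API, and the vendor maps each operation onto a
# product-specific feature. All names here are hypothetical.

class BaseVolumeDriver:
    """Interface the block-storage service expects from any back-end."""
    def create_volume(self, name, size_gb):
        raise NotImplementedError

    def delete_volume(self, name):
        raise NotImplementedError


class ExampleZfsDriver(BaseVolumeDriver):
    """Maps the generic operations onto (simulated) ZFS commands."""
    def __init__(self):
        self.volumes = {}          # stands in for real ZFS state

    def create_volume(self, name, size_gb):
        # A real driver would run something like:
        #   zfs create -V <size_gb>g <pool>/<name>
        self.volumes[name] = size_gb
        return {"name": name, "size_gb": size_gb}

    def delete_volume(self, name):
        # A real driver would run: zfs destroy <pool>/<name>
        self.volumes.pop(name, None)


driver = ExampleZfsDriver()
vol = driver.create_volume("vol01", 10)
print(vol)                         # {'name': 'vol01', 'size_gb': 10}
```

The REST layer never sees the product specifics; it only calls the generic driver methods.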

7.2.2 The OpenStack General Architecture

Figure 7.3 depicts the general architecture of an OpenStack deployment. It consists of services provided by the OpenStack framework, and compute nodes that consume those services. This section describes those services.

Figure 7.3 OpenStack Architecture

Several OpenStack services combine to form an OpenStack-based private cloud. The services are interconnected via the REST APIs and depend on each other. Not all services are needed to form a cloud, however, and not every vendor delivers all of them. Some services have a special purpose and are configured only when appropriate; others are always needed when setting up a private cloud.

Because of the clearly defined REST APIs, services are extensible. The following list summarizes the core service modules.

  • Cinder (block storage): Provides block storage for OpenStack compute instances and manages the creation, attaching, and detaching of block devices to OpenStack instances.

  • Glance (images): Provides discovery, registration, and delivery services for disk and server images. The stored images can be used as templates for the deployment of VEs.

  • Heat (orchestration): Enables the orchestration of complete application stacks, based on Heat templates.

  • Horizon (dashboard): Provides the dashboard management tool to access and provision cloud-based resources from a browser-based interface.

  • Ironic (bare-metal provisioning): Provisions bare-metal machines (physical nodes) as OpenStack guests.

  • Keystone (authentication and authorization): Provides authentication and high-level authorization for the cloud and between cloud services. It consists of a central directory of users mapped to the cloud services they can access.

  • Manila (shared file system): Allows the OpenStack instances to access shared file systems in the cloud.

  • Neutron (network): Manages software-defined network services such as networks, routers, switches, and IP addresses to support multitenancy.

  • Nova (compute): The primary service that provisions virtual compute environments based on user requirements and available resources.

  • Swift (object storage): A redundant and scalable storage system, with objects and files stored and managed on disk drives across multiple servers.

  • Trove (database as a service): Allows users to quickly provision and manage multiple database instances without the burden of handling complex administrative tasks.
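Every call among these services is authenticated with a Keystone-issued token; a client obtains one by POSTing a JSON document to Keystone's /v3/auth/tokens endpoint. The structure below follows the Identity API v3 password-authentication format; the user, project, and endpoint values are placeholders:

```python
import json

# Keystone v3 password-authentication request body (structure per the
# OpenStack Identity API v3). User/project values are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "cloud_user",            # placeholder
                    "domain": {"name": "Default"},
                    "password": "secret",            # placeholder
                }
            },
        },
        "scope": {
            "project": {
                "name": "demo",                      # placeholder
                "domain": {"name": "Default"},
            }
        },
    }
}

# A client would POST this JSON to http://<keystone-host>:5000/v3/auth/tokens
# and read the issued token from the X-Subject-Token response header.
print(json.dumps(auth_request, indent=2))
```

The returned token is then passed in the X-Auth-Token header on subsequent calls to Nova, Cinder, Neutron, and the other services.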

7.2.3 Oracle Solaris and OpenStack

Oracle Solaris 11 includes a full distribution of OpenStack as a standard, supported part of the platform. The first such release was Oracle Solaris 11.2, which integrated the Havana OpenStack release. The Juno release was integrated into Oracle Solaris 11.2 Support Repository Update (SRU) 6. In Solaris 11.3 SRU 9, the integrated OpenStack software was updated to the Kilo release.

OpenStack services have been tightly integrated into the technology foundations of Oracle Solaris. The integration of OpenStack and Solaris leveraged many new Solaris features that had been designed specifically for cloud environments. Some of the Solaris features integrated into OpenStack include:

  • Solaris Zones driver integration with Nova to deploy Oracle Solaris Zones and Solaris Kernel Zones

  • Neutron driver integration with Oracle Solaris network virtualization, including Elastic Virtual Switch

  • Cinder driver integration with the ZFS file system

  • Unified Archives integration with Glance image management and Heat orchestration

  • Bare-metal provisioning implementation using the Oracle Solaris Automated Installer (AI)

Figure 7.4 shows the OpenStack services implemented in Oracle Solaris and the related supporting Oracle Solaris features.

Figure 7.4 OpenStack Services in Oracle Solaris

All services have been integrated into the Solaris Service Management Framework (SMF) to ensure service reliability, automatic service restart, and service dependency management. SMF properties enable additional configuration options. Oracle Solaris Role-Based Access Control (RBAC) ensures that the OpenStack services, represented by their corresponding SMF services, run with minimal privileges.

The OpenStack modules are delivered in separate Oracle Solaris packages, as shown in this example generated in Solaris 11.3:

# pkg list -af | grep openstack
cloud/openstack                    0.2015.2.2-    i--
cloud/openstack/cinder             0.2015.2.2-    i--
cloud/openstack/glance             0.2015.2.2-    i--
cloud/openstack/heat               0.2015.2.2-    i--
cloud/openstack/horizon            0.2015.2.2-    i--
cloud/openstack/ironic             0.2015.2.1-    i--
cloud/openstack/keystone           0.2015.2.2-    i--
cloud/openstack/neutron            0.2015.2.2-    i--
cloud/openstack/nova               0.2015.2.2-    i--
cloud/openstack/openstack-common   0.2015.2.2-    i--
cloud/openstack/swift              2.3.2-         i--

To install the whole OpenStack distribution on a system, simply install the cloud/openstack group package. It automatically pulls in all of the dependent OpenStack modules and libraries, plus additional packages such as rad, rabbitmq, and mysql.

The integration of OpenStack with the Solaris Image Packaging System (IPS) greatly simplifies updates of OpenStack on a cloud node, through the use of full package dependency checking and rollback. This was accomplished through integration with ZFS boot environments. Through a single update mechanism, an administrator can easily apply the latest software fixes to a system, including the virtual environments.

7.2.4 Compute Virtualization with Solaris Zones and Solaris Kernel Zones

Oracle Solaris Zones and Oracle Solaris Kernel Zones are used for OpenStack compute functionality. They provide excellent environments for application workloads and are fast and easy to provision in a cloud environment.

The life cycle of Solaris Zones as compute instances in an OpenStack cloud is controlled by the Solaris Nova driver for Solaris Zones. The instances are deployed by using the Nova command-line interface or by using the Horizon dashboard. To launch an instance, the cloud user selects a flavor, a Glance image, and a Neutron network. Figures 7.5 and 7.6 show the flavors available with Oracle Solaris OpenStack and the launch screen for an OpenStack instance.

Figure 7.5 OpenStack Flavors

Figure 7.6 OpenStack Instance Launch Screen

Oracle Solaris-specific options determine whether a native Solaris zone or a Solaris kernel zone is created. These special properties are assigned as extra_specs, which are typically set through the command line. Their keys correspond to a set of zone properties that are normally configured with the zonecfg command and that are supported in OpenStack.

The following keys are supported in both kernel zones and non-global zone flavors:

  • zonecfg:bootargs

  • zonecfg:brand

  • zonecfg:hostid

  • zonecfg:cpu-arch

The following keys are supported only in non-global zone flavors:

  • zonecfg:file-mac-profile

  • zonecfg:fs-allowed

  • zonecfg:limitpriv
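Conceptually, the Nova driver inspects the flavor's extra_specs and passes the zonecfg:* keys through to the zone configuration. A simplified sketch of that mapping (illustrative only, not the actual Solaris Nova driver code):

```python
# Simplified sketch of how zonecfg:* keys in a flavor's extra_specs could be
# turned into zone properties. Not the actual Solaris Nova driver code.

def zone_properties(extra_specs):
    """Extract zonecfg:* keys and return them as zone property settings."""
    props = {}
    for key, value in extra_specs.items():
        if key.startswith("zonecfg:"):
            props[key[len("zonecfg:"):]] = value
    # Default to a native (non-global) zone when no brand is given.
    props.setdefault("brand", "solaris")
    return props

# A kernel-zone flavor carries {'zonecfg:brand': 'solaris-kz'}:
print(zone_properties({"zonecfg:brand": "solaris-kz"}))
# {'brand': 'solaris-kz'}
```

Keys without the zonecfg: prefix (such as sc_profile) are handled separately by the driver.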

The list of current flavors and their extra_specs can be displayed on the command line (for example, with nova flavor-list):

| ID | Name                                    | extra_specs                       |
| 1  | Oracle Solaris kernel zone - tiny       | {u'zonecfg:brand': u'solaris-kz'} |
| 10 | Oracle Solaris non-global zone - xlarge | {u'zonecfg:brand': u'solaris'}    |
| 2  | Oracle Solaris kernel zone - small      | {u'zonecfg:brand': u'solaris-kz'} |
| 3  | Oracle Solaris kernel zone - medium     | {u'zonecfg:brand': u'solaris-kz'} |
| 4  | Oracle Solaris kernel zone - large      | {u'zonecfg:brand': u'solaris-kz'} |
| 5  | Oracle Solaris kernel zone - xlarge     | {u'zonecfg:brand': u'solaris-kz'} |
| 6  | Oracle Solaris non-global zone - tiny   | {u'zonecfg:brand': u'solaris'}    |
| 7  | Oracle Solaris non-global zone - small  | {u'zonecfg:brand': u'solaris'}    |
| 8  | Oracle Solaris non-global zone - medium | {u'zonecfg:brand': u'solaris'}    |
| 9  | Oracle Solaris non-global zone - large  | {u'zonecfg:brand': u'solaris'}    |

The sc_profile key can be modified only from the command line. It specifies a system configuration profile for the flavor, which can preassign DNS settings or other system configuration to every instance created from that flavor. For example, the following command sets a specific system configuration profile for the flavor "Oracle Solaris kernel zone - large" (ID 4) in the previous list:

$ nova flavor-key 4 set sc_profile=/system/volatile/profile/sc_profile.xml

Launching an instance initiates the following actions in an OpenStack environment:

  • The Nova scheduler selects a compute node in the cloud that meets the flavor's requirements for hypervisor type, architecture, number of VCPUs, and RAM.

  • On the chosen compute node, the Solaris Nova implementation sends a request to Cinder to find suitable storage in the cloud for the new instance’s root file system, and then triggers the creation of a volume in that storage. Additionally, Nova obtains networking information and a network port in the selected network for the instance by communicating with the Neutron service.

  • The Cinder volume service delegates the volume creation to the storage device, receives the related storage Uniform Resource Identifier (SURI), and communicates that SURI back to the selected compute node. Typically this volume resides on a different system from the compute node and is accessed by the instance over shared storage such as FibreChannel, iSCSI, or NFS.

  • The Neutron service assigns a Neutron network port to the instance, based on the cloud networking configuration. All instances instantiated by the compute service use an exclusive IP stack instance. Each instance includes an anet resource with its configure-allowed-address property set to false, and its evs and vport properties set to UUIDs supplied by Neutron that represent a particular virtualized switch segment and port.

  • After the Solaris Zone and OpenStack resources have been configured, the zone is installed and booted, based on the assigned Glance image. This uses Solaris Unified Archives.
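The first step above, scheduling, can be pictured as a simple filter over the available compute nodes. This sketch uses hypothetical host data and is not Nova's actual filter-scheduler code:

```python
# Hypothetical sketch of the scheduling step: keep only the compute nodes
# that satisfy the flavor's hypervisor, architecture, VCPU, and RAM needs.

hosts = [
    {"name": "node1", "hypervisor": "solaris-kz", "arch": "sparc64",
     "free_vcpus": 16, "free_ram_mb": 65536},
    {"name": "node2", "hypervisor": "solaris-kz", "arch": "x86_64",
     "free_vcpus": 2,  "free_ram_mb": 4096},
]

flavor = {"hypervisor": "solaris-kz", "arch": "x86_64",
          "vcpus": 2, "ram_mb": 2048}

def candidates(hosts, flavor):
    return [h["name"] for h in hosts
            if h["hypervisor"] == flavor["hypervisor"]
            and h["arch"] == flavor["arch"]
            and h["free_vcpus"] >= flavor["vcpus"]
            and h["free_ram_mb"] >= flavor["ram_mb"]]

print(candidates(hosts, flavor))   # ['node2']
```

In a real deployment the scheduler weighs the surviving candidates (e.g., by free capacity) before picking one.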

The following example shows a Solaris Zones configuration file, created by OpenStack for an iSCSI Cinder volume as boot volume:

compute-node # zonecfg -z instance-00000008 info
zonename: instance-00000008
brand: solaris
tenant: 740885068ed745c492e55c9e1c688472
anet:
        linkname: net0
        configure-allowed-address: false
        evs: a6365a98-7be1-42ec-88af-b84fa151b5a0
        vport: 8292e26a-5063-4bbb-87aa-7f3d51ff75c0
rootzpool:
        storage: iscsi://st01-sn:3260/target.iqn.1986-03.com.sun:02:...
capped-cpu:
        [ncpus: 1.00]
capped-memory:
        [swap: 1G]
rctl:
        name: zone.cpu-cap
        value: (priv=privileged,limit=100,action=deny)
rctl:
        name: zone.max-swap
        value: (priv=privileged,limit=1073741824,action=deny)

7.2.5 Cloud Networking with Elastic Virtual Switch

OpenStack networking creates virtual networks that interconnect VEs instantiated by the OpenStack compute node (Nova). It also connects these VEs to network services in the cloud, such as DHCP and routing. Neutron provides APIs to create and use multiple networks and to assign multiple VEs to networks, which are themselves assigned to different tenants. Each network tenant is represented in the network layer via an isolated Layer 2 network segment—comparable to VLANs in physical networks. Figure 7.7 shows the relationships among these components.

Figure 7.7 OpenStack Virtual Networking

A subnet is a block of IPv4 or IPv6 addresses with assigned properties, such as the default router or name servers. Neutron creates ports in these subnets and assigns them, together with several properties, to virtual machines. The L3-router functionality of Neutron interconnects tenant networks to external networks and enables VEs to access the Internet through source NAT. A floating IP address creates a static one-to-one mapping from a public IP address on the external network to a private IP address in the cloud that is assigned to one VE.
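The floating-IP mechanism amounts to a one-to-one NAT table maintained by the Neutron L3 function. Conceptually (the addresses are examples):

```python
# Conceptual one-to-one NAT table for floating IPs: each public address on
# the external network maps to exactly one private (fixed) address.
floating_ips = {}

def associate(floating_ip, fixed_ip):
    # A floating IP can be bound to only one instance at a time.
    if floating_ip in floating_ips:
        raise ValueError(floating_ip + " already associated")
    floating_ips[floating_ip] = fixed_ip

def translate_inbound(dst_ip):
    """Rewrite the destination of inbound traffic (DNAT); non-floating
    addresses pass through unchanged."""
    return floating_ips.get(dst_ip, dst_ip)

associate("203.0.113.10", "192.168.10.5")
print(translate_inbound("203.0.113.10"))   # 192.168.10.5
```

Outbound traffic from instances without a floating IP instead shares the router's address via source NAT.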

Oracle Solaris Zones and Oracle Solaris Kernel Zones, as OpenStack instances, use the Solaris VNIC technology to connect to the tenant networks. All VNICs are bound to physical network interfaces through virtual network switches. If multiple tenants use one physical interface, multiple virtual switches are created above that physical interface.

If multiple compute nodes have been deployed in one cloud and multiple tenants are used, virtual switches from the same tenant are spread over multiple compute nodes, as shown in Figure 7.8.

Figure 7.8 Virtual Switches

A technology is needed to control these distributed switches as a single switch. The virtual networks themselves can be built with, for example, VXLAN or VLAN. In Oracle Solaris, the Elastic Virtual Switch (EVS) feature controls the distributed virtual switches: on each compute node, the local virtual switches are managed by EVS so that, together, they form one distributed switch per tenant. EVS, in turn, is driven by a Neutron plugin, which exposes this functionality to the cloud through the Neutron API.

7.2.6 Cloud Storage with ZFS and COMSTAR

The OpenStack Cinder service provides central management for block storage volumes as boot storage and for application data. To create a volume, the Cinder scheduler selects a storage back-end, based on storage size and storage type requirements, and the Cinder volume service controls the volume creation. The Cinder API then sends the necessary access information back to the cloud.

Different types of storage can be used to provide storage to the cloud, such as FibreChannel, iSCSI, NFS, or the local disks of the compute nodes. The type used depends on the storage requirements, which include characteristics such as capacity, throughput, latency, and availability, as well as the need for local or shared storage. Shared storage is required if migration of OpenStack instances between compute nodes is needed; local storage may be sufficient for short-term, ephemeral data. The cloud user is not aware of the storage technology that has been chosen, because the Cinder volume service presents the storage simply as a storage type, not as a specific storage product.

The Cinder volume service is configured to use an OpenStack storage plugin, which encapsulates the specifics of a storage device, such as how to create a Cinder volume on it and how to access the data.

Multiple ZFS-based Cinder storage plugins are available for Oracle Solaris to provide volumes to OpenStack instances:

  • The ZFSVolumeDriver supports the creation of local volumes for use by Nova on the same node as the Cinder volume service. This method is typically applied when using the local disks in compute nodes.

  • The ZFSISCSIDriver and the ZFSFCDriver support the creation and export of iSCSI and FC targets, respectively, for use by remote Nova compute nodes. COMSTAR allows any Oracle Solaris host to become a storage server, serving block storage via iSCSI or FC.

  • The ZFSSAISCSIDriver supports the creation and export of iSCSI targets from a remote Oracle ZFS Storage Appliance for use by remote Nova compute nodes.

In addition, other storage plugins can be configured in the Cinder volume service, if the storage vendor has provided the appropriate Cinder storage plugin. For example, the OracleFSFibreChannelDriver enables Oracle FS1 storage to be used in OpenStack clouds to provide FibreChannel volumes.
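In cinder.conf, the back-end is selected with the volume_driver option. The fragment below is illustrative; the exact driver module path and the zfs_volume_base option follow the Solaris packaging and may differ between releases:

```ini
# /etc/cinder/cinder.conf (fragment, illustrative)
[DEFAULT]
# Select the ZFS iSCSI back-end; the module path may vary by release.
volume_driver = cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver
# ZFS dataset under which Cinder creates its volumes (assumed option name).
zfs_volume_base = rpool/cinder
```

After changing the configuration, the corresponding SMF service for the Cinder volume back-end must be restarted for the new driver to take effect.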

7.2.7 Sample Deployment Options

The functional enablement of Oracle Solaris for OpenStack rests on two main aspects. The first is the availability and support of the OpenStack APIs, with the various software libraries and plugins, in Oracle Solaris. The second is the creation and integration of OpenStack plugins that enable specific Oracle Solaris functions in OpenStack. As discussed earlier, such plugins have been developed and provided for Cinder, Neutron, and Nova, as well as for Ironic.

Deploying an OpenStack-based private cloud with OpenStack for Oracle Solaris is similar to the setup of other OpenStack-based platforms.

  • The design and setup of the hardware platform (server systems, network and storage) for the cloud are very important. Careful design pays off during the configuration and production phases for the cloud.

  • Oracle Solaris must be installed on the server systems. The Oracle Solaris OpenStack packages can be installed together with Solaris itself, a process that can be automated with the Solaris Automated Installer.

  • After choosing between the storage options, the storage node is installed and integrated into the cloud.

  • The various OpenStack modules must be configured through their configuration files, yielding a fully functional IaaS private cloud with OpenStack. The OpenStack configuration files are located in the /etc/[cinder, neutron, nova, ..] directories. The final step is the activation of the related SMF services with their dependencies.

As noted, the design of the hardware platform is very important. Besides OpenStack itself, a general cloud architecture to be managed by OpenStack includes these required parts:

  • One or multiple compute nodes for the workload.

  • A cloud network to host the logical networks internal to the cloud. These networks link together the network ports of the instances, which together form one broadcast domain. This internal logical network is typically built with VXLAN or tagged VLAN technology.

  • Storage resources to boot the OpenStack instances and keep application data persistent.

  • A storage network, if shared storage is used, to connect the shared storage with the compute nodes.

  • An internal control network, used for the OpenStack APIs' internal messages and to drive the compute, network, and storage parts of the cloud; this network can also be used to manage, install, and monitor all cloud nodes.

  • A cloud control part, which runs the various OpenStack control services for the cloud, such as the Cinder and Nova schedulers, the Cinder volume service, the MySQL management database, and the RabbitMQ messaging service.

Figure 7.9 shows a general OpenStack cloud, based on a multinode architecture with multiple compute nodes, shared storage, isolated networks, and controlled cloud access through a centralized network node.

Figure 7.9 Single Public Network Connection

7.2.8 Single-System Prototype Environment

You can demonstrate an OpenStack environment on a single system. In this case, a single network is used, or multiple networks are created using etherstubs, to form the internal network of the cloud. "Compute nodes" can then be instantiated as kernel zones. However, if you use kernel zones as compute nodes, OpenStack instances can only be non-global zones, which rules out several features, including Nova migration. This single-node setup can be implemented very easily with Oracle Solaris, using a Unified Archive of a comprehensive OpenStack installation.

Such single-system setups are typically implemented so that users can become familiar with OpenStack or to create very small prototypes. Almost all production deployments will use multiple computers to achieve the availability goals of a cloud.

There is one exception to this guideline: A SPARC system running Oracle Solaris (e.g., SPARC T7-4) can be configured as a multinode environment, using multiple logical domains connected by internal virtual networks. The result is still a single physical system that includes multiple isolated Solaris instances but behaves like a multinode cloud.

7.2.9 Simple Multinode Environment

Creating a multinode OpenStack cloud increases the choices available in all parts of the general cloud architecture. The architect decides between one unified network and separate networks when designing the cloud network, the internal network, and the storage network. Alternatively, those networks might not be single networks, but rather networks with redundancy features such as IPMP, DLMP, LACP, or MPxIO. All of these technologies are part of Oracle Solaris and can be selected to create the network architecture of the cloud.

Another important decision is how to connect the cloud to the public or corporate network. The general architecture described earlier shows controlled cloud access through a centralized network node. While this setup enforces centralized access to the cloud via a network node, it can also introduce availability and throughput limitations. An alternative is a flat cloud, shown in Figure 7.10, in which the compute nodes are directly connected to the public network so that no single access point limits throughput or availability. It is the responsibility of the cloud architect to decide which option is the most appropriate.

Figure 7.10 Multiple Public Network Connections

For the compute nodes, the choice is between SPARC nodes (SPARC T5, T7, S7, M7, or M10 servers), x86_64 nodes, or a mixed cloud that combines both architectures. Oracle Solaris OpenStack can handle both processor architectures in one cloud. Typically, compute nodes with one or two sockets and medium memory capacity (e.g., 512 GB) are chosen. More generally, by using SPARC systems, compute nodes ranging from very small to very large can be combined in one cloud without any special configuration effort.

The cloud storage is typically shared storage. In a shared storage architecture, the disks storing the running instances are located outside the compute nodes. Cloud instances can then easily be recovered through migration or evacuation in case of compute node downtime. Using shared storage is operationally simple because separating compute hosts from storage makes the compute nodes "stateless": if no instances are running on a compute node, that node can be taken offline and its contents erased completely without affecting the rest of the cloud. Shared storage can also be scaled to nearly any capacity. Storage decisions can be made based on performance, cost, and availability. Among the choices are an Oracle ZFS Storage Appliance, a Solaris node acting as an iSCSI or FC target server, and a FibreChannel SAN storage system.

With local storage, each compute node's internal disks store all data of the instances that the node hosts. Direct access to local disks is very cost-effective because there is no need to maintain a separate storage network, and disk performance on each compute node is directly related to the number and performance of the local disks. The chassis size of the compute node limits the number of spindles that can be used. However, if a compute node fails, the instances on it cannot be recovered, and there is no way to migrate instances off the node. This limitation can be a major issue for cloud services that create persistent data; other cloud services perform pure processing without storing any local data, in which case no local persistent data is at risk.

The cloud control plane, implemented as an OpenStack controller, can consist of one or more systems. With Oracle Solaris, the OpenStack controller is typically created in kernel zones for modular setups. Scalability on the controller side can then be achieved simply by adding another kernel zone. The OpenStack control services can all be combined in one kernel zone; for scalability and reliability reasons, they can instead be grouped into separate kernel zones, providing the following services:

  • RabbitMQ

  • MySQL management database

  • EVS Controller

  • Network Node

  • The remaining OpenStack Services

7.2.10 OpenStack Summary

Running OpenStack on Oracle Solaris provides many advantages. A complete OpenStack distribution is part of the Oracle Solaris repository and, therefore, is available for Oracle Solaris at no additional cost. The tight integration of the comprehensive virtualization features for compute and networking (Solaris Zones, virtual NICs and switches, and the Elastic Virtual Switch) into Oracle Solaris provides significant value not found in other OpenStack implementations. The integration of OpenStack with Oracle Solaris leverages the Image Packaging System, ZFS boot environments, and the Service Management Facility. As a consequence, an administrator can quickly update the cloud environment, covering every service and node, in a single operation.


Users can always make an informed choice as to whether they should proceed with certain services offered by InformIT. If you choose to remove yourself from our mailing list(s) simply visit the following page and uncheck any communication you no longer want to receive: www.informit.com/u.aspx.

Sale of Personal Information

Pearson does not rent or sell personal information in exchange for any payment of money.

While Pearson does not sell personal information, as defined in Nevada law, Nevada residents may email a request for no sale of their personal information to NevadaDesignatedRequest@pearson.com.

Supplemental Privacy Statement for California Residents

California residents should read our Supplemental privacy statement for California residents in conjunction with this Privacy Notice. The Supplemental privacy statement for California residents explains Pearson's commitment to comply with California law and applies to personal information of California residents collected in connection with this site and the Services.

Sharing and Disclosure

Pearson may disclose personal information, as follows:

  • As required by law.
  • With the consent of the individual (or their parent, if the individual is a minor)
  • In response to a subpoena, court order or legal process, to the extent permitted or required by law
  • To protect the security and safety of individuals, data, assets and systems, consistent with applicable law
  • In connection the sale, joint venture or other transfer of some or all of its company or assets, subject to the provisions of this Privacy Notice
  • To investigate or address actual or suspected fraud or other illegal activities
  • To exercise its legal rights, including enforcement of the Terms of Use for this site or another contract
  • To affiliated Pearson companies and other companies and organizations who perform work for Pearson and are obligated to protect the privacy of personal information consistent with this Privacy Notice
  • To a school, organization, company or government agency, where Pearson collects or processes the personal information in a school setting or on behalf of such organization, company or government agency.


This web site contains links to other sites. Please be aware that we are not responsible for the privacy practices of such other sites. We encourage our users to be aware when they leave our site and to read the privacy statements of each and every web site that collects Personal Information. This privacy statement applies solely to information collected by this web site.

Requests and Contact

Please contact us about this Privacy Notice or if you have any requests or questions relating to the privacy of your personal information.

Changes to this Privacy Notice

We may revise this Privacy Notice through an updated posting. We will identify the effective date of the revision in the posting. Often, updates are made to provide greater clarity or to comply with changes in regulatory requirements. If the updates involve material changes to the collection, protection, use or disclosure of Personal Information, Pearson will provide notice of the change through a conspicuous notice on this site or other appropriate way. Continued use of the site after the effective date of a posted revision evidences acceptance. Please contact us if you have questions or concerns about the Privacy Notice or any objection to any revisions.

Last Update: November 17, 2020