3.3 Video Packet Categorization Based on the Relative Priority Index (RPI)

3.3.1 Desired Characteristics for Prioritization

In our approach, the importance to an end-user of obtaining a low packet loss rate or reduced delay is determined based on the following criteria. First, the decision focuses on relative and fine-grained priorities inside an application, which rely on a linkage to an absolute metric related to the application's contents. When initially marked in the DS field with RLI/RDI, video application packets at the user end have only limited knowledge of the dynamic status of the network and of competing applications within the same boundary. The assigned relative priority of each packet is meant to be interpreted at the DiffServ-aware node. Next, because a more assured (but not guaranteed) service is to be provided, the video application needs to be able to cope with possible packet losses and delays. The assigned RLI/RDI parameters basically assume that the video stream has already been generated with error-resilience features, so that the loss or delay of each packet can be tolerated to a certain degree. Finally, the resulting prioritization should exhibit some kind of clustering behavior in the RLI/RDI space so that its distinctions are preserved when mapped to the limited DS byte.

As discussed previously, the degree of importance of a video stream's receiving low-delay network service depends heavily on the application's context. If we consider different degrees of importance of low delay for different packets within a stream, we find that varying demands for delay quality are usually connected with the layered coding of video compression. For example, the I, P, and B frames of ISO/IEC MPEG-1/2/4 have varying demands with regard to delay as well as loss. The situation is similar for the spatial-scalable, SNR-scalable, and data-partitioned layers of MPEG or H.261/H.263, with the exception of the temporal-scalable layer. However, as video applications become network-aware, the trend seems to move gradually toward delay variation even within a stream. A good example is the asynchronous media transmission scenario, in which flexible delay margins can be exploited throughout transmission. This idea has been termed delay-cognizant video coding in [101], which applied an extended form of region-based scalability to H.263 video. In [101], multiple region layers are constructed within a stream, and varying delay requirements are associated with them according to perceived motion events such as nodding, gesturing, or lip movement requiring lip synchronization. More specifically, delay-cognizant video coding assigns the lowest delay to the most visually significant region, and vice versa. Since packets assigned a longer delay may arrive later, the network service has the opportunity to route them through less congested but longer routes and to charge lower prices. Another example is the MPEG-4 system, whose packets encompass multiple elementary streams with different demands under one umbrella. This kind of integrated stream can justify the demand for inter-media RDI/RLI differentiation. Thus, we propose to distinguish a packet based on the (loss rate, delay) tuple, leaving flexibility to subsequent stages. Note, however, that only a fixed RDI per flow is employed currently.

3.3.2 Macroblock-Level Corruption Model

Investigation of Macroblock Error Propagation

Most state-of-the-art video compression techniques, including H.263+ and MPEG-1, 2, and 4, are based on motion-compensated prediction (MCP). The video codec employs inter-frame prediction to remove temporal redundancy and transform coding to reduce spatial redundancy. MCP is performed at the macroblock (MB) level of 16 × 16 luminance pixels. For each MB, the encoder searches the previously reconstructed frame for the MB that best matches the target MB being encoded. To increase estimation accuracy, sub-pixel accuracy is used for the motion vector representation, and interpolation is used to build the reference block. Residual errors are encoded by the discrete cosine transform (DCT) and quantization. Finally, all information is coded with either a fixed-length code or a variable-length code (VLC).

Basically, one of two coding modes can be selected adaptively for each MB. One is inter-mode coding, which includes motion compensation and residual error encoding. The other is intra-mode coding, in which only DCT and quantization are applied to the original pixels. In the initial anchor frame, or when a new object appears in a frame to be encoded, the sum of absolute differences (SAD) of the most closely matching MB can be larger than the sum of the original pixels. In these cases, the intra-mode MB is used. Otherwise, the inter-mode MB, which usually costs fewer bits thanks to temporal redundancy elimination, is used.
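For illustration only, the following Python sketch mimics this mode decision with a full-search motion estimation and a simple activity measure. It is a simplified stand-in, not the actual H.263+/TMN decision rule (which also involves a bias constant and half-pixel refinement); all function names and the search range are illustrative.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences (SAD) between two 16x16 luminance blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def choose_mb_mode(target_mb, ref_frame, mb_x, mb_y, search_range=7):
    """Return ('inter', mv) or ('intra', (0, 0)) for one macroblock.

    Full-search motion estimation over +/- search_range pixels; intra mode is
    chosen when the best inter SAD is not smaller than an "activity" measure
    of the original MB (here, the SAD of its pixels against their own mean).
    """
    h, w = ref_frame.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = mb_y + dy, mb_x + dx
            if 0 <= y and y + 16 <= h and 0 <= x and x + 16 <= w:
                candidate = ref_frame[y:y + 16, x:x + 16]
                s = sad(target_mb, candidate)
                if best_sad is None or s < best_sad:
                    best_sad, best_mv = s, (dx, dy)
    activity = int(np.abs(target_mb.astype(np.int32) - int(target_mb.mean())).sum())
    if best_sad is None or best_sad >= activity:
        return "intra", (0, 0)
    return "inter", best_mv
```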

In the Internet environment, packets may be discarded due to buffer overflow at intermediate nodes of the network, or they may be considered lost due to long queuing delays. VLC is very vulnerable to even a single bit error. Once an error occurs in the bit-stream, the VLC decoder cannot decode the subsequent codes until it finds the next synchronization point. Thus, a small error can result in catastrophic distortion in the reconstructed video sequence.

Packet loss results in the loss of encoded MBs as well as the loss of synchronization. It affects MBs in subsequent packets until the decoder is re-synchronized (or refreshed). When packet loss occurs, an error recovery action (i.e., error concealment), which attempts to identify the best alternative for the lost portion, is performed at the decoder.

Usually, no normative error concealment method is defined in MCP video compression standards, but various error concealment schemes have been proposed [102], [103]. The temporal concealment scheme exploits the temporal correlation in video signals by replacing a damaged MB with the spatially corresponding MB of the previous frame. This straightforward scheme, however, can produce adverse visual artifacts in the presence of large motion, so a motion-compensated version of temporal concealment is usually employed, using motion vectors estimated from the surrounding MBs. Even after sophisticated error concealment, a residual error remains, and it is propagated through the recursive prediction structure. This temporal error propagation is typical of hybrid video coding that relies on MCP in the inter-frame mode. The number of times a lost MB is referenced in the future depends on the coding modes and motion vectors of the MBs of subsequent frames. This determines the importance of each MB from the viewpoint of error propagation.
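As a rough illustration of the two concealment variants just described (and not of the TCON scheme used later in this chapter), the following sketch copies the co-located MB from the previous frame and, when neighboring motion vectors are available, applies their median to obtain a motion-compensated copy.

```python
import numpy as np

def conceal_lost_mb(prev_frame, cur_frame, mb_x, mb_y, neighbor_mvs=None):
    """Conceal a lost 16x16 MB at (mb_x, mb_y) of cur_frame.

    Plain temporal concealment copies the co-located MB from prev_frame;
    when motion vectors of surrounding MBs are supplied, their median is
    used so that the copy is motion compensated.
    """
    dx, dy = 0, 0
    if neighbor_mvs:
        dx = int(np.median([mv[0] for mv in neighbor_mvs]))
        dy = int(np.median([mv[1] for mv in neighbor_mvs]))
    h, w = prev_frame.shape
    y = min(max(mb_y + dy, 0), h - 16)   # clamp the reference area to the frame
    x = min(max(mb_x + dx, 0), w - 16)
    cur_frame[mb_y:mb_y + 16, mb_x:mb_x + 16] = prev_frame[y:y + 16, x:x + 16]
    return cur_frame
```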

While propagating temporally and spatially, the residual error remaining after error concealment decays over time due to leakage in the prediction loop. Leaky prediction is a well-known technique for increasing the robustness of differential pulse code modulation (DPCM) by attenuating the energy of the prediction signal. In hybrid video coding, leakage is introduced by spatial filtering operations performed during encoding. Spatial filtering can be introduced either explicitly by a loop filter or implicitly as a side-effect of half-pixel motion compensation (i.e., bilinear interpolation). This spatial filtering effect in the decoder was analyzed by Farber et al. [104]. In their work, the loop filter is approximated as a Gaussian-shaped filter in the spatial frequency domain. It is given by

Equation 3.1

$$H_t(\omega_x, \omega_y) = \exp\!\left(-\frac{(\omega_x^2 + \omega_y^2)\,\sigma_f^2\,t}{2}\right)$$

where $\sigma_f^2$ is determined by the filter shape and indicates the strength of the loop filter. As shown in Eq. (3.1), the loop filter behaves like a low-pass filter, and its bandwidth is determined by time t and filter strength $\sigma_f^2$.

By further assuming that the error signal u(x, y) is a zero-mean, stationary random process, its power spectral density (PSD) can be approximated as

Equation 3.2

$$\Phi_{uu}(\omega_x, \omega_y) = \sigma_u^2 \cdot 2\pi\,\sigma_{u0}^2 \exp\!\left(-\frac{(\omega_x^2 + \omega_y^2)\,\sigma_{u0}^2}{2}\right)$$

(i.e., a separable Gaussian PSD with variance $\sigma_u^2$ that can be interpreted as the average energy). The shape of the PSD of the error signal is characterized by $\sigma_{u0}^2$. Thus, the pair of parameters $(\sigma_u^2, \sigma_{u0}^2)$ determines the energy and shape of the error signal's PSD and can be used to match Eq. (3.2) with the true PSD. With these approximations, the variance of the error propagation random process v(x, y) can be derived as

Equation 3.3

$$\sigma_v^2[t] = \sigma_u^2\,\alpha[t] = \frac{\sigma_u^2}{1 + \gamma t}, \qquad \gamma = \frac{\sigma_f^2}{\sigma_{u0}^2}$$

where $\gamma$ is a parameter describing the efficiency of the loop filter in reducing the introduced error and $\alpha[t]$ is the power transfer factor after t time steps.
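To make the decay behavior concrete, the following short sketch evaluates the reconstructed form of Eq. (3.3) above; the parameter values are arbitrary and chosen only for illustration.

```python
def propagated_error_energy(sigma_u2, sigma_u0_2, sigma_f2, t):
    """Propagated error variance after t prediction steps, following the
    reconstructed form of Eq. (3.3):
        sigma_v^2[t] = sigma_u^2 * alpha[t],  alpha[t] = 1 / (1 + gamma * t),
    with loop filter efficiency gamma = sigma_f^2 / sigma_u0^2."""
    gamma = sigma_f2 / sigma_u0_2
    return sigma_u2 / (1.0 + gamma * t)

# Example: an initial concealment error of energy 100 decays gradually
# (but never instantly reaches zero) as it is filtered frame after frame.
for t in range(6):
    print(t, round(propagated_error_energy(100.0, 1.0, 0.25, t), 2))
```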

This analytical model, as given in Eqs. (3.1)-(3.3), has been verified experimentally [103], [104]. While this model captures the statistical propagation behavior, it is difficult to estimate the loss effect of each packet, which is generally composed of several MBs [i.e., a group of blocks (GOB) unit]. Thus, in the following section, we extend the propagation model by incorporating additional coding parameters, such as the error concealment scheme and the encoding mode, so that the error propagation effect can be tracked more accurately with moderate computational complexity. The purpose of the corruption model is to estimate the total impact of packet loss. When one or more packets are lost, errors are introduced and propagated. The impact of errors is defined as the difference between the reconstructed frames with and without packet loss, as measured by mean square errors (MSEs).

Derivation of MB-Level Corruption Model

Figure 3.3 shows the error propagation of a lost MB, in which the initial error of the corrupted MB is denoted by u(x, y) and its energy is measured in terms of the error variance $\sigma_u^2$. The propagation error v(x, y) in consecutive frames has energy $\sigma_v^2(n+m, j)$ in impaired MB j of frame n + m.

Figure 3.3. Error propagation example of a lost MB.

The initial error due to packet loss depends on the error concealment scheme adopted by the decoder. The amount of initial error can be calculated at the encoder if the error concealment scheme used in the decoder is known a priori. For simplicity, we assume that the decoder uses the TCON error concealment scheme as specified in H.263+ Test Model 10 [105]. Also, at this stage, we will confine the corruption model analysis to the case of isolated packet loss; under a low loss rate, this corruption model exhibits reliable accuracy. The multiple-packet loss case will be discussed in a later section, which addresses the scenario involving a higher loss rate. Also, only MB-unit (i.e., 16 × 16) estimation is considered at present, excluding the advanced prediction option. This makes the MB the unit of the coding mode, with a single motion vector (for inter-frame modes). From an MB with 256 pixels, we can extract the PSD of the signal by analyzing its frequency components. In addition, an MB-based calculation costs much less than a pixel-based computation. The MB-level corruption model will later be extended to the GOB level, with some restrictions.

Let us analyze the energy transition along an error propagation trajectory. Typical propagation trajectories are illustrated in Figure 3.3. A trajectory is composed of two basic elements: a parallel trajectory and a cascaded trajectory. Before the analysis, some assumptions are needed to reduce the computational complexity. We treat the decoder, with its DPCM loop and spatial filter, as a linear system. Also, for each frame to be predicted, the time difference is set to 1, that is, t = 1 in Eq. (3.3).

Figure 3.4(a) shows the parallel propagation trajectory. The error in reference frame n is characterized by $(\sigma_u^2, \sigma_{u0}^2)$, which determines the PSD of the error. In the parallel trajectory, an error can propagate to two or more different areas in subsequent frames. For each path, a different motion vector and a different spatial filtering, and hence a different filter strength, can apply, as depicted in the equivalent linear system of Figure 3.4(b).

Figure 3.4. Cascade propagation: (a) the trajectory and (b) the equivalent linear system.

$H_a(\omega)$ and $H_b(\omega)$ may be different filter functions, but both are Gaussian approximations of the spatial filter. The PSD of the error frame is transferred to the predicted frame via

Equation 3.4

$$\Phi_a(\omega_x, \omega_y) = \Phi_{uu}(\omega_x, \omega_y)\,|H_a(\omega_x, \omega_y)|^2, \qquad \Phi_b(\omega_x, \omega_y) = \Phi_{uu}(\omega_x, \omega_y)\,|H_b(\omega_x, \omega_y)|^2$$

and the corresponding error energy is derived as

Equation 3.5

$$\sigma_a^2 = \frac{\sigma_u^2}{1 + \gamma_a}, \qquad \sigma_b^2 = \frac{\sigma_u^2}{1 + \gamma_b}$$

where $\gamma_a = \sigma_{fa}^2/\sigma_{u0}^2$ and $\gamma_b = \sigma_{fb}^2/\sigma_{u0}^2$. Therefore, the MSE can be individually estimated and accumulated for the parallel error propagation trajectory.
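Assuming, as the text suggests, that the branch energies of a parallel trajectory are simply accumulated, a minimal sketch of Eq. (3.5) over several branches might look as follows; the branch filter strengths are illustrative inputs.

```python
def parallel_error_energy(sigma_u2, sigma_u0_2, branch_filter_strengths):
    """Accumulated error energy over a parallel trajectory (Eq. 3.5 as
    reconstructed above): each branch is filtered independently with its
    own strength, and the resulting branch energies are summed."""
    return sum(sigma_u2 / (1.0 + s / sigma_u0_2) for s in branch_filter_strengths)

# Two parallel branches with filter strengths 0.25 and 0.40.
print(parallel_error_energy(100.0, 1.0, [0.25, 0.40]))
```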

In the cascaded propagation trajectory of Figure 3.4(a), the initial error energy $\sigma_u^2$ of U in frame n is referenced in A of frame n + 1 and then transferred to B in frame n + 2. The equivalent linear system is shown in Figure 3.4(b). For each transition, the loop filter function is characterized by the filter strengths $\sigma_{fa}^2$ and $\sigma_{fb}^2$, respectively. Then, the PSD of the propagation error in frame n + 2 is given by

Equation 3.6

$$\Phi_B(\omega_x, \omega_y) = \Phi_{uu}(\omega_x, \omega_y)\,|H_a(\omega_x, \omega_y)|^2\,|H_b(\omega_x, \omega_y)|^2$$

and its energy can be derived as

Equation 3.7

$$\sigma_B^2 = \frac{\sigma_u^2}{1 + \gamma}$$

where

$$\gamma = \frac{\sigma_{fa}^2 + \sigma_{fb}^2}{\sigma_{u0}^2}$$

The propagation error energy from U to B is given in Eq. (3.7). It is the same as Eq. (3.3) except for the loop filter efficiency $\gamma$. For the cascaded propagation, the loop filter efficiency $\gamma$ can be derived from the equivalent loop filter strength $\sigma_{f,eq}^2 = \sigma_{fa}^2 + \sigma_{fb}^2$, which is the sum of the filter strengths $\sigma_{fa}^2$ and $\sigma_{fb}^2$. As a result, the equivalent filter strength for a cascaded propagation is the sum of the filter strengths along the propagation path. Because of the motion vector (even with sub-pixel accuracy), usually only a portion of an MB is referenced by the predicted frames. In this case, not all errors of an impaired MB propagate to subsequent frames. Thus, we must consider the portion of an MB that contributes to the next predicted frames as a reference, a quantity denoted as the dependency weight. If we denote the ith MB of frame n as $MB_{n,i}$ and a portion of $MB_{n,i}$ contributes to $MB_{n+m,j}$, then the dependency weight $w_{n,i}(m, j)$ is defined as the normalized number of pixels that are transferred from $MB_{n,i}$ to $MB_{n+m,j}$. If no portion of $MB_{n,i}$ is referenced by the jth MB of the (n + m)th frame, $w_{n,i}(m, j)$ is zero. Otherwise, it has a value between zero and one.
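The cascaded accumulation just described can be sketched as follows; the per-hop filter strengths are illustrative inputs, and the function follows the reconstructed form of Eq. (3.7) above.

```python
def cascaded_error_energy(sigma_u2, sigma_u0_2, hop_filter_strengths):
    """Error energy at the end of a cascaded propagation path.

    The equivalent filter strength of the cascade is the sum of the
    per-hop strengths (sigma_fa^2, sigma_fb^2, ...), and the resulting
    energy is sigma_u^2 / (1 + gamma).
    """
    sigma_f2_equivalent = sum(hop_filter_strengths)
    gamma = sigma_f2_equivalent / sigma_u0_2
    return sigma_u2 / (1.0 + gamma)

# Two hops with strengths 0.25 and 0.40 attenuate the error more than
# either hop alone, since the equivalent strength is their sum (0.65).
print(cascaded_error_energy(100.0, 1.0, [0.25, 0.40]))
```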

The dependency weight can be calculated recursively from the stored motion vectors and MB types, as shown in Figure 3.5. However, since motion compensation is not a linear operation, we have to assume that the motion vectors of neighboring MBs are the same (or at least very similar), so that the target MB is transferred without being severely broken up, as depicted in Figure 3.5. Another assumption is that the error in an MB is uniformly distributed in the spatial domain while having a Gaussian PSD. Then, the error energy transferred from $MB_{n,i}$ to $MB_{n+m,j}$ can be calculated from the loop-filtering effect and the dependency weight. Finally, to evaluate the total impact of the loss of $MB_{n,i}$, the weighted error variances of the MBs of subsequent frames should be summed. Because the initial error can persist over a number of frames without converging to zero, we must limit the number of frames to be evaluated to keep the computational complexity acceptable. Fortunately, when the intra-MB refresh technique is used at the encoder, the propagated error energy converges to zero within a fixed number of frames. Thus, in general, a pre-defined number of frames is sufficient to estimate the total impact of the MB loss. This defines the estimation window of the corruption model.
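A compact sketch of this recursion is given below, assuming integer-pixel motion vectors and that per-frame motion vectors and coding modes are available; both are simplifications relative to the actual codec, and the data structures are illustrative.

```python
def one_step_weights(mv, mb_cols, mb_rows, j_col, j_row):
    """Fraction of the 16x16 reference area of MB (j_col, j_row), displaced
    by motion vector mv = (dx, dy), that falls inside each MB of the
    previous frame; each fraction is a one-step dependency weight."""
    dx, dy = mv
    x0, y0 = j_col * 16 + dx, j_row * 16 + dy
    weights = {}
    for r in range(mb_rows):
        for c in range(mb_cols):
            ox = max(0, min(x0 + 16, (c + 1) * 16) - max(x0, c * 16))
            oy = max(0, min(y0 + 16, (r + 1) * 16) - max(y0, r * 16))
            if ox > 0 and oy > 0:
                weights[(c, r)] = (ox * oy) / 256.0
    return weights

def propagate_weights(prev_weights, motion_vectors, intra_flags, mb_cols, mb_rows):
    """One recursion step: given w_{n,i}(m-1, .) over frame n+m-1
    (prev_weights), accumulate w_{n,i}(m, .) over frame n+m using that
    frame's motion vectors; intra-coded MBs stop the propagation."""
    new_weights = {}
    for (j_col, j_row), mv in motion_vectors.items():
        if intra_flags.get((j_col, j_row), False):
            continue  # an intra (refreshed) MB does not inherit the error
        for ref_mb, frac in one_step_weights(mv, mb_cols, mb_rows, j_col, j_row).items():
            w = prev_weights.get(ref_mb, 0.0) * frac
            if w > 0.0:
                new_weights[(j_col, j_row)] = new_weights.get((j_col, j_row), 0.0) + w
    return new_weights
```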

Figure 3.5. Recursive weight calculation.

The appropriate estimation window might be determined based on the strength of the intra-MB refresh. As a result, the total energy of errors due to an MB loss in a sequence can be written as

Equation 3.8

$$\sigma_{total}^2(n, i) = \sum_{m=1}^{M} \sum_{j=1}^{N} w_{n,i}(m, j)\,\sigma_v^2(n + m, j)$$

where M is the size of the estimation window and N is the total number of MBs in a frame.

Equation 3.9

$$\sigma_v^2(n + m, j) = \frac{\sigma_u^2(n, i)}{1 + \gamma_{n,i}(m, j)}, \qquad \gamma_{n,i}(m, j) = \frac{1}{\sigma_{u0}^2}\sum_{\text{path }(n,i)\to(n+m,j)} \sigma_f^2$$
The corruption model derived above can be viewed as an MB-level extension of the statistical error propagation model given in [104].
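A sketch of how these pieces could be combined in practice is shown below. How the initial error term and the per-trajectory filter strengths enter the sum follows the reconstruction of Eqs. (3.8)-(3.9) above and should be read as an assumption rather than as the exact implementation used in the experiments; all argument names are illustrative.

```python
def total_loss_impact(sigma_u2, sigma_u0_2, weights_per_frame, path_strengths_per_frame):
    """Accumulate the total error energy caused by losing MB (n, i).

    weights_per_frame[m-1][mb]        : dependency weight w_{n,i}(m, mb)
    path_strengths_per_frame[m-1][mb] : accumulated loop filter strength
                                        along the trajectory reaching mb
    For each of the M frames in the estimation window, the propagated
    energy of every impaired MB is scaled by its dependency weight and summed.
    """
    total = sigma_u2  # initial (concealment) error in the lost frame itself
    for weights, strengths in zip(weights_per_frame, path_strengths_per_frame):
        for mb, w in weights.items():
            gamma = strengths.get(mb, 0.0) / sigma_u0_2
            total += w * sigma_u2 / (1.0 + gamma)
    return total
```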

3.3.3 Simplified RLI Association and Categorization

The MB-level corruption model described in the previous section indicates, for each MB, how important it is not to lose that MB. Thus, RLI can be defined on the basis of such a corruption model. However, determining the loss importance of a particular MB requires an estimation window of history data, which introduces some delay before each packet can be marked with its RLI. In this section, a simple and practical version is presented for online RPI generation. This RLI is calculated from video factors such as the initial error, motion vectors, and MB encoding types. Such a simplified MB-based corruption model provides an RPI association that approximates the actual loss impact in MSE.

Under packetized video transmission, the RLI assignment for a packet is best when it precisely represents the packet's error propagation impact on the received video quality. However, the specific method of RLI prioritization is entirely application- (and, furthermore, video compression scheme-) dependent. Thus, we have chosen ITU-T H.263+ video [96] as the evaluation codec, considering its wide acceptance for low-latency video conferencing and its potential for video streaming. Note that with regard to this RLI association, there is not much difference among motion-compensated prediction codecs, which include both the MPEG family and H.261/H.263.

Given the frame size and target bit rate, each packet has a different loss effect on end-to-end video quality due to inter-frame prediction. The impact of packet loss may spread within a frame up to the next re-synchronization point (e.g., the picture or GOB headers) due to differential coding, run-length coding, and variable-length coding. This is referred to as spatial error propagation, and it can damage any type of frame. For temporal error propagation, damaged MBs affect the non-intra-coded MBs of subsequent frames, which use corrupted MBs as references. Recent research on the corruption model [106] has attempted to model this loss effect. The corruption model is a tool used to estimate the impact of packet loss on the overall received video quality. However, most modeling efforts have focused on the statistical side of the loss effect, whereas our approach seeks a dynamic, per-packet solution. Instead of computationally complex options, we devised ways to associate RLI with H.263 packets through a simple, online calculation.

Basically, we use an error-resilient H.263+ stream, compressed at a target rate of 384 kilobits per second (kbps), for a common intermediate format (CIF) test sequence at 10 frames per second (fps). Several error resilience and compression efficiency options (Annexes D, F, I, J, and T, with random intra-refresh) are used to generate the so-called "Anchor" (i.e., GOB) mode stream. The random intra-refresh rate is set to 5% to cope with network packet loss. The stream is then packetized into one or more packets per GOB, depending on the maximum transfer unit (MTU) size of the IP network. Thus, we propose a simple yet effective RLI association scheme for this H.263+ video stream, calculated for each GOB packet.
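As a trivial illustration of this packetization step (the 1400-byte payload limit is an assumed value, not one mandated by the chapter):

```python
def packetize_gob(gob_bitstream, mtu_payload=1400):
    """Split one encoded GOB (a bytes object) into one or more packets,
    each no larger than the assumed MTU payload size."""
    return [gob_bitstream[i:i + mtu_payload]
            for i in range(0, len(gob_bitstream), mtu_payload)]

# A 3.1 kB GOB with a 1400-byte payload limit yields three packets.
print([len(p) for p in packetize_gob(b"\x00" * 3100)])
```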

Various video factors are taken into consideration in the proposed RLI association. First, the magnitude and direction of the motion vector of each MB are included to reflect the loss strength and the temporal loss propagation due to motion. Then, the encoding types (intra, intra-refreshed, inter, etc.) are considered. In other words, the refreshing effect is accounted for by counting the number of intra-coded MBs in a packet. Finally, the initial error due to packet loss is considered, assuming the error concealment scheme adopted at the decoder is known. To illustrate, sample distributions of these three video factors are depicted in Figure 3.6. The RLI for a packet may take all of these video factors into account by summing the normalized video factors with appropriate weighting as

Equation 3.10

$$RLI = \sum_{n=1}^{N_{VF}} W_n \cdot NVF_n$$

where $N_{VF}$ is the total number of video factors considered, $NVF_n$ stands for the nth normalized video factor, and $W_n$ stands for the corresponding weight factor. The normalization is done online, e.g., $NVF_n = VF_n / \overline{VF}_n(i)$, by updating the sampling mean $\overline{VF}_n(i)$ at the ith update time.
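A minimal sketch of this online marking is given below, assuming the normalization divides each video factor by its running sample mean as described; the class and method names are illustrative.

```python
class OnlineRLIMarker:
    """Online RLI per Eq. (3.10): a weighted sum of video factors, each
    normalized by its running sample mean (updated at every packet)."""

    def __init__(self, weights):
        self.weights = list(weights)       # W_n for each video factor
        self.sums = [0.0] * len(weights)   # running sums for the sample means
        self.count = 0

    def mark(self, video_factors):
        """video_factors: one value per factor for the current packet, e.g.
        (motion magnitude, number of intra-coded MBs, initial error)."""
        self.count += 1
        rli = 0.0
        for n, vf in enumerate(video_factors):
            self.sums[n] += vf
            mean = self.sums[n] / self.count
            nvf = vf / mean if mean > 0 else 0.0   # normalized video factor
            rli += self.weights[n] * nvf
        return rli

# The weights (0.15, 0.15, 1.75) are the values quoted for Figure 3.7.
marker = OnlineRLIMarker(weights=(0.15, 0.15, 1.75))
```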

Figure 3.6. Video factors (VFs) for RLI with "Foreman" sequence: (a) motion vector magnitude; (b) number of I-coded MBs in each packet; and (c) initial error due to packet loss.

Figure 3.7 shows an example of the resulting RLI assignment for the "Foreman" sequence. The full corruption model provides a much more accurate MSE estimate, but it needs look-ahead information over a certain window range in order to calculate the propagation effect. For our purpose of QoS mapping, we do not need an accurate MSE value, because the relative priority order and relative magnitude are enough. Eq. (3.10) is simple and can easily be generated online with the sampling mean. A more accurately estimated RLI based on a corruption model is proposed in [107], but it needs the history information of several subsequent frames to make such an estimate, which is a trade-off. As shown in Figure 3.8(d), RLI calculated with a 20-frame history window correlates better with the actually measured MSE than the simple RLI shown in Figure 3.8(c).

Figure 3.7. RLI for the "Foreman" sequence obtained by applying weights Wn of (0.15, 0.15, and 1.75) to the three video factors of Figure 3.6.

Figure 3.8. A comparison of: (a) the actually measured MSE coming from loss of each GOB packet; (b) the proposed RLI pattern for a "Foreman" sequence; (c) the correlation distribution between actual MSE and proposed RLI of the same GOB packet number; and (d) the correlation distribution between actual MSE and corruption model-based RLI in [107].

To show that the proposed RLI approximates the actual loss propagation, the actually measured MSE resulting from the loss of each GOB packet is calculated and compared with the proposed RLI pattern in Figure 3.8. The simple RLI shows a pattern similar to the actual MSE, and there is a general correlation between the two. This is sufficient for our goal of differentiating the video packets of a stream and marking their priority according to importance, since the QoS mapping involves only several source categories and the network DS levels are few in number.

Finally, RLIs are categorized into K DS traffic categories to enable mapping to the limited DS field space (or, eventually, to be ready for mapping to the even more limited set of network DS levels). In our approach, simple non-uniform quantization of RLI is performed for this categorization. That is, as shown in Figure 3.9, categorization is done with gradually increasing step sizes as the category index k increases. Another possible approach is to place an equal number of packets in each category (i.e., a uniform packet distribution). After categorization, all packets belonging to category k may be represented by their average RLI value, $\overline{RLI}_k$.
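One way such a non-uniform categorization might be realized is sketched below. The geometric growth of the step sizes and the placement of the category edges are illustrative assumptions, since the chapter only specifies that the steps increase gradually with k.

```python
import numpy as np

def categorize_rli(rli_values, K, growth=1.5):
    """Non-uniform quantization of RLI into K categories whose step sizes
    grow by 'growth' from category to category; each category k is then
    represented by the average RLI of the packets that fall into it."""
    rli = np.asarray(rli_values, dtype=float)
    steps = growth ** np.arange(K)                  # gradually increasing widths
    edges = np.concatenate(([0.0], np.cumsum(steps)))
    edges = edges / edges[-1] * rli.max()           # span the observed RLI range
    categories = np.clip(np.digitize(rli, edges[1:-1]), 0, K - 1)
    avg_rli = [float(rli[categories == k].mean()) if np.any(categories == k) else 0.0
               for k in range(K)]
    return categories, avg_rli
```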

Figure 3.9. Packet distribution and average RLI of each video DS category for a "Foreman" sequence.

A higher RLI thus represents greater potential damage to visual quality. To verify the clustering behavior of the RLI association, we observed the RLI distribution for several video sequences. As shown in Figure 3.10, the RLI distribution varies according to a scene's characteristics. Since this affects the QoS mapping, we will explore the impact of RLI distribution patterns in Section 3.4.2.

Figure 3.10. Different RLI (cumulative) distributions for several video sequences.
