Ethernet Performance Troubleshooting

Ethernet performance troubleshooting is device specific because not all devices have the same architectural capabilities. Therefore, performance issues must be tackled on a per-device basis.

The following Solaris tools aid in the analysis of performance issues:

  • kstat to view device-specific statistics

  • mpstat to view system utilization information

  • lockstat to show areas of contention

You can use the information from these tools to tune specific parameters. The tuning examples that follow describe where this information is most useful.
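
For example, a minimal data-gathering pass might look like the following (driver instance 0 is assumed; substitute the module and instance that match your configuration):

    kstat -p -m ge -i 0     (dump all driver statistics for ge0)
    mpstat 5                (per-CPU utilization at 5-second intervals)
    lockstat sleep 10       (kernel lock contention during a 10-second sample)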

You have two options for tuning: using the /etc/system file or the ndd utility.

Using the /etc/system file to modify the initial value of the driver variables requires a system reboot for the changes to take effect.

If you use the ndd utility for tuning, the changes take effect immediately. However, any modifications you make using the ndd utility will be lost when the system goes down. If you want the ndd tuning properties to persist through a reboot, add these properties to the respective driver.conf file.
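
For example (the parameter names and values here are illustrative only; substitute the tunables discussed in the tables that follow), a persistent /etc/system entry and an immediate ndd change look like this:

    In /etc/system (takes effect after the next reboot):
        set ce:ce_ring_size=1024

    With ndd (takes effect immediately, lost at the next reboot; the ce driver
    selects the instance to tune through its instance parameter):
        ndd -set /dev/ce instance 0
        ndd -set /dev/ce infinite-burst 1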

Parameters that have kernel statistics but no associated tuning capability are omitted from this discussion because they provide no troubleshooting leverage.

ge Gigabit Ethernet

The ge interface provides the following tuning parameters that assist in performance troubleshooting.

TABLE 3 ge Performance Tunable Parameters

Parameter

Values

Description

ge_intr_mode

0-1

Enables the ge driver to send packets directly to the upper communication layers rather than queueing them.

0 = Packets are not passed in the interrupt service routine but are placed in a streams service queue and passed to the protocol stack later, when the streams service routine runs.

1 = Packets are passed directly to the protocol stack in the interrupt context.

Default: 0 (queue packets to upper layers)

ge_dmaburst_mode

0-1

Enables infinite burst mode for PCI DMA transactions rather than using cache-line-size PCI DMA transfers. This feature is supported only on Sun platforms with the UltraSPARC III CPU.

0 = Disabled (default)

1 = Enabled

ge_nos_tmd

32-8192

Number of transmit descriptors used by the driver.

Default = 512

ge_put_cfg

0-1

An enumerated type that can have a value of 0 or 1.

0 = receive processing occurs in the worker threads.

1 = receive processing occurs in the streams service queues routine.

Default = 1


The ge interface provides some statistics you can use to measure the performance bottlenecks in the driver at the transmit or receive end of the link. The kstats allow you to decide what corrective tuning can be applied, based on the tuning parameters previously described. The useful statistics are shown in TABLE 4.

TABLE 4 List of ge Specific Interface Statistics

kstat name

Type

Description

rx_overflow

counter

Number of times the hardware is unable to receive a packet due to the internal FIFOs being full.

no_free_rx_desc

counter

Number of times the hardware is unable to post a packet because there are no more Rx descriptors available.

no_tmds

counter

Number of times transmit packets are posted on the driver streams queue for processing later by the queue's service routine.

nocanput

counter

Number of times a packet is simply dropped by the driver because the module above the driver cannot accept the packet.

pci_bus_speed

value

The PCI bus speed that drives the card.


When rx_overflow is incrementing, packet processing is not keeping up with the packet arrival rate. If it is incrementing and no_free_rx_desc is not, this indicates that the PCI bus or SBus is limiting the flow of packets through the device. This could be because the ge card is plugged into a slower I/O bus. You can confirm the bus speed by looking at the pci_bus_speed statistic. An SBus speed of 40 MHz or a PCI bus speed of 33 MHz might not be sufficient to sustain full bidirectional one-gigabit Ethernet traffic.

Another scenario that can lead to rx_overflow incrementing on its own is sharing the I/O bus with another device that has similar bandwidth requirements to those of the ge card.

These scenarios are hardware limitations. There is no solution for SBus. For PCI bus, a first step in addressing them is to enable infinite burst capability on the PCI bus. You can achieve that by using the /etc/system tuning parameter ge_dmaburst_mode.
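
A minimal sketch of that /etc/system entry:

    set ge:ge_dmaburst_mode=1     (enable infinite burst; UltraSPARC III platforms only)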

Alternatively, you can reorganize the system to give the ge interface a 66-MHz PCI slot, or separate devices that contend for a shared bus segment by giving each of them its own bus segment.

The probability that rx_overflow incrementing is the only problem is small. Typically, Sun systems have a fast PCI bus and memory subsystem, so delays are seldom induced at that level. It is more likely that the protocol stack software will fall behind and exhaust the Rx descriptor ring of free elements with which to receive more packets. If this happens, the kstat no_free_rx_desc will begin to increment. On a single-CPU system, this means the CPU cannot absorb the incoming packets. If more than one CPU is available, it is still possible to overwhelm a single CPU, but because the Rx processing can be split using the alternative Rx data delivery models provided by ge, it might be possible to distribute the processing of incoming packets to more than one CPU. You can do this by first ensuring that ge_intr_mode is not set to 1, and then tuning ge_put_cfg to select either the load-balancing worker threads or the streams service routine, as shown in the sketch below.
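
A sketch of the corresponding /etc/system settings (the values shown are the ones discussed above):

    set ge:ge_intr_mode=0     (queue packets rather than passing them in interrupt context)
    set ge:ge_put_cfg=0       (Rx processing in worker threads; 1 = streams service routine)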

Another possible scenario is one in which the ge device is adequately handling the rate of incoming packets, but the upper layer is unable to deal with the packets at that rate. In this case, the kstat nocanput counter will be incrementing. The tuning that can be applied to this condition belongs to the upper layer protocols. If you are running the Solaris 8 operating system or earlier, upgrading to the Solaris 9 operating system might also reduce nocanput errors, due to its improved multithreading and IP scalability.

While the Tx side is also subject to an overwhelmed condition, this is less likely than any Rx-side condition. If the Tx side is overwhelmed, it will be visible when the no_tmds parameter begins to increment. If the Tx descriptor ring size can be increased, the /etc/system tunable parameter ge_nos_tmd provides that capability.
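
For example, an illustrative /etc/system entry that doubles the Tx descriptor ring (the value must fall within the 32-8192 range; 512 is the default):

    set ge:ge_nos_tmd=1024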

ce Gigabit Ethernet

The ce interface provides the following tunable parameters that assist in performance troubleshooting. Note that these are ndd parameters.

TABLE 5 ce Performance Parameters Tunable Using ndd

Parameter

Values

Description

tx-dma-weight

0-3

Determines the multiplication factor for granting credit to the Tx side during a weighted round robin arbitration.

Values are 0 to 3.

Zero means no extra weighting; the other values apply a power-of-two extra weighting to that traffic.

For example, if tx-dma-weight = 0 and rx-dma-weight = 3, then as long as Rx traffic is continuously arriving, its priority for access to the PCI bus will be eight times greater than that of Tx.

(Default = 0)

rx-dma-weight

0-3

Determines the multiplication factor for granting credit to the Rx side during a weighted round-robin arbitration.

Values are 0 to 3.

(Default = 0)

infinite-burst

0-1

Allows the infinite burst capability to be utilized. When this is in effect and the system supports infinite burst, the adapter will not free the bus until complete packets are transferred across the bus.

Values are 0 or 1.

(Default = 0)

red-dv4to6k

0 to 255

Random early detection and packet drop vectors for when FIFO threshold is greater than 4096 bytes and less than 6144 bytes. Probability of drop can be programmed on a 12.5 percent granularity. For example, if bit 0 is set, the first packet out of every eight will be dropped in this region.

(Default = 0)

red-dv6to8k

0 to 255

Random early detection and packet drop vectors for when FIFO threshold is greater than 6144 bytes and less than 8192 bytes. Probability of drop can be programmed on a 12.5 percent granularity. For example, if bit 0 is set, the first packet out of every eight will be dropped in this region. (Default = 0)

red-dv8to10k

0 to 255

Random early detection and packet drop vectors for when FIFO threshold is greater than 8192 bytes and less than 10,240 bytes. Probability of drop can be programmed on a 12.5 percent granularity. For example, if bits 1 and 6 are set, the second and seventh packets out of every eight will be dropped in this region. (Default = 0)

red-dv10to12k

0 to 255

Random early detection and packet drop vectors for when FIFO threshold is greater than 10,240 bytes and less than 12,288 bytes. Probability of drop can be programmed on a 12.5 percent granularity. If bits 2, 4, and 6 are set, then the third, fifth, and seventh packets out of every eight will be dropped in this region. (Default = 0)


TABLE 6 lists the /etc/system tunable parameters that assist in performance troubleshooting.

TABLE 6 ce Performance Parameters Tunable Using /etc/system

Parameter

Values

Description

ce_ring_size

32-8192

The size of the Rx buffer ring, a ring of buffer descriptors for Rx.

One buffer = 8 Kbytes. This value must be a power of 2. The maximum value is 8192 buffers of 8 Kbytes each.

Default = 256.

ce_comp_ring_size

0-8192

The size of each Rx completion descriptor ring. It must also be a power of 2.

Default = 2048

ce_inst_taskqs

0-64

Controls the number of taskqs set up per ce device instance. This value is meaningful only if ce_taskq_disable is false; any value up to 64 is accepted.

Default = 4.

ce_srv_fifo_depth

30-100000

Gives the size of the service FIFO, in number of elements. This variable can be any integer value within the range.

Default = 2048

ce_cpu_threshold

1-1000

Gives the threshold number of CPUs that must be present and online in the system before the taskqs are used to receive packets.

Default = 4

ce_taskq_disable

0-1

Disables the use of task queues and forces all packets to go up to Layer 3 in the interrupt context.

The default depends on whether the number of CPUs in the system exceeds ce_cpu_threshold.

ce_start_cfg

0-1

An enumerated type that can have a value of 0 or 1.

0 = ce transmit algorithm does not do serialization

1 = ce transmit algorithm does serialization.

Default = 0

ce_tx_ring_size

0-8192

The size of each Tx descriptor ring. It must also be a power of 2.

Default = 2048

ce_no_tx_lb

0-1

Disables the Tx load balancing and forces all transmission to be posted to a single descriptor ring.

0 = Tx Load balancing is enabled.

1 = Tx Load Balancing is disabled.

Default = 1

ce_bcopy_thresh

0-8192

The mblk size threshold used to decide when to copy an mblk into a pre-mapped buffer, as opposed to using DMA or other methods.

Default = 256

ce_dvma_thresh

0-8192

The mblk size threshold used to decide when to use the fast-path DVMA interface to transmit an mblk.

Default = 1024

ce_dma_stream_thresh

0-8192

This global variable splits the ddi_dma mapping method further into Consistent mapping and Streaming mapping. In the Tx direction, for larger transmissions, Streaming mapping performs better than Consistent mapping. If the mblk size is greater than 256 bytes but less than 1024 bytes, mblk fragments are transmitted using the ddi_dma methods.

Default = 512


The ce interface provides a far more extensive list of kstats that can be used to measure the performance bottlenecks in the driver in the Tx or the Rx. The kstats allow you to decide what corrective tuning can be applied, based on the tuning parameters described previously. The useful statistics are shown in TABLE 7.

TABLE 7 List of ce Specific Interface Statistics

kstat name

Type

Description

rx_ov_flow

counter

Number of times the hardware is unable to receive a packet due to the internal FIFOs being full.

rx_no_buf

counter

Number of times the hardware is unable to receive a packet due to Rx buffers being unavailable.

rx_no_comp_wb

counter

Number of times the hardware is unable to receive a packet because there is no space in the completion ring to post the received packet descriptor.

ipackets_cpuXX

counter

Number of packets being directed to load-balancing thread XX.

mdt_pkts

counter

Number of packets sent using multidata interface.

rx_hdr_pkts

counter

Number of packets arriving which are less than 252 bytes in length.

rx_mtu_pkts

counter

Number of packets arriving which are greater than 252 bytes in length.

rx_jumbo_pkts

counter

Number of packets arriving which are greater than 1522 bytes in length.

rx_nocanput

counter

Number of times a packet is simply dropped by the driver because the module above the driver cannot accept the packet.

rx_pkts_dropped

counter

Number of packets dropped due to Service FIFO queue being full.

tx_hdr_pkts

counter

Number of packets hitting the small-packet transmission method (copy the packet into a pre-mapped DMA buffer).

tx_ddi_pkts

counter

Number of packets hitting the mid range DDI DMA transmission method.

tx_dvma_pkts

counter

Number of packets hitting the top range DVMA fast path DMA transmission method.

tx_jumbo_pkts

counter

Number of packets being sent which are greater than 1522 bytes in length.

tx_max_pend

counter

Measure of the maximum number of packets ever queued on a Tx ring.

tx_no_desc

counter

Number of times a packet transmit was attempted and Tx descriptor elements were not available. The packet is postponed until later.

tx_queueX

counter

Number of packets transmitted on a particular queue.

mac_mtu

value

The maximum packet size allowed past the MAC.

pci_bus_speed

value

The PCI bus speed that is driving the card.


When rx_ov_flow is incrementing, packet processing is not keeping up with the packet arrival rate. If rx_ov_flow is incrementing while rx_no_buf or rx_no_comp_wb is not, this indicates that the PCI bus is limiting the flow of packets through the device. This could be because the ce card is plugged into a slower PCI bus. You can confirm the bus speed by looking at the pci_bus_speed statistic. A bus speed of 33 MHz might not be sufficient to sustain full bidirectional one-gigabit Ethernet traffic.

Another scenario that can lead to rx_ov_flow incrementing on its own is sharing the PCI bus with another device that has bandwidth requirements similar to those of the ce card.

These scenarios are hardware limitations. A first step in addressing them is to enable the infinite burst capability on the PCI bus. Use the ndd tuning parameter infinite-burst to achieve that.
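
For example, assuming the ce instance of interest is instance 0:

    ndd -set /dev/ce instance 0
    ndd -set /dev/ce infinite-burst 1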

Infinite burst will give ce more bandwidth, but the Tx and Rx sides of the ce device will still compete for that PCI bandwidth. Therefore, if the traffic profile shows a bias toward Rx traffic and this condition is leading to rx_ov_flow, you can bias PCI transactions in favor of the Rx DMA channel relative to the Tx DMA channel, using the ndd parameters rx-dma-weight and tx-dma-weight.
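
For example, an illustrative weighting that favors Rx by a factor of eight (instance 0 assumed):

    ndd -set /dev/ce instance 0
    ndd -set /dev/ce rx-dma-weight 3     (2 to the power 3 = 8 times the Tx weighting)
    ndd -set /dev/ce tx-dma-weight 0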

Alternatively, you can reorganize the system to give the ce interface a 66-MHz PCI slot, or separate devices that contend for a shared bus segment by giving each of them its own bus segment.

If this doesn't contribute much to reducing the problem, consider using Random Early Detection (RED) to minimize the impact of dropped packets, keeping alive connections that would otherwise be terminated by regular overflow. The following parameters, configurable using ndd, enable RED: red-dv4to6k, red-dv6to8k, red-dv8to10k, and red-dv10to12k.
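
For example, an illustrative and deliberately gentle RED configuration that drops packets only as the FIFO approaches the higher thresholds (instance 0 assumed; the vector values are assumptions, not recommendations):

    ndd -set /dev/ce instance 0
    ndd -set /dev/ce red-dv8to10k 1      (bit 0 set: drop 1 packet in 8 in the 8-10 Kbyte region)
    ndd -set /dev/ce red-dv10to12k 3     (bits 0 and 1 set: drop 2 packets in 8 in the 10-12 Kbyte region)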

The probability that rx_ov_flow incrementing is the only problem is small. Typically, Sun systems have a fast PCI bus and memory subsystem, so delays are seldom induced at that level. It is more likely that the protocol stack software will fall behind and exhaust the Rx buffers or the completion descriptor ring of free elements with which to receive more packets. If this happens, the kstat counters rx_no_buf and rx_no_comp_wb will begin to increment.

This can mean that there is not enough CPU power to absorb the packets, but it can also be caused by a bad balance of the buffer ring size versus the completion ring size, which leads to rx_no_comp_wb incrementing without rx_no_buf incrementing. The default configuration is one buffer to four completion elements. This works well provided the arriving packets are larger than 256 bytes. If they are not, and that traffic dominates, then up to 32 packets are packed into a single buffer, making a configuration imbalance far more likely. In that case, more completion elements need to be made available. This can be addressed using the /etc/system tunables ce_ring_size, to adjust the number of available Rx buffers, and ce_comp_ring_size, to adjust the number of Rx packet completion elements. To understand the Rx traffic profile so you can tune these parameters, use kstat to look at the distribution of Rx packets across rx_hdr_pkts and rx_mtu_pkts.
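
For example, you might first inspect the Rx packet-size distribution with kstat and then, if small packets dominate, raise the completion ring size in /etc/system (instance 0 assumed; the values below are illustrative only):

    kstat -p -m ce -i 0 | egrep 'rx_hdr_pkts|rx_mtu_pkts'

    set ce:ce_ring_size=256          (Rx buffer ring; the default)
    set ce:ce_comp_ring_size=8192    (more completion elements for small-packet traffic)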

If ce is being run on a single CPU system and rx_no_buf and rx_no_comp_wb are incrementing, you will have to resort again to RED, or enable Ethernet flow control.

If more than one CPU is available, it is still possible to overwhelm a single CPU. Given that the Rx processing can be split using the alternative Rx data delivery models provided by ce, it might be possible to distribute the processing of incoming packets to more than one CPU, described earlier as Rx load balancing. This will happen by default if the system has four or more CPUs, and it will enable four load-balancing worker threads. The threshold of CPUs in the system and the number of load-balancing worker threads enabled can be managed using the /etc/system tunables ce_cpu_threshold and ce_inst_taskqs.
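
An illustrative /etc/system sketch that enables the taskqs on smaller systems and adds worker threads (the values are assumptions, not recommendations):

    set ce:ce_cpu_threshold=2     (use taskqs once two CPUs are online; default is 4)
    set ce:ce_inst_taskqs=8       (eight load-balancing worker threads per instance; default is 4)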

The number of load-balancing worker threads, and how evenly the Rx load is distributed across them, can be viewed with the ipackets_cpuXX kstats. The highest XX tells you how many load-balancing worker threads are running, while the values of these counters give you the spread of the work across the instantiated worker threads. This, in turn, indicates whether the load balancing is yielding a benefit. For example, if all ipackets_cpuXX counters show an approximately even number of packets, the load balancing is optimal. On the other hand, if only one is incrementing and the others are not, the benefit of Rx load balancing is nullified.

It is also possible to measure whether the system is experiencing an even spread of CPU activity using mpstat. In the ideal case, if you see good load balancing in the ipackets_cpuXX kstats, mpstat should likewise show the workload evenly distributed across multiple CPUs.
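
For example, the per-thread spread and the per-CPU picture can be compared side by side (instance 0 assumed, with counter names following the ipackets_cpuXX pattern of TABLE 7):

    kstat -p -m ce -i 0 | grep ipackets_cpu
    mpstat 5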

If none of this benefit is visible, then disable the load balancing capability completely, using the /etc/system variable ce_taskq_disable.

The Rx load balancing provides packet queues, also known as service FIFOs, between the interrupt threads, which fan out the workload, and the service FIFO worker threads, which drain the service FIFOs and complete the workload. These service FIFOs are of fixed size, controlled by the /etc/system variable ce_srv_fifo_depth. The service FIFOs can also overflow and drop packets when the rate of packet arrival exceeds the rate at which the service FIFO draining thread can complete the post-processing. These drops are counted by the rx_pkts_dropped kstat. If it is incrementing, you can increase the size of the service FIFOs, or increase the number of service FIFOs to allow more Rx load balancing. In some cases it may be possible to eliminate increments in rx_pkts_dropped, only to have the problem move to rx_nocanput, which is generally addressable only by tuning the upper layer protocols. If you are running the Solaris 8 operating system or earlier, upgrading to the Solaris 9 operating system might also reduce nocanput errors, due to its improved multithreading and IP scalability.
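
An illustrative check and adjustment (instance 0 assumed; the FIFO depth shown is an assumption, not a recommendation):

    kstat -p -m ce -i 0 | grep rx_pkts_dropped

    set ce:ce_srv_fifo_depth=8192    (in /etc/system; default is 2048)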

There is a difficulty in maximizing the Rx load balancing, and it is contingent on the Tx ring processing. It is measurable using the lockstat command, which will show the ce_start routine at the top as the most contended driver function. This contention cannot be eliminated, but it is possible to employ a newer Tx method known as transmit serialization, which keeps contention to a minimum by forcing the Tx processing onto a fixed set of CPUs. Keeping the Tx process on a fixed CPU reduces the risk of CPUs spinning while waiting for other CPUs to complete their Tx activity, ensuring CPUs are always kept busy doing useful work. This transmission method can be enabled by setting the /etc/system variable ce_start_cfg to 1. When you enable transmit serialization, you trade off some transmit latency for avoiding the mutex spins induced by contention.
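
A sketch of how this might be examined and enabled (the sample length is arbitrary):

    lockstat sleep 30 | grep ce_start    (look for ce_start among the most contended functions)

    set ce:ce_start_cfg=1                (in /etc/system; enables transmit serialization)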

The Tx side is also subject to an overwhelmed condition, which occurs when the CPU speed exceeds the Ethernet line rate, although this is less likely than any Rx-side condition. When the Tx side becomes overwhelmed, the tx_max_pend value matches the size of the /etc/system variable ce_tx_ring_size. If this occurs, you know that packets are being postponed because Tx descriptors are being exhausted, and ce_tx_ring_size should be increased.
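
An illustrative check and adjustment (instance 0 assumed; the ring size shown is an assumption and must be a power of 2, up to 8192):

    kstat -p -m ce -i 0 | egrep 'tx_max_pend|tx_no_desc'

    set ce:ce_tx_ring_size=4096    (in /etc/system; default is 2048)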

The tx_hdr_pkts, tx_ddi_pkts, and tx_dvma_pkts counters are useful for establishing the traffic profile of an application and matching that profile with the capabilities of a system. The parameters ce_bcopy_thresh, ce_dvma_thresh, and ce_dma_stream_thresh adjust the transmission method applied to an outgoing packet. These parameters are described in TABLE 6 in terms of mblks, the mechanism used to carry packets in the Solaris operating system. The following summary shows how the parameters relate to each other:

mblk size < ce_bcopy_thresh: driver will copy into pre-mapped buffer

mblk size > ce_dvma_thresh: driver uses fast path DVMA interface

ce_dma_stream_thresh < mblk size < ce_dvma_thresh:
           driver uses streaming DMA method

Otherwise: driver uses consistent DMA method.

How to set these parameters is again system dependent and application dependent. The system dependency is associated with memory latency. The rule of thumb to apply here is that if the system has a large number of CPUs, the memory latency will tend to be larger.

On systems with larger memory latency, it is best to avoid moving data from one memory location to another, so copying into the pre-mapped DMA buffer will be more expensive than setting up and tearing down a DMA mapping on a per-packet basis.

Furthermore, if tx_hdr_pkts appears to be incrementing at a higher rate than tx_dvma_pkts, your application has a traffic profile dominated by small packets. In that case, you should adjust ce_dvma_thresh and ce_bcopy_thresh so that most of the packets hit the tx_dvma_pkts path in the driver and avoid copies. The following may be reasonable parameters for such a system:

ce_bcopy_thresh = 97
ce_dvma_thresh = 96
ce_dma_stream_thresh = <don't care>

Alternatively, in low memory latency systems, the inverse is true and you would need to adjust ce_dvma_thresh and ce_bcopy_thresh so that most packets take the bcopy route.

ce_bcopy_thresh = 256
ce_dvma_thresh = 255
ce_dma_stream_thresh = <don't care>

The Streaming DMA and Consistent DMA methods are provided as the fallback path, and they tend to provide little improvement over the fast DVMA method or the copy into the pre-mapped buffer. As the previous examples show, this fallback path can usually be tuned out.

You can adjust the DMA thresholds of ce_bcopy_thresh, ce_dvma_thresh, and ce_dma_stream_thresh, using the /etc/system file to push more packets into the preprogrammed DMA versus the per-packet programming. Once the tuning is complete, the statistics can be viewed again to see if the tuning took effect.

The tx_queueX counters give a good indication of whether Tx load balancing is happening. As on the Rx side, if no load balancing is visible, meaning all the packets appear to be counted by only one tx_queue, you should switch this feature off using the ce_no_tx_lb variable.
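
An illustrative check and, if warranted, the switch to disable the feature (instance 0 assumed):

    kstat -p -m ce -i 0 | grep tx_queue

    set ce:ce_no_tx_lb=1    (in /etc/system; disables Tx load balancing)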

The mac_mtu statistic gives the maximum size of packet that will make it through the ce device. It is useful for determining whether jumbo frames are enabled at the DLPI layer below TCP/IP. If jumbo frames are enabled, the MTU indicated by mac_mtu will be 9216.

This is helpful because it will show whether there is a mismatch between the DLPI-layer MTU and the IP-layer MTU, allowing troubleshooting to proceed in a layered manner.

Once jumbo frames are successfully configured at both the driver layer and the TCP/IP layer, use rx_jumbo_pkts and tx_jumbo_pkts to verify that jumbo frame packets are being received and transmitted correctly.
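
For example (instance 0 assumed):

    kstat -p -m ce -i 0 | grep mac_mtu
    kstat -p -m ce -i 0 | egrep 'rx_jumbo_pkts|tx_jumbo_pkts'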
