
Methods of Computer System Attacks

The methods of attack are broad-ranging and insidious, yet many of them are accessible even to amateur hackers through tools widely available on the Internet. For this reason, securing applications today is no small challenge. This chapter discusses the various kinds of attack, including categories and examples of social engineering attacks.


"To an extent, it was through magic that I discovered the enjoyment in fooling people."
—Kevin Mitnick [1]

This chapter will present some of the common attack techniques that are used to compromise computer systems. Attacks often exploit security design flaws, but not always. For example, a "denial of service" attack that interrupts or overwhelms an application can put an otherwise impregnable system out of service. In general, attacks exploit any form of technical or human weakness. Technical weaknesses can include design flaws, implementation flaws, inadequate protection features, and environmental changes or weaknesses. Human weaknesses can include poor usage practices, inexperienced or inadequately trained users, and poor physical security.

The discussion here focuses first on attacks against technical weaknesses, which are referred to here as technical attacks. The discussion then turns to the methods used against humans in order to obtain computer access. The latter is often referred to as "social engineering."

5.1 Technical Attacks

The enumeration of attack patterns provided in this section is not exhaustive, as the patterns that are possible are limited only by the ingenuity of attackers. It is important to understand these basic patterns as a precursor to examining the software design principles presented afterwards, so that the purpose and motivation of those principles can be appreciated. The attack patterns presented here are not intended to be mutually exclusive or orthogonal. In fact, many of them overlap or are related, and real attacks often fall into more than one pattern or use multiple techniques in combination.

The attack patterns presented here are based primarily on the work of others (for example, [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]). The categories here are arranged in a way that is, hopefully, relevant to the way that most software developers think. This book’s Web site1 contains references to sources and other work related to vulnerability and attack taxonomies.

Technical Attacks, by Section

5.1.1   Interception (related: sniffing; covert channel)
5.1.2   Man-in-the-Middle
5.1.3   Replay
5.1.4   Modification in Place or in Transit (related: file manipulation; MiM)
5.1.5   Interruption
5.1.6   Saturation and Delay (related: denial of service)
5.1.7   Exploitation of Non-Atomicity (related: TOCTTOU)
5.1.8   Coordination Interference
5.1.9   Forced Crash and Retrieval of Crash Artifacts
5.1.10  Forced Restart, Forced Re-Install
5.1.11  Environmental Interference
5.1.12  Spoofing
5.1.13  Hijacking
5.1.14  Circumvention
5.1.15  Trap Door
5.1.16  Exploit of Incomplete Traceability (related: non-repudiation)
5.1.17  Exploit of Incomplete Validation (related: buffer overflow)
5.1.18  Exploit of Incomplete Authentication or Authorization
5.1.19  Exploit of Exposure of Internal State
5.1.20  Exploit of Residual Artifacts (related: "dumpster diving" or "trashing")
5.1.21  Embedded Attack (related: planting; trojan horse; time bomb; logic bomb)
5.1.22  Pattern Matching and Brute Force Guessing (related: exhaustive search)
5.1.23  Namespace Attack
5.1.24  Weak Link as Gateway
5.1.25  Trusted Resource Attack
5.1.26  Scope Attack (related: domain errors)


5.1.1 Interception

Attacks that involve any form of subversive interception of information can be categorized as either "eavesdropping" or "sniffing." The term "sniffing" usually refers specifically to non-intrusive and often undetectable interception, such as by reading information that is broadcast or by attaching a passive listener to a communication channel. The term "eavesdropping" is a less technical term and applies more broadly and loosely.

This category of attacks often involves the use of a "covert channel." A covert channel is any communication pathway that exists but was not intended by the designers of the system and thereby violates the system’s security policy. [13] A covert channel need not be an actual mechanism intended for any form of communication at all; for example, the technique of varying the load on a CPU has been used as a covert channel for the binary encoded signaling of sensitive information to another process in an undetected manner.

Notorious examples of eavesdropping or sniffing attacks include:

  1. Sending ICMP packets in order to re-direct packets to flow through an attacker's system, thereby allowing the attacker to read the data in the packets.

  2. Reading the mailbox message files in a POP server.

  3. Installing a keyboard handler that silently listens to keystrokes.

  4. Reading someone's screen on an X Window System display or on a Windows system via a program such as RealVNC.

When an application reads the information stored by another application, this is often also referred to as "interference," especially if that information is then acted upon in a subversive manner or is used to interfere with the operation of the other application.

5.1.2 Man-in-the-Middle

When a party succeeds in interposing itself between two endpoints and is thereby able to intercept and possibly modify the communication without either party being aware, this is referred to as a "man-in-the-middle" (MiM) attack.

MiM is related to interception, but requires that the interception occurs as the result of the interposition of a listener rather than strictly passive eavesdropping.

5.1.3 Replay

Replay involves the interception of information intended for a target system, followed by sending that information—possibly with additional information inserted—to the target system for the purpose of attacking the system.2 Replay is a form of MiM attack in which the intercepted message is not modified, although it may be augmented.

Replay often (but not always) involves the use of an intercepted bearer credential of some kind, such as a password or session credential. If an attacker intercepts information that is used to access a resource, such as credentials, the attacker might be able to impersonate a trusted party and thereby access the resource. In this scenario, the attacker "replays" the intercepted information, leading the receiver to believe that the attacker is a trusted party. An example of this kind of replay is intercepting someone’s browser session cookie or authentication header and using it to masquerade as the user’s session.
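
A common defense illustrates the mechanics of the attack: if every legitimate request must carry a fresh, single-use nonce, a replayed request necessarily presents a nonce that has already been seen and can be rejected. The following Java sketch (the class and method names are hypothetical, not drawn from any particular framework) shows the server-side bookkeeping in its simplest form:

    // Minimal sketch of nonce-based replay detection (illustrative names only).
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class ReplayGuard {
        private final Set<String> seenNonces = ConcurrentHashMap.newKeySet();

        // Returns true for a fresh request; false for a replay of an earlier nonce.
        // A production design would also bound the set, e.g., by expiring old nonces.
        boolean accept(String nonce) {
            return seenNonces.add(nonce);   // add() returns false if the nonce was already used
        }
    }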

Replay may be coupled with a timing attack (see Section 5.1.6, "Saturation and Delay") against a credential-validation server to thwart the detection of credential expiration.

5.1.4 Modification in Place or in Transit

Many attacks rely on the ability to modify data in a persistent store or while it is in transit. A credential store or password file is an obvious target for an attack, as is a password on its way to an end user.

In business computing environments, persistent data stores are attacked far more often than data in transit, because attacking data in transit requires network-level penetration and a higher level of sophistication, and constitutes a MiM attack. Attacks against persistent stores have been categorized by others as "File Manipulation" attacks [15].

The practice of modifying hidden tags in a Web form can be considered to be modification in transit and is not a MiM attack because the client is usually the attacker.

5.1.5 Interruption

If a system’s security depends on the completion of certain precursor processes, and those processes can be interrupted such that the system assumes that they completed, it might be possible to put the system into an insecure state by interrupting the precursor processes. For example, overwhelming a logging service such that it fails might be used to prevent a trace of intrusion activity from being recorded. This is often considered to be a denial of service, but often there is no service involved—merely an interconnected process—and so the term "interruption" seems more appropriate. Denial of service also can be caused by overwhelming an application, rather than by interrupting it. See the next attack description.

5.1.6 Saturation and Delay

In some systems, security relies on the existence of a service that will detect intrusion. In that case, all that is needed is to delay the response of the intrusion detection system long enough to allow an attack to complete or to force the service request to time out so that the requester uses cached data. This can often be accomplished by overwhelming the service or intrusion detection system. This is a type of attack that is commonly referred to as a denial of service, but the actual technique is a saturation technique; denial of service is the immediate effect [16] on the service or intrusion detection system, and there is then a security consequence as a result of the failure of the system to detect intrusion.

The intrusion detection system might not be specifically designed as an intrusion detection system per se, but might merely be, for example, a normal service that is designed to shut down if any anomalous behavior is detected; for example, if packets with the same sequence number are received.

Delay is a powerful technique because it takes time to identify an intruder, and if delay can be achieved, the attacker has time to cover their tracks and leave—and possibly enter through another means or mount an attack from a different compromised host location.

Besides their use as a means of penetrating a system, saturation and delay can be attack objectives in their own right. Saturation or delay perpetrated for the purpose of making a system inaccessible or unusable (i.e., making it "unavailable") is properly known as a denial of service attack. However, note that denial of service can be achieved in other ways; for example, by interfering with any process that is critical to an application.

5.1.7 Exploitation of Non-Atomicity

If a software process or thread of execution accesses objects or resources that can be accessed by other processes or threads, or by other activities of the same thread, there is a possibility that logically concurrent access by more than one process, thread, or activity might interfere and make it possible to compromise the state of the system.

Some languages, such as Java, provide low-level primitives for controlling concurrent access to language objects. Databases provide locking mechanisms for serializing concurrent access to file-based data. A proper design for a concurrent system usually employs these mechanisms to ensure that interference does not occur. This often means designing software routines that access shared resources in such a way that the accesses are "atomic"—that is, that their effect is all-or-nothing, and intermediate states are unobservable.

If there is non-atomicity in the system, it is sometimes possible to force the system to perform steps out of sequence, thereby putting it into a state that was not anticipated by its designers. This is often referred to as a "race condition."
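
As a concrete illustration, the following Java fragment (with hypothetical names) contains a classic check-then-act race: between the balance check and the update, another thread can perform its own withdrawal, leaving the shared state in a condition neither thread anticipated. Making the check and the update a single atomic step, here with the synchronized primitive mentioned above, closes the window:

    // Sketch of a check-then-act race condition and one way to close it.
    class Account {
        private long balanceCents = 10_000;

        // NOT thread-safe: another thread can run between the check and the update,
        // so two concurrent withdrawals of 10_000 can both pass the check.
        void withdrawUnsafe(long amountCents) {
            if (balanceCents >= amountCents) {   // time of check
                balanceCents -= amountCents;     // time of use
            }
        }

        // Atomic version: the check and the update execute as one indivisible step.
        synchronized void withdrawSafe(long amountCents) {
            if (balanceCents >= amountCents) {
                balanceCents -= amountCents;
            }
        }
    }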

A special case of this is when the interference is purposefully performed after a resource access rule is checked but before the resource is accessed: During that interval, a change is made to a critical context value, such as a user identity, causing the system to perform a function in a different context than the context that was authorized. This is referred to as a "time of check to time of use" (TOCTTOU) attack.

Attacks based on non-atomicity are often coupled with a denial of service attack that slows a system down and thereby "opens a window of vulnerability."

5.1.8 Coordination Interference

Non-atomicity and delay pretty much cover attacks related to synchronization, but a related class of attacks deserves its own consideration for systems that are inherently asynchronous or independent and that depend on presumed event sequences, timing, or timestamps. Independent systems that cooperate are sometimes assumed to perform actions in a certain sequence or with certain effects, and interference with one of these systems can result in inconsistent effects that cause a failure in a different system.

Delay is often used to achieve this type of interference, for example, by interfering with or spoofing a timing service. Interference with a messaging service can prevent events that other systems rely upon from being registered.

5.1.9 Forced Crash and Retrieval of Crash Artifacts

When systems fail, they often leave traces of their internal operation or leave resources in an inconsistent and potentially unprotected or insecure (for example, unencrypted) state. Access to protected information can, therefore, sometimes be achieved by forcing a system to crash and then examining the artifacts that remain.

A crash can sometimes be achieved by exploitation of incomplete validation of inputs or exposure of internal objects that can be modified during an attack to force the system into a state that causes failure.

The most common type of artifact left behind is a file containing an image of the process. Such an image often contains sensitive data, such as unencrypted credentials or the details and relative addresses of stack variables and program code.

5.1.10 Forced Restart, Forced Re-Install

One way of inserting malicious software into a system is by compromising a system’s bootup or installation configuration. If the system is then caused to crash or become unusable so that it will have to be re-started, or corrupt so that it will have to be re-installed (with a compromised installation), the compromised configuration will be started or installed, respectively.

This is an extremely powerful and subtle technique. It is unfortunately true that "backup" resources are usually much less protected than primary resources. Thus, by silently implanting a trojan horse (see Section 5.1.21, "Embedded Attack") in a backup resource (or in an emergency response tool) and then merely forcing the primary resource to crash or be crippled, the compromised backup resource will be installed.3

This technique is closely related to the "Trusted Resource Attack" (discussed in Section 5.1.25).

5.1.11 Environmental Interference

The normal operation of programs usually presumes the availability of resources, such as memory, threads, sockets, and file space. Software designers often do not anticipate the failure conditions that can occur when these resources are unavailable or exhausted, which can leave functions that are expected to complete unfinished. If the system's security depends on these functions (for example, a system log), it might be possible to attack the system without leaving a trace or without intrusion detection completing.

5.1.12 Spoofing

Spoofing involves forging or corrupting (destroying the integrity of) a resource or artifact for the purpose of pretending to be—i.e., for the purpose of masquerading as—something or someone else. There are many variations on spoofing, and it can be done at any level of a system, from the network level through the application level. Some examples are:

  • Forging IP packet source addresses.

  • Forging ARP packets to fool a router into thinking that your machine has someone else's IP address.

  • Creating misleading Web pages that fool a user into thinking that they are at a different site [17].

  • Sending a name resolution request to a DNS server, forcing it to forward the request to a more authoritative server, and then immediately sending a forged response—causing the first DNS server to cache the forged response and supply that address to its clients. (Attacks that use this technique as a method of tricking users into accessing sites that mimic trusted sites, for the purpose of obtaining user credentials or other personal identity information, are often referred to as "pharming".)

  • Replacing a trusted file or program with a file or program that mimics the original one but that contains malicious data or code. This is also a kind of "trusted resource attack" (see Section 5.1.25).

  • Forging or constructing an application object through abnormal means in order to instantiate an illegitimate instance that appears to be legitimate.

Spoofing often exploits an unsophisticated end user. For example, many Web users do not adequately understand or manage their browser security policies. Common ways of exploiting weakly secured browsers include creating hidden windows from which attacks on other windows are launched, manipulating a window's appearance and contents so that it masquerades as another kind of window, and modifying other windows that show legitimate content.

Spoofing is especially effective when coupled with delay or interruption, because many spoofing schemes involve preventing a legitimate service from responding before an illegitimate one does. It is also powerful in combination with a forced restart, a forced re-install, or any kind of interference requiring an operator emergency response, because diagnostic and incident response tools can often be attacked more easily than the system itself. When users are in a crisis, they usually do not question the integrity of the tools they invoke to respond to it. Thus, the combination of attacking poorly protected tools or configurations, followed by an attack that forces the system to fail and the tools to be used, is extremely powerful. This is in fact the very technique used by the thieves in the movie Ocean's Eleven: they carry away the casino's assets in the equipment bags of a spoofed SWAT team, having compromised the 911 channel so that the fake SWAT team is the one that responds to the robbery. See also "Distraction and Diversion," discussed later in this chapter.

5.1.13 Hijacking

The term "hijacking" is usually used to refer to an attack that involves disconnecting a server resource in some manner from a resource channel and replacing it with a different server resource. Thus, the channel is "hijacked."

This is a variation of spoofing because users of the channel think that they are accessing the intended resource, via the channel, but are "spoofed" by the replacement resource.

5.1.14 Circumvention

This book defines "circumvention" as any method by which an attacker bypasses intended controls, access checks, or system pathways in order to gain access to or control of protected resources. Circumvention can involve a covert channel, or it can involve incompletely protected resources. Many of the attacks discussed here represent variations of circumvention.

5.1.15 Trap Door

A trap door is a mechanism embedded within a system that allows the normal access paths or access checks of a system to be bypassed. This often takes the form of a special password that is hard-coded into the software. It can also take the form of a special diagnostic interface.
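
A minimal sketch of the hard-coded password variety, with purely illustrative names, looks like this:

    // Sketch of a hard-coded-password trap door (illustrative only).
    class LoginService {
        private final CredentialStore store;

        LoginService(CredentialStore store) { this.store = store; }

        boolean authenticate(String user, String password) {
            // Trap door: a password baked into the code bypasses the normal
            // credential check and leaves no trace in the credential store.
            if ("maint".equals(user) && "letmein-42".equals(password)) {
                return true;
            }
            return store.verify(user, password);
        }
    }

    interface CredentialStore { boolean verify(String user, String password); }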

5.1.16 Exploit of Incomplete Traceability

If the system’s design is such that it fails to record the actions of users, this can lead to a situation in which either appropriate or inappropriate actions are later untraceable or unprovable. It is important to emphasize that this is a result of the system’s design—not a result of an attack directed against its logging mechanism. (An attack directed against the logging mechanism would most likely be a trusted resource attack, which is discussed later.)

The ability of a party to deny having performed appropriate actions or aspects of those actions (for example, the time at which they were performed) is known as repudiation. For example, a user might deny that she performed a particular transaction such as a purchase, and if there is no record that conclusively links her to the transaction, her denial might be successful. Another example would be denial that a message was received when in fact it was. The term non-repudiation refers to the ability of a system to defeat repudiation attempts, for example, by recording authenticated records (logs) of all transactions and by using communication mechanisms that provide secure acknowledgment at both endpoints.
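
One way to make log records harder to repudiate or tamper with is to authenticate each entry, for example with a keyed hash. The following Java sketch is illustrative only; in practice the HMAC key would be held by a separate, trusted logging authority rather than by the application itself:

    // Sketch: appending an HMAC to each log record so tampering can be detected.
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    class AuthenticatedLog {
        private final SecretKeySpec key;

        AuthenticatedLog(byte[] secret) {
            this.key = new SecretKeySpec(secret, "HmacSHA256");
        }

        String record(String user, String action) throws Exception {
            String entry = System.currentTimeMillis() + " " + user + " " + action;
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            String tag = Base64.getEncoder()
                    .encodeToString(mac.doFinal(entry.getBytes(StandardCharsets.UTF_8)));
            return entry + " hmac=" + tag;   // write both the entry and its tag to the log
        }
    }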

Incomplete logs can also enable an intruder to perform inappropriate actions without traceability. For example, if an attacker’s modification of sensitive files is not recorded in a manner that identifies the attacker’s identity, the attack cannot be traced to its source. A failure to log actions in a traceable manner, therefore, represents a significant vulnerability.

5.1.17 Exploit of Incomplete Validation

If a software module does not fully check that its inputs fall within expected ranges, it might be possible to invoke the module with inputs outside of those ranges and thereby cause the program to do things that were not intended by the software designer. This might enable an attacker to circumvent normal system pathways or checks.
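
Incomplete validation is not limited to malformed strings; a simple numeric range that is never checked can be just as damaging. In the hypothetical Java fragment below, a negative quantity turns a purchase into a credit:

    // Sketch of incomplete range validation (names are illustrative).
    class OrderHandler {
        private long accountBalanceCents = 10_000;

        // Incomplete validation: the quantity is never range-checked, so a
        // negative quantity credits the caller's account instead of charging it.
        void purchaseUnvalidated(int quantity, long unitPriceCents) {
            accountBalanceCents -= quantity * unitPriceCents;
        }

        // Validated version: reject inputs outside the expected range.
        void purchaseValidated(int quantity, long unitPriceCents) {
            if (quantity <= 0 || quantity > 1_000 || unitPriceCents < 0) {
                throw new IllegalArgumentException("input out of range");
            }
            accountBalanceCents -= quantity * unitPriceCents;
        }
    }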

The infamous "buffer overflow" attack is a variation of incomplete validation, although in a buffer overflow the validation failure can be considered to be within the application framework (for example, language itself) rather than in the application design because a secure application framework should prevent buffer overflow as well as any other kind of type failure or range failure.

5.1.18 Exploit of Incomplete Authentication or Authorization

The design or configuration of a system might intentionally or unintentionally omit certain checks, enabling an attacker to "slip through" access control or authentication mechanisms and thereby obtain unauthorized access or control. This is most likely to be possible if authorization decisions are interspersed throughout the application code.

5.1.19 Exploit of Exposure of Internal State

Circumvention can also occur if resources expose their internal state, thereby allowing a client module to read or modify the resource’s internal state in unintended ways.

Inappropriate reading of a resource’s internal state is a breach of confidentiality because information that is intended to be private to the resource is revealed to an unintended party. This is known as a containment failure.

Inappropriate writing of internal state is a breach of integrity. For example, if a resource’s interface returns references ("aliases") for internal objects instead of returning separate copies of those objects, any client of the resource might be able to modify the internal objects because they can obtain direct references to them. This kind of failure has been categorized by some as an "integrity failure" resulting from an "aliasing error." [18]
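
The classic Java manifestation of this aliasing error is an accessor that returns the internal collection itself rather than a read-only view or copy, as in the hypothetical sketch below:

    // Sketch of exposure of internal state through aliasing, and a safer accessor.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class Roster {
        private final List<String> members = new ArrayList<>();

        // Leaks internal state: the caller receives a reference to the private list
        // and can add or remove members without passing any access check.
        public List<String> getMembersUnsafe() {
            return members;
        }

        // Safer: return an unmodifiable view (or a defensive copy) of the list.
        public List<String> getMembers() {
            return Collections.unmodifiableList(members);
        }
    }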

This form of attack is the motivation behind the security model embedded in many browsers. In this model, often referred to as the "same origin" policy, Web pages can only affect their own contents. However, there are loopholes in the policy. For example, scripts can embed executable objects that do not adhere to the security policy, but rather adhere to a different (possibly looser) security policy.

5.1.20 Exploit of Residual Artifacts

If objects are re-used and contain artifacts of prior use, an attacker might be able to use those artifacts to obtain secret information or to obtain references to protected objects. This is analogous to an intruder searching through your trash can, a practice sometimes referred to as "dumpster-diving" or "trashing." [19]
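
In software, the "trash can" is often a reused buffer or pooled object. The hypothetical Java sketch below shows a buffer pool that hands residual data from one caller to the next unless it scrubs the buffer first:

    // Sketch of residual artifacts in a reused buffer (illustrative only).
    import java.util.ArrayDeque;
    import java.util.Arrays;
    import java.util.Deque;

    class BufferPool {
        private final Deque<byte[]> free = new ArrayDeque<>();

        // Residual artifacts: a recycled buffer still contains the previous
        // caller's data, which the new caller can simply read.
        byte[] acquireUnsafe() {
            byte[] b = free.poll();
            return (b != null) ? b : new byte[4096];
        }

        // Safer: scrub the buffer before reuse.
        byte[] acquire() {
            byte[] b = free.poll();
            if (b == null) return new byte[4096];
            Arrays.fill(b, (byte) 0);
            return b;
        }

        void release(byte[] b) { free.push(b); }
    }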

5.1.21 Embedded Attack

This book uses the term "embedded attack" to refer to all attacks that rely on the placement of attack software within a trusted software system. The act of setting up an embedded attack is commonly referred to as planting, [20] because a subversive component is "planted" on the target system. Planting can be achieved using other techniques, such as social engineering (discussed in Section 5.2, "Social Engineering") or technical means.

A very common form of embedded attack is known as a "trojan horse" attack. A trojan horse is a trusted component that is imported or installed (somehow) into the system but which contains a secret mechanism to facilitate a subsequent attack. Generally the user has rights that the program’s author (i.e., the attacker) does not, so the attacker obtains the user’s rights by "hiding" inside a trusted component. This implies that the attacker has access to the trusted component, has convinced the user that the component can be trusted, or has lured the user into installing or enabling it.

A trojan horse program can be installed (planted) as a result of a computer virus. An example of this delivery method is the "Troj/BankAsh" virus (2005), which attempts to disable anti-virus software and then monitors the user’s Internet access for banking Web sites, such as Barclays, Cahoot, Halifax, and others. If a banking Web site is accessed, the program silently monitors the user’s keystrokes in order to capture a login ID and password and other account information and then FTPs this information to a remote site.

So-called "script injection" attacks are a special case of a trojan horse attack in which a script (i.e., a program) is input in lieu of data and is then later inadvertently interpreted (executed) by the application. A trojan horse can also be used to execute a MiM attack by intercepting internal information "from the inside" and using it maliciously.

An embedded attack is sometimes implemented as a "bomb." A time bomb is a subversive mechanism secretly embedded within a trusted system for the purpose of initiating an attack at a later point in time. A logic bomb is similar to a time bomb except that it is triggered by a sequence of program events rather than by the passage of time.

Embedded attacks are especially effective when coupled with a forced crash. An example is the compromise of a repair tool or boot script followed by causing the system to fail so that it will have to be repaired or rebooted using the compromised tool or script. This is particularly effective because "build-time" components, such as tools and scripts, are often less stringently protected than runtime systems.

5.1.22 Pattern Matching and Brute Force Guessing

Attacks that utilize sophisticated knowledge to derive or anticipate the state of a system or credentials used for authentication are often referred to as "oracle" attacks. These include deciphering, discovery of exploitable patterns, non-randomness or predictable pseudo-randomness, and exploitation of algorithmic weaknesses. A notorious example is the cracking of Netscape’s implementation of SSL by taking advantage of a weakness in its random number generation.

Attacks that merely try every possibility until they succeed are known as "brute force" or "exhaustive search" attacks. Encryption algorithms that are not sufficiently strong or that use relatively short keys can often be cracked using brute force: This is a result of the ever-decreasing cost of computing power.
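
A rough calculation shows why key length matters. Assuming, purely for illustration, an attacker who can test one billion keys per second:

    // Back-of-the-envelope estimate of exhaustive key search (assumed rate: 1e9 keys/s).
    public class KeySearchEstimate {
        public static void main(String[] args) {
            double guessesPerSecond = 1e9;
            for (int bits : new int[] {40, 56, 128}) {
                double keys = Math.pow(2, bits);
                double years = keys / guessesPerSecond / (365.25 * 24 * 3600);
                System.out.printf("%d-bit key: about %.1e years to exhaust%n", bits, years);
            }
        }
    }

At that assumed rate a 40-bit keyspace falls in minutes and a 56-bit keyspace in a couple of years, while a 128-bit keyspace remains far out of reach; the practical boundary keeps moving as computing power gets cheaper.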

5.1.23 Namespace Attack

Many attacks exploit weaknesses in the name resolution process used to identify resources. These include the insertion of rogue components in a name-resolution path, as well as the insertion of components whose names are similar to, or appear equivalent to, legitimate names. It is often the case that abbreviated names are used to identify resources, and a failure to canonicalize a resource name can enable an attacker to substitute other resources with the same abbreviated name but a different canonical name.
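
A frequent software-level instance is the failure to canonicalize file names before an access check, as in the hypothetical sketch below; the raw name looks acceptable, but the path it resolves to lies outside the intended directory:

    // Sketch of a canonicalization check for requested file names (illustrative).
    import java.io.File;
    import java.io.IOException;

    class FileServer {
        private final File root = new File("/srv/public");

        // Insufficient: "reports/../../../etc/passwd" begins with the allowed prefix
        // but resolves to /etc/passwd, outside the intended directory.
        boolean isAllowedUnsafe(String requested) {
            return requested.startsWith("reports/");
        }

        // Safer: resolve the canonical path first, then check containment.
        boolean isAllowed(String requested) throws IOException {
            String canonical = new File(root, requested).getCanonicalPath();
            return canonical.startsWith(root.getCanonicalPath() + File.separator);
        }
    }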

5.1.24 Weak Link as Gateway

Attackers often reach their goal by following a circuitous path: entering at a weak point and then using that foothold as a trusted position from which to reach other points.4

Human resources Web sites are famous examples of this. Those sites are often poorly protected, but because they have the same domain as other organization sites, they can be used as a launching point when compromised.

Virtual private networks (VPNs) represent another consideration. For example, if a network is linked to a partner via a VPN and the partner’s network has a known weakness, the partner’s network can first be penetrated and then used as a gateway to the target network.

5.1.25 Trusted Resource Attack

An application can often be penetrated by attacking a resource on which the application relies. Examples of this include:

  • Attacking a DNS server’s zone files.

  • Attacking an object lookup service, for example, by covertly embedding a trojan horse within its code.

  • Modifying the system time. (Some taxonomies identify this kind of attack as a category in its own right.)

  • Modifying files that are used by an application but that have insufficient protection.

  • Attacking log files that are used to record the actions of users.

  • Attacking other programs that are poorly protected and that access (and ideally modify) the same resources on which the application of interest relies.5 Thus, this approach is transitive in that it involves attacking a trusted resource, in order to attack another target that uses the trusted resource. This particular pattern is an example of a "Weak Link as Gateway" attack.

This is how source code to Cisco Systems routers was stolen in 2004—by planting a compromised version of the trusted SSH program on Cisco’s network to act as a trojan horse by sending users’ passwords to the attacker. [23]


This technique has easy parallels in the non-computer world. There was a movie in the 1960s called Kaleidoscope in which a professional card player stealthily entered the factory of a playing card manufacturer and modified the very dies used to print a popular brand of playing cards. He alone knew of the tiny modifications and was able to play poker and win. This is an example of a two-level transitive attack: He attacked a resource (the factory) used to produce the resources used by the casinos (the cards). Thus, an effective way to attack a protected resource is to subvert resources used by those resources. This includes emergency response resources. (See the "Forced Restart, Forced Re-Install" attack.)

5.1.26 Scope Attack

Weaknesses in scoping mechanisms, such as a runtime container, a language-defined scoping boundary, or any other security policy domain or mechanism designed to keep objects from communicating, can be used to access objects or resources outside of their intended scope or boundary. Weaknesses that make this kind of attack possible have been described as "domain errors" [24].

A security domain or access scope can sometimes be circumvented by employing a circumvention technique such as a covert channel (discussed previously). Another kind of scoping attack is to exploit the inadvertent exposure of a resource’s internal structure (see the "Exploit of Exposure of Internal State" attack in Section 5.1.19).
