
Essential ADO.NET



Book

  • Sorry, this book is no longer in print.
Not for Sale

Description

  • Copyright 2002
  • Edition: 1st
  • Book
  • ISBN-10: 0-201-75866-0
  • ISBN-13: 978-0-201-75866-5

"Essential ADO.NET is a comprehensive guide for developers of data-aware applications who want to transition to Microsoft's new .NET data access architecture. If you want to dig deep into the how's and why's of ADO.NET, this book will prove to be a worthy guide."

—Omri Gazitt, Product Manager of ADO.NET and Product Unit Manager, XML, Microsoft

"Bob Beauchemin packs a career's worth of knowledge into ten well-organized chapters, each one dense with information and insights. If you work with databases on Windows, buy this book."

—David Chappell, Principal of Chappell & Associates

Microsoft's ADO.NET enables Windows-based applications to access many types of databases from a variety of different vendors. This technology offers the advantage of cross-database flexibility that is essential for Internet and distributed-computing success.

Essential ADO.NET presents a thorough explanation of ADO.NET, exploring in depth the capabilities of classes, interfaces, properties, and methods. This book also examines the structure of data and demonstrates the ways in which ADO.NET can solve data access challenges. In particular, the discussion focuses on how ADO.NET effectively balances the need for generic functionality with efficiency, and how it is specifically designed to address today's need for scalability, concurrency, and robustness. A convenient reference for programmers moving from other data access APIs, including OLE DB, ADO, ODBC, and JDBC, Essential ADO.NET correlates classes and functions from these other APIs to those of ADO.NET.

You will find practical information on:

  • How the ADO.NET model can be used to access relational data stores (a brief provider-neutral sketch appears below)
  • The DataSet class for in-memory data representation
  • How ADO.NET handles concurrency conflict-resolution problems
  • Guidelines for deciding when to use an online stream (DataReader) or offline cache (DataSet)
  • How to present database information using ASP.NET Web Forms
  • How ADO.NET integrates relational data storage and XML representation with Microsoft's SQL Server
Essential ADO.NET is the most complete source of information for this crucial piece of the new Microsoft platform. Whether you are new to this technology or have used it before, this book will help you to understand and get the most out of ADO.NET.
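
The cross-database flexibility described above rests on ADO.NET's generic provider interfaces. As a rough, hypothetical sketch (not taken from the book; the connection string, query, and table are placeholders), the following C# routine is written against the System.Data interfaces only, so a different data provider such as OleDb could be substituted without changing the data access code:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class GenericAccessSketch
    {
        // Written against IDbConnection/IDbCommand/IDataReader, so the routine is not
        // tied to any single data provider; SqlClient is chosen only in Main below.
        static void DumpFirstColumn(IDbConnection connection, string sql)
        {
            using (connection)
            {
                connection.Open();
                IDbCommand command = connection.CreateCommand();
                command.CommandText = sql;
                using (IDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetValue(0));
                }
            }
        }

        static void Main()
        {
            // Placeholder connection string and query; an OleDbConnection (or any other
            // provider's connection) could be passed in instead.
            DumpFirstColumn(
                new SqlConnection("server=(local);database=pubs;Integrated Security=SSPI"),
                "SELECT au_lname FROM authors");
        }
    }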




    Downloads

    Source Code and Online Resources

    Extras

    Related Article

    Letting ADO.NET Write Your SQL

    Web Resources

    Click below for Web Resources related to this title:
    Support Site

    Sample Content

    Online Sample Chapter

    Essential ADO.NET: XML and Data Access Integration

    Downloadable Sample Chapter

    Click below for Sample Chapter(s) related to this title:
    Sample Chapter 7

    Table of Contents



    Foreword by Tim Ewald.


    Foreword by Richard Grimes.


    Preface.


    Acknowledgments.


    1. Data: Models, Applications, and APIs.

    Information, Data Modeling, and Databases.

    Database and API Fundamentals.

    The Relational Model.

    Relational Design Rules.

    Benefits of the Relational Model.

    Relational Model Support in ADO.NET.

    Hierarchies, Objects, and Hybrids.

    Modern Nonrelational Data.

    Multidimensional Data.

    Nonrelational Data and ADO.NET.

    XML and the Infoset.

    XML, Databases, and Universal Data Representation.

    Data-centric Application Models.

    Evolution of Data Access APIs.

    Where Are We?



    2. ADO.NET Basics.

    Data Access and the .NET Architecture.

    Two Patterns of Data Access.

    Connected Mode.

    OleDb and SqlClient Data Providers.

    Writing Generic Data Access Code with ADO.NET Data Providers.

    Cursors in the Data Provider Model.

    Disconnected Mode and the DataSet.

    The XML APIs in .NET.

    Streaming XML.

    XML Schemas.

    The XmlDocument, XPath, and XPathNavigators.

    Mixing XML and Data Providers.

    Layout of the Managed Data Classes.

    Where Are We?



    3. The Connected Model: Streaming Data Access.

    .NET Data Providers and the Connected Model.

    Connection Classes.

    Connection Pooling.

    Metadata Methods.

    Commands.

    Command Behaviors.

    Using Parameters.

    Command Preparation, Cancellation, Timeouts, and Cleanup.

    Streaming Data through a DataReader.

    Reading Column Values through IDataRecord.

    Handling Large Data Columns.

    Error Handling.

    Using Transactions.

    Distributed Transactions.

    How Connection Pooling Works.

    How Declarative Transactions Work.

    Permission Classes.

    Database Security.

    Where Are We?



    4. The DataSet Class: Sets of Relational Data.

    DataSets.

    DataSet as an In-memory Database.

    What Can You Do with a DataSet?

    The DataSet Object Model.

    DataColumns, DataRows, and DataTables.

    DataTable and Its Uses.

    DataRows.

    Keys, Relations, and Constraints.

    Navigating through Relationships: Select and Find.

    Adding, Retrieving, Changing, and Deleting Rows.

    Combining Changes.

    Merging DataSets.

    DataRow States and Versions.

    Rules and Relationships.

    Error Handling.

    Events.

    DataSets and Nonrelational Types.

    Defining an Information Schema.

    Where Are We?



    5. DataAdapters: Synchronizing Databases and Datasets.

    Optimistic Concurrency.

    DataAdapter Classes.

    Populating a DataSet from a Managed Provider.

    Using Fill with Schema and Mapping Information.

    Error Handling during DataAdapter.Fill.

    How DataAdapter.Fill Works.

    ADO Integration in OleDbDataAdapter.

    Updating a Database through DataAdapter.

    The CommandBuilder Class.

    Coding Update Logic Directly.

    How Update Works.

    Controlling Updates.

    The DataSet Event Model.

    Refreshing DataSet Using Update and Merge.

    Writing General Customized Commands.

    The ADOX CommandBuilder.

    Building a Batch Update Command.

    DataSets and Nonrelational Data Types Revisited.

    Should You Use DataSet or DataReader?

    Where Are We?



    6. Data Binding: ADO.NET and Graphical User Interfaces.

    Windows Forms and Web Forms.

    Patterns of Data Presentation.

    Using Databound Controls.

    Web Forms Data Binding Types.

    Anatomy of Databound Control Types.

    Binding to a DataReader.

    Data Binding with DataSets.

    DataViews and Common Transformations.

    Table and Column Mappings.

    Editable List Controls: DataList and DataGrid.

    DataList.

    DataGrid.

    Nonrelational Data and DataViews.

    Integrating Visual Studio.

    Server Explorer and Data Toolbox.

    Controls and Data Forms.

    Where Are We?



    7. XML and Data Access Integration.

    XML and Traditional Data Access.

    XML and ADO.NET.

    Defining a DataSet's Schema.

    Refining DataSet's XML Schema.

    Reading XML into DataSet.

    Writing XML Schemas from DataSet.

    Microsoft-Specific Annotations.

    Writing XML Data from DataSet.

    Serialization, Marshaling, and DataSet.

    Typed DataSets.

    The XmlDataDocument Class.

    XmlDataDocuments and DataSets.

    XmlDataDocument and DataDocumentXPathNavigator.

    Why Databases and XML?

    XML as a Distinct Type.

    Document Composition and Decomposition.

    SQL Server, XML, and Managed Data Access.

    The FOR XML Keyword.

    OpenXML.

    The SQLOLEDB Provider.

    The SqlXml Managed Classes.

    The SQLXML Web Application.

    Updategrams.

    FOR XML in the SQLXMLOLEDB Provider.

    Bulk Loading.

    Future Support.

    Using SQLXML and .NET.

    Where Are We?



    8. Providers: ADO.NET and Data Providers.

    What Are Your Choices?

    Staying with OLE DB: A Summary of OLE DB Concepts.

    Staying with OLE DB: Interaction with the OleDb Data Provider.

    Main Cotypes and Type Mapping.

    Accessors.

    Executing Commands, Returning Results, and Using OpenRowset.

    Command Results Format and Behaviors.

    Command Dialects.

    Hierarchical Data.

    Updating from a Rowset.

    Errors.

    Unsupported Functions.

    Supported and Unsupported Providers.

    Writing a Data Provider.

    Implementing the Connection Class.

    Specification.

    Implementation.

    Specialization.

    Implementing the Command Class.

    Specification.

    Implementation.

    Specialization.

    Implementing the DataReader Class.

    Specification.

    Implementation.

    Specialization.

    Implementing the DataAdapter Class.

    Specification.

    Implementation.

    Specialization.

    Adding Enhanced Functionality.

    Implementing XML Infoset Providers.

    Implementing XmlReader.

    Implementing XPathNavigator.

    Implementation Alternatives: Conclusions.

    Is a Single Universal Data Access Model Possible?

    Where Are We?



    9. Consumers: ADO.NET Migration for Consumers.

    ADO.NET Migration Paths.

    ADO.NET for OLE DB Programmers.

    Cotype Equivalents.

    Data Provider Transparency.

    Using Provider-Specific Functionality.

    Error Handling.

    System-Supplied Services.

    System-Supplied Components.

    Service Providers.

    Marshaling.

    A Brief Overview of ADO.

    ADO.NET for ADO Programmers.

    Class Equivalences.

    ADO Connections, Commands, and Transactions.

    Connections and Connection Strings.

    Using Transactions.

    Commands and Command Behaviors.

    Hierarchical Data.

    Asynchronous Operations.

    Properties.

    ADO.NET Versus ADO Disconnected Model.

    Class Equivalents.

    Navigation.

    What Happened to GetRows in ADO.NET?

    Updates.

    Update Statement Creation.

    Batch Updates.

    ADO.NET DataSet Extensions.

    Column and Table Naming.

    Sorting and Filtering.

    ADO DB Interoperability.

    ADO.NET for ODBC Programmers.

    Handles and Environment.

    Commands.

    Fetching Data.

    Metadata and Schema Information.

    Errors.

    ADO.NET for JDBC Programmers.

    Generic Code.

    Provider Types.

    Connection Pooling.

    Nonrelational Data Types.

    Object Databases.

    Other SQL-99 Extensions.

    Metadata.

    Transactions.

    Commands and Behaviors.

    Executing Queries and Returning Results.

    Server Cursors.

    Errors.

    ADO.NET JDBC Programmers and the Disconnected Model.

    SQL/J Part 0 and Typed DataSets.

    Where Are We?



    10. ADO.NET and Various Types of Data.

    Evolution in Data Access.

    ADO.NET with Server- and File-Based RDBMS.

    ADO.NET with Homogeneous Hierarchical Data and ORDBMS.

    ADO.NET and Network Data: Object Graphs, OODBMS, and Network DBMS.

    ADO.NET and Structured Files, Multidimensional Data, and ORDBMS.

    ADO.NET with Flat Files and Semistructured Files.

    Where Are We Going?



    Appendix A. Data Types and Type Mappings.


    Appendix B. Expression Syntax.


    Appendix C. Schema Inference Rules.


    Bibliography.


    Index.

    Preface

    This book is about a new provider-model data access library known as ADO.NET. While ADO.NET is part of Microsoft's .NET initiative, this book is also about data: where we store it, and how we retrieve, update, and index it. This involves more than just enumerating the usage of all of the classes, interfaces, properties, and methods in a library. It entails looking at the problem you are trying to solve with the library and determining which API, or portion of the API, to use.

    Although there are a lot of repeating patterns in business problems, there are no hard-and-fast rules that amount to a cookbook solution, especially with respect to data access, presentation, and exchange. Today's lightning-fast solution programmed in native assembly language for the ultimate in speed may be a maintenance nightmare tomorrow. Patterns that work perfectly for a single user may fail for a group of users. Patterns for a small workgroup may fail when applied to a large worldwide presence spanning many countries and time zones. Cut-and-paste solutions will not help you improve your work with ADO.NET and should not be expected from this book, although the book does contain some solutions to more generic problems.

    Generic vs Specific Programming Model

    Until relatively recently, I'd always programmed database access using the most vendor-specific, down-to-the-metal programming interface available, on the (correct) premise that it was always fastest. Throughout my career, however, I'd often made my living doing application conversions when the hardware type or database changed. In a given long-lived company, these sometimes seem to change as quickly as clothing fashions. I was convinced of the usefulness of an abstract, provider-style interface when I was asked to convert a company's web site from a vendor-specific database API to a more abstract one, ODBC. The reason was that the actual database choice depended on the direction of other parts of the company and was outside of our control. My project lead's choice of ODBC over a native API was vindicated when I put the finishing touches on our new application and generated the database. The project was on schedule, and the project lead trundled off to a Friday afternoon meeting. Monday morning, I was informed that the choice of database had completely changed. Luckily, because we had used ODBC (and because both databases had ODBC drivers), we needed only to generate the new database, make small changes to the programs because of subtle nuances in SQL vocabulary, and retest the programs, and we were converted. We were lucky.

    A provider-based data access API must always balance the problem of abstracting generic functionality against giving vendors enough room that the API doesn't become an excuse for lowest-common-denominator coding. ADO.NET seems to do a good job in this area; the SQL Server-specific data provider (known as the SqlClient data provider) is proof of the room afforded to vendor-specific extensions. The real proof will come with future data providers.

    Cyclical Technologies and Innovation

    I've been doing information processing with computers for a living for a long time. When I went through the interview at DevelopMentor, I was asked, "As you get older, does it get more difficult to acquire new skills in the constantly changing world of programming languages, design paradigms, and application programming interfaces?" I laughed and responded that it actually gets easier. After being exposed to multiple generations of these "new ideas" in computing, the realization eventually sets in that the underlying reality being abstracted doesn't change, and the abstractions themselves are usually only subtly different. In addition, the abstractions are sometimes cyclical; today's brand-new idea may be a throwback to something we've encountered before.

    As an example, I have always been intrigued by the problem of data access and the structure of database management system products. Having lived through using files as "databases" (or at least data storage), hierarchical databases, CODASYL databases, relational databases, object databases, multidimensional databases, and variations that were somewhere in between, I was intrigued when introduced to using XML as a data model. I was impressed when I investigated its data abstraction model, in-memory object model, rich data manipulation tools, and query languages, but got the feeling I'd been here (at least somewhat) before. Then, when thumbing through an old book that compared relational and hierarchical databases, I was struck by the similarity of the hierarchical navigation functions in IMS/DB (IBM's flagship database until the mid-1980s) to the hierarchical navigation functions used in the XML DOM APIs. Later, I was able to correlate the (old) idea of letting the underlying SQL engine do the navigation with the possibility of using the same concept in the XPath query language: speed improvements can come from future query engine improvements, without rewriting your navigation-based program.

    As another example, one of the latest technologies centers on business-to-business data exchange between unlike systems using a known data format (XML). This pattern appears similar to a system I'd worked on to import and export banking transactions using the well-known automated clearinghouse (ACH) format, which likewise used a known data format and telecommunication protocol to exchange data between unlike systems.

    Being able to grab on to something familiar in a new technology is always a help. I'd found out, however, that I'd irritated co-workers by overuse of the phrase "it's just like..." when speaking about a new technology. Using this phrase was irritating because it implied there was nothing new in the technology, no improvements. On the contrary, there are usually improvements, based on new problems to solve and novel ways of looking at an existing problem. What "it's just like..." meant to me was that I had a handle into the new world from the old; I could go from there to consider just what the improvements were. I noticed that when I mentioned the improvements in new technologies, folks were a lot less irritated.

    One of my major assignments at DevelopMentor was to develop and teach a course on OLE DB. In my travels, my students would ask, "What makes this different from ODBC?" or "Why should I switch to this from JDBC?" What do I gain, what do I lose, and what stays the same? After having answered these questions many times, and having considered such changes across many different technologies, I find myself in a good position to assess the repercussions of the new ADO.NET interface.

    But Things Have Changed

    When giving talks on data access and presentation strategies, I used to joke that a web browser was "just a multi-color terminal that could sing and dance." Well, maybe sing. But, at the risk of sounding clichéd, accessing applications through a web browser has drastically changed the way we design our data access strategies. Here's how:

    1. User volumes can vary wildly and without much notice. In the days of terminals, user volumes were fairly predictable, or at least quantifiable; growth was constrained by the ability to add terminals and users to the private network. Users also had to fill out forms to gain access to corporate applications, which made it possible to control growth. In the age of Internet business, the number of users of an application is constrained only by the popularity of your product. This is one of the major selling points of the Internet as a vehicle of commerce: businesses want to be able to start with a small server and scale up or out to a large user base on limited notice. It changes the way we design and write applications.
    2. Users do not sign off of applications, and they are sometimes physically unidentifiable. The presence of hyperlinks in Internet applications makes it possible to branch immediately from your favorite business application to checking on your favorite stock or sports team. If that sports team's website contains something new and interesting, you will never return to the application. This makes it impossible to incorporate signoff logic into applications. Users also cannot be identified by "terminal ID," because dynamic address assignment protocols like DHCP and proxy server software obfuscate the end user's address. Combined with the fact that users do not sign off of applications, this makes it impossible to set aside a chunk of storage on a per-user basis keyed to the location of a computing device. Another big change.
    3. Users do a lot of their own data entry. Pessimistic concurrency and record locking in older systems were based on the premise that data entry was done by an array of terminal operators. It was just as likely as not that two operators could be accessing data tied to your account at the same time and that, if your data was "busy," an operator could put your slip of paper at the bottom of the pile and key it in later. Web applications make everyone their own data entry clerk. The fact that there probably won't be "two of you" updating your personal and financial information at the same time makes optimistic concurrency more viable (a small code sketch of this pattern follows this list). The fact that users do their own data entry also means the data editing techniques and business rules must be more robust, because non-professional operators "key in the darndest things."
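
    A minimal sketch of what that optimistic pattern looks like in ADO.NET code (my illustration, not the book's; the Customers table, its columns, and the connection string are invented): the UPDATE succeeds only when the row still holds the values originally read, and a conflicting change by another user surfaces as a DBConcurrencyException.

        using System.Data;
        using System.Data.SqlClient;

        class OptimisticUpdateSketch
        {
            static void Main()
            {
                SqlConnection conn = new SqlConnection(
                    "server=(local);database=Sample;Integrated Security=SSPI");
                SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT Id, Name FROM Customers", conn);

                // Update a row only if its current database values still match the
                // values originally fetched into the DataSet.
                SqlCommand update = new SqlCommand(
                    "UPDATE Customers SET Name = @Name " +
                    "WHERE Id = @Id AND Name = @OrigName", conn);
                update.Parameters.Add("@Name", SqlDbType.NVarChar, 50, "Name");
                update.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
                SqlParameter orig = update.Parameters.Add("@OrigName", SqlDbType.NVarChar, 50, "Name");
                orig.SourceVersion = DataRowVersion.Original;   // compare against the value as first read
                adapter.UpdateCommand = update;

                DataSet ds = new DataSet();
                adapter.Fill(ds, "Customers");                        // read while connected
                ds.Tables["Customers"].Rows[0]["Name"] = "New Name";  // change the data offline
                adapter.Update(ds, "Customers");                      // zero rows affected raises DBConcurrencyException
            }
        }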

    The ADO.NET APIs use data access patterns tuned to the way data access and input are done today. Techniques such as connection pooling, disconnected updates, and optimistic concurrency play a large part in shaping functionality to today's problems, even when some of yesterday's technologies are in use. Here is a brief summary of the chapters of the book.

    Chapter 1 - Data: Models, Applications, and APIs - A data access API has to strike a balance between being skewed toward a single data storage style or database and being so all-encompassing that individual classes and methods become too abstract to be meaningful. This chapter starts by enumerating the functionality in ADO.NET that is truly novel and describing the data storage styles and application types with which it is most useful.

    Chapter 2 - ADO.NET Basics - This chapter divides the landscape into three parts: data providers, the ADO.NET DataSet object model, and the XML data access model. We begin with a simple program illustrating each part. I include the XML API portions, along with a discussion of XML and ADO.NET integration in Chapter 7, because I consider the XML stack an important part of the overall data access stack in the .NET Framework.

    Chapter 3 - The Connected Model - Most data access APIs are centered on a provider model that abstracts away the differences between slightly different concrete implementations. This chapter investigates ADO.NET's variation on the familiar connection, command, and results model used to access relational data stores. Some application-style-specific features, such as connection pooling and automatic distributed transactions, conclude the chapter.
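
    As a rough illustration of this connected model (my sketch, with a placeholder connection string and SQL Server's pubs sample schema assumed), a parameterized command streams its results through a DataReader:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class ConnectedModelSketch
        {
            static void Main()
            {
                using (SqlConnection conn = new SqlConnection(
                    "server=(local);database=pubs;Integrated Security=SSPI"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "SELECT au_lname, au_fname FROM authors WHERE state = @state", conn);
                    cmd.Parameters.Add("@state", SqlDbType.Char, 2).Value = "CA";

                    // The reader is a forward-only, connected stream over the results;
                    // CommandBehavior.CloseConnection releases the connection when the reader closes.
                    using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}, {1}", reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }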

    Chapter 4 - The DataSet - Most data access APIs include the concept of an in-memory data representation that can be used to populate controls on a form, and can be traversed and manipulated. In ADO.NET, this model is based around the DataSet class and its supporting collection classes that model a relational database. This chapter introduces the DataSet by examining its intricacies as a stand-alone object model.
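
    For a flavor of that stand-alone object model (my sketch; the tables, columns, and data are invented), a DataSet can be built, related, and navigated entirely in memory, with no database involved:

        using System.Data;

        class DataSetModelSketch
        {
            static void Main()
            {
                DataSet ds = new DataSet("OrderData");

                DataTable customers = ds.Tables.Add("Customers");
                DataColumn custId = customers.Columns.Add("Id", typeof(int));
                customers.Columns.Add("Name", typeof(string));
                customers.PrimaryKey = new DataColumn[] { custId };

                DataTable orders = ds.Tables.Add("Orders");
                orders.Columns.Add("OrderId", typeof(int));
                DataColumn orderCust = orders.Columns.Add("CustomerId", typeof(int));

                // A DataRelation provides parent/child navigation between the tables.
                DataRelation rel = ds.Relations.Add("CustomerOrders", custId, orderCust);

                customers.Rows.Add(new object[] { 1, "Contoso" });
                orders.Rows.Add(new object[] { 100, 1 });

                DataRow contoso = customers.Rows.Find(1);             // locate by primary key
                DataRow[] contosoOrders = contoso.GetChildRows(rel);  // navigate the relation
            }
        }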

    Chapter 5 - DataAdapters - While the DataSet is a convenient way to represent data in memory, it is most useful as a client-side model of data in a database. This chapter is about getting data from a data provider into a DataSet and persisting the changes made to the DataSet while "offline" back to the database. Since a database is shared among many users who will be working on the same data, this chapter explores the problems that ensue when more than one user changes the same data in offline mode, known as the optimistic concurrency conflict resolution problem. In addition, there has been a lot of debate about when to use a direct-from-the-database stream (the ADO.NET DataReader) and when to use an offline cache (the ADO.NET DataSet) with common application access patterns. This chapter presents some guidelines.
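
    The fill/change/update round trip described here looks roughly like this (my sketch; the connection string and table are placeholders). A CommandBuilder generates the update logic from the SELECT statement, the simplest of the approaches the chapter compares with hand-written update commands:

        using System.Data;
        using System.Data.SqlClient;

        class DataAdapterSketch
        {
            static void Main()
            {
                SqlConnection conn = new SqlConnection(
                    "server=(local);database=Sample;Integrated Security=SSPI");
                SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT Id, Name FROM Customers", conn);

                // Derives INSERT/UPDATE/DELETE commands from the SELECT statement.
                SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

                DataSet ds = new DataSet();
                adapter.Fill(ds, "Customers");                       // connect, read, disconnect

                ds.Tables["Customers"].Rows[0]["Name"] = "Renamed";  // edit the cache offline

                adapter.Update(ds, "Customers");                     // reconnect and push changes back
            }
        }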

    Chapter 6 - Data Binding - Presenting data to a human through a graphical program or web page means mapping the bits in the database or data cache to graphical controls such as text boxes, list boxes, and data grids. This chapter looks at ADO.NET's integration with these user interface items. It also contains a reprise of the DataReader versus DataSet debate, this time in the context of the ASP.NET Web Forms application style.
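
    A rough Web Forms example of what that binding looks like (my sketch; the page, grid, schema, and connection string are all invented): a DataSet is filled, wrapped in a DataView for sorting, and bound to a DataGrid control.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web.UI.WebControls;

        public class CustomersPage : System.Web.UI.Page
        {
            protected DataGrid CustomersGrid;   // declared in the .aspx markup

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                if (IsPostBack) return;          // bind only on the first request

                SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT Id, Name FROM Customers",
                    "server=(local);database=Sample;Integrated Security=SSPI");
                DataSet ds = new DataSet();
                adapter.Fill(ds, "Customers");

                // A DataView layers sorting and filtering over the table before binding.
                DataView view = ds.Tables["Customers"].DefaultView;
                view.Sort = "Name ASC";

                CustomersGrid.DataSource = view;
                CustomersGrid.DataBind();
            }
        }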

    Chapter 7 - XML and Data Access Integration - Since most of the world's data is stored in relational databases, this chapter examines the methods ADO.NET provides for integrating relational data storage with XML representation and manipulation, with a specific focus on integration with SQL Server, Microsoft's flagship database. Since this is one of the main advances in ADO.NET, this is a fairly long and involved chapter.
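
    For a taste of that integration (my sketch; the names and connection string are placeholders), the same DataSet contents can be written out as an XML document and XML Schema, read back in, or exposed to the DOM and XPath through an XmlDataDocument:

        using System.Data;
        using System.Data.SqlClient;
        using System.Xml;

        class XmlIntegrationSketch
        {
            static void Main()
            {
                SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT Id, Name FROM Customers",
                    "server=(local);database=Sample;Integrated Security=SSPI");
                DataSet ds = new DataSet("CustomerData");
                adapter.Fill(ds, "Customers");

                ds.WriteXmlSchema("Customers.xsd");   // the DataSet's structure as XML Schema
                ds.WriteXml("Customers.xml");         // the data as an XML document

                // The same rows, viewed through the XML DOM/XPath APIs.
                XmlDataDocument doc = new XmlDataDocument(ds);

                DataSet roundTripped = new DataSet();
                roundTripped.ReadXmlSchema("Customers.xsd");
                roundTripped.ReadXml("Customers.xml");
            }
        }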

    Chapter 8 - Providers - Data access (like other models with multiple similar implementations that differ only in details) is abstracted through a provider model. We'll look at the mapping of the OLE DB provider model to the ADO.NET data provider model. We'll also introduce the concept of XML data providers, an abstraction of the data provider model over the XML Infoset.

    Chapter 9 - Consumers - This chapter provides a useful reference for data consumers (that is, programmers writing data access code) who may be moving from other data access APIs, such as OLE DB, ADO, ODBC, and JDBC. It's designed to make each group feel "at home" with ADO.NET by correlating their "class X" or "function Y" with ADO.NET's classes and methods.

    Chapter 10 - ADO.NET and Various Types of Data - We started in Chapter 1 by describing the different types of data storage and the applications that use this data. The book concludes by revisiting the landscape presented in Chapter 1 in light of the knowledge gained in the rest of the book, and by examining whether the data stacks (ADO.NET and XML) really do contain a solution for everyone. We end with a mention of future directions in data access.

    How To Read This Book

    This book is divided into three parts. It is part manifesto and analysis, part API reference by example, and part pure reference material for use by programmers involved with data conversions. It should really be read sequentially, but need not be completely understood upon first reading. Any time you find yourself wondering "Why should I care about this?" or "I have no point of reference for this," read on. The meaning may become clear later.

    Chapters 1 and 10 are the manifesto and analysis of database types, database APIs, and application styles, in broad brush strokes. Chapters 2 through 7 are the minute details of how the ADO.NET classes, interfaces, and methods work; this material can be read in its entirety or browsed piecemeal, but is best read in order if you have the time. Chapters 8 and 9 are special purpose, for provider writers and for consumer writers coming from other database libraries who are looking for a "way in" to ADO.NET. These chapters are more reference oriented and will not read like a story; in particular, some idiosyncrasies and analogies in the provider chapter may be a bit detailed for those who have not written OLE DB code. That chapter is for my friends and students with whom I've learned and written OLE DB providers over the past few years. They always complain that no one ever writes books for them.

    What you won't find here is hundreds of pages of class, interface, method, and property information arranged by class name. For this information I suggest you consult the .NET Framework SDK documentation. Having spent many hours browsing the Windows platform documentation, my impression is that it matures with the Windows platform itself. The Framework SDK documentation is probably the best yet, although the OLE DB Programmer's Reference docs were impressive.

    What Took You So Long? Observation vs Documentation

    There are a few reasons why this book was not released earlier. One was that I knew the .NET product would change during the beta process, sometimes drastically. I did not want to produce a "programming with the beta" book as my one contribution to this space, even though I am aware that these books have their place. Such a book would be obsolete or misleading mere months after it had been published.

    In addition, I've tried to verify anything that appeared in the documentation through direct experimentation with code. I've also tried to describe some internal behaviors of the providers and details of their implementation. These behaviors changed (sometimes drastically) during the beta process, and I was forced to rewrite sections of the prose and rethink solutions I'd come up with. I actually waited until the final product was released so I could run the code fragments and experiments one more time.

    Deducing internals through experimentation and observation is inherently risky, because the implementation of these hidden details will change over time. It will change because software vendors are constantly working to update and improve their products. For the subjects where the risk of change is greatest, I've attempted to document the experimentation process so that you can repeat the experiments in the future, as the software or the underlying database changes.




    Index

    Click below to download the Index file related to this title:
    Index
