
Essential ADO.NET


This book is no longer in print.

Description

  • Copyright 2002
  • Dimensions: 7-3/8 x 9-1/4
  • Pages: 560
  • Edition: 1st
  • Premium Website
  • ISBN-10: 0-201-75866-0
  • ISBN-13: 978-0-201-75866-5

"Essential ADO.NET is a comprehensive guide for developers of data-aware applications who want to transition to Microsoft's new .NET data access architecture. If you want to dig deep into the how's and why's of ADO.NET, this book will prove to be a worthy guide."

—Omri Gazitt, Product Manager of ADO.NET and Product Unit Manager, XML, Microsoft

"Bob Beauchemin packs a career's worth of knowledge into ten well-organized chapters, each one dense with information and insights. If you work with databases on Windows, buy this book."

—David Chappell, Principal of Chappell & Associates

Microsoft's ADO.NET enables Windows-based applications to access many types of databases from a variety of different vendors. This technology offers the advantage of cross-database flexibility that is essential for Internet and distributed-computing success.

Essential ADO.NET presents a thorough explanation of ADO.NET, exploring in depth the capabilities of classes, interfaces, properties, and methods. This book also examines the structure of data and demonstrates the ways in which ADO.NET can solve data access challenges. In particular, the discussion focuses on how ADO.NET effectively balances the need for generic functionality with efficiency, and how it is specifically designed to address today's need for scalability, concurrency, and robustness. A convenient reference for programmers moving from other data access APIs, including OLE DB, ADO, ODBC, and JDBC, Essential ADO.NET correlates classes and functions from these other APIs to those of ADO.NET.

You will find practical information on:

  • How the ADO.NET model can be used to access relational data stores
  • The DataSet class for in-memory data representation
  • How ADO.NET handles concurrency conflict-resolution problems
  • Guidelines for deciding when to use an online stream (DataReader) or offline cache (DataSet)
  • How to present database information using ASP.NET Web Forms
  • How ADO.NET integrates relational data storage and XML representation with Microsoft's SQL Server
Essential ADO.NET is the most complete source of information for this crucial piece of the new Microsoft platform. Whether you are new to ADO.NET or have used it before, this book will help you understand and get the most out of it.
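The two access patterns the book keeps returning to, the connected stream (DataReader) and the offline cache (DataSet), look roughly like this in C#. This is a minimal sketch only; the connection string and the tables from SQL Server's "pubs" sample database are illustrative.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class QuickTour
{
    static void Main()
    {
        // Illustrative connection string; substitute your own server and database.
        string connStr = "server=(local);database=pubs;integrated security=SSPI";

        // Connected pattern: a forward-only stream over an open connection.
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT au_lname FROM authors", conn);
            SqlDataReader rdr = cmd.ExecuteReader();
            while (rdr.Read())
                Console.WriteLine(rdr.GetString(0));
            rdr.Close();
        }

        // Disconnected pattern: fill an in-memory DataSet, then work offline.
        SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM authors", connStr);
        DataSet ds = new DataSet();
        da.Fill(ds, "Authors");   // opens and closes the connection itself
        Console.WriteLine(ds.Tables["Authors"].Rows.Count);
    }
}
```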




    Downloads

    Source Code and Online Resources

    Extras

    Related Article

    Letting ADO.NET Write Your SQL


    Sample Content

    Online Sample Chapter: Chapter 7, "XML and Data Access Integration"

    Table of Contents



    Foreword by Tim Ewald.


    Foreword by Richard Grimes.


    Preface.


    Acknowledgments.


    1. Data: Models, Applications, and APIs.

    Information, Data Modeling, and Databases.

    Database and API Fundamentals.

    The Relational Model.

    Relational Design Rules.

    Benefits of the Relational Model.

    Relational Model Support in ADO.NET.

    Hierarchies, Objects, and Hybrids.

    Modern Nonrelational Data.

    Multidimensional Data.

    Nonrelational Data and ADO.NET.

    XML and the Infoset.

    XML, Databases, and Universal Data Representation.

    Data-centric Application Models.

    Evolution of Data Access APIs.

    Where Are We?



    2. ADO.NET Basics.

    Data Access and the .NET Architecture.

    Two Patterns of Data Access.

    Connected Mode.

    OleDb and SqlClient Data Providers.

    Writing Generic Data Access Code with ADO.NET Data Providers.

    Cursors in the Data Provider Model.

    Disconnected Mode and the DataSet.

    The XML APIs in .NET.

    Streaming XML.

    XML Schemas.

    The XmlDocument, XPath, and XPathNavigators.

    Mixing XML and Data Providers.

    Layout of the Managed Data Classes.

    Where Are We?



    3. The Connected Model: Streaming Data Access.

    .NET Data Providers and the Connected Model.

    Connection Classes.

    Connection Pooling.

    Metadata Methods.

    Commands.

    Command Behaviors.

    Using Parameters.

    Command Preparation, Cancellation, Timeouts, and Cleanup.

    Streaming Data through a DataReader.

    Reading Column Values through IDataRecord.

    Handling Large Data Columns.

    Error Handling.

    Using Transactions.

    Distributed Transactions.

    How Connection Pooling Works.

    How Declarative Transactions Work.

    Permission Classes.

    Database Security.

    Where Are We?



    4. The DataSet Class: Sets of Relational Data.

    DataSets.

    DataSet as an In-memory Database.

    What Can You Do with a DataSet?

    The DataSet Object Model.

    DataColumns, DataRows, and DataTables.

    DataTable and Its Uses.

    DataRows.

    Keys, Relations, and Constraints.

    Navigating through Relationships: Select and Find.

    Adding, Retrieving, Changing, and Deleting Rows.

    Combining Changes.

    Merging DataSets.

    DataRow States and Versions.

    Rules and Relationships.

    Error Handling.

    Events.

    DataSets and Nonrelational Types.

    Defining an Information Schema.

    Where Are We?



    5. DataAdapters: Synchronizing Databases and Datasets.

    Optimistic Concurrency.

    DataAdapter Classes.

    Populating a DataSet from a Managed Provider.

    Using Fill with Schema and Mapping Information.

    Error Handling during DataAdapter.Fill.

    How DataAdapter.Fill Works.

    ADO Integration in OleDbDataAdapter.

    Updating a Database through DataAdapter.

    The CommandBuilder Class.

    Coding Update Logic Directly.

    How Update Works.

    Controlling Updates.

    The DataSet Event Model.

    Refreshing DataSet Using Update and Merge.

    Writing General Customized Commands.

    The ADOX CommandBuilder.

    Building a Batch Update Command.

    DataSets and Nonrelational Data Types Revisited.

    Should You Use DataSet or DataReader?

    Where Are We?



    6. Data Binding: ADO.NET and Graphical User Interfaces.

    Windows Forms and Web Forms.

    Patterns of Data Presentation.

    Using Databound Controls.

    Web Forms Data Binding Types.

    Anatomy of Databound Control Types.

    Binding to a DataReader.

    Data Binding with DataSets.

    DataViews and Common Transformations.

    Table and Column Mappings.

    Editable List Controls: DataList and DataGrid.

    DataList.

    DataGrid.

    Nonrelational Data and DataViews.

    Integrating Visual Studio.

    Server Explorer and Data Toolbox.

    Controls and Data Forms.

    Where Are We?



    7. XML and Data Access Integration.

    XML and Traditional Data Access.

    XML and ADO.NET.

    Defining a DataSet's Schema.

    Refining DataSet's XML Schema.

    Reading XML into DataSet.

    Writing XML Schemas from DataSet.

    Microsoft-Specific Annotations.

    Writing XML Data from DataSet.

    Serialization, Marshaling, and DataSet.

    Typed DataSets.

    The XmlDataDocument Class.

    XmlDataDocuments and DataSets.

    XmlDataDocument and DataDocumentXPathNavigator.

    Why Databases and XML?

    XML as a Distinct Type.

    Document Composition and Decomposition.

    SQL Server, XML, and Managed Data Access.

    The FOR XML Keyword.

    OpenXML.

    The SQLOLEDB Provider.

    The SqlXml Managed Classes.

    The SQLXML Web Application.

    Updategrams.

    FOR XML in the SQLXMLOLEDB Provider.

    Bulk Loading.

    Future Support.

    Using SQLXML and .NET.

    Where Are We?



    8. Providers: ADO.NET and Data Providers.

    What Are Your Choices?

    Staying with OLE DB: A Summary of OLE DB Concepts.

    Staying with OLE DB: Interaction with the OleDb Data Provider.

    Main Cotypes and Type Mapping.

    Accessors.

    Executing Commands, Returning Results, and Using OpenRowset.

    Command Results Format and Behaviors.

    Command Dialects.

    Hierarchical Data.

    Updating from a Rowset.

    Errors.

    Unsupported Functions.

    Supported and Unsupported Providers.

    Writing a Data Provider.

    Implementing the Connection Class.

    Specification.

    Implementation.

    Specialization.

    Implementing the Command Class.

    Specification.

    Implementation.

    Specialization.

    Implementing the DataReader Class.

    Specification.

    Implementation.

    Specialization.

    Implementing the DataAdapter Class.

    Specification.

    Implementation.

    Specialization.

    Adding Enhanced Functionality.

    Implementing XML Infoset Providers.

    Implementing XmlReader.

    Implementing XPathNavigator.

    Implementation Alternatives: Conclusions.

    Is a Single Universal Data Access Model Possible?

    Where Are We?



    9. Consumers: ADO.NET Migration for Consumers.

    ADO.NET Migration Paths.

    ADO.NET for OLE DB Programmers.

    Cotype Equivalents.

    Data Provider Transparency.

    Using Provider-Specific Functionality.

    Error Handling.

    System-Supplied Services.

    System-Supplied Components.

    Service Providers.

    Marshaling.

    A Brief Overview of ADO.

    ADO.NET for ADO Programmers.

    Class Equivalences.

    ADO Connections, Commands, and Transactions.

    Connections and Connection Strings.

    Using Transactions.

    Commands and Command Behaviors.

    Hierarchical Data.

    Asynchronous Operations.

    Properties.

    ADO.NET Versus ADO Disconnected Model.

    Class Equivalents.

    Navigation.

    What Happened to GetRows in ADO.NET?

    Updates.

    Update Statement Creation.

    Batch Updates.

    ADO.NET DataSet Extensions.

    Column and Table Naming.

    Sorting and Filtering.

    ADO DB Interoperability.

    ADO.NET for ODBC Programmers.

    Handles and Environment.

    Commands.

    Fetching Data.

    Metadata and Schema Information.

    Errors.

    ADO.NET for JDBC Programmers.

    Generic Code.

    Provider Types.

    Connection Pooling.

    Nonrelational Data Types.

    Object Databases.

    Other SQL-99 Extensions.

    Metadata.

    Transactions.

    Commands and Behaviors.

    Executing Queries and Returning Results.

    Server Cursors.

    Errors.

    ADO.NET JDBC Programmers and the Disconnected Model.

    SQL/J Part 0 and Typed DataSets.

    Where Are We?



    10. ADO.NET and Various Types of Data.

    Evolution in Data Access.

    ADO.NET with Server- and File-Based RDBMS.

    ADO.NET with Homogeneous Hierarchical Data and ORDBMS.

    ADO.NET and Network Data: Object Graphs, OODBMS, and Network DBMS.

    ADO.NET and Structured Files, Multidimensional Data, and ORDBMS.

    ADO.NET Flat Files and Semistructured Files.

    Where Are We Going?



    Appendix A. Data Types and Type Mappings.


    Appendix B. Expression Syntax.


    Appendix C. Schema Inference Rules.


    Bibliography.


    Index.

    Preface

    This book is about a new provider-model data access library known as ADO.NET. While ADO.NET is part of Microsoft's .NET runtime initiative, this book is also about data: where we store it, and how we retrieve it, update it, and index it. This involves more than just enumerating the usage of all the classes, interfaces, properties, and methods in a library. It entails looking at the problem you are trying to solve with the library and determining which API, or portion of the API, to use.

    Although there are a lot of repeating patterns in business problems, there are no hard-and-fast rules that amount to a cookbook solution, especially with respect to data access, presentation, and exchange. Today's lightning-fast solution, programmed in native assembly language for the ultimate in speed, may be a maintenance nightmare tomorrow. Patterns that work perfectly with a single user may fail with a group of users. Patterns for a small workgroup may fail when applied to a large worldwide presence spanning countries and time zones. Cut-and-paste solutions will not help you improve your work with ADO.NET and should not be expected from this book, although the book contains some solutions for more generic problems.

    Generic vs Specific Programming Model

    Until relatively recently, I'd always programmed database access using the most vendor-specific, down-to-the-metal programming interface available, on the (correct) premise that it was always fastest. Throughout my career, however, I'd often made my living doing application conversions when the hardware type or database changed; in a given long-lived company, these sometimes seem to change as quickly as clothing fashions. I became convinced of the usefulness of an abstract, provider-style interface when I was asked to convert a company's web site from a vendor-specific database API to a more abstract one, ODBC. The reason was that the actual database choice depended on the direction of other parts of the company and was outside our control. My project lead's choice of ODBC over a native API was vindicated just as I put the finishing touches on our new application and generated the database. The project was on schedule, and the project lead trundled off to a Friday afternoon meeting. Monday morning, I was informed that the choice of database had completely changed. Because we had used ODBC (and because both databases had ODBC drivers), we needed only to generate the new database, make small changes to the programs to account for subtle nuances in SQL vocabulary, and retest. We were lucky.

    A provider-based data access API must always balance the problems of abstracting generic functionality and allowing vendors enough room that the API doesn't become an excuse for lowest-common-denominator coding. ADO.NET seems to do a good job in this area; the SQL Server-specific data provider (known as the SqlClient data provider) is proof of the room afforded to vendor-specific extensions. The real proof will come with future data providers.

    Cyclical Technologies and Innovation

    I've been doing information processing with computers for a living for a long time. When I interviewed at DevelopMentor, I was asked, "As you get older, does it get more difficult to acquire new skills in the constantly changing world of programming languages, design paradigms, and application programming interfaces?" I laughed and responded that it actually gets easier. After exposure to multiple generations of these "new ideas" in computing, the realization eventually sets in that the underlying reality being abstracted doesn't change, and the abstractions themselves are usually only subtly different. The abstractions are also sometimes cyclical; today's brand-new idea may be a throwback to something we've encountered before.

    As an example, I have always been intrigued by the problem of data access and the structure of database management system products. Having lived through using files as "databases" (or at least data storage), hierarchical databases, CODASYL databases, relational databases, object databases, multidimensional databases, and variations somewhere in between, I was intrigued when introduced to XML as a data model. I was impressed when I investigated its data abstraction model, in-memory object model, rich data manipulation tools, and query languages, but I got the feeling I'd been here (at least somewhat) before. Then, thumbing through an old book comparing relational and hierarchical databases, I was struck by the similarity of the hierarchical navigation functions in IMS/DB (IBM's flagship database until the mid-1980s) to the hierarchical navigation functions used in the XML DOM APIs. Later, I was able to correlate the (old) idea of letting the underlying SQL engine do the navigation with the possibility of using the same concept in the XPath query language: speed improvements then come from future query-engine improvements, without rewriting your navigation-based program.

    As another example, one of the latest technologies centers on business-to-business data exchange between unlike systems using a known data format (XML). This pattern is similar to a system I'd worked on to import and export banking transactions using the well-known Automated Clearing House (ACH) format, which likewise used a known data format and telecommunication protocol to exchange data between unlike systems.

    Being able to grab on to something familiar in a new technology is always a help. I'd found, however, that I'd irritated co-workers by overusing the phrase "it's just like..." when speaking about a new technology. The phrase was irritating because it implied there was nothing new in the technology, no improvements. On the contrary, there are usually improvements, based on new problems to solve and novel ways of looking at an existing problem. What "it's just like..." meant to me was that I had a handle into the new world from the old; from there I could go on to consider just what the improvements were. I noticed that when I mentioned the improvements in new technologies, folks were a lot less irritated.

    One of my major assignments at DevelopMentor was to develop and teach a course on OLE DB. In my travels, students would ask, "What makes this different from ODBC?" or "Why should I switch to this from JDBC?" What do I gain, what do I lose, and what stays the same? Having answered these questions many times, and having weighed such changes across many different technologies, I find myself in a good position to assess the repercussions of the new ADO.NET interface.

    But Things Have Changed

    When giving talks on data access and presentation strategies, I used to joke that a web browser was "just a multi-color terminal that could sing and dance." Well, maybe sing. But, at the risk of sounding clichéd, accessing applications through a web browser has drastically changed the way we design our data access strategies. Here's how:

    1. User volumes can vary wildly and without much notice. In the days of terminals, user volumes were fairly predictable, or at least quantifiable; growth was constrained by the ability to add terminals and users to the private network. Users also had to fill out forms to gain access to corporate applications, which made it possible to control growth. In the age of internet business, the number of users of an application is constrained only by the popularity of your product. This is one of the major selling points of the internet as a vehicle of commerce: businesses want to be able to start with a small server and scale up or out to a large user base on limited notice. This changes the way we design and write applications.
    2. Users do not sign off of applications, and they are sometimes physically unidentifiable. The presence of hyperlinks in internet applications makes it possible to branch immediately from your favorite business application to checking on your favorite stock or sports team. If that sports team's website contains something new and interesting, you may never return to the application. This makes it impossible to incorporate signoff logic into applications. Users also cannot be identified by "terminal ID," because dynamic address assignment protocols like DHCP, along with proxy server software, obfuscate the end user's address. Combined with the fact that users do not sign off, this makes it impossible to set aside a chunk of storage per user based on the location of a computing device. Another big change.
    3. Users do a lot of their own data entry. Pessimistic concurrency and record locking in older systems were based on the premise that data entry was done by an array of terminal operators. It was just as likely as not that two operators could be accessing data tied to your account at the same time, and if your data was "busy," these operators could put your slip of paper at the bottom of the pile and key it in later. Web applications make everyone their own data entry clerk. The fact that there probably won't be "two of you" updating your personal and financial information at the same time makes optimistic concurrency more viable. The fact that users do their own data entry also means the data editing techniques and business rules must be more robust, because non-professional operators "key in the darndest things".
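    The optimistic concurrency described in point 3 can be sketched as a parameterized UPDATE whose WHERE clause compares the originally read values; zero rows affected signals that another user changed the row in the meantime. This is a sketch only; the connection string, table, and values (from the SQL Server "pubs" sample) are illustrative.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class OptimisticUpdate
{
    static void Main()
    {
        string connStr = "server=(local);database=pubs;integrated security=SSPI"; // illustrative
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            // No lock is held while the user edits; the WHERE clause verifies
            // that the row still holds the value originally read.
            SqlCommand upd = new SqlCommand(
                "UPDATE authors SET phone = @newPhone " +
                "WHERE au_id = @id AND phone = @origPhone", conn);
            upd.Parameters.Add("@newPhone", SqlDbType.VarChar, 12).Value = "408 555-0123";
            upd.Parameters.Add("@id", SqlDbType.VarChar, 11).Value = "172-32-1176";
            upd.Parameters.Add("@origPhone", SqlDbType.VarChar, 12).Value = "408 496-7223";

            if (upd.ExecuteNonQuery() == 0)
                Console.WriteLine("Conflict: another user changed this row.");
        }
    }
}
```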

    The ADO.NET APIs use data access patterns tuned to the way data access and input are done today. The techniques of connection pooling, disconnected updates, and optimistic concurrency play a large part in shaping this functionality to today's problems, even when some of yesterday's technologies are involved. Here is a brief summary of the chapters of the book.

    Chapter 1 - Data Models, Applications and APIs - A data access API has to strike a balance between being skewed toward a single data storage style or database and being so all-encompassing that individual classes and methods are too abstract to be meaningful. This chapter starts by enumerating the functionality in ADO.NET that is truly novel and describing the data storage styles and application types with which it is most useful.

    Chapter 2 - ADO.NET Basics - This chapter divides the landscape into three parts: data providers, the ADO.NET DataSet object model, and the XML data access model. We begin with a simple program illustrating each part. I include the XML API portions, along with a discussion of XML and ADO.NET integration in Chapter 7, because I consider the XML stack an important part of the overall data access stack in the .NET Framework.

    Chapter 3 - The Connected Model - Most data access APIs are centered on a provider model that abstracts away the differences between slightly different concrete implementations. This chapter investigates ADO.NET's variation on the familiar connection, command, and results model used to access relational data stores. Application-style-specific features, such as connection pooling and automatic distributed transactions, conclude the chapter.
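    Local transactions, one of the connected-model features the chapter covers, follow a begin/commit/rollback shape in which each command must be explicitly enlisted. A minimal sketch, with an illustrative connection string and statement:

```csharp
using System.Data.SqlClient;

class TransactionSketch
{
    static void Main()
    {
        string connStr = "server=(local);database=pubs;integrated security=SSPI"; // illustrative
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            SqlCommand cmd = conn.CreateCommand();
            cmd.Transaction = tx;   // commands must be enlisted in the transaction explicitly
            try
            {
                cmd.CommandText = "UPDATE titles SET price = price * 1.1";
                cmd.ExecuteNonQuery();
                tx.Commit();        // make the change permanent
            }
            catch
            {
                tx.Rollback();      // undo everything on any failure
                throw;
            }
        }
    }
}
```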

    Chapter 4 - The DataSet - Most data access APIs include the concept of an in-memory data representation that can be used to populate controls on a form, and can be traversed and manipulated. In ADO.NET, this model is based around the DataSet class and its supporting collection classes that model a relational database. This chapter introduces the DataSet by examining its intricacies as a stand-alone object model.
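    As a stand-alone object model, a DataSet can be built and navigated entirely in memory, with no database in sight. A minimal sketch (the table and column names are invented for illustration):

```csharp
using System;
using System.Data;

class StandaloneDataSet
{
    static void Main()
    {
        // A DataSet built entirely in memory -- no data provider required.
        DataSet ds = new DataSet("Library");
        DataTable books = ds.Tables.Add("Books");
        books.Columns.Add("Id", typeof(int));
        books.Columns.Add("Title", typeof(string));
        books.PrimaryKey = new DataColumn[] { books.Columns["Id"] };

        books.Rows.Add(new object[] { 1, "Essential ADO.NET" });

        // Find a row by primary key, one of the navigation styles the chapter covers.
        DataRow row = books.Rows.Find(1);
        Console.WriteLine((string)row["Title"]);
    }
}
```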

    Chapter 5 - DataAdapters - While the DataSet is a convenient way to represent data in memory, it is most useful as a client-side model of data in a database. This chapter is about getting data from a data provider into a DataSet and persisting the changes made to the DataSet while "offline" back to the database. Since a database is shared among many users who will be working on the same data, this chapter explores the problems that ensue when more than one user changes the same data in offline mode, known as the optimistic concurrency conflict resolution problem. In addition, there has been a lot of debate about when to use a direct-from-the-database stream (the ADO.NET DataReader) and when to use an offline cache (the ADO.NET DataSet) with common application access patterns. This chapter presents some guidelines.
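    The fill-edit-update round trip at the heart of the chapter can be sketched as follows; here a CommandBuilder derives the INSERT, UPDATE, and DELETE commands (with optimistic-concurrency WHERE clauses) from the SELECT. The connection string and pubs tables are illustrative.

```csharp
using System.Data;
using System.Data.SqlClient;

class AdapterRoundTrip
{
    static void Main()
    {
        string connStr = "server=(local);database=pubs;integrated security=SSPI"; // illustrative
        SqlDataAdapter da = new SqlDataAdapter("SELECT au_id, phone FROM authors", connStr);
        // Generates the update logic from the SELECT statement above.
        SqlCommandBuilder cb = new SqlCommandBuilder(da);

        DataSet ds = new DataSet();
        da.Fill(ds, "Authors");

        // Edit while "offline"...
        ds.Tables["Authors"].Rows[0]["phone"] = "408 555-0199";

        // ...then push only the changed rows back to the database.
        da.Update(ds, "Authors");
    }
}
```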

    Chapter 6 - Data Binding - Presenting data to a human via a graphical program or web page means mapping the bits in the database or data cache to graphic controls such as text boxes, list boxes, and data grids. This chapter looks at ADO.NET's integration with these user interface items. It also contains a reprise of the DataReader versus DataSet debate, this time in the ASP.NET Web Forms application style.

    Chapter 7 - XML and Data Access Integration - Since most of the world's data is stored in relational databases, this chapter examines the different methods that ADO.NET contains for integrating relational data storage and XML representation and manipulation, with a specific focus on integration with SQL Server, Microsoft's flagship database. Since this is one of the main advances in ADO.NET, this is a fairly long and involved chapter.
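    One face of that integration is the DataSet's ability to round-trip its schema and contents through XML. A minimal sketch, with an illustrative connection string and the pubs sample table:

```csharp
using System.Data;
using System.Data.SqlClient;

class XmlIntegration
{
    static void Main()
    {
        string connStr = "server=(local);database=pubs;integrated security=SSPI"; // illustrative
        SqlDataAdapter da = new SqlDataAdapter("SELECT au_id, au_lname FROM authors", connStr);
        DataSet ds = new DataSet("Pubs");
        da.Fill(ds, "Authors");

        // The relational shape maps to an XML Schema and an XML document...
        ds.WriteXmlSchema("authors.xsd");
        ds.WriteXml("authors.xml", XmlWriteMode.DiffGram); // data plus row-state information

        // ...which can be read back into a fresh DataSet, changes and all.
        DataSet ds2 = new DataSet();
        ds2.ReadXml("authors.xml", XmlReadMode.DiffGram);
    }
}
```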

    Chapter 8 - Providers - Data access (and other models with multiple similar implementations that differ only in details) is abstracted through a provider model. We'll look at the mapping of the OLE DB provider model to the ADO.NET data provider model. We'll also introduce the concept of XML data providers: an abstraction of the data provider model over the XML Infoset.

    Chapter 9 - Consumers - This chapter provides a useful reference for data consumers (i.e. programmers writing data access code) that may be moving from other data access APIs, such as ADO, ODBC and JDBC. It's designed to make each group feel "at home" with ADO.NET by being able to correlate their "class X" or "function Y" to ADO.NET's classes and methods.

    Chapter 10 - We started in Chapter 1 by describing the different types of data storage and applications that use this data. The book concludes by revisiting the landscape presented in Chapter 1 in light of the knowledge gained in the rest of the book, and examining whether the data stacks (ADO.NET and XML) really do contain a solution for everyone. We'll end up with a mention of future directions in data access.

    How To Read This Book

    This book is actually divided into three parts: part manifesto and analysis, part API reference by example, and part pure reference material for programmers involved with data conversions. It is best read sequentially, but need not be completely understood on first reading. Any time you find yourself wondering "Why should I care about this?" or "I have no point of reference for this," read on; the meaning may become clear later.

    Chapters 1 and 10 are the manifesto and analysis of database types, database APIs, and application styles in broad brush strokes. Chapters 2 through 7 are the minute details of how the ADO.NET classes, interfaces, and methods work; they can be read in their entirety or browsed piecemeal, but are best read in order if you have the time. Chapters 8 and 9 are special purpose, for provider writers and for consumers of other database libraries looking for a "way in" to ADO.NET. These chapters are more reference-oriented and will not read like a story; in particular, some idiosyncrasies and analogies in the provider chapter may be a bit detailed for those who have not written OLE DB code. That chapter is for my friends and students with whom I've learned and written OLE DB providers over the past few years. They always complain that no one ever writes books for them.

    What you won't find here is hundreds of pages of class, interface, method, and property information arranged by class name. For that information, I suggest you consult the .NET Framework SDK documentation. Having spent many hours browsing the Windows platform documentation, my impression is that it matures along with the Windows platform itself. The Framework SDK documentation is probably the best yet, although the OLE DB Programmer's Reference docs were impressive.

    What Took You So Long? Observation versus Documentation

    There are a few reasons why this book was not released earlier. One was that I knew the .NET product would change during the beta process, sometimes drastically. I did not want to produce a "programming with the beta" book as my one contribution to this space, even though I am aware that such books have their place. Such a book would be obsolete or misleading mere months after publication.

    In addition, I've tried to verify anything that appeared in the documentation through direct experimentation with code. I've also tried to describe some internal behaviors of the providers and details of their implementation. This changed (sometimes drastically) during the beta process and I was forced to rewrite sections of the prose and rethink solutions that I'd come up with. I actually waited until the final product was released so I could run the code fragments and experiments one more time.

    Deducing internals through experimentation and observation is inherently risky, because the implementation of hidden details changes over time: software vendors are constantly working to update and improve their products. On the subjects with the most risk of change, I've attempted to document the experimentation process so that you can repeat the experiments in the future as the software or underlying database changes.



