
Building Intelligent .NET Applications: Data-Mining Predictions

Contents

  1. Introducing Data Mining with SQL Server
  2. Savings Mart
  3. Working with Mining Models
  4. Summary

Improving human-to-computer interaction through speech processing is just one area that can benefit from enhanced computing. On the other side of the interface is the backend, which usually ties in to a database. It is here that enhanced computing can help users get the most from their data.

Over the past ten years, there has been a dramatic increase in computer usage—and in the number of home users. Electronic commerce has resulted in the collection of vast amounts of customer and order information. In addition, most businesses have automated their processes and converted legacy data into electronic formats. Businesses large and small are now struggling with the question of what to do with all the electronic data they have collected.

Data warehousing is a multi-billion-dollar industry that involves the collection, organization, and storage of large amounts of data. Data cubes—multidimensional structures built from one or more tables in a relational database—allow data to be examined across multiple dimensions, so databases containing millions of records and hundreds of attributes can be explored interactively.

Data mining is the process of extracting meaningful information from large quantities of data. It involves uncovering patterns in the data and is often tied to data warehousing because it makes such large amounts of data usable. Data elements are grouped into distinct categories so that predictions can be made about other pieces of data. For example, a bank may wish to ascertain the characteristics that typify customers who pay back loans. Although this could be done with database queries, the bank would first have to know what customer attributes to query for. Data mining can be used to identify what those attributes are and then make predictions about future customer behavior.

Data mining is a technique that has been around for several years. Unfortunately, many of the original tools and techniques for mining data were complex and difficult for beginners to grasp. Microsoft and other software makers have responded by creating easier-to-use data-mining tools. As a 2004 article in The Economist, "The Golden Vein," states:

As the cost of storing data plummets and the power of analytic tools improves, there is little likelihood that enthusiasm for data mining, in all its forms, will diminish.

This is the first of two chapters that will examine how a fictional retailer named Savings Mart was able to utilize Microsoft's Analysis Services, included with Microsoft SQL Server, to improve operational efficiencies and reduce costs. The present chapter examines a standalone Windows program named LoadSampleData, which is used to load values into a database and generate random purchases for several of the retailer's stores. A mining model will then be created based on shipments to each store. The mining model will be the first step toward revising the way Savings Mart handles product orders and shipments.

Chapter 6 will extend the predictions made in this chapter through the use of a Windows service designed to automate mining-model processing and the application of processing results. Finally, a modified version of the LoadSampleData program will be used to verify that Savings Mart was able to successfully lower its operating costs.

The chapter also includes a Microsoft case study which examines a real company that used Analysis Services to build a data-mining solution. In the case study, a leaser of technology equipment needed to predict when clients would return their leased equipment. By using Analysis Services, it was able to quickly build a data-mining solution that helped to reduce costs and more accurately predict the value of assets.

Introducing Data Mining with SQL Server

Although SQL Server 7.0 offered Online Analytical Processing (OLAP) as OLAP Services, it was not until the release of SQL Server 2000 that data-mining algorithms were included. Analysis Services comes bundled with SQL Server as a separate install. It allows developers to build complex OLAP cubes and then utilize two popular data-mining algorithms to process data within the cubes.

Of course, it is not necessary to build OLAP cubes in order to utilize data-mining techniques. Analysis Services also allows mining models to be built against one or more tables from a relational database. This is a big departure from traditional data-mining methodologies. It means that users can access data-mining predictions without the need for OLAP services.

Data mining involves the gathering of knowledge to facilitate better decision-making. It is meant to empower organizations to learn from their experiences—or in this case their historical data—in order to form proactive and successful business strategies. It does not replace decision-makers, but instead provides them with a useful and important tool.

The introduction of data-mining algorithms with SQL Server represents an important step toward making data mining accessible to more companies. The built-in tools allow users to visually create mining models and then train those models with historical data from relational databases.

Data-Mining Algorithms

Data mining with Analysis Services is accomplished using one of two popular mining algorithms—decision trees and clustering. These algorithms are used to find meaningful patterns in a group of data and then make predictions about the data. Table 5.1 lists the key terms related to data mining with Analysis Services.

Table 5.1 Key terms related to data mining with Analysis Services.




Case

The data and relationships that represent a single object you wish to analyze. For example, a product and all its attributes, such as Product Name and Unit Price. It is not necessarily equivalent to a single row in a relational table, because attributes can span multiple related tables. The product case could include all the order detail records for a single product.

Case Set

Collection of related cases. This represents the way the data is viewed and not necessarily the data itself. One case set involving products could focus on the product, whereas another may focus on the purchase detail for the same product.


Clustering

One of two popular algorithms used by Analysis Services to mine data. Clustering involves the classification of data into distinct groups. As opposed to the other algorithm, decision trees, clustering does not require an outcome variable.


Cube

Multidimensional data structures built from one or more tables in a relational database. Cubes can be the input for a data-mining model, but with Analysis Services the input can also be one or more relational tables directly.

Decision Trees

One of two popular algorithms used by Analysis Services to mine data. The decision trees algorithm builds a tree that allows the user to map a path to a successful outcome.

Testing Dataset

A portion of historical data that can be used to validate the predictions of a trained mining model. The model will be trained using a training dataset that is representative of all historical data. By using a testing dataset, the developer can ensure that the mining model was designed correctly and can be trusted to make useful predictions.

Training Dataset

A portion of historical data that is representative of all input data. It is important that the training dataset represent input variables in a way that is proportional to occurrences in the entire dataset. In the case of Savings Mart, we would want the training dataset to include all the stores that were open during the same time period so that no bias is unintentionally introduced.
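The proportionality caveat above—every store open during the period should be represented in the training dataset—amounts to a stratified split. The following Python sketch illustrates the idea with hypothetical purchase records (the field names and 70/30 ratio are assumptions, not the book's code):

```python
import random

def stratified_split(records, key, train_fraction=0.7, seed=42):
    """Split records into training and testing sets, keeping each
    group (e.g. each store) represented in the same proportion."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec)
    rng = random.Random(seed)
    train, test = [], []
    for group in groups.values():
        rng.shuffle(group)                       # randomize within the group
        cut = int(len(group) * train_fraction)   # same fraction per group
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test

# Hypothetical purchase records for three stores, ten purchases each.
purchases = [{"store": s, "amount": i} for s in (1, 2, 3) for i in range(10)]
train, test = stratified_split(purchases, key="store")
```

Because the split is taken per store rather than over the whole dataset, no store is over- or under-represented in the training data, which is exactly the bias the definition warns about.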

Decision Trees

Decision trees are useful for predicting exact outcomes. Applying the decision trees algorithm to a training dataset results in the formation of a tree that allows the user to map a path to a successful outcome. At every node along the tree, the user answers a question (or makes a "decision"), such as "years applicant has been at current job (0–1, 1–5, > 5 years)."

The decision trees algorithm would be useful for a bank that wants to ascertain the characteristics of good customers. In this case, the predicted outcome is whether or not the applicant represents a bad credit risk. The outcome of a decision tree may be a Yes/No result (applicant is/is not a bad credit risk) or a list of numeric values, with each value assigned a probability. We will see the latter form of outcome later in this chapter.

The training dataset consists of the historical data collected from past loans. Attributes that affect credit risk might include the customer’s educational level, the number of kids the customer has, or the total household income. Each split on the tree represents a decision that influences the final predicted variable. For example, a customer who graduated from high school may be more likely to pay back the loan. The variable used in the first split is considered the most significant factor. So if educational level is in the first split, it is the factor that most influences credit risk.
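The "first split is the most significant factor" idea can be made concrete with an information-gain calculation, which is the usual way a decision-tree algorithm chooses its splits. This is an illustrative Python sketch over toy loan data (the attribute names and records are hypothetical), not the Microsoft Decision Trees algorithm itself, which also handles continuous values, pruning, and probabilistic outcomes:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_split(rows, attributes, target):
    """Return the attribute whose split yields the largest information
    gain -- the decision-tree notion of 'most significant factor'."""
    base = entropy([r[target] for r in rows])
    def gain(attr):
        remainder = 0.0
        for value in {r[attr] for r in rows}:
            subset = [r[target] for r in rows if r[attr] == value]
            remainder += len(subset) / len(rows) * entropy(subset)
        return base - remainder
    return max(attributes, key=gain)

# Hypothetical loan history in which education level dominates repayment.
loans = [
    {"education": "hs",      "kids": 0, "repaid": "no"},
    {"education": "hs",      "kids": 2, "repaid": "no"},
    {"education": "college", "kids": 0, "repaid": "yes"},
    {"education": "college", "kids": 2, "repaid": "yes"},
]
first_split = best_split(loans, ["education", "kids"], "repaid")
```

In this toy data, splitting on education level separates the repaid loans perfectly while splitting on number of kids tells us nothing, so education is chosen for the root of the tree.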


Clustering

Clustering differs from decision trees in that it groups data into meaningful clusters with no specific predicted outcome. It goes through a looped process whereby it reevaluates each cluster against all the other clusters, looking for patterns in the data. This algorithm is useful when a large database with hundreds of attributes is first evaluated: the process may uncover relationships between data items that were never suspected. In the case of the bank that wants to determine credit risk, clustering might be used to identify groups of similar customers. It could reveal that certain customer attributes are more meaningful than originally thought, and the attributes identified in this way could then be used to build a mining model with decision trees.
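The looped reevaluation described above can be sketched with k-means, one of the simplest clustering procedures: assign each point to its nearest cluster center, move each center to the mean of its cluster, and repeat. This Python sketch uses hypothetical one-dimensional customer incomes and is far simpler than the Microsoft Clustering algorithm, but it shows the reassign-and-recompute loop:

```python
import random

def kmeans(points, k, iterations=10, seed=0):
    """Repeatedly assign each point to its nearest center, then move
    each center to the mean of its cluster -- the looped reevaluation
    a clustering algorithm performs."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # arbitrary starting centers
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical customer incomes (in thousands): two natural groups.
incomes = [18, 20, 22, 80, 85, 90]
centers = kmeans(incomes, k=2)
```

With two well-separated income groups, the loop settles on one center per group—the kind of unsuspected grouping that could then seed a decision-trees model.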

OLE DB for Data-Mining Specification

Analysis Services is based on the OLE DB for Data Mining (OLE DB for DM) specification. OLE DB for DM, an extension of OLE DB, was developed by the Data Mining Group at Microsoft Research. It includes an Application Programming Interface (API) that exposes data-mining functionality. This allows third-party providers to implement their own data-mining algorithms. These algorithms can then be made available through the Analysis Services Manager application when building new mining models.
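The specification exposes mining functionality through a SQL-like language (later known as DMX). The fragment below gives a feel for that syntax; the model name, column names, and source table are hypothetical, and exact syntax varies by provider:

```sql
-- Define a mining model over relational columns (names are hypothetical).
CREATE MINING MODEL [CreditRisk]
(
    [Customer Id]      LONG  KEY,
    [Education Level]  TEXT  DISCRETE,
    [Risk Level]       TEXT  DISCRETE PREDICT
)
USING Microsoft_Decision_Trees

-- After training, query the model for predictions on new applicants.
SELECT t.[Customer Id], [CreditRisk].[Risk Level]
FROM [CreditRisk]
PREDICTION JOIN (SELECT * FROM NewApplicants) AS t
ON [CreditRisk].[Education Level] = t.[Education Level]
```

Because the language mirrors SQL, developers can define, train, and query mining models with familiar statement forms rather than a specialized statistical toolkit.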
