In another article in this series, I give you a crash course on populating a data warehouse after it is built. The term data warehousing is rather popular these days, despite the fact that many people don't know what it stands for. Therefore, it might be prudent to step back and give you a general idea of what a data warehouse (DW) is and what it takes to build one.
Most companies have realized that collecting transactional data is useful. In fact, it is tough to find any company (besides some of the old-fashioned "Mom-and-Pop" stores) that does not record its transactions. The data collected over a number of years resides in various data sources: some in mainframes, some in proprietary systems, and some in client-server applications. In addition, each of these systems was probably built, and is being maintained, by different people.
The typical dilemma of today's IT managers is not how to collect the data, but how to use the data accumulated over the years. The answer might sound simple: put everything in one place and run reports against that database. Well, the programmer who built the mainframe system left the company 10 years ago. The consultants who were hired to build the proprietary system have since moved on to other jobs as well. Finally, you're already running reports against the client-server system you use for daily data collection, but those reports are fairly rigid: after they're printed, you can't really change or customize them. Each time you need a specific report, you have to pay a premium rate to an outside consultant, or tie up your own programmer, for a week or two. What can you do?
The goal of a data warehouse is to provide your company with an easy and quick look at its historical data. Advanced OLAP (on-line analytical processing) tools let DW users generate reports at the click of a mouse and look at the company's performance from various angles. How much data you need to examine depends on the nature of your business.
Suppose you have a manufacturing plant that produces thousands of parts per hour. The type of information you might be interested in includes the number of defects per hour or per day. Although you might want to examine the number of defective parts this year against the same number five years ago, such a ratio probably wouldn't provide the best picture of the company's performance. On the other hand, if you're in a car rental business, you might want to examine the number of customers this month against the same number six months ago. If you need to analyze the purchasing trends for customers with various demographic backgrounds, you might wish to examine data collected for a number of years. In short, if you need to make use of the data residing in some or all of your systems, you need to build a data warehouse.
Building a Data Warehouse
In general, building any data warehouse consists of the following steps:
Extracting the transactional data from the data sources into a staging area
Transforming the transactional data
Loading the transformed data into a dimensional database
Building pre-calculated summary values to speed up report generation
Building (or purchasing) a front-end reporting tool
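The steps above can be sketched as a simple extract-transform-load pipeline. The sketch below is purely illustrative; every function name is a hypothetical placeholder, not part of DTS or any other product:

```python
# Illustrative ETL pipeline skeleton; all names are hypothetical placeholders.
def extract(sources):
    """Pull raw rows from each transactional source into a staging list."""
    staged = []
    for source in sources:
        staged.extend(source)  # in practice: query mainframe, ODBC source, files...
    return staged

def transform(rows):
    """Give the staged rows a common shape (here: upper-case the region name)."""
    return [{**row, "region": row["region"].upper()} for row in rows]

def load(rows, warehouse):
    """Append the transformed rows to the dimensional store."""
    warehouse.extend(rows)

# Two tiny in-memory "source systems" standing in for real databases.
sources = [
    [{"region": "north", "units": 120}],
    [{"region": "south", "units": 95}],
]
warehouse = []
load(transform(extract(sources)), warehouse)
print(warehouse)  # two rows with upper-cased regions
```

The real work, of course, hides inside each of these three functions; the sections that follow look at them in turn.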
Extracting Transactional Data
A large part of building a DW is pulling data from various data sources and placing it in a central storage area. In fact, this can be the most difficult step to accomplish due to the reasons mentioned earlier: Most people who worked on the systems in place have moved on to other jobs. Even if they haven't left the company, you still have a lot of work to do: You need to figure out which database system to use for your staging area and how to pull data from various sources into that area.
Fortunately for many small to mid-size companies, Microsoft has come up with an excellent tool for data extraction. Data Transformation Services (DTS), which is part of Microsoft SQL Server 7.0 and 2000, allows you to import and export data from any OLE DB or ODBC-compliant database as long as you have an appropriate provider. This tool is available at no extra cost when you purchase Microsoft SQL Server. The sad reality is that you won't always have an OLE DB or ODBC-compliant data source to work with, however. If not, you're bound to make a considerable investment of time and effort in writing a custom program that transfers data from the original source into the staging database.
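When no OLE DB or ODBC provider exists, the custom transfer program can be as simple as reading the legacy system's export files into the staging database. A minimal sketch using only Python's standard library, with an in-memory SQLite database standing in for the staging area and an in-memory string standing in for a flat-file export:

```python
import csv
import io
import sqlite3

# A flat-file export from a legacy system, standing in for the real source.
legacy_export = io.StringIO("order_id,amount\n1,19.99\n2,5.00\n")

staging = sqlite3.connect(":memory:")  # stand-in for the staging database
staging.execute("CREATE TABLE stg_orders (order_id INTEGER, amount REAL)")

# Read the export and bulk-insert it into the staging table.
reader = csv.DictReader(legacy_export)
staging.executemany(
    "INSERT INTO stg_orders VALUES (:order_id, :amount)",
    list(reader),
)
staging.commit()

count, = staging.execute("SELECT COUNT(*) FROM stg_orders").fetchone()
print(count)  # 2
```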
Transforming Transactional Data
An equally important and challenging step after extracting is transforming and relating the data extracted from multiple sources. As I said earlier, your source systems were most likely built by many different IT professionals. Let's face it. Each person sees the world through their own eyes, so each solution is at least a bit different from the others. The data model of your mainframe system might be very different from the model of the client-server system.
Most companies have their data spread across a number of database management systems: MS Access, MS SQL Server, Oracle, Sybase, and so on. Many companies will also have much of their data in flat files, spreadsheets, mail systems, and other types of data stores. When building a data warehouse, you need to relate data from all of these sources and build some type of staging area that can handle data extracted from any of these source systems. After all the data is in the staging area, you have to massage it and give it a common shape. Prior to massaging the data, you need to figure out a way to relate the tables and columns of one system to the tables and columns coming from the other systems.
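Relating columns from different systems usually starts with an explicit mapping from each source's column names to the staging area's common names. A hypothetical sketch (the source systems and column names are invented for illustration):

```python
# Hypothetical source-to-staging column mappings; real names will differ.
COLUMN_MAPS = {
    "mainframe": {"CUST_NO": "customer_id", "TX_AMT": "amount"},
    "crm":       {"CustomerId": "customer_id", "SaleTotal": "amount"},
}

def to_common_shape(system, row):
    """Rename a source row's columns to the staging area's common names."""
    mapping = COLUMN_MAPS[system]
    return {mapping[col]: value for col, value in row.items() if col in mapping}

# The same logical record, arriving from two different systems:
print(to_common_shape("mainframe", {"CUST_NO": 42, "TX_AMT": 19.99}))
print(to_common_shape("crm", {"CustomerId": 42, "SaleTotal": 19.99}))
```

Both calls produce rows with the same column names, which is exactly the "common shape" the staging area needs before any further massaging.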
Creating a Dimensional Model
The third step in building a data warehouse is coming up with a dimensional model. Most modern transactional systems are built using the relational model. A relational database is highly normalized; when designing such a system, you try to get rid of repeating columns and make all columns dependent on the primary key of each table. Relational systems perform well in the On-Line Transaction Processing (OLTP) environment. On the other hand, they perform rather poorly in the reporting (and especially DW) environment, where joining multiple huge tables is simply too slow.
The relational format is not very efficient when it comes to building reports with summary and aggregate values. The dimensional approach, on the other hand, provides a way to improve query performance without affecting data integrity. However, the query performance improvement comes with a storage space penalty; a dimensional database will generally take up much more space than its relational counterpart. These days, storage space is fairly inexpensive, and most companies can afford large hard disks with minimal effort.
The dimensional model consists of fact and dimension tables. The fact tables consist of foreign keys to each dimension table, as well as measures. The measures are a factual representation of how well (or how poorly) your business is doing (for instance, the number of parts produced per hour or the number of cars rented per day). Dimensions, on the other hand, are what your business users expect in the reports: the details about the measures. For example, the time dimension tells the user that 2,000 parts were produced between 7 a.m. and 7 p.m. on a specific day; the plant dimension specifies that these parts were produced by the Northern plant.
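A minimal star schema for the parts example might look like the following. The table and column names are illustrative only, and SQLite stands in here for the dimensional database:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the dimensional database
db.executescript("""
    -- Dimension tables hold the details about the measures...
    CREATE TABLE dim_time  (time_id INTEGER PRIMARY KEY, day TEXT, shift TEXT);
    CREATE TABLE dim_plant (plant_id INTEGER PRIMARY KEY, plant_name TEXT);
    -- ...while the fact table holds foreign keys plus the measures.
    CREATE TABLE fact_production (
        time_id        INTEGER REFERENCES dim_time(time_id),
        plant_id       INTEGER REFERENCES dim_plant(plant_id),
        parts_produced INTEGER
    );
    INSERT INTO dim_time  VALUES (1, '2002-03-01', '7am-7pm');
    INSERT INTO dim_plant VALUES (1, 'Northern');
    INSERT INTO fact_production VALUES (1, 1, 2000);
""")

# A report query joins the fact table to its dimensions.
row = db.execute("""
    SELECT p.plant_name, t.shift, f.parts_produced
    FROM fact_production f
    JOIN dim_time  t ON t.time_id  = f.time_id
    JOIN dim_plant p ON p.plant_id = f.plant_id
""").fetchone()
print(row)  # ('Northern', '7am-7pm', 2000)
```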
Like any modeling exercise, dimensional modeling is not to be taken lightly. Figuring out the needed dimensions is a matter of discussing the business requirements with your users over and over again. When you first talk to the users, they have very minimal requirements: "Just give me those reports that show me how each portion of the company performs." Figuring out what "each portion of the company" means is your job as a DW architect. The company may consist of regions, each of which reports to a different vice president of operations. Each region might consist of areas, which in turn might consist of individual stores. Each store could have several departments. When the DW is complete, splitting the revenue among the regions won't be enough. That's when your users will demand more features and additional drill-down capabilities. Instead of waiting for that to happen, an architect should take proactive measures to get all the necessary requirements ahead of time.
It's also important to realize that not every field you import from each data source will fit into the dimensional model. Indeed, if you have a sequential key on a mainframe system, it won't have much meaning to your business users. Other columns might have had significance eons ago when the system was built; since then, the management might have changed its mind about the relevance of such columns. So don't worry if not all of the columns you imported end up in your dimensional model.
Loading the Data
After you've built a dimensional model, it's time to populate it with the data in the staging database. This step only sounds trivial. It might involve combining several columns together or splitting one field into several columns. You might have to perform several lookups before calculating certain values for your dimensional model.
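A field split combined with a lookup might look like the following. The name format, the plant codes, and the lookup table are all assumed purely for illustration:

```python
# Hypothetical lookup built from a dimension table: plant code -> surrogate key.
PLANT_KEYS = {"N": 1, "S": 2}

def transform_row(staged):
    """Split a combined 'full_name' field and look up the plant surrogate key."""
    # Assumes the staged field is formatted "First Last".
    first, last = staged["full_name"].split(" ", 1)
    return {
        "first_name": first,
        "last_name": last,
        "plant_id": PLANT_KEYS[staged["plant_code"]],
    }

print(transform_row({"full_name": "Jane Doe", "plant_code": "N"}))
# {'first_name': 'Jane', 'last_name': 'Doe', 'plant_id': 1}
```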
Keep in mind that such data transformations can be performed at either of two stages: while extracting the data from its origins or while loading the data into the dimensional model. I wouldn't recommend one way over the other; make the decision depending on the project. If your users need to be sure that they can extract all the data first, wait until all data is extracted prior to transforming it. If the dimensions are known prior to extraction, go ahead and transform the data while extracting it.
Generating Precalculated Summary Values
The next step is generating the precalculated summary values, which are commonly referred to as aggregations. This step has been tremendously simplified by SQL Server Analysis Services (or OLAP Services, as it is referred to in SQL Server 7.0). After you have populated your dimensional database, SQL Server Analysis Services does all the aggregate generation work for you. However, remember that depending on the number of dimensions you have in your DW, building aggregations can take a long time. As a rule of thumb, the more dimensions you have, the more time it'll take to build aggregations. However, the size of each dimension also plays a significant role.
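Conceptually, an aggregation is just a precomputed summary stored for reuse, so reports don't have to rescan the fact table. A toy version of what an OLAP engine materializes, using sample fact rows invented for illustration:

```python
from collections import defaultdict

# Fact rows: (plant, day, parts_produced) -- sample data for illustration.
facts = [
    ("Northern", "2002-03-01", 2000),
    ("Northern", "2002-03-02", 1800),
    ("Southern", "2002-03-01", 1500),
]

# Precompute a summary at the plant level; a report that asks for parts
# per plant now reads this small table instead of scanning every fact row.
parts_by_plant = defaultdict(int)
for plant, _day, parts in facts:
    parts_by_plant[plant] += parts

print(dict(parts_by_plant))  # {'Northern': 3800, 'Southern': 1500}
```

A real engine does this for many combinations of dimension levels at once, which is why the number and size of dimensions drive the build time.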
Prior to generating aggregations, you need to make an important choice about which storage model to use: ROLAP (Relational OLAP), MOLAP (Multidimensional OLAP), or HOLAP (Hybrid OLAP). The ROLAP model keeps the data in relational format and builds additional relational tables to store the aggregates, which takes much more storage space than the multidimensional alternatives, so be careful! The MOLAP model stores the aggregations as well as the data in multidimensional format, which is far more efficient than ROLAP. The HOLAP approach keeps the detail data in relational format but builds the aggregations in multidimensional format, so it's a combination of ROLAP and MOLAP.
Regardless of which dimensional model you choose, ensure that SQL Server has as much memory as possible. Building aggregations is a memory-intensive operation, and the more memory you provide, the less time it will take to build aggregate values.
Building (or Purchasing) a Front-End Reporting Tool
After you've built the dimensional database and the aggregations, you can decide how sophisticated your reporting tools need to be. If you just need drill-down capabilities, and your users have Microsoft Office 2000 on their desktops, the Pivot Table Service of Microsoft Excel 2000 will do the job. If the reporting needs are more than what Excel can offer, you'll have to investigate the alternative of building or purchasing a reporting tool. The cost of building a custom reporting (and OLAP) tool will usually outweigh the purchase price of a third-party tool. That is not to say that OLAP tools are cheap (not in the least!).
There are several major vendors on the market that have top-notch analytical tools. In addition to the third-party tools, Microsoft has just released its own tool, Data Analyzer, which can be a cost-effective alternative. Consider purchasing one of these suites before delving into the process of developing your own software because reinventing the wheel is not always beneficial or affordable. Building OLAP tools is not a trivial exercise by any means.
In this article I gave you an overview of what a data warehouse is and what it takes to build one. In my other article in this series on data warehousing, I discuss gathering data and populating the staging area in greater detail.