
SQL Server Reference Guide


Migrating Departmental Data Stores to SQL Server: Attach the Front End, Test, and Monitor

Last updated Mar 28, 2003.

This is the last article in a series describing a formal process you can follow to migrate data stored in “departmental data stores” (such as Excel spreadsheets, text files, XML documents and so on) into a Relational Database Management System (RDBMS) like SQL Server. The first article in this series is here, and you can click the “Next” link at the bottom of each subsequent article to work your way to this one.

In the first article, I explained what these data stores are, what they mean to your organization, and when they should be considered for migration. Some data doesn’t need to be stored in an RDBMS, and other data does. I also explained a few methods you can use to locate that data. That’s the first step.

In the second installment I explained how to take that data and model it, teasing the requirements out of the discovery you’ve done so that everyone agrees on the data’s final format. I wasn’t able to complete that entire step there, so I finished it in the article that followed.

In the third article I covered the Business Requirements document in more depth and explained how to normalize the model into tables and columns. After that I cleaned up the model and explained how I picked the location for the data.

In the last two tutorials I explained how to design your Extract, Transform and Load (ETL) process, and what your considerations are for data type conversions and so on. The last tutorial covered the actual implementation of that process, using multiple approaches.

Here’s the outline of the process and where we are so far:

  1. Locate the data
  2. Model the system
  3. Decide on the Destination
  4. Design the ETL and Migrate the Data
  5. Attach the front-ends
  6. Test and Monitor

In this article I’ll explain the final part for my project, and I’ll give you other options for your own. It’s also not atypical to see multiple approaches used, based on what you need to do.

Front-End Options

There are multiple ways to access data from SQL Server, but the first thing you need to decide is whether your users need to just read the data or to enter, edit and delete it. If all they need to do is see the data, your options are very simple. You can use everything from a simple web page with a bit of script to Reporting Services. Both are essentially no-cost options — just a little time invested in the process. In fact, if all you need is to show the users the data, they can look at it right in Excel using the “External Data” option.
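As a sketch of that read-only approach (the view, table and role names here are my own, not from the project), you can expose just the columns the users need through a view and grant SELECT on it to a reporting role; Excel’s External Data option or a Reporting Services data source can then point at that view:

-- Hypothetical object names; substitute your own schema.
CREATE VIEW dbo.PartsForReporting
AS
SELECT PartNumber, PartDescription, QuantityOnHand
FROM dbo.InventoryTable;
GO

-- Read-only consumers such as Excel or Reporting Services connect under this role.
CREATE ROLE ReportingReaders;
GRANT SELECT ON dbo.PartsForReporting TO ReportingReaders;
GO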

But things are rarely that simple. Recall that my example started many tutorials ago with the users importing some data from another system and then adding their own entries to it. For that, Microsoft Office Excel really isn’t a good fit. Yes, you could code Excel to work that way, but it takes a lot of effort and still doesn’t handle multi-table inserts as well as it should.

So you’re left with a few other choices — but don’t worry. Programming database applications is one of the most documented forms of development. I’ll cover the options you have, along with the pros and cons of each form, for creating applications that read, write and update data.

Web Applications

Years ago it was fairly difficult to develop good applications for a web browser that could easily edit data in a database. That’s no longer the case; in fact, for many shops web applications are now the default choice.

Whether you use PHP, ASP, ASP.NET or any of the other programming acronyms, you can find a free or low-cost way to write a web page that accesses a database. I won’t cover those code types in this article, since InformIT has books, articles, guides and tutorials that already do that.

The basic layout for a web application involves the code for the browser to interpret, the web server with either optional or built-in extensions to handle the calls to the database, and the database server itself. I’ve seen all three on the same server (for testing, normally) or more often spread out onto multiple systems.

I’ll point out some of the references we have on InformIT for this kind of programming at the end of this tutorial.

Desktop Applications

The next most popular method for letting your users work with the data is to code up a desktop or “fat” client application that you compile into an executable program. Once again there is plenty of help right here on InformIT for that, and I’ll point to those resources at the end.

Don’t be concerned with cost in this area, either. You can download the Express editions of Visual Studio for free, and you can even use them to write commercial programs.

If you’re after the fastest, easiest way to write code against a database, this is it. You simply write the code, compile it and hit the database. Of course, it’s more complicated than that, but the simplicity of dragging a grid to the screen and pointing it to a stored procedure or table is really hard to beat.

Middle Tier

At the more complicated end of the spectrum is writing either a web-based or desktop-based “front end” application that communicates with another process (called a “middle tier”) that in turn connects to the database. The advantages here are that the database code and business logic can be separated out from the graphical front-end, or what is called the “presentation layer.”

Once again, I’ll provide some links for middle-tier programming at the end of this article.

Database Decisions

While I haven’t covered the specifics on the front-end code for your project, I do have some definite opinions on how you should set up the database. In addition to the design work you’ve done on the database tables and keys, I always advise that you consider using stored procedures and/or functions along with views to control the access to the system.

The first advantage of a stored procedure, function or view is that it abstracts the database structure away from the application. You can ensure that the same stored procedure always works for the application even if you need to add or remove a column or table. In the case of a stored procedure or function (but not necessarily a view) you can also see performance gains compared to sending direct Transact-SQL code from the application.

Don’t misunderstand — there’s no such thing as “always” in coding. There are exceptions to just about all “rules” that someone (including me) comes up with.
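To make the abstraction point concrete, here’s a minimal sketch (the procedure, table and column names are hypothetical, not from my project). A single stored procedure can hide the fact that a new entry actually lands in two tables; if those tables change later, only the procedure changes, not the application:

CREATE PROCEDURE dbo.AddPartEntry
    @PartNumber      varchar(20),
    @PartDescription varchar(100),
    @EnteredBy       varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        -- The front end never needs to know there are two tables behind this call.
        INSERT INTO dbo.InventoryTable (PartNumber, PartDescription)
        VALUES (@PartNumber, @PartDescription);

        INSERT INTO dbo.EntryAudit (PartNumber, EnteredBy, EnteredOn)
        VALUES (@PartNumber, @EnteredBy, GETDATE());
    COMMIT TRANSACTION;
END;
GO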

Another very important advantage to using stored procedures and functions is that they can help protect you from SQL Injection attacks. SQL Injection isn’t a SQL Server (or any database) issue; it’s a coding issue. Let’s take an example.

Say that you have a box on the screen where the user enters a part number: 1234.

Now assume that the code that runs when the user clicks looks something like this:

SELECT PartDescription
FROM InventoryTable
WHERE PartNumber = '<ENTERED VALUE FROM WEB PAGE>';

An enterprising user might type this in the box:

1234'; SELECT * FROM InventoryTable; --

Which would translate in your code to this:

SELECT PartDescription
FROM InventoryTable
WHERE PartNumber = '1234'; SELECT * FROM InventoryTable; --';

And they would be able to see data they shouldn’t. A stored procedure, however, takes the entry as a parameter, so that extra “tick” mark is treated as part of the value rather than as code, and you can add logic that rejects suspicious input outright. Of course, you could do that checking in the application code as well, but this protects the database even when the developer forgets.
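Here’s a minimal sketch of that protection for the part-number lookup above (the procedure name is my own, not from the project). Because the entry arrives as a typed parameter, the quote mark and the injected SELECT are treated as part of the search value, not as code to run:

CREATE PROCEDURE dbo.GetPartDescription
    @PartNumber varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    -- @PartNumber is always a single value here; an entry like
    -- 1234'; SELECT * FROM InventoryTable; -- simply fails to match a part.
    SELECT PartDescription
    FROM InventoryTable
    WHERE PartNumber = @PartNumber;
END;
GO

-- The application then calls the procedure with the entry as a parameter:
EXEC dbo.GetPartDescription @PartNumber = '1234';
GO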

Test and Monitor

Once everything is hooked up, make sure you create and run your tests with the users. It isn’t just a matter of ensuring that the data is correct and handled the way you expect — you should also consider writing some code that sends a lot of data to the test server and then deletes it, to verify performance and validate the design.
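Here’s a sketch of that kind of load test, reusing the InventoryTable example from earlier (the column list is assumed): push in a clearly marked batch of throwaway rows, run the application tests against them, then clean them up.

-- Load a batch of recognizable test rows.
DECLARE @i int;
SET @i = 1;
WHILE @i <= 10000
BEGIN
    INSERT INTO InventoryTable (PartNumber, PartDescription)
    VALUES ('TEST-' + CAST(@i AS varchar(10)), 'Load test row');
    SET @i = @i + 1;
END;
GO

-- ...run the tests, then remove only the test rows.
DELETE FROM InventoryTable
WHERE PartNumber LIKE 'TEST-%';
GO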

And now is the perfect time to take a performance tuning baseline, since no one is on the system yet. That baseline gives you a good reference point to check against periodically. You can read more about how to do that here.
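One lightweight way to capture that baseline (assuming SQL Server 2005 or later, where these dynamic management views exist) is to snapshot a few cumulative counters into tables while the system is quiet, and compare against them later:

-- Snapshot cumulative wait statistics for later comparison.
SELECT wait_type, waiting_tasks_count, wait_time_ms
INTO dbo.BaselineWaitStats
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;
GO

-- Snapshot file-level I/O numbers for each database file.
SELECT DB_NAME(database_id) AS DatabaseName,
       file_id, num_of_reads, num_of_writes, io_stall
INTO dbo.BaselineFileStats
FROM sys.dm_io_virtual_file_stats(NULL, NULL);
GO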

By the way — I hooked up an application for my users when I migrated the data. You might be surprised by my choice — I used a Microsoft Access “Database Project.” That’s right — I used a department-level application to solve a department-level application problem! Here’s my logic:

My users already have Access, and they are comfortable with making reports in it. I was able to code up an application in just a few minutes that made calls to the proper views, functions and stored procedures, and there are no Access database files involved — it’s just a very rich graphical interface. There are best practices for this approach, so make sure to do a web search on that topic.

InformIT Articles and Sample Chapters

Here is a great article on Web programming: ASP.NET 2.0: Is It Really This Easy?

One on general programming: The Essence of LINQ

And another on Middle-Tier programming: Middle-Tier Patterns in .NET

Books and eBooks

Here is a great book on Web programming: AJAX, Rich Internet Applications, and Web Development for Programmers

One on general programming: LINQ Unleashed: for C#

And another on Middle-Tier programming: .NET Patterns: Architecture, Design, and Process

Online Resources

You could even use Access to create a web page — not recommended, but interesting.

Free programming tools from Microsoft

Middle-Tier programming