Table of Contents
- Microsoft SQL Server Defined
- Microsoft SQL Server Features
- Microsoft SQL Server Administration
- The DBA Survival Guide: The 10 Minute SQL Server Overview
- Preparing (or Tuning) a Windows System for SQL Server, Part 1
- Preparing (or Tuning) a Windows System for SQL Server, Part 2
- Installing SQL Server
- Upgrading SQL Server
- SQL Server 2000 Management Tools
- SQL Server 2005 Management Tools
- SQL Server 2008 Management Tools
- SQL Azure Tools
- Automating Tasks with SQL Server Agent
- Run Operating System Commands in SQL Agent using PowerShell
- Automating Tasks Without SQL Server Agent
- Storage – SQL Server I/O
- Service Packs, Hotfixes and Cumulative Upgrades
- Tracking SQL Server Information with Error and Event Logs
- Change Management
- SQL Server Metadata, Part One
- SQL Server Meta-Data, Part Two
- Monitoring - SQL Server 2005 Dynamic Views and Functions
- Monitoring - Performance Monitor
- Unattended Performance Monitoring for SQL Server
- Monitoring - User-Defined Performance Counters
- Monitoring: SQL Server Activity Monitor
- SQL Server Instances
- DBCC Commands
- SQL Server and Mail
- Database Maintenance Checklist
- The Maintenance Wizard: SQL Server 2000 and Earlier
- The Maintenance Wizard: SQL Server 2005 (SP2) and Later
- The Web Assistant Wizard
- Creating Web Pages from SQL Server
- SQL Server Security
- Securing the SQL Server Platform, Part 1
- Securing the SQL Server Platform, Part 2
- SQL Server Security: Users and other Principals
- SQL Server Security – Roles
- SQL Server Security: Objects (Securables)
- Security: Using the Command Line
- SQL Server Security - Encrypting Connections
- SQL Server Security: Encrypting Data
- SQL Server Security Audit
- High Availability - SQL Server Clustering
- SQL Server Configuration, Part 1
- SQL Server Configuration, Part 2
- Database Configuration Options
- 32- vs 64-bit Computing for SQL Server
- SQL Server and Memory
- Performance Tuning: Introduction to Indexes
- Statistical Indexes
- Backup and Recovery
- Backup and Recovery Examples, Part One
- Backup and Recovery Examples, Part Two: Transferring Databases to Another System (Even Without Backups)
- SQL Profiler - Reverse Engineering An Application
- SQL Trace
- SQL Server Alerts
- Files and Filegroups
- Full-Text Indexes
- Read-Only Data
- SQL Server Locks
- Monitoring Locking and Deadlocking
- Controlling Locks in SQL Server
- SQL Server Policy-Based Management, Part One
- SQL Server Policy-Based Management, Part Two
- SQL Server Policy-Based Management, Part Three
- Microsoft SQL Server Programming
- Performance Tuning
- Practical Applications
- Professional Development
- Application Architecture Assessments
- Business Intelligence
- Tips and Troubleshooting
- Additional Resources
Last updated Mar 28, 2003.
All technical professionals are concerned with performance, usability, maintenance and so on, but among technical professionals DBAs are some of the most paranoid, or at least they should be. We are entrusted with the data that runs a company or organization. The other team members in that company or organization might not even be aware of how risky it is to store data on a computer system, but the DBA knows (or should know) how risky it is. With paper records there is always the danger of physical calamity, such as a fire or flood. But since the data is static and exists in the physical dimension, it is difficult to corrupt: you can almost always read the data directly, as long as the language or code it was written in is available. If the paper record is dated and kept, then you can almost always reconstruct the values from "pounds" to "euros," for instance.
This is not true for computer records. Not only do they face the same physical dangers of paper records, but they face even more danger of losing another key part of the data — its state. For instance, assume you are looking at a purchase order record on a computerized system. You can read the data on the screen, and even trace that data path from the application to the database source record. But without someone taking extra measures, you have no way of knowing if the record you are looking at is the same as what was originally entered.
So along with protecting the data from direct loss, such as taking backups and securing them off-site, you have the added concern of protecting the state of the data. This is called "Change Management." It is often paired with Disaster Recovery, which I've explained here.
Change management is quite simply the formal process for tracking changes to your databases and servers. It may seem like common sense, but it’s important to know if someone changes anything in a database, whether that is the constructs of the data (such as a table or column layout) or the data itself.
In addition to the data constructs and the data contents, you also need to be concerned about the data environment. For instance, the service pack level of the SQL Server system is as important to know as the data type on a column. Each version of the software, from major point-release (like 7 to 2000) to the service pack can influence how the data is displayed and accessed. Above that, the version of the operating system of the server is important to know, as is the hardware that the system runs on. This can be vital when the entire system is wiped out and all you have are your off-site backups. If these backup files are taken with a third-party application using an older tape drive, you could be in for a difficult time in restoring them if you don't know their state, and sometimes even the history of the state change.
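Capturing that environment can be as simple as a query. On SQL Server 2000 and later, recording the output of the following along with each change script is a cheap way to document the version, service pack level, and edition at the time of the change:

```sql
-- Record the environment along with every change script.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 8.00.2039
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- e.g. RTM, SP4
    SERVERPROPERTY('Edition')        AS Edition;

-- @@VERSION also includes operating system details.
SELECT @@VERSION AS FullVersionString;
```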
Change management may not seem important if you are the DBA of a vendor-provided database where the only changes are the ones the users make through the application. Unless your application is pretty limited and your community is pretty small, you don't normally get this luxury. Of course, if you or your company writes code against a database, then you're well aware of the need for managing change as the code moves from the development environment to the production server.
In this tutorial I'll explain some of these factors, and in others on this site I'll explain more details about how you can track each of them using strategies such as auditing and scripting for change management.
Production and Development Environments
There are two kinds of environments where databases live. The first is the production environment, and the second is the development environment. A production environment is one in which you have a SQL Server back-end to a commercially developed product, such as Siebel or PeopleSoft. A development environment exists at vendor firms, or perhaps even at your company if you design custom applications against SQL Server.
If you’re in a production environment there is still a need for change management. Even if you don’t change objects in the vendor’s database yourself, you might be asked to implement patches or upgrades. This can often be a daunting task, since as of this writing there are tens of thousands of objects in a normal PeopleSoft implementation, and even more in an SAP instance.
Normally vendor companies shy away from allowing you to directly edit their databases, but there are custom forms and views that you may have created to assist your users with functionality that isn’t available "out of the box." These changes are vital to track, because an upgrade could trash your precious work or may no longer function in a patched environment. At that point it’s always nice to have scripts or documentation so that you know what happened and can correct any issues.
Development environments absolutely require change management. Normally code moves from a development server or database to a testing server, and then on to the production or release server, with each server less volatile than the one before it. There are normally two types of controls placed on this code progression. In some shops, the development server has little or no control over how the developers create or change database objects. In others, only the DBAs are allowed to create or change objects in a database on any of the servers. Each approach has its advantages and drawbacks.
Whichever environment you’re in, or whatever level of control your shop selects, if you’re responsible for the integrity of the servers, you need to set up a process to manage the change.
Implementing Change Management
There are several formal steps involved in tracking the changes in your database. I’ll go through those here. There are two main methods for implementing change management: manual and automatic.
Automated Methods of Change Management
If you work in a large shop or if you’re in a development environment, you may want to investigate commercial packages that track your database changes automatically. Some of these packages work by creating a "bug tracking" process, in which SQL scripts are sent to the DBA when an object changes. This process is normally controlled via tracking numbers.
The other type of automated change management monitors the database, and creates script commands (T-SQL or proprietary) to re-create or back out the changes. These changes are automatically applied to another database as you specify.
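On SQL Server 2005 and later you can sketch a home-grown version of this kind of monitoring with a database-level DDL trigger. This is only a minimal example (the table and trigger names here are my own, and commercial tools do far more, such as generating back-out scripts):

```sql
-- Log table for schema changes (names are illustrative).
CREATE TABLE dbo.DDLChangeLog (
    EventTime DATETIME NOT NULL DEFAULT GETDATE(),
    LoginName SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
    EventInfo XML      NOT NULL
);
GO

-- Fires on CREATE/ALTER/DROP statements in this database.
CREATE TRIGGER trgTrackDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.DDLChangeLog (EventInfo)
    VALUES (EVENTDATA()); -- the full T-SQL text is inside this XML
GO
```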
These automated systems are great, but there are a couple of downsides. The first is cost: these packages aren’t cheap. The more significant downside is that you might have to change your development process to match what the software expects. That may not be a bad thing, but making development managers change their process can be... well, hard.
Manual Methods of Change Management
The other type of change management is normally used in a shop where the money is tight, or the changes aren’t frequent enough to justify the cost and trouble of installing another piece of software. These are normally implemented with a combination of spreadsheets and script directories. In this method there are two tracking types — Pre and Post change management.
In pre–change management, you track changes as they occur. Each and every change is captured by you, scripted to a file, and then applied to the proper server. If there are groups of changes, they are stored as sequential scripts in a larger text file that is heavily commented as to the effect of the change. This type can be difficult to manage, since you must be aware of the dependencies on each change. For instance, let’s assume that you need to change a primary key on a table. The script for this one change would need to have the drops of the foreign keys from all child tables, the alteration of the primary key (which might include a drop), and then re-creation of applicable foreign keys. There are times when you just can’t get there from here...you need to make some fairly hefty changes just to alter a table or field.
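As a sketch of the primary-key example above, a pre-change script with its dependencies handled in order might look like this; the Orders/OrderDetail table and constraint names are hypothetical:

```sql
-- 1. Drop the foreign key on the child (downstream) table first.
ALTER TABLE dbo.OrderDetail DROP CONSTRAINT FK_OrderDetail_Orders;

-- 2. Drop and re-create the primary key on the parent table.
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders
    PRIMARY KEY NONCLUSTERED (OrderID);

-- 3. Re-create the foreign key so the relationship is restored.
ALTER TABLE dbo.OrderDetail ADD CONSTRAINT FK_OrderDetail_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);
```

A matching back-out script would reverse these steps in the same dependency order.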
In post–change management, you create a "differences script" by comparing the database in its initial state to the current state of the database. This process is sometimes a bit easier, but lacks the granularity of the pre-change method. There are a lot of tools that do this, and I’ve even seen a script or two that claims to be able to see these changes. I’ve written VB code that uses SQL Distributed Management Objects (SQL-DMO) to create scripts of the database objects. I send those to a file, make my changes, and then run the process again. My code then compares the two text files for differences. You can also perform this process manually by using the scripting features in Enterprise Manager and an automated file-comparison script or two.
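On SQL Server 2005 and later you can also do a rough post-change comparison inside the server, if a copy of the "before" database is available. This example (the DevDB and ProdDB database names are placeholders) lists columns that exist, or are typed differently, in one database but not the other:

```sql
-- Columns present (or typed differently) in DevDB but not in ProdDB.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM DevDB.INFORMATION_SCHEMA.COLUMNS
EXCEPT
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM ProdDB.INFORMATION_SCHEMA.COLUMNS;
```

Run it in both directions to catch objects dropped as well as added.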
Implementing your Method
Let’s take a look at the process you should use to track changes on your database.
- Identify the objects and properties to track
To set up a manual process, define what it is that you need to track. If this is a patch release, perhaps you just need to identify the changes it makes. If it’s a version release, you may want to know the entire structure, whether it changes or not. This holds true for objects like tables and stored procedures as well as meta-data. Your database may contain meta-data if it stores information about the program or lookup tables. Any data that you need but don’t let users modify could be considered meta-data.
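A starting inventory of objects to track can be pulled from the system catalog. For example, on SQL Server 2005 and later (on 2000 you would query sysobjects instead):

```sql
-- User-created objects, with creation and last-modification dates.
SELECT name, type_desc, create_date, modify_date
FROM sys.objects
WHERE is_ms_shipped = 0   -- exclude system-shipped objects
ORDER BY type_desc, name;
```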
- Identify Dependencies (Impact Analysis)
Once you identify the objects to track, you need to check what they affect, both upstream and downstream. While you may only change a column on a table, it may have a foreign key against it (downstream) or it may be a foreign key to another table (upstream). In my experience, this is the hardest thing to check.
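The catalog can help here as well. On SQL Server 2005 and later, a query like this lists the foreign keys touching a given table in either direction (dbo.Orders is a placeholder; on 2000, sp_fkeys returns similar information):

```sql
-- Foreign keys where dbo.Orders is either the parent or the child.
SELECT fk.name                               AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id)      AS ChildTable,
       OBJECT_NAME(fk.referenced_object_id)  AS ParentTable
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.Orders')  -- downstream
   OR fk.parent_object_id     = OBJECT_ID('dbo.Orders'); -- upstream
```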
- Secure the Servers
Regardless of which method you use to track changes to your server, you’ll need to know how and when those changes are being made. The most difficult part of this process is database security. If there are users in the database with Data Definition Language (DDL)-level privileges, you potentially do not know what they are doing. It’s just that simple. Regardless of how well everyone gets along, there’s going to be a day when you’re not around and a change needs to be made. When that happens, if they can alter a table, they will. The potential is that you won’t know about it, and that makes the entire structure untrustworthy. I’ve had lengthy arguments about this with developers and managers, but I always get proved right in the end — and make no mistake, it’s you that will be asked why the database is in the shape it’s in. Stick to your guns on this one: you will need complete control over at least the production server.
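As a first audit step, it is worth finding out who already holds schema-changing rights in a database. On SQL Server 2005 and later, this query lists members of the roles that can alter objects:

```sql
-- Who can change objects in this database?
SELECT r.name AS RoleName, m.name AS MemberName
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r ON drm.role_principal_id   = r.principal_id
JOIN sys.database_principals AS m ON drm.member_principal_id = m.principal_id
WHERE r.name IN ('db_owner', 'db_ddladmin');
```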
- Create and Apply the Scripts
Create the scripts of your database using the pre- or post-change method. The process you choose will dictate how the scripts are created and applied. Each change should have a back-out process defined; you need to be able to return to the prior state after any change.
- Keep Current Backups and Scripting, Version the Database
It goes without saying that before any major change you need to have a backup process in place. When I’ve been the lead DBA on a team, I have also required a complete script of the database and meta-data to be placed in a version-control system at each release level. I know, it’s overkill, but I’ve used this more times than I care to think about.
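One lightweight way to version the database itself is an extended property at the database level (available since SQL Server 2000; the property name and value here are just examples):

```sql
-- Stamp the database with a schema version at each release.
EXEC sp_addextendedproperty
     @name  = N'SchemaVersion',
     @value = N'2.4.1';

-- Read the stamp back later.
SELECT value AS SchemaVersion
FROM fn_listextendedproperty(N'SchemaVersion',
     NULL, NULL, NULL, NULL, NULL, NULL);
```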
- Perform Regression Testing
The final piece of the puzzle is regression testing. Regression testing is nothing more than making sure that what used to work still does. While this sounds simple, if it’s a big application, this can take a long time. It’s tempting to skip this step — and dangerous. Once the users begin to put data in a production application, it can be tough to back it out if the upgrade or patch doesn’t work.
- Document the process
Whether it’s a manual or automatic process, and whether it’s a production or development shop, you need to keep good documentation. I call this the "hit by a bus" binder, and it should contain a discussion of the process your firm chooses. This is the toughest thing to implement, since you are normally the bottleneck yourself. Note that this item has both security and maintenance implications. You need to ensure that the binder (whether electronic or physical) is restricted to authorized viewers, and that it is not co-located with the system. I normally mail these electronic tracking systems off-site in case our building is destroyed.
I hope this tutorial has shown you the importance of change management on your servers. Even though it may seem a bit much to implement, if you put these processes into practice you’ll thank yourself later. And so will the company — if you make them aware of what you are doing. They need to know in case you are not available to implement the system restoration.
InformIT Articles and Sample Chapters
Rich Schiesser has a whole series on change management strategies here.
I don't normally plug a vendor's products, but you can read the review of two change management tools here.