
SQL Server Reference Guide


Controlling Locks in SQL Server

Last updated Mar 28, 2003.

As I’ve described in other overviews and tutorials, SQL Server handles locking, blocking and deadlocking automatically. But you can also influence locking behavior yourself, using three control levers: database design, lock timeouts, and transaction isolation levels. In this tutorial I’ll show you how to do just that.

As a refresher, all Relational Database Management Systems (RDBMS) have locks. They are integral to the operation of the system, primarily to ensure data integrity. You have to guarantee that while a user or an operation is reading data, that data isn’t allowed to change. That of course goes double for changes to the data, which in fact involve a read operation first.

In the last tutorial of this series I explained how you can discover and view the locking activity on your databases. In this tutorial I’ll show you how to mitigate locking problems, or deal with them when they occur.

Controlling Locking Behavior Using Proper Design

The first and most basic thing you can do to deal with locks is to create a design that minimizes them as much as possible. Interestingly, even though it is simple to design for good locking behavior at the start of your application, this is one of the most difficult things to fix later. Once you have a design, it is very painful to evaluate it as the cause of a locking problem, and it can be almost impossible to take the system apart to fix it once there is data in the tables.

Let’s look at a concrete example. In the AdventureWorks database in SQL Server 2005, there are several tables that deal with inventory for a bicycle shop. Some of these data elements are the item name, its type, its location, how many are on hand, and in some cases even a picture of the item.

At first glance it makes sense to create a single table with all of this information. After all, we normally think of the data as the end result: what it would look like in an application or a report on the screen. But what you need to consider is how the data will be used. The location, for instance, will be set up by plant managers and manufacturing divisions, and the image of the item will probably be handled by marketing or even engineering. Sales will want to query the quantity of items, but both sales and manufacturing will change that number. If all of these applications have to look in the same table, and even the same row, for a piece of data, then the system has to lock that row to prevent one application from reading a value that another is in the middle of changing (a "dirty read"). The more groups that need to look at or change an atomic value (such as the quantity), the more you should separate those data elements out into other tables. This is called normalization, and I’ve covered it in another article here on InformIT.

So as you can see, the design of the database is the first place to start in dealing with locks. It’s mostly trivial for a database engine to join several tables together, so you don’t pay an extremely large penalty for having more tables rather than fewer. And each function touches only the data it cares about, for only as long as it takes to read it or make changes.

If you follow the process I detailed in the normalization article, you’ll have a good start on the design as far as locks go. The important thing to remember is to think about how the system will be used when you design the tables.
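To make that concrete, here’s a minimal sketch of the idea, with hypothetical table and column names loosely modeled on the AdventureWorks inventory data. Splitting the "hot" quantity column away from the rarely-changed name and image columns means that sales and manufacturing updates lock only the small quantity table:

-- One wide table would force every reader and writer onto the same rows.
-- Instead, give each group of users its own table to lock:
CREATE TABLE Item (
    ItemID   int           NOT NULL PRIMARY KEY,
    Name     nvarchar(50)  NOT NULL,
    ItemType nvarchar(20)  NOT NULL
);

CREATE TABLE ItemQuantity (
    ItemID     int NOT NULL REFERENCES Item(ItemID),
    LocationID int NOT NULL,
    Quantity   int NOT NULL,  -- the "hot" value sales and manufacturing update
    PRIMARY KEY (ItemID, LocationID)
);

CREATE TABLE ItemPhoto (
    ItemID int            NOT NULL PRIMARY KEY REFERENCES Item(ItemID),
    Photo  varbinary(max) NULL    -- touched only by marketing or engineering
);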

Controlling Locking Behavior Using Timeouts

The next lever you have for controlling locks is timeouts. A timeout sets how long a query will wait for a blocked resource. If you suspect there will be a lot of locking on a particular resource, you can code your application to wait only a few seconds before it either tells the user that the resource is unavailable, or retries the query a bit later once the lock has cleared. Your application will still slow down a bit when locks occur, but at least the waits are bounded and you’re in charge of how long the process takes. People dislike waiting, but they really dislike it when the application appears to "freeze".

You can change the timeout behavior in a couple of ways, but the simplest is the SET LOCK_TIMEOUT statement (the related @@LOCK_TIMEOUT function just returns the current setting). You specify the timeout value in milliseconds at the top of the batch that you run; the default of -1 means wait indefinitely, and 0 means fail immediately:

SET LOCK_TIMEOUT 1800  -- wait at most 1800 ms (1.8 seconds) for locked resources
SELECT * FROM Inventory;
GO
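When the timeout expires, SQL Server raises error 1222 ("Lock request time out period exceeded"). As a sketch of the retry approach, assuming the same hypothetical Inventory table, a batch can catch that error and try again (TRY...CATCH requires SQL Server 2005 or later):

SET LOCK_TIMEOUT 1800;

BEGIN TRY
    SELECT * FROM Inventory;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222  -- lock request timed out
    BEGIN
        WAITFOR DELAY '00:00:02';  -- back off briefly, then retry once
        SELECT * FROM Inventory;   -- a real application would cap its retries
    END
    ELSE
    BEGIN
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);    -- re-raise anything unexpected
    END
END CATCH;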

Controlling Locking Behavior Using Isolation Levels

Most of the time you’ll want to leave SQL Server alone to deal with locks, since it handles that automatically. But in some circumstances you’ll want to control the type of locking that your query takes. You may decide that it’s more important to read the data quickly, even if it might be subject to change in a moment. You can control the type of lock, and even how long it will be held, using Transaction Isolation Levels.

Note that no matter what you do with these Isolation Levels, data changes always get an exclusive lock. You’d lose data integrity if they didn’t.

You control the Transaction Isolation Level with the SET TRANSACTION ISOLATION LEVEL statement. It takes one of the following parameters, which I’ll explain inline.
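The setting is scoped to your connection and stays in effect until you change it again. Here’s the basic pattern, using the hypothetical Inventory table and one of the levels explained below:

-- Applies to every following transaction on this connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRANSACTION;
    SELECT * FROM Inventory;  -- runs at READ UNCOMMITTED
COMMIT TRANSACTION;

-- Return to the default when you’re done:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;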

READ UNCOMMITTED

This Isolation Level allows your query to read rows that have been changed by another transaction but haven’t yet been committed to the database. Queries running at this level don’t issue SHARED locks, so other queries have no way of knowing that your query is in there working away. Using this Isolation Level, you can get those "dirty reads" I mentioned earlier. Sometimes that’s alright, but normally this isn’t what you’re looking for.

This Isolation Level also doesn’t block other transactions from updating the data your query is reading. The one constant is the write itself: when data is actually being changed, the writer always takes an exclusive lock.

This Isolation Level also allows "phantom reads": other transactions can insert or delete rows while yours runs, so by the time your query finishes, the table might not contain the same rows it returned. Let’s say you have the values 1 through 1,000,000 in a table. With this Isolation Level you can’t be sure someone won’t delete row 9,000 before you finish reading, so you can see data in the application that no longer appears in the database.
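If you decide a dirty read is acceptable, say for a rough count on a busy table, you can set the level for the session, or use the equivalent WITH (NOLOCK) table hint on a single query (Inventory is hypothetical again):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM Inventory;  -- may count uncommitted rows

-- Or hint one table without changing the session level:
SELECT COUNT(*) FROM Inventory WITH (NOLOCK);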

READ COMMITTED

This is the default Isolation Level of SQL Server, and it basically says that queries can’t read data that has been changed but not yet committed by other queries. It’s one of the "safest" Isolation Levels.

REPEATABLE READ

This is one of the more restrictive Isolation Levels. As with READ COMMITTED, queries can’t read data that has been changed but not yet committed by another transaction. In addition, no other transaction can change data that your transaction has read until your transaction finishes. It doesn’t stop other transactions from inserting brand-new rows that match your query, though, so phantom reads are still possible.
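Here’s a sketch of the guarantee, again with the hypothetical Inventory table:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT Quantity FROM Inventory WHERE ItemID = 42;
    -- Other transactions trying to UPDATE or DELETE this row now block
    -- until we commit, so reading it again returns the same value:
    SELECT Quantity FROM Inventory WHERE ItemID = 42;
COMMIT TRANSACTION;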

SNAPSHOT

This Isolation Level resembles READ UNCOMMITTED in one way: it allows a complete data read without taking shared locks. The difference is that instead of seeing other transactions’ uncommitted changes, you see a consistent version of the data as it existed when your transaction started; SQL Server keeps the older row versions in tempdb.

To use this level, you first have to set a database option called ALLOW_SNAPSHOT_ISOLATION to ON.
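For example, to turn it on for AdventureWorks and read the Production.ProductInventory table under SNAPSHOT:

ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Sees the data as it existed when the transaction started,
    -- without blocking writers or being blocked by them:
    SELECT * FROM Production.ProductInventory;
COMMIT TRANSACTION;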

SERIALIZABLE

This is the most restrictive Isolation Level. Like REPEATABLE READ, it guarantees that if you repeat a query you’ll get the same data, at least within the same transaction. It goes one step further, though: it takes range locks so that other transactions can’t even insert new rows that would match your query, which eliminates phantom reads. It also does a lot of blocking, so use it with care.
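A sketch of that range-locking behavior, with the hypothetical Inventory table:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    SELECT * FROM Inventory WHERE ItemID BETWEEN 1 AND 100;
    -- Until this transaction ends, no other transaction can INSERT,
    -- UPDATE, or DELETE any row with an ItemID between 1 and 100:
COMMIT TRANSACTION;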

You can use any of these strategies to deal with your locking issues. I recommend that you leave the defaults for SQL Server locking, and let the engine take care of the proper levels and timeouts. But if you use the monitoring processes and tools I mentioned in my previous article and find locks to be a problem, then you should investigate these methods and test them on your development server.

InformIT Articles and Sample Chapters

Kevin Kline explains how you can write code to minimize locking here.

Online Resources

There is a lot more about concurrency (which is basically locking) here.