
SQL Server Reference Guide


Performance Monitoring Tools: User Defined Counters

Last updated Mar 28, 2003.

In a previous series I introduced you to a tool that is included with every Windows Server operating system: the Windows System Monitor, also known as Performance Monitor, or sometimes Perfmon (even though technically Perfmon is a separate tool). You can use the Windows System Monitor in the second phase of the performance tuning process, where you evaluate a system's components to establish baselines and to look for performance bottlenecks.

In my discussions on the performance tuning process, I explained that you need to monitor not only the objects and counters within SQL Server, but every part of the landscape, including middle-tier servers and processes. While most parts of the landscape have built-in monitoring capabilities, such as System Monitor objects and counters, one glaring exception is the custom-developed application. If the developer did not write special code to register the application with the System Monitor's registry entries, you don't have many options for instrumenting it.

The reason I'm focusing on the Windows System Monitor is that you can use it to measure the various parts of your application through the Performance Monitor interface, either when the application first goes to production or in a controlled testing environment. Once you've captured the performance metrics while the system is running properly, you have a "baseline" of information to work from. You can then repeat the monitoring periodically and compare the values to how the system is running today. That's performance tuning. When you have an issue, you can take the measurements again and find out which component is having a problem. That's error correction. So the Windows System Monitor has a place when you baseline, it can help you find bottlenecks for tuning, and it can help you locate areas that are having problems.

The primary difficulty is also the hardest to deal with: you know how to monitor, but what exactly should you monitor?

There are no hard and fast rules for which parts of the application should be instrumented. However, you can use a simple method to find out exactly what you need to check to baseline the system, tune it, and find the issues it's having. To determine which parts of the application to monitor, I'll show you how to use the inductive method.

I'm a big fan of Sherlock Holmes. If you've read any of the works of Sir Arthur Conan Doyle, you'll remember that Holmes always advocated the deductive method of reasoning: he would examine the clues and go from the specific to the general. In this case, I'll show you how to do the opposite; you'll go from the general to the specific.

What I mean is that you can monitor the various systems (clients, middle-tier, and back-end servers) during a standard level of activity, using the broad counters that deal with the processors, memory, I/O (the hard disk subsystem), and network activity. You'll notice that various tasks, as you perform them, make higher demands on one or more of those components. Once you find a component showing movement, you can drill deeper and deeper to find the specific parts of that component that show the effects. Once you've narrowed down all the objects and counters that move during the test, save those as a template for that part of the system.
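
One hedged way to spot which of SQL Server's own counters move is to snapshot them before and after a test task and compare the two. This is only a sketch, and the processor, disk, and network counters for the machine itself still come from System Monitor rather than from SQL Server:

/* Snapshot SQL Server's counters before the test task */
SELECT object_name, counter_name, instance_name, cntr_value
INTO #before
FROM master.dbo.sysperfinfo

/* Run the test task, then list only the counters that moved */
SELECT p.object_name, p.counter_name, p.instance_name,
       p.cntr_value - b.cntr_value AS movement
FROM master.dbo.sysperfinfo p
JOIN #before b
  ON b.object_name = p.object_name
 AND b.counter_name = p.counter_name
 AND b.instance_name = p.instance_name
WHERE p.cntr_value <> b.cntr_value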

For instance, in an N-tier application you'll probably find that the disk I/O on the client, or perhaps even on the middle tier, won't show a lot of movement, but the CPU on the client may. In that case you would prove it by running multiple iterations of the testing procedure and recording only the counters that show significant movement. You can follow the same procedure for the middle tier and for SQL Server, using the objects and counters I showed you in the last couple of tutorials.

Where you'll need to be careful is in interpreting the values the objects and counters show you. You might think there is significant I/O pressure when a certain stored procedure or function runs on the server. But by comparing all of the counters, you might notice that the memory SQL Server has requested and the memory it is using are the same. That can mean SQL Server needs more memory and is paging to disk, which causes the higher I/O readings. Not only that: if the I/O is local, the CPUs will probably spike higher, since they have to handle the I/O transfer. If you consider only one counter, you might miss all of that.
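
As an example, here is one way to check the memory side of that story from T-SQL, using the memory-manager counters SQL Server exposes. This is a sketch against the SQL Server 2000 sysperfinfo table; on SQL Server 2005 and later the equivalent view is sys.dm_os_performance_counters:

/* Compare the memory SQL Server would like to have (Target) with the
   memory it currently has (Total); if the two stay locked together,
   the server is using everything it can get and may need more */
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Memory Manager'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')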

Using the inductive method you can detail your entire system, right down to the lowest function or call. The main difficulty you'll run into is correlation: determining how (or whether) two or more things affect each other. In practical terms, each login takes resources on the server, so there is a correlation between logins and system resource allocation. However, not everything shows correlation, and in a complex system you're not always certain how closely correlated things are.

There are a few ways to deal with this problem. The first is the chronological method: you watch the resources while you perform a certain activity in the application. If memory use climbs when you click a certain button, you can infer a correlation based on the time it happened.

Using the clock isn't foolproof, though. You might not be able to tie activity to a button click, because a particular section of the application runs automatically or is triggered by some internal event. In fact, one of those might be running just as you click the button, which invalidates the results of the button-click test as well.

The other method of dealing with correlation is instrumentation. That means you place "markers" in the code so that you know exactly when something happens. And for that, Microsoft has provided the perfect addition to SQL Server: you can control ten of your own Performance Monitor counters and change their values from regular Transact-SQL (T-SQL) statements. When the developer calls a stored procedure or runs a set of T-SQL statements, they can use the sp_user_counter1 through sp_user_counter10 system stored procedures to make the corresponding counters rise in System Monitor. You can then use the same techniques I explained in the last two articles to track them. By selecting the User Settable object under your SQL Server instance in System Monitor, you'll see the ten counters there. You can log the values or track them just like any other counter.

Let me show you a concrete example. Assume you have a function that runs a particular query inside an application. You can "bracket" that code with the sp_user_counter1 stored procedure, setting the value to 1 at the start and back to 0 when you're done. Watch your other counters in relation to this one, and you can see exactly what the correlations are. Here's the code:

/* Turn on the counter at the beginning of the code */
EXEC sp_user_counter1 1
/* Place your more complex code or statements here */
USE pubs
GO
SELECT * FROM authors
/* Turn the counter back off */
EXEC sp_user_counter1 0
GO
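
If you'd rather check the counter values from the server itself, the same data System Monitor displays is exposed through the master.dbo.sysperfinfo system table (sys.dm_os_performance_counters on later versions). A minimal sketch:

/* Read the ten user-settable counters from T-SQL; these are the same
   values shown under the SQLServer:User Settable object */
SELECT instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:User Settable'
  AND counter_name = 'Query'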

You can use these counters for more than just correlation, but they are very useful for that purpose. Each counter holds only a single number, but by using different numbers to indicate system state, you can make the ten counters carry more meaning than just off and on.
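
For example, here is one way to encode state in a single counter; the state values themselves are my own invention, something you and the developers agree on rather than anything SQL Server defines:

/* Use counter 2 as a state flag:
   0 = idle, 1 = order entry started, 2 = order committed */
EXEC sp_user_counter2 1
/* ... order-entry statements run here ... */
EXEC sp_user_counter2 2
/* ... commit and cleanup ... */
EXEC sp_user_counter2 0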

The best way to determine where to place these stored procedure calls is to start with the users of the application. You're looking for activities that indicate an increase in load. Next, work with the developers to find out which statements run when that event happens.

For instance, the users might tell you that each person entering a new order represents a load factor; they notice that when a lot of people are entering orders, the system slows. You check with the developers, who tell you that a certain stored procedure creates the orders. To instrument the application, you simply ask them to include the user-counter calls shown above within that procedure. You can then include the user counter in your System Monitor logging and correlate the increase in orders with the other counters, or rule it out.
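
As a sketch of what you might ask the developers to add (the procedure name usp_CreateOrder and its parameter are hypothetical stand-ins for whatever their code actually uses):

/* Hypothetical example: bracket the real order-creation work with
   user counter 3 so order entry shows up in System Monitor */
CREATE PROCEDURE usp_CreateOrder
    @CustomerID int
AS
    EXEC sp_user_counter3 1  /* order entry in progress */
    /* ... the developers' existing INSERT and validation logic ... */
    EXEC sp_user_counter3 0  /* order entry complete */
GO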

InformIT Articles and Sample Chapters

I have a complete section on performance tuning elsewhere in this Reference Guide.

Books and eBooks

You can find out a lot more about SQL Server performance tuning in the book SQL Performance Tuning, by Peter Gulutzan and Trudy Pelzer.
