
Monitoring and Tuning SQL Server with Profiler

Date: Mar 3, 2006


Database administration expert Baya Pavliashvili introduces several SQL Server tools you can use to monitor and tune application performance, the most important of which is Profiler.

SQL Server comes with several excellent tools for monitoring and tuning application performance. Profiler is perhaps the most powerful of these tools; it allows you to spy on and analyze the impact of SQL statements sent to your server.

This article provides a brief overview of the main Profiler features. I also discuss some of the more advanced ones: trace templates, replaying a previously recorded trace, and recording traces without Profiler's graphical user interface.

Profiler Trace Contents

Profiler traces consist mainly of the server events you want to monitor. Profiler provides dozens of events to choose from, or you can define your own. For example, you can record every login and logout, the start and completion of stored procedure executions, stored procedure recompilations, and much more. In addition, you can choose from more than 40 data columns describing each recorded event. Data columns include the text of the SQL statement sent to your server, the user name, the application name, the amount of CPU used by the statement, the number of reads performed, and more.

As you might guess, a trace collecting all possible events and all possible data columns can get very large and difficult to decipher, particularly on a busy production server. I have seen traces that grow to be several gigabytes and can be rather overwhelming. That’s why you need to carefully consider the events you truly need to record.

Profiler also allows you to limit the output of your trace by adding filters. You can filter the trace by a particular application, database, or user, as well as by a number of other categories.

Note that not all data columns are available with each event; for example, the FileName data column makes sense for the Data File Auto Grow event, but not for the SP:Completed event, which records the completion of a stored procedure execution. A description of each event and data column is available directly within Profiler as well as in Books Online. I highly recommend examining these descriptions within Profiler so that you know which events matter to you the next time you create a trace.

Profiler Templates

SQL Server comes with eight built-in trace templates that can help you get started. Templates simply save your selection of events, data columns, and filters. To use one of the built-in templates, start a new trace and specify the path to the desired template in the Template File Name box within the Trace Properties dialog, shown in Figure 1.

Figure 1: You can use one of the eight built-in trace templates for your trace.

In addition to the built-in templates, you can also create your own. This is a handy shortcut if you like the trace you have recorded and want to reuse it in the future—after all, picking dozens of events and data columns each time you record a trace can be tedious. To create a new trace template, choose File, New, Trace Template within Profiler; choose the events, data columns, and filters; and save the template. The next time you want to use this template, you can pick it just like you would the built-in templates.

Let’s take a quick look at each of the built-in Profiler templates and see when each one of them could be useful:

[Table: the eight built-in Profiler trace templates and when each is useful]

Saving and Examining Trace Output

You can save and examine trace output in a trace file, a SQL script, or a SQL Server table. Saving to a table allows perhaps the most flexibility because you can write queries against the trace table to examine and analyze the output, while saving to trace files makes traces more portable. If you save the trace as a SQL script, you get only a file with the SQL statements sent to your server; all other data columns are simply discarded.
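
If you have already saved a trace to a file and later decide you would rather work with a table, you don't have to re-record anything: the fn_trace_gettable function can load a trace file into a table. Here is a minimal sketch; the file path is only a placeholder, and the destination table is named trace to match the queries used later in this article:

/* load a saved trace file into a table for querying;
   "default" reads all rollover files belonging to the trace */
SELECT * 
INTO trace 
FROM ::fn_trace_gettable(N'c:\trace_example.trc', default)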

When troubleshooting performance issues, I prefer saving trace output in a SQL Server table. The reason is that the biggest bang for your buck comes from tuning the SQL statements or routines that execute most frequently and most sluggishly. Performance is in the eyes of your end-users; therefore, if a database routine takes four minutes to complete and is executed 10,000 times daily, the database administrator's phone is likely to ring off the hook. If you optimize such a procedure to execute within 10 seconds, you'll be sure to make your managers happy. On the other hand, you're not likely to hear much praise for optimizing a batch job that runs once a night from 2 seconds down to 1.5 seconds. To find the top 10 worst-performing queries, you can execute the following query against a saved trace table:

SELECT TOP 10 
TextData, 
Duration, 
StartTime, 
EndTime 
FROM trace 
WHERE duration IS NOT NULL AND duration <> 0
ORDER BY duration DESC

Finding the most frequently executed statements is also fairly easy. The only problem is that the TextData column is of the TEXT data type; therefore, you can't group by this column directly. Furthermore, the parameter values for procedures that accept parameters will change from one execution to the next, so you're not likely to see the same procedure executed with the same set of parameters repeatedly. The workaround for these issues is to group by a portion of the TextData column that won't change for the same procedure. For example, let's say you have a procedure called usp_GetCustomerList that takes three parameters. The TextData column recording executions of this procedure might have the following values:

EXEC usp_GetCustomerList 1, 32, 5
EXEC usp_GetCustomerList 193, 3532, 555
EXEC usp_GetCustomerList 419, 322, 425

You can see that the first 24 characters of this column will always be the same regardless of the parameters passed to the procedure. Therefore, we can execute the following query to get each distinct SQL statement (or procedure) and the number of times it was executed:

SELECT SUBSTRING(textData, 1, 24), COUNT(*) 
FROM trace 
WHERE TextData IS NOT NULL
GROUP BY SUBSTRING(textData, 1, 24)
ORDER BY 2 DESC
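
If you want execution frequency and cost in the same result set, you can extend the same idea. The following variation is only a sketch; it assumes the Duration column was captured in the trace and that the trace table is named trace:

SELECT SUBSTRING(TextData, 1, 24) AS statement_prefix, 
 COUNT(*) AS executions, 
 SUM(Duration) AS total_duration, 
 AVG(Duration) AS avg_duration 
FROM trace 
WHERE TextData IS NOT NULL AND Duration IS NOT NULL 
GROUP BY SUBSTRING(TextData, 1, 24) 
ORDER BY total_duration DESC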

When troubleshooting a particular application that I’m not familiar with, I prefer to examine the contents of the trace directly in the Profiler. This way, I can examine not only the stored procedure call but also the statements included in the procedure. I can also tell if any procedure calls other procedures and examine parameter values passed to each routine.

Replaying Previously Recorded Traces

You might want to replay SQL statements recorded in your trace on the same or different server for a couple of reasons. First, you might want to see if the performance problem is resolved after you have tuned your indexes or modified a stored procedure. Second, you could use a trace for load-testing to see if your application can deliver acceptable performance on a previously unused piece of hardware.

In order to replay a trace, you must capture a specific set of events and data columns (you can capture others along with them, but these are required). The complete lists of required events and data columns are documented in Books Online, and the built-in replay template captures all of them.

When you load a trace that can be replayed, the Replay toolbar buttons and Replay menu become active within Profiler’s user interface, as shown in Figure 2.

Figure 2: When you load a trace that can be replayed, the Replay toolbar becomes available.

The Replay button allows you to replay the entire trace at once, whereas the Step option runs one command at a time. You also have a Run to Cursor option, which executes all statements from the beginning of the trace to the statement you have highlighted. You can also set breakpoints by highlighting the statement you wish to use as a breakpoint and then choosing Replay, Toggle Breakpoint, or by clicking the icon that looks like a hand. Breakpoints are a convenient way to stop and resume trace execution as needed.

You must specify the server where you want your trace SQL statements to be replayed as well as the credentials for connecting to the server. Once you specify connection credentials, Profiler presents you with the dialog shown in Figure 3.

Figure 3: The Replay Options dialog box.

This dialog allows you to change the server to which you wish to replay the events, choose replay options, and indicate whether you want to save the output to a file. You can save the output of a trace replay into another trace file that you can examine later. Replay options allow you to:

  1. Debug the events and execute all commands serially in the order they were recorded.
  2. Replay events in parallel on multiple threads. With this option, you can simulate the load on the server where the events are replayed, but you cannot debug the trace output.

When you debug SQL statements with trace replay, the Profiler user interface has three panes. The top pane shows you events and data columns, the middle one displays commands that are being executed, and the bottom pane shows the results of the statement as shown in Figure 4.

Figure 4: When you debug SQL statements with trace replay, the Profiler user interface has three panes.

Debugging can be helpful for determining the number of rows that are sent back to the user when a particular statement is executed. If you see thousands of "Replay Result Row" events for each execution of your procedure, it should be apparent that you're not filtering your queries appropriately, unless your procedure is generating a huge report that will be printed for further analysis. Also notice that the "Replay Result Set" event (in the middle pane) shows you the columns that will be available in the output, whereas "Replay Result Row" shows you a single row of the query's output.

There are a couple of gotchas you need to be aware of when replaying traces. First, notice that you must record the database identifier in order to be able to replay the trace. If you replay the trace on the same server where you recorded it, all will be well. However, if you attempt to replay the same trace on a different server, the database identifier might not be the same. System databases such as master, tempdb, msDB, and model have the same identifiers on all servers. User databases, however, won't have the same IDs. Therefore, if you need to replay a trace recorded against a user database, you need to make sure that the source and destination servers have the same identifier for the database of interest.
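
A quick way to compare identifiers is to run a query such as the following on both the source and the destination server; the database name here is only a placeholder:

/* check the identifier of the database of interest;
   MyUserDatabase is a placeholder name */
SELECT name, dbid 
FROM master..sysdatabases 
WHERE name = 'MyUserDatabase'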

Second, if you replay the trace using multiple threads, you might notice that the trace executes a few commands and then hangs. What's the deal? Some of the commands within your trace might lock resources needed by other commands. So, if you examine the current activity on your server while replaying the trace on multiple threads, you might notice blocking. If this is the case, you must either stop the replay or kill the offending connection in order to continue replaying the trace.
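
One quick way to spot such blocking while the replay is running is to look for non-zero values in the blocked column of sysprocesses (sp_who2 shows the same information). Here is a simple sketch:

/* show connections that are currently blocked
   and which SPID is blocking them */
SELECT spid, blocked, waittime, lastwaittype, cmd 
FROM master..sysprocesses 
WHERE blocked <> 0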

Recording Traces Without Profiler

You can create traces without Profiler's user interface by executing system stored procedures. First, you must create a trace using sp_trace_create; then add events and data columns using sp_trace_setevent; and, if needed, set filters using sp_trace_setfilter. Finally, use the sp_trace_setstatus procedure to start, stop, close, and delete the trace. All of these procedures are well documented in Books Online.

Here is an example of creating the trace, recording a few events, and starting and stopping the trace:

/* variable to hold a trace ID */
DECLARE @trace_id INT

/* create the trace; option 2 (TRACE_FILE_ROLLOVER) lets the file roll over when it fills up */
EXEC sp_trace_create 
 @traceid = @trace_id OUTPUT, 
 @options = 2,
 @tracefile = N'c:\trace_example.trc' 

SELECT @trace_id AS trace_id

/* define some events for the trace we just created;
   the batches below assume the trace ID returned above is 1 */

DECLARE @on BIT, @tid INT
SELECT @on = 1, @tid = 1

/* add the TextData column (column 1) for the SQL:BatchCompleted event (event 12) */
EXEC sp_trace_setevent @traceid = @tid, 
 @eventid = 12, @columnid = 1, @on = @on

GO

DECLARE @on BIT, @tid INT
SELECT @on = 1, @tid = 1

/* add the TextData column (column 1) for the RPC:Completed event (event 10) */
EXEC sp_trace_setevent @traceid = @tid, 
 @eventid = 10, @columnid = 1, @on = @on

GO

DECLARE @on BIT, @tid INT
SELECT @on = 1, @tid = 1

/* add the ApplicationName column (column 10) for the SQL:BatchCompleted event */
EXEC sp_trace_setevent @traceid = @tid, 
 @eventid = 12, @columnid = 10, @on = @on

GO

/* create a filter for the existing trace: exclude events
   generated by Profiler itself (ApplicationName NOT LIKE N'SQL Profiler') */

DECLARE @tid INT
SELECT @tid = 1

EXEC sp_trace_setfilter @traceid = @tid, 
 @columnid = 10, @logical_operator = 1, 
 @comparison_operator = 7, @value = N'SQL Profiler'

/* turn on the trace */
EXEC sp_trace_setstatus 1, 1

/* stop the trace (once it has run as long as you need) */
EXEC sp_trace_setstatus 1, 0
GO

/* close the trace and delete its definition */
EXEC sp_trace_setstatus 1, 2
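
While the trace definition still exists on the server, you can check its properties and status without Profiler by querying the fn_trace_getinfo function; passing default returns information about every trace defined on the server:

/* list all traces defined on the server;
   property 5 is the trace status (1 = running, 0 = stopped) */
SELECT traceid, property, value 
FROM ::fn_trace_getinfo(default)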

Figure 5 shows the trace that was saved to my C drive.

Figure 5: A trace that was generated without using the Profiler user interface.

Summary

In this article, I showed you some of the advanced features available with Profiler traces, a very powerful tool for debugging your applications and tuning performance. If you haven't used Profiler traces before, do yourself a favor and learn how to use this tool; your time investment will pay dividends very quickly.
