
SQL Server Reference Guide


Time Tracking for SQL Server Operations

Last updated Mar 28, 2003.

When you manage a SQL Server Instance, you’re concerned with performance, maintenance and capacity. You want to know how well the system is operating within its hardware and software configuration.

I have several articles in this guide on Performance and Maintenance. In many of those I have queries that show the system’s performance and the result of maintenance, SQL Server Agent Jobs and so on. In this tutorial, I’ll give you a few more scripts to use to track the time various events take on your Instance of SQL Server. I’ll also create a quick and simple report from that data, and explain the other output options available.

There are several reasons to examine the time it takes for various events in your system. The most obvious of course is for Performance Tuning. I advocate using the “top five” approach, which means I look for the top five items that are taking the most time. That means the top five queries, top five reasons for system waits, and so on.

Another, perhaps not as obvious reason for tracking the time in your system is to create a Maintenance Window. Tracking how long each index operation, backup, and consistency check on the files takes is vital to know how to make your system more available to the users. Also, knowing when and how long each backup takes helps you with Business Continuity (Disaster Recovery) planning. You can also use this information to find out how busy each system is, so that you can consolidate properly.

I won’t cover every single query you could possibly track in this tutorial. I’ll focus instead on the process and tools I use to monitor my system, and give you some options along the way. I’ll save the output options for last.

For all of these, I’ll show you Transact-SQL (T-SQL) queries. I normally use PowerShell these days, since I can include things like the Windows logs and even use e-mail as an output right from the command line, but much of this information lives in the Dynamic Management Views (DMVs) and in system tables, and that’s best suited to T-SQL. You can certainly send the queries through PowerShell, or run them on a schedule with a command-line tool such as SQLCMD, but at the end it’s all T-SQL. Plus, this approach lets you be very flexible in the tool you use – you can even put these in a SQL Server Agent Job if you wish.
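If you do go the command-line route, a minimal sketch of a SQLCMD call looks something like this – the server name, script path and output file are made up for the example, and you’d run it from whatever scheduler you prefer:

REM Run a saved T-SQL script and send the results to a text file (all names here are examples)
sqlcmd -S MYSERVER\MYINSTANCE -E -i "C:\Scripts\QueryTime.sql" -o "C:\Output\QueryTime.txt" -W -s ","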

I’ll also focus on SQL Server 2005 and higher for this exercise. I’ve retired all of my SQL Server 2000 Instances at this point, and you can get a lot of this information in my past articles on monitoring and Performance Tuning anyway. The process applies to all of the versions even if the queries do not.

I normally track three “buckets” of events. The first is query time, the second is maintenance tasks, and the last is SQL Server Agent Jobs. You can certainly track more, and I actually track not just one of each of these but multiples. I’ll keep it simple for this example, however, so that you can see how to create your own.

Monitoring Query Time

The first thing to monitor is the amount of time queries are taking. You can do this in several ways, and a simple web search for “Top SQL Server Queries” yields lots of scripts for you to explore and try on your test system. I’ll show you one I use, which captures data that is most useful over a short window.

I rely on this script less for the particular user or process that is running the query than for the query and duration. All I use the data for is further exploration – I don’t collect enough data here to do fine-grained performance tuning; just a notification that queries are taking a long time. I really want to know if I have any queries over a certain amount of time, and if I do, that prompts me to take a deeper look using other tools.

Note that this is only one vector – I’m tracking highest CPU time, which isn’t always the major culprit. You might have a long I/O query, one that takes a lot of memory and so on. Again, I’ll keep things simple for this article and you can add those other items on your own test system.

Here’s the query I’m using for a simple output of the longest tasks:

/* Query Time */
SELECT (hcq.total_worker_time) * .000001 AS 'DurationInSeconds'
, sdb.name AS 'Database'
, q.[text] AS 'QueryText'
-- , qp.query_plan 'QueryPlan'
FROM  (SELECT TOP 5
       qs.plan_handle
       , qs.total_worker_time
       FROM   sys.dm_exec_query_stats qs
       ORDER BY qs.total_worker_time DESC) AS hcq
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp
LEFT OUTER JOIN sys.databases sdb
	ON sdb.database_id = q.dbid
ORDER BY hcq.total_worker_time DESC;
GO

The data comes from three DMVs and one system catalog view (sys.databases). You might want to pull different data, but for my final result, where I put all three of the query outputs together, this is the most useful for me. I’m actually only after three pieces of information here, although you can see in line 5 I’ve commented out the query plan – if you include it and run the query in SQL Server Management Studio (SSMS), you get a link you can click to see the graphical plan for that query. In the first column, since total_worker_time is presented in microseconds (which I found by looking in Books Online), I normalize the data into seconds. I’ll do that for all of the queries in this example.

You can also see in line 6 that I’m selecting only the top 5 queries. You can increase that number if you’d like to see more.

I’m getting the first set of data from a sub-select, and then using two CROSS APPLY statements to form a kind of JOIN operation through those DMVs. If you’re not familiar with how the CROSS APPLY statement works, check out the documentation for the APPLY operator in Books Online.
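If APPLY is new to you, here’s a minimal sketch of the pattern on its own: for each row from the left side (a plan handle from the query stats DMV), the table-valued function on the right is evaluated with that row’s value, and its output is joined back in.

/* A minimal CROSS APPLY sketch: look up the batch text for each cached plan */
SELECT qs.plan_handle
, st.[text] AS 'BatchText'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) AS st;
GO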

Normally I do not include an ORDER BY in my queries, allowing the client to do that work. But depending on the output you choose, having the query results pre-sorted can be quite useful. Plus, I’m only returning a few rows so it’s not heinous for the server to do the work instead of the client.

You’re not going to hurt anything on a test system (not a production system, mind you!) with this query, so feel free to copy it and see what other columns you have available in the various tables and views I’m using here. Try taking out that comment (the two dashes) on line 5 and then look at the results in the results panel. Just click on the link and you’ll open the graphical query plan so that you can see exactly what the query did. That’s not needed for the regular output, so you can replace those comments when you’re done.
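CPU isn’t the only vector, as I mentioned earlier. If you want to chase the I/O side instead, one option – a sketch, using the same pattern with a different ranking column – is to rank by total_logical_reads:

/* Query I/O - same pattern, ranked by logical reads instead of CPU time */
SELECT hcq.total_logical_reads AS 'LogicalReads'
, sdb.name AS 'Database'
, q.[text] AS 'QueryText'
FROM  (SELECT TOP 5
       qs.plan_handle
       , qs.total_logical_reads
       FROM   sys.dm_exec_query_stats qs
       ORDER BY qs.total_logical_reads DESC) AS hcq
CROSS APPLY sys.dm_exec_sql_text(hcq.plan_handle) AS q
LEFT OUTER JOIN sys.databases sdb
	ON sdb.database_id = q.dbid
ORDER BY hcq.total_logical_reads DESC;
GO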

Monitoring Maintenance Tasks

The next tracking I do is for my maintenance tasks. As I’ve mentioned before, I have no problem with the Maintenance Wizard that comes with SQL Server, provided you understand how it works, what it does and why you make the choices you do.

If you’re using something other than the Maintenance Wizard, obviously this query won’t help you much. You’ll need to know what your process is and track it accordingly.

On this small system, I do in fact have the Maintenance Wizard running, so I can use the tracking tables it has for the history of the maintenance. This script combines a few tables together to track the major and minor steps in the Maintenance Plan history:

SELECT DATEDIFF(SECOND, ld.start_time,ld.end_time) AS 'DurationInSeconds'
, mp.name AS 'PlanName'
, ld.line1 AS 'Task'
FROM msdb.dbo.sysmaintplan_plans AS mp  
RIGHT OUTER JOIN msdb.dbo.sysmaintplan_subplans AS sp 
	ON sp.plan_id=mp.id  
RIGHT OUTER JOIN msdb.dbo.sysmaintplan_log AS spl 
	ON spl.subplan_id=sp.subplan_id  
RIGHT OUTER JOIN msdb.dbo.sysmaintplan_logdetail AS ld 
	ON ld.task_detail_id=spl.task_detail_id
ORDER BY DurationInSeconds DESC;
GO

Once again, I’m normalizing for seconds in this example. The actual results for both queries and maintenance on a busier system will probably be at least in minutes.

As you’re creating your own scripts for tracking maintenance time, you may want to include when the maintenance steps start and stop as well as the duration I’m capturing here. For my purposes only the duration matters, but the start and stop times become interesting when you’re developing that maintenance window.

Also, I’m grabbing all maintenance history here. You might want to focus on only today based on your needs and output choices.
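If you do want the start and stop times, and only the most recent day of history, a sketch along these lines (adjust the date filter to suit your own window) adds those columns and a WHERE clause:

SELECT DATEDIFF(SECOND, ld.start_time, ld.end_time) AS 'DurationInSeconds'
, ld.start_time AS 'StartTime'
, ld.end_time AS 'EndTime'
, mp.name AS 'PlanName'
, ld.line1 AS 'Task'
FROM msdb.dbo.sysmaintplan_plans AS mp
RIGHT OUTER JOIN msdb.dbo.sysmaintplan_subplans AS sp
	ON sp.plan_id = mp.id
RIGHT OUTER JOIN msdb.dbo.sysmaintplan_log AS spl
	ON spl.subplan_id = sp.subplan_id
RIGHT OUTER JOIN msdb.dbo.sysmaintplan_logdetail AS ld
	ON ld.task_detail_id = spl.task_detail_id
WHERE ld.start_time > (GETDATE() - 1)
ORDER BY DurationInSeconds DESC;
GO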

Monitoring SQL Server Agent Jobs

The final “bucket” of timed events that I track is SQL Server Agent Jobs. I actually have an entire system to watch these Jobs – start and stop times, duration, variance, history and more – but for this example, once again, I’m just looking for an overview.

There are a couple of things you have to do to make this work. For one, the date of each job run is stored as an integer (in YYYYMMDD form), and an integer doesn’t convert to a date implicitly (automatically).

So I use the SUBSTRING function to break the value into month, day and year, rearrange the pieces into the U.S. format of month, day, year, and then convert that string into a date. The duration has a similar quirk: run_duration is stored as an integer in HHMMSS form, so it takes a little arithmetic to turn it into seconds.
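To see the date conversion in isolation before it appears inside the larger query, here’s a small worked example – the run_date value 20090415 is made up:

/* A worked example of the int-to-date conversion (the value is hypothetical) */
DECLARE @run_date INT;
SET @run_date = 20090415;
SELECT CAST(SUBSTRING(CAST(@run_date AS CHAR(8)), 5, 2)    -- month
     + '/' + SUBSTRING(CAST(@run_date AS CHAR(8)), 7, 2)   -- day
     + '/' + SUBSTRING(CAST(@run_date AS CHAR(8)), 1, 4)   -- year
     AS DATETIME) AS 'RunDate';
GO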

You also have to join a few tables to get what you’re after, but they are well-documented. Here’s the script I’m using:

SELECT (sjh.run_duration / 10000) * 3600 + ((sjh.run_duration / 100) % 100) * 60 + (sjh.run_duration % 100) AS 'TotalDurationInSeconds' -- run_duration is stored as HHMMSS
, sj.name AS 'JobName'
FROM msdb.dbo.sysjobs AS sj
INNER JOIN msdb.dbo.sysjobhistory AS sjh
	ON sj.job_id = sjh.job_id 
WHERE sjh.step_id = 0
AND CAST(SUBSTRING( CAST(sjh.run_date AS CHAR(8)), 5, 2 )
+ '/' + SUBSTRING( CAST(sjh.run_date AS CHAR(8)), 7, 2 )
+ '/' + SUBSTRING( CAST(sjh.run_date AS CHAR(8)), 1, 4 ) AS datetime) > (GETDATE()-1)
ORDER BY sjh.run_duration DESC;
GO

Notice in line 6 I’m restricting step_id to “0”. That’s the code for the “outer” or complete job, rather than the individual steps. Storing a flag like this to represent a type of data in a table is not always a good idea, but you should be aware that this is how it’s done here.
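If you want to see where the time goes inside a job, the same query restricted to the individual steps (step_id greater than 0) makes a useful sketch – the step name comes along from the history table:

SELECT (sjh.run_duration / 10000) * 3600 + ((sjh.run_duration / 100) % 100) * 60 + (sjh.run_duration % 100) AS 'StepDurationInSeconds'
, sj.name AS 'JobName'
, sjh.step_name AS 'StepName'
FROM msdb.dbo.sysjobs AS sj
INNER JOIN msdb.dbo.sysjobhistory AS sjh
	ON sj.job_id = sjh.job_id
WHERE sjh.step_id > 0    -- add the run_date filter from above if you only want recent history
ORDER BY sjh.run_duration DESC;
GO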

Output Options

Now that you have the data you want, you need to decide what to do with it. You have lots of options. You could store the data when you collect it. You can do that with T-SQL (using the SELECT INTO statement) or by using an output from SQLCMD or PowerShell, for instance.
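As a rough sketch of the storage option – the table name dbo.JobTimeHistory is made up, and you’d point it at whichever database you keep administrative data in – the first collection run can create the table with SELECT INTO, and later runs can append with INSERT:

/* First collection run: SELECT INTO creates the tracking table (the name is hypothetical) */
SELECT GETDATE() AS 'CollectionDate'
, sjh.run_duration AS 'RunDuration'   -- HHMMSS format
, sj.name AS 'JobName'
INTO dbo.JobTimeHistory
FROM msdb.dbo.sysjobs AS sj
INNER JOIN msdb.dbo.sysjobhistory AS sjh
	ON sj.job_id = sjh.job_id
WHERE sjh.step_id = 0;
GO

/* Later collection runs: append to the same table */
INSERT INTO dbo.JobTimeHistory
SELECT GETDATE(), sjh.run_duration, sj.name
FROM msdb.dbo.sysjobs AS sj
INNER JOIN msdb.dbo.sysjobhistory AS sjh
	ON sj.job_id = sjh.job_id
WHERE sjh.step_id = 0;
GO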

If you don’t want to store the data historically, you can look at the query results in real time, using T-SQL, PowerShell, Excel, or HTML. Anything that can drive a query and show the output will work.

For my systems, I’m using the new Report Builder 2.0 (a free download) for SQL Server 2008. It can hit SQL Server 2005 and higher systems.

I won’t cover the process to create those reports here – I’ve done that in another step-by-step tutorial that you can read in the link at the bottom of this article. All I did was to make three simple charts using those step-by-step instructions, with the queries I showed you in this tutorial.

As you can see, it’s not difficult to find objects to track for time on a SQL Server Instance. And you have a lot of choices for where that data ends up. In fact, you can have it end up in a tracking table for historical purposes, and report on it real-time as well.

One final thought – you might already have a monitoring solution in place, either from a vendor or something you’ve written yourself. Research the data it collects to see if you can simply leverage the information it stores. You may already have the timing information and just need to create a report showing you the results.

InformIT Articles and Sample Chapters

The Report Builder 2.0 article I mentioned is The SQL Server Central Management System: Reporting the Data and Project Summary from this Reference Guide.

Books and eBooks

There’s more on Reporting Services in Microsoft SQL Server 2008 Reporting Services Unleashed, by Michael Lisin, Jim Joseph, and Amit Goyal.

Online Resources

The entire reference on what you can find out through Dynamic Management Views is in SQL Server Books Online.