Table of Contents
- Microsoft SQL Server Defined
- Microsoft SQL Server Features
- Microsoft SQL Server Administration
- Microsoft SQL Server Programming
- Performance Tuning SQL Server: Tools and Processes
- Performance Tuning SQL Server: Tools Overview
- Creating a Performance Tuning Audit - Defining Components
- Creating a Performance Tuning Audit - Evaluation Part One
- Creating a Performance Tuning Audit - Evaluation Part Two
- Creating a Performance Tuning Audit - Interpretation
- Creating a Performance Tuning Audit - Developing an Action Plan
- Understanding SQL Server Query Plans
- Performance Tuning: Implementing Indexes
- Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 1
- Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 2
- Performance Monitoring Tools: Windows System Monitor
- Performance Monitoring Tools: Logging with System Monitor
- Performance Monitoring Tools: User Defined Counters
- General Transact-SQL (T-SQL) Performance Tuning, Part 1
- General Transact-SQL (T-SQL) Performance Tuning, Part 2
- General Transact-SQL (T-SQL) Performance Tuning, Part 3
- Performance Monitoring Tools: An Introduction to SQL Profiler
- Performance Tuning: Introduction to Indexes
- Performance Monitoring Tools: SQL Server 2000 Index Tuning Wizard
- Performance Monitoring Tools: SQL Server 2005 Database Tuning Advisor
- Performance Monitoring Tools: SQL Server Management Studio Reports
- Performance Monitoring Tools: SQL Server 2008 Activity Monitor
- The SQL Server 2008 Management Data Warehouse and Data Collector
- Performance Monitoring Tools: Evaluating Wait States with PowerShell and Excel
- Practical Applications
- Professional Development
- Application Architecture Assessments
- Business Intelligence
- Tips and Troubleshooting
- Additional Resources
Creating a Performance Tuning Audit - Defining Components
Last updated Mar 28, 2003.
To adequately tune a system, you need to know how it is currently performing. Ideally, you'll want to document the system's performance metrics in three circumstances: when the system is first built, when it is under average use and performing normally, and when it changes significantly. You'll then monitor again when the users report a slowdown. Comparing the earlier metrics to the current behavior will often reveal the problem area(s), or at least give you a place to start your investigation.
Even if you don't have a baseline measurement from when the system was built or running well, you can still monitor against expected normal values to find issues, or watch for counters that move dramatically during the monitoring period. Even so, not knowing what "normal" is for a value limits what you can conclude. A few counters have well-defined target values that you can compare to your monitored values in a vacuum. In most cases, however, there is no clearly defined comparison value, and you'll need a baseline, or significant movement, to determine whether a value is an indicator of the issue or not.
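The baseline-versus-current comparison described above can be sketched in a few lines. This is a hypothetical illustration, not a tool from the article: the counter names, values, and the 50% deviation threshold are all invented for the example.

```python
# Hypothetical sketch: flag counters that deviate sharply from a baseline.
# Counter names, values, and the threshold are invented for illustration.

def flag_deviations(baseline, current, threshold=0.5):
    """Return counters whose current value differs from the baseline
    by more than `threshold` (expressed as a fraction of the baseline)."""
    flagged = {}
    for counter, base_value in baseline.items():
        now = current.get(counter)
        if now is None or base_value == 0:
            continue  # no current sample, or can't compute a ratio
        change = abs(now - base_value) / base_value
        if change > threshold:
            flagged[counter] = (base_value, now, change)
    return flagged

# Baseline taken when the system ran well vs. a sample taken during a slowdown.
baseline = {"Processor % Time": 30.0, "Disk sec/Read": 0.008, "Batches/sec": 450.0}
current  = {"Processor % Time": 85.0, "Disk sec/Read": 0.009, "Batches/sec": 440.0}

print(flag_deviations(baseline, current))
```

With these sample numbers, only the processor counter moved far enough from its baseline to be flagged; the other two stayed within the threshold, which is exactly the "watch for counters that move dramatically" idea in practice.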
Although this is a SQL Server site and the database server is our usual focus, you really can't tune a complete system by limiting your attention to the database engine. You will need to either educate yourself in multiple technology stacks or include professionals from those areas. That is the focus of this tutorial.
When you approach the system, either to baseline it or to find the source of an issue, you will need to define the components that make up the system. Since you, as the DBA, are involved, there is probably at least one database server in the mix. But before examining any one part of the database server, you need to step back and document the entire system, keeping in mind that you may need the involvement of others.
If you're dealing with a commercial application, you may already have documentation for the landscape. Remember that the landscape for a system includes all components. If it's an enterprise-sized implementation of a commercial product, you'll almost certainly have documentation somewhere, because it's nearly impossible to implement something like an SAP or PeopleSoft application without being required to create that documentation in the first place.
If the application is something your firm put together yourselves, then you may still have documentation. Check with the original project manager, assuming he or she is still at the company. If not, you may have to do a little sleuthing to find out where the documentation lives.
Don't assume, however, that the system still looks like the original documentation. Make sure you involve the current system or application owners to validate the design you're looking at.
But perhaps you're working with a system where you can't locate the documentation — or even worse, one where the documentation was never created. In this case you'll have to create a landscape document yourself, or work with others to do so. Make the documentation as in-depth as you can, depending on how much time you have to correct the issue you're facing. If you're in deep trouble already, you'll have to create the documentation as you make your discoveries.
As far as tools and format for the documentation, it's really up to you. Most technical professionals are familiar with either a vector-based drawing program like Visio or a presentation program like PowerPoint. You can even use a word processor or spreadsheet, although you'll find a picture really is worth a thousand words, especially when you're validating the design. If you bring a ten-page Word document to a system administrator and ask whether this is how the landscape looks, you're almost guaranteed to be ignored or lied to. If you bring a picture of the layout, however, odds are you'll get a quick glance from the rest of the technical team and a nod of the head or a slight correction.
So what goes in the document? I usually define what I call the "critical path" for the system. The critical path of a system is the path any discrete datum takes from origination to destination — in other words, from any unique kind of client, through the network path, to the distinct processing engines, to the ultimate data destination. Using this definition, you add only one of each class of client, the network path, the processing layers, and the database systems. The fact that one workstation doesn't perform exactly the same kind of work as another doesn't necessarily make it unique: if it runs the same software modules as another and generates a similar level of load, you don't have to include it. Only systems that create or modify the data in a significant way need to be documented, at least until you determine they aren't the problem.
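The "one of each class of client" rule can be sketched as a simple grouping step. This is a hypothetical illustration only: the client records, module names, and load classes are invented, and real classification would need more judgment than a dictionary key.

```python
# Hypothetical sketch of the "one of each class" rule: clients running the
# same software modules with a similar level of load collapse into a single
# representative on the critical path. All names here are invented.

def representatives(clients):
    """Keep one client per (software modules, load class) combination."""
    seen = {}
    for client in clients:
        key = (frozenset(client["modules"]), client["load_class"])
        seen.setdefault(key, client)  # first client of each class wins
    return list(seen.values())

clients = [
    {"name": "ws-01", "modules": ["order-entry"], "load_class": "normal"},
    {"name": "ws-02", "modules": ["order-entry"], "load_class": "normal"},
    {"name": "ws-03", "modules": ["reporting"],   "load_class": "heavy"},
]

print(len(representatives(clients)))
```

Here ws-01 and ws-02 belong to the same class, so only one of them needs to appear in the landscape document alongside the reporting workstation.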
Once you've identified all of the systems, you can start to document them. The first things you need to know about each one are what it does and how it does it. If the component is a workstation, you'll need a short description somewhere that specifies how it fits into the overall scheme of things.
Next you'll want to detail the static data and dynamic metrics the component holds. The static data are things like machine type, operating system (if applicable), service pack, and installed memory or storage — items that either never change or change less often than daily or weekly. The dynamic metrics are the objects and counters that produce values you can measure, usually in increments of one second or less. A router, for instance, has static information such as its model number and firmware revision, and dynamic information such as the port object and a counter for IP packets per second.
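One way to keep the static/dynamic split straight is to give every landscape entry the same shape. The sketch below is a hypothetical inventory record, not a standard schema: the field names and the router details are invented to mirror the example in the text.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a landscape inventory entry. Field names and the
# sample router are illustrative only, not a standard schema.

@dataclass
class Component:
    name: str
    role: str                       # what it does in the overall system
    static_info: dict = field(default_factory=dict)      # rarely changes
    dynamic_counters: list = field(default_factory=list) # measurable values

router = Component(
    name="edge-router-01",
    role="Routes client traffic to the application tier",
    static_info={"model": "Example-2800", "firmware": "12.4"},
    dynamic_counters=["Port: IP packets/sec"],
)
```

Keeping every component in the same structure pays off later, when you hand each area's expert a record to fill in and when you compare the dynamic counters against a baseline.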
Don't do this part alone. In fact, it's difficult for any one professional to document every object in a landscape completely. What you can do, however, is work from the general to the specific. Identify as many components as you can, and then get each area's expert to help you document the objects within each component. Make sure you explain to that person what you're trying to accomplish. If you ask them about all of the static and dynamic information for a component without explaining that you're trying to flush out a performance issue or set a baseline, you'll probably get so much detail you'll never be able to figure out what you should evaluate.
That brings up an important point. This part of the process is often difficult to do, and takes a lot of work. For that reason it's often left out. That's a bad thing, because trying to find out what's wrong without knowing all the parts of the system is like trying to drive a car without any gauges. You don't know what's wrong because you don't have any feedback.
Once you have all of the components documented, you'll need to document the connections between them. This means treating those connections as components in their own right, recording the same static and dynamic information as you did earlier for workstations and servers.
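Treating connections as first-class entries naturally turns the landscape into a graph. The sketch below is purely illustrative — the component names, link descriptions, and counters are invented — but it shows how a connection can carry the same static/dynamic information as the components it joins.

```python
# Hypothetical sketch: the landscape as a graph, where each connection gets
# the same static/dynamic treatment as the components it joins.
# All names and counters here are invented for illustration.

landscape = {
    "components": {
        "workstation": {"static": {"os": "Windows"}, "counters": []},
        "app-server":  {"static": {}, "counters": []},
        "db-server":   {"static": {}, "counters": []},
    },
    "connections": [
        {"from": "workstation", "to": "app-server",
         "static": {"link": "1 Gbps Ethernet"},
         "counters": ["Bytes/sec", "Latency ms"]},
        {"from": "app-server", "to": "db-server",
         "static": {"link": "1 Gbps Ethernet"},
         "counters": ["Bytes/sec"]},
    ],
}

def connections_for(landscape, component):
    """List every connection that touches a given component."""
    return [c for c in landscape["connections"]
            if component in (c["from"], c["to"])]
```

A lookup like `connections_for(landscape, "app-server")` then answers the question you'll ask constantly during the audit: which links could be involved when this component misbehaves?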
As you go along, you (or others) will be tempted to measure or collect data on the components as you define them. You (or someone else) may also uncover an obvious issue as you're documenting them. "How did that server get Microsoft Office installed? I'll take it off..." Don't do anything at this stage, no matter how tempting. I've encountered many situations where someone "fixed" a component while I was trying to diagnose it. If you find an issue (unless something is physically on fire or the like), document it and move on.
In the next part of the process, I'll show you how to begin to collect the measurements and what you should do next.
Informit Articles and Sample Chapters
Scott Fulton authors the Windows Server side of Informit, and his articles have much of the information you'll need to discover the objects and counters you need to include from the systems that run your database server.
Since we're talking about far more than databases, you'll need some theory and industry experience with general computing technology. A Computer Science degree and/or a Microsoft certification helps, and the A+ certification from CompTIA is also useful.