
Contents

  1. SQL Server Reference Guide
  2. Introduction
  3. SQL Server Reference Guide Overview
  4. Table of Contents
  5. Microsoft SQL Server Defined
  6. SQL Server Editions
  7. SQL Server Access
  8. InformIT Articles and Sample Chapters
  9. Online Resources
  10. Microsoft SQL Server Features
  11. SQL Server Books Online
  12. Clustering Services
  13. Data Transformation Services (DTS) Overview
  14. Replication Services
  15. Database Mirroring
  16. Natural Language Processing (NLP)
  17. Analysis Services
  18. Microsoft SQL Server Reporting Services
  19. XML Overview
  20. Notification Services for the DBA
  21. Full-Text Search
  22. SQL Server 2005 - Service Broker
  23. Using SQL Server as a Web Service
  24. SQL Server Encryption Options Overview
  25. SQL Server 2008 Overview
  26. SQL Server 2008 R2 Overview
  27. SQL Azure
  28. The Utility Control Point and Data Application Component, Part 1
  29. The Utility Control Point and Data Application Component, Part 2
  30. Microsoft SQL Server Administration
  31. The DBA Survival Guide: The 10 Minute SQL Server Overview
  32. Preparing (or Tuning) a Windows System for SQL Server, Part 1
  33. Preparing (or Tuning) a Windows System for SQL Server, Part 2
  34. Installing SQL Server
  35. Upgrading SQL Server
  36. SQL Server 2000 Management Tools
  37. SQL Server 2005 Management Tools
  38. SQL Server 2008 Management Tools
  39. SQL Azure Tools
  40. Automating Tasks with SQL Server Agent
  41. Run Operating System Commands in SQL Agent using PowerShell
  42. Automating Tasks Without SQL Server Agent
  43. Storage – SQL Server I/O
  44. Service Packs, Hotfixes and Cumulative Upgrades
  45. Tracking SQL Server Information with Error and Event Logs
  46. Change Management
  47. SQL Server Metadata, Part One
  48. SQL Server Meta-Data, Part Two
  49. Monitoring - SQL Server 2005 Dynamic Views and Functions
  50. Monitoring - Performance Monitor
  51. Unattended Performance Monitoring for SQL Server
  52. Monitoring - User-Defined Performance Counters
  53. Monitoring: SQL Server Activity Monitor
  54. SQL Server Instances
  55. DBCC Commands
  56. SQL Server and Mail
  57. Database Maintenance Checklist
  58. The Maintenance Wizard: SQL Server 2000 and Earlier
  59. The Maintenance Wizard: SQL Server 2005 (SP2) and Later
  60. The Web Assistant Wizard
  61. Creating Web Pages from SQL Server
  62. SQL Server Security
  63. Securing the SQL Server Platform, Part 1
  64. Securing the SQL Server Platform, Part 2
  65. SQL Server Security: Users and other Principals
  66. SQL Server Security – Roles
  67. SQL Server Security: Objects (Securables)
  68. Security: Using the Command Line
  69. SQL Server Security - Encrypting Connections
  70. SQL Server Security: Encrypting Data
  71. SQL Server Security Audit
  72. High Availability - SQL Server Clustering
  73. SQL Server Configuration, Part 1
  74. SQL Server Configuration, Part 2
  75. Database Configuration Options
  76. 32- vs 64-bit Computing for SQL Server
  77. SQL Server and Memory
  78. Performance Tuning: Introduction to Indexes
  79. Statistical Indexes
  80. Backup and Recovery
  81. Backup and Recovery Examples, Part One
  82. Backup and Recovery Examples, Part Two: Transferring Databases to Another System (Even Without Backups)
  83. SQL Profiler - Reverse Engineering An Application
  84. SQL Trace
  85. SQL Server Alerts
  86. Files and Filegroups
  87. Partitioning
  88. Full-Text Indexes
  89. Read-Only Data
  90. SQL Server Locks
  91. Monitoring Locking and Deadlocking
  92. Controlling Locks in SQL Server
  93. SQL Server Policy-Based Management, Part One
  94. SQL Server Policy-Based Management, Part Two
  95. SQL Server Policy-Based Management, Part Three
  96. Microsoft SQL Server Programming
  97. An Outline for Development
  98. Database
  99. Database Services
  100. Database Objects: Databases
  101. Database Objects: Tables
  102. Database Objects: Table Relationships
  103. Database Objects: Keys
  104. Database Objects: Constraints
  105. Database Objects: Data Types
  106. Database Objects: Views
  107. Database Objects: Stored Procedures
  108. Database Objects: Indexes
  109. Database Objects: User Defined Functions
  110. Database Objects: Triggers
  111. Database Design: Requirements, Entities, and Attributes
  112. Business Process Model Notation (BPMN) and the Data Professional
  113. Business Questions for Database Design, Part One
  114. Business Questions for Database Design, Part Two
  115. Database Design: Finalizing Requirements and Defining Relationships
  116. Database Design: Creating an Entity Relationship Diagram
  117. Database Design: The Logical ERD
  118. Database Design: Adjusting The Model
  119. Database Design: Normalizing the Model
  120. Creating The Physical Model
  121. Database Design: Changing Attributes to Columns
  122. Database Design: Creating The Physical Database
  123. Database Design Example: Curriculum Vitae
  124. NULLs
  125. The SQL Server Sample Databases
  126. The SQL Server Sample Databases: pubs
  127. The SQL Server Sample Databases: NorthWind
  128. The SQL Server Sample Databases: AdventureWorks
  129. The SQL Server Sample Databases: Adventureworks Derivatives
  130. UniversalDB: The Demo and Testing Database, Part 1
  131. UniversalDB: The Demo and Testing Database, Part 2
  132. UniversalDB: The Demo and Testing Database, Part 3
  133. UniversalDB: The Demo and Testing Database, Part 4
  134. Getting Started with Transact-SQL
  135. Transact-SQL: Data Definition Language (DDL) Basics
  136. Transact-SQL: Limiting Results
  137. Transact-SQL: More Operators
  138. Transact-SQL: Ordering and Aggregating Data
  139. Transact-SQL: Subqueries
  140. Transact-SQL: Joins
  141. Transact-SQL: Complex Joins - Building a View with Multiple JOINs
  142. Transact-SQL: Inserts, Updates, and Deletes
  143. An Introduction to the CLR in SQL Server 2005
  144. Design Elements Part 1: Programming Flow Overview, Code Format and Commenting your Code
  145. Design Elements Part 2: Controlling SQL's Scope
  146. Design Elements Part 3: Error Handling
  147. Design Elements Part 4: Variables
  148. Design Elements Part 5: Where Does The Code Live?
  149. Design Elements Part 6: Math Operators and Functions
  150. Design Elements Part 7: Statistical Functions
  151. Design Elements Part 8: Summarization Statistical Algorithms
  152. Design Elements Part 9: Representing Data with Statistical Algorithms
  153. Design Elements Part 10: Interpreting the Data—Regression
  154. Design Elements Part 11: String Manipulation
  155. Design Elements Part 12: Loops
  156. Design Elements Part 13: Recursion
  157. Design Elements Part 14: Arrays
  158. Design Elements Part 15: Event-Driven Programming Vs. Scheduled Processes
  159. Design Elements Part 16: Event-Driven Programming
  160. Design Elements Part 17: Program Flow
  161. Forming Queries Part 1: Design
  162. Forming Queries Part 2: Query Basics
  163. Forming Queries Part 3: Query Optimization
  164. Forming Queries Part 4: SET Options
  165. Forming Queries Part 5: Table Optimization Hints
  166. Using SQL Server Templates
  167. Transact-SQL Unit Testing
  168. Index Tuning Wizard
  169. Unicode and SQL Server
  170. SQL Server Development Tools
  171. The SQL Server Transact-SQL Debugger
  172. The Transact-SQL Debugger, Part 2
  173. Basic Troubleshooting for Transact-SQL Code
  174. An Introduction to Spatial Data in SQL Server 2008
  175. Performance Tuning
  176. Performance Tuning SQL Server: Tools and Processes
  177. Performance Tuning SQL Server: Tools Overview
  178. Creating a Performance Tuning Audit - Defining Components
  179. Creating a Performance Tuning Audit - Evaluation Part One
  180. Creating a Performance Tuning Audit - Evaluation Part Two
  181. Creating a Performance Tuning Audit - Interpretation
  182. Creating a Performance Tuning Audit - Developing an Action Plan
  183. Understanding SQL Server Query Plans
  184. Performance Tuning: Implementing Indexes
  185. Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 1
  186. Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 2
  187. Performance Monitoring Tools: Windows System Monitor
  188. Performance Monitoring Tools: Logging with System Monitor
  189. Performance Monitoring Tools: User Defined Counters
  190. General Transact-SQL (T-SQL) Performance Tuning, Part 1
  191. General Transact-SQL (T-SQL) Performance Tuning, Part 2
  192. General Transact-SQL (T-SQL) Performance Tuning, Part 3
  193. Performance Monitoring Tools: An Introduction to SQL Profiler
  194. Performance Tuning: Introduction to Indexes
  195. Performance Monitoring Tools: SQL Server 2000 Index Tuning Wizard
  196. Performance Monitoring Tools: SQL Server 2005 Database Tuning Advisor
  197. Performance Monitoring Tools: SQL Server Management Studio Reports
  198. Performance Monitoring Tools: SQL Server 2008 Activity Monitor
  199. The SQL Server 2008 Management Data Warehouse and Data Collector
  200. Performance Monitoring Tools: Evaluating Wait States with PowerShell and Excel
  201. Practical Applications
  202. Choosing the Back End
  203. The DBA's Toolbox, Part 1
  204. The DBA's Toolbox, Part 2
  205. Scripting Solutions for SQL Server
  206. Building a SQL Server Lab
  207. Using Graphics Files with SQL Server
  208. Enterprise Resource Planning
  209. Customer Relationship Management (CRM)
  210. Building a Reporting Data Server
  211. Building a Database Documenter, Part 1
  212. Building a Database Documenter, Part 2
  213. Data Management Objects
  214. Data Management Objects: The Server Object
  215. Data Management Objects: Server Object Methods
  216. Data Management Objects: Collections and the Database Object
  217. Data Management Objects: Database Information
  218. Data Management Objects: Database Control
  219. Data Management Objects: Database Maintenance
  220. Data Management Objects: Logging the Process
  221. Data Management Objects: Running SQL Statements
  222. Data Management Objects: Multiple Row Returns
  223. Data Management Objects: Other Database Objects
  224. Data Management Objects: Security
  225. Data Management Objects: Scripting
  226. Powershell and SQL Server - Overview
  227. PowerShell and SQL Server - Objects and Providers
  228. Powershell and SQL Server - A Script Framework
  229. Powershell and SQL Server - Logging the Process
  230. Powershell and SQL Server - Reading a Control File
  231. Powershell and SQL Server - SQL Server Access
  232. Powershell and SQL Server - Web Pages from a SQL Query
  233. Powershell and SQL Server - Scrubbing the Event Logs
  234. SQL Server 2008 PowerShell Provider
  235. SQL Server I/O: Importing and Exporting Data
  236. SQL Server I/O: XML in Database Terms
  237. SQL Server I/O: Creating XML Output
  238. SQL Server I/O: Reading XML Documents
  239. SQL Server I/O: Using XML Control Mechanisms
  240. SQL Server I/O: Creating Hierarchies
  241. SQL Server I/O: Using HTTP with SQL Server XML
  242. SQL Server I/O: Using HTTP with SQL Server XML Templates
  243. SQL Server I/O: Remote Queries
  244. SQL Server I/O: Working with Text Files
  245. Using Microsoft SQL Server on Handheld Devices
  246. Front-Ends 101: Microsoft Access
  247. Comparing Two SQL Server Databases
  248. English Query - Part 1
  249. English Query - Part 2
  250. English Query - Part 3
  251. English Query - Part 4
  252. English Query - Part 5
  253. RSS Feeds from SQL Server
  254. Using SQL Server Agent to Monitor Backups
  255. Reporting Services - Creating a Maintenance Report
  256. SQL Server Chargeback Strategies, Part 1
  257. SQL Server Chargeback Strategies, Part 2
  258. SQL Server Replication Example
  259. Creating a Master Agent and Alert Server
  260. The SQL Server Central Management System: Definition
  261. The SQL Server Central Management System: Base Tables
  262. The SQL Server Central Management System: Execution of Server Information (Part 1)
  263. The SQL Server Central Management System: Execution of Server Information (Part 2)
  264. The SQL Server Central Management System: Collecting Performance Metrics
  265. The SQL Server Central Management System: Centralizing Agent Jobs, Events and Scripts
  266. The SQL Server Central Management System: Reporting the Data and Project Summary
  267. Time Tracking for SQL Server Operations
  268. Migrating Departmental Data Stores to SQL Server
  269. Migrating Departmental Data Stores to SQL Server: Model the System
  270. Migrating Departmental Data Stores to SQL Server: Model the System, Continued
  271. Migrating Departmental Data Stores to SQL Server: Decide on the Destination
  272. Migrating Departmental Data Stores to SQL Server: Design the ETL
  273. Migrating Departmental Data Stores to SQL Server: Design the ETL, Continued
  274. Migrating Departmental Data Stores to SQL Server: Attach the Front End, Test, and Monitor
  275. Tracking SQL Server Timed Events, Part 1
  276. Tracking SQL Server Timed Events, Part 2
  277. Patterns and Practices for the Data Professional
  278. Managing Vendor Databases
  279. Consolidation Options
  280. Connecting to a SQL Azure Database from Microsoft Access
  281. SharePoint 2007 and SQL Server, Part One
  282. SharePoint 2007 and SQL Server, Part Two
  283. SharePoint 2007 and SQL Server, Part Three
  284. Querying Multiple Data Sources from a Single Location (Distributed Queries)
  285. Importing and Exporting Data for SQL Azure
  286. Working on Distributed Teams
  287. Professional Development
  288. Becoming a DBA
  289. Certification
  290. DBA Levels
  291. Becoming a Data Professional
  292. SQL Server Professional Development Plan, Part 1
  293. SQL Server Professional Development Plan, Part 2
  294. SQL Server Professional Development Plan, Part 3
  295. Evaluating Technical Options
  296. System Sizing
  297. Creating a Disaster Recovery Plan
  298. Anatomy of a Disaster (Response Plan)
  299. Database Troubleshooting
  300. Conducting an Effective Code Review
  301. Developing an Exit Strategy
  302. Data Retention Strategy
  303. Keeping Your DBA/Developer Job in Troubled Times
  304. The SQL Server Runbook
  305. Creating and Maintaining a SQL Server Configuration History, Part 1
  306. Creating and Maintaining a SQL Server Configuration History, Part 2
  307. Creating an Application Profile, Part 1
  308. Creating an Application Profile, Part 2
  309. How to Attend a Technical Conference
  310. Tips for Maximizing Your IT Budget This Year
  311. The Importance of Blue-Sky Planning
  312. Application Architecture Assessments
  313. Transact-SQL Code Reviews, Part One
  314. Transact-SQL Code Reviews, Part Two
  315. Cloud Computing (Distributed Computing) Paradigms
  316. NoSQL for the SQL Server Professional, Part One
  317. NoSQL for the SQL Server Professional, Part Two
  318. Object-Role Modeling (ORM) for the Database Professional
  319. Business Intelligence
  320. BI Explained
  321. Developing a Data Dictionary
  322. BI Security
  323. Gathering BI Requirements
  324. Source System Extracts and Transforms
  325. ETL Mechanisms
  326. Business Intelligence Landscapes
  327. Business Intelligence Layouts and the Build or Buy Decision
  328. A Single Version of the Truth
  329. The Operational Data Store (ODS)
  330. Data Marts – Combining and Transforming Data
  331. Designing Data Elements
  332. The Enterprise Data Warehouse — Aggregations and the Star Schema
  333. On-Line Analytical Processing (OLAP)
  334. Data Mining
  335. Key Performance Indicators
  336. BI Presentation - Client Tools
  337. BI Presentation - Portals
  338. Implementing ETL - Introduction to SQL Server 2005 Integration Services
  339. Building a Business Intelligence Solution, Part 1
  340. Building a Business Intelligence Solution, Part 2
  341. Building a Business Intelligence Solution, Part 3
  342. Tips and Troubleshooting
  343. SQL Server and Microsoft Excel Integration
  344. Tips for the SQL Server Tools: SQL Server 2000
  345. Tips for the SQL Server Tools – SQL Server 2005
  346. Transaction Log Troubles
  347. SQL Server Connection Problems
  348. Orphaned Database Users
  349. Additional Resources
  350. Tools and Downloads
  351. Utilities (Free)
  352. Tool Review (Free): DBDesignerFork
  353. Aqua Data Studio
  354. Microsoft SQL Server Best Practices Analyzer
  355. Utilities (Cost)
  356. Quest Software's TOAD for SQL Server
  357. Quest Software's Spotlight on SQL Server
  358. SQL Server on Microsoft's Virtual PC
  359. Red Gate SQL Bundle
  360. Microsoft's Visio for Database Folks
  361. Quest Capacity Manager
  362. SQL Server Help
  363. Visual Studio Team Edition for Database Professionals
  364. Microsoft Assessment and Planning Solution Accelerator
  365. Aggregating Server Data from the MAPS Tool

Years ago I worked on a mainframe system. Although that technology was mature even when I worked with it, in the early days “the computer” (if a company even had one) was a huge affair, with a purpose-built room and a limited set of applications. When someone used an application, they normally entered data on a terminal, which was a simple keyboard and textual screen. To see the results, they often sent the output to a print job for spooling to a printer somewhere in a locked room, and they would pick it up or have it delivered to their office. As you can imagine, this was a very expensive set of things to do, not only for the cost of the application, but also for the power, network cables, printers, and other hardware, not to mention the people involved.

Built right into the operating system of many of these mainframes were several log entries showing not only who logged on to the system, but how long they stayed connected and which applications they ran, as well as a way to tie all that information back to how much CPU, memory, and storage they used. This information was used to “charge back” the cost of operating the system to the user’s department.

Things have certainly changed. We all have computers now, with even the most humble laptop having far more power than those mainframes of the past. We rarely print, since most of the information is not only entered into an application but viewed there as well.

And yet some things remain the same. Even with “open source” software, technology isn’t free — there’s a cost associated with using it. We still need special equipment, facilities and staff to run the IT departments where we work. And many of our applications have moved away from the desktop into what we’re now calling “the cloud,” which has remarkable similarities to the mainframe days of old.

Business users are now far more computer-savvy. They combine spreadsheets, portals, shared workspaces, and the like, which they want to access and create at will. But again, nothing is free — even if the user can simply click a button to create a new SharePoint site, a little more memory is used on the server, another database is created underneath it, more maintenance time is needed, backup licenses are increased, and so on. At some point, the IT department has to add hardware, licenses and staff to handle the increased load on the servers.

And who pays this cost? Normally the entire business does, out of profits. This is often referred to as a “tax” model — everyone is taxed for the same roads, whether they drive them or not. Somewhere on a balance sheet, the cost for buildings, electricity, and yes, even IT is recorded. The IT manager plans for the budget amounts, and brings them to a business committee for approval. Since no one gets all the budget amounts they ask for, services or goods are cut. And yet the demand from the business increases. The IT manager, stuck between higher demand from the business but stagnant or even decreased budgets, is looking for a way to charge the users for the amount of IT resources they are using. This is called a “toll” model. In this model only the people who drive on the road are charged for it, which is exactly what the IT manager is looking for from his or her technology requests.

Consider also the “hosting” provider, whose customers create demand that the provider doesn’t control and can’t predict. Since they don’t have each user’s business to pay the “tax,” they must resort to a “toll” model.

So that brings us to the discussion of chargeback, just as we had in the mainframe days. Since technology has changed dramatically since then, this brings up some interesting questions — some from the technical implementation of chargeback, and some from the political or business decisions that go along with it.

When to Implement a Chargeback System

Businesses commonly use a “spread” model for their IT costs. They take the complete IT budget, and divide that by the number of employees in the company, coming up with a “per seat” cost for IT. If the business absorbs this number as a standard “cost of doing business” budget calculation, this is the “tax” model I mentioned earlier. They may, however, charge each department’s user count against that per-user number, decreasing that department’s profit. This is more akin to the “tolls” model, and is in fact a chargeback system. Many colleges use this model — here is a link to an example of that kind of charge and calculation.
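The “spread” arithmetic described above is simple enough to sketch. This is a minimal illustration, not the exact calculation any one college or business uses, and all figures are hypothetical:

```python
# "Spread" (tax) model: the total IT budget divided evenly per seat,
# then optionally charged back to a department by its headcount.
# All figures below are hypothetical.

def per_seat_cost(total_it_budget, employee_count):
    """Annual IT cost per employee under the spread model."""
    return total_it_budget / employee_count

def department_charge(total_it_budget, employee_count, dept_headcount):
    """A department's share when the per-seat rate is charged back."""
    return per_seat_cost(total_it_budget, employee_count) * dept_headcount

if __name__ == "__main__":
    budget = 1_200_000   # annual IT budget
    seats = 400          # employees company-wide
    print(per_seat_cost(budget, seats))          # 3000.0 per seat
    print(department_charge(budget, seats, 25))  # 75000.0 for a 25-person department
```

Charging each department by headcount, as in the second function, is what turns the flat “tax” into the “toll”-like chargeback the article describes.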

If you’re in one of the situations I described earlier, you might consider a chargeback system for your enterprise. The basic premise is that you have a budget reason driving the decision — you are asked to provide more resources to specific business units and you don’t have the budget backing to implement those resources. Using a chargeback system, you can ask each department to “pay its own way” — especially those that use more IT resources.

For instance, on a farm or in a factory, some employees may never directly use the computing system or have any demands for more IT resources. Management functions and finance, on the other hand, disproportionately use those resources. The managers held to a profit number on a single farm or manufacturing location may not want to have their profit number charged for IT resources they do not directly use.

But there is another use for a chargeback system, even if you don’t implement the charging part. You can use a chargeback system to track the use of your systems, and then provide that to the business so that they understand the true cost/benefit ratio a particular business unit provides.

Understand that a chargeback system isn’t free — it has design-time costs, increases staffing requirements, consumes additional resources, and the monitoring it enacts will have an impact on performance. You have to weigh the cost of implementing a chargeback system against the benefits it provides.

In many cases I’ve seen a chargeback system used to justify an increase in IT resources, and then dropped once those resources are procured. The system can be “resurrected” whenever a new justification is needed.

So you can implement the chargeback system as a continuous process, or one that you start, stop and restart whenever you need it. You can charge the departments directly or pass along the information to the business so that they can see which departments or applications are using the most IT resources — something they should consider anyway.

The Chargeback Strategy Methodology

Once you’ve made the decision to implement a chargeback system, you begin its design with a question of what you want to charge for.

A very pertinent question is whether this is a data platform issue alone. After all, there are multiple layers in modern applications, from the cost of the desktop, network and server hardware all the way through the software costs of the licenses that run on that hardware and the personnel required to implement and support all of the layers back to the data platform.

But there is a strong argument for implementing a chargeback system at the data layer. No database exists for itself — it always has some sort of application (even if it is only a script) that uses the data it contains. It also uses all types of resources within a server and network to answer application requests. And since the application is the ultimate user of the database, those requests are more easily tracked and accounted for. The database is also an expensive component within the application system, requiring hardware, software and personnel. Finally, the database uses the file system to store the data for the application, which is also easily tracked.

This brings up an interesting point. A chargeback system is largely a tracking exercise. Using the methodology in this article, you will choose what to track, how to track it, and how to report on the tracking.

The second part of the chargeback system is to determine what to charge for each element you track. This is highly variable, and depends a great deal on the spread of “taxes” versus “tolls” that you want to charge back to the business department. There is some level of cost the business must carry to have IT in the building — things like power, the management staff, and even the tracking exercise itself. These are elements that are common to the business function just like utilities, phones and other parts of infrastructure of the business.

The process below will help you determine which level of tracking you want to provide, and then you will work with the business to determine the cost of each component, summing those per measured department or user. Combining these two parts gives you elements to which you can assign a cost, divided out by time or unit of use. This is similar to a Return on Assets (ROA) calculation. In this type of calculation, you consider the cost of the operation of the application, and then divide that by the number of users and their use of the system. I’ll provide a simple model implementation of this methodology and a more complex version in a moment for you to use as a guide.

If you use this model, every department ends up with a “per unit” rather than “per seat” cost for IT. There are several factors to consider in this cost model:

  • Hardware Cost (including initial purchase and maintenance)
  • Software Licenses and Maintenance
  • Personnel
  • Power
  • Other infrastructure requirements (extended security or fire suppression systems)
  • High Availability
  • Disaster Response
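One simple way to fold the factors above into a single per-unit rate is to amortize them into an annual figure and divide by total measured use. This is only a sketch — the cost figures and the choice of CPU-seconds as the unit are assumptions for illustration:

```python
# Amortize the fixed cost factors into one annual number, then divide
# by the total measured units of use (CPU-seconds here) to get a
# per-unit chargeback rate. All figures are hypothetical.

FIXED_COSTS = {
    "hardware": 60_000,          # purchase amortized over its life, plus maintenance
    "software": 45_000,          # licenses and maintenance
    "personnel": 150_000,
    "power": 12_000,
    "infrastructure": 8_000,     # extended security, fire suppression
    "high_availability": 30_000,
    "disaster_response": 20_000,
}

def per_unit_rate(fixed_costs, total_units):
    """Annual fixed costs spread across every measured unit of use."""
    return sum(fixed_costs.values()) / total_units

# e.g. 5 million CPU-seconds consumed across all departments per year
rate = per_unit_rate(FIXED_COSTS, total_units=5_000_000)
```

As the article notes, this smoothed rate would normally be recalculated only every two to three years rather than per budget cycle.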

These costs are not “fixed.” It is assumed that you will depreciate the costs of the hardware over time and buy new hardware; there will be consolidation of resources, the personnel costs change over time, and so on. Normally you “smooth” these costs into a single number that is adjusted every two to three years, creating a new chargeback rate. But you do need to break down some of the costs, such as the database server, into a variable number that can be tracked. After all, it’s difficult to break down how much power, licensing and fire suppression is used for a single request!

So the first part of the methodology is to record these general costs, along with any others that are particular to your situation. Depending on the method you choose for assigning these costs, you’ll amortize the cost over a unit of component measurement (such as CPU or memory) or create an absolute value for that cost. I’ll explain this decision further in the examples I’ll show you in the next installment of this article.

Determining the Components to Track

The next step is to break down those cost areas into the components involved for each. For the costs that do not vary for a single request, you can simply amortize the amount per year, and fold it in to the final calculation. For the costs that do vary per request, you need to detail the components that are used.

For the server there are four major components involved in the application call to the database:

  1. CPU
  2. Memory
  3. I/O
  4. Network
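Once per-unit rates exist for the components you decide to track, a department or database’s variable charge is just usage multiplied by rate, summed across components. A hedged sketch, with hypothetical rates and usage figures:

```python
# Combine tracked component usage for one database with assumed
# per-unit rates. Rates and usage figures are hypothetical.

RATES = {
    "cpu_seconds": 0.01,        # rate per CPU-second
    "memory_gb_hours": 0.02,    # rate per GB-hour of memory
    "io_operations": 0.000001,  # rate per read or write
}

def database_charge(usage, rates):
    """Sum each tracked component's use multiplied by its rate."""
    return sum(usage[component] * rates[component] for component in rates)

usage = {
    "cpu_seconds": 120_000,
    "memory_gb_hours": 8_000,
    "io_operations": 50_000_000,
}
charge = database_charge(usage, RATES)  # cpu + memory + I/O charges, roughly 1410
```

Network is deliberately absent from the tracked components here, matching the article’s advice to fold it into the amortized cost instead.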

The CPU element can be tracked at many levels — at the server, or down at the database level. Most database chargeback schemes track to the database level, since you may have multiple instances of SQL Server installed on a single server, and you’ll want to charge each application user only for their use of the CPU element, not for things like backup or maintenance time.

Memory is another element that can be tracked per user and request. It is added to the mix of the variable components to be tracked. Once again, the database memory used is the best metric to track.

I/O, or the disk subsystem, actually has two sub-components: the amount of storage, and the transfer of data back and forth. You can track both at the database level. If you notice that the application is storage-intensive in a forward-growth pattern, then you will want to track only the file size growth. If the application grows the data only slightly, or adds and removes data in a consistent pattern, then you should track only I/O transfers (reads and writes). If it does both, track both elements.
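The two I/O sub-components reduce to two simple measurements taken between periodic snapshots. A minimal sketch, with hypothetical snapshot values:

```python
# The two I/O sub-components, computed from periodic snapshots:
# net file growth (storage) and total read/write transfers.
# Snapshot values are hypothetical.

def storage_growth_mb(start_size_mb, end_size_mb):
    """Net file growth between two measurements; shrinkage bills as zero."""
    return max(end_size_mb - start_size_mb, 0)

def io_transfers(reads, writes):
    """Total I/O operations over the measured period."""
    return reads + writes

growth = storage_growth_mb(10_240, 11_264)    # 1024 MB grown this period
total_io = io_transfers(1_500_000, 400_000)   # 1900000 operations this period
```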

The network transfer is probably the most difficult metric to track, since this depends on how the traffic is generated and routed. Although it is possible to track at the database level, it requires a great deal of instrumenting. Most of the time this metric is not used in chargeback systems directly, but is part of the amortized cost.

You don’t have to track each and every one of these metrics. In the simple model I’ll show you in a moment, I’ll track only the CPU and Memory use per application. The rest of the costs will simply be divided into the calculation to create the cost for the application.

You may, however, decide to put a very fine point on the tracking, so that you have a very accurate cost model. If you decide to go this route, make sure that the planning, implementation and maintenance of the tracking process is worth the effort.

Determining the Granularity of the Component Tracking

The next part of the methodology is to decide which level of detail you want to track and how often you want to track it. For instance, for the I/O file growth, you may want to take a weekly or monthly measurement and simply compare the start and end values. If you measure the CPU use, you will probably want to measure far more often, even if you report on it monthly.

Determining the Owner of the Tracking Component

Next, you will need to find out if you can track the component back to the user or application. This is not always possible, such as the considerations for High Availability or power. If the component does not have a single “owner,” that makes the component a candidate to be included only in the amortized cost value.

In many cases you can track the user or at least the application back to the calling transaction, given a few conditions:

  • The application name is available (set by the developer of the application)
  • The user is using a login to the database server, not an application or common login
  • The transactions are not involved in a “middle tier” system that mixes the calls between applications to the same data source

Most of the time the application you’re tracking is hooked directly to a database, so all calls to that database are made by a single department or set of users, and tracking down the user is not difficult.
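When the application name is recorded with each tracked session, attributing resource use back to an owner becomes a grouping exercise. A minimal sketch — the session records and field names are hypothetical, standing in for whatever your tracking tool collects:

```python
from collections import defaultdict

# Attribute tracked resource use to applications by the application
# name recorded with each session. Sample records are hypothetical.

sessions = [
    {"app": "PayrollApp", "cpu_seconds": 300},
    {"app": "PayrollApp", "cpu_seconds": 150},
    {"app": "ReportingPortal", "cpu_seconds": 900},
]

def cpu_by_application(records):
    """Total CPU-seconds per application name."""
    totals = defaultdict(int)
    for record in records:
        totals[record["app"]] += record["cpu_seconds"]
    return dict(totals)

usage = cpu_by_application(sessions)
# {'PayrollApp': 450, 'ReportingPortal': 900}
```

This only works under the conditions listed above — if a middle tier funnels every application through one common login, the grouping key is lost before it reaches the database.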

Selecting Tools and Processes for Tracking

The simplest part of the process is selecting the tools you need to track the calls to the database and each component’s use. You have multiple tools to choose from, and in this article I’ll stick with those you have available in the operating system and SQL Server, focusing on version 2008 which has enhanced tracking capabilities, although many of the features I’ll reference are available in earlier versions as well.

In many ways the chargeback system is similar to Performance Tuning, and it is even simpler to create and implement. In fact, you may be able to pull all of the metrics you need from the tracking you’re already doing for Performance Tuning.

For instance, assume that you’re tracking I/O use and you’re using the Management Data Warehouse feature in SQL Server 2008. This query pulls the total read and write activity for each database from the monitoring database that the system already provides:

SELECT database_name
, SUM(num_of_reads) AS 'Reads'
, SUM(num_of_writes) AS 'Writes'
, SUM(num_of_reads) + SUM(num_of_writes) AS 'TotalIO'
FROM snapshots.io_virtual_file_stats
GROUP BY database_name
ORDER BY TotalIO DESC

But even if you are not using a defined monitoring system for performance, you can use other features to track database use. Note that you will probably rely on a mix of tools and processes to collect the tracking data, so you should read and understand each of these before you develop your solution. In the next installment of this tutorial I’ll show you how to implement many of these features.

The Windows Server Operating System

If your application is tied directly to a database, you can use the Windows operating system to track the use of the system.

The first option you have is the Windows System Resource Monitor (WSRM), available in the Enterprise or Datacenter editions of the operating system. If you’re not familiar with this tool, check this link to learn how to implement it.

To use this feature to track memory, CPU and I/O requests for SQL Server, turn on the monitoring and then run the wsrmc command-line tool with this query to see the data it has collected for SQL Server:

wsrmc /get:acc /where:"[process name] exactly equals 'sqlservr.exe'" /groupby:"command line" \\<servername>

The “servername” variable is the name of your SQL Server system. Although the WSRM feature does not use SQL Server as a storage engine, you can import the data from that command into tables with the same column names it reports, using the documentation at the reference I mentioned earlier. From there, you can query the data this way:

SELECT 	[Process Id], [Creation Time],
	Max([Policy Set Time]) as 'Policy Set Time', 
	Max([Time Stamp]) as 'Time Stamp',
	Max([Process Name]) as 'Process Name',
	Max([Process Matching Criteria]) as 'Process Matching Criteria',
	Max([Policy Name]) as 'Policy Name',
	Max([Executable Path]) as 'Executable Path',
	Max([User]) as 'User',
	Max([Domain]) as 'Domain',
	Max([Command Line]) as 'Command Line',
	(Max([Elapsed Time]) - Min([Elapsed Time])) as 'Elapsed Time',
	(Max([Kernel Mode Time]) - Min([Kernel Mode Time])) as 'Kernel Mode Time',
	(Max([User Mode Time]) - Min([User Mode Time])) as 'User Mode Time',
	(Max([Total CPU Time]) - Min([Total CPU Time])) as 'Total CPU Time',
	Avg([Thread Count]) as 'Thread Count',
	Max([Session Id]) as 'Session Id',
	Max([Peak Virtual Size]) as 'Peak Virtual Size',
	Avg([Virtual Size]) as 'Virtual Size',
	(Max([Page Fault Count]) - Min([Page Fault Count])) as 'Page Fault Count',
	Avg([Private Page Count]) as 'Private Page Count',
	Max([Peak Working Set Size]) as 'Peak Working Set Size',
	Avg([Working Set Size]) as 'Working Set Size',
	Avg([Page File Usage]) as 'Page File Usage',
	Max([Peak Page File Usage]) as 'Peak Page File Usage',
	(Max([Read Operation Count]) - Min([Read Operation Count])) as 'Read Operation Count',
	(Max([Read Transfer Count]) - Min([Read Transfer Count])) as 'Read Transfer Count',
	(Max([Write Operation Count]) - Min([Write Operation Count])) as 'Write Operation Count',
	(Max([Write Transfer Count]) - Min([Write Transfer Count])) as 'Write Transfer Count',
	(Max([Other Operation Count]) - Min([Other Operation Count])) as 'Other Operation Count',
	(Max([Other Transfer Count]) - Min([Other Transfer Count])) as 'Other Transfer Count',
	Avg([Quota Non Paged Pool Usage]) as 'Quota Non Paged Pool Usage',
	Avg([Quota Paged Pool Usage]) as 'Quota Paged Pool Usage',
	Max([Quota Peak Non Paged Pool Usage]) as 'Quota Peak Non Paged Pool Usage',
	Max([Quota Peak Paged Pool Usage]) as 'Quota Peak Paged Pool Usage' 
FROM <Accounting Raw data source>
WHERE (NOT ([Creation Time] IS NULL) AND [Time Stamp] >= '<Scope Start Date>' AND [Time Stamp] < '<Scope End Date>')
GROUP BY [Creation Time], [Process Id], [Policy Name], [Policy Set Time], [Process Matching Criteria]

You can also use the Windows System Monitor (sometimes incorrectly referred to as “Perfmon”) to track SQL Server access. This tool gives you finer-grained control over reporting on the Instances and databases on the system, but less visibility into the users and processes. Still, in a simple situation this process works well.

If you’re not familiar with this tool, you can read more about it here. I recommend that you track the data into a SQL Server Instance, separate from the one you’re monitoring. The objects and counters that are relevant for chargeback tracking are as follows:

Object                        Counter                  Description
SQLServer:Databases           Active Transactions      Number of active transactions for the database
SQLServer:Databases           Data File(s) Size (KB)   Cumulative size (in kilobytes) of all the data files in the database
SQLServer:General Statistics  User Connections         Counts the number of users currently connected to SQL Server
SQLServer:Transactions        Transactions             The number of currently active transactions

SQL Server System Views and Dynamic Management Views

You can also run queries from SQL Server directly to access the same System Monitor counters shown in the table above. For older systems, such as SQL Server 2000, you can use the sysperfinfo system table in the master database:

SELECT * 
FROM master.dbo.sysperfinfo;
GO

For SQL Server 2005 and higher, you can use the new sys.dm_os_performance_counters Dynamic Management View to get the objects and counters:

SELECT *
FROM sys.dm_os_performance_counters;
GO

In both cases, you can output the results of the query to a table in another database, and then query that table for the historical data for the chargeback.
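A minimal sketch of that approach follows, assuming a monitoring database you create yourself (the ChargebackDB name and CounterHistory table are hypothetical). The first statement runs once; the second runs on a schedule, for instance from a SQL Agent job:

```sql
-- Run once: create a history table in your own monitoring database
CREATE TABLE ChargebackDB.dbo.CounterHistory (
    capture_time   datetime    NOT NULL DEFAULT (GETDATE()),
    object_name    nchar(128)  NOT NULL,
    counter_name   nchar(128)  NOT NULL,
    instance_name  nchar(128)  NULL,
    cntr_value     bigint      NOT NULL
);

-- Schedule this to build history over time (SQL Server 2005 and higher)
INSERT INTO ChargebackDB.dbo.CounterHistory
    (object_name, counter_name, instance_name, cntr_value)
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Active Transactions',
                       N'Data File(s) Size (KB)',
                       N'User Connections');
```

Querying CounterHistory by capture_time then gives you the start-to-end deltas for each billing period.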

These tables and views still only surface the Windows System Monitor counters. If you’re implementing the chargeback solution in SQL Server using Transact-SQL queries, you should use other Dynamic Management Views (DMVs), functions and tables to find not only the measurements, but the user or application data as well. Here are the metadata sources you can use to find data on CPU, I/O, memory and network use for a chargeback system:

Source                        Type          Description
sys.dm_db_file_space_usage    DMV           Gives you space usage information for each file in the database
sys.database_files            Catalog View  Shows space used by each file
sys.dm_io_virtual_file_stats  DMF           I/O read and write information for each database file
sys.dm_exec_sessions          DMV           Shows information about all active user connections and internal tasks, including client and program names, login times, memory and I/O use
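For example, sys.dm_io_virtual_file_stats is actually a dynamic management function that takes a database ID and a file ID (NULL means all). This sketch totals the reads and writes per database since the last SQL Server restart:

```sql
-- Total I/O per database since the last restart (SQL Server 2005 and higher)
SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads)   AS total_reads,
       SUM(vfs.num_of_writes)  AS total_writes,
       SUM(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS total_bytes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs  -- NULL, NULL = all databases, all files
GROUP BY vfs.database_id
ORDER BY total_bytes DESC;
```

Because the counters reset on restart, you would capture this result on a schedule (as with the counter history above) and difference the snapshots for each billing period.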

Once again, I’ll show you how I implement a combination of these views for a tracking feature in SQL Server for both a simple and a complex example of chargeback in the next tutorial in this series.

SQL Server Profiler and SQL Trace

The SQL Server Profiler tool can “watch” the activity on the server and record the information to a file called a trace file. It can also store the data directly in a monitoring database, or you can export the data from the trace file to the database later. SQL Trace is the command version of this graphical tool. I’ve explained how to use the SQL Server Profiler in another tutorial that you can find here. You can find more on SQL Trace here.

In both cases, the pertinent event classes to watch for your chargeback system are as follows:

Event Class         Description
Audit Login         Tracks user logins
Audit Logout        Tracks user logouts
SQL:BatchStarting   Shows the start of a SQL batch
SQL:BatchCompleted  Shows the completion of a SQL batch
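A server-side SQL Trace capturing these event classes can be sketched as follows; the file path is an assumption, and in a real system you would add more columns and a filter on the database of interest. Event 14 is Audit Login and event 12 is SQL:BatchCompleted; columns 1, 10, 11 and 13 are TextData, ApplicationName, LoginName and Duration:

```sql
-- Server-side trace sketch: capture logins and completed batches
DECLARE @TraceID int;
DECLARE @maxfilesize bigint = 5;
DECLARE @on bit = 1;

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\Chargeback', @maxfilesize, NULL;  -- hypothetical path

EXEC sp_trace_setevent @TraceID, 14, 11, @on;  -- Audit Login: LoginName
EXEC sp_trace_setevent @TraceID, 14, 10, @on;  -- Audit Login: ApplicationName
EXEC sp_trace_setevent @TraceID, 12, 11, @on;  -- SQL:BatchCompleted: LoginName
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- SQL:BatchCompleted: TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted: Duration

EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace
```

You can later load the trace file into a table with fn_trace_gettable and aggregate duration by login or application for the chargeback report.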

SQL Server Audit

Beginning in SQL Server 2008, Microsoft includes a new eventing infrastructure called Extended Events. While you can use Extended Events directly to create your chargeback system, there is another feature called SQL Server Audit that is built on top of it. SQL Server Audit is useful because you can use it to track activity all the way down to individual SELECT statements and the like.

To use this feature, you create a SQL Server Audit Object or Database Audit Object, depending on what you want to track. Server and Database Audit Objects that are interesting for a chargeback solution include:

Object Type  Object Name             Description
Server       SUCCESSFUL_LOGIN_GROUP  Indicates that a principal has successfully logged in to SQL Server
Server       LOGOUT_GROUP            Indicates that a principal has logged out of SQL Server
Database     SELECT                  This event is raised whenever a SELECT is issued
Database     UPDATE                  This event is raised whenever an UPDATE is issued
Database     INSERT                  This event is raised whenever an INSERT is issued
Database     DELETE                  This event is raised whenever a DELETE is issued
Database     EXECUTE                 This event is raised whenever an EXECUTE is issued
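A minimal sketch of the Database Audit Object approach follows (SQL Server 2008 and higher). The audit name, file path and target database are assumptions; adjust them for your environment:

```sql
USE master;
GO
-- Server-level audit object that writes events to a file target
CREATE SERVER AUDIT ChargebackAudit
    TO FILE (FILEPATH = N'C:\AuditLogs\');   -- hypothetical path
ALTER SERVER AUDIT ChargebackAudit WITH (STATE = ON);
GO

USE SalesDB;   -- hypothetical database you want to bill for
GO
-- Database-level specification: record SELECT activity against one schema
CREATE DATABASE AUDIT SPECIFICATION ChargebackDbSpec
    FOR SERVER AUDIT ChargebackAudit
    ADD (SELECT ON SCHEMA::dbo BY public)
    WITH (STATE = ON);
GO

-- Read the collected events back with the audit file function
SELECT event_time, server_principal_name, database_name, statement
FROM sys.fn_get_audit_file(N'C:\AuditLogs\*', DEFAULT, DEFAULT);
```

Auditing every SELECT generates a lot of data, so in practice you would scope the specification to the objects you actually bill for.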

SQL Server Data Collector

Earlier I referenced this new SQL Server feature, and I’ve written an article on the Data Collector here. I won’t cover it again in this tutorial, since it really is a method to automate the collection of Windows System Counters, Transact-SQL statements and Profiler Trace Events into a single database. But that very behavior makes this feature an ideal candidate for your chargeback system. By implementing custom collectors, you can track everything in one place, automatically, with a rolling data archival schedule built right in, all to a central location. Not only that, you can export the custom collectors to other systems to make a repeatable methodology for your process. This is everything you’re looking for in a chargeback system.

The catch is that the Data Collector is a SQL Server 2008 Enterprise feature only; you have to have both the right version and the right edition to make it work. But if you have that environment, this is the route you should follow to create your system.

Reporting the Tracking Results

Regardless of which tool or mix of tools you use to track the data you want to charge back to the organization, you need some way of getting that data to the business.

One of the simplest methods is to export the data into an Excel spreadsheet using either SQL Server Integration Services or Reporting Services. The beauty of this approach is that many business budgets are maintained in Excel anyway, so delivering the data this way fits into their current processes.

You can also create a report using either HTML with custom code or by using Reporting Services that displays the information along with the charges, posted to a location the appropriate parties can access.

Another method is to export the data into another format that the users can import into their own systems.

In any case, you want to make sure you archive this data, so that the department can periodically review the system’s use and its associated cost for future budget forecasting. This helps the business evaluate the true cost of operations.

Building a chargeback system is not a “cookie-cutter” exercise; no single template or tool fits all situations. It’s a process that you and your business contacts will work out over time to fit the needs you have to track and bill for your system resources. In the next tutorial I’ll show you some examples of the measurements, collection methods and reporting for a chargeback system.

InformIT Articles and Sample Chapters

In The IT Utility Model — Part 1, Sun Microsystems describes chargeback systems.

Books and eBooks

The work referenced above is Consolidation in the Data Center: Simplifying IT Environments to Reduce Total Cost of Ownership.

Online Resources

In Planning for Consolidation with Microsoft SQL Server 2000, my good friend Allen Hirt collaborated on a whitepaper on consolidation strategies which includes some great stored procedures for chargeback.
