
Monitoring and Tuning Analysis Services with SQL Profiler, Part 2

In part 1 of this series, database administration expert Baya Dewald provided a quick overview of the options for recording events in your SQL activities. Part 2 concludes the series with a discussion of how to use SQL Profiler to analyze and adjust your MDX queries, your database's processing, and more, with the goal of streamlining everything for best performance.

Monitoring and Tuning MDX Query Performance

In part 1 of this series, I discussed Analysis Services trace events and columns, providing examples of when collecting each event and column would be useful. This article concludes our examination of how to monitor and tune your SSAS cubes. First we'll focus on troubleshooting querying, processing, and synchronization performance. Then I'll describe the framework for using Profiler for load testing and collecting Profiler traces for historical performance analysis.

Queries for Storage-Engine Bottlenecks

Generally, MDX queries can have a storage-engine bottleneck, a formula-engine bottleneck, or both. For queries bottlenecked at the storage engine, Profiler includes events that report reading data from one or more partitions. The combined duration of all partition reads will be roughly equal to the total duration of the query. If you run the query twice in a row without clearing the cache, it will execute considerably faster the second time, because SSAS retrieves the data from cache instead of having to read it from disk.

For example, suppose I run the following query against the Adventure Works DW database to demonstrate Profiler output while reading data from multiple partitions:

SELECT measures.members ON 0,
[product].[product categories].[product name].members ON 1
FROM [adventure works]

This simplistic query retrieves all measures on the columns axis and all product names on the rows axis. Normally we hope that users would run more focused queries that examine only a subset of measures and products, but we could also expect to see more measures and more products in a real-world application, compared to those available in the sample database. Figure 1 shows what we would see in the Profiler as a result of this query.

As shown in Figure 1, the query read multiple partitions and examined multiple subcubes that weren't found in cache (notice Event Subclass "Non-cache data"). I highlighted the partition that took the longest to read, "Fact Internet Sales." Now I'll execute the query a second time and check the output again (see Figure 2).

The Profiler shows a very different set of events for the second execution. All subcubes were found in cache; the query didn't read any data from partitions. The query took three seconds to execute the first time, while the second execution only took 203 milliseconds. So the first point to keep in mind is that queries bottlenecked at the storage engine can benefit from caching.
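When comparing cold-cache and warm-cache timings like this, it helps to flush the cache explicitly between runs rather than restarting the service. You can do this with the XMLA ClearCache command, executed from a SQL Server Management Studio XMLA query window. (The DatabaseID shown is an assumption; an object's ID can differ from its name, so check it in your database's properties.)

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW</DatabaseID>
  </Object>
</ClearCache>

After running this command, the next execution of the query will show the "Non-cache data" events again, giving you a repeatable cold-cache baseline.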

Option 1: Warm Up the SSAS Storage-Engine Cache with a Set of Simple Standard Queries

Now suppose you have an application with a dashboard or typical set of queries that users run each day. You could write a simple script using the ascmd.exe command-line tool that submits commonly executed queries immediately after the cubes are processed, to "warm up" the SSAS storage-engine cache. When users log onto the dashboard, or when they run typical reports, the data will be found in cache, and queries will perform well.

But what if we don't have the luxury of keeping data in SSAS cache from the time objects are processed until the common queries are executed? Perhaps another set of users run ad hoc queries that cannot be predicted ahead of time. Furthermore, queries will be evicted from cache when the SSAS engine requires memory for other operations. Let's look at another tuning possibility.

Option 2: Build Aggregations for Faster Reading

The next tuning option is to create aggregations. You can define a different set of aggregations for each partition, but generally it's convenient to have the same aggregations for partitions that are typically examined by the same queries. In the sample query in Figure 1, we noticed that the "Fact Internet Sales" partition took 250 milliseconds to read. In realistic queries, it's not uncommon to see each partition scan taking multiple seconds or even minutes, depending on the partition's size. Let's see what Profiler shows after I add aggregations to the same partition (see Figure 3).
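Once aggregations are assigned to a partition, they must be built before queries can use them. One way to do this without fully reprocessing the partition's data is the XMLA Process command with the ProcessIndexes option, which rebuilds indexes and aggregations only. The object IDs below are illustrative assumptions; substitute the actual IDs from your database.

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_2004</PartitionID>
  </Object>
  <Type>ProcessIndexes</Type>
</Process>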

Reading an aggregation for the same partition took only 78 milliseconds, versus the original 250 milliseconds. This query is executed against a tiny sample database, so the savings of 172 milliseconds might not sound like much, but it's a 69% improvement over the original read. For large cubes, aggregations can improve performance quite dramatically.

Option 3: Alter Your Partition Scheme to Decrease Partition Size

You can further tune queries bound at the storage engine by altering your partitioning scheme. Normally each measure group is partitioned by day or month. If you suspect that many of your queries will be focused on specific product categories, you could further divide daily partitions by product category. For example, a query reading road bike sales for June 1, 2011 will only need to examine a small partition specific to this category and date. If the original partition for June 1, 2011 is 5GB, and after altering the partition scheme the road bike partition for that day is 1GB, the query scans the 1GB partition instead of the 5GB one, cutting the scan work (and roughly the scan time) by about four-fifths.
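Each finer-grained partition would be bound to a source query that restricts both the date and the category. The sketch below illustrates the idea; the table and column names are assumptions, not the actual Adventure Works DW schema.

-- Hypothetical source query for the road-bike partition
-- covering June 1, 2011 only (schema names are assumptions)
SELECT f.*
FROM FactInternetSales AS f
JOIN DimProduct AS p ON p.ProductKey = f.ProductKey
WHERE f.OrderDateKey = 20110601
  AND p.ProductCategory = 'Road Bikes'

The trade-off is administrative: many small partitions mean more objects to create, process, and merge, so this scheme pays off only when the query patterns really do align with the extra partitioning key.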
