This chapter is from the book

Migrating to In-Memory OLTP

In-Memory OLTP is not necessarily a solution for every type of relational database performance problem. So when should you, and when should you not, consider implementing In-Memory OLTP, and which tables would make good candidates? The following scenarios are situations where you might be able to take advantage of the benefits of In-Memory OLTP.

  • Applications that are incurring high lock/latch contention can alleviate this contention by converting the tables where the contention is occurring from disk-based tables to memory-optimized tables.
  • Applications that are incurring high I/O and logging latency. In-Memory OLTP can help alleviate excessive disk writes because most operations take place in memory, index updates are not written to disk, and logging and checkpoint operations are streamlined to reduce I/O.
  • Assuming that the business logic can be natively compiled, moving the data to memory-optimized tables and the T-SQL code to natively compiled stored procedures may reduce the response times and latency associated with poorly performing procedures.
  • Operations that require only read access to data and are suffering from CPU performance bottlenecks. By moving the data to In-Memory OLTP, it may be possible to significantly reduce CPU utilization, allowing you to achieve higher throughput.
  • If you perform ETL with data-staging and load phases that involve numerous operations, such as uploading data to staging tables in SQL Server, modifying the data, and then transferring the data to a target table, these types of operations can benefit significantly from using nondurable memory-optimized tables. Nondurable memory-optimized tables provide an efficient way to store staging data by completely eliminating both physical storage costs and transaction logging.
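As a sketch of the staging scenario above, a nondurable memory-optimized table is declared with DURABILITY = SCHEMA_ONLY, which persists only the table definition; row data is never logged or written to disk. The table and column names here are hypothetical, and the database is assumed to already contain a memory-optimized filegroup:

```sql
-- Hypothetical nondurable staging table for an ETL load phase.
-- SCHEMA_ONLY durability: the definition survives a restart, the rows do not,
-- so no transaction logging or checkpoint I/O is incurred for this table.
CREATE TABLE dbo.SalesStaging
(
    StagingID  INT      NOT NULL IDENTITY(1,1)
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    ProductID  INT      NOT NULL,
    OrderQty   SMALLINT NOT NULL,
    LineTotal  MONEY    NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Because the rows are lost on a restart, this option is appropriate only for data that can be reloaded from its source, which is normally the case for staging data.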

Applications that would be unsuitable for migration to In-Memory OLTP include:

  • Applications that require table features that are not supported by memory-optimized tables or if the application code for accessing and manipulating the table data uses constructs not supported for natively compiled procedures.
  • When the size of the tables exceeds what SQL Server In-Memory OLTP or a particular machine supports, you would not be able to keep all the required data in memory. You could try a hybrid approach, with some memory-optimized tables and some disk-based tables, but you'll need to analyze the workload carefully to identify those tables that will benefit most from migration to memory-optimized tables.
  • Applications that are not primarily OLTP workload oriented. In-Memory OLTP, as the name implies, was designed to be of most benefit to OLTP operations. You may experience improvements for other types of processing, such as reporting and data warehousing, but those are not the design goals of this feature. You should carefully test all operations to verify that In-Memory OLTP provides measurable improvements.
  • Applications that are highly dependent on the current locking behavior provided by pessimistic concurrency on disk-based tables. For example, an application might use the READPAST hint to manage work queues, which requires SQL Server to use locks in order to find the next row in the queue to process.
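The READPAST dependency described in the last bullet can be illustrated with a common lock-based work-queue pattern. This is a hedged sketch with hypothetical table and column names; the point is that it relies on row locks, which memory-optimized tables do not take, and the READPAST hint is not supported against them:

```sql
-- Hypothetical work-queue dequeue that depends on pessimistic locking.
-- READPAST skips rows locked by other sessions, so each concurrent worker
-- removes and processes a different queue row.
BEGIN TRANSACTION;

DELETE TOP (1)
FROM   dbo.WorkQueue WITH (READPAST, ROWLOCK)
OUTPUT deleted.QueueID, deleted.Payload;

-- ... process the dequeued row here; a rollback returns it to the queue ...

COMMIT TRANSACTION;
```

An application built around this behavior would need to be redesigned before its queue table could be migrated to In-Memory OLTP.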

To help you determine the suitability of migrating your existing databases, tables, or stored procedures to In-Memory OLTP, SQL Server provides a number of tools to aid in the assessment.

Using the AMR Tool

Before migrating any tables or stored procedures to use In-Memory OLTP, you ideally should perform a thorough analysis of your current workload to establish a baseline and identify whether the system is a candidate for migration to In-Memory OLTP. SQL Server 2014 provides a tool called Analysis, Migration and Reporting (AMR) to assist with the performance analysis prior to migrating to In-Memory OLTP.

If you chose to install the complete set of management tools during the installation process, the AMR tool will be included. You can use this tool to analyze your workload and provide recommendations on the tables and procedures that may benefit most from migrating to In-Memory OLTP.

AMR uses information collected by the Transaction Performance Collection Sets in the Management Data Warehouse (MDW) to produce a Transaction Performance Analysis Overview. After running a representative workload, or using the data collector to gather performance statistics on your production system, you can review the Transaction Performance Analysis Overview reports. To bring up the AMR reports, right-click your Management Data Warehouse database, select "Reports" from the context menu, then select "Management Data Warehouse" from the Reports submenu, and finally click "Transaction Performance Analysis Overview."

One of the reports available is the table usage analysis report, which identifies which tables are prime candidates for conversion to memory-optimized tables and provides an estimate of the size of the effort required to perform the conversion, based on how many unsupported features the table currently uses. For example, it points out unsupported data types and constraints used in the table. Another report contains recommendations on which procedures might benefit from being converted to natively compiled procedures for use with memory-optimized tables.

Based on recommendations from the MDW reports, you can start looking into converting some of the recommended tables into memory-optimized tables. It is recommended that you convert them one at a time, starting with the tables the report indicates would benefit most from being converted to memory-optimized tables. If you start seeing benefits from the conversion, you can continue to convert more of your tables. Initially, it's recommended that you continue accessing the memory-optimized tables using your normal T-SQL interface to minimize application changes. Once the appropriate tables have been converted, you can then start planning a rewrite of the code into natively compiled stored procedures, again starting with the ones that the MDW reports indicate would provide the most benefit.
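As a rough sketch of the final rewrite step, a natively compiled stored procedure over a hypothetical memory-optimized table (dbo.SalesOrderDetail_InMem is an assumed name) might look like the following. In SQL Server 2014, native compilation requires SCHEMABINDING, an explicit execution context, and an ATOMIC block that declares the transaction isolation level and language:

```sql
-- Hedged sketch of a natively compiled procedure; names are illustrative.
CREATE PROCEDURE dbo.usp_GetOrderDetail
    @SalesOrderID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- The entire procedure body runs as a single atomic block compiled
    -- to native machine code, avoiding per-statement interpretation.
    SELECT SalesOrderDetailID, ProductID, OrderQty
    FROM   dbo.SalesOrderDetail_InMem
    WHERE  SalesOrderID = @SalesOrderID;
END;
```

Note that natively compiled procedures can reference only memory-optimized tables, which is why the table conversion should come first.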

To assist in converting the tables and stored procedures, SQL Server 2014 provides two other tools, the Table Memory Optimization Advisor and the Native Compilation Advisor. The Native Compilation Advisor was discussed previously in the section on natively compiled stored procedures, which included some sample screenshots. Let's take a look at the Table Memory Optimization Advisor.

Using the Table Memory Optimization Advisor to Migrate Disk-Based Tables

After identifying one or more tables as candidates for conversion to memory-optimized tables, you can use the Table Memory Optimization Advisor to help migrate specific disk-based tables to memory-optimized tables.

The Memory Optimization Advisor can be launched by right-clicking the table in SSMS and then choosing the Memory Optimization Advisor option from the menu. The Memory Optimization Advisor is a wizard that walks you through various validation tests and provides migration warnings of potential issues with the table and/or indexes if you were to convert the table to a memory-optimized table. For example, Figure 33.16 shows the Migration Validation page for the SalesOrderDetail table in the AdventureWorks2012 database. As you can see, the SalesOrderDetail table contains a number of features that are not compatible with memory-optimized tables, so the buttons to continue the process are grayed out.

Figure 33.16

Figure 33.16 Memory Optimization Advisor Validation of SalesOrderDetail table.

If the table is a valid candidate for converting to a memory-optimized table, the wizard walks you through the migration steps, prompting for the information needed to set up the memory-optimized table: which memory-optimized filegroup to use, the logical file name and file path, the data durability option, the name to use when renaming the original table, whether to copy the current table data to the new table, which column to define as the primary key, and whether to migrate any other existing indexes. When defining the indexes, if any are defined as hash indexes, the wizard recommends a hash bucket count based on the table contents (you can override the recommended bucket count if you disagree). The wizard also provides an estimate of the current memory cost of the table.
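The script the wizard produces is equivalent in spirit to the following hedged sketch (all object names, paths, and the bucket count are illustrative, not what the wizard would emit for a real table). It assumes the database does not yet have a memory-optimized filegroup, so one is added first:

```sql
-- Add a memory-optimized filegroup and container (names/paths hypothetical).
ALTER DATABASE AdventureWorks2012
    ADD FILEGROUP AW_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE AdventureWorks2012
    ADD FILE (NAME = 'AW_mod_file', FILENAME = 'C:\Data\AW_mod')
    TO FILEGROUP AW_mod;

-- Rename the original disk-based table out of the way.
EXEC sp_rename 'dbo.ShoppingCart', 'ShoppingCart_disk';

-- Re-create the table memory-optimized, with a hash primary key.
CREATE TABLE dbo.ShoppingCart
(
    CartID     INT      NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
    CustomerID INT      NOT NULL,
    CreatedOn  DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Copy the existing data into the new memory-optimized table.
INSERT INTO dbo.ShoppingCart (CartID, CustomerID, CreatedOn)
SELECT CartID, CustomerID, CreatedOn
FROM   dbo.ShoppingCart_disk;
```

Reviewing a generated script like this before running it is a good habit, particularly the bucket count, which should be sized relative to the expected number of distinct key values.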

At the end of the wizard, you’ll be presented with a summary screen to verify the migration options and actions (see Figure 33.17). You also have the option on this screen to generate a script of the actions to be performed if you prefer to save the script and run it yourself. Otherwise you can click the migrate button and let the Table Memory Optimization Advisor perform the migration which you can monitor in the Migration progress page.

Figure 33.17

Figure 33.17 Memory Optimization Advisor migration summary for the SalesOrderDetail table.
