This chapter is from the book

Managing Memory for In-Memory OLTP

When running In-Memory OLTP, SQL Server must be configured with sufficient memory to hold all of your memory-optimized tables. If available memory is insufficient, transactions fail at run time during any operation that requires additional memory. This normally happens during INSERT or UPDATE operations, but it can also happen during a DELETE against a memory-optimized nonclustered index if the DELETE generates a page merge operation which, as discussed previously in this chapter, requires allocating additional pages.

If the system runs out of memory during memory-optimized table transactions and you cannot add more memory to the SQL Server instance, you may have to drop some of your less critical memory-optimized tables to free up memory space.

A general rule of thumb is that you should have at least two times the amount of memory that you estimate the data itself will require. Beyond that, the total memory requirement depends on your workload profile. If there are a lot of data modifications generated by your OLTP operations, you’ll need more memory for the row versions than if you’re primarily performing reads of your memory-optimized tables. The current maximum amount of memory available for memory-optimized tables is 256GB.

To estimate the amount of memory required for a memory-optimized table’s rows, consider that each row consists of three components:

  • Row header/timestamps—24 bytes per row.
  • Index pointers—Each row includes one 8-byte pointer to the next row in the table for each index. The space required per row is the number of hash indexes times 8 bytes.
  • Data—The size of the data portion of the row is determined by adding up the number of bytes for each data column. If any columns are variable length, use the max size in your calculations.

The total row size is the sum of these three components. Assume that you’ll be storing 1 million rows in a memory-optimized table that has 2 hash indexes and whose data columns total 200 bytes. The space required per row would be:

  • 24 bytes
  • + 16 bytes (2 index pointers @ 8 bytes each)
  • + 200 bytes
  • = 240 bytes per row

From the above calculations, the size of each row in the memory-optimized table is 240 bytes. If the table held 1 million rows, it would consume 1,000,000 * 240 bytes, or 240,000,000 bytes—approximately 240MB.
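As an illustration, a table matching this sizing example might be declared as follows. The table and column names are hypothetical, chosen so that the two 4-byte int columns plus the char(192) column total 200 data bytes per row, with two hash indexes:

```sql
-- Hypothetical table matching the sizing example:
-- 2 hash indexes, data columns totaling 200 bytes per row
CREATE TABLE SalesCache
(
    OrderID    int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID int NOT NULL
        INDEX ix_customer HASH WITH (BUCKET_COUNT = 1000000),
    OrderData  char(192) NOT NULL   -- 4 + 4 + 192 = 200 data bytes
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```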

Estimating memory requirements for hash indexes is pretty straightforward. Each bucket requires 8 bytes, so you can just estimate the number of hash buckets you need and multiply it by 8 bytes.
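The arithmetic can be sketched as follows. Note that SQL Server rounds the BUCKET_COUNT you specify up to the next power of two, so requesting 1,000,000 buckets actually allocates 1,048,576:

```sql
-- Estimated memory for a hash index with a requested BUCKET_COUNT
-- of 1,000,000, rounded up to 2^20 = 1,048,576 buckets
DECLARE @bucket_count bigint = 1048576;
SELECT @bucket_count * 8 / 1024 AS hash_index_kb;  -- 8192KB, about 8MB
```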

The size for your memory-optimized nonclustered indexes depends on both the size of the index key and the number of rows in the table. You can assume each index row is 8 bytes plus the size of the index key (assume N bytes), so the maximum number of rows that fit on a page would be 8176/(N+8). Divide that result into the expected number of rows to get an initial estimate.
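Continuing the 1-million-row example with an assumed 8-byte index key, the estimate might be sketched as:

```sql
-- Rough sizing for a memory-optimized nonclustered index:
-- assumed 8-byte key (N = 8), 1 million rows, 8KB pages
DECLARE @rows int = 1000000, @key_bytes int = 8;
DECLARE @rows_per_page int = 8176 / (@key_bytes + 8);  -- 511 rows per page
SELECT @rows / @rows_per_page       AS leaf_pages,     -- 1,956 pages
       (@rows / @rows_per_page) * 8 AS approx_kb;      -- about 15.3MB
```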

After calculating all of the above, multiply the total by 2 to ensure you have enough space for data growth as well as for row versioning.

Monitoring Memory Usage

To keep an eye on memory usage by memory-optimized tables and their indexes, SSMS provides a new report, Memory Usage by Memory-Optimized Objects. To bring up this report, right-click on the database, select Reports, select Standard, and click on Memory Usage By Memory-Optimized Objects. This brings up a report like that shown in Figure 33.15.

Figure 33.15 Memory Usage By Memory-Optimized Objects Report.

Alternatively, you can get information on the memory usage of your memory-optimized tables by querying the sys.dm_db_xtp_table_memory_stats DMV:

SELECT CAST(OBJECT_NAME(object_id) AS varchar(24)) AS tablename,
       memory_allocated_for_table_kb AS table_alloc_kb,
       memory_used_by_table_kb AS table_used_kb,
       memory_allocated_for_indexes_kb AS idx_alloc_kb,
       memory_used_by_indexes_kb AS idx_used_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0


tablename                table_alloc_kb table_used_kb idx_alloc_kb idx_used_kb
------------------------ -------------- ------------- ------------ -----------
salesorderdetail_newce   16945          15797         2048         2048

Managing Memory with the Resource Governor

In SQL Server 2014, you can use the Resource Governor to manage the amount of memory available for In-Memory OLTP databases and objects. You can bind a database containing memory-optimized tables to a resource pool and assign the desired amount of memory to the pool; that amount becomes the maximum amount of memory available to the memory-optimized objects in that database.

The first step is to create a resource pool for the In-Memory OLTP database, specifying a MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT of the same value. This will ensure that this fixed percentage of available memory is always available for the In-Memory OLTP database.
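For example, using the InMemPool name referenced below and an assumed allocation of 50 percent:

```sql
-- Hypothetical pool reserving a fixed 50% of server memory for
-- In-Memory OLTP; equal MIN and MAX values fix the allocation
CREATE RESOURCE POOL InMemPool
    WITH (MIN_MEMORY_PERCENT = 50, MAX_MEMORY_PERCENT = 50);

ALTER RESOURCE GOVERNOR RECONFIGURE;
```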


Next, bind the databases to the resource pool you just created, using the procedure sp_xtp_bind_db_resource_pool.

EXEC sp_xtp_bind_db_resource_pool 'AdventureWorks2012', 'InMemPool';

The above command returns a message instructing you to take the database offline and bring it back online in order for it to begin using the resource pool. SQL Server assigns memory to a resource pool as it is allocated, so simply associating a database with a pool does not transfer the assignment of any memory already allocated via the previous pool (most likely the default pool). To activate the binding to the new pool, take the database offline and bring it back online (after making sure there are no active user sessions in the database):

USE master
GO
ALTER DATABASE AdventureWorks2012 SET OFFLINE
GO
ALTER DATABASE AdventureWorks2012 SET ONLINE
GO

As SQL Server reads data into the memory-optimized tables, it will associate the memory with the new resource pool.
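Should you later need to remove the binding (for example, to move the database back to the default pool), the companion procedure sp_xtp_unbind_db_resource_pool reverses it; the same offline/online cycle applies before the memory is accounted against the default pool again:

```sql
EXEC sp_xtp_unbind_db_resource_pool 'AdventureWorks2012';
```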
