This chapter is from the book

Smart Flash Cache WriteBack

Prior to the Storage Server Software release associated with Exadata X3, the Exadata Smart Flash Cache was a write-through cache, meaning that write operations were applied both to the cache and to the underlying disk devices but were not signaled as complete until the I/O to the disk had completed.

Starting with that release of the Exadata Storage Server Software, the Exadata Smart Flash Cache may act as a write-back cache. This means that a write operation is made to the cache initially and de-staged to the grid disks at a later time. This can be effective in improving the performance of an Exadata system that is subject to I/O write bottlenecks on the Oracle data files.

Data File Write I/O Bottlenecks

As with earlier incarnations of the Exadata Smart Flash Cache, the write-back cache deals primarily with data file blocks; redo writes are optimized separately by the Exadata Smart Flash Logging feature.

Writes to data files generally happen as a background task in Oracle, and most of the time sessions do not wait directly on these I/Os. That being the case, what advantage can we expect if these writes are optimized? To understand the possible advantages of the write-back cache, let's review the nature of data file write I/O in Oracle and the symptoms that occur when write I/O becomes the bottleneck.

When a block in the buffer cache is modified, it is the responsibility of the DBWR to write these “dirty” blocks to disk. The DBWR does this continuously and uses asynchronous I/O processing, so generally sessions do not have to wait for the I/O to occur—the only time sessions wait directly on write I/O is when a redo log sync occurs following a COMMIT.

However, should all the buffers in the buffer cache become dirty, a session may wait when it wants to bring a block into the cache, resulting in a free buffer wait.

Figure 15.17 illustrates the phenomenon. User sessions wishing to bring new blocks into the buffer cache need to wait on free buffer waits until the Database Writer cleans out dirty blocks. Write complete waits may also be observed. These occur when a session tries to access a block that the DBWR is in the process of writing to disk.

Figure 15.17 Buffer cache operation and free buffer waits

Free buffer waits can occur in update-intensive workloads when the I/O bandwidth of the Oracle sessions reading into the cache exceeds the I/O bandwidth of the Database Writer. Because the Database Writer uses asynchronous parallelized write I/O, and because all processes concerned are accessing the same files, free buffer waits usually happen when the I/O subsystem can service reads faster than it can service writes.
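One way to check whether a system is encountering this condition is to query the wait interface. The following sketch (assuming access to the standard V$SYSTEM_EVENT view) reports the cumulative totals for the two wait events just described:

```sql
-- Cumulative waits for the buffer-related write bottleneck events
-- discussed above; non-zero, growing values suggest DBWR cannot
-- keep pace with sessions reading blocks into the cache.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000000) AS seconds_waited
  FROM v$system_event
 WHERE event IN ('free buffer waits', 'write complete waits')
 ORDER BY time_waited_micro DESC;
```

Comparing these figures before and after a workload run is more informative than the absolute values, since V$SYSTEM_EVENT accumulates from instance startup.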

Just such an imbalance between read and write latency exists in Exadata. The Exadata Smart Flash Cache accelerates reads by a factor of perhaps four to ten, while offering no comparable advantage for writes. As a result, a very busy Exadata X2 system could become bottlenecked on free buffer waits. The Exadata Smart Flash Cache write-back cache accelerates data file writes as well as reads and therefore reduces the chance of free buffer wait bottlenecks.

Write-Back Cache Architecture

Figure 15.18 illustrates the Exadata Smart Flash Cache write-back architecture. An Oracle process modifies a database block which is then dirty (1). The DBWR periodically sends these blocks to the Storage Cell for write (2). For eligible blocks (almost all blocks in the buffer cache will be eligible) the Storage Cell CELLSRV process writes the dirty blocks to the Flash Cache (3) and returns control to the DBWR. Later the CELLSRV writes the dirty block to the database files on the grid disk (4).

Figure 15.18 Exadata Smart Flash Cache write-back architecture

There is no particular urgency for CELLSRV to flush blocks to the grid disks, since any subsequent reads will be satisfied from the Flash Cache.

Furthermore, since the Exadata Smart Flash Cache is a persistent cache, there’s no reason to be concerned about data loss in the event of power failure. The write-back cache is also subject to the same redundancy policies as the underlying ASM-controlled grid disks, so even in the event of a catastrophic cell failure the data will be preserved.

Enabling and Disabling the Write-Back Cache

You can check whether the write-back cache is enabled by issuing the command list cell attributes flashcachemode. The flashCacheMode attribute returns writeThrough if the write-back cache is disabled and writeBack if it is enabled:

CellCLI> list cell attributes flashcachemode detail
         flashCacheMode:         writeback

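On a multi-cell system you would normally check every cell at once. A sketch using the dcli utility (assuming a cell_group file listing the storage cell hostnames) might look like this:

```
# Run the CellCLI query on every cell listed in cell_group
dcli -g cell_group cellcli -e "list cell attributes name,flashCacheMode"
```

Each line of output is prefixed with the cell name, making it easy to spot a cell whose mode differs from the rest.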
Enabling the cache is described in Oracle Support Note ID 1500257.1. For good reason, the Storage Cells need to be idled during the procedure so that writes can be quiesced before being channeled through the cache. This can be done one cell at a time in a rolling procedure, or during a complete shutdown of all databases and ASM instances.

The non-rolling method involves issuing the following commands on each cell while all database and ASM services on the system are shut down:

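As a sketch, the CellCLI sequence documented in the Support Note for the non-rolling method is along the following lines; verify the exact steps against the current version of Note 1500257.1 before use, since the flash cache must be dropped and re-created around the mode change:

```
CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode=writeback
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all
```

Dropping the flash cache discards its (clean, write-through) contents, which is why all database and ASM services must be down first.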

The rolling method is similar but involves some extra steps to ensure that grid disks are not in use. See Oracle Support Note ID 1500257.1 for the detailed procedure.

Write-Back Cache Performance

Figure 15.19 illustrates the effectiveness of the write-back cache for workloads that encounter free buffer waits. The workload used to generate Figure 15.19 was heavily write-intensive with very little read I/O overhead (all the necessary read data was in cache). As a result, it experienced a very high degree of free buffer waits and some associated buffer busy waits. Enabling the write-back cache completely eliminated the free buffer waits by effectively accelerating the write I/O bandwidth of the Database Writer. As a result, throughput increased fourfold.

Figure 15.19 Effect of the write-back cache on free buffer waits

However, don’t be misled into thinking that the write-back cache is a silver bullet for all workloads. Only workloads that are experiencing free buffer waits are likely to see this sort of performance gain. Workloads where the dominant waits are for CPU, read I/O, Global Cache coordination, log writes, and so on are unlikely to see any substantial benefit from the write-back cache.
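A quick way to judge which category your workload falls into is to look at its top wait events. The following sketch (again assuming access to V$SYSTEM_EVENT, and using V$EVENT_NAME to exclude idle waits) lists the ten most significant events; if free buffer waits dominates, the write-back cache is a likely win, whereas a profile dominated by read I/O or log file sync suggests otherwise:

```sql
-- Top ten non-idle wait events since instance startup
SELECT *
  FROM (SELECT e.event,
               e.total_waits,
               ROUND(e.time_waited_micro / 1000000) AS seconds_waited
          FROM v$system_event e
          JOIN v$event_name n ON n.name = e.event
         WHERE n.wait_class <> 'Idle'
         ORDER BY e.time_waited_micro DESC)
 WHERE ROWNUM <= 10;
```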
