Supporting Multiple Page Sizes in the Solaris Operating System

The Solaris 9 Operating System includes a feature that enables the use of larger memory page sizes for the heap and stack segments of a program. Larger page sizes can deliver significant performance gains for a wide range of applications. This article explains how to engage the multiple page size support (MPSS) feature and how to analyze its performance effect. It is intended for intermediate to advanced readers.
Editor's Note: This article has an Appendix containing additional data.

The availability of both processor and operating system (OS) support for 64-bit address spaces has enabled applications to take a quantum leap in the size and efficiency with which they manipulate data. UltraSPARC® processor-based servers from Sun Microsystems paired with the Solaris™ Operating System (Solaris OS) have taken applications that were once limited to a 4-gigabyte virtual memory to a virtually unlimited span. This dramatic change in the size of virtual memory has changed the way developers create their applications and how organizations use them. Database management systems can have larger tables than ever before, and often they fit entirely in main memory, significantly reducing their I/O requirements. Large-scale simulations can run with fewer restrictions on array sizes. In fact, working set sizes of several gigabytes are now common.

The performance of a memory-intensive application is dependent on the performance of the underlying memory management infrastructure. Time spent converting virtual addresses into physical addresses will slow down an application, often in a manner that is not evident to standard performance tools. In many cases, there is an opportunity to increase the performance of the underlying memory management infrastructure, resulting in higher application performance.

We can often increase an application's performance by increasing the memory management page size. The memory management unit (MMU) in Sun's UltraSPARC processors typically has just a few hundred entries, each of which translates 8 kilobytes of address space by default, so only a few megabytes of memory can be accessed before performance is affected. Fortunately, recent improvements in the Solaris OS allow these limitations to be overcome.

Beginning with the Solaris 9 OS, multiple page sizes are supported on UltraSPARC processors, so administrators can optimize performance by changing the page size on behalf of an application. However, typical performance measurement tools do not provide sufficient detail for evaluating the impact of page size, nor the support needed to make optimal page size choices.

This article explains how to use new tools to determine the potential performance gain. In addition, it explains how to configure larger page sizes using the multiple page size support (MPSS) feature of the Solaris 9 OS. The article addresses the following topics:

  • "Understanding Why Virtual-to-Physical Address Translation Affects Performance"

  • "Working With Multiple Page Sizes in the Solaris OS"

  • "Configuring for Multiple Page Sizes"

Understanding Why Virtual-to-Physical Address Translation Affects Performance

The faster the microprocessor converts virtual addresses into physical addresses, the faster the application can run. Ideally, the MMU converts virtual addresses into physical addresses quickly enough (within one microprocessor cycle) that the microprocessor never stalls to wait. Under certain circumstances, however, the translation process can take considerably longer, typically tens to hundreds of cycles.

We can often minimize the time taken to translate virtual addresses for applications with large working sets by increasing the page size used by the MMU. However, some applications that work well with small pages might see performance degradation when they are forced to use large pages. For example, processes that are short-lived, have small working sets, or have memory access patterns with poor spatial locality could suffer from the overhead of using large pages. Additionally, copy-on-write (COW) faults for a large page require a significant amount of time to process, as does first-touch access, which involves allocating and zeroing the entire page. For these reasons, we must analyze the application to determine whether the use of large pages is beneficial. Methods for determining when applications will benefit from large page sizes are presented later in this article.

Solaris OS Address Translation

Memory is abstracted so that applications only need to deal with virtual addresses and virtual memory. Behind the scenes, the OS and hardware, in a carefully choreographed dance, transparently translate the application's virtual addresses into the physical addresses for use by the hardware memory system.

The task of translating a virtual address into a physical address is accomplished by software and hardware modules. The software translates the mappings within the address space of a process (or the kernel) into hardware commands that program the microprocessor's MMU. The hardware then translates virtual memory references from the running instructions into physical addresses in real time. Optimally, this happens within a single microprocessor cycle. Some microprocessors, such as UltraSPARC, require assistance from the OS to manage this process: a hardware-generated exception transfers control into system software, where the helper tasks are performed.

FIGURE 1 Solaris OS Virtual-to-Physical Memory Management

The combined virtual memory system translates virtual addresses into physical addresses in page-size chunks of memory as depicted in the preceding figure.

The hardware uses a table in the microprocessor known as the translation lookaside buffer (TLB) to convert virtual addresses to physical addresses on the fly. The software programs the microprocessor's TLB with entries identifying the relationship of the virtual and physical addresses. Because the size of the TLB is limited by hardware, the TLB is typically supplemented by a larger (but slower) in-memory table of virtual-to-physical translations. On UltraSPARC processors, this table is known as the translation storage buffer (TSB); on most other architectures, it is known as a page table. When the microprocessor needs to convert a virtual address into a physical address, it searches the TLB (a hardware search), and if the translation is not found (that is, a TLB miss occurs), the microprocessor searches the larger in-memory table. The following figure illustrates the relationship of these components.

FIGURE 2 Virtual Address Translation Hardware and Software Components

UltraSPARC I–IV microprocessors use a software TLB replacement strategy: when a TLB miss occurs, software is invoked to search the in-memory table (the TSB) for the required translation entry.

Let's step through a simple example. Suppose a process allocates some memory within its heap by calling malloc(), and malloc() returns to the program a virtual address for the requested memory. When that memory is first referenced, the virtual memory layer requests a physical memory page from the system's free lists. The newly acquired page has an associated physical address within physical memory. The virtual memory system then constructs a translation entry in memory containing the virtual address (the start of the page returned by malloc()) and the physical address of the new page. The newly created translation entry is inserted into the TSB and programmed into an available slot in the microprocessor's TLB. The entry is also kept in software, linked to the address space of the process to which it belongs. Later, when the program accesses the virtual address, if the new entry is still in the TLB, the virtual-to-physical translation happens on the fly. However, if the entry has been evicted by other activity, a TLB miss occurs, and the corresponding hardware exception looks up the translation entry in the larger TSB and reloads it into the TLB.
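
A small program makes this sequence concrete. Note that this is a simplification: malloc() may hand back previously touched heap memory, in which case no fault occurs; the fault path just described is taken only the first time a fresh page is referenced.

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        size_t len = 8192;          /* one 8-kilobyte base page */
        char *p = malloc(len);      /* returns a virtual address only */

        if (p == NULL)
            return (1);

        /*
         * First touch: if the page is not yet backed, this reference
         * faults, and the kernel allocates a physical page, builds the
         * translation entry, inserts it into the TSB, and programs it
         * into the TLB.
         */
        p[0] = 1;

        printf("virtual address %p is now backed by physical memory\n",
            (void *)p);
        free(p);
        return (0);
    }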

The TSB is also limited in size. In extreme circumstances, a TSB miss can occur. Translating the virtual address of a TSB miss requires a lengthy search of the software structures associated with the process.
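
The fallback order (TLB first, then TSB, then the process's software mapping structures) can be modeled in a few lines of C. The toy simulator below is purely illustrative: real TLBs and TSBs are not simple direct-mapped arrays, and soft_lookup() is a hypothetical stand-in for the kernel's per-process mapping tables. It does, however, capture the search order just described.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT  13                          /* 8-kilobyte pages */
    #define PAGE_MASK   ((1UL << PAGE_SHIFT) - 1)
    #define TLB_ENTRIES 64
    #define TSB_ENTRIES 512

    typedef struct {
        uint64_t vpn;       /* virtual page number */
        uint64_t pfn;       /* physical page-frame number */
        int      valid;
    } tte_t;

    static tte_t tlb[TLB_ENTRIES];
    static tte_t tsb[TSB_ENTRIES];
    static long  tlb_misses, tsb_misses;

    /* Hypothetical stand-in for the kernel's per-process mapping tables. */
    static uint64_t
    soft_lookup(uint64_t vpn)
    {
        return (vpn + 0x10000);     /* pretend every page is resident */
    }

    static uint64_t
    translate(uint64_t vaddr)
    {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        tte_t *t = &tlb[vpn % TLB_ENTRIES];
        tte_t *s = &tsb[vpn % TSB_ENTRIES];

        if (t->valid && t->vpn == vpn)          /* TLB hit: on the fly */
            return ((t->pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK));

        tlb_misses++;                           /* trap into software */
        if (!(s->valid && s->vpn == vpn)) {     /* TSB miss: slow path */
            tsb_misses++;
            s->vpn = vpn;
            s->pfn = soft_lookup(vpn);          /* search process mappings */
            s->valid = 1;
        }
        *t = *s;                                /* reload the TLB entry */
        return ((t->pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK));
    }

    int
    main(void)
    {
        uint64_t a;

        /* Touch 4 megabytes, one 64-byte cache line at a time. */
        for (a = 0; a < (4UL << 20); a += 64)
            (void) translate(a);
        printf("TLB misses: %ld  TSB misses: %ld\n",
            tlb_misses, tsb_misses);
        return (0);
    }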

The mechanism is similar for processors that use a hardware TLB-miss strategy, including the Intel x86, except that the TLB is refilled by hardware rather than software: when a TLB miss occurs, a hardware engine is invoked to search the page table.

TLB Reach and Application Performance

The objective of the TLB is to cache as many recent page translations in hardware as possible so that it can satisfy a process's memory accesses by performing all of the virtual-to-physical translations on the fly. Most TLBs are limited in size because of the amount of transistor space available on the CPU die. For example, the UltraSPARC I and II TLBs have only 64 entries. This means that the TLB can hold no more than 64 page translations at any time; therefore, on UltraSPARC, the TLB can address 64 × 8 kilobytes (512 kilobytes).

The amount of memory the TLB can concurrently address is known as the TLB reach. The UltraSPARC I and II have a TLB reach of 512 kilobytes. If an application makes heavy use of less than 512 kilobytes of memory, the TLB can cache the entire set of translations. However, if the application makes heavy use of more than 512 kilobytes of memory, the TLB begins to miss, and translations must be loaded from the larger TSB.
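
The reach computation itself is simple arithmetic, as the short sketch below shows for a 64-entry TLB with the default 8-kilobyte pages and, for comparison, with the 64-kilobyte pages discussed later in this article:

    #include <stdio.h>

    /* TLB reach = number of TLB entries x bytes mapped per entry. */
    static unsigned long
    tlb_reach(unsigned long entries, unsigned long pagesize)
    {
        return (entries * pagesize);
    }

    int
    main(void)
    {
        printf("64 entries x 8-kilobyte pages:  %lu kilobytes\n",
            tlb_reach(64, 8 * 1024) / 1024);                /* 512 */
        printf("64 entries x 64-kilobyte pages: %lu megabytes\n",
            tlb_reach(64, 64 * 1024) / (1024 * 1024));      /* 4   */
        return (0);
    }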

The following table shows the TLB miss rate and the amount of time spent servicing TLB misses from a study of older SPARC™ architectures. We can see from the table that only a few of the compute-bound applications fit well in the SuperSPARC™ TLB (gcc, ML, and pthor), whereas the other applications spend a significant amount of their time in the TLB miss handlers.

TABLE 1 Sample TLB Miss Data From a SuperSPARC Processor Study

Workload    Total Time   User Time   # User TLB   % User Time in      Cache Misses   Peak Memory
            (secs)       (secs)      Misses       TLB Miss Handling   ('000s)        Usage (MB)
coral       177          172         85974        50                  71053          19.9
nasa7       387          385         152357       40                  64213          3.5
compress    99           77          21347        28                  21567          1.4
fftpde      55           53          11280        21                  14472          14.7
wave5       110          107         14510        14                  4583           14.3
mp3d        37           36          4050         11                  5457           4.8
spice       620          617         41923        7                   81949          3.6
pthor       48           35          2580         7                   6957           15.4
ML          945          917         38423        4                   314137         32.0
gcc         118          105         2440         2                   9980           5.6

TLB effectiveness has become a larger issue in the past few years because the average amount of memory used by applications has grown significantly (almost doubling year over year, according to recent statistical data). The easiest way to increase the effectiveness of the TLB is to increase the TLB reach so that the working set of the application fits within it.

The TLB reach can be improved using either of the following methods:

  • Increase the number of entries in the TLB. This approach adds complexity to the TLB hardware and increases the number of transistors required, and therefore consumes more of the valuable die space.

  • Increase the page size for each entry. This approach increases the TLB reach without the need to increase the size of the TLB.

One trade-off of increasing the page size is that doing so might boost the performance of some applications at the expense of slower performance elsewhere. This trade-off is caused by the wasted space that results from larger memory allocation units: mapping everything with large pages would almost certainly increase the memory usage of many applications.

Luckily, a solution is at hand. Some of the newer processor architectures allow us to use two or more different page sizes at the same time. For example, UltraSPARC provides hardware support for concurrently using 8-kilobyte, 64-kilobyte, 512-kilobyte, and 4-megabyte pages. If we were to use 4-megabyte pages to map all memory, then the TLB would have a theoretical reach of 64 × 4 megabytes (256 megabytes).
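
As a concrete illustration, the sketch below uses two Solaris 9 interfaces: getpagesizes(3C), which lists the page sizes the hardware supports, and the memcntl(2) MC_HAT_ADVISE command, which advises the kernel to back a segment (here, the heap) with a larger page size. This is a minimal sketch, not the full configuration procedure covered later in this article; error handling is sparse, and the kernel is free to decline the advice if, for example, suitably aligned free memory is unavailable.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/mman.h>

    int
    main(void)
    {
        size_t sizes[8];
        struct memcntl_mha mha;
        int i, n;

        /* List the page sizes supported by this processor. */
        n = getpagesizes(sizes, 8);
        for (i = 0; i < n; i++)
            printf("supported page size: %lu bytes\n",
                (unsigned long)sizes[i]);

        /* Advise the kernel to use 4-megabyte pages for the heap. */
        mha.mha_cmd = MHA_MAPSIZE_BSSBRK;   /* heap (bss and brk) */
        mha.mha_flags = 0;
        mha.mha_pagesize = 4 * 1024 * 1024;
        if (memcntl(NULL, 0, MC_HAT_ADVISE, (caddr_t)&mha, 0, 0) == -1)
            perror("memcntl(MC_HAT_ADVISE)");

        return (0);
    }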
