
Advanced Windows Debugging: Memory Corruption Part II—Heaps

While heap-based security attacks are much harder to exploit than their stack-based counterparts, their popularity keeps growing. Daniel Pravat and Mario Hewardt discuss security vulnerabilities and stability issues that can surface in an application when the heap is used in a nonconventional fashion.
This chapter is from the book

In Chapter 5, "Memory Corruption Part I—Stacks," we discussed how stack-based buffer overflows can cause serious security problems for software and how stack-based buffer overflows have been the primary attack angle for malicious software authors. In recent years, however, another form of buffer overflow attack has gained in popularity. Rather than relying on the stack to exploit buffer overflows, the Windows heap manager is now being targeted. Even though heap-based security attacks are much harder to exploit than their stack-based counterparts, their popularity keeps growing at a rapid pace. In addition to potential security vulnerabilities, this chapter discusses a myriad of stability issues that can surface in an application when the heap is used in a nonconventional fashion.

Although the stack and the heap are managed very differently in Windows, the process by which we analyze stack- and heap-related problems is the same. As such, throughout this chapter, we employ the same troubleshooting process that we defined in Chapter 5 (refer to Figure 5.1).

What Is a Heap?

A heap is a form of memory manager that an application can use when it needs to allocate and free memory dynamically. Common situations that call for the use of a heap are when the size of the memory needed is not known ahead of time and the size of the memory is too large to neatly fit on the stack (automatic memory). Even though the heap is the most common facility to accommodate dynamic memory allocations, there are a number of other ways for applications to request memory from Windows. Memory can be requested from the C runtime, the virtual memory manager, and even from other forms of private memory managers. Although the different memory managers can be treated as individual entities, internally, they are tightly connected. Figure 6.1 shows a simplified view of Windows-supported memory managers and their dependencies.

Figure 6.1

Figure 6.1 An overview of Windows memory management architecture

As illustrated in Figure 6.1, most of the high-level memory managers make use of the Windows heap manager, which in turn uses the virtual memory manager. Although high-level memory managers (and applications for that matter) are not restricted to using the heap manager, they most typically do, as it provides a solid foundation for other private memory managers to build on. Because of its popularity, the primary focal point in this chapter is the Windows heap manager.

When a process starts, the heap manager automatically creates a new heap called the default process heap. Although some processes use the default process heap, a large number rely on the CRT heap (using new/delete and malloc/free family of APIs) for all their memory needs. Some processes, however, create additional heaps (via the HeapCreate API) to isolate different components running in the process. It is not uncommon for even the simplest of applications to have four or more active heaps at any given time.

The Windows heap manager can be further broken down as shown in Figure 6.2.

Figure 6.2

Figure 6.2 Windows heap manager

Front End Allocator

The front end allocator is an abstract optimization layer for the back end allocator. By allowing different types of front end allocators, applications with different memory needs can choose the appropriate allocator. For example, applications that expect small bursts of allocations might prefer to use the low fragmentation front end allocator to avoid fragmentation. Two different front end allocators are available in Windows:

  • Look aside list (LAL) front end allocator
  • Low fragmentation (LF) front end allocator

With the exception of Windows Vista, all Windows versions use a LAL front end allocator by default. In Windows Vista, a design decision was made to switch over to the LF front end allocator by default. The look aside list is nothing more than a table of 128 singly linked lists. Each singly linked list in the table contains free heap blocks of a specific size, starting at 16 bytes. The size of each heap block includes 8 bytes of heap block metadata used to manage the block. For example, if an allocation request of 24 bytes arrived at the front end allocator, the front end allocator would look for free blocks of size 32 bytes (24 user-requested bytes + 8 bytes of metadata). Because all heap blocks require 8 bytes of metadata, the smallest block that can be returned to the caller is 16 bytes; hence, the front end allocator does not use table index 0, which would correspond to free blocks of size 8 bytes.

Subsequently, each index represents free heap blocks, where the size of the heap block is the size of the previous index plus 8. The last index (127) contains free heap blocks of size 1024 bytes. When an application frees a block of memory, the heap manager marks the allocation as free and puts the allocation on the front end allocator's look aside list (in the appropriate index). The next time a block of memory of that size is requested, the front end allocator checks to see if a block of memory of the requested size is available and if so, returns the heap block to the user. It goes without saying that satisfying allocations via the look aside list is by far the fastest way to allocate memory.

Let's take a look at a hypothetical example. Imagine that the state of the LAL is as depicted in Figure 6.3.

Figure 6.3

Figure 6.3 Hypothetical state of the look aside list

The LAL in Figure 6.3 indicates that there are three heap blocks of size 16 (out of which 8 bytes are available to the caller) at index 1 and two blocks of size 32 (out of which 24 bytes are available to the caller) at index 3. When we try to allocate a block of size 24, the heap manager knows to look at index 3 by adding 8 to the requested size (accounting for the metadata), dividing by 8, and subtracting 1 (the table is zero-based). The linked list positioned at index 3 contains two available heap blocks. The heap manager simply removes the first one in the list and returns the allocation to the caller.

If we try allocating a block of size 16, the heap manager would notice that the index corresponding to size 16 ((16+8)/8-1=2) is an empty list, and hence the allocation cannot be satisfied from the LAL. The allocation request now continues its travels and is forwarded to the back end allocator for further processing.
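The size-to-index arithmetic used in these two examples can be sketched in C. This is a toy model of the mapping only; `lal_index` is a hypothetical helper written for illustration, not an ntdll function, with the constants taken from the text:

```c
#include <assert.h>
#include <stddef.h>

#define HEAP_GRANULARITY 8   /* block sizes grow in 8-byte steps       */
#define METADATA_SIZE    8   /* per-block header described in the text */

/* Map a user-requested size to a look aside list index. Index 0
   (8-byte blocks) is never used because the smallest block, metadata
   included, is 16 bytes. Returns -1 when the request is too large for
   the LAL (the last index, 127, holds 1024-byte blocks). */
static int lal_index(size_t user_size)
{
    size_t block = user_size + METADATA_SIZE;  /* total block size */
    /* round up to the next multiple of the granularity */
    block = (block + HEAP_GRANULARITY - 1) & ~(size_t)(HEAP_GRANULARITY - 1);
    if (block > 1024)
        return -1;                             /* back end territory */
    return (int)(block / HEAP_GRANULARITY - 1);
}
```

With this model, a 24-byte request maps to index 3 (a 32-byte block) and a 16-byte request maps to index 2, matching the examples above.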

Back End Allocator

If the front end allocator is unable to satisfy an allocation request, the request makes its way to the back end allocator. Similar to the front end allocator, it contains a table of lists commonly referred to as the free lists. The free list's sole responsibility is to keep track of all the free heap blocks available in a particular heap. There are 128 free lists, where each list contains free heap blocks of a specific size. As you can see from Figure 6.2, the size associated with free list[2] is 16, free list[3] is 24, and so on. Free list[1] is unused because the minimum heap block size is 16 (8 bytes of metadata and 8 user-accessible bytes). Each size associated with a free list increases by 8 bytes from the prior free list. Allocations whose size is greater than the maximum free list's allocation size go into index 0 of the free lists. Free list[0] essentially contains allocations of sizes greater than 1016 bytes and less than the virtual allocation limit (discussed later). The free heap blocks in free list[0] are also sorted by size (in ascending order) to achieve maximum efficiency. Figure 6.4 shows a hypothetical example of a free list.

Figure 6.4

Figure 6.4 Hypothetical state of the free lists

If an allocation request of size 8 arrives at the back end allocator, the heap manager first consults the free lists. In order to maximize efficiency when looking for free heap blocks, the heap manager keeps a free list bitmap. The bitmap consists of 128 bits, where each bit represents an index into the free list table. If the bit is set, the free list corresponding to the index of the free list bitmap contains free heap blocks. Conversely, if the bit is not set, the free list at that index is empty. Figure 6.5 shows the free list bitmap for the free lists in Figure 6.4.

Figure 6.5

Figure 6.5 Free list bitmap

The heap manager maps an allocation request of a given size to a free list bitmap index by adding 8 bytes to the size (metadata) and dividing by 8. Consider an allocation request of size 8 bytes. The heap manager knows that the free list bitmap index is 2 [(8+8)/8]. From Figure 6.5, we can see that index 2 of the free list bitmap is set, which indicates that the free list located at index 2 in the free lists table contains free heap blocks. The free block is then removed from the free list and returned to the caller. If the removal of a free heap block results in that free list becoming empty, the heap manager also clears the free list bitmap at the specific index. If the heap manager is unable to find a free heap block of requested size, it employs a technique known as block splitting. Block splitting refers to the heap manager's capability to take a larger than requested free heap block and split it in half to satisfy a smaller allocation request. For example, if an allocation request arrives for a block of size 8 (total block size of 16), the free list bitmap is consulted first. The index representing blocks of size 16 indicates that no free blocks are available. Next, the heap manager finds that free blocks of size 32 are available. The heap manager now removes a block of size 32 and splits it in half, which yields two blocks of size 16 each. One of the blocks is put into a free list representing blocks of size 16, and the other block is returned to the caller. Additionally, the free list bitmap is updated to indicate that index 2 now contains free block entries of size 16. The result of splitting a larger free allocation into two smaller allocations is shown in Figure 6.6.

Figure 6.6

Figure 6.6 Splitting free blocks
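The bitmap lookup and the upward scan that precedes block splitting can be sketched as follows. This is a simplified model with hypothetical helper names; the real back end allocator operates on _HEAP free lists, not a bare bit array:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define NUM_FREE_LISTS 128

/* One bit per free list: a set bit means the list has free blocks. */
static uint8_t free_list_bitmap[NUM_FREE_LISTS / 8];

static int  bitmap_test(int i)  { return (free_list_bitmap[i / 8] >> (i % 8)) & 1; }
static void bitmap_set(int i)   { free_list_bitmap[i / 8] |=  (uint8_t)(1u << (i % 8)); }
static void bitmap_clear(int i) { free_list_bitmap[i / 8] &= (uint8_t)~(1u << (i % 8)); }

/* Find the first non-empty free list able to satisfy a request: start
   at the exact index for the size and scan upward. A hit above the
   exact index means the heap manager must split the block it finds.
   Returns -1 when no list qualifies (more segment memory is needed). */
static int find_free_list(size_t user_size)
{
    int i = (int)((user_size + 8) / 8);  /* size-to-index rule from the text */
    for (; i < NUM_FREE_LISTS; i++)
        if (bitmap_test(i))
            return i;
    return -1;
}
```

Seeding the bitmap with only index 4 set (32-byte blocks), a request for 8 bytes (exact index 2) lands on index 4; the heap manager would then split that 32-byte block into two 16-byte halves, as in Figure 6.6.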

As mentioned earlier, the free list at index 0 can contain free heap blocks of sizes ranging from 1016 up to 0x7FFF0 (524272) bytes. To maximize free block lookup efficiency, the heap manager stores the free blocks in sorted order (ascending). All allocations of sizes greater than 0x7FFF0 go on what is known as the virtual allocation list. When a large allocation occurs, the heap manager makes an explicit allocation request from the virtual memory manager and keeps these allocations on the virtual allocation list.

So far, the discussion has revolved around how the heap manager organizes blocks of memory it has at its disposal. One question remains unanswered: Where does the heap manager get the memory from? Fundamentally, the heap manager uses the Windows virtual memory manager to allocate memory in large chunks. The memory is then massaged into different sized blocks to accommodate the allocation requests of the application. When the virtual memory chunks are exhausted, the heap manager allocates yet another large chunk of virtual memory, and the process continues. The chunks that the heap manager requests from the virtual memory manager are known as heap segments. When a heap segment is first created, the underlying virtual memory is mostly reserved, with only a small portion being committed. Whenever the heap manager runs out of committed space in the heap segment, it explicitly commits more memory and divides the newly committed space into blocks as more and more allocations are requested. Figure 6.7 illustrates the basic layout of a heap segment.

Figure 6.7

Figure 6.7 Basic layout of a heap segment

The segment illustrated in Figure 6.7 contains two allocations (and associated metadata) followed by a range of uncommitted memory. If another allocation request arrives, and no available free block is present in the free lists, the heap manager would commit additional memory from the uncommitted range, create a new heap block within the committed memory range, and return the block to the user. Once a segment runs out of uncommitted space, the heap manager creates a new segment. The size of the new segment is determined by doubling the size of the previous segment. If memory is scarce and cannot accommodate the new segment, the heap manager tries to reduce the size by half. If that fails, the size is halved again until it either succeeds or reaches a minimum segment size threshold—in which case, an error is returned to the caller. The maximum number of segments that can be active within a heap is 64. Once the new segment is created, the heap manager adds it to a list that keeps track of all segments being used in the heap. Does the heap manager ever free memory associated with a segment? The answer is that the heap manager decommits memory on an as-needed basis, but it never releases it. (That is, the memory stays reserved.)
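The segment-sizing policy just described (double the previous segment, halve on failure until a minimum threshold) can be sketched as a small loop. The `try_reserve` callback, the sample policies, and the minimum threshold value are assumptions made purely for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define MIN_SEGMENT_SIZE 0x10000   /* hypothetical minimum threshold */

/* Stand-in for reserving virtual memory; returns nonzero on success. */
typedef int (*try_reserve_fn)(size_t size);

/* Sample reservation policies used for demonstration. */
static int reserve_up_to_1mb(size_t size) { return size <= 0x100000; }
static int reserve_never(size_t size)     { (void)size; return 0; }

/* Pick the size of the next heap segment: start by doubling the
   previous segment, then halve on failure until the reservation
   succeeds or the minimum threshold is crossed. Returns 0 when the
   caller should see an allocation failure. */
static size_t next_segment_size(size_t prev_size, try_reserve_fn try_reserve)
{
    size_t size = prev_size * 2;
    while (size >= MIN_SEGMENT_SIZE) {
        if (try_reserve(size))
            return size;
        size /= 2;   /* memory is scarce: retry with half the size */
    }
    return 0;
}
```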

As Figure 6.7 depicts, each heap block in a given segment has metadata associated with it. The metadata is used by the heap manager to effectively manage the heap blocks within a segment. The content of the metadata is dependent on the status of the heap block. For example, if the heap block is used by the application, the status of the block is considered busy. Conversely, if the heap block is not in use (that is, has been freed by the application), the status of the block is considered free. Figure 6.8 shows how the metadata is structured in both situations.

Figure 6.8

Figure 6.8 Structure of pre- and post-allocation metadata

It is important to note that a heap block might be considered busy in the eyes of the back end allocator but still not being used by the application. The reason behind this is that any heap blocks that go on the front end allocator's look aside list still have their status set as busy.

The two size fields represent the size of the current block and the size of the previous block (metadata inclusive). Given a pointer to a heap block, you can very easily use the two size fields to walk the heap segment forward and backward. Additionally, for free blocks, having the block size as part of the metadata enables the heap manager to very quickly index the correct free list to add the block to. The post-allocation metadata is optional and is typically used by the debug heap for additional bookkeeping information (see "Attaching Versus Running" under the debugger sidebar).
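A toy version of such a segment walk, modeled loosely on the header layout of Figure 6.8 (the field layout here is simplified; as in the real heap, sizes are stored in units of the 8-byte heap granularity):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define GRANULARITY 8

/* Simplified 8-byte heap block header: sizes are stored in granularity
   units, so a block's byte size is Size * 8, metadata included. */
typedef struct heap_entry {
    uint16_t Size;          /* current block size / 8  */
    uint16_t PreviousSize;  /* previous block size / 8 */
    uint8_t  Flags;
    uint8_t  Unused[3];
} heap_entry;

/* Step forward to the next block header in the segment. */
static heap_entry *next_block(heap_entry *e)
{
    return (heap_entry *)((uint8_t *)e + (size_t)e->Size * GRANULARITY);
}

/* Step backward to the previous block header. */
static heap_entry *prev_block(heap_entry *e)
{
    return (heap_entry *)((uint8_t *)e - (size_t)e->PreviousSize * GRANULARITY);
}

/* Self-check: lay out two 16-byte blocks in a buffer and walk forward
   and backward between them. Returns 1 on success. */
static int walk_demo(void)
{
    uint64_t storage[8] = {0};          /* aligned backing buffer */
    uint8_t *seg = (uint8_t *)storage;
    heap_entry *a = (heap_entry *)seg;
    a->Size = 2;                        /* a 16-byte block */
    heap_entry *b = next_block(a);
    b->PreviousSize = 2;
    return (uint8_t *)b == seg + 16 && prev_block(b) == a;
}
```

This is also why the debugger output later in the chapter must be multiplied by the granularity: a block whose reported size is 0xab really spans 0xab * 8 = 0x558 bytes.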

The flags field indicates the status of the heap block. The most important values of the flags field are shown in Table 6.1.

Table 6.1. Possible Block Status as Indicated by the Heap Flag

  • HEAP_ENTRY_BUSY: Indicates that the allocation is being used by the application or the heap manager
  • HEAP_ENTRY_FILL_PATTERN: Indicates whether the heap block has a fill pattern associated with it
  • HEAP_ENTRY_VIRTUAL_ALLOC: Indicates that the heap block was allocated directly from the virtual memory manager
  • HEAP_ENTRY_LAST_ENTRY: Indicates that this is the last heap block prior to an uncommitted range
You have already seen what happens when a heap block transitions from being busy to free. However, one more technique that the heap manager employs needs to be discussed. The technique is referred to as heap coalescing. Fundamentally, heap coalescing is a mechanism that merges adjacent free blocks into one single large block to avoid memory fragmentation problems. Figure 6.9 illustrates how a heap coalesce functions.

Figure 6.9

Figure 6.9 Example of heap coalescing

When the heap manager is requested to free the heap block of size 32, it first checks to see if any adjacent blocks are also free. In Figure 6.9, two blocks of size 16 surround the block being freed. Rather than handing the block of size 32 to the free lists, the heap manager merges all three blocks into one (of size 64) and updates the free lists to indicate that a new block of size 64 is now available. Care is also taken by the heap manager to remove the prior two blocks (of size 16) from the free lists since they are no longer available. It should go without saying that the act of coalescing free blocks is an expensive operation. So why does the heap manager even bother? The primary reason behind coalescing heap blocks is to avoid what is known as heap fragmentation. Imagine that your application just had a burst of allocations all with a very small size (16 bytes). Furthermore, let's say that there were enough of these small allocations to fill up an entire segment. After the allocation burst is completed, the application frees all the allocations. The net result is that you have one heap segment full of available allocations of size 16 bytes. Next, your application attempts to allocate a block of memory of size 48 bytes. The heap manager now tries to satisfy the allocation request from the segment, fails because the free block sizes are too small, and is forced to create a new heap segment. Needless to say, this is extremely poor use of memory. Even though we had an entire segment of free memory, the heap manager was forced to create a new segment to satisfy our slightly larger allocation request. Heap coalescing makes a best attempt at ensuring that situations such as this are kept at a minimum by combining small free blocks into larger blocks.
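The merge step at the heart of coalescing can be modeled on block sizes alone. This is a deliberately simplified sketch; the real heap manager also unlinks the neighboring blocks from their free lists and rewrites the block headers:

```c
#include <assert.h>
#include <stddef.h>

/* Toy view of a heap block during a free operation. */
typedef struct {
    size_t size;  /* total block size in bytes, metadata included */
    int    free;  /* nonzero if the block sits on a free list     */
} block;

/* Coalesce a newly freed block with any free neighbors and return the
   size of the merged block that would be linked into the free lists.
   Consumed neighbors are zeroed to model their removal from the old
   free lists. */
static size_t coalesce(block *prev, block *cur, block *next)
{
    size_t merged = cur->size;
    if (prev && prev->free) { merged += prev->size; prev->size = 0; }
    if (next && next->free) { merged += next->size; next->size = 0; }
    cur->free = 1;
    return merged;
}

/* Figure 6.9 scenario: a 32-byte block freed between two free 16-byte
   blocks becomes a single 64-byte block. */
static size_t coalesce_demo(void)
{
    block prev = {16, 1}, cur = {32, 0}, next = {16, 1};
    return coalesce(&prev, &cur, &next);
}

/* Only one neighbor free: 32 + 16 = 48. */
static size_t coalesce_one_side(void)
{
    block prev = {16, 0}, cur = {32, 0}, next = {16, 1};
    return coalesce(&prev, &cur, &next);
}
```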

This concludes our discussion of the internal workings of the heap manager. Before we move on and take a practical look at the heap, let's summarize what you have learned.

When allocating a block of memory

  1. The heap manager first consults the front end allocator's LAL to see if a free block of memory is available; if it is, the heap manager returns it to the caller. Otherwise, step 2 is necessary.
  2. The back end allocator's free lists are consulted:
    1. If an exact size match is found, the flags are updated to indicate that the block is busy; the block is then removed from the free list and returned to the caller.
    2. If an exact size match cannot be found, the heap manager checks to see if a larger block can be split into two smaller blocks that satisfy the requested allocation size. If it can, the block is split. One block has the flags updated to a busy state and is returned to the caller. The other block has its flags set to a free state and is added to the free lists. The original block is also removed from the free list.
  3. If the free lists cannot satisfy the allocation request, the heap manager commits more memory from the heap segment, creates a new block in the committed range (flags set to busy state), and returns the block to the caller.

When freeing a block of memory

  1. The front end allocator is consulted first to see if it can handle the free block. If the free block is not handled by the front end allocator, step 2 is necessary.
  2. The heap manager checks if there are any adjacent free blocks; if so, it coalesces the blocks into one large block by doing the following:
    1. The two adjacent free blocks are removed from the free lists.
    2. The new large block is added to the free list or look aside list.
    3. The flags field for the new large block is updated to indicate that it is free.
  3. If no coalescing can be performed, the block is moved into the free list or look aside list, and the flags are updated to a free state.

Now it's time to complement our theoretical discussion of the heap manager with practice. Listing 6.1 shows a simple application that, using the default process heap, allocates and frees some memory.

Listing 6.1. Simple application that performs heap allocations

#include <windows.h>
#include <stdio.h>
#include <conio.h>

int __cdecl wmain (int argc, wchar_t* pArgs[])
{
    BYTE* pAlloc1=NULL;
    BYTE* pAlloc2=NULL;
    HANDLE hProcessHeap=GetProcessHeap();

    pAlloc1=(BYTE*)HeapAlloc(hProcessHeap, 0, 16);
    pAlloc2=(BYTE*)HeapAlloc(hProcessHeap, 0, 1500);

    // Use allocated memory

    HeapFree(hProcessHeap, 0, pAlloc1);
    HeapFree(hProcessHeap, 0, pAlloc2);

    return 0;
}

The source code and binary for Listing 6.1 can be found in the following folders:

  • Source code: C:\AWD\Chapter6\BasicAlloc
  • Binary: C:\AWDBIN\WinXP.x86.chk\06BasicAlloc.exe

Run this application under the debugger and break on the wmain function.

Because we are interested in finding out more about the heap state, we must start by finding out what heaps are active in the process. Each running process keeps a list of active heaps. The list of heaps is stored in the PEB (process environment block), which is simply a data structure that contains a plethora of information about the process. To dump out the contents of the PEB, we use the dt command, as illustrated in Listing 6.2.

Listing 6.2. Finding the PEB for a process

0:000> dt _PEB @$peb
   +0x000 InheritedAddressSpace : 0 ''
   +0x001 ReadImageFileExecOptions : 0 ''
   +0x002 BeingDebugged    : 0x1 ''
   +0x003 SpareBool        : 0 ''
   +0x004 Mutant           : 0xffffffff
   +0x008 ImageBaseAddress : 0x01000000
   +0x00c Ldr              : 0x00191e90 _PEB_LDR_DATA
   +0x010 ProcessParameters : 0x00020000 _RTL_USER_PROCESS_PARAMETERS
   +0x014 SubSystemData    : (null)
   +0x018 ProcessHeap      : 0x00080000
   +0x01c FastPebLock      : 0x7c97e4c0 _RTL_CRITICAL_SECTION
   +0x020 FastPebLockRoutine : 0x7c901005
   +0x024 FastPebUnlockRoutine : 0x7c9010ed
   +0x028 EnvironmentUpdateCount : 1
   +0x02c KernelCallbackTable : (null)
   +0x030 SystemReserved   : [1] 0
   +0x034 AtlThunkSListPtr32 : 0
   +0x038 FreeList         : (null)
   +0x03c TlsExpansionCounter : 0
   +0x040 TlsBitmap        : 0x7c97e480
   +0x044 TlsBitmapBits    : [2] 1
   +0x04c ReadOnlySharedMemoryBase : 0x7f6f0000
   +0x050 ReadOnlySharedMemoryHeap : 0x7f6f0000
   +0x054 ReadOnlyStaticServerData : 0x7f6f0688 -> (null)
   +0x058 AnsiCodePageData : 0x7ffb0000
   +0x05c OemCodePageData  : 0x7ffc1000
   +0x060 UnicodeCaseTableData : 0x7ffd2000
   +0x064 NumberOfProcessors : 1
   +0x068 NtGlobalFlag     : 0
   +0x070 CriticalSectionTimeout : _LARGE_INTEGER 0xffffffff`dc3cba00
   +0x078 HeapSegmentReserve : 0x100000
   +0x07c HeapSegmentCommit : 0x2000
   +0x080 HeapDeCommitTotalFreeThreshold : 0x10000
   +0x084 HeapDeCommitFreeBlockThreshold : 0x1000
   +0x088 NumberOfHeaps    : 3
   +0x08c MaximumNumberOfHeaps : 0x10
   +0x090 ProcessHeaps     : 0x7c97de80 -> 0x00080000
   +0x094 GdiSharedHandleTable : (null)
   +0x098 ProcessStarterHelper : (null)
   +0x09c GdiDCAttributeList : 0
   +0x0a0 LoaderLock       : 0x7c97c0d8
   +0x0a4 OSMajorVersion   : 5
   +0x0a8 OSMinorVersion   : 1
   +0x0ac OSBuildNumber    : 0xa28
   +0x0ae OSCSDVersion     : 0x200
   +0x0b0 OSPlatformId     : 2
   +0x0b4 ImageSubsystem   : 3
   +0x0b8 ImageSubsystemMajorVersion : 4
   +0x0bc ImageSubsystemMinorVersion : 0
   +0x0c0 ImageProcessAffinityMask : 0
   +0x0c4 GdiHandleBuffer  : [34] 0
   +0x14c PostProcessInitRoutine : (null)
   +0x150 TlsExpansionBitmap : 0x7c97e478
   +0x154 TlsExpansionBitmapBits : [32] 0
   +0x1d4 SessionId        : 0
   +0x1d8 AppCompatFlags   : _ULARGE_INTEGER 0x0
   +0x1e0 AppCompatFlagsUser : _ULARGE_INTEGER 0x0
   +0x1e8 pShimData        : (null)
   +0x1ec AppCompatInfo    : (null)
   +0x1f0 CSDVersion       : _UNICODE_STRING "Service Pack 2"
   +0x1f8 ActivationContextData : (null)
   +0x1fc ProcessAssemblyStorageMap : (null)
   +0x200 SystemDefaultActivationContextData : 0x00080000
   +0x204 SystemAssemblyStorageMap : (null)
   +0x208 MinimumStackCommit : 0

As you can see, PEB contains quite a lot of information, and you can learn a lot by digging around in this data structure to familiarize yourself with the various components. In this particular exercise, we are specifically interested in the list of process heaps located at offset 0x90. The heap list member of PEB is simply an array of pointers, where each pointer points to a data structure of type _HEAP. Let's dump out the array of heap pointers and see what it contains:

0:000> dd 0x7c97de80
7c97de80  00080000 00180000 00190000 00000000
7c97de90  00000000 00000000 00000000 00000000
7c97dea0  00000000 00000000 00000000 00000000
7c97deb0  00000000 00000000 00000000 00000000
7c97dec0  01a801a6 00020498 00000001 7c9b0000
7c97ded0  7ffd2de6 00000000 00000005 00000001
7c97dee0  ffff7e77 00000000 003a0044 0057005c
7c97def0  004e0049 004f0044 00530057 0073005c

The dump shows that three heaps are active in our process, and the default process heap pointer is always the first one in the list. Why do we have more than one heap in our process? Even the simplest of applications typically contains more than one heap. Most applications implicitly use components that create their own heaps. A great example is the C runtime, which creates its own heap during initialization.

Because our application works with the default process heap, we will focus our investigation on that heap. Each of the process heap pointers points to a data structure of type _HEAP. Using the dt command, we can very easily dump out the information about the process heap, as shown in Listing 6.3.

Listing 6.3. Detailed view of the default process heap

0:000> dt _HEAP 00080000
   +0x000 Entry           : _HEAP_ENTRY
   +0x008 Signature        : 0xeeffeeff
   +0x00c Flags           : 0x50000062
   +0x010 ForceFlags       : 0x40000060
   +0x014 VirtualMemoryThreshold : 0xfe00
   +0x018 SegmentReserve    : 0x100000
   +0x01c SegmentCommit     : 0x2000
   +0x020 DeCommitFreeBlockThreshold : 0x200
   +0x024 DeCommitTotalFreeThreshold : 0x2000
   +0x028 TotalFreeSize    : 0xcb
   +0x02c MaximumAllocationSize : 0x7ffdefff
   +0x030 ProcessHeapsListIndex : 1
   +0x032 HeaderValidateLength : 0x608
   +0x034 HeaderValidateCopy : (null)
   +0x038 NextAvailableTagIndex : 0
   +0x03a MaximumTagIndex  : 0
   +0x03c TagEntries       : (null)
   +0x040 UCRSegments      : (null)
   +0x044 UnusedUnCommittedRanges : 0x00080598 _HEAP_UNCOMMMTTED_RANGE
   +0x048 AlignRound       : 0x17
   +0x04c AlignMask        : 0xfffffff8
   +0x050 VirtualAllocdBlocks : _LIST_ENTRY [ 0x80050 - 0x80050 ]   
   +0x058 Segments      : [64] 0x00080640 _HEAP_SEGMENT
   +0x158 u                : __unnamed
   +0x168 u2               : __unnamed
   +0x16a AllocatorBackTraceIndex : 0
   +0x16c NonDedicatedListLength : 1
   +0x170 LargeBlocksIndex : (null)
   +0x174 PseudoTagEntries : (null)
   +0x178 FreeLists    : [128] _LIST_ENTRY [ 0x829b0 - 0x829b0 ]
   +0x578 LockVariable     : 0x00080608 _HEAP_LOCK
   +0x57c CommitRoutine    : (null)
   +0x580 FrontEndHeap  : 0x00080688
   +0x584 FrontHeapLockCount : 0
   +0x586 FrontEndHeapType : 0x1 ''
   +0x587 LastSegmentIndex : 0 ''

Once again, you can see that the _HEAP structure is fairly large with a lot of information about the heap. For this exercise, the most important members of the _HEAP structure are located at the following offsets:

+0x050 VirtualAllocdBlocks : _LIST_ENTRY

Allocations that are greater than the virtual allocation size threshold are not managed as part of the segments and free lists. Rather, these allocations are allocated directly from the virtual memory manager. The heap manager tracks these allocations by keeping a list, as part of the _HEAP structure, that contains all virtual allocations.

+0x058 Segments            : [64]

The Segments field is an array of data structures of type _HEAP_SEGMENT. Each heap segment contains a list of heap entries active within that segment. Later on, you will see how we can use this information to walk the entire heap segment and locate allocations of interest.

+0x16c NonDedicatedListLength

As mentioned earlier, free list[0] contains allocations of size greater than 1016 bytes and less than the virtual allocation threshold. To efficiently manage this free list, the heap stores the number of allocations in the nondedicated list in this field. This information can come in useful when you want to analyze heap usage and quickly see how many of your allocations fall into the variable-sized free list[0] category.

+0x178 FreeLists        : [128] _LIST_ENTRY

The free lists are stored at offset 0x178 and contain doubly linked lists. Each list contains free heap blocks of a specific size. We will take a closer look at the free lists in a little bit.

+0x580 FrontEndHeap

The pointer located at offset 0x580 points to the front end allocator. We know the overall architecture and strategy behind the front end allocator, but unfortunately, the public symbol package does not contain definitions for it, making an in-depth investigation impossible. It is also worth noting that Microsoft reserves the right to change the offsets previously described between Windows versions.
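Although the front end allocator's internals are off-limits without full symbols, the free lists at offset 0x178 follow the classic sentinel-based doubly linked list pattern used throughout Windows: an empty list's head points at itself. A minimal sketch of that pattern (the `list_entry` type here mirrors the shape of _LIST_ENTRY but is our own toy type):

```c
#include <assert.h>
#include <stddef.h>

/* Toy equivalent of the Windows _LIST_ENTRY: a sentinel-based doubly
   linked list in which an empty list's head points at itself. */
typedef struct list_entry {
    struct list_entry *Flink;  /* next     */
    struct list_entry *Blink;  /* previous */
} list_entry;

static void list_init(list_entry *head) { head->Flink = head->Blink = head; }

static int list_is_empty(const list_entry *head) { return head->Flink == head; }

/* Insert at the tail, the way a freed block joins a free list. */
static void list_insert_tail(list_entry *head, list_entry *e)
{
    e->Blink = head->Blink;
    e->Flink = head;
    head->Blink->Flink = e;
    head->Blink = e;
}

/* Unlink and return the first entry, the way an allocation comes off a
   free list. The caller must check for emptiness first. */
static list_entry *list_remove_head(list_entry *head)
{
    list_entry *e = head->Flink;
    head->Flink = e->Flink;
    e->Flink->Blink = head;
    return e;
}

/* Exercise the list: empty -> one entry -> empty again. */
static int list_demo(void)
{
    list_entry head, a;
    list_init(&head);
    int was_empty = list_is_empty(&head);
    list_insert_tail(&head, &a);
    int now_full = !list_is_empty(&head);
    int got_a = (list_remove_head(&head) == &a);
    return was_empty && now_full && got_a && list_is_empty(&head);
}
```

Keep the self-referencing empty-list convention in mind; it is exactly what the debugger output in the next section shows.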

Back to our sample application—let's continue stepping through the code in the debugger. The first call of interest is to the GetProcessHeap API, which returns a handle to the default process heap. Because we already found this handle/pointer ourselves, we can verify that the explicit call to GetProcessHeap returns what we expect. After the call, the eax register contains 0x00080000, which matches our expectations. Next are two calls to the kernel32!HeapAlloc API that attempt allocations of sizes 16 and 1500. Will these allocations be satisfied by committing more segment memory or from the free lists? Before stepping over the first HeapAlloc call, let's try to find out where the heap manager will find a free heap block to satisfy this allocation. The first step in our investigation is to see if any free blocks of size 16 are available in the free lists. To check the availability of free blocks, we use the following command:

dt _LIST_ENTRY 0x00080000+0x178+8

This command dumps out the first node in the free list that corresponds to allocations of size 16. The 0x00080000 is the address of our heap. We add an offset of 0x178 to get the start of the free list table. The first entry in the free list table points to free list[0]. Because our allocation is much smaller than the free list[0] size threshold, we simply skip this free list by adding an additional 8 bytes (the size of the _LIST_ENTRY structure), which puts us at free list[1] representing free blocks of size 16.
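The offset arithmetic in that command is easy to double-check with a tiny helper (offsets taken from the _HEAP dump in Listing 6.3; this assumes the 32-bit layout shown there):

```c
#include <assert.h>
#include <stdint.h>

#define FREELISTS_OFFSET 0x178  /* FreeLists offset within _HEAP (Listing 6.3) */
#define LIST_ENTRY_SIZE  8      /* Flink + Blink, 4 bytes each on x86          */

/* Address of the list head for free list[n] inside a 32-bit _HEAP. */
static uint32_t free_list_head(uint32_t heap_base, unsigned n)
{
    return heap_base + FREELISTS_OFFSET + n * LIST_ENTRY_SIZE;
}
```

For the default process heap at 0x00080000, free list[1] (16-byte blocks) lives at 0x00080180, exactly the self-referencing address the debugger shows for the empty list.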

0:000> dt _LIST_ENTRY 0x00080000+0x178+8
 [ 0x80180 - 0x80180 ]
   +0x000 Flink            : 0x00080180 _LIST_ENTRY [ 0x80180 - 0x80180 ]
   +0x004 Blink            : 0x00080180 _LIST_ENTRY [ 0x80180 - 0x80180 ]

Remember that the free lists are doubly linked lists; hence the Flink and Blink fields of the _LIST_ENTRY structure are simply pointers to the next and previous allocations. It is critical to note that the pointer listed in the free lists actually points to the user-accessible part of the heap block and not to the start of the heap block itself. As such, if you want to look at the allocation metadata, you need to first subtract 8 bytes from the pointer. Both of these pointers seem to point to 0x00080180, which in actuality is the address of the list node we were just dumping out (0x00080000+0x178+8=0x00080180). This implies that the free list corresponding to allocations of size 16 is empty. Before we assume that the heap manager must commit more memory in the segment, remember that it will only do so as the absolute last resort. Hence, the heap manager first tries to see if there are any other free blocks of sizes greater than 16 that it could split to satisfy the allocation. In our particular case, free list[0] contains a free heap block:

0:000> dt _LIST_ENTRY 0x00080000+0x178
 [ 0x82ab0 - 0x82ab0 ]
   +0x000 Flink            : 0x00082ab0 _LIST_ENTRY [ 0x80178 - 0x80178 ]
   +0x004 Blink            : 0x00082ab0 _LIST_ENTRY [ 0x80178 - 0x80178 ]

The Flink member points to the location in the heap block available to the caller. In order to see the full heap block (including metadata), we must first subtract 8 bytes from the pointer (refer to Figure 6.8).

0:000> dt _HEAP_ENTRY 0x00082ab0-0x8
   +0x000 Size             : 0xab
   +0x002 PreviousSize     : 0xb
   +0x000 SubSegmentCode   : 0x000b00ab
   +0x004 SmallTagIndex    : 0xee ''
   +0x005 Flags            : 0x14 ''
   +0x006 UnusedBytes      : 0xee ''
   +0x007 SegmentIndex     : 0 ''

It is important to note that the size reported is the true size of the heap block divided by the heap granularity. The heap granularity is easily found by taking the size of the _HEAP_ENTRY structure. A heap block whose size is reported as 0xab is in reality 0xab*8 = 0x558 (1368) bytes.

The free heap block we are looking at definitely seems to be big enough to fit our allocation request of size 16. In the debug session, step over the first instruction that calls HeapAlloc. If successful, we can then check free list[0] again and see if the allocation we looked at prior to the call has changed:

0:000> dt _LIST_ENTRY 0x00080000+0x178
 [ 0x82ad8 - 0x82ad8 ]
   +0x000 Flink            : 0x00082ad8 _LIST_ENTRY [ 0x80178 - 0x80178 ]
   +0x004 Blink            : 0x00082ad8 _LIST_ENTRY [ 0x80178 - 0x80178 ]
0:000> dt _HEAP_ENTRY 0x00082ad8-0x8
   +0x000 Size             : 0xa6
   +0x002 PreviousSize     : 5
   +0x000 SubSegmentCode   : 0x000500a6
   +0x004 SmallTagIndex    : 0xee ''
   +0x005 Flags            : 0x14 ''
   +0x006 UnusedBytes      : 0xee ''
   +0x007 SegmentIndex     : 0 ''

Sure enough, what used to be the first entry in free list[0] has now changed. Instead of a free block of size 0xab, we now have a free block of size 0xa6. The difference (0x5) is due to our allocation request breaking up the larger free block we saw previously. If we are allocating 16 bytes (0x10), why does the free block's size shrink by only 0x5? The key is to remember that the reported size must first be multiplied by the heap granularity factor of 0x8. The true size of the new free block is then 0x00000530 (0xa6*8), making the true size difference 0x28 bytes. 0x10 of those 0x28 bytes are our allocation, and the remaining 0x18 bytes are the metadata associated with our heap block.

The next call to HeapAlloc attempts to allocate 1500 bytes of memory. We know that free heap blocks of this size must be located on free list[0]. However, from our previous investigation, we also know that the only free heap block on free list[0] is too small to accommodate the size we are requesting. With its hands tied, the heap manager is now forced to commit more memory in the heap segment. To get a better picture of the state of our heap segment, it is useful to walk the segment manually. The _HEAP structure contains an array of pointers to all segments currently active in the heap. The array is located at the base _HEAP address plus an offset of 0x58.

0:000> dd 0x00080000+0x58 l4
00080058  00080640 00000000 00000000 00000000
0:000> dt _HEAP_SEGMENT 0x00080640
   +0x000 Entry            : _HEAP_ENTRY
   +0x008 Signature        : 0xffeeffee
   +0x00c Flags            : 0
   +0x010 Heap             : 0x00080000 _HEAP
   +0x014 LargestUnCommittedRange : 0xfd000
   +0x018 BaseAddress      : 0x00080000
   +0x01c NumberOfPages    : 0x100
   +0x020 FirstEntry       : 0x00080680 _HEAP_ENTRY
   +0x024 LastValidEntry   : 0x00180000 _HEAP_ENTRY
   +0x028 NumberOfUnCommittedPages : 0xfd
   +0x02c NumberOfUnCommittedRanges : 1
   +0x030 UnCommittedRanges : 0x00080588 _HEAP_UNCOMMMTTED_RANGE
   +0x034 AllocatorBackTraceIndex : 0
   +0x036 Reserved         : 0
   +0x038 LastEntryInSegment : 0x00082ad0 _HEAP_ENTRY

The _HEAP_SEGMENT data structure contains a slew of information used by the heap manager to efficiently manage all the active segments in the heap. When walking a segment, the most useful piece of information is the FirstEntry field located at the base segment address plus an offset of 0x20. This field represents the first heap block in the segment. If we dump out this block and get the size, we can dump out the next heap block by adding the size to the first heap block's address. If we continue this process, the entire segment can be walked, and each allocation can be investigated for correctness.

0:000> dt _HEAP_ENTRY 0x00080680
   +0x000 Size             : 0x303
   +0x002 PreviousSize     : 8
   +0x000 SubSegmentCode   : 0x00080303
   +0x004 SmallTagIndex    : 0x9a ''
   +0x005 Flags            : 0x7 ''
   +0x006 UnusedBytes      : 0x18 ''
   +0x007 SegmentIndex     : 0 ''
0:000> dt _HEAP_ENTRY 0x00080680+(0x303*8)
   +0x000 Size             : 8
   +0x002 PreviousSize     : 0x303
   +0x000 SubSegmentCode   : 0x03030008
   +0x004 SmallTagIndex    : 0x99 ''
   +0x005 Flags            : 0x7 ''
   +0x006 UnusedBytes      : 0x1e ''
   +0x007 SegmentIndex     : 0 ''
0:000> dt _HEAP_ENTRY 0x00080680+(0x303*8)+(8*8)
   +0x000 Size             : 5
   +0x002 PreviousSize     : 8
   +0x000 SubSegmentCode   : 0x00080005
   +0x004 SmallTagIndex    : 0x91 ''
   +0x005 Flags            : 0x7 ''
   +0x006 UnusedBytes      : 0x1a ''
   +0x007 SegmentIndex     : 0 ''
0:000> dt _HEAP_ENTRY 0x00080680+(0x303*8)+(8*8)+(5*8)
   +0x000 Size             : 0xa6
   +0x002 PreviousSize     : 5
   +0x000 SubSegmentCode   : 0x000500a6
   +0x004 SmallTagIndex    : 0xee ''
   +0x005 Flags            : 0x14 ''
   +0x006 UnusedBytes      : 0xee ''
   +0x007 SegmentIndex     : 0 ''

Let's see what the heap manager does to the segment (if anything) to try to satisfy the allocation request of size 1500 bytes. Step over the HeapAlloc call and walk the segment again. The heap block of interest is shown next.

+0x000 Size             : 0xbf
+0x002 PreviousSize     : 5
+0x000 SubSegmentCode   : 0x000500bf
+0x004 SmallTagIndex    : 0x10 ''
+0x005 Flags            : 0x7 ''
+0x006 UnusedBytes      : 0x1c ''
+0x007 SegmentIndex     : 0 ''

Before we stepped over the call to HeapAlloc, the last heap block was marked as free and had a size of 0xa6. After the call, the block status changed to busy with a size of 0xbf (0xbf*8 = 0x5f8), indicating that this block now holds our new allocation. Since our allocation was too big to fit into the previous size of 0xa6, the heap manager committed more memory to the segment. Did it commit just enough to hold our allocation? Actually, it committed much more and put the remaining free memory into a new block at address 0x000830c8. The heap manager can only request page-sized allocations (4KB on x86 systems) from the virtual memory manager, so it returns the remainder of each such allocation to the free lists.

The next couple of lines in our application simply free the allocations we just made. What do we expect the heap manager to do when it executes the first HeapFree call? In addition to updating the status of the heap block to free and adding it to the free lists, we expect it to try to coalesce the heap block with neighboring free blocks. Before we step over the first HeapFree call, let's take a look at the heap block associated with that call.

0:000> dt _HEAP_ENTRY 0x000830c8-(0xbf*8)-(0x5*8)
   +0x000 Size             : 5
   +0x002 PreviousSize     : 0xb
   +0x000 SubSegmentCode   : 0x000b0005
   +0x004 SmallTagIndex    : 0x1f ''
   +0x005 Flags            : 0x7 ''
   +0x006 UnusedBytes      : 0x18 ''
   +0x007 SegmentIndex     : 0 ''
0:000> dt _HEAP_ENTRY 0x000830c8-(0xbf*8)-(0x5*8)-(0xb*8)
   +0x000 Size             : 0xb
   +0x002 PreviousSize     : 5
   +0x000 SubSegmentCode   : 0x0005000b
   +0x004 SmallTagIndex    : 0 ''
   +0x005 Flags            : 0x7 ''
   +0x006 UnusedBytes      : 0x1c ''
   +0x007 SegmentIndex     : 0 ''
0:000> dt _HEAP_ENTRY 0x000830c8-(0xbf*8)
   +0x000 Size             : 0xbf
   +0x002 PreviousSize     : 5
   +0x000 SubSegmentCode   : 0x000500bf
   +0x004 SmallTagIndex    : 0x10 ''
   +0x005 Flags            : 0x7 ''
   +0x006 UnusedBytes      : 0x1c ''
   +0x007 SegmentIndex     : 0 ''

The previous and next heap blocks are both busy (Flags=0x7), which means that the heap manager cannot coalesce the freed memory; the heap block is simply put on the free lists. More specifically, the heap block will go onto free list[1] because its size is 16 bytes. Let's verify our theory: step over the HeapFree call and use the same mechanism as before to see what happened to the heap block.

0:000> dt _HEAP_ENTRY 0x000830c8-(0xbf*8)-(0x5*8)
   +0x000 Size             : 5
   +0x002 PreviousSize     : 0xb
   +0x000 SubSegmentCode   : 0x000b0005
   +0x004 SmallTagIndex    : 0x1f ''
   +0x005 Flags            : 0x4 ''
   +0x006 UnusedBytes      : 0x18 ''
   +0x007 SegmentIndex     : 0 ''

As you can see, the heap block status is indeed set to free, and the size remains the same. The unchanged size indicates that the heap manager did not coalesce the heap block with adjacent blocks. Lastly, we verify that the block made it onto free list[1].

I will leave it as an exercise for the reader to figure out what happens to the segment and heap blocks during the next call to HeapFree. Here's a hint: Remember that the size of the heap block being freed is 1500 bytes and that the state of one of the adjacent blocks is set to free.

This concludes our overview of the internal workings of the heap manager. Although it might seem like a daunting task to understand and walk the various heap structures, after a little practice it all becomes easier. Before we move on to the heap corruption scenarios, there is one important debugger command that can make us more efficient when debugging them. The extension command is called !heap and is part of the exts.dll debugger extension. Using this command, you can very easily display all the heap information you could possibly want. In fact, all the information we just gathered manually is output by the !heap extension command in a split second. But wait: we just spent a lot of time figuring out how to analyze the heap by hand, walk the segments, and verify the heap blocks. Why even bother if we have this beautiful command that does all the work for us? As always, the answer lies in how the debugger arrives at the information it presents. If the state of the heap is intact, the !heap extension command shows the heap state in a nice and digestible form. If, however, the state of the heap has been corrupted, it is no longer sufficient to rely on the command to tell us what became corrupted and how. We need to know how to analyze the various parts of the heap ourselves to arrive at sound conclusions and possible culprits.
