
.NET Reference Guide


More on Maximum Object Sizes

Last updated Mar 14, 2003.

Last year I discussed the two-gigabyte object size limitation in .NET. To recap: no single object in .NET can exceed two gigabytes in size. This limitation has some important ramifications when working with arrays or collections that contain large numbers of items.

Array Size Limitations

The actual single-object size limit is slightly smaller because of allocation overhead. That is, you can’t allocate an array of 2,147,483,647 bytes. The limit, arrived at experimentally with the program below, appears to be 2,147,483,591 bytes, which means that the overhead for an array of bytes is 56 bytes.

using System;
using System.Runtime.InteropServices;

class Program
{
  static void Main(string[] args)
  {
    AllocateMaxSize<byte>();
    AllocateMaxSize<short>();
    AllocateMaxSize<int>();
    AllocateMaxSize<long>();
    AllocateMaxSize<object>();
  }

  const long twogigLimit = ((long)2 * 1024 * 1024 * 1024) - 1;
  static void AllocateMaxSize<T>()
  {
    int twogig = (int)twogigLimit;
    int num;
    Type tt = typeof(T);
    if (tt.IsValueType)
    {
      num = twogig / Marshal.SizeOf(typeof(T));
    }
    else
    {
      num = twogig / IntPtr.Size;
    }

    T[] buff;
    bool success = false;
    do
    {
      try
      {
        buff = new T[num];
        success = true;
      }
      catch (OutOfMemoryException)
      {
        --num;
      }
    } while (!success);
    Console.WriteLine("Maximum size of {0}[] is {1:N0} items.", typeof(T).ToString(), num);
  }
}

The full output for that program is:

Maximum size of System.Byte[] is 2,147,483,591 items.
Maximum size of System.Int16[] is 1,073,741,795 items.
Maximum size of System.Int32[] is 536,870,897 items.
Maximum size of System.Int64[] is 268,435,448 items.
Maximum size of System.Object[] is 268,435,447 items.

If you do the math, you’ll see that the overhead for allocating an array is 56 bytes. There are a few bytes left over at the end because the element size doesn’t always divide the available space evenly. For example, 268,435,448 64-bit numbers occupy 2,147,483,584 bytes. Adding the 56-byte array overhead gives you 2,147,483,640, which is 7 bytes short of the 2,147,483,647-byte limit.
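That arithmetic is easy to check with a short sketch. The 56-byte overhead figure is the one measured above, and the class and method names here are my own:

```csharp
using System;

class OverheadCheck
{
    // 2^31 - 1: the single-object size limit discussed above.
    const long Limit = 2147483647;
    // Per-array overhead, measured experimentally by the program above.
    const long Overhead = 56;

    // Largest element count that fits under the limit for a given element size.
    static long MaxItems(long elementSize) => (Limit - Overhead) / elementSize;

    static void Main()
    {
        Console.WriteLine("byte[]:  {0:N0}", MaxItems(1));  // 2,147,483,591
        Console.WriteLine("short[]: {0:N0}", MaxItems(2));  // 1,073,741,795
        Console.WriteLine("int[]:   {0:N0}", MaxItems(4));  // 536,870,897
        Console.WriteLine("long[]:  {0:N0}", MaxItems(8));  // 268,435,448
    }
}
```

These match the measured values above, except for object[], whose per-array overhead appears to be slightly larger.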

I realize that it’s rare to allocate arrays of such large sizes, but it’s quite common to create large collections: List&lt;T&gt;, Dictionary&lt;TKey, TValue&gt;, HashSet&lt;T&gt;, etc. The implementations of all of those collections eventually allocate arrays, so understanding maximum array sizes gives you an idea of how many items you can put in the other collection types.
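As a quick illustration of why that matters, a List grows its backing array geometrically as you add items. This is a minimal sketch; the exact growth policy is an implementation detail of the runtime:

```csharp
using System;
using System.Collections.Generic;

class ListGrowth
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Each reported capacity is the length of the internal array,
                // so the maximum array size ultimately bounds the list as well.
                lastCapacity = list.Capacity;
                Console.WriteLine("Count {0,3}: capacity {1}", list.Count, lastCapacity);
            }
        }
    }
}
```

Note, too, that during a reallocation the old and new backing arrays exist simultaneously, so growing a very large list requires even more memory headroom than the final capacity suggests.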

The maximum size of an array of object references depends on which version of the .NET runtime you’re using. An array of objects actually stores references (pointers). With the 32-bit runtime, pointers are four bytes, so in theory you could allocate an array of 536,870,897 items. Although you could allocate such an array, you probably couldn’t populate it, because a 32-bit process can address at most two gigabytes of memory by default (three gigabytes with the appropriate boot option).

In the 64-bit runtime, pointers are 8 bytes, which cuts the maximum size of your array in half, to 268,435,447 items. I find it somewhat curious that the maximum size of an object[] is one smaller than the maximum size of a long[]. Both data types are 8 bytes in size, so I don’t know how to account for the discrepancy.

The nice thing about reference types is that no matter how large the actual objects are, you can still have an array of a quarter billion items. That is, whether your object is 12 bytes in size or 120 bytes, an array of references to them occupies the same amount of space. Of course, a quarter billion 120-byte objects will occupy 30 gigabytes. That’s a lot of memory, but not unheard of. I’m on the verge of getting a server that has 64 gigabytes of RAM, and I’ll be able to use every bit of it.

Things get more interesting when you have an array of structures.

Say you have this structure:

struct MyThing
{
  public long l;
  public byte b;
  public int x;
  public short y;
  public byte z;
}

Marshal.SizeOf() reports that the size of this structure is 24 bytes when running on a 64-bit machine. As you would expect, the largest array of these that you can allocate is 89,478,482 items. But you can do better if you’re willing to rearrange things or change the structure packing.
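You can confirm those figures directly. The class name here is my own, and the sizes shown assume a 64-bit process:

```csharp
using System;
using System.Runtime.InteropServices;

struct MyThing
{
    public long l;
    public byte b;
    public int x;
    public short y;
    public byte z;
}

class SizeCheck
{
    static void Main()
    {
        // The fields total 16 bytes, but the default alignment rules
        // pad the structure out to 24.
        int size = Marshal.SizeOf(typeof(MyThing));
        Console.WriteLine("Size: {0} bytes", size);
        Console.WriteLine("Max:  {0:N0} items", 2147483591L / size);
    }
}
```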

The default structure packing aligns fields in memory so that they’re more efficient to access. You can sacrifice a little efficiency in exchange for space by applying the StructLayoutAttribute to your structure, like this:

[StructLayout(LayoutKind.Sequential, Pack=1)]
struct MyThing
{
  public long l;
  public byte b;
  public int x;
  public short y;
  public byte z;
}

The size of this structure is only 16 bytes because there is no automatic alignment. However, it’s a little less efficient to access items in the structure.

If you’re worried about memory access efficiency, then you should lay out your structures so that the larger items are declared first, and work down to the smaller items. This has the same effect as packing with a value of 1, but naturally aligns things. That is, modify the structure so that it looks like this:

[StructLayout(LayoutKind.Sequential, Pack=1)]
struct MyThing
{
  public long l;
  public int x;
  public short y;
  public byte b;
  public byte z;
}

Marshal.SizeOf will report a size of 16 bytes for this structure. You still want to apply the StructLayoutAttribute when you use this technique, to prevent the runtime from aligning the structure itself on a larger boundary. For example, this structure occupies 10 bytes:

[StructLayout(LayoutKind.Sequential, Pack=1)]
struct MyThing
{
  public long l;
  public short y;
}

But if you don’t apply StructLayoutAttribute, the default layout pads the structure to 16 bytes so that it’s aligned on an 8-byte boundary.
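Here’s a quick side-by-side comparison of the two layouts (the type names are mine):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct PackedThing
{
    public long l;
    public short y;
}

// The same fields, but laid out with the default packing.
struct DefaultThing
{
    public long l;
    public short y;
}

class PaddingDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(PackedThing)));   // 10
        Console.WriteLine(Marshal.SizeOf(typeof(DefaultThing)));  // padded to an 8-byte boundary
    }
}
```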

When you’re designing an application that has to work with large arrays, you have to choose between storing objects or structures in the array. If the items you’re storing are smaller than 8 bytes, then storing them as structures is a clear win, provided, of course, that you can accept the ramifications of value semantics. But if your data structure is larger than 8 bytes, the question of whether to make it a struct or a class is more involved.

Let’s say your structure is 10 bytes in size. You could have an array of 214,748,359 items. But if you made it a class (a reference type), then you could have 268,435,447 of them in the array. However, that’s not the whole story.

All of a sudden you’ve more than doubled your memory footprint. That is, the array of structures will occupy right at two gigabytes. But if you make it an array of objects, then the array of references will occupy two gigabytes, and the individual objects will occupy another two gigabytes and then some.
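The back-of-the-envelope numbers for the 10-byte example look like this. These are payload sizes only; real objects also carry a per-object header, which makes the reference version larger still:

```csharp
using System;

class FootprintSketch
{
    static void Main()
    {
        const long structCount = 214748359;  // max 10-byte structs in one array
        const long refCount = 268435447;     // max references in one array (64-bit)

        // One allocation, right at two gigabytes.
        Console.WriteLine("struct array: {0:N0} bytes", structCount * 10);

        // The reference array alone is also nearly two gigabytes...
        Console.WriteLine("ref array:    {0:N0} bytes", refCount * 8);

        // ...and the objects' payloads add well over two gigabytes more,
        // before counting per-object header overhead.
        Console.WriteLine("object data:  {0:N0} bytes", refCount * 10);
    }
}
```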

Which you use (value type or reference type) is going to depend on how many items you want, and how much memory you have. If you want the maximum possible number of items, then you’ll have to go with references, and pay the penalty of using more memory.

With a better understanding of just how large you can make arrays in .NET, we can now discuss limitations of other object types.