
.NET Reference Guide


Maximum Object Size in .NET

Last updated Mar 14, 2003.

Early in the development of the .NET runtime, the designers decided to limit the maximum size of a single object to two gigabytes. That was a reasonable choice at the time, considering that a computer with just one gigabyte of RAM was considered a hot machine, and the most prevalent versions of Windows couldn't give a process more than two gigabytes of address space. I doubt that many people chafed at that decision.

Seven (or more) years later, that decision is becoming less popular. With 64-bit versions of Windows running on dual-core, eight-gigabyte machines, people want to use all that RAM. Granted, the .NET runtime does allow you to access all of your memory; it just limits you to two gigabytes for a single object. The reasons for this limitation have to do with performance, ease of porting, and runtime library size. All quite reasonable, even today. But that doesn't help if you need an array larger than two gigabytes.

I speak of arrays here because they would be the most common objects for which you'd want to allocate that much space. Certainly it's possible to have a media file that's larger than two gigabytes (a full-length movie, for example), although I suspect that even then the majority of the structure would be a compressed array of bytes that represent the video stream.

It's also true that you can't have more than (2^31 - 1) items in a single array, but that's more of an API issue (array indexes are limited to signed 32-bit integers) than a memory limitation. Especially when you consider that a managed array that size will occupy a little more than two gigabytes due to object overhead. But I digress.

This problem isn't as bad as it looks at first. For example, when you first hear about the two-gigabyte size limit, you might worry that you can't have a <tt>List</tt> whose contents total more than two gigabytes. If your objects were a kilobyte each, you couldn't store more than about two million of them. But that's not the case. Why?

Remember, a <tt>List</tt> of objects stores references to those objects. The <tt>List</tt> object's memory footprint isn't much more than that of an array. So if you wanted to store two million items, your <tt>List</tt> would be around eight megabytes in size: just big enough to hold two million 32-bit object references. Each object in the list is allocated separately, so you won't run into the object size limitation there.
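Here's a minimal sketch of that point in C#. The <tt>Item</tt> class and the sizes in the comments are my own illustration, not anything from the runtime; the point is that the <tt>List</tt> holds only references, while each item's kilobyte of data is a separate, small allocation.

<pre>
using System;
using System.Collections.Generic;

// Hypothetical example: each Item carries roughly a kilobyte of data,
// but the List stores only a reference to each one.
class Item
{
    public byte[] Payload = new byte[1024];
}

class Program
{
    static void Main()
    {
        var items = new List<Item>();
        for (int i = 0; i < 2000000; i++)
        {
            items.Add(new Item());
        }

        // The List's internal array holds two million references
        // (roughly eight megabytes on a 32-bit runtime). The two
        // gigabytes of payload data is spread across millions of
        // separate small allocations, none of which comes anywhere
        // near the single-object limit.
        Console.WriteLine(items.Count);
    }
}
</pre>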

Also, objects that have references to other objects store only the reference. So if you have an object that holds references to three other objects, the memory footprint is only 12 extra bytes: one 32-bit pointer to each of the referenced objects. It doesn't matter how large the referenced object is.
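For instance (a hypothetical class, assuming the 32-bit references discussed above), an object like this costs three pointer-sized fields plus the usual object header, no matter how large the things it points to happen to be:

<pre>
// Frame itself occupies only three reference fields (12 bytes on a
// 32-bit runtime, 24 on a 64-bit one) plus the object header; the
// referenced buffers are separate allocations.
class Frame
{
    public byte[] Video;      // may refer to a multi-megabyte buffer
    public byte[] Audio;
    public string Subtitles;
}
</pre>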

The real question is how you get around the limitation if you really do need an array that approaches two gigabytes in size. One way is a <tt>BigArray</tt>, of which Josh Williams' implementation is just one of many. The idea is to allocate the large array as a collection of smaller blocks. The array accessor code computes the block number from the passed array index, and then computes an offset into the relevant block. It's simple, easy to implement, and performs reasonably well.
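The following is a minimal sketch of that blocked-array idea, not Josh Williams' implementation; the block size and the class shape are arbitrary choices for illustration.

<pre>
using System;

// Blocked "big array" sketch: the logical array is split into
// fixed-size blocks, and each access maps a 64-bit index to a
// block number plus an offset within that block.
public class BigArray<T>
{
    private const int BlockSize = 524288;   // 2^19 elements per block
    private readonly T[][] blocks;
    private readonly long length;

    public BigArray(long length)
    {
        this.length = length;
        int blockCount = (int)((length + BlockSize - 1) / BlockSize);
        blocks = new T[blockCount][];
        for (int i = 0; i < blockCount; i++)
        {
            long remaining = length - (long)i * BlockSize;
            blocks[i] = new T[(int)Math.Min(remaining, BlockSize)];
        }
    }

    public long Length
    {
        get { return length; }
    }

    public T this[long index]
    {
        get { return blocks[(int)(index / BlockSize)][(int)(index % BlockSize)]; }
        set { blocks[(int)(index / BlockSize)][(int)(index % BlockSize)] = value; }
    }
}
</pre>

Choosing a power-of-two block size means the division and modulus can be reduced to a shift and a mask if profiling shows the indexing math matters.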

Another way would be to allocate the memory yourself by calling the Windows heap management functions. This will give you a large, contiguous block, but you'll have to access it with unsafe code. Or you could write unmanaged DLL functions that allocate and manipulate the memory for you. Either way, you end up executing unmanaged or unsafe code. The benefit of using native memory management is that the block will be contiguous and will (at least in theory) perform better than the blocked <tt>BigArray</tt> approach.
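As a sketch of the native-memory approach (using <tt>Marshal.AllocHGlobal</tt> here rather than P/Invoking the Win32 heap functions directly, and assuming a 64-bit process for sizes beyond two gigabytes), something like this wraps a contiguous unmanaged buffer behind an unsafe indexer:

<pre>
using System;
using System.Runtime.InteropServices;

// Sketch only: a contiguous buffer on the native heap, accessed with
// unsafe code. Compile with /unsafe. AllocHGlobal does not zero the
// memory, and there is no bounds checking here.
class NativeBuffer : IDisposable
{
    private IntPtr pointer;
    private readonly long length;

    public NativeBuffer(long bytes)
    {
        length = bytes;
        // The IntPtr overload accepts a 64-bit size in a 64-bit process.
        pointer = Marshal.AllocHGlobal(new IntPtr(bytes));
    }

    public long Length
    {
        get { return length; }
    }

    public unsafe byte this[long index]
    {
        get { return ((byte*)pointer.ToPointer())[index]; }
        set { ((byte*)pointer.ToPointer())[index] = value; }
    }

    public void Dispose()
    {
        if (pointer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(pointer);
            pointer = IntPtr.Zero;
        }
    }
}
</pre>

You give up garbage collection and bounds checking in exchange for the contiguous block, so nothing will catch an out-of-range index here.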

These workarounds assume, of course, that you have enough available physical RAM to accommodate whatever huge data structure you allocate. If you allocate an eight gigabyte array on a machine that has only four gigabytes of RAM, and then try to access that array in a random (i.e. sparse) manner, you're going to be thrashing virtual memory. The computer will spend the vast majority of its time swapping things in and out of physical RAM.

The final solution, and often the most profitable, is to re-architect your program. Are you sure you need an array that large? Are you sure you can't make your objects smaller, keep fewer of them around, or come up with a way to reduce each object's footprint? Redesigning the algorithm is often the least expensive of all the options and the most likely to garner large performance gains.

I've been unable to find any information on whether this limitation was addressed in .NET 3.0, or whether it's planned for an upcoming release. Until it is fixed, I'll stick with working around the limitation in the few cases where I really need that much RAM in a single object.