
.NET Reference Guide


A Larger Integer

Last updated Mar 14, 2003.

The .NET Framework has support for integers and unsigned integers up to 64 bits long. On a 64 bit processor, which has a native 64 bit type, operations on these long integers are very fast. Even on a 32 bit processor, the operations are normally pretty quick because the processor instruction set includes some instructions that appear to be intended specifically for making 64 bit computations.

If you need a larger integer type, you have a few choices. You can use the Decimal type, which can represent integers up to 96 bits, but it has a few drawbacks. In particular, Decimal doesn’t support the bitwise operations, so it’s a non-starter for most purposes.

If you want the best possible performance, you’ll write extended precision integer arithmetic in assembly language, C++, or some other unmanaged language and make a .NET interface so that you can use the new type as you would any other .NET value type.

If you’re not quite as concerned with absolute performance, but rather just need a larger integer type from time to time, it’s perfectly reasonable to write the new type in C# or Visual Basic. The resulting code will be noticeably slower than what you can achieve with native assembly language, but it’s much easier to implement and will give you the extended precision you need in some situations.

My needs fall into the latter category: I need a larger integer type, but I don’t need it to be particularly fast. For now. I might need additional performance in the future, but right now all I need is those extra bits. So I chose to create a larger integer type in C#.

The obvious choices for a larger integer type are 96 bits or 128 bits. To be honest, 96 bits would be large enough for my current needs. But implementing a 128 bit integer isn’t any more difficult than 96 bits, so I figured there was no reason to go with the smaller type. The only drawback is that on a 32 bit machine, the 128 bit type will be somewhat slower than the 96 bit type. On a 64 bit processor, the two types will perform about the same, with the 128 bit type possibly being slightly faster.

Defining the Interface

The hardest part of creating what amounts to a new basic type isn’t, as you might think, coding the bitwise multiply and divide, but rather designing the interface so that the new type acts like a native type. We want the new type to work "just like" a native .NET Int32 or Int64, including assignments, arithmetic and bitwise operations, comparisons, and conversions from and to other types. In all, you have to create an astonishing amount of code.

It probably comes as no surprise that I call the new type Int128, in keeping with the .NET naming convention for integer types. I’ve also created a corresponding UInt128 type, although I’ll limit the discussion here to the signed type.

Internally, the Int128 type is a structure that contains two 64 bit numbers: a "high" part and a "low" part. This is how a 128 bit number is typically expressed on a 64 bit processor, just as a 64 bit number is typically expressed as two 32 bit halves on a 32 bit processor. Since the compiler can’t parse a 128 bit integer literal, it’s necessary to create a constructor to which you can pass the low and high parts of the number. The constructor is:

public Int128(UInt64 low, Int64 high)

Don’t worry too much about that constructor, because you won’t have to use it very often: only when you want to initialize an Int128 with a very large value. In practice, you’ll be able to use normal assignment statements in most situations.
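To make the layout concrete, here is a sketch of the structure and its constructor. The field names _lo and _hi are my own choice for illustration; any pair of 64 bit halves will do:

```csharp
using System;

public struct Int128
{
    private readonly UInt64 _lo;   // low 64 bits, always treated as unsigned
    private readonly Int64 _hi;    // high 64 bits, carries the sign

    public Int128(UInt64 low, Int64 high)
    {
        _lo = low;
        _hi = high;
    }
}
```

For example, `new Int128(0, 1)` represents 2^64, which is one more than UInt64.MaxValue.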

Aside from the constructor, there are a few other things that all .NET types need. Since it’s likely that we’ll want to use Int128 in collection classes, we need to override the Equals method. And overriding Equals requires that we also override GetHashCode. So we need to add these two methods to the public interface:

public override bool Equals(object obj)
public override int GetHashCode()

In keeping with the way that the other numeric types are implemented, we’ll also define an Equals method that takes an Int128 parameter:

public bool Equals(Int128 value)
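Assuming the value is stored in two fields (call them _lo and _hi, for the low and high halves), these three methods might look like the following sketch:

```csharp
public override bool Equals(object obj)
{
    // Only another Int128 can compare equal.
    return obj is Int128 && Equals((Int128)obj);
}

public bool Equals(Int128 value)
{
    return _lo == value._lo && _hi == value._hi;
}

public override int GetHashCode()
{
    // Mix both halves so values that differ only in the high part
    // still hash differently.
    return unchecked(_lo.GetHashCode() ^ (_hi.GetHashCode() * 31));
}
```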

Also, the .NET integer types have MaxValue and MinValue fields, which define the largest and smallest values that the type can represent. (The built-in types declare these as constants, but C# only permits const fields of the built-in types, so ours must be static readonly.)

public static readonly Int128 MaxValue
public static readonly Int128 MinValue
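Assuming the two-half layout (an unsigned low part and a signed high part), the fields might be initialized like this (as static readonly rather than const, which C# would reject for a struct type); the comments give the values as powers of two:

```csharp
// 2^127 - 1: every low bit set, high half at Int64.MaxValue.
public static readonly Int128 MaxValue = new Int128(UInt64.MaxValue, Int64.MaxValue);

// -2^127: low half zero, high half at Int64.MinValue.
public static readonly Int128 MinValue = new Int128(0, Int64.MinValue);
```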

We’ll also want a ToString method so that we can output numbers of our new type. All types inherit the Object.ToString method, and can override it:

public override string ToString()

The other numeric types supply additional ToString functionality that allows you to format numbers with thousands separators, output in different representations (hexadecimal, for example), and such. We’ll eventually want those, too:

public string ToString(IFormatProvider provider)
public string ToString(string format)
public string ToString(string format, IFormatProvider provider)
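Once those overloads exist, usage would mirror the built-in types. This sketch assumes the implicit conversion from Int32 described later; the grouped output depends on the current culture:

```csharp
Int128 n = 1234567;                      // implicit conversion from Int32
Console.WriteLine(n.ToString());         // plain decimal digits: 1234567
Console.WriteLine(n.ToString("N0"));     // group separators, e.g. "1,234,567" in en-US
Console.WriteLine(n.ToString("X"));      // hexadecimal: 12D687
```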

We’ll also want corresponding parsing functions so that we can read a string and create an Int128. The built-in types provide these as static Parse and TryParse methods:

public static Int128 Parse(string s)
public static Int128 Parse(string s, NumberStyles style)
public static Int128 Parse(string s, IFormatProvider provider)
public static Int128 Parse(string s, NumberStyles style, IFormatProvider provider)
public static bool TryParse(string s, out Int128 result)
public static bool TryParse(string s, NumberStyles style, IFormatProvider provider, out Int128 result)
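Usage would then follow the familiar try-parse pattern. The literal below is 2^127 - 1, the largest value an Int128 can hold:

```csharp
Int128 value;
if (Int128.TryParse("170141183460469231731687303715884105727", out value))
    Console.WriteLine("parsed: " + value);
else
    Console.WriteLine("not a valid Int128");
```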

Since we’ll want the ability to compare our Int128 values, we’ll need to implement the IComparable and IComparable<Int128> interfaces, just like the built-in types do:

public int CompareTo(object obj)
public int CompareTo(Int128 value)
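Assuming the two-half layout (a signed high part and an unsigned low part), comparison is straightforward: compare the high halves first, and fall back to the low halves only when the high halves are equal. This ordering is correct for two's complement because the sign lives entirely in the high part:

```csharp
public int CompareTo(Int128 value)
{
    // The high halves decide unless they're equal; the sign bit lives there.
    if (_hi != value._hi)
        return _hi.CompareTo(value._hi);
    // Equal high halves: the unsigned low halves decide.
    return _lo.CompareTo(value._lo);
}

public int CompareTo(object obj)
{
    if (obj == null) return 1;   // by convention, any value sorts after null
    if (!(obj is Int128))
        throw new ArgumentException("Argument must be an Int128.");
    return CompareTo((Int128)obj);
}
```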

The built-in numeric types also implement several interfaces that we’ll want to match, including IEquatable&lt;Int128&gt;, IFormattable, and IConvertible; we’ll return to those as the implementation takes shape.

The rest of the interface consists of overloaded operators and implicit and explicit conversion operators. The overloaded operators are the standard integer operators, which break down as follows:

  • Unary operators: +, -, ~, ++, --
  • Binary arithmetic operators: +, -, *, /, %
  • Bitwise operators: &, |, ^, <<, >>
  • Comparison operators: ==, !=, >, <, >=, <=
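To give a flavor of what these involve, here is one way addition might be written, assuming the two-field layout: add the halves independently, then propagate a carry when the low half wraps around. (C# operators run unchecked by default, which is what we want here.)

```csharp
public static Int128 operator +(Int128 a, Int128 b)
{
    unchecked
    {
        UInt64 lo = a._lo + b._lo;   // may wrap around
        Int64 hi = a._hi + b._hi;
        if (lo < a._lo)              // wrap-around means a carry out of the low half
            hi++;
        return new Int128(lo, hi);
    }
}
```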

That leaves conversion operators, which come in two flavors: implicit and explicit. Both types of conversions allow you to convert from one data type to another. The difference between implicit and explicit is very important.

An implicit conversion is one that is guaranteed not to lose data. Converting from Int16 to Int32, for example, is guaranteed not to lose data because Int32 can exactly represent every value that Int16 can express.

Because an implicit conversion is guaranteed not to lose data, you don’t need a cast to write it. So, for example, you can write the following:

Int16 a = -32;
Int32 b = a;

An explicit conversion is one that has the potential of losing data. Converting from Int32 to Int16, for example, can lose data because the larger integer can express values that the smaller integer cannot. Consider, for example, this code:

Int32 a = 65536;
Int16 b = (Int16)a;
Console.WriteLine(b);

The output from this program is, of course, 0, because 65536 is beyond the range of a 16 bit number, and only the low-order 16 bits of the value are copied from a to b. Converting from Int32 to Int16 is a narrowing conversion, and can lose data.

Conversions that can lose data should be written as explicit conversions so that they require a cast. This prevents the compiler from allowing the conversion without explicit instructions from the programmer. I think you’ll agree that this is A Good Thing.

So which conversions do we need? Which should be implicit and which should be explicit? The simple answer, of course, is, "make it work like Int64." And that’s what we’ll do. The .NET documentation provides Type Conversion Tables that ended up being very useful in deciding which conversions to allow.

All of the implicit conversions that we will allow convert from native types to Int128; none go the other way, because no native type can hold the full precision of a 128 bit integer. The implicit conversions we will support are:

  • From Byte to Int128
  • From SByte to Int128
  • From Int16 to Int128
  • From UInt16 to Int128
  • From Char to Int128
  • From Int32 to Int128
  • From UInt32 to Int128
  • From Int64 to Int128
  • From UInt64 to Int128
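Each of these is a one-line operator; the only subtlety is sign extension. Converting from a signed type, a negative value must fill the high half with ones (that is, -1). Two sketches, again assuming the _lo/_hi layout:

```csharp
public static implicit operator Int128(Int64 value)
{
    // Sign-extend: negative values set every bit of the high half.
    return new Int128(unchecked((UInt64)value), value < 0 ? -1L : 0L);
}

public static implicit operator Int128(UInt64 value)
{
    // Unsigned values never need sign extension.
    return new Int128(value, 0);
}
```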

An Int128 can be converted to any numeric type, with varying amounts of data loss. Working under the assumption that the programmer knows what he’s doing, we’ll allow all of the conversions if they’re made explicitly (with a type cast). In addition, we’ll also allow conversions from floating point types to Int128. The explicit conversions we’ll allow are:

  • From Single to Int128
  • From Double to Int128
  • From Decimal to Int128
  • From Int128 to Byte
  • From Int128 to SByte
  • From Int128 to Int16
  • From Int128 to UInt16
  • From Int128 to Int32
  • From Int128 to UInt32
  • From Int128 to Int64
  • From Int128 to UInt64
  • From Int128 to Single
  • From Int128 to Double
  • From Int128 to Decimal
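In the integer direction these operators simply truncate, just as the Int32-to-Int16 example above does. Two sketches, again assuming the _lo/_hi layout:

```csharp
public static explicit operator Int64(Int128 value)
{
    // Keep only the low 64 bits, reinterpreted as signed.
    return unchecked((Int64)value._lo);
}

public static explicit operator Int32(Int128 value)
{
    // Truncate further, to the low 32 bits.
    return unchecked((Int32)value._lo);
}
```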

With all those conversions defined, we can implement the IConvertible interface that the native types implement. IConvertible defines methods that convert the value to each of the CLR base types, and is normally exposed through the Convert class.
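A few of the IConvertible members, as they might look; the interface has many more, but most simply delegate to the explicit conversions. GetTypeCode returns TypeCode.Object because no dedicated TypeCode exists for a user-defined type:

```csharp
TypeCode IConvertible.GetTypeCode()
{
    // No TypeCode exists for user-defined types.
    return TypeCode.Object;
}

Int64 IConvertible.ToInt64(IFormatProvider provider)
{
    // Delegates to the explicit (narrowing) conversion.
    return (Int64)this;
}

bool IConvertible.ToBoolean(IFormatProvider provider)
{
    // Nonzero means true, matching Convert.ToBoolean for the built-in types.
    return _lo != 0 || _hi != 0;
}
```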

That’s a lot of infrastructure just for a new integer type! In the next section, we’ll start building it.