Data Storage Overview
Before you delve into C's subtleties, you should review the basics of C types—specifically, their storage sizes, value ranges, and representations. This section explains the types from a general perspective, explores details such as binary encoding, two's complement arithmetic, and byte order conventions, and winds up with some pragmatic observations on common and future implementations.
The C standards define an object as a region of data storage in the execution environment; its contents can represent values. Each object has an associated type: a way to interpret and give meaning to the value stored in that object. Dozens of types are defined in the C standards, but this chapter focuses on the following:
 Character types—There are three character types: char, signed char, and unsigned char. All three types are guaranteed to take up 1 byte of storage. Whether the char type is signed is implementation defined. Most current systems default to char being signed, although compiler flags are usually available to change this behavior.
 Integer types—There are four standard signed integer types, excluding the character types: short int, int, long int, and long long int. Each standard type has a corresponding unsigned type that takes the same amount of storage. (Note: The long long int type is new to C99.)
 Floating types—There are three real floating types and three complex types. The real floating types are float, double, and long double. The three complex types are float _Complex, double _Complex, and long double _Complex. (Note: The complex types are new to C99.)
 Bit fields—A bit field is a specific number of bits in an object. Bit fields can be signed or unsigned, depending on their declaration. If no sign type specifier is given, the sign of the bit field is implementation dependent.
From an abstract perspective, each integer type (including character types) represents a different integer size that the compiler can map to an appropriate underlying architecture-dependent data type. A character is guaranteed to consume 1 byte of storage (although a byte might not necessarily be 8 bits). sizeof(char) is always one, and you can always use an unsigned character pointer, sizeof, and memcpy() to examine and manipulate the actual contents of other types. The other integer types have certain ranges of values they are required to be able to represent, and they must maintain certain relationships with each other (long can't be smaller than short, for example), but otherwise, their implementation largely depends on their architecture and compiler.
Signed integer types can represent both positive and negative values, whereas unsigned types can represent only positive values. Each signed integer type has a corresponding unsigned integer type that takes up the same amount of storage. Unsigned integer types have two possible types of bits: value bits, which contain the actual base-two representation of the object's value, and padding bits, which are optional and otherwise unspecified by the standard. Signed integer types have value bits and padding bits as well as one additional bit: the sign bit. If the sign bit is clear in a signed integer type, its representation for a value is identical to that value's representation in the corresponding unsigned integer type. In other words, the underlying bit pattern for the positive value 42 should look the same whether it's stored in an int or unsigned int.
An integer type has a precision and a width. The precision is the number of value bits the integer type uses. The width is the number of bits the type uses to represent its value, including the value and sign bits, but not the padding bits. For unsigned integer types, the precision and width are the same. For signed integer types, the width is one greater than the precision.
Programmers can invoke the various types in several ways. For a given integer type, such as short int, a programmer can generally omit the int keyword. So the keywords signed short int, signed short, short int, and short refer to the same data type. In general, if the signed and unsigned type specifiers are omitted, the type is assumed to be signed. However, this assumption isn't true for the char type, as whether it's signed depends on the implementation. (Usually, chars are signed. If you need a signed character with 100% certainty, you can specifically declare a signed char.)
C also has a rich type-aliasing system supported via typedef, so programmers usually have preferred conventions for specifying a variable of a known size and representation. For example, types such as int8_t, uint8_t, int32_t, and u_int32_t are popular with UNIX and network programmers. They represent an 8-bit signed integer, an 8-bit unsigned integer, a 32-bit signed integer, and a 32-bit unsigned integer, respectively. Windows programmers tend to use types such as BYTE, CHAR, and DWORD, which respectively map to an 8-bit unsigned integer, an 8-bit signed integer, and a 32-bit unsigned integer.
Binary Encoding
Unsigned integer values are encoded in pure binary form, which is a base-two numbering system. Each bit is a 1 or 0, indicating whether the power of two that the bit's position represents is contributing to the number's total value. To convert a positive number from binary notation to decimal, the value of each bit position n is multiplied by 2^{n-1}. A few examples of these conversions are shown in the following lines:
0001 1011 = 2^{4} + 2^{3} + 2^{1} + 2^{0} = 27
0000 1111 = 2^{3} + 2^{2} + 2^{1} + 2^{0} = 15
0010 1010 = 2^{5} + 2^{3} + 2^{1} = 42
Similarly, to convert a positive decimal integer to binary, you repeatedly subtract powers of two, starting from the highest power of two that can be subtracted from the integer leaving a positive result (or zero). The following lines show a few sample conversions:
55 = 32 + 16 + 4 + 2 + 1
   = (2^{5}) + (2^{4}) + (2^{2}) + (2^{1}) + (2^{0})
   = 0011 0111

37 = 32 + 4 + 1
   = (2^{5}) + (2^{2}) + (2^{0})
   = 0010 0101
Signed integers make use of a sign bit as well as value and padding bits. The C standards give three possible arithmetic schemes for integers and, therefore, three possible interpretations for the sign bit:
 Sign and magnitude—The sign of the number is stored in the sign bit. It's 1 if the number is negative and 0 if the number is positive. The magnitude of the number is stored in the value bits. This scheme is easy for humans to read and understand but is cumbersome for computers because they have to explicitly compare magnitudes and signs for arithmetic operations.
 One's complement—Again, the sign bit is 1 if the number is negative and 0 if the number is positive. Positive values can be read directly from the value bits, but negative values can't; to find a negative number's magnitude, you must negate the whole number first. In one's complement, a number is negated by inverting all its bits. This system works better for the machine, but there are still complications with addition, and, like sign and magnitude, it has the amusing ambiguity of having two values of zero: positive zero and negative zero.
 Two's complement—The sign bit is 1 if the number is negative and 0 if the number is positive. You can read positive values directly from the value bits, but you can't read negative values directly; you have to negate the whole number first. In two's complement, a number is negated by inverting all the bits and then adding one. This works well for the machine and removes the ambiguity of having two potential values of zero.
Integers are usually represented internally by using two's complement, especially in modern computers. As mentioned, two's complement encodes positive values in standard binary encoding. The range of positive values that can be represented is based on the number of value bits. A two's complement 8-bit signed integer has 7 value bits and 1 sign bit. It can represent the positive values 0 to 127 in the 7 value bits. All negative values represented with two's complement encoding require the sign bit to be set. The values from -128 to -1 can be represented in the value bits when the sign bit is set, thus allowing the 8-bit signed integer to represent -128 to 127.
For arithmetic, the sign bit is placed in the most significant bit of the data type. In general, a signed two's complement number of width X can represent the range of integers from -2^{X-1} to 2^{X-1} - 1. Table 6-1 shows the typical ranges of two's complement integers of varying sizes.
Table 6-1. Maximum and Minimum Values for Integers

                          8-bit   16-bit   32-bit        64-bit
Minimum value (signed)    -128    -32768   -2147483648   -9223372036854775808
Maximum value (signed)    127     32767    2147483647    9223372036854775807
Minimum value (unsigned)  0       0        0             0
Maximum value (unsigned)  255     65535    4294967295    18446744073709551615
As described previously, you negate a two's complement number by inverting all the bits and adding one. Listing 6-1 shows how you obtain the representation of -15 by negating the number 15, and then how you figure out the value of an unknown negative bit pattern.
Listing 6-1. Two's Complement Representation of -15
0000 1111 – binary representation for 15
1111 0000 – invert all the bits
0000 0001 – add one
1111 0001 – two's complement representation for -15

1101 0110 – unknown negative number
0010 1001 – invert all the bits
0000 0001 – add one
0010 1010 – two's complement representation for 42
            original number was -42
Byte Order
There are two conventions for ordering bytes in modern architectures: big endian and little endian. These conventions apply to data types larger than 1 byte, such as a short int or an int. In the big-endian architecture, the bytes are located in memory starting with the most significant byte and ending with the least significant byte. Little-endian architectures, however, start with the least significant byte and end with the most significant. For example, you have a 4-byte integer with the decimal value 12345. In binary, it's 11000000111001. This integer is located at address 500. On a big-endian machine, it's represented in memory as the following:
Address 500: 00000000
Address 501: 00000000
Address 502: 00110000
Address 503: 00111001
On a littleendian machine, however, it's represented this way:
Address 500: 00111001
Address 501: 00110000
Address 502: 00000000
Address 503: 00000000
Intel machines are little endian, but RISC machines, such as SPARC, tend to be big endian. Some machines are capable of dealing with both encodings natively.
Common Implementations
Practically speaking, if you're talking about a modern, 32-bit, two's complement machine, what can you say about C's basic types and their representations? In general, none of the integer types have any padding bits, so you don't need to worry about that. Everything is going to use two's complement representation. Bytes are going to be 8 bits long. Byte order varies; it's little endian on Intel machines but more likely to be big endian on RISC machines.
The char type is likely to be signed by default and take up 1 byte. The short type takes 2 bytes, and int takes 4 bytes. The long type is also 4 bytes, and long long is 8 bytes. Because you know integers are two's complement encoded and you know their underlying sizes, determining their minimum and maximum values is easy. Table 6-2 summarizes the typical sizes and ranges of integer data types on a 32-bit machine.
Table 6-2. Typical Sizes and Ranges for Integer Types on 32-Bit Platforms

Type                Width (in Bits)   Minimum Value                Maximum Value
signed char         8                 -128                         127
unsigned char       8                 0                            255
short               16                -32,768                      32,767
unsigned short      16                0                            65,535
int                 32                -2,147,483,648               2,147,483,647
unsigned int        32                0                            4,294,967,295
long                32                -2,147,483,648               2,147,483,647
unsigned long       32                0                            4,294,967,295
long long           64                -9,223,372,036,854,775,808   9,223,372,036,854,775,807
unsigned long long  64                0                            18,446,744,073,709,551,615
What can you expect in the near future as 64-bit systems become more prevalent? The following list describes a few type systems that are in use today or have been proposed:
 ILP32—int, long, and pointer are all 32 bits, the current standard for most 32-bit computers.
 ILP32LL—int, long, and pointer are all 32 bits, and a new type—long long—is 64 bits. The long long type is new to C99. It gives C a type that has a minimum width of 64 bits but doesn't change any of the language's fundamentals.
 LP64—long and pointer are 64 bits, so the pointer and long types have changed from 32-bit to 64-bit values.
 ILP64—int, long, and pointer are all 64 bits. The int type has been changed to a 64-bit type, which has fairly significant implications for the language.
 LLP64—pointers and the new long long type are 64 bits. The int and long types remain 32-bit data types.
Table 6-3 summarizes these type systems briefly.
Table 6-3. 64-Bit Integer Type Systems

Type        ILP32   ILP32LL   LP64   ILP64   LLP64
char        8       8         8      8       8
short       16      16        16     16      16
int         32      32        32     64      32
long        32      32        64     64      32
long long   N/A     64        64     64      64
pointer     32      32        64     64      64
As you can see, the typical data type sizes match the ILP32LL model, which is what most compilers adhere to on 32-bit platforms. The LP64 model is the de facto standard for compilers that generate code for 64-bit platforms. As you learn later in this chapter, the int type is a basic unit for the C language; many things are converted to and from it behind the scenes. Because the int data type is relied on so heavily for expression evaluations, the LP64 model is an ideal choice for 64-bit systems because it doesn't change the int data type; as a result, it largely preserves the expected C type conversion behavior.