2.5 Number Codes
Digital signals are either control signals of some kind or information. In general, information takes the form of numbers or characters. These numbers and characters have to be coded in a form suitable for storage and manipulation by digital hardware. Thus, one integer or one character may be represented by a set of bits. From the point of view of a computer or other digital system, no one system of coding is better than another. There do, however, need to be standards, so that different systems can communicate. The standards that have emerged are generally also designed such that a human being can interpret the data if necessary.
2.5.1 Integers
The simplest form of coding is that of positive integers. For example, a set of three bits would allow us to represent the decimal integers 0 to 7. In base 2 arithmetic, 000_{2} represents 0_{10}, 011_{2} represents 3_{10}, and 111_{2} represents 7_{10}. As with decimal notation, the most significant bit is on the left.
For the benefit of human beings, strings of bits may be grouped into sets of three or four and written using octal (base 8) or hexadecimal (base 16) notation. For example, 66_{8} is equal to 110 110_{2} or 54_{10}. In hexadecimal notation, the letters A to F represent the decimal numbers 10 to 15. For example, EDA_{16} is 1110 1101 1010_{2} or 7332_{8} or 3802_{10}.
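The conversions above can be checked with a short Python sketch, using the built-in int() parser and format() printer (the variable names are illustrative only):

```python
# 66 in octal equals 110110 in binary and 54 in decimal.
n = int("66", 8)
print(n, format(n, "b"))                  # 54 110110

# EDA in hexadecimal equals 1110 1101 1010 in binary,
# 7332 in octal, and 3802 in decimal.
m = int("EDA", 16)
print(m, format(m, "o"), format(m, "b"))  # 3802 7332 111011011010
```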
The simple translation of a decimal number into bits is sufficient for zero and positive integers. Negative integers require additional information. The simplest approach is to set aside one bit as a sign bit. Therefore, 0 110_{2} might represent +6_{10}, while 1 110_{2} would represent –6_{10}. While this makes translation between binary and decimal numbers simple, the arithmetic operations of addition and subtraction require that the sign bits be checked before an operation can be performed on two numbers. It is common, therefore, to use a different notation for signed integers: two's complement. The principle of two's complement notation is that the code for –b, where b is a binary number represented using n bits, is the code given by 2^{n} – b. For example, –6_{10} is represented by 10000_{2} – 0110_{2}, which is 1010_{2}. The same result is obtained by inverting all the bits and adding 1: inverting 0110_{2} gives 1001_{2}, and 1001_{2} + 1 = 1010_{2}.
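Both constructions of the two's complement code can be sketched in Python; the function names here are only for illustration, with n = 4 bits as in the –6 example:

```python
def twos_complement(b: int, n: int) -> int:
    """Code for -b in n bits, computed as 2**n - b."""
    return (2**n - b) % 2**n

def invert_and_add_one(b: int, n: int) -> int:
    """Same result: invert all n bits, then add 1."""
    mask = 2**n - 1
    return ((b ^ mask) + 1) & mask

print(format(twos_complement(0b0110, 4), "04b"))    # 1010, the code for -6
print(format(invert_and_add_one(0b0110, 4), "04b")) # 1010
```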
The advantage of two's complement notation is that addition and subtraction may be performed using exactly the same hardware as for unsigned arithmetic; no sign checking is needed. The major disadvantage is that multiplication and division become much more complicated. Booth's algorithm, described in Section 5.7, is a technique for multiplying two's complement numbers.
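The "same hardware" property can be illustrated with a small Python sketch: adding the two codes as plain unsigned numbers, modulo 2^{n}, yields the correct two's complement result (the helper name and 4-bit width are assumptions for this example):

```python
N = 4
MASK = 2**N - 1

def to_signed(code: int) -> int:
    """Interpret an N-bit code as a two's complement value."""
    return code - 2**N if code & (1 << (N - 1)) else code

a = 0b0011          # +3
b = 0b1010          # -6 in two's complement
s = (a + b) & MASK  # plain unsigned addition, discarding any carry out
print(format(s, "04b"), to_signed(s))   # 1101 -3
```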
2.5.2 Fixed-Point Numbers
For many applications, noninteger data needs to be stored and manipulated. The binary representation of a fixed-point number is exactly the same as for an integer number, except that there is an implicit "decimal" point. For example, 6.25 is equal to 2^{2} + 2^{1} + 2^{–2} or 110.01_{2}. Instead of representing the point, the number 11001_{2} (25_{10}) is stored with the implicit knowledge that it and the results of any operations involving it have to be divided by 2^{2} to obtain the true value. Notice that all operations, including two's complement representations, are the same as for integer numbers.
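The 6.25 example can be sketched in Python; the scale factor of 2^{2} (two fractional bits) and the helper names are assumptions made for this illustration:

```python
SCALE = 2  # number of fractional bits

def to_fixed(x: float) -> int:
    """Store x as an integer scaled by 2**SCALE."""
    return round(x * 2**SCALE)

def from_fixed(code: int) -> float:
    """Divide by 2**SCALE to recover the true value."""
    return code / 2**SCALE

a = to_fixed(6.25)        # 25, i.e. 11001 in binary
b = to_fixed(1.5)         # 6
print(a, from_fixed(a))   # 25 6.25
# The product of two scaled codes carries 2*SCALE fractional bits,
# so it must be divided by 2**(2*SCALE):
print(a * b / 2**(2 * SCALE))   # 9.375, i.e. 6.25 * 1.5
```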
2.5.3 Floating-Point Numbers
The number of bits that have been allocated to represent fractions limits the range of fixed-point numbers. Floating-point numbers allow a much wider range of values to be represented. In general, floating-point operations are performed using specialized hardware because they are very computationally expensive. A typical single-precision floating-point number has 32 bits, of which 1 is the sign bit (s), 8 are the exponent (e), biased by an offset of 2^{7} – 1 = 127, and the remaining 23 are the mantissa (m), such that a decimal number is represented as
(–1)^{s} × 1.m × 2^{e–127}
IEEE Standard 754-1985 defines formats for 32-, 64-, and 128-bit floating-point numbers, with special bit patterns for ±∞ and for the results of invalid operations, such as 0/0 (NaN).
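The field layout described above can be unpacked with Python's standard struct module; this is a sketch, with the function name chosen only for illustration:

```python
import struct

def decode(x: float):
    """Split a 32-bit IEEE 754 number into sign, exponent, and mantissa."""
    bits, = struct.unpack(">I", struct.pack(">f", x))
    s = bits >> 31            # 1 sign bit
    e = (bits >> 23) & 0xFF   # 8 exponent bits, biased by 127
    m = bits & 0x7FFFFF       # 23 mantissa bits
    # For normalized numbers the value is (-1)**s * 1.m * 2**(e - 127).
    value = (-1)**s * (1 + m / 2**23) * 2**(e - 127)
    return s, e, m, value

print(decode(6.25))   # (0, 129, 4718592, 6.25)
```

Here 6.25 = 1.1001_{2} × 2^{2}, so the stored exponent is 127 + 2 = 129 and the mantissa field holds the fraction bits 1001 followed by zeros.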
2.5.4 Alphanumeric Characters
Characters are commonly represented by 7 or 8 bits. The ASCII code is widely used. Seven bits allow the basic Latin alphabet in upper and lower cases, together with various punctuation symbols and control codes, to be represented. For example, the letter A is represented by 1000001. For accented characters, 8-bit codes are commonly used. Manipulation of text is normally performed using general-purpose computers rather than specialized digital hardware. Non-European languages may use 16 or 32 bits to represent individual characters.
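The ASCII code for A can be checked with Python's ord() and chr() built-ins:

```python
print(format(ord("A"), "07b"))    # 1000001, as stated above
print(chr(0b1000001))             # A
# In ASCII, a lower-case letter differs from its upper-case
# counterpart in only one bit:
print(chr(ord("A") | 0b0100000))  # a
```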
2.5.5 Gray Codes
In the normal binary counting sequence, the transition from 0111 (7_{10}) to 1000 (8_{10}) causes 3 bits to change. In some circumstances, it may be undesirable for several bits to change at once, because the bit changes may not occur at exactly the same time. The intermediate values might generate spurious warnings. A Gray code is one in which only 1 bit changes at a time. For example, a 3-bit Gray code would count through the following sequence (other Gray codes can also be derived):
 000
 001
 011
 010
 110
 111
 101
 100
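One way to derive this sequence is the reflected binary Gray code, in which the i-th codeword is i XOR (i shifted right by one); a Python sketch:

```python
def gray(i: int) -> int:
    """i-th codeword of the reflected binary Gray code."""
    return i ^ (i >> 1)

print([format(gray(i), "03b") for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```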
Note that the sequence of bits on a K-map is a Gray code. Another application of Gray codes is as a position encoder on a rotating shaft, as shown in Figure 2.18. Only 1 bit changes at each transition, so missed transitions are easily spotted.
Figure 2.18 Gray code as shaft encoder.
2.5.6 Parity Bits
When data are transmitted either by wire or by using radio communications, there is always the possibility that noise may cause a bit to be misinterpreted. At the very least, it is desirable to know that an error has occurred, and it may be desirable to transmit sufficient information to allow any error to be corrected.
The simplest form of error detection is to use a parity bit with each word of data. For each 8 bits of data, a ninth bit is sent that is 0 if there is an even number of 1s in the data word (even parity) or 1 otherwise. Alternatively, odd parity can be used, in which case the parity bit is inverted. This is sufficient if the chance of an error occurring is low. We cannot tell which bit is in error, but knowing that an error has occurred means that the data can be transmitted again. Unfortunately, if two errors occur, the parity bit might appear to be correct. A single error can be corrected by using a two-dimensional parity scheme, in which every ninth word is itself a set of parity bits, as shown in Table 2.10. If a single error occurs, both the row parity and the column parity will be incorrect, allowing the erroneous bit to be identified and corrected. Certain multiple errors are also detectable and correctable.
Table 2.10 Two-Dimensional Parity

          Bit 7  Bit 6  Bit 5  Bit 4  Bit 3  Bit 2  Bit 1  Bit 0  Parity
Word 0      0      1      0      1      0      1      1      0      0
Word 1      0      1      0      0      1      0      0      0      0
Word 2      0      1      0      1      0      0      1      1      0
Word 3      0      1      0      0      1      0      0      1      1
Word 4      0      1      0      0      0      0      1      1      1
Word 5      0      1      0      0      1      0      0      0      0
Word 6      0      1      0      0      0      1      0      0      0
Word 7      0      1      0      0      1      1      0      0      1
Parity      0      0      0      0      0      1      1      1      1
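The two-dimensional scheme can be sketched in Python: each word gets a row parity bit, a final word holds the column parities, and a single flipped bit then fails exactly one row check and one column check, which together locate it. The function names are illustrative only:

```python
def parity(bits):
    """Even-parity bit for a sequence of 0/1 values."""
    return sum(bits) % 2

def encode(words):
    """words: list of 8-bit rows (lists of 0/1). Appends row and column parity."""
    rows = [w + [parity(w)] for w in words]
    rows.append([parity(col) for col in zip(*rows)])
    return rows

def locate_error(rows):
    """Return (row, col) of a single-bit error, or None if all checks pass."""
    bad_rows = [r for r, row in enumerate(rows) if parity(row) != 0]
    bad_cols = [c for c, col in enumerate(zip(*rows)) if parity(col) != 0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]
    return None

block = encode([[0, 1, 0, 1, 0, 1, 1, 0]])  # Word 0 from Table 2.10
block[0][2] ^= 1                            # flip one bit "in transit"
print(locate_error(block))                  # (0, 2)
```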
By using a greater number of parity bits, each derived from part of the word, multiple errors can be detected and corrected. The simplest forms of such codes were derived by Hamming in 1948. Better codes were derived by Reed and Solomon in 1960.
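In the spirit of the codes mentioned above, a minimal Hamming(7,4) sketch is shown below: three parity bits, each derived from part of the word, give a syndrome that directly names the position of a single error. Bit positions run 1 to 7, with parity bits at positions 1, 2, and 4; the function names are assumptions for this illustration:

```python
def hamming_encode(d):
    """d: four data bits [d1, d2, d3, d4] -> seven-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2   # checks positions 1, 3, 5, 7
    p2 = (d1 + d3 + d4) % 2   # checks positions 2, 3, 6, 7
    p3 = (d2 + d3 + d4) % 2   # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(c):
    """Recompute the three checks; the syndrome locates a single error."""
    s1 = (c[0] + c[2] + c[4] + c[6]) % 2
    s2 = (c[1] + c[2] + c[5] + c[6]) % 2
    s3 = (c[3] + c[4] + c[5] + c[6]) % 2
    pos = s1 + 2 * s2 + 4 * s3     # 1-based position of the error, 0 if none
    if pos:
        c[pos - 1] ^= 1
    return c

cw = hamming_encode([1, 0, 1, 1])
cw[5] ^= 1                                                   # corrupt position 6
print(hamming_correct(cw) == hamming_encode([1, 0, 1, 1]))   # True
```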