- Item 3: Beware of Implicit Coercions
- Item 4: Prefer Primitives to Object Wrappers
- Item 5: Avoid using == with Mixed Types
- Item 6: Learn the Limits of Semicolon Insertion
- Item 7: Think of Strings As Sequences of 16-Bit Code Units
typeof 17;   // "number"
typeof 98.6; // "number"
typeof -2.1; // "number"
Most arithmetic operators work with integers, real numbers, or a combination of the two:
-99 + 100;  // 1
21 - 12.3;  // 8.7
2.5 / 5;    // 0.5
21 % 8;     // 5
The bitwise arithmetic operators, however, are special. Rather than operating on their arguments directly as floating-point numbers, they implicitly convert them to 32-bit integers. (To be precise, they are treated as 32-bit, big-endian, two’s complement integers.) For example, take the bitwise OR expression:
8 | 1; // 9
As a 32-bit integer, the operand 8 has the bit pattern:
00000000000000000000000000001000
You can see this for yourself by using the toString method of numbers:
(8).toString(2); // "1000"
The argument to toString specifies the radix, in this case indicating a base 2 (i.e., binary) representation. The result drops the extra 0 bits on the left since they don’t affect the value.
The integer 1 is represented in 32 bits as:
00000000000000000000000000000001
The bitwise OR expression combines the two bit sequences by keeping any 1 bits found in either input, resulting in the bit pattern:
00000000000000000000000000001001
This sequence represents the integer 9. You can verify this by using the standard library function parseInt, again with a radix of 2:
parseInt("1001", 2); // 9
(The leading 0 bits are unnecessary since, again, they don’t affect the result.)
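A minimal sketch of the conversions described above, showing that bitwise operators first truncate their operands to 32-bit integers (so fractional parts are silently discarded), and that the round trip through toString and parseInt recovers the same value:

```javascript
// Bitwise operators convert operands to 32-bit integers first:
// 8.9 is truncated to 8, then 8 | 1 evaluates to 9.
var x = 8.9 | 1;

// Round trip through a binary string and back.
var bits = (9).toString(2);     // the binary digits of 9
var back = parseInt(bits, 2);   // parse them back as base 2

x;    // 9
bits; // "1001"
back; // 9
```

Note that the truncation happens per operand, before the bitwise operation is applied.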
A final note of caution about floating-point numbers: If they don’t make you at least a little nervous, they probably should. Floating-point numbers look deceptively familiar, but they are notoriously inaccurate. Even some of the simplest-looking arithmetic can produce inaccurate results:
0.1 + 0.2; // 0.30000000000000004
While 64 bits of precision is reasonably large, doubles can still only represent a finite set of numbers, rather than the infinite set of real numbers. Floating-point arithmetic can only produce approximate results, rounding to the nearest representable number. When you perform a sequence of calculations, these rounding errors can accumulate, leading to less and less accurate results. Rounding also causes surprising deviations from the kind of properties we usually expect of arithmetic. For example, addition of real numbers is associative, meaning that for any real numbers x, y, and z, it’s always the case that (x + y) + z = x + (y + z).
But this is not always true of floating-point numbers:
(0.1 + 0.2) + 0.3; // 0.6000000000000001
0.1 + (0.2 + 0.3); // 0.6
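The accumulation effect is easy to see with repeated additions. A minimal sketch, summing 0.1 ten times (each addition rounds, and the errors compound):

```javascript
// Summing 0.1 ten times does not produce exactly 1,
// because 0.1 has no exact double representation.
var sum = 0;
for (var i = 0; i < 10; i++) {
  sum += 0.1;
}

sum;                       // 0.9999999999999999
sum === 1;                 // false
Math.abs(sum - 1) < 1e-9;  // true: equal within a small tolerance
```

Comparing with a tolerance, rather than with ===, is the usual way to test floating-point results.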
Floating-point numbers offer a trade-off between accuracy and performance. When accuracy matters, it’s critical to be aware of their limitations. One useful workaround is to work with integer values wherever possible, since they can be represented without rounding. When doing calculations with money, programmers often scale numbers up to work with the currency’s smallest denomination so that they can compute with whole numbers. For example, if the above calculation were measured in dollars, we could work with whole numbers of cents instead:
(10 + 20) + 30; // 60
10 + (20 + 30); // 60
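The scaling idiom can be wrapped in a pair of conversion helpers. A sketch with hypothetical toCents and toDollars functions (not part of any standard library), which confines rounding to the boundaries of the computation:

```javascript
// Hypothetical helpers: represent dollar amounts as whole cents
// so that the additions themselves are exact integer arithmetic.
function toCents(dollars) {
  return Math.round(dollars * 100); // round once, at the boundary
}
function toDollars(cents) {
  return cents / 100;               // convert back only for display
}

var total = toCents(0.1) + toCents(0.2) + toCents(0.3); // 10 + 20 + 30

total;             // 60
toDollars(total);  // 0.6
```

The single Math.round call absorbs the inexactness of the dollars-to-cents multiplication, so the running total never drifts.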
With integers, you still have to take care that all calculations fit within the range between –2^53 and 2^53, but you don’t have to worry about rounding errors.
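The 2^53 limit can be demonstrated directly. A small sketch showing that consecutive integers stop being distinguishable once you pass it:

```javascript
// Doubles represent every integer exactly only up to 2^53;
// beyond that, consecutive integers start to collide.
var limit = Math.pow(2, 53); // 9007199254740992

limit + 1 === limit; // true: 2^53 + 1 rounds back to 2^53
limit - 1 === limit; // false: integers below the limit stay exact
```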
Things to Remember
- Bitwise operators treat numbers as if they were 32-bit signed integers.
- Be aware of the limitations of precision in floating-point arithmetic.