Making JavaScript Fast, Part 2


The Problem of Numbers

One of the hardest bits to get right in a dynamic language is how to handle numbers. In Smalltalk, small numbers are typically stored as integers inside a pointer, with the lowest bit set to 1 to flag them as not being real objects. (The lowest bit in an object pointer is always 0, because objects are word-aligned.) The ECMAScript specification makes things a bit more difficult. Numbers in ECMAScript have to have the range and precision of a 64-bit floating-point value.

Imagine for a moment that you took a C program, rewrote it so that every number was a double, and then tried running it. You'd probably find that it was much, much slower. A lot of the work in making JavaScript run quickly goes into pretending to use double-precision floating-point values while really using something faster. One simple way of doing this is to have all integer constants in a program start off being represented as integers. When you divide one integer by another, you can typically do the calculation as an integer divide and get the remainder for free. If the remainder is zero, you carry on; if not, you redo the calculation as a floating-point divide.

A similar option uses addition and multiplication, checking the CPU's carry flag after the operation to ensure that the result really fits into an integer, and promoting it if it doesn't. Here, you may be able to cheat a bit more, and initially only promote it to a 32-bit floating-point value. On some processors, this effort won't make any difference. The x87 FPU, for example, performs all calculations at 80-bit precision, so there's little to be gained. On a machine with SSE, the vector unit can be used instead, but not to much benefit for JavaScript. SSE doesn't do 32-bit arithmetic faster than 64-bit arithmetic; it can just do twice as much at once, and only if you group the operations together. On a modern ARM chip, however, this technique makes a huge difference. ARM chips often don't include a scalar floating-point unit, but a lot now include the NEON vector unit, which can do 32-bit floating-point calculations, but not 64-bit floating-point calculations. If you need to use 64-bit floating-point operations, they have to be emulated in software, which is a lot slower.

By the way, the implementation here can affect the semantics of JavaScript programs. The specification defines a minimum precision, in a very woolly fashion. You can expect a minimum of 53 bits of precision (the size of a double's significand), but an implementation may decide to use 64-bit integers until a non-integer or an overflowed result occurs. You might end up with 64 bits of precision for integer values and 53 bits for floating-point values, giving some quite interesting results due to rounding.

If you're really clever, you can avoid some of these tests. If you have a loop variable, for example, often you can guarantee its range. Consider a loop statement like this:

`for (i = 1; i < 100; i++)`

If i isn't written to anywhere else in the loop body, you can guarantee that the increment will never overflow, and you can guarantee that i will always be an integer. This kind of type analysis can be extended to cover a lot of cases. The more tests you can eliminate, the faster the code will run.

The fact that a variable in JavaScript can be an integer, a floating-point value, or an object reference means that this kind of test is needed all over the place. Type inference becomes very important for eliminating tests. In JavaScript, every number behaves as an object whose prototype is Number: you can call methods on a number value just as on any other value, and methods can be added to Number.prototype at run time. This makes things even more complicated, because some objects may just be integers with an extra method attached. To perform any operation on a JavaScript value, you may need several tests to find out what it really is, and what the method arguments are. The more you can infer, the more tests you can eliminate.