How do you measure the speed of a language? The obvious way is to write some algorithms in it, run them, and see how long they take to execute. For the most part, this is a good, pragmatic solution. There are a few potential pitfalls, however:
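A minimal version of this measurement might look like the following sketch (Python is used here purely for illustration, and the naive recursive `fib` is a stand-in workload, not a recommended benchmark):

```python
# A minimal benchmark harness: run the same workload several times
# and report the best round, which filters out scheduler noise.
import timeit

def fib(n):
    """Deliberately naive recursive Fibonacci as a CPU-bound workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# repeat() returns one total elapsed time per round; the minimum is
# the least-disturbed measurement.
times = timeit.repeat(lambda: fib(20), number=100, repeat=5)
print(f"best of 5 rounds: {min(times):.4f}s for 100 calls")
```

The same harness, written idiomatically in each candidate language, gives you comparable numbers, but only for workloads that resemble your real application.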
Selecting the correct compiler. If you’re evaluating Lisp, for example, it wouldn’t be fair to benchmark an interpreted version of Lisp against compiled C++ or just-in-time compiled Java. But it would be fair to compare an interpreted Lisp with something like Perl, Ruby, or Python, whose default (supported) implementations run interpreted bytecode.
The fastest compiler isn’t always the correct one for your needs. For some time, the Microsoft Java runtime was the fastest implementation available. If you were looking for a language for cross-platform development, however, it would have been a mistake to select Java just because the Microsoft runtime was fast.
Be aware of the platform you’re targeting when you decide on a compiler. If you’re writing cross-platform code, you may want to standardize the compiler (or runtime) across all of your supported platforms to ensure that you have a minimum of issues from varying levels of language support across implementations. In this case, you might choose something like GCC, which is likely to produce slower code than a compiler tuned for a specific architecture would. Be sure to take this factor into account when performing a speed evaluation. Just because IBM’s XLC or Intel’s ICC produces faster code from C than whatever other language you’re evaluating, that doesn’t mean that you can always benefit from this speed.
At the opposite extreme, you might need to support only one platform. In this case, you’re free to choose the best implementation available for that platform. Make sure that you do this for all of the languages you evaluate, however. It’s easy to read benchmarks showing that the Steel Bank Common Lisp compiler produces code that runs faster than C++ code, and miss the fact that it performs somewhat worse on register-starved architectures such as x86. If you’re targeting x86, this is an important factor.
Don’t place too much faith in micro-benchmarks. It’s quite easy to design an algorithm that plays to the strengths of a particular implementation—or to its weaknesses. Something that requires a lot of array accesses, for example, is likely to cripple Java’s performance. Something that requires a high degree of concurrency is likely to show off Erlang’s strengths. When you look at micro-benchmarks, try to imagine where you would use algorithms like the ones described in your own application. If you can’t, then disregard them.
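To see how easily a micro-benchmark can be skewed, consider two toy benchmarks that stress different operations (again a sketch in Python; the absolute numbers and even the relative ranking will vary across languages and implementations):

```python
# Two micro-benchmarks that stress different operations. Which one
# "wins" tells you nothing unless your application actually spends
# its time doing the same kind of work.
import timeit

# Workload 1: indexed array accesses in a tight loop.
array_heavy = timeit.timeit(
    "total = 0\nfor i in range(len(data)): total += data[i]",
    setup="data = list(range(1000))",
    number=1000,
)

# Workload 2: hash-table lookups in a tight loop.
hash_heavy = timeit.timeit(
    "total = 0\nfor k in keys: total += table[k]",
    setup="keys = list(range(1000)); table = {k: k for k in keys}",
    number=1000,
)

print(f"array loop: {array_heavy:.4f}s  dict lookups: {hash_heavy:.4f}s")
```

A language that excels at one of these loops and struggles with the other would look either fast or slow depending entirely on which benchmark you chose to publish.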
Remember that speed isn’t always important. The CPU usage graph on my machine could almost be described as having two states: one showing 20% usage, and the other showing 100% usage. If you’re writing the kind of application that will contribute to the 20%, then you would be hard-pressed to select a language that the end user would notice was slow. If you’re writing an application that uses as much processing power as you can throw at it, or an embedded application in which processing power is still expensive, speed is a very important consideration.