At The Same Time?
Smalltalk implements message passing as an indirect function call. Viewed from this angle, any function call is a special case of message sending: specifically, a synchronous message-passing operation. The caller sends a message to the callee and then waits for it to return.
A simple extension to this model supports parallelism. The concept of futures allows parallelism to be added to a lot of existing algorithms with little effort. (This was touched on briefly in the functional programming discussion.)
The concept behind a future is that the caller should not block when the function is called, but only later, when the return value is actually needed.
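The idea can be sketched in Python using the standard `concurrent.futures` module; `slow_square` here is a hypothetical stand-in for an expensive computation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    """A stand-in for an expensive computation."""
    time.sleep(0.1)
    return n * n

with ThreadPoolExecutor() as pool:
    # submit() returns immediately with a future; the caller is not blocked.
    future = pool.submit(slow_square, 7)
    # ... the caller is free to do other work here ...
    # Blocking happens only at this point, when the value is needed.
    print(future.result())  # 49
```

The call site looks almost like an ordinary function call; the only visible difference is that blocking is deferred to the `result()` call.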
Consider the Quicksort algorithm, which partitions a set of data around a pivot and then runs recursively on both parts. A simple functional implementation would partition the data, run recursively on one subarray, run again on the other, and then return.
The two recursive calls, however, do not interfere with each other (this is trivial to prove in a functional language, and fairly easy in other languages). A clever compiler could run both recursive calls in separate threads and wait for both to return.
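The same transformation can be done by hand with futures. The following Python sketch submits both recursive calls to a thread pool and blocks only when the sorted halves are needed; the `depth` cutoff is an added practical safeguard to bound the number of threads, not part of the conceptual model:

```python
from concurrent.futures import ThreadPoolExecutor

def quicksort(xs, depth=2):
    """Functional quicksort: the two recursive calls are independent,
    so each can run as a future in its own thread."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    if depth > 0:
        with ThreadPoolExecutor(max_workers=2) as pool:
            left = pool.submit(quicksort, smaller, depth - 1)
            right = pool.submit(quicksort, larger, depth - 1)
            # Block only here, when both sorted halves are needed.
            return left.result() + [pivot] + right.result()
    return quicksort(smaller, 0) + [pivot] + quicksort(larger, 0)
```

Because the two calls share no mutable state, no locking is needed; the futures are the only synchronization points.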
Sufficiently clever compilers are quite hard to come by, but some languages make it easy to implement this model in the library.
In Objective-C, for example, it is possible to write a generic piece of code that spawns an object in a new thread, and executes messages sent to it asynchronously, returning a proxy object that blocks whenever a message is sent to it.
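A simplified version of that pattern can be sketched in Python (the `AsyncProxy` class and its mailbox design are illustrative inventions, not an existing API; for simplicity each call returns a future rather than a blocking proxy object):

```python
import queue
import threading
from concurrent.futures import Future

class AsyncProxy:
    """Wraps an object so that every method call is executed
    asynchronously on a dedicated thread. Each call returns
    immediately with a Future; result() blocks until it is done."""

    def __init__(self, target):
        self._target = target
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Process queued messages one at a time, in order.
        while True:
            fut, name, args, kwargs = self._mailbox.get()
            try:
                fut.set_result(getattr(self._target, name)(*args, **kwargs))
            except Exception as exc:
                fut.set_exception(exc)

    def __getattr__(self, name):
        # Any unknown attribute becomes an asynchronous message send.
        def send(*args, **kwargs):
            fut = Future()
            self._mailbox.put((fut, name, args, kwargs))
            return fut
        return send

# usage: wrap a plain list and send it messages asynchronously
p = AsyncProxy([1, 2, 2, 3])
print(p.count(2).result())  # 2
```

The wrapped object never needs to know it is being used concurrently; the mailbox serializes all access to it.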
Languages such as Erlang go a step further and expose asynchronous message passing directly to the developer. This is not conceptually harder than synchronous messaging, but can be difficult to grasp for people who have a lot of experience with synchronous programming.
In Erlang, the processes and messages are integral parts of the language. Creating a new process is a very cheap operation, as is sending and receiving a message.
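Erlang's spawn/send/receive style can be approximated in Python with threads and queues; the `spawn` and `echo` functions below are illustrative sketches, and a thread here is far heavier than a real Erlang process:

```python
import queue
import threading

def spawn(fn, *args):
    """Erlang-style spawn: run fn in its own thread of control,
    giving it a private mailbox. The mailbox serves as the 'pid'."""
    mailbox = queue.Queue()
    threading.Thread(target=fn, args=(mailbox,) + args, daemon=True).start()
    return mailbox

def echo(mailbox):
    # receive loop: block until a message arrives, then reply
    while True:
        sender, msg = mailbox.get()
        if msg == "stop":
            break
        sender.put(("echo", msg))

# usage: sending is asynchronous; only the reply blocks
me = queue.Queue()
pid = spawn(echo)
pid.put((me, "hello"))   # returns immediately
print(me.get())          # ('echo', 'hello')
pid.put((me, "stop"))
```

Note that `pid.put` never blocks on the receiver; the sender continues immediately, which is exactly the asynchronous model the text describes.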
While Erlang code tends to run slower on a single processor than code in many other languages, the fact that it is very easy to write code that scales to tens or hundreds of processors makes up for it in a number of areas.
Languages such as Erlang are still in their infancy, but asynchronous programming is likely to grow in the next few years as the number of processors in the average computer increases.
Synchronous programming tends to cause performance problems on parallel systems due to the overhead of locking, while an asynchronous system can be implemented using lockless data structures for communication.
The one requirement for a good parallel programming language, which is missing from most of the languages discussed here, is that it must distinguish between aliased and mutable data.
If data is allowed to be both aliased (that is, multiple threads or processes have references to it) and mutable, there are a large number of optimizations that are impossible, and locking is required for safe access.
If the language (or, at least, the library) can enforce this restriction, parallel programming becomes a lot easier. Erlang does this in a very simple way: all data is immutable, with the exception of a dictionary associated with each process (which is mutable, but rarely used).
This is the sort of solution a compiler writer would think of; it makes implementation easy at the cost of some ease of use.
Erlang inherited this single-assignment form from a family of languages known as dataflow programming languages. They view programs as a directed graph through which data flows.
This model fits well with parallel programming in a lot of cases because each filter in the graph can execute concurrently. This model is common in visualization, and simple versions are found in most media programming frameworks.
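A minimal sketch of the dataflow idea, using Python generators as filters (a single-threaded illustration of the graph structure; in a real dataflow system each filter could run concurrently):

```python
def numbers(n):
    """Source node: emits the integers 0..n-1 into the graph."""
    for i in range(n):
        yield i

def square(stream):
    """Filter node: squares each value flowing through it."""
    for x in stream:
        yield x * x

def keep_even(stream):
    """Filter node: passes only even values downstream."""
    for x in stream:
        if x % 2 == 0:
            yield x

# Wire the filters into a pipeline; data flows through the graph on demand.
pipeline = keep_even(square(numbers(10)))
print(list(pipeline))  # [0, 4, 16, 36, 64]
```

Each filter touches only the values flowing past it, with no shared mutable state, which is why stages of such a graph are so easy to distribute across processors.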
Web programming has introduced a lot of people to programming models that were previously consigned to niches, and server-side scripting languages have brought in more.
While the ’90s were dominated by C-with-syntactic-sugar languages, this is slowly changing as more people discover more flexible programming styles. Hopefully this trend will continue, making the next 10 years an even more fun time to be a programmer.