The map-reduce pattern has been around since the early days of Lisp, making it about a half-century old, but only recently has Google made it a buzzword. The idea behind using this pattern for parallel programming is a special case of using functional programming for parallelism.
In the map-reduce model, you have two higher-order functions: One maps an input to an output, and the other combines outputs. In the simplest case, the maps can all happen independently, and then the reduce happens sequentially on all of the outputs. In a slightly more advanced implementation, individual nodes can perform a reduce on all of their mapped values, and then these can be further combined.
Algorithms such as merge sort employ a similar structure and are intrinsically well suited to concurrency. Unlike the other techniques that we've discussed, map-reduce isn't a good solution to arbitrary parallel problems. It's very good for a certain category of problems, but other things aren't as well suited to the approach, especially anything requiring interactive performance.
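To make the merge sort analogy concrete, here is one possible way to cast it in map-reduce terms (a sketch, not a standard formulation): the "map" step sorts each chunk of the input independently, and the "reduce" step merges sorted runs pairwise until one remains. The chunk count of 4 is an arbitrary choice for illustration.

```python
from heapq import merge

def merge_sort_mapreduce(data, chunks=4):
    # "Map": sort each chunk independently; these could run in parallel.
    size = max(1, len(data) // chunks)
    runs = [sorted(data[i:i + size]) for i in range(0, len(data), size)]
    # "Reduce": merge sorted runs pairwise, tree-style, until one is left.
    while len(runs) > 1:
        runs = [list(merge(*runs[i:i + 2])) for i in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

Because each pairwise merge only depends on its two inputs, the reduce steps at the same level of the tree could also run concurrently, which is what makes the structure a natural fit for this pattern.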