
The Pitfalls of Parallelism

David Chisnall looks at some of the pitfalls that face programmers who try to add parallelism to applications without understanding the underlying architecture.

If you have a workload that can be parallelized, you most likely start by splitting it into threads. Then, when you get a processor with more cores, you expect the program to go faster. Often you're lucky and it does. This article looks at some of the reasons why that's not always the case.

Understanding Cache Coherency

When you have multiple processors, you end up with multiple caches, whether or not they're on the same die. Normally, caches make things faster. They exploit locality of reference by keeping recently accessed data near the processor, so that the processor doesn't have to wait for a message to go all the way out to main memory and come back again.

For a multiprocessor system, caches introduce a problem. If one processor updates some memory, the write will first go to its level 1 cache, then its level 2 cache, and then eventually to main memory. If another processor tries to read from that bit of memory, it may see the old value.

The solution is for the hardware to implement a cache coherency protocol. This is sometimes referred to as a MESI (pronounced messy) protocol, after the canonical protocol that provides the required guarantees, although in modern implementations it is typically more complex.

MESI stands for Modified, Exclusive, Shared, Invalid. The protocol associates one of these four states with each cache line. Initially, each line is invalid. When you load from main memory, the line moves to the shared state. This doesn't mean that it is shared; just that it might be shared.

Before a core can write to a cache line, it must move the line to the exclusive state. This involves sending a message to every other core that holds the line in the shared state, telling it to invalidate its copy. If a core tries to load a line that another core holds in the exclusive state, it tells the other core to move its version back to the shared state.

Once a core has the line in the exclusive state, it can execute store instructions against it; the first store triggers a transition to the modified state. When another core then tries to load the cache line from main memory, it gets the modified version instead, and the line returns to the shared state.
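
To make the state machine concrete, here is a sketch of the transitions just described, written in C. The enum names and the simplified transition function are illustrative only; a real coherency controller is implemented in hardware and handles many more cases.

    /* A sketch of the MESI state machine described above. */
    #include <stdio.h>

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state;

    typedef enum {
        LOCAL_READ,   /* this core loads the line                */
        LOCAL_WRITE,  /* this core stores to the line            */
        REMOTE_READ,  /* another core loads the line             */
        REMOTE_WRITE  /* another core wants the line exclusively */
    } mesi_event;

    static mesi_state transition(mesi_state s, mesi_event e)
    {
        switch (e) {
        case LOCAL_READ:
            return (s == INVALID) ? SHARED : s;
        case LOCAL_WRITE:
            /* The line first moves to exclusive (invalidating other
             * cores' copies); the store then marks it modified. */
            return MODIFIED;
        case REMOTE_READ:
            /* A modified line is written back and drops to shared. */
            return (s == INVALID) ? INVALID : SHARED;
        case REMOTE_WRITE:
            return INVALID;
        }
        return s;
    }

    int main(void)
    {
        mesi_state s = INVALID;
        s = transition(s, LOCAL_READ);   /* invalid  -> shared   */
        s = transition(s, LOCAL_WRITE);  /* shared   -> modified */
        s = transition(s, REMOTE_READ);  /* modified -> shared   */
        printf("final state: %d\n", s);  /* prints 1 (SHARED)    */
        return 0;
    }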

Synchronization Overheads

Although modern processors contain a lot of optimizations that improve on the basic MESI protocol, the principle remains the same. This means that any time you force two processors to have the same view of memory, they will spend a lot of time sending messages and, most importantly, waiting for the results.

On x86, the memory model is strongly ordered by default (formally, total store ordering): every core observes another core's stores in the order in which they were issued. For example, if one core writes a value to X and then another value to Y, then (independent of their locations in memory) another core may see the new value of X and the old value of Y, the new value of both, or the old value of both. It will never see the old value of X and the new value of Y, because that would mean it observed a sequence of stores incompatible with the order in which the writing core issued them.
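
You can express this guarantee in C with C11 atomics, which are sequentially consistent by default; x86's strong ordering satisfies that cheaply. The following is a sketch, with placeholder names; writer() and reader() would run in different threads.

    #include <stdatomic.h>
    #include <assert.h>

    atomic_int x = 0;
    atomic_int y = 0;

    void writer(void)
    {
        atomic_store(&x, 1);  /* this store becomes visible first */
        atomic_store(&y, 1);  /* ...and this one second           */
    }

    void reader(void)
    {
        int saw_y = atomic_load(&y);
        int saw_x = atomic_load(&x);
        /* Legal observations: x=0,y=0; x=1,y=0; x=1,y=1.
         * x=0,y=1 is impossible: it would mean the reader saw the
         * stores in an order the writer never produced. */
        if (saw_y == 1)
            assert(saw_x == 1);
    }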

This imposes a lot of overhead, which is why most other architectures are weakly ordered, meaning that they have no (or fewer) such guarantees except in the presence of explicit memory barrier instructions.

Although this makes things cheaper in the general case, when synchronization is required the barrier instructions can stall the entire pipeline, which can have a noticeable impact on serial performance.
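
A common pattern on weakly ordered architectures is to pair a release store with an acquire load, so that the compiler emits barrier instructions only where they're actually needed. A minimal C11 sketch, with placeholder names:

    #include <stdatomic.h>

    int data;              /* payload, written before publication */
    atomic_int ready = 0;  /* publication flag                    */

    void publish(int value)
    {
        data = value;
        /* Release: all earlier writes become visible before the flag. */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consume(void)
    {
        /* Acquire: pairs with the release store above. */
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;  /* spin until published */
        return data;
    }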

More importantly, the cost of acquiring exclusive ownership of a cache line can be around 300 cycles. If you test code without contention, an atomic add instruction on x86 may be only three or so times slower than the non-atomic version. When another thread holds the cache line, it can be more than 300 times slower.
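
You can get a rough feel for these ratios with a microbenchmark along the following lines. This is only a sketch: the iteration count is arbitrary, clock_gettime() assumes a POSIX system, and the real numbers depend heavily on the microarchitecture. Add a second thread hammering the same counter to see the contended case.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERATIONS 100000000UL

    static atomic_ulong atomic_counter;
    /* volatile stops the compiler collapsing the loop to one add */
    static volatile unsigned long plain_counter;

    static double seconds(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        struct timespec t0, t1, t2;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long i = 0; i < ITERATIONS; i++)
            plain_counter++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        for (unsigned long i = 0; i < ITERATIONS; i++)
            atomic_fetch_add(&atomic_counter, 1);
        clock_gettime(CLOCK_MONOTONIC, &t2);

        printf("plain:  %f s\n", seconds(t0, t1));
        printf("atomic: %f s\n", seconds(t1, t2));
        return 0;
    }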

On a weakly ordered architecture, you can often get a significant speedup if you can tolerate a thread occasionally seeing stale values. For example, consider a lockless ring buffer shared between two threads, with free-running counters indicating the producer and consumer positions. If one thread reads a stale counter value, it doesn't matter very much: that thread just waits a little longer before inserting or removing a value. This can harm throughput if it happens too often, but either batching updates or requesting a weaker consistency model reduces the amount of inter-processor traffic and may increase throughput overall.
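
Here is a sketch of such a single-producer, single-consumer ring buffer in C11. The names and the power-of-two size are placeholders. The loads of the other thread's counter are where staleness is tolerated: a stale value only makes the buffer look more full (to the producer) or more empty (to the consumer) than it really is, which is safe.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define RING_SIZE 1024  /* must be a power of two */

    struct ring {
        _Atomic uint64_t head;  /* free-running, written by producer */
        _Atomic uint64_t tail;  /* free-running, written by consumer */
        int slots[RING_SIZE];
    };

    bool ring_push(struct ring *r, int value)
    {
        uint64_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint64_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SIZE)
            return false;  /* full, or looked full because tail was stale */
        r->slots[head & (RING_SIZE - 1)] = value;
        /* Release: the slot write must be visible before the new head. */
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }

    bool ring_pop(struct ring *r, int *value)
    {
        uint64_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint64_t head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (head == tail)
            return false;  /* empty, or looked empty because head was stale */
        *value = r->slots[tail & (RING_SIZE - 1)];
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return true;
    }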

False Sharing

This slowdown is bad enough when you actually want one thread reading a value and another writing it. It's far worse when the sharing is accidental.

Remember that the processor deals only with cache lines. A typical cache line size for a modern processor is 64 bytes, which leads to two bad cases. First, consider the simple multiword consistent update algorithm between a single producer and one or more consumers. The producer increments a counter, issues a memory barrier to ensure that the update is visible, writes the data, issues another barrier, and then increments the counter again.

The consumer reads the counter, reads the data, and reads the counter again. If the low bit of the counter is one, or the value has changed between the two reads, the consumer loops. On some architectures, the consumer also needs memory barriers between these reads to ensure that writes from other processors become visible in order.
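
Put together, the scheme looks something like the following C11 sketch (this construct is sometimes called a seqlock). The field names are placeholders. The payload fields are relaxed atomics so that a torn read during an update is not formally a data race; the fences are the barriers described above.

    #include <stdatomic.h>
    #include <stdint.h>

    struct snapshot {
        _Atomic uint32_t seq;  /* odd while an update is in progress */
        _Atomic int a, b;      /* the multiword payload              */
    };

    /* Single producer. */
    void publish(struct snapshot *s, int a, int b)
    {
        uint32_t v = atomic_load_explicit(&s->seq, memory_order_relaxed);
        atomic_store_explicit(&s->seq, v + 1, memory_order_relaxed); /* odd */
        atomic_thread_fence(memory_order_release); /* odd seq before data  */
        atomic_store_explicit(&s->a, a, memory_order_relaxed);
        atomic_store_explicit(&s->b, b, memory_order_relaxed);
        atomic_thread_fence(memory_order_release); /* data before even seq */
        atomic_store_explicit(&s->seq, v + 2, memory_order_relaxed); /* even */
    }

    /* Any number of consumers: retry if the counter was odd or changed. */
    void read_snapshot(struct snapshot *s, int *a, int *b)
    {
        uint32_t before, after;
        do {
            before = atomic_load_explicit(&s->seq, memory_order_acquire);
            *a = atomic_load_explicit(&s->a, memory_order_relaxed);
            *b = atomic_load_explicit(&s->b, memory_order_relaxed);
            atomic_thread_fence(memory_order_acquire);
            after = atomic_load_explicit(&s->seq, memory_order_relaxed);
        } while ((before & 1) || before != after);
    }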

If all the data fits into a single cache line, this is likely to run a lot faster than if, for example, the counter is in one cache line and the rest of the data is in the next.

By default on most operating systems, malloc() will return values aligned to a boundary that satisfies the greatest alignment requirements of a primitive type—typically 16 bytes for x86 (some AVX instructions require 32-byte alignment, but compilers generally won't emit those unless a type is explicitly aligned). This means that you have a 25 percent chance that a malloc()'d data structure will start on a cache line boundary. The same is true for globals.

If you don't explicitly align your shared data structures on cache line boundaries, there's a good chance that you'll end up generating twice as much cache coherency traffic as you need. Worse, if the layout depends on malloc() behavior, it might differ between runs; if it depends on global layout, it might differ between builds.
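
One way to take the guesswork out is to allocate the structure with an explicit alignment. Here is a minimal sketch using C11's aligned_alloc(), assuming a 64-byte line; query the actual line size on your target if portability matters.

    #include <stdlib.h>

    #define CACHE_LINE 64

    struct shared {
        unsigned counter;
        char payload[48];
    };

    struct shared *alloc_shared(void)
    {
        /* aligned_alloc (C11) requires the size to be a multiple of
         * the alignment, so round it up. */
        size_t size = (sizeof(struct shared) + CACHE_LINE - 1)
                      & ~(size_t)(CACHE_LINE - 1);
        return aligned_alloc(CACHE_LINE, size);
    }

For globals, an _Alignas(64) qualifier achieves the same thing at compile time.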

The opposite of this problem, false sharing, happens when two things with different sharing properties land in a single cache line. If you have a structure with one field that is infrequently updated but frequently read from multiple threads, and another that is updated frequently, what happens?

If the structure is in a single cache line, every time the second field is updated it will invalidate the copies of the first field in all CPUs' caches. Each reading thread will then need to pull in a new copy of the cache line, just to get at a field that hasn't changed.

You can eliminate this by padding the structure so that the two fields end up in different cache lines, or by keeping only data with similar access patterns in a single structure.
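
For example, the following sketch uses alignas to push a frequently written counter onto its own cache line, away from a field that is mostly read. The 64-byte line size is an assumption and the field names are placeholders; the cost is a larger structure.

    #include <stdatomic.h>
    #include <stdalign.h>

    #define CACHE_LINE 64

    struct stats {
        /* Read by many threads, rarely written. */
        alignas(CACHE_LINE) int config_value;
        /* Written constantly by one thread. On its own line, updates
         * to it no longer invalidate the line holding config_value. */
        alignas(CACHE_LINE) atomic_ulong hot_counter;
    };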

Filling Up the Cache

One of the most difficult problems to diagnose on a multicore CPU happens when one thread is starving another for cache. On a conventional single-core machine, you can often see sharp discontinuities in a program's performance as the working set increases in size:

  • When the data no longer fits in L1 cache
  • When the data no longer fits in L2 cache
  • When the data no longer fits in the portion of main memory covered by the TLB
  • When the data no longer fits in main memory at all

Most multicore CPUs have private L1 caches but share the L2 cache, and often the TLB. And, of course, they share main memory. This means that if one thread uses a lot of L2 cache, it reduces the amount that's available to threads running on other cores.

This is particularly problematic when the problem is not embarrassingly parallel and the two threads depend on each other for partial results: a thread that starves another of cache can end up waiting for the very thread it starved to finish before it can make progress.

Unfortunately, this is one problem that you can't easily solve. The cache is hidden, as its name implies. The policy that decides how much data each core may keep in it is fixed in the CPU.

Priority Failures

Most developers writing multithreaded code are familiar with the priority inversion problems of constructs such as spinlocks. Many of these are made considerably worse by the policies embedded in modern CPUs. For example, on a CPU with hardware multithreading support, such as Intel's Hyper-Threading, you have two threads sharing the same execution units and running at the same time. Unfortunately, the OS has no way of setting their relative priorities.

In a typical operating system, threads have two values associated with them under the broad umbrella of priority. One is a static value, saying that this thread is more or less important than that one and so should globally be allowed to consume more resources. The other is dynamic, indicating whether the thread has recently consumed more or less than its static priority would imply.

When the OS decides to schedule a thread, it does so based on how much CPU time the thread has consumed so far, but that is not always a meaningful number on a hyperthreaded or even multicore system. For example, a thread might have spent most of that time waiting for execution units to become free, or waiting for memory because another thread kept pushing its data out of the cache.

So it's not always a good idea to rely on the operating system to correctly enforce priorities for threads within your application: Often it simply doesn't have the tools required to do so. If your application has very unbalanced memory accesses between threads, you're probably making the situation a lot worse.

It's only been about a decade since multicore computers went mainstream, and there are a lot of challenges in the OS and architecture space still left to solve. It typically takes about 5-7 years to get a completely new processor design to market, so we're only now starting to use processors whose design process started after multicore became common. Operating systems have a shorter development cycle, but completely redesigning the entire scheduler to take cache pressure and program phase into account is not a simple project.
