
Writing Concurrent Systems, Part 3: Specialized Systems

Concluding his three-part series on concurrent programming, David Chisnall examines some of the most interesting modern techniques for designing concurrent systems.

In part 1 of this series, we looked at using locks to adapt code designed for sequential access to concurrent use. In part 2, we examined the options for replacing locks with lockless data structures. In this article, the last in the series, we'll look at some of the strategies you might consider in new code, if you don't have any legacy requirements. Many of these techniques aren't yet well supported, but they're likely to become more mainstream in the next few years.

Transactional Memory

Transactions have long been used in multiuser databases to improve consistency and scalability. The idea behind a transaction is grouping a set of operations so that either all succeed or all fail; you're never left with only some of the operations having happened. The same basic mechanism is used in most modern filesystems in the form of journaling; a filesystem modification either completes, or the filesystem can be rolled back to the state it was in before the modification started.

The atomic operations discussed in the previous articles in this series can be thought of as very simple transactions. For example, an atomic increment is a transaction involving a load, an add, and a store. On RISC chips, as we saw in part 2, that's exactly how it's implemented.
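As a sketch of this idea, an atomic increment can be written by hand as a tiny retry loop using C11's compare-and-exchange: the load begins the transaction, the add is its body, and the compare-and-exchange commits it, retrying if another thread interfered.

```c
#include <stdatomic.h>

/* An atomic increment expressed as a minimal transaction: read the
 * current value, compute the new one, and commit with a
 * compare-and-exchange.  If another thread modified the counter in
 * the meantime, the commit fails and the "transaction" restarts. */
int transactional_increment(_Atomic int *counter)
{
    int oldval, newval;
    do {
        oldval = atomic_load(counter);   /* begin: snapshot the state */
        newval = oldval + 1;             /* body: compute the update  */
    } while (!atomic_compare_exchange_weak(counter, &oldval, newval));
    return newval;                       /* commit succeeded */
}
```

The `while` condition is the commit step: it succeeds only if the counter still holds the value we read, which is exactly the all-or-nothing guarantee a transaction provides.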

The transactional support in these chips is quite limited—it only allows transactions on a single word. There's no hardware support for a transaction that spans multiple memory locations. Sun's Rock architecture was intended to address this problem, and it provided hardware support for larger transactions. Unfortunately, it was cancelled before it shipped.

Fortunately, it's possible to implement transactions purely in terms of an atomic compare-and-exchange operation. We actually looked at how to do this, without giving it the name, in the "Multiword Operations" section of part 2.
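A minimal sketch of that approach in C11, assuming the shared state lives behind a single pointer: the "transaction" copies the current state, modifies the copy, and commits by swinging the pointer with one compare-and-exchange. (The `struct pair` here is an illustrative stand-in for any multiword state; safe reclamation of the old copy is a separate problem, omitted from this sketch.)

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical two-word state that must change atomically. */
struct pair { int a, b; };

/* A multiword "transaction": snapshot the current state into a private
 * copy, modify the copy, and commit by replacing the shared pointer
 * with compare-and-exchange.  If another thread committed first, the
 * exchange fails and the transaction restarts from a fresh snapshot. */
void update_pair(struct pair *_Atomic *shared, int da, int db)
{
    struct pair *oldp, *copy = malloc(sizeof *copy);
    do {
        oldp = atomic_load(shared);
        *copy = *oldp;          /* snapshot both words at once */
        copy->a += da;          /* transaction body: modify the copy */
        copy->b += db;
    } while (!atomic_compare_exchange_weak(shared, &oldp, copy));
    /* NOTE: freeing oldp is only safe once no reader can still see it;
     * real code needs hazard pointers, RCU, or similar reclamation. */
}
```

Readers always dereference the shared pointer and see either the old state or the new one, never a half-updated mixture, which is the transactional guarantee we wanted.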

The big advantage of transactional memory, including software transactional memory, is that it's very easy to reason about. When you program with locks, you need to think about all of the possible interactions between the parts of your code, which is particularly difficult in a shared-memory system, such as one using threads. Designing lockless structures is even harder: you have to consider two interleaved sequences of instructions and imagine what will happen in every possible combination of states. The number of things that can go wrong suffers from combinatorial explosion.

With transactional memory in a programming language, you just have to identify a group of things that will fail if something else interferes. For example, inserting into a linked list is trivial with transactional memory; you just need to group the pointer updates in a transaction. The important thing about this option is that it doesn't matter what the rest of the code does; the insert will either succeed or fail, and the rest of the program will only ever see the list in the "before" or "after" state.
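When the insert touches only a single shared pointer, such as the head of the list, the transaction collapses to one compare-and-exchange, and we can sketch it directly in C11. This is the degenerate one-word case rather than a full transactional-memory system, but it shows the before/after guarantee the paragraph describes.

```c
#include <stdatomic.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

/* Insert at the head of a singly linked list.  The "transaction" is
 * the pair of pointer updates: link the new node to the current head,
 * then commit by swinging the head pointer.  If another thread
 * inserted first, the commit fails and we retry.  Concurrent readers
 * only ever see the list in its before or after state. */
void insert_head(struct node *_Atomic *head, int value)
{
    struct node *n = malloc(sizeof *n);
    n->value = value;
    do {
        n->next = atomic_load(head);     /* before-state snapshot */
    } while (!atomic_compare_exchange_weak(head, &n->next, n));
}
```

On failure, `atomic_compare_exchange_weak` conveniently writes the current head back into `n->next`, so the retry has already re-linked the new node before trying to commit again.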

The other huge benefit of the transactional model is that it's composable. One of the biggest problems with lock-based concurrent programming is that operations which are individually atomic don't compose into larger atomic operations. For example, consider an object with set and get methods. Each method can be atomic on its own, but there's no way to combine them into an atomic test-and-set or an atomic increment. With transactions, combining becomes trivial: you create a transaction that calls both methods in order, then calls the first method again and asserts that the result is as expected. If the assertion fails, the transaction is not committed and is retried.
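A sketch of that composition in C11, built directly on compare-and-exchange rather than on any particular STM library (the cell type and function names here are illustrative, not a real API): the combined operation reads through `get`, then commits only if the value it observed is still current, aborting otherwise.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* A cell whose get and set are individually atomic. */
struct cell { _Atomic int value; };

int  cell_get(struct cell *c)        { return atomic_load(&c->value); }
void cell_set(struct cell *c, int v) { atomic_store(&c->value, v); }

/* Composing get and set into an atomic test-and-set, in the spirit of
 * a transaction: read the value, test it, then commit the write only
 * if the value is unchanged since the read.  If another thread
 * interfered, the commit fails and the "transaction" aborts. */
bool cell_test_and_set(struct cell *c, int expected, int desired)
{
    int seen = cell_get(c);          /* first call to get() */
    if (seen != expected)
        return false;                /* test failed: nothing to commit */
    /* commit only if the observed value is still current */
    return atomic_compare_exchange_strong(&c->value, &seen, desired);
}
```

A real STM system performs this read-validate-commit cycle automatically for every location a transaction touches, which is what makes arbitrary methods composable without the caller writing the retry logic by hand.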

Although hardware support for transactional memory is still very rare, you'll find frameworks for software transactional memory in most languages.
