
Multitasking: A Critical Section

Imagine you have a piece of code that must not be run simultaneously by more than one thread, such as code that debits or credits a bank account. Such code is called a critical section: the defining requirement is that only one thread of execution may run it at a time. If two or more threads execute the same critical section simultaneously, the likely result is some form of data loss or corruption.

Critical sections can be used to protect against competing changes to shared data. Listing 1 shows a simple example, where my shared data is a global integer and I protect (or implement) the critical sections using a variable of type std::mutex. This illustration is ultra-simple, and using global variables like this is definitely not good practice, but the mechanism is surprisingly powerful.

Listing 1—Critical sections in multiple threads.

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

int sharedDataItem = 0;
std::mutex m;

// processData() assumes the caller already holds the mutex m.
void processData(int increment)
{
    sharedDataItem += increment;
}

void hello1()
{
    cout << "Hello Concurrent World 1 - attempting to lock mutex!" << endl;
    std::lock_guard<std::mutex> lk(m);
    cout << "Hello Concurrent World 1 - successfully locked mutex!" << endl;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    processData(1);
    cout << "Hello Concurrent World 1 - just about to unlock mutex!" << endl;
}

void hello2()
{
    cout << "Hello Concurrent World 2 - attempting to lock mutex!" << endl;
    std::lock_guard<std::mutex> lk(m);
    cout << "Hello Concurrent World 2 - successfully locked mutex!" << endl;
    std::this_thread::sleep_for(std::chrono::seconds(2));
    processData(2);
    cout << "Hello Concurrent World 2 - just about to unlock mutex!" << endl;
}

void hello3()
{
    cout << "Hello Concurrent World 3 - attempting to lock mutex!" << endl;
    std::lock_guard<std::mutex> lk(m);
    cout << "Hello Concurrent World 3 - successfully locked mutex!" << endl;
    std::this_thread::sleep_for(std::chrono::seconds(3));
    processData(3);
    cout << "Hello Concurrent World 3 - just about to unlock mutex!" << endl;
}

int main() {
    sharedDataItem = 100;
    cout << "Hello World - sharedDataItem: " << sharedDataItem << endl;
    cout << "Starting threads" << endl;
    std::thread t1(hello1);
    std::thread t2(hello2);
    std::thread t3(hello3);
    cout << "Thread ID 1 " << t1.get_id() << endl;
    cout << "Thread ID 2 " << t2.get_id() << endl;
    cout << "Thread ID 3 " << t3.get_id() << endl;
    t1.join();
    t2.join();
    t3.join();
    cout << "After threads - sharedDataItem: " << sharedDataItem << endl;
    return 0;
}

Also included in Listing 1 are three functions that access and modify the shared data. To make the example more interesting, the three functions are invoked from separate threads. Each of the three functions makes a call to the function processData(), which actually changes the shared data item.

So, in Listing 1, I have three threads, each of which runs a function. Each of the functions takes a different amount of time to complete, simulated by putting the thread to sleep, and each modifies the global shared-data variable. Listing 2 illustrates one run of the program on a dual-core Dell Latitude E5400 laptop.

Listing 2—A program run on a dual-core machine.

Hello World - sharedDataItem: 100
Starting threads
Thread ID 1 3076213616
Thread ID 2 3067820912
Thread ID 3 3059428208
Hello Concurrent World 2 - attempting to lock mutex!
Hello Concurrent World 2 - successfully locked mutex!
Hello Concurrent World 1 - attempting to lock mutex!
Hello Concurrent World 3 - attempting to lock mutex!
Hello Concurrent World 2 - just about to unlock mutex!
Hello Concurrent World 1 - successfully locked mutex!
Hello Concurrent World 1 - just about to unlock mutex!
Hello Concurrent World 3 - successfully locked mutex!
Hello Concurrent World 3 - just about to unlock mutex!
After threads - sharedDataItem: 106

In Listing 2, the first thread function to run is hello2(). This function runs inside the thread t2; when it attempts to lock the mutex, it gets in ahead of the other two functions. Once hello2() has locked the mutex, the competing function hello1() tries without success to acquire the lock, followed closely by hello3(). The mutex remains locked by hello2() until that function returns, at which time hello1() gets a chance to acquire the lock. Once hello1() has the lock, it does its work and then releases the mutex, giving hello3() its chance in turn.

Notice that the mutex is unlocked automatically when each of the helloX() functions above exits: the std::lock_guard object releases the lock in its destructor when it goes out of scope. This point is important. If an explicit unlock were required, the programmer might forget to call it, or an exception might be thrown before the unlock was reached, leaving the mutex locked forever. With the C++11 std::lock_guard, the lock is released deterministically no matter how the function exits, which makes life simpler.

Listing 3 shows another program run with a slightly different result.

Listing 3—A second program run illustrating a race condition.

Hello World - sharedDataItem: 100
Starting threads
Thread ID 1 3075492720
Thread ID 2 3067100016
Thread ID 3 3058707312
Hello Concurrent World 1 - attempting to lock mutex!Hello Concurrent World 2
 - attempting to lock mutex!
Hello Concurrent World 3 - attempting to lock mutex!
Hello Concurrent World 2 - successfully locked mutex!

Hello Concurrent World 2 - just about to unlock mutex!
Hello Concurrent World 1 - successfully locked mutex!
Hello Concurrent World 1 - just about to unlock mutex!
Hello Concurrent World 3 - successfully locked mutex!
Hello Concurrent World 3 - just about to unlock mutex!
After threads - sharedDataItem: 106

Notice in Listing 3 the two interleaved lines from hello1() and hello2(): both threads wrote to cout at the same moment, because the "attempting to lock" messages are printed before the mutex is acquired. This graphically illustrates one of the most difficult classes of problem in multithreaded programming: race conditions. Here the race affects only the console output; the shared data itself is safe, because every section of code that modifies it is protected by the mutex, and we're guaranteed that only one thread can win the race to acquire that mutex. Without the mutex, the updates to sharedDataItem would be at the mercy of the underlying operating-system scheduler.
