

7.2 Critical Section Pattern

The Critical Section Pattern is the simplest pattern for sharing resources that cannot be accessed simultaneously. It is lightweight and easy to implement, but if a critical section lasts too long, it may prevent high-priority tasks, even ones that use no resources at all, from meeting their deadlines.

7.2.1 Abstract

This pattern has been long used in the design of real-time and embedded systems whenever a resource must have at most a single owner at any given time. The basic idea is to lock the Scheduler whenever a resource is accessed to prevent another task from simultaneously accessing it. The primary advantage of this pattern is its simplicity, both in terms of understandability and in terms of implementation. It becomes less applicable when the resource access may take a long time because it means that higher-priority tasks may be blocked from execution for a long period of time.

7.2.2 Problem

The main problem addressed by the Critical Section Pattern is how to robustly share resources that may have, at most, a single owner at any given time.

7.2.3 Pattern Structure

Figure 7-4 shows the basic structural elements in the Critical Section Pattern.

Figure 7-4: Critical Section Pattern


7.2.4 Collaboration Roles

  • Abstract Thread

    The Abstract Thread class is an abstract (noninstantiable) superclass for Concrete Thread. Abstract Thread associates with the Scheduler. Since Concrete Thread is a subclass, it has the same interface to the Scheduler as the Abstract Thread. This enforces interface compliance. The Abstract Thread is an «active» object, meaning that when it is created, it creates an OS thread in which to run. It contains (that is, it has composition relations with) more primitive application objects that execute in the thread of the composite «active» object.

  • Concrete Thread

    The Concrete Thread is an «active» object most typically constructed to contain passive "semantic" objects (via the composition relation) that do the real work of the system. The Concrete Thread object provides a straightforward means of attaching these semantic objects into the concurrency architecture. Concrete Thread is an instantiable subclass of Abstract Thread.

  • Scheduler

    This object orchestrates the execution of multiple threads based on some scheme requiring preemption. When the «active» Thread object is created, it (or its creator) calls the createThread operation to create a thread for the «active» object. Whenever this thread is executed by the Scheduler, it calls the StartAddr address (except when the thread has been blocked or preempted—in which case it calls the EntryPoint address).

    In this pattern, the Scheduler has a Boolean attribute called taskSwitchingEnabled and two operations, startCriticalSection() and endCriticalSection(), that manipulate this attribute. When the attribute is FALSE, the Scheduler performs no task switching; when TRUE, tasks are switched according to the task scheduling policies in force.

  • Shared Resource

    A resource is an object that is shared by one or more Threads but that cannot be reliably accessed by more than one client at any given time. All operations defined on the resource that touch any part of it that is not simultaneously sharable (its nonreentrant parts) should call Scheduler.startCriticalSection() before they manipulate the internal values of the resource and should call Scheduler.endCriticalSection() when they are done.

  • Task Control Block

    The TCB contains the scheduling information for its corresponding Thread object: the priority of the thread, the default start address, and the current entry address if the thread was preempted or blocked prior to completion. The Scheduler maintains a TCB object for each existing Thread. Note that a TCB typically also holds a reference to the call and parameter stack for its Thread, but that level of detail is not shown in Figure 7-4.
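
The collaboration between the Scheduler and a Shared Resource can be sketched in C. This is a minimal illustration, not a real RTOS API: the function and variable names (scheduler_start_critical_section(), sensor_read(), and so on) are invented for the example, and a real Scheduler would of course do far more when switching resumes.

```c
#include <assert.h>
#include <stdbool.h>

/* Scheduler role: the taskSwitchingEnabled attribute and the two
 * operations that manipulate it. */
static bool taskSwitchingEnabled = true;

void scheduler_start_critical_section(void) {
    taskSwitchingEnabled = false;   /* no preemption until the section ends */
}

void scheduler_end_critical_section(void) {
    taskSwitchingEnabled = true;    /* normal scheduling policy resumes */
}

/* Shared Resource role: every operation that touches a nonreentrant
 * part brackets its body with the critical-section calls. */
static int sensorValue;

void sensor_write(int v) {
    scheduler_start_critical_section();
    sensorValue = v;                /* nonreentrant part */
    scheduler_end_critical_section();
}

int sensor_read(void) {
    scheduler_start_critical_section();
    int v = sensorValue;            /* nonreentrant part */
    scheduler_end_critical_section();
    return v;
}
```

Note that the lock is global to the Scheduler rather than attached to the resource: any task that becomes ready while taskSwitchingEnabled is FALSE simply waits, whether or not it uses the Sensor.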

7.2.5 Consequences

The designers and programmers must show good discipline in ensuring that every resource access locks the resource before performing any manipulation of the resource. This pattern works by effectively making the current task the highest-priority task in the system. While quite successful at preventing resource corruption due to simultaneous access, it locks out all higher-priority tasks from executing during the critical section, even those that do not use the resource. Many systems find this blocking delay unacceptable and must use more elaborate means of resource sharing. Further, if the task that locks the resource neglects to end its critical section (that is, to deescalate its effective priority), all other tasks are permanently prevented from running. Calculating the worst-case blocking for each task is trivial with this pattern: it is simply the longest critical section of any single lower-priority task.
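
That worst-case blocking calculation can be written down directly. A minimal sketch, with an invented Task record; the convention that a higher number means higher priority is an assumption of the example:

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    int priority;                 /* higher number = higher priority */
    unsigned longestCriticalUs;   /* longest critical section, in microseconds */
} Task;

/* Worst-case blocking for the task at index `self`: the longest
 * critical section of any single lower-priority task. */
unsigned worst_case_blocking(const Task *tasks, size_t n, size_t self) {
    unsigned worst = 0;
    for (size_t i = 0; i < n; ++i) {
        if (tasks[i].priority < tasks[self].priority &&
            tasks[i].longestCriticalUs > worst) {
            worst = tasks[i].longestCriticalUs;
        }
    }
    return worst;
}
```

The lowest-priority task has no lower-priority tasks to block it, so its worst-case blocking under this pattern is zero.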

It is perhaps obvious, but should nevertheless be stated, that when using this pattern a task should never suspend itself while it owns a resource: task switching is disabled, so in that situation no task would be permitted to run at all. The pattern has the advantage that it avoids deadlock by breaking the second necessary condition (holding resources while waiting for others), provided each task releases the resource (and reenables task switching) before it suspends itself.

7.2.6 Implementation Strategies

All commercial RTOSs have a means for beginning and ending a critical section. Invoking this Scheduler operation prevents all task switching from occurring during the critical section. If you write your own RTOS, the most common way to do this is to set the Disable Interrupts bit on your processor's flags register. The precise details of this vary, naturally, depending on the specific processor.
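
A sketch of such a hand-rolled critical section follows. The interrupt-control intrinsics are processor- and toolchain-specific (on an ARM Cortex-M part they might be __disable_irq()/__enable_irq()), so they are stubbed here as flag-setting functions; the nesting counter, which many RTOS implementations keep so that nested critical sections reenable interrupts only at the outermost exit, is the point of the example:

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for the processor-specific interrupt intrinsics. */
static bool irqEnabled = true;
static void disable_interrupts(void) { irqEnabled = false; }
static void enable_interrupts(void)  { irqEnabled = true; }

/* Nesting depth, so nested critical sections compose correctly. */
static unsigned criticalNesting;

void enter_critical(void) {
    disable_interrupts();       /* first, so there is no preemption window */
    ++criticalNesting;
}

void exit_critical(void) {
    if (criticalNesting > 0 && --criticalNesting == 0) {
        enable_interrupts();    /* only the outermost exit reenables */
    }
}
```

Disabling interrupts must come before the counter increment on entry; the reverse order would leave a window in which an interrupt could preempt the task mid-update.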

7.2.7 Related Patterns

As mentioned, this is the simplest pattern that addresses the issue of sharing nonreentrant resources. Other resource sharing approaches, such as Priority Inheritance, Highest Locker, and Priority Ceiling Patterns, solve this problem as well with less impact on the schedulability of the overall system but at the cost of increased complexity. This pattern can be mixed with all of the concurrency patterns from Chapter 5, except the Cyclic Executive Pattern, for which resource sharing is a nonissue.

7.2.8 Sample Model

An example of the use of this pattern is shown in Figure 7-5. This example contains three tasks: Device Test (highest priority), Motor Control (medium priority), and Data Processing (lowest priority). Device Test and Data Processing share a resource called Sensor, whereas Motor Control has its own resource called Motor.

Figure 7-5: Critical Section Pattern Example


The scenario starts with the lowest-priority task, Data Processing, accessing the resource, which begins a critical section. During this critical section both the Motor Control task and the Device Test task become ready to run but cannot, because task switching is disabled. When the call to the resource is almost done, the Sensor.gimme() operation calls the Scheduler to end the critical section. The scenario shows three critical sections, one for each of the running tasks. Finally, at the end, the lowest-priority task is allowed to complete its work and then returns to its Idle state.
