
4.9 Managing Threads

When creating applications with multiple threads, there are several ways to control how threads perform and how they use and compete for resources. Part of managing threads is setting their scheduling policy and priority, which contributes to thread performance. Thread performance is also determined by how threads compete for resources, with either process or system contention scope. The scheduling policy, priority, and scope of a thread can be set with a thread attribute object. Because threads share resources, access to those resources must be synchronized. Synchronization is discussed briefly in this chapter and fully in Chapter 5. Thread synchronization also covers when and how threads are terminated and canceled.

4.9.1 Terminating Threads

A thread's execution can be discontinued by several means:

  • By returning from the execution of its assigned task with or without an exit status or return value

  • By explicitly terminating itself and supplying an exit status

  • By being canceled by another thread in the same address space

When a joinable thread's function has finished executing, control returns to the thread that called pthread_join() with it as the target thread. The pthread_join() function returns the exit status passed to the pthread_exit() function called by the terminating thread. If the terminating thread did not call pthread_exit(), the exit status is the return value of the function, if it has one; otherwise, the exit status is NULL.

It may be necessary for one thread to terminate another thread in the same process. For example, an application may have a thread that monitors the work of other threads. If a thread performs poorly or is no longer needed, to save system resources it may be necessary to terminate that thread. The terminating thread may terminate immediately or defer termination until a logical point in its execution. The terminating thread may also have to perform some cleanup tasks before it terminates. The thread also has the option to refuse termination.

The pthread_exit() function is used to terminate the calling thread. The value_ptr is passed to the thread that calls pthread_join() for this thread. Cancellation cleanup handlers that have not yet executed will execute, along with the destructors for any thread-specific data. Process-shared resources (such as mutexes, file descriptors, and semaphores) used by the thread are not released.


#include <pthread.h>

void pthread_exit(void *value_ptr);

When the last thread of a process exits, the process terminates with an exit status of 0. This function cannot return to the calling thread and there are no errors defined.

The pthread_cancel() function is used to cancel the execution of another thread in the same address space. The thread parameter is the thread to be canceled.


#include <pthread.h>

int pthread_cancel(pthread_t thread);

A call to the pthread_cancel() function is a request to cancel a thread. The request can be granted immediately, granted at a later time, or ignored. The cancel type and cancel state of the target thread determine when, or whether, thread cancellation actually takes place. When the request is granted, a cancellation process occurs asynchronously with respect to the return of pthread_cancel() to the calling thread. If the thread has cancellation cleanup handlers, they are executed. When the last handler returns, the destructors for thread-specific data, if any, are called and the thread is terminated. This is the cancellation process. The function returns 0 if successful and an error number if not. The pthread_cancel() function will fail if the thread parameter does not correspond to an existing thread.

Some threads may require safeguards against untimely cancellation. Installing safeguards in a thread's function may prevent undesirable situations. Threads share data, and depending on the thread model used, one thread may be processing data that is to be passed to another thread for further processing. While the thread is processing the data, it has sole possession of it by locking an associated mutex. If a thread is canceled after it has locked a mutex but before releasing it, this could cause deadlock. The data may also be required to reach some consistent state before it can be used again; if a thread is canceled before this is done, an undesirable condition may occur. Simply put, depending on the type of processing a thread is performing, thread cancellation should occur only when it is safe. A vital thread may prevent cancellation entirely. Therefore, cancellation should be restricted to threads that are not vital, or to points of execution where no locks are held on resources. Cancellation can also be postponed until all vital cleanup has taken place.

The cancelability state describes the cancel condition of a thread as being cancelable or uncancelable. A thread's cancelability type determines when cancellation takes place: the thread can act upon a cancel request immediately or defer the cancellation to a later point in its execution. The cancelability state and type are dynamically set by the thread itself.

The pthread_setcancelstate() and pthread_setcanceltype() functions are used to set the cancelability state and type of the calling thread. The pthread_setcancelstate() function sets the calling thread to the cancelability state specified by state and returns the previous state in oldstate.


#include <pthread.h>

int pthread_setcancelstate(int state, int *oldstate);
int pthread_setcanceltype(int type, int *oldtype);

The values for state and oldstate are:

PTHREAD_CANCEL_DISABLE

A state in which a thread will ignore a cancel request.

PTHREAD_CANCEL_ENABLE

A state in which a thread will concede to a cancel request. This is the default state of any newly created thread.

If successful, the function will return 0. If not successful, the function will return an error number. The pthread_setcancelstate() function may fail if not passed a valid state value.

The pthread_setcanceltype() function sets the calling thread to the cancelability type specified by type and returns the previous type in oldtype. The values for type and oldtype are:

PTHREAD_CANCEL_DEFERRED

A cancelability type in which a thread puts off termination until it reaches a cancellation point. This is the default cancelability type of any newly created thread.

PTHREAD_CANCEL_ASYNCHRONOUS

A cancelability type in which a thread terminates immediately.

If successful, the function will return 0. If not successful, the function will return an error number. The pthread_setcanceltype() function may fail if not passed a valid type value.

The pthread_setcancelstate() and pthread_setcanceltype() functions are used together to establish the cancelability of a thread. Table 4-5 lists the combinations of state and type and a description of what occurs for each combination.

Table 4-5. Combinations of Cancelability State and Type

Cancelability State      Cancelability Type           Description

PTHREAD_CANCEL_ENABLE    PTHREAD_CANCEL_DEFERRED      Deferred cancellation. The default cancellation state and type of a thread. Thread cancellation takes place when the thread enters a cancellation point or when the programmer defines a cancellation point with a call to pthread_testcancel().

PTHREAD_CANCEL_ENABLE    PTHREAD_CANCEL_ASYNCHRONOUS  Asynchronous cancellation. Thread cancellation takes place immediately.

PTHREAD_CANCEL_DISABLE   (type is ignored)            Disabled cancellation. Thread cancellation does not take place.

Cancellation Points

When a cancel request is deferred, the termination of the thread takes place later in the execution of the thread's function. Wherever it occurs, it should be "safe" to cancel the thread: the thread is not in the middle of executing critical code, does not hold a lock on a mutex, and has not left data in an unusable state. Such safe locations in the code's execution are good locations for cancellation points. A cancellation point is a checkpoint where a thread checks whether any cancellation requests are pending and, if so, concedes to termination.

Cancellation points can be marked by a call to pthread_testcancel(). This function checks for any pending cancellation request. If a request is pending, it causes the cancellation process to occur at the location where the function is called. If no cancellation is pending, the function simply returns and execution continues. This call can be placed at any location in the code where it is considered safe to terminate the thread.


#include <pthread.h>

void pthread_testcancel(void);

Program 4.3 contains functions that use the pthread_setcancelstate(), pthread_setcanceltype(), and pthread_testcancel() functions. Program 4.3 shows three functions setting their cancelability types and states.

Program 4.3

#include <iostream>
#include <pthread.h>

using namespace std;

void *task1(void *X)
{
   int OldState;

   // disable cancelability
   pthread_setcancelstate(PTHREAD_CANCEL_DISABLE,&OldState);

   for(int Count = 1;Count < 100;Count++){
      cout << "thread A is working: " << Count << endl;
   }
   return NULL;
}

void *task2(void *X)
{
   int OldState,OldType;

   // enable cancelability, asynchronous
   pthread_setcancelstate(PTHREAD_CANCEL_ENABLE,&OldState);
   pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS,&OldType);

   for(int Count = 1;Count < 100;Count++){
      cout << "thread B is working: " << Count << endl;
   }
   return NULL;
}

void *task3(void *X)
{
   int OldState,OldType;

   // enable cancelability, deferred
   pthread_setcancelstate(PTHREAD_CANCEL_ENABLE,&OldState);
   pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED,&OldType);

   for(int Count = 1;Count < 1000;Count++){
      cout << "thread C is working: " << Count << endl;
      if((Count%100) == 0){
         pthread_testcancel();
      }
   }
   return NULL;
}

In Program 4.3, each task sets its cancelability condition. In task1, the cancelability of the thread is disabled. What follows is critical code that must be executed. In task2, the cancelability of the thread is enabled. The call to pthread_setcancelstate() is unnecessary because all new threads have an enabled cancelability state. The cancelability type is set to PTHREAD_CANCEL_ASYNCHRONOUS. This means that whenever a cancel request is issued, the thread starts its cancellation process immediately, regardless of where it is in its execution. Therefore, it should not be executing any vital code once this type is activated, and if it makes any system calls, they should be cancellation-safe functions (discussed later). In task2, the loop iterates until the cancel request is issued.

In task3, the cancelability of the thread is also enabled and the cancellation type is PTHREAD_CANCEL_DEFERRED. This is the default state and type of a newly created thread; therefore, the calls to pthread_setcancelstate() and pthread_setcanceltype() are unnecessary. Critical code can be executed after the state and type are set because termination will not take place until the pthread_testcancel() function is called. If no request is pending, the thread continues executing until the next call, if any, to pthread_testcancel(). In task3, the pthread_testcancel() function is called whenever Count is evenly divisible by 100. Code between cancellation points should not be critical because it may not execute.

Program 4.4 shows the boss thread that issues the cancellation request for each thread.

Program 4.4

int main(int argc, char *argv[])
{
   pthread_t Threads[3];
   void *Status;

   pthread_create(&Threads[0],NULL,task1,NULL);
   pthread_create(&Threads[1],NULL,task2,NULL);
   pthread_create(&Threads[2],NULL,task3,NULL);

   pthread_cancel(Threads[0]);
   pthread_cancel(Threads[1]);
   pthread_cancel(Threads[2]);

   for(int Count = 0;Count < 3;Count++){
      pthread_join(Threads[Count],&Status);
      if(Status == PTHREAD_CANCELED){
         cout << "thread" << Count << " has been canceled" << endl;
      }
      else{
         cout << "thread" << Count << " has survived" << endl;
      }
   }
   return 0;
}

The boss thread in Program 4.4 creates three threads, then issues a cancellation request for each thread. The boss thread calls the pthread_join() function for each thread. The pthread_join() function does not fail if it attempts to join with a thread that has already terminated; the join simply retrieves the exit status of the terminated thread. This is useful because the thread that issues the cancellation request may be a different thread than the one that calls pthread_join(). Monitoring the work of all the worker threads may be the sole task of a single thread that also cancels threads, while another thread examines the exit status of threads by calling pthread_join(). This type of information may be used to statistically evaluate which threads have the best performance. In this program, the boss thread joins with each thread and examines its exit status in a loop. Threads[0] was not canceled because its cancelability was disabled. The other two threads were canceled. A canceled thread may return an exit status, for example, PTHREAD_CANCELED. Program Profile 4.2 contains the profile for Programs 4.3 and 4.4.

Program Profile 4.2

Program Name

program4-34.cc

Description

Demonstrates the use of thread cancellation. Three threads have different cancellation types and states. Each thread executes a loop. The cancellation state and type determine the number of loop iterations or whether the loop is executed at all. The primary thread examines the exit status of each thread.

Libraries Required

libpthread

Headers Required

<pthread.h> <iostream>

Compile and Link Instructions

c++ -o program4-34 program4-34.cc -lpthread

Test Environment

SuSE Linux 7.1, gcc 2.95.2

Execution Instructions

./program4-34
Cancellation points marked by a call to the pthread_testcancel() function are used in user-defined functions. The Pthread library defines the execution of other functions as cancellation points. These functions block the calling thread, and while blocked the thread is safe to cancel. These are the Pthread library functions that act as cancellation points:

pthread_cond_timedwait()
pthread_cond_wait()
pthread_join()
pthread_testcancel()
sigwait()
If a thread with a deferred cancelability state has a cancellation request pending when making a call to one of these Pthread library functions, the cancellation process will be initiated. As far as system calls, Table 4-6 lists some of the system calls required to be cancellation points.

While these functions are safe for deferred cancellation, they may not be safe for asynchronous cancellation. An asynchronous cancellation during a library call that is not asynchronous-cancel-safe may leave library data in an inconsistent state. The library may have allocated memory on behalf of the thread and, when the thread is canceled, may still hold that memory. For other library and system functions that are not cancellation-safe (asynchronously or deferred), it may be necessary to prevent a thread from terminating by disabling cancellation, or to defer cancellation until after the function call has returned.

Cleaning Up Before Termination

Once the thread concedes to cancellation, it may need to perform some final processing before it is terminated. The thread may have to close files, reset shared resources to some consistent state, release locks, or deallocate resources. The Pthread library defines a mechanism for each thread to perform last-minute tasks before terminating. A cleanup stack is associated with every thread. The stack contains pointers to routines that are to be executed during the cancellation process. The pthread_cleanup_push() function pushes a pointer to the routine onto the cleanup stack.

Table 4-6. POSIX System Calls Required to be Cancellation Points

POSIX System Calls (Cancellation Points)

accept()             open()               sigsuspend()
aio_suspend()        pause()              sigtimedwait()
close()              pread()              sigwait()
connect()            pwrite()             sigwaitinfo()
creat()              read()               sleep()
fcntl() (F_SETLKW)   recv()               system()
fsync()              recvfrom()           tcdrain()
lockf()              recvmsg()            wait()
mq_receive()         select()             waitid()
mq_send()            sem_timedwait()      waitpid()
msgrcv()             sem_wait()           write()
msgsnd()             send()               writev()
msync()              sendmsg()
nanosleep()          sendto()

#include <pthread.h>

void pthread_cleanup_push(void (*routine)(void *), void *arg);
void pthread_cleanup_pop(int execute);

The routine parameter is a pointer to the function to be pushed onto the stack. The arg parameter is passed to the function. The function routine is called with the arg parameter when the thread exits by calling pthread_exit(), when the thread concedes to a termination request, or when the thread explicitly calls the pthread_cleanup_pop() function with a nonzero value for execute. The pthread_cleanup_push() function does not return a value.

The pthread_cleanup_pop() function removes routine's pointer from the top of the calling thread's cleanup stack. The execute parameter can have a value of 1 or 0. If the value is 1, the thread executes routine even if it is not being terminated. The thread continues execution from the point after the call to this function. If the value is 0, the pointer is removed from the top of the stack without executing.

For each push there must be a corresponding pop within the same lexical scope. For example, funcA() requires a cleanup handler to be executed when the function exits or is canceled:

void *funcA(void *X)
{
   int *Tid;
   Tid = new int;
   // do some work
   //...
   pthread_cleanup_push(cleanup_funcA,Tid);
   // do some more work
   //...
   pthread_cleanup_pop(0);
}

Here, funcA() pushes cleanup handler cleanup_funcA() onto the cleanup stack by calling the pthread_cleanup_push() function. The pthread_cleanup_pop() function is required for each call to the pthread_cleanup_push() function. The pop function is passed 0, which means the handler is removed from the cleanup stack but is not executed at this point. The handler will be executed if the thread that executes funcA() is canceled.

The funcB() also requires a cleanup handler:

void *funcB(void *X)
{
   int *Tid;
   Tid = new int;
   // do some work
   //...
   pthread_cleanup_push(cleanup_funcB,Tid);
   // do some more work
   //...
   pthread_cleanup_pop(1);
}

Here, funcB() pushes cleanup handler cleanup_funcB() onto the cleanup stack. The difference in this case is that the pthread_cleanup_pop() function is passed 1, which means the handler is removed from the cleanup stack and executed at this point. The handler will be executed regardless of whether the thread that executes funcB() is canceled. The cleanup handlers, cleanup_funcA() and cleanup_funcB(), are regular functions that can be used to close files, release resources, unlock mutexes, and so on.

4.9.2 Managing the Thread's Stack

The address space of a process is divided into the text and static data segments, free store, and the stack segment. The location and size of the thread's stacks are cut out of the stack segment of the process. A thread's stack will store a stack frame for each routine it has called but has not exited. The stack frame contains temporary variables, local variables, return addresses, and any other additional information the thread needs to find its way back to previously executing routines. Once the routine is exited, the stack frame for that routine is removed from the stack. Figure 4-12 shows how stack frames are placed onto a stack.

04fig12.gifFigure 4-12. Stack frames generated from a thread.

In Figure 4-12, Thread A executes Task 1. Task 1 creates some local variables, does some processing, then calls Task X. A stack frame is created for Task 1 and placed on the stack. Task X does some processing, creates local variables, then calls Task C. A stack frame for Task X is placed on the stack. Task C calls Task Y, and so on. Each stack must be large enough to accommodate the execution of each thread's function along with the chain of routines that will be called. The size and location of a thread's stack are managed by the operating system but they can be set or examined by several methods defined by the attribute object.

The pthread_attr_getstacksize() function retrieves the default minimum stack size. The attr parameter is the thread attribute object from which the stack size is extracted. When the function returns, the default stack size, expressed in bytes, is stored in the stacksize parameter and the return value is 0. If not successful, the function returns an error number.

The pthread_attr_setstacksize() function sets the minimum stack size. The attr parameter is the thread attribute object for which the stack size is set. The stacksize parameter is the minimum size of the stack, expressed in bytes. If the function is successful, the return value is 0. If not successful, the function returns an error number. The function will fail if stacksize is less than PTHREAD_STACK_MIN or exceeds an implementation-defined limit. PTHREAD_STACK_MIN will probably be lower than the default stack minimum returned by pthread_attr_getstacksize(), so consider the value returned by pthread_attr_getstacksize() before raising the minimum size of a thread's stack. A stack's size is fixed, so the stack's growth during runtime is confined to the fixed space set when the thread is created.


#include <pthread.h>

int pthread_attr_getstacksize(const pthread_attr_t *restrict attr,
                              size_t *restrict stacksize);
int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize);

The location of the thread's stack can be set and retrieved by the pthread_attr_setstackaddr() and pthread_attr_getstackaddr() functions. The pthread_attr_setstackaddr() function sets the base location of the stack to the address specified by the parameter stackaddr for the thread created with the thread attribute object attr. This address should be within the virtual address space of the process. The size of the stack will be at least equal to the minimum stack size specified by PTHREAD_STACK_MIN. If successful, the function will return 0. If not successful, the function will return an error number.

The pthread_attr_getstackaddr() function retrieves the base location of the stack address for the thread created with the thread attribute object specified by the parameter attr. The address is returned and stored in the parameter stackaddr. If successful, the function will return 0. If not successful, the function will return an error number.


#include <pthread.h>

int pthread_attr_setstackaddr(pthread_attr_t *attr, void *stackaddr);
int pthread_attr_getstackaddr(const pthread_attr_t *restrict attr,
                              void **restrict stackaddr);

The stack attributes (size and location) can be set by a single function. The pthread_attr_setstack() function sets both the stack size and stack location of a thread created using the specified attribute object attr. The base location of the stack will be set to the stackaddr parameter and the size of the stack will be set to the stacksize parameter. The pthread_attr_getstack() function retrieves the stack size and stack location of a thread created using the specified attribute object attr. If successful, the stack location will be stored in the stackaddr parameter and the stack size will be stored in the stacksize parameter. If successful, these functions will return 0. If not successful, an error number is returned. The pthread_attr_setstack() function will fail if stacksize is less than PTHREAD_STACK_MIN or exceeds some implementation-defined limit.


#include <pthread.h>

int pthread_attr_setstack(pthread_attr_t *attr, void *stackaddr,
                          size_t stacksize);
int pthread_attr_getstack(const pthread_attr_t *restrict attr,
                          void **restrict stackaddr,
                          size_t *restrict stacksize);

Example 4.3 sets the stack size of a thread using a thread attribute object.

Example 4.3 Changing the stack size of a thread using an offset.


pthread_attr_getstacksize(&SchedAttr,&DefaultSize);

if(DefaultSize < Min_Stack_Req){
   SizeOffset = Min_Stack_Req - DefaultSize;
   NewSize = DefaultSize + SizeOffset;
   pthread_attr_setstacksize(&SchedAttr,(size_t)NewSize);
}
In Example 4.3, the default stack size is retrieved from the thread attribute object, and then the code determines whether the default size is less than the minimum stack size desired. If so, the offset is calculated and added to the default stack size. This becomes the new minimum stack size for this thread.

Setting the stack size and stack location may cause your program to be nonportable. The stack size and location you set for your program on one platform may not match the stack size and location of another platform.

4.9.3 Setting Thread Scheduling and Priorities

Like processes, threads execute independently. Each thread is assigned to a processor in order to execute the task it has been given. Each thread is assigned a scheduling policy and priority that dictate how and when it is assigned to a processor. The scheduling policy and priority of a thread or group of threads can be set by an attribute object using these functions:

pthread_attr_setinheritsched()
pthread_attr_setschedpolicy()
pthread_attr_setschedparam()

These functions can be used to return scheduling information about the thread:

pthread_attr_getinheritsched()
pthread_attr_getschedpolicy()
pthread_attr_getschedparam()

#include <pthread.h>
#include <sched.h>

int pthread_attr_setinheritsched(pthread_attr_t *attr,
                                 int inheritsched);
int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
int pthread_attr_setschedparam(pthread_attr_t *restrict attr,
                               const struct sched_param
                               *restrict param);

The pthread_attr_setinheritsched(), pthread_attr_setschedpolicy(), and pthread_attr_setschedparam() are used together to set the scheduling policy and priority of a thread. The pthread_attr_setinheritsched() function is used to determine how the thread's scheduling attributes will be set, either by inheriting the scheduling attributes from the creator thread or from an attribute object. The inheritsched parameter can have one of these values:


PTHREAD_INHERIT_SCHED

Thread scheduling attributes shall be inherited from the creator thread, and any scheduling attributes of the attr parameter will be ignored.

PTHREAD_EXPLICIT_SCHED

Thread scheduling attributes shall be set to the scheduling attributes of the attribute object attr.

If the inheritsched parameter value is PTHREAD_EXPLICIT_SCHED, then the pthread_attr_setschedpolicy() function is used to set the scheduling policy and the pthread_attr_setschedparam() function is used to set the priority.

The pthread_attr_setschedpolicy() function sets the scheduling policy of the thread attribute object attr. The policy parameter values can be one of the following defined in the <sched.h> header:


SCHED_FIFO

First-In-First-Out scheduling policy, where the executing thread runs to completion.

SCHED_RR

Round-robin scheduling policy, where each thread is assigned to a processor only for a time slice.

SCHED_OTHER

Other scheduling policy (implementation-defined). By default, this is the scheduling policy of any newly created thread.

The pthread_attr_setschedparam() function is used to set the scheduling parameters of the attribute object attr used by the scheduling policy. The param parameter is a structure that contains the parameters. The sched_param structure has at least this data member defined:

struct sched_param {
   int sched_priority;
   //...
};
It may also have additional data members, along with several functions that return and set the priority minimum, maximum, scheduler, parameters, and so on. If the scheduling policy is either SCHED_FIFO or SCHED_RR, then the only member required to have a value is sched_priority.

To obtain the maximum and minimum priority values, use the sched_get_priority_min() and sched_get_priority_max() functions.


#include <sched.h>

int sched_get_priority_max(int policy);
int sched_get_priority_min(int policy);

Both functions are passed the scheduling policy policy for which the priority values are requested and both will return either the maximum or minimum priority values for the scheduling policy.

Example 4.4 shows how to set the scheduling policy and priority of a thread by using the thread attribute object.

Example 4.4 Using the thread attribute object to set the scheduling policy and priority of a thread.

#define Min_Stack_Req 3000000

pthread_t ThreadA;
pthread_attr_t SchedAttr;
size_t DefaultSize,SizeOffset,NewSize;
int MinPriority,MaxPriority,MidPriority;
sched_param SchedParam;

int main(int argc, char *argv[])
{
   // initialize attribute object
   pthread_attr_init(&SchedAttr);

   // retrieve min and max priority values for scheduling policy
   MinPriority = sched_get_priority_min(SCHED_RR);
   MaxPriority = sched_get_priority_max(SCHED_RR);

   // calculate priority value
   MidPriority = (MaxPriority + MinPriority)/2;

   // assign priority value to sched_param structure
   SchedParam.sched_priority = MidPriority;

   // set attribute object with scheduling parameter
   pthread_attr_setschedparam(&SchedAttr,&SchedParam);

   // set scheduling attributes to be determined by attribute object
   pthread_attr_setinheritsched(&SchedAttr,PTHREAD_EXPLICIT_SCHED);

   // set scheduling policy
   pthread_attr_setschedpolicy(&SchedAttr,SCHED_RR);

   // create thread with scheduling attribute object
   pthread_create(&ThreadA,&SchedAttr,task2,NULL);
   //...
}
In Example 4.4, the scheduling policy and priority of ThreadA is set using the thread attribute object SchedAttr. This is done in eight steps:

  1. Initialize attribute object.

  2. Retrieve min and max priority values for scheduling policy.

  3. Calculate priority value.

  4. Assign priority value to sched_param structure.

  5. Set attribute object with scheduling parameter.

  6. Set scheduling attributes to be determined by attribute object.

  7. Set scheduling policy.

  8. Create thread with scheduling attribute object.

With this method, the scheduling policy and priority is set before the thread is running. In order to dynamically change the scheduling policy and priority, use the pthread_setschedparam() and pthread_setschedprio() functions.


#include <pthread.h>

int pthread_setschedparam(pthread_t thread, int policy,
                          const struct sched_param *param);
int pthread_getschedparam(pthread_t thread, int *restrict policy,
                          struct sched_param *restrict param);
int pthread_setschedprio(pthread_t thread, int prio);

The pthread_setschedparam() function sets both the scheduling policy and priority of a thread directly without the use of an attribute object. The thread parameter is the id of the thread, policy is the new scheduling policy, and param contains the scheduling priority. The pthread_getschedparam() function shall return the scheduling policy and scheduling parameters and store their values in policy and param parameters, respectively, if successful. If successful, both functions will return 0. If not successful, both functions will return an error number. Table 4-7 lists the conditions in which these functions may fail.

The pthread_setschedprio() function is used to set the scheduling priority of an executing thread whose thread id is specified by the thread parameter. The scheduling priority of the thread will be changed to the value specified by prio. If the function fails, the priority of the thread will not be changed. If successful, the function will return 0. If not successful, an error number is returned. The conditions in which this function fails are also listed in Table 4-7.

Table 4-7. Conditions in Which the Scheduling Policy and Priority Functions May Fail

Pthread Scheduling and Priority Functions

Failure Conditions

int pthread_getschedparam
(pthread_t thread,
 int *restrict policy,
 struct sched_param
 *restrict param);

  • The thread parameter does not refer to an existing thread.

int pthread_setschedparam
(pthread_t thread,
 int policy,
 const struct sched_param
 *param);

  • The policy parameter or one of the scheduling parameters associated with the policy parameter is invalid.

  • The policy parameter or one of the scheduling paramaters has a value that is not supported.

  • The calling thread does not have the appropriate permission to set the scheduling parameters or policy of the specified thread.

  • The thread parameter does not refer to an existing thread.

  • The implementation does not allow the application to change one of the parameters to the specified value.

int pthread_setschedprio
(pthread_t thread,
 int prio);

  • The prio parameter is invalid for the scheduling policy of the specified thread.

  • The priority parameter has a value that is not supported.

  • The calling thread does not have the appropriate permission to set the scheduling priority of the specified thread.

  • The thread parameter does not refer to an existing thread.

  • The implementation does not allow the application to change the priority to the specified value.

Remember to carefully consider why it is necessary to change the scheduling policy or priority of a running thread. Doing so may adversely affect the overall performance of your application. Threads with higher priority preempt running threads with lower priority. This may lead to starvation: a thread is constantly preempted and therefore unable to complete execution.

Setting Contention Scope of a Thread

The contention scope of a thread determines the set of threads with the same scheduling policy and priority against which it competes for processor usage. The contention scope of a thread is set by the thread attribute object.


#include <pthread.h>

int pthread_attr_setscope(pthread_attr_t *attr, int contentionscope);
int pthread_attr_getscope(const pthread_attr_t *restrict attr,
                          int *restrict contentionscope);

The pthread_attr_setscope() function sets the contention scope attribute of the thread attribute object specified by the parameter attr. The contention scope of the thread attribute object is set to the value stored in the contentionscope parameter. The contentionscope parameter can have one of these values:

  • PTHREAD_SCOPE_SYSTEM: system scheduling contention scope

  • PTHREAD_SCOPE_PROCESS: process scheduling contention scope

The pthread_attr_getscope() function returns the contention scope attribute from the thread attribute object specified by the parameter attr. If successful, the contention scope of the thread attribute object will be returned and stored in the contentionscope parameter. Both functions return 0 if successful and an error number otherwise.

4.9.4 Using sysconf()

It is important to know the thread resource limits of your system in order for your application to appropriately manage its resources. For example, the maximum number of threads per process places an upper bound on the number of worker threads that can be created for a process. The sysconf() function is used to return the current value of configurable system limits or options.


#include <unistd.h>
#include <limits.h>

long sysconf(int name);

The name parameter is the system variable to be queried. The function returns the value of that variable as defined by POSIX IEEE Std. 1003.1-2001. The value can be compared to the constant defined by your implementation of the standard to see how compliant the implementation is. Several of these variables and their constant counterparts concern threads, processes, and semaphores; some are listed in Table 4-8.

The sysconf() function returns -1 and sets errno to indicate an error if the parameter name is not valid. A variable may also have no limit defined, in which case -1 is returned as a valid return value and errno is not set. No defined limit does not mean there is an infinite limit; it simply means that no maximum limit is defined and that higher limits are supported, depending upon the system resources available.

Here is an example of a call to the sysconf() function:


The constant value of PTHREAD_STACK_MIN is compared to the _SC_THREAD_STACK_MIN value returned by the sysconf() function.

Table 4-8. System Variables and Their Corresponding Symbolic Constants

Name                               Value                                Description
_SC_THREADS                        _POSIX_THREADS                       Supports threads.
_SC_THREAD_ATTR_STACKADDR          _POSIX_THREAD_ATTR_STACKADDR         Supports thread stack address attribute.
_SC_THREAD_ATTR_STACKSIZE          _POSIX_THREAD_ATTR_STACKSIZE         Supports thread stack size attribute.
_SC_THREAD_STACK_MIN               PTHREAD_STACK_MIN                    Minimum size of thread stack storage in bytes.
_SC_THREAD_THREADS_MAX             PTHREAD_THREADS_MAX                  Maximum number of threads per process.
_SC_THREAD_KEYS_MAX                PTHREAD_KEYS_MAX                     Maximum number of keys per process.
_SC_THREAD_PRIO_INHERIT            _POSIX_THREAD_PRIO_INHERIT           Supports priority inheritance option.
_SC_THREAD_PRIO_PROTECT            _POSIX_THREAD_PRIO_PROTECT           Supports thread priority option.
_SC_THREAD_PRIORITY_SCHEDULING     _POSIX_THREAD_PRIORITY_SCHEDULING    Supports thread priority scheduling option.
_SC_THREAD_PROCESS_SHARED          _POSIX_THREAD_PROCESS_SHARED         Supports process-shared synchronization.
_SC_THREAD_SAFE_FUNCTIONS          _POSIX_THREAD_SAFE_FUNCTIONS         Supports thread-safe functions.
_SC_THREAD_DESTRUCTOR_ITERATIONS   PTHREAD_DESTRUCTOR_ITERATIONS        Determines the number of attempts made to destroy thread-specific data on thread exit.
_SC_CHILD_MAX                      CHILD_MAX                            Maximum number of processes allowed to a UID.
_SC_PRIORITY_SCHEDULING            _POSIX_PRIORITY_SCHEDULING           Supports process scheduling.
_SC_REALTIME_SIGNALS               _POSIX_REALTIME_SIGNALS              Supports real-time signals.
_SC_XOPEN_REALTIME_THREADS         _XOPEN_REALTIME_THREADS              Supports X/Open POSIX real-time threads feature group.
_SC_STREAM_MAX                     STREAM_MAX                           Determines the number of streams one process can have open at a time.
_SC_SEMAPHORES                     _POSIX_SEMAPHORES                    Supports semaphores.
_SC_SEM_NSEMS_MAX                  SEM_NSEMS_MAX                        Determines the maximum number of semaphores a process may have.
_SC_SEM_VALUE_MAX                  SEM_VALUE_MAX                        Determines the maximum value a semaphore may have.
_SC_SHARED_MEMORY_OBJECTS          _POSIX_SHARED_MEMORY_OBJECTS         Supports shared memory objects.

4.9.5 Managing a Critical Section

Concurrently executing processes, or threads within the same process, can share data structures, variables, or data. Sharing global memory allows the processes or threads to communicate or share access to data. With multiple processes, the shared global memory is external to the processes, and each of the processes in question has access to it. This data structure can be used to transfer data or commands among the processes. When threads need to communicate, they can access data structures or variables that are part of the same process to which they belong.

Whether it is processes or threads accessing shared modifiable data, the data structures, variables, or data are in a critical region or section of the processes' or threads' code. A critical section in the code is where the thread or process is accessing and processing the shared block of modifiable memory. Classifying a section of code as a critical section can be used to control race conditions. For example, in a program two threads, thread A and thread B, are used to perform a multiple-keyword search through all the files located on a system. Thread A searches each directory for text files, writes the paths to a list data structure TextFiles, and then increments a FileCount variable. Thread B extracts the filenames from the list TextFiles, decrements FileCount, and then searches the file for the multiple keywords. A file that contains the keywords is written to a file, and another variable, FoundCount, is incremented. FoundCount is not shared with thread A. Threads A and B can be executed simultaneously on separate processors. Thread A executes until all directories have been searched while thread B searches each file extracted from TextFiles. The list is maintained in sorted order and can be requested to display its contents at any time.

A number of problems can crop up. For example, thread B may attempt to extract a filename from TextFiles before thread A has added a filename to TextFiles. Thread B may attempt to decrement FileCount before thread A has incremented FileCount, or both may attempt to modify the variable simultaneously. Also, TextFiles may be sorting its elements while thread A is simultaneously attempting to write a filename to it or thread B is simultaneously attempting to extract a filename from it. These problems are examples of race conditions, in which two or more threads or processes are attempting to modify the same block of shared memory simultaneously.

When threads or processes are merely reading the same block of memory simultaneously, race conditions do not occur. Race conditions occur when multiple processes or threads are simultaneously accessing the same block of memory with at least one of them attempting to modify it. The section of code becomes critical when there are simultaneous attempts to change the same block of memory. One way to protect the critical section is to allow only exclusive access to the block of memory. Exclusive access means one process or thread has access to the shared block of memory for a short period while all other processes or threads are prevented (blocked) from entering their critical sections, where they access the same block of memory.

A locking mechanism, such as a mutex semaphore, can be used to control race conditions. A mutex, short for "mutual exclusion," is used to block off a critical section. The mutex is locked before entering the critical section and unlocked when exiting it:

lock mutex
   // enter critical section
   // access shared modifiable memory
   // exit critical section
unlock mutex

The pthread_mutex_t type models a mutex object. Before a pthread_mutex_t object can be used, it must first be initialized; pthread_mutex_init() initializes the mutex. Once initialized, the mutex can be locked, unlocked, and destroyed with the pthread_mutex_lock(), pthread_mutex_unlock(), and pthread_mutex_destroy() functions. Program 4.5 contains the function that searches a system for text files. Program 4.6 contains the function that searches each text file for specified keywords. Each function is executed by a thread. Program 4.7 contains the primary thread. These programs implement the producer-consumer model for thread delegation. Program 4.5 contains the producer thread and Program 4.6 contains the consumer thread. The critical sections are the regions bracketed by the mutex lock and unlock calls.

Program 4.5

int isDirectory(string FileName)
{
   struct stat StatBuffer;

   if(lstat(FileName.c_str(),&StatBuffer) == -1){
      cout << "could not get stats on file" << endl;
      return(0);
   }
   if(StatBuffer.st_mode & S_IFDIR){
      return(1);
   }
   return(0);
}

int isRegular(string FileName)
{
   struct stat StatBuffer;

   if(lstat(FileName.c_str(),&StatBuffer) == -1){
      cout << "could not get stats on file" << endl;
      return(0);
   }
   if(StatBuffer.st_mode & S_IFREG){
      return(1);
   }
   return(0);
}

void depthFirstTraversal(const char *CurrentDir)
{
   DIR *DirP;
   string Temp;
   string FileName;
   struct dirent *EntryP;

   chdir(CurrentDir);
   cout << "Searching Directory: " << CurrentDir << endl;
   DirP = opendir(CurrentDir);
   if(DirP == NULL){
      cout << "could not open directory" << endl;
      return;
   }
   EntryP = readdir(DirP);
   while(EntryP != NULL)
   {
      Temp.erase();
      FileName.erase();
      Temp = EntryP->d_name;
      if((Temp != ".") && (Temp != "..")){
         FileName.assign(CurrentDir);
         FileName.append(1,'/');
         FileName.append(EntryP->d_name);
         if(isDirectory(FileName)){
            string NewDirectory;
            NewDirectory = FileName;
            depthFirstTraversal(NewDirectory.c_str());
         }
         else{
            if(isRegular(FileName)){
               if(FileName.find(".cpp") != string::npos){
                  // critical section: update the shared file count
                  pthread_mutex_lock(&CountMutex);
                     FileCount++;
                  pthread_mutex_unlock(&CountMutex);
                  // critical section: add the file to the shared queue
                  pthread_mutex_lock(&QueueMutex);
                     TextFiles.push(FileName);
                  pthread_mutex_unlock(&QueueMutex);
               }
            }
         }
      }
      EntryP = readdir(DirP);
   }
   closedir(DirP);
}

void *task(void *X)
{
   char *Directory;
   Directory = static_cast<char *>(X);
   depthFirstTraversal(Directory);
   return(NULL);
}

Program 4.6 contains the consumer thread that performs the search.

Program 4.6

void *keySearch(void *X)
{
   string Temp, Filename;
   less<string> Comp;

   while(!Keyfile.eof() && Keyfile.good())
   {
      Keyfile >> Temp;
      if(!Keyfile.eof()){
         KeyWords.insert(Temp);
      }
   }
   Keyfile.close();

   // busy-wait until the producer has queued at least one file
   while(TextFiles.empty())
   { }

   while(!TextFiles.empty())
   {
      // critical section: remove a file from the shared queue
      pthread_mutex_lock(&QueueMutex);
         Filename = TextFiles.front();
         TextFiles.pop();
      pthread_mutex_unlock(&QueueMutex);
      Infile.open(Filename.c_str());
      SearchWords.erase(SearchWords.begin(),SearchWords.end());
      while(!Infile.eof() && Infile.good())
      {
         Infile >> Temp;
         SearchWords.insert(Temp);
      }
      Infile.close();
      Infile.clear();   // clear the eof state before the next open
      if(includes(SearchWords.begin(),SearchWords.end(),
                  KeyWords.begin(),KeyWords.end(),Comp)){
         Outfile << Filename << endl;
         // critical section: update the shared file count
         pthread_mutex_lock(&CountMutex);
            FileCount--;
         pthread_mutex_unlock(&CountMutex);
         FoundCount++;
      }
   }
   return(NULL);
}

Program 4.7 contains the primary thread for producer–consumer threads in Programs 4.5 and 4.6.

Program 4.7

#include <sys/stat.h>
#include <dirent.h>
#include <unistd.h>
#include <cstdlib>
#include <fstream>
#include <queue>
#include <algorithm>
#include <pthread.h>
#include <iostream>
#include <set>
#include <string>

using namespace std;

pthread_mutex_t QueueMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t CountMutex = PTHREAD_MUTEX_INITIALIZER;

int FileCount = 0;
int FoundCount = 0;

void *task(void *X);
void *keySearch(void *X);
queue<string> TextFiles;
set<string,less<string> > KeyWords;
set<string,less<string> > SearchWords;
ifstream Infile;
ofstream Outfile;
ifstream Keyfile;
string KeywordFile;
string OutFilename;
pthread_t Thread1;
pthread_t Thread2;

void depthFirstTraversal(const char *CurrentDir);
int isDirectory(string FileName);
int isRegular(string FileName);

int main(int argc, char *argv[])
{
   if(argc != 4){
      cerr << "need more info" << endl;
      exit(1);
   }
   Outfile.open(argv[3],ios::app|ios::ate);
   Keyfile.open(argv[2]);
   pthread_create(&Thread1,NULL,task,argv[1]);
   pthread_create(&Thread2,NULL,keySearch,argv[1]);
   pthread_join(Thread1,NULL);
   pthread_join(Thread2,NULL);
   pthread_mutex_destroy(&CountMutex);
   pthread_mutex_destroy(&QueueMutex);
   cout << argv[1] << " contains " << FoundCount
        << " files that contain all keywords." << endl;
   return(0);
}

With mutexes, one thread at a time is permitted to read from or write to the shared memory. There are other mechanisms and techniques that can be used to ensure thread safety for user-defined functions implementing one of the PRAM models:

  • EREW (exclusive read and exclusive write)

  • CREW (concurrent read and exclusive write)

  • ERCW (exclusive read and concurrent write)

  • CRCW (concurrent read and concurrent write)

Mutexes are used to implement EREW algorithms, which will be discussed in Chapter 5.
