4.11 Dividing Your Program into Multiple Threads

Earlier in this chapter we discussed the delegation of work according to a specific strategy or approach called a thread model. Those thread models were:

  • delegation (boss–worker)

  • peer-to-peer

  • pipeline

  • producer–consumer

Each model has its own WBS (Work Breakdown Structure) that determines who is responsible for thread creation and under what conditions threads are created. In this section we will show an example of a program for each model using Pthread library functions.

4.11.1 Using the Delegation Model

We discussed two approaches that can be used to implement the delegation approach to dividing a program into threads. To recall, in the delegation model, a single thread (boss) creates the threads (workers) and assigns each a task. The boss thread delegates the task each worker thread is to perform by specifying a function. With one approach, the boss thread creates threads as a result of requests made to the system. The boss thread processes each type of request in an event loop. As events occur, thread workers are created and assigned their duties. Example 4.5 shows the event loop in the boss thread and the worker threads in pseudocode.

Example 4.5 Approach 1: Skeleton program of boss and worker thread model.

//...
pthread_mutex_t Mutex = PTHREAD_MUTEX_INITIALIZER;
int AvailableThreads;
pthread_t Thread[Max_Threads];
void decrementThreadAvailability(void);
void incrementThreadAvailability(void);
int threadAvailability(void);


// boss thread
{
   //...
   if(sysconf(_SC_THREAD_THREADS_MAX) > 0){
      AvailableThreads = sysconf(_SC_THREAD_THREADS_MAX)
   }
   else{
          AvailableThreads = Default
   }

   int Count = 1;

   loop while(Request Queue is not empty)
      if(threadAvailability()){
         Count++
         decrementThreadAvailability()
         classify request
      switch(request type)
         {
            case X : pthread_create(&(Thread[Count])...taskX...)
                     break
            case Y : pthread_create(&(Thread[Count])...taskY...)
                     break
            case Z : pthread_create(&(Thread[Count])...taskZ...)
                     break
            //...
         }
      }
      else{
              //free up thread resources
      }
   end loop
}

void *taskX(void *X)
{
   // process X type request
   incrementThreadAvailability()
   return(NULL)
}

void *taskY(void *Y)
{
   // process Y type request
   incrementThreadAvailability()
   return(NULL)
}

void *taskZ(void *Z)
{
   // process Z type request
   incrementThreadAvailability()
   return(NULL)
}

//...

In Example 4.5, the boss thread dynamically creates a thread to process each new request that enters the system, but there is a maximum number of threads that will be created. There are n tasks to process n request types. To be sure the maximum number of threads per process is not exceeded, these additional functions can be defined:

threadAvailability()
incrementThreadAvailability()
decrementThreadAvailability()

Example 4.6 shows pseudocode for these functions.

Example 4.6 Functions that manage thread availability count.

void incrementThreadAvailability(void)
{
   //...
   pthread_mutex_lock(&Mutex)
   AvailableThreads++
   pthread_mutex_unlock(&Mutex)
}

void decrementThreadAvailability(void)
{
   //...
   pthread_mutex_lock(&Mutex)
   AvailableThreads--
   pthread_mutex_unlock(&Mutex)
}

int threadAvailability(void)
{
   //...
   int Available
   pthread_mutex_lock(&Mutex)
   Available = (AvailableThreads > 0)
   pthread_mutex_unlock(&Mutex)
   return Available
}

The threadAvailability() function returns 1 if the maximum number of threads allowed per process has not been reached. It accesses the global variable AvailableThreads, which stores the number of threads still available to the process. The boss thread calls decrementThreadAvailability(), which decrements the global variable, before it creates a new thread. Each worker thread calls incrementThreadAvailability(), which increments the global variable, before it exits. Both functions call pthread_mutex_lock() before accessing the variable and pthread_mutex_unlock() after accessing it. If the maximum number of threads is exceeded, the boss thread can cancel threads if possible or spawn another process if necessary. taskX(), taskY(), and taskZ() execute the code that processes each type of request.

The other approach to the delegation model is to have the boss thread create a pool of threads that are reassigned new requests instead of creating a new thread per request. The boss thread creates a number of threads during initialization and then each thread is suspended until a request is added to the queue. The boss thread will still contain an event loop to extract requests from the queue. But instead of creating a new thread per request, the boss thread signals the appropriate thread to process the request. Example 4.7 shows the boss thread and the worker threads in pseudocode for this approach to the delegation model.

Example 4.7 Approach 2: Skeleton program of boss and worker thread model.

//...

pthread_t Thread[N]

// boss thread
{

    pthread_create(&(Thread[1]...taskX...);
    pthread_create(&(Thread[2]...taskY...);
    pthread_create(&(Thread[3]...taskZ...);
    //...

    loop while(Request Queue is not empty)
       get request
       classify request
       switch(request type)
       {
           case X :
                    enqueue request to XQueue
                    signal Thread[1]
                    break

           case Y :
                    enqueue request to YQueue
                    signal Thread[2]
                    break

           case Z :
                    enqueue request to ZQueue
                    signal Thread[3]
                    break
           //...
       }

   end loop
}

void *taskX(void *X)
{
   loop
       suspend until awakened by boss
       loop while XQueue is not empty
          dequeue request
          process request

       end loop
   until done
}

void *taskY(void *Y)
{
   loop
       suspend until awakened by boss
       loop while YQueue is not empty
          dequeue request
          process request
       end loop
   until done
}

void *taskZ(void *Z)
{
   loop
       suspend until awakened by boss
       loop while (ZQueue is not empty)
          dequeue request
          process request
       end loop
   until done
}

//...

In Example 4.7, the boss thread creates N threads, one for each task to be executed. Each task is associated with processing a request type. In its event loop, the boss thread dequeues a request from the request queue, determines the request type, enqueues the request to the appropriate request queue, then signals the thread that processes requests in that queue. Each task function also contains an event loop. The thread is suspended until it receives a signal from the boss that there is a request in its queue. Once awakened, the thread's inner loop processes all the requests in its queue until the queue is empty.

4.11.2 Using the Peer-to-Peer Model

In the peer-to-peer model, a single thread initially creates all the threads needed to perform the tasks; these threads are called peers. The peer threads process requests from their own input stream. Example 4.8 shows a skeleton program of the peer-to-peer approach to dividing a program into threads.

Example 4.8 Skeleton program using the peer-to-peer model.

//...

pthread_t Thread[N]

// initial thread
{

    pthread_create(&(Thread[1]...taskX...);
    pthread_create(&(Thread[2]...taskY...);
    pthread_create(&(Thread[3]...taskZ...);
    //...

  }

void *taskX(void *X)
{
    loop while (Type XRequests are available)
          extract Request
          process request
    end loop
    return(NULL)
}

//...

In the peer-to-peer model, each thread is responsible for its own stream of input. The input can be extracted from a database, a file, and so on.

4.11.3 Using the Pipeline Model

In the pipeline model, there is a stream of input processed in stages. At each stage, work is performed on a unit of input by a thread. The input continues to move to each stage until the input has completed processing. This approach allows multiple inputs to be processed simultaneously. Each thread is responsible for producing its interim results or output, making them available to the next stage or next thread in the pipeline. Example 4.9 shows the skeleton program for the pipeline model.

Example 4.9 Skeleton program using the pipeline model.

//...

   pthread_t Thread[N]
   Queues[N]

   // initial thread
   {
       place all input into stage1's queue
       pthread_create(&(Thread[1]...stage1...);
       pthread_create(&(Thread[2]...stage2...);
       pthread_create(&(Thread[3]...stage3...);
       //...
    }

void *stageX(void *X)
{
   loop
     suspend until input unit is in queue
     loop while XQueue is not empty
         dequeue input unit
         process input unit
         enqueue input unit into next stage's queue
      end loop
   until done
   return(NULL)
}

//...

In Example 4.9, N queues are declared for N stages. The initial thread enqueues all the input into stage 1's queue and then creates the threads that execute each stage. Each stage has an event loop: the thread sleeps until an input unit has been enqueued, and its inner loop iterates until its queue is empty. Each input unit is dequeued, processed, and then enqueued into the queue of the next stage.

4.11.4 Using the Producer–Consumer Model

In the producer-consumer model, the producer thread produces data consumed by the consumer thread or threads. The data is stored in a block of memory shared between the producer and consumer threads. This model was used in Programs 4.5, 4.6, and 4.7. Example 4.10 shows the skeleton program for the producer-consumer model.

Example 4.10 Skeleton program using the producer–consumer model.

pthread_mutex_t Mutex = PTHREAD_MUTEX_INITIALIZER
pthread_t Thread[2]
Queue

// initial thread
{
    pthread_create(&(Thread[1]...producer...);
    pthread_create(&(Thread[2]...consumer...);
    //...
 }

void *producer(void *X)
{
   loop
      perform work
        pthread_mutex_lock(&Mutex)
         enqueue data
      pthread_mutex_unlock(&Mutex)
         signal consumer
      //...
   until done
}

void *consumer(void *X)
{
   loop
      suspend until signaled
      loop while(Data Queue not empty)
          pthread_mutex_lock(&Mutex)
           dequeue data
       pthread_mutex_unlock(&Mutex)
          perform work
      end loop
   until done
}

In Example 4.10, an initial thread creates the producer and consumer threads. The producer thread executes a loop in which it performs work, then locks a mutex on the shared queue in order to enqueue the data it has produced. The producer unlocks the mutex, then signals the consumer thread that there is data in the queue. The producer iterates through the loop until all work is done. The consumer thread also executes a loop, in which it suspends itself until it is signaled. In the inner loop, the consumer thread processes all the data until the queue is empty. It locks the mutex on the shared queue before it dequeues any data, unlocks the mutex after the data has been dequeued, and then performs work on that data. In Program 4.6, the consumer thread writes its results to a file, but the results could just as well have been inserted into another data structure. A consumer thread often plays both roles: it consumes the unprocessed data produced by the producer thread, then acts as a producer itself, enqueueing its processed results into another shared queue consumed by yet another thread.

4.11.5 Creating Multithreaded Objects

The delegation, peer-to-peer, pipeline, and producer–consumer models demonstrate approaches to dividing a program into multiple threads along functional lines. When using objects, member functions can create threads to perform multiple tasks. The threads can execute code on behalf of the object: free-floating functions or other member functions.

In either case, the threads are declared within the object and created by one of the member functions (e.g., the constructor). The threads can then execute free-floating functions (functions defined outside the object) that invoke the member functions of a global object. This is one approach to making an object multithreaded. Example 4.11 contains an example of a multithreaded object.

Example 4.11 Declaration and definition of a multithreaded object.

#include <pthread.h>
#include <iostream>
#include <unistd.h>

void *task1(void *);
void *task2(void *);

class multithreaded_object
{
   pthread_t Thread1,Thread2;
public:

   multithreaded_object(void);
   int c1(void);
   int c2(void);
   //...
};

multithreaded_object::multithreaded_object(void)
{

   //...
   pthread_create(&Thread1,NULL,task1,NULL);
   pthread_create(&Thread2,NULL,task2,NULL);
   pthread_join(Thread1,NULL);
   pthread_join(Thread2,NULL);
   //...

}

int multithreaded_object::c1(void)
{
   // do work
   return(1);
}

int multithreaded_object::c2(void)
{
   // do work
   return(1);
}

multithreaded_object MObj;

void *task1(void *)
{
   //...
   MObj.c1();
   return(NULL);
}

void *task2(void *)
{
   //...
   MObj.c2();
   return(NULL);
}

In Example 4.11, the class multithreaded_object declares two threads. The threads are created and joined in the constructor of the class. Thread1 executes task1 and Thread2 executes task2. task1 and task2 in turn invoke member functions of the global object MObj.
