
1.4 Fundamentals of Improving a Product, Service, or Process

Process Basics (Voice of the Process [VoP])

Definition of a Process

A process is a collection of interacting components that transform inputs into outputs toward a common aim, called a mission statement. The job of management is to optimize the entire process toward its aim. This may require the sub-optimization of selected components of the process. Sometimes a particular department in an organization may have to give up resources in the short run to another department to maximize profit for the overall organization. This is particularly true when one department expends effort to correct the failings or omissions of another department working on the same process. Inspection, signature approvals, rework areas, complaint-resolution areas, etc., are all evidence that the process was not done effectively and efficiently the first time. The resources consumed in correcting these failings and omissions would have been avoided if the process had been done "right" the first time.

The transformation, as shown in Figure 1.1, involves the addition or creation of value in one of three aspects: time, place, or form. An output has "time value" if it is available when needed by a user. For example, you have food when you are hungry. Or material inputs are ready on schedule. An output has "place value" if it is available where needed by a user. For example, gas is in your tank (not in an oil field), or wood chips are in a paper mill. An output has "form value" if it is available in the form needed by a user. For example, bread is sliced so it can fit in a toaster, or paper has three holes so it can be placed in a binder.

Figure 1.1

Figure 1.1 Basic Process

Processes exist in all facets of organizations, and our understanding of them is crucial. Many people mistakenly think only of production processes. However, administration, sales, service, human resources, training, maintenance, paper flows, interdepartmental communication, and vendor relations are all processes. Importantly, relationships between people are processes. Most processes can be studied, documented, defined, improved, and innovated.

An example of a generic assembly process is shown in Figure 1.2. The inputs (component parts, machines, and operators) are transformed in the process to make the outputs (assembled product).

Figure 1.2

Figure 1.2 Production Process

An organization is a multiplicity of micro sub-processes, all synergistically building to the macro process of that organization. All processes have customers and suppliers; these customers and suppliers can be internal or external to the organization. A customer can be an end user or the next operation downstream. The customer does not even have to be a human; it could be a machine. A supplier could be another organization supplying sub-assemblies or services, or the prior operation upstream.

Variation in a Process

The outputs from all processes and their component parts may be measured; the measurements invariably fluctuate over time, creating a distribution of measurements. The distribution of measurements of the outputs from a process over time is called the "Voice of the Process (VoP)." Consider a process such as getting ready for work or for class in the morning. Some days you are busier than usual, while on other days you have less to do than usual. Your process varies from day to day to some degree. This is common variation. However, if a construction project begins on the highway you take to work or school, you might drastically alter your morning routine. This would be special variation because it would have been caused by a change external to your "driving to work or school" process. If the traffic patterns had remained as they were, your process would have continued on its former path of common variation.

The design and execution of a process creates common causes of variation. In other words, common variation is due to the process itself. Process capability is determined by inherent common causes of variation, such as hiring, training, or supervisory practices; inadequate lighting; stress; management style; policies and procedures; or design of products or services. Employees working within the process cannot control a common cause of variation and should not be held accountable for, or penalized for, its outcomes. Process owners (management) must realize that unless a change is made in the process (which only they can make), the capability of the process will remain the same. Special causes of variation are due to events external to the usual functioning of the process. New raw materials, a drunken employee, or a new operator can be examples of special causes of variation. Identifying the occurrence of special and common causes of variation is discussed extensively in References 2 and 3.

Because unit-to-unit variation decreases the customer’s ability to rely on the dependability and uniformity of the outputs of a process, managers must understand how to reduce and control variation. Employees use statistical methods so that common and special causes of variation can be differentiated; special variation can be resolved and common variation can be reduced by management action, resulting in improvement and innovation of the outputs of a process.
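The distinction between common and special variation can be sketched in a few lines of code. The sketch below is hypothetical: the process mean, standard deviation, and measurements are invented for illustration, and the usual 3-standard-deviation rule is used to flag points worth investigating as special causes.

```python
# Hypothetical sketch: flagging likely special causes with 3-sigma limits.
# The process parameters and the measurements are invented for illustration.

def control_limits(mean, sd, k=3):
    """Return (lower, upper) limits at mean +/- k standard deviations."""
    return mean - k * sd, mean + k * sd

def flag_special_causes(measurements, mean, sd):
    """Return the measurements that fall outside the 3-sigma limits.

    Points inside the limits are treated as common variation, due to the
    process itself; points outside are signals worth investigating as
    special causes, due to events external to the process.
    """
    lo, hi = control_limits(mean, sd)
    return [y for y in measurements if y < lo or y > hi]

# A stable process averaging 7 with a standard deviation of 1,
# plus one disrupted value (12.0) caused by an external event.
data = [6.8, 7.2, 7.0, 6.5, 7.4, 8.1, 6.9, 12.0, 7.1, 6.7]
print(flag_special_causes(data, mean=7, sd=1))  # only 12.0 is outside 4..10
```

All of the ordinary day-to-day fluctuation stays inside the limits; only the externally caused disruption is flagged for investigation.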

The following fictionalized case history demonstrates the need for management to understand the difference between common and special causes of variation to take appropriate action. In this case history, an employee comes to work intoxicated. His behavior causes productivity, safety, and morale problems. You, as the supervisor, speak to the employee privately, try to resolve the situation, and send the employee home with pay. After a second instance of intoxication, you speak to the employee privately, try to resolve the problem again, and send the employee home without pay. A third instance causes you to refer the employee to an Employee Assistance Program. A fourth offense results in you terminating the employee. As a good manager, you document the employee’s history to create a paper trail in case of legal action. All of the above is necessary and is considered to be good management practice.

The thought process behind the preceding managerial actions assumes that the employee is the problem. In other words, you view the employee’s behavior as the special cause of variation from the desired sober state. However, this is true only if there is a statistically significant difference between the employee in question and all other employees. If the employee’s behavior is part of a process that allows such behavior to exist, then the problem is not a special cause, but rather a common cause, and it requires a different solution. In that case, the intoxicated employee must still be dealt with as before; but, additionally, organizational policies and procedures (processes) must be changed to prevent future incidents of intoxication. This view requires a shift in thought: management must look beyond the individual to the process that allowed the behavior to occur on the job.

Feedback Loops

An important aspect of any process is a feedback loop. A feedback loop relates information about outputs from any stage(s) back to other stage(s) to make an analysis of the process. Figure 1.3 depicts the feedback loop in relation to a basic process.

Figure 1.3

Figure 1.3 Feedback Loop

The tools and methods discussed in this book provide vehicles for relating information about outputs to other stage(s) in the process. Decision making about processes is aided by the transmission of this information. A major purpose of quality management is to provide the information (flowing through a feedback loop) needed to take action with respect to a process.

There are three feedback loop situations: no feedback loop, special cause only feedback loop, and special and common cause feedback loop. A process that does not have a feedback loop is probably doomed to deterioration and decay due to the inability of its stakeholders to rejuvenate and improve it based on data from its outputs. An example of a process without a feedback loop is a relationship between two people (manager and subordinate, husband and wife, or buyer and seller) that contains no vehicle (feedback loop) to discuss issues and problems with the intention of establishing a better relationship in the future. A process in which all feedback information is treated as a special cause will exhibit enormous variation in its output. An example of a process with a special cause only feedback loop could be a relationship between two people; but in this case, the relationship deteriorates through a cycle of successive overreactions to problems that are perceived as special by both members of the relationship. In fact, the problems are probably repetitive in nature due to the structure of the relationship itself and to common causes of variation. Finally, in a process in which feedback information is separated into common and special causes—special causes are resolved and common causes are reduced—products, services, or processes will exhibit continuous improvement of their output. For example, the relationship problems between a superior and a subordinate can be classified as either due to special and/or common causes; statistical methods are used to resolve special causes and to remove common causes, thereby improving the relationship in the future.

Consider the following example. Paul is a 40-year-old, mid-level manager who is unhappy because he wants his boss to give him a promotion. He thinks about his relationship with his boss and wonders what went wrong. He determines that over a period of 10 years, he has had about 40 disagreements with his boss (one per quarter).

Paul thinks about what caused each disagreement. Initially, he thought each disagreement had its own special cause. After studying the pattern of the number of disagreements per year, Paul discovered that it was a stable and predictable process of common causes of variation. Subsequently, he wrote down the reason for as many of the disagreements as he could remember (about 30). However, after thinking about his relationship with his boss from the perspective of common causes, he realized his disagreements with his boss were not unique events (special causes); rather, they were a repetitive process, and the reasons for the disagreements could be classified into common cause categories. He was surprised to see that the 30 reasons collapse down to four basic reasons—poor communication of a work issue, a process failure causing work not to be completed on schedule, unexcused absence, and pay-related issues—with one reason, poor communication of a work issue, accounting for 75% of all disagreements. Armed with this insight, he scheduled a discussion with his boss to find a solution to their communication problems. His boss explained that he hates the e-mails that Paul is always sending him and wished Paul would just talk to him and say what is on his mind. They resolved their problem; their relationship was greatly improved, and, eventually, Paul received his promotion.
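Paul's shift from treating each disagreement as unique to classifying them into common cause categories is a small Pareto analysis. The sketch below uses hypothetical tallies: the four category names come from the story, but the counts are invented so that poor communication dominates roughly as described.

```python
from collections import Counter

# Hypothetical tallies for Paul's ~30 remembered disagreements; the
# category names come from the text, but the counts are invented.
reasons = (["poor communication of a work issue"] * 22
           + ["work not completed on schedule"] * 4
           + ["unexcused absence"] * 2
           + ["pay-related issues"] * 2)

counts = Counter(reasons)
top_reason, top_count = counts.most_common(1)[0]
share = top_count / len(reasons)
print(top_reason, f"{share:.0%}")
```

Tallying by category rather than by incident is what reveals that one common cause accounts for the bulk of the disagreements, which is where an improvement effort should start.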

Definition of Quality (Voice of the Customer [VoC])

Goal Post View of Quality

Quality is a concept whose definition has changed over time. In the past, quality meant "conformance to valid customer requirements." That is, as long as an output fell within acceptable limits (called specification limits) around a desired value or target value (also called the nominal value, denoted by "m"), it was deemed conforming, good, or acceptable. We refer to this as the "goal post" definition of quality. The nominal value and specification limits are set based on the perceived needs and wants of customers. Specification limits are called the Voice of the Customer. Figure 1.4 shows the "goal post" view of losses arising from deviations from the nominal value. That is, losses are minimal until the lower specification limit (LSL) or upper specification limit (USL) is reached. Then, suddenly, losses become positive and constant, regardless of the magnitude of the deviation from the nominal value.

Figure 1.4

Figure 1.4 Goal Post View of Losses Arising from Deviations from Nominal

An individual unit of product or service is considered to conform to a specification if it is at or inside the boundary (USL or LSL) or boundaries (USL and LSL). Individual unit specifications are made up of a nominal value and an acceptable tolerance from the nominal. The nominal value is the desired value for process performance mandated by the customer’s needs and/or wants. The tolerance is an allowable departure from a nominal value established by designers that is deemed non-harmful to the desired functioning of the product or service. Specification limits are the boundaries created by adding and/or subtracting tolerances from a nominal value; for example:

USL = upper specification limit = nominal + tolerance

LSL = lower specification limit = nominal – tolerance

A service example of the goal post view of quality and specification limits can be seen in a monthly accounting report that must be completed in 7 days (nominal), no earlier than 4 days (lower specification limit—not all the necessary data will be available), and no later than 10 days (upper specification limit—the due date for the report at the board meeting). Therefore the "Voice of the Customer" is that the report must be completed ideally in 7 days, but no sooner than 4 days or no later than 10 days.

Another example of the goal post view of quality and specification limits is to insert a medical device into the chest of a patient that is 25 mm in diameter (the nominal value). A tolerance of 5 mm above or below the nominal value (25 mm) is acceptable to the surgeon performing the operation. Thus, if a medical device’s diameter measures between 20 mm and 30 mm (inclusive), it is deemed conforming to specifications. It does not matter if the medical device is 21 mm or 29 mm; they are both conforming units. If a medical device’s diameter measures less than 20 mm or more than 30 mm, it is deemed as not conforming to specifications and is scrapped at a cost of $1,000.00 per device. Therefore, the "Voice of the Customer" states that the diameters of the medical devices must be between 20 mm and 30 mm, inclusive, with an ideal diameter of 25 mm.
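The goal post rule is mechanical enough to state directly in code. This minimal sketch implements the USL/LSL formulas above and the inclusive conformance check from the medical device example; the function names are invented for illustration.

```python
def spec_limits(nominal, tolerance):
    """Return (LSL, USL): nominal - tolerance and nominal + tolerance."""
    return nominal - tolerance, nominal + tolerance

def conforms(y, nominal, tolerance):
    """Goal post view: a unit conforms if it is at or inside the limits."""
    lsl, usl = spec_limits(nominal, tolerance)
    return lsl <= y <= usl

# Medical device example: nominal 25 mm, tolerance 5 mm.
print(conforms(21, 25, 5))    # True: conforming, treated the same as 29
print(conforms(19.5, 25, 5))  # False: scrapped at $1,000
```

Note that under this view a 21 mm device and a 25 mm device are treated identically; the continuous improvement view in the next section rejects exactly that equivalence.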

In this section, you assumed that there is a reasonable target from which deviations on either side are possible. For situations in which there is only one specification limit—such as time to deliver mail in hours, with the target of 0 hours and an upper specification limit of 5 days—the objective is not to exceed the upper specification, and to deliver the mail on a very consistent basis (little variation) to create a highly predictable mail delivery process. In other words, whether there are two-sided specifications or a one-sided specification, the goal is to have increased consistency, implying minimal variation in performance and, thus, increased predictability and reliability of outcomes.

Continuous Improvement View of Quality

A more modern definition of quality states that: "Quality is a predictable degree of uniformity and dependability, at low cost and suited to the market" [see Reference 1]. Figure 1.5 shows a more realistic loss curve in which losses begin to accumulate as soon as a quality characteristic of a product or service deviates from the nominal value. As with the "goal post" view of quality, once the specification limits are reached, the loss suddenly becomes positive and constant, regardless of the deviation from the nominal value beyond the specification limits.

The continuous improvement view of quality was developed by Genichi Taguchi [see Reference 10]. The Taguchi Loss Function, called the Loss curve in Figure 1.5, expresses the loss of deviating from the nominal within specifications: the left-hand vertical axis is "loss" and the horizontal axis is the measure, y, of a quality characteristic. The loss associated with deviating (y – m) units from the nominal value, m, is:

L(y) = k(y – m)² = Taguchi Loss Function (1.1)

where

y = the value of the quality characteristic for a particular item of product or service.

m = the nominal value for the quality characteristic.

k = a constant, A/d².

A = the loss (cost) of exceeding specification limits (e.g., the cost to scrap a unit of output).

d = the allowable tolerance from the nominal value that is used to determine specification limits.

Figure 1.5

Figure 1.5 Continuous Improvement View of Losses of Deviations from Nominal

Under this Taguchi Loss Function, the continuous reduction of unit-to-unit variation around the nominal value is the most economical course of action, absent capital investment (more on this later). In Figure 1.5, the right-hand vertical axis is "Probability" and the horizontal axis is the measure, y, of a quality characteristic. The distribution of output from a process before improvement is shown in Curve A, while the distribution of output after improvement is shown in Curve B. The losses incurred from unit-to-unit variation before process improvement (the lined area under the loss curve for Distribution A) are greater than the losses incurred from unit-to-unit variation after process improvement (the hatched area under the loss curve for Distribution B). This definition of quality promotes continual reduction of unit-to-unit variation (uniformity) of output around the nominal value, absent capital investment. If capital investment is required, then an analysis must be conducted to determine if the benefit of the reduction in variation in the process justifies the cost. The capital investment for a process improvement should not exceed the loss it eliminates: the portion of the area under the Taguchi Loss Function for Curve A that is not shared by Curve B in Figure 1.5. This modern definition of quality implies that the Voice of the Process should take up a smaller and smaller portion of the Voice of the Customer (specifications) over time, rather than just being inside of the specification limits. The logic here is that there is a loss associated with products or services that deviate from the nominal value, even when they conform to specifications.

To illustrate the continuous improvement definition of quality, return to the example of the medical device that is to be inserted into a patient’s chest. Every millimeter higher or lower than 25 mm causes a loss that can be expressed by the following Taguchi Loss Function:

L(y) = k(y – m)² = (A/d²)(y – m)² = ($1,000/5²)(y – 25 mm)² = 40(y – 25 mm)²

if 20 ≤ y ≤ 30

L(y) = $1,000 if y < 20 or y > 30

Table 1.1 shows the values of L(y) for values of the quality characteristic (diameter of the medical device).

Table 1.1 Loss Arising from Deviations in Diameters of the Medical Device

Diameter of the Medical Device (y)    Value of Taguchi Loss Function (L[y])

18    1,000
19    1,000
20    1,000
21      640
22      360
23      160
24       40
25        0
26       40
27      160
28      360
29      640
30    1,000
31    1,000
32    1,000


Under the loss curve shown in Table 1.1, it is always economical to continuously reduce the unit-to-unit variation in the diameter of medical devices, absent capital investment. This will minimize the loss of surgically inserting medical devices.
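The loss column of Table 1.1 can be reproduced with a short function implementing Equation 1.1 and its piecewise extension outside the specifications. The parameters match the medical device example; the function name is invented for illustration.

```python
def taguchi_loss(y, nominal=25, tolerance=5, scrap_cost=1000):
    """Piecewise loss for the medical device example: k(y - m)^2 inside
    the specification limits, the scrap cost outside them, with k = A/d^2."""
    k = scrap_cost / tolerance ** 2  # k = $1,000 / 5^2 = 40
    if nominal - tolerance <= y <= nominal + tolerance:
        return k * (y - nominal) ** 2
    return scrap_cost

# Reproduce the loss column of Table 1.1 for diameters 18 through 32 mm.
for y in range(18, 33):
    print(y, taguchi_loss(y))
```

Running the loop prints exactly the values in Table 1.1: zero loss at the nominal 25 mm, a quadratically growing loss as the diameter deviates, and a constant $1,000 loss at and beyond the specification limits.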

If a Taguchi Loss Function has only one specification limit, such as an upper specification limit, the preceding discussion applies without loss of generality. For example, if in the opinion of customers, 30 seconds is the maximum acceptable time to answer phone calls at a customer call center and the desired time is 0 seconds, any positive deviation will result in loss to the customer. Moreover, the greater the process variation (above the nominal time of 0), the greater the loss to the customer. In the case where there is no natural nominal value (e.g., 0 seconds), deviation between the process average and the desired time results in a process bias. The loss function can be used to show in these cases that the loss is a function of the bias squared plus the process variation. This implies that the goal is to eliminate the bias (i.e., move the process average toward the desired time) and to reduce process variation. For example, customer call centers not only wish to reduce their time to answer phone calls from their customers, but they want to have uniformly short answer times. Why? When management determines staffing requirements for the customer call center, it needs to be able to have enough staff to meet its specification for time-to-answer. The more variation in the time-to-answer per call, the more unpredictable the process, and the less confidence management will have in its staffing model. Management may actually overstaff to ensure it meets its specifications. This introduces more cost to the customer call center, which is indirectly passed on to the customer.
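The claim that loss is a function of the bias squared plus the process variation can be checked directly: for the quadratic loss L(y) = k(y – m)², the expected loss is k(bias² + variance). The sketch below verifies the analytic identity with a Monte Carlo estimate; the call-center-style numbers are invented, and the saturation of loss outside the specification limits is ignored for simplicity.

```python
import random

def expected_quadratic_loss(k, mean, sd, target):
    """Analytic expected loss for L(y) = k(y - target)^2:
    E[L] = k * (bias^2 + variance), where bias = mean - target."""
    bias = mean - target
    return k * (bias ** 2 + sd ** 2)

# Hypothetical call-center numbers: target of 0 seconds to answer,
# process average 20 seconds, standard deviation 5 seconds, k = 1.
analytic = expected_quadratic_loss(k=1, mean=20, sd=5, target=0)
print(analytic)  # 425 = 20^2 + 5^2

# Monte Carlo check of the same quantity (ignores loss saturation).
random.seed(1)
samples = [random.gauss(20, 5) for _ in range(200_000)]
mc = sum(y ** 2 for y in samples) / len(samples)
print(round(mc))  # close to 425
```

The decomposition makes the managerial point explicit: loss falls both by moving the average toward the target (eliminating bias) and by reducing the standard deviation (reducing variation), which is why the call center wants uniformly short answer times, not just a short average.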

Definitions of Six Sigma Management (Relationship Between VoC and VoP)

Non-Technical Definitions of Six Sigma Management

Six Sigma management is the relentless and rigorous pursuit of the reduction of variation in all critical processes to achieve continuous and breakthrough improvements that impact the bottom line and/or top line of the organization and increase customer satisfaction. Another common definition is that Six Sigma management is an organizational initiative designed to create manufacturing, service, and administrative processes that produce a high rate of sustained improvement in both defect reduction and cycle time (e.g., when Motorola began its effort, the rate it chose was a 10-fold reduction in defects in two years, along with a 50% reduction in cycle time). For example, a bank takes an average of 60 days to process a loan with a 10% rework rate in 2004. In a Six Sigma organization, the bank should take no longer than an average of 30 days to process a loan with a 1% error rate in 2006, and no more than an average of 15 days to process a loan with a 0.10% error rate by 2008. Clearly, this requires a dramatically improved/innovated loan process.
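The bank's improvement trajectory follows mechanically from the stated rates: a 10-fold defect reduction and a 50% cycle-time reduction every two years. A minimal sketch of that arithmetic:

```python
# Improvement trajectory implied by the stated rates: defects fall
# 10-fold and cycle time falls 50% every two years.
defect_rate, cycle_days = 0.10, 60  # the bank's 2004 baseline
for year in (2006, 2008):
    defect_rate /= 10
    cycle_days /= 2
    print(year, f"{cycle_days:.0f} days", f"{defect_rate:.2%}")
```

This reproduces the figures in the example: 30 days at 1% by 2006, and 15 days at 0.10% by 2008.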

Technical Definitions of Six Sigma Management

The Normal Distribution. The term Six Sigma is derived from the normal distribution used in statistics. Many observable phenomena can be graphically represented as a bell-shaped curve or a normal distribution [see Reference 3], as illustrated in Figure 1.6.

Figure 1.6

Figure 1.6 Normal Distribution with Mean (μ) and Standard Deviation (σ)

When measuring any process, its outputs (services or products) vary in size, shape, look, feel, or any other measurable characteristic. The typical value of the output of a process is measured by a statistic called the mean or average. The variability of the output of a process is measured by a statistic called the standard deviation. In a normal distribution, the interval created by the mean plus or minus 2 standard deviations contains 95.44% of the data values; 45,600 data values per million are outside of the area created by the mean plus or minus 2 standard deviations (45,600 = 1,000,000 x [4.56% = 100% – 95.44%]). In a normal distribution, the interval created by the mean plus or minus 3 standard deviations contains 99.73% of the data; 2,700 defects per million opportunities are outside of the area created by the mean plus or minus 3 standard deviations (2,700 = 1,000,000 x [0.27% = 100% – 99.73%]). In a normal distribution, the interval created by the mean plus or minus 6 standard deviations contains 99.9999998% of the data; 2 data values per billion are outside of the area created by the mean plus or minus 6 standard deviations (2 = 1,000,000,000 x [0.0000002% = 100% – 99.9999998%]).
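These coverage figures can be checked with the error function from the standard library; no statistical package is needed. The fraction of a normal distribution within k standard deviations of the mean is erf(k/√2).

```python
from math import erf, sqrt

def coverage(k):
    """Fraction of a normal distribution within mean +/- k standard deviations."""
    return erf(k / sqrt(2))

def outside_per_million(k):
    """Data values per million outside mean +/- k standard deviations."""
    return (1 - coverage(k)) * 1_000_000

# k = 2: ~95.45% coverage, ~45,500 per million outside (the 45,600
#   quoted in the text comes from rounding coverage to 95.44% first).
# k = 3: ~99.73% coverage, ~2,700 per million outside.
# k = 6: ~99.9999998% coverage, ~2 per billion outside.
for k in (2, 3, 6):
    print(k, f"{coverage(k):.10%}", outside_per_million(k))
```

The k = 6 case is the origin of the "two per billion" figure for a centered Six Sigma process; the 3.4 defects per million figure quoted later arises only after the 1.5 standard deviation shift is introduced.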

Relationship Between VoP and VoC. Six Sigma promotes the idea that the distribution of output for a stable normally distributed process (Voice of the Process) should be designed to take up no more than half of the tolerance allowed by the specification limits (Voice of the Customer). Although processes may be designed to be at their best, you assume that the processes may increase in variation over time. This increase in variation may be due to small variations in process inputs, the way the process is monitored, changing conditions, etc. The increase in process variation is often assumed to be similar to temporary shifts in the underlying process mean. In practice, the increase in process variation has been shown to be equivalent to an average shift of 1.5 standard deviations in the originally designed and monitored process. If a process is originally designed to be twice as good as a customer demands (i.e., the specifications representing the customer requirements are 6 standard deviations from the process target), then even with a shift in the Voice of the Process, the customer demands are likely to be met. In fact, even if the process mean shifted off target by 1.5 standard deviations, there are 4.5 standard deviations between the process mean and the closest specification limit, resulting in no more than 3.4 defects per million opportunities (dpmo). In the 1980s, Motorola demonstrated that in practice, a 1.5 standard deviation shift was what was observed as the equivalent increase in process variation for many processes that were benchmarked.

Figure 1.7 shows the "Voice of the Process" for an accounting function with an average of 7 days, a standard deviation of 1 day, and a stable normal distribution. It also shows a nominal value of 7 days, a lower specification limit of 4 days, and an upper specification limit of 10 days. The accounting function is referred to as a 3-sigma process because the process mean plus or minus 3 standard deviations is equal to the specification limits; in other terms, USL = μ + 3σ and LSL = μ – 3σ. This scenario will yield 2,700 defects per million opportunities, or one early or late monthly report in 30.86 years [(1/0.0027)/12].

Figure 1.7

Figure 1.7 Three Sigma Process with 0.0 Shift in the Mean

Figure 1.8 shows the same scenario as in Figure 1.7, but the process average shifts by 1.5 standard deviations (the process average is shifted down or up by 1.5 standard deviations [or 1.5 days] from 7.0 days to 5.5 days or 8.5 days) over time. This is not an uncommon phenomenon. The 1.5 standard deviation shift in the mean results in 66,807 defects per million opportunities at the nearest specification limit, or one early or late monthly report in 1.25 years [(1/0.066807)/12], if the process average moves from 7.0 days to 5.5 days or from 7.0 days to 8.5 days. In this discussion, only the observations outside the specification nearest the average are considered.

Figure 1.8

Figure 1.8 Three Sigma Process with a 1.5-Sigma Shift in the Mean

Figure 1.9 shows the same scenario as Figure 1.7, but the Voice of the Process takes up only half the distance between the specification limits. The process mean remains the same as in Figure 1.7, but the process standard deviation has been reduced to one half-day through application of process improvement. In this case, the resulting output will exhibit two defects per billion opportunities, or one early or late monthly report in 41,666,667 years [(1/0.000000002)/12].

Figure 1.10 shows the same scenario as Figure 1.9, but the process average shifts by 1.5 standard deviations (the process average is shifted down or up by 1.5 standard deviations [or 0.75 days = 1.5 x 0.5 days] from 7.0 days to 6.25 days or 7.75 days) over time. The 1.5 standard deviation shift in the mean results in 3.4 defects per million opportunities at the nearest specification limit, or one early or late monthly report in 24,510 years [(1/0.0000034)/12]. This is the definition of 6-sigma level of quality.
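The defect rates quoted for Figures 1.7 through 1.10 all follow from the normal distribution. The sketch below computes defects per million at the nearest specification limit for a process whose specifications sit a given number of standard deviations from the designed mean, after the mean drifts by a given shift; the function names are invented for illustration.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def dpmo(sigma_level, shift=1.5):
    """Defects per million at the nearest specification limit, for specs
    sigma_level standard deviations from the designed mean, after the
    mean drifts by `shift` standard deviations toward that limit."""
    return (1 - phi(sigma_level - shift)) * 1_000_000

print(round(dpmo(3, shift=0.0)))  # ~1,350 per tail; both tails give the
                                  # 2,700 dpmo cited for Figure 1.7
print(round(dpmo(3)))             # 66,807 dpmo (Figure 1.8)
print(round(dpmo(6), 1))          # 3.4 dpmo (Figure 1.10)
```

A 3-sigma process with a 1.5-sigma drift has only 1.5 standard deviations of headroom at the nearest limit, hence 66,807 dpmo; a 6-sigma process keeps 4.5 standard deviations of headroom even after drifting, hence 3.4 dpmo.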

Another Look at the 1.5-Sigma Shift in the Mean. The engineer responsible for creating the concept of Six Sigma at Motorola was Bill Smith. Bill Smith indicated that product failures in the field were shown to be statistically related to the number of product reworks and defect rates observed in production. Therefore, the more "defect and rework free" a product was during production, the more likely there would be fewer field failures and customer complaints. Additionally, Motorola had a very strong emphasis on total cycle time reduction. A business process that takes more steps to complete its cycle increases the chance for changes/unforeseen events, and the opportunity for defects. Therefore, reducing cycle time is best accomplished by streamlining the process, removing non-value added effort, and as a result, reducing the opportunities for making mistakes (defects). What a concept! Reducing cycle time by simplifying a process will result in fewer defects, lower remediation/warranty/service costs, and ultimately increased customer satisfaction with the results. This last concept is not new to those who are familiar with Toyota production system concepts, Just-In-Time philosophy, or what many call "Lean Thinking." Six Sigma practitioners concern themselves with reducing the defect or failure rate while Lean practitioners concern themselves with streamlining processes and reducing cycle time. Defect reduction and lean thinking are "flip sides" of the "same coin." The integrated strategy of considering both sides at the same time was the basis of the original work in Six Sigma.

Figure 1.9

Figure 1.9 Six Sigma Process with a 0.0 Shift in the Mean

Figure 1.10

Figure 1.10 Six Sigma Process with 1.5-Sigma Shift in the Mean

Some proof of this was gained in the period from 1981 to 1986 when Bob Galvin (CEO of Motorola) set a goal of a tenfold improvement in defect rates over those five years. During those five years, positive results were demonstrated in field failures and warranty costs. However, some of Motorola’s key competitors improved at a faster rate. In 1987, Motorola indicated it would increase the rate of improvement to tenfold improvement every two years rather than five years. What was the target? The target was called Six Sigma quality (which was defined to be 3.4 defects per million opportunities) by 1992.

Of course, the key question was whether there was a tradeoff between reducing defect rates and implementation cost. Bill Smith and others were not advocating increasing costs by increasing inspection, but rather that engineers design products and production processes so that there would be little chance for mistakes/defects during production and customer usage. The focus was on the upstream X variables that would be indicators of future performance and process problems that were observed. The Y variables were the downstream defect rates, rework rates, and field failures that were observed and measured.

Motorola’s strict focus on the rate of improvement challenged engineering, supply management, and production to develop materials, production equipment, and products that were more robust to variation, and as a result, less sensitive to processing variation. Hence, the focus was on understanding the X variables.

What is interesting about the preceding two paragraphs is that often the initial focus of Statistical Process Control (SPC) was limited to monitoring Y variables or average/target values of process variables. Six Sigma did not really change the tools, but instead focused the tools on their usage upstream on X variables; in particular, on understanding the relationship of the variation in the X variables on the variation of the Y variables, and finally, using the tools in such a sequence as to uncover the relationships and be able to improve and control the results of the Y variables.

Studies did show that Bill Smith’s insights were valid: defects per million opportunities (dpmo) and defects per unit (dpu) measures calculated in production facilities did predict field performance, customer complaints, and warranty costs. Therefore, dpmo and dpu became metrics of emphasis at Motorola.

Around the same time that these studies were done, employees at Motorola gathered empirical evidence that even when the Y variables were in statistical control, the X variables might not be. Additionally, SPC as practiced in many operations was more of a monitoring method on the Y variables, with the only "out of control" indicator being a point beyond a control limit. Consequently, run tests1 were not used as indicators of "out of control." Empirical evidence indicated that a process could shift by as much as 2 standard deviations within the 3-sigma control limits and stay there for some run of points before a point outside 3 standard deviations was observed. In fact, if a process with stable variation shifts 1.5 standard deviations, an average run of 16 points would be observed before one point fell beyond the 3 standard deviation control limits.
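The run-length claim above can be checked with a small Monte Carlo simulation. The sketch below (Python, standard library only; the function name and the choice of an individuals chart with a known in-control mean of 0 and standard deviation of 1 are our illustrative assumptions, not details from the text) simulates a stable process whose mean has shifted and counts points until one falls beyond the 3-sigma limits:

```python
import random


def average_run_length(shift_sigmas, trials=20000, seed=1):
    """Estimate the average number of plotted points before a point falls
    beyond the 3-sigma control limits, for a process whose mean has shifted
    by `shift_sigmas` standard deviations (in-control mean 0, sigma 1)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        run = 0
        while True:
            run += 1
            x = rng.gauss(shift_sigmas, 1.0)  # shifted but otherwise stable process
            if abs(x) > 3.0:                  # the only out-of-control rule checked
                break
        total += run
    return total / trials


# A sustained 1.5-sigma shift typically goes undetected for about 15
# points on average, the same order of magnitude as the run of 16
# quoted in the text.
print(average_run_length(1.5))
```

A centered process (`shift_sigmas=0.0`) gives an average run of roughly 370 points between false alarms, which is why a point beyond the limits is treated as a strong signal.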

In addition to dpmo and dpu measures, Motorola was also concerned about upstream X variables that could be measured (rather than attribute variables). To control measurement data, a focus on means (i.e., targets) and spreads (i.e., standard deviations) was needed. If the Voice of the Process (VoP) is equal to the Voice of the Customer (VoC), the process’s mean output plus or minus 3 standard deviations equals the specification limits, and about 0.27% of the process output is "defective" given a normal distribution. If SPC were utilized to track that variable and the mean shifted halfway to the control limits (assuming an individuals and moving range control chart, as discussed in References 2 and 3), then there could be an average run of 16 observations before a point beyond a control limit would be noted. Another way of saying this is that dpmo could increase from 2,700 to 66,807 with no points beyond a control limit. If various run tests were conducted, the shift in the mean would be detected; but in practice, production personnel rarely shut down a process for failure of a run test if no points are outside the control limits.
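The dpmo figures above follow directly from normal tail areas. A minimal check (Python, standard library only; the helper name `phi` is ours), assuming a normal distribution, specification limits at plus or minus 3 standard deviations, and the usual sigma-level convention of counting only the tail beyond the nearest specification limit for the shifted cases:

```python
import math


def phi(z):
    """Standard normal cumulative distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


# Centered process, specs at +/- 3 sigma: both tails are defective.
centered = 1e6 * 2.0 * (1.0 - phi(3.0))

# Mean shifted 1.5 sigma toward the upper spec: sigma-level tables
# conventionally count only the tail at the nearest specification limit.
shifted_3sigma = 1e6 * (1.0 - phi(3.0 - 1.5))
shifted_6sigma = 1e6 * (1.0 - phi(6.0 - 1.5))

print(round(centered))           # -> 2700 dpmo
print(round(shifted_3sigma))     # -> 66807 dpmo
print(round(shifted_6sigma, 1))  # -> 3.4 dpmo
```

These reproduce the figures in the text: about 2,700 dpmo for the centered 3-sigma process, 66,807 dpmo after a 1.5-sigma shift, and 3.4 dpmo for a 6-sigma process with the same shift.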

So, why does Six Sigma often reference a 1.5 standard deviation shift in the mean of a process? Studies of various production lines at Motorola showed that even in a state of control, where the only out-of-control condition checked was an observation outside the 3 standard deviation control limits, there often were uncontrolled shifts of between 1 and 2 standard deviations. For example, for some manual assembly processes, the shift averaged between 1.2 and 1.8 standard deviations at the time an out-of-control observation was recorded. Of course, for automated processes, this degree of shift is frequently not allowed.

A statistical purist would argue that the genesis of the sigma metric is flawed because it is based on a shift factor. The engineers viewed the metric as a worst-case dpmo for a process because they assumed that any shift factor significantly larger than 1.5 would be caught by the common usage of statistical process control (a point beyond a 3-sigma control limit). If the shift is less than 1.5 sigma, so much the better, since the dpmo is lower.

From a practical standpoint, Six Sigma seems to be an effective form of management. Moreover, the argument against the 1.5-sigma shift in the mean seems similar to the claim that a yard is not really three feet. Some say a yard was based on the distance from the tip of the nose to the tip of the middle finger on an outstretched arm for an average male. What is an "average" male? Is that similar to knowing an "average" shift? It turns out that eventually everyone accepted the definition that a yard is equal to three feet, and few remember the original definition. At Motorola, the story is similar in that only a few folks remember the original reason for the definition of the sigma levels, and it is accepted that the dpmo levels can be equated with sigma levels.

Interestingly, many of those who continue to argue about the derivation of sigma levels are those who have learned about Six Sigma in the last seven years. It seems that they are trying to understand the "legend" of Six Sigma rather than seeing the upside and benefit. We can continue to argue about this, but practitioners are continuing to improve their organizations regardless of any technical flaws in the derivations of the methods.

Does Six Sigma Matter?

The difference between a 3-sigma process (66,807 defects per million opportunities at the nearest specification limit) and a 6-sigma process (3.4 defects per million opportunities at the nearest specification limit) can be seen in a service with 20 component steps. If each of the 20 component steps has a quality level of 66,807 defects per million opportunities, assuming each step does not allow rework, then the likelihood of a defect at each step is 0.066807 (66,807/1,000,000), or 6.68 percent. By subtraction, the likelihood of a defect-free step is 0.933193 (1.0 – 0.066807), or 93.3 percent. Consequently, the likelihood of delivering a defect-free final service is 25.08 percent; this is computed by raising 0.933193 to the 20th power ([1.0 – 0.066807]^20 = 0.2508 = 25.08%). However, if each of the 20 component steps has a quality level of 3.4 defects per million opportunities (0.0000034), then the likelihood of delivering a defect-free final service is 99.9932% ([1.0 – 0.0000034]^20 = 0.9999966^20 = 0.999932 = 99.9932%). A 3-sigma process generates 25.08% defect-free services, while a 6-sigma process generates 99.9932% defect-free services. The difference is dramatic enough to conclude that a 6-sigma level of performance matters, especially for more complex processes with a greater number of steps or components.
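The rolled-up yields above take only a few lines to verify. A minimal sketch (Python; the function name is ours), assuming each step is independent and has the same dpmo quality level:

```python
def defect_free_rate(dpmo, steps):
    """Probability that all `steps` independent process steps are
    defect-free, given each step's defects per million opportunities."""
    p_step_ok = 1.0 - dpmo / 1e6
    return p_step_ok ** steps


# 20-step service at 3-sigma quality: only about a quarter of services
# are delivered defect-free.
print(defect_free_rate(66807, 20))
# 20-step service at 6-sigma quality: about 99.993% defect-free.
print(defect_free_rate(3.4, 20))
```

Because the per-step yield is raised to the power of the number of steps, the gap between quality levels widens rapidly as processes grow more complex, which is the point of the paragraph above.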

The DMAIC Model for Improvement

The relationship between the Voice of the Customer, the Voice of the Process, and the DMAIC model is explained in Figure 1.11. DMAIC is an acronym for Define, Measure, Analyze, Improve, and Control. The left side of Figure 1.11 shows an old flowchart with its 3-sigma output distribution. The right side of Figure 1.11 shows a new flowchart with its 6-sigma output distribution. The method utilized in Six Sigma management to move from the old flowchart to the new flowchart through improvement of a process is called the DMAIC model.

The Define Phase of a Six Sigma DMAIC project involves identifying the quality characteristics that are critical to customers (called CTQs), using a SIPOC analysis and a Voice of the Customer analysis, and preparing a business case for the project with a project objective. SIPOC is an acronym for Supplier, Input, Process, Output, and Customer.

The Measure Phase involves operationally defining the CTQs, conducting studies of the validity of the measurement system of the CTQs, collecting baseline data for the CTQs, and establishing baseline capabilities for the CTQs.

The Analyze Phase involves identifying the input and process variables that affect each CTQ (called Xs) using process maps or flowcharts; creating a cause-and-effect matrix to understand the relationships between the Xs and the CTQs; conducting a Failure Mode and Effects Analysis (FMEA) to identify the critical Xs for each CTQ; operationally defining the Xs; collecting baseline data and establishing baseline capabilities for the Xs; conducting studies of the validity of the measurement system of the Xs; identifying the major noise variables (MNVs) in the process; and generating hypotheses about which Xs affect which CTQs.

The Improve Phase involves designing appropriate experiments to understand the relationships between the Xs and MNVs that impact the CTQs, generating the actions needed to implement optimal levels of the critical Xs that minimize spread in the CTQs, and conducting pilot tests of processes with the Xs set at their optimal levels.

The Control Phase involves avoiding potential problems in the Xs with risk management and mistake proofing, standardizing successful process changes, controlling the critical Xs, developing and documenting process control plans for the critical Xs, and turning the control plan over to the process owner.

Figure 1.11

Figure 1.11 Relationship Between the VoC, the VoP, and the DMAIC Model
