Process Improvements and Design of Experiments
- Background: Process Improvements
- Basic Statistical Process Control Tools
- SPC Tools Used for Scale-Up and Production
- Precontrol for Production Start-Up
- Process Capability for Production Start-Up
- Design of Experiments
- Sources, References and Selected Bibliographic Information
——————————————————————————————-
Background: Process Improvements
Ellis Ott, in his book on Process Quality Control, pointed out that every manufacturing operation has problems. Some are painfully obvious; others require ingenuity and hard work to identify. Every research and development person, production engineer and supervisor has to be a troubleshooter, and anyone hoping to fill such a role must learn about process improvement and troubleshooting based on systematic data obtained from the process. There are two approaches to reducing trouble: learn to prevent it and learn to cure it when it develops.
Methods are presented in this section for identifying opportunities for process improvements and for identifying important differences. The knowledge that such differences exist, and the ability to pinpoint them, are vital to the successful commercialization of new technology. Experience has shown that those who carefully observe a process can find ways to make improvements and corrections, if they can first be convinced that the differences actually do exist.
Specifically, in his book World-Class Quality, Keki Bhote points out that the design engineer is the principal hands-on instrument for change, both for the product and for the process. It is the design engineer who will:
1. Hear and be the voice of the customer, i.e., determine the customer’s needs and expectations.
2. Determine a target value associated with each product specification and design to such target values rather than broad specification windows.
3. Use design of experiments techniques to greatly reduce variation at the prototype stage and during engineering and production pilot runs. The design engineer should not use full production or the field as an extension of the laboratory to solve problems.
4. Establish product and process capability by not designing processes as mere afterthoughts to fit frozen product designs. Instead, the design engineer should use a product/process interdisciplinary team approach.
5. Translate important product parameters into component specifications using design of experiments techniques. That is, he or she should specify high CpKs for the truly important component specifications while opening up the tolerances on all other component specifications to reduce costs.
Thus the business of providing product to the consumer requires many major functions. Production itself has major subdivisions: design and specify, purchase and acquire, manufacture, package, inspect and ensure quality. Each of these critical functions observes Murphy’s first law: if anything can go wrong, it will. Although troubleshooting and process improvement projects are as old as civilization itself, today there are procedures that are rather universally applicable to production troubleshooting. These procedures usually employ data collection and logic in addition to process science and know-how.
Manufacturing Operation Variability
In every manufacturing operation there is variability. The variability becomes evident whenever a quality characteristic of the product is measured. There are two basically different reasons for variability, and it is very important to distinguish between them.

Variability Inherent in the Process occurs even when all adjustable factors known to affect the process have been set and held constant during its operation. There is a pattern to the inherent variation of a specific stable process, and data from different processes take different basic characteristic forms. However, the most frequently useful pattern is called a “Normal Distribution”, as shown in the figure. It is frequently found in manufacturing processes and technical investigations.


There are other basic patterns of variability; they are referred to as non-normal distributions. The “Lognormal Distribution” figure shows a distribution that is fairly common when making acoustical measurements and certain measurements of electronic products. If the logarithms of the measurements are plotted, the resulting pattern is a normal distribution (hence the name). For manufacturing processes the basic lognormal distribution does not occur very frequently. Many apparent lognormal distributions of data are not the consequence of a stable lognormal process but rather of two basically normal distributions, with a large percentage produced at one level, as shown in the “Bimodal Distribution” figure. The net result of these two distributions can be a bimodal distribution which presents a false appearance of being inherently lognormal. On the other hand, human knowledge work and intellectual property processes do produce true lognormal distributions. Examples are basic inventor activity and patent values.
Variability from Assignable Causes is the other important source of variability. This type of variation, named by Dr. Walter Shewhart, often contributes a large part of the overall variability of a process. Evidence of this type of variability offers important opportunities for improving the uniformity of product. The process average may change gradually as a result of gradual changes in temperature, tool wear, or operator fatigue. Or the process may be unnecessarily variable because two operators or machines are performing at different averages. Variability resulting from two or more processes operating at different levels, or from a single source operating with an unstable average, is typical of production processes. It is the rule, not the exception. This second type of variability must be studied using the techniques and data analysis discussed in this chapter. After the responsible factors are identified and corrected, continuing control of the process will be needed to ensure ongoing quality.
Basic Statistical Process Control Tools

This section will briefly outline elementary statistical process control tools. There are three major approaches to achieving quality: the traditional approach, statistical process control, and the design of experiments. Traditional quality control consists of ineffective methods such as brute-force inspection, management exhortation, delegation of quality responsibility to a detached quality control department, and even sampling plans. The statistical process control (SPC) approach typically utilizes control charts to understand process variability. These charts are useful for ensuring that a process is and stays in control, but they are complex, costly, and almost useless for solving chronic quality problems.
As shown in the “Contribution of Traditional, SPC, and DOE Tools to Quality Progress” figure, the widespread use of design of experiments (DOE) has the biggest impact of the three methods on improving quality. This is because the object of DOE is to discover the key variables in product and process design, to drastically reduce the variation they cause, and to open up the tolerances on the lesser variables so as to reduce costs. This chapter will now highlight the SPC tools used in scale-up and production and the DOE tools used in product design and scale-up.
SPC Tools Used for Scale-Up and Production
Although many companies have abandoned SPC tools for the more powerful DOE techniques, there are exceptions. A major exception is companies that have trained their entire direct labor force in elementary SPC tools so they can tackle low-grade quality problems. The result is that instead of having a few professionals tackle these problems, they have a whole host of problem solvers.

The “Elementary SPC Techniques” figure lists the “seven tools of SPC”. These are tools that every production line worker should be learning and using. They are listed here because every R&D professional should also be familiar with them. Because they are of limited value compared to the design of experiments methodology, only their objectives and methodology are outlined here. The one exception is the control chart, which will be discussed in some detail in the next section. That said, a brief commentary on each of these tools is in order.
PDCA (Plan, Do, Check, Act) is a variant of the traditional problem-solving approach of “observe, think, try, explain”. As a problem-solving tool, it has the same poor effectiveness as brainstorming and Kepner-Tregoe techniques for solving technical problems.
Data Collection and Analysis is the first step on the long road to variation identification and reduction. Planning is the key to effective data collection. The “why, what, when, where, who, and how” of data collection must be established a priori, that is, before the fact. This avoids teams and plants drowning in meaningless and useless data. Common pitfalls include: not defining the objective; not knowing what parameter to measure or how to measure it; not having sufficiently accurate equipment for the measurement; not randomizing; and poor stratification of data. Similarly, the analysis of data should be undertaken only with proven approaches, rather than with hit-and-miss approaches such as PDCA, brainstorming, cause-and-effect diagrams, etc.
Graphs/Charts are tools for the organization, summarization, and statistical display of data. As in the case of data collection and analysis, the purpose of using graphs and charts should be clearly established and the usefulness and longevity periodically reexamined.
Check Sheets/Tally Sheets/Histograms/Frequency Distributions are tools whose main function is to simplify data gathering and to arrange data for statistical interpretation and analysis. There are several types of check sheets: for process distribution; for defective items/causes/defect locations (sometimes referred to as measles charts); and memory joggers for inspectors, quality control, and servicers in checking product.
Tally sheets are special forms of check sheets used to record data, keep score of a process in operation, and divide data into distinct groups to facilitate statistical interpretation.
Histograms and frequency distributions provide a graphical portrayal of variability. Their shape often gives clues about the process measured, such as mixed lots (a bimodal distribution); screened lots (a truncated distribution); the amount of spread relative to the specifications; non-centering relative to the specifications, etc. There are two general characteristics of frequency distributions that can be quantified: central tendency and dispersion. Central tendency is the bunching up of observations of a particular quality characteristic at the center and is measured by the average of all the observations, the mode (the value of the quality characteristic with the largest number of observations), and the median (the value that divides the number of observations into two equal parts). Dispersion is the spread of the observations and can be measured by the range, the highest observation minus the lowest, and the standard deviation, which is approximately 1/6 of the range (but only for a normal distribution).
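As a rough illustration of these measures, the Python sketch below computes the average, median, mode, range, and standard deviation for a small set of hypothetical readings; the data and variable names are invented for the example only.

```python
# A minimal sketch of the central tendency and dispersion measures
# described above, computed for a small set of hypothetical readings.
from statistics import mean, median, mode, stdev

readings = [10.2, 10.4, 10.3, 10.4, 10.6, 10.5, 10.4, 10.7, 10.3, 10.5]

spread = max(readings) - min(readings)          # range: highest minus lowest
print(f"average: {mean(readings):.2f}")
print(f"median:  {median(readings):.2f}")
print(f"mode:    {mode(readings):.2f}")
print(f"range:   {spread:.2f}")
# The range/6 rule of thumb approximates the standard deviation only for
# a normal distribution and a reasonably large sample.
print(f"std dev: {stdev(readings):.2f}  (range/6 approximation: {spread / 6:.2f})")
```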
Pareto’s Law was put forth by Vilfredo Pareto, a 19th-century Italian economist who studied the distribution of income in Italy and concluded that a very limited number of people held most of its wealth. The study produced the famous Pareto-Lorenz maldistribution law, which states that cause and effect are not linearly related; that a few causes produce most of a given effect; and, more specifically, that 20% of the causes produce 80% or more of the effects.
Juran, however, is credited with converting Pareto’s law into a versatile, universal industrial tool applicable in diverse areas such as quality, manufacturing, suppliers, materials, inventory control, cycle time, value engineering, sales and marketing. In fact it can be applied to almost any industrial situation, blue-collar or white-collar. By separating the few important causes of any industrial phenomenon from the trivial many, work can be prioritized to focus on these few important causes, as the short sketch below illustrates.
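The sketch ranks a set of hypothetical defect categories by count and accumulates their share of the total; the categories and numbers are invented for illustration.

```python
# A minimal sketch of a Pareto analysis: rank causes by contribution and
# find the vital few that account for roughly 80% of the effect.
# The defect categories and counts are hypothetical.
defects = {"solder bridges": 310, "missing parts": 120, "misalignment": 45,
           "scratches": 15, "wrong labels": 7, "other": 3}

total = sum(defects.values())
cumulative = 0
print(f"{'cause':<16}{'count':>8}{'cum %':>8}")
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<16}{count:>8}{cumulative / total:>8.0%}")
# Here the first two causes account for 86% of all defects, so improvement
# effort should be focused on them first.
```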
Brainstorming/Cause-and-Effect Diagrams/CEDAC, when applied to process control, sometimes become good examples of beautiful techniques applied wrongly. Brainstorming, for instance, is a marvelous tool in the social sciences, and even in white-collar industrial work, for generating the maximum number of ideas and utilizing group synergy. However, its effectiveness in quality problem-solving is highly overrated. Even though group ideas are generally better than individual ones, guessing at problems is a kindergarten approach to finding root causes of variation.
Cause-and-effect diagrams were developed by Dr. Ishikawa, one of the foremost authorities on quality control in Japan. As a result, the diagram is often called the Ishikawa diagram or, by reason of its shape, a fishbone diagram. It is probably the most widely used quality control tool for problem-solving among line workers. However, its effectiveness is poor. Because only one cause is varied at a time, interaction effects are missed, which results in partial solutions and marginal improvements in quality.
CEDAC is the acronym for cause-and-effect diagram with the addition of cards. Developed by Fukuda, the technique is explained in detail in his book “Managerial Engineering”. CEDAC represents an improvement over the cause-and-effect diagram, with workers free to change any branch or twig of the diagram as they observe new phenomena in a process and thereby gain new insights. The use of cards, under their own control, facilitates such instant updating of causes. Worker participation is enhanced and raw, unfiltered information is captured before it evaporates. All this said, CEDAC still suffers from the same judgment weaknesses as the cause-and-effect diagram.
Control Charts
The control chart is the last of the elementary seven tools of QC. Because of its wide use and misuse, a good portion of this section will be devoted to it.
In the minds of many, quality professionals and nonprofessionals alike, the control chart is synonymous with statistical process control. For a number of years it held center stage in the discipline that used to be called statistical quality control. Developed by Shewhart, the control chart’s main function, past and present, is to maintain a process under control once its inherent variation has been reduced through the design of experiments. Quoting Dr. Shewhart: “an adequate science of control for management should take into account the fact that measurements of phenomena in both social and natural science for the most part obey neither deterministic nor statistical laws, until assignable causes of variability have been found and removed.”
A gradual change in a critical adjustment or condition in a process is expected to produce a gradual change in the data pattern. An abrupt change in the process is expected to produce an abrupt change in the data pattern. Yet processes are often adjusted without reference to any data. The data from even the simplest process will provide unsuspected information on its behavior. In order to benefit from data, whether it comes in regularly or from special studies of a temperamental process, it is important to follow one important and basic rule: plot the data in a time sequence. Once this is done, two important analysis methods can be employed to diagnose the behavior of time-sequence data. These are (1) the use of run criteria and (2) control charts with control limits and various other criteria that will signal the presence of assignable causes.
Run criteria are used to test the hypothesis that the data represent random variation from stable sources. A simple way to describe a run: if someone repeatedly tosses a coin and produces a run of six heads in succession, we realize this is very unusual. Similarly, for any process we are studying, we sample the process for an attribute of interest and plot the readings in sequence about the median of all the readings. What we get is a median line with one half of the points above it and half below. The order in which points fall above and below the median, however, may not be random. There may be runs of points above or below the median, and the question is: given the length of a run above or below the median, what does that tell us about the stability of the process?

In the “Example of Production Data for Product Weight” figure we see that the data for product weight have been plotted in sequence, that the median product weight has been calculated and drawn on the figure, and that there are runs of data above and below this median, going from left to right, of seven, three, three, one, one, two, one, and six, respectively. In this case each of the 24 points on the plot (n = 24) is the average of a subgroup of four samples (k = 4) taken at the same time.
Too many runs above and below the median indicate the following possible engineering reasons: (1) samples are being drawn alternately from two different populations, resulting in a saw-tooth effect. These effects occur regularly in portions of sets of data. Their explanation is often found to be two different sources, such as analysts, machines, or raw materials, which enter the process alternately or nearly alternately. (2) There may be three or four different sources which enter the process in a cyclic manner. Too few runs are quite common. Their explanations include: (a) a gradual shift in the process average, (b) an abrupt shift in the process average, (c) a slow cyclic change in the averages.
The total number of Runs in the Product Weight data is eight. The average expected number, and the standard deviation of the sampling distribution, are given by the following formulas.
Average Number of Runs = (n + 2) / 2 = (24 + 2) / 2 = 13 = m + 1, where m = n/2 = 12 {m is used in the formula below}
Standard Deviation of Runs = square root of ( m(m − 1) / (2m − 1) ) = square root of ( (12 × 11) / 23 ) ≈ 2.4

The expected number of runs of exactly length “s” is derived from the table in the “Runs Above and Below the Median of Length ‘s’” figure. Running these calculations for the run lengths observed in the “Example of Production Data for Product Weight” study, one finds that there were too few short runs and too many long runs. In particular, the two long runs of six and seven suggest that the data are not random. Looking at the figure itself, we see that there appears to be an overall increase in filling weight during the study. Thus run criteria, used to judge whether the observed run lengths are the result of a random or nonrandom process, give important hints on whether or not there is a large assignable cause that could be found and controlled to greatly improve the process.
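The run-count test can be sketched in a few lines of Python. The subgroup averages below are hypothetical, not the actual Product Weight data; the function simply counts runs about the median and compares them with the expected number and its standard deviation, using the formulas given above.

```python
# A minimal sketch of the run criteria: count runs above/below the median
# and compare with the expected number (m + 1) and its standard deviation.
import math
from statistics import median

def run_analysis(values):
    med = median(values)
    # Classify each point as above (+1) or below (-1) the median;
    # points exactly on the median are dropped, as is customary.
    signs = [1 if v > med else -1 for v in values if v != med]
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    longest = current = 1
    for a, b in zip(signs, signs[1:]):
        current = current + 1 if a == b else 1
        longest = max(longest, current)
    m = len(signs) // 2                      # points on each side of the median
    expected = m + 1                         # equals (n + 2) / 2
    sd = math.sqrt(m * (m - 1) / (2 * m - 1))
    return runs, expected, sd, longest

weights = [49.8, 50.1, 50.0, 50.3, 49.9, 50.4, 50.6, 50.2,
           50.5, 50.7, 50.4, 50.8]           # hypothetical subgroup averages
runs, expected, sd, longest = run_analysis(weights)
print(f"observed runs: {runs}, expected: {expected} ± {sd:.1f}, longest run: {longest}")
# Far fewer runs than expected, or any run longer than six, points to an
# assignable cause such as a shift in the process average.
```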
As can be seen from the above example, there are criteria which indicate the presence of assignable causes through unusual runs in a set of points made up of subgroup samples. The method works even if the subgroup size is one, all the way up to very large subgroups.
The ultimate value of using run criteria can be summarized by saying: if the distribution of runs is not as shown in the “Run Criteria” list below, the process can greatly benefit from finding assignable causes. This is important in production scale-up runs from R&D and business development projects. It is critical to know when the new process is stable and when it is not.
Run Criteria
1. The expected total number of runs about the median is (n + 2) / 2.
2. The run length frequencies observed are inconsistent with those expected, as shown in the “Runs Above and Below the Median of Length ‘s’” figure.
3. A Run of length greater than six is evidence of an assignable cause warranting investigation.
4. A long Run-up or Run-down usually indicates a gradual shift in the process average. A Run-up or Run-down of length five or six is usually longer than expected.
The Shewhart control chart is a well-known and powerful method of checking the stability of a process. It was conceived as a device to help production with routine hour-by-hour adjustments, and its value in this regard is unequaled. The control chart provides a graphical time sequence of data from the process itself. This permits a run analysis to study the historical behavior patterns. Further, the control chart provides additional signals about the current behavior of the process, such as the upper and lower control limits, which define the maximum expected variation of the process.
The control chart is a method of studying a process from a sequence of small random samples of the process. The basic idea of the procedure is to collect three to five samples at regular time intervals. Sample sizes of four or five are usually best. Sometimes it is more expedient to use sample sizes of two or three. Sample sizes larger than six or seven are not recommended. The quality characteristic of each unit in the sample is then measured. The range of the measurements is plotted on a range or “R” control chart, and their average is plotted on a second control chart of averages.
In starting control charts it is necessary to collect some data to provide preliminary information regarding the central lines for the average values and the ranges of the values. It is usually recommended to collect data over 20 to 25 time intervals before generating a plot. A lower number can be used, but it is not recommended to drop below 14.

Step one is to plot the values on a time axis and compute the average and range of each sample in the time sequence.
Step two is to draw the average of the sample averages and the average of the sample ranges on their respective charts as horizontal lines.
Step three is to compute the Upper and Lower Control Limits (UCL/LCL) on the range values. These are obtained using the “Factors Used to Calculate the UCL/LCL of the Range” figure. Note that “n” in this figure is the number of samples taken at each point in time that were averaged to obtain the single “time” point. The UCL and LCL are drawn on the range chart as horizontal lines. The UCL is D4 times the average range; the LCL is D3 times the average range.
Step four is to compute the three Sigma control limits on the average and draw them as lines on the chart as well. The upper control limit is the average plus 3 times the standard deviation of the point average values. The lower control limit is the average minus 3 times the standard deviation of the point average values.
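A minimal Python sketch of steps one through four follows. The subgroup data are hypothetical, and the D3/D4 values shown are the published range-chart constants for subgroups of size four; in practice the limits would be computed from 20 to 25 subgroups, as noted above.

```python
# A minimal sketch of building averages and range control chart limits from
# subgroup data (steps one through four above). Data are illustrative.
from statistics import mean, stdev

subgroups = [[50.1, 49.8, 50.3, 50.0],      # each subgroup: n = 4 units
             [50.2, 50.4, 49.9, 50.1],      # sampled at one point in time
             [50.6, 50.3, 50.5, 50.2],
             [50.0, 50.2, 50.4, 50.1]]      # collect 20-25 subgroups in practice

D3, D4 = 0.0, 2.282                         # range-chart factors for n = 4

averages = [mean(s) for s in subgroups]             # step one
ranges   = [max(s) - min(s) for s in subgroups]

grand_average = mean(averages)                      # step two: center lines
average_range = mean(ranges)

ucl_r = D4 * average_range                          # step three: range limits
lcl_r = D3 * average_range

sigma_xbar = stdev(averages)                        # step four: 3-sigma limits on
ucl_x = grand_average + 3 * sigma_xbar              # the averages, computed from the
lcl_x = grand_average - 3 * sigma_xbar              # spread of the subgroup averages
                                                    # (an A2 factor times the average
                                                    # range gives an equivalent limit)

print(f"Averages chart: center {grand_average:.2f}, limits {lcl_x:.2f} to {ucl_x:.2f}")
print(f"Range chart:    center {average_range:.2f}, limits {lcl_r:.2f} to {ucl_r:.2f}")
```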

What is important about control charts is that almost every set of start-up production data that has over 30 time-sequence data points will show points outside the three-sigma limits. Further, the nature of the assignable causes signaled by these points outside the three-sigma limits is usually important, and they should be identified. Upon identification, the source of the variation should be addressed and reduced.
Combining these upper and lower control limit tests with the run criteria provides powerful signals of whether a production process is in control or not. If it is out of control, finding the assignable cause promptly is of great importance. This has to be done during scale-up in order to hand a smooth production process over to manufacturing. It also needs to be done when technical personnel are called in to troubleshoot a production process. Knowing whether assignable or non-assignable causes are present is a critical problem-solving tool.
Control chart theory is based upon the central limit theorem. Most importantly, this states that averages of samples drawn from any process will be approximately normally distributed, so the properties of the normal distribution can be applied. The most important of these properties is as follows: the area under the normal distribution bounded by three standard deviations on either side of the average is 99.73% of the total area. This means that for manufacturing operations, readings falling within the three-sigma limits of a normal distribution are, 99.73% of the time, due to random causes. Another way to say this is that if a reading falls outside these limits, there is only a 0.27% probability that it occurred entirely by chance, and a 99.73% chance that it was caused by a nonrandom assignable cause. For the manufacturing operation this means the process must be stopped and an investigation begun to correct the problem. On the other hand, even if there is considerable variation among the average readings, if they fall within the upper and lower control limits, the variation is due to small, random, non-assignable causes that are not worth investigating, and the process should be left alone.
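The 99.73% figure can be checked directly from the standard normal cumulative distribution function, as in the brief sketch below.

```python
# A small check of the three-sigma area quoted above.
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

within = norm_cdf(3.0) - norm_cdf(-3.0)
print(f"area within ±3 sigma:  {within:.2%}")       # about 99.73%
print(f"area outside ±3 sigma: {1 - within:.2%}")   # about 0.27%
```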
When dealing with intellectual property, upper and lower control limits can also be applied. Control limits can be used to find unique inventor performance and unique chokepoint prior art. In this case it is the outlier events or people that have the value. Finding an outlier in intellectual property or human capital information patterns is a wonderful discovery. Such outliers are very special assets and should be treated accordingly.
Pre-control for Production Start-Up
Frank Satterthwaite was a brilliant statistician who established the theoretical underpinnings of Pre-control. Pre-control was further developed by the consulting company of Rath and Strong. The mechanics of Pre-control can be taught in less than 10 minutes, as there are only four simple rules to follow, as exemplified in the “Pre-Control Example” figure:

Rule 1. Divide the specification width by four. The boundaries of the middle half of the specification then become the Pre-Control lines. The area between these Pre-Control lines is called the Green Zone. The two areas between each Pre-Control line and each specification limit are called the Yellow Zones. The two areas beyond the specification limits are called the Red Zones.
Rule 2. To determine process capability, take a sample of five consecutive units from the process. If all five fall within the Green Zone, the process is in control. (In fact, with this simple rule, the usual samples of 50 to 100 units to calculate Cp and CpK are not necessary. By applying the multiplication theorem of probabilities or the binomial distribution, it can be shown that a minimum CpK of 1.33 will automatically result.) Full production can now commence. If even one of the units falls outside the Green Zone, the process is not in control. Conduct an investigation, using engineering judgment or, better still, design of experiments, to determine and reduce the cause of variation.
Rule 3. Once production starts, take two consecutive items from the process periodically. The following possibilities can occur:
(A) if both units fall inside the Green zone, continue production.
(B) if one unit is in the Green zone and the other unit is in a Yellow zone, the process is still in control. Continue production.
(C) if both units are in the Yellow zones (with both in the same Yellow zone or one in one Yellow zone and the second one in the other), stop production and conduct an investigation into the cause of variation and correct it.
(D) if even one of the units falls in the Red zone, there is a known reject, and production must be stopped and the cause of the reject investigated. When the process is stopped and the cause of variation identified and reduced or eliminated, Rule 2, i.e., five units in a row in the Green zone, must be reapplied before production can resume.
Rule 4. The frequency of sampling two consecutive units is determined by dividing the time period between two stoppages (i.e., between two pairs of yellows) by six. In other words, if there is a stoppage (two yellows), say, at nine in the morning and the process is corrected and restarted soon after, followed by another stoppage at noon (again two yellows), the period of three hours between these stoppages is divided by six, to give a sampling frequency of every half hour. If, on the other hand, the period between two stoppages is three days, the frequency of sampling is every half-day.
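These rules lend themselves to a very small amount of code. The sketch below, with hypothetical specification limits, classifies readings into the Green, Yellow, and Red zones and applies Rules 2 and 3; it is an illustration, not the figure's worked example.

```python
# A minimal sketch of the Pre-Control zones and Rules 2 and 3,
# using hypothetical specification limits.
def zone(x, lsl, usl):
    """Classify a measurement into the green, yellow, or red zone."""
    quarter = (usl - lsl) / 4.0
    pc_low, pc_high = lsl + quarter, usl - quarter   # the Pre-Control lines
    if pc_low <= x <= pc_high:
        return "green"
    if lsl <= x <= usl:
        return "yellow"
    return "red"

def qualify(five_units, lsl, usl):
    """Rule 2: all five consecutive units must fall in the Green Zone."""
    return all(zone(x, lsl, usl) == "green" for x in five_units)

def running_check(two_units, lsl, usl):
    """Rule 3: decide whether to continue or stop from two consecutive units."""
    zones = [zone(x, lsl, usl) for x in two_units]
    if "red" in zones:
        return "stop: reject produced; investigate, then requalify per Rule 2"
    if zones.count("yellow") == 2:
        return "stop: investigate and correct the cause of variation"
    return "continue production"

lsl, usl = 9.0, 11.0                                      # hypothetical spec limits
print(qualify([10.1, 9.9, 10.2, 10.0, 10.3], lsl, usl))   # True: begin production
print(running_check([10.1, 10.4], lsl, usl))              # both green: continue
print(running_check([9.3, 10.8], lsl, usl))               # two yellows: stop
```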
The theory behind the effectiveness of Pre-control is based on the multiplication theorem of probabilities and the binomial distribution. Although the mathematical derivation is beyond the scope of this discussion, the following is a summary of the results. The worst alpha risk, the risk of overcorrection by stopping a process when it should continue, is around 2%. The worst beta risk, the risk of allowing a process to continue when it should be stopped, is close to 1.5%. Pre-control is a method that is extremely useful when starting up a production process coming out of a new business development or R&D project.
Process Capability for Production Start-Up
The only surefire method to eliminate waste is to design a product and its process so that the parameters of interest to consumers are brought close to the target value or design center. There is no other way to get 100% yields or to achieve zero defects. Once this is done, production becomes a breeze, and manufacturing can put products together without the need for inspection tests, which add no value whatsoever.
Before the variation in a parameter can be reduced, however, it must be measured. Two yardsticks, Cp and CpK, have become standard terminology. Process capability, Cp, is defined as the specification width divided by the process width. It is a measure of spread. The “Cp as a Measure of Variation” figure depicts six frequency distributions comparing the specification width (always 40 − 20 = 20) to the process width.

Process A in the “Cp as a Measure of Variation” figure has a process width of 30, as defined by the traditional three-sigma limits, to give a Cp of 0.67. It is a process that is out of control, with 2.5% rejection tails at both ends. For such an out-of-control condition only brute-force sorting, scrap, and rework can produce product suitable for shipment to customers.
Process B has a process width equal to the specification width, to give a Cp of 1.0. Although somewhat better than Process A, it too can be considered almost out of control, because any slight change or ripple will cause rejects.
Process C has a Cp of 1.33, showing a margin of safety between the tighter process limits and the specification limits. This is a good standard for important parameters in scale-up and early production runs.
Process D, with a Cp of 1.66, is better, with an even wider safety margin. Process E, with a Cp of 2.0, is an important milestone in the march toward variation reduction. Here the process width is only half the specification width. Most companies today have established a Cp of 2.0 as a minimum standard for their own, as well as their suppliers’, important quality characteristics.
Process F, with a Cp of 8.0, is not only much better; it is also attainable, and at a lower overall cost. In fact, there is no limit to higher and higher Cp numbers, so long as no recurring costs are added to the product or process and only the cost of design of experiments is incurred.

Taking process capability a bit further, the use of CpK allows an even better measure of variation and process control. This is because Cp does not take into account any non-centering of the process relative to the specification limits of the parameter. Such non-centering reduces the margin of safety and therefore has a penalty imposed, called the K correction factor. The “Process Capability Equations” figure shows the calculations involved for these factors.

When the process average and the design center or target value coincide, the correction factor K is reduced to zero, making Cp and CpK equal. If, however, the process average is skewed toward one end or the other of the specification limits, away from the design center, the value of K increases, causing a decrease in CpK relative to Cp. This effect is shown in the “CpK as a Measure of Process Capability” figure. For example, sub-figure A has a wide spread, with a Cp of 0.71. Since its design center and its average coincide, the Cp and CpK values are the same at 0.71. Sub-figure B has a narrow spread, with a respectable Cp of 2.5. However, because it is located close to the lower specification limit, the K factor penalizes it, giving a poor CpK of 1.0. Sub-figure C has a broader spread than sub-figure B, with a lower Cp of 1.67. But it is closer to the design center than is sub-figure B, and so the K factor is less of a penalty, resulting in a CpK of 1.33, which is better than that of sub-figure B. Sub-figure D is ideal, with both a very narrow spread and a centered process, to give a Cp and CpK of 5.0.
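A short sketch of the Cp and CpK arithmetic follows, using numbers chosen to roughly mirror sub-figure B above. The specification limits of 20 and 40 are carried over from the Cp figure and are an assumption here; the exact equations appear in the “Process Capability Equations” figure.

```python
# A minimal sketch of the Cp and CpK calculations described above.
def cp(usl, lsl, sigma):
    """Cp = specification width divided by the process width (6 sigma)."""
    return (usl - lsl) / (6.0 * sigma)

def cpk(usl, lsl, mu, sigma, design_center=None):
    """CpK = (1 - K) * Cp, where K penalizes non-centering of the process."""
    d = design_center if design_center is not None else (usl + lsl) / 2.0
    k = abs(d - mu) / ((usl - lsl) / 2.0)
    # With the design center at the midpoint of the specification, this is
    # equivalent to min(usl - mu, mu - lsl) / (3 * sigma).
    return (1.0 - k) * cp(usl, lsl, sigma)

usl, lsl = 40.0, 20.0          # specification width of 20, as in the Cp figure
mu, sigma = 24.0, 20.0 / 15.0  # narrow spread, but skewed toward the lower limit
print(f"Cp  = {cp(usl, lsl, sigma):.2f}")        # 2.50: good spread
print(f"CpK = {cpk(usl, lsl, mu, sigma):.2f}")   # 1.00: penalized for non-centering
```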
CpK is an excellent measure of variability and process capability because it takes into account both spread and non-centering. In process control, centering a process is much easier than reducing spread. Centering typically requires only simple adjustments, whereas spread reduction often requires the patient application of design of experiments techniques. As with Cp, the objective should be to attain higher and higher CpK values, with a CpK of 2.0 considered merely a passing milestone on the march past zero defects to near-zero variation. CpK is also a convenient and effective method of specifying supplier quality.
Design of Experiments
The above sections dealt with the measurement of variation, specifically with the Cp and CpK parameters. This section will concentrate on more powerful statistical techniques, generally called Design of Experiments or DOE. They are particularly important in two areas: (1) for resolving chronic quality problems in production, and (2) at the design stage of both products and processes. A chronic problem can be described as one with a defect rate that has come to be accepted, but which carries measurable dollar waste and has defied traditional engineering solutions for a long time. DOE techniques are especially important on all new designs, so that chronic quality problems in production can be prevented before firefighting becomes necessary. The objectives in both these areas are to: (a) identify the important variables, whether they are product or process parameters, materials or components from suppliers, or environmental or measuring-equipment factors; (b) separate these important variables out, as generally there are no more than one to four important ones; (c) reduce the variation of the important variables (including the tight control of interaction effects) to close tolerances through redesign, supplier process improvement, etc.; and (d) open up the tolerances on the unimportant variables to reduce costs substantially.

There are three approaches to the design of experiments: the classical, the Taguchi, and the Shainin. The classical approach is based on the pioneering work of Sir Ronald Fisher, who applied design of experiments techniques to the field of agriculture as early as the 1930s. Dr. Taguchi of Japan adapted the classical approach with the development of orthogonal arrays. The third DOE approach is the collection of techniques taught by Shainin. The “Three Approaches to the Design of Experiments” figure shows the principal methods used by each approach. The classical tools start with fractional factorials and end with evolutionary operation (EVOP). The Taguchi methods use orthogonal arrays (inner and outer) in “tolerance design”, employing analysis of variance and signal-to-noise ratios for statistical evaluation. All three approaches are far superior to conventional SPC, which attempts to solve chronic problems by means of control charts. All three approaches are also far superior to experiments in which one variable is varied at a time, with the other variables kept rigidly constant. Besides the inordinate amount of time needed for such experimentation, the central statistical weakness of this approach is a chronic inability to separate main effects from interaction effects. These weaknesses create frustration and high costs.
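To make the point about interactions concrete, here is a small, hypothetical two-level, two-factor full factorial analysis in Python. It is a generic illustration of a designed experiment, not one of the specific Shainin or Taguchi methods named above; the factor names and responses are invented.

```python
# A minimal sketch of a 2x2 full factorial: a designed experiment separates
# the main effects from the interaction effect that one-factor-at-a-time
# experimentation cannot see. Factors and responses are hypothetical.
runs = [  # (temperature level, pressure level, measured yield)
    (-1, -1, 62.0),
    (+1, -1, 74.0),
    (-1, +1, 70.0),
    (+1, +1, 96.0),
]

def effect(signs):
    """Average response at the +1 level minus average response at the -1 level."""
    responses = [r[2] for r in runs]
    plus  = [y for s, y in zip(signs, responses) if s > 0]
    minus = [y for s, y in zip(signs, responses) if s < 0]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

temperature = [r[0] for r in runs]
pressure    = [r[1] for r in runs]
interaction = [r[0] * r[1] for r in runs]   # the interaction column

print(f"temperature main effect: {effect(temperature):+.1f}")   # +19.0
print(f"pressure main effect:    {effect(pressure):+.1f}")      # +15.0
print(f"interaction effect:      {effect(interaction):+.1f}")   # +7.0
```

One-factor-at-a-time experimentation, by contrast, would estimate each factor only at a fixed level of the other and would have no column from which to estimate the interaction.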
Of the three DOE methodologies, the Shainin methodology is the most useful. This is because it offers several fundamental improvements over the Taguchi methods, which suffer from a lack of randomization, poor handling of interactions, and reliance on orthogonal arrays. In short, the Taguchi results are suboptimal, and time is better spent using the Shainin DOE tools, which can diagnose and greatly reduce variation, leading to zero defects and near-zero variability. These tools are: (1) simple, understood by engineers and line workers alike (the mathematics involved are unbelievably, and almost embarrassingly, elementary); (2) logical, based on common sense; (3) practical, easy to implement in production, in design, and with suppliers; (4) universal in scope, applicable in a wide variety of industries, big and small, process-intensive as well as assembly-intensive; (5) statistically powerful, in terms of accuracy, with no violations of statistical principles; and (6) excellent in terms of results, with quality gains not in the inconsequential range of 10 to 50% improvement but in the 100 to 500% range.

The “Variation Reduction Roadmap” figure represents a time-tested roadmap to variation reduction. It consists of seven DOE tools invented or perfected by Shainin. They are based on his philosophy of “Don’t let the engineers do the guessing; let the parts do the talking.” The analogy of a detective story is appropriate for this diagnostic journey. Clues are gathered with each DOE tool, each progressively more positive, until the culprit cause, the Red X in the Shainin lexicon, is captured, reduced, and controlled. The second most important cause is called the Pink X, and the third most important the Pale Pink X. Generally, by the time the top one, two, or three causes, the Red X, the Pink X, and the Pale Pink X, are captured, well over 80% of the variation allowed within the specification limits is eliminated. In short, a minimum CpK of 5.0 is achieved with just one, two, or three DOE experiments.

The “Seven DOE Tools” figure presents a capsule summary of each of the seven DOE tools, their objectives, and where and when each is applicable. The figure also gives the sample sizes needed, depicting the unbelievable economy of experimentation. It is strongly recommended that new product and process developments utilize DOE experiments as they move through the scale-up phase of development. It is only when the diagnostic journey using the seven DOE tools has ended, with a substantial reduction in variation, that the focus can shift from DOE to SPC. The true role of SPC, therefore, is maintenance: to ensure that the variation, now captured and reduced, is held to the set levels.
Sources, References and Selected Bibliographic Information
1. “Process Quality Control: Troubleshooting and Interpretation of Data” by Ellis Ott, McGraw-Hill Book Company, 1975.
2. “World-Class Quality: Design of Experiments Made Easier, More Cost-Effective Than SPC” by Keki Bhote, American Management Association, 1988.
3. “Managerial Engineering” by Ryuji Fukuda, Productivity, Inc. Publisher.
4. “Statistical Quality Control” by W.A. Shewhart, Trans. ASME, Ten-Year Management Report, May 1942.
5. “World Class Quality”, by Dorian Shainin, AMACOM Press, 1991.