Chapter 16: Taguchi's Contribution to Design of Experiments (DOE) Applications in TQ

Taguchi's most celebrated contribution is the use of statistical design of experiments not
only for the traditional objective of adjusting the mean to the desired value, but also for reducing the
variation around the mean. He also promotes robust design, which attempts to reduce the variance
a product or process exhibits under different conditions. For example, it is important to produce
high quality products not only on cold days, but also on hot days. Thus, the production process has
to be robust to temperature differences. Furthermore, customers subject the product to different
temperatures, so it is important that products be robust against temperature variations in use as
well. In both cases, a robust design assures that the ambient temperature will not cause serious
variation in the results. Furthermore, Taguchi and Clausing (1990) assert that products that are
robust in use also tend to be easier to produce. This contrasts with the ZD assertion that products
that are produced to print tend to be robust in use. Box 16.1 discusses a well-publicized example of
Taguchi's approach, dating back to 1953.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Box 16.1: Taguchi's Ina Seito Tile Experiment
In 1953, Ina Seito, a Japanese tile manufacturer, faced a serious quality problem: The size of too many tiles
did not conform to the tolerances. About one third of the tiles failed to meet the requirements to be considered top
grade. The production process includes five stages. In the first stage, materials are apportioned, pulverized, and mixed to form clay. In the second stage, the clay is molded into shape. The third stage is pre-firing in a tunnel kiln. The fourth
stage is glazing, and the last stage is the final firing. The quality problems occurred in the third stage, pre-firing, and
were the result of temperature differences at various locations in the kiln. Clay, when subjected to different
temperatures, shrinks at different rates, and thus some of the tiles went out of tolerance. The first solution engineers
came up with involved an investment of about $500,000 in an improved kiln with more consistent temperature across
the section. This solution was too expensive to be seriously considered. (In those years the dollar had a much higher
buying power than today, and the Japanese were short of foreign currency funds. For example, in the same period Akio
Morita of Sony acquired the rights to produce transistors for $25,000, and he had a very tough time convincing MITI --
the Japanese ministry in charge of industrial development -- to allocate the scarce currency for this purpose!)
Because it was out of the question to solve the problem by reducing the temperature variation of the kiln, Ina
Seito decided to try Taguchi's recommendation and find out if it would be possible to change the clay mixture or the
operating conditions in such a way that the percent defective would be reduced. To this end what's desired is not
necessarily clay that does not shrink in the kiln, but clay that shrinks consistently. Consistent shrinkage can easily be
compensated for by making the mold larger than the desired finished dimension, but inconsistent shrinkage will
invariably lead to out-of-tolerance tiles.
Ina Seito devised an experiment where they checked seven factors by eight experiments (a questionable
technical practice known as a highly saturated design). One factor addressed the lime content of the clay mixture; five
other factors addressed the type, texture, or quantity of other additives; and one factor addressed the size of each batch, with a view to finding out whether it would be possible to increase the production run without paying for it in terms of conformance.
The experiment suggested that by increasing the lime content, decreasing the percentage of recycled material, and
using a more expensive agalmatolite mixture (instead of replacing it by a cheap alternative) the percent defective could
be reduced by almost two thirds. In practice, the changes suggested by the experiment proved themselves, and there
was no need to invest a fortune in the kiln. As an added benefit, lime, whose content had to be increased, is one of the
cheapest ingredients, so the new mixture was cheaper to produce. But the experiment suggested that the production quantity should not be increased, even though doing so would have saved costs had it not caused more defects.
The Ina Seito experiment is a classic Taguchi achievement in three senses: (i) it is well publicized; (ii) it
achieved great benefits at low costs; (iii) the technical design of the experiment was questionable (because of the
highly saturated design, without a follow-up with a less saturated design), and it is highly likely that further improvement
opportunities were missed or that some results were not assessed correctly.
In generic terms, the temperature variation that caused the problems in the tile factory is referred to as
"noise," and the objective of robust process or product design is to be unsensitive (i.e., robust) to such noise.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
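To make the box's design concrete, here is a minimal sketch (not Ina Seito's actual array) of how seven two-level factors fit into eight runs: start from a full factorial in three base factors and use their interaction columns for the remaining four factors. The factor labels are illustrative stand-ins only.

```python
# A saturated two-level design: 7 factors in 8 runs (a 2^(7-4) fractional
# factorial, equivalent in structure to Taguchi's L8 array).
from itertools import product

base_runs = list(product([-1, +1], repeat=3))   # full 2^3 factorial in A, B, C

design = []
for a, b, c in base_runs:
    # the remaining four columns are aliased with interactions of A, B, C --
    # this aliasing is exactly what makes the design "saturated"
    design.append((a, b, c, a * b, a * c, b * c, a * b * c))

# illustrative labels only; the actual Ina Seito factors are described above
factors = ["lime", "recycled", "agalmatolite", "additive_type",
           "additive_texture", "additive_quantity", "batch_size"]
for run in design:
    print(dict(zip(factors, run)))
```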
Recall that the key to Taguchi's teachings is that we should minimize the loss function, and
that the default loss function is quadratic. In Chapter 9 we've used this principle to optimize the
tolerances of fabricated parts. We've also seen that if our loss function is quadratic then the
expected loss is proportional to the variance plus the squared bias. To this loss it is important to
add the production cost, since real optimization takes this cost into account too. Indeed, Taguchi's
methods are often used to prevent the need for expensive materials or processes when the design
can be robust enough to make them redundant. A high quality product made from inexpensive (but
not shoddy) inputs is the ideal.
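As a reminder of the arithmetic involved, here is a minimal sketch of the loss computation; the loss coefficient k and the process figures are assumed values for illustration only.

```python
def expected_quadratic_loss(mean, std_dev, target, k):
    """Expected loss E[k*(Y - T)^2] = k * (variance + squared bias)."""
    bias = mean - target
    return k * (std_dev ** 2 + bias ** 2)

# two hypothetical processes: centered but noisy vs. biased but consistent
print(expected_quadratic_loss(mean=10.0, std_dev=0.30, target=10.0, k=5.0))  # 0.45
print(expected_quadratic_loss(mean=10.2, std_dev=0.10, target=10.0, k=5.0))  # 0.25
# to compare complete alternatives, add the production cost to each figure
```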
In this chapter we discuss the basic philosophy behind Taguchi's contributions. This
philosophy is practically uncontested at this time. Nonetheless, in the opinion of many (myself
included), Taguchi did not use the best available theory in developing his practical methods.
Western researchers, led by George Box, accept Taguchi's objective to reduce the variance and not
concentrate on bias alone. In that sense there is no doubt that Taguchi is a leader with many
followers, at least in the practical sense. But the same authorities question Taguchi's choice of
methods. Here, we will barely touch on Taguchi's techniques: readers can study them in several other sources, which are far more supportive of them. The reference list includes pointers to sources that adopt Taguchi's methodology and can be used to study it. It also includes
pointers to discussions that highlight the controversy behind these methods. In the next chapter we
only cover the main methods recommended by Taguchi's technical critics. Furthermore, among the
theoretically sound methods that are available to statisticians, we'll only cover those that can be
taught quickly to non-statisticians. Covering more sophisticated approaches would not be in line
with the book's goal, since (i) it would not further the qualitative understanding of the role of
statistical methods in TQ; and (ii) it is a good utilization of available resources to use professional
statisticians for more complex projects, at least for the first time they are attempted. (Chapter 17
cites a case where a quality improvement team led by a machinist used sophisticated statistical
methods that are beyond the scope of this text. So this omission does not imply that non-
statisticians cannot use methods that are more advanced than those presented here.)

16.1: Experimental Design for Improving Products and Processes


A typical process has several input variables, or factors, that control the output. For
example, in a machining process we control the speed of cutting, the depth and width of the cut, the
angle at which the tool cuts, the make of the machine and of the tool, the materials, the coolant, and
on and on. When we design a product we also have many design choices which have an impact on
the performance. For example, think about the design of the cutting tool we just discussed:
Obviously by making good selections for the geometry of the cutting surfaces and creating rigid
structure for the tool we can improve the performance of the cutting process; and such design
selections can be tested by experimental design. As another example, if a subassembly of two parts has to fit a given space tightly then, potentially, by combining the two parts into one we may be able to hold a tighter overall tolerance. Or consider the case discussed in Box 16.1 where
the composition of the clay (part of the product) and the size of each production batch (part of the
process) were investigated as potential sources of better quality of conformance and lower cost.
Statistical experimental design may help compare various designs where several such ideas are
tested to see which one gives the better performance. Selecting the best levels for the parameters is
known as parameter design. Applying methods such as those presented in Chapter 9 to set limits to
parameters -- both of products and of processes -- is called tolerance design.
Some of these factors operate essentially independently of each other, but others interact
with each other. For instance, the brightness and contrast adjustments of a TV monitor, or a
computer terminal, interact with each other strongly. If we adjust one to our satisfaction, and then
adjust the other, we typically find that the first now requires further adjustment. In product design,
any two parts that are connected to each other, or move against each other, are likely to have
interactions.
Because of these interactions, the prevailing pre-scientific method of optimizing each factor
independently, while holding the others constant, is not effective. Sir Ronald Fisher invented a
methodology -- design of experiments (DOE) -- that makes possible studying the main effects of
each factor and all their interactions by experiments with a relatively small number of runs. We
discuss some technical details of DOE in the next chapter. Here, suffice it to say that typically more
than one factor is changed between runs, and they can be studied together more effectively and
much more economically than with the one-factor-at-a-time method. In Chapter 18 we'll discuss a method introduced by Dorian Shainin that is even more efficient than Fisher's scheme, provided we assume that the highest-order interactions present involve two factors only -- an assumption that practitioners are often willing to make.
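As a minimal illustration of how such experiments are analyzed, the sketch below estimates main effects and an interaction from a tiny two-level design; the design matrix and responses are invented numbers.

```python
def main_effects(design, responses):
    """Each effect: mean response at the factor's high level minus at its low level."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        high = [y for row, y in zip(design, responses) if row[j] == +1]
        low = [y for row, y in zip(design, responses) if row[j] == -1]
        effects.append(sum(high) / len(high) - sum(low) / len(low))
    return effects

# a 2^2 full factorial in factors A and B, with made-up responses
design = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
responses = [20.0, 30.0, 24.0, 46.0]
print(main_effects(design, responses))              # [16.0, 10.0]

# the A*B interaction is the same contrast applied to the product column
ab = [a * b for a, b in design]
print(main_effects([(c,) for c in ab], responses))  # [6.0]
```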
DOE had been used extensively in Europe and America, especially in agriculture,
chemicals, and pharmaceutical applications. But usually its objective was limited to finding effects
that change the mean response effectively. For instance, DOE can serve to study how to adjust
processes that we do not yet know how to adjust, such as determining the temperature
and cooling time required in a hardening process to obtain a desired depth of hardened steel in a
shaft. But Taguchi observed that we should also strive for consistent depth of hardening between
different shafts! High quality implies small bias and small variance. Taguchi studied Fisher's work,
and decided to adapt it to the need to control variation as well as the mean. By doing this Taguchi
opened the door to many exciting opportunities.
At the time Taguchi became involved in this, however, the methodology was not as highly
developed as it is today, and because his objectives were different, Taguchi had to develop his own
techniques. These techniques are the ones that are controversial today. To clarify this even further,
if one has to choose between using Taguchi's methods as given by him and using the western techniques as they were used before Taguchi's input -- it's best to go with Taguchi. But if one can
achieve the same ends better, why not?

16.2: Sources of Variation


Variation in manufactured goods in general is the result of a combination of three sources:

• Production variation: Even new products show variation. This variation may be large and -- if the production process is out of control -- unpredictable.

• Wear and tear: Obviously, there is a difference between used products and new products. The question is whether used products can still be reliable if designed well.

• Usage variation: Different customers use the product differently. Even the same customer uses the product differently from day to day.

Of these three types of variation, we can hope to control the first, and we can design to
reduce the second, but we must accept the third. It may also be cost effective to accept the first, if
we can make the product perform well in spite of it, and if it is too expensive to control. Taguchi refers to variation that is impossible or too expensive to control as noise.¹ The idea is that the design has to be robust against such noise.
Indeed, not all variables are created equal. Some variables are easier to control than others;
such variables can be manipulated to achieve desired ends. Other variables should not be controlled,
but rather planned for. For example, variations in the way a customer uses the product are best
handled by a more robust design, not by sending the customer a thick handbook with operating
instructions and a disclaimer of warranty should she stray. Variation due to normal wear and tear can often be reduced by specifying more expensive materials, but it's better, and often possible, to make the design work well even after a considerable amount of wear. In other words, reliability can be achieved by better design, not only by better materials.

¹ Taguchi often uses terms from the field of communications, and this is one. Another example is his use of the signal-to-noise ratio: Both the concept and its measurement units -- decibels -- come from communications engineering.

Transmitted variation and induced variation


Variables differ not only in how easy they are to control: They also differ in terms of
their influence. Some variables have a strong effect on the mean response. We say they have mean
effects. Sometimes interactions of variables have mean effects too. Such interactions must involve
at least one variable with a mean effect, but the second one may operate only through influencing
the first.
Similarly, other variables or interactions influence the variance in a direct way. For example,
the sharpness of a cutting tool, the speed of cutting, and perhaps their interaction, are all likely to
influence the variance of the dimensions of machined parts. This is true because these variables are
likely to influence the amount of vibration the machining causes, and vibration causes variation in
the dimensions of machined parts. So, some variables and interactions have variance effects. And it
is quite possible that the same variable will have both mean and variance effects, in which case we
may refer to them as mixed; otherwise, they are pure. But some variables are inert both in terms of
mean and of variance.
In general, Taguchi's approach is to use the variance effects to minimize the variance, and
then use the pure mean effects to adjust the process to the level we want. The inert variables, meanwhile, should be adjusted to their most economical settings. If it's not possible to achieve
low variation and achieve a centered process at the same time, we may have to adjust to the
optimal bias that minimizes the sum of the loss and the production cost. A formal approach for that
purpose would be to use nonlinear programming to optimize the settings of all parameters in such a
manner that the total loss and the total cost together will be minimized.
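A minimal sketch of that nonlinear-programming idea follows. The fitted mean, standard-deviation, and cost models here are invented for illustration; in practice they would come out of the experiment.

```python
from scipy.optimize import minimize

TARGET, K = 50.0, 2.0    # target value and loss coefficient (assumed)

def mean_model(x):       # hypothetical fitted mean response
    return 40.0 + 8.0 * x[0] + 3.0 * x[1]

def sd_model(x):         # hypothetical fitted standard deviation
    return 1.5 - 0.6 * x[1] + 0.4 * x[1] ** 2

def cost_model(x):       # hypothetical production cost per unit
    return 5.0 + 2.0 * x[0] + 1.0 * x[1]

def total_loss_plus_cost(x):
    bias = mean_model(x) - TARGET
    return K * (sd_model(x) ** 2 + bias ** 2) + cost_model(x)

result = minimize(total_loss_plus_cost, x0=[1.0, 0.5], bounds=[(0, 2), (0, 2)])
print(result.x, result.fun)   # chosen settings and the minimized total
```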
The variation that is governed by variance effects is called induced. But mean effects
always have an important role in variance reduction efforts. This is because their actual level in
production is always subject to variation. If the level of a mean effect varies, by definition the level
of the response varies with it! This is called transmitted variation. Unfortunately, during
experiments, factors are not subject to the same variation they have in production. This is because
we intentionally control their levels carefully. So DOE never reveals the magnitude of transmitted
variation directly. What it does to that end is identify the real effects, and it is up to us to find out
what variation is associated with the factors that have these real effects in practice, and calculate the
expected transmitted variance. Some mathematical details are given in Supplement 17.
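A sketch of the underlying approximation (a standard first-order result, consistent with what Supplement 17 covers in detail): if the response is y = f(x_1, ..., x_k) and each factor x_i varies independently around its setting with variance \sigma_{x_i}^2, then

\sigma_y^2 \approx \sum_{i=1}^{k} \left( \frac{\partial f}{\partial x_i} \right)^2 \sigma_{x_i}^2 .

The derivatives are evaluated at the chosen settings, so a factor set where the response surface is flat transmits little variation.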
In practice some variation is transmitted (always!), and some is induced (usually). To
minimize variation we must consider both sources. As discussed in Chapter 9, it is more effective to
deal with the larger source first, but we cannot tell in advance if the largest variation source is
induced or transmitted. One way to reduce the transmitted variation, if necessary, is by setting
tolerances on the adjustment of mean effects and variance effects. Setting such tolerances is part of
tolerance design. Furthermore, in some cases interactions can be used to reduce the transmitted
variation caused by input variations of mean effects. That is, we set an interacting factor to the level
at which the other mean effect with which it interacts is smaller, which reduces the total transmitted
variation. The result is indistinguishable from the use of variance effects.
Theoretically, variance effects may actually operate by influencing mean effects that have
not been considered explicitly in the experiment. One can also argue that transmitted variation is
induced by those factors that cause the variation of the level of the real effects in question.
Nonetheless, these causes are usually not part of the experiment, especially not if the experiment is
in a laboratory. Therefore, in practice we can and should differentiate between variance effects and
mean effects, and we should also look for beneficial interactions. Incidentally, one of the strongest
criticisms against Taguchi is that he studiously ignores interactions. In fact, he claims that when the
process is well adjusted there should be no interactions. This claim, however, has never been
substantiated, and Taguchi's critics claim that they have numerous counterexamples.

16.3: Dealing with Multiple Objectives


So far we have implicitly assumed that our response is expressed by a single measurement. True,
we wanted to consider both the mean and the variance, and -- in the mathematical sense at least --
this is a dual objective. But it may also be possible to have more than one objective, and such
objectives may be conflicting. For example, we may want to control both the viscosity and the
acidity of a chemical at the same time. We may want both of them to be on target, and both of them
to be consistent.
Generally speaking, when we conduct statistical experiments we can measure several
responses at each run, and we can study them separately and together. Selecting the final adjustments, however, may involve tradeoffs. The best technical way to do the job is to use
mathematical programming tools that are beyond our scope. In Chapter 17, however, we will
discuss briefly how to pursue objectives in terms of both mean and variance, and mathematically
this is a special case of pursuing more than one objective.
Taguchi, however, claims that data can be transformed in such a way that both mean and
variance can be optimized together. This is to be done by using signal-to-noise (S/N) ratios. They
are defined for three main cases:

• smaller is better: In this case, the squared response is the magnitude we wish to minimize. This can be done by minimizing the following transformation instead, if we wish:

S/N = -10 \log_{10}\left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right)

• nominal is best: Here, Taguchi looks at the statistic

S/N = 10 \log_{10}\left(\frac{\text{mean squared response}}{\text{variance}}\right) = 10 \log_{10}\left(\frac{\bar{y}^2}{s^2}\right)

• larger is better: In this case we may look at 1/y_i as the response that we should minimize, leading to

S/N = -10 \log_{10}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^2}\right)

The main idea here is to select settings with high signal to noise ratio, i.e., relatively strong
signal, and use other factors to adjust the mean to the desired value. If these other factors do not
influence the variation, it follows that we'll obtain any desired value with the minimal possible
variation.
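The three ratios translate directly into code; a minimal sketch follows, where y is a list of replicate measurements from one experimental run.

```python
import math

def sn_smaller_is_better(y):
    return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

def sn_nominal_is_best(y):
    n = len(y)
    mean = sum(y) / n
    s2 = sum((v - mean) ** 2 for v in y) / (n - 1)   # sample variance
    return 10 * math.log10(mean ** 2 / s2)

def sn_larger_is_better(y):
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

print(sn_nominal_is_best([9.8, 10.1, 10.0, 9.9]))    # in dB; higher is "better"
```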
The efficacy of using signal-to-noise ratios is subject to debate. Some researchers agree that
S/N ratios are effective for the first and last cases, although Box (1988) does not accept this either.
As for the nominal is best case, there is only one special case where S/N ratios work as advertised, and that's when the mean and the standard deviation are proportional to each other (see Leon et al.
1987). This is, reportedly, true in some electrical plating or chemical deposition cases, where the
more we plate the larger our standard deviation gets. Likewise, in coating silicon wafers a similar
phenomenon occurs. It can be shown, however, that even in this case, if we take the production costs into account, using S/N ratios precludes true optimization (Trietsch, 1992). Furthermore, it is not
necessary to use S/N ratios. In conclusion, there seems to be no justification and no need to use
S/N ratios.

16.4: Dealing with Noise Variables


If we want to allow noise variables to assume large variation, and design our processes or
products around that, how can we do so technically? Taguchi adopted external arrays for this
purpose. DOE, in general, can be viewed as an array of settings such that each run is performed at one of the settings. Essentially, Taguchi suggests adding an external array that is associated only with the
noise variables. Suppose there are 16 rows in the internal array, corresponding to 16 settings of the
non-noise variables. The external array may have, say, 8 similar rows (Taguchi prefers to tilt the external array by 90°, so these rows actually appear as columns). Now, the main idea of using
external arrays is to run each of the 16 internal settings with each of the 8 external settings, giving a
total of 16⋅8=128 different runs. But now each internal run is tested over a range of 8 possible noise
settings. Taguchi designed this so that the full influence of the potential noise will be tested with
each internal setting. An internal setting that gives good results for all 8 external settings is likely to
be good in practice too, the theory says.
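A minimal sketch of the crossed-array layout just described; the array contents are placeholders (full factorials rather than Taguchi's specific orthogonal arrays).

```python
from itertools import product

internal = list(product([-1, +1], repeat=4))   # 16 control-factor settings
external = list(product([-1, +1], repeat=3))   # 8 noise-factor settings

# every internal setting is run against every external setting
runs = [(ctrl, noise) for ctrl in internal for noise in external]
print(len(runs))                               # 16 * 8 = 128 runs

# each internal setting's performance is then summarized over its 8 noise
# runs, e.g. by the mean and standard deviation of the response
```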
The western critique against this idea is that it is inefficient. Box and his colleagues see no
reason to assign the noise variables to an external array: They might as well be part of the internal
array, which would make for a more parsimonious design that is more amenable to analysis. Such a
design is also more likely to unearth interactions of control variables with noise variables -- a prime
tool to decrease noise-related variation.

16.5: Dealing with Curvature


A key to efficient design in terms of reducing variation is to utilize nonlinearities of the response to reduce transmitted variation. If we intentionally operate in regions where the response is not linear, then necessarily we expect some curvature in the data. Taguchi recommends studying
curvature by using experimental designs with three levels for some variables. The western approach
is to use only two levels as much as possible, and then, when the curvature is likely to be important,
add points to the experiment that together make it possible to represent the behavior of the system by a quadratic function. Quadratic functions are capable of modelling curvature, and it is mathematically easy to optimize settings with them.
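A minimal sketch of the western approach in one factor: augment two-level points with a center point and two axial points, fit a quadratic, and locate its stationary point analytically. The responses are invented numbers.

```python
import numpy as np

x = np.array([-1.0, 1.0, 0.0, -1.5, 1.5])     # two levels plus added points
y = np.array([12.0, 16.0, 10.5, 15.8, 21.3])  # hypothetical responses

c2, c1, c0 = np.polyfit(x, y, deg=2)          # fit y = c2*x^2 + c1*x + c0
x_star = -c1 / (2 * c2)                       # stationary point of the quadratic
print(round(x_star, 3))
```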

16.6: The Relevance of Taguchi-Inspired Methods to Services


DOE has been applied extensively in agriculture, science, process industries, and
manufacturing. Taguchi's version has been applied to manufacturing more than to any other field.
But the main ideas hold for services as well. For example, in transportation it is well known that
variation in delivery time is a very important service quality measurement. There are many choices
that we have to make in transportation, starting with the mode (barge, rail, truck, air). Within each
mode there are further choices. For example, if we choose highway transportation we can use
truck-load carriers, less-than-truckload carriers, private fleet, etc. If we decide to go with a private
fleet, we can opt for large, medium, or small trucks (or mixtures), etc. All these choices have
important ramifications on our costs, mean delivery time, and variance. Conceivably, we can use
DOE to help us make these selections. In this particular case the DOE will probably be part of a
simulation application, because it can be very expensive to try all the alternatives on the ground.
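A toy sketch of how that might look: each design point sets two fleet choices, a simulation generates delivery times under those choices, and we compare means and standard deviations. The factor names and distributions are invented for illustration.

```python
import random

def simulate_delivery(truck_size, routing, n=2000, seed=1):
    rng = random.Random(seed)
    base = 48.0 + (6.0 if truck_size == "large" else 0.0)   # mean hours, assumed
    spread = 4.0 + (5.0 if routing == "hub" else 0.0)       # variability, assumed
    times = [rng.gauss(base, spread) for _ in range(n)]
    mean = sum(times) / n
    sd = (sum((t - mean) ** 2 for t in times) / (n - 1)) ** 0.5
    return round(mean, 1), round(sd, 1)

# a 2x2 design over the simulated system
for truck_size in ("large", "small"):
    for routing in ("direct", "hub"):
        print(truck_size, routing, simulate_delivery(truck_size, routing))
```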
Another example is bidding for government business. With few exceptions, a contractor
that is authorized to bid on government contracts (at all levels of government, in America and
elsewhere), wins those bids where his bid is lowest. A typical contractor bids by estimating the cost
necessary to do the job, and then adding a margin to cover overhead, for protection against cost
overruns, and for profit. There are two questions that a contractor needs to answer to win enough
bids with enough profits. The first is how to estimate the job with as little error as possible, and the
second is what margin to add. If all contractors have the same relative error and margin,
then it is likely that the winner in each and every bid will be the one that underestimated the real
cost most. It may be difficult to make money that way. But if we can reduce our error and make it
negligible, then we can adjust our bid so that we will only win if it pays to win. Furthermore, if all
contractors are able to reduce their error, all of us will be winners, because bids will reflect the
true economic cost and fair profit, without a safety margin that is only necessary because of
variation. DOE and other statistical methods, notably regression to analyze costs based on simple
parameters, can help. For instance, an electrical contractor may use the number of outlets and the
total length of wire as explanatory variables for the total cost.
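A minimal sketch of the contractor's regression; the past-job data are invented, following the outlets-and-wire example just given.

```python
import numpy as np

outlets = np.array([12.0, 20.0, 8.0, 30.0, 16.0])
wire_m = np.array([150.0, 240.0, 90.0, 400.0, 200.0])      # meters of wire
cost = np.array([3100.0, 4900.0, 2100.0, 7800.0, 4000.0])  # recorded job costs

X = np.column_stack([np.ones_like(outlets), outlets, wire_m])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)            # least-squares fit
print(coef)                     # intercept and the two slopes
print(cost - X @ coef)          # residuals: the estimation errors to shrink

new_job = np.array([1.0, 15.0, 180.0])   # 15 outlets, 180 m of wire
print(new_job @ coef)           # predicted cost before adding the margin
```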

16.7: Conclusion
Taguchi provided leadership in identifying the value of hitting the target exactly rather than
settling for producing within the design tolerances. This implies that reducing variation is as important as reducing the bias. Taguchi also provided leadership by demonstrating that DOE
techniques can be used to achieve this end. Finally, he provided leadership by pointing out that
some variables should be allowed to vary, some others should be used to adjust the mean, and yet
others should be used to reduce the variance. Furthermore, he pointed out that variables that are
not important in terms of adjusting either the mean or the variance should be set to their most
economical levels. Most contemporary quality experts agree with Taguchi on all these points.
But the methods Taguchi recommends, specifically (i) the use of external arrays for noise
variables; (ii) the use of signal-to-noise ratios; and (iii) estimating curvature directly from data by
specifying three levels for some variables, have not been adopted as eagerly by everybody, and in
the opinion of this author these methods should indeed be rejected. There are also arguments that
his designs are not effective in enabling clear conclusions, due to confounding -- a subject we postpone to the next chapter, which presents some western methods to achieve Taguchi's ends.
More details about Taguchi's methods may be found in Taguchi and Wu (1980), as well as several
references that are listed within Nair, ed. (1992). A direct comparison between Taguchi's methods
and western methods vis a vis a well-publicized example is given by Montgomery (1991, pp. 532-
543). The particular example itself may be found in Ealey (198?, Appendix A). Ealey also reports
that Taguchi's staple answer to the criticisms is that his methods work. This is a sound argument,
but the critics don't deny that: they just say the methods can and should be improved.

Sources and Readings


Box, George E. P. (1988), Signal-to-Noise Ratios, Performance Criteria, and Transformations,
Technometrics 30(1), pp. 1-17. [Shows better alternatives to the use of signal-to-noise
ratios, and recommends graphic analysis of experimental results. Followed by discussion
section on pp. 19-40, by various authors.]

Box, George E. P., Soren Bisgaard, and Conrad Fung (19??), An Explanation and Critique of
Taguchi's Contribution to Quality Engineering. [The critique in question is that Taguchi's
technical methods -- signal to noise ratios and outer arrays -- are not the best choice
available, but his objectives are flawless. The methods presented in the next chapter are
basically those recommended in this paper.]

Ealey, Lance (198?), Quality by Design, American Supplier Institute (ASI), Dearborn, MI. [Mostly
qualitative book about Taguchi's methods -- which ASI promotes vigorously. ASI is a
Ford-spinoff community service organization, and Ford is one of the major adopters of
Taguchi methods in America. The Appendix shows a detailed example of Taguchi's
analysis. This particular example involves choosing the correct length of overlap in gluing a
hose. Too little, and the result is not reliable; too much, and it's difficult to do and also less
reliable. Thus this can serve as an example of using Taguchi's methods for design of
products. Other authors, e.g., Montgomery, claim they can analyze the same data better.]

Leon, Ramon V., Anne C. Shoemaker, and Raghu N. Kacker (1987), Performance Measures
Independent of Adjustment: An Explanation and Extension of Taguchi's Signal-to-Noise
Ratios, Technometrics 29(3), pp. 253-265. [A central paper about S/N ratios. Followed by
discussion section on pp. 266-285, by various authors.]

Lochner, Robert H. and Joseph E. Matar (1990), Designing for Quality: An Introduction to the
Best of Taguchi and Western Methods of Statistical Experimental Design, Quality
Resources, White Plains, NY and ASQC Quality Press, Milwaukee, Wisconsin. [Although
this text is not in complete agreement with the methods presented here, it is a clear and easy
to understand basic source for the fundamental methods inspired by Taguchi.]

Montgomery, Douglas C. (1991), Introduction to Statistical Quality Control, 2nd edition, Wiley, New York.

Nair, Vijayan N., editor (1992), Taguchi's Parameter Design: A Panel Discussion, Technometrics,
34(2), pp. 127-161. Panel Discussants: Bovas Abraham and Jock MacKay; George Box,
Raghu N. Kacker, Tomas J. Lorenzen, James M. Lucas, Raymond H. Myers and G.
Geoffrey Vining, John A. Nelder, Madhav S. Phadke, Jerome Sacks and William J. Welch,
Anne C. Shoemaker and Kwok L. Tsui, Shin Taguchi [Genichi's son], C. F. Jeff Wu.
[Along with the discussions associated with Leon et al. (1987) and Box (1988), this is
required reading for those who want to decide for themselves whether to adopt Taguchi's
techniques or not -- but there is no open challenge against his objectives.]

Taguchi, Genichi (1986), Introduction to Quality Engineering: Designing Quality into Products
and Processes, Asian Productivity Organization, Tokyo. (Available from Quality
Resources, White Plains, NY, and American Supplier Institute, Dearborn, MI.)

Taguchi, Genichi and Y. Wu (1980), Introduction to Off-Line Quality Control, Central Japan
Quality Control Association, Nagoya, Japan. (Available from American Supplier Institute,
Dearborn, MI.)

Taguchi, Genichi and Don Clausing (1990), Robust Quality, Harvard Business Review, January-February, pp. 65-75. [The main message: we should design products not to fail in the field;
this will simultaneously reduce defectives in the factory.]

Trietsch, Dan (1992), Augmenting the Taguchi Loss Function by the Production Cost May
Invalidate Taguchi's Signal-to-Noise Ratio, AS Working Paper 92-07, Naval Postgraduate
School, Monterey, CA. [This paper was rejected for publication because the reviewer felt
that taking the production cost into account is tantamount to cheating the customer! The
thought that in the final analysis the customer pays for the production cost did not occur to
that scholar, which goes to show that statistical training is not a cure for suboptimization.
Taguchi himself, however, wrote that production costs are as important as the loss
function.]
