Optimum Pooling Level and Factors Identification
in Product Prototyping

by
Massimo de Falco

WP #3789-95
February 1995
Massimo de Falco
University of Naples "Federico II"
Dpt. of Materials and Production Engineering
P.le Tecchio, Naples 80125 - Italy
Abstract

In new product or process development, most of the efforts are focused on the identification of cause-effect relationships. Here, when engineers do not have sufficient knowledge of the basic phenomena, experimentation becomes the main track to obtain those linkages. Even if industrial applications typically include this practice, because of the deep statistics background required, experimentation is still ineffective and sometimes biased by human behavior.

This paper illustrates a method to support the experimenter during the analysis of effects, introducing a technique to set a priori the number of causes to "pool" in the residuals. This approach controls the experimental sensitivity and the risks involved in the design decisions, making the analysis more reliable and reducing, in this way, the arbitrary handling of data.

By increasing the objectivity of the methodology, the approach eliminates, as a consequence, some of the criticisms against the "pooling technique".
1 Introduction

The need of interpreting the physical phenomena plays a main role in all development activities, particularly in the launch of new products and processes where the engineers confront problems not yet investigated. The typical approaches followed in these stages are simulation and experimentation. While the first approach is effective only if we already have knowledge of the phenomenon and we are able to express the singular events composing it in algorithms, the second is inevitable if we do not have any previous information, and then the only alternative is to observe the reality through experimentation.

This form of investigation is well known in the industry and already has many utilizations. However, the complex statistical tools involved and the recent introduction of differently oriented techniques contribute to make their correct application more confused and unclear. Thus, the industrial needs encouraged factions of experimental design experts to develop simpler and less expensive techniques, in this way dividing the scholars into different confronting camps.
1.1 The Experimental Design in the New Product Development

Although the experimental activity has always had a place in the industry in the Research and Development area, with the new approaches proposed by Genichi Taguchi and others it has had a renewed, powerful boost. One result of this new tendency is the creation of additional room for simplification and subsequent improvement. The methodologies and the shortcuts illustrated in these approaches broke the walls of the industries and found several applications.
This scenario obviously faces the inconvenience of an increasingly rough utilization of these methods, which already have, in themselves, criticized parts [Pignatiello J.J., 1992]. Furthermore, the quality of the experimental applications has to deal with its greatest enemy, human nature.
The aim of the experiment is purely definable as the attempt to better understand the real world through the generation of new data, but in this broad view the experimenter does not appear and his role is left unexamined. The engineer or the team working on the new product development can be tempted by the desire to:
- demonstrate the correctness of his/their intuition;
- push a new solution that will give him/them honors;
- study too many effects at the same time;
- conclude all phases rapidly and economically.
We can face in the experiment design and conduction any of these wishes, consciously or not, and their strong biasing of the result is more than evident.
This situation becomes even more critical in the complex new product development, where the numerous constraints and the expensive activity of prototype building impose a strict boundary on the number of experiments to conduct.
Besides these different sources of influence, we can identify three basic categories of behavior: the conservative approach, in which we renounce the consideration of a new solution if there is not strong evidence of its effect; the revolutionary approach, where we face a greater propensity to consider the new solutions even if we are not very confident; and the neutral approach, which is more a theoretical than a practical concept, being very difficult to reach.
A typical conservative approach is recognizable in the aeronautic industry where, for example, the bonding assembly, even if demonstrated safe, is not allowed in the structural parts and will be excluded until everything is known about it. Differently, in the consumer goods or in the electronic industry any new design is rapidly pursued to feed the market need for novelties.
1.2 Objectives of the proposed approach

This paper has as its aim the proposal of a technique to support the industrial developer in reaching greater effectiveness in design and analysis of experiments when some constraints impose a small number of trials. A method is illustrated to perform the "pooling" of residuals in the Anova analysis, identifying the optimum level of degrees of freedom to pool in the error estimation. Thus, the pooling technique becomes less discretionary and the argued point of arbitrary handling of data is attenuated. In the proposed procedure, the guiding criterion is the maximization of experiment sensitivity and the control of risks connected to the typical design decision making.
2 Risks Related to the Experimental Design Decisions
In the decision
between two
making process regarding
alternatives, coherently with the
different kinds of risk can be faced: the first
design
if it
the design solution to adopt
is
ongoing reasoning, two
the possibility to accept the
does not provide any improvement; the second
when it actually
known respectively
is
to refuse a
new
new
design
provides an improvement. The two related mistakes,
also
as type
used in the
statistics
hypothesis'
and type
I
tests,
II
are defined
and
to as the null (Hq)
and
errors (exhibit
and are referred
1),
alternatives (Ha) hypothesis where:
new design does not provide improvement.
Ha: The new design provides improvement.
Hq: The
so the decision turns in to accept or reject the null hypothesis (Hq).
Exhibit 1. Types of error occurring in the decision making.

                            REALITY
DECISION                Improvement            No Improvement
Accept New Design       OK (1-β)               Type I error (α)
Refuse New Design       Type II error (β)      OK (1-α)
These risks are identified by the probability of their occurrence, which are commonly called α and β: P(reject H0 | H0 is true) = α and P(accept H0 | H0 is false) = β.
All these expressions change if we consider the situation where the decision must be made among more than two design alternatives in the same experiment, which means more factor levels. If we have k levels, and then k null hypotheses, we should differentiate between the probability of erroneously rejecting one null hypothesis (the already defined type I error) and the probability of erroneously rejecting at least one null hypothesis, which is called experimentwise error [Mason R.L., 1989]. In fact, in this case the null hypothesis can be written as:

H0: τ1 = τ2 = ... = τk = 0;

and the probability of occurrence of this error (E) is expressed by:

E = 1 - (1-α)^(k-1)

which makes clear the relationship between them: if only the type I error is controlled, the experimentwise error can be much greater in the presence of several factor levels. For k=5, even fixing α strictly (0.05), we have E≈0.2.
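The formula above can be checked with a few lines of code; this is a minimal sketch (function name is ours, not from the paper):

```python
def experimentwise_error(alpha: float, k: int) -> float:
    """Experimentwise error E = 1 - (1 - alpha)^(k-1): the probability
    of erroneously rejecting at least one of the k - 1 comparisons,
    each tested independently at level alpha."""
    return 1.0 - (1.0 - alpha) ** (k - 1)

# The example quoted in the text: k = 5 levels, alpha fixed at 0.05.
E = experimentwise_error(0.05, 5)
print(f"E = {E:.3f}")  # ~0.185, i.e. the E = 0.2 quoted in the text
```

For k = 2 (a single comparison) the expression collapses back to E = α, which is why the distinction only matters with several factor levels.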
2.1 The Experimental Strategy

The acceptable levels of the experimental risks are the keys to defining the outline of the strategy and the approach to follow. A conservative approach will require a low level of the α/E-error and a high level of the β-error, and vice versa for a revolutionary approach. A neutral tendency can be followed by setting the two errors equally, but we should be aware that setting both errors at a high level conduces to a very unproductive experiment, while setting them at a low level could imply a very sophisticated and expensive plan. Working in an environment with many constraints, like that which industrial engineers usually face, requires a more flexible, even if less precise, technique to perform experiments with a useful outcome.
This explains the broad diffusion that the fractional factorial design and the Taguchi method have so far obtained. In fact, short of the case of unlimited resources, and therefore of the willingness to perform expensive experiments, fitting the strategy to the constraints often complicates the design greatly, making a too sophisticated pattern of the experiment necessary. Traditionally, the industrial experimenter overcomes these difficulties by not controlling the β-error, taking risks that, tending to refuse new design solutions, penalize mostly the customers and consequently the long term firm quality.
2.2 Main Elements of the Experimental Design

The outcome of the experiment is not known, even if, once the framework of the product and the pattern of the experiment are defined, the experimental phase is entirely settled [de Falco M., 1994]. Without describing all the steps in planning the experimental phase, we can identify some of the main elements, strongly interrelated, that influence the validity of the results. These are: the probabilities of the errors previously introduced (α/E, β), the knowledge of an initial error estimation (σe), the sample size of each average to investigate (n), and the precision of the experiment (δ).
The precision is defined as the capacity to detect a change in the response of the experiment of at least the size δ, and is usually joined with the error standard deviation in the parameter Φ = δ/σe. In this way the relationship among these parameters (appendix A) can be written, in an implicit form, as:

f(α, β, Φ, n) = 0
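An explicit version of this implicit relationship can be sketched under a simplifying assumption of ours (error standard deviation known, so the normal distribution replaces the t used by the exact tables): given α, Φ and n, the residual type II risk is the mass of the shifted distribution left of the acceptance cutoff.

```python
from math import sqrt
from statistics import NormalDist

def beta_risk(alpha: float, phi: float, n: int) -> float:
    """Approximate type II risk beta for comparing two means with
    samples of size n, given the precision ratio phi = delta / sigma_e.
    Normal approximation; the exact tables use the noncentral t."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # two-sided cutoff
    shift = phi * sqrt(n / 2.0)                   # standardized detectable shift
    return NormalDist().cdf(z - shift)

# Illustrative values (not taken from the paper's tables):
print(round(beta_risk(0.05, 1.45, 14), 3))
```

Increasing n or Φ pushes β down, which is exactly the trade-off the implicit form f(α, β, Φ, n) = 0 encodes.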
It is also possible to visualize part of this relationship in exhibit 2, where the comparison of two normal populations with the same sample size is shown: the areas under the tails, respectively at the left and at the right of the experimental observation, represent the risks α and β; the sample size affects the curves, shrinking or enlarging their shape; and the precision considered, δ = Φσe, translates the curves. This shows clearly that in trying to improve the level of one parameter the others become harder to manage. For example, as also illustrated in the exhibit, imposing a greater precision (δ'<δ, Φ'<Φ), without changing the other parameters, automatically increases the risk β (β'>β).
Exhibit 2. Relationships among the main experimental design elements.

Thus, the total number of trials to run (N) is linked with the total levels of the factors to study (k) and the obtained sample size (n). In fact, the total number of trials comes from the product between these two parameters:

N = n·k

The number obtained must be compared with the number of factors and interactions of interest in the experiment, verifying that the necessary resolution is reached, and must be approximated to the next orthogonal array number N if we want to use a fractional factorial design. For experiments with different factor levels, the biggest number of levels included should be used. When an upperbound for the maximum number of degrees of freedom available in the experiment is settled by the technical and economical constraints, this relationship shows how a greater number of factors/interactions implies a reduction in the resolution, with the consequent confounding of the effects.
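The bookkeeping above can be sketched as follows. The list of array run sizes is illustrative (standard two-level orthogonal arrays L4, L8, L12, L16, L32, L64), not taken from the paper:

```python
# Two-level orthogonal array run sizes commonly tabulated; illustrative only.
OA_RUNS = [4, 8, 12, 16, 32, 64]

def trials_needed(n: int, k: int) -> int:
    """Raw total number of trials N = n * k."""
    return n * k

def next_orthogonal_array(n_trials: int) -> int:
    """Round N up to the next available orthogonal array size,
    as required for a fractional factorial design."""
    for runs in OA_RUNS:
        if runs >= n_trials:
            return runs
    raise ValueError("no tabulated array is large enough")

N = trials_needed(14, 2)            # n = 14 per group, k = 2 levels
print(N, next_orthogonal_array(N))  # 28 -> 32
```

This is the same rounding that, in section 4, turns the tabled n = 14 into n = 16 and a total of N = 32 trials.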
3 The proposed approach

The initial question is: how many and which factors should we include in the experiment? The answer necessarily lies on the a priori knowledge and hypotheses. In industrial applications, the phenomenon under study is usually a product performance, observed through a response factor, which depends on a subset of factors/interactions included in a potentially broader set of factors (exhibit 3). Here the known factors have been separated into uncontrollable and controllable; these last include the "control factors" of the experiment and two remaining categories: the steady factors, which are fixed during the experiment, and the variable factors, which do not have any kind of restriction. The dependence between response and known factors is a priori representable with a cause-effect diagram, in which we delineate the relationships to test in terms of significance and relevance.
Exhibit 3. Categorization of the factors influencing the phenomenon. [Diagram: known domain split into controllable and uncontrollable factors, with error and noise factors.]
The raw knowledge in the first stage of the experimental design is the natural cause of inefficiencies. In fact, the number of factors and interactions included in the analysis could be revealed to be greater or smaller than the correct one. If the number is smaller, and there are not many ways to realize that, the significance of the factors not included in the experiment is saved by fixing or minimizing them (steady factors), even if this implies an obvious tendency to neglect possible improvements of the product.
In the more frequent cases, when more effects have been included in the initial model, we face, coherently with the observations of the previous paragraph, a less effective use of the experimental analysis. Here, we could ask if there is a way to intervene after the experiment to reduce this inefficiency of the initial model. This is an argued point. The traditional scientists are skeptical about any form of handling the models after the data collection, even if they do not always keep a rigid position and tolerate some forms of manipulation [Dunn O.J., 1987; Paull A.E., 1950]. The new tendency developed among the sustainers of the modern, simplified industrial application of experimental design [Taguchi, Ross, Phadke, etc.] does not leave doubt about this possibility and advocates, to reduce the inefficiency of the model, the use of the "Pooling Technique".
This technique relies on the principle that if a factor or interaction does not have the relevant effect initially supposed, its variation estimation is nothing more than another estimation of the casual variation, and in this sense another estimation of the experimental error (exhibit 4). Therefore, pooling in the error the degrees of freedom of this factor/interaction, it is possible to improve the power of the test, since the degrees of freedom for the denominator increase and the F-distribution provides lower critical values.

Exhibit 4. Pooling.
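The drop in the critical value can be verified numerically. The sketch below, not from the paper, estimates the upper 5% quantile of the F-distribution by Monte Carlo using only the standard library (a chi-square with d degrees of freedom is a sum of d squared standard normals):

```python
import random

def f_critical_mc(df1: int, df2: int, q: float = 0.95,
                  reps: int = 20000, seed: int = 7) -> float:
    """Monte Carlo estimate of the upper q-quantile of F(df1, df2),
    built from ratios of scaled chi-square draws."""
    rng = random.Random(seed)

    def chi2(df: int) -> float:
        return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

    samples = sorted((chi2(df1) / df1) / (chi2(df2) / df2)
                     for _ in range(reps))
    return samples[int(q * reps)]

# Pooling adds degrees of freedom to the error (denominator) term:
print(f_critical_mc(1, 3))   # near the tabled F[0.95; 1, 3] = 10.13
print(f_critical_mc(1, 22))  # near the tabled F[0.95; 1, 22] = 4.30
```

With the same numerator, moving the error degrees of freedom from 3 to 22 roughly halves the hurdle an effect must clear, which is exactly the power gain the pooling technique exploits.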
3.1 Underlying Principles

A rule to enforce the validity of this technique can be the a priori definition of if and how to pool the factors/interactions, reducing, in this way, the form of biasing and the arbitrary handling of the data. To pursue this track we need a criterion to decide if and how we can pool, and a criterion to follow in deciding how many factors to pool. For example, Paull suggested the rule of proceeding to pool if the F value obtained in a preliminary test is less than the critical value determined by 2·F[0.5, ν1, ν2] [Paull A.E., 1950].
3.2 Pooling methodology

The criterion proposed here for the pooling technique is based on finding the optimum level for the error degrees of freedom which reduces the probability of a type II mistake. This is obtained by observing that, analogously to the reasoning made in paragraph 2.2, the type II error probability β is always linked to the capacity of the experiment to detect the initial precision (δ = Φσe) as a change in the averages, and this, if we accept the assumptions, becomes a function of the estimated error standard deviation.

The detection capacity will be called the a posteriori "sensitivity" of the experiment and defined using Fisher's Least Significant Difference (LSD):

SEN = ± t[α/2, ν] (2/n)^(1/2) Se

where the LSD is relative to two samples of the same size, and t[α/2, ν] is the t-distribution quantile with an upper-tail probability of α/2; ν = degrees of freedom of the error; n = size of each sample compared; Se = error standard deviation. With more samples, to control the experimentwise error, the Bonferroni method can be adopted in the formula, assessing the probability of the type I error as α/2k, with k equal to the number of samples compared in the experiment.
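The sensitivity formula can be sketched as follows. As a simplification of ours, the t quantile is approximated by the normal one (adequate when the error degrees of freedom are large); the numeric inputs are illustrative, not the paper's:

```python
from math import sqrt
from statistics import NormalDist

def sensitivity(alpha: float, n: int, s_e: float, k: int = 2) -> float:
    """A posteriori sensitivity SEN = t[alpha/2, v] * sqrt(2/n) * Se,
    with the Bonferroni correction alpha/(2k) for k compared samples
    and the t quantile replaced by the normal one (large-dof sketch)."""
    z = NormalDist().inv_cdf(1.0 - alpha / (2.0 * k))
    return z * sqrt(2.0 / n) * s_e

# Illustrative numbers: alpha = 0.05, n = 16, Se = 0.0277, k = 2 samples.
print(round(sensitivity(0.05, 16, 0.0277), 4))
```

As expected from the formula, a larger sample size or a smaller error standard deviation yields a smaller (better) detectable difference.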
Then the relationship between the β probability and the sensitivity can be obtained simply with the function f[α, β, Φ(Se), n] = 0, available in table form (appendix A). This linkage allows us to draw completely the course of these parameters once the behavior of the error standard deviation is known.
Error behavior

The error variation is necessarily an expression of the system designed for the experiment, which is to say that the error variation depends on the external noise and on the experimental factors (exhibit 5). These sources of variation can, in the fractional design, also be confounded with high order interactions considered negligible.

Exhibit 5. Sources of error variation.

Experimental Error:
EE1 = Erroneous parameters setting
EE2 = Erroneous response measurement
EE3 = Erroneous control of steady factors

External Noise:
EN1 = Variation due to variable factors
EN2 = Variation due to uncontrollable factors
EN3 = Variation due to unknown factors
When a control factor of an experiment, or an interaction under study, reveals a variation lower than the error variation, pooling them together gives a smaller error variation, and vice versa. In a physical interpretation, this shows how increasing the number of small casual sources of variation in the error allows us to smooth its estimation and to approach the normal distribution, while adding greater sources of variation corresponds to including well identified sources in the error cause, and thus the error estimation increases. This does not alter the concept of "error", since by definition the error is always due to a sum of the causes that we do not control. When they are many, small and not distinguishable, they give to the error the normal distribution behavior necessary for the analysis of variance.
Finding the optimum level

Observing that an increase of the error degrees of freedom influences both the sensitivity of the experiment and the power of the F-test, we can draw the conceptual courses of these parameters in exhibit 6.

Exhibit 6. Experimental parameter courses.

The optimum level also represents a solution to the possible problems, biased by the wishes initially discussed, shown by the Pooling Up and Pooling Down strategies [Ross P.J., 1988]. The first strategy tests the smallest effect against the next larger; if not significant, these are pooled to test the next one. In the Pooling Down strategy all but the larger effects are pooled; then, if the test is significant, the next larger is removed from the pool and tested with the previous effects against the remaining. In exhibit 6 we see that, in addition to the subjectivity in defining the level of significance each time, the two strategies are subject to different tendencies:

Aup = tendency to consider more significant factors;
Adn = tendency to consider less significant factors.

These strategies both affect the type I and II mistakes, and the two tendencies constitute different kinds of effectiveness related to the experiment. In particular, the pooling up strategy (most preferred) does not really control the risks, because considering more significant factors, without a sustainable level, obviously reduces the tendency to make type II mistakes but does not give effectiveness to the experiment.
4 The Powered Fruit Juicer application

The proposed approach will be illustrated with an application of design of experiments on two Powered Fruit Juicers (PFJ). The familiarity of the reader with classic design of experiments and the Anova analysis is assumed, so as to focus the attention on the subsequent steps:
i) Product Description;
ii) Sample size determination;
iii) Experiment conduction;
iv) Pooling Technique.
Product Description

The PFJs are a typical consumer product, diffused in any home and mostly used for oranges; they simply work with the principle of the head rotation moved by an electric engine. The performance observed is the juice yield of the machine, in terms of the ratio between the juice produced and the juice remaining in the fruit. Exhibit 7 synthesizes the factors related to the performance; this categorizing is coherent with the factor grouping made in paragraph 3, in which the control factors are indicated as "selected factors". This differentiation of the factors is also useful to understand how the error variation and the controlled variation are generated. This product causes additional difficulties due to the presence of the power factor nested in the design factor (head shape), impeding, in this way, the evaluation of the interaction between them.
Exhibit 7. Factor categorization and selection.

KNOWN DOMAIN - Controllable - Selected (control factors):
  Architecture factors: Head shape; Power (Engine)
  Setting factors: Rotation speed; Squeezing time; Axis slope; Vertical pressure
KNOWN DOMAIN - Controllable - Unselected:
  Steady factors: Oranges dimension; Head dimension; Experimental time; Operator; Lateral pressure
  Variable factors: Orange shape; Material; Filter shape
KNOWN DOMAIN - Uncontrollable:
  Minimized factors: Flash control
  Uncontrolled factors: Envir. temperature; Envir. humidity
UNKNOWN DOMAIN: unknown factors (external noise / experimental error)

Architecture factor = factor embodied in the prototypes;
Setting factor = factor varied in the experiment;
Steady factor = factor settled for all the experiment;
Variable factor = factor not controlled;
Minimized factor = factor with reduced variability;
Uncontrolled factor = factor uncontrollable.
Sample Size Determination

This step is worth mentioning because all the risks and the experimental precision are defined in this phase; any subsequent intervention has as its target only the full use of the features defined here, but not their increment. The total number of trials was calculated, as already introduced, using tables available in the literature [Davies O.L., 1956], which express the relationship f(α, β, Φ, n)=0 (appendix A), where Φ = δ/σe is the ratio between the precision of the experiment and its error standard deviation, and n is the sample size of each group compared (equal, for a two level factorial design, to half of the total number of trials).

In the design of experiments of the PFJ, because of the total lack of knowledge, a preliminary test was conducted to estimate σe, which came out to be approximately 0.0277. The risks α and β were both fixed at 0.05. In table A1 of appendix A [Davies O.L., 1978], with Φ=1.45 (=0.04/0.0277), we find n=14; the closest orthogonal array is n=16=N/2, and then the total number of trials will be N=32.
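The tabled relationship can be approximated with a closed form; this is a sketch of ours using the normal approximation n ≈ 2((z_{1-α/2} + z_{1-β})/Φ)², while the exact tables (based on the noncentral t) give the slightly larger n = 14 used above:

```python
from math import ceil
from statistics import NormalDist

def sample_size(alpha: float, beta: float, phi: float) -> int:
    """Normal-approximation solution of f(alpha, beta, phi, n) = 0:
    per-group sample size for detecting a shift of phi error
    standard deviations between two means."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1.0 - alpha / 2.0)
    z_b = nd.inv_cdf(1.0 - beta)
    return ceil(2.0 * ((z_a + z_b) / phi) ** 2)

# PFJ values: alpha = beta = 0.05, phi = 0.04 / 0.0277 ~= 1.45
print(sample_size(0.05, 0.05, 1.45))  # 13; the exact tables give 14
```

The approximation undershoots the tables by roughly one unit here, which is the usual behavior when the error variance is itself estimated.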
Experiment Conduction

The pattern of the experiment provides a full factorial for the architectural factors and a half factorial for the setting factors; thus we obtained a resolution V for the architectural factors and a resolution III for the setting factors, plus a resolution IV for the interactions between the factors of the two different categories, and the confounding imposed by the nested structure. The pattern is recognizable in exhibit 8 thanks to the positioning of the architectural factors on the top of the matrix, where the full factorial implies the presence of the combination of all the factors. On the side we have the setting factors, where the half fractional design implies the presence of only half of the data, but the orthogonality of the design guarantees the possibility of comparing any factor at each of its two levels (Appendix C).

Exhibit 8. Matrix of experiment for the PFJ case with the results.
The sequence of the runs was random [Montgomery D.C., 1984]. The randomization of the trials is an argued point because, against the common perception, randomization is very expensive, and the tendency to conduct grouped runs often compromises the quality of the experiment [Pignatiello J.J., 1992].
The Pooling Technique

The analysis was started assuming negligible all the three factor interactions and all the second order interactions but those relative to the nested factor; this is already a form of effects pooling, even if it does not appear in the Anova tables. In table 1 the initial model is shown, and the p-Value is also provided, which gives the smallest significance level at which the null hypothesis can be rejected. The last column supplies the percentual contribution of the effect variation with respect to the total variation and is helpful in understanding the behavior and the impact of each factor and interaction (Appendix B).

Table 1. Initial Anova table.
Then, pooling all effects (residual) with an F-ratio less than 1, as illustrated before, the error degrees of freedom became 22 and we obtained table 2. Here the parameters of the experiment are: Sen2=ww, MSe2=.005 and β2=ss. The last column gives the ratio between the actual and preliminary error mean squares (MSe/MSep).

Table 2. Anova table after pooling.

Table 4. Experimental Parameters in the PFJ case.
6 References

Box G., Bisgaard S., Statistical Tools for Improving Designs, Mechanical Engineering, January 1988, p. 32-41.

Bullington K.E., Hool J.N., Maghsoodloo S., A Simple Method for Obtaining Resolution IV Designs for Use with Taguchi's Orthogonal Arrays, Journal of Quality Technology, vol. 22, October 1990, p. 260-264.

Cochran W.G., Cox G.M., "Experimental Designs", 2nd Edition, John Wiley & Sons Inc., New York, and Chapman & Hall Ltd., London, 1957.

Dehnad K., "Quality Control, Robust Design, and Taguchi Method", AT&T Bell Laboratories, Wadsworth & Brooks - Pacific Grove, California, 1988.

de Falco M., Ulrich K.T., "Integrating the Choice of Product Configuration with Parameter Design", Working Paper, Sloan School of Management MIT, December 1994.

Desu M.M., Raghavarao D., "Sample Size Methodology", Academic Press, Harcourt Brace Jovanovich, Publ. - Boston, New York, London, 1990.

Davies O.L., "The Design and Analysis of Industrial Experiments", Longman Group Limited, New York, London, 1978.

Diamond W.J., "Practical Experimental Designs for Engineers and Scientists", Lifetime Learning Publications, Belmont, California - A division of Wadsworth Inc., 1981.

Dunn O.J., Clark V.A., "Applied Statistics: Analysis of Variance and Regression", 2nd Ed., John Wiley & Sons, New York, 1987.

Goh T.N., An Organizational Approach to Product Quality via Statistical Experiment Design, International Journal of Production Economics, vol. 27, 1992, p. 167-173.

Hunter J.S., Statistical Design Applied to Product Design, Journal of Quality Technology, vol. 17, N.4, October 1985.

Kacker R.N., Lagergren E.S., Filliben J.J., Taguchi's Orthogonal Arrays Are Classical Designs of Experiments, Journal of Research of the National Institute of Standards and Technology, vol. 96, sep-oct 1991.

Mason R.L., Gunst R.F., Hess J.L., "Statistical Design and Analysis of Experiments", John Wiley & Sons, New York, 1989.

Montgomery D.C., "Design and Analysis of Experiments", 2nd Edition, John Wiley & Sons, New York, 1984.

Phadke M.S., Kackar R.N., Speeney D.V., Grieco M.J., Off-Line Quality Control in Integrated Circuit Fabrication Using Experimental Design, AT&T Technical Journal, 1983.

Pignatiello J.J. Jr., Ramberg J.S., Top Ten Triumphs and Tragedies of Genichi Taguchi, Quality Engineering, 4(2), 1991-92, p. 211-225.

Ross P.J., "Taguchi Techniques for Quality Engineering", McGraw-Hill, New York, 1988.

Tsui K.L., An Overview of Taguchi Method and Newly Developed Statistical Methods for Robust Design, IIE Transactions, vol. 5, November 1992.

Tsui K.L., Strategies for Planning Experiments Using Orthogonal Arrays and Confounding Tables, John Wiley & Sons Ltd., 1988.

Ulrich K.T., Eppinger S.D., "Product Design and Development", McGraw-Hill, Inc., New York, 1994.

Wayne A. Taylor, "Optimization & Variation Reduction in Quality", McGraw-Hill, New York, 1991.

Winer B.J., "Statistical Principles in Experimental Design", McGraw-Hill, Inc., New York, 1970.
Appendix A

Table A1. Sample size requirements for tests on the difference of two means from independent normal populations, equal population standard deviations.
Appendix B

Percent Contribution Calculation

The variance calculated in the fixed levels experiment for a factor or interaction, listed in the Anova Table, contains some amount of variation due to the error [Phadke M.S., 1983; Ross P.J., 1988]. The generic observed factor variance (Vx) can be written as:

Vx = V*x + Ve

where V*x is the variance due solely to the factor x and Ve is the variance of the error. The pure variation of factor x can be isolated as:

V*x = Vx - Ve

and since the variance in the Anova Table is expressed as the ratio of the sum of squares and the degrees of freedom of the factor (Vx = SSx/νx), we have:

SS*x/νx = SSx/νx - Ve

Solving for SS*x we obtain:

SS*x = SSx - Ve·νx

then the percent contribution of the factor x (Πx), with respect to the total variation expressed in terms of the sum of squares, can be calculated as:

Πx = SS*x / SStot × 100

The contribution of the error (residual) is then calculated replacing all the variation removed by the factors and interactions:

Πe = (SSe + dof F.I. · Ve) / SStot × 100

where dof F.I. is the total number of degrees of freedom available for factors and interactions. This correction is necessary to avoid overestimating the contribution of the effects, and it is the cause of the presence of the negative numbers in the initial Anova table.
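The derivation above can be sketched as a small routine; the Anova numbers in the example are hypothetical, only to show the bookkeeping:

```python
def percent_contributions(ss: dict, dof: dict,
                          ss_error: float, dof_error: int) -> dict:
    """Percent contributions per Appendix B: for each factor x,
    SS*x = SSx - Ve * dof_x and Px = SS*x / SStot * 100; the removed
    variation is credited back to the error (residual) term."""
    ss_tot = sum(ss.values()) + ss_error
    v_e = ss_error / dof_error                    # error variance Ve
    contrib = {x: (ss[x] - v_e * dof[x]) / ss_tot * 100.0 for x in ss}
    dof_fi = sum(dof.values())                    # dof of factors/interactions
    contrib["error"] = (ss_error + dof_fi * v_e) / ss_tot * 100.0
    return contrib

# Hypothetical sums of squares and degrees of freedom:
c = percent_contributions({"A": 50.0, "B": 30.0}, {"A": 1, "B": 1},
                          ss_error=20.0, dof_error=10)
print(c)  # contributions sum to 100
```

Note that a factor whose SSx is smaller than Ve·νx yields a negative contribution, which is exactly the signal, mentioned in the text, that the effect is a candidate for pooling.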
Appendix C

Orthogonal matrix of the experiment

Table C1. Random sequence of the trials and results of the PFJ experiment.