Integrating the Choice of Product Configuration
with Parameter Design

M. de Falco and K. T. Ulrich

WP #3749-94
December 1994
Integrating the Choice of Product Configuration
with Parameter Design

Massimo de Falco
University of Naples "Federico II"
Dpt. of Materials and Production Engineering
P.le Tecchio, Naples 80125 - Italy

Karl T. Ulrich
Massachusetts Institute of Technology
Sloan School of Management
Cambridge, MA 02139 - USA

December 1994
Abstract
Parameter design and experimental techniques for developing new products essentially focus on the optimization of a single feasible configuration of a product. This paper, using an example, illustrates how to extend factor optimization to system design, to support the designer in the choice of the product configuration.

The proposed approach integrates the design phases to reach the best configuration using a process aimed at subsequent improvement of the product, providing confidence in the information underlying the decision-making process, and correctly testing all hypotheses and assumptions. In this way, this paper is a further attempt to overcome barriers in industrial practice by combining engineering knowledge with effective statistical techniques.
1 Introduction
The current concept of quality indicates that quality must be "designed in" the product, and that other forms of intervention present limitations and may not be cost effective [Unal R., 1993]. In typical product development practice, "Quality by Design" consists of a three-step approach: system design, which applies the available knowledge to produce a basic functional prototype; parameter design, which identifies the settings of parameters that optimize the system performance; and tolerance design, which establishes limits on allowable variation in the parameters [Dehnad K., 1988]. However, this methodology is only a partial upstream extension of design for quality, since it works only on a predefined product concept, and thus does not include the optimization of the product configuration.
Using the Powered Fruit Juicer example, in this work we will show how experimental design can be used to evaluate complex architectural factors, such as different design solutions for head shape, or various technologies, such as two systems of head rotation, and how these choices affect final performance. By including system design in the experimental optimization, this approach moves the development process naturally toward greater integration.
The structure of this paper reflects the temporal differentiation in the underlying methodology between the a priori stage, in which the "product framework" and the "experiment pattern" are defined, and the a posteriori stage, in which, after experimentation, analysis and feedback are conducted to complete the development cycle.
1.1 Current Problems with System Design
The typical new product development approach in industry has been characterized by strong pressure to reduce time and improve quality, while the lack of planning and statistical tools is still strongly felt in all phases of development [Goh T.N., 1991]. One of the consequences of this is a process where efforts are concentrated on the realization of the first prototype and its subsequent optimization through the definition of tolerances [Unal R., 1993]. Pushing to reach quickly a first feasible product architecture, with subsequent rough tuning of parameters under cost and quality constraints, has already been demonstrated to be ineffective.
As Taguchi pointed out a long time ago, to make products with high performance and robustness, a design process to optimize all key factors is necessary. This phase, known as parameter design, identifies the product's parameter settings that reduce the sensitivity to sources of variation. Even if this approach is essentially the classical Design of Experiments [Kacker R.N., 1991], Taguchi popularized it in industrial applications [Pignatiello J.J., 1992].

Concentrating efforts on the first selected product solution, in a strongly sequential way that allows neither subsequent iteration nor ongoing feedback, represents a weakness in this traditional approach. This tendency endures in spite of the possibility offered by the prototype as a milestone to close the loop in the early stages of product design, and of the verifications required to validate the underlying assumptions.

1.2 Objectives of Proposed Approach
The present work points out the importance of extending factor optimization back to system design, with the integration of design phases where the architecture of the product is not yet frozen (Exhibit 1). This main objective is combined with the possibility to:

- predispose and activate a continuous improvement process;
- test correctly, with statistical analysis, all hypotheses and assumptions;
- know the reliability and confidence of the information underlying the decision-making process.
Exhibit 1. Extension of the approach in the development process. [Figure: the extended approach spans Idea-Need Analysis; Architecture Definition (Features Identification, Cause-Effect Analysis, Feasibility Analysis, Solutions Compilation, Alternatives Comparison, Architecture Selection); and Optimization (Parameter Setting, Variation Reduction, Tolerance Setting).]

These goals are intended to achieve flexibility and control in the design process.
The flexibility is relevant because it avoids a situation in which, after the prototyping of the product, the configuration is treated as fixed. In fact, the early freezing of the architecture has a strong impact both on finding the best overall solution for the product and on the creation of knowledge about other alternatives. This, integrated with the feedback, becomes one of the keys to move the process toward the optimum, and to formalize the knowledge needed to avoid common mistakes occurring in the early stages of the design phases.
2 Approach

2.1 The "Powered Fruit Juicer" Example

An application of the proposed approach has been conducted on a consumer product, the Powered Fruit Juicer (PFJ), using brands already on the market as prototypes for the development of a potential next generation.
This example will be used to illustrate the different steps of the present work, referring, in cases where the example does not apply, to hypothetical situations. Powered Fruit Juicers are home products mostly used for oranges, and all manufacturers offer products with similar designs. The juicers essentially work through head rotation while the user holds the fruit and applies pressure. These products exhibit some interesting features that are perhaps driven by the developer's experience and by marketing considerations rather than by objective performance evaluation. For this reason this example appears to be appropriate for an application of the proposed approach, which introduces architectural performance evaluation as part of the product design process.
2.2 Logic and Stages of the Approach

In our approach, two main concepts partially differentiate the design process from the traditional methodology. These are: the "framework of the product" and the "pattern of the experiment" (Exhibit 2). The first relates to the product/process under evaluation and synthesizes present knowledge about it. The second refers to the planned experimental process, respecting constraints and objectives, to improve the level of knowledge. The approach also identifies two stages of the improvement cycle, a priori and a posteriori of the experiment.
With only one iteration, as in the PFJ case, the design process provides the best configuration of the product and updates the related know-how.

2.2.1 Framework of the Product
The concept of framework is an upstream extension of the product architecture. In fact, it includes the final architecture and other feasible alternatives, and is expressly defined to apply during the experimental phase. In this way the framework constitutes the experiment plan relative to the product and, although interrelated with the pattern of experimentation, it helps to simplify and to clarify the process.

In order to settle the product framework, as illustrated in the a priori stage of Exhibit 2, it is necessary to determine the predominant performance of the product, the hypotheses to investigate, and the feedback plan.
• The performance under evaluation, and the associated response factor. This is a delicate point because, through a cause-effect analysis, the performance will be linked to the key factors, and the response factor can be affected by either controllable or uncontrollable factors. In the PFJ case the performance under evaluation was the juice yield, and the response factor was the ratio between the juice produced and the juice that remained in the fruit. The characteristic of this response factor is its very low sensitivity to the "juiciness" of a specific fruit; it remains, however, sensitive to the dimension and the shape of the fruit, a critical element in the experimentation.

Exhibit 3. System factors table.
Exhibit 3 distinguishes a known domain (controllable factors, either selected for the experiment or unselected, and uncontrollable factors) from an unknown domain. For the PFJ application the factors are:

- Selected, controllable — architecture factors: head shape, power (engine), rotation technique; setting factors: vertical pressure, squeezing time, axis slope.
- Unselected, controllable — steady: lateral pressure, head dimension, operator, experimental time, material, rotation speed, rash control, filter shape; minimized: orange dimension.
- Uncontrollable — variable or uncontrolled: orange shape, environmental temperature, environmental humidity.
- Unknown domain — sources of experimental error and external noise.

Legend: Architecture factor = factor embodied in the prototypes; Steady factor = factor settled for all the experiment; Minimized factor = factor with reduced variability; Setting factor = factor varied in the experiment; Variable factor = factor not controlled; Uncontrolled factor = factor uncontrollable.
• The principal hypotheses to investigate. These somehow synthesize the engineering and scientific knowledge about the phenomenon under evaluation. They are: the classification of the factors playing significant roles, with the choice of the selected factors, which are considered responsible for the main effects on the performance. Exhibit 3 shows the different categories using the factors identified for the PFJ application with oranges.
The selected factors, which will be directly observed, are divided into architecture factors and setting factors; the first group constitutes the peculiarities of the product concept or configuration. In the case of the PFJ, this group is composed of two design factors, (1) the head shape (Exhibit 4) and (2) the electric power, applied with different motors to the heads, and then a technological factor, the head rotation scheme, where the first type is a one-way standard rotation and the second type is an alternated rotation in two directions (Exhibit 5). The architecture factors, as distinct from the setting factors, require more prototypes or a modular prototype with the possibility to exchange the parts during the experiment, and constitute the critical elements to investigate.
Exhibit 4. The two different solutions of head shape (Design Factor: Head Shapes, Type I and Type II).

It is worth mentioning that not all the categories identified for unselected and uncontrollable factors have to be present, but all of them influence the experimental results and are helpful for correctly planning the experiment.
The choice of juice yield as response factor implies the minimization of variation in the orange dimension, to reduce the uncontrolled variability in the experiment; the shape of the orange, instead, was left uncontrolled because of technical difficulties in measurement.
Exhibit 5. The two different technologies applicable to the PFJ (Technology Factor: Head Rotation; Technology I: one-way rotation; Technology II: two-way rotation).

• The interactions among the factors selected. Exhibit 6 presents an interaction diagram showing the relationships among the factors and their intensities. These connections are generally based on previous experiences and reflect present knowledge.

Exhibit 6. Interactions diagram, the a priori hypotheses on interactions (legend: strong effect; weak effect).
• Scaling and leveling systems. Assuming a metric and defining the levels at which the factors have to be observed, it is necessary to divide the factors into different types (Table 1). With architecture factors we can frequently face factors that are not measurable and/or not continuous. For continuous factors, on the other hand, we must determine two levels and their middle point; in the PFJ case the "range" was defined as the ratio between the difference of the two levels and their middle point, and this range should be significantly close to the percentage of the known or supposed variation of the factor. Erroneous definitions of levels can lead to altered conclusions, as we will see for the time factor.

Table 1. Leveling for the factors chosen.

It is important to plan the experiment in order to achieve the right data at the lowest cost. This is particularly true in the prototyping phase, often restricted by several technical and economical constraints. The central point in this tradeoff is the number of trials to run during the experiment, which is linked with the number of factors and interactions we can investigate and with the precision of the data collected. Once this boundary is defined, there are still many parameters and structures to define in order to conduct the experiment that will yield the desired outcome. All these parameters, along with the structure choices, determine the "pattern of experiment".
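As a numerical illustration of the range metric just defined, the following sketch computes it for a continuous factor; the level values here are hypothetical, not the PFJ settings.

```python
# "Range" of a continuous factor as defined above:
# range = (difference of the two levels) / (their middle point).
# The levels 8 and 12 below are made-up illustrative values.

def level_range(low, high):
    """Ratio between the level difference and the middle point."""
    midpoint = (low + high) / 2.0
    return (high - low) / midpoint

print(f"{level_range(8.0, 12.0):.0%}")  # prints 40%
```

A range close to the known or supposed percentage variation of the factor keeps the experiment sensitive without leaving the region of interest.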
The outcome of the experiment, although unknown, is completely defined once we have the framework of the product under study and the definition of the pattern of experiment in terms of design constraints and leverages.
Design Constraints

• Physical constraints. Once all the factors are characterized and the controls during the experiment are selected, it is essential to understand if some restrictions exist in the measurement of these factors due to the characteristics of the prototypes. Some of the typical restrictions that can occur are "nested factors" and/or "split-plot factors". These are particularly frequent in prototype testing because architecture factors embodied in prototypes are often not separable. In fact, each prototype can constitute a specific solution that, because of the different design, assembly and parts, can interact with other factors, generating altered effects. In the PFJ pattern (Exhibit 7) we face a structure of prototypes where the power factor is nested in the design factor: precisely because of the two different electric motors, the output levels of power are similar but not equal, even with the same power input. An easy way to recognize if a factor (X) is nested in another (Y) is to verify whether each level of X is combined with every level of factor Y [Mason R.L., 1989; Winer B.J., 1970]. The main consequence of this restriction is that the effect of the power will be confounded with the design-power interactions, and these data will require a different treatment (Appendix A).
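The recognition rule quoted above can be sketched in a few lines; the trial lists below are illustrative stand-ins, not the paper's data.

```python
# Check whether factor X is nested in factor Y: X is crossed (not nested)
# only if every observed level of X occurs with every level of Y.

def is_nested(trials, x, y):
    """trials: list of dicts of factor levels; x, y: factor names."""
    y_levels = {t[y] for t in trials}
    seen = {}  # levels of Y observed for each level of X
    for t in trials:
        seen.setdefault(t[x], set()).add(t[y])
    return any(levels != y_levels for levels in seen.values())

# Illustrative PFJ-like structure: each motor (power level) is mounted
# on only one head design, so power is nested in design.
trials = [
    {"design": 1, "power": "motor_a"},
    {"design": 1, "power": "motor_a"},
    {"design": 2, "power": "motor_b"},
    {"design": 2, "power": "motor_b"},
]
print(is_nested(trials, "power", "design"))  # prints True
```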
Exhibit 7. Pattern of the Powered Fruit Juicer experiment. (Design pattern: combinations of the factor levels for the 1st and 2nd prototypes. Architecture factors: A) Design Type, B) Power, C) Technology. Setting factors: D1) Pressure, D2) Time, D3) Slope. Peculiarities of the pattern: Power nested in Design Type; full factorial for architecture factors; half fractional factorial (L4) for setting factors.)
• Economical-technical constraints. These constraints determine the number of prototype variants we can build and the number of trials feasible, and so constitute the upper bound for the total number of runs in the experiment.

• Knowledge constraints. Since this approach is structured as an iterative process, the quality of the pattern, and then of the data, is linked to previous knowledge, of which a fundamental element is the error standard deviation of the experiment (σe). The availability of this parameter allows a better calculation of the minimum number of trials that matches our expected reliability.
Design leverages are the values to assign to the free experiment parameters to fit the strategy and the design constraints.

• Acceptable risk. In the statistical analysis performed to identify the key factors, we can make two different types of error: a Type I error occurs when a factor is considered significant and in reality is not; a Type II error occurs when a factor is considered not significant and in reality it is. We prevent ourselves from making these errors by fixing the probabilities of their occurrence at a low level, P(Type I error) = α and P(Type II error) = β. In the PFJ case we fixed α = 0.05 and β = 0.05. The unusually strict control of the parameter β is justified by the importance of avoiding the rejection of a design solution if it makes an improvement in the new product performance.

• Precision of the experiment. This is the required capacity to detect a specific change in the average of a response factor due to a factor or interaction. This value should be defined in accordance with the variation considered of interest. Moreover, we should observe that the precision and the number of trials are strongly interrelated and must be defined coherently [Diamond W.J., 1981]. In the PFJ experiment the precision was δ = ±4% in the response ratio.
• Degrees of freedom. Once the previous parameters are defined, the total number of degrees of freedom (dof) available in the experiment (N-1) is determined. However, there still is the possibility to deploy these dofs among the factors, including the nested factor, and the error. In fact, more factors and interactions in the experiment mean fewer dofs available for the error; this is the reason why a high factors/trials ratio is not indicated for high-precision experiments.

• Resolution of experiment. The last choice to make is the resolution, with the related confounding, for each factor and interaction of interest, in the total number of trials (N), respecting the constraints, which in our case require one nested factor. Since the risk level, the precision (fixed as the capacity to detect a variation of the response) and the total number of trials are imposed, this choice must be made without changing the other parameters; having confounded factors means fewer dofs, and implies a high risk of making Type II errors in the experiment [Tsui K.L., 1988]. This choice, which completes the definition of the pattern of experiment, is important because it defines the factors that will be deeply understood and the factors that will remain somewhat uncertain due to confounded effects.

Table 2. Resolution and confounding for factors and interactions.
Any a priori knowledge about the impact of some effects could be a helpful guide in the resolution decision [Alkhairy A., 1991]. Accordingly with the factor categorization made above, in the PFJ example the architecture parameters were positioned in the highest resolution position, because an error in the architecture is more serious than an error in the setting of the parameters (Table 2). In this way we can perform few trials, reducing the quality of the data only where eventual errors would be less serious.
3 The PFJ Experiment
The experiment is completely designed once we have determined the previous elements in the framework and in the pattern. The total number of trials can be calculated using tables available in the literature [Davies O.L., 1956], which express the relationship f(α, β, Φ, n) = 0 (Appendix F), where Φ = δ/σe is the ratio between the precision of the experiment and the error standard deviation, and n is the sample size of each group compared (equal, for a two-level factorial design, to half of the total number of trials). In the design of the PFJ experiment, σe was not known because of a total lack of knowledge, so we conducted a preliminary test to estimate it [Davies O.L., 1978]; it turned out to be approximately 0.0277. In Table F1 of Appendix F, with Φ = 1.45 (= 0.04/0.0277), we find n = 14; the closest orthogonal array has n = 16 = N/2, and then the total number of trials will be N = 32. Another system is to define δ as a percentage of σe; this can be done with some conservative considerations [Diamond W.J., 1981; Mason R.L., 1989]. The minimum number of trials so calculated (32) matches the reduction imposed by the resolution choices.
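The table lookup f(α, β, Φ, n) = 0 can be approximated in closed form with normal quantiles, n ≈ 2·((z₁₋α/₂ + z₁₋β)/Φ)². This is a rough sketch of the calculation, not the exact Table F1 values; the t-based tables give the slightly larger n = 14 quoted above.

```python
from math import ceil
from statistics import NormalDist

def approx_group_size(alpha, beta, phi):
    """Normal-approximation sample size per group for detecting a mean
    shift of phi = delta / sigma_e with risks alpha (two-sided) and beta."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(1 - beta)) / phi) ** 2)

# PFJ values: delta = 0.04, preliminary sigma_e = 0.0277 -> phi ~ 1.44
phi = 0.04 / 0.0277
print(approx_group_size(0.05, 0.05, phi))  # prints 13
```

The normal approximation slightly underestimates the tabulated (t-based) value, which is why the paper's tables return n = 14.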
The resulting pattern of the experiment is a full factorial for the architecture factors and a half factorial for the setting factors; thus we obtained a resolution V for the architecture factors, a resolution III for the setting factors, and a resolution IV for the interactions between the factors of the two different categories.

The pattern is recognizable in Exhibit 8 thanks to the positioning of the architecture factors on the top of the matrix, where the full factorial implies the presence of all the combinations among the factors. On the side we have the setting factors, where the half fractional design implies the presence of only half the data, but the orthogonality of the design guarantees the possibility of comparing any factor at each of its two levels.
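The 32-trial structure just described can be reproduced with standard two-level coding; the defining relation D3 = D1·D2 for the half fraction is an assumption here, since the paper does not state which generator was used for the L4.

```python
from itertools import product

# Architecture factors A, B, C: full 2^3 factorial (levels coded -1/+1).
architecture = list(product([-1, 1], repeat=3))  # 8 combinations

# Setting factors D1, D2, D3: half fraction of 2^3, here via D3 = D1*D2
# (an assumed defining relation; the paper only says "half fractional, L4").
setting = [(d1, d2, d1 * d2) for d1, d2 in product([-1, 1], repeat=2)]

# Crossing the two blocks gives the 8 x 4 = 32 trials of the experiment.
design = [arch + sett for arch in architecture for sett in setting]
print(len(design))  # prints 32

# Orthogonality check: every pair of columns is balanced (dot product 0),
# which is what guarantees comparing any factor at both of its levels.
cols = list(zip(*design))
ortho = all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
            for i in range(6) for j in range(i + 1, 6))
print(ortho)  # prints True
```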
Before starting the experiment, it is useful to check whether all the possible sources of error variation have been considered, because too high an error variation can compromise the ability to distinguish factor effects. The main error variations, according to the previous categorization of factors, are conceptually illustrated in Exhibit 9. From this description, we see that the preliminary error estimation could be underestimated, since the "parameters setting" component did not appear in this error, because all the preliminary trials were made at the same factor levels. This is evidence of how the correct experimental error can be estimated only by an analogous experiment.

Exhibit 9. Components of the measured experimental error.
Experimental Error (measured error):
EE1 = erroneous parameters setting;
EE2 = erroneous response measurement;
EE3 = erroneous control of steady factors.
External Noise:
EN1 = variation due to variable factors;
EN2 = variation due to uncontrollable factors;
EN3 = variation due to unknown factors.
The experiment has been conducted with the fixed factor levels and randomized trials technique, which means that all factors were kept at defined levels and the sequence of runs was random [Montgomery D.C., 1984]. The randomization of trials is an argued point because, against common perception, randomization may be very expensive, and the tendency to conduct grouped runs often compromises the quality of the experiment [Pignatiello J.J., 1992]. The main difficulties are due to the setup and tuning needed to change factor levels between trials, which may significantly enlarge the experimental time. The random sequence and outcomes of the 32 trials conducted are illustrated in Table 3, where the two levels of the factors are indicated with indexes 1 and 2.

Table 3. Random sequence and outcomes of the trials.

In spite of the abundant available literature [Phadke M.S., 1989], this problem still presents many obstacles to adoption in industry.

Exhibit 10. Different analyses involved in the approach.
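A randomized run sequence of the kind shown in Table 3 can be generated mechanically; the seed below is arbitrary and only makes the plan reproducible.

```python
import random

# The 32 trial conditions are identified here simply by index 0..31;
# in practice each index maps to one row of the design pattern.
run_order = list(range(32))
random.Random(1994).shuffle(run_order)  # fixed seed for a reproducible plan
print(run_order[:5])
```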
Table 4a.

Also supplied in the table is the p-value, also known as the "probability value", which gives the smallest significance level at which the null hypothesis can be rejected. The smaller the p-value, the higher the chances that a factor or interaction is significant. Since high significance of a factor or interaction does not necessarily mean a strong impact on the final performance of the product, we also provide the percent contribution of each effect, which gives the percentage of the total variation due to a factor or interaction (Appendix B). This information is often important; in fact, as we note in Table 4 for the head-rotation factor, the significance level obtained (p<0.05) is related to a low impact on the performance variation (2%).
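The percent contribution can be sketched directly from the ANOVA sums of squares; the simple ratio SS_effect/SS_total is used here (the paper's Appendix B may use an error-corrected variant), and the values below are made up for illustration.

```python
# Percent contribution of each effect: the share of the total variation
# attributable to it. Sums of squares are hypothetical illustrative values.
ss = {"head shape": 4.1, "rotation": 0.2, "pressure": 1.3, "residual": 1.7}

total = sum(ss.values())
contribution = {k: 100 * v / total for k, v in ss.items()}
for effect, pct in contribution.items():
    print(f"{effect}: {pct:.1f}%")
```

A highly significant effect with a small contribution, like the head-rotation factor above, is real but practically minor.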
Pooling Technique

Although this is a controversial step [Davies O.L., 1978], from Table 4a we can improve the analysis by observing that the power and the time, with many other interactions, have a high p-value (p>0.5), and so their variation can be considered a component of the unexplained (casual) variation and pooled into the residual. Thus the residual reaches 22 degrees of freedom, which means more accuracy in the error estimation, and we can test better the remaining factors and interactions. Table 4b shows that the only interaction with a low p-value (p<0.1) is the design-slope interaction, so we keep it, pooling the other interactions to obtain the final model shown in Table 4c [Ross P.J., 1988] and the ANOVA table for the final model (Table 5).
From this information, beyond the significance of the model factors, interactions and main effects, we can determine the quality of the approach by observing the overall percent contribution of the model. Exhibit 11 shows the generic alternatives we can encounter; the bounds are arbitrary and only indicate a tendency rather than a rule.
In the PFJ case, the 83% contribution of the final model is sufficiently high, but still requires a careful analysis of the residual to explain eventual anomalies either in the framework or in the pattern of experiment.
Exhibit 11. Alternatives in the model contribution analysis (percentage of the total variation explained by the model):
- high contribution: we have the right data, the model adequately describes the phenomenon, and we can proceed with the analysis;
- intermediate contribution: we can be in the known domain, but relevant sources of variation are missed in the unknown domain, and it is likely necessary to perform a residual analysis to identify them;
- very low contribution: we don't have the right data, the model is inadequate, and it is necessary to change the framework.

A rough diagnosis of whether we have the right data or not can be made through the observation of the ratio between the dof available for the error and the total number of dof. When the ratio is high and the analysis was conducted correctly, the alternatives are that the missed key factors are in the unknown domain, or that one of the "not" selected factors that have been neutralized as steady factors is in fact relevant. This is a pertinent point in any case, because during the definition of the framework any architecture factor left out of the selected set behaves as a steady factor, and then it will not be possible to make any inference on it.
In the PFJ example, the head dimension was not included in the experiment, and so we did not know if it was a key factor. At the same time, we do not know if the performance has been significantly affected by the ratio of fruit and head diameter.
Residual Analysis

This part, often neglected, is essential to complete the preceding analyses and, most important, to verify if all the assumptions underlying the methodology are met. The analysis of residuals requires a previous regression analysis (Appendix C), from which we calculate fitted values and residuals. The verification of the assumptions and the interpretation of the residuals are two sides of the same coin: in fact, if we detect any pattern in the residuals, it means that some assumptions are not completely verified and therefore an unexpected effect can be in the residual, and vice versa [Montgomery D.C., 1984]. The assumptions of the analysis of variance are: normality, independence and homogeneity of residuals. There are many tests available in the literature and in some software to verify the normality and the independence of residuals, such as the Shapiro-Wilk test and the Durbin-Watson test [Dunn O.J., 1987]. The last assumption is the most important to accomplish, since the methodology is quite robust to the others.
A complementary and easy approach to this analysis is to plot the residuals against any of the factors or parameters that can be considered worrisome. In the PFJ residual analysis all the assumptions were verified, and it was impossible to identify any other form of unexpected variation; the error variation contribution obtained was explained by the low precision of the experiment (Appendix D).
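The two named tests can be run in a few lines; the residuals below are synthetic stand-ins for the regression residuals, and the Durbin-Watson statistic is computed directly from its definition.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.06, size=32)  # synthetic stand-in residuals

# Shapiro-Wilk: tests the normality assumption.
w_stat, p_norm = stats.shapiro(residuals)

# Durbin-Watson: checks independence; values near 2 suggest no
# autocorrelation in the run-ordered residuals.
diff = np.diff(residuals)
dw = np.sum(diff ** 2) / np.sum(residuals ** 2)

print(f"Shapiro-Wilk p = {p_norm:.3f}, Durbin-Watson = {dw:.2f}")
```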
Sensitivity of Experiment

After the experimental phase, although it was defined in the pattern of experiment, it is useful to recalculate the precision of the experiment, now called "sensitivity", with the formula coming from Fisher's Least-Significant Difference; this also gives the confidence interval for the difference of the estimated means [Mason R.L., 1989]:

Sen = ± t[α/2, ν] · (2/n)^(1/2) · Se

This is relative to samples of the same size, where t[α/2, ν] is the t-distribution value with an upper-tail probability of α/2, ν is the degrees of freedom of the error, n is the size of each sample compared, and Se is the error standard deviation. In the PFJ example, where α = 0.05, ν = 26, n = 16 and Se = 0.0608, we have:

Sen = ± 2.056 · 0.354 · 0.0608 = ± 4.42%

This indicates again the capacity of the experiment to detect a variation in the response factor average of ± 4.42%, which is greater than the precision previously imposed. This increment is essentially due to the underestimation of the error standard deviation in the preliminary test. Using the obtained Se, to reach the initial precision in the same conditions we should perform at least N = 2n = 128 trials (Φ = 0.04/0.0608 = 0.658); this number would be reducible only at the cost of some power of the experiment.
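The sensitivity computation above can be verified numerically:

```python
from math import sqrt
from scipy import stats

def sensitivity(alpha, dof_error, n, s_e):
    """Fisher LSD half-width: t[alpha/2, v] * sqrt(2/n) * s_e."""
    t_crit = stats.t.ppf(1 - alpha / 2, dof_error)
    return t_crit * sqrt(2.0 / n) * s_e

# PFJ values from the text: alpha = 0.05, v = 26, n = 16, Se = 0.0608
sen = sensitivity(0.05, 26, 16, 0.0608)
print(f"±{100 * sen:.2f}%")  # prints ±4.42%
```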
Multiple Comparisons

Another useful technique to be included in the framework is the multiple comparisons test, used to test the effects of factors and interactions among the key factors; its advantage is to be easy to calculate. The experimentwise (E) error is the probability of wrongly rejecting at least one null hypothesis when making statistical tests of two or more null hypotheses using the data from a single experiment [Mason R.L., 1989]. The relationship between the experimentwise error and the comparisonwise error (α) defined previously is:

E = 1 - (1 - α)^(k-1)

where k is the number of population means compared. From this expression it is clear that, even with a low comparisonwise error (i.e. α = 0.05), we can reach a high experimentwise error if the number of comparisons increases. The Bonferroni procedure was used to select the comparisonwise error (α/2k), which resulted to be approximately 0.005 [Ibid.]. The four averages of the head shape-slope interaction reported in the PFJ example (Exhibit 12) illustrate this.

Exhibit 12. Multiple comparison diagram for the head shape-slope interaction.
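Both expressions can be evaluated directly; the value k = 4 below (the four averages of the interaction) is our assumption about the number entering the adjustment, since the exact k the paper used is not fully recoverable from the text.

```python
# Experimentwise error from the expression in the text, E = 1-(1-a)^(k-1),
# and the Bonferroni-adjusted comparisonwise level a/(2k).
def experimentwise_error(alpha, k):
    return 1 - (1 - alpha) ** (k - 1)

def bonferroni_level(alpha, k):
    return alpha / (2 * k)

# With k = 4 means (an assumed value for illustration):
print(round(experimentwise_error(0.05, 4), 4))  # prints 0.1426
print(round(bonferroni_level(0.05, 4), 5))      # prints 0.00625
```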
The PFJ experiment revealed that the time factor was not significant. Since it is intuitive that between a very short time (1 sec) and a longer time (10 sec) there must be a difference, we have a relationship between the response and the time; from this rough deduction and from the curve (Exhibit 14), we observe that in reality the range chosen (R=46%) was not big enough to detect a change. From a practical point of view, however, this implies that, excluding short uses of the product, the performance is quite robust with respect to time. Analogous inferences can be made for the power factor.

Exhibit 13. Comparison of the interactions tables before and after the experiment (legend: factor nested, not evaluable; strong effect; weak effect; Rotation I; Rotation II).

Exhibit 14. Relationship between Time and Response factor.
The final, but no less important, information created is a better estimation of the error standard deviation. This allows more precise planning of any subsequent experiments.
6 Conclusion

Although the concept of quality keeps evolving [Gehani R.R., 1993] and many quantitative techniques have been developed for optimizing products, little emphasis has been placed on identifying all their possible applications. This work is a further attempt to move quality by design upstream, by introducing into the experimental phases factors typically neglected and otherwise not easily evaluable. Nevertheless, in the improving-through-learning process, it is necessary to place the proper emphasis on statistical competencies, which have to complement engineering skills.
7 References

Alkhairy A., "Optimal product and manufacturing process selection: issues of formulation and methods of parameter design", MIT Ph.D. thesis, Dept. of Electrical Engineering and Computer Science, 1991.

Barker T. B., "Engineering Quality by Design - Interpreting the Taguchi Approach", M. Dekker Inc., New York, N.Y., 1990.

Bendell T., Taguchi Methods, Proceedings of the 1988 European Conference, Elsevier Applied Science, London and New York, 1989.

Box G., Bisgaard S., Statistical Tools for Improving Designs, Mechanical Engineering, January 1988, p. 32-41.

Bullington K. E., Hool J. N., Maghsoodloo S., A Simple Method for Obtaining Resolution IV Designs for Use with Taguchi's Orthogonal Arrays, Journal of Quality Technology, vol. 22, October 1990, p. 260-264.

Cochran W. G., Cox G. M., "Experimental Designs", 2nd Edition, John Wiley & Sons Inc., New York, and Chapman & Hall Ltd., London, 1957.

de Falco M., Ferretti M., "Trasferibilità di una Tecnologia e Fattori Critici del Trasferimento", Internat. Conference AIRI-CNR-IFAP, Roma, Italy, 1993.

Dehnad K., "Quality Control, Robust Design, and Taguchi Method", AT&T Bell Laboratories, Wadsworth & Brooks - Pacific Grove, California, 1988.

Deming W. E., "Out of the Crisis", Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge, MA, 1986.

Desu M. M., Raghavarao D., "Sample Size Methodology", Academic Press, Harcourt Brace Jovanovich, Publ., Boston, New York, London, 1990.

Davies O. L., "The Design and Analysis of Industrial Experiments", Longman Group Limited, New York, London, 1978.

Diamond W. J., "Practical Experimental Designs for Engineers and Scientists", Lifetime Learning Publications, a division of Wadsworth Inc., Belmont, California, 1981.

Dunn O. J., Clark V. A., "Applied Statistics: Analysis of Variance and Regression", 2nd Ed., John Wiley & Sons, New York, 1987.

Gehani R. R., Quality value-chain: a meta-synthesis of frontiers of quality movement, Academy of Management Executive, vol. 7, No. 2, 1993, p. 29-42.

Goh S. H., "Improved Parameter Design Methodology and Continuous Process Improvement", Master Thesis, Leaders for Manufacturing Program, MIT, 1992.

Goh T. N., An Organizational Approach to Product Quality via Statistical Experiment Design, International Journal of Production Economics, vol. 27, 1992, p. 167-173.

Hunter J. S., Statistical Design Applied to Product Design, Journal of Quality Technology, vol. 17, No. 4, October 1985.

Kacker R. N., Lagergren E. S., Filliben J. J., Taguchi's Orthogonal Arrays Are Classical Designs of Experiments, Journal of Research of the National Institute of Standards and Technology, vol. 96, Sep-Oct 1991.

Ealey L. A., "Quality by Design - Taguchi Methods and U.S. Industry", ASI Press, Dearborn, Michigan, 1988.

MacDonald F. J. Jr., "An Integrative Approach to Process Parameter Selection in Fused-Silica Capillary Manufacturing", Master Thesis, Leaders for Manufacturing Program, MIT, 1992.

Mason R. L., Gunst R. F., Hess J. L., "Statistical Design and Analysis of Experiments", John Wiley & Sons, New York, 1989.

Montgomery D. C., "Design and Analysis of Experiments", 2nd Edition, John Wiley & Sons, New York, 1984.

Phadke M. S., "Quality Engineering Using Robust Design", Prentice Hall, Englewood Cliffs, New Jersey, 1989.

Phadke M. S., Kackar R. N., Speeney D. V., Grieco M. J., Off-Line Quality Control in Integrated Circuit Fabrication Using Experimental Design, AT&T Technical Journal, 1983.

Pignatiello J. J. Jr., Ramberg J. S., Top Ten Triumphs and Tragedies of Genichi Taguchi, Quality Engineering, 4(2), 1991-92, p. 211-225.

Robinson G. K., Improving Taguchi's Packaging of Fractional Factorial Designs, Journal of Quality Technology, vol. 25, January 1993, p. 1-11.

Ross P. J., "Taguchi Techniques for Quality Engineering", McGraw-Hill, New York, 1988.

Tsui K. L., An Overview of Taguchi Method and Newly Developed Statistical Methods for Robust Design, IIE Transactions, vol. 5, November 1992.

Tsui K. L., Strategies for Planning Experiments Using Orthogonal Arrays and Confounding Tables, John Wiley & Sons Ltd., 1988.

Ulrich K. T., Eppinger S. D., "Product Design and Development", McGraw-Hill, Inc., New York, 1994.

Unal R., Stanley D. O., Joyner C. R., Propulsion System Design Optimization Using the Taguchi Method, IEEE Transactions on Engineering Management, vol. 40, August 1993, p. 315-322.

Wall M. B., "Making Sense of Prototyping Technologies for Product Design", Master Thesis, Leaders for Manufacturing Program, MIT, 1991.

Taylor W. A., "Optimization & Variation Reduction in Quality", McGraw-Hill, New York, 1991.

Winer B. J., "Statistical Principles in Experimental Design", McGraw-Hill, Inc., New York, 1970.
Appendix A
Sums of Squares Calculation

The formulas used in the calculation of the sums of squares are reported to show the different approach necessary with factor B nested in factor A, and the mixed model of Full Factorial for factors A, B, C and Half Fractional for factors D1, D2, D3, which here implies that the interactions DiDj (for every i, j) are not evaluable.

Defining the generic response observation as Yijklmn, with indexes corresponding to factors A, B, C, D1, D2, D3 and varying between 1 and 2, we start the first step with the calculation of the Correction Factor (CF) [Montgomery D.C., 1984]:

CF = Y²....../2^5

where the dots indicate that the indexes have been totalized (summed over all 2^5 = 32 observations).
Then the Total sum of squares and all the factors' sums of squares can be expressed as:

SStot = Σi Σj Σk Σl Σm Σn Y²ijklmn - CF

SSA = Σi Y²i...../2^4 - CF
SSB(A) = Σi Σj Y²ij..../2^3 - Σi Y²i...../2^4
SSC = Σk Y²..k.../2^4 - CF
SSD1 = Σl Y²...l../2^4 - CF
SSD2 = Σm Y²....m./2^4 - CF
SSD3 = Σn Y².....n/2^4 - CF

and the interactions:

SSAC = Σi Σk Y²i.k.../2^3 - CF - SSA - SSC
SSB(A)C = Σi Σj Σk Y²ijk.../2^2 - Σi Σj Y²ij..../2^3 - Σi Σk Y²i.k.../2^3 + Σi Y²i...../2^4
SSAD1 = Σi Σl Y²i..l../2^3 - CF - SSA - SSD1
SSAD2 = Σi Σm Y²i...m./2^3 - CF - SSA - SSD2
SSAD3 = Σi Σn Y²i....n/2^3 - CF - SSA - SSD3
SSB(A)D1 = Σi Σj Σl Y²ij.l../2^2 - Σi Σj Y²ij..../2^3 - Σi Σl Y²i..l../2^3 + Σi Y²i...../2^4
SSB(A)D2 = Σi Σj Σm Y²ij..m./2^2 - Σi Σj Y²ij..../2^3 - Σi Σm Y²i...m./2^3 + Σi Y²i...../2^4
SSB(A)D3 = Σi Σj Σn Y²ij...n/2^2 - Σi Σj Y²ij..../2^3 - Σi Σn Y²i....n/2^3 + Σi Y²i...../2^4
SSCD1 = Σk Σl Y²..kl../2^3 - CF - SSC - SSD1
SSCD2 = Σk Σm Y²..k.m./2^3 - CF - SSC - SSD2
SSCD3 = Σk Σn Y²..k..n/2^3 - CF - SSC - SSD3
The sum of squares for the residual can be calculated as the difference between the total sum of squares and the sum of all the sums of squares previously calculated:

SSResidual = SStot - SSA - SSB(A) - SSC - SSD1 - SSD2 - SSD3 - SSAC - SSB(A)C - SSAD1 - SSAD2 - SSAD3 - SSB(A)D1 - SSB(A)D2 - SSB(A)D3 - SSCD1 - SSCD2 - SSCD3
It could easily be demonstrated, by developing the formulas shown above, that the effect of the nested factor B is confounded with the interaction between factor A and factor B [Montgomery D.C., 1984]:

SSB(A) = SSB + SSAB

and analogous considerations can be made for any interaction with the nested factor B, for example:

SSB(A)C = SSBC + SSABC

and so on.

The value of the correction factor was CF = 12.934, and Table A1 reports the values assumed by the variables appearing in the previous sums, as a function of their levels.

Table A1. Sums of Experimental Observations
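The sums-of-squares mechanics and the confounding identity SSB(A) = SSB + SSAB derived above can be checked numerically. The sketch below uses a simplified two-factor layout with replicates and random data (our own toy example, not the experimental observations of Table A1):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: factors A and B at 2 levels, 8 replicates per cell
# (n = 32 observations, matching the experiment's run count).
Y = rng.normal(size=(2, 2, 8))
N = Y.size
CF = Y.sum() ** 2 / N                                   # correction factor

SS_A = (Y.sum(axis=(1, 2)) ** 2).sum() / 16 - CF        # 16 obs per A level
SS_B = (Y.sum(axis=(0, 2)) ** 2).sum() / 16 - CF        # 16 obs per B level
SS_AB = (Y.sum(axis=2) ** 2).sum() / 8 - CF - SS_A - SS_B

# Nested sum of squares for B within A (note: no CF term, as above).
SS_B_in_A = (Y.sum(axis=2) ** 2).sum() / 8 - (Y.sum(axis=(1, 2)) ** 2).sum() / 16

# Confounding identity from the appendix: SSB(A) = SSB + SSAB
assert np.isclose(SS_B_in_A, SS_B + SS_AB)
print(SS_B_in_A, SS_B + SS_AB)
```

The identity holds algebraically (the CF, SSA, and cross terms cancel), so the assertion passes for any data, not just this random draw.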
Appendix B
Percent Contribution Calculation

The variance calculated in the fixed-levels Anova experiment for a factor or interaction, listed in the Anova Table, contains some amount of variation due to the error [Phadke M.S., 1983; Ross P.J., 1988]. The generic observed factor variance (Vx) can be written as:

Vx = V*x + Ve

where V*x is the variance due solely to the factor x and Ve is the variance of the error. The pure variation of factor x can be isolated as:

V*x = Vx - Ve

and since the variance is expressed as the ratio of the sum of squares and the degrees of freedom of the factor (Vx = SSx/vx), we have:

SS*x/vx = SSx/vx - Ve

Solving for SS*x we obtain:

SS*x = SSx - Ve x vx

The percent contribution of the factor x (Px) with respect to the total variation, expressed in terms of sums of squares, can then be calculated as:

Px = SS*x / SStot x 100

The contribution of the error (residual) is then calculated by replacing on it all the variation removed from the factors and interactions:

Pe = (SSe + dofF.I. x Ve) / SStot x 100

where dofF.I. is the total number of degrees of freedom available for factors and interactions. This correction is necessary to avoid overestimating the contribution of the effects, and it is the cause of the presence of negative numbers in the initial Anova Table (14a).
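The percent-contribution formulas above translate directly into code. In this sketch the ANOVA quantities (sums of squares, error variance, degrees of freedom) are invented for illustration, not taken from the paper's tables:

```python
# Percent contribution per the formulas above (illustrative numbers only).

def percent_contribution(SS_x, dof_x, V_e, SS_tot):
    """Pure percent contribution of a factor: (SSx - Ve*dofx) / SStot * 100."""
    return (SS_x - V_e * dof_x) / SS_tot * 100

def error_contribution(SS_e, dof_fi, V_e, SS_tot):
    """Error contribution, absorbing the variation removed from the factors."""
    return (SS_e + dof_fi * V_e) / SS_tot * 100

SS_tot, V_e, SS_e = 100.0, 0.5, 20.0
factors = {"A": (40.0, 1), "C": (25.0, 1), "AC": (15.0, 1)}   # SSx, dof
P = {name: percent_contribution(ss, dof, V_e, SS_tot)
     for name, (ss, dof) in factors.items()}
P["error"] = error_contribution(SS_e, dof_fi=3, V_e=V_e, SS_tot=SS_tot)
print(P)
# The correction moves variation between factors and error, so the
# contributions still sum to exactly 100%.
assert abs(sum(P.values()) - 100.0) < 1e-9
```

This also shows why the correction matters: each factor's contribution shrinks by Ve x vx / SStot, and exactly that amount is credited back to the error term.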
Appendix C
Regression Analysis

The regression analysis, conducted with the MINITAB® software for the factors, confirms the results obtained with the Anova Analysis for the model; it then produces the residual and fitted values necessary for the Residual Analysis. The regression results are listed in Table C1:

Table C1. Regression of the final models

The residuals obtained from the regression analysis are listed in Table C2, which also shows the fitted values, sorted residuals and normal quantile values.

Table C2. Residuals of the Trials
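A least-squares fit of the kind MINITAB performs can be sketched with numpy; the coded design matrix and response below are simulated stand-ins for the experimental trials, and the residual and fitted vectors are exactly the inputs needed by the residual analysis of Appendix D:

```python
# Least-squares regression sketch (simulated data, not the paper's trials).
import numpy as np

rng = np.random.default_rng(3)
X = rng.choice([-1.0, 1.0], size=(32, 5))     # coded factor levels, 32 trials
Xc = np.column_stack([np.ones(32), X])        # add intercept column
true_beta = np.array([10.0, 2.0, 0.5, 0.0, 1.0, 0.0])
y = Xc @ true_beta + rng.normal(scale=0.3, size=32)

beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
fitted = Xc @ beta
residuals = y - fitted                        # feed these to Appendix D
print(beta)
```

Because the model includes an intercept, the residuals sum to (numerically) zero, a quick sanity check on the fit.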
Appendix D
Residuals Analysis

Using the regression results, it is possible to perform the residuals analysis with the intent to test the basic assumptions of the Anova Analysis and to investigate if some unpredicted sources of variation are detected. The assumptions are: independence, normality of the distribution, and homogeneity of the errors.

The first, and most important, can be tested with the Durbin-Watson method, which requires the calculation of the statistic:

d = Σi (ri - ri-1)² / Σi r²i

with the numerator summed over i = 2, 3, ..., 32, where ri is the residual. The value d* = 1.80 obtained must be compared with the bounds dL and dU (1.18 and 1.73 for a=0.05) from the tables available in the literature [Mason R.L., et al., 1989]. To test the hypothesis H0: errors not autocorrelated vs Ha: errors positively autocorrelated, we can: I) reject H0 if d < dL; II) not reject H0 if d > dU; III) draw no conclusion if dL < d < dU. Analogous reasoning is required for negative autocorrelation. In the FPS case d* > dU and we did not reject H0, which means that the independence assumption was supported.
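The Durbin-Watson statistic defined above is straightforward to compute. In this sketch the residuals are simulated (not the paper's actual values); the decision bounds are the ones quoted in the text:

```python
# Durbin-Watson statistic as defined above (simulated residuals).
import numpy as np

def durbin_watson(r):
    """d = sum_i (r_i - r_{i-1})^2 / sum_i r_i^2, numerator over i = 2..n."""
    r = np.asarray(r, dtype=float)
    return np.sum(np.diff(r) ** 2) / np.sum(r ** 2)

rng = np.random.default_rng(1)
r = rng.normal(size=32)             # stand-in for the 32 residuals
d = durbin_watson(r)
dL, dU = 1.18, 1.73                 # bounds quoted in the text (alpha = 0.05)
if d > dU:
    print("do not reject H0: no evidence of positive autocorrelation")
elif d < dL:
    print("reject H0: errors positively autocorrelated")
else:
    print("inconclusive")
```

For independent residuals d is close to 2; strong positive autocorrelation pushes d toward 0 and negative autocorrelation toward 4, which is why the test is one-sided against each bound.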
The normality assumption can be tested using the Shapiro-Wilk method, which requires ordering the residuals in ascending order and calculating the statistic:

W = b² / S²

where b = Σi ai r(i), S² = Σi (ri - r̄)² with r̄ = Σi ri/n, i = 1, 2, ..., n, and the coefficients ai are obtained from the tables [ibid.]. The hypothesis of normality is rejected if the value W is less than the critical value. The value obtained for the FPS case was W* = 0.966, and the critical value W(n=32, a=0.45) = 0.965 did not allow us to reject the hypothesis of normality.
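A modern alternative to the table-based hand calculation is scipy's Shapiro-Wilk implementation, which returns the W statistic together with a p-value (a sketch on simulated residuals, not the FPS data):

```python
# Shapiro-Wilk normality test via scipy (simulated residuals).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r = rng.normal(size=32)                  # stand-in for the 32 residuals
W, p_value = stats.shapiro(r)
print(f"W = {W:.3f}, p = {p_value:.3f}")

# Reject normality only when W falls below the critical value,
# i.e. when the p-value is small.
if p_value < 0.05:
    print("reject the hypothesis of normality")
else:
    print("cannot reject the hypothesis of normality")
```

Using the p-value avoids interpolating critical values of W from tables, which is the error-prone step in the manual procedure described above.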
Independence and homogeneity assumptions can be further investigated by plotting the residuals vs the factors and interactions (Exhibit D1), and vs the fitted values and time (Exhibit D2, D3); the normality assumption with the plot of the residuals vs the normal quantile values (Exhibit D4). These do not show any pattern to worry about. On the other hand, they do not help in finding other missed sources of variation.

Table D1.

Exhibit D2. Plot of Residuals vs Experimental Time
Appendix E

Exhibit E1.
Appendix F

Table F1. Sample size requirements for tests on the difference of two means from independent normal populations, equal population standard deviations.
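The kind of sample-size requirement tabulated in Table F1 can be approximated with the standard normal-approximation formula n = 2((z(1-a/2) + z(power)) * sigma / delta)² per group (a sketch; the a, power, and effect-size values are illustrative choices, not the table's entries):

```python
# Sample size per group for detecting a difference delta between two means
# (independent normal populations, common sigma) -- the setting of Table F1.
# Standard normal-approximation formula; standard library only.
from math import ceil
from statistics import NormalDist

def sample_size(delta: float, sigma: float,
                alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate n per group for a two-sided two-sample test."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return ceil(n)

print(sample_size(delta=1.0, sigma=1.0))   # 22 per group
print(sample_size(delta=0.5, sigma=1.0))   # halving the effect quadruples n
```

Exact tables such as Table F1 use the noncentral t distribution, so they give slightly larger n for small samples; the normal approximation is adequate for planning purposes.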