MIT OpenCourseWare
http://ocw.mit.edu
2.830J / 6.780J / ESD.63J Control of Manufacturing Processes (SMA 6303)
Spring 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms
Control of Manufacturing Processes
Subject 2.830/6.780/ESD.63
Spring 2008

Lecture #15
Response Surface Modeling and Process Optimization
April 8, 2008
Outline
• Last Time
– Fractional Factorial Designs
– Aliasing Patterns
– Implications for Model Construction
• Today
Reading: May & Spanos, Ch. 8.1 – 8.3
– Response Surface Modeling (RSM)
• Regression analysis, confidence intervals
– Process Optimization using DOE and RSM
Regression Fundamentals
• Use least-squares error as the measure of goodness to estimate coefficients in a model
• One-parameter model:
– Model form
– Squared error
– Estimation using normal equations
– Estimate of experimental error
– Precision of estimate: variance in b
– Confidence interval for β
– Analysis of variance: significance of b
– Lack of fit vs. pure error
• Polynomial regression
Measures of Model Goodness – R2
• Goodness of fit – R2
– Question considered: how much better does the model do than just
using the grand average?
– Think of this as the fraction of squared deviations (from the grand
average) in the data which is captured by the model
• Adjusted R2
– For “fair” comparison between models with different numbers of
coefficients, an alternative is often used
– Think of this as (1 – variance remaining in the residual).
Recall νR = νD - νT
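In symbols (standard definitions, stated here for reference in the slide's notation, where SS_D is the total sum of squared deviations from the grand average with ν_D degrees of freedom and SS_R is the residual sum of squares with ν_R degrees of freedom):

\[
R^2 = 1 - \frac{SS_R}{SS_D},
\qquad
R^2_{\text{adj}} = 1 - \frac{SS_R/\nu_R}{SS_D/\nu_D}
\]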
Least Squares Regression
• We use least-squares to estimate
coefficients in typical regression
models
• One-Parameter Model: y = βx + ε
• Goal is to estimate β with “best” b
• How do we define “best”?
– That b which minimizes sum of squared
error between prediction and data
– The residual sum of squares (for the
best estimate) is
Least Squares Regression, cont.
• Least squares estimation via normal
equations
– For linear problems, we need not
calculate SS(β); rather, direct solution
for b is possible
– Recognize that vector of residuals will
be normal to vector of x values at the
least squares estimate
• Estimate of experimental error
– Assuming model structure is adequate,
estimate s2 of σ2 can be obtained:
Precision of Estimate: Variance in b
• We can calculate the variance in our estimate of the slope, b:
• Why?
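For the one-parameter (through-origin) model y = βx + ε, the estimate b = Σxᵢyᵢ / Σxᵢ² is a linear combination of independent observations yᵢ, each with variance σ²; this gives the standard result

\[
\operatorname{Var}(b) = \frac{\sigma^2}{\sum_i x_i^2},
\qquad
\widehat{\operatorname{se}}(b) = \sqrt{\frac{s^2}{\sum_i x_i^2}}.
\]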
Confidence Interval for β
• Once we have the standard error in b, we can
calculate confidence intervals to some desired
(1-α)100% level of confidence
• Analysis of variance
– Test the hypothesis H0: β = 0
– If confidence interval for β includes 0, then β not
significant
– Degrees of freedom ν = n − p (needed in order to use the t distribution), where p = # parameters estimated by least squares
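In its usual form, the interval is

\[
b - t_{\alpha/2,\;n-p}\,\widehat{\operatorname{se}}(b)
\;\le\; \beta \;\le\;
b + t_{\alpha/2,\;n-p}\,\widehat{\operatorname{se}}(b),
\]

with n − p degrees of freedom for the t distribution.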
Example Regression

Data (n = 9):

Age   Income
 8     6.16
22     9.88
35    14.35
40    24.06
57    30.34
73    32.17
78    42.18
87    43.23
98    48.76

Whole Model – Analysis of Variance (tested against reduced model: Y = 0)
Source     DF   Sum of Squares   Mean Square   F Ratio    Prob > F
Model       1        8836.6440       8836.64   1093.146     <.0001
Error       8          64.6695          8.08
C. Total    9        8901.3135

Parameter Estimates
Term        Estimate     Std Error   t Ratio   Prob>|t|
Intercept   0 (zeroed)   0           .         .
age         0.500983     0.015152    33.06     <.0001

Effect Tests
Source   Nparm   DF   Sum of Squares   F Ratio    Prob > F
age          1    1        8836.6440   1093.146     <.0001

[Leverage plot: income leverage residuals versus age leverage, P < .0001.]
• Note that this simple model assumes an
intercept of zero – model must go through
origin
• We can relax this requirement
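A minimal Python sketch of this through-origin fit, using the age/income data above (numpy assumed available); it should reproduce the slope and standard error in the output shown:

import numpy as np

# Age/income data from the slide (n = 9 observations)
age = np.array([8, 22, 35, 40, 57, 73, 78, 87, 98], dtype=float)
income = np.array([6.16, 9.88, 14.35, 24.06, 30.34, 32.17, 42.18, 43.23, 48.76])

# One-parameter (no-intercept) least squares: b = sum(x*y) / sum(x^2)
b = np.sum(age * income) / np.sum(age ** 2)

# Residual sum of squares and estimate of sigma^2 (n - 1 df, one parameter fitted)
residuals = income - b * age
s2 = np.sum(residuals ** 2) / (len(age) - 1)

# Standard error of the slope for the through-origin model
se_b = np.sqrt(s2 / np.sum(age ** 2))

print(b, se_b)   # expect roughly 0.501 and 0.015, matching the estimates above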
Lack of Fit Error vs. Pure Error
• Sometimes we have replicated data
– E.g. multiple runs at same x values in a designed experiment
• We can decompose the residual error contributions
Where
SSR = residual sum of squares error
SSL = lack of fit squared error
SSE = pure replicate error
• This allows us to TEST for lack of fit
– By “lack of fit” we mean evidence that the linear model form is
inadequate
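In symbols, with m distinct x settings among the n runs (standard forms, stated for reference):

\[
SS_R = SS_L + SS_E,
\qquad
F = \frac{SS_L/\nu_L}{SS_E/\nu_E},
\qquad
\nu_E = n - m,\;\; \nu_L = \nu_R - \nu_E .
\]

A value of F large compared to F with (ν_L, ν_E) degrees of freedom is evidence of lack of fit.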
Regression: Mean Centered Models
• Model form
• Estimate by
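For reference, the standard mean-centered simple-regression form and its least-squares estimates are

\[
y_i = \beta_0 + \beta_1 (x_i - \bar{x}) + \varepsilon_i,
\qquad
b_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2},
\qquad
b_0 = \bar{y}.
\]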
Regression: Mean Centered Models
• Confidence Intervals
• Our confidence interval on output y widens as
we get further from the center of our data!
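This follows from the standard variance of the fitted value in the mean-centered model,

\[
\operatorname{Var}\!\left[\hat{y}(x)\right]
= \sigma^2 \left( \frac{1}{n} + \frac{(x - \bar{x})^2}{\sum_i (x_i - \bar{x})^2} \right),
\]

so the t-based band around ŷ(x) grows as |x − x̄| increases.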
Polynomial Regression
• We may believe that a higher order model structure
applies. Polynomial forms are also linear in the
coefficients and can be fit with least squares
Curvature is included through the x^2 term
• Example: Growth rate data
Regression Example: Growth Rate Data

Bivariate fit of y by x

Observation   Amount of Supplement   Growth Rate
Number        (grams) x              (coded units) y
 1            10                     73
 2            10                     78
 3            15                     85
 4            20                     90
 5            20                     91
 6            25                     87
 7            25                     86
 8            25                     91
 9            30                     75
10            35                     65

[Figure: growth rate y (coded units) versus amount of supplement x (grams), showing the fitted mean, a linear fit, and a degree-2 polynomial fit. Figures by MIT OpenCourseWare.]
• Replicate data provides opportunity to check for lack of fit
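A short Python sketch (numpy assumed) fitting first- and second-order polynomials to these data by least squares; the quadratic coefficients should come out near the values reported on the following slides:

import numpy as np

# Growth-rate data from the table above
x = np.array([10, 10, 15, 20, 20, 25, 25, 25, 30, 35], dtype=float)
y = np.array([73, 78, 85, 90, 91, 87, 86, 91, 75, 65], dtype=float)

# First-order and second-order polynomial fits by least squares
b_lin = np.polyfit(x, y, 1)
b_quad = np.polyfit(x, y, 2)

# Residual sums of squares for each model
ss_lin = np.sum((y - np.polyval(b_lin, x)) ** 2)
ss_quad = np.sum((y - np.polyval(b_quad, x)) ** 2)

print(b_quad)           # expect roughly [-0.128, 5.263, 35.66]
print(ss_lin, ss_quad)  # residual SS should drop from about 686 to about 45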
Growth Rate – First Order Model
• Mean significant, but linear term not
• Clear evidence of lack of fit
Analysis of variance for growth rate data: straight-line model

Source            Sum of Squares     Degrees of Freedom   Mean Square
Model             SM = 67,428.6      2
  mean                  67,404.1       1                  67,404.1
  extra for linear          24.5       1                      24.5
Residual          SR = 686.4         8                        85.8
  lack of fit     SL = 659.4           4                     164.85
  pure error      SE = 27.0            4                       6.75
Total             ST = 68,115.0      10

Lack-of-fit check: ratio = 164.85 / 6.75 = 24.42

Figure by MIT OpenCourseWare.
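A Python sketch (numpy assumed) of the same decomposition for the straight-line fit: pure error comes from the replicated x settings, lack of fit is what remains of the residual, and the F ratio should land near the 24.4 shown above:

import numpy as np

# Straight-line fit to the growth-rate data, then split the residual SS into
# lack-of-fit and pure (replicate) error, as in the ANOVA table above.
x = np.array([10, 10, 15, 20, 20, 25, 25, 25, 30, 35], dtype=float)
y = np.array([73, 78, 85, 90, 91, 87, 86, 91, 75, 65], dtype=float)

coef = np.polyfit(x, y, 1)
yhat = np.polyval(coef, x)

# Pure error: deviations of replicates from their group means
ss_pure = sum(np.sum((y[x == xv] - y[x == xv].mean()) ** 2) for xv in np.unique(x))
ss_resid = np.sum((y - yhat) ** 2)
ss_lof = ss_resid - ss_pure

# Degrees of freedom: n - p for the residual, n - m for pure error
# (p = 2 fitted parameters, m = number of distinct x levels)
n, p, m = len(x), 2, len(np.unique(x))
F = (ss_lof / (m - p)) / (ss_pure / (n - m))
print(ss_lof, ss_pure, F)   # expect about 659.4, 27.0, and F near 24.4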
Growth Rate – Second Order Model
• No evidence of lack of fit
• Quadratic term significant
Analysis of variance for growth rate data: quadratic model

Source                  Sum of Squares     Degrees of Freedom   Mean Square
Model                   SM = 68,071.8      3
  mean                        67,404.1       1                  67,404.1
  extra for linear                24.5       1                      24.5
  extra for quadratic            643.2       1                     643.2
Residual                SR = 43.2          7
  lack of fit           SL = 16.2            3                      5.40
  pure error            SE = 27.0            4                      6.75
Total                   ST = 68,115.0      10

Lack-of-fit check: ratio = 5.40 / 6.75 = 0.80

Figure by MIT OpenCourseWare.
Polynomial Regression In Excel
• Create additional input columns for each input
• Use “Data Analysis” and “Regression” tool
Data (x, x^2, y):

x     x^2     y
10     100    73
10     100    78
15     225    85
20     400    90
20     400    91
25     625    87
25     625    86
25     625    91
30     900    75
35    1225    65

Regression Statistics
Multiple R            0.968
R Square              0.936
Adjusted R Square     0.918
Standard Error        2.541
Observations          10

ANOVA
             df    SS         MS        F        Significance F
Regression    2    665.706    332.853   51.555   6.48E-05
Residual      7     45.194      6.456
Total         9    710.9

             Coefficients   Standard Error   t Stat    P-value    Lower 95%   Upper 95%
Intercept    35.657         5.618             6.347    0.0004      22.373      48.942
x             5.263         0.558             9.431    3.1E-05      3.943       6.582
x^2          -0.128         0.013            -9.966    2.2E-05     -0.158      -0.097
Polynomial Regression
• Generated using JMP package

Analysis of Variance
Source     DF   Sum of Squares   Mean Square   F Ratio   Prob > F
Model       2        665.70617       332.853   51.5551     <.0001
Error       7         45.19383         6.456
C. Total    9        710.90000

Lack Of Fit
Source        DF   Sum of Squares   Mean Square   F Ratio   Prob > F   Max RSq
Lack Of Fit    3        18.193829        6.0646    0.8985     0.5157    0.9620
Pure Error     4        27.000000        6.7500
Total Error    7        45.193829

Summary of Fit
RSquare                       0.936427
RSquare Adj                   0.918264
Root Mean Sq Error            2.540917
Mean of Response              82.1
Observations (or Sum Wgts)    10

Parameter Estimates
Term        Estimate     Std Error   t Ratio   Prob>|t|
Intercept   35.657437    5.617927      6.35      0.0004
x            5.2628956   0.558022      9.43      <.0001
x*x         -0.127674    0.012811     -9.97      <.0001

Effect Tests
Source   Nparm   DF   Sum of Squares   F Ratio   Prob > F
x            1    1        574.28553   88.9502     <.0001
x*x          1    1        641.20451   99.3151     <.0001
Outline
• Response Surface Modeling (RSM)
– Regression analysis, confidence intervals
• Process Optimization using DOE and RSM
– Off-line/iterative
– On-line/evolutionary
Process Optimization
• Multiple Goals in “Optimal” Process Output
– Target mean for output(s) Y
– Small variation/sensitivity: ΔY = (∂Y/∂α)Δα + (∂Y/∂u)Δu
• Can Combine in an Objective Function “J”
– Minimize or Maximize, e.g. min_x J or max_x J
– Such that J = J(factors); might include J(x); J(α)
• Adjust J via factors with constraints
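A minimal sketch in Python of one possible objective J; the functional form, the target T, the weight w, and the sensitivity model are illustrative assumptions, not from the lecture:

import numpy as np

# Penalize deviation of the predicted output from its target plus a weighted
# sensitivity term (form assumed for illustration).
T = 90.0   # target for the output Y (assumed)
w = 1.0    # relative weight on the sensitivity penalty (assumed)

def J(u, y_model, dy_dalpha):
    """y_model(u): predicted mean output; dy_dalpha(u): estimated dY/dalpha at u."""
    return (y_model(u) - T) ** 2 + w * dy_dalpha(u) ** 2

# Placeholder process models: the quadratic growth-rate fit from earlier slides
# for the mean, and a purely hypothetical noise sensitivity.
y_model = lambda u: 35.66 + 5.26 * u - 0.128 * u ** 2
dy_dalpha = lambda u: 0.1 * u

# Grid search over the factor u within its allowed range
u_grid = np.linspace(5, 35, 301)
u_star = u_grid[np.argmin([J(u, y_model, dy_dalpha) for u in u_grid])]
print(u_star)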
Methods for Optimization
• Analytical Solutions
– ∂y/ ∂x = 0
• Gradient Searches
– Hill climbing (steepest ascent/descent)
– Local min or max problem
– Excel solver given a convex function
• Offline vs. Online
Basic Optimization Problem
[Figure: objective y (or J) versus input x, with a starting point x0 at value y0 = J0.]
3D Problem
[Figure: three-dimensional response surface z over inputs x1 and x2, with a starting point (x10, x20) at height z0.]
Analytical
∂y(x)/∂x = 0

[Figure: sketch of y versus x.]

• Need Accurate y(x)
– Analytical Model
– Dense x increments in experiment
• Difficult with Sparse Experiments
– Easy to miss the optimum
Sparse Data Procedure – Iterative Experiments/Model Construction

[Figure: linear model with slope β1 fitted to the response y over a small range from x− to x+.]
• Linear models with small increments
• Move along desired gradient
• Near zero slope change to quadratic model
Extension to 3D
[Figure: the response surface from the 3D problem, shown over x1 and x2.]
Linear Model Gradient Following
[Figure: linear-model gradient following in (x1, x2): successive experiments S1–S21 move up the fitted response y.]

ŷ = β0 + β1x1 + β2x2 + β12x1x2
Steepest Descent
ŷ = β0 + β1x1 + β2x2 + β12x1x2

[Figure: steepest ascent/descent path in (x1, x2): steps Δx1, Δx2 are taken along the gradient direction G, with components g1 = gx1 and g2 = gx2 (experiments S1–S21).]

Gradient of the fitted model:
gx1 = ∂y/∂x1 = β1 + β12 x2
gx2 = ∂y/∂x2 = β2 + β12 x1

Make changes in x1 and x2 along G, so that Δx2/Δx1 = gx2/gx1.
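A small Python sketch of one gradient-following step from a fitted first-order-plus-interaction model; the coefficient values and step size are hypothetical, for illustration only:

import numpy as np

# Fitted model: yhat = b0 + b1*x1 + b2*x2 + b12*x1*x2 (coefficients assumed)
b0, b1, b2, b12 = 10.0, 2.0, 1.0, 0.5

def gradient(x1, x2):
    # partial derivatives of the fitted surface
    g1 = b1 + b12 * x2   # d(yhat)/dx1
    g2 = b2 + b12 * x1   # d(yhat)/dx2
    return np.array([g1, g2])

# One steepest-ascent move: step along G so that dx2/dx1 = g2/g1
x = np.array([0.0, 0.0])          # current operating point (coded units)
step = 0.25                       # step length (assumed)
g = gradient(*x)
x_new = x + step * g / np.linalg.norm(g)
print(g, x_new)                   # for descent, step in the -g direction instead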
Various Surfaces
[Figure: three example response surfaces with differing shapes and optimum locations.]
A Procedure for DOE/Optimization
• Study Physics of Process
– Define important inputs
– Intuition about model
– Limits on inputs
• DOE
– Factor screening experiments
– Further DOE as needed
– RSM Construction
• Define Optimization/Penalty Function
– J = f(x); min_x J or max_x J (for us, x = u or α)
(1) DOE Procedure
• Identify model (linear, quadratic, terms to
include)
• Define inputs and ranges
• Identify “noise” parameters to vary if possible
(Δα’s)
• Perform experiment
– Appropriate order
• randomization
• blocking against nuisance or confounding effects
(2) RSM Procedure
• Solve for β’s
• Apply ANOVA
– Data significant?
– Terms significant?
– Lack of Fit significant?
• Drop Insignificant Terms
• Add Higher Order Terms as needed
(3) Optimization Procedure
• Define Optimization/Penalty Function
• Search for Optimum
– Analytically
– Piecewise
– Continuously/evolutionary
• Confirm Optimum
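As a small worked example of the analytical route, setting the derivative of the growth-rate quadratic fitted earlier to zero locates the interior optimum:

\[
\hat{y} = b_0 + b_1 x + b_2 x^2,
\qquad
\frac{\partial \hat{y}}{\partial x} = b_1 + 2 b_2 x = 0
\;\Rightarrow\;
x^* = -\frac{b_1}{2 b_2} \approx -\frac{5.263}{2(-0.128)} \approx 20.6 \text{ grams},
\]

with a predicted growth rate of roughly 90 (coded units); a confirming run near x* then checks this prediction.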
Confirming Experiments
• Checking intermediate points
• Data only at corners
• Test at interior point
• Evaluate error
• Consider Central Composite?

[Figure: response Y over X1 and X2 in coded units (−1 to +1), with data at the corner points.]
• Rechecking the “optimum”
Optimization Confirmation Procedure
• Find optimum value x*
• Perform confirming experiment
– Test model at x*
– Evaluate error with respect to model
– Test hypothesis that
y(x*) = ŷ(x*)
• If hypothesis fails
– Consider new ranges for inputs
– Consider higher order model as needed
– Boundary may be optimum!
Experimental Optimization
• WHY NOT JUST PICK BEST POINT?
• Why not optimize on-line?
– Skip the Modeling Step?
• Adaptive Methods
– Learn how best to model as you go
• e.g. Adaptive OFACT
On-Line Optimization
• Perform 2k Experiment
• Calculate Gradient
• Re-center 2k Experiment About
Maximum Corner
• Repeat
• Near Maximum?
– Should detect quadratic error
– Do quadratic fit near maximum point
• Central Composite is good choice here
– Can also scale and rotate about principal axes
Continuous Optimization: EVOP
• Evolutionary Operation
[Figure: EVOP pattern in (x1, x2): a center point y0 and four perturbed points y1–y4 at ±δx1 and ±δx2.]

• Pick “best” yi
• Re-center process
• Do again
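A minimal EVOP-style loop, sketched in Python with a made-up process function standing in for real runs (the surface, noise level, and perturbation sizes are all assumptions):

import numpy as np

def run_process(x):
    # placeholder "measurement": a smooth surface plus a little noise (assumed)
    return -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2 + np.random.normal(0, 0.05)

center = np.array([0.0, 0.0])
dx1, dx2 = 0.2, 0.2                      # perturbation sizes, ±δx1 and ±δx2

for cycle in range(20):
    # center point plus the four perturbed points of the EVOP pattern
    candidates = [center,
                  center + [ dx1,  dx2], center + [ dx1, -dx2],
                  center + [-dx1,  dx2], center + [-dx1, -dx2]]
    y = [run_process(np.asarray(c)) for c in candidates]
    # pick the "best" y_i, re-center the process there, and repeat
    center = np.asarray(candidates[int(np.argmax(y))])

print(center)   # should drift toward the surface maximum near (1, 2)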
Summary
• Response Surface Modeling (RSM)
– Regression analysis, confidence intervals
• Process Optimization using DOE and RSM
– Off-line/iterative
– On-line/evolutionary
• Next Time:
– Process Robustness
– Variation Modeling
– Taguchi Approach