Simulation Modeling and Analysis
Session 12
Comparing Alternative System Designs
Outline
• Comparing Two Designs
• Comparing Several Designs
• Statistical Models
• Metamodeling
Comparing two designs
• Let the average measures of performance for designs 1 and 2 be θ1 and θ2.
• Goal of the comparison: find point and interval estimates for θ1 - θ2.
Example
• Auto inspection system design
• Arrivals: E(6.316) min
• Service:
– Brake check N(6.5,0.5) min
– Headlight check N(6,0.5) min
– Steering check N(5.5,0.5) min
• Two alternatives:
– Same service person does all checks
– A service person is devoted to each check
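A minimal simulation sketch of this example is given below, assuming E(6.316) denotes exponentially distributed interarrival times with mean 6.316 min, ample buffer space between the stations in the second design, and average time in system as the response; the function names, the seed, and the run length of roughly 150 arrivals (about one 16-hour day) are illustrative choices, not part of the original model.

```python
import numpy as np

def sample_times(rng, n):
    """Draw interarrival and inspection times for n vehicles."""
    a = rng.exponential(6.316, n)                    # interarrival times, E(6.316) min
    s = np.column_stack([rng.normal(6.5, 0.5, n),    # brake check
                         rng.normal(6.0, 0.5, n),    # headlight check
                         rng.normal(5.5, 0.5, n)])   # steering check
    return a, s

def mean_response_single_server(a, s):
    """Design 1: one service person performs all three checks on each vehicle."""
    svc = s.sum(axis=1)
    wait, resp = 0.0, np.empty(len(a))
    for n in range(len(a)):
        if n > 0:                                    # Lindley recursion for the waiting time
            wait = max(0.0, wait + svc[n - 1] - a[n])
        resp[n] = wait + svc[n]
    return resp.mean()

def mean_response_three_stations(a, s):
    """Design 2: three stations in series, one service person each (ample buffers assumed)."""
    t = np.cumsum(a)                                 # arrival instants
    d_prev = np.zeros(3)                             # last departure time from each station
    resp = np.empty(len(a))
    for n in range(len(a)):
        ready = t[n]
        for k in range(3):                           # vehicle moves through the stations in order
            ready = max(ready, d_prev[k]) + s[n, k]
            d_prev[k] = ready
        resp[n] = ready - t[n]                       # time in system
    return resp.mean()

rng = np.random.default_rng(1)
R, n_veh = 10, 150                                   # ~150 arrivals fill a 16-hour day
y1 = [mean_response_single_server(*sample_times(rng, n_veh)) for _ in range(R)]
y2 = [mean_response_three_stations(*sample_times(rng, n_veh)) for _ in range(R)]
print(np.mean(y1), np.mean(y2))
```

Here each design draws its own samples, i.e. independent sampling; common random numbers are discussed later in this session.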
Comparing Two Designs -contd
• Run length for the ith design = TEi
• Number of replications for the ith design = Ri
• Average response time for replication r of the ith design = Yri
• Compute the averages and standard deviations over all replications; the averages Yi* = Σr Yri / Ri (i = 1, 2) are unbiased estimators of θ1 and θ2.
Possible outcomes
• Confidence interval for 1 - 2 well to the
left of zero. I.e. most likely 1 < 2.
• Confidence interval for 1 - 2 well to the
right of zero. I.e. most likely 1 > 2.
• Confidence interval for 1 - 2 contains
zero. I.e. most likely 1 ~ 2.
• Confidence interval
(Y1* - Y2*) ± t /2, s.e.(Y1* - Y2*)
Independent Sampling with
Equal Variances
• Different and independent random number
streams are used to simulate the two designs.
Var(Yi*) = Var(Yri)/Ri = σi²/Ri
Var(Y1* - Y2*) = Var(Y1*) + Var(Y2*) = σ1²/R1 + σ2²/R2 = VIND
• Assume the run lengths can be adjusted so that σ1² ≈ σ2²
Independent Sampling with
Equal Variances -contd
• Then Y1* - Y2* is a point estimate of θ1 - θ2
Si² = Σr (Yri - Yi*)² / (Ri - 1)
Sp² = [(R1 - 1) S1² + (R2 - 1) S2²] / (R1 + R2 - 2)
s.e.(Y1* - Y2*) = Sp (1/R1 + 1/R2)^(1/2)
ν = R1 + R2 - 2
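A sketch of this pooled-variance interval, with y1 and y2 holding the replication averages of each design (numpy/scipy are assumptions, not part of the slides):

```python
import numpy as np
from scipy import stats

def ci_pooled(y1, y2, alpha=0.05):
    """CI for theta1 - theta2: independent sampling, equal variances assumed."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    r1, r2 = len(y1), len(y2)
    sp2 = ((r1 - 1) * y1.var(ddof=1) + (r2 - 1) * y2.var(ddof=1)) / (r1 + r2 - 2)
    se = np.sqrt(sp2 * (1 / r1 + 1 / r2))           # s.e.(Y1* - Y2*)
    t = stats.t.ppf(1 - alpha / 2, r1 + r2 - 2)     # t_{alpha/2, nu}, nu = R1 + R2 - 2
    d = y1.mean() - y2.mean()
    return d - t * se, d + t * se
```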
Independent Sampling with
Unequal Variances
s.e.(Y1* - Y2*) = (S1²/R1 + S2²/R2)^(1/2)
ν = (S1²/R1 + S2²/R2)² / M
where
M = (S1²/R1)²/(R1 - 1) + (S2²/R2)²/(R2 - 1)
• Here R1 and R2 must be > 6
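The same interval with the Welch d.o.f. above might be computed as follows (a sketch; numpy/scipy assumed):

```python
import numpy as np
from scipy import stats

def ci_welch(y1, y2, alpha=0.05):
    """CI for theta1 - theta2 when the two designs have unequal variances."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    r1, r2 = len(y1), len(y2)
    v1, v2 = y1.var(ddof=1) / r1, y2.var(ddof=1) / r2     # S1^2/R1, S2^2/R2
    se = np.sqrt(v1 + v2)
    nu = (v1 + v2) ** 2 / (v1 ** 2 / (r1 - 1) + v2 ** 2 / (r2 - 1))  # Welch d.o.f.
    d = y1.mean() - y2.mean()
    t = stats.t.ppf(1 - alpha / 2, nu)
    return d - t * se, d + t * se
```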
Correlated Sampling
• Correlated sampling induces positive correlation between Yr1 and Yr2 and thereby reduces the variance of the point estimator Y1* - Y2* of θ1 - θ2
• Same random number streams used for both
systems for each replication r (R1 = R2 = R)
• The estimates Yr1 and Yr2 are correlated, but Yr1 and Ys2 (r ≠ s) are mutually independent.
Recall: Covariance
Var(Y1* - Y2*) = Var(Y1*) + Var(Y2*) - 2 Cov(Y1*, Y2*)
= σ1²/R + σ2²/R - 2 ρ12 σ1 σ2 / R = VCORR
= VIND - 2 ρ12 σ1 σ2 / R
Recall: definition of covariance
Cov(X1, X2) = E(X1 X2) - μ1 μ2
= corr(X1, X2) σ1 σ2
= ρ σ1 σ2
Correlated Sampling -contd
• Let Dr = Yr1 - Yr2
D* = (1/R) Σr Dr = Y1* - Y2*
SD² = (1/(R - 1)) Σr (Dr - D*)²
• Standard error for the 100(1 - α)% confidence interval
s.e.(D*) = s.e.(Y1* - Y2*) = SD / √R
(Y1* - Y2*) ± tα/2,R-1 SD / √R
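A sketch of this paired-t interval under correlated sampling, with y1[r] and y2[r] coming from the same random number streams (numpy/scipy assumed):

```python
import numpy as np
from scipy import stats

def ci_crn(y1, y2, alpha=0.05):
    """Paired-t CI for theta1 - theta2 under common random numbers (R1 = R2 = R)."""
    d = np.asarray(y1, float) - np.asarray(y2, float)   # D_r = Y_r1 - Y_r2
    r = len(d)
    se = d.std(ddof=1) / np.sqrt(r)                     # S_D / sqrt(R)
    t = stats.t.ppf(1 - alpha / 2, r - 1)
    return d.mean() - t * se, d.mean() + t * se
```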
Correlated Sampling -contd
• Random Number Synchronization Guides
– Dedicate a r.n. stream to a specific purpose and use as many streams as needed. Assign independent seeds to each stream at the beginning of each run.
– Assign a dedicated r.n. stream to each cyclic task subsystem.
– If synchronization is not possible for a subsystem, use an independent stream.
Example: Auto inspection
An = interarrival time between vehicles n and n+1
Sn(1) = brake inspection time for vehicle n in model 1
Sn(2) = headlight inspection time for vehicle n in model 1
Sn(3) = steering inspection time for vehicle n in model 1
• Select R = 10, Total_time = 16 hrs
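A sketch of this synchronization scheme with numpy Generators: one dedicated stream per input process, independently seeded on each replication, so that the same draws can feed both models; run_model_1/run_model_2 are hypothetical hooks and the seeding scheme is only illustrative.

```python
import numpy as np

def draw_inputs(replication, n_vehicles=150):
    """One dedicated stream per input process; fresh, independent seeds each replication."""
    arr, brk, hdl, steer = (np.random.default_rng(s)
                            for s in np.random.SeedSequence(replication).spawn(4))
    A = arr.exponential(6.316, n_vehicles)       # A_n
    S1 = brk.normal(6.5, 0.5, n_vehicles)        # S_n(1), brake
    S2 = hdl.normal(6.0, 0.5, n_vehicles)        # S_n(2), headlight
    S3 = steer.normal(5.5, 0.5, n_vehicles)      # S_n(3), steering
    return A, S1, S2, S3

for r in range(10):                              # R = 10 replications
    inputs = draw_inputs(r)                      # the same draws go to both models,
    # y_r1 = run_model_1(*inputs)                # so Y_r1 and Y_r2 are positively
    # y_r2 = run_model_2(*inputs)                # correlated (run_model_* are hypothetical)
```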
Example: Auto inspection
• Independent runs
-18.1 < θ1 - θ2 < 7.3
• Correlated runs
-12.3 < θ1 - θ2 < 8.5
• Synchronized runs
-0.5 < θ1 - θ2 < 1.3
Confidence Intervals with
Specified Precision
• Here the problem is to determine the number of replications R required to achieve a desired precision e in the confidence interval, based on results obtained using R0 replications
R = (tα/2,R0-1 SD / e)²
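A sketch of this calculation for paired pilot differences D_r, rounding up to the next integer; the function name, d_pilot, and the scipy dependency are assumptions.

```python
import numpy as np
from scipy import stats

def replications_for_precision(d_pilot, e, alpha=0.05):
    """Replications R needed for a CI half-width of at most e, from R0 pilot differences."""
    r0 = len(d_pilot)
    sd = np.std(d_pilot, ddof=1)                         # S_D from the R0 pilot runs
    t0 = stats.t.ppf(1 - alpha / 2, r0 - 1)              # t_{alpha/2, R0-1}
    return max(r0, int(np.ceil((t0 * sd / e) ** 2)))     # R = (t * S_D / e)^2, rounded up
```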
Comparing Several System
Designs
• Consider K alternative designs
Performance measure θi
• Procedures
– Fixed sample size
– Sequential sampling (multistage)
Comparing Several System
Designs -contd
• Possible Goals
– Estimation of each θi
– Comparison of each θi to a control θ1
– All possible comparisons
– Selection of the best θi
Bonferroni Method for Multiple
Comparisons
• Consider C confidence intervals, the jth with confidence level 1 - αj
• Overall error probability αE = Σj αj
• Probability all statements are true (the parameter is contained inside all C.I.'s)
P ≥ 1 - αE
• Probability one or more statements are false
P ≤ αE
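For instance, when the C = K - 1 comparisons are all made against design 1 using common random numbers, each interval can be built at level 1 - α/C so that the overall error probability is at most α. A sketch (the array shape and names are assumptions):

```python
import numpy as np
from scipy import stats

def bonferroni_cis(y, alpha=0.05):
    """CIs for theta_1 - theta_i, i = 2..K, with overall error at most alpha.
    y is an R x K array of replication averages, paired by common random numbers."""
    y = np.asarray(y, float)
    r, k = y.shape
    c = k - 1                                      # number of comparisons to the control
    t = stats.t.ppf(1 - alpha / (2 * c), r - 1)    # each interval uses alpha_j = alpha / C
    cis = []
    for i in range(1, k):
        d = y[:, 0] - y[:, i]                      # D_r for the pair (1, i)
        half = t * d.std(ddof=1) / np.sqrt(r)
        cis.append((d.mean() - half, d.mean() + half))
    return cis
```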
Example: Auto inspection (contd)
• Alternative designs for addition of one
holding space
– Parallel stations
– No space between stations in series
– One space between brake and headlight
inspection
– One space between headlight and steering
inspection
Bonferroni Method for Selecting
the Best
• The system with maximum expected performance is to be selected.
• A correct selection should be made whenever the best system's mean exceeds the second best by at least e:
θi - max j≠i θj ≥ e
Bonferroni Method for Selecting
the Best -contd
1.- Specify e, α, and R0
2.- Make R0 replications for each of the K systems
3.- For each system i calculate Yi*
4.- For each pair of systems i and j calculate Sij² and select the largest, Smax²
5.- Calculate R = max{R0, ⌈t² Smax² / e²⌉}
6.- Make R - R0 additional replications for each of the K systems
7.- Calculate the overall means Yi** = (1/R) Σr Yri
8.- Select the system with the largest Yi** as the best
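A sketch of this two-stage procedure, assuming common random numbers pair the pilot replications across systems; the quantile tα/(K-1),R0-1 is one common Bonferroni-style choice (the slide leaves the subscripts of t unstated), and run_pilot/run_more are hypothetical hooks returning replication averages for a given system.

```python
import numpy as np
from scipy import stats

def select_best(run_pilot, run_more, K, e, alpha=0.05, R0=10):
    """Two-stage Bonferroni-style selection of the system with the largest mean."""
    Y0 = [np.asarray(run_pilot(i, R0), float) for i in range(K)]          # steps 2-3
    s2max = max(np.var(Y0[i] - Y0[j], ddof=1)                             # step 4: largest S_ij^2
                for i in range(K) for j in range(i + 1, K))
    t = stats.t.ppf(1 - alpha / (K - 1), R0 - 1)                          # assumed quantile choice
    R = max(R0, int(np.ceil(t ** 2 * s2max / e ** 2)))                    # step 5
    Y = [np.concatenate([Y0[i], np.asarray(run_more(i, R - R0), float)])  # step 6 (R - R0 may be 0)
         for i in range(K)]
    means = [y.mean() for y in Y]                                         # step 7: Y_i**
    return int(np.argmax(means)), means                                   # step 8: pick the best
```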
Statistical Models to Estimate the
Effect of Design Alternatives
• Statistical Design of Experiments
– Set of principles to evaluate and maximize the
information gained from an experiment.
• Factors (Qualitative and Quantitative),
Levels and Treatments
• Decision or Policy Variables.
Single Factor, Randomized
Designs
• Single Factor Experiment
– Single decision factor D ( k levels)
– Response variable Y
– Effect of level j of factor D: τj
• Completely Randomized Design
– Different r.n. streams used for each replication
at any level and for all levels.
Single Factor, Randomized
Designs -contd
• Statistical model
Yrj = μ + τj + εrj
where
Yrj = observation r for level j
μ = overall mean effect
τj = effect due to level j
εrj = random variation in observation r at level j
Rj = number of observations for level j
Single Factor, Randomized
Designs -contd
• Fixed effects model
– levels of factors fixed by analyst
– εrj normally distributed
– Null hypothesis H0: τj = 0 for all j = 1, 2, ..., k
– Statistical test: ANOVA (F-statistic)
• Random effects model
– levels chosen at random
– τj normally distributed
ANOVA Test
• Levels-replications matrix
• Compute the level means (over replications) Y.j* and the grand mean Y..*
• Variation of the response w.r.t. Y..*
Yrj - Y..* = (Y.j* - Y..*) + (Yrj - Y.j*)
• Squaring and summing over all r and j
SSTOT = SSTREAT + SSE
ANOVA Test -contd
• The mean square MSE = SSE/(R - k), where R is the total number of observations, is an unbiased estimator of Var(Y), i.e. E(MSE) = σ²
• The mean square MSTREAT = SSTREAT/(k - 1) is also an unbiased estimator of Var(Y) when H0 is true
• Test statistic
F = MSTREAT / MSE
• If H0 is true, F has an F distribution with k - 1 and R - k d.o.f.
• Find the critical value of the statistic, F1-α
• Reject H0 if F > F1-α
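A sketch of this one-way ANOVA test, with groups holding one array of replication responses per factor level (numpy/scipy assumed):

```python
import numpy as np
from scipy import stats

def one_way_anova(groups, alpha=0.05):
    """Fixed-effects one-way ANOVA; groups is a list of k arrays of replications."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()                        # Y..*
    ss_treat = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_err = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    ms_treat = ss_treat / (k - 1)                                # MS_TREAT
    ms_err = ss_err / (n_total - k)                              # MS_E
    F = ms_treat / ms_err
    f_crit = stats.f.ppf(1 - alpha, k - 1, n_total - k)          # F_{1-alpha}
    return F, f_crit, F > f_crit                                 # True means reject H0
```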
Metamodeling
• Independent (design) variables xi, i=1,2,..,k
• Output response (random) variable Y
• Metamodel
– A simplified approximation to the actual
relationship between the xi and Y
– Regression analysis (least squares)
– Normal equations
Linear Regression
• One independent variable x and one
dependent variable Y
• For a linear relationship
E(Y|x) = β0 + β1 x
• Simple Linear Regression Model
Y = β0 + β1 x + ε
Linear Regression -contd
• Observations (data points)
(xi,Yi) i=1,2,..,n
• Sum of squares of the deviations ei
L = Σ ei² = Σ [Yi - β0' - β1 (xi - x*)]²
• Minimizing w.r.t. β0' and β1 gives
β0'* = Σ Yi / n
β1* = Σ Yi (xi - x*) / Σ (xi - x*)²
β0* = β0'* - β1* x*
Significance Testing
• Null hypothesis H0: β1 = 0
• Statistic (n - 2 d.o.f.)
t0 = β1* / √(MSE / Sxx)
where
MSE = Σ (Yi - Ŷi)² / (n - 2)
Sxx = Σ xi² - (Σ xi)² / n
• H0 is rejected if |t0| > tα/2,n-2
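A sketch of the least-squares fit and significance test above (numpy/scipy assumed; the function name is illustrative):

```python
import numpy as np
from scipy import stats

def simple_regression(x, y, alpha=0.05):
    """Least-squares fit Y = b0 + b1*x and t-test of H0: beta1 = 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xbar = x.mean()
    sxx = ((x - xbar) ** 2).sum()                    # equals sum(x^2) - (sum x)^2 / n
    b1 = (y * (x - xbar)).sum() / sxx                # beta1*
    b0 = y.mean() - b1 * xbar                        # beta0* = beta0'* - beta1* x*
    resid = y - (b0 + b1 * x)
    mse = (resid ** 2).sum() / (n - 2)
    t0 = b1 / np.sqrt(mse / sxx)
    reject = abs(t0) > stats.t.ppf(1 - alpha / 2, n - 2)
    return b0, b1, t0, reject
```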
Multiple Regression
• Models
Y = β0 + β1 x1 + β2 x2 + ... + βm xm + ε
Y = β0 + β1 x + β2 x² + ε
Y = β0 + β1 x1 + β2 x2 + β3 x1 x2 + ε
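The last of these models, for example, can be fitted by least squares (equivalently, by solving the normal equations) with numpy; a minimal sketch with illustrative names:

```python
import numpy as np

def fit_interaction_model(x1, x2, y):
    """Least-squares fit of Y = b0 + b1*x1 + b2*x2 + b3*x1*x2."""
    x1, x2, y = (np.asarray(v, float) for v in (x1, x2, y))
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])   # design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)               # solves the normal equations
    return beta                                                # [b0, b1, b2, b3]
```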