Building Valid, Credible & Appropriately Detailed Simulation Models

Validation and Verification

• Validation is the process of determining whether a simulation model is an accurate representation of the system for the particular objectives of the study.
  – A valid model can be used to make decisions
  – The ease of validation depends on the complexity of the model
  – There is no perfect or absolute validity; a simulation model is always an approximation
  – Models should be developed for a particular set of purposes
  – The performance measures used to validate the model should be the ones actually used for evaluating system designs
  – Validation should be done throughout model development
• Verification is the process of determining whether the conceptual simulation model has been correctly translated into a computer program.

Credibility and Accreditation

Credibility of a simulation model and its results is attained when the manager and other key personnel accept them as "correct". To attain credibility:
• The manager should understand and agree with the model's assumptions
• Validation and verification should be demonstrated
• The manager should own the project and be involved in it
• The model developers should be reputable

Accreditation is the official process of determining whether a simulation model is acceptable for a particular purpose. Issues to be considered:
• Validation and verification
• The model's development and use history
• Quality of data
• Quality of documentation
• Known problems and limitations

Timing and Relationships / Steps in a Sound Simulation Study

(These two slides are diagrams showing when validation and verification occur and the steps of a sound simulation study; the figures are not reproduced here.)

Validation vs Output Data Analysis

Suppose that we want to estimate the mean μ_S of some system. We construct a corresponding simulation model with mean μ_M and make a simulation run to obtain an estimate M̂ of μ_M. By the triangle inequality, the error in M̂ decomposes as

  Error in M̂ = |M̂ − μ_S| ≤ |M̂ − μ_M| + |μ_M − μ_S|

where the first term on the right is controlled by output data analysis and the second by validation.

Guidelines for Determining the Level of Model Detail

• Carefully define the specific issues to be investigated and the measures of performance that will be used for evaluation
• The entity moving through the simulation model does not have to be the same as the entity moving through the real system
• Use subject-matter experts and sensitivity analyses to help determine the level of detail
• A mistake often made by beginners is to include an excessive amount of model detail: do not include more detail than necessary to address your issues
• The level of model detail should be consistent with the type of data available
• Time and money constraints are a major factor in determining the amount of model detail
• If the number of factors in the study is large, use a coarse simulation or analytical model first to identify which factors have a significant impact on system performance

Verification of Simulation Computer Programs

• Write and debug the computer program in modules and subprograms
• Have more than one person review the program
• Run the simulation under a variety of input-parameter settings and check that the output is reasonable
• Trace the program, comparing the state of the simulated system against hand calculations
• If available, use an interactive debugger to stop the simulation at selected points in time
• When possible, run the model under simplifying assumptions for which its true characteristics are known or can be computed
• It may be helpful to observe an animation of the simulation
• Compare the sample mean and sample variance of each input probability distribution with the desired mean and variance (see the sketch below)
• Use a commercial simulation package to reduce the amount of programming
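A minimal sketch of the input-distribution check above, assuming Python with NumPy (the Expo(0.9) distribution of the M/M/1 example below is used for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Intended input model: Expo(0.9), i.e., exponential with mean 0.9 hours.
desired_mean = 0.9
desired_var = desired_mean ** 2   # the variance of an exponential is mean^2

# Draw the same variates the simulation program would use.
sample = rng.exponential(scale=desired_mean, size=100_000)

print(f"sample mean:     {sample.mean():.4f}  (desired {desired_mean:.4f})")
print(f"sample variance: {sample.var(ddof=1):.4f}  (desired {desired_var:.4f})")
# A large discrepancy points to a coding error, e.g., passing a rate
# where the generator expects a mean (or vice versa).
```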
The M/M/1 Queue

• Inter-arrival times ~ Expo(1) hours, service times ~ Expo(0.9) hours
• Simulate for 1000 hours

Comparison with Theory

Performance measure       Simulation   Theoretical
Number in (arrivals)      1033         1000
WIP (number in system)    11.33        9
Utilization of server     0.93         0.9

The theoretical values follow from λ = 1 and ρ = 0.9: expected arrivals λT = 1000, WIP L = ρ/(1 − ρ) = 9, and utilization ρ = 0.9. A sketch reproducing this comparison follows.
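Here is a minimal sketch of such a run, assuming Python with NumPy and a hand-rolled event loop rather than a commercial package; the mm1 helper is illustrative, and exact outputs depend on the seed, so they will not match the slide's numbers exactly:

```python
import numpy as np

def mm1(mean_interarrival=1.0, mean_service=0.9, horizon=1000.0, seed=1):
    """Illustrative event-driven M/M/1 run; returns (arrivals, time-avg WIP, utilization)."""
    rng = np.random.default_rng(seed)
    t = 0.0                                  # simulation clock
    n = 0                                    # number in system
    next_arrival = rng.exponential(mean_interarrival)
    next_departure = np.inf                  # server starts idle
    arrivals, area_n, busy = 0, 0.0, 0.0     # count, integral of n(t) dt, busy time
    while True:
        t_next = min(next_arrival, next_departure, horizon)
        area_n += n * (t_next - t)           # accumulate time-weighted statistics
        busy += (t_next - t) if n > 0 else 0.0
        t = t_next
        if t >= horizon:
            break
        if next_arrival <= next_departure:   # arrival event
            arrivals += 1
            n += 1
            if n == 1:                       # server was idle: start service now
                next_departure = t + rng.exponential(mean_service)
            next_arrival = t + rng.exponential(mean_interarrival)
        else:                                # departure event
            n -= 1
            next_departure = t + rng.exponential(mean_service) if n > 0 else np.inf
    return arrivals, area_n / horizon, busy / horizon

arrivals, wip, util = mm1()
print(f"arrivals:    {arrivals}    (theory: 1000)")
print(f"WIP:         {wip:.2f}  (theory: rho/(1-rho) = 9)")
print(f"utilization: {util:.2f}  (theory: rho = 0.9)")
```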
Techniques for Increasing Model Validity and Credibility - 1

• Collect high-quality information and data on the system
  – Conversations with subject-matter experts
  – Observations of the system to obtain data
  – Use of existing theory on model assumptions
  – Relevant results from similar simulation studies
  – Experience and intuition of the modelers
• Interact with the manager on a regular basis
  – The manager should formulate and, as needed, reformulate the objectives
  – The manager's involvement should be maintained
  – The manager's knowledge contributes to actual validity
  – The model is more credible if management understands it

Techniques for Increasing Model Validity and Credibility - 2

• Maintain an assumptions document and perform a structured walk-through
  – Record the assumptions in a report with information on
    • goals, specific issues, and performance measures
    • a detailed description of each subsystem
    • simplifying assumptions
    • summaries of data and input probability distributions
    • sources of important and controversial information
  – Go over the assumptions by presenting a walk-through to all those involved
• Use animation to find invalid model assumptions and to enhance credibility

Techniques for Increasing Model Validity and Credibility - 3

• Validate components of the model using quantitative techniques
  – Use statistical tests: the t-test, chi-square goodness-of-fit test, Kolmogorov-Smirnov test, Kruskal-Wallis test of homogeneity, and other tests and procedures
  – Use sensitivity analysis to identify the important factors: the value of a parameter, the choice of a distribution, the entity moving through the system, the level of detail of a subsystem, and which data are crucial for the simulation (using a coarse model)
• Validate the output from the overall simulation model
  – Results validation establishes a close resemblance between the simulation output data and the expected output of the actual system
  – This often requires statistical procedures
  – Output data are often non-stationary and autocorrelated!

Management's Role in the Simulation Process

• Formulating program objectives
• Directing personnel to provide information and data
• Interacting with the simulation modeler on a regular basis
• Using the simulation results as an aid in decision-making

Statistical Procedures for Comparing Real-World Observations and Simulation Output Data

Inspection Approach

This approach compares statistics (mean, variance, correlation coefficient, histogram, ...) of the system and the model without a formal testing procedure. The inherent problem is that each statistic is essentially a sample of size 1!

Example 5.34: Suppose that the real-world system is an M/M/1 queue with ρ = 0.6 and the corresponding simulation model is an M/M/1 queue with ρ = 0.5; the arrival rate is 1 for both. Let the output process be the delays in queue D_1, D_2, D_3, ..., and compare the average delay of the first 200 customers:

  X̄ = (1/200) Σ_{i=1}^{200} D_i   (D_i from the system)
  Ȳ = (1/200) Σ_{i=1}^{200} D_i   (D_i from the model)

• It is known from Heathcote and Winer (1969) that μ_X = E[X̄] = 0.87 and μ_Y = E[Ȳ] = 0.49, so μ_X − μ_Y = 0.38 (a poor model!)

Correlated Inspections

• Game 1: roll a single die; X is the outcome.
• Game 2: roll a single die, then toss a fair coin; Y is the number on the die plus the outcome of the toss (heads = 1, tails = 0).
• In a simulation, we can compare E[X] and E[Y] in two ways:
  – Independent experiments: roll different dice to generate realizations of X and Y
  – Dependent experiments: roll the same die to generate realizations of X and Y

Correlated Inspection Approach

• This is a more definitive approach for validating the assumptions of the simulation model other than its input probability distributions: the system and the model are both driven by exactly the same realizations of the input random variables, taken from historical data.

Example

• Suppose that the system is a five-teller bank with jockeying and the simulation model is the same bank without jockeying. Assume that the mean service time is 4 minutes.
• X = average delay in queue for the system (mean μ_X)
• Y = average delay in queue for the simulation model (mean μ_Y)
• We want to estimate the difference μ_X − μ_Y by driving the model with the same data as the system for 500 days and observing the differences X_j − Y_j. Suppose, alternatively, that independent random numbers are used to generate the inter-arrival and service times, giving model averages Y′_j.

Since Y′ has the same distribution as Y (Y′ ~ Y), both estimators are unbiased, but the correlated one has smaller variance (a simulation sketch appears at the end of this section):

  E[X − Y] = E[X − Y′] = μ_X − μ_Y
  Cov(X, Y′) = 0,  Cov(X, Y) > 0
  Var(X − Y′) = Var(X) + Var(Y′)
  Var(X − Y) = Var(X) + Var(Y) − 2 Cov(X, Y) < Var(X − Y′)

Simulation Results

(The source slide shows a table of numerical results from these paired runs; it is not reproduced here.)

Confidence-Interval Approach Based on Independent Data

• {X_1, X_2, ..., X_m} = system data (mean μ_X)
• {Y_1, Y_2, ..., Y_n} = simulation-model data (mean μ_Y)
• We can construct a 100(1 − α)% confidence interval for ζ = μ_X − μ_Y or test the null hypothesis H_0: μ_X = μ_Y.
• Paired-t interval: if n = m, let W_j = X_j − Y_j; then μ_W = μ_X − μ_Y, and the interval is

  W̄(n) ± t_{n−1, 1−α/2} · S_W(n) / √n

Example

• We want to construct a 90% confidence interval for ζ = μ_X − μ_Y using the paired-t interval with n = 10 pairs (see the sketch below):

  W̄(10) = X̄(10) − Ȳ(10) = 2.99 − 3.68 = −0.69
  S_W²(10) = Σ_{j=1}^{10} [W_j − W̄(10)]² / 9 = 0.2
  W̄(10) ± t_{9, 0.95} · S_W(10) / √10 = −0.69 ± 0.26 = [−0.95, −0.43]

• The interval does not contain 0, so the difference is statistically significant at level α = 0.10.
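A short sketch of this paired-t computation, assuming Python with SciPy; the paired_t_interval helper is illustrative, and the summary statistics are the ones from the example:

```python
from math import sqrt
from scipy import stats

def paired_t_interval(w_bar, s2_w, n, alpha=0.10):
    """Illustrative helper: 100(1 - alpha)% paired-t CI from summary
    statistics of the paired differences W_j = X_j - Y_j."""
    half_width = stats.t.ppf(1 - alpha / 2, df=n - 1) * sqrt(s2_w / n)
    return w_bar - half_width, w_bar + half_width

# Summary statistics from the example: W-bar(10) = -0.69, S^2_W(10) = 0.2.
lo, hi = paired_t_interval(w_bar=-0.69, s2_w=0.2, n=10, alpha=0.10)
print(f"90% CI for mu_X - mu_Y: [{lo:.2f}, {hi:.2f}]")  # approx [-0.95, -0.43]
# 0 lies outside the interval, so the system-model difference is
# statistically significant at the alpha = 0.10 level.
```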
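Finally, the correlated-inspection variance comparison promised above, sketched with the die/coin games from "Correlated Inspections" (Python with NumPy; the replication count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
reps = 100_000

die_a = rng.integers(1, 7, size=reps)   # die driving X
die_b = rng.integers(1, 7, size=reps)   # independent die for Y'
coin = rng.integers(0, 2, size=reps)    # fair coin: heads = 1, tails = 0

x = die_a                 # Game 1: X = die outcome
y_indep = die_b + coin    # Game 2 with its own die (independent experiments)
y_common = die_a + coin   # Game 2 driven by the same die (dependent experiments)

# Both difference estimators are unbiased for E[X] - E[Y] = 3.5 - 4.0 = -0.5 ...
print("mean(X - Y'):", (x - y_indep).mean())
print("mean(X - Y): ", (x - y_common).mean())

# ... but common random numbers shrink the variance dramatically:
# Var(X - Y') = Var(X) + Var(Y') = 35/12 + (35/12 + 1/4) ~ 6.08
# Var(X - Y)  = Var(coin) = 1/4, since X - Y = -coin here.
print("Var(X - Y'):", (x - y_indep).var(ddof=1))
print("Var(X - Y): ", (x - y_common).var(ddof=1))
```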