DSES-6620 Simulation Modeling and Analysis
Navy Supply Chain Model
Verification and Validation
Rich Sewersky
May 2, 2002
Introduction
The helicopter industry has recently begun exploring a new business venture to
provide spare parts to the Navy and other customers on the basis of a fixed cost per flying
hour of the covered fleet. The Navy coined the term Performance Based Logistics (PBL),
and the first attempt was to contract for 14 components on a year-to-year basis. A key
performance measure of this business is Fill Rate (%), the ratio of orders filled
on time to the total requests received; incentive payments to the contractor would be
based, in part, on this measure. Inventory costs of expensive helicopter components are
borne by the customer or the contractor and hence must be predicted and controlled.
Large variations in demand, transportation, and in-shop processing time must be
examined and planned for so that their impact on performance is minimized.
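As a concrete illustration of the fill rate measure, the following minimal Python sketch
computes it from a list of order records; the record layout and dates are hypothetical,
not drawn from the actual contract data.

    from datetime import date

    # Hypothetical order records as (date_required, date_shipped) pairs;
    # None means the order is still on backorder. Illustrative data only.
    orders = [
        (date(2001, 3, 1), date(2001, 2, 27)),   # shipped early -> on time
        (date(2001, 3, 10), date(2001, 3, 12)),  # shipped late
        (date(2001, 4, 2), None),                # unfilled backorder
    ]

    filled_on_time = sum(
        1 for required, shipped in orders
        if shipped is not None and shipped <= required
    )
    fill_rate = 100.0 * filled_on_time / len(orders)
    print(f"Fill rate: {fill_rate:.1f}%")  # 33.3% for this toy data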
To explore these issues, an effort was established to develop discrete event simulation
models of the Navy supply chain for repairable components. The first of these was
developed by researchers under the guidance of the author to predict fill rate given a
particular starting inventory, demand rate assumptions, Navy transportation times for
components to be repaired (called carcasses), and shop flow time to repair and test
components (repaired components are called Ready For Issue, or RFI). A research report
(proprietary) was prepared, but no formal verification and validation (V&V) was completed
at the time.
The goal of this project was to complete a verification and validation of this model to the
point that it could be used for business trade studies with some confidence. One
component (a gearbox) of the 14 was chosen to make the scope manageable and simplify
presentation. This goal was substantially achieved, but additional work would be needed
to use the results numerically. Some ideas for further work are suggested in the
conclusion.
Technical Approach
A project proposal was completed and is enclosed as Appendix A. The essential steps are
outlined and then described below.
- Conduct a brief literature search for military or commercial standard practices
used in V&V.
- Review the model logic and implemented code of the existing model, and
review the research report for previous V&V work.
- Explore sensitivity to changes in input variable values and the distributions
chosen for modeling purposes.
- Calibrate the model's key output against some well understood source (no real data
was available on system performance that matched the proposed business
arrangement).
Literature search: A web site sponsored by the Defense Modeling and
Simulation Office (DMSO) was found which contained a large variety of resources, including
specific guideline documents, policies, educational materials, and a bibliography. It is
focused on the type of simulations that DOD specializes in (war games), and the critical
problem they face is how to ensure that the money spent on contracted development of
models that need to be tied together is well spent. They have specialists in the
verification and validation role who act as quality control on the V&V process. Formal
requirements development and traceability are used, and formal test procedures are
developed and followed.
While this level of documentation makes sense for large projects with extensive
interconnection requirements, a scaled-down process is needed for small models such as
the one explored herein. The web site is worth exploring:
www.dmso.mil/public/transition/vva/.
Review of model logic: Developing a working understanding of how a model (developed
by someone else) is constructed and coded is essential in starting a verification effort.
This particular model was developed using TaylorED, a commercially available discrete
event simulation package. TaylorED uses modules called "atoms" which are connected to
each other to build simulations of varying complexity. Connection ports are shown as
dots on yellow connection blocks. Figure 1 provides a screen shot that shows this
connectivity for the model.
Figure 1 – TaylorED Atom Connectivity Example
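To make the atom-and-port idea concrete, the following Python sketch mimics the
connection scheme in miniature. This is not TaylorED code (TaylorED has its own
scripting language); the class and method names are invented for illustration.

    class Atom:
        """Toy stand-in for a TaylorED-style building block (not the real API)."""

        def __init__(self, name):
            self.name = name
            self.output = None  # downstream atom wired to this one's port

        def connect(self, downstream):
            self.output = downstream

        def receive(self, item):
            print(f"{self.name} handling {item}")
            if self.output is not None:
                self.output.receive(item)

    # A miniature chain: field demand -> repair shop -> RFI warehouse.
    source, shop, warehouse = Atom("Source"), Atom("RepairShop"), Atom("Warehouse")
    source.connect(shop)
    shop.connect(warehouse)
    source.receive("carcass #1")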
The full supply chain simulation has 102 atoms to cover the 14 components it was
designed for. Many of these are essentially duplicates, so to simplify the analysis, a
stripped-down version (26 atoms) was created for code review purposes only. It is
provided as a file called "RPI_TestCaseTruncated.mod". A special atom was used to
extract a text version of the code, which was subsequently edited further to remove
duplicate code and clarify module boundaries. The resulting file, called
"Model Document.doc", is provided as Appendix B. Model atom structure was traced and a
simplified connection diagram was prepared, as shown in Figure 2.
[Figure 2 – Supply Chain Model Structure and Connectivity. The diagram's labels include
the internal tables PoolInfoTb, InfoTb, and BkOrderTb and the sampling distributions
Uniform(1, flowtime), Uniform(1, deltime), and NegExp(3.121).]
Input and output for the model are handled through Excel and internal TaylorED
tables, which are referenced in the code using a (row, column) numbering scheme.
Appendix C includes the important tables used in the model. Using these printouts and
diagrams, a detailed code walkthrough was completed by the author, with occasional
consultation with the researcher who developed the model. In general the trace went well,
and one code error was found and corrected prior to running the V&V model runs.
This highlights how important peer code walkthroughs can be in finding errors and
avoiding incorrect results and wasted work. A test was conducted, and this error had a
fairly minor effect on the results as originally reported by the researcher (discussed later).
Input data was also reviewed for source and accuracy. The raw data is summarized in a
file called "Model Input.xls". Shop turnaround times were taken from incoming and
outgoing records over about a 2 year period. They are in the first tab of the spreadsheet
and were submitted to Statfit for plotting and Autofit distribution fitting, although the
resulting distribution was not used in the analysis. It is summarized in Figure 3.
Figure 3 - Flowtime Input Continuous Distribution (days)
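For readers without Statfit, a rough equivalent of the Autofit step can be sketched in
Python with scipy; the turnaround values below are made up for illustration, the real
data being in "Model Input.xls".

    import numpy as np
    from scipy import stats

    # Illustrative turnaround times in days (not the actual records).
    tat_days = np.array([12.0, 35.5, 18.2, 60.1, 27.4, 44.0, 15.8, 71.3, 22.9, 38.6])

    # Fit a few candidate continuous distributions, as Autofit does, and
    # rank them with a Kolmogorov-Smirnov goodness-of-fit statistic.
    for name in ("lognorm", "gamma", "weibull_min"):
        dist = getattr(stats, name)
        params = dist.fit(tat_days)
        ks_stat, p_value = stats.kstest(tat_days, name, args=params)
        print(f"{name:12s} KS={ks_stat:.3f} p={p_value:.3f}")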
The other key variable is the demand for repaired products coming from the field (Navy).
After extensive investigation (prior to this project), it was decided that the Navy
databases capturing field maintenance were insufficient for modeling purposes; hence Navy
Inventory Control Point (ICP) transaction records were acquired, hand corrected, and
trended to project demand. The remaining 3 tabs contain those data, with the relevant
items highlighted in gray shading. Again the data was run through Statfit, and the
resulting fit is shown in Figure 4.
Figure 4 - Demand Input Continuous Distribution (monthly)
As the commercial package used for validation checks assumes demand is
Poisson distributed, Statfit was also used to fit a Poisson distribution, which is shown
in Figure 5.
Figure 5 - Demand Input Discrete Distribution (monthly)
The Statfit files are called “Tat.sfp” and “Demands.sfp”.
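The exponential and Poisson views of demand are two sides of the same process: if the
time between demands is negative exponential, then counts per fixed period are Poisson.
The sketch below checks this by simulation; it assumes the NegExp(3.121) parameter in
the model is a mean inter-arrival time in days, which is the author's reading rather
than a documented fact.

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumption: NegExp(3.121) is a mean inter-arrival time in days.
    mean_interarrival = 3.121
    arrivals = np.cumsum(rng.exponential(mean_interarrival, size=2000))
    arrivals = arrivals[arrivals < 360.0]

    # Bin a simulated year into twelve 30-day "months".
    monthly = np.histogram(arrivals, bins=np.arange(0.0, 361.0, 30.0))[0]

    # For Poisson counts the variance should be close to the mean
    # (~30 / 3.121 = 9.6 demands per month).
    print("mean monthly demand:", monthly.mean())
    print("variance           :", monthly.var())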
The project proposal had indicated that debug mode tracing would be used to further
check the code implementation. This was not accomplished, as the version of TaylorED
being used did not support that feature (although it was described in the help file!).
That remains a worthwhile exercise for future validation efforts.
Sensitivity: After concluding that the model structure, code implementation, and input
data seemed reasonable, the bulk of the remaining effort was to exercise the model to
understand how its outputs reacted to input data. Selective changes were made
sequentially, and 3 to 10 repetitions of the model were run for each change to examine
the outputs statistically. Outputs for only the gearbox (component 5) were extracted to
Excel, including the relevant backlog history, which was analyzed in Excel. These outputs
are all stored in the file "RunResults.xls" and are summarized in Table 1. The model file
used to generate this data is included as file "RPI_TestCaseR2.mod". The various
iterations referenced in the spreadsheet file were saved for future reference and reuse
but are not included due to space considerations. A review and discussion of these
results follows.
Scenario | Stat | Total BK Order# | Total Order# | Fill Rate (%) | Avg WIP | Avg RFI in WH | BO Count | BO Avg Dur | BO Min Dur | BO Max Dur
Baseline Run (with WIP distribution error) | Avg | 7.9 | 119.8 | 93.7 | 16.8 | 10.6 | 13.2 | 4.0 | 0.1 | 18.3
  | StDev | 9.8 | 8.0 | 7.8 | 1.1 | 1.7 | 9.5 | 3.1 | 0.6 | 5.8
Baseline Run (with WIP distribution error corrected) | Avg | 8.4 | 117.5 | 93.3 | 18.6 | 9.2 | 21.0 | 6.6 | 0.1 | 23.3
  | StDev | 11.5 | 9.2 | 9.2 | 1.5 | 2.4 | 6.6 | 3.9 | 0.3 | 8.2
Run with all variations removed | Avg | 11 | 116 | 90.5 | 20.7 | 7.2 | 11 | 16.3 | 0.7 | 31.9
  | StDev | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a
Run with demand variation only (transport and processing at worst case fixed level) | Avg | 15.8 | 110.3 | 86.6 | 20.2 | 8.3 | 15.8 | 13.1 | 0.1 | 50.3
  | StDev | 14.5 | 12.1 | 11.1 | 1.8 | 2.9 | 14.5 | 5.6 | 1.7 | 10.7
Run with demand variation only (transport and processing at average fixed level) | Avg | 2.4 | 121.6 | 98.1 | 11.7 | 21.7 | 3.0 | 4.1 | 0.8 | 11.9
  | StDev | 3.2 | 6.7 | 2.5 | 0.6 | 0.9 | 3.4 | 2.1 | 1.6 | 4.5
"Wartime" Scenario (Worst Case Everything) | Avg | 344.2 | 352.2 | 2.3 | 57.0 | 0.1 | 344.2 | 50.9 | 30.5 | 79.6
  | StDev | 20.1 | 20.1 | 0.1 | 3.1 | 0.1 | 20.1 | 3.1 | 2.7 | 2.1
(BO columns summarize backorders: count plus average, minimum, and maximum duration in days.)
Table 1 – Run Results Summary
Baseline Runs – The first two groups of 10 runs were used to test the effect of the code
error found during the walkthrough. The error affected the way Work In Process (WIP) is
pre-distributed at the start of a year's run, resulting in processing times that ranged
from 0-1 day instead of 1-60 days. This would have the largest effect if there was
significant WIP at the start of a year. The key measures of its effect are Fill Rate and
RFI on the shelf, which changed from 93.7% to 93.3% (after correction) and from 10.6
units to 9.2 units respectively. The other effect to be observed from these runs is the
significant level of run-to-run variation. For example, the fill rate (the measure that
contract incentives could be tied to) has a standard deviation of 9.2 on an average value
of 93.3. The high was 100% while the low was 76.8%, which matches +/- 2 standard
deviation limits of 74.8 to 100. This indicates that much effort is needed to control
variability in demand, transportation, and processing, or to hold large stock levels to
buffer this variability.
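One way to quantify that run-to-run spread is a confidence interval on the mean fill
rate across replications. The sketch below shows the arithmetic on invented numbers;
the actual replication values live in "RunResults.xls".

    import math
    import statistics

    # Fill rates from 10 hypothetical replications (illustrative values only).
    fill_rates = [100.0, 96.2, 93.5, 88.1, 99.0, 76.8, 94.4, 97.3, 90.6, 97.1]

    n = len(fill_rates)
    mean = statistics.mean(fill_rates)
    std = statistics.stdev(fill_rates)  # sample standard deviation

    # 95% half-width using the t distribution (t = 2.262 for 9 deg. of freedom).
    half_width = 2.262 * std / math.sqrt(n)
    print(f"fill rate = {mean:.1f}% +/- {half_width:.1f}% (95% CI, {n} runs)")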
Variations removed – In this set of 3 (essentially identical) runs, the fill rate was
90.5%, which matches the result from the OPUS spares modeling tool marketed by Systecon,
Inc. While not as good as real supply chain performance data, which should eventually
become available if a contract is awarded for PBL, it provides a good sanity check point,
similar to using queuing theory to crosscheck simple queuing models. OPUS is used in the
European defense industry to help configure spare parts deployment and uses a similar
model structure, including demand rates, processing rates, and transportation times, to
predict fill rates and inventory levels. There are US equivalent products as well as DOD
developed versions based on theory developed by Dr. Craig Sherbrooke.
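For reference, the classical crosscheck in the Sherbrooke tradition can be written down
directly: by Palm's theorem, Poisson demand with a fixed average turnaround gives a
Poisson number of units in the repair pipeline, and the fill rate is the probability
that the pipeline holds fewer units than the stock level. A minimal sketch, with purely
illustrative parameter values:

    import math

    def poisson_pmf(k: int, mu: float) -> float:
        return math.exp(-mu) * mu ** k / math.factorial(k)

    def fill_rate(stock: int, demand_rate: float, turnaround: float) -> float:
        """Fill rate for an (s-1, s) repairable-item policy.

        With Poisson demand, the pipeline count is Poisson with mean
        demand_rate * turnaround (Palm's theorem); a demand is filled
        off the shelf when the pipeline holds fewer than `stock` units.
        """
        mu = demand_rate * turnaround
        return sum(poisson_pmf(k, mu) for k in range(stock))

    # Illustrative only: ~9.6 demands/month against a 1.5-month pipeline.
    print(f"fill rate: {100 * fill_rate(20, 9.6, 1.5):.1f}%")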
Demand variation only with worst case transport and processing – In this set, the
transportation and processing times were set to worst case levels with demand sampled
from the negative exponential distribution. Note that fill rate went down from 93.3 to
86.6, back orders went up (roughly doubled), and WIP and RFI levels stayed about the
same. This makes sense in that using worst case transport and processing times slows
the conversion of carcasses to RFI and hence increases backorders.
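A quick Little's law check (average WIP = demand rate x average pipeline time) makes the
same point numerically. The demand rate below rests on the inter-arrival assumption
noted earlier, so the implied pipeline times are rough indications, not calibrated values.

    # Little's law: avg WIP = arrival rate x avg time in the pipeline.
    # Assumed demand of ~9.6 units/month; Avg WIP values from Table 1.
    demand_per_month = 9.6
    for label, avg_wip in [("baseline (corrected)", 18.6),
                           ("worst case fixed", 20.2)]:
        pipeline_months = avg_wip / demand_per_month
        print(f"{label}: implied pipeline time ~ {pipeline_months * 30:.0f} days")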
Demand variation with average transport and processing – Fill rate improved
significantly, from 93.3 to 98.1, with a decrease in variability (the standard deviation
dropped from 9.2 to 2.5). Back orders also dropped, from 8.4 to 2.4, though the standard
deviation remained proportionally high. WIP dropped significantly and RFI doubled, with
significantly lower variance. This all makes sense given that response times are assumed
to always be at the average level (never higher than average).
Wartime surge – This scenario was the most interesting in that it shows how bogged
down the supply chain can get when things go bad. The demand rate was tripled, the
processing time was forced to a uniform distribution between 45 and 90 days, and the
transport times were forced to a uniform distribution between 20 and 60 days. Fill rate
dropped to 2.3%, with only 8 units actually shipped. Average WIP went up to 57 units and
average RFI dropped to zero. The average backorder time was 51 days, with the lowest at
30.5 days and the worst of those shipped at 80 (note that this number is deceiving, as
many units were still on backorder when the simulation stopped after 1 year). It would
be interesting to explore how long it would take to recover from such a surge in demand,
but the model is not set up to do that at this time.
Conclusions/Recommendations
Based on the analysis that was accomplished, the model seems correctly constructed for
the limited purpose for which it was developed and behaves in intuitively correct ways
in response to the input variations that were explored. It compared well with the OPUS
commercial tool at the one calibration point that was checked (additional OPUS setup
points should be checked if possible). The author would feel comfortable using this tool
to conduct qualitative studies comparing alternate business strategies, but not to base
quantitative contract terms on its output.
One drawback in conducting further runs is the highly manual cut-and-paste back and
forth between TaylorED and Excel needed to complete a scenario (it took about an hour of
dedicated time to run a 10 repetition set). This ultimately limited how many options
could be explored. Building a more automated process within TaylorED is probably
possible, but not within the skills of the author at this time. Such a process would be
needed to thoroughly evaluate model behavior. As previously mentioned, an extension of
the model to run surge demands (or other variations) followed by a recovery period would
be very useful for exploring various transient phenomena. The current model would need
to be able to keep the state of its WIP, pause and accept new input conditions, and
continue the run for some follow-on period.
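On the automation point, while the author cannot speak to scripting inside TaylorED,
the Excel side of the loop could plausibly be handled by a small script once each
replication is exported to a file. The sketch below assumes a hypothetical layout of one
CSV per run with a fill_rate column; the file names and column name are invented.

    import csv
    import glob
    import statistics

    # Hypothetical export: run_01.csv, run_02.csv, ... each with a header
    # row containing a "fill_rate" column. Names are illustrative only.
    results = []
    for path in sorted(glob.glob("run_*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                results.append(float(row["fill_rate"]))

    if len(results) >= 2:
        print(f"{len(results)} runs: mean={statistics.mean(results):.1f} "
              f"std={statistics.stdev(results):.1f}")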
Another process that should be completed is a debug trace review using the updated
version of TaylorED.
The last recommendation is to run additional points using distributions more closely
matched to the input data (instead of assuming uniform).
Based on the time investment to complete this project, a thorough validation effort can
take significant resources, which must be carefully planned and allocated to result in a
quantitatively useful model.