DSES-6620: Simulation Modeling and Analysis
Professor Ernesto Gutierrez-Miravete
Final Project: Software Development Process
Adam McConnell Starling
05/02/02
Executive Summary
The objective of this report is to use Pro-Model 4.2 to simulate both the current
process being used within the software development department to resolve
problem reports and the proposed process that intends to cut down on the
amount of time spent on each problem report. There are currently five different
departments involved in originating and resolving problem reports. Those five
departments, along with their current locations are as follows:
1. Software Requirements – Stratford, CT
2. Software Development – Stratford, CT
3. Simulation Testing and Integration – Stratford, CT
4. Hardware Testing and Integration – Philadelphia, PA
5. Aircraft Testing and Integration – West Palm Beach, FL
In the near future, it will be possible to have a hot bench (i.e., actual aircraft
intended solely for testing) that can accomplish both steps 4 and 5 in the
previous breakdown. I am proposing to have the hot bench moved to the
Bridgeport, CT facility in order to further increase the efficiency of the process.
The proposed process would break down as follows:
1. Software Requirements – Stratford, CT
2. Software Development – Stratford, CT
3. Simulation Testing and Integration – Stratford, CT
4. Hot Bench Testing and Integration – Bridgeport, CT
I was able to accumulate data for interarrival times and in-work times for each
problem report originated from each location from the database where the
information is logged throughout the correction process. It was important to
break down each time accordingly to fully understand the varying times amongst
the locations.
Based on the output from each scenario, I can show what kind of efficiency we
can expect from the new process as opposed to the old one.
Table of Contents

Executive Summary
1.0 Introduction
2.0 Objectives and Project Plan
3.0 Data Collection
4.0 Model Translation
    4.1 Locations
    4.2 Entities
    4.3 Arrivals
    4.4 Processing
    4.5 Variables
    4.6 Assumptions
5.0 Model Verification
6.0 Model Results
7.0 Conclusions
1.0 Introduction
The first modeled process is the current set of steps used to originate, correct,
and test problem reports. Some liberties have been taken in order to map this
process to the simulation world, but the basic structure remains intact.
Currently, there are five different locations from which a problem report can be
originated. The locations are as follows:
1. Software Requirements – Stratford, CT
2. Software Development – Stratford, CT
3. Simulation Testing and Integration – Stratford, CT
4. Hardware Testing and Integration – Philadelphia, PA
5. Aircraft Testing and Integration – West Palm Beach, FL
Although problem reports can be originated from various departments, once a
problem report is generated, the engineers who work on it treat it identically to
the others. The differences in how long problem reports originated from
different locations take to be resolved are mostly due to the difficulty of
communicating from one location to another. Obviously, the complexity of the
issue plays a role in that time, but from experience as a member of the software
development team, I can attest that the lack of communication across the
departments plays a large role in the process.
A further breakdown of each department will aid in depicting its function in the
procedure of working problem reports. Each department, with its primary role,
is as follows:

• Software Requirements – A department within Sikorsky Aircraft Corporation, the primary objective of this group is to provide the necessary requirements that the software development group must adhere to.
• Software Development – Another member of Sikorsky Aircraft Corporation, this group utilizes the requirements provided by the software requirements department to design the software that operates the aircraft.
• Simulation Testing and Integration – The focus of this group, a division within Sikorsky Aircraft Corporation, is to use simulated displays and hardware to test the functionality of the aircraft.
• Hardware Testing and Integration – A group under the umbrella of Boeing and the only group not within Sikorsky Aircraft Corporation, this department tests the software on the hardware that will actually go onto the aircraft; however, it does not have all the pieces for full testing procedures.
• Aircraft Testing and Integration – The final approval needed for a problem report and a member of the Sikorsky Aircraft Corporation team, this division does the final tests of the software on the actual assembled aircraft.
It is possible at any stage in the software development process for any member of
the process to submit a problem report when one is found. Once the report is
generated, it is passed to the software requirements group for analysis. The only
time this is not the case is when the software development team generates a
problem report. If this is the case, that group does the analysis itself. The
problem report is handed from one group to the next in the preceding order until
the aircraft testing and integration group finally verifies it. At any step along the
way, the problem report can be sent back to its point of analysis if the change does not
correct the problem. A pictorial of the process appears below:
[Figure: Problem report process flowchart. A received problem report is evaluated to determine whether a requirements change or a software change is required; if no change is required, the report is closed. After the requirements and/or software change is made, the software is software-tested, simulation-tested, hardware-tested, and helicopter-tested in turn. A failed test generates a new problem report and sends the work back; passing all tests closes the report.]
The proposed process is very similar to the current process; it simply combines
the last two steps into one. Instead of testing the software on the majority of the
hardware or on the actual flight aircraft, a former aircraft no longer intended for
flight will be used. This new test bed is referred to as a hot bench. Not only will
this allow for full hardware testing to be accomplished, but a closer location will
also cut down on the communication breakdowns that spring up in going from
one group to the other. A listing of the proposed process is as follows:
1. Software Requirements – Stratford, CT
2. Software Development – Stratford, CT
3. Simulation Testing and Integration – Stratford, CT
4. Hot Bench Testing and Integration – Bridgeport, CT
Overall, the complexity of the process is diminished considerably, so it is
expected that the system will be more efficient in both time and money. It
should be noted that the task of correcting problem reports is not the only focus
of each of these groups. This is only a portion of what goes into developing
software for an aircraft.
2.0 Objectives and Project Plan
The primary objective of these simulations is to come to a realistic understanding
of what to expect when the new process for correcting problem reports is put
into place. Using a simulation design based on that of the current process,
data can be gathered to observe the number of problem reports that can be
resolved in a calendar year. The utilization of each department can also serve as
an indication of how many engineers will be needed at each location.
3.0 Data Collection
The database that serves as a record of dates and events in the problem
reporting process was the primary source of information. This database contains
the dates of origination and closure, which were used to determine the time
spent at each location. Once the origination dates were tallied and further
broken down by location, the interarrival time for each problem report based on
location was derived. These two pieces of data – processing time and
interarrival time – were the key inputs to the simulation of the problem report
process.
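As a rough illustration (not part of the original analysis), the reduction from logged dates to processing and interarrival times could be scripted as follows; the file name and column names are hypothetical placeholders for whatever the database export actually provides.

```python
# Sketch of the data reduction described above, assuming the problem report
# database is exported to CSV with hypothetical columns: location,
# date_originated, date_closed.
import pandas as pd

reports = pd.read_csv("problem_reports.csv",
                      parse_dates=["date_originated", "date_closed"])

# Processing (in-progress) time per report, in days
reports["in_progress_days"] = (reports["date_closed"]
                               - reports["date_originated"]).dt.days

# Interarrival time per location: sort by origination date, then difference
reports = reports.sort_values(["location", "date_originated"])
reports["interarrival_days"] = (reports.groupby("location")["date_originated"]
                                       .diff().dt.days)

# Summary statistics used to pick candidate distributions per location
print(reports.groupby("location")[["in_progress_days",
                                   "interarrival_days"]].describe())
```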
The following figures depict the layout of the data by number of days in order to
calculate the processing time:
Figure 1: Software Requirements In-Progress Time
Figure 2: Software Development In-Progress Time
Figure 3: Simulation Testing and Integration In-Progress Time
Figure 4: Hardware Testing and Integration In-Progress Time
Figure 5: Aircraft Testing and Integration In-Progress Time
The preceding figures include what was determined to be the most appropriate
fit for each distribution. The same data can be viewed in the following table:
Location                              Processing Time (Days)
Software Requirements                 N(12, 7.68)
Software Development                  E(11.2)
Simulation Testing and Integration    E(13.2)
Hardware Testing and Integration      U(0,44.5)
Aircraft Testing and Integration      E(13.7)

Table 1: Process In-Progress Times
The following figures depict the layout of the data by number of days in order to
calculate the interarrival time:
Figure 6: Software Requirements Interarrival Time
Figure 7: Software Development Interarrival Time
Figure 8: Simulation Testing and Integration Interarrival Time
Figure 9: Hardware Testing and Integration Interarrival Time
Figure 10: Aircraft Testing and Integration Interarrival Time
The preceding figures include what was determined to be the most appropriate
fit for each distribution. The same data can be viewed in the following table:
Location                              Interarrival Time (Days)
Software Requirements                 E(4.44)
Software Development                  E(6.33)
Simulation Testing and Integration    U(0,14)
Hardware Testing and Integration      E(2.5)
Aircraft Testing and Integration      E(9.6)

Table 2: Process Interarrival Times
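For readers who want to reproduce the fitting step, a minimal sketch of checking an exponential fit with scipy is shown below. The sample values are placeholders only; the fits reported in Tables 1 and 2 were produced separately from the actual database data.

```python
# Sketch of checking an exponential fit for one location's interarrival
# times. The data values here are placeholders; in practice they would come
# from the database export described in Section 3.0.
import numpy as np
from scipy import stats

interarrival_days = np.array([1.0, 3.5, 4.0, 6.2, 2.1, 5.7, 8.3, 0.5, 4.4, 3.0])

# Exponential fit with the location parameter fixed at 0, so the scale
# equals the sample mean (the "E(mean)" notation used in Table 2)
loc, scale = stats.expon.fit(interarrival_days, floc=0)
print(f"Exponential fit: E({scale:.2f}) days")

# Kolmogorov-Smirnov test as a rough goodness-of-fit check
d_stat, p_value = stats.kstest(interarrival_days, "expon", args=(loc, scale))
print(f"K-S statistic = {d_stat:.3f}, p-value = {p_value:.3f}")
```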
4.0 Model Translation
4.1 Locations
The simulation model for the current process has 11 locations. These locations
are described in the following table:
Location Name               Location Description
SA_WPB                      Sikorsky Aircraft, West Palm Beach – tests problem reports
SA_WPB_PRs                  Sikorsky Aircraft, West Palm Beach – originates problem reports
Boeing_Philadelphia         Boeing, Philadelphia – tests problem reports
Boeing_Philadelphia_PRs     Boeing, Philadelphia – originates problem reports
SA_Lab_Stratford            Sikorsky Aircraft, Simulation Lab – tests problem reports
SA_Lab_Stratford_PRs        Sikorsky Aircraft, Simulation Lab – originates problem reports
SA_Reqs_Stratford           Sikorsky Aircraft, Requirements – tests problem reports
SA_Reqs_Stratford_PRs       Sikorsky Aircraft, Requirements – originates problem reports
SA_SW_Dev_Stratford         Sikorsky Aircraft, Development – tests problem reports
SA_SW_Dev_Stratford_PRs     Sikorsky Aircraft, Development – originates problem reports
Approval                    Location to tally problem report closures

Table 3: Current Process Locations
The locations with the suffix “PRs” act as the arrival point for problem reports
and simply pass them on to either software requirements or software
development; no processing time is incurred at these locations. The locations
without this suffix are where the actual testing takes place. This is strictly a
design implementation, as the “PRs” locations do not truly exist; however, they
offer a good place to have arrivals. The last location – Approval – is also a design
decision. It is a common location within the simulation that allows the number
of corrected problem reports to be counted.
The locations for the proposed process are very similar and can be viewed in the
following table:
Location Name              Location Description
SA_Hotbench                Sikorsky Aircraft, Hot Bench – tests problem reports
SA_Hotbench_PRs            Sikorsky Aircraft, Hot Bench – originates problem reports
SA_Simulation_Lab          Sikorsky Aircraft, Simulation Lab – tests problem reports
SA_Simulation_Lab_PRs      Sikorsky Aircraft, Simulation Lab – originates problem reports
SA_Requirements            Sikorsky Aircraft, Requirements – tests problem reports
SA_Requirements_PRs        Sikorsky Aircraft, Requirements – originates problem reports
SA_SW_Development          Sikorsky Aircraft, Development – tests problem reports
SA_SW_Development_PRs      Sikorsky Aircraft, Development – originates problem reports
Approval                   Location to tally problem report closures

Table 4: Proposed Process Locations
Once again, the locations with the suffix “PRs” are simply a design
implementation and serve as the arrival points for the different
problem reports. The Approval location serves the same purpose in this
simulation as well (i.e., a location to accumulate closures).
4.2 Entities
The only entities that are used in this simulation are the problem reports. These
are passed from one location to the next in order to be worked on. In reality, the
problem report is only the definition of what is wrong with the aircraft, and it is
continually updated throughout each step in the process. It is actually the
software itself that is changed. However, since there is a limit to the number of
entities in the student version, it was easier to simply track the updating of the
problem report as an indicator of what is happening within the system.
There are five (5) entities that are being worked on in the current process. Each
location can originate a problem report; therefore, each has its own entity. These
can be seen in the following table:
Entity Name                  Entity Description
SA_WPB_Problem_Report        Problem report generated by Sikorsky Aircraft, West Palm Beach
Boeing_Problem_Report        Problem report generated by Boeing, Philadelphia
SA_Lab_Problem_Report        Problem report generated by Sikorsky Aircraft, Simulation Lab
SA_Reqs_Problem_Report       Problem report generated by Sikorsky Aircraft, Requirements
SA_SW_Dev_Problem_Report     Problem report generated by Sikorsky Aircraft, Development

Table 5: Current Process Entities
The proposed process entities are identical, except that problem reports
originating from the hot bench replace those originated by the aircraft and
hardware testing and integration groups. These entities are as follows:
Entity Name                Entity Description
SA_Hotbench_PR             Problem report generated by Sikorsky Aircraft, Hot Bench
SA_Simulation_Lab_PR       Problem report generated by Sikorsky Aircraft, Simulation Lab
SA_Requirements_PR         Problem report generated by Sikorsky Aircraft, Requirements
SA_SW_Development_PR       Problem report generated by Sikorsky Aircraft, Development

Table 6: Proposed Process Entities
The distinction is made between problem reports generated at the different
locations due to the different amounts of time between their arrivals and the
different amounts of time it takes to process them.
4.3 Arrivals
The arrivals take place at the locations with the suffix “PRs”. The arrivals are
based upon the data collected from the database where the problem reports are
logged. However, the distributions listed in Tables 1 and 2 are expressed in
days, which Pro-Model would interpret as 24-hour periods. These day values
were therefore multiplied by 8 working hours per day, and the resulting values
were used as the interarrival times, in units of hours, for the simulation. The
translation for each distribution can be viewed in the following tables; a short
sketch of the conversion follows them.
Distribution (Days)    Distribution (Working Hours)
E(4.44)                E(31.5)
E(6.33)                E(46.8)
U(0,14)                U(0,112)
E(2.5)                 E(20)
E(9.6)                 E(72.8)

Table 7: Current Process Interarrival Times Translated to Hours
Distribution (Days)    Distribution (Working Hours)
E(4.44)                E(31.5)
E(6.33)                E(46.8)
U(0,14)                U(0,112)
E(1.25)                E(10)

Table 8: Proposed Process Interarrival Times Translated to Hours
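The sketch below simply restates the day-to-hour conversion described above; the factor of 8 working hours per day follows the text, and the distribution parameters are those of Table 2.

```python
# Sketch of the day-to-working-hour conversion described in Section 4.3.
# Distributions are represented as (type, parameters) pairs; the parameter
# values follow Table 2.
WORK_HOURS_PER_DAY = 8

interarrival_days = {
    "Software Requirements":              ("E", (4.44,)),
    "Software Development":               ("E", (6.33,)),
    "Simulation Testing and Integration": ("U", (0.0, 14.0)),
    "Hardware Testing and Integration":   ("E", (2.5,)),
    "Aircraft Testing and Integration":   ("E", (9.6,)),
}

def to_working_hours(dist):
    """Scale every parameter of a distribution from days to working hours."""
    kind, params = dist
    return kind, tuple(p * WORK_HOURS_PER_DAY for p in params)

for location, dist in interarrival_days.items():
    kind, params = to_working_hours(dist)
    print(f"{location}: {kind}{params} hours")
```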
4.4 Processing
The processing for each of the simulations is virtually identical in logic. Each
location can originate a problem report. These reports are immediately assigned
to software requirements except when software development originates them, in
which case it does the evaluation. After the initial analysis by either of the two
locations, the problem report proceeds to the simulation lab, hardware lab, and
aircraft hangar, in that order. The following order depicts the flow of the
problem reports:
Requirements -> Development -> Simulation -> Hardware -> Aircraft
Along this line of transfer, any of the departments can fail the problem report
and send it back to its point of analysis. This would happen if the problem
report fails the particular test that is being performed at the corresponding
location. If the problem report is rejected, it follows the same chain of events
again in order to be retested.
Figure 11: Current Process Processing Diagram
The preceding figure demonstrates the pathway that the entities can follow in
order to get to the approval stage. It should be noted that the aircraft testing and
integration team is not the only group that can pass a problem report. It is
possible for a problem report generated by the requirements group to simply be
a change to their requirements that does not require a software change to be
made.
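To make the routing logic concrete, here is a minimal sketch (plain Python, not the Pro-Model processing logic itself) of how a single problem report could move through the chain; the rejection probability is a hypothetical placeholder, since the report does not state one.

```python
# Minimal sketch of the routing described in Section 4.4 (illustrative only;
# the actual model was built in Pro-Model).
import random

STATIONS = ["Requirements", "Development", "Simulation", "Hardware", "Aircraft"]
REJECT_PROBABILITY = 0.2  # hypothetical placeholder, not taken from the report

def route_problem_report(analysis_station="Requirements"):
    """Walk one problem report through the test chain; a failed test sends it
    back to its point of analysis, as described above."""
    rework_cycles = 0
    while True:
        start = STATIONS.index(analysis_station)
        rejected = False
        for test_station in STATIONS[start + 1:]:
            if random.random() < REJECT_PROBABILITY:
                rejected = True        # this test failed
                rework_cycles += 1     # report goes back for another fix
                break
        if not rejected:
            return rework_cycles       # all remaining tests passed; report closed

print(route_problem_report())
```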
For the proposed reduction in the number of locations, the logic is the same as
that above. The only difference between the two is the number of locations.
Problem reports are rejected and sent back in the same way as in the current
process.
The following pictorial represents the processing for the proposed process:
Figure 12: Proposed Process Processing Diagram
As can be seen in the comparison of the two diagrams, the latter is slightly less
complex with the removal of the fifth location. The same possibility for a
requirements-originated problem report being immediately sent to approval
without being tested holds true.
4.5 Variables
There are only two variables that are tracked in each simulation –
PRs_In_Progress and Approved_PRs. These variables are fairly self-explanatory, but the following table explicitly defines them:
Variable Name       Variable Definition
PRs_In_Progress     Number of problem reports currently being worked
Approved_PRs        Number of problem reports that have been closed
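As a small illustration (again plain Python, not Pro-Model syntax), the two counters could be maintained as follows, incrementing on arrival and updating when a report reaches the Approval location.

```python
# Minimal sketch of the two bookkeeping variables defined above.
prs_in_progress = 0   # problem reports currently being worked
approved_prs = 0      # problem reports that have been closed

def on_problem_report_originated():
    """Called when a new problem report arrives at a *_PRs location."""
    global prs_in_progress
    prs_in_progress += 1

def on_problem_report_approved():
    """Called when a problem report reaches the Approval location."""
    global prs_in_progress, approved_prs
    prs_in_progress -= 1
    approved_prs += 1
```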
4.6 Assumptions
The following assumptions have been made in order to translate the actual
software development process into a simulation:

• The work is conducted continually (i.e., what is left at the end of one 8-hour period is begun at the beginning of the next).
• Most research is done at the software development level.
• In the proposed process, the steps leading up to the hot bench testing take the same amount of time as those in the current process.
• Hot bench problem reports will arrive more frequently (i.e., with a shorter interarrival time) than either the aircraft or hardware problem reports.
• For both the hardware and aircraft locations, the numbers were estimated based on my knowledge of the engineers my colleagues have dealt with.
• The number of engineers working on the hot bench will be less than the combined number of engineers at the aircraft and hardware locations.
5.0 Model Verification
The current process model can be verified based on comparisons to the actual
results after a one-year period of working on problem reports in the database. I
also polled members of my team to get an estimate of the number of problem
reports they closed over the year in order to come up with an average number
of problem reports accomplished per engineer. The tallied values matched up
well with those I found in the database and heard from fellow engineers.
The first simulation run was that of the current process. A few alterations were
made along the way until a realistic number of problem reports was closed in a
one-year period. The values obtained compare well with what I would expect
from looking into the performance of the total software team, which is where the
brunt of the work takes place. That is the only location really affected by where
a problem report originates. The number of days spent on each problem report
also looks feasible for comparison in the simulation realm. It was expected that
problem reports originating outside of the company would take more time, and
this is reflected in the simulation as well.
Based on these numbers, the simulation for the proposed process was a virtual
carbon copy, with the exception of the combined final two (2) steps. The
assumptions mentioned in the previous section were made to deliberately stress
the system more than it probably would be, in the hope that the results would
still demonstrate an improvement.
The first three steps hold up to validation from the current process, and the last
step, if anything, overestimates the number of problem reports that will be
generated. The intent is to use the worst-case arrival time in order to give the
system a cushion of validity. If, in actuality, the real-world process has less
frequent problem reports coming from the hot bench location, there will be a
buffer within which to scale the efficiency of the system. This was done to make
the system more useful over a broad range of inputs.
After a review by some of my peers at work, each model was deemed valid
enough given the assumptions made in creating it.
6.0 Model Results
The three values that were most important to capture are the following:

• Number of problem reports still in progress
• Amount of time for a problem report from each location to be closed
• Number of problem reports closed/approved
These values provide the most appropriate understanding of the system. A
further breakdown of each location will then show how changes could be made
in order to improve the system.
From 10 replications of the current process, the average values for the
aforementioned outputs are as follows:
Output Name                          Output Value
In-progress problem reports          28.6
SA WPB PR average days in work       6.694161
Boeing PR average days in work       9.878549
SA Lab PR average days in work       6.828335
SA Reqs PR average days in work      6.159518
SA SW Dev PR average days in work    6.139331
Approved problem reports             257.7
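As an aside, the same replication averaging can be reproduced outside Pro-Model; the sketch below assumes the per-replication outputs were exported as dictionaries with hypothetical keys and placeholder values.

```python
# Sketch of averaging outputs over the 10 replications, assuming each
# replication's results were exported as a dictionary of output values
# (keys are hypothetical; only two replications are shown for brevity).
from statistics import mean, stdev

replications = [
    {"in_progress": 27, "approved": 255, "sa_wpb_days_in_work": 6.5},
    {"in_progress": 30, "approved": 260, "sa_wpb_days_in_work": 6.9},
    # ... remaining replications ...
]

for output in replications[0]:
    values = [rep[output] for rep in replications]
    print(f"{output}: mean = {mean(values):.2f}, "
          f"std dev = {stdev(values):.2f} (n = {len(values)})")
```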
The next three graphs show the location utilization, location states, and entity
states, respectively.
Figure 13: Current Process Location Utilization
The above figure demonstrates that the software development group does the
bulk of the work on a problem report. This is in large part due to the amount of
time spent writing code for the aircraft, writing test code for that code, and
running its own simulation tests.
Figure 14: Current Process Location States
The amount of time spent empty by the groups other than development is
expected since these groups have a lot of other tasks to perform. Software
development, currently in a heavy support phase, would not be expected to have
any empty time for other tasks. There are always at least a few engineers
working problem reports.
Figure 15: Current Process Entity States
The only deviation from the actual software development process can be seen
above. Problem reports generally spend closer to 40% of their time in a down (idle) state.
However, this value could not be achieved while still maintaining the other
statistics.
With the current process having been set up appropriately, the proposed process
could then be modeled off of that and results for it could be gained. Those
results, run over 10 replications, are as follows:
Output Name                            Output Value
In-progress problem reports            28.5
SA Hotbench PR average days in work    6.029566
SA Lab PR average days in work         6.046231
SA Reqs PR average days in work        5.128569
SA SW Dev PR average days in work      5.193021
Approved problem reports               340.5
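For reference, the relative change between the two scenarios follows directly from the two result tables; the sketch below simply restates that arithmetic with the values copied from the tables above.

```python
# Comparing the averaged outputs of the two scenarios (values taken from
# the result tables above).
current = {"approved": 257.7, "in_progress": 28.6}
proposed = {"approved": 340.5, "in_progress": 28.5}

increase = proposed["approved"] / current["approved"] - 1.0
print(f"Approved problem reports: {increase:.1%} increase")  # roughly 32%

# Per-location average days in work (current, proposed), copied from the tables
days_in_work = {
    "SA Lab":    (6.828335, 6.046231),
    "SA Reqs":   (6.159518, 5.128569),
    "SA SW Dev": (6.139331, 5.193021),
}
for location, (old, new) in days_in_work.items():
    print(f"{location}: {old - new:.2f} fewer days per problem report")
```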
The following figures demonstrate the location utilization, location states, and
entity states, respectively:
Figure 16: Proposed Process Location Utilization
Once again, the bulk of the activity is done at the software development phase.
This was not expected to change since that location’s role is identical to its role
in the current process.
Figure 17: Proposed Process Location States
Due to the decrease in the interarrival time for the hot bench problem reports
(i.e., reports arriving more frequently), each location is much busier than in the
current process. This was done in order to gain a perspective on the worst case
in terms of system stress due to incoming problem reports. If the interarrival
times were scaled back, the locations would be less stressed, but each problem
report would be completed just as efficiently, allowing more time for other tasks
to be performed by the group.
Figure 18: Proposed Process Entity States
As in the previous case, this statistic is flawed because the problem reports are
not blocked for a long enough period of time.
7.0 Conclusions
The transfer from a two-phase hardware and aircraft testing and integration
procedure to a single hot bench testing procedure will increase the efficiency of
the process as a whole. However, depending on the amount of time and money
spent transferring the aircraft for use as a test bed and setting it up, the change
may not necessarily be efficient in comparison.
The total number of problem reports closed in a one-year period increased
drastically with the proposed process. In part, this is due to the faster
interarrival times for the problem reports from the hot bench location. A
two-fold increase in the number of problem reports is only a moderate
possibility, and there may not be any increase whatsoever.
The time spent on each problem report diminished by a matter of approximately
nine (9) days. This is almost two (2) weeks’ worth of time, which is a very
promising number. That allows for a sizable increase in the number of problem
reports being resolved even with the same current interarrival time. Time not
spent on problem reports would allow for more research into preventing
problems.
There is a lot of down time spent in locations other than software development.
Unless there is a proportional amount of work for the remaining members of the
other groups, these can be scaled down to conserve money. This would not
necessarily mean lay-offs, but perhaps transfers to groups on other programs
that need assistance.
All in all, the outcome of the simulation was what was expected. A notable
increase in the efficiency of the system was seen in the proposed process, but not
so much that it made the simulation for the latter seem unrealistic. The data
collected should provide an idea of what to expect in the future as far as
problem reporting is concerned.