Engine Overhaul & Repair Center
Simulation Final Project Report
Laura Newell
DSES 6620-Fall 2002
Table of Contents
Introduction
Description of the Current System
Data Collection
Data Analysis
TARs
Figure 1. TAR Processing Time AutoFit Output
Figure 2. TAR Processing Time Data Fitted to Lognormal Curve
Figure 3. Goodness of Fit Test for TAR Processing Time Data
Figure 4. TAR Interarrival Time AutoFit Output
Figure 5. TAR Interarrival Data Fitted to Exponential Curve
Figure 6. Goodness of Fit Test for TAR Interarrival Data
Light Maintenance
Figure 7. LM Processing Time AutoFit Output
Figure 8. LM Processing Time Data Fitted to Lognormal Curve
Figure 9. Goodness of Fit Test for LM Processing Time Data
Figure 10. LM Interarrival Time AutoFit Output
Figure 11. LM Interarrival Data Fitted to Pearson 5 Curve
Figure 12. Goodness of Fit Test for LM Interarrival Data
Heavy Maintenance
Figure 13. HM Processing Time AutoFit Output
Figure 14. HM Processing Time Data Fitted to Exponential Curve
Figure 15. Goodness of Fit Test for HM Processing Time Data
Figure 16. HM Interarrival Time AutoFit Output
Figure 17. HM Interarrival Data Fitted to Uniform Curve
Figure 18. Goodness of Fit Test for HM Interarrival Data
Description of the Model
Figure 19. ProModel Locations for E.O.R.C. Model
Figure 20. ProModel Entities for E.O.R.C. Model
Figure 21. ProModel Arrivals for E.O.R.C. Model
Figure 22. ProModel Processing for E.O.R.C. Model
Figure 23. ProModel Layout of E.O.R.C. Model
Simulation Runs
Figure 24. ProModel Simulation Options for E.O.R.C. Model
Figure 25. ProModel Snapshot of E.O.R.C. Model
Results
Table 1. ProModel Output for E.O.R.C. Model
Figure 26. Capacity for Each Location
Table 2. Failed Arrivals from E.O.R.C. System
Table 3. Entity Activity for E.O.R.C. System
Changing the Model
Table 4. Location Activity for 3 Bays and Reduced HM P.T.
Conclusion
Engine Overhaul and Repair Center
Introduction
Pratt & Whitney Aircraft Engines has been designing and manufacturing jet engines for decades. Recently, Pratt & Whitney expanded their business into the aftermarket, where repairing an engine can earn more than selling one. They have many overhaul and repair centers around the world, but their “homebase” is in Cheshire, CT. As the overhaul business grew, the repair station in Cheshire began working over its capacity, so Pratt & Whitney Aircraft Engines recently decided to expand their Overhaul and Repair Center in Cheshire. A second facility in Middletown, the Engine Overhaul and Repair Center (E.O.R.C.), was created almost 2 years ago. This new repair station was much smaller than its counterpart and was only supposed to handle 5-7 engines a month, all of them light maintenance. However, as the facility matured, it was able to handle more engines, including engines with larger workscopes.
Lately, however, the repair station has been experiencing growing pains. Sometimes it is starved for work, and at other times it does not have the capacity to keep up. As a result, it has been hard to determine whether the facility should expand, stay in its current configuration, or downsize. To help the E.O.R.C., a simulation model will be created for the system. Once the system is correctly modeled, the Overhaul and Repair Center will use the data to answer the following questions:

• Does the induction rate follow a distribution?
• Do the processing times follow a distribution?
• What are the practical turnaround times for each repair class?
• How long do engines wait in queue before being inducted?
• Should the engine center expand or downsize based on the data?
• Is the current configuration optimal?
Description of the Current System
There are three different types of engines that come into the E.O.R.C. for repair:
• TARs: engines that are Tested As Received. They only have to be prepped for test and then tested. This could be an engine that was behaving strangely, and the airline wanted certain tests run to diagnose the problem.
• Light Maintenance (LM): engines that need small repairs or simply a check-up/inspection. Specific tests are sometimes performed on these engines as well, and Service Bulletins may be incorporated while the engine is at the shop.
• Heavy Maintenance (HM): engines with large workscopes. Sometimes they have experienced an engine surge or an in-flight shutdown that caused significant damage; sometimes they are simply engines that have not been overhauled in a while and need several things done. Service Bulletins may also be incorporated while the engine is at the shop.
Service Bulletins are updates made to the engine to correct certain problems. Depending on how severe the problem is, the customer may wait to have them incorporated during a shop visit; that is, the engine may come to the shop for another problem, and the customer uses that visit as an opportunity to incorporate some Service Bulletins.
Each of these engines has different arrival times and different processing times.
Heavy Maintenance engines do not arrive as often as TARs or Light Maintenance
engines. As expected, a Heavy Maintenance engine would take much longer to overhaul
than a Light Maintenance engine or a TAR. Many times, the workscope of the engine
will change while it is in the shop. Problems are discovered as the engine is
disassembled. It is always at the customer’s discretion whether or not these problems get
fixed at that point in time.
For all engines that require maintenance, there is a process that is followed:
• Tear down/disassemble the engine
• Send the appropriate parts out for repair
• Receive the parts back from repair
• Reassemble the engine
• Test the engine
• Check After Test
• Ship to the customer
Obviously, the time it takes to perform these tasks depends on the type of workscope
assigned to that engine.
The engines arrive and usually wait in a queue outside of the repair station. Once inducted, an engine moves to its designated station. In the E.O.R.C., there are four engine bays for TARs and Light Maintenance engines, plus 2 Heavy Maintenance Annexes reserved for Heavy Maintenance engines.
Each engine remains in the same spot as it is disassembled and assembled. Each engine
is assigned a group of mechanics and inspectors depending on the workscope. For
example, a Heavy Maintenance engine would receive 2 inspectors and 2 mechanics,
whereas a Light Maintenance engine would only receive 1 inspector with 2 mechanics.
A TAR can get by with only 1 inspector and 1 mechanic.
Data Collection
For every engine that has entered the E.O.R.C., specific data are kept. An EXCEL spreadsheet lists the engine serial number, engine model, the workscope (TAR, LM, HM), the customer, the receive-in date, and the ship date. The spreadsheet therefore contains everything needed to create the model: the workscope, the receive-in date, and the ship date.
Data Analysis
First, using EXCEL, the data were separated by workscope (TAR, HM, LM). For each workscope, the data were sorted by receive-in date. Using the EXCEL function DAYS360, the number of days between consecutive receive-in dates was calculated to get the interarrival times. The same function was used to calculate the number of days between the receive-in date and the ship date, giving the processing times. Since the E.O.R.C. works seven days a week, it is reasonable to count weekends in this calculation. Next, the data were plugged into StatFit to determine what distributions they followed. The results for each workscope follow.
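For readers reproducing this preprocessing outside of EXCEL, the sketch below shows one way the same interarrival and processing times could be computed with pandas. The file name and column names are assumptions, not the actual spreadsheet headers, and it counts actual calendar days rather than using the 360-day convention of DAYS360.

```python
# Sketch only: file and column names are assumed, not taken from the actual E.O.R.C. spreadsheet.
import pandas as pd

df = pd.read_excel("engines.xlsx", parse_dates=["Receive_Date", "Ship_Date"])

for scope, group in df.groupby("Workscope"):            # TAR, LM, HM
    group = group.sort_values("Receive_Date")

    # Days between consecutive receive-in dates (interarrival times).
    interarrival = group["Receive_Date"].diff().dt.days.dropna()

    # Days between receive-in date and ship date (processing times).
    processing = (group["Ship_Date"] - group["Receive_Date"]).dt.days

    # These two series are what would be pasted into StatFit for fitting.
    print(scope, interarrival.tolist(), processing.tolist())
```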
TARs
The processing times for TARs calculated in EXCEL are plugged into StatFit and the
AUTOFIT function is used to determine the distribution. StatFit suggested that the
processing time data fit a LogLogistic distribution (see Figure 1). However, the
Lognormal distribution is chosen since it is easier to work with and more recognizable.
Figure 1. TAR Processing Time AutoFit Output.
The data also seem to fit the Lognormal curve given by AutoFit (see Figure 2).
Figure 2. TAR Processing Time Data Fitted to Lognormal Curve.
The Goodness of Fit test shows that the null hypothesis (data is Lognormal) is not
rejected for any of the tests (see Figure 3). Therefore, the data is assumed to be
Lognormal for the purposes of the project. According to AutoFit, mu=2.92 and
sigma=.588.
Figure 3. Goodness of Fit Test for TAR Processing Time Data.
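As a quick sanity check on what this fit implies, the mean of a Lognormal is exp(mu + sigma^2/2), so mu = 2.92 and sigma = 0.588 imply an average TAR turnaround of roughly 22 days. A minimal scipy sketch of the same fitted distribution (the variable names are illustrative):

```python
# Sketch: the TAR processing-time distribution reported by StatFit,
# expressed with scipy (lognorm uses s = sigma, scale = exp(mu)).
import numpy as np
from scipy import stats

mu, sigma = 2.92, 0.588
tar_processing = stats.lognorm(s=sigma, scale=np.exp(mu))

print(tar_processing.mean())       # ~22 days implied average TAR turnaround
print(tar_processing.rvs(size=5))  # example draws, e.g. for a quick simulation check
```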
The interarrival times for TARs calculated in EXCEL are plugged into StatFit and the
AUTOFIT function is used to determine the distribution for arrival times. StatFit
suggested that the arrival data fit an Exponential distribution (see Figure 4).
Figure 4. TAR Interarrival Time AutoFit Output.
The data also seem to fit the Exponential curve given by AutoFit (see Figure 5).
Figure 5. TAR Interarrival Data Fitted to Exponential Curve.
The Goodness of Fit test shows that the null hypothesis (data is Exponential) is not rejected for the first two tests (see Figure 6). The Anderson-Darling test is rejected, but since that test targets normality and these data are exponential rather than normal, the rejection is expected. Therefore, the data are assumed to be Exponential with Beta = 10.6 for the purposes of the project.
Figure 6. Goodness of Fit Test for TAR Interarrival Data.
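If this fit needed to be double-checked outside of StatFit, a Kolmogorov-Smirnov test against an Exponential with Beta = 10.6 could be run in scipy. The sketch below uses placeholder interarrival values rather than the real E.O.R.C. data:

```python
# Sketch: re-checking the Exponential(Beta = 10.6) fit for TAR interarrival times.
from scipy import stats

# Placeholder data only; the real values come from the spreadsheet calculation above.
tar_interarrival_days = [3, 12, 7, 21, 9, 5, 15, 2, 11, 19]

# KS test against an Exponential with location 0 and scale (Beta) 10.6.
statistic, p_value = stats.kstest(tar_interarrival_days, "expon", args=(0, 10.6))
print(statistic, p_value)  # a large p-value means no evidence against the Exponential fit
```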
Light Maintenance
The processing times for LMs calculated in EXCEL are plugged into StatFit and the
AUTOFIT function is used to determine the distribution. StatFit suggested that the
processing time data fit a LogLogistic distribution (see Figure 7). However, the
Lognormal distribution is chosen since it is easier to work with and more recognizable.
Figure 7. LM Processing Time AutoFit Output.
The data also seem to fit the Lognormal curve given by AutoFit (see Figure 8).
Figure 8. LM Processing Time Data Fitted to Lognormal Curve.
The Goodness of Fit test shows that the null hypothesis (data is Lognormal) is not
rejected for any of the tests (see Figure 9). Therefore, the data is assumed to be
Lognormal for the purposes of the project. According to AutoFit, mu=2.98 and
sigma=.772.
Figure 9. Goodness of Fit Test for LM Processing Time Data.
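As with the TARs, the fitted parameters imply an average LM turnaround of roughly exp(2.98 + 0.772^2/2) ≈ 26.5 days; a minimal scipy expression of the same distribution:

```python
# Sketch: the LM processing-time distribution reported by StatFit, in scipy form.
import numpy as np
from scipy import stats

mu, sigma = 2.98, 0.772
lm_processing = stats.lognorm(s=sigma, scale=np.exp(mu))

print(lm_processing.mean())    # ~26.5 days implied average LM turnaround
print(lm_processing.median())  # exp(mu) ~ 19.7 days; the median sits below the mean for lognormal data
```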
The interarrival times for LMs calculated in EXCEL are plugged into StatFit and the
AUTOFIT function is used to determine the distribution for arrival times. StatFit
suggested that the arrival data fit an Inverse Gaussian distribution (see Figure 10).
However, the Pearson 5 is chosen since it is easier to work with and more recognizable.
Figure 10. LM Interarrival Time AutoFit Output.
The data also seem to fit the Pearson 5 curve given by AutoFit (see Figure 11).
Figure 11. LM Interarrival Data Fitted to Pearson 5 Curve.
The Goodness of Fit test shows that the null hypothesis (data is Pearson 5) is not rejected for the first two tests (see Figure 12). As before, the Anderson-Darling test is rejected because it targets normality and these data are not normal. The data are assumed to be Pearson 5 with Alpha = .89 and Beta = 4.42 for the purposes of the project.
Figure 12. Goodness of Fit Test for LM Interarrival Data.
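The Pearson 5 distribution is the same as the inverse gamma, so if these interarrival times ever needed to be generated outside of ProModel, scipy's invgamma with shape Alpha and scale Beta should match StatFit's parameterization. A minimal sketch (note that with Alpha below 1 the distribution has no finite mean, so the median is reported instead):

```python
# Sketch: the LM interarrival distribution (Pearson 5 = inverse gamma) in scipy form.
from scipy import stats

alpha, beta = 0.89, 4.42
lm_interarrival = stats.invgamma(a=alpha, scale=beta)

print(lm_interarrival.median())     # typical gap between LM arrivals, in days
print(lm_interarrival.rvs(size=5))  # example draws for a quick simulation check
```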
Heavy Maintenance
The processing times for HMs calculated in EXCEL are plugged into StatFit and the
AUTOFIT function is used to determine the distribution. StatFit suggested that the
processing time data fit a Pareto distribution (see Figure 13). However, the Exponential
distribution is chosen since it is easier to work with and more recognizable.
Figure 13. HM Processing Time AutoFit Output.
The data also seem to fit the Exponential curve given by AutoFit (see Figure 14).
Figure 14. HM Processing Time Data Fitted to Exponential Curve.
The Goodness of Fit test shows that the null hypothesis (data is Exponential) is not
rejected for any of the tests (see Figure 15). Therefore, the data is assumed to be
Exponential for the purposes of the project. According to AutoFit, the minimum value is
80, and Beta = 22.4 for purposes of the project.
Figure 15. Goodness of Fit Test for HM Processing Time Data.
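Taken together, these parameters describe a shifted Exponential: roughly an 80-day minimum shop time plus an exponentially distributed remainder. Assuming, as in StatFit's usual parameterization, that Beta = 22.4 is the exponential mean, the implied average HM turnaround is about 102 days. A minimal scipy sketch:

```python
# Sketch: the HM processing-time distribution as a shifted Exponential
# (location = 80-day minimum, scale = Beta = 22.4), assuming Beta is the exponential mean.
from scipy import stats

hm_processing = stats.expon(loc=80, scale=22.4)

print(hm_processing.mean())     # ~102.4 days implied average HM turnaround
print(hm_processing.rvs(size=5))
```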
The interarrival times for HMs calculated in EXCEL are plugged into StatFit and the
AUTOFIT function is used to determine the distribution for arrival times. StatFit
suggested that the arrival data fit a Pearson 5 distribution (see Figure 16). However, the
Uniform is chosen since it is easier to work with and more recognizable.
Figure 16. HM Interarrival Time AutoFit Output.
The data also seem to fit the Uniform curve given by AutoFit (see Figure 17).
Figure 17. HM Interarrival Data Fitted to Uniform Curve.
The Goodness of Fit test shows that the null hypothesis (data is Uniform) is not rejected for the first test (see Figure 18). As before, the Anderson-Darling test is rejected because it targets normality and these data are not normal. The data are assumed to be Uniform with minimum = 0 and maximum = 113 for the purposes of the project.
Figure 18. Goodness of Fit Test for HM Interarrival Data.
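In scipy terms this is simply a Uniform on [0, 113] days, implying a new HM engine roughly every 56.5 days on average. A minimal sketch:

```python
# Sketch: the HM interarrival distribution, Uniform between 0 and 113 days.
from scipy import stats

hm_interarrival = stats.uniform(loc=0, scale=113)

print(hm_interarrival.mean())     # 56.5 days between HM arrivals on average
print(hm_interarrival.rvs(size=5))
```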
Description of the Model
To set up the model, first the locations were created (see Figure 19).
Figure 19. ProModel Locations for E.O.R.C. Model.
As shown in Figure 19, a Bay location was created with 4 units. Each bay is capable of doing all of the work on an engine, and the bays work in parallel. Also, 2 units of HM_Annex were created; they, too, work in parallel on Heavy Maintenance engines. A queue was created for the bays and another for the annexes: the Engine_Queue can hold an infinite number of engines, but the HM_Queue can hold at most 2.
Next, the entities were created (see Figure 20).
Figure 20. ProModel Entities for E.O.R.C. Model.
As shown in Figure 20, 3 different entities were created to represent the 3 different
maintenance scopes.
Next, the arrivals were created (see Figure 21).
Figure 21. ProModel Arrivals for E.O.R.C. Model.
As shown in Figure 21, each of the entities was given the interarrival frequency determined by StatFit.
Finally, the processing was built (see Figure 22).
Figure 22. ProModel Processing for E.O.R.C. Model.
As shown in Figure 22, TARs and LMs are processed in the first available bay, whereas
HMs are processed in the first available HM_Annex. Each type of entity takes a different
amount of time to process. The times were determined by StatFit.
Figure 23 shows the layout of the model as presented by ProModel.
Figure 23. ProModel Layout of E.O.R.C. Model.
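For readers without ProModel, the routing logic described above can also be sketched as a small discrete-event model. The SimPy version below is only an illustration assembled from the information in this report (4 parallel bays for TARs and LMs, 2 HM annexes behind a 2-engine queue, the StatFit distributions, and a roughly 9000-hour run); the names, the day-based clock, and the use of SimPy itself are assumptions, not part of the actual E.O.R.C. model.

```python
# Sketch of the E.O.R.C. model as a small SimPy simulation (time unit: days).
# Distribution parameters are the StatFit fits quoted above; everything else is illustrative.
import random
import simpy

RUN_DAYS = 375  # roughly the 9000 simulated hours used in the ProModel run

def engine(env, scope, bays, annexes, log):
    arrive = env.now
    if scope in ("TAR", "LM"):
        with bays.request() as req:                      # first available of the 4 bays
            yield req
            mu, sigma = (2.92, 0.588) if scope == "TAR" else (2.98, 0.772)
            yield env.timeout(random.lognormvariate(mu, sigma))
    else:                                                # Heavy Maintenance engine
        if len(annexes.queue) >= 2:                      # HM_Queue holds at most 2 engines
            log["failed_hm"] += 1
            return
        with annexes.request() as req:
            yield req
            yield env.timeout(80 + random.expovariate(1 / 22.4))
    log.setdefault(scope, []).append(env.now - arrive)   # total days in system

def arrivals(env, scope, interarrival, bays, annexes, log):
    while True:
        yield env.timeout(interarrival())
        env.process(engine(env, scope, bays, annexes, log))

env = simpy.Environment()
bays = simpy.Resource(env, capacity=4)                   # Bays 1-4, working in parallel
annexes = simpy.Resource(env, capacity=2)                # HM Annexes 1-2
log = {"failed_hm": 0}

env.process(arrivals(env, "TAR", lambda: random.expovariate(1 / 10.6), bays, annexes, log))
env.process(arrivals(env, "LM",
                     lambda: 1 / random.gammavariate(0.89, 1 / 4.42),    # Pearson 5(0.89, 4.42)
                     bays, annexes, log))
env.process(arrivals(env, "HM", lambda: random.uniform(0, 113), bays, annexes, log))
env.run(until=RUN_DAYS)

for scope in ("TAR", "LM", "HM"):
    if log.get(scope):
        print(scope, "average days in system:", round(sum(log[scope]) / len(log[scope]), 1))
print("HM engines turned away:", log["failed_hm"])
```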
Simulation Runs
Before running the model, the simulation options were set up (see Figure 24).
Figure 24. ProModel Simulation Options for E.O.R.C. Model.
As shown in Figure 24, the model was run for 9000 straight hours. This is the best depiction of E.O.R.C. operation: the shop does not start over every day, which is what replications would represent. It is a continuous operation working 3 shifts close to 365 days a year, or about 24 hours x 365 days = 8,760 hours, rounded up to roughly 9000 hours. Figure 25 shows a snapshot in time of the simulation run.
Figure 25. ProModel Snapshot of E.O.R.C. Model.
Results
Table 1 shows the output from ProModel General Statistics.
Table 1. ProModel Output for E.O.R.C. Model.
Table 1 reveals some interesting details about E.O.R.C.’s operation. According to the
data we entered, E.O.R.C. has too much capacity. Bay 4 is never even used, while Bay 3
is rarely used. The extra capacity would be better spent working on the Heavy
Maintenance engines to reduce the amount of time they spend in the engine center.
Figure 26 shows the percentages of time for each location in the model. The figure confirms the output data: Bay 4 is never even used. This means that the staff could be reduced, people could be reassigned elsewhere, or the floor space could be reduced.
Figure 26. Capacity for Each Location.
Other output from the model included the failed arrivals (see Table 2). The table shows that none of the TAR or LM arrivals were turned away. This is not simply because their queue had infinite capacity: Table 1 shows that the average queue contents were approximately zero, meaning a bay was essentially always open to receive TARs and LMs. However, 3 HM engines were turned away due to the size of the HM queue.
Table 2. Failed Arrivals from E.O.R.C. System.
The next output to examine is the Entity Activity (see Table 3).
Table 3. Entity Activity for E.O.R.C. System.
Table 3 shows the average minutes spent in the system by each entity type. Of course,
the HMs take longer than the TARs and LMs. However, in the next section, a new model
is created using only 3 bays instead of 4. The leftover resources are dedicated to the HM
engines, which reduces the amount of time the HM engine spends in the system. This
new time will just be an estimate based on observations.
Changing the Model
First, the number of bays was reduced from 4 to 3. Next, the time to process an HM was
reduced by 20%. Table 4 shows the Location Activity for the new model.
Table 4. Location Activity for 3 Bays and Reduced HM Processing Time.
As shown in the table, the average minutes per entry for HM were reduced. Bay 3 was still not utilized very much, and the system could probably be reduced further. However, it is important to keep some extra capacity as a buffer in case there are problems completing an engine.
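In terms of the SimPy sketch given earlier, the scenario evaluated here amounts to two small parameter changes; the snippet below is again only an illustration, not the actual ProModel change:

```python
# Sketch: parameter changes for the alternative scenario (3 bays, HM processing time x 0.8),
# written against the earlier SimPy illustration rather than the actual ProModel file.
import random
import simpy

env = simpy.Environment()
bays = simpy.Resource(env, capacity=3)        # reduced from 4 bays to 3

def hm_processing_days():
    # 20% reduction applied to the fitted 80 + Exponential(Beta = 22.4) processing time.
    return 0.8 * (80 + random.expovariate(1 / 22.4))

print(round(hm_processing_days(), 1))
```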
Conclusion
The analysis completed for the E.O.R.C. gave all of the answers they were seeking. As shown in the paper, both the time between inductions and the processing times follow identifiable distributions. The output from ProModel shows the average time to process each of the three types of engines, and it also shows that engines rarely have to wait in queue before being inducted. Based on the results shown, it is recommended that the E.O.R.C. reduce their size but keep the same workflow. They should be able to reduce the average time for a Heavy Maintenance engine by shifting the leftover manpower to the Heavy Maintenance annexes.