Maynard, K. P.; Validation of Helicopter Nominal and Faulted Conditions Using Fleet Data Sets, Proceedings of the International Conference on Condition Monitoring, University of Wales Swansea, UK, 12th-16th April 1999, pp. 129-141.
Validation of Helicopter Nominal and Faulted
Conditions Using Fleet Data Sets
Kenneth Maynard, Carl S. Byington, G. William Nickerson, and Michael Van Dyke
Applied Research Laboratory, Pennsylvania State University, University Park, PA 16804
Abstract: A transition to condition-based maintenance for aircraft will be enabled by
implementation of advanced Health and Usage Monitoring Systems (HUMS). Condition-based
maintenance (CBM) will require advanced tools to enable reliable and accurate detection of
evolving faults at an early stage, diagnosis to the subsystem or component level, and prognosis of
the remaining useful life of components. Several important questions are raised along this
transition to CBM.
How do we quantify and compare proposed techniques for failure mode diagnosis? With
the desire for modular, open systems for health assessment and the large list of potential
technology providers, acquisition offices require metrics of performance and effectiveness.
How do we reduce false alerts to the operator/maintainer? Fusion of operational context
into the diagnosis can reduce false alarms. Development of this capability will require that existing
diagnostic data sets be augmented with operational data.
This paper describes a program to collect operational data sets and evaluate the diagnostic performance of a variety of technology providers, addressing both questions. The Office of Naval Research (ONR) Navy Repair Technology (REPTECH) program supported the effort. The REPTECH project has instrumented one H-46 aft transmission for initial trials and flight check with successful results. Plans are to proceed with full squadron operation and posting of the collected data sets to a project archive.
Key Words: Condition-Based Maintenance; Helicopter; Health and Usage Monitoring Systems;
HUMS; Transmission Data; Prognostics.
Background: Presently, the amount of maintenance on flight critical aircraft components is
excessive while the amount truly performed “on-condition” is minimal. Safety-of-flight
considerations have demanded that critical components be maintained “before they can fail”, the
definition of a preventive maintenance philosophy. This approach has resulted in high
maintenance costs as expensive components are removed and discarded long before their useful
life is actually consumed. Unfortunately, despite the conservative nature of component safe life
estimates, many failures have continued to occur [1][2]. Implementation of advanced rotorcraft Health and Usage Monitoring Systems (HUMS) will facilitate transition to a Condition-Based Maintenance (CBM) philosophy with a resultant decrease in maintenance costs for rotorcraft [1][2][3][4].
Dealing with the economic realities of the ‘90s and the foreseeable future in both military and
industrial applications necessitates such an alternative approach to maintenance. The military is
flying aircraft longer than they ever intended (as are the airlines) resulting in the “aging aircraft”
problem. Budgets are being pressed on all sides. The call to do more with less (or at least the
same with less) is everywhere. Condition-based maintenance offers a potential solution.
Condition-Based Maintenance philosophy stipulates maintenance of equipment only when there
is objective evidence of an impending failure on the particular piece of equipment [5]. As
described in Ref [5], the steps are serial and require data and an available knowledge base.
Detection: Monitored parameter has departed its normal operating envelope.
Diagnosis: Identify, localize, and determine severity of an evolving fault condition.
Prognosis: Reliably and accurately forecast remaining operational time until end of useful life.
Many other CBM sources can be found [6] [7] [8].
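The three serial steps can be illustrated with a minimal sketch. The thresholds, severity grading, and linear-trend remaining-life estimate below are invented purely for illustration and are not drawn from any fielded HUMS.

```python
# Illustrative sketch of the serial CBM steps (detection -> diagnosis -> prognosis)
# for a single hypothetical monitored parameter, e.g. a normalized vibration level.
# All thresholds and the linear-trend prognosis are assumptions for illustration.

def detect(value, lower=0.0, upper=1.0):
    """Detection: has the monitored parameter departed its normal envelope?"""
    return not (lower <= value <= upper)

def diagnose(value, upper=1.0):
    """Diagnosis: grade the severity of an evolving fault condition."""
    excess = value - upper
    if excess <= 0:
        return None
    return "severe" if excess > 0.5 else "incipient"

def prognose(history, failure_level=2.0):
    """Prognosis: crude remaining-useful-life estimate from the trend of the
    last two samples (assume one reading per operating hour)."""
    rate = history[-1] - history[-2]
    if rate <= 0:
        return float("inf")  # not trending toward failure
    return (failure_level - history[-1]) / rate

readings = [0.4, 0.5, 0.9, 1.2, 1.4]
if detect(readings[-1]):
    print("fault detected, severity:", diagnose(readings[-1]))
    print("estimated hours to end of useful life:", prognose(readings))
```

In a real system the detection envelope, fault localization, and trend model would each come from the knowledge base that Ref [5] notes is required.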
Introduction: Realization of the benefits of CBM requires a change in the operational planning
process as well as technology. Until operators knowingly operate equipment past the state at
which a failure has started, we will continue to “throw away” useful life of equipment. This
change places the maintenance decision in the operational realm and makes it less of a logistics
function. The result is not to minimize the requirement for logistics; in fact, CBM requires a more
timely and effective logistics support chain than does time-directed maintenance. The operational
decision is now driven by the fact that we are no longer “maintaining equipment before it can
break”, we are attempting to “maintain equipment before it does break.” The operator needs to
be sure that the equipment he chooses to start a mission will successfully complete that mission.
Equipment surveillance or machinery health monitoring will provide the operator with reliable
information about the state of the equipment. Models and other techniques will fuse that
information with the operational context and predict the time to end-of-useful life. A usable
definition of end-of-useful life is:
“The time at which the operator would not knowingly start a mission if the true condition of the equipment were known.”
As the technology base to provide both diagnostics and prognostics matures, the designer and
owner must be able to quantitatively compare the merits of alternative techniques and systems. If
the vision of an “open architecture, multi-vendor” system is realized, this means picking individual
elements of the system from among a variety of potential sources. Hence the need for
development of qualification and validation techniques for CBM technologies.
CBM Technology and Algorithm Development: Owners cannot now quantitatively assess
the performance of the variety of CBM technology available to them. Marketing in the area is
based on anecdotal evidence or limited (at best) experimental validation. As systems become
more open and the owner is able to choose the best technology from among a number of
vendors, the requirement to make this selection on a rigorous basis will become more critical.
Multiple Vendors: As we discuss elsewhere [9], there is an incentive to apply open architecture
to the design of CBM systems. That paper addresses a shipboard application, but the arguments
apply equally to both aviation and industrial applications. While an open architecture poses
problems in technology selection and integration, it will encourage a marketplace of technology
development that will incentivize and speed innovation. Without the open architecture, multi-vendor approach, the owner will be continuously forced to trade performance in one area of a
particular vendor’s system against relative weakness in other areas.
No Rigorous Evaluation Metrics: Selection among multiple technology vendors requires an
unbiased and usable comparative mechanism – a yardstick. No such consistent set of metrics
exists. These metrics must be fair and provide consistent answers. They must be able to establish
effectiveness and performance in a variety of applications for a variety of non-commensurate
techniques. For example, one frequent question is whether on-line oil-debris analysis or on-line
vibration analysis is a better (more cost-effective) solution for a given application. Comparing the
two is akin to comparing apples to oranges, but these are the selections that the owner is forced
to make today and will increasingly be making in the future.
Few Well-Documented Data Sets: The limiting factor in development and fielding of effective
diagnostics is lack of a library of well-documented data sets that can be used for development
and evaluation. Data sets that do exist are generally difficult to obtain, have strict limitations on
distribution, are recorded on media that is cumbersome to use, or are not sufficiently documented
in terms of data acquisition parameters or comparison to ground truth to be useful.
Establishment of an archive of these data sets across a variety of equipment types and
applications should be a priority in the CBM community. The data set must be easily accessible,
have reasonable distribution limits, be fully documented relative to all data acquisition parameters,
and be directly correlated to the true condition of the monitored equipment.
False Alarms as Technology Enters Fleet: False alarms and their counterpart “misses” are the
nemesis of any diagnostic system. We must constantly balance the sensitivity of the detection
process to avoid false alarms that cause the user to ignore or disable the system and ensure that
we don’t miss any significant failures. This process is frequently a “tuning” process that occurs as
the technology is introduced into operations. In some applications, it will be acceptable to set the
sensitivity high and decrease the sensitivity until false alarms are reduced to an acceptable level.
In other applications, that won’t be acceptable owing to cost or risk considerations.
Establishment of a means to quantitatively assess performance prior to introduction will not totally
eliminate the “tuning” process, but it should dramatically reduce the amount of effort required to
optimize the correct sets of alerts and alarms. The problem is significant because high false alarm
rates and misses may annul all logistical gains made possible through ground-based analysis of
data and short-circuit the introduction of future technology [7]. Moreover, the operational
consequences are potentially much greater when aircrew utilize such diagnostic assessments to
make decisions about mission completion and in-flight fault management (discussed in a paper by
Byington, et al, in these proceedings).
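The sensitivity trade-off described above can be illustrated numerically. The two indicator distributions below are simulated and purely illustrative; real alarm statistics would come from fleet data.

```python
# Toy illustration of the sensitivity "tuning" trade-off: sweeping a detection
# threshold over simulated healthy and faulted vibration indicators and counting
# false alarms vs. misses. Distribution parameters are invented for illustration.
import random

random.seed(0)
healthy = [random.gauss(1.0, 0.2) for _ in range(1000)]  # nominal condition
faulted = [random.gauss(1.6, 0.2) for _ in range(1000)]  # fault present

def rates(threshold):
    false_alarms = sum(h > threshold for h in healthy) / len(healthy)
    misses = sum(f <= threshold for f in faulted) / len(faulted)
    return false_alarms, misses

for t in (1.1, 1.3, 1.5):
    fa, miss = rates(t)
    print(f"threshold={t:.1f}  false-alarm rate={fa:.1%}  miss rate={miss:.1%}")
```

Raising the threshold trades false alarms for misses; the quantitative assessment advocated here would fix this operating point before fleet introduction rather than by in-service "tuning" alone.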
Improving the Approach: A comprehensive approach is necessary to address the situation
outlined above. We must develop both the metrics and the data archive to support the analysis
across a range of equipment. First we will describe the metrics and data archive elements for
transitional data sets in some detail, then we will describe an ongoing project to collect
operational data sets.
Evaluation Metrics: As outlined above, evaluation metrics must be unbiased and fairly represent
the performance of the technology in the application. A comparison of technologies across a set
of well-controlled laboratory experiments is academically interesting, but may have little to do
with performance in the “real-world”. To be effective, the metrics need to be broken into two
parts – effectiveness and performance.
Measures of Effectiveness: Measures of effectiveness address the question:
“How well does the information provided by this technology about the state of health of
monitored equipment match the “ground-truth” of the actual equipment?”
At first glance, this seems like a fairly simple proposition: Run the technique on some test
equipment, document the readings the technology provides, document the true condition of the
equipment as a function of time, and compare the reading provided with the true condition.
Unfortunately, it isn’t that easy!
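The "run it and compare" proposition can at least be made concrete. The run log below is hypothetical; in practice each entry would pair a technique's reported state with the documented ground truth from a test series.

```python
# Sketch of scoring a diagnostic technique against documented ground truth.
# The run log is invented; real entries would come from tests where the
# equipment's true condition was recorded as a function of time.
runs = [
    # (technique_reported_fault, ground_truth_fault)
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (False, False), (True, True), (False, False),
]

tp = sum(r and g for r, g in runs)          # correctly reported faults
fp = sum(r and not g for r, g in runs)      # false alarms
fn = sum(not r and g for r, g in runs)      # misses
tn = sum(not r and not g for r, g in runs)  # correctly reported healthy

precision = tp / (tp + fp)  # when it alarms, how often is it right?
recall = tp / (tp + fn)     # how many real faults does it catch?
print(f"precision={precision:.2f} recall={recall:.2f}")
```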
The machines themselves, the failure process in machines, and measurements of observables all
have variability and uncertainty. Tolerance stack-up results in some machines that are very tight,
others that are loose. Failures such as initiation and evolution of a spall in a bearing do not follow
a single, easily predictable track and are heavily affected by the loads on the machine. All
measurements have error. All of these issues drive one to conduct a statistically valid set of
experiments (generally accepted as between 11 and 20 replicates or more). Unfortunately, these
tests are expensive (at a minimum you are knowingly risking the equipment under test) and the
tests take a long time. No one has shown the patience or had the resources to conduct such a
test series. Thus, we are left seeking a good compromise between statistical validity and realism.
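The replicate counts quoted above can be motivated by a simple standard-error argument. The mean life and scatter below are assumed values (a 10% coefficient of variation, normal approximation, 95% confidence), chosen only to show how slowly the uncertainty shrinks with the number of replicates.

```python
# Why replicates matter: the half-width of a confidence interval on mean life
# shrinks only as 1/sqrt(n). All numbers are assumptions for illustration.
import math

mean_life = 1000.0  # assumed mean hours to failure
sigma = 100.0       # assumed scatter between "identical" machines

for n in (3, 11, 20):
    half_width = 1.96 * sigma / math.sqrt(n)  # 95% CI half-width on the mean
    print(f"n={n:2d} replicates: mean life known to about +/- {half_width:.0f} h")
```

Going from 3 to 20 replicates cuts the uncertainty by less than a factor of three, which is why such test series are so expensive relative to the precision gained.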
Measures of Performance: Measures of performance address the question:
“How much does it cost to have the information provided by this technology?”
No matter how inexpensive sensors, processing, and communications become, they will never
be free. It will always cost something in acquisition cost, maintenance, or unreliability of the
overall system to have the information they provide. Various approaches to this problem will
require different levels of investment in different elements of the system. Some may require high
fidelity sensor input (read expensive) but require less downstream processing (read cheaper).
Others may weight their requirements in the opposite way. Each technique will need to be
assessed on a system basis to determine its true cost of implementation.
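As a sketch of what such a "system basis" assessment might tally, consider two hypothetical techniques whose costs are weighted differently between sensing and downstream processing. All dollar figures are invented for illustration.

```python
# Hedged sketch of a system-basis cost comparison: each technique's true cost
# of implementation is the sum over all system elements, not the sensor price
# alone. The techniques and dollar figures are hypothetical.
techniques = {
    "high-fidelity sensing": {"sensors": 12_000, "processing": 2_000, "maintenance": 1_500},
    "heavy processing":      {"sensors": 3_000,  "processing": 8_000, "maintenance": 2_500},
}

for name, costs in techniques.items():
    print(f"{name}: total ${sum(costs.values()):,}")
```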
Well-Documented Data Sets: As noted previously, few (if any) well-documented widely
accessible data sets exist today. Also noted is the expense of conducting multiple replicates of a
machinery failure test. We need to establish a repository of these data sets with prescribed
documentation requirements and make those data as widely available as possible.
At least two types of data sets are required. One is transitional failure data. That is, recordings of
the observables of failure on a machine from new condition to a point defined as past the end of
useful life of the equipment. These data sets need to represent real equipment and should be
conducted multiple times to observe the spread in the lifetime of presumably identical pieces of
equipment operated under identical (within experimental error) conditions. We are presently in
the midst of such a test program using a Mechanical Diagnostics Test Bed (MDTB) to test
industrial speed reducers. The MDTB is shown in Figure 1. We refer the reader to [10] for a
complete description of the test facility and the test series underway.
Figure 1 - Mechanical Diagnostics Test Bed (MDTB) at Penn State ARL running transitional
failure tests on industrial speed-reducers
While the MDTB provides an opportunity for transitional data, the constraints of test bed
operation and data collection drive one to collect data on more representative applications. The
Office of Naval Research funded, as a part of the Air Vehicle Diagnostics System (AVDS) project, a program of seeded fault testing on the aft main transmission of the H-46 helicopter at Westland Helicopters in 1993 [11]. Data collected in this test sequence included eight
accelerometers, torque, and speed. This data set has been widely used in the diagnostic
development community to develop and compare techniques. But, it is test stand data and each
fault is shown only in one instance. Enrichment of the data set requires actual, on-aircraft data
over a long time period (many flights) and across a population of aircraft. The Office of Naval
Research Repair Technology Program (REPTECH) has funded such a program that is described
in the following.
Diagnostic Technique Qualification and Validation Project: In order to expand the
available data archive to include a representative sample size and actual aircraft signatures versus
test-stand signatures, it is necessary to design a solution that unobtrusively collects data in an
operational scenario. Trying to buy dedicated flight time is unaffordable. Aircraft fly every day; the
challenge is designing a workable solution for the squadron that meets the technical requirements
for the data.
The measures of effectiveness and measures of performance developed must also be exercised
against a representative number of diagnostic approaches on a common data set. The second
objective of this project is to establish and exercise an infrastructure to support that intent. Both
of these objectives and the status in addressing them will be discussed.
Project Plan: The project was initiated mid-1997. We met with Squadron, Wing, and Type
Commander personnel to enlist their support in the endeavor. Without their commitment and
willingness to see this project through, it had no hope of success. Fortunately, we found a very
willing and cooperative group of professionals who were willing to provide access and time on a
“minimal interference basis.” The prime directive in this project is to ensure that the demands we
place on the operators for time, space, resources, etc. are minimized even at the expense of
sacrificing data collection opportunities.
The goal is to instrument ten H-46D aircraft of HC-3 with an instrumentation suite virtually
identical to that installed as part of the AVDS program seeded fault trials. We varied from the
original instrumentation in some details and most significantly in the data recording format. AVDS
used an analog (14 inch, 28 track, Wideband Group 2) recorder. The recorder and necessary
signal conditioning entailed a package weighing about 150 lbs and about 12 cubic feet. Since the
implementation of our program would involve a portable data acquisition unit for plug-in
operation on any of the instrumented aircraft, this unit was too large and heavy for the aircrew to
routinely install in the aircraft.
Our first proposal was to apply a digital audiotape solution. This yielded a data recording package of about 6 cubic feet and 75 lbs, also too large for reasonable squadron use. This fact was made abundantly clear during early meetings with the squadron! Our final design employs a full digital implementation storing data on a hard disk. The data acquisition and computer hardware is enclosed in a mid-sized aluminum carrying case. This package, weighing 21 lbs. and easily transported by one person, is shown installed in the aircraft in Figure 2.
We chose to replicate the AVDS sensor installation for two reasons. First, the data we collect will be traceable to the earlier data sets for comparison. While not an exact duplicate, within the bounds of necessary trade-offs, comparisons can be made. Second, the aircraft installation was already designed, had received flight clearance, and had been installed on aircraft. Modifying that design reduced the installation learning curve and simplified preparations for our flight clearance.
Figure 2 - Data acquisition package strapped in place on aircraft. Note power and signal connections.
Our original objective was to have all ten aircraft of HC-3 instrumented by fall of ’97. That was an unreasonably optimistic goal. Receiving flight clearance is a time-consuming task that took longer than anticipated. We also had technical difficulties with our data acquisition board. The board being used is a new design and we worked closely with the manufacturer to resolve problems.
We completed our first aircraft installation (tail number 07) in the fall of 1997 and are now
preparing to instrument additional aircraft. The intent is still to instrument all available squadron
aircraft during 1999 and collect data for an additional 12 months. After the 12-month period, the
aircraft will either be restored to their original condition or data collection will continue for
another prescribed period.
Each instrumented aircraft will have sensors (torque and tach interfaces and accelerometers) installed for the duration of the experiment. An interface panel on the aircraft provides access for the digital data acquisition package to the sensors and aircraft power. The data acquisition package is installed by the aircrew for selected flights. Data will be downloaded from the aircraft post-flight to the Intranet established for the project and a physical copy shipped to Penn State ARL for archiving. This process is outlined in Figure 3.
Figure 3 - Process flow from aircraft to data archive. Before flight: the logger is stored with the server computer near the flightline; a flight is scheduled for one of 10 designated aircraft; a crewman carries the logger from the server area to the aircraft; the logger is strapped down to the A/C; the power cord and sensor harness are plugged into the A/C; the unit power switch is enabled. In flight: the logger automatically takes data snapshots every 5 min. After flight: the crew switches off power and unplugs the logger; the logger is unstrapped and carried back to the server; the logger power cord is plugged into DC power; the case is opened and the ethernet cable from the server connected; the logger is powered on; a crewman hits Enter on the keyboard; the monitor prompts through the download process; after a 2-4 min. download, the monitor prompts insertion of a blank JAZ drive; the JAZ copy is removed and placed in a preaddressed Fed-X envelope; the logger is powered down, unplugged, and stored.
Issues: Collecting the type of data we are gathering and effectively conducting an experiment
using operational fleet assets compounds programmatic issues. While we’ve dealt with the issues
presented to us, we’re certain there are many yet to come. Following are descriptions of some of
the issues we faced and our resolution.
Flight Clearance: Installation of any hardware on an aircraft poses a potential hazard to the
airframe and crew. As such, close attention is paid to ensure that the risks are minimized. As
stated earlier, we leveraged the already approved AVDS aircraft installation. This gave us a
running start on the approval process. One of the primary safety requirements for flight clearance
is the assurance that no part of the installed package can break free and become loose debris
inside the cabin. This requirement led to a design of a data acquisition package that was
contained and strapped to the fuselage. The data acquisition package is shown in Figure 2
installed in the aircraft.
Data Formats: It is imperative in constructing a data archive of this type that early decisions are
made and adhered to concerning data formats. The diagnostic community has many established
and sometimes conflicting desires for data formats and recording parameters. The goal was and
is to meet as many of those as possible while still producing a process that is workable in the squadron environment.
An early decision was to record time-series vibration data. Ideally, the entire flight would
be recorded for later analysis. Unfortunately, at the high digitization rates (up to 100 kHz) and the
high analog to digital dynamic range requirements (16 bits), this would result in files too large to
be conveniently managed (many gigabytes per flight). To compromise, a snapshot data format
was constructed to provide long data records at relatively low sample rates on slow moving
shafts and shorter data records at higher sample rates on higher speed shafts. A canvas of
interested diagnostics researchers resulted in the data format shown in Figure 4. These data
snapshots are taken at five-minute intervals during a nominal one-hour flight resulting in data files
of 150MB per flight.
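The quoted file sizes can be checked with simple arithmetic, assuming the nine-channel suite (eight accelerometers plus tach) and 2 bytes per 16-bit sample:

```python
# Back-of-envelope check of the quoted per-flight data volumes, using the
# Figure 4 snapshot format. Channel counts assume 8 accelerometers + tach.
records = [
    # (duration_s, channels, sample_rate_hz)
    (4,  9, 100_000),  # all accelerometer channels + tach
    (16, 6, 25_000),   # 5 aft box channels + tach
    (32, 3, 6_250),    # 2 aft box channels + tach
]

bytes_per_snapshot = sum(d * ch * fs * 2 for d, ch, fs in records)
snapshots_per_flight = 60 // 5  # every 5 min during a nominal one-hour flight
total_mb = bytes_per_snapshot * snapshots_per_flight / 1e6
print(f"{bytes_per_snapshot/1e6:.1f} MB per snapshot, ~{total_mb:.0f} MB per flight")

# For contrast, recording the whole flight at the full rate:
continuous_gb = 9 * 100_000 * 2 * 3600 / 1e9
print(f"continuous recording would be ~{continuous_gb:.1f} GB per flight")
```

The snapshot scheme lands near the stated ~150 MB per flight, versus several gigabytes for continuous recording at the full rate.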
Analysis Process Flow: Figure 5 shows the process of data flow involving the diagnostics research community conducting analyses of each data set collected on the aircraft. These analyses are planned to be conducted with the results reported to Penn State ARL for documentation as a part of the archive and evaluation. Results of these analyses and their “scores” on the measures of effectiveness and performance will be maintained as a part of the data archive and available to the Office of Naval Research as desired. Further dissemination of the results will be in accordance with the details of the agreements made for access to the data archive. These details are yet to be established.
Figure 4 - Data snapshot format collected in flight: 4 seconds of all accelerometer channels + tach @ 100 kHz; 16 seconds of 5 aft box channels + tach @ 25 kHz; 32 seconds of 2 aft box channels + tach @ 6.25 kHz. Frequencies shown are sample rates at 16 bits.
Project Status: The project is underway. At the time of this writing, we have instrumented one
aircraft with the intention (given the availability of sufficient support) of instrumenting the balance
of the squadron during FY 99. Each of the major elements of the project is described in more
detail below.
Flight Clearance: As described above, the necessity for flight clearance is clear. The process itself
is necessarily long and careful. In October 1997 we received interim flight clearance which
allowed us to instrument the first aircraft. Aircraft 07 was instrumented shortly thereafter. In
January 1998, we received a modification to the interim clearance to allow us to install a
tachometer signal taken from the main rotor tachometer signal (Nr) versus a dedicated sensor on
the sync shaft as originally planned. The change was necessitated by clearance problems and
aircraft-to-aircraft variability discovered during the first installation. Upon successful installation
and debug on the first aircraft, final flight clearance will be issued prior to completing installation on other aircraft.
Figure 5 - Data Flow and Analysis Process Flow
Figure 6 - Download workstation (data logger, logger ground power supply, server computer, and monitor) located in QA
Workstations & Intranet: The workstation for data download and Intranet service is shown in
Figure 6. The equipment was installed in the Quality Assurance office at the Squadron in January
1998. This workstation will be connected to the Internet by a dedicated connection installed in
the Quality Assurance Office for this project. Access will be controlled in accordance with terms
of access to be developed in coordination with the Type Commander and Squadron.
Owing to effort and time involved with establishing a secure link to the NALCOMIS system, a
separate workstation was installed in Logs and Records to allow direct scanning of the
Maintenance Action Forms (MAF) and any other information relating to the instrumented
aircraft. This workstation was also installed in January 1998 and is similar to that shown in Figure
6 with the addition of a scanner to accommodate MAF input by the squadron.
Analysis and Availability: The data collection effort has resulted in several data sets. These
data sets should be analyzed in conjunction with the pertinent MAFs. For instance, MAFs have
indicated routine oil leaks and an evolving sprag clutch failure this past year.
An example web page that researchers will see when logging into the Intranet is shown in Figure
7. While extensive analysis of the data sets has not been conducted within the scope of this initial
collection effort, limited spectral and time wave form evaluation has been performed. Figure 8
shows a typical acceleration spectrum from an accelerometer mounted near the starboard engine
input section of the aft main transmission.
Figure 7 - Home Page for access to data archive
Figure 8 - Typical power spectrum of the vibration signal obtained from an accelerometer mounted near the starboard engine input section of the aft main transmission of the Navy H-46D, Tail No. 07, during a routine flight (data snapshot: 18:35:46 on July 8, 1998). The predominant engine input shaft frequency sidebands are observed to be typical of normal operating condition. Annotated features include the collector gear / spur pinion mesh frequency and sidebands at the mix box input shaft frequency.
Future Plans: The intent is to collect data on instrumented aircraft for a period of 12 months. At
that time, a decision will be made by the squadron and the Type and Wing Commanders as to
whether the data collection process may continue. Of course, the decision will be based on
interference with squadron operations, value added, and available funding. We anticipate that the
pool of researchers analyzing the data will grow considerably over time and that the data archive
created by this project will continue to serve the HUMS development community for many years
to come.
Summary: This paper discussed the issues expected to arise as HUMS systems evolve and the capability to implement advanced diagnostic systems matures. We believe the time is right for a
project that collects the necessary archival data and develops the measures of effectiveness and
performance to allow intelligent selection and qualification of diagnostic techniques from a variety
of vendors. While this project does not (and was never intended to) fulfill all needs in this area, it
does provide a significant contribution to the CBM community and provides an opportunity to
exercise a rational and fair qualification process. It also provides a roadmap on how to do (and in
some cases, how not to do) a project that rigorously and efficiently collects a large data set and
distributes it for development and qualification purposes. It is hoped that the paper will be useful
in defining a future HUMS which will expand current diagnostic and prognostic capabilities.
Acknowledgment: The authors gratefully acknowledge the support of the Repair Technology
(REPTECH) Project of the Navy Manufacturing Technology (MANTECH) Program at the
Office of Naval Research and Mr. Steve Linder. The REPTECH Project Manager at Penn State
ARL, Mr. Lewis Watt, has also provided guidance and support. Special thanks go to the HC-3 helicopter squadron at the Naval Air Station, North Island.
References:
[1] J. Land and C. Weitzman, “How HUMS Systems Have the Potential of Significantly Reducing the Direct Operating Cost for Modern Helicopters Through Monitoring”, Teledyne Document, presented at the American Helicopter Society 51st Annual Forum and Technology Display, Ft. Worth, TX, May 1995.
[2] G. Marsh, “The Future of HUMS”, Avionics Magazine, pp. 22-27, February 1996.
[3] D. Parry, “Evaluating IHUMS”, Avionics Magazine, pp. 28-32, February 1996.
[4] “Health and Usage Monitoring System Enhances Helicopter Safety”, Aviation Week and Space Technology (New York), Vol. 139, No. 12, p. 94, September 20, 1993.
[5] Byington, C. S., and Nickerson, G. W., “Technology Issues in Condition-Based Maintenance”, 7th Annual Predictive Maintenance Technology Conference, December 5, 1995. Reprinted in P/PM Technology Magazine, Vol. 9, Issue 3, June 1996.
[6] Nickerson, G. W., “Prognostics: What Does It Mean in Condition-Based Maintenance?”, Proceedings of NOISECON ’97, June 1997.
[7] Byington, C. S., George, S. E., and Nickerson, G. W., “Prognostic Issues for Rotorcraft Health and Usage Monitoring Systems”, Proceedings of the 51st Meeting of the Society for Machinery Failure Prevention Technologies, April 1997.
[8] Nickerson, G. W., and R. W. Lally, “An Intelligent Component Health Monitoring System: A Building Block for a Distributed Approach to Machinery Health Assessment”, pamphlet publication of the 1997 ASME/IMechE, Dallas, TX, November 1997.
[9] Nickerson, G. W., and B. Thomason, “Hierarchical Open Architecture Approach to Shipboard Condition-Based Maintenance”, Proceedings of the ASNE Condition-Based Maintenance Symposium, June 1998.
[10] Byington, C. S., and J. Kozlowski, “Mechanical Diagnostics Test Bed”, Proceedings of the 51st Meeting of the Society for Machinery Failure Prevention Technologies, April 1997.
[11] Church, K. G., R. R. Kolesar, M. E. Phillips, and R. C. Garrido, “Air Vehicle Diagnostic System CH-46 Aft Main Transmission Fault Diagnostic - Final Report”, NRaD Technical Document 2966, June 1997.