Calhoun: The NPS Institutional Archive
Theses and Dissertations, Thesis Collection

1981

A model to measure bombardier/navigator performance during radar navigation in device 2F114, A-6E weapon system trainer.

Mixon, Ted R.

Monterey, California. Naval Postgraduate School

http://hdl.handle.net/10945/20611
NAVAL POSTGRADUATE SCHOOL
Monterey, California
THESIS
A MODEL TO MEASURE BOMBARDIER/NAVIGATOR
PERFORMANCE DURING RADAR NAVIGATION IN
DEVICE 2F114, A-6E WEAPON SYSTEM TRAINER

by

Ted R. Mixon

March 1981

Thesis Advisor: W. F. Moroney

Approved for public release; distribution unlimited.
UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE

TITLE (and Subtitle): A Model to Measure Bombardier/Navigator Performance During Radar Navigation in Device 2F114, A-6E Weapon System Trainer

TYPE OF REPORT AND PERIOD COVERED: Master's Thesis; March 1981

AUTHOR: Ted R. Mixon

PERFORMING ORGANIZATION NAME AND ADDRESS: Naval Postgraduate School, Monterey, California 93940

CONTROLLING OFFICE NAME AND ADDRESS: Naval Postgraduate School, Monterey, California 93940

REPORT DATE: March 1981

NUMBER OF PAGES: 301

SECURITY CLASS (of this report): Unclassified

DISTRIBUTION STATEMENT (of this Report): Approved for public release; distribution unlimited.

KEY WORDS: A-6 Flight Training; A-6E Weapon System Trainer; Aircraft Simulator; Training Devices; Simulator Training; Simulator Performance; Bombardier/Navigator Performance; Human Performance; Performance; Performance measurement; Performance criteria; Performance tests; Measurement; Pilot performance; Pilot performance measures; Pilot performance measurement; Aircrew performance; Aircrew performance measurement; Objective measurement; Student measurement; Student achievement; Training evaluation; Evaluation; Sequential testing; Quantitative methods; Proficiency; Pilot proficiency; Instructor pilot; Task analysis; System analysis; Time Line Analysis; Individual differences; Skill; Flight skills; Navigation skills; Complex skill acquisition; Navigator

ABSTRACT: This thesis modeled a performance measurement system for the Bombardier/Navigator (B/N) Fleet Replacement Squadron student during low level radar navigation flight in the Navy A-6E Weapon System Trainer. The model was designed to determine student skill acquisition measures for the purpose of providing information for decision-making by the squadron instructor and training manager. Model formulation methodology was based on a literature review of aircrew performance measurement from 1962-1980 and an analytical task analysis of the B/N's duties. Over 50 currently accessible candidate measures were listed, and a proposal was made for a competitive exercise (Derby) using A-6E fleet aircrews flying preprogrammed routes to establish performance standards using the candidate measures. Multivariate discriminant analysis was recommended for measure reduction. A sequential sampling decision model selected for evaluation provided fixed decisional error rates for successful training termination decisions and utilized both objective and subjective performance measures. Several display formats were recommended for use in debriefing.

DD FORM 1473, 1 JAN 73 (S/N 0102-014-6601). EDITION OF 1 NOV 65 IS OBSOLETE.
UNCLASSIFIED
Approved for public release; distribution unlimited
A Model to Measure Bombardier/Navigator Performance
During Radar Navigation in Device 2F114,
A-6E Weapon System Trainer
by
Ted R. Mixon
Lieutenant, United States Navy
B.S., United States Naval Academy, 1974
Submitted in partial fulfillment of the
requirements for the degree of
MASTER OF SCIENCE IN OPERATIONS RESEARCH
from the
NAVAL POSTGRADUATE SCHOOL
March 1981
ABSTRACT
This thesis modeled a performance measurement system for the Bombardier/Navigator (B/N) Fleet Replacement Squadron student during low level radar navigation flight in the Navy A-6E Weapon System Trainer. The model was designed to determine student skill acquisition measures for the purpose of providing information for decision-making by the squadron instructor and training manager. Model formulation methodology was based on a literature review of aircrew performance measurement from 1962-1980 and an analytical task analysis of the B/N's duties. Over 50 currently accessible candidate measures were listed, and a proposal was made for a competitive exercise (Derby) using A-6E fleet aircrews flying preprogrammed routes to establish performance standards using the candidate measures. Multivariate discriminant analysis was recommended for measure reduction. A sequential sampling decision model selected for evaluation provided fixed decisional error rates for successful training termination decisions and utilized both objective and subjective performance measures. Several display formats were recommended for use in debriefing.
TABLE OF CONTENTS
I. INTRODUCTION
   A. BACKGROUND
      1. Simulation and Performance Measurement
      2. Review of Previous Studies
   B. SUBJECTIVE AND OBJECTIVE PERFORMANCE MEASUREMENT
      1. Introduction
      2. Subjective Performance Measurement
      3. Objective Performance Measurement
      4. Summary
   C. A-6E TRAM AIRCRAFT AND ITS MISSION
      1. A-6E Performance Specifications
      2. Mission
      3. Scenarios
      4. Summary
II. STATEMENT OF PROBLEM
   A. PROBLEM STATEMENT
   B. THE IMPORTANCE OF OBJECTIVE PERFORMANCE MEASUREMENT
      1. Policy Guidance
      2. Establishment of Standards
      3. Effective Use of Simulators
      4. Effectiveness of Instructors
      5. Effectiveness of Training
      6. Skill Identification and Definition
      7. Instructional Systems Development
III. METHODOLOGY
   A. PROCEDURE
   B. ASSUMPTIONS
      1. Simulator Fidelity
      2. Pilot and B/N Relationship
      3. Literature Review Results
      4. Mathematical Description of Behavior
      5. Experience and Skilled Behavior
IV. THE FUNDAMENTAL NATURE OF PERFORMANCE EVALUATION
   A. MEASUREMENT CONSIDERATIONS
      1. Definition and Purpose of Measurement
      2. Types of Measures
      3. Levels of Measurement
      4. Measure Transformations
      5. Accuracy of Measurement
      6. Reliability of Measurement
      7. Validity of Measurement
      8. Selection of Initial Measures
   B. CRITERIA CONSIDERATIONS
      1. Definition and Purpose of Criteria
      2. Types of Criteria
      3. Characteristics of Good Criteria
      4. Other Criteria Characteristics
      5. Establishment of Criteria
      6. Sources of Criterion Error
      7. Measures of Effectiveness
      8. Selection of Criteria
   C. PERFORMANCE MEASUREMENT CONSIDERATIONS
      1. Subjective Versus Objective Measures
      2. Combining Measures
      3. Overall Versus Diagnostic Measures
      4. Individual Versus Crew Performance
      5. Measures and Training
   D. PERFORMANCE EVALUATION CONSIDERATIONS
      1. Definition and Purpose
      2. Types of Performance Evaluation
      3. Accuracy of Evaluation
      4. Evaluating Individual and Group Differences
      5. Characteristics of Evaluation
V. THE NATURE OF THE BOMBARDIER/NAVIGATOR TASK
   A. INTRODUCTION
   B. TASK CONSIDERATIONS
      1. Task Definition
      2. Classification of Tasks
      3. Task Relation to Performance and System Purpose
   C. RADAR NAVIGATION TASK ANALYSIS
      1. Background
      2. Previous A-6 Task Analyses
      3. Current Task Analysis for Performance Measurement
      4. B/N Skills and Knowledge
VI. A-6E WST PERFORMANCE MEASUREMENT SYSTEM
   A. WST CHARACTERISTICS
      1. General Description
      2. Performance Evaluation System
   B. PERFORMANCE MEASUREMENT SYSTEMS
      1. Systems Analysis
      2. Allocation of Functions
      3. System Criteria
      4. System Implementation
   C. CURRENT PERFORMANCE MEASUREMENT IN THE WST
VII. RESULTS AND DISCUSSION
   A. CANDIDATE MEASURES FOR SKILL ACQUISITION
      1. Method of Measurement
      2. Measure Segment
      3. Scale
      4. Sampling Rate
      5. Criteria
      6. Transformation
      7. Currently Available
      8. Accessible
   B. ESTABLISHMENT OF PERFORMANCE STANDARDS
      1. Radar Navigation Maneuver
      2. Simulator "Intruder Derby"
   C. MEASURE SELECTION TECHNIQUES
   D. EVALUATION METHODS
   E. INFORMATION DISPLAYS
   F. IMPLEMENTING THE MODEL
VIII. SUMMARY
   A. STATEMENT OF PROBLEM
   B. APPROACH TO PROBLEM
   C. FORMULATION OF THE MODEL
   D. IMPACT TO THE FLEET
BIBLIOGRAPHY
APPENDIX A: A-6E TRAM RADAR NAVIGATION TASK LISTING
APPENDIX B: GLOSSARY OF TASK SPECIFIC BEHAVIORS
APPENDIX C: A-6E TRAM RADAR NAVIGATION TASK ANALYSIS
APPENDIX D: RADAR NAVIGATION MISSION TIME LINE ANALYSIS
APPENDIX E: SEQUENTIAL SAMPLING DECISION MODEL
INITIAL DISTRIBUTION LIST
LIST OF TABLES
I. NAVIGATION MANEUVERS AND SEGMENTS
II. A TAXONOMY OF MEASURES
III. LEVELS OF MEASUREMENT
IV. COMMON MEASURE TRANSFORMATIONS
V. PERFORMANCE EVALUATION DECISION MODEL FOR THE INSTRUCTOR
VI. CLASSIFICATION OF BOMBARDIER/NAVIGATOR BEHAVIORS
VII. TAXONOMY OF SKILLS REQUIRED
VIII. A-6E WST EVENT RECORDING SELECTION
IX. HUMAN AND MACHINE CAPABILITIES AND LIMITATIONS
X. CURRENT B/N TASKS GRADED IN WST CURRICULUM
XI. CANDIDATE PERFORMANCE MEASURES
XII. RADAR NAVIGATION PERFORMANCE SUMMARY
LIST OF FIGURES
1. Methodology Flow Diagram
2. A-6E Crew-System Network
3. Skill Acquisition and Performance Evaluation Model
4. Device 2F114, A-6E WST
5. A-6E WST Performance Evaluation System
6. Typical B/N Flight Evaluation Sheet for WST
7. Sequential Sampling Decision Model for TP Navigation Task
8. Time Activity Record Display
9. Time-on-TP Management Display
10. Fuel Management Display
11. Radar Navigation Accuracy Display
E1. Hypothetical Sequential Sampling Chart
I. INTRODUCTION
Since 1929, when Edwin A. Link produced the first U.S.-built synthetic trainer designed to teach people how to fly, flight simulation has witnessed substantial advances in simulation technology and increased incorporation of these devices into both military and civilian aviation training programs. A recent addition to this advance, in 1980, was the A-6E Weapon System Trainer (WST), device 2F114. A sophisticated flight simulator with a six-degree-of-freedom motion system, the A-6E WST was designed to provide the capability for pilot transition training, Bombardier/Navigator (B/N) transition training, integrated crew training, and maintenance of flight and weapon system proficiency in all non-visual elements of the A-6E Carrier Aircraft Inertial Navigation System (CAINS) mission.
The development of high-fidelity flight simulation has been accompanied by advances in aircrew performance measurement systems. Such systems are well suited to the simulator training environment and have been widely implemented and extensively researched in all three military aviation communities.
The purpose of this thesis is to improve current performance measurement techniques for the B/N Fleet Replacement Squadron (FRS) student through the development and application of a performance measurement system that incorporates the advantages of both objective skill acquisition measures and subjective instructor measurement.
A. BACKGROUND
While simulation has been a popular component of many aviation training systems for over forty years, objective performance assessment was not incorporated until some fifteen years ago. This section will discuss the current state of flight simulation and aircrew performance measurement, and provide a brief review of previous navigator performance measurement studies.
1. Simulation and Performance Measurement

Simulation is the technique of reproducing or imitating some system operation in a highly controlled environment. Modern flight simulators have evolved from simple procedure trainers into devices that represent specific aircraft counterparts and imitate or duplicate on-board systems and environmental factors. The two main purposes of flight simulators within the training environment are training and evaluation. Training is designed to improve performance; some means of providing feedback to the student is needed to indicate the adequacy of his behavior and to provide guidance for the correction of inappropriate response patterns. Evaluation involves testing and recording the student's behavior in the performance examination situation [Angell, et al., 1964].
The reasons for using the simulator as an integral part of a military flight training program were examined by Tindle [1979], Shelnutt, et al. [1977], North and Griffin [1980], and Roscoe [1976]. Some basic justifications for simulator training include:
(1) Simulation provides training in skill areas not adaptable to an actual training flight because of technological or safety considerations.

(2) Crews master skills in the aircraft in less time after learning those skills in a simulator.

(3) The cultivation of decision-making skills is an instructional objective calling for situational training that may be carried out safely only in a simulated tactical environment.

(4) Simulators are effective for training crewmembers of varying experience and expertise in a variety of aircraft for a number of flight tasks.

(5) Greater objectivity is obtainable for measuring student performance by using controlled conditions and automated performance measurement features in the simulator than in the aircraft.

(6) Instructors are not distracted by operational constraints in the simulator and are more available for teaching and evaluation roles.
These considerations are by no means exhaustive, but
they do indicate the utility of simulators in flight training
programs, especially in evaluating student performance.
This thesis is addressed primarily to the problem of
measuring B/N performance during a radar navigation training
flight while in the A-6E WST.
The performance of a B/N is
the exerted effort (physical or mental) combined with internal
ability to accomplish the A-6E mission and its functions.
Some development of performance measurement definitions and goals is necessary because the problem of assessing B/N performance during a training program is obviously but a segment of a broader topic: the measurement of human behavior.
Glaser and Klaus [1966] defined performance evaluation as the assessment of criterion behavior, or the determination of the characteristics of present performance or output in terms of specified standards. The importance of defining performance evaluation is paramount to any training assessment situation, as it provides common ground for operationally describing the human behaviors that make up performance itself, and identifies behavior elements that may be measured by either objective or subjective means. An expanded discussion of performance measurement and evaluation can be found in Chapter IV.
The purposes for assessing performance by the application of standard objective measurement operations were stated by Angell, et al. [1964], and Riis [1966]:

(1) Achievement - to determine the adequacy with which an activity can be performed at the present time, without regard, necessarily, for antecedent events or circumstances.

(2) Aptitude - to predict the level of proficiency at which a person might perform some activity in the future if he were given instructions concerning the activity.

(3) Treatment efficacy - to observe the effects upon performance of variation in some independent circumstances such as (a) instructional techniques, (b) curriculum content, (c) selection standards, (d) equipment configurations, or the like.
The flight simulator training environment allows for special applications of the above, as found in Danneskiold [1955], Angell, et al. [1964], Riis [1966], Glaser and Klaus [1966], Farrell [1974], Shipley [1976], and McDowell [1978]:

(1) Diagnostic - determine strong and weak areas of student proficiency.

(2) Readiness - determine operational readiness of an aviation unit.

(3) Discrimination - assess performance to provide information about an individual's present behavior as compared to other individuals.

(4) Selection - of persons for promotion, advancement, or placement.

(5) Learning rates - determining the rate at which learning takes place.

(6) Management - of an entire training program and its subsystems.

(7) Evaluation - of training devices in terms of effectiveness and transfer-of-training.
The above goals of performance measurement represent some of the major reasons why assessment of student performance is important in training program simulators. Most importantly, measurement provides FRS instructors and training officers with the information needed to make correct decisions [Obermayer, et al., 1974; Vreuls and Wooldridge, 1977]. Performance measurement does not in itself replace the decision-maker in the FRS, but instead provides complete and necessary information of an objective nature to the appropriate evaluator (instructor or training officer), so that more accurate and reliable decisions can be made concerning student progress within the training syllabus. If instructors and training officers utilize the potential of a performance measurement system, a more effective and efficient training program will result.
2. Review of Previous Studies

Most of the literature on aircrew performance measurement in the last forty years has concentrated primarily on the pilot crewmember [Ericksen, 1952; Danneskiold, 1955; Smode, et al., 1962; Buckhout, 1962; Obermayer and Muckler, 1964; Mixon and Moroney, 1981].
The first comprehensive evaluation of techniques used in flight grading was by Johnson and Boots [1943], who analyzed ratings given by instructors and inspectors to students on various maneuvers throughout stages of training. One result showed correlations between grades assigned by different raters to the same subject to be very low. This result of low observer-observer reliability when using subjective ratings will be discussed in the next section.
The earliest studies involving the radar navigation performance of a crewmember other than the pilot were a paper-and-pencil radar scope interpretation experiment by Beverly [1952], and two Air Force radar bombing error projects by Voiers [1954], and Daniel and Eason [1954]. The first study was concerned with constructing a suitable test for the measurement of the navigational radar scope interpretation ability of student aircraft observers. The latter two studies were concerned with identifying perceptual factors which contributed to cross-hair error during bomb runs of a radar bombing mission, and with comparing the components of simulated radar bombing error in terms of reliability and sensitivity to practice, respectively.
A similar follow-up study on radar scope interpretation and operator performance in finding and identifying targets using a radar was performed by Williams, et al. [1960]. These four studies represent most of the research on non-pilot radar navigation performance measurement prior to 1965. This fact is not surprising, due mainly to the early role played by the observer in very simplified and pilot-oriented aircraft as compared to today's specialized navigator in complex, computer-oriented aircraft.
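The bombing-error summaries used in studies such as Voiers [1954] and Daniel and Eason [1954] can be illustrated with a small computation. The Python sketch below derives radial miss distance from range and deflection components of cross-hair error and estimates a circular error probable (CEP); the sample values are hypothetical, and taking the median radial error as the CEP estimate is an illustrative assumption, not a method drawn from those studies.

    import math

    # Hypothetical cross-hair errors (feet) for simulated bomb runs, as
    # (range error, deflection error) components.
    runs = [(120.0, -45.0), (-80.0, 60.0), (30.0, 15.0),
            (-150.0, -90.0), (70.0, -20.0)]

    # Radial miss distance for each run.
    radial = sorted(math.hypot(r, d) for r, d in runs)

    # CEP: the radius expected to contain half the impacts,
    # estimated here simply as the median radial error.
    n = len(radial)
    cep = radial[n // 2] if n % 2 else 0.5 * (radial[n // 2 - 1] + radial[n // 2])
    print(f"estimated CEP: {cep:.1f} ft")

Comparing such a composite against its separate range and deflection components is essentially what the Daniel and Eason reliability comparison did.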
Since navigation is a primary duty of any aviator across a spectrum of aircraft types, some helicopter pilot and copilot studies are of value to review. Helicopter Nap-of-the-Earth (NOE) flight is a visual-dominated low level mission where altitude and airspeed are varied in close proximity to the ground. Some navigational performance measures utilized in these studies were: number of turn points found, probability of finding a turn point, and route excursions beyond a criterion distance [Fineberg, 1974; Farrell and Fineberg, 1976; Fineberg, et al., 1978; Smith, 1980]. Low level visual navigation flights in helicopters were also studied in some detail, again with pilot performance being the main concern [Lewis, 1966; Billings, et al., 1968; Sanders, et al., 1979].
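The route-excursion measure above lends itself to direct computation. The sketch below flags position samples whose perpendicular (cross-track) distance from the planned leg exceeds a criterion distance; the flat-earth geometry, the coordinates, and the 1 nm criterion are illustrative assumptions rather than values from the cited studies.

    import math

    def cross_track_error(p, a, b):
        """Perpendicular distance from point p to the planned leg a->b (flat x/y)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        vx, vy = bx - ax, by - ay
        # Signed parallelogram area divided by leg length gives the distance.
        return abs(vx * (py - ay) - vy * (px - ax)) / math.hypot(vx, vy)

    # Hypothetical planned leg and sampled aircraft positions (nautical miles).
    leg_start, leg_end = (0.0, 0.0), (10.0, 0.0)
    samples = [(1.0, 0.2), (3.0, 0.8), (5.0, 1.6), (7.0, 0.4)]
    criterion = 1.0  # excursion criterion distance (nm)

    excursions = [p for p in samples
                  if cross_track_error(p, leg_start, leg_end) > criterion]
    print(f"{len(excursions)} of {len(samples)} samples beyond {criterion} nm")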
Two rather novel investigations of Anti-Submarine Warfare (ASW) helicopter team performance, using the content and flow of team communications during simulated attacks, were done by Federman and Siegel [1965] and Siegel and Federman [1968]. All team communications were recorded, classified, and compared to target miss distance as an effectiveness measure. Although they found some types of team communication to correlate highly with mission success, their method was highly impractical for the operational situation due to the large number of personnel needed to play back and classify the communication types. Nevertheless, the results from this research indicate the value of using crew communication as a measure of crew performance.
Several fixed-wing studies with navigation as the primary mission are also of interest to the current study. Schohan, et al. [1965] and Soliday [1970] used the Dynamic Flight Simulator for several Low Altitude High Speed (LAHS) missions designed to investigate pilot and observer performance during turbulent, lengthy (3-hour) flights. Jensen, et al. [1972] did several studies investigating pilotage errors in area navigation missions for the Federal Aviation Administration. These three studies are significant in that numerous navigational accuracy performance measures were used to assess pilot (or observer) performance.
After 1970, due to the increased complexity of many modern aircraft, more research was directed toward individual aircrew members, and not just the pilot. Among the aircraft investigated were the P-3C, F-4J, A-7, F-106, B-52, C-141, C-130, KC-135, and C-5 [Matheny, et al., 1970; Vreuls and Obermayer, 1971; Obermayer and Vreuls, 1974; Geiselhart, et al., 1976; Swink, et al., 1978]. These studies are unique in that defining and assessing aircrew performance by other than the subjective ratings commonly used for decades became a technological challenge requiring new analytical and empirical approaches.
An Air Force fighter-bomber, the F-111D, was designed and built during the late 1960's with virtually the same tactical capability as the A-6E. With a two-man side-by-side cockpit arrangement, this land-based aircraft is the closest counterpart to the A-6E for the radar navigation air interdiction mission. Two experiments using the F-111A flight simulator were performed, mainly to assess equipment configuration effects on pilot performance [Geiselhart, et al., 1970; Geiselhart, et al., 1971]. Research by Jones [1976] examined the use of the F-111D flight simulator as an aircrew performance evaluation device. Unfortunately, these F-111 studies do not specifically address the issue of how to measure navigator performance during radar navigation, but they do provide some information on measuring performance in an aircraft with a similar mission and almost the same crew interactions as the A-6E.
Only one experiment known to this author has been conducted using an A-6 configured simulation. Klier and Gage [1970] investigated the effect of different simulation motion conditions on pilots flying air-to-air gunnery tracking tasks in the Grumman Research Vehicle Motion Simulator (RVMS). They concluded that simulator motion need not be a faithful reproduction of real-life motion in order to provide essential motion cues. Saleh, et al. [1980] performed an analytical study for two typical tactical combat missions representative of the A-6E and A-7E aircraft to determine significant decisions which are made in the course of accomplishing mission objectives. The results of this study provide information regarding decision type, difficulty, and criticality, and can be used to identify the critical areas in which aircrew decision-aiding may significantly improve performance.
Finally, a study by Tindle [1979] concluded that the integration of the A-6E WST (device 2F114) into the FRS training program would be more cost-effective than using the A-6E WST as an addition to existing training programs. This study also concluded that aircrew performance measurement in the A-6E WST was vital for more effective use of the simulator.
This section has presented a brief review of aircrew performance measurement studies in the literature. The potential value in reviewing the literature lies in uncovering the analytical and empirical approaches taken in measuring aircrew performance, noting both the significance and practicality of those approaches. Since previous research on actual B/N performance during the radar navigation air interdiction mission appears to be nonexistent, extrapolation from other aircrew performance measurement studies is important and necessary to the current study. What has worked and been practical for other aircraft and aircrews in the way of performance measurement certainly applies to the A-6E B/N, keeping in mind the definitions and goals of the B/N, performance measurement, and the A-6E mission.
B. SUBJECTIVE AND OBJECTIVE PERFORMANCE MEASUREMENT

1. Introduction
Traditionally, aircrew performance in the Navy, Air Force, and Army has been assessed by an instructor pilot or navigator using a subjective rating scale which places the student in one of several skill categories based on norm-referenced testing. More recently, objective methods of evaluating performance have been developed and implemented in both the simulator and in-flight environments. Subjective and objective methods are not dichotomous, but represent a continuum of performance measurement: at one extreme there exists the strictly personal judgement and rating of performance, and at the other end of the continuum is a completely automated performance measurement and assessment system. This section will define and describe the elements of each method, together with the advantages and disadvantages associated with each. The approach taken in this study will be to integrate the use of automatic performance measurement within the A-6E training environment while still exploiting the advantages of using the instructor as a component of the measuring system.
2. Subjective Performance Measurement

Subjective measurement can be defined as an observer's interpretation or judgement between the act observed and the record of its excellence, with almost complete reliance placed on the judgement and experience of the evaluator [Cureton, 1951; Ericksen, 1952]. Simply stated, subjective measurement is qualitative in nature, as what is being measured is observed privately [Danneskiold, 1955; Knoop and Welde, 1973; Roscoe, 1976; Vreuls and Wooldridge, 1977]. Through an introspective process, the "expert" instructor judges the performance level demonstrated by a student, whether or not agreed-upon standards of performance have been applied [Billings, 1968; McDowell, 1978].
a. Advantages of Subjective Measurement

The advantages of using subjective performance measurement methods have been well documented throughout the literature. Instructor ratings have historically been the least expensive of all evaluation methods [Marks, 1961], although this decisive advantage has been eroded in recent times by severe shortages of military aircrew in both operational and training units. Marks [1961] also pointed out that rating forms are constructed easily and quickly, and that administration of the rating system requires no physical arrangement. McDowell [1978] recently concluded that subjective performance measurement systems for complex tasks such as flying are easy to develop, carry high face validity since the rating instructor is usually an acknowledged expert, and contain specific feedback of a type important in the training situation that is usually not found in objective performance measurement systems. Ratings are still used because they meet the needs of training management without seriously intruding into the instructor pilot's operational capabilities [Shipley, 1976].
One important use of an instructor subjectively grading a student is to motivate the student through selective reinforcement [Prophet, 1972; Carter, 1977]. Sometimes an overly positive or negative grade by the instructor, in the appropriate area of desired performance improvement, serves as a catalyst in the student's attitude toward self-improvement.
Some studies have shown that a high degree of reliability can be achieved between instructor ratings [Greer, et al., 1963; Marks, 1961]. Brictson [1971], in a study of over 2500 carrier arrested landings, reported measures derived from Landing Signal Officer (LSO) grades to be highly correlated with objective estimates derived from a weighted combination of wave-offs, bolters, and the particular wire engaged. Similar high correlations between raters were also found by Waag, et al. [1975] for undergraduate pilots flying seven basic instrument maneuvers in the Air Force Advanced Simulation in Undergraduate Pilot Training (ASUPT) facility.
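Brictson's weighted-combination approach can be made concrete with a short sketch. The weights, grade scale, and landing records below are hypothetical stand-ins rather than values from the 1971 study; the point is only the mechanics of forming an objective composite from landing events and correlating it with the subjective LSO grade.

    # Hypothetical records: (LSO grade 0-4, wave-offs, bolters, wire engaged).
    landings = [
        (4.0, 0, 0, 3),
        (3.0, 0, 1, 2),
        (2.0, 1, 1, 1),
        (3.5, 0, 0, 2),
        (1.5, 2, 1, 4),
    ]

    # Assumed weighting: penalize wave-offs and bolters, reward the target (3) wire.
    def objective_score(waveoffs, bolters, wire):
        return 4.0 - 1.0 * waveoffs - 0.5 * bolters - 0.5 * abs(wire - 3)

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    lso = [rec[0] for rec in landings]
    obj = [objective_score(w, b, wire) for _, w, b, wire in landings]
    print(f"LSO-vs-objective correlation: {pearson_r(lso, obj):.2f}")

The same correlation computation underlies the observer-observer reliability figures quoted throughout this section.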
b. Disadvantages of Subjective Measurement

High reliability between instructors using subjective rating methods is generally not the case, and therein lies the foremost disadvantage of the subjective performance measurement method. Knoop [1973] reported that two instructor pilots' (IPs') subjective ratings were each correlated with certain objective performance measures, but the objective measures themselves were determined not to be highly correlated with skilled or proficient operator performance. Ericksen [1952] reviewed numerous flight studies involving pilot training between 1932 and 1952 and concluded that subjective grading involved a lack of reliability and inconsistent differentiation between students. Danneskiold [1955] found observer-observer correlations no higher than .47 for three Basic Instrument Check maneuvers, while a more objective test had observer-observer correlations of .86.
The training and evaluation skills of instructors evolve primarily from their personal experiences in the highly complex aircraft and simulator environment. Establishing adequate standards of performance, or criteria, is a major problem in all flight training. Knoop and Welde [1973] found a lack of agreement between pilots on the specific criteria for successful performance of certain aerobatic maneuvers, due largely to differences in examiner knowledge, experience, and proficiency. It was also found that the same maneuver may be flown satisfactorily in a number of different ways. Other research has explored the criteria problem, which is inherently part of subjective performance measurement [Cureton, 1951; Danneskiold, 1955; Marks, 1961; McDowell, 1978].
Even rating methods were inadequate when used as criteria to validate alternative methods of measurement and evaluation [Knoop and Welde, 1973].
An instructor must be able to process large quantities of information during a simulator session. Subjective grading competes with this capability of the instructor, and may prevent perception and evaluation of all the relevant dimensions of task performance during a training mission [Knoop, 1968; Roscoe, 1976; Shipley, 1976; Carter, 1977; Vreuls and Wooldridge, 1977].
Several other factors which contribute to subjective aircrew rating variances are discussed below:

(1) A tendency of raters to be more lenient in evaluating those whom they know well or are particularly interested in [Smode, et al., 1962; Bowen, et al., 1966].

(2) The observations tend to accumulate on one or two points in the rating scale, usually near the central or average point. This phenomenon contributes toward a lack of sufficient discrimination among students [Smode, et al., 1962; Shipley, 1976].

(3) Instructor and student personalities interact to yield a result which does not reflect true performance [Marks, 1961].

(4) A tendency for the ratings on specific dimensions to be influenced by the rater's overall impression of the student's performance - the "halo effect" [Smode, et al., 1962; Glaser and Klaus, 1966].

(5) An unrelated problem with the simulator may influence the evaluation outcome [Jones, 1976].

(6) Evaluation results are dependent upon the attitude, concern, and values of the instructor; thus a natural personal bias is introduced into the performance observation [Smode, et al., 1962; Knoop and Welde, 1973; Jones, 1976].

(7) Instructors have different concepts of the specific grading system in regard to the flight parameters involved, knowledge tested, weights to be assigned, and ranges of qualifying categories [Knoop and Welde, 1973].

(8) A tendency to rate others in the opposite direction from how the rater perceives himself on the particular performance dimension has been found [Smode, et al., 1962].

(9) Ratings tend to become more related along different dimensions when they are made closer to each other in time than ratings having a larger interval of time between observations [Smode, et al., 1962].

(10) Unless the simulator has a playback capability, a permanent record of the performance is lost when subjective ratings are used [Forrest, 1970; Gerlach, 1975].
3. Objective Performance Measurement

Objective measurement is defined as observations where the observer is not required to interpret or judge, but only to record his observations [Cureton, 1951]. While subjective judgement is more qualitative in nature, objective measurement demands that what is being measured be observed publicly and with a quantitative result [Knoop and Welde, 1973; Roscoe, 1976; Vreuls and Wooldridge, 1977; McDowell, 1978]. Objective measures demand that performance be evaluated in terms of criteria which are relatively independent of the observer, have consistent interpretations, and have a high degree of observer-observer reliability [Ericksen, 1952; Danneskiold, 1955; Marks, 1961; Smode, et al., 1962].
The first systematic use of an objective grading method was by Miller, et al. [1947]. Objective measures were collected during a single week from over 8,000 students in four different phases of pilot training. Objective observations by way of a prepared checklist reduced variability attributable to the observer, and correlations as high as .88 were found between instructors observing a student during the same flight. In most cases, higher observer-observer reliability has been found when objective measures are used [Angell, et al., 1964; Forrest, 1970]. The measures are free from the personal and emotional bias of the instructor, as well as the judgemental bias that is characteristic of subjective measurement.
a. Advantages of Objective Measurement

Most advantages of objective performance measurement appear to contrast with the disadvantages of subjective measurement. By having a computer process large amounts of continuously varying information, the instructor is freed to concentrate on those aspects of student performance which resist objective measurement, and to devote more attention to the primary duty of instructing [Krendel and Bloom, 1963; Knoop and Welde, 1973; Vreuls and Obermayer, 1971; Vreuls, et al., 1974]. Development of performance criteria could be made on the basis of permanently recorded objective measures [Forrest, 1970; Angell, et al., 1964]. A system of data collection would provide records and transcriptions of individual and crew performance in practice missions to identify particularly effective or ineffective behaviors for later analysis in the event of an aircraft accident [Forrest, 1970; Angell, et al., 1964].
Objective measurement provides timely, diagnostic information on consistent weaknesses in performance [Angell, et al., 1964; Knoop and Welde, 1973]. Instructional methods may be modified as performance results indicate their effectiveness, or lack of it. Students attaining desired achievement levels may also be identified earlier within the training syllabus. Several researchers postulate the quantification of skill learning rates [Angell, et al., 1964; Knoop and Welde, 1973]. Bowen, et al. [1966] even found that objective measures motivated pilots to actualize their skills in overt performance measurement. They speculated that the "heightening of performance is due to intrinsic motivation (personal desire to achieve), social motivation (pressures from group to demonstrate proficiency), and a focusing of attention on each particular performance which encourages the pilot to actualize the knowledge and skills which he possesses."
b. Disadvantages of Objective Measurement

If objective performance measures are so much more desirable than subjective ratings, why have they not been incorporated into more training situations? The major reason is that they are much more expensive than subjective methods, and some aircrew tasks are difficult to automatically record and computer grade. Whenever a number of simultaneously occurring tasks, such as communication, procedures, and the application of knowledge, are present during a particular task, measuring and quantifying the operator behavior involved becomes a complex and inherently difficult task in itself. Smode [1966] found that when flight instructors are required to monitor and record performance information during a flight task, some resentment against the objective measurement method occurred due to the large amount of instructor attention required. This instructor monitoring and recording of student performance information has since been replaced by automatic digital computers. The same report also concluded that detailed analysis of aircrew tasks into perceptual-motor tasks, procedural tasks, and decision-making distorts reality, as all three are very interrelated. Objective tests are rigidly constrained by given hardware that has to be physically arranged and programmed, and is subject to equipment malfunctions. Since objective tests require more structure than subjective testing, instructors are given very little choice in what they must do and "there is a certain natural resentment against the regimentation of setting up and observing this event at this time [Smode, 1966]."
4. Summary

Subjective and objective performance measurement of aircrew has been defined, compared, and contrasted for strengths and weaknesses. Subjective testing is universally feasible, minimizes paperwork, allows for instructor flexibility, is easy to develop and administer, and is inexpensive. Objective testing minimizes instructor bias, eases grading through automatic data collection, storage, and dissemination, improves performance standardization, and produces high inter-rater reliability. Each method has its merits individually, but when used together in a cohesive and synergistic combination, improvements can be made in aircrew performance measurement and assessment.
Angell, et al. [1964] stated, "There are some areas in which the human observer can make more subtle judgements and more sophisticated evaluations than can any electromechanical instruments ... the human observer/teacher should not be an adjunct, but rather an integral part of the total measurement system." This report will attempt to use this observation in the design of a system to measure A-6E B/N performance during radar navigation in the simulator. This section concludes with a listing of the items the examiner should evaluate, as determined by Knoop and Welde [1973]:
(1) Ability to plan effectively.

(2) Decision-making capability.

(3) Sensorimotor coordination and smoothness of control.

(4) Ability to share attention and effort appropriately in an environment of simultaneous activities.

(5) Knowledge and systematic performance of tasks.

(6) Confidence proportionate to the individual's level of competence.

(7) Maturity; willingness to accept responsibility, the ability to accomplish stated objectives, judgement, and reaction to stress, unexpected conditions, and aircraft emergencies.

(8) Motivation (attitude) in terms of the manner in which it affects performance.

(9) Crew coordination.

(10) Fear of flying.

(11) Motion sickness.

(12) Air discipline - adherence to rules, regulations, assigned tasks, and command authority.
C. A-6E TRAM AIRCRAFT AND ITS MISSION

1. A-6E Performance Specifications
The A-6E aircraft is a two-man, subsonic, twin-engine medium attack jet, with side-by-side seating for the pilot and B/N. Designed as a true all-weather attack aircraft using a sophisticated radar navigation and attack system, the aircraft can accurately deliver a wide variety of weapons without the crew ever having visually acquired the ground or the target. Capable of carrying a payload of up to 8.5 tons, it is the only carrier-based aircraft in the Tactical Air (TACAIR) wing capable of penetrating enemy defenses at night, or in adverse weather, to detect, identify, and attack fixed or moving targets. The TRAM (Target Recognition, Attack, Multisensor) configured A-6E has a completely integrated computer navigation and control system, radar, armament, flight sensors, and cockpit displays that enable the aircraft to penetrate enemy defenses at distances approaching 600 nautical miles in radius while at an extremely low altitude.
2. Mission

The mission of the A-6 "Intruder" is to perform high and low altitude all-weather attacks to inflict damage on the enemy in a combat situation. TACAIR recognizes three primary missions to accomplish the objective of successfully waging war [Gomer, 1979]. The missions are: Counter Air (CA), Close Air Support (CAS), and Air Interdiction (AI). CAS is air action against hostile ground targets that are in close proximity to friendly ground forces, requiring detailed integration of each air mission with the battle activities and movements of those forces. CA operations involve both offensive and defensive air actions conducted to attain or maintain a desired degree of air superiority by the destruction or neutralization of enemy air forces. AI missions are conducted to destroy, neutralize, or delay the enemy's military potential before it can be brought to bear against friendly forces, usually at distances far enough that detailed integration of air and ground activities is not required. The AI mission was selected in this study because it is representative of those missions frequently performed in the A-6 community. Analysis of all three primary TACAIR missions was beyond the scope of this study.
Saleh [1980] defined a mission as the aggregate of scenarios, maneuvers, and segments that constitute successful employment of the system. The starting point in determining criteria for performance measurement, and in suggesting what specific and clearly identifiable operations of the B/N should be examined in greatest detail, is an operational definition of the man-machine mission [Smode, 1962; Vreuls, 1974]. McCoy [1963] further stated that in order to judge the effectiveness of any element of a man-machine system, it must be judged in terms of the contribution of the element to the final system output, which is the ultimate objective of the man-machine system. It is with these criteria in mind that the AI mission definition is used to limit performance measurement of the B/N in the A-6E WST.
3. Scenarios

Analysis for comprehensive performance measurement begins with a complete decomposition of the mission into smaller parts for which activities and performance criteria are more easily defined [Vreuls, 1974; Connelly, 1974; Vreuls and Cotton, 1980]. Any mission may be described in terms of a scenario, or intended flight profile or regime. A performance measurement standards tri-service project [Vreuls and Cotton, 1980] classified military aviation into ten scenarios, or flight regimes: (1) Transition/Familiarization, (2) Navigation, (3) Instruments, (4) Formation, (5) Basic Fighter Maneuvers, (6) Air Combat Maneuvering, (7) Air-to-Air Intercept, (8) Ground Attack, (9) Air Refueling, and (10) Air Drop. Scenarios may be further subdivided into maneuvers by identifying natural breakpoints using time, position, or definitive portions requiring computation of different performance measures or changes in required operator skill level. Examples of maneuvers are take-off, climb, landing, and point-to-point navigation. Segments are subdivisions of maneuvers that contain groupings of those activities that must be accomplished in performing the maneuver. Table I (adapted from Vreuls and Cotton [1980]) contains possible maneuvers and segments for the navigation scenario.

The present study selected point-to-point navigation using radar terrain mapping to further narrow the scope of the effort and to tailor the performance measurement aspects of the A-6E mission toward the tasks of the B/N.
4. Summary

The mission of the A-6 for the current study has been defined as Air Interdiction, which is further subdivided into point-to-point navigation using radar terrain mapping. Successful accomplishment of the radar navigation point-to-point segments reasonably infers some degree of overall Air Interdiction mission accomplishment, which is the overall objective of the A-6 man-machine system. Operationally dividing the overall A-6 mission into flight maneuvers enables practical performance measurement, as the operator skills required (and measured) vary from segment to segment. Within each segment, measurement is conceivably possible at two levels: (1) measurement of the total man-machine system outputs for comparison to expected mission goals, and (2) measurement of human operator activity in relation to system outputs [Vreuls, 1974].
TABLE I: NAVIGATION MANEUVERS AND SEGMENTS

SCENARIO: Navigation

MANEUVERS                  SEGMENTS
Point-to-Point Flight      Dead Reckoning
                           Contact (visual)
                           Inertial
                           Radar Terrain Mapping
Nap-of-the-Earth           Very Low Map Interpretation
Airways-Radio              VOR
                           TACAN
                           ADF
Off Airways-Radio          Area Navigation
Over Water                 LORAN
                           Celestial
                           Global Positioning System

Source: Vreuls and Cotton [1980]
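The scenario-maneuver-segment decomposition of Table I maps naturally onto a small hierarchical data structure, sketched below in Python. The structure is transcribed from the table itself; the idea of attaching per-segment measures and the sample mission profile are illustrative assumptions.

    # Scenario -> maneuver -> segments, transcribed from Table I.
    navigation_scenario = {
        "Point-to-Point Flight": ["Dead Reckoning", "Contact (visual)",
                                  "Inertial", "Radar Terrain Mapping"],
        "Nap-of-the-Earth": ["Very Low Map Interpretation"],
        "Airways-Radio": ["VOR", "TACAN", "ADF"],
        "Off Airways-Radio": ["Area Navigation"],
        "Over Water": ["LORAN", "Celestial", "Global Positioning System"],
    }

    # A mission profile is then the ordered (maneuver, segment) pairs flown;
    # performance measures would be computed per segment, since the skills
    # required (and measured) vary from segment to segment.
    profile = [("Point-to-Point Flight", "Radar Terrain Mapping")] * 3
    for maneuver, segment in profile:
        assert segment in navigation_scenario[maneuver]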
II. STATEMENT OF PROBLEM

The need for aircrew performance measurement and assessment has long been recognized across all aviation communities. Performance measurement produces information needed for a specific purpose, such as the evaluation of student performance or the identification of aircrews needing training. Unfortunately, the assessment of aircrew proficiency in those skills associated with advanced flying training still depends largely on subjective evaluations by qualified instructor pilots (IPs) and instructor B/Ns (IB/Ns), supplemented with analytically-derived, somewhat objective mission performance metrics, e.g., bombing scores [Obermayer, et al., 1974]. An economically acceptable means of objectively measuring behavioral skills in the operational or crew training environment has continued to be a critical problem in the FRS, due mainly to a "nice to have," nonessential outlook towards any performance measurement scheme other than the traditional "always done this way" method of subjective ratings. A Department of Defense review of tactical jet operational training in 1968 commented: "The key issue underlying effective pilot training is the capability for scoring and assessing performance ... in essence, the effectiveness of training is dependent upon how well performance is measured and interpreted [Office of Secretary of Defense, 1968]."
Despite the key issue of scoring and assessing performance having been identified, current aircrew performance measurement by IPs and IB/Ns during simulated missions in the A-6E WST is entirely subjective in nature, although the 2F114 simulator has a current objective performance measurement capability. More details on current aircrew performance measurement by instructors in the A-6E WST are given in Chapter VI.
A. PROBLEM STATEMENT

Current student performance measurement and assessment in the A-6E WST by an instructor is entirely subjective in nature. The A-6E WST has the capability to objectively measure student performance, but it is not being utilized in this fashion. Measuring performance is the key to training effectiveness, and objective measurement for aviation training programs has been prescribed by higher authority. Effective performance measurement by objective methods is vital to establishing performance criteria, the effective utilization of the simulator, instructor effectiveness, aircrew skill identification and definition, and the Instructional Systems Development (ISD) systems approach to training. The problem that must be addressed is designing a performance measurement and evaluation system for the B/N during radar navigation that will incorporate the characteristics of objective performance measurement and still retain the judgement and experience of the instructor as a valuable measuring tool. This thesis will focus upon performance measurement as a system with definable components that interact and produce the information necessary for the successful identification of the student's skill level in navigating the A-6E aircraft. As a result, the A-6E WST can be utilized more effectively, instructors can become more effective in teaching students critical combat skills, and students can complete FRS training identified as meeting a minimum skill level and as "mission ready" for full-system radar navigation in the A-6E aircraft.
B. THE IMPORTANCE OF OBJECTIVE PERFORMANCE MEASUREMENT

A number of factors have contributed to the emerging role of objective aircrew performance measurement in both actual flight and simulators of military aviation units. Generally, this role has developed through an increased awareness of the advantages associated with objective measurement and of the several basic disadvantages of the subjective evaluation method, as outlined in Chapter I.

The remainder of this section will outline both potential and actual necessities for objective performance measurement of aircrew in the training environment. Beginning with a study of Department of Defense policy toward aircrew performance and evaluation methods, the benefits of objective performance measurement are discussed in regard to: standards establishment; increased simulator, instructor, and training effectiveness; aircrew skill level identification and definition; and lastly, ISD requirements.
1. Policy Guidance

Several studies offer guidance with respect to the issue of using more objective measurement techniques in aircrew training. This guidance supports the development and utilization of objective performance measurement as an adjunct or complement to current subjective ratings. In 1968, the Department of Defense review previously cited found that "subjective evaluation was the technique in general use in training programs observed" and had been since before World War II [Office of Secretary of Defense, 1968]. The study went on to comment that "judgement and experience can be helped by quantitative analytical methods" and that the application of such methods serves three purposes:

(1) They make it necessary to identify the standards of performance desired for each of the many events the pilot must learn.

(2) They determine how many practices or trials a student must accomplish, on the average, to meet the desired standard.

(3) They tell the manager how much improvement he normally may anticipate with each additional practice or trial.
This study concluded: "The services should apply objective evaluation techniques where currently feasible in parts of existing training programs ..." and "where valid performance data in aircrew training programs can be recorded and stored, quantitative analytical methods should be used to assist the commander in making decisions concerning revising and adjusting the course."
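Purposes (2) and (3) of the quoted review amount to fitting a learning curve to a student's early trials and reading off the expected trials-to-criterion and the improvement anticipated from each additional practice. The sketch below shows one minimal way to do this, assuming a power-law-of-practice form E = a * n^(-b) for the error score on trial n; the functional form, sample scores, and criterion are illustrative assumptions, not anything prescribed by the review.

    import math

    # Hypothetical error scores for a student's first six trials.
    trials = [1, 2, 3, 4, 5, 6]
    errors = [100.0, 71.0, 58.0, 50.0, 45.0, 41.0]

    # Fit E = a * n^(-b) by least squares in log-log space:
    # log E = log a - b * log n.
    xs = [math.log(n) for n in trials]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my + b * mx)

    # Expected trials to reach a criterion error level: solve a * n^(-b) = c.
    criterion = 30.0
    n_star = (a / criterion) ** (1.0 / b)
    print(f"fit: E = {a:.1f} * n^(-{b:.2f}); criterion near trial {math.ceil(n_star)}")
    # Anticipated improvement from one additional practice:
    print(f"predicted error on trial 7: {a * 7 ** (-b):.1f}")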
A study by the Comptroller General of the United States (General Accounting Office) in 1973 to the Congress on the use of flight simulators in military pilot training programs stated, "Simulators could also be used to more accurately measure pilot proficiency by using systematic grading procedures." For the Navy, a lack of standardized grading instructions showing performance tolerances was noted. Conclusions reached were: "Objective grading of pilot proficiency using simulators would provide more consistent and accurate results for many phases of flight training and eliminate the possibility of human bias and error associated with the current evaluation method ... simulator grading accurately evaluates pilot proficiency for certain flight maneuvers."
2. Establishment of Standards

The performance criterion, or standard, is a statement or measure of the performance level that the individual or group must achieve for success in a system function or task [Office of Secretary of Defense, 1968]. When performance standards are established on the basis of subjective experience and expertise, the result in most cases will be inadequate. When standards are set too low, some risk is incurred with degraded system effectiveness. When set too high, costly overtraining is the result [Riis, 1966; Office of Secretary of Defense, 1968; Campbell, et al., 1976; Deberg, 1977; Rankin and McDaniel, 1980]. The establishment of a standard, or baseline of performance, is an important result of objective performance measurement. The Department of Defense review in 1968 stated:
"Reliable measures of pilot performance against validated
standards is the keystone for determining how much instruction
and practice is required to attain desired levels of skills
[Office of Secretary of Defense, 1968]."
Even though the
importance of performance standards is recognized, some concern by the A-6 FRS instructors has occurred about establishing operational standards for aircrew performance, due to
possible misuse, incorrect adaptation in the training program,
or insufficient assessment before implementation
et al.
3.
,
1976].
[Campbell,
This issue will be addressed in Chapter VII.
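One objective alternative to subjectively set standards, and the approach behind the Derby proposal discussed in Chapter VII, is to derive the standard empirically from fleet aircrew performance. The sketch below sets the criterion at a percentile of fleet scores for a single measure; the scores, the measure, and the 75th-percentile policy are illustrative assumptions.

    # Hypothetical fleet-aircrew values for one candidate measure
    # (e.g., mean cross-track error in nm) collected during a Derby exercise.
    fleet_scores = [0.42, 0.55, 0.38, 0.61, 0.47, 0.50, 0.66, 0.44, 0.58, 0.52]

    def percentile(values, pct):
        """Percentile by linear interpolation between sorted values."""
        v = sorted(values)
        k = (len(v) - 1) * pct / 100.0
        lo, hi = int(k), min(int(k) + 1, len(v) - 1)
        return v[lo] + (v[hi] - v[lo]) * (k - lo)

    # Assumed policy: a student meets the standard if at or below the
    # fleet 75th-percentile error for this measure.
    standard = percentile(fleet_scores, 75)
    print(f"performance standard (75th percentile of fleet error): {standard:.2f} nm")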
3. Effective Use of Simulators

Objective performance measurement increases the effective use of simulators. When performance measures and criteria are defined, input, and monitored by an automatic system requiring little instructor intervention, other training and teaching functions of the simulator may be used by the instructor; thus an increase in the effective use of simulators occurs [Danneskiold, 1955; Knoop, 1968].
4.
Effectiveness of Instructors
The major impact of an effective measurement method on
the instructor during a simulator mission would be to free him
from monitoring dials. Cathode Ray Tubes
(CRTs), and lights
for aircrew performance measurement and evaluation.
Due to
the complexity of the A-6E WST, the simulator instructor is
humanly unable to monitor and interpret in real-time all pertinent performance information during a training mission.
42
Objective measurement techniques would relieve the instructor
of these monitoring duties, enabling more time for teaching
and evaluating those aspects of human performance that are
inaccessible by objective methods [Smode and Meyer, 1966;
Knoop,
1968; Vreuls,
1977; Charles,
1974; Kemmerling,
et al.,
1978, Semple, et al
.
,
1979].
1975; Carter,
When performance
standards are established by objective methods, instructor
judgements can be made more reliable and valid by confining
the instructor's judgement to evaluating performance without
the additional burden of establishing and adjusting personal
standards
[Office of Secretary of Defense, 1968].
Efficiency
of instructor utilization may be achieved by allowing instruc-
tors more flexibility in identifying and assisting students
who are found to be deficient from objective performance
measurement feedbacJc of criterion levels reached [Carter, 1977;
Deberg, 1977; Kelly, et al., 1979].
Such objective information
might also provide instructors with diagnostic information about
their own performance as teachers after seeing patterns of
strengths and weaknesses in their students [Kelly, et al.,
1979].
5.  Effectiveness of Training
Many factors influence simulator training effectiveness,
including:
simulator design, the training program,
students, instructors, and the attitude of personnel towards
the simulator [Tindle, 1979].  Smode and Meyer [1966], in a
review of Air Force pilot training, concluded: "The development
of objective scores to be used in simulator training would
represent a major step toward improving the effectiveness of
pilot training programs."
Other notable results of objective
measurement can also contribute to increased training effectiveness.
The real-time feedback of performance measurement
to the student is essential to the fundamental concept of
knowledge of results, a prerequisite to any learning process.
Quantitative feedback, in turn, allows the student and instructor to determine the student's individual strengths and weak-
nesses in performing the mission, which may then be concentrated
on by the instructor for remedial training [Smode and Meyer,
1966; Obermayer, et al., 1974; Carter, 1977; Deberg, 1977;
Pierce, et al., 1979; Kelly, et al., 1979].
Modifications in
training methods, course content, and sequence of course
material could be more accurately assessed by the FRS training
officer [Vreuls and Obermayer, 1971; Pierce, et al., 1979;
Kelly, et al., 1979].
Student progress within a training
program can be more accurately monitored, culminating with
the introduction to the fleet of an "operationally capable"
or "mission ready" aircrew member at minimum cost
[Riis,
1966;
Campbell, et al., 1976; Pierce, et al., 1979].
6.  Skill Identification and Definition
The employment of objective measures in simulator
training will enable the identification and definition of
critical combat skills of mission ready aircrews.
The precise
definitions of "current" and "proficient" and the quantitative
measurement of these concepts continue to be a major problem
in both training and fleet environments today [McMinn, 1981].
Objective performance measurement requires the definition of
"proficient" as a prerequisite to quantification of perfor-
mance [Pierce, et al., 1979].
7.  Instructional Systems Development
Instructional Systems Development (ISD) is currently being
applied to military flight training systems.
The approach
requires extensive analysis of the specific training to be
accomplished, the behavioral objective for each task to be
trained, and the level of proficiency required [Vreuls and
Obermayer, 1971].
In support of ISD, measures and a measure-
ment system are necessary to:
(1)  perform analyses of systems in their operational
     environments,
(2)  establish quantitative instructional standards,
(3)  provide an index of achievement for each behavioral
     objective, and
(4)  evaluate alternative instructional content, approaches,
     and training devices [Vreuls and Obermayer, 1971;
     Obermayer, et al., 1974; Deberg, 1977; Prophet, 1978;
     Kelly, et al., 1979].
When a state-of-the-art flight simulator is available
to an ISD team,
it should be the basic medium around which
the course is organized [Prophet, 1978].
Campbell, et al. [1976]
applied the ISD process to the design of an A-6E air-
crew training program and used the A-6E WST for a large part
of student training.
The study concluded that "difficulty
was experienced in applying the ISD process to the development
of Specific Behavioral Objectives (SBOs) and criteria test
statements, due to the lack of documented quantitative standards
of performance."
In a review of U.S. Navy fleet aviation train-
ing program development, Prophet [1978] examined the A-6E ISD
application as well as three other major ISD efforts for various
aircraft.  That study concluded that one "...
major shortcoming was in the area of performance measurement
and evaluation," and recommended measurement as a possible
future area for improvement to the ISD model.
The need for incorporating objective performance
measurement methods has been addressed.
The methodology for
the introduction of objective performance measurement into
the A-6E WST for B/N performance will now be discussed.
III.  METHODOLOGY
A.  PROCEDURE
The methodology used in formulating a model to measure
B/N performance during radar navigation in the A-6E WST was
based on an extensive literature review and an analytical task
analysis of the B/N's duties.  Figure 1 illustrates the ap-
proach taken in this report.  After selection of the mission,
scenario, and segment of interest, the review concentrated on
aircrew performance measurement research, which emphasized
navigation, training, and skill acquisition.
A model was then
formulated to show the relationship among student skill acquisition, performance evaluation, and the radar navigation task.
This hybrid model, discussed in Chapter V, was improvised by
the author specifically to illustrate difficult concepts of
aircrew performance measurement and evaluation.
The literature
review identified different approaches taken in using perfor-
mance measurement from a systems point of view, some of which
were integrated and applied to the current situation.
An in-depth task analysis of the B/N was performed with
the purpose of generating candidate performance measures for
operator behavior.
Skills and knowledge required to perform
the radar navigation maneuver were identified and a mission
time line analysis was conducted to identify tasks critical
to performance.
A model was formulated of the A-6E crew-system
[Figure 1 is a flow diagram; its blocks read:  Select A-6E
Mission, Scenario and Segment; Review Literature on Aircrew
Perf. Methods/Measures; Review Generic Aircrew PMS Concepts;
Review Current Grading of B/N and WST PMS; Perform B/N Task
Analysis; Formulate Model of Measurement and Skill Acq.;
Identify Skill/Knowledge Required for each Task; Identify
Candidate Measures for Navigator Training; Perform MTLA to
Identify Critical Tasks/Measures; Formulate Model of
Crew-System Network; Identify Candidate Measures for Radar
Navigation; Select Candidate Measures for Skill Acquisition;
Formulate PMS Model for B/N.]

Figure 1.  Methodology Flow Diagram.
interactions to illustrate the complexity involved in measuring B/N performance.
After defining the purpose of measuring
B/N performance, candidate performance measures were identified
for possible use in measuring B/N performance.
Candidate performance measures from the literature and the
task analysis were compared and measures were selected that
met the criteria of face validity, ease of use, instructor and
student acceptance, and appropriateness to the training environment.
These candidate measures were then compared to cur-
rent B/N student performance measurement and generic performance
measurement systems.
The result was a performance measurement
system for the B/N during radar navigation in the A-6E WST.
Evaluation models were then investigated; a sequential sampling
decision model was selected for B/N performance evaluation.
B.  ASSUMPTIONS
This section will present some underlying assumptions that
are necessary for performance model development and implementation, beginning with a discussion on the necessity of the
A-6E WST to realistically duplicate the A-6E CAINS aircraft
in both engineering and mission aspects.
The unique role of
the pilot during the radar navigation mission in the A-6E WST
is discussed with respect to his contribution to measuring the
B/N's performance.
A discussion of the literature review in
respect to the relationship between results from pilot studies
and navigator performance is presented, followed by a discus-
sion of the need for the existence of a mathematical
relationship between measurement and operator behavior.
Finally, an assumption is stated concerning the relationship
between motivated and experienced aircrew and high skill
levels.
These discussions follow.
1.  Simulator Fidelity
It is assumed that the A-6E WST represents to a satis-
factory degree those elements of the A-6E CAINS aircraft such
that the A-6E WST aircrew is confronted with a realistic
"duplication" of the operational situation and that the air-
crew should be required to perform as they would in actual
aircraft flight.
Given this assumption, training and per-
formance evaluation can be effectively achieved in the simulator for most B/N activities.
2.  Pilot and B/N Relationship
The A-6E effectiveness in terms of crew-system output
is a function of pilot and B/N crew coordination.
Because of
the major role of the B/N's activities in achieving the desired
mission success during A-6E radar navigation, it is assumed
that any variability within the A-6E system that can be meas-
ured and attributed to the pilot will be small.
In effect,
the pilot's function within this mission, scenario and segment
will be to "fly system steering," which, for the most part, is
the result of the B/N's performance as a navigator and systems
operator.
3.  Literature Review Results
Most of the literature in the area of aircrew per-
formance measurement has concentrated on the pilot.
For
similar missions, scenarios and aircrew tasks, it is assumed
that what was a significant result in terms of performance
measurement for a pilot will be much the same result as that
for a navigator.
This assumption does not include the psycho-
motor domain of human operator performance entirely, but does
draw some parallels from pilot research results to the naviga-
tor.  Although each position within the aircraft is somewhat
different, many similarities are assumed to exist in terms of
operator output, man-machine system output, and measures of
effectiveness.
4.  Mathematical Description of Behavior
It is assumed that a mathematical relationship exists
between some aspects of operator behavior and performance
measurement and evaluation.
Most likely, for the multi-
dimensional aspects of behavior, a multi-descriptive mathematical result would best describe that behavior in valid and
reliable terms.
Objective performance measurement relies for
the most part on numerical and statistical analysis of operator and system outputs.
Thus, this assumption is necessary
for the utilization of objective performance measurement to
measure and evaluate the B/N's control movements.
5.  Experience and Skilled Behavior
When properly motivated and presented with a realistic
simulated flight mission with the representative flight tasks,
highly experienced ("fleet qualified") aircrews are assumed
to exhibit skilled behavior of an advanced stage or high level
that is characterized by minimum effort and consistent re-
sponses ordinarily found in actual aircraft flight for the
same mission.
The demonstrated performance of highly skilled
aircrew, under this assumption, allows for the establishment
of performance standards from which comparisons can be made
to populations of aircrew that are less than highly skilled.
Both the problem of motivated behavior and establishment of
performance standards will be discussed in Chapter VII.
IV.  THE FUNDAMENTAL NATURE OF PERFORMANCE EVALUATION
This section is not intended to be a definitive expo-
sition on performance measurement and evaluation theory.
However, certain basic concepts of performance measurement
and evaluation need to be defined and explained so that a
common understanding of subsequent chapters will occur with
minimum confusion.
This material is approached with a log-
ical time-dependency sequence, beginning with measurement
theory, and ending with some desirable characteristics of a
total performance measurement and assessment system in the
training environment.
Four major areas of performance evaluation will be dis-
cussed in this section:
measurement considerations, criteria
considerations, performance measurement considerations, and
performance evaluation considerations.
The main purpose is
to show that measurement and criteria are needed before the
evaluation process begins.
Measurement considerations include
the definition and purpose of measurement, types of measures,
levels of measurement, transformations, measurement accuracy,
reliability
and validity of measurement, and the selection
of initial measures for man-machine performance.
The area of
criteria considerations addresses the definition and purpose
of criteria,
types of criteria, characteristics of criteria,
establishing criteria, sources of criterion error, measures
of effectiveness, and selection of criteria.
Performance
measurement considerations include other aspects of performance measurement such as:
subjective versus objective
measures, combining measures, overall versus diagnostic
measures, individual versus crew performance, and training
measures.
The last area of this section, performance evalu-
ation considerations, shows how evaluation depends upon meas-
urement and criteria, and discusses the definition and purpose
of performance evaluation, types of evaluation, accuracy of
evaluation, evaluating individual and group differences, and
the characteristics of evaluation.
The reader already familiar with the above material may
wish to skip ahead to the next section.
Others not familiar
will need the theory to aid understanding of subsequent
chapters.
A.  MEASUREMENT CONSIDERATIONS
1.  Definition and Purpose of Measurement
Measurement is information about performance for a
specific purpose, such as whether or not a student is "mission
ready" to navigate a particular aircraft
1980]
.
[Vreuls and Cotton,
Unfortunately, this definition leaves open the serious
question of quantification; just what and how do you measure
and then transform the raw data into useful information?
In
elemental measurement theory, measurement involves the assignment of a class of numerals to a class of objects, where the
class of objects becomes human behavior, the class of numerals
must be defined, and some sort of rules for assigning the
numerals to the objects must exist [Lorge, 1951; Forrest, 1970].
Measurement then becomes an abstract concept of "mapping"
a
class or set of numerals to a class or set of human behaviors
or performance, but this concept then becomes more quantifi-
able in nature.
All measurements are estimates of the true
value or actual amount of the human behavior possessed at a
given point in time [Smode, et al., 1962].
The difficulty in
the measurement of human behavior increases when the important
aspects of the behavior being measured are more qualitative
than quantitative in nature [Glaser and Klaus, 1966].
Meas-
urement requires the action of observation, where behavior is
noticed or perceived and recorded.
Glaser and Klaus [1966]
and Lorge [1951] noted that some observations can be made
directly, involving perceptions of the behavior or of the
behavior's properties, where other observations can only be
estimated from inferences about the behavior, or its properties from its effects on other system components.
The steps to measurement, as outlined by Forrest [1970],
include:
(1)  Determine the specific object or aspect to be measured.
(2)  Locate or expose the particular object or aspect to view.
(3)  Apply a measurement scale.
(4)  Express the results as a dimension.
Measurement must precede the activity of performance
evaluation, which is the process of interpreting the results
of measurement and comparing them to an established standard.
Measurement in the training context serves
a
variety of func-
tions which emphasize either achievement (present knowledge
or skill level) or prediction (expected performance under
specified conditions).
Several specific purposes of training
performance measurement as outlined by Smode, et al. [1962],
Buckhout and Cotterman [1963], Glaser and Klaus [1966],
Vreuls and Obermayer [1971], Farrell [1974], and Vreuls,
et al. [1975] are enumerated below:
(1)  Prediction of future performance of a student for a
     specified future operational setting.
(2)  Present performance evaluation of knowledge, skill
     level, or performance level of a student.
(3)  Learning rate evaluation at several points in a
     training program to provide a basis for judging a
     student's present stage of learning for subsequent
     advancement to the next training phase.
(4)  Diagnostic identification of strengths and weaknesses
     of a student so that additional training may occur.
(5)  Training effectiveness resulting from the nature
     and extent of training syllabus or course material
     changes.
(6)  Criterion information necessary for defining what
     constitutes successful or proficient performance.
(7)  Functional requirements for future training equipment.
(8)  Selection and placement of the student with an
     achieved level of proficiency to a position or
     mission with a required minimum proficiency level.
2.  Types of Measures
Measurement is a process of producing raw data in the
form of measures
(parameters or variables) as a result of
observing human behavior in a man-machine system.
Measures
are quantities which can take on any of the numbers of some
set and which usually vary along a defined continuum, or scale
[Knoop, 1968].
The classification of measures is commonly
done by using the characteristics of the measures themselves.
Using taxonomies developed by Smode, et al. [1962], Angell,
et al. [1964], and Vreuls and Obermayer [1971], measures may
be grouped into several major classes with a collection of
like measures within each class.
The major classes are listed
and briefly defined below:
(1)  Time periods in output or performance.
(2)  Accuracy or correctness of output or performance.
(3)  Frequency of occurrence or the rate of repetition
     of behavior.
(4)  Amount achieved or accomplished in output or
     performance.
(5)  Quantity used or resources expended in performance
     in terms of standard references.
(6)  Behavior categorization by observers (subjective
     measurement).
(7)  Condition or state of the individual in relation
     to the task which describes the behaviors and
     results of that behavior on system output.
This classification produced approximately 83 measures within
the seven major classes, which are listed completely in Smode,
et al. [1962].
A more recent taxonomy by Mixon and Moroney
[1981] grouped objective-only aircrew performance measures
by the following six major classes:
(1)  Physiological outputs from the operator.
(2)  Aircraft systems, instruments or equipment.
(3)  Man-machine system output within the operating
     environment.
(4)  Time periods in output or performance.
(5)  Frequency of occurrence or the rate of repetition
     of behavior.
(6)  Combined overall measures.
These measures were obtained from an extensive literature
review of aircrew performance measurement spanning the years
1962-1980.  Table II lists over 180 measures within each major
class.
It is interesting to note that all of these measures
were obtained from actual aircrew performance field or laboratory results, in contrast to previous listings of proposed
or candidate measures.
Some measures listed in Table II will
be used as candidate measures for B/N performance during radar
navigation in the WST (see Table XI).
a.  Distributions of Measures
The process of measuring continuous and discrete
human behavior results in a sample of measures that are esti-
mators of the actual operator behaviors in the system being
examined.
The usefulness of the measures becomes apparent
when they are used to describe the behavior as "good" or "bad"
when compared to a reference or standard measure (criterion).
Statistical techniques are used to transform raw measures into
TABLE II:
A TAXONOMY OF MEASURES
PHYSIOLOGICAL OUTPUTS
Biochemical analysis (blood)
Cardiovascular
Communications/Speech
Electroencephalogram (EEG)
Electromyogram (EMG)
Eye movements
Finger tremor
Galvanic Skin Response (GSR)
Metabolic rate
Perspiration weight loss
Pupillography
Respiration
Temperature
Time of sleep
Urinary catecholomines
Visual Evoked Potential (VEP)
AIRCRAFT SYSTEMS
ADI displacement
Aileron
Aircraft gross weight
Angle of attack
Approach glideslope display error
Approach localizer display error
Automatic Direction Finder (ADF)
Autopilot vertical tracking error
Ball angle
CDI error
Collective
Control stick
Cyclic
DME
Elevator
Engine Pressure Ratio (EPR)
Engine RPM
Flaps
Flight director error
Fuel flow
Heading
Source:
Mixon and Moroney [1981]
TABLE II
(Continued)
AIRCRAFT SYSTEMS
(Cont'd)
Landing gear
Pedal (helicopter)
Radar altimeter error
Rotor RPM
Rudder
Speed brake
Tail rotor position
Throttle
Thrust attenuator
Thumbwheel inputs
Trim
MAN-MACHINE SYSTEM OUTPUT
Acceleration
ACM plane of action (X, Y, Z)
Aircraft/boom oscillations
Airspeed
Altitude
Altitude (pitchout)
Altitude (radar)
Approach angle error
Approach center line error
Approach glideslope error
Approach range
Checklist errors
Circular bomb error
Closing speed
Course error
Crosstrack error
Deviations from ideal flight path
Dip to target error (ASW)
Distance traveled
Dive angle at bomb release
Drift
Emergency detections
Energy Management Index (EMI)
Ground speed
Ground track
Landing aim point
Landing attitude
Ldg. dist. to ideal touchdown point
TABLE II
(Continued)
MAN-MACHINE SYSTEM OUTPUT
(Cont'd)
Ldg. dist. to runway threshold
Ldg. height at runway threshold
Landing result (flare, bolter, ...)
Lateral acceleration
Lateral velocity
Mach number
Maneuvering rate
Miss distance (ASW)
Navigational accuracy (LAT/LONG)
Pitch
Pitch/roll coordination
Pointing angle advantage
Position estimation
Positional error (formations)
Power
Prob. of finding turn point
Prob. of target acquisition
Prob. of target detection
Procedural errors
Range at target detection
Range at target identification
Range at target recognition
Rate of information processing
Ratio: carrier accidents
Ratio: carrier bolters
Ratio: carrier bolter rate
Reaction to an event
Roll
Sideslip
Takeoff position error
Torque
Tracking error
Turn errors
Turn rate
Vertical acceleration
Vertical velocity
Yaw
TIME
Combined total seconds of error
Defensive time
TABLE II (Continued)
TIME (Cont'd)
Lead time
Offensive time
Offensive time with advantage
Opponent out of view time
Ratio: offensive/defensive time
Reaction time to an event
Time
Time estimation
Time of event
Time of task execution
Time on criterion
Time on target
Time to acquire target
Time to criterion
Time to detect target
Time to envelope
Time to first kill
Time to identify target
Time to recover from unusual attack
Time to turn
Time within criterion
Time within envelope
Time within flight path
Time within gun range
Time within missile range
FREQUENCY
No. of aircraft ground impacts
No. of collisions (formations)
No. of control losses
No. of control reversals
No. of correct decisions
No. of correct responses
No. of correct target acquisitions
No. of correct target classifications
No. of correct target detections
No. of correct target identifications
No. of correct trials
No. of course corrections
No. of crossovers
No. of errors per trial
TABLE II
(Continued)
FREQUENCY (Cont'd)
No. of errors to criterion
No. of false target detections
No. of false target identifications
No. of gun hits/kills
No. of incorrect control inputs
No. of incorrect decisions
No. of lost target contacts (ASW)
No. of missile hits/kills
No. of overshoots
No. of refueling disconnects
No. of qualifying (criterion) bombs
No. of scorable bombs
No. of successful unusual attack rec.
No. of taps (secondary task)
No. of target detections (no fires)
No. of target hits
No. of target kills
No. of target misses
No. of times inside criterion
No. of times off target
No. of times outside criterion
No. of turn points found
No. of turns to assigned heading
COMBINED OVERALL MEASURES
Good Stick Index (GSI)
Landing Performance Score (LPS)
Objective Mission Success Score (OMSS)
Trials to criterion
useful forms of information for the comparison or evaluation
process.  Since all measures are, in effect, random variables
in a statistical sense,
it is important to determine the family,
or distribution population, that characterizes the particular
measure being examined before performing any statistical operations.
Two such example distributions would be the Exponential (time
measures) and the Normal or Gaussian (accuracy measures).
Each distribution has preferred statistical summary estimators
that use the generated measures
(data)
in order to best esti-
mate the actual or true operator performance dimension at hand.
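As a purely illustrative aside, the following minimal Python
sketch (not part of the original model; all sample values are
hypothetical) contrasts the summary estimators appropriate to an
Exponential time measure and a Normal accuracy measure.

    import statistics

    # Hypothetical samples of two B/N measures (illustrative only):
    # time (sec) to detect a turn point, and crosstrack error (nm).
    detect_times = [4.2, 7.9, 3.1, 12.6, 5.4, 8.8]   # roughly Exponential
    track_errors = [0.12, -0.05, 0.20, 0.02, -0.08]  # roughly Normal

    # For an Exponential time measure, the sample mean estimates the
    # single rate parameter (lambda = 1 / mean); the standard
    # deviation adds no independent information.
    mean_time = statistics.mean(detect_times)
    rate_estimate = 1.0 / mean_time

    # For a Normal accuracy measure, mean and standard deviation
    # together summarize the distribution.
    mean_error = statistics.mean(track_errors)
    sd_error = statistics.stdev(track_errors)

    print(f"detection rate estimate: {rate_estimate:.3f} per sec")
    print(f"crosstrack error: mean {mean_error:+.3f} nm, sd {sd_error:.3f} nm")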
b.  Error Measures
Accuracy, or error measures, are of special interest
in aircrew performance measurement due to the obvious relation-
ship between operator error and aircraft accidents.
These
unique measures are usually expressed as a measurable deviation
of a variable from an established or arbitrary reference point,
and have been of great interest to the aviation accident in-
vestigation community [Hitchcock, 1966; Ricketson, 1974].
Chapanis [1951] dichotomized errors as basically constant or
variable; constant errors indicate the difference between a
statistical estimator of a quantity and the true, or expected,
value, while variable errors are described by a statistical
estimator of measure spread or dispersion.
That study con-
cluded that variable errors indicated the actual instability
of the man-machine relationship and thus were more of a con-
tributing factor to accidents.
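A minimal sketch of this dichotomy follows, assuming a
hypothetical series of altitude samples flown against a 500-foot
reference; the figures are illustrative only.

    import statistics

    # Hypothetical altitude samples (feet) against a 500-ft reference.
    reference_ft = 500.0
    altitudes_ft = [512.0, 495.0, 530.0, 488.0, 507.0, 521.0, 498.0]

    errors = [a - reference_ft for a in altitudes_ft]

    # Constant error: difference between the estimator (sample mean)
    # and the expected value - a systematic bias in one direction.
    constant_error = statistics.mean(errors)

    # Variable error: dispersion of the measure about its own mean -
    # the instability Chapanis associated with accidents.
    variable_error = statistics.stdev(errors)

    print(f"constant error: {constant_error:+.1f} ft")
    print(f"variable error: {variable_error:.1f} ft")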
c.  Aircrew-Aircraft Measurement
A descriptive structure for flight crew performance
measurement relating system performance and human behavior to
segments of maneuvers which constitute a flight mission was
recently developed by Vreuls and Cotton [1980], derived from
earlier work by Benenatti, et al. [1962], Knoop [1968], and
Vreuls, et al. [1973].  The structure states that a measure
must have meaning only when specified fully by the following
five determinants:
(1)  Measure segment.
(2)  System state variable(s) and their scaling.
(3)  Sampling rates for continuous variables.
(4)  Desired values.
(5)  Transformations.
Measure segments are any portions of flight for
which the desired behavior of the system follows a lawful
relationship from an unambiguously defined beginning to end.
Segments are related closely to specific behavioral objectives
(from the ISD training approach), are possibly but not required
to be the same as a maneuver segment, and are defined explicitly
by start/stop logic.  System state variables, as previously
discussed (measures), are scaled so that their entire dynamic
range is represented without information loss.
Scaling will
be discussed along with desired values and transformations
later in this section.
Sampling rates are the temporal fre-
quency at which a measure is recorded or observed by the
measurement system.
One sampling rate guideline is to record
data faster than the natural frequency response for the specific axis in which the measurement is being made, although
others are proposed [Vreuls and Cotton, 1980].
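To make the five-determinant structure concrete, the sketch
below records one candidate measure as a small Python data
structure.  Vreuls and Cotton [1980] prescribe the determinants
themselves, not this encoding; the field names and values are
hypothetical.

    from dataclasses import dataclass

    @dataclass
    class MeasureSpec:
        """One measure specified by the five determinants (illustrative)."""
        segment: str             # start/stop logic for the measure segment
        state_variable: str      # system state variable and its scaling
        sampling_rate_hz: float  # recording rate for a continuous variable
        desired_value: float     # the referent sought by the operator
        transformation: str      # raw-data treatment, e.g. RMS error

    crosstrack = MeasureSpec(
        segment="turn point A to turn point B, wings level",
        state_variable="crosstrack error, nm, scaled -5.0 to +5.0",
        sampling_rate_hz=2.0,
        desired_value=0.0,
        transformation="root-mean-squared error over the segment",
    )
    print(crosstrack)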
3.  Levels of Measurement
In examining the nature of performance measurement,
four levels can be distinguished.
The level of scale deter-
mines the amount of information resulting from the measurement
and the mathematical and statistical operations that are per-
missible within that level.
Table III is adapted from Lorge [1951], Siegel [1956], and
Smode, et al. [1962], and lists each level and the applicable
characteristics associated with each.  A brief description of
each level as analyzed by Smode, et al. [1962] is discussed
below:
a.  Nominal measurement scales have units being
measured placed into classes or categories.
Units placed in
the same class are considered equivalent along some dimension
or in some respect.
b.  Ordinal measurement scales have units assigned a
rank order.  Nominal categories are now ranked with respect
to each other in terms of an "amount possessed" of the quantity
being measured, with judgements assessing the amount possessed
by the units involved.  Rankings can be composed of unidimen-
sional or multidimensional variables.  In the latter case, a
composite judgement or ordering is performed which essentially
places a multidimensional variation on a unidimensional linear
scale.
TABLE III:  LEVELS OF MEASUREMENT

NOMINAL    Defining relations:  equivalence.
           Permissible statistics:  number of cases, mode,
           contingency correlation.

ORDINAL    Defining relations:  equivalence; greater than or
           less than.
           Permissible statistics:  median, percentiles,
           rank-order correlation.

INTERVAL   Defining relations:  equivalence; order; equality
           of intervals (arbitrary zero point).
           Permissible statistics:  mean, standard deviation,
           product-moment correlation, t and F tests.

RATIO      Defining relations:  equivalence; order; equality
           of intervals; absolute zero point.
           Permissible statistics:  all of the above, plus the
           geometric mean and coefficient of variation.

Source:  Lorge [1951], Siegel [1956], and Smode, et al. [1962]

c.  Interval measurement scales have units being
measured in equidistant terms.
In addition to an indication
of not only rank order or direction, there is also an indica-
tion of the amount or size of difference between units on the
scale.  Since the zero point is usually arbitrary, it does
not represent complete absence of the property being measured.
d.  Ratio measurement scale is an extension of an
interval scale with a natural, absolute zero point, and represents the highest measurement level.
It is with this level
that the most powerful statistical tests of significance are
available when evaluating performance measures.
The determination of a parameter or measure of per-
formance should include the units of scaling, i.e., 0 to 640
knots (air speed), and 0 to 64,000 feet (pressure altitude).
Without a clear definition of the scaling units, the improper
use of statistical operations or tests may occur, causing the
measurement of performance to provide irrelevant or erroneous
results.
4.  Measure Transformations
Measures are observed, recorded, and usually subjected
to transformation, which is any mathematical, logical, or
statistical treatment designed to convert raw measures into
usable information [Vreuls and Cotton, 1980].  When measures
are discrete, the transformation may be the actual value,
absolute value, or a tolerance band.
When measures are con-
tinuous, transformations may include means, variances,
frequency content, departures from norms, or several others
as listed in Table IV.
The relationship between the distribution of a measure
or estimate and transformations is well known to statisticians
but sometimes not very clear to others.  For a given popula-
tion distribution, there exist unbiased and consistent esti-
mators (transformations) that will best describe the true
value or quantity that is being estimated.  Indeed, some
transformations are not applicable for a given population,
and in a sense are useless.  The interested reader is referred
to Mood, et al. [1974] for a detailed analysis of applicable
transformations for a known population distribution.
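By way of illustration, the sketch below applies a few of the
common transformations of Table IV to a hypothetical sampled
altitude-error trace; the sample values and tolerance band are
assumptions, not recommendations.

    import math

    # Hypothetical altitude-error samples (feet) at a fixed rate.
    errors_ft = [15.0, -8.0, 42.0, 120.0, -65.0, 30.0, -12.0, 88.0]
    tolerance_ft = 100.0  # illustrative tolerance band of +/- 100 ft

    n = len(errors_ft)
    mean_error = sum(errors_ft) / n                           # amplitude mean
    rms_error = math.sqrt(sum(e * e for e in errors_ft) / n)  # RMS error
    abs_avg_error = sum(abs(e) for e in errors_ft) / n        # absolute average
    pct_in_tol = 100.0 * sum(1 for e in errors_ft
                             if abs(e) <= tolerance_ft) / n   # time in tolerance

    print(f"mean {mean_error:+.1f} ft, RMS {rms_error:.1f} ft, "
          f"abs avg {abs_avg_error:.1f} ft, {pct_in_tol:.0f}% in tolerance")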
5.  Accuracy of Measurement
Measurement produces measures which are sample estimators of the true value or actual amount of the quantity
possessed.
Accuracy of measurement refers to how close an
estimator is to the true value.
All measurement systems are
subject to accuracy problems as discussed below.
a.  Measuring aircrew behavior is confounded by the
systematic influence of the total operating system, since the
measures obtained are frequently determined to some extent by
the performance of other components in the system [Glaser and
Klaus, 1966].
b.  Any statistic, or known function of observable
random variables that is itself a random variable, whose values
are used to estimate a true quantity, is an estimator of the
TABLE IV:
COMMON MEASURE TRANSFORMATIONS
TIME HISTORY MEASURES
Time on target
Time out of tolerance
Percent time in tolerance
Maximum value out of tolerance
Response time, rise time, overshoot
Frequency domain approximations
Count of tolerance band crossing
Zero or average value crossings
Derivative sign reversals
Damping ratio
AMPLITUDE-DISTRIBUTION MEASURES
Mean, median, mode
Standard deviation, variance, range
Minimum/maximum value
Root-mean-squared error
Absolute average error
FREQUENCY DOMAIN MEASURES
Autocorrelation function
Power spectral density function
Bandwidth
Peak power
Low/high frequency power
Bode plots, Fourier coefficients
Amplitude ratio
Phase shift
Transfer function model parameters
Quasi-linear describing function
Cross-over model
BINARY, YES/NO MEASURES
Switch activation sequences
Segmentation sequences
Procedural/decisional sequences
Source:
Vreuls and Cotton [1980]
the true quantity and may be subject to biased or inconsistent
properties.
For all distributions of measures that have been
identified during measurement, there exists at least one un-
biased and consistent estimator that will be closer to the
true value of the quantity being measured than any other
estimator.
Using the incorrect estimator for a known popula-
tion will, in effect, result in avoidable measurement inac-
curacies [Krendel and Bloom, 1963; Mood, et al., 1974].
There is no simple way to assure measurement accuracy,
but several techniques to improve the accuracy of measurement
may be incorporated, as discussed below.
c.  Scope of Measurements
Accuracy will increase as a result of increasing
the inclusiveness or completeness of the measures to include
all relevant behavior and system components.
Skilled perfor-
mance in an aircraft normally involves complex behaviors that
require a wide scope of measurement to accurately estimate
the existing true performance level [Smode, et al., 1962].
d.  Number of Measurements
For most situations, the greater the number of
observations involved in deriving an estimator, the closer
the estimator will be to the true value [Mood, et al., 1974].
Increasing the number of observations when there is a large
variability in the individual measurements will reduce the
variability and minimize the effects of random or chance
variations which may occur from measurement to measurement
[Smode, et al., 1962].  Large sample sizes also are desirable
when establishing standards or references (criteria) when
evaluating performance [Krendel and Bloom, 1963].
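The point can be illustrated numerically with the standard
result that the standard error of a sample mean falls as the
square root of the number of observations; the 50-foot
dispersion below is a hypothetical value.

    import math

    sigma_ft = 50.0  # assumed standard deviation of a single sample
    for n in (4, 16, 64, 256):
        standard_error = sigma_ft / math.sqrt(n)
        print(f"n = {n:3d}: standard error of the mean = {standard_error:5.1f} ft")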
e.  Controlled Conditions of Measurement
By insuring the desired measurement conditions
are controlled, accurate measurement may be improved.
This
may be done by defining those factors which are to be present
and varied systematically, maintaining uniformity of condi-
tions during measurement in order to reduce bias and unwanted
variability, and insuring the intended measurements are taken
correctly [Smode, et al., 1962].
6.  Reliability of Measurement
Reliability refers to the accuracy of measurement or
the consistency or stability of the recorded and statistical
data upon repetition [Glaser and Klaus, 1966; Knoop and Welde,
1973; Grodsky,
1967; Thorndike,
1951].
When the dispersion,
or spread, of measures obtained from one individual on a par-
ticular task is large, the measures lack reliability, and
any statistics that are formed from the measures that are used
in evaluation will be incapable of differentiating consistently
among individuals who are at different skill levels.
If the
measures are precise, resulting in a statistic that is stable,
an individual would receive exactly the same evaluation score
each time his performance was measured [Glaser and Klaus, 1966;
Thorndike, 1951].
a.  Computation of Reliability
Thorndike [1951] cautioned: "There is no single,
universal, and absolute reliability coefficient for a test.
Determination of reliability is as much a logical as a
statistical problem."
Several methods of approximating
reliability have since been proposed or utilized in aircrew
measurement.
Smode, et al.
[1962]
classified reliability
expressions as either absolute or relative and suggested
using the standard error of measurement (absolute measure of
precision), coefficient of internal consistency (uses single
set of observations), coefficient of stability (measures agree-
ment over time), and the coefficient of equivalence (agreement
between two like measures) as statistical computations of
reliability.
Glaser and Klaus [1966] dichotomized reliability
assessment into two methods:
test-retest and alternate form,
and recommended computing reliability by using the statistical
correlation coefficient.
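A minimal sketch of the test-retest computation (the coefficient
of stability) using the Pearson correlation coefficient follows;
the paired scores are hypothetical, and statistics.correlation
requires Python 3.10 or later.

    import statistics

    # Hypothetical scores for six students measured on two occasions.
    first_trial  = [72.0, 85.0, 64.0, 90.0, 78.0, 69.0]
    second_trial = [75.0, 82.0, 61.0, 93.0, 80.0, 65.0]

    # The correlation between repetitions approximates reliability;
    # values near 1.0 indicate a stable, consistent measure.
    reliability = statistics.correlation(first_trial, second_trial)
    print(f"test-retest reliability estimate: {reliability:.2f}")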
Some recent statistical techniques - Winsorization and
trimming - may provide a better reliability
approximation than was previously possible.
Winsorization
and trimming involve removing the effects of a large varia-
bility in a measure sample, with virtually no loss in infor-
mation [Dixon and Massey, 1969].  These techniques would appear
to be quite useful for reliability calculations,
although their
actual use in aircrew performance measurement has not yet been
demonstrated.
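A minimal sketch of symmetric trimming and Winsorizing follows,
assuming one observation is removed or clamped from each tail of
a hypothetical sample.

    import statistics

    # Hypothetical measure sample with one wild value at each extreme.
    sample = sorted([3.1, 4.7, 4.9, 5.2, 5.5, 5.8, 6.0, 11.4])
    k = 1  # observations to trim or Winsorize from each tail

    # Trimming discards the k most extreme values on each side.
    trimmed = sample[k:-k]

    # Winsorizing replaces them with the nearest retained values.
    winsorized = [trimmed[0]] * k + trimmed + [trimmed[-1]] * k

    print("raw mean:       ", statistics.mean(sample))
    print("trimmed mean:   ", statistics.mean(trimmed))
    print("winsorized mean:", statistics.mean(winsorized))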
b.  Sources of Unreliability
The sources of measurement accuracy problems, as
previously discussed, are also sources of unreliability.
Other sources inherent in the measurement of human behavior,
which hinder reliable measurement of aircrew performance,
are from Glaser and Klaus
[1966]
and Thorndike [1951], and
include the following:
(1)  The environment in which performance is
being measured influences measurement variability.
Differ-
ences in weather, amount of turbulence, wind direction and
velocity, unexpected noise, equipment malfunction, and extreme
temperatures are factors contributing to unreliability.
(2)  Equipment used for measurement or personnel
participating in performance assessment are sources of unreliability.
(3)  The complexity of the behavior being evalu-
ated influences unreliability.
Since the behavior being
measured and evaluated involves many dimensions of performance,
and any individual's skill level may fluctuate considerably
from one dimension to the next, each component dimension is
susceptible to all sources of unreliability described above.
(4)  The performance of the individual being
assessed fluctuates as a function of temporary variations in
the state of the organism.
Some factors frequently involved
that decrease measurement reliability are:  individual
motivation, emotional state, fatigue, stress, test-taking
ability, and circadian rhythm.
c.  Improving Reliability of Measurement
In any training situation,
some degree of reli-
ability in the measure of the ability being trained is necessary if any evidence of improvement is required.
By reducing
chance factors or variability in measurement, reliability can
be improved.
The techniques for improving the accuracy of
measurement, as previously discussed, also contribute towards
improving reliability.
Knoop and Welde [1973] suggested other
factors to improve reliability that are listed below:
(1)  Calibration of performance measurement equipment
     should be conducted on a continuing basis.
(2)  Software processes involved in data collection,
     reduction, conversion, analysis, and plotting
     should be validated and monitored to avoid data
     loss.
(3)  Accurate records of flight conditions and mission
     requirements should be maintained to facilitate
     measurement interpretation.
7.  Validity of Measurement
Validity is the degree to which the measurement or
evaluation process correctly measures the variable or property
intended to be measured [Knoop and Welde, 1973].  In regards
to the evaluation process, validity has two aspects:  (1)
relevance or closeness of agreement between what the test
measures and the function that it is used to measure, and
(2)
reliability or the accuracy and consistency with which
the test measures whatever it does measure in the group with
which it is used [Cureton, 1951].
In the training environ-
ment, a performance test is a stimulus situation constructed
to evoke the particular kinds of operator behavior to be
measured or assessed.
The validity of a performance test is
established by demonstrating that the test results reflect
differences in skill levels of the performance being assessed
[Glaser and Klaus, 1966].
a.  Types of Validity
Four types of validity which have applicability
to performance measurement in general have been described by
Smode, et al. [1962], McCoy [1963], and Chiles [1977] as
follows:
(1)  Predictive validity refers to the correlational
agreement between obtained measures and future
status on some task or dimension external to the
measurement and requires statistical operations
for evaluation.
(2)  Concurrent validity refers to the correlational
agreement between obtained measures and the present
status of the units being measured on some task or
dimension external to the measurement and also
requires statistical computation.
(3)  Content validity is based on expert opinion and is
evaluated by qualified people determining if the
measures to be taken truly sample the types of
performance or subject matter about which conclusions will be drawn.
(4)  Construct validity is a logical process where the
emphasis is on trait, quality, or ability presumed
to underlie the measures being taken.
While the
measures themselves do not reflect directly the
performance to be evaluated, they are accepted to
be valid on the basis of logical considerations and
related empirical evidence.
b.  Validity of Measurement in the Simulator
Without empirical or judgmental evidence, the use
of full-scale state-of-the-art flight simulation provides
maximum face validity, where the performance evaluation situation in the simulator appears to duplicate the actual task
of flying or navigating an aircraft [Alluisi, 1967; Chiles,
1977].  Bowen, et al. [1966], in an experiment using twenty
A-4 pilots to study and assess pilot proficiency in an Oper-
ational Flight Trainer (OFT; device 2F62), found that:
found that:
For measures taken in the OFT to be valid, the task set
to the pilot should be multiple in nature and have a
considerable difficulty level equivalent to the more
difficult levels found in actual flight.
In this manner, the pilot is more likely to display his skills in
the same pattern of priority (i.e., time-sharing,
attention-shifting, standard-setting, etc.) as he does
in actual flight.
This conclusion is also supported by Kelley and Wargo [1968]
and in terms of the relevance component of validity as pre-
viously discussed by Cureton [1951].
c.  Improving Validity of Measurement
A high degree of validity is essential to the
effectiveness of any measurement system.
Improving validity
can be achieved by increasing either or both relevance and
reliability.
Relevance may be increased by reproducing the
simulation situation to closely resemble that of the actual
aircraft itself, or by simulating the task or mission being
performed more closely to the actual task or mission environment.
Reliability improvements were discussed in the previous
section.
d.  Relationship of Validity and Reliability
Validity has been described as having aspects of
relevance and reliability.
Reliability is the consistency
or self-correlation of a measurement while validity is its
correlation with some independent standard or reference from
that which is measured [Kelley and Wargo, 1968].  A given
performance measurement can be highly reliable yet not have
validity [Smode, et al., 1962; Kelley and Wargo, 1968].  How-
ever, an unreliable test cannot have practical validity, i.e.,
a measurement that does not even correlate well with itself
will not correlate well with other measurements [Kelley and
Wargo, 1968; Steyn, 1969].
In performance measurement, high
validity must be combined with high reliability; this combination means that a highly skilled operator consistently must
achieve a higher performance evaluation result than a less
skilled operator [Smode, et al., 1962; Kelly, et al., 1979].
If the performance evaluation occurs during the actual task
instead of a simulated task, the question of validity reduces
simply to the question of reliability, as perfect relevance
will have been achieved [Cureton, 1951].
Because of the unique relationship of validity
and reliability,
it is generally easier and more efficient to
improve the reliability of a measure than to raise its validity.
On the other hand, if the validity of a measure appears promis-
ing, improving reliability is preferred to using a reliable
measure which has lower validity [Glaser and Klaus, 1966].
8.  Selection of Initial Measures
After the identification and selection of a desired
mission, scenario, flight segment, and human operator with
the behavior of interest, performance associated with these
requirements can be specified and defined.
The initial
selection of appropriate measures has been a major problem,
as evidenced by the lack of concordance in recent aircrew
performance measurement research [Mixon and Moroney, 1981].
Unless aircrew performance measurement empirical results are
collected and standardized, an analytical approach must be
taken when initially selecting which measures best describe
the performance being examined.
Some optimum balance exists
between the "measure everything that moves" philosophy and
the measurement of a few measures with apparent face validity.
The initial selection of any measures, however, should be
guided by both the purpose of the measurement and the man-
machine system as well as the facility for collecting and
processing the data.
The following criteria for initial
measure selection are provided by Meister and Rabideau [1965],
Parsons [1972], Greening [1975], and Vreuls and Wooldridge
[1977]:
(1)  Relevance - the measures should be pertinent to the
     purpose of measurement.
(2)  Quantifiable - measures should be in the form of
     numerals on a ratio scale for statistical analysis,
     except where only subjective collection is feasible.
(3)  Accessibility - measures should not only be observable
     but easily collectable, preferably by electronic means.
(4)  Operational utility - a measure that has relevance
     and accessibility in both the aircraft and simulator
     environments.
(5)  Efficiency - a measure with utility at minimum cost
     of collection and transformation into usable infor-
     mation.
(6)  Content validity - a positive correspondence between
     the performance measure and what is known about the
     underlying behavior.
(7)  Reliability - collection of more measures or
     samples of a measure than planned would offset the
     likelihood of electronic data collection equipment
     failures.
(8)  Dependence - the availability of human or automatic
     data observers and collectors limits what measures
     are feasible to collect.
(9)  Objectivity - where possible, automatic data observa-
     tion and recording is preferred to human observers
     and recorders.
(10) Usable - measures collected should be usable for
     either evaluation information or supportive to
     evaluation results.
(11) Acceptable - measures that are used by instructors
     in the operational environment must be consistent
     with their expert judgement.
(12) Collection criteria - data must be accurate and
     precise to what is known about the underlying
     behavior.
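One hedged way to operationalize such a checklist is a simple
screening tally, sketched below; the candidate measures and
their ratings are hypothetical, and in practice each judgement
would come from instructors and analysts rather than being
assigned arbitrarily.

    # Hypothetical screening of candidate measures against four of
    # the twelve criteria above; True means the criterion is judged
    # satisfied.
    candidates = {
        "crosstrack error":      {"relevance": True,  "quantifiable": True,
                                  "accessibility": True,  "acceptable": True},
        "eye movements":         {"relevance": True,  "quantifiable": True,
                                  "accessibility": False, "acceptable": False},
        "instructor impression": {"relevance": True,  "quantifiable": False,
                                  "accessibility": True,  "acceptable": True},
    }

    for name, checks in candidates.items():
        satisfied = sum(checks.values())
        verdict = "retain" if satisfied == len(checks) else "review"
        print(f"{name:24s} {satisfied}/{len(checks)} criteria -> {verdict}")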
Once the performance measures of interest are identified and
selected, they should be defined, as a minimum, in terms of
the descriptive structure as previously outlined by Vreuls
and Cotton [1980].
Some consolation in not selecting the
appropriate measures for performance is offered by Knoop
[1968]:
"Determining exact criteria [standards] and perfor-
mance measures for virtually any flight task or mission is
an accomplishment yet to be made."
B.  CRITERIA CONSIDERATIONS
1.  Definition and Purpose of Criteria
Criteria are standards, rules, or tests by which
measures of system behavior are evaluated in terms of success
or failure, or to some degree of success or failure.
The
purpose of human performance criteria is to provide standards
or baselines for evaluating the success or failure, goodness
or badness, or usefulness of human behavior
[Knoop,
1968;
Vreuls and Cotton, 1980; Cureton, 1951; Steyn, 1969; Davis
and Behan, 1966; Shipley, 1976; Buckhout and Cotterman, 1963].
The criterion is a measuring device which is not generally or
readily available, but a device which should be constructed
from the beginning for each particular situation [Steyn, 1969;
Christensen and Mills, 1967].
Criteria should not only define
the unique manner in which the operator should perform a task,
but should define the performance objectives of the entire
man-machine system [Demaree and Matheny, 1965; Connelly, et
al.,
1974].
2.  Types of Criteria
The classification of criteria can be accomplished
from a measurement standpoint; begin with the smallest known
entity and end with the "ultimate" quantity that may exist.
Several types identified are listed below:
(1)  Parametric referent or standard of performance
     which is sought to be met by the operator or system.
     Example: maintain 500 feet of altitude [Demaree
     and Matheny, 1965; Shipley, 1976].
(2)  Parametric limit about the parametric standard
     within which the operator or system is required,
     or seeks, to remain.  Example: maintain plus or
     minus 100 feet while at 500 feet altitude [Demaree
     and Matheny, 1965; Shipley, 1976].
(3)  System component criteria which distinguish the
     relationship between system components and system
     output.  Example: "least effort" measured from
     the pilot in relation to maintaining altitude
     [Krendel and Bloom, 1963; Uhlaner and Drucker, 1964].
(4)  Test criterion used to evaluate overall human
     ability, usually expressed as a single overall
     measure.  Example: subjective judgement of in-
     structor for a student as to "pass" or "fail"
     [Marks, 1961].
(5)  Ultimate criteria are multidimensional in nature
     and represent the complete desired end result of
     a system.  This type of criterion is impossible to
     quantify due to the multidimensional nature of the
     system's purpose, and hence, is a theoretical entity
     that must be approximated.  Example: any aircraft's
     mission [Cureton, 1951; Smode, et al., 1962; Uhlaner
     and Drucker, 1964; Steyn, 1969; Shannon, 1972].
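Criterion types (1) and (2) above lend themselves directly to
automatic checking.  The sketch below scores a hypothetical
altitude trace against the 500 plus-or-minus 100 foot example;
the sample values are illustrative only.

    # Parametric referent and limit from the examples above:
    # maintain 500 ft of altitude within +/- 100 ft.
    referent_ft, limit_ft = 500.0, 100.0
    altitudes_ft = [520.0, 470.0, 630.0, 540.0, 390.0, 505.0]

    within = [abs(a - referent_ft) <= limit_ft for a in altitudes_ft]
    pct_within = 100.0 * sum(within) / len(within)

    print(f"{pct_within:.0f}% of samples within the parametric limit")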
It may be noted that all five types of criteria can
be quantified or approximated in some manner, with decreasing
accuracy as the ultimate criteria level is reached.
Obtain-
ing direct measures of the ultimate criteria for a complex
system is seldom feasible.
This is particularly true in mil-
itary systems where such criteria would be expressed in terms
of combat effectiveness or effectiveness in preventing a
potential aggressor from starting a conflict [Smode, et al.,
1962].
Therefore, it becomes apparent that we must select
intermediate criteria (types one through four above) in
evaluating skilled operator behavior.
3.  Characteristics of Good Criteria
Using actual criteria as approximations of the ulti-
mate criteria can be accomplished by several methods that will
be discussed in a later section.
Although there is no certain
method that will lead to the specification of good criteria,
there are some considerations that can be taken into account
which are discussed below:
(1)  A good criterion is both reliable and relevant
     [Smode, et al., 1962; Krendel and Bloom, 1963;
     Cureton, 1951; Grodsky, 1967; Steyn, 1969].
(2)  Criteria must be comprehensive in that the utility
     of the individual being evaluated is unambiguously
     reflected [Steyn, 1969].
(3)  Criteria should possess selectivity and have ready
     applicability [Krendel and Bloom, 1963].
4.  Other Criteria Characteristics
Steyn [1969], in a review of criterion studies, noted
that performance measures under simulated conditions can at
best serve as substitute criteria.
This observation reflects
the engineering and mathematical model of reality represented
by the simulator that can, at best, approximate an aircraft
and its systems.
Shipley and Gerlach [1974] measured pilot
performance in a flight simulator (T4-G) and found that differences in pilot performance outcomes varied as a function
of the difference in criterion limits that were established,
with the relationship between criterion limits and tracking
performance found to be a nonlinear one.
The multidimension-
ality of a criterion was also noted by Steyn [1969], Cureton
[1951], and Connelly, et al. [1974].  These latter two studies
observed that multiple criteria must exist for a single task
since different operator action patterns having no single
feature in common could conceivably obtain the same desired
system output.
5.  Establishment of Criteria
Criteria may either be derived from regulatory requirements, system operating limits, knowledge of common practice,
or empirical studies [Vreuls and Cotton, 1980].  When criteria
are established analytically, some caution must be taken.
Campbell, et al. [1976], in designing the A-6E TRAM training
program using ISD methodology, observed that:
A standard or criterion of performance for that terminal
behavior must also be established . . . at a level compar-
able to the earlier established operational standards.
These latter criteria, however, while reflecting an
acceptable level of behavior, imply a repeatability . . .
that whenever they are performed, that some acceptable
level will be attained.
Criteria derived from objective, empirical techniques are
preferable to analytical methods [Steyn, 1969].
Regression
analysis, discriminant function analysis, multivariable regression, and norm or group referencing are but a few of the
empirical approaches to establishing criteria [Connelly, et
al.,
1974; Danneskiold, 1955; Dawes, 1979].
No matter which
method is used, criteria must be defined and are necessary
for the evaluation process.
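As one hedged illustration of group referencing, the sketch
below derives a provisional criterion from hypothetical fleet-
aircrew scores, in the spirit of the competitive exercise
proposed in this thesis; the one-standard-deviation band is an
assumption for illustration, not a recommendation.

    import statistics

    # Hypothetical RMS crosstrack errors (nm) flown by fleet-
    # qualified aircrews over a preprogrammed route.
    fleet_scores = [0.31, 0.28, 0.35, 0.25, 0.40, 0.30, 0.27, 0.33]

    mean = statistics.mean(fleet_scores)
    sd = statistics.stdev(fleet_scores)

    # Group-referenced criterion: a student passes if within one
    # standard deviation of the fleet mean (band width illustrative).
    criterion = mean + sd
    print(f"fleet mean {mean:.2f} nm; provisional criterion <= {criterion:.2f} nm")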
6.  Sources of Criterion Error
As previously mentioned, a good criterion is one that
is both reliable and relevant.  Reliability, as previously
defined, implies that what constitutes successful performance
will be resistant to the effects of chance factors.
Relevancy
refers to the validity of the actual or approximated criterion
to the ultimate criterion.
By definition, the ultimate cri-
terion is completely relevant.  Sources of criterion error
can then be identified in terms of reliability and relevance.
Smode, et al.
[1962], list some significant sources of
criterion error below:
(1)  Low reliability, as previously mentioned.
(2)  Irrelevancy or the lack of relation of the actual
     criterion with respect to the ultimate or "ideal"
     criterion.
(3)  Contamination of the criterion by the presence of
     factors or ingredients in the actual criterion
     which do not in fact comprise the ultimate criterion.
(4)  Distortion in the criterion caused by errors arising
     from assigning incorrect weights to the separate
     factors that comprise the actual criterion (com-
     bining criteria is discussed below).
7.  Measures of Effectiveness
Aircraft missions are all multidimensional in nature.
This means that every mission can be divided into usually one
overall goal or purpose (i.e., destroy the target, deliver
the supplies, rescue the survivors, etc.), with several subgoals
(safety, minimize susceptibility,
timeliness, etc.).
Since missions are multidimensional, the operator effort in
the form of mental and physical action (performance) becomes
multidimensional.
The multidimensional nature of skilled air-
crew performance, in turn, requires that several criteria, all
of which are relevant for a particular activity, be defined
and used [Smode, et al., 1962].  Each of these criteria must
be operationally defined, theoretically quantifiable, and
collectively give a reasonable portrayal of operator and system
performance.
Typically, one may wish to bring these component criteria
together in an overall effectiveness measure - a single "measure
of effectiveness" for the system being investigated. The process
of combining criteria into a single composite measure of
effectiveness is one of the most difficult tasks to undertake in
any field of research, and has been the focus of continuous
investigation in the science of Operations Research for decades
[Hitch, 1953; Morris, 1963; Steyn, 1969; Lindsay, 1979; Dawes,
1979].
A Measure of Effectiveness (MOE) is a quantifiable measure used
to compare the effectiveness of the alternatives in achieving
the objective, and must measure to what degree the actual
objective or mission is achieved [Operations Committee, Naval
Science Department, 1968]. For the particular situation of
measurement and evaluation of an aircrew member's performance in
an aircraft, criteria can be thought of as "alternatives," each
of which has an individual effectiveness for each performance
component of an aircrew member who is accomplishing an overall
objective - the mission. MOEs have been applied in economics,
management, and military problems - just about any area where a
decision based on information from system performance has to be
made.
Combining criteria into an MOE can be accomplished either
analytically or statistically. Most methods concentrate on
assigning weights that are either determined on the basis of
"expert" opinion or statistical treatment [Smode, et al., 1962;
Steyn, 1969]. In a review of numerous criterion weighting
studies, Steyn [1969] concluded that "it would appear that the
most acceptable approach [to weighting criterion variables]
would be to identify the job dimensions clearly and unambiguously
and to use these pure dimensions as criteria to be predicted
independently."
There is no established procedure for combining criteria into a
single overall MOE. Lindsay [1979] and Smode, et al. [1962]
offer some suggestions for approaching the problem:
(1) Look at the big picture. Examine what is to be done with the
results of the aggregation. Determine how the numbers will be
used, and in what decisions. (Usually one finds that this has not
been thought out in advance.) It may be that all that is really
needed is the identification of satisfactory systems.

(2) If possible, aggregate subjectively. Give the subcriteria
values to the decision-makers or their advisers and let them
subjectively determine how effective the systems are.

(3) Recognize that one is defining, not approximating. The
development of a formal scoring system should be done with the
awareness that a definition of system effectiveness is being
made. The procedure developed should include reference points
and diminishing marginal returns, and should avoid
substitutability except where appropriate.

(4) Sub-criteria should be weighted in accordance with their
relevance to the ultimate criterion.

(5) Sub-criteria which repeat or overlap factors in other
sub-criteria should receive a low weight.

(6) Other things being equal, the more reliable sub-criteria
should be given greater weight.
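As a concrete (and deliberately simplified) illustration of
guidelines (1) through (6), the sketch below aggregates three
hypothetical sub-criteria into a single MOE using fixed relevance
weights and a diminishing-returns transform. All names, weights,
and the shaping constant are invented for illustration only.

    import math

    # Hypothetical sub-criteria scores, each already scaled 0..1.
    scores  = {"track_accuracy": 0.82, "timeliness": 0.64, "safety_margin": 0.95}
    weights = {"track_accuracy": 0.5,  "timeliness": 0.2,  "safety_margin": 0.3}

    # Diminishing marginal returns: compress each score before weighting,
    # so gains near the top of the scale count for less (guideline (3)).
    def diminished(x: float) -> float:
        return 1.0 - math.exp(-3.0 * x)   # arbitrary shaping constant

    moe = sum(weights[k] * diminished(scores[k]) for k in scores)
    print(f"composite MOE = {moe:.3f}")

A multiplicative (e.g., geometric-mean) aggregation would further
limit substitutability between sub-criteria, in keeping with
suggestion (3).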
The unique situation of an aircrew flying an aircraft for a
specific mission, and the necessary determination of sub-criteria
for evaluating the overall accomplishment of that mission,
require further research of an analytical and empirical nature.
The relationship among altitude, airspeed, operator activity,
and the hundreds of other system variables that comprise the
total system must be compared to mission success in quantifiable
terms. Since "mission success" is multidimensional and may not
be totally measurable and quantifiable, some analytical
approaches toward combining criteria into an overall MOE appear
to be feasible, notwithstanding the possible use of empirical
methods to describe some aspects of the process.
8. Selection of Criteria
Criteria have been defined, their purpose established, some types
identified, and some characteristics discussed. Since a large
number of criteria may exist to evaluate a particular system,
some selection in the way of "trade-offs" may be necessary
[Davis and Behan, 1976]. More than one criterion may describe
the same dimension of performance, whereas another carefully
selected criterion may accurately describe more than one
performance dimension.
Reducing the number of criteria that are relevant, reliable, and
practical into a feasible and usable set that can accurately and
consistently evaluate the performance of an aircrew and the
accomplishment of their mission is extremely difficult at
present, and will probably remain an unsolved problem in the
Human Factors field unless specific research is undertaken to
attack it. In the meantime, some general guidelines for
selecting criteria, as discussed by Hitch [1953] and Smode, et
al. [1962], are listed below:
(1) Selection of any criteria should always be consistent with
the highest level or type of criterion associated with the
system mission.

(2) Specify the activity in which it is desired to determine
successful and skillful performance.

(3) Consider the activity in terms of the purpose or goals, the
types of behaviors and skills that seem to be involved, the
relative importance of the various skills involved, and the
standards of performance which are expected.

(4) Identify the elements that contribute to successful
performance and weight these elements in terms of their relative
importance.

(5) Develop a combined measure of successful performance
composed of sub-criteria that measure each element of success
and are weighted in accordance with the relative importance of
each.
The definition, computation, combining, and selection of criteria
is perhaps the most difficult problem encountered by researchers
investigating complex man-machine systems [Osborn, 1973; Krendel
and Bloom, 1963]. The importance of criteria is once more
emphasized, as stated by McCoy [1963]: "You must define the
criterion precisely and accurately before interpreting any
measures used in investigating a system."
Christensen and Mills [1967], quoting earlier work done by H.R.
Leuba, stated:

There are many ludicrous errors in quantification as it is
practiced today, but none quite as foolish as trying to quantify
without a criterion. It is awkward enough to quantify the thing
when a wrong criterion exists, but it is a sham of the most
unprofessional sort to quantify in the absence of a criterion.
C. PERFORMANCE MEASUREMENT CONSIDERATIONS

1. Subjective Versus Objective Measures
As previously discussed in Chapter I, subjective and objective
measures are not dichotomous, but rather represent a continuum
of performance measurement. At one extreme of the continuum, a
human observer mentally records actual performance during a
specified mission, and uses his perceptions to form a judgement
or degree-of-success rating as to how skillful the operator was
in achieving the system objectives. This extreme is the
subjective method of measurement and evaluation. At the other
extreme of the performance measurement continuum, automatic
digital computers sense, record, transform, analyze, and compare
actual man and system performance to statistically established
criteria and form a complete set of performance data or
information to be used by the decision-maker (instructor or
training officer) in evaluating the skills and abilities of an
operator. This other extreme is the objective method of
measurement and evaluation.
Each method of performance measurement has advantages and
disadvantages, which were discussed previously and will not be
repeated here. Objective measures and measurement have become
more feasible and less costly to implement for aircrew
performance than in previous decades, and the method has
established itself as a very powerful and useful model for
describing actual human behavior [Mixon and Moroney, 1981].
2. Combining Measures
What has been discussed previously with respect to combining
criteria also applies to combining performance measures into a
single overall index of skill level or proficiency. As Smode, et
al. [1962] indicated:

(1) Measures should be weighted in accordance with their
relevance to the criterion.

(2) Measures which repeat or overlap factors included in another
measure should receive a low weight.

(3) Other things being equal, the more reliable measures should
be given greater weight.

In combining performance measures, it is often possible to
determine quantitatively the interrelationships among the
performance measures and the relationship between each measure
and the actual or immediate criterion [Smode, et al., 1962].
A single overall measure or score composed of numerous
performance measures along different dimensions of system
behavior is highly desirable in any performance measurement and
evaluation system, due to the use of the total score in
determining overall performance when compared to a criterion. A
single score or estimate of total performance, when compared to
a criterion or MOE, provides the necessary information for
evaluation that determines goodness or badness, success or
failure, and usefulness of human performance [Buckhout and
Cotterman, 1963].
Combining performance measures, like combining criteria, can be
performed by either analytical or empirical methods, which
commonly assign weights to each measure that are then
mathematically combined into a single proficiency score.
Analytical methods employ the judgement of experts for
situations usually involving complex man-machine systems where
definitive and quantifiable measures of output are not available
[Glaser and Klaus, 1966; Marks, 1961]. Empirical methods of
combining aircraft system performance measures with relative
weightings into a single score were reviewed by Vreuls and
Obermayer [1971]. Among the methods from that study for
developing multidimensional algorithms were: factor analysis,
multiple discriminant analysis, linear-weighted algorithm,
nonlinear (threshold) model, energy maneuverability model, time
demand, recursion models, and empirical curve fit. The
interested reader is referred to that study for more detail on
each model and the circumstances in which it was employed.
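Of the methods listed, the linear-weighted algorithm is the
simplest to state. The sketch below shows one hypothetical form:
each raw measure is standardized against reference-group
statistics and then combined with fixed weights. The measure
names, reference values, and weights are invented; they are not
the weightings from any of the studies cited.

    # A minimal sketch of a linear-weighted composite proficiency score.
    # All measure names, reference statistics, and weights are invented.
    raw      = {"fix_time_s": 42.0, "track_error_nm": 0.35, "tot_error_s": 8.0}
    ref_mean = {"fix_time_s": 50.0, "track_error_nm": 0.50, "tot_error_s": 12.0}
    ref_sd   = {"fix_time_s": 6.0,  "track_error_nm": 0.10, "tot_error_s": 4.0}
    weights  = {"fix_time_s": 0.3,  "track_error_nm": 0.5,  "tot_error_s": 0.2}

    # Standardize so that larger is better (all three measures here are
    # "lower is better"), then form the linear-weighted composite.
    score = sum(
        weights[k] * (ref_mean[k] - raw[k]) / ref_sd[k]
        for k in raw
    )
    print(f"composite proficiency score = {score:+.2f} (in reference SD units)")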
In summary, separate performance measures of different
behavioral dimensions are combined by various analytical and
empirical methods into a single overall or composite score, with
the idea that when the score is high, as compared to a
predetermined criterion or MOE, it indicates "good" or
"successful" performance, and when low, indicates "poor" or
"unsuccessful" performance [Cureton, 1951]. Difficulties in
deciding how to combine the measures into a single overall score
lead to preservation of the behavior dimensions and a "vector"
of measures.
3. Overall Versus Diagnostic Measures
Overall measures of skilled performance, as previously discussed,
along with total system output measures (e.g., bomb-drop
accuracy, number of targets hit, fuel consumed), are beneficial
in assessing total system performance but are seriously lacking
in diagnostic information of potential value to the trainee
[Buckhout and Cotterman, 1963; Kelley and Wargo, 1968; Bergman
and Siegel, 1972]. Overall scores tell nothing about the
operator's performance on the various specific tasks involved in
flying an aircraft on a mission, but are highly useful for
performance evaluation.
Diagnostic measures are the same measures that result from
performance measurement before any combining operations take
place. These measures identify certain aspects or elements of a
task or performance in specific skill areas and provide useful
information on strengths and weaknesses in individual skills
[Smode, et al., 1962]. Since they are concerned with smaller and
more precisely defined units of behavior, they are easier to
measure by objective methods.
It thus appears that overall and diagnostic measures are
contradictory but both essential. For the training environment,
where a student is learning the skills necessary to perform a
task, both measures are valuable for the information they
provide, as discussed above. Using the two together in a
complementary fashion was perhaps best stated by Smode, et al.
[1962]: "A prime value of an overall measure is the support it
provides in evaluation since diagnostic measures alone are
difficult to interpret without some terminal output measure of
performance." Kelley and Wargo [1968] recommended that
performance be measured in each dimension, evaluated separately
by comparison to specific and predefined criteria, and then
combined into an overall total score, so the trainee can receive
feedback relating to his relative performance on the various
dimensions of his task, as well as on his overall performance.
More recently, Vreuls and Wooldridge [1977] described
multivariate statistical modeling techniques that are powerful
enough to provide measures for diagnosis, and yet also provide
single measures that could be combined into an overall score.
The qualities of overall and diagnostic measures have been
described and their relationship has been discussed. Within the
training environment, using one without the other appears to
cause a decrease in the quality of information available to the
individuals who most need it: both student and instructor.
Therefore, for the design of a system to measure student B/N
performance during a radar navigation mission, it appears
advantageous to employ both overall and diagnostic measures in a
mutually beneficial manner that will provide the maximum amount
of accurate information for training purposes.
4. Individual Versus Crew Performance
One of the assumptions of this thesis is that the variability of
the total contribution of the pilot to the conduct and
successful accomplishment of the radar navigation mission is
small enough to be essentially ignored when measuring the
performance of the B/N during the same mission. This assumption
was based on the major role played by the B/N during radar
navigation, the unique design of the A-6E CAINS navigation
system, and the radar navigation mission itself. Although it is
recognized that any successful accomplishment of a mission
depends to some degree on crew interaction and coordination, the
actual measurement of the interaction and coordination was
beyond the current scope of study, and will be left for future
investigation and research.
The unique problem of measuring crew coordinated performance has
been the focus of much research, but the question of what "crew
coordination" is remains unanswered [Mixon and Moroney, 1981].
Smode, et al. [1962] provided a detailed discussion of the
problems and approaches taken in measuring aircrew coordination,
and concluded that measured interaction and communication were
good for differentiating "good" and "bad" crews. Good crews
reduced individual interaction and communication to a minimum so
that more time was available to devote to performing the
individual tasks associated with accomplishing the mission. This
conclusion does not acknowledge the inherent difficulties
involved in objectively measuring individual interaction and
communications. Until further research uncovers objective,
valid, reliable, and practical methods of measuring crew
coordination, this area is perhaps a measurement function best
delegated to a human observer (instructor).
5. Measures and Training
The importance of overall and diagnostic measures in the
training environment has been previously discussed. Measures for
the evaluation of performance are related in some degree to the
stages of training. Early in training, when skilled behavior is
made up largely of familiarization with the task and basic
knowledge of procedures, measurement may consist of more
familiarization and procedure-related measures. Late in
training, when skilled performance has become more or less
automatic, measurement becomes more difficult due to the highly
cognitive and covert nature of the skilled behavior [Fitts,
1965; Glaser and Klaus, 1966]. In this case, measurement becomes
more indirect than direct.

Designing any measurement system within the training environment
requires a detailed understanding of the training process and
its relationship to performance measures, in
addition to an explicit understanding of the basic nature
of the skills involved in performing the task.
The latter
subject will be discussed in Chapter V.
D. PERFORMANCE EVALUATION CONSIDERATIONS
Although the purpose of this thesis is to design a system to
improve current performance measurement techniques for the FRS
B/N student, some mention must be made of performance
evaluation, since performance measurement exists as information
necessary to evaluate individual and system performance. Without
evaluation, little reason exists for the measurement, recording,
and storage of performance measures. This section will briefly
outline current evaluation methods and the characteristics of
evaluation itself.
1. Definition and Purpose
Consistent with the previous discussions on performance
measurement and criteria, performance evaluation is simply the
process of identifying and defining performance criteria and
then comparing the criteria to performance measures produced by
performance measurement. All performance evaluation requires
some comparison between a standard and an estimate of what the
standard truly represents [Angell, et al., 1964; Demaree and
Matheny, 1965]. The purpose of performance evaluation in the
training environment is usually multidimensional in nature, but
all evaluation occurs for the purpose of accurate
decision-making by the instructor regarding student performance
and by the training officer for effective training control. On
the instructor level of evaluation, faulty decision-making due
to any performance evaluation involves two possible errors:
Type I and Type II, as found in Table V.
TABLE V: PERFORMANCE EVALUATION DECISION MODEL FOR THE INSTRUCTOR

                                            REALITY
    DECISION                        UNSKILLED        SKILLED

    Student has not acquired        Correct          Type I
    skill to perform task           decision         error (α)

    Student has acquired            Type II          Correct
    skill to perform task           error (β)        decision
error are possible increased
costs due to overtraining, an inefficient training flow of
students, and a demotivated student.
On the other hand, a
Type II error may result in increased costs due to an aircraft
accident and the loss of human life.
This example illustrates
the important role that performance measurement and subsequent
evaluation plays in providing accurate information necessary
for correct decision-making by the instructor.
98
.
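The decisional error rates of Table V can be estimated
empirically once instructor decisions are paired with some later
indication of the trainee's true skill. A minimal sketch, with
invented decision records:

    # Tally a decision rule's Type I and Type II rates against known
    # outcomes. Data are invented; in practice "reality" would come from
    # later fleet performance or a validated instructor judgement.
    records = [
        # (decided_skilled, actually_skilled)
        (True, True), (True, False), (False, True),
        (True, True), (False, False), (True, True),
    ]

    type1 = sum(1 for d, r in records if not d and r)  # skilled, judged unskilled
    type2 = sum(1 for d, r in records if d and not r)  # unskilled, judged skilled
    skilled   = sum(1 for _, r in records if r)
    unskilled = len(records) - skilled

    print(f"Type I rate  (alpha): {type1 / skilled:.2f}")
    print(f"Type II rate (beta):  {type2 / unskilled:.2f}")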
2. Types of Performance Evaluation
Based on the purpose of the evaluation, evaluation may be
divided into two general types: aptitude and achievement.
According to Marks [1961], if the purpose is to predict the
capacity of a trainee to absorb training and perform a task, the
evaluation is called an aptitude test. If the purpose is to tell
how well the trainee has absorbed training or can perform the
task, the evaluation is called an achievement measure. When
considering achievement measures, it is possible to distinguish
three basic kinds:
(1) Proficiency tests require the individual to answer questions
about his job or about some content knowledge area related to
his job.

(2) Performance tests involve controlled observation of an
individual actually performing his job.

(3) Rating methods use the opinion of someone who has actually
seen the man's performance on the job.

For details on the characteristics, advantages, and
disadvantages of each kind of achievement measure, the
interested reader is referred to Marks [1961].

A model is anything that represents reality. Two performance
evaluation models that are utilized in achievement measures are
norm-referenced testing and criterion-referenced testing.
a. Norm-Referenced Testing
Norm-referenced testing involves the use of norm-referenced
measures in evaluating performance. Norm-referenced measures
compare the performance of an individual with the performance of
other individuals having similar backgrounds and experience
[Glaser and Klaus, 1966; Knoop and Welde, 1973; Danneskiold,
1955]. The stability of a norm-referenced measure is highly
dependent upon sample size. Too small a sample can yield
measures of central tendency and variability that poorly
approximate actual population values [Glaser and Klaus, 1966;
Danneskiold, 1955].

b. Criterion-Referenced Testing
Criterion-referenced testing uses criterion-referenced measures
for making an evaluation of performance. These measures involve
a comparison between system capabilities and individual
performance [Glaser and Klaus, 1966]. Such measures indicate
whether an individual has reached a given performance standard
[Knoop and Welde, 1973]. The standard for criterion-referenced
measures may be determined by analysis, by subjective judgements
by a panel of experts, or by numerous successful performances
sampled from a large population [Knoop and Welde, 1973].
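The contrast between the two models reduces to what the score is
compared against: the peer distribution, or a fixed standard. A
minimal sketch with invented scores:

    from statistics import mean, stdev

    student_score = 78.0
    peer_scores   = [70.0, 74.0, 81.0, 68.0, 77.0, 85.0]  # same-cohort students
    standard      = 75.0                                   # fixed performance standard

    # Norm-referenced: locate the student relative to the peer distribution.
    z = (student_score - mean(peer_scores)) / stdev(peer_scores)

    # Criterion-referenced: compare to the fixed standard; peers are irrelevant.
    meets_standard = student_score >= standard

    print(f"norm-referenced:      z = {z:+.2f} relative to cohort")
    print(f"criterion-referenced: meets standard = {meets_standard}")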
c. Criterion- Versus Norm-Referenced Testing
A recent article by Swezey [1973] reviewed and described the
relative advantages and disadvantages of criterion-referenced
and norm-referenced testing, from which the conclusions are
cited below:

Content validated criterion-referenced tests, which are derived
from appropriate job, task, or training analyses, often provide
the best available measure of performance, particularly in
objectives-oriented situations. It is often the case that no
better criterion exists upon which to validate the instrument.
Other researchers have supported this conclusion, especially in
the field of aircrew training performance measurement, and it is
perhaps a more feasible alternative to the more traditional and
less efficient method of norm-referenced testing [Knoop and
Welde, 1973; Waag and Eddowes, 1975; Uhlaner and Drucker, 1978;
McDowell, 1980].

3. Accuracy of Evaluation
The accuracy of evaluation is dependent upon the
accuracy of measurement, the accuracy and relevance of the
criteria, and the evaluation conditions.
Since the accuracy
of measurement and criteria have already been addressed, this
section will be limited to evaluation conditions.
During any evaluation, several sources of contamination, or
bias, as discussed by Danneskiold [1955] and Glaser and Klaus
[1966], may affect the performance evaluation of individuals;
these are listed below:
(1) In performance testing, one individual may naturally perform
better than another during the examination situation, even
though both may actually possess the same skill level.

(2) The sequence and construction of the simulated mission test
may cause some individuals to respond in a way that is dependent
only on the test sequence and construction.

(3) Judgemental errors occur whenever individuals are used to
observe performance, due to prejudices and stereotypes formed by
the observer.

(4) Evaluating condenses performance dimensions into a compact
and meaningful unit, where in the process some information is
lost.

(5) Observed performance is only a sample of the total skills
and knowledge of the individual.
The accuracy of evaluation may be increased by improving
measurement accuracy, the accuracy and relevance of the
criteria, or the evaluation conditions. For evaluation
conditions, the common method of eliminating some of the five
bias factors mentioned above is to increase objectivity in
measurement and to standardize the test conditions [Glaser and
Klaus, 1966].
4. Evaluating Individual and Group Differences
The measurement of differences in individual performance is
highly desirable in a training situation. As training
progresses, the performance of individual trainees gradually
approaches the desired minimum skill level required for system
operation. The accurate evaluation of the skill level the
individual actually possesses during the training process is
necessary for efficient training control and for efficient
instruction [Glaser and Klaus, 1966]. Several methods have been
developed to identify which performance measures can best
discriminate among individuals at various ability levels; these
will be discussed in Chapter VII due to their applicability in
designing the measurement system at hand [Parker, 1967; Buckhout
and Cotterman, 1963; Thorndike, 1951].
Measuring group differences, as opposed to individual
differences, is more suitable for the purposes of treatment
evaluation, such as evaluating the training method, length of
instruction, and design of displays and controls, and will not
be addressed in detail here. Interested readers may consult
Glaser and Klaus [1966] or Moore and Meshier [1979] for further
discussion of methods for measuring group differences.
5. Characteristics of Evaluation
Some characteristics and considerations that contribute to
improving the evaluation process are listed below:

(1) Repeatability of a measure implies that a specified score
achieved today represents the same level of performance as it
did at a previous time (temporal invariance) [McDowell, 1978].

(2) Sensitivity of a measure occurs when a measure reliably
changes whenever the operator's performance changes [Grodsky,
1967; Kelley and Wargo, 1968; Knoop and Welde, 1973].

(3) Comprehensiveness of the measures employed in covering as
wide a range of flying skills as possible [Ericksen, 1952].

(4) Interpretability of measures and evaluation results [Demaree
and Matheny, 1965; Waag and Eddowes, 1975; McDowell, 1978].

(5) Immediately available measures and scores to provide the
student with knowledge of results [Buckhout and Cotterman, 1963;
Demaree and Matheny, 1965; Welford, 1971; Waag and Eddowes,
1975; McDowell, 1978; Kelly, 1979].

(6) Economical considerations require that evaluation be
constrained by cost and availability of personnel, yet adequate
at a minimum level for the purpose at hand [Marks, 1961; Demaree
and Matheny, 1965].

(7) Standardization of test conditions and environments enables
performance to more accurately reflect true operator skill
[Ericksen, 1952; Marks, 1961; Smode, et al., 1962; Demaree and
Matheny, 1965].
This list of desirable characteristics of evaluation
is not all-inclusive but does provide some foundation for
examining existing evaluation systems for those properties
that are in consonance with system and evaluation goals.
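Several of these characteristics are directly computable. For
example, repeatability (characteristic (1)) is commonly indexed
by the test-retest correlation of the same measure across two
occasions; the sketch below computes a Pearson correlation for
invented paired scores.

    from statistics import mean

    # Paired scores for the same trainees measured on two occasions.
    session1 = [62.0, 71.0, 55.0, 80.0, 66.0]
    session2 = [60.0, 74.0, 58.0, 78.0, 65.0]

    def pearson_r(x, y):
        mx, my = mean(x), mean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) *
               sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    print(f"test-retest reliability r = {pearson_r(session1, session2):.2f}")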
V. THE NATURE OF THE BOMBARDIER/NAVIGATOR TASK

A. INTRODUCTION
Navigating an A-6E TRAM aircraft during a low altitude,
non-visual air interdiction mission is perhaps one of the most
demanding and complex tasks expected of navigators today. The
aircraft must avoid rough or mountainous terrain while traveling
narrow corridors between geographical turn points, which must be
crossed with pinpoint accuracy at predesignated times. Literally
hundreds of individual steps or procedures are involved in
navigating the aircraft, each of which contributes in some
dimension to attaining the mission objective. Figure 2, adapted
from Obermayer and Vreuls [1974], shows the crew-system
interactions in the A-6E aircraft. The pilot controls the
aircraft and manages aircraft flight systems while receiving
visual and auditory navigational information from the Vertical
Display Indicator (VDI) and B/N, respectively. As can be noted
in Figure 2, the B/N manages the navigational equipment,
processes large amounts of concurrent information, and makes
critical decisions regarding the navigational accuracy of the
aircraft. At any one time, the B/N may be executing tasks that
are dichotomous, sequential, continuous, monitorial,
computational, or decisional in nature. At all times he is
serving as the systems manager.
[Figure 2. Crew-system interaction in the A-6E aircraft, adapted
from Obermayer and Vreuls [1974].]
One of the more difficult subtasks is radar scope
interpretation. This activity involves the recognition of the
relationship between specific symbols or patterns of symbols on
a flight chart and the specific returns or patterns of returns
on the radar scope [Beverly, 1952]. The success of this
identification subtask depends largely upon the quantity and
quality of a priori information about the target that was
available to the radar navigator [Williams, et al., 1960]. This
subtask may be performed while the B/N is monitoring the
Inertial Navigation System (INS), observing computer-generated
navigational information displays, and informing the pilot about
current equipment status. It is an axiom among student B/Ns that
"if you are sitting there perceiving that everything has been
done, you are getting behind."
This section will define and describe the nature of the B/N's
tasks in terms of: (1) the physical variables of the
aircrew-aircraft system, and (2) the complex skills and
abilities of a perceptual, psychomotor, and cognitive nature.
The importance of operationally describing and systematically
classifying the B/N's tasks from a behavioral point of view is
that such an analysis may point to areas where measurement of
performance is both desirable and feasible, and may indicate the
relationships of individual tasks to overall mission success
[Smode, et al., 1962; Vreuls, et al., 1974]. The tool used to
define and describe the tasks of the B/N will be a task
analysis.
B. TASK CONSIDERATIONS

1. Task Definition
A task is one or more activities performed by a single human
operator to accomplish a specified objective [Connelly, et al.,
1974]. In the aviation training environment, a navigation task
is the successful action of the navigator in response to visual,
aural, vestibular, and tactile information concerning the actual
and desired values of a particular parameter (or more than one
parameter) associated with navigating the aircraft, usually
after completing a lesson or series of lessons [Demaree and
Matheny, 1965; Anderson and Faust, 1974].
2. Classification of Tasks
There are numerous task classification methods, all of which
depend on the purpose of describing the tasks and the nature of
the tasks themselves. Smode, et al. [1962] classified behavior
for performance measurement purposes with the idea of
accommodating both diagnostic measures relating to elemental
tasks as well as the more global measurements relating to
overall system performance. These general behavior classes are
listed and defined as follows:
a. Level I - Elemental Tasks

The simplest level of analysis, referring to any homogeneous
series of work sequences conducted at one time, or single
actions taken toward accomplishing a desired objective. These
tasks range from short duration discrete homogeneous acts to
longer sequences of routine activity.
b. Level II - Complex Tasks

The composite of activities which involve identifiable sequences
of homogeneous activity or recurring single actions and
sub-routines in performance. Each complex task is made up of
tasks from Level I, involving either the simultaneous and/or
sequential integration of combinations of elemental tasks, or
the repetition of a single Level I activity over time.
c. Level III - Mission Segments

The segments or phases of performance that are identified in
full mission activity. Essentially, a segment is composed of a
group of complex tasks (Level II) which are integrated in the
performance at this level of description.
d. Level IV - Overall Missions

The major types of missions anticipated for advanced flight
vehicles. Each mission is composed of a group of segments of
activity which are integrated in the performance at this level
of description.
Beginning with overall missions and ending with elemental
tasks, these progressive refinements in task specificity allow
performance measurement decisions to be made at progressively
more detailed levels.
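The four behavior classes form a strict containment hierarchy,
which can be made explicit as a data structure. The sketch below
is one hypothetical rendering in Python; the mission fragment
shown is invented.

    from dataclasses import dataclass, field

    @dataclass
    class ElementalTask:          # Level I
        name: str

    @dataclass
    class ComplexTask:            # Level II: integration of Level I tasks
        name: str
        elements: list = field(default_factory=list)

    @dataclass
    class MissionSegment:         # Level III: group of complex tasks
        name: str
        tasks: list = field(default_factory=list)

    @dataclass
    class Mission:                # Level IV: group of segments
        name: str
        segments: list = field(default_factory=list)

    # Hypothetical fragment of a radar navigation mission.
    fix = ComplexTask("radar position fix",
                      [ElementalTask("tune radar"),
                       ElementalTask("identify return")])
    mission = Mission("low-level interdiction",
                      [MissionSegment("navigation to TP", [fix])])
    print(mission)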
3. Task Relation to Performance and System Purpose
For measurement purposes, it is neither practical nor desirable
to measure all possible task conditions which might occur in
accomplishing a mission objective [Smode, et al., 1962; Vreuls,
et al., 1974]. To be practical, an attempt should be made to
simplify the analysis of tasks and remove irrelevant measurement
[Vreuls, et al., 1974]. Since measurement is only possible on
the basis of specific, observable events, a great deal of
investigation and analysis must be accomplished to describe
tasks that are representative of accomplishing the mission
purpose while at the same time are measurable [Glaser and Klaus,
1966]. Glaser and Klaus [1966] identified two kinds of
observable performance that are useful for performance
measurement: the behavioral repertory of the operator in the
form of verbal and motor actions, and the operator's effects on
overall system performance or output. From the measurement of
either of these two observed performances, some inference can be
made about the operator's level of skill in performing
operational and describable tasks.
The intimate relationship between the task of the operator and
performance measurement of the operator for the purpose of
estimating his level of skill was best stated by Smode, et al.
[1962]:

The behaviors and tasks which are observed and measured
necessarily will be a sampling from those which comprise the
complete system activity, for it is neither feasible nor
necessary to measure everything in order to evaluate
proficiency. What one evaluates depends on purpose. Determining
those properties of behavior significant to the purpose aids in
defining the areas of human behavior for assessment. In the
interest of maximizing validity of measurement, this sampling
should be guided by the criticality of the tasks and operational
segments to mission or system success. As a rule, those tasks
should be selected for measurement on which good performance
results in mission success and on which poor performance means
failure.
As discussed previously, identifying the purpose of the system
is important for defining performance standards and for accurate
measurement of behavior. Likewise, the purpose of the system
helps define those tasks which should be measured, and is
essential for discovering the relationship between mission tasks
performed by the operator and the probability of mission success
[Smode, et al., 1962; Buckhout and Cotterman, 1963; Cotterman
and Wood, 1967]. Actions that are critical to performance, in
that they differentiate between success and failure in
performance, can only be identified properly in terms of the
ultimate purpose or goal of the man-machine system [Smode, et
al., 1962].
C. RADAR NAVIGATION TASK ANALYSIS

1. Background
A task analysis is a time-oriented description of man-machine
interactions brought about by an operator in accomplishing a
unit of work with an item of the machine; it shows the
sequential and simultaneous manual and intellectual activities
of the man operating, maintaining, or controlling equipment,
rather than a sequential operation of the equipment [Department
of Defense, MIL-H-46855B, 1979]. Miller [1953] presented a more
usable definition of a task analysis: the gathering and
organization of the psychological aspects of the indication to
be observed (stimulus and channel), the action required
(response behavior, including decision-making), the skills and
knowledge required for task performance, and probable
characteristic human errors and equipment malfunctions.
A task analysis is conducted mainly for the design of new
systems or for improvements to existing systems, and provides
basic building blocks for the rest of human engineering analysis
[Van Cott and Kinkade, 1972]. The purpose of the task analysis
presented here is to improve current performance measurement of
the B/N in the A-6E WST; it will be discussed more fully in
respect to this purpose in Chapter VII.
There are several methods of conducting a task analysis, which
are classified as either empirical, analytical, or some
combination of both. The empirical methods rely on industrial
engineering techniques such as time and motion study [Mundel,
1978], while the analytical techniques involve the use of expert
opinions through interviews or questionnaires. Van Cott and
Kinkade [1972] advocated seeking information from a wide variety
of sources and employing more than one technique in order to
adequately describe what an operator actually does in a system.
"A completely developed task analysis will present a
detailed description of the component behavioral skills that
the accomplishment of the task entails, the relationships
among those components, and the function of each component in
the total task
[Anderson and Faust, 1974]
."
Since
a
task
analysis involves breaking down a task into behavioral
111
components for the purpose of performance measurement, the
question of when to stop subdividing the task is most impor-
Anderson and Faust [197 4] proposed that enough task
tant.
analysis detail is reached when the intact or component skill
is part of the student's entering behavior.
The use of a task analysis for the purpose of performance
measurement assumes that behavior can be analyzed in terms of
basic components that are conceptually identified in a way that
is convenient and agreeable to people, and that specific
measurement techniques appropriate for the various behavioral
components exist [Smode, et al., 1962]. This assumption becomes
less theory and more fact in light of research conducted in the
helicopter community. Locke, et al. [1965], in a study of over
500 primary helicopter students using the OH-23D helicopter,
reported that nearly all complex man-machine maneuvers can be
broken down into independent component parts with associated
component abilities. A more recent study by Rankin and McDaniel
[1980] assessed helicopter flight task proficiency using a
Computer Aided Training Evaluation and Scheduling (CATES)
system, where flight maneuvers were divided into tasks that were
used for performance measurement and evaluation, and were then
utilized to determine overall aviator proficiency.
Just as no two task analyses are ever the same, there may be
multiple sets of operator behavior possible to accomplish the
tasks as described in one task analysis [Fleishman, 1967; Vreuls
and Wooldridge, 1977]. This major limitation of using a task
analysis to measure performance of an operator reaffirms the
idea of measuring all observable system outputs and establishing
the relationship among operator actions, system outputs, and
mission success or failure. By empirically validating a task
analysis in the operational environment and establishing the
above mentioned relationships, any limitations imposed by
differences in operator strategy on the measurement system may
be circumvented.
2. Previous A-6 Task Analyses

a. Naval Flight Officer Function Analysis
In 1972, the Chief of Naval Operations requested the Naval
Aerospace Medical Research Laboratory (NAMRL) to conduct a
series of investigations analyzing the operational functions of
the Naval Flight Officer (NFO) for the purposes of revising NFO
training programs and aiding in the determination of future
training equipment requirements and characteristics. Addressing
NFOs of P-3B/C, RA-5C, A-6A, EA-6B, E-2C, and F-4B/J aircraft,
the investigations determined the roles, duties, and tasks
performed by the NFO in a given aircraft, the percent of NFOs
performing a given task/duty, the time and effort spent on
various roles, duties, and tasks, and finally, the task
criticality. The study of interest for the purposes of this
thesis involves the analysis of the B/N operational functions in
the A-6A [Doll, et al., 1972].
The procedure used for the function analysis was based on a
method of job analysis developed by the USAF Personnel Research
Laboratory at Lackland Air Force Base, Texas. The principal
method of analyzing functions was an inventory-of-activities
approach that combined features of the checklist, open-ended
questionnaire, and interview methods.
The results of analyzing the A-6A B/N tasks were based on 84
surveys completed by operational B/Ns. Of six major operational
roles identified (communication, navigation, tactics, sensors,
armament, and system data processing), more time and effort was
spent (28 percent) in flight by the B/N performing the
navigation role than any other single role. Within the
navigation role, five duties were identified: (1) navigate using
Inertial Doppler systems, (2) using TACAN, (3) using ADF/UHF-ADF,
(4) using visual references/Dead Reckoning, and (5) using radar.
Over 98 tasks within those five duties were listed. The amount
of time and effort as well as the criticality of each task was
recorded, and a rank order listing of all tasks for these two
categories was presented.
In developing a task analysis for the B/N during radar
navigation (presented later in this section), this A-6A function
analysis for the B/N shows the importance of navigation in terms
of time and effort spent by B/Ns in the operational environment.
A measurement system that accurately describes B/N performance
during radar navigation would be extremely useful from this
standpoint. The time and effort and criticality rankings of this
source were also useful for those tasks that corresponded with
the same task or subtask in the current effort, and in
developing a performance measurement system that encompassed
critical tasks in terms of their contribution to overall mission
success.
b. Grumman A-6E TRAM Training Program
Grumman Aerospace Corporation completed a study on the
application of ISD methodology to the design of a training
program for A-6E TRAM FRS pilots and B/Ns in mid-1976.
Comprising over seven volumes, the study included a task
analysis, development of SBOs, media analysis, and formulation
of lesson specifications [Campbell, 1975; Campbell and Sohl,
1975; Campbell, et al., 1975; Hanish and Feddern, 1975; Graham,
et al., 1977]. The task analysis phase of the ISD process was
performed jointly by a team consisting of Navy Subject Matter
Experts (SMEs) and Grumman training psychologists, educational
specialists, and flight test personnel. Tasks were to be
identified based on performance in the operational environment
and described in sufficient depth to permit an identification of
the underlying skills and knowledge required by the crewmen to
perform the task. A hierarchical approach for describing the
pilot and B/N behaviors during a mission resulted in three
levels of description: major mission events, the tasks which
comprise the events, and the steps which describe the
incremental actions an aircrewman must take to complete a task.
The first result of the task analysis effort was a comprehensive
task listing comprised of over 400 nominal pilot tasks, each
with an average of approximately 10 steps; 7 airframe emergency
sequences involving an average of 7-10 steps each; 35 system
malfunctions; and more than 200 nominal B/N tasks with an
average of 10 steps each. The listings represented tasks for
which training needed to be conducted at the FRS level. A Task
Analysis Record (TAR) form was utilized for each task to
ascertain the following:
(1) crewman performing task, (2) where training was given, (3)
skills and knowledge required by task, (4) conditions under
which task is performed, (5) cues involved in performance, (6)
aircraft system involved, (7) degree of difficulty, (8) factors
in task difficulty, (9) task criticality, (10) factors in
performance measurement, and (11) other special factors which
impacted on training.
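For concreteness, the eleven TAR fields can be viewed as a fixed
record layout. The following sketch renders them as a Python
record type; the field names and the sample entry are
illustrative paraphrases, not reproductions of Grumman's form.

    from dataclasses import dataclass

    @dataclass
    class TaskAnalysisRecord:
        crewman: str                 # (1)  crewman performing task
        training_location: str       # (2)  where training was given
        skills_knowledge: str        # (3)  skills and knowledge required
        conditions: str              # (4)  conditions under which performed
        cues: str                    # (5)  cues involved in performance
        aircraft_system: str         # (6)  aircraft system involved
        difficulty: str              # (7)  degree of difficulty
        difficulty_factors: str      # (8)  factors in task difficulty
        criticality: str             # (9)  task criticality
        measurement_factors: str     # (10) factors in performance measurement
        special_factors: str         # (11) other special factors

    tar = TaskAnalysisRecord("B/N", "FRS", "radar tuning", "night, low level",
                             "scope return", "search radar", "high",
                             "time pressure", "high", "QUAL metric", "none")
    print(tar.criticality)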
Because the TAR was used for the purpose of in-
structional sequencing and blocking downstream in the ISD
process and as an aid in selecting appropriate instructional
strategies, it was not published as part of the study.
The actual task analysis appears in the form of an ISD record
developed from the TAR and SBOs. Objectives were classified on
the basis of eight major taxonomic categories: (1) knowledge,
(2) comprehension, (3) discrimination, (4) application, (5)
analysis, (6) synthesis, (7) evaluation, and (8) complex
performance. This taxonomy was retained for the current thesis
task analysis effort and will be defined in Table VII. The ISD
record then contained the SBO, task identification data,
condition/constraints, performance standard, taxonomic data, a
criterion test statement, and test type and format.
The Grumman task analysis effort becomes useful
to the current effort of developing a task analysis for the
B/N during the radar navigation maneuver, and using that task
analysis for the purpose of performance measurement.
In this
respect, the Grumman study was used as a guiding outline in
developing the current task analysis.
Prophet [1978] reviewed past ISD efforts in Navy fleet aviation
training program development, including the Grumman A-6E ISD
program, and made the following comments in reference to
measurement and evaluation for that program:

(1) Methodologies being followed did not necessarily require a
systematic treatment of measurement and evaluation.

(2) No discussion of the mechanics of measurement for standards
found in SBOs is given.

(3) While a clear recognition of when and where measurement will
take place is addressed, no information is given concerning how.

(4) The problems of flight versus non-flight measurement were
not discussed.
Although some criticism may be made of the lack of measurement
mechanics in the Grumman task analysis effort, this is not a
surprising revelation given the purpose of their task analysis.
Their effort is still commendable in light of the complexity of
the aircrew and A-6E combination, and this author used their
unclassified task analysis material in defining and describing
exactly what a B/N does during a radar navigation maneuver.
c. Perceptronics Incorporated Decision Task Analysis
In early 1980, a study designed to identify significant aircrew
decisions in Navy attack aircraft was performed by
Perceptronics, Inc. for the Naval Weapons Center, China Lake,
California. The study selected two mission scenarios that were
representative of A-6E and A-7E aircraft: close air support and
fixed target attack [Saleh, et al., 1980]. A mission analysis
followed by an Aircrew/Avionics Functions Analysis was performed
on each scenario. Finally, a decision identification analysis
was performed which resulted in a listing of significant
decisions in each mission for each aircrew. The study results
provided information on decision type, difficulty, and
criticality.
Limited use was made of this decision identification analysis,
due to the scenarios developed, the decision-making purpose of
that task analysis, and its dependence upon the previous efforts
by Grumman. Nevertheless, a few decisional tasks were reviewed
for use in the current effort.
3. Current Task Analysis for Performance Measurement
Using the research provided by the Naval Flight Officer function
analysis, the Grumman A-6E TRAM training program task analysis,
and the Perceptronics, Inc. decision task analysis, as discussed
previously, a task analysis was performed with the purpose of
measuring B/N performance during radar navigation in the A-6E
WST. The results of that effort, in the form of a task listing
(Appendix A), a task analysis (Appendix C), and a Mission Time
Line Analysis (MTLA; Appendix D), are each presented separately
below.
a. Task Listing
As shown in Appendix A, the radar navigation maneuver was
divided into three segments: (1) after takeoff checks, (2)
navigation to the initial point (IP), and (3) navigation to the
turn point (TP). The navigation to TP segment (3) was the
portion of the A-6E CAINS flight within the scope of this
thesis, and was the segment of interest to be later expanded
upon in the form of a task analysis and MTLA that will be
discussed later in this section.
The following definitions explain the significance of the
symbology within the navigation to TP segment of Appendix C:

(1) Tn - Task number, where the number is represented by "n."

(2) Sn - Subtask number.

(3) (a) - Subtask element, where the element is represented by
the lower case letter "a," or other letters.

The nomenclature for actual switches, keys, controls, or buttons
in the A-6E CAINS cockpit is underlined throughout the task
listing (e.g., Rcvr control). Discrete or continuous settings
for each switch, key, control, or button are to the far right of
the task, subtask, or subtask element, separated by a line of
periods.
The choice of language, in the form of the action verbs with
which behaviors are described, was a difficult process, due to
the lack of standardization both in the science of analyzing
tasks and in aircrew performance measurement research. The
necessity of employing action verbs that described simple and
easily observable activities and were easily identified in terms
of performance measurement was paramount to the current effort.
A hybrid taxonomy, using 31 action verbs as shown in Table VI,
was developed from earlier work by Angell, et al. [1964] that
was, in a sense, later validated by Christensen and Mills [1967]
in an analysis of locating representative data on human
activities in complex operational systems. Using a later study
by Oiler [1963], in the form of a human factors data thesaurus
as applied to task data, the original 50 action verbs used by
Angell, et al. [1964] were reduced to a total of 31 action verbs
by eliminating redundant synonyms and by using the recommended
acceptable action verbs and nouns. Except for the reduction of
action verbs (specific behaviors), the remainder of the original
taxonomy was preserved. For the convenience of the reader, the
31 action verbs or specific behaviors utilized in the current
task analysis are presented as a glossary (Appendix B).
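The synonym-elimination step can be pictured as a simple mapping
from free-form verbs onto the canonical set. The sketch below is
purely illustrative; the synonym pairs shown are invented and
are not taken from Oiler [1963].

    # Collapsing redundant synonyms onto canonical action verbs.
    # The pairs below are hypothetical examples, not Oiler's thesaurus.
    CANONICAL = {
        "watches": "Monitors",
        "scans": "Observes",
        "examines": "Checks",
        "notifies": "Informs",
        "presses": "Depresses",
    }

    def canonical_verb(verb: str) -> str:
        return CANONICAL.get(verb.lower(), verb.capitalize())

    print(canonical_verb("scans"))     # -> Observes
    print(canonical_verb("Adjusts"))   # -> Adjusts (already canonical)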
TABLE VI: CLASSIFICATION OF BOMBARDIER/NAVIGATOR BEHAVIORS

    PROCESSES       ACTIVITIES                       SPECIFIC BEHAVIORS

    Perceptual      Searching for information        Checks, Monitors,
                    Receiving information            Observes, Reads
                    Identifying objects,
                    actions, or events

    Mediational     Information processing           Initiates, Records, Uses
                    Problem solving and              Checkouts, Compares,
                    decision-making                  Continues, Delays,
                                                     Determines, Evaluates,
                                                     Performs, Repeats,
                                                     Selects, Troubleshoots

    Communication   Communicating                    Alerts, Informs,
                                                     Instructs

    Motor           Simple/discrete                  Activates, Depresses,
                                                     Places, Pushes, Throws,
                                                     Sets
                    Complex/continuous               Adjusts, Inserts,
                                                     Positions, Rotates,
                                                     Tunes

Source: Angell, et al. [1964] (adapted)
b. Task Analysis
A task analysis for the specific purpose of measuring B/N
performance during radar navigation was performed and is
presented as Appendix C. As previously discussed, only segment
three, navigation to TP, was examined during the task analysis
to limit the scope of this study. Since the concepts of segment
and tasks have already been addressed, the seven columns of the
A-6E TRAM radar navigation task analysis form in Appendix C will
now be explained in detail, using guidance provided by Van Cott
and Kinkade [1972], Anderson and Faust [1974], Pickrel and
McDonald [1964], Smode, et al. [1962], and Rosenmayer and Asiala
[1976]:
(1) Subtask - a component activity of a task. Within a task,
collectively all subtasks comprise the task. Subtasks are
represented by the letter "S" followed immediately by a numeral.
Subtask elements are represented by a small letter in
parentheses.

(2) Feedback - the indication of adequacy of response or action.

(3) Action Stimulus - the event or cue that instigates
performance of the subtask. This stimulus may be an
out-of-tolerance display indication, a requirement of periodic
inspection, a command, a failure, etc. Listed as VISUAL,
TACTILE, AUDITORY, or VESTIBULAR and located in the subtask
column for convenience only.
(4) Time - the estimated time in seconds to perform the subtask
or task element, calculated from initiation to completion.

(5) Criticality - the relationship between mission success and
the below-minimum performance or required excessive performance
time of a particular subtask or subtask element. "High" (H)
indicates poor subtask performance may lead to mission failure
or an accident. "Medium" (M) indicates the possibility of
degraded mission capability. "Low" (L) indicates that poor
performance may have little effect on mission success.
(6) Potential Error - errors are classified as either failure to
perform the task (OMIT), performing the task inappropriately in
time or accuracy (COMMIT), or performing sequential task steps
in the incorrect order (SEQUENTIAL).

(7) Skills Required - the taxonomy of training objectives used
for the Grumman task analysis was retained and is presented in
Table VII [Campbell, et al., 1977]. This concept will be
discussed in more detail later in this section.
(8) Performance Measure Metrics - a candidate metric which may
best describe the successful performance of the task or a
genuine display of the required skills. The types of metrics
suggested were classified as TIME (time in seconds from start to
finish of task), T-S (time-sharing, or the proportion of time
that a particular task is performed in relation to other tasks
being performed in the same time period), R-T (reaction time in
seconds from the onset of an action stimulus to task
initiation), ACC (accuracy of task performance), FREQ (number of
task occurrences), DEC (decisions made as a correct or incorrect
choice depending on the particular situation and mission
requirements), QUAL (quality of a task, especially in regard to
radar scope tuning quality), and SUBJ (subjective observation or
comprehension of task execution success by an instructor).
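Several of these metric classes follow mechanically from a
time-stamped event log of the kind a WST computer could produce.
The sketch below extracts R-T, TIME, FREQ, and a simple ACC
value from an invented three-event log; the event names and
sample values are hypothetical.

    # Extracting TIME, R-T, FREQ, and ACC from a hypothetical event log.
    events = [
        # (seconds, event)
        (100.0, "stimulus: waypoint cue"),
        (101.8, "task start: update INS"),
        (110.3, "task end: update INS"),
    ]

    stimulus_t = events[0][0]
    start_t    = events[1][0]
    end_t      = events[2][0]

    r_t   = start_t - stimulus_t            # R-T: reaction time (s)
    time_ = end_t - start_t                 # TIME: task duration (s)
    freq  = sum(1 for _, e in events if e.startswith("task start"))  # FREQ

    entered, truth = 124.35, 124.30         # invented entry vs. correct value
    acc = abs(entered - truth)              # ACC: absolute entry error

    print(f"R-T={r_t:.1f}s TIME={time_:.1f}s FREQ={freq} ACC={acc:.2f}")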
Due to the lack of operational data, the task analysis was
derived analytically, with close attention being paid to
consistency with previous A-6 task analysis efforts. The
validation of any task analysis can only occur when it is
subjected to the operational environment for repeated empirical
analysis. Unfortunately, time, cost, and system availability
constraints precluded the execution of this important phase.
TABLE VII: TAXONOMY OF SKILLS REQUIRED
Knowledge

Technological - Learning "how to" perform a single switch and
control configuring procedure. Learning "how to" read meters,
digital displays, scopes, lighting displays, etc. In general,
learning "how to."

Formal - Learning the meaning of special symbols, acronyms,
words, nomenclature, etc.

Descriptive - To describe "what is" and "what was": facts, data,
special information about systems, subsystems, equipment,
weapons, tactics, missions, etc.

Concepts and Principles - Fundamental truths, ideas, opinions
and thoughts formed from generalizations of particulars.

Comprehension

Understanding the meaning of meter readings, scope, digital and
lighting displays. Understanding the switch and control
configuring procedure, i.e., the reason for a specified
sequence, the reason for a switch or control position, the
reason for a verification, etc. Grasping the meaning of concepts
and principles, i.e., understanding the basic principles of
infrared and radar detection. Understanding the meaning of
facts, data, specific information, etc.

Discrimination

Distinguishing among different external stimuli and making
appropriate responses to them, e.g., scanning gages for
out-of-tolerance trends. Also includes the recognition of the
essential similarity among a class of objects or events, e.g.,
classifying aircraft types or radar return images.

Source: Campbell, et al. [1977].
TABLE VII (Continued)
Application

Simple Procedure - A demonstration of a simple learned procedure
in the cockpit or simulator requiring not more than simply
repeating required switch and control configuring and simple
visual verification (i.e., advisory light status).

Complex Procedure - A demonstration of a learned procedure in a
cockpit or simulator that requires differentiating or
distinguishing between readings on meters, digital displays, and
images on video and radar displays, and interpreting and
applying the meaning of the readings and images.

General - Using learned materials in new and concrete situations
(e.g., using rules, methods, concepts, principles, procedures,
etc.).

Analysis

A demonstration of a learned process of breaking down material
(i.e., data, other information) into its components so that it
may be evaluated with respect to crew's safety, mission success,
A/C maintenance, etc.

Synthesis

A demonstration of a learned process, i.e., putting tactical
elements together (e.g., weapons, targets, available systems,
A/C capability, etc.) to formulate a mission.

Evaluation

A demonstration of a learned process of assessing or judging a
system or situation, based on criteria (i.e., data, rules,
available equipment, conditions, etc.) and then reaching a
conclusion based on this assessment.

Complex Performance

A demonstration that requires psychomotor skills and/or critical
thinking skills, usually requiring practice.
As it stands, a reasonable assumption of the existence of
some face validity in the current task analysis can be made
in the light of the author's operational experience as a B/N
in the A-6E CAINS aircraft
(over 600 hours)
and the dependence
of the task analysis upon previous task analysis efforts, even
though none of the previous efforts were formally validated
by empirical methods.
The current task analysis was also
informally reviewed by other A-6E B/Ns before finalization of
the effort.
The purpose of the task analysis was to improve
current performance measurement of the B/N during radar navi-
gation in the WST by providing performance measure metrics
(right-hand column of Appendix C) that are possible candidates
for describing successful task performance or B/N skill acqui-
sition.
Several hundred metrics are available from which a
candidate set can be chosen based on the initial measure selection criteria as previously discussed in Chapter IV.
From the
"performance measure metrics" column of Appendix C, several
potential candidate measures were identified and will be combined with potential measures from Table II in Chapter IV and
presented as part of the final candidate measure set listed
in Table XI (Chapter VII).
c. Mission Time Line Analysis
An MTLA is a graphical analysis which relates the
sequence of tasks to be performed by the operator to a real
time basis [Matheny, et al., 1970].
126
The purpose of an MTLA
as used in the current study is to identify those performance
measurement points within a man-machine system where standards of accuracy and time may be applied in the evaluation process. Essentially a bar chart, an MTLA for the navigation-to-TP segment of the radar navigation maneuver is presented as Appendix D.
The time of execution for each subtask was extracted from
estimated completion times on the task analysis record form
(Appendix C).
Time was estimated with the assumption that
sensing conditions were good and the B/N was highly skilled.
Darkly shaded time lines represent tasks that demand full mental
attention whereas shaded time bars represent "monitoring" tasks
or "troubleshooting" tasks that may not have to be executed.
The MTLA is a performance measurement source for both the identification of critical subtasks and the use of time to perform as a measure of skilled behavior. Thus, the MTLA was utilized to identify candidate performance measures as found in the "performance measure metric" column of the task analysis record form (Appendix C) that were later used for the final candidate measure set as will be described in Chapter VII (Table XI).
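As a sketch of the bookkeeping an MTLA implies (the subtask names and times below are hypothetical, not taken from Appendix D), subtasks can be laid on a common time base and full-attention tasks separated from monitoring tasks:

    # Each subtask: (name, start s, end s, demands full mental attention?)
    subtasks = [
        ("select next steering point",  0.0,  5.0, True),
        ("tune radar scope",            5.0, 15.0, True),
        ("monitor system status",       5.0, 60.0, False),  # concurrent task
        ("identify turn point",        50.0, 58.0, True),
    ]

    leg_end = max(end for _, _, end, _ in subtasks)
    full_attention = sum(end - start for _, start, end, full in subtasks if full)
    print(f"leg length {leg_end:.0f} s; full-attention time {full_attention:.0f} s")

    # Time to perform each subtask is itself a candidate measure:
    for name, start, end, full in subtasks:
        kind = "full attention" if full else "monitoring"
        print(f"{name}: {end - start:.0f} s ({kind})")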
4. B/N Skills and Knowledge
This section will relate current skill acquisition
principles to performance measurement, and present a skill
acquisition model of the relationship between skill acquisition, the task, performance measurement, and performance
evaluation.
A great deal of discussion about the concept of
127
skill, how skill is attained, and how skill acquisition is
measured can be found in the literature from such diverse
areas as private industry, control theory, information processing, and education [Bilodeau, 1966 and 1969; Jones, 1970;
Welford, 1971; Hulin and Alvares, 1971 and 1971; Singleton,
1971; Leshowitz, et al.,
1974;
Shipley, 1976; Welford, 1976].
Despite the global interest in skill, this discussion will be
limited to aircrew skill acquisition and the measurement of
that skill.
a. Definition of Skill
Skill may be defined as the ability to perform given tasks successfully or competently in relation to specified standards [Cureton, 1951; Senders, 1974; Smit, 1976; Prophet, 1978]. A more precise definition of skill is offered by Connelly, et al. [1974]:
The ability to use knowledge to perform manual operations
in the achievement of a specific task objective in a
manner which provides for the elimination of irrelevant
action and erroneous response.
This conceptualization
exists only in conjunction with an individual task and
is reflected in the quality with which this task is
performed.
Most definitions of skill rely on the fundamental concept that skill is generally characterized by the efficient and effective use of capacities as the result of experience and practice [Welford, 1976]. Indeed, the concept of skill cannot be well defined due to the diversity of its nature and remains more or less an amorphous quantity, best described by its characteristics [Singleton, 1971]:
128
1. It is continuous; there is always an extensive overlap and interaction. Even in principle, it cannot be analyzed by separation into discrete units along either space or time axes.
2. It involves all the stages of information processing identifiable in the organism, basically inputs, processing and outputs.
3. It is learned and therefore highly variable within and between individuals.
4. There is a purpose, objective, or goal providing meaning to the activity.
b. Skill Acquisition
The development of skill, as previously discussed,
is due mainly to the effects of practice and experience on the
use of basic capacities.
Therefore, the acquisition of skill
appears to result from learning and seems to improve the efficiency and effectiveness of underlying basic capacities
[Welford, 1976].
Singleton [1971] advanced the idea that
skill develops by selectivity and by the integration of activities.
For most skill development theories, it is generally agreed that as learning a new task takes place, operators learn a basic strategy in performing the task that in effect becomes an increasingly skilled template with the qualities of organization and efficiency of operation [Engler, et al., 1980].
Depending on the task, the level of skill required to perform the task is universally measured with the property of variability. Bowen, et al. [1966] found considerable variability, as measured by a lack of consistency, for all skill levels of pilots performing tasks in an OFT, even pilots with
129
substantial flight experience.
It thus appears that as an operator begins to learn a new task, his control strategy in performing the task is highly inefficient, resulting in a large variability of actions.
As skill development progresses,
his control strategy becomes highly efficient and effective,
resulting in what should be smaller variability of actions.
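A minimal sketch of variability as a skill index, consistent with the discussion above; the trial data are invented for illustration, and smaller trial-to-trial spread indicates greater skill:

    import statistics

    # Hypothetical turn-point fix errors (nm) over successive trials
    early_trials = [2.1, 0.4, 1.8, 0.2, 1.5]   # erratic: learning the task
    late_trials  = [0.5, 0.4, 0.6, 0.5, 0.4]   # consistent: skilled behavior

    for label, errors in (("early", early_trials), ("late", late_trials)):
        print(f"{label}: mean={statistics.mean(errors):.2f} nm, "
              f"s.d.={statistics.stdev(errors):.2f} nm")
    # The drop in standard deviation, not just the mean, signals skill acquisition.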
Three phases of skill development have been hypothesized in earlier research by Fitts [1962] and discussed in terms of aircrew skill acquisition by Smode, et al. [1962] and Prophet [1976]. The stages of skill acquisition are discussed below.
(1) Early Skill Development.
In this phase the
student seeks to develop a cognitive structure of the task
in the form of discriminating the task purpose, ascertaining
standards of task performance, and interpreting performance
information feedback.
Actions tend to be slow and deliberate,
and depend a great deal on concentrated attention and effort
in performing the task.
(2) Intermediate Skill Development.
After
learning the task purpose and experiencing some practice at
the task, the student begins to organize his control strategy
by becoming more efficient in responding to signals displayed
to him.
Perceptual and response patterns become fixed with
less reliance placed on verbal mediation of response integra-
tion by the student.
130
(3) Advanced Skill Development.
This phase represents the higher level of skill acquisition, where performance becomes more resistant to stress and activities are performed concurrently. The rate at which this stage is acquired through practice differs for each individual, as practice on any complex task generally spreads individuals out into stable but different skill levels [Jones, 1970]. Navigating an A-6E CAINS aircraft falls into this category of complex tasks. This stage is characterized by the individual performing in an automated manner requiring little conscious awareness and little allocation of mental effort [Norman, 1976].
c. Measurement of Skill Acquisition
Figure 3
is a model developed by the author to
illustrate the relationship among B/N skill acquisition, the
radar navigation task, and performance measurement and evaluation.
An understanding of the model depends heavily upon
concepts defined and discussed in Chapter IV and in the early
part of this chapter.
This model will be used for the current
discussion of skill acquisition measurement.
The actual measurement of skill acquisition through its various stages has received little practical attention and research, most likely due to the complexity of the subject and the difficulty involved in accurately assessing human performance as system complexity increases [Glaser and Klaus, 1966; Vreuls and Wooldridge, 1977; Kelley and Wargo, 1963; Senders, 1974].
131
[Figure 3. Skill acquisition measurement and evaluation model; diagram not recoverable from the scan. Legible labels include "Criteria Met/Not Met."]
132
As shown in Figure 3, through the
process of learning and practice of the task over several
trials, the student
(represented by the oval shapes in the
center column) should progress from the early or unskilled
state through the intermediate stage and into the skilled or
"proficient" stage where he is "trained."
Over the course of
one task trial, the most that can be accomplished is to obtain
objective performance measures and combined measures, and to
obtain a subjective opinion of the skill level from the one
individual well qualified and skilled in performing the task:
an instructor.
Once this "indirect" measurement takes place, a comparison is made between the objective and subjective measures and the predefined performance criteria or MOEs. It is from this comparison that the student's skill level is finally evaluated, with severe limitations imposed due to the measurement over one trial.
Skill acquisition through the
three stages of development occurs not only at different rates,
but over the course of several trials.
This fact would lend
support to a measurement system that indirectly measured skill
development over one trial and used historical records of performance to measure skill development over several task trials.
The model shows highly likely, likely, and very unlikely evaluation results by assuming that both criteria and measures are valid, reliable, and specific to the purpose of measuring skill acquisition.
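The met/not-met comparison of Figure 3 reduces to checking each measure against its criterion; a sketch with hypothetical measure names and MOE values:

    # One-trial evaluation: compare objective measures to predefined criteria.
    criteria = {"fix_error_nm": 1.0, "tp_time_error_s": 10.0}   # hypothetical MOEs
    measures = {"fix_error_nm": 0.7, "tp_time_error_s": 14.0}   # one trial's results

    for name, limit in criteria.items():
        status = "MET" if measures[name] <= limit else "NOT MET"
        print(f"{name}: {measures[name]} vs. criterion {limit} -> {status}")
    # A single trial supports only this indirect judgement; the trend across
    # trials requires historical records, as noted above.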
133
Early conceptions of measuring aircrew skill were discussed by Smode, et al. [1962] and Angell, et al. [1964]. Both structured tasks or skills into a hierarchical model and theorized what types of measures (e.g., time, accuracy, frequency, etc.) would be appropriate for a particular level of task or skill.
The former study also discussed measurement at the three stages of skill acquisition: (1) measurement at the first stage should be concerned with knowledge and task familiarity as well as distinctions between task relevant and task irrelevant information and cues, and a differentiation between in-tolerance and out-of-tolerance conditions; (2) the intermediate stage has measurement concerned with procedure learning, the identification of action stimuli, and the performance of manipulative activities; and (3) measurement of highly developed procedural, perceptual-discriminative motor and concept-using skills and the integration of these combinations into more complex units of performance is of concern.
An experiment by Ryack and Krendel [1963], based on research by Krendel and Bloom [1963], measured highly skilled
pilots performing a tracking task using a laboratory apparatus.
The measurement was based on a theory that a highly skilled
pilot displays consistency of system performance, is highly
adaptable to changing dynamic requirements, and performs the
task with least effort.
The transfer of the measurement of
these three conceptualizations of high pilot skill from the
laboratory to an actual aircraft was not demonstrated.
134
Later conceptions of skill acquisition measurement were advanced by Welford [1971 and 1976], who proposed that as practice on a task increased, the speed of performance on that task as measured by time would fall exponentially. Haygood and Leshowitz [1974] proposed using an information processing model to measure flying skill acquisition. Bittner [1979] evaluated three methods for assessing "differential stability": (1) graphical analysis, (2) early versus late correlational Analysis of Variance (ANOVA), and (3) the Lawley Test of Correlational Equality. That study recommended graphical analysis as a method of first choice.
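Welford's proposal lends itself to a simple curve fit. A sketch, assuming NumPy and SciPy are available; the trial times are invented:

    import numpy as np
    from scipy.optimize import curve_fit

    def practice_curve(n, a, b, c):
        """Task completion time on trial n decays exponentially toward c."""
        return a * np.exp(-b * n) + c

    trials = np.arange(1, 11)
    times = np.array([62, 48, 41, 35, 33, 30, 29, 28, 28, 27], float)  # hypothetical

    (a, b, c), _ = curve_fit(practice_curve, trials, times, p0=(40.0, 0.3, 25.0))
    print(f"asymptotic task time ~ {c:.1f} s, learning rate b = {b:.2f}")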
Three recent experiments regarding measurement of aircrew skill acquisition are noteworthy. Vreuls, et al. [1974] used multiple discriminant and canonical correlation analyses to discriminate between different levels of skill using four pilots in an F-4E configured simulator. Using six pilots in a UH-1B (helicopter) simulator, Murphy [1976] investigated individual differences in pilot performance by measuring both man-machine system outputs and pilot control outputs during an instrument approach and landing. This study concluded that performance differences may be attributed to crewmember differences in cognitive styles, information processing abilities, or experience. Pierce, et al. [1979] concentrated on procedures to assess cognitive skills through the use of behavioral data on eight pilots performing F-4 aircraft pop-up maneuvers. The primary measurement instrument
135
was an instructor using subjective ratings that were validated
by comparison to actual bomb scores.
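In the same spirit as Vreuls, et al. [1974], discriminating skill levels from candidate measures might be sketched with an off-the-shelf discriminant analysis (scikit-learn assumed; the measures and labels below are invented):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Rows: trials; columns: two hypothetical candidate measures
    # (fix error in nm, time-to-identify in s). Labels: 0=novice, 1=skilled.
    X = np.array([[1.8, 40], [1.5, 35], [2.2, 44],
                  [0.4, 20], [0.5, 18], [0.3, 22]])
    y = np.array([0, 0, 0, 1, 1, 1])

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("discriminant weights:", lda.coef_)   # which measures separate groups
    print("predicted skill class:", lda.predict([[0.6, 24]]))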
From this previous research and the discussion of the skill acquisition measurement model, it becomes readily apparent that measurement of B/N skill acquisition in the A-6E WST during radar navigation will require both an analytical foundation, as described in this thesis, and empirical validation that would result from implementation of the proposed measurement system. This section is concluded with six recommendations for the procedure of skill appraisal, as discussed by Singleton [1971]:
(1) Discuss the skilled activity almost ad nauseam with the individuals who practice it and with those to whom and for whom they are responsible. It is not enough to pop in at intervals; the investigator must spend whole shifts and weeks with the practitioners to absorb the operational climate.
(2) Try to make this verbal communication more precise by using protocol techniques, critical incident techniques, good/poor contrast techniques, and so on.
(3) Observe the development of the skill in trainees and by analysis of what goes on in the formal and informal training procedures and in professional assessment. Make due allowance for history, tradition, technological change, and so on.
(4) Structure the activity. Identify the dimensions of the percepts, the decision making, the strategies of action and the overt activities, and try to provide scales of measurement along each dimension.
(5) Check as many conclusions as possible by direct observation, performance measurement, and by experiment.
(6) Implement the conclusions and provide techniques for assessing the limitations and successes of the innovations.
136
VI. A-6E WST PERFORMANCE MEASUREMENT SYSTEM
The A-6E WST, Device 2F114, is designed to provide full
mission capability for pilot transition training, B/N transition training, integrated crew training, and maintenance
of flight and weapon system proficiency in the A-6E Intruder
aircraft.
The WST will be used to train Navy/Marine flight crew members in all A-6E procedures - ground handling, normal and emergency flight modes, communications, navigation and cross-country missions, tactics, and crew coordination [Read, 1974].
Inherent in these design and training requirements is
the necessity for the measurement of aircrew performance and
the subsequent measurement of improved performance after
training has occurred; a necessary goal for any training simulator [Knoop, 1968].
This section will discuss the general
characteristics of the WST together with current performance
measurement capabilities, generic performance measurement
systems (PMS) for simulators, and current performance measurement and evaluation practices for student B/Ns in the WST.
A. WST CHARACTERISTICS
1. General Description
The trainer system consists of the following elements: Trainee Station, Instructor Station, Simulation Area, and Mechanical Devices Room (see Figure 4).
137
[Figure 4. Device 2F114, A-6E WST; diagram not recoverable from the scan. Labeled areas: Trainee Station, Instructor Station, Simulation Area, Mechanical Devices Room.]
138
The Trainee Station
is an exact replica of the A-6E CAINS cockpit and is mounted on a six-degree-of-freedom motion base to give realistic motion cues. Sound cues, environmental controls, and controls with natural "feel" increase the similarity between WST and aircraft cockpits. Normal and emergency flight configurations are simulated, together with all modes of weapon system operation.
The Instructor Station area consists of a wrap-around console that can accommodate two principal instructors and two assistant instructors. Controls and displays are utilized by the instructors to: (1) set up and control the training problem, (2) introduce malfunctions and failures, (3) monitor trainee actions and responses to malfunctions, and (4) evaluate trainee performance. Four interactive CRT displays for alphanumeric and graphic presentations, together with repeater displays of the VDI, direct view radar indication (DVRI), and electronic countermeasures (ECM) found in the aircraft, are available to the instructors for use in training.
The Simulation Area contains the computers necessary for simulation of the A-6E CAINS aircraft and its missions. Four real-time minicomputers are utilized, two for flight, one for tactics, and one for the Digital Radar Land Mass Simulation (DRLMS). Magnetic tape units, teletypewriter printers, digital conversion equipment, and the DRLMS are also contained in this area. The DRLMS is designed to simulate landmass radar return for the AN/APQ-156 radar, which
139
is a major component of the A-6 navigation/weapon system and is used by the B/N for the tasks of radar scope interpretation and target location.
The Mechanical Devices Room contains hydraulic and
power equipment for positioning of the Trainee Station motion system. Also provided from this area is compressed air needed for g-suit and environmental control requirements.
2. Performance Evaluation System
A comprehensive list of system features and characteristics is beyond the scope of the present effort, but can be found in the Grumman Aerospace Corporation Final Configuration or Criteria Reports for the A-6E WST [Blum, et al., 1977, 1977, and 1977; Rinsky, 1977]. Those features which are of interest to performance evaluation in the WST are shown in Figure 5 and discussed below:
a. Program Mission Modes are provided for up to ten missions. In this mode, the computer system automatically generates and sequences mission profiles. During this mode, the instructor station monitor displays a listing of the mission leg number, maneuver to be performed, mission leg end, parameters to be monitored, and remarks in order of occurrence. Programmed missions are activated by controls from the instructor station.
b. Program Mission Mode Critiquing is provided by instructor selection of up to six parameters for monitoring on each leg of the program mission.
140
The system computes the
[Figure 5. A-6E WST performance evaluation features; diagram not recoverable from the scan.]
141
difference between the parameter and a tolerance value for the parameter preselected by the instructor. When the tolerance is exceeded for a selected parameter, the exceeded amount is displayed to the instructor and a printout record is provided at rates of every 5 seconds, 10 seconds, 15 seconds, 30 seconds, 1 minute, and 2 minutes on a printer/plotter. Available parameters are shown in the left-hand column of Figure 5.
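A sketch of the tolerance-exceedance logic just described; the parameter names and tolerance values are illustrative, not the WST's actual parameter set:

    # Flag and log exceedances of instructor-preselected tolerances.
    tolerances = {"altitude_ft": 200.0, "airspeed_kt": 10.0}   # allowed deviations

    def check_exceedances(sample: dict, commanded: dict) -> dict:
        """Return the amount by which each monitored parameter exceeds tolerance."""
        out = {}
        for name, tol in tolerances.items():
            deviation = abs(sample[name] - commanded[name])
            if deviation > tol:
                out[name] = deviation - tol   # the exceeded amount is displayed
        return out

    # e.g., one 30-second recording tick:
    print(check_exceedances({"altitude_ft": 1250, "airspeed_kt": 418},
                            {"altitude_ft": 1000, "airspeed_kt": 420}))
    # -> {'altitude_ft': 50.0}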
c. Procedure Monitor Display is a mode in which all steps of up to any two procedures, normal or emergency, appear automatically on the instructor's display system when called up by the instructor. The text for the procedures and malfunctions is a listing of the steps to be performed by the trainee and an indication of the elapsed time required by the trainee to complete the procedure. This mode is also available during the Programmed Mission Mode.
d. Parameter Recording is available in all modes of WST operation. A minimum of six parameters may be simultaneously recorded on a continuous basis as a function of time, and compared to preselected tolerance values as discussed in (b) above. The parameter record mathematical model program is available in the flight simulation computer.
e. CRITIQUE Mode calculates miss distance and clock code of trainee-delivered weapons on an instructor-designated target by the use of a Scoring Math Model. Bomb circular error probable (CEP) is calculated using available functions
142
and parameters at time of release. Missile releases are scored as "hit" or "miss" based on computed comparisons to a respective missile envelope. A CRITIQUE display for a permanent record of the results is available.
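For illustration, miss distance, clock code, and CEP can be computed from impact coordinates as follows. The WST's Scoring Math Model is not reproduced here; this sketch uses the common definition of CEP as the median radial miss, with 12 o'clock taken as due north:

    import math

    def score_impact(dx_ft: float, dy_ft: float):
        """Miss distance and clock code of one impact relative to the target.

        dx is east and dy is north of the target.
        """
        miss = math.hypot(dx_ft, dy_ft)
        bearing = math.degrees(math.atan2(dx_ft, dy_ft)) % 360
        clock = round(bearing / 30) or 12          # 0 o'clock reads as 12
        return miss, clock

    impacts = [(120, -40), (-60, 80), (30, 30)]    # hypothetical impacts (ft)
    misses = sorted(score_impact(dx, dy)[0] for dx, dy in impacts)
    cep = misses[len(misses) // 2]                 # median radial miss as CEP
    print(f"CEP ~ {cep:.0f} ft")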
f. Event Recording is provided where up to thirty events selected by the instructor are monitored by the Performance Evaluation System. After the event is selected for recording by the instructor, a printout is initiated which contains a statement of the event, other parameter values, and the time of occurrence. Table VIII is a listing of available events for recording along with other recorded parameters.
g. Audio voice recording with a time mark and rapid recall function permits the instructor to access desired portions of trainee headset radio during and after a training mission. All pertinent communications can be recorded for up to 2.5 hours.
h. Navigational computations, display drive signals, and positional readouts were designed to be within 0.1 nautical mile of true position based on ground speed and true course.
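The positional bookkeeping implied by this specification is ordinary dead reckoning from ground speed and true course; a flat-earth sketch:

    import math

    def dead_reckon(north_nm: float, east_nm: float,
                    ground_speed_kt: float, true_course_deg: float,
                    dt_s: float):
        """Advance a flat-earth position (nm north/east) over dt seconds."""
        dist_nm = ground_speed_kt * dt_s / 3600.0
        d_north = dist_nm * math.cos(math.radians(true_course_deg))
        d_east = dist_nm * math.sin(math.radians(true_course_deg))
        return north_nm + d_north, east_nm + d_east

    # 420 kt on true course 090 for one minute -> 7 nm due east
    print(dead_reckon(0.0, 0.0, 420, 90, 60))   # (approx. 0.0, 7.0)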
i. A Versatec electrostatic Printer Plotter unit is furnished at the instructor console area and has the capability of simultaneously printing and plotting parameter recordings, Program Mission parameter recordings, and event recordings during Free Flight, Program Mission, and Demonstration Maneuver/Trainee mode training missions.
143
TABLE VIII. A-6E WST EVENT RECORDING SELECTION

1. Airborne
2. Gear-up and locked
3. Flaps/Slats-up
4. Stability Augmentation Engaged
5. Search Radar Switch Stby/on
6. Doppler Radar Switch Stby/on
7. Chaff emitted
8. Isolation Valve Switch FH/Land
9. Fuel Dump (wing or FUS) - on/secured
10. Tank Pressure Switch - on/off
11. Present Position Correct Button - Depressed
12. Computer - on/off
13. Computer Error Lite Lit
14. Master Caution Lite Lit
15. ALQ-126 Switch - Rec/Repeat
16. Reselect Lite Flashing/Steady
17. Master Arm - on/off
18. Attack; Step In to/Out of
19. Bomb Release
20. Commit Trigger - Depressed
21. AZ Range Switch - on/off
22. Velocity Correct Switch - Memory/Off Save
23. Track-While-Scan; on
24. Computer - Out of Attack
25. Throttle(s) below 75 percent
26. Gear Handle Down
27. Flap/Slat Lever - 30/40 degrees
28. Touchdown
29. Ram Air Turbine - in/out
30. Designate - on/off

[Other event parameters recorded with selected events include IAS, ALT, AOA, and Slant Range; the alignment of the "Other Event Parameters" column was not recoverable from the scan.]

Source: Blum, et al. [1977]
144
This unit can also print out any display type designated by the instructor, with a maximum of 20 printouts possible during any 2.5-hour mission.
j. The tactics computer exercises master control of
the system and includes functional control of the attack navigation system, system displays, weapons release system, in-flight refueling system, ECM, threats, magnetic variations, Programmed Missions, dynamic replay, malfunctions, instructor display system, malfunction control, displays, instructor flight control, demonstration maneuvers, CRITIQUE mode, and others. This computer was installed with future hardware and software growth for input-output, memory core, and computation as a design specification.
The A-6E WST appears to have an impressive objective performance measurement capability. The hardware and software computer system was designed with objective performance measurement in mind, although no definite model or technique was provided by the designers for evaluating B/N skill acquisition during a radar navigation mission. The foundation has been laid for objective measurement; all that remains is building a sound performance measurement structure based on principles and models that have been examined and evaluated thus far.
B. PERFORMANCE MEASUREMENT SYSTEMS
Measuring aircrew performance can be viewed as a system within itself. Every system consists of an assemblage or
145
combination of objects or parts forming a complex or unitary whole with definable characteristics and a common purpose. The purpose of the performance measurement system (PMS) examined here will be to provide FRS instructors and training managers with valid, reliable, and objective information needed to guide decisions about trainee skill acquisition. The PMS for the simulator has definable components, functions, inputs, outputs, communication links, and procedures that all interact to form a system that may or may not be efficiently designed or implemented.
Criteria for PMS selection may outweigh some desirable system characteristics as well as the optimal allocation of functions to a particular component. After a PMS has been analyzed, functions allocated, and system criteria selected, implementation of the PMS within the operational environment may impose further constraints that cause redesign of the system. The interactions of PMS analysis, functional allocation, criteria, and implementation are discussed below.
1. Systems Analysis
Since the purpose of the PMS being discussed is to
provide information about student skill level to training personnel for accurate training control decision-making, this
system can be viewed from an information-processing approach.
Information in the form of data is sensed and collected, recorded and processed, and presented in an output form that is
useful for performance evaluation purposes.
These functions
of the system are interdependent and may be served by the
146
same component. Major components of the PMS are instructors and computers with data storage capability. Discussion of the PMS analysis follows.
a. Data Sensing and Acquisition
Performance must be observed to be sensed and collected. Performance measurement considerations in data collection include but are not limited to: (1) mission purpose, (2) flight regime, (3) maneuver performed, (4) tasks, (5) skills required, (6) operator physiological output measures, (7) aircrew-aircraft system output measures, (8) mission results, (9) aircraft measures, (10) flight management, (11) procedural control, (12) aircraft systems management, (13) operator motivation, and (14) historical data.
Sensing and collecting performance information in a simulator can be accomplished by: (1) mechanical and electronic devices including digital computers, and (2) direct human observation [Angell, et al., 1964]. The first category is usually referred to as "automated" measurement devices, and may include video/photo recorders, audio/digital recorders, timers and counters, graphic recorders, and plotters [Smode, et al., 1962; Angell, et al., 1964; Obermayer, et al., 1974; Hagin, et al., 1977]. Direct human observation may or may not be standardized by preplanned performance checklists or instructions.
Video/photo recorders provide permanent records of performance and are suitable for instructors to use in
147
observing performance more objectively because of playback features. Audio/digital recorders involve recording of communications and direct measurement conversion to digital form by a computer for selected observable parameters. Audio recordings may be utilized as a more objective measurement for instructor use but currently have data conversion limitations. Digital recording of discrete and continuous measures from all levels of aircrew-aircraft system performance has been demonstrated in both simulators and in actual aircraft flight over the past twenty years [Wierwille and Williges, 1978; Mixon and Moroney, 1981]. Timers and counters are suitable as auxiliary components to digital computer measurement for both time and frequency performance measures [Angell, et al., 1964]. Graphic recorders are electromechanical in operation and provide continuous records of event states and magnitudes along a time continuum. Graphic recorders are usually classified as either event (discrete performance) or continuous (magnitude of continuous variable) [Angell, et al., 1964]. Plotters display information in Cartesian or rectangular coordinates, and are useful for both performance data collection and output.
Charles [1978] thoroughly studied the role of the instructor in a simulator and determined one function of the instructor was to monitor performance in the form of student procedures, techniques, skill level, and simulator performance. Indeed, the direct observation of student performance may
148
sometimes be the sole source of valuable performance measurement, especially when unexpected events occur during a simu-
lator mission [Smode, et al., 1962].
As previously mentioned,
video recording with playback capability improves the human
observation data collection method.
Usually, performance
measurements resulting from this technique must be converted
for digital computer use in the data playback and processing
stage.
b. Data Processing and Analysis
Once performance data are sensed and collected by mechanical or human means, some conversion is usually required to make the raw data more useful for the system purpose, i.e., to provide information for accurate performance evaluation. Usually all data are converted to a digital format appropriate to the general purpose computer.
It is in this
stage where computers and peripheral equipment such as input/
output devices, memory core units, and magnetic tape drives
are extremely accurate, efficient and cost-effective as com-
pared to human processing of data, although some data types
may not be convertible to a digital format and must be carried
to the system output stage in raw form.
In this stage, video recordings are usually reviewed by the instructor to increase the objectivity of his direct human observation of performance. For the interested reader, Obermayer and Vreuls [1974] present a more detailed account of data playback and processing components and interactions.
149
c. Data Presentation
After data analysis, the data will be available
as output measures for the evaluation process.
The output
format may be numerical, graphical, audio, visual, or some
other form.
Since the evaluation process involves the com-
parison of performance data to standards or criteria, some
of the performance data may be utilized as criteria for sub-
sequent evaluation use.
Most likely, the data output will
be typical measures of time, accuracy, and frequency for
various task levels.
Some measures may be combined in the
processing stage and used as output data for comparisons to
established MOEs.
2. Allocation of Functions
One result of the systems analysis of the simulator
PMS was to identify functions that have to be performed.
Given there may be an option as to whether any particular
function should be allocated to the human or a machine, some
knowledge of the relative capabilities of humans and machines
would be useful for determining the allocation of functions.
Some relative capabilities among mechanical devices and human
observers were discussed in the previous section but more
detail is necessary.
Using the results of McCormick [1976], Buckhout and Cotterman [1963], Obermayer, et al. [1974], Angell, et al. [1964], and Coburn [1973], the capabilities of humans and machines for the purpose of performance measurement in the simulator are presented in Table IX.
150
TABLE IX: HUMAN AND MACHINE CAPABILITIES AND LIMITATIONS
HUMAN CAPABILITIES
Detect stimuli against background of high noise (CRT).
Recognize patterns of complex stimuli (DVRI).
Sense and respond to unexpected events.
Store large amounts of diverse information for long periods.
Retrieve information from storage (with low reliability).
Draw upon experience in making decisions.
Reason inductively, generalizing from observations.
Apply principles to solutions of varied problems.
Make subjective estimates and judgements.
Develop entirely new solutions.
Select only most important events for sensing inputs.
Acquire and record information incidental to primary
mission.
High tolerance for ambiguity, uncertainty, and vagueness.
Highly flexible in terms of task performance.
Performance degrades gradually and gracefully.
Override own actions should need arise.
Uses machines in spite of design failures or for a
different task.
Modify performance as a function of experience.
MACHINE CAPABILITIES
Sense stimuli beyond man's range of sensitivity.
Apply deductive reasoning when classes are specified.
Monitor for prespecified frequent and infrequent events.
Store coded information quickly and in quantity.
Retrieve coded information quickly and accurately.
151
TABLE IX (Continued)
MACHINE CAPABILITIES (Continued)
Process quantitative information.
Make rapid, consistent, and repetitive responses.
Perform repetitive and concurrent activities reliably.
Maintain performance over time.
Count or measure physical quantities.
Transfer function is known.
Data coding, amplification, and transformation tasks.
Large channel capacity.
Not influenced by social and physiological factors.
HUMAN LIMITATIONS
Sense stimuli within a limited range.
Poor monitoring capability for activities.
Mathematical computations are poor.
Cannot retrieve large amounts of information rapidly and
reliably.
Cannot reliably perform repetitive acts.
Cannot respond rapidly and consistently to stimuli.
Cannot perform work continuously over long periods.
Requires time to train for measurement and evaluation.
Expectation set leads to "see what he expects to see."
Requires review time for decisions based on memory.
Does not always follow an optimum strategy.
Short-term memory for factual material.
Not suited for data coding, amplification, or transformation.
Performance degraded by fatigue, boredom and anxiety.
152
TABLE IX (Continued)
HUMAN LIMITATIONS (Continued)
Cannot perform simultaneous tasks for long periods.
Channel capacity limited.
Dependent upon social environment.
MACHINE LIMITATIONS
Cannot adapt to unexpected, unprogrammed events.
Cannot learn or modify behavior based on experience.
Cannot "reason" or exercise judgement.
Uncoded information useless.
Inflexible.
Requires stringent environmental control (computers).
Cannot predict events in unusual situations.
Performance degraded by wearing out or lack of calibration.
Limited perceptual constancy; expensive.
Non-portable.
Long-term memory capability is expensive.
Generally fail all at once.
Little capacity for inductive reasoning or generalization.
153
When fully exploited with no other limitations imposed, these
capabilities and limitations of the human and machine define
what might be described as an "optimal" performance measurement system from the engineering standpoint.
As Knoop and
Welde [1973] observed, a performance measurement system
should "capitalize on the advantages of an automated, objective system and yet retain some of the unique capabilities
afforded by the human evaluator."
For each task which is to be measured and evaluated, a decision must be made as to whether it would be more efficient for the man or the machine to measure or evaluate performance on that task [Buckhout and Cotterman, 1963].
3. System Criteria
In addition to examining human and machine capabili-
ties and limitations for the functions and components of a
performance measurement system, other factors with potential
impact on system design and implementation must be identified,
analyzed, and weighed for importance.
Choosing a system
solely by human-machine advantages does not take into account
other apparently extrinsic influences that may turn out to be
deciding factors.
The following listing of system criteria
for performance measurement systems was gleaned from research
by Obermayer and Vreuls
[1974], Buckhout and Cotterman
[1963],
Demaree and Matheny [1965], Farrell [1974], and Carter [1977]:
a. Conflicts of system purpose may exist. The PMS is required to provide objective,
154
reliable, and valid information for decision-making purposes and to also identify changes in student skill level. A component may be the most objective choice available for the first goal but inadequate for the second. An alternative component or function may be identified to do both satisfactorily.
b. Data should be provided in a useful form for evaluation purposes.
c. Data collection, processing, and presentation must be timely enough to enhance the training process in the form of knowledge of results.
d. Costs to modify or supplement equipment and software must be weighed against the utility of the information derived.
e. Data distortion must be controlled for accurate and reliable results.
f. Minimum interference with the training process should occur, with the measurement system having an inconspicuous role requiring little or no attention from the student or instructor.
g. Social impacts of any system may have adverse effects on morale or personnel involvement. If an instructor perceives that automated performance measurement is a replacement for his traditional role as an evaluator, the effectiveness of the measurement system will be greatly reduced.
h. Economic and political constraints may affect system design. The ideal measurement device may not be
155
recommended for procurement by higher authority, while some selection of components is based on available equipment at time of procurement.
i. Other factors such as size, weight, safety, ease of use, and reliability should also be considered.
Obviously, system criteria should be used in the sense that selecting components and functions for a performance measurement system would maximize those criteria that are advantageous and minimize those aspects that are not optimum for the system purpose. These criteria must be taken into account during allocation of functions for the performance measurement system, and must be weighed at least qualitatively, if not in a quantitative sense, for overall contribution to the final system configuration.
4. System Implementation
Once the objectives of the performance measurement system are identified and the allocation of functions and system criteria are applied, the system model then requires a deliberate implementation procedure if it is to produce meaningful results and have utility to the end user. Waag, et al. [1975] identified four phases of development for implementation of the measurement system in the simulator:
(1) Definition of criterion objective in terms of a candidate set of simulator parameters.
(2) Evaluation of the proposed set of measures for the purpose of validation and simplification.
156
(3) Specification of criterion performance by requiring experienced instructor aircrew to fly the maneuver in question.
(4) Collection of normative data using students as they progress through the training program.
When using automatic or human measurement components within the performance measurement system, other implementation considerations should apply. These are discussed below.
a. Automatic Measurement Considerations
In simulator environments, the organization of the software will be the key to successful implementation of flight training measurement systems [Vreuls and Obermayer, 1971]. Extensive research into programming techniques for the automatic monitoring of human performance in the flight regime has been accomplished [Knoop, 1966 and 1968; Vreuls and Obermayer, 1974; Vreuls, et al., 1973, 1974, and 1975]. Knoop [1968] examined some of the prerequisites for automatically monitoring and evaluating human performance:
(1) Knowledge is required of which performance variables are important in evaluating an operator's proficiency.
(2) Knowledge is required of how these variables should be related for optimal performance.
(3) A digital computer program is required which compares actual relationships among these variables during performance with those required for optimal performance to evaluate operator proficiency.
These prerequisites point out the need for careful front-end analysis of the system in terms of performance measures and criteria, and the complex problem involved in programming this analysis for automatic measurement and evaluation.
157
b. Human Measurement Considerations
Smode [1962] provides some rules for enhancing the validity and reliability of resulting measurement where human observers are employed in data collection:
(1) Provide standardized checklists that specify what to observe, when, and how often observations are recorded.
(2) Train observers for the measurement process to insure full understanding of how to make the observations.
(3) Provide data collection sheets that conveniently indicate what is to be observed and the sequence of observation.
(4) Avoid overloading the observer with too much simultaneous observation and recording.
(5) Data collection forms should have notation or symbology for recording observations, when feasible, and should result in a permanent record that can be easily transformed into a form for rapid analysis.
These guidelines still appear sensible today, with perhaps
some additional information about the relationship between
automatic measurement and the human observer being established
and provided within the system.
C. CURRENT PERFORMANCE MEASUREMENT IN THE WST
Navy B/N replacement training is conducted by both the
East Coast FRS, Attack Squadron Forty-Two
(VA-42), and the
West Coast FRS, Attack Squadron One Twenty Eight (VA-128).
Each FRS has developed and maintains its own training program;
however, these programs are similar in nature and utilize
virtually the same performance measurement and evaluation
techniques.
Each training program is divided into specific
158
phases designed to develop certain skills such as navigation,
system operation, or attack procedures, and each uses the
mediums of classroom lecture, simulator, or actual aircraft
flight.
A building-block approach to developing progressive
knowledge of skills is utilized, including training missions
in the WST.
This study will focus on a Category One (CAT I) B/N student, where entry skills and knowledge for measurement in the WST are minimal.
Determination of the skill level of CAT I B/Ns performing simulated missions in the A-6E WST continues to be based on subjective judgements made by instructor B/Ns and pilots. A syllabus of progressively more difficult flights in the WST is part of the training curriculum.
During or shortly after each flight, the instructors "grade" the student on tasks performed, employing mostly personal criteria based on experience and normative comparisons of the student's performance with other student performances. Table X is a compilation of tasks taken from B/N flight evaluation sheets for the VA-42 simulator curriculum for which B/N performance is graded on a scale using four categories: (1) unsatisfactory, (2) below average, (3) average, and (4) above average. A typical B/N flight evaluation sheet for the WST is shown in Figure 6. The four ratings listed above are then converted to a 4.0 scale for numerical analysis and student rankings.
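The conversion is a direct mapping. In the sketch below, the assignment of point values to the four ratings is an assumption for illustration, since the exact FRS point values are not documented here:

    # Assumed mapping of the four ratings onto the 4.0 scale.
    GRADE_POINTS = {"unsatisfactory": 1.0, "below average": 2.0,
                    "average": 3.0, "above average": 4.0}

    def flight_grade(ratings):
        """Average the per-task ratings for one WST flight on the 4.0 scale."""
        return sum(GRADE_POINTS[r] for r in ratings) / len(ratings)

    print(flight_grade(["average", "above average", "below average"]))  # -> 3.0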
During a personal visit by the author to the VA-42
A-6E WST in June 1980, subjective performance measurement
159
TABLE X: CURRENT B/N TASKS GRADED IN WST CURRICULUM
Aggressiveness
Aircraft familiarity
Aircraft system knowledge - NATOPS
Aircraft/system operations
Aircraft/system turnup
Aircraft/system utilization
ALE-39 awareness
ALQ-126 awareness
ALR-45/50 awareness
AMTI
Approach
Approach (TACAN, GCA, ASR)
Attack procedures/ACU knowledge
Attitude
Basic air work
BINGO procedures
Communications
Computer operation
Crew concept
Degraded aircraft/system operation
Degraded aircraft/system utilization
Degraded system CEP
Degraded system utilization
Departure
Departure procedures
ECM equipment knowledge
ECM tactics
Emergency procedures
160
TABLE X (Continued)
Flight briefing
Fuel management
Full system utilization
General attack CEP
Glideslope control
Headwork
HI Loft attack CEP
Impact accuracy
Knowledge of the cockpit
LABS attack CEP
Landing transition
Line-up corrections
Low level navigation
Marshall pattern
Mining procedures
NATOPS
Navigation procedures
NORDO procedures
Normal procedures
Planning/preparation
Point checks
Post landing/shutdown procedures
Post start
Prestart
Radar interpretation
Radar operation
S-1 pattern
S-3 pattern
161
TABLE X (Continued)
Shipboard procedures
SRTC utilization
Stall series
Start
Start/point checks
Straight path attack CEP
System shutdown
System turnup
Takeoff/departure
Target procedures
Targets (geographical turn point listed)
Turn point acquisition
UHF communications
Use of checklists
Use of the clock
162
[Figure 6. Typical B/N Flight Evaluation Sheet for WST; form layout not recoverable from the scan. Legible graded items: Departure Procedures, Marshall Pattern, Approach, Glideslope Control, Line-up Corrections, Communications, NORDO Procedures, BINGO Procedures, Emergency Procedures, Headwork, Crew Concept. Legible fields: BUNO, Disposition Code, Instructor, Flight Time (Night/Total), Brief Time, Debrief Time, Comments, Signature, Date.]
163
and evaluation was exclusively being conducted for training missions of students in the WST. The use of subjective performance measurement and evaluation was due mainly to the newness of the simulator and the traditional and acceptable role that subjective methods have played for the last three-quarters of a century across all aviation communities. Operational FRS personnel rarely have the time to carefully analyze and employ new performance measurement models and techniques, or utilize new systems that are incorporated into a newly-delivered simulator. One purpose of this thesis is to close that gap by carefully analyzing and evaluating performance measurement models for operational use.
Due to the exclusive use of subjective performance methods, standards of performance in the A-6 FRS are established analytically, based on perceptions by the instructors of what constitutes "proficient" or "skilled" performance. Bombing and radar target identification (RTI) criteria are used internally, with RTI grading based on target difficulty and the replacement B/N's level of exposure and experience. Essentially, RTI criteria use a trinomial division of "hit the target," "in ball park," and "out of ball park" that has been defined well enough to be converted to a numerical grade. In addition, the replacement B/N must identify 75 percent of the assigned targets on a specific radar navigation "check flight" as a criterion for radar navigation skill.
164
The proposed model for measuring B/N performance during radar navigation in the WST, to be discussed in Chapter VII, incorporates the best qualities of both subjective and objective measurement, and uses the results to provide accurate and valid information for making decisions about student progress within the training process. Under the proposed model, criteria for successful performance will be established empirically for operational use by either the A-6 FRS or the A-6 community as a whole.
165
VII. RESULTS AND DISCUSSION
The purpose for designing a performance measurement system
for the B/N during radar navigation in the A-6E WST was to
provide objective, reliable, valid, and timely performance
information for accurate decision-making to the training mission instructor and FRS training manager.
This section pre-
sents the performance measurement system model developed from
the previous analysis of related aircrew performance literature,
generic performance measurement systems concepts, the B/N radar
task analysis and MTLA, and the A-6E crew-system network model.
The model developed is specific from the standpoint of identi-
fying what to measure, when to measure,
scaling, sampling
frequency, criteria establishment, applicable transformations,
observation method, current availability in the A-6E WST, and
the accessibility of the measure if not currently available.
The proposed model embodies: (1) the establishment of standards of performance for the candidate measure set by utilizing fleet-experienced and motivated A-6E aircrews performing well-defined radar navigation maneuvers and segments, (2) techniques for reducing the candidate measures to a small and efficient set by statistical analysis, (3) evaluation methods which use the results from established performance standards and performance measurement of student B/Ns for decision analysis by the FRS instructor and training manager, and
166
(4) some performance measurement informational displays which present diagnostic and overall evaluation results in a usable and efficient format.
A. CANDIDATE MEASURES FOR SKILL ACQUISITION
Using the candidate performance measure metrics derived from the B/N task analysis (Appendix C) and previous aircrew research (Table II), a composite list of candidate measures for B/N skill acquisition is presented as Table XI. Information is provided for each measure in terms of the method of measurement, measure segment, scaling, sampling rate, criteria establishment, transformations, availability in the A-6E WST, and accessibility in the A-6E WST (if not available). Each of these terms is defined below.
1. Method of Measurement
Either electronically (E), instructor observation (O), or both. The primary basis for the determination of the best method was both measure selection criteria (Chapter IV) and human and machine capabilities and limitations (Table IX).
2. Measure Segment
This is the period of time or segment of flight in which the measure should be observed. "ENTIRE LEG" defines the radar navigation segment from TP to TP, "MISSION" defines the segment from takeoff to landing, and "TP" defines within one minute of approaching the TP.
167
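For concreteness, one row of Table XI can be pictured as a record carrying the attributes just defined; the field values below are hypothetical, since the table itself did not survive the scan:

    from dataclasses import dataclass

    @dataclass
    class CandidateMeasure:
        """One row of the candidate measure set (Table XI), as described above."""
        name: str
        method: str        # "E" electronic, "O" instructor observation, or both
        segment: str       # "ENTIRE LEG", "MISSION", or "TP"
        scaling: str
        sampling_rate: str
        criteria: str
        available_in_wst: bool

    # Hypothetical example row:
    m = CandidateMeasure("turn point fix error", "E", "TP", "ratio (nm)",
                         "once per TP", "from fleet aircrew standards", True)
    print(m)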
[Table XI. Composite list of candidate measures for B/N skill acquisition, giving for each measure the method of measurement, measure segment, scaling, sampling rate, criteria establishment, transformations, availability in the A-6E WST, and accessibility (pages 168-173 of the original); table content not recoverable from the scan.]
w
s
o
rH
w
5h
o
o
o
•H
iH
o
c
cn
a,
X
w
rH
JJ
w
cu
\
1
1
o
CQ
5x
X
-s
w "
^
H O
Eh S
'^
>i
0)
+J
c c
^ s
^
-H
G
3 S
4J rH t3 Q
W >i O a C
•H o oi (U td (^
<+-(
i-i
fO
>-i
M ^ 0^
a 3 fo
g U T3 T3
O 03 a
(T3
w
u
<: Qi
Its
-P
2
O
H
en
0)
iH
o
^
rH
(D
^
j-i
o
•H
03
-H
+J
^ 3
u a
CT>
OJ
rH
\
o \
rH
Pi
w
cu
O
C
Eh
Pi
o
Cu
o
X
Pi
cu
W
a;
1
en
<
^
u
(U
0)
0^
4J Eh
g
>iT3
QJ «4H
s^
n3
-H oi
4J
4J
u
-H -H
fi
-U
W
M
3
U
0)
en
03
0)
-p
rH
c:
•H
'H
Q
j::
cr
4J ^3 +J -p
c
0)
0)
T! -H
i^ en
-i^
M
0)
C C
(^
M
o
en
3 a
)H
o
CTi
U
-H
x:
rH -H +J
(T3
S-i
tT3
•H T!
Eh
i-i
»k
w
•H
.H
+J
c
-r^
"^
u
o
o
o
0)
w
•H
Q)
••
w
(U
cn
1
1
s
o
cu
cn
\
Oi
s
w
rH
o
2
O
M
CD
rH
u
en
er"
>i
o
4J
to
-H
rH
T3
03
03
a
i-l
Pi
0)
M
•H
4J
r-i
QJ
\
o \
Pi
rH
w
rH
o
C
0^
W
3
X
*».
CT"
a
2 §
U u o
< 2
CH
Pi
o
cu
o
X
Pi
cu
-P
C
2
o
M
u
Eh
E-^
<j*
z
u
o s
Ct4
u
qS
ag
o o D u
CQ
S
a: en en
CJ3
Eh
Ci3 Cd CJ
CJ
en
s:
< <
S
s
174
DJ
J
<:
u
en
<
M
hJ
cu DJ
^3
en
ci
Pi
DJ
Eh
t-t
Di
u
2
2
o
en
2
s
E-
>• CJ
CJ
kJ
CQ
J J
M
2 < en
CO
pq k4
Qi M
CJ
CX <
u
a > u
u < <
V
r—
•-
(
C
Q)
SO
M
—W
+J
(T3
--I
Xi
M
"
CHrH
ctjiGw^-iaj-Ho
o
o-H(u<uo-i->»^c:
•H > e
(0
3
s-i
CcjUWfaO^WH
(U
rH
w
^
>-^
o
cu
w
•H
o
o
o
\
•P
rH
r-i
U
w
G
(T3
W
0)
a.
s
w
1
1
o
cnu
4^
<U
M G
0^
M
(U
^—^
05
C G
T3 ^
-M -H
-U 4J
CO
DOU
Q)
CT"
0)
r-l
3
(d
w
U
4J -P
M
a >
(D
-H
03
-H
-P
T3 T3 U2 -P
G
Eh Oh
(13
a
D
03
ff;
^
W
X
2
H <
Eh 2
(U
-H
J
+J
3
Vh
aou
e
X
u
uw
^
•.
o
o
M
-H
o
2
O
(D
W
\
H
CD
M
S £ W
D -H
2
-H OJ
-r^
^
w
fl
H
2
D-
U
X
^
^
C
O W
W -H
-p
x:
0)
Q)
^
-ptua^-pcuMc^'O S -H rH (U S
3-HC0O5-ICOU
OOCrnJCUS-iS-P
+j
-—
tj"
-l-l
5-1
4J
«.
w
2
^
MS-)
l+-(
-r-^
E-t
\
o
a U
(T3
1
o
Oh
2
W
V
2
H
2
W
O
X
2
CM <:
3
X
2
5-1
^
y-i
M
(U
X!
5-1
0)
Q)
13
3
rH
o
o
o
0)
rH
U
W
e e
D -H
c-t
C
-P
-P
!-<
a
w
(/)
P
G
W
oj
S-l
03
0)
\
1
Oh
2
W
-H
o
Q)
QJ
rH
Qj cn
G
X
^
2
rH
tn
s-j
-HO
5 w
e oj M
M -H rH d
5h
•H
0)
Oj Eh 03
Cn
a)
w
P
G
W
U
OJ
rH
\
1
o
04
2
W
X
^
2
H
2
C
2
O
H
u
Eh
Ct4
O
Q
O
<
^
O
2
<-{
"3
an
X
2
2
M
2
•»
C7>
a
en
OJ
M
•H
QJ
2
H
0)
r-l
•»
O
2
a:
&H
w
s
175
2
U
S
2 §1
D D W
en cn 2
< <: o
u W CJ
s S ui
Eh
DJ
o
u
-3
<
u
en
2
M
J
&. DJ
§ ^
cn S
<
M
1
o
Oi
Cd
Eh
Cm
cn
M
a:
u
2
<
c:
E-*
>H CU
hJ
E-i ca
J
2 <
W k4
M
Qi <
3 >
U <
c:;
J
CO
^H
73
cn
y
u
u
<
«
(
T3
CD
a)
iH
a,
T)
w
c
3
u
U
O
O
•H
<o
Q)
o
cu
en
•\
cu
S
w
rH
4J
1
1
o
H
S
X
^
u x
< 2
}H
u
u
-p
w
4H
cn
w
•H
4i4
o
o
m
Cm
Eh
o
p^
en
\
rH
s
w
1
1
(TJ
o
5-1
S
X
>•
w
2
Cl3
U
1
W
iT>
4J
C
c;
CO)
•H
^
OJ
5
U ^
fd O
S^
Eh
CQ
H
C
(d
G
>-i
CP
-P
'-^
a
I4H
Oh
Eh
.H
H
-H
(T3
C/3
-P
3
i-l
tj
-P
--U
M
•H
rn
W
-P
c
w
d a
en
3
e
C
C 6
+J
U
(K
o
-H
S-4
S
0)
!-l
M-l
0)
-H
0)
rH
—
-H -H
4J -p -p
-H -H
W
U
tT3
>,
U M
+J
4-»
0)
-H
-H
OJ
>H
-P
g
U3
-H
-H
a
M
U
rH
(u M-i
cc;4J>
^
K
X
•»
w
2
1
1
^
w
2
^
C G
W
W
X
w
o
,—
W
+J +J 4J
•H
\
Oh
^
\
o
•H
Eh
Q)
cn
<:
+J
0)
cn
ui
o
o
QJ
CP
CJ
0)
Q)
rH
(J)
O
•r^
o
o
o
-P
rH
o
M
^
w
G
c
Oh:>OUM
W
T3
^
Q)
cn
\
0^
s
w
rH
2
H
2
X
«
w
2 X
Eh t
1
1
O
c
2
O
M
u
£h
&^
Cm
O
Q
O
K
H
DJ
X
176
z
U
S
W
Oi
D
<
U
X
c/i
<£
SI
D
DJ
2
< U
U M
S
c/^
C/1
o
2
M
CJ
J
<
u
'J)
k3
a. cd
E^
S
in
a
<
H
a:
w
£-
^
2
o
Dl^
cn
z
1—
Qi
u
r
>. CJ
hJ
CJ
hJ
CO
E-,
h-l
J
CQ
Z < cn
CO
CU J
Ci w
CJ
ex <
u
3 > u
u < <
(
w"
2
Ul
51
cu
Eh
0)
TJ
C
4J
0)
0)
•H
4.)
3
-H
<
w
0)
-p
rH
U-l
(U
i4
S-i
5
O
•H
U3
-P
4-1
1
1
c
0)
O
0)
w
\
Oh
2
a
rH
o
w
CQ
^
to
<y
(D
(T3
M-l
w
3 W
TJ
w
Oh
«
<u
LD
\
O
E-t
(d
•H 04
-g
1
1
iH 04
<
*
+J
CT"
w
X
^
UX
u<
<2
a
2
4J
+J
2
H
2
M
0.
s
w
iH
O
Eh
2
M
2
X
•.
U X
u<
< 2
a;
HD
en
C/3
M
en
Cm
Pi
•H -H
tT>
+J -P
C
(u
c
td
-H
-H
4->
Iji
M
4J
nj
fO
S >
0)
CO
<
>
'-^
0)
>i
rH
rrj
rH
a
u
< -H
S Q
—
w
en
M
O
O
•H
rH
(U
-p
o
Q)
W
\
2
W
rH
1
1
c
0*
O
w
a
2
X
*k
u
u
<
_
w
cn
ca
c
(0
a
e
U
H
1
<d
CI
Jj
Q)
•H
rH
C
3
^-^
-H
4J
Q)
o
rH
(D
-73
^
o; M-i
OJ
(U
0)
M
w
M
o
•H
c
cd
u
<u
\w
•.
0.
2
a
rH
o
2
o
M
u
E-t
ir>
z
w
o s
w CJ
Q 3i oi
O 3 s
a:
W
Ch < <
M DJ W
s r: 2
irf
Ct,
w
c/5
<
X
^
<
C
X
a
2
u
u
1
1
w
S S Q W
2
rH
4J
M
M
en
177
O
2
£-.
z
u
z
o
Cd
W
<
U
'Jl
C/5
M
»J
Oj
u
1 ^
S
C/1
<
M
a
02
E1—
Qi
u
2
2
o
a
01
'2'
2
C6
Ch
a
a
E- CQ
2 <
a a
CC M
2i <
3 >
u <
>*
vj
a
a
oa
M
cn
cn
MM
o
u
<
1
1
W
2
'-^
2
+J
CT>\rH
(U
g
c U
•H
-H -H
S
M
E-.
^
4J
(T3
6
-P
(C
C
U-i
4J
(U
5-1
-P
(u
3
a
Eh CO CU
CU
>
cn
0)
>H
H £h
H U
^ U w
fO u
kJ «
P
1
M
M
w
<S
-H
OJ
QJ
rH
CO
«.
\
•H
C
> u
--
(U
CO
U
-P
C >iW
-p
u
Q)
ce;
(tj
en
w
<-i
Oi
2
W
rH
CU
EH
fO
c
0)
^
S
a
oa
H
M-l
-H
5^
-P
CO
M c a
-H
S-i
1
1
X
2 2
H
W CH
Pi
2
•w
tjl
(U
Cl.
rH
S
\
s
0)
Tl -H
SU M
LT)
-H
CO
CO
OJ
X
^
<J
-H
-P
X
<
X
2
H
2
I
1
1
u-i
(
a
rH
+J
X
-Oi
s
M a
S CU
02;
3
U2
<
g"
OJ
0)
e
CO
"Z
^
•H
U
Eh
w
rH &,
03
cu
Eh
OJ
CO
rH
Eh
4J
\
cu
s
w
r-i
a
1
Eh
C
<U
e
CO
Eh
U ^ U
rH
(i^
fd
Eh
-P
C
Eh
CP
0)
>j-i
•H
CO
^
D
U
^
-H
5
U
a
(U
Q)
CO
-P
c
nq Eh
rH
1
1
\
rH
a,
2
w
w
c
Eh
2
w
s
w
Q Qi sg
D :3 w
= CO CO S
Eh < < CJ
w u W DJ
s p: S cn
E-t
X
V
2 X
H <;
2 2
Eh
Dl,
<
a
2
2
M
u
X
w
2 X
S 2
«.
U
H
X
^
2
Q)
CO
rH
-H 0) 4J
x: -P -H
-P -H g --H S-i -H Oi
w
2
178
w
<
u
-3
CO
U
Z
M
•J
Pu
<
M
(X
w
£-1
s
a
e-
>—
en
S
Qi
U
y
2
Du
cn
2
Dj
£-
ui
M J
J CQ
1—
t- CQ
2 <
M J 00
a; w
Ci <
u
13 >
u
u < _^
>-'
1-;
'/I
"-^
w c
P4 -H
Eh x:
+J
"W -H
V^
a
S
^•H
U
Q)
S-l
Ti
<D
M
U P
c o
ai
O
C
e
=s
-H
S
IJ4
c
(T3
Xi
M
U
OJ
-P AJ
:3
^
>i
4-1
Ln
fH
cn
•H
W
W
cn
<
•H
s
en
Q)
rH
\
1
o
04
2
w
^
c
O 'C
>i W (U c
O -H 5
X
^
IS
-P
14H
(tJ
n3 -P
^
D
rH
<U
m
n3
O D
i4
u
u a
ta
Eh
<
w
i:^
s-i
>»
a.
Eh
CO
\
Eh
l-l
<: -p
lOi
o
o
-M
M-l
td
0)
X
H «
2 (^
&^
1
td
o
z o
-Oi
rH
-H cu
U Q
X
<^
2 2
O
^ H
W Eh
2 «
^
ttJ
-H Oi
-' 4J
<: Eh
a.
s
w
rH
1
W
2
X
«»
Z cn
H 2
2 2
o
a:
3
en
<
w
2
-P
x:
oj
QJ
n3
C^T3
3
(13
OJ
-P
G
-P
td
S
tt3
g
D
G
&:.
to
w
0^
Eh
M-l
rH +J
:d
tn
<
CO
a
*»
(T3
W
U M
•H
w -H
—
S
C^
ja
rH
d*
rH
\
«
o
rH -QJ
CT"
3
0)
Cti
s
w
rH
r-l
+
1
J
IW
C
-P
•H
(U
-H
W
g
M
+j
M
a
t3
01
g
M
-H
Ph
Eh
rH
o
>
2
\
u
a >
a. <
rH
tJI
cu
w
dJ
U
•H
^-1
+J
s-l
C
1
o
rH
\
Oi
s
H
*.
o
2
X
rH
»
2
M
2
w
C
z
o
H
u
£-
E^
z
Cu CJ
O S
CJ
Q Oi
O
- 75
Eh <
CJ u
s r:
::>
CQ
X
s o
-cu
z o
M oi
2 cu
X
i
0)
c c
a
X
<
2
o z
2 O
M
179
(rf
o
2
M
CJ Eh
Oi
Z
D a
2
< C
U CJ
S
a:)
c/]
CJ
J
<
u
cn
hJ
Oi '^
<
H
^
2
o
Qi
Ixi
CiJ
C-l
(-H
II
ai
U
cn
2
<
Oi
c_
> il
J J
Eh a
z <
W .J
-H
Oi
Oi
<
3 >
u <
a3
J
CQ
»H
cn
cn
CjJ
u
u
<
(
M
en
u
§
S-i
(D
Eh
rH
T3
0)
T3
CD
S-I
o
o
\
-H
en
u
w
C M
D
u
u u
•H
4-1
c
o w
fa
en
^
a*
S
*k
U
<
1
fa
(U
'0
•H
-P
V^
rO
cu
a.
rH
o
o
O
0)
in
en
-H
V-l
fa
4->
a
x:
j3 (U 4-) en
g g -H -H
3 -H M rH
dj
2
03
Eh
U
^
en
4->
C
=5
O
2
£
W
<+-i
X
2
fa
rH
+
fa
!h
•H
4J
c
1
0)
\
cu
2
fa
rH
X
«k
O
S
>^
oi <c
fa
o
2
fa
fa
2
H
2
^
en
s
«^2
CO
C
O
S-l
+J
0)
x:
-P
g
-H -H
Eh
rH
-H
oj
•H
cu
nj
CU
i-)
a
o
+J
•H
M
U
w
a
-H
^
a.
£h +j
u-(
\
&4
2
W
rH
fa
X
-
2 X
Eh 2
1
1
o
O
tn
M
u 3
M G o
QJ n3 U
r-\
•H
en
fa
-u <;
en
•H
en
3 ^
Q
^O
V
c
-H
4-1
^ O
o
5
^ a:
X
^
2
OJ
-H -H
M
en
>i
u-l
c;
0.
0)
M -^
C 3
a
rH
c
fa
•H fa
en
-t-l
H
c:
(^
•H
M
cr>
en
o
o
o
OJ
w
T3
+j
•^ Eh
s
-H a.
H
Cn
OJ
1
O
rH
\.
fa
0^
2
w
rH
2
*
X
2
H
2
T3
c
o
2
O
M
u
C-l
Eh
2
fa w
O X
Q Di
o a
X 75
Eh <
U CJ
s ^
CiJ
an
<
180
<c
U
o:
O
en
<
H
-z
M
2
o
1x1
C-J
w
<
u
^/)
en
S
.J
o
2
M
J
a. jj
2 5h
5 ^
IX
en
<
1—4
Qi
M
^
M
a
u
s
2
o
fa
en
2
<
Oi
Cj
fa
>H DJ
hJ vJ
Eh 03
J
J3
1—
2 < en
en
fa -S
(X M
fa
(X <
u
Z) >
u
u <
r'
Da
D
CO
<
2
^ G
5
O-H
i
4J
c
(TJ
a.
•H
i^
4-)
x:
en
•H -H
M rH
0)
P
u
p4
5-1
4J
M-l
-H
^
CO
-p +j
a
o
u c
Ph
>i
(TJ
M
3
<
rd
0)
Oi
fO
rH
(T3
-P
ft3
OJ
x: T!
01
3
Q O
-'
CT^
•H
rH
^ to
1^
>i
-P
2
M-l
-^
^
Vh
-H 4J
O
en
o
4J
0)
w
(T3
!-^
•H
C
3
O
O
n
04
2
Oh
H
<;
X
-P
c
w
I
o
<: -P fa CQ
c
o
u
fa
o
Q
O
CQ
&^
fa fa fa fa
CO
2 S 2
181
2
M
fa
<
U
cn
M J
^ CQ
ca
2 < CO
><
kJ
fa
a.
u
CO
a
Qi
fa
M
E-.
r-t
^:i
X <c
D >
u <
u
u
<
CO
fa
,
.
3. Scale
Scale shows the numerical limits and units of the measure, e.g., seconds, feet, miles, or, if units are unassigned, the values that the measure is assumed to take on over the measure segment. "0/1" defines a dichotomous "not occurred/occurred" situation. Subjective scales are listed separately (e.g., "effectiveness of communication").
4. Sampling Rate
The sampling rate is the rate of measurement defined by time or by measure segment. Determination was based on research by Vreuls and Cotton [1980].
5. Criteria
The recommended method of establishing performance criteria for the performance measure is defined as "EMP" for empirical, "OPER" for operational (subjective determination by fleet aircrew), or both.
6. Transformation
A recommended mathematical or statistical process for the measure is provided, based on the literature review by Mixon and Moroney [1980]. A caution is provided that determination of the measure's distribution would be in order before applying any transformations, as previously discussed in Chapter IV. Transformations are listed as FREQ (frequency), ACC (accuracy), PROPORTION (of successes to total number), RMS (root mean squared error), TIME, ME (mean), MO (mode), MIN (minimum), MAX (maximum), or N/A (not applicable; measured for computational purposes only). Other transformations may be possible; see Table IV.
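As a minimal worked illustration of two of these transformations, the sketch below computes an RMS error and a PROPORTION of successes over one measure segment; the error values and the 0.25 n.mi. criterion are invented for illustration, not taken from the candidate measure set:

    import math

    # Hypothetical cross-track errors (n.mi.) sampled over one measure segment
    errors = [0.3, -0.1, 0.4, 0.0, -0.2]

    # RMS transformation: root mean squared error over the segment
    rms = math.sqrt(sum(e * e for e in errors) / len(errors))

    # PROPORTION transformation: successes to total number, where a
    # "success" here means an error within the invented 0.25 n.mi. criterion
    proportion = sum(abs(e) <= 0.25 for e in errors) / len(errors)

    print(f"RMS = {rms:.3f} n.mi., PROPORTION = {proportion:.2f}")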
7. Currently Available
The measure exists within the A-6E WST PMS, as listed in Table VIII or Figure 5. If blank, the measure is not available.
8. Accessible
If not currently available in the A-6E WST PMS, a determination was made as to the feasibility of incorporating the measure into the existing WST PMS with a minor software change. If blank, major changes may be required in the WST PMS to facilitate the accessibility of the measure.
It is recognized that the resultant candidate measure set contains redundant and perhaps overlapping measures, but analytical derivation is necessary before empirical analysis is possible. Only "objective" aspects of performance were listed; the determination of subjective components (e.g., motivation) and their measurement is a subject for further research.
B. ESTABLISHMENT OF PERFORMANCE STANDARDS
1. Radar Navigation Maneuver
The radar navigation maneuver consists of multiple "legs" of varying distances between predesignated turn points or targets that are selected using radar-significant criteria. Each turn point (TP) must be reached within a criterion distance at a predetermined time, airspeed, heading, and altitude for the success of the air interdiction mission. Some TPs are more difficult than others to detect, identify, and successfully fly over using the A-6E CAINS navigation system. Operationally, a TP is "crossed" or "reached" when the actual position of the TP passes more than ninety degrees abeam the actual aircraft position. This operational definition is independent of the DVRI moving-bug cue used by the B/N, which is based on cursor intersection on the perceived TP radar return. This operational definition of TP "passage" can easily be converted mathematically, using Boolean functions, for use in the A-6E WST performance measurement system.
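A minimal sketch of such a Boolean "passage" function follows, assuming a flat-earth approximation over the short distances involved; the position and track variables are illustrative, not the WST's actual parameters:

    import math

    def tp_crossed(ac_lat, ac_lon, ac_track_deg, tp_lat, tp_lon):
        """Return True once the TP lies more than 90 degrees abeam the
        aircraft track (flat-earth approximation; names illustrative)."""
        # Bearing from aircraft to TP, degrees clockwise from north
        d_north = tp_lat - ac_lat
        d_east = (tp_lon - ac_lon) * math.cos(math.radians(ac_lat))
        bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
        # Relative bearing folded into [0, 180] degrees
        rel = abs((bearing - ac_track_deg + 180.0) % 360.0 - 180.0)
        return rel > 90.0

Evaluated each frame, the function flips from False to True at the instant of TP passage, giving the dichotomous "0/1" event the measurement system requires.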
Appropriate radar navigation routes can be planned by either FRS or Medium Attack Wing (MATWING) personnel and programmed into the WST. This task is simplified by the current capability of the simulator to facilitate preplanned mission routes for training purposes.
2. Simulator "Intruder Derby"
A competitive exercise is currently conducted on an annual basis using actual A-6E TRAM or CAINS aircraft that perform radar navigation maneuvers in both East and West Coast A-6 communities. A similar competition could be applied in the A-6E WST using the programmed radar navigation routes previously discussed. Each route would then be flown by fleet-experienced aircrew on a competitive basis under the cognizance of the appropriate MATWING command, with performance measured by the proposed model. Using fleet aircrew that are carefully selected by individual squadron Commanding Officers almost guarantees motivated and skilled performance during the radar navigation routes, due to the intrinsic importance placed upon the competition results as an aid in determining which A-6 squadron is the most "excellent" in each MATWING community.
The performance results of the simulator "Intruder Derby" would be most useful for establishing standards of performance for each radar navigation route. Establishing standards in this manner for comparisons of performance to other groups is both feasible and operationally acceptable. The use of "ideal" flight path performance criteria lacks this acceptance among operational FRS and fleet aviators, as "ideal" performance may not be achieved consistently by even the most highly skilled aviator. The fleet-established standards of performance would be carefully analyzed and performance limits set by operational personnel for those performance dimensions which are within or approaching the performance standards of the fleet.
C. MEASURE SELECTION TECHNIQUES
Various empirical methods and models have been formulated and applied toward reducing a list of candidate measures to a small, efficient set with the characteristics of reliability, validity, objectivity, and timeliness. One technique, employed successfully by the Air Force for air combat maneuvering (ACM) performance measurement, used univariate and multivariate analysis techniques to find the smallest comprehensive set of measures which discriminated skill differences between "novice" and "expert" ACM pilots in one-versus-one free engagements [Kelly, et al., 1979]. Using multivariate analysis, correlational analysis, regression analysis, and ridge-adjusted discriminant analysis, an original set of twenty-seven candidate performance measures was reduced to a final set of sixteen measures that were:
(1) Sensitive to differences in pilot ACM skill level.
(2) Diagnostic of performance proficiencies and deficiencies.
(3) Usable by instructor pilots and compatible with their judgements.
(4) Capable of providing results immediately after the end of the engagement.
(5) Compatible with current and projected training and measurement hardware.
These statistical analysis techniques appear to be appropriate for application to the measurement of B/N performance during radar navigation in the A-6E WST. Computer programs have been developed and are available at minimum cost for possible software alterations to the current performance system in the WST.
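The Air Force programs themselves are not reproduced here; as a hedged illustration of the general approach, the sketch below uses a modern statistical library to rank twenty-seven hypothetical candidate measures by how well each discriminates "novice" from "expert" trials and retains the strongest sixteen. All data and names are invented, and single-measure ranking is only a crude stand-in for the full multivariate reduction described above:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: rows are flights, columns are candidate measures,
    # labels are 0 = "novice" and 1 = "expert" aircrew.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 27))        # 27 candidate measures
    y = rng.integers(0, 2, size=60)      # skill-group labels

    # Score each measure by its cross-validated discriminability alone,
    # then keep the strongest subset.
    scores = [
        cross_val_score(LinearDiscriminantAnalysis(), X[:, [j]], y, cv=5).mean()
        for j in range(X.shape[1])
    ]
    keep = np.argsort(scores)[-16:]      # final set of sixteen measures
    print("retained measure indices:", sorted(keep.tolist()))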
D. EVALUATION METHODS
Once a small and efficient set of performance measures has been derived using suitable statistical techniques, measurement and evaluation of student performance may then occur. The proposed performance measurement model uses the results of a student's performance compared to fleet performance as usable information for several decision levels. The student's task of operating the radar on each leg of the radar navigation maneuver is measured and evaluated in terms of the underlying skill level of the student, leading to a decision of "proficient" or "not proficient" for that task. Decisions on quality of performance must also be made for global indices of navigational skill, e.g., fuel management or time-on-TP management. Students just learning the task are expected to be "not proficient" when compared to fleet performance on the same mission, whereas students near the end of scheduled training are expected to meet fleet standards.
An evaluation technique proposed by Rankin and McDaniel [1980] that uses a sequential method of making statistical decisions has the capability of utilizing both objective and subjective performance results for more accurate and potentially less costly training evaluation. This decision model focuses on proportions of "proficient" trials, where "proficient" is determined by the instructor using either subjective evaluation or objective standards established prior to performance. The model sequentially samples performance during the training of a particular task or maneuver and uses the historically sampled performance results to eventually terminate training for that particular task or maneuver.
Figure 7 illustrates the proposed sequential sampling decision model using the task of navigating the A-6E aircraft to a radar navigation TP as an example. For this particular radar navigation route, eight TPs must each be navigated to within an empirically established criterion radial distance by the student. The instructor must use the performance measurement results to evaluate the student's actual performance on each TP and assign a "proficient" (P) or "not proficient" score for each TP, where each TP is considered a trial. The figure shows "proficient" (P) trials plotted against total trials and indicates in this example that three TPs were successively and accurately navigated, followed by a missed fourth TP, and ending with the remaining four TPs successfully navigated. The regions of "proficient," "undetermined," and "not proficient" are derived statistically; more detail on their actual calculation is presented in Appendix E. As can be seen in the figure, the student has "mastered" the important task of navigating to a TP on his eighth trial. This information can then be used by the training manager in deciding whether the student has actually mastered the task and needs to progress to more difficult tasks, or has not mastered the task in a previously established, statistically based number of training trials and needs remedial training for that task. For this example, the training manager could safely determine that the student is ready for training on tasks other than accurately navigating to a TP.
[Figure 7. Sequential sampling decision model: "proficient" (P) trials plotted against total trials, with statistically derived "proficient," "undetermined," and "not proficient" decision regions.]
This sequential sampling decision model has been previously used in educational and training settings. Ferguson [1969] used the sequential test to determine whether individual students should be advanced or given remedial assistance after they completed instructional learning modules, and Kalisch [1980] employed the model for an Air Force Weapons Mechanics Training Course (63ABR46320) conducted at Lowry Air Force Base, Colorado. Both applications resulted in greater test efficiency than for tests composed of a fixed number of items and substantially reduced testing time.
As discussed in Chapter IV (Table V), the costs associated with training manager decisional errors dictate that statistical and systematic methods be employed to measure and evaluate student performance. The sequential sampling plan accomplishes this by fixing the error rates (Types I and II, as previously discussed in Table V) and allowing the number of trials to vary according to the performance demonstrated by the student.
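A minimal sketch of such a plan follows, assuming it takes the form of Wald's sequential probability ratio test on the proportion of "proficient" trials; the proficiency proportions p0 and p1 and the error rates below are illustrative choices, not the values used by Rankin and McDaniel:

    import math

    def sprt_decision(successes, trials, p0=0.45, p1=0.90,
                      alpha=0.05, beta=0.05):
        """Classify cumulative results as 'proficient', 'not proficient',
        or 'undetermined' (continue sampling), per Wald's SPRT.  p0 and p1
        bound the not-proficient/proficient success rates; alpha and beta
        are the fixed Type I and Type II error rates."""
        a = math.log(beta / (1 - alpha))      # lower decision boundary
        b = math.log((1 - beta) / alpha)      # upper decision boundary
        llr = (successes * math.log(p1 / p0)
               + (trials - successes) * math.log((1 - p1) / (1 - p0)))
        if llr >= b:
            return "proficient"               # terminate training on task
        if llr <= a:
            return "not proficient"           # assign remedial training
        return "undetermined"                 # continue sampling trials

    # The eight-TP sequence of the Figure 7 example: P P P miss P P P P.
    # With these illustrative boundaries the decision reaches
    # "proficient" on the eighth trial.
    outcomes = [1, 1, 1, 0, 1, 1, 1, 1]
    for n in range(1, len(outcomes) + 1):
        print(n, sprt_decision(sum(outcomes[:n]), n))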
This evaluation decision model is currently being integrated into the training program of a helicopter FRS that uses only subjective determinations of "proficient" by instructors [Rankin and McDaniel, 1980].

E. INFORMATION DISPLAYS
This section discusses some displays proposed for use by the instructor at the A-6E WST console for the purpose of performance evaluation. Several classes of performance measures are represented. Figure 8 shows a time activity record for the B/N's interactive control of the A-6E CAINS navigation system during one leg of a radar navigation maneuver. This type of display tells what, when, and for how long a particular piece of equipment was being operated. From this, one may infer what particular task was being accomplished at a particular time during a radar navigation leg. Time activity records of fleet performance may be used as a performance standard by simply preparing a transparent overlay to show means and ranges of activity by the fleet performing the same radar navigation leg. This comparison provides diagnostic information for the individual student in regard to efficient or appropriate operation of the complex navigation system of the A-6E.
Specific tasks may be measured directly and displayed as shown in Figures 9 and 10. The tasks of time and fuel management are measured in the simulator by comparing planned time and fuel values with actual time and fuel values at each turn point, on the premise that the instructor enters the student-planned values into the simulator computer prior to the simulated mission. Trends may show decisional errors not otherwise detectable by an instructor. Again, fleet performance standards can be compared, using simple overlays, to an individual student's performance, as well as to the performance of other students, to facilitate evaluation.
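A minimal sketch of the planned-versus-actual comparison behind Figures 9 and 10 follows, assuming the instructor-entered plan and the simulator's recorded values are available as simple per-TP records; all names and numbers are illustrative:

    # Hypothetical per-TP records: (planned_time_s, actual_time_s,
    #                               estimated_fuel_lb, actual_fuel_lb)
    plan_vs_actual = [
        (300, 294, 9800, 9650),
        (610, 622, 9100, 9040),
        (905, 901, 8400, 8460),
    ]

    for leg, (pt, at, ef, af) in enumerate(plan_vs_actual, start=1):
        dt = pt - at   # positive = early ("fast"), negative = late ("slow")
        df = ef - af   # positive = less fuel remaining than estimated ("over")
        print(f"leg {leg}: planned-minus-actual time {dt:+d} s, "
              f"estimated-minus-actual fuel {df:+d} lb")

Plotting these signed differences by leg yields the trend displays of Figures 9 and 10.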
[Figure 8. Time Activity Record of B/N equipment operation during one radar navigation leg.]
[Figure 9. Time-on-TP Management Display: planned minus actual time (sec), scaled from +36 ("fast") to -36 ("slow"), plotted by leg.]
[Figure 10. Fuel Management Display: estimated minus actual fuel (lbs), scaled from +3000 ("over") to -3000 ("under"), plotted by leg.]
Figure 11 shows the usefulness of displaying the results of an overall measure of performance: radar navigation accuracy. The solid boundary lines between and surrounding each TP are performance standards established by fleet A-6E CAINS aircrew; in this situation a 90 percent confidence interval has been constructed about the mean flight path. Student navigational accuracy over the entire mission may be evaluated from this display, as well as diagnostic navigational information for each leg or TP. This figure illustrates the performance of a student who has met fleet-established criterion limits for all leg and TP navigation portions of the route except TP number two.
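A minimal sketch of how such a 90 percent interval about the mean fleet flight path might be computed follows, assuming fleet cross-track deviations have been sampled at a fixed point along a leg; this is a normal-theory interval on invented data, not the thesis's actual procedure:

    import math
    import statistics

    # Hypothetical fleet cross-track errors (n.mi.) at one sample point
    # along a leg, one value per fleet crew flying the same route.
    xtrack = [-0.21, 0.05, 0.40, -0.10, 0.18, -0.33, 0.09, 0.25]

    n = len(xtrack)
    mean = statistics.mean(xtrack)
    se = statistics.stdev(xtrack) / math.sqrt(n)
    z = 1.645          # two-sided 90% normal quantile (a t quantile
                       # would be more exact for so small a sample)
    lo, hi = mean - z * se, mean + z * se
    print(f"90% interval about mean path offset: [{lo:.2f}, {hi:.2f}] n.mi.")

Repeating the computation at successive sample points along each leg traces out the solid boundary lines of Figure 11.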
Summarized performance of the entire mission is depicted by Table XII. Each turn point (TP) is evaluated for B/N equipment time-sharing activity on the previous leg, time-on-TP management, fuel management, heading accuracy (as related to planned run-in heading), navigational accuracy (reported as a "P" if within criterion limits, or reported in miles from the TP if not within limits), minimum altitude for the previous leg, and indicated airspeed (IAS) at the TP.
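A minimal sketch of one summarized TP record of the kind Table XII tabulates follows; the field names and the criterion check are illustrative, not the table's actual layout:

    def summarize_tp(miss_nm, criterion_nm, time_delta_s,
                     fuel_delta_lb, heading_error_deg, min_alt_ft, ias_kt):
        """Build one Table-XII-style row for a turn point; navigational
        accuracy is 'P' within criterion, otherwise the miss distance."""
        nav = "P" if miss_nm <= criterion_nm else f"{miss_nm:.1f}"
        return {"time-on-TP (s)": time_delta_s,
                "fuel (lb)": fuel_delta_lb,
                "heading error (deg)": heading_error_deg,
                "nav accuracy": nav,
                "min altitude (ft)": min_alt_ft,
                "IAS (kt)": ias_kt}

    print(summarize_tp(0.4, 1.0, -6, 150, 2, 480, 455))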
This brief discussion on information displays for B/N performance is not exhaustive and does not reflect what may be the most efficient and reliable measures for determining skill acquisition. Only statistical methods will produce those measures that should be displayed and used. The examples presented here are for illustrative purposes only.
[Figure 11. Radar Navigation Accuracy Display.]
[Table XII. Summarized Mission Performance by turn point: equipment time-sharing, time-on-TP management, fuel management, heading accuracy, navigational accuracy ("P" or miss distance), minimum altitude, and IAS.]
F. IMPLEMENTING THE MODEL
A model for measuring B/N performance during radar navigation in the A-6E WST has been designed and proposed for use by the East and West Coast A-6 Fleet Replacement Squadrons. The final model design was predicated on: (1) implementing the model at minimum cost, (2) utilizing existing computer algorithms and software that have been validated, and (3) requiring no additional personnel to operate the model after implementation. Some software changes are necessary, but they appear to be minor in light of the 2F114 design specifications for currently accessible programs. Some translation may be necessary due to different computer languages, but these are feasible alternatives given the implications for reducing training costs and increasing the effectiveness of both the instructor and the simulator. Additionally, a computer-managed system will be necessary for implementing the sequential sampling decision model. Currently available desk-top computers could accomplish this function, assuming the simulator's computer capacity were fully utilized after the measurement portion of the model was installed.
Objective performance measurement provides useful information necessary for training evaluation and control. Performance measurement models that incorporate this powerful technique can increase simulator and instructor effectiveness, reduce training costs, and may contribute toward reducing accidents attributed to "unskilled" aviators. Implementation of these systems appears to be cost-effective in view of the potential savings from their effective utilization.
VIII. SUMMARY
The purpose of this thesis was to model a performance measurement system for the Bombardier/Navigator (B/N) Fleet Replacement Squadron (FRS) student during the radar navigation maneuver in the A-6E Weapon System Trainer (WST, device 2F114) that would best determine student skill acquisition and would incorporate the advantages of both objective and subjective aircrew performance measurement methods. This chapter is provided as a compendium, due to the extensive material covered, and assumes reader unfamiliarity with previous chapters.
A. STATEMENT OF PROBLEM
Traditional and current FRS student performance measurement and assessment in the A-6E WST by an instructor is mostly subjective in nature, with the disadvantages of low reliability, lack of established performance standards, and human perceptual measurement inadequacies. The recently delivered A-6E WST has the capability to objectively measure student performance but is not being utilized in this fashion due to the lack of an operational performance measurement system that incorporates the characteristics of objective performance measurement and still retains the valuable judgement and experience of the instructor as a measuring and evaluating system component. Objectivity in performance measurement is a highly desirable component of the performance measurement and evaluation process that enables the establishment of performance standards, increases instructor and simulator effectiveness, and fulfills the requirements of Department of Defense policy.
B. APPROACH TO PROBLEM
Designing a model to measure B/N performance during radar navigation in the A-6E WST necessarily assumed: (1) the A-6E WST realistically duplicated the A-6E aircraft in both engineering and mission aspects, (2) little variability in overall A-6E crew-system performance is attributable to the pilot, (3) that results from the pilot performance measurement literature were applicable to the B/N, (4) that a mathematical relationship existed between some aspects of B/N behavior and performance measurement and evaluation, and (5) that competitively selected, motivated, and experienced A-6E fleet aircrew exhibit advanced skill or "proficiency," characterized by the minimum effort and consistent responses ordinarily found in actual aircraft flight.
The methodology used in formulating a model to measure B/N performance was based on an extensive literature review of aircrew performance measurement from 1962-1980 and an analytical task analysis of the B/N's duties. After selection of the Air Interdiction scenario and turn point-to-turn point radar navigation flight segment, the review concentrated on aircrew performance measurement research which emphasized navigation, training, and skill acquisition.
A brief review was presented of the concepts of performance measurement and evaluation, including measure types, reliability, validity, measure selection criteria, performance standards, aviation measures of effectiveness, and types of evaluation within the framework of aircrew training. A model was then formulated to illustrate the relationship among student B/N skill acquisition, the radar navigation task, and performance measurement and evaluation. Candidate measures for navigation training and the radar navigation flight segment were identified from an original listing of 182 performance measures from previous aircrew performance measurement research. The task analysis was performed to identify skills and knowledge required of the B/N and to identify candidate performance measures for both B/N skill acquisition and the radar navigation segment. A Mission Time Line Analysis (MTLA) was conducted to identify B/N tasks critical to performance. A model was then formulated to illustrate A-6E crew-system interaction and the complexity involved in measuring B/N performance. Generic aircrew performance measurement system concepts were reviewed for the training environment. Current performance measurement and evaluation practices of the A-6 FRS for the B/N student in the WST were reviewed, as well as the current objective performance capabilities of the WST. A final list of candidate measures was presented that had met the selection criteria of face validity, ease of use, instructor and student acceptance, and appropriateness to training.
C. FORMULATION OF THE MODEL
The purpose of measuring B/N performance during radar navigation in the WST was to provide objective, reliable, and valid information for accurate decision-making about B/N skill acquisition to the FRS instructor and training manager. The model developed candidate measures (Table XI) that determined what to measure, observation method, when to measure, scaling, sampling frequency, criteria establishment, applicable transformations, current availability in the A-6E WST, and accessibility of the measure if not currently available. After operationally defining the radar navigation segment, a proposal was made to conduct an annual competitive exercise in the A-6E WST under the cognizance of the appropriate Medium Attack Wing (MATWING) command utilizing A-6E fleet squadron aircrew. A-6E fleet squadron personnel would fly preprogrammed radar navigation routes while their performance was measured using the candidate measures previously developed. The results from the proposed competitive exercise were cited as being useful and operationally acceptable for establishing standards of performance for each radar navigation route, since the selected aircrew would be highly motivated and fleet-experienced. Statistical techniques for reducing the initial candidate measures for B/N radar navigation performance were reviewed and evaluated with respect to skill acquisition, and a multivariate discriminant analysis model was selected as applicable due to the model's utility and previous practical development and applications.
The final part of the performance measurement model proposed an evaluation application which used dichotomous results of student performance compared to fleet performance as information for several decision levels. Performance results of the student, for the majority of the statistically reduced performance measures, can be dichotomized by the instructor as either "proficient" (skilled) or "not proficient" (unskilled), based on the empirically established objective performance standards or operationally defined subjective performance standards. An evaluation model developed by Rankin and McDaniel [1980] that used a sequential method of making statistical decisions incorporating the dichotomized results of student performance was adapted and modified for determining successful completion of task training for the B/N, based on both objective and subjective performance measurement and evaluation. The sequential sampling model was selected due to its inherent power to fix decisional error rates, previous practical developments, and potential for reducing training costs. Several informational displays specific to the B/N radar navigation segment and based on hypothetical model results were presented. Model implementation was discussed with regard to personnel, costs, A-6E WST software changes, and effective training control.
D. IMPACT TO THE FLEET
The performance measurement model as outlined in this thesis is specific to measuring A-6E B/N performance during radar navigation but has generic qualities applicable to aircrew members of any aircraft. The advantages of objective measurement, in the form of reduced paperwork, permanent performance records, established performance standards, diagnostic and timely information, and high reliability, are fulfilled. At the same time, subjective measurements for those aircrew behavioral aspects that currently defy objective measurement are made by the experienced simulator mission instructor, who also remains the final decision-maker on whether or not a student has demonstrated task performance that reflects an acquired skill for that task.
The application of the model to individual aircrew readiness, fleet squadron unit readiness, and selection of individual aircrew teams for multi-crew aircraft appears to be feasible and operationally acceptable. The model has potential utility in tactics development, accident prevention, predictive performance, and proficiency training of reserve aviators. Almost certainly, some reduction in aircrew training costs and an increase in instructor, simulator, and training program effectiveness would be realized.
BIBLIOGRAPHY

Alluisi, E.A., "Methodology in the Use of Synthetic Tasks to Assess Complex Performance," Human Factors, v. 9(4), p. 375-384, August 1967.

Anderson, R.C., and Faust, G.W., Educational Psychology: The Science of Instruction and Learning, Dodd, Mead & Company, 1974.

Angell, D., Shearer, J.W., and Berliner, D.C., Study of Training Performance Evaluation Techniques, U.S. Naval Training Device Center, Port Washington, NY, NAVTRADEVCEN 1449-1, October 1964, AD 609605.

Bergman, B., and Siegel, A., Training Evaluation and Student Achievement Measurement: A Review of the Literature, Air Force Human Resources Laboratory, Lowry AFB, CO 80230, AFHRL-TR-72-3, January 1972, AD 747040.

Beverly, R.F., A Measure of Radar Scope Interpretation Ability: RSI Test #1, Human Resources Research Center, Lackland AFB, San Antonio, TX, Research Note AO 52-9, December 1952, AD 7956.

Billings, C.E., "Studies of Pilot Performance: I. Theoretical Considerations," Aerospace Medicine, v. 39, p. 17-19, January 1968.

Billings, C.E., Eggspuehler, J.J., Gerke, R.J., and Chase, R.C., "Studies of Pilot Performance: II. Evaluation of Performance During Low Altitude Flight in Helicopters," Aerospace Medicine, v. 39, p. 19-31, January 1968.

Bilodeau, E.A. (Editor), Acquisition of Skill, Academic Press, 1966.

Bilodeau, E.A., Principles of Skill Acquisition, Academic Press, 1969.

Bittner, A.C. Jr., Statistical Tests for Differential Stability, Proceedings of the 23rd Annual Meeting of the Human Factors Society, p. 541-545, October 1979.

Blum, A., Melynis, J., Flader, D., Beardsley, W., and Ramberg, T., Criteria Report for A-6E Weapon System Trainer Device 2F114, Final, Vol. I, Grumman Aerospace Corporation, TR-T01-CD-A005-04, January 1977.

Blum, A., Lovrecich, J., Force, J., Cohen, L., and Kapp, N., Configuration Report for A-6E Weapon System Trainer Device 2F114, Final, Vol. I, Grumman Aerospace Corporation, TR-T01-CD-A001-06, May 1977.

Blum, A., Lovrecich, J., Force, J., Cohen, L., and Kapp, N., Configuration Report for A-6E Weapon System Trainer Device 2F114, Final, Vol. II, Grumman Aerospace Corporation, TR-T01-CD-A001-06, May 1977.

Bowen, H.M., Promisel, D., Bishop, E.W., and Robins, J., Study, Assessment of Pilot Proficiency, U.S. Naval Training Device Center, Port Washington, NY, NAVTRADEVCEN 1614-1, August 1966, AD 637659.

Brictson, C.A., Human Factors Research on Carrier Landing System Performance, Final Report (1966-1971), Office of Naval Research, July 1971, AD 733703.

Buckhout, R., A Bibliography on Aircrew Proficiency Measurement, Aerospace Medical Division, Wright-Patterson AFB, OH, MRL-TDR-62-49, May 1962, AD 283545.

Buckhout, R., and Cotterman, T.E., Considerations in the Design of Automatic Proficiency Measurement Equipment in Simulators, 6570th Aerospace Medical Research Laboratories, Wright-Patterson AFB, OH, AMRL Memorandum P-40, June 1963, AD 417425.

Campbell, S., Task Listing, A-6E (TRAM) Aircraft, Pilot Procedures, Vol. I, Grumman Aerospace Corporation, Report ISD (SAT) 0003-1, September 1975.

Campbell, S., and Sohl, D., Task Listing, A-6E (TRAM) Aircraft, Airframe Emergencies and System Malfunctions, Vol. III, Grumman Aerospace Corporation, Report ISD (SAT) 0003-3A, October 1975.

Campbell, S., Graham, G., and Hersh, S., Specific Behavioral Objectives and Criterion Test Statements, Pilot Procedures, A-6E (TRAM) Aircraft, Vol. I, Grumman Aerospace Corporation, Report ISD (SAT) 0003-4, December 1975.

Campbell, S.C., Feddern, J., Graham, G., and Morganlander, M., Design of A-6E TRAM Training Program Using ISD Methodology, Phase I Final Report, Naval Training Equipment Center, Orlando, FL 32813, NAVTRAEQUIPCEN 75-C-0099-1, May 1976. Also titled A-6E Systems Approach to Training, Phase I Final Report, February 1977.

Carter, V.E., Development of Automated Performance Measures for Introductory Air Combat Maneuvers, Proceedings of the 21st Annual Meeting of the Human Factors Society, p. 436-439, October 1977.

Chapanis, A., Theory and Methods for Analyzing Errors in Man-Machine Systems, from Sinaiko, H.W. (Editor), Selected Papers on Human Factors in the Design and Use of Control Systems, Dover Publications, Inc., p. 42-66, 1961.

Charles, J.P., Instructor Pilot's Role in Simulator Training (Phase III), Naval Training Equipment Center, Orlando, FL 32813, NAVTRAEQUIPCEN 76-C-0034-2, June 1978, AD A061732.

Chiles, W., Objective Methods for Developing Indices of Pilot Workload, Federal Aviation Administration, Oklahoma City, OK 73125, FAA-AM-77-15, July 1977, AD A044556.

Christiansen, J.M., and Mills, R.G., "What Does the Operator Do in Complex Systems?" Human Factors, v. 9(4), p. 329-340, August 1970.

Coburn, R., Human Engineering Guide to Ship System Development, Naval Electronics Laboratory Center, San Diego, CA 92152, NELC/TD 278, October 1973.

Comptroller General of the United States, Greater Use of Flight Simulators in Military Pilot Training Can Lower Costs and Increase Pilot Proficiency, General Accounting Office, Washington, D.C. 20548, B-157905, 9 August 1973.

Connelly, E.M., Bourne, F.J., and Loental, D.G., Computer-Aided Techniques for Providing Operator Performance Measures, Air Force Human Resources Laboratory, Wright-Patterson AFB, OH 45433, AFHRL-TR-74-87, December 1974, AD A014330. See also Proceedings of the 18th Annual Meeting of the Human Factors Society, Huntsville, AL, p. 359-367, October 1974.

Connelly, E.M., Bourne, F.J., Loental, D.G., Migliaccio, J.S., Burchick, D.A., and Knoop, P.A., Candidate T-37 Pilot Performance Measures for Five Contact Maneuvers, Air Force Human Resources Laboratory, Wright-Patterson AFB, OH 45433, AFHRL-TR-74-88, December 1974, AD A014331.

Cotterman, T.E., and Wood, M.E., Retention of Simulated Lunar Landing Mission Skills: A Test of Pilot Reliability, Aerospace Medical Research Laboratories, Wright-Patterson AFB, OH 45433, AMRL-TR-66-222, April 1967, AD 817232.

Cureton, E.E., Validity, from Lindquist, E.F. (Editor), Educational Measurement, George Banta Publishing Company, 1951.

Daniel, R.S., and Eason, R.G., Error in Aiming Point Identification on a Radar Bombing Mission, Air Force Personnel and Training Research Center, Mather AFB, CA, Project No. 7711, April 1954, AD 36050.

Danneskiold, R.D., Objective Scoring Procedure for Operational Flight Trainer Performance, U.S. Naval Special Devices Center, Port Washington, NY, SPECDEVCEN 999-2-4, February 1955, AD 110925.

Davis, R.H., and Behan, R.A., Evaluating System Performance in Simulated Environments, from Gagne, R.M. (Editor), Psychological Principles in System Development, Holt, Rinehart, & Winston, Inc., 1966.

Dawes, R.M., "The Robust Beauty of Improper Linear Models in Decision Making," American Psychologist, v. 34(7), p. 571-582, July 1979.

Deberg, O.K., A Scoring Algorithm for Assessing Air-to-Air Combat Performance, Phase I: A Factor Analysis, Air Command and Staff College, Maxwell AFB, AL 36112, Report No. 0590-77, May 1977, AD B019569.

Demaree, R.G., Norman, D.A., and Matheny, W.G., An Experimental Program for Relating Transfer of Training to Pilot Performance and Degree of Simulation, U.S. Naval Training Device Center, Port Washington, NY, NAVTRADEVCEN 1388-1, June 1965, AD 471806.

Department of Defense Military Specification MIL-H-46855B, Human Engineering Requirements for Military Systems, Equipment and Facilities, January 1979.

Dixon, W.J., and Massey, F.J. Jr., Introduction to Statistical Analysis, 3rd ed., McGraw-Hill, 1969.

Doll, R.E., Lane, N.E., Shannon, R.H., and Clisham, W.F. Jr., Naval Flight Officer Function Analysis, Volume V: A-6 Bombardier/Navigator (B/N), Naval Aerospace Medical Research Laboratory, Pensacola, FL, December 1972.

Engler, H.F., Davenport, E.L., Green, J., and Sears, W.E. III, Human Operator Control Strategy Model, Air Force Human Resources Laboratory, Wright-Patterson AFB, OH 45433, AFHRL-TR-79-60, April 1980, AD A084695.

Ericksen, S.C., A Review of the Literature on Methods of Measuring Pilot Proficiency, Human Resources Research Center, Lackland AFB, San Antonio, TX, Research Bulletin 52-25, August 1952, ATI No. 169181.

Farrell, J.P., Measurement Criteria in the Assessment of Helicopter Pilot Performance, Proceedings of a Conference on Aircrew Performance in Army Aviation, held at U.S. Army Aviation Center, Fort Rucker, Alabama, on November 27-29, 1973, Office of the Chief of Research, Development and Acquisition (Army) and U.S. Army Research Institute for the Behavioral and Social Sciences, Arlington, VA 22209, July 1974, p. 141-148, AD A001539.

Farrell, J.P., and Fineberg, M.L., "Specialized Training versus Experience in Helicopter Navigation at Extremely Low Altitudes," Human Factors, v. 18(3), p. 305-308, June 1976.

Federman, P.J., and Siegel, A.I., Communications as a Measurable Index of Team Behavior, U.S. Naval Training Device Center, Port Washington, NY, NAVTRADEVCEN 1537-1, October 1965, AD 623135.

Ferguson, R., The Development, Implementation, and Evaluation of a Computer-Assisted Branch Test for a Program of Individually Prescribed Instruction, Ph.D. Dissertation, University of Pittsburgh, 1969.

Fineberg, M.L., Navigation and Flight Proficiency under Nap-of-the-Earth Conditions as a Function of Aviator Training and Experience, Proceedings of the 18th Annual Meeting of the Human Factors Society, Huntsville, AL, p. 249-254, October 1974.

Fineberg, M.L., Meister, D., and Farrell, J.P., An Assessment of the Navigation Performance of Army Aviators Under Nap-of-the-Earth Conditions, U.S. Army Research Institute for the Behavioral and Social Sciences, Alexandria, VA 22333, Research Report 1195, August 1978, AD A060563.

Fitts, P.M., Psychomotor Skills and Behavior Modification, Academic Press, 1965.

Forrest, E.G., Develop An Objective Flight Test for the Certification of A Private Pilot, Federal Aviation Administration, Washington, D.C. 20590, FAA-DS-70-17, May 1970, AD 708293.

Geiselhart, R., Kemmerling, P., Cronburg, J.G., and Thorburn, D.E., A Comparison of Pilot Performance Using a Center Stick vs Sidearm Control Configuration, Aeronautical Systems Division, Wright-Patterson AFB, OH, Technical Report ASD-TR-70-39, November 1970, AD 720846.

Geiselhart, R., Jarboe, J.K., and Kemmerling, P.T. Jr., Investigation of Pilots' Tracking Capability Using a Roll Command Display, Aeronautical Systems Division, Wright-Patterson AFB, OH, ASD-TR-71-46, December 1971, AD A009590.

Geiselhart, R., Schiffler, R.J., and Ivey, L.J., A Study of the Task Loading Using a Three Man Crew on a KC-135 Aircraft, Aeronautical Systems Division, Wright-Patterson AFB, OH 45433, ASD-TR-76-19, October 1976, AD A044257.

Gerlach, S., Cues, Feedback and Transfer in Undergraduate Pilot Training, Air Force Office of Scientific Research, Bolling AFB, DC 20332, AFOSR-TR-76-0056, August 1975, AD A033219.

Glaser, R., and Klaus, D.J., Proficiency Measurement: Assessing Human Performance, from Gagne, R.M. (Editor), Psychological Principles in System Development, Holt, Rinehart & Winston, Inc., 1966.

Gomer, F.E., Beideman, L.R., and Levine, S.H., The Application of Biocybernetic Techniques to Enhance Pilot Performance During Tactical Missions, Defense Advanced Research Projects Agency, MDC E2046, October 1979.

Graham, G., Hanish, J., and Hersh, S., Specific Behavioral Objectives and Criterion Test Statements, Airframe Emergencies and System Malfunctions, A-6E (TRAM) Aircraft, Vol. III, Grumman Aerospace Corporation, Report ISD (SAT) 0003-6, December 1975.

Greening, C.P., Performance Measurement and Prediction in Air-to-Ground Target Acquisition, Naval Weapons Center, China Lake, CA 93555, NWC TP 5747, April 1975.

Greer, G.D. Jr., Smith, D., Hatfield, J.L., Colgan, C.M., and Duffy, J.O., PPDR Handbook: Use of Pilot Performance Description Record in Flight Training Quality Control, Human Resources Research Office, Fort Rucker, AL, Department of the Army, Washington, DC 20310, December 1963, AD 675337.

Grodsky, M.A., "The Use of Full Scale Mission Simulation for the Assessment of Complex Operator Performance," Human Factors, v. 9(4), p. 341-348, August 1967.

Hanish, J., and Feddern, J., Specific Behavioral Objectives and Criterion Test Statements, Bombardier/Navigator Procedures, A-6E (TRAM) Aircraft, Vol. II, Grumman Aerospace Corporation, Report ISD (SAT) 0003-5, December 1975.

Haygood, R.C., Leshowitz, B., Parkinson, S.R., and Eddowes, E.E., Visual and Auditory Information Processing Aspects of the Acquisition of Flying Skill, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-74-79, December 1974, AD A007721.

Hitch, C., "Sub-optimization in Operations Problems," Journal of the Operations Research Society of America, v. 1(3), p. 87-99, May 1953.

Hitchcock, L. Jr., The Analysis of Human Performance within the Operational Flight Environment, Proceedings, Annual AGARD Meeting of the Aerospace Medical Panel, Toronto, Canada, Advisory Group for Aerospace Research and Development, Paris, France, AGARD Conference Proceedings No. 14, p. 9-14, September 1966, AD 661165.

Hulin, C.L., and Alvares, K.M., An Evaluation of Three Possible Explanations of the Temporal Decay in Predicting Pilot Proficiency, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-71-5, February 1971, AD 731191.

Hulin, C.L., and Alvares, K.M., Three Explanations of Temporal Changes in Ability-Skill Relationships: Literature Review and Theoretical Analysis, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-71-6, February 1971, AD 732612.

Jensen, R.S., Vander Kolk, R.J., and Roscoe, S.N., Pilotage Error in Area Navigation: Effects of Symbolic Display Variables, Federal Aviation Administration, Washington, DC 20590, FAA-RD-72-29, January 1972, AD 750178.

Johnson, H.M., and Boots, M.L., Analysis of Ratings in the Preliminary Phase of the CAA Training Program, CAA Division of Research Report No. 21, 1943.

Jones, M.D., "Rate and Terminal Processes in Skill Acquisition," Journal of Psychology, v. 83, 1970.

Jones, R.A., The F-111D Simulator Can Reduce Cost and Improve Aircrew Evaluation, Air War College, Maxwell AFB, AL 36112, Professional Study No. 5959, April 1976, AD B010452.

Kalisch, S.J., Computerized Instructional Adaptive Testing Model: Formulation and Validation, Air Force Human Resources Laboratory, Brooks AFB, TX 78235, AFHRL-TR-79-33, February 1980.

Kelley, C.R., and Wargo, M.J., Adaptive Techniques for Synthetic Flight Training Systems, Naval Training Device Center, Orlando, FL, NAVTRADEVCEN 68-C-0136-1, October 1968, AD 678536.

Kelly, M.J., Wooldridge, L., Hennessy, R.T., Vreuls, D., Barnebey, S.F., Cotton, J.C., and Reed, J.C., Air Combat Maneuvering Performance Measurement, Naval Training Equipment Center, Orlando, FL 32813, and Air Force Human Resources Laboratory, Brooks AFB, TX 78235, NAVTRAEQUIPCEN IH-315/AFHRL-TR-79-3, September 1979; see also Proceedings of the 23rd Annual Meeting of the Human Factors Society, p. 324-328, 1979.

Kemmerling, P.T. Jr., The Impact of Full-Mission Simulation on the TAC Combat Crew Training System, Air War College, Maxwell AFB, AL 36112, Professional Study No. 5655, April 1975, AD B004457.

Klier, S., and Gage, H., Motion Factors in Flight Simulation, Naval Training Device Center, Orlando, FL 32813, NAVTRADEVCEN 68-C-0007-1, December 1970, AD 880341.

Knoop, P.A., Programming Techniques for the Automatic Monitoring of Human Performance, Aerospace Medical Research Laboratory, Wright-Patterson AFB, OH 45433, AMRL-TR-66-16, April 1966, AD 637454.

Knoop, P.A., Development and Evaluation of a Digital Computer Program for Automatic Human Performance Monitoring in Flight Simulator Training, Aerospace Medical Research Laboratories, Wright-Patterson AFB, OH 45433, AMRL-TR-67-97, August 1968, AD 675542.

Knoop, P.A., and Welde, W.L., Automated Pilot Performance Assessment in the T-37: A Feasibility Study, Air Force Human Resources Laboratory, Advanced Systems Division, AFHRL/AS, Wright-Patterson AFB, OH 45433, AFHRL-TR-72-6, April 1973, AD 766446; see also Human Factors, v. 15(6), p. 583-597, December 1973.

Krendel, E.S., and Bloom, J.W., The Natural Pilot Model for Flight Proficiency Evaluation, U.S. Naval Training Device Center, Port Washington, NY, NAVTRADEVCEN 323-1, April 1963, AD 410805.

Leshowitz, B., Parkinson, S.R., and Waag, W.L., Visual and Auditory Information Processing in Flying Skill Acquisition, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-74-103, December 1974, AD A009636.

Lewis, R.E.F., Navigation of Helicopters in Slow and Very Low Flight: A Comparison of Solo and Dual Pilot Performance, Proceedings, Annual AGARD Meeting of the Aerospace Medical Panel, Toronto, Canada, Advisory Group for Aerospace Research and Development, Paris, France, AGARD Conference Proceedings No. 14, September 1966, p. 29-34, AD 661165.

Lindsay, G.F., Statistical Considerations in MX Site Characterization, Systems Exploration, Inc., Report No. B3440-79462H53BAXSX37718-H-2590D, November 1979.

Locke, E.A., Zavala, A., and Fleishman, E.A., "Studies of Helicopter Pilot Performance: II. The Analysis of Task Dimensions," Human Factors, v. 7(3), p. 285-301, June 1965.

Lorge, I., The Fundamental Nature of Measurement, from Lindquist, E.F. (Editor), Educational Measurement, George Banta Publishing Company, 1951.

Marks, M.R., Development of Human Proficiency and Performance Measures for Weapon Systems Testing, Aerospace Medical Research Laboratories, Wright-Patterson AFB, OH, ASD Technical Report 61-733, December 1961, AD 272975.

Matheny, W.G., Patterson, G.W. Jr., and Evans, G.I., Human Factors in Field Testing, Office of Naval Research, Washington, DC, LS-ASR-70-1, May 1970, AD 716438.

McCormick, E.J., Human Factors in Engineering and Design, 4th ed., McGraw-Hill, 1976.

McCoy, W.K. Jr., "Problems of Validity of Measures Used in Investigating Man-Machine Systems," Human Factors, v. 5(4), p. 373-377, August 1963.

McDowell, E.D., The Development and Evaluation of Objective Frequency Domain Based Pilot Performance Measures in ASUPT, Air Force Office of Scientific Research, Bolling AFB, DC 20332, AFOSR-TR-78-1239, April 1978, AD A059477.

McMinn, V.L., "A Subjective Leap," Approach, v. 26(7), p. 24-25, January 1981.

Meister, D., and Rabideau, G.F., Human Factors Evaluation in System Development, Wiley and Sons, 1965.

Miller, N.E., Galt, W.E., and Gershenson, C.P., A Large Scale Objective Study of the Effects of Additional Training on Flying Skill, from Miller, N.E. (Editor), Psychological Research on Pilot Training, AAF Aviation Psychology Program Research Report No. 8, 1947.

Mixon, T.R., and Moroney, W.F., Annotated Bibliography of Objective Pilot Performance, Naval Training Equipment Center, Orlando, FL 32813, NAVTRAEQUIPCEN IH-330, March 1981. (In press.)

Mood, A.M., Graybill, F.A., and Boes, D.C., Introduction to the Theory of Statistics, 3rd ed., McGraw-Hill, 1974.

Moore, S.B., Meshier, C.W., and Coward, R.E., The Good Stick Index: A Performance Measurement for Air Combat Training, First Interservice/Industry Training Equipment Conference, sponsored by Naval Training Equipment Center, Orlando, FL 32813, NAVTRAEQUIPCEN IH-316, November 1979, p. 145-154.

Morris, W.T., Management Science in Action, Richard D. Irwin, Inc., Homewood, IL, 1963.

Mundel, M.E., Motion and Time Study: Improving Productivity, Prentice-Hall, 1978.

Murphy, R., Individual Differences in Pilot Performance, Proceedings of the 6th Congress of the International Ergonomics Association, College Park, MD, July 1976, p. 403-409.

Norman, D.A., Memory and Attention, 2nd ed., Wiley & Sons, Inc., 1976.

North, R.A., and Griffin, G.R., Aviator Selection 1919-1977, Naval Aerospace Medical Research Laboratory, Pensacola, FL 32508, NAMRL-SR-77-2, October 1977, AD A048105.

Obermayer, R.W., and Muckler, F.A., Performance Measurement in Flight Simulation Studies, National Aeronautics and Space Administration, Washington, DC, NASA CR-82, July 1964.

Obermayer, R.W., and Vreuls, D., Combat-Ready Crew Performance Measurement System: Phase IIIA Crew Performance Measurement, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-74-108 (IV), December 1974, AD B005520.

Obermayer, R.W., Vreuls, D., Muckler, F.A., Conway, E.J., and Fitzgerald, J.A., Combat-Ready Crew Performance Measurement System: Final Report, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-74-108 (I), December 1974, AD B005517.

Obermayer, R.W., Vreuls, D., Muckler, F.A., and Conway, E.J., Combat-Ready Crew Performance Measurement System: Phase IIID Specifications and Implementation Plan, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-74-108 (VII), December 1974, AD B005522.

Office of the Secretary of Defense, Department of Defense Review of Tactical Jet Operational Readiness Training, Assistant Secretary of Defense (Manpower and Reserve Affairs), Washington, DC 20301, November 1968, AD 845259.

Oiler, R.G., Human Factors Data Thesaurus (An Application to Task Data), Aerospace Medical Research Laboratories, Wright-Patterson AFB, OH, AMRL-TR-67-211, March 1968, AD 670578.

Operations Committee, Naval Science Department, Naval Operations Analysis, U.S. Naval Institute, 1968.

Osborn, W.C., Developing Performance Tests for Training Evaluation, Human Resources Research Organization, Alexandria, VA 22314, HumRRO-PP-3-73, October 1971, AD 758436.

Parker, J.F. Jr., "The Identification of Performance Dimensions Through Factor Analysis," Human Factors, v. 9(4), p. 367-373, August 1967.

Parsons, H.M., Man-Machine System Experiments, Johns Hopkins Press, 1972.

Pickrel, E.W., and McDonald, T.A., "Quantification of Human Performance in Large, Complex Systems," Human Factors, v. 6(6), p. 647-662, December 1964.

Pierce, B.J., De Maio, J., Eddowes, E.E., and Yates, D., Airborne Performance Methodology Application and Validation: F-4 Pop-Up Training Evaluation, Air Force Human Resources Laboratory, Williams AFB, AZ 85224, AFHRL-TR-79-7, June 1979, AD A072611. See also Proceedings of the 23rd Annual Meeting of the Human Factors Society, p. 320-324, November 1979.

Performance Measurement in Helicopter Training
Prophet, W.W.
and Operations Office of Chief of Research and Development
April 1972,
(Army), Washington, DC 20310, HumRRO-PP-10-72
AD 743157.
,
,
,
Long-Term Retention of Flying S ills: A Review
Prophet, W.W.
of the Literature U.S. Air Force Headquarters (AF/SAA)
Washington, DC 2 0332, HumRRO FR- ED (P) -7 6-35 October 197 6,
AD A036077.
,
,
,
Prophet, W.W.
U.S. Navy Fleet Aviation Training Program
Development Naval Training Equipment Center, Orlando, FL
32813, NAVTRAEQUIPCEN 77-C-0009-1, xMarch 1978, AD A055788.
,
,
Rankin, W.C., and McDaniel, W.C., Computer Aided Training
Evaluation and Scheduling (CATES) System: Assessing Flight
Task Proficiency Training and Analysis Group, Orlando, FL
32813, TAEG Report No. 94, December' 1980
,
Read, J.H., Specification for Trainer, A-6E Weapon System
Device 2F114 Grumman Aerospace Corporation, September 1974.
,
Ricketson, D.S., Pilot Error As A Cause of Army Helicopter
Accidents Proceedings of a Conference on Aircrew Performance
Army Aviation held at U.S. Army Aviation Center, Fort
Rucker, AL, on November 27-29, 1973, Office of the Chief of
Research, Development and Acquisition (Army) and U.S. Army
Research Institute for the Behavioral and Social Sciences,
Arlington, VA 22209, July 1974, p. 75-80, AD A001539.
,
m
Riis, E., Measurement of Performance in F-8 6K Simulator
Proceedings, Annual AGARD Meeting of the Aerospace Medical
Panel, Toronto, Canada, Advisory Group for Aerospace Research
and Development, Paris, France, AGARD Conference Proceedings
No. 14, Assessment of Skill and Performance in Flying, September
1966, p. 49-56, AD 661165.
,
Rinsky, R.
Pearlman, T., Nienaltowski, W.
and Marchessault
CCriteria Report for A-6E Weapon System Trainer Device
2F114, Final, Vol. II
Grumman Aerospace Corporation,
TR-T01-CD-A005-04, January 1977.
,
,
,
,
216
Roscoe, S.N., Review of Flight Training Technology U.S. Army
Research Institute for the Behavioral and Social Sciences,
Research Problem Review 76-3, July 1976, AD A076641.
,
Rosenmayer, C.E., and Asiala, C.F., F-18 Human Engineering
Task Analysis Report McDonnell Aircraft Company, St. Louis,
MO 63166, MDC A4276, August 1976.
,
Ryack, B.L., and Krendel, E.S., Experimental Study of the
Natural Pilot Flight Proficiency Evaluation Model U.S. Naval
Training Device Center, Port V7ashington, NY, NAVTRADEVCEN
323-2, April 1963, AD 414666.
,
Identification of
Significant Aircrew Decisions in Navy Attack Aircraft Naval
Weapons Center, China Lake, CA 93 555, NWC TP 6117, January
1980, AD B045034.
Saleh, J., Thomas, J.O., and Boylan, R.J.,
,
"Visual
Sanders, M.G., Simmons, R.R., and Hofmann, M.A.
Workload of the Copilot/Navigator During Terrain Flight,"
Human Factors v. 21(3): p. 369-383, June 1979.
,
,
Schohan, B.
Rawson, H.E., and Soliday, S.M., "Pilot and
Observer Performance in Simulated Low Altitude High Speed
Flight," Human Factors v. 7(3): p. 257-265, June 1965.
,
,
Semple, C.A.
Vreuls, D.
Cotton, J.C., Durfee, D.R., Hooks,
J.T., and Butler, E.A., Functional Design of an Automated
Instructional Support System for Operational Flight Trainers
Naval Training Equipment Center, Orlando, FL 32813,
NAVTRAEQUIPCEN 76-C-0096-1, January 1979, AD A065573.
,
,
,
Senders, J.W., A Study of the Attentional Demand of VFR
Helicopter Pilotage Proceedings of a Conference on Aircrew
Performance in Army Aviation held at U.S. Army Aviation Center,
Fort Rucker, AL on November 27-29, 1973, Office of the Chief
of Research, Development and Acquisition (Army) and U.S. Army
Research Institute for the Behavioral and Social Sciences,
Arlington, VA 22209, July 1974, p. 175-187, AD A001539.
,
Shannon, R.H., and Waag, W.L., Toward the Development of a
Criterion for Fleet Effectiveness in the F-4 Fighter Community
Proceedings of the 16th Annual Meeting of the Human Factors
Society, Los Angeles, CA, October 1972, p. 335-337.
See also
NAMRL-1173, December 1972, AD 755184,
Shelnutt, J.B., Childs, J.M.
Prophet, W.W.
and Spears, W.D.,
Human Factors Problems in General Aviation Federal Aviation
Administration, Atlantic City, NJ 08405, FAA-CT-8 0-194 April
,
,
,
,
1980.
217
,
,
Shipley, B.D., Gerlach, V.S., and Brecke, F.H., Measurement
Air Force Office
of Flight Performance in a Flight Simulator
AFOSR-TR-75-0208
DC
AFB,
20332,
of Scientific Research, Boiling
August 1974, AD A004488.
,
Shipley, B.D., An Automated Measurement Technique for Evaluating
Pilot Skill Air Force Office of Scientific Research, Boiling
February 1976, AD A033920.
AFB, DC 20332, AFOSR-TR-76-1253
,
,
Siegel, A.I., and Federman, P.J., Increasing ASW Helicopter
Effectiveness Through Communications Training Naval Training
Device Center, Orlando, FL, NAVTRADEVCEN 66-C-0045-1, October
1968, AD 682498.
,
Siegel, S., Nonparametric Statistics for the Behavioral
Sciences McGraw-Hill, 1956.
,
and Whitfield, D., Measurement of
Singleton, W.T., Fox, J.G.
Man At Work Taylor & Francis LTD, 1971.
,
,
Smit, J.
Pilot Workload Analysis Based Upon In-Flight
Physiological Measurements and Task Analysis Methods National
Aerospace Laboratory (NLR)
The Netherlands, NLR-MP-7 6001-U,
June 1976, AD B026957.
,
,
,
Smith, B.A.
Development and Inflight Testing of a Multi-Media
Course for Instructing Navigation for Night Nap-of-the-Earth
Flight Proceedings of the 24th Annual Meeting of the Human
Factors Society, Los Angeles, CA, October 1980, p. 568-571.
,
,
Smode, A.F., Gruber, A., and Ely, J.H., The Measurement of
Advanced Flight Vehicle Crew Proficiency in Synthetic Ground
Environments 657 0th Aerospace Medical Research Laboratories,
Wright-Patterson AFB, OH, MRL-TDR-62-2 February 1962,
,
,
AD 273449.
Smode, A.F., and Meyer, D.E., Research Data and Information
Relevant to Pilot Training. Volume I. General Features of Air
Force Pilot Training and Some Research Issues Aerospace
Medical Research Laboratories, Wright-Patterson AFB, OH,
AMRL-TR-66-99-V0I I, July 1966, AD 801776.
,
Soliday, S.M., "Navigation in Terrain-Following Flight,"
Human Factors v. 12(5): p. 425-433, October 1970.
,
Steyn, D.W.,
"The Criterion - Stagnation or Development? A
Selective Review of Published Research Work on Criterion
Problems," Psychologia Africana v. 12(3): p. 193-211, March
,
1969.
Swezey, R.W.
"Aspects of Criterion-Referenced Measurement in
Performance Evaluation," Human Factors v. 20(2): p. 169-178,
April 1978.
,
,
218
,
Lankford, H.E., Miller, R.M.
Swink, J.R., Butler, E.A.
and Waag, W.L., Definition of Requirements for
Watkins, H.
Performance Measurement System for C-5 Aircrew Members Air
Force Human Resources Laboratory, Williams AFB, AZ 85224,
AFHRL-TR-7 8-54, October 1978, AD A063282.
,
,
,
Thorndike, R.L., Reliability from Lindquist, E.F. (Editor),
Educational Measurement George Banta Publishing Company, 1951.
,
,
Effects on A-6E Bombardier/Navigator Flight
Tindle, J.R.
Training with the Introduction of Device 2F114, A-6E Weapon
System Trainer Master's Thesis, Naval Postgraduate School,
March 1979.
,
,
Uhlaner, J.E., and Drucker, A.J., "Criteria for Human Performance Research," Human Factors v. 6(3): p. 265-278, June 1964.
,
Uhlaner, J.E., and Drucker, A.J., "Military Research on Performance Criteria: A Change of Emphasis," Human Factors
v. 22(2): p. 131-139, April 1980.
,
Van Cott, H.P., and Kinkade, R.G. (Editors), Human Engineering
Guide to Equipment Design Joint Army-Navy-Air Force Steering
Committee, 197 2.
,
Voiers, W.D., A Comparison of the Components of Simulated
Radar Bombing Error in Terms of Reliability and Sensitivity
to Practice Air Force Personnel & Training Research Center,
Lackland AFB, TX, AFPTRC-54-74 April 1954, AD 62566.
,
,
Vreuls, D.
and Obermayer, R.W.
Study of Crew Performance
Measurement for High-Performance Aircraft Weapon System
Training:
Air-to- Air Intercept Naval Training Device Center,
Orlando, FL 32813, NAVTRADEVCEN 70-C-0059-1, February 1971,
AD 727739.
,
,
,
Vreuls, D., and Obermayer, R.W., Emerging Developments in
Flight Training Performance Measurement from U.S. Naval
Training Device Center 25th Anniversary Commemorative Technical Journal, Orlando, FL 32813, November 1971, p. 199-210.
,
Vreuls, D., Obermayer, R.W., Goldstein, I., and Lauber J.K.,
Measurement of Trainee Performance in a Captive Rotary-Wing
Device Naval Training Equipment Center, Orlando, FL 32813,
June 1973, AD 764088.
,
,
Vreuls, D., Obermayer, R.W., and Goldstein, I., Trainee performance Measurement Development Using Multivariate Measure
Selection Techniques Naval Training Equipment Center, Orlando,
FL 32813, NAVTRAEQUIPCEN 73-C-00 66-1, September 1974, AD 787594.
See also Proceedings, NTEC/Industry Conference, Orlando, FL,
NAVTRAEQUIPCEN IH-240, November 1974, P. 227-236, AD A000970.
,
219
,
Vreuls, D., Wooldridge, A.L., Obermayer, R.W., Johnson, R.M.
and Goldstein, I., Development and Evaluation
Norman, D.A.
Measures in an Automated Instrument
Performance
of Trainee
Flight Maneuvers Trainer Naval Training Equipment Center,
Orlando, FL 32813, NAVTRAEQUIPCEN 74-C-0063-1, October 1975,
AD A024517.
,
,
Aircrew Performance
and Wooldridge, A.L.
Vreuls, D.
Measurement Proceedings of the Symposium on Productivity
Personnel Assessment in Navy Systems, Naval
Enhancement:
Personnel Research and Development Center, San Diego, CA,
October 1977.
,
,
,
Vreuls, D.
and Cotton, J.C., Feasibility of Aircrew Performance Measurement Standardization for Research Air Force
Office of Scientific Research, Boiling AFB, DC 20332, April
,
,
1980.
Waag, W.L., Eddowes, E.E., Fuller, J.H. Jr., and Fuller, R.R.,
ASUPT Automated Objective Performance Measurement System Air
Force Human Resources Laboratory, Williams AFB, AZ 85224,
AFHRL-TR-7 5-3, March 1975, AD A014799.
,
Welford, A.T., Fundamentals of Skill
,
Methuen
&
Co.
LTD,
1971.
Welford, A.T., Skilled Performance:
Perceptual and Motor
Skills Scott, Foresman and Company, 197 6.
,
Williams, A.C. Jr., Simon, C.W., Haugen, R.
and Roscoe, S.N.,
Operator Performance in Strike Reconnaissance Wright Air
Development Division, Wright-Patterson AFB, OH, WADD Technical
Report 60-521, August 1960, AD 246545.
,
,
220
APPENDIX A
A-6E TRAM RADAR NAVIGATION TASK LISTING

The enclosed task listing for the B/N during radar navigation in the A-6E WST was compiled from various sources as discussed in Chapter V. This was the first phase of developing a task analysis for the purpose of measuring performance of the B/N during radar navigation.
SEGMENT 1: AFTER TAKEOFF CHECKS

T1 ADJUST RADAR PANEL FOR SCOPE RETURN
   S1 SET SEARCH RADAR PWR SWITCH - ON
   S2 SET XMT SWITCH - NORM

T2 ACTIVATE SYSTEM STEERING TO INITIAL POINT (IP)
   S1 CHECK COMPTMODE SWITCH - STEER
   S2 DEPRESS TGT N ADDRESS KEY HAVING IP LAT/LONG
   S3 CHECK COMPT/MAN SWITCH - COMPT

T3 CHECK FOR ACCURATE SYSTEM STEERING TO IP (a worked sketch of this check follows this segment)
   S1 READ SYSTEM BEARING AND RANGE TO IP FROM DVRI BUG AND RANGE DISPLAYS
   S2 COMPARE SYSTEM BEARING AND RANGE TO IP WITH PRE-PLANNED OR ESTIMATED BEARING AND RANGE TO IP
   S3 GO TO T5 IF SYSTEM STEERING TO IP IS CORRECT
   S4 GO TO T4 IF SYSTEM STEERING TO IP IS NOT CORRECT

T4 TROUBLESHOOT SYSTEM STEERING IF REQUIRED
   S1 DETERMINE IP LAT/LONG FROM CHART OR IFR SUPPLEMENT
   S2 COMPARE SYSTEM IP LAT/LONG WITH ACTUAL IP LAT/LONG
      (a) THROW DDU DATA SWITCH - ON CALL
      (b) READ SYSTEM TGT N ADDRESS (IP) FROM LOWER DDU LAT/LONG DISPLAYS
   S3 INSERT CORRECT IP LAT/LONG IF REQUIRED
   S4 EVALUATE SYSTEM PRESENT POSITION AND ESTIMATED POSITION FROM CHART
      (a) THROW DDU DATA SWITCH - PRES POS
      (b) READ SYSTEM PRESENT POSITION LAT/LONG FROM LOWER DDU LAT/LONG DISPLAYS
      (c) COMPARE SYSTEM PRESENT POSITION WITH ESTIMATED PRESENT POSITION FROM CHART
      (d) INSERT CORRECT PRESENT POSITION IF REQUIRED
          (1) DEPRESS PRES LOC ADDRESS KEY ON COMPUTER KEYBOARD
          (2) THROW COMPTMODE SWITCH - ENTER
          (3) INSERT CORRECT PRESENT POSITION LAT/LONG
          (4) THROW COMPTMODE SWITCH - STEER
          (5) CHECK LOWER DDU LAT/LONG DISPLAYS FOR ACCURATE DATA ENTRY
   S5 INFORM PILOT OF APPROXIMATE HEADING TO IP IF REQUIRED AND GO TO T6

T5 INFORM PILOT THAT SYSTEM STEERING IS TO IP
   S1 CHECK THAT PILOT MAINTAINS SAFE FLIGHT AND FOLLOWS SYSTEM STEERING

T6 CHECKOUT RADAR FOR STATUS
   S1 GO TO NEXT EVENT IF RADAR RETURN IS PRESENT
   S2 GO TO T7 IF RADAR RETURN IS NOT PRESENT

T7 TROUBLESHOOT RADAR IF REQUIRED
   S1 CHECK TEST MODE SWITCH (RADAR/DRS TEST) - CENTERED
   S2 CHECK FAULT ISLN SWITCH - CENTERED
   S3 CHECK RADAR CIRCUIT BREAKER - IN
   S4 USE PCL CHECKLIST FOR RADAR TURN-UP PROCEDURES
   S5 ALERT PILOT IF RADAR INOPERATIVE AND ABORT RADAR NAVIGATION FLIGHT
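The steering check in Tasks T3 and T4 above reduces to comparing two bearing/range estimates, and two lat/long positions, against a tolerance. The following is a minimal Python sketch of that decision logic, written for illustration only; the tolerance values, the function names, and the flat-earth range approximation are assumptions of this sketch, not values taken from the thesis or from any A-6E publication.

```python
import math

# Illustrative tolerances (assumed for this sketch, not from the thesis).
BEARING_TOL_DEG = 5.0
RANGE_TOL_NM = 2.0

def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def steering_check(sys_brg, sys_rng, plan_brg, plan_rng):
    """T3: compare system bearing/range to IP with preplanned values.
    Returns 'T5' (steering correct) or 'T4' (troubleshoot required)."""
    if (angle_diff(sys_brg, plan_brg) <= BEARING_TOL_DEG
            and abs(sys_rng - plan_rng) <= RANGE_TOL_NM):
        return "T5"
    return "T4"

def position_error_nm(lat1, lon1, lat2, lon2):
    """T4 S4(c): rough distance between system and chart-estimated
    present position. One minute of latitude is one nautical mile;
    longitude minutes are scaled by cos(latitude). This flat-earth
    approximation is adequate over short low-level navigation legs."""
    dlat_min = (lat2 - lat1) * 60.0
    dlon_min = (lon2 - lon1) * 60.0 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat_min, dlon_min)

# Example: system steering 047 deg / 38 nm vs. preplanned 045 deg / 37 nm.
print(steering_check(47.0, 38.0, 45.0, 37.0))                    # -> T5
print(round(position_error_nm(36.60, -121.90, 36.55, -121.80), 1))
```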
SEGMENT 2: NAVIGATION TO IP

T1 TUNE RADAR FOR OPTIMUM PPI DISPLAY
   S1 ROTATE CONTRAST CONTROL - CW
   S2 ROTATE BRT CONTROL (UNTIL SWEEP PRESENT) - CW
   S3 CHECK VIDEO/DIF CONTROLS - CCW
   S4 ROTATE RCVR CONTROL (UNTIL RETURN IS PRESENT) - CW
   S5 CHECK DISPLAYS BUTTONS - PPI
   S6 ADJUST PPI RANGE CONTROL (UNTIL IP AT TOP OF SCOPE) - CW/CCW
   S7 ROTATE RNG MKR/AZ MKR CONTROLS (UNTIL CURSORS PRESENT) - CW
   S8 CHECK SCAN STAB CONTROL - ADL
   S9 ROTATE SCAN ANGLE CONTROL (UNTIL DESIRED SWEEP WIDTH PRESENT) - CW/CCW
   S10 CHECK SCAN RATE SWITCH - FAST
   S11 CHECK AMTI CONTROL - CCW
   S12 CHECK STC SLOPE/DEPTH CONTROLS - CW/CCW
   S13 THROW ANT PATT SWITCH (CONTINUE FOR 10 SECOND MINIMUM) - FAR
   S14 CHECK RCVR SWITCH - AFC
   S15 CHECK BEACON CONTROL - CCW
   S16 CHECK AZ-RNG TRKG SWITCH - OFF
   S17 CHECK FREQ AGILITY SWITCH

T2 POSITION CURSOR INTERSECTION ON IP IF REQUIRED
   S1 PLACE LEFT HAND ON SLEW CONTROL STICK
   S2 DEPRESS RADAR SLEW BUTTON AND PUSH SLEW CONTROL STICK IN DIRECTION OF DESIRED CURSOR INTERSECTION MOVEMENT
   S3 COMPARE RADAR RETURN IMAGE ON CURSOR INTERSECTION WITH PREPLANNED CHART AND QUICK & DIRTY IP
   S4 REPEAT S1 AND S2 IF REQUIRED
   S5 PUSH CORRECT POS BUTTON ON LOWER DDU PANEL

T3 ACTIVATE DOPPLER RADAR
   S1 SET DOPPLER CONTROL SWITCH (LAND OR SEA AS APPROPRIATE) - ON
   S2 MONITOR DOPPLER CONTROL PANEL FOR DOPPLER RADAR STATUS
      (a) OBSERVE MEMORY LIGHT OUT FOR PROPER OPERATION
      (b) OBSERVE DRIFT AND GND SPEED DISPLAYS FOR DRIFT ANGLE AND GROUND SPEED PRESENT

T4 SELECT SYSTEM STEERING OR DEAD RECKONING (DR) NAVIGATION
   S1 DETERMINE STATUS OF INS, RADAR, AND DOPPLER RADAR
   S2 OBSERVE CURSOR INTERSECTION DRIFT ON RADAR SCOPE
   S3 GO TO T5 IF DEAD RECKONING IS SELECTED (LARGE CURSOR INTERSECTION DRIFT) OR SEGMENT 3, T1 IF SYSTEM NAVIGATION IS SELECTED (SMALL CURSOR INTERSECTION DRIFT)

T5 PERFORM DEAD RECKONING NAVIGATION (BEYOND THE SCOPE OF THIS REPORT)
SEGMENT 3: NAVIGATION TO TURN POINT (TP)

T1 INITIATE TURN AT IP
   S1 ALERT PILOT OF NEXT OUTBOUND HEADING ONE MINUTE PRIOR TO REACHING IP
   S2 SET OUTBOUND HEADING ON HORIZONTAL SITUATION INDICATOR (HSI) BY ROTATING HEADING SELECT KNOB UNTIL THE HEADING SELECT MARKER IS ON THE DESIRED HEADING
   S3 CHECK AZ-RNG TRKG SWITCH - OFF
   S4 MONITOR DVRI HEADING BUG FOR MOVEMENT TO 180° RELATIVE POSITION (IP PASSAGE)
   S5 ACTIVATE COCKPIT CLOCK AT IP PASSAGE
   S6 INFORM PILOT OF IP PASSAGE AND OUTBOUND HEADING
   S7 CHECK FOR PILOT TURNING TO NEW HEADING

T2 ACTIVATE SYSTEM STEERING TO TP
   S1 DEPRESS TGT N ADDRESS KEY HAVING NEXT TP LAT/LONG
   S2 CHECK COMPTMODE SWITCH - STEER

T3 CHECK FOR ACCURATE SYSTEM STEERING TO TP
   S1 READ SYSTEM BEARING AND RANGE TO TP FROM DVRI BUG AND RANGE DISPLAYS
   S2 COMPARE SYSTEM BEARING AND RANGE TO TP WITH PREPLANNED OR ESTIMATED BEARING AND RANGE TO TP
   S3 GO TO T5 IF SYSTEM STEERING TO TP IS CORRECT
   S4 GO TO T4 IF SYSTEM STEERING TO TP IS NOT CORRECT

T4 TROUBLESHOOT SYSTEM STEERING IF REQUIRED
   S1 DETERMINE TP LAT/LONG FROM CHART OR IFR SUPPLEMENT
   S2 COMPARE SYSTEM TP LAT/LONG WITH ACTUAL TP LAT/LONG
      (a) THROW DDU DATA SWITCH - ON CALL
      (b) READ SYSTEM TGT N ADDRESS (TP) FROM LOWER DDU LAT/LONG DISPLAYS
   S3 INSERT CORRECT TP LAT/LONG IF REQUIRED
   S4 EVALUATE SYSTEM PRESENT POSITION AND ESTIMATED POSITION FROM CHART
      (a) THROW DDU DATA SWITCH - PRES POS
      (b) READ SYSTEM PRESENT POSITION LAT/LONG FROM LOWER DDU LAT/LONG DISPLAYS
      (c) COMPARE SYSTEM PRESENT POSITION WITH ESTIMATED PRESENT POSITION FROM CHART
      (d) INSERT CORRECT PRESENT POSITION IF REQUIRED
          (1) DEPRESS PRES LOC ADDRESS KEY ON COMPUTER KEYBOARD
          (2) THROW COMPTMODE SWITCH - ENTER
          (3) INSERT CORRECT PRESENT POSITION LAT/LONG
          (4) THROW COMPTMODE SWITCH - STEER
          (5) CHECK LOWER DDU LAT/LONG DISPLAYS FOR ACCURATE DATA ENTRY

T5 INFORM PILOT THAT SYSTEM STEERING IS TO THE TP
   S1 CHECK THAT PILOT MAINTAINS SAFE FLIGHT AND FOLLOWS SYSTEM STEERING

T6 INSERT DATA FOR NEXT REMAINING TPs IF REQUIRED
   S1 THROW COMPTMODE SWITCH - ENTER
   S2 DEPRESS PREVIOUSLY UTILIZED TGT N ADDRESS KEY
   S3 DEPRESS POS ACTION KEY
   S4 DEPRESS APPROPRIATE QUANTITY KEYS IN SEQUENCE FOR TP LAT/LONG
   S5 DEPRESS ALT ACTION KEY
   S6 DEPRESS APPROPRIATE QUANTITY KEYS IN SEQUENCE FOR TP ALTITUDE
   S7 ALERT PILOT TO MAINTAIN PRESENT HEADING
   S8 THROW COMPTMODE SWITCH - STEER
   S9 THROW DDU DATA SWITCH - ON CALL
   S10 CHECK LOWER DDU LAT/LONG DISPLAYS FOR ACCURATE DATA ENTRY
   S11 REPEAT S1 THROUGH S10 IF REQUIRED
   S12 DEPRESS REQUIRED TGT N ADDRESS KEY FOR CURRENT TP SYSTEM STEERING
   S13 CHECK FOR ACCURATE SYSTEM STEERING TO TP
      (a) READ SYSTEM BEARING AND RANGE TO TP FROM DVRI BUG AND RANGE DISPLAYS
      (b) COMPARE SYSTEM BEARING AND RANGE TO TP WITH PREPLANNED OR ESTIMATED BEARING AND RANGE TO TP
   S14 REPEAT S12 AND S13 IF REQUIRED
   S15 INFORM PILOT THAT SYSTEM STEERING IS TO THE TP
      (a) CHECK THAT PILOT MAINTAINS SAFE FLIGHT AND FOLLOWS SYSTEM STEERING

T7 PERFORM SYSTEM NAVIGATION TASKS
   S1 TUNE RADAR FOR OPTIMUM PPI DISPLAY
      (a) ADJUST PPI RANGE CONTROL (UNTIL TP AT TOP OF SCOPE) - CW
      (b) ADJUST RCVR CONTROL (UNTIL RETURN IS ENHANCED) - CW/CCW
      (c) ADJUST STC DEPTH CONTROL (UNTIL EVEN RETURN PRESENT) - CW
      (d) ADJUST SCAN ANGLE CONTROL (UNTIL DESIRED SWEEP WIDTH PRESENT) - CW/CCW
   S2 MONITOR FLIGHT PROGRESS USING RADAR SIGNIFICANT TERRAIN/CULTURAL FEATURES AS CHECK POINTS
      (a) COMPARE RADAR RETURN IMAGE WITH PREPLANNED CHART AND QUICK & DIRTY
      (b) COMPARE SYSTEM PRESENT POSITION WITH ESTIMATED PRESENT POSITION FROM CHART IF REQUIRED
          (1) CHECK DDU DATA SWITCH - PRES POS
          (2) READ SYSTEM PRESENT POSITION LAT/LONG FROM LOWER DDU LAT/LONG DISPLAYS
          (3) RECORD SYSTEM PRESENT POSITION ON CHART
          (4) READ TIME-INTO-LEG OR TOTAL TIME FROM COCKPIT CLOCK
          (5) RECORD ESTIMATED PRESENT POSITION ON CHART
      (c) REPEAT (a) AND (b) IF REQUIRED
   S3 INFORM PILOT OF SYSTEM NAVIGATIONAL ACCURACY
   S4 POSITION CURSOR INTERSECTION ON TP IF REQUIRED
      (a) PLACE LEFT HAND ON SLEW CONTROL STICK
      (b) DEPRESS RADAR SLEW BUTTON AND PUSH SLEW CONTROL STICK IN DIRECTION OF DESIRED CURSOR INTERSECTION MOVEMENT
      (c) COMPARE RADAR RETURN IMAGE ON CURSOR INTERSECTION WITH PREPLANNED CHART AND QUICK & DIRTY TP
      (d) REPEAT (a) AND (b) IF REQUIRED
      (e) PUSH CORRECT POS BUTTON ON LOWER DDU PANEL
   S5 MONITOR NAVIGATIONAL EQUIPMENT FOR OPERATING STATUS OR EQUIPMENT CONDITION
      (a) CHECK NAVIGATION (INS) CONTROL PANEL FOR INS FAILURE INDICATIONS
      (b) CHECK VTR PANEL FOR OPERATING INDICATIONS IF REQUIRED
      (c) CHECK FOR COMPUTER ERROR LIGHT ON DVRI PANEL AND LOWER DDU PANEL
      (d) CHECK ATTITUDE REF SWITCH - COMP IN
      (e) CHECK MAGNETIC VARIATION FROM MAG VAR DISPLAY ON LOWER DDU PANEL
          (1) COMPARE TO CHART MAGNETIC VARIATION AND SET IF NECESSARY
      (f) INFORM PILOT OF NAVIGATIONAL EQUIPMENT STATUS
   S6 MONITOR TIME ON TP
      (a) COMPARE RECORDED ACTUAL LEG OR TOTAL TIME FOR PREVIOUS TP WITH PREPLANNED LEG OR TOTAL TIME FOR PREVIOUS TP
      (b) INSTRUCT PILOT TO ADJUST THROTTLE CONTROLS SO AS TO CORRECT TIME ON TP (a worked sketch of this correction follows this listing)
      (c) INFORM PILOT OF TIME ON TP RESULTS
   S7 MONITOR SYSTEM VELOCITIES
      (a) READ SYSTEM GROUND SPEED (GS) AND WIND
          (1) THROW DDU DATA SWITCH - DATA
          (2) RECORD GROUND SPEED IN G/S DISPLAY
          (3) RECORD WIND SPEED AND DIRECTION IN WIND SPEED/WIND DIR DISPLAYS
      (b) READ AND RECORD GS FROM DOPPLER PANEL GND SPEED DISPLAY
      (c) READ AND RECORD INDICATED AIRSPEED (IAS) FROM AIRSPEED INDICATOR ON PILOT'S INSTRUMENT PANEL
      (d) READ SYSTEM TRUE AIRSPEED (TAS)
          (1) THROW DDU DATA SELECT SWITCH
          (2) RECORD TAS FROM DISPLAY
      (e) EVALUATE SYSTEM VELOCITIES USING GS, WIND, IAS, AND TAS
      (f) TROUBLESHOOT SYSTEM VELOCITIES IF REQUIRED
      (g) INFORM PILOT OF SYSTEM VELOCITY RESULTS
   S8 MONITOR SYSTEM HEADING
      (a) READ TRUE HEADING FROM DVRI DISPLAY BUG
      (b) READ MAGNETIC HEADING FROM WET COMPASS
      (c) READ HEADING FROM HORIZONTAL SITUATION INDICATOR
      (d) READ MAGNETIC VARIATION FROM MAG VAR DISPLAY ON LOWER DDU PANEL
      (e) EVALUATE HEADING ACCURACY USING TRUE HEADING, MAGNETIC VARIATION, AND COMPASS HEADING DATA WITH "CDMVT" FORMULA (illustrated in the sketch following this listing)
      (f) ADJUST MA-1 COMPASS NEEDLE DEFLECTION WITH PULL TO SET CONTROL IF REQUIRED
      (g) TROUBLESHOOT SYSTEM HEADING IF REQUIRED
      (h) INSTRUCT PILOT TO ADJUST HEADING SO AS TO MAINTAIN PREPLANNED COURSE
      (i) INFORM PILOT OF SYSTEM HEADING RESULTS
   S9 MONITOR SYSTEM ALTITUDE
      (a) READ ALTITUDE (AGL) FROM RADAR ALTIMETER
      (b) READ PRESSURE ALTITUDE (MSL) FROM PRESSURE ALTIMETER
      (c) READ SYSTEM ALTITUDE (MSL)
          (1) THROW DDU DATA SWITCH - PRES POS
          (2) READ ALTITUDE IN ALT DISPLAY
      (d) READ TERRAIN ALTITUDE FOR PRESENT POSITION ON CHART
      (e) EVALUATE SYSTEM ALTITUDE ACCURACY USING ABOVE SOURCES
      (f) TROUBLESHOOT SYSTEM ALTITUDE IF REQUIRED
      (g) INSERT CORRECT ALTITUDE IF REQUIRED
      (h) INFORM PILOT OF SYSTEM ALTITUDE RESULTS
   S10 MONITOR SAFETY OF FLIGHT INSTRUMENTS AND EQUIPMENT
      (a) EVALUATE FUEL STATUS BY COMPARING ACTUAL FUEL REMAINING WITH PREPLANNED FUEL REMAINING
      (b) INFORM PILOT OF FUEL STATUS
      (c) CHECK ANNUNCIATOR CAUTION LIGHTS FOR EMERGENCY INDICATIONS
      (d) CHECK FIRE WARNING LIGHTS FOR AIRCRAFT FIRE INDICATIONS
      (e) CHECK ACCESSIBLE CIRCUIT BREAKERS
      (f) INFORM PILOT OF SAFETY OF FLIGHT INSTRUMENTS AND EQUIPMENT RESULTS

T8 PERFORM APPROACH TO TP PROCEDURES
   S1 CHECK FLIGHT PROGRESS USING RADAR SIGNIFICANT TERRAIN/CULTURAL FEATURES AS CHECK POINTS
      (a) COMPARE RADAR RETURN IMAGE ON CURSOR INTERSECTION WITH PREPLANNED CHART AND QUICK & DIRTY
      (b) POSITION CURSOR INTERSECTION ON TP IF REQUIRED
      (c) REPEAT (a) AND (b) IF REQUIRED
      (d) PUSH CORRECT POS BUTTON ON LOWER DDU PANEL
   S2 SELECT ARE 30/60 DISPLAY AT APPROXIMATELY 17 MILES FROM TP AND NAVIGATION IS ACCURATE
   S3 OBSERVE ARE 30/60 EXPANDED DISPLAY AT APPROXIMATELY 17 MILES
   S4 TUNE RADAR FOR OPTIMUM ARE 30/60 DISPLAY
      (a) ROTATE STC SLOPE CONTROL - CCW
      (b) THROW ANT PATT SWITCH (CONTINUE UNTIL BRIGHTEST RETURN PRESENT) - NEAR
      (c) ADJUST RCVR CONTROL (UNTIL RETURN IS ENHANCED) - CCW
      (d) CHECK AZ-RNG TRKG SWITCH - OFF
      (e) CHECK ELEV TRKG SWITCH - OFF
      (f) ADJUST VIDEO/DIF CONTROLS TO ENHANCE RETURN RESOLUTION IF REQUIRED
      (g) ADJUST SCAN ANGLE CONTROL - CW/CCW
   S5 CONTINUE POSITIONING CURSOR INTERSECTION ON FRONT LEADING EDGE CENTER OF TURN POINT RETURN

T9 PERFORM VELOCITY CORRECT PROCEDURES
   S1 GO TO T9.1 FOR AUTOMATIC VELOCITY CORRECT (AZ-RANGE LOCK-ON OR TRACK-WHILE-SCAN REQUIRED)
   S2 GO TO T9.2 FOR MANUAL VELOCITY CORRECT
   S3 FLIR TRACKING MANUAL VELOCITY CORRECT (NOT PART OF SIMULATOR CAPABILITY)

T9.1 PERFORM AUTOMATIC VELOCITY CORRECT
   S1 POSITION AZIMUTH CURSOR TO CENTER OF TP RETURN
   S2 POSITION RANGE CURSOR TO JUST LEADING EDGE OF TP RETURN
   S3 THROW AZ-RNG TRKG SWITCH - ON
   S4 CHECK AZ-RANGE INDICATOR LIGHT - ON
   S5 ROTATE VELOCITY CORRECT SWITCH - MEMORY POINT
   S6 CHECK DDU DATA SELECT SWITCH
   S7 CHECK FLT DATA DISPLAY FOR SOME VALUE GREATER THAN 000
   S8 ROTATE VELOCITY CORRECT SWITCH (BEFORE TP WALK-DOWN) - OFF SAVE

T9.2 PERFORM MANUAL VELOCITY CORRECT
   S1 POSITION AZIMUTH CURSOR TO CENTER OF TP RETURN
   S2 POSITION RANGE CURSOR TO JUST LEADING EDGE OF TP RETURN
   S3 CHECK AZ-RNG TRKG SWITCH - OFF
   S4 ROTATE VELOCITY CORRECT SWITCH - MEMORY POINT
   S5 DELAY FOR 10 SECOND MINIMUM/128 SECOND MAXIMUM TO ALLOW CURSOR DRIFT
   S6 REPEAT POSITIONING BEARING AND RANGE CURSORS TO JUST LEADING EDGE CENTER OF TP RETURN
   S7 MONITOR FURTHER CURSOR DRIFT AND REPEAT POSITIONING IF REQUIRED
   S8 CHECK DDU DATA SELECT SWITCH
   S9 CHECK FLT DATA DISPLAY FOR SOME VALUE GREATER THAN 000
   S10 ROTATE VELOCITY CORRECT SWITCH (BEFORE TP WALK-DOWN) - OFF SAVE

T10 INITIATE TURN AT TP
   S1 ALERT PILOT OF NEXT OUTBOUND HEADING ONE MINUTE PRIOR TO REACHING TP
   S2 SET OUTBOUND HEADING ON HSI
   S3 CHECK AZ-RNG TRKG SWITCH - OFF
   S4 MONITOR DVRI HEADING BUG FOR MOVEMENT TO 180° RELATIVE POSITION (TP PASSAGE)
   S5 RECORD LEG TIME OR TOTAL ELAPSED TIME AT TP PASSAGE
   S6 ACTIVATE COCKPIT CLOCK AT TP PASSAGE IF REQUIRED (LEG TIME ONLY)
   S7 INFORM PILOT OF TP PASSAGE AND OUTBOUND HEADING
   S8 CHECK FOR PILOT TURNING TO NEW HEADING

For subsequent navigation legs, return to Segment 3, Task 2.
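Two of the Segment 3 monitoring subtasks rest on simple cockpit arithmetic: the heading check in T7 S8(e) applies the compass relationship behind the "CDMVT" mnemonic (compass, deviation, magnetic, variation, true), and the time-on-TP correction in T7 S6 amounts to recomputing the ground speed needed to cross the next turn point on the preplanned time. A minimal Python sketch of both computations follows; the sign convention, the function names, and the sample numbers are illustrative assumptions layered on the listing above, not values from the thesis or from A-6E documentation.

```python
def magnetic_from_true(true_hdg: float, variation: float) -> float:
    """CDMVT step: magnetic heading from true heading.
    Variation is signed degrees, easterly positive, so
    magnetic = true - variation ("east is least")."""
    return (true_hdg - variation) % 360.0

def compass_from_magnetic(mag_hdg: float, deviation: float) -> float:
    """CDMVT step: compass heading from magnetic heading.
    Deviation is signed degrees, easterly positive."""
    return (mag_hdg - deviation) % 360.0

def required_ground_speed(dist_to_tp_nm: float, time_remaining_s: float) -> float:
    """T7 S6: ground speed (knots) needed to cross the TP on the
    preplanned time, given the distance remaining on the leg."""
    return dist_to_tp_nm / (time_remaining_s / 3600.0)

# Example: a DVRI true heading of 090 with 14 deg easterly variation
# should read 076 magnetic; 20 nm to the TP with 270 s remaining on
# the preplanned leg time requires roughly 267 kt ground speed.
print(magnetic_from_true(90.0, 14.0))             # -> 76.0
print(round(required_ground_speed(20.0, 270.0)))  # -> 267
```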
APPENDIX B
GLOSSARY OF TASK SPECIFIC BEHAVIORS

The enclosed glossary of 31 specific behaviors, or action verbs, was excerpted from Oiler [1968]. Each verb has a specific meaning and is acceptable in the sense that all synonyms have been eliminated. Each action verb is used in the task listing (Appendix A), the task analysis (Appendix C), and the MTLA (Appendix D) for the purpose of defining observable behavior that may be measured in terms of task performance.

Activate - Provide the initial force or action to begin an operation of some equipment configuration.

Adjust - Manipulate controls, levers, linkages and other equipment items to return equipment from an out-of-tolerance condition to an in-tolerance condition.

Alert - Inform designated persons that a certain condition exists in order to bring them up to a watchful state in which a quick reaction is possible.

Check - Examine to determine if a given action produces a specified result; to determine that a presupposed condition actually exists, or to confirm or determine measurements by the use of visual, auditory, tactile, or mechanical means.

Checkout - Perform routine procedures, which are discrete, ordered stepwise actions designed to determine the status or assess the performance of an item.

Compare - Examine the characteristics of two or more items to determine their similarities and differences.

Continue - Proceed in the performance of some action, procedure, etc., or to remain on the same course or direction (e.g., continue to check the temperature fluctuations; continue to adjust the controls; and continue on the same heading).

Delay - Wait a brief period of time before taking a certain action or making a response.

Depress - Apply manual (as opposed to automatic) pressure to activate or initiate an action or to cause an item of equipment to function or cease to function.

Determine - Find, discover, or detect a condition (e.g., determine degree of angle).

Evaluate - Judge or appraise the worth or amount of a unit of equipment, operational procedure or condition (e.g., evaluate status of life support systems).

Inform - Pass on information in some appropriate manner to one or more persons about a condition, event, etc., that they should be aware of.

Initiate - Give a start to a plan, idea, request, or some form of human action (e.g., initiate a new safety procedure).

Insert - Place, put, or thrust something within an existing context (e.g., insert a part in the equipment, insert a request in the computer).

Instruct - Impart information in an organized, systematic manner to one or more persons.

Monitor - Observe continually or periodically visual displays, or listen for or to audio displays, or vibrations in order to determine equipment condition or operating status.

Observe - Note the presence of mechanical motion, the condition of an indicator, or audio display, or other sources of movement or audible sounds on a nonperiodic basis.

Perform - Carry out some action from preparation to completion (it is understood that some special skill or knowledge is required to successfully accomplish the action).

Place - Transport an object to an exact location.

Position - Turn, slide, rotate, or otherwise move a switch, lever, valve handle, or similar control device to a selected orientation about some fixed reference point.

Push - Exert a force on an object in such a manner that the object will move or tend to move away from the origin of the force.

Read - Use one's eyes to comprehend some standardized form of visual symbols (e.g., sign, gauge, or chart).

Record - Make a permanent account of the results of some action, test, event, etc., so that the authentic evidence will be available for subsequent examination.

Repeat - Perform the same series of tests, operations, etc., over again, or perform an identical series of tasks, tests, operations, etc.

Rotate - Apply manual torque to cause a multiple position rotary switch or a constantly varying device like a handwheel, thumbwheel, or potentiometer to move in a clockwise or counterclockwise manner.

Select - Choose, or to be commanded to choose, an alternative from among a series of similar choices (e.g., select a proper transmission frequency).

Set - Move pointers, clock hands, etc., to a position in conformity with a standard, or place mechanical controls in a predetermined position.

Throw - Change manually the setting of a toggle switch from one position to another.

Troubleshoot - Examine and analyze failure reports, equipment readouts, test equipment meter values, failure symptoms, etc., to isolate the source of malfunction.

Tune - Adjust an item of equipment to a prescribed operating condition.

Use - Utilize some unit of equipment or operational procedure.
APPENDIX C
A-6E TRAM RADAR NAVIGATION TASK ANALYSIS

The purpose of performing a task analysis for measuring B/N performance during radar navigation was to provide candidate performance measure metrics that may describe either successful task performance or B/N skill acquisition. Only segment three, Navigation to TP, was examined to limit the scope of the task analysis. The sequential flow of tasks, subtasks, and subtask elements (defined below) is the same as that found in the A-6E TRAM radar navigation task listing (Appendix A). The seven columns of the task analysis form were defined in Chapter V but the definitions are repeated here for the convenience of the reader:

(1) Subtask - a component activity of a task. Within a task, collectively all subtasks comprise the task. Subtasks are represented by the letter "S" followed immediately by a numeral. Subtask elements are represented by a small letter in parentheses.

(2) Feedback - the indication of adequacy of response or action. Listed as VISUAL, TACTILE, AUDITORY, or VESTIBULAR, and is listed in the subtask column for convenience only.

(3) Action Stimulus - the event or cue that instigates performance of the subtask. This stimulus may be an out-of-tolerance display indication, a requirement of periodic inspection, a command, a failure, etc.

(4) Time - the estimated time in seconds to perform the subtask or task element calculated from initiation to completion.

(5) Criticality - the relationship between mission success and the below-minimum performance or required excessive performance time of a particular subtask or subtask element. "High" (H) indicates poor subtask performance may lead to mission failure or an accident. "Medium" (M) indicates the possibility of degraded mission capability. "Low" (L) indicates that poor performance may have little effect on mission success.

(6) Potential Error - errors are classified as failure to perform the task (OMIT), performing the task inappropriately in time or accuracy (COMMIT), or performing sequential task steps in the incorrect order (SEQUENTIAL).

(7) Skills Required - the taxonomy of training objectives used for the Grumman task analysis was retained and presented in Table VII [Campbell, et al., 1977].

(8) Performance Measure Metrics - a candidate metric which may best describe the successful performance of the task or a genuine display of the required skills. The types of metrics suggested were classified as: TIME (time in seconds from start to finish of task), T-S (time-sharing, or proportion of time that a particular task is performed in relation to other tasks being performed in the same time period), R-T (reaction time in seconds from the onset of an action stimulus to task initiation), ACC (accuracy of task performance), FREQ (number of task occurrences), DEC (decisions made as a correct or incorrect choice depending on the particular situation and mission requirements), QUAL (quality of a task, especially in regard to radar scope tuning quality), and SUBJ (subjective observation or comprehension of the task execution success by an instructor). A sketch of these record fields and the derived metrics follows below.

The right-hand column of the task analysis form ("performance measure metrics") provided several hundred possible candidate measures for describing successful task performance or B/N skill acquisition. Using initial measure selection criteria as outlined in Chapter IV, these measures were reduced and combined with literature review candidate measures (Table II of Chapter IV) to produce the final candidate measure set as shown in Table XI of Chapter VII.
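To make the structure of the form concrete, the following is a minimal sketch of one task-analysis row as a data record, together with the two metric types defined above that are derived rather than directly observed (R-T and T-S). Everything here, from the field names to the sample event times, is an illustrative assumption layered on the definitions above, not code or data from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class SubtaskRecord:
    """One row of the task analysis form, fields (1)-(8) above."""
    subtask: str                 # e.g. "S2(a)" (hypothetical label)
    feedback: str                # VISUAL, TACTILE, AUDITORY, or VESTIBULAR
    action_stimulus: str         # cue that instigates the subtask
    time_s: float                # estimated execution time, seconds
    criticality: str             # H, M, or L
    potential_error: str         # OMIT, COMMIT, or SEQUENTIAL
    skills_required: list[str] = field(default_factory=list)
    # TIME, T-S, R-T, ACC, FREQ, DEC, QUAL, SUBJ
    metrics: list[str] = field(default_factory=list)

def reaction_time(stimulus_onset_s: float, task_start_s: float) -> float:
    """R-T: seconds from action-stimulus onset to task initiation."""
    return task_start_s - stimulus_onset_s

def time_sharing(task_intervals, window):
    """T-S: proportion of a time window spent on this task, given
    (start, end) intervals, each clipped to the window."""
    w0, w1 = window
    busy = sum(max(0.0, min(e, w1) - max(s, w0)) for s, e in task_intervals)
    return busy / (w1 - w0)

# Hypothetical example row and derived metric values.
row = SubtaskRecord("S2(a)", "VISUAL", "IP passage", 5.0, "M", "OMIT",
                    ["discrimination"], ["R-T", "T-S"])
print(reaction_time(12.0, 14.5))                    # -> 2.5 seconds
print(time_sharing([(0, 10), (30, 45)], (0, 60)))   # -> ~0.42
```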
[APPENDIX C TASK ANALYSIS FORMS: tabular worksheets covering Segment 3, Navigation to TP, with columns SUBTASK (including FEEDBACK), ACTION STIMULUS, TIME, CRITICALITY, POTENTIAL ERROR, SKILLS REQUIRED, and PERFORMANCE MEASURE METRICS. The tables themselves are not recoverable from this transcription.]
.)A
CO
0^ (U -H
Eh +J 4J
(d
QJ
-H
(U
0)
Em
CJ
^
^
{J
2 W U
< QC
S 3
C? to
w
o<
t: s
s
w
1-+
o::
E-'
P-t
ce;
cu
^
cn
1
Eh
^
^ CQ
u D
U CO
N'
'
.
a
M
a
w
w
«
oi
fa
^
^
a
a
w
.
«
fa
oc>
fa
fa
<:
^
w
S
H
^
*»
EH
a
w
oi
Eh D5 Ch
1
^
«
^
W 1^
CQ
S
H D
W
CQ
S
H D
CQ
S
M D
W Tj
CQ
S
H D
Eh CO
Eh CO
Eh CO
Eh CO
(U
0)
(D
0)
3
U
3
X 3
1-3
fa
I-:
0)
O
Q
tn cj
^
05
hJ
t-4
MD
J-l
X s
dJ u
X 3
rH
e
u
i
H+
c
o
'—'
r^
t^
t«
M
Oi
E-"
M S
2
O
M
U
<
E-t
a
73
(U
73
OJ
rH
0)
rH
Q)
U
S
u
•H
Sh
•H
U
a
a
e
e
CO Oa
CU
a
e
M
u
CO 0^
4J
4J
4J
-p
+j
•H
•H
•H
•H
•H
g
U
•H
73
rH
O
u
a<
O
W
o
a.
3
C
O
0)
£2:
E^
•z en
Oh
,.-^
TJ
a
eu
Sh
(U
s^
<u
M
rH
0.14-1
Ol
bti
ra
t/5
D
h3
D
S
M
&M
to
&i
DJ
s
*^
o
K
K
g
e
^
s
s
2
in
in
o
rH
rH
o
^
(d
fC
03
•H
•H
ID
C
g
o
^
o
r-{
(d
H
•H
U
4J
4J
•H
c;
C
01
(U
CD
0)
CO
3
3
3
C
d)
3
!T
•H
+>
C
D^
D^
cr
<U
<U
0)
<U
CO
CO
CO
CO
S
cn
•
*i:
CD
Ph
w
•H
M
•H
CO
P
3
r-l
O
O
M
>
2
^^
m
O
M
CO
U
(U
>
t^
Ul
<;
E-*
<:
C
C -H
3 5
e
(u
CO rH
T3
O M
Oj
en
73 -
TJ
rH +j
rH 4h
tt3 •H
e M
+J CO
Q
rH
ftJ
3
M
T3
to
CD
C
-H
nj
e
(U
CQ
>
^
4J CO
O
T3
M
O^
73
>.
CO
(TJ
03
C/1
M u
^
tU
0)
fO
o
a
(U
-P i3
05
en
fa
H
rd
C
S
.
OJ
&4
—
(T3
^^
rH
n3
03
OJ
OirH rH
nj
(U
03
rH
3
73
CL
CO
>H
CO
M
-H
CU
G
3
CO
to
3
M
Sh
-H
n3
-H
u
-H
03
0)
03
.
aa
CO
-H
-H
73
>
OiTD
••
rH
c
TS -Q
73 OJ 73
•
C
M
O
..
(U
0)
>i--
w
0)
0)
CO
3
0)
>irH
O M
..
0.
0)
73
Cli
e
03
OJ
^
a u
73
M a
oi MH
^-v
^
—
r^
CO
261
03
73
73
G
03
c
e
Sh
en
73
+J
73
0)
CU
Sh
03
^
03
X!
O --
73
73 -H CO -H
03 73 i< 73
(U
(U
0)
G
cnfa
(Ij
.H
a>
-P
!h
+J MH
OJ
G
0)
g
3
fH
^
•
3
tn
-P CO -H
<C
e Eh
>
dJ
••
03
—
—
4J
to
73
^
CJ
>i
QJ
U
03
to
(U
03
ai2
4.)
X!
73
73
tn
<U
03
M
OJ
OJ
-H
(U
..H -H fa
Oi
03
fa
Sh
H C G
—
-—
^—
—u
s
tn
73
0)
»
«
w
u
2 CJ U
< Di
s D
S 05
< u
O
^ t4 'Z
a; g
u
^ ^
U cq
U D
< CO
t-4
ct;
.
«
*•
u
cn
u
Q
1
1
Eh
^
w o
s w
H
W fD
S CQ
H
D
sum
HUD
2
H
Eh
Eh Cn
Eh <; cn
Eh
ce;
cu
w
^
*
«k
cn
£h Oi
fa
£-•
«k
^
a
Cl4
^
^^-)
fa
Q
u
tn
fj Oi
^
HD
« O
y-*
O
G
CO
03
4J
•H
X S
u
X 3
(0
to
0)
(u
-d
0)
r-i
Q)
rH
D
^a
M
>i'H
.H
0,4-1
e M
coo
< U
(0
03
>
w
1
a
fa
Pi
fa
c
<a
•H
rH
»,
-a
Eh
O
o
(U
C
^
U D
U cn
<
^ CQ
CU
OiMH
S
e ^
u
Pn
03
X E
M
u
U
cu
0)
&i
h:}
<
HH o:
p
^ o
2 IX
*-»
0}
U
-P
c
4J
Fh Du
•H
04
o
o
T3
•H
•H
E-i
Di
M
M 2
e
Eh
DJ
u
^-^
H
•«
-p
-p
-H
•H
o
K
K
^J
o
K
LD
CN
r-l
U3
o
0)
H>
C
•H
+J
C/3
Q
in
03
2 D
iJ
O
M Z2
^ s
U M
< E^
M
•H
e
e
uo
rH
en
I^
g
6
C
4->
C
•H
o
to
CO
a;
3
^4
to
(Q
cr
•H
•H
•H
0)
s:
•H
Q
s
en
t/3
c
<:
.
03
CJ CO -H
E-)
tji
04
Eh
P
o
CO
P-i
U
CO
CO
<:
Ch
oa
U
Ci]
CO
-p
0)
CO
03
03
CO
C
03
D
03
>
o
crj
CO
0)
0)
-•^.
ta
0)
p
c
•H
-P
,,
ro
-p
a rH
-H T3 rH Qj
>i
i«5
""^
-H
^
en
••
<^
U
03
-H
'^
n3 T)
03
rH
>
C
(U
-H
W > 5
4-1
-H
<; cn
(1)
.
••
to
-H -a
-i^
(U
-P
x:
(U
0)
OJ
Sh
Q)
Q
(U
fa
Eh
>
5H
fa
<—
OJ
UH
^-^
'-^
r^
en
ai
rH
4J
V4
C
03
•H
fi
UH
+J
>
•H
a
u
rH -H S-l 03
Xi O -H Xi
d
D T3
rH cp a
'—
>1
>i
+J
-P
•H
-H
^3
U
D
(U
>
e e
M 0)
•
-H
M
<;
cu
CO
o
03
^
3
G
>.
OJ
0)
CO
M
fa
j::
CO
>i
CO
'X3
<U
••
M
u o
P
03
Xi
•H T3
C
S
CO
CO
262
C
-H
cn +J
CO
—
-H
03
CO
.
0)
4J +J
03
-P
3
q;
sh
4H
H
03
e D -H
0) U >
-P U 03
-p
rH
-r)
<:
rH rH
CO
-H
a o
to
4-)
,
•H
e-H
u (U o
3 P
O CO <-t
O >, Q)
>i
S
W U
-P 4J Eh
03
OJ
-P 4J
6 C
0)
2
o
H
Eh
<
O
M
>
<
2
<
H
to
e
(U
Q)
0)
fa
2
CO
>
.
u
Z
-
u
<j;
u
u
DJ CJ
Di M.
E-<
*
s
k4
a
C/2
Eh CO
&H CO
Eh fa
<U
(U
0)
0)
^
3
u
3
3
OJ
T3
0)
r^
a.
(U
-H
S
""S
^
s w
H «
fa to
OQ
>-i
O
<:
^
S CQ
H D
Q
u
M
HD
fa
S
H D
E-t
tn
vJ Oi
iJ
u
u
W
S
H D
ffl
CU
«
w
«
^
1-3
M
i<
Oi
fa
<
s D »:
S to
< u
o
Cm C4 S
Di S
-
a
a
w
•.
»
•H
a
u
e
U
•H
fa
fa
1-:)
M
3
T3
OJ
73
<U
13
CU
rH
0)
-H
CU
a
o
e
U
en CU
CO CU
JJ
H
•H
a
o
o
e
M
•H
CO CU
i-l
CO CU
«
H O
z
3
cc;
y-^
'O
0)
•H
4J
G
o
Eh
«•
e
6
04
O Oi M
H M s u
E-1
C/J
r»
4J
•H
E-"
z o
iJ
O
MD
S
U M
<^
in
o
K
s
s
s
O
O
o
o
rH
rH
rH
m
(d
(d
•H
•H
•H
-p
-p
c
c
G
G
Q)
CD
CU
(1)
3
P
a
3
3
cr
CT
CT"
<u
Q)
(U
CU
CO
CO
CO
CO
td
CP >,rH
03
•
rH
C
nj
fd
c/3
t;J
•H
73
r-(
x:
cn
CU
a
W
O
fO
03
-H
0) -H
jG 'O
>
Ph
S
D
CO
(U
V^
t3
-P
M
«^
>
Q
3
C
'd
-Q
n3
T3
(U
cd
•H
0)
>4
(U
Q)
Oi
"4-1
0)
CO
<
P
—
5«i
3
••
fd
0)
6
(T3
fa
3
W
a-H
e >
U
(T3
•H
fi
CP-P
^
0)
U
g 3
"3
(T3
e
i2
73
U
0)
Q)
Oi iw fa
G
>-v
,,
O
*-'
<TJ
^-^
D
Q
Q
1
•H
•H
o
CO
cr"
c
c
i3
P-t
u
Ci]
(Ti
3
CU
n
O
rH
(0
•H
E-4
<
O
M
>
<
2
g
o
o^
C-t
g
o
<
z
o
JJ
•H
O
P
E-<
-p
•H
e -p
to
H 3
-H
4J
CO
•H
-H
U-l
0^
•H rH
13 (d
(d
(U
^
J3
00
CO
263
C
N
td
3
>
S-t
CU
cci
G
f^
S
G g
••
^
3
w
-H
>
r-\
G
^
Di
td
x: -H fa
Id
Cj)
-P
CU
(U
M
> a
•ri
P u
fd
<:
5
.
x;
13 -H -H 13
(d M 13 <u
^
»^
P
(d
>
W
C
—
u
>-»
^-»
•H
fd
M
g
"H
13
C
O
>i
fd
•
en
C
-H -H
Qi +J 13
td
td
(U
—
,
13
—
td
rH rH j3
a. 0) 13
(U
O-fa
s
»
W
u
Z CJ CJ
'^ rt
5
xpo:
a
O < CJ
fa w 2:
OS S
u
tyj
^
01
u
W
Q
^
W "h)
S U CQ
MUD
Eh < 01
-
1
1
04
U
w
U
u w
&H U Q
< ^
^
w -a
Eh W
S
M a;
^ on ^
Eh
E-i
^
^
01
^
Q
^
W
S CQ
H D
f-j
1
Eh 01
0)
Q)
Q
w
en
J
C
X 6
M
M
MD
^ a
^ S3
a
<U
Q)
T3
rH
rH
(U
e M
J
-p
+J
cr
•H
0)
e
s
H g
e
s
K
K
LO
rH
in
e
jj
U
tf
M
E-i
Eh
M S
Cd
Z
M
Eh
U
<
3
hJ
D
S
U
C
>i
rH
•H
rH
CO
W
M
•H
£h
s
1
rtC
•H
a
CQ
w
CQ
-H
•H
S Q
s
C/1
0)
(U
5-1
>4
-P
C
•.
.
c;
13
-HO)
0)
en
-P +j
03
1
u
S
x:
-H
-P
•H cno; CUT)
T3C01GC
03
D
[SrHrH
Q)cn3(T3
x; 3
e
«
U5
<
^
ca
D
01
'd
0)
D
C
•H
4-)
C
'O
M-i
••
U
^-^
1
S
03o3(UC-P^Eho3
-p
>i
^0^3
03
•>
Cn-H (Us
u
> XJ
rH3aT3-HMS'T3
3 M G
-H
W
3
03
03O0303MWQQJ
>U-H(U0303U<1)
w 03 cux; > as fa
CO
en
-p •H
03
CO
H
4J
>^
:3
CO
CP
w
-H
3
H
>
<4H
rH •H
T3 -H
a
••
X
r-^
CO
H
-Q
T3 JG -P T3
3
< C
C
5 U
OJ
(U
fa
-^
T3
<c
s
M
(T3
cu
..
;i<J
a>
u
C
03
^
i3
T3
05
(U
(U
(U
H
Eh x: fa
^-^
C7^
^^
C
u
3
M
MH
x; -H
(U
03
03
cu
-P
MX!
-H
—
tn
(U
OJ
OJ
MH
CT
+J
TD
,
3 3
U
+J
"-^
264
(U
ui
(U
Q)
03
01
0)
•r-i
^-'
cn
0)
-p
MH
CU
'X3
H
-H
HH
6
^
0)
a
Q)
<q
.
t3
-P
X
(U
,—
e
3036
CMPaU
^-H
C
a
rH
Oi
n3-HS^cnracn6>0
^-^
O
u
to
o
ro
•H
v^
c
cn
cu
t^
••
P
•
01
&H
z
CJ
cu
•H
<
>
<
u
cu
Q)
H
z
O
M
H
<c
o
M
in
cu
c
a
3
xi
• •
U
<u
,-»
Eh
e
0)
cu
HV QJ
a:
z 9
[^
X 6
0) u
^
auH
e u
<
E-^
•H
a
aM-i
U
c:
fO
M
X 3
(d
Pi
iJ
0)
'
W
u
2
<
i«
^
CJ
Oi
U
M
03
1
s 3 e:
S 07 Eh
O < u
fm w s
(X s
Eh
W
^
«.
O
U
01 cy
1
«
w
Oi
01
w
1
Eh 05
fa
-h)
w"
S U OQ
HUD
Eh < 01
Eh
-»-:)
S
H
ffl
H-+
;^
a
"'S
s
H4
Eh
Z
3
i-<
o
'-N
n3
U
1
O
fa
05
fa
05
fa
u
fa 1^
fa Id
S CQ
H D
S CQ
H D
Eh 01
Eh 01
0)
Q)
M
3
M
3
O
G
0)
u
X 3
X s
0) M
-H
atw
S M
^
MD
W
a
Q)
fO
(x;
-O
Eh
Pi
Eh 05 fa
O
•-3
U D
U 01
<
W*"
S U
HUD
Eh < 01
O
c
q
^^
- CQ
«i
fd
X e
OJ ^
H TJ
ao
(U
S
(U
u
Pu
<U
H
0)
H
a
am
s ^
u
U
a^
e
•H
CJ
73
O
T3
Q)
rH
0)
a
u
e
^
•H
01 04
0^
u
M
01 0^
0)
o
a:
c
O
01
OJ
oi
u
-P
D
M
•p
-p
•H
•H
•H
P
Cr
•H
•H
6
0)
s
e
s
O
04
o
o
n:
J
hj
in
in
01
s
o
o
(D
C
u
•H
+J
c
Eh
ci:
M S
o
-"^
r--
Eh
• •
M
Z
o
M
EH
U
<
Eh
DJ
o
CN
(U
w
>
3
^
D
S
H
•H
C5
o
o
nH
iH
(13
(0
•H
•H
C
0)
3
O
•H
G
cn
0)
CO
a;
3
w
^
cn
•H
•H
•H
Q
P
P
•H
S
01
K
IT)
G
4J
Eh
K
in
rr
s
cr
0)
0)
Ol
rn
s
•
<
cn
•
0)
Eh
(tj
0)
-P
-.
OJ
73
cn
^-3
4J
>i
U
^^
+J
CU
Eh
•P
Eh
<
o
M
>
<
2
•H
O
DJ
CO
VI
o
cn
D
to
o
••
^-»
e-4
X)
OJ
D
C
•H
4J
U
3
M
-P
C
O
C
3 e C
cn
rO
cn -r-i
G
H
-73
4J -H
-73
03
4J
X
O
fd
JQ
T3
-H
T3
cn
G
-H
>
4J
U-l
tn
u
D
4J J3 4J
-P -H
3
T3
<i;
rH
(T3
•H
0)
ax:
.
>1
cn
•H
P
H
(t3
cn
4J (T3
0) rH i2
-P 3 T!
tn
e s
d)
ftJ
>i
-P
0)
03
X
0)
(w
cn
tn
oj
G
u u
(U
>i
CU
(U
(T3
cn
s-i
(JL,
-P i3
-P
H
03
(D
03
3
-P
M
-P
•H
03
(U
M
3
u
^^
--«.
<>^
x:
•H
^-^
*—
CO
G
(U
S
fa
a>
W
01
265
(U
M
H
Q)
03
4J
e
cn
-H
H
-H
< e
—
'-H 3
P
rH -H
T3
>
VI
m
cn
e
-H
-H
>
-P
H
cn
-^
cn
1-5
-P T3 -^
H
0)
01
0)
03
O
VI
s
vi
03
S-(
03
a—
3
0)
••
••
(T3
H
3
< <
rH
Oafri
Q)
-P 13
M
e 3
U
-P
O
.
tn
<U
o
cn
S-l
X
a
+j
1
•H
oj
3
^
o
03
e
ja
T3
T3
<U
tn
0)
03
T3
0)
CU
(U
S-I
QJ
a 3 M
0)
05
M-i
fa
T)
03
tn
05 -P
•H Ti
,,
ro
.
-TJ
0)
03
03
(U
(TJ
be:
<
S
D
u<
(13
1
C a
^ -H
G
a -H
o
t-(
z
o
M
cn
M
^^
03
"«^
^^
£^
i3
no
Otfa
s
^
dJ
u
2
^
CJ
U
Di
)-+
a
w
D a:
S tn
o< w
f^9 s
a s
2:
^
^
•-:)
«
U CQ
U D
Cm
<: en
^
1
£-•
u a
D
ce;
^
«»
la
cu
^
W ^^
S OQ
M D
w a
2 W
H «
en
Eh &4
E-t
Ciq
-
^
cq 1^
CQ
u
S
MuU
W
tn < Q
w
ff;
CU
^
X 3
M
X 3
t-f
cu
'O
(U
T3
O
rH
<U
rH
a,
cu
MD
lii
cu
a
"§
e
u
J
<
M
^ 9
Z CJ
a OS
U
o
u
e
u
U
cu
cu
c
c
CU
(d
fd
i^
X g
cu
3
C
•H
u
-P
c
o
rEh
••
+J
-P
E^
cc;
M
•H
•H
g
e
o
E"
M S U
X
IT)
in
o
e M
g u
g
Q)
cu
u
cu
u
cu
g
g
o
u
.J
J
•H
•-'
-P
-H
•"-I
g
g
rH
2 D
O hJ
M 'D
^ s
U H
<: H
•H
•S
-p
-P
4-)
c
C
c
cu
CU
<u
Cfl
3
3
3
CQ
(0
tr
CT
0*
•H
•H
cu
CU
0)
en
en
en
CO
iH
(d
E"
^3
c
c
•H
•H
3
r-l
-H
-P
^
n3
tn
td
cu
3
g 3
:2
en
<U
rH
<-i
to
fd
(TJ
-H
en
^J
<
c
(d
cu
•
4J
cu
cu
(0
-P
M
tn
en
3
C
^3
1-^
T3
'TJ
td
en
CU
•H
cu
(d
cu
-P
ttJ
„^
^»*'
u
<T(
en
en
o
>«»«*
.
^Xi
S
c
CO
>i
..
>
S-l
TS
CU
•H
S-^
^—
03
a
;^
D
§::
W
•H -P
w
<u
-"fc
-HO)
+J
3
cu
C
fd
'H
+J
s
en
iH
1)
E-<
>
HD
H
(d
^c:
C
Ij-h
•
•H
«
p
>i
w u
Q.
en
(d
>
g
cu
w u
33..
(J
en
s-i
5^
U O
-P
U
u
(d
fd
fd
(d
rH
>i
cu
^
.
fd
••
M
s
,
1
4J
m
s
•H
04
u
IT)
•H
en
g
g
^q
in
iH
o
u
a.
4J
H
rH
cu
m
O
cu
CWU
H
o
hJ
<
O
H
Z
o
M
fi
<
o
M
>
<
2
T3
au-i
p
04
0)
CU
+J
E-«
t3
X 3
^-1
<-\
cc;
.»
Ctl
iH
Q4
u
cu
D
M U
E^ < Q
Eh en
X e
cu u
M
1-3
sou
S
H D
rH
o
W
CQ
Eh 05
Cm en
^
- Cd
1
«
*»
(U
ij
en 01
Cm en
E-'
01
ij
*»
u
H
Q
en cDJfo
ri^
en
td
^
g
cu
cu
>^
1
•H
cu
cu
4J
-P -H +J
TD
rH
•
rH
g 3
(d
^3
fd
en
3
>i er
en
cu
Id
M
3
^1
-U
u-i
0-H
<
en
cu ^3
-H 3 fd
-Q -p XI
3
cu
CU
fd
TJ
cu
> 3
X)
<u
cu
U
u^
ca -P
(d
Cm
Eh
td
Cm
s-i
c
tw
^_»
^.^
T3
0)
N.^'
^i^
266
Td
-P
en
-H -H
>irH
ej
cu
en <:
sh
cr
u
cu
3
IH
-p -H
^
>
••
^
U
td
cu
XJ
'd
cu
en -73
cu
Q)
G 3
cu
4-)
Cm
3-H
'TJ
-P
r-i
cu
H
_^
^^
U^
3
^^
3
i-(
44
rH
(U
-P -H
en -P
cu
!-l
cu
n3
>
4J
j::
Q^jC jq
O
cu
Uj
u
Z
*k
u
<C Oi
^ ^
^
S to
O< w
Cm C4 S
en
Cj
I-+
1
Eh
•.
•«>
a
w
Cm
E-"
DJ
* 1^
sum
MUD
< W
u
CU
Ctl
CQ
D
Oi
Eh 05 Cm CO
1
X
HD
w a
Q)
n
r-l
0)
a
e
^a
U
H
«
&a
a
ClI
^^T)
s u pa
HUD
Eh < CO
s w
H tf
Eh Cm
C
c
•H
•H
4J
+J
(0
to
Q)
3
3
n
r-i
Q)
rH
rH
Oi
(d
rO
e
(I)
^
1-4
>»
a
Cm
0)
01 cq
^q
E-t
1
Eh
C/2
-Of^
CxT
S
M
Eh
q
U PQ
U D
<
1
«
»
cn
-•d
<k
-u
u m
Eh U Q
^<
CO
u
U
>
>
CU
cq
cq
M
X 3
U
U
M
04
-3
<
(-4 ff:
o
E^
z
^.^
OC
C4
d:;
4->
+)
P
-p
E-"
DJ
•H
•H
•H
•H
g
g
s
n3
o
Qi
0«
c
•H
C
e
M
E-i
^q
E-t
M s u
in
X
in
c:
C
c:
U
•H
•H
•H
0)
CO
M
CO
CO
CO
CO
CO
•H
•H
•H
^
cn
Q)
>
•H
O kJ
M3
E^ S
U H
<
•
:r
in
o
Z D
•
en
a;
en
Eh
o
u
u
r-
o
+J
•H
Eh
<
3
i::
>i
fH
en
-P
1
•H Q4
rH -H
(U
+J
E-t
73
-H
T3
o
-P 4J
P
-H
(<
U^
04
d
E-t
z
o
H
H
<
CJ
M
>
<
z
••
^
<
S
D
H ^J
•H tH
a
nj
C/1
e-i
CO
^_^
T!
(U
3
C
H
-P
u
CO
-U
M-(
C
H
c
o
<4-i
>1
.
W
+j
^
(T3
rH XI
3 T3
t/J
c/3
0)
>.
0)
0)
W M
Cm
-P
x:
CP
P
3
-H
-P -P
5
(T3
CO
CJ
Cm
rH
c
OJ
cn-H
:3
c
T3
(U
C
M-4
ft!
03
sh
<U
-P
(a
en
G
3
i-l
IT5
.
^
0)
OJ
3
rH
D
rH
(U
(13
U
Ul
03
-H
en
UH -H -H
4J
U G
d
>i
14H
(t3
W
M
3
4J
ciq
M-l
Q)
OJ
4-1
UH
(TJ
cn
Cn
rH
03
a
g
C en ••
g G C ^
0)
03
-H
s-1
rH
c:
rH
>i
W^
1
Q)
3 M
MH
Qj
a
x:
C W C
C OJ
2-H S
OJ
--^
0)
fO
Cm
rH
in
267
^^
4J
U-l
-H
T3
e
u
3
+J
<
rH
.
td
Q)
oj
>
>
•H
X
a
•
03
Ot-H X}
U
rH
-HO)
(0
-H
rH
(U
-p
M^
+J
Cr
a\
w
+J
C
03
TJ
e
0)
<D
0)
Sh
Cm
g W
3
)-l
u
03
j3
-p 'd
OJ
M-l
03
C
-P
(U
0)
Cm
H
•H JJ 4J T3
^.^
ro
O
g e
M CU
M-l
s
S
s
Q
t/1
^-%
XJ
^^
u
u
Z
C4 CJ
r^
D:i
*k
H4
f^
OQ
Id
0}
l-D
D
m
D
cn
cn
cn
^
^
Eh
Eh
Eh
1
1
D
"Z Z2 (H
K UJ ^
o< u
b
i ^
OS g
«
cu
3
iJ
-1
1
•»
o
u
Eh OJ
&4
^
..
CJ
tn
cn
0)
<U
(U
<0
u
3
^
P
U
3
u
X 3
0)
T3
l-H
-H
0)
a
W O
wg
6
•H
1
X
«
Cii
M3
W ^iT)
S U CQ
HUD
Eh < cn
^
a
73
0)
0)
rH
•H
M
OJ
g
M
a
u
rH
04
0)
o
e
e
•H
cn 04
cn 04
a)n3
TJ
)H
U
cn o.
0)
M
04
^
<
H4
ttl
^ 9
z »:
gs
o
^-^
-o
c
•H
p
c
u
Eh
M
a;
+)
-p
JJ
•H
•H
•H
g
s
04
<u
+j
•H
Eh
M 2 U
e
s
o
o
o
o
X
Ml
K
x
CN
00
o
O
o
o
o
r^
Eh
«t
Z
o
M
EH
U
<
cn
3
J
D
S
H
c
C
G
G
H
H
•H
•H
01
(0
CO
CO
to
•H
•H
w
W
•H
S
Eh
CO
H
S
s:
s
C/1
to
iH
<
-p
M
Eh
V^
P
04
Eh
o
C
3
G
&-"
2
O
M
Eh
<C
O
H
>
<:
z
• •
s*:
en
<
&:
CQ
D
cn
,-^
T^
CU
3
C
H
-P
G
n
3
O
U
O
en
fd
-H
•H +J t3
-G C
nj
CO
-H
G
td
•H
3
w
5-1
-H
03
>
cox
ai
^
O
>1
C C
0)
•H
Q)
-P
-H -H -H
rH
X!
•H
^
-P
CO
03
03
G
Eh
03
O
o
a
u
Xi
f.
03
-P
(T3
CO
Q)
Sh
-H
UH
03
ja
M G
'13
o
0)
(U
JZ
(U
S
-H
0)
Q)
+J
1*4
>
^
U
^^
»>^
u
3
q-i
-H
T3
..
^
CO
GO
-H
-P
Xi
X
(U
0)
x:
03
Q)
cn
n
iH
'-{ -r^
Q)
rH tw
Clh
t/3
268
X
U
>
n
CO
4J
a
03
3
U U
e
J3
73
Sh
Sh
G
O H
jG -h
U
•
a;
OJ
fc
JJ
4H X: T3 rH
0^ G 3
•H 03 CO
-P rH
Q)
4H CO M
4J
rH
•H
C -P
•iH
«-«»
r^
u
•
CO
OJ
M O
•
O
^-1
0)
fd
Q)
to
(T3
^-'
cn
fT3
o^
x:
rH -H
rH
S
4J
3
CO
3
.
M
G O G a
CO
M
-H
(D
U
nj
Cr>-H
-H
rH
(d
cn<w
1
M-i
•
m
>i
-P
0)
>H MH
G
H
^-v
OJ
UH
^-'
•-^
c
e oj
3 e
0)
Sh
Q4
-P -H
CO
(d
G
CO
-H
3
CP
(U
U
u
2
U
< a:
2 D
K tn
o< u
a i!
w
Ci3
V-+
ti:
a CQ
175
cn
Cc;
D
Eh
IJM
cn
W
1
Eh
E-i
u
^
-
1
•»
sou
HOWQ
^
W J
2 ^ <:
H D
S
H
Eh 05
Eh
•»
1
Q)
tn ly
»J Oi
M
MD
iJ
«
en
CU
en
ttJ
2
H
fa
Eh fa cn
fa CQ
cc;
U
G
X S
fd
X 6
a ^
X 6
0) u
X s
u
S-i
-^
e ^
u
(U
u
fl
"H
<-i
aM-i
ai+-i
e M
e u
u
cu
D
0)
fd
0)
a:
1
«
^
01 1^
(d
^'U
w
•fc
fa
Eh fa
nj
-H
a
Eh
*»
u
u
Q
c
Q
Pi
P^
(U
U
c
1
Eh
en
«
D
Q
W
u
o
H
cn
^ t-j
CJ fa
UQ
< ^
^
W ^CD<
en
<:
^
w
^
en
^ CQ
1
&H <C
cu
^
U D
»
^
cn
u
cu
CU^H
S M
0)
u
a<
a
cu
o
Eh
<
M
K
U
<:
o
Eh
Z
U
Eh
O
D5
&,
W
0^
00
Eh
0:5
4->
CtJ
•H
+j
•H
£
S
^ u
u
a;
Eh
M S U
i-H
Eh
E
cn
rH
c
c:
•H
•H
CO
cn
n
cn
CO
ca
CQ
CQ
•H
•H
•H
•H
S
cu
U
O
M
Eh
2
iii
cn
<c
Eh
03
D
cn
p
J2
!T>
p
-p
C
C
rcj
-H
id'H
a.
^
^
0)
4J
^-1
+J
-H
fd
^-1
+J 4J
M
-H
u
13
rd
fd
(d
OJ
i-i
Di
73
M j:: p -H
cn 3 u
>
-p
u (d
M rH cn O s
fd
3 (d
rH
(d
^+-4
M C
\
•H
U
0)
M M
rd
C M
-H M
cn
rd
U
^
cn
M c
3
U -H
+J
C
(U
0)
cn
x: -H
T3
td
M
3
u
13
--a
<;
OJ
G
G
Ai
(d
-H
<u
u
M
1
(d
^
U
e G M G
-H -H CU
fd
rn
Sh
>
cn
(d
S-I
2
G
U
i2
^
fd
cu
0)
a
(U
3
4J
1+^
fa
,—»
•
G
.
W
T3
-H
G
>
Tl
-H
+J
^
-H
OJ
G
•H
cn
tj*
-p
!^
a
•H
(U
iq
cn
-p
G
<w
fd
f-t
cn
U3
269
'j
1
..
ain
G
•
^ "
— -Hu ^
G
(Ti
(d
-p
cr
fd
X5
XJ
fd
0)
0)
M
ja
73
OJ
cu
CD
OJ
(U U-l
0)
cu -H -H fa
OJ -H fa
^^
^^
u
>
G.t^
QJ
^
(d
^
S -H
2
73 <3 cu
OJ
G U
^
(d
^-^
fd
5-1
-P T3
cn
-P
G
-H
0)
fa
r-{
fd
fl
••
^
rH 3 ftJ
Ch cri3
Qj Ti <U
73
4J
0) 73 O
S rd
5-1
nj
0)
M
D
5
-H
CP
73
x;
cu
Eh
C
0^
s
5
C
-H
H
;^
2
CO
a.-H
(U
o
w
s
ffi
C
Cn-H
• •
s
u
•H
cn
04
EH
m
4J
H
fl
cn
M
>
<c
2
e
CD
e
g
•H
Eh
<
a
•H
in
<
Eh
-p
•H
e
E
in
Z
o
M
4J
en
2 3
h4
O
M 3
Eh S
U M
<
ir^
• •
-4J
•H
Ci
04
<
O
d::
O
fd
'
»
t
DJ
2 W U
< d: M
s 3 a:
S ta H
o< u
h w s:
a
cn
U W D
u Q cn
w
1
Eh 05 cn
fa
<:
u
u
<
P-i
Eh
1
«
D
S
H
CO
Eh 05
03
a
-hi"
fa
ITJ
W <
tf D
fa
m
a
W D
Eh <C
1
CU
G
in
0)
q
w
3
MD
w
H
S>4
a
0)
TS
rH
a.
(U
•H
M
X 3
-P
03
0)
3
Ti
rH
(U
a
fH
e
6
ra
>
>^
w
cn Oh
U
G
0)
U
rd
X S
Q) U
rH
04 1^
u
g U
M
U
0^
0)
04
i-:i
<
^ 9
2
S ?1
c:J
^
•H
Cu
•H
p
4J
•H
•H
e
§
-tJ
•H
e
g
E
t-"
K
E-"
M S U
CN
O
CN
O
o
r-i
c»
&H
t
>
U
<
•H
•H
-P
4J
G
D
1—
U
CJ
(d
3
TJ
D^
td
«
E-"
••
-H
d)
(U
hJ
g
o
ffi
O
1—
-P
ou
&H
t/U
•H
o u
Pi
—
e
g
o
U
+J
G
M
•^
•H
e
O
0)
4-»
OJ
a
u
c
o
•H
(U
cn
u
cn
•H
-H
cn
Q
CO
't^
s:
CO
Ui
w
>i
D
Q
Q
04
(d <-{
(U
^1/
•H
CO
M
E-t
o<
O
-p
0)
o
:5
fd
4J
z
o
M
s
o
M
>
<
2
*•
oo
O
U
cn
^
CO
<
s
D
CO
U
Eh
f-i
<U
• •
^
O
.—
u
G
(U
D
x:
-u)
rj
CO
4J
•H
3 3
-P
0. XJ
G
O
~~'
H
01
rH -H
a
W
.
(TJ
G
e
•H r- +J
T3 r^ fd
cn
O >1-H
^M >
\
O +J G
m
g
-H
rH rH
-H Id
-P
3
W
•
(d
-H
13
a
Eh
G
-P
fd
«
0) 'tj
0)
<:
(d
0)
-P
CUEh
afr4
u
a.
<u
(d
g
(d
W
X
fd
^
0.
(d
rH
T)
OJ
-P
^-(
cn
fd
14-1
••
u M
3 O
(d
cj
^-»
"~^
X
<u
OJ
rH XI
G
G
(d
fd
H
g
•H
rd
pq
g
3
g
-rJ
-H
0)
U G
u
T3
(D
J3
TJ
(U
(U
-H &H
CN
cn
270
>
-P
OX
vo
\
o a
rH
S-i
00
w
S
<
0)
cn
nj
3
.H
O.
cn
>
+j
n3
U
cn
M-4
5-1
•
cn
a
>
M
td
-H
Oi
rd
••
^
Id
T3
oj
o
.-1
(d
M
rH -H X3
g TJ
'^
>i
fd
a
cn
•
CU >i
oj
(d
-H
T3
o
^
\
o
cn
G
p
X! -H r^ (U
T3 rH fa
O
EH
ro
•^
cn
cn
r.
W
«
<
D
a
'
»
^
w
u
•»
2: cJ
< «
s
u
02
l-H
1
-I-)
CO
CO
Eh CO
Eh
1
tyj
w
1
«»
«»
w a
s w
fa
S
M
t-4
iJ
t-f
Jxi
Q
M
<0
u
3
Oi
M
D
o
c:
l-H
—
-
^*^
CO
Eh
rt
Eh
tJd
O
t3
c
O
Ci3
M
U
CJ
Eh
M 2
e
O
2 D
J
O
)— ID
&H S
U M
< Eh
t
u
>1
•H
M
HS
U
CO
CO
CI,
fl^
oj
4-1
e
(u
O
CO
Eh
J
o
O
(D
cu
0)
(U
0)
>
>
•H
>
>
•H
•H
>
•H
•H
4J
4J
4J
+J
O
4J
o
o
Q)
0)
0)
•H
•H
U
!h
Sh
•H
•H
•H
Q
Q
o
+J
>
•
+J
l-q
[2
U
T3
03
C
Eh
03
u
•—
wJ
O
P
C
rH
.
<-^ -rA
,-^
••
M
r-^
Eh
S
<
G
o
iH
03
4-1
a
-H
>
cn
OJ
!h
U
o
05
••
a^
^
C
U
C
D
c^
CO
CO
>-.H
03
(T3
—
P
r-^
>
u
Di
CJ
s
U
U
(J
CO
G
4-»
V4
r-{
03
-P
J3
CO
T)
G
i2
T3
5
OJ
rH
-H
U
C
U
P
0)
•
4-1
4J
Q)
c Q)
p U
oG
Q)
T3
fa
(d
Eh
.
.
•
cn
C
^
G
Q)
<
M
4J
3
E-t
CO
•
O
2
04
-H
>
*4H
05
1
.
ISJ
rH
<
•
••
•
X
•
u
x:
fd
Sh
rS<J
3
i3
4-»
4J ^a
•r-i
C
u
0)
-H
CU
T3
O ^
< O
,_^
^^
2
271
03
05
CO
—u
W
«
a
Xi
'^
.
CJ
4J
.-^
"—
M
>
U
2u
<\
s
2u
<
-H
M
(U
x:
-^
CO
Q
•
fa
^^ 12
CJ
•
4-)
fa
c
03
r
•
ttJ
Oi
OJ
•
4J
^-^
U
^~^
u
•
.
4-)
Eh -H CO
Eh 4J (U
<: C -l-l
c^
x;
•
0)
p
C
•
4J
•H
4->
U
U
CO
Q
4J T)
CO
rH -H
•H
o
Eh
cn
t3
<
O
J
CO
<:
0)
CO
K
cu
tii
cr
E
O
CO
m
Wf
Eh
+J
•H
OJ
(U
o
G
(U
4J 3
•H 0^
e O
^
•H
o
D
•H tr
u
CO cu
0)
o
c
0)
G
tu
P
a
^
03
cu
Eh
S
o
o
c
e
0)
m
O
0)
c
0)
H
>
T3
rH
4-1
E-
2
O
O
Q)
•H tr
Q
..
t3
cr
pg
t/5
OJ
rH
Oj
-P
s
Eh
CD
x:
4J
•H
e
M
3
(U
Q)
O
PQ
u
o
G
CO
0)
^ Id
T!
o
c
0)
3
O
<; CO
CO
<u
Di
Eh
a
CO a^
Eh fa
(D
0)
S-i
o:;
U
U
3
rH
<
«
w a
w
s
H Oi
CO
0)
S
a
fa
•H
T3
•H
^
a
fa
«•
«i»
M
X P
0)
o
"N
(D
rH
a
D
a
1
E^
£h fa
Eh fa
tn
a
•*
Eh Pm
D4
-^^
CO <;
fa
W Oi
w
s
H oi
(-*H
-^
<
D
m
D
Eh
cc:
S Eh
o< u
S
CJ ^^
fi^
P~^
'
x:
U
^^^^.^
<D
>—
(U
5
OJ
CO
fa
^'
—
»
'
a
u
2 W U
<
2 D c:
S CO t^
o< u
W 2^
Ci S
u
^
t-4
cc:
-f-^
en <:
a
w
1
«
Eh
fr^
04
D
a
a
1
Eh
«.<;
UD
Ua
<
^ Eh^a^o
U CQ
U D
w
s w
H «
S
H
<: en
Eh Cm
Eh D5
-1^
Cj-i
h:1
en
Cq CQ
1
cc;
D
li,
en
(U
Q
03
1x1
(J 05
hJ
M
HD
W
u
•H
(0
w
X e
(U M
0)
'd
0)
0)
jC
u
4J
a.M-1
G
u
e M
>^
O
S
•H
or
cn
r-^
a
CO PJ
c
Q)
en cu
M
U
en
(U
04
v-5
<:
M
-p
d::
H o
•z
CJ
—
•H
ff|
6
00
H
M
U
Ci
E-«
M S U
Eh
O
;-:i
X
o
in
in
2 3
>J
O
M D
S
U M
<
(D
>
>
•H
•H
-P
4J
•H
CQ
(U
0)
V-i
0)
•H
•H
•H
Q
Q
u
«
cu
5si
CO
<
a
p
CO
>-v
TJ
QJ
P
C
•H
+J
G
^^
o
td
'
«
H
>
w
J
w
X
p
w
.
M-l
-H
>
M-l
•
••
.
X
•
u
-C
03
O
Xi
O
-M T3
(U
-H
U
S
(U
OJ
cn
Cm
^
—
U-l
—
G
U
c
rH
E-«
d
S
(U
E-t
M
>
G
U
!-l
E-*
en
o
rH
<
o
H
2
o
g
o
u o
^^
E-t
• •
4->
•H
s
O
<U
—o
4J
-H
Cli
•H
c
e
S
-U
cc;
o
rs
3
•H
Cm
•
\
O
^
-P
W ^
D -P
G
•r-i
T3
G G
•H
(U 4-1
w
Q •P
H
> W
< U
-H
•H
O
•H
G
-P
OJ
.
•H
01
01
-P
0)
0)
a-p
3
C
01
0)
^
(U
^-s
"
"
-H
G
^^
•H
&^
•^
en
272
Sh
>
^
•
U-l
•
>H
-P
(U
•
X
0)
5-1
(13
tT3
M cn
D P -H
01 O >
•H O fO
-H +J
(T3
-H
(TJ
G
^
a
01 G -P
G ^
G -P
3 M OJ 0)
U U M-i U M
in
cn
en
rH
G
Cn
OJ
-p -p
0)
-H
CT
0)
fO
M G a
D M
iH -H
G
-H
cn
-P TJ CL
-H TJ
G
^d
G
OJ
G
-P
,
4J
cn
(t3
H£
Q G
u
(d
i2
T3
0)
cu
&4
<:
2
.
U
u
2
<C
2
CJ
S
o < w
fats
tyj
w
w
ci
05
cu
w
fa
hJ oi
hi h4
HD
u
a
«
"g
«
o
u
«
c;
C
•H
•H
+j
4J
fO
fC
3
3
iH
rH
03
>
>
O
6-
Z
W
•H
fa Pi
e
e
o
04
u
a;
M
E-"
E-
M S
fa
cu
Z
O
M
U
<
£-•
w
D
J
3
2
M
&H
01
+J
+j
D5
w
^
M
<
1
Di
1-4 cr:
H
••
•»
Eh
^
<
U
O
J
w
>
EH
^
fa
>H
E^
a\
fa
Q
1
t/i
^
O
U
fa
Q
E-t
q
Di
C^
fa
E-i
§
Q
W
u
o
o
U
Di hH
Z) Oi
H
6
B
-u
-H
g
-P
-H
e
a o
uo
B
E
o
O
C
G
•H
•H
CO
CO
w
CO
•H
•H
S
S
^-1
u
•H
e-t
+J
1
(d
03
6
CU
E-"
-P
o
P
E-i
(d
•z
o
M
en
<
o
M
>
»<
z
ii^
M
Ul
<
S
D
M-l
'-
M
03
-P
(U
^
-H
>-(
U
3
cr
(u
c
M
o;
rH
-P -H
D
G
(T3
c e
D — -H
f3
•
e +J
O
U Q)
U
^
U-i
G
CN
o
(T»
>,
Eh -P
M
U
r-(
•H
U
S-i
rH
o
W
C
•
O
a>
to
Eh 4J
>i
•H
o
CD
4J
-H
x:
a
> ^ 5
(Tj
O
(t3
1
OJ
a
nj
CO
+J
S U
(^
G ^
CU
(J
(N
C/1
^
U
U
(T3
>
-P
!h
03
•
—
>,
a-P
-H
>i
S-l
-P -P rH
-P -H G -H
O
a;
rH
"H
(U
en
•H
tJ^iH
-P
D
rH
rtJ
1
fH
171
•
fO
4-J
r-i
^^
m
O
N
<
^
s-^
E-t
CO
1
H
t^
fa
rH
0)
i3
^
(T3
s^
a
3 03
> u o
(U
PO
01
273
+
u
u
2
•.
bJ
<! Cu
u
w
Pi
U
CO
•—
D
« U> Eh
o< u
P4 CJ S
IX s
:s
&H
t
cr:
!JJ
Oc
1
Eh
W -J
2 U <
HUDa
n3
X e
0)
0)
a
1^
H
Eh
<
s
o
£h
D
<
^
<
HV DC
^ o
z cc
^ s
o
Sh
m
D
CO
Eh CO
CO
1
H
a<4H
a.M-1
e
0)
U
04
«»
^-3
W
a;
fa
fa
fa
s
H
Eh fa
(U
0)
u
3
5h
Eh
^
3
3
0)
0)
ID
0)
^3
dJ
73
0)
rH
a.
(U
r-{
<i)
r~{
0)
rH
M
E w
•H a
S
<D
•H
a
u
u
CO 04
a
Q4
6
•H
V^
e
•H
Sh
CO 0,
CO O4
CO
p
4J
+J
-p
-P
4J
•H
•H
•H
•H
•H
•H
E
e
g
E
s
E
O
G
QJ
^^
04
^
«k
«b
w a
S fa
H OJ
01
«
>H
e ^
U
m
D
^
a
w
c
03
X S
"S
" h)
CO
D
CO
W »J
2 u <
HUD
Eh < a
hJ Oi
iJ
^
f^
CQ
fa
o
c
M3
m
D
w
«
Cl3
M
>
u
tH
>-:>
*».
«k
Q
t-5
1
«
Eh <;
t/J
H
u
o
w
»*
en
P4
o
u
>^
Eh
•.
a
w
Oi
a
a.
O
u
Eh
M
a;
M S
tH
CJ
HM
n
o
ffi
K
E
s
E
CN
CM
CN
CN
CN
0^
H
•
(T>
EH
••
1—
2
O
M
EH
U
<
in
n
J
D
S
H
&H
cn
<
c
W
z
o
H
U
3
O
cn
<
EH
C
(U
3
c
G
-P
CO
(U
rH
n3
3
S
•H
N
CO
-r*
1+^
>
d)
u
•H
C
td
+J
CU
J3
T3
3
3
3
CT"
CT
IT
cr
Q)
(D
<u
CO
CO
CO
CO
JZ
>-i
4J
-P
a)
(U
5
o^
73
CO
rH
OJ
3
G
«
a
(U
CO
Eh
C7>-H
C
^3
03
fCJ
0)
.
G
M
3
-H 4J
C
OJ
-P
•H
i^
CO
-H
>
••
^
u
•H
T3
rH
C
03
•H
3
CO
•H
U
2
«
>
^
<
u
•
(d
-P 3 O4 i3
•H T-i Eh T3
5 C
J3
T3
Q)
MH
0^ -P Pn
O4 -P
rH
CN
CO
CU
U
.
G
rH
.
3
.
CO
IT>H -H
c
03
U
0:;
>
Q
M
(U
^
&-I
Eh
en
CO
cn
274
•
0)
.
a
.
Cl4
1
M
<
^
U
C
U
>
G
•H
CO
4-1
•H
H
CO
e
a
>
•p
0)
E
-H
>
•
••
•
M
•
-P JQ
j:: T3
0)
<-{
03
0)
jG -H (U
rH fa
Q)
r-i
>1 >i 03 -H
4J ^ 3 -P
rH
u
G
03
a rH
0)
^
-p
T3
Q)
CO
03
cr>
,
p
rH
03
•
>•
1
to
03
CO
U
03
CJ
Cn
0)
CO
QJ
CJ
H
(tl
••
X
-P
•H
U
3
M
5-1
c
G
Q)
4J
-M O4
3 Eh
CO
U
JJ
u
J5
• •
m
O
+J
.
c
U
03
03
ID
01
-P
P
u
fe;
<C
O
H
>
<
2
•H
0)
Sh
i«5
03
•H
^
s
CO
Eh
03
•H
CO
M
o
03
•H
X
•H
Eh
04
Eh
03
CO
•H
rH
•H
rH
r-l
r-l
-P
^
u
03
u
ia
03 -P TJ
-P -H 0)
5
0)
CO
fa
03
3
03
03
Eh
T3
D
Q
Q
^
U
0)
x:
U
Oi
':!•
in
vD
CO
CO
CO
U
>
•
<<
•
X
•
x:
03
u
ja
-P T3
-H 0)
5
Qj
CO
fa
u
u
is^
S D
a w
c^
W S
s
w
a
03
fr.
E-^
o<u
Pm
c;
1^
cu
s
H
w"
W f^
m
s
M D
Eh
Eh 03
w
u
a
tn
^q Pi
1-3
1-^
MD
« C
en
y
Q)
Tl
0)
rH
a,
0)
rH
e
•H
TS
a
o
e
U
M
•H
03 0^
Q)
o
03 a,
J
<
M
6^
^"^
z
U
o
'O
<D
E-I
D
C
C
O
a;
Oh
CJ
u
E-t
ct;
4J
+J
•H
•H
e
s
o
cu
•H
-U
o:
M
E-I
M S W
o
K
w
CM
O
o
rH
<-{
rH
•
cr>
Eh
.•
crj
z s
J
o
MD
H s
U M
< Eh
cn
tT3
(C3
•H
•H
4-»
4-»
c
C
<u
(1)
3
3
tr
tJ^
cu
(U
01
03
t/1
<
0)
Ph
U ^
>i
S
(tJ
nH
Q.
Cu
E-i
Ui
o
o
M
E-i
!<
O
M
>
<c
2
••
r»l
O
W
CO
t<
C/l
<C
to
4J
ftJ
D
Q
Q
-H
j:;
03
-P
D
Vj
(1)
4J
0)
r-^
CUU
^
M
a
(U
oi
OJ
fi
U^
r^
03
^_^
M
u
C
5
CU
>
>1
CO
p
-H
•H
>
U
D
03
CO
y-l
- CU
TJ iH rH
I
fC3
-H
3
-M
-H
cn
fd
-H
^
5
>
td
Eh
M-l
rH
fO
3
(U
^
0)
-P
T3
s
3
M-i
O
(T3
p-i
C/1
-^-'f
U G
•H
Ti u^
E-)
•z,
CO
.
o
o
O
..
CU
cu
.
^
M
>
•
CU
(0
cu
x:
-M
O
M
o
^
..
^
o
(tj
^
+J IH x5
T3
(TJ
rH
0)
4-1
-H
CU
nj
OJ
>
«
S
i3
Ix,
en
—
CU
O
Ci4
00
03
275
•
w
u
2 DJ U
^ DC H+
S 3
S CO E^
o <; W
Ph U S
w
V
en
1
- ^^
CO
CO
^
D
w
1
W
S U <
HUD
Eh < a
CU
Q)
G
(13
td
o
M
3
Q)
(U
'-i
rH
CU
Cuu-i
aM-i
e u
e M
e
OJ
0)
•H
U
u
Oi
«k
a
C4
CO
CO
B^ Pi
1
^
S
H
Eh CQ
1
(D
M
3
3
CU
n:3
(U
O
o
T3
rH
0)
rH
a.
cu
M
•H
^
•H
a
S
P
Eh Pi CO
!h
T^
-1^
tq
cu
S
CO cu
CO cu
a.
^
w a
W
S
H Pi
^ fa
Pi
&4
X s
U
^
MD
w a
"'g
»k
a
w
^hJ
o
c
•k
m
D
Ci^
W -J
S U <C
HUD
Eh < a
X e
QJ U
en
»j Pi
^q H-V
H
u
o
^^
OQ
»
q
w
>l
Eh
1
Vk,
Oi
Eh Pi
Pn
PL4
cu
«
«
o
u
»k
W
Eh Pi
o::
Eh
^
O
w
u
M
CO P^
^-;
>
J
<
y^
<
D
2
<
s
t-f
Eh
2
W
Eh
o
Pi
O
PC
-U
Pi
•H
W
-p
H
P
4J
•H
•H
g
g
s
e
s
o
o
o
o
-p
•H
o
cu
o
U
Pi
Eh
rH
.
a>
E-t
Pi
Eh
M S U
2
O
M
EH
U
<C
•«
M
K
w
K
M-t
n:
00
fN
(M
cN
o
o
o
05
D
hJ
D
S
M
•H
i-*
•H
C
s
<
M
•
G
u a
d -P
U Q)
U
a.
Eh
o
Eh
2
O
M
^
<
O
M
>
<
2
x:
-p 04
D Eh
i^
m
<
•H
Eh
03
(T3
e
N
D
a
cn
•H
iH
4J
4J
•H
G
4J
M
Q)
•H
C
-P
(U
-H
U
cn
3
G
0)
3
(U
cr
CP
J5
U
0)
0)
CO
CO
-P
^o
0)
(d
a
W
OJ
-H
en
>
••
M
4-)
U
G
U
5
H
0)
d
d
o^
Ti
CO
0)
O
rH
;:4
rd
3
Pi
^
en
H
-H
>
en
G
-H
^
(tJ
>
CJ
2
c
M ^
3 U
••
• •
1
M
tS]
<
•
U
U-l
(d
M-l
XJ
H^
(d
XI
Tl
-p
CI)
ja
ri»:J
t3
O
OJ
0)
cu
.
x:
.
•
Cu
•H -P
M
cn
3 cu
cu -nE-t
IJu
u
u
<d
EM
G
cd
a.
H
>1 >i
JJ 5h
fd
3
-P
•H
cn
o
-H
fd
rH
e
CU
•
>
•
CU
P
>
••
^
•
U
x;
(d
Eh
T3
G M
3
u u
CO
(U
O
CO
M
y-i
>t
H
\
e
d
e
(tJ
H
-H
cu
cu
S
cu
cu
-H
en
Cu
Pi
-P
-H
a
X
U
5
U
tH 00 rH
CN i-{
XI
fd -P TJ
4J -H cu
n
U
CO
CJ
CU
cu
-H
-H
U 6
-H
-73
Q
O
U
rH
CN
ro
'ST
in
c/3
cn
cn
CO
CO
CO
276
^
1
cu
Pi
•
•H
fe
cu
•
CQ
CD
tn
cr
o
cu -P
u G ^
u •H G
nj
dJ
Oi
C
3
+J
•H
<4-4
W
U
d
u
G
<U
en
U
0)
ro
(d
O
fC
4J
.
>•
l+H
H
1^
•H
4J
r-i
rH
rH
M
to
E-t
o
-p
^
W
W
tfi
o
rd
-p
X
.
(d
-p xj
g
G 3 -H a
e M cu
e -H T3 Ct^
IW rQ
»
>
u
2
^
o
-1-0
-
U
Ci4
hH
a:
O::
^D
in ^
K
a< a
fM
w s
« s
u
04
^
worn
1
u
CO
W D
w
Q
1
Eh Oi 01
&4
«k
K
E-*
^
W -i-^l
s u <c
MUD
Eh < a
^
^
CQ
D
OQ
a
u
en
en
OJ
fa
(-D
D
w a ^o
S W CQ
H « D
V
^
w
s
H
u
s
Eh Cm en
EH
Eh
«>
^
t-3
fa
2 m
H D
i-i
Eh en
OJ
U
0)
w 3
s
^
Q) U
.J OS
-^ l-V
MD
!^
o
X 6
0) u
a
a^-i
"'S
O
3
-H
r-i
OJ
0)
T!
0)
T>
Q)
-H
(U
r-H
(U
S U
e
0)
•H
Ph
en
U
Oi
G
XJ
a
a^-l
3
QJ
rH
a
u
o
CLi
e
M
•H
A
O
S
•H
5h
en &i
Sh
cn cu
^
<
M
-
e^
—
tt;
o
-P
2 «
M a;
Eh w
o
Ti
0)
3
•H
o
M
-p
•H
•H
e
S
o
Cu
•H
4-1
+J
u
cc:
&-<
M s u
E-«
e
E
g
o
4J
H
•H
o
5
s
g
O U
O
K
E
X
un
rH
(N
ro
CN
X
un
+J
-P
H
O
1—
i->
rH
rH
CN
•
C\
E-i
«•
2
o
M
fH
U
<
en
:d
(T3
•H
J
D
s
M
4J
Eh
cr
CO
0)
M
C
-P
0)
d
eo
<+-(
i-<
-H
3 M
^
Cn W
C
•H 3
C O
5-1
o
Eh
^
Ul
<
Eh
ca
3
01
TD
0)
tT>
G
^
d
M
4J
ftJ
(U
C U
•H
(U
-P
Cn^J Oi
•H
W
-H
C
(13
03
0)
5-1
iH
T3
4->
a.
en
-P
fd
C W
d
03
c
0)
3
m
C
cr
<D
a
en
rn
U
Q)
CO
-H
!T>
O a
D
M-l
-P
^-1
S-i
D
U
iH rH
0)
0)
a
fH
cr
(U
03
U
-P
03
(y
rH
en
03
•H
T3
3
i^
03
(li
(U
0) M-t
-H
a-H
03
03
U
jG
-C
4-1
•H
4J
-P
(U
5^
M
>
••
OXO
S-i
(T3
(U
X5
^O
GO)
<D
4-)
^^
DOW
^
3
U
C
y-j
'a -H
M
G c
03
-H
-P -P -P
S Q
••
^
O
•
•
-i4
-G
n3
O
i3
73
4-1
(U
CD
-H
<U
JG
S
W
G
-H
S
^^
XI
ttfe
-H T3
U
Oi -H 4J
O
U
V£)
r»
00
CO
en
03
cn
••
.
O
4-1
05
Q
<
D
Q X
Q
u
ro
lii
>
03
03
^
•H
Q)
u-i
eT>
P
iH
4J
>i
-H
Eh
-n+J
0)
Cli
«•
G
0)
3
OJ
^
e-i
•z
c
(U
3
"0
fO
cu
E-"
4J
4J
1
<
^
2
O
M
<
O
M
>
<
•H
&•
U Q
en
to
rd
H
+J
03
o
OJ
0)
4-1
u
03
G
(U
03
H
<-i
Cli
tji
03
G
D
Q D
Q rH
a,
03
•H
>
4-1
rH
cjH
X
0)
a
a
3
0)
x:
U
o^
en
a
e
o
o
O
^
"^ UH
G
,,
;^
O
>
1
"
rH
03
•H
03
03
Mh
Uh
(U
>
5-H
OOi
•
Eh
.
•
^
3
>
••
X
0)
6
!h
03
03
(U
ja
4J
U OXl
'TI
Oj
4->
4JUH t3
-H 0) 0)
03
Q)
-G
0)
4-»
fa
5ja
Q)
05 03-^fa
o
rH
cn
277
0^
(l)t3
>. 03-^
4J U3rH
fH
03
U C
(U
u
03
^ >
0)
fa
U
u
2
^
o
•.
CJ
u
c:
1-4
s D
2 to E^
o < a
f^ w :s
oi S
ct:
w
3
J «
^ H4
tn
Eh Si
&H
V
Eh Di
fe
W
1
w
a
w
w
1
*k
a
m
CO
^
O
1
fa
B^ Qi
fa
Oi
X
Pi^
fa
w
s u
H U
Eh <
-^
U CQ
U D
Eh
<C to
PH
Q)
0)
<u
Q)
U
X 3
0)
u
X 3
M
d
U
X d
a TJ
U
X d
^
-
^h3
sum
HUD
<
OJ
rH
3a
CO td
S
•.
CO Oi
CO
Eh
Cu
•>
•.
a
CO
a
e
u
73
0)
T3
0)
T)
CD
.H
0)
rH
<U
e
u
^
a
U
S
u
u
cu
a
o
u
•H
cu
to Oi
^
W
s
*»
rH
a,
e
U
^ID
HUDm
1
fa
Eh <: to
0)
OJ
T3
rH
QJ
a
u
g
u
U
0^
u
M
04
eu
tn
^
Eh
<
S
«
D
<
H
Eh
H
H
O
M
Eh
«•
-p
+J
4J
p
4J
Eh PJ
•H
•H
•H
•H
•H
04
o
o
Eh
Eh
^ O
2 £4
u rt
e
M
g
g
g
O
o
^
o
U
Di
Eh
2
tJ
h:i
K
K
Eh
M S U
lD
o
LT)
o
CM
00
LO
0)
(U
(U
CO
2 D
J
O
M D
EH S
U M
< Eh
to
o
>
>
•H
>
•H
•H
-P
+J
+J
U
o
dJ
S-l
0)
V4
•H
O
O
U
•H
Q
<-)
c
•H
03
CO
(0
W
H
H
•H
Q
Q
C
•H
S
s;
to
<c
Eh
1
OJ
G
-P -P
3 d
c
4J
g
o
0)
Q)
Eh
C!
C C
2
O
M
Eh
<
U
H
>
<
2
* •
m
O
i^
CO
Eh
ca
4->
<
D
CO
rH
M
T3
rH
(«
(TJ
Q)
a
U
«
«
CO
Eh
xi
3
T3
(T3
QJ
U
<
-i^
0)
-p
(13
-p -d
M G
O d
rH
<
U
r-{
01
CO
ja
(d
P
>
C
d
v^
-H
0)
S-l
0)
acn
^
-P
O
d
(T3
-Q
^
H
•
-P
rH
TJ iH
itl
d
0)
•H
>
tJJ
2
;«!
^
.
ja
+j
t3
•H
(U
OJ
.
OJ
(U
x:
•
Q)
pcH
•
t.
fO
en
278
(T3
i-i
CO
CN
CO
-H
0}
en
> >
en
-H
(d
a;
<+-i
O
u
d
w
-H
H +J W ^ >
OS C
> o a
Q e
(T3
g
G M
S
^
to
0^
Eh
H
(0
;C -P -P
•H
y-i
CO a:
OJ
rH
5j>
1
to
<:
!-l
C
(0
(U
-P
g
d
(T3
cn
•H
-P
(13
+j
a^
(t3
T3
>
D^-H
(U
+J
rH
T3
M
.
M
O
TJ
OJ
0)
tn
(T3
cu
cn
XI
T3
tri
(d
rH O.
OJ
o
(T3
en
(u
Eh
0)
(U
rH
(13
cu
fa
o:
OJ
QJ
"^^
••
^
-H
q;
g
(U
Oi
X5
£1
TJ
•H 00
(13
+j
O
•
en
•H
(T3
cux:
rH
tJIO
C
CO
•H
^
G
H
cn
5
o^
c
>i
-H +J
-H
tT> O 13
-H
Ti
d
.
Eh
X
M-i
tJ>
o
x>
•H CU
0*
Eh
x:
S-i
—
in
cn
afa
V
.
U
o
2 DJ U
^ Di
S D
S cn Eh
O < DJ
tn CJ S
cc S
W
t-H
£2:
cu
w p
w
u Oi
k4 M
M :3
j^ a
en w
q:
^
CO
a
w
1
«
Eh
Cl<
a.
a
u
«
fr^
W -^:)
S U CQ
HUD
Eh < CO
V
£h
1
«
0)
Q)
u
X 3
U
X a
0)
T3
Q)
T3
rH
Q*
OJ
<-\
0)
e
U
a
O
e
u
U
0^
U
U
<
(U
v^
3
u
H
a
-u
M
•H
u
0.
CO &(
OJ
e
a
J
<
K-+ ct;
£h
z
—
Eh
TJ
o
QJ
D^
C
•H
U
-P
C
Eh
rH
Eh
• •
o::
Cii
M Eh
M 2 U
CC;
u
~^
o
o
Z
O
M
Eh
U
<
03
D
hJ
D
S
M
E"
CO
4J
p
+»
•H
•H
•H
e
e
o
e
o
o
re
s
K
o
in
o
OJ
rH
r-i
>
nj
•H
•H
-g
-P
H
+J
u
C
cu
M
(U
:3
•H
cr
o
Q
(0
C
9
0)
tr
(U
rn
en
CO
<:
Eh
.
,—
>i
^
cn
a
G
aEh
r-l
U
o
^
z
o
M
Eh
<
o
M
>
<
2
••
ro
(^
Eh
e
0)
-H rH rH
•H -H +J -H 03
Eh
tic:
CO
<c
^
ca
D
CO
O
u
en
CT> 0)
fO
-H
rH
^
Eh
n3
a
p
(T3
>
w
w
fcj
a.
!T>
T) -^
>
-P
e
M
03
-P
3
3
c
.H
03
cu
-P
<
rH
XI -H
DJ
01
.-H
C
•H -H
a-T^
03
03
•
cn
03
MH
W
C
03
tT>
C
M
O
03
-H ja
T3 TJ
OJ
0)
a^
U
Q)
M
0)
x;
U
0)
>
"
M
5
03
U
Xi
U C
d)
CO
Xi
UH
(U
lii
>
CJi-P
CJ
U3
CO
3 W
-P
C
cn
H
-H
03
TJ
0)
fO
M
Sh
3
3
rH
•H
04
C
-H
TS
+J
<u
S-i
0^ -H X}
•H Eh D ^3
+j
cr (u
U -P OJ (U
rtj
03 M fe
U
UH 4J
-pa
Qj
C
a
Si
U-4
a,
^
>i
T3
fd
CD
ro
^
CU
a
-n
C
03
•H JJ
Di
•H
e
C7>
>
<U
03
CO
C
-P +J
c
C
3 M
cr 3
cu
cu
-P
CO
cu
JH
u
3
en
-P Cm
*
CO
M
•
<N
^
C7>
CO
r^
CO
Q)
03
CO
CO
tn -H Eh
279
APPENDIX D
RADAR NAVIGATION MISSION TIME LINE ANALYSIS
The Mission Time Line Analysis (MTLA) relates the sequence of tasks to be performed by the operator to a real time basis, and can be used to identify critical tasks within a maneuver that are important for performance measurement [Matheny, et al., 1970]. Using the segment three portion of the A-6E TRAM radar navigation task listing (Appendix A), each task/subtask was listed along the vertical axis of the time line. The estimated time to perform each task and subtask was then extracted from the task analysis (Appendix C) and plotted along the horizontal axis, which represents in this example a seven-minute radar navigation TP-to-TP "leg."

Time is coded as: (1) dark if the task must be executed for maneuver success or if the task requires complete operator attention, or (2) shaded if the task is one of monitoring or troubleshooting and can be performed simultaneously with other tasks.
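The dark/shaded coding maps naturally onto a small data structure. The sketch below is purely illustrative and is not part of the thesis (the original MTLA was a hand-drawn graph); the task labels, start times, and durations are invented, and modern Python is used only to make the encoding concrete. Each time-line row carries a label, a start time, an estimated duration, and an attention code, and is rendered as a text bar:

```python
from dataclasses import dataclass

@dataclass
class TimeLineEntry:
    label: str      # e.g. "T1/S4 MONITOR DVRI FOR IP PASSAGE"
    start: int      # seconds from the start of the leg (0..420 for 7 minutes)
    duration: int   # estimated seconds, as extracted from the task analysis
    code: str       # "dark" (full attention) or "shaded" (monitoring)

def render(entries, leg_seconds=420, cell=30):
    """Print one row per entry; '#' marks dark time, '/' marks shaded time."""
    for e in entries:
        row = [" "] * (leg_seconds // cell)
        stop = min(e.start + e.duration, leg_seconds)
        for t in range(e.start, stop, cell):
            row[t // cell] = "#" if e.code == "dark" else "/"
        print(f"{e.label:<42}|{''.join(row)}|")

render([
    TimeLineEntry("T1/S1 ALERT PILOT OF OUTBOUND HEADING", 0, 30, "dark"),
    TimeLineEntry("T7/S2 MONITOR FLIGHT PROGRESS", 0, 420, "shaded"),
])
```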
The MTLA is a large graph but is presented here as two task pages (T1 to T7, S8; and T7, S9 to T10), each followed by two time pages (0 to 3+30, and 3+30 to 7+00). By removing and appropriately arranging the six pages, the full MTLA graph will result.
SEGMENT 3: NAVIGATION TO TURN POINT

T1   INITIATE TURN AT IP
     S1  ALERT PILOT OF OUTBOUND HEADING
     S2  SET HEADING ON HSI
     S3  CHECK AZ-RNG TRKG SWITCH
     S4  MONITOR DVRI FOR IP PASSAGE
     S5  ACTIVATE COCKPIT CLOCK
     S6  INFORM PILOT OF IP PASSAGE
     S7  CHECK FOR PILOT TURNING
T2   ACTIVATE STEERING TO TP
     S1  DEPRESS TGT N ADDRESS KEY
     S2  CHECK COMPTMODE SWITCH
T3   CHECK FOR ACCURATE STEERING
     S1  READ SYSTEM BEARING AND RANGE
     S2  COMPARE BEARING AND RANGES
T4   TROUBLESHOOT STEERING IF REQUIRED
     S1  DETERMINE ACTUAL TP LAT/LONG
     S2  COMPARE ACTUAL/SYSTEM TP
     S3  INSERT CORRECT TP LAT/LONG
     S4  EVALUATE SYSTEM PRESENT POSITION
T5   INFORM PILOT OF STEERING TO TP
T6   INSERT DATA FOR NEXT TP(s)
     S1  THROW COMPTMODE SWITCH
     S2  DEPRESS OLD TGT N ADDRESS KEY
     S3  DEPRESS POS ACTION KEY
     S4  DEPRESS QUANTITY KEYS
     S5  DEPRESS ALT ACTION KEY
     S6  DEPRESS QUANTITY KEYS
     S7  ALERT PILOT TO MAINTAIN HEADING
     S8  THROW COMPTMODE SWITCH
     S9  THROW DDU DATA SWITCH
     S10 CHECK FOR ACCURATE DATA ENTRY
     S11 REPEAT INSERT IF REQUIRED
     S12 DEPRESS TGT N ADDRESS KEY FOR TP
     S13 CHECK FOR ACCURATE STEERING
     S14 REPEAT S12/S13 IF REQUIRED
     S15 INFORM PILOT OF STEERING TO TP
T7   PERFORM SYSTEM NAVIGATION TASKS
     S1  TUNE RADAR FOR OPTIMUM DISPLAY
     S2  MONITOR FLIGHT PROGRESS
     S3  INFORM PILOT OF NAV ACCURACY
     S4  POSITION CURSORS ON TP
     S5  MONITOR NAVIGATION EQUIPMENT
     S6  MONITOR TIME ON TP
     S7  MONITOR SYSTEM VELOCITIES
     S8  MONITOR SYSTEM HEADING
[Time pages for the tasks above (0+30 through 3+30, and 3+30 through 7+00, in 30-second increments): the dark and shaded bars of the MTLA graph are not reproducible in this scan.]
SEGMENT 3: NAVIGATION TO TURN POINT (continued)

T7   (continued)
     S9  MONITOR SYSTEM ALTITUDE
     S10 MONITOR FLIGHT SAFETY INSTR.
T8   PERFORM APPROACH TO TP PROCEDURES
     S1  CHECK FLIGHT PROGRESS
     S2  SELECT ARE 30/60 DISPLAY
     S3  OBSERVE ARE 30/60 DISPLAY
     S4  TUNE RADAR FOR OPTIMUM DISPLAY
     S5  CONTINUE CURSOR POSITIONING
T9   PERFORM AUTOMATIC VELOCITY CORRECT
     S1  POSITION AZIMUTH CURSOR
     S2  POSITION RANGE CURSOR
     S3  THROW AZ-RNG TRKG SWITCH
     S4  CHECK FOR AZ-RNG LOCK-ON
     S5  ROTATE VELOCITY CORRECT SWITCH
     S6  CHECK DDU DATA SELECT SWITCH
     S7  CHECK A-4 DISPLAY
     S8  ROTATE VELOCITY CORRECT SWITCH
T9   PERFORM MANUAL VELOCITY CORRECT
     S1  POSITION AZIMUTH CURSOR
     S2  POSITION RANGE CURSOR
     S3  CHECK AZ-RNG TRKG SWITCH
     S4  ROTATE VELOCITY CORRECT SWITCH
     S5  DELAY FOR 10-SEC MINIMUM
     S6  REPEAT CURSOR POSITIONING
     S7  MONITOR FURTHER CURSOR DRIFT
     S8  CHECK DDU DATA SELECT SWITCH
     S9  CHECK A-4 DISPLAY
     S10 ROTATE VELOCITY CORRECT SWITCH
T10  INITIATE TURN AT TP
     S1  ALERT PILOT OF OUTBOUND HEADING
     S2  SET HEADING ON HSI
     S3  CHECK AZ-RNG TRKG SWITCH
     S4  MONITOR DVRI FOR TP PASSAGE
     S5  RECORD TIME OF TP PASSAGE
     S6  ACTIVATE COCKPIT CLOCK
     S7  INFORM PILOT OF TP PASSAGE
RETURN TO SEGMENT 3, TASK 2
[Time pages for T7 (continued) through T10 (0+30 through 3+30, and 3+30 through 7+00): the dark and shaded bars of the MTLA graph are not reproducible in this scan.]
APPENDIX E

SEQUENTIAL SAMPLING DECISION MODEL
This appendix presents the sequential sampling decision model and its parameters in sufficient detail for the reader unfamiliar with the background theory of the model. Much of the material is excerpted from Rankin and McDaniel [1980] in a method proposal for achieving improvements in the precision of determining FRS student aviator proficiency using a Computer Aided Training Evaluation and Scheduling (CATES) system. CATES provides a computer managed, prescriptive training program based on individual student performance, and could be utilized for the evaluation portion of the model to measure B/N performance by either simulator software incorporation or by desk-top minicomputers.
I. CATES DECISION MODEL

One sequential method that may be used as a means for making statistical decisions with a minimum sample was introduced by Wald [1947]. Probability ratio tests and corresponding sequential procedures were developed for several statistical distributions. One of the tests, the binomial probability ratio test, was formulated in the context of a sampling procedure to determine whether a collection of a manufactured product should be rejected because the proportion of defectives is too high, or should be accepted because the proportion of defectives is below an acceptable level. The sequential testing procedure also provides for a postponement of decisions concerning acceptance or rejection. This deferred decision is based on prescribed values of alpha (α) and beta (β).
Alpha (α) limits errors of declaring something "True" when it is "False" (Type I error). Beta (β) limits errors of declaring something "False" when it is "True" (Type II error).
In an industrial quality control setting, the inspector needs a chart similar to Figure E1 to perform a sequential test to determine if a manufacturing process has turned out a lot with too many defective items or whether the proportion of defects is acceptable. As each item is observed, the inspector plots a point on the chart one unit to the right if it is not defective, and one unit to the right and one unit up if the item is defective. If the plotted line crosses the upper parallel line, the inspector will reject the production lot. If the plotted line crosses the lower parallel line, the lot will be accepted. If the plotted line remains between the two parallel lines of the sequential decision chart, another sample item will be drawn and observed/tested.
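As a concrete sketch of this plotting rule (not part of the thesis; the boundary intercepts and slope below are arbitrary, invented values), the inspector's procedure is a loop that advances one unit per item, one unit up per defective, and checks the two boundary lines after each observation:

```python
def inspect(items, lower, upper, slope):
    """items: iterable of booleans, True = defective.
    Boundary lines are taken as d = intercept + slope * n."""
    n = defectives = 0
    for is_defective in items:
        n += 1                      # one unit to the right
        if is_defective:
            defectives += 1         # ...and one unit up
        if defectives >= upper + slope * n:
            return "reject lot", n
        if defectives <= lower + slope * n:
            return "accept lot", n
    return "keep sampling", n

# A run of good items eventually crosses the lower (accept) boundary:
print(inspect([False] * 8, lower=-1.26, upper=1.61, slope=0.23))
# -> ('accept lot', 6)
```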
The CATES decision model focuses on proportions of proficient trials (analogous to nondefectives or correct responses) whereas, in previous applications, proportions of defectives or incorrect responses were the items of interest. This approach does not alter the logic of the sequential sampling procedure or the decision model. It does enhance the "meaningfulness" of the procedure in decisions concerning proficiency because the ultimate goal is to determine "proficiency" rather than "nonproficiency." It should be noted that in the industrial quality control setting, sampling occurs after the manufacturing process. In educational and training applications, sequential sampling occurred after the learning period. In the CATES system, the sequential sampling occurs during the learning period and eventually terminates it.

The CATES decision model can be described as consisting of decision boundaries. Referring to Figure E1, the parallel lines represent those decision boundaries. Crossing the upper line, or boundary, results in a decision to "Reject Lot"; crossing the lower line, or boundary, results in a decision to "Accept Lot." In the CATES system, these decision boundaries translate to "Proficient" and "Not Proficient." Calculations of the decision boundaries require four parameters. These four parameters are:
[Figure E1. Sequential sampling decision chart: the cumulative count is plotted against the number of items sampled, with parallel "Reject Lot" and "Accept Lot" boundary lines and a continue-sampling region between them. The figure's axis labels did not survive the scan.]
P1 -- Lowest acceptable proportion of proficient trials (P) required to pass the NATOPS flight evaluation with a grade of "Qualified." Passage of the NATOPS flight evaluation is required to be considered a trained aviator in an operational (fleet) squadron.

P2 -- Acceptable proportion of proficient trials (P) that represent desirable performance on the NATOPS flight evaluation.

Alpha (α) -- The probability of making a Type I decision error (deciding a student is proficient when in fact he is not proficient).

Beta (β) -- The probability of making a Type II decision error (deciding a student is not proficient when in fact he is proficient).
Parameter setting is a crucial element in the development of the sequential sampling decision model. Kalisch [1980] outlines three methods for selecting proficient/not proficient performance (q2/q1 values) as:

Method 1 -- External Criterion. Individuals are classified as masters, non-masters, or unknown on the basis of performance on criteria directly related to the instructional objectives. These criteria can be in terms of demonstrated levels of proficiency either on the job or in a training environment. The mean proportion of items answered correctly by the masters on an objective would provide an estimate for q2. Similarly, q1 would be the proportion correct for the non-masters.

Method 2 -- Rationalization. Experts in the subject area who understand the relation of the training objectives to the end result, e.g., on-the-job performance, select the q2 and q1 values to reflect their estimation of the necessary levels of performance. This method is probably the closest to that now used by the Air Force. The procedure may provide somewhat easier decision making since specifying two values creates an indecision zone: neither mastery nor non-mastery. This indecision zone indicates that performance is at a level which may not be mastery but is not sufficiently poor to be considered at a non-mastery level.

Method 3 -- Representative Sample. The scores of prior trainees, who demonstrate the entire range from extremely poor to exemplary performance on objectives, are used to estimate q2 and q1. The proportion correct for the entire sample is used to obtain an initial cutting score C. Scores are separated into two categories: (a) those scores greater than or equal to C, and (b) those scores less than C. For each category, the mean proportion correct score is computed. The mean for the first category equals q2; the mean for the second category equals q1.
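A minimal sketch of Method 3 in code (an illustration only, not from Kalisch; the score list is invented) makes the two-category computation concrete:

```python
def estimate_q_values(scores):
    """scores: prior trainees' proportion-correct scores (0..1)."""
    c = sum(scores) / len(scores)          # initial cutting score C
    high = [s for s in scores if s >= c]   # category (a): scores >= C
    low = [s for s in scores if s < c]     # category (b): scores < C
    q2 = sum(high) / len(high)             # mean of the first category
    q1 = sum(low) / len(low)               # mean of the second category
    return q1, q2

q1, q2 = estimate_q_values([0.55, 0.60, 0.72, 0.80, 0.85, 0.90])
print(round(q1, 3), round(q2, 3))   # -> 0.623 0.85
```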
The selection of alpha (α) and beta (β) should be based on the criticality of accurate proficiency decisions. Small values of alpha (α) and beta (β) require additional task trials to make decisions with greater confidence. Factors that are important in selecting values for alpha (α) and beta (β) are outlined below:
(1) Alpha (α) values

    (a) Safety -- potential harm to the trainee or to others due to the trainee's actual non-mastery of the task.

    (b) Prerequisite in Instruction -- potential problems in future instruction, especially if the task is prerequisite to other tasks.

    (c) Time/Cost -- potential loss or destruction of equipment either in training or upon fleet assignment.

    (d) Trainee's View of the Training -- potential negative view by the trainee when classified as proficient although the trainee lacks confidence in that decision. Also, after fleet assignment, if previous training has not prepared him sufficiently, the trainee may also have a negative view of the training program.

(2) Beta (β) values

    (a) Instruction -- requirement for additional training resources (personnel and materials) for unnecessary training in case of misclassification as not proficient.

    (b) Trainee Attitudes -- the attitude of trainees when tasks have been mastered yet training continues; trainee frustration; corresponding impact on performance in the remainder of the training program and fleet assignment.

    (c) Cost/Time -- the additional cost and time required for additional training that is not really needed.
After the model parameters have been selected, calculation of the decision boundaries may be accomplished using the Wald Binomial Probability Ratio Test. A formal mathematical discussion of this test follows.

II. WALD BINOMIAL PROBABILITY RATIO TEST
The Wald binomial probability ratio test was developed by Wald [1947] as a means of making statistical decisions using as limited a sample as possible. The procedure involves the consideration of two hypotheses, $H_0\!: P = P_1$ and $H_1\!: P = P_2$, where P is the proportion of nondefectives in the collection under consideration, P1 is the minimum proportion of nondefectives at or below which the collection is rejected, and P2 is the desired proportion of nondefectives, at or above which the collection is accepted. Since a simple hypothesis is being tested against a simple alternative, the basis for deciding between H0 and H1 may be tested using the likelihood ratio:

$$R \;=\; \frac{P_{2n}}{P_{1n}} \;=\; \frac{P_2^{\,dn}\,(1-P_2)^{\,n-dn}}{P_1^{\,dn}\,(1-P_1)^{\,n-dn}}$$

Where:

P1 = Minimum proportion of nondefectives at or below which the collection is rejected.
P2 = Desirable proportion of nondefectives at or above which the collection is accepted.
n = Total items in collection.
dn = Total nondefectives in collection.
The sequential testing procedure provides for a postponement region based on prescribed values of alpha (α) and beta (β) that approximate the two types of errors found in the statistical decision process. To test the hypothesis $H_0\!: P = P_1$, calculate the likelihood ratio and proceed as follows:

$$\text{(1)}\quad \text{If}\;\; \frac{P_{2n}}{P_{1n}} \;\le\; \frac{\beta}{1-\alpha}, \;\;\text{accept } H_0.$$

$$\text{(2)}\quad \text{If}\;\; \frac{P_{2n}}{P_{1n}} \;\ge\; \frac{1-\beta}{\alpha}, \;\;\text{accept } H_1.$$

$$\text{(3)}\quad \text{If}\;\; \frac{\beta}{1-\alpha} \;<\; \frac{P_{2n}}{P_{1n}} \;<\; \frac{1-\beta}{\alpha}, \;\;\text{take an additional observation.}$$
These three decisions relate well to the task proficiency problem. We may use the following rules:

(1) Accept the hypothesis that the grade of P is accumulated in lower proportions than acceptable performance would indicate.

(2) Reject the hypothesis that the grade of P is accumulated in lower proportions than acceptable performance would indicate. By rejecting this hypothesis, an alternative hypothesis is accepted that the grade of P is accumulated in proportions equal to or greater than desired performance.

(3) Continue training by taking an additional trial(s); a decision cannot be made with specified confidence.
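In code, the three-way rule is easiest to apply on the log scale. The sketch below is a minimal illustration under the definitions above; it is not from the thesis, and the parameter values in the call are invented:

```python
import math

def sprt_decision(dn, n, p1, p2, alpha, beta):
    """dn = trials graded proficient so far; n = total trials so far."""
    # log of the likelihood ratio P2n / P1n
    log_ratio = (dn * math.log(p2 / p1)
                 + (n - dn) * math.log((1 - p2) / (1 - p1)))
    if log_ratio <= math.log(beta / (1 - alpha)):
        return "not proficient"     # rule (1): accept H0
    if log_ratio >= math.log((1 - beta) / alpha):
        return "proficient"         # rule (2): reject H0, accept H1
    return "continue training"      # rule (3): take an additional trial

print(sprt_decision(dn=11, n=12, p1=0.6, p2=0.9, alpha=0.05, beta=0.10))
# -> proficient
```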
The following equations are used to calculate the decision regions of the sequential sampling decision model:

$$dn \;\le\; \frac{\log\!\left(\beta/(1-\alpha)\right)}{\log(P_2/P_1) + \log\!\left((1-P_1)/(1-P_2)\right)} \;+\; n\cdot\frac{\log\!\left((1-P_1)/(1-P_2)\right)}{\log(P_2/P_1) + \log\!\left((1-P_1)/(1-P_2)\right)}$$

$$dn \;\ge\; \frac{\log\!\left((1-\beta)/\alpha\right)}{\log(P_2/P_1) + \log\!\left((1-P_1)/(1-P_2)\right)} \;+\; n\cdot\frac{\log\!\left((1-P_1)/(1-P_2)\right)}{\log(P_2/P_1) + \log\!\left((1-P_1)/(1-P_2)\right)}$$

Where:

dn = Accumulation of trials graded as "P" in the sequence.
n = Total trials presented in the sequence.
P1 = Lowest acceptable proportion of proficient trials (P) required to pass the NATOPS flight evaluation with a grade of "Qualified."
P2 = Proportion of proficient trials (P) that represent desirable performance on the NATOPS flight evaluation.
Alpha (α) = The probability of making a Type I error (deciding a student is proficient when in fact he is not proficient).
Beta (β) = The probability of making a Type II error (deciding a student is not proficient when in fact he is proficient).
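For illustration, the two boundary lines can be computed directly from these equations. This sketch is not from the thesis, and the parameter values are invented:

```python
import math

def decision_boundaries(p1, p2, alpha, beta):
    d = math.log(p2 / p1) + math.log((1 - p1) / (1 - p2))  # common denominator
    slope = math.log((1 - p1) / (1 - p2)) / d              # second term (slope)
    lower = math.log(beta / (1 - alpha)) / d               # "not proficient" intercept
    upper = math.log((1 - beta) / alpha) / d               # "proficient" intercept
    return lower, upper, slope

lo, up, sl = decision_boundaries(p1=0.6, p2=0.9, alpha=0.05, beta=0.10)
print(f"not proficient: dn <= {lo:.2f} + {sl:.3f} * n")
print(f"proficient:     dn >= {up:.2f} + {sl:.3f} * n")
# -> not proficient: dn <= -1.26 + 0.774 * n
#    proficient:     dn >= 1.61 + 0.774 * n
```

With these invented parameters, a student graded proficient on every trial would cross the upper boundary at the eighth trial, since dn = n first satisfies n >= 1.61 + 0.774 n at n = 8.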
The first term of the two equations will determine the intercepts of the two linear equations. The width between these intercepts is determined largely by the values selected for alpha (α) and beta (β). The width between the intercepts translates into a region of uncertainty; thus, as lower values of alpha (α) and beta (β) are selected, this region of uncertainty increases.
The second term of the equations determines the slopes of the linear equations. Since the second term is the same for both equations, the result will be parallel lines. Values of P1 and P2, as well as differences between P1 and P2, affect the slope of the lines. This is easily translated into task difficulty. As P2 values increase, indicating easier tasks, the slope becomes more steep. This in turn results in fewer trials required in the sample to reach a decision.

As differences in P1 and P2 increase, the slope also becomes steeper and the uncertainty region decreases. This is consonant with rational decision making. When the difference between the lower level of proficiency and the upper level of proficiency is great, it is easier to determine at which proficiency level the pilot trainee is performing. The concept of differences in P1 and P2 is analogous to the concept of effect size in statistically testing the difference between the means of two groups. In such statistical testing, when alpha (α) and beta (β) remain constant, the number of observations required to detect a significant difference may be reduced as the anticipated effect size increases [Kalisch, 1980].
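The effect of widening the P1-to-P2 gap can be seen numerically. In this small sketch (not from the thesis; the values are invented), the slope steepens and the uncertainty region narrows as P2 moves away from a fixed P1:

```python
import math

def slope_and_width(p1, p2, alpha=0.05, beta=0.10):
    d = math.log(p2 / p1) + math.log((1 - p1) / (1 - p2))
    slope = math.log((1 - p1) / (1 - p2)) / d
    width = (math.log((1 - beta) / alpha) - math.log(beta / (1 - alpha))) / d
    return slope, width

for p2 in (0.75, 0.85, 0.95):
    slope, width = slope_and_width(0.6, p2)
    print(f"P2 = {p2}: slope = {slope:.3f}, uncertainty width = {width:.2f}")
# -> P2 = 0.75: slope = 0.678, uncertainty width = 7.42
#    P2 = 0.85: slope = 0.738, uncertainty width = 3.87
#    P2 = 0.95: slope = 0.819, uncertainty width = 2.03
```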
INITIAL DISTRIBUTION LIST

1.  Defense Technical Information Center (2 copies)
    Cameron Station
    Alexandria, Virginia 22314

2.  Library, Code 0142 (2 copies)
    Naval Postgraduate School
    Monterey, California 93940

3.  Department Chairman, Code 55 (1 copy)
    Department of Operations Research
    Naval Postgraduate School
    Monterey, California 93940

4.  Asst Professor D. E. Neil, Code 55 (2 copies)
    Department of Operations Research
    Naval Postgraduate School
    Monterey, California 93940

5.  LT Ted R. Mixon (5 copies)
    461 Lilac Rd
    Casselberry, Florida 32707

6.  Mr. Don Vreuls (1 copy)
    Vreuls Research Corporation
    68 Long Court, Suite E
    Thousand Oaks, California 91360

7.  Dr. Jack B. Shelnutt (1 copy)
    Seville Research Corporation
    400 Plaza Building
    Pensacola, Florida 32505

8.  A. F. Smode, Director, TAEG (1 copy)
    Training Analysis and Evaluation Group
    Orlando, Florida 32813

9.  LT Gerald S. Stoffer (1 copy)
    Code N-712
    Naval Training Equipment Center
    Orlando, Florida 32813

10. Don Norman (1 copy)
    Code N-71
    Naval Training Equipment Center
    Orlando, Florida 32813

11. COMNAVAIRPAC, Commander Naval Air Force (1 copy)
    US Pacific Fleet
    Box 1210, Naval Air Station, North Island
    San Diego, California 92135

12. COMNAVAIRLANT, Commander Naval Air Force (1 copy)
    US Atlantic Fleet
    Norfolk, Virginia 23511

13. COMTACWINGSLANT, Commander Tactical Wings Atlantic (1 copy)
    Naval Air Station, Oceana
    Virginia Beach, Virginia 23460

14. COMMATVAQWINGPAC, Commander Medium Attack (2 copies)
    Tactical Electronic Warfare Wing Pacific
    Naval Air Station, Whidbey Island
    Oak Harbor, Washington 98278

15. MATWING, Medium Attack Wing 1 (2 copies)
    Naval Air Station, Oceana
    Virginia Beach, Virginia 23460

16. Commanding Officer, Attack Squadron 42 (VA-42) (2 copies)
    Naval Air Station, Oceana
    Virginia Beach, Virginia 23460

17. Commanding Officer, Attack Squadron 128 (VA-128) (2 copies)
    Naval Air Station, Whidbey Island
    Oak Harbor, Washington 98278

18. CDR W. Moroney, Code 55Mp (6 copies)
    Department of Operations Research
    Naval Postgraduate School
    Monterey, California 93940

19. NAVAIRSYSCOM Headquarters, AIR 5313 (1 copy)
    Navy Department
    Washington, DC 20361

20. NAVAIRSYSCOM Headquarters, AIR 34 (1 copy)
    Navy Department
    Washington, DC 20361

21. NAVAIRSYSCOM Headquarters, AIR 4135A (1 copy)
    Navy Department
    Washington, DC 20361

22. Leonard Ryan (1 copy)
    Naval Training Equipment Center
    Orlando, Florida 32813

23. Dr. Sam Campbell
    Code A11-04
    Grumman Aerospace Corporation
    Bethpage, New York 11714

24. Professor Glenn Lindsay, Code 55Ls
    Department of Operations Research
    Naval Postgraduate School
    Monterey, California 93940

25. Dr. Tom Killion
    Air Force Human Resources Laboratory
    Code OT(UDRI), Bldg 558
    Williams AFB, Arizona 85224

26. Ira Goldstein
    Code N-71
    Naval Training Equipment Center
    Orlando, Florida 32813

27. LT Alexander Bory
    Human Factors Engineering Branch
    Pacific Missile Test Center
    Point Mugu, California 93042

28. CDR Paul R. Chatelier
    Office of Director Defense Research and Development
    3D129 Pentagon
    Washington, DC 20350

29. LCDR Wade R. Helm
    Human Factors Division
    Naval Air Development Center
    Warminster, Pennsylvania 18074

30. CDR Robert S. Kennedy
    Chief, Human Performance Science Division
    Naval Biodynamic Laboratory
    Box 29407, Michoud Station
    New Orleans, Louisiana 70189

31. CDR Norman E. Lane
    Human Factors Engineering Division
    Crew Systems Department
    NADC, Warminster, Pennsylvania 18074

32. LT Michael G. Lilienthal
    Code 5
    Human Performance Sciences Department
    Naval Aerospace Medical Research Laboratory
    NAS Pensacola, Florida 32508

33. CDR Joseph F. Funaro
    Code N-82
    Naval Training Equipment Center
    Orlando, Florida 32813

34. CAPT Thomas J. Gallagher
    Naval Air Development Center
    Warminster, Pennsylvania 18074

35. LCDR Richard H. Shannon
    Naval Biodynamic Laboratory
    Box 29407, Michoud Station
    New Orleans, Louisiana 70189

36. CDR Patrick M. Curran
    Head, Aerospace Psychology Branch (3C13)
    Bureau of Medicine and Surgery
    Navy Department
    Washington, DC 20372

37. Stan Collyer
    Human Factors Laboratory
    Naval Training Equipment Center
    Orlando, Florida 32813

38. Larry R. Beideman
    McDonnell Douglas Corporation
    Box 516
    Saint Louis, Missouri 63166

39. William C. McDaniel
    Training Analysis and Evaluation Group
    Orlando, Florida 32813