Proceedings of the 11th International Symposium on Aviation Psychology, March 2001
COGNITION IN THE COCKPIT: IN NEED OF A THEORY
Kathleen L. Mosier
Roberta Bernhard
Jeffrey Keyes
San Francisco State University
San Francisco, CA
ABSTRACT
In this paper, we present the results of a study investigating how pilots make choices in paper-and-pencil scenarios – specifically, whether they are biased in favor of automated information. We then discuss the cognitive demands of the flying task in the highly automated aircraft, and introduce a new theoretical framework organized around the goals of correspondence and coherence and tactics varying on a continuum from intuition to analysis (Hammond, 1996) to describe and explain the cognitive mechanisms underlying actions and results.
INTRODUCTION
Omission and commission errors resulting from
automation bias, the tendency to rely on automated
cues as a heuristic replacement for vigilant information
seeking and processing, have been documented in
professional pilots and students, in one- and two-person crews (e.g., Mosier, Skitka, Dunbar, & McDonnell, 2001; Mosier, Skitka, Heers, & Burdick, 1998; Skitka, Mosier, Burdick, & Rosenblatt, 2000;
Skitka, Mosier, & Burdick, 1999). We can discern
WHAT pilots do in terms of actions and results
(omission and commission errors), but are still unclear
about WHY they do what they do – that is, what the
cognitive mechanisms are that underlie and explain this
behavior. Underlying causes of omission errors have
been traced in part to vigilance issues, as crews who
are monitoring flight progress and system status often
"miss" events that are not pointed out to them by
automated systems. Causes of commission errors are
harder to track. It has been hypothesized that
commission errors may be related to a desire of pilots
to "take action," particularly as proactivity has typically
been associated with superior crew performance.
Additionally, most of the studies cited above
utilized low- or medium-fidelity flying tasks, and the
salience of the automated display may have fostered a
tendency to rely heavily on it for information. What
happens when automated information is presented in a
format, such as text on paper, that makes it equal in
salience to other information? Will we see the same
cognitive processes and tendency to act on automated
information as has been elicited in previous studies?
To investigate these questions, a paper-and-pencil
scenario study was conducted using regional airline
pilots as participants.
Scenario Study
METHOD
Scenario Development. Scenarios were created
using incidents from ASRS (Aviation Safety Reporting
System) reports and from previous research studies
(Fischer, Orasanu, & Wich, 1995). Care was taken to
ensure that scenarios were representative enough that
they could be responded to by pilots of several
different aircraft types. Each scenario conveyed a
situation involving conflicting information from two sources: an automated source and either a human source
sources: an automated source + either a human source
or a traditional indicator. In each scenario, information
from one source suggested making some change
(action); information from the other source suggested
maintaining status quo. Each scenario was followed by
two decision options - for example:
You are the pilot flying on approach into your
destination in VMC. You would really like to
expedite your arrival, because you are already late
and many of the passengers are in danger of
missing their connections. You are being vectored
in for a landing behind a 757, which you know is
notorious for causing wake turbulence problems for
aircraft following it. Air traffic control has told you
that you are presently 5 miles behind the 757, in no
danger of encountering wake turbulence, and to
maintain your present speed of 200 knots to stay in
sequence for landing. You look at the TCAS
display, and it shows you only 3 miles behind the
757.
Given this information, what would be your
decision?
__ Hold present speed and distance from the 757.
__ Get ATC clearance to slow down to increase the
distance from the 757.
The above scenario contains information from an automated source (TCAS) and a human source (Air Traffic Controller). In this version, the
information from the human source suggests that the
pilot maintain status quo; information from the
automated source suggests a change. Pilots were asked
to choose one of the options, and to report their level of
confidence in the decision (not confident → very confident) as well as the risk involved in the scenario (minimal risk → high risk) on 1-9 scales. Pilots were
told in a cover letter that we realized the scenarios
might not contain all of the information or decision
options that they would like to have, but asked them to
make a choice based on what was available in the
scenarios.
Procedures. Two different packets of 10 scenarios
each were created. Seven of the scenarios were
matched between packets - that is, the same scenario
was manipulated so that, in Packet 1, the information
from the automated source suggested action, and in
Packet 2, the information from the other source
suggested the same action. Pilots saw only one version
of each scenario. Two scenarios involved engine fires.
One contained conflicting indications about whether or
not an engine fire was actually present; the other
contained conflicting action recommendations - an
automated source suggested that one of two engines
was on fire and should be shut down, but traditional
indications suggested that it was actually the other
engine that was damaged. Two additional scenarios
were added to each packet to even out the number of
human and traditional indicators contained in the
scenarios. Demographic information solicited included
flight hours, years with current airline, and experience
by aircraft type.
Approximately 700 packets were distributed to the
mailboxes of pilots of a US regional carrier. Pilots
were asked to place completed packets into a collection
box in their operations office. One hundred twenty-five
packets with usable data were returned to us.
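To make the tallying concrete, the sketch below shows one way responses of this kind could be coded and totaled by information source and by action/status quo choice. The record fields and example entries are our own hypothetical illustration, not the study's actual data file:

    # Hypothetical sketch: tallying scenario responses by source followed
    # and by action/status-quo choice (not the study's actual data file).
    from collections import Counter

    # One record per pilot per scenario: which source the chosen option
    # agreed with, and whether that option was action or status quo.
    responses = [
        {"scenario": "TCAS vs ATC", "source": "automation", "choice": "action"},
        {"scenario": "TCAS vs ATC", "source": "human", "choice": "status_quo"},
        # ... one record per returned packet response
    ]

    source_totals = Counter((r["scenario"], r["source"]) for r in responses)
    choice_totals = Counter((r["scenario"], r["choice"]) for r in responses)

    for scenario in sorted({r["scenario"] for r in responses}):
        print(scenario,
              "| automation:", source_totals[(scenario, "automation")],
              "human/traditional:", source_totals[(scenario, "human")],
              "| action:", choice_totals[(scenario, "action")],
              "status quo:", choice_totals[(scenario, "status_quo")])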
RESULTS AND DISCUSSION
Pilot respondents ranged in age from 22-55 years (M =
34), and had total hours of flight experience ranging
from 1,000-23,000 hours (M = 5,382; SD = 3,787).
Glass cockpit hours varied from 0-10,400 hours (M =
1,355; SD = 2,047). It should be noted that this sample
represents a broad range of flight experience,
particularly with respect to glass cockpit experience.
Decision choices. The nature of the data did not
lend itself to traditional statistical comparison of all
scenarios against each other. However, in looking at
matched scenario pairs, we found no systematic
evidence of a preference for automated information in
pilot decisions – in fact, in none of the scenario pairs
was automated information followed across packets.
Rather, we saw a pronounced scenario effect; that is, in
most scenarios there was high agreement across
packets on the preferred option, the risk level of the
scenario, and the confidence with which pilots chose an
option. We did not find evidence of a preference for
action (which was, in most cases, the more
conservative option) across all scenarios, although the
higher the estimated risk of a scenario, the more likely
pilots were to choose action, and the more confident
they were in their choice.
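The risk-action relationship reported above could be examined, for example, with a point-biserial correlation between rated risk and the binary action choice. The sketch below uses invented numbers purely to illustrate the computation; it does not reproduce the study's data:

    # Illustrative only: invented ratings, not the study's responses.
    # Correlates scenario risk ratings (1-9) with action (1) vs.
    # status quo (0) choices.
    from scipy.stats import pointbiserialr

    risk_rating = [2, 3, 4, 5, 6, 6, 7, 8, 8, 9]   # 1-9 risk scale
    chose_action = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]  # 1 = chose action

    r, p = pointbiserialr(chose_action, risk_rating)
    print(f"point-biserial r = {r:.2f}, p = {p:.3f}")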
The most dramatic scenario response patterns
exhibited either a clear source effect (e.g., following a
particular source), or a response effect (e.g., taking
action vs maintaining status quo). A more complete
picture can be gained by looking specifically at the
scenario pairs, as displayed in Table 1. The table shows pilot responses, by source of information and by action/status quo options. The number for the predominant predictor in each scenario, source or response, is marked with an asterisk.
Table 1. Decision choice totals by scenario.

    Scenario (information conflict)         Source Effect               Response Effect
                                            Automation  Human/Trad.     Action   Status Quo
    TCAS vs ATC                                 61          64           110*        15
    TCAS vs ATC                                 65          58            98*        25
    Warning light vs human/indirect             74          50           119*         5
    TCAS vs PNF                                 34          90*           74         50
    Computer vs PNF                             35          86*           56         65
    FMS vs traditional (VHF nav)                31          99*           71         51
    Ambiguous engine fire:
      warning system vs engine gauges           70          54             6        118*
    Engine fire - which engine?:
      warning system vs engine gauges            9          94*       (action/action)

    * = predominant predictor for the scenario.
When automated information conflicted with
information from a human source, for example, the
nature of the human information impacted decisions. If
the "human" was the air traffic controller, or when the
human offered indirect information (e.g., remembered
a similar incident being a false alarm), pilots exhibited
a response effect, and tended to follow whichever
source recommended action. However, when the PNF
(pilot-not-flying) was the source of direct information,
pilots made the decision suggested by the human
(source) rather than the automation. In automation vs.
traditional indicator conflicts, we observed a tendency
to follow traditional rather than automated indicators
(source). This included the scenario that contained
conflicting engine fire indications (which engine was
on fire). Pilots most often followed traditional, rather
than automated indicators. In the other engine fire
scenario, the dilemma was whether or not an engine
was actually on fire, and should be shut down. In this
conflict, pilots responded conservatively, and this was
the only scenario in which an inaction response was
prevalent (response). It should be noted that this contradicts data we have collected in previous studies – in the part-task simulator, pilots almost always shut the engine down (Mosier et al., 1998; 2001).
This study, although admittedly limited in scope,
suggests that, when data are made to be equally salient,
pilots do not exhibit a systematic preference for
automated information, but rather a tendency toward
action in higher-risk situations, and a trust of traditional
indicators and direct human information. Other factors,
such as the perceived validity of conflicting
information, also impacted whether or not automated
cues were trusted. Pilots seem to assume high validity
when information comes from fellow crewmembers,
and lower validity when the reporting human is an air
traffic controller.
Results of this study have several possible
explanations. One hypothesis is that regional airline
pilots, typically less experienced with automated
aircraft than the commercial, B-737/747/767 pilots of
previous studies, are not yet “jaded” by automation.
Given this explanation, we may be able to impact
automation bias if we train pilots early enough in their
careers to evaluate automated cues in context with
other cues. A second possible explanation is that the paper-and-pencil format provides information differently than it is shown within the cockpit, and
allows the information to be processed in a less biased
and more analytical way. It is certainly true that the
information displays in the automated cockpit are very
different from our paper-and-pencil presentation – and
that these displays have been found to result in
automation bias, automation surprises, and mode
confusion. It is possible, then, that features of the glass
cockpit may not be eliciting the type of cognition
required for effective analysis in the automated
environment. In order to define and resolve this
inconsistency, it is necessary to examine the cognitive
requirements of the automated cockpit within a
theoretical framework that explains and predicts a wide
variety of automation-related behaviors and associated
errors.
AUTOMATED COCKPIT: CHANGE IN TASK
AND CHANGE IN COGNITION
Implications of automation in the aircraft cockpit in terms of the pilots' tasks are enormous, and
have been discussed at length from many perspectives.
The shift from active control to systems monitoring has
also profoundly changed the type of cognitive activity
required of pilots. Models that explain pilot behavior
in terms of perception -- > response must be replaced
by others that focus on thinking, judgment, and
decision making. Most importantly, in terms of
theoretical implications, the automated cockpit brings
cues that were in the outside environment into the
cockpit, and displays them as highly reliable and
accurate information rather than probabilistic cues.
This changes the goal of pilot cognition from
correspondence, or empirical accuracy in using
probabilistic cues for diagnosis, judgment, and
prediction, to coherence, or rationality and consistency
in diagnostic and judgment processes (Hammond,
1996; 2000).
Correspondence. The goal of correspondence in
cognition is empirical, objective accuracy in human
judgment. Correspondence competence refers to an
individual’s ability to accurately perceive and respond
to multiple fallible indicators in the environment (e.g.,
Brunswik, 1956). A pilot, for example, exercises
correspondence competence when using cues outside
the cockpit to figure out aircraft position, or to judge
height and distance from an obstacle or a runway.
Features of the environment and of the cues utilized
will impact the accuracy of correspondence judgments.
For example, cues that are concrete and/or can be
perceived clearly will facilitate accurate judgments. A
pilot will have a relatively easy time judging a 5-mile
reporting point when it is marked by a distinctive
building. Cues that are murkier, either because they
are not as concrete in nature or because they are
obscured by factors in the environment, will hinder
accurate judgments. The same pilot will have a much
harder time judging the report point at night, or when
the building is hidden by fog or clouds.
Correspondence judgments cannot be made without
reference to the “real world,” and are evaluated
according to how well they represent, predict, or
explain objective reality.
Coherence. The goal of coherence in cognition, on
the other hand, is rationality in judgments and
decisions. Coherence competence refers to an individual's ability to maintain logical consistency in diagnoses, judgments, or decisions. In modern, high-tech aircraft, the flying task is to a very great extent
coherence-based. In contrast to earlier pilots, glass
cockpit pilots can spend relatively little of their time
looking out the window, and most to all of it focused
on information inside the cockpit. The data that they
utilize to fly can, in most cases, be found on cockpit
display panels and CRTs. These data are qualitatively
different from the cues used in correspondence
judgments. They are data, rather than cues - that is,
they are precise, reliable indicators of whatever they
are designed to represent.
Coherence judgments can be made without direct
reference to cues in the “real world” (the pilot never
even has to look out the window) – what is important is
the logical consistency, or coherence, of the process
and resultant judgment. In contrast to correspondence
competence, the quality of the cognitive process
utilized is the sole evaluative criterion for coherence.
A pilot exercises coherence competence when scanning
the information displayed inside the cockpit to ensure
that system parameters, flight modes, and navigational
displays are consistent with what should be present.
What the pilot strives for is a rationally "good" picture
- engine and other system parameters should be in sync
with flight mode and navigational status - and
decisions that are consistent with what is displayed.
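As a toy illustration of such a scan, the sketch below checks whether a few displayed parameters "hang together" with the selected flight mode. All parameter names and thresholds are invented for illustration; real cockpit logic is vastly more elaborate:

    # Toy coherence check: do displayed parameters fit the flight mode?
    # All names and thresholds are invented for illustration.
    def picture_is_coherent(mode: str, n1_percent: float,
                            gear_down: bool, on_glide_path: bool) -> bool:
        if mode == "APPROACH":
            # On approach we expect reduced thrust, gear down, and
            # glide-path capture.
            return n1_percent < 70 and gear_down and on_glide_path
        if mode == "CLIMB":
            return n1_percent > 85 and not gear_down
        return True  # other modes left unchecked in this sketch

    print(picture_is_coherent("APPROACH", 62.0, True, True))    # coherent
    print(picture_is_coherent("APPROACH", 92.0, False, False))  # incoherent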
Much of the research on coherence in judgment and
decision making has focused on the difficulty humans
have maintaining coherence. Most of the biases
individuals exhibit in decision making, including
automation bias, are the result of non-coherent
judgment processes – not using data in a rational and
consistent way.
Both correspondence and coherence are important
in aviation. Aviation is a correspondence-driven
domain, because it exists in a physical and social world
and is subject to rules and constraints of that world
(Vicente, 1990). Situation awareness demands an
accurate perception of the world within and outside of
the cockpit. However, with respect to human-automation interaction in the glass cockpit, the demand
for correspondence-driven cue processing has, to a
very great extent, been removed from the cockpit by
automated systems and displays. When crews achieve
coherence in the cockpit – for example, when ALL
information inside the cockpit paints a consistent
picture of the aircraft on the glide path – they have also
achieved correspondence, and can be confident that the
aircraft IS on the glide path. The pilots do not need to
look out the window for airport cues to confirm it, and,
in fact, visibility conditions often do not allow them to
do so.
Two aspects of correspondence and coherence are
critical in understanding their role in aviation
cognition. First, correspondence and coherence are
complementary, either/or processes. An individual
may alternate between coherence and correspondence,
but cannot do both at once (Hammond, 2000). While
landing an aircraft, a pilot may switch back and forth
rapidly from correspondence to coherence - checking
cues outside of the window, glancing inside at cockpit
instruments, back out the window - or, in some cases,
one crew member will be responsible for coherence
and the other for correspondence. A standard landing
routine, for example, calls for one pilot to keep his or her head "out the window" while the other monitors
instruments and makes altitude callouts.
Second, because aviation is a correspondence-driven domain, pilots must be able to trust the
empirical accuracy of the data used to achieve
coherence. This means that the achievement of
coherence must also accomplish correspondence. The
pilot may not be able to verify this because he or she
does not always have access to either correspondence
cues or to "objective reality." When programming a
flight plan or landing in poor weather, for example, the
pilot must be able to assume that the aircraft will fly
what is programmed and that the instruments are
accurately reflecting altitude and course. When
monitoring system functioning, the pilot must be
confident that the parameters displayed on the
instrument panel are accurate. The cockpit is a
deterministic, rather than a probabilistic environment,
in that the uncertainty has, for most practical purposes,
been engineered out of it through high system
reliability.
In the automated cockpit, then, the priority for
correspondence in cognitive processing has been
replaced by a demand for coherence. This shift in
cognitive goals means that we need to re-examine cognition in the automated cockpit to determine what is required to achieve, maintain, and recover coherence in the
cockpit, and whether or not these processes are
supported by current displays of information.
Intuition and Analysis. The goals of correspondence and coherence can be achieved by cognitive tactics ranging on a continuum from intuition to analysis (e.g., Hammond, 1996; Hammond, Hamm, Grassia, & Pearson, 1997). In the aviation context, novice pilots analytically strive for correspondence – accuracy – by using a combination of cues, rules, and computations to figure out when to start a descent for landing (see Figure 1).

[Figure 1. Cognitive tactics to achieve correspondence and coherence in the cockpit. Both goals lie on an intuition-to-analysis continuum: for correspondence, from pattern matching (intuition) to rules and computations (analysis); for coherence, from pattern matching for (some) anomaly detection (intuition) to analysis.]
Pilots also learn to use intuitive, pattern-matching
processes to assess cues and judge situations. As
they gain more experience, the correspondence
process becomes more recognitional, and their
intuitive assessment of whether the situation “looks
right” to start down becomes increasingly effective.
In the naturalistic environment, a pilot’s
correspondence competence – that is, the ability to
utilize probabilistic cues in the environment to assess
situations and predict outcomes - increases with
expertise. Expert pilots are able to quickly recognize
a situation, and may be able to use intuitive processes
under conditions that would demand analysis of a
novice.
The design and display of most automated
systems elicit intuitive cognition. Unlike the
information in our paper-and-pencil scenario study,
for example, data in the electronic cockpit are pre-processed, and presented in a format that allows, for the most part, a holistic view of aircraft and system
states. Often, pictorial representations exploit human
intuitive pattern-matching abilities, and allow quick
detection of out-of-parameter system states. This
design philosophy seems to be consistent with the
goals of workload reduction and information
consolidation - and, indeed, many features of cockpit
displays do foster the quick detection of disruptions
to a coherent state. However, current displays may in
fact be leading pilots astray by fostering the
assumption that cockpit data can be managed in an
intuitive fashion. This is a false assumption.
Anomaly Resolution
Although pilots can intuitively infer coherence
among cockpit indicators much of the time if things
are operating smoothly, repairing - and often
detecting - disruptions to coherence demands a shift
toward analysis. Many errors and anomalies, such as
being in the incorrect flight mode, can only be
detected via analysis. Mode confusion, for example,
often results from what looks, intuitively speaking,
like a coherent picture.
Additionally, the complex nature of the
automated cockpit requires that disruptions to
coherence be resolved via analytical means. Data in
displays must be compared with expected data to
detect discrepancies, and, if they exist, analysis is
required to resolve them before they translate into
unexpected or undesired aircraft behaviors.
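A minimal sketch of that comparison step, under the assumption of a simple expected-state model with per-parameter tolerances (all values hypothetical):

    # Hypothetical sketch: flag displayed values that depart from
    # expected values by more than a tolerance, for analytic resolution.
    def discrepancies(expected, displayed, tolerance):
        flagged = []
        for name, want in expected.items():
            got = displayed[name]
            if abs(got - want) > tolerance.get(name, 0.0):
                flagged.append((name, want, got))
        return flagged

    expected = {"altitude_ft": 10000, "heading_deg": 270, "speed_kt": 250}
    displayed = {"altitude_ft": 9990, "heading_deg": 285, "speed_kt": 251}
    tolerance = {"altitude_ft": 100, "heading_deg": 5, "speed_kt": 10}

    for name, want, got in discrepancies(expected, displayed, tolerance):
        print(f"{name}: expected {want}, displayed {got} -- resolve "
              "before it becomes unexpected aircraft behavior")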
Expertise, then, does not offer the same advantages to
the pilot in the electronic world as in the naturalistic
world. Experience may give pilots hints on where to
look for anomalies, but it does not insulate them from
the need to analyze their way back to coherence.
Experts who think they can operate intuitively in
the electronic cockpit are susceptible to the kinds of
automation-related errors often discussed by
researchers - such as mode errors or automation bias.
SUMMARY
Technological advances in the aircraft cockpit
have resulted in profound changes in the flying task
and in the cognitive requirements of the pilot. The
sophistication of automated systems means that pilots
have access to highly reliable and accurate
information (rather than probabilistic cues), and that
the nature of the pilots' cognitive task has been altered from one demanding largely intuitive, correspondence-based processing to one requiring
primarily analytical, coherence-based cognition.
Examining cognition in the cockpit in terms of a
correspondence/coherence framework has practical
implications for pilots and for displays. With respect
to the pilots, it must be recognized that correspondence competence and coherence competence are very different abilities - and skill in
one does not necessarily guarantee skill in the other.
Pilot training programs have in recent years
recognized the importance of correspondence
competence, and have moved toward naturalistic
models of the pilot decision-making process and the
impact of expertise. These models have focused on
correspondence competence - the ability to recognize
probabilistic cues in a dynamic environment, to
quickly assess the situation, and to accurately predict
the outcome of decisions and actions.
Training for intuitive correspondence competence, however, is not sufficient. It is
important also to recognize the importance of
coherence competence in the electronic cockpit, and
to include training for it. This process demands more
than attention and vigilance - it entails the rational,
consistent use of information in diagnosis and
decision making.
Displays in the cockpit should not only support
intuitive processes, such as the quick detection of
some out-of-parameter states, but must also provide
the information necessary for analysis. If the pilot is
expected to maintain coherence in the cockpit, he or
she must be able to develop accurate mental models
of system functioning. In order to track system status
and resolve anomalies, the electronic world must
support analysis of current states and resolution of
discrepancies.
Lastly, to validate these concepts in the aviation
context, research must be directed toward defining
and understanding cognitive processes in the cockpit
within the coherence and correspondence framework.
This includes tracking cognitive processes as
described by pilots, as elicited by displays, and as
demanded by situations.
ACKNOWLEDGEMENTS
We are grateful for the valued input from Dr.
Beth Lyall, Co-Investigator in this research program,
and her colleagues at Research Integrations, Inc.
Graduate students Richard Oppenheim, Lis West,
Kirill Elistratov, and Ozlem Arikan helped with the
scenario study. This work was funded in part by the
NASA Aviation Safety Program, NASA Grant
#NAG2-1285. Dr. Judith Orasanu at NASA Ames
Research Center is our Technical Monitor. Special
thanks to Ken Hammond, the coherence/
correspondence guru.
REFERENCES
Brunswik, E. (1956). Perception and the representative design of psychological experiments. Berkeley, CA: University of California Press.
Fischer, U., Orasanu, J., & Wich, M. (1995).
Expert pilots' perceptions of problem situations.
Proceedings of the 8th International Symposium on
Aviation Psychology, Columbus, OH.
Hammond, K. R. (1996). Human judgment and social policy. New York: Oxford University Press.
Hammond, K. R. (2000). Judgments under stress. New York: Oxford University Press.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1997). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. In W. M. Goldstein & R. M. Hogarth (Eds.), Research on judgment and decision making: Currents, connections, and controversies (pp. 144-180). Cambridge: Cambridge University Press.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick,
M. D. (1998). Automation bias: Decision making
and performance in high-tech cockpits. International
Journal of Aviation Psychology, 8, 47-63.
Mosier, K. L., Skitka, L. J., Dunbar, M., & McDonnell, L. (2001). Air crews and automation bias: The advantages of teamwork? International Journal of Aviation Psychology, 11(1), 1-14.
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision making? International Journal of Human-Computer Studies, 50, 991-1006.
Skitka, L. J., Mosier, K. L., Burdick, M., & Rosenblatt, B. (2000). Automation bias and errors: Are crews better than individuals? International Journal of Aviation Psychology, 10(1), 83-95.
Vicente, K. J. (1990). Coherence- and correspondence-driven work domains: Implications for systems design. Behaviour & Information Technology, 9(6), 493-502.