Automaticity development and decision making in complex, dynamic tasks

Dynamic Decision Making Laboratory
www.cmu.edu/DDMLab
Social and Decision Sciences Department
Carnegie Mellon University
Cleotilde Gonzalez
Rickey Thomas
Polina Vanyukov
Complex and dynamic tasks
Executing a battle, driving, air traffic control, managing a production plant, piloting, managing inventory in a production chain, etc.
• Demand real-time decisions (time constraints)
• Demand attentional control
• Require multi-tasking: they are composed of multiple, interrelated subtasks
• Demand the identification of ‘targets’ defined by multiple attributes
• Demand multiple and possibly changing responses
Automaticity in dynamic, complex tasks
• Targets and distractors are often inconsistently mapped to stimuli and responses
• Often, we bring pre-learned categories and mappings to a task:
o stimulus → category (e.g., “L” → letter)
o category → response (e.g., button → click)
• Are decision makers in dynamic situations operating in controlled processing continuously?
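The CM/VM distinction above can be made concrete with a toy sketch (illustrative only; the function and item set are my assumptions, not the lab's software): under consistent mapping, an item's role as target or distractor is fixed across trials, while under varied mapping the roles are reshuffled every trial, which is what prevents a stable stimulus-response association from forming.

```python
import random

def assign_roles(items, consistent, n_trials, seed=0):
    """Yield (targets, distractors) for each trial.

    Under consistent mapping (CM) the target set is fixed across trials;
    under varied mapping (VM) it is re-drawn on every trial.
    """
    rng = random.Random(seed)
    items = list(items)
    fixed_targets = set(rng.sample(items, len(items) // 2))
    for _ in range(n_trials):
        if consistent:
            targets = fixed_targets
        else:
            targets = set(rng.sample(items, len(items) // 2))
        yield targets, set(items) - targets
```

With `consistent=True` every trial yields the same target set, so a participant can acquire an automatic attention response to those items; with `consistent=False` the only viable strategy is a controlled memory search on every trial.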
Proposed model of automaticity in DDM
[Diagram: Each sub-task structure maps Cues → Categories → Responses, with CM/VM consistency at each coupling; Goals determine relevancy, and task switching allocates resources across sub-task structures.]
Experiments
• Automaticity develops with consistently mapped
stimuli to targets, even when targets move and time
is limited (Experiment 1)
• The consistency of target to response mapping also
determines automaticity development (Experiment
2)
• Automaticity of a task component frees up time and resources for high-level decision-making (Experiment 3)
• Automaticity develops differently with different
degrees of pre-learned categories (Experiment 4)
The Radar Task
General method
• Independent variables
o Stimulus mapping (CM or VM)
• CM = search for numbers among letters
• VM = search for letters among letters
o Cognitive load
• Memory set size (MSS): number of possible targets to remember (1 or 4)
• Frame size (FS): number of blips present on the screen at a given time (1 or 4)
o Target present/absent (a target was present on 75% of trials)
• Dependent variables
o Accuracy: proportion of correct detections or decision-making responses
o Time: mean target detection or decision-making time in msec
• 18 to 30 hours of practice: 3 hours per day, for 6 to 10 days
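The crossed design above can be enumerated mechanically; this is a sketch (the variable names are mine, not from the Radar software):

```python
from itertools import product

# The deck's independent variables: stimulus mapping x memory set size x
# frame size, with target presence as a per-trial probability.
MAPPINGS = ("CM", "VM")
MEMORY_SET_SIZES = (1, 4)   # MSS: possible targets to remember
FRAME_SIZES = (1, 4)        # FS: blips on screen at a given time
P_TARGET_PRESENT = 0.75     # a target was present on 75% of trials

def conditions():
    """All cells of the 2 x 2 x 2 within-task design."""
    return [
        {"mapping": m, "mss": mss, "fs": fs}
        for m, mss, fs in product(MAPPINGS, MEMORY_SET_SIZES, FRAME_SIZES)
    ]
```

This yields eight cells; the dependent measures (accuracy and mean detection or decision-making time) are aggregated per cell.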
Experiment 1: Consistency of stimuli
• Replicate major findings from the dual-process
theory (Schneider & Shiffrin, 1977) in a dynamic
task
• Automaticity is acquired with practice in consistent
mapping conditions, and automatic performance is
unaffected by workload
Experiment 1: Method
o CM vs. VM
o Cognitive load variables
• Memory set size
• Frame size
o Only one possible response: pressing the spacebar when a target is detected
Experiment 1: Accuracy
[Figure: Detection accuracy (0.6–1.0) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 1: Detect Time
[Figure: Mean detection time in msec (600–1600) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 1: Summary
• Radar’s manipulations of cognitive load interact with stimulus mapping in ways that parallel Schneider & Shiffrin’s results
• Automaticity develops with extended practice and consistently mapped stimuli, even when targets move and time is limited
• The Radar task can be used to study automaticity in dynamic stimulus environments
Experiment 2: Response Consistency
• There is some evidence that response mapping is not
critical for automaticity to develop (Fisk & Schneider,
1984; Kramer, Strayer, & Buckley, 1991)
• In complex tasks, the mapping of targets to responses can be inconsistent
o This results in large processing costs, even when stimuli are consistently mapped to targets
Experiment 2: Method
o Only consistently mapped stimuli
o Cognitive load variables
• Memory set size
• Frame size
o Response consistency varied at four levels
Response Mapping Conditions
[Figure: The four response-mapping conditions — mapped to stimuli, partial mapping to interface, fully mapped to interface, and random mapping; “T” marks indicated target positions in the original diagram.]
Experiment 2: Accuracy
[Figure: Detection accuracy (0.6–1.0) by response-mapping condition: Stimulus, Full, Partial, Random.]
Experiment 2: Detect Time
[Figure: Mean detection time in msec (600–1600) by response-mapping condition: Stimulus, Full, Partial, Random.]
Experiment 2: Summary
• A consistent response reduces processing requirements
• Total task consistency (consistency of both stimuli and responses) matters
o There are processing costs if responses are not consistently mapped, even when stimuli are
• Implications
o Interface design: the interface influences the processing of responses
• Response selection using track-up vs. north-up displays
• Make response selection intuitive
• Interface design, decision support tools, training
o We can now systematically manipulate Radar to elucidate the effects of automaticity on high-level dynamic decision-making
Experiment 3: Automatic detection & high-level decision making
• How would automatic detection of a component help
decision-making?
• The decision-making component required operators to analyze a sensor array of detected aircraft
• Sensor and weapon information changed dynamically
Experiment 3: Method
• Sensor reading task
• Determine whether the target is hostile
o Scan the sensors
o Reading > 13: hostile
o Reading < 13: non-hostile → press Ignore (5-key)
• Select a response (weapon systems)
o Guns vs. missiles
o Reading > 10: missiles (6-key)
o Reading < 10: guns (4-key)
• Quiet airspace report
o No targets detected
o Click “submit report” with the mouse
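The response rules above can be sketched as follows. This is a hedged reading of the slide: I assume the “> 13” comparison applies to the scanned sensor reading and the “> 10” comparison to a second reported value, and the slide does not say how ties at exactly 13 or 10 are handled, so they fall to the non-hostile and guns branches here.

```python
def classify(sensor_reading: float) -> str:
    """Hostility judgment from the scanned sensor reading."""
    return "hostile" if sensor_reading > 13 else "non-hostile"

def respond(sensor_reading: float, weapon_reading: float) -> str:
    """Map the judgment to the key press described on the slide."""
    if classify(sensor_reading) == "non-hostile":
        return "5-key (Ignore)"
    # Hostile target: select the weapon system by the second reading.
    return "6-key (Missiles)" if weapon_reading > 10 else "4-key (Guns)"
```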
Experiment 3: Detect Accuracy
[Figure: Detection accuracy (0.6–1.0) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 3: Decision-making Accuracy
[Figure: Decision-making accuracy (0.6–1.0) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 3: Detect Time
[Figure: Mean detection time in msec (600–1600) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 3: Decision-making Time
[Figure: Mean decision-making time in msec (600–1600) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 3: Summary
• Consistent mapping of targets improved the accuracy of the decision-making component of the task
• Detect time, detect accuracy, and whole-task performance are sensitive to workload manipulations
• Implications
o Consistent mapping improved whole-task performance by freeing up time for the controlled sensor-reading task to run to completion
o Thus, the processing speed-up associated with automatic detection can have a large impact on whole-task performance
But…?
• Is decision-making accuracy improved simply because there is more time to process?
• Effect of detection on high-level decision-making in the presence of a dual task
Experiment 3b: Method
• Secondary tone task: enter a count of the number of non-standard tones
o Calibrated to a standard tone at the beginning of the session for each participant
o Non-standard tones were higher/lower in pitch than the standard
Experiment 3b: results
• Detect time: no effect of the secondary task
• Detect accuracy: no effect of the secondary task
• Decision-making time: no effect of the secondary task
• Decision-making accuracy: no effect of the secondary task
• In fact, Radar task performance was the same with and without the tone task!
[Figure: Performance on the tone task — accuracy (0.65–0.85) for CM vs. VM at MSS = 1 and MSS = 4.]
Experiment 3b: Implications
• No effect of the dual task on Radar performance
• Operators allocate resources away from the tone task to maintain Radar performance
• Implications
o The finding supports the hypothesis that consistent mapping improves decision-making performance by freeing up resources for other tasks
o Thus, the processing speed-up and low resource requirements associated with consistent mapping can have a large impact on performance in complex tasks
Experiment 4: Categorization
• Since consistent mapping is a search for numbers among letters, it is possible that load-free processing is due to categorization (Cheng, 1985)
• The purpose of this experiment is to establish the presence of load-free processing without categorization
Experiment 4: Method
• Incorporate memory ensembles where no categorization can take place, either a priori or with learning
• CM vs. VM with tone
o CM = {C, G, H, M, Q, X, Z, R, S}
o VM = {B, D, F, J, K, N, W, P, L}
• Memory ensembles were equated
o Angular {H, M, X, Z, F, K, N, W} vs. Round {C, B, D, G, Q, P, R, J}
o Beginning {B, C, D, F, G, H, J, K} vs. End {M, N, P, Q, R, W, X, Z}
• Cognitive load variables
o Memory set size (1 or 4)
o Frame size (1 or 4)
• Detection of a target was indicated by pressing the spacebar
o Detect performance
o Detect response time
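The stimulus sets above can be checked mechanically. This sketch (my verification, using the letter sets as listed on the slide) confirms that the CM and VM pools are disjoint and that the two ensemble splits (angular/round, beginning/end) partition the same 16 letters, with only S and L, one from each pool, sitting outside the ensembles.

```python
CM = {"C", "G", "H", "M", "Q", "X", "Z", "R", "S"}
VM = {"B", "D", "F", "J", "K", "N", "W", "P", "L"}

ANGULAR   = {"H", "M", "X", "Z", "F", "K", "N", "W"}
ROUND     = {"C", "B", "D", "G", "Q", "P", "R", "J"}
BEGINNING = {"B", "C", "D", "F", "G", "H", "J", "K"}
END       = {"M", "N", "P", "Q", "R", "W", "X", "Z"}

# No letter serves as both a CM and a VM stimulus.
assert CM.isdisjoint(VM) and len(CM) == len(VM) == 9

# Each split is a clean 8/8 partition, and both splits cover the same letters.
assert ANGULAR.isdisjoint(ROUND) and BEGINNING.isdisjoint(END)
assert len(ANGULAR) == len(ROUND) == len(BEGINNING) == len(END) == 8
assert (ANGULAR | ROUND) == (BEGINNING | END)

# The ensembles omit exactly one letter from each pool (S from CM, L from VM).
assert (CM | VM) - (ANGULAR | ROUND) == {"S", "L"}
```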
Experiment 4: Detect accuracy
[Figure: Detection accuracy (0.6–1.0) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 4: Decision-making accuracy
[Figure: Decision-making accuracy (0.6–1.0) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 4: Detect time
[Figure: Mean detection time in msec (600–1600) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 4: Decision-making time
[Figure: Mean decision-making time in msec (600–1600) for CM vs. VM across frame size (FS = 1, 4) within each memory set size (MSS = 1, 4).]
Experiment 4: Implications
• Varied-mapping performance is more sensitive to load than consistently mapped performance
• Individuals performed better in the high-level decision-making component of Radar when stimuli were consistently mapped
• Implications
o Categorization is NOT a necessary requirement for automaticity development
o Consistent stimulus mapping is a necessary condition for the development of automatic detection
Summary of accomplishments
• Developed Radar, a dynamic simulation in which it is possible to study (i.e., to measure) automaticity
• In Radar it is possible to elucidate the effects of automaticity on high-level dynamic decision-making
• Established the usefulness and applications of the dual-process theory of automaticity
• Deepened our understanding of the implications of automaticity development for practical real-world tasks
• Brought together two main theories of automaticity: instance-based theory and dual-process theory
Future research
• Consistency of mapping and responding is relative
to the categories (i.e., similarity) that a user can
form
• Thus, consistent mapping can lead to automatic
responses for high-level decision-making after
extended practice
Looking towards applications
• Test these hypotheses in airport luggage screening
• Decide whether to hand-search the luggage
• There is no consistency, but rather just similarity (relative to a ‘knife’ category)