Engineering Psychology Final Exam Study Sheet

Intro + Humans as Information Processing Channel
Human Information Processing Intro
Human Error - Models and Remedies
Information Theoretic Models (Info Transmission)
Humans as Info Processing Channels
Speed-Accuracy Tradeoff
Absolute and Relative Judgment
Signal Detection Theory Modelling
Even More Applied Models
Attention
Selective Attention
Divided + Focused Attention
Working Memory
Decision Making
Part 1: Introduction + Rational Decision Making
Part 2: Irrational/Descriptive Decision Making
Multi-Tasking
Mental Workload & Stress
Automation
Intro + Humans as Information Processing Channel
● Engineering Psychology = Experimental Psychology + Human Factors
○ Exp Psych: goal is to understand laws of human behaviour
○ HF: goal is to apply laws of human behaviour for systems design
○ Eng Psych: Exp Psych research motivated by HF applications. Basically, doing psych-type experiments to better design shit.
Human Information Processing Intro
● SCR Model of Human Information Processing (HIP)
○ Stimulus → Cognition → Response
○ Stimulus: sensory perception, e.g. motion, sound, smell, etc
○ Cognition: brain receives impulse from nerves
○ Response: you do something about it
Wickens’ Model of HIP (the big complex thing we’re tryna understand over time):
Some notes:
● Sensory Processing: ​all the info available to us at a given time. we only PERCEIVE it if
we ATTEND to it using our attention resources. For example, I could just not be focusing
on sounds in general, only sights and smells.
● Cognition (“C”):​ Not everything goes through cognition; we have automatic responses
that cognition does not act on
● Long-Term Memory:​ Long term memory affects what we choose to attend to AND what
we perceive from that; it's what we've learned!
● System Environment (Feedback)​: this is the world, full of information
● Attention Resources: ​This affects everything. It affects:
○ what we're attending to in the first place
○ what we perceive of what we attend to (e.g. too many things to look at while
driving)
○ the interchange between working and long term memory
○ what goes into working memory in the first place
○ which responses we select and execute
Lessons Learned From Pilot Video:
● Humans are biased decision makers
● Teamwork / collective decision making & communications are essential system elements
● Mental workload is critical
● Individual differences are critical
● Importance of instrument design
● Importance of field studies
● Importance of behavioural models
● Importance of management and social environment
● Humans provide creativity
Human Error - Models and Remedies
Human Error: inappropriate human behaviour that lowers levels of system effectiveness or
safety.
Dealing With Errors:
1. Try to understand errors
○ Model different kinds of errors
○ Model the situations in which errors occur
○ Model the factors that contribute to errors
2. Design “error-tolerant” systems that either
○ Reduce the probability of errors or
○ Contain errors when they occur
Errors can be attributed to different stages of HIP.
Some notes:
● Knowledge:​ having the wrong information/trigger; misinterpreting the situation. For
example, looking at a scan that contains a tumor and missing it.
● Rule:​ Correctly assessing the situation and selecting the incorrect response. For
example, correctly identifying a tumor and then prescribing the wrong treatment.
○ Memory:​ can contribute to incorrect response selection.
● Slips: ​Correctly selecting the action, incorrectly executing it. For example, seeing a
tumor and correctly deciding to prescribe radiotherapy, then accidentally selecting the
wrong option in the computer menu and clicking on chemotherapy instead.
This is still arguably too simple a model, but DJ Pauly M put it in the slides so LEGGO
Some notes, in the context of a plane crashing:
● Management or Design Errors:​ pilot could've been given shit instructions, or the
cockpit could've been designed for a 13 foot ape with 3 eyes
● Natural Factors/Hazards:​ e.g. turbulence/downdraft/storm
● System Characteristics:​ honestly just seems like a catch-all term for “other shit that
may have happened”
Some notes, in the context of a plane crashing:
● Technical/Engineering System:​ the equipment in the cockpit
● Workers:​ the people interacting with the equipment (e.g. pilots, technicians)
● Organizational/Management Infrastructure: ​e.g. Delta airlines, where the managers
are bullies who don’t like hearing about problems (HOLY LIBEL BATMAN)
● Environment/Context:​ Chicago Airport has narrow takeoff lanes
Designing For Mistakes and Slips (according to Don Norman):
1. Understand causes of error and design to minimize them
2. Allow reversible actions
3. Make errors easy to detect
4. Expect errors
General Principles for Error Remediation
● Task Design​: reduce mental workload
● Equipment Design:
○ Minimise perceptual confusions
○ Make system transparent
○ Use constraints to “lock out” possibility of errors
○ Avoid multimode systems
● Training:
○ Practice to reduce risks
○ Train for crises/high-stress situations
● Assists and Rules:​ understand human limits, assist in surpassing them
● Error-Tolerant Systems:
○ Avoid irreversible actions
○ Automate, but carefully; automation can breed complacency
Information Theoretic Models (Info Transmission)
Humans as Info Processing Channels
The goal here is to quantify the amount of information moving through the black box of HIP.
Basically, we’re treating humans the same way ECEs treat communication systems:
Important Basic Principle:​ the Human Information Processor can be modelled as a Limited
Capacity Noisy Communication System:
● Noisy:​ there are other things going on that need parsing
● Communication:​ intakes and outputs information
● Limited:​ there is a limit on the ​absolute quantity and transmission rate of
information​ we can handle.
○ When working at capacity, one can increase speed only at the expense of
accuracy and vice-versa → Speed-Accuracy Tradeoff
Some Terms:
● Basic Classes of Information Processing Tasks:
○ Information Reduction:​ sorting, categorising, storage to long-term memory
○ Information Transmission:​ typing, transcribing, 1:1 mapping
○ Information Elaboration:​ when output > input information; CREATIVITY
● Terminology, using bucket of balls example:
○ Information Source:​ the bucket of balls
○ Event/Outcome:​ a particular ball is chosen
○ Uncertainty:​ the fact that we don’t know which ball will be chosen beforehand
○ Information:​ answer to question “what colour ball is drawn?”
○ Information Content of Source (Hs):​ the average amount of information that can
be acquired over time from observing outcomes/events, i.e. what is the average
amount of uncertainty resolved? Factors that influence it:
i. Number of possible events (more = more information)
ii. Probability of events (if they’re equal, there’s more info)
iii. Sequential constraints of events/redundancy (can we infer the outcome from previous/external events?)
Important thing to remember: Hs = Σi pi·log2(1/pi). It’s a weighted average!
Where N = number of possible events and pi = probability of event i.
For N equally likely events, H = log2(N).
Things get a bit hairier when we throw in redundancy:
It’s not actually that scary though! The takeaway is that if we get info from redundancy, there’s
less information in the source.
Redundancy = 1 - (Have/Hmax). For example, if every letter were equally likely when reading English (no context used), there would be no redundancy; Hmax = log2(26) = 4.7 bits. However, using context, the actual Have for letters of the alphabet in English text ≈ 1.5 bits.
Therefore, the redundancy of letters in English = 1 - 1.5/4.7 ≈ 68%.
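A quick Python sketch of the formulas above (Hs as a weighted average, and redundancy = 1 - Have/Hmax), reusing the English-letter numbers from this section:

```python
import math

def entropy(probs):
    """Average information content: Hs = sum(pi * log2(1/pi)), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# 26 equally likely letters -> Hmax = log2(26) ~ 4.7 bits
h_max = entropy([1 / 26] * 26)

# With English's sequential constraints, Have ~ 1.5 bits (figure from above)
h_ave = 1.5

redundancy = 1 - h_ave / h_max
print(f"Hmax = {h_max:.1f} bits, redundancy = {redundancy:.0%}")  # ~68%
```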
Speed-Accuracy Tradeoff
Donders was an 1869 manz who studied Reaction Time (RT).
Basically, having to choose a response makes you slower, because now you can be inaccurate.
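Illustrative (made-up) numbers: if simple RT to a single light is ~200 ms and choice RT with two lights/two responses is ~350 ms, Donders’ subtractive logic attributes the extra ~150 ms to stimulus discrimination and response selection.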
Factors/Considerations Affecting Simple Reaction Time
● Stimulus Modality: what form does the stimulus take (we respond faster to auditory than visual)
● Stimulus Intensity: we respond faster to stronger stimuli (gunshot vs whisper)
● Temporal Uncertainty: RT longer when the interval is variable, also longer if intervals are longer
● Expectancy: if signal is expected, RT is lower
Fitts's law is a predictive model of human movement primarily used in human–computer
interaction and ergonomics. This scientific law predicts that the time required to rapidly move to
a target area is a function of the ratio between the distance to the target and the width of the
target.
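In the common Shannon formulation: MT = a + b·log2(D/W + 1), where MT is movement time, D is the distance to the target centre, W is the target width, and a, b are empirically fitted constants. The log term is the Index of Difficulty (ID) in bits, which makes Fitts’s law another instance of the information-theoretic modelling above.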
Absolute and Relative Judgment
Question: how can we estimate the “average precision” with which humans are able to make
absolute judgements?
Characteristics of Absolute Judgment:
● Different limits for different senses/modalities
● Bow Effect: better performance at extremes of stimulus range
● Limit unaffected by spacing of stimuli on dimensions
● We’re much better at relative judgment tasks
● Performance improved by:
○ Training/experience
○ Anchors (application of Bow Effect/turns it into more of a relative judgment task)
○ Redundancy
Orthogonal Dimensions of Information Source: info says different things, like the Fox News
display. High ​Efficiency​ in terms of total possible info transmitted, diminishing returns with each
dimension.
Correlated Dimensions of Information Source: redundant information from multiple dimensions
all saying the same thing. High ​Reliability​.
Basically, harder to sort along a dimension while ignoring integral dimensions, easier to sort if
correlated. Ergo, if source dimensions are correlated, use integral display dimensions; if source
dimensions are orthogonal, use separable display dimensions.
Signal Detection Theory Modelling
Shortcomings of Information Theory Modelling:
● Not sensitive to Type of Errors; measures consistency, not ‘correctness’
● Not sensitive to Magnitude of Errors; Ht can remain constant for small correlations
● Insensitive to HO’s subjective probability and biases; HO may not have accurate
understanding of source
● Ignores stimulus discriminability: does not account for hyper similar stimuli vs not, eg A
vs B or AAAAFAFAAFAFAFA and AAAAAFAAAAFAAFAFAAA.
● Ignores repetition/habituating effects: you should expect repeated S-R pairs to have
shorter RTs, not accounted for
● Ignores practice effects: possible to flatten vs Hs curves with practice
● Ignores S-R Compatibility
SDT SOLVES THESE MOTHA FUCKIN PROBLEMS. It takes into account:
1. The Strength of the signals (d’)
2. The HO’s subjective decision criteria (B)
Why does any of this matter? Because if we can compute a representative d’ and B from
observing a system, and assume rationality, we can gain insight into how the system is currently
operating.
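A minimal Python sketch (assuming the standard equal-variance Gaussian SDT model; the hit/false-alarm rates are made up) of recovering d’ and B from observed performance:

```python
import math
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """d' = Z(hit) - Z(fa); B = likelihood ratio at the criterion."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa                          # sensitivity
    beta = math.exp((z_fa ** 2 - z_h ** 2) / 2)   # decision criterion
    return d_prime, beta

d, b = sdt_measures(hit_rate=0.84, fa_rate=0.16)
print(d, b)  # d' ~ 2.0, B = 1.0 -> sensitive but unbiased observer
```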
Even More Applied Models
Vigilance: ​operator’s ability to detect infrequently, intermittently, and unpredictably occurring
signals over extended period of time (“the watch”)
● Vigilance level = level of detection performance
● Vigilance decrement = decline in vigilance level over time; usually steep, usually over first 30 minutes
● Inspection Situation:
○ Events occur at regular intervals
○ Frequency = # of targets / # of events
○ Time not taken into account
● Free-response Situation:
○ Target event may occur at any time
○ Non-event not defined
○ Event frequency = # of targets / unit time
● Simultaneous Task: all info necessary to discriminate is visible at each trial
● Successive Task: target stimuli appear sequentially, inspector must remember stimuli
● Sensor Task: signals are changes in visual/auditory/tactile intensity, eg listening for beep
● Cognitive Task: involve symbolic/alpha-numeric coding (eg radiology, proof-reading, etc)
Ways To Improve Automated Alarms:
● Use multiple alarms
● Raise automated Beta (B) slightly, to reduce “boy who cried wolf” effect
● Keep human in the loop, but monitoring raw data must be easy
● Improve HO understanding of False Alarms
Factors Affecting d’ and B:
Expectancy Theory: vicious cycle of fatigue → fewer signals detected → decrease in HO’s
subjective P(S) ‘cause they miss signals → increased Beta → fewer signals detected
Arousal Theory: prolonged period without signals → lower arousal, reduces variance of both
noise and signal distributions without changing d’ and B. Results in ​effective​ increase in B.
SDT Means of Influencing Performance:
● Introduce feedback of results: Ps(S) → affects Beta
● Introduce artificial signals: affects subjective Ps(S) and Beta, memory aid improves d’,
arousal improvement improves d’ and Beta
● Non-signal “arousal” stimuli: improves d’
● Standard Comparison stimuli: this is a memory aid, improves d’
● Maintain limited target set: lowers memory load, improves d’
● Increase signal-noise ratio (e.g. display persistence), improves d’
● Increase target salience (stands out, basically stronger/more distinct signal): improves d’
● Use redundant sensory modalities: improves d’
● Vary event rate: affects d’
● Training: affects beta and d’, positively one would hope
IN SUMMARY:
● If problem concerns sensitivity (d’), redesign the workstation
● If problem concerns trigger-happiness (B), redesign the operating procedures,
strategies, training programme, etc.
Attention
These are all models that are useful in different scenarios:
Selective Attention
● Our eyes move slightly even when looking at one thing; otherwise, it fades from view.
● Eye movements focus on things you need using the fovea
● Central field of view is small (~2.5 degrees), so the eye constantly moves
● Five types of eye movements: saccades, vergence, pursuit, VOR (vestibular ocular reflex), and OKR (opto-kinetic response)
● Important Point: we use our eyes to sample the world
● General Orientation/scene scanning
● Supervision/control
○ Location of information is known; information content is unknown
○ SEEV Model (factors determining which Area of Interest (AOI) gets attended; see the sketch after this list):
■ Bottom-Up Influences:
● Salience (conspicuity): attracts attention to AOI (BRIGHT
THINGS)
● Effort: defines cost of shifting attention (do you need to move your
eyes, or turn your body, do you need to drop your weapon and
twerk)
■ Top-Down Influences:
● Expectancy: AOI determined by what we expect to observe
● Value: usefulness of info acquired in AOI
● Noticing
○ Failure of Attentional Capture: Distinctions
■ Change Blindness: when changes in environment are not noticed
■ Inattention Blindness: failure to notice something, even when looking directly at it
■ Attentional Blink: failure to detect a second salient target if presented very soon after the first one
■ Repetition Blindness: failure to detect a repeated target stimulus, but not a different target
● Searching for Target(s)
○ Information is known, location of info is unknown
○ Useful Field of View (UFOV): effective area within which information can be
extracted
■ Distance between successive fixations in visual search task
■ Affected by density of information, discriminability of target from
background, and ageing; greater skill (through training) often results in
larger UFOV
○ Fixation Dwell Times:
■ Typically 250-500ms but increases for:
● Less legible displays
● Less familiar words/objects
● Higher interpretation difficulty text/instruments
● Critical displays (more info to transmit)
● Longer for novices than experts
■ Used for cognitive assessment of driving!
○ Serial Search: each item examined separately
○ Parallel Search: occurs when targets “pop out”, e.g. when contrast between
target and distractor is large. Very efficient - independent of number of targets
○ Bottom-Up Factors Affecting Serial Self-Terminating Search (SSTS) Performance (Search Time):
■ # of items searched: more is slower
■ # of features defining target: more features = greater search time
■ Discriminability: how different is the signal from the noise
■ Homogeneity vs heterogeneity: faster if noise is homogeneous
■ Target features present rather than absent: if present, parallel search; if absent, serial search
■ Spacing (close vs dispersed): little overall effect due to trade-off between time on scanning vs interference due to clutter
■ Searching for single target vs multiple possibilities: lower search time for single target
■ Target distinguished across single dimension: simple rule = parallel
■ Training: makes it faster
● Reading (?)
● Confirming of Information (?)
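A toy Python sketch of the SEEV idea from earlier in this section: attention to an AOI rises with Salience, Expectancy, and Value and falls with Effort. The linear weights and the 0-1 ratings here are assumptions, purely for illustration:

```python
def seev_score(salience, effort, expectancy, value, w=(1.0, 1.0, 1.0, 1.0)):
    """Higher score -> AOI more likely to attract the next glance."""
    ws, we, wx, wv = w
    return ws * salience - we * effort + wx * expectancy + wv * value

# e.g. while driving: the road ahead vs the phone in the cupholder
road = seev_score(salience=0.4, effort=0.1, expectancy=0.9, value=1.0)
phone = seev_score(salience=0.8, effort=0.6, expectancy=0.3, value=0.2)
print(road, phone)  # 2.2 vs 0.7 -> scanning should favour the road
```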
Divided + Focused Attention
Holistic Information Processing: when the whole is perceived directly, rather than as a
consequence of separate analysis of constituent elements. Objects perceived as a whole
depending on
- Surrounding contours
- Correlated Attributes
- Familiarity
● Global Processing:
○ organizes world into objects, groups, patterns, etc
○ Automatic/pre-attentive
○ Exhibits Global Precedence
● Local Processing:
○ Separate processing of individual objects in display
○ Depends more on focused selective attention
The goal is to support/suppress parallel processing when appropriate. For example, a cluttered layout requires local processing, while an organized layout facilitates global processing.
Emergent Features: related to aggregation of stimuli, not evident in each item on their own.
Principle of Spatial Proximity:​ placing visual info sources close together (AR, superimposing
info on real world) (within UFOV) to facilitate parallel processing (think SEEV) (beware of
inattention blindness and response conflicts due to spatial proximity)
• reduced scanning transition times (recall SEEV model)
• reduced visual accommodation time and effort
• reduced light-dark adaptation
HUD (Head-Up Display):
Conformal Symbology:
● Correspondence of objects across views
● Creates new “shared object”
Some Problems with spatial proximity
● Absence of (or insufficient) conformality
● Spatial separation in depth: – co-located stimuli may appear at different depth planes
(e.g. using stereoscopic displays)
● Co-located stimuli not immune to change blindness
● Perceptual competition
● Response conflicts
Eriksen Flanker Test:
- Shows response conflict due to spatial proximity (bunch of arrows all together, respond with direction of centre arrow)
- Different example: H = respond right, F = respond left; stimuli:
- KHK - perceptual competition (display clutter → relevant vs irrelevant, RT increases)
- FHF - response conflict (spatial proximity → processed in parallel, RT/P(err) increases)
- HHH - redundancy gain (spatial proximity → processed in parallel, RT/P(err) decreases)
Stroop Effect
- Failure of spatial proximity?
- Different information channels comprise different dimensions of the same physical stimulus
- E.g. name the ink colour of the word “green” printed in red ink
- RT and error rate increase when the dimensions conflict (as opposed to text in a neutral colour)
Principles of Object-Based Proximity: ​object features are detected in early stage, then
combined into objects. Feature processing is parallel within an object but serial between
different integral objects. This has to do with emergent features. Example:
strengths/weaknesses polygon.
Display vs Source Dimensions:
Proximity Compatibility Principle: high task proximity (things that need to be seen/done together) should be matched with high display proximity (in the same place in the display, in terms of space, colour, connections, etc).
Auditory Modality:
● Omnidirectional - no direction/space compatibility, unlike visual space
● Always On
● Transient - held in short-term sensory store, because not repeated
● Easily noticed - not easily ignored
● We’re good at attending only to one ear (dichotic listening task)
● Auditory Divided Attention:
○ Achieved by rapidly switching between info sources
○ Easier to do if switch trigger is relevant (e.g. your name)
○ Therefore, personalized triggers are good and prevent distraction(“switch if you
hear THIS song”)
Working Memory
Why do we care about memory? To understand:
● Encoding - learning and training
● Storage - working and long term memory
● Retrieval - getting the right info back when needed
Basic Takeaways:
1. Sensory Storage - holds info for very short period of time (<1 second); info either enters
WM or is lost
2. Working Memory - info encoded visually or phonologically, holds info for seconds to minutes. Volatile, limited capacity (7 ± 2 items). Can be trained.
3. Long-Term Memory - info held forever, problem is of retrieval, which can be improved via mnemonics, training, etc.
Baddeley’s Model of Working Memory: Central Executive assigns tasks/resources to
visuospatial sketchpad and phonological store (visual + audio). Interference occurs when two
tasks draw upon same WM subsystem concurrently. There is evidence of kinesthetic
(movement-based) working memory.
Implications/Takeaways:
● match visual WM with pictures and verbal WM with speech/audio displays.
● Can use redundancy (eg speech + subtitles/pictures) for longer or more complex
messages.
Limitations/Boosts:
● Primacy and Recency Effects: first and last items are remembered better.
● 7 ± 2 things are remembered, but can be augmented by CHUNKING (I like em biiiiiiig, I like em chunkeeeeeeeeh). This is what experts do! They recognize, and thereby remember, bigger and bigger chunks (e.g. 4165551234 is ten digits to hold, but 416-555-1234 is three chunks).
● Generally remembered for 10s of seconds
● Interference:
○ Reduces recency effect
○ Proactive: happens before material is presented
○ Retroactive: happens after material is presented
Decision Making
Part 1: Introduction + Rational Decision Making
Basic Words:
● Normative Models - prescribe how people should make optimal decisions. Very rational, consider all available information, etc. Basically what we learn in DA, OR, etc.
● Descriptive Models - describe how people actually make decisions. Basically behavioural decision theory/economics, aka in what way are we irrational.
● Naturalistic Decision Making - how do experts make real decisions “out in nature” (outside of laboratory)?
Factors Affecting DM Behaviour:
● Information - availability, relevance, value/diagnosticity
● Uncertainty - likelihood of occurrence of particular outcomes can be uncertain
● Risk - impact of decision can be uncertain
● Expertise - reduces uncertainty, risk, and increases situation awareness
● Time Pressure
● Information Processing Limitations - WM, LTM, etc
● Rational/Normative vs Naturalistic - rationality vs speed
● Automation/DSS - effect of trust in automation, transparency of DSS
Rational Decision Making:
● Under certainty, it’s basically arithmetic/weighted decision matrix. One right answer.
● Under uncertainty, assign supposed probabilities to outcomes and multiply those probabilities by the values of the outcomes; e.g. rational gambling (worked example after this list). TIES IN WITH SDT.
● Diagnosticity - how good is the cue at discriminating between hypotheses (how much
does it matter)
● Reliability - how likely is cue to be true (fucking LAWL)
● Salience - conspicuity of cue (how visible it is)
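Illustrative expected-value arithmetic: a gamble with a 0.3 chance of winning $100 and a 0.7 chance of losing $20 has EV = 0.3×100 − 0.7×20 = $16, so the rational decision maker takes it.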
Bayes’ Rule:
Before information is given, prior odds of 2 hypotheses = P(H1)/P(H2)
After information D is given, posterior odds = P(H1|D)/P(H2|D) = [P(D|H1)/P(D|H2)] × [P(H1)/P(H2)], i.e. the likelihood ratio times the prior odds.
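A minimal Python sketch of the odds-form update above, with made-up numbers; note how even a fairly diagnostic cue doesn’t overcome a lopsided base rate:

```python
def posterior_odds(prior_h1, prior_h2, p_d_given_h1, p_d_given_h2):
    """Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds."""
    return (p_d_given_h1 / p_d_given_h2) * (prior_h1 / prior_h2)

# H1 = "hostile", H2 = "friendly"; hostiles are rare but the cue D favours them
odds = posterior_odds(prior_h1=0.1, prior_h2=0.9,
                      p_d_given_h1=0.8, p_d_given_h2=0.2)
print(odds)  # ~0.44 -> still odds-on "friendly" despite the cue
```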
Part 2: Irrational/Descriptive Decision Making
Basic Concept:
● decisions can’t be fully rational (bounded rationality)
● Humans employ heuristics to make decisions
● They don’t make optimal decisions, they satisfice
Human Estimation (Intuitive Statistician):
● Means: Good at estimating means of series/observed events
● Variability: better if variability is smaller. Tend to underestimate large variances.
○ On scatter plots, tend to underestimate high correlations and overestimate low
correlations. This is due to focusing on orthogonal distances from regression line,
rather than vertical differences.
● Proportion of Observed Events:
○ Good generally at estimating proportions in the middle (nearer 50/50)
○ Tend to underestimate high observed probabilities
○ Tend to overestimate low observed probabilities
○ Basically sluggish Beta
● Extrapolation of Observed Events:
○ We tend to exhibit a conservative bias in predicting future values, and linearize
trends
● Randomness: we’re bad at understanding randomness, so DSSs should (when possible)
provide explicit, stated estimates of outcome probabilities
Evidence Accumulation Barriers:
1. Information may be missing
2. Information overload - we can’t absorb too much information, but we seek more
information than we can use anyways.
3. Salience bias - we pay attention to more salient cues over more important ones
4. Confirmation bias - we seek info to confirm our current beliefs
5. As-if Heuristic - we weight cues as if they have equal value. Computers can help with
this by having us consciously assign cue weights to start, then having computers do the
multiplying for us.
How Experience Affects Decision Making:
1. Representativeness Heuristic - typicality, defaulting to salience. Does ok, but doesn’t
take into account base rate (truck drivers vs librarians).
a. Also conjunction fallacy - having A and B occur cannot be more likely than having A occur
b. Throws off rational Bayesian estimates of P(D|H)
2. Availability Heuristic - we use ease of retrieval as a proxy for frequency of occurrence.
We choose to answer questions that are easy to answer, not the right questions.
a. Throws off rational Bayesian estimates of P(H); rationality requires ALL
possibilities be considered
3. Anchoring Heuristic - we tend to anchor on a hypothesis supported by the first arriving
cue
a. This is an irrational manifestation of primacy
b. Argument for simultaneous presentation of alternative data
4. Confirmation Bias - once a tentative belief is formed, we give greater weight to evidence
that supports that hypothesis. It’s more comfortable to deal with “you are right”
information!
a. Overconfidence Bias - we’re just more confident than we ought to be because not
being confident is uncomfortable
5. Decision Fatigue Phenomenon - later decisions rely on increasingly simplifying heuristics
Heuristics In Choice Under Certainty - bounded rationality instead of rational DM. Basically satisficing via the Elimination By Aspects heuristic: eliminate from consideration any option that fails on the most important aspect(s), then pick from what remains. Usually results in a satisfactory choice.
Heuristics In Choice Under Uncertainty:
● Departs from optimizing expected return (may feed loss aversion by minimizing
maximum loss)
● May result in direct retrieval of most familiar choice (do what I did last time)
● Prospect Theory:
○ Loss is scarier than gain is good
○ Short term gains valued over long term gains
○ Curves tend toward horizontal as loss or gain gets larger
○ Low probabilities are overestimated; EXTREMELY low probabilities may be discounted entirely
○ Framing: people avoid uncertainty wrt gains, seek risk to avoid losses
○ Sunk Cost Fallacy is reeeeaaaaaaal
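A small Python sketch of a prospect-theory-style value function, using the Tversky & Kahneman (1992) functional form and their median parameter estimates (alpha = beta = 0.88, loss aversion lambda = 2.25); the printed numbers just illustrate “loss is scarier than gain is good”:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * (-x) ** beta       # convex and steeper for losses

print(value(100))   # ~57.5
print(value(-100))  # ~-129.4 -> a loss hurts ~2.25x more than an equal gain
```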
Multi-Tasking
Goal:
● Understand factors that limit multitasking
● Design systems to account for said limits
It’s basically the same as WM, wherein we have multiple resources and have trouble when multiple tasks demand the same resource.
Factor 2: Multiplicity of Resources
Tasks can be time-shared efficiently if they demand different levels along the 4 dimensions of multiple resources in the brain.
It’s more reasonable to ask someone to perceive one thing and execute a different response
than it is to ask them to execute two different responses. E.g. “count the butterflies and sing
Imagine” easier than “sing Imagine and write a letter”
Doing a verbal thing and a spatial thing is easier than doing two verbal things or two spatial
things.
Perceiving different things from different modalities is easier.
We can’t read two things at once, but we can read and walk!
These 4 things can be summed to see how much interference there is.
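A toy Python sketch of that “sum the conflicts” idea. The four dimension names follow the multiple resource model; the binary scoring and the example task codings are simplifying assumptions:

```python
DIMENSIONS = ("stage", "code", "modality", "visual_channel")

def interference(task_a, task_b):
    """Count shared resource dimensions; higher -> worse time-sharing."""
    return sum(1 for d in DIMENSIONS
               if task_a.get(d) is not None and task_a.get(d) == task_b.get(d))

driving = {"stage": "response", "code": "spatial",
           "modality": "visual", "visual_channel": "ambient"}
phone = {"stage": "response", "code": "verbal",
         "modality": "auditory", "visual_channel": None}
reading = {"stage": "perception", "code": "verbal",
           "modality": "visual", "visual_channel": "focal"}

print(interference(driving, phone))    # 1 -> time-shareable (talk and drive)
print(interference(reading, reading))  # 4 -> can't read two things at once
```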
Factor 3: Cost of Switching
If tasks can’t be performed simultaneously, there is switching between them. We use the Task + Interruption Management Model, which has an Ongoing Task (OT) and an Interrupting Task (IT), and we switch between them! Switching can increase probability of change blindness, as (by Nyquist’s Theorem) we aren’t sampling all that often.
Factors that determine switching between OT and IT:
● Switch Cost - time to switch from OT to IT at time S1
○ Usually longer when switching between different kinds of tasks (singing to
reading to singing)
○ Cost reduced when event signaling IT arrival indicates what needs to be done
○ Cost amplified if interval between consecutive switches is shortened - some time
needed to readjust to new tasks
OT Properties That Affect Switching S1:
● Engagement - if HO is super engaged, can create cognitive tunneling
● Modality - switching from auditory WM tasks slower than from visual task
● Dynamics - less likely to switch when OT is unstable (eg heavy traffic) vs stable (steady
highway driving)
● Priority of OT - if it’s more important, it won’t be switched from as much
● Sub-Goal Completion - switching slower/less likely if OT in middle of sub-goal (eg. finish
page first); good interface would present ITs at ends of sub goals
● Delay in S1 - HOs tend to delay S1 when possible, in general
IT Properties That Affect Switching S1:
● Cognitive demands of IT - if higher, demands switching
● Sensory pre-emption - auditory IT usually has shorter switch time/lower switch cost than
visual
● Pre-attentive alerting - if impending S1 can be signaled, it will be prepared for and will
cost less
● IT Salience - high = rapid switching, low = slower, delayed, or ignored (change
blindness) switching
○ Zero salience IT - no signal/reminder, must be self-initiated via HO memory.
Generally a bad idea if IT is important
Factors That Affect Return to OT S2 (Resumption Lag RL):
● Shorter RL if S1 was delayed
● Longer RL after longer/more difficult IT
● Shorter RL if OT remains visible during IT (e.g. visible checklist)
● Longer RL if OT and IT are similar, causes confusion
Mental Workload & Stress
Estimating workload as a ratio: time required (TR) / time available (TA).
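Illustrative numbers: if the next stretch of tasks requires TR = 45 s within TA = 60 s, workload = 45/60 = 0.75; as TR/TA approaches or exceeds 1, the HO is overloaded.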
Evaluating Mental Workload:
● Behavioural measures - tough because behaviour often changes, even when
performance doesn’t change with resource demands
● Secondary Task methods - require HO to perform secondary task in parallel
● Subjective Measures - eg NASA Task Load Index (rohit’s thing!)
● Physiological Measures (Neuroergonomics) - recording heart rate, pupil dilation, galvanic
skin responses, etc
Stress:​ emotional state of heightened arousal
● Can be both external (light) and internal (frustration)
● Manifests in different ways:
○ Emotional response
○ Change in activity of peripheral nervous system (heart rate increase during
takeoff, or cortisol output after lots of work)
○ Changes in information processing/performance; WE CARE ABOUT THIS ONE
Stress can result in errors in:
● Perception - failure to perceive info correctly
● Attention - failure to focus/divide/allocate+select attention correctly
● DM - giving in to simplification/biases because you just can’t even
● Response Execution - errors of data entry, speed-accuracy tradeoff
Natural Stress Responses
1. Recruit more resources - try harder booiiiii, can result in lowered precision
2. Removal of stressor - done if possible through elimination/postponing/ignoring source of
stress
3. Redefine task goals - settle for less, satisfice
4. Do Nothing - “sit on your hands for X minutes”. Is not always the best response, but can
help deal!
Solutions To Stress:
● Design Solutions:
○ Remove environmental stressors
○ Reduce unnecessary info
○ Reduce need to maintain/transform info in WM
● Training Solutions
○ Give priority to emergency procedures
○ Train extensively → stress inoculation, ‘cause you done it already!
○ Include planning, anticipating, rehearsing in training programme
Automation
Automation - performance of previously (partially or fully) human tasks by machines, including tasks humans are incapable of doing (such as lifting super heavy loads)
Why Automate?
● Increased productivity
● Reduced operator workload
● Enhanced safety
● Augmenting human tasks
1. Information Acquisition
a. Corresponds to human sensory and selective attention processes
b. Low level automation - automatic manipulation of tilt and zoom camera
c. High level automation - intelligent alarms based on multiple sensors
2. Information Analysis
a. Corresponds to WM + inferential processes
b. Low level automation - collision warning systems, predictor displays
c. High level automation - fancy displays
3. DM and Action Selection
a. Corresponds to selecting from among multiple choices
b. Low level automation - displays with complete lists of choices
c. High level automation - displays that indicate the best decision choice
4. Action Implementation
a. Corresponds to doing things
b. Low level automation - maintaining plane flight path
c. High level automation - robotic tele-surgery
Problems With Automation:
● Complexity
○ Automation complexity → automation imperfection
○ Difficult to verify systems that have millions of lines of code; decreased HO
observability, can cause automation surprises
○ Legacy systems are often grafted onto older legacy systems
● AI
○ Can be impossible to understand reasoning of modern AI
○ When it learns from human decision makers, biases can appear in AI decisions as well
● Feedback
○ Many automation related accidents have occurred due to poor/non-salient
feedback to HO
○ Even salient feedback can be a problem if it’s ambiguous (doesn’t clarify what’s
wrong) or inflexible (can’t dig deeper into what’s wrong)
● Trust And Dependence
○ Trust - cognitive state of the operator, ranges from over-trust (complacency) to
under-trust (I’ve been burned before, ‘cry wolf’ effect)
○ Dependence - objective behaviour associated with use of automation. This is
how the automation is actually used, and what the results are.
○ You want how dependable the system is to be lined up with how much the HO
trusts the system for optimal frequency of use of the system.
● Automation Bias
○ inappropriate reliance on inaccurate decision advice
○ Related to complacency, usually occurs with automated decision aids in complex
environments
○ HO uses automation instead of seeking own information
● Out-of-the-Loop Unfamiliarity (OOTLUF) - when HO doesn’t know what to do when called to action. Due to:
○ Automation Complacency
○ Automation Bias
○ Loss of Situation Awareness
○ Skill Degradation
● Alarm Mistrust - result of ‘cry wolf’ effect
Fitts’s Lists:
USS Vincennes Case Study
Basic Premise - during the Iran-Iraq war (in 1988), a US ship shot down an ascending civilian flight that they thought was a descending Iranian F-14 warplane.
Context/Reasons:
1. Political - US ship had been attacked a year before, did not fire in defense. They had
engaged/been approached by Iranian planes and gunboats.
2. Displays - most of the people were monitoring screens that showed what the automation
was doing
3. Automation/Command Structure - many levels of command, and all operators were
performing supervisory control
4. Stress
a. they were all aware of political background, and had low beta (itchy trigger
fingers)
b. Stress can cause failure of perception on display
c. Information overload
d. Narrowing of selective attention, confirmation bias on “this is an enemy”
e. Loss of working memory - failure to consider alternative hypotheses, correlation
of information that needs to be considered in concert (eg velocity and altitude)
f. Failure of strategic control - does not reformulate strategies
How did they not know whether the plane was ascending or descending? Location and altitude information was presented on separate screens (LACK OF PROXIMITY COMPATIBILITY). Also, altitude is an analogue concept and should not be displayed digitally: LACK OF PICTORIAL REALISM.