Cognitive and Linguistic Sciences, Brown University, CG 195, Shotola

Heuristics and Biases
Why dumb people do smart things…
and vice versa.
Overview
 Context
 Heuristics as biases (defects)
 Availability
 Anchoring and adjustment
 Representativeness (Revisited)
 Heuristics as intelligence
 Recognition
 “Fast and frugal”
 Tit-for-tat (social heuristics)
 Conclusion
Context
 Rational Choice Theory
 Utility Theory
 Probability Theory
 Bounded Rationality
Rational Choice Theory
 von Neumann & Morgenstern (1947)
attempted to remove psychological
assumptions from the theory of decision
making:
 Individuals have precise information about the
consequences of their actions
 Individuals have sufficient time and capability to
weigh alternatives
 All decisions are “forward looking” (sunk costs are ignored; the “sunk-cost fallacy” is a violation)
 “Game theory” is RCT in practice
Utility & Probability Theory
 UT specifies how preferences are determined within RCT
 A response to the St. Petersburg Paradox (1734)
 Strictly construed, UT assigns a common currency
(utiles) to disparate outcomes
 Probabilistic reasoning
 Under uncertainty, the “value” of a choice is the
expected value of probabilistic outcomes
 Savage (1972) formalized the conjunction of UT and
Bayesian probabilistic reasoning
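To make the expected-value/utility point concrete, here is a minimal Python sketch (the 50-toss truncation and the logarithmic utility function are illustrative assumptions in the spirit of Bernoulli's resolution of the paradox, not from the slides):

    import math

    # St. Petersburg game: a fair coin is flipped until the first heads.
    # If heads first appears on toss k, the payoff is 2**k.
    # Expected monetary value diverges: sum over k of (1/2**k) * 2**k = 1 + 1 + 1 + ...
    # Expected *utility* under a logarithmic utility function converges.

    def expected_value(max_tosses=50):
        return sum((0.5 ** k) * (2 ** k) for k in range(1, max_tosses + 1))

    def expected_log_utility(max_tosses=50):
        return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, max_tosses + 1))

    print(expected_value(50))        # 50.0 -- keeps growing as max_tosses grows
    print(expected_log_utility(50))  # ~1.386, close to 2*ln(2): a finite value in "utiles"

Raising the truncation lets the expected monetary value grow without bound, while the expected log-utility stays near 2·ln(2), which is why a common currency of utiles rather than dollars resolves the paradox.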
Bounded Rationality
 Acknowledges several limitations of UT and RCT as
descriptive models of individual choice
 People lack:
 Perfect information of outcomes and probabilities
 Consistent utility functions across domains
 Time and cognitive capabilities to comprehensively enact the prescriptions of UT and RCT
 Experts and everyday decision makers are error-prone
 Useful in the domain of problem-solving
 Simon (1955) argued for satisficing
 Individuals make decisions that are “good enough”
considering the costs of decision-making, the specific
goal, and cognitive limitations
Why heuristics?
 RCT & UT are terrible descriptive models in
many cases
The limitations Bounded Rationality acknowledges are insufficient to explain human behavior
 Sometimes information is insufficient even for BR
 Many judgments are not goal-directed or embedded in problem-solving tasks
 Human errors are systematic
 Discovering heuristic rules of judgment can explain
these systematic errors
Heuristics
 Substitute easy questions for difficult ones
(attribute substitution)
 Defined heuristic rules specify the substitution
 Allow judgment and decision making in cases
where specific and accurate solutions are either
unknown or unknowable
 Availability, anchoring and adjustment, and
representativeness are frequently considered
“metaheuristics” since they engender many
specific effects
Heuristics as error-generators
 How smart people do dumb things
 Tversky & Kahneman (1974); Kahneman, Slovic, & Tversky (1982)
 Availability
 Anchoring and adjustment
 Representativeness
Availability Heuristic
 “The ease with which instances or occurrences
can be brought to mind” motivates judgment
 Retrievability of instances
 “Were there more males or females on a given list?”
 Effectiveness of a search set
 “Do more English words begin with r or have r as the
third letter?”
 Biases of imaginability in ad hoc categories
 “Which is larger: C(10, 2) or C(10, 8)?” (see the sketch after this list)
 Illusory correlation
 Chapman & Chapman (1969)
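As a quick check of the imaginability item above (a sketch; the committee framing is an assumption, though it is the standard way the question is posed): committees of 2 and committees of 8 drawn from 10 people are exactly equally numerous, yet small committees are much easier to bring to mind.

    from math import comb

    # Number of distinct committees of size k that can be formed from 10 people.
    print(comb(10, 2))  # 45
    print(comb(10, 8))  # 45 -- choosing 8 to include is the same as choosing 2 to exclude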
Condition 1
1×2×3×4×5×6×7×8 = ?
Condition 2
8×7×6×5×4×3×2×1 = ?
Anchoring and adjustment
 Insufficient adjustment
 Condition 1: 1×2×3×4×5×6×7×8
 Median answer: 512
 Condition 2: 8×7×6×5×4×3×2×1
 Median answer: 2,250
Anchoring and adjustment
 Correct answer: 8! =
1×2×3×4×5×6×7×8 =
8×7×6×5×4×3×2×1 =
40,320
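One way to see why the two orderings anchor so differently is to look at the running product a respondent has formed after the first few terms; the sketch below (an assumed illustration, not from the slides) prints those partial products.

    from itertools import accumulate
    from operator import mul

    ascending  = [1, 2, 3, 4, 5, 6, 7, 8]
    descending = [8, 7, 6, 5, 4, 3, 2, 1]

    # Running products: the "anchor" a respondent forms after the first few steps.
    print(list(accumulate(ascending, mul)))   # [1, 2, 6, 24, 120, 720, 5040, 40320]
    print(list(accumulate(descending, mul)))  # [8, 56, 336, 1680, 6720, 20160, 40320, 40320]

Extrapolating from an early anchor of 24 versus 1,680 and adjusting insufficiently yields exactly the observed pattern: both groups underestimate 40,320, and the descending group underestimates it less.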
Anchoring and adjustment
 Biases in the evaluation of conjunctive and
disjunctive events
 Probability of conjunctive events overestimated
 Probability of disjunctive events underestimated
 Anchoring in the assessment of subjective
probability distributions
 Variance of estimated probability distributions
narrower than actual probability distributions
 Common to naïve and expert respondents
Representativeness
 Likelihood that an instance belongs to a category is judged by its similarity to (representativeness of) that category, other relevant factors notwithstanding
 Insensitivity to prior probability of outcomes
 “Imagine a group of (70/30) lawyers and (30/70) engineers.”
 “Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well-liked by his colleagues.”
 Participants judged Dick to be equally likely to be an engineer regardless of prior probability condition
 Base rate neglect (see the Bayes sketch below)
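For the base-rate point, a hedged sketch of the normative calculation (the equal likelihoods are an illustrative assumption meant to capture an uninformative description; only the 30/70 and 70/30 priors come from the slide): Bayes' rule says the posterior should track the prior whenever the description carries no diagnostic information.

    def posterior_engineer(prior_engineer, p_desc_given_engineer=0.5, p_desc_given_lawyer=0.5):
        """P(engineer | description) via Bayes' rule.
        With an uninformative description (equal likelihoods), the posterior
        should simply equal the prior -- yet participants ignore the prior."""
        num = p_desc_given_engineer * prior_engineer
        den = num + p_desc_given_lawyer * (1 - prior_engineer)
        return num / den

    print(posterior_engineer(0.30))  # 0.30 in the 30%-engineer condition
    print(posterior_engineer(0.70))  # 0.70 in the 70%-engineer condition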
Representativeness
 Insensitivity to sample size
A certain town is served by two hospitals. In the larger hospital about 45
babies are born each day, and in the smaller hospital about 15 babies are
born each day. As you know, about 50 percent of all babies are boys.
However, the exact percentage varies from day to day…
For a period of 1 year, each hospital recorded the days on which more than
60 percent of the babies born were boys. Which hospital do you think
recorded more such days?
The larger hospital (21)
The smaller hospital (21)
About the same (within 5% of each other) (53)
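A short simulation (an assumed sketch, not part of the original item) shows why the smaller hospital is the normatively correct answer: proportions computed from small samples stray from 50% far more often.

    import random

    def days_over_60_percent_boys(births_per_day, days=365, p_boy=0.5):
        """Count days on which more than 60% of the babies born were boys."""
        count = 0
        for _ in range(days):
            boys = sum(random.random() < p_boy for _ in range(births_per_day))
            if boys / births_per_day > 0.60:
                count += 1
        return count

    random.seed(0)
    print(days_over_60_percent_boys(45))  # larger hospital: relatively few such days
    print(days_over_60_percent_boys(15))  # smaller hospital: noticeably more such days

In a typical run the 15-birth hospital records on the order of twice as many such days as the 45-birth hospital.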
Representativeness
 Misconceptions of chance
The (mistaken) belief that “the law of large numbers applies to small numbers as well”
 Expert researchers select sample sizes too small to
fairly test hypotheses (Cohen, 1969)
Representativeness
 Conjunction fallacy
 Linda! (the vignette describing an outspoken, very bright philosophy major deeply concerned with issues of discrimination and social justice)
 Robust effect:
 1. Linda is more likely to be a bank teller than she is to be a feminist bank teller, because every feminist bank teller is a bank teller, but some women bank tellers are not feminists, and Linda could be one of them
 2. Linda is more likely to be a feminist bank teller than she is likely to be a bank teller, because she resembles an active feminist more than she resembles a bank teller
 65% of participants selected argument 2
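The normative side of the argument can be stated in two lines (the probabilities below are made up; only the inequality matters): the probability of a conjunction can never exceed the probability of either conjunct.

    # Whatever probabilities we assume, the conjunction cannot exceed its conjuncts.
    p_teller = 0.05                     # hypothetical: P(Linda is a bank teller)
    p_feminist_given_teller = 0.80      # hypothetical: P(feminist | bank teller)

    p_feminist_teller = p_teller * p_feminist_given_teller  # P(teller AND feminist)

    assert p_feminist_teller <= p_teller  # holds for any values in [0, 1]
    print(p_teller, p_feminist_teller)   # 0.05 0.04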
Representativeness
 Misconceptions of Regression to the Mean
 Why flight instructors conclude that criticism is more
effective than praise
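A brief simulation (assumed, purely illustrative parameters) captures the flight-instructor point: if each landing is a fixed skill plus independent noise, performance after an unusually bad landing tends to improve and performance after an unusually good landing tends to decline, even though no feedback is given at all.

    import random

    random.seed(1)
    skill = 0.0
    landings = [skill + random.gauss(0, 1) for _ in range(10000)]

    # Change in performance after unusually bad vs. unusually good landings (no feedback given).
    after_bad  = [landings[i + 1] - landings[i] for i in range(len(landings) - 1) if landings[i] < -1]
    after_good = [landings[i + 1] - landings[i] for i in range(len(landings) - 1) if landings[i] > 1]

    print(sum(after_bad) / len(after_bad))    # positive: "criticism worked"
    print(sum(after_good) / len(after_good))  # negative: "praise backfired"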
Heuristics as intelligence
 How dumb people do smart things:
when less (information) is more
Recognition Heuristic
 Who will win in the soccer match: Manchester
United vs. Shrewsbury Town? (Ayton & Onkal,
1997)
 Which has a greater population: San Diego or San
Antonio? (Goldstein & Gigerenzer, 2002)
 Turkish participants as accurate as British in the
former; German participants more accurate than
American in the latter
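A minimal sketch of the decision rule (the set of recognized cities is invented for illustration): when exactly one of two options is recognized, infer that the recognized one scores higher on the criterion; otherwise the heuristic is silent.

    def recognition_heuristic(option_a, option_b, recognized):
        """Return the predicted 'larger' option, or None if recognition can't decide."""
        a_known = option_a in recognized
        b_known = option_b in recognized
        if a_known and not b_known:
            return option_a
        if b_known and not a_known:
            return option_b
        return None  # recognize both or neither: fall back on other knowledge or guess

    # Hypothetical partial knowledge of a German participant judging U.S. cities.
    recognized_cities = {"San Diego", "New York", "Los Angeles"}
    print(recognition_heuristic("San Diego", "San Antonio", recognized_cities))  # San Diego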
“Fast and Frugal”
 Heart-attack patient
 Standard, multivariate patient interview vs. limited-information decision tree of 3 yes/no questions
 Decision tree “more accurate in classifying risk than complex statistical methods” (Breiman et al., 1993), cited in Todd and Gigerenzer (1999)
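A sketch of such a fast-and-frugal tree (the three yes/no questions below are hypothetical stand-ins, not the cited study's actual items): each question can settle the decision on its own, and only unresolved cases fall through to the next question.

    def assign_to_coronary_care(vital_sign_flag, chest_pain_is_chief_complaint, any_other_risk_factor):
        """Fast-and-frugal tree: three yes/no questions asked in a fixed order,
        each able to settle the decision by itself.
        (Hypothetical questions, for illustration only -- not a clinical instrument.)"""
        if vital_sign_flag:                      # Q1: yes -> decide immediately
            return "coronary care unit"
        if not chest_pain_is_chief_complaint:    # Q2: no -> decide immediately
            return "regular nursing bed"
        if any_other_risk_factor:                # Q3: final question decides
            return "coronary care unit"
        return "regular nursing bed"

    print(assign_to_coronary_care(False, True, False))  # regular nursing bed
    print(assign_to_coronary_care(True, False, False))  # coronary care unit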
“Tit-for-tat”
 Simple game-theory strategy:
 In the first round: always cooperate
 Subsequently:
 Remembers partner’s (opponent’s) one (!) previous
response
 Reciprocates previous response
 This simple cooperation heuristic bested many highly sophisticated algorithms whose decisions drew on long memories of the partner's actions and heavy computational machinery (Axelrod, 1984)
 Interacts well with other tit-for-tat machines as well
 “Why selfish people do nice things” – a simple
heuristic for social cooperation
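The strategy itself fits in a few lines of Python (a sketch; the Prisoner's Dilemma payoff bookkeeping and tournament loop are omitted): cooperate on the first round, then simply repeat the partner's most recent move.

    COOPERATE, DEFECT = "C", "D"

    def tit_for_tat(partner_history):
        """Cooperate first; afterwards, repeat the partner's most recent move."""
        if not partner_history:          # first round
            return COOPERATE
        return partner_history[-1]       # reciprocate the single remembered move

    # A short illustrative exchange against a partner who defects once.
    partner_moves = [COOPERATE, DEFECT, COOPERATE]
    print([tit_for_tat(partner_moves[:i]) for i in range(len(partner_moves) + 1)])
    # ['C', 'C', 'D', 'C'] -- retaliates once, then forgives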
Heuristics: bias or intelligence?
Bias View
 limited decision-making
methods that people often
misapply to situations
where UT, RCT, and PT
should be applied instead
 instantaneous responses
based on attribute
substitution – switch hard
questions for easy ones
 sources of predictable
error and
underperformance
Intelligence View
 “intelligent behavior” need
not be computationally
expensive
 frugal representations and
response mechanisms are
more tractable and
plausible
 simple rules can generate
rich cognitive and social
effects