Overconfidence - The Case Of Joseph Kidd


Overconfidence

Agatha Christie: Poirot: Cards on the Table (2005)

Hercule Poirot: The question is, can Hercule Poirot possibly be wrong?

Mrs. Lorrimer: No one can always be right.

Hercule Poirot: But I am! Always I am right. It is so invariable it startles me. And now it looks very much as though I may be wrong, and that upsets me. But I should not be upset, because I am right. I must be right because I am never wrong.

Agatha Christie's whodunnit mysteries can be solved with maths - Daily Mail - 3 Aug 2015


Overconfidence

“The odds of a meltdown are one in 10,000 years.”

Vitali Sklyarov, Minister of Power and Electrification in the Ukraine, two months before the Chernobyl accident (cited in Rylsky, 1986, February).

No problem in judgement and decision making is more prevalent and more potentially catastrophic than overconfidence.

As Janis (1982) documented in his work on groupthink, American overconfidence enabled the Japanese to destroy Pearl Harbor in World War II.

Overconfidence

Overconfidence also played a role in the disastrous decision to launch the U.S. space shuttle Challenger.

The Space Shuttle Challenger disaster occurred on 28 January 1986.

Before the shuttle exploded on its twenty-fifth mission, NASA's official launch risk estimate was 1 catastrophic failure in 100,000 launches (Feynman, 1988, February). This risk estimate is roughly equivalent to launching the shuttle once per day and expecting to see only one accident in three centuries.

Challenger's Rollout From The Orbiter Processing Facility To The Vehicle Assembly Building

The Crew Of The Final, Ill-fated Flight of the Challenger

The Challenger Breaks Apart 73 Seconds Into Its Final Mission

Debris Recovered From Space Shuttle Challenger

Conclusion

Richard Phillips Feynman (May 11, 1918 – February 15, 1988) was an American physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics. For his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Sin-Itiro Tomonaga, received the Nobel Prize in Physics in 1965.

Conclusion

Feynman played an important role on the Presidential Rogers Commission, which investigated the Challenger disaster. During a televised hearing, Feynman demonstrated that the material used in the shuttle's O-rings became less resilient in cold weather by immersing a sample of the material in ice-cold water. The Commission ultimately determined that the disaster was caused by the primary O-ring not properly sealing due to extremely cold weather at Cape Canaveral.

Overconfidence

Was NASA genuinely overconfident of success, or did it simply need to appear confident?

Because true confidence is hard to measure in such situations, the most persuasive evidence of overconfidence comes from carefully controlled experiments.

Stuart Oskamp published one of the earliest and best known of these studies in 1965.

Overconfidence - The Case Of Joseph Kidd

Oskamp asked 8 clinical psychologists, 18 psychology graduate students, and 6 undergraduates to read the case study of “Joseph Kidd,” a 29-year-old man who had experienced “adolescent maladjustment.”

Each participant was given excerpts from “The Case of Joseph Kidd,” a chapter in Robert White's “Lives in Progress” giving a detailed account of the life and problems of a 29-year-old male. The chapter is about 50 pages long.

Overconfidence - The Case Of Joseph Kidd

The case study was divided into four parts.

Part 1 Introduced Kidd as a war veteran who was working as a business assistant in a floral decorating studio (35 words from the first page).

Part 2 Discussed Kidd's childhood through age 12 (750 words).

Part 3 Covered Kidd's high school and college years (1,000 words).

Part 4 Chronicled Kidd's army service and later activities to age 29 (600 words).

Overconfidence - The Case Of Joseph Kidd

Subjects answered the same set of questions four times - once after each part of the case study.

These questions were constructed from factual material in the case study, but they required subjects to form clinical judgments based on general impressions of Kidd's personality.

Questions always had five forced-choice alternatives, and following each item, subjects estimated the likelihood that their answer was correct.

For example:

Overconfidence - The Case Of Joseph Kidd

5. During college, when Kidd was in a familiar and congenial social situation, he often:
a. Tried to direct the group and impose his wishes on it
b. Stayed aloof and withdrawn from the group
c. Was quite unconcerned about how people reacted to him
d. Took an active part in the group, but in a quiet and modest way
e. Acted the clown and showed off

Guess!


Overconfidence - The Case Of Joseph Kidd

15. Kidd's present attitude towards his mother is one of:
a. Love and respect for her ideals
b. Affectionate tolerance for her foibles
c. Combined respect and resentment
d. Rejection of her and all her beliefs
e. Dutiful but perfunctory affection

Guess!


Overconfidence - The Case Of Joseph Kidd

The confidence ratings ranged from 20% (no confidence beyond chance levels of accuracy) to 100% (absolute certainty).

Somewhat surprisingly, there were no significant differences among the ratings from psychologists, graduate students, and undergraduates, so Oskamp combined all three groups in his analysis of the results.

What he found was that confidence increased with the amount of information subjects read, but accuracy did not.

Overconfidence - The Case Of Joseph Kidd

After reading the first part of the case study, subjects answered 26% of the questions correctly (slightly more than would be expected by chance), and their mean confidence rating was 33%.

These figures show fairly close agreement.

As subjects read more information, though, the gap between confidence and accuracy grew (see the figure below).

Overconfidence - The Case Of Joseph Kidd

The more material subjects read, the more confident they became - even though accuracy did not increase significantly with additional information.

By the time they finished reading the fourth part of the case study, more than 90 % of Oskamp's subjects were overconfident in their answers.

Overconfidence - The Case Of Joseph Kidd

[Figure: Estimated accuracy versus true accuracy (per cent correct, axis 0-60) plotted against the amount of case study read by subjects (Part 1 to Part 4). Estimated accuracy rises with each part while true accuracy stays nearly flat.]

Overconfidence

Lichtenstein and Fischhoff

In the years since this experiment, a number of studies have found that people tend to be overconfident of their judgments, particularly when accurate judgments are difficult to make.

For example, Lichtenstein and Fischhoff (1977) conducted a series of experiments in which they found that people were 65% to 70% confident of being right when they were actually correct about 50% of the time.

Overconfidence

Lichtenstein and Fischhoff

In the first of these experiments, Lichtenstein and Fischhoff asked people to judge whether each of 12 children's drawings came from Europe or Asia, and to estimate the probability that each judgment was correct.

Even though only 53% of the judgments were correct (very close to chance performance), the average confidence rating was 68%.

Overconfidence

Lichtenstein and Fischhoff

In another experiment, Lichtenstein and Fischhoff gave people market reports on 12 stocks and asked them to predict whether the stocks would rise or fall in a given period.

Once again, even though only 47% of these predictions were correct (slightly less than would be expected by chance), the mean confidence rating was 65%.

Overconfidence

Lichtenstein and Fischhoff

After several additional studies, Lichtenstein and Fischhoff drew the following conclusions about the correspondence between accuracy and confidence in two-alternative judgments:

Overconfidence

Lichtenstein and Fischhoff

► Overconfidence is greatest when accuracy is near chance levels.

► Overconfidence diminishes as accuracy increases from 50% to 80%, and once accuracy exceeds 80%, people often become underconfident.

Overconfidence

Lichtenstein and Fischhoff

In other words, the gap between accuracy and confidence is smallest when accuracy is around 80 % , and it grows larger as accuracy departs from this level. Discrepancies between accuracy and confidence are not related to a decision maker's intelligence.

Overconfidence

Lichtenstein and Fischhoff

Although early critics of this work claimed that these results were largely a function of asking people questions about obscure or trivial topics, more recent studies have replicated Lichtenstein and Fischhoff's findings with more commonplace judgments.

For example, in a series of experiments involving more than 10,000 separate judgments, Ross and his colleagues found roughly 10% to 15% overconfidence when subjects were asked to make a variety of predictions about their behaviour and the behaviour of others (Dunning, Griffin, Milojkovic, and Ross, 1990; Vallone, Griffin, Lin, and Ross, 1990).

Overconfidence

This is not to say that people are always overconfident. Ronis and Yates (1987) found, for instance, that overconfidence depends partly on how confidence ratings are elicited and on what type of judgments are being made (general knowledge items seem to produce relatively high degrees of overconfidence).

Overconfidence - Feedback

There is also some evidence that expert bridge players, professional odds makers, and National Weather Service forecasters - all of whom receive regular feedback following their judgments - exhibit little or no overconfidence (Keren, 1987; Lichtenstein et al., 1982; Murphy and Brown, 1984; Murphy and Winkler, 1984).

Overconfidence - Feedback

Keren (1997) discusses bridge experts' calibration, with amateur players showing considerable overconfidence and expert players calibrated almost perfectly. He underlines the difference between accuracy (usually carefully studied) and resolution (also called discrimination - the ability to judge whether an event will take place or not), as there is no agreement on whether these are two forms of the same expertise or two different kinds of expertise.

Still, for the most part, research suggests that overconfidence is prevalent.

Overconfidence - Extreme Confidence

What if people are virtually certain that an answer is correct? How often are they right in such cases?

Fischhoff et al. (1977) conducted a series of experiments to investigate this issue. In the first experiment, subjects answered hundreds of general knowledge questions and estimated the probability that their answers were correct.

For example, they answered whether absinthe is a liqueur or a precious stone, and they estimated their confidence on a scale from 0.50 to 1.00.

Overconfidence - Extreme Confidence

Fischhoff et al. then examined the accuracy of only those answers about which subjects were absolutely sure.

What they found was that people tended to be only 70% to 85% correct when they reported being 100% sure of their answer.

The correct answer is that absinthe is a liqueur, though many people confuse it with a precious stone called amethyst.

Overconfidence - Extreme Confidence

Just to be certain their results were not due to misconceptions about probability, Fischhoff et al. (1977) conducted a second experiment in which confidence was elicited in terms of the odds of being correct.

Overconfidence - Extreme Confidence

Subjects in this experiment were given more than 106 items in which two causes of death were listed - for instance, leukaemia and drowning. They were asked to indicate which cause of death was more frequent in the United States and to estimate the odds that their answer was correct (i.e. 2:1, 3:1, etc.). This way, instead of having to express 75% confidence in terms of a probability, subjects could express their confidence as 3:1 odds of being correct.

What are odds?

Aside - Odds

Odds are just an alternative way of expressing the likelihood of an event such as catching the flu.

Probability is the expected number of flu patients divided by the total number of patients.

Odds would be the expected number of flu patients divided by the expected number of non-flu patients.


Aside - Odds

During the flu season, you might see ten patients in a day.

One would have the flu and the other nine would have something else.

So the probability of the flu in your patient pool would be one out of ten.

The odds would be one to nine.

Aside - Odds

More details

It's easy to convert a probability into an odds.

Simply take the probability and divide it by one minus the probability. Here's a formula.

Odds = Probability/(1-Probability)

If you know the odds in favour of an event, the probability is just the odds divided by one plus the odds. Here's a formula.

Probability = Odds/(1+Odds)

Aside - Odds

Example

If both of your parents have an Aa genotype, the probability that you will have an AA genotype is 0.25.

The odds would be

Odds = 0.25/(1-0.25) = 0.333

which can also be expressed as one to three.

Aside - Odds

If both of your parents are Aa, then the probability that you will be Aa is 0.50. In this case, the odds would be

Odds = 0.5/(1-0.5) = 1

We will sometimes refer to this as even odds or one to one odds.

Aside - Odds

When the probability of an event is larger than 50%, then the odds will be larger than 1. When both of your parents are Aa, the probability that you will have at least one A gene is 0.75. This means that the odds are

Odds = 0.75/(1-0.75) = 3

which we can also express as 3 to 1 in favour of inheriting that gene.

Let's convert that odds back into a probability. An odds of 3 would imply that

Probability = 3/(1+3) = 0.75

Aside - Odds

Suppose the odds against winning a contest were eight to one. We need to re-express these as odds in favour of the event, and then apply the formula. The odds in favour would be one to eight, or 0.125. Then we would compute the probability as

Probability = 0.125/(1+0.125) = 0.111

Notice that in this example, the odds (0.125) and the probability (0.111) did not differ by much. This pattern tends to hold for rare events. In other words, if a probability is small, then the odds will be close to the probability. On the other hand, when the probability is large, the odds will be quite different.
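The conversion rules in this aside can be sketched as two small Python helpers (the function names are my own, not from the slides):

```python
def probability_to_odds(p):
    """Odds in favour of an event with probability p (requires 0 <= p < 1)."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Probability of an event given the odds in its favour."""
    return odds / (1 + odds)

# The genotype and contest examples from the slides:
print(round(probability_to_odds(0.25), 3))   # 0.333 (one to three)
print(probability_to_odds(0.5))              # 1.0 (even odds)
print(probability_to_odds(0.75))             # 3.0 (three to one)
print(odds_to_probability(3))                # 0.75
print(round(odds_to_probability(1 / 8), 3))  # 0.111 (odds against of eight to one)
```

Note how, for the rare event, the odds (0.125) and the probability (0.111) nearly coincide, as described above.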

Aside - Odds

When Spell-Check Can’t Help – NY Times – 13 May 2014

Confusion about odds, by Philip B. Corbett, “After Deadline” blog

The article includes warnings about vague statements involving odds, for instance these two examples:

The odds of Mr. Gandhi’s becoming the next prime minister have dropped so low that Mumbai bookies have stopped taking bets on him.

Iraq Unrest Narrows Odds for Maliki to Keep Seat

Take care to be clear in referring to “odds.” “Higher” odds could suggest that something is more likely (higher probability) or less likely (1,000 to 1, say, compared with 10 to 1). It was difficult to tell whether “narrows odds” in the second headline meant he had more chance or less. Consider “probability,” “likelihood” or “chance” as alternatives if “odds” might be ambiguous.

Aside - Odds

On a related note, the following quotations from What the Numbers Say: A Field Guide to Mastering Our Numerical World by Derrick Niederman and David Boyum (p. 174) are relevant:

“If Congress ever decided to act in the public interest, it could do no worse than to pass a law banning the use of odds as a method for stating probabilities.”

“If you're confused [about odds], don't worry, for even if you understand how odds work, you can never be sure if the person you're talking to does.”

Overconfidence - Extreme Confidence

What Fischhoff et al. (1977) found was that confidence and accuracy were aligned fairly well up to confidence estimates of about 3:1, but as confidence increased from 3:1 to 100:1, accuracy did not increase appreciably.

When people set the odds of being correct at 100:1, they were actually correct 73% of the time. Even when people set the odds between 10,000:1 and 1,000,000:1 - indicating virtual certainty - they were correct only 85% to 90% of the time (and should have given a confidence rating between 6:1 and 9:1).

Overconfidence - Extreme Confidence

Although these results may seem to contradict Lichtenstein and Fischhoff's earlier claim that overconfidence is minimal when subjects are 80% accurate, there is really no contradiction.

The fact that subjects average only 70 % to 90 % accuracy when they are highly confident does not mean that they are always highly confident when 70 % to 90 % accurate.

Overconfidence - Extreme Confidence

Finally, as an added check to make sure that subjects understood the task and were taking it seriously, Fischhoff et al. (1977) conducted three replications.

In one replication, the relation between odds and probability was carefully explained in a twenty-minute lecture. Subjects were given a chart showing the correspondence between various odds estimates and probabilities, and they were told about the subtleties of expressing uncertainty as an odds rating (with a special emphasis on how to use odds between 1:1 and 2:1 to express uncertainty).

Overconfidence - Extreme Confidence

Even with these instructions, subjects showed unwarranted confidence in their answers. They assigned odds of at least 50:1 when the odds were actually about 4:1, and they gave odds of 1,000:1 when they should have given odds of 5:1.

Overconfidence - Extreme Confidence

In another replication, subjects were asked whether they would accept a monetary bet based on the accuracy of answers that they rated as having 50:1 or better odds of being correct.

Of 42 subjects, 39 were willing to gamble - even though their overconfidence would have led to a total of more than $140 in losses.

Overconfidence - Extreme Confidence

And in a final replication, Fischhoff et al. (1977) actually played subjects' bets.

In this study, 13 of 19 subjects agreed to gamble on the accuracy of their answers, even though they were incorrect on 12 % of the questions to which they had assigned odds of 50:1 or greater (and all would have lost from $1 to $11, had the experimenters not waived the loss).

Overconfidence

It has been argued that estimates of the expected inflation rate may be obtained from the qualitative data generated by surveys in which respondents are asked whether they expect prices to rise, fall or stay the same (Carlson and Parkin, 1975).

Numerous works build on this original approach. In particular, Breitung and Schmeling (2013) analyse a sample of stock return expectations containing both qualitative and quantitative forecasts, and provide the first direct evidence of the weakness of popular quantification procedures in such a setting (as well as the reasons for that weakness).

Overconfidence - Extreme Confidence

These results suggest that

1. people are overconfident even when virtually certain they are correct

2. overconfidence is not simply a consequence of taking the task lightly or misunderstanding how to make confidence ratings.

Indeed, Sieber (1974) found that overconfidence increased with incentives to perform well.

Overconfidence - When Overconfidence Becomes A Capital Offence

Are people overconfident when more is at stake than a few dollars?

Ethical considerations obviously limit what can be tested in the laboratory. The Catch-22 (Heller, 1961; see below) of human laboratory research is that the more relevant a motivational variable is to everyday human life, the less ethically justifiable is the manipulation of that variable in the laboratory. At least one line of evidence, however, suggests that overconfidence operates even when human life hangs in the balance. This evidence comes from research on the death penalty.

Overconfidence

Catch-22 (Heller 1961)

“There was only one catch and that was Catch-22, which specified that a concern for one's safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he were sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to. Yossarian was moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle.” (p. 56, ch. 5)

It is set during World War II from 1942 to 1944. The novel follows Captain John Yossarian, a U.S. Army Air Forces B-25 bombardier.

Overconfidence

Catch-22

Losing something is typically a conventional problem; to solve it, one looks for the lost item until one finds it. But if the thing lost is one's glasses, one can't see to look for them.

If the lights are out in a room, one can't see to find the light switch.

If one locks their keys in their car, it is not possible to unlock the car to retrieve them.

If one lacks work experience, one cannot get a job to gain experience.

If one doesn't have money, one can't invest to make money.

Overconfidence - When Overconfidence Becomes A Capital Offence

In a comprehensive review of wrongful convictions, Bedau and Radelet (1987) found 350 documented instances in which innocent defendants were convicted of capital or potentially capital crimes in the United States - even though the defendants were apparently judged “guilty beyond a reasonable doubt.”

In five of these cases, the error was discovered prior to sentencing.

Overconfidence - When Overconfidence Becomes A Capital Offence

The other defendants were not so lucky:

67 were sentenced to prison for terms of up to 25 years

139 were sentenced to life in prison (terms of 25 years or more)

139 were sentenced to die.

At the time of Bedau and Radelet's review, 23 of the people sentenced to die had been executed.

America’s most controversial prosecutor - Darth Vader’s lament - Economist - 21 Nov 2015

Enthusiasm for the death penalty has made Dale Cox a hated figure in the liberal press.

Overconfidence - Calibration

“Calibration” is the degree to which confidence matches accuracy.

A decision maker is perfectly calibrated when, across all judgments at a given level of confidence, the proportion of accurate judgments is identical to the expected probability of being correct.

In other words, 90% of all judgments assigned a 0.90 probability of being correct are accurate, 80% of all judgments assigned a probability of 0.80 are accurate, 70% of all judgments assigned a probability of 0.70 are accurate, and so forth.

Overconfidence - Calibration

When individual judgments are considered alone, it doesn't make much sense to speak of calibration. The only way to reliably assess calibration is by comparing accuracy and confidence across hundreds of judgments (Lichtenstein et al. 1982).

Just as there are many ways to measure confidence, there are several techniques for assessing calibration.

One way is simply to calculate the difference between average confidence ratings and the overall proportion of accurate judgments.

Overconfidence - Calibration

For instance, a decision maker might average 80 % confidence on a set of general knowledge items but be correct on only 60 % of the items. Such a decision maker would be overconfident by 20 % .
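This simple measure - average confidence minus overall accuracy - can be sketched in a few lines of Python (an illustration with made-up data, not a reconstruction of any study):

```python
def overall_overconfidence(confidences, correct):
    """Mean stated confidence minus proportion correct (positive = overconfident)."""
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_confidence - accuracy

# Ten judgments at 80% confidence, of which only six were right:
confs = [0.8] * 10
hits = [True] * 6 + [False] * 4
print(round(overall_overconfidence(confs, hits), 2))  # 0.2 -> overconfident by 20%
```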

Although this measure of calibration is convenient, it can be misleading at times.

Consider, for example, a decision maker whose overall accuracy and average confidence are both 80%. Is this person perfectly calibrated?

Overconfidence - Calibration

Not necessarily.

The person may be 60% confident on half the judgments and 100% confident on the others (averaging out to 80% confidence), yet 80% accurate at both levels of confidence. Such a person would be underconfident when 60% sure and overconfident when 100% sure.

Overconfidence - Calibration

A somewhat more refined approach is to examine accuracy over a range of confidence levels.

When accuracy is calculated separately for different levels of confidence, it is possible to create a “calibration curve” in which the horizontal axis represents confidence and the vertical axis represents accuracy.
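Computing the data behind such a curve amounts to grouping judgments by stated confidence and taking the accuracy within each group. A minimal Python sketch, reusing the hypothetical decision maker described earlier (60% confident on half the judgments, 100% on the rest, 80% accurate at both levels):

```python
from collections import defaultdict

def calibration_curve(confidences, correct):
    """Accuracy at each stated confidence level: the points of a calibration curve."""
    by_level = defaultdict(list)
    for confidence, hit in zip(confidences, correct):
        by_level[confidence].append(hit)
    return {level: sum(hits) / len(hits) for level, hits in sorted(by_level.items())}

# 60% confident on ten judgments, 100% confident on ten more,
# but 80% accurate within each group:
confs = [0.6] * 10 + [1.0] * 10
hits = ([True] * 8 + [False] * 2) * 2
print(calibration_curve(confs, hits))  # {0.6: 0.8, 1.0: 0.8}
```

Average confidence and overall accuracy both come out at 80%, yet the curve reveals underconfidence at the 0.6 level and overconfidence at the 1.0 level.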

Overconfidence - Calibration

The figure (next slide) contains two calibration curves - one for weather forecasters' predictions of precipitation , and the other for physicians' diagnoses of pneumonia .

Overconfidence - Calibration

As you can see, the weather forecasters are almost perfectly calibrated; on the average, their predictions closely match the weather (contrary to popular belief!).

Overconfidence - Calibration

In contrast, the physicians are poorly calibrated; most of their predictions lie below the line, indicating overconfidence.

Overconfidence - Calibration

Although the weather forecasters are almost perfectly calibrated, the physicians show substantial overconfidence (i.e., unwarranted certainty that patients have pneumonia).

The data on weather forecasters comes from a report by Murphy and Winkler (1984), and the data on physicians comes from a study by Christensen-Szalanski and Bushyhead (1981).

Overconfidence - Calibration

Ganguli et al. (2015) found that most primary care physicians responding to a survey expressed confidence in their ability to identify potential cases of Ebola and communicate Ebola risks to their patients. Yet when asked how they would care for hypothetical patients who might have been exposed to Ebola, less than 70% gave answers fitting CDC guidelines. Those least likely to encounter an Ebola patient were most likely to choose overly intense management of patients actually at low risk.

See also Many physicians overestimate their ability to assess patients' risk of Ebola: Those least likely to encounter patients exposed to Ebola are most likely to choose excessive management measures -- ScienceDaily -- 27 August 2015 .

Overconfidence - Calibration

Most research into uncertainty focuses on how people estimate probability magnitude. By contrast, Juanchich and Sirota (2015) focus on how people interpret the concept of probability and why they often misinterpret it.

In a weather forecast context, they hypothesised that the absence of an explicit reference class and the polysemy of the percentage format cause incorrect probability interpretations, and tested two interventions to help people interpret probabilities better. In two studies (N = 1,337), they demonstrate that most people from the UK and the US do not interpret probabilities of precipitation correctly.

The explicit mention of the reference class helped people to interpret probabilities of precipitation better when the target area was explicit, but not when it was left unspecified. Furthermore, the polysemy of the percentage format is unlikely to cause these misinterpretations, since a non-polysemous format (e.g. verbal probability) did not facilitate correct probability interpretation in their studies (Juanchich and Sirota 2015).

polysemy – word having multiple related meanings

Overconfidence - Calibration

In a similar vein, see Börjesson et al. (2015) on risk propensity within the military, in a study of Swedish officers and soldiers; Perko et al. (2015) on differences in perception of radiological risks between lay people and new versus experienced employees in the nuclear sector; and Donovan et al. (2015) on volcanologists.

Donovan et al. (2015) suggest that the phrasing of questions affects the ways in which probabilities are estimated. In total, 71% of respondents (N = 70) exhibited some form of inconsistency in their answers between and/or within different question formats. Some respondents were uncomfortable with providing any numerical probability estimate, perhaps suggesting that they considered the uncertainty too high for meaningful judgements to be made.

Overconfidence - Calibration

There are additional ways to assess calibration, some of them involving complicated mathematics.

For instance, one of the most common techniques is to calculate a number known as a “Brier score” (named after statistician Glenn Brier).

Brier scores can be partitioned into three components, one of which corresponds to calibration.

Overconfidence - Calibration

The Brier score component for calibration is a weighted average of the mean squared differences between the proportion correct in each category and the probability associated with that category (for a good introduction to the technical aspects of calibration, see Yates, 1990).
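A sketch of that computation for binary forecasts, taking the Brier score as the mean squared difference between forecast probabilities and 0/1 outcomes (function names and data are my own illustration):

```python
from collections import defaultdict

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_component(forecasts, outcomes):
    """Weighted average of squared gaps between each forecast category
    and the proportion correct within that category."""
    by_category = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        by_category[f].append(o)
    n = len(forecasts)
    return sum(len(obs) * (f - sum(obs) / len(obs)) ** 2
               for f, obs in by_category.items()) / n

# A forecaster who says 0.9 on ten occasions but is right only seven times:
fcst = [0.9] * 10
outc = [1] * 7 + [0] * 3
print(round(brier_score(fcst, outc), 4))            # 0.25
print(round(calibration_component(fcst, outc), 4))  # 0.04
```

A perfectly calibrated forecaster has a calibration component of zero; here the 0.2 gap between the stated probability (0.9) and the hit rate (0.7) contributes 0.2 squared, or 0.04.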

Overconfidence - Calibration

One of the most interesting measures of calibration is known as the “surprise index.”

The surprise index is used for interval judgments of unknown quantities.

For example, suppose you felt 90 % confident that the answer to a survey item was somewhere between an inch and a mile. Because the correct answer is actually greater than a mile, this answer would be scored as a surprise.

The surprise index is simply the percentage of judgments that lie beyond the boundaries of a confidence interval.

How to interpret a surprise index - FT - 8 Dec 2011

Overconfidence - Calibration

In a major review of calibration research, Lichtenstein et al. (1982) examined several studies in which subjects had been asked to give 98% confidence intervals (i.e., intervals that had a 98% chance of including the correct answer).

In every study, the surprise index exceeded 2%.

Averaging across all experiments for which information was available - a total of nearly 15,000 judgments - the surprise index was 32%.

Overconfidence - Calibration

In other words, when subjects were 98% sure that an interval contained the correct answer, they were right only 68% of the time. Once again, overconfidence proved the rule rather than the exception.

Are you overconfident?

Russo and Schoemaker (1989) developed an easy self-test to measure overconfidence on general knowledge questions (presented below).


Overconfidence - Self-Test Of Overconfidence

For each of the following ten items, provide a low and high guess such that you are 90% sure the correct answer falls between the two. Your challenge is to be neither too narrow (i.e., overconfident) nor too wide (i.e., underconfident). If you successfully meet this challenge you should have 10% misses - that is, exactly one miss.

Overconfidence - Self-Test Of Overconfidence

90% Confidence Range - Low / High

1. Martin Luther King's age at death
2. Length of the Nile River
3. Number of countries that are members of OPEC
4. Number of books in the Old Testament
5. Diameter of the moon in miles
6. Weight of an empty Boeing 747 in pounds
7. Year in which Wolfgang Amadeus Mozart was born
8. Gestation period (in days) of an Asian elephant
9. Air distance from London to Tokyo
10. Deepest (known) point in the ocean (in feet)

Keep a note of your intervals

Are these really general knowledge?

Overconfidence - Self-Test Of Overconfidence

This test will give you some idea of whether you are overconfident on general knowledge questions (Russo and Schoemaker, 1989).

True values:

1. Martin Luther King's age at death - 39
2. Length of the Nile River - 4,187 miles
3. Number of countries that are members of OPEC - 13 countries
4. Number of books in the Old Testament - 39 books
5. Diameter of the moon in miles - 2,160 miles
6. Weight of an empty Boeing 747 in pounds - 390,000 pounds
7. Year in which Wolfgang Amadeus Mozart was born - 1756
8. Gestation period (in days) of an Asian elephant - 645 days
9. Air distance from London to Tokyo - 5,959 miles
10. Deepest (known) point in the ocean - 36,198 feet

Is the answer in your interval?
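Scoring the quiz can be done mechanically. A minimal sketch, in which the true values come from the quiz above and the (low, high) answers are a hypothetical respondent's guesses:

```python
# Score ten 90% confidence intervals against the quiz's true values.
# The (low, high) answers below are from a hypothetical respondent.

true_values = {
    "MLK age at death": 39,
    "Nile length (miles)": 4187,
    "OPEC members": 13,
    "Old Testament books": 39,
    "Moon diameter (miles)": 2160,
    "Empty 747 weight (lb)": 390_000,
    "Mozart born": 1756,
    "Elephant gestation (days)": 645,
    "London-Tokyo (miles)": 5959,
    "Deepest ocean point (ft)": 36_198,
}

answers = {  # hypothetical (low, high) guesses
    "MLK age at death": (35, 45),
    "Nile length (miles)": (2000, 4000),
    "OPEC members": (10, 15),
    "Old Testament books": (20, 35),
    "Moon diameter (miles)": (1500, 3000),
    "Empty 747 weight (lb)": (100_000, 300_000),
    "Mozart born": (1700, 1780),
    "Elephant gestation (days)": (300, 700),
    "London-Tokyo (miles)": (4000, 7000),
    "Deepest ocean point (ft)": (20_000, 40_000),
}

misses = [q for q, (low, high) in answers.items()
          if not (low <= true_values[q] <= high)]
print(f"{len(misses)} misses -> surprise index {10 * len(misses)}%")
# prints "3 misses -> surprise index 30%"
```

This hypothetical respondent misses three items (Nile, Old Testament, 747), a 30% surprise index - already three times the 10% a calibrated judge should show.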

Overconfidence - Self-Test Of Overconfidence

1. How long is the M1?
2. What is the diameter of the London Eye?
3. In what year was the construction of London Bridge completed?
4. How tall is Mount Snowdon?
5. How long is the River Thames?
6. In what year did Newcastle University become independent from the University of Durham?
7. How tall is the Elizabeth Tower (Palace of Westminster)?
8. What is the rail distance from London to Edinburgh?
9. In what year was Queen Victoria born?
10. In what year was the School of Medicine and Surgery established in Newcastle?

90% Confidence Range: Low / High

Keep a note of your intervals

Overconfidence - Self-Test Of Overconfidence

This test will give you some idea of whether you are overconfident on general knowledge questions.

True values:

1. How long is the M1? - 193 miles / 311 km
2. What is the diameter of the London Eye? - 394 ft / 120 m
3. In what year was the construction of London Bridge completed? - 1894
4. How tall is Mount Snowdon? - 3,560 ft / 1,085 m
5. How long is the River Thames? - 215 miles / 346 km
6. In what year did Newcastle University become independent from the University of Durham? - 1963
7. How tall is the Elizabeth Tower (Palace of Westminster)? - 315 ft / 96 m
8. What is the rail distance from London to Edinburgh? - 393 miles / 633 km
9. In what year was Queen Victoria born? - 1819 (24 May)
10. In what year was the School of Medicine and Surgery established in Newcastle? - 1834

Is the answer in your interval? If not, it contributes to your surprise!

Overconfidence - Calibration

Although a comprehensive assessment of calibration requires hundreds of judgments, this test will give you a rough idea of what your surprise index is with general knowledge questions at one level of confidence.

Russo and Schoemaker administered the test to more than 1,000 people and found that less than 1 % of the respondents got nine or more items correct.

Most people missed four to seven items (a surprise index of 40 % to 70 % ), indicating a substantial degree of overconfidence.

Robotics

Grove et al. (2000) review the process of making judgments and decisions that require a method for combining data, comparing the accuracy of clinical and mechanical (formal, statistical) data-combination techniques in studies of human health and behaviour. On average, mechanical-prediction techniques were about 10% more accurate than clinical predictions. Depending on the specific analysis, mechanical prediction substantially outperformed clinical prediction in 33%-47% of the studies examined.

Although clinical predictions were often as accurate as mechanical predictions, in only a few studies (6%-16%) were they substantially more accurate. Superiority for mechanical-prediction techniques was consistent, regardless of the judgment task, type of judges, judges' amounts of experience, or the types of data being combined. Clinical predictions performed relatively less well when predictors included clinical interview data.

These data indicate that mechanical predictions of human behaviours are equal or superior to clinical prediction methods for a wide range of circumstances.

Robotics

Dietvorst et al. (2015) show that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster.

This phenomenon, called algorithm aversion, is costly, and it is important to understand its causes. People are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake.

In 5 studies, participants who saw an algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Robotics

How the Robots Lost: High-Frequency Trading's Rise and Fall - Businessweek - 6 June 2013

For the first time since its inception, high-frequency trading (HFT), the bogey machine of the markets, is in retreat. According to estimates from Rosenblatt Securities, as much as two-thirds of all stock trades in the U.S. from 2008 to 2011 were executed by high-frequency firms; today it’s about half. In 2009, high-frequency traders moved about 3.25 billion shares a day. In 2012, it was 1.6 billion a day. Speed traders aren’t just trading fewer shares, they’re making less money on each trade. Average profits have fallen from about a tenth of a penny per share to a twentieth of a penny.

In defence of ‘high frequency’ traders - FT - 2 Aug 2009

High-speed traders benefit US equity market – Rosenblatt - 8 Oct 2009

The Future of HFT? More Risk and Fewer Easy Pickings – Rosenblatt - 12 February 2013

US traders’ algorithms face tougher scrutiny - FT - 24 Nov 2015

Robotics

Stoll (2014) reports that high-speed trading has drawn the attention of regulators who fear that such trading harms markets and leads to excessive speculation. The Flash Crash of May 6, 2010 is taken as evidence of the potential harmful effects of high-frequency trading.

On the other hand, some view high-frequency trading as a manifestation of technological advances that have reduced the optimal trade size and improved order routing. From that perspective, high-speed trading is a continuation of a long-standing trend to more rapid and more efficient trading. This study shows that the speed of trading has changed dramatically. The average number of trades per day for large-cap New York Stock Exchange stocks has risen from about 500 to more than 40,000 in the period 1993-2011. At the same time, the average trade size has fallen from 1,600 shares to 200 shares. The ultimate sources of these changes are the technology that has automated almost all aspects of trading and the regulatory developments that have helped reduce bid-ask spreads and made markets more accessible. The result of these developments is that markets are considerably more liquid and less costly. High-frequency traders draw on this liquidity and also contribute to it.

Flash Crash 2010

Overconfidence - The Correlation

Between Confidence And Accuracy

Overconfidence notwithstanding, it is still possible for confidence to be correlated with accuracy.

To take an example, suppose a decision maker were

50 % accurate when 70 % confident

60 % accurate when 80 % confident

70 % accurate when 90 % confident

In such a case confidence would be perfectly correlated with accuracy, even though the decision maker would be uniformly overconfident by 20%.

Overconfidence - The Correlation

Between Confidence And Accuracy

The question arises, then, whether confidence is correlated with accuracy - regardless of whether decision makers are overconfident.

If confidence ratings increase when accuracy increases, then accuracy can be predicted as a function of how confident a decision maker feels. If not, then confidence is a misleading indicator of accuracy.

Many studies have examined this issue, and the results have often shown very little relationship between confidence and accuracy.

Overconfidence - The Correlation

Between Confidence And Accuracy

To illustrate, consider the following two problems concerning military history:

Problem 1. The government of a country not far from Superpower A, after discussing certain changes in its party system, began broadening its trade with Superpower B. To reverse these changes in government and trade, Superpower A sent its troops into the country and militarily backed the original government.

Who was Superpower A - the United States or the Soviet Union?

How confident are you that your answer is correct?

Overconfidence - The Correlation

Between Confidence And Accuracy

Problem 2. In the 1960s Superpower A sponsored a surprise invasion of a small country near its border, with the purpose of overthrowing the regime in power at the time. The invasion failed, and most of the original invading forces were killed or imprisoned.

Who was Superpower A - the United States or the Soviet Union?

How confident are you that your answer is correct?

Overconfidence - The Correlation

Between Confidence And Accuracy

If you guessed the Soviet Union in the first problem and the United States in the second, you were right on both counts. The first problem describes the 1968 Soviet invasion of Czechoslovakia, and the second describes the American invasion of the Bay of Pigs in Cuba.

Is modern history still taught?

Most people miss at least one of these problems, despite whatever confidence they feel.

Overconfidence - The Correlation

Between Confidence And Accuracy

In the November 1984 issue of Psychology Today magazine, Plous and Zimbardo published the results of a reader survey that contained both of these problems and a variety of others on superpower conflict.

The survey included 10 descriptions of events, statements, or policies related to American and Soviet militarism, but in each description, all labels identifying the United States and Soviet Union were removed. The task for readers was to decide whether “Superpower A” was the United States or the Soviet Union, and to indicate on a 9-point scale how confident they were of each answer.

Overconfidence - The Correlation

Between Confidence And Accuracy

Based on surveys from 3,500 people, they were able to conclude two things.

First, respondents were not able to tell American and Soviet military actions apart. Even though they would have averaged 5 items correct out of 10 just by flipping a coin, the overall average from readers of Psychology Today - who were more politically involved and educated than the general public - was 4.9 items correct.

Overconfidence - The Correlation

Between Confidence And Accuracy

Only 54% of the respondents correctly identified the Soviet Union as Superpower A in the invasion of Czechoslovakia, and 25% mistook the United States for the Soviet Union in the Bay of Pigs invasion.

These findings suggested that Americans were condemning Soviet actions and policies largely because they were Soviet, not because they were radically different from American actions and policies.

Overconfidence - The Correlation

Between Confidence And Accuracy

The second thing they found was that people's confidence ratings were virtually unrelated to their accuracy (the average correlation between confidence and accuracy for each respondent was only 0.08, very close to zero).

On the whole, people who got nine or ten items correct were no more confident than less successful respondents, and highly confident respondents scored about the same as less confident respondents.

Overconfidence - The Correlation

Between Confidence And Accuracy

This does not mean that confidence ratings were made at random; highly confident respondents differed in a number of ways from other respondents.

Two-thirds of all highly confident respondents (i.e., those who averaged more than 8 on the 9-point confidence scale) were male, even though the general sample was split evenly by gender, and 80% were more than 30 years old.

Overconfidence - The Correlation

Between Confidence And Accuracy

Twice as many highly confident respondents as less confident respondents wanted to increase defence spending, and nearly twice as many felt that the Soviet government could not be trusted at all.

Yet the mean score these respondents achieved on the survey was 5.1 items correct - almost exactly what would be expected by chance responding. Thus, highly confident respondents could not discriminate between Soviet and American military actions, but they were very confident of misperceived differences and advocated increased defence spending.

Overconfidence - The Correlation

Between Confidence And Accuracy

As mentioned earlier, many other studies have found little or no correlation between confidence and accuracy (Paese and Sniezek, 1991; Ryback, 1967; Sniezek and Henry, 1989, 1990; Sniezek et al., 1990).

This general pattern is particularly apparent in research on eyewitness testimony. By and large, these studies suggest that the confidence eyewitnesses feel about their testimony bears little relation to how accurate the testimony actually is (Brown, Deffenbacher, and Sturgill, 1977; Clifford and Scott, 1978; Leippe, Wells, and Ostrom, 1978).

Recall research on the police.

Overconfidence - The Correlation

Between Confidence And Accuracy

In a review of 43 separate research findings on the relation between accuracy and confidence in eye and ear witnesses, Deffenbacher (1980) found that in two-thirds of the “forensically relevant” studies (e.g. studies in which subjects were not instructed in advance to watch for a staged crime), the correlation between confidence and accuracy was not significantly positive.

Findings such as these led Loftus (1979, p. 101), author of Eyewitness Testimony , to caution: “One should not take high confidence as any absolute guarantee of anything.”

Overconfidence - The Correlation

Between Confidence And Accuracy

Flowe et al. (2015) examined the influence of alcohol on memory for an interactive hypothetical sexual assault scenario in the laboratory. The accuracy of the information intoxicated participants reported did not differ from that of sober participants.

Additionally, peripheral details were remembered less accurately than central details, regardless of the intoxication level; and memory accuracy for peripheral details decreased by a larger amount compared to central details across the retention interval. These results challenge the misconception that intoxicated victims and witnesses are unreliable.

Overconfidence - The Correlation

Between Confidence And Accuracy

Similar results have been found in clinical research.

In one of the first experiments to explore this topic, Goldberg (1959) assessed the correlation between confidence and accuracy in clinical diagnoses.

Goldberg was interested in whether clinicians could accurately detect organic brain damage on the basis of protocols from the Bender-Gestalt test (a test widely used to diagnose brain damage).

Overconfidence - The Correlation

Between Confidence And Accuracy

He presented 30 different test results to four experienced clinical psychologists, ten clinical trainees, and eight non-psychologists (secretaries).

Half of these protocols were from patients who had brain damage, and half were from psychiatric patients who had non-organic problems.

Judges were asked to indicate whether each patient was “organic” or “non-organic,” and to indicate their confidence on a rating scale labelled “Positively,” “Fairly certain,” “Think so,” “Maybe,” or “Blind guess.”

Overconfidence - The Correlation

Between Confidence And Accuracy

Goldberg found two surprising results.

First, all three groups of judges - experienced clinicians, trainees, and non-psychologists - correctly classified 65% to 70% of the patients. There were no differences based on clinical experience; secretaries performed as well as psychologists with four to ten years of clinical experience.

Second, there was no significant relationship between individual diagnostic accuracy and degree of confidence.

Judges were generally as confident on cases they misdiagnosed as on cases they diagnosed correctly.

Overconfidence - The Correlation

Between Confidence And Accuracy

Subsequent studies have found miscalibration in diagnoses of cancer, pneumonia (Bushyhead, 1981), and other serious medical problems (Centor et al., 1984; Christensen-Szalanski and Bushyhead, 1981; Wallsten, 1981).

Overconfidence - How Can

Overconfidence Be Reduced?

In a pair of experiments on how to improve calibration, Lichtenstein and Fischhoff (1980) found that people who were initially overconfident could learn to be better calibrated after making 200 judgments and receiving intensive performance feedback.

Likewise, Arkes and his associates found that overconfidence could be eliminated by giving subjects feedback after five deceptively difficult problems (Arkes et al. 1987).

Overconfidence - How Can

Overconfidence Be Reduced?

These studies show that overconfidence can be unlearned, although their applied value is somewhat limited. Few people will ever undergo special training sessions to become well calibrated.

What would be useful is a technique that decision makers could carry with them from judgment to judgment - something lightweight, durable, and easy to apply in a range of situations. And indeed, there does seem to be such a technique.

Overconfidence - How Can

Overconfidence Be Reduced?

The most effective way to improve calibration seems to be very simple:

Stop to consider reasons why your judgment might be wrong.

The value of this technique was first documented by Koriat et al. (1980). In this research, subjects answered two sets of two-alternative general knowledge questions, first under control instructions and then under reasons instructions.

Overconfidence - How Can

Overconfidence Be Reduced?

Under control instructions, subjects chose an answer and estimated the probability (between 0.50 and 1.00) that their answer was correct.

Under reasons instructions, they were asked to list reasons for and against each of the alternatives before choosing an answer.

Overconfidence - How Can

Overconfidence Be Reduced?

Koriat et al. (1980) found that under control instructions, subjects showed typical levels of overconfidence, but after generating pro and con reasons, they became extremely well calibrated (roughly comparable to subjects who were given intensive feedback in the study by Lichtenstein and Fischhoff).

After listing reasons for and against each of the alternatives, subjects were less confident (primarily because they used 0.50 more often and 1.00 less often) and more accurate (presumably because they devoted more thought to their answers).

Overconfidence - How Can

Overconfidence Be Reduced?

In a follow-up experiment, Koriat et al. (1980) found that it was not the generation of reasons per se that led to improved calibration; rather, it was the generation of opposing reasons.

When subjects listed reasons in support of their preferred answers, overconfidence was not reduced.

Calibration improved only when subjects considered reasons why their preferred answers might be wrong.

Overconfidence - How Can

Overconfidence Be Reduced?

Although these findings may be partly a function of “social demand characteristics” (i.e., subjects feeling cued by instructions to tone down their confidence levels), other studies have confirmed that the generation of opposing reasons improves calibration (e.g. Hoch, 1985).

These results are reminiscent of the study by Slovic and Fischhoff (1977), in which hindsight biases were reduced when subjects thought of reasons why certain experimental results might have turned out differently than they did.

Overconfidence - How Can

Overconfidence Be Reduced?

Since the time of Slovic and Fischhoff’s study, several experiments have shown how various judgment biases can be reduced by considering the possibility of alternative outcomes or answers (Griffin, Dunning, and Ross, 1990; Hoch, 1985; Lord, Lepper, and Preston, 1984).

As Lord, Lepper, and Preston (1984, p. 1239) pointed out: “The observation that humans have a blind spot for opposite possibilities is not a new one. In 1620, Francis Bacon wrote that 'it is the peculiar and perpetual error of human intellect to be more moved and excited by affirmatives than by negatives.’”

Overconfidence - Conclusion

It is important to keep research on overconfidence in perspective.

In most studies, average confidence levels do not exceed accuracy by more than 10 % to 20 % .

Consequently, overconfidence is unlikely to be catastrophic unless decision makers are nearly certain that their judgments are correct. As the explosion of the space shuttle illustrates, the most devastating form of miscalibration is inappropriate certainty.

Overconfidence - Conclusion

Taken together, the studies in this chapter suggest several strategies for dealing with miscalibration:

First, you may want to flag certain judgments for special consideration. Overconfidence is greatest when judgments are difficult or confidence is extreme. In such cases, it pays to proceed cautiously.

Overconfidence - Conclusion

Second, you may want to “recalibrate” your confidence judgments and the judgments of others.

As Lichtenstein and Fischhoff (1977) observed, if a decision maker is 90 % confident but only 70 % to 75 % accurate, it is probably best to treat “90 % confidence” as though it were “70 % to 75 % confidence.”
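Such a recalibration can be expressed as a simple lookup from stated confidence to empirically observed accuracy. A minimal sketch in this spirit; the calibration table below is illustrative, not data from Lichtenstein and Fischhoff:

```python
# Recalibration sketch: replace a stated confidence level with the
# accuracy empirically observed at that level. The table is hypothetical.

calibration_table = {0.7: 0.58, 0.8: 0.63, 0.9: 0.72, 1.0: 0.85}

def recalibrate(stated_confidence):
    """Map stated confidence to the accuracy observed at the nearest
    tabulated confidence level."""
    nearest = min(calibration_table, key=lambda c: abs(c - stated_confidence))
    return calibration_table[nearest]

print(recalibrate(0.9))  # 0.72 - treat "90% confident" as ~72% accurate
print(recalibrate(1.0))  # 0.85 - never take 100% confidence at face value
```

In practice the table would be estimated from a decision maker's track record: bin past judgments by stated confidence and record the proportion correct in each bin.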

Overconfidence - Conclusion

Along the same lines, you may want to automatically convert judgments of “100 % confidence” to a lesser degree of confidence.

One hundred percent confidence is especially unwarranted when predicting how people will behave (Dunning, Griffin, Milojkovic, and Ross, 1990).

Above all, if you feel extremely confident about an answer, consider reasons why a different answer might be correct. Even though you may not change your mind, your judgments will probably be better calibrated.

A Useful Bibliography

Overconfidence - Conclusion

Experiments demonstrating the existence of overconfidence as a human psychological bias have been performed (Jemaiel et al., 2013).

The bias was measured by three methods: the estimation interval, the frequency estimation method and the method of question with two answer choices.

The estimation-interval method finds a much larger bias than the other two methods, but overconfidence persists at lower levels in the other two as well.

It was demonstrated that there is a strong link between over-confidence and risk taking.

Overconfidence

Sometimes having the inflated sense of abilities that accompanies overconfidence can lead to very bad outcomes. For instance, in corporate finance, the tendency of companies to overbid for projects and take on commitments they perhaps cannot manage is known as the winner's curse. In the strip, Dilbert's boss has clearly fallen victim to this inclination.

Cartoon (Kramer 2014)

Guidelines For Eliciting Expert

Knowledge

Kynn (2008) offers a guide to the psychological research on assessing probabilities, both old and new, and gives concrete guidelines for eliciting expert knowledge. The conclusions are summarized as 10 recommendations for eliciting probability estimates from experts.

Before the Elicitation

Recommendation 1. In terms of calibration, training questions are valuable only when directly related to test questions. However, this does not negate the benefit of familiarizing the expert with the elicitation process.

Recommendation 2. Scoring rules can be used as a training device, but they need to be transparent to the expert.

Recommendation 3. A brief review of basic probability concepts may be helpful.

During the Elicitation

Recommendation 4. Only ask questions from within the area of expertise by using familiar measurements.

Recommendation 5. Decompose the elicitation into tasks that are as ‘small’ and distinct as possible. Any assessments that can be combined or computed mechanically should be done with a computer, not in the expert’s head. Check for coherency — do not expect the expert to be coherent without aid.

During the Elicitation

Recommendation 6. Be specific with wording: use a frequency representation where possible with an explicit reference class or make sure that the set relations within the problem are transparent.

Recommendation 7. Do not lead the expert by providing sample numbers on which the expert may anchor. Consider the effect of positively or negatively framing questions; if a neutral framing is not possible, consider asking the same question in different ‘frames’ to allow the expert to double-check assessments.

During the Elicitation

Recommendation 8. Ask the expert for, or provide the expert with, specific alternatives to the focal hypothesis; ask the expert to discuss estimates, giving evidence both for and against the focal hypothesis. Consider allowing competing hypotheses to be assessed separately and compared by a ratio.

During the Elicitation

Recommendation 9. Offer process feedback about the task and probability assessments; for example, offer different representations of probability (say graphical), give summaries of the assessments made, and allow the expert to reconsider estimates. There has long been speculation that graphical and numerical presentations of risk statistics differ in their impact on people’s willingness to pursue actions that could harm or even kill them (Chua et al., 2006; Smerecnik et al., 2010).

After the Elicitation

Recommendation 10. If possible, duplicate the elicitation procedure with the same expert at a later date to check the self-consistency of the expert.

In Summary

A summary table of the process has been proposed by Edwards and Fasolo (2001).

Step 1. Identification and selection of the experts;
Step 2. Training in probability judgments;
Step 3. Presentation and discussion of the uncertain events and quantities;
Step 4. Analysis and data collection;
Step 5. Presentation and discussion of the results of step 4;
Step 6. Elicitation;
Step 7. Analysis, aggregation, and documentation.

Are Graphs A Good Idea?

Duclos (2014) examines how people process graphical displays of financial information (e.g., stock prices) to forecast future trends and invest accordingly.

The last trading day(s) of a stock bear a disproportionately (and unduly) high importance in investment behaviour, an effect termed end-anchoring. A stock price closing upward (downward) fosters upward (downward) forecasts for tomorrow and, accordingly, more (less) investing in the present.

The psychology of investment behavior: (De)biasing financial decision-making one graph at a time - Duclos - 2014

Are Graphs A Good Idea?

When experiences are made of successive episodes spanning from past to future, consumers rely on the end of one episode to predict what will happen in the next. With respect to financial decision-making, the graphical representation of a stock-price can unduly anchor investment behaviour.

As they contemplate the retrospective performance of a stock, investors rely disproportionately on the most recent trade activity (i.e., the end of a series) to infer how the stock will fare today.

As a result, stocks whose last price-fluctuation followed an upward (downward) trajectory foster upward (downward) expectations for the future; hence increase (decrease) one's willingness to purchase shares in the present.

Are Graphs A Good Idea?

End-anchoring is more likely to occur when information is reviewed visually (i.e., graphically, as is often the case in the real world) than numerically.

Indeed lines on a graph foster a greater sense of continuity over time (than tabular displays) since each new day is visibly/directly linked to its predecessor. This sense of continuity makes it in turn easier to expect and/or visualize consistency from one day to the next.

Are Graphs A Good Idea?

The research should sound a note of caution for consumers as much as for industry players and regulators.

Indeed, biases such as end-anchoring can easily lead to precipitous sales (or purchases) of assets, lopsided portfolio allocations, or other irrational behaviours.

It is thus important, both managerially and societally, to understand how displays of data impact investment behaviour. Graphic (numeric) displays encourage (mitigate) end-anchoring.

As such, it may sometimes be beneficial to convey quantitative information in less “perceptual” ways.

Because of their visual nature, graphs may indeed be more likely to foster perceptual processing and heuristic decision-making. In contrast, numeric displays may be less prone to such a pitfall (Duclos 2014).

Next Week

Probability And Risk

Actually Behavioural Traps
