Introduction Key references are Barberis and Thaler (2003), Gärling et al. (2009), Kahneman (2011), Li Calzi (2008), Plous (1993), Rabin (1996) (and later works by the same author), Shefrin (2007), Shleifer (2000). Bibliography Lecture 2 Lecture 3 1.1 1 Friday, 11 March 2016 6:40 PM Introduction What's in a name? Much of the research that has appeared in the media as work by behavioural economists has in fact been done by psychologists. For an interesting essay see “Why Behavioural Economics Is Cool, and I'm Not” by Adam Grant, a professor at the Wharton School of the University of Pennsylvania and the author of “Give and Take: A Revolutionary Approach to Success” (Viking Press, 2013). 1.2 2 Seeing Is Believing! 1.3 3 Introduction Making decisions is both tough and risky (see, for example, the reviews by Rapoport and Wallsten 1972 and Edwards and Fasolo 2001). Bad decisions can damage a business, a career or your finances, sometimes irreparably. So where do bad decisions come from? 1.4 4 Introduction In many cases, they can be traced back to the way the decisions were made: the alternatives were not clearly defined, the right information was not collected, the costs and benefits were not accurately weighed. Sometimes the fault lies not in the decision-making process but rather in the mind of the decision maker. The way the human brain works can sabotage our decisions. Researchers have identified a whole series of such flaws in the way we think when making decisions. 1.5 5 Introduction Shefrin’s (2010) insightful observation is of interest: “Finance is in the midst of a paradigm shift, from a neoclassical based framework to a psychologically based framework. Behavioural finance is the application of psychology to financial decision making and financial markets. Behaviouralising finance is the process of replacing neoclassical assumptions with behavioural counterparts. … the future of finance will combine realistic assumptions from behavioural finance and rigorous analysis from neoclassical finance.” 1.6 6 Introduction If irrational traders cause deviations from a “true” value, rational traders will often be powerless to do anything about it (Barberis and Thaler 2003). Current examples are the oil price (which is too high) and the gold or silver prices (where there has been artificial shorting going on). In view of this, economists turn to the extensive experimental evidence compiled by cognitive psychologists on the systematic biases that arise when people form beliefs, and on people’s preferences. (Shorting - the sale of a borrowed security, commodity or currency with the expectation that the asset will fall in value.) 1.7 7 Introduction For a useful link to topics in Daniel Kahneman’s text "Thinking, Fast and Slow", see Shim Marom's overview; it provides an overview rather than a detailed analysis. Also of interest is "Common Flaws With How We Think", Forbes, 2/11/2014, Ross Pomeroy. For a concise overview see "Homo economicus – or more like Homer Simpson?", Schneider 2010, Deutsche Bank Research. These are intended as background reading.
1.8 8 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.9 9 1.10 10 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.11 11 Overconfidence Extensive evidence shows that people are overconfident in their judgments. 1 The confidence intervals people assign to their estimates of quantities – for example, the level of the stock market in a year's time – are far too narrow. Their 98% confidence intervals, for example, include the true quantity only about 60% of the time (Alpert and Raiffa, 1982). 1.12 12 Overconfidence Extensive evidence shows that people are overconfident in their judgments. 2 People are poorly calibrated when estimating probabilities. Events they think are certain to occur actually occur only around 80% of the time, and events they deem impossible occur approximately 20% of the time (Fischhoff et al. 1977). 1.13 13 Overconfidence An example from David J. Spiegelhalter and coworkers. The Great Ormond Street Hospital in London (GOS) specialises in children's diseases and acts as a regional centre for the South East of England. Whenever a blue baby is born, the paediatrician telephones GOS and a diagnosis is made. It is then decided whether or not to send the child to GOS for treatment. A Bayesian model, derived by experts, has been used for the diagnosis of congenital heart disease. 1.14 14 Overconfidence Evaluation Of A Diagnostic Algorithm For Heart-Disease In Neonates. Franklin, R.C.G., Spiegelhalter, D.J., Macartney, F.J. and Bull, K. British Medical Journal, 302, 935-939, 1991. 1.15 15 Overconfidence 1.16 16 Overconfidence On the first day a baby exhibited a mix of symptoms that the experts said would never arise together. The impossible occurred! 1.17 17 Overconfidence Overconfidence may in part stem from two other biases (self-attribution and hindsight bias). 3 Self-attribution bias refers to people’s tendency to ascribe any success they have in some activity to their own talents, while blaming failure on bad luck rather than on their ineptitude. Doing this repeatedly will lead people to the pleasing but erroneous conclusion that they are very talented. 1.18 18 Overconfidence We like to exploit the luck of others (BPS). Psychologists have documented the many irrational ways we think about luck, from the fact that we prefer to make our own choice in gambling games (thus increasing our sense of control) to our belief in lucky runs or hot numbers (Wohl and Enzle, 2009). 1.19 19 Overconfidence Bad luck really can be reversed by a touching-wood ritual, say scientists (The Telegraph, 2 Oct. 2013). In five separate experiments, researchers had participants either tempt fate or not and then engage in an action that was either avoidant or not. The avoidant actions included those that were superstitious – like knocking on wood – or non-superstitious – like throwing a ball. They found that those who knocked down (away from themselves) or threw a ball believed that a jinxed negative outcome was less likely than participants who knocked up (toward themselves) or held a ball (Zhang et al. 2013).
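To make the calibration findings above concrete, here is a minimal simulation sketch (not from the lecture sources): if people's subjective uncertainty is much narrower than their actual estimation error, nominal 98% intervals end up containing the truth only about 60% of the time. The ratio of assumed to actual error spread (0.36) is an assumption chosen purely to reproduce the Alpert and Raiffa hit rate.

```python
# Illustrative only: overconfident interval calibration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000
error_sd = 1.0                 # actual standard deviation of estimation errors
assumed_sd = 0.36 * error_sd   # the (too small) uncertainty people believe they have
z98 = norm.ppf(0.99)           # half-width multiplier for a two-sided 98% interval

truth = np.zeros(n)                      # true values (the location is irrelevant here)
estimates = rng.normal(truth, error_sd)  # point estimates with realistic error
half_width = z98 * assumed_sd            # the interval half-width people actually report
hits = np.abs(estimates - truth) <= half_width

print(f"nominal coverage: 98%, simulated coverage: {hits.mean():.1%}")
# prints roughly 60% - the "surprise" rate is ~40% instead of the intended 2%
```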
1.20 20 Overconfidence For example, investors might become overconfident after several quarters of investing success (Gervais and Odean, 2001). In an experimental asset market where agents trade one risky asset, Maciejovsky and Kirchler (2002) find the largest overconfidence towards the end of the experiment, when the participants gain more experience and start to rely more heavily on their (overestimated) knowledge. This finding indicates that overconfidence may be subject to modifications, which goes back to the crucial role of clear, rapid feedback in shaping individual overconfidence levels (Russo and Schoemaker 1992). 1.21 21 Overconfidence Overconfidence may in part stem from two other biases (self-attribution and hindsight bias). 1 Hindsight bias is the tendency of people to believe, after an event has occurred, that they predicted it before it happened. 2 If people think they predicted the past better than they actually did, they may also believe that they can predict the future better than they actually can. 1.22 22 Overconfidence Overconfidence is the tendency to be overly optimistic, to overestimate one's own abilities, or to believe one's information is more precise than it really is. In the strip, Dilbert's boss falls victim to this bias when he assumes that all managers (presumably including himself) are better than average, all the while not recognizing Dilbert's impolite jab at his poor maths skills (cartoon: Kramer 2014). For a broad interdisciplinary review see Skala (2008). 1.23 23 Beliefs – Overconfidence Self-attribution or Self-Serving Bias In the same vein as overconfidence is the self-attribution or self-serving bias. This is when investors are quick to take credit for portfolio gains, but just as quick to blame losses on outside factors like market forces or the Bank of China. Much like an athlete blaming the referee for a loss, self-serving bias helps investors avoid accountability. Although you might feel better by following this bias, you will be cheating yourself out of a valuable opportunity to improve your investing intelligence. If you've never made a mistake in the market, you'll have no reason to develop better investing skills and your returns will reflect it. 1.24 24 Certainty Gigerenzer et al. (2008) report on the illusion of certainty. Shown are results from face-to-face interviews conducted in 2006, in which a representative sample of 1,016 German citizens was asked: “Which of the following tests are absolutely certain?” 1. DNA test (vote now!) 2. Fingerprint test (higher or lower than those above?) 3. HIV test (higher or lower than those above?) 4. Mammogram (higher or lower than those above?) 5. Horoscope (higher or lower than those above?) 1.25 25 Certainty A large proportion of the general public have illusory certainty about the perfection of tests, including HIV testing and mammography. This illusion is not simply a product of the individual mind but has its historical origins in deterministic medical science. Today, it is fuelled by health messages that claim or suggest certainty.
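The survey result can be tied back to simple probability. As a hedged illustration (the numbers below are assumptions made for this sketch, not Gigerenzer et al.'s data), even a highly accurate screening test is far from "absolutely certain" once the rarity of the condition is taken into account:

```python
# Sketch: why no screening test is "absolutely certain".
# All three inputs are illustrative assumptions, not figures from the lecture.
prevalence = 0.0001        # assumed base rate of the condition in those screened
sensitivity = 0.999        # assumed P(positive test | condition present)
specificity = 0.9997       # assumed P(negative test | condition absent)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive   # P(condition | positive test), by Bayes' law
print(f"P(condition | positive test) = {ppv:.1%}")
# about 25%: most positives are false alarms despite the test's accuracy
```

The point is not the particular numbers but the structure: claims of certainty ignore the base rate, a theme that returns below under representativeness.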
1.26 26 1.27 27 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.28 28 Overconfidence - Avoidance To reduce the effects of overconfidence in making estimates, always start by considering the extremes, the low and high ends of the possible range of values. This will help you avoid being anchored by an initial estimate. Then challenge your estimates of the extremes. Try to imagine circumstances where the actual figure would fall below your low or above your high, and adjust your range accordingly. Challenge the estimates of your subordinates and advisers in a similar fashion. They're also susceptible to overconfidence (Hammond et al., 1998/2006). 1.29 29 1.30 30 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.31 31 Prudence Another problem takes the form of overcautiousness, or prudence. When faced with high-stakes decisions, we tend to adjust our estimates or forecasts “just to be on the safe side.” 1.32 32 Prudence Many years ago, for example, one of the Big Three U.S. automakers was deciding how many of a new-model car to produce in anticipation of its busiest sales season. The market-planning department, responsible for the decision, asked other departments to supply forecasts of key variables such as anticipated sales, dealer inventories, competitor actions, and costs. 1.33 33 Prudence Knowing the purpose of the estimates, each department slanted its forecast to favour building more cars, “just to be safe.” But the market planners took the numbers at face value and then made their own “just to be safe” adjustments. Not surprisingly, the number of cars produced far exceeded demand, and the company took six months to sell off the surplus, resorting in the end to promotional pricing (Hammond et al. 1998/2006). 1.34 34 Prudence Policy makers have gone so far as to codify overcautiousness in formal decision procedures. An extreme example is the methodology of “worst-case analysis,” which was once popular in the design of weapons systems and is still used in certain engineering and regulatory settings. Using this approach, engineers designed weapons to operate under the worst possible combination of circumstances, even though the odds of those circumstances actually coming to pass were infinitesimal. 1.35 35 Prudence Worst-case analysis added enormous costs with no practical benefit (in fact, it often backfired by touching off an arms race), proving that too much prudence can sometimes be as dangerous as too little. 1.36 36 Prudence However, maybe we can be more careful: consider the list of bridge failures, most famously the Tacoma Narrows Bridge (Galloping Gertie, 7 November 1940). The Tay Bridge disaster occurred during a violent storm on 28 December 1879, when the first Tay Rail Bridge collapsed while a train was passing over it from Wormit to Dundee, killing all aboard. For William McGonagall's poem on this subject, see The Tay Bridge Disaster.
1.37 37 1.38 38 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.39 39 Prudence - Avoidance To avoid the prudence trap, always state your estimates honestly and explain to anyone who will be using them that they have not been adjusted. Emphasize the need for honest input to anyone who will be supplying you with estimates. Test estimates over a reasonable range to assess their impact. Take a second look at the more sensitive estimates (Hammond et al. 1998/2006). 1.40 40 1.41 41 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.42 42 Recallability Even if we are neither overly confident nor unduly prudent, we can still fall into a trap when making estimates or forecasts. Because we frequently base our predictions about future events on our memory of past events, we can be overly influenced by dramatic events; those that leave a strong impression on our memory. 1.43 43 Recallability We all, for example, exaggerate the probability of rare but catastrophic occurrences such as plane crashes because they get disproportionate attention in the media. A dramatic or traumatic event in your own life can also distort your thinking. You will assign a higher probability to traffic accidents if you have passed one on the way to work. You will assign a higher chance of someday dying of cancer yourself if a close friend has died of the disease. 1.44 44 Recallability In fact, anything that distorts your ability to recall events in a balanced way will distort your probability assessments. In one experiment, lists of well-known men and women were read to different groups of people (Hammond et al. 1998/2006). 1.45 45 Recallability Unbeknownst to the subjects, each list had an equal number of men and women, but on some lists the men were more famous than the women while on others the women were more famous. Afterward, the participants were asked to estimate the percentages of men and women on each list. Those who had heard the list with the more famous men thought there were more men on the list, while those who had heard the one with the more famous women thought there were more women. 1.46 46 Recallability Corporate lawyers often get caught in the recallability trap when defending liability suits. Their decisions about whether to settle a claim or take it to court usually hinge on their assessments of the possible outcomes of a trial. Because the media tend to aggressively publicise massive damage awards (while ignoring other, far more common trial outcomes), lawyers can overestimate the probability of a large award for the plaintiff. As a result, they offer larger settlements than are actually warranted (Hammond et al. 1998/2006). 
1.47 47 1.48 48 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.49 49 Recallability - Avoidance To minimize the distortion caused by variations in recallability, carefully examine all your assumptions to ensure they're not unduly influenced by your memory. Get actual statistics whenever possible. Try not to be guided by impressions (Hammond et al. 1998/2006). 1.50 50 1.51 51 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.52 52 Optimism And Wishful Thinking Most people display unrealistically rosy views of their abilities and prospects. They also display a systematic planning fallacy: they predict that tasks (such as writing survey papers) will be completed much sooner than they actually are (Buehler et al. 1994). Typically, over 90% of those surveyed think they are above average in such domains as driving skill, ability to get along with people and sense of humour (Weinstein 1980). 1.53 53 Optimism And Wishful Thinking But maybe they were right! (Link) The average rate was 18 per 100,000 people. 1.54 54 Optimism And Wishful Thinking Driving's battle of the sexes appears to have been won by women, according to a survey. Female drivers outscored males not only in in-car tests but also when observed anonymously using one of the UK's busiest junctions - Hyde Park Corner. But another part of the survey - from Privilege Insurance - found only 28% of women reckoned they were better drivers than men, with only 13% of men thinking women were superior behind the wheel. A sample of 50 drivers faced in-car assessment while 200 were watched at Hyde Park Corner. Marked on 14 different aspects of driving, women scored 23.6 points out of a possible 30, while men managed to chalk up only 19.8 points. Scores by activity (Men / Women): Appropriate speed approaching hazards 55% / 75%; Stopping safely at amber traffic lights 44% / 85%; Negative impact on other drivers 73% / 54%; Driving too close to the vehicle in front 27% / 4%; Cutting corners when turning 68% / 43%; Adequate indication 82% / 96%; Adequate use of mirrors 46% / 79%; Effective observation (e.g. checking blind spot) 82% / 71%; Staying within the speed limit 86% / 89%; Appropriate speed for the situation 64% / 64%; Steering / control of the vehicle 100% / 96%; Talking or texting on the phone while driving 16% / 24%; Cutting dangerously in to traffic 14% / 1%; Causing an obstruction on the road 25% / 16%; Total coefficient (max 30) 19.8 / 23.6. Women are, after all, better drivers than men - Telegraph - 15 May 2015. 1.55 55 Optimism And Wishful Thinking The study tested a theoretical model of the relationship between the Big Five Personality Factors, aggressive driving and ‘risky driving outcomes’ (accidents, traffic tickets, and license suspension). It also tested the mediation effect of aggressive driving in the relationship between the five-factor personality model and risky driving outcomes. The link between personality, aggressive driving, and risky driving outcomes – testing a theoretical model, Chraif et al. 2015.
1.56 56 Optimism And Wishful Thinking The experimental results of Camerer and Lovallo (1999) confirm the better-than-average effect in the behaviour of most business owners, who forecast negative returns for an average market participant, with themselves being an exception to the rule. 1.57 57 1.58 58 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.59 59 Representativeness Also known as the Conjunction Fallacy. Erceg and Galić (2014) explored the occurrence of the overconfidence bias and the conjunction fallacy in the betting behaviour of frequent and sporadic bettors, and tested whether these were influenced by the task format (probabilities vs. frequencies). Frequent bettors (N = 67) and sporadic bettors (N = 63) estimated whether the bets on football games presented to them via an on-line questionnaire would be successful. 1.60 60 Representativeness The bets consisted of singles (one-match outcomes) and conjunctions (two-match outcomes), and were presented either in probability or frequency terms. Both frequent and sporadic bettors showed similar levels of the overconfidence bias. However, the frequent bettors committed the conjunction fallacy more often than the sporadic bettors. The presentation of the task in frequency terms significantly reduced the overconfidence bias in comparison to the evaluations in probability terms, but left the conjunction fallacy unaffected. 1.61 61 Representativeness Kahneman and Tversky (1974) show that when people try to determine the probability that a data set A was generated by a model B, or that an object A belongs to a class B, they often use the representativeness heuristic. This means that they evaluate the probability by the degree to which A reflects the essential characteristics of B. 1.62 62 Representativeness Much of the time, representativeness is a helpful heuristic, but it can generate some severe biases. The first is base rate neglect. To illustrate, Kahneman and Tversky present this description of a person named Linda: 1.63 63 Representativeness Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. 1.64 64 Representativeness Subjects were asked which of “Linda is a bank teller” (statement A) and “Linda is a bank teller and is active in the feminist movement” (statement B) is more likely. What do you think? Subjects typically assign a greater probability to B. This is, of course, impossible. 1.65 65 Representativeness This is, of course, impossible. Representativeness provides a simple explanation. The description of Linda sounds like the description of a feminist – it is representative of a feminist – leading subjects to pick B. 1.66 66 Representativeness Put differently, while Bayes’ Law says that Prob(statement B | description) = Prob(description | statement B) × Prob(statement B) / Prob(description), people apply the law incorrectly, putting too much weight on Prob(description | statement B), which captures representativeness, and too little weight on the base rate, Prob(statement B). 1.67 67 Representativeness Put differently, a Venn diagram makes the position clear.
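To make the Bayes' Law point concrete, here is a minimal sketch. The probabilities are invented for illustration (they are not Kahneman and Tversky's numbers); the structure is what matters: the small base rate Prob(bank teller) should dominate the representative-sounding description, and a conjunction can never be more probable than either of its parts.

```python
# Illustrative Bayes' Law calculation for the Linda problem (all inputs are assumed).
p_teller = 0.01              # assumed base rate: P(Linda is a bank teller)
p_desc_given_teller = 0.05   # assumed: P(such a description | bank teller)
p_desc = 0.20                # assumed: overall P(such a description)

# Bayes' Law: P(B | description) = P(description | B) * P(B) / P(description)
p_teller_given_desc = p_desc_given_teller * p_teller / p_desc
print(f"P(bank teller | description) = {p_teller_given_desc:.4f}")  # tiny, driven by the base rate

# Conjunction rule, conditional on the description:
# P(teller and feminist | desc) = P(teller | desc) * P(feminist | teller, desc),
# which can never exceed P(teller | desc), so ranking statement B above A is impossible.
p_feminist_given_teller = 0.80   # assumed
p_both = p_teller_given_desc * p_feminist_given_teller
assert p_both <= p_teller_given_desc
print(f"P(teller and feminist | description) = {p_both:.4f}")
```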
1.68 68 Representativeness Another explanation of the conjunction fallacy is the “configural weighted average (CWA) hypothesis” (Nilsson, 2008; Nilsson, Winman, Juslin and Hansson, 2009). According to the CWA hypothesis, participants assess the probability of the conjunction first by assessing the probability of each of the components in the conjunction, then assigning them weights and finally adding the weighted values. In other words, instead of multiplying the probabilities of the components, participants are averaging them, inevitably committing the conjunction fallacy. 1.69 69 Representativeness For example, the probability of the conjunction “Linda is a bank teller and is active in the feminist movement” will be assessed first by assigning greater weight to the less probable assertion (Linda is a bank teller) and lower weight to the more probable claim (Linda is active in the feminist movement), and then by aggregating these weighted claims. Thus, if a person estimates the probability of Linda being a bank teller to be 0.2, and the probability that she is active in the feminist movement to be 0.8, and then assigns weights of 0.7 to the less likely and 0.3 to the more likely claim, the integration of this information would proceed as 1.70 70 follows: Prob = 0.7 × 0.2 + 0.3 × 0.8 = 0.38 Wrong!! (The conjunction can be no larger than the less likely component, 0.2; multiplying under independence would give 0.2 × 0.8 = 0.16.) Representativeness We think a person is more likely to be a member of some group if that person is similar to a typical member of that group. If a man behaves more like a criminal (shifty eyes, etc.), then we think it is more likely he is a criminal. Bayes’ Law, of course, captures this simple intuition. 1.71 71 Representativeness Who was fatally shot? Is it the policeman? Who is the criminal? The Daily Mail, 17-9-2013: an unarmed man (left) seeking help after a car crash was shot 10 times by the Charlotte police officer who is now charged in his death. 1.72 72 Representativeness In case you were curious: Jonathan Ferrell: North Carolina cop Randall Kerrick charged with manslaughter in death of unarmed man. • A second North Carolina grand jury indicts police officer Randall Kerrick, 28, who fatally shot unarmed Jonathan Ferrell, 24, ten times last September. • The first grand jury declined to indict Kerrick on involuntary manslaughter last week. • Investigators say Kerrick shot Ferrell last September 14 as he looked for help after a car crash. • Ferrell's mother says: “I just feel like God's will, will be done.” • But Kerrick's attorneys say there was “nothing irregular or improper” about the decision of the first grand jury. • The voluntary manslaughter charge carries a prison sentence of up to 11 years. Second grand jury indicts police officer who fatally shot unarmed man as he looked for help after car crash - Daily Mail - 28 January 2014. 2nd grand jury indicts officer in shooting of ex-FAMU football player - CNN - 28 January 2014. 1.73 73 Beliefs – Representativeness Getting to grips with implicit bias Implicit attitudes are one of the hottest topics in social psychology. Now a massive new study directly compares methods for changing them. The results are both good and bad for those who believe that some part of prejudice lies in our automatic, uncontrollable reactions to different social groups.
1.74 74 Beliefs – Representativeness Getting to grips with implicit bias The implicit association test (IAT) is a simple task you can complete online at Project Implicit which records the speed of your responses when sorting targets, such as white and black faces, into different categories, such as good and bad. Even people who disavow any prejudiced beliefs or feelings can have IAT scores which show they find it easier, for example, to associate white faces with goodness and black faces with badness – a so called “implicit bias” (Lai et al. 2014). 1.75 75 Beliefs – Representativeness Getting to grips with implicit bias Lai et al. 2014 state “Many methods for reducing implicit prejudice have been identified, but little is known about their relative effectiveness. We held a research contest to experimentally compare interventions for reducing the expression of implicit racial prejudice. Teams submitted 17 interventions that were tested an average of 3.70 times each in 4 studies (total N = 17,021), with rules for revising interventions between studies. Eight of 17 interventions were effective at reducing implicit preferences for Whites compared with Blacks, particularly ones that provided experience with counter stereotypical exemplars, used evaluative conditioning methods, and provided strategies to override biases. The other 9 interventions were ineffective, particularly ones that engaged participants with others' perspectives, asked participants to consider egalitarian values, or induced a positive emotion. The most potent interventions were ones that invoked high self-involvement or linked Black people with positivity and White people with negativity. No intervention consistently reduced explicit racial preferences. Furthermore, intervention effectiveness only weakly extended to implicit preferences for Asians and Hispanics.” 1.76 76 1.77 77 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.78 78 Sample Size Neglect Representativeness also leads to another bias, sample size neglect. When judging the likelihood that a data set was generated by a particular model, people often fail to take the size of the sample into account: after all, a small sample can be just as representative as a large one. 1.79 79 Sample Size Neglect Six tosses of a coin resulting in three heads and three tails are as representative of a fair coin as 500 heads and 500 tails are in a total of 1000 tosses. Representativeness implies that people will find the two sets of tosses equally informative about the fairness of the coin, even though the second set is much more so. 1.80 80 Sample Size Neglect Sample size neglect means that in cases where people do not initially know the data-generating process, they will tend to infer it too quickly on the basis of too few data points. For instance, they will come to believe that a financial analyst with four good stock picks is talented because four successes are not representative of a bad or mediocre analyst. 1.81 81 Sample Size Neglect It also generates a “hot hand” phenomenon, whereby sports fans become convinced that a basketball player who has made three shots in a row is on a hot streak and will score again, even though there is no evidence of a hot hand in the data (Gilovich, Valone and Tversky 1985). 
This belief that even small samples will reflect the properties of the parent population is sometimes known as the “law of small numbers” (Rabin 2002). 1.82 82 Sample Size Neglect Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers Miller and Sanjurjo 2015 We find a subtle but substantial bias in a standard measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data. The mechanism is driven by a form of selection bias, which leads to an underestimate of the true conditional probability of a given outcome when conditioning on prior outcomes of the same kind. The biased measure has been used prominently in the literature that investigates incorrect beliefs in sequential decision making - most notably the Gambler's Fallacy and the Hot Hand Fallacy. Upon correcting for the bias, the conclusions of some prominent studies in the literature are reversed. The bias also provides a structural explanation of why the belief in the law of small numbers persists, as repeated experience with finite sequences can only reinforce these beliefs, on average. With discussion and critique. 1.83 83 Sample Size Neglect In situations where people do know the datagenerating process in advance, the law of small numbers leads to a gambler’s fallacy effect. If a fair coin generates five heads in a row, people will say that “tails are due”. Since they believe that even a short sample should be representative of the fair coin, there have to be more tails to balance out the large number of heads. 1.84 84 Sample Size Neglect The study (Xu and Harvey 2014) is a great example of how a simple phenomenon – the gambler's fallacy – can have unpredicted outcomes when studied in a complex real-world environment. Don't get too carried away by the rewards of online gambling however, the paper contains this telling detail: of all the bets analysed in the study, 178,947 were won and 192,359 were lost - giving overall odds of winning at 0.48. Enough to ensure the betting site's profit margin, and to suggest that on average you're going to lose more than you stake. Unless you're lucky. 1.85 85 1.86 86 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.87 87 The Law Of Small Numbers A phenomenon related to the under-use of base rates is “the law of small numbers” (Tversky and Kahneman 1971): People exaggerate how often a small group will closely resemble the parent population or underlying probability distribution that generates the group. 1.88 88 The Law Of Small Numbers We expect even small classes of students to contain very close to the typical distribution of smart ones and personable ones. Likewise, we underestimate how often a good financial analyst will be wrong a few times in a row, and how often a clueless analyst will be right a few times in a row. 1.89 89 The Law Of Small Numbers For example, Kahneman and Tversky (1982, p. 44) asked undergraduates the following question: A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50 percent of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50 percent, sometimes lower. 
1.90 90 The Law Of Small Numbers For a period of 1 year, each hospital recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days? Twenty-two percent of the subjects said that they thought it was more likely that the larger hospital recorded more such days, and 56% said that they thought the number of days would be about the same. 1.91 91 The Law Of Small Numbers Only 22% of subjects answered correctly that the smaller hospital would report more such days. This is the same fraction as guessed exactly wrong. Apparently, the subjects simply did not see the relevance of the number of childbirths per day. A large sample is less likely to stray from 50%; it will provide the better estimate. Recall the t-based confidence interval, x̄ ± t_ν·s/√n: as n increases, both t_ν and s/√n decrease, so the interval tightens. 1.92 92 The Law Of Small Numbers But surely this would not really happen? The Gates Foundation has spent about $2 billion on its goal of having 80% of minority and low-income students graduate from high school college-ready. The Foundation has mainly supported this through a “small schools” initiative that breaks existing low-performing schools into 400-student blocks. The theory is that these small schools will reduce dropout rates - even in the absence of other major improvements. 1.93 93 The Law Of Small Numbers Yet, according to Wharton School statistician Howard Wainer, the foundation may have misread the numbers when it arrived at its first prescription for American education. Wainer, who used the foundation as a case study in his 2009 book, “Picturing the Uncertain World”, says it seized on data showing small schools are overrepresented among the country's highest achievers and started pouring money into creating small high schools and subdividing big ones. Wainer, H.: Picturing the Uncertain World: How to Understand, Communicate, and Control Uncertainty through Graphical Display. (Paperback) 1.94 94 The Law Of Small Numbers But according to Wainer, adherents overlooked a troublesome fact: small schools are overrepresented among the lowest as well as the highest achievers. Why? Because the smaller a school, the more likely its overall performance can be skewed by a few good or bad students. 1.95 95 The Law Of Small Numbers Forum on Education in America - Bill & Melinda Gates Foundation, November 11, 2008: In the first four years of our work with new, small schools, most of the schools had achievement scores below district averages on reading and math assessments. In one set of schools we supported, graduation rates were no better than the statewide average, and reading and math scores were consistently below the average. The percentage of students attending college the year after graduating high school was up only 2.5 percentage points after five years. Simply breaking up existing schools into smaller units often did not generate the gains we were hoping for. 1.96 96 1.97 97 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.98 98 Conservatism While representativeness leads to an underweighting of base rates, there are situations where base rates are over-emphasized relative to sample evidence.
In an experiment run by Edwards (1968), there are two urns, one containing 3 blue balls and 7 red ones, and the other containing 7 blue balls and 3 red ones. A random draw of 12 balls, with replacement, from one of the urns yields 4 blues and 8 reds. What is the probability the draw was made from the first urn? You guess! 1.99 99 Conservatism While the correct answer is 0.97 (0.7^8 × 0.3^4 / (0.7^8 × 0.3^4 + 0.3^8 × 0.7^4); equivalently, the likelihood ratio in favour of the first urn is (0.7/0.3)^4 ≈ 29.6), most people estimate a number around 0.7, apparently overweighting the base rate of 0.5. At first sight, the evidence of conservatism appears at odds with representativeness. However, there may be a natural way in which they fit together. It appears that if a data sample is representative of an underlying model, then people overweight the data. 1.100 100 Conservatism However, if the data is not representative of any salient model, people react too little to the data and rely too much on their priors. In Edwards’ experiment, the draw of 4 blue and 8 red balls is not particularly representative of either urn, possibly leading to an over-reliance on prior information. Mullainathan (2001) presents a formal model that neatly reconciles the evidence on underweighting sample information with the evidence on overweighting sample information. 1.101 101 Conservatism Mullainathan (2001) presents a model of human inference in which people use coarse categories to make inferences. Coarseness means that rather than updating continuously as suggested by the Bayesian ideal, people change categories only when they see enough data to suggest that an alternative category better fits the data. This simple model of inference generates a set of predictions about behaviour. The author applies this framework to produce a simple model of financial markets, where it produces straightforward and testable predictions about predictability of returns, co-movement and volume. 1.102 102 1.103 103 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics End 1.104 104 Belief Perseverance There is much evidence that once people have formed an opinion, they cling to it too tightly and for too long (Lord, Ross and Lepper 1979). At least two effects appear to be at work. First, people are reluctant to search for evidence that contradicts their beliefs. Second, even if they find such evidence, they treat it with excessive scepticism. 1.105 105 Belief Perseverance Some studies have found an even stronger effect, known as confirmation bias, whereby people misinterpret evidence that goes against their hypothesis as actually being in their favour. In the context of academic finance, belief perseverance predicts that if people start out believing in the Efficient Markets Hypothesis (see the technical definitions), they may continue to believe in it long after compelling evidence to the contrary has emerged. Definitions 1.106 106 Belief Perseverance As suggested by Soufian et al. (2014), it is possible to replace the concept of the Efficient Markets Hypothesis – that financial markets always act to set prices “rationally” – with an understanding that prices change as investors constantly adapt their behaviour, allowing markets to evolve their own internal order. The latter process is known as the Adaptive Markets Hypothesis and was initially proposed by Lo (2004, 2005).
Definitions 1.107 107 Efficient Market An efficient market incorporates news into prices immediately and fully. Tests for efficiency in financial markets have been undermined by information leakage. The authors (Croxson and Reade 2013) test for efficiency in sports betting markets – real-world markets where news breaks are remarkably clean. The data deployed in the article comprise second-by-second prices and volumes from Football Betting Markets & Odds at Betfair Sportsbook markets for 1,206 professional football games. Applying a novel identification to highfrequency data, they investigate the reaction of prices to goals scored on the “cusp” of half-time. This strategy allows them to separate the market's response to major news (a goal), from its reaction to the continual flow of minor game-time news. The Betfair markets behave largely “as if” they are updating efficiently to the ticking down of the clock. On the author’s evidence, prices update swiftly and fully. 1.108 108 Efficient Market A stock market is said to be efficient if it accurately reflects all relevant information in determining security prices. Critics have asserted that share prices are far too volatile to be explained by changes in objective economic events - the October 1987 crash (Black Monday) being a case in point. Although the evidence is not unambiguous, reports of the death of the efficient market hypothesis appear premature (Malkiel 1989). 1.109 109 Stock Performance in Popular Quartiles There are fashions for everything: clothes, hairstyles, video games and hashtags on Twitter. And that applies to stock markets as well. Who can forget the enthusiasm for technology, media and telecom shares in the late 1990s? A paper (Ibbotson and Idzorek 2014) suggests that this tendency may provide a strategy for outperforming the stock market, based on the popularity of individual stocks. The authors defined the most popular stocks as those that saw the most trading in their shares as a proportion of their market value. These are most likely to be the companies that are in the news, perhaps because they have a hot new product or because many analysts are recommending them. For a time these stocks may benefit from the so-called momentum effect — a phenomenon whereby stocks that have recently risen in price continue to perform well in the short term (usually a matter of months). However, this popularity may drive such stocks up to excessive valuations from which future returns are bound to be disappointing (in 1.110 110 other words, they inevitably lose momentum sooner or later). Stock Performance in Popular Quartiles Standard deviation!! The table shows the annualised return of American stocks, based on their popularity over the preceding year. Over a period of more than 40 years, the paper finds, stocks in the least popular quartile outperformed those in the most popular segment by seven percentage points a year. The finding is significant. Academics have explained the long-term outperformance of small companies (the size effect) or those with below-average valuations (the value effect) in terms of compensation for extra risk. Small firms are more likely to go bust than large ones; cheap-looking stocks are usually cheap for a reason. 1.111 111 Stock Performance in Popular Quartiles The effect is theoretically compatible with the efficient-market hypothesis. 
But it is very hard to see how the momentum or popularity effects can be squared with the hypothesis, which supposes that all public information is already reflected in share prices and thus should be no help in determining future price movements. The psychological reasons for the popularity effect are not hard to discern. Financial assets are not like other goods; when they rise in price, demand has a tendency to increase, not decrease. An investor who hears that a friend or neighbour had made money out of a particular stock will want to jump on the bandwagon. The authors of the paper quote Ben Graham, the doyen of share analysts, as saying the market is not a weighing machine but a “voting machine whereon countless individuals register choices which are partly the product of reason and partly the product of emotion.” 1.112 112 Stock Performance in Popular Quartiles Even professional fund managers may have good reasons for following a fad. They may want to show, in their reports to clients, that they have been smart enough to buy the hottest stocks of the year. In addition, clients have a natural tendency to fire managers who have performed badly, and transfer their assets to managers who have recently beaten the market. When that happens the new managers get cash, and they are likely to use it to buy their favourite shares — by definition, those that have recently performed well. This may exacerbate the momentum effect. In turn, this may explain why the average manager does not outperform the market, even though apparently exploitable anomalies exist. Professional fund managers have their favourites; they just hang on to them for too long. Dimensions of Popularity by R.G. Ibbotson and T.M. Idzorek Drop of the pops - The Economist - 17 Jan 2015 1.113 113 Passive Investing The process of bringing diversified, affordable investment products to the masses started with investment trusts, which first appeared in the UK in the 1860s and afforded “the investor of moderate means the same advantages as large capitalists”. Open-ended mutual funds followed in the 1920s, and were boosted in the 1990s by fund supermarkets which made them more popular by removing the initial charges for investing. By contrast, passive investing is a fairly recent arrival. It did not start until the 1970s, when academic research started to highlight the fact that most active fund managers do not achieve better returns after costs than the broader market. As a proportion of the UK fund management industry, passive investing is still fairly small. The Investment Association says index-tracking open-ended funds account for about 10 per cent of overall retail funds under management. According to ETFGI, a consultancy, exchange traded funds — which also track indices, but are traded on a regulated market just like equities — account for just 4.4 per cent of mutual fund assets in Europe despite explosive growth in recent years. 1.114 114 Democratising finance: How passive funds changed investing - FT - 30 Jan 15 Cycles Numerous research works indicate that the cycle of boom and crisis can be regarded as a natural element in financial market history. On the other hand, there is a rich discussion among practitioners and academics on the origins of the recent global economic and financial crisis, which led the world into the deepest and most severe downturn since the Great Depression in the 1930s. An explanation solely based on the collapse of the U.S. housing bubble and its effects seems far too shortsighted. 
In addition to economic elucidations and rationalizations, there are also behavioural and socioeconomic explanations, which take into account the powerful social and psychological forces at work in financial markets (Fenzl et al. 2013). 1.115 115 Cycles Fenzl et al. (2013) approach the discussion from a mass-psychological perspective. Starting from the shortcomings of mainstream economic approaches in predicting market trends and their underlying trading behaviour realistically, their paper elucidates postulated mechanisms behind mass phenomena and provides a concise review of the literature on collective dynamics in financial markets. They then delineate previous research on the distinctions between mass phenomena and attempt to transfer this theoretical framework to financial markets. 1.116 116 Sociability Interestingly, Heimer (2014) found empirical evidence that social interaction is more prevalent among active rather than passive investors. Active investors tend to be male, urban, and educated; they are technologically savvy and risk-seeking. Sociability is less strongly associated with ownership of savings bonds – considered to be an extremely passive form of investing. Active investing is driven by overconfidence, and since men are more overconfident than women, men are more likely to be active traders. Specifically, using data from a discount brokerage in the U.S., Barber and Odean (2001) document that men trade 45 percent more than women. 1.117 117 Age – Gender – Ethnicity - Religion Yuce and Yap (2006) examined the investment behaviour of male and female Canadian students who participated in an investment game, investing $1,000,000 and forming portfolios of different financial assets. Risk aversion levels showed that female students are statistically more risk averse than the male students. All female students avoided futures and options investments, the risky derivative instruments. Their results also showed that the female groups did not get the top 5 returns or the bottom 5 returns; instead they obtained middle-range returns, because they invested in safer instruments. 1.118 118 Age – Gender – Ethnicity - Religion In southern Brazil, male individuals showed a higher level of financial literacy on average compared to females (Potrich et al. 2015). White males perceive negative outcomes to be less probable than white females or people from ethnic minorities (Olofsson and Rashid 2011). 1.119 119 Age – Gender – Ethnicity - Religion Hong et al. (2004) used a representative sample of older households to show that social individuals – those who claim to “know their neighbours,” “visit their neighbours,” or “attend church” – are more likely to be stock market participants. 1.120 120 Age – Gender – Ethnicity - Religion Kanagaretnam et al. (2015) examined religiosity and risk-taking in international banking. Individuals who are more religious have greater risk aversion. There is a positive relation between religiosity and both financial accounting transparency and timely recognition of bad news. Banks located in more religious countries exhibit lower levels of risk in their decision-making, and were less likely to encounter financial difficulty or fail during the 2007–2009 financial crisis. Also affected by technology? Aversion to risk hampers growth of German fintech sector - FT - 8 October 2015 (fintech – financial technology). 1.121 121 Emotion There is considerable evidence that stock prices are not driven by fundamentals and that emotions play a major role.
Shiller (1981) highlighted emotionally driven excess market volatility, which has been hotly debated ever since. But after 30 years of empirical efforts to explain excess volatility and prove the efficiency of markets, Shiller (2003) stood by his initial assertion: 1.122 122 Emotion “After all the efforts to defend the efficient markets theory there is still every reason to think that, while markets are not totally crazy, they contain quite substantial noise, so substantial that it dominates the movements in the aggregate market. The efficient markets model, for the aggregate stock market, has still never been supported by any study effectively linking stock market fluctuations with subsequent fundamentals.” The fact that noise, rather than fundamentals, dominates market price movements is clear evidence that crowds dominate stock pricing (Howard 2013). 1.123 123 Emotion Benartzi and Thaler (1993), however, provide an emotional explanation. “The equity premium puzzle refers to the empirical fact that stocks have outperformed bonds over the last century by a surprisingly large margin. We offer a new explanation based on two behavioural concepts. First, investors are assumed to be “loss averse,” meaning that they are distinctly more sensitive to losses than to gains. Second, even long-term investors are assumed to evaluate their portfolios frequently. We dub this combination “myopic loss aversion.” Using simulations, we find that the size of the equity premium is consistent with the previously estimated parameters of prospect theory if investors evaluate their portfolios annually.” 1.124 124 Emotion The observed 7% equity premium is thus the result of short-term loss aversion and the investor ritual of evaluating portfolio performance annually, rather than the result of fundamental risk. Putting Shiller’s research together with Benartzi and Thaler’s analysis, it is reasonable to conclude that both stock market volatility and long-term returns are largely determined by investor emotions (Howard 2013). 1.125 125 1.126 126 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.127 127 Anchoring Kahneman and Tversky (1974) argue that when forming estimates, people often start with some initial, possibly arbitrary value, and then adjust away from it. Experimental evidence shows that the adjustment is often insufficient. Put differently, people “anchor” too much on the initial value. 1.128 128 Anchoring In one experiment, subjects were asked to estimate the percentage of United Nations member countries that are African. More specifically, before giving a percentage, they were asked whether their guess was higher or lower than a randomly generated number between 0 and 100. The initial random number significantly affected their subsequent estimates. 1.129 129 Anchoring Those who were asked to compare their estimate to 10% subsequently estimated 25%, while those who compared to 60% estimated 45%. Recall “Let's ask the audience” in Who Wants To Be A Millionaire, where the contestant's discussion influences the audience. Something similar occurs in auctions where the auctioneer starts at a high value before descending. In anchoring, arbitrary and irrelevant numbers bias people's judgments (Tversky and Kahneman, 1974) and decisions (Ariely et al.
2003), even when participants know that anchors are random or implausible (Chapman and Johnson, 1994). 1.130 130 Anchoring Meaningful anchors also bias judgments (e.g., Mussweiler and Strack, 2000). If decisions about credit-card repayments are anchored upon minimum-payment information, then people will repay less than they otherwise would and incur greater interest charges (Thaler and Sunstein, 2008 and Stewart, 2009). Stewart (2009) found a strong correlation between minimum payment size and actual repayment size in a survey of credit-card payments. 1.131 131 Anchoring A number of studies have pointed out experts’ susceptibility to anchoring, e.g. for car mechanics (Mussweiler et al., 2000), real estate agents (Northcraft and Neale, 1987) and legal experts (Englich and Mussweiler, 2001 and Englich et al., 2005; 2006). As Furnham and Boo (2011) summarize, expertise fails to prevent anchoring. 1.132 132 Anchoring However, task specific knowledge has been shown to reduce anchoring by Wilson et al. (1996), as well as by Wright and Anderson (1989). The divergent results on task familiarity point to different processes that elicit anchoring effects (see Crusius et al., 2012). Thus, expert statements may be biased as anchor-consistent knowledge is activated in a cognitively effortful process, whereas in more simple tasks, anchors are used intuitively as a cue to the right answer (Wegener et al., 2001; 2010). Given that the decision situations investigated in empirical anchoring studies can be expected to feature non-intuitive settings, respective experimental studies need to implement cognitively effortful tasks to uphold external validity. Connected to this is the effect of cognitive load on subject’s decision quality. Blankenship et al. (2008) show that a mental overload through time pressure and task complexity increases anchoring (Meub and Proeger 2015). 1.133 133 Beliefs – Anchoring Numbers-Only Investing Markets become volatile when investors pour in money based purely on a few figures from the financials and the analysts' predictions without knowing about the companies those numbers represent. This is called anchoring and it refers to focusing on one detail at the expense of all the others. Imagine betting on a boxing match and choosing the fighter purely by who has thrown the most punches in their last five fights. You may come out all right by picking the statistically busier fighter, but the fighter with the least punches may have won five by first-round knockouts. Clearly, any metric can become meaningless when it is taken out of context. 1.134 134 Beliefs – Anchoring Numbers-Only Investing If you believe that it is all in the numbers, then you have to react quickly to any change in the numbers to protect your profits. Numbers-only investors are the most prone to panic selling. They tend to hedge their buys with stop-loss orders that other traders will try to trigger in order to profit from shorting a stock. This strategy, called gathering in the stops, can increase market volatility for a short period of time and give the traders who short the stock a profit. What this doesn't change is the actual company beneath the stock. Short-term volatility in the stock market shouldn't affect a corporation's business operations. For example, Nike doesn't stop making shoes when its stock dips. 
1.135 135 Beliefs – Anchoring Anchoring Index (Kahneman 2011) To illustrate anchoring bias in action, psychologists Daniel Kahneman and Amos Tversky developed the anchoring index (Jacowitz and Kahneman 1995). Here is how it works. The researchers asked test subjects the following questions (Tversky and Kahneman, 1974): Is the height of the tallest redwood more or less than 1,200 feet? What is your best guess about the height of the tallest redwood? 1.136 136 Beliefs – Anchoring Anchoring Index (Kahneman 2011) The group of participants had a mean estimate of 844 feet for the question, which is about three times the actual height of a very tall redwood. A different group was given the same question, but the height value in first question was changed from 1,200 feet to 180 feet. The results from this second group illustrate the powerful effects of anchoring bias, as the mean estimate fell to 282 feet. Rather than try to reason that a 1,200-foot tree would approximate a 120-story building, people assume that there must be some factual basis to the hypothetical height value, so they adjust their estimates accordingly. 1.137 137 Beliefs – Anchoring Anchoring Index (Kahneman 2011) The anchoring index is the ratio of the differences expressed as a percentage. In the example, the difference between the two estimates (562 feet) is divided by the difference between the two anchors (1,020 feet) to arrive at 55%. According to Kahneman, the 55% anchoring index measure is fairly typical of similar experiments. 1.138 138 Beliefs – Anchoring Listing Prices Affect Estimates of Home Values Stocks are not the only aspect of personal finance that anchoring bias affects. A separate study (Northcraft and Neale 1987) asked real estate agents to estimate the value of homes on the market. The agents visited the homes and heard comprehensive information about them, including an asking price. Half the agents got an asking price that was significantly higher than the actual listing price, and half received a price significantly lower than the listing price. 1.139 139 Beliefs – Anchoring Listing Prices Affect Estimates of Home Values Each agent then had to suggest a reasonable buying price, and the lowest price at which the agent would sell the home if it were their own. The agents also had to list the factors that affected their estimates. None of the agents cited asking prices as a factor in their price estimates. Indeed, the agents touted their ability to ignore asking prices when estimating home values. 1.140 140 Beliefs – Anchoring Listing Prices Affect Estimates of Home Values The results, however, tell a very different story. The effect of anchoring bias, as measured by the anchoring index, was 41% for the agents in the experiment. In other words, 41% of them were very close to the asking price they had heard. A control group of business school students with no real estate expertise performed the same experiment, and fared only slightly worse with an anchoring index of 48%. The main difference is that the students admitted that their estimates were affected by listing price. 1.141 141 Beliefs – Anchoring The house doesn’t always win: Evidence of anchoring among Australian bookies McAlvanah and Moul (2013) examine Australian horseracing bookmakers’ responses to late scratches, instances in which a horse is abruptly withdrawn after betting has commenced. 
They observed that bookies anchor on the original odds and fail to re-adjust the odds on the remaining horses fully after a scratch, thereby earning lower profit margins and occasionally creating nominal arbitrage opportunities for bettors. They also examined which horses’ odds bookies adjust after a scratch and demonstrate diminished profit margins even after controlling for these endogenous adjustments. Their results indicate that bookies’ adjustments recover approximately 80% of the lost profit margin but that bookies forgo the remaining 20% due to systematic under-adjustment. 1.142 142 Beliefs – Anchoring Closely related is the recognition heuristic, which could be a first step in consideration set formation (Marewski et al. 2010), as it allows the choice set to be quickly reduced. This idea is consistent with research suggesting that priming a familiar brand increases the probability that it will be considered for purchase (e.g., Coates et al. 2004). Brand recognition can be even more important than attributes that are a more direct reflection of quality. For instance, in a blind test, most people preferred a jar of high-quality peanut butter to two alternative jars of low-quality peanut butter. Yet when a familiar brand label was attached to one of the low-quality jars, the preferences changed. Most (73%) now preferred the jar with the label they recognized, and only 20% preferred the unlabelled jar with the high-quality peanut butter (Hoyer and Brown 1990). Brand recognition may well dominate the taste cues, or the taste cues themselves might even be changed by brand recognition — people “taste” the brand name. 1.143 143 Beliefs – Anchoring The anchoring(-and-adjustment) heuristic has been studied in numerous experimental settings. In markets, anchors result from the publicly observable, aggregated decisions of other market participants, yet experimental studies have largely neglected this social dimension. Meub and Proeger (2015) therefore employed an experimental design with a socially derived anchor in order to implement market conditions more accurately. They find robust effects for the socially derived anchor, with the bias increasing under higher cognitive load and only weak learning effects. Comparison with a neutral anchor shows that the social context increases biased behaviour. Anchoring thus remains a valid explanation for systematically biased decisions within market contexts (Meub and Proeger 2015). 1.144 144 Beliefs – Anchoring Furnham and Boo (2011) review the literature on anchoring, including the various models, explanations and underlying mechanisms used to explain the effect. The anchoring effect is robust and has implications for all decision-making processes. The paper documents the many different domains and tasks in which the effect has been shown. It also considers mood and individual-difference (ability, personality, information style) correlates of anchoring, as well as the effect of motivation and knowledge on decisions affected by anchoring. Finally, it looks at the applications of anchoring effects in everyday life. 1.145 145 Anchoring Anchoring is closely related to confirmatory bias. 1.146 146 1.147 147 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.148 148 Beliefs – Anchoring - Avoidance Hammond et al. 1998/2006 1. Always view a problem from different perspectives.
Try using alternative starting points and approaches rather than sticking with the first line of thought that occurs to you. 2. Think about the problem on your own before consulting others to avoid becoming anchored by their ideas. 3. Be open-minded. Seek information and opinions from a variety of people to widen your frame of reference and to push your mind in fresh directions. 1.149 149 Beliefs – Anchoring - Avoidance 4. Be careful to avoid anchoring your advisers, consultants, and others from whom you solicit information and counsel. Tell them as little as possible about your own ideas, estimates, and tentative decisions. If you reveal too much, your own preconceptions may simply come back to you. 5. Be particularly wary of anchors in negotiations. Think through your position before any negotiation begins in order to avoid being anchored by the other party's initial proposal. At the same time, look for opportunities to use anchors to your own advantage - if you're the seller, for example, suggest a high, but defensible, price as an opening gambit. 1.150 150 1.151 151 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.152 152 Confirmatory Bias The most striking evidence for the confirmatory bias is a series of experiments demonstrating how providing the same ambiguous information to people who differ in their initial beliefs on some topic can move their beliefs further apart. To illustrate such polarization, Lord, Ross, and Lepper (1979) asked 151 undergraduates to complete a questionnaire that included three questions on capital punishment. 1.153 153 Confirmatory Bias Later, 48 of these students were recruited to participate in another experiment. Twenty-four of them were selected because their answers to the earlier questionnaire indicated that they were “proponents” who favoured capital punishment, believed it to have a deterrent effect, and thought most of the relevant research supported their own beliefs. 1.154 154 Confirmatory Bias Twenty-four were opponents who opposed capital punishment, doubted its deterrent effect and thought that the relevant research supported their views. These subjects were then asked to judge the merits of randomly selected studies on the deterrent efficacy of the death penalty, and to state whether a given study (along with criticisms of that study) provided evidence for or against the deterrence hypothesis. 1.155 155 Confirmatory Bias Subjects were then asked to rate, on 17 point scales ranging from -8 to +8, how the studies they had read moved their attitudes towards the death penalty, and how they had changed their beliefs regarding its deterrent efficacy. Lord, Ross and Lepper (1979, pp. 2102-4) summarize the basic results (all of which hold with confidence p < 0.01) as follows: 1.156 156 Confirmatory Bias The relevant data provide strong support for the polarization hypothesis. Asked for their final attitudes relative to the experiment’s start, proponents reported that they were more in favour of capital punishment, whereas opponents reported that they were less in favour of capital punishment. 1.157 157 Confirmatory Bias Similar results characterized subjects’ beliefs about deterrent efficacy. 
Proponents reported greater belief in the deterrent effect of capital punishment, whereas opponents reported less belief in this deterrent effect. 1.158 158 Confirmatory Bias This bias leads us to seek out information that supports our existing instinct or point of view while avoiding information that contradicts it. The confirmatory bias not only affects where we go to collect evidence but also how we interpret the evidence we do receive, leading us to give too much weight to supporting information and too little to conflicting information. 1.159 159 Confirmatory Bias Research (Young et al. 2009 and 2011) suggests that the antidote for confirmation bias could be, oddly, anger. Researchers asked 97 undergraduates to participate in what they thought were two separate experiments. The first involved either recalling and writing about a time they'd been exceptionally angry (this was just a prop designed to make them angry), or a time they'd been sad, or about something mundane. Next, all the participants read an introduction to the debate about whether hands-free devices make speaking on a mobile phone while driving any safer. 1.160 160 Confirmatory Bias (Important to note: all of the participants had been chosen because a pre-study showed they believed that the devices do make it safer.) Finally, the participants were presented with one-sentence summaries of eight articles, either in favour of, or against, the idea that hands-free devices make driving safer. The participants had to choose five of these articles to read in full. The results: participants who'd earlier been made to feel angry read more articles critical of hands-free devices, contrary to their own position. 1.161 161 Confirmatory Bias And when the participants' attitudes were retested at the end of the study, it was the angry participants who'd shifted more from their original position in the debate. 1.162 162 Confirmatory Bias Confirmation bias is the tendency to seek evidence consistent with a prior belief. In the strip, Dilbert's boss demonstrates this bias to a tee when he assumes his astute managerial skills are what caused a minuscule (and clearly unrelated) improvement in the company's stock price. Cartoon (Kramer 2014) A Quick Puzzle to Test - NY Times - 2 July 2015 A short game sheds light on government policy, corporate America and why no one likes to be wrong. 1.163 163 1.164 164 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.165 165 Confirmatory Bias - Avoidance Hammond et al. 1998/2006: 1. Always check to see whether you are examining all the evidence with equal rigour. Avoid the tendency to accept confirming evidence without question. 2. Get someone you respect to play devil's advocate, to argue against the decision you're contemplating. Better yet, build the counter-arguments yourself. What's the strongest reason to do something else? The second strongest reason? The third? Consider the position with an open mind. 1.166 166 Confirmatory Bias - Avoidance 3. Be honest with yourself about your motives. Are you really gathering information to help you make a smart choice, or are you just looking for evidence confirming what you think you'd like to do? 4.
In seeking the advice of others, don't ask leading questions that invite confirming evidence. And if you find that an adviser always seems to support your point of view, find a new adviser. Don't surround yourself with yes-men. 1.167 167 Use Experts Wisely Expert advice can often be compromised by human frailties - like their current mood or what their values are - and should be treated accordingly, experts say (Sutherland and Burgman 2015). Eight ways to improve expert advice 1. Use groups. Their estimates consistently outperform those of individuals. 2. Choose members carefully. Expertise declines dramatically outside an individual's specialisation. 3. Don't be star struck. A person's age, number of publications or reputation is not a measure of an expert's ability to estimate or predict events. 1.168 168 Use Experts Wisely 4. Avoid homogeneity. Diverse groups tend to generate more accurate judgements. 5. Don't be bullied. Less-assured and less-assertive people tend to make better judgements. 6. Weight opinions. Calibrate an expert's performance with test questions. 7. Train experts. Training can improve an expert's ability. 8. Give feedback. Rapid feedback tends to improve expert judgements. Sutherland and Burgman 2015 1.169 169 1.170 170 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.171 171 Availability Bias When judging the probability of an event – the likelihood of getting mugged in Chicago, say – people often search their memories for relevant information. While this is a perfectly sensible procedure, it can produce biased estimates because not all memories are equally retrievable or “available”, in the language of Tversky and Kahneman (1974). More recent events and more salient events – the mugging of a close friend, say – will weigh more heavily and distort the estimate. 1.172 172 Availability Bias Economists are sometimes wary of this body of experimental evidence because they believe 1. That people, through repetition, will learn their way out of biases; 2. That experts in a field, such as traders in an investment bank, will make fewer errors; 3. That with more powerful incentives, the effects will disappear. 1.173 173 Availability Bias While all these factors can attenuate biases to some extent, there is little evidence that they wipe them out altogether. The effect of learning is often muted by errors of application: when the bias is explained, people often understand it, but then immediately proceed to violate it again in specific applications. 1.174 174 Availability Bias Expertise, too, is often a hindrance rather than a help: experts, armed with their sophisticated models, have been found to exhibit more overconfidence than laymen, particularly when they receive only limited feedback about their predictions. Finally, in a review of dozens of studies on the topic, Camerer and Hogarth (1999, p. 7) conclude that while incentives can sometimes reduce the biases people display, “no replicated study has made rationality violations disappear purely by raising incentives”. 1.175 175 Availability Bias Bodnaruk and Simonov (2014) provide direct evidence on the effect of financial expertise on investment outcomes by analysing private portfolios of mutual fund managers.
They find no evidence that financial experts make better investment decisions than peers: they do not outperform, do not diversify their risks better, and do not exhibit lower behavioural biases. 1.176 176 Availability Bias Managers do much better in stocks for which they have an information advantage over other investors, i.e., stocks that are also held by their mutual funds. More experienced managers seem to be aware of the limitations to their investment skills, as they increase their holdings of mutual fund-related stocks following poor performance of their portfolios. Their results suggest that there are limits to the value added by financial expertise. 1.177 177 Availability Bias News media highlight memorable occurrences, which gives an event the inaccurate appearance of frequency. 1.178 178 Availability Bias Child theft is a rare occurrence, but availability bias suggests a high probability of abduction. 1.179 179 Availability Bias Belief in the likelihood of a plane crash is heightened by availability bias. More traffic deaths in wake of 9/11 - ScienceDaily - 11 Sept 2012; also Gaissmaier and Gigerenzer 2012. 1.180 180 Availability Bias 1.181 181 Availability Bias Risk Savvy: How To Make Good Decisions By Gerd Gigerenzer 1.182 182 Debias Of course debiasing (Soll et al. 2013) is a good idea. There are two general approaches available for debiasing decisions: (1) debiasing by modifying the decision maker (e.g., through education and the provision of tools); (2) debiasing by modifying the environment (e.g., by creating optimal conditions to support wise judgment). 1.183 183 1.184 184 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.185 185 Internalisation Are investors in stock markets influenced by their current mood? Some studies have empirically found that factors known or assumed to affect current mood (e.g., temperature, sunny or cloudy weather, changes of season, time of day) correlate with stock returns in the expected direction (Dowling and Lucey, 2005; Nofsinger, 2005). For instance, in one study (Hirshleifer and Shumway, 2003) replicating some previous research, a negative relationship between cloudy weather and stock returns was observed in a majority of 26 international stock markets. 1.186 186 Internalisation Also, the hungrier an animal becomes, the more risks it's prepared to take in the search for food. Now, for the first time, Symmonds et al. (2010) have shown that our animal instinct to maintain a balanced metabolic state influences our decision-making in other contexts, including finance. In the context of their study, biology would seem to inform economic theory, not only in providing explanations of psychological phenomena such as loss aversion, but also in highlighting substantive effects of state changes on economic decisions, perhaps reflecting shared evolutionarily conserved neurobiological mechanisms. 1.187 187 Internalisation The immediate effect of a meal is to neutralise risk aversion. For the men with more adipose tissue and higher baseline levels of leptin (a hormone that suppresses appetite), who are generally more risk averse, this meant they became less risk averse when performing the task right after eating.
By contrast, for men with less adipose tissue and lower leptin levels, who are generally low risk averse, their risk aversion was increased immediately after eating, just as you'd expect based on the behaviour of hungry animals. 1.188 188 Internalisation Going to the casino? Don't eat! Gambling on an empty stomach leads to better decisions, study claims Daily Mail - 29 Oct 2014 It might seem like common sense that it’s better to make important decisions after you’ve eaten. But a study has claimed the exact opposite - that we actually make better decisions on an empty stomach. Researchers found that people who were hungry made better snap decisions and also could also appreciate future big rewards than those who were fully fed. 1.189 189 Internalisation The research was carried out by scientists at Utrecht University in The Netherlands. In the study participants were asked to fast for a night, and when they arrived at the laboratory the next day some were given food and some were not. They were then given a variety of tasks to simulate decision making. 1.190 190 Internalisation Three experimental studies examined the counter intuitive hypothesis that hunger improves strategic decision making, arguing that people in a hot state (like emotions or visceral drives - relating to deep inward feelings rather than to the intellect) are better able to make favourable decisions involving uncertain outcomes. Studies 1 and 2 demonstrated that participants with more hunger or greater appetite made more advantageous choices in the Iowa Gambling Task compared to sated participants or participants with a smaller appetite. 1.191 191 Internalisation Study 3 revealed that hungry participants were better able to appreciate future big rewards in a delay discounting task; and that, in spite of their perception of increased rewarding value of both food and monetary objects, hungry participants were not more inclined to take risks to get the object of their desire. Together, these studies for the first time provide evidence that hot states improve decision making under uncertain conditions, challenging the conventional conception of the detrimental role of impulsivity in decision making (de Ridder et al. 2014). 1.192 192 Internalisation Although several cognitive models have been proposed to disentangle the psychological processes underlying performance on the Iowa Gambling Task (IGT), the Expectancy Valence Model (EVM) has been the most widely implemented. It is shown that the EVM does not provide clear information about decision making processes at the individual level by fitting the EVM, with individual random effects, to a sample of participants from various drug using populations using Bayesian techniques and to a sample of participants who complete the IGT multiple times. In particular, they show that the individual-level parameter estimates from the model may be bi-modally distributed and hence are inherently ambiguous and have little psychological significance (Humphries et al. 2015). 1.193 193 Internalisation Numerous stock market pricing distortions have been uncovered. Many of these have been linked to the cognitive errors documented in the behavioural science literature. Hirshleifer (2008) provided three organizing principles to place price distortions into a systematic framework. 1.194 194 Internalisation People rely on heuristics (i.e. short-cut decision rules) because people face cognitive limitations. 
Because of a shared evolutionary history, people might be predisposed to rely on the same heuristics, and therefore be subject to the same biases. People inadvertently signal their inner states to others. For this reason, nature might have selected for traits such as overconfidence, in order that people signal strong confidence to others. People’s judgments and decisions are subject to their own emotions as well as to their reason (Howard 2013). 1.195 195 Internalisation Duclos et al. (2013) examined the effects of social exclusion on a critical aspect of consumer behaviour, financial decision-making. Specifically, four lab experiments and one field survey uncover how feeling isolated or ostracized causes consumers to pursue riskier but potentially more profitable financial opportunities. These daring proclivities do not appear driven by impaired affect or self-esteem. Rather, interpersonal rejection exacerbates financial risktaking by heightening the instrumentality of money (as a substitute for popularity) to obtain benefits in life. Invariably, the quest for wealth that ensues tends to adopt a riskier but potentially more lucrative road. 1.196 196 Duclos et al. 2013 Internalisation Wang and Xiao (2009) examined college students’ credit card indebtedness and found that their buying patterns and social networks affected indebtedness. Students with a tendency toward compulsive buying that is, chronic and repetitive purchasing that becomes a primary response to negative events or feelings (O’Guinn and Faber, 1989) - were more likely, and those with greater social support less likely, to have high debts. 1.197 197 Internalisation According to lay opinion about financial debts, individual characteristics and irresponsible purchases are the major reasons for indebtedness. Being in debt is often attributed to personal fault of the indebted people themselves rather than to situational circumstances (e.g., Roland-Lévy and Walker, 1994; Walker, 1996), or to easy access to credit due to lenders’ misjudgments of borrowers’ financial standing. 1.198 198 Internalisation Subjective well-being and hedonic (relating to, or marked by pleasure) editing, that is how happy people maximize joint outcomes of loss and gain was examined by Sul et al. (2013). Hedonic editing refers to the decision strategy of arranging multiple events in time to maximize hedonic outcomes (Thaler 1985). The research investigated the relationship between subjective well-being and hedonic editing. 1.199 199 Internalisation In Study 1, they gave participants pairs of social or financial events and asked them to indicate their preferences regarding the sequence and interval length between the two events. Compared to participants with lower subjective well-being, those with higher subjective well-being preferred to experience a social gain (e.g., chatting with a close friend) temporally closer to a financial loss, suggesting that happy individuals are more inclined than less happy individuals to use positive social events as buffers against loss. 1.200 200 Internalisation In Study 2, participants were asked to select the type of positive event they would want to experience after a negative event. Happy individuals displayed a stronger preference for social events. Their findings (Sul et al. 2013) suggest that happy and less happy individuals employ different hedonic editing strategies for mixed events. The hedonic editing strategies preferred by happy individuals are as follows. 
Happy individuals use the loss-buffering strategy of arranging a positive social event closer to a negative event (e.g., receiving a nice letter from a friend and paying a fine for speeding on the same day). 1.201 201 Internalisation They use the benefits of positive social events to decrease the impact of negative experiences (e.g., an unsuccessful job interview), by voluntarily choosing to experience a social gain (e.g., hanging out with friends) over other events (e.g., finding a $10 bill on the street) as a cross-domain buffer. Given the importance of social resources and effective coping strategies for one’s subjective well-being, it is possible that happy individuals are better than less happy individuals at making themselves happier. However, their data should be interpreted with caution because it does not provide direct evidence regarding whether happy individuals’ hedonic editing strategies actually improve hedonic outcomes. 1.202 202 1.203 203 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.204 204 Heuristics Heuristics are strategies that guide information search and modify problem representations to facilitate solutions. A heuristic is a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods (Gigerenzer and Gaissmaier, 2011). 1.205 205 Heuristics When heuristics were formalized, a surprising discovery was made. In a number of large worlds, simple heuristics were more accurate than standard statistical methods that have the same or more information. These results became known as less-is-more effects: there is an inverse-U-shaped relation between level of accuracy and amount of information, computation, or time. In other words, there is a point where more is not better, but harmful (Gigerenzer and Gaissmaier, 2011). 1.206 206 Heuristics The following are ten well-studied heuristics for which there is evidence that they are in the adaptive toolbox of humans. Each heuristic can be used to solve problems in social and non-social environments. See the references given for more information regarding their ecological rationality and the surprising predictions they entail (Gigerenzer and Brighton, 2009). A summary table follows. 1.207 207 Heuristics
Heuristic: Definition
Recognition heuristic (Goldstein and Gigerenzer, 2002; also Schooler and Hertwig, 2005): If one of two alternatives is recognized, infer that it has the higher value on the criterion.
Fluency heuristic (Jacoby and Dallas, 1981; also Schooler and Hertwig, 2005): If both alternatives are recognized but one is recognised faster, infer that it has the higher value on the criterion.
Take-the-best (Gigerenzer and Goldstein, 1996; also Gigerenzer and Brighton, 2009, Czerlinski et al., 1999 and Brighton, 2006): To infer which of two alternatives has the higher value, (a) search through cues in order of validity, (b) stop search as soon as a cue discriminates, and (c) choose the alternative this cue favours.
Tallying (unit-weight linear model, Dawes, 1979; also Hogarth and Karelaia, 2005, 2006 and Czerlinski et al., 1999): To estimate a criterion, do not estimate weights but simply count the number of positive cues.
Satisficing (Simon, 1955; Todd and Miller, 1999; also Dudey and Todd, 2002, Gilbert and Mosteller, 1966 and Bruss, 2000): Search through alternatives and choose the first one that exceeds your aspiration level.
1.208 208 Heuristics
Heuristic: Definition
1/N; equality heuristic (DeMiguel et al., 2009): Allocate resources equally to each of N alternatives.
Default heuristic (Johnson and Goldstein, 2003; Pichert and Katsikopoulos, 2008): If there is a default, do nothing.
Tit-for-tat (Axelrod, 1984): Cooperate first and then imitate your partner’s last behaviour.
Imitate the majority (Boyd and Richerson, 2005): Consider the majority of people in your peer group and imitate their behaviour.
Imitate the successful (Boyd and Richerson, 2005): Consider the most successful person and imitate his or her behaviour.
Now in detail 1.209 209 Recognition Heuristic If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion (Goldstein and Gigerenzer, 2002). The recognition heuristic relies on ignorance that is partial and systematic. It works because lack of recognition knowledge about objects such as cities, colleges, sports teams, and companies traded on a stock market is often not random (Schooler and Hertwig, 2005). The recognition heuristic cannot be applied when both objects are either recognized or unrecognized (Schooler and Hertwig, 2005). 1.210 210 Recognition Heuristic Common sense suggests that ignorance stands in the way of good decision making. The recognition heuristic belies this intuition. To see how the heuristic turns ignorance to its advantage, consider the simple situation in which one must select whichever of two objects is higher than the other with respect to some criterion (e.g., size or price). A contestant on a game show, for example, may have to make such decisions when faced with the question, “Which city has more inhabitants, San Diego or San Antonio?” (San Diego: 1.356 million (2013); San Antonio: 1.409 million (2013) – so San Antonio.) How she makes this decision depends on the information available to her. If the only information on hand is whether she recognizes one of the cities, and there is reason to suspect that recognition is positively correlated with city population, then she can do little better than rely on her (partial) ignorance. This kind of ignorance-based inference is embodied in the recognition heuristic, which for a two-alternative choice can be stated as follows: if one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion (Schooler and Hertwig, 2005). 1.211 211 Fluency Heuristic The fluency heuristic assumes that if one object is processed more fluently, faster, or more smoothly than another, it is inferred that this object has the higher value with respect to the question being considered (Jacoby and Lee 1984). The fluency heuristic, in contrast to the recognition heuristic, does not exploit partial ignorance but rather graded recognition (Schooler and Hertwig, 2005). Masson et al. (1995) examined the influence of task demands on the use of the fluency heuristic using a version of the fame judgment task. Subjects initially read a list of famous and non-famous names, and later were asked to classify a set of names as famous or non-famous. Some of the test names had been read in the first part of the experiment, and consequently were expected to be more fluently identified by all the subjects.
They attempted to verify this expectation by testing a subset of the names in a visual identification task. Subsequent use of the fluency heuristic in the classification task was expected to be revealed by a higher probability of classifying the previously read names as famous. 1.212 212 Fluency Heuristic The "fluency heuristic," which tells us that if something is easy to process, then we tend to prefer it over more complicated options. Gut instincts allow people to make routine decisions without thinking too hard - Washington Post - 1 Nov 2010 In a University of Michigan study (Song and Schwarz 2008, 2010 or 2010), people were more open to the idea of working out and more likely to do it when the directions for an exercise routine were written in a basic typeface as opposed to a more convoluted script. "Apparently the [subjects'] brains mistook the ease of simply reading about exercise for ease of actually doing the sit-ups and bench presses," writes Herbert (On Second Thought: Outsmarting Your Mind's Hard-Wired Habits), who surmises that for the group with a more confusing font, "the reading alone tired them out." 1.213 213 Fluency Heuristic What that means, says Herbert, director of science communication for the Association for Psychological Science, a national organization based in Washington, is that "people on the front lines of getting us through this national health and obesity crisis . . . have to overcome some really, really deeply wired habits of mind." On Second Thought: Outsmarting Your Mind's Hard-Wired Habits - Herbert 2010 1.214 214 Take-The-First Heuristic Choose the first alternative that comes to mind (Gigerenzer and Gaissmaier, 2011). Can taking the first option in decision-making lead to the best decisions in sports contexts? And, is one's decision-making self-efficacy in that context linked to take the first decisions? The purpose of the study was to examine the role of the take the first heuristic and self-efficacy in decision-making on a simulated sports task. Students participated in the study and performed 13 trials in each of two video-based basketball decision tasks. 1.215 215 Take-The-First Heuristic One task required participants to verbally generate options before making a final decision on what to do next, while the other task simply asked participants to make a decision regarding the next move as quickly as possible. Decision-making self-efficacy was assessed using a questionnaire comprising various aspects of decision-making in basketball. Participants also rated their confidence in the final decision. Results supported many of the tenets of the take the first heuristic, such that people used the heuristic on a majority of the trials (70%), earlier generated options were better than later ones, first options were meaningfully generated, and final options were meaningfully selected. Results did not support differences in dynamic inconsistency or decision confidence based on the number of options. Findings also supported the link between self-efficacy and the take the first heuristic. Participants with higher self- efficacy beliefs used take the first more frequently and generated fewer options than those with low self-efficacy. Thus, not only is take the first an important heuristic when making decisions in dynamic, time-pressure situations, but self-efficacy plays an influential role in take the first (Hepler and Feltz, 2012). 1.216 216 Take-The-First Heuristic Take-the-first is a heuristic that can be used by players to choose among practical options. 
There is evidence that experienced players do not try to exhaustively generate all possible options. Instead, they seem to rely on the order in which options are spontaneously generated in a particular situation and choose the first option that comes to mind (Johnson and Raab, 2003). 1.217 217 Take-The-Best Heuristic The take-the-best algorithm has the policy “take the best, ignore the rest.” It assumes a subjective rank order of cues according to their validities (Gigerenzer and Goldstein, 1996). Consider the task of choosing between two alternatives given several binary cues to some unobservable criterion. An example is deciding which of two cities is the bigger, given such cues as whether each has a university or has a football team in the premier league (Hutchinson and Gigerenzer, 2005). García-Retamero and Dhami (2009) tested how policemen, professional burglars, and laypeople infer which of two residential properties is more likely to be burgled. They compared experts and novices in terms of the cues they considered important for choosing which of a pair of residential properties was more likely to be burgled, and in terms of the strategy that best predicted their choices in such a task. 1.218 218 Take-The-Best Heuristic 1.219 219 Take-The-Best Heuristic Positive and Negative Values for the Eight Cues
Cue: Positive Value / Negative Value
Garden in the property: tall hedges/bushes / short hedges/bushes
Signs of care: not well-kept property / well-kept property
Type of property: house / flat
Light in the property: off / on
Letterbox: stuffed with post / empty
Location of the property: corner of the street / middle of the street
Access to the property: doors/windows on ground floor / doors/windows on second floor
Security in the property: no burglar alarm system / burglar alarm system
1.220 220 One-Clever-Cue Heuristic Use only one “clever cue” or single criterion to make a decision. Example: peahens select mates based on the males’ number of eyespots, choosing the male with the greatest number of eyespots (Gigerenzer and Goldstein, 1996). Many animal species appear to rely on a single “clever” cue for locating food, nest sites, or mates. For instance, in order to pursue a prey or a mate, bats, birds, and fish do not compute trajectories in three-dimensional space, but simply maintain a constant optical angle between their target and themselves — a strategy called the gaze heuristic (Gigerenzer, 2007, Shaffer et al., 2004). In order to catch a fly ball, baseball outfielders and cricket players rely on the same kind of heuristics rather than trying to compute the ball’s trajectory (McLeod and Dienes, 1996). Similarly, to choose a mate, a peahen investigates only three or four of the peacocks displaying in a lek and chooses the one with the largest number of eyespots (Petrie and Halliday, 1994). 1.221 221 Hiatus Heuristic If a customer has not purchased within a certain number of months (the hiatus), the customer is classified as inactive; otherwise, the customer is classified as active. Here is one example where heuristics have proven to be more accurate than the models of rationality. 1.222 222 Hiatus Heuristic Recently, academics (Wübben and Wangenheim, 2008) have shown interest and enthusiasm in the development and implementation of stochastic customer base analysis models.
Using the information these models provide, customer managers should be able to (1) distinguish active customers from inactive customers, (2) generate transaction forecasts for individual customers and determine future best customers, and (3) predict the purchase volume of the entire customer base. 1.223 223 Hiatus Heuristic However, there is also a growing frustration among academics insofar as these models have not found their way into wide managerial application. The authors compare the quality of these models, when applied to managerial decision making, with the simple heuristics that firms typically use. The authors find that the simple heuristics perform at least as well as the stochastic models with regard to all managerially relevant areas, except for predictions regarding future purchases at the overall customer base level. The authors conclude that in their current state, stochastic customer base analysis models should be implemented in managerial practice with much care. Furthermore, they identify areas for improvement to make these models managerially more useful (Wübben and Wangenheim, 2008). 1.224 224 Hiatus Heuristic Percentage of customers correctly classified as active or inactive:
Clothing retailer: Hiatus Heuristic 83% / Pareto/NBD Model 75%
Airline: Hiatus Heuristic 77% / Pareto/NBD Model 74%
Online CD Store: Hiatus Heuristic 77% / Pareto/NBD Model 77%
Note that while the hiatus heuristic works better than the Pareto/NBD model for classifying customers for the clothing retailer and airline, the two prediction methods tie when it comes to classifying customers for the online CD store (APPsychTextbk - Accuracy-Effort Trade-off). [The fanciest modern approach to deciding uses the Pareto/NBD model, which uses the negative binomial distribution, a statistics method, to determine which customers will be active or inactive. Pareto/NBD and related models have seen much discussion in academic literature. The foundation was laid by Schmittlein et al. in a 1987 paper and expanded upon in 2004 by Fader et al. For more practical information, Bruce Hardie provides a multitude of tutorials and Excel spreadsheets for using probabilistic models in a marketing context (source).] 1.225 225 Tallying Heuristic - Unit-Weight Linear Model To estimate a criterion, do not estimate weights but simply count the number of favouring cues (Dawes, 1979). In tallying of positive evidence, the number of positive cue values for each object is counted across all cues, and the object with the largest number of positive cue values is chosen (Gigerenzer and Goldstein, 1996). Example: In deciding between frozen yogurt and ice cream, a student chooses frozen yogurt because frozen yogurt has the greater number of favouring cues. How would you decide? Frozen yogurt: 1) healthier; 2) more toppings; 3) cheaper. Ice cream: 1) more flavours. (A minimal sketch of this counting rule is given below, after the next slide.) 1.226 226 Tallying Heuristic - Unit-Weight Linear Model Magnetic Resonance Imaging (MRI) or simple bedside rules? There are about 2.6 million emergency room visits for dizziness or vertigo in the United States every year (Kattah et al. 2009). The challenging task for the emergency physician is to detect the rare cases where dizziness is due to a dangerous brainstem or cerebellar stroke. Frontline misdiagnosis of strokes happens in about 35% of the cases. One solution to this challenge could be technology. Getting an early MRI with diffusion-weighted imaging takes 5 to 10 minutes plus several hours of waiting time, costs more than $1,000, and is not readily available everywhere.
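Before turning to the clinical example on the next slide, here is a minimal code sketch of the tallying (unit-weight) rule applied to the frozen yogurt example above. The cue lists are the ones given on the slide; the function name and everything else is illustrative only.

```python
# Tallying (unit-weight linear model): count the favouring cues for each
# option, ignoring cue weights, and pick the option with the largest count.

def tally(options):
    """Return the option with the largest number of favouring cues."""
    return max(options, key=lambda name: len(options[name]))

options = {
    "frozen yogurt": ["healthier", "more toppings", "cheaper"],
    "ice cream": ["more flavours"],
}

print(tally(options))  # -> frozen yogurt (3 favouring cues versus 1)
```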
1.227 227 Tallying Heuristic - Unit-Weight Linear Model Magnetic Resonance Imaging (MRI) or simple bedside rules? However, Kattah et al. (2009) developed a simple bedside eye examination that actually outperforms MRI and takes only about one minute: it consists of three tests and raises an alarm if at least one indicates a stroke. This simple tallying rule correctly detected 100% of those patients who actually had a stroke (sensitivity), whereas an early MRI only detected 88%. Out of 25 patients who did not have a stroke, the bedside exam raised a false alarm in only one case (i.e., 4% false positive rate = 96% specificity). Even though the MRI did not raise any false alarms, the bedside examination seems preferable in total, given that misses are more severe than false alarms and that it is faster, cheaper, and universally applicable. 1.228 228 Tallying Heuristic - Unit-Weight Linear Model Avoiding avalanche accidents. Hikers and skiers need to know when avalanches could occur. The obvious clues method is a tallying heuristic that checks how many out of seven cues have been observed en route or on the slope that is evaluated (McCammon and Hägeli 2007). These cues include whether there has been an avalanche in the past 48 hours and whether there is liquid water present on the snow surface as a result of recent sudden warming. When more than three of these cues are present on a given slope, the situation should be considered dangerous. With this simple tallying strategy, 92% of the historical accidents (where the method would have been applicable) could have been prevented. 1.229 229 Satisficing Heuristic Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met. Search through alternatives, and choose the first one that exceeds your aspiration level (Dudey and Todd, 2002). If you were searching for a house, for instance, you may decide you want a clean house in a suburban area that is below $300,000. It is possible that you would satisfice when choosing a house to buy because it is near impossible to look at all available houses everywhere and then select the best option. This means you would probably buy the first house that met your aspiration level (Snook and Cullen, 2008). 1.230 230 Satisficing Heuristic Facione and Gittens 2013 “Snap Judgments – Risks and Benefits of Heuristic Thinking” – Think Critically Example: Being thirsty, how much water would we drink? Only enough to slake our thirst. Example: Seeking a new job, how hard would we look? Hard enough to find one that meets our basic criteria for pay, proximity to home, nature of the work, and so on. Example: Having arrested a suspect who had the means, motive, and opportunity to commit the crime, how hard can we expect police detectives to strive to locate other suspects? Satisficing suggests hardly at all. The question of the actual guilt or innocence of the suspect becomes the concern of the prosecuting attorney and the courts. 1.231 231 1/N; Equality Heuristic Allocate resources equally to each of N alternatives (DeMiguel et al., 2009); a minimal sketch of this allocation rule follows below. People use equality heuristically. This means that people do not always think deeply and analytically about their decisions. Instead, they often take a quick read on a situation and make a decision by applying some form of the idea of equality (Messick, 1995).
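A minimal sketch of the 1/N (equality) allocation rule defined above, in the portfolio setting studied by DeMiguel et al. (2009): split the budget equally across the N available alternatives while ignoring any further information about them. The fund names and the budget figure are purely illustrative.

```python
# 1/N (equality) heuristic: give each of the N alternatives an equal share.

def one_over_n(budget, alternatives):
    """Allocate the budget equally to each of the N alternatives."""
    share = budget / len(alternatives)
    return {name: share for name in alternatives}

funds = ["equity fund", "bond fund", "property fund", "cash"]
print(one_over_n(10_000, funds))
# -> {'equity fund': 2500.0, 'bond fund': 2500.0, 'property fund': 2500.0, 'cash': 2500.0}
```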
Subjects read a story in which five business partners needed to allocate the profits and expenses of the partnership in a fair and reasonable manner. Each of the partners worked independently and produced different gross incomes between $140 and $285. The gross incomes were to be divided into expenses and profits. Subjects were asked to fill in fair allocations in an accounting ledger. Three factors were manipulated: the target of the allocation task (either the expenses or the profits), the causal attributions for the differences in gross incomes (internal, external, or both), and whether the subjects were asked to fill in both columns in the ledger (expenses and profits) or just one (Messick and Schell, 1992). 1.232 232 1/N; Equality Heuristic The results supported the hypothesis that the subjects heuristically used equality to make their allocations. Over 70% of the subjects allocated at least one column equally (although the frequency of equality use varied as a function of both the target of the allocation and the attribution given). Subjects allocated the target columns equally more often than non-target columns, even though equality for one column implied inequality for the other. The use of equality was also sensitive to the attribution given for the performance differences. Differences due to external factors, i.e., the number of people showing up at the market, produced the most equal allocations of profits (with unequal expenses), while the internal attribution produced the highest proportion of equal expense allocations (with unequal profits) (Messick and Schell, 1992). 1.233 233 Default Heuristic If there is a default, do nothing about it (Johnson and Goldstein, 2003). If an agent is indifferent or conflicted between options, it may involve too much cognitive effort to base a choice on explicit evaluations. In that case she might disregard the evaluations and choose according to the default heuristic instead, which simply states “if there is a default, do nothing about it” (Gigerenzer, 2008). There is inconsistency in many people’s choice of electricity. When asked, they say they prefer a ‘green’ (i.e., environmentally friendly) source for this energy. Yet, although green electricity is available in many markets, people do not generally buy it. Why not? Motivated by behavioural decision research, Pichert and Katsikopoulos argue that the format of information presentation drastically affects the choice of electricity. Specifically, they hypothesise that people use the kind of electricity that is offered to them as the default. They present two natural studies and two experiments in the laboratory that support this hypothesis. In the two real-world situations, there was a green default, and most people used it. In the first laboratory experiment, more participants chose the green utility when it was the default than when ‘grey’ electricity was the default. In the second laboratory experiment, participants asked for more money to give up green electricity than they were willing to pay for it. They argue that changing defaults can be used to promote pro-environmental behaviour, and they discuss potential policymaking applications (Pichert and Katsikopoulos, 2008). 1.234 234 Tit-For-Tat Heuristic Cooperate first, keep a memory of size one, and then imitate your partner’s last behaviour (Axelrod, 1984); a minimal sketch of this rule follows below.
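A minimal code sketch of the tit-for-tat rule just defined: cooperate on the first encounter, remember only the partner's most recent move, and copy it on the next round. The two example opponents and all names are illustrative only.

```python
# Tit-for-tat: cooperate first, keep a memory of size one, then imitate the
# partner's last move.

def tit_for_tat(partner_last_move):
    """Cooperate first; afterwards imitate the partner's last move."""
    if partner_last_move is None:   # first encounter
        return "cooperate"
    return partner_last_move        # memory of size one

# Against a partner who always cooperates, tit-for-tat keeps cooperating;
# against a partner who always defects, it turns nasty from round two on.
for partner in (["cooperate"] * 5, ["defect"] * 5):
    last = None
    moves = []
    for partner_move in partner:
        moves.append(tit_for_tat(last))
        last = partner_move
    print(moves)
# -> ['cooperate', 'cooperate', 'cooperate', 'cooperate', 'cooperate']
# -> ['cooperate', 'defect', 'defect', 'defect', 'defect']
```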
The tit-for-tat heuristic memorizes only the last of the partner’s actions and forgets the rest (a form of forgiving) but can lead to better cooperation and higher monetary gain than more complex strategies do, including the rational strategy of always defecting (e.g., in the prisoner’s dilemma with a fixed number of trials). If you interact with another person and have the choice between being kind (cooperate) or nasty (defect), then: (a) be kind in the first encounter, thereafter (b) keep a memory of size one, and (c) imitate your partner’s last behaviour (kind or nasty). 1.235 235 Tit-For-Tat Heuristic ‘‘Keep a memory of size one’’ means that only the last behaviour (kind or nasty) is imitated; all previous ones are ignored or forgotten, which can help to stabilize a relationship. Tit-for-tat can coordinate the behaviour in a group in the sense that all actors will end up cooperating but are simultaneously protected against potential defectors. As with imitate your peers and the default heuristic, tit-for-tat illustrates that the same heuristic can lead to opposite behaviours, here kind or nasty, depending on the social environment. If a husband and wife both cooperate when engaging in their first interaction and subsequently always imitate the other’s behaviour, the result can be a long harmonious relationship. If, however, she relies on tit-for-tat but he on the maxim ‘‘always be nasty to your wife, so that she knows who is the boss,’’ her initially kind behaviour will turn to being nasty to him as well. Behaviour is not a mirror of a trait of being kind or nasty, but results from an interaction between mind and environment. An explanation of the tit-for-tat players’ behaviour in terms of traits or attitudes would miss this crucial difference between process (tit-for-tat) and resulting 1.236 236 behaviour (cooperate or not) (Gigerenzer, 2010 alternately). Imitate The Majority Heuristic Look at a majority of people in your peer group, and imitate their behaviour (Boyd and Richerson, 2005). Imitate-the-majority heuristic, also referred to follow-the-majority heuristic. An agent using the heuristic would imitate the behaviour of the majority of agents in his reference group. For instance, in deciding which restaurant to choose, people tend to choose the one with the longer waiting queue (Raz and Ert, 2008). In a situation of uncertainty, individuals follow the actions or choices of the majority of their peers regardless of their social status. The domain of pro-environmental behaviour provides numerous illustrations for this strategy, such as littering behaviour in public places (Cialdini, Reno, and Kallgren 1991), the reuse of towels in hotel rooms (Goldstein, Cialdini, and Griskevicius 2008), and changes in private energy consumption in response to information about the consumption of the majority of neighbours (Schultz, Nolan, Cialdini, Goldstein, and Griskevicius 2007). 1.237 237 Imitate The Successful Heuristic Look for the most successful person and imitate his or her behaviour. Imitate-thesuccessful heuristic, also referred to follow-the-best heuristic. An agent using the heuristic would imitate the behaviour of the most successful person in her reference group (Boyd and Richerson, 2005). Organizations extensively use groups to perform a variety of cognitive tasks and collective decisions are essential for organizational performance. 
Reliance on groups in social life is built on a strong assumption, namely that the array of information exchanged, explored and integrated in groups enhances decision quality relative to individual choices. Similarly, other species organize and work in collectives in order to enhance their survival chances. For example, homing and migrating birds collectively decide on communal routes that maximize their chances of survival and successful arrival to their destination and swarms of bees and ants collectively choose new nest sites on which their survival depends. Social interactions unfolding in such collectives shape the emergence of collective choices that transcend a simple aggregation of individual preferences or competencies (Meslec et al. 2014 plus cited references). 1.238 238 Imitate The Successful Heuristic Although groups have the potential to become superior (as interacting collectives) to stand alone individuals or simple aggregation of individual actions or competencies, this (emergent) potential is not always realized in real-life situations. Studies stemming from the group synergy literature illustrate not only that groups do not manage to achieve strong cognitive synergy (they fail to perform better than their best individual member) but sometimes they even have difficulties to achieve weak cognitive synergy (they perform worse than the average individual performance in the group). Obviously, group synergy is a group emergent phenomenon that is rather difficult to achieve in interacting groups. Therefore, understanding the way in which individual choices and competencies are combined and coordinated through social interactions in order to generate superior collective outcomes is of key importance to understanding the emergence of collective cognitive competencies (Meslec et al. 2014 plus cited references). 1.239 239 Availability Heuristic Tversky and Kahneman (1973) proposed that people may use an availability heuristic to judge frequency and the probability of events. Using the availability heuristic, people would judge the probability of events by the ease in which instances could be brought to mind. Thus, using the availability heuristic, people would judge an event to be more likely to occur if they could think of more examples of that event. For example after seeing many news stories of home foreclosures, people may judge that the likelihood of this event is greater. This may be true because it is easier to think of examples of this event. Also people who read more case studies of successful businesses may judge the probability of running a successful business to be greater. 1.240 240 Next Week Judgement Biases Give some thought to the assessments 1.241 241 Menu Overconfidence Avoidance Prudence Avoidance Recallability Avoidance Optimism And Wishful Thinking Representativeness Sample Size Neglect The Law Of Small Numbers Conservatism Belief Perseverance Anchoring Avoidance Confirmatory Bias Avoidance Availability Bias Internalisation Heuristics Menu 1.242 242