Working Paper 96-19, Department of Economics, Massachusetts Institute of Technology, 50 Memorial Drive, Cambridge, Mass. 02139

PATHOLOGICAL OUTCOMES OF OBSERVATIONAL LEARNING*

Lones Smith†
Department of Economics, M.I.T.

Peter Sørensen‡
Nuffield College and M.I.T.

June 28, 1996 (original version: July, 1994)

Abstract

This paper systematically analyzes and enriches the observational learning paradigm of Banerjee (1992) and Bikhchandani, Hirshleifer, and Welch (1992). Our contributions fall into three categories. First, we develop what we consider to be the 'right' analytic framework for informational herding (convergence of actions and convergence of beliefs, using a Markov-martingale process). We demonstrate its power and simplicity in four major ways: (1) We decouple herds and cascades: Cascades might never arise, even though herds must. (2) We show that wrong herds can arise iff private signals have uniformly bounded strength. (3) We determine when the mean time to start a herd is finite, and show that (absent revealing signals) it is infinite when a correct herd is inevitable. (4) We prove that long-run learning is unaffected by background 'noise' from crazy/trembling decisions. Second, we explore a new and economically compelling model with multiple types, and discover how a 'twin' observational pathology generically appears: confounded learning. It may well be impossible to draw any further inference from history even while it continues to accumulate privately-informed decisions! Third, we show how the martingale property of likelihood ratios is neatly linked with the stochastic stability of the learning dynamics. This not only allows us to analyze herding with noise, and convergence to our new confounding outcome, but also shows promise for optimal experimentation.

*From its initial presentation at the MIT theory lunch, this paper has evolved and been much improved and polished by feedback from seminars at Yale, IDEI Toulouse, CEPREMAP, IEA Barcelona, Pompeu Fabra, Copenhagen, IIES Stockholm, Stockholm School of Economics, Nuffield College, LSE, CentER, Free University of Brussels, Western Ontario, Brown, Pittsburgh, Berkeley, UC San Diego, UCLA, University College London, Edinburgh, and Cambridge. We also wish to specifically acknowledge Abhijit Banerjee, Kong-Pin Chen, Glenn Ellison, John Geanakoplos, David Hirshleifer, David Levine, Preston McAfee, Thomas Piketty, Ben Polak, and Jorgen Weibull for useful comments, and Chris Avery for a key role at a very early stage in this project. The usenet group sci.math.research has also proved invaluable. All errors remain our responsibility. Smith and Sørensen respectively acknowledge financial support for this work from the National Science Foundation and the Danish Social Sciences Research Council.
† e-mail address: lones@lones.mit.edu
‡ e-mail address: SORENSEN@cole.nuff.ox.ac.uk

Contents

1 INTRODUCTION
2 THE STANDARD MODEL
  2.1 Some Notation
  2.2 Private Beliefs
  2.3 Action Choice
  2.4 Individual Learning
  2.5 Corporate Learning as a Markov-Martingale Process
3 THE MAIN RESULTS
  3.1 Belief Convergence
  3.2 Action Convergence
  3.3 Must Cascades Exist with Bounded Beliefs?
  3.4 Mean Time to Herd
4 NOISE
  4.1 Convergence of Beliefs
  4.2 Convergence of Actions: The Failure of the Overturning Principle
5 MULTIPLE INDIVIDUAL TYPES
  5.1 The Model and an Overview
  5.2 Examples of Confounded Learning
  5.3 The Basic Theory
6 CONCLUSION
A ON BAYESIAN UPDATING OF DIVERSE SIGNALS
B FIXED POINTS OF MARKOV-MARTINGALE SYSTEMS
C STABLE STOCHASTIC DIFFERENCE EQUATIONS
D MORE STATES AND ACTIONS
E SOME ASYMPTOTICS FOR DIFFERENCE EQUATIONS
F OMITTED PROOFS AND EXAMPLES

1. INTRODUCTION

Suppose that a countable number of individuals each must make a once-in-a-lifetime binary decision — encumbered solely by uncertainty about the state of the world. Assume that preferences are identical, and that there are no congestion effects or network externalities from acting alike. Then in a world of complete and symmetric information, all would ideally wish to make the same decision. But life is more complicated than that. Assume instead that the individuals must decide sequentially, all in some preordained order. Suppose that each may condition his decision both on his (endowed) private signal about the state of the world and on all his predecessors' decisions, but not their private signals.

The above simple framework was independently introduced in Banerjee (1992) and Bikhchandani, Hirshleifer, and Welch (1992) (hereafter, simply BHW). Their perhaps surprising conclusion was that with positive probability an 'incorrect herd' can arise: Despite the surfeit of available information, after some point, everyone happens to make the identical less profitable decision.

In this paper, we systematically analyze and enrich the informational herding paradigm, as we offer both simple lessons and deeper insights into rational learning. Our embellishment upon the herding story is best motivated by means of the following counterfactual. Assume that we are in a potential herd in which one million consecutive individuals have followed suit on some common action, but suppose that the very next individual deviates. What then could Mr. one million and two conclude? First, he could decide that his predecessor had a more powerful signal than everyone else. We shall generalize the private information beyond discrete signals, and admit the possibility that there is no uniformly most powerful signal. Second, he might opine that the action was irrational or an accident. To capture this, we shall add noise to the herding model. Third, he might decide that different preferences provoked the contrary choice. Here, we shall consider the herding model with multiple types. On this score, we find that herding is not the only possible 'pathological' outcome: We may well converge to a situation where history offers no decisive lessons for anyone, and everyone must forever rely on his private signal!

Here is an overview of the paper, and how we view our contributions.

1. The 'Right' Theory: Proper Analytic Framework; Nonrobustness of Past Results; Extensions.
Our analysis is focused through the two lenses of convergence of beliefs (learning) and convergence of actions (herding). As such, two stochastic processes constitute the building blocks for our theory: (i) the public likelihood ratio, a martingale given the state, and (ii) the vector (action taken, likelihood ratio), a Markov chain. We prove the power and simplicity of this informational herding framework in four major ways:

The Markovian aspect of the dynamics allows us to drastically narrow the range of possible long run outcomes, as we need only focus on the ergodic set. This set is wholly unrelated to initial conditions, and depends only on the transition dynamics of the model. By contrast, the martingale property affords us a different glimpse into the long run dynamics, tying them down to the initial conditions in expectation. As it turns out, this allows us to eliminate from consideration the not inconceivable elements of the ergodic set where everyone entertains entirely false beliefs in the long run.

• Bad Herds? Rational learning is fruitful because Bayes-updating of any private signals from a common prior yields private beliefs that are informative of the true state — a simple corollary of our no introspection property. We focus on the role played by the support of the private beliefs: Incorrect herds can develop exactly when individuals have bounded private beliefs, i.e. there do not exist arbitrarily strong private signals. Intuitively, whenever individual private signals are uniformly bounded in strength, history has the potential to mislead everyone: Eventually even the most doctrinaire individual dare not quarrel with the conclusion of histories that aggregate enough information. With unbounded private beliefs, learning is complete. Soon enough, a wise enough doctrinaire individual will appear whose contrary action will radically shift public beliefs and overturn any would-be incorrect herd. Yet by our overturning principle, this logic cannot be turned on its head to rule out correct herds. The admission of arbitrarily tenacious individuals is largely a modelling decision. For it not only provides us with a richer model than Banerjee (1992) and BHW,¹ and opens lines of inquiry that make no sense in their framework, but also sheds critical light into the exact failure of incomplete learning: For as we approach this idealized extreme, bad herds become vanishingly implausible.

• Cascades? BHW introduced the colorful terminology of a cascade for an infinite train of individuals acting irrespective of the content of their signal. Yet the label 'cascades literature' is malapropos. For cascades are the exception and not the rule outside BHW's discrete signal world. All but one (rather contrived) example in this paper attests to this fact. With generic signal distributions, individuals always eventually settle on an action (a herd), but no decision is ever a foregone conclusion (a cascade)! With these two notions decoupled, the resulting analysis is richer — for it is no longer so clear why herds must arise. This occurs, we argue, because public beliefs must converge (as per usual), and since contrary actions radically swing beliefs. This simple insight (the overturning principle) into herding is much of why belief convergence implies action convergence.

¹ The link between unbounded support and complete learning was implicit in Smith (1991).
• Expected Time To Herd. We characterize when herds arise in finite expected time. We find that what matters is not how fast the truth is learned, but rather how slowly error is rooted out: There must be enough contrarians to overturn any temporary herd fast enough. Our surprising discovery is that while learning is complete with unbounded beliefs, the correct herd always requires infinite mean time to start in some state!

• NOISE. That a single individual can 'overturn the herd' is the key analytic insight into herding. So it is only natural to undermine the impact of contrarians by adding noise. Counterintuitively, even with a constant inflow of crazy/trembling individuals, complete learning still obtains. Everyone (sane) eventually learns the true state of the world.

Besides definitively resolving and further exploring informational herding, the above results set the context needed for what are our most striking and innovative findings below.

2. New Herding Economics: Confounding with Multiple Preference Types.

We next relax a critical (if under-appreciated) premise of the original herding results, namely that all individuals have the same preferences. Surely this is anything but an apt description of the world. Multiple types offers a parallel reason why the actions of isolated individuals need not greatly matter — but with much richer consequences. Let's fix ideas with a hopefully familiar example. Suppose that on a highway under construction, depending upon how the detours are arranged, those going to Houston should merge either right (in state R) or left (in state L), with the opposite for those headed toward Dallas. If one knows that 70% are headed toward Houston, then absent any strong signal to the contrary, Dallas-bound drivers should take the lane 'less traveled by'. This yields two herding outcomes: 70% left or 70% right. But another rather subtle possibility may arise. For as the chance q that observed history accords state R rises from 0 to 1, the chance that a Houston driver merges right gradually increases from 0 to 1, and conversely for Dallas drivers. If for some q, cars are equilikely in states R and L to merge right, or $r_R(q) = r_L(q)$, then no inference can be drawn from additional decisions, and learning stops! Of course, whether such a fixed point exists is far from obvious, and even if so, need we converge there? The surprising content of Theorem 9 is that for non-degenerate specifications, such a 'confounding' outcome does exist, and furthermore, dynamics will converge upon it with positive probability — even with arbitrarily strong private signals!

3. Likelihood Ratios as Martingales ⇒ Exponentially Fast/Stable Learning.

The above convergence result and the possibility of 'rational herds' with noise both depend on the rate of belief convergence. With noise, the inflow of rational contrarians may be forever choked off without entering a cascade if learning is exponentially fast. Such rapid belief convergence also implies the local stochastic stability of learning needed for confounded learning to arise with multiple types.
The common ingredient is a simple link we have found between the martingale character of the likelihood ratio and the exponential stability of learning: This neat result holds if, near a fixed point, posterior beliefs aren't (degenerately) equally responsive to prior beliefs for every action taken. We feel this technique has broad applicability beyond the herding paradigm, into optimal experimentation.

Section 2 outlines the benchmark model and some preliminary results. Section 3 provides the associated action and belief convergence results; noise is added in section 4. Our new model with heterogeneous preferences is explored in section 5. A host of needed (or related) results and proofs are appendicized, including a convergence criterion for Markov-martingale processes, and a new local stability result for stochastic difference equations.

2. THE STANDARD MODEL

2.1 Some Notation

An infinite sequence of individuals $n = 1, 2, \dots$ takes actions in that exogenous order. Everyone observes the ordered actions of all predecessors. A background probability space $(\Omega, \mathcal{E}, \nu)$ captures all uncertainty in our model. First, the action payoffs are random: There are 2 possible states of the world (or just states), $s \in S = \{H, L\}$.² Formally, $\Omega$ is partitioned into events $\Omega^H \cup \Omega^L$, called $H$ ('high') and $L$ ('low'). Let the common prior belief be $\nu(H) = \nu(L) = 1/2$.³

Everyone chooses from a common finite action set $\{a_m, m \in \mathcal{M}\}$, where $\mathcal{M} \equiv \{1, \dots, M\}$. One might think of investors deciding whether to 'invest' or 'decline' an investment project of uncertain value. Action $a_m$ pays off $u^s(a_m)$ in state $s \in \{H, L\}$, the payoffs are the same for all individuals, and everyone acts so as to maximize his expected payoff. To avoid trivialities, we assume that no two actions have the same payoff profile, and that at least two actions are not weakly dominated.

Before selecting an action, an individual observes the entire action history, loosely denoted $h$. Individual $n$ also receives a private signal $\sigma_n \in \Sigma$ about the state of the world. Conditional on the state, $\{\sigma_n\}$ are i.i.d., and drawn according to the probability measure $\mu^s$ in state $s \in \{H, L\}$. To ensure that no signal will perfectly reveal the state of the world, we shall insist that $\mu^H$ and $\mu^L$ be mutually absolutely continuous (a.c.).⁴ Thus, there exists a positive, finite Radon-Nikodym derivative $g = d\mu^L / d\mu^H : \Sigma \to (0, \infty)$. And $\mu^L$, $\mu^H$ are not the same measure — i.e. we shall rule out $g = 1$ almost surely,⁵ so that signals are informative about the state.

2.2 Private Beliefs

The second source of uncertainty in our model is private information. Given signal $\sigma \in \Sigma$, an individual uses Bayes' rule to arrive at $p = p(\sigma) = 1/(g(\sigma) + 1)$, what we shall refer to as his private belief that the state is $H$. Conditional on the state, private beliefs are i.i.d. across individuals because signals are. In state $s \in \{H, L\}$, $p$ is distributed with c.d.f. $F^s$ on $(0,1)$; the distributions $F^H$ and $F^L$ ultimately drive all learning that occurs.

² Our results extend to any finite number of states, but at significant algebraic cost and dubious conceptual gain (see our 1996 working paper).
³ Common priors are standard (Harsanyi (1967-68)), and WLOG so is a flat prior (see our working paper). $\sigma_n : \Omega \to \Sigma$ is a random variable; $\mu^s = \nu^s \circ \sigma_n^{-1}$, where measure $\nu^s$ conditions $\nu$ on event $\Omega^s$.
⁴ See Rudin (1987). Measure $\mu^L$ is a.c. w.r.t. $\mu^H$ if $\mu^H(S) = 0 \Rightarrow \mu^L(S) = 0$ for all $S \in \mathcal{S}$, the $\sigma$-algebra on $\Sigma$. By the Radon-Nikodym Theorem, a unique $g \in L^1(\mu^H)$ exists with $\mu^L(S) = \int_S g\, d\mu^H$ for every $S \in \mathcal{S}$.
⁵ With $\mu^H$, $\mu^L$ mutually a.c., 'almost sure' assertions are well-defined without specifying the measure.
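As a small illustration of the private-belief construction just described, the following sketch (ours, not part of the paper) applies Bayes' rule with the flat prior and a likelihood-ratio function $g$; the particular $g$ below is a hypothetical choice for demonstration only.

```python
# A minimal sketch (ours) of the Bayes update above: with the flat prior
# nu(H) = nu(L) = 1/2 and Radon-Nikodym derivative g = d(mu^L)/d(mu^H),
# the posterior probability of state H after signal sigma is
#     p(sigma) = (1/2) / (1/2 + (1/2) g(sigma)) = 1 / (1 + g(sigma)).
# The g used here is a hypothetical likelihood-ratio function for illustration.

def private_belief(g_value):
    """Posterior of H from a flat prior and likelihood ratio g = dmu^L/dmu^H."""
    return 1.0 / (1.0 + g_value)

g = lambda sigma: (1.0 - sigma) / sigma   # hypothetical, defined on (0, 1)

for sigma in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"sigma = {sigma:.1f}  ->  private belief p = {private_belief(g(sigma)):.2f}")
# g(sigma) < 1 (the signal favors H) yields p > 1/2; g(sigma) > 1 yields p < 1/2.
```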
Since $\mu^H$ and $\mu^L$ are mutually absolutely continuous, so are the associated private belief distributions $F^H$ and $F^L$. Thus there exists a Radon-Nikodym derivative $f = dF^H / dF^L$, which reduces to $f(p) = f^H(p)/f^L(p)$ when each $F^s$ has a density. The next result neatly illustrates the folk wisdom that posterior beliefs are sufficient for one's information, and shows that the state and private beliefs are subtly linked.

Lemma 1 (No Introspection Condition)
(a) The derivative $f = dF^H/dF^L$ satisfies $f(p) = p/(1-p)$ almost surely in $p \in (0,1)$. Conversely, if the belief c.d.f.s $F^H$, $F^L$ satisfy (a), then $F^H$, $F^L$ arise from updating a common prior with some signal measures $\mu^H$, $\mu^L$.
(b) The difference $F^L(p) - F^H(p)$ is nondecreasing or nonincreasing as $p \lessgtr 1/2$.
(c) (Beliefs are Informative) $F^H(p) < F^L(p)$ except when both terms are 0 or 1.

Proof: If the individual further updates his private belief $p$ by asking of it the likelihood of the two states of the world, he must learn nothing more. So $f(p) = p/(1-p)$. Conversely, given $f(p) = p/(1-p)$, the conditional distribution functions $F^H$ and $F^L$ can be taken as the primitives of the model: a signal with likelihood ratio $f(p)$ yields the private belief $f(p)/[1+f(p)] = p$, as desired. Part (a) implies part (b), and (b) implies (c): $F^H$ and $F^L$ are distinct, and (b) implies that they have a common support, say supp(F), equal to the range of $p(\cdot)$ on $\Sigma$, with $F^H(p) < F^L(p)$ strict when $p \in \mathrm{supp}(F) \setminus \{1/2\}$. Hence (c). $\square$

The structure of $\mathrm{co}(\mathrm{supp}(F)) \subseteq [0,1]$ plays a major role in the paper. We call the private beliefs bounded if $\mathrm{co}(\mathrm{supp}(F)) = [b, \bar b]$ with $0 < b \le 1/2 \le \bar b < 1$, and unbounded if $\mathrm{co}(\mathrm{supp}(F)) = [0, 1]$.⁷ For clarity, we introduce two leading examples which will be progressively embellished.

• Unbounded Beliefs Example. Let $\mu^H$ have the probability density $g^H(\sigma) = 2\sigma$ and $\mu^L$ the density $g^L(\sigma) = 2 - 2\sigma$ on $\Sigma = [0,1]$. So $g(\sigma) = g^L(\sigma)/g^H(\sigma) = (1-\sigma)/\sigma$, yielding $p(\sigma) = 1/(g(\sigma)+1) = \sigma$. Note how $p$ maps $[0,1]$ injectively onto $[0,1]$, and the simple formulae $F^H(p) = p^2$ and $F^L(p) = 2p - p^2$ follow.

• Bounded Beliefs Example. Let $\mu^H$ be Lebesgue (uniform) measure on $[0,1]$, and let $\mu^L$ have Radon-Nikodym derivative $g(\sigma) = 3/2 - \sigma$, yielding $p(\sigma) = 1/(g(\sigma)+1) = 2/(5 - 2\sigma)$. Note how $p$ maps $[0,1]$ injectively onto $[2/5, 2/3]$, so that $[b, \bar b] = [2/5, 2/3]$ and private beliefs are bounded. Since $p(\sigma) \le p \Leftrightarrow 2/(5-2\sigma) \le p \Leftrightarrow \sigma \le (5p-2)/2p$, the distribution of $p$ in state $H$ is $F^H(p) = (5p-2)/2p$, while in state $L$:

$F^L(p) = \int_{p(\sigma) \le p} g(\sigma)\, d\sigma = \int_0^{(5p-2)/2p} (3/2 - \sigma)\, d\sigma = \frac{(5p-2)(p+2)}{8p^2}$

⁷ The support of a measure is the smallest closed set of full measure; co(A) is the convex hull of set A. To exhaust all possibilities we should also consider supports that are bounded above and not below, etc., but this exercise in generality yields no new insights.

Figure 1: Signal Densities. Graphs of $g^H(\sigma)$ for the unbounded (left) and bounded (right) beliefs examples. Observe how, for instance, signals near 0 are very strongly in favor of state $L$, as in Lemma 1.

Figure 2: Payoff Frontier. The diagram depicts the (expected) payoff of each of three actions as a function of the posterior belief $r$ that the state is $H$. The individual simply chooses the action yielding the highest payoff. Action $a_2$ is an insurance action, while actions $a_1$ and $a_3$ are extreme actions.
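The closed forms in the two examples above, and the no-introspection condition of Lemma 1(a), can be checked numerically. The sketch below (ours) samples private beliefs from the signal structures as we read them and compares empirical c.d.f.s with the formulas just derived; all parameters come from the examples themselves.

```python
# A Monte Carlo sketch (under our reading of the two leading examples)
# checking the closed-form private-belief cdfs and, in a comment, the
# no-introspection condition dF^H/dF^L = p/(1-p) of Lemma 1(a).
import random
random.seed(1)
N = 100_000

def belief_unbounded(state):
    # signal densities 2s (state H) and 2(1-s) (state L); private belief p = s
    v = random.random()
    return v ** 0.5 if state == 'H' else 1.0 - (1.0 - v) ** 0.5

def belief_bounded(state):
    # mu^H uniform on [0,1]; mu^L has density 3/2 - s; private belief p = 2/(5-2s)
    if state == 'H':
        s = random.random()
    else:
        v = random.random()                  # invert the cdf 3s/2 - s^2/2
        s = 1.5 - (2.25 - 2.0 * v) ** 0.5
    return 2.0 / (5.0 - 2.0 * s)

for p in (0.45, 0.55, 0.6):
    FH = sum(belief_bounded('H') <= p for _ in range(N)) / N
    FL = sum(belief_bounded('L') <= p for _ in range(N)) / N
    print(f"bounded   p={p}: F^H~{FH:.3f} vs {(5*p-2)/(2*p):.3f},"
          f"  F^L~{FL:.3f} vs {(5*p-2)*(p+2)/(8*p*p):.3f}")

for p in (0.25, 0.5, 0.75):
    FH = sum(belief_unbounded('H') <= p for _ in range(N)) / N
    FL = sum(belief_unbounded('L') <= p for _ in range(N)) / N
    print(f"unbounded p={p}: F^H~{FH:.3f} vs {p*p:.3f},  F^L~{FL:.3f} vs {2*p-p*p:.3f}")

# Lemma 1(a) locally: dF^H/dF^L = f^H(p)/f^L(p); e.g. in the unbounded example
# f^H(p)/f^L(p) = (2p)/(2-2p) = p/(1-p), as the lemma requires.
```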
2.3 Action Choice

Given a posterior belief $r \in [0,1]$ that the state is $H$, the expected payoff of choosing action $a$ is $r u^H(a) + (1-r) u^L(a)$. Figure 2 depicts the content of the next (standard) result.

Lemma 2 The interval $[0,1]$ partitions into subintervals, or action basins, $I_1, \dots, I_M$, overlapping at endpoints only, such that action $a_m$ is optimal exactly when the posterior belief $r \in I_m$.

Proof: As noted, the payoff of each action is a linear function of $r$. But by assumption, action $a_m$ is strictly best for some $r$; therefore, there must exist a single open subinterval where it strictly dominates all other actions. That this is a partition follows from the fact that there exists at least one optimal action for each posterior $r \in [0,1]$. $\square$

We now WLOG order the actions so that $a_m$ is optimal with posterior belief $r \in I_m = [\bar f_{m-1}, \bar f_m]$, where $0 = \bar f_0 < \bar f_1 < \cdots < \bar f_M = 1$. We employ the tie-breaking rule⁸ that individuals take action $a_m$ (versus $a_{m+1}$) at $r = \bar f_m$. Actions $a_1$ and $a_M$ are extreme actions, respectively optimal when one is certain that the state is $L$ and $H$, while actions $a_2, \dots, a_{M-1}$ are insurance actions.

• Examples Cont'd. In our running examples, the two actions are $a_1 =$ Decline and $a_2 =$ Invest, and we posit $u > 0$: to Invest yields payoff $u$ in state $H$ and $-1$ in state $L$, while to Decline yields payoff $0$ in both states, so both are extreme actions. Indifference prevails when $\bar f u - (1 - \bar f) = 0$, and so $\bar f_1 = 1/(1+u)$.

2.4 Individual Learning

Individual decision rules map from own private beliefs and history to actions. In a Bayesian equilibrium,⁹ everyone knows all common decision rules and the prior, and can compute the ex ante chance $\pi^s(h)$ of any history $h$ in each state $s$. This yields the public likelihood ratio $\ell(h) \equiv \pi^L(h)/\pi^H(h)$ of state $L$ versus $H$, and the public belief $q(h) = \pi^H(h)/(\pi^H(h) + \pi^L(h)) = 1/(1 + \ell(h))$, i.e. the posterior ensuing from a neutral private belief and history observation $h$. A final application of Bayes' rule yields the posterior belief $r$ (that the state is $H$) in terms of the public history belief — or equivalently the likelihood ratio $\ell(h)$ — and the private belief $p$:

$r = \dfrac{p\,\pi^H(h)}{p\,\pi^H(h) + (1-p)\,\pi^L(h)} = \dfrac{p}{p + (1-p)\,\ell(h)}$     (1)

Lemma 3 (Private Belief Thresholds) Given history $h$, there are $M+1$ thresholds $0 = p_0(h) \le p_1(h) \le \cdots \le p_M(h) = 1$, such that $a_m$ is chosen iff the private belief satisfies $p \in (p_{m-1}(h), p_m(h)]$.

This is true given (i) the tie-break rule, (ii) that the RHS of (1) is strictly increasing in $p$, and (iii) the reformulation of (1) as posterior odds $(1-r)/r$ equal to private odds $(1-p)/p$ times the likelihood ratio $\ell(h)$. Note that if $p_{m-1}(h) = p_m(h)$ at some $h$, then $a_m$ is never chosen.

Since the likelihood ratio $\ell(h)$ is informationally sufficient for the history, we suppress the explicit dependence on $h$ and write $p_m(\ell)$ instead of $p_m(h)$. The threshold $p_m(\ell)$ equates the posterior in (1) with the basin boundary $\bar f_m$,

$\dfrac{p_m(\ell)}{p_m(\ell) + (1 - p_m(\ell))\,\ell} = \bar f_m,$     (2)

and is increasing in $\ell$.

• Examples Cont'd. In both examples, (2) yields $p_1(\ell) = \ell/(u + \ell)$, as $\bar f_1 = 1/(1+u)$.

⁸ For generic models, this choice does not matter. Even when it does change the probabilistic course of nongeneric models, the tie-breaking rule does not change the statement of any of our theorems.
⁹ We assume common knowledge of rationality. Section 4 partially backs away from this.
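Solving equation (2) for $p_m$ gives $p_m(\ell) = \bar f_m \ell / (1 - \bar f_m(1-\ell))$; the short sketch below (ours) implements the threshold rule of Lemma 3 and confirms that with two actions and $\bar f_1 = 1/(1+u)$ this reduces to $\ell/(u+\ell)$, as in the running examples. The payoff value $u$ is illustrative.

```python
# A sketch of the threshold rule in Lemma 3: the posterior in (1) equals the
# basin boundary fbar_m exactly when p = p_m(l) = fbar_m*l / (1 - fbar_m*(1-l)),
# which is equation (2) solved for p_m.  With fbar_1 = 1/(1+u) this is l/(u+l).

def posterior(p, ell):
    """Equation (1): posterior on H from private belief p and public ratio ell."""
    return p / (p + (1.0 - p) * ell)

def threshold(fbar, ell):
    """Equation (2) solved for p_m: the private belief at which posterior = fbar."""
    return fbar * ell / (1.0 - fbar * (1.0 - ell))

def chosen_action(p, ell, fbars):
    """Lemma 3: pick the m whose interval (p_{m-1}(l), p_m(l)] contains p."""
    cuts = [0.0] + [threshold(f, ell) for f in fbars[:-1]] + [1.0]
    for m in range(1, len(cuts)):
        if cuts[m - 1] < p <= cuts[m]:
            return m
    return 1   # p == 0 edge case: lowest action

u = 1.0   # illustrative payoff parameter
for ell in (0.25, 1.0, 4.0):
    t = threshold(1.0 / (1.0 + u), ell)
    print(f"ell={ell}:  p_1(ell)={t:.4f}  vs  ell/(u+ell)={ell/(u+ell):.4f};"
          f"  posterior at threshold = {posterior(t, ell):.4f}")
print("action chosen at p=0.7, ell=1.0:", chosen_action(0.7, 1.0, [1/(1+u), 1.0]))
```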
Figure 3: Individual Black Box. Everyone bases his decision on both the public likelihood ratio $\ell$ and his private signal $\sigma$, resulting in his action choice $a_m$ (taken with chance $p(m|s,\ell)$ in state $s \in \{H,L\}$) and a likelihood ratio $\varphi(m,\ell)$ to confront successors. One takes action $a_m$ iff one's posterior likelihood $\ell g(\sigma)$ lies in the $m$-th interval of a partition of $[0,\infty]$.

2.5 Corporate Learning as a Markov-Martingale Process

We let $\ell_n$ and $q_n$ be the public likelihood ratio and belief after Mr. $n$ chooses action $m_n$,¹⁰ respectively, with $\ell_0 = 1$. Signals, actions, and the true state are random, and so $(\ell_n)_{n=1}^\infty$ and $(\sigma_n)_{n=1}^\infty$ are stochastic processes, described by

$p(m|s, \ell) = F^s(p_m(\ell)) - F^s(p_{m-1}(\ell))$     (3)
$\varphi(m, \ell) = \ell\, p(m|L, \ell) / p(m|H, \ell)$     (4)

Here, $p(m|s, \ell)$ is the chance that a (rational) individual takes action $a_m$ given the public likelihood $\ell$ and the true state $s \in \{H, L\}$. Faced with $\ell_n$, if individual $n$ takes action $m_n$, we move to $\ell_{n+1} = \varphi(m_n, \ell_n)$. Figure 3 schematically summarizes this transition.

Our insights are best expressed by considering $(m_n, \ell_n)$ as a time homogeneous Markov process on the state space $\mathcal{M} \times [0, \infty)$. Given $(m_n, \ell_n)$, (3) and (4) imply that the next state is $(m_{n+1}, \varphi(m_{n+1}, \ell_n))$ with probability $p(m_{n+1}|H, \ell_n)$ in state $H$. In principle, such a two-dimensional process with continuous state variables could be very ill-behaved, and possibly chaotic. But Lemma 5 attests to how well-behaved is the likelihood process. The next result is quite standard (but for completeness, is proven in the appendix).

Lemma 4 (The Unconditional Martingale) The public belief $(q_n)$ is a martingale, unconditional on the state of the world.¹¹

This martingale describes the forecast of subsequent public beliefs by individuals in the model, who do not know the true state of the world: Prior to receiving his signal, individual $n$'s expectation of the public belief that will confront his successor is the current one. But for our purposes, an unconditional martingale does not tell us all we want to know about convergence. For that, we will condition on the state of the world, and that will render the public belief $(q_n)$ a submartingale in state $H$: $E[q_{n+1} \mid H, q_1, \dots, q_n] \ge q_n$.

¹⁰ Or equivalently, action $a_{m_n}$. Throughout the paper, $m$ will denote actions, and $n$ individuals.
¹¹ We really ought to specify the accompanying sequence of $\sigma$-algebras for the stochastic process; however, these will be suppressed because they are simply the ones generated by the process itself.
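The transition (3)-(4) is easy to compute in the unbounded-beliefs running example, where Section 3 reports the closed forms $\varphi(1,\ell) = \ell + 2u$ and $\varphi(2,\ell) = u\ell/(u+2\ell)$. The sketch below (ours, with an illustrative $u$) checks these, and also previews the conditional-martingale identity $\sum_m p(m|H,\ell)\varphi(m,\ell) = \ell$ used in the proof of Lemma 5.

```python
# A sketch of the one-step transition (3)-(4) for the unbounded-beliefs example
# (F^H(p) = p^2, F^L(p) = 2p - p^2, two actions, threshold p_1(l) = l/(u+l)).
# It checks the closed forms phi(1,l) = l + 2u and phi(2,l) = u*l/(u+2l) reported
# in Section 3, plus the identity sum_m p(m|H,l)*phi(m,l) = l used in Lemma 5.

u = 1.0   # illustrative payoff parameter

FH = lambda p: p * p
FL = lambda p: 2.0 * p - p * p
cuts = lambda ell: [0.0, ell / (u + ell), 1.0]          # p_0(l), p_1(l), p_2(l)

def chance(m, s, ell):
    """Equation (3): chance p(m|s,l) that a rational individual takes a_m."""
    F = FH if s == 'H' else FL
    t = cuts(ell)
    return F(t[m]) - F(t[m - 1])

def phi(m, ell):
    """Equation (4): public likelihood ratio after action a_m is observed."""
    return ell * chance(m, 'L', ell) / chance(m, 'H', ell)

for ell in (0.5, 1.0, 2.0):
    martingale = sum(chance(m, 'H', ell) * phi(m, ell) for m in (1, 2))
    print(f"l={ell}:  phi(1,l)={phi(1, ell):.4f} (closed form {ell + 2*u:.4f}),"
          f"  phi(2,l)={phi(2, ell):.4f} (closed form {u*ell/(u + 2*ell):.4f}),"
          f"  E[l'|H,l]={martingale:.4f}")
```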
one that conditions on knowledge of the state of the world, namely, the likelihood process (£n ) Lemma much weaker than we result ^ 4 H is = L P{™\L,£n ) variables, the result follows (1968), Theorem from the Or, since the 5.14.) mean-preserving, convergence to any dead wrong belief is i.e. become weakly Essentially, the public beliefs are expected to - much more (and a supermartingale in state L), cannot occur: The odds against the truth are not permitted to explode. a.s. 13 Easley and Kiefer (1988), and others, underscore that unlike statistical learning where information may well accrue at a 'constant rate', complete learning conclusion in a economic model of costly experimentation. complete learning is generally When is not at all a foregone information has a price, deemed too expensive. One might expect that the resulting pathological outcomes to the learning dynamics persist this observational learning setting. Guiding Questions: of the process 1. Is there (m n ,£n ) settle down? Belief convergence obtains 2. converges a herd, or action convergence: Does the If so, on action o^? as — since t^ exists by Lemma some stage n? Or perhaps only a n -> 00. And is Otherwise, learning 3. What is learning complete is 5. But must a cascade 13 — p(m\H,£ limit cascade — or do n) on a m beliefs — coordinate expected time? Markov process (m n ,£n ) arise, where everyone takes p(m\L,£ n ) will arise, i.e. — 1 for p(m\H, converge to the truth, i.e. some £n ) m —> 1 £ n -> 0? incomplete (beliefs not eventually focused on state H). the link between action and belief convergence? with a discrete signal space, 12 in finite — the second coordinate of the action a m irrespective of signal realizations after And first is: (i) cascades must occur, and See, for instance, Doob Another proof of this fact uses public beliefs (see, for instance, (1953), section The rough (ii) logic of BHW, valid cascades imply herds, and II. 7. Bray and Kreps (1987)). The second thus (Hi) herds occur. ignores his signal and takes a m still p(m\H, £ n +\) We = shall prove that we first interval Jm = C {£ supp(F) | £ int (Jm ), With bounded private (6) With unbounded private \p is imply that herds eventually is finite. beliefs, £. So once all G The lemma 1. [£, Jm = oo] £ is (0, = and Jm {0}, Ji = With bounded private signals every public likelihood £ Remarks. = beliefs, J\ only boundedly far from an insurance action, Im C [0, £] also possible for Then the insurance = the public if u L (a 3 ) 1, rearranging an expression like (2), -^b + (i-b)£ action. = u H (a 3 = ) 0, of private signals. Jm2 <C Jmi iff 10 m 2 > mi. an action absorbing basin. beliefs are and u# (02) one can show that £ 5 oo; But with unbounded swamped by some mass and < £ or oo or perhaps even in favor of are inversely ordered, or u L (a\) < other basins are empty. all an 'insurance' action a m when = £ private beliefs, the posterior odds are must lead to the same oo) will be < some = bounded. For is in _i f! ' + (i-5)^ m Jm = UL{a 2 ) action a 2 has an action absorbing basin for small enough e fm empty unchanged. Also, is for {oo}, and sufficiently near The absorbing basins instance, take u H (ai) such that [0,oo], asserts that to each 'extreme action' corresponds is their signals. there is a possibly , then taken almost surely, and £ intuitive: By imply herds? then the posterior likelihood ratio £g(a) G Im almost surely in is 3. 
limit cascades For each action a m m -i(£),pm(£)]} C The appendicized proof 7^ — and cannot then prove that this convergence limit cascades i.e. Absorbing Basins) 6 (Action (a) But Jm Do must understand exactly why individuals might wish to ignore private signals a. Action a m 2. So (2). Convergence likelihood ratio £ beliefs, We characterize the limit t^. first whether the mean time to entry into a herd Lemma So the . conclude by discussing the speed at which beliefs converge, and the intertwined 3.1 Belief We £n THE MAIN RESULTS of beliefs implies convergence of actions, issue of = unchanged by as private belief thresholds are 1, , information, and so £n+ i This leads to a tougher question: beliefs. In this section, We new on action a m Mr. n not robust: Cascades generically needn't arise (i) is 3. occur. irrefutable: In a cascade too. 1 with unbounded is revealing no , n+ obtains at stage cascade step precisely > 1 0. when — e. J2 2u/Z 2u Ji Figure 4: Continuations and Absorbing Basins. Continuation functions for the examples: unbounded private beliefs (left), and bounded private beliefs (right). By the martingale property, the expected continuation lies on the diagonal. The stationary points are where both arms hit the diagonal (impossible here), or where one arm is an action absorbing basin So, larger the smaller is Unbounded Beliefs Example Cont'd. F H (p) = p {(. = is F L (p) = = in the left panel; I 2u/2> in the right). and the support [b,b], Only extreme action absorbing basins the interval [fm _i,fm ]. • taken with zero chance exist for large Private beliefs p e (0, 1) the larger is enough [b,b]. are distributed = [0, 1], and private beliefs are unbounded; the basins collapse to the extreme points, J\ = {oo}, Jm = {0}. With our two actions, we have p = £/(u + £), where u > 0. Thus we let p(l\H,£) = £ 2 /{u + £) 2 and p(l\L,t) = 2 We now get <p(lj) = £+2u and ip(2J) = u£/(u + 2£), shown in figure 4. £{£ + 2u)/(u + £) H L • Bounded Beliefs Example Cont'd. Here, F (p) = (5p - 2)/2p and F (p) = 2 (5p - 2)(p + 2)/8p for p 6 [2/5, 2/3]. With p = £/(u + £), active dynamics occur when £ € (2u/3, 2u). For £ < 2u/3, we have p(l\H, £) = p(l\L,£) = 0, i.e. action a 2 is taken a.s., and thus its absorbing basin is J2 = [0, 2u/3]. For £ > 2u, we similarly find J = [2u, oo]. For£<E (2u/3,2u)wehavep(l\H,£) = (2>£-2u)/2£ and p(l\L, £) = {3£-2u)(3£ + 2u)/8£ 2 By the martingale property, <p(l, £) = u/2 + 3£/4 and ip(2, £) = u/2 + £/4. (See figure 4.) We now argue that limit cascades must occur: Dynamics must tend to one of the basins. as 2 and 2p—p2 So supp(F) . , . x . Theorem 1 (Limit Cascades) = has support J The appendicized proof is variable Any £<*> Jy (we The U • • • U Jm, = 0) or feel) intuitive. it Theorems B.l-2 H, simply because Theorem 2 (Long-run Learning: beliefs, if£ , and (Lemma process, = £). i.e. either If at least 1(c)). random , then with positive chance 11 £00-. an action two actions L than state This will violate stationarity. Complete and Incomplete) £ JM the precisely characterizes taken with greater chance in state is beliefs are informative With bounded private Markov teaches us nothing (<p(m,£) are taken in the limit, then the least one £ (X> the absorbed set. point £ e supp(^oo) must be stationary for the doesn't occur (p(m\£) (a) likelihood ratio process £ n Assume state H. i^ G JX U- -UJm-i- With unbounded private (b) For Proof: then (a), if £& E beliefs, £ n —> with positive chance, we are done. Otherwise, J\ uniformly bounded above by (£ n ) is almost surely. 
the infimum of J\, introduced in £, Lebesgue's Dominated Convergence Theorem, the mean of (£ n ) is ^ if £«, Ji a.s., Lemma By 6. preserved in the limit, i.e. > £ it cannot be the case that supp(4o) CJM = [0,^]. (b) Theorem 1 asserts £& £ J a.s. By Lemma 6, with unbounded beliefs J — {0} U {+oo}, while Lemma 5 proves supp(£ 00 C [0, oo) if the state is H. E[£oo] = the initial ratio. So, £o, if £ ) To underscore how nonintuitive where strict inequality and so (£ n ) this conclusion, notice that this is Lemma, holds in Fatou's or 1 must be unbounded. While the process = limn _>0O E[£ n (£ n ) ] is one situation > E^iuvn^^ £ n = ] 0, occasionally gets arbitrarily large, corresponding to arbitrarily long trains of individuals choosing the least optimal action, the longer all is the train, the less likely such trains must almost surely (£ n ), come Still, this is Puzzle # an arresting 1. Why 0. With On balance, the an end. For a random walk probability one, classic analogy to the behavior of # 2. and eventually hits is Why absorbed. 2, convergence towards totally incorrect beliefs appears Why complete learning? limit. aren't correct herds periodically broken herds are)? Couldn't beliefs be ever cycling betwixt confidence in (just as incorrect and L, so that the ergodic distribution assigns weight to non-stationary establishes that beliefs BHW must eventually settle H is here fail, and beliefs? It that the martingale convergence theorem succeeds where Markovian arguments the analysis of at 1 can't individuals eventually be wholly mistaken about the state of But the martingale convergence theorem ruled out that self-enforcing. 2 tells us that Brownian motion) starting (or it Theorem on two counts. result, the world? For as noted in section Puzzle to occur. to think of the behavior of a driftless with an absorbing barrier at up it is down: Limit cycles cannot occur. Note that — which did not appeal to martingale methods — only succeeded because their stochastic process necessarily settled down in some (stochastic) finite time. Action Convergence 3.2 Suppose a putative herd has begun: action, but a cascade has not yet begun. A string of individuals has taken the identical If someone then acts in a contrary fashion, his successors have no choice but to concede the strength of his signal, thus sharply revising the public belief. action. his own since More We say that the putative herd has been overturned by the unexpected formally, if Mr. n chooses action a m then n private signal, find it is , it + 1 should, before he observes optimal to follow suit because he knows no more than n, and common knowledge that n rationally chose a m 12 : So after n's action, the public belief is G Im q = (r m _i,r m The next lemma ]. codifies this logic (proof appendicized), for proves central to an understanding of the entire observational learning paradigm. it Lemma (The Overturning Principle) 7 action a m the updated likelihood ratio £ , <p(l,£) G [u, oo) and — , (a) With bounded private state (b) A H) from running examples, the principle correctly predicts G So (see figure 4). [0, it) Theorems 1-2 beliefs, on an action other than (in with positive probability. BHW. 14 bounded Strictly beliefs Fix the payoffs and prior beliefs so makes clear. 15 the continuous transition is tend from bounded to unbounded. beliefs. 7/co(supp(F fc converges )) then the chance of an incorrect limit cascade vanishes as k —¥ oo. 
Only basins J* If Om (and we think, simplicity) the major to emphasize the above characterization Theorem 4 (Continuity) [0, 1]. om for their striking result, as the characterization from incomplete to complete learning as private Proof: working paper). individuals almost surely settle on the optimal action. beliefs analysis proves in generality happens to be the mainstay to [0, l], (details in absent a cascade on the most profitable action beliefs, pathological herding finding in Banerjee (1992) and 16 convergence implies action belief herd on some action will almost surely arise in finite time. With unbounded private The major reason suit. eg. for the the outset, a herd arises The bounded takes contrary actions in a limit cascade, for that would (f(2,£) 3 (Herds) someone optimally G Im and an uninformed successor follows convergence, yielding the action counterparts to Theorem history, if many This result precludes infinitely negate belief convergence For any fc 7r is = [^*,oo] and J^ remain once co(supp(F the chance of a correct limit cascade, then EEqo by Fatou's Lemma, so that n k > 1 — Eo/fr. As k —> Remark: Fragility of Limit Cascades. 00, I* BHW —> this insight. Even Our results (1 00, — close is )) fc 7r enough to < )P\ But £"£00 and so —> fc 7r £ 1. devote considerable attention to the fragility of herds, pointing out that the release of a small can undo a herd. > fc amount of public information on the nonexistence of cascades only serve to strengthen in the limit, we have shown that generically likelihood ratio lies at the edge of the absorbing basin. By public information will break the limit cascade. bounded away from the edge of the absorbing (i.e. without atoms) the Consequently, arbitrarily contrast, the limit belief in basin, thus inoculating the little BHW is model to the release of sufficiently small packets of public information. 14 And extended 15 While BHW to M > 2 actions. BHW also handled several states (done in our working paper). unbounded private signals, their working paper introduces perfectly informative signals under the rubric of 'pseudo-cascades' (ruled out by our assumption of mutually a.c. information). Complete learning in that context is clear. For once the public beliefs overwhelm all but the didn't consider perfectly revealing signals, the very next contrary action reveals the state of the world, 16 We mean the HausdorfF topology: If co(supp(F*)) 13 = [a k ,b k ], then a k -> 0, and and we're done! b k -» 1. . Must Cascades 3.3 Or we ask "Can cascades rather, Theorem chain, £ n -¥ £ Bounded Exist with Beliefs? As (m n ,£n ) exist...?" not^i finite state Markov is — 2 cannot assert finite time convergence (a cascade) £ J but always £n bounded In the running BHW, J. Unlike $. beliefs for we may have cascades need not obtain, though herds must. example, a cascade on action ai obtains iff > £ 2u. In a £„—>£€ [2u, oo), figure 4 clearly shows that (£ n 17 So learning never its edge. A cascade never starts. herd (and so limit cascade) on Oi, with J\, but only never enters No matter how many ceases: A approaches cascade on With a < individuals have followed suit, a contrarian might why = £ BHW this is For true. F inf(Jm ) implies L a herd starts on action a m with if , {p m -x{£ n )) = FH = (p m _i(4)) 'jump into' Jm we have 0, L )) inf{F L (p)/F H (p)\p G in boundedly and thus are nongeneric. Mean Time 3.4 We long to illustrate the is it int finite time. 
One might prematurely conclude tributions, to -U 1m To {£ n ) must A counterexample to this conjecture is in Appendix F. if it (in reverse order). Call lie in the communicating basin C starts at £$. Inside C is C [0, How oo), at least one action 1, 2, ... , M. Thus, a string of identical actions that eventually comes is 8 (Putative Herds) boundedly positive There most n* steps with chance (£ n ) is e* is either a herd or temporary herd. This also follows analytically: possible for (£ n ) to jump > after a fixed and n* < number of periods. oo such that a putative herd at least e* remaining long of a putative herd. Intuitively, at least until It is and that cascades can only arise with discrete signal dis- this end, let £ The appendicized proof rules out 18 A.l, that temporary herds are of uniformly bounded expected duration, and that the starts in at 17 Lemma WLOG the actions that can be taken in 6 as entry rate into putative herds Lemma by This needn't occur with general signal spaces. an end a temporary herd; therefore, a putative herd It suffices 1 power of our framework by answering an important question: absorbing basin. 18 Relabel • > Herd until a herd starts? = I\ U- () [b) X (supp(F))} the smallest interval that (£n ) cannot exit 6 0, then F (Pm (£n))-FHPm-dQ) ... F {Pm {£n)) £)-£ m tn n n) ~ "^ F»(pm (£n - F"(pm - {£n )) ~ F»{p m {£n )) 71+1 > £ n +i/£ n ^ (Jm ) int ^( ' So cp(i, •). deduced that cascades must occur, and we L / appear. still can only arise with a nonmonotonic likelihood transition function discrete signal distribution, can easily see since £ n a* ) it in Uj{Ij\Jj starts, at least If i n < 2u, then £ n+1 = <p(l,£ n ) = u/2 over action absorbing basin J{ if tp(i, ) 14 — 0}, delaying the start two actions can occur, and + is ZE/4 < u/2 + 3u/2 = 2u too. locally decreasing nearby. with boundedly positive chance, Let e n be n's exit chance from a putative herd, and Mr. 1, 2, a herd . . , . n participate E^ = F\ = is (1 in — ei)(l — e?) chance from a herd conditional on stage n is permanent is Fn = En = — ei) (1 (since signals are conditionally it • its • • • — e„) (1 Then i.i.d.). the chance the chance of > exactly • • •, and by Bayes so — Fn ). _n — E = rule, b n e n /(l EE Since the acid test ^2 n (En good principle How is: 5 (The With atoms (a) Eoc) < co can even when fail how slowly they can radically shift ofsupp(F), 20 Lemma by Until the Herd?) mean herds begin infinite time. LetF H (p) =c(p-by+0((p-b)i +1 ) andF L (p) =l-d(b-p) 5 +0({b-pY +l (b) With bounded (c) With unbounded So bounded for beliefs, herds begin infinite beliefs, beliefs, mean time extreme signals 7 < if there are and 2 > 5 2, many extreme om the eventual herd on starts in finite time in state time in state L\ So, herds cannot start in beliefs, • finite om and is in since a Theorem If in its tail. 19 makes The For exp(-i) number 20 5 clear that result follows > 1 -x yields En = ) for from liminfI ^ 1 _(l exp(- £ ft N of terms, followed by N -» 00, < F H (b) < F H (b~) < 0(l/n both 6 = ek ) > (1 proves (1 With bounded is (1 some a > - S > 2. in L, or herds start in infinite states. With unbounded 2, the mean time mean in infinite x) if — F 5) L , while • • • > 1 time. the tails. is 1 ^J° En 6 n . —S n The . 
The proof has polynomial weight J2T x "/ na = - ei)(l - e 2 ) - ei)(l - e 2 ) to herd of time until a a herd starts in period n, the discounted fraction of time until then Theorem and our proof explains (Time Discounting) The expected 5 -discounted fraction vanishes as 5 —> 1, provided each F s has polynomial weight in a 2 a t end quickly enough that H ends ai in state expected discounted time to finishing any temporary herd of 7 < 6 herd starts Proof: wrong temporary herd on andb. expected (ex ante) to take forever! Unbounded Beliefs Example Cont'd. With 7 = is infinite, iff beliefs, H — but then mean time complete learning eventually obtains, but H H, and conversely few signals available in state b must have an unbounded With unbounded then respectively, temporary herds on near ) 2. in state in favor of the true state density, to ensure a correct herd in finite time. how < 5 iff 7, mean time herds begin in finite E.l(b), a This leads us to beliefs. How Long Speed of Action Convergence: at the edge E^ > —22. on how quickly individuals become quickly a herd starts depends not convinced of a state but on Theorem — 0. when YL e n < °o. 19 Let b n be n's exit being temporary. The chance that a given herd at • — e n )(l - e n+ i) (1 ^ marches monotonically toward some Im with Jm (£ n ) for all < a < l. 21 an induction argument on the e k ) if e 6 [0, 1) for all i. - (£ k { such extreme signals aren't perfectly revealing. 21 Michael Larsen (U. Pennsylvania) gave us a quick proof: Let 77 > 0. As the early terms of S(x) = 2 ) = 77. J2T z"/™" don 't affect our limit, assume l/n° < 77; then (1 - x)S(x) < (1 - 1)77(1 + x + x H i.e., 1. beliefs, 15 4. The pivotal role played by the overturning principle large weight accorded isolated actions unappealing; either NOISE 22 therefore, by design that being noisy we now add ('craziness') or is is somewhat is The both economically implausible and theoretically system: 'noise' to the Some percentage of individuals We mistake ('trembling') do not choose wisely. not public information, and that this trait across individuals. Since unsettling. is assume distributed independently 23 actions are expected to occur, none can have drastic effects. all Below, we address the simplest case of craziness noise, and appendicize the discussion of trembling noise. So with chance We Km Mr. n chooses action a m , , irrespective of history. 24 = 1 — $3 m =i K m > of rational individuals, each of whom p(m\s,£) = F (pm (£)) — F (pm -i(£)). So action a m is now assume a positive fraction k take action a m with chance s s taken with chance ip(m\s,£) yielding likelihood continuation (p(m,£), where ip(m\s,£) <p{m, £) 4.1 = Km + Kp(m\s,£) = £i>{m\L, e)/if>(m\H, £) (7) Convergence of Beliefs — Since £ Ylm=\ sure finite limit 4c- p(m\H, £) = p(m\L, xl; { rn \H,£)if(m,£), (£n ) The £) = is still interval structure of 1 for some «7i, . a martingale in state H, with almost . . be adjudged ex post as noisy, and Theorems and 2 are robust 1 , Jm (Lemma will surely. 7 (Long Run H, where supp^oo) C to noise. Statistically with chance less than Proof: As all ip or (p(m\H, £) 1 if £ With bounded beliefs, £ Jm- With unbounded are boundedly positive, Contrary actions ^ m, will perforce constant behavior of noisy individuals Learning with Noise) [0,oo). m' for all So simply be ignored. The next result asserts that doesn't affect long run learning by rational individuals, as Theorem 6) is also still valid. 
m implies p(m'\H, £) = p(m'\L, £) = and thus rational individuals take action a m almost it can be filtered out. ^ In the noisy model, £n —> 1^ 6 J almost and beliefs, £00 we need only check = surely, in state 4*, G Jm almost surely. stationarity of £ G supp(£00 ), =I V>(m\H,t) 22 (6) + Kp{mlLj = £ Km } =£ K m + Kp(m\H, £) BHW Note that the herd fragility discussed in refers instead to the release of public information. 0ne might think that informational free-riders constitute a different form of noise i.e. where some individuals receive no private signal, and simply free-ride off the public information. But they do not require special treatment: Simply let F H and F L each have an atom at 1/2. 24 Equivalently, every action is commonly misperceived by all successors. — 23 16 £n+\ . /45° • /45° 2u 2u \tp{y)s? \/\/'vlM^ U /V>^(2,-) 2u/3 2tt/3 - i h Figure 5: Continuations. Bounded Beliefs Example — In > ' h J\ 2u 2u/3 y 2u 2u/3 J\ Here, we juxtapose the two continuation likelihood functions for the first for We only rational individuals, then with some crazy types. see in the right graph that the discontinuity vanishes, corresponding to the failure of the overturning principle. = and so p(m\H,£) p(m\L,£). That £ 6 J now ensues from conceptually easier here (one case and not two). Finally, Theorem the proof of but 1, mimic the proof of Theorem is 2. Convergence of Actions: The Failure of the Overturning Principle 4.2 While long-run learning resilient: Noisy individuals rational herds: Do unaffected by constant background noise, herding is — — simply do not herd. like cats The the rational agents herd? It is not so is natural to ask about failure of the overturning principle offers a special challenge, as herd violations only minimally impact public beliefs, being deemed irrational acts. This severs our clean implication: belief convergence => action conformity. Let us illustrate running is small By is with noise. The — the overturning principle, (p(m',£) amount left panel in figure 5 depicts our can see that for £ near Jm \(p(m, , for all m' ^ m. £) —£\ Observe how each £ w only for m! = m when £ near of noise effects a remarkable sea change in figure For in the right panel of figure 5. ip fails only fixed under one continuation, since contrary actions can't occur Introduction of a small p and 7 Bounded Beliefs Example, where one and \ip(m',£) — £\ is bounded away from stationary point £* there: how Lemma are continuous, ip(m',£) if all — £ 5, Jm . as seen actions occur with boundedly positive chance, and = for all m' at the fixed point, by Theorem B.l. For instance, <p(m', £)-£ = £ p(m', p(m',£)-£ *[PKIM)-P(m'|ff,l)] = ~1 Kp(m'\H,£)+K m + Km>/(K p(m'\H,£)) (8) , where P(m',£) \<p(m', £) is 0, and — £\ for = £p(m'\L, vanishes for m' ^ m, £) / p(m'\H, £) is all the old noiseless continuation. So, with noise, m', and any £ G (p(m', £) = £ Jm : Indeed, (p(m, because the denominator 17 is £) = £ since infinite (as the numerator p(m'\H, £) =0). That <p(m, £) — = £ at a fixed point £ allows us to make simple deductions about Appendix C develops a theory of the rate of convergence for this system. stochastic dynamical systems like this one. Given functions C l and tp ip stability for that are, respectively, at the extremes, Corollary C.l asserts that and continuous M = 6 l)^™ *1 ^ = rate that (£ n ) converges to fixed point I 1 \<Pi(m, ]"J m=\ mean the frequency-weighted geometric i.e., The proof of the continuation derivatives. 
of the next result contains the core essence of our most broadly applicable theoretical finding. Lemma L H f and f derivatives Clearly, Proof: £ = Assume 10 (Rate of Belief Convergence) Y2 m =i VH m If private beliefs are . ^2^= itp{Tn\H,£) = l-^> £) P( ( m If £)'• y 1, Y^m'=\ ^i{ than with equality (b > 0), 1: — Pe(m,£) jumps iff all 1 and informative Whether a mean an Jm (convergence < with 1. £[f (p e = (m,£) < £ for all m\ and (ipi(m', £)) is 1. If rate 0). If («; all any are are non-negative, = £p(m'\L, £) / p(m'\H, £) be m + Kf3i(m,£))/(K m + re), which is strictly < 1, given Lemma 1(a), bounded beliefs So 9 1/2). Let (3(m',£) 1. H (b) - L (b)] f beliefs (b = inequality neatly proves that convergence occurs at a derivatives are Then + into < 1. rational herd eventually starts turns on the speed of convergence of the public likelihood ratios (£ n ). Suppose rule out then 9 25 (9) in (9) vanishes, since ip(m',£) So the arithmetic mean of the derivatives the noiseless continuation. less tails, aU functions are differentiable, we then have , m'\Hi £) = 0- 1, 0, l while the martingale property yields the identity G Jm the second sum the arithmetic mean- geometric < C m=l negative, then (£n ) eventually rate 8 have bounded and f H (b),f L (b) > m=l £ , M M - Yl ^H#' £)<Pt{™> t) + Yl Mm\£Mm, £) 1 At a fixed point F H FL that infinite limit cascade £ n —> £ € Jm . When can we subsequence of rational 'herd violators', whose private beliefs counteract the public belief? In light of the chance provided ]T^Li beliefs, this inequality I 1 ~ P( holds if given the positive tail density. So, 25 Namely, each 26 We mean 27 Since not everyone C 1 in )) F H and —> Borel-Cantelli (first) m \H,£n For then the convergence of £n is we have a 0, < F L °° for almost surely C have and thus Lemma l tails, this occurs all (£ n )- with f H (b) # - p(m\H, £ n 1 ) -> is 27 with zero With bounded and f L (b) exponentially ^ 0. fast, 10 implies that some open neighborhood of b and the non-standard conditional version of the is Lemma, 26 rational, this inequality in fact 18 b. Lemma, e.g. Corollary 5.29 of Breiman (1968). assumes a worst case. Theorem and FL Lemma C have 1 Then tails. suffice for the Borel-Cantelli Lemma beliefs, FH that arise. and the slow rate of convergence may not above. This must remain a (hard) open question. new venture into and investigate a territory model with heterogeneous preferences, or is must and MULTIPLE INDIVIDUAL TYPES 5. types that private beliefs are bounded, rational herds unbounded 10 doesn't apply to We now Assume Herds) 8 (Rational A types. richer and often more realistic model with multiple but observable informationally equivalent to the single preference world. So just as with the noise formulation, we assume that types are private information. Still one can learn from history by comparing the proportions choosing each action with the known type frequencies. This and inference intuitively ought to be fruitful, barring nongenericities, like equal frequencies exactly opposed learning: vNM A preferences. Dynamics may curious is then introduced — confounded which each action in may it Given are by unbounded private arise even with turns out to be an all-round fundamentally different economic beast. many finitely types — t 1, . . . , T, where t determines both vNM preferences over the given action set a\,...,a,M, as well as one's private signal distribution. 
denote the known proportion of type Since an one's type t, and assume the types are private information, this formulation is one type has state-independent preferences. In contrast on (and thus be informative of) private As before, {£ n ) is belief threshold rule choose action a m when choosing an t of £ yielding p {m\H,£) = U l U JM - p (m\L,£) = J{ U J\ all £ where each type finds himself • • unbounded private identical to noise to noise, all decisions for each signals, of private information, t, then J 1 may depend This radically changes the analysis. still employs a posterior public likelihood in J = J n J2 n 1 • • • D J7 an absorbing basin. {0, oo}. If £ n 6 is So J, every action l Jm will . Similarly, let the overall absorbed if is just one type has chosen irrespective and thus provides no information. Conversely, 19 t on Next, call the set £. the action absorbing basin so that = but if all action; however, they will generally disagree {H,L} and t Let A* across individuals. i.i.d. Let p'(m|s,£) be the chance that someone of type given state s e , signals. is a convergent martingale in state H. Each type the desirability of the actions. set: is The Model and an Overview 5.1 J* taken is This twin pathological learning outcome all states. one measure more robust than wrong herds, as beliefs. It also twist upon an outcome well converge with the same probability in new if £n £ J, some type's action E J almost £oo same more surely, as there are potentially In a limit cascade, a type-specific herd the But unlike not a foregone conclusion. is But action. if may limit points. Everyone of the same type arise: vNM types differ in their we cannot conclude that before, preferences, unless and the same action, an overturning principle doesn't work. As with all will take should take one noise, we must apply the speed of convergence reasoning from section 4.2 to conclude that herds must arise. The dynamics of the likelihood ratio are described in the usual notation by (7) and T i>{m\s, £) = ^2 AV (mis, £) (10) t=i If -0 is £. continuous in By Theorems 2 £, and multiple types, this outcomes. Such in the two then any fixed points must satisfy (A-4): 4>(m\H, may is solutions to this criterion 7, all We call its (p(m, £*) — £* when no longer exist since would ignore it the absorbed lie in and is J confounding is most certainly not their decisions then (11) totally informative at £*. For if it were, agents would generally be informative. Rather history has any additional inferences. The distinction Both private both compelling and sweet. signals and history becomes affect totally and private signals wholly inconsequential: Yet both are pathological outcomes. verify that confounded learning can occur, or £ n —>£*, inequality (pt(l,£*) that = but with Vm decisions in a confounding outcome, whereas in a limit cascade, history this will set, £) actions are taken with equal chance = ij;(m\H,e*) precisely so informative as to choke off with a cascade To or <p(m, states: Crucially, history decisive, = solutions £* outside true. ip(m\LJ*) become £) imply the if (£ n ) ^ tpg(2,£*), with both enough to £*, suffices to check the generic terms positive. For as in the proof of local stability of the point £*. starts close it —> then £n As defined £* in Appendix C, Lemma this 10, means with positive probability: Since (£ n ) cannot cycle, limit cascades and confounded learning are the only possible 'pathological' (incomplete learning) outcomes in finite-action observational learning models. 
5.2 Examples of Confounded Learning We show that confounding outcomes may M and confounded learning can occur. T = 2 types. Types differ in their preferences, but not their signal distributions. Type U is 'usual', preferring action a 2 in state H, ai in state L, and ai for private beliefs below p u (£) = £/{u + £). The preferences In the examples, we have = exist, 2 actions 20 and 2u/3 U U t2u Figure f f t2u Confounded Learning. 6: = Bounded Beliefs Example, Based on our with \ u graph, the curves tp(l\H,£) and ip(l\L,£) cross at the confounding outcome = 4/5, v/2. In the u where no additional decisions are informative. At £*, 7/8 choose action oi, and counterintuitively 7/8 lies outside v and X u the convex hull of A eg. in the introductory driving example, more than 70% of cars may merge right in a confounding outcome. The right graph depicts continuation likelihood dynamics. left £*, — of type —1 V are opposite: action a\ always pays zero, while a 2 yields payoff v in state He thus chooses in state L. WLOG, v > > u 0, so a\ for private beliefs above the threshold vNM preferences > are not exactly opposed if v p v = (£) Bounded Beliefs Example Cont'd. The transition probabilities for nowp u (l\H,£) = (3£-2u)/2£andp u (l\LJ) = {3l-2u)(3£+2u)/8£ 2 where £ 6 v For type V, p (l\H,£) = (2v - £)/2£ and p v (l\L,£) = (2v + £){2v - £)/8£ 2 , , With bounded beliefs, 7 in the intervals J^ = dynamics nonessentially [0, 2u/3) and differ confounded learning. The same remark holds < v, no overlap arises if mHJ) = -^ + X^ X» by (10). 2u/3 < and mL 3 2£ U 2£ Figure 6 graphs these functions. if J^ = [2u, oo) and J^ So consider the dynamics u. ,l) We = X» £, = where [0, for £ ^m^+>? U € The [2u,oo), in section 3 2w/3] overlap. G (2u/3, (2V - 2 for £ thus precluding { can rewrite (11) are (2u/3,2u). J% = from those because only one of the types ever makes an informative choice for any For u type the absorbing basins complicate the dynamics. two types take action 2 with certainty respectively. If these overlap, then + £). £/(v u. • (2u/3,2v). H and 2u): mV+e) 8£ 2 a confounding outcome as {2v (2u If u > v then £ maps (2v/3, 2u) onto X U ,X V u (\ /4 . + - (0, oo), Next, Since the functions ip 2v) 2u) = m and so a confounding outcome \ v /4)/ij;(l\H,£*) <p t (l,r) = (3A /4 + v 3\ /4)/ip(2\H, £*) and both are t/ £){3£ £)(3£ positive. exists for generically differs from <p t {2,t) = So confounded learning can occur. are increasing, the system either starts in a cascade in 21 any [0, 2u/3] and thus or [2u, oo), or starts trapped is As in [2u/3,£*] or [£*,2u]. probability mass all wrong herd eventually concentrated at the endpoints, there can only arise only a is confounded learning € £ [^*,2u], Just as in the proof of [2u/3,£*]. E [£oo] = if £Q and a correct herd or confounded learning Theorem 2, since (£n ) is e £ a bounded martingale and both possible outcomes have positive probability in each , if or More case. generally, with only a single confounding outcome, limit cascades must occur with positive chance. The Basic Theory 5.3 We • Theoretical Robustness. comes can exist, Theorem 9 (a) have shown by example that confounding out- and confounded learning may (Confounded Learning) arise. Assume We now there are If the private signal distribution is atomless, obtains with positive chance. No one T> 2 types. then for nondegenerate specifications and type proportions, confounding outcomes of preference finish outlining the theory. 
exist, and confounded learning confounding outcome must occur with bounded (b) Generically, at any confounding outcome only two actions are taken. (c) Confounded learning (d) When confounded is is generically robust to the~addition of craziness noise. learning does not occur, a limit cascade arises, almost surely correct with unbounded (e) If beliefs. M > 2 with unbounded beliefs, or £n i.e. —» £ G J, and beliefs. if the private signal distributions are discrete, then generically no confounding outcome exists. The Proof: first point has been addressed by the example, which Let us consider the equations that a confounding outcome (6) bounded some actions may not occur beliefs are taken with positive probability at to M - Generically, sign of L jj — 1 Equality (11) (c) d M <fii(i, {i\£) as £), = ^ d£^(i\£)~ when it Assume that 1 = Yl m =i i; ( rni\H^) still obtains if state independent noise is ip(m\£) Any = £ L i) {i\£)^{i\£) _ ~ ffi(j|l) - g(t|g la ^2i^ii>( M with M actions < Walras' Law) m i\L,£) = = satisfy (A-4). This precludes all = , The unbounded beliefs argument 22 is a^(i\£) __ ~ 1- 2. sides. The - a^f{i\£) H aiP {i\£) + {l-a)Ki (pt{2,£). but limit cascades, where some action a m and confounding outcomes, where . M added to both H {i\e) j> ^{i\£f 6 4o must 1 for two actions a m — solve. First, equals (11) holds. Finally, only for nongeneric noise will <fe(l,£) (d) must £* equation in one unknown £ can only be solved when H {i\£)^{i\£) ' not nongeneric. Equation (11) reduces in (a £*. independent equations, since 1 at all at £*. is ip(m\£) by now standard. > for at least n Jx Figure Continuations and Absorbing Basins. 7: Example, but that there is no This is interior basin, but in the second graph there is an graph, preferences are such first interior basin. graph, that with the basin, for any value of I at most two actions are in play. This one of M > 2 types has Observe in the second the kind of example is M > 2 and confounded learning. which can be used to construct models with (e) If Bounded Beliefs based on the here with one insurance and two extreme actions. In the unbounded private beliefs, then for no £ G (0, oo) are So confounding outcomes only appear only two actions taken with positive chance. in degenerate models. Next, v v p (l\H,e)-p (l\L,£) p"(l\L,e)-pV(l\H,e) A^ \v is a reformulation of (11), and so if FH and FL are discrete, then the RHS will only assume a countable number of values, and confounding outcomes will generically not exist. Remarks. 1. While only two actions confounding outcomes, generic models with outcomes. For with bounded This will 2. I* beliefs, M > 2 actions can only two actions happen when there are absorbing basins Being a fixed point, we can say occur with positive chance at generic will little may have confounding still well be taken over a range of for the insurance actions, as in figure 7. about the uniqueness of confounding outcomes — except that with discrete distributions, they are not unique when they interval 3. only • t will satisfy (11) FH because Our example posited two types with vNM • some around preferences differ, then one can Economic Importance. light t. and FL exist (for an are locally constant). different preference orderings over actions. If show that no confounding outcome can We now return to our introductory example, exist. and shed on exactly when one should expect to see our new confounding phenomenon. 
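Before returning to the driving example, here is a numerical companion to Theorem 9(a): a sketch that locates a confounding outcome ℓ* by bisection on ψ(1|H,ℓ) - ψ(1|L,ℓ) and then checks the stability inequality φ_ℓ(1,ℓ*) ≠ φ_ℓ(2,ℓ*) by finite differences. The two-type specification below (thresholds ℓ/(u+ℓ) and ℓ/(v+ℓ), beliefs with F^H(p) = p^2 and F^L(p) = 1-(1-p)^2, and the parameter values) is an assumed example, not the one of the text.

    u, v, lam_u, lam_v = 1.0, 2.0, 0.6, 0.4
    FH = lambda p: p*p
    FL = lambda p: 1.0 - (1.0-p)**2

    def psi1(s, l):
        F = FH if s == 'H' else FL
        return lam_u * F(l/(u+l)) + lam_v * (1.0 - F(l/(v+l)))   # chance of a_1

    def phi(m, l):
        pH = psi1('H', l) if m == 1 else 1.0 - psi1('H', l)
        pL = psi1('L', l) if m == 1 else 1.0 - psi1('L', l)
        return l * pL / pH

    D = lambda l: psi1('H', l) - psi1('L', l)

    lo, hi = 0.1, 100.0              # D changes sign on this interval for these parameters
    for _ in range(80):              # plain bisection
        mid = 0.5*(lo + hi)
        if D(lo)*D(mid) <= 0:
            hi = mid
        else:
            lo = mid
    lstar, h = 0.5*(lo + hi), 1e-6

    d1 = (phi(1, lstar+h) - phi(1, lstar-h)) / (2*h)
    d2 = (phi(2, lstar+h) - phi(2, lstar-h)) / (2*h)
    print("confounding point l* ~= %.3f" % lstar)
    print("phi_l(1,l*) = %.4f, phi_l(2,l*) = %.4f (distinct => l* locally stable)" % (d1, d2))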
The Driving Example Revisited with Unbounded Houston (type U) drivers should merge right (action a x ) in state state L, with the reverse true for Dallas (type 23 V) drivers. Going Beliefs. H, left to the Posit that (action a 2 ) in wrong city yields zero always. Getting home by the right lane a\ payoff vector of the Houston-bound drivers, it is (0, 1) Claim and (u,0). 28 is preferred, as is H (u, 0) in state The proportions are A^ and = it has fewer potholes. The (0, 1) in and X v .7 With a differentiable signal distribution and unbounded point exists if We Proof: preferences are show that more disparate than type frequencies: ip(l\H, £) lies below -0(1 JZ^, £) near £ = state L; for Dallas = .3. a confounding beliefs, u/v X u /X v < and above it < v/u. = near £ and therefore by continuity the transition probability curves must at some oo, £ coincide. Differentiating V>(l|s, £) near £ = > The ipe(l\L, 0) u F s (£/(u + £)) + X v [l -F merge nearly right in state all By Lemma 1, contrarians when they there are (£/{v H than in state L. many more simple, near p 6. But otherwise, beliefs is (infinitely are right than (resp. left) for public beliefs is drivers are Houston-bound, then what matters near extreme nate, then s + £))] = intuition for the existence of confounding points beliefs. Clearly, if will iff X v s u L H f (0)[X /u - X /v]. Since / (0) > f (0) by Lemma A.l, X u /X v > u/v. The reverse inequality near £ = oo is similar. yields ^(l|s,0) ipe(l \H, 0) = = how many p if b) in bounded preference differences domi- contrarians of either type exist. are wrong. Thus, = to uniformly true that more more with unbounded when they b (resp. it is and common state beliefs) doctrinaire more merge will H than state L right (conversely). CONCLUSION This paper has explored and expanded upon the so-called herding literature. We hope our analysis underscores a rich theory that ensues from attention to likelihood ratios and their conditional martingale property in theoretical learning models. • Related Literature. theorem We for We think it noteworthy that Milgrom's (1979) convergence competitive bidding also turns on the bounded-unbounded signal knife-edge. must underscore that our complete learning results are in no way related to Lee (1993), where a rich continuous action space effectively allows for a one-to-one signals -H- actions. same way the As roughly foreseen by Banerjee (1992), this precludes herding in as statistical decision problems do. A key touchstone of herding inference of predecessors' signals as they pass through a there with 28 is many no map loss of generality in lumpy filter (like is of much the coarse action choices). our model with two signal values compared to a model signal values. By a payoff renormalization, this is equivalent to our standard payoff structure. To quote from Lee (on page 397): From the standpoint of information revelation, 24 a sparse action Experimentation Literature. • Lessons for the and S0rensen (1996a), or SSI, presents a allel different slant between bad herds and incomplete learning experimentation, as actions learning -f-> makes clear that even it if on this field, drawing a formal par- optimal experimentation: observational signals, as belief thresholds «-» actions. SSI also a patient social planner (unable to observe signals) could control the action choices, he too would In light of SSI, that <-> in Our companion paper Smith succumb to bad herds, given bounded private more than two actions beliefs. 
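The existence condition in the Claim above is easy to probe numerically. The scan below keeps the same assumed unbounded-belief distribution as in the earlier sketches and, for several arbitrary parameter combinations, looks for a sign change of D(ℓ) = ψ(1|H,ℓ) - ψ(1|L,ℓ) over a wide grid of ℓ; a sign change signals a crossing of the two transition probability curves, and hence a candidate confounding point.

    import math

    FH = lambda p: p*p                 # assumed belief distributions, as before
    FL = lambda p: 1.0 - (1.0-p)**2

    def has_crossing(u, v, lam_u):
        lam_v = 1.0 - lam_u
        def D(l):
            a, b = l/(u+l), l/(v+l)
            return lam_u*(FH(a) - FL(a)) - lam_v*(FH(b) - FL(b))
        grid = [math.exp(x/10.0) for x in range(-60, 61)]   # l from about 0.0025 to 400
        signs = {D(l) > 0 for l in grid}
        return len(signs) == 2          # both signs observed => a crossing exists

    for (u, v, lam_u) in [(1.0, 2.0, 0.6), (1.0, 2.0, 0.8), (1.0, 3.0, 0.5), (1.0, 1.2, 0.7)]:
        lam_v = 1.0 - lam_u
        cond = u/v < lam_u/lam_v < v/u
        print(u, v, lam_u, " condition holds:", cond, " crossing found:", has_crossing(u, v, lam_u))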
generically cannot be taken at a confounding outcome speaks to a general property of optimal experimentation models. For instance, SSI published examples of incomplete learning, and cites in fact, given beliefs for Our exogenous signal functions and just two which more than two signals optimally stochastic stability condition also problems. Since informational herding is may be have binary signals. 30 all states, only arise And nongeneric models yield with equal chance in either state. a useful tool for optimal experimentation formally equivalent to single person learning, we suspect that by focusing on the the stochastic dynamic process of likelihood ratios rather than the intricacies of dynamic optimization, our stability program offers significant hope. For example, in the popular two-state, continuous-action learning models, suffices to it verify that near a fixed point, the continuation posterior does not always have the same slope as a function of the prior belief after each observed signal. This condition clearly has nothing to do with the particulars of the actual dynamic optimization. Indeed, this paper has afforded this simple insight precisely because the optimization problem • Where Do We Go Now? for why so trivial. Smith and Sorensen (1996b) relaxes the key assump- tion that one can perfectly observe the ordered action history. (more plausible) reason is isolated contrary actions While might have little effect, motivated by deeper concerns. For absent martingales, the resulting analysis different that standard rational learning theory, and another this is yet is we are radically forces one to think (a la Blackwell) abstractly about information, and to delve deeply into the theory of urn processes. A. ON BAYESIAN UPDATING OF DIVERSE SIGNALS The set-up (measures The s fi characterization of the and distribution functions Radon-Nikodym F derivative s , s = H,L) H of F , F L is in taken from §2.1-2.2. Lemma 1 implies set provides little means to signals adds to the updating of the posterior distribution and the whole sequence of individuals little convey the private information. Consequently the infinite sequence of private end up choosing the wrong action. 30 An additional one that has come to our attention 25 is Kihlstrom, Mirman, and Postlewaite (1984). may : Lemma A.l (Extreme Beliefs are Informative) L L H (a) F (p) < pF (p)/(l - p) holds for all p (0, 1), and is strict when F (p) > L H is weakly increasing, and strictly so on (b,b]. (b) The ratio F /F <E = dF H /dF L Since / Proof: F H (p) = ( is a strictly increasing function, we have dF L (r) < f(r) f(p) Jr<p for F L (p) > any p with F H (p) - F H (q) = Thus, whenever 0. f f(r) dF L 0. {r) > [F L (p) Jf T <P dF L (r) = pF L (p)/(l - p) F L (p) > F L (q) > - F L (q)]f(q) > 0, [F (A-2) we have L (p) - F L (q))F H (q)/F L (q) Jq where we have used (A-2). It immediately follows that FIXED POINTS OF MARKOV-MARTINGALE SYSTEMS B. The general framework that evolution of the likelihood ratio Given ip{- F H (p)/F L {p) > F H (q)/F L (q). 1 •) • ip(- • 4> 1 is a finite set M x M+ : £) is also and ijj we introduce here (£ n ) Ai, and Borel measurable functions ->• [0, 1] is <p(- , •) : a probability measure on For our application, ip(m\£) Next, for any is and <p(m, £) £, to, the M. x K+ —¥ K+, and satisfying: M for all £ € R+, or J2 meM ip{m\£) jointly satisfy the following 'martingale property' for all £ with likelihood not confined 31 over time viewed as a stochastic difference equation. J2i>(rn\£) P:R+ includes, but 1. 
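Lemma A.1 is simple to sanity-check numerically for any admissible pair of private belief distributions. The pair below, F^H(p) = p^2 and F^L(p) = 2p - p^2 (so that dF^H/dF^L = p/(1-p)), is an assumed example.

    FH = lambda p: p*p
    FL = lambda p: 2*p - p*p

    ok_a, ok_b, prev_ratio = True, True, 0.0
    for k in range(1, 1000):
        p = k / 1000.0
        ok_a &= FH(p) <= p*FL(p)/(1.0 - p) + 1e-12       # part (a)
        ratio = FH(p) / FL(p)
        ok_b &= ratio >= prev_ratio - 1e-12              # part (b): F^H/F^L nondecreasing
        prev_ratio = ratio
    print("Lemma A.1(a) holds on the grid:", ok_a)
    print("Lemma A.1(b) holds on the grid:", ok_b)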
G M+ p(m,£)=£ (A-3) ( the chance that the next agent takes action a m is = when faced the resulting continuation likelihood ratio. B in the Borel a-algebra B on R+ = [0, oc), define a transition probability xB->[0,l]: P(£,B)= ^H') J2 m\<p(m,e)eB Let 31 (£ n )%Li be a Markov process 32 with transition from £n i-> £ n+ i governed by P, and Arthur, Ermoliev, and Kaniovski (1986) consider a stochastic system with a seemingly similar structure But their approach, differs fundamentally from ours insofar as here — namely, a 'generalized urn scheme'. it is 32 of importance not only Technically, £ n : QH how many times a given action has occurred, but exactly when it occurred. K+ is a measurable Markov process on (Q H ,£ H ,v H ), where E H and £ L -> correspond to the restriction of sigma field S to Q H and fi L respectively. Despite a countable state space, standard convergence results for discrete Markov chains have no bite, as states are in general transitory. , 26 < El\ Then oo. E[en+1 \e l ,...,£n By a martingale, true to the above casual label of (A-3): (£ n ) is ] = E[£n+1 \£n ]= we have the Martingale Convergence Theorem, Theorem B.l (Stationarity) m G M., —» obtains, and £n £00 If £ ]^vH4Mm,4) = 4 f tp(£ n ,dt)= —> £n (p(m,£) and £ >-> almost surely, then for all £ = £<*> i-> lim^oo a.s. ip(m\£) are continuous for all G supp(£ 0O ), stationarity P(£, {£}) = 1 i.e. = ip{m\£) or ip(m, £) = for £ m all Indeed, (£n ) must also converge weakly (in distribution) to Markov chain, its limiting distribution The Futia (1982). G M (A-4) Since (£n ,mn ) £00. convergence then implies that the invariant limit must be pointwise a.s. lines, the continuity assumptions As we wish are subtly hard-wired into the final stage of Futia's proof of this fact. away with we continuity, when also a is intuitively invariant for the transition P, as in is While Theorem B.l admits a proof along these invariant. exactly > £n That (A-4) establish an even stronger result. — neither ip(m\£) nor p(m,£) Theorem B.2 (Generalized is to do violated for m £ is zero suggests Assume Stationarity) that the open interval I C R+ has the property W e I 3m G M > 3e Then I cannot contain any point from that there exists £ G /nsupp(4o). Let m G M with ip(m\£) > e J = and <p(m,£) <£ £ !-> Proof: is that £ G supp(^ 0O ). an m £) — Theorem B.2 Finally, £ are both yields it is > 0, and suppose (*) a contradiction for £ n +\ J & J with chance in the £e (•), for all next period £, Then for each m G M, J is J, there eventually at least either £ most at That e. 1 — e. h-> tp(m, £) or or the stationarity condition (A-4) obtains. such that £ does not satisfy (A-4) and both £ £ 1-4 ip(m\£) are continuous, and <p(m, e eventually in J. Hence, £ cannot exist. discontinuous at is there If is Assume ip(m\ £) > almost surely eventually exits J. This contradicts the claim that with (£ n ) positive chance (£ n ) Corollary £\ Since £ G supp(£oo), £n G J. the conditional chance that the process stays in So the process - (£-e/2,£+e/2)nl. By with positive probability. But whenever £n G J, is, \<p(m, £) e, the support of the limit, £00. Let / be an open interval satisfying (*) for e Proof: exists > ip(m\£) : then there is an open interval / around bounded away from 0. This implies that an immediate contradiction. obvious that the corollary implies Theorem B.l. 27 i-> £ in (f(m, £) and which ip(m\£) (•) obtains, and so STABLE STOCHASTIC DIFFERENCE EQUATIONS C. In this appendix, We ence equations. 
There first then use develop a global stability criterion for linear stochastic it on to derive a result local stability of work cited as the first is Kifer (1986) in this area, Furstenberg (1963) a classic article, and — state-dependent transition chances many dimensions an added our needs, and extrapolation to for is Bellman level. a modern textbook. Ellison and Fudenberg (1995) have independently ob- is tained related 'concrete' results, yet not as flexible is critical differ- a nonlinear systems. 33 a small extant literature that treats models like ours at an abstract is (1954) we benefit of our different approach. we add an Further, analysis of rates of convergence. There Overall, the literature aimed is n to a fixed point, such that 9~ £n (£ n ) happy a small literature which problems we discuss here, but we have not been able to exactly recognize our treats the results. is ^ C f° r some £ Lemma dimensional linear case we treat in discuss later on, —> 9~ n upper bound 9 such that to settle for an an exact rate 9 of convergence of at determining C.l, we do not determine the exact £n We, on the other hand, are 0. —> for all 9 > we do obtain the exact In the one- 9. we rate, but, as rate in higher dimensions. • Linear Stochastic Difference Equations. Fix G M, p G a, b and [0,1], let e R. Let £ {a£n _i with probability p with probability 6£ n _i 1 (A-5) —p sequence of stochastic indicator functions defined by yn = = otherwise, where (a n ) random Markov process define a Lemma Almost (b) If 9 £ < No 1 (a) Let (b) If 33 We Law — ab£ n for all £ i.i.d. then — >• , = then £) > > all 9 uniform-[0, 1] aVn b^~ Vn £n -i, where (yn ) 1 when an < is = (M^I&I 2 ^)" \£ and yn a = variables. = Define 9 —> then there 0, \£ n \ result follows from with positive chance jumps to a.s., all We whenever In particular, £n p, is |a| almost surely 1-p p . |6| if 9 < 1. a positive probability that € No- Numbers, the (^ n ) 9. around ball £Li2/*- Then are coining terms here. Pr(lim„_+00 £ n for any open is of Large 0, —> provided £ Yn = Since £ n 0. 9~ n £ and No all n, the Strong ^ surely, for Proof: ab£ a sequence of is £n as: C.l (Stability of Linear Homogeneous Systems) (a) £n This can be recast (£ n ). £ but call € finitely many terms lie Since \. |a|"^|&| s_^~a at once, inside ' Yn /n —> 9 -> p by a.s. and stays any open a.s. ball there. Let No around a fixed point £ of a stochastic difference equation locally stable a small enough neighborhood about I. If Pr(lim„_>0o t-n = ^) J\f(, £ is globally stable. 28 if > some for Pr or a.s., k. Remark 4 = 4 G A/o}) > stays inside A/o starting at that 4- WLOG k = |4 G A/o}) With invariant. 4= if ^ ||4||/IKm||A/o > So Pr ({w G Q"|Vn 1. scale invariant dynamics, time invariance allows us to conclude that First, 1. positive probability then w fi So with positive chance, (4) dynamics are time since e (\J keN f] n >k{^ any 4 fc, G A/o £ G A/o for all Second, the equations are linear, so that £~m- we have now a smaller for all n. So, the system started in the small ball, would remain it ball around in the large ball. will do. if ||4|| 1 D n with < 114/11 zero, such that if But finally notice that from any point of the original large ball, one can reach the inner ball in only a finite number of steps, which occurs with positive probability too. Remark Observe that 2. but their geometric p)b, mean reformulate the criterion by first Remark taking logarithms, as inplog(|a|) + (l— p)log(|6|) from the theory of differential equations. 3. 
It is straightforward to generalize continuations, i.e. where yn has arbitrary finite Lemma C.l support. The to the case of < It is — (1 we If then 0, common through. Indeed, 4 let G n M one half of the lemma goes and assume A4-i f [£4-i A and B are given real 34 more than two analysis for multidimensional also of importance, but unfortunately in that case only where + logarithm to enter when translating from difference to differential equations. for the is of the coefficients pa that determines the behavior of the linear system. this is reminiscent of stability results 4 mean not the arithmetic it is n x n Vn = ifyn = if matrices. Let 1 \\A\\ and denote the operator \\B\\ Lemma C.l goes through, with nearly unchanged proof (using \\AB < \\A\\ \\B\\): If 9 > 9 = \\A\\ p \\B\\ p then 9~ n £n *A 0, i.e. 4 converges a.s. to zero at rate 6. As this part of Lemma C.l is the only result applied in norms of the matrices. Then the following half of l \ , \ the sequel, our local stability assertions will also go through in multidimensional models. Call 6 a convergence rate of not very tight, for 9" G [9', 1]. -> C.l. the projection onto 34 That 0~ n 4 -> at the rate for all 9 9', then impractical, for in multidimensional settings is Lemma We t (4) converges to In general, it is Consider for instance the case where time. £* if it > 9. This definition also does so at its we do not have the converse is p|| = B the projection onto a linear subspace, and orthogonal complement, then 9 = 1, but is a.s. reached in prefer to maintain the possibility of calling 9 a convergence rate, even is, rate; possible to find convergence rates smaller than A su P|l|=1 \Ax\. 29 is any rate Perhaps we ought to narrow down the convergence rate to the infimum however, that part of if 4 9. is finite if it is not the tightest such. See Kifer (1986) for more precise results for linear systems. • Nonlinear Stochastic Difference Equations. Consider the revised process defined by: {V?(l,4-i) with probability V(l|4-i) </?(2,£ n _i) with probability V(2|4_i) where transitions are independent: = £n (p(l,£ n -i) iff an < We ip(l\£ n -i). care about the fixed points £ of (A-6): = <p(l,i) Theorem C.l (Local £ of (A-7), each ip(i\-) Li = <p(2,i) (A-7) i Assume Stability of Nonlinear Systems) > is continuous and ) <p(i, is Lipschitz, 35 some open obtains, then for forever remain inside Proof: WLOG ball ip(l, . •) with Lipschitz constant a neighborhood N{£) of in £. £ inside lies whenever around (A-8) 1 £n it, (£ n ) will —> £, it converges at the rate Lemma C.l applies to our original non-linear system. — and thus L\ < Choose M{£) small enough so that for L\l}f p <\, xl)(l\£)>p, and \\<p(i,e)-<p(iJ)\\<Li\\e-i\l for all £ e M{£). Fix G M{£). Define a new stochastic process £ I where (y n ) is our earlier and ln € M(£) yielding £n As for all e M{£) ip(l\£) for all ify„ indicator sequence. Lemma i.i.d. n with £ £n £ for —> The i = ' [0,1], 1,2 = with £ holds £ e M{£) ||^n at x, then / — : Rm M{x) such is -> Rk that is is 9, l C.l then asserts £ n £\\ > = - \\£ n since for => £ n 1 l\\ £n —> = : Lipschitz with any constant £ a.s., realization of (an ) (p(l,£n _i), for all £ any 9 > n \\f(x) L > - f(x)\\ ||D/(x)||. 30 < L\\x - and the Lipschitz (in this realization). So with positive probability. a small enough neighborhood 9, Lipschitz at the point x with Lipschitz constant Vx 6 Af(x) —> the linear process (£n ) majorizes the non-linear one any such (an ) realization. 
In summary, function / neighborhood £, e N{£), we have yn Finally, the rate of convergence 35 for 6 N{£). In any positive chance given £ n and > p when = _'t= (^(In-i-i) property yields the majorization —> (£ n ) some p € 1 and given, (£n): 9. by a linear stochastic difference equation continuous, (A-8) obtains is with positive chance £ locally > £, if Also, £. and then argue that < L 2 As L\ —> while £n it, = Li^Ll- mi) < around we majorize (A-6) First, of the form (A-5), £n that at a fixed point If the stability criterion . 6 — and i x\\. If / is L > if there exists a continuously differentiable Afp (i) exists for wherein it is p which L\h\ dominated by < Whenever 9. 6= true with is If each |p/(l,£)|* when i.e. £ and we care about function, ip(i, {1| ' ) •) its eventually stays in p p (1_,, (1| ' )) C.l. through In that case <p(m,£) matrix derivative Di<p(m, Then 1. p {£), 1- carries £) at then must assume that the operator norm ||Z?/^(m,£)|| (which eigenvalue) mis less than < ' ^(2,*)| Lemma by N also continuously differentiable, is a vector (for several states). is (£n ) how Corollary C.l Finally, let us briefly explain precisely dimensions, £, which converges at rate L 1 Ll~ (£n ), Corollary C.l (C 1 -Local Stability) then Theorem C.l £ n -> in several a vector is the stationary point. is the same the proof goes through, largely as before. We as the largest We must also be more careful with the dominance argument. Rather than choosing a constant L\ larger than \y>i(l, £)\, with all we have to choose a matrix 1 rules finite s number S for each state, fi fix one reference and use state, it to define Rather than the simple open [0, 1] 36 In into closed subintervals, We convex polytopes. Theorem B.2 (and its proof) a single action a it we need to refer to the Rs_1 unit simplex in to the reader to it be impossible to will in the limit. So, even with learning. But while we do not get and Jullien (1991), correct action In the is same we ponder the optimal open intervals / and J full statistically distinguish beliefs, we cannot between these possibly get complete learning, in the terminology of Aghion, Bolton, chosen optimally. vein, with more than two states, the long-run state of the world has The exact formulation is optimal in a given state. ties of BHW In that case, may of we occur, when the two or more optimal actions, and there are unbounded learning will obtain, but is unbounded will arise if there are get adequate learning: the limit beliefs are such that the whereby more than one action (1996b), leave optimal in two states of the world, which is two states Harris, we would now have a balls. fewer actions than states, 36 we would become notationally cumbersome to write down. notation. full Given pairwise mutually absolutely of states. But the optimal decision sliced into closed If and likelihood ratios, each a convergent conditional martingale. partitioning of as £), MORE STATES AND ACTIONS can handle any continuous measures 5— with the same eigenspaces as D£<p(m, numerically larger eigenvalues. D. We A beliefs, will not necessarily observe that all individuals take what constitutes full-support also slightly non-trivial. 31 beliefs, which is true one outlined in Smith and S0rensen particular action. In short, the overturning But in the long run. the learning The lemma will unchanged with a denumerable action space. Lemma finite partition of [0, 1] in thus a countable set of posterior belief thresholds f threshold functions p just The convergence Theorem result . 
> we could b we Lemma In this way, we could minimum substitute a was well-defined because there were only m so that pm is unbounded With 3. obtain without any qualifications provided a unique action still and teriors sufficiently close to 1, b many such that beliefs case, arbitrarily close to 0. is comes to Theorem it finitely enough to close action threshold pi by one that Complications are more insidious when results model are preserved. complication arises, as our choice of the "bounded away" assertions hold. Similarly, in the proof of all 3 will yield the depend on the action space being denu- 2 does not instead just pick and get a countable partition, properties of the beliefs proof, a technical such that pm(£) actions. Otherwise, 2, 37 The martingale as before. bounded In the m the actions that are tied adequate. is Rather than a the least among again, since the individuals will eventually get the optimal payoff, analysis also goes through virtually merable. fail for then the overturning principle M — oo, both optimal for pos- is is still valid near the extreme actions. But otherwise, we must change our tune. For instance, with unbounded may beliefs, there made such infinite [r strength. Yet this m _i,fm is shrink to a point, which robs the overturning argument of ] not a serious non-robustness critique, because the payoff functions of the actions taken by individuals Under upon a Lemma E.l (a) Let Xqq J2 n xn = oo 37 is with the trembling formulation, where we support of the tremble from any finite I. SOME ASYMPTOTICS FOR DIFFERENCE EQUATIONS E. (c) must then converge! noise, the only subtlety that arises shall insist (b) sequence of distinct optimal 'insurance' action choices that the likelihood ratio nonetheless converges. This obviously requires that the optimality intervals its an exist Let x, € = if (0, 1) and x n < a 1, = £ X n for a/n n < all + oo i, This may mean that we cannot define if a > 1, with Y>x n then , Xn = (1 — x{) summable. b n , with (b n ) LetX^ > 0. Then^iXn-Xoo) < and x < x < 1 for all i, // Xoo > { and n n if (1 - x n ). Xn = J2 |6„| 0(n~ a ), and < oo. so 38 = oo = £»(*»-*«>) = <oo^> Xoo » 0. andJ2 n nx n ^ n nx • Then £ X < oo n • oo. necessarily well order the order the belief thresholds, nor as a result the actions. 38 Co Say x 3> > c (eg. c with x(-) > x » is x boundedly positive) of a function or random variable x if there exists some Likewise x < oo (or x boundedly finite) if x(-) < Co for some Co < oo. Co always. 32 Proof of (a): 39 The binomial theorem = yields d n - a/n — log(l — bn ) — log(l l/n) a = — a/n + Cn), for c„ = 0(l/n 2 ). By the mean value theorem, \d n < \b n + Cn|/min(l — a/n - b n 1 — a/n + c„) < 2\b n + c„| for large enough n (since a then l-a/n-bn > 1/2 and (1 - l/n) > 1/2). So £ |6 n < oo iff £ |d„| < oo, and — a/n — log(l — bn ) log(l , \ | An = n ° exp(dj The boundedly Proof of + • • • + dn ) = D„n- a finite parts follow Dn < D < < = 0(l/n a So A'„ oo. ). with such an exact bound. We've shown before that (b): D < where , — (1 — x2 £i)(l • ) > • 1 — (xi 4- + x-i • if all •) • x { lie in (0, 1). This means that ^2 k>n x k > 1 — X^/Xn. Summing both sides over n and appealing to £ n Efc>n x * = E na; „ yields £nx n > EU ~ ^oo/A^) > 52(Xn - *«>) Conversely, the identity E n (^n-i — -^n) = J2(Xn — -^oo) follows by rearranging terms of partial sums to n = N, and taking limits as N —> oo. Our sought for result follows from = X>( A'"-1 - J2 nx n Proof of x E [0,x]. 
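A numerical illustration of Lemma E.1(a): for a summable perturbation (b_n), the product X_n = Π_{k≤n} (1 - a/k + b_k) decays like a constant times n^{-a}. The choices a = 0.7 and b_k = 1/k^2 below are arbitrary.

    a = 0.7
    X, rows = 1.0, []
    for k in range(1, 200001):
        X *= 1.0 - a/k + 1.0/k**2            # b_k = 1/k^2 is summable
        if k in (10, 100, 1000, 10000, 100000, 200000):
            rows.append((k, X * k**a))
    for k, val in rows:
        print("n = %6d   n^a * X_n = %.6f" % (k, val))   # the last column stabilizes as n grows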
So < Since x A'oo = there exists a(x) 1, -xi)(l (1 X>(Xn _l - Xnj/Xoa = Y,( Xn < -x 2 > with 1 X^/X^ ~ — x > exp(— a(x)x) 1 £ )---> exp(-a(x)^ k x k ) > exp(-a(x) for all D fcx fc ). fc Dynamic Systems) Let g be nondecreasing, with g(0) = 0. 41 = -g{x) and y n+l - y n = -g{yn with x(0) >y >0, then x(n) > yn < 1 exists, then (l—g'(z))z = —g(z) andy > z(0)—g(z(0)) => yn > z(n)—g(z(n)). Lemma Ifx (a) 40 (c): Xn )/Xn _i E.2 (Vanishing . ) (b) If g' Proof of (a): By > induction, assume x(n) yn If . assume x(n + l) < yn With x decreasing, there . + x(n exists t' > 1) € (n, > yn yn +\, we're done. So n + 1) with x(t') = yn Then . /•n+1 = x(n+l) Proof of x(t')- (b): If yn +i- Since z for all t z. As g + z(n 39 40 £" We If < 41 is 1) also is x{t')-g(x(t')){n+l-t') a least n with z(n) and z — g(z) + 1], monotone, g(z(n + 1)) l-x- e~ Since £'(x(a)) < is whereupon = are very grateful to ax = £(x) 0. - not, there < 6 [n,n > g(x(t))dt , (1 — z{n) increasing in > z(t) yn g'(z(t))) - g(z(n)) x(t')-g{x{t')) — g(z(n)) < y n and z(n + we have —g(z(t)) z, z(t) f = n+l / —g(z(t)) < —g(yn 0, it follows = of Purdue University (Statistics) and f (1) < 0, and on [0, oo), £(x) that x'(a) > 0. Finally, a(x) We are very grateful to Anthony Quas of Cambridge University 33 ) is yn+ i — g(z(n + l)) > > yn +i = Vn — gdjn) l) for t is increasing in < n+ Then 1. ) for the proofs of (a) ^ = y n -g(y n ) —d (z(t) - g{z(t))) dt < yn - g{yn Herman Rubin then f(0) = — once again because z — g(z) z(t) + > for x ^ x(a) e = yn+1 and (b). (0, 1), as the inverse function to x(a). (Statistics) for result (b) and its proof. — OMITTED PROOFS AND EXAMPLES F. • Fact: Fair Priors L as not 50/50) is WLOG. is As alluded in §2, unfair priors (i.e. states Lemma equivalent to a simple payoff renormalization. For fm u H (a m + (1 ) 2 H and is still valid, key defining indifference relation refers only to the posterior beliefs, while the it r - fm )u L (a m = fm u H (a m+l + ) ) - fm )u L (a m+l (1 (A-9) ) implies that ,, posterior odds l-fm — = = u H (a m ) — = —m -- u L {a rm Since priors merely multiply the posterior odds by a f ,...,fM are all unchanged we merely multiply if ) u H (am+ i) z-. u L (a m+1 ) common all constant, the thresholds H payoffs in state constant. Likewise, constants added to payoffs in any state do not affect any by the same f{. The Unconditional Martingale: Proof of Lemma 4. Given the action history {mi, ,m n _i}, the conditional expectation of the next public belief is (by the * . . . Markovian assumption), E[q n+ i qn ] | ^ = QnYl P(m\H,qn ) + (1 - qn P(™\L,qn ) p(m|L ^ p{m{Lqn) i + £" l + tn meM m£M p(m\H,q n p(m\H,q n qn p(m\H,qn + (l-qn )p(m\L,qn = qn p{m\H,qn r— n p{m\H, q n + (1 - qn )p(m\L, qn q meM ) ) ) E k —— Absorbing Basins: Proof of Lemma that ) 3, \p m -i(£),pm (l)] is ) ) ) Lemma an interval for all Since 6. Then Jm £. pm (£) is increasing in m the closed interval of is by all £ fulfill Pm-i(£)<b Then F ) interior disjointness (pm(£)) = 1 for all £ is obvious. p m {£)>b and Next, int (Jm ) if e int(Jm ). The individual updating occurs; therefore, the continuation value With bounded one of the inequalities beliefs, simultaneously satisfy both. As have JM = [0, £] Finally, let and Jx m 2 > = mu Pm 2 [£, Lemma 3 oo], yields p where pM -i{£) > Pmi {£\) ^ choose action a m is a.s. £, = (£) = b and a.s., and and so no as required. some and Pm(£) pi(£) = = . 
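The computation of E[q_{n+1} | q_n] above is easy to verify numerically. The sketch below does so in an assumed bounded-beliefs example (beliefs on [0.2, 0.8] with F^H(p) = (p^2 - 0.04)/0.6 and F^L(p) = (0.64 - (1-p)^2)/0.6, fair prior, symmetric two-action payoffs); the printed expectation reproduces q_n exactly, as the unconditional martingale property requires.

    FH = lambda p: (p*p - 0.04)/0.6
    FL = lambda p: (0.64 - (1-p)**2)/0.6

    for q in (0.3, 0.5, 0.7):               # public belief in state H, in the non-cascade region
        r = 1.0 - q                          # private belief threshold for taking action a_2
        pH = [FH(r), 1.0 - FH(r)]            # chances of a_1, a_2 in state H
        pL = [FL(r), 1.0 - FL(r)]            # chances of a_1, a_2 in state L
        expected = 0.0
        for m in (0, 1):
            prob_m = q*pH[m] + (1.0 - q)*pL[m]          # unconditional chance of action m
            q_next = q*pH[m] / prob_m                    # Bayes update of the public belief
            expected += prob_m * q_next
        print("q = %.2f   E[q_{n+1} | q_n] = %.10f" % (q, expected))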
> Pm 2 -l(£2) £, but no 1 for all £, b define Jm2 Then >b>b> pmA^) 34 F H (pm -\(£)) = then in (A-10) holds for with £ x G Jmi and £2 G -l(£l) will (A-10) < £ < £ might we must £ < oo. and so £2 < £\ because pm2 -\ With unbounded by (A-10). By = beliefs, 6 Lemma strictly increasing in IS and — at the point £ yields p(m\£) = (p(m, £) equality £ implies FH if G supp(£oo) with < F H (pm (£)-) < £ 1, £ p(m\£) L =F {pm (£)) =F L = £. so FH 0, > Assume m satisfying pm (£) (p m (£)) > 0. From > Since F H y FSD 0. = (p m (£)) WLOG a neighborhood of in > F H (pm _ (3), ip(m\£) = 1 £ for £ near is us that pm is F Indeed, Case 2. FH follows (p m (£)) £. on an interval £ [£, F L = (p m -i(£)) L is = H if F (p m B.2, £ by Lemma Then 0. 1(c), this is H. Then exist a point some for m we have some pm (£) > and since There are two is po{£) = < 6, the 0. possibilities: F H (pm (£)) - F H (pm -i(£)) > bounded away from (£))/F H which (p m (£)), is in this € supp(£ 0O ) e for neighborhood, bounded away from also co(supp(F)), and so Lemma 1 guarantees for £ near £ (recall that therefore cannot occur. = has an atom at p m _i(£) for k, {pm _ l {£))- some and places no weight on (b,pm (£)]. 6, and pm _ 2 < pm -i, that Therefore, ip{m + T]), and this criterion, , bounded away from F H (pm (£)) from F H {pm _ l (£)-) = neighborhood of ip(m\£) and FH L 0. B.l, stationarity So we may assume F H (pm _i(£)-) = £. in the interior of By Theorem This can only occur It pm (£) (p m (£)) exceeds continuous). = (£)). p(m\H, £) = £F L £ Thus, £ G Jm as required. 1. the state 6 is well-defined. while (4) reduces to (p(m,£) £. =F (p m -i(£)) Here, there will be a neighborhood around £ where e meets £ 6 Jm proceed here under the first By Theorem £. £ so that individuals will strictly prefer to choose action a m for Next, F H (pm (£)) > Case 1. F H some £ We 1. private beliefs and a m+ i for others. Consequently, least such and 1 Suppose by way of contradiction that there Assume J. > (pm (£)) (p m (£)) Next abandon continuity. £ or (p(m,£) m such that only possible is m= are continuous in ip = and pm = 1 for = oo, or m = M and Hence, p m _i 1. Proof of Theorem simplifying assumption that p and FH = only happens for 3, this • Limit Cascades: consider the smallest 6 £. rj > l\£) 0. - and <p(m On 1,£) F H {pm _ 2 {£)) = - bounded away from £ are the other hand, the choice of and ip(m, £)-£ are boundedly positive on an interval {£-rj\ for all £ in a £], for m ensures some 77' > that So, 0. once again Theorem B.2 (observe the order of the quantifiers!) proves that £ £ supp(4o). • Overturning Principle: Proof of signal on must Lemma 7. If n optimally chooses a m , his satisfy 1 ~ Tm ~ > 'l(h)g(a > n X ) action a m with probability all JE( l (A-ll) T I'm— 1 Let E(/i) denote the set of -^ m signals 1 dp* an that satisfy (A-ll). (resp. J^, h) 35 H gdp, ) Then in state H individual n chooses (resp. state L). This m yields the continuation = £(h, a m ) Now just £{h) H and use inequality (A-ll) to bound the cross-multiply, Theorem 3. and Lemma 7. We * Herds: Proof of combine Theorem 2 ' r then a herd on a m must With bounded shall prove that when the integral. we need only private beliefs, limit value £ Jm Notice that £ G arise in finite time. hand right implies Jm in is pm (£) > , b. Consequently, (2) yields ~f fm 1 > p ^ - 1 > ~ _m rm * b — fm -i)/fm _i for all £ G Jm and so the closed interval Jm lies in the interior of [(1 — fm )/fm (1 — fm _i)/fm -i). Therefore, whenever £ n —> £ G Jm we have £n G [(1 — fm )/fm (1 — fm -i)/fm _!) 
for n > N where the from strict inequality follows b > < Similarly, £ 1/2. (1 , , , N big enough. and Now beliefs. —> 2 asserts that £ n Lemma by the overturning principle, only action a m unbounded posit Theorem By , Assume WLOG and so a.s., whenever any other action than clm 7, that the state £ n is eventually in we taken, is • Calculating the Chance of a Correct Herd. given £Q = 1, let C is H, with a^ optimal. [0, (1 — exit that In the fM-i)/^M-i)- But neighborhood of bounded {2u,2u/3}. and by the the reasoning a.s., (Such a tight prediction in §3 about 1 = + ir(2u/3) (1 — 7r)(2u) Smooth * Cascades with then implicitly defines Signals. = £\ = 1 because -k For a discrete jump < derivative <^_(l,2u) odds F L (p(£))/F H (p(£)), Since ip{l,£) 0. which is decreasing \ < 2u. 1 < 2u. \£n whenever 2u/3 < into an absorbing basin, for instance [2u, oo) in figure 4, simple graphical reasoning tells us left example, this I identity 2 tells clearly impossible with cascades.) is Lebesgue's Dominated Convergence assures us that .E^oo H] The 0. example, beliefs Theorem a correct herd happen with chance n in state H. us that a limit cascade arises supp(£ 00 ) taken after period N. is we need the = £F L (p(£))/F H (p(£)), the private beliefs by Lemma A.l, must be more than unit- The trick is to choose smooth but 'nearly' discrete private signal distributions. Assume that [i H is Lebesgue measure on [0, 1], and that [i L satisfies dfx L /d/j, H = g, where elastic in g' a, > 0, £. g(0) > 0, and #(1) < and has support + [1/(1 the belief distributions are The oo. belief p(a) g(l)), 1/(1 FH (p) = + f* g{0))}. = 1/(1 + g(cr)) in state Given the inverse a(p) .da and F L (p) = j\ H = is decreasing in _1 ^ g{a)da. ((! -p)/p), Now action a 2 We thus only ) requires that £g(a) need tp e (l,u/g(0)) < = and since g(a) u, 1 - #(0)(1 < g(0), - #(0)W(0). 36 we have inf^) It suffices to = u/g(0). choose #'(0) small for fixed chance 1 O 1 J2 2u/3 2/3 2/5 2u Ji H L Figure 8: Nonmonotonicity. The left graph exhibits F and F that work in the Cascades of their supports, they mimic the 'nearly' having atoms at the edge with Smooth Signals Example, actions boundedly informative, and so a single decision can information, are effect in BHW: With lumpy graph shows how the corresponding continuation functions into The right a cascade. toss all successors for the likelihood ratio are £ g(0) no longer monotonic. 42 Figure 8 gives such an example. (0, 1). Proof of • Putative Herds: and assume we are not yet Lemma Let £n G Im 8. C Absent a cascade in a putative herd. Q, with (£ n WLOG m > 6 Jm C Im ), 1, at least two actions are possible. Then either a) Jm y^0 and action a m b) an action c) actions dj (j <Zj (chance ^> For if not After a m > m) occurs with chance 3> 0), £ n +i/^n (b, b) taken with chance 3> are sufficiently 3> > and e 0, so in 1; then a m (a) or (b), G exists b* < m) (j is , a m+1 more whereupon i n+ G 0, \ than likely boundedly many is in i > '- Now, a putative herd inductively help establish = e\ • Ij • • (some j M -e* a m occurs only (2a F L (b*)-F L (b + e) F H {b*)-F H {b + e) F L (b*) > F H {b*) < m) Lemma » starts at once with chance Lemma 8: There in k m steps with is e*m > p < some b*, and a m _i 7; taken is j or < m. So there iffp < b + e. as e I 1 A. 1(a) and . (b). in case (a), while (b) and k m > chance e*m Then and (c) such that £ transits from set k* = k^ + k*M + , and (by the conditional independence of the private signals). r.v., the mean time By the 9. 
alternate formula for the to exiting from a temporary herd = (2a + 5)/10 for a e [0,1/4], g(a) 13)/10 for a e [3/4, 1]. Namely, g(a) + Lemma when a m steps, (£ n ) enters Ij, for private beliefs Temporary Herds: Proof of Lemma of a positive 42 that by taken with boundedly positive chance. where the monotonicity and inequality follow from £* < m) a, (j Ij, , 4+ Im into or 0; 37 = (18a + — that is, mean conditional 1)/10 for a e [1/4,3/4], and g(a) = on eventually exiting is, the sum cc oo n n=l n=l fc=l of chances of exiting after every period /- T7> fc A y* 2^11 (1 - e*)(l with the l 71=1 En Fn+ = because last equality is true . . ., or _ y> g*g ~ gHJ _ y* ^n - fi Z^ 2^ !_^ !__p fl+i) i_f" 71=1 fc=l 1, 2, k 71=1 Jfc=l ~ = /i\z7> n °° \ n X 71=1 F\. i Mean Time To Herd: Proof of Theorem 5. Consider a temporary herd on om in state H: As £n € Im and EyL^H,^ = £n (£n converges Jm with positive chance, and so E^ > 0. An upper bound to Yl ne n implies a uniform upper bound to the length • ) , of the temporary herds, • Part we wish (a): and a uniform lower bound Extreme Signals are Atoms. We ^2 n ne n to check that < boundedly many, say at most N, • Nonatomic herd on (say) a M - As seen oo. steps. Then Tails: Preliminary. When £ +1 - I clm is have bounded = for (£ n ) beliefs, and n > N, and Yl n ne n < study how fast taken repeatedly, E.l(b) and (c). as above a putative herd ends in in subsection 3.3, en We E^, by Lemma to (e n ) N 2 . vanishes in a putative obeys the recurrence - -£ FL ^-M)-F"{p M ^£ n )) = = + £) for some u > (Lemma 3). If £ solves b = p(£), then Lemma 1(b) implies that t](£) = 0, and that is increasing near £ (as each of its factors is). Hence, we may apply Lemma E.2 to the differential equation z = —n(z). Let preferences result in the private belief threshold Pm-i{£) £/(u tj Fact The proof will proceed with weaker assumptions than in the theorem, in all but one case: unbounded beliefs and 6 = 2. So F H (p) = c(p — 6) 7 + o((p — ft) 7 ) and F L (p) = 5 = cy(p - 7 " 1 + o{(p - 7_1 Then with bounded 1 - d(b + o((b - s ). Now H • 1. p) f p) 6) (p) &) ). 7" 1 ft) L Lemma 1(a) yields f (p) = cj(l - b)/b(p + o((p - &) 7_1 ), which ' integrates to F L (p) = c(l - b)/b(p - 6) 7 + o{{p - 6) 7 ); similarly, f L (p) = c^p 1 2 + o(p 7 2 ~ and F L (p) = 07/(7 - l)p7_1 + o(p1 with unbounded beliefs. 43 beliefs (b > 0), ) l ) • Fact 2. reaches x in finite time. For we get • 43 xt =x+ Part = -j(x - x) 1 = x + exp(-jt + d), Consider the differential equation x (b): [(7 - l)(jt Bounded 7 + = h)} 1 1, we get x ^ 1 -^, h>0 . For 7 < 1, the solution d arbitrary. For 7 > 1, arbitrary. Beliefs. One key approximation that we use here and below Observe that both integrals are valid since necessarily only 7 near b with bounded (resp. unbounded) beliefs. 38 > (resp. 7 > 1) renders f L integrable is ~k = pM -i{£) - p(£)[u/(u + £) 2 }(£ -£) + pM -i(£) and some tedious algebra produces in (A-12), Fact 1 a= — c[u when - \£ (e„) — Then 2. equation of Fact o((£ n - £^ < all cases, «C 00 when > rj(£) With 7 > see that £ n • 2, , case, e n 1, = e„ and (Hi) 7 e But outside of N, we thus reaches It is M in at most (1 see = and that en = ) < ^^Coo _7 ^ 7 ~^), For the second case, — F H (pi(£ n )) L alone. A for _7/(7_1) ), en Lemma iff 2, just as with E.l(a) provides a simple test for putative herd on a\ in state (with the derived criterion on H so (ne n ) is , is with some e > 0, E.2(b), we can the bounded beliefs that want J^ ne n <C 00 7 < 7=1, (£ n ) is falling e for Lemma = °°- ?7'(0) = 0. 
for a putative one on a\ (which can't be a herd, and so E^, and thus Yl ne n -C 00 - Thus, ]T ne n <C 00. steps. Applying 0. — £n 1 We 2 £) }(£ n steps; (ii) for = 0(n 0{n-'y ^- 1 '>). So J2 n have two separate cases in state H: + c[u/(u from (A-12) that — fM )/(efM differentiable at £ with 77'^) arbitrarily small N containing the basin JM an open neighborhood is where to the above differential many (1,2), e„ t) 1 ), £. = F H (p M -i{£n)) — after finitely for + o((£ - 7 K—k (with near f) Unbounded Beliefs. Mimicking the fist part of 1-7 r](£) = at + o(F), where a — cit 7/(7 — 1). Notice = 0(n <2m in state - a(£ Recalling our formula (c): analysis yields on clm and t] 7 for £ ). by boundedly positive decrements, or £ n+ \ = 0{n-^^-^), Part We now e. 7 < 0; (£ n ) is in J\f. in our putative herd then there £) Since e n E.2(a). for (i) < Ka{£ - K < 1 2 £) bounded above by the solution (£ n ) is we have rj(£) = tj(£) < exist constants k converging exponentially to ^ne n 1 There Lemma by 2, £)'1 ), summable. In for . } with ka{£ is) £\ Assume 7 < tp + + 1) £][u/(u 2 1 - 0({£ = 0). bounded ^2En herd In the first beliefs. <C 00 using e n = analytically equivalent to one on 7 then applied to 6). So, we let (£ n ) evolve as L M -\(£n))- Now Lemma E.2(a) and (b) yield [(7 — l)(Kan + < 4 < 7 _ l)(kan + /i)]- ^ 7-1 with k < 1 < K, both arbitrarily close to one as £ n is close to 0. So, a/[(Kj - l)(an + h)] < e n < a/[(y — l)(kan + h)]. For 7 < 2, we can find a < 1 with e n < a/n eventually. Then J^ n En = 00. Likewise for 7 > 2, a > 1 before, but consider e n =F (p />)]-i/(7-i) 1 ), [( exists with e n The case > a/n 7 = 2 eventually, F = ^ n £n <oo. requires special attention assumptions of the theorem L and so in this case. 44 — Then it as already noted, we go back to the can be proved exactly as before that We have r)(£) = a£ 2 + 0(£ 3 ), and it can be directly verified that + 0{p 2 3 T)(£)/(l-T)'(£)) = a£ + 0(£ ). So, we consider dz/dt = -az 2 + bz 3 with a > 0. Integration yields z(t) = l/[at + c— (b/a) \og(-b + a/z(t))} for z(t) < a/b, i.e. z close enough to zero. Using our prior knowledge that z(t) = 0(l/t) together with log(l + x) = x + 0(x 2 ), we get z(t) = \/{at + c- (b/a)[\og(a/z(t)) - b/(at) + 0(\/t 2 )]} = l/(at + d) + 0(\og(t)/t 2 ). (p) 44 We If 2cp 2 3 ). we had used the more general assumption we could not conclude below that e„ — 1/n is summable. we shall refrain from pursuing this. think a slightly larger error term likely also suffices, but 39 Lemma E.2(6) then yields asymptotically £ n = l/(an + h) + 0(log(n)/n 2 ), and then we 2 get e n = a /(an + h) + 0(\og(n)/n ). Then e„ — 1/n is summable, and we conclude from Lemma E.l(a) that On • ^ n En = Form the Trembling instead posits that one D oo. may A of Noise. qualitatively different form of noise Individuals randomly take a 'tremble', a la Selten (1975): suboptimal action with some idiosyncratic chance. Someone planning to take action a m opt for will instead simplicity, we insist a,j with probability K3m (£) for j that in state is K m (£) = [1 , (7) is valid for trembling noise. Indeed, all actions indeed bounded away from <p(m, £) —£ = and Let £ 2. is satisfied ^ than one action Here, the We now invariant show that Theorem 7 must be taken with positive by (A-13). We . is also probability, and wish to argue once more that under exactly the same conditions as in the proofs of Theorems 2 be a stationary point, and assume by way of contradiction that more taken with positive probability. 
Someone planning to take action a_m will instead opt for a_j with probability κ^j_m(ℓ), for j ≠ m, possibly dependent on ℓ. So the chance of deviating from an optimal action a_m is κ_m(ℓ) = Σ_{j≠m} κ^j_m(ℓ), and for simplicity we insist that it be bounded above: κ_m(ℓ) ≤ 1/2M for all m. Observe crucially that craziness is a special case of trembling in which, regardless of plans, one accidentally takes action a_m with a fixed chance κ_m. Even with constant values of κ^j_m, this form of noise is history-dependent.

The dynamics are now described by (7) and

    ψ(m|s,ℓ) = [1 - κ_m(ℓ)] p(m|s,ℓ) + Σ_{j≠m} κ^m_j(ℓ) p(j|s,ℓ)    (A-13)

Observe that (7) is still valid for trembling noise; indeed, every action is taken with chance bounded away from 0. We now wish to argue once more that Theorems 2 and 7 obtain under exactly the same conditions as before.

Proof of Theorem 7 for Trembling Individuals: Let ℓ be a stationary point, and assume by way of contradiction that more than one action is optimal with positive probability, so that 0 < F^H(p_m(ℓ)) < 1 for some m. With trembles every ψ(m|ℓ) is positive, and so stationarity requires φ(m,ℓ) - ℓ = 0 for every m, where ψ is given by (A-13). For any such m, we can use (7) to rewrite φ(m,ℓ) = ℓ as follows:

    [1 - κ_m(ℓ)] [p(m|L,ℓ) - p(m|H,ℓ)] = Σ_{k≠m} κ^m_k(ℓ) [p(k|H,ℓ) - p(k|L,ℓ)]    (A-14)

The sum on the right hand side may have negative and positive terms, but notice that

    Σ_{k=1}^m [p(k|H,ℓ) - p(k|L,ℓ)] = Σ_{k=1}^m [F^H(p_k) - F^H(p_{k-1}) - F^L(p_k) + F^L(p_{k-1})] = F^H(p_m) - F^L(p_m)

Recall that the threshold p_m(ℓ) is increasing in m, and that F^L - F^H is first increasing, then decreasing. Thus the negative terms in the sum can at most add up to (minus) the number F̄(ℓ) = max_{m=1,...,M} {F^L(p_m(ℓ)) - F^H(p_m(ℓ))}, and the positive terms to at most F̄(ℓ) + [p(m|L,ℓ) - p(m|H,ℓ)]. Since 0 ≤ κ^m_k(ℓ) ≤ 1/2M by assumption, the left hand side of (A-14) obeys the inequality

    [1 - κ_m(ℓ)] [p(m|L,ℓ) - p(m|H,ℓ)] ≤ (1/2M) { [p(m|L,ℓ) - p(m|H,ℓ)] + F̄(ℓ) }

and so

    [p(m|L,ℓ) - p(m|H,ℓ)] ≤ F̄(ℓ)/(2M - 2).

As this holds for all m, we may sum over m = 1, ..., m̄, where m̄ ≤ M - 1 attains the maximum defining F̄(ℓ), and discover that

    F̄(ℓ) = F^L(p_m̄(ℓ)) - F^H(p_m̄(ℓ)) ≤ (M - 1) F̄(ℓ)/(2M - 2) = F̄(ℓ)/2,

which is impossible by definition of F̄(ℓ) and M ≥ 2, since F̄(ℓ) > 0 here: some threshold p_m(ℓ) is interior, and Lemma A.1 then yields F^L(p_m(ℓ)) > F^H(p_m(ℓ)). Hence the equations φ(m,ℓ) = ℓ can only be solved by an ℓ for which a single action is optimal. Theorem 2 obtains once again, while φ(1,ℓ) - ℓ is bounded away from 0 on the interval I of Theorem B.2, and so the earlier proofs go through just as before.

References

Aghion, P., P. Bolton, C. Harris, and B. Jullien (1991): "Optimal Learning by Experimentation," Review of Economic Studies, 58(196), 621-654.

Arthur, W. B., Y. M. Ermoliev, and Y. M. Kaniovski (1986): "Strong Laws for a Class of Path-Dependent Stochastic Processes with Applications," in Stochastic Optimization: Proceedings of the International Conference, Kiev, 1984, ed. by I. Arkin, A. Shiraev, and R. Wets. Springer-Verlag, New York.

Banerjee, A. V. (1992): "A Simple Model of Herd Behavior," Quarterly Journal of Economics, 107, 797-817.

Bellman, R. (1954): "Limit Theorem for Non-commutative Operations. I," Duke Mathematical Journal, 21, 491-500.

Bikhchandani, S., D. Hirshleifer, and I. Welch (1992): "A Theory of Fads, Fashion, Custom, and Cultural Change as Information Cascades," Journal of Political Economy, 100, 992-1026.

Bray, M., and D. Kreps (1987): "Rational Learning and Rational Expectations," in Kenneth Arrow and the Ascent of Economic Theory, ed. by G. Feiwel, pp. 597-625. Macmillan, London.

Breiman, L. (1968): Probability. Addison-Wesley, Reading, Mass.

Doob, J. L. (1953): Stochastic Processes. Wiley, New York.

Easley, D., and N. Kiefer (1988): "Controlling a Stochastic Process with Unknown Parameters," Econometrica, 56, 1045-1064.

Ellison, G., and D. Fudenberg (1995): "Word of Mouth Communication and Social Learning," Quarterly Journal of Economics, 109, 93-125.

Furstenberg, H. (1963): "Noncommuting Random Products," Transactions of the American Mathematical Society, 108, 377-428.

Futia, C. A.
(1982): "Invariant Distributions and the Limiting Behavior of Markovian Economic Models," Econometrica, 50, 377-408.

Harsanyi, J. C. (1967-68): "Games with Incomplete Information Played by Bayesian Players," Management Science, 14, 159-182, 320-334, 486-502.

Kifer, Y. (1986): Ergodic Theory of Random Transformations. Birkhauser, Boston.

Kihlstrom, R., L. Mirman, and A. Postlewaite (1984): "Experimental Consumption and the 'Rothschild Effect'," in Bayesian Models in Economic Theory, ed. by Boyer and Kihlstrom. Elsevier, New York.

Lee, I. H. (1993): "On the Convergence of Informational Cascades," Journal of Economic Theory, 61, 395-411.

Milgrom, P. (1979): "A Convergence Theorem for Competitive Bidding," Econometrica, 47, 670-688.

Rudin, W. (1987): Real and Complex Analysis. McGraw-Hill, New York, 3rd edn.

Selten, R. (1975): "Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games," International Journal of Game Theory, 4, 25-55.

Smith, L. (1991): "Error Persistence, and Experiential versus Observational Learning," Foerder Series working paper.

Smith, L., and P. Sørensen (1996a): "Informational Herding as Experimentation Deja Vu," MIT Working Paper.

Smith, L., and P. Sørensen (1996b): "Rational Social Learning Through Random Sampling," MIT mimeo.