For Bayesian Wannabes, Are Disagreements Not About Info?
Robin Hanson, Economics, GMU

The Puzzle of Disagreement
- Persistent disagreement is ubiquitous
  - Speculative trading, wars, juries, …
  - Arguments in science, politics, family, …
- Theory seems to say this is irrational
- Possible explanations:
  - We're "just joshing"
  - Epistemic rationality is infeasible
  - Fixable irrationality: all will change!
  - Other rationality: truth is not the main goal

My Answer: We Self-Deceive
- We are biased to think we are the better driver, lover, …
  - "I am less biased, with better data & analysis"
- Evolutionary origin: self-deception helps us to deceive
  - The mind "leaks" beliefs via face, voice, …
  - It leaks less if the conscious mind really believes
- Beliefs are like clothes
  - Function in harsh weather, fashion in mild

We Can't Agree to Disagree (Aumann 1976, since generalized)
- Re possible worlds → impossible worlds
- Common knowledge → common belief
- Of exact $E_1[x]$, $E_2[x]$ → any $f(\cdot,\cdot)$, who is max, the last sign $\pm(E_1[x] - E_1[E_2[x]])$, or what one would say next
- For Bayesians → Bayesians at core, or wannabes
- With common priors → symmetric prior origins
- If agents seek truth, and do not lie

Generalize to Bounded Rationality
- Bayesians (with a common prior)
- Possibility-set agents: balanced (Geanakoplos '89), or "know that they know" (Samet '90), …
- Turing machines: prove all that is computable in finite time (Megiddo '89; Shin & Williamson '95)
- Many more specific models …

Consider Bayesian Wannabes
- Wannabe estimate: $\tilde{X}_i(\omega) = E_i[X(\omega) \mid I_i(\omega)] + e_i[X](\omega)$
- Sources of disagreement, and whether each supports a pure agree-to-disagree (A.D.):
  - Priors: $\pi_1(\omega) \ne \pi_2(\omega)$ — yes
  - Info: $I_1(\omega) \ne I_2(\omega)$ — no
  - Errors: $e_1 \ne e_2$ — yes (e.g., $E_1[\pi] \approx 3.14$, $E_2[\pi] \approx 22/7$)
- Main claim: A.D. about $X(\omega)$ implies A.D. about $Y(\omega) \equiv \bar{Y}$ — either combination of sources implies a pure version!

Theorem in English
If two Bayesian wannabes
- nearly agree to disagree about any X,
- nearly agree that both think they are nearly unbiased, and
- nearly agree that one agent's estimate of the other's bias is consistent with a certain simple algebraic relation,
then they nearly agree to disagree about Y, one agent's average error regarding X. (Y is state-independent, so info is irrelevant to it.)

Notation
- State (finite): $\omega \in \Omega$
- Random variable: $X(\omega) \in [\underline{X}, \underline{X} + \bar{X}]$
- Information: $I_i(\omega) \in \mathcal{I}_i$ (a partition)
- Bayesian estimate: $X_i(\omega) \equiv E_i[X(\omega) \mid I_i(\omega)]$
- Wannabe estimate: $\tilde{X}_i(\omega) \equiv \tilde{E}_i[X] = X_i(\omega) + e_i[X](\omega)$
- Assume: $e_i[X](\omega)$ depends on $\omega$ only through $I_i(\omega)$

More Notation
- Average error: $e_i[X \mid S] \equiv E[e_i[X] \mid S]$
- Expect to be unbiased: $\tilde{E}_i[e_i[X \mid S]] = 0$
- Calibrated error: $e_i[X] = m_i[X] + c_i[X]$, where $m_i[X]$ is the bias and the calibration term $c_i$ is chosen at $D_i(\omega)$, with $D_i$ a partition that coarsens $I_i$

Lemma 1
The $c_i$ which minimizes $E[(\tilde{X}_i - X)^2 \mid D_i(\omega)]$ sets $e_i[X \mid D_i(\omega)] = 0$.
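To make the wannabe decomposition and Lemma 1 concrete, here is a minimal numerical sketch in Python. The uniform prior, the ten-cell partition $I_i$, the two-cell coarsening $D_i$, and the sinusoidal error process are all hypothetical choices for illustration; only the decomposition $\tilde{X}_i = X_i + e_i[X]$, the split $e_i = m_i + c_i$, and the calibration claim follow the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000
X = rng.normal(size=N)                  # random variable X(w) on states w = 0..N-1
I_i = np.arange(N) % 10                 # agent i's information partition I_i (10 cells)
D_i = (I_i < 5).astype(int)             # a coarsening D_i of I_i (2 cells)

# Bayesian estimate X_i(w) = E[X | I_i(w)] under the (assumed) uniform prior.
X_bayes = np.array([X[I_i == c].mean() for c in range(10)])[I_i]

# Wannabe error e_i = m_i + c_i: m_i is a made-up uncalibrated error that
# depends on w only through I_i(w); c_i is a calibration term chosen on the
# cells of the coarser partition D_i.
m_i = 0.3 * np.sin(I_i)
c_i = -np.array([m_i[D_i == d].mean() for d in range(2)])[D_i]

# Wannabe estimate X~_i(w) = X_i(w) + e_i[X](w).
e_i = m_i + c_i
X_tilde = X_bayes + e_i

# Lemma 1 (sketch): this choice of c_i minimizes E[(X~_i - X)^2 | D_i(w)],
# and it leaves zero average error within each D_i cell.
for d in range(2):
    err = (X_tilde - X)[D_i == d]
    print(f"D_i cell {d}: average error = {err.mean():+.2e}")
```

Both $D_i$ cells report an average error of essentially zero, matching Lemma 1's claim that the MSE-minimizing calibration term removes any average error computable from $D_i$ alone.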
Still More Notation
- Estimation set: $\tilde{B}_i^q(E) \equiv \{\omega \mid \tilde{E}_i[\pi_i(E \mid I_i(\omega))] \ge q\}$
- $N$ $q$-agree that $E$, within $C$: $C \subseteq \bigcap_{i \in N} \tilde{B}_i^q(C \cap E)$
- $i,j$ $\delta$-disagree about $X$: $F_\delta[X] \equiv \{\omega \mid \tilde{X}_i(\omega) - \tilde{X}_j(\omega) \ge \delta\}$
- $i,j$ $\delta,\gamma$-disagree about $X$: $\{\omega \mid \tilde{X}_i(\omega) \ge \delta,\ \gamma \ge \tilde{X}_j(\omega)\}$
- $i,j$ $q$-agree to $\delta$-disagree: $\{i, j\}$ $q$-agree that $i,j$ $\delta$-disagree

Let 1,2 Agree to Disagree Re X
- Let $A \equiv C \cap F_\delta[X]$, and $B_i \equiv \tilde{B}_i^q(A)$ (coarsens $D_i$)
- Define $e_i \equiv e_i[X \mid B_i]$, $p_i \equiv \pi(A \mid B_i)$, $p_0 \equiv \min(p_1, p_2)$, $\hat{e}(p) \equiv \delta p - 2(1-p)\bar{X}$
- Lemma 4: $\delta \ge 0$ and $e_2 \le 0$ imply $e_1 \ge \hat{e}(p_0)$
- Let $\tilde{p}_2 \equiv \tilde{E}_2[p_0]$, and consider:
  - $\tilde{E}_2[e_1] \ge \hat{e}(\tilde{p}_2)$   (3)
  - $\tilde{E}_1[e_1] \le 0$   (4)

Theorems
Re agents 1,2 $q$-agreeing to $\delta$-disagree about $X$:
1. IF at some $\omega$ equations (3) and (4) are satisfied, THEN at $\omega$ agents 2,1 $q$-agree to $\hat{e}(\tilde{p}_2),0$-disagree about $e_1$.
2. IF agents 1,2 $q$-agree (within $C$) that they $\delta$-disagree about $X$ and satisfy equations (3) and (4), THEN (within $C$) agents 2,1 $q$-agree to $\hat{e}(\tilde{p}_2),0$-disagree about $e_1$.

Theorem in English
If two Bayesian wannabes
- nearly agree to disagree about any X,
- nearly agree that both think they are nearly unbiased, and
- nearly agree that one agent's estimate of the other's bias is consistent with a certain simple algebraic relation,
then they nearly agree to disagree about Y, one agent's average error regarding X. (Y is state-independent, so info is irrelevant to it.)

Consider Bayesian Wannabes
- Wannabe estimate: $\tilde{X}_i(\omega) = E_i[X(\omega) \mid I_i(\omega)] + e_i[X](\omega)$
- Sources of disagreement, and whether each supports a pure agree-to-disagree (A.D.):
  - Priors: $\pi_1(\omega) \ne \pi_2(\omega)$ — yes
  - Info: $I_1(\omega) \ne I_2(\omega)$ — no
  - Errors: $e_1 \ne e_2$ — yes (e.g., $E_1[\pi] \approx 3.14$, $E_2[\pi] \approx 22/7$)
- Main claim: A.D. about $X(\omega)$ implies A.D. about $Y(\omega) \equiv \bar{Y}$ — either combination of sources implies a pure version!

Conclusion
- Bayesian wannabes are a general model of computationally constrained agents.
- We add minimal assumptions that preserve some easy-to-compute belief relations.
- For such Bayesian wannabes, A.D. (agreeing to disagree) regarding $X(\omega)$ implies A.D. regarding $Y(\omega) \equiv \bar{Y}$.
- Since info is irrelevant to estimating $Y$, any A.D. implies a pure error-based A.D.
- So if pure error-based A.D. is irrational, all A.D. is.
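For contrast with the conclusion, here is a small Python sketch of the baseline behind "We Can't Agree to Disagree" and the "Info: no" row of the disagreement-sources table: two full Bayesians with a common prior who repeatedly announce their posterior estimates of X converge to agreement (the Geanakoplos–Polemarchakis communication dynamic behind Aumann's theorem, not the paper's wannabe model). The state space, prior, partitions, and X below are made up for illustration.

```python
from fractions import Fraction

# States 1..9, common uniform prior, X = indicator of the (made-up) event {3, 4}.
states = set(range(1, 10))
prior = {w: Fraction(1, 9) for w in states}
X = {w: Fraction(1) if w in {3, 4} else Fraction(0) for w in states}

# The two agents' hypothetical information partitions.
P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
P2 = [{1, 2, 3, 4}, {5, 6, 7}, {8, 9}]

def expect(event):
    """Posterior expectation of X given an event, under the common prior."""
    mass = sum(prior[w] for w in event)
    return sum(prior[w] * X[w] for w in event) / mass

def cell(partition, w):
    return next(c for c in partition if w in c)

def announce(partition):
    """The announced estimate E[X | own cell], as a function of the state."""
    return {w: expect(cell(partition, w)) for w in states}

def refine(partition, announcement):
    """Listener intersects her cells with the level sets of the announcement."""
    return [{w for w in c if announcement[w] == v}
            for c in partition
            for v in {announcement[w] for w in c}]

true_state = 1
for t in range(10):
    a1 = announce(P1)          # agent 1 announces her current estimate
    P2 = refine(P2, a1)        # agent 2 learns from the announcement
    a2 = announce(P2)          # agent 2 announces her updated estimate
    P1 = refine(P1, a2)        # agent 1 learns in turn
    print(t, float(a1[true_state]), float(a2[true_state]))
    if a1[true_state] == a2[true_state]:
        break                  # estimates coincide: no agreeing to disagree
```

In this toy run the announced estimates start at 1/3 versus 1/2 and coincide on the next exchange: information differences alone cannot sustain an agreement to disagree, which is why the paper traces any wannabe A.D. back to errors (or priors) rather than to info.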