Information and Volatility
Dirk Bergemann†
Tibor Heumann‡
Stephen Morris§
May 19, 2014
Abstract
We analyze a class of games with interdependent values and linear best responses. The payoff uncertainty is described by a multivariate normal distribution that includes the pure common and pure private value environments as special cases. We characterize the set of joint distributions over actions and states that can arise as Bayes Nash equilibrium distributions under any multivariate normally distributed signals about the payoff states. We characterize the maximum aggregate volatility for a given distribution of the payoff states. We show that the maximal aggregate volatility is attained in a noise-free equilibrium in which the agents confound idiosyncratic and common components of the payoff state, and display excess response to the common component. We use a general approach to identify the critical information structures for the Bayes Nash equilibrium via the notion of Bayes correlated equilibrium, as introduced by Bergemann and Morris (2013b).
JEL Classification: C72, C73, D43, D83.
Keywords: Incomplete Information, Bayes Correlated Equilibrium, Volatility, Moment Restrictions, Linear Best Responses, Quadratic Payoffs.
We acknowledge financial support through NSF Grants SES 0851200 and ICES 1215808. We thank the Editor,
Xavier Vives, and the referees for many thoughtful suggestions. We are grateful to our discussant, Marios Angeletos,
for many productive suggestions and insights. We thank the audience at the Workshop on Information, Competition
and Market Frictions, Barcelona 2013, for helpful comments on an earlier version that was entitled "Information,
Interdependence, and Interaction: Where Does the Volatility Come From?"
† Department of Economics, Yale University, New Haven, CT 06520, U.S.A., dirk.bergemann@yale.edu.
‡ Department of Economics, Yale University, New Haven, CT 06520, U.S.A., tibor.heumann@yale.edu.
§ Department of Economics, Princeton University, Princeton, NJ 08544, U.S.A., smorris@princeton.edu.
1 Introduction
In an economy with widely dispersed private information about the fundamentals, actions taken by
agents in response to the private information generate individual as well as aggregate volatility in the
economy. How does the nature of the volatility in the fundamentals translates into volatility of the
outcomes, the actions take by the agents. To what extent does heterogeneity in the fundamentals
dampen or accentuates volatility in the individual or aggregate actions?
At one extreme, the
fundamental state, the payo¤ relevant state of the economy could consist of a common value that
a¤ects the utility or the productivity of all the agents in the same way. At the other extreme, the
payo¤ relevant states could consist of purely idiosyncratic, distinct and independent, states across
the agents.
We analyze these questions in a general class of linear quadratic economies with a continuum of agents while allowing for heterogeneity in the fundamentals, ranging from pure common to interdependent to pure private values. We restrict our attention to environments where the fundamentals are distributed according to a multivariate normal distribution. We describe the payoff relevant state of each agent by the sum of a common component and an idiosyncratic component. As we vary the variance of the common and the idiosyncratic component, we change the nature of the heterogeneity in the fundamentals and can go from pure common to interdependent to pure private value environments.
The present objective is to analyze how the volatility in the fundamentals translates into volatility of the outcomes. The nature of the private information, the information structure of the agents, matters a great deal for the transmission of the fundamental volatility. We therefore want to know which information structure leads to the largest (or the smallest) volatility of the outcomes for a given structure of the fundamental volatility. Thus, we are led to investigate the behavior of the economy, for a given distribution of the fundamentals, across all possible information or signal structures that the agents might have. By contrast, the established literature is mostly concerned with the characterization of the behavior, the Bayes Nash equilibrium, for a given information structure.
In earlier work, two of us suggested an approach to analyze the equilibrium behavior of the agents for a given description of the fundamentals across all possible information structures. In Bergemann and Morris (2013a) we define a notion of correlated equilibrium for games with incomplete information, which we call Bayes correlated equilibrium. We establish that this solution concept characterizes the set of outcomes that could arise in any Bayes Nash equilibrium of an incomplete information game where agents may or may not have access to more information beyond the given common prior over the fundamentals. In Bergemann and Morris (2013b) we pursue this argument in detail and characterize the set of Bayes correlated equilibria in the class of games with quadratic payoffs and normally distributed uncertainty, but there we restricted our attention to the special environment with pure common values and aggregate interaction. In the present contribution, we substantially generalize the environment to interdependent values (and more general interaction structures) and analyze the interaction between the heterogeneity in the fundamentals and the information structure of the agents. We continue to restrict attention to multivariate normal distributions for the fundamentals and for the signals, but allow for arbitrarily high-dimensional signals. With the resulting restriction to multivariate normal distributions of the joint equilibrium distribution over actions and fundamentals, we can describe the set of equilibria across all (normal) information structures completely in terms of restrictions on the first and second moments of the equilibrium joint distribution.
We offer an exact characterization of the set of all possible Bayes correlated equilibrium distributions. We show in Proposition 8 that the joint distribution of any BCE is completely characterized by three correlation coefficients. In particular, the mean of the individual and the aggregate action is uniquely determined by the fundamentals of the economy. But the second moments, the variance and covariances, can differ significantly across equilibria. The set of BCE is explicitly described by a triple of correlation coefficients: (i) the correlation coefficient of any pair of individual actions (or equivalently the individual action and the average action), (ii) the correlation coefficient of an individual action and the associated individual payoff state, and (iii) the correlation coefficient of the aggregate action and an individual state. The description of the Bayes correlated equilibrium set arises from two distinct sets of conditions. The restrictions on the coefficients themselves are purely statistical in nature, and come from the requirement that the variance-covariance matrix is positive definite; thus these conditions do not depend at all on the nature of the interaction, and hence the game itself. It is only the determination of the moments themselves, the mean and the variance, that reflects the best response conditions, and hence the interaction structure of the game. This striking decomposition of the equilibrium conditions into statistical and incentive conditions arises as we allow for all possible information structures consistent with the common prior. In Section 6, we restrict attention to the subset of information structures in which each agent knows his own payoff state with certainty. This restriction on the informational environment is commonly imposed in the literature, and we analyze the resulting implications for the equilibrium behavior. But with respect to the above separation result, we find that the decomposition fails whenever we impose any additional restrictions, such as knowing one's own payoff state.
We then characterize the upper boundary of the Bayes correlated equilibrium set in terms of the correlation coefficients. We show in Proposition 3 that the upper boundary consists of equilibria that share a property that we refer to as noise-free.¹ The noise-free equilibria have the property that the conditional variance of the individual and aggregate action is zero, conditional on the idiosyncratic and common component of the state. That is, given the realization of the components of each agent's payoff state, the action of each agent is deterministic in the sense that it is a function of the common and the idiosyncratic components only. In fact, this description of the noise-free equilibria directly links the Bayes correlated equilibria to the Bayes Nash equilibria. In Proposition 10 we show that we can represent every noise-free Bayes correlated equilibrium in terms of the action chosen by the agent as a convex combination of the idiosyncratic and the common component of the state. Thus, a second surprising result of our analysis is that the boundary of the equilibrium set is formed by action-state distributions that do not contain any additional noise. However, as each agent implicitly responds to a convex sum of the components, each agent confounds the idiosyncratic and common components with weights that differ from the equal weights by which they enter the payoff state, and hence under- or overreacts relative to the complete information Nash equilibrium.
We show that for every Bayes correlated equilibrium, there exists an information structure under which the equilibrium distribution can be (informationally) decentralized as a Bayes Nash equilibrium, and conversely. The logic of Proposition 9 follows an argument presented in Bergemann and Morris (2013a) for canonical finite games and information structures. The additional insight here is that in the present environment any Bayes correlated equilibrium outcome can be achieved by a minimal information structure, in which each agent only receives a one-dimensional signal. The exact construction of the information structure is suggested by the very structure of the BCE. With the one-dimensional signal, each agent receives a signal that is a linear combination of the common and idiosyncratic part of his payoff state and an additive noise term, possibly with a component common to all agents. Under this signal structure the action of each agent is a constant times the signal, and thus the agent only needs to choose this constant optimally. This construction allows us to understand how the composition of the signal affects the reaction of the agent to the signal.
With the compact description of the noise-free Bayes correlated equilibrium, we can then ask which information structure generates the largest aggregate (or individual) volatility, and which information structure might support the largest dispersion in the actions taken by the agents. In Proposition 11, we provide very mild sufficient conditions, essentially very weak monotonicity conditions, on the objective functions such that we can restrict attention to the noise-free equilibria. Any of these second moments is maximized by a noise-free equilibrium, but importantly not all of these quantities are maximized by the same equilibrium. The compact representation of the noise-free equilibria, described above, is particularly useful, as we can find noise-free equilibria by maximizing over a single variable without constraints, rather than having to solve a general constrained maximization program. In Proposition 6 we use this insight to establish the comparative statics for the maximal individual or aggregate volatility that can arise in any Bayes correlated equilibrium as we change the nature of the strategic interaction.

¹ We would like to thank Marios Angeletos, our discussant, for suggesting this term.
With a detailed characterization of the feasible equilibrium outcomes, we ask what interdependence adds or changes relative to equilibrium behavior in a model with pure common values. With pure common values, any residual uncertainty about the common payoff state lowers the responsiveness of the agent to the information; it attenuates his response to the true payoff state. Thus, the maximal responsiveness is attained in the complete information equilibrium, in which consequently the aggregate volatility is maximized across all possible information structures. At the other end of the spectrum, with pure private values, the complete information Nash equilibrium has zero aggregate volatility. But if the noisy information has an error term common across the agents, then aggregate volatility can arise with incomplete information, even though there is no aggregate payoff uncertainty. Now, the presence of the error term still leads every agent to attenuate his response to the signal compared to his response if he were to observe the true payoff state, as a Bayesian correction to the signal. Nonetheless, we show that as the variance of the idiosyncratic component increases, the maximal aggregate volatility increases as well, in fact linearly in the variance of the idiosyncratic component; see Proposition 7. Now, strikingly, with interdependent values, that is, in between pure private and pure common values, the aggregate volatility is still maximal in the presence of residual uncertainty, but the response of the agent is not attenuated anymore; quite the contrary, it exceeds the complete information responsiveness in some dimensions. In the noise-free equilibrium, whether implicit in the BCE or explicit in the BNE, the information about the idiosyncratic component is bundled with the information about the common component. In fact, the signal in the noise-free BNE represents the idiosyncratic and the common component as a convex combination. Thus, even though the signal does not contain any payoff irrelevant information, hence noise-free, the agent still faces uncertainty about his payoff state, as the convex weights typically differ from the equal weights the components receive in the payoff state. As a consequence, any given convex combination apart from the equal weights gives each agent a reason to react more strongly to some component of the fundamental state than he would if he had complete information about either component, thus making it an excess response to at least some component. Now, if the excess response occurs with respect to the common component, then we can observe an increase in the aggregate volatility far beyond the one suggested by the complete information analysis. Moreover, as Proposition 7 shows, there is a strong positive interaction effect with respect to the maximal aggregate volatility between the variance of the idiosyncratic and the common component. This emphasizes the fact that the increased responsiveness to the common component cannot be understood by merely combining the separate intuitions gained from the pure private and the pure common value cases with which we started above.
We proceed to extend the analysis in two directions. First, we constrain the set of possible information structures and impose the common restriction that every agent knows his own value but may still be uncertain about the payoff state of the other agents. We show that this common informational restriction can easily be accommodated by our analysis. But once we impose any such restriction on the class of permissible information structures, then the separability between the correlation structure and the interaction structure, which we observe without any restriction, does not hold anymore. Second, we extend the analysis to accommodate more general interaction structures. In particular, we consider the case when the best response of every agent is to a convex combination of the average action and the action of a specific, matched, agent. This allows us in particular to analyze the case of pairwise interaction, and we consider both random matching as well as positive or negative assortative matching. The following analysis shows that the current framework is sufficiently flexible to eventually accommodate an even more ambitious analysis of general information and network (interaction) structures.
The remainder of the paper is organized as follows. Section 2 introduces the model and the equilibrium concept. Section 3 introduces the noise-free information structures and analyzes the Bayes Nash equilibrium behavior in this class of information structures. Section 4 determines the maximal volatility and dispersion and identifies the associated information structures, and thus establishes the link between information and volatility. Section 5 introduces the solution concept of Bayes correlated equilibrium. We establish an equivalence between the set of Bayes correlated equilibria and the set of Bayes Nash equilibria under any information structure. Section 6 considers a number of specific information structures that have appeared widely in the literature, and we describe the subtle restrictions that they impose on the equilibrium behavior. Section 7 discusses the relationship of our results to recent contributions in macroeconomics on the source and scope of volatility, and concludes. Section 8 constitutes the appendix and contains most of the proofs.
2 Model
We consider a continuum of agents, with mass normalized to 1. Agent i ∈ [0, 1] chooses an action a_i ∈ ℝ and is assumed to have a quadratic payoff function u_i : ℝ³ → ℝ, which is a function of his action a_i, the mean action taken by all agents,

A ≜ ∫_j a_j dj,

and the individual payoff state θ_i ∈ ℝ.
In consequence, agent i has a linear best response function:

a_i = r E[A | I_i] + E[θ_i | I_i],    (1)

where E[· | I_i] is the expectation conditional on the information I_i agent i has prior to taking an action a_i. The parameter r ∈ ℝ of the best response function represents the strategic interaction among the agents. If r < 0, then we have a game of strategic substitutes; if r > 0, then we have a game of strategic complements. We shall assume that the interaction parameter r is bounded above, or r ∈ (−∞, 1).
We assume that the individual payoff state θ_i is given by the linear combination of a common component θ and an idiosyncratic component Δθ_i:

θ_i = θ + Δθ_i.    (2)

Each component is assumed to be normally distributed. While θ is common to all agents, the idiosyncratic component Δθ_i is identically distributed across agents, independent of the common component. Formally, we describe the payoff uncertainty in terms of the pair of random variables (θ, Δθ_i):

(θ, Δθ_i) ∼ N( (μ_θ, 0), ( σ_θ², 0 ; 0, σ_Δθ² ) ).    (3)
It follows that the sample average of the idiosyncratic component across all agents always equals zero. We denote the sample average across the entire population, that is across all i, as E_i[·], and so E_i[Δθ_i] = 0. The common component can be interpreted as the sample mean or average payoff state, and so θ = E_i[θ_i].

Given the independence and the symmetry of the idiosyncratic component Δθ_i across agents, the above joint distribution can be expressed in terms of the variance σ_θi² = σ_θ² + σ_Δθ² of the individual state, and the correlation (coefficient) ρ_θθ between any two states θ_i and θ_j of any two agents i and j. After all, by construction the covariance of θ_i and θ_j is equal to the covariance between θ_i and θ, and in turn also represents the variance of the common component, that is ρ_θθ σ_θi² = σ_θ²:

(θ_i, θ_j) ∼ N( (μ_θ, μ_θ), ( σ_θi², ρ_θθ σ_θi² ; ρ_θθ σ_θi², σ_θi² ) ).    (4)

For the remainder of the paper, we shall normalize μ_θ by setting μ_θ = 0, as any nonzero mean simply shifts the best response without affecting the second moments of the equilibrium distributions.² The joint normal distribution given by (4) is the commonly known common prior. We shall almost exclusively use the above representation of the payoff uncertainty and only occasionally the former representation, see (3).
We often take the variance (and the mean) of the individual state θ_i as given by σ_θi², and describe the changes in the equilibrium behavior as we change ρ_θθ between 0 and 1. As we keep the variance of the individual state constant by changing ρ_θθ, we implicitly change the relative contribution of the idiosyncratic and the common variance, as

ρ_θθ = σ_θ² / σ_θi² = σ_θ² / (σ_θ² + σ_Δθ²).

We refer to the special cases of ρ_θθ = 0 and ρ_θθ = 1 as the cases of pure private and pure common values.
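The parametrization above is easy to verify numerically. The following Python sketch (ours, not part of the paper; variable names are illustrative) draws the payoff states of two agents with var(θ) = ρ_θθ σ_θi² and var(Δθ_i) = (1 − ρ_θθ) σ_θi², and checks that the variance of the individual state and the correlation across agents match the intended parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_states(rho, sigma_i, n_agents, n_draws):
    """Draw theta_i = theta + dtheta_i with var(theta) = rho * sigma_i**2
    and var(dtheta_i) = (1 - rho) * sigma_i**2, so that
    var(theta_i) = sigma_i**2 and corr(theta_i, theta_j) = rho."""
    theta = rng.normal(0.0, np.sqrt(rho) * sigma_i, size=(n_draws, 1))
    dtheta = rng.normal(0.0, np.sqrt(1 - rho) * sigma_i, size=(n_draws, n_agents))
    return theta + dtheta

states = sample_states(rho=0.6, sigma_i=2.0, n_agents=2, n_draws=200_000)
print(np.var(states[:, 0]))                           # approximately sigma_i**2 = 4
print(np.corrcoef(states[:, 0], states[:, 1])[0, 1])  # approximately rho = 0.6
```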
The present model of a continuum of players with quadratic payoffs and normally distributed payoff states, encompassing both private and common value environments, was first proposed by Vives (1990a) to analyze information sharing among agents with private, but partial, information about the fundamentals. The focus of the present paper is rather different, but we shall briefly indicate how our approach to determining critical information structures also yields new insights into the large literature on information sharing.
² Although we assume μ_θ = 0 in the remainder of the paper, in Proposition 2 and Proposition 8 we express the mean action in terms of μ_θ, as it is suggestive of how the results change if we relax the assumption μ_θ = 0.
3 Noise Free Information
We begin the analysis by considering a class of noise-free information structures. We derive the Bayes Nash equilibrium behavior under these one-dimensional information structures. After a complete description of the equilibrium behavior, we characterize in Section 4 the volatility induced by these information structures. We complete the argument by establishing in Section 5 that these special, noise-free and one-dimensional, information structures are in fact the information structures that generate the maximal volatility across all symmetric multivariate normally distributed information structures, allowing in particular for multi-dimensional and noisy signal structures.
3.1 Noise Free Information Structure

We consider the following one-dimensional class of signals:

s_i ≜ λ Δθ_i + (1 − |λ|) θ,    (5)

where the linear composition of the signal is determined by the parameter λ ∈ [−1, 1]. As a normalization, the weight on the common component, 1 − |λ|, is always nonnegative and symmetric around λ = 0. The weight on the idiosyncratic component, λ ∈ [−1, 1], can be either positive or negative. We restrict attention to symmetric information structures (across agents), and hence all agents have the same parameter λ. Hence, we also refer to the parameter λ as the noise-free information structure λ.
The information structure generated by the signals s_i is noise free in the sense that every signal is a linear combination of the components of the payoff state, Δθ_i and θ, and no extraneous noise or error enters the signal of each agent. Nonetheless, since the signal s_i combines the idiosyncratic and the common component of the payoff state, each signal s_i leaves agent i with residual uncertainty about the components of the payoff state. Moreover, unless the weight λ in the information structure exactly mirrors the composition of the payoff state θ_i = θ + Δθ_i, and hence exactly equals 1/2, agent i still faces residual uncertainty about his payoff state θ_i. Thus, the signal confounds the two sources of fundamental uncertainty. In particular, if λ is below 0, then the signal negatively confounds the idiosyncratic and the common component of the payoff state. Given the information structure λ, we can compute the conditional expectation of agent i, given the signal s_i, about the idiosyncratic component Δθ_i:³
E[Δθ_i | s_i] = (cov(Δθ_i, s_i) / var(s_i)) · s_i = [ λ(1 − ρ_θθ) / ( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ) ] · s_i;    (6)

the common component θ:

E[θ | s_i] = (cov(θ, s_i) / var(s_i)) · s_i = [ (1 − |λ|) ρ_θθ / ( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ) ] · s_i;    (7)

and the payoff state θ_i of agent i:

E[θ_i | s_i] = (cov(θ_i, s_i) / var(s_i)) · s_i = [ ( λ(1 − ρ_θθ) + (1 − |λ|) ρ_θθ ) / ( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ) ] · s_i.    (8)
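As a numerical check (ours, not part of the paper), the conditional-expectation coefficients in (6)-(8) can be compared with least-squares slopes from simulated signals; we normalize σ_θi² = 1, which is without loss here since the coefficients are homogeneous of degree zero in the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, lam, n = 0.5, 0.3, 400_000   # rho_thetatheta, lambda; sigma_theta_i normalized to 1

theta = rng.normal(0.0, np.sqrt(rho), n)           # common component
dtheta = rng.normal(0.0, np.sqrt(1.0 - rho), n)    # idiosyncratic component
s = lam * dtheta + (1.0 - abs(lam)) * theta        # noise-free signal (5)

D = lam**2 * (1 - rho) + (1 - abs(lam))**2 * rho   # var(s_i) when sigma_theta_i = 1

# closed-form coefficients from (6)-(8)
coef_dtheta = lam * (1 - rho) / D
coef_theta = (1 - abs(lam)) * rho / D
coef_state = (lam * (1 - rho) + (1 - abs(lam)) * rho) / D

# Monte Carlo least-squares slope of x on s (both have mean zero)
slope = lambda x: np.dot(x, s) / np.dot(s, s)
print(coef_dtheta, slope(dtheta))         # the two numbers should agree closely
print(coef_theta, slope(theta))
print(coef_state, slope(theta + dtheta))  # (8) is the sum of (6) and (7)
```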
A few noise-free information structures are of particular interest. If λ = 1/2, then each agent knows his own payoff state θ_i with certainty, as (8) reduces to:

E[θ_i | s_i] = 2 s_i = Δθ_i + θ,

but remains uncertain about the exact value of the idiosyncratic and common component. Similarly, if λ = 0, then the common component is known with certainty by each agent, as E[θ | s_i] = s_i = θ, but there remains residual uncertainty about the other component and a fortiori about the payoff state θ_i. Likewise, if either λ = −1 or λ = 1, then the idiosyncratic component is known with certainty by each agent, as E[Δθ_i | s_i] = λ s_i = Δθ_i, but again there remains residual uncertainty about the other component and a fortiori about the payoff state θ_i.

3.2 Noise Free Equilibrium
We can formally define Bayes Nash equilibrium as the solution concept given the noise-free information structures.

Definition 1 (Bayes Nash Equilibrium)
Given an information structure λ, the strategy profile a : ℝ → ℝ forms a pure strategy symmetric Bayes Nash equilibrium if and only if:

a(s_i) = E[θ_i + rA | s_i], ∀ s_i ∈ ℝ.    (9)

³ We report the conditional expectations under the earlier normalization that μ_θ = 0. With μ_θ ≠ 0, the conditional expectation would extend to the usual convex combination of the signal and the prior mean μ_θ.
The construction of the linear equilibrium strategy in the multivariate normal environment is by now standard (see, e.g., Morris and Shin (2002) and Angeletos and Pavan (2007) for the pure common value environment). Given the information structure λ, we are particularly interested in identifying how strongly the agents respond to the signals in the Bayes Nash equilibrium, and we denote the responsiveness, the slope of the linear strategy, by w(λ).
Proposition 1 (Noise-Free BNE)
For every noise-free information structure λ, there is a unique Bayes Nash equilibrium, and the strategy of each agent i is linear in the signal s_i:

a_i(s_i) = w(λ) s_i,    (10)

with weight w(λ):

w(λ) = ( λ(1 − ρ_θθ) + (1 − |λ|) ρ_θθ ) / ( λ²(1 − ρ_θθ) + (1 − r)(1 − |λ|)² ρ_θθ ).    (11)
We observe that the responsiveness of the individual strategy is affected by the interaction parameter r. If r = 0, each agent essentially solves an individual prediction problem, and the optimal weight corresponds to the Bayesian updating rule given by (8). If r > 0, then the agents are in a game with strategic complementarities and respond more strongly to the signal than Bayesian updating would suggest, because of the inherent coordination motive with respect to the common component in the state, represented by the term (1 − r)(1 − |λ|)² ρ_θθ in the denominator of (11).
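The fixed-point logic behind (11) can be sketched numerically (our illustration, with σ_θi² normalized to 1): if every agent plays a_i = w s_i, the average action is A = w(1 − |λ|)θ, and the best-response slope against the signal must reproduce w exactly at the equilibrium value w(λ).

```python
import numpy as np

def w_bne(lam, rho, r):
    """Equilibrium responsiveness w(lambda) from (11)."""
    num = lam * (1 - rho) + (1 - abs(lam)) * rho
    den = lam**2 * (1 - rho) + (1 - r) * (1 - abs(lam))**2 * rho
    return num / den

def best_response_slope(w, lam, rho, r):
    """Optimal slope against s_i when all others play a_j = w * s_j,
    so that A = w * (1 - |lam|) * theta; sigma_theta_i normalized to 1."""
    var_s = lam**2 * (1 - rho) + (1 - abs(lam))**2 * rho  # var(s_i)
    cov_state = lam * (1 - rho) + (1 - abs(lam)) * rho    # cov(theta_i, s_i)
    cov_common = (1 - abs(lam)) * rho                     # cov(theta, s_i)
    return (cov_state + r * w * (1 - abs(lam)) * cov_common) / var_s

for lam, rho, r in [(0.5, 0.4, 0.3), (-0.2, 0.7, -0.5), (0.9, 0.2, 0.6)]:
    w = w_bne(lam, rho, r)
    print(abs(best_response_slope(w, lam, rho, r) - w) < 1e-12)  # -> True: w is a fixed point
```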
In the special cases of pure private or pure common values, ρ_θθ = 0 and ρ_θθ = 1, respectively, the payoff uncertainty is described completely by either Δθ_i or θ, and reduces from a two-dimensional to a one-dimensional space of uncertainty. In either case, across all λ ∈ [−1, 1], there are only two possible noise-free equilibrium outcomes. Namely, either players respond perfectly to the state of the world (complete information) or players do not respond at all (zero information). For example, with purely private values, that is ρ_θθ = 0, we have σ_θi² = σ_Δθ² and σ_θ² = 0. Then, the signal s_i is perfectly informative for all λ ≠ 0 about the idiosyncratic component, and we are effectively in a complete information setting. By contrast, if λ = 0, then the signal s_i is completely uninformative, and each agent makes a deterministic choice given the expected value E[θ_i] = 0 of the state. Correspondingly, for purely common values, the critical and sole values under which the information structure is completely uninformative are |λ| = 1.
By contrast, with interdependent values, ρ_θθ ∈ (0, 1), the payoff uncertainty of each player is described by the two-dimensional vector (Δθ_i, θ). And differently from the special cases of pure private and pure common values, there is now a continuum of noise-free equilibria with interdependent values. The continuum of equilibria arises as the linear weight λ, which combines the components Δθ_i and θ, gives rise to distinct posterior beliefs about payoff types and beliefs about the other agents.
It now becomes evident that there is a discontinuity at ρ_θθ ∈ {0, 1} in what we describe as the set of noise-free Bayes Nash equilibria. The reason is simple and stems from the fact that as ρ_θθ approaches zero or one, one of the dimensions of the uncertainty about fundamentals vanishes. Yet we should emphasize that even as the payoff types approach the case of pure common or pure private values, the part of the fundamental that becomes small can be arbitrarily amplified by the weight λ. For example, as ρ_θθ → 1, the environment becomes arbitrarily close to pure common values, yet the shock Δθ_i can still be amplified by letting λ → 1 in the construction of signal (5) above. Thus, the component Δθ_i acts similarly to a purely idiosyncratic noise in an environment with pure common values. After all, the component Δθ_i only affects the payoffs in a negligible way, but with a large enough weight, it has a non-negligible effect on the actions that the players take. This suggests that as the correlation of types approaches the case of pure common or pure private values, there is no longer a sharp distinction between what is noise and what is fundamentals.
Given the information structure represented by λ and the linearity of the unique Bayes Nash equilibrium (for a given information structure λ), we can immediately derive the properties of the joint distribution of the equilibrium variables.
Proposition 2 (Moments of the Noise-Free BNE)
For every noise-free information structure λ:

1. the mean action is given by:

E[a_i] = μ_θ / (1 − r);    (12)

2. the variance of the individual action is given by:

var(a_i) ≜ σ_a² = w(λ)² ( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ) σ_θi²;    (13)

3. the covariances are given by:

cov(a_i, a_j) ≜ ρ_aa σ_a² = w(λ)² (1 − |λ|)² ρ_θθ σ_θi²,    (14)

and

cov(a_i, θ_i) ≜ ρ_aθ σ_a σ_θi = w(λ) ( λ(1 − ρ_θθ) + (1 − |λ|) ρ_θθ ) σ_θi².    (15)
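The second moments in Proposition 2 can be checked by simulating the noise-free BNE directly (our sketch, not part of the paper, with σ_θi² normalized to 1; parameter values and sample sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
lam, rho, r, n = 0.3, 0.6, 0.4, 500_000   # sigma_theta_i normalized to 1

# equilibrium weight (11)
w = (lam * (1 - rho) + (1 - abs(lam)) * rho) / (
    lam**2 * (1 - rho) + (1 - r) * (1 - abs(lam))**2 * rho)

theta = rng.normal(0.0, np.sqrt(rho), n)    # common component
d_i = rng.normal(0.0, np.sqrt(1 - rho), n)  # idiosyncratic shocks of agents i and j
d_j = rng.normal(0.0, np.sqrt(1 - rho), n)
a_i = w * (lam * d_i + (1 - abs(lam)) * theta)  # equilibrium actions (10)
a_j = w * (lam * d_j + (1 - abs(lam)) * theta)

D = lam**2 * (1 - rho) + (1 - abs(lam))**2 * rho
print(np.var(a_i), w**2 * D)                                # variance (13)
print(np.mean(a_i * a_j), w**2 * (1 - abs(lam))**2 * rho)   # covariance (14)
print(np.mean(a_i * (theta + d_i)),
      w * (lam * (1 - rho) + (1 - abs(lam)) * rho))         # covariance (15)
```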
We observe that the mean of the individual action is only a function of the mean μ_θ of the payoff state θ_i and the interaction parameter r, and thus is invariant with respect to the interdependence in the payoff states and to the information structure λ. By contrast, the second moments respond to the interdependence ρ_θθ in the payoff states and to the information structure λ. Given the normal distribution of the payoff states and the linearity of the strategy, the variance and covariance terms are naturally products of the weights, λ and w(λ), and the variance σ_θi² of the fundamental uncertainty.

In particular, we are interested in the correlation between any pair of actions by agents i and j, a_i and a_j, and the correlation between the action a_i and the state θ_i of agent i. We denote the respective correlation coefficients by ρ_aa and ρ_aθ. From the variance/covariance terms in the above proposition, we can obtain the correlation coefficients explicitly:

ρ_aa = (1 − |λ|)² ρ_θθ / ( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ),    (16)

and

ρ_aθ = | λ(1 − ρ_θθ) + (1 − |λ|) ρ_θθ | / √( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ).    (17)
For completeness, we note that the covariance between the action a_i of agent i and the payoff state θ_j of agent j is given by:

cov(a_i, θ_j) = cov(a_i, θ) ≜ ρ_aθ_j σ_a σ_θi = w(λ) (1 − |λ|) ρ_θθ σ_θi².    (18)

The first equality is due to the idiosyncratic (and hence uncorrelated) component in the payoff states, and the corresponding correlation coefficient ρ_aθ_j is simply the square root of the product of the correlation between the actions and the correlation between the states of agents i and j:

ρ_aθ_j = (1 − |λ|) ρ_θθ / √( λ²(1 − ρ_θθ) + (1 − |λ|)² ρ_θθ ) = √( ρ_aa ρ_θθ ).    (19)
We observe that the correlation coefficients, as a general statistical property, reflect only the direction of the linear relationship of two random variables, but not the slope of that relationship. In this context, it means that only the informational terms, namely λ and ρ_θθ, enter the above correlation coefficients. By contrast, the interaction parameter r, which affects only the slope of the equilibrium strategy but not the composition of the signals, does not appear in the correlation coefficients. For this reason, the correlation coefficients by themselves allow us to describe important statistical relationships of the equilibrium strategies independent of the nature of the strategic interaction represented by r.
Proposition 3 (Characterization of Noise-Free BNE)
For all ρ_θθ ∈ (0, 1), the set of noise-free BNE in the space of correlation coefficients (ρ_aa, ρ_aθ) is given by:

{ (ρ_aa, ρ_aθ) ∈ [0, 1]² : ρ_aθ = | √(ρ_θθ ρ_aa) ± √((1 − ρ_θθ)(1 − ρ_aa)) | }.    (20)
To visualize the noise-free equilibria in the space of correlation coefficients, we plot them in Figure 1. The set of noise-free equilibria is described by the two roots of the above quadratic equation,

[Figure 1: The set of noise-free BNE. The ellipses plot $\rho_{a\theta}$ against $\rho_{aa}$ for $\rho_{\theta\theta} \in \{0.25, 0.5, 0.75\}$.]

visually separated by the dashed line in the illustration. The positive root corresponds to the upper part of the ellipse, the negative root to the lower part.
The positive root is generated by noise-free information structures; we recall that:

    s_i \triangleq \lambda \Delta\theta_i + (1 - |\lambda|)\bar\theta,

with weights $\lambda$ that vary between 0 and 1. For $\lambda = 1$, all the weight is on the idiosyncratic component, and the correlation coefficient $\rho_{aa}$ of the actions of the agents is zero. As $\lambda$ decreases starting at 1, the correlation $\rho_{aa}$ increases and reaches the maximum of 1 for $\lambda = 0$. In between 0 and 1, the highest correlation $\rho_{a\theta} = 1$ is always achieved at the information structure $\lambda = 1/2$, at which the signal exactly informs the agent about his payoff state $\theta_i$, and in consequence the correlation coefficient $\rho_{aa}$ of the actions then mirrors exactly the correlation coefficient of the states, or $\rho_{aa} = \rho_{\theta\theta}$.
The negative root is generated by noise-free information structures that negatively confound the idiosyncratic and the common component of the payoff state. As $\lambda$ turns negative starting from 0, the idiosyncratic component enters with a negative sign, thus negatively confounds, and hence both the correlations $\rho_{aa}$ and $\rho_{a\theta}$ decrease until the correlation $\rho_{a\theta}$ reaches its minimum at zero, which is reached at the information structure $\lambda = -\rho_{\theta\theta}$, as we can verify from (17). At $\lambda = -\rho_{\theta\theta}$, the negative confounding is maximal, and the covariance between the signal $s_i$ and the payoff state $\theta_i$ is zero. In fact, we learn from the conditional expectation, see (8), that $E[\theta_i | s_i] = E[\theta_i] = 0$ for all $s_i$, and so the signal $s_i$ is completely uninformative. In consequence, the equilibrium responsiveness is $w(\lambda) = 0$ and in equilibrium all agents adopt a deterministic choice. As the idiosyncratic component receives a negative weight lower than $-\rho_{\theta\theta}$, the information content about the payoff state $\theta_i$, and in particular about the idiosyncratic component $\Delta\theta_i$, increases again until it returns to be perfectly informative about the idiosyncratic component at $\lambda = -1$, and hence offers the same equilibrium correlation profile as the information structure $\lambda = 1$.⁴

In Proposition 3, we confined attention to $\rho_{\theta\theta} \in (0,1)$, but the result extends to the limiting cases of pure private or pure common values, $\rho_{\theta\theta} \in \{0,1\}$, as follows. We observed earlier that there is a discontinuity in the set of noise-free Bayes Nash equilibria at $\rho_{\theta\theta} \in \{0,1\}$, but we shall find in Section 5 that there is no discontinuity in the set of all Bayes Nash equilibria. In particular, the description of the equilibrium coefficients in (20) of Proposition 3 remains valid even for $\rho_{\theta\theta} \in \{0,1\}$. Only at the limiting points $\rho_{\theta\theta} \in \{0,1\}$ will we need to add extraneous noise relative to the fundamental components $\Delta\theta_i$ and $\bar\theta$, as either the idiosyncratic or the common component of the fundamental state ceases to have variance, and hence cannot support the requisite variance in the signals anymore. In other words, in the case of pure private values, we will have to add a purely common noise term, and conversely, in the case of pure common values, we will have to add a purely idiosyncratic noise term to achieve the equilibrium coefficients of (20).
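The locus (20) can also be traced out directly from the one-dimensional signals with $\lambda \in [-1, 1]$. The sketch below (our own illustrative code; $\rho_{\theta\theta} = 0.6$ is an arbitrary sample value) checks that the pairs $(\rho_{aa}, \rho_{a\theta})$ implied by (16) and (17) always satisfy one of the two roots of (20).

```python
import math

def moments(lam, rho):
    # noise-free signal s_i = lam * dtheta_i + (1 - |lam|) * thetabar
    var_s = lam ** 2 * (1 - rho) + (1 - abs(lam)) ** 2 * rho
    rho_aa = (1 - abs(lam)) ** 2 * rho / var_s
    rho_at = (lam * (1 - rho) + (1 - abs(lam)) * rho) / math.sqrt(var_s)
    return rho_aa, rho_at

rho = 0.6
for k in range(-99, 100):
    lam = k / 100
    if abs(lam + rho) < 1e-9:  # lam = -rho: uninformative signal (see footnote 4)
        continue
    raa, rat = moments(lam, rho)
    plus = math.sqrt(raa * rho) + math.sqrt((1 - raa) * (1 - rho))
    minus = abs(math.sqrt(raa * rho) - math.sqrt((1 - raa) * (1 - rho)))
    # the implied pair lies on the positive or the negative root of (20)
    assert min(abs(abs(rat) - plus), abs(abs(rat) - minus)) < 1e-9
```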
We note that for most of the subsequent results on volatility, it will be sufficient to focus on information structures $\lambda$ inside the unit interval, and the above description gives an immediate reason. Namely, for every correlation coefficient $\rho_{aa}$ that can be achieved with $\lambda \in \mathbb{R} \setminus [0,1]$, there is another information structure $\lambda' \in [0,1]$ such that the same correlation coefficient $\rho_{aa}$ can be obtained, but with a strictly higher correlation coefficient $\rho_{a\theta}$ between action $a_i$ and state $\theta_i$, which is strictly preferred by agent i, as expressed by his best response function (9).
⁴ As an aside, we note that at $\lambda = -\rho_{\theta\theta}$, the equilibrium choice is deterministic, and hence the equilibrium variance of actions is zero. Thus the correlation coefficient $\rho_{aa}$ is not well-defined. But we observe that (as limiting value) we obtain $\rho_{aa} = 1 - \rho_{\theta\theta}$. That is, just before there is zero informativeness, the negative confounding means that the agents cannot relate their action $a_i$ through the common signal, and thus the residual correlation (just before the limit) is close to $1 - \rho_{\theta\theta}$.
3.3 Complete Information

The noise-free information structure gives each agent access to a one-dimensional signal $s_i$ that combines, and hence confounds, the components of the payoff state, $\Delta\theta_i$ and $\bar\theta$. By contrast, under complete information, each agent observes the private and the common component separately, and hence each player receives a noise-free two-dimensional signal $(\Delta\theta_i, \bar\theta)$. Given the linear structure of the best response, the Bayes Nash equilibrium under complete information is easily established to be:

    a_i(\Delta\theta_i, \bar\theta) = \Delta\theta_i + \frac{1}{1-r}\bar\theta.   (21)

In the complete information equilibrium each agent assigns weight 1 to the idiosyncratic component, and weight $1/(1-r)$ to the common component. But, of course, we observe that the strategy remains linear in the components. Thus, we may expect that a one-dimensional noise-free information structure may still be able to replicate the outcome under the two-dimensional complete information structure. Indeed, we can verify that the one-dimensional information structure $\lambda^*$ with:

    \lambda^* \triangleq \frac{1-r}{2-r},   (22)

is the unique information structure $\lambda \in \mathbb{R}$ such that the corresponding slope of the equilibrium strategy:

    w(\lambda^*) = \frac{1}{\lambda^*} = \frac{2-r}{1-r},

yields the complete information equilibrium action for every realized component pair $(\Delta\theta_i, \bar\theta)$:

    a_i(s_i) = \Delta\theta_i + \frac{1}{1-r}\bar\theta.

This critical value $\lambda^*$ for the noise-free information structure will be an important benchmark as we analyze how the information structure changes the responsiveness of the agents relative to the complete information equilibrium. We note that $\lambda^*$ is the unique information structure such that the slope $w(\lambda)$ is independent of the correlation structure $\rho_{\theta\theta}$. It always reproduces the complete information outcome, independent of the correlation structure. By comparison with the information structure $\lambda = 1/2$, at which the agent learns his payoff state $\theta_i$, we find that:

    \lambda^* < \frac{1}{2} \Leftrightarrow r > 0.   (23)

Thus in a game of strategic complementarities, the agent wishes to put less weight on the idiosyncratic component than on the common component, and conversely for strategic substitutes.
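That $\lambda^*$ in (22), together with the slope $w(\lambda^*) = 1/\lambda^*$, replicates the complete information action (21) realization by realization can be checked directly. The sketch below is our own, with arbitrary sample realizations and interaction parameters.

```python
def action_noise_free(dtheta, thetabar, r):
    # one-dimensional signal at the critical weight lam_star = (1-r)/(2-r), eq. (22)
    lam_star = (1 - r) / (2 - r)
    s = lam_star * dtheta + (1 - lam_star) * thetabar
    return s / lam_star  # slope w(lam_star) = 1/lam_star = (2-r)/(1-r)

for r in (-0.75, 0.0, 0.5):
    for dtheta, thetabar in ((1.3, -0.4), (-2.0, 0.9)):
        full_info = dtheta + thetabar / (1 - r)  # complete information action, eq. (21)
        assert abs(action_noise_free(dtheta, thetabar, r) - full_info) < 1e-12
```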
The critical property of the information structure $\lambda^*$ is that the signal $s_i$ is a sufficient statistic with respect to the equilibrium action. We can furthermore evaluate the equilibrium moments of Proposition 2 at $\lambda^*$ and recover the equilibrium moments of the complete information equilibrium. In particular, we find that the correlation coefficient $\rho^*_{aa}$ of the actions is given by:

    \rho^*_{aa} = \frac{\rho_{\theta\theta}}{(1-\rho_{\theta\theta})(1-r)^2 + \rho_{\theta\theta}},

and as the above relationship of the critical value $\lambda^*$ with the balanced information structure $\lambda = 1/2$ suggests, we have that:

    \rho^*_{aa} = \frac{\rho_{\theta\theta}}{(1-\rho_{\theta\theta})(1-r)^2 + \rho_{\theta\theta}} > \rho_{\theta\theta} \Leftrightarrow r > 0.

Thus the equilibrium correlation $\rho_{aa}$ under complete information is larger than under $\lambda = 1/2$, where $\rho_{aa} = \rho_{\theta\theta}$, if and only if there are strategic complements.
3.4 Confounding Information
The idea that confounding shocks can lead to overreaction goes back at least to Lucas (1972). In a seminal contribution, the author shows how monetary shocks can have a real effect in the economy, even when under complete information monetary shocks would have no real effect. As agents observe just a one-dimensional signal that confounds two shocks, namely the market conditions and the monetary supply shock, whatever response they take to the signal affects their response to both shocks in the same way. Under complete information they would condition their labour decision only on the real market conditions, yet as the one-dimensional signal does not allow them to disentangle the two shocks, in equilibrium they respond to both shocks. Thus, this can be seen as an overreaction to monetary shocks due to informational frictions. The idea has been present also in more recent papers; for example, Venkateswaran (2013) uses a similar idea to show how firms can have an excess reaction to aggregate shocks when these are confused with idiosyncratic shocks.

Models in which agents receive one-dimensional signals can also lead to seemingly opposite conclusions. That is, agents respond as if they had complete information, even though the information they have leaves them very uncertain about the real state of the world. Hellwig and Venkateswaran (2009) provide a micro-founded model in which firms receive a one-dimensional signal. Even though the signal leaves firms relatively uncertain about aggregate shocks, the signal they receive is a good statistic of what their optimal action should be. The main reason is that idiosyncratic shocks are usually an order of magnitude larger than aggregate shocks, and thus a signal that confounds both shocks (with similar weights) leaves firms with almost no information on the realization of the aggregate shocks. Nevertheless, firms behave "as if" they had perfect information, even though they remain uncertain about the realization of the shocks, just because the signal they receive constitutes a very good statistic of what their optimal action would be under complete information.

We can also use the previous analysis to reinterpret some results found in the literature. In a recent contribution, Angeletos and La'O (2013) show that an economy without any kind of aggregate uncertainty might still have aggregate fluctuations.⁵ In the language of our model, what drives the aggregate fluctuations in their model is common value noise, which is confounded in the signals of agents with their fundamental uncertainty. They interpret this common value noise as sentiments, which are aggregate fluctuations that drive the economy but have no payoff-relevant effect. Using the previous ideas we can provide a slightly different interpretation of the sentiments. It is not necessary to think of these shocks as being completely payoff irrelevant; they can be payoff-relevant shocks that have a very high weight in the signals of agents. Thus, we can think of sentiments as agents overreacting to small aggregate fundamental shocks. As we will see in the next section, it is precisely small shocks that can induce the largest response in the economy.
4 Volatility and Information

In this section, we use the characterization of the noise-free equilibria to analyze what drives the volatility of the outcomes, in particular the aggregate outcome, in this class of linear quadratic environments. We focus mainly on aggregate volatility, as it is often the most interesting empirical quantity and this lens keeps our discussion focused, but we also state results for the individual volatility and for dispersion.
⁵ There are some subtle differences between the reduced form of the model found in Angeletos and La'O (2013) and our model which are worth briefly discussing. In Angeletos and La'O (2013) the fundamental uncertainty is purely idiosyncratic: agents know the realization of their payoff state, and agents interact with one particular other agent instead of the whole aggregate. These differences translate into the fact that agents remain uncertain about the action of their partner, but not about their own payoff state. Yet, the uncertainty is still purely idiosyncratic. This means that agents face strategic uncertainty instead of fundamental uncertainty; nevertheless, all the intuitions and ideas extend directly to the case in which the uncertainty is of a strategic nature instead of being about fundamentals. We return to the difference between strategic and fundamental uncertainty when we discuss the implications of assuming agents know their own payoff.
4.1 Individual Decision-Making

As we analyze how private information generates individual as well as aggregate volatility, it is instructive to first consider the role of the informational externalities in the absence of strategic interaction. In other words, we begin the analysis with a purely individual decision-making problem in which the strategic parameter r is equal to zero. The best response of each agent is then simply to predict the payoff state $\theta_i$ given the signal $s_i$:

    a_i = E[\theta_i | s_i] = \frac{\lambda(1-\rho_{\theta\theta}) + (1-\lambda)\rho_{\theta\theta}}{\lambda^2(1-\rho_{\theta\theta}) + (1-\lambda)^2\rho_{\theta\theta}}\, s_i.   (24)
As a standard result in statistical decision theory, the individual prediction problem is more responsive the less noisy the information is. Now, as we observed earlier, the information structure $\lambda = 1/2$ allows each agent to perfectly predict the payoff state. Thus the responsiveness, and hence the variance $\sigma_a^2$ of the individual action, is maximized at $\lambda = 1/2$, at which:

    \sigma_a^2 = \sigma_\theta^2 = \sigma_{\Delta\theta}^2 + \sigma_{\bar\theta}^2.

Now, to the extent that the individual payoff states $\theta_i$ and $\theta_j$ are correlated, we find that even though each agent i only solves an individual prediction problem, their actions are correlated by means of the underlying correlation of the payoff states. Under the information structure $\lambda = 1/2$, the aggregate volatility is simply given by:

    \sigma_A^2 = \rho_{aa}\sigma_a^2 = \sigma_{\bar\theta}^2.
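The prediction coefficient in (24) can be evaluated directly. As a check (our own sketch; the sample values are arbitrary), at $\lambda = 1/2$ the slope equals 2 regardless of $\rho_{\theta\theta}$, and the action recovers $\theta_i = \Delta\theta_i + \bar\theta$ exactly.

```python
def prediction_coef(lam, rho):
    # slope of a_i = E[theta_i | s_i] on s_i, eq. (24)
    num = lam * (1 - rho) + (1 - lam) * rho
    den = lam ** 2 * (1 - rho) + (1 - lam) ** 2 * rho
    return num / den

rho = 0.4
w = prediction_coef(0.5, rho)
assert abs(w - 2.0) < 1e-12  # slope 2, independent of rho
# at lam = 1/2 the signal is s_i = theta_i / 2, so a_i = 2 * s_i = theta_i:
dtheta, thetabar = 0.7, -0.2
s = 0.5 * dtheta + 0.5 * thetabar
assert abs(w * s - (dtheta + thetabar)) < 1e-12
```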
But now we can ask whether the aggregate volatility may reach higher levels under information structures different from $\lambda = 1/2$. As the information structure departs from $\lambda = 1/2$, it necessarily introduces a bias in the signal $s_i$ towards one of the two components of the payoff state $\theta_i$. Clearly, the signal $s_i$ is losing informational quality with respect to the payoff state $\theta_i$ as $\lambda$ moves away from 1/2. Thus the individual prediction problem (24) is becoming noisier, and in consequence the response of the individual agent to the signal $s_i$ is attenuated. Hence a larger weight, $1-\lambda$, on the common component $\bar\theta$ may support correlation in the actions across agents, and thus support aggregate volatility, but at the same time the response of the agent is attenuated; a trade-off thus appears between bias and loss of information. We can then ask what is the maximal aggregate volatility that can be sustained across all noise-free information structures.
Proposition 4 (Maximal Aggregate Volatility)
The maximal aggregate volatility:

    \max_\lambda \{\mathrm{var}(A)\} = \frac{\left(\sigma_{\bar\theta} + \sqrt{\sigma_{\bar\theta}^2 + \sigma_{\Delta\theta}^2}\right)^2}{4},   (25)

is achieved by the information structure:

    \arg\max_\lambda \{\mathrm{var}(A)\} = \frac{\sigma_{\bar\theta}}{\sqrt{\sigma_{\bar\theta}^2 + \sigma_{\Delta\theta}^2} + 2\sigma_{\bar\theta}} < \frac{1}{2}.   (26)

In the limit to pure private or pure common values, we have:

    \lim_{\rho_{\theta\theta} \to 0} \max_\lambda \mathrm{var}(A) = \frac{\sigma_\theta^2}{4}, \qquad \lim_{\rho_{\theta\theta} \to 1} \max_\lambda \mathrm{var}(A) = \sigma_\theta^2.   (27)

Thus, the aggregate volatility is indeed maximized by an information structure which biases the signal towards the common component of the payoff state. As we approach the pure common value environment, the maximal aggregate volatility of the actions coincides with the aggregate variance of the common state. More surprisingly, as we approach the pure private value environment, and hence consider an environment with purely idiosyncratic payoff uncertainty, the maximal aggregate volatility does not converge to zero; rather it is bounded away from 0 and given by $\sigma_\theta^2/4$. The information structure that maximizes the aggregate volatility, given by (26), can also be expressed in terms of the correlation coefficient of the payoff states, or:

    \arg\max_\lambda \{\mathrm{var}(A)\} = \frac{\sqrt{\rho_{\theta\theta}}}{1 + 2\sqrt{\rho_{\theta\theta}}}.

Thus, as the payoff environment approaches the pure private value, the signal puts more and more weight on the vanishing common component. It thus amplifies the response to the small common shock and maintains a substantial correlation in the signals, and hence in the actions across the agents, even though the impact of the common component on the payoff state is decreasing. In fact, if we evaluate the ratio of the idiosyncratic volatility and the common volatility in the signal of the information structure (26), denoted $\hat\lambda$, then we find that its limiting value is given by:

    \lim_{\rho_{\theta\theta} \to 0} \frac{\hat\lambda\,\sigma_{\Delta\theta}}{(1-\hat\lambda)\,\sigma_{\bar\theta}} = \lim_{\rho_{\theta\theta} \to 0} \frac{\sqrt{1-\rho_{\theta\theta}}}{1 + \sqrt{\rho_{\theta\theta}}} = 1.

Thus, the economy can maintain a large aggregate volatility even in the presence of vanishing aggregate uncertainty by confounding the payoff-relevant information about the private component with the (in the limit) payoff-irrelevant information about the common component. The optimal trade-off between biasing the information and yet maintaining responsiveness towards the action is given by an information structure with a signal-to-noise ratio exactly equal to 1.
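The closed forms (25) and (26) can be confirmed by brute force over $\lambda$. The following sketch (our own illustrative code; the two component variances are arbitrary sample values) maximizes the aggregate volatility of the pure prediction problem (r = 0) on a fine grid and compares it with the closed forms.

```python
import math

def agg_vol(lam, var_d, var_c):
    # var(A) for the pure prediction problem (r = 0) with s_i = lam*dtheta + (1-lam)*thetabar
    cov = lam * var_d + (1 - lam) * var_c            # cov(theta_i, s_i)
    var_s = lam ** 2 * var_d + (1 - lam) ** 2 * var_c
    w = cov / var_s                                   # responsiveness, eq. (24)
    return (w * (1 - lam)) ** 2 * var_c

var_d, var_c = 0.9, 0.1  # idiosyncratic / common variances (sample values)
best = max(agg_vol(k / 10 ** 5, var_d, var_c) for k in range(1, 10 ** 5))
closed_form = (math.sqrt(var_c) + math.sqrt(var_c + var_d)) ** 2 / 4          # eq. (25)
lam_hat = math.sqrt(var_c) / (math.sqrt(var_c + var_d) + 2 * math.sqrt(var_c))  # eq. (26)
assert abs(best - closed_form) < 1e-6
assert abs(agg_vol(lam_hat, var_d, var_c) - closed_form) < 1e-12
assert lam_hat < 0.5
```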
Finally, we observe that the dispersion of the individual action:

    \Delta a_i \triangleq a_i - A \quad (\text{so that } a_i = A + \Delta a_i),

namely the volatility of the individual action beyond the common, or aggregate, volatility, is given by:

    \mathrm{var}(\Delta a_i) = (1 - \rho_{aa})\,\sigma_a^2.

In fact, the dispersion of the individual actions is an entirely symmetric expression to the aggregate volatility after we define:

    \tilde\lambda \triangleq 1 - \lambda, \qquad \tilde\rho \triangleq 1 - \rho_{\theta\theta}.

Namely, we can write the dispersion:

    \mathrm{var}(\Delta a_i) = \left( \frac{\lambda(1-\rho_{\theta\theta}) + (1-\lambda)\rho_{\theta\theta}}{\lambda^2(1-\rho_{\theta\theta}) + (1-\lambda)^2\rho_{\theta\theta}} \right)^2 \lambda^2 (1-\rho_{\theta\theta})\,\sigma_\theta^2,

in terms of the above newly defined variables as:

    \mathrm{var}(\Delta a_i) = \left( \frac{\tilde\lambda(1-\tilde\rho) + (1-\tilde\lambda)\tilde\rho}{\tilde\lambda^2(1-\tilde\rho) + (1-\tilde\lambda)^2\tilde\rho} \right)^2 (1-\tilde\lambda)^2 \tilde\rho\,\sigma_\theta^2.

Thus the problem of finding the maximal dispersion represents the mirror image of finding the maximal volatility.
Corollary 1 (Maximal Dispersion)
The maximal dispersion:

    \max_\lambda \{\mathrm{var}(\Delta a_i)\} = \frac{\left(\sqrt{1-\rho_{\theta\theta}} + 1\right)^2 \sigma_\theta^2}{4},

is achieved by the information structure:

    \arg\max_\lambda \{\mathrm{var}(\Delta a_i)\} = \frac{1 + \sqrt{1-\rho_{\theta\theta}}}{1 + 2\sqrt{1-\rho_{\theta\theta}}} > \frac{1}{2}.   (28)
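The mirror-image property can be checked numerically as well. The sketch below (ours; $\rho_{\theta\theta} = 0.35$ is an arbitrary sample value, with $\sigma_\theta^2$ normalized to 1) verifies that the candidate in (28) attains the stated maximum of the dispersion and lies above 1/2.

```python
import math

def dispersion(lam, rho):
    # var(a_i - A) with sigma_theta^2 normalized to 1 and r = 0
    num = lam * (1 - rho) + (1 - lam) * rho
    den = lam ** 2 * (1 - rho) + (1 - lam) ** 2 * rho
    return (num / den) ** 2 * lam ** 2 * (1 - rho)

rho = 0.35
lam_tilde = (1 + math.sqrt(1 - rho)) / (1 + 2 * math.sqrt(1 - rho))  # eq. (28)
target = (math.sqrt(1 - rho) + 1) ** 2 / 4                            # maximal dispersion
assert lam_tilde > 0.5
assert abs(dispersion(lam_tilde, rho) - target) < 1e-12
# no grid point does better than the closed-form maximum:
assert all(dispersion(k / 1000, rho) <= target + 1e-9 for k in range(1, 1000))
```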
4.2 Interaction and Information

We now extend our insights from individual decision-making to strategic environments with r ≠ 0. Now, the responsiveness of the action to the signal will depend on the nature of the signal but also on the nature of the interaction. Still, for each individual agent the residual uncertainty in the signal will affect the responsiveness to his underlying payoff state $\theta_i$.

In Section 3.3, we established that for every level of interaction r, there is a noise-free, but one-dimensional, information structure $\lambda^*$ under which the agents' equilibrium behavior exactly mimics the complete information Nash equilibrium. This benchmark information structure $\lambda^*$ allows us to describe in general terms how the information structure $\lambda$ affects the equilibrium behavior, and in particular the responsiveness to the payoff fundamentals, relative to the complete information outcome.
Proposition 5 (Responsiveness to Fundamentals)
In the noise-free BNE with information structure $\lambda$:

    1. $\lambda \in (\lambda^*, 1) \Leftrightarrow \mathrm{cov}(a_i, \Delta\theta_i)/\sigma_{\Delta\theta}^2 > 1$;

    2. $\lambda \in (0, \lambda^*) \Leftrightarrow \mathrm{cov}(a_i, \bar\theta)/\sigma_{\bar\theta}^2 > \dfrac{1}{1-r}$.
Thus, the responsiveness of the action to each component of the payoff state is determined by the weight $\lambda$ that the signal assigns relative to the complete information benchmark $\lambda^*$. Importantly, for any given information structure $\lambda$, the responsiveness is typically stronger than in the complete information environment for exactly one of the components. Importantly, with interdependent values, the maximal responsiveness of the individual action to either the common or the idiosyncratic component is achieved with uncertainty about the payoff state, and not under complete information. (We recall that with pure common values, any residual uncertainty about the payoff state inevitably reduced the responsiveness of the individual agent to the common state, and ultimately also reduced the aggregate responsiveness. Similarly, with pure private values, for each individual agent the residual uncertainty attenuated the responsiveness to his payoff state $\theta_i$.) By contrast, with interdependent values, the interaction between the idiosyncratic and the common component in the payoff state can correlate the responsiveness of the agents without attenuating the individual response, thus leading to a greater responsiveness than could be achieved under complete information.
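Proposition 5 can be checked numerically. In the sketch below (ours; the parameter values are illustrative), we parametrize the noise-free BNE as $a_i = \alpha(\Delta\theta_i + \mu\bar\theta)$ with $\mu = (1-\lambda)/\lambda$; the fixed-point formula for $\alpha$ is our own derivation from the linear best response, not a formula quoted from the text.

```python
def responsiveness(lam, rho, r):
    # noise-free BNE: a_i = alpha * (dtheta_i + mu * thetabar), mu = (1 - lam) / lam;
    # alpha solves the linear best-response fixed point for interaction r
    mu = (1 - lam) / lam
    var_d, var_c = 1 - rho, rho
    alpha = (var_d + mu * var_c) / (var_d + (1 - r) * mu ** 2 * var_c)
    return alpha, alpha * mu  # (responsiveness to dtheta_i, responsiveness to thetabar)

rho, r = 0.5, 0.6
lam_star = (1 - r) / (2 - r)
for lam in (0.2, lam_star, 0.8):
    to_d, to_c = responsiveness(lam, rho, r)
    if lam > lam_star:
        assert to_d > 1                 # overreaction to the idiosyncratic component
    elif lam < lam_star:
        assert to_c > 1 / (1 - r)       # overreaction to the common component
    else:
        # at lam_star both match the complete information benchmarks
        assert abs(to_d - 1) < 1e-12 and abs(to_c - 1 / (1 - r)) < 1e-12
```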
In Figure 2 we plot the responsiveness to the two components of the fundamental state for the case of $\rho_{\theta\theta} = 0.5$ and different interaction parameters r. The threshold values simply correspond to the critical value $\lambda^*_r$ for each of the considered interaction parameters $r \in \{-\frac{3}{4}, 0, +\frac{3}{4}\}$. The horizontal black lines represent the responsiveness to the common component $\bar\theta$ in the complete information equilibrium, which is equal to $1/(1-r)$, and the responsiveness to the idiosyncratic part, which is always equal to 1. By contrast, the red curves represent the responsiveness to the common component along the noise-free equilibrium, and the blue curves represent the responsiveness to the idiosyncratic component. Thus if $\lambda < \lambda^*$, then the responsiveness to the common component is larger than in the complete information equilibrium, and conversely for $\Delta\theta_i$.

[Figure 2: Responsiveness to Fundamentals for $\rho_{\theta\theta} = 1/2$. The curves plot $\mathrm{cov}(a_i, \Delta\theta_i)$ and $\mathrm{cov}(a_i, \bar\theta)$ against $\lambda$ for $r \in \{-\frac{3}{4}, 0, +\frac{3}{4}\}$.]
Moreover, we observe that the maximum responsiveness to the common component is never attained under either the complete information equilibrium or at the boundary values of $\lambda$, at 0 or 1. This immediately implies that the responsiveness is not monotonic in the informational content. We now provide some general comparative static results with respect to the strategic environment represented by r.
Proposition 6 (Comparative Statics)
For all $\rho_{\theta\theta} \in (0,1)$:

    1. the maximal individual volatility $\max_\lambda \sigma_a^2$, the maximal aggregate volatility $\max_\lambda \rho_{aa}\sigma_a^2$, and the maximal dispersion $\max_\lambda (1-\rho_{aa})\sigma_a^2$ are strictly increasing in r;

    2. the corresponding weights on the idiosyncratic component, $\arg\max_\lambda \sigma_a^2$, $\arg\max_\lambda \rho_{aa}\sigma_a^2$, and $\arg\max_\lambda (1-\rho_{aa})\sigma_a^2$, are strictly decreasing in r;

    3. the corresponding weights on the idiosyncratic component satisfy:

        \arg\max_\lambda (1-\rho_{aa})\sigma_a^2 > \arg\max_\lambda \sigma_a^2 > \arg\max_\lambda \rho_{aa}\sigma_a^2.
Thus, the maximal volatility, both individual and aggregate, is increasing in the level of complementarity r. Even the maximal dispersion is increasing in r. In the maximally dispersive equilibrium, the agents confound the idiosyncratic and aggregate component of the payoff type and overreact to the idiosyncratic part, and this effect increases with r: as r increases, the responsiveness to the common component $\bar\theta$ increases, and hence the overreaction to the idiosyncratic component $\Delta\theta_i$ increases as well. Moreover, the optimal weight on the aggregate component increases in r for all of the second moments.
We can contrast the behavior of aggregate volatility with pure common values and with interdependent values even more dramatically. Up to now, we stated our results with a representation of the payoff uncertainty in terms of the total variance $\sigma_\theta^2$ of the payoff state and the correlation coefficient $\rho_{\theta\theta}$, see (4). The alternative parametrization, see (3), represented the uncertainty by the variance $\sigma_{\bar\theta}^2$ of the common component $\bar\theta$ and the variance $\sigma_{\Delta\theta}^2$ of the idiosyncratic component $\Delta\theta_i$ separately (and then the correlation coefficient across agents is determined by the two variance terms). For the next result, we shall adopt this second parametrization, as we want to ask what happens to the aggregate volatility as we keep the variance $\sigma_{\bar\theta}^2$ of the common component constant and increase the variance $\sigma_{\Delta\theta}^2$ of the idiosyncratic component $\Delta\theta_i$, or conversely.
Proposition 7 (Aggregate Volatility)
For all $r \in (-1, 1)$, the maximal aggregate volatility is equal to:

    \max_\lambda \{\rho_{aa}\sigma_a^2\} = \frac{\left(\sigma_{\bar\theta} + \sqrt{\sigma_{\bar\theta}^2 + (1-r)\sigma_{\Delta\theta}^2}\right)^2}{4(1-r)^2},   (29)

and is strictly increasing without bound in $\sigma_{\Delta\theta}^2$. As $\sigma_{\Delta\theta}^2 \to 0$, this converges to $\sigma_{\bar\theta}^2/(1-r)^2$, which is also the aggregate volatility in the complete information equilibrium. As $\sigma_{\bar\theta}^2 \to 0$, this converges to $\max_\lambda \{\rho_{aa}\sigma_a^2\} = \sigma_{\Delta\theta}^2/(4(1-r))$.
In other words, as we move away from the model of pure common values, that is $\sigma_{\Delta\theta}^2 = 0$, the aggregate volatility is largest with some amount of incomplete information. In consequence, the maximum aggregate volatility is not bounded by the aggregate volatility under the complete information equilibrium, as is the case with common values. In fact, the aggregate volatility is increasing without bound in the variance of the idiosyncratic component even in the absence of variance of the common component $\bar\theta$. The latter result is in stark contrast to the complete information equilibrium, in which the aggregate volatility is unaffected by the variance of the idiosyncratic component. This illustrates in a simple way that in this model the aggregate volatility may result from uncertainty about the aggregate fundamental or the idiosyncratic fundamental.
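The closed form (29) can be confirmed by brute force, reusing the fixed-point parametrization of the noise-free BNE, $a_i = \alpha(\Delta\theta_i + \mu\bar\theta)$ with $\mu = (1-\lambda)/\lambda$ (the formula for $\alpha$ is our own derivation, and the parameter values below are arbitrary samples).

```python
import math

def agg_vol(lam, var_d, var_c, r):
    # aggregate volatility in the noise-free BNE: A = alpha * mu * thetabar
    mu = (1 - lam) / lam
    alpha = (var_d + mu * var_c) / (var_d + (1 - r) * mu ** 2 * var_c)
    return (alpha * mu) ** 2 * var_c

var_d, var_c, r = 2.0, 0.5, 0.4
closed_form = (math.sqrt(var_c) + math.sqrt(var_c + (1 - r) * var_d)) ** 2 / (4 * (1 - r) ** 2)  # eq. (29)
best = max(agg_vol(k / 10 ** 5, var_d, var_c, r) for k in range(1, 10 ** 5))
assert abs(best - closed_form) < 1e-5
# as var_d -> 0 the bound approaches the complete information level var_c / (1 - r)^2:
assert abs((2 * math.sqrt(var_c)) ** 2 / (4 * (1 - r) ** 2) - var_c / (1 - r) ** 2) < 1e-12
```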
Earlier, we suggested that the impact of the confounding information on the equilibrium behavior is distinct in the interdependent value environment relative to either the pure private or the pure common value environment. We can now make this precise by evaluating the impact that the introduction of a common component has in a world of purely idiosyncratic uncertainty. Evaluating the aggregate volatility and asking by how much it can be increased by adding a common payoff shock with arbitrarily small variance, we find from (29) that:

    \frac{\partial \max_\lambda \{\rho_{aa}\sigma_a^2\}}{\partial \sigma_{\bar\theta}} \Big|_{\sigma_{\bar\theta}=0} = \frac{\sigma_{\Delta\theta}}{2(1-r)^{3/2}}.

More generally, there is a positive interaction between the variance of the idiosyncratic and the common term with respect to the aggregate volatility that can prevail in equilibrium, as the cross-derivative informs us:

    \frac{\partial^2 \max_\lambda \{\rho_{aa}\sigma_a^2\}}{\partial \sigma_{\Delta\theta}\, \partial \sigma_{\bar\theta}} = \frac{\sigma_{\Delta\theta}^3}{2\left(\sigma_{\bar\theta}^2 + (1-r)\sigma_{\Delta\theta}^2\right)^{3/2}} > 0.

Interestingly, for a given variance of the common component, the positive interaction effect as measured by the cross-derivative occurs already at finite values of the variance of the idiosyncratic component.
4.3 Volatility and Informational Friction

The previous analysis allows us to extract some simple intuitions on when informational frictions have a larger effect. For this, it is worth highlighting once again that, as we approach a world of idiosyncratic uncertainty ($\rho_{\theta\theta} \to 0$), the maximum aggregate volatility is bounded away from 0 and is achieved by a noise-free equilibrium. This implies that, as we move to a world of idiosyncratic uncertainty, the response to the aggregate uncertainty must be getting arbitrarily large. In general, any shock that has small variance⁶ can have a large response by agents if its weight in the signal is large. On the other hand, shocks that have very high variance generally have a response that is similar to the response they would have in a complete information environment. This is simply because a shock that has a very high weight on a signal, but also has a large variance, means that agents learn the true value of the shock from the signal. Thus, there cannot be any large excess response due to confounding of the shocks in the signals.

If we go back to the discussion of Angeletos and La'O (2013), we can see how in a model with idiosyncratic uncertainty, any arbitrarily small aggregate shock can have a huge effect. It is not necessary to have a large common value noise; a small shock can be arbitrarily amplified if it is overweighted in the signal of agents. Of course, our model does not allow us to distinguish a large noise shock from a small noise shock that has a large weight in the signal. After all, noise shocks are by definition payoff irrelevant, so one can scale them up or down without affecting anything in the model. On the other hand, one can easily interpret an aggregate shock to fundamentals that has very small variance but receives a large weight in the signals of agents.
5 Bayes Correlated Equilibrium

So far, we analyzed the outcomes under the Bayes Nash equilibrium for a very special class of information structures. Now, we establish that these special, noise-free information structures indeed form the boundaries of the equilibrium behavior for a very large class of information structures. At first glance, the argument may appear to be an indirect route, but we hope to convince the reader that it is arguably the most expeditious route to take. We first define a solution concept, the Bayes correlated equilibrium, that describes the behavior (and outcomes) independent of the specific information structure that the agents may have access to. We then establish that the set of outcomes under the Bayes correlated equilibrium is equivalent to the set of outcomes that may arise under Bayes Nash equilibrium for all possible information structures. The set of Bayes correlated equilibria has the advantage that it can be completely described by a small set of parameters and inequalities that restrict the second moments of the equilibrium distribution. We then show that the equilibria which form the boundary of the equilibrium set in terms of the inequalities can be compactly described as noise-free Bayes correlated equilibria. In turn, these equilibria, as the name suggests, can be shown to be equivalent to the set of noise-free Bayes Nash equilibria we described earlier. Finally, we show that a large class of "welfare functions", including those that represent the individual or the aggregate volatility, are maximized by choosing equilibria, and hence information structures, that are on the boundary of the equilibrium set. Thus, we conclude that the special class of noise-free information structures are indeed the relevant information structures as we wish to analyze the maximal volatility or dispersion that can arise in equilibrium.

⁶ Throughout the rest of this subsection we refer to "large" and "small" shocks as shocks having large or small variance.
5.1 Definition of Bayes Correlated Equilibrium

We first define the new solution concept.

Definition 2 (Bayes Correlated Equilibrium)
The variables $(\theta_i, \bar\theta, a_i, A)$ form a symmetric and normally distributed Bayes correlated equilibrium (BCE) if their joint distribution is given by a multivariate normal distribution and for all i and $a_i$:

    a_i = r E[A | a_i] + E[\theta_i | a_i].   (30)

The Bayes correlated equilibrium requires that the joint distribution of states and actions satisfies the best response condition (30) for every action $a_i$ in the support of the joint distribution. As before, we shall restrict our attention to symmetric and normally distributed equilibrium outcomes.

We emphasize that the equilibrium notion does not at all refer to any information structure or signals, and thus is defined without reference to any specific information structure. As the equilibrium object is the joint distribution of $(\theta_i, \bar\theta, a_i, A)$, the only informational restrictions that are imposed by the equilibrium notion are that: (i) the marginal distribution over the payoff-relevant states $(\theta_i, \bar\theta)$ coincides with the common prior, and (ii) each agent conditions on the information contained in the joint distribution, and hence on the conditional expectation given $a_i$, when choosing action $a_i$. The equilibrium notion thus differs notably from the Bayes Nash equilibrium under a given information structure.

We denote the variance-covariance matrix of the joint distribution of $(\theta_i, \bar\theta, a_i, A)$ by V. Next, we identify necessary and sufficient conditions such that the random variables $(\theta_i, \bar\theta, a_i, A)$ form a Bayes correlated equilibrium. These conditions can be separated into two distinct sets of requirements: the first set consists of conditions such that the variance-covariance matrix V of the joint multivariate distribution constitutes a valid variance-covariance matrix, namely that it is positive semidefinite; and a second set of conditions that guarantee that the best response condition (30) holds. The first set of conditions are purely statistical requirements. The second set of conditions are necessary for any BCE, and these latter conditions merely rely on the linearity of the best response. Importantly, both sets of conditions are necessary independent of the assumption of normally distributed payoff uncertainty. The normality assumption will simply ensure that the equilibrium distributions are completely determined by the first and second moments. Thus, the normality assumption allows us to describe the set of BCE in terms of restrictions that are necessary and sufficient.
5.2
Characterization of Bayes Correlated Equilibria
We begin the analysis of the Bayes correlated equilibrium by reducing the dimensionality of the variance-covariance matrix. We appeal to the symmetry condition to express the aggregate variance in terms of the individual variance and the correlation between individual terms. Just as we described above the variance $\sigma_{\bar\theta}^2$ of the common component $\bar\theta$ in terms of the covariance between any two individual payoff states in (4), or $\sigma_{\bar\theta}^2 = \rho_{\theta\theta}\sigma_\theta^2$, we can describe the variance of the aggregate action $\sigma_A^2$ in terms of the covariance of any two individual actions, or $\sigma_A^2 = \rho_{aa}\sigma_a^2$. Earlier, see Proposition 2, we denoted the correlation coefficient between the action $a_i$ and the payoff state $\theta_i$ of player $i$ by $\rho_{a\theta}$:
$$\rho_{a\theta} \triangleq \frac{\mathrm{cov}(a_i, \theta_i)}{\sigma_a \sigma_\theta},$$
and the correlation coefficient between the action $a_i$ of agent $i$ and the payoff state $\theta_j$ of a different agent $j$ as $\rho_{a\bar\theta}$:
$$\rho_{a\bar\theta} \triangleq \frac{\mathrm{cov}(a_i, \theta_j)}{\sigma_a \sigma_\theta}.$$
These three correlation coefficients, $\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$, parameterize the entire variance-covariance matrix. To see why, observe that the covariance between a purely idiosyncratic random variable and a common random variable is always 0. This implies that both the covariance between the aggregate action $A$ and the payoff state $\theta_j$ of player $j$ and the covariance between agent $i$'s action $a_i$ and the common component of the payoff state, $\bar\theta$, are the same as the covariance between the action of player $i$ and the payoff state $\theta_j$ of player $j$, or $\rho_{a\bar\theta}\sigma_a\sigma_\theta$. Thus we can reduce the number of variance terms, and in particular the number of correlation coefficients needed to describe the variance-covariance matrix $V$, without loss of generality.
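The identity $\sigma_A^2 = \rho_{aa}\sigma_a^2$ can be checked numerically. The following sketch (our own illustration; the population size, number of draws, and parameter values are arbitrary choices) simulates a large finite population of symmetric, equicorrelated actions and compares the variance of the average action with the covariance of two individual actions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, draws = 20_000, 4_000
sigma_a, rho_aa = 2.0, 0.3

# Equicorrelated actions: a_i = common + idiosyncratic, with
# var(common) = rho_aa * sigma_a^2 and var(idiosyncratic) = (1 - rho_aa) * sigma_a^2.
common = rng.normal(0.0, np.sqrt(rho_aa) * sigma_a, size=(draws, 1))
idio = rng.normal(0.0, np.sqrt(1 - rho_aa) * sigma_a, size=(draws, n))
actions = common + idio

A = actions.mean(axis=1)   # aggregate action, one realization per draw

# var(A) and cov(a_1, a_2) both approximate rho_aa * sigma_a^2 = 1.2
assert abs(A.var() - rho_aa * sigma_a**2) < 0.2
assert abs(np.cov(actions[:, 0], actions[:, 1])[0, 1] - rho_aa * sigma_a**2) < 0.4
```

For a finite population the variance of the average is $\rho_{aa}\sigma_a^2 + (1-\rho_{aa})\sigma_a^2/n$, so the idiosyncratic term vanishes as $n$ grows, which is the continuum logic used in the text.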
Lemma 1 (Symmetric Bayes Correlated Equilibrium)
The variables $(\theta_i, \bar\theta, a_i, A)$ form a symmetric and normally distributed Bayes correlated equilibrium (BCE) if and only if there exist parameters of the first and second moments, $\mu_a, \sigma_a, \rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$, such that the joint distribution is given by:
$$
\begin{pmatrix} \theta_i \\ \bar\theta \\ a_i \\ A \end{pmatrix}
\sim N\left(
\begin{pmatrix} \mu_\theta \\ \mu_\theta \\ \mu_a \\ \mu_a \end{pmatrix},
\begin{pmatrix}
\sigma_\theta^2 & \rho_{\theta\theta}\sigma_\theta^2 & \rho_{a\theta}\sigma_a\sigma_\theta & \rho_{a\bar\theta}\sigma_a\sigma_\theta \\
\rho_{\theta\theta}\sigma_\theta^2 & \rho_{\theta\theta}\sigma_\theta^2 & \rho_{a\bar\theta}\sigma_a\sigma_\theta & \rho_{a\bar\theta}\sigma_a\sigma_\theta \\
\rho_{a\theta}\sigma_a\sigma_\theta & \rho_{a\bar\theta}\sigma_a\sigma_\theta & \sigma_a^2 & \rho_{aa}\sigma_a^2 \\
\rho_{a\bar\theta}\sigma_a\sigma_\theta & \rho_{a\bar\theta}\sigma_a\sigma_\theta & \rho_{aa}\sigma_a^2 & \rho_{aa}\sigma_a^2
\end{pmatrix}
\right), \quad (31)
$$
and for all $i$ and $a_i$:
$$a_i = r\,E[A \mid a_i] + E[\theta_i \mid a_i]. \quad (32)$$
With a multivariate normal distribution the conditional expectations have the familiar linear form, and we can write the best response condition (32) in terms of the first and second moments:
$$a_i = r\left(\mu_a + \rho_{aa}(a_i - \mu_a)\right) + \left(\mu_\theta + \rho_{a\theta}\frac{\sigma_\theta}{\sigma_a}(a_i - \mu_a)\right). \quad (33)$$
By the law of iterated expectations we determine the mean $\mu_a$ of the individual action:
$$\mu_a = r\mu_a + \mu_\theta \quad \Longleftrightarrow \quad \mu_a = \frac{\mu_\theta}{1-r}. \quad (34)$$
Taking the derivative of (33) with respect to $a_i$ we determine the standard deviation $\sigma_a$ of the individual action:
$$1 = r\rho_{aa} + \rho_{a\theta}\frac{\sigma_\theta}{\sigma_a} \quad \Longleftrightarrow \quad \sigma_a = \frac{\rho_{a\theta}\sigma_\theta}{1 - r\rho_{aa}}. \quad (35)$$
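The two moment conditions can be verified directly: with $\mu_a$ and $\sigma_a$ defined by (34) and (35), the linear best response holds identically in $a_i$. A minimal sketch (the parameter values are illustrative choices of ours):

```python
# Check that mu_a = mu_theta / (1 - r) and
# sigma_a = rho_atheta * sigma_theta / (1 - r * rho_aa)
# make the linear best response (33) hold identically in a_i.
r, mu_theta, sigma_theta = 0.5, 1.0, 2.0
rho_aa, rho_atheta = 0.4, 0.7          # illustrative correlation values

mu_a = mu_theta / (1 - r)
sigma_a = rho_atheta * sigma_theta / (1 - r * rho_aa)

def best_response(a_i):
    # Right-hand side of (33): r*E[A | a_i] + E[theta_i | a_i]
    return (r * (mu_a + rho_aa * (a_i - mu_a))
            + mu_theta + rho_atheta * (sigma_theta / sigma_a) * (a_i - mu_a))

for a_i in (-3.0, 0.0, 2.5, 10.0):
    assert abs(best_response(a_i) - a_i) < 1e-12
```

The slope of the right-hand side of (33) in $a_i$ is $r\rho_{aa} + \rho_{a\theta}\sigma_\theta/\sigma_a = 1$ and the intercept is $r\mu_a + \mu_\theta - \mu_a = 0$, which is exactly what the assertions confirm.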
We thus have a complete determination of the individual mean and variance. Now, for $V$ to be a valid variance-covariance matrix, it has to be positive semi-definite, and this imposes restrictions on the remaining covariance terms.
Proposition 8 (Characterization of BCE)
A multivariate normal distribution of $(\theta_i, \bar\theta, a_i, A)$ is a symmetric Bayes correlated equilibrium if and only if:

1. the mean of the individual action is:
$$\mu_a = \frac{\mu_\theta}{1-r}; \quad (36)$$

2. the standard deviation of the individual action is:
$$\sigma_a = \frac{\rho_{a\theta}\sigma_\theta}{1 - r\rho_{aa}} \geq 0; \quad (37)$$

3. the correlation coefficients $\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$ satisfy the nonnegativity conditions $\rho_{aa}, \rho_{a\theta} \geq 0$ and the inequalities:
$$\text{(i) } (\rho_{a\bar\theta})^2 \leq \rho_{\theta\theta}\rho_{aa}; \qquad \text{(ii) } (1-\rho_{aa})(1-\rho_{\theta\theta}) \geq (\rho_{a\theta} - \rho_{a\bar\theta})^2. \quad (38)$$

There are two aspects of a BCE that we should highlight. First, the mean $\mu_a$ of the individual action (and a fortiori the mean of the aggregate action $A$) is completely pinned down by the payoff fundamentals. This implies that any differences across Bayes correlated equilibria must manifest themselves in the second moments only.^7 Second, the restrictions on the equilibrium correlation coefficients do not depend at all on the interaction parameter $r$. It is easy to see from the proof of Proposition 8 that the restrictions on the set of equilibrium correlations are purely statistical. They come exclusively from the condition that the variance-covariance matrix $V$ forms a positive semi-definite matrix. As we show later in Section 6, this disentanglement of the set of feasible correlations from the interaction parameter is possible only if we allow for all possible information structures, i.e., when we do not impose any restrictions on the private information that agents may have. By contrast, the mean $\mu_a$ and the variance $\sigma_a^2$ of the individual actions do depend on the interaction parameter $r$, as they are determined by the best response condition (32).
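The claim that these restrictions are purely statistical can be illustrated numerically: assembling the matrix $V$ of Lemma 1 from a candidate triple and testing positive semi-definiteness reproduces exactly the two quadratic inequalities in (38). (The sign conditions, by contrast, come from the best response via (37).) A sketch, with a grid and tolerances of our own choosing:

```python
import numpy as np

sigma_theta, sigma_a, rho_thth = 1.0, 1.0, 0.5   # illustrative normalization

def V(rho_aa, rho_at, rho_atb):
    """Variance-covariance matrix of (theta_i, theta_bar, a_i, A) as in Lemma 1."""
    st2, sa2, sast = sigma_theta**2, sigma_a**2, sigma_a * sigma_theta
    return np.array([
        [st2,             rho_thth * st2, rho_at * sast,  rho_atb * sast],
        [rho_thth * st2,  rho_thth * st2, rho_atb * sast, rho_atb * sast],
        [rho_at * sast,   rho_atb * sast, sa2,            rho_aa * sa2],
        [rho_atb * sast,  rho_atb * sast, rho_aa * sa2,   rho_aa * sa2],
    ])

def quad_ineq_38(rho_aa, rho_at, rho_atb):
    """The two quadratic inequalities of (38)."""
    return (rho_atb**2 <= rho_thth * rho_aa + 1e-12
            and (rho_at - rho_atb)**2 <= (1 - rho_aa) * (1 - rho_thth) + 1e-12)

def psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol

# over a grid of candidate triples, PSD of V coincides with the inequalities in (38)
for rho_aa in np.linspace(0, 1, 11):
    for rho_at in np.linspace(-1, 1, 21):
        for rho_atb in np.linspace(-1, 1, 21):
            assert psd(V(rho_aa, rho_at, rho_atb)) == quad_ineq_38(rho_aa, rho_at, rho_atb)
```

The check exploits that, after the change of variables to $(\Delta\theta_i, \bar\theta, \Delta a_i, A)$, the matrix $V$ is block diagonal, so positive semi-definiteness reduces to the two $2\times2$ determinant conditions in (38).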
We note that in the special case of pure private (or pure common) values the set of outcomes in terms of the correlation coefficients $\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$ reduces to a two-dimensional set. The reduction in dimensionality arises as the correlation coefficient $\rho_{a\bar\theta}$ of the individual action and the common state is either zero (as in the pure private value case) or equal to the correlation coefficient between the individual action and the individual state (as in the pure common value case), and thus redundant in either case.
5.3 Equivalence between BNE and BCE
Next we describe the relationship between the joint distributions $(\theta_i, \bar\theta, a_i, A)$ that can arise as Bayes correlated equilibria and the distributions that may arise as a Bayes Nash equilibrium for some information structure $I = \{S_i\}_{i\in[0,1]}$. In contrast to the restriction to one-dimensional noise-free information structures made in Section 3, we now (implicitly) allow for a much larger class of information structures, including noisy and multi-dimensional information structures. In fact, for the present purpose, it is sufficient to merely require that the associated symmetric equilibrium strategy $\{a_i\}_{i\in[0,1]}$, with $a_i : S_i \to \mathbb{R}$, forms a multivariate normal distribution.

^7 We report in Proposition 8 the equilibrium mean action for a given mean $\mu_\theta \in \mathbb{R}$, but as mentioned in Section 2, we frequently adopt the normalization that $\mu_\theta = 0$, as any change in $\mu_\theta$ only affects the expected action $\mu_a$ without any further implications for the higher moments of the equilibrium distribution.
Definition 3 (Bayes Nash Equilibrium)
The random variables $\{a_i\}_{i\in[0,1]}$ form a normally distributed Bayes Nash equilibrium under information structure $I = \{S_i\}_{i\in[0,1]}$ if and only if the random variables $\{a_i\}_{i\in[0,1]}$ are normally distributed and
$$a_i = E[\theta_i + rA \mid s_i], \quad \forall i, \forall s_i.$$
In Section 6, we explicitly analyze certain classes of multi-dimensional normally distributed
information structures, many of which have already appeared in the literature. As we will see
then, and as it has been established in the literature, the normality of the signals together with
the linearity of the best response function leads to the normality of the outcome distribution. We
postpone the discussion until the next section, as for the moment the relevant condition is the
normality of the outcome distribution itself.
Proposition 9 (Equivalence Between BCE and BNE)
The variables $(\theta_i, \bar\theta, a_i, A)$ form a (normal) Bayes correlated equilibrium if and only if there exists some information structure $I$ under which the variables $(\theta_i, \bar\theta, a_i, A)$ form a Bayes Nash equilibrium.
The important insight of the equivalence is that the set of outcomes that can be achieved as a BNE for some information structure can equivalently be described as a BCE. Thus, the solution concept of BCE allows us to study the set of outcomes that can be achieved as a BNE, importantly without the need to specify a particular information structure. In Bergemann and Morris (2013a), we establish the equivalence between Bayes correlated equilibrium and Bayes Nash equilibrium for canonical finite games and arbitrary information structures (see Theorem 1 there). The above proposition specializes the proof to an environment with linear best responses and normally distributed payoff states and actions.
We will discuss specific information structures and their associated equilibrium behavior in Section 6. Here, we describe a one-dimensional class of signals, which adds noise to the noise-free information structures analyzed earlier, and which is already sufficiently rich to decentralize the entire set of Bayes correlated equilibria as Bayes Nash equilibria. By adding noise $\varepsilon_i$ to the noise-free information structure, we generate a signal $s_i$:
$$s_i = \lambda\Delta\theta_i + (1-\lambda)\bar\theta + \varepsilon_i, \quad (39)$$
where $\varepsilon_i$ is normally distributed with mean zero and variance $\sigma_\varepsilon^2$. Similar to the definition of the payoff-relevant fundamentals, the individual error term $\varepsilon_i$ can have a common component:
$$\bar\varepsilon \triangleq E_i[\varepsilon_i],$$
and an idiosyncratic component:
$$\Delta\varepsilon_i \triangleq \varepsilon_i - \bar\varepsilon,$$
while being independent of the fundamental component. Thus, the joint distribution of the states and signals is given by:
$$
\begin{pmatrix} \Delta\theta_i \\ \bar\theta \\ \Delta\varepsilon_i \\ \bar\varepsilon \end{pmatrix}
\sim N\left(
\begin{pmatrix} 0 \\ \mu_\theta \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix}
(1-\rho_{\theta\theta})\sigma_\theta^2 & 0 & 0 & 0 \\
0 & \rho_{\theta\theta}\sigma_\theta^2 & 0 & 0 \\
0 & 0 & (1-\rho_{\varepsilon\varepsilon})\sigma_\varepsilon^2 & 0 \\
0 & 0 & 0 & \rho_{\varepsilon\varepsilon}\sigma_\varepsilon^2
\end{pmatrix}
\right), \quad (40)
$$
and the standard deviation $\sigma_\varepsilon > 0$ and the correlation coefficient $\rho_{\varepsilon\varepsilon} \in [0,1]$ are the parameters of the fully specified information structure $I = \{I_i\}_{i\in[0,1]}$, together with the confounding parameter $\lambda$. We observe that the dimensionality of the information structure $I$, given by (39) and (40), and thus parametrized by $(\lambda, \sigma_\varepsilon, \rho_{\varepsilon\varepsilon})$, matches the dimensionality of the Bayes correlated equilibrium in terms of the correlation coefficients $\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$. Relative to the boundary of the BCE in the $(\rho_{aa}, \rho_{a\theta})$ space, intuitively, the variance $\sigma_\varepsilon^2$ of the noise term controls how far the equilibrium correlation $\rho_{a\theta}$ falls vertically below the boundary, whereas the correlation $\rho_{\varepsilon\varepsilon}$ in the errors across agents controls the correlation in the agents' actions as the noise in the observation increases.

Proposition 10 (Decentralization of BCE)
The variables $(\theta_i, \bar\theta, a_i, A)$ form a (normal) Bayes correlated equilibrium if and only if there exists some information structure $(\lambda, \sigma_\varepsilon^2, \rho_{\varepsilon\varepsilon})$ under which the variables $(\theta_i, \bar\theta, a_i, A)$ form a Bayes Nash equilibrium.
5.4 The Boundary of Bayes Correlated Equilibria
In Proposition 8, we characterized the entire set of Bayes correlated equilibria. The mean and the variance of the individual action were determined by the equalities (36) and (37), whereas the correlation coefficients $\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$ were restricted by the two inequalities given by (38). We will now characterize the set of equilibria in which the two inequalities given by (38) are satisfied with equality, which we call the boundary of the Bayes correlated equilibrium set.^8 It follows that the boundary of the Bayes correlated equilibrium set can be expressed as a one-dimensional object in the three-dimensional space of correlation coefficients:
$$\rho_{aa} \in [0,1], \quad \rho_{a\theta} \in [0,1], \quad \rho_{a\bar\theta} \in [-1,1],$$
where the restriction to the positive unit interval for $\rho_{aa}$ and $\rho_{a\theta}$ was also established in Proposition 8.
Corollary 2 (Boundary of Bayes Correlated Equilibria)
For all $\rho_{\theta\theta} \in (0,1)$, the boundary of the BCE (in the space of correlation coefficients $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$) is given by:
$$\left\{(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}) \in [0,1]^2 \times [-1,1] : \left|\rho_{a\theta} - \sqrt{\rho_{\theta\theta}\rho_{aa}}\right| = \sqrt{(1-\rho_{\theta\theta})(1-\rho_{aa})},\ \rho_{a\bar\theta} = \sqrt{\rho_{\theta\theta}\rho_{aa}}\right\}. \quad (41)$$
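A short numerical check of the boundary family (a sketch with illustrative values of our own; $\sigma_a = \sigma_\theta = 1$): on the boundary, the Gaussian conditional-variance formula $\mathrm{var}(y \mid x) = \sigma_y^2 - \mathrm{cov}(x,y)^2/\sigma_x^2$ gives zero residual variance for both the common and the idiosyncratic action component, anticipating the noise-free property discussed below:

```python
import numpy as np

rho_thth = 0.5  # illustrative interdependence, rho_thth in (0, 1)

for rho_aa in np.linspace(0.01, 0.99, 50):
    rho_atb = np.sqrt(rho_thth * rho_aa)             # inequality (i) binding
    gap = np.sqrt((1 - rho_thth) * (1 - rho_aa))     # inequality (ii) binding
    rho_at = rho_atb + gap                           # upper branch of (41)

    # conditional variance of A given theta_bar (common components):
    var_A = rho_aa - rho_atb**2 / rho_thth
    # conditional variance of Delta a_i given Delta theta_i (idiosyncratic):
    var_da = (1 - rho_aa) - (rho_at - rho_atb)**2 / (1 - rho_thth)
    assert abs(var_A) < 1e-12 and abs(var_da) < 1e-12
```

Both residual variances vanish identically along the curve, so each action component is a deterministic function of the corresponding state component.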
The above description of the boundary of the Bayes correlated equilibria is purely in terms of correlation coefficients, and we might ask what structural properties the boundary equilibria have that identify them and distinguish them from the interior equilibria. As we observed earlier, we can decompose the action of each agent in terms of his responsiveness to the components of his payoff state $\theta_i$, namely the idiosyncratic component $\Delta\theta_i$ and the common component $\bar\theta$; any residual responsiveness has to be attributed to noise. Given the multivariate normal distribution, the responsiveness of the agent to the components of his payoff state is directly expressed by the covariance:
$$\frac{\partial E[a_i \mid \Delta\theta_i]}{\partial \Delta\theta_i} = \frac{\mathrm{cov}(a_i, \Delta\theta_i)}{\sigma_{\Delta\theta}^2}, \qquad \frac{\partial E[a_i \mid \bar\theta]}{\partial \bar\theta} = \frac{\mathrm{cov}(a_i, \bar\theta)}{\sigma_{\bar\theta}^2}. \quad (42)$$

^8 Technically, the set of feasible correlations $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$ is a three-dimensional object, and thus its boundary is a two-dimensional object. To characterize the boundary one would have to impose that each of the two inequalities given by (38) is satisfied with equality separately. There are two reasons to study the case in which both inequalities are satisfied with equality. First, this effectively provides the boundary of the set of feasible correlations in $(\rho_{a\theta}, \rho_{aa})$ space. Second, it is a simpler object to explain and illustrate, while the intuitions extend directly to the case in which we impose that the inequalities in (38) are satisfied with equality separately.
The action $a_i$ itself also has an idiosyncratic and a common component, as $a_i = A + \Delta a_i$. The conditional variance of these components of $a_i$ can be expressed in terms of the correlation coefficients $\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta}$, which are subject to the restrictions of Proposition 8. By using the familiar property of the multivariate normal distribution for the conditional variance, we obtain a diagonal matrix:
$$\mathrm{var}\left[\begin{pmatrix} \Delta a_i \\ A \end{pmatrix} \,\middle|\, \Delta\theta_i, \bar\theta\right] = \sigma_a^2 \begin{pmatrix} (1-\rho_{aa}) - \dfrac{(\rho_{a\theta} - \rho_{a\bar\theta})^2}{1-\rho_{\theta\theta}} & 0 \\ 0 & \rho_{aa} - \dfrac{\rho_{a\bar\theta}^2}{\rho_{\theta\theta}} \end{pmatrix}. \quad (43)$$
If the components $A$ and $\Delta a_i$ of the agent's action are completely explained by the components of the payoff state, $\bar\theta$ and $\Delta\theta_i$, then the conditional variance of the action components, and a fortiori of the action itself, is equal to zero. Now, by comparing the conditional variances above with the conditions of Proposition 8, it is easy to see that the conditional variances are equal to zero if and only if the conditions of Proposition 8 are satisfied as equalities. Moreover, by the conditions of Proposition 8.3, in any BCE the conditional variance of the action $a_i$ can be equal to zero if and only if the conditional variances of the components $A$ and $\Delta a_i$ are each equal to zero. This suggests the following definition.
Definition 4 (Noise Free BCE)
A BCE is noise free if $a_i$ has zero variance, conditional on $\bar\theta$ and $\Delta\theta_i$.
We observe that the above matrix of conditional variances is only well defined for interdependent values, that is, for $\rho_{\theta\theta} \in (0,1)$. For the case of pure private or pure common values, $\rho_{\theta\theta} = 0$ or $\rho_{\theta\theta} = 1$, only one of the diagonal terms is meaningful, as the other conditioning term, $\bar\theta$ or $\Delta\theta_i$, has zero variance by definition. Now, we immediately have a complete description of the boundary of the Bayes correlated equilibrium set in terms of noise-free Bayes correlated equilibria. Moreover, as the name may suggest, the set of noise-free BCE coincides exactly with the set of noise-free BNE that we analyzed in Section 3.
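The diagonal matrix in (43) is an instance of the standard Gaussian conditional-variance (Schur complement) formula $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$. A sketch checking the two closed-form entries at one interior point (our own parameter values; $\sigma_a = \sigma_\theta = 1$):

```python
import numpy as np

rho_thth, rho_aa, rho_at, rho_atb = 0.5, 0.6, 0.7, 0.4   # an interior point of (38):
# (0.4)^2 = 0.16 <= 0.5*0.6 = 0.3 and (0.7-0.4)^2 = 0.09 <= 0.4*0.5 = 0.2

# Joint covariance of (Delta a_i, A, Delta theta_i, theta_bar), unit variances:
S = np.array([
    [1 - rho_aa,        0.0,      rho_at - rho_atb,  0.0],
    [0.0,               rho_aa,   0.0,               rho_atb],
    [rho_at - rho_atb,  0.0,      1 - rho_thth,      0.0],
    [0.0,               rho_atb,  0.0,               rho_thth],
])
S11, S12, S22 = S[:2, :2], S[:2, 2:], S[2:, 2:]
cond = S11 - S12 @ np.linalg.inv(S22) @ S12.T   # var[(Delta a_i, A) | states]

closed_form = np.diag([(1 - rho_aa) - (rho_at - rho_atb)**2 / (1 - rho_thth),
                       rho_aa - rho_atb**2 / rho_thth])
assert np.allclose(cond, closed_form)
```

The off-diagonal entries of the conditional variance vanish because the common and idiosyncratic components are mutually independent, which is why (43) is diagonal.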
Corollary 3 (Equivalence of Noise Free BNE and BCE)

1. For all $\rho_{\theta\theta} \in (0,1)$, a BCE is noise free if and only if it is a boundary BCE.

2. For all $\rho_{\theta\theta} \in (0,1)$, a BCE is noise free if and only if it is a noise-free BNE for some information structure $\lambda$.
We have thus established that all the equilibria at the boundary of the Bayes correlated equilibrium set can arise as Bayes Nash equilibria of the one-dimensional noise-free structures that we analyzed in Section 3. Thus, to the extent that we are interested in extremal behavior, behavior that arises at the boundary rather than in the interior of the equilibrium set, Corollary 3 establishes that the restriction to noise-free information structures is not only a necessary but also a sufficient condition. But of course, whether the relevant equilibrium behavior (and associated information structure) is located in the interior or at the boundary largely depends on the objective function. The final result of this section presents a weak sufficient condition under which the objective function is indeed maximized at the boundary of the Bayes correlated equilibrium set. Given our characterization of the set of Bayes correlated equilibria, we know that the mean and variance are determined by the interaction structure and the correlation coefficients, where the latter are restricted by the inequalities derived in Proposition 8. Thus, if we are interested in identifying the equilibrium with the maximal aggregate or individual volatility, or essentially any other function of the first and second moments of the equilibrium distribution, we essentially look at some function $\psi : [-1,1]^3 \to \mathbb{R}$, whose domain is given by the triple of correlation coefficients $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$.
Proposition 11 (Maximal Equilibria)
Let $\psi(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$ be an arbitrary continuous function, strictly increasing in $\rho_{a\theta}$ and weakly increasing in $\rho_{a\bar\theta}$. Then, the BCE that maximizes $\psi$ is a noise-free BCE.
An immediate consequence of the above result is the following.

Corollary 4 (Volatility and Dispersion)
The individual volatility $\sigma_a^2$, the aggregate volatility $\rho_{aa}\sigma_a^2$, and the dispersion $(1-\rho_{aa})\sigma_a^2$ are all continuous functions of $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$, strictly increasing in $\rho_{a\theta}$ and weakly increasing in $\rho_{a\bar\theta}$.
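The logic of Proposition 11 can be illustrated with a crude grid search (our own sketch; the values $r = 0.3$, $\rho_{\theta\theta} = 0.5$, $\sigma_\theta = 1$ are arbitrary choices): maximizing the aggregate volatility $\rho_{aa}\sigma_a^2$ over the feasible correlations always drives $\rho_{a\theta}$ to its boundary value, i.e. to a noise-free BCE:

```python
import numpy as np

r, rho_thth = 0.3, 0.5          # illustrative parameters, sigma_theta = 1

def aggregate_volatility(rho_aa, rho_at):
    sigma_a = rho_at / (1 - r * rho_aa)     # equation (37) with sigma_theta = 1
    return rho_aa * sigma_a**2              # rho_aa * sigma_a^2

best, argbest = -1.0, None
for rho_aa in np.linspace(0.0, 1.0, 401):
    # the largest rho_atheta consistent with (38) for a given rho_aa is obtained
    # by setting rho_a,theta_bar = sqrt(rho_thth * rho_aa) and binding (ii):
    rho_at_max = np.sqrt(rho_thth * rho_aa) + np.sqrt((1 - rho_thth) * (1 - rho_aa))
    for rho_at in np.linspace(0.0, rho_at_max, 401):
        vol = aggregate_volatility(rho_aa, rho_at)
        if vol > best:
            best, argbest = vol, (rho_aa, rho_at, rho_at_max)

rho_aa, rho_at, rho_at_max = argbest
assert abs(rho_at - rho_at_max) < 1e-9   # maximizer sits on the noise-free boundary
```

In this run the maximizer has an interior $\rho_{aa}$ but $\rho_{a\theta}$ pinned to the boundary, consistent with the monotonicity conditions of Proposition 11 being silent about $\rho_{aa}$.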
Thus we can already conclude that information structures that maximize either individual or aggregate volatility, or dispersion, are by necessity the noise-free information structures that we analyzed in Section 3. Moreover, Proposition 11 indicates that these noise-free information structures would remain relevant if we were to conduct a more comprehensive welfare analysis, as the associated objective functions, in particular if they were linear-quadratic as well, would satisfy the mild monotonicity conditions of Proposition 11. In particular, we note that the conditions of Proposition 11 are silent about the correlation coefficient $\rho_{aa}$, and thus we can accommodate arbitrary behavior; in particular, we can accommodate environments (and payoffs and associated objective functions) with strategic substitutes or complements.
6 Information Structures and Equilibrium Behavior
We began our analysis in Section 3 with a class of specific, namely noise-free, information structures, and analyzed the resulting Bayes Nash equilibrium behavior with respect to volatility and dispersion. We then established in Section 5 that the noise-free information structures indeed form the boundary of equilibrium behavior with respect to the relevant correlation coefficients across all possible symmetric and normally distributed information structures. In particular, Proposition 9 established the equivalence of the set of Bayes correlated equilibria with the set of Bayes Nash equilibria under all symmetric and normally distributed information structures. As the literature has frequently restricted attention to specific information structures, we can now ask, in light of Propositions 8 and 9, how a specific information structure restricts the equilibrium behavior, and more generally, how restrictive or permissive commonly used classes of information structures are with respect to the entire set of feasible equilibrium behavior as characterized by the Bayes correlated equilibrium. In Subsection 6.1, we analyze various classes of information structures, one-dimensional as well as multi-dimensional ones, and directly derive the restrictions they impose on equilibrium behavior. An important insight that emerges here is that the dimensionality of the information structure is by itself not a sufficient statistic for the dimensionality of the supported equilibrium behavior, and that information structures limit equilibrium behavior in subtle ways. In Subsection 6.2, we deploy the full characterization of the Bayes correlated equilibrium to analyze information sharing among firms. With an exhaustive description of the set of feasible equilibrium behavior, we can show that information sharing is revenue optimal more frequently than established previously, and identify new information sharing policies that are revenue superior to the ones derived previously.
6.1 Information as Restriction on Equilibrium Behavior
We now explicitly analyze the equilibrium restrictions that are imposed by a number of commonly discussed classes of information structures. The objective of the analysis is twofold. First, we illustrate how different exogenous assumptions on the information structures restrict the possible outcomes that may be achieved. Second, the solution concept of Bayes correlated equilibrium is used as a device to characterize the set of outcomes achievable for a given class of information structures. The specific information structures that we study are given by, or form a subset of, the following three-dimensional information structures:
$$s_i \triangleq \{s_i^1 = \theta_i + \varepsilon_i^1,\ s_i^2 = \bar\theta + \varepsilon_i^2,\ s_i^3 = \bar\theta + \varepsilon^3\}, \quad (44)$$
where $\varepsilon_i^1, \varepsilon_i^2$ are idiosyncratic noise terms and $\varepsilon^3$ is a common noise term. This class of information structures appears in the analysis of Angeletos and Pavan (2009) and is parameterized by three variables, namely the variances $(\sigma_{\varepsilon_1}^2, \sigma_{\varepsilon_2}^2, \sigma_{\varepsilon_3}^2)$ of the noise terms. We begin by characterizing the set of feasible correlations when agents only observe a noisy and idiosyncratic signal of their private payoff state $\theta_i$:
$$s_i^1 = \theta_i + \varepsilon_i^1, \quad (45)$$
and thus we set $\sigma_{\varepsilon_2}^2 = \sigma_{\varepsilon_3}^2 = \infty$. This class of signals is frequently used in the literature on information sharing; see Vives (1990b) and Raith (1996).
Proposition 12 (Noisy Idiosyncratic Signal of Payoff Type)
A set of correlations $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$ can be achieved as a Bayes Nash equilibrium with an information structure $\{s_i^1\}_{i\in[0,1]}$ if and only if:
$$\rho_{a\theta} = \sqrt{\rho_{aa}/\rho_{\theta\theta}}, \qquad \rho_{a\bar\theta} = \sqrt{\rho_{\theta\theta}\rho_{aa}}, \qquad \rho_{aa} \in [0, \rho_{\theta\theta}]. \quad (46)$$
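Since correlation coefficients are invariant under affine transformations and the equilibrium action is linear in the one-dimensional signal, the correlations in (46) can be read off the signal $s_i^1$ itself. A short check (our own, with illustrative parameter values):

```python
import numpy as np

rho_thth, sigma_theta = 0.5, 1.0             # illustrative parameters

for sigma_eps in (0.0, 0.5, 1.0, 4.0):
    v = sigma_theta**2 + sigma_eps**2        # var(s_i^1)
    rho_aa = rho_thth * sigma_theta**2 / v   # corr(s_i^1, s_j^1)
    rho_at = sigma_theta / np.sqrt(v)        # corr(s_i^1, theta_i)
    rho_atb = rho_thth * sigma_theta / np.sqrt(v)   # corr(s_i^1, theta_j), j != i

    assert abs(rho_at - np.sqrt(rho_aa / rho_thth)) < 1e-12    # first equation of (46)
    assert abs(rho_atb - np.sqrt(rho_thth * rho_aa)) < 1e-12   # second equation of (46)
    assert 0 <= rho_aa <= rho_thth + 1e-12                     # range in (46)
```

As the noise variance ranges over $[0, \infty)$, the action correlation $\rho_{aa}$ sweeps out $(0, \rho_{\theta\theta}]$, and none of the three correlations involves $r$, consistent with the observation below.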
We observe that the set of feasible correlations when the agents receive only a one-dimensional signal of the form $s_i^1$ does not depend on the interaction parameter $r$. This at first perhaps surprising result is actually true for any class of one-dimensional signals. In fact, earlier we showed that a larger set of one-dimensional signals allows us to span the entire set of feasible outcomes (that is, all Bayes correlated equilibria). And indeed, for the entire set of feasible outcomes, the set of feasible correlations is independent of $r$.
A second interesting information structure to study is the case in which, besides the signal $\{s_i^1\}_{i\in[0,1]}$, the players also know the average payoff state $\bar\theta$; thus we set $\sigma_{\varepsilon_2}^2 = \sigma_{\varepsilon_3}^2 = 0$, and each agent observes $\bar\theta$ and $s_i^1$. Although a priori this may not seem like an information structure that might arise exogenously, it is the information structure that arises when agents receive endogenous information on the average action taken by other players. For example, in a rational expectations equilibrium with a continuum of sellers as studied by Vives (2013), each seller takes an action, the supply, given a signal about his private cost $\theta_i$ and given the price $p$, which is simply a linear function of the average supply, and in turn a linear function of the average cost $\bar\theta$.
Proposition 13 (Agents Know the Common Component $\bar\theta$)
A set of correlations $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$ can be achieved as a Bayes Nash equilibrium with an information structure $\{s_i^1\}_{i\in[0,1]}, \bar\theta$ if and only if:
$$\rho_{a\theta} = \frac{1 - r\rho_{aa}}{1-r}\sqrt{\rho_{\theta\theta}/\rho_{aa}}, \qquad \rho_{a\bar\theta} = \sqrt{\rho_{\theta\theta}\rho_{aa}}, \qquad \rho_{aa} \in [\bar\rho_{aa}, 1]. \quad (47)$$
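Proposition 13 can be verified by computing the linear equilibrium directly: our own derivation (a sketch, with illustrative parameter values) posits $a_i = \bar\theta/(1-r) + \kappa(\Delta\theta_i + \varepsilon_i^1)$ with the usual signal-extraction weight $\kappa$, and confirms that the induced correlations satisfy (47):

```python
import numpy as np

r, rho_thth, sigma_theta = 0.4, 0.5, 1.0     # illustrative parameters
var_common = rho_thth * sigma_theta**2        # var(theta_bar)
var_idio = (1 - rho_thth) * sigma_theta**2    # var(Delta theta_i)

for sigma_eps in (0.0, 0.7, 2.0):
    # conjectured equilibrium given (theta_bar, s_i^1):
    # a_i = theta_bar / (1 - r) + k * (Delta theta_i + eps_i^1)
    k = var_idio / (var_idio + sigma_eps**2)  # signal-extraction weight
    var_A = var_common / (1 - r)**2
    var_a = var_A + k**2 * (var_idio + sigma_eps**2)

    rho_aa = var_A / var_a
    rho_at = (var_common / (1 - r) + k * var_idio) / (np.sqrt(var_a) * sigma_theta)
    rho_atb = (var_common / (1 - r)) / (np.sqrt(var_a) * sigma_theta)

    assert abs(rho_at - (1 - r * rho_aa) / (1 - r) * np.sqrt(rho_thth / rho_aa)) < 1e-12
    assert abs(rho_atb - np.sqrt(rho_thth * rho_aa)) < 1e-12
```

At $\sigma_{\varepsilon} = 0$ the agent learns $\theta_i$ exactly and $\rho_{aa}$ hits its complete-information lower bound $\bar\rho_{aa}$; as $\sigma_{\varepsilon} \to \infty$ the action tracks $\bar\theta$ only and $\rho_{aa} \to 1$.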
Now, the set of feasible correlations indeed depends on $r$, and the information structure $\{s_i^1\}_{i\in[0,1]}, \bar\theta$ generates a two-dimensional signal for every agent $i$. We also find that the correlation coefficient $\rho_{aa}$ has a lower bound, which corresponds precisely to the correlation coefficient $\bar\rho_{aa}$ achieved in the complete information equilibrium.
In Figure 3 we illustrate the set of correlations that can be achieved when each agent receives a noisy signal $s_i^1$ of his payoff state $\theta_i$, and when each agent in addition knows the average payoff state $\bar\theta$. We depict the feasible correlations in the two-dimensional space of correlation coefficients $\rho_{aa}$ and $\rho_{a\theta}$. Here we observe that either the one-dimensional signal $s_i^1$ or the two-dimensional signal $(s_i^1, \bar\theta)$ induces a one-dimensional subspace of $(\rho_{aa}, \rho_{a\theta})$. But Propositions 12 and 13 in fact establish that these signals maintain a one-dimensional subspace even with respect to the full three-dimensional space of correlation coefficients $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$.
[Figure 3 here: horizontal axis $\rho_{aa}$, vertical axis $\rho_{a\theta}$; the plotted curves are the BCE frontier, the noisy signal on the own payoff state, and the case of known $\bar\theta$ for several values of $r$.]

Figure 3: Set of feasible correlations for subset of information structures ($\rho_{\theta\theta} = 1/2$)
We now characterize the set of feasible correlations when agents know their own payoff state, and thus we set $\sigma_{\varepsilon_1}^2 = 0$; that is, all possible outcomes that are consistent with agents knowing at least $\theta_i$. Since each agent knows his own payoff type, the residual uncertainty is with respect to the actions taken by other players. This informational assumption of knowing one's own payoff state $\theta_i$ commonly appears in the macroeconomics literature. For example, Angeletos, Iovino, and La'O (2011) analyze the impact of public information on social welfare when the agents know their own payoff state $\theta_i$ but may receive additional information about the common component $\bar\theta$. In a model with idiosyncratic rather than aggregate interaction, Angeletos and La'O (2013) analyze the impact of informational frictions on aggregate fluctuations. Again, they assume that each agent knows his own payoff state $\theta_i$, but is uncertain about the payoff state $\theta_j$ of the trading partner $j$. Similarly, Lorenzoni (2010) investigates optimal monetary policy with dispersed information. He also considers a form of individual matching rather than aggregate interaction. The common informational assumption in all of these models is that every agent $i$ knows his own payoff state $\theta_i$, and thus all uncertainty is purely strategic.
The characterization is achieved in two steps. First, we describe the set of feasible action correlations $\rho_{aa}$. If each agent only knows his own payoff state, then the correlation across actions is equal to $\rho_{\theta\theta}$. In this case, the only information that agent $i$ receives is his payoff state $\theta_i$, and the actions of any two agents can only be correlated to the extent that their payoff states are correlated. By contrast, if the agents have complete information, the correlation is given, as established earlier, by $\bar\rho_{aa}$. We find that the set of feasible action correlations always lies between these two quantities, which provide the lower and upper bounds. If $r > 0$, then the complete information bound is the upper bound; if $r < 0$, it is the lower bound. For $r = 0$ they coincide, as $\theta_i$ is a sufficient statistic for the action taken by each agent under complete information.

Second, we describe the set of feasible correlations between action and state, $\rho_{a\theta}$, for any feasible $\rho_{aa}$. The set of feasible $\rho_{a\theta}$ is determined by two functions of $\rho_{aa}$, which provide the lower and upper bounds for the feasible $\rho_{a\theta}$. We denote these functions by $\rho_{a\theta}^c$ and $\rho_{a\theta}^i$, as these bounds are achieved by information structures in which each agent knows his own payoff state and receives a second signal, either an idiosyncratic signal of the common component $\bar\theta$: $s_i^2 \triangleq \bar\theta + \varepsilon_i^2$, or a common, public signal of $\bar\theta$: $s_i^3 \triangleq \bar\theta + \varepsilon^3$.
Proposition 14 (Bounds on BCE Correlation Coefficients)
A set of correlations $(\rho_{aa}, \rho_{a\theta})$ can be induced by a linear Bayes Nash equilibrium in which each agent knows his payoff state $\theta_i$ if and only if:
$$\rho_{aa} \in [\min\{\rho_{\theta\theta}, \bar\rho_{aa}\}, \max\{\rho_{\theta\theta}, \bar\rho_{aa}\}],$$
and for a given $\rho_{aa}$:
$$\rho_{a\theta} \in [\min\{\rho_{a\theta}^c, \rho_{a\theta}^i\}, \max\{\rho_{a\theta}^c, \rho_{a\theta}^i\}].$$
In Figure 4, we illustrate the Bayes correlated equilibrium set for different values of the interaction parameter $r$ with a given interdependency $\rho_{\theta\theta} = 1/2$. Each interaction value $r$ is represented by a differently colored pair of lower and upper bounds. For each value of $r$, the entire set of BCE is given by the area enclosed by the lower and upper bound. Notably, the bounds $\rho_{a\theta}^c(\rho_{aa})$ and $\rho_{a\theta}^i(\rho_{aa})$ intersect in two points, corresponding to each agent knowing his payoff state $\theta_i$ only (at $\rho_{aa} = \rho_{\theta\theta} = 1/2$) and to complete information (at the low or high end of $\rho_{aa}$, depending on the nature of the interaction), respectively. In fact these, and only these, two points are also noise-free equilibria of the unrestricted set of BCE. When $r \geq 0$, the upper bound is given by signals with idiosyncratic error terms, while the lower bound is given by signals with common error terms, and conversely for $r \leq 0$. With $r > 0$, as the additional signal contains an idiosyncratic error, it forces the agent to stay close to his known payoff state, as this is where the desired correlation with the other agents arises, and to incorporate the information about the common component only slowly, thus overall tracking his own payoff state $\theta_i$ as closely as possible and achieving the upper bound. The argument for the lower bound follows a similar logic.

Thus the present restriction to information structures in which each agent knows at least his own payoff state dramatically reduces the set of possible BCE. With the exception of these two points, all elements of the smaller set are in the interior of the unrestricted set of BCE. Moreover, the nature of the interaction has a profound impact on the shape of the correlations that can arise in equilibrium, both in terms of the size and the location of the set in the unit square of $(\rho_{aa}, \rho_{a\theta})$. Moreover, the set of feasible correlations $(\rho_{aa}, \rho_{a\theta}, \rho_{a\bar\theta})$ forms a two-dimensional space.
[Figure 4 here: horizontal axis $\rho_{aa}$, vertical axis $\rho_{a\theta}$; the BCE frontier together with pairs of lower and upper bounds for several values of $r$.]

Figure 4: Boundary of the set of feasible correlations when agents know own payoff ($\rho_{\theta\theta} = 1/2$)
We emphasize that in the case of pure common or pure private values, the set of Bayes correlated equilibria where each agent knows his own payoff state is degenerate. Under either pure common or pure private values, if each agent knows his own payoff state, then there is no uncertainty left, and thus the only possible outcome corresponds to the complete information outcome.

So far, we confined attention to strict subsets of the three-dimensional information structures defined earlier by (44). Notably, each agent either observed only a one-dimensional signal $s_i^1$ of the private payoff state $\theta_i$, or observed either the private payoff state $\theta_i$ or the common component $\bar\theta$ without noise. Now that we understand how the observability of the individual state or of the components of the individual state affects the equilibrium correlations, we ask what is the entire set of equilibrium correlations that can be achieved with the three-dimensional signal structure. As the dimensionality of the information structure defined by (44) coincides with the dimensionality of the set of equilibrium correlations, we might expect that the set of information structures is sufficiently rich to span the equilibrium set defined by the BCE. In fact, in the case of pure common values Bergemann and Morris (2013b) show that any BCE can be decentralized by considering a pair of noisy signals, a private and a public signal of the payoff state, namely $s_i^1 = \theta_i + \varepsilon_i^1$ and $s_i^3 = \bar\theta + \varepsilon^3$ in terms of the current language. By contrast, in the present general environment neither the class of binary nor the extended class of ternary information structures can decentralize the entire set of BCE.
We can characterize the set of feasible correlations under the information structure $s_i$ in $(\rho_{aa}, \rho_{a\theta})$ space. In order to illustrate the set of feasible correlations in the $(\rho_{aa}, \rho_{a\theta})$ space that can be attained by signals of the form $s_i$, we define the following upper bound:
$$\tilde\rho_{a\theta}(\rho_{aa}) \triangleq \max\{\rho_{a\theta} \in [0,1] : \text{(63), (64), (65) and (66) are satisfied for } \rho_{aa} \text{ fixed}\}. \quad (48)$$
In Proposition 15 we characterize $\tilde\rho_{a\theta}(\rho_{aa})$, which corresponds to connecting the previous upper bounds of the more restrictive information structures investigated in Propositions 12-14.
Proposition 15 (Boundary of Correlations under $s_i$)
The boundary of feasible correlations $\tilde\rho_{a\theta}(\rho_{aa})$ is given by:

1. If $\rho_{aa} \in [0, \min\{\bar\rho_{aa}, \rho_{\theta\theta}\}]$, then $\tilde\rho_{a\theta}(\rho_{aa})$ is attained by an information structure in which each agent gets a noisy signal on his own payoff state: $\sigma_{\varepsilon_1} \in [0,\infty]$, $\sigma_{\varepsilon_2} = \sigma_{\varepsilon_3} = \infty$, with:
$$\tilde\rho_{a\theta}(\rho_{aa}) = \sqrt{\rho_{aa}/\rho_{\theta\theta}}.$$

2. If $\rho_{aa} \in [\min\{\bar\rho_{aa}, \rho_{\theta\theta}\}, \max\{\bar\rho_{aa}, \rho_{\theta\theta}\}]$, then $\tilde\rho_{a\theta}(\rho_{aa})$ is attained by an information structure in which each agent knows his own payoff state: $\sigma_{\varepsilon_1} = 0$, $\sigma_{\varepsilon_2}, \sigma_{\varepsilon_3} \in (0,\infty)$, with:
$$\tilde\rho_{a\theta}(\rho_{aa}) = \max\{\rho_{a\theta}^c, \rho_{a\theta}^i\}.$$

3. If $\rho_{aa} \in [\max\{\bar\rho_{aa}, \rho_{\theta\theta}\}, 1]$, then $\tilde\rho_{a\theta}(\rho_{aa})$ is attained by an information structure in which agents know the common component: $\sigma_{\varepsilon_1} \in [0,\infty]$, $\sigma_{\varepsilon_2} = \sigma_{\varepsilon_3} = 0$, with:
$$\tilde\rho_{a\theta}(\rho_{aa}) = \frac{1 - r\rho_{aa}}{1-r}\sqrt{\rho_{\theta\theta}/\rho_{aa}}.$$

Moreover, for all $(\rho_{aa}, \rho_{a\theta})$ such that $\rho_{a\theta} \in [0, \tilde\rho_{a\theta}(\rho_{aa})]$, there exists an information structure of the form $s_i$ that achieves such correlations as a Bayes Nash equilibrium.
Thus, the parametrization of $\tilde\rho_{a\theta}$ corresponds to the union of the equilibria that can be achieved with information structures: (i) in which each agent receives a noisy signal on his own payoff state $\theta_i$, (ii) in which each agent knows at least his own payoff state $\theta_i$, and (iii) in which each agent knows the common component $\bar\theta$. In Figure 5 we illustrate $\tilde\rho_{a\theta}(\rho_{aa})$ for three distinct values of $r$. The graphs illustrate some of the qualitative properties of $\tilde\rho_{a\theta}$ as established by Proposition 15. First, we can see that the boundary $\tilde\rho_{a\theta}$ is strictly below the frontier of all Bayes correlated equilibria, which implies that the set of three-dimensional information structures given by (44) is indeed restrictive. Second, we can see that the boundary $\tilde\rho_{a\theta}$ attains the frontier of the BCE at exactly three points: (i) the complete information equilibrium, (ii) the equilibrium in which each agent knows $\theta_i$, and (iii) the equilibrium in which each agent only knows $\bar\theta$. It is also worth highlighting that the boundary $\tilde\rho_{a\theta}$ is discontinuous for $r < 0$. This result emphasizes the subtle role that the information structure has on equilibrium outcomes.
Figure 5: Boundary of the set of feasible correlations for information structures s_i (ρ_θθ = 1/2), for r = 0.5, r = 0, and r = −1.
Idiosyncratic Interaction and Known Payoff State We briefly discuss how our results extend to an environment in which the interaction of the agents occurs at a resolution higher than the aggregate interaction. Specifically, we discuss a model of pairwise matching in which each agent i knows his own payoff state θ_i but may still be uncertain about the payoff state of his matched partner, θ_j. This specification, and variants thereof, have appeared prominently in the literature, see for example Lorenzoni (2010) and Angeletos and La'O (2013). As each agent i is assumed to know his own payoff state θ_i, there is no fundamental uncertainty (about the payoffs) anymore. Rather, for each agent i, the residual uncertainty is all strategic uncertainty, namely the actions expected to be taken by the other agents. Thus, as a by-product, the subsequent discussion also informs us about the distinct contribution of strategic uncertainty to the aggregate volatility in the absence of fundamental uncertainty.9

Given that each agent i is assumed to know his payoff state, it is useful to introduce the following change of variables:

ã_i ≜ a_i − θ_i,   Ã ≜ A − θ,   (49)

and thus analyze the joint distribution of (θ_i, θ, ã_i, Ã). Now, by necessity, the residual action ã_i represents the response to the strategic uncertainty. Even before we consider pairwise interaction, we observe that the informational condition of knowing the own payoff state θ_i has important implications for aggregate interaction. After the above change of variables, the best response condition in terms of ã_i under aggregate interaction in any Bayes Nash equilibrium reduces to:
ã_i = rE[θ | θ_i, I_i] + rE[Ã | θ_i, I_i],   (50)
where I_i is any information the agents receive beyond knowing θ_i. After all, using the fact that agent i already knows θ_i, any signal in I_i can be turned into a function of the remaining uncertainty, as we can filter θ_i out. We observe that the resulting best response condition is isomorphic to one where the payoff environment is a pure common value environment, and where the payoff state and the average action receive the same weight in the best response condition of the individual agent. This already gives a hint of the strong restrictions on behavior that arise from imposing that agents know their own payoff state in a model with aggregate interactions only.
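The derivation of (50) is a one-line substitution; under the reconstruction above it reads:

```latex
\begin{aligned}
a_i &= \mathbb{E}[\theta_i + rA \mid \theta_i, I_i]
     = \theta_i + r\,\mathbb{E}[A \mid \theta_i, I_i], \\
\tilde{a}_i &= a_i - \theta_i
     = r\,\mathbb{E}[\tilde{A} + \theta \mid \theta_i, I_i]
     = r\,\mathbb{E}[\theta \mid \theta_i, I_i]
     + r\,\mathbb{E}[\tilde{A} \mid \theta_i, I_i].
\end{aligned}
```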
We now look at a simple model of pairwise interaction as in Angeletos and La'O (2013). We assume there is some matching between i and j and that agents interact with their partner as well as with the aggregate population. Thus, the first order condition of agent i who is matched with j is given by:

a_i = E[θ_i | θ_i, I_i] + r_a E[a_j | θ_i, I_i] + r_A E[A | θ_i, I_i].

9 We thank our discussant, Marios Angeletos, who emphasized the importance of the distinct contribution of each source of uncertainty to the aggregate volatility.
If we make the same change of variables as before, we have that, expressed in the variables {ã_i}_{i∈[0,1]}, the first order conditions are given by:

ã_i = r_a E[θ_j | θ_i, I_i] + r_a E[ã_j | θ_i, I_i] + r_A E[θ | θ_i, I_i] + r_A E[Ã | θ_i, I_i].   (51)
Thus, we have a similar model to the one we have been studying so far, but with some differences. First, agents have some prior information on θ_j and θ, which comes from knowing θ_i. Second, the sizes of the shocks are scaled by r_a and r_A in the first order conditions. Besides these differences, a model with heterogeneous interaction in which each agent knows his own payoff state is almost identical to our original model. Namely, each agent's uncertainty is still two-dimensional, with a common and an idiosyncratic component (equal to θ and θ_j, respectively). Thus, we see that even if we were interested in strategic uncertainty in the absence of fundamental uncertainty, the same basic intuitions and ideas still apply. A key factor to consider in the information of agents would be to see how the signals of agents lead to confusion between θ_j and θ. Moreover, the confusion would lead to overreaction and underreaction to some of these fundamentals.
To be more precise, let us consider a model with pairwise interactions as described above in which each agent receives one additional signal beyond the payoff state θ_i. We could in particular consider a noise-free information structure and assume that agents receive a signal of the form:

s_i = λΔθ_j + (1 − |λ|)θ, with λ ∈ [−1, 1].

It is not difficult to see that in such a model one could still have an unbounded aggregate volatility as the variance of the idiosyncratic payoff state (σ_Δθ²) goes to infinity. The argument would be essentially the same as before. As σ_Δθ goes to infinity we can increase the weight that the signal gives to Δθ_j, and increase the response to θ without bounds. Notably, all uncertainty in such a model would be purely strategic, as agents know their own payoff state. Moreover, all aggregate volatility would be driven by the aggregate fundamental, as by construction there is no common value noise in the signal. Thus, we would have equilibria with only strategic uncertainty, in which the aggregate volatility is driven only by fundamentals, but the aggregate volatility increases without bounds as the variance of the idiosyncratic payoff state goes to infinity.
6.2
Information Structure and Information Sharing
We end this section by developing some implications of the present analysis for the large literature on information sharing in oligopoly. The problem of information sharing among firms was pioneered in work by Novshek and Sonnenschein (1982), Clarke (1983) and Vives (1984), who examined to what extent competing firms have an incentive to share information in an uncertain environment. In this strand of literature, which is presented in a very general framework by Raith (1996) and surveyed in Vives (1999), each firm receives a private signal about a source of uncertainty, say a demand or cost shock. The central question then is under which conditions the firms have an incentive to commit ex ante to an agreement to share information in some form. Clarke (1983) showed that in the Cournot oligopoly with uncertainty about a common parameter of demand, the firms will never find it optimal to share information. The complete absence of information sharing is surprising, as it would be socially optimal to share the information, and one might have conjectured that the oligopolistic firms partially appropriate the social gains of information. In fact, in subsequent work, the strong result of zero information sharing was shown to rely on constant marginal cost; with a quadratic cost of production, it was shown that either zero or complete information sharing can be optimal, where the information sharing result appears when the cost of production is sufficiently convex for each firm, and hence information becomes more valuable; see Kirby (1988) for common value environments, and subsequently Vives (1990b) and Raith (1996) for general interdependent environments.
In the above cited work, the individual firms receive a private, idiosyncratic and noisy signal x_i about the state of demand θ. Each firm can commit to transmit the information, noisy or noiseless, to an intermediary, such as a trade association, which aggregates the information. The intermediary then discloses the aggregate information to the firms. Importantly, while the literature did consider the possibility of noisy or noiseless transmission of the private information, it a priori restricted the disclosure policy to be noiseless, which implicitly restricted the information policy to disclose the same, common signal to all the firms. The present analysis of the Bayes correlated equilibria and the impact of the information structures on the feasible correlation structure allows us to substantially modify these earlier insights.
To be precise, in a competitive equilibrium with a continuum of producers, each one of them with a quadratic cost of production c(a_i) = a_i²/2, and facing a linear inverse demand function:

p(θ, A) = θ + rA,   (52)

the resulting best response function is given by (1), analyzed by Guesnerie (1992) in the competitive setting, and by Vives (1988) as the limit of large, but finite, Cournot oligopoly markets. For the purpose of the present discussion, we restrict our attention to the pure common value environment, θ_i = θ, but the argument extends naturally to the general interdependent payoff environment.
In the preceding analysis, we learned that the information structure given by the idiosyncratic signal:

s¹_i = θ_i + ε_i,

with the variance of the noise term given by σ_ε_i², restricts the set of feasible equilibrium correlation coefficients (ρ_aθ, ρ_aa). With interdependent values, this leads to Bayes Nash equilibrium coefficients (ρ_aθ, ρ_aa) in the interior of the set of Bayes correlated equilibrium coefficients. With pure common values, the idiosyncratic signals actually trace out one of the boundaries of the BCE coefficients, as illustrated in Figure 6. In any case, for a given variance σ_ε_i² of the noise term, the associated Bayes Nash equilibrium generates a pair of equilibrium coefficients, depicted by (ρ_aθ, ρ_aa). Now, if the firms were to agree to share their information, and have the shared information disclosed in the form of a public, but possibly noisy signal: s²_i = θ_i + ε, with the variance of the noise term given by σ_ε², then this allows them to extend the set of feasible equilibrium coefficients, starting from (ρ_aθ, ρ_aa) all the way to (1, 1), as illustrated in Figure 6. After all, the idiosyncratic noise in the initial signal s¹_i can be filtered out completely with a continuum of firms, and thus the trade association could possibly disclose the (true) realized common demand θ.10 But importantly, any signal s²_i with a noise term common to all agents stays below the boundary of the BCE feasible correlation coefficients. Now, the question arises as to whether the revenue optimal disclosure policy is at all affected by the restrictions imposed through the common error in the disclosure policy.
The answer follows immediately from the geometry of the iso-profit function in the space of correlation coefficients (ρ_aa, ρ_aθ). The iso-profit curve ρ_aθ(ρ_aa), defined implicitly by a constant expected profit π̄ of the representative firm:

E[a_i p − a_i²/2] = π̄,

is linear in ρ_aa, and the slope is determined by the responsiveness r of the price to demand. By contrast, the maximal correlation achievable with disclosure of a common signal, denoted by ρ^c_aθ(ρ_aa), is convex in ρ_aa, whereas the maximal correlation achievable with disclosure of an idiosyncratic

10 In Section 8.4 of Vives (1999), the design of the optimal information sharing in a large market with a continuum of agents is indeed phrased as the problem of a mediator who elicits and then transmits the collected information to the agents. The analysis is thus close to the present perspective suggested by the Bayes correlated equilibrium, but it also restricts the transmission to public signals, and hence leads to the same conclusion as the previously mentioned literature.
Figure 6: Information Sharing under Public and Private Disclosure Rules
signal, given by:

ρ^i_aθ(ρ_aa) ≜ √ρ_aa,

is concave in ρ_aa. The following result now follows immediately from the concavity and convexity of the upper bounds, ρ^i_aθ(ρ_aa) and ρ^c_aθ(ρ_aa), respectively.
Proposition 16 (Disclosure of Information)
1. The optimal disclosure policy of a public signal is either zero or complete disclosure.
2. The optimal disclosure policy of a private signal depends on r and can be noisy.
3. The optimal (unrestricted) disclosure policy uses a private signal for sufficiently large r and σ_ε_i².
The proof of Proposition 16 is entirely straightforward. The iso-profit curve generates a linear trade-off between the correlation of the individual supply decision a_i with the state of demand θ and its correlation with the supply of the other firms. A better match with the level of aggregate demand increases profit, but a better match with the supply of the other firms decreases the profit. With the public disclosure of a noisy signal s_i, the convexity in the trade-off suggests either zero disclosure or complete disclosure of the aggregate information. This is because the public disclosure initially favors the correlation in the supply with the competing firms, hence the convexity of ρ^c_aθ(ρ_aa). With the private disclosure, the trade-off is resolved in favor of a better match with the state of demand, without an undue increase in the correlation of the supply decisions. Thus, we find that the industry wide preferred disclosure policy is frequently a partial disclosure of information, but one which is noisy and idiosyncratic, as opposed to the bang-bang like solution that was previously obtained in the literature under the restriction to public disclosure policies. Proposition 16 thus establishes that a common and hence perfectly correlated disclosure policy is (always) weakly and (sometimes) strictly dominated by a private and hence imperfectly correlated disclosure policy.
In Bergemann and Morris (2013b), we derived the firm optimal Bayes correlated equilibrium as the result of an optimization problem over the set of feasible Bayes correlated equilibrium moments, see Proposition 7. By contrast, the result here directly uses the geometry of the profits as well as the restrictions imposed by the information structures on the set of feasible Bayes Nash equilibria. It thus emphasizes the importance of the information structures for the set of attainable Bayes Nash equilibrium allocations, and by extension, the subtle interplay between information and incentives.
7
Discussion
In this section we review the main results of the paper and at the same time provide a broad connection to other papers in the literature. We explain how our results connect to the results found in different papers, and how one can understand them in a common framework.
The Drivers of Responsiveness to Fundamentals A literature in which the responsiveness to fundamentals has been widely studied is the study of nominal rigidities driven by informational frictions. Lucas (1972) established that informational frictions can cause an economy to respond to monetary shocks that would not have real effects under complete information. In the model, agents are subject to real and monetary shocks, but only observe prices. Since the unique signal they observe does not allow them to disentangle the two shocks, the demand of agents (in real terms) responds to monetary shocks. Thus, even though under complete information there would be no "real" effects of monetary shocks, the agents confound the shocks, which allows monetary shocks to have real effects.
In more recent contributions, Hellwig and Venkateswaran (2009) and Mackowiak and Wiederholt (2009) both present dynamic models in which the intertemporal dynamics are purely informational and in which the maximization problem in each period is strategically equivalent to the one in the present model with aggregate interactions only (r_a = 0).

The informational assumptions in Hellwig and Venkateswaran (2009) are derived from a microfounded model. Each agent observes a signal that is a result of his past transactions, and thus a result of past actions and past shocks. The persistence of shocks means that such a signal serves as a predictor of future shocks. The main feature of the signal is the property that it is composed of idiosyncratic and common shocks, and hence the informational content of the signal is similar to the one-dimensional signal analyzed in Section 3. They suggest that in empirically relevant information structures the idiosyncratic shocks might be much larger than the common shocks, and thus agents have very little information about the realization of the common shocks. Thus, even when agents have little information about the common shocks, they respond strongly to the idiosyncratic shock in order to be able to respond to the common shock at all.
By contrast, the informational assumptions in Mackowiak and Wiederholt (2009) are derived from a model of rational inattention. Here the agents receive separate signals on the idiosyncratic and common parts of their shocks. In consequence, the response to each signal can be computed independently, and the responsiveness to each signal is proportional to the quality of information. However, the amount of noise on each signal is chosen optimally by the agents, subject to a global informational constraint motivated by limited attention. Again, motivated by empirical observations, it is assumed that the idiosyncratic shocks are much larger than the common shocks. This in turn results in agents choosing to have better information on the idiosyncratic shocks rather than on the common shocks. This implies that the response to the common shock is sluggish, as the information on it is poor.

Both papers reach contrasting results on the response of the economy to common shocks, although the underlying assumptions are quite similar. Note that not only is the strategic environment similar, but also the fact that agents have little information on common value shocks. Their contrasting analysis thus indicates the extent to which the response to the fundamentals may be driven by the informational composition of the signals.
The Role of Heterogeneous Interactions and the Dimensions of Uncertainty In a recent paper, Angeletos and La'O (2013) demonstrate how informational frictions can generate aggregate fluctuations in an economy even without any aggregate uncertainty. They consider a neoclassical economy with trade, which in its reduced form can be interpreted with linear best responses as in (51) with r_A = 0 and ρ_θθ = 0. In their environment, each agent is also assumed to know his own payoff state. In such a model, but with aggregate rather than pairwise interaction, there would be a unique equilibrium, as the agents would face no uncertainty at all. Yet, given the pairwise interaction, there is a dimension of uncertainty, which enables aggregate fluctuations to arise in the absence of aggregate payoff uncertainty. If each agent observes the payoff state of his partner through a noisy signal with a common error term, then the aggregate volatility will not be equal to 0, as each agent responds to the same common error term in the individual signal.
Our analysis suggests that the emergence of aggregate volatility from the pairwise trading does not hinge on any particular interaction structure. The only necessary condition for the aggregate volatility to emerge is the presence of some residual uncertainty. Moreover, even though one might be skeptical about fluctuations arising merely from common noise error terms, it should be clear that the same mechanism could explain how the response to a small aggregate fundamental could be amplified. Thus, it is not necessary to have strong aggregate fluctuations in the fundamentals or strong aggregate interactions to generate large aggregate fluctuations; these can arise from strong local interactions, as in the model of Angeletos and La'O (2013) with pairwise interactions. More generally, the present analysis also demonstrates that different sources of uncertainty and their impact on the volatility of outcomes cannot be analyzed independently in the absence of complete information.
In this paper we have focused on information structures that are completely exogenous to the actions taken by the players; that is, all the information the agents receive concerns the fundamentals. We could have also considered information structures in which each agent gets a signal that is informative about the average action taken by the other players, and then computed the set of outcomes that are consistent with a rational expectations equilibrium. The set of feasible outcomes that are consistent with a rational expectations equilibrium would constitute a subset of the outcomes when only information on the fundamentals is considered; we leave the characterization of such outcomes as work for future research. A recent contribution by Benhabib, Wang, and Wen (2013) (and the related Benhabib, Wang, and Wen (2012)) considers a model with this kind of information structure. Their model has in its reduced form a linear structure like the one we have analyzed, yet agents can get a noisy signal on the average action taken by the other players. They show that the equilibria can display non-fundamental uncertainty, which can be thought of as sentiments. From a purely game theoretic perspective, their model is equivalent to our benchmark model with common values, but agents only receive a (possibly noisy) signal on the average action taken by all players.
The Drivers of Co-Movement in Actions One of the first results presented here is the fact that the co-movement in actions is not only driven by fundamentals, but also by the specific information structure that the agents have access to. Of course, under a fixed information structure, in which agents receive possibly multi-dimensional signals, the co-movement in the actions can only change with the interaction parameters. But more generally, changes in the informational environment would also lead to changes in the co-movement of actions. Moreover, we showed that any feasible correlation structure can be rationalized for some information structure, independent of the interaction parameters. The importance of the information structure has been recognized in macroeconomics to offer explanations to open empirical puzzles. For example, Veldkamp and Wolfers (2007) provide an explanation of why there is more co-movement in output across industries than could be predicted by co-movement in TFP shocks and complementarities in production. They suggest that the sectoral industries might be better informed about the common shocks than about the idiosyncratic shocks, which in turn would generate the excess co-movement. They suggest that this information structure might be prevalent since information has a fixed cost of acquisition but no cost of replication, and thus different industries may typically have more information on common shocks than on idiosyncratic shocks.
8
Appendix
The appendix collects the omitted proofs from the main body of the text.
Proof of Proposition 1. From the first order conditions we know that:

a_i = E[rA + θ_i | s_i].

Since the actions of the players must be measurable with respect to s_i, in any linear strategy the actions of the players must be given by a_i = w(λ)s_i + μ, where μ and w(λ) are constants. Thus A = w(λ)(1 − |λ|)θ + μ. Thus, we must have that:

a_i = w(λ)s_i + μ = E[r(w(λ)(1 − |λ|)θ + μ) + θ_i | s_i].

By taking expectations and using the law of iterated expectations, we get:

w(λ)(1 − |λ|)μ_θ + μ = r(w(λ)(1 − |λ|)μ_θ + μ) + μ_θ.

Using that μ_θ = 0, we get that μ = 0. Thus, we know that a_i = w(λ)((1 − |λ|)θ + λΔθ_i) and A = w(λ)(1 − |λ|)θ. Multiplying by a_i we get:

a_i² = E[rAa_i + θ_i a_i | s_i],

and appealing to the law of iterated expectations we get:

w(λ)((1 − r)(1 − |λ|)²ρ_θθ + λ²(1 − ρ_θθ))σ_θ² = ((1 − |λ|)ρ_θθ + λ(1 − ρ_θθ))σ_θ²,

and solving for w(λ) yields the expression in (11).
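As a check on the reconstruction above, the fixed point defining w(λ) can be verified numerically: with a_i = w(λ)s_i and A = w(λ)(1 − λ)θ, the linear projection of rA + θ_i on s_i must return the same coefficient w(λ). A sketch with illustrative parameter values (λ ≥ 0, so the absolute value is dropped):

```python
import numpy as np

# Noise-free signal s_i = (1 - lam)*theta + lam*dtheta_i, with
# var(theta) = rho*sig2 and var(dtheta_i) = (1 - rho)*sig2.
# Parameter values are illustrative.
r, rho, sig2 = 0.3, 0.5, 1.0      # r, rho_thth, sigma_theta^2
lam = 0.4

w = ((1 - lam) * rho + lam * (1 - rho)) / (
    (1 - r) * (1 - lam)**2 * rho + lam**2 * (1 - rho))

# projection coefficient of r*A + theta_i on s_i, with A = w*(1-lam)*theta:
var_s = ((1 - lam)**2 * rho + lam**2 * (1 - rho)) * sig2
cov = (r * w * (1 - lam)**2 * rho + (1 - lam) * rho + lam * (1 - rho)) * sig2
assert np.isclose(w, cov / var_s)   # w reproduces itself: equilibrium
```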
Proof of Proposition 2. First, note that by the first order condition:

a_i = E[rA + θ_i | s_i].

We can directly take expectations over the previous equation and use the law of iterated expectations, in which case we get:

μ_a = μ_θ/(1 − r) = 0,

where the last equality uses μ_θ = 0. We can compute the variance and covariances by using (10) and (11). It is easy to see that:

σ_a² = var(a_i) = w(λ)² var(s_i) = w(λ)²((1 − |λ|)²ρ_θθ + λ²(1 − ρ_θθ))σ_θ²,

thus we get (13). Similarly,

ρ_aa σ_a² = cov(a_i, a_j) = w(λ)² E[((1 − |λ|)θ + λΔθ_i)((1 − |λ|)θ + λΔθ_j)] = w(λ)²(1 − |λ|)²ρ_θθ σ_θ²,

and

ρ_aθ σ_a σ_θ = cov(a_i, θ_i) = w(λ) E[((1 − |λ|)θ + λΔθ_i)θ_i] = w(λ)((1 − |λ|)ρ_θθ + λ(1 − ρ_θθ))σ_θ².
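The three moment formulas can be spot-checked by Monte Carlo simulation of two agents under the noise-free signal; all parameter values below are illustrative:

```python
import numpy as np

# Monte Carlo check of var(a_i), cov(a_i, a_j) and cov(a_i, theta_i)
# under s_i = (1 - lam)*theta + lam*dtheta_i (lam >= 0). Two agents
# suffice to read off the pairwise moments.
rng = np.random.default_rng(0)
r, rho, sig = 0.2, 0.6, 1.0       # illustrative r, rho_thth, sigma_theta
lam = 0.3
n = 1_000_000

theta = rng.normal(0.0, np.sqrt(rho) * sig, n)      # common component
d1 = rng.normal(0.0, np.sqrt(1 - rho) * sig, n)     # dtheta_i
d2 = rng.normal(0.0, np.sqrt(1 - rho) * sig, n)     # dtheta_j

w = ((1 - lam) * rho + lam * (1 - rho)) / (
    (1 - r) * (1 - lam)**2 * rho + lam**2 * (1 - rho))
a1 = w * ((1 - lam) * theta + lam * d1)
a2 = w * ((1 - lam) * theta + lam * d2)

var_a = w**2 * ((1 - lam)**2 * rho + lam**2 * (1 - rho)) * sig**2
cov_aa = w**2 * (1 - lam)**2 * rho * sig**2
cov_ath = w * ((1 - lam) * rho + lam * (1 - rho)) * sig**2

assert abs(np.var(a1) - var_a) < 1e-2
assert abs(np.mean(a1 * a2) - cov_aa) < 1e-2
assert abs(np.mean(a1 * (theta + d1)) - cov_ath) < 1e-2
```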
Proof of Proposition 3. To prove the proposition one can directly solve for λ in terms of ρ_aθ and ρ_aa from (16) and (17). Nevertheless, the result also follows directly from Corollary 2 and Corollary 3. Thus, we prove this proposition directly by proving both corollaries.
Proof of Proposition 4. As this proposition is a corollary of Proposition 7, we relegate this proof to the proof of Proposition 7.

Proof of Corollary 1. This is direct from the proof of Proposition 7, using the change of variables explained in the main text.
Proof of Proposition 5. Given a noise-free equilibrium parametrized by λ, we have that (we analyze the case λ ≥ 0, thus we disregard the absolute value function):

cov(a_i, θ) = w(λ)(1 − λ)ρ_θθ σ_θ²,   cov(a_i, Δθ_i) = w(λ)λ(1 − ρ_θθ)σ_θ².

But note that if λ ≥ λ̄ = (1 − r)/(2 − r), then λ ≥ (1 − r)(1 − λ), and therefore:

cov(a_i, Δθ_i) = λ((1 − λ)ρ_θθ + λ(1 − ρ_θθ))(1 − ρ_θθ)σ_θ² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ)) ≥ (1 − ρ_θθ)σ_θ²,

with strict inequality if λ > λ̄, since λ((1 − λ)ρ_θθ + λ(1 − ρ_θθ)) ≥ (1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ) if and only if λ ≥ (1 − r)(1 − λ). Thus, the response to the idiosyncratic component is greater than in the complete information equilibrium if λ ∈ (λ̄, 1). For the second part we repeat the same argument. Note that if λ ≥ λ̄, then:

cov(a_i, θ) = (1 − λ)((1 − λ)ρ_θθ + λ(1 − ρ_θθ))ρ_θθ σ_θ² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ)) ≤ ρ_θθ σ_θ²/(1 − r),

with strict inequality if λ > λ̄.
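The threshold λ̄ = (1 − r)/(2 − r) in this proof can be verified numerically: the covariance of the action with the idiosyncratic component crosses its complete information value (1 − ρ_θθ)σ_θ² exactly at λ̄. A sketch with illustrative parameter values:

```python
import numpy as np

# cov(a_i, dtheta_i) exceeds the complete-information benchmark
# (1 - rho)*sig2 exactly when lam > lam_bar = (1-r)/(2-r).
# Parameter values are illustrative.
r, rho, sig2 = 0.4, 0.5, 1.0
lam_bar = (1 - r) / (2 - r)

def cov_idio(lam):
    w = ((1 - lam) * rho + lam * (1 - rho)) / (
        (1 - r) * (1 - lam)**2 * rho + lam**2 * (1 - rho))
    return w * lam * (1 - rho) * sig2

assert cov_idio(lam_bar + 0.05) > (1 - rho) * sig2   # over-response
assert cov_idio(lam_bar - 0.05) < (1 - rho) * sig2   # under-response
assert abs(cov_idio(lam_bar) - (1 - rho) * sig2) < 1e-9
```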
Proof of Proposition 6. The comparative statics with respect to the maximum are direct from the envelope theorem. The comparative statics with respect to the argmax are shown by proving that the quantities have a unique maximum, which is interior, and then using the sign of the cross derivatives (the derivative with respect to λ and r). Finally, the last part is proved by comparing the derivatives.

(1.) and (2.) We begin with the individual variance, and using (13) we can write it in terms of λ:

σ_a² = ((1 − λ)ρ_θθ + λ(1 − ρ_θθ))²((1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))σ_θ² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))²
     = (1 + yx)²(1 + x²)ρ_θθ σ_θ² / ((1 − r) + x²)²,   (53)

where

x ≜ √(1 − ρ_θθ) λ / (√ρ_θθ (1 − λ)),   y ≜ √(1 − ρ_θθ)/√ρ_θθ.

Note that x is strictly increasing in λ, and if λ ∈ [0, 1] then x ∈ [0, ∞), and thus maximizing with respect to x ∈ [0, ∞) is equivalent to maximizing with respect to λ ∈ [0, 1]. Finding the derivative we get:

∂σ_a²/∂x = −2(xy + 1)(x³ + (2r − 1)yx² + (r + 1)x − (1 − r)y)ρ_θθ σ_θ² / (x² + 1 − r)³.

It is easy to see that ∂σ_a²/∂x is positive at x = 0 and negative if we take x large enough, and thus the maximum must be at some x ∈ (0, ∞). We would like to show that the polynomial:

x³ + (2r − 1)yx² + (r + 1)x − (1 − r)y

has a unique root in x ∈ (0, ∞). If r > 1/2 or r < −1, we know it has a unique root in x ∈ (0, ∞). For r ∈ [−1, 1/2] we define the discriminant of the cubic equation:

Δ = 18abcd − 4b³d + b²c² − 4ac³ − 27a²d².

We know that if Δ < 0 then the polynomial has a unique real root. Replacing by the respective values of the cubic polynomial we get:

Δ = 4y⁴(2r − 1)³(1 − r) + y²((2r − 1)²(1 + r)² − 18(1 − r²)(2r − 1) − 27(1 − r)²) − 4(1 + r)³.

Using the fact that for r ∈ [−1, 1/2] we have (2r − 1) ≤ 0 and 1 + r ≥ 0, we know that the term with y⁴ and the term without y are negative. We just need to check the term with y², but this is also negative for r ∈ [−1, 1/2]. Thus Δ < 0, and thus for r ∈ [−1, 1/2] the polynomial has a unique root.

Thus, we have that there exists a unique λ that maximizes σ_a². Finally, we have that:

∂σ_a²/∂r = 2 (1 − λ)²ρ_θθ σ_a² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ)).

Note that

∂/∂λ [(1 − λ)²ρ_θθ / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))] < 0,

and thus at the maximum:

∂²σ_a²/∂r∂λ = 2σ_a² ∂/∂λ [(1 − λ)²ρ_θθ / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))] < 0,

and thus argmax σ_a² is decreasing in r. Finally, we know that λ̄ = (1 − r)/(2 − r), and thus x̄ = y(1 − r); thus we have that:

∂σ_a²/∂x |_{x = x̄} = −2(y²(1 − r) + 1)(y³(1 − r)²r + y(1 − r)r)ρ_θθ σ_θ² / (y²(1 − r)² + 1 − r)³;

thus, argmax σ_a² ≥ λ̄ = (1 − r)/(2 − r) if and only if r ≤ 0.
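The uniqueness argument for the cubic can be spot-checked numerically: for r ∈ [−1, 1/2] the discriminant should be negative and the single real root positive. The value of y below is illustrative.

```python
import numpy as np

# Cubic from the proof: x^3 + (2r-1) y x^2 + (r+1) x - (1-r) y.
# Check: negative discriminant (unique real root) and positivity of
# that root for sampled r in [-1, 1/2]. y > 0 is illustrative.
y = 1.7
for r in np.linspace(-1.0, 0.5, 31):
    a, b, c, d = 1.0, (2 * r - 1) * y, r + 1, -(1 - r) * y
    disc = (18 * a * b * c * d - 4 * b**3 * d + b**2 * c**2
            - 4 * a * c**3 - 27 * a**2 * d**2)
    assert disc < 0                      # unique real root
    roots = np.roots([a, b, c, d])
    real = roots[np.abs(roots.imag) < 1e-7].real
    assert len(real) == 1 and real[0] > 0
```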
Next, we consider the aggregate variance ρ_aa σ_a², and write it in terms of λ:

ρ_aa σ_a² = ((1 − λ)ρ_θθ + λ(1 − ρ_θθ))²(1 − λ)²ρ_θθ σ_θ² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))²
          = (1 + yx)²ρ_θθ σ_θ² / ((1 − r) + x²)²,   (54)

where x and y are defined as in (53). Maximizing with respect to x ∈ [0, ∞) is equivalent to maximizing with respect to λ ∈ [0, 1]. Finding the derivative we get:

∂(ρ_aa σ_a²)/∂x = −2(xy + 1)(2x + (x² + r − 1)y)ρ_θθ σ_θ² / (x² + 1 − r)³.   (55)

Again, we have that 2x + (x² + r − 1)y has a unique root in (0, ∞). Thus, we have that there exists a unique λ that maximizes ρ_aa σ_a². Finally, we have that:

∂(ρ_aa σ_a²)/∂r = 2 (1 − λ)²ρ_θθ ρ_aa σ_a² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ)).

Note that, as before,

∂/∂λ [(1 − λ)²ρ_θθ / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))] < 0,

and thus at the maximum ∂²(ρ_aa σ_a²)/∂r∂λ < 0, and thus argmax ρ_aa σ_a² is decreasing in r. Finally, we know that λ̄ = (1 − r)/(2 − r), and thus x̄ = y(1 − r); there we have that:

∂(ρ_aa σ_a²)/∂x |_{x = x̄} = −2(y²(1 − r) + 1)(y(1 − r) + y³(1 − r)²)ρ_θθ σ_θ² / (y²(1 − r)² + 1 − r)³ ≤ 0;

thus, argmax ρ_aa σ_a² ≤ λ̄ = (1 − r)/(2 − r).
Finally, we consider the dispersion, (1 − ρ_aa)σ_a², expressed in terms of λ:

(1 − ρ_aa)σ_a² = ((1 − λ)ρ_θθ + λ(1 − ρ_θθ))²λ²(1 − ρ_θθ)σ_θ² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))²
              = (1 + yx)²x²ρ_θθ σ_θ² / ((1 − r) + x²)²,

where x and y are defined in (53). As before, maximizing with respect to x ∈ [0, ∞) is equivalent to maximizing with respect to λ ∈ [0, 1]. Finding the derivative we get:

∂((1 − ρ_aa)σ_a²)/∂x = −2x(xy + 1)(x² + 2(r − 1)yx + r − 1)ρ_θθ σ_θ² / (x² + 1 − r)³.

Again, we have that x² + 2(r − 1)yx + r − 1 has a unique root in (0, ∞). Thus, there exists a unique λ that maximizes (1 − ρ_aa)σ_a². Finally, we have that:

∂((1 − ρ_aa)σ_a²)/∂r = 2 (1 − λ)²ρ_θθ (1 − ρ_aa)σ_a² / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ)).

Note that

∂/∂λ [(1 − λ)²ρ_θθ / ((1 − r)(1 − λ)²ρ_θθ + λ²(1 − ρ_θθ))] < 0,

and thus at the maximum ∂²((1 − ρ_aa)σ_a²)/∂r∂λ < 0, and thus argmax (1 − ρ_aa)σ_a² is decreasing in r. Finally, we know that λ̄ = (1 − r)/(2 − r), and thus at x̄ = y(1 − r) we have that:

∂((1 − ρ_aa)σ_a²)/∂x |_{x = x̄} = 2y(1 − r)(y²(1 − r) + 1)(y²(1 − r)² + (1 − r))ρ_θθ σ_θ² / (y²(1 − r)² + 1 − r)³ ≥ 0;

thus, argmax (1 − ρ_aa)σ_a² ≥ λ̄ = (1 − r)/(2 − r).
(3.) Finally, we want to show that argmax (1 − ρ_aa)σ_a² > argmax σ_a² and argmax σ_a² > argmax ρ_aa σ_a². These inequalities follow from comparing the derivatives of (1 − ρ_aa)σ_a², σ_a² and ρ_aa σ_a² with respect to λ. Since the derivatives satisfy the previous inequalities, the maxima must also satisfy the same inequalities.
Proof of Proposition 7. We first solve for max_λ {ρ_aa σ_a²}. By setting (55) equal to 0, we have that the aggregate volatility is maximized at:

x* = (√(1 + y²(1 − r)) − 1)/y.

In terms of the original variables this can be written as follows:

λ* = (√(ρ_θθ − rρ_θθ + rρ_θθ²) − ρ_θθ) / (√(ρ_θθ − rρ_θθ + rρ_θθ²) + 1 − 2ρ_θθ).

Substituting the solution in (54) and using the definitions of x and y, we get that the maximum volatility is equal to:

(1 − ρ_θθ)²σ_θ² / (4(√((1 − r)(1 − ρ_θθ) + ρ_θθ) − √ρ_θθ)²).

Using the definitions of σ_Δθ² = (1 − ρ_θθ)σ_θ² and σ_θ̄² = ρ_θθ σ_θ², we get (29). Note that by imposing r = 0 we also get (25) and (26).

It follows from (29) that:

lim_{σ_θ̄ → 0} max{ρ_aa σ_a²} = σ_Δθ²/(4(1 − r)).

It is also immediate that σ_θ̄²/(1 − r)² is the aggregate volatility in the complete information equilibrium. Thus, we are only left with proving that:

lim_{σ_Δθ → 0} max{ρ_aa σ_a²} = σ_θ̄²/(1 − r)².

The limit can be easily calculated using L'Hôpital's rule. That is, just note that as σ_Δθ → 0 we have that:

4(√(σ_θ̄² + (1 − r)σ_Δθ²) − σ_θ̄)² = (1 − r)²σ_Δθ⁴/σ_θ̄² + o(σ_Δθ⁶),

and hence we get the result. Finally, for completeness, it is worth noting that (27) is trivial. Thus, we have also proven Proposition 4.
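The closed-form maximum can be confirmed by brute-force maximization over λ of the reconstructed aggregate volatility w(λ)²(1 − λ)²ρ_θθ σ_θ²; the parameter values below are illustrative:

```python
import numpy as np

# Maximize rho_aa*sigma_a^2 = w(lam)^2 (1-lam)^2 rho sig2 over lam and
# compare with the closed-form maximum
#   sig_d^4 / (4 (sqrt(sig_c^2 + (1-r) sig_d^2) - sig_c)^2),
# where sig_c^2 = rho*sig2 (common) and sig_d^2 = (1-rho)*sig2
# (idiosyncratic). Illustrative parameters:
r, rho, sig2 = 0.3, 0.4, 1.0

def agg_vol(lam):
    w = ((1 - lam) * rho + lam * (1 - rho)) / (
        (1 - r) * (1 - lam)**2 * rho + lam**2 * (1 - rho))
    return w**2 * (1 - lam)**2 * rho * sig2

lams = np.linspace(0.0, 0.999, 200001)
num_max = agg_vol(lams).max()

sig_c2, sig_d2 = rho * sig2, (1 - rho) * sig2
closed = sig_d2**2 / (4 * (np.sqrt(sig_c2 + (1 - r) * sig_d2)
                           - np.sqrt(sig_c2))**2)
assert abs(num_max - closed) < 1e-5
```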
Proof of Lemma 1. We need to prove that, given the assumption of symmetry, the parameters $(\mu_a,\sigma_a,\rho_{aa},\rho_{a\theta},\rho_{a\bar\theta})$ are sufficient to characterize the distribution of the random variables $(\theta_i,\bar\theta,a_i,A)$. Clearly, we have that $\mu_a=\mu_A$, as it follows from the law of iterated expectations.

By the previous definition (and decomposition) of the idiosyncratic state $\Delta\theta_i$, we observe that the following covariances all agree: $\operatorname{cov}(a_i,\bar\theta)=\operatorname{cov}(A,\theta_i)=\operatorname{cov}(A,\bar\theta)=\operatorname{cov}(a_i,\theta_j)$. This can be easily seen as follows:
\[
\mathbb E[\theta_i A] = \mathbb E[\bar\theta A] + \mathbb E[\Delta\theta_i A] = \mathbb E[\bar\theta A] + \mathbb E\big[A\,\mathbb E[\Delta\theta_i\mid A]\big] = \mathbb E[\bar\theta A],
\]
where we just use the law of iterated expectations and the fact that the expected value of an idiosyncratic variable conditioned on an aggregate variable must be $0$, so that $\mathbb E[\Delta\theta_i\mid A]=0$. Thus:
\[
\operatorname{cov}(a_i,\bar\theta)=\operatorname{cov}(A,\theta_i)=\operatorname{cov}(A,\bar\theta)=\operatorname{cov}(a_i,\theta_j)=\mathbb E[a_i\bar\theta]-\mu_a\mu_\theta=\rho_{a\bar\theta}\sigma_a\sigma_\theta.
\]
Similarly, since we consider a symmetric Bayes correlated equilibrium, the covariance of the actions of any two individuals, $a_i$ and $a_j$, which is denoted by $\rho_{aa}\sigma_a^2$, is equal to the aggregate variance. Once again, this can be easily seen as follows:
\[
\mathbb E[a_ia_j] = \mathbb E[A^2]+\mathbb E[A\,\Delta a_j]+\mathbb E[\Delta a_i\,A]+\mathbb E[\Delta a_i\,\Delta a_j]=\mathbb E[A^2],
\]
where in this case we need to use that the equilibrium is symmetric and thus $\mathbb E[\Delta a_i\,\Delta a_j]=0$. Thus, we have
\[
\sigma_A^2=\operatorname{cov}(a_i,a_j)=\operatorname{cov}(A,a_i)=\rho_{aa}\sigma_a^2.
\]
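The covariance identities in Lemma 1 can be illustrated by simulation. The sketch below draws a symmetric environment with an arbitrary noisy symmetric action rule (all coefficients are illustrative choices of ours) and checks that $\operatorname{cov}(a_i,\bar\theta)$, $\operatorname{cov}(A,\theta_i)$ and $\operatorname{cov}(A,\bar\theta)$ agree up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_agents = 200_000, 200
# Symmetric environment: theta_i = common + idiosyncratic (names are ours).
common = rng.normal(0.0, 1.0, size=(n_draws, 1))
idio = rng.normal(0.0, 1.0, size=(n_draws, n_agents))
theta = common + idio
# A symmetric noisy action rule: a_i loads on own state and the common state.
noise = rng.normal(0.0, 0.5, size=(n_draws, n_agents))
a = 0.7 * theta + 0.4 * common + noise
A = a.mean(axis=1)  # aggregate action

def cov(x, y):
    return np.mean(x * y) - np.mean(x) * np.mean(y)

# Lemma 1: cov(a_i, theta_bar) = cov(A, theta_i) = cov(A, theta_bar).
c1 = cov(a[:, 0], common[:, 0])
c2 = cov(A, theta[:, 0])
c3 = cov(A, common[:, 0])
assert abs(c1 - c3) < 0.03 and abs(c2 - c3) < 0.03
```

With a finite number of agents the equalities hold up to terms of order $1/n_{\text{agents}}$ plus sampling error, which is why the assertions use a loose tolerance.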
Proof of Proposition 8. The moment equalities (1) and (2) were established in (34) and (35). Thus we proceed to verify that the inequality constraints (3) are necessary and sufficient to guarantee that the matrix $V$ is positive semi-definite.

Here we express the equilibrium conditions, by a change of variables, in terms of different variables, which facilitates the calculation. Let:
\[
M \triangleq \begin{pmatrix} 1 & -1 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & -1\\ 0 & 0 & 0 & 1\end{pmatrix}.
\]
Thus, we have that:
\[
\begin{pmatrix}\Delta\theta_i\\ \bar\theta\\ \Delta a_i\\ A\end{pmatrix} = M\begin{pmatrix}\theta_i\\ \bar\theta\\ a_i\\ A\end{pmatrix} \sim N\!\left(M\begin{pmatrix}\mu_\theta\\ \mu_\theta\\ \mu_a\\ \mu_a\end{pmatrix},\ MVM'\right),
\]
where
\[
V^\star \triangleq MVM' = \begin{pmatrix}
(1-\rho_{\theta\theta})\sigma_\theta^2 & 0 & (\rho_{a\theta}-\rho_{a\bar\theta})\sigma_a\sigma_\theta & 0\\
0 & \rho_{\theta\theta}\sigma_\theta^2 & 0 & \rho_{a\bar\theta}\sigma_a\sigma_\theta\\
(\rho_{a\theta}-\rho_{a\bar\theta})\sigma_a\sigma_\theta & 0 & (1-\rho_{aa})\sigma_a^2 & 0\\
0 & \rho_{a\bar\theta}\sigma_a\sigma_\theta & 0 & \rho_{aa}\sigma_a^2
\end{pmatrix}. \tag{56}
\]
We use $V^\star$ to denote the variance/covariance matrix expressed in terms of $(\Delta\theta_i,\bar\theta,\Delta a_i,A)$. It is easy to verify that $V^\star$ is positive semi-definite if and only if the inequality conditions (3) are satisfied. To check this it is sufficient to note that the leading principal minors are non-negative if and only if these conditions are satisfied, and thus $V^\star$ is positive semi-definite if and only if these conditions are satisfied.
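The equivalence between the two inequality conditions and positive semi-definiteness of $V^\star$ can be spot-checked numerically. The sketch below builds $V^\star$ as reconstructed in (56) and compares an eigenvalue test against the two inequalities on random parameter draws; the function names and grid are ours:

```python
import numpy as np

def v_star(rho_aa, rho_atheta, rho_abar, rho_tt, sig_a=1.0, sig_t=1.0):
    # Covariance matrix of (Delta theta_i, theta_bar, Delta a_i, A),
    # following our reading of (56).
    d = (rho_atheta - rho_abar) * sig_a * sig_t
    c = rho_abar * sig_a * sig_t
    return np.array([
        [(1 - rho_tt) * sig_t**2, 0.0, d, 0.0],
        [0.0, rho_tt * sig_t**2, 0.0, c],
        [d, 0.0, (1 - rho_aa) * sig_a**2, 0.0],
        [0.0, c, 0.0, rho_aa * sig_a**2],
    ])

def inequalities_hold(rho_aa, rho_atheta, rho_abar, rho_tt):
    # The two inequality constraints (3) in our reading.
    return (rho_aa * rho_tt - rho_abar**2 >= 0 and
            (1 - rho_aa) * (1 - rho_tt) - (rho_atheta - rho_abar)**2 >= 0)

rng = np.random.default_rng(1)
for _ in range(2000):
    rho_aa, rho_tt = rng.uniform(0, 1, size=2)
    rho_atheta, rho_abar = rng.uniform(-1, 1, size=2)
    psd = np.linalg.eigvalsh(v_star(rho_aa, rho_atheta, rho_abar, rho_tt)).min() >= -1e-9
    assert psd == inequalities_hold(rho_aa, rho_atheta, rho_abar, rho_tt)
```

The check works because, after permuting coordinates, $V^\star$ is block diagonal with two $2\times 2$ blocks, one per inequality.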
Proof of Proposition 9. ($\Leftarrow$) We first prove that if the variables $(\theta_i,\bar\theta,a_i,A)$ form a BNE for some information structure $I_i$ (and associated signals), then the variables $(\theta_i,\bar\theta,a_i,A)$ also form a BCE. Consider the case in which agents receive normally distributed signals through the information structure $I_i$, which by minor abuse of notation also serves as conditioning event. Then in any BNE of the game, we have that the actions of the agents are given by:
\[
a_i = r\,\mathbb E[A\mid I_i] + \mathbb E[\theta_i\mid I_i],\quad \forall i,\ \forall I_i, \tag{57}
\]
and since the information is normally distributed, the variables $(\theta_i,\bar\theta,a_i,A)$ are jointly normal as well. By taking the expectation of (57) conditional on the information set $I_i' = \{I_i, a_i\}$ we get:
\[
\mathbb E[a_i\mid I_i,a_i] = a_i = r\,\mathbb E\big[\mathbb E[A\mid I_i]\mid I_i,a_i\big] + \mathbb E\big[\mathbb E[\theta_i\mid I_i]\mid I_i,a_i\big] = r\,\mathbb E[A\mid I_i,a_i] + \mathbb E[\theta_i\mid I_i,a_i]. \tag{58}
\]
In other words, agents know the recommended action they are supposed to take, and thus we can assume that the agents condition on their own actions. By taking expectations of (58) conditional on $\{a_i\}$ we get:
\[
\mathbb E[a_i\mid a_i] = a_i = r\,\mathbb E\big[\mathbb E[A\mid I_i,a_i]\mid a_i\big] + \mathbb E\big[\mathbb E[\theta_i\mid I_i,a_i]\mid a_i\big] = r\,\mathbb E[A\mid a_i] + \mathbb E[\theta_i\mid a_i], \tag{59}
\]
where we used the law of iterated expectations. In other words, the information contained in $\{a_i\}$ is a sufficient statistic for agents to compute their best response, and thus the agents compute the same best response if they know $\{I_i,a_i\}$ or if they just know $\{a_i\}$. Yet, looking at (59), by definition $(\theta_i,\bar\theta,a_i,A)$ form a BCE.

($\Rightarrow$) We now prove that if $(\theta_i,\bar\theta,a_i,A)$ form a BCE, then there exists an information structure $I_i$ such that the variables $(\theta_i,\bar\theta,a_i,A)$ form a BNE when agents receive this information structure. We consider the case in which the variables $(\theta_i,\bar\theta,a_i,A)$ form a BCE, and thus the variables are jointly normal and
\[
a_i = r\,\mathbb E[A\mid a_i] + \mathbb E[\theta_i\mid a_i]. \tag{60}
\]
Since the variables are jointly normal we can always find $w\in\mathbb R$ and $\lambda\in[-1,1]$ such that:
\[
a_i = w\big(\lambda\,\Delta\theta_i + (1-\lambda)\,\bar\theta + \varepsilon_i\big),
\]
where the variables $(\lambda,w)$ and the random variable $\varepsilon$ are defined by the following equations of the BCE equilibrium distribution:
\[
w\lambda = \frac{\operatorname{cov}(a_i,\Delta\theta_i)}{\sigma_{\Delta\theta}^2},\qquad w(1-\lambda) = \frac{\operatorname{cov}(a_i,\bar\theta)}{\sigma_{\bar\theta}^2},\qquad \varepsilon_i = \frac{a_i}{w} - \lambda\,\Delta\theta_i - (1-\lambda)\,\bar\theta.
\]
Now consider the case in which agents receive a one-dimensional signal
\[
s_i \triangleq \frac{a_i}{w} = \lambda\,\Delta\theta_i + (1-\lambda)\,\bar\theta + \varepsilon_i. \tag{61}
\]
Then, by definition, we have that:
\[
a_i = w s_i = r\,\mathbb E[A\mid a_i] + \mathbb E[\theta_i\mid a_i] = r\,\mathbb E[A\mid s_i] + \mathbb E[\theta_i\mid s_i],
\]
where we use the fact that conditioning on $a_i$ is equivalent to conditioning on $s_i$. Thus, when agent $i$ receives the information structure (and associated signal $s_i$) $I_i = \{s_i\}$, then agent $i$ taking action $a_i = ws_i$ constitutes a Bayes Nash equilibrium, as it complies with the best response condition. Thus, the distribution $(\theta_i,\bar\theta,a_i,A)$ forms a BNE when agents receive signals $I_i=\{s_i\}$.
Proof of Proposition 10. Note that (61) has the form stated in the proposition, and thus the result was implicitly established in the proof of Proposition 9.
Proof of Corollary 2. This follows from imposing that the inequalities (38) are satisfied with equality and solving for $\rho_{a\theta}$ and $\rho_{a\bar\theta}$ in terms of $\rho_{aa}$. As we have the additional constraints that $\rho_{a\bar\theta},\rho_{aa}\ge 0$, we keep the roots that satisfy these conditions.
Proof of Corollary 3. The first part follows directly from (43), which we rewrite for the ease of the reader:
\[
\operatorname{var}\!\left(\begin{bmatrix}\Delta a_i\\ A\end{bmatrix}\,\middle|\,\begin{bmatrix}\Delta\theta_i\\ \bar\theta\end{bmatrix}\right) = \sigma_a^2\begin{pmatrix}(1-\rho_{aa}) - \dfrac{(\rho_{a\theta}-\rho_{a\bar\theta})^2}{1-\rho_{\theta\theta}} & 0\\[1.5ex] 0 & \rho_{aa} - \dfrac{\rho_{a\bar\theta}^2}{\rho_{\theta\theta}}\end{pmatrix}.
\]
It is clear that both inequalities in (38) are satisfied with equality if and only if
\[
\operatorname{var}\!\left(\begin{bmatrix}\Delta a_i\\ A\end{bmatrix}\,\middle|\,\begin{bmatrix}\Delta\theta_i\\ \bar\theta\end{bmatrix}\right) = \begin{pmatrix}0&0\\0&0\end{pmatrix}.
\]
Thus, $\Delta a_i$ and $A$ can be written as linear functions of $\Delta\theta_i$ and $\bar\theta$ respectively. This implies that $a_i = A + \Delta a_i$ is a linear function of $\Delta\theta_i$ and $\bar\theta$, and hence noise free. Moreover, using the same argument as in the proof of Proposition 9, we know that a BCE is noise free if and only if it can be decentralized by some noise-free information structure.
Proof of Proposition 11. By rewriting the constraints (38) of Proposition 8 we obtain:

1. $\rho_{aa}\rho_{\theta\theta} - \rho_{a\bar\theta}^2 \ge 0$;

2. $(1-\rho_{aa})(1-\rho_{\theta\theta}) - (\rho_{a\theta}-\rho_{a\bar\theta})^2 \ge 0$.

If the objective $\psi(\rho_{aa},\rho_{a\theta},\rho_{a\bar\theta})$ is strictly increasing, then in the optimum the above inequality (2) must bind. Moreover, if the constraint (1) does not bind, then we can just increase $\rho_{a\theta}$ and $\rho_{a\bar\theta}$ in equal amounts, without violating (2) and increasing the value of $\psi$. Thus, in the maximum of $\psi$ both constraints must bind. Moreover, since $\psi(\rho_{aa},\rho_{a\theta},\rho_{a\bar\theta})$ is strictly increasing in $\rho_{a\theta}$ and weakly increasing in $\rho_{a\bar\theta}$, it is clear that the maximum will be achieved at the positive root of (20).
Proof of Corollary 4. From (37), the equilibrium standard deviation of the individual action satisfies $\sigma_a = \rho_{a\theta}\sigma_\theta/(1-r\rho_{aa})$, and hence the individual volatility, aggregate volatility and dispersion can be written as follows:
\[
\sigma_a^2 = \left(\frac{\rho_{a\theta}\sigma_\theta}{1-r\rho_{aa}}\right)^2,\qquad
\rho_{aa}\sigma_a^2 = \rho_{aa}\left(\frac{\rho_{a\theta}\sigma_\theta}{1-r\rho_{aa}}\right)^2,\qquad
(1-\rho_{aa})\sigma_a^2 = (1-\rho_{aa})\left(\frac{\rho_{a\theta}\sigma_\theta}{1-r\rho_{aa}}\right)^2.
\]
Thus, the result follows directly.
To establish Proposition 15, we first provide an auxiliary result that describes the set of correlations that can be achieved by the class of information structures (44). We define the following information-to-noise ratio:
\[
b \triangleq \frac{\sigma_{\Delta\theta}^2}{\sigma_{\Delta\theta}^2 + \sigma_{\varepsilon^1}^2} \in [0,1]. \tag{62}
\]
Proposition 17 (Feasible Outcomes with $s_i$)
A set of correlations $(\rho_{aa},\rho_{a\theta},\rho_{a\bar\theta})$ can be achieved by a linear equilibrium in which agents receive an information structure of the form $s_i$, if and only if, there exists $b\in[0,1]$ such that:

1. The following equalities are satisfied:
\[
\sigma_a(1-r\rho_{aa}) = \rho_{a\theta}\sigma_\theta; \tag{63}
\]
\[
\rho_{a\theta}\sigma_a\sigma_\theta + (\rho_{a\theta}-\rho_{a\bar\theta})\sigma_a\sigma_\theta\left(\frac1b - 1\right) = \sigma_\theta^2 + r\,\rho_{a\bar\theta}\,\sigma_a\sigma_\theta. \tag{64}
\]

2. The following inequalities are satisfied:
\[
\rho_{a\bar\theta}^2 \le \rho_{aa}\rho_{\theta\theta}; \tag{65}
\]
\[
(1-\rho_{aa})(1-\rho_{\theta\theta}) - \frac1b\,(\rho_{a\theta}-\rho_{a\bar\theta})^2 \ge 0. \tag{66}
\]
Proof. (If) Consider a linear Bayes Nash equilibrium in which agents get signals $\{s^1_i,s^2_i,s^3\}$. First, we note that (65) must be satisfied trivially, as it must be satisfied in any Bayes correlated equilibrium. We now prove that restriction (66) must be satisfied.

In any linear Bayes Nash equilibrium the action of players can be written as follows: $a_i = \alpha_1 s^1_i + \alpha_2 s^2_i + \alpha_3 s^3$. Therefore, for any constants $(\alpha_1,\alpha_2,\alpha_3)\in\mathbb R^3$, we know that $\Delta a_i = \alpha_1(\Delta\theta_i + \varepsilon^1_i)$, and thus:
\[
\operatorname{var}(\Delta a_i\mid \Delta\theta_i) = \alpha_1^2\,\sigma_{\varepsilon^1}^2.
\]
Yet, note that we can write $\alpha_1$ as follows:
\[
\alpha_1 = \frac{\operatorname{cov}(\Delta\theta_i,\Delta a_i)}{\sigma_{\Delta\theta}^2},
\]
and $\operatorname{var}(\Delta a_i\mid\Delta\theta_i)$ is given by:
\[
\operatorname{var}(\Delta a_i\mid\Delta\theta_i) = (1-\rho_{aa})\sigma_a^2 - \frac{\operatorname{cov}(\Delta\theta_i,\Delta a_i)^2}{\sigma_{\Delta\theta}^2} = \sigma_a^2\left((1-\rho_{aa}) - \frac{(\rho_{a\theta}-\rho_{a\bar\theta})^2}{1-\rho_{\theta\theta}}\right).
\]
Thus, we have that
\[
\sigma_a^2\left((1-\rho_{aa}) - \frac{(\rho_{a\theta}-\rho_{a\bar\theta})^2}{1-\rho_{\theta\theta}}\right) = \left(\frac{\operatorname{cov}(\Delta\theta_i,\Delta a_i)}{\sigma_{\Delta\theta}^2}\right)^2\sigma_{\varepsilon^1}^2 = \frac{(\rho_{a\theta}-\rho_{a\bar\theta})^2\sigma_a^2}{1-\rho_{\theta\theta}}\cdot\frac{\sigma_{\varepsilon^1}^2}{\sigma_{\Delta\theta}^2}.
\]
Finally, we note that by definition:
\[
\frac{\sigma_{\varepsilon^1}^2}{\sigma_{\Delta\theta}^2} = \frac1b - 1.
\]
Thus, we get
\[
(1-\rho_{aa}) - \frac1b\cdot\frac{(\rho_{a\theta}-\rho_{a\bar\theta})^2}{1-\rho_{\theta\theta}} \ge 0,
\]
which is condition (66).
Finally, we prove that condition (64) must be satisfied. For this, note that in any Bayes Nash equilibrium we must have that $a_i = \mathbb E[\theta_i + rA\mid s^1_i,s^2_i,s^3]$, and multiplying the equation by $s^1_i$, we get:
\[
s^1_i a_i = \mathbb E[\theta_i s^1_i + r\,A\,s^1_i \mid s^1_i, s^2_i, s^3].
\]
Taking expectations and using the law of iterated expectations, we get:
\[
\operatorname{cov}(a_i,s^1_i) = \operatorname{cov}(\theta_i,s^1_i) + r\operatorname{cov}(A,s^1_i),
\]
and thus:
\[
\operatorname{cov}(\theta_i,s^1_i) = \operatorname{cov}(\theta_i,\theta_i) = \sigma_\theta^2,\qquad \operatorname{cov}(A,s^1_i) = \operatorname{cov}(A,\theta_i) = \rho_{a\bar\theta}\sigma_a\sigma_\theta,
\]
and:
\[
\operatorname{cov}(a_i,s^1_i) = \operatorname{cov}(a_i,\varepsilon^1_i) + \operatorname{cov}(\theta_i,a_i) = \rho_{a\theta}\sigma_a\sigma_\theta + \frac{\operatorname{cov}(a_i,\Delta\theta_i)}{\sigma_{\Delta\theta}^2}\,\sigma_{\varepsilon^1}^2 = \rho_{a\theta}\sigma_a\sigma_\theta + (\rho_{a\theta}-\rho_{a\bar\theta})\sigma_a\sigma_\theta\left(\frac1b - 1\right).
\]
Thus, we get:
\[
\rho_{a\theta}\sigma_a\sigma_\theta + (\rho_{a\theta}-\rho_{a\bar\theta})\sigma_a\sigma_\theta\left(\frac1b-1\right) = \sigma_\theta^2 + r\,\rho_{a\bar\theta}\,\sigma_a\sigma_\theta,
\]
which is condition (64).
(Only If) Let $(\theta_i,\bar\theta,a_i,A)$ be a Bayes correlated equilibrium with correlations $(\rho_{a\theta},\rho_{a\bar\theta},\rho_{aa})$ satisfying (64), (65) and (66). We will show that there exists an information structure $s_i$ under which the Bayes Nash equilibrium induces the random variables $(\theta_i,\bar\theta,a_i,A)$.

Let $\varepsilon^1_i$ be a random variable that is uncorrelated with $\theta_i$ and $\bar\theta$ (and thus it is a noise term) with variance $\sigma_{\varepsilon^1}^2 = (1-b)\sigma_{\Delta\theta}^2/b$ and such that:
\[
\operatorname{cov}(\varepsilon^1_i, a_i) = \frac{\operatorname{cov}(\Delta\theta_i, a_i)}{\sigma_{\Delta\theta}^2}\,\sigma_{\varepsilon^1}^2.
\]
Define signals $s^1_i$ and $\tilde s_i$ as follows:
\[
s^1_i \triangleq \theta_i + \varepsilon^1_i,\qquad \tilde s_i \triangleq a_i - \frac{\operatorname{cov}(a_i,\Delta\theta_i)}{\sigma_{\Delta\theta}^2}\,s^1_i.
\]
Thus, by definition $(s^1_i,\tilde s_i)$ are informationally equivalent to $(s^1_i, a_i)$. Note that by definition $\operatorname{cov}(\tilde s_i,\Delta\theta_i) = 0$. Thus, we can define:
\[
\tilde\varepsilon_i \triangleq \tilde s_i - \frac{\operatorname{cov}(\tilde s_i,\bar\theta)}{\sigma_{\bar\theta}^2}\,\bar\theta,
\]
and (after normalization) write the signal $\tilde s_i$ as follows: $\tilde s_i = \bar\theta + \tilde\varepsilon_i$, where $\tilde\varepsilon_i$ is a noise term (thus independent of $\theta_i$ and $\bar\theta$) with a correlation of $\rho_{\tilde\varepsilon\tilde\varepsilon}$ across agents. Note that
\[
a_i = \mathbb E[\theta_i + rA\mid s^1_i, a_i]
\]
holds if and only if:
\[
\mu_a = \mu_\theta + r\mu_A,\qquad \sigma_a = \rho_{a\theta}\sigma_\theta + r\rho_{aa}\sigma_a,\qquad \operatorname{cov}(a_i,s^1_i) = \operatorname{cov}(\theta_i,s^1_i) + r\operatorname{cov}(A,s^1_i). \tag{67}
\]
To show this, just note that we could define a random variable $z_i$ as follows:
\[
z_i \triangleq \mathbb E[\theta_i + rA\mid s^1_i, a_i],
\]
and impose $a_i = z_i$. By definition, we must have that
\[
\mathbb E[a_i] = \mathbb E[z_i],\qquad \operatorname{var}(z_i) = \operatorname{cov}(a_i,z_i),\qquad \operatorname{cov}(a_i,s^1_i) = \operatorname{cov}(z_i,s^1_i). \tag{68}
\]
These correspond to conditions (67) (obviously, since $a_i = z_i$, the conditions $\operatorname{cov}(a_i,z_i)=\operatorname{var}(z_i)$ and $\operatorname{cov}(a_i,z_i)=\operatorname{var}(a_i)$ are redundant). Moreover, any random variable $z_i'$ that satisfies (68) must also satisfy $a_i = z_i'$. This just comes from the fact that these are normal random variables, and thus the joint distribution of $(z_i',a_i,s^1_i)$ is completely defined by its first and second moments. Thus, any random variable $z_i'$ that has the same second moments as $z_i$ must satisfy $a_i - z_i' = 0$.

Note that conditions (67) hold since $(\theta_i,\bar\theta,a_i,A)$ form a Bayes correlated equilibrium, while $\operatorname{cov}(a_i,s^1_i) = \operatorname{cov}(\theta_i,s^1_i) + r\operatorname{cov}(A,s^1_i)$ holds by the assumption (64) (where the calculation is the same as before). Thus, we have that $(\theta_i,\bar\theta,a_i,A)$ is induced by the linear Bayes Nash equilibrium when players receive the information structure $\{s^1_i,\tilde s_i\}$.

On the other hand, note that the signal $s^1_i = \theta_i + \varepsilon^1_i$ is a sufficient statistic for $\Delta\theta_i$ given that agents only receive signals $(s^1_i,\tilde s_i)$, in the sense that
\[
\mathbb E[\Delta\theta_i\mid s^1_i,\tilde s_i] = b\big(s^1_i - \mathbb E[\bar\theta\mid s^1_i,\tilde s_i]\big).
\]
From the Bayes correlated equilibrium condition, we know that:
\[
a_i = \mathbb E[\theta_i + rA\mid s^1_i,\tilde s_i] = b\,s^1_i + \mathbb E[(1-b)\bar\theta + rA\mid s^1_i,\tilde s_i].
\]
Subtracting $b s^1_i$ from both sides, and defining $\tilde A \triangleq A - b\bar\theta$, we get:
\[
\tilde a_i \triangleq a_i - b s^1_i = \mathbb E[(1-b)\bar\theta + rA\mid s^1_i,\tilde s_i] = \mathbb E[(1-(1-r)b)\bar\theta + r\tilde A\mid s^1_i,\tilde s_i]. \tag{69}
\]
Thus, $(\theta_i,\bar\theta,\tilde a_i,\tilde A)$ is induced by a linear Bayes Nash equilibrium with common values, in which players get signals $(s^1_i,\tilde s_i)$.

We now show that there exist signals $(s^2_i,s^3)$, such that
\[
\xi s^2_i = \xi\bar\theta + \varepsilon^2_i,\qquad (1-\xi)s^3 = (1-\xi)\bar\theta + \bar\varepsilon,
\]
and
\[
\tilde a_i = \mathbb E[(1-(1-r)b)\bar\theta + r\tilde A\mid s^1_i, s^2_i, s^3],
\]
where $\xi\in[0,1]$ is just a parameter to simplify the expressions. If we show that such signals $(s^2_i,s^3)$ exist, this directly implies that we must also have that $a_i = \mathbb E[\theta_i + rA\mid s^1_i,s^2_i,s^3]$. Thus, $(\theta_i,\bar\theta,a_i,A)$ is induced by the linear Bayes Nash equilibrium when players receive the information structure $\{s^1_i,s^2_i,s^3\}$.

For this we define $\varepsilon^2_i$ and $\bar\varepsilon$ in terms of $\tilde\varepsilon_i$ as follows:
\[
\bar\varepsilon = \bar{\mathbb E}[\tilde\varepsilon_i],\qquad \varepsilon^2_i = \tilde\varepsilon_i - \bar{\mathbb E}[\tilde\varepsilon_i];
\]
that is, $\bar\varepsilon$ is the common component of the noise $\tilde\varepsilon_i$ and $\varepsilon^2_i$ is its idiosyncratic component. Note that by (69) we know that:
\[
\operatorname{cov}(\tilde a_i,\tilde s_i) = (1-(1-r)b)\operatorname{cov}(\bar\theta,\tilde s_i) + r\operatorname{cov}(\tilde A,\tilde s_i).
\]
Yet, we can re-write the previous equality using the definition of $\tilde s_i$:
\[
\operatorname{cov}(\tilde a_i,\bar\theta + \varepsilon^2_i + \bar\varepsilon) = (1-(1-r)b)\operatorname{cov}(\bar\theta,\bar\theta+\varepsilon^2_i+\bar\varepsilon) + r\operatorname{cov}(\tilde A,\bar\theta+\varepsilon^2_i+\bar\varepsilon).
\]
Yet, we can find $\xi$ such that the following conditions hold:
\[
\operatorname{cov}(\tilde a_i,\xi\bar\theta+\varepsilon^2_i) = (1-(1-r)b)\operatorname{cov}(\bar\theta,\xi\bar\theta+\varepsilon^2_i) + r\operatorname{cov}(\tilde A,\xi\bar\theta+\varepsilon^2_i),
\]
\[
\operatorname{cov}(\tilde a_i,(1-\xi)\bar\theta+\bar\varepsilon) = (1-(1-r)b)\operatorname{cov}(\bar\theta,(1-\xi)\bar\theta+\bar\varepsilon) + r\operatorname{cov}(\tilde A,(1-\xi)\bar\theta+\bar\varepsilon).
\]
Thus, the following conditions must hold:
\[
\operatorname{cov}(\tilde a_i,s^2_i) = (1-(1-r)b)\operatorname{cov}(\bar\theta,s^2_i) + r\operatorname{cov}(\tilde A,s^2_i); \tag{70}
\]
\[
\operatorname{cov}(\tilde a_i,s^3) = (1-(1-r)b)\operatorname{cov}(\bar\theta,s^3) + r\operatorname{cov}(\tilde A,s^3). \tag{71}
\]
Since (69) holds, we must have that:
\[
\operatorname{cov}(\tilde a_i,s^1_i) = (1-(1-r)b)\operatorname{cov}(\bar\theta,s^1_i) + r\operatorname{cov}(\tilde A,s^1_i); \tag{72}
\]
thus, using the same argument as before, we know that (70), (71) and (72) imply that
\[
\tilde a_i = \mathbb E[(1-(1-r)b)\bar\theta + r\tilde A\mid s^1_i,s^2_i,s^3].
\]
Thus, we get the result.
Proof of Proposition 12. We assume agents receive just one signal of the form $s^1_i$. Thus, the action of players is given by:
\[
a_i = \alpha s^1_i = \alpha(\theta_i + \varepsilon_i),
\]
for some $\alpha\in\mathbb R$. Thus, we can calculate:
\[
\rho_{a\theta} = \frac{\operatorname{cov}(a_i,\theta_i)}{\sigma_a\sigma_\theta} = \frac{\alpha\sigma_\theta^2}{\alpha\sqrt{\sigma_\theta^2+\sigma_\varepsilon^2}\,\sigma_\theta} = \frac{\sigma_\theta}{\sqrt{\sigma_\theta^2+\sigma_\varepsilon^2}},
\]
\[
\rho_{aa} = \frac{\operatorname{cov}(a_i,a_j)}{\sigma_a^2} = \frac{\alpha^2\rho_{\theta\theta}\sigma_\theta^2}{\alpha^2(\sigma_\theta^2+\sigma_\varepsilon^2)} = \frac{\rho_{\theta\theta}\sigma_\theta^2}{\sigma_\theta^2+\sigma_\varepsilon^2},
\]
\[
\rho_{a\bar\theta} = \frac{\operatorname{cov}(a_i,\bar\theta)}{\sigma_a\sigma_\theta} = \frac{\alpha\rho_{\theta\theta}\sigma_\theta^2}{\alpha\sqrt{\sigma_\theta^2+\sigma_\varepsilon^2}\,\sigma_\theta} = \frac{\rho_{\theta\theta}\sigma_\theta}{\sqrt{\sigma_\theta^2+\sigma_\varepsilon^2}}.
\]
From the previous equalities, plus the fact that $\rho_{\theta\theta}\in[0,1)$, we must have that (46) is satisfied.
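A simulation can confirm the correlation formulas derived above; the parameter values below are illustrative choices of ours, assuming the one-signal structure $s^1_i = \theta_i + \varepsilon_i$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, agents = 300_000, 2              # two agents suffice to estimate rho_aa
rho_tt, sig_t, sig_e, alpha = 0.6, 1.0, 0.8, 1.3   # illustrative values

common = rng.normal(0, sig_t * np.sqrt(rho_tt), (n, 1))
idio = rng.normal(0, sig_t * np.sqrt(1 - rho_tt), (n, agents))
theta = common + idio               # corr(theta_i, theta_j) = rho_tt
a = alpha * (theta + rng.normal(0, sig_e, (n, agents)))   # a_i = alpha s_i

corr = np.corrcoef(np.column_stack([a, theta, common]).T)
rho_atheta_hat = corr[0, 2]         # estimated corr(a_1, theta_1)
rho_aa_hat = corr[0, 1]             # estimated corr(a_1, a_2)

k = sig_t / np.hypot(sig_t, sig_e)  # predicted rho_atheta
assert abs(rho_atheta_hat - k) < 0.01
assert abs(rho_aa_hat - rho_tt * k**2) < 0.01
```

Note that the scaling $\alpha$ drops out of every correlation, exactly as in the algebra above.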
Since players get signal s1i and know , it is easy to see that in
equilibrium the best response of players will be given by:
2
ai =
1
r
j
+ E[
s1i ]
=
1
r
+
2
+
2
"1
(s1i
)=
where
s1i , s1i
We can now calculate the correlations (
aa ;
a
;
a
2
+
) in terms of
".
2
a
=
cov(ai ; i )
a
=
q
66
1 r
2
+b
=(1
r
2
1
i + "i and b ,
=
1
+ b(s1i
2
"1
:
We get:
2
r)2
;
+b
2
);
2
aa
=
cov(ai ; aj )
2
a
=
2
(1 r)2
r)2 +
=(1
b
2
;
2
=
a
cov(ai ; )
q
=
a
1 r
2
:
r)2 + b
=(1
Note that by de…nition b 2 [0; 1], and thus we must have that
a
aa
we get (47).
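The closed forms above can be checked at the two endpoints $b=0$ and $b=1$ (perfectly correlated actions and complete information, respectively); as a by-product, one can verify that inequality (65) holds with equality along this one-parameter family, consistent with our reading of the proof. The helper below encodes our reconstruction:

```python
import math

def correlations(b, rho_tt, r):
    # Our reading of the closed forms in the proof of Proposition 13.
    d = rho_tt / (1 - r) ** 2 + b * (1 - rho_tt)
    rho_atheta = (rho_tt / (1 - r) + b * (1 - rho_tt)) / math.sqrt(d)
    rho_aa = (rho_tt / (1 - r) ** 2) / d
    rho_abar = (rho_tt / (1 - r)) / math.sqrt(d)
    return rho_aa, rho_atheta, rho_abar

rho_tt, r = 0.5, 0.3
# b = 0: agents respond only to the common state, so rho_aa = 1.
assert abs(correlations(0.0, rho_tt, r)[0] - 1.0) < 1e-12
# b = 1: complete information; rho_aa equals rho_tt/(rho_tt+(1-r)^2(1-rho_tt)).
hat = rho_tt / (rho_tt + (1 - r) ** 2 * (1 - rho_tt))
assert abs(correlations(1.0, rho_tt, r)[0] - hat) < 1e-12
# (65) holds with equality for every b in this family.
for b in (0.1, 0.4, 0.9):
    rho_aa, _, rho_abar = correlations(b, rho_tt, r)
    assert abs(rho_abar ** 2 - rho_aa * rho_tt) < 1e-12
```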
Proof of Proposition 14. From the best response conditions, we have that $a_i = \mathbb E[\theta_i\mid I_i] + r\,\mathbb E[A\mid I_i]$; multiplying by $\theta_i$ and taking expectations (note that because $\theta_i$ is in $I_i$, we have that $\theta_i\,\mathbb E[A\mid I_i] = \mathbb E[\theta_i A\mid I_i]$), we find that
\[
\rho_{a\theta}\sigma_a = \sigma_\theta + r\,\rho_{a\bar\theta}\sigma_a. \tag{73}
\]
As expected, this is the same as condition (64), but imposing $b=1$. We also use the fact that $\sigma_a(1-r\rho_{aa}) = \rho_{a\theta}\sigma_\theta$, and hence inserting this in (73) we obtain:
\[
\rho_{a\bar\theta} = \frac1r\left(\rho_{a\theta} - \frac{1-r\rho_{aa}}{\rho_{a\theta}}\right). \tag{74}
\]
Thus, the inequalities (65) and (66) can be written as follows:
\[
(1-\rho_{aa})(1-\rho_{\theta\theta}) \ge \frac{1}{r^2}\left((1-r)\rho_{a\theta} - \frac{1-r\rho_{aa}}{\rho_{a\theta}}\right)^2; \tag{75}
\]
\[
\rho_{aa}\rho_{\theta\theta} \ge \frac{1}{r^2}\left(\rho_{a\theta} - \frac{1-r\rho_{aa}}{\rho_{a\theta}}\right)^2. \tag{76}
\]
For both of the previous inequalities the right hand side is a convex function of $\rho_{a\theta}$. Thus, for a fixed $\rho_{aa}$, inequalities (75) and (76) independently constrain the set of feasible $\rho_{a\theta}$ to be in a convex interval with non-empty interior for all $\rho_{aa}\in[0,1]$ (it is easy to check that there exist values of $\rho_{a\theta}$ such that either inequality is strict). Thus, if we impose both inequalities jointly, we find the intersection of both intervals, which is also a convex interval. We first find the set of feasible $\rho_{aa}$ and then prove that it is always the case that one of the inequalities provides the lower bound and the other inequality provides the upper bound on the set of feasible $\rho_{a\theta}$ for each feasible $\rho_{aa}$. For this we make several observations.

First, when agents know their own payoff type there are only two noise-free equilibria: the one in which agents know only their type, and the complete information equilibrium. This implies that there are only two pairs of values for $(\rho_{aa},\rho_{a\theta})$ such that inequalities (75) and (76) both hold with equality. Second, the previous point implies that there are only two pairs of values for $(\rho_{aa},\rho_{a\theta})$ such that the bounds of the intervals imposed by inequalities (75) and (76) are the same. Third, the previous two points imply that there are only two possible $\rho_{aa}$ such that the set of feasible $\rho_{a\theta}$ is a singleton. These values are $\rho_{aa}\in\{\rho_{\theta\theta},\hat\rho_{aa}\}$, which correspond to the $\rho_{aa}$ of the equilibrium in which agents only know their type and of the complete information equilibrium, respectively. Fourth, it is clear that there are no feasible BCE with $\rho_{aa}\in\{0,1\}$. Fifth, the upper and lower bounds on the feasible $\rho_{a\theta}$ that are imposed by inequalities (75) and (76) move smoothly with $\rho_{aa}$. Sixth, this implies that the set of feasible $\rho_{aa}$ is bounded by the values of $\rho_{aa}$ at which the set of feasible $\rho_{a\theta}$ is a singleton. Thus, the set of feasible $\rho_{aa}$ is $[\min\{\rho_{\theta\theta},\hat\rho_{aa}\},\max\{\rho_{\theta\theta},\hat\rho_{aa}\}]$. Moreover, it is easy to see that for all $\rho_{aa}$ in the interior of this interval one of the inequalities provides the upper bound for $\rho_{a\theta}$ while the other inequality provides the lower bound.

We now provide the explicit functional forms for the upper and lower bounds. To check which of the inequalities provides the upper and the lower bound, respectively, we can just look at the equilibrium in which agents know only their own type, and thus $\rho_{a\theta}=1$. In this case we have the following inequalities:
\[
(1-\rho_{aa})(1-\rho_{\theta\theta}) \ge \frac{1}{r^2}\big((1-r) - (1-r\rho_{aa})\big)^2 = (1-\rho_{aa})^2;
\]
\[
\rho_{aa}\rho_{\theta\theta} \ge \frac{1}{r^2}\big(1 - (1-r\rho_{aa})\big)^2 = \rho_{aa}^2.
\]
As expected, it is easy to check that $\rho_{aa}=\rho_{\theta\theta}$ satisfies both inequalities with equality. Moreover, it is easy to see that, if $r>0$, then $\rho_{a\theta}=1$ provides a lower bound on the set of $\rho_{a\theta}$ that satisfy the first inequality, while $\rho_{a\theta}=1$ provides an upper bound on the set of $\rho_{a\theta}$ that satisfy the second inequality. If $r<0$ we get the opposite: $\rho_{a\theta}=1$ provides an upper bound on the set of $\rho_{a\theta}$ that satisfy the first inequality, while it provides a lower bound on the set of $\rho_{a\theta}$ that satisfy the second inequality.

Thus, if $r>0$, then the inequality (75) provides the lower bound for $\rho_{a\theta}$ and the inequality (76) provides the upper bound on the set of feasible $\rho_{a\theta}$. If $r<0$ we get the opposite result. Note that the conclusions about the bounds when $\rho_{aa}=\rho_{\theta\theta}$ can be extended to all feasible $\rho_{aa}$, because we know that the bounds of the intervals are different for all $\rho_{aa}\notin\{\rho_{\theta\theta},\hat\rho_{aa}\}$ and they move continuously; thus the relative order is preserved. Finally, by taking the right roots (just check which one yields $\rho_{a\theta}=1$ when $\rho_{aa}=\rho_{\theta\theta}$), we define implicitly the functions $\rho^i_{a\theta}$ and $\rho^c_{a\theta}$. For a given $\rho_{aa}$, $\rho^c_{a\theta}$ and $\rho^i_{a\theta}$ represent the solutions of the following equations:
\[
\frac1r\left((1-r)(\rho^c_{a\theta})^2 - (1-r\rho_{aa})\right) + \sqrt{(1-\rho_{aa})(1-\rho_{\theta\theta})}\;\rho^c_{a\theta} = 0; \tag{77}
\]
\[
\frac1r\left((\rho^i_{a\theta})^2 - (1-r\rho_{aa})\right) - \sqrt{\rho_{aa}\rho_{\theta\theta}}\;\rho^i_{a\theta} = 0. \tag{78}
\]
Therefore, for all feasible BCE in which agents know their own type, $\rho_{aa}\in[\min\{\rho_{\theta\theta},\hat\rho_{aa}\},\max\{\rho_{\theta\theta},\hat\rho_{aa}\}]$, while the set of $\rho_{a\theta}$ is bounded by the functions $\rho^c_{a\theta}$ and $\rho^i_{a\theta}$. If $r>0$ then the function $\rho^i_{a\theta}$ provides the upper bound while $\rho^c_{a\theta}$ provides the lower bound. If $r<0$, then the function $\rho^c_{a\theta}$ provides the upper bound while $\rho^i_{a\theta}$ provides the lower bound.
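The observation that both bounds coincide at the equilibrium in which agents know only their own type ($\rho_{a\theta}=1$, $\rho_{aa}=\rho_{\theta\theta}$) can be verified directly from our reading of (77) and (78):

```python
import math

def g_c(rho, rho_aa, rho_tt, r):
    # LHS of our reading of (77): the binding version of inequality (75).
    return ((1 - r) * rho**2 - (1 - r * rho_aa)) / r + \
        math.sqrt((1 - rho_aa) * (1 - rho_tt)) * rho

def g_i(rho, rho_aa, rho_tt, r):
    # LHS of our reading of (78): the binding version of inequality (76).
    return (rho**2 - (1 - r * rho_aa)) / r - math.sqrt(rho_aa * rho_tt) * rho

# At rho_atheta = 1 and rho_aa = rho_thetatheta both equations are satisfied,
# for positive and negative interaction r alike.
for r in (-0.5, 0.25, 0.7):
    for rho_tt in (0.2, 0.5, 0.8):
        assert abs(g_c(1.0, rho_tt, rho_tt, r)) < 1e-12
        assert abs(g_i(1.0, rho_tt, rho_tt, r)) < 1e-12
```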
Proof of Proposition 15. We first prove that any Bayes correlated equilibrium that achieves $\tilde\rho_{a\theta}(\rho_{aa})$ subject to (64), (65) and (66) must satisfy the following two conditions: if $b<1$ then (65) and (66) must be satisfied with equality; if $b=1$ then (65) or (66) must be satisfied with equality.

From condition (64), we know that:
\[
\rho_{a\bar\theta} = \frac{b}{1-b(1-r)}\left(\frac{\rho_{a\theta}}{b} - \frac{1-r\rho_{aa}}{\rho_{a\theta}}\right).
\]
Replacing $\rho_{a\bar\theta}$ in (65) and (66), we get the following two inequalities:
\[
\rho_{aa}\rho_{\theta\theta} \ge \left(\frac{b}{1-b(1-r)}\right)^2\left(\frac{\rho_{a\theta}}{b} - \frac{1-r\rho_{aa}}{\rho_{a\theta}}\right)^2;
\]
\[
(1-\rho_{aa})(1-\rho_{\theta\theta}) \ge \frac1b\left(\rho_{a\theta} - \frac{b}{1-b(1-r)}\left(\frac{\rho_{a\theta}}{b}-\frac{1-r\rho_{aa}}{\rho_{a\theta}}\right)\right)^2.
\]
Note that we need to maximize $\rho_{a\theta}$ given the previous two inequalities. Thus, it is clear that at least one of these inequalities must be binding (it is easy to check that there exist values of $\rho_{a\theta}$ such that either inequality is strict). Thus, at least one of the restrictions must always be satisfied with equality. We now show that, if $b\in(0,1)$, then both inequalities must be satisfied with equality. To show this, just note that the derivative of the right hand side of each inequality with respect to $b$ is different from $0$. Thus, if just one of the constraints is binding and the other one is not, then one can change $b$ and relax both constraints, which allows one to get a higher $\rho_{a\theta}$. Note that the argument also holds for $b=0$, as in this case the second inequality will be satisfied with slack, while the right hand side of the first inequality will be decreasing with respect to $b$. Thus, we must have that at the maximum $\rho_{a\theta}$ either both constraints are binding or $b=1$.

From the previous argument, $\tilde\rho_{a\theta}$ is achieved by $b=1$ or both (75) and (76) are satisfied with equality. If $b=1$ then agents know their own payoffs. From the proof of Proposition 17 it is easy to see that, if both (75) and (76) are satisfied with equality, then the information structure that achieves these correlations must satisfy $\sigma_{\varepsilon^2},\sigma_{\varepsilon^3}\in\{0,\infty\}$. Thus, we can calculate $\tilde\rho_{a\theta}$ using Proposition 12, Proposition 13 and Proposition 14.

Finally, we show that for all $(\rho_{aa},\rho_{a\theta})$ such that $\rho_{a\theta}\in[0,\tilde\rho_{a\theta}(\rho_{aa})]$, there exists an information structure of the form $s_i$ that decentralizes an equilibrium with correlations $(\rho_{aa},\rho_{a\theta})$. For this, first consider $s^0_i$ to be the information structure that decentralizes $(\rho_{aa},\tilde\rho_{a\theta}(\rho_{aa}))$, which we write explicitly:
\[
s^{10}_i = \theta_i + \varepsilon^1_i,\qquad s^{20}_i = \bar\theta + \varepsilon^2_i,\qquad s^{30} = \bar\theta + \bar\varepsilon.
\]
We denote by $a^0_i$ the equilibrium actions in the linear Bayes Nash equilibrium when agents get the signal structure $s^0_i$, and note that $a^0_i$ can be written as follows:
\[
a^0_i = \alpha^0_1 s^{10}_i + \alpha^0_2 s^{20}_i + \alpha^0_3 s^{30},
\]
for some $\alpha^0_1,\alpha^0_2,\alpha^0_3\in\mathbb R$.

We now define two additional noise terms, namely $(e_i,e)$, where $e_i$ is purely idiosyncratic and $e$ is common valued. Moreover, we assume that the variances of $(e_i,e)$ are such that:
\[
\frac{\sigma_{\Delta\theta}^2}{\sigma_{\Delta\theta}^2+\sigma_{e_i}^2} = \frac{\sigma_{\bar\theta}^2}{\sigma_{\bar\theta}^2+\sigma_e^2} = \xi,
\]
where $\xi\in\mathbb R$ is a parameter. Before we proceed it is convenient to make the following definitions:
\[
\Delta\varphi_i \triangleq \mathbb E[\Delta\theta_i\mid \Delta\theta_i + e_i] = \xi(\Delta\theta_i + e_i),\qquad \bar\varphi \triangleq \mathbb E[\bar\theta\mid \bar\theta + e] = \xi(\bar\theta + e),
\]
and also define naturally $\varphi_i \triangleq \bar\varphi + \Delta\varphi_i$. Note that:
\[
\operatorname{var}(\Delta\varphi_i) = \xi\operatorname{var}(\Delta\theta_i),\qquad \operatorname{var}(\bar\varphi) = \xi\operatorname{var}(\bar\theta);
\]
thus $\rho_{\varphi\varphi} = \rho_{\theta\theta}$, where $\rho_{\varphi\varphi}$ is just the correlation between $\varphi_i$ and $\varphi_j$. That is, $\operatorname{corr}(\varphi_i,\varphi_j) = \rho_{\varphi\varphi}$.

We now define the information structure $s_i$ as follows:
\[
s^1_i = \theta_i + e_i + e + \frac1\xi\,\varepsilon^1_i = \frac1\xi\big(\varphi_i + \varepsilon^1_i\big),
\]
\[
s^2_i = \bar\theta + e + \frac1\xi\,\varepsilon^2_i = \frac1\xi\big(\bar\varphi + \varepsilon^2_i\big),
\]
\[
s^3 = \bar\theta + e + \frac1\xi\,\bar\varepsilon = \frac1\xi\big(\bar\varphi + \bar\varepsilon\big).
\]
Note that, if agents receive the information structure $s_i$, then knowing $\Delta\theta_i + e_i$ is a sufficient statistic for $\Delta\varphi_i$, and $\bar\theta + e$ is a sufficient statistic for knowing $\bar\varphi$. That is,
\[
\mathbb E[\theta_i\mid s_i] = \mathbb E\big[\mathbb E[\Delta\theta_i\mid \Delta\theta_i+e_i] + \mathbb E[\bar\theta\mid \bar\theta+e]\ \big|\ s_i\big] = \mathbb E[\varphi_i\mid s_i].
\]
Thus, it is easy to see that
\[
a_i = \xi\big(\alpha^0_1 s^1_i + \alpha^0_2 s^2_i + \alpha^0_3 s^3\big)
\]
is a Bayes Nash equilibrium when agents receive the signal structure $s_i$. For this, just note that we can replace all the variables $\theta_i$ in the equilibrium under $s^0_i$ with $\varphi_i$ in the equilibrium under $s_i$. Since $\rho_{\varphi\varphi}=\rho_{\theta\theta}$, it is also straightforward that:
\[
\operatorname{corr}(a^0_i,\theta_i) = \operatorname{corr}(a_i,\varphi_i),\qquad \operatorname{corr}(a^0_i,a^0_j) = \operatorname{corr}(a_i,a_j).
\]
Using the fact that
\[
\operatorname{var}(\varphi_i) = \xi\operatorname{var}(\theta_i),
\]
it is clear that
\[
\operatorname{var}(a_i) = \left(\frac{\rho_{a\varphi}\sigma_\varphi}{1-r\rho_{aa}}\right)^2 = \xi\operatorname{var}(a^0_i).
\]
Thus, we have that under the information structure $s_i$,
\[
\rho_{a\theta} = \frac{\operatorname{cov}(a_i,\theta_i)}{\sqrt{\operatorname{var}(a_i)}\,\sigma_\theta} = \sqrt{\xi}\,\frac{\operatorname{cov}(a^0_i,\theta_i)}{\sqrt{\operatorname{var}(a^0_i)}\,\sigma_\theta} = \sqrt{\xi}\,\tilde\rho_{a\theta}(\rho_{aa}).
\]
Since $\xi$ can be any number in $[0,1]$, we get that for all $(\rho_{aa},\rho_{a\theta})$ such that $\rho_{a\theta}\in[0,\tilde\rho_{a\theta}(\rho_{aa})]$, there exists an information structure of the form $s_i$ that decentralizes an equilibrium with correlations $(\rho_{aa},\rho_{a\theta})$.
References

Angeletos, G., and J. La'O (2013): "Sentiments," Econometrica, 81, 739–779.

Angeletos, G., and A. Pavan (2007): "Efficient Use of Information and Social Value of Information," Econometrica, 75, 1103–1142.

(2009): "Policy with Dispersed Information," Journal of the European Economic Association, 7, 11–60.

Angeletos, G.-M., L. Iovino, and J. La'O (2011): "Cycles, Gaps, and the Social Value of Information," NBER Working Paper 17229, National Bureau of Economic Research.

Benhabib, J., P. Wang, and Y. Wen (2012): "Sentiments and Aggregate Fluctuations," Discussion paper, New York University.

(2013): "Uncertainty and Sentiment-Driven Equilibria," Discussion paper, New York University.

Bergemann, D., and S. Morris (2013a): "The Comparison of Information Structures in Games: Bayes Correlated Equilibrium and Individual Sufficiency," Discussion Paper 1909, Cowles Foundation for Research in Economics, Yale University.

(2013b): "Robust Predictions in Games with Incomplete Information," Econometrica, 81, 1251–1308.

Clarke, R. (1983): "Collusion and the Incentives for Information Sharing," Bell Journal of Economics, 14, 383–394.

Guesnerie, R. (1992): "An Exploration of the Eductive Justifications of the Rational-Expectations Hypothesis," American Economic Review, 82, 1254–1278.

Hellwig, C., and V. Venkateswaran (2009): "Setting the Right Prices for the Wrong Reasons," Journal of Monetary Economics, 56, S57–S77.

Kirby, A. (1988): "Trade Associations as Information Exchange Mechanisms," RAND Journal of Economics, 19, 138–146.

Lorenzoni, G. (2010): "Optimal Monetary Policy with Uncertain Fundamentals and Dispersed Information," Review of Economic Studies, 77, 305–338.

Lucas, R. (1972): "Expectations and the Neutrality of Money," Journal of Economic Theory, 4, 103–124.

Mackowiak, B., and M. Wiederholt (2009): "Optimal Sticky Prices under Rational Inattention," American Economic Review, 99, 769–803.

Morris, S., and H. Shin (2002): "Social Value of Public Information," American Economic Review, 92, 1521–1534.

Novshek, W., and H. Sonnenschein (1982): "Fulfilled Expectations Cournot Duopoly with Information Acquisition and Release," Bell Journal of Economics, 13, 214–218.

Raith, M. (1996): "A General Model of Information Sharing in Oligopoly," Journal of Economic Theory, 71, 260–288.

Veldkamp, L., and J. Wolfers (2007): "Aggregate Shocks or Aggregate Information? Costly Information and Business Cycle Comovement," Journal of Monetary Economics, 54, 37–55.

Venkateswaran, V. (2013): "Heterogeneous Information and Labor Market Fluctuations," Discussion paper, New York University.

Vives, X. (1984): "Duopoly Information Equilibrium: Cournot and Bertrand," Journal of Economic Theory, 34, 71–94.

(1988): "Aggregation of Information in Large Cournot Markets," Econometrica, 56, 851–876.

(1990a): "Nash Equilibrium with Strategic Complementarities," Journal of Mathematical Economics, 19, 305–321.

(1990b): "Trade Association Disclosure Rules, Incentives to Share Information, and Welfare," RAND Journal of Economics.

(1999): Oligopoly Pricing. MIT Press, Cambridge.

(2013): "On the Possibility of Informationally Efficient Markets," Discussion paper, IESE Business School.