A Theory of Cabinet-Making:
The Politics of Inclusion, Exclusion, and
Information
John Patty
Washington University in Saint Louis
September 27, 2013
The Question
When both information and decision authority are dispersed among
several agents, under what conditions can some or all of these
agents credibly share their information using “cheap talk”?
Decentralized Decision-making:
1. Multiple agents with private information,
2. Partial delegation of decision-making authority,
3. Policy decisions jointly affect all agents, and
4. Messaging relies on “cheap talk.”
The Point(s)
• Inclusion and exclusion of agents can affect the credibility of signaling between (other) agents.
• The quality of policy decisions & social welfare can justify excluding potentially informative agents.
• The inclusion of agents can aid information aggregation & social welfare even when the added agents do not themselves communicate truthfully.
• There is a potential informational, social-welfare-based rationale for excluding agents from observing the product of policy communication precisely because the excluded agents possess decision-making authority.
Primitives
• Set of n + 1 players: N = {1, 2, . . . , n + 1},
• State of nature, θ ∈ [0, 1],
• Player i ∈ N’s private information (signal): si ∈ {0, 1},
• Player i ∈ N’s policy decision: yi ∈ R,
• Player i ∈ N’s policy preference (bias): βi ∈ R,
• Player i ∈ N’s discretionary authority, αi ≥ 0.

Agent i’s payoff:

ui(y, θ; β) = − Σ_{j=1}^{n+1} αj (yj − θ − βi)².

A room is denoted by R = (N, α = {αi}_{i∈N}, β = {βi}_{i∈N}).
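As a minimal sketch of these primitives in code (the Room container and payoff function are illustrative names, not notation from the talk), the quadratic-loss payoff can be written as:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Room:
    alpha: List[float]  # discretionary authorities, alpha_i >= 0
    beta: List[float]   # policy biases, beta_i

def payoff(room: Room, y: List[float], theta: float, i: int) -> float:
    """u_i(y, theta; beta) = -sum_j alpha_j * (y_j - theta - beta_i)^2."""
    return -sum(a_j * (y_j - theta - room.beta[i]) ** 2
                for a_j, y_j in zip(room.alpha, y))

# Illustrative example: agent 0 cares about agent 1's policy in proportion to alpha_1.
r = Room(alpha=[0.0, 1.0], beta=[0.0, 0.1])
print(payoff(r, y=[0.5, 0.6], theta=0.5, i=0))  # -> -0.01
```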
Sequence of Play
1. State of nature, θ, drawn from Uniform[0, 1] distribution,
2. Each player i observes only si ,
3. Each player i simultaneously chooses mi ∈ {0, 1},
4. All players observe m = (m1 , . . . , mn+1 ),
5. Each player i simultaneously chooses policy yi ∈ R,
6. θ revealed, players receive their payoffs.
Related Models
Galeotti, Ghiglino, & Squintani (2011), Dewan & Squintani (2012),
Patty & Penn (2013), Gailmard & Patty (2013).
Information and Policymaking
si = 0 interpreted as a failure, si = 1 as a success:
Pr[si = 1|θ] = θ.
Beliefs after m trials and k successes (i.e., k occurrences of s = 1
and m − k occurrences of s = 0) are characterized by a
Beta(k + 1, m − k + 1) distribution, so that
E(θ | k, m) = (k + 1) / (m + 2), and

V(θ | k, m) = (k + 1)(m − k + 1) / [(m + 2)² (m + 3)].
Player i’s optimal policy choice, yi∗ , given k successes and m − k
failures:
yi∗(k, m) = (k + 1) / (m + 2) + βi.
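A short computational sketch of these expressions (function names are illustrative):

```python
def posterior_mean(k: int, m: int) -> float:
    """E(theta | k successes in m trials) under the uniform prior: (k+1)/(m+2)."""
    return (k + 1) / (m + 2)

def posterior_variance(k: int, m: int) -> float:
    """V(theta | k, m) = (k+1)(m-k+1) / ((m+2)^2 (m+3))."""
    return (k + 1) * (m - k + 1) / ((m + 2) ** 2 * (m + 3))

def optimal_policy(k: int, m: int, beta_i: float) -> float:
    """y_i*(k, m) = E(theta | k, m) + beta_i."""
    return posterior_mean(k, m) + beta_i

# Example: 3 successes among 4 observed signals, agent bias 0.05.
print(posterior_mean(3, 4))        # 0.666...
print(posterior_variance(3, 4))    # 0.0317...
print(optimal_policy(3, 4, 0.05))  # 0.716...
```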
Equilibrium Analysis
I analyze pure strategy Perfect Bayesian Equilibria. For each i ∈ N,
either
• mi = si (truthful/separating), or
• mi = 0 for both si ∈ {0, 1} (babbling/pooling).
Thus, an equilibrium can be entirely characterized as a partition:
N = M ∪ B,
with M ∩ B = ∅, where M is the set of truthful “messengers” and B
is the set of “babblers.”
Basic Incentive Compatibility
For any room R = (N, α, β) with |N| = n + 1, the IC conditions for
truthful messaging by agent j ∈ N are:
Σ_{i∈N−j} αi (βj − βi)² ≤ Σ_{i∈N−j} αi (βj − βi + 1/(n+3))², and

Σ_{i∈N−j} αi (βj − βi)² ≤ Σ_{i∈N−j} αi (βj − βi − 1/(n+3))².

These are satisfied if and only if

| βj − (Σ_{i∈N−j} αi βi) / (Σ_{i∈N−j} αi) | ≤ 1 / (2(n + 3)).
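A minimal sketch of this per-agent check (the function name truthful_ic is illustrative): agent j’s bias gap relative to the authority-weighted average of the other agents’ biases is compared to 1/(2(n + 3)).

```python
def truthful_ic(j, alpha, beta):
    """IC for truth-telling by agent j when all n + 1 agents report truthfully:
    |beta_j - authority-weighted average of the others' beta| <= 1/(2(n+3))."""
    n_plus_1 = len(alpha)
    others = [i for i in range(n_plus_1) if i != j]
    total_authority = sum(alpha[i] for i in others)
    if total_authority == 0:
        return True  # nobody else's policy responds to j's message
    weighted_avg_bias = sum(alpha[i] * beta[i] for i in others) / total_authority
    return abs(beta[j] - weighted_avg_bias) <= 1 / (2 * (n_plus_1 + 2))

# Two agents (n + 1 = 2), so the bound is 1/8:
print(truthful_ic(0, [0.5, 0.5], [0.0, 0.10]))  # True  (gap 0.10 <= 0.125)
print(truthful_ic(0, [0.5, 0.5], [0.0, 0.20]))  # False (gap 0.20 >  0.125)
```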
Equilibrium Existence
There is always a babbling equilibrium: B = N.
Let E (R) ⊆ Σ(R) denote the set of pure strategy equilibria for a
room R.
For any subset of agents M ⊆ N, an equilibrium is M-truthful if it
satisfies
∀i ∈ M, µi(0) = 1 − µi(1), and ∀j ∈ N − M, µj(0) = µj(1).    (1)
An equilibrium e ∈ E (R) is referred to as completely truthful if it is
N-truthful.
Existence of a Completely Truthful Equilibrium
The structure of the problem yields a simple necessary and
sufficient condition for the existence of a completely truthful
equilibrium for a given situation R = (N, α, β).
Proposition. For any strategic situation R = (N, α, β), a
completely truthful equilibrium exists for R if and only if


max_{j∈N} | Σ_{i∈N−j} αi (βj − βi) | / Σ_{i∈N−j} αi ≤ 1 / (2(n + 3)).    (2)
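Since Inequality (2) just requires the per-agent condition above to hold for every j ∈ N, existence of a completely truthful equilibrium can be checked with a single loop; a sketch (names illustrative):

```python
def completely_truthful_exists(alpha, beta):
    """Inequality (2): the largest authority-weighted bias gap across agents
    must be at most 1/(2(n+3)), where n + 1 = len(alpha)."""
    n_plus_1 = len(alpha)
    bound = 1 / (2 * (n_plus_1 + 2))
    for j in range(n_plus_1):
        others = [i for i in range(n_plus_1) if i != j]
        total = sum(alpha[i] for i in others)
        if total == 0:
            continue
        gap = abs(sum(alpha[i] * (beta[j] - beta[i]) for i in others)) / total
        if gap > bound:
            return False
    return True

print(completely_truthful_exists([0.5, 0.5], [0.0, 0.10]))  # True
print(completely_truthful_exists([0.5, 0.5], [0.0, 0.20]))  # False
```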
Existence of an Incompletely Truthful Equilibrium
Even if Inequality (2) does not hold, there can exist “incompletely
truthful” equilibria in which only M ⊂ N are truthful.
Letting m = |M| < n + 1 denote the number of truthful agents in a
profile, the condition for such an equilibrium is, ∀j ∈ M :
| Σ_{i∈M−j} αi (βj − βi) / (m + 2) + Σ_{k∈N−M} αk (βj − βk) / (m + 3) |
    ≤ Σ_{i∈M−j} αi / (2(m + 2)²) + Σ_{k∈N−M} αk / (2(m + 3)²).    (3)
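A sketch of Condition (3) as reconstructed above, checking every truthful agent j ∈ M against the mixed 1/(m+2) and 1/(m+3) weights (function and variable names are illustrative):

```python
def m_truthful_ic(M, alpha, beta):
    """Condition (3): IC for every truthful agent j in M when the agents in
    N - M babble but still use their own signals when choosing policy."""
    N = set(range(len(alpha)))
    M = set(M)
    babblers = N - M
    m = len(M)
    for j in M:
        lhs = (sum(alpha[i] * (beta[j] - beta[i]) for i in M - {j}) / (m + 2)
               + sum(alpha[k] * (beta[j] - beta[k]) for k in babblers) / (m + 3))
        rhs = (sum(alpha[i] for i in M - {j}) / (2 * (m + 2) ** 2)
               + sum(alpha[k] for k in babblers) / (2 * (m + 3) ** 2))
        if abs(lhs) > rhs:
            return False
    return True

# Agents 0 and 1 truthful, agent 2 babbling:
print(m_truthful_ic({0, 1}, [0.3, 0.3, 0.4], [0.0, 0.02, 0.05]))  # True
```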
Incompletely Truthful Equilibria, continued
The difference between the IC constraints for completely and incompletely truthful equilibria is due to the fact that any agent j who babbles (i.e., j ∈ N − M) will nonetheless use his or her own signal, sj, in ultimately setting yj.
Furthermore, this fact is known by all those who signal truthfully in the equilibrium in question (i.e., all agents i ∈ M).
Thus, the manipulative impact of a truthful agent’s message varies
across other agents, depending on whether those agents are
babbling or not.
Incompletely Truthful Equilibria, continued
As another way of picturing the importance of babblers’ presence in the room, note that it is not in general the case that one can shrink the set of players in a game possessing an M-truthful equilibrium and still construct a truthful equilibrium of any size in the smaller game.
Write R = (N, α, β) ⊂ R′ = (N′, α′, β′) if there is a mapping f : N → N′ such that for all i ≠ j ∈ N, f(i) ≠ f(j), αi = α′_{f(i)}, and βi = β′_{f(i)}.
Proposition. There exist rooms R = (N, α, β) and R′ = (N′, α′, β′) with R ⊂ R′ and R′ possessing an N′-truthful equilibrium, but R not possessing an M-truthful equilibrium for any nonempty M ⊆ N.
Intermediaries & Communication with Decentralized
Decision-Making
• The presence of an agent with an intermediate bias can support truthful communication between agents with relatively extreme preferences.
• In some ways, this mirrors other results regarding the effect of intermediaries on communication between agents (e.g., Kydd (2003), Ganguly & Ray (2006), Goltsman, et al. (2009), Ivanov (2010)).
• However, the logic in this context is different: here, the intermediary does not obfuscate earlier messages.
• Instead, the intermediary’s presence in the room supports truthful communication because of their independent decision-making authority.
Welfare Analysis
Consider the following formulation of ex ante expected social
welfare from an equilibrium e ∈ E (R):
SW(e; R) = − Σ_{i∈N} αi Ee[(yi − βi − θ)²],    (4)
In an M-truthful equilibrium, Equation (4) reduces to

SW(e, R) = − Σ_{i∈M} αi / (6|M| + 12) − Σ_{i∈N−M} αi / (6|M| + 18).
Thus, the ex ante expected social welfare from an M-truthful equilibrium is higher than that from an M′-truthful equilibrium if and only if M contains more agents than M′.
Proposition. For any room R and equilibria e ∈ E(R) and e′ ∈ E(R), where e is M-truthful and e′ is M′-truthful,
|M| > |M′| ⇒ SW(e; R) > SW(e′; R).
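As a sketch, the reduced welfare expression is easy to compute directly (names illustrative). The babblers’ loss term has the larger denominator, 6|M| + 18, because a babbler conditions on the |M| truthful messages plus their own signal; this is why, holding |M| fixed, welfare is higher when relatively high-authority agents occupy the babbling roles.

```python
def sw_m_truthful(M, alpha):
    """SW of an M-truthful equilibrium:
    -sum_{i in M} alpha_i/(6|M|+12) - sum_{i in N-M} alpha_i/(6|M|+18)."""
    N = set(range(len(alpha)))
    M = set(M)
    m = len(M)
    return (-sum(alpha[i] for i in M) / (6 * m + 12)
            - sum(alpha[i] for i in N - M) / (6 * m + 18))

# With equal authorities, enlarging the truthful set raises welfare:
print(sw_m_truthful({0}, [0.5, 0.5]))     # -0.0486...
print(sw_m_truthful({0, 1}, [0.5, 0.5]))  # -0.0416...
```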
Social Ranking of Equilibria
For any room R = (N, α, β) and any strategy profile s ∈ S(R), the
ex ante expected payoff for agent i ∈ N from s is denoted by
vi (s, R).
As is common in cheap-talk games (Crawford & Sobel (1982)), the
pure strategy equilibria for any room R are also Pareto-ranked
according to SW (e, R).
Proposition. For any room R and equilibria e ∈ E(R) and e′ ∈ E(R), where e is M-truthful and e′ is M′-truthful,
|M| > |M′| ⇒ {i ∈ N : vi(e, R) > vi(e′, R)} = N.
Optimal Rooms
The maximum equilibrium social welfare in a room R is denoted by
SW(R) = max_{e∈E(R)} SW(e, R).
The optimal room problem centers on “what room maximizes
SW(R)?”
Let G = (G , A, B) denote the latent group from which a room must
be constructed:
• G is an index set of the agents in the group,
• A is a profile of |G| (exogenous) authorities, A = {αi}_{i∈G}, and
• B is a profile of |G| preference biases, B = {βi}_{i∈G}.
• A special agent, the convener, is denoted by c ∈ G and, without loss of generality, B is normalized so that βc = 0.
Optimal Rooms, continued
The problem. The convener must partition the set of agents into two sets, G = N ∪ O, with N ∩ O = ∅ and c ∈ N, where N denotes the set of individuals inside the room and O denotes the individuals left “outside.”
This constraint is meaningful in a couple of ways.
• The requirement that the convener be in the room is a binding constraint in some settings,
• It rules out “multiple room” designs: even when each room is constrained to contain the convener (so that rooms may overlap), a multiple-room design can dominate the best single-room design.
Optimal Rooms, continued
Optimization goal:
• Benevolent optimization: maximize the “maximum ex ante equilibrium social welfare”:

W_B(R, O) = SW(R) − Σ_{i∈O} αi / 18.
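Putting the pieces together, here is a brute-force sketch of the benevolent room-choice problem for small groups. It assumes the reconstructed Condition (3) and the reduced welfare expression above are applied as written, and it only searches over candidate truthful sets (a complete analysis would also verify the supporting policy strategies); all function and variable names are illustrative.

```python
from itertools import combinations

def ic_ok(M, room, alpha, beta):
    """Condition (3) for every truthful j in M within the given room; an empty M
    (the babbling equilibrium) always passes."""
    babblers, m = set(room) - set(M), len(M)
    for j in M:
        lhs = (sum(alpha[i] * (beta[j] - beta[i]) for i in set(M) - {j}) / (m + 2)
               + sum(alpha[k] * (beta[j] - beta[k]) for k in babblers) / (m + 3))
        rhs = (sum(alpha[i] for i in set(M) - {j}) / (2 * (m + 2) ** 2)
               + sum(alpha[k] for k in babblers) / (2 * (m + 3) ** 2))
        if abs(lhs) > rhs:
            return False
    return True

def sw_room(room, alpha, beta):
    """SW(R): maximum equilibrium welfare over supported truthful sets M."""
    best = float("-inf")
    for size in range(len(room) + 1):
        for M in combinations(sorted(room), size):
            if ic_ok(M, room, alpha, beta):
                m = len(M)
                sw = (-sum(alpha[i] for i in M) / (6 * m + 12)
                      - sum(alpha[i] for i in set(room) - set(M)) / (6 * m + 18))
                best = max(best, sw)
    return best

def best_room(alpha, beta, convener=0):
    """Benevolent optimum: maximize W_B(R, O) = SW(R) - sum_{i in O} alpha_i/18
    over all rooms that contain the convener."""
    G = set(range(len(alpha)))
    others = sorted(G - {convener})
    best = None
    for size in range(len(others) + 1):
        for extra in combinations(others, size):
            room = {convener, *extra}
            wb = sw_room(room, alpha, beta) - sum(alpha[i] for i in G - room) / 18
            if best is None or wb > best[1]:
                best = (room, wb)
    return best

# Illustration with a three-member group, convener indexed 0:
print(best_room(alpha=[0.2, 0.4, 0.4], beta=[0.0, -0.03, 0.03]))
# -> ({0, 1, 2}, -0.0333...): here everyone can be truthful, so the full room is best.
```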
Benevolent Optimization
First, benevolent optimization is not equivalent to choosing the
room that supports an equilibrium that maximizes the number of
truthful agents.
Proposition. There exist groups G for which some room R′ ⊆ G supports an M-truthful equilibrium in which M contains strictly more agents than are truthful in any truthful equilibrium supported by the optimal room under the benevolent optimization goal, R_B(G).
Benevolent Optimization, continued
Second, benevolent optimization can lead to the choice of a room
in which one or more agents in the room are nonetheless
uninformative.
Proposition. There exist groups G such that the equilibrium
offering maximum ex ante expected social welfare in the optimal
room under the benevolent optimization goal,
R_B(G) = (N, α, β) ⊆ G, is an M-truthful equilibrium for some
M ⊂ N.
Benevolent Optimization, continued
Third, when comparing equilibria with equal numbers of truthful
agents, social welfare will in general depend on the exact
assignment of agents to truth-telling and babbling roles. This fact
produces a succinct characterization of the socially optimal
equilibrium in any given room.
Proposition. For any room R and M-truthful equilibrium
e ∈ E (R), if
SW (e, R) = SW(R)
then i ∈ M and j ∈ N − M implies that
αi ≤ αj .
Optimal Rooms: An Example.
Example. [Authority Trumps Information.] Suppose that the group
G contains 10 agents, G = {c, 1, 2, . . . , 9}, with preferences and
authorities as follows:
αc = 0.10,
βc = 0,
α1 = 0.80,
β1 = −0.11,
α2 = α3 = . . . = α8 = α9 = 0.0125,
β2 = β3 = . . . = β8 = β9 = 0.04.
In this situation, one can verify that R = N (excluding no agents
from the room) is not compatible with a completely truthful
equilibrium.
Benevolent optimization calls for choosing the room to include
agents {c, 1} and exclude all other agents.
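A quick numerical check of the first claim, using Inequality (2): agent 1’s incentive constraint is already violated in the full room (the indexing below is illustrative, with the convener first).

```python
# Authorities and biases for c, agent 1, and agents 2-9:
alpha = [0.10, 0.80] + [0.0125] * 8
beta  = [0.00, -0.11] + [0.04] * 8

# Agent 1's authority-weighted bias gap vs. the bound 1/(2(n+3)) with n + 1 = 10:
others = [i for i in range(10) if i != 1]
gap = abs(sum(alpha[i] * (beta[1] - beta[i]) for i in others)) / sum(alpha[i] for i in others)
print(gap, 1 / 24, gap <= 1 / 24)
# 0.13  0.0416...  False -> no completely truthful equilibrium in R = N
```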
Optimal Rooms: Stepping Out of the Room
Example. Suppose that the group G consists of 3 agents,
G = {c, 1, 2}, with preferences and authorities as follows:
αc = 0.15,
βc = 0,
α1 = 0.45,
β1 = −0.06,
α2 = 0.4,
β2 = 0.06.
Benevolent optimization calls for choosing the room in this
situation: R = {c, 1}.
However, if one does not require that the convener include himself or herself in the room, there is a better room: R′ = {1, 2}.
This room dominates R = {c, 1} from a social welfare perspective because, while the amount of information transmitted (2 messages) is identical in R and R′, use of R′ implies that the information is used by decision-makers with greater combined authority than in R.
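A numerical check of this comparison: both two-agent rooms satisfy Inequality (2) (each pairwise bias gap is at most 0.12 ≤ 1/8), so SW(R) is attained by the completely truthful equilibrium, in which each insider’s expected loss is 1/24 and each outsider’s is 1/18. A sketch (variable names illustrative):

```python
alpha = {"c": 0.15, "1": 0.45, "2": 0.40}
# Biases: beta_c = 0, beta_1 = -0.06, beta_2 = 0.06; every pairwise gap is <= 1/8.

def w_b(room):
    """W_B(R, O) for a two-agent room with a completely truthful equilibrium:
    insiders lose 1/24 per unit of authority, outsiders 1/18."""
    inside = sum(a for i, a in alpha.items() if i in room)
    outside = sum(a for i, a in alpha.items() if i not in room)
    return -inside / 24 - outside / 18

print(w_b({"c", "1"}))  # -0.0472...
print(w_b({"1", "2"}))  # -0.0437..., higher: R' = {1, 2} dominates R = {c, 1}
```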
What Kinds of Agents Are Problematic?
Information aggregation through “in the room” messaging is most unambiguously hindered by the inclusion of a sufficiently extreme new agent with positive decision-making authority.
Proposition. Consider two rooms R = (N, α, β) and R′ = (N′, α′, β′) with R ⊂ R′. If SW(R′) < SW(R), then there exists j ∈ N′ − N such that αj > 0.
Reducing maximal equilibrium welfare through the introduction of new agents to a room requires that at least one of the new agents has independent decision-making authority: adding new agents can unambiguously reduce social welfare only if the new agents include some “listeners” whose preferences differ from those of one or more of the existing agents and who also possess independent decision-making authority.
Extensions
• Multiple Rooms (sequential policymaking? Patty & Penn (2013))
• Tying Messages to Actions (delegated discretion? Gailmard & Patty (2013))
Conclusions
• Information and authority are frequently dispersed in real-world policymaking organizations.
• The inclusion of agents within the room—even if the new agents are not identical to any of the other agents already within the room and/or even when the added agents do not themselves communicate truthfully—can aid information aggregation and social welfare.
• The optimal room design need not maximize the level of information that can be aggregated in equilibrium and, for analogous reasons, the optimal room might purposely exclude one or more decision-makers precisely because they possess “too much” decision-making authority.
• Informational motivations (and hence social-welfare considerations) can in some cases justify excluding agents with exogenous and independent decision-making authority.