Answers to Sample Midterm Questions
EC 703
Spring 2016
1. (a) It’s easy to solve by backward induction to find that the unique subgame perfect
equilibrium is (a, cd) (where I denote 2’s strategies by listing 2’s response to a, then his
response to b). For Nash, we have to write out the normal form, which is
        cc      cd      dc      dd
  a    4, 1    4, 1    0, 0    0, 0
  b    5, 0    1, 1    5, 0    1, 1
The pure strategy Nash equilibria are (a, cd) (the subgame perfect equilibrium) and
(b, dd).
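
As a quick cross-check, the short Python sketch below enumerates the pure strategy profiles of this normal form (the payoff numbers are copied from the table above; everything else is bookkeeping) and confirms that exactly (a, cd) and (b, dd) are pure strategy Nash equilibria.

    # Pure-strategy Nash check for the normal form above.
    # Player 1 chooses a row (a or b); player 2 chooses a column (cc, cd, dc, dd).
    payoffs = {
        ("a", "cc"): (4, 1), ("a", "cd"): (4, 1), ("a", "dc"): (0, 0), ("a", "dd"): (0, 0),
        ("b", "cc"): (5, 0), ("b", "cd"): (1, 1), ("b", "dc"): (5, 0), ("b", "dd"): (1, 1),
    }
    rows, cols = ["a", "b"], ["cc", "cd", "dc", "dd"]

    def is_nash(r, c):
        u1, u2 = payoffs[(r, c)]
        best_for_1 = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
        best_for_2 = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
        return best_for_1 and best_for_2

    print([rc for rc in payoffs if is_nash(*rc)])  # [('a', 'cd'), ('b', 'dd')]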
(b) I think the easiest way to solve is to fix a strategy for 1 and see if it can be part
of an equilibrium. So let’s first see if there is an equilibrium in which 1’s strategy is a.
Let’s see what 2’s beliefs must be. Note that both of 2’s information sets are reached
with positive probability since we’ve assumed that p ∈ (0, 1). The probability 2 gives to
the top node in his left–hand information set must be 1 since it is the probability this
node is reached (1 − p) divided by the probability the information set is reached (again,
1 − p). Similarly, 2’s probability on the top node in the other information set is also 1.
Hence sequential rationality requires that 2 plays c at both information sets. Given this,
is 1’s strategy of a sequentially rational? If he plays a, his payoff is (1 − p)4 + p4 = 4.
In other words, since 2 is playing c no matter what, the randomness of Nature’s move is
really irrelevant and 1’s payoff is 4. If he deviated to b, his payoff would be 5, so this is
not sequentially rational. Hence there is no weak perfect Bayesian equilibrium where 1
plays a.
What about 1 playing b? If he does so in an equilibrium, 2’s belief at each of his
information sets must put probability 1 on the lower node. Hence 2 must play d at both
information sets. It is not hard to see that this makes b sequentially rational for 1 —
he gets 1 by playing b and 0 if he deviates. Hence (b, dd) is a weak perfect Bayesian
equilibrium.
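
As a compact check of the two candidates: in each one, 2 plays the same action at both information sets, so player 1’s expected payoff does not depend on p. The sketch below uses only the payoffs quoted in the argument above.

    # Player 1's expected payoff when 2 plays the same action at both information sets,
    # using the payoffs quoted in the argument above (the p-weighting drops out).
    u1 = {("a", "c"): 4, ("b", "c"): 5, ("a", "d"): 0, ("b", "d"): 1}

    for two_action, candidate in [("c", "a"), ("d", "b")]:
        best = max("ab", key=lambda x: u1[(x, two_action)])
        print(candidate, "best reply?", best == candidate)
    # a against c everywhere: False (b yields 5 > 4), so no wPBE where 1 plays a
    # b against d everywhere: True, consistent with (b, dd)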
2. (a) Let p be the probability that a player contributes. For a player to randomize, he
must be indifferent between contributing and not. Contributing gives a payoff of 1 − c,
while not contributing gives 0 if no one else contributes (which occurs with probability
(1 − p)^{N−1}) and 1 if someone does. So

1 − c = [1 − (1 − p)^{N−1}],

or (1 − p)^{N−1} = c. Hence p = 1 − c^{1/(N−1)}.
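
A quick numerical check of this indifference condition; N = 5 and c = 0.4 are purely illustrative values, not part of the problem.

    # Contributing yields 1 - c; not contributing yields 1 - (1 - p)**(N - 1).
    N, c = 5, 0.4
    p = 1 - c ** (1 / (N - 1))
    pay_contribute = 1 - c
    pay_not = 1 - (1 - p) ** (N - 1)
    print(round(pay_contribute, 6), round(pay_not, 6))  # both 0.6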
(b) The second period works exactly as in part (a). What about the first period? Let
p2 be the probability an agent contributes in the second period, so from part (a), we have
p2 = 1 − c^{1/(N−1)}. Let p1 be the probability an agent contributes in the first period. As
before, we have to have indifference for the agents to randomize. If an agent contributes
in period 1, his payoff is 1 − c, just as before. If he does not, then if no other agent
contributes, we know his payoff will be δ(1 − c). To see this, note that he’ll be indifferent
between contributing or not in the second period, so he’ll get 1 − c, discounted to the
present. So we have to have
1 − c = (1 − p1)^{N−1} δ(1 − c) + [1 − (1 − p1)^{N−1}].
So

(1 − p1)^{N−1} = c / [1 − δ(1 − c)],

or

p1 = 1 − [c / (1 − δ(1 − c))]^{1/(N−1)}.

Clearly, p1 < p2 if

1 − [c / (1 − δ(1 − c))]^{1/(N−1)} < 1 − c^{1/(N−1)}.
Rearranging shows that this holds iff
1 − δ(1 − c) < 1
which is true since c < 1 and δ > 0.
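
The same kind of check works for the two-period case; N = 5, c = 0.4, and δ = 0.9 are again illustrative values only.

    # p2 from part (a); p1 from the indifference condition
    # 1 - c = (1 - p1)**(N-1) * delta * (1 - c) + [1 - (1 - p1)**(N-1)].
    N, c, delta = 5, 0.4, 0.9
    p2 = 1 - c ** (1 / (N - 1))
    p1 = 1 - (c / (1 - delta * (1 - c))) ** (1 / (N - 1))
    lhs = 1 - c
    rhs = (1 - p1) ** (N - 1) * delta * (1 - c) + (1 - (1 - p1) ** (N - 1))
    print(round(lhs, 6) == round(rhs, 6), p1 < p2)  # True True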
(c) If we add periods, every period except the last one looks just like period 1 in (b).
To see this, consider a three period model. The last period is just like in part (a). The
next to last period is just like period 1 in (b). So consider the first period. As before, we
require indifference. If an agent contributes, his payoff is 1 − c, just as before. If he does
not contribute and someone else does, his payoff is 1, just as before. Finally, if he does
not contribute and no one else does, we go to the second period. Since he is randomizing
there, his payoff in the second period is 1 − c. Discounting to today, then, we get δ(1 − c).
So the equation defining the probability of contribution in the first period is unchanged
from part (b). The same would apply to any period except the last period if we have
more than 3 periods.
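
Taking the argument above at face value, a short sketch of the resulting per-period contribution probabilities for a T-period version (parameter values again purely illustrative):

    # Last period uses the part (a) probability; every earlier period uses
    # the part (b) period-1 probability.
    N, c, delta, T = 5, 0.4, 0.9, 4
    p_last = 1 - c ** (1 / (N - 1))
    p_early = 1 - (c / (1 - delta * (1 - c))) ** (1 / (N - 1))
    probs = [p_early] * (T - 1) + [p_last]
    print([round(p, 4) for p in probs])  # [0.0343, 0.0343, 0.0343, 0.2047]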
3. First, let’s look for separating equilibria. Let’s try m(t1 ) = m1 and m(t2 ) = m2 .
With these strategies for 1, 2’s reply must be a(m1 ) = a1 and a(m2 ) = a3 . Given this
behavior by 2, t1 doesn’t want to deviate since he’s getting 2 and a deviation would lead
to a payoff of 1. Similarly, t2 doesn’t want to deviate since he’s getting 1 and a deviation
would lead to a payoff of −1. Hence this is an equilibrium.
The other possibility for a separating equilibrium is m(t1 ) = m2 and m(t2 ) = m1 .
If this is 1’s strategy, 2’s best reply is a(m1 ) = a3 and a(m2 ) = a1 . But then t2 would
deviate: with these strategies, his payoff is −3, while if he deviated to m2 , he’d get 0.
Hence this is not an equilibrium.
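
The deviation checks for both separating candidates can be verified mechanically using only the sender payoffs quoted above; no other entries of the game are needed.

    # Sender payoffs u1[(type, message, receiver action)] quoted in the argument above.
    u1 = {("t1", "m1", "a1"): 2, ("t1", "m2", "a3"): 1,
          ("t2", "m2", "a3"): 1, ("t2", "m1", "a1"): -1,
          ("t2", "m1", "a3"): -3, ("t2", "m2", "a1"): 0}

    # Candidate 1: m(t1)=m1, m(t2)=m2, replies a(m1)=a1, a(m2)=a3.
    print(u1[("t1", "m1", "a1")] >= u1[("t1", "m2", "a3")],   # True: t1 stays
          u1[("t2", "m2", "a3")] >= u1[("t2", "m1", "a1")])   # True: t2 stays

    # Candidate 2: m(t1)=m2, m(t2)=m1, replies a(m1)=a3, a(m2)=a1.
    print(u1[("t2", "m1", "a3")] >= u1[("t2", "m2", "a1")])   # False: t2 deviates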
So let’s consider pooling equilibria. First, suppose m(t1 ) = m(t2 ) = m1 . Then
player 2’s beliefs in response to m1 would be the prior — that is, he’d put probability
3/4 on t1 . Hence his expected payoff to a1 would be 9/4, his expected payoff to a2 would
be 2, and his expected payoff to a3 would be 1. Hence we must have a(m1) = a1. Notice
that this means that in equilibrium (assuming we can make this an equilibrium), t1 gets
a payoff of 2, the highest payoff he can possibly get. Hence he will not deviate, regardless
of how we specify a(m2 ). However, t2 gets a payoff of −1 from m1 , so we have to be
careful how we choose a(m2 ). In particular, the only response that would prevent t2 from
deviating is if we set a(m2 ) = a2 . If a(m2 ) = a1 or a(m2 ) = a3 , then t2 would deviate to
get either 0 (in the first case) or 1 (in the second case), instead of the −1 he gets from
m1. Can we choose beliefs for player 2 that make this action sequentially rational? Let
µ be the probability 2 puts on t1 in response to m2 . For a2 to be his best response to
m2 , we must have 1 ≥ 4µ and 1 ≥ 4(1 − µ). The former implies µ ≤ 1/4, while the
latter requires µ ≥ 3/4. These requirements are inconsistent, so we cannot find a belief
which makes a2 2’s best response to m2 . Hence there is no pooling equilibrium where
both types send m1 .
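
The impossibility of supporting a(m2) = a2 can also be seen by scanning over beliefs: no µ in [0, 1] satisfies both 1 ≥ 4µ and 1 ≥ 4(1 − µ). A tiny sketch checking a grid of beliefs, just for illustration:

    # a2 is a best response to m2 only if 1 >= 4*mu and 1 >= 4*(1 - mu).
    feasible = [mu / 100 for mu in range(101)
                if 1 >= 4 * (mu / 100) and 1 >= 4 * (1 - mu / 100)]
    print(feasible)  # [] -- no belief works, so pooling on m1 fails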
The only other possibility, then, is m(t1 ) = m(t2 ) = m2 . If this is an equilibrium,
then 2’s beliefs in response to m2 must be the prior. Hence his expected payoff to a1 is 3,
his expected payoff to a2 is 1, and his expected payoff to a3 is 1. Therefore, we must have
a(m2 ) = a1 . This gives both t1 and t2 a payoff of 0 if they send message m2 . Note that
this is better than any of type t2 ’s possible payoffs from m1 , so we don’t have to worry
about him deviating. For t1 , however, we must set a(m1 ) = a2 or else he will deviate to
m1 . Can we find beliefs making this sequentially rational for 2? Yes: probability 1/2 on
each type will work. With these beliefs, his expected payoff to either a1 or a3 is 3/2, while
his expected payoff to a2 is 2. Hence m(t1 ) = m(t2 ) = m2 , a(m1 ) = a2 , and a(m2 ) = a1
is a weak perfect Bayesian equilibrium.
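
As a last check, the receiver’s two choices in this pooling equilibrium follow directly from the expected payoffs quoted above.

    # On path (after m2, prior beliefs) and off path (after m1, beliefs 1/2 on each type),
    # using the expected payoffs quoted in the argument above.
    on_path = {"a1": 3, "a2": 1, "a3": 1}          # after m2, beliefs 3/4 - 1/4
    off_path = {"a1": 1.5, "a2": 2, "a3": 1.5}     # after m1, beliefs 1/2 - 1/2
    print(max(on_path, key=on_path.get),           # a1, as required
          max(off_path, key=off_path.get))         # a2, as required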