MA200.2 Game Theory II, LSE

Answers to Problem Set 1
[1] In part (i), proceed as follows. Suppose that we are doing 2’s best response to 1. Let p
be probability that player 1 plays U . Now if player 2 chooses L, her payoff is
pb + (1 − p)f
while if she chooses R, her payoff is
pd + (1 − p)h.
When p = 0, this boils down to a comparison of f with h, while if p = 1, this boils down to
a comparison of b with d.
(a) If f −h and b−d have the same sign, then player 2 will always want to choose L (assuming
f > h and b > d) or will always want to choose R (assuming f < h and b < d), regardless of
the value of p. So the best response is “flat” at the same value of q (1 or 0).
(b) If f > h and b < d, then player 2 will prefer L below some cutoff value of p and R above,
where the cutoff is given by
p̄b + (1 − p̄)f = p̄d + (1 − p̄)h.
So the best response looks like this: it is q = 1 for p < p̄, q = 0 for p > p̄, and q is any value
between 0 and 1 when p = p̄.
The other cases are basically identical to (a) and (b). Notice that I have neglected possible
equalities (such as f = h) but you do these in a very similar way.
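The cutoff logic in case (b) is easy to verify numerically. A minimal Python sketch; the particular payoff numbers b, d, f, h below are assumptions chosen to satisfy f > h and b < d, not values from the problem:

```python
# Player 2's expected payoffs, with p = Prob(player 1 plays U):
#   L yields p*b + (1 - p)*f,  R yields p*d + (1 - p)*h.
def best_response_q(p, b, d, f, h):
    payoff_L = p * b + (1 - p) * f
    payoff_R = p * d + (1 - p) * h
    if payoff_L > payoff_R:
        return 1.0   # q = 1: play L for sure
    if payoff_L < payoff_R:
        return 0.0   # q = 0: play R for sure
    return 0.5       # indifferent: any q in [0, 1] is a best response

# Case (b): f > h and b < d, so there is a cutoff solving
# p*b + (1 - p)*f = p*d + (1 - p)*h, i.e. p = (f - h)/((f - h) + (d - b)).
b, d, f, h = 1.0, 3.0, 2.0, 0.0   # assumed illustrative payoffs
cutoff = (f - h) / ((f - h) + (d - b))
print(cutoff)                                  # 0.5 for these numbers
print(best_response_q(0.25, b, d, f, h))       # below the cutoff: q = 1
print(best_response_q(0.75, b, d, f, h))       # above the cutoff: q = 0
```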
(ii) You will need a − c and e − g to have the same sign, and f − h and b − d to have the
same sign. This ensures that each player has a strictly dominant strategy.
(iii) Suppose, on the contrary, that (U, L) is a pure strategy Nash equilibrium. Then a ≥ e
and b ≥ d.
Case 1. b = d. Then it must be that g > c, otherwise (U, R) would be another Nash
equilibrium. But then f > h, otherwise (D, R) would be Nash. But this means that to avoid
(D, L) being Nash, a > e.
So in this case, a > e and b = d. Now let q be the probability of the Column player playing
L. For all q ∈ (0, 1) but close enough to 1,
qa + (1 − q)c > qe + (1 − q)g,
so playing U for Row is a strictly best response. And if so, Column is indifferent between L
and R (since b = d), so all pairs (p, q) with p = 1 and q sufficiently close to 1 are also Nash,
a contradiction to uniqueness.
Case 2. b > d. But then, because there are no strictly dominant strategies, h ≥ f . But then,
to avoid (D, R) being Nash, it must be that c > g. But there are no dominant strategies, so
that a = e (remember that a ≥ e to start with, so this is the only alternative). Now apply
the last couple of lines in Case 1 starting from a = e (instead of b = d) to get a similar
contradiction.
[2] The best way to do these is to simply plot the best responses for each player on the same
graph and then check out the intersections. I do this for the first example, the Battle of the
Sexes. In the figure below, p is the probability of the Row player playing U and q is the
probability of the Column player playing L. Note that there are three equilibria, two in pure
and one in mixed, in which p = q = 1/3.
[Figure: the two best-response correspondences plotted in (q, p)-space; they intersect at the two pure equilibria and at the mixed equilibrium p = q = 1/3.]
You can (and should) do the rest yourself.
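The indifference calculations behind the mixed equilibrium can be automated for any 2×2 game. The payoff matrices below are assumptions (the problem's exact Battle of the Sexes matrix is not reproduced here), scaled so that the mixed equilibrium comes out at p = q = 1/3 as in the figure:

```python
def mixed_equilibrium(A, B):
    """Mixed equilibrium of a 2x2 game: A is Row's payoff matrix, B is Column's.
    Row plays U with probability p; Column plays L with probability q.
    p makes Column indifferent between L and R; q makes Row indifferent."""
    p = (B[1][1] - B[1][0]) / ((B[0][0] - B[0][1]) - (B[1][0] - B[1][1]))
    q = (A[1][1] - A[0][1]) / ((A[0][0] - A[1][0]) - (A[0][1] - A[1][1]))
    return p, q

# Assumed payoffs with (U, L) and (D, R) as the two pure coordination outcomes,
# chosen so that the mixed equilibrium is p = q = 1/3.
A = [[2, 0], [0, 1]]   # Row's payoffs
B = [[2, 0], [0, 1]]   # Column's payoffs
print(mixed_equilibrium(A, B))   # (1/3, 1/3)
```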
[3] This is a classical model in political science. First take n = 2. Define the unique position
x∗ such that half the population have ideal points to the left of x∗ and the other half have
ideal points to the right of it. Because there is a continuum of citizens and ideal points have
a density function, there is no ambiguity about this definition.
Now consider all possible combinations of pure strategies, call them (x1 , x2 ). Without loss
of generality we can suppose that x1 ≤ x2 .
Case 1. x1 < x2 . Now there are two subpossibilities. First, each of the candidates is getting
an equal number of votes; there is an electoral tie. But then neither player can be playing
a best response to the other. For instance, if x1 is increased slightly by ε (so that x1 + ε is
still less than x2), then candidate 1 will not lose a single original voter, will gain some (why?),
and so will win the election for sure rather than with probability 1/2.
Second, one of the candidates, say 1, is a clear winner. But the other candidate can assure
herself a 50-50 chance by simply selecting the same position x1 . Thus it is not possible for
two candidates to have distinct positions in Nash equilibrium.
Case 2. x1 = x2 . Again two subpossibilities. One is that this common value is not equal
to x∗ , the median defined above. But then any of the players can deviate slightly in the
direction of x∗ and get more than half the votes (why?), guaranteeing a victory. In contrast,
with x1 = x2 each player’s chances of victory are only 50-50.
The only remaining possibility is that x1 = x2 = x∗ , where both parties converge to the
median voter. This is a Nash equilibrium, and it's the only one, as we have seen.
If n ≥ 3 there is no Nash equilibrium in pure strategies. It is very easy to see this using the
methods above. Essentially, one looks through different cases and gets a profitable deviation
each time. You may be wondering how the equivalent of median voter convergence gets ruled
out. Well, imagine that xi = x∗ for every i, so each candidate’s chances are 1/n. Now have
any candidate deviate by choosing a position slightly to the right of x∗ . Notice that she will
now get everyone to the right of her, which is almost half the population. In contrast, the
voters to her left are split up among the remaining candidates (some even stay with her), so
that none of her rivals can get more than a quarter. So she wins for sure. [Go back again to
case 2 above and see why this argument does not work when n = 2.]
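The deviation arguments can be checked by brute force. A sketch under an assumed uniform electorate on [0, 1] (so x∗ = 1/2); each voter supports the nearest candidate, with exact ties split evenly:

```python
# Discretized electorate with ideal points uniform on [0, 1] (an assumption).
voters = [i / 10000 for i in range(10001)]

def shares(positions):
    # Each voter supports the nearest candidate; ties split equally.
    counts = [0.0] * len(positions)
    for v in voters:
        dists = [abs(v - x) for x in positions]
        m = min(dists)
        winners = [j for j, d in enumerate(dists) if d == m]
        for j in winners:
            counts[j] += 1 / len(winners)
    return [c / len(voters) for c in counts]

# n = 2: both at the median produce an exact 50-50 tie ...
print(shares([0.5, 0.5]))
# ... but with n = 3 all at the median, a small move right wins outright:
# the deviator takes almost half, while her rivals split the rest.
print(shares([0.5, 0.5, 0.51]))
```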
[4] (a) Look at the first-order conditions for best responses. The important point here is
that you should not blindly write down first-order equality conditions describing a Nash
equilibrium. Suppose you do that here: you will get the absurd result that
λi f′(e1 + · · · + en ) = 1,
which, of course, cannot hold simultaneously for all the different values of λi ! Of course,
once you see this you will understand right away that in any equilibrium e∗ , e∗i is positive
only if λi is the largest among all the λ’s. So in any pure strategy Nash equilibrium, the
“shareholders” who have a share lower than the maximum share — call this share M — put
in zero effort, and the maximum shareholders put in any combination of efforts so that total
effort E solves the equation
M f′(E) = 1.
(b) Notice that none of the above attains the efficient outcome unless there is one person
who gets the entire share! In that case, M = 1 and the last equation in part (a) guarantees
that the outcome is efficient.
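Solving M f′(E) = 1 is a one-line root-finding exercise once a production function is specified. A sketch under the assumed form f(E) = 2√E (so f′(E) = 1/√E and the solution is E = M²); neither the function nor the value of M comes from the problem:

```python
import math

M = 0.6                                 # assumed maximum share
fp = lambda E: 1 / math.sqrt(E)         # f'(E) for the assumed f(E) = 2*sqrt(E)

# Bisection on g(E) = M*f'(E) - 1, which is strictly decreasing in E.
lo, hi = 1e-9, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if M * fp(mid) - 1 > 0:             # still above the root: move right
        lo = mid
    else:
        hi = mid
E = (lo + hi) / 2
print(E)                                # E = M**2 = 0.36 for this f
```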
(c) Notice that high inequality is conducive to efficiency in this example. But efficiency just
means Pareto optimality: there is no other combination of efforts that can make every person
better off. [At this stage, you should convince yourself that every Nash equilibrium in any
situation in which at least two persons get a positive share is inefficient in this sense.]
Pareto-optimality isn’t everything in life, however, because the outcome is highly inequitable.
The result also depends on the assumption that output is a function of the sum of efforts. If
output depends in other ways on effort, then efficiency and inequality are not closely related
anymore. If you are interested in more on this, look at
http://www.econ.nyu.edu/user/debraj/Papers/BDR03.pdf
[5] Let em be the amount of love and care put in by Mum, and ed the amount put in by Dad.
Mum wants to maximize
m(em + ed ),
while Dad wants to maximize
d(em + ed ).
4
By the same argument as in case (a) of the previous question, both parents cannot simultaneously set their first-order conditions equal to zero (because they have different maximizers).
So one parent must put in zero love and care and the other the whole amount. Now show
that if Mum has the larger maximizer she must put in all the effort.
[6] [a] Consider the maximization problem:
max Σi [u(ci ) − v(ei )]
subject to
Σi ci ≤ f (Σi ei ),
where all sums run over i = 1, . . . , n.
Of course you can use Lagrangeans to do this, but a simpler way is to first note that all ci ’s
must be the same. For if not, transfer some from a larger ci to a smaller cj : by the strict
concavity of u the maximand must go up. The argument that all the ei ’s must be the same is
just the same: again, proceed by contradiction and transfer some from larger ei to smaller ej .
By the strict concavity of −v the maximand goes up. Note in both cases that the constraint
is unaffected.
So we have the problem:
max over e of u(f (ne)/n) − v(e),
which (for an interior solution) leads to the necessary and sufficient first-order condition
u′(c∗ )f′(ne∗ ) = v′(e∗ ), where c∗ = f (ne∗ )/n.
[b] The (symmetric) equilibrium values ĉ and ê will satisfy the FOC
(1/n)u′(ĉ)f′(nê) = v′(ê).
It is easy to see that this leads to underproduction (and underconsumption) relative to the
first best. For if (on the contrary) nê ≥ ne∗ , then ĉ ≥ c∗ also. But then by the curvature of
the relevant functions, both sets of FOCs cannot simultaneously hold.
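The underproduction claim can also be verified numerically. A sketch with assumed functional forms u(c) = ln c, f(E) = 2√E, v(e) = e²/2 and n = 4; none of these come from the problem, and any strictly concave u, f with convex v would do:

```python
import math

n = 4
f  = lambda E: 2 * math.sqrt(E)     # strictly concave production (assumed)
fp = lambda E: 1 / math.sqrt(E)     # f'
up = lambda c: 1 / c                # u'(c) for u(c) = ln(c)
vp = lambda e: e                    # v'(e) for v(e) = e**2 / 2

def bisect(g, lo, hi, iters=200):
    # Root of g on [lo, hi], assuming g(lo) > 0 > g(hi).
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# First best:  u'(f(ne)/n) f'(ne) = v'(e)
e_star = bisect(lambda e: up(f(n * e) / n) * fp(n * e) - vp(e), 1e-6, 10.0)
# Equal-split equilibrium:  (1/n) u'(f(ne)/n) f'(ne) = v'(e)
e_hat = bisect(lambda e: (1 / n) * up(f(n * e) / n) * fp(n * e) - vp(e), 1e-6, 10.0)

print(e_star, e_hat)   # e_hat < e_star: underproduction in equilibrium
```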
[c] Each person chooses e to maximize
u([β(1/n) + (1 − β) e/(e + E−)] f (e + E−)) − v(e),
where E − denotes the sum of other efforts. Let (c, e) denote the best response. Write down
the FOC which are necessary and sufficient for a best response:
u′(c)[(β(1/n) + (1 − β) e/(e + E−)) f′(e + E−) + (1 − β)(E−/(e + E−)²) f (e + E−)] = v′(e).
Now impose the symmetric equilibrium condition that (c, e) = (c̃, ẽ) and E− = (n − 1)ẽ.
Using this in the FOC above, we get
u′(c̃)[(1/n) f′(nẽ) + (1 − β)(n − 1) f (nẽ)/(n²ẽ)] = v′(ẽ).
Examine this for different values of β. In particular, at β = 1 we get the old equilibrium
which is no surprise. The interesting case is when β is zero (all output divided according
to work points). Then you should be able to check that
u′(c̃)f′(nẽ) < v′(ẽ)!
[Hint: To do this, use the strict concavity of f , in particular the inequality that f (x) > xf′(x)
for all x > 0.]
But the above inequality means that you have overproduction relative to the first best. To
prove this, simply run the underproduction proof in reverse and use the same sort of logic.
You should also be able to calculate the β that gives you exactly the first best solution.
Notice that it depends only on the production function and not on the utility function.
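Equating the bracketed term in the symmetric-equilibrium FOC with f′(nẽ) gives 1 − β = nẽ f′(nẽ)/f(nẽ), which involves only the production function. A numerical check under the same assumed forms as the sketch above (for f(E) = 2√E the elasticity Ef′(E)/f(E) equals 1/2, so β = 1/2):

```python
import math

n, beta = 4, 0.5                    # beta = 1/2 because E f'(E)/f(E) = 1/2 here
f  = lambda E: 2 * math.sqrt(E)     # assumed production function
fp = lambda E: 1 / math.sqrt(E)
up = lambda c: 1 / c                # u(c) = ln c (assumed)
vp = lambda e: e                    # v(e) = e**2 / 2 (assumed)

def solve(g, lo=1e-6, hi=10.0, iters=200):
    # Bisection, assuming g(lo) > 0 > g(hi).
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# First-best FOC:  u'(f(ne)/n) f'(ne) = v'(e)
e_star = solve(lambda e: up(f(n * e) / n) * fp(n * e) - vp(e))
# Symmetric-equilibrium FOC with work points:
#   u'(c)[(1/n) f'(ne) + (1 - beta)(n - 1) f(ne)/(n^2 e)] = v'(e)
e_til = solve(lambda e: up(f(n * e) / n) * ((1 / n) * fp(n * e)
              + (1 - beta) * (n - 1) * f(n * e) / (n ** 2 * e)) - vp(e))

print(e_star, e_til)   # equal: beta = 1/2 restores the first best for this f
```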
[d] Think about it!
[7] (a)
[Game tree: Player 1 first chooses A or B. A ends the game with payoffs (6, 0, 6). If 1 plays B, player 2 chooses C or D; C ends the game with payoffs (8, 6, 8). After D, players 1 and 3 play a simultaneous coordination game, each choosing g or h: if their choices match, payoffs are (7, 10, 7); if they mismatch, payoffs are (0, 0, 0).]
(b) If player 2 plays C with probability 1 then we are done; 1 must initially play B. So all
it remains to do is check when player 2 will want to play D with positive probability. This
will happen if player 2 anticipates that in the subsequent coordination game between players
1 and 3, one of the two pure strategy equilibria (it does not matter which one) is played. In
that case 2 will strictly prefer to play D. But in these pure coordination outcomes player 1
gets 7, so once again he will prefer to play B right at the beginning.
[If player 2 anticipates the only remaining equilibrium in the coordination game, which yields
an expected payoff to her of 5, she will want to play C, and this case has already been covered.]
(c) Suppose that player 2 anticipates that in the subsequent coordination game one of the
two pure strategy equilibria will be played, but that player 1 anticipates the 50-50 mixed
strategy equilibrium. In that case player 2 will, indeed, play D (anticipating a payoff of 10),
but along this path player 1 only anticipates an expected payoff of 3.5 (why?). So under
these beliefs, player 1 will play A.
This possibility is ruled out by the definition of a subgame perfect equilibrium because it is
assumed that players have common beliefs about the strategy profile that will be played in
the game.
[8] Version (i): the auditor's strategy space is { Audit, Not Audit }; the individual's space is
all maps from the above space to { Evade, Not Evade }. Version (ii): the auditor's strategy
space is [0, 1], where p ∈ [0, 1] represents the probability of audit; the individual's space is
all maps from the above space to { Evade, Not Evade }. Version (iii): the auditor's strategy
space is { Audit, Not Audit }; the individual's space is { Evade, Not Evade }.
Solving these games is very easy. The important point is to note the different interpretations
of a probability of audit: in (ii), it is a pure strategy, in (iii) it is a mixed strategy.
[9] Basically, we worked this out in class. We showed that in a single-entrant problem the
entrant would enter and the incumbent would not fight. Now you should be able to solve the
n-entrant problem by using the same logic of backward induction as in the centipede game.
This should show that no matter how many entrants there are (as long as the number is
finite), each entrant will enter and the incumbent will accommodate.
[10] [OR Exercise 101.3.] Notice that if Army 1 has strictly more battalions than Army 2,
then this will never change over the course of the game no matter what they do. In this
case the unique equilibrium of the game is for Army 1 to attack when it can (in these cases
K ≥ 2) and for 2 never to attack. In this equilibrium Army 1’s payoff is K − 1 + x, where
x > 1 is the payoff from occupation. By deviating it can get only K. For Army 2 it suffices
to consider a one-shot deviation. If it is in an attacking position, then for a profitable attack
it must be that L ≥ 2 (otherwise it cannot occupy the island). If it attacks, then it loses
a battalion. If L ≥ 2 then it loses the island next round for good (applying the strategies
thereafter), so this deviation is not profitable.
So the only case to consider is where they have the same number of battalions; K = L = M .
If M = 1, the attacker does not attack (it wins but cannot occupy). So if M = 2, an attack
with permanent occupation thereafter will occur. This means that if M = 3 an attack will
not occur. And so on. It follows that if M is odd an attack will not occur, while if M is even
and positive, it will occur.
[11] To discuss this question, consider the matching pennies game:

          L        R
U      1, −1   −1, 1
D     −1, 1    1, −1
It should be obvious that no player can gain by committing to move first. The other person
will obviously exploit the knowledge of the previous move. Matching pennies is an example
of a game in which there is first-mover disadvantage.
In contrast, consider any two-player game such that each player has a unique best response
to the other player’s strategy at each of its pure-strategy Nash equilibria, and assume that
at least one such Nash equilibrium exists. Then notice that if a player is given the right to
go first, she cannot be worse off. The reason is that she can always, at least, choose the Nash
7
equilibrium that is best for her and play her part of that strategy profile. She can be assured
that her opponent will choose his part of the profile (this is where a unique best response at
each Nash equilibrium helps), and so our first-mover can always guarantee herself the payoff
from the best possible Nash equilibrium.
So she is no worse off when she moves first.
But sometimes she can do strictly better. Consider the Cournot duopoly with linear demand
curve p = A − bQ and constant marginal cost c. Let’s call the two individual outputs x and
y. Calculate y as a best response to x: for each x, y is chosen to
max(A − bx − by − c)y,
FOC (assuming y > 0):
(A − bx − by − c) − by = 0,
or
y = (A − bx − c)/(2b),
at least for all x such that A − bx − c ≥ 0 (if not, y = 0).
When the player that chooses x moves first, the big difference is that she does not play a best
response to y! In fact there is no single number y that describes the strategy of the other
player, who moves second. If you draw the game tree you see that the other player’s strategy
is a specification of y conditional on x. However, we know by subgame perfection that in
equilibrium, this conditional strategy must specify best responses for each pre-committed
value of x. Thus player 1 maximizes
max(A − bx − by − c)x,
knowing that subsequently,
y = (A − bx − c)/(2b).
The solution to this first-mover problem gives the payoff to moving first. Notice that a feasible
solution to this problem is to choose the Cournot-Nash value of x in the simultaneous-move
game; then y would also be the Cournot-Nash value (unique), and the first mover can pick
up the Cournot payoff at least. But she can do better. I leave the details to you. Solve the
maximization problem above, and show that it yields a value of x that is different from the
simultaneous-move Cournot-Nash value.
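A quick numerical check that the first mover's optimal x exceeds the Cournot value and earns strictly more; the demand and cost numbers A = 10, b = 1, c = 1 are assumptions, not values from the problem:

```python
A, b, c = 10.0, 1.0, 1.0            # assumed linear demand p = A - bQ, marginal cost c

def follower(x):                    # subgame-perfect reply y = (A - bx - c)/(2b), or 0
    return max((A - b * x - c) / (2 * b), 0.0)

def leader_profit(x):               # (A - bx - by - c) x with y = follower(x)
    y = follower(x)
    return (A - b * x - b * y - c) * x

# Grid-search the first mover's problem.
x_first = max((i / 1000 for i in range(10001)), key=leader_profit)
x_cournot = (A - c) / (3 * b)       # simultaneous-move Cournot-Nash output

print(x_first, x_cournot)           # 4.5 vs 3.0: the first mover produces more
print(leader_profit(x_first) > leader_profit(x_cournot))   # True: strictly better off
```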
[12] Call the last period 0. Count backwards at A’s offer points and label these points
t = {0, 1, 2, . . .}. Let A’s payoff at point t be at .
Now in the period just previous to t, B proposes and can obviously get 1 − αat by offering A
the amount αat . Therefore what A can get at stage t + 1, which is just prior to this proposal
by B, must be given by
at+1 = 1 − β(1 − αat ) = (1 − β) + αβat .
This is a simple difference equation starting from a0 and if there are T offer points, the
solution aT , which is A’s first offer (in real time) is given by
aT = (1 − β)[1 + αβ + (αβ)^2 + · · · + (αβ)^(T−1) ] + (αβ)^T a0 .
Because a0 is surely bounded, its exact value is irrelevant as long as αβ is less than one. The
initial offer converges to
a∗ ≡ (1 − β)/(1 − αβ)
as T → ∞, which is exactly the infinite horizon Rubinstein bargaining solution studied in
Lectures 5 and 6. By the same token, B’s initial offer must converge to
b∗ ≡ (1 − α)/(1 − αβ).
When α = β = 1, this convergence result breaks down. It can be verified that whenever A
is the last person to make an offer, her first period payoff aT equals 1, otherwise it equals 0
(likewise for B). Such a sequence has no limit as T → ∞.
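The convergence claim is easy to see by iterating the difference equation directly; the discount factors α and β below are assumptions with αβ < 1:

```python
alpha, beta = 0.9, 0.8     # assumed discount factors with alpha*beta < 1
a = 0.5                    # a0: any bounded starting value works

for _ in range(200):       # iterate a_{t+1} = (1 - beta) + alpha*beta*a_t
    a = (1 - beta) + alpha * beta * a

limit = (1 - beta) / (1 - alpha * beta)   # the Rubinstein share a*
print(a, limit)            # the iterates converge to a*
```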