Dynamic Stackelberg Problems

RECURSIVE MACROECONOMIC THEORY,
LJUNGQVIST AND SARGENT,
3RD EDITION, CHAPTER 19
Taylor Collins
BACKGROUND INFORMATION
• A new type of problem
• Optimal decision rules are no longer functions of the natural
state variables
• A large agent and a competitive market
• A rational expectations equilibrium
• Recall the Stackelberg problem from game theory
• The cost of confirming past expectations
THE STACKELBERG PROBLEM
• Solving the problem – general idea
• Defining the Stackelberg leader and follower
• Defining the variables:
• $z_t$ is a vector of natural state variables
• $x_t$ is a vector of endogenous variables
• $u_t$ is a vector of government instruments
• $y_t$ is the stacked vector of $z_t$ and $x_t$
THE STACKELBERG PROBLEM
• The government’s one-period loss function is

  $$r(y, u) = y' R y + u' Q u$$

• The government wants to maximize

  $$-\sum_{t=0}^{\infty} \beta^t r(y_t, u_t) \qquad (1)$$

  subject to an initial condition for $z_0$, but not for $x_0$
• The government makes policy in light of the model

  $$y_{t+1} = A y_t + B u_t \;\Longleftrightarrow\; \begin{bmatrix} z_{t+1} \\ x_{t+1} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} + B u_t \qquad (2)$$

• The government maximizes (1) by choosing $\{u_t, x_t, z_{t+1}\}_{t=0}^{\infty}$ subject to (2)
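The objects in (1) and (2) can be made concrete numerically. The sketch below is a minimal setup, assuming NumPy, with one natural state $z$ and one jump variable $x$; all numbers are placeholders, not from the chapter:

```python
import numpy as np

# Illustrative setup of (1)-(2): one natural state z, one jump variable x.
# All numbers here are placeholders, not from the chapter.
beta = 0.95  # discount factor in (1)
A11, A12 = np.array([[0.9]]), np.array([[0.1]])
A21, A22 = np.array([[0.2]]), np.array([[0.8]])
A = np.block([[A11, A12], [A21, A22]])
B = np.array([[0.0], [1.0]])  # the instrument u enters the x equation
R, Q = np.eye(2), np.array([[1.0]])

def loss(y, u):
    """One-period loss r(y, u) = y'Ry + u'Qu."""
    return float(y @ R @ y + u @ Q @ u)

# One step of the law of motion (2): y_{t+1} = A y_t + B u_t.
y0 = np.array([1.0, 0.5])  # stacked [z_0, x_0]
u0 = np.array([0.3])
y1 = A @ y0 + B @ u0
```
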
THE STACKELBERG PROBLEM
• “The Stackelberg problem is to maximize (1) by choosing an $x_0$ and a sequence of decision rules, the time $t$ component of which maps the time $t$ history of the state $z_t$ into the time $t$ decision of the Stackelberg leader.”
• The Stackelberg leader commits to a sequence of decisions
• The optimal decision rule is history dependent
• Two sources of history dependence
• Government’s ability to commit at time 0
• Forward looking ability of the private sector
• Dynamics of Lagrange Multipliers
• The multipliers measure the cost today of honoring past government promises
• Set multipliers equal to zero at time zero
• Multipliers take nonzero values thereafter
SOLVING THE STACKELBERG PROBLEM
• 4-step algorithm
• Solve an optimal linear regulator
• Use stabilizing properties of shadow prices
• Convert implementation multipliers into state variables
• Solve for $x_0$ and $\mu_{x0}$
STEP 1: SOLVE AN O.L.R.
• Assume X0 is given
• This will be corrected for in step 3
• With this assumption, the problem has the form of an optimal linear
regulator
• The optimal value function has the form

  $$v(y) = -y' P y$$

  where $P$ solves the Riccati equation
• The linear regulator is

  $$v(y_0) = -y_0' P y_0 = \max_{\{u_t, y_{t+1}\}_{t=0}^{\infty}} -\sum_{t=0}^{\infty} \beta^t (y_t' R y_t + u_t' Q u_t)$$

  subject to an initial $y_0$ and the law of motion from (2)
• Then, the Bellman equation is

  $$-y' P y = \max_{u, y^*} \{ -y' R y - u' Q u - \beta y^{*\prime} P y^* \} \quad \text{s.t. } y^* = A y + B u \qquad (3)$$
STEP 1: SOLVE AN O.L.R.
• Taking the first order condition of the Bellman
equation and solving gives us
  $$u = -F y, \quad \text{where } F = \beta [Q + \beta B' P B]^{-1} B' P A \qquad (4)$$

• Plugging this back into the Bellman equation gives us

  $$-y' P y = -y' R y - \bar{u}' Q \bar{u} - \beta (A y + B \bar{u})' P (A y + B \bar{u})$$

  where $\bar{u}$ is optimal, as described by (4)
• Rearranging gives us the matrix Riccati equation

  $$P = R + \beta A' P A - \beta^2 A' P B (Q + \beta B' P B)^{-1} B' P A$$

• Denote the solution to this equation as $P^*$
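This step can be sketched numerically. The block below, assuming NumPy and illustrative matrices that are not from the chapter, iterates the Riccati equation to (approximately) $P^*$ and recovers $F$ from (4):

```python
import numpy as np

# Iterate the matrix Riccati equation to P* and recover F from (4).
# beta, A, B, R, Q are illustrative placeholders, not from the chapter.
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.0], [1.0]])
R = np.eye(2)
Q = np.array([[1.0]])

P = np.zeros_like(R)
for _ in range(10_000):
    gain = np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    P_new = R + beta * A.T @ P @ A - beta**2 * A.T @ P @ B @ gain
    if np.max(np.abs(P_new - P)) < 1e-12:
        break
    P = P_new
P = P_new  # P is now (approximately) the fixed point P*

# Feedback rule (4): u = -F y.
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
```

Iterating from $P = 0$ is the standard value-iteration route to the fixed point; a production implementation might instead call a dedicated discrete Riccati solver.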
STEP 2: USE THE SHADOW PRICE
• Decode the information in P*
• Adapt a method from 5.5 that solves a problem of
the form (1),(2)
• Attach a sequence of Lagrange multipliers to the sequence of constraints (2) and form the following Lagrangian:

  $$L = -\sum_{t=0}^{\infty} \beta^t \left[ y_t' R y_t + u_t' Q u_t + 2 \beta \mu_{t+1}' (A y_t + B u_t - y_{t+1}) \right]$$

• Partition $\mu_t$ conformably with our partition of $y_t$
STEP 2: USE THE SHADOW PRICE
• We want to maximize $L$ with respect to $u_t$ and $y_t$:

  $$\frac{\partial L}{\partial u_t} = 0 \;\Rightarrow\; 0 = Q u_t + \beta B' \mu_{t+1}$$

  $$\frac{\partial L}{\partial y_t} = 0 \;\Rightarrow\; \mu_t = R y_t + \beta A' \mu_{t+1} \qquad (5)$$

• Solving for $u_t$ and plugging into (2) gives us

  $$y_{t+1} = A y_t - \beta B Q^{-1} B' \mu_{t+1}$$

• Combining this with (5), we can write the system as

  $$\begin{bmatrix} I & \beta B Q^{-1} B' \\ 0 & \beta A' \end{bmatrix} \begin{bmatrix} y_{t+1} \\ \mu_{t+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ -R & I \end{bmatrix} \begin{bmatrix} y_t \\ \mu_t \end{bmatrix} \;\Longleftrightarrow\; L^* \begin{bmatrix} y_{t+1} \\ \mu_{t+1} \end{bmatrix} = N \begin{bmatrix} y_t \\ \mu_t \end{bmatrix} \qquad (6)$$
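As a concrete check, a short NumPy sketch (with made-up matrices, not from the chapter) can assemble $L^*$ and $N$ and solve one step of the system:

```python
import numpy as np

# Assemble the system L* [y_{t+1}; mu_{t+1}] = N [y_t; mu_t] from (6).
# All matrices here are illustrative placeholders, not from the text.
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.0], [1.0]])
R = np.eye(2)
Q = np.array([[1.0]])
n = A.shape[0]

Lstar = np.block([
    [np.eye(n), beta * B @ np.linalg.inv(Q) @ B.T],
    [np.zeros((n, n)), beta * A.T],
])
N = np.block([
    [A, np.zeros((n, n))],
    [-R, np.eye(n)],
])

# Given (y_t, mu_t), one step solves for (y_{t+1}, mu_{t+1}).
y_t = np.array([1.0, 0.5])
mu_t = np.array([0.2, -0.1])
nxt = np.linalg.solve(Lstar, N @ np.concatenate([y_t, mu_t]))
y_next, mu_next = nxt[:n], nxt[n:]
```

By construction, the computed step satisfies both first-order conditions that (6) stacks together.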
STEP 2: USE THE SHADOW PRICE
• We now want to find a stabilizing solution to (6)
• That is, a solution that satisfies

  $$\sum_{t=0}^{\infty} \beta^t y_t' y_t < \infty$$

• In section 5.5, it is shown that a stabilizing solution satisfies

  $$\mu_0 = P^* y_0$$

• Then, the solution replicates itself over time in the sense that

  $$\mu_t = P^* y_t \qquad (7)$$
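To see (7) at work, a small numerical check (NumPy assumed; all matrices illustrative, not from the chapter) iterates the Riccati equation from step 1 and verifies that setting $\mu_t = P^* y_t$, with $u_t = -F y_t$, satisfies the system (6):

```python
import numpy as np

# Check numerically that mu_t = P* y_t solves (6): with u_t = -F y_t,
# the pair (y_{t+1}, P* y_{t+1}) satisfies L*[y_{t+1}; mu_{t+1}] = N[y_t; mu_t].
# Matrices are illustrative placeholders, not from the text.
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.0], [1.0]])
R = np.eye(2)
Q = np.array([[1.0]])

P = np.zeros_like(R)
for _ in range(10_000):
    gain = np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    P_new = R + beta * A.T @ P @ A - beta**2 * A.T @ P @ B @ gain
    if np.max(np.abs(P_new - P)) < 1e-13:
        break
    P = P_new
P = P_new
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)

Lstar = np.block([[np.eye(2), beta * B @ np.linalg.inv(Q) @ B.T],
                  [np.zeros((2, 2)), beta * A.T]])
N = np.block([[A, np.zeros((2, 2))], [-R, np.eye(2)]])

y0 = np.array([1.0, -0.4])
y1 = (A - B @ F) @ y0                       # closed-loop step
lhs = Lstar @ np.concatenate([y1, P @ y1])  # uses mu_{t+1} = P* y_{t+1}
rhs = N @ np.concatenate([y0, P @ y0])      # uses mu_t = P* y_t
```
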
STEP 3: CONVERT IMPLEMENTATION MULTIPLIERS
• We now confront the inconsistency of our assumption on $y_0$
• It forces the multiplier to be a jump variable
• Focus on the partitions of $y_t$ and $\mu_t$
• Convert the multipliers into state variables
• Write the last $n_x$ equations of (7) as

  $$\mu_{xt} = P_{21} z_t + P_{22} x_t$$

  paying attention to the partition of $P$
• Solving this for $x_t$ gives us

  $$x_t = P_{22}^{-1} \mu_{xt} - P_{22}^{-1} P_{21} z_t \qquad (8)$$
STEP 3: CONVERT IMPLEMENTATION MULTIPLIERS
• Using these modifications and (4) gives us

  $$u_t = -F \begin{bmatrix} I & 0 \\ -P_{22}^{-1} P_{21} & P_{22}^{-1} \end{bmatrix} \begin{bmatrix} z_t \\ \mu_{xt} \end{bmatrix} \qquad (9)$$

• We now have a complete description of the Stackelberg problem:

  $$y_{t+1} = A y_t + B u_t \;\Longleftrightarrow\; \begin{bmatrix} z_{t+1} \\ \mu_{x,t+1} \end{bmatrix} = \begin{bmatrix} I & 0 \\ P_{21} & P_{22} \end{bmatrix} (A - BF) \begin{bmatrix} I & 0 \\ -P_{22}^{-1} P_{21} & P_{22}^{-1} \end{bmatrix} \begin{bmatrix} z_t \\ \mu_{xt} \end{bmatrix} \qquad (9')$$

  $$x_t = \begin{bmatrix} -P_{22}^{-1} P_{21} & P_{22}^{-1} \end{bmatrix} \begin{bmatrix} z_t \\ \mu_{xt} \end{bmatrix} \qquad (9'')$$
STEP 4: SOLVE FOR X0 AND μx0
• The value function satisfies

  $$v(y_0) = -y_0' P^* y_0 = -z_0' P_{11}^* z_0 - 2 x_0' P_{21}^* z_0 - x_0' P_{22}^* x_0$$

• Now, choose $x_0$ by equating to zero the gradient of $v(y_0)$ with respect to $x_0$:

  $$-2 P_{21}^* z_0 - 2 P_{22}^* x_0 = 0 \;\Rightarrow\; x_0 = -P_{22}^{*-1} P_{21}^* z_0$$

• Then, recall (8):

  $$(8) \;\Rightarrow\; \mu_{x0} = 0$$

• Finally, the Stackelberg problem is solved by plugging these initial conditions into (9), (9'), and (9'') and iterating the process to get $\{u_t, x_t, z_{t+1}\}_{t=0}^{\infty}$
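The four steps above can be strung together in a short end-to-end sketch. This is a hedged illustration only: NumPy is assumed, the matrices are placeholders (one natural state $z$, one jump variable $x$), and nothing here comes from the chapter's examples:

```python
import numpy as np

# End-to-end sketch of the 4-step algorithm with placeholder matrices.
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.0], [1.0]])
R = np.eye(2)
Q = np.array([[1.0]])
nz = 1  # dimension of z

# Step 1: solve the Riccati equation for P* and get F from (4).
P = np.zeros_like(R)
for _ in range(10_000):
    gain = np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    P_new = R + beta * A.T @ P @ A - beta**2 * A.T @ P @ B @ gain
    if np.max(np.abs(P_new - P)) < 1e-13:
        break
    P = P_new
P = P_new
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)

# Steps 2-3: partition P* and build the maps between (z, x) and (z, mu_x).
P21, P22 = P[nz:, :nz], P[nz:, nz:]
T = np.block([[np.eye(nz), np.zeros((nz, 2 - nz))],
              [-np.linalg.solve(P22, P21), np.linalg.inv(P22)]])  # (z, mu_x) -> (z, x)
S = np.block([[np.eye(nz), np.zeros((nz, 2 - nz))],
              [P21, P22]])                                        # (z, x) -> (z, mu_x)
M = S @ (A - B @ F) @ T  # closed-loop law of motion (9')

# Step 4: z_0 given, mu_{x0} = 0, which implies x_0 = -P22^{-1} P21 z_0.
state = np.array([1.0, 0.0])  # [z_0, mu_{x0}]
path = []
for t in range(10):
    u = -F @ (T @ state)      # policy (9)
    x = (T @ state)[nz:]      # recover x_t via (9'')
    path.append((float(state[0]), float(x[0]), float(u[0])))
    state = M @ state         # update (z, mu_x) via (9')
```

The transforms `S` and `T` are inverses of each other, which is what makes carrying $(z_t, \mu_{xt})$ instead of $(z_t, x_t)$ a pure change of variables.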
CONCLUSION
• Brief Review
• Setup and goal of the problem
• 4-step algorithm
• Questions, Comments, or Feedback