
DSGE Models and Optimal
Monetary Policy
Andrew P. Blake
A framework of analysis
• Typified by Woodford’s Interest and Prices
– Sometimes called DSGE models
– Also known as NNS models
• Strongly micro-founded models
• Prominent role for monetary policy
• Optimising agents and policymakers
What do we assume?
• Model is stochastic, linear, time invariant
• Objective function can be approximated
very well by a quadratic
• That the solutions are certainty equivalent
– Not always clear that they are
• Agents (when they form them) have rational
expectations or fixed coefficient
extrapolative expectations
Linear stochastic model
• We consider a model in state space form:
s t 1  As t  Bu t  C  t 1
• u is a vector of control instruments, s a
vector of endogenous variables, ε is a shock
vector
• The model coefficients are in A, B and C
Quadratic objective function
• Assume the following objective function:
V 0  min
1
ut

  s Qs

2
t
t
t
 u t Ru t 
t0
• Q and R are positive (semi-) definite
symmetric matrices of weights
• 0 < ρ ≤ 1 is the discount factor
• We take the initial time to be 0
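A minimal numerical sketch of the set-up so far, assuming Python with numpy; the matrices, the horizon T and the feedback rule F below are illustrative placeholders rather than values from the slides. It simulates the state-space model under an arbitrary rule u_t = -F s_t and evaluates the discounted quadratic loss along one path.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # illustrative model matrices
B = np.array([[0.0], [1.0]])
C = np.eye(2)
Q = np.eye(2)                            # illustrative loss weights
R = np.array([[0.5]])
rho = 0.99                               # discount factor
F = np.array([[0.1, 0.3]])               # an arbitrary, not optimal, feedback rule

s = np.array([1.0, 0.0])                 # initial state s_0
loss, T = 0.0, 2000                      # truncate the infinite sum at T periods
for t in range(T):
    u = -F @ s
    loss += 0.5 * rho**t * (s @ Q @ s + u @ R @ u)
    s = A @ s + B @ u + C @ rng.standard_normal(2)
print("approximate V_0 along one simulated path:", loss)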
How do we solve
for the optimal policy?
• We have two options:
– Dynamic programming
– Pontryagin’s minimum principle
• Both are equivalent with non-anticipatory
behaviour
• Very different with rational expectations
• We will require both to analyse optimal
policy
Dynamic programming
• Approach due to Bellman (1957)
• Formulated the value function:
V_t = (1/2) s_t' S s_t = min_{u_t} (1/2) ( s_t' Q s_t + u_t' R u_t + ρ s_{t+1}' S s_{t+1} )
• Recognised that it must have the structure:
V t  min
ut
 s tQs t  u t Ru t   ( s t A   u t B ) S ( As t  Bu t ) 
Optimal policy rule
• First order condition (FOC) for u:
∂V_t/∂u_t = 0  ⇒  R u_t + ρ B' S (A s_t + B u_t) = 0
• Use to solve for policy rule:
1
u t    ( R   B SB ) B SAs t
  Fs t
The Riccati equation
• Leaves us with an unknown in S
• Collect terms from the value function:
s t Ss t  s tQs t  s t F RFs
• Drop z:
t
  s t ( A   F B ) S ( A  BF ) s t
S  Q  F RF   ( A   F B ) S ( A  BF )
Riccati equation (cont.)
• If we substitute in for F we can obtain:
S = Q + ρ A' S A - ρ^2 A' S B (R + ρ B' S B)^{-1} B' S A
• Complicated matrix quadratic in S
• Solved ‘backwards’ by iteration, perhaps
by:
S j  Q   A  S j  1 A   A  S j  1 B ( R  B S j  1 B )
2
1
B S j  1 A
Properties of the solution
• ‘Principle of optimality’
• The optimal policy depends on the unknown S
• S must satisfy the Riccati equation
• Once you solve for S you can define the policy rule
and evaluate the welfare loss
• S does not depend on s or u, only on the model and
the objective function
• The initial values do not affect the optimal control
Lagrange multipliers
• Due to Pontryagin (1957)
• Formulated a system using constraints as:
H_k = ρ^k [ (1/2)( s_k' Q s_k + u_k' R u_k ) + ρ λ_{k+1}' ( A s_k + B u_k - s_{k+1} ) ]
• λ is a vector of Lagrange multipliers:
• The constrained objective function is:
V_t = Σ_{k=t}^{∞} H_k
FOCs
• Differentiate with respect to the three sets of
variables:
∂H_t/∂u_t = 0  ⇒  R u_t + ρ B' λ_{t+1} = 0
∂H_t/∂s_t = 0  ⇒  Q s_t + ρ A' λ_{t+1} - λ_t = 0
∂H_t/∂λ_{t+1} = 0  ⇒  A s_t + B u_t - s_{t+1} = 0
Hamiltonian system
• Use the FOCs to yield the Hamiltonian system:
[ I, ρ B R^{-1} B' ; 0, ρ A' ] [ s_{t+1} ; λ_{t+1} ] = [ A, 0 ; -Q, I ] [ s_t ; λ_t ]
• This system is saddlepath stable
• Need to eliminate the co-states to determine the
solution
• NB: Now in the form of a (singular) rational
expectations model (discussed later)
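A small sketch, assuming Python/numpy and illustrative placeholder matrices, that forms this pencil and inspects the eigenvalues of its transition matrix; for a well-posed problem half of them lie inside the unit circle and half outside, which is the saddlepath property referred to above.

import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # illustrative matrices
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.5]])
rho = 0.99
n = A.shape[0]

left = np.block([[np.eye(n), rho * B @ np.linalg.solve(R, B.T)],
                 [np.zeros((n, n)), rho * A.T]])
right = np.block([[A, np.zeros((n, n))],
                  [-Q, np.eye(n)]])

# eigenvalues governing the joint [s; lambda] dynamics
eigs = np.linalg.eigvals(np.linalg.solve(left, right))
print(sorted(np.abs(eigs)))   # typically n roots inside and n outside the unit circle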
Solutions are equivalent
• Assume that the solution to the saddlepath
problem is λ_t = S s_t
• Substitute into the FOCs to give:
∂H_t/∂u_t = 0  ⇒  R u_t + ρ B' S s_{t+1} = 0
∂H_t/∂s_t = 0  ⇒  Q s_t + ρ A' S s_{t+1} - S s_t = 0
Equivalence (cont.)
• We can combine these with the model and
eliminate s to give:
S = Q + ρ A' S A - ρ^2 A' S B (R + ρ B' S B)^{-1} B' S A
• Same solution for S that we had before
• Pontryagin and Bellman give the same answer
• Norman (1974, IER) showed them to be
stochastically equivalent
• Kalman (1961) developed certainty equivalence
What happens with RE?
• Modify the model to:
 z t  1   A11 A12   z t   B1 
 e  
     ut
 x t  1   A 21 A 22   x t   B 2 
• Now we have z as predetermined variables and x
as jump variables
• Model has a saddlepath structure on its own
• Solved using Blanchard-Kahn etc.
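A minimal sketch of the Blanchard-Kahn root count, assuming Python/numpy and an illustrative A matrix: the model has a unique stable RE solution when the number of eigenvalues outside the unit circle equals the number of jump variables.

import numpy as np

A = np.array([[0.9, 0.0],     # z equation (predetermined)
              [0.5, 1.2]])    # x equation (one jump variable)
n_jump = 1
unstable = int(np.sum(np.abs(np.linalg.eigvals(A)) > 1.0))
print("Blanchard-Kahn condition satisfied:", unstable == n_jump)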
Bellman’s dedication
• At the beginning of Bellman’s book
Dynamic Programming he dedicates it thus:
To Betty-Jo
Whose decision processes defy analysis
Control with RE
• How do rational expectations affect the optimal
policy?
– Somewhat unbelievably, no change
– Best policy characterised by the same algebra
• However, we need to be careful about the jump
variables, and Betty-Jo
• We now obtain pre-determined values for the co-states λ
• Why?
Pre-determined co-states
• Look at the value function V_t = (1/2) s_t' S s_t
• Remember the reaction function is:
[ λ^z_t ; λ^x_t ] = [ S_11, S_12 ; S_21, S_22 ] [ z_t ; x_t ] = S s_t
• So the cost can be written as V_t = (1/2) s_t' λ_t
• We can minimise the cost by choosing some
co-states and letting x jump
Pre-determined co-states (cont.)
• At time 0 this is minimised by:
V_0 = (1/2) [ z_0', x_0' ] [ λ^z_0 ; λ^x_0 ] = (1/2) [ z_0', x_0' ] [ λ^z_0 ; 0 ]
• We can rearrange the reaction function to:
[ λ^z_t ; x_t ] = [ N_11, N_12 ; N_21, N_22 ] [ z_t ; λ^x_t ]
• where N_11 = S_11 - S_12 S_22^{-1} S_21, N_22 = S_22^{-1}, etc.
Pre-determined co-states (cont.)
• Alternatively the value function can be written
in terms of the z’s and the co-states λ^x as:
[ z_t ; x_t ] = [ I, 0 ; N_21, N_22 ] [ z_t ; λ^x_t ] = T [ z_t ; λ^x_t ]
• The loss is:
V_0 = (1/2) [ z_0', λ^x_0' ] T' S T [ z_0 ; λ^x_0 ]
Cost-to-go
• At time 0, z_0 is predetermined
• x_0 is not, and can be any value
• In fact it is a function of z_0 (and implicitly u)
• We can choose the value of λ^x at time 0 to
minimise cost
• We choose it to be 0
• This minimises the cost-to-go in period 0
Time inconsistency
• This is true at time 0
• Time passes, maybe just one period
• Time 1 ‘becomes time 0’
• Same optimality conditions apply
• We should reset the co-states to 0
• The optimal policy is time inconsistent
Different to non-RE
• We established before that the non-RE solution did
not depend on the initial conditions (or any z)
• Now it directly does
• Can we use the same solution methods?
– DP or LM?
– Yes, as long as we ‘re-assign’ the co-states
• However, we are implicitly using the LM solution
as it is ‘open-loop’ – the policy depends directly
on the initial conditions
Where does this fit in?
• Originally established in 1980s
– Clearest statement Currie and Levine (1993)
– Re-discovered in recent US literature
– Ljungqvist and Sargent Recursive
Macroeconomic Theory (2000, and new
edition)
• Compare with Stokey and Lucas
How do we deal with time
inconsistency?
• Why not use the ‘principle of optimality’?
• Start at the end and work back
• How do we incorporate this into the RE
control problem?
– Assume expectations about the future are
‘fixed’ in some way
– Optimise subject to these expectations
A rule for future expectations
• Assume that:
x^e_{t+1} = N_{t+1} z_{t+1}
• If we substitute this into the model we get:
1
x t   ( A 22  N t  1 A12 ) ( N t  1 A11  A12 ) z t
1
 ( A 22  N t  1 A12 ) ( N t  1 B1  B 2 ) u t
  J t zt  K tut
A rule for future expectations
• The ‘pre-determined’ model is:
z t 1  A11 z t  A12 x t  B1u t
• Using the reaction function for x we get:
z t  1  ( A11  A12 J t ) z t  ( B1  B 2 K t ) u t
 Aˆ t z t  Bˆ u t
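A minimal sketch of this substitution, assuming Python/numpy; the partitioned matrices and the candidate rule N are illustrative placeholders (one predetermined variable, one jump variable, one instrument). Given x^e_{t+1} = N z_{t+1} it recovers J, K and the reduced model for z.

import numpy as np

A11 = np.array([[0.9]]); A12 = np.array([[0.1]])
A21 = np.array([[0.3]]); A22 = np.array([[1.1]])
B1 = np.array([[0.0]]);  B2 = np.array([[1.0]])
N = np.array([[0.2]])                 # candidate expectations rule x^e = N z

M = np.linalg.inv(A22 - N @ A12)
J = M @ (N @ A11 - A21)               # x_t = J z_t + K u_t
K = M @ (N @ B1 - B2)
Ahat = A11 + A12 @ J                  # z_{t+1} = Ahat z_t + Bhat u_t
Bhat = B1 + A12 @ K
print("J =", J, "K =", K, "Ahat =", Ahat, "Bhat =", Bhat)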
Dynamic programming solution
• To calculate the best policy we need to
make assumptions about leadership
• What is the effect on x of changes in u?
• If we assume no leadership it is zero
• Otherwise it is K, need to use:
∂V_t/∂u_t + (∂V_t/∂x_t)(∂x_t/∂u_t) = ∂V_t/∂u_t + (∂V_t/∂x_t) K_t = 0
Dynamic programming (cont.)
• FOC for u for leadership:
u_t = -(R̂_t + ρ B̂_t' S_{t+1} B̂_t)^{-1} (ρ B̂_t' S_{t+1} Â_t + K_t' (Q_22 J_t + Q_21)) z_t = -F̂_t z_t
where: R̂_t = R + K_t' Q_22 K_t
• This policy must be time consistent
• Only uses intra-period leadership
Dynamic programming (cont.)
• This is known in the dynamic game
literature as feedback Stackelberg
• Also need to solve for S
– Substitute in using relations above
• Can also assume that x unaffected by u
– Feedback Nash equilibrium
• Developed by Oudiz and Sachs (1985)
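A rough numerical sketch of an Oudiz-Sachs style backward recursion for this feedback Stackelberg policy, assuming Python/numpy. The helper name time_consistent and every matrix in the example are illustrative, the recursion is simply iterated to a stationary fixed point (convergence is not guaranteed for every parameterisation), and the S update follows from substituting x_t = J_t z_t + K_t u_t into the period loss as above; Q must be symmetric, with Q_21 = Q_12'.

import numpy as np

def time_consistent(A11, A12, A21, A22, B1, B2,
                    Q11, Q12, Q21, Q22, R, rho,
                    tol=1e-10, max_iter=10_000):
    nz = A11.shape[0]
    S = np.eye(nz)                     # value-function weight on z alone
    N = np.zeros((A22.shape[0], nz))   # expectations rule x = N z
    F = np.zeros((B1.shape[1], nz))
    for _ in range(max_iter):
        M = np.linalg.inv(A22 - N @ A12)
        J = M @ (N @ A11 - A21)        # x_t = J z_t + K u_t
        K = M @ (N @ B1 - B2)
        Ahat = A11 + A12 @ J           # reduced model for z
        Bhat = B1 + A12 @ K
        Qhat = Q11 + Q12 @ J + J.T @ Q21 + J.T @ Q22 @ J
        Uhat = K.T @ (Q22 @ J + Q21)   # u-z cross term in the period loss
        Rhat = R + K.T @ Q22 @ K
        F = np.linalg.solve(Rhat + rho * Bhat.T @ S @ Bhat,
                            Uhat + rho * Bhat.T @ S @ Ahat)    # u_t = -F z_t
        S_new = (Qhat - F.T @ Uhat - Uhat.T @ F + F.T @ Rhat @ F
                 + rho * (Ahat - Bhat @ F).T @ S @ (Ahat - Bhat @ F))
        N_new = J - K @ F              # implied rule x_t = N z_t
        if max(np.abs(S_new - S).max(), np.abs(N_new - N).max()) < tol:
            return F, N_new, S_new
        S, N = S_new, N_new
    return F, N, S

# Tiny illustrative example: one predetermined variable, one jump, one instrument
F, N, S = time_consistent(
    A11=np.array([[0.9]]), A12=np.array([[0.1]]),
    A21=np.array([[0.3]]), A22=np.array([[1.1]]),
    B1=np.array([[0.0]]),  B2=np.array([[1.0]]),
    Q11=np.eye(1), Q12=np.zeros((1, 1)), Q21=np.zeros((1, 1)), Q22=np.eye(1),
    R=np.array([[0.5]]), rho=0.99)
print("F =", F, "\nN =", N, "\nS =", S)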
Dynamic programming (cont.)
• Key assumption that we condition on a rule
for expectations
• Could condition on a time path (LM)
• Time consistent by construction
– Principle of optimality
• Many other policies have similar properties
• Stochastic properties now matter
Time consistency
• Not the only time consistent solution
• Could use Lagrange multipliers
• DP is not only time consistent, it is subgame
perfect
• Much stronger requirement
– See Blake (2004) for discussion
What’s new with DSGE models?
• Woodford and others have derived welfare loss
functions that are quadratic and depend only on
the variances of inflation and output
• These are approximations to the true social utility
functions
• Can apply LQ control as above to these models
• Parameters of the model appear in the loss
function and vice versa (e.g. discount factor)
DSGE models in WinSolve
• Can set up micro-founded models
• Can set up micro-founded loss functions
• Can explore optimal monetary policy
– Time inconsistent
– Time consistent
– Taylor-type approximations
• Let’s do it!