EC201 Revision Seminar
Boromeus Wanengkirtyo
University of Warwick
Roadmap
1. Week 5: Consumption
2. RBC crash course
3. 2013-14 exam question on RBC
DSGE Overview
Before DSGE:
• Policymakers used large-scale macroeconometric models
• Hundreds/thousands of estimated equations
• Simulate (fiscal/monetary/whatever) policy changes
But subject to the Lucas critique:
• Different policy, different behaviour!
• Macroeconometric models relied on past data
DSGE models:
• Built on optimising behaviour – can anticipate how agents respond when policy changes
• Rely on deep parameters – more stable over time
Suite of models:
• Both are still used for different purposes
Basic RBC Structure
[Figure: circular-flow diagram of the basic RBC structure. Households and Firms interact in three markets: the Labour market (households supply hours L and earn the wage w), the Goods market (firms sell output Y, bought as consumption plus investment C + I), and the Capital market (households turn investment I into capital K, rented to firms at the rental rate rk).]
Optimisation
Firms maximise profits (or minimise costs) (static):
A_t F_1(K_t, h_t) = r_t^K
A_t F_2(K_t, h_t) = w_t
Households maximise expected future discounted utility (dynamic):
max_{{c_t, h_t}_{t=0}^∞} E_0 Σ_{t=0}^∞ β^t [u(c_t) + v(1 − h_t)]   s.t. BCs
which results in:
w_t = v'(1 − h_t) / u'(c_t)
u'(c_t) = β R_t u'(c_{t+1})
where R_t ≡ r_t^k + (1 − δ). Other equations:
• K_{t+1} = (1 − δ)K_t + I_t
• Y_t = A_t F(K_t, h_t) = C_t + I_t
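To make the use of these conditions concrete, here is a minimal Python sketch of solving for the steady state. The Cobb-Douglas production function Y = A K^α h^(1−α), the log utility u(c) = log c and v(1 − h) = ψ log(1 − h), and all parameter values are assumptions chosen purely for illustration; they are not taken from the slides.

```python
# Minimal sketch: RBC steady state from the first-order conditions above,
# under ASSUMED Cobb-Douglas production and log utility (illustration only).
beta, delta, alpha, psi, A = 0.984, 0.025, 0.33, 1.75, 1.0

# Euler equation in steady state: 1 = beta * R  =>  r_k = 1/beta - 1 + delta
r_k = 1.0 / beta - 1.0 + delta

# Firm FOC for capital: alpha * A * (K/h)**(alpha - 1) = r_k
k_h = (alpha * A / r_k) ** (1.0 / (1.0 - alpha))   # capital-labour ratio K/h

# Firm FOC for labour: w = (1 - alpha) * A * (K/h)**alpha
w = (1.0 - alpha) * A * k_h ** alpha

# Resource constraint per unit of labour: c/h = y/h - delta * (K/h)
y_h = A * k_h ** alpha
c_h = y_h - delta * k_h

# Intratemporal condition w = v'(1-h)/u'(c) = psi*c/(1-h), solved for hours h
h = w / (psi * c_h + w)
c, K = c_h * h, k_h * h

print(f"steady state: h = {h:.3f}, K = {K:.3f}, c = {c:.3f}, w = {w:.3f}, r_k = {r_k:.4f}")
```

The same steady-state relationships (e.g. 1 = βR) are what the calibration slides below exploit.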
The rest
Now that we have a model, what do we do with it?
1. Calibrate the deep parameters (more on this later)
2. ‘Solve’ the model
3. Simulate the model
Solving the model means finding the policy functions:
• Policy functions = choice variables expressed in terms of state variables
• C_t = f(K_t, A_t) ⇒ K_{t+1} = g(K_t, A_t)
• Often have to do it numerically (i.e. need software to do it)
Simulation:
• Take random draws from the distribution of (productivity) shocks
• Feed them through the policy functions
• Calculate moments (mean/variance/corr/autocorr etc.) – a minimal sketch follows below
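A minimal sketch of the simulation step in Python. The AR(1) process for productivity and the log-linear policy-function coefficients below are placeholders for illustration only; in practice the coefficients come from actually solving the model numerically.

```python
# Minimal simulation sketch: draws for the productivity shock are fed through
# (hypothetical) policy functions, then moments of the simulated series are computed.
import numpy as np

rng = np.random.default_rng(0)
T, rho, sigma_eps = 200, 0.95, 0.007   # assumed AR(1) persistence and shock s.d.

# 1. Random draws for the (log) productivity process a_t = rho * a_{t-1} + eps_t
a = np.zeros(T)
for t in range(1, T):
    a[t] = rho * a[t - 1] + sigma_eps * rng.standard_normal()

# 2. Feed the shocks through (placeholder, log-linear) policy functions,
#    everything in deviations from steady state
k = np.zeros(T + 1)
c = np.zeros(T)
for t in range(T):
    c[t]     = 0.60 * k[t] + 0.35 * a[t]     # placeholder coefficients
    k[t + 1] = 0.95 * k[t] + 0.10 * a[t]     # placeholder coefficients

# 3. Calculate a moment of the simulated series (more moments on later slides)
print("std(c):", c.std())
```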
Calibration = a form of estimation
Macroeconomists (like econometricians):
1. Set a parameter value so the model prediction matches a
particular observed data moment (estimation)
2. Verify that we match other moments well too (testing)
Estimation depends on the loss function:
• The weight on various data features (e.g. OLS = quadratic)
• Often focus exclusively on model predictions for steady states
  - Fully exclude other data features
  - i.e. we have infinite loss on the calibration targets (so we match them), and zero loss on the rest
Calibration = a form of estimation
Why this specific loss function?
• Our models cannot match everything
  - Macro is really, really hard
  - Pick moments that we think are relevant and can identify
• But there is now a lot of DSGE estimation in the literature
  - Thijs in the revision lecture: some things you just can't calibrate
General good practice:
• Choose moments far away from the results we are interested in
• More credible that the model's mechanisms generate them
• Examples of good calibration:
  - Match predictions for micro data outside of standard aggregate macro data (e.g. match price stickiness to the average price duration we see in micro data)
  - Match steady states (the long-run mean) (next slide)
• Then simulate, to see if we can match business cycle moments
Calibration example
To calibrate β, take the Euler equation:
U'(C_t) = β E_t[U'(C_{t+1}) R_t]
In steady state:
U'(C) = β U'(C) R
So:
β = 1/R
where R is matched to the average annual S&P 500 return of 6.5%
• since RBC is typically quarterly, β = 1/(1 + 0.065/4) = 0.984 (quick check below)
• see King and Rebelo (1999) for other parameters
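A quick arithmetic check of the number above, using only the formula from the slide (nothing model-specific):

```python
# Quarterly beta implied by an average annual return of 6.5%,
# using the slide's approximation R_quarterly = 1 + 0.065/4.
annual_return = 0.065
R_quarterly = 1 + annual_return / 4
beta = 1 / R_quarterly
print(round(beta, 3))   # -> 0.984
```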
Comparing Data and Model – King and Rebelo (1999)
Compare:
• Volatilities (absolute and relative to output)
• Persistence (autocorrelation)
• Co-movement (contemporaneous correlation with output)
Good to memorise some of the stylised facts:
          x      σx      σx/σY    corr(xt, xt−1)   corr(xt, Yt)
  Data    Y      1.81    1.00     0.84             1.00
          C      1.35    0.74     0.80             0.88
          I      5.30    2.93     0.87             0.80
          N      1.79    0.99     0.88             0.88
  RBC     Y      1.39    1.00     0.72             1.00
          C      0.61    0.44     0.79             0.94
          I      4.09    2.95     0.71             0.99
          N      0.67    0.48     0.71             0.97
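For reference, a short sketch of how the four statistics in the table are computed from a set of (already detrended) series. The series below are random placeholders; King and Rebelo (1999) compute these moments from filtered US data and from simulated model output.

```python
# Sketch: computing sigma_x, sigma_x/sigma_Y, autocorrelation and the
# correlation with output, using placeholder detrended series.
import numpy as np

rng = np.random.default_rng(1)
T = 200
Y = rng.standard_normal(T)                                # placeholder output
series = {"Y": Y,
          "C": 0.7 * Y + 0.3 * rng.standard_normal(T),    # placeholder consumption
          "I": 2.9 * Y + 1.0 * rng.standard_normal(T)}    # placeholder investment

for name, x in series.items():
    sigma_x   = x.std()
    rel_sigma = sigma_x / Y.std()                         # sigma_x / sigma_Y
    autocorr  = np.corrcoef(x[1:], x[:-1])[0, 1]          # corr(x_t, x_{t-1})
    corr_Y    = np.corrcoef(x, Y)[0, 1]                   # corr(x_t, Y_t)
    print(f"{name}: sigma={sigma_x:.2f}, rel={rel_sigma:.2f}, "
          f"AC={autocorr:.2f}, corrY={corr_Y:.2f}")
```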
Comparing Data and Model – King and Rebelo (1999)
Volatility:
• Good at investment, not bad but not great at consumption
• Main failure of RBC: lack of labour market volatility
Others:
• Not enough persistence – lack of propagation
• Too much co-movement – more than 1 shock!
Lack of labour market volatility:
• When At ↑, MPLt ↑ and wt ↑, two effects:
  - SE (substitution effect): work more today to exploit high productivity, increasing current and future consumption
  - IE (income effect): work less today, since you do not need to work as much to maintain consumption
• SE is only slightly bigger than IE, so hours do not move much
• Also: most of aggregate hours volatility in the data is on the extensive margin, but RBC has no unemployment
2014 Exam RBC Question
2 models: standard RBC, and RBC + variable capital utilisation
• Now when At ↑, firms can use existing capital more intensively
  - (In this setup, firms own capital, not households)
• Rather than investing in new capital
• But at the cost of higher depreciation (a small sketch follows below):
  - Capital law of motion: K_{t+1} = (1 − δ(z_t))K_t + I_t
  - Production function: Y_t = F(z_t K_t, h_t)
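A tiny illustrative sketch of the modified capital law of motion. The functional form δ(z) = δ0 · z^φ and the parameter values are my own assumptions for illustration; the slide only specifies that higher utilisation z_t raises depreciation δ(z_t).

```python
# Illustration only: an ASSUMED increasing, convex depreciation schedule
# delta(z) = delta0 * z**phi, so higher utilisation wears capital out faster.
delta0, phi = 0.025, 1.4        # assumed parameter values

def delta(z):
    """Depreciation rate as an increasing function of utilisation z."""
    return delta0 * z ** phi

def k_next_standard(K, I, d=delta0):
    # Standard RBC: K_{t+1} = (1 - delta) K_t + I_t with constant delta
    return (1 - d) * K + I

def k_next_variable(K, I, z):
    # Variable utilisation: K_{t+1} = (1 - delta(z_t)) K_t + I_t
    return (1 - delta(z)) * K + I

K, I = 10.0, 0.30
print(k_next_standard(K, I))            # constant depreciation
print(k_next_variable(K, I, z=1.0))     # same as standard when z = 1
print(k_next_variable(K, I, z=1.2))     # higher utilisation -> more depreciation
```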
Part (a)
Which model matches the data better?
Exam question (remaining parts, for reference):
• (b) In the model with variable capital utilization, labour is more volatile than in the standard model. Why is this? (10 marks)
• (c) True or false? Explain. If we wanted the model to predict a higher volatility of labour, we could simply assume a larger value for the variance of technology shocks. (10 marks)
Table with business cycle statistics:

          Standard RBC model             Variable capital utilization
  x       Std. dev.    Rel. std. dev.    Std. dev.    Rel. std. dev.
  Y       1.36         1.00              1.42         1.00
  C       1.01         0.74              0.89         0.63
  I       2.62         1.93              3.15         2.22
  H       0.90         0.66              1.26         0.89
  A       0.79         0.58              0.15         0.11
No data moments are given in the question, so you need to have some stylised facts memorised:
• Most important is lack of labour market volatility
• New model does better – halves the gap
Others which are less important:
• RBC needs large and persistent TFP shocks – this model only
needs smaller TFP shocks
• Relative volatility of investment better, consumption worse
Part (b)
Why is labour more volatile in the variable utilisation model?
Tests if you read King and Rebelo (1999) or not:
• Standard RBC: SE is counteracted by IE
• Variable utilisation: utilisation rate zt ↑ as At ↑
Additional SE is strong:
• Effective capital input zt Kt ↑ so MPLt = wt ↑ even more
• Increases labour supply to exploit highly productive times
Additional IE is smaller because:
• Higher MPL raises w leading to standard income effects
• But higher depreciation reduces capital and future MPL and w
Thus, SE dominates IE and labour is more volatile
Part (c)
Assuming higher variance of TFP shocks
Can we get a higher labour volatility by just doing so?
• Technically yes
But false:
• We do not just assume parameter values arbitrarily
• We calibrate them according to a specified procedure
• For TFP: match the variance of TFP (Solow residuals) in the data
• So it is improper to assume a larger variance out of nowhere
In other words:
• We can match labour volatility by doing this
• But at the cost of no longer matching TFP volatility
• So the empirical fit of the model does not improve by just assuming a larger variance