University Lille 1
M2 EITEI
Thomas Weitzenblum
• My name: Thomas Weitzenblum
• To join me: thomas.weitzenblum@univ-lille2.fr
• To get the PPT presentations: http://weitzenblum.free.fr
• If we can stick to the original plan, each 2-hour course will be devoted to a theory/economic question,
• I will take care of the introduction,
• The rest of the course will consist of studying a document that you will have read beforehand
• Prof. Ragot has focused on the economic analysis of long-run dynamics,
• Therefore, we will analyse the short- to medium-run implications of macroeconomic dynamics, so that the Advanced Macro course forms a complete survey of modern macro.
1. An introduction to Real Business Cycle Theory
2. A 2-country RBC
3. Endogenous cycles
4. Keynesian views of macroeconomic fluctuations: fixed or staggered prices
5. The labor market: job creation and job destruction
6. Monetary policy in a multi-country model
Remarks:
• Only 2 hours for each subject is a binding constraint…
• Especially with the first topic, which is not a particular application, but a whole field in macro, with numerous applications…
• So, more likely than not, one of the six subjects above might be sacrificed… on the altar of the total inelasticity of the time supply…
• Attention to economic cycles arose quite early: Clément Juglar, as early as 1860,
• The beginning of the analysis of cycles (as opposed to the simple enumeration of crises):
– Descriptive: different types of cycles: Juglar, Kitchin and Kondratieff,
– First attempts to understand the propagation of shocks over time and sectors: Mitchell (1927),
– First attempts to tackle the statistical issue: how to be sure that what we see as a cycle is really one?
Cycles or not cycles??
In 1927, Slutzky shows that what may appear, visually and statistically, as cycles (to be defined…) may be the result of pure randomness, deprived of any mean-reverting mechanism:
A simple moving average of a period-by-period white noise does the job…
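A minimal sketch of Slutzky's point (my own illustration, not taken from the original paper): draw pure i.i.d. noise, apply a simple moving average, and the smoothed series displays apparently regular waves even though no cyclical mechanism was built in.

```python
import numpy as np

rng = np.random.default_rng(0)

T, window = 400, 10
noise = rng.normal(size=T)                      # pure white noise, no cycle built in

# Simple (unweighted) moving average of the last `window` draws
smoothed = np.convolve(noise, np.ones(window) / window, mode="valid")

# Count sign changes as a crude measure of "waves": the smoothed series
# crosses zero far less often than the raw noise, i.e. it looks cyclical.
def zero_crossings(x):
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

print("zero crossings, raw noise   :", zero_crossings(noise))
print("zero crossings, moving avg. :", zero_crossings(smoothed))
```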
Frisch’s rocking horse
Ragnar Frisch (1933) proposes a new distinction regarding the analysis of economic fluctuations:
• Fluctuations are due to shocks affecting the economy,
• Consequently, an obvious distinction must be made between the impulse of the shock (its origin, its magnitude, its own temporal and statistical characteristics) and its propagation through the economy,
• Very much as we distinguish between the stick that hits a rocking horse (its intensity, etc…) and the subsequent movement of the horse.
Frisch claims that this distinction originally belongs to Wicksell.
What is the meaning/ the extent of this view?
• stochastic shocks are regarded as an essential source of fluctuations,
• but, while shocks are exogenous, the response of the economy, over time, over sectors, over types of agents, will be endogenous: the structure of the economy determines the nature of the propagation
This suggests a clear departure from the point of view that cycles may be fully endogenous (they solely depend on the structure of the economy)
But it also clearly departs from the other extreme view: that shocks are fully exogenous, so that the whole macrodynamics is itself exogenous
Simply because here, part of the story is exogenous, and part is endogenous.
A simulated example:
Assume a type of Samuelson’s oscillator
(Investment depends on expected demand, and the production function is linear in K):
$I_t = I_0 + v\,(Y_{t-1} - Y_{t-2})$, where $v$ denotes the accelerator coefficient.
Also assume that current consumption depends on past income (1-period lag):
$C_t = C_0 + c\,Y_{t-1}$
This implies that the GDP dynamics is characterized by the following second-order difference equation:
$Y_t = C_0 + I_0 + (c + v)\,Y_{t-1} - v\,Y_{t-2}$
With a plausible parameterization, this gives rise to damped fluctuations.
[Figure: $Y_t$ plotted against time, showing damped oscillations.]
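As a quick check of the "damped fluctuations" claim (the parameter values below are my own illustrative choices, not from the slides), one can compute the roots of the characteristic polynomial $x^2 - (c+v)x + v = 0$ associated with the difference equation above: complex roots with modulus below one mean oscillations that die out.

```python
import numpy as np

c, v = 0.6, 0.5          # illustrative values: marginal propensity to consume, accelerator

# Y_t = C0 + I0 + (c + v) Y_{t-1} - v Y_{t-2}  ->  characteristic polynomial
roots = np.roots([1.0, -(c + v), v])

print("roots  :", roots)                 # complex conjugates -> oscillations
print("modulus:", np.abs(roots))         # modulus < 1 -> the oscillations are damped
```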
If additional stochastic shocks were to affect, say, investment:
$I_t = I_0 + v\,(Y_{t-1} - Y_{t-2}) + \varepsilon_t$
with $\varepsilon_t$ being a white noise, that is, a random variable such that:
– its mean (rather, its expected value) is equal to zero,
– its current realization $\varepsilon_t$ is independent from any past one $\varepsilon_{t-i}$,
– its variance is constant over time (here, it is assumed to have a Gaussian distribution).
The response of the economy to these repeated shocks might well look like the time series of
GDP in France, or the US.
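A short simulation of the shocked version (same illustrative parameters as before; the shock standard deviation is my choice): although each $\varepsilon_t$ is serially independent, output fluctuates persistently around its steady state.

```python
import numpy as np

rng = np.random.default_rng(1)

C0, I0 = 20.0, 10.0
c, v = 0.6, 0.5                   # same illustrative parameters as before
sigma = 1.0                       # std. dev. of the white-noise investment shock
T = 200

Y_bar = (C0 + I0) / (1.0 - c)     # deterministic steady state
Y = np.full(T, Y_bar)

for t in range(2, T):
    eps = rng.normal(scale=sigma)
    # Y_t = C0 + I0 + (c + v) Y_{t-1} - v Y_{t-2} + eps_t
    Y[t] = C0 + I0 + (c + v) * Y[t - 1] - v * Y[t - 2] + eps

dev = Y - Y_bar
print("std. dev. of Y around steady state:", dev.std().round(3))
print("first-order autocorrelation       :", np.corrcoef(dev[:-1], dev[1:])[0, 1].round(3))
```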
What we have seen so far:
The impulse-propagation view tends to be preferred to the endogenous-cycle one,
It suggests describing the propagation mechanisms with as much precision as one can,
But it also requires, even beforehand, modelling the exogenous shock correctly: as a random variable, it may be characterized by persistence (or not), multiple lags (or not), etc…
• The RBC framework is very much the heir of
Frisch’s rocking horse,
• However, a word needs to be said about the real aspect of business cycles, which is, of course, opposed to its nominal aspects.
Real vs. Nominal cycles??
If RBC models are named this way, it owes a lot to their initial developments, and to Long and Plosser's (1983) initiative;
However, the impulse propagation mechanisms can be relevant with monetary disturbances as well as real ones.
Historically, the first « fluctuations at the equilibrium » models were rooted in monetary policy, and…
Modern RBC models do take money and financial considerations into account.
So that these models are not all focused on real sources of fluctuations.
The various dimensions along which temporal fluctuations do matter:
The standard deviation of the variable, of course in percentage points, gives an insight into how volatile the variable is,
Its auto-correlation, with respect to various lags and leads, gives valuable information on the degree of persistence of the variable,
Its correlation with any other variable may be of interest.
Of course, its correlation with GDP comes first: it makes the variable either procyclical or countercyclical
But, even before all these considerations, how can we obtain time series in such a format that they are suited for revealing business cycle properties?
The answer is: detrend the time series first; once they are detrended, the fluctuations around the trend can be analyzed.
[Figure: trend and fluctuations around the trend in the US]
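As a sketch of this workflow (the slides do not name a particular detrending device; the Hodrick-Prescott filter used below, via `statsmodels`, is the standard choice in the RBC literature), applied here to synthetic placeholder series rather than actual data:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(2)
T = 200

# Synthetic quarterly data: a trend plus persistent fluctuations (placeholder
# for actual log GDP and log consumption series).
trend = 0.005 * np.arange(T)
shock = np.zeros(T)
for t in range(1, T):
    shock[t] = 0.9 * shock[t - 1] + rng.normal(scale=0.007)
log_gdp = trend + shock
log_cons = trend + 0.5 * shock + rng.normal(scale=0.003, size=T)

# 1) Detrend (lambda = 1600 is the usual value for quarterly data)
gdp_cycle, _ = hpfilter(log_gdp, lamb=1600)
cons_cycle, _ = hpfilter(log_cons, lamb=1600)

# 2) Business-cycle moments: volatility, persistence, comovement with GDP
print("std. dev. (%)        :", 100 * gdp_cycle.std(), 100 * cons_cycle.std())
print("autocorr. of GDP gap :", np.corrcoef(gdp_cycle[:-1], gdp_cycle[1:])[0, 1])
print("corr(cons., GDP)     :", np.corrcoef(cons_cycle, gdp_cycle)[0, 1])
```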
1) Uncertainty and intertemporal choices
We assume the absence of government, and a closed economy.
Same assumptions as the Ramsey model.
In particular, no market imperfection (no externality, or imperfect information).
This implies that the equilibrium is Pareto-efficient, as in a Ramsey model. The exogenous shifts in the productivity parameter will not change anything in terms of efficiency.
The dynamic equilibrium of the economy, subject to aggregate shocks, is optimal.
This does not mean that agents like fluctuations, but that, given that there are fluctuations, the decentralized equilibrium cannot be outperformed.
This has another essential implication:
The resolution of the central planner’s program, or that of the decentralized equilibrium, will lead to the same results:
We can choose whether we explicitly describe the behavior of individuals, facing prices, or the optimal allocation of the planner.
Caution: this is not true for all RBC models, since many, of course, go further and introduce government spending, distortionary taxation, externalities, etc…
The setting
A 2-period economy.
The uncertainty takes the form of 2 possible states of nature for date 2.
Agents can buy claims to 1 unit of date-2 consumption contingent on the state of nature (high $h$ or low $l$). To each state is associated a probability: $\pi_l$ and $\pi_h = 1 - \pi_l$.
Markets are said to be complete.
Agents are all alike, and there is a continuum, of measure 1, of such agents.
Each is endowed with an endowment $w_1$ at date 1, and produces at date 2 according to the function: $y_2 = e^i f(k)$, where $e^i$ is the productivity shock ($e^l < e^h$) and $k$ is the individual date-1 investment.
The aggregate endowment is identical to the individual one (agents’ measure equals 1).
Agent’s preferences (no leisure):
Agents live for 2 periods, and the utility is separable with respect to time:
$U = u(c_1) + \mathbb{E}\left[u(c_2)\right] = u(c_1) + \pi_l\,u(c_{2l}) + \pi_h\,u(c_{2h})$
Agent’s program:
$\max_{c_1,\,c_{2l},\,c_{2h},\,k,\,d_{2l},\,d_{2h}} \; U = u(c_1) + \pi_l\,u(c_{2l}) + \pi_h\,u(c_{2h})$
s.t.
$c_1 + p_l\,d_{2l} + p_h\,d_{2h} + k = w_1$
$c_{2l} = e^l f(k) + d_{2l}$
$c_{2h} = e^h f(k) + d_{2h}$
where $d_{2i}$ denotes the quantity of claims on state-$i$ date-2 consumption, bought at date 1 at price $p_i$.
The intertemporal budget constraint writes:
$c_1 + p_l\,c_{2l} + p_h\,c_{2h} + k = w_1 + p_l\,e^l f(k) + p_h\,e^h f(k)$
Standard program (with 3 goods):
Optimality condition: agents equate the marginal rate of substitution with the relative price…
$MRS_{1/2l} = \dfrac{u'(c_1)}{\pi_l\,u'(c_{2l})} = \dfrac{1}{p_l}$
$MRS_{1/2h} = \dfrac{u'(c_1)}{\pi_h\,u'(c_{2h})} = \dfrac{1}{p_h}$
$MRS_{2l/2h} = \dfrac{\pi_l\,u'(c_{2l})}{\pi_h\,u'(c_{2h})} = \dfrac{p_l}{p_h}$
and, for the investment choice $k$:
$p_l\,e^l f'(k) + p_h\,e^h f'(k) = 1$
• Agents can individually perfectly insure themselves against the uncertain future
However, if all agents are alike, they will all try to insure perfectly, raising the relative price of the claims they all want to buy (on the low state) and lowering that of the others (on the high state).
In the end, we know that perfect insurance is impossible, because the global endowment at date 2 depends on the state of nature.
Agents will all behave similarly, and similarly to a
representative agent whose consumption is the average of the agents’.
This representative agent behaves like Robinson
Crusoe on his island: he is alone, so cannot exchange with anyone.
He simply consumes, at each date, his endowment (he would probably invest, if he were allowed to…).
The Euler Equation of the agent writes:
$u'(c_1) = \pi_l\,e^l f'(k)\,u'(c_{2l}) + \pi_h\,e^h f'(k)\,u'(c_{2h}) = \mathbb{E}\left[e^i f'(k)\,u'(c_{2i})\right]$
Writing the state-contingent return on investment as $1 + r_{2i} = e^i f'(k)$, this becomes:
$u'(c_1) = \mathbb{E}\left[(1 + r_{2i})\,u'(c_{2i})\right] = \mathbb{E}\left[1 + r_{2i}\right]\mathbb{E}\left[u'(c_2)\right] + \mathrm{cov}\left(1 + r_{2i},\,u'(c_2)\right)$
This is the Euler equation (or the Keynes-Ramsey condition) for the discrete-time uncertainty augmented Ramsey model.
The new term is the covariance: when the productivity shock is high, so is the interest rate, consumption is high too, but then its marginal utility is low
The covariance is negative.
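A numerical illustration of this Euler equation, under assumptions I add for concreteness (log utility, $f(k) = k^{\alpha}$, a symmetric two-state shock; none of these specific values are in the slides): with log utility the first-order condition reduces to $k = \alpha c_1$, which lets us verify the equality, the sign of the covariance term, and, as a by-product, the equilibrium state prices discussed earlier (the claim on the low state is the more expensive one).

```python
import numpy as np

# Added assumptions (not in the slides): u(c) = ln(c), f(k) = k**alpha,
# two equally likely productivity states.
alpha, w1 = 0.36, 1.0
e = np.array([0.9, 1.1])            # e_l < e_h
pi = np.array([0.5, 0.5])

# With log utility, u'(c1) = E[e f'(k) u'(c2)] collapses to 1/c1 = alpha/k,
# i.e. k = alpha * c1, hence:
k = alpha * w1 / (1.0 + alpha)
c1 = w1 - k
c2 = e * k**alpha                   # Robinson Crusoe consumes his output in each state

R = e * alpha * k**(alpha - 1.0)    # 1 + r_2i = e_i f'(k)
mu2 = 1.0 / c2                      # u'(c_2i)

print("u'(c1)                 :", 1.0 / c1)
print("E[(1+r) u'(c2)]        :", pi @ (R * mu2))                 # equal at the optimum
print("E[1+r] * E[u'(c2)]     :", (pi @ R) * (pi @ mu2))
print("cov(1+r, u'(c2))       :", pi @ (R * mu2) - (pi @ R) * (pi @ mu2))  # negative

# Implied equilibrium state prices (date-1 consumption as numeraire):
# p_i = pi_i * u'(c_2i) / u'(c_1); the low-state claim is dearer (p_l > p_h).
p = pi * c1 / c2
print("state prices (p_l, p_h):", p)
print("p_l e_l f'(k) + p_h e_h f'(k) =", p @ R)                   # = 1, the investment FOC
```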
What can be understood from these calculations?
In an uncertain world, expected values do matter, but they are not the whole story: the covariance must not be forgotten.
The Euler equation here has a form similar to that in deterministic models.
And, very importantly, we know that we can resort to the representative agent (rigorously, the marginal rate of substitution needs to be homogeneous of degree 0 with respect to consumption at the different dates).
The next step: adding leisure.
Absolutely necessary: otherwise, the fluctuations of output would only be caused by:
The productivity shock, which is exogenous,
The investment behavior of the agents, who take advantage of a positive productivity shock, whenever it occurs.
However, we may notice that this simple model already contains, qualitatively, if not quantitatively, the mechanisms pertaining to the rocking horse…
The productivity shock may hit the economy only once, but…
…agents’ reaction will be to save/invest more at the time productivity is high.
Otherwise, if they did not save more, they would simply increase their current consumption.
This would break the smoothness of the consumption path.
Agents decide to save part of the increase in productivity, to take advantage of a higher consumption during several time periods.
Since investing increases the capital stock, this means that future capital levels will be higher than their long-run target
Future GDP will also be higher, even if the productivity shock is gone
This model, however simple, manages, at least qualitatively, to create some form of persistence in the dynamics of output and capital.
This is precisely what is intended: data show a considerable amount of persistence, so the model has to contain the mechanisms creating it, otherwise the persistence would be solely due to the exogenous shock.
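A sketch of this propagation mechanism, under an extra assumption made only for simplicity: investment is a constant fraction $s$ of output (which, as the next model shows, is exactly optimal with log utility, Cobb-Douglas technology and full depreciation). A single one-period productivity shock then keeps output above its steady state for many subsequent periods, through the induced capital accumulation.

```python
import numpy as np

alpha, s = 0.36, 0.25               # illustrative capital share and saving rate
T = 40                              # full depreciation is assumed: k_{t+1} = s * y_t

# Steady state of k_{t+1} = s * k_t**alpha
k_ss = s ** (1.0 / (1.0 - alpha))
y_ss = k_ss ** alpha

e = np.ones(T)
e[1] = 1.01                         # one-time 1% productivity shock at t = 1

k = np.full(T, k_ss)
y = np.empty(T)
for t in range(T - 1):
    y[t] = e[t] * k[t] ** alpha
    k[t + 1] = s * y[t]             # investment = constant fraction of output
y[T - 1] = e[T - 1] * k[T - 1] ** alpha

# Output stays above its steady state long after the shock has vanished
print(np.round(100 * (y[:8] / y_ss - 1.0), 3))   # percentage deviations, t = 0..7
```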
As was already noted, the next step consists in adding leisure (cf. Romer, chapter 4):
A 2-period model, without uncertainty (to start with…).
The intertemporal utility function is:
$U = u(c_1, l_1) + u(c_2, l_2) = \ln c_1 + b\,\ln(1 - l_1) + \ln c_2 + b\,\ln(1 - l_2)$
where $l_t$ is the labor supply at date $t$ (so $1 - l_t$ is leisure).
The productive sector: a large number of identical firms, with a Cobb-Douglas production function.
A constant rate of capital depreciation.
Factors earn their marginal productivity.
And the intertemporal budget constraint:
$c_1 + \dfrac{1}{1+r}\,c_2 = w_1 l_1 + \dfrac{1}{1+r}\,w_2 l_2$
The optimality condition writes:
$\dfrac{1 - l_1}{1 - l_2} = \dfrac{1}{1+r}\,\dfrac{w_2}{w_1}$
Another optimality condition:
$w_1\,(1 - l_1) = b\,c_1$
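For completeness, here is a sketch of where these two conditions come from (standard Lagrangian steps, not spelled out in the slides). With $\lambda$ the multiplier on the intertemporal budget constraint:
$\mathcal{L} = \ln c_1 + b\ln(1-l_1) + \ln c_2 + b\ln(1-l_2) + \lambda\left(w_1 l_1 + \dfrac{w_2 l_2}{1+r} - c_1 - \dfrac{c_2}{1+r}\right)$
The first-order conditions are $\dfrac{1}{c_1} = \lambda$, $\dfrac{1}{c_2} = \dfrac{\lambda}{1+r}$, $\dfrac{b}{1-l_1} = \lambda w_1$ and $\dfrac{b}{1-l_2} = \dfrac{\lambda w_2}{1+r}$. Dividing the two leisure conditions gives $\dfrac{1-l_1}{1-l_2} = \dfrac{1}{1+r}\,\dfrac{w_2}{w_1}$, and combining $\dfrac{b}{1-l_1} = \lambda w_1$ with $\lambda = \dfrac{1}{c_1}$ gives $w_1(1-l_1) = b\,c_1$.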
In the presence of uncertainty, one obtains the same equation as the one put forward previously:
$\dfrac{1}{c_t} = \mathbb{E}\left[(1 + r_{t+1})\,\dfrac{1}{c_{t+1}}\right]$
In the case of total capital depreciation between
2 consecutive periods, the saving rate can be proved to be constant, as well as the labor supply.
Advantage of the simplifying assumptions:
They enable a closed-form solution.
Major drawback: labor supply does not depend on the current wage: there is no intertemporal substitution of labor.
This model will not generate enough volatility, nor enough persistence.
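A numerical check of the constant saving rate and labor supply claim, under functional forms I add for concreteness (log utility with leisure weight $b$, Cobb-Douglas technology $y = e\,k^{\alpha} l^{1-\alpha}$, full depreciation): solving the planner's two-period problem for several date-1 productivity levels, the saving rate out of date-1 output and both labor supplies come out identical.

```python
import numpy as np
from scipy.optimize import minimize

alpha, b = 0.36, 1.0               # capital share, leisure weight (my choices)
k1 = 1.0                           # inherited date-1 capital
e2 = np.array([0.9, 1.1])          # date-2 productivity states
pi = np.array([0.5, 0.5])

def solve(e1):
    """Planner's 2-period problem: choose l1, saving rate s, and state-contingent l2."""
    def neg_welfare(x):
        l1, s, l2l, l2h = x
        y1 = e1 * k1**alpha * l1**(1 - alpha)
        c1 = (1 - s) * y1
        k2 = s * y1                                   # full depreciation
        l2 = np.array([l2l, l2h])
        c2 = e2 * k2**alpha * l2**(1 - alpha)
        U = (np.log(c1) + b * np.log(1 - l1)
             + pi @ (np.log(c2) + b * np.log(1 - l2)))
        return -U
    x0 = np.array([0.3, 0.3, 0.3, 0.3])
    res = minimize(neg_welfare, x0, bounds=[(0.01, 0.99)] * 4)
    return res.x

for e1 in (0.9, 1.0, 1.1):
    l1, s, l2l, l2h = solve(e1)
    print(f"e1 = {e1:.1f}  ->  l1 = {l1:.4f}, saving rate = {s:.4f}, l2 = ({l2l:.4f}, {l2h:.4f})")
# The saving rate and labor supplies are the same for every e1: no intertemporal
# substitution of labor in this special case.
```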
Here are the questions that await answers from you:
1) What are the general principles for solving more complex RBC models numerically?
2) How are the coefficients of the log-linearized version of the model found?
3) What is an impulse response function?
4) What is the impact of a 1% productivity shock on the dynamics of the various variables of interest?
5) What were the possible reasons for the success of RBC models among economists (compare with potential explanations from another paradigm)?
6) What does it mean, to consider that per capita output has a random walk component?
What does an RBC model endowed with such a property tell us, in terms of short-run dynamics?
7) With a standard RBC model, what variables have their dynamics correctly mimicked by the model? Which ones do not? Explain.