Robust Stability of Monetary Policy Rules under
Adaptive Learning
Eric Gaus∗
Ursinus College
601 East Main St.
Collegeville, PA 19426-1000
Office Phone: (610)-409-3080
Email: egaus@ursinus.edu
Abstract
Recent research has explored how minor changes in expectation formation
can change the stability properties of a model (Duffy and Xiao 2007, Evans
and Honkapohja 2009). This paper builds on this research by examining an
economy subject to a variety of monetary policy rules under an endogenous
learning algorithm proposed by Marcet and Nicolini (2003). The results indicate
that operational versions of optimal discretionary rules are not “robustly stable”
as in Evans and Honkapohja (2009). In addition, commitment rules are not robust
to minor changes in expectational structure and parameter values.
JEL Codes: E52, D83
∗ Acknowledgements:
Many thanks to Srikanth Ramamurthy, George Evans, Jeremy Piger, Shankha
Chakraborty, the associate editor, and two anonymous referees for their helpful suggestions and comments.
All remaining errors are my own.
1 Introduction
In addressing important open questions in monetary policy, inflation and expectations,
Bernanke (2007) states that many of the interesting issues in modern monetary theory
require a framework that incorporates learning on the part of agents. Adaptive learning
relaxes the rational expectations assumption by allowing agents to use econometrics to
forecast the economic variables of interest. As new data arrives, agents “learn” about the
data process by updating their forecast equations. Agents might apply a decreasing weight
(referred to as a gain) to new information if they believe the economic structure is fixed.
Alternatively, agents might apply a constant weight to the new data to account for frequent
structural change. Regardless of the weighting scheme, the ability of the agents to learn the
rational expectations solution can be reduced to a stability condition, described below.
In an early paper, Howitt (1992) showed that an interest rate pegging regime does not
result in agents learning the rational expectations solution. More recently, Evans and
Honkapohja (2003 and 2006) provide an overview of various monetary policy rules and their
stability under learning. As pointed out by Duffy and Xiao (2007), these results hold only if
policy makers do not consider the interest rate in their loss function. Rules with interest-rate stabilization are stable under so-called decreasing-gain learning, but Evans and Honkapohja (2009) contend that Duffy and Xiao's rules do not result in stability under constant-gain learning.
This paper investigates a variety of monetary policy rules under various learning
algorithms with a particular focus on a hybrid of decreasing and constant gain learning
presented by Marcet and Nicolini (2003), which may be appropriate if agents believe they
face occasional structural breaks. Using this hybrid gain, the paper can address the extent to which Duffy and Xiao (2007) and Evans and Honkapohja (2009) actually differ from each other. Under the same conditions as Evans and Honkapohja (2009), using a hybrid gain does not overturn the results in their paper, implying a stronger result.
Duffy and Xiao (2007) derive two optimal interest rate rules - one under discretion, which can be characterized as a Taylor rule,

i_t = θ_x x_t + θ_π π_t,    (1)

and the other under commitment, which is similar, but incorporates lagged values,

i_t = θ_π π_t + θ_x (x_t − x_{t−1}) + θ_{i1} i_{t−1} − θ_{i2} i_{t−2}.    (2)
Evans and Honkapohja (2009) assume that agents do not have access to contemporaneous
endogenous variables (the output gap, x, and inflation, π) and therefore examine an
operational version of (1),
i_t = θ_x x_t^e + θ_π π_t^e,    (3)

where x_t^e = E*_{t−1} x_t and π_t^e = E*_{t−1} π_t, and the star indicates that expectations need not be rational. This paper adds to the literature by examining these three rules plus an operational version of (2) of the form,

i_t = θ_π π_t^e + θ_x (x_t^e − x_{t−1}) + θ_{i1} i_{t−1} − θ_{i2} i_{t−2},    (4)
and deriving an expectations-based rule in the flavor of Evans and Honkapohja (2006),

i_t = θ_{x1} x_{t−1} + θ_{x2} x_{t+1}^e + θ_π π_{t+1}^e + θ_{i1} i_{t−1} + θ_{i2} i_{t−2} + θ_u u_t + θ_g g_t,    (5)

where u_t and g_t are exogenous AR(1) processes. The following equations govern these processes:

u_t = ρ u_{t−1} + ũ_t   and   g_t = µ g_{t−1} + g̃_t,

where g̃_t ∼ iid(0, σ_g²), ũ_t ∼ iid(0, σ_u²), and 0 < |µ|, |ρ| < 1. A critical difference between Evans and Honkapohja's analysis and Duffy and Xiao's is the distinction between optimal and operational rules. The assumption of operational behavior drives the instability results in Evans and Honkapohja (2009), which may arise from a violation of optimality.1
The paper unfolds as follows. The next section describes the basic modeling
framework, learning, the stability concept in learning, and introduces the hybrid learning
algorithm. Section 3 explores the optimal discretionary policy (1) and also the operational
version (3) under all three types of learning. That section also provides a description of the
dynamics associated with the hybrid learning algorithm. The fourth section examines Duffy
and Xiao’s optimal rule with commitment (2), the operational version (4) and an
expectations based rule (5). The last section concludes.
2 Methodology
Basic Model
The following New Keynesian (NK) model, presented in section 3 of Evans and Honkapohja
(2009), describes the economy,2
x_t = x_{t+1}^e − ϕ(i_t − π_{t+1}^e) + g_t,    (6)

π_t = β π_{t+1}^e + λ x_t + u_t.    (7)
The Euler equation for consumption generates the output equation (6), while (7) describes
the NK Phillips Curve. The model is closed by specifying an interest-rate rule.
Substituting the generic Taylor rule (1) into (6) and rearranging (6) and (7) results in
the following matrix form of the model,

y_t = M y_{t+1}^e + P υ_t,    (8)
where y_t = (x_t, π_t)′ and υ_t = (g_t, u_t)′, and where

M = [ ϕ^{-1}/D          (1 − θ_π β)/D
      λ ϕ^{-1}/D        β + λ(1 − θ_π β)/D ]

and

P = [ ϕ^{-1}/D          −θ_π/D
      λ ϕ^{-1}/D        1 − λ θ_π/D ],

with D ≡ ϕ^{-1} + θ_π λ + θ_x.
Denoting F = diag{µ, ρ} and υ̃_t = (g̃_t, ũ_t)′, the corresponding process for the shocks takes the form

υ_t = F υ_{t−1} + υ̃_t.
Learning and E-stability
Under these assumptions, agents’ perceived law of motion (PLM) - the equation they
estimate - takes the form of the minimum state variable (MSV) solution,
y_t = a + c υ_t,    (9)

which implies that agents' expectations can be written as y_{t+1}^e = a + cF υ_t. Substituting these expectations into (8) yields the actual law of motion (ALM),

y_t = Ma + (McF + P) υ_t.    (10)
There exists a mapping of perceived coefficients to the actual coefficients, which the
literature refers to as the T-map. In this particular case the T-mapping is,
T (a) = Ma,
T (c) = McF + P.
The fixed point of the T-mapping is the Rational Expectations Equilibrium (REE). If all the eigenvalues of the derivative of the T-map have real parts less than one, then the solution is locally expectationally stable, or E-stable. The result is local in the sense that agents' expectations
will converge to the REE as long as their initial expectations are not too far away from the
REE. Evans and Honkapohja (2001) define the E-stability principle, which states that there
exists a correspondence between the E-stability of an REE and its stability under adaptive
learning.
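To make the mapping concrete, the REE can be computed directly as the fixed point of the T-map above. The following is a minimal sketch (not the paper's code), assuming M, P, and F have already been constructed as numpy arrays with the dimensions used in the text; here ā = 0 whenever I − M is invertible, consistent with the zero targets in the model.

```python
import numpy as np

def ree_fixed_point(M, P, F):
    """Fixed point of the T-map T(a) = M a, T(c) = M c F + P.

    a_bar solves (I - M) a = 0, so a_bar = 0 whenever I - M is invertible;
    vec(c_bar) solves (I - kron(F', M)) vec(c) = vec(P), using column-major vec.
    """
    n, m = P.shape
    a_bar = np.zeros(n)
    vec_c = np.linalg.solve(np.eye(n * m) - np.kron(F.T, M), P.flatten(order="F"))
    c_bar = vec_c.reshape((n, m), order="F")
    return a_bar, c_bar
```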
The agents in the model do not know the structural parameters. Their expectations of
future outcomes are therefore based on estimates of a and c for the model (9). In real time,
these estimates, ϕ̂_t = (a_t, c_t)′, are calculated by recursive least squares (RLS),

ϕ̂_t = ϕ̂_{t−1} + γ_t R_t^{-1} ξ_t (y_t − ξ_t′ ϕ̂_{t−1}),
R_t = R_{t−1} + γ_t (ξ_t ξ_t′ − R_{t−1}),

where ξ_t = (1, υ_t′)′ and R_t is the moment matrix.
Adaptive learning assumes the gain, γ_t, is simply t^{-1}, which can also be referred to as decreasing-gain learning. This case corresponds to standard OLS, where all data are weighted equally. In contrast, setting γ_t = γ ∈ (0, 1] implies that the oldest data have virtually no weight. This is called constant-gain learning and is similar to a rolling window of data.3
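The recursion with either gain choice can be written compactly in code. The following is a minimal sketch of one RLS step; the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def rls_update(phi_hat, R, xi, y, gain):
    """One recursive least squares step for the PLM y_t = a + c v_t.

    phi_hat : (k, m) stacked coefficient estimates (intercepts and shock loadings)
    R       : (k, k) moment matrix of the regressors
    xi      : (k,)   regressor vector, here (1, g_t, u_t)
    y       : (m,)   realized endogenous variables (x_t, pi_t)
    gain    : weight on the newest observation
    """
    R_new = R + gain * (np.outer(xi, xi) - R)
    forecast_error = y - phi_hat.T @ xi
    phi_new = phi_hat + gain * np.linalg.solve(R_new, np.outer(xi, forecast_error))
    return phi_new, R_new

# decreasing gain: pass gain = 1/t; constant gain: pass a fixed gamma in (0, 1]
```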
Proposition 1 in Evans and Honkapohja (2009) states that for constant gain learning the eigenvalues must lie inside a circle of radius 1/γ centered at (1 − 1/γ, 0). Further, they show that this holds for several different monetary policy rules at empirically relevant values of the constant gain.
Switching Gains
Marcet and Nicolini (2003) propose a hybrid gain sequence which allows for an
endogenous switch between constant gain and decreasing gain. It is plausible that switching
between the different types of gain may vindicate the monetary policy rules examined by
Evans and Honkapohja (2009). The switch is endogenously triggered by forecast errors.
Large errors cause agents to suspect a structural break, and they therefore prefer a constant-gain, which discounts older data that may now be biased. Once forecast errors fall below a
cutoff agents switch back to a decreasing-gain. In Milani (2007b) this cutoff is determined
by the historical average of forecast errors.
Denoting the gain γ_{z,t} for each endogenous variable, z ∈ {x, π}, the switching gain takes the following form,

γ_{z,t} = { γ̄_z                  if (1/J) Σ_{i=t−J}^{t} |z_i − z_i^e| ≥ (1/W) Σ_{i=t−W}^{t} |z_i − z_i^e|,
            1/(γ̄_z^{-1} + k)     if (1/J) Σ_{i=t−J}^{t} |z_i − z_i^e| < (1/W) Σ_{i=t−W}^{t} |z_i − z_i^e|,    (11)
where k is the number of periods since the last switch to a decreasing-gain, J is the number
of periods for recent calculations, and W is the number of periods for historical
calculations.4 This rule sets the gain of a particular estimation equal to a constant value if the recent average prediction error of that variable is at least as large as the historical average. If the reverse is true, then the value of the gain declines with each subsequent period. Unfortunately, the
stability properties of this particular type of gain have not been studied. The following
section explores the stability properties of this type of gain sequence in the context of
several models.
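As a concrete illustration, the switching rule (11) can be implemented in a few lines. This is a minimal sketch assuming the forecast errors for one variable are stored in a flat array and that the counter k is reset whenever the constant gain is triggered.

```python
import numpy as np

def switching_gain(errors, gamma_bar, k, J, W):
    """Endogenous gain in the spirit of Marcet and Nicolini (2003), eq. (11).

    errors    : past absolute forecast errors |z_i - z_i^e| for one variable
    gamma_bar : constant-gain value used after a suspected structural break
    k         : periods elapsed since the last switch to the decreasing gain
    J, W      : recent and historical window lengths
    Returns the new gain and the updated counter k.
    """
    recent = np.mean(errors[-J:])        # average forecast error over the last J periods
    historical = np.mean(errors[-W:])    # average forecast error over the last W periods
    if recent >= historical:
        return gamma_bar, 0              # constant gain; reset the decreasing-gain clock
    k = k + 1
    return 1.0 / (1.0 / gamma_bar + k), k
```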
3 Discretionary Monetary Policy
Optimal Policy
Duffy and Xiao (2007) suggest an optimal policy rule based on the policymaker minimizing a loss function that includes interest-rate stabilization in addition to output and inflation stabilization. Note that, as in Duffy and Xiao, no zero-lower-bound condition is imposed, because this line of literature relies on the observation by Woodford (2003) that monetary policy makers should include the interest rate in their loss function to ensure that the lower bound does not bind.5 Specifically, the policymaker minimizes the following loss function subject to (6) and (7), which are modified to account for a lack of commitment,6
E_0 Σ_{t=0}^{∞} β^t [π_t² + α_x x_t² + α_i i_t²],    (12)

where the relative weights of interest-rate and output stabilization are α_i and α_x, respectively. Using the first order conditions of this loss function, Duffy and Xiao (2007) derive the following interest-rate rule,

i_t = (ϕλ/α_i) π_t + (ϕα_x/α_i) x_t.    (13)
To analyze E-stability of the model closed by (13), one sets θ_π = ϕλ α_i^{-1} and θ_x = ϕα_x α_i^{-1} in (8) and finds the eigenvalues of the derivative of the T-map. The derivative of the T-map is as follows:

DT_a = M,
DT_c = F′ ⊗ M.
Table 6.1 of Woodford (2003) provides the calibrated values α_x = 0.048, α_i = 0.077, ϕ = 1/0.157, λ = 0.024, and β = 0.99. In addition, σ_u = σ_g = 0.2 and ρ = µ = 0.8.7 Under this parameterization all the eigenvalues of the derivative of the T-map are less than one, which implies that the model is locally E-stable for a decreasing-gain. The eigenvalues also satisfy the constant-gain local stability condition for all values of γ between zero and one. Consequently, the model must also be locally stable under the switching-gain.
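These stability checks are straightforward to carry out numerically. The sketch below is illustrative only: it builds the reduced form implied by (6), (7), and rule (13) under the Woodford calibration listed above, then tests the decreasing-gain condition and the constant-gain circle condition of Proposition 1.

```python
import numpy as np

# Woodford (2003) calibration used in the text
alpha_x, alpha_i, phi, lam, beta = 0.048, 0.077, 1 / 0.157, 0.024, 0.99
mu = rho = 0.8

# rule coefficients from (13): theta_pi = phi*lam/alpha_i, theta_x = phi*alpha_x/alpha_i
theta_pi = phi * lam / alpha_i
theta_x = phi * alpha_x / alpha_i

# structural form A y_t = B y^e_{t+1} + v_t implied by (6), (7) and rule (1)
A = np.array([[1 + phi * theta_x, phi * theta_pi],
              [-lam, 1.0]])
B = np.array([[1.0, phi],
              [0.0, beta]])
M = np.linalg.solve(A, B)        # reduced-form matrix in (8); P would be inv(A)
F = np.diag([mu, rho])

DT_a = M                         # derivative of the T-map in a
DT_c = np.kron(F.T, M)           # derivative of the T-map in c

eigs = np.concatenate([np.linalg.eigvals(DT_a), np.linalg.eigvals(DT_c)])
print("E-stable under a decreasing gain:", bool(np.all(eigs.real < 1)))

# Proposition 1: for a constant gain, eigenvalues must lie inside the circle
# of radius 1/gamma centered at (1 - 1/gamma, 0)
for gamma in (0.02, 0.05, 0.5, 1.0):
    ok = np.all(np.abs(eigs - (1 - 1 / gamma)) < 1 / gamma)
    print(f"gamma = {gamma}: constant-gain condition satisfied = {bool(ok)}")
```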
As noted by Evans and Honkapohja (2009) and McCallum (1999) policy rules with
contemporaneous endogenous variables are problematic. What follows is a version of Duffy
and Xiao’s rule that accounts for this problem under a few common parameterizations.
Operational Policy
Operational monetary policy rules, in the sense of McCallum (1999), assume knowledge of
contemporaneous exogenous variables, but not contemporaneous endogenous variables.8
Thus, agents form expectations over contemporaneous endogenous variables and this,
following Evans and Honkapohja (2009), changes (1) to (3).9 By substituting (3) into (6)
the model can be rewritten in matrix form as,
y_t = M_0 y_t^e + M_1 y_{t+1}^e + P υ_t,    (14)
where y_t = (x_t, π_t)′ and υ_t = (g_t, u_t)′, and where

M_0 = [ −α_x ϕ²/α_i       −ϕ²λ/α_i
        −α_x ϕ²λ/α_i      −ϕ²λ²/α_i ],

M_1 = [ 1     ϕ
        λ     β + ϕλ ],
and
P = [ 1     0
      λ     1 ].
Substituting the appropriate expectational terms into (14) yields the ALM,

y_t = (M_0 + M_1)a + (M_0 c + M_1 cF + P) υ_t,    (15)

with the following T-map,

T(a) = (M_0 + M_1)a,
T(c) = M_0 c + M_1 cF + P.
Table 1 lists the eigenvalues of the derivatives of the T-map under three common
parameterizations of the New Keynesian model: Woodford (2003), Clarida, Gali and
Gertler (2000), and Jensen and McCallum (2002). The Duffy and Xiao rule is not E-stable
under the Jensen, McCallum (JM) parameterization, but is under Woodford and Clarida,
Gali, Gertler (CGG). This is likely due to the inter-temporal elasticity of substitution being
greater than one under Woodford and CGG and less than one under JM. When the model is
E-stable, the eigenvalues nevertheless include large negative values under Woodford and CGG. As in Evans and Honkapohja (2009), this is the cause of the instability with large constant gains. Note that only one of the eigenvalues is affected by the switch from the contemporaneous-data rule to the operational rule.
Simulation of this gain structure requires a short burn-in period to establish a history of error terms. In order to create a seamless transition from the burn-in to learning, the burn-in length is set to the inverse of the gain, and during this period agents use the constant-gain. Given the constant-gain value of 0.025, this implies a burn-in length of 40 periods. This ensures no discontinuity at the agents' first opportunity to switch: agents choose between keeping the constant-gain or allowing the value of the gain to decrease. During the initialization period agents' expectations do not affect the economy, which minimizes the usual learning dynamics. Thus, the coefficients driving the simulation will be a small perturbation away from the rational expectations (RE) values. When the initialization period ends, agents use the switching-gain in (11).
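To fix ideas, the simulation protocol just described can be sketched as follows. This is a simplified illustration rather than the paper's actual code: it takes the reduced-form matrices M0, M1, P, and F from above as given, replaces the full RLS recursion with a simple gain-weighted update of the PLM coefficients, and omits the detail that expectations do not feed back into the economy during the initialization period.

```python
import numpy as np

def simulate_switching(M0, M1, P, F, T=15000, gamma_bar=0.025, J=4, W=35, sigma=0.2, seed=0):
    """Stylized learning simulation for the operational-rule economy (14)-(15)."""
    rng = np.random.default_rng(seed)
    burn_in = int(round(1.0 / gamma_bar))      # burn-in length = inverse of the gain (40 periods)
    a = np.zeros(2)                            # PLM intercepts (x, pi)
    c = np.zeros((2, 2))                       # PLM loadings on (g, u)
    v = np.zeros(2)                            # exogenous state (g_t, u_t)
    errors = [[], []]                          # absolute forecast errors for x and pi
    k = [0, 0]                                 # periods since last switch to the decreasing gain
    path = np.empty((T, 2))
    for t in range(T):
        v = F @ v + rng.normal(0.0, sigma, size=2)
        y_now = a + c @ v                      # E*_{t-1} y_t
        y_next = a + c @ (F @ v)               # E*_{t-1} y_{t+1}
        y = M0 @ y_now + M1 @ y_next + P @ v   # ALM (15)
        xi = np.concatenate(([1.0], v))        # regressors (1, g_t, u_t)
        for j in range(2):
            errors[j].append(abs(y[j] - y_now[j]))
            if t < burn_in:
                gain = gamma_bar               # constant gain during the burn-in
            else:
                e = np.asarray(errors[j])
                recent, hist = e[-J:].mean(), e[-W:].mean()
                if recent >= hist:             # suspected break: use the constant gain
                    k[j], gain = 0, gamma_bar
                else:                          # otherwise let the gain decline, as in (11)
                    k[j] += 1
                    gain = 1.0 / (1.0 / gamma_bar + k[j])
            coeffs = np.concatenate(([a[j]], c[j]))
            coeffs = coeffs + gain * (y[j] - coeffs @ xi) * xi
            a[j], c[j] = coeffs[0], coeffs[1:]
        path[t] = y
    return path
```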
Figures 1 and 2 depict a particular realization of the NK economy under constant-gain
learning and the contemporaneous expectations Duffy and Xiao policy rule. Evans and
Honkapohja (2009) report that with the Woodford parameterization the model converges to
RE if the constant-gain parameter takes values less than 0.024. They refer to the result as
not being “robustly stable,” in the sense that empirical estimates of the constant-gain are
larger than this value.10 Under the CGG parameterization the model converges to RE if the constant-gain parameter takes values less than 0.059, an implied window size of about 17 periods, which is an improvement but does not cover the entire plausible range.
Discussion of Switching Gain Stability
While the threshold value of the constant-gain increases (from 0.024 to 0.026 under the Woodford parameterization), the simulations exhibit temporary deviations from the REE. Figure 3 illustrates these exotic dynamics for a particular realization of the NK economy when agents use the switching-gain.11 For these simulations γ̄_z = 0.025 for z = x, π, which lies just outside the stable range found by Evans and Honkapohja (2009); the historical window length is W = 35, which implies that agents use about nine years of past data for the historical volatility indicator; and the window length for recent data is J = 4, the estimated value found by Milani (2007b). Simulations do not explode for values of the constant-gain of 0.026 or lower, which would not be considered robustly stable. The
historical average suggested by Milani partially drives this result. Should one use an
arbitrary value in the switching rule as suggested by Marcet and Nicolini (2003), then, for a
given value of the constant gain, there exists a value above which the model is explosive
and below which the model rapidly settles into a continuous decreasing-gain regime.
Unlike Marcet and Nicolini (2003) there are no underlying structural changes in this
NK model. Thus, the result is completely driven by the expectation formation behavior.
Sargent (1999) uses a model in which agents temporarily escape a self-confirming
equilibrium as well, but examines government beliefs, not beliefs of the entire economy.
Cho, Williams and Sargent (2002) examine the ordinary differential equations (ODEs) in
the Sargent (1999) framework and find that the “escape dynamics” include an additional
ODE relative to the mean dynamics.
Table 2 provides a comparison of the economic significance of the temporary
deviations. These examples come from two independent simulations of 15,000 periods.
After discarding the first 10,000 periods, the mean and variance of output and inflation relative to the REE are calculated over the entire remaining 5,000 periods and also for a 100-period window around the largest temporary deviation in that 5,000-period section. These
examples suggest that the exotic behavior leads to a large increase in variance relative to
RE. The top example shows that both inflation and output may be lower than under RE,
while the bottom example has both variables above RE.
Robustness of these results is not easily established. Changing structural parameters changes the threshold for instability, so that more than one parameter would have to change in any comparison. However, changing the parameters in the gain structure (11) does not pose this problem. Therefore Table 3 looks at different values of the rolling window. This gives a sense of the economic impact of the deviations from rational expectations, but does not address
frequency. Table 4 displays stability results from a Monte Carlo exercise for several
different historical window lengths. These results are based on 5,000 simulations of 10,000
periods each. If the last estimated coefficients lie within 1 percent of the REE values, then the particular simulation is counted as having achieved stability.12 The two sets of calibrated parameters that
result in stability under a decreasing-gain are used. Ignoring the level effects, one can see
more sensitivity to the window size, W , under Woodford than under CGG. A potential
explanation is that under the CGG parameterization the expectational feedback loop is more
sensitive to a “bad” series of shocks.
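The Monte Carlo exercise behind Table 4 can be summarized by a short convergence check. The sketch below is illustrative: `simulate_once` stands in for a single learning simulation (for example, the sketch above, reduced to its final coefficient estimates), and the 1-percent criterion is applied coefficient by coefficient.

```python
import numpy as np

def fraction_converged(simulate_once, ree_coeffs, n_sims=5000, tol=0.01, seed=0):
    """Percent of simulations whose final coefficient estimates end up within
    `tol` (1 percent) of the rational expectations values, as in Table 4.

    A relative criterion is used here; coefficients whose REE value is zero
    would need an absolute tolerance instead.
    """
    rng = np.random.default_rng(seed)
    converged = 0
    for _ in range(n_sims):
        final = simulate_once(rng)      # final estimated (a, c) stacked in a vector
        close = np.abs(final - ree_coeffs) <= tol * np.maximum(np.abs(ree_coeffs), 1e-12)
        if np.all(close):
            converged += 1
    return 100.0 * converged / n_sims
```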
4 Monetary Policy with Commitment
Evans and Honkapohja (2009) postulate that the policy rule with commitment in Duffy and
Xiao (2007) suffers from the same instability that arises under discretionary policy. As
mentioned above, Evans and Honkapohja restrict their examination of commitment to rules
where αi = 0, which leaves Duffy and Xiao’s rule undefined. This section evaluates the
stability of Duffy and Xiao’s commitment rule under all three types of gain sequences, and
compares it to the expectations based rule similar to Evans and Honkapohja (2003) and
Evans and Honkapohja (2006). It also examines the robustness of Evans and Honkapohja
(2009) result by considering alternative parameterizations.
Optimal Policy with Commitment
The first order condition from minimizing (12) subject to (6) and (7) under the timeless perspective is

βλ π_t + βα_x (x_t − x_{t−1}) + α_i λ i_{t−1} + α_i ϕ^{-1}(i_{t−1} − i_{t−2}) − βα_i ϕ^{-1}(i_t − i_{t−1}) = 0.    (16)

Rearranging results in (2) with θ_π = ϕλ/α_i, θ_x = α_x ϕ/α_i, θ_{i1} = (ϕλ + β + 1)/β, and θ_{i2} = 1/β. The system
under commitment can be written as,
y_t = M y_{t+1}^e + N_0 y_{t−1} + N_1 w_{t−1} + P υ_t,    (17)

where w_t = (i_t, i_{t−1})′ and M, N_0, N_1, and P are the appropriate matrices. The MSV solution provides the PLM, which also supplies the form of the RE solution,

y_t = a + b_0 y_{t−1} + b_1 w_{t−1} + c υ_t.    (18)
Note that the law of motion governing the interest-rate vector, w_t, can be written as

w_t = Q_0 y_t + Q_1 y_{t−1} + Q_2 w_{t−1}.    (19)

Substituting (18) and (19) into (17), one can find the T-mapping,

T(a) = M(I + b_0 + b_1 Q_0)a,
T(b_0) = M(b_0² + b_1 Q_0 b_0 + b_1 Q_1) + N_0,
T(b_1) = M(b_0 b_1 + b_1 Q_0 b_1 + b_1 Q_2) + N_1,
T(c) = M((b_0 + b_1 Q_0)c + b_1 cF) + P.
Upon inspection one can see that there are multiple equilibria in this model. Using these
equations one can find the derivatives of the T-mapping at a rational expectations solution,

DT_a = M(I + b̄_0 + b̄_1 Q_0),
DT_{b_0} = b̄_0′ ⊗ M + I ⊗ M(b̄_0 + b̄_1 Q_0),
DT_{b_1} = I ⊗ M(b̄_0 + b̄_1 Q_0) + (Q_0 b̄_1)′ ⊗ I + Q_2′ ⊗ I,
DT_c = I ⊗ M(b̄_0 + b̄_1 Q_0) + F′ ⊗ b̄_1,

where a bar indicates the rational expectations coefficients found using the generalized Schur decomposition (Klein 2000).
The results are consistent with Duffy and Xiao (2007), that is, all real parts of the
eigenvalues of the derivative of the T-map lie within the unit circle. This implies that the
model is locally stable under adaptive learning. However, since there are lagged endogenous variables, the model may be unstable for high values of the constant gain. Numerical simulation shows that the optimal rule results in instability for gain values of 0.18 or greater, and the switching gain yields no improvement.
Operational Policy with Commitment
Drawing on the methodology of Evans and Honkapohja (2009), one can operationalize (2) as (4), now referred to as DX, with the same values for the θ's as in the optimal commitment policy. The system can be written as

y_t = M_0 y_t^e + M_1 y_{t+1}^e + N_0 y_{t−1} + N_1 w_{t−1} + P υ_t,    (20)
with the appropriate matrices for M0 , M1 , N0 , N1 , and P. The MSV solution (18) serves as
the PLM for this model as well. Using the appropriate law of motion for wt , the derivatives
of the T-map at the unique saddle path stable rational expectations solution are,
DT_a = M_0 + M_1(I + b̄_0 + b̄_1 Q_0),
DT_{b_0} = I ⊗ M_0 + b̄_0′ ⊗ M_1 + I ⊗ M_1(b̄_0 + b̄_1 Q_0),
DT_{b_1} = I ⊗ M_0 + I ⊗ M_1(b̄_0 + b̄_1 Q_0) + (Q_0 b̄_1)′ ⊗ I + Q_2′ ⊗ I,
DT_c = I ⊗ M_0 + I ⊗ M_1(b̄_0 + b̄_1 Q_0) + F′ ⊗ b̄_1.
Under the Woodford parameterization I find that the model achieves stability for values
of the gain of 0.0078 or less. Using Milani’s switching gain extends this region to 0.0084,
but it does not display the transitory exotic dynamics found under a Taylor-type rule. As
predicted by Evans and Honkapohja (2009), the DX rule does not fare well under large
gains. In fact the instability is so severe that even allowing for temporary switches to a
decreasing-gain does not significantly extend the range of values that result in stability.
Expectations Based Policy with Commitment
A potential criticism of operational rules is that they are not necessarily optimal.
Generalizing the expectations based rule of Evans and Honkapohja (2009) for αi > 0 allows
for better comparison to the previous optimal rule with commitment. Substituting (6) and
(7) into the first order condition (16) and rearranging results in (5), now referred to as EH,
with

θ_{x1} = −α_x / (α_i ϕ^{-1} + λ²ϕ + α_x ϕ),
θ_{x2} = θ_g = (λ² + α_x) / (α_i ϕ^{-1} + λ²ϕ + α_x ϕ),
θ_π = (λβ + λ²ϕ + α_x ϕ) / (α_i ϕ^{-1} + λ²ϕ + α_x ϕ),
θ_{i1} = α_i(ϕλ + β + 1) / (β(α_i + λ²ϕ² + α_x ϕ²)),
θ_{i2} = α_i / (β(α_i + λ²ϕ² + α_x ϕ²)),
θ_u = λ / (α_i ϕ^{-1} + λ²ϕ + α_x ϕ).
The matrix form of the model is identical to (17), where M, N0, N1, and P are redefined
appropriately. Using the MSV solution (18), and the appropriate law of motion for wt , the
derivatives of the T-mapping are,

DT_a = M(I − b̄_1 Q_0)^{-1}(I + b̄_0),
DT_{b_0} = b̄_0′ ⊗ M(I − b̄_1 Q_0)^{-1} + I ⊗ M(I − b̄_1 Q_0)^{-1} b̄_0,
DT_{b_1} = −(Q_0(I − b̄_1 Q_0)^{-1}(b̄_0 b̄_1 + b̄_1 Q_2))′ ⊗ M(I − b̄_1 Q_0)^{-1} + I ⊗ M(I − b̄_1 Q_0)^{-1} b̄_0 + Q_2′ ⊗ M(I − b̄_1 Q_0)^{-1},
DT_c = I ⊗ M(I − b̄_1 Q_0)^{-1} b̄_0 + F′ ⊗ M(I − b̄_1 Q_0)^{-1}.
The eigenvalues of the T-mapping derivatives under the Woodford parameterization are 0.0782 and 0.9169 for DT_a, 0.0350 and 0.9114 for DT_b, and 0, 0, 0.0715, and 0.7192 for DT_c. Much like the result in Evans and Honkapohja (2009), all the
eigenvalues lie within the unit circle. Though the EH rule satisfies the E-stability condition,
the lagged endogenous variables imply that there exists a possibility for instability for
sufficiently high values of the constant-gain. Similar to the expectations based rule when
αi = 0, the EH rule is robustly stable under the Woodford parameterization. The values of
the constant gain equal to or larger than 0.134 result in the instability of the EH rule under
interest-rate stabilization. This threshold is much smaller than the corresponding value for the expectations-based rule without interest-rate stabilization found in Evans and Honkapohja (2009). Using the Milani switching gain extends the stable range significantly: the EH rule remains stable for values of the gain below 0.292.
Table 5 shows that these policy rules with commitment are sensitive to different parameterizations. The EH rule is not E-stable under CGG and JM. Fewer large negative eigenvalues are observed as ϕ decreases for the DX rule; however, they still exist. Taken together, these results suggest that commitment rules with interest rate stabilization perform poorly under learning. In addition, the sensitivity of the constant-gain portion of the switching-gain to a series of "bad" shocks differs across monetary policy rules. This sensitivity suggests that the expectational feedback loop spirals out of equilibrium faster than the decreasing gain can return it to the REE. The expectations-based rule has a larger stable range; it is not only "robustly stable" but also slows down the expectational feedback loop.
5 Conclusion
Researchers have debated the merits of monetary policy rules under learning using two
types of gain structures, decreasing and constant. Finding conflicting results for many
monetary policy rules leads one to consider switching between a constant-gain and a
decreasing-gain, as proposed by Marcet and Nicolini (2003). Though the switching-gain
extended the stable region for all interest-rate rules, in most cases it did not result in robustly
stable values of the gain. The analysis also suggests that the results in Evans and Honkapohja (2009) rely on the distinction between optimal and operational monetary policy rules.
The analysis above shows that switching-gains result in stability, but potentially
develop exotic dynamics. These dynamics are characterized by several episodes of very
high volatility. This indicates that monetary policy rules that appear stable may in fact hide
a potential period of substantial economic turmoil driven entirely by expectations. Marcet
and Nicolini (2003) also find deviations from the rational expectations equilibrium, but their
model has two equilibria predicated on government imposition of exchange rate rules. This paper documents exotic behavior in a model with a single REE. The results presented above
suggest that policymakers should be concerned with the potential that expectations, and
expectations alone, can create exotic behavior that temporarily strays from the REE.
Notes
1 For more discussion of optimality under learning see Mele, Molnár and Santoro (2011).
2 See Woodford (2003) for the derivation.
3 The window size can be found by taking the inverse of the value of the gain.
4 In Milani (2007b) W was set to 3000 for very long simulations.
5 If the interest rate target is high enough, then negative values impose a high cost on the policy maker. While this is true, it is not the same as the zero lower bound condition imposed in Adam and Billi (2006). The results below are robust to high interest rate targets. Ascari and Ropele (2009) show that high trend inflation leads to a deterioration in efficient policy and potential indeterminacy. These points raise interesting questions that should be examined in future research.
6 All the targets have been set to zero for convenience.
7 These values are chosen for ease of comparison to Evans and Honkapohja (2009). The results are not sensitive to these parameter values.
8 One might also create an operational policy by using lagged values. In the context of adaptive learning this would be a naive expectation.
9 Bullard and Mitra (2002) provide an excellent evaluation of data timing and expectations for a Taylor-type rule under learning.
10 Estimates of constant-gain values range from 0.03 to 0.1; see Milani (2007a) and Branch and Evans (2006).
11 Though the last deviation may be an indicator of instability, extending the simulation to 10,000 periods shows that this deviation is temporary and that subsequent deviations remain close to the REE. This example is meant to show that relatively large deviations can occur later in the simulation.
12 This method will underestimate the number of simulations that are stable.
References
Adam, Klaus, and Roberto M. Billi. 2006. Optimal monetary policy under commitment
with a zero bound on nominal interest rates, Journal of Money, Credit and Banking
54(3): 728–52.
Ascari, Guido, and Tiziano Ropele. 2009. Trend inflation, Taylor principle and
indeterminacy, Journal of Monetary Economics 41(8): 1557–84.
Bernanke, Benjamin. 2007. Inflation expectations and inflation forecasting. Federal
Reserve Bank Speech, July 10.
Branch, William A., and George W. Evans. 2006. A simple recursive forecasting model, Economics Letters 91(2): 158–66.
Bullard, James, and Kaushik Mitra. 2002. Learning about monetary policy rules,
Journal of Monetary Economics 49(6): 1105–29.
Cho, In-Koo, Noah Williams and Thomas J. Sargent. 2002. Escaping Nash inflation, The
Review of Economic Studies 69(1): 1–40.
Clarida, Richard, Jordi Gali and Mark Gertler. 2000. Monetary policy rules and
macroeconomic stability: evidence and some theory, Quarterly Journal of Economics
115(1): 147–80.
Duffy, John, and Wei Xiao. 2007. The value of interest rate stabilization policies when
agents are learning, Journal of Money, Credit and Banking 39(8): 2041–56.
Evans, George W., and Seppo Honkapohja. 2001. Learning and expectations in
macroeconomics. Princeton, NJ: Princeton University Press.
Evans, George W., and Seppo Honkapohja. 2003. Adaptive learning and monetary
policy design, Journal of Money, Credit and Banking 35(6): 1045–72.
Evans, George W., and Seppo Honkapohja. 2006. Monetary policy, expectations and
commitment, The Scandinavian Journal of Economics 108(1): 15–38.
Evans, George W., and Seppo Honkapohja. 2009. Robust learning stability with
operational monetary policy rules. In Monetary Policy under Uncertainty and Learning,
edited by Karl Schmidt-Hebbel and Carl Walsh. Santiago, Chile: Central Bank of Chile,
pp. 145–70.
Howitt, Peter. 1992. Interest rate control and nonconvergence to rational expectations,
Journal of Political Economy 100(4): 776–800.
Jensen, Christian, and Bennett T. McCallum. 2002. The non-optimality of proposed
monetary policy rules under timeless perspective commitment, Economics Letters
77(2): 163–8.
Klein, Paul. 2000. Using the generalized Schur form to solve a multivariate linear
rational expectations model, Journal of Economic Dynamics and Control
24(10): 1405–23.
Marcet, Albert, and Juan P. Nicolini. 2003. Recurrent hyperinflations and learning,
American Economic Review 93(5): 1476–98.
McCallum, Bennett T. 1999. Issues in the design of monetary policy rules. In Handbook
of Macroeconomics, edited by John Taylor and Michael Woodford. Amsterdam, The
Netherlands: Elsevier, pp. 1483–1530.
Mele, Antonio, Krisztina Molnár and Sergio Santoro. 2011. The suboptimality of
commitment equilibrium when agents are learning. Unpublished paper, University of
Oxford.
Milani, Fabio. 2007a. Expectations, learning and macroeconomic persistence, Journal
of Monetary Economics 54(7): 2065–82.
Milani, Fabio. 2007b. Learning and time-varying macroeconomic volatility.
Unpublished paper, University of California-Irvine.
Sargent, Thomas J. 1999. The conquest of American inflation. Princeton, NJ: Princeton
University Press.
Woodford, Michael. 2003. Interest and prices: foundations of a theory of monetary
policy. Princeton, NJ: Princeton University Press.
Table 1: Operational vs. Optimal Rules

            Woodford                CGG                     Jensen-McCallum
            Oper        Opt         Oper        Opt         Oper        Opt
DTa        -41.2973     0.0231     -25.7550     0.0360      0.9264      0.9274
             0.9865     0.9865       0.9791     0.9793      1.0388      1.0383
DTc        -41.5280     0.0185     -26.0154     0.0288      0.7381      0.7419
             0.7886     0.7892       0.7815     0.7834      0.8285      0.8307

Displays the computed eigenvalues of the derivative of the T-map under different parameterizations. Woodford (2003): αx = 0.048, αi = 0.077, ϕ = 1/0.157, λ = 0.024, β = 0.99, σu = σg = 0.2, and ρ = µ = 0.8. Clarida et al. (2000): ϕ = 4 and λ = 0.075. Jensen and McCallum (2002): ϕ = 0.164 and λ = 0.02. If all eigenvalues lie within the unit circle, then the model is E-stable under constant gain learning. Eigenvalues with real parts less than one ensure E-stability under decreasing gain learning.
Table 2: Examples of Temporary Deviations

                     Mean                Variance
                     x         π         x         π
5000 Periods         0.9755    0.9757    1.0996    1.0000
100 Periods          0.9843    0.9978    4.8917    1.0020

5000 Periods         1.0082    1.0133    1.1178    1.0000
100 Periods          1.0489    0.9955    6.3118    1.0003

Relative mean and variance statistics from two simulations with Woodford calibrated values and W = 35, J = 4, and γ̄z = 0.025. Compares different subsamples around a temporary deviation from rational expectations that occurs after a burn-in of 10,000 periods.
Table 3: Sensitivity of Switching Gain Parameters

                     Mean                Variance
                     x         π         x         π
W = 35, J = 5, and γ̄z = 0.025
5000 Periods         1.0070    1.0115    1.0940    1.0002
100 Periods          1.0436    0.9961    5.2390    1.0000

W = 35, J = 6, and γ̄z = 0.025
5000 Periods         0.9944    0.9744    4.7127    1.0002
100 Periods          0.7178    1.0990    168.67    1.0078

W = 45, J = 4, and γ̄z = 0.025
5000 Periods         1.0185    1.0223    1.0000    1.0000
100 Periods          0.9938    0.9930    1.0000    1.0000

W = 25, J = 4, and γ̄z = 0.025
5000 Periods         0.9902    1.0080    2.8574    1.0010
100 Periods          1.2012    0.9983    84.858    1.0004

Relative mean and variance statistics from four simulations with Woodford calibrated values. Compares different subsamples around a temporary deviation from rational expectations that occurs after a burn-in of 10,000 periods.
Table 4: Switching Gain Stability Sensitivity Analysis

Woodford parameters, J = 4, and γ̄z = 0.025
W            15     25     35     45     55     65     75     85     95     105    115    125
% Converge   61.28  87.02  89.86  90.58  90.50  90.08  89.34  88.22  86.16  84.46  83.32  81.68

Woodford parameters, J = 4, and γ̄z = 0.024
W            15     25     35     45     55     65     75     85     95     105    115    125
% Converge   87.96  94.16  95.64  95.80  95.84  95.70  95.52  94.98  94.34  93.58  93.30  92.48

CGG parameters, J = 4, and γ̄z = 0.074
W            15     25     35     45     55     65     75     85     95     105    115    125
% Converge   57.54  73.86  62.26  49.52  40.26  30.20  23.08  15.52  11.36  8.48   5.74   4.14

Shows the percent of simulations in which the last value of the estimated parameters lies within 1 percent of the RE parameters. The historical window W governs the number of periods used to calculate the historical average MSFE. These results are based on 5,000 simulations of 10,000 periods each.
DTa
DTb0
DTb1
DTc
Table 5: Robustness of Commitment Rules
Woodford
CGG
DX
EH
DX
EH
-27.4387
0.0030
-12.7509
0.0201
-0.0674
0.9990
-0.2815
1.0431
-26.3243
-0.0762
-11.6997
-0.1942
0.8352
-0.9185
0.3695
-0.7144
-27.4387
0
-12.7509
0
-0.0674
-0.0347
-0.2815
-0.0295
-27.6392
0.0754
-12.5596
0.2239
-27.2455 -0.0316 + 0.0085i -12.8931 -0.0373 + 0.0485i
-0.2679 -0.0316 - 0.0085i -0.0902 -0.0373 - 0.0485i
0.1257
-0.0339
-0.4237
-0.0543
-27.8832
0.0715
-13.3539
-13.3539
0.0039
0.7192
-0.1992
0.7834
JM
DX
-0.2055
-0.0324
0.5599
0.6361
-0.2055
-0.0324
0.8099
0.6368
0.0343
0.2074
-5.4861
0.1711
EH
-5.0314
0.9402
6.7124
-0.3795
0
3.3469
135.7722
-14.9013
6.8163
-0.3912
-3.3547
0.7512
Displays computed value of the eigenvalues of the derivative of the T-map under different parameterizations for
the operationalized Duffy and Xiao rule (4) and the expectations based rule (5). Woodford (2003): αx = 0.048,
αi = 0.077, ϕ = 1/0.157, λ = 0.024, β = 0.99, σu = σg = 0.2, and ρ = µ = 0.8. Clarida et al. (2000): ϕ = 4,
and λ = 0.075. Jensen and McCallum (2002): ϕ = 0.164, and λ = 0.02. If all eigenvalues within the unit
circle then the model is E-stable under constant gain learning. Eigenvalues less than one ensure E-stability
under decreasing gain learning.
Figure 1: Explosive behavior of the operational Taylor-type rule. Woodford
parameterization and γ = 0.04
Figure 2: Convergence to the REE under Taylor-type rule. Woodford parameterization
and γ = 0.02
Figure 3: Stability of operational Taylor-type rule with endogenously switching-gain,
γ̄z = 0.025, and Woodford parameterization.