9. Fiscal Policy: Institutions versus Rules

Contents
1. Basu & Taylor: Business Cycles in International Perspective
2. John B. Taylor: An Historical Analysis of Monetary Policy Rules
3. The quest for prosperity without inflation
4. OPTIMAL INFLATION TARGETING: FURTHER DEVELOPMENTS OF INFLATION TARGETING
5. On Target? The International Experience with Achieving Inflation Targets
6. Romer & Romer – Choosing the Federal Reserve Chair: Lessons from History
7. The Case for Restricting Fiscal Policy Discretion
8. Fiscal Policy: Institutions versus Rules
9. The (partial) rehabilitation of interest rate parity in the floating rate era: Longer horizons, alternative expectations, and emerging markets
10. Taylor & Taylor – The Purchasing Power Parity Debate
11. Estimating China’s ”Equilibrium” Real Exchange Rate. By: Dunaway & Li
    The macroeconomic balance approach
    The extended PPP approach
12. CURRENCY CRISES (Krugman, 1997)
13. Managing Macroeconomic Crises: Policy Lessons
14. THE TRILEMMA IN HISTORY: TRADEOFFS AMONG EXCHANGE RATES, MONETARY POLICIES, AND CAPITAL MOBILITY
1. Basu & Taylor: Business Cycles in International Perspective
The purpose of the article is to test different business cycle (BC) theories against macroeconomic
statistics. B&T divide the available data into four periods:
1. 1870-1914: classical gold standard, stability and integration of world markets.
2. 1919-1939: economy destroyed by wars, autarky.
3. 1945-1971: Bretton-Woods as an attempt to rebuild the global economy.
4. 1970s-ongoing: floating exchange rates, increased movements of capital.
Patterns of Macroeconomic Aggregates1
Has the BC become more or less volatile over time? B&T investigate this by looking at the
standard deviation of key economic variables, see Table 1. B&T: no definitive case that
today’s BCs are less volatile. Volatility of investment, however, is consistently three or four
times higher than that of other variables. Price volatility was stable during the gold standard
and Bretton Woods, and also in recent years.
Another important property of the variables is persistence. Persistence is higher under floating
exchange rates, which supports the view that money matters for BC dynamics. A third characteristic is
the extent to which variables move with output: the correlation between C and Y is around 0.6-0.7, I is
highly correlated with Y during the interwar and floating periods, and the CA is countercyclical. B&T also look at
cross-country comovements: comovement is low for C, which indicates a lack of risk-sharing across
countries, while I shows high positive correlation even though a negative one would be expected.
The insights from Table 1 are what B&T want a good BC model to explain.
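The three statistics B&T read off Table 1 (volatility, persistence, cyclicality) can be sketched on toy data. The series below are invented purely for illustration, not B&T's data; the function names are mine.

```python
# Sketch of the three Table 1 statistics on invented series:
# volatility = standard deviation, persistence = first-order
# autocorrelation, cyclicality = correlation with output.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 200)            # output deviation from trend
c = 0.7 * y + rng.normal(0.0, 0.5, 200)  # consumption co-moves with output
i = 3.0 * y + rng.normal(0.0, 1.5, 200)  # investment is far more volatile

def volatility(x):
    return float(np.std(x))

def persistence(x):
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

def cyclicality(x, output):
    return float(np.corrcoef(x, output)[0, 1])

print(volatility(i) / volatility(y))  # investment several times as volatile
print(cyclicality(c, y))              # consumption clearly procyclical
```

Run on actual period-by-period data, these three numbers per variable are exactly what a good BC model would have to reproduce.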
Some Key Issues in Choosing Between Business Cycle Models
Is money neutral? In neoclassical models this is the case. Table 2 shows that exchange rate
volatility is higher during certain monetary regimes. This indicates that money is not neutral.
B&T conclude that a good BC model should deliver both short-run monetary non-neutrality
and long-run reversion to neutrality.
Are prices sticky? One approach holds that money causes real effects even though wages and prices
are perfectly flexible. Another approach emphasizes the slow adjustment of prices and wages. The New
Keynesian models emphasize flexible wages and sticky output prices; one example is so-called
“menu costs”. B&T discuss a number of different researchers and their models, but
conclude that the data imply that nominal rigidities are important and can lead to massive
economic downturns such as the Great Depression.
Does the labor market clear? A good BC model must explain that both C and L track the BC:
labor hours move roughly one-to-one with Y, and C less than one-to-one. This means that only a
procyclical real wage can explain the positive comovement of C and L. There are three ways to
explain how a procyclical real wage is consistent with cost minimization by firms:
1. Markup may fall during booms (assume imperfect competition). Prices are fixed but firms experience higher
real wages. Focus of New Keynesian models of BC.
2. Technology may improve during booms. Leads to both higher wages and higher output. This is the Real
Business Cycle model.
3. If there are increasing returns to scale, MPL might rise as more labor is employed during booms.
1
Do look at Table 1 yourself; I have only included the most essential points.
Old Keynesian models provide an alternative explanation, as they assume wages to be sticky
so that employment is determined by labor demand. However, this implies countercyclical real
wages. There are also a number of models with imperfect labor markets; these imply
acyclical or countercyclical real wages. In Table 4, B&T confront theory with data and find
that the real wage is acyclical during the first two periods and procyclical in the third and fourth.
But there are several data issues. The outcome of the discussion is that real wages are
procyclical only during certain times. B&T urge BC theorists to come up with a model in which
real wages can be acyclical, procyclical, or countercyclical depending on the period.
Business cycles and the open economy. Some BC models assume closed economies while
others are built on an open global economy. B&T: these models suit different time periods, and
a good BC model needs to take the current conditions in the world economy into account. One
approach is the macroeconomic policy trilemma by Obstfeld and Taylor: policymakers want
fixed exchange rates for stability, free capital mobility to ensure efficient resource allocation
and an activist monetary policy to address domestic policy goals. However, these goals are
incompatible. Different regimes solve different problems. B&T: a good BC model must keep
this trade-off in mind.
Conclusion
Evidence shows that money is not neutral, but it is still unclear through which channel. There
is some evidence in favor of models with nominal rigidities. For the Keynesian models it is
not clear whether the sticky-price or sticky-wage model is more plausible. For each model
there is a gap between the model and data. Many BC models also fail to take international
linkages into account. Probably, both theory and empirical evidence will move in favor of
open economy models. BC models must also take into account the trilemma facing
policymakers.
2. John B. Taylor: An Historical Analysis of Monetary Policy Rules2
Main finding: a good monetary policy rule would have responded more aggressively than it
did in the 1960s, 1970s and the gold standard period. This was done in the 1980s and 1990s.
In this paper, Taylor uses an historical approach (as a complement to a model-based
approach) to focus on particular episodes to get more knowledge about how a policy rule
might work in practice.
1. From the Quantity Equation of Money to a Monetary Policy Rule
Taylor uses his own model (1993) as the monetary policy rule:
r = π + g·y + h(π − π*) + r^f
where r = the short-term interest rate
π = the inflation rate (percent change in P)
y = the percentage deviation of real output (Y) from trend
g, h, π* = constants
Note that the slope coefficient on inflation = (1 + h), the slope coefficient on the output gap = g, and the intercept = r^f − hπ*.
Objective: determine if parameters vary across periods and look for differences in economic
performance related to such variations.
2
Martin discussed this article in the lecture. Supplement with his explanations of and opinions about it;
at times Martin did not agree with Taylor.
Taylor suggests g = 0.5, h = 0.5, π* = 2% and r^f = 2%; however, g = 1 (or close to 1) in more
recent research. Taylor argues that the coefficients must be positive, since otherwise an increase in
inflation or the output gap would be met with a lower short-run interest rate. The size of
these coefficients differs between periods, and this is what Taylor intends to examine.
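The rule above, with Taylor's suggested parameter values, can be sketched as a one-line function (the function name and default arguments are mine):

```python
# Taylor (1993) rule from the text: r = pi + g*y + h*(pi - pi*) + r^f,
# with the suggested g = 0.5, h = 0.5, pi* = 2, r^f = 2.
def taylor_rate(inflation, output_gap, g=0.5, h=0.5, pi_star=2.0, r_f=2.0):
    return inflation + g * output_gap + h * (inflation - pi_star) + r_f

# With inflation on target and a closed output gap, the rule gives the
# neutral nominal rate pi* + r^f = 4:
print(taylor_rate(2.0, 0.0))  # 4.0
# A one-point rise in inflation raises the rate by 1 + h = 1.5 points,
# i.e. more than one-for-one -- the slope coefficient (1 + h) noted above:
print(taylor_rate(3.0, 0.0) - taylor_rate(2.0, 0.0))  # 1.5
```

The "more than one-for-one" response is the key property: with h > 0 the real rate rises when inflation rises, which is what distinguishes the later-era rules Taylor estimates.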
2. The Evolution of Monetary Policy Rules in the US: From the International Gold
Standard to the 1990s
Comparing Fig. 1 and 2, Taylor sees a striking difference: Fig. 1 shows more frequent
business cycles and larger fluctuations in inflation and real output. The 1960s and 1970s saw large
and persistent swings in inflation, while since the mid-1980s there has been greater macroeconomic
stability. A second contrast: the responsiveness of the short-term interest rate to inflation and
output is much smaller in the earlier period.
Table 1 shows estimates of the coefficients. The estimated values are much larger in the Bretton
Woods and post-Bretton Woods eras than during the gold standard era, and the coefficients increase
gradually over time. Fig. 3 shows the monetary policy rule for the two periods; the slopes of the
lines measure the size of the interest rate response to inflation. In the earlier period the slope is < 1,
which means that higher inflation results in a lower real short-run interest rate. The long-run
equilibrium is determined by the intersection of the monetary policy rule and the interest rate line.
Note that there is no intersection for the earlier period’s line. According to Taylor, this is one way of
explaining why the inflation rate was more stable during the later period.
3. Effects of the Different Policy Rules on Macroeconomic Stability
Is there a connection between different policy rules and the economic performance? Table 1
indicates that three eras (1879-1914, 1960-1979, 1986-1997) can be distinguished by big
differences in responsiveness of short-term interest rate in the monetary policy rule. These
eras can also be distinguished by economic stability; more stable economy when the estimated
coefficients are larger. Taylor: these findings support my model!
Could this relationship be a result of reverse causation, i.e. could larger output gaps and
higher inflation cause weaker interest rate responses? Taylor argues that this is not the case. He
views the Fed’s development as a learning process in search of a good monetary policy rule, and
claims that this gradual evolution (which he documents through a number of quotes) makes
it clear that causation runs as in his model and not the other way.
4. “Policy Mistakes”: Big Deviations from Baseline Policy Rules
From Fig. 4, 5 and 6 Taylor identifies periods where the gap between the actual federal funds rate
and the policy rule is large, and uses them for a historical “policy mistake” analysis. The first
period was the early 1960s, when the federal funds rate was 2-3 percentage points higher
than the Taylor rule suggests; this resulted in a weak recovery. The second period
started in the late 1960s and continued throughout the 1970s, when monetary policy was
easier than it would have been under the Taylor rule. Taylor says this contributed to the Great
Inflation. At the time, the belief in a long-run Phillips curve trade-off made it hard for the Fed to defend
low inflation, and there was a great fear of unemployment inherited from the Great Depression. The
third period was the early 1980s, where the Taylor rule finds that the interest rate was not
lowered enough. However, Taylor can understand this policy, as it came just after the Great
Inflation.
5. Conclusions
The monetary policy rule has changed dramatically over time in the US, and these changes have
been associated with changes in economic stability. A monetary policy rule in which the
interest rate responds to inflation and real output as in the 1980s and 1990s is a good rule.
3. The quest for prosperity without inflation
By Athanasios Orphanides
This text evaluates how well monetary stabilization policy has worked during the last decades.
The article focuses in particular on the situation in the United States during the 1960s and 1970s.
During this period the economy fell into deep recession; this was the time of the Great Inflation
and of the oil shock of the late 1970s. Orphanides states that monetary policy was too active
during this period, in contrast to Taylor, who argues that it was too passive.
There are several ideas about how to conduct monetary policy; the most discussed today is the
Taylor rule, which was derived by looking at how the interest rate was set during a successful
period of the U.S. economy.
The Taylor rule:
R_t = 2 + π_t + 0.5(π_t − 2) + 0.5y_t
The revised Taylor rule:
R_t = 2 + π_t + 0.5(π_t − 2) + 1.0y_t
(The revised Taylor rule puts more weight on the output gap and is considered to have
better stabilization properties.)
Taylor shows that if the Fed had set the interest rate according to this rule in the 1960s and
1970s, the acceleration of inflation in the late 1960s and 1970s could have been avoided. The
commodity price shocks and the oil shocks of 1973 and 1979 are still apparent when
applying the Taylor rule, but inflation is successfully stabilized. The output gap is also
smaller and less volatile, compared with the actual outcome, when the Taylor rule is used
to set the interest rate.
The central argument of this paper is that the Taylor rule rests on unrealistic assumptions of
perfect information when it is used to set the interest rate. Since inflation and the output gap
are the explanatory variables in the Taylor rule, these are the variables Orphanides examines
in his paper. Taylor incorrectly assumes that policymakers have accurate information about the
current values of inflation and the output gap when setting the interest rate. Orphanides shows
that a substantial part of the data policymakers acted on consists of noise and mismeasurement.
He compares the levels of inflation and the output gap as they were perceived at the time with
the actual levels as we know them today, and finds the following: inflation was underestimated
by approximately one percentage point, and the output gap was heavily overstated (maximum
overstatement around 10 percentage points).
After this discovery, Orphanides uses the Taylor rule to see what recommendations it would
have given at the time concerning the interest rate, with the real-time data imperfections included.
Using the Taylor rule with this real-time information shows that inflation would have exceeded
13% in the 1980s and stayed above 10% until the 1990s. From this analysis it seems that
policymakers actually “followed” the Taylor rule given the information available at the time,
and that this caused the problems faced during the 1960s and 1970s. Hence, the Taylor rule
should not be used as a guide for how to set the interest rate, since the measurements of the
data are not accurate. Orphanides states that policy should have been considerably tighter than
was the case.
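Orphanides's point can be sketched numerically by feeding the revised Taylor rule real-time versus later-revised data. The readings below are hypothetical, chosen only to match the rough mismeasurement magnitudes quoted above (inflation understated by about one point, the negative output gap overstated by up to about ten points).

```python
# Revised Taylor rule from the text: R = 2 + pi + 0.5*(pi - 2) + 1.0*y.
def revised_taylor_rate(inflation, output_gap):
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 1.0 * output_gap

# Hypothetical early-1970s readings (illustrative, not actual data):
pi_real_time, gap_real_time = 4.0, -5.0
pi_revised = pi_real_time + 1.0     # inflation was understated by ~1 point
gap_revised = gap_real_time + 10.0  # the negative gap was overstated by ~10

rate_then = revised_taylor_rate(pi_real_time, gap_real_time)  # 2.0
rate_true = revised_taylor_rate(pi_revised, gap_revised)      # 13.5
```

With the mismeasured inputs the rule "recommends" an easy 2% rate, while the correctly measured data call for a far tighter 13.5%; this is the sense in which following the rule in real time produced the easy policy of the period.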
Since the mismeasurement of the output gap seems to be the key source of the policy failure,
Orphanides investigates it further. The underlying assumptions when calculating the output
gap are the natural rate of unemployment and the growth rate of potential output. Both of these
key assumptions seem to have been overly optimistic when estimated at the time.
1) The natural rate of unemployment was assumed to be around 4%, which seemed reasonable at
the time considering the high growth rate since the end of WW2. In the late 1960s, however,
the economy slowed down towards what we today consider normal; the unemployment
rate averaged 6.3% from 1966-1993, which is considerably higher than what was found acceptable
at the time. Hence, the natural rate of unemployment was understated by around 2 percentage points.
2) Potential output was assumed to grow at 4% or more at the time, but in reality the growth
rate was 3.5%.
In the late 1970s corrections of the estimates were made but they were still overly optimistic.
In the 1960s the Federal Reserve was led by Chairman Martin, who felt that inflation
constituted the biggest problem in the U.S. economy in the late 1960s. The Fed raised the
interest rate by four percentage points to decrease inflation. A new Chairman took office in
1970 and was instead of the opinion that policy should be eased. Since the economy behaved
in a way never seen before, he misinterpreted the signals: the economy was not going into
recession, it was merely no longer growing at the enormous speed of the first two decades
after WW2. Thus, the unemployment rate continued to rise at the same time as inflation
started to increase as a result of the eased monetary policy. As if this was not bad enough,
the oil shock in the late 1970s complicated the economy’s problems further.
Orphanides presents alternative policies for setting the interest rate that do not rely on variables
that can be mismeasured as heavily as the output gap. He presents two options:
1) Inflation targeting: As mentioned above, the estimation of inflation was not as bad as the
estimation of the output gap. Building the policy on fluctuations in inflation would remove
a large part of the noise in Taylor’s rule.
2) Natural growth targeting: The underlying data on natural growth also have measurement
problems, but not as large as for the output gap; according to historical data the errors are
considerably smaller.
When these methods are applied to the data that were known at the time, they would have set the
interest rate differently.
Alt. 1 would have avoided the Great Inflation, even considering the (small) measurement
problems. However, there would have been a deep recession after the oil shock of 1973.
Alt. 2 would also have avoided the Great Inflation and kept π stable during the 1970s. During
the late 1980s, inflation would have been around two percentage points higher than it actually
was with this method.
Both of these methods would have worked better than the Taylor rule once the realistic
information failures are taken into account. Orphanides states that policy reflects “prudence or
overconfidence” rather than “rules or discretion”; this is how he frames the choice policymakers
make when deciding how much to rely on the data at hand. He calls his two alternatives above
prudent policy rules: they do not rely too heavily on data that might be inaccurate, and they
encourage a careful approach rather than activism. The activist discretionary policy used
during the Great Inflation period did what the Taylor rule would have suggested, and this is an
example of what he calls overconfident policy rules: policymakers overestimate their
understanding of the economy. Greater activism is appropriate only in stable periods. According
to Orphanides, the success of the U.S. economy in recent decades shows that policymakers
have started to act less actively.
4. OPTIMAL INFLATION TARGETING: FURTHER DEVELOPMENTS
OF INFLATION TARGETING
Lars E.O. Svensson
Inflation targeting was first introduced in 1990 and has since been adopted by more than
20 countries. The practice has led to a more systematic and consistent internal decision
process, and communication with the private sector has become much more transparent.
The monetary and real stability achieved is exceptional from a historical perspective.
This paper provides a selective discussion of points on which Svensson believes further
improvements are both possible and desirable.
CHARACTERISTICS OF GOOD INFLATION TARGETING
• An explicit monetary policy objective in the form of a numerical inflation target.
• Target variables that include both inflation and real variables, such as the output gap.
• The central bank sets the instrument rate such that the forecast of the target variables
“looks good” relative to the monetary policy objective.
• A high degree of transparency and accountability, with detailed motivations.
POSSIBLE IMPROVEMENTS
Central banks are not very consistent about the relative weight they attach to stability of
variables other than inflation, and the intertemporal substitution between target variables
is not always taken into account. Progress would be made if operational objectives were
specified in terms of an explicit intertemporal loss function.
Central banks normally make explicit decisions and announcements only about the current
instrument rate but what really matters is the entire assumed instrument-rate path. The current
instrument rate matters very little for the economy. What matter are private sector
expectations about the entire future path of the instrument rate. These expectations affect
longer-term interest rates and asset prices, which in turn affect private sector decisions.
Progress can be made if central banks explicitly think in terms of entire instrument-rate plans
and corresponding projections of target variables.
Projections are normally based on an assumed instrument-rate path that differs from the
optimal one and forecasts will not be very accurate. Since monetary policy has an impact on
the economy via the private sector expectations, progress could be made by announcing the
optimal projection and the analysis behind it. This would be the most effective way to
implement monetary policy.
THE LOSS FUNCTION
Inflation targeting is normally flexible, meaning that the monetary policy objectives include not
only stability of inflation but also stability of the real economy, such as the output gap. A fixed
horizon over which the inflation target is to be met is often used, but this is not good practice.
To clarify what the target variables are and what relative weights they have, an intertemporal
loss function should be specified:
L_t = (π_t − π*)² + λx_t²
L_t : the loss in period t.
π_t : inflation in period t.
π* : the inflation target.
x_t : the output gap in period t.
λ > 0 : the relative weight on output-gap stabilization relative to inflation stabilization. λ = 1
implies that the variability of the output gap and of inflation are equally important.
(An alternative is to construct the loss function as the sum of current and expected discounted
future losses.) The loss function provides a consistent way of ranking different inflation and
output-gap projections and the central bank can simply choose the one that results in the
lowest loss.
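The ranking idea above can be sketched directly: each candidate is a projection path of (inflation, output gap) pairs, and the bank picks the path with the lowest cumulative loss. The value λ = 0.5 and the two candidate paths below are made-up numbers for illustration.

```python
# Period loss L_t = (pi_t - pi*)^2 + lambda * x_t^2, summed along a
# projection path; the bank picks the path with the lowest total loss.
def period_loss(pi, x, pi_star=2.0, lam=0.5):
    return (pi - pi_star) ** 2 + lam * x ** 2

def path_loss(path, lam=0.5):
    return sum(period_loss(pi, x, lam=lam) for pi, x in path)

# Two hypothetical (inflation, output gap) projection paths:
tight = [(2.0, -1.0), (2.0, -0.5), (2.0, 0.0)]  # on target, some slack
gradual = [(3.0, 0.0), (2.5, 0.0), (2.0, 0.0)]  # inflation converges slowly

best = min([tight, gradual], key=path_loss)     # here: the "tight" path
```

Note that with a larger λ (more weight on output-gap stability) the ranking could flip, which is exactly the substitution between target variables the loss function is meant to make explicit.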
ADVANTAGES OF THE LOSS FUNCTION
1. Clarifies what the target variables are and the substitution between them. Provides a
consistent ranking of the different projections. Makes clear that the entire projection
path of the target variables matters, not just the projections at some particular horizon.
2. Clarifies the appropriate role of asset prices and concerns about bubbles in inflation
targeting.
3. Avoids inconsistent and ad hoc decisions. Provides guidance to consistent policy.
4. Going public about the loss function increases transparency and improves the
evaluation of monetary policy. The public will better understand the substitutions and
tradeoffs involved.
THE INSTRUMENT-RATE PROJECTION
The optimal instrument-rate plan is the plan that results in an optimal projection of the target
variables, the projection that minimizes the intertemporal loss function.
THE INSTRUMENT-RATE ASSUMPTION UNDERLYING PROJECTIONS OF THE
TARGET VARIABLES
A constant instrument rate over the forecast horizon has traditionally been assumed by central
banks. Svensson argues that this has the following problems:
• Unrealistic and misleading.
• Differs from market expectations. Projections are then based on both the constant rate and
those expectations, leading to inconsistency.
• Market expectations may adjust towards the constant rate, leading to drastic changes in
asset prices.
SVENSSON’S ALTERNATIVES TO A CONSTANT INSTRUMENT RATE:
• Use market expectations of future interest rates as the projection. More realistic, but may
be problematic if expectations are abnormal.
• Use an ad hoc reaction function such as a Taylor-type rule.
• Use an optimal instrument-rate projection, minimizing the loss function.
THE INSTRUMENT-RATE DECISION
The central bank’s decisions have an effect on the economy essentially only through the
private sector expectations they give rise to about future instrument rates, inflation and output.
Modern monetary policy is therefore essentially “managing private sector expectations”.
Assumptions about the entire future instrument-rate path are therefore much more important
than current instrument-rate assumptions. If the central bank cannot decide on a path, they
should plot different paths in a graph and decide on the median.
TRANSPARENCY AND COMMUNICATION ISSUES
Announcing the optimal projection – including the instrument- rate projection – and the
analysis behind it would have the greatest impact on private sector expectations and is the
most effective way to implement monetary policy. More public information increases social
welfare. Some special explanation may be required to emphasize that the instrument-rate
projection is not a commitment but only the best forecast. Educating the market and the
general public about monetary policy is a natural part of successful inflation targeting.
INCORPORATING JUDGMENT
Svensson argues that models are very practical but that a substantial amount of judgment
always needs to be applied in the form of information, knowledge, and views outside the
scope. There is a significant difference between monetary policy with and without judgment.
Applying judgment will lead to the central bank adjusting variables to expected market
changes, making changes or shocks less noticeable. Without adjustments a sudden change in
inflation may force the central bank to raise the instrument rate much more than would have
been needed if they had responded earlier.
UNCERTAINTY
Monetary policy is always conducted under substantial uncertainty. Using mean projections
of future random variables (mean forecast targeting) can only be optimal under certainty.
Optimal policy in uncertainty requires that the entire distribution of future random target
variables is taken into account by assigning probability distributions to possible scenarios.
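The difference between mean forecast targeting and a distribution forecast can be illustrated with a toy scenario distribution: under the quadratic loss used earlier, the expected loss over scenarios exceeds the loss evaluated at the mean forecast by the variance across scenarios. The scenario probabilities and inflation numbers below are invented.

```python
# Probability-weighted inflation scenarios (hypothetical numbers):
pi_star = 2.0
scenarios = [(0.25, 1.0), (0.50, 2.5), (0.25, 4.0)]  # (probability, inflation)

mean_pi = sum(p * pi for p, pi in scenarios)                      # 2.5
loss_at_mean = (mean_pi - pi_star) ** 2                           # 0.25
expected_loss = sum(p * (pi - pi_star) ** 2 for p, pi in scenarios)

# expected_loss = loss_at_mean + variance across scenarios, so it is
# strictly larger whenever the scenarios disagree.
```

A bank that targets only the mean forecast therefore understates the loss it actually faces, which is the sense in which optimal policy must take the whole distribution into account.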
CONCLUSIONS
Inflation-targeting central banks can improve their targeting by being more specific,
systematic, and transparent about their operational objectives (by using an explicit
intertemporal loss function), their forecast (by deciding on optimal projections of the
instrument rate and the target variables), and their communication (by announcing optimal
projections of the instrument rate and target variables). Progress can also be made by
incorporating central bank judgment and model uncertainty in a systematic way in the
forecasting and decisionmaking process. In particular, incorporating model uncertainty allows
the central bank to target based on a more general distribution forecast rather than on the more
restrictive mean forecasts under the assumption of approximate certainty equivalence.
5. On Target? The International Experience with Achieving Inflation
Targets
Roger, S. and Stone, M. (2005)
I. INTRODUCTION
Inflation targeting is founded on a clear commitment to a quantitative inflation target as the primary objective
of monetary policy. This paper analyses the inflation experience of countries with full-fledged inflation
targeting regimes, examined from three different angles: First, the institutional framework is summarised.
Second, inflation performances are compared with inflation targets. Third, case studies of large inflation
target misses are examined. The results from these approaches are then brought together in the form of
stylised facts.
II. A BRIEF HISTORY OF FULL-FLEDGED INFLATION TARGETING
The number of full-fledged inflation targeting countries now stands at 20³. New Zealand pioneered
inflation targeting in 1989, and today seven industrial countries employ this regime. Emerging market
countries began to practise full-fledged inflation targeting in 1997 (Israel, which is now classified as an
industrial country, and the Czech Republic), and thirteen use the regime currently. In addition, a number
of emerging market countries (e.g., Turkey, Romania, and Botswana) are moving towards full-fledged
inflation targeting. Overall, full-fledged inflation targeting is gaining popularity as a monetary regime.
III. THE INFLATION TARGETING POLICY FRAMEWORK
The key elements of the inflation targeting framework are the governance structure, the specification of the
inflation target, and the arrangements for policy transparency and accountability. These elements of the
framework provide the central bank with the authority and incentives to pursue the inflation target.
3
Australia, Brazil, Canada, Chile, Colombia, Czech Republic, Hungary, Iceland, Israel, Korea, Mexico, New Zealand, Norway,
Peru, Philippines, Poland, South Africa, Sweden, Thailand, United Kingdom (Finland and Spain were inflation targeting countries
prior to adopting the euro in 1999).
A. Inflation Target Parameters
Inflation target parameters vary across countries, and to some degree across time. Numerical targets: The
numerical inflation target serves as the nominal anchor and makes possible a high degree of monetary
policy accountability. Most inflation targeting countries have adopted point targets within symmetric
ranges for inflation outcomes. The levels and range widths for inflation targets are very similar across
countries. With few exceptions, point targets are between 1 and 3 percent, and ranges are usually close to 2
percentage points wide (i.e. point target +/- 1 percent). The inflation target horizon is the period over
which the central bank holds itself accountable for meeting its target. For the target to be meaningful, a
basic requirement is that the horizon takes into account the lags between policy actions and their effects on
inflation outcomes (typically 16-18 months). Inflation target index: In all inflation targeting countries, the
target measure of inflation is based on the Consumer Price Index, CPI. Most countries define the target in
terms of the official “headline” inflation rate. However, core inflation measures continue to play key roles
in policy formulation and accountability; virtually all inflation targeters have developed and monitor
various measures of core inflation.
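The "point target within a symmetric range" convention can be sketched as a simple band check. The target, band width and outcomes below are illustrative numbers, not any country's actual figures.

```python
# Flag inflation outcomes outside a point target +/- 1 band, the
# accountability threshold described in the text.
point_target, half_width = 2.0, 1.0
outcomes = [1.4, 2.3, 3.6, 0.7]  # hypothetical yearly outturns

misses = [pi for pi in outcomes if abs(pi - point_target) > half_width]
# Only the outcomes in `misses` would trigger a formal public explanation.
```

In this toy run, 1.4 and 2.3 fall inside the 1-3 percent band while 3.6 and 0.7 fall outside it.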
B. Institutional Framework
The institutional framework for inflation targeting is designed to allow the public to monitor the forward-looking commitment of the central bank to the target. Central banks are usually designated by the
government to operate an inflation targeting regime (goal dependence), but at the same time they have
leeway in the implementation of monetary policy in support of the target (instrument independence). The
central bank policy decision makers are held accountable for implementing monetary policy in a manner
consistent with achieving the target. Transparency of the intentions and operations of monetary policy is
required for accountability to work. Governance structure: Most inflation targeting central bank laws
have price stability as the primary or sole de jure objective of monetary policy. The inflation target range is
usually announced either by the government, or jointly by the government and the central bank.
Independence of the operation of monetary policy (instrument independence) is guaranteed in the legal
frameworks of all inflation targeters. Monetary policy decision making: In almost all inflation targeting
countries, monetary policy decisions are made by a committee within the central bank. MPCs are the
institutionalisation of instrument independence. They reduce the dependence of decision making on a
single personality and increase the scope for information-based decision making. In most cases, the
committee is the same as the executive board or the board of governors, and most committees include a mix
of central bank insiders and outsiders. Accountability: Inflation targeting central banks are held
accountable for their performance in relation to the targets. The “stakeholders” to whom the central bank is
accountable can be viewed as comprising the public, the government, and the national legislative body.
Typically, the central bank’s performance is assessed on the basis of deviations of actual inflation outcomes
from the target rate. Hence, target ranges act as an important threshold for policy accountability.
Accountability can be either formal or informal when inflation targets are missed. In eight of the inflation
targeting countries, accountability arrangements include the requirement that the central bank provides
formal public explanations for inflation outcomes outside the target range. In the other countries,
accountability arrangements are less formal, but the central banks are still under pressure to explain
significant deviations of inflation from the announced targets. Transparency: The operations and
intentions of the central bank must be transparent in order for the stakeholders to hold the central bank
accountable for its adherence to the inflation target. For these reasons, full-fledged inflation targeting
central banks are more transparent than other central banks: Committee meetings for most inflation
targeting central banks follow a prescheduled calendar. Minutes, in varying levels of detail, are published
by about half of the central banks, but only a few publish the votes of individual members. All announce
monetary policy actions in press releases and give press conferences explaining their actions. Some
countries have senior bank officials appear before parliamentary committees. Furthermore, over time,
inflation reports have evolved to convey more information, including quantitative forecasts of inflation and
fan charts of potential inflation outcomes.
IV. INFLATION PERFORMANCE UNDER INFLATION TARGETING
This analysis covers the aggregate experience of the 22 countries that have pursued full-fledged inflation
targeting up to mid-2004 (including Finland and Spain in the mid and late 1990s).
A. Methodological Choices
Country groupings: Countries are grouped into industrial and emerging market countries [4], and those with
stable inflation targets and those pursuing disinflation, since the different circumstances of these groups
suggest that they could have qualitatively different outcomes. Calculation of inflation targets: Individual
country statistics are mostly based on monthly differences between 12-month inflation rates and centres of
the target ranges. Core inflation: The official measure of core inflation is used, the definition of which
varies across countries.
B. Aggregate Inflation Performance
Inflation outcomes relative to target or centre of target ranges
Deviations of actual from the targeted inflation are substantial and vary considerably across country groups.
The deviation has typically been about 1.8 percentage points. Disinflating countries have experienced, on
average, significantly greater dispersion of inflation outcomes around their targets than have countries with
stable targets. Emerging market economies have, on average, experienced significantly greater dispersion
of inflation around their targets than have industrial countries. Disinflating countries tend to overshoot their
targets and stable inflation countries undershoot, but the degree of bias is small in both cases. The average
outcome for targeted inflation for all countries, however, was just 0.1 percentage points above the centre of
target ranges, while the average outcome for core inflation was right on target. The volatility of inflation
outcomes in most countries has been high relative to the width of their target ranges. The standard
deviation of inflation outcomes relative to the centre of target ranges averages 1.4 percentage points for the
target measure of inflation, and only slightly less for the core inflation. The persistence of deviations of
inflation from target appears to be consistent with standard characterisations of monetary policy
transmission lags; i.e. typically in the range of 16-20 months, which corresponds fairly closely to the 6-8
quarters often referred to by central banks as the time it takes for changes in the stance of monetary policy
to influence inflation.
Inflation outcomes relative to edges of target ranges
Inflation targeting countries have missed their target ranges, on average, over 40 percent of the time. The
frequency of target range misses is consistent with the evidence on the dispersion of inflation outcomes
relative to the width of target ranges. Performances in terms of core inflation have not been substantially
different from those in terms of target inflation measures.
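The dispersion and miss-frequency statistics used throughout this section can be reproduced mechanically from a monthly series of 12-month inflation rates. A minimal sketch in Python (the function name and data are my own illustration, not from the paper):

```python
import statistics

def inflation_performance(inflation, centre, half_width):
    """Summarise monthly inflation outcomes relative to a target range."""
    deviations = [x - centre for x in inflation]          # gap to range centre
    bias = statistics.mean(deviations)                    # average over/undershoot
    dispersion = statistics.pstdev(deviations)            # std. dev. around centre
    misses = [abs(d) > half_width for d in deviations]    # outside the range?
    miss_freq = sum(misses) / len(misses)                 # share of months missed
    return bias, dispersion, miss_freq

# Hypothetical 12-month inflation rates against a 2% +/- 1 pp target
series = [1.5, 2.0, 2.4, 3.3, 3.6, 2.8, 2.1, 1.7, 0.8, 1.2, 2.2, 2.6]
bias, dispersion, miss_freq = inflation_performance(series, centre=2.0, half_width=1.0)
# miss_freq -> 0.25 (3 of 12 months outside the range)
```

The same three numbers (bias relative to centre, standard deviation of deviations, frequency of range misses) are what the chapter reports at the country-group level.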
C. Inflation Performance Under Stable Inflation Targeting Versus Disinflation
As of mid-2004, 15 countries were pursuing stable inflation targets, and Finland and Spain had done so
earlier. Five countries were pursuing explicit disinflation targets, and another nine countries had completed
disinflation. [5] For the countries that began inflation targeting with a transitional disinflation phase, it took an
average of 41 months to reduce inflation to a stable rate within the inflation targeting framework.
[4] Industrial countries: Australia, Canada, Finland, Iceland, Israel, Korea, New Zealand, Norway, Spain, Sweden, and the United Kingdom. Emerging market countries: Brazil, Colombia, Chile, Czech Republic, Hungary, Mexico, Peru, Philippines, Poland, South Africa, and Thailand.
[5] Currently pursuing disinflation: Brazil, Colombia, Hungary, Philippines, and South Africa. Disinflation completed: Canada, Chile, Czech Republic, Iceland, Israel, Mexico, New Zealand, Poland, and Spain.
Disinflation typically involved reduction of inflation of around 3 percentage points over 3-4 years, so that
the planned reduction in targeted inflation averaged around ¾ percentage points per year. Marked differences
are evident in the dispersion of inflation outcomes between the two groups: For the stable inflation
targeting group, the standard deviation of inflation relative to targets is nearly half that for the group of
disinflating countries. The standard deviation of core inflation outcomes, as might be expected, is lower
than for headline inflation in both groups. Moreover, disinflating countries have missed their target ranges,
on average, twice as frequently as countries targeting stable inflation. Given similar target range widths,
but substantially greater dispersion of inflation outcomes, disinflating countries have missed their target
ranges, on average, 60 percent of the time, compared with 32 percent of the time for countries with stable
inflation targets. Disinflating countries have also tended to miss their targets by significantly larger
amounts, and for significantly longer, than countries with stable inflation targets: Countries with stable
inflation targets have missed their target ranges by an average of just under 1 percentage point, and for an
average of about six months. For disinflating countries, misses of target ranges are typically on the order of
1.4 percentage points, with an average duration of nearly 10 months. The duration of target range misses is
fairly short, particularly for stable inflation targeters, compared with lags typically associated with
monetary policy action. Thus, in practice, central banks tend to respond to deviations of inflation from the
centre of target ranges well before inflation actually leaves the range.
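The duration statistic above (the average length of a target range miss) is just the mean length of consecutive out-of-range runs in the monthly series. A sketch with hypothetical data (my own illustration, not the paper's code):

```python
def miss_durations(inflation, centre, half_width):
    """Lengths (in months) of consecutive runs of inflation outside the target range."""
    runs, current = [], 0
    for x in inflation:
        if abs(x - centre) > half_width:
            current += 1                  # still outside the range
        else:
            if current:
                runs.append(current)      # a miss episode just ended
            current = 0
    if current:
        runs.append(current)              # series ended mid-miss
    return runs

# 2% +/- 1 pp target: one 3-month overshoot, then one 2-month undershoot
series = [2.1, 3.4, 3.6, 3.2, 2.5, 1.8, 0.7, 0.9, 1.4, 2.0]
runs = miss_durations(series, centre=2.0, half_width=1.0)   # -> [3, 2]
average_duration = sum(runs) / len(runs)                    # -> 2.5 months
```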
D. Inflation Performance In Industrial Countries Versus Emerging Market Economies
Both emerging market and industrial countries have had average inflation outcomes quite close to target
range centres and core inflation outcomes have been right on target. The standard deviation of inflation
outcomes for emerging market economies, however, is significantly higher than for industrial economies,
which is reflected in a substantial difference between the two groups in average frequency of misses of
their target ranges. Differences in inflation performance are more pronounced between disinflation and
stable inflation targeters (C. above) than between emerging market and industrial countries. Fairly
consistently, the differences seen between emerging market and industrial country performances are smaller
than those between disinflating countries and those targeting stable inflation. Moreover, when
performances of emerging market and industrial countries are compared within the context of either
disinflation or stable inflation targeting, differences are much smaller. During the disinflation stage, both
industrial and emerging market countries have experienced misses of target ranges nearly twice as often as
during stable inflation targeting. The magnitude of misses is also significantly larger during disinflation for
both groups of countries, and the duration of misses longer, indicating that this stage is difficult for all
countries. The differences in performance seen between emerging market and industrial country groups as a
whole substantially reflect the fact that the inflation targeting experience of emerging market economies
has been predominantly in the disinflation stage – while the experience of industrial economies has been
mainly in the stable inflation target stage.
Evolution of inflation performances
The evolution of inflation performances over time also reveals important differences between stable
inflation targeting and disinflation, and between emerging market and industrial countries: 1) Countries
with disinflation in progress have typically started out on a high level of inflation volatility but reduced it
quickly. 2) Countries that have completed disinflation have tended to undershoot their targets. Further,
volatility also starts off relatively high, but falls off quickly. 3) Countries targeting stable inflation from the
outset show little change in performance over time. Inflation outcomes are typically close to the centre of
the range, without any clear trend, while the variability of inflation outcomes around the mean is typically
low and stable.
V. SELECTED EPISODES OF LARGE MISSES OF INFLATION TARGETS
The selection of the eight largest inflation target miss episodes exhibits some interesting commonalities. [6]
[6] Brazil (2001-03), Czech Republic (1998-99), Iceland (2001-02), Israel (1998, 2000-01 and 2002-03), Poland (2000 and 2002) and South Africa (2002-03). In Appendix I, all these episodes are examined in some detail.
First, all of the countries are especially vulnerable to external shocks. Second, the largest deviations of
inflation from target occurred during disinflation. The misses were triggered by a mix of domestic and
external shocks: The most common shock was shifts in capital inflows brought on by changes in investor
perceptions of emerging market risks. Changes in world fuel prices also played a role in two of the misses.
Domestic shocks included changes in fiscal and monetary policies and the domestic food supply, and some
country-specific developments. In particular, an absence of strong coordination between the fiscal and
monetary authorities seems to have played a role in several large miss episodes. All of the large misses
reflected wide exchange rate fluctuations. These fluctuations included both depreciations and appreciations,
and manifest the openness of these countries on both current and capital accounts. None of the large misses
led to an abandonment of the inflation targeting regime and institutional changes in response to misses have
been fairly limited. The main change of the framework in response to a miss has been adjustments to the
targets themselves.
VI. STYLISED FACTS OF THE EXPERIENCE WITH INFLATION TARGETING
The previous sections focused on three different aspects of the inflation targeting experience: evolution of
institutional frameworks, inflation performances, and episodes of large inflation target misses. This section
brings together these three perspectives with a view to formulating stylised facts.
1. Inflation targeting has proven to be flexible
Inflation targeting central banks miss their targets frequently and often by a wide margin. Yet, average
outcomes over time tend to be fairly close to the centre of target ranges. The results here support the view
that “All real-world inflation targeting is flexible inflation targeting.” (Svensson, 2005). The high degree of
transparency and accountability of inflation targeting regimes seem to give its practitioners scope or
flexibility to miss their targets frequently and often by large margins or for lengthy periods, without
severely undercutting the credibility of the regime.
2. Inflation targeting regimes are resilient
So far, no country has dropped an inflation targeting regime. This absence of exits from inflation targeting
stands in contrast to the record of conventional fixed exchange rate peg regimes and monetary targeting.
The resilience of inflation targeting likely reflects its operational flexibility, which seems to reduce
conflicts between adherence to the inflation target nominal anchor and output and financial stability.
3. Emerging market countries are successful practitioners of inflation targeting
Emerging market countries have successfully adopted inflation targeting notwithstanding their greater
vulnerabilities vis-à-vis industrial countries. Generally, emerging market economies appear to have higher
levels of inflation variability, and be more vulnerable to large misses, than industrial countries. Still, the
generally successful experience of emerging market countries with full-fledged inflation targeting shows
that inflation targeting is a viable alternative to an exchange rate anchor given the right circumstances and
policies.
4. Disinflation from an inflation level around 10 percent is feasible under inflation targeting, but inflation is
more difficult to control
The threshold level of inflation for adopting inflation targeting has been around 10 percent. This threshold
may indicate that monetary control is more difficult above that point. Disinflation takes about 3 ½ years on
average, and inflation is reduced by about ¾ percent per annum.
5. In many respects the transparency elements of inflation targeting countries are converging
All inflation targeting countries strive to attain a high degree of policy transparency via press releases, press
conferences, and inflation reports, including increasingly quantitative macroeconomic forecasts and the
reasons for and responses to misses of their targets. This high degree of operational transparency may
enhance the effectiveness of monetary policy by reducing the possibility of surprises in policy
implementation and by facilitating central bank independence.
6. Some key aspects of inflation targeting transparency are still evolving
Inflation targeting central banks have not quantified policy objectives other than the inflation target. No
inflation targeting central bank has explicitly articulated how it weighs price versus output stability as has
been suggested in Svensson (2003, 2005). This seems to make sense because it would be extremely
difficult to design a monetary framework under which a central bank could be held accountable not just for
an inflation target but also for output and financial stability objectives. In addition, central banks generally
remain reluctant to discuss their projections of the future path of interest rates or the exchange rate, as most
believe that interest rate forecasts could cause confusion when subsequent events induce a change in the
policy interest rate from that forecasted. There is also the concern that disclosing interest rate assumptions
would reveal the direction of policy and disrupt the market.
7. The modalities of accountability are more country-specific and less formal
Accountability arrangements have become less formal over time. The interpretation is that the need for
such formal arrangements has been reduced by the maintenance of a high degree of policy transparency.
The modalities of accountability vary more across countries compared to the inflation target specification
or transparency elements of the framework. An important reason for this is that many elements of
accountability are embedded in central bank laws, and legal frameworks are more country-specific.
Countries change accountability arrangements less often than the other key elements of the inflation
targeting framework, reflecting the difficulty in changing laws or established relationships between
different branches of the government. This difficulty in changing accountability arrangements means that it
is harder for countries to incorporate the lessons from the experience of other countries with inflation
targeting.
8. Central banks appear to respond to deviations of inflation from the centre of target ranges, well before
inflation actually leaves the range
The duration of target range misses is fairly short, particularly for the stable inflation targeters, compared
with lags typically associated with monetary policy actions. This, together with the evidence on the
persistence of deviations of inflation from the centre of target ranges suggests that, in practice, central
banks tend to respond to deviations of inflation from the centre of target ranges prior to inflation actually
leaving the range.
9. Most countries use headline CPI as the target measure of inflation, but continue to use core measures in
policy analysis and communications
Over time, inflation targeting countries are moving towards setting targets in terms of CPI inflation rather
than core inflation. Perhaps the most important reason for this trend has been disillusionment with efforts to
define measures of core inflation that are both readily explained to and accepted by the general public and
at the same time satisfy more technical requirements. Setting the target in terms of the measure of inflation
most widely known greatly facilitates public communications and accountability, but does not preclude
using core inflation measures in policy analysis.
VII. POLICY IMPLICATIONS
This section discusses the practical policy implications for inflation targeting countries of the key issues
and trends elaborated upon in the previous sections.
1. Inflation targeting central banks can be expected to miss the target range
Misses of targets are part and parcel of operating an inflation targeting monetary regime. Of course, a
balance must be struck such that misses do not impair policy credibility. Further, central banks should
respond to deviations of inflation from the centre of target ranges well before inflation actually leaves the
range.
2. The inflation targeting range should be specified taking into account a country’s circumstances
In particular, country-specific characteristics, especially vulnerability to exchange rate shocks, should be
taken into account in setting the target ranges. Failure to do so undermines the usefulness of the target
ranges as either guides for expectations or as benchmarks for policy accountability. One approach to
tailoring range widths to country characteristics could be to set target range widths to limit misses of target
ranges to around one-third of the time (as for stable inflation targeters). This approach would help maintain
a reasonable balance between policy flexibility and policy discipline. In addition, the width of the target
range should probably be narrowed over time. The evidence in this paper suggests that the volatility of
inflation outcomes tends to decline quite rapidly after the start of inflation targeting. The evidence further
suggests that for most disinflating countries basing target range widths using a one-standard deviation rule
would result in range widths of around 3-4 percentage points wide, narrowing to about 2 percentage points
when stable inflation targets are adopted.
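The one-standard-deviation rule for range widths described above amounts to a trivial calculation; the function and numbers below are illustrative, not from the paper:

```python
def target_range(centre, sd_deviation):
    """Set range edges one standard deviation of past inflation
    deviations either side of the centre (width = 2 * sd)."""
    return centre - sd_deviation, centre + sd_deviation

# Disinflating country: sd near 1.8 pp -> range about 3.6 pp wide
lo, hi = target_range(centre=4.0, sd_deviation=1.8)     # (2.2, 5.8)
# Stable targeter: sd near 1.0 pp -> range about 2 pp wide
lo2, hi2 = target_range(centre=2.0, sd_deviation=1.0)   # (1.0, 3.0)
```

With one-standard-deviation edges and roughly normal deviations, inflation would fall outside the range about one-third of the time, matching the miss frequency observed for stable inflation targeters.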
3. The experience of other inflation targeting countries provides a good guide for transparency modalities
Inflation targeting countries have more or less converged to a common transparency framework. Thus,
countries that are adopting full-fledged inflation targeting can build on the experience of their predecessors.
4. Open economy countries must take account of their greater vulnerability to exchange rate shocks
This vulnerability is not due simply to being open but also reflects strong links between external shocks and
domestic inflation. One set of links operates via the high rate of pass-through from the exchange rate to
domestic inflation. This means that policymakers must pay special attention to the impact of exchange rate
changes on inflation. Another set of links operates via the financial system and can kick in when external
shocks raise financial vulnerability. This means that monetary policy must take account of potential
financial sector problems when setting policy, both because of the consequences for real stability and
owing to the potential constraint on monetary policy posed by financial sector problems.
6. Romer & Romer – Choosing the Federal Reserve Chair: Lesson from
History
“When a realistic model of how the economy works and what monetary policy can
accomplish prevailed within the Federal Open Market Committee (FOMC), as in the 1950s
and in 1980s and beyond, policy was appropriate and macroeconomic outcomes were
desirable. When an unrealistic and misguided model prevailed, as in the 1930s and in the
late 1960s and the 1970s, policy was similarly misguided and outcomes were poor.”
Stable, noninflationary growth has been the goal of monetary policymakers since the
inception of the modern Federal Reserve in the mid 1930s. Policymakers, however, have
come closer to achieving this goal in some eras than in others.
The key determinants of policy success have been policymakers' views on how the economy
works and what monetary policy can accomplish.
The well-tempered monetary policies of the 1950s and of the 1980s and 1990s stemmed from
a conviction that inflation has high costs and few benefits, together with realistic views about
the sustainable level of unemployment and the determinants of inflation. In contrast, the
profligate policies of the late 1960s and the 1970s stemmed initially from a belief in a
permanent tradeoff between inflation and unemployment and later from a natural rate
framework with a highly optimistic estimate of the sensitivity of inflation to economic slack.
The deflationary policies of the late 1930s, with Federal Reserve chairman Eccles, stemmed
from a belief that the economy could overheat at low levels of capacity utilization and that
monetary ease could do little to stimulate a depressed economy.
Good predictions of the views that the Federal Reserve chairmen held during their tenures
come from their speeches, writings and testimony prior to being confirmed. They clearly
revealed their beliefs before they were appointed.
In the 1950s, with Federal Reserve chairman Martin, the view of the Federal Reserve was that
“our purpose is to lean against the winds of deflation or inflation, whichever way they are
blowing”, i.e. monetary policy should be countercyclical. The result was a well-tempered
monetary policy.
In the early 1970s, with chairman Burns, the Federal Reserve had a very optimistic estimate
of the natural rate of unemployment. In August 1970, with unemployment at or slightly below
5 % in previous months, policymakers believed that “expectations of continuing inflation had
abated considerably.” Their optimistic estimate of the natural rate appears to have made them
feel that there was no conflict between expansionary policy and their goal of lowering actual
inflation to validate the reduced expectations. The result of this unrealistic view on the
economic situation was a period of inflation at 10-14 % during most of the 1970s.
Both Volcker (‘79-‘87) and Greenspan (’87-’06) stressed the benefits of low inflation
virtually every time they testified to Congress about monetary policy during their tenures.
Highly partisan chairs have, in general, had less sound views on the economy, and vice versa.
(Greenspan as a clear Republican is an exception).
Training in economics, experience on Wall Street, nonpartisan public service in economic
policymaking and limited political involvement have been correlated with sensible beliefs,
although there are exceptions.
7. The Case for Restricting Fiscal Policy Discretion
Antonio Fatas and Ilian Mihov
Introduction
This paper studies the effects of discretionary fiscal policy on output volatility and economic
growth. The main results are:
• Governments that use fiscal policy aggressively induce macroeconomic instability.
• The volatility of output caused by discretionary fiscal policy lowers economic growth by
more than 0.8 percentage points for every percentage point increase in volatility.
• Prudent use of fiscal policy is explained to a large extent by the presence of political
constraints and other political and institutional variables.
The evidence in the paper supports arguments for constraining discretion by imposing
institutional restrictions on governments as a way to reduce output volatility and increase the
rate of economic growth. [7]
Estimating the Effect of Discretionary Policy
The term discretionary fiscal policy is used to refer to changes in fiscal policy that do not
represent a reaction to economic conditions, i.e. discretionary policy that is implemented for
reasons other than current macroeconomic conditions. Annual data for ninety-one countries
over the period 1960-2000 are used to estimate the following equation for each country:
ΔG_{i,t} = α_i + β_i·ΔY_{i,t} + γ_i·ΔG_{i,t-1} + δ_i·W_{i,t} + ε_{i,t} (eq 1),
where G is the log of real government spending, Y is the log of real GDP, and W collects various
controls for government spending as well as deterministic components like time trends. The
volatility measure σ_i, the standard deviation of the residual ε_{i,t}, captures the size of a
discretionary change in fiscal policy for country i.
To study the link between discretionary fiscal policy and output volatility, the cross-sectional
variation in the data is exploited. The correlation between policy and output volatility is positive
and highly significant.
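Eq 1 is estimated country by country, with σ_i taken as the standard deviation of the residual. A sketch on synthetic data (plain numpy OLS; my own illustration, not the authors' code):

```python
import numpy as np

def discretionary_volatility(dG, dY, W):
    """Estimate eq 1 by OLS for one country; return sigma_i, the standard
    deviation of the residual (the size of discretionary policy shocks)."""
    dG, dY, W = map(np.asarray, (dG, dY, W))
    y = dG[1:]                                   # Delta G_t
    X = np.column_stack([np.ones(len(y)),        # alpha_i
                         dY[1:],                 # beta_i * Delta Y_t
                         dG[:-1],                # gamma_i * Delta G_{t-1}
                         W[1:]])                 # delta_i * W_t
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return resid.std(ddof=X.shape[1])

# Synthetic country: spending responds to GDP growth plus a 1% discretionary shock
rng = np.random.default_rng(0)
T = 200
dY = rng.normal(0.02, 0.02, T)
eps = rng.normal(0.0, 0.01, T)
dG = np.empty(T)
dG[0] = 0.02
for t in range(1, T):
    dG[t] = 0.005 + 0.5 * dY[t] + 0.3 * dG[t - 1] + eps[t]
sigma_i = discretionary_volatility(dG, dY, np.arange(T) / T)   # close to 0.01
```

The estimated σ_i recovers the standard deviation of the injected discretionary shock, which is the cross-country policy volatility measure the paper then relates to output volatility.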
The following model was used to estimate the relationship:
log σ^y_i = α + β·log(σ_i) + γ'·X_i + ν_i (eq 2). The dependent variable σ^y_i is the standard deviation of
the annual growth rate of GDP per capita for each of the countries. The key explanatory variable
is the volatility of discretionary fiscal policy (σ_i) constructed on the basis of eq 1. The
additional control variables X_i are government consumption as a percentage of GDP, real GDP per
capita, and the ratio of imports and exports to GDP. To avoid the endogeneity bias caused by the
fact that the variable that captures discretionary fiscal policy might contain changes in fiscal
policy in response to business cycle conditions, and to deal with possible measurement error when
estimating eq 1 by OLS and LAD, instrumental variables are used. [8] Variables that are linked
to the institutional characteristics of the countries are selected: the electoral system
(majoritarian vs. proportional); the political system (presidential vs. parliamentary); the
political constraints (the number of veto points in the government and the distribution of ideological
preferences); and the number of elections for the executive and legislative branches. The presence of
[7] I will try to focus on the same parts as Flodén did in class, therefore some regression analyses are presented. To specify which table I refer to, I have put a footnote at the end of the paragraph.
[8] You can relate the endogeneity bias to the example presented in class: Crime = a + b*Police + e. In an election year, the police quantity will probably be higher. According to Flodén, we do not need to understand the LAD estimation.
a measurement error leads to attenuation bias; the IV estimates of the effect of discretionary
policy are about 15% higher than the OLS estimates (columns 1 and 5). The coefficient
more than doubles when controls are used (columns 2 and 6). The last column is a
modification of the estimation to take into account that government size might be endogenous
to output volatility. Column 7 shows that volatile discretionary policy generates
significant macroeconomic instability (t-statistic of 4.10). The last two columns show that a one
percent reduction in policy volatility leads to between a 0.8% and 1.2% decline in output
volatility. [9]
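The IV strategy can be illustrated with a textbook two-stage least squares on synthetic data, showing how instrumenting undoes the attenuation bias from measurement error (my own minimal sketch with made-up numbers, not the authors' estimation):

```python
import numpy as np

def tsls(y, x, z):
    """Two-stage least squares with one endogenous regressor x, one
    instrument z, and a constant; returns the coefficient on x."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]     # first stage fit
    X_hat = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]   # second stage slope

# Synthetic cross-section: institutions (z) drive policy volatility, which
# raises output volatility (true slope 1.0); volatility is measured with error
rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)                               # e.g. political constraints
x_true = 0.8 * z + rng.normal(scale=0.3, size=n)     # log policy volatility
y = 1.0 * x_true + rng.normal(scale=0.3, size=n)     # log output volatility
x_obs = x_true + rng.normal(scale=0.5, size=n)       # adds measurement error

beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x_obs]), y, rcond=None)[0][1]
beta_iv = tsls(y, x_obs, z)                          # larger than beta_ols
```

Measurement error attenuates the OLS slope toward zero; instrumenting with z recovers a slope near the true value, which is the sense in which the paper's IV estimates exceed the OLS estimates.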
There is a negative association between average growth rates (Δy_i) and output volatility σ^y_i.
The following regression was run: Δy_i = α + β·log(σ^y_i) + γ'·X_i + u_i (eq 3). When adding the
instrumental variables to this regression, the following relationship, i.e. the chain through
which policy and institutions affect growth, was found: [10]
Political and institutional setup → Discretionary fiscal policy → Output volatility → Growth
Table 3 displays the estimated effects of the institutional variables on policy volatility. Whether
included separately or in a multivariate regression, only two of the variables are significant:
the nature of the political system and the degree of political constraints. The explanatory
power of political constraints is particularly striking: Alone it can explain over 50% of the
cross-sectional variation in the policy measure. Running the regression with all variables
(column 5) changes the results only slightly for Political Constraints and Presidential. The
result should not be over-interpreted, as it is not statistically different from zero at
conventional levels. The number of elections has a negative and significant coefficient, which
is consistent with the view that elections hold politicians accountable. Presidential systems are
more volatile, while countries with a large number of political constraints experience less
volatility in discretionary fiscal policy. [11]
Conclusions
The key conclusion is that the aggressive use of discretionary fiscal policy strengthens
business cycle fluctuations and harms economic growth. The use of the chosen instruments
(intended to avoid endogeneity bias and to deal with possible measurement errors), together
with a large set of additional tests, confirms the robustness of the result. In all cases, more
aggressive discretionary fiscal policy is associated with a more volatile business cycle and
slower rates of economic growth. A look at the use of instruments reveals interesting
connections between political economy variables and fiscal policy. It seems that more
political constraints lead to less frequent use of discretionary policy. This result is particularly
strong in the group of rich countries. To the extent that business cycle volatility has
negative welfare effects, the results show the benefits of introducing restrictions on
fiscal policy discretion.
[9] See Table 1.
[10] See Table 2.
[11] See Table 3.
One unanswered question is how to design institutions that restrict fiscal policy without
eliminating any of the automatic stabilizers. To implement the recommendations, it is
necessary to be able to separate the stabilizing role of fiscal policy from its other roles
before restrictions can be discussed in a meaningful manner. One possible criticism of the
conclusions is that institutions are selected optimally to reflect differences in social
preferences and macroeconomic fundamentals. Nations set up their institutions to maximize a
welfare function that consists of various trade-offs. One trade-off, as in monetary policy, is
between flexibility and discipline. Volatility might be undesirable, but society might prefer to
give the government more flexibility, so that concerns about a sharp increase in
inequality can be met immediately by a change in fiscal policy. If institutions are too rigid it
may take too long to induce an institutional change that will respond to the social demands for
greater redistribution.
Notes to tables 1-3 (images omitted): the correlation is the value above the parenthesis; the
significance level is the value in the parenthesis.
8. The Case Against the Case Against Discretionary Fiscal Policy. By:
Blinder
In 1963, fiscal policy (discretionary fiscal stabilization policy) had the lead role and was
viewed as a useful tool for stabilization policy, while monetary policy often was said to only
accommodate fiscal policy. However, contemporary discussions about stabilization policy are
only about monetary policy, not fiscal policy. Blinder argues that monetary policy should
have the dominant role in stabilization policy, but that fiscal policy should not be completely
forgotten (“not be relegated to the dustbin of history”).
I. The Issues
The prevailing view today is that stabilization policy is about filling in troughs and shaving
off peaks, that is, reducing the variance of output around a mean trend. This contemporary
view makes four assumptions (Blinder adheres to them, but discusses arguments for and against them):
1. The macroeconomy is not subject to hysteresis. In a system with a unit root, any shock
to aggregate demand would leave a permanent imprint on output. (Hysteresis can, for
example, come from technology shocks, if faster technological progress is induced by a
booming economy.) But it is not clear whether output has a unit root.
2. The conventional, though much-disputed, effects of fiscal deficits on interest rates, and
thus on the capital stock, leave no lasting imprint on GDP. Budget deficits are
expansionary in the short-run, but contractionary in the long-run, because accumulated
public debt leads to higher interest rates, less investment, a smaller capital stock, and
lower potential output in the future.
3. Due to some sort of nominal rigidities, real output responds in the short run to
aggregate demand shocks.
4. The macroeconomy has the natural-rate property. Thus, a) output returns to potential,
and b) the path of potential output is unaffected by either monetary or fiscal policy. This is
illustrated in fig. 1(a), which shows the pure effect of a one-time, non-repeated fiscal
stimulus. Actual output increases sharply above potential and then slowly returns to
potential (“hump” shape). The peak effect comes after five quarters, but a notable effect
remains after three years. Conclusion: real effects can be long-lasting, and fiscal policy can
thus be used to fill in troughs and shave off peaks (stabilize the business cycle).
Arguments against fiscal policy
1. Lags. Outside lags (the time between a fiscal policy shock and its effect on the economy)
are shorter for fiscal policy than for monetary policy. However, inside lags (the time
between recognition of the need for fiscal stimulus and implementation) are longer for
fiscal policy. Why? Congress needs time to reach a decision, and political wrangling may
delay it further (political lags).
2. Theoretical/economic argument. Ricardian Equivalence (RE) (closely related to the
pure permanent income hypothesis, PIH).
II. Changing Views: A Brief History of Events and Ideas on Fiscal Policy (4 periods)
1. The triumph of Keynesianism, 1936-66. Emphasized fiscal over monetary policy. The idea
of automatic stabilizers was born. The Kennedy-Johnson tax cut (1964/65) was the first
deliberate use of fiscal policy in the US. Optimism about active fiscal policy.
2. The consensus crumbles, 1967-77. Faith in stabilization policy in general, and fiscal
policy in particular, was destroyed. The Vietnam war increased government spending
(without increasing taxes) when the US economy was already fully employed. Result:
an overheated US economy and inflation. Moreover, the failure of the 1968 US surtax
greatly damaged the idea of fiscal policy, because: a) long inside lags (it took 2.5 years
to get the tax hike enacted), and b) activist use of tax policy for stabilization purposes
would imply temporary changes in income taxes (with little effect according to RE).
Also, Friedman and Phelps demolished the long-run Phillips curve. All in all, fiscal
stabilization policy fell deeply out of favor.
3. Huge deficits crowd out stabilization policy, 1981-2001. President Reagan’s massive
tax cut in 1981 resulted in a large budget deficit. Fiscal policy was out of the question.
Instead, taxes were increased as part of the 1990 deficit-reduction package. In 1992,
Clinton wanted to use short-run fiscal stimulus, but the Congress rejected the idea.
Moreover, the idea that reducing the budget deficit would grow the economy (even in
the short-run) dominated the thinking in Washington. This was anti-Keynesian.
4. The new era, 2001-? Tax cuts in 2001-03, but not to stabilize. Short inside lags (a matter
of months). In 2001, the Bush administration changed the rationale for the tax cuts to a
Keynesian one, but with no further change in policy. Political consensus for fiscal stimulus and
that inside lags can be short. Moreover, some scepticism arose about the efficiency of
monetary policy, based on a Keynesian idea called “the liquidity trap”, which
concludes that monetary policy becomes weak if the overnight interest rate goes to
zero (it cannot go below zero, thus no room for downward adjustment).
Objections to Ricardian Equivalence
1. Bequests. If future tax burden falls on generations not yet born, Barro argues that
current households will adjust their bequests to make debt = taxes. Blinder argues that
most bonds will mature in less than 10 years, and thus affect current households.
2. Liquidity constraints. If households have liquidity constraints, then a debt-financed tax
cut will raise spending.
3. Different discount rates. Taxpayers and bondholders may discount at different rates.
4. Myopia. Households may discount at extraordinarily high rates, or have short planning
horizons, and thus value current consumption over future consumption.
5. Precautionary saving. Receiving more income today, and expecting to receive less
future income, reduces income uncertainty, which reduces the need for precautionary
saving, which increases spending today.
6. Consumer spending may react more than consumption. Current tax receipts that are
not spent must be saved. One way to “save” is to buy a consumer durable that yields a
flow of consumption services in the future (which adds to aggregate demand).
7. The present-value government budget constraint is irrelevant in practice. Evidence
shows that the debt may grow explosively for a decade or two without catastrophic
consequences.
Evidence for RE (in the US).
Okun looked at the explicit temporary tax surcharge enacted in 1968. Found no impact on
spending (but Blinder & Solow found that there was some impact). Modigliani & Steindel,
and Eckstein, looked at 1975 tax rebate. Found impact on spending. Shapiro & Slemrod
looked at temporary tax cut in 1992 (interview/natural experiment). Half of the subjects said
that they would spend most of the increase. Conclusion: RE does not hold (a temporary
income tax change will alter behaviour, but not as much as a permanent change).
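The PIH logic behind these findings can be made concrete. A minimal sketch (not from the text, and the 5% interest rate is an illustrative assumption) comparing the marginal propensity to consume out of a one-time rebate with that out of a permanent tax cut, for an infinite-horizon permanent-income consumer:

```python
# MPC out of a tax change for an infinite-horizon permanent-income consumer.
# A one-time rebate is annuitized over the whole future, so only its annuity
# value r/(1+r) is consumed in the current period; a permanent tax cut raises
# income in every period and is consumed roughly one-for-one.
def mpc_temporary(r):
    return r / (1 + r)

def mpc_permanent():
    return 1.0

r = 0.05  # illustrative real interest rate (an assumption, not from the text)
print(f"spend out of a $100 one-time rebate: ${100 * mpc_temporary(r):.2f}")
print(f"spend out of a $100 permanent cut:   ${100 * mpc_permanent():.2f}")
```

Under strict PIH the rebate moves spending by only a few dollars per hundred, which is why the survey evidence above (half of subjects spending most of the increase) counts against RE.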
IV. Temporary Tax Changes and Intertemporal Substitution
Since RE holds partly for temporary income tax changes, other tax changes may provide more
efficient incentives for intertemporal substitution, such as investment tax credits (ITC), etc. A
current period reduction in consumption tax would redirect spending from future period to
current period. Temporary changes in ITC can be used to make firms reschedule their
investment projects. Another way to put investment goods on “sale” for a while is to offer
accelerated depreciation.
V. Countercyclical Variations in Government Purchases
Can changes in G (government spending) smooth cyclical fluctuations? Barro found that
government defence purchases have a significant positive impact on real output. However, the
problem of inside lags remains (it takes time for Congress to authorize new projects). Pre-authorized projects would be a solution, but are not reasonable in practice. Conclusion: “If fiscal
policy is to be used for stabilization purposes, taxes are probably the instrument of choice.”
(Because shorter inside lags.)
VI. Is There a Case for Streamlining Fiscal Policy Institutions?
Long inside lags are the most critical element of the case against discretionary fiscal policy.
But institutions could be improved. Moreover, some discretionary policies could be made
automatic (automatic stabilizers). In addition, Blinder argues that technocratic decision-making on tax policy (the level of taxation etc.) might produce better outcomes, since the
possibility of conducting timely and rational fiscal policy would be enhanced. Another idea is
to create an independent fiscal-policy agency.
In the euro zone, the Stability and Growth Pact (SGP) requires member governments to limit
budget deficits to 3% of GDP. The limit could interfere with the automatic stabilizers, since the
target is in terms of actual budget deficits, not cyclically adjusted deficits. If weak economic
performance lowered tax revenue and raised welfare expenditure, even a “responsible” fiscal
policy could break the limit (thereby requiring offsetting fiscal actions that are procyclical).
In practice, members have preferred to violate the pact.
VII. Out of the Detritus: Some Creative Ideas for Fiscal Stabilization
One way to stimulate consumption would be to target tax-transfer changes on households with
liquidity constraints (MPC≈1), that is, lower-income households who are more likely to be
living hand to mouth. Two drawbacks: 1) not possible to do this when the economy needs
restraint (it is not “good” to increase taxes and reduce welfare payments for the poor), and 2) it
is hard to know who is liquidity constrained. The asset-to-income ratio, and a large negative
transitory income (TI) (deviation from permanent income), may be better indicators of a
liquidity-constrained household than income. An even better proxy for being liquidity
constrained may be receipt of unemployment insurance (UI). Households that collect UI have had a severe
drop in income (=large negative TI) and cannot maintain their previous consumption levels
due to liquidity constraints. In the USA, UI benefits are extended during times of high
unemployment (often with both additional automatic and discretionary increases in
recessions). Conclusion: more generous UI programs may be good for stabilizing (especially
when automatic). Moreover, as a way to bring incentives for intertemporal substitution (e.g.
consume more today, less in the future) other taxes can be changed temporarily, for example,
the value-added tax (VAT).
VIII. Wrapping Up: Is There Anything New Under the Sun?
1) Inside lags do not always need to be long for fiscal policy. 2) The worry of a low MPC (no
intertemporal substitution) out of temporary income tax changes can be avoided by changing
other taxes. 3) Households may be, or act as if they are, liquidity constrained, in which case RE
does not hold. 4) If the nominal interest rate is zero, fiscal policy may help monetary policy
(especially since monetary policy has long outside lags).
Overall conclusion: under normal circumstances, monetary policy is a far better candidate
for stabilization policy than fiscal policy. Under abnormal circumstances (long recessions, a
zero nominal interest rate, too-low aggregate demand (AD), but not when AD is too high), fiscal
policy can help, maybe a lot, in stimulating the economy.
9. Fiscal Policy: Institutions versus Rules
Charles Wyplosz
Wyplosz argues in this article that fiscal policy should be conducted in a way more similar to
monetary policy.
The evolution of fiscal policy has been much slower than that of monetary policy. Many
countries have today fallen into the debt trap. Among the reasons are tax cuts not being
matched by spending cuts, and hard-to-reverse welfare payments. To overcome the
problem of deficit bias, limits on government spending, debt rules, and strict limits on
budget deficits have been introduced, but with limited success. Wyplosz argues that rules of
this kind should be replaced by incentives and institutions.
What is the problem with fiscal policy? A monetary policy interpretation
In monetary policy, time inconsistency problems have often led monetary authorities to
develop an inflation bias. There has been a temptation to inflate when faced with
difficulties such as high unemployment. Time inconsistency may likewise cause fiscal policy
today to suffer from a deficit bias. In monetary policy, rules were replaced with central
banking institutions to minimize the time inconsistency problem and to ensure that both short-
and long-term objectives were met. Formal or informal independence along with inflation
targeting were found to provide more short-run flexibility without jeopardising long-run price
stability. Bound by clear objectives and accountability for their actions, independent
central banks have escaped the time inconsistency problem.
In fiscal policy, the short-run objective is output stabilization over the business cycle while
the long-run and more important objective is fiscal discipline. The deficit bias causes the
long-run discipline objective to be systematically overlooked because of short-run discretion.
Fiscal policy is not as independent as monetary policy: it has to be approved by parliament
or other political actors, often leading to late implementation and unintended results. Fiscal
policy is therefore also highly exposed to political interests and lobbying. Since fiscal policy
has a redistributive effect, democracy is needed to settle possible conflicts, and policymaking
cannot be as independent as with monetary policy, where appointed, not democratically
elected, actors are in charge. The striking contrast between monetary and fiscal policy is that
the former is subject to ex post democratic control while the latter is subject to ex ante
control.
Existing fiscal rules
1. Multiannual spending limits where additional spending must be matched by additional
revenues.
2. Budget deficit rules restricting the size of the deficit.
3. Debt rules restricting the size of the debt.
4. The Stability and Growth Pact specified in the Maastricht Treaty, where sanctions
apply if deficit limits are breached for more than two consecutive years.
The problem with many rules, though, is that they focus on constraining the problem rather
than eliminating it.
Solutions for fiscal policy
Fiscal authorities should commit to stabilizing the debt at a feasible horizon, long enough to
extend beyond the business cycle. To remove the time inconsistency problem, the deficit bias
has to be eliminated. The bias is basically caused by two things. 1) A government cannot
make promises that its successors have to follow. 2) Election times and private interests put
pressure on the deciding authority.
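A commitment to stabilize the debt can be expressed with standard debt dynamics (a textbook formula, not spelled out in the article): with real interest rate r, real growth rate g, and debt ratio d, the primary balance that holds d constant is pb = (r − g)/(1 + g) · d. A minimal sketch with illustrative numbers:

```python
# Primary balance (share of GDP) that holds the debt-to-GDP ratio d constant,
# given real interest rate r and real growth rate g. Follows from the debt
# dynamics d_next = ((1 + r) / (1 + g)) * d - pb, setting d_next = d.
def stabilizing_primary_balance(d, r, g):
    return (r - g) / (1 + g) * d

# Illustrative numbers (assumptions, not from the article): 60% debt ratio,
# 4% real interest rate, 2% real growth.
pb = stabilizing_primary_balance(0.60, 0.04, 0.02)
print(f"required primary surplus: {100 * pb:.2f}% of GDP")
```

The formula makes the "feasible horizon" idea concrete: whenever r exceeds g, a positive primary surplus is needed just to keep the debt ratio from drifting up.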
Institutions for fiscal discipline
Wyplosz has two solutions:
1. Fiscal discipline must take the form of a commitment to a debt level target over the
relevant horizon. In principle, zero debt is optimal since taxes are distortionary; on the other
hand, the government borrows on behalf of credit-constrained citizens, so some debt is
welfare-enhancing. A uniquely optimal level does not exist, but a quantitative target serves as
a clear goal.
2. Political pressure must be removed from those who undertake the task of fiscal policy
and this calls for the establishment of new institutions. Fiscal policy should be
delegated to a Fiscal Policy Committee (FPC), an independent group of unelected
experts, just as with monetary policy today. The members should be appointed for
durations that exceed the horizon of the policy target and be given debt targets by
political authorities. The FPC is then given full authority to decide on the budget
balance and is also accountable to parliament. The main advantage in comparison with
budget rules is to replace mechanical limits with judgement.
3. A softer solution than giving the FPC the above authority could be to make the FPC an
independent advisory group that could only issue non-binding recommendations. This
could be a way of preparing public opinion for the more demanding version described
in solution number two.
4. Discretionary fiscal policy is typically carried out with annual budgets, which imposes
long lags. The process can be both slow and politicised, another source of deficit bias.
A faster procedure would be the use of reserve funds, which can be rapidly distributed
or refilled. Funds like these do exist in several countries but are often set aside only for
emergencies and not for casual fiscal policy.
Wyplosz strongly argues that the natural solution to the problems of fiscal policy is the
mandatory establishment of national FPCs. These could initially be advisory groups (solution
3) but should later be granted the authority to conduct fiscal policy (solution 2). Rules would
thereby be replaced by institutions, and sanctions abandoned.
10. The (partial) rehabilitation of interest rate parity in the floating rate
era: Longer horizons, alternative expectations, and emerging markets
By Menzie D. Chinn
This article examines whether uncovered interest rate parity (UIP) is empirically
applicable or can only function as a theoretical tool. Chinn notes that earlier empirical
studies show that UIP does not hold. However, he is not completely content with
these studies. To investigate the relation further, he looks at some factors that may have
been neglected in previous studies and that could affect the outcome of the analysis:
1. The expected exchange rate changes should be unbiased: this is one of the underlying
conditions for the uncovered interest rate parity that has been neglected in many
empirical studies.
2. The effect of a long run interest rate: Offshore interest rate data is harder to find in the
long run. Therefore most studies on the subject have only looked at UIP in the short
run and this could be an explanation of why UIP has not been observed in empirical
studies.
3. Unbiasedness in emerging markets: with high inflation or low per capita income, the
unbiasedness assumption underlying UIP is more likely to hold.
Chinn runs the following regression in his analysis:
Δs(t, t+k) = α + β(i(t,k) − i*(t,k)) + ε(t, t+k)
where
Δs(t, t+k) = the realized change in the exchange rate from t to t+k
i(t,k) = the k-period yield on the domestic instrument
i*(t,k) = the k-period yield on the foreign instrument
ε(t, t+k) = the rational expectations forecast error
For uncovered interest rate parity to hold: α = 0 and β = 1 (the null hypothesis).
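The regression can be estimated by ordinary least squares. A minimal sketch on simulated data (illustrative only; Chinn uses actual quarterly exchange-rate and yield data, and the noise levels here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data in which UIP holds up to a forecast error: the realized
# depreciation equals the interest differential plus noise.
n = 200
diff = rng.normal(0.0, 0.02, n)                    # i(t,k) - i*(t,k)
ds = 0.0 + 1.0 * diff + rng.normal(0.0, 0.05, n)   # realized change in s

# OLS of ds on a constant and the interest differential.
X = np.column_stack([np.ones(n), diff])
(alpha, beta), *_ = np.linalg.lstsq(X, ds, rcond=None)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")   # near 0 and 1 by construction
```

A test of the null α = 0, β = 1 would compare these estimates with their standard errors; in Chinn's actual short-horizon data the estimated beta is typically negative, the opposite of what UIP predicts.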
1) Unbiased changes in the expected exchange rate
Chinn stresses that the expected exchange rate cannot be observed directly. Here, he therefore
estimates the expected exchange rate change by assuming rational expectations, so that the
error term captures the deviation of the realized from the expected change; this assumption is
even stronger than unbiasedness.
Chinn runs the regression on the period 1980Q1-2000Q4 and finds that even though the
unbiased change in the expected exchange rate is accounted for, there is no empirical evidence
that UIP holds. The beta is negative in four of six cases (even though we expected it to be
> 0, close to one), and the adjusted R2 statistic is low, which tells us that a large part of the
realized exchange rate change cannot be explained by this regression. This study shows that
the bias in interest differentials has not disappeared in the short run.
2) The long horizon
It is hard to find reliable long-term interest rate data for different countries, so most previous
studies rely on short-term data. Long-term series are influenced by taxes, capital controls,
etc. within the countries, which makes them harder to analyze. Thus, the long-term interest
rate series include more noise than the short-term series, which biases their β estimates
towards zero (away from the null hypothesis β = 1). Chinn runs the regression, accounting for
the noise in the data, and obtains a positive beta between zero and one for the currencies he
observes. Because of the large degree of imprecision in the data it is hard to draw any strong
conclusions from this, but interest rate parity seems to predict exchange rates better at long
horizons than at short. There are several possible explanations for this:
1. The monetary authority can only affect the short rate directly, as a result the
long term rates are more weakly exogenous than short term rates.
2. Short- and long-term bond markets are segmented from each other. (Short-horizon
holding period returns on long-term bonds do not exhibit bias in
3. Exchange rate expectations differ at short and long horizons.
3) Unbiasedness in emerging markets
With high inflation and inflation volatility, or with low per capita income, the assumption of
unbiasedness is more likely to hold. Emerging market governments tend to fulfil this
assumption of unbiasedness better than others. Unfortunately, it is hard to find data for
developing countries. However, some non-G7 developed countries are examined at a long
horizon, and this analysis shows that these countries have a positive beta to a greater
extent than seen before. This might be a result of the greater predictability of exchange rate
trends in these countries.
Chinn suggests that there should be more research done in this field before we reject the idea
of uncovered interest rate parity as an empirically applicable tool. All the regressions
conducted in this study have quite low adjusted R2, but they highlight some facts that can be
worth looking at more closely.
11. Taylor & Taylor – The Purchasing Power Parity Debate
“Transaction costs are a key issue for PPP theory and the Law of One Price. The existence
of nontraded goods (a manifestation of extreme transaction costs) is a further obstacle to
the international goods arbitrage theory.”
Our valuation of a foreign currency in terms of our own mainly depends on the relative
purchasing power of the two currencies in their respective countries. Purchasing power parity
(PPP) says that the nominal exchange rate between two currencies should be equal to the ratio
of aggregate price levels between the two countries, so that a unit of currency of one country
will have the same purchasing power in a foreign country.
One very simple way of gauging whether there may be discrepancies from PPP is to compare
the prices of similar or identical goods from the basket in the two countries. For example, The
Economist publishes the prices of McDonald’s Big Mac hamburgers around the world and
compares them in a common currency, the U.S. dollar, at the market exchange rate, as a
simple measure of whether a currency is overvalued or undervalued relative to the dollar at
the current exchange rate (on the supposition that the currency would be valued just right if
the dollar price of the burger were the same as in the U.S.).
The results in January 2004:
- In China the Big Mac was sold at $1.23 (compared to $2.80 in the U.S.), indicating
that the yuan was 56% undervalued relative to the dollar.
- In the euro area the Big Mac was sold at $3.48, indicating that the euro was 24%
overvalued relative to the dollar.
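These valuation figures follow from simple arithmetic: convert the local Big Mac price to dollars at the market exchange rate and compare with the U.S. price. A quick check of the January 2004 numbers:

```python
# Big Mac index arithmetic: (dollar price abroad / U.S. dollar price) - 1.
# A negative number means the currency is undervalued against the dollar.
US_PRICE = 2.80

def valuation(dollar_price_abroad):
    return dollar_price_abroad / US_PRICE - 1

print(f"yuan: {100 * valuation(1.23):+.0f}%")  # -56%, undervalued
print(f"euro: {100 * valuation(3.48):+.0f}%")  # +24%, overvalued
```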
However, the Big Mac index, and PPP in general, is in an important respect misleading,
because it is based on an assumption of international goods arbitrage, which is unrealistic.
As a matter of fact, many of the inputs into a Big Mac cannot be traded internationally, or not
easily at least. Each burger sold contains a high service component – the wages of the person
serving the burger – and a high property rental component – the cost of providing you with
somewhere to sit down and eat your burger. Neither the service-sector labor nor the property
is easily arbitraged internationally, and yet advocates of PPP have generally based their view
largely on arguments relating to international goods arbitrage.
The assumption of international goods arbitrage leading to an internationally equalized price
level is called the Law of One Price. However, the Law of One Price is also based on the
absence of transaction costs in the arbitraging process. Consequently, the fact that there are
almost always transaction costs, such as transportation costs and import tariffs, in the
arbitraging process makes the Law of One Price unrealistic.
The relative PPP holds that the percentage change in the exchange rate over a given period
just offsets the difference in inflation rates in the countries concerned over the same period.
Neither absolute nor relative PPP appear to hold closely in the short run, but both appear to
hold reasonably well as a long-run average, although the evidence is weak.
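Relative PPP can be stated as a simple approximation (a standard formulation; the inflation numbers below are illustrative assumptions, not from the text): the change in the exchange rate roughly equals the home-minus-foreign inflation differential.

```python
# Relative PPP, approximate form: the percentage change in the exchange rate
# (home currency per unit of foreign currency) over a period roughly equals
# home inflation minus foreign inflation over the same period.
def relative_ppp_change(pi_home, pi_foreign):
    return pi_home - pi_foreign

# Illustrative: 5% home inflation vs 2% abroad implies roughly a 3%
# depreciation of the home currency over the period.
print(f"predicted depreciation: {100 * relative_ppp_change(0.05, 0.02):.1f}%")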
12. Estimating China’s ”Equilibrium” Real Exchange Rate. By: Dunaway
& Li
China’s presence in the world market has increased in recent years. Is China’s exchange rate
(the renminbi) a factor in explaining the country’s competitiveness? Is the exchange rate
undervalued? Many studies have tried to answer these questions by estimating China’s
”equilibrium” real exchange rate. The results vary widely (Table 1): the estimates range from
little to nearly 50% undervaluation. Researchers have
employed two broad approaches: a macroeconomic balance approach, and an extended
purchasing power parity (PPP) approach.
The macroeconomic balance approach
The macroeconomic balance approach derives an estimate for the change in the real exchange
rate needed to bring about equilibrium in the balance of payments. Balance of payments is
defined in one of two ways: 1) “normal net capital flows” = ”underlying” CA balance (i.e.
there is no change in international reserves), or 2) external CA balance = “structural”
domestic S-I balance. This approach rests on the stability of structural relationships (CA=S-I).
The approach generally involves 3 steps:
1. “Underlying” CA position under prevailing exchange rate is determined (remove the
effects of differences in relative cyclical positions between countries and the delayed
effects of past changes in real exchange rate).
2. “Equilibrium” in the balance of payments is determined, called the “norm” (e.g. some
measure of “normal” net capital inflows).
3. The gap between (1) and (2) is calculated.
Then, based on a trade model, the change in the real exchange rate needed to close this gap is
computed, reflecting the estimated price elasticities for the country’s exports and imports.
This change is an estimate of the extent to which the current real exchange rate may be
overvalued (if the rate is expected to depreciate) or undervalued (if the rate is expected to
appreciate).
The approach is complicated and requires a large amount of information. Advantage: it
provides a forward-looking assessment of the ”equilibrium” real exchange rate. Assessments
vary widely across studies due to different assumptions, different definitions of the “norm”, etc.
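The final step can be sketched with hypothetical numbers (the balances and the elasticity below are illustrative assumptions, not estimates from any of the surveyed studies):

```python
# Macroeconomic balance approach, final step (stylized): the required change
# in the real exchange rate is the gap between the CA "norm" and the
# underlying CA, divided by the semi-elasticity of the CA with respect to
# the real exchange rate (change in CA, share of GDP, per 1% depreciation).
def required_depreciation(ca_underlying, ca_norm, semi_elasticity):
    return (ca_norm - ca_underlying) / semi_elasticity

# Illustrative: underlying surplus of 6% of GDP, "norm" of 2%, and a CA that
# improves by 0.25% of GDP per 1% of real depreciation.
change = required_depreciation(0.06, 0.02, 0.0025)
print(f"required change: {change:.0f}%  (negative = real appreciation)")
```

With these numbers the exchange rate would need to appreciate, i.e. the currency would be judged undervalued; different assumed "norms" or elasticities move the answer a lot, which is exactly why the surveyed estimates disagree.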
The extended PPP approach
Estimates the ”equilibrium” real exchange rate in one equation. Assumption: PPP holds in the
long run, but several factors prevent the actual exchange rate from converging to its
PPP-determined level in the short run. These factors (“fundamentals”) are used as
determinants in an equation that is estimated to explain past movements in the real exchange
rate. Exchange rate equilibrium: actual value = predicted value (from the equation), that is,
the exchange rate is “in line with its fundamentals”.
The Balassa-Samuelson effect is the most common factor used to explain deviations from
PPP. Differences between countries in relative productivity in their traded versus nontraded
product (goods and services) sectors give rise to distortions in PPP. These differences may
arise as countries develop, open up to trade, or catch up with technology. Thus, productivity
in the traded sector rises faster than in the nontraded sector (and wages rise faster in the
traded sector, bidding up wages in the nontraded sector). Wages in the nontraded sector rise
faster than productivity in that sector, resulting in an increase in nontraded relative to traded
product prices. Consequently, domestic prices rise faster than prices in the rest of the world, leading to
appreciation of the real exchange rate. Most extended PPP studies find that China’s real
effective exchange rate has not appreciated in line with the Balassa-Samuelson effect (China’s
exchange rate may be undervalued). However, there are two reasons why there may not be a
strong Balassa-Samuelson effect in China (implying that the approach may overstate an
undervaluation).
1) The Balassa-Samuelson effect assumes full, or very high, employment (not the case in
China). This condition is necessary in order to get rising wages in the traded products
sector as productivity in that sector increases, and to get spill-over of this wage
increase into the nontraded sector.
2) The Balassa-Samuelson relative productivity effect is often proxied by the ratio of
China’s consumer price index to its producer price index (CPI/PPI) relative to the
same ratio for China’s trading partners. This may not be a good proxy (mismeasurement of the
CPI, no close link between CPI/PPI and productivity).
The approach requires less information than the macroeconomic balance approach and is
backward-looking (it focuses on past behaviour of the exchange rate).
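The approach boils down to one estimated equation. A minimal sketch on synthetic data (the fundamental, its coefficient, and all numbers are illustrative assumptions; actual studies use various fundamentals): regress the log real exchange rate on a Balassa-Samuelson proxy and read misalignment off the residual.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fundamental (e.g. a relative-productivity proxy such as the
# relative CPI/PPI ratio) and a log real exchange rate linked to it.
n = 80
fundamental = np.cumsum(rng.normal(0.002, 0.01, n))
log_rer = 0.5 * fundamental + rng.normal(0.0, 0.02, n)

# One-equation extended-PPP relation estimated by OLS.
X = np.column_stack([np.ones(n), fundamental])
coef, *_ = np.linalg.lstsq(X, log_rer, rcond=None)
fitted = X @ coef  # the "equilibrium" rate implied by the fundamentals

# Misalignment in the last period: actual minus fitted value.
misalignment = log_rer[-1] - fitted[-1]
print(f"estimated misalignment: {100 * misalignment:+.1f}% "
      "(negative = undervalued relative to fundamentals)")
```

The backward-looking nature of the approach is visible here: "equilibrium" is just the fitted value of a regression on the exchange rate's own past behaviour.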
12. CURRENCY CRISES (Krugman, 1997)
The canonical crisis model
According to this model, the logic of a currency crisis is the same as that of speculative attack
on a commodity stock when trying to keep commodity prices stable. It is assumed that the
central bank holds the exchange rate fixed using a stock of foreign exchange reserves. If
speculators wait until the foreign exchange reserves are exhausted in the natural course of
events, the price of foreign exchange will begin rising making it unattractive to hold the
domestic currency. Foresighted speculators will then sell domestic currency before this, and
in doing so advance the date of exhaustion, and so on. Therefore, when reserves fall to a
critical level (e.g. what is needed to finance the payment deficit), a speculative attack will
quickly drive the reserves to zero, forcing the country to abandon the fixed exchange rate.
In short, the canonical model explains currency crises as a result of a fundamental
inconsistency between domestic policies (money-financed budget deficits) and the attempt to
keep a fixed exchange rate.
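The timing logic can be illustrated with the linear Flood-Garber version of the canonical model (a standard textbook formulation, not worked out in the text; the parameter values are illustrative assumptions):

```python
# Linear Flood-Garber version of the canonical first-generation model.
# Domestic credit grows by mu each period to finance the deficit; alpha is
# the semi-elasticity of money demand. Run down passively, reserves R0 would
# last R0/mu periods, but the "shadow" floating rate reaches the peg earlier,
# so the attack comes at T = R0/mu - alpha whenever alpha > 0.
def attack_time(R0, mu, alpha):
    return max(0.0, R0 / mu - alpha)

R0, mu, alpha = 100.0, 10.0, 3.0  # illustrative parameter values
print(f"natural exhaustion at t = {R0 / mu:.0f}")
print(f"speculative attack at t = {attack_time(R0, mu, alpha):.0f}")
```

The sketch captures the key point of the model: the attack strictly precedes the date at which reserves would have run out in the natural course of events.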
More sophisticated models
The canonical model assumes that the government keeps printing money to cover a budget
deficit no matter what. It also assumes that the central bank keeps selling foreign exchange to
peg the exchange rate until the last unit of the reserve is gone. In reality, there are more
instruments available to keep budget deficits under control and maintain a fixed exchange
rate. The second-generation models require three components:
1. There must be a reason why the government would like to abandon its fixed exchange
rate. One such reason is a large debt burden denominated in domestic currency that the
government may wish to deflate away. Another can be unemployment due to downwardly
rigid nominal wages, and therefore a desire to adopt a more expansionary monetary policy.
2. There must be a reason why the government would like to defend the exchange rate.
This can be because a fixed exchange rate is important for international trade and
investment. It can also be because a fixed rate is a guarantor of credibility for a nation
with a history of inflation.
3. The cost of defending a fixed rate must itself increase when people expect that the rate
might be abandoned. To defend the currency in the face of expected depreciation, a higher
interest rate is required. This will either worsen the cash flow of the government or depress
output and employment.
Combining these components, we get an explanation of currency crises. Suppose that a country's cost of keeping the fixed exchange rate is increasing over time, so that at some future date the country would be likely to devalue. Speculators realise this and wish to sell their domestic currency earlier; in doing so they increase the cost of keeping the exchange rate fixed and cause an earlier devaluation. Given this logic, a speculative attack on a currency
will occur at the earliest date at which such an attack could succeed. The most important thing
to notice is that the crisis ultimately is provoked by the inconsistency of government policies
(especially a conflict between domestic objectives and the currency peg) which makes the
long-run survival of the fixed rate impossible – the crisis is driven by economic fundamentals.
The financial markets simply speed up the process.
Disputed issues
Even though a crisis ultimately is the result of economic fundamentals, the financial markets
may not be considered completely blameless:
Self-fulfilling crises: In some cases, an eventual end to a currency peg is not bound to happen
– there may be no worsening trend in the fundamentals. Still, the government can be forced to
abandon the peg after a severe speculative attack. Such an attack can actually be a result of
several investors believing that many other investors will pull their money out of the country,
a situation in which a currency collapse becomes more likely. What is important to realise
though is that even in these self-fulfilling models, it is only when fundamentals (foreign
exchange reserves, fiscal policy, commitment to exchange peg etc.) are sufficiently weak that
the country is vulnerable to speculative attacks.
Herding: In reality, foreign exchange markets are far from efficient. A small wave of selling
can be magnified through sheer imitation from investors who believe that the initial sellers
have some important private information. This leads to a sort of "bandwagon" effect, where investors simply follow each other. Another version of this involves money managers who are compensated based on comparison with other money managers: they have much more to lose from making an unconventional decision and turning out to be wrong than from acting as everyone else does.
Contagion: The currency crises of the 1990s spread across whole regions. One reason for this may be real linkages between countries: if two countries export similar products and one devalues, the other will face difficulties in its export sector, which may eventually lead to an economic crisis. Sometimes, though, contagion appears without this link. This may be because countries in the same region are perceived as sharing the same culture, and are therefore assumed to be equally likely to abandon a peg.
Market manipulation: There may be a possibility for large speculators to benefit from a
country being vulnerable to a crisis. First they can quietly take a short position in the
vulnerable currency and then deliberately trigger a crisis through public statements and major
selling. This is what George Soros was accused of doing to the British pound in 1992. This is
very rare in practice though. Most investors are aware of the possibility of speculative attacks
on countries with deteriorating fundamentals. Therefore they try to anticipate the collapse,
bringing it forward in time, and attack as soon as an attack can succeed.
Case study 1: the ERM crises of 1992-93
In 1992, massive capital flows led to the exit of Britain, Italy and Spain from the exchange
rate mechanism of the European Monetary System. This crisis demonstrates the importance of
the more sophisticated models as opposed to the canonical. In all attacked countries,
governments retained full access to capital markets, had no need to monetize their budget
deficits and did not have a rapid growth of domestic credit. They faced no binding constraint on foreign exchange reserves and had no high inflation. The reason for devaluation was unemployment due
to inadequate demand. The monetary authorities were pressured to engage in expansionary
policies, something they could not do when committed to a fixed exchange rate. This crisis
demonstrated the irrelevance of foreign exchange reserves in a world of high capital mobility.
It is also interesting to note that the countries which were driven off their pegs (e.g. Britain)
did better in the following period than those which defended their currencies.
Case study 2: the Latin crises, 1994-5
This crisis began in Mexico, where political uncertainty and relaxed monetary and fiscal discipline led to a rapid decline in foreign exchange reserves and devaluation. Confidence in Mexican policies fell, leading to a fall in the peso and high import prices, resulting in inflation. In order to stabilize the peso, the government had to raise interest rates,
causing a fall in domestic demand and real GDP. Argentina had a different currency regime –
a currency board backed by a dollar of reserves for every peso. Speculators suspecting that
Argentina might abandon the currency board in order to reduce the unemployment rate
attacked the Argentine currency as well. This led to a decline in the monetary base, a crisis in
the banking system, and ultimately an economic downturn (albeit milder than that of Mexico).
Argentina, as opposed to Mexico, chose to keep its exchange rate regime throughout the
period.
A similarity to the European crisis is a noticeable failure of financial markets to anticipate the
crises. A difference is the fact that in Europe the devaluating country (Britain) did best, while
in Latin America the non-devaluating country (Argentina) did best. Overall, the crisis in Latin
America was also much more severe.
Case study 3: Asian crises
Before this crisis, many Asian nations had very large current account deficits and financial weaknesses (high investments in speculative real estate ventures, etc.). However, just as in the other crises, financial markets showed little concern until very late. Eventually, exports slowed down in the region (due to appreciation of the dollar, developments in key industries, competition from China, etc.) and many nations faced financial distress. In 1997, this led
speculators to fear devaluation, especially in Thailand. As a result, Thailand had to increase its interest rate premium, which in turn increased the pressure to devalue. Finally, Thailand did float the baht, which was followed by a wave of devaluations in the region.
Macroeconomic questions
The study of these crises reveals that the “second generation” models offer the best
explanation – the motives for devaluation lie in the perceived need for more expansionary
monetary policies rather than in budget deficits and inflation. The crises and devaluations seem to have led to negative short-run consequences in Latin America and Asia, while the
devaluating European nations did quite well. This may be because investors had more trust in
the European policy environment and institutions (bank regulations, lack of corruption etc.).
Can currency crises be prevented?
One can avoid currency crises by returning to the capital controls of the 1960s, but that is
highly unlikely. Another option is for countries to follow sound and consistent policies.
However, this may not have an effect if the financial markets suspect that a country may want
to follow some other policy (e.g. prioritising employment over a fixed exchange rate). The
authors conclude that the ultimate lesson from the 1990s crises is that countries should avoid
halfway solutions. One protection against speculation is to be part of a monetary union – to not
have an independent currency at all. The other solution is to float the exchange rate, not
giving speculators an easy target.
14. Managing Macroeconomic Crises: Policy Lessons
by Jeffrey Frankel (Harvard University) and Shang-Jin Wei (IMF)
Introduction
This study is an attempt to review what the last decade reveals about which policies for crisis
prevention or crisis management seem to work and which do not. The empirical investigation
tries out a variety of methodological approaches: reasoning from examples of important crises
of the last eight years, formal analysis, a regression tree analysis, conventional regression
analysis, and a look at the typical profile of financing during the sudden stop preceding a
crisis. [12]
Summary [13]
The authors of the article seek to draw attention to policy decisions that are made during the
phase when capital inflows come to a sudden stop. Procrastination (the period of financing a
balance of payments deficit rather than adjusting) has serious consequences in some cases.
Crises are more frequent and more severe when short-term borrowing and dollar-denominated external debt are high, and foreign direct investment and reserves are low. This is because balance sheets are then very sensitive to increases in exchange rates and short-term interest rates. These measures are affected by decisions made by policymakers in the period
immediately after capital inflows have begun to dry up, but before the speculative attack itself
has hit. If countries that are faced with a fall in inflows adjusted more quickly, rather than stalling for time by running down reserves or shifting to loans that are shorter-termed and dollar-denominated, they might be able to adjust on more attractive terms.

[12] I will focus on the results, which should be of most importance. If you are addicted to regression analysis and graphs you will find a bunch of them in the paper. :)
[13] For those who do not have time to read it all, the summary is enough.
In the last 30 years, emerging markets have experienced at least two complete boom-bust
cycles:
1. The first cycle was in the period from 1975 to 1981, followed by the international debt
crisis of 1982-89. Rich countries had given large loans to developing countries.
Despite this volatility, many developing countries (certainly not all) ended this 30-year period with a far higher level of per capita income than they began it with.
2. The last cycle was marked by rapid capital inflows from 1990 to 1996, followed by
severe crises for some countries and scarce capital for all countries from 1997 to 2003.
This cycle bore similarities to the first, with large sums of lending to developing
countries.
Conclusions
The results are consistent with much of the previous empirical literature, which has found that crises are not necessarily the outcome of high current account deficits or high indebtedness, nor even of domestic credit creation. Flexibility in the exchange rate does not mean that
crises will be avoided. There is strong evidence that corruption (poor institutional policies) is
a fundamental problem too.
Some of the new conventional wisdoms are not borne out by the tests. Corner exchange rate regimes (regimes with extreme solutions: a completely fixed or a completely floating exchange rate) turn out to be more prone to serious crises, not less. Emerging market countries that liberalize their capital controls turn out to be less prone to crises, not more. An extensive search for interactive effects that have been claimed by others does not uncover much evidence that capital account openness is particularly dangerous in combination with low income, expansionary policies, or corruption.
Countries are likely to have more frequent and more severe crises if their capital inflows are
skewed toward short-term dollar borrowing and away from foreign direct investment and
equity inflows, and if they hold a low level of reserves. The ratio of short-term debt to
reserves is a particularly important indicator. The authors also found evidence that high levels of inflation significantly raise the probability of crisis, especially in combination with a low level of reserves and a composition of capital inflows tilted toward the short term.
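The short-term-debt-to-reserves indicator is just a ratio, but it is worth making concrete. The following sketch uses invented country names and figures; a ratio above 1 means reserves could not cover the debt falling due within a year, the classic vulnerability signal.

```python
# Illustration of the authors' warning indicator: the ratio of
# short-term external debt to foreign exchange reserves.
# All country names and figures below are hypothetical.

def short_term_debt_ratio(short_term_debt, reserves):
    """Short-term external debt divided by reserves."""
    return short_term_debt / reserves

countries = {
    # country: (short-term external debt, reserves) -- invented numbers
    "A": (40.0, 80.0),
    "B": (90.0, 60.0),
    "C": (25.0, 30.0),
}

# Flag countries whose short-term debt exceeds their reserves.
vulnerable = [name for name, (debt, res) in countries.items()
              if short_term_debt_ratio(debt, res) > 1.0]
print(vulnerable)
```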
All of the theoretical literature treats the “sudden stop” phase as taking place in a single
instant (the country goes directly from a period of capital inflows and strong reserves to a
crisis of capital outflows and plunging reserves). In reality there is often a temporary period,
when international investors have begun to lose enthusiasm, but the crisis has not yet hit. One
example is the lag between the beginning of 1994, when investors began to pull out of
Mexico, and the December peso crisis. The authors found, among a broad sample of
developing countries (1990-2002), that the typical lag between the peak in reserves and a
currency crisis was six months to a year, depending on the calculation. The average loss in
reserves during the sudden stop phase was 35 percent. Some countries had lost almost all of
their reserves by the time they decided to abandon the exchange rate target.
Procrastination had serious consequences in some cases. Typically, by the time the crisis
hit, the level of reserves was so low that confidence could not be restored without beginning
to rebuild them. As a result, reserves could not play their designated role of cushioning the
contraction. In addition, the composition of liabilities tended to shift adversely during the
period of sudden stop.
In the example of Mexico during the course of 1994, when the authorities were not stalling
for time by running down reserves, they were instead calming nervous investors by offering
them tesobonos (short-term dollar linked bonds) in place of the peso bonds (Cetes) that they
had previously held. On average across country crises, the fraction of loans that were short-term increased by 0.6 percentage points after the peak in reserves (over a period of one or two quarters, depending on data availability).
Crises are more frequent and more severe when short-term borrowing is high, dollar
denomination is high, foreign direct investment is low, and reserves are low (in large
part because balance sheets are then very sensitive to increases in exchange rates and short
term interest rates). The point is that these compositional measures are strongly affected by
decisions made by policymakers in the period immediately after capital inflows have begun to
dry up but before the speculative attack itself has hit. These crisis management policies merit
more attention. If countries that are faced with a fall in inflows had adjusted more promptly, rather than stalling for time by running down reserves or shifting to loans that are shorter-termed and dollar-denominated, they might have been able to adjust on more attractive terms.
15. THE TRILEMMA IN HISTORY: TRADEOFFS AMONG
EXCHANGE RATES, MONETARY POLICIES, AND CAPITAL
MOBILITY
Obstfeld, Shambaugh, Taylor
The text is consistently rather technical and contains a lot of econometrics. I asked Martin how to approach this, and got the following answer: "I have not used the article for teaching before, and I have not yet planned in detail how I want to use it. But my plan is to keep it non-technical and focus on the results, not the methods. That is, the focus should be on: What is their research question? What does the data say?"
Policymakers in open economies face a macroeconomic trilemma: they have to give up one
of the three following objectives (all are typically desired):
1. to stabilize the exchange rate
2. to enjoy free international capital mobility
3. to engage in a monetary policy oriented toward domestic goals
The paper examines data from 1870 up until today in order to determine if this trilemma has
been a reality throughout history. In order to do this, the coherence of international interest
rates is studied. The short-term nominal interest rates are used as measures of monetary
independence (objective 3), since monetary policy almost always takes the form of interest-rate targeting.
The data is sorted into three time periods: the gold standard era (1870-1914), the Bretton
Woods era (1959-1970), and the post-Bretton Woods era (1973-2000). The base interest rate, against which other countries' rates are compared, differs between the eras: in the gold standard era it is the UK interest rate, in the Bretton Woods era it is the US interest rate, and in the last era the base differs across countries. In classifying exchange rate regimes, one can look at the legal
commitment (the de jure status) or the observed behaviour of the exchange rate (the de facto
status). In applying the de facto classification under the gold standard, the authors check
whether the exchange rate against the pound sterling stays within ±2% bands over the course
of a year. The term “peg” is used to describe fixed exchange rates and “nonpeg” to describe
floating exchange rates. Concerning capital markets, the authors assume that all countries
have open capital markets during the gold standard era, that none do during Bretton Woods,
and that the IMF coding is a good approximation for capital controls during the post-Bretton
Woods era.
The starting point in the interest rate analysis is the following equation:
ΔR_it = α + β ΔR_bit + u_it
where R_it is the local interest rate of country i at time t and R_bit is the corresponding base-country rate.
With perfect capital mobility and a permanently, credibly pegged exchange rate (within a band of zero width) we would expect β=1. That is, if the base country interest rate moves, the
home-country interest rate makes the exact same movement. Otherwise we would find β<1
(monetary policy is used to offset base interest-rate shocks) or β>1 (home monetary policy
reinforces base interest-rate shocks).
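As a rough illustration of what β measures, one can simulate a "pegged" and a "floating" economy and run the regression above. This is a minimal sketch with invented data (not the paper's dataset or method), using only the standard library:

```python
# Simulate interest-rate changes for a base country, a credible peg with
# open capital markets (must follow the base one-for-one), and a float
# (sets rates independently), then recover beta by OLS.
# All series are simulated; nothing here comes from the paper's data.
import random

def ols_beta(dy, dx):
    """Slope of an OLS regression of dy on dx (with intercept)."""
    n = len(dx)
    mx = sum(dx) / n
    my = sum(dy) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(dx, dy))
    var = sum((x - mx) ** 2 for x in dx)
    return cov / var

random.seed(0)
d_base = [random.gauss(0, 1) for _ in range(500)]    # base-country rate changes
d_peg = [x + random.gauss(0, 0.1) for x in d_base]   # peg tracks the base closely
d_float = [random.gauss(0, 1) for _ in d_base]       # float moves independently

print(round(ols_beta(d_peg, d_base), 2))    # should come out close to 1
print(round(ols_beta(d_float, d_base), 2))  # should come out close to 0
```

The simulated peg yields β near 1 and the simulated float yields β near 0, which is exactly the contrast the authors look for in the historical data.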
There is also an equation testing for levels relationships and adjustment speed between
interest rates (equation 2).
The data reveals that the gold standard and post-Bretton Woods eras have fairly similar β
values (0.42 and 0.36). These are much higher than the β coefficient for the Bretton Woods
era which is indistinguishable from zero, showing that the capital controls of Bretton Woods
essentially shut down the mechanism by which local countries are forced to follow the base
country. Furthermore, comparing the pegs with the nonpegs during the gold standard and
post-Bretton Woods eras, it is evident that the β values for the pegs are significantly higher than for the nonpegs, suggesting that countries with pegged exchange rates had to follow the base-country interest rate to a greater extent.
The authors do not require R2 or β values to be really close to 1 (which a model with no exchange rate bands, costless arbitrage, and perfect regime credibility would imply). In the
current period, as was the case during the gold standard, the exchange rates that we consider
to be pegged actually do move within specified narrow bands.
Pooling the eras, we still see a stark difference between pegs and nonpegs (with pegs having
higher β coefficients). We also see that non-capital control countries have higher values of
both β and R2 than capital control countries. This supports the trilemma. Furthermore, pegs
with open capital markets have the overall highest β and R2 values, suggesting that those
countries have the lowest ability to conduct independent monetary policies. Hence, it seems
like both exchange rate regimes and capital controls matter in affecting policy autonomy.
The same result is found using the levels analysis. During the gold standard, many of the
nonpeg nations actually had interest rates moving away from the base. The pegs, on the other
hand, showed little independence and adjusted very quickly to the base interest rate. During
the capital controls of the Bretton Woods era, the average adjustment speed for pegs was
much slower – demonstrating far more flexibility of domestic interest rate setting than under the gold standard. It was the desire for such added flexibility that inspired the design of
Bretton Woods. In the current period, the average adjustment speed for pegs is faster than
under Bretton Woods and much faster than for current nonpegs. This suggests that nonpegs have some room for monetary independence today – especially compared to the fixed exchange rate countries.
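The "adjustment speed" in the levels analysis can be made concrete with a stylised partial-adjustment process. Note this is not the paper's equation (2), which is an error-correction specification; it is a simplified stand-in in which the local rate closes a fraction theta of its gap to the base rate each period.

```python
# Stylised illustration of "adjustment speed" (a simplified stand-in for
# the paper's error-correction equation, not the equation itself):
# each period the local rate closes a fraction theta of the gap to the
# base rate, so a higher theta means faster adjustment.
import math

def half_life(theta):
    """Periods needed to close half of an interest-rate gap.

    After t periods a fraction (1 - theta)**t of the gap remains,
    so the half-life solves (1 - theta)**t = 0.5.
    """
    return math.log(0.5) / math.log(1.0 - theta)

print(round(half_life(0.5), 2))   # fast adjustment, e.g. a credible open-market peg
print(round(half_life(0.05), 2))  # slow adjustment, e.g. under Bretton Woods controls
```

The contrast in half-lives mirrors the paper's finding: pegs under open capital markets adjust to the base rate quickly, while the Bretton Woods controls allowed much slower adjustment.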
All tests lead to the conclusion that “countries that peg have less monetary freedom than
nonpegs, although the capital controls of Bretton Woods did succeed in weakening the
linkages among national interest rates”.
The authors conclude that the trilemma makes sense as a guiding policy framework. Interest
rates of pegged economies react more strongly and more quickly to the changes in the base
country interest rate than do non-pegged economies. Absent capital controls, countries
choosing to peg lose monetary independence. Indeed, pegs in both eras of open capital
markets show rather similar relationships with the base interest rate.
Finally, the authors state that the designers of the Bretton Woods system achieved their goal of exchange-rate stability with more room for interest-rate autonomy. Despite rigid pegs, the
Bretton Woods era interest rates show both weaker relationships with, and slower adjustment
speeds to, the base rate. Later, as capital controls became weaker, the combination of
exchange rate pegs and monetary independence became impossible.