E550
Business Conditions Analysis
Dr. David A. Dilts
Department of Economics
June 2006 first revision
May 2005
Business Conditions Analysis, E550
© Dr. David A. Dilts
All rights reserved. No portion of this book
may be reproduced, transmitted, or stored, by
any process or technique, without the express
written consent of Dr. David A. Dilts
2006
Published by Indiana - Purdue University - Fort Wayne
for use in classes offered by the Department of Economics,
Richard T. Doermer School of Business and Management Sciences at IPFW
by permission of Dr. David A. Dilts
SYLLABUS
E550, Business Conditions Analysis
Dr. David A. Dilts
Department of Economics
School of Business and Management Sciences
Indiana University - Purdue University - Fort Wayne
Office: Neff Hall 340D
Office Phone: 481-6486
Email Office: Dilts@ipfw.edu
Home Phone: 486-8225
Email Home: Ddilts2704@aol.com
Course Policies
1. This course is an intersession course. It meets from 5:30 p.m. to 10:20 p.m. for
ten straight weekdays beginning August 7, 2006.
2. Grades will be determined through take-home quizzes. A package of quizzes will
be distributed to the class, and you will be expected to turn one in at the beginning of
class on August 8, August 10, August 14, August 16, August 17, and August 18,
2006. There are a total of six quizzes, each worth twenty points. The best five
quizzes will be used to calculate your final grade. The grade scale is:
89-100          A
77-88           B
65-76           C
53-64           D
52 and below    F
3. Attendance will not be taken; however, it is strongly encouraged.
4. The quizzes may be turned in early, but will not be accepted more than one class
day late. Remember, as much as this is like taking a drink from a fireplug, it is
worse for the instructor who has to grade the work you turn in, and you people
outnumber me. I’m not trying to be officious, but I want to survive this too.
5. All other department, school, campus, and university policies will be applicable to
this course and strictly observed.
Course Objectives:
This course has several objectives, which are very much intertwined. These objectives
include:
To solve problems innovatively, using the following tools and concepts:
1. A quick review of the essence of macroeconomics which is most important in
understanding the environment of business both domestically and globally,
including:
A. National Income Accounts, Price Indices, and Employment Data
B. Keynesian Model of Macroeconomic Activity
1. Business Cycles
2. Fiscal Policy
C. Monetary Aggregates
1. Monetary Policy
2. Exchange rates
3. Interest Rates
2. Economic data sources and their interrelations
A. Economic Indicators
1. Leading indicators
2. Concurrent indicators
3. Trailing indicators
B. Understanding sector performance using economic aggregates
3. Forecasting models and their uses
A. Simple internal models
B. Correlative methods
C. Limitations and value
4. Putting together the notion of business environment within a strategic view of a
business enterprise
With these issues mastered, it is also the objective of this course to use these
tools to integrate and synthesize business conditions information and
analysis for the purposes of planning and decision making.
Reading Assignments
Economics:
Monday, August 7, 2006
Dilts, Chapters 1-2
Tuesday, August 8, 2006
Dilts, Chapter 3
Wednesday, August 9, 2006
Dilts, Chapters 4-5
Thursday, August 10, 2006
Dilts, Chapters 6-7
Friday, August 11, 2006
Dilts, Chapters 8-9
Monday, August 14, 2006
Dilts, Chapters 10-11
Tuesday, August 15, 2006
Dilts, Chapter 12
Data:
Wednesday, August 16, 2006
Dilts, Chapters 13-14
Thursday, August 17, 2006
Dilts, Chapter 15
Friday, August 18, 2006
Dilts, Chapters 16-17 and Catch-up
Chapter 1
Introduction to Economics
In general, the purpose of this chapter is to provide the basic definitions upon
which the subsequent discussions of macroeconomics and business conditions will be
built. The specific purpose of this chapter is to define economics (and its major
component fields of study), describe the relation between economic theory and
empirical economics, and examine the role of objectivity in economic analysis, before
examining economic goals and their relations. For those of you who have had E201 or
E202, Introduction to Microeconomics or Introduction to Macroeconomics, much of the
material contained in this chapter will be similar to the introductory material contained in
those courses. However, you will find that there is considerable expansion of the
discussions offered in the typical principles courses, as a matter of foundation for the
more in-depth discussions of business conditions that will occur in E550.
Definitions
Economics has been studied since the sixteenth century and is the oldest of the
social sciences. Most of the business disciplines arose in an attempt to fill some of the
institutional and analytical gaps in the subject areas that economics is particularly well
suited to examine. The subject matter examined in economics is the behavior of
consumers, businesses, and other economic agents, including the government in the
production and allocation processes. Therefore, any business discipline will have some
direct relation with the methods or at least the subject matter with which economists
deal.
Economics is one of those words that seems to be constantly in the newspapers
and on television news shows. Most people have some vague idea of what the word
economics means, but precise definitions generally require some academic exposure to
the subject. Economics is the study of the allocation of SCARCE resources to
meet UNLIMITED human wants. In other words, economics is the study of human
behavior as it pertains to the material well-being of people (as either individuals or
societies).
Robert Heilbroner describes economics as a "Worldly Philosophy." It is the
organized examination of how, why, and for what purposes people conduct their
day-to-day activities, particularly as they relate to the production of goods and services, the
accumulation of wealth, earning incomes, spending their resources, and saving for
future consumption. This worldly philosophy has been used to explain most rational
human behavior. (Irrational behavior being the domain of specialties in sociology,
psychology, history, and anthropology.)
Underlying all of economics is the base assumption that people act in their own
best interest (at least most of the time and in the aggregate). Without the assumption
of rational behavior, economics would be incapable of explaining the preponderance of
observed economic activity. Consistent responses to stimuli are necessary for a model
of behavior to predict future behavior. If we assume people will always act in their best
economic interests, then we can model their behavior so that the model will predict (with
some accuracy) future economic behavior. As limiting as this assumption may seem, it
appears to be an accurate description of reality. Experimental economics, using rats in
mazes, suggests that rats will act in their own best interest; it therefore appears to be a
reasonable assumption that humans are no less rational.
Most academic disciplines have evolved over the years to become collections of
closely associated scholarly endeavors of a specialized nature. Economics is no
exception. An examination of one of the scholarly journals published by the American
Economics Association, The Journal of Economic Literature, reveals a classification
scheme for the professional literature in economics. Several dozen specialties are
identified in that classification scheme, everything from national income accounting, to
labor economics, to international economics. In other words, the realm of economics
has expanded to such an extent over the centuries that it is nearly impossible for
anyone to be an expert in all aspects of the discipline, so each economist generally
specializes in some narrow portion of the discipline. The decline of the generalist is a
function of the explosion of knowledge in most disciplines, and is not limited to
economists.
Economics can be classified into two general categories: (1)
microeconomics and (2) macroeconomics. Microeconomics is concerned with
decision-making by individual economic agents such as firms and consumers. In
other words, microeconomics is concerned with the behavior of individuals or groups
organized into firms, industries, unions, and other identifiable agents. Microeconomics
is the subject matter of E201 and E321 Microeconomics.
Macroeconomics is concerned with the aggregate performance of the
entire economic system. Unemployment, inflation, growth, balance of trade, and
business cycles are the topics that occupy most of the attention of students of
macroeconomics. In E202 and E322 the focus is on macroeconomic topics. These
matters are the topics to be examined in this course, E550, Business Conditions Analysis.
Macroeconomics is a course that interfaces with several other academic
disciplines. A significant amount of the material covered in this course involves public
policy and has a significant historical foundation. The result is that much of what is
currently in the news will be things that are being studied in this course as they happen.
In many respects, that makes this course of current interest, if not fun.
Methods in Economics
Economists seek to understand the behavior of people and economic systems
using scientific methods. These scientific endeavors can be classified into two
categories, (1) economic theory and (2) empirical economics. Economic theory relies
upon principles to analyze behavior of economic agents. These theories are
typically rigorous mathematical models (abstract representations) of behavior. A good
theory is one that accurately predicts future behavior and is consistent with the available
evidence.
Empirical economics relies upon facts to present a description of
economic activity. Empirical economics is used to test and refine theoretical
economics. The tests that are typically applied to economic theories are
statistically based and are generally called econometric methods.
Much of the material involved in forecasting techniques draws heavily from these
methods. The regression-based forecasting models are very similar to the econometric
models used to analyze the macroeconomy.
In Business Conditions analysis we will rely heavily on a sound theoretical basis
to understand the macroeconomic framework in which business activity arises. Within
this theoretical framework a range of empirical models and data will be examined, which
will allow us to both formally and informally forecast economic conditions and to
understand what sorts of phenomena are associated with these conditions for specific markets.
Theory concerning human behavior is generally constructed using one of two
forms of logic. Sociology, psychology and anthropology typically rely on inductive logic
to create theory. Inductive logic creates principles from observation. In other
words, the scientist will observe evidence and attempt to create a principle or a theory
based on any consistencies that may be observed in the evidence. Economics relies
primarily on deductive logic to create theory. Deductive logic involves formulating
and testing hypotheses. Often the theory that will be tested comes from inductive
logic or sometimes from informed guesswork. The development of rigorous models
expressed as equations typically lends itself to rigorous statistical methods to
determine whether the models are consistent with evidence from the real world. The
tests of hypotheses can only serve to reject or fail to reject a hypothesis. Therefore,
empirical methods are focused on rejecting hypotheses and those that fail to be rejected
over large numbers of tests generally attain the status of principle.
However, examples of both types of logic can be found in each of the social
sciences. In each of the social sciences it is common to find that the basic theory is
developed using inductive logic. With increasing regularity standard statistical methods
are being employed across all of the social sciences and business disciplines to test the
validity of theories.
The usefulness of economics depends on how accurately economic theory
predicts behavior. Even so, economics provides an objective mode of analysis, with
rigorous models that permit the discounting of the substantial bias that is usually
present with discussions of economic issues. The internal consistency brought to
economic theory by mathematical models often fosters objectivity. However, no model
is any better than the assumptions that underpin that model. If the assumptions are
either unrealistic or formulated to introduce a specific bias, objective analysis can still be
thwarted (under the guise of scientific inquiry).
The purpose of economic theory is to describe behavior, but behavior is
described using models. Models are abstractions from reality - the best model is the
one that best describes reality and is the simplest (the simplicity requirement is called
Occam's Razor). Economic models of human behavior are built upon assumptions, or
simplifications, that allow rigorous analysis of real world events without irrelevant
complications. Often (as will be pointed-out in this course) the assumptions underlying
a model are not accurate descriptions of reality. When the model's assumptions are
inaccurate then the model will provide results that are consistently wrong (known as
bias).
One assumption frequently used in economics is ceteris paribus which means
all other things equal (notice that economists, like lawyers and doctors will use Latin to
express rather simple ideas). This assumption is used to eliminate all sources of
variation in the model except for those sources under examination (not very realistic!).
Economic Goals, Policy, and Reality
Most people and organizations do at least rudimentary planning; the purpose of
planning is the establishment of an organized effort to accomplish some economic
goals. Planning to finish your education is an economic goal. Goals are, in a sense, an
idea of what should be (what we would like to accomplish). However, goals must be
realistic and within our means to accomplish, if they are to be effective guides to action.
This brings another classification scheme to bear on economic thought. Economics can
be again classified into positive and normative economics.
Positive economics is concerned with what is; and normative economics is
concerned with what should be. Economic goals are examples of normative
economics. Evidence concerning economic performance or achievement of goals falls
within the domain of positive economics.
Most nations have established broad social goals that involve economic issues.
The types of goals a society adopts depend very much on the stage of economic
development, system of government, and societal norms. Most societies will adopt one
or more of the following goals: (1) economic efficiency, (2) economic growth, (3)
economic freedom, (4) economic security, (5) an equitable distribution of income, (6)
full employment, (7) price level stability, and (8) a reasonable balance of trade.
Each goal (listed above) has obvious merit. However, goals are little more than
value statements in this broad context. For example, it is easy for the very wealthy to
cite as their primary goal, economic freedom, but it is doubtful that anybody living in
poverty is going to get very excited about economic freedom; but equitable distributions
of income, full employment and economic security will probably find rather wide support
among the poor. Notice, if you will, goals will also differ within a society, based on
socio-political views of the individuals that comprise that society.
Economics can hardly be separated from politics because the establishment of
national goals occurs through the political arena. Government policies, regulations, law,
and public opinion will all affect goals, how goals are interpreted, and whether they
have been achieved. A word of warning: eCONomics can be, and often has been, used
to further particular political agendas. The assumptions underlying a model used to
analyze a particular set of circumstances will often reflect a political agenda of the
economist doing the analysis. For example, Ronald Reagan argued that government
deficits were inexcusable, and that the way to reduce the deficit was to lower peoples'
taxes -- thereby spurring economic growth, therefore more income that could be taxed
at a lower rate and yet produce more revenue. Mr. Reagan is often accused, by his
detractors, of having a specific political agenda that was well-hidden in this analysis.
His alleged goal was to cut taxes for the very wealthy and the rest was just rhetoric to
make his tax cuts for the rich acceptable to most of the voters. (Who really knows?)
Most political commentators, both left and right, have mastered the use of assumptions
and high sounding goals to advance a specific agenda. This adds to the lack of
objectivity that seems to increasingly dominate discourse on economic problems.
On the other hand, goals can be publicly spirited and accomplish a substantial
amount of good. President Lincoln was convinced that the working classes should have
access to higher education. The Morrill Act was passed in 1862 and created Land Grant
institutions for educating the working masses (Purdue, Michigan State, Iowa State, and
Kansas State, the first land grant school, are all examples of these types of schools).
By educating the working class, it was believed that several economic goals could be
achieved, including growth, a more equitable distribution of income, economic security
and freedom. In other words, economic goals that are complementary are consistent
and can often be accomplished together. Therefore, conflict need not be the
centerpiece of establishing economic goals.
Because any society's resources are limited there must be decisions about which
goals should be most actively pursued. The process by which such decisions are made
is called prioritizing. Prioritizing is the rank ordering of goals, from the most important to
the least important. Prioritizing of goals also involves value judgments, concerning
which goals are the most important. In the public policy arena prioritizing of economic
goals is often the subject of politics.
Herein lies one of the greatest difficulties in macroeconomics. An individual can
easily prioritize goals. It is also a relatively easy task for a small organization or firm to
prioritize goals. For the United States to establish national priorities is a far larger task.
Adam Smith in the Wealth of Nations (1776) describes the basic characteristics of
capitalism (this book marks the birth of capitalism). Smith suggests that there are three
legitimate functions of government in a free enterprise economy. These three functions
are (1) provide for the national defense, (2) provide for a system of justice, and (3)
provide those goods and services that cannot be effectively provided by the private
economy because of the lack of a profit motive. There is little or no controversy
concerning the first two of these government functions. Where debate occurs is over
the third of these legitimate roles.
Often you hear that some non-profit organization or government agency should
be "run like a business." A business is operated to make a profit. If the capitalist
model is correct, then the only reason for an entrepreneur to establish and operate a
business is to make profits (otherwise the conduct of the business is irrational and
cannot be explained as self-interested conduct). A church, charity, or school is
established for purposes other than the making of a profit. For example, a church may
be established for the purposes of maximizing spiritual well-being of the congregation
(the doing of good-works, giving testimony to one's religion, worship of God, and the
other higher pursuits). The purpose of a college or a secondary/elementary school
system, likewise, is not to make profits; the purpose of educational institutions is to
provide access to knowledge. A university exists to increase the body of knowledge through
basic and applied research, professional services, and (of primary importance to the
students) to provide for the education of students. To argue that these public or
charitable organizations should be run like a business is to suggest that these matters
can be left to the private sector to operate for a profit. Inherent in this argument is the
assumption (a fallacy) that the profit motive would suffice to assure that society received
a quality product (spiritual or educational or both) and in the quantities necessary to
accomplish broad social objectives. Can you imagine what religion would become if it
were reduced to worldly profitability (some argue there's too much of that sort of thing
now)? Can you imagine what you would have to pay for your education if, instead of the
State of Indiana subsidizing education, the student were asked to pay for the total cost of
a course plus some percentage of cost as a profit? Perhaps worse still, who would do
the basic research that has provided the scientific breakthroughs that result in
thousands of new products each year? Would we ever have had computers without the
basic research done in universities? What would be missing from our medical
technology?
Priorities at a national level are rarely set without significant debate,
disagreements, and even conflict. It is through our free, democratic processes that we
establish national, state and local priorities. In other words, the establishment of our
economic priorities is accomplished through the political arena, and therefore it is
often impossible to separate the politics from the economics at the macro level.
Objective Thinking
Most people bring many misconceptions and biases to economics. After all,
economics deals with people's material well-being. Because of political beliefs and
other value-system components, rational, objective thinking concerning various
economic issues often fails. Rational and objective thought requires approaching a subject
with an open mind and a willingness to accept whatever answer the evidence suggests
is correct. In turn, such objectivity requires the shedding of the most basic
preconceptions and biases -- not an easy assignment.
What conclusions an individual draws from an objective analysis using economic
principles are not necessarily cast in stone. The appropriate decision based on
economic principles may be inconsistent with other values. The respective evaluation
of the economic and "other values" (i.e., ethics) may result in a conflict. If an
inconsistency between economics and ethics is discovered in a particular application, a
rational person will normally select the option that is the least costly (i.e., the majority
view their integrity as priceless). An individual with a low value for ethics or morals may
find that a criminal act, such as theft, involves minimal costs. In other words,
economics does not provide all of the answers, it provides only those answers capable
of being analyzed within the framework of the rational behavior that forms the basis of
the discipline.
Perhaps of greatest importance, modeling economic systems provides a
basis for cold calculation, free from individual emotion and self-interest. Do not be
confused, anytime we deal with economic forecasts, there is always the problem of
“hope” versus “what’s next.” By modeling economic activity, and empirically testing that
model, one may be able to produce forecasting models or even indicators that permit
unbiased forecasts. To forecast is one thing, to hope is another.
There are several common pitfalls to objective thinking in economics. After all,
few things excite more emotion than our material well-being. It should come as no
surprise that bias and less-than-objective reasoning are common when it comes to
economic issues, particularly those involving public policy. Among the most common
logical pitfalls that affect economic thought are: (1) the fallacy of composition, and (2)
post hoc, ergo propter hoc. Each of these will be reviewed, in turn.
The fallacy of composition is the mistaken belief that what is true for the
individual must be true for the group. An individual or small group of individuals may
exhibit behavior that is not common to an entire population. In other words, this fallacy
is simply assuming a small, unscientifically selected sample will predict the behavior,
values, or characteristics of an entire population. For example, to conclude that because
one individual in this class is an I.U. fan, everyone in the class must be an I.U. fan is an
obvious fallacy of
composition. Statistical inference can be drawn from a sample of individual
observations, but only within confidence intervals that provide information concerning
the likelihood of making an incorrect conclusion (E270, Introduction to Statistics,
provides a more in-depth discussion of confidence intervals and inference).
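For the curious, the logic of such an interval is easy to sketch in a few lines of Python (a minimal sketch; the survey counts below are invented for illustration):

    # 95% confidence interval for a sample proportion (normal approximation).
    # A properly drawn sample supports inference about the population;
    # an unscientific sample, as in the fallacy above, does not.
    n, successes = 200, 58          # hypothetical survey: 58 of 200 are I.U. fans
    p = successes / n               # sample proportion, 0.29
    se = (p * (1 - p) / n) ** 0.5   # standard error of the proportion
    low, high = p - 1.96 * se, p + 1.96 * se
    print(round(low, 3), round(high, 3))   # roughly (0.227, 0.353)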
Post hoc, ergo propter hoc means after this, hence because of this, and
is a fallacy in reasoning. Simply because one event follows another does not
necessarily imply there is a causal relation. One event can follow another and be
completely unrelated. All of us have, at one time or another, experienced a simple
coincidence. One event can follow another, but there may be something other than a
direct causal relation that accounts for the timing of the two events.
For example, during the fourteenth century people noticed that the black plague
occurred in a location when the population of cats increased. Unfortunately, some
people concluded that the plague was caused by cats so they killed the cats. In fact,
the plague was carried by fleas on rats. When the rat population increased, cats were
attracted to the area because of the food supply (the rats). The people killed the
predatory cats, and therefore, rat populations increased, and so did the population of
fleas that carried the disease. This increase in the rat population also happened to
attract cats, but cats did not cause the plague; if left alone, they might have gotten rid of
the real carriers (the rats, and therefore the fleas). The observation that cats were
increasing in population gave rise to the conclusion that the cats brought the plague,
which is a post hoc, ergo propter hoc fallacy; but this example at least has an indirect
relation between the cats and the real cause. Often, even this indirect relation is absent.
Many superstitions are classic examples of this type of fallacy. Broken mirrors
causing seven years' bad luck, or walking under a ladder bringing bad luck, are nothing
but fallacies of the post hoc, ergo propter hoc variety. There is no causal relation
between breaking glass and bad luck, or walking under a ladder (unless something falls off
the ladder on the pedestrian). Deeper examination of the causal relations is
necessary for such events if the truth of the relations is to be discovered. However,
more in-depth analysis is often costly, and the cost has the potential of causing
decision-makers to skip the informed part and cut straight to the opinion.
Economic history has several examples of how uninformed opinion resulted in
very significant difficulties for innocent third parties, in addition to those responsible for
the decisions. The following box presents a case where policy was implemented based
on the failure to recognize that there is a significant amount of interdependence in the
U.S. economy.
This story shows fairly conclusively that private interests can damage society as
a whole. While our economic freedom is one of the prime ingredients in making our
economy the grandest in the world, such freedom requires that it be exercised in a
responsible fashion, lest the freedom we prize becomes a source of social harm. Like
anything else, economic freedom for one group may mean disaster for another, through
no fault of the victims. Government and the exercise of our democratic responsibilities
are supposed to provide the checks on the negative results of the type portrayed in the
above box.
Statistical Methods in Economics
The use of statistical methods in empirical economics can result in errors in
inference. Most of the statistical methods used in econometrics (statistical examination
of economic data) rely on correlation. Correlation is the statistical association of
two or more variables. This statistical association means that the two variables move
predictably with or against each other. To infer that there is a causal relation between
two variables that are correlated is an error. For example, a graduate student once
found that Pete Rose's batting average was highly correlated with movement in GNP
during several baseball seasons. This spurious correlation cannot reasonably be
considered path-breaking economic research.
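The Pete Rose example is easy to reproduce in spirit. The Python sketch below (all numbers randomly generated) shows that two series that have nothing to do with one another can still be strongly correlated; the random walks merely stand in for the batting average and GNP:

    import random
    random.seed(1)

    def random_walk(n):
        # a trending series built from pure noise
        x, path = 0.0, []
        for _ in range(n):
            x += random.gauss(0, 1)
            path.append(x)
        return path

    def corr(x, y):
        # Pearson correlation coefficient
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    a, b = random_walk(50), random_walk(50)
    print(corr(a, b))   # often far from zero, yet neither series causes the other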
On the other hand we can test for causation (where one variable actually causes
another). Granger causality requires that the cause occur before its effect, that
the explanatory variable add predictive power beyond simple correlation, and that
the relation be sensible. As with most statistical methods, Granger causality models
permit testing for the purpose of rejecting that a causal relation exists; they cannot be
used to prove that causality exists. These
types of statistical methods are rather sophisticated and are generally examined in
upper division or graduate courses in statistics.
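As a taste of what such a test looks like in practice, here is a sketch using the grangercausalitytests function from the Python statsmodels package (this assumes statsmodels and numpy are installed; the data are simulated so that x really does lead y):

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)                  # candidate cause
    y = np.roll(x, 1) + rng.normal(size=200)  # y depends on last period's x
    data = np.column_stack([y, x])            # per the docs: test if 2nd column causes 1st

    # Small p-values reject "x does NOT Granger-cause y"; as noted above,
    # the test can reject non-causality but can never prove causality.
    results = grangercausalitytests(data, maxlag=2)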
As is true with economics, statistics are simply a tool for analyzing evidence.
Statistical models are also based on assumptions, and too often, statistical methods are
used for purposes for which they were not intended. Caution is required: one must be
satisfied that the data were properly gathered and that appropriate methods were
applied before accepting statistical evidence. Statistics do
not lie, but sometimes statisticians do!
A whole chapter will be dedicated to forecasting methods in this course. It is not
so much that forecasting methods are the cognitive content of this course; rather, they
are presented to provide the student with a menu of the types of forecasting techniques
that are available to help get a handle on what to expect. BUFW H509, Research
Methods in Business, focuses on data collection, statistical methods, and
inference. Together with E550, these two courses should provide the student with a
“deep-dish” approach to both subjects.
Objectivity and Rationality
Objective thinking in economics also includes rational behavior. The underlying
assumptions underlying each of the concepts examined in this course assume that people
will act in their perceived best interest. Acting in one's best interests is how rationality is
defined. The only way this can be done, logically and rigorously, is with the use of
marginal analysis. This economic perspective involves weighing the costs against the
benefits of each additional action. In other words, if benefits of an additional action will
be greater than the costs, it is rational to do that thing, otherwise it is not.
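The decision rule is mechanical enough to state in a few lines of Python (a minimal sketch; the benefit and cost schedules are invented for illustration):

    # Marginal analysis: take another unit only while marginal benefit
    # exceeds marginal cost.
    marginal_benefit = [50, 40, 30, 20, 10]   # benefit of each additional unit
    marginal_cost = [15, 20, 25, 30, 35]      # cost of each additional unit

    units = 0
    for mb, mc in zip(marginal_benefit, marginal_cost):
        if mb > mc:
            units += 1
        else:
            break
    print(units)   # stops at 3: the fourth unit costs more than it yields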
Forecasting
Objectivity and rationality are critical in developing models and understanding
how business conditions arise. However, the forecaster is less interested in explaining
how the economy works, and more interested in what some series of numbers is going
to do over time, and in particular, in the near future. The end result is that forecasting
and economic analyses are very often similar processes. If one is interested in the
economic environment (business conditions), one must understand how the economy
actually works. This is why much of the first half of this course is focused exclusively on
the macroeconomy.
Forecasting is like most other endeavors, you need to have a plan. The steps in
forecasting provide an organized approach to permitting the best possible forecasts.
The steps in forecasting involve the identification of the variable or variables you wish to
forecast. Once these are identified, you need to determine the sophistication of the
forecasting method, i.e., who is going to do the forecasting, and who is going to use the
forecast will determine the sophistication. Once the sophistication level is determined,
the method is selected: exponential smoothing or moving averages at the simple end;
two-stage least squares, autoregressive models, and the like at the more complex end.
The choice, again, involves sophistication, but the nature of the variable to be
forecasted may also have something to say about the method selected. Once the
methodology is selected, predictor variables must be selected, tested, and incorporated
into the model. Once the model is developed it will be subject to nearly constant
evaluation, and re-specification. The efficacy, accuracy and usability of the results, and
the efficiency of the model in making the forecasts are the primary criteria used in
determining whether the model needs to be revised. Finally, forecasting reports will
need to be framed and circulated to the appropriate management officials for their use.
These reports will also be subject to evaluation and re-specification.
In sum, the forecasting plan includes several, interrelated steps, which are:
1. Selection of variables to be forecast
2. Determining the level of forecast sophistication
3. Specification of the model to be used to forecast
4. Specification of the variables to be used in the model
5. Re-evaluation of the model, using the results, their efficiency and efficacy
6. Creating the reports, and evaluating the effectiveness of the reports
One should never lose track of the fact that a forecast will never be perfectly
accurate; realistic goals for the forecasts must be set. If one can pick up the direction of
moves in aggregate economic data, the model is probably pretty good. Arriving at
exact magnitudes of the changes in economic aggregates is probably more a function of
luck than of good modeling in most cases. In other words, be realistic in evaluating the
efficiency and efficacy of the forecasting models selected.
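As a concrete taste of the simple end of that menu, here is a minimal Python sketch of a three-period moving average and simple exponential smoothing, applied to an invented sales series:

    sales = [102.0, 98.0, 105.0, 110.0, 107.0, 115.0, 118.0]

    # 3-period moving average: the forecast is the mean of the last 3 observations.
    ma_forecast = sum(sales[-3:]) / 3

    # Simple exponential smoothing: each forecast blends the newest observation
    # with the previous forecast; alpha controls how quickly old data is discounted.
    alpha = 0.3
    forecast = sales[0]
    for y in sales[1:]:
        forecast = alpha * y + (1 - alpha) * forecast

    print(round(ma_forecast, 1), round(forecast, 1))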
Conclusion
Business Conditions Analysis is focused on macroeconomic topics and how
these topics result in various aspects of the business environment in which the modern
enterprise operates. It is not simply a macroeconomics course, nor is it simply a data
collection, modeling and inference course. E550 is the managerial aspects of
macroeconomics, together with a quick and dirty introduction to analysis of the
macroeconomy.
This course is far different from what the typical MBA course of the same title
would have been twenty-five years ago. Twenty-five years ago this course would have
focused on a closed economy, with very little foreign sector influences, and an economy
that was far more reliant on the goods producing sectors. Today, the U.S. economy is
still losing manufacturing, fewer people are employed in mining and agriculture, and it is
truly an economy searching for what it wants to be. This is in some ways a more
exciting economy to study because it is at a cross-roads. The U.S. economy was the
dominant economy when it was a manufacturing-based system. At this cross-roads,
there is the potential to re-double that economic dominance, a system full of
opportunities and challenges. On the other hand, there is always the path the British
followed in the 1960s and since, towards less satisfactory results. In U.S. economic
history we have had many mis-steps, but in general the long-term trend has been
toward ever-increasing prosperity. That is no guarantee, but it is certainly cause for
optimism, particularly if everyone masters the ideas that comprise this course.
KEY CONCEPTS
Economics
Microeconomics
Macroeconomics
Empirical economics v. Theoretical economics
Inductive logic v. Deductive logic
Model Building
Assumptions
Occam’s Razor
Normative economics v. Positive economics
Objective Thinking
Rationality
Fallacy of Composition
Cause and effect
Bias
Correlation v. causation
Cost-benefit analysis
Forecasting
Steps
Plan
Evaluation
STUDY GUIDE
Food for Thought:
Most people are biased in their thinking, particularly concerning economic issues. Why
do you suppose this is?
Sample Questions:
Multiple Choice:
Which of the following is not an economic goal?
A. Full Employment
B. Price Stability
C. Economic Security
D. All of the above are economic goals
Which of the following methods can be applied to test for the existence of statistical
association between two variables?
A. Correlation
B. Granger causality
C. Theoretical modeling
D. None of the above
True - false:
Non-economists are no more or no less biased about economics than they are about
physics or chemistry {FALSE}.
Assumptions are used to simplify the real world so that it may be rigorously analyzed
{TRUE}.
Chapter 2
National Income Accounting
The aggregate performance of a large and complex economic system requires
some standards by which to measure that performance. Unfortunately our systems of
accounting are imperfect and provide only rough guidelines, rather than crisp, clear
measurements of the economic performance of large systems. As imperfect as the
national income accounting methods are, they are the best measures we have and they
do provide substantial useful information. The purpose of this chapter is to present the
measures we do have of aggregate economic performance.
Gross Domestic and Gross National Product
The most inclusive measures we have of aggregate economic activity are Gross
Domestic Product and Gross National Product. These measures are used to describe
total output of the economy, by source. In the case of Gross Domestic Product we are
concerned with what is produced within our domestic economy. More precisely, Gross
Domestic Product (GDP) is the total value of all goods and services produced
within the borders of the United States (or country under analysis). On the other
hand, Gross National Product is concerned with American production (regardless of
whether it was produced domestically). More precisely, Gross National Product
(GNP) is the total value of all goods and services produced by Americans
regardless of whether they are produced in the United States or overseas.
These measures (GDP and GNP) are the two most commonly discussed in the
popular press. The reason they garner such interest is that they measure all of the
economy's output and are perhaps the least complicated of the national income
accounts. Often these data are presented as being overall measures of our
population's economic well-being. There is some truth in the assertion that GDP and
GNP are social welfare measures; however, there are significant limitations in such
inferences. To fully understand these limitations we must first understand how these
measures are constructed.
The national income accounts are constructed in such a manner as to avoid the
problem of double-counting. For example, if we count a finished automobile in the
national income accounts, what about the paint, steel, rubber, plastic, and other
components that go into making that car? To systematically eliminate double-counting,
only value-added is counted for each firm in each industry. The value of the paint,
used in producing a car, is value-added by the paint manufacturing company, the
application of that paint by an automobile worker is value-added by the car company
(but the value of the paint itself is not). By focusing only on value-added at each step of
the production process in each industry, national income accountants are thus able to
avoid the problems of double-counting.
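A minimal numerical sketch (hypothetical dollar figures) shows why summing value-added avoids double-counting:

    # Value-added at each stage sums to the final sale price, so the paint
    # is counted once, not twice.
    paint_purchases, paint_sales = 0.0, 300.0    # paint maker's inputs and sales
    car_purchases, car_sales = 300.0, 20300.0    # car maker buys the paint, sells the car
    value_added = (paint_sales - paint_purchases) + (car_sales - car_purchases)
    print(value_added)   # 20300.0, the final price; naive summing would give 20600.0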
GROSS DOMESTIC PRODUCT by COMPONENT, 1940-2005
(billions of current U.S. dollars)

YEAR    Personal       Gross Domestic   Government     Net          GDP
        Consumption    Investment       Expenditures   Exports
1940         71.1            13.4             14.2         1.4       100.1
1950        192.1            55.1             32.6         0.7       286.7
1960        332.4            78.7             99.8        -1.7       513.4
1970        646.5           150.3            212.7         1.2      1010.7
1980       1748.1           467.6            507.1       -14.7      2708.0
1990       3742.6           802.6           1042.9       -74.4      5513.8
2000       6257.8          1772.9           1572.6      -399.1      9224.0
2005       8745.7          2105.0           2362.9      -726.5     12487.1
The above table presents the GDP accounts by major expenditure component.
GDP is the summation of personal consumption expenditures (C), gross
domestic private investment (Ig), government expenditures (G), and net exports (Xn),
where net exports are total exports minus total imports. Put in equation form:
GDP (Y) = C + Ig + G + Xn
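As a check of the identity, the 2005 row of the table above can be reproduced in a few lines of Python (the figures are simply the table entries, in billions of current dollars):

    # Expenditure approach: GDP = C + Ig + G + Xn (2005, billions of current dollars)
    C = 8745.7     # personal consumption expenditures
    Ig = 2105.0    # gross domestic private investment
    G = 2362.9     # government expenditures
    Xn = -726.5    # net exports (total exports minus total imports)
    GDP = C + Ig + G + Xn
    print(GDP)     # 12487.1, matching the table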
GDP can also be calculated using the incomes approach. GDP can be found by
summing each of the income categories and deducting Net American Income Earned
Abroad. The following illustration shows how GNP and GDP are calculated using the
incomes approach:
______________________________________________________________________

  Depreciation
+ Indirect Business Taxes
+ Employee Compensation
+ Rents
+ Interest
+ Proprietors' Income
+ Corporate Income Taxes
+ Dividends
+ Undistributed Corporate Profits
= Gross National Product
- Net American Income Earned Abroad
= Gross Domestic Product
______________________________________________________________________
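The same tabulation can be sketched in Python; every figure below is hypothetical and chosen only to show the mechanics of the calculation:

    # Income approach (hypothetical figures, billions of dollars)
    depreciation = 1500.0
    indirect_business_taxes = 900.0
    employee_compensation = 7000.0
    rents = 150.0
    interest = 550.0
    proprietors_income = 950.0
    corporate_income_taxes = 400.0
    dividends = 550.0
    undistributed_corporate_profits = 500.0

    GNP = (depreciation + indirect_business_taxes + employee_compensation
           + rents + interest + proprietors_income + corporate_income_taxes
           + dividends + undistributed_corporate_profits)
    net_american_income_earned_abroad = 50.0
    GDP = GNP - net_american_income_earned_abroad
    print(GNP, GDP)   # 12500.0 and 12450.0 under these made-up figures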
In a practical sense, it makes little difference which approach to calculating GDP
is used, the same result will be obtained either way. What is of interest is the
information that each approach provides. The sub-accounts under each approach
provide useful information for purposes of understanding the aggregate performance of
the economy and potentially formulating economic policy. Under the expenditures
approach we have information concerning the amount of foreign trade, government
expenditures, personal consumption and investment.
The following accounts illustrate how GDP is broken down into another useful set
of sub-accounts. Each of these additional sub-accounts provides information that helps
us gain a more complete understanding of the aggregate economic system. The
following illustration demonstrates how the sub-accounts are calculated:
______________________________________________________________________

  Gross Domestic Product
- Depreciation
= Net Domestic Product
+ Net American Income Earned Abroad
- Indirect Business Taxes
= National Income
- Social Security Contributions
- Corporate Income Taxes
- Undistributed Corporate Profits
+ Transfer Payments
= Personal Income
- Personal Taxes
= Disposable Income
______________________________________________________________________
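The chain of sub-accounts is likewise easy to mechanize. In the Python sketch below, only the GDP figure comes from the earlier table; all of the other entries are hypothetical:

    GDP = 12487.1                            # billions, 2005, from the table above
    depreciation = 1500.0                    # hypothetical
    net_american_income_abroad = 50.0        # hypothetical
    indirect_business_taxes = 900.0          # hypothetical
    social_security_contributions = 850.0    # hypothetical
    corporate_income_taxes = 400.0           # hypothetical
    undistributed_corporate_profits = 500.0  # hypothetical
    transfer_payments = 1400.0               # hypothetical
    personal_taxes = 1200.0                  # hypothetical

    NDP = GDP - depreciation
    national_income = NDP + net_american_income_abroad - indirect_business_taxes
    personal_income = (national_income - social_security_contributions
                       - corporate_income_taxes - undistributed_corporate_profits
                       + transfer_payments)
    disposable_income = personal_income - personal_taxes
    print(NDP, national_income, personal_income, disposable_income)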
The expenditures approach provides information concerning from what sector
proportions of GDP come. Personal consumption, government expenditures, foreign
sector, and investment all are useful in determining what is responsible for our
economic well-being. Likewise, the incomes approach provides greater detail to our
understanding of the aggregate economic output. Net Domestic Product is the output
that we still have after accounting for what is used up in production; in other words, the
capital used up in getting GDP is netted out to provide a measure of the output we
have left. National Income adds net American income earned abroad to Net Domestic
Product and takes out all indirect business (ad valorem) taxes that must be paid during production.
Appropriate adjustments are made to National Income to deduct those things that do
not reach households (i.e., undistributed corporate profits) and to add in transfer
payments to arrive at Personal Income. The amount of Personal Income that
households are free to spend after paying their taxes is called Disposable Income.
So far, the national income accounts appear to provide a great deal of
information. However, we do know that this information fails to accurately measure our
aggregate economic well-being. There are many aspects of economic activity that do
not lend themselves well to standard accounting techniques and these problems must
be examined to gain a full appreciation for what this information really means.
National Income Accounts as a Measure of Social Welfare
Accounting, whether it is financial, cost, corporate, nonprofit, public sector, or
even national income, provides images of transactions. The images that the accounting
process provides have value judgments implicit within the practices and procedures of
the accountants. National income accounting, as do other accounting practices, also
has significant limitations in the availability of data and the cost of gathering data. In
turn, the costs of data gathering may also substantially influence the images that the
accounts portray.
GDP and GNP are nothing more than measures of total output (or income).
However, the total output measured is limited to legitimate market activities. Further,
national income accountants make no pretense to measure only positive contributions
to total output that occur through markets. Both economic goods and economic bads
are included in the accounts, which significantly limits any inference that GDP or any of
its sub-accounts are accurate images of social welfare. More information is necessary
before conclusions can be drawn concerning social welfare.
Nonmarket transactions such as household-provided services or barter are not
included in GDP. In other words, the services of a cook, if employed, are counted, but
the services of a man or woman doing the cooking for their own household are not. This
makes comparisons across time within the United States suspect. In the earliest
decades of national income accounting, many of the more routine needs of the
household were served by the household members’ own labor. As society became
faster paced, and two wage earners began to become the rule for American
households, more laundry, housecleaning, child rearing, and maintenance work
necessary to maintain the household were accomplished by persons hired in the
marketplace. In other words, the same level of service may have been provided, but
more of it is now a market activity, hence included in GNP. This is also the case in
comparing U.S. households with households in less developed countries. Certainly,
less of what could be characterized as household maintenance shows up as market
activity in less developed countries. Few people are hired outside of the family
unit to perform domestic labor in less developed countries, and if they are, they are
typically paid pennies per hour. Less developed countries' populations rely
predominately on subsistence farming or fishing, and therefore even food and clothing
may be rarely obtained in the marketplace.
Leisure is an economic good but time away from work is not included in GNP.
The only way leisure time could be included in GNP is to impute (estimate) a value for
the time and add it to GNP (the same method would be required for household services
of family members). Because of the lack of consistency in the use of time for leisure
activities, these imputations would be very arbitrary, at best. However, commodities
used in leisure activities are included in GNP. Such things as movie tickets, skis, and
other commodities are purchased in the market and may serve as a rough guide to the
benefits received by people having time away from work.
Product quality is not reflected in GNP. There is no pretense made by national
income accountants that GDP can account for product or service quality. There is also
little information available upon which to base sound conclusions concerning whether
the qualitative aspects of our total output have increased. It is clear that domestic
automobiles have increased in quality since 1980, and this same experience is likely
true of most of U.S. industry.
No attempt is made in GDP data to account for the composition of output. We must
look at the contributions of each sector of the economy to determine composition. The
U.S. Department of Commerce publishes information concerning output and classifies
that output by industry groups. These industry groupings are called Standard Industrial
Classification (S.I.C.) codes and permit relatively easy tracking of total output by industry group, and
by components of industry groups.
Over time, there are new products introduced and older products disappear as
technology advances. Whale oil lamps and horseshoes gave way to electric lights and
automobiles between the Nineteenth and Twentieth Centuries. As we moved into the
latter part of the Twentieth Century, vinyl records gave way to cassettes, which, in turn, have been
replaced by compact disks. In almost every aspect of life, the commodities that we use
have changed within our lifetimes. Therefore, comparisons of GNP in 1996 with GNP in
1966 are really comparing apples and oranges because we did not have the same
products available in those two years.
As we move further back in time, the commodities change even more. However,
it is interesting to note the relative stability of the composition of output before the
industrial revolution. For centuries, after the fall of the Roman Empire, the composition
of total output was very similar. Attila the Hun would have recognized most of what was
available to Mohammed and he would have recognized most of what Da Vinci could
have found in the market place. Therefore, the rapid change in available commodities
is a function of the advancement of knowledge, hence the advancement in technology.
Another shortcoming of national income accounting is that the accounts say
nothing about how income is distributed. In the early centuries of the last millennium, only a
privileged few had lifestyles that most of us would recognize as middle income or
above. With recent archaeological work at Imperial Roman sites, many scholars have
concluded that over 95% of the population lived in poverty (the majority in life-threatening poverty), while a very few lived in extreme wealth. With the tremendous
increases in knowledge over the past two-hundred years, technology has increased our
productivity so substantially that in the 28 industrialized nations of the world, the
majority of people in those countries do not know poverty. However, the majority of the
world's population lives in less developed countries and the overwhelming majority of
the people in those countries do know poverty, and a significant minority of these
people know life-threatening poverty. In short, with the increase in output has come an
increase in the well-being of most people.
Per capita income is GDP divided by the population, and this is a very rough
guide to individual income, which still fails to account for distribution. In the United
States, the largest economy in the world, there are still over 40 million people (about
14½ percent) who live in poverty, though only a very few of these in life-threatening poverty.
Something just under four percent of the population (about 1 person in 26) is classified
as wealthy. The other 81 percent experience a middle-income lifestyle in the United
States. The distribution of poverty is not equal across the population of this country.
Poverty disproportionately falls to youngest and oldest segments of our population.
Minority group persons also experience a higher proportion of poverty than do the
majority.
Environmental problems are not addressed in the national income accounts. The
damage done to the environment in the production or consumption of commodities is
not counted in GDP data. The image created by the accounts is that pollution,
deforestation, chemical poisoning, and poor quality air and water that give rise to
cancer, birth defects and other health problems are economic goods, not economic
bads. The cost of the gasoline burned in a car that creates pollution is included in GNP,
however, the poisoning of the air, especially in places like Los Angeles, Denver, and
Louisville is not deducted as an economic bad. The only time these economic bads are
accounted for in GNP is when market transactions occur to clean-up the damage, and
these transactions are added to GNP. The end result is that GNP is overstated by the
amount of environmental damage done as a result of pollution and environmental
damage.
The largest understatement of GNP comes from something called the
underground economy. The underground economy is very substantial in most
less developed countries and in the United States. It includes all illegitimate
(mostly illegal) economic activities, whether market activities or not. In less
developed countries, much of the underground economy is the "black market," but there
is also a significant amount of crime in many of these countries. Estimates abound
concerning the actual size of the underground economy in the United States. The
middle range of these estimates suggests the amount of underground economic
activities may be as much as one-third of total U.S. output.
The F.B.I. has, for years, tracked crime statistics in the United States and
publishes an annual report concerning crime. It is clear that drugs, organized theft,
robberies, and other crimes against property are very substantial in the United States.
However, when these crimes result in income for the offenders, there is also the
substantial problem of income tax evasion from not reporting the income from the
criminal activity. After all, Al Capone never went to jail for all of the overt criminal acts
involved in his various criminal enterprises; he went to jail for another crime, that is,
failing to pay income taxes on his ill-gotten gains.
Drug trafficking in the United States is a very large business. The maximum
estimates place this industry someplace in the order of a $500 billion per year business
in the U.S. Few legitimate industries are its equal. Worse yet, the news media reports
that nearly half of those incarcerated in this nation's prisons are there on drug charges.
The image that the national income accounts portray is that the $100 billion plus that
is spent on law enforcement and corrections because of drug trafficking is somehow an
economic good, not a failure of our system. Drugs, however, are not the only problem.
As almost any insurance company official can tell you, car theft is another major
industry. A couple of years ago CNN reported that a car theft ring operating in the
Southeast (and particularly Florida) was responsible for a large proportion of vehicles
sold in certain Latin American countries. CNN further reported that if this car theft ring
were a legitimate business, it would be the fourteenth largest in the United States (right above
Coca-Cola in the Fortune 100).
In an economy with total output of $6 trillion, when nearly 10% of that is matched
by only one illegitimate industry - drugs - there is a serious undercounting problem. If
estimates are anyplace close to correct, and $500 billion per year are the gross sales of
drug dealers, and if the profits on this trade are only eighty percent (likely a low
estimate), and if a corporate income tax rate of forty-nine percent could be applied to
the resulting $400 billion in profits, then roughly $196 billion of new revenue would be raised,
cutting a $270 billion Federal budget deficit to something in the order of $74 billion, without any reduction in
expenditures for law enforcement and corrections (which could be re-allocated to
education, health care or other good purposes). Maybe the best argument for the
legalization of drugs is its effect on the nation's finances (assuming, of course, drugs
were only a national income accounting problem).
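A back-of-envelope check of that arithmetic, using the paragraph's own hypothetical estimates:

    # Hypothetical revenue from taxing the drug trade.
    gross_sales = 500.0                  # billions per year, upper-end estimate
    profits = gross_sales * 0.80         # 400 billion at an 80% profit rate
    revenue = profits * 0.49             # 196 billion at a 49% corporate tax rate
    remaining_deficit = 270.0 - revenue  # roughly 74 billion of deficit remains
    print(revenue, remaining_deficit)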
Price Indices
Changes in the price level pose some significant problems for national income
accountants. If we experience 10% inflation and a reduction of total output of 5% it
would appear that we had an increase in GNP. In fact, we had an increase in GNP, but
only in the current dollar value of that number. In real terms, we had a reduction in
GNP. Comparisons between these two time periods means very little because the price
levels were not the same. If we are to meaningfully compare output, we must have a
method by which we can compare output from one period to another while controlling
for changes in the price level.
Nominal GDP is the value of total output, at the prices that exist at that time. By
adjusting aggregate economic data for variations in price levels then we have data that
can be compared across time periods with different price levels. Real GDP is the value
of total output using constant prices (variations in price levels being removed).
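A minimal sketch of the adjustment in Python (the price index value is hypothetical; the nominal figure is the 2005 entry from the earlier table):

    # Deflating nominal GDP to real GDP with a price index (base year = 100).
    nominal_gdp = 12487.1                      # billions of current dollars
    price_index = 112.0                        # hypothetical index value
    real_gdp = nominal_gdp / (price_index / 100.0)
    print(round(real_gdp, 1))                  # output valued at base-year prices

    # The example above: 10% inflation with 5% less real output still raises
    # nominal GNP, since 1.10 * 0.95 = 1.045, a 4.5% nominal increase.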
Price indices are the way we attempt to measure inflation and adjust aggregate
economic data to account for price level variations. There is a wide array of price
indices. We measure the prices wholesalers must pay and those consumers must pay
(either urban consumers, CPI-U, or wage earners, CPI-W); we measure prices for all
goods and services (the GNP Deflator); and we also have indices that focus on
particular regions of the country, generally large urban areas, called Standard
Metropolitan Statistical Areas (S.M.S.A.s).
Price indices are far from perfect measures of variations in prices. These indices
are based on surveys of prices of a specific market basket of goods at a particular point
in time. The accuracy of any inference that may be drawn from these indices depends
on how well the market basket of commodities used to construct the index matches our
own expenditures (or the expenditures of the people upon whom the analysis focuses).
Further complicating matters is the fact that the market basket of goods changes
periodically as researchers believe consumption patterns change. Every five to ten
years (generally seven years) the market basket of goods is changed in an attempt to
account for the current expenditure patterns of the group for which the index is
constructed (total GNP, consumers, wholesalers, etc.).
For the consumer price indices, there is a standard set of assumptions used to
guide the survey takers concerning what should be included in the market basket. The
market basket for consumers assumes a family of four, with a male wage earner, an
adult female not employed outside of the home, and two children (one male, one
female). There are also assumptions concerning home ownership, gift giving, diet, and
most aspects of the hypothetical family's standard of living.
The cost of living and the standard of living are mirror images of one another. If
someone has a fixed income and there is a two percent inflation rate per year, then their
standard of living will decrease two percent per year (assuming the index used is an
accurate description of their consumption patterns). In other words, a standard of living
is eroded if there is inflation and no equal increase in wages (or other income, i.e.,
pensions). Under the two percent annual inflation scenario, a household would need a
two percent increase in income each year simply to avoid a loss in purchasing power of
their income (standard of living).
During most, if not all, of your lifetime this economy has experienced inflation.
Prior to World War II, however, the majority of American economic history is marked by
deflation. That is, a general decrease in all prices. With a deflationary economy all one
must do to have a constant increase in their standard of living is to keep their income
constant while prices fall. However, deflation is a problem. Suppose you want to buy a
house. Most of us have mortgages; we borrow to buy a house. If you purchase a
house worth $50,000 and borrow eighty percent of the purchase price, $40,000, you
may have a problem. If we have five percent deflation per year, it takes only five years
for the market value of that house to fall to $38,689. In the sixth year, you owe more on
your thirty-year mortgage than the market value of the house. Credit for consumer
purchases becomes an interesting problem in a deflationary economy.
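The arithmetic of the example can be verified with a few lines of Python:

# A $50,000 house under five percent annual deflation.
value = 50000.00
for year in range(1, 6):
    value *= (1 - 0.05)              # prices fall 5% each year
    print(year, round(value, 2))
# Year 5 prints 38689.05 -- below the $40,000 originally borrowed.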
On the other hand, if you owe a great deal of money, you have the opportunity to
pay back your loans with less valuable money the higher the rate of inflation. Therefore,
debtors benefit from inflation if they have fixed rate loans that do not adjust the rates for
the effects of inflation.
The inflationary experience of the post-World War II period has resulted in our
expecting prices to increase each year. Because we have come to anticipate inflation,
our behaviors change. One of the most notable changes in our economic behavior has
been the wide adoption of escalator provisions in collective bargaining agreements,
executory contracts, and in entitlement laws (social security, veterans' benefits, etc.).
Escalator arrangements (sometimes called Cost of Living Adjustments, or C.O.L.A.)
typically tie earnings or other payments to the rate of inflation, but only proportionally.
For example, the escalator contained in the General Motors and United Auto Workers
contract provides for employees receiving 1¢ per hour for each 0.2-point increase in the
CPI. This protects approximately $5.00 of the employee's earnings from the erosive
effects of inflation ((0.01/0.2) × 100 = $5.00, assuming a base CPI of 100). (There is no
escalator that provides a greater benefit to income receivers than the GM-UAW national
agreement.)
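The proportionality is easy to sketch in Python (the helper name is ours, not from any contract or official source):

# Base hourly wage protected by a COLA clause paying `cents` dollars
# per `increment`-point rise in the CPI, against a base CPI of 100.
def protected_wage(cents, increment, base_cpi=100.0):
    return (cents / increment) * base_cpi

print(protected_wage(0.01, 0.2))   # 5.00 -- the GM-UAW style clause
print(protected_wage(0.01, 0.3))   # about 3.33
print(protected_wage(0.01, 0.4))   # 2.50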
There are other price indices that focus on geographic differences. (Price data
that measure changes over time are called time series; data that measure differences
within a time period but across people or regions are called cross sections.) The
American Chamber of Commerce Research Association (ACCRA) in Indianapolis
does a cross-sectional survey for mid-sized to large communities across the United
States. On this ACCRA index Fort Wayne generally ranges between about 96.0 and
102.0, where 100 is the national average and the error of estimate is between 2 and 4
percent.
There are also producer and wholesale price indices, as well as several
component parts of the consumer price indices designed for specific purposes that focus
on regions of the country or on industries. For example, the components of the CPI are
broken down so that we have detailed price information for health care costs, housing
costs, and energy costs, among others.
Paasche v. Laspeyres Indices
There are two different methods of calculating price indices; these are the
Laspeyres Index and the Paasche Index. The Laspeyres Index uses a "constant market
basket" of goods and services to represent the quantity portion of the ratio, Q0, so that
only price changes are reflected in the index.
L = (∑PnQ0 / ∑P0Q0) × 100
This is the most widely used method of constructing price indices. It has the
disadvantage of not reflecting changes in the quantities of the commodities purchased
over time. The Paasche index overcomes this problem by permitting the quantities to
vary over time, which requires more extensive data grubbing. The Paasche index is:
P = (∑PnQn / ∑P0Qn) × 100
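A short Python sketch of the two formulas, using hypothetical price and quantity data:

def laspeyres(p0, pn, q0):
    # Constant base-period basket q0: only prices vary.
    return 100 * sum(pn_i * q_i for pn_i, q_i in zip(pn, q0)) / \
                 sum(p0_i * q_i for p0_i, q_i in zip(p0, q0))

def paasche(p0, pn, qn):
    # Current-period basket qn: quantities are re-surveyed each period.
    return 100 * sum(pn_i * q_i for pn_i, q_i in zip(pn, qn)) / \
                 sum(p0_i * q_i for p0_i, q_i in zip(p0, qn))

p0, pn = [1.00, 2.00], [1.20, 2.20]   # base and current prices
q0, qn = [10, 5], [8, 6]              # base and current quantities
print(laspeyres(p0, pn, q0))          # 115.0
print(paasche(p0, pn, qn))            # 114.0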
Measuring the Price Level
The discussion here will focus on the Consumer Price Index (CPI) but is
generally applicable. The CPI is based on a market basket of goods and is expressed
as a percentage of that market basket's value in a base year (the year with
which all prices in the index are compared). Each year's index is constructed by
dividing the current year market basket's value by the base year market basket's value
and then multiplying the result by 100. Note that the index number for the base year will
be 100.00 (or 1 × 100).
By using this index we can convert nominal values into real values (real values are
expressed in base year dollars). We can either inflate or deflate current values to obtain
real values. Inflating is the adjustment of prices to a higher level, for years when the
index is less than 100. Deflating is the adjustment of prices to a lower level, for years
when the index is more than 100.
The process whereby we inflate and deflate is relatively simple and
straightforward. To change nominal into real values the following equation is used:
Real value = Nominal value ÷ (price index / 100)
For example, suppose 1989 is the base year, so the 1989 CPI is 100, and by 1996 the
CPI has increased to 110.0. If we want to know how much $1.00 of 1996 money is worth
in 1989 dollars we must deflate the 1996 dollar. We accomplish this by dividing 110 by
100, obtaining 1.1; we then divide 1 by 1.1 and find .909, which is the value of a 1996
dollar in 1989 dollars. If we want to know how much a 1989 dollar would buy in 1996 we
must inflate. We accomplish this by dividing 100 by 110, obtaining .909; we then divide
1 by .909 and find 1.10, which is the value of a 1989 dollar in 1996 dollars.
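The same worked example in a few lines of Python (the helper name is ours):

# Re-express `value` from the year whose index is `index_from`
# into the dollars of the year whose index is `index_to`.
def convert(value, index_from, index_to):
    return value * (index_to / index_from)

print(round(convert(1.00, 110.0, 100.0), 3))   # 0.909 -- deflate a 1996 dollar to 1989
print(round(convert(1.00, 100.0, 110.0), 2))   # 1.1   -- inflate a 1989 dollar to 1996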
Because the government changes base years, it may be necessary to convert
one or more indices with different base years into a consistent time series if we
want to compare price levels between years decades apart. Changing base years is a
relatively simple operation. If you wish to convert a 1982 base year index to be
consistent with a 1987 base year, take the 1987 observation in the 1982-base series
and divide every observation in that series by it (multiplying each result by 100). This
results in a new index with 1987 as the base year. If inflation was experienced during
the entire period, then the index number for 1987 will be 100, the indices for years prior
to 1987 will be less than 100, and the numbers for years after 1987 will be larger
than 100.
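A sketch of the rebasing operation in Python, using a hypothetical 1982-base series:

old_series = {1982: 100.0, 1983: 104.0, 1984: 109.0, 1985: 113.0,
              1986: 117.0, 1987: 121.0, 1988: 126.0}

pivot = old_series[1987]              # the 1987 value in the 1982-base series
new_series = {yr: 100 * v / pivot for yr, v in old_series.items()}

print(round(new_series[1987], 1))     # 100.0 -- the new base year
print(round(new_series[1982], 1))     # 82.6  -- pre-1987 years fall below 100
print(round(new_series[1988], 1))     # 104.1 -- post-1987 years exceed 100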
The price index method has problems. The assumptions concerning the market
basket of goods to be surveyed cause specific results that are not descriptive of the
general population. In the case of the consumer price index, families without children or
with more than two may find their cost of living differs from what the index suggests. If
both parents work, the indices may understate the cost of living. For families with
ten-year-old, fixed-rate mortgages in a period of high current mortgage rates, the CPI
may overstate their current cost of living. Therefore, the CPI is only a rough measure,
and its applicability differs from household to household.
Cost of Living Adjustments
David A. Dilts and Clarence R. Deitsch, Labor Relations, New York: Macmillan
Publishing Company, 1983, p. 167.
To the casual observer, COLA clauses may appear to be an excellent method
of protecting the real earnings of employees. This is not totally accurate. COLA is
not designed to protect the real wage of the employee but is simply to keep the
employee's nominal wage, within certain limits, close to its original purchasing power.
With a 1 cent adjustment per .4 increase in the CPI (if no ceiling is present) the base
wage which is being protected from the erosive effect of inflation is $2.50 per hour, 1
cent per .3 increase in the CPI protects $3.33 per hour, and 1 cent per .2 increase in
the CPI protects $5.00 per hour. This is quite easy to see; since the CPI is an index
number computed against some base year (CPI = 100 in 1967) and the adjustment
factor normally required in escalator clauses is 1 cent per some increase x, in the
CPI, the real wage which is protected by the escalator is the inverse of the CPI
requirement or 1/x.
The items that go into the market basket of goods used to construct the price
indices, and the proper inferences that can be drawn from these data, are published
in several sources. Perhaps the most easily accessible location for this information is
the Bureau of Labor Statistics site, www.bls.gov, which also publishes a wealth of
information concerning employment, output, and prices.
The following boxes provide some recent information of the type to be found at
the Bureau of Labor Statistics site. The first box provides annual data for consumer
prices for both All Urban Consumers (CPI-U), and Urban Wage Earners and Clerical
Employees (CPI-W).
Consumer Prices 1996-2005, Annual Averages

Year    CPI-U    CPI-W    Difference (U-W)
1996    156.9    154.1    2.8
1997    160.5    157.6    2.9
1998    163.0    159.7    3.3
1999    166.6    163.2    3.4
2000    172.2    168.9    3.3
2001    177.1    173.5    3.6
2002    179.9    175.9    4.0
2003    184.0    179.8    4.2
2004    188.9    184.5    4.4
2005    195.3    191.0    4.3

Not seasonally adjusted
1982-84 = 100.0
Source: U.S. Department of Labor, Bureau of Labor Statistics
Notice that the trend of prices is upward, and that the CPI-U is always slightly
larger than the CPI-W. The difference between the two series is also growing: in 1996
the difference was about 1.8% of the CPI-U, whereas by 2005 the difference was about
2.2% of the CPI-U.
The market basket of goods and services that goes into calculating the consumer
price index has been the subject of substantial controversy. For inference to be
properly drawn concerning the underlying prices in the CPI, one must have some idea of
what goes into the market basket and whether that mix of goods and services is of any
particular significance for a consumer. The general categories of expenditures by
consumers which are represented by the CPI-U are presented in the following table.
The table presents the categories of expenditure, rather than each specific item that is
included in the market basket. These categories of expenditure are for the base period
1982-84 and are supposed to be representative of what the average urban consumer
purchased during this time frame. Consider the consumer expenditure categories used
in calculating the CPI-U which are presented in the following table:
CPI-U Expenditure Categories, 1982-84

Category                    Relative Importance, Dec. 2005
Food                        13.942
Alcoholic Beverages          1.109
Shelter                     32.260
Fuels & Utilities            5.371
Furnishings & Operations     4.749
Apparel                      3.786
Transportation              17.415
Medical Care                 6.220
Recreation                   5.637
Education                    2.967
Communication                3.080
Other goods & services       3.463
___________________________
All Categories              99.999

Source: Bureau of Labor Statistics
This is an exercise in positive economics. These are NOT the percentages of
income that consumers actually spend; they are the percentages of income assumed to
be spent by consumers in constructing the CPI-U. It is clear that there are value
judgments and biases included in these data, but what is not so clear is whether the
data are upwardly or downwardly biased. With the increases in health care costs and
recent increases in oil prices it is very likely that, in general, the CPI-U is downwardly
biased. However, there are those who argue that the CPI-U is upwardly biased and
should be adjusted downward. These economists, however, are generally the same
ones who argue that indexing government transfer payments and social security is a
bad idea. Where the truth actually lies is an empirical question.
Other Price Level Aggregates
The key price level measures are published data from the Commerce
Department. Some simple relationship has been demonstrated in the economic
literature which is worth examining here. The consumer price index is divided into core
components and also includes things which are more volatile, these items are food and
energy. Housing, clothing, entertainment, etc. are less volatile are often referred to as
the core elements of the consumer price index.
Theoretically, the Wholesale Price Index, an index of items that wholesalers
provide to retailers, and the Producer Price Index, an index of intermediate goods, are
presumed to be leading indicators of prices at the consumer level. In fact, these two
price indices, over the broad sweep of American economic history, have been
reasonably good indicators of what to expect from consumer prices.
The Producer Price Index is a complex but useful collection of numbers. The
PPI is divided into two general categories, industry and commodity. The major groupings
under each of these broad categories have series of price data which have been
collected monthly for several years. The following two tables present simple examples
of the types of PPI data that are available. The first table presents industry data; the
second presents commodity data. There are also series on such specialized
pricing information as import and export prices, and these are all available on the
www.bls.gov site.
Industry PPI data, 1996-2005

Year        Petroleum    Book         Auto & Light   General
            Refineries   Publishers   Vehicle Mfg    Hospitals
1996         85.3        224.7        140.4          112.5
1997         83.1        232.1        137.8          113.6
1998         62.3        238.8        136.8          114.6
1999         73.6        247.6        137.6          116.6
2000        111.6        255.0        138.7          119.8
2001        103.1        263.3        137.6          123.4
2002         96.3        273.0        134.9          127.9
2003        121.2        283.0        135.1          135.3
2004        151.5        293.7        136.5          141.9
2005        205.3        305.4        135.1          147.3
%Δ 1996-05  140.7         35.9         -3.9           30.9

Source: Bureau of Labor Statistics
As can be easily seen from the above table, there is wide variation in the PPI by
industry. Oil refineries' costs of operation have gone up more than 140% over the
period, while the costs for automobile and light vehicle manufacturers have actually
declined over the period by nearly 4 percent. In keeping with the high costs of health
care, hospitals' costs of operation have increased nearly as much as the costs to book
publishers over the period. Remember, these are industry cost numbers. The following
table presents commodity cost statistics in the PPI series.
Commodity PPI data, 1996-2005

Year        All           Petroleum   Paper   Iron and
            Commodities   Crude               Steel
1996        127.7          62.6       149.4   125.8
1997        127.6          57.5       143.9   126.5
1998        124.4          35.7       145.4   122.5
1999        125.5          50.3       141.8   114.0
2000        132.7          85.2       149.8   116.6
2001        134.2          69.2       150.6   109.7
2002        131.1          67.9       144.7   114.1
2003        138.1          83.0       146.1   121.5
2004        146.7         108.2       149.4   162.4
2005        157.4         150.1       159.6   171.1
%Δ 1996-05   23.3         139.8         6.8    36.0

Source: Bureau of Labor Statistics
As can readily be seen in comparing the two series, there is a reason why oil
refineries' costs of operation increased: the price of crude oil increased by just about
what the costs to the refiners increased. However, book publishers need not look to the
price of paper to explain their cost increases. In sum, the costs of all commodities went
up over the nine-year period by a rather small 23.3 percent, an average of just over two
and one-half percent per year.
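The growth arithmetic behind those figures, sketched in Python with the all-commodities series from the table above:

start, end, years = 127.7, 157.4, 9
total_change = 100 * (end - start) / start
print(round(total_change, 1))           # 23.3 -- total change, 1996 to 2005
print(round(total_change / years, 2))   # 2.59 -- simple average per year
# The compound annual rate, (end/start)**(1/years) - 1, is slightly lower (~2.3%).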
Monetary aggregates have also been used as a rough gauge of what to expect in
the behavior of price variables over time. Later in the course there will be substantial
discussion of the Federal Reserve and monetary economics, but it is interesting to note
here that monetary aggregates have both a practical and a theoretical relationship to
aggregate price information in the economy. Monetary aggregates generally have
significant influence over interest rates, which, in turn, have both aggregate supply
and aggregate demand implications. A presentation and discussion of these economic
statistics is reserved for the monetary economics chapters in this course.
Employment and Output
Unemployment and employment data have implications for the business cycle,
as will be discussed in detail later. However, it is important to note here that
employment statistics are generally a lagging indicator, that is, they trail (or confirm)
what is going on in the aggregate economy.
Forecasting is as much a game of indications as of precision. If we
see industrial output on the rise, we can expect that unemployment rates will decline
once a trough in the business cycle has been reached. On the other hand, if industrial
output is declining, we can expect future unemployment rates to be on the rise. This
presumes that there has not been a prolonged recession or structural changes in the
economy. When there are structural changes or a prolonged recession, then such
phenomena as the discouraged worker hypothesis may operate to make
unemployment rates move in directions that would otherwise be inconsistent with what
had been recently observed in the aggregate economy.
In other words, even though the unemployment and employment statistics have a
generally predictable relationship with output, you can get fooled sometimes because of
underlying phenomena in the economy. These theoretical relationships are only
indicative and can sometimes be overwhelmed by other economic pressures.
A discussion of the various economic statistics relevant to employment and
output is reserved for the following chapter, where the concepts which underpin these
data are discussed in more detail.
KEY CONCEPTS
National Income Accounting
Social Welfare
Under-estimations
Over-estimations
Gross Domestic Product v. Gross National Product
Value added
Expenditures Approach
Income Approach
Criticisms
Net Domestic Product v. Net National Product
Depreciation
National Income
Personal Income
Disposable Income
Inflation v. Deflation
Cost-Push Inflation v. Demand-Pull Inflation
Pure Inflation
Monetarist School
Quantity Theory of Money
Price Indices
CPI
PPI
Other Indices
Inflating v. Deflating
Paasche v. Laspeyres
Price aggregates
CPI and WPI and PPI relations
Monetary aggregates
Employment and Unemployment
Relations with output
Indicators, not hard and fast rules
STUDY GUIDE
Food for Thought:
Critically evaluate the use of national income accounts as measures of social welfare.
Using the following data construct the price indices indicated:
Year    Market basket $
1989    350
1990    400
1991    440
1992    465
1993    500
Calculate a price index using 1989 as the base year
Calculate a price index using 1991 as the base year
If the nominal price of a new house in 1993 is $100,000, how much is the house in 1989
dollars? In 1991 dollars?
If the price of a new house in 1989 is $90,000, what is the price in 1991 dollars?
Critically evaluate the use of price indices for comparison purposes across time.
Using the following data calculate GDP, NDP, NI, PI, and DI
Undistributed Corporate profits $40
Personal consumption expenditures $1345
Compensation of employees $841
Interest $142
Gross exports $55
Indirect business taxes $90
Government expenditures $560
Rents $115
Personal taxes $500
Gross imports $75
Proprietors income $460
Depreciation $80
Corporate income taxes $100
Net Investment $120
Dividends $222
Net American income earned abroad $5
Social security contributions $70
Sample Questions:
Multiple Choice:
If U.S. corporations paid out all of their undistributed corporate profits as dividends to
their stockholders, then which of the following national income accounts would show
an increase?
A. Gross Domestic Product
B. Net Domestic Product
C. Personal Income
D. National Income
The following are costs of market baskets of goods and services for the years indicated:
1900    $100
1910    $102
1920    $105
1930    $90
1940    $100
1950    $110
1960    $160
Using 1920 as a base year what is the price index for 1900 and for 1960?
A. 1900 is 105, 1960 is 160
B. 1900 is 95.2, 1960 is 152.4
C. 1900 is 111.1, 1960 is 177.8
D. Cannot tell from the information given
True - False:
The GDP is understated because of the relatively large amount of economic activity that
occurs in the underground economy {TRUE}.
The difference between Personal Income and Disposable Income is personal taxes
{TRUE}.
Chapter 3
Unemployment and Inflation
When one thinks of business conditions, the first thing to come to mind, typically,
is the recurrent cycle that the economy repeats with near regularity. Business cycles
are a fact of economic life in a market-based economy. Recessions are generally
followed by recoveries, which give a sort of up-and-down vibration to the long-term
secular growth trend in the U.S. economy. We track this vibration through the two
major economic phenomena used to follow business cycles: employment and price
level stability. The purpose of this chapter is to examine two of the most important and
recurrent economic problems that have characterized modern economic history
throughout the industrialized world, including the United States. These two problems
are unemployment (associated with recessions) and inflation (associated with the loss
of purchasing power of our incomes). Most economic policy focuses on mitigating
these most serious of macroeconomic problems.
Mixed Economic System and Standard of Living
American capitalism is a mixed economic system. There are small elements of
command and tradition, and some socialism. However, our economic system is
predominantly capitalist. The economic freedom and ability to pursue our individual
self-interest provide the American people a standard of living, in the main, that is
unprecedented in world history. Perhaps more important, our high standard of living is
widely shared throughout American society (with fewer than 17.5% of Americans
living in poverty).
The accomplishments of American society ought not to be taken lightly; no other
epoch and no other nation has seen a "golden" age as impressive as modern America's.
However, there are aspects of our freedom of enterprise that are not so positive. Free
market systems have a troubling propensity to experience recessions (at the extreme,
depressions) periodically. As our freedom of inquiry develops new knowledge, new
products and new technologies, our freedom of enterprise also results in the
abandonment of old industries (generally in favor of new industries). At times we also
seem to lose faith in accelerating rates of growth or economic progress. At other times,
we have experienced little growth in incomes (at the extreme, declines in consumer
incomes). All of these problems have resulted in downturns in economic activity. At
still other times, consumers' incomes have increased at accelerating rates, people have
become enthusiastic about our economic future, and the growth of new industries has
far outpaced the loss of old industries, resulting in substantial expansions of
economic activity. Together these downturns and expansions are referred to as the
business cycle.
Business Cycles
The business cycle is the recurrent ups and downs in economic activity
observed in market economies. Troughs, in the business cycle, are where
employment and output bottom out during a recession (downturn) or depression
(serious recession). Peaks, in the business cycle, are where employment and output
top out during a recovery or expansionary period (upturn). These ups and downs
(peaks and troughs) are generally short-run variations in economic activity. It is
relatively rare for a recession to last more than several months, two or three years
at maximum. The Great Depression of the 1930s was a rare exception. In
fact, the 1981-85 recession was unusually long.
One of the most confusing aspects of the business cycle is the difference
between a recession and a depression. For the most part, recessionary trends are
marked by a downturn in output. This downturn in output is associated with increased
levels of unemployment. Therefore, unemployment is what is typically keyed upon in
following the course of a recession. In 1933, the U.S. economy experienced 24.9%
unemployment; this is clearly a depression. The recession of 1958-61 reached only
6.7% unemployment; that level of reduced economic activity is clearly only a
recession. However, in the 1981 through 1986 downturn the unemployment rate
reached a high of 10.8%, and in both 1982 and 1983 the annual average unemployment
rate was above 9.5%. The Reagan recession was probably close to, if not actually, a
short depression, or at least arguably a deep recession. This 1981-85 period was clearly
the worst performing economy since World War II, but it also was clearly nothing
compared to the problems of the decade before World War II.
The old story about the difference between a recession and depression probably
is as close to describing the difference between a recession and a depression as
anything an economist can offer. That is, a recession is when your neighbor is out of
work, a depression is when you are out of work!
In general, the peaks and troughs associated with the business cycle are
short-run variations around a long-term secular trend. Secular trends are general
movements in a particular direction that are observed for decades (at least 25
years in macroeconomic analyses).
Prior to World War II the secular trend started as a relatively flat, limited-growth
period and then took a sharp downward direction until the beginning of the war in
Europe (a period of about twenty years). Since the end of World War II we have
experienced a long period of rather impressive economic growth (a period of over fifty
years).
The following diagram shows a long-term secular trend that is substantially
positive (the straight, upward-sloping line). About that secular trend is a curved line
whose slope varies between positive and negative, much the same as the business
cycle varies about the long-term growth trend. If we mapped out economic activity
since World War II we would observe a positive long-term trend, with marked ups and
downs showing the effects of the business cycle.
[Diagram: output plotted against years; a straight upward-sloping secular trend line with the business cycle oscillating about it between peaks and troughs]
There are other variations observed in macroeconomic data. These variations,
called seasonal variations, last only weeks or months and are associated with the
seasons of the year. During the summer, unemployment generally increases as
students and teachers are out of school and seeking employment. Throughout most of
the Midwest, agriculture and construction tend to be seasonal. Crops are harvested in
the fall; in the winter months farmers either focus on livestock production or wait for the
next grain season. In the upper Midwest, north of Fort Wayne, outside work is very
limited in winter due to extreme weather conditions, making construction exhibit a
seasonal trend. In the retail industry, disproportionately large amounts of business are
observed from Thanksgiving through New Year's Day, with smaller amounts during the
summer.
Most series of data published by the Commerce Department or the Bureau of
Labor Statistics concerning prices and employment come in two different versions.
There will typically be data which do not account for predictable seasonal variations (not
seasonally adjusted) and data which have had the seasonal variations removed
(seasonally adjusted). From a practical standpoint, the raw numbers capture what
actually happens and may be useful for forecasts and analyses where you wish to have
the actual results. On the other hand, knowing what the seasonal variations are
permits the analyst to capture effects in their models that stem from more
fundamental economic or political processes. Therefore, both series of data are of
significance to the economics profession.
Unemployment
Unemployment is defined as the condition of an individual worker who is not
gainfully employed, is willing and able to work, and is actively seeking employment. A
person who is not gainfully employed, but is not seeking employment or who is unwilling
or unable to work, is not counted as unemployed, nor as a member of the work force.
During the Vietnam War unemployment dropped to 3.6% in 1968 and 3.5% in 1969.
This period illustrates two ways in which unemployment can be reduced. Because of
the economic expansion of the Vietnam era more jobs were available, but during the
same period many people who might otherwise have been unemployed dropped out of
the work force, or vacated jobs that became available to others, to avoid military service.
One way to keep from being drafted was to become a full-time student, which induced
many draft-age persons to go to college rather than risk military service in Vietnam.
Additionally, there was a substantial expansion in the manpower needs of the military,
with nearly 500,000 troops in Vietnam in 1969. Therefore, the unemployment rate was
compressed between more jobs and fewer labor force participants.
Unemployment can decrease because more jobs become available. It can also
decrease because the work force participation of individuals declines, in favor of
additional schooling, military service, or leisure. Unemployment is more than idle
resources; it also means that some households are experiencing reduced income. In
other words, unemployment is associated with a current loss of output, and with
reductions in income into the foreseeable future.
Economists classify unemployment into three categories by cause: (1) frictional,
(2) structural, and (3) cyclical. Frictional unemployment consists of search and wait
unemployment, caused by people searching for employment or waiting to take a job in
the near future. Structural unemployment is caused by a change in the composition of
output, changes in technology, or a change in the structure of demand (new industries
replacing the old). Cyclical unemployment is due to recessions (the downturns in the
business cycle). Structural unemployment is associated with the permanent loss of
jobs; cyclical unemployment is generally associated with only temporary losses of
employment opportunities.
Full employment is not zero unemployment; the full employment rate of
unemployment is the same as the natural rate. The natural rate of unemployment is
thought to be about 4% and is composed of portions of structural and frictional
unemployment. However, there is not complete professional agreement concerning the
natural rate; some economists argue that the natural rate today is about 5%. The
disagreement centers more on observation of the secular trend than on any particular
technical aspect of the economy (and there are those in the profession who would
disagree with this latter statement).
The reason that frictional and structural unemployment will always be observed is
that our macroeconomy is dynamic. There are always people entering and leaving the
labor force; each year there are new high school and college graduates and secondary
wage earners who enter and leave the market. There is also a certain proportion of
structural unemployment that should be observed in a healthy economy. Innovations
result in new products and better production processes, displacing old products and
production processes, and employees become unnecessary to staff the displaced and
less efficient technology. Therefore, the structural component of the natural rate is only
a fraction of the total structural rate in periods when displacement of older industries
results from something other than normal economic progress. Perhaps the best
example of this is the displacement of portions of the domestic steel and automobile
industries that resulted from predatory trade practices (i.e., some of the dumping
practices of Japan and others).
The level of output associated with full utilization of our productive
resources in an efficient manner is called potential output. Potential output is the
output of the economy associated with full employment. It is the level of employment
and output associated with being someplace on the production possibilities curve (from
E201). This level of production will become important to us in judging the performance
of the economy.
Full employment is not zero unemployment, and potential GNP is not total
capacity utilization (full production); such levels of production are destructive to the labor
force and the capital base, because people fulfill other roles (i.e., consumer, parent, etc.)
and capital must be maintained. The Nazis' slave labor camps (during World War II)
were examples of the evils of full production, where people were actually worked to
death.
Unemployment rates
The unemployment rate is the percentage of the work force that is
unemployed. The work force is about half of the total population. Retired persons,
children, those who are incapable of working, and those who choose not to participate
in the labor market are not counted in the labor force. Another way to look at it is that
the labor force consists of those persons who are employed plus those who are
unemployed but willing, able, and searching for work.
People who are employed may be either full-time or part-time employees. In
aggregate, the average number of hours worked by employees in the U.S. economy is
generally something just under forty hours per week (generally between 38 and 39
hours per week). This statistic reflects the fact that people have vacation time, are
absent from work, and may have short periods of less than forty hours available to them
due to strikes, inventories, or plant shut-downs.
Part-time employees are included in the work force. You are not counted as
unemployed unless you do not work and are actively pursuing work. Those who do not
have 40 hours of work (or the equivalent) available to them are classified as part-time
employees. At present about 6.5 million U.S. workers are involuntarily part-time and
about 12 million are voluntarily part-time; total part-time employment is up about 3
million from 1982.
The unemployment rate is calculated as:
UR = (Unemployed persons ÷ Labor force) × 100
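The calculation in Python, using the January 2006 figures (in thousands) from the labor market table later in this chapter:

def unemployment_rate(unemployed, labor_force):
    # Unemployment rate in percent.
    return 100 * unemployed / labor_force

print(round(unemployment_rate(7040, 150114), 1))   # 4.7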
There are problems with the interpretation of the unemployment rate.
Discouraged workers are those persons who have dropped out of the labor force
because they could not find an acceptable job (generally after a prolonged search). To
the extent there are discouraged workers who have dropped out of the work force, the
unemployment rate is understated. There are also individuals who have recently
lost jobs and are not interested in working, but do not wish to lose their unemployment
benefits. These individuals will typically go through the motions of seeking employment
to remain eligible for unemployment benefits but will not accept employment or make
any effort beyond the appearance of searching. This is called false search, and it serves
to overstate the unemployment rate.
There has yet to be any conclusive research demonstrating whether the
discouraged worker or the false search problem has the greater impact on the
unemployment rate. However, what evidence exists suggests that in recent years the
discouraged worker problem is the larger of the two, suggesting that the
unemployment rate may be slightly understated.
The following box provides a snapshot of available economic data concerning
employment aggregates:
Labor Market Data, 1996-2006 (seasonally adjusted, January)

Year   Civilian Labor    Employment      Unemployment    Unemployment
       Force (1000s)     Level (1000s)   Level (1000s)   Rate (%)
1996   132616            125125          7491            5.6
1997   135456            128298          7158            5.3
1998   137095            130726          6368            4.6
1999   139003            133027          5976            4.3
2000   142267            136559          5708            4.0
2001   143800            137778          6023            4.2
2002   143883            136442          8184            5.7
2003   145937            137424          8513            5.8
2004   146817            138472          8345            5.7
2005   147956            140234          7723            5.2
2006   150114            143074          7040            4.7

Source: Bureau of Labor Statistics
Okun's Law
As mentioned earlier, unemployment is not just a single dimensional problem.
Based on empirical observation an economist determined that there was a fairly stable
relation between unemployment and lost output in the macroeconomy. This relation
has a theoretical basis. As we move away from an economy in full employment, noninflationary equilibrium, we find that we lose jobs in a fairly constant ratio to the loss of
output. Okun's Law states that for each one percent (1%) the unemployment rate
exceeds the natural rate of unemployment (4%) there will be a gap of two and one-half
percent (2.5%) between actual GDP and potential GDP. In other words, employment is
a highly correlated with GDP. This is why it is not technically incorrect to look to the
unemployment rate to determine whether a recession has begun or stopped. It is also
true that unemployment tends to trail behind total output of the economy. In other
words, unemployment is a trailing indicator. As the economy recovers we could still see
worsening unemployment, but after a lag of a quarter or so, the improving economic
conditions will result in less unemployment.
This relationship permits some rough guesses about what is happening to total
output in the economy; however, it is only a rough guide, because unemployment is a
trailing indicator. In other words, as the economy goes into recession the last variable
to reflect the loss of output is the unemployment rate. Okun's Law, as it turns out, is
also very useful in formulating policy within the Keynesian framework which will be
developed later in the course.
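Okun's Law as stated above is simple enough to sketch in Python (the function name is ours):

# Each point of unemployment above the natural rate (4%) implies a
# GDP gap of 2.5 percent between actual and potential output.
def okun_gap(unemployment_pct, natural_rate=4.0, coefficient=2.5):
    return max(0.0, (unemployment_pct - natural_rate) * coefficient)

print(okun_gap(4.0))   # 0.0 -- at the natural rate there is no gap
print(okun_gap(6.0))   # 5.0 -- two points above: GDP about 5% below potential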
Burden of Unemployment
The individual burden of unemployment is not uniformly spread across the
various groups that comprise our society. There are several factors that have been
historically associated with who bears what proportion of the aggregate levels of
unemployment. Among the factors that determine the burden of unemployment are
occupation, age, race and gender.
An individual's occupational choice will affect the likelihood of becoming
unemployed. Those with low skill and educational levels will generally experience
unemployment more frequently than those with more skills or education. There are also
specific occupations (even highly skilled or highly educated ones) that may experience
unemployment due to structural changes in the economy. For example, with the decline
in certain heavy manufacturing, many skilled-trades persons experienced bouts of
unemployment during the 1980s. As educational resources declined in the 1970s and
again recently, many persons with a Ph.D. level education and certified teachers
experienced unemployment. However, unemployment for skilled or highly educated
occupations tends to be infrequent and of relatively short duration.
Age also plays a role. Younger people tend to experience more frictional
unemployment than their older, more experienced counterparts. As people enter the
work force for the first time, their initial entry puts them into the unemployed category.
Younger persons also tend to experience a longer duration of unemployment.
However, there is some evidence that age discrimination may present a problem for
older workers (the Age Discrimination in Employment Act covers persons over 40, and
it appears those over 50 experience the greatest burden of this discrimination).
Race and gender, unfortunately, are still important determinants of both
incidence and duration of unemployment. Most frequently the race and gender effects
are the result of unlawful discrimination in the labor market. There is also a body of
evidence that suggests there may be significant discrimination in the educational
opportunities available to minorities.
Edmund Phelps developed a theory, called the statistical theory of racism and
sexism, that sought to explain how discrimination could be eliminated as a determinant
of the burden of unemployment. His theory was that if there were not an ideological
commitment to racism or sexism, then if employers were forced to sample from
minorities they would find no difference between these employees' productivity
and the productivity of the majority. This formed the basis of affirmative action
programs. The lack of effectiveness of most of these programs suggests that the
racism and sexism that exists is ideological and requires stronger action than simple
reliance on economic rationality and sampling.
Inflation
The news media report inflation, generally, as increases in the CPI. This is not
technically accurate. Inflation is defined as a general increase in all prices (the
price level). The CPI does not purport to measure all prices; wholesale prices and
producer prices are not included in the consumer data. The closest we have to a
measure of inflation is the GNP deflator, which measures prices for the broadest range
of goods and services. Even this broader index is not a perfect measure, but it is all we
have, and some information (particularly when we know its shortcomings) is better than
perfect ignorance.
One of the more interesting bits of trivia concerning inflation is something called
the Rule of 70. The Rule of 70 gives a shorthand method of determining how long it
takes for the price level to double at current inflation rates. It states that the number of
years required for the price level to double is equal to seventy divided by the annual rate
of increase (i.e., 70 ÷ the annual percentage rate of increase, expressed as a whole
number). For example, with ten percent inflation the price level will double every seven
years (70/10 = 7).
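A quick Python sketch comparing the shorthand with the exact doubling time under compounding:

import math

def rule_of_70(annual_rate_pct):
    return 70 / annual_rate_pct

def exact_doubling(annual_rate_pct):
    # Solve (1 + r)**t = 2 for t.
    return math.log(2) / math.log(1 + annual_rate_pct / 100)

print(rule_of_70(10))                  # 7.0
print(round(exact_doubling(10), 2))    # 7.27 -- the shorthand comes close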
There are three theories of inflation that arise from the real conduct of the
macroeconomy: demand-pull, cost-push, and pure inflation. There is a fourth theory
which suggests that inflation has little or nothing to do with the real output of the
economy; this is called the quantity theory of money. Each of these theories will be
reviewed in the remaining sections of this chapter.
Demand - Pull Inflation
Using a naive aggregate supply/aggregate demand model, we can illustrate the
theory of demand-pull inflation. The following chapter will develop a more sophisticated
aggregate supply/aggregate demand model, but for present purposes the naive model
will suffice. The naive model has a linear supply curve and a linear demand curve,
much the same as the competitive industry model developed in the principles of
microeconomics course (or in A524). However, the price variable here is not the price
of a commodity, it is the price level (the CPI for want of a better measure) and the
quantity here is the total output of the economy, not some number of widgets.
[Diagram: naive model with an upward-sloping aggregate supply curve and a downward-sloping aggregate demand curve; the price level is on the vertical axis and total output on the horizontal axis]
In the naive aggregate demand/aggregate supply model, as aggregate demand
shifts to the right (increases), all prices increase. This increase in all prices is called
inflation. However, this increase in aggregate demand is also associated with an
increase in total output, and total output is associated with employment (remember
Okun's Law?). In other words, even though this increase in aggregate demand causes
inflation, it does not result in lost output, hence unemployment. Policy measures
designed to control demand-pull inflation will shift the aggregate demand curve to the
left (i.e., reduce aggregate demand), and this reduction in aggregate demand is
associated with a loss of output, hence increased unemployment.
Demand-pull inflation is therefore only a problem with respect to price level
instability. There is, however, another model of inflation which creates problems in
both price level instability and unemployment; that model is called cost-push inflation.
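The naive linear model is easy to solve numerically. A minimal Python sketch, with purely hypothetical coefficients, showing that a rightward demand shift raises both the price level and output:

# Naive linear model: demand P = a - b*Q, supply P = c + d*Q.
def equilibrium(a, b, c, d):
    q = (a - c) / (b + d)      # set demand price equal to supply price
    return q, c + d * q        # (total output, price level)

q0, p0 = equilibrium(a=200.0, b=1.0, c=20.0, d=1.0)   # initial equilibrium
q1, p1 = equilibrium(a=230.0, b=1.0, c=20.0, d=1.0)   # demand shifts right
print(q0, p0)   # 90.0 110.0
print(q1, p1)   # 105.0 125.0 -- both the price level and output rise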
Cost - Push Inflation
Again using a naive aggregate supply/aggregate demand approach, cost-push
inflation results from a particular change in the real activity of the macroeconomy: a
decrease in aggregate supply. A decrease in aggregate supply can occur for several
reasons. Government regulations which divert resources from productive uses,
increases in the costs of intermediate goods and factors of production, ad valorem
taxes, and sources of inefficiency (excess executive salaries, etc.) can all result in a
leftward shift of the aggregate supply curve.
The following diagram shows a shift to the left, or decrease, in the aggregate
supply curve.
[Diagram: the aggregate supply curve shifts leftward along a fixed aggregate demand curve; the price level rises and output falls]
The OPEC oil embargo may present the best case of cost-push inflation. As oil
prices doubled and then tripled, the costs of production that depended at least in
part on oil also dramatically increased. The dramatic increase in the price
of oil therefore shifted the aggregate supply curve to the left (a decrease), resulting in
cost-push inflation.
The general case is, as the price of any productive input increases, the
aggregate supply curve will shift to the left. This decrease in aggregate supply also
results in reduced output, hence unemployment. This is consistent with the economic
experience of the early 1980s during the Reagan Administration when the economy
experienced high rates of both inflation and unemployment.
It is also interesting to note that the experiences of the Nixon-Ford administration
and, later, the Carter administration were not unlike those of Reagan's era. In the
Nixon-Ford administration the 1973 Mid-East war resulted in very high oil prices, and
monetary policies resulted in rapidly increasing interest rates; together these created
what economists labeled stagflation - high rates of inflation together with high rates of
unemployment. Again, tracing the causes back to their origin leads to the
conclusion that the cost-push model provides an interesting, albeit over-simplified,
explanation of the Nixon-Ford experience.
Jimmy Carter fared no better. The double-digit inflation and high unemployment
of the early 1970s were repeated at the end of the decade. Even though Carter's moves
to deregulate the airline and trucking industries may have had long-term
positive results for the economy, foreign policy was about to intervene and create
essentially the same havoc as had plagued the previous regime. In 1979 the Iranian
revolution and the Russian invasion of Afghanistan created a foreign policy disaster for
the U.S. At the time of the Iranian crisis, Iran was one of this country's largest trading
partners. With the loss of this source of foreign demand for U.S. goods, and of the
second largest source of foreign oil, the shock resulted in a return to the same
double-digit inflation and high unemployment that had swept the Republicans out of
office just three years earlier - and again, the experience was not unlike the results
predicted by cost-push inflation models.
Pure Inflation
Pure inflation results from an increase in aggregate demand and a simultaneous
decrease in aggregate supply. For output to remain unaffected by these shifts in
aggregate demand and aggregate supply, the increase in aggregate demand must
be exactly offset by an equal decrease in aggregate supply.
The following aggregate supply/aggregate demand diagram illustrates the theory
of pure inflation:
[Diagram: the aggregate demand curve shifts right by the same amount that the aggregate supply curve shifts left; output is unchanged while the price level rises]
Notice in this diagram that aggregate supply shifted to the left, or decreased, by
exactly the same amount that the aggregate demand curve shifted to the right, or
increased. The result is that output remains exactly the same, but the price level
increased. Intuitively, this makes sense: with the loss of aggregate supply we would
expect an increase in prices, and with an increase in aggregate demand we would also
expect an increase in prices. As aggregate supply and aggregate demand move in
opposite directions, it is not perfectly clear what happens to output; in this case, with
equal changes in aggregate demand and supply, output should remain exactly the
same.
Putting the two models together, it is a roll of the dice whether the economy
grows or declines as the price level runs amok. If aggregate supply declines more
severely than aggregate demand increases, you will get stagflation. On the other hand,
economic growth, oddly enough, is still possible if the increase in aggregate demand
outpaces the decline in aggregate supply - an interesting quandary to say the least.
Quantity Theory of Money
The models of cost-push, demand-pull and pure inflation presented above, are
rather naive simplifications. These models suggest that there are causes of inflation
that are generated by the “real” rather than the monetary economy. In other words, it is
possible to create inflation through real shocks to the economy (Real Business Cycle
Theory from Tom Sargent and the Minnesota School). In fact, there may be pressures
that can influence both policy makers, and have implications for inflation if monetary
authorities make mis-steps.
The Monetarist School of economic thought points to another possible
explanation of inflation. These economists do not reject the idea that inflationary
pressures can occur because of an oil embargo or increases in consumer demand.
However, they argue that inflation cannot occur simply as a result of these
events. They are quick to point out that a change in a single price in the price index
market basket can give the appearance of inflation, when all that happened was a
change in the relative price of one commodity with respect to all others.
More of this theory will be discussed in the final weeks of the semester; however,
one point is necessary here. Inflation, in the monetarist view, can only occur if the
money supply is increased, which permits all prices to increase. If the money supply is
not increased there can be changes in relative prices - for example, oil prices can go
up - but there must be offsetting decreases in the prices of other commodities. A
sustained increase in all prices, therefore, is a failure of the Fed to
appropriately manage the money supply.
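The monetarist argument is conventionally summarized by the equation of exchange, MV = PQ (not written out in the text above, but standard). A minimal sketch, with purely hypothetical numbers:

# Equation of exchange MV = PQ, solved for the price level P = MV/Q.
def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

print(price_level(1000.0, 5.0, 500.0))   # 10.0
print(price_level(1100.0, 5.0, 500.0))   # 11.0 -- with V and Q fixed, only
                                         # money growth can raise all prices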
As oil prices rose from $15 a barrel to over $70 a barrel, we would have expected
to experience very significant inflation if the histories of the Nixon-Ford and Carter
administrations were instructive. However, so far at least, those histories have not been
repeated. What is interesting to note is that the price level has remained remarkably
stable in the face of the quadrupling of crude oil prices. There are two explanations for
this observation; together they illustrate the point of inflation and monetary economics.
The first explanation is rather straightforward and simple. After 9/11/01 the Federal
Reserve drove the discount rate down to 1%. The result was that this extremely easy
monetary policy prevented what should have been a serious recession. The
unemployment rate in August of 2001 was a mere 4.9% and immediately began a
significant increase in September. By December of 2002 the rate was over 6 percent,
seasonally adjusted. However, the Fed had brought interest rates to such a low point
that two dramatic changes occurred. We began a bull market in housing (interest rate
sensitive), and the declines in automobile production reversed (again, an interest rate
sensitive commodity). These two portions of the economy kept the whole thing from
disaster. As the United States became mired in military actions in Southwest Asia, the
dollar began to lose value. The loss of the dollar's value served to prevent even greater
penetration of domestic markets by Japanese and European competitors, hence again
keeping unemployment from becoming a more serious problem.
However, this monetary stimulus left the Fed with record low interest
rates at a time when Federal budget deficits began to run out of control. High Federal
debt levels held, in large measure, by foreigners resulted in further downward
pressure on the value of the dollar. This downward pressure on the dollar meant
upward pressure on the price of foreign goods. At the same time, China
was pegging the value of its currency to the dollar, and its exports to
the U.S. were increasing and driving rather dramatic economic growth in China. In
turn, the increasing economic prosperity in China resulted in increased demand for
everything in China, particularly commodities like gold, copper, and oil. The
prices of these commodities in the U.S. sky-rocketed because world demand
increased faster than supply and because of the loss of purchasing power of the dollar.
The tighter monetary policy which began some two years ago has produced a 4 1/4
percentage point increase in the discount rate, and has essentially stopped inflation in
its tracks. The quadrupling of the price of oil has had little effect on the overall price
level simply because of the quadrupling of interest rates.
However, the second reason is not to be entirely ignored. Over the period that oil
prices went up dramatically, the demand for oil in the U.S. did not keep pace. Increased
gas mileage, together with the loss of manufacturing, has meant that oil demand has
increased at lower rates than in recent decades in the U.S. As oil-intensive industries
become less important, the inflationary effects of oil price increases that were so
devastating in the 1970s are less so in the first decade of this century.
Effects of Inflation
Inflation impacts different people in different ways. Creditors and
those living on fixed incomes will generally suffer. However, debtors and those whose
incomes can be adjusted to reflect higher prices will not, and the latter group
may even benefit from higher rates of inflation.
If inflation is fully anticipated, and people can adjust their nominal income or their
purchasing behavior to account for it, then there will likely be no adverse effects.
However, if people cannot adjust their nominal income or consumption patterns, they
will likely experience adverse effects, the same as if they had experienced
unanticipated inflation. Normally, if you cannot adjust your income, are a creditor with a
fixed rate of interest, or are living on a fixed income, you will pay higher prices. The
result is that those individuals will see their standard of living eroded by inflation.
Debtors whose loans specify a fixed rate of interest typically benefit from
inflation because they can pay their loans off in the future with money that is worth less.
It is this paying of loans with money that purchases less that harms creditors. It should
come as no surprise that the double-digit inflation of fifteen years ago caused
subsequent loan contracts to often specify variable interest rates to protect creditors
from the erosive effects of unanticipated inflation.
Savers may also find themselves in the same position as creditors. If savings are
placed in long-term savings certificates that carry a fixed rate of interest,
inflation can erode the earnings on those savings substantially. Savers who
anticipate inflation will seek assets whose returns vary with the price level,
rather than risk the loss associated with inflation.
Inflation will affect savings behavior in another way. If a person fully
anticipates inflation, rather than save money now, consumers may acquire
significant debt at fixed interest rates to take advantage of the potential
inflationary leverage created by fixed rates. Rather than save now, consumers
spend now. Therefore, inflation typically creates expectations among people of
increasing prices, and if people increase their purchases, aggregate demand will
increase. An increase in aggregate demand will cause demand-pull inflation.
Therefore, inflationary expectations can create a spiraling of increased aggregate
demand and inflationary expectations that feed off one another. At the other
extreme, recessionary expectations may cause people to save, which reduces
aggregate demand, and another spiral effect can result (but downward). This is
what economists call rational expectations.
Rational expectations are the simple idea that people anticipate what is going to
occur in the economy. In other words, markets will have everything priced into
them because both buyers and sellers know everything there is to know about the
prices and availability of the goods and services in the economy. While this view
of expectations makes it easier to manipulate economic models, it is not very
realistic.
To account for expectations in economic behavior in a less extreme way, the theory
of adaptive expectations has been formulated. It is more likely that people will
not take extreme views of economic problems, including perfect knowledge. People
anticipate and react to relatively "sure things," generally in the near term, and
wait to see what happens. As economic conditions change, consumers and producers
change their expectations to account for these changes; in other words, they adapt
their expectations to current and near-term information about future economic
events.
The adaptive expectations model is supported by a substantial amount of
economic evidence. It appears that the overwhelming majority of the players in the
macroeconomy are adaptive in their expectations.
Unemployment Differentials
David A. Dilts, Mike Rubison, and Bob Paul, "Unemployment: which person's burden,
man or woman, black or white?" Ethnic and Racial Studies, Vol. 12, No. 1 (January
1989), pp. 100-114.
. . . The race and sex of the work force are significant determinants of the
relative burdens of unemployment. Blacks are experiencing a decreasing burden of
unemployment over the period examined while white females exhibit a positive time
trend (increasing unemployment over the period). The dispersion of unemployment
for blacks varies directly with the business cycle which suggests greater labor force
participation sensitivity by this group. . . . The dispersions of white female
unemployment vary countercyclically with the business cycle which is consistent with
the inherited wisdom concerning unemployment and macroeconomics.
. . . These results, together with the unemployment equation results, indicate
that the unemployment rate for white males is not sensitive to the fluctuations in the
business cycle nor do these data exhibit any significant time trend. These are rather
startling results. This evidence suggests that white males bear substantially the
same relative unemployment rates over all ranges of the business cycle. . . .
KEY CONCEPTS
Business Cycle
Peaks
Troughs
Seasonal trends
Secular trends
Unemployment
frictional
structural
cyclical
Full employment
Natural rate of unemployment
Potential output
Labor Force
part-time employment
discouraged workers
false search
Okun’s Law
Burden of Unemployment
Inflation v. Deflation
CPI
Rule of 70
Demand-pull
Cost-push
Pure inflation
Monetarists
Stagflation
Historical experiences in the U.S. with Stagflation
Impact of Inflation
Expectations
STUDY GUIDE
Food for Thought:
Using the basic supply & demand model, demonstrate cost-push, demand-pull, and
pure inflation.
Critically evaluate the three categories of unemployment; be sure to discuss the
problems with conceptualizing and measuring each.
Critically evaluate the qualitative and quantitative costs of both inflation and
unemployment. Which is worse? Why?
Sample Questions:
Multiple Choice:
Which of the following is most likely to benefit from a period of unanticipated inflation?
(assuming fixed assets and liabilities)
A. Those whose liabilities are less than their assets
B. Those whose liabilities exceed their assets, and whose loans are variable rate
C. Those whose liabilities exceed their assets, and whose loans are fixed rate
loans
D. None of the above will benefit
People who are unemployed due to a change in technology that results in a decline in
their industry fit which category of unemployment?
A. frictional
B. structural
C. cyclical
D. natural
True - False:
Seasonal variations in data are impossible to observe in annual data {TRUE}.
Cost-push inflation is often associated with increased unemployment {TRUE}
Chapter 4
Aggregate Supply & Aggregate Demand
The aggregate supply/aggregate demand model of the macroeconomy will be developed
in this chapter. This model is one of two models (the other is the Keynesian
Cross) of the U.S. macroeconomic system that we will develop in this course. The
aggregate demand and aggregate supply model is the main mode of analysis that
characterized the classical school of economic thought, and it provides some
useful insights. However, because the Keynesian Cross permits more direct analysis
of the multiplier effects and other economic phenomena, it will be the model
relied upon for the majority of the course.
Aggregate Supply and Aggregate Demand
The aggregate supply/aggregate demand model is a comparative static
(slice-of-time) model of the macroeconomy. Rather than observing changes during
each instant of a period of time (a dynamic model), we compare the economy in one
time period with the economy in a subsequent and distinct period. Its elegance
arises from the
fact that the model has foundations in microeconomics. In this view of the
macroeconomy we simply aggregate everything on the supply side to obtain an
aggregate supply curve and aggregate everything on the demand side to obtain an
aggregate demand curve. Where aggregate supply intersects aggregate demand we
observe a macroeconomic equilibrium. However, because the two functions are
aggregations, the horizontal axis does not measure the quantity of a particular good, it
measures GNP. The vertical axis is not a price in the sense of a microeconomic
market, but is the price level in the entire economy.
Aggregate Demand
The aggregate demand curve is a downward sloping function that shows the
inverse relationship between the price level and gross domestic output (GDP). The
reasons that the aggregate demand curve slopes down and to the right differ from the
reasons that individual market demand curves slope downward and to the right (i.e., the
substitution & income effects - these do not work directly with macroeconomic
aggregates, for among other reasons, we are dealing with all prices of all commodities,
not a single price of a single commodity, as in a microeconomic sense).
The reasons for the downward sloping aggregate demand curve are:
(1) the wealth or real balance effect,
(2) the interest rate effect, and
(3) the foreign purchases effect.
As the price level increases, with fixed incomes, the given amount of savings and
assets consumers have will purchase less. On the other hand, as the price level
decreases the savings and wealth consumers have will purchase more. The negative
relation between the price level and the purchasing power of savings (wealth or real
balances) is called the real balances or wealth effect.
Assuming a fixed supply of money, an increase in the price level will increase
interest rates. Increases in interest rates will reduce expenditures on goods and
services for which demand is interest sensitive (e.g., consumer durables such as
cars, and investment). This negative relation between the interest rate and
purchases is called the interest rate effect.
If the prices of domestic goods rise relative to foreign goods (an increase in the
price level), domestic consumers will purchase more foreign goods as substitutes,
assuming stable exchange rates for currencies. Remember from Chapter 2 that if
exports remain constant and imports increase, GNP declines (through net exports).
Therefore, as the price level increases, imports will also increase, ceteris
paribus. The foreign purchases effect is this propensity to increase purchases of
imports when the domestic price level increases.
Each of these effects describes a negative relation between the price level and
the aggregate demand for the total output of the economy. Therefore, the aggregate
demand curve will slope downward and to the right, as shown in the diagram below:
The determinants of aggregate demand are the factors that shift the aggregate
demand curve. These determinants are:
(1) consumer and producer expectations concerning the price level, and real
incomes,
(2) consumer indebtedness,
(3) personal taxes,
(4) interest rates,
(5) changes in technology,
(6) excess capacity in industry,
(7) government spending,
(8) net exports,
(9) national income earned abroad, and
(10) exchange rates.
The expectations of producers and consumers concerning real income or inflation
(including profits from investments in the business sector) will affect aggregate
demand. Inflationary expectations and anticipated increases in real income are
consistent with increased current expenditures. Should real income or inflation be
expected to fall, purchases may be postponed in favor of future consumption or
investment.
As personal taxes, interest rates, and indebtedness increase, aggregate demand
should decrease due to a general reduction in effective demand. Should any of
these three decrease, there should be an increase in aggregate demand due to the
increased ability to purchase output.
Changes in technology operate on aggregate demand in two distinct ways: the
creation of new products or new production processes, and the reduction of the
costs of production, resulting in less expensive output. The amount of excess
capacity in industry will also impact the demand for capital goods. If there is
substantial excess capacity, output can be increased without obtaining more
capital; on the other hand, if there is very little excess capacity, an expansion
of output will require purchasing more capital.
Government expenditures account for a significant proportion of GNP. If government
expenditures decrease, so will aggregate demand; if they increase, so will
aggregate demand. The same is true of national income earned abroad: as American
economic activity moves abroad, the demand for domestic output will generally
decline.
Exchange rates refer to how much of a foreign currency the U.S. dollar will
purchase. If the Japanese Yen loses value with respect to the U.S. dollar, then
Japanese goods become cheaper to American consumers. As the dollar gains value
relative to that foreign currency and buys more imports, consumers substitute
foreign goods for domestic goods, and aggregate demand decreases. As the dollar
loses value relative to a foreign currency such as the Yen, Japanese goods become
more expensive, American consumers substitute American goods for Japanese goods,
and aggregate demand increases.
These determinants of aggregate demand act together to shift the aggregate demand
curve. Several of these determinants may change at the same time, and possibly in
different directions. The actual observed change in an aggregate demand curve will
result from the net effects of these changes. For example, if the dollar gains one
or two percent in value relative to the Yen, this alone may cause a slight
decrease in aggregate demand. However, if government purchases increase by two or
three percent, this should offset any exchange rate tendency toward a reduction in
aggregate demand and shift the aggregate demand curve to the right.
Aggregate Supply
The aggregate supply curve shows the amount of domestic output available at each
price level. The aggregate supply curve has three ranges: the Keynesian range
(horizontal portion), the intermediate range (curved portion), and the classical
range (vertical portion). These ranges of the aggregate supply curve are
identified in the following diagram.
The Keynesian range of the aggregate supply curve shows that any output is
consistent with the particular price level (the intersection of the aggregate supply curve
with the price level axis) up to where the intermediate range begins. In other words,
wages and prices are assumed to be sticky (fixed) in this range of the aggregate supply
curve.
The classical range of the aggregate supply curve is vertical. Classical
economists believed that the aggregate supply curve becomes vertical at the full
employment level of output. In other words, any price level above the end of the
intermediate range is consistent with the full employment (potential) level of
GNP.
The intermediate range is the transition between the horizontal and vertical
ranges of the aggregate supply curve. As the macroeconomy approaches the full
employment level of output, the price level begins to increase as output increases
in this range.
The determinants of aggregate supply cause the aggregate supply curve to shift.
There are three general categories of determinants of aggregate supply:
(1) changes in input prices,
(2) changes in input productivity, and
(3) changes in the legal or institutional environment.
As input prices increase, aggregate supply decreases. For example, when the price
of oil increased dramatically in 1979-80, aggregate supply decreased because the
costs of production increased in general in the United States. In other words, for
the same level of production cost, we got less GNP. Should input prices decline,
we should expect to observe an increase in aggregate supply; in other words, for
the same level of production cost we get more GNP.
Changes in input productivity will also shift the aggregate supply curve. If labor
or capital becomes more productive then producers will receive more output for the
same cost of production. This can occur because of better quality resources or
because of the ability to use more efficient technology.
Changes in the legal or institutional environment will also shift aggregate
supply. One of the reasons that de-regulation is so popular in certain business
circles is that such changes in the legal environment will often result in lower
costs of production. To the extent that there are no diseconomies, this increases
aggregate supply. More efficient capital markets (e.g., the recently proposed
S.E.C. changes concerning, among others, the New York Stock Exchange), better
schools, better health care, and less crime all have the potential for increasing
aggregate supply through the institutional environment of business.
Macroeconomic Equilibrium
As mentioned earlier in this chapter, macroeconomic equilibrium in this model is
the intersection of aggregate supply and aggregate demand. The idea of equilibrium
in the macroeconomy is similar to that in a microeconomic market. Where aggregate
supply and aggregate demand intersect, if there is no force applied to disturb the
intersection (a change in a determinant), then there is no propensity for the
output and price levels to change from those determined by the intersection. The
following diagram portrays a macroeconomy in equilibrium. In this case the
intersection of aggregate supply and aggregate demand is in the Keynesian range.
Should aggregate demand increase up to where the intermediate range starts, only
output will change. Through the intermediate range, both output and the price
level will increase as aggregate demand increases. In the classical range, if
aggregate demand increases, output will remain the same, but the price level will
increase.
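This range-by-range logic can be summarized in a short sketch (a hypothetical
illustration, not part of the original text; the output thresholds q_k and q_f are
assumed values chosen only to mark where the ranges change):

    # Minimal sketch of comparative statics in the three aggregate supply ranges.
    # q_k = output where the Keynesian (horizontal) range ends; q_f = full
    # employment output where the classical (vertical) range begins.

    def effect_of_ad_increase(output, q_k=400, q_f=600):
        """What an increase in aggregate demand does at the current output."""
        if output < q_k:
            return "Keynesian range: output rises, price level unchanged"
        elif output < q_f:
            return "intermediate range: both output and the price level rise"
        else:
            return "classical range: price level rises, output stays at full employment"

    for q in (300, 500, 600):
        print(q, "->", effect_of_ad_increase(q))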
The following diagram shows a macroeconomy in equilibrium. The solid
aggregate demand curve is the initial equilibrium. The two dotted lines show an
increase in aggregate demand (AD1) and a decrease in aggregate demand (AD2).
The changes in aggregate supply are analytically only marginally more complicated
than those in aggregate demand. An increase in aggregate supply is simply a shift
of the entire curve to the right (downward). As aggregate supply shifts downward
along the aggregate demand curve in the Keynesian and intermediate ranges, the
price level falls and output increases. However, in the classical range a decrease
in aggregate supply changes neither the price level nor total output. The
following diagram portrays an increase in aggregate supply, the line labeled
(AS1); a decrease in aggregate supply, the line labeled (AS2); and the original
aggregate supply curve, the solid line.
There is a more complicated view of changes in equilibrium in the aggregate supply
and aggregate demand model called the Ratchet Effect. The Ratchet Effect arises
when there is a decrease in aggregate demand, but producers are unwilling to
accept lower prices (rigid prices and wages). Rather than accept the lower price
levels resulting from a decrease in aggregate demand, producers will decrease
aggregate supply. Therefore, there is a ratcheting of the aggregate supply curve
(a decrease in the intermediate and Keynesian ranges) which keeps the price level
the same, but with reduced output. In other words, there can be increases in the
price level (forward) but no decreases (backward) because producers will not
accept decreases (price rigidity). The same is argued to exist for wages in the
labor market; in other words, unions will resist decreases in wages associated
with a decrease in aggregate demand, hence they, too, will place downward pressure
on aggregate supply.
The following diagram illustrates the ratchet effect:
An increase in aggregate demand from AD1 to AD2 moves the equilibrium from
point a to point b with real output and the price level increasing. However, if prices are
inflexible downward, then a decline in aggregate demand from AD2 to AD1 will not
restore the economy to its original equilibrium at point a. Instead, the new equilibrium
will be at point c with the price level remaining at the higher level and output falling to
the lowest point. The ratchet effect means that the aggregate supply curve has shifted
upward (a decrease) in both the Keynesian and intermediate ranges.
KEY CONCEPTS
Aggregate Demand
Real balance effect
Interest rate effect
Foreign purchases effect
Determinants of Aggregate Demand
Aggregate Supply
Keynesian Range
Classical Range
Intermediate Range
Determinants of Aggregate Supply
Equilibrium in Aggregate Supply and Aggregate Demand
Ratchet Effect
STUDY GUIDE
Food for Thought:
Explain and demonstrate the operation of the determinants of aggregate supply and
aggregate demand.
Critically evaluate both the Keynesian and Classical ranges of the aggregate supply
curve.
Is the ratchet effect plausible? Explain.
Why does the aggregate demand curve slope downward? Why do we find ranges in
the aggregate supply curve? Explain.
Sample Questions:
Multiple Choice:
The aggregate demand curve is most likely to shift to the right (increase) when there is
a decrease in:
A. the overall price level
B. the personal income tax rates
C. the average wage received by workers
D. consumer and business confidence in the economy
A short-run increase in interest rates on consumer loans may cause a decrease in
aggregate demand. What would we expect to observe, if there is no ratchet effect?
A. In the classical range only a reduction in prices
B. In the Keynesian range only a reduction in output
C. Both A and B are correct
D. Neither A nor B is correct
True - False:
All economists are convinced that the ratchet effect exists in today's economy. {FALSE}
One of the major reasons that the aggregate demand curve slopes downward is the real
balances effect. {TRUE}
Chapter 5
Classical and Keynesian Models
The purpose of this chapter is to extend the analysis presented in Chapter 4.
Based on the foundations of the relatively simple aggregate supply and aggregate
demand model both the Classical and Keynesian theories of macroeconomics will be
developed and compared in this chapter.
Introduction
The Classical theory of employment (macroeconomics) traces its origins to the
nineteenth century and to such economists as John Stuart Mill and David Ricardo. The
Classical theory dominated modern economic thought until the middle of the Great
Depression when its predictions simply were at odds with reality. However, the work of
the classical school laid the foundations for current economic theory and a great
intellectual debt is owed to these economists.
During the beginning of the 1930s, economists in both Europe and the United States
recognized that the theory of the day was inadequate to explain how a depression
of such magnitude and duration could occur. After all, the miracle of free market
capitalism was supposed to always result in a return to prosperity after short
periods of correction (recession). For a long-term disequilibrium to be observed
was both disconcerting and fascinating. It became very obvious by 1935 that the
market mechanisms were not going to self-adjust and bring the economy out of a
very deep depression.
John Maynard Keynes, an English mathematician and economist, is the father of
modern macroeconomics. His book, The General Theory (1936), changed how economists
would examine macroeconomic activity for the next six decades (to the present).
Keynes' work laid aside the notion that a free enterprise market system can
self-correct. He also provided the paradigm that explained how recessions can
spiral downward into depression without active government intervention to correct
the observed deficiencies in aggregate demand. Some of Keynes' ideas were
original; however, he borrowed heavily from the Swedish School, in particular
Gunnar Myrdal, in his explanation of the fact that there is no viable mechanism in
our system of markets that provides for correction of recessionary spirals.
Many economists, on both sides of the Atlantic, were working on the problem of
why the classical theory had failed so miserably in explaining the prolonged, deep
downturn and in offering policy prescriptions to cure the Great Depression. In some
ways it resembled an intellectual scavenger hunt. As soon as Keynes had worked out
the theory, he literally rushed to print before someone could beat him to the
punch, so to speak. The result is that The General Theory is not particularly well
written and has been subject to criticism for the rushed writing; however, its
contributions to understanding the operation of the macroeconomy are unmistakable.
The Classical Theory
The classical theory of employment (macroeconomics) rests upon two fundamental
principles: (1) underspending is unlikely to occur, and (2) if underspending
should occur, the wage-price flexibility of free markets will prevent recession by
adjusting output upward as wages and prices decline.
What is meant by underspending is that private expenditures will be insufficient
to support the current level of output. The classicists believed that spending in
amounts less than sufficient to purchase the full employment level of output is not likely.
They also believed that even if underspending should occur, then price/wage flexibility
will prevent output declines because prices and wages would adjust to keep the
economy at the full employment level of output.
The classicists based their faith in the market system on a simple proposition
called Say's Law. Say's Law in its crudest form states that "supply creates its
own demand." In other words, every level of output creates enough income to
purchase exactly what was produced. However, as sensible as this proposition may
seem, there is a serious problem: there are leakages from the system. The most
glaring omission in Say's Law is that it does not account for savings. Savings
give rise to gross private domestic investment, and the interest rate is what
links savings and investment. However, there is no assurance that savings and
investment must always be in equilibrium. In fact, people save for far different
reasons than investors purchase capital.
Further, the classicists believed that both wages and prices were flexible. In
other words, as the economy entered a recession both wages and prices would decline
to bring output back up to pre-recession levels. However, there is empirical
evidence demonstrating that producers will cut back on production rather than
lower prices, and that factor prices rarely decline in the face of recession. The
classicists believed
that a laissez faire economy would result in macroeconomic equilibria through the
unfettered operation of the market system and that only the government could cause
disequilibria in the macroeconomy.
One need only look to the automobile industry of the last ten years to understand
that wage - price flexibility does not exist. Automobile producers have not lowered
prices in decades. When excess inventories accumulate, the car dealers will offer
rebates or inexpensive financing, but they have yet to offer price reductions. There has
been some concession bargaining by the unions in this industry, but even where wages
were held down, it is rare that a union accepts a nominal wage cut.
Modern neo-classical macroeconomics takes a far less rigid view. Even though the
neo-classicists have come to realize that the market system has its imperfections,
they believe that government should be only the economic stabilizer of last
resort. Further, even though there is now recognition that the lauded wage-price
flexibility is unlikely, by itself, to correct major downturns in economic
activity, the neo-classicists stubbornly hold to the view that government must
stay out of the economic stabilization business. The one exception they seem to
allow is the possibility of a major external shock to the system; otherwise they
claim there is sufficient flexibility to prevent major depressions, with only very
limited government responses (primarily through monetary, rather than fiscal,
policy). In other words, the differences between the Keynesians and the
neo-classicists are very subtle (the magnitude of government involvement) and
focus primarily on the starting point of the analysis (Keynes with recession, the
neo-classicists with equilibrium).
Keynesian Model
Keynes recognized that full employment is not guaranteed, because interest
motivates both households and businesses differently - just because households save
does not guarantee businesses will invest. In other words, there is no guarantee that
leakages will result in investment injections back into the system. Further, Keynes was
unwilling to assume that self-interest in a market system guaranteed that there would be
wage-price flexibility. In fact, the empirical evidence suggested that wages and prices
exhibited a substantial amount of downward rigidity.
Under the Keynesian assumptions there is no magic in the market system. The
mechanisms that the classicists thought would guarantee adjustment back toward the
full employment equilibrium simply did not exist. Therefore, Keynes believed that
government had to be proactive in assuring that underspending did not spiral the
economy into depression once a recession began.
Keynes' views were revolutionary for their time. However, it must be
remembered that the Great Depression was an economic downturn unlike anything
experienced during our lifetimes. Parents and grandparents, no doubt, have related
some of the traumatic experiences they may have endured during the Depression to
their children and grandchildren (perhaps you have heard some of these stories).
However, such economic devastation is something that is nearly impossible to imagine
unless one has lived through it. One-quarter of the labor force was unemployed at
points during the Depression. The U.S. economy had almost no social safety net, no
unemployment compensation, little in the way of welfare programs, no social security,
no collective bargaining, and very small government. Our generations have gotten
used to the idea that there is something between us and absolute poverty: there
are programs to provide income during times of unemployment, generally for a
period sufficient to find alternative employment. Even though these programs may
have a personal significance, they were intended to prevent future
demand-deficiency depressions.
The array of entitlement programs, particularly unemployment, welfare, and social
security, provides an automatic method to keep spending from spiraling into
depression. Once recessionary pressures begin to build in the economy, the loss of
employment does not eliminate a household's consumption as soon as savings are
depleted. Unemployment compensation and then welfare will keep the lights on (and
I & M employees working), food on the table (to the relief of those working at
Kroger's and Scotts), and clothes on the backs of those in need (keeping people
working at Walmart through Hudson's). Further, the government has been proactive
in stabilizing economic downturns since the beginning of World War II, thereby
providing us with the expectation that something will be done to get things back
in order as soon as possible once a recession starts.
The government's ability to tax and to engage in deficit spending provides the
flexibility to deal with underspending that the classicists presumed the market
system could handle by deflating the economy. Without the automatic stabilizers
and the government's flexibility to deal with serious underspending, the economy
could theoretically produce the same results that it did in the 1930s.
Aggregate supply and aggregate demand can now be expanded to include savings and
investment in the analysis to make for a more complete model. In so doing, we can
also create a more powerful analytical tool.
The Consumption Schedule
In beginning the development of the Keynesian Cross model, we return to
aggregate supply and aggregate demand. Along the vertical axis we are going to
measure expenditures (consumption, government, or investment). Along the horizontal
axis we are going to measure income (disposable income). At the intersection of these
two axes (the origin) we have a forty-five degree line that extends upward and to the
right. What this forty-five degree line illustrates is every point where expenditures and
income are equal.
The consumption schedule intersects the 45 degree line at 400 in disposable
income; this is also where the savings function intersects zero (in the graph
below the consumption function). At this point (400), all disposable income is
consumed and nothing is saved. To the left of the intersection of the consumption
function with the 45 degree line, the consumption function lies above the 45
degree line. The distance between the 45 degree line and the consumption schedule
is dissavings, shown in the savings schedule graph by the savings function falling
below zero. Dissavings means people are spending out of their savings or are
borrowing (negative savings). To the right of the intersection of the consumption
function with the 45 degree line, the consumption schedule is below the 45 degree
line. The distance that the consumption function is below the 45 degree line is
called savings, shown in the bottom graph by the savings function rising above
zero.
This analysis shows how savings is a leakage from the system. Perhaps more
importantly the analysis also shows that there is a predictable relation between
consumption and savings. What is not consumed is saved, and vice versa. However,
there is more to this than savings plus consumption must equal income.
The Marginal Propensity to Consume (MPC) is the proportion of any increase in
disposable income that is spent on consumption (if an entire increase in income is
spent, MPC is 1; if none is spent, MPC is zero). The Marginal Propensity to Save
(MPS) is the proportion of any increase in disposable income that is saved. The
relation between MPC and MPS is that MPC + MPS = 1; in other words, any change in
income will be either consumed or saved. This relation is demonstrated graphically
in the following diagram:
The slope (rise divided by the run) of the consumption function is the MPC and the
slope of the savings function is the MPS. The slope of the consumption function is
.8 (8/10) and the slope of the savings function is .2 (2/10). Any change in income
will be either spent or saved. If ten dollars of additional income is obtained,
then $2 ($10 times .2) will be saved and $8 ($10 times .8) will be spent on
consumption.
The marginal propensities to save and to consume deal with changes in one's
income. However, the average propensities to consume or save deal with what
happens to total income. The Average Propensity to Consume (APC) is total
consumption divided by total income; the Average Propensity to Save (APS) is total
savings divided by total income. Again (just like the marginal propensities), if
income can be either saved or consumed (and nothing else), then the following
relation holds: the average propensity to consume plus the average propensity to
save must equal one (APC + APS = 1). For example, if total income is $1000, the
average propensity to consume is .95, and the average propensity to save is .05,
then the economy will experience $950 in consumption (.95 times $1000) and $50 in
savings ($1000 times .05).
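These identities can be checked with a few lines of arithmetic (a minimal sketch
using only the figures from the two examples above):

    # Minimal sketch of the propensity identities, using the figures in the text.
    mpc, mps = 0.8, 0.2              # slopes of the consumption and savings functions
    delta_income = 10.0
    print(delta_income * mpc, delta_income * mps)   # 8.0 consumed, 2.0 saved
    assert abs((mpc + mps) - 1.0) < 1e-9            # MPC + MPS = 1

    apc, aps = 0.95, 0.05            # average propensities
    income = 1000.0
    print(income * apc, income * aps)               # 950.0 consumption, 50.0 savings
    assert abs((apc + aps) - 1.0) < 1e-9            # APC + APS = 1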
The non-income determinants of consumption and saving cause the
consumption and savings functions to shift. The non-income determinants of
consumption and saving are: (1) wealth, (2) prices, (3) expectations concerning future
prices, incomes and availability of commodities, (4) consumer debts, and (5) taxes.
In general, it has been empirically observed that the greater the amount of wealth
possessed by a household, the more of its current income will be spent on
consumption, ceteris paribus. All other things equal, the more wealth possessed by
a household, the less its incentive to accumulate more. Conversely, the less
wealth possessed by a household, the greater the incentive to save. An Italian
economist, Franco Modigliani, observed that this general rule varies somewhat by
the stage of life a person is in. The young (in their twenties) tend to save for
homes and children; from the late twenties through the forties, savings are less
evident as children are raised; and with the empty nest comes saving for
retirement. This is called the Life Cycle Hypothesis.
An increase in the price level has the effect of causing the consumption function
to shift downward. As prices increase, the real balances effect (from the previous
chapter) becomes a binding constraint. As the value of wealth decreases, so too does
the command over goods and services, so consumption must fall (and savings
increase). If the price level decreases, then we would expect consumption to increase
(and savings to fall).
The expectations of households concerning future prices, incomes and
availability of commodities will also impact consumption and savings. As households
expect price to increase, real incomes to decline or commodities to be less available,
current consumption will increase (and current savings decline). If, on the other hand,
households expect incomes to increase, prices to fall, or commodities to become more
generally available, current consumption will decline (and savings will increase).
Consumer indebtedness will also affect consumption and savings. If consumers are
heavily indebted, say a third of their income goes to debt maintenance, then
current consumption will decline as debts are paid off. However, if indebtedness
is relatively low, consumers will consume more of their current income, perhaps
even engaging in dis-savings (borrowing) to consume more currently.
Taxation operates on both the savings and consumption schedules in the same
way. Because taxes are generally paid partly from savings and partly from current
income, an increase in taxes will cause both consumption and savings to decline. On
the other hand, if taxes are decreased, then both the savings and consumption
functions will increase (shift upwards).
Investment
Investment demand is a function of the interest rate. The following diagram shows
an investment demand curve. The investment demand curve is downward sloping, which
suggests that as the interest rate increases, investment decreases. The reason for
this is relatively simple. If the expected net return on an investment is six
percent, it is not profitable to invest when the interest rate is equal to or
greater than six percent. A firm must be able to borrow the money to purchase
capital at an interest rate that is less than the expected net rate of return for
the investment project to be undertaken. Therefore, there is an inverse relation
between the interest rate and investment; and the interaction of the interest rate
with the expected rate of return determines the amount of investment.
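The decision rule in this paragraph can be written compactly (a minimal sketch;
the list of project returns is hypothetical, and the six percent rate mirrors the
example above):

    # Minimal sketch: invest only when the expected net rate of return exceeds
    # the interest rate at which funds can be borrowed.

    def undertake(expected_return, interest_rate):
        """True if the project is profitable to finance at this interest rate."""
        return expected_return > interest_rate

    interest_rate = 0.06
    for r in (0.03, 0.06, 0.09):     # hypothetical expected net rates of return
        print(f"return {r:.0%}: invest? {undertake(r, interest_rate)}")
    # Only the 9% project is undertaken at a 6% rate; as the interest rate falls,
    # more projects qualify -- hence the downward sloping investment demand curve.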
The determinants of investment demand are those things that will cause the
investment demand curve to shift. The determinants of investment demand are: (1)
acquisition, maintenance & operating costs of capital, (2) business taxes, (3)
technology, (4) stock of capital on hand, and (5) expectations concerning profits in
future.
Investment demand depends on whether the expected net rate of return is higher
than the interest rate. Therefore, anything that increases the expected net return
will shift the investment demand curve to the right (increase); anything that
causes the expected return to fall will shift the investment demand curve to the
left (decrease). As the acquisition, maintenance, and operating costs of capital
increase, the net expected return will decrease, ceteris paribus, thereby shifting
the demand curve to the left. If the acquisition, maintenance, and operating costs
decline, we would expect a higher rate of return on investment, and therefore the
demand curve shifts to the right (increase).
Business taxes are a cost of operation. If business taxes increase, the expected
net (after-tax) return will decline, shifting the investment demand curve to the
left. If business taxes decrease, the expected net return on investment will
increase, thereby increasing investment demand.
Changes in technology will also shift the investment demand curve. More efficient
technology will generally increase expected net returns and shift investment
demand to the right (increase). By decreasing production costs or improving
product quality, technological improvements allow competitive advantages to be
reaped; this has been one of the most important determinants of investment since
World War II.
The stock of capital goods on hand will also impact investment demand. To the
extent that producers have a large stock of capital goods on hand, investment demand
will be dampened. On the other hand, if producers have little or no inventory of capital
goods, then investment demand may increase to restore depleted stocks of capital.
Business investment decisions are heavily influenced by expectations. Expectations
concerning the productive life of capital, its projected obsolescence, and future
sales and profits will also impact investment decisions. For example, expectations
that technological breakthroughs may make current computer equipment less
competitive may dampen current investment demand. Further, if competitors are
tooling up to enter your industry, you may be hesitant to invest in more capital
if profit margins will be cut by the entrance of competitors.
Autonomous v. Induced Investment
Autonomous investment is investment that occurs which is not related to the level
of Gross Domestic Product. Investment that is based on population growth, expected
technological progress, changes in the tax structure or legal environment, or on
fads is generally not a function of the level of total output of the economy and
is called autonomous investment. In the following diagram, the autonomous
investment function is a horizontal line that intercepts the investment axis at
the level of autonomous investment.
Induced investment is functionally related to the level of Gross Domestic Product.
The following diagram has an investment function that slopes upward (increases as
GDP increases). Induced investment is that investment that is "induced" by
increased business activity. In short, induced investment is investment caused by
increased levels of GDP.
Throughout American economic history the level of investment has been very
volatile. In fact, much of the variation in the business cycle can be attributed
to the instability of investment in the United States. There are several reasons
for this instability, including: (1) variations in the durability of capital, (2)
irregularity of innovation, (3) variability of profits, and (4) the expectations
of investors.
The durability of most capital goods means that they have an expected productive
life of at least several years, if not decades. Because of the durability and
expense of capital goods, their purchase can be easily postponed. For example, a
bank may re-decorate and patch up an old building, rather than build a new one,
depending on its business expectations and current financial position.
Perhaps the most important contributor to the instability of investment in the
post-World War II period is the irregularity of innovation. With the increase in
basic knowledge comes the ability to develop new products and production
processes. During World War II there was heavy public investment in basic research
in medicine and the pure sciences. These public expenditures were intended for
military use, but many of the resulting discoveries had important civilian
implications for new products and better production methods. Again, in the late
1950s and early 1960s an explosion of basic research occurred that led to
commercial advantages. The Russians launched Sputnik and gave the Western world a
wake-up call that it was behind in some important technical areas; the government
again spent money on education and basic research.
For the private sector to invest there must be some expectation of profits flowing
from that investment. Much of the decline in private investment during the Great
Depression was because private investors did not expect to be able to make a profit in
the economic environment of the time. In the late 1940s, automobile producers knew
that profits would be nearly guaranteed because no new private passenger cars were
built in the war years. There was very significant investment in plant and equipment in
the auto industry in those years (mostly to convert from war to peace-time production).
The expectations of business concerning profits, prices, technology, the legal
environment, and most everything affecting their business are simply forecasts.
Because even the best informed forecasts are still guesswork, there is substantial
variability in business conditions expectations. Because these expectations vary
substantially across businesses and over time, there should be significant
variability in investment decisions.
KEY CONCEPTS
Classical Theory
Say’s Law
Keynesian Model
price-wage rigidity
No guarantee of full employment
Income v. Expenditures
Consumption Function
Marginal Propensity to Consume v. Marginal Propensity to Save
Determinants of consumption and savings
Investment Demand
Determinants of investment demand
Investment instability in U.S.
Induced investment v. autonomous investment
STUDY GUIDE
Food for Thought:
Compare and contrast the classical model of the economy with the Keynesian model.
Are these models really that different? Explain.
Develop the consumption and savings schedules. Fully explain the assumptions
underlying the models, their mechanics, and their implications.
What is the relation between savings and investment? Explain.
Critically evaluate Say's Law.
Sample Questions:
Multiple Choice:
If you receive $40 in additional income and you save $4, what is your marginal
propensity to consume?
A. .4
B. .9
C. 1.0
D. None of the above
Which of the following contribute to investment instability?
A. Irregularity in innovation
B. Variable investor expectations
C. Purchases of capital durable goods are postponable
D. All of the above
True - False:
MPC + MPS = 1 {TRUE}
The forty-five degree line in the consumption function model shows each point at which
disposable income and consumption are equal. {TRUE}
Chapter 6
Equilibrium in the Keynesian Model
The purpose of this chapter is to extend the analysis of the previous chapter. The
basic Keynesian Cross model will be developed, and we will examine how equilibrium
can be achieved in the macroeconomy. In Chapter 7, which follows, we will complete
the analysis of the Keynesian view of the macroeconomy.
Equilibrium and Disequilibrium
Equilibrium GDP is that output that will create total spending just sufficient to
buy that output (where the aggregate expenditure schedule intersects the 45 degree
line). Further, once achieved, an equilibrium in a macroeconomy has no propensity
to change unless there is a shock to the system, or some variable changes to cause
a disequilibrium. Disequilibrium occurs where spending is insufficient for the
level of output (a recessionary gap) or too high for the level of output (an
inflationary gap).
The neo-classicists view the primary macroeconomic problem as one of maintaining
equilibrium. Their analysis of the system begins with equilibrium because, unless
there is some external intervening problem, they believe this is the state of
nature for the macroeconomy. Keynesians, on the other hand, begin their analysis
with disequilibrium because they believe this is the natural state for a
macroeconomy. The constant changes associated with policies, technological change,
and autonomous influences will impact the economy periodically and cause it to
move out of equilibrium. In fact, this is the major analytical difference between
the neo-classical and Keynesian economists.
Keynes' Expenditures - Output Approach
From Chapter 2, remember that one of the ways that GDP can be calculated is using
the identity Y = C + I + G + X, where Y = GDP, C = Consumption, I = Investment,
G = Government expenditures, and X = Net exports (exports minus imports). This
provides the formula by which we can complete the model we began in the previous
chapter. Consider the following diagram:
Remember that the 45 degree line is each point where spending is exactly equal
to GDP. The above figure shows a simple economy with no public or foreign sectors.
We begin the analysis by adding investment to consumption, and obtaining Y = C + I.
The equilibrium level of GDP is indicated above where C + I is equal to the 45 degree
line. Investment in this model is autonomous and the amount of investment is the
vertical distance between the C and the C + I lines.
Keynes' Leakages - Injections Approach
The same result obtained in the expenditures - output approach above can be
obtained using another method. Remember that APC + APS = 1 and MPC + MPS = 1; this
suggests that leakages from the system are also predictable. The leakages -
injections approach relies on the equality of investment and savings at
equilibrium in a macroeconomic system (I = S).
The reason that the leakages - injections approach works is that planned
investment must equal savings. The amount of savings is what is available for
gross private domestic investment. When investors use the available savings, the
leakages (savings) from the system are injected back into the system through
investment. However, this must be planned investment.
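Both approaches can be verified with a small numerical sketch (the consumption
function and the level of autonomous investment are hypothetical values, not
figures from the text):

    # Minimal sketch: equilibrium GDP in a private closed economy, two ways.
    # Assumed consumption function C = 80 + 0.8Y (MPC = 0.8), autonomous I = 40.

    a, mpc = 80.0, 0.8          # consumption intercept and slope
    investment = 40.0           # planned autonomous investment

    # Expenditures - output approach: Y = C + I  =>  Y = (a + I) / (1 - MPC)
    y_star = (a + investment) / (1 - mpc)

    # Leakages - injections approach: S(Y) = -a + (1 - MPC) * Y, set S = I
    savings_at_y_star = -a + (1 - mpc) * y_star

    print(round(y_star, 2), round(savings_at_y_star, 2))   # 600.0 and 40.0

Both routes pin down the same equilibrium because what is not consumed (the
leakage) is exactly what planned investment injects back at that level of output.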
Unplanned investment is the cause of disequilibrium. Inventories can increase
beyond what was planned, and inventories are investment (the stock of unsold
output). When inventories accumulate, there is output that is not purchased, hence
reductions in spending, which is recessionary; on the other hand, if inventories
are depleted below the intended level, this is inflationary because of the excess
spending in the system. Consider the following diagram, where savings is equal to
investment I1. If there is unplanned investment, the savings line is below the
investment line at the lowest level of GDP, marked by the vertical line labeled
UPI (Unplanned Investment); if inventories are depleted beyond the planned level,
then the savings line is above I1, as illustrated at the highest level of GDP,
marked by the vertical line labeled Dep. Inv. (Depleted Inventory).
The original equilibrium is where I1 is equal to S. If the unplanned inventory
accumulation or unplanned depletion of inventory becomes planned investment, the
analysis changes. If we experience a decrease in planned investment, we move down
to I2, with a reduction in GDP (recession), just as with an increase in unplanned
investment; if an increase in investment is observed, it will be observed at I3,
which is expansionary, and this is similar to an unplanned depletion of
inventories (which could also be inflationary).
Re-spending
The interdependence of most developed economies results in an observed re-spending
effect when there is an injection of spending into the economy. This re-spending
effect is called the multiplier. We will provide a more detailed analysis of the
multiplier effects in the following chapter; however, the re-spending effect
represented by the multiplier is introduced here to provide a full understanding
of the model.
If there is an increase in expenditures, there will be a re-spending effect. In
other words, if $10 is injected into the system, then it is income to someone.
That first person will spend a portion of the income and save a portion. If MPC is
.90, then the first individual will save $1 and spend $9.00. The second person
receives $9.00 in income and will spend $8.10 and save $0.90. This process
continues until the amounts left to be spent become negligible. Instead of summing
all of the income, expenditures, and/or savings, there is a short-hand method of
determining the total effect -- this is called the multiplier, which is
1/(1 - MPC), or equivalently 1/MPS. The significance of any increase in
expenditures is that it will increase GDP by a multiple of the original increase
in spending.
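The re-spending rounds can be summed directly and compared with the short-hand
formula (a minimal sketch using the $10 injection and the MPC of .90 from the
paragraph above):

    # Minimal sketch: summing the re-spending rounds of a $10 injection with
    # MPC = 0.9, then checking against the multiplier formula 1/(1 - MPC) = 1/MPS.

    mpc = 0.9
    injection = 10.0

    total, spending = 0.0, injection
    while spending > 0.01:        # stop once a round falls below a cent
        total += spending         # each round of spending is income to someone
        spending *= mpc           # of which MPC is re-spent; MPS leaks to saving

    print(round(total, 2))                   # about 99.9, approaching 100.0
    print(round(injection / (1 - mpc), 2))   # short-hand: $10 times a multiplier of 10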
The re-spending effect and the leakages - injections approach to GDP provide for a
curious paradox, called the paradox of thrift. To accumulate capital, it is often
the policy of less developed countries to encourage savings, to reduce the
country's dependence on international capital markets. What often happens is that
as a society tries to save more, it may actually save the same amount; this is the
paradox of thrift. The reason that savings may remain the same is that unless
investment moves up as a result of the increased savings, all that happens is that
GDP declines. The higher rate of savings applied to a smaller GDP results in the
same amount of savings if GDP declines proportionally with the increase in the
savings rate. If investment is autonomous, then there is no reason to believe that
investment will increase simply because the savings rate increased. In fact,
because of the reduced re-spending that follows the increased leakages, savings
will generally remain the same as before the rate went up.
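The logic can be made concrete with a minimal sketch (assuming, for simplicity, a
consumption function proportional to income, C = (1 - s)Y, with hypothetical
numbers; this is an illustration, not an example from the text):

    # Minimal sketch of the paradox of thrift in the simplest Keynesian model:
    # with C = (1 - s) * Y and autonomous investment I, equilibrium requires
    # S = I, so Y = I / s. A higher saving rate s lowers GDP but leaves
    # realized savings stuck at I.

    investment = 40.0
    for s in (0.10, 0.20):                 # hypothetical saving rates
        y = investment / s                 # equilibrium GDP
        savings = s * y                    # realized savings at that GDP
        print(f"s = {s:.0%}: GDP = {y:.0f}, savings = {savings:.0f}")
    # s = 10%: GDP = 400, savings = 40
    # s = 20%: GDP = 200, savings = 40   -- more thrift, same savings, less GDP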
Full Employment
Simply because C + I + G intersects the 45 degree line does not assure utopia. The
level of GDP associated with the intersection of the C + I + G line with the 45
degree line may be an equilibrium that is not at the full employment level of GDP.
The full employment level of GDP may be to the right or to the left of the current
equilibrium. Where this occurs you have, respectively, (1) a recessionary gap or
(2) an inflationary gap. In either case, there is a macroeconomic disequilibrium
that will generally require appropriate corrective action (as will be described in
detail in the following chapter on fiscal policy). Both forms of disequilibrium
can be illustrated using the expenditures - output approach. Consider the
following two diagrams:
Recessionary Gap
In the above diagram, the dashed line labeled Full Employment GDP is the level of
GDP associated with potential GDP, or full employment. The distance between the
C + I line and the 45 degree line along the dashed indicator line is the
recessionary gap. The dotted line shows the current macroeconomic equilibrium.
Okun's Law (Chapter 3) provides some insight into what this means; remember that
every 2.5% of lost potential GDP is associated with 1% unemployment above the full
employment level. Therefore, this recession represents lost output and
unemployment in the fixed proportion of 1% to 2.5%.
In other words, if we know what the unemployment rate is, then theoretically we
should also know where we are with respect to potential GDP. For example, if we
have an 8% unemployment rate and current GDP is $1000 billion, we can calculate
the full employment, non-inflationary level of GDP quite simply. Using Okun's law,
we know that for every 1% the unemployment rate exceeds the 4% natural rate, we
lose 2.5% of potential GDP. With 8% unemployment we know that we are currently 10%
below potential GDP (2.5% times 4 = 10%). To calculate potential GDP one needs
only a little 8th grade algebra, to wit:
0.9 × GDPpotential = $1,000 billion
Dividing both sides of the equation by 0.9 yields:
GDPpotential = $1,000 billion / 0.9, or
GDPpotential = $1,111.11 billion
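The same calculation can be wrapped in a one-line function (a minimal sketch that
assumes, as the text does, a 4% natural rate of unemployment and an Okun
coefficient of 2.5):

    # Minimal sketch of the Okun's law calculation above: each point of
    # unemployment above the 4% natural rate costs 2.5% of potential GDP.

    def potential_gdp(current_gdp, unemployment, natural_rate=0.04, okun=2.5):
        gap = okun * (unemployment - natural_rate)   # fraction of potential lost
        return current_gdp / (1 - gap)

    print(round(potential_gdp(1000.0, 0.08), 2))     # 1111.11 billion, as in the text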
Therefore, Okun's law turns out to be a handy road map from where the economy
currently is to where it should be. Most of what will be done in this course with
respect to disequilibrium will be concerned with recessionary gaps. Inflation is
best left to monetary policy; the record shows that stagflation has occurred, and
therefore a better framework than simple recessionary and inflationary gaps is
needed. Recessionary gaps have been appropriately dealt with using Keynesian
economics, but inflation is a monetary phenomenon. However, in order to present a
balanced view, a brief examination of the inflationary gap in the rudimentary
Keynesian model will be presented.
Inflationary Gap
An inflationary gap can arise in the Keynesian view of the world through actions
that take the economy past the potential GDP level of output. Consider the
following diagram:
Again, the full employment (non-inflationary) level of GDP is indicated by the
dashed line labeled Full Employment GDP. The distance between the C + I line and
the 45 degree line along the dashed indicator line is the inflationary gap. The
dotted indicator line shows the current macroeconomic equilibrium. In this case,
there is too much spending in the economy, or some other (similar) problem, that
has resulted in an inflated price level. A reduction in GDP is necessary to
restore price level stability and to eliminate the excess spending.
Each of the various C + I and 45 degree line intersections, if multiplied by the
appropriate price level, will yield one point on the aggregate demand curve.
Shifts in aggregate demand can be shown by holding the price level constant and
showing increases or decreases in C + I in the Keynesian Cross model. Both models
can be used to analyze essentially the same macroeconomic events. However, from
this point on we will concentrate our efforts on mastering the Keynesian Cross.
With these basic tools in hand, we can now turn our attention to economic
stabilization and the use of fiscal policy for such activities. In the following chapter, the
Keynesian model will be applied to problems of economic stabilization using taxes,
government expenditures, and the combination of the two to eliminate recessionary and
inflationary gaps.
KEY CONCEPTS
Macroeconomic Equilibrium v. Macroeconomic Disequilibrium
Expenditures - Output Approach
Leakages - Injections Approach
Planned v. Unplanned Investment
Re-spending
Multiplier Effect
Paradox of Thrift
Economic Development constraints
Full Employment
Potential GDP
Natural Rate of Unemployment
Recessionary Gap
Inflationary Gap
STUDY GUIDE
Food for Thought:
Critically evaluate the paradox of thrift.
What specifically is meant by a recessionary gap, and how does this differ from an
inflationary gap? Explain.
What insight does the multiplier provide for the Bush tax cuts of 2003? Explain.
Sample Questions:
Multiple Choice:
With a marginal propensity to consume of .8 the simple multiplier is:
A. 0.25
B. 2.00
C. 4.00
D. None of the above
An inflationary gap is:
A. The amount by which C+I+G exceeds the 45 degree line at or above the full
employment level of output
B. The amount by which C+I+G is below the 45 degree line at the full employment
level of output
C. The amount by which the full employment level of output exceeds the current level
of output
D. None of the above
True - False
A recessionary gap is how much the aggregate expenditure line must increase to attain
full employment without inflation. {TRUE}
The equilibrium level of output is that output whose production will create total spending
just sufficient to purchase that output. {TRUE}
Chapter 7
John Maynard Keynes and Fiscal Policy
The Great Depression demonstrated that the economy is not self-correcting as
alleged by many of the classical economists of the time. The revolutionary Keynesian
view that government must take a proactive role in stabilization of the business cycle
focused in large measure on the powers of the federal government to tax and to make
expenditures. That power and its uses are what comprise the majority of the discussion
in this chapter. Together the government's activities involving taxing and spending are
called fiscal policy. The purpose of this chapter is to examine the fiscal policy tools and
their effectiveness.
The Keynesian Revolution
What is not well-understood even today is what a revolution John Maynard
Keynes brought to economic reasoning. The classical model was, unfortunately, not
capable of explaining much of what was being observed in the first third of the twentieth
century in the Western world. Keynes and other, primarily European, economists were
observing and writing concerning the inability of the classical model to explain the
financial shocks which rocked both the U.S. economy and many of the European
economies both pre- and post-World War I. What is perhaps somewhat disturbing is that
some seventy years after the appearance of the General Theory there are those who
still cling to some of the more peculiar aspects of the old classical reasoning. Markets
are imperfect, and, left unrestrained, they have tendencies which produce less than
satisfactory results. Yet in this system populated by imperfect people, we have not yet
devised ways of allocating scarce resources which provide entirely satisfactory results for
everyone concerned. In our second-best world we struggle to get by, and it is the
recognition of the fact that we do live in a second-best world that is important to our
practical progress. In fact, it is that very recognition that is probably the greatest
achievement of Keynes’ work within the economic profession.
Chapter 1 of the General Theory of Employment, Interest and Money by John
Maynard Keynes sums up the economic quandary of the day, and is replicated here,
since it is less than a full page. It is this chapter that basically sums up what had been
transpiring, and its results for the economic system and for the profession. It also nicely
portrays the real meaning of the distinction between the classical economists and the
neo-classicals or, as they are now called, Keynesian economists. The following box
provides interesting insight into the controversy:
Chapter 1, The General Theory
I have called this book the General Theory of Employment, Interest and Money,
placing the emphasis on the prefix general. The object of such a title is to contrast
the character of my arguments and conclusions with those of the classical* theory of
the subject, upon which I was brought up and which dominates the economic
thought, both practical and theoretical, of the governing and academic classes of this
generation, as it has for a hundred years past. I shall argue that the postulates of the
classical theory are applicable to a special case only and not to the general case, the
situation which it assumes being a limiting point of the possible positions of
equilibrium. Moreover, the characteristics of the special case assumed by the
classical theory happen not to be those of the economic society in which we actually
live, with the result that its teaching is misleading and disastrous if we attempt to
apply it to the facts of experience.
* “The classical economists” was a name invented by Marx to cover Ricardo and
James Mill and their predecessors, that is to say for the founders of the theory which
culminated in the Ricardian economics. I have become accustomed, perhaps
perpetrating a solecism, to include in “the classical school” the followers of Ricardo,
those, that is to say, who adopted and perfected the theory of Ricardian economics
including (for example) J. S. Mill, Marshall, Edgeworth and Prof. Pigou.
From this beginning it can be seen that the immodest title “General Theory” had
a specific meaning within the context of the inherited economic theory of the day, and
not something more conceited.
Discretionary Fiscal Policy
The Employment Act of 1946 formalized the federal government's responsibility
for promoting economic stability. The economic history of the first half of the twentieth
century was a relatively stormy series of financial panics prior to World War I, a
relatively stagnant decade after World War I, and the Depression of the 1930s. After
World War II, a new problem arose called inflation. It should therefore come as no
surprise that the Congress wished to assure that there was a pro-active role for the
government to smooth out these swings in the business cycle.
The government has several roles to fulfill in society. Its fiscal powers are
necessary to provide essential public goods. Without the federal government, national
defense, the judiciary, and several other critical functions could not be provided for
society. There is also the ebb and flow of politics. The Great Society of Lyndon B.
Johnson represents the public opinion of the 1960s; today's political agenda seems to
be substantially different. The result is that as political opinion changes so will the
government's pattern of taxation and expenditures.
The use of the government's taxing and spending authority to stabilize the economy
is called discretionary fiscal policy. Taxation and spending by the federal
government have been used, with some frequency, to smooth out the business cycle. In
times of underspending, the short-fall is made up by government spending or reductions
in taxes. In times of inflation, cuts in spending or increases in taxes have been used to
cool off the economy. However, in recent years the discretionary fiscal policies of the
federal government have become extremely controversial. The first step in
understanding this controversy is to understand the role of fiscal policy in economic
stabilization.
Milton Friedman and others have argued that there is no role for discretionary
fiscal policy. Friedman’s position is that much of the business cycle is the result of
governmental interference, and that the long lags before fiscal policy becomes
operational make it only a potential force for mischief, not a hope for stability. In
other words, Friedman believes that the classical economists, while oversimplifying the
argument, were basically correct about keeping government out of the economy.
Simplifying Assumptions
As was discussed in E201, Introduction to Microeconomics, assumptions are
abstractions from reality. The utility of these abstractions is to eliminate many of the
complications that have the potential to confuse the analyses and to simplify the
presentation of the concepts in which we are interested. It must be remembered that
the assumptions underlying any model determine how good an approximation of reality
that model is. In other words, a good model is one that is a close approximation of the
real world.
To analyze the macroeconomy using the Keynesian Cross some simplifying
assumptions are necessary. We will assume that all investment and net exports are
determined by factors outside of GDP (exogenously determined). It is also assumed
that government expenditures do not initially impact private decisions and that all taxes
are lump-sum personal taxes (with some exogenous taxes collected, i.e., customs duties,
etc.). It is also assumed that there are no monetary effects associated with fiscal policy,
that the initial price level is fixed, and that any fiscal policy actions impact only the
demand side of the economy; in other words, fiscal policy is not offset by, nor does it
impact, the monetary side of the economy. It is also assumed that there are no foreign
effects of domestic fiscal policy.
The Goals of Government Expenditures
The primary and general goal of discretionary fiscal policy is to stabilize the
economy. Often other goals, involving public goods and services as well as political aims,
are included in the discretionary aspects of fiscal policy. However, for present purposes we
are concerned only with government expenditures used to stabilize the economy (taxes
will be examined elsewhere in this chapter).
To remedy a recession the government must spend an amount that will exactly
make up the short-fall in spending associated with that recession. The government can
also mitigate inflation by reducing government expenditures. However, the amount of
increased expenditures necessary to bring the economy back to full employment is
subject to a multiplier effect (the same is true of decreases in government
expenditures). Therefore, the government must have substantial information about the
economy to make fiscal policy work effectively. To determine the proper value of the
multiplier the fiscal policy makers must know either the Marginal Propensity to Save or
to Consume. Further, the policy makers will need to know the current level of output
and what potential GDP is (potential GDP is that output associated with full
employment). In a practical sense, in the near term the government may have
reasonably accurate information upon which to base forecasts to conduct fiscal policy.
For present purposes, we will assume the government has all the necessary information
at hand to conduct fiscal policy.
The Simple Multiplier
When the government increases expenditures, the effect on the economy is
more than the initial increase in government expenditures. In fact, this is also true of
investment and consumption expenditures, not just government expenditures. When
there is an increase in expenditures, through an increase in government expenditures,
those expenditures become income for someone. The recipients will save a portion of
that income and spend the rest, which then becomes income to someone else. The
result of this chain-reaction re-spending is that there is an inverse relation between the total
amount of re-spending and the marginal propensity to save. This is the re-spending
effect discussed in Chapter 6.
The multiplier is the reciprocal of the marginal propensity to save:
The multiplier = 1/MPS
Because MPS + MPC = 1, there is an equivalent expression: 1/MPS = 1/(1 - MPC). The
multiplier is the short-cut method of determining the total impact of an increase or
decrease in total spending in the economy. For example, if the government spends $10
more and the marginal propensity to consume is .5, then the multiplier is 2 and the total
increase in spending resulting from the increase of $10 in government expenditures is $20.
This is called the simple multiplier because the only leakage in the re-spending
system is savings. In reality there are more leakages. People will use a
portion of their income to buy foreign goods (imports), and will have to pay income
taxes. These are also leakages and would be added to the denominator in the
multiplier equation. The Council of Economic Advisors tracks the multiplier that
contains each of these other leakages, called the complex multiplier. The complex
multiplier has remained relatively stable over the past couple of decades and is estimated to
be about 2.0.
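The multiplier arithmetic can be verified with a few lines of Python (a sketch; the
figures repeat the examples in this section):

    def simple_multiplier(mpc):
        # The simple multiplier is the reciprocal of the marginal propensity to save.
        return 1.0 / (1.0 - mpc)

    print(simple_multiplier(0.5))    # 2.0 -> a $10 increase in G yields $20 of spending
    print(simple_multiplier(0.75))   # 4.0
    print(simple_multiplier(0.8))    # 5.0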
The following diagram presents a case of recession that will be eliminated by
increasing government expenditures by just enough to eliminate the recession, but not
to create inflation. Assume that our current level of GDP is $350 billion, that we know
full employment GDP is $430 billion, and that we wish to eliminate this recession using
increases in government expenditures. We also know that MPC is .75 and therefore MPS is .25.
[Diagram: Keynesian Cross showing the recessionary gap and the shift from C+I+G1 to C+I+G2]
With an MPC of .75 we know the multiplier is 4 (1/.25). We also know that we
must obtain another $80 billion in GDP to bring the economy to full employment. The
distance between the forty-five degree line and the C+I+G1 line at $430 billion is $20
billion, which is our recessionary gap in expenditures. Therefore, to close this gap we must
spend $20 billion, and the multiplier effect turns this $20 billion increase in government
expenditures into $80 billion more in GDP. Another way to calculate this: we are
$80 billion short of full employment GDP, we know the multiplier is 4, so we divide $80
billion by the multiplier of 4 and find the government must spend an additional $20 billion.
The leakages approach yields the same results.
An increase of $20 billion in government expenditures moves the investment-plus-
government-expenditures line from I+G1 to I+G2, a total of $20 billion on the
expenditures axis, but because of the re-spending effects, this increase results in $80
billion more in GDP, from $350 billion to $430 billion.
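The same calculation can be sketched in Python (names are illustrative; the figures
are those of the example above):

    def required_spending_increase(gdp_shortfall, mpc):
        # Divide the GDP shortfall by the simple multiplier to get the needed change in G.
        multiplier = 1.0 / (1.0 - mpc)
        return gdp_shortfall / multiplier

    gap = 430.0 - 350.0                              # $80 billion short of full employment
    print(required_spending_increase(gap, 0.75))     # 20.0 -> spend $20 billion more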
Taxation in Fiscal Policy
The government can close a recessionary gap by cutting taxes just as effectively
as it can by increasing government expenditures (bearing in mind that there will be
differences in the lag between the actual enactment of law and the impact of the policy
on the economy; in general, taxes show up later in effect than do expenditures).
Assume that it is a lump-sum tax the government will use (lump-sum meaning it is the
same amount of tax regardless of the level of GDP). The lump-sum tax must be
multiplied by the MPC to obtain the change in consumption; such taxes are
also paid proportionately from savings. So the effect is not the same as is observed
with government expenditures: the tax cut is multiplied by the MPC, with the remainder
coming from savings, so there is no immediate change in expenditures equal to the full
amount of the tax. The result is that the impact of a change in taxes is always less than
the impact of a change in expenditures. In other words, the taxation multiplier is always
the simple multiplier minus one, or:
taxation multiplier = (1/MPS) - 1
Consider the following diagrammatic presentation of the impact of a decrease in
taxes on an economy.
If full employment GDP is $600 billion and we are presently at $500 billion with
an MPC of .8, then if we are going to increase GDP by $100 billion we must cut taxes
by $25 billion. The simple multiplier with an MPC of .8 is 1/.2 = 5; but the taxation
multiplier is the simple multiplier minus one, hence 4. The decrease in taxation
necessary to increase GDP by $100 billion is the $100 billion divided by 4 or $25 billion.
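A sketch of the taxation-multiplier arithmetic, using the figures from this example:

    def taxation_multiplier(mpc):
        # One less than the simple multiplier.
        return 1.0 / (1.0 - mpc) - 1.0

    def required_tax_cut(gdp_shortfall, mpc):
        return gdp_shortfall / taxation_multiplier(mpc)

    print(taxation_multiplier(0.8))        # 4.0
    print(required_tax_cut(100.0, 0.8))    # 25.0 -> cut taxes by $25 billion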
The major difference between increasing expenditures or decreasing taxes is
that the multiplier effect for taxation is less. In other words, to get the same effect on
GDP you must decrease taxes by more than you would have had to increase
expenditures.
Balanced Budget Multiplier
An alternative policy is to increase taxes by exactly the amount you increase
expenditures. This balanced budget approach can be used to expand the economy.
Remember that the simple multiplier results in 1/MPS times the increase in
expenditures, but the taxation multiplier is one less than the simple multiplier. In other
words, if you increase taxes by the same amount as expenditures, GDP will increase
by the amount of the initial increase in government expenditures. That is because only
the initial expenditure increases GDP and the remaining multiplier effect is offset by
taxation. Therefore, the balanced budget multiplier is always one.
For example, if full employment GDP is $700 billion, and we are presently at
$650 billion, with an MPC of .75, then the simple multiplier is 4 (1/.25) and the taxation
multiplier is 3 [(1/.25) - 1]; therefore the government must spend $50 billion and increase
taxes by $50 billion to increase GDP by $50 billion.
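The three multipliers can be compared side by side; a short sketch under the chapter's
lump-sum tax assumptions:

    mpc = 0.75
    simple = 1.0 / (1.0 - mpc)       # 4.0
    taxation = simple - 1.0          # 3.0
    # $50 billion of new spending less $50 billion of new taxes:
    change_in_gdp = 50.0 * simple - 50.0 * taxation
    print(change_in_gdp)             # 50.0 -> the balanced budget multiplier is one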
Tax Structure
Taxation is always a controversial issue. Perhaps the most controversial of all
tax issues concerns the structure of taxation. Tax structure refers to the burden of the
tax. Progressive taxation is where the effective tax rate increases with ability to pay.
Regressive taxation is where the effective tax rate increases as ability to pay decreases.
A proportional tax structure is where a fixed proportion of ability to pay is taken in taxes.
In general, consumption is more greatly affected by taxation when the tax is
regressive, and savings are impacted more when the tax is progressive. Therefore,
the distribution of the tax across income groups will have a variable impact on
consumption and savings.
At present, the federal income tax structure is nearly proportional; most ad
valorem taxes (tobacco, etc.) are regressive. State income tax structures also vary
substantially. States like Kansas, California and New Jersey have mildly progressive
income taxes. States like Indiana and South Carolina use gross income tax schemes
that tend to be very regressive. Therefore, the current tax structures are not neutral
with respect to their re-distributional effects across income groups. In total, the taxes
collected across the federal, state and local level are at best proportional and are
probably slightly regressive. Probably the most regressive of these taxes is the
gasoline tax.
Automatic Stabilizers
During the New Deal period several social welfare programs were enacted. The
purpose of these programs was to provide the economy with a system of automatic
stabilizers to help smooth business cycles without further legislative action. Among
these programs were:
(1) progressive income taxes,
(2) unemployment compensation, and
(3) government entitlement programs.
Since the end of the Carter administration these automatic stabilizers have
become controversial. President Clinton moved in his first term in office to eliminate
some of these programs, but it was really the Republican Congress and governors like
Tommy Thompson in Wisconsin who eliminated much of the then-existing welfare
programs. Since 2002 much of this “welfare reform” has also come under fire for not
having produced the results that were advertised. In 2002 the movie “Bowling for
Columbine” by Michael Moore brought many of these issues to an international audience.
The idea behind a progressive income tax was basic fairness. Those with the
greatest wealth and income have the greatest ability to pay and generally receive more
from government. As recessions occur, it is that segment of the population with the
greatest wealth that will have resources upon which to draw to pay taxes; the poor will
generally be impacted the most by high levels of unemployment, and recessions
generally leave them with very little ability to pay.
Unemployment compensation is paid for in every state of the union by a payroll
tax. The payroll tax is generally less than one percent of the first $10,800 of payrolls.
Most states also impose an experience rating premium; that is, those companies that
have laid people off in the last year will pay a higher rate based on their experience.
The preponderance of this tax is paid during periods of expansion and is placed in a
trust fund. Unemployment benefits are then paid from this trust fund as the economy
enters a recession. The effect is that money is taken out of the system during
expansion and injected during recession which dampens the top of the cycle (peaks)
and eases the bottom of the cycle (trough).
Most social welfare programs have essentially the same effect, except that the
expansions tend not to be dampened as much because the funding comes from general
budget authority rather than a payroll tax. The significant increases in the proportion of
poor people during recessions, however, do not add as much to the downward spiral of
underspending as would otherwise have been observed in the absence of these
entitlement programs.
Problems with Fiscal Policy
There are several serious problems with fiscal policy as a method of stabilizing
the macroeconomy. Among these problems are (1) fiscal lags, (2) politics, (3) the
crowding-out effect, and (4) the foreign sector. Each of these will be addressed, in turn,
in the following paragraphs.
Fiscal Lag
There are numerous lags involved with the implementation of fiscal policy. It is
not uncommon for fiscal policy to take 2 or 3 years to have a noticeable effect, after
Congress begins to enact corrective fiscal measures. These fiscal lags fall into three
basic categories.
There is a recognition lag. The recognition lag is the amount of time for policy
makers to realize there is an economic problem and begin to react. Administrative lags
are how long it takes to have legislation enacted and implemented. Operational lags
are how long it takes for the fiscal actions to affect economic activities.
Because of the typical two to three years it takes for fiscal policy to have its intended
effects, fiscal actions may cause as many problems as they cure. For example, it is not
uncommon for the Congress to cut taxes because of a perceived recession that
subsequently ends within months of the enactment of the legislation. When the effects
of the fiscal policies actually reach the economy, it may be in a rapid expansion, and the
tax cut or increase in government expenditures adds to inflationary problems.
Politics, Crowding-out, the Foreign Sector and Fiscal Policy
Often politics overwhelms sound economic reasoning in formulating fiscal
policies. Public choice economists claim that politicians maximize their own utility by
legislative action, and are little concerned with the utility of their constituents. Perhaps
worse is the fact that most bills involve log-rolling and negotiations. Special interests
often receive benefits simply because they pay many of the election costs, and the
interests of these lobbyists may be inconsistent with the best interests of the nation as a
whole. The end result is that politics confounds the formulation of policy designed to
deal with technical ills in the economy.
The neo-classicists have long argued that government deficits (often associated
with fiscal policy) result in increased interest rates that crowd out private investment.
There is little empirical evidence that demonstrates the exact magnitude of this
crowding-out effect, but there is almost certainly some small element of it.
The neo-classicists also argue that there is another problem with government
borrowing to fund deficit financing of fiscal policies. This problem is called Ricardian
Equivalence. David Ricardo hypothesized that the deficit financing of government debt
had the same effect on GDP as increased taxes. To the extent that capital markets are
not open (to foreign investors) the argument is plausible; however, in open economies
there is little empirical evidence to support this view.
There are also problems that result from having an open economy. The most
technical of these problems is the net export effect. An increase in the interest rate
domestically (associated with a recession, or with an attempt to control inflation) will
attract foreign capital, but this increases the demand for dollars which increases their
value with respect to foreign currencies. As the value of the dollar increases it makes
U.S. goods more expensive overseas and foreign goods less expensive domestically.
This results in a reduction of net exports, hence a reduction in GDP.
There have also been shocks to the U.S. economy that have their origins outside
of the United States and are difficult if not impossible to address with fiscal policy. The
Arab Oil Embargo is a case in point. The United States and Holland supported
Israel in its war with the Arab states in the early 1970s. The problem was we also had
treaty obligations to some of the Arab states. Because of our support, the Arab states
embargoed oil shipments to the United States and Holland, which had the result of
increasing domestic oil prices and decreasing aggregate supply, hence driving up the
price level. This all occurred because of American Foreign Policy, but little could be
done with fiscal policy to offset the problems for aggregate supply caused by the
embargo.
KEY CONCEPTS
Social Welfare Programs
1946 Employment Act
Politics and change
Expenditures
Expansionary v. Contractionary Fiscal Policy
Political Goals
Public Goods and Services
Taxation
Expansionary v. Contractionary Fiscal Policy
Multipliers
Simple
Taxation
Balanced Budget
Tax Structure
Proportional
Regressive
Progressive
Automatic Stabilizers
Progressive Income Taxes
Unemployment Compensation
Entitlement Programs
Fiscal Lags
Recognition
Administrative
Operational
Politics and Fiscal Policy
Log-rolling
Public Choice Economics
Government Deficits
Crowding-out
Ricardian Equivalence
Open Economy
STUDY GUIDE
Food for Thought:
Develop the expenditures - output model and show an increase (decrease) in taxes to
close an inflationary (recessionary) gap. Now, do the same thing using increases and
decreases in government expenditures. Do the exercise one more time using the
balanced budget approach. [do this exercise using various MPCs].
Critically evaluate fiscal policy as an economic stabilization policy.
Critically evaluate the Ricardian Equivalence Theorem, and be careful to explain its
implications for the national debt.
Which is most reliable as a fiscal policy tool, taxes or expenditures? Defend your
answer.
Sample Questions:
Multiple Choice:
With an inflationary gap of $100 million and a large budget deficit which has become a
serious political issue and an economy with an MPC of .95, what would you do to close
the gap?
A. Increase taxes $5 million
B. Decrease expenditures $5 million
C. Increase taxes $100 million and decrease expenditures $100 million
D. None of the above
With a recessionary gap of $70 million and an MPS of .1, which of the following policies
would close the gap?
A. Increase taxes and expenditures by $70 million
B. Increase expenditures by $7 million
C. Decrease taxes by $7.8 million
D. All of the above will work
True - False:
The crowding - out effect is the theory that a government deficit raises interest rates and
absorbs resources that could have been used for private investment. {TRUE}
Automatic stabilizers, such as unemployment compensation, provide counter cyclical
relief from economic instability without additional government action. {TRUE}
Chapter 8
Money and Banking
The “real” economy has been the subject of most of the analysis to this point.
There is, however, another part of this story. That other part of this story is money.
Notice to this point we have assumed away any difficulties that may arise because of
the impact of policy on monetary aggregates. This chapter is the first of three which will
develop monetary economics.
In primitive, tribal societies the development and use of money occurs only after
that society reaches a size and complexity where barter is no longer a viable method of
transacting business. Barter, the trading of one good or service for another, requires a
coincidence of wants. When one individual has something another wants, and vice
versa, trade can be arranged because there is a coincidence of wants.
Larger, more complex social orders generally require the division of labor and
specialization, which in turn, increases the number of per capita market transactions.
When individuals live in a highly interdependent society, most necessities of life are
obtained through market transactions. In a modern industrialized country it is not
uncommon for an individual to make more than a dozen transactions per day. This
volume of business makes barter nearly impossible. The result is that societies will
develop money to facilitate the division of labor and specialization that provide for higher
standards of living.
The purpose of this chapter is to introduce the reader to money and to the
banking system. This chapter will provide the basic definitions essential to
understanding our complex monetary and banking systems. The following chapters will
extend the analyses to demonstrate how money is created and how the monetary
system is used to stabilize the fluctuations in the business cycle.
Functions of Money
There are three functions of money, these are:
(1) a medium of exchange,
(2) a measure of value, and
(3) a store of value.
Each of these functions will be examined, in turn, in the following paragraphs.
Money serves as a medium of exchange. It is generally accepted as "legal
tender" or something of general and specified value, such that, people have faith that
they can accept it for payment, because they can use it in exchange without loss of
value. Because barter requires a coincidence of wants, trade occurs only when two
people have different commodities that the other is willing to accept in trade for their
own wares. A barter economy makes exchange difficult; it may take several trades in
the market before you could obtain the bread you want for the apples you have.
Money solves the barter problem. If you have apples and want bread, you simply
sell the apples for money and exchange the money for bread. If barter persists it may
take a dozen or more transactions to turn apples into bread. In other words, money is
the grease that lubricates modern, sophisticated economic systems.
Money is also a measure of value. Without money as a standard by which to
gauge worth, value would be set by actual trades. The value of a horse in eighteenth
century Afghanistan could be stated in monetary units in the more modern areas of the
country. However, the nomads that wandered the northern plains of that country could
tell you in terms of goats, carpets, skins, and weapons what the range of values was
for horses. However, there were as many prices of horses as there were combinations
of goods and services that could be accepted in exchange for the animal. Money
permits the value of each commodity to be stated in simple terms of a single and
universally understood unit of value.
Money is also a store of value. Money can be saved with little risk, with virtually
no chance of spoilage, and little or no cost. Money is far easier to store than are
perishable foodstuffs, or bulky commodities such as coal, wool or flour. To store money
(save) and later exchange it for commodities is far more convenient than having to store
commodities for future use, or to have to continually go through barter exchanges.
Together, these functions vest in money a critical role in any complex modern
economy. Our prices are stated in terms of money, our transactions are facilitated by
money, and we can store the things we wish to consume in the future by simply saving
money. In fact, money may not make the world go 'round, but it certainly permits the
economic world to go 'round much more smoothly.
The Supply of Money
The supply of money has a very interesting history in both U.S. economic history
and world economic history. Several historians note that one of the contributing factors
to the fall of the Roman Empire was that there was significant deflation in the third and
fourth centuries. The reason for this was that money, in those days, was primarily
coinage minted of gold, silver, and copper. As gold and silver were traded for
commodities from the Orient there was a flow of coinage out of the West. In addition,
"barbarians" were constantly raiding Roman territory, and it was the gold and silver that
they carried back with them (for trade). Further, as the population grew at a faster rate
than the availability of precious metals, the money supply fell relative to the need for it to
make the economy function efficiently. This rapid deflation, added to an extremely
maldistributed income and a loss of productive resources, resulted in a rapidly declining
economy after the second century A.D. The Roman economy collapsed, and with the
collapse of the economy the military and government were also doomed. Given the
times, once the Roman government and military were ruined, it was short work to
eliminate the empire and social structure.
In 1792, the U.S. Congress enacted the first coinage act. The Congress
authorized the striking of gold and silver coins. The Congress set the ratio of the value
of gold to silver in the coinage at 15 units of silver to one unit of gold. The
problem with this was that gold was worth more, relative to silver, as bullion than as coinage.
This arbitrary setting of the coinage value of these metals resulted in the gold coins
disappearing from circulation (being melted down as bullion) and only silver coins
circulating. At the same time, most coinage that circulated in the eighteenth century
western hemisphere (including the United States) was of Spanish origin. In fact, the
standard unit adopted for U.S. coinage was the dollar; however, the Spanish minted
coins that were also called dollars (a name derived from the German thaler). The Spanish
one dollar coins contained more silver than did the U.S. dollar, and the Spanish coins were
being melted down and sold, at a profit, at the U.S. mint. Herein is the problem with using
gold or silver as minted coins or as backing for currency. Money has value in exchange
that is unrelated to the value of precious metals. Their relative values fluctuate and
result in money disappearing if the value of the metal is more than the unit of currency
in which the metal is contained.
One of the classical economists noted this volatility in the monetary history of
most countries. As the value of the gold or silver made the currency worth more as
bullion, that currency disappeared and was quickly replaced by monetary devices of
lesser value. Gresham's Law is that money of lesser value will chase money of greater
value out of the market.
A modern example of this is available from U.S. coinage. In 1964, the value of
silver became nearly 4 times (and later that decade nearly 22 times) its worth in
coinage. Therefore, 1964 was the last year in which general circulation coins containing
any silver were minted in the United States. Today, the mint issues one dollar coins that
contain one ounce of silver (rather than .67 ounces), but that silver is worth $5.30 per
ounce as bullion. Therefore, you do not see actual silver dollars in circulation, and will
not until the value of silver drops below $1.00 per ounce. This observation is Gresham's
law in operation.
Since the American Civil War there have been various forms of money used in
this country other than coinage. The government issued paper money, particularly after
the Greenback Act of 1861 (there were numerous examples of U.S. paper money
before that year, including bank notes, state notes, and colonial notes). The Greenback
Act provided for the denominations of bills with which you are familiar, but it also
provided for 5¢, 10¢, 25¢, and 50¢ bills (fractional currency).
Today, there are numerous definitions of money. The most commonly used
monetary items are included in the M1 through M3 definitions of money; these
definitions are: (1) M1 is currency + checkable deposits, (2) M2 is M1 + noncheckable
savings account, time deposits of less $100,000, Money Market Deposit Accounts, and
Money Market Mutual Funds, and (3) M3 is M2 + large time deposit (larger than
$100,000). The largest component of the M1 money supply is checkable deposits
(checking accounts, credit union share drafts, etc.), currency is only the second largest
component of M1.
Monetary Aggregates, Monthly 2005 (Seasonally adjusted, billions of dollars)

Month           M-1        M-2        M-3
January        1357.2     6439.3     9487.2
February       1363.0     6446.9     9531.6
March          1368.0     6464.9     9565.3
April          1360.4     6471.9     9620.9
May            1366.8     6482.8     9665.0
June           1359.8     6502.6     9725.3
July           1355.2     6515.4     9762.4
August         1359.1     6548.3     9864.6
September      1355.1     6580.2     9950.8
October        1363.8     6612.9    10032.0
November       1363.1     6641.0    10078.5
December       1363.2     6663.9    10154.0

Source: Federal Reserve System
The data presented above are from the historical tables at the Federal Reserve’s site
www.Federalreserve.gov. The Fed routinely publishes money supply data on a weekly,
monthly, and annual average basis, both seasonally adjusted, and not seasonally
adjusted. As the above table demonstrates, there is a fairly stable relationship between
each of the three major definitions of the money supply. The growth in each of the
categories over the calendar year 2005 (on a seasonally adjusted basis) is also
relatively slow.
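The growth rates can be read off the table directly; for instance, a short Python sketch
computing the January-to-December percentage change in each aggregate (values
transcribed from the table above):

    aggregates = {
        "M-1": (1357.2, 1363.2),
        "M-2": (6439.3, 6663.9),
        "M-3": (9487.2, 10154.0),
    }
    for name, (january, december) in aggregates.items():
        growth = (december - january) / january * 100.0
        print(name, round(growth, 1))   # M-1: 0.4, M-2: 3.5, M-3: 7.0 (percent)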
Near Money
Near money consists of items that fulfill portions of the requirements of the functions
of money. Near money can be simply a store of value, such as stocks and bonds that
can be easily converted to cash. Credit cards are often accepted in transactions, and
the line of credit they represent can serve as a medium of exchange. Gold, silver, and
precious stones have historically served as close substitutes for money because these
commodities have inherent value and can generally be converted to cash in almost any
area of the world. Much of the wealth smuggled out of Europe at the end of World War
II by escaping Nazis was smuggled out in the form of gem stones, gold, and silver
bullion.
The widespread use of near money is relatively rare in the world's economic
history. However, in modern times, there is a potential for problems. Indiana's history
presents an interesting example. The State of Indiana issued currency in the 1850s, in
part to help finance the canals that were being built across the State. In 1855 and 1856,
the railroads put the canals out of business, even before they were completed, virtually
bankrupting the State; hence the 1857 Constitution, rather than the year Indiana entered the
Union. State currency or even private bank currency is not controlled by a central bank
and is worth only what faith people have in it or the intrinsic value of the monetary unit.
If the full faith and credit of a bankrupt State or bank is all that backs the currency, it is
worthless.
What Gives Money Value?
The value of the U.S. dollar (or any other currency) can be expressed as the
simple reciprocal of the price level:
D = 1/P
Where D is the value of the dollar and P is the price level. In other words, the value of
the dollar is no more or less than what it will buy in the various markets. This is true of
any currency (money in general). However, there are reasons why money has value.
The value of money is determined by three factors: (1) its general
acceptability for payment, (2) because the government claims it is legal tender (hence
must be accepted for payment), and (3) its relative scarcity (as a commodity).
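Returning to the formula D = 1/P above, a one-line illustration (the price index value is
hypothetical, with a base of 1.0):

    price_level = 1.25                 # prices 25% above the base year
    print(1.0 / price_level)           # 0.8 -> the dollar buys 20% less than in the base year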
Money can be used to buy goods and services because people have faith in its
general acceptability. It is not that the coin or paper has intrinsic value that makes
money of value in exchange, it is simply because people know that they can accept it in
payment and immediately exchange it for like value in other commodities, because
virtually everyone trusts its value in exchange.
Money also has value in international currency markets, not just domestic ones.
The international currency markets provide a very good example of why trust provides
money with value. The world's currencies are generally divided into two categories,
hard currency and soft currency. A hard currency is one that can be exchanged for
commodities in any nation in the world. A soft currency is one whose value is generally
limited to the nation that issued it, and often to some limited extent in the world currency
markets (often with significant limitations or even discounts). The U.S. dollar, French
Franc, German Deutsche Mark, Canadian Dollar, Japanese Yen, British Pound Sterling,
and Italian Lire were recognized as hard currencies (generally the Swiss Franc is also
included in the hard currencies), at least prior to the Euro. These seven nations are the
G-7 nations and are the world's creditor nations (even though each has a national debt
of their own, the private sector extends credit to less developed countries). The reason
that these countries are the creditor nations is that they are the largest free market
economies, have democratic and stable governments, and long histories of rather
stable financial markets. These nations' currencies are termed "hard" currencies
because they are relatively stable in value and can be readily exchanged for the goods
and services of these largest most advanced economies. In other words, the economic
systems and governments that generate these currencies are the markets from which
everyone else imports a wide array of necessary commodities.
On the other hand, the Mexican Peso, Kenyan Shilling, and Greek Drachma
(among 165 others) are currencies of less developed nations that have very little of
value, relative to any of the G-7 nations, in the world's markets; these nations do not
have histories of stable financial markets and governments, and they are typically in
debt to the G-7 nations.
Because of their limited value in exchange, and rather volatile value these currencies
are called "soft" currencies. The difference between a hard and a soft currency is trust
in its present and future value in exchange for commodities. Hard currencies are
generally trusted, hence accepted, soft currencies are not.
There is also an element of legality in the value of money. For example, the
United States is a large, economically powerful country. Its government is also a large,
powerful government that has always paid its bills. People have faith and trust in the
U.S. government making good on its financial obligations, therefore people have taken
notice when the United States government says that Federal Reserve Notes are legal
tender.
Also contributing to the value of money is its scarcity. Because money is a
scarce and useful commodity it also has value the same as any other commodity. It is
interesting to note that the U.S. $100 bill is the most widely circulated currency outside
of the United States, and more of these bills circulate outside of the U.S. than within the
U.S. This suggests something of the commodity value of U.S. dollars, as well as the
general international trust in the U.S. dollar.
By spring 1971, the exchange rate problems had become acute. This was
true for Japan and especially for West Germany, who had large trade surpluses with
the United States and held more dollars than they wanted. Under these conditions,
the U.S. dollar was rapidly losing value against the German Deutsche mark. From
January to April of 1971, in keeping with the Bretton Woods agreements, the German
Bundesbank had to acquire more than $5 billion of international reserves in order to
defend the value of the dollar. To protect the value of the dollar and its exchange
rate with the German mark, however, the German central bank was losing control
over its domestic monetary policy. By May 5, 1971, the Bundesbank abandoned its
efforts to protect the dollar and permitted the Deutsche mark to seek its own value in
the world's currency markets. The scenario was the same for all countries that had
trade surpluses with the United States. The excess supply of dollars was causing
those countries to lose control over their money supplies. Given that the United
States was rapidly losing its gold reserves, on August 15, 1971, by Richard Nixon's
order, it was announced that the United States officially abandoned the Bretton
Woods system and refused to exchange gold for U.S. dollars held by foreigners. For
the first time in modern monetary history, the U.S. dollar was permitted to seek its
value in open markets. The move to a flexible exchange rate system where the
exchange rates are determined by the basic market forces was the official demise of
the Bretton Woods system.
Mashaalah Rahnama-Moghadam, Hedayeh Samavati, and David A. Dilts, Doing
Business in Less Developed Countries: Financial Opportunities and Risks. Westport,
Conn: Quorum Books, 1995, p. 74.
The Demand for Money
The demand for money consists of two components: (1) transactions
demand, and (2) asset demand. Transactions demand for
money is the demand that consumers and businesses have for cash (or checks) to
conduct business. Transactions demand is related to a preference to have wealth or
resources in a form that can be used for purchases (liquidity). There is also an asset
demand for money. In times of volatility in the stock and bond markets, investors may
prefer to have their assets in cash so as not to risk losses in other assets. Together, the
asset and transactions demand for money comprise the total demand for money.
Money is much the same as any other commodity: it has a demand curve and a
price. The price of money is interest, primarily because money is also a claim on capital
in the financial markets. The demand curve for money is a downward sloping function
relating interest rates to the quantity of money demanded.
The Money Market
The money market is a particularly interesting market. In reality there are several
markets in which money is exchanged as a commodity. Examining only M1, the
currencies of various countries are exchanged for one another for the purpose of doing
business across national boundaries. The price of one nation's currency is generally
expressed in terms of another country's currency. For example, 105 Yen may be the value of
a dollar; in the case of the Deutsche Mark, 1.26 DM is worth one dollar.
There is also the credit market. The credit market is where consumers and
businesses go to borrow money. A consumer purchasing a house will typically need a
mortgage and will borrow to buy a house. Businesses will need to borrow to purchase
capital equipment (investing) to produce commodities to sell in product markets. Both
types of borrowing influence the credit markets, because money is a relatively scarce
commodity. One of the largest borrowers in the U.S. economy is the U.S. Treasury.
Typically the government borrows by selling Treasury Bonds.
The following diagram is for a general money market (credit market):
[Diagram: money market, with a vertical money supply curve and a downward-sloping money demand curve determining the interest rate]
The money supply curve is vertical because the supply of money is exogenously
determined by the Federal Reserve. The Federal Reserve System regulates the money
supply through monetary policy and can increase or decrease the money supply by the
various actions it has available to it in regulating the banking system and in selling or
buying Treasury Bonds. The money demand curve slopes downward and to the right.
The intersection of the money demand and money supply curves represents equilibrium
in the money market and determines the interest rate (price of money).
Bonds are financial obligations. Both private companies and governments issue
bonds and receive cash. The bonds typically state that the owner of the bond will
receive a specific payment in dollars periodically for holding the bond, and at the end of
the bond's life it will be redeemed for its face value. This is the primary market, where
bonds are sold directly by the government or company. In the case of the Treasury the
bonds are generally sold at auction.
This method of paying interest creates a "secondary" market for bonds. Bonds
may be resold for either a discount or a premium. As the market value of the bond
increases, it drives down the rate of return on the bond; conversely, if the market value
of the bond decreases the rate of return increases. For example, consider a $1000
bond that the government agrees to pay $60 per year in interest over its life. If the bond
remains at the $1000 face value the interest rate is 6%. However, if bonds are viewed
as a safer investment than other possible investments, or there is excess demand for
bonds, the market price may increase. If the market price of this $1000 bond increases
to $1200 then the rate of return falls to only 5% (60/1200 = .05). On the other hand, if
the bond is viewed as more risky, or there is an excess supply of bonds the market
price may fall, say to $800, then the rate of return increases to 7.5% (60/800 = .075).
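The inverse relation between a bond's market price and its rate of return can be
sketched as follows (figures from the example above):

    def current_yield(annual_payment, market_price):
        # The rate of return on a fixed annual payment at the bond's market price.
        return annual_payment / market_price

    print(current_yield(60.0, 1000.0))   # 0.06  -> 6.0% at face value
    print(current_yield(60.0, 1200.0))   # 0.05  -> 5.0% at a premium
    print(current_yield(60.0, 800.0))    # 0.075 -> 7.5% at a discount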
Notice how bonds become a good investment. Bonds are good investments
when the interest rate is falling. As the interest rate falls, the market value of the bond
increases, (remember that the payment made by the original borrower is a fixed
payment each period). In other words, falling interest rates mean larger market values
for the bonds, and greater profits for investors in bonds.
An Overview of the U.S. Financial System
The U.S. financial system is a complex collection of banks, thrifts, savings &
loans, credit unions, bond and stock markets, and numerous markets for other financial
instruments such as mutual funds, options, and commodities. The complete analysis of
these markets is not a single course; it is an entire curriculum called Finance. What is
important for understanding the basics of the effects of money on the macroeconomy is
the banking system and the closely associated regulatory agencies, such as the Federal
Reserve System and Federal Deposit Insurance Corporation (F.D.I.C.).
The Federal Reserve System (FED) is composed of member banks. These member
banks are generally large, nationally chartered banks that do significant amounts of
commercial banking. The FED is owned by these member banks. However, under the
Federal Reserve Act, the Board of Governors and Chairman are nominated by the
President of the United States and confirmed by the Senate. The structure of the system is:
(1) the Board of Governors, which governs the FED and is responsible for its operations,
(2) the Open Market Committee, which buys and sells bonds (called open market
operations), (3) the Federal Advisory Council, which provides advice concerning appropriate
bank regulations and monetary policies, and (4) the 12 regional banks, which
serve as check clearing houses, conduct research, and supervise banks within their
regions. Fort Wayne is in the Chicago region; however, Evansville is in the St. Louis
region. The Federal Reserve Banks for each of the 12 regions are located in the
following cities:
1. Atlanta, Georgia
2. Boston, Massachusetts
3. Chicago, Illinois
4. Cleveland, Ohio
5. Dallas, Texas
6. Kansas City, Missouri
7. Minneapolis, Minnesota
8. New York, New York
9. Philadelphia, Pennsylvania
10. Richmond, Virginia
11. San Francisco, California
12. St. Louis, Missouri
The functions of the FED are basically associated with bank regulation and the
conduct of monetary policy. The functions of the FED include:
1. Set reserve requirements,
2. Check clearing services,
3. Fiscal agents for U.S. government,
4. Supervision of banks, and
5. Control money supply through Open Market Operations (buying and
selling of bonds).
The supervision of banks and check clearing services are routine FED functions
that are focused on making the banking system safer and more efficient. Because the
FED is the agency through which Treasury obligations are bought and sold, the FED is
the fiscal agent for the federal government. The setting of reserve requirements for the
banking system and open market operations are the tools of monetary policy and are
the subjects of the following chapter.
The Federal Deposit Insurance Corporation is a quasi-governmental corporation
whose purpose is to insure the deposits of member banks. Credit unions, thrifts, and
savings & loans had separate independent agencies designed to provide the same
insurance. However, after the savings & loan crisis, these other agencies were
consolidated under the control of F.D.I.C. The reason for these programs was the
experience of the banking industry during the Great Depression when many depositors
lost their life savings when the banks failed. Many banks failed, simply because of
"runs." Runs are where depositors demand their funds, simply because they have lost
faith in the financial ability of the banks to meet their obligations (sometimes the loss of
faith was warranted, often it was nothing more than panic). Therefore, to foster
depositor faith in the banking system, the F.D.I.C. was created to provide a guarantee
that depositors would not lose their savings even if the bank did fail.
There is a problem with such insurance arrangements. This problem is called
moral hazard. Moral hazard is the effect that having insurance reduces the insured's
incentive to avoid the hazard against which they are insured. The savings and loan
crisis was at least in part the result of moral hazard. The managers of the failed savings
and loans often would extend loans or make investments that were high risk, but were
less concerned because, if the risks resulted in failure, the government would pay back the
depositors. This is a classic example of moral hazard, but there were also other
problems involving fraud, bad loans in Mexico, and shaky business practices, against
which the F.D.I.C. was never intended to risk substantial exposure.
KEY CONCEPTS
Functions of Money
Medium of Exchange
Avoidance of Barter
Measure of Value
Store of Value
Supply of Money
M1, M2, and M3
Near Money
Value of Money
Demand for Money
Transactions Demand
Asset Demand
Total Demand
Money Market
Interest rates
Federal Reserve System
Board of Governors
F.O.M.C.
Federal Advisory Council
12 regions
Functions of the Fed
Sets Reserve Requirements
Check clearing
Fiscal agent for the U.S. government
Supervision of Banks
Control of Money Supply
Moral Hazard
STUDY GUIDE
Food for Thought:
Develop and explain each:
Functions of money,
Demand for money,
Money supply, and
Value of money.
Critically evaluate the Federal Reserve System as a regulatory agency.
What is near money? Explain and critically evaluate the use of near money.
Sample Questions:
Multiple Choice:
The U.S. money supply is backed by:
A. Gold
B. Gold and Silver
C. Gold, silver & government bonds
D. People's willingness to accept it
If a U.S. government 30-year bond sold originally for $1000 with a specified interest rate
of five percent, each year the bond holder receives $50 from the government. If the
sales price of the bond falls to $900 what is the interest rate?
A. 5.56%
B. 5.00%
C. 4.50%
D. None of the above
True - False:
If the Fed wishes to decrease the money supply they can buy bonds {FALSE}.
The FDIC provides insurance for deposits in member banks {TRUE}.
Chapter 9
Multiple Expansion of Money
The purpose of this chapter is to analyze how banks create money and how the
Federal Reserve System plays an essential role in regulating the banking system. After
a brief history of banking reserves is examined, we will proceed to analyze the multiple
expansion of money.
Paper money is produced at the United States Bureau of Printing and Engraving
in Washington, D.C. Each day, the Bureau of Printing and Engraving produces about
$22 million in new currency. Once the Bureau of Printing and Engraving produces the
money it is then shipped to the twelve regional Federal Reserve banks for distribution to
the member banks. The stock of currency is created and maintained by a simple
printing and distribution process. However, the stock of paper money is only a small
part of the M1 money supply. The majority of the M1 money supply is checkable
deposits.
Counterfeiting of U.S. currency is a significant problem. The U.S. Bureau of
Printing and Engraving has developed a new fifty dollar bill that will make counterfeiting
more difficult. Periodically for the next several years the new design will be extended to
all denominations of U.S. currency. A few years ago, a strip was added to U.S.
currency that when held to the light shows the denomination of the bill, so that
counterfeiters cannot use paper from one dollar bills to create higher denomination bills.
The government agency responsible for enforcing the counterfeiting laws in the United
States is the U.S. Secret Service, the same agency that provides security for the
President.
Assets and Liabilities of the Banking System
Assets are items of worth held by the banking system; liabilities are claims of
non-owners of the bank against the banks' assets. Net worth is the claims of the
owners of the bank against the banks' assets. Over the centuries a system of double
entry accounting has evolved that presents a financial picture of a business. The double entry
system accounts for assets, liabilities, and net worth.
Accountants have developed a balance sheet approach to present the double entry
results of the accounting process. On the left hand side are entered all of the bank's
assets. On the right hand side of the ledger are entered all of the claims against those
assets (claims by owners are net worth and claims by non-owners are liabilities). The
assets side of the ledger must equal the net worth and liabilities side. This rather
simple method is an elegant way to assure that claims and assets balance.
The balance sheet method will permit us to track how banks create
money through the multiple expansion process.
Rationale for Fractional Reserve Requirements
The fractional reserve approach to monetary stability dates from the Middle Ages in Europe. Goldsmiths received gold to make jewelry and religious objects, and to hold for future use. In return the goldsmiths would issue receipts for the gold they received. In essence, these goldsmith receipts were the first European paper money, and they were backed by stocks of gold. The stocks of gold acted as a reserve to assure payment if the paper claims were presented for payment. In other words, there was a 100% reserve of gold that assured the bearer of the receipt that the paper receipt would be honored. The reserves of gold held by the goldsmiths created faith in the receipts as media of exchange, even though there was no governmental involvement in the issuing of this money.
However, the goldsmiths in Europe were not the first to issue paper money. Genghis Khan issued paper money in the thirteenth century. Genghis did not hold reserves to back his money; it was backed by nothing except the Khan's authority (which was absolute). Therefore, in the case of the Great Khan, it was the ability to punish untrusting individuals that gave money its value. In Europe, two hundred years later, it was trust in reserves that gave money its value.
The U.S. did not have a central banking system, as we know it, from the 1840s through 1914. There were two early "national" banks whose purpose was to serve as the fiscal agent of the U.S. government and to provide limited regulation of the U.S. monetary system. Both failed and were eliminated. In the early part of the twentieth century several financial panics pointed to the need for a central banking system and for strong financial regulations.
During the first half of this country's history both states and private companies issued paper money. Mostly this paper money was similar to the gold receipts issued by the European goldsmiths, except that it was not backed by gold; typically the money was a claim against the assets of the state or company. In other words, the money issued represented debt. It is little wonder that most of this currency became worthless, except as collectors' items. Prior to 1792, Spanish silver coins were widely circulated in the U.S. because they were all that was available for use as money (for more details see the previous chapter).
The first widespread issuance of U.S. paper money came during the Civil War (the Greenback Act), and included fractional currency (paper dimes & nickels!). Earlier attempts to issue U.S. notes were less than successful, simply because people trusted coinage for the silver and gold contained therein, and paper money was a novelty (but not a very valuable one).
Today, the Federal Reserve requires banks to keep a portion of their deposits as reserves, to help assure the solvency of the bank in case of a financial panic, like those experienced in the first decade of the twentieth century and again in the 1930s. The fact that these reserves are kept also helps to assure the public of the continuing viability of their banking system, hence the safety of their deposits. In turn, this public faith should prevent future runs on the banking system of the sort that have historically caused so much economic grief due to bank failures.
The Required Reserve Ratio
The Required Reserve Ratio (RRR) is set by the FED's Board of Governors within limits set by statute. The minimum legally allowable RRR is three percent, which is where the Board of Governors has currently set it. The RRR determines by how much the banking system can expand the money supply. The RRR is the amount of reserves that a bank must keep, as a percentage of its total liabilities (deposits).
Banks are permitted some freedom to determine how their reserves are kept. A bank can keep reserves as vault cash or as deposits with the regional Federal Reserve bank. Should a bank be short of the reserves required, it can borrow reserves for short periods from either the FED or other member banks. The FED regulates the borrowing of reserves, and sets an interest rate for short term loans borrowed from the FED. The rate charged on reserves borrowed from the FED is called the discount rate. The rate of interest charged on reserves borrowed from other member banks is called the Federal Funds Rate (currently about 5.5%).
The banking system has three forms of reserves: actual, required, and excess reserves. Actual reserves are the amounts the banks have received in deposits that are currently held by the bank. Required reserves are the amounts the Board of Governors requires the banks to keep (as vault cash, deposits with the FED, or borrowed funds). Excess reserves are the amount of actual reserves that exceeds the required reserves. It is the excess reserves of the banking system that may be used by the member banks to expand the checkable deposits component of the U.S. money supply.
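As an illustration of these reserve categories, consider the following minimal Python sketch; the bank's figures and the RRR used here are purely hypothetical.

    def reserve_position(deposits, actual_reserves, rrr):
        """Split a bank's actual reserves into required and excess portions."""
        required = deposits * rrr             # amount the Board of Governors mandates
        excess = actual_reserves - required   # loanable portion that can expand M1
        return required, excess

    # Hypothetical bank: $1,000,000 in deposits, $150,000 in actual reserves, RRR = .10
    required, excess = reserve_position(1_000_000, 150_000, 0.10)
    print(f"Required reserves: ${required:,.0f}")   # $100,000
    print(f"Excess reserves:   ${excess:,.0f}")     # $50,000 available for loans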
Multiple Expansion of Checkable Deposits
The largest component of the M1 money supply is checkable deposits. Rather
than printing Federal Reserve Notes, the majority of the money supply is created
through a system of deposits, loans, and redeposits. Money is created by a bank
receiving a deposit, and then loaning that non-required reserve portion of the deposit
(excess reserve), which, in turn, is deposited in another checking account, and loans
are subsequently made against those deposits, after the required reserve is deducted
and placed in the bank’s vault or deposited with the FED.
For example, if the RRR is .10, then a bank must retain 10% of each deposit as its required reserve and it can loan the remaining 90% of the deposit (its excess reserves). The multiple expansion of money, assuming a required reserve ratio of .10, can, therefore, be illustrated with the use of a simple balance sheet (T-account):
     Deposits      |     Loans
  -----------------|-----------------
      $10.00       |     $9.00
        9.00       |      8.10
        8.10       |      7.29
         .         |       .
         .         |       .
  -----------------|-----------------
     $100.00       |    $90.00
                   |     10.00 (required reserves)
                   |   $100.00
The total new money is the initial deposit of $10 and an additional $90 of multiple
expansion for a total of $100.00 in new money. The T-account used to illustrate this
multiple expansion of money is really a crude balance sheet for the banking system with
the liabilities on the left side and the assets on the right side of the ledger. Notice,
however, that the T-account balances, and that there is $100.00 on each side of the
ledger.
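The deposit - loan - re-deposit process can also be simulated directly. The following Python sketch assumes the stylized conditions used in the text: banks loan all of their excess reserves and every loan is re-deposited in full. With a $10.00 initial deposit and an RRR of .10 it reproduces the T-account totals above.

    def multiple_expansion(initial_deposit, rrr, rounds=1000):
        """Simulate the deposit - loan - re-deposit process."""
        total_deposits = 0.0
        total_loans = 0.0
        deposit = initial_deposit
        for _ in range(rounds):
            total_deposits += deposit        # a new checkable deposit is created
            loan = deposit * (1 - rrr)       # excess reserves are loaned out
            total_loans += loan
            deposit = loan                   # the loan is re-deposited elsewhere
        return total_deposits, total_loans

    deposits, loans = multiple_expansion(10.00, 0.10)
    print(f"Total deposits:    ${deposits:.2f}")           # $100.00
    print(f"Total loans:       ${loans:.2f}")              # $90.00
    print(f"Required reserves: ${deposits - loans:.2f}")   # $10.00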
There is a far easier way to determine how much the money supply can be
expanded through the multiple deposit - loan, re-deposit - loan mechanism. This shortcut method is called the money multiplier (Mm). The money multiplier is the reciprocal
of the required reserve ratio:
Mm = 1 / RRR
The money multiplier is the short-hand method of calculating the total entries in the banking system's T-accounts, and shows how much an initial injection of money into the system can generate in total money supply through checkable deposits. One of the tools the FED uses to expand or contract the money supply is to make or withdraw deposits from its member banks. This element of monetary policy, and how the FED operates to maintain control over the money supply, is examined in greater detail in the following chapter.
The potential creation of money is therefore inversely related to the required reserve ratio. For example, with a required reserve ratio of .05 the money multiplier is 20. This means that a $1.00 increase in deposits can potentially create $20 in new checkable deposits as it is loaned and re-deposited through the system. On the other hand, with a required reserve ratio of .20 the money multiplier is 5, and only $5 of new money can be created from an initial deposit of $1.00.
With the current required reserve ratio of .03, the money multiplier is 33.33. Currently, an initial deposit of $1.00 can potentially create $33.33 in new money through increased checking deposits. The reason the word potential is used to describe this process is that there is no guarantee that the banking system will be able to loan all of its available excess reserves. The amount that will be loaned still depends on the demand for money and investment in plant and equipment.
The monetary history of several nations illustrates how well the multiple expansion of money has been understood. The United States has had recurrent bouts of inflation during the post-World War II period. While the Fed struggled with understanding how the system worked in this country, the Swiss had understood it since the turn of the century. Switzerland is a relatively small economy, and the money supply in this small nation was very competently managed. The result is that the Swiss have experienced unusually stable price levels ever since 1945.
The recurrent bouts of inflation in the post-World War II economic history of the United States are not consistent with the economic history of this country prior to World War II. Through most of American history the problem was significant deflation. Deflation is rarely discussed today, but is, in many ways, a far more destructive problem than inflation. The reason this problem persisted for so many decades in this country was that there was no stable central banking system to manage the money supply and that the value of the U.S. dollar was tied to an increasingly rare commodity - gold. This is what was called the gold standard. The deflation that was a symptom of the Great Depression moved then-President Herbert Hoover to order that gold coinage no longer be minted. A couple of years later President Roosevelt ordered that the gold coinage be taken out of circulation, and later in his administration private citizens were not permitted to hold gold coinage. Finally, the destabilizing effects of not allowing the U.S. dollar to find its own value on a free market motivated President Nixon to eliminate the gold standard altogether in 1971.
Gold prices have since been allowed to float in the market place. In 2001 the annual average price of gold was just under $272; today the price of gold is nearing the levels reached right after the demise of the gold standard in 1971. At $650 an ounce in May of 2006, gold is merely a couple of hundred dollars below the lofty levels set in 1972.
Need for Regulation and Inflation
One of the serious implications of the money multiplier is that the banking system has the potential to do significant harm to economic stability. In Chapter 3 we examined the various theories of inflation. One of those theories is called the quantity theory of money, where MV = PQ. Because V and Q grow very slowly, they are generally regarded as nearly constant. The implication is that what happens to the money supply should be nearly directly reflected in the price level.
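A minimal sketch of this implication, assuming V and Q are literally constant; the figures below are hypothetical.

    V = 2.8       # velocity of money (assumed constant)
    Q = 1_000.0   # real output (assumed constant)

    def price_level(money_supply):
        """From MV = PQ with V and Q fixed: P = MV / Q."""
        return money_supply * V / Q

    M0 = 500.0
    print(price_level(M0))       # baseline price level: 1.4
    print(price_level(M0 * 2))   # doubling M doubles P to 2.8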
During expansions in the business cycle, investment demand is generally high and banks can often loan all of their excess reserves. If we reduced the required reserve ratio to .01, then banks could expand the money supply $100 for each $1.00 in additional deposits made in the system. If the required reserve ratio were only .001, then $1000 of new money could be created for every dollar of new deposits. Without any required reserve ratio, the money supply could theoretically be expanded without limit for each dollar of new deposits. These high potential multiple expansions could create serious inflationary problems for the economy, and therefore the required reserve ratio of the central bank is an essential portion of any nation's economic stabilization policies.
During this period of a political swing to the right in public opinion, the idea of de-regulation has gained some new support. However, there is no serious, informed move toward de-regulating the money supply. Without monetary controls imposed by a central bank, there would most certainly be serious economic problems associated with the loss of some modicum of sensible control of the money supply.
This need for regulation has long been recognized by economists. Since at least the 1890s classical economists described the role of money in the economy, and the need to control the money supply if price stability was to be achieved. Most prominent among the classical economists writing at the beginning of the twentieth century concerning monetary economics was Irving Fisher, who developed the modern quantity theory of money presented in Chapter 3.
We come back to the conclusion that the velocity of circulation either of money or
deposits is independent of the quantity of money or of deposits. No reason has
been, or, so far as is apparent, can be assigned, to show why the velocity of
circulation of money, or deposits, should be different, when the quantity of money or
deposits, is great, from what it is when the quantity is small.
There still remains one seeming way of escape from the conclusion that the sole effect of an increase in the quantity of money in circulation will be to increase prices. It may be claimed -- in fact it has been claimed -- that such an increase
results in an increased volume of trade. We now proceed to show that (except during
transition periods) the volume of trade, like the velocity of the circulation of money, is
independent of the quantity of money. An inflation of the currency cannot increase
the product of farms and factories, nor the speed of freight trains or ships. The
stream of business depends on natural resources and technical conditions, not on
the quantity of money. The whole machinery of production, transportation, and sale
is a matter of physical capacities and technique, none of which depend on the
quantity of money . . . . We conclude, therefore, that a change in the quantity of
money will not appreciably affect the quantities of goods sold for money.
Since, then, a doubling in the quantity of money: (1) will normally double
deposits subject to check in the same ratio, and (2) will not appreciably affect either
the velocity of circulation of money or of deposits or the volume of trade, it follows
necessarily and mathematically that the level of prices must double . . . .
Irving Fisher, The Purchasing Power of Money. New York: Macmillan, 1911, pp. 154-57.
KEY CONCEPTS
Fractional Reserve Requirements
Balance Sheet
Required Reserve Ratio
Money Multiplier
Monetary History
Genghis Khan
Gold Receipts
U.S. had no central bank prior to 1914
Greenback Act
Reserves
Required
Excess
Actual
Money Creation
STUDY GUIDE
Food for Thought:
Trace the deposit of $100 through a banking system with a .25 RRR. Does an injection
of $100 really result in an increase in the money supply of $400? Explain.
Explain the sources of reserves for member banks. How does the Fed control the
money supply when reserves can be borrowed? Explain and critically evaluate this
practice.
Critically evaluate the Fed as maker and implementer of monetary policy in the United
States.
Sample Questions:
Multiple Choice:
With a required reserve ratio of .05, by how much will the money supply be increased if the Federal Reserve deposits a check of $1000 with National City Bank?
A. $1000
B. $5000
C. $20,000
D. None of the above
Which of the following is the interest rate that member banks charge one another for
borrowing excess reserves?
A. Federal funds rate
B. Money Market rate
C. Discount rate
D. Prime rate
True - False:
Excess reserves are the funds in the Federal Reserve Banks that the Fed does not
need to assure the solvency of the banking system. {FALSE}
The money multiplier with a RRR of .25 is 4. {TRUE}
Chapter 10
Federal Reserve and Monetary Policy
The neo-classicists continue to reject the idea that the government has a pro-active role to play in economic stabilization. However, where the neo-classicists do see a role for government in stabilizing the macroeconomy, it is in the monetary policy arena. There is some merit to this point of view. Fiscal policy has significant lags that often result in counter-productive governmental actions by the time the fiscal policy actually impacts the economy. Monetary policy, on the other hand, has very short lags between the recognition of a problem and the impact on the macroeconomy. At a minimum, monetary policy is quicker to change the economy's direction, is involved in far less political game-playing, and is more efficient than fiscal policy.
The purpose of this chapter is to examine the role of the Fed in the formulation and implementation of monetary policy for the purpose of economic stabilization. Each of the major tools of monetary policy will be examined, and their potential effects presented.
Monetary Policy
The monetary policies of a government focus on the control of the money supply.
These policies directly control inflation or deflation, but also can influence real economic
activity. The focal point of control over real economic activity through the management
of monetary aggregates is the interest rate. The demand for investment is dependent
upon the relation between expected rates of return from that investment, and the
interest rate that must be paid to borrow the money to buy capital.
Monetary policy in the United States is conducted by the Federal Reserve, either
through the Board of Governors or the Federal Open Market Committee. The monetary
policies of the United States have focused primarily on the control of various monetary
aggregates, i.e., the money supply, which, in turn, influences interest rates and hence
aggregate economic activity. The fundamental objective of monetary policy is to assist
the economy in attaining a full employment, non-inflationary macroeconomic
equilibrium.
Tools of Monetary Policy
The Federal Reserve has an arsenal of weapons to use to control the money supply. The Fed can buy and sell U.S. Treasury bonds (called open market operations); it can control the required reserve ratio, within statutory limits; and it can manipulate the discount rate (the interest rate charged by the Fed for member banks to borrow reserves). This array of tools provides the Fed with several options in taking corrective actions to help stabilize the economy. Each of these tools is examined in the following paragraphs.
Open Market Operations
Open Market Operations (OMO) involves the selling and buying of U.S. Treasury
obligations in the open market. OMO directly influences the money supply through
exchanging money for bonds held by the public (generally commercial banks and
mutual funds). Expansionary monetary policy involves the buying of bonds. When the
Fed buys bonds, it replaces bonds held by the public with money. When someone sells
a bond, they receive money in exchange, which increases the money supply.
Contractionary monetary policy involves the Fed selling bonds to the general public.
When the Fed sells bonds it removes money from the hands of the public and replaces
that money with U.S. Treasury bonds.
The Fed sells and buys both long-term (thirty year) Treasury bonds and short-term (primarily two, five and ten year) Treasury bonds. In theory the Fed can focus its influence on either long-term interest rates or short-term interest rates. However, most security analysts argue that the Fed's bond buying and selling behavior does not lead the market. In recent years, the Fed has bought and sold bonds after the money market establishes a direction for interest rates. In times of high inflation or steep recession, however, the evidence suggests that the Fed's OMO leads the market, rather than follows. In reality, this is what we should expect to observe in the responsible exercise of monetary policies.
The Fed also buys and sells lower denomination and shorter term obligations called Treasury Bills. While bonds are sold only periodically in Treasury auctions, Treasury Bills are available at any time through the Federal Reserve Banks. Trading in Treasury Bills is also the buying and selling of U.S. government debt, but it is not the primary tool of open market operations, even though it has the same impact of increasing or decreasing the money supply. The market for Treasury Bills is small relative to the bond market.
The Required Reserve Ratio
As discussed in the previous chapter, it is the required reserve ratio that
determines the size of the money multiplier. The money multiplier determines how
much money can be created by the member banks through the deposit - loan process.
Therefore, the Fed can directly control how much the banking system can expand the
money supply through the manipulation of the required reserve ratio.
The Fed can raise or lower the required reserve ratio, within statutory limits. Increasing the required reserve ratio reduces the money multiplier, hence reduces the amount by which multiple expansions of the money supply can occur. By decreasing the required reserve ratio, the Fed increases the money multiplier, and permits more multiple expansion of the money supply through the deposit - loan process described in the previous chapter.
Most of the new deposits that result in the multiple expansion of money occur because the Fed bought bonds from the public. When the Fed buys bonds, it injects new money into the system; that money is, in turn, deposited -- which sets the money multiplier (multiple expansion) into motion.
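A small sketch tying OMO to the money multiplier from Chapter 9; the purchase amount and RRR below are illustrative only.

    def potential_money_creation(bond_purchase, rrr):
        """Potential change in checkable deposits from a Fed bond purchase."""
        money_multiplier = 1 / rrr
        return bond_purchase * money_multiplier

    # Fed buys $1,000 of bonds from the public with an RRR of .05
    print(potential_money_creation(1_000, 0.05))   # up to $20,000 in new deposits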
The Discount Rate
The Discount Rate is the rate at which the Fed will loan reserves to member banks for short periods of time. To tighten monetary policy, the Fed will raise the discount rate. Raising the discount rate discourages the borrowing of reserves by member banks, hence encourages them to hold their reserves as required reserves rather than loaning out excess reserves (which would start the multiple expansion of money). By lowering the discount rate, the Fed encourages the borrowing of reserves, which may result in more excess reserves, hence potentially more loans and a greater expansion of the money supply through the loan - deposit process described in the previous chapter (Chapter 9).
It is the Discount Rate over which the Fed may exert direct control. However,
member banks may borrow reserves from one another, as well as the Fed. The rate at
which member banks borrow from one another is called the Fed Funds Rate. The Fed
Funds Rate is heavily influenced by what the Fed does with respect to the Discount
Rate, and the Fed’s supervisory impact on member banks. Therefore, the Fed Funds
Rate is often targeted by the Fed, and is not entirely immune to Fed influence, even
though not under the direct control of the Fed.
It is also worth mentioning that the Fed sets the minimum margin requirements governing how much customers may borrow from brokerages to buy stocks. This function of the Fed is not an important current monetary policy tool. In fact, it is largely a legacy of the Federal Reserve Act being passed as a result of a series of financial panics immediately prior to the outbreak of World War I and of the devastation brought later by the Great Depression.
Fed Targets
The Fed must have benchmarks to determine the need for and effectiveness of monetary policies. The quantity theory of money suggests that the money supply itself is the appropriate policy target for the Fed. As noted by Irving Fisher, the velocity of money is how often the money supply turns over, and is unrelated to economic activity. Further, Fisher argued that money does not directly influence the real output of the economy. Therefore, in any policies aimed at controlling inflation or deflation the Fed's monetary target should simply be the money supply.
However, the economy is not as simple as the quantity theory of money would suggest. Many consumer purchases and most investment are interest rate sensitive. Therefore, to the extent that the Fed's policies do impact interest rates, the Fed can also correct downturns in the business cycle. If investment is too low to maintain the full-employment level of GDP, the Fed can reduce the required reserve ratio or buy bonds, thereby increasing the supply of money and lowering the interest rate. The lower interest rate may encourage consumption expenditures and investment, thereby mitigating recession.
There are dilemmas for the Fed in selecting targets for its monetary policies. Interest rates and the current business cycle may present a dilemma. Expansionary monetary policy may result in higher interest rates, by increasing the rate of inflation, which will be reflected in the interest rates. As the interest rate increases and people's inflationary expectations develop, these may serve to dampen the expansionary effects of the Fed's monetary policies. At the same time, there is no necessary coordination of fiscal and monetary policies. At the time an expansionary monetary policy may be necessary to reverse a recession, contractionary fiscal policies may begin to affect the economy. Therefore, the Fed must keep an eye on the Congress and account for any fiscal policies that may be contradictory to the appropriate monetary policies. This presents a delicate balancing act for the Fed. Not only must the Fed correct problems in the economy, it may well have to correct Congressional fiscal mistakes, or at least account for those errors when implementing monetary policies.
Tight and Easy Money
Discretionary monetary policy, therefore, fits into one of two categories: (1) easy money, and (2) tight money policies. Easy money policies involve the lowering of interest rates and the expansion of the money supply. The purpose of easy money policies is typically to mitigate recession and stimulate economic growth. Tight money policies involve increasing interest rates and contracting the money supply. The purpose of tight money policies is generally to mitigate inflation and slow the rate of economic growth (typically associated with inflation). These monetary policies may also have a significant impact on the value of the U.S. Dollar in foreign exchange markets. In general, tight monetary policies are associated with increasing the value of the U.S. Dollar, whereas easy monetary policies will generally have the opposite effect.
Consider the following diagram showing both tight and easy money policies’
impact on the money market.
Assuming that the demand for money remains constant, we can analyze the changes in the money supply imposed by the Fed. As the Fed engages in tight money policies the supply curve is shifted to the left (dashed line labeled tight); this increases the interest rate and lowers the amount of money available in the economy. On the other hand, easy money policy is a shift to the right of the money supply curve (dashed line labeled easy). With easy money policies, the quantity of money increases and the interest rate falls. The effectiveness of such policies in influencing GDP results from changes in autonomous investment. In the case of an easy money policy, as the interest rate falls, investment will increase, which results in an increase in the C+I+G line as illustrated below:
The decrease in the interest rate is associated with an increase in investment (the vertical distance between C+I+G1 and C+I+G2), which results in an increased GDP and higher levels of spending.
The acceptance of discretionary monetary policies is often associated with Keynesian views of a pro-active role for government in economic stabilization, even though most neo-classicists also argue that monetary policy is necessary to the proper functioning of a market economy. However, there is another view. The most extreme of the neo-classicists argue that there is simply no role for either discretionary fiscal or monetary policies, except in dealing with extreme variations in economic activity, i.e., events like the Great Depression of the 1930s.
Friedman's Monetary Rules Argument
The leading economist of the neo-classical school (the Chicago School of Thought, or Monetarism) is the Nobel Prize-winning economist Milton Friedman. Professor Friedman won his Nobel Prize for, among other contributions, his work on the monetary history of the United States. Friedman argues, with some persuasion, that Irving Fisher's work establishes the appropriate standard for monetary policy. Based on the presumption that discretionary fiscal policy can be abolished, Friedman would have all monetary policies based on a simple rule that follows directly from Fisher's formulation of the quantity theory of money.
Assuming that discretionary fiscal policy has been eliminated, and that the economy is operating at a full-employment, non-inflationary equilibrium, monetary policy should be nothing more or less than estimating the growth rate of the economy (the change in Q) and matching the growth rate of the money supply to the growth rate of the economy. Such monetary policy leaves nothing to the discretion of policy makers. The Fed's sole role is to make sure that the money supply simply facilitates economic growth by expanding at the same pace as real economic activity. If the Fed underestimates growth, there could be small deflations, which could be eliminated by subsequent adjustments to the money supply; overestimations of the growth rate, causing inflation, could be dealt with by the same type of subsequent adjustments.
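A minimal sketch of Friedman's rule under the assumptions just stated; the growth estimates below are hypothetical.

    def money_growth_rule(money_supply, estimated_real_growth):
        """Grow the money supply at the estimated growth rate of real output (Q)."""
        return money_supply * (1 + estimated_real_growth)

    M = 1_000.0
    for year, growth in enumerate([0.03, 0.025, 0.035], start=1):
        M = money_growth_rule(M, growth)   # no discretion: M simply tracks growth
        print(f"Year {year}: money supply = {M:,.2f}")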
If nothing else, Friedman's suggestion would eliminate any government-induced variations in economic activity. Real Business Cycle Theory is another paradigm that has arisen out of the ashes of the classical school, and these economists would not find much to argue with Friedman about, as far as the analysis goes. These economists, primarily Thomas Sargent of the University of Minnesota, argue that recessions and inflations result from either major structural changes in the economy or external shocks, such as the Arab Oil Embargo. When these types of events occur, the Real Business Cycle Theorists would have the government play a pro-active role, but one focused specifically on the shock or structural problem. In this sense, there is a role for discretionary fiscal and monetary policies, but very narrowly focused on very specific events.
KEY CONCEPTS
Monetary Policy
Expansionary
Contractionary
Tools of Monetary Policy
Bonds
Required Reserves Ratio
Discount Rate
Velocity of Money
Quantity Theory of Money
Target Dilemma in Monetary Policy
Discretionary Monetary Policy vs. Monetary Rules
STUDY GUIDE
Food for Thought:
Critically evaluate the tools and the independence of the Fed to use them in monetary
policy.
If the MV = PQ equation is correct, then inflation can be dealt with as a monetary problem. If we are experiencing 10 percent inflation with a $4 billion real economy (in other words, nominal GDP is $4.4 billion) and the velocity of money is 2.8, how do we control inflation? Would we want to? Explain.
What must the Fed do with each of its tools to create (1) a tight money policy, and (2)
an easy money policy? Critically evaluate each.
Sample Questions:
Multiple Choice:
If the Fed wanted to create a tight money policy which of the following is inconsistent
with this goal?
A. Increasing the Required Reserve Ratio
B. Increasing the Discount Rate
C. Buying bonds
D. All of the above are consistent with a tight money policy
If the Fed wishes to stimulate the economy during a recession what might we expect to
observe?
A. Lowering the Required Reserve Ratio
B. Lowering the Discount Rate
C. Fed Buying Bonds
D. None of the above
True/False:
The main goal of the Federal Reserve System is to assist the economy in achieving and maintaining full-employment, non-inflationary stability. {TRUE}
A tight money policy assists in bringing the economy out of recession, but at the risk of
possibly causing inflation. {FALSE}
Chapter 11
Interest Rates and Output: Hicks' IS/LM Model
The final issue with which to build a complete picture of the macroeconomy is the development of a model to explain the relationship of interest rates to the overall output of the economy. Sir John Hicks developed a model which directly relates the interest rate to the output of the economy. This model is called the IS/LM model and provides valuable insights into the operation of the U.S. economy -- this chapter offers a quick and simple view of these relations.
IS Curve
The IS curve shows the level of real GDP for each level of the real interest rate. The derivation of the IS curve is a rather straightforward matter; observe the following diagram.
[Diagram: derivation of the IS curve. An income-expenditure panel relates expenditure (aggregate demand) to GDP; a second panel relates the interest rate to autonomous spending; together they trace out the IS curve in interest rate - GDP space.]
The income-expenditure diagram determines, for each level of aggregate demand, the associated level of GDP. A portion of this GDP is determined independently of income (autonomous spending), and for each level of autonomous spending there is a specific interest rate. The indicator line from the income-expenditure diagram provides levels of GDP, which are associated with specific interest rates, which in turn are associated with specific levels of autonomous spending. The resulting IS curve is a schedule of the levels of GDP associated with each interest rate.
The intercept of the IS curve is the level of GDP that would obtain at a zero real interest rate. The slope of the IS curve is the multiplier (1/(1 - MPC)) times the sum of the marginal propensity to invest (resulting from a change in real interest rates) and the marginal propensity to export (resulting from a change in real interest rates).
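A linear IS curve sketch consistent with this slope and intercept; every parameter value below is assumed purely for illustration.

    MPC = 0.75    # marginal propensity to consume (assumed)
    MPI = 10.0    # fall in investment per point of real interest rate (assumed)
    MPX = 5.0     # fall in exports per point of real interest rate (assumed)
    A0  = 250.0   # autonomous spending at a zero real interest rate (assumed)

    multiplier = 1 / (1 - MPC)   # = 4

    def is_curve(real_rate):
        """GDP at a given real interest rate along the IS curve."""
        # Intercept: multiplier * A0 is GDP at a zero real rate.
        # Slope: multiplier times the interest sensitivity of I and X.
        return multiplier * (A0 - (MPI + MPX) * real_rate)

    print(is_curve(0.0))   # the intercept: GDP = 1000.0
    print(is_curve(2.0))   # a higher real rate lowers GDP: 880.0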
The IS curve can be shifted by fiscal policy. An increase in government
purchases pushes the IS curve to the right, and a decrease will shift it left. Conversely,
an increase in taxes pushes the IS curve left, while a decrease in taxes will push the
curve to the right. However, almost anything that changes the non-interest-dependent
components of autonomous spending will move the IS curve. For example, foreigners
speculating on the value of the dollar may result in a shift in the IS curve to the extent it
impacts purchases of imports. Changes in Co or Io will also move the IS curve.
Anytime the economy moves away from the IS curve there are forces within the
system which push the economy back onto the IS curve. Observe the following
diagram:
[Diagram: IS curve in real interest rate - real GDP space. Point A lies above the curve and point B below it; the arrows indicate output falling back toward the curve from A and rising toward it from B.]
If the economy is at point A there is a relatively high level of real interest rates, which results in planned expenditure being less than production; hence inventories are accumulating and production will fall, pushing the economy toward the IS curve. On the other hand, at point B the relatively low real interest rate results in planned expenditure being greater than production; hence inventories are being sold into the market place and production will rise to bring us back to the IS curve.
LM Curve
The LM Curve is derived in a fashion similar to that of the IS curve. Consider the
following diagram:
[Diagram: derivation of the LM curve. The left panel shows the money market: a fixed money supply (S) and money demand curves D1, D2, and D3 corresponding to low, moderate, and high levels of income (Y), each determining an interest rate. The right panel plots the resulting LM curve relating the interest rate to GDP (Y).]
As can be readily observed from this diagram, the LM curve is the schedule of interest rates associated with levels of income (GDP). The interest rate, in this case, is determined in the money market.
With a fixed money supply each level of demand for money creates a different interest rate. If the money market is to remain in equilibrium, then as incomes rise, so too must the interest rate, if the supply of money is fixed. Shifts of the LM curve result from inflation or monetary policy. If the price level is fixed, then a decrease in the money supply will shift the LM curve to the left, and an increase in the money supply will push the LM curve to the right. Conversely, if the money supply remains fixed and there is inflation, then the real money supply declines, and the LM curve shifts left, and vice versa.
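A linear LM curve sketch, assuming (hypothetically) that money demand rises with income and falls with the interest rate, Md = k*Y - h*r; all parameter values are illustrative.

    k = 0.5     # income sensitivity of money demand (assumed)
    h = 50.0    # interest sensitivity of money demand (assumed)
    M = 400.0   # nominal money supply (assumed)
    P = 1.0     # price level (assumed fixed)

    def lm_curve(Y, M=M, P=P):
        """Interest rate that clears the money market at income Y: M/P = k*Y - h*r."""
        return (k * Y - M / P) / h

    print(lm_curve(1000.0))           # r = 2.0
    print(lm_curve(1100.0))           # higher income -> higher rate: r = 3.0
    print(lm_curve(1000.0, P=1.25))   # inflation lowers M/P and shifts LM left: r = 3.6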
Equilibrium in IS-LM
Just as in the Keynesian model, the intersection of the IS and LM curves results in an equilibrium in the macroeconomy. The following diagram illustrates an economy in equilibrium where the LM curve intersects ISe:
[Diagram: IS-LM equilibrium. The LM curve intersects ISe at interest rate r and output Y; IS1 lies to the right of ISe, and IS2 to the left.]
Where the IS and LM curves intersect is where there is an equilibrium in this economy. With this tool in hand the effects of the interest rate on GDP can be directly observed. As the IS curve shifts to the right along the LM curve, notice that there is an increase in GDP, but with a higher interest rate (IS1); just the opposite occurs as the IS curve shifts back towards the origin (IS2). From the discussion above it is clear what sorts of things shift the IS curve: fiscal policy or changes in Co or Io.
Fiscal policy is associated with changes in the position of the IS curve. If the
government decreases taxes, and there is no change in monetary policy associated
with this fiscal change we would expect to observe a shift to the right of the IS curve,
hence an increase in GDP, but with an increase in interest rates. For interest rates to
remain the same with this increase in GDP, the Fed would have to engage in easy
monetary policies and shift the LM curve to the right which would also have the effect of
increasing GDP. The same analysis would result in the case of an increase in
government expenditures.
In the case of an increase in taxes, GDP would fall, but so too would interest rates. If the Fed were to move interest rates back to previous levels, it would need to engage in tight monetary policies and shift the LM curve back to the left. In other words, the interest rate is the driving mechanism behind the equilibrium in the macroeconomy, as suggested by the monetarists and Keynesians alike. It is simply that neither the Keynesian cross nor the quantity theory of money made it easy to examine these relations in any significant detail. It is therefore clear that the results of fiscal policy are dependent on what happens with the money supply.
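This dependence can be seen by solving the hypothetical linear IS and LM curves from the two earlier sketches simultaneously; the parameters remain assumed, not estimated.

    MPC, MPI, MPX, A0 = 0.75, 10.0, 5.0, 250.0   # IS parameters (assumed)
    k, h, M, P = 0.5, 50.0, 400.0, 1.0           # LM parameters (assumed)
    multiplier = 1 / (1 - MPC)

    def equilibrium(A0, M, P):
        """Solve IS: Y = m*(A0 - b*r) and LM: r = (k*Y - M/P)/h simultaneously."""
        m, b = multiplier, MPI + MPX
        Y = (m * A0 + m * b * (M / P) / h) / (1 + m * b * k / h)
        r = (k * Y - M / P) / h
        return Y, r

    print(equilibrium(A0, M, P))        # baseline: Y = 925.0, r = 1.25
    print(equilibrium(A0 + 25, M, P))   # tax cut shifts IS right: Y and r both rise
    print(equilibrium(A0, M + 50, P))   # easy money shifts LM right: Y rises, r falls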
Expansionary and Contractionary Monetary Policies
The LM curve, too, can be moved about by changes in policy. If the money
supply is increased or decreased that will have obvious implications for the LM curve
and hence the interest rate and equilibrium level of GDP. Consider the following
diagram:
[Diagram: monetary policy in the IS-LM model. Easy money shifts the LM curve right (from LMe to LM1), lowering the interest rate and raising GDP along the IS curve; tight money shifts it left (to LM2), raising the interest rate and lowering GDP.]
As the Fed engages in easy monetary policies the LM curve shifts to the right, and the interest rate falls as GDP increases. This is the prediction of the quantity theory of money, and is consistent with what is presumed to be the case if there is no monetary neutrality in the Keynesian model. If the Fed engages in tight monetary policy then we would expect to observe a shift to LM2. In the case of LM2 the decrease in the money supply results in a higher interest rate and a lower GDP. Again, these results are consistent with the predictions of the quantity theory of money and with what we would expect without monetary neutrality in the case of the Keynesian model.
Foreign Shocks
The U.S. economy is not a closed economy, and the IS-LM Model permits us to
examine foreign shocks to the U.S. economy. There are three of these foreign shocks
worthy of examination here; these are (1) increase in demand for our exports, (2)
increases in foreign interest rates, and (3) currency speculators expectations of an
increase in the exchange rate of our currency with respect to some foreign currency.
Each of these foreign shocks results in an outward expansion of the IS Curve as
portrayed in the following diagram:
[Diagram: a foreign shock in the IS-LM model. The IS curve shifts right from ISo to IS1 along the LM curve, raising the interest rate from r1 to r2 and GDP from Y1 to Y2.]
As the demand for domestic exports increases overseas, the IS curve shifts from ISo to IS1. The interest rate increases domestically, which will tend to reduce the exchange rate, which will have two countervailing effects: to put downward pressure on further exports, and to increase imports. The result is that initially we export more, and that increase is dampened somewhat by the countervailing effects in the second round.
An increase in foreign real interest rates has the effect of increasing our domestic imports because the foreign exchange rate has become more favorable to foreigners, and allows their currency to go further in our economy. The end result is that the same shift in the IS curve will be observed, with the same results. Speculators in currency markets may also generate the same result as an increase in foreign interest rates, if they believe that a weak dollar will result in their currency gaining value. Hence, much of the foreign exchange and balance of payments phenomena can also be readily analyzed using the IS-LM model.
KEY CONCEPTS
IS Curve
Slope and Intercept
LM Curve
Money Market
Equilibrium in the IS-LM Model
Fiscal Policy
Monetary Policy
Real Interest Rates
Foreign Sector
Demand for exports
Foreign interest rates
Currency speculation
STUDY GUIDE
Food for thought:
Describe the role that interest rates play in determining equilibrium in the
macroeconomy. Is this role clear in the quantity theory of money or the Keynesian
models? Explain.
What if currency speculators believed that the dollar was going to become stronger, or
that the demand for exports decreased abroad? What would happen to the analysis
offered above?
Critically evaluate monetary and fiscal policy within the context of the IS-LM model. Is this different from what you believed coming into this course? Explain.
Sample Questions:
Multiple Choice:
If we are off of the IS curve, and above it, what will happen to bring us back to the IS
curve?
A. Monetary and / or fiscal policy will be required
B. Inventories will accumulate and bring us back
C. This cannot occur in theory or reality
D. None of the above
To shift the LM curve to the right which of the following must occur?
A. Increase taxes
B. Increase government expenditures
C. Increase the money supply
D. Any of the above
True / False:
The main determinants of the LM Curve are in the money market. {True}
Anything that affects the non-interest-dependent components of autonomous spending
shifts the position of the IS curve. {True}
Chapter 12
Economic Stability and Policy
The focus of this course is managerial; however, public policy has a significant impact on business conditions. It is therefore important that some discussion occur with respect to these issues. Perhaps the most important macroeconomic problems of the last forty years are unemployment and price level instability. During the post-World War II period price level instability has primarily meant inflation. There are several other matters that have also been the focus of economic policy, including poverty (income distribution), and the federal debt and budget deficit. The purpose of this chapter is to examine these policy issues.
The Misery Index
During the Reagan administration both unemployment and inflation topped ten percent. This record gave rise to an economic statistic called the misery index. The misery index is the sum of the civilian unemployment rate and the rate of inflation as measured by the consumer price index. For example, with 10 percent unemployment and twelve percent inflation, the misery index would be 22.
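The computation is trivial, but for completeness, a one-function Python sketch:

    def misery_index(unemployment_rate, inflation_rate):
        """Sum of the civilian unemployment rate and the inflation rate."""
        return unemployment_rate + inflation_rate

    print(misery_index(10, 12))   # 22, as in the example above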
The consumer price index is based on a market basket of goods that households typically purchase. Therefore, the CPI measures the impact of increasing prices on the standard of living of consumers. The unemployment rate also focuses on the welfare of families; when the household wage earner is out of work it has serious implications for that household's income. The misery index can thus be interpreted as a measure of the loss of well-being of households.
Since 1973 American households have not fared well. The U.S. Department of Commerce, Current Population Survey, tracks economic and demographic data concerning households. Between 1973 and 1993 the median real income of American households did not change. The slight increases enjoyed in the 1970s were all given back during the 1980s. However, between 1973 and 1993 there was a dramatic increase in the number of two wage earner households, and in households where a full-time worker also has a part-time job.
Increasingly, economists are warning that the distribution of misery in the U.S. economy is becoming more extreme and that the social ills associated with this misery are increasing. Crime, drug abuse, social and economic alienation, and stress related illnesses have become epidemic. The fraudulent argument that what is needed is a return to traditional family values is used as a substitute for confronting what is really transpiring: something must be done to reduce economic disparity, particularly as it negatively affects the family. As Lester Thurow has noted:
If the real GNP is up and real wages are down for two thirds of the work force,
as an algebraic necessity wages must be up substantially for the remaining one third.
That one third is composed of Americans who still have an edge in skills on workers
in the rest of the world -- basically those with college educations. In the 1980s
educational attainment and increases or decreases in earnings were highly
correlated. American society is now divided into a skilled group with rising real
wages and an unskilled group with falling real wages. The less education, the bigger
the income reduction; the more education, the bigger the income gains.
These wage trends have produced a sharp rise in inequality. In the decade of
the 1980s, the real income of the most affluent five percent rose from $120,253 to
$148,438, while the income of the bottom 20 percent dropped from $9,990 to $9,431.
While the top 20 percent was gaining, each of the bottom four quintiles lost income share; the lower the quintile, the bigger the decline. At the end of the decade, the top 20
percent of the American population had the largest share of total income, and the
bottom 60 percent, the lowest share of total income ever recorded.
Lester Thurow, Head to Head: The Coming Economic Battle Among Japan, Europe,
and America. New York: William Morrow and Company, 1992, p. 164.
It is beyond dispute that since 1981 there has been a fundamental change in
American society. As Lester Thurow notes, America is becoming two separate and
unequal societies, one group with an increasing misery index, and another one with
increasing affluence.
The Bush Administration's 2003 tax cut has also been heavily criticized as having provided significant tax relief for the wealthy in the form of reduced rates on dividends. At the same time, the deficits necessary to fund those dividend tax cuts will increase interest rates over the next few quarters, which will impact mortgage rates and consumer loan interest rates. This is an empirical question and will be answered by cold, objective evidence within the year.
The Phillips Curve
Since the Kennedy administration much of American economic policy has been based on the idea that there is a trade-off between unemployment and inflation. It was not until the recession of 1981-85 that we experienced very high unemployment together with high rates of inflation. It was thought that there was always a cruel choice in any macroeconomic policy decision: you can have high unemployment and low inflation, or you can have low rates of unemployment, but at the cost of high rates of inflation. This policy dilemma results from acceptance of a statistical relation observed between unemployment and inflation, named for A. W. Phillips, who examined the relation in the United Kingdom and published his results in 1958. (Actually, Irving Fisher had done earlier work on the subject in 1926, focused on the United States.)
The Short-Run, Trade-off Phillips Curve
The following diagram presents the short-run, trade-off view of the Phillips curve. Actually, A. W. Phillips' original research envisioned a linear, downward sloping curve that related nominal wages to unemployment. Over two years after A. W. Phillips' paper was published in Economica, Richard Lipsey replicated Phillips' study, specifying a non-linear form of the equation and using the price level, rather than nominal wages, in his model. It is Lipsey's form that is commonly accepted, in the literature, as the short-run, trade-off view of the Phillips curve.
The short-run, trade-off view of the Phillips curve is often used to support an activist role for government. This view of the Phillips curve shows that as unemployment declines, inflation increases, and vice versa. It is this negative relation between unemployment and inflation, portrayed in the above diagram, that gives rise to the idea of cruel policy choices between unemployment and inflation. However, there are alternative views of the Phillips curve relation.
Natural Rate Hypothesis
Classicists argue that there can only be a short-run, trade-off type Phillips curve if inflation is not anticipated in the economy. Further, these economists argue that the only stable relation between unemployment and inflation that can exist is in the long run. In the long run the Phillips curve is alleged to be vertical at the natural rate of unemployment, as shown in the following diagram:
In this view of the Phillips curve any rate of inflation is consistent with the natural rate of unemployment, hence the Natural Rate Hypothesis. It is based on the idea that people constantly adapt to current economic conditions and that their expectations are subject to "adaptive" revisions almost constantly. If this is the case, then businesses and consumers cannot be fooled into accepting unemployment to cure inflation, or vice versa, as is necessary for the short-run, trade-off version of the Phillips curve to exist.
If taken to the extreme, the adaptive expectations view can actually result in a positively sloped Phillips curve relation. The possibility of a positively sloped Phillips curve was first hypothesized by Milton Friedman. Friedman was of the opinion that there may be a transitional Phillips curve, caused by people adapting both their expectations and institutions to new economic realities.
In fact, the experience of 1981-85 may well have been a transitional period, just like that envisioned by Friedman. The beginning of the period was marked by OPEC driving up the price of exported oil, and by several profound changes in both the American economy and social institutions. The Reagan appointments to the N.L.R.B., the Justice Department (particularly the Anti-Trust Division), the Supreme Court (and Circuit Courts of Appeals, and District Courts), together with significant changes in the tax code, altered much of the legal environment. Further, the very significant erosion of the traditional base industries in the United States (automobiles, transportation, steel, and other heavy manufacturing) together with massive increases in government spending on defense arguably created a transitory economy, consistent with the increases in both inflation and unemployment. The positively sloped Phillips curve is shown in the following picture:
The positively sloped transitional Phillips curve is consistent with the observations of the early 1980s, when high rates of unemployment existed together with high rates of inflation (the positive slope) -- a condition called stagflation (economic stagnation accompanied by inflation).
Cruel choices may only exist in the case of the short-run, trade-off view of the Phillips curve. However, there may be a "Lady and Tiger Dilemma" for policy makers relying on the Phillips curve to make policy decisions. If fiscal policy is relied upon, only fortunate timing of the impact of those fiscal policies will result in any positive influence on the economy. Therefore, to act through taxes or expenditures may have a counter-productive effect by the time the policy impacts the economy (the tiger). On the other hand, if accurate forecasts two and three years into the future can be acted upon in time, recession or inflation could be mitigated by current action -- a real long-shot! (the lady).
However, if the rational expectations theories are correct, then the long-shot is exactly what would be predicted. Rational expectations is a theory that businesses and consumers will be able to accurately forecast prices (and other relevant economic variables). If the accuracy of consumers' and businesses' expectations permits them to behave as though they know what will happen, then it is argued that only a vertical Phillips curve is possible, as long as political and economic institutions remain stable.
Market Policies
Market policies are focused on measures that will correct specific observed
economic woes. These market policies have been focused on mitigating poverty,
mitigating unemployment, and cooling-off inflation. For the most part, these market
policies have met with only limited success when implemented.
One class of market policies has been focused on reducing poverty in the United States. These equity policies are designed to assure "a social safety net" at the minimum, and at the liberal extreme, to redistribute income. In part, the distribution of income is measured by the Lorenz curve, and more completely by the Gini coefficient. The following diagram presents the Lorenz curve:
The Lorenz curve maps the distribution of income across the population. The 45 degree line shows what the distribution of income would be if income were uniformly distributed across the population. However, the Lorenz curve drops below the 45 degree line, showing that the poorer percentiles of the population receive less than a proportional share of income. The further the Lorenz curve is bowed toward the percentage of income axis, the lower the income in the poorer percentiles of the population.
The Gini coefficient is the percentage of the triangle (mapped out by the 45 degree line, the indicator line from the top of the 45 degree line to the percentage of income axis, and the percentage of income axis) that is accounted for by the area between the Lorenz curve and the 45 degree line. If the Gini coefficient is near zero, income is close to uniformly distributed (and the Lorenz curve is close to the 45 degree line); if it is near 1, then income is distributed in favor of the richest percentiles of the population (and the Lorenz curve is close to the horizontal axis). If that distribution is consistent with the productivity or meritorious performance of the population, there may be an efficiency argument that can be used to justify the distribution. However, if the high income skewness of the distribution is not related to productivity, then the skewed distribution is inefficient and unfair, hence mal-distributed. In the United States the Gini coefficient exceeds .5, and while incomes in the lower three-quarters of the upper quintile are highly correlated with education (and presumably productivity), the highest part of the distribution offers no prima facie evidence of the justice or efficiency of such high incomes. Recently (December 8, 1995), CNN reported that Chief Executive Officer salaries in the entertainment industries appeared to be out of line with those of similar officials in more productive industries, and that stockholders in several of these companies were beginning to revolt over these high levels of compensation. In particular, Viacom and Disney were experiencing stockholder queries concerning executive salaries. In general, most economists familiar with the income distribution in the United States would probably agree that income in this country is mal-distributed, because it does not reflect the market contributions of those at either the highest end or the lower end of the distribution.
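A minimal sketch of how a Gini coefficient can be computed from quintile income shares; the shares below are hypothetical, not U.S. data.

    def gini_from_shares(shares):
        """Gini coefficient from equal-population group income shares (poorest first).

        Approximates the area between the Lorenz curve and the 45 degree line
        with the trapezoid rule, as a fraction of the triangle's area (0.5).
        """
        n = len(shares)
        lorenz = [0.0]
        for s in shares:
            lorenz.append(lorenz[-1] + s)   # cumulative income share
        area_under = sum((lorenz[i] + lorenz[i + 1]) / 2 for i in range(n)) / n
        return (0.5 - area_under) / 0.5

    print(gini_from_shares([0.20] * 5))   # perfect equality -> 0.0
    print(round(gini_from_shares([0.04, 0.09, 0.15, 0.23, 0.49]), 3))   # skewed -> 0.416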
Productivity has also been the subject of specific policies. The Investment Tax Credit, the WIN program, and various state and federal training programs have been focused on increasing productivity. For the most part, there has been very little evidence concerning most of these programs that gives reason for optimism. The one exception was the work of Mike Seeborg and others that found substantial evidence that the Job Corps provided skills that helped low income, minority teenagers find and keep reasonably well-paying jobs.
Many of the recent treaties concerning international trade have aspects that can be classified as market policies. To the extent that trade barriers to American exports have been reduced through NAFTA and GATT, it was hoped that there would be positive effects from these treaties. In fact, little, if any, positive effect has been observed from these initiatives.
There have been recurrent attempts to directly control inflation through price controls. These controls worked well during World War II, mainly because of appeals to patriotism during a war in which the United States was attacked by a foreign power. Further, during World War II the idea of sacrifice was reinforced by many families having relatives serving in the military, which made the idea of sacrifice more acceptable to most people. However, absent the popular support for rationing and wage and price controls created by World War II, these policies have been uniform failures. President Carter tried voluntary guidelines that failed, and Richard Nixon had earlier tried short-lived wage and price controls that were simply a policy disaster.
Debt and Deficit
The national debt is often argued to have an adverse effect on interest rates,
which, in turn, crowds out private investment. The empirical evidence concerning this
“crowding out” hypothesis suggests that increased public debt often results in higher
interest rates, assuming that the debt was not acquired to mitigate recession.
The supply-side economics of the Reagan Administration was based on the
theory that stimulating the economy would prevent deficits even as government
spending for the military was substantially increased. This failed theory rested on
something called the Laffer Curve.
The Laffer Curve (named for Arthur Laffer) is a relation between tax rates and tax
receipts. Laffer's idea was rather simple and straightforward: he posited that there was
an optimal tax rate, above and below which tax receipts fell. If the tax rate was
below the optimum, an increase in the rate would increase receipts; if the rate was
above the optimum, receipts could be increased by lowering the rate. The
Laffer Curve is shown below:
[Figure: the Laffer Curve, with the tax rate on the vertical axis and tax receipts on the horizontal axis; receipts rise as the rate increases toward the optimum, then fall as the rate rises further]
The Laffer Curve shows that the same tax receipts will be collected at the rates
labeled both "too high" and "too low." The supply-siders thought that tax rates
were too high and that a reduction in tax rates would permit the economy to slide down
and to the right on the Laffer Curve and collect more tax revenue. In other words, they
thought the tax rate was above the optimum. Therefore Reagan proposed and obtained
from Congress a big tax rate reduction and found, unfortunately, that we were below the
optimum, and tax revenues fell. While tax revenue fell significantly, the Reagan
Administration increased the defense budget by tens of billions of dollars per year. The
reduction in revenues, combined with substantial increases in government spending,
made for record-breaking federal budget deficits and substantial additions to our
national debt.
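As an illustration of the logic only (not of any actual tax schedule), the short Python sketch below assumes a taxable base that shrinks linearly as the rate rises. Under that assumption, receipts trace an inverted U and peak at an interior rate.

```python
import numpy as np

rates = np.linspace(0.0, 1.0, 101)   # tax rates from 0% to 100%
base = 1000.0 * (1.0 - rates)        # assumed taxable base, shrinking with the rate
receipts = rates * base              # receipts = rate x base

best = rates[np.argmax(receipts)]
print(f"Receipts peak at a tax rate of about {best:.0%}")  # 50% under this assumption
```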
There were other tenets of the supply-side view of the world. These economists
thought there was too much government regulation, and they would have de-regulated
most aspects of economic life in the United States. However, after Jimmy Carter
de-regulated the trucking and airline industries, there was considerable rhetoric and little
action concerning the de-regulation of other aspects of American economic life.
Politics and Economic Policy
Unfortunately, the reality of American economic policy is that politics is often the
main motivation for policy. Whatever anyone may think of Reagan's presidency, there is
simply no doubt that he was a more astute observer of the political arena than
any of his competitors. Reagan argued that taxes were too high and needed to be cut.
This is probably the single most popular political theme any candidate can adopt.
Remember McGovern? He said that if he were elected President he would raise taxes;
he did not have to worry about how, because he won only two states. The surest way to
lose a bid for public office is to promise to raise taxes.
As it turned out, McGovern was right: there should have been a tax increase at
the time Reagan cut taxes. Being right does not have anything to do with being
popular. What we now face is a direct result of the unprecedented deficits run during
the Reagan years. In fact, during those eight years this country acquired nearly $1.7
trillion of additional national debt. No other President in U.S. history had generated this
amount of debt in nominal dollars. However, before Reagan is judged too harshly, it
must be remembered that we were in the midst of a major recession during his first
term in office, and that the Cold War was still at its zenith. Again, in Mr. Reagan's
defense, the first three years of his administration also witnessed exceedingly high
rates of inflation, which are reflected in the nominal value of the deficits.
The following table shows the national debt for the period 1980 through 2005.
Notice how the national debt accelerated during the Reagan administration, and
remember that the first three years of that administration were years of deepening
recession.
______________________________________________________________________
Table I: National Debt 1980 - 2005
______________________________________________________________________
YEAR        NATIONAL DEBT (billions of U.S. dollars)
1980          908.3
1981          994.3
1982         1136.8
1983         1371.2
1984         1564.1
1985         1817.0
1986         2120.1
1987         2345.6
1988         2600.6
1989         2868.0
1990         3206.6
1991         3598.5
1992         4002.1
1993         4351.4
1994         4643.7
1995         4921.0
1996         5181.9
1997         5369.7
1998         5478.7
1999         5606.1
2000         5629.0
2001         5803.1
2002         6228.2
2003         6783.2
2004         7379.1
2005         7932.7
______________________________________________________________________
Nearly $1,692.3 billion of additional debt was accumulated during the years that
President Reagan was in office. The first nine months of this period were under Carter's
budget, but the first nine months of 1989 were under Reagan's, and these data
therefore make Reagan's record look better than it really was. At a six percent interest
rate, the then-current yield on the 30-year Treasury Bond (as of December 8, 1995), the
debt accumulated during these eight years adds $101.5 billion to the federal budget in
interest payments.
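As an arithmetic check on that figure: 0.06 × $1,692.3 billion ≈ $101.5 billion per year in interest.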
There is a lesson here that has nothing to do with political propensities: tax cuts
that do not generate continuing economic growth will add to the national debt, and the
interest payments on that debt will add to even further budget deficits. It cannot be
denied that Reagan was one of the most popular Presidents of this century. The very
things that made him popular, tax cuts and military build-ups, are responsible for much
of our current controversy concerning the balanced budget.
Is the Debt Really a Problem?
The most valid criticism of the debt is its potential for disrupting credit markets by
artificially raising interest rates and crowding out private investment. Naturally, a debt
as large as ours will have an upward influence on interest rates, but the evidence does
not suggest that the effect is substantial.
Today, we find ourselves in roughly the same position this country was in in 1805.
President Jefferson borrowed about the same amount as our GDP from England and
Holland to buy Louisiana from the French (we now owe an amount almost the same as
GDP). History has taught us that even when we added the debt acquired in the War of
1812, it did not bankrupt the government or its citizens. By the end of World War II, we
again had a national debt of $271 billion against a GDP of $211.6 billion. The predictions
that my generation would be paying 60 and 70 percent effective tax rates to pay off the
debt never materialized. Frankly, I wish my debt were only equal to my annual income,
and I suspect most people in their thirties and forties have the same wish. Surely the
size of the debt and current deficits alone are not a source of any grief; if they are, then
little has been learned from history and little is known about economics.
The serious question is why we acquired the debt and what we must give up if
we are to pay it off. When Reagan took office, we were in the midst of a cold war. The
increases in government expenditures for the military caused our chief rivals to spend
larger proportions of their GNP on the military and eventually contributed to the
economic and political collapse of the Soviet bloc. Was the end of the Cold War worth
the debt we acquired? Are the benefits of education worth a few million dollars in
current debt? Is providing for the poor, the elderly, and children worth a few billion in
debt? Are veterans' pensions for those who stood between us and our enemies worth a
few billion in debt? What about the Interstate Highway System, subsidies for the
company where you work, research for medicine, the pure sciences, the social
sciences, and the tens of thousands of things on which the government spends our tax
dollars? These are the priorities that any society must set. Probably the only realistic
answer to whether the debt is a problem lies in what we do about it and what priorities
we set. History will judge this society; whether we are judged compassionate or
barbarians, economically astute or fools, is for future generations of historians to
decide. Let us all hope that we choose correctly.
Hedayeh Samavati, David A. Dilts, and Clarence R. Deitsch, "The Phillips Curve:
Evidence of a 'Lady or Tiger Dilemma,'" The Quarterly Review of Economics and
Finance. Vol. 34, No. 4 (Winter 1994) pp. 333-345.
During the period examined, January 1974 to April 1990, the evidence reported
here suggests that there is a unidirectional causal relation from the inflation rate to
the rate of unemployment. If the Phillips Curve were vertical over this sixteen-year
period, one should have observed no causality between the unemployment rate and
the rate of inflation (the natural rate hypothesis, i.e., any rate of inflation can be
associated with the natural rate of unemployment). The empirical findings reported
here, however, suggest a non-vertical Phillips Curve.
Finally, the results reported here bear on the proper specification of the
empirical models used to test the Phillips Curve relation. Friedman argued that the
proper specification of the regression equations used to estimate the Phillips Curve
relation is of significance (both theoretically and empirically). "The truth of 1926 and
the error of 1958" (as Friedman argued) is supported by the evidence presented in
this paper. The statistical evidence presented here supports Friedman's claim that
inflation is properly specified as the independent variable in "Phillips Curve" analyses.
That is, rather than the specification proposed by A. W. Phillips (1958), the proper
specification is that proposed by Irving Fisher (1926), as asserted by Friedman. This
finding is of significance to those researchers using ordinary least squares to
examine relations between inflation and unemployment. Irving Fisher's specification
is consistent with Friedman's well-known theoretical arguments concerning the
posited relations between inflation and unemployment, hence his preference for
inflation as the independent variable.
Maybe this box does not represent the final word in the Phillips curve
controversy, but this research suggests that there is a short-run Phillips curve and
that the idea of rational expectations may not be as good as it looks at first blush.
This study applied Granger causality methods to inflation and unemployment data for
the United States over the period identified to see if there was causality; the evidence
suggests that inflation causes unemployment.
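For readers who want to see the shape of such a test, here is a minimal sketch using the Granger-causality routine in statsmodels. The two series below are simulated stand-ins; the actual study used U.S. inflation and unemployment data for January 1974 through April 1990.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 196  # roughly the number of months from Jan 1974 to Apr 1990

# Simulated stand-ins: unemployment responds to inflation three months back.
inflation = rng.normal(4.0, 1.0, n)
unemployment = np.empty(n)
unemployment[:3] = 6.0
unemployment[3:] = 6.0 + 0.3 * inflation[:-3] + rng.normal(0.0, 0.5, n - 3)

# Column order matters: the test asks whether the SECOND column helps
# predict the FIRST beyond the first column's own lags.
data = pd.DataFrame({"unemployment": unemployment, "inflation": inflation})
grangercausalitytests(data[["unemployment", "inflation"]], maxlag=6)
```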
KEY CONCEPTS
Misery Index
Inflation
Unemployment
Phillips Curve
Short-run Trade-off
Long-term, natural rate hypothesis
Positively sloped
Rational Expectations
Market Policies
Lorenz Curve
Gini Coefficient
Investment Tax Credits
NAFTA & GATT
Wage-Price Policies
Laffer Curve
Supply Side Economics
Budget Deficit
National Debt
STUDY GUIDE
Food for Thought:
Compare and contrast the various views of the Phillips Curve. Map out each and
demonstrate how there may be a cruel choice in economic policy.
Develop the Laffer Curve. What does it tell us? How then can we have witnessed the
large increases in the deficit during the years this model held center stage? Critically
evaluate.
What are market policies? Explain the major market policies observed in the U.S.
economy today. Compare and contrast these.
Sample Questions:
Multiple Choice:
In the short-run view of the Phillips curve policy-makers are left with a cruel choice.
What is this cruel choice?
A.
B.
C.
D.
Inflation will exist regardless of the level of unemployment
Unemployment will exist regardless of the level of inflation
Policies that improve unemployment will create inflation and vice versa
Inflation appears not to have a causal relation with unemployment, hence a
downward sloping Phillips curve is not plausible.
Stagflation is:
A.
B.
C.
D.
general decreases in the price level
excess employment which causes inflation
both high rates of unemployment and inflation
both very low rates of unemployment and inflation
True - False:
The Laffer curve suggests that the higher the tax rate the lower will be tax revenues.
{FALSE}
Supply side economics was the view of the Reagan administration which believed Say's
Law. {FALSE}
Chapter 13
Data: Categories, Sources, Problems and Costs
Business conditions analysis requires that the analyst have a solid understanding
of the theoretical underpinnings of macroeconomics. However, theory is simply not
enough. One must have data to monitor or analyze what is occurring in the economy.
Data imply empirical economics, and a mastery of a tool-kit of empirical methods to
analyze the economy and perhaps make forecasts from those methods. The
purpose of this chapter is to present the types of data, their sources, and the practical
problems in gathering, storing, and using data. A following chapter, Chapter 15, will
introduce some preliminary types of data analysis used to become familiar
with and understand the data set with which you are working.
Data Types
Data may be categorized in several different ways, but all classifications tend to
be by the nature of the underlying processes that generate the data. Economic data
generally fall into two basic categories: time series data and cross-sectional data.
Time series data are metrics for a specific variable observed over time. Cross-sectional
data, on the other hand, are data for a specific time period, for a specific
variable, but across subjects. For example, unemployment rates are measured over
time. The U.S. average unemployment rate is calculated for months, quarters, and
years; such data are time series data. On the other hand, for each month, quarter,
and year, the Bureau of Labor Statistics publishes unemployment rates for each State in
the U.S. Unemployment rate data across States for a specific month, quarter, or year
are called cross-sectional data.
Economists sometimes find that cross-sectional data observed over time provide
insights not available by looking at only time-series or only cross-sectional data. The
combination of cross-sectional data over time is called panel data. The empirical
examination of each type of data has its own peculiarities and statistical pitfalls. These
will be examined in greater detail in a later chapter.
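A minimal illustration of the three shapes of data, using the pandas library; the states, years, and unemployment figures below are hypothetical.

```python
import pandas as pd

# Hypothetical state unemployment rates for two states and two years.
panel = pd.DataFrame({
    "state": ["Indiana", "Ohio", "Indiana", "Ohio"],
    "year": [2004, 2004, 2005, 2005],
    "unemployment_rate": [5.2, 6.1, 5.4, 5.9],
}).set_index(["state", "year"])

print(panel.loc["Indiana"])           # one state over time: a time series
print(panel.xs(2004, level="year"))   # all states in one year: a cross-section
# The full two-level table is a (tiny) panel.
```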
There are discrete data and continuous data. Discrete data are metrics that
can take on only a specific, finite number of potential values. Continuous data, on the
other hand, are metrics that can take on any value, either from negative infinity to
positive infinity or within specific limits. Discrete data generally require more
sophisticated statistical models than simple ordinary least squares. Continuous
variables include things such as Federal budgets, GDP, and the work force. Discrete
variables include things like Likert scales in survey research.
Data can also be classified into stocks and flows. A stock variable is one that
has a specific value at some point in time. A flow variable is one that has a beginning,
ending, or changing value over time. Inventory data provide a useful example of the
difference between stocks and flows. At any point in time a retail establishment will
have a certain dollar value of goods on hand and for sale; such a metric is a stock
variable. On the other hand, the amount of turnover of that inventory each week or
month is a flow variable.
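In code, the distinction is simply levels versus changes; the inventory figures here are made up.

```python
# Month-end inventory levels (stocks), in dollars; hypothetical figures.
month_end_inventory = [120_000, 110_000, 95_000, 130_000]

# The month-to-month change in the level is a flow.
monthly_flow = [later - earlier for earlier, later in
                zip(month_end_inventory, month_end_inventory[1:])]
print(monthly_flow)  # [-10000, -15000, 35000]
```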
Data are sometimes generated by surveys rather than by the measurement of
some state of nature. Survey data are generally reported by an individual who may or
may not have some motivation to accurately report the data requested in the survey.
Therefore, when surveys are used to gather data, two specific issues arise with respect
to the data: validity and reliability. Validity has two broad dimensions. First, the
response rate from the sample of a population will define how representative that
sample is of the population; the sample must be large enough to permit confidence that
inferences about the population can be drawn from the sample. The second issue
involves whether the items on the survey actually capture what the researcher intends.
Validity in this respect means that care must be exercised to assure that the items on
the survey are constructed so as to eliminate ambiguities. Field testing and editing of
the instruments often make survey research time consuming and difficult.1
This second aspect of validity has three distinct dimensions: (1) content validity,
(2) criterion-related validity, and (3) construct validity. Content validity concerns
whether the measures used actually capture the concepts that are the target of the
research. Face validity is often what is relied upon concerning content; that is, on the
face of it, the measures appear adequate. Otherwise, one must have a theoretical basis
for the measures. Criterion-related validity involves whether the measures differentiate
in a manner that predicts values of the dependent, or criterion, variable. Finally,
construct validity involves whether or not the empirical measures are consistent with
the hypothesis or theory being tested. Each of these validity issues is described in
greater detail in Sekaran's book.2
Reliability has to do with assuring the accuracy of responses. Inference from
opinion surveys and self-reported data is often very risky. A well-constructed survey
will ask the same thing in more than one way so that correlations can be calculated, to
at least assure consistency in a respondent's answers. The reliability of survey results
is often problematic, but that does not render these methods necessarily unreliable. It
just means that thought must be given to ways of checking the accuracy and
consistency of the data gathered.

1 Uma Sekaran, Research Methods for Business: A Skill Building Approach, second
edition. New York: John Wiley and Sons, 1992, presents a detailed discussion of the
nuts and bolts of survey and interview methods for business analyses.
2 Ibid.
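The consistency check described above amounts to correlating redundant items; a minimal sketch with hypothetical Likert-scale responses:

```python
import numpy as np

# Hypothetical 1-5 Likert responses to two differently worded items
# intended to measure the same attitude.
item_a = np.array([4, 5, 3, 4, 2, 5, 1, 4])
item_b = np.array([4, 4, 3, 5, 2, 5, 2, 4])

# A high inter-item correlation is evidence of consistent responses.
r = np.corrcoef(item_a, item_b)[0, 1]
print(f"Inter-item correlation: {r:.2f}")
```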
States-of-nature data involve things like market prices, numbers of persons, tons
of coal, and other such metrics that can be physically observed or measured. Often
these data are estimated from surveys (e.g., the Consumer Price Index), so there are
situations in which methods of gathering data are combined.
Interviews and observational methods are also often utilized to gather what is
sometimes referred to as primary data. These data also present particular difficulties
concerning the metrics used and the validity and reliability of the data gathered with
these methods.
There are seasonally adjusted and not seasonally adjusted data. Seasonal
adjustment methods vary by the type of data examined. The Consumer Price Index is
published both seasonally adjusted and without seasonal adjustment. There are almost
always published technical reports that discuss the methods used to adjust for seasonal
variations in the data.
Conceptually, there are a number of economic time series that have
specific, predictable biases created by the time of year. Unemployment predictably
increases every summer because students and teachers enter the work force during the
summer months, then exit again at the end of summer or the beginning of fall when
school resumes. The Christmas season always produces a jump in retail sales that
declines when the return rush is over in January. In these cases seasonal adjustment
permits long-term trends to be identified that may be hidden in the short-term variations.
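A sketch of the idea using the decomposition routine in statsmodels. The monthly "retail sales" series below is simulated with a built-in December spike; real adjustment procedures (such as the Census Bureau's X-12/X-13 programs) are considerably more sophisticated.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Six years of simulated monthly retail sales with a December spike.
idx = pd.date_range("2000-01-31", periods=72, freq="M")
rng = np.random.default_rng(1)
sales = pd.Series(
    np.linspace(100.0, 130.0, 72)            # gradual upward trend
    + np.where(idx.month == 12, 25.0, 0.0)   # Christmas-season jump
    + rng.normal(0.0, 2.0, 72),              # noise
    index=idx,
)

# Estimate and remove the seasonal component to expose the trend.
result = seasonal_decompose(sales, model="additive", period=12)
seasonally_adjusted = sales - result.seasonal
```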
Data Sources
Data sources are also diverse. There are public data sources and private data
sources. Public data sources are governments and international organizations. Private
data sources include both proprietary and non-proprietary sources. Each of these will
be briefly examined in the following paragraphs.
Public data sources for the United States include both Federal and State
agencies. Economic data are gathered, compiled, and published by virtually every
Federal agency. State and local agencies also gather data, much of which is reported
to a Federal agency that compiles and publishes the data. For example, crime
statistics are available annually from the Federal Bureau of Investigation for both
Federal and State crimes. The Justice Department directly gathers statistics on Federal
crimes, but must rely on State reporting to compile statistics concerning crimes in the
states, such as car theft, murder, and battery. The same relationship holds for several
other types of data. Unemployment is compiled in two distinct ways. There is the
household survey, the Current Population Survey, conducted by the Commerce
Department's Census Bureau, which is then compiled and turned over to the Bureau of
Labor Statistics. There is also another series, called "covered" unemployment, which is
reported by the States through their Employment Security Divisions; those numbers are
also compiled and reported by the Commerce and Labor Departments.
The Commerce Department gathers industry data, foreign trade data, and a
wealth of statistics concerning the National Income Accounts. Most developed
countries follow the lead of the United States in closely monitoring various
demographic, economic, infrastructure, and health statistics. There are also several
international organizations involved in these sorts of activities. Perhaps one of the
easiest-to-use sources of international statistics is the U.S. Central Intelligence Agency.
The CIA routinely publishes the World Fact Book, which is available online at
www.cia.gov and contains a wealth of economic, political, demographic, and health
care statistics for the majority of the world's countries.
The Board of Governors of the Federal Reserve System and several of the
Federal Reserve Banks publish statistical series on various monetary aggregates. Most
of these data are available in historical time series.
The World Bank and the International Monetary Fund monitor various nations' trade,
inflation, and indebtedness data. The World Bank routinely publishes the World Debt
Tables, which report various aspects of the debt markets for LDCs' debts, both private
and sovereign. The United Nations, through the auspices of its dozens of
international agencies, also gathers and publishes data concerning various aspects of
the demographics and economics of the world's nations.
These public sources of information publish current statistics, and most also
make historical time series data available. Most of the data gathered by the U.S.
Government are available online at sources such as www.bls.gov and
www.federalreserve.gov. These data generally have accompanying handbooks or
articles that describe how the data are gathered, de-seasonalized, or otherwise
technically handled.
There are also private sources of information, both proprietary and
non-proprietary. Stock, bond, and commodity prices are generally proprietary
information available for a fee from a brokerage or investment bank. However, there
are sources for much of this information that have little or no cost. The S&P 500 stock
price data are available on the Standard & Poor's website; however, historical data
come at a price.
There are non-profit organizations that publish data. The National Bureau of
Economic Research and other such organizations keep tabs on various aspects of the
business cycle and the National Income Accounts and have data available. There are
also several professional and industry organizations that maintain data sources.
Perhaps one of the more interesting of these is the ACCRA Price Index, a
cross-sectional price index published quarterly, to which a person may subscribe for a
small fee (www.accra.org). The market basket assumes a mid-management household,
and the survey covers many smaller cities; Fort Wayne is included in the survey and
has been fairly stable at about 88% of the national average cost of living.
Problems and Costs
How we know something is a matter of individual perception. That perception is
shaped by the data we have and the observations we have made. These perceptions
will also be colored by a person's values and training. As a result, people untrained
in the scientific method will often attach significance to evidence they believe supports
their position; this is a form of bias and is often associated with anecdotal evidence. As
it turns out, anecdotal evidence is sometimes very misleading. The natural tendency is
to rely on a single isolated observation and use inductive logic to form conclusions.
What is required scientifically, however, is the systematic gathering of a large number of
observations and the use of deductive logic to test theories concerning why the data
are the way they are. This, however, is expensive in time and resources. Information is
almost never free.
It is the cost of information that often becomes a limiting factor in making
business decisions based on such things as business conditions analysis. Over the
past several years there have been numerous studies concerning the problems with the
use of information for making managerial decisions, including strategic
management. Kathleen M. Sutcliffe and Klaus Weber published an article in the Harvard
Business Review in May 2003 that takes a very practical, managerial approach
to the problems associated with the "High Cost of Accurate Knowledge."
Sutcliffe and Weber argue that the accuracy of data is an expensive proposition
and that too often managers have neither the expertise nor the time to fully analyze
what imperfect information they can gather. What Sutcliffe and Weber contend is that
intuition is perhaps a better modus operandi, what they call a "humble optimism,"
based on what knowledge can be gleaned without becoming a victim of the old cliché
"paralysis by analysis."
The effects of accuracy are summarized in the following charts:
[Figure: two charts from Sutcliffe and Weber. In the first, the magnitude of organizational change traces an inverted U against perceptual accuracy; in the second, firm performance declines as perceptual accuracy rises.]
Sutcliffe and Weber's studies suggest that organizational change has an inverted-U-shaped
relation to perceptual accuracy. In other words, over the initial range there are
increasing returns to perceptual accuracy in producing positive organizational change,
up to some optimal point. Beyond that optimum, additional accuracy actually yields
negative returns. On the other hand, organizational performance is negatively correlated
with perceptual accuracy: the more resources (time, etc.) that are devoted to the
perceptual accuracy of the analysis or the underlying data, the more likely it is that
performance will decline.
These authors account for the relation of perceptual accuracy to both
organizational change and performance with several different factors. Chief among
these are the individual characteristics of the executives, growth trends in the
industry, complexity, the controllability of the business' fortunes, positiveness in the
culture, and the company's contingent strategies. The more capable a manager, the
more familiar with the business, and the more experienced with the specific issues that
arise, the more likely that manager is to make the right call. The more optimistic the
organizational culture, the more likely it is that a "get it done" ethic guides the
organization. The ability to control the business' success regardless of environment
makes data and its analysis less critical to the business, whereas the more complex the
business' relation to its environment, the more critical data and its analysis become to
the success of the company.
There is also an old cliché that is operable here: "A higher tide floats all boats."
If the business is in a market in which there is rapid growth, then data become less
important to the extent that everyone is prospering. The dot-com firms of the 1990s,
the mini steel mills of the last two decades, and numerous other examples suggest
some validity to this view.
There will be more concerning these issues in the final chapter of this book.
Suffice it to say that there are significant internal costs in the management of data and
its uses.
There are other writers who point to other problems with data and its analysis. In
many enterprises there are external requirements that force a firm to invest heavily in
the gathering and analysis of data. Among the causes of this behavior are the
substantial government regulations involving health and safety that require data
gathering and analysis. The automobile industry is required to engage in quality control
measures and to document those measures with data for safety-critical parts.
Pharmaceutical companies face very stringent FDA regulations concerning not only the
safety of their products; there must also be evidence of their efficacy before those
products can go to market. While these data issues are not necessarily business
conditions in a macroeconomic sense, they are part of the regulatory environment
facing businesses and are of critical importance to the success of the enterprise.
Further, just like market analysis, the development of this information and its analysis
must be part of the firm's strategic planning functions. Again, these are issues to be
addressed later in this course.
Professor James Brian Quinn has observed that the move into "high-tech"
(science-based) business has forced many firms to examine their strategies and how
they deal with information. In fact, Dr. Quinn believes that these science-based
industries have been forced into a situation where knowledge sharing is of critical
importance to the success of industries and of the firms within them. The following
quotation states his point:
James Brian Quinn, "Strategy, Science and Management," Sloan Management
Review, Summer 2002
No enterprise can out-innovate all potential competitors, suppliers and external
knowledge sources. Knowledge frontiers are moving too fast. In almost every major
discipline, up to 90% of relevant knowledge has appeared in the last 15 years. . . .
To exploit such interactions, leading companies develop flexible core-knowledge
platforms and, equally important, the entrepreneurial skills to seize opportunity
waves. They systematically disseminate and trade knowledge and, if necessary,
share proprietary information, recognizing that larger, unexpected innovations may
arise to increase their own innovations’ value by orders of magnitude. They know it’s
impossible to foresee all combinations and profit opportunities.
As waves drive surfers, external forces often drive companies' success.
Managers must read the waves expertly. But that means adopting a humility
unfamiliar to traditional titans of industry. Accomplishments depend on ensembles of
others' work and unfold in ways managers can't anticipate. Because leaders can't
predict which combinations will succeed, they can't drive their organizations toward
predetermined positions.
Conclusions
Clearly the gathering and analysis of data have implications for both the success
and the cost structure of the firm. However, it is also clear that much of the problem
associated with data, its acquisition, and its use is a function of the culture and
behavior within the organization. It is rather striking how "humility" recurs as a theme in
this literature. In many organizations the control of information is a source of power
within the business, and that is almost universally destructive to the enterprise.
Finally, economists have long recognized that there is a cost associated with
information. In the analysis of the macroeconomy it is worth noting that information,
through both rational and adaptive expectations, played a very important role in the
results obtained from the economic models presented earlier. The matter of
expectations also played significant roles in policy decisions: fiscal lags, inflationary
expectations, and aggregate demand are shaped heavily by what information people
have and what they do with it.
KEY CONCEPTS
Data Categories
Time Series
Cross-section
Panel Data
Stocks and Flows
Discrete and Continuous
Surveys and states of nature
Validity and reliability
Seasonally adjusted versus not seasonally adjusted
Sources of Data
Public
Government
Relations between levels of government
International Organizations
Private
Non-Proprietary and proprietary
Costs of data acquisition and analysis
Determinants of performance
Organizational culture
Performance-Organizational Change
Humility versus Power and the role of information
STUDY GUIDE
Food for Thought:
Why has information traditionally been power? Why is this concept being replaced
with the idea that humility should reign with respect to information and its uses?
Explain.
Critically evaluate the idea that there is some optimal level of information for an
organization or a manager to utilize.
Compare and contrast the various types of data, their uses, and their short-comings for
management of an enterprise.
Have advances in science increased the need for information in business enterprises?
Explain.
When would you use a seasonally adjusted data series rather than one that has not
been seasonally adjusted? Explain.
Sample Questions
Multiple Choice:
Data that is concerned with the levels of a particular variable at a specific time is:
A.
B.
C.
D.
Flow
Stock
Discrete
Continuous
Perceptual accuracy is related to the performance of a firm, according to Sutcliffe and Weber:
A.
B.
C.
D.
Negatively
Positively
Positively to a point then negatively
Not at all
True / False:
Panel data are a mixture of discrete and continuous data. {False}
Survey data have problems not associated with data that are measurements of states
of nature, and they require that greater attention be paid to reliability and validity. {True}
Chapter 14
Economic Indicators
Perhaps the easiest method of analyzing macroeconomic trends and cycles is
with the use of rather simple devices called "economic indicators." The Leading
Economic Indicators are often the focus of the press, particularly the financial press
(e.g., CNBC and the Wall Street Journal). By examining specific data, without the use
of complicated statistical methods, one can often obtain a sense of where the
macroeconomy is headed, where it is currently, and where it has been. The purpose of
this chapter is to present the economic indicators and describe their usefulness and
limitations. These indicators are given such attention not just because of their
popularity in the press; they are also theoretically connected to the business cycle and
are useful, quick-and-dirty methods of forecasting where we are in the business cycle.
Cyclical Indicators: Background
Arthur F. Burns and Wesley C. Mitchell at the National Bureau of Economic
Research began to examine economic time series data in the 1930s to determine
whether there were data useful in predicting business cycles.3 This research produced
a classification of approximately 300 statistical series by their response to the business
cycle. These data are classified as leading indicators, concurrent indicators, and
trailing indicators.
The selection of variables to be included in the indices is done on the basis of
criteria discussed below. The index of leading indicators is designed to provide a
forecasting tool that permits calls to be made in advance of turning points in the
business cycle. The index of concurrent indicators is a confirmatory tool, and the
trailing-indicator index suggests when each phase of the business cycle has been
completed. There is nothing complicated about the application of these indices, as
opposed to large multiple regression models. However, the simplicity comes at the
expense of modeling the underlying processes that produce the time series data used
to construct these indicator indices.
These indicators are described in the Bureau of Economic Analysis' Handbook of
Cyclical Indicators.4 The indicators were selected using six criteria, which are
described in John J. McAuley's Economic Forecasting for Business: Concepts and
Applications, to wit:
3 W. C. Mitchell and A. F. Burns, Statistical Indicators of Cyclical Revivals. New York:
NBER Bulletin 69, 1938.
4 Handbook of Cyclical Indicators. Washington, D.C.: Government Printing Office, 1977.
John McAuley, Economic Forecasting for Business: Concepts and Applications
Cyclical indicators are classified on the basis of six criteria: (1) the economic
significance of the indicator; (2) the statistical adequacy of the data; (3) the timing of
the series – whether it leads, coincides with, or lags turning points in overall
economic activity; (4) conformity of the series to overall business cycles; a series
conforms positively if it rises in expansions and declines in downturns; it conforms
inversely if it declines in expansions and rises in downturns: a high conformity score
indicates the series consistently followed the same pattern; (5) the smoothness of the
series’ movement over time; and (6) timeliness of the frequency of release – monthly
or quarterly – and the lag in release following the activity measured.
In other words, not only is there a basis in economic theory for these indicator variables;
they also appear to be statistically well-behaved over time. As a result the
indicators are considered, in general, to be useful, simple guides to variations in
the business cycle in the U.S.
Leading Indicators
There are twelve variables included in the index of leading economic
indicators. Leading indicators are those variables that precede changes in the
business cycle. In a statistical sense, these are the variables that rather
consistently and accurately lead turning points in the business cycle, as measured by
other series (i.e., the lagging and coincident indicators, which will be presented
below, as well as the national income accounting data and employment data that are
used to define the business cycle).
Table 1 presents these variables and their weights within the index of
leading indicators.
___________________________________________________________
Table 1: Leading Economic Indicator Index
Variable                                                                    Weight in Index
1. Average workweek of production workers, manufacturing                        1.014
2. Average weekly initial claims, State Unemployment Insurance (inverted)       1.041
3. Vendor performance (% change, first difference)                              1.081
4. Change in credit outstanding                                                  .959
5. Percent change in sensitive prices, smoothed                                  .892
6. Contracts and orders, plant and equipment (in dollars)                        .946
7. Index of net business formation                                               .973
8. Index of stock prices (S&P 500)                                              1.149
9. M-2 money supply                                                              .932
10. New orders, consumer goods and materials (in dollars)                        .973
11. Building permits, private housing                                           1.054
12. Change in inventories on hand and on order (in dollars, smoothed)            .985
Source: Business Conditions Digest, 1983
______________________________________________________________________
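The published composite is constructed with several standardization steps; purely as an illustration of how weighted components might be combined, here is a simplified Python sketch. The month-over-month percent changes are hypothetical, and this is not the BEA's or the Conference Board's actual procedure.

```python
import numpy as np

# Weights from Table 1, in the same order as the twelve variables.
weights = np.array([1.014, 1.041, 1.081, 0.959, 0.892, 0.946,
                    0.973, 1.149, 0.932, 0.973, 1.054, 0.985])

# Hypothetical month-over-month percent changes in each component.
pct_changes = np.array([0.2, 0.5, -0.1, 0.3, 0.0, 0.4,
                        0.1, 1.2, 0.6, 0.3, -0.2, 0.1])

# A weighted average of the component changes drives the composite index.
composite_change = np.sum(weights * pct_changes) / np.sum(weights)
print(f"Weighted composite change: {composite_change:.2f}%")
```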
From the discussions of macroeconomic activity in the first eleven chapters of
this text, there should be hints as to why these twelve variables form the
leading indicator index. Rather straightforward reasoning concerning the first two
variables leads directly to household income in the base industries, which drive the
remainder of the economy. As the workweek increases, so too does consumer income,
and the inverse of initial unemployment claims also suggests something about the
prospects for consumer income. The base industries, manufacturing and other goods-producing
activities, are the pro-cyclical industries that support the service sector. As
these industries become more capable of sustaining household incomes, this bodes
well for industries downstream. The same reasoning extends to credit outstanding: as
consumers anticipate their incomes declining, they typically extend themselves less with
respect to taking on new debt obligations and attempt to repay the debts they currently
have. Sensitive prices include the prices of those items that are cyclically sensitive.
Consumer durables and other large purchases that are interest-rate sensitive are
typically postponed when recession is anticipated, exerting downward pressure on the
prices of those commodities.
To the extent that producers' expectations account for these perceived changes
in household income, we would expect producers to react prior to upturns and
downturns in economic activity. These producer reactions will be reflected in the
vendor performance, plant and equipment, business formation, building permits, and
inventory series. If producers react, we would also expect retailers and wholesalers to
react, hence the orders for consumer goods and materials.
This leaves two components of the leading index not yet discussed: stock prices
and the M-2 money supply. The M-2 money supply is a function of the monetary
policies of the Fed. When the Fed wishes to induce economic recovery with monetary
policy, it typically moves to an easy monetary policy from neutrality or, if the Fed erred,
from a tight monetary policy. Hence the M-2 money supply increases prior to an upturn
in economic activity. The opposite is typically observed ahead of recessions: as the
Fed wishes to eliminate the risk of inflation from an overheated economy, it will retreat
from an easy money policy toward neutrality or even a tight monetary policy, which
typically precedes a recession. In other words, the observed money supply changes
conform with the coming business cycle and precede the inflection point.
Stock prices are perhaps the most interesting of the leading indicators. Going
back to the finance literature, there is an interesting hypothesis. Eugene Fama and
others working on rational expectations in securities markets proposed that financial
markets price in all available information, and that therefore everything one needs to
know is already reflected in the prices of stocks and bonds. This is called the efficient
market hypothesis. In other words, a bear market signals that a recession is
about to occur, and a bull market signals that economic expansion is on the way. In
fact, the empirical evidence suggests that the financial markets are very good predictors
of coming business cycles and that the S&P 500 index is the best of those predictors.
Why, one may ask, is the stock market such a good predictor of future fortunes in
the U.S. economy? It is a simple matter of the fundamentals underpinning investments.
Investors put their money where they believe the best returns are. When earnings
begin to increase, inventories start to decline, and revenues start to grow, that is the
time to buy the stock. If you wait until the analysts start recommending the stock and
the price goes through the roof, you are late to the party and have pretty much nothing
but downside risk and little upside potential. Hence, it is clear why the stock market
should be an excellent predictor of future economic fortunes.
Coincident Indicators
These indicators are the ones that move as we are in a particular phase of
the business cycle; in other words, they occur concurrently with the business cycle.
Statistically, their variations follow the leading indicators and occur prior to the lagging
indicators. Four variables comprise the Index of Coincident Indicators, and they are
presented in Table 2 below:
______________________________________________________________________
Table 2: Index of Coincident Indicators
Variable                                              Weight
1. Employees on nonagricultural payrolls              1.064
2. Index of industrial production, total              1.028
3. Personal income, less transfer payments            1.003
4. Manufacturing and trade sales                       .905
Source: Business Conditions Digest, 1983.
____________________________________________________________________
The coincident indicator variables are those that move with the particular
points in the business cycle. That is, these variables tend to confirm what is going on
presently. At the top of the recovery cycle we should observe that each of these
variables has reached its high point during the cycle, and their low points are generally
observed during the trough of a recession. As these variables are increasing, we
should be in recovery, and as they are declining, we should be moving toward
recession.
Employees on nonagricultural payrolls and the Index of Industrial Production are
stock variables, whereas personal income and manufacturing and trade sales are flow
variables. The stocks of employment and output should be growing during recovery, as
should personal income (all industries) and sales; the exact opposite is true during
recessions. Therefore, these variables simply confirm what phase of the business cycle
the economy is currently experiencing.
Lagging Indicators
Lagging indicators are those, which trail the phase in the business cycle that the
economy just experienced. Again, confirming what the leading indicators predicted, and
what the coincident indicators also confirmed as it occurred. There are six variables
that comprise the Index of Lagging Indicators. The movements of these variables trail
what actually transpired in the phases of the business cycle.
These six variables are presented in Table 3:
______________________________________________________________________
Table 3: Lagging Indicators
Variable                                                              Weight
1. Average duration of unemployment                                   1.098
2. Labor cost per unit of output, manufacturing                        .868
3. Ratio, constant-dollar inventories to sales, manufacturing and trade  .894
4. Commercial and industrial loans outstanding                        1.009
5. Average prime rate charged by banks                                1.123
6. Ratio, consumer installment debt to personal income                1.009
Source: Business Conditions Digest, 1983.
____________________________________________________________________
The Index of Lagging Indicators follows what has already transpired in the
business cycle. The average duration of unemployment trails recession in an
economy. For the stock variable, duration of unemployment, to increase means that
there are many people out of work who have been out of work for a considerable
period of time. Hence, this variable tends to continue to increase even once the
recovery is in its early stages.
Labor costs in manufacturing also tend to increase after the recession has
ended, because producers are hesitant to hire additional labor and are willing to pay
overtime at premium rates to avoid hiring that may yet turn out to be temporary. The
average prime rate reflects the banks' risk premium, not yet mitigated by experience
with the recovery. Hence the prime rate is sticky downward, much as employment is
sticky upward in the beginnings of a recovery.
Commercial and industrial loans tend to increase during periods in which liquidity
is a problem for enterprises. These loans tend to reach their peak shortly after a
recession has ended and recovery has begun. The ratio of consumer installment debt
to income also tends to reflect a recent history of lost income, which has, in part, been
supplemented by credit at the worst of a recession for some households.
The time necessary to work down an inventory excess generally extends past the
end of a recession for goods-producing entities. Manufacturing companies will often
accumulate inventories that cannot be brought down during a recession and must be
carried forward until the recovery becomes strong enough that customers are willing
and able to buy down those inventory levels. Hence, inventories for the base industries
tend to persist into the recovery phase of the business cycle.
Other Indicators
There are other indicators constructed and published concerning
various aspects of the business cycle. The University of Michigan publishes an index of
consumer sentiment: it has developed a survey instrument and surveys consumers
concerning what they expect in their economic future. The Conference Board also does
something similar for producers.
Real estate firms have a lobbying and research group that surveys new
home construction and sales of existing housing, which captures essentially the same
sort of economic activity as the "building permits, private housing" variable found
in the Index of Leading Indicators.
The Federal Reserve is not to be outdone in the forecasting business. The
Philadelphia Federal Reserve Bank conducts an extensive survey of business conditions
and plans in the Philadelphia Federal Reserve District and publishes the results
monthly. This report is commonly referred to as "The Beige Book," and it is eagerly
anticipated as providing insights into what to expect in the near future in the eastern part
of the United States.
There are also numerous proprietary forecasts and indices constructed for every
purpose imaginable. Stock price data have been subjected to nearly every torture
imaginable in attempts to gain some sort of trading advantage. Bollinger Bands, Dow
Theory, and even sunspots have been used in attempts to forecast what will happen
in the immediate short run in financial markets. As one might guess, virtually all of
these schemes are proprietary. The normal warning applies: if it sounds too good to be
true, it is generally little more than fun and games with little substance. Care should be
exercised before taking advantage of these sorts of things.
Frankly, with financial markets there is no substitute for one's own due diligence;
however, there are a couple of good sources of information. If one is interested in the
cumulative wisdom of stock analysts, then Zacks (www.Zacks.com) is a good reporting
service, which provides not only average analyst ratings but also its own
recommendations based on sound fundamentals. Standard and Poor's provides a
rating system of up to five stars for the best-performing stocks, again based on
sound fundamentals. For mutual funds, Morningstar (www.morningstar.com) provides
both proprietary (a premium service) and non-proprietary ratings of mutual funds and
considerable information concerning the management charges and loads on respective
funds.
Conclusions
It is interesting to note that these indices have been constructed as rough
guides to what is occurring in the business cycle. However, it is also not encouraging
that, as the economy changes and becomes more global and more service oriented,
the efficacy of these indices becomes less clear. These indices have never been
perfectly accurate; they are at best only rough guides to the landscape of business
cycles in the U.S. economy. However, it must also be remembered that a subscription
to these indices is far cheaper than having a battalion of econometricians on staff
gathering and analyzing economic data.
Further, these numbers are also reasonable indicators of what is occurring and
can give considerable empirical pop for the buck. To the extent that all economic
forecasting is, at best, little more than sophisticated number grubbing, these sorts of
approaches are worth careful consideration.
It is also interesting to note that the OECD (Organization for Economic Cooperation
and Development) gathers and publishes indices of economic indicators for several
European nations at its website, www.OECD.org. These numbers provide essentially
the same sorts of leading, coincident, and lagging indicators as are described here.
The U.S. publication of the economic indicators is no longer shouldered entirely
by the Bureau of Economic Analysis. The Conference Board now publishes these
series for the U.S.; the indicators are available, by subscription, through the Conference
Board's website, www.ConferenceBoard.org. However, the indices are still constructed
using data provided by the U.S. Department of Commerce, and many organizations
have constructed their own indices mirroring what was done by the Bureau of Economic
Analysis.
There are specialized data systems, mostly for stocks and mutual funds. These
are almost all proprietary premium services, but most brokerages provide one or
more of these services, and some are available without a fee and without the premium
portions of the service. Many of these are quite good.
KEY CONCEPTS
Economic Indicators
Based in economic theory
Six criteria are used for variable selections
Different weighting used for each variable
Index of Leading Indicators
Prediction variables
Efficient Market Hypothesis
12 variables make up the index
Index of Coincident Indicators
Occurs with the cycle
4 variables make up the index
Index of Lagging Indicators
Confirms where we’ve been
6 variables make up the index
Variables are selected for their efficacy in the index
Other Indicators and Sources of Information
Proprietary services
Philadelphia Fed’s Beige Book
Investment information
Zacks
S&P
Morningstar
STUDY GUIDE
Food for Thought:
Critically evaluate the idea that an index can be created to predict future economic
activity, measure current economic activity, and confirm where we’ve been in the
business cycle.
Why do you suppose there are different weights for different variables in each of these
indices? Explain.
“Keep it Simple!!!” There is no doubt these indices are simple, but what do we give up
for the simplicity? Explain.
What other sorts of information are available based loosely on this index-of-indicators
idea? Explain.
Sample Questions:
Multiple Choice:
Which of the following variables are included in the Index of Leading Indicators?
A.
B.
C.
D.
Vendor Performance
Building Permits, Private Housing
Index of Stock Prices
All of the above
The duration of unemployment is included in the lagging indicators index. This variable
indicates the average length of time someone has been out of work. What is this variable?
A.
B.
C.
D.
A flow variable
A stock variable
A discrete variable
None of the above
True / False
There are indicator indices constructed for and available for countries other than the
United States. {True}
High conformity scores for variables in the respective economic indicator indices
suggest the series consistently followed the same pattern. {True}
Chapter 15
Guide to Analytical and Forecasting Techniques
Since time immemorial, knowledge of the future has been a quest of all mankind.
Frankly, we are probably all better off not knowing what's in store for us. However,
what is indisputable is that some insight into what business conditions are going to be
allows us more of an opportunity to prepare to take advantage of the opportunities that
await us, and to avoid the threats that may also be present. It is this risk that motivates
one to have as much information as is optimally available so as to deal effectively with
the contingencies we face. It is this motivation that brings most business organizations
to the need for forecasting.
The purpose of this chapter is to present a menu of forecasting techniques and
provide some guidance as to their utility and the ends to which they are best suited. It is
presumed that students will have had a basic grounding in statistical methods, so
most of the discussion presented here will be readily understood. However, it is not the
purpose of this chapter to go into the "nuts and bolts" of the methods presented. A
second course, M509, Research Methods, will provide the computational expertise
necessary to deploy these methods.
Methods to be Presented
This chapter is divided into three basic sub-sections. The first sub-section will
present a class of models referred to collectively as time series methods. The next
section deals with correlative methods, including ordinary least squares and variants of
that model. The final section will present a sampling of other statistical methods that
may be useful in dealing with special cases.
Time Series Methods
This class of methods is the simplest for dealing with time series data. There
are several different approaches, varying from the most simple to the somewhat more
complex. The methods to be briefly presented here include: (1) trend line, (2) moving
averages, (3) exponential smoothing, (4) decomposition, and (5) the Autoregressive
Integrated Moving Average (ARIMA).
Each of these methods is readily accessible and easily mastered (with the
possible exception of certain variants of ARIMA). The first four of these methods
involve little more than junior high school arithmetic and can be employed for a variety
of useful purposes.
1. Trend Line
This method is also commonly called a naive method; it is simply fitting a
representative line to the data observed. One method to determine what the
representative line is, presented later, is called ordinary least squares, or regression
analysis. In fitting a naive trend line, however, we rely on something more basic:
extrapolation. In other words, if we have a pool of data and the plot of that data reveals
a predictable pattern, we simply fit a line over that observed pattern.
[Figure: scatter plot of eleven data points with a dotted trend line fitted by eye; X on the vertical axis, Y on the horizontal axis]
The Nobel Prize winning economist Paul Samuelson (a Gary, Indiana native)
described this method as the black string method. That is, if you have a collection of
data points, simply pull a string out of your black socks and position it in such a manner
that it best fits your data. In other words, best judgment suggests that the dotted line is
most representative of this collection of data points. The line either passes through or
touches five of eleven data points, with three points being above and three points being
below the dotted line. However, there is nothing any more rigorous here than visual
observation and letting the eye tell you what best represents the data.
2. Moving Averages
Moving averages are the next most rigorous method of best guess. If we have
three observations over time of a particular variable, and we wish to obtain an estimate
of the next observation, we might choose to use something that gives us a particular
number, rather than a slope and intercept from which to extrapolate as in the trend line
method above.
Suppose we have three data points: 6, 7, and 8. The moving average is calculated as:
X = 6+7+8 ÷ 3 = 7
The moving average of this series is 7. However, it is clear that if the series
continues, the next number will be 9. Therefore, a weighted moving average may
produce a more satisfactory result. If we use a method that weights the last
observation in the series more than the first observation, we will get closer to the next
number in the series, which we presume is 9. Therefore, the following weighted
average is used:
X = (.5) 6+7+ (1.5)8 ÷ 3 = 7.333
This weighting scheme still undershoots the forecast. Therefore, we try a scheme
in which the last number is weighted even more heavily:
X = (.5) 6+7+(2)8 +1 ÷ 3 = 9
This weighting scheme, with the addition of a constant term of one, provides a more
satisfactory result. Unfortunately, using a weighted average scheme amounts to guessing
at the best weighting scheme. Hence a better method is needed.
This sort of method is often useful for forecasting very simple, “shop floor” sorts
of things. If you are dealing with a single data series, and need a very rough estimate of
what is going to happen next with this data, the moving average, or exponential
smoothing technique is often the most efficient.
3. Exponential Smoothing
One of the problems with moving averages is determining how much to weight
observations to obtain reasonably accurate forecasts. The weighting process is simply
guessing, albeit hopefully informed guessing, about what sort of weights will produce
the least error in future forecasts. Exponential smoothing is an effort to take some of
the guess work out, or at least formalize the guess work. The equation for the forecast
model is:
F = (α) X2 + (1 - α) S1
where alpha (α) is the smoothing statistic, X is the observation used, and S is
the first predicted value from the data set. For example, if we use the following series:
25, 23, 22, 21, 24
To obtain the initial value of S, or S1, we add 25 and 23 and divide the sum by 2 to
obtain 24. The X2 in this case is our observation of 25. All that is left is the
selection of α.
Normally, an analyst will select .3 for a series that does not have a linear trend upwards
or downwards. There is also a systematic method of determining the appropriate value of
α: make predictions using α beginning at .1 and work your way to .9, determine
which α gives the forecast with the least error over a range of the latest available
data, and use that α until such time as the errors begin to increase. To complete the
example, let's plug and chug to get a forecast.
F = (.3)25 + (1 - .3)24 = 7.5 + 16.8 = 24.3
The mechanics are simple, far simpler than the name implies, and the method to
adjust for the best alpha is easy to implement.
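As a hedged sketch of both the forecast and the suggested search over α, the following Python fragment mirrors the example above; the helper names are illustrative only.

    # Exponential smoothing forecast: F = alpha*X + (1 - alpha)*S
    data = [25, 23, 22, 21, 24]
    s1 = (data[0] + data[1]) / 2          # initial smoothed value: (25 + 23)/2 = 24
    x = data[0]                           # the observation used in the text's example

    f = 0.3 * x + (1 - 0.3) * s1
    print(f)                              # 7.5 + 16.8 = 24.3

    # Grid search over alpha from .1 to .9: keep the alpha with the smallest
    # sum of squared one-step-ahead errors, per the text's advice.
    def sse(alpha, series):
        s = series[0]
        total = 0.0
        for obs in series[1:]:
            total += (obs - s) ** 2
            s = alpha * obs + (1 - alpha) * s
        return total

    best_alpha = min((a / 10 for a in range(1, 10)), key=lambda a: sse(a, data))
    print(best_alpha)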
4. Decomposition
There are several ways in which to decompose a time series data set. Outliers,
trend, and seasonality are among the most common methods to strip away variations in
the data so as to be able to look at fundamentals in the data. Each of these methods
relies upon an indexing routine in order to provide a basis to “normalize” the data.
Perhaps the most complicated of these is seasonality, and therefore we will examine
that example.
The process begins with calculating a moving average for the data series. For
quarterly data the moving average is:
MAt = (Xt-2 + Xt-1 + Xt + Xt+1) / 4
We will apply this method to the following data:
Quarter            Year 1   Year 2   Year 3
First Quarter        10       11       10
Second Quarter       12       13       13
Third Quarter        13       14       13
Fourth Quarter       11       11       11
Applying our moving average method:
MA3 = (10+12+13+11)/4 = 11.50
MA4 = (12+13+11+11)/4 = 11.75
MA5 = (13+11+11+13)/4 = 12.00
MA6 = (11+13+14+11)/4 = 12.25
The data are now presented as moving averages of the quarters, each centered
on the quarter represented by the number following the MA. The moving average for
this series centered on the third quarter of year 1 is 11.50, centered on the fourth quarter
it is 11.75, and centered on the first quarter of the following year it is 12.00.
Now we can calculate an index for seasonal adjustment, in a rather simple and
straightforward fashion. The index is calculated using the following equation:
SIt = Yt / MAt
where SIt is the seasonal index, Yt is the actual observation for that quarter, and MAt
is the moving average centered on that quarter, from the method shown above. A
seasonally adjusted index can then be created which allows the seasonality of the data
to be removed.
Using the three years' worth of data above we can construct a seasonality index,
which allows us to remove the seasonal effects from the data. The average for each
quarter is calculated:
Y1 = (10+11+10)/3 = 10.33
Y2 = (12+13+13)/3 = 12.67
Y3 = (13+14+13)/3 = 13.33
Y4 = (11+11+11)/3 = 11.00
Therefore, plugging the values for Yt and MAt into the equation above allows us to
construct an index for seasonality from this data:
SI1 = 10.33/11.50 = .8983
SI2 = 12.67/11.75 = 1.0783
SI3 = 13.33/12.00 = 1.1108
SI4 = 11.00/12.25 = .8980
Seasonally adjusting the raw data is then a simple matter of dividing the raw data
by its appropriate seasonal index.
Seasonally Adjusted
Quarter             Year 1                  Year 2                  Year 3
First Quarter       10/0.8983 = 11.1321     11/0.8983 = 12.2454     10/0.8983 = 11.1321
Second Quarter      12/1.0783 = 11.1286     13/1.0783 = 12.0560     13/1.0783 = 12.0560
Third Quarter       13/1.1108 = 11.7033     14/1.1108 = 12.6036     13/1.1108 = 11.7033
Fourth Quarter      11/0.8980 = 12.2494     11/0.8980 = 12.2494     11/0.8980 = 12.2494
Decomposition of a data series can be accomplished for monthly, weekly, even
daily data. It can also be used to eliminate variations that occur for reasons other than
just seasonality. Any cyclical or trend influences in data can be controlled using these
classical decomposition methods.
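A compact sketch of this routine in Python, using the twelve quarterly observations from the worked example; computing at full precision will differ slightly from the hand-rounded figures above.

    # Classical seasonal decomposition: moving averages, seasonal indices,
    # and seasonally adjusted values.
    data = [10, 12, 13, 11,    # year 1
            11, 13, 14, 11,    # year 2
            10, 13, 13, 11]    # year 3

    # Four-quarter moving averages MA3 through MA6.
    ma = [sum(data[i - 2:i + 2]) / 4 for i in range(2, 6)]   # 11.5, 11.75, 12.0, 12.25

    # Average each quarter across the three years, then divide by the paired MA,
    # following the text's simplified pairing of quarter averages with MA3-MA6.
    quarter_avg = [sum(data[q::4]) / 3 for q in range(4)]
    seasonal_index = [y / m for y, m in zip(quarter_avg, ma)]

    # Seasonally adjusted series: each raw value divided by its quarter's index.
    adjusted = [x / seasonal_index[i % 4] for i, x in enumerate(data)]
    print([round(v, 4) for v in adjusted])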
5. ARIMA
ARIMA is an acronym that stands for Autoregressive Integrated Moving Average.
This method uses past values of a time series to estimate future values of the same
time series. According to John McAuley, the Nobel Prize winning economist Clive
Granger is alleged to have noted that the method "has been said to be so difficult that it
should never be tried for the first time." However, it is a useful tool in many applications,
and therefore at least something more than a passing mention is required here.
There are basically three parts to an ARIMA model. As described in McAuley's
text, these three components are:5
“(1) The series can be described by an autoregressive (AR) component, such that the
most recent value (Xt) can be estimated by a weighted average of past values in the
series going back p periods:
Xt = Θ1Xt-1 + . . . + ΘpXt-p + δ + ε
where Θ1 . . . Θp = the weights of the lagged values,
δ = a constant term related to the series mean, and
ε = the stochastic error term
(2) The series can have a moving average (MA) component based on a q period
moving average of the stochastic errors:
Xt = μ + εt - Θ1εt-1 - . . . - Θqεt-q
where μ = the mean of the series,
Θ1 . . . Θq = the weights of the lagged error terms, and
ε = the stochastic errors.
5
John J. McAuley, Economic Forecasting for Business: Concepts and Applications,
Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1985.
(3) Most economic series are not stationary (i.e., generated by a process whose mean
and variance are constant over time), but instead display a trend or cyclical movements
over time. Most series, however, can be made stationary by differencing. For instance,
first differencing is the period-to-period change in the series:
ΔXt = Xt - Xt-1”
As can readily be seen, the application of an ARIMA model requires a certain
modicum of statistical sophistication that is beyond the scope of this course. For those
interested, consider taking M509.
However, a few concluding remarks are necessary here. To apply an ARIMA
model to data, three procedures are necessary. First, the analyst must determine
what autoregressive process is most appropriate to the series. Second, the model must
then be fitted to the data. Third, diagnostic tests to check the model's adequacy and
that the data are stationary are minimally necessary before attempting to draw
inferences from the results.
Clive Granger, mentioned above, won his Nobel Prize for extending ARIMA
models to where they were useful in testing any causal relations that may exist between
two time series variables. The method is called "Granger Causality." In many
sophisticated forecasting activities it is necessary to use models such as ARIMA, and
unfortunately such models generally require a professional statistician and are generally
beyond the reach of most managerial personnel. However, that does not mean that you
can't learn enough to be able to draw inferences from the results, and to understand the
limitations and benefits of such modeling.
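For the curious, a hedged sketch of what fitting an ARIMA model looks like in practice, here with Python's statsmodels package; the order (1, 1, 1) and the data are purely illustrative choices, not a recommendation.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # A short illustrative series; real applications use much longer data.
    y = pd.Series([112.0, 118.0, 132.0, 129.0, 121.0, 135.0,
                   148.0, 148.0, 136.0, 119.0, 104.0, 118.0])

    # order = (p, d, q): one AR term, first differencing, one MA term.
    fit = ARIMA(y, order=(1, 1, 1)).fit()

    print(fit.summary())          # coefficient estimates and diagnostics
    print(fit.forecast(steps=4))  # four periods ahead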
Correlative Methods
Correlative methods rely on the concept of Ordinary Least Squares, or O.L.S.,
which is also sometimes referred to as regression analysis. O.L.S. refers to the fact that
the representative function of the data is determined by minimizing the sum of the
squared errors of each of the data points from that line.
Consider the following diagram:
[Figure: scatter plot of data points around a fitted regression line; the vertical distances from the points to the line are the error terms]
The lines from each data point to the representative line are the error terms.
Squaring the error terms eliminates the signs, and the regression analysis minimizes
these squared errors in order to determine the most representative line for these
data points.
The general form of the representative equation for a bi-variate regression is:
Y = α + βX + ε
where Y is the dependent variable, X is the independent variable, α is the intercept
term, β is the slope coefficient, and ε is the random error term.
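A minimal sketch of estimating this bi-variate regression with numpy; the paired observations are made up for illustration.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

    # polyfit with degree 1 returns the slope and intercept that minimize
    # the sum of squared errors - the O.L.S. criterion.
    beta, alpha = np.polyfit(x, y, 1)

    residuals = y - (alpha + beta * x)    # the error terms that were squared
    print(alpha, beta, (residuals ** 2).sum())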
Correlation
To be able to determine what a representative line is requires some further
analysis. The model is said to be a good fit if the R squared is relatively high. The R
squared can vary between 0 (no correlation) and 1 (perfect correlation). Typically, an F
statistic is generated by the regression program for the R squared, which can be
checked against a table of critical values to determine if the R squared is significantly
different from zero. If the F statistic is significant, then the R squared is significantly
greater than zero.
The simple r is calculated as:
r = Σ(X - X̄)(Y - Ȳ) / [Σ(X - X̄)² · Σ(Y - Ȳ)²]^½
and the simple r is squared to obtain the R squared.
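A brief sketch of computing the simple r and the R squared for a pair of illustrative series; numpy's corrcoef gives the same result as the summation formula above.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

    r = np.corrcoef(x, y)[0, 1]   # the simple r
    print(r, r ** 2)              # r and the R squared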
Significance of Coefficients
There are also statistical tests to determine the significance of the intercept
coefficient and the slope coefficient; these are simple t-statistics. The test of
significance is whether the coefficient is zero. If the t-statistic is significant for the
number of degrees of freedom, then the coefficient is other than zero.
In general, the analyst is looking for coefficients that are significant and of a
sign that makes theoretical sense. If the slope coefficient for a particular variable is
zero, it suggests that there is no statistical association between that independent
variable and the dependent variable.
There is also a problem in which adding data to the series, or adding another
variable, will sometimes cause the sign of a coefficient already in the model to
swap. This is a symptom of a problem with the data called heteroskedasticity. This
problem is a result of the data not conforming to an underlying assumption of ordinary
least squares, namely that the errors or residuals have a common variance. You
basically have two ways in which to deal with this problem: select a different analytical
technique or try transforming the data. If you transform the data, log transformations or
deflating the data into some normalized series will sometimes eliminate the problem and
give unbiased, efficient estimates.
With time series data there also arises a problem called serial correlation. That
is when the error terms are not randomized as assumed; in other words, the error
terms are correlated with one another. There is a test for this problem, called the
Durbin-Watson (d) statistic. When dealing with time series data one should, as a matter of
routine, have the program calculate a DW (d) and check it against a table of critical values
so as to eliminate any problems in inference associated with correlated error terms. If this
problem arises, often using the first difference of observations in the data will eliminate
the problem. However, often a low DW (d) statistic is an indication that the regression
equation may be mis-specified, which results in further difficulties, both statistical and
conceptual.
Other Issues
Correlative methods are often used because of their accessibility. You can do
regressions in Excel spreadsheets. However, there are a couple of other issues
worthy of some mention here before moving on. If something has happened that is
suspected of fundamentally changing the data series at some particular point in time,
statisticians use a method called a dummy variable. If you believe that going off the
gold standard in 1971 fundamentally changed the price of oil, then you would add a
dummy variable to test that hypothesis. For each year that the country was off the gold
standard the value of that variable would be 1, and for all other years (i.e., when we
were on the gold standard) the value would be zero. You may use multiple dummy
variables, but there must be variation in the observations; otherwise you create for
yourself something called a dummy variable trap - which results in perfect
multicollinearity and other unpleasantries.
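A sketch of the gold-standard dummy in code; the oil-price figures below are hypothetical placeholders, not historical data.

    import numpy as np

    years = np.arange(1965, 1976)
    oil_price = np.array([3.0, 3.0, 3.1, 3.1, 3.2, 3.2,
                          3.4, 3.6, 4.8, 9.6, 11.5])       # hypothetical values

    off_gold = (years > 1971).astype(float)    # dummy: 1 once off the gold standard

    # Regress price on a constant, a time trend, and the dummy.
    X = np.column_stack([np.ones(len(years)), years - years[0], off_gold])
    coef, *_ = np.linalg.lstsq(X, oil_price, rcond=None)
    print(coef)    # intercept, trend, and the dummy's estimated shift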
Instrumental variables are also sometimes used. These variables are
constructed for the purpose of substituting for something we could not directly measure
or quantify. For example, in years in which we had heavy military commitments, we
could also have been at war. Clearly during the Cold War there were heavy military
commitments, but only intermittent combat. Rather than use a dummy for years at war,
an instrumental variable such as troops abroad, or Department of Defense budgets,
may capture more of what the researcher was looking for.
OTHER METHODS
This chapter has simply scratched the surface of methods useful in statistical
analysis of data and in forecasting. There is an array of other statistical methods
that are useful for both purposes.
Included in these methods are nonparametric methods. Nonparametric methods
are useful with data for which quantification is problematic. For example, the
use of survey methods generates results that are, in large measure, arbitrary. A Likert
scale asking someone to strongly agree, agree, disagree or strongly disagree is
imprecise. The ordering is all we know for sure, and the magnitudes are set in a few
discrete options, rather than the continuous data used in parametric methods. Just because
data is discrete does not mean there are not good ways to analyze that data.
There are also other methods of analyzing continuous data. There are unique
circumstances in continuous data that do not necessarily lend themselves to the O.L.S.
methods described above. We will also briefly describe some of these methods in this
section.
Continuous Data
One of the more simple and practical methods of examining data is the analysis
of variance and analysis of covariance. Analysis of variance is typically used to
isolate and evaluate the sources of variation within independent treatment variables to
determine how these variables interact. An analysis of variance is generally part of the
output of any regression package and provides additional information concerning the
behavior of the data subjected to the regression.
There are variations on the analysis of variance theme. There is multivariate
analysis of variance, which provides information on the behavior of the variables in
the data set controlling for the effects of the other variables in the analysis, and the
analysis of covariance, which goes a step further, as the name suggests, and provides
an analysis of how the multiple variables interact statistically.
Factor analysis and cluster analysis are related processes which rely on
correlative techniques to determine what variables are statistically related to one
another. A factor may have several different dimensions. For example, consider men
and women: in general, men tend to be heavier, taller, and live shorter lives. These
variables would form a factor of characteristics of human beings. Cluster analysis is
analogous, except that rather than a particular theoretical requirement that results in
factors, there may be simple tendencies, such as behavioral variables, that group together.
Canonical correlation analysis is a method where factors are identified and the
importance of each variable in a factor is also identified. These elements are calculated
for a collection of variables analogous to the dependent variable in a regression
analysis, and a collection of variables analogous to the independent variables in a
regression. This gives the researcher the ability to deal with issues which may be
multi-dimensional and do not lend themselves well to a single dependent variable. For
example, strike activity in the United States has several different dimensions, and data
measuring those dimensions. The competing theories explaining why strikes happen
and why they continue can then be simultaneously tested against the array of strike
measurements, lending to more profound insights than if just one variable were used.
There are also methods of analysis which involve mixing discrete and
continuous data. Probit analysis uses a categorical dependent variable (discrete) and
explains the variation in that variable with a mixture of continuous and discrete
variables. These computer programs yield output which permits inference much the
same as is done with regression analysis. If there is a stochastic dependent variable,
logit analysis is an alternative to probit.
Discrete Data
There are several nonparametric methods devised to deal specifically with
discrete data. These methods are typically useful with survey data, such as marketing
research or in some types of social science research in psychology or sociology. We
will briefly examine a couple of the generally applicable tests for comparing data
populations, and two of the most commonly used tests dealing with ordinal data
obtained from surveys, or other such methods.
The Mann-Whitney U test provides a method whereby two populations'
distributions can be compared. The Kruskal-Wallis test is an extension of the
Mann-Whitney U test, in that this method provides an opportunity to compare several
populations. These tests can be used when the distributions of the data do not fit the
underlying assumptions necessary to apply an F test (random samples drawn from
normally distributed populations with equal variances).
There are also methods of dealing with ordinally ranked data. For example, the
Wilcoxon Signed-Rank test can be utilized to analyze paired-differences in
experimental data. This is particularly useful in educational studies or in some types of
behavioral studies where the F statistic assumptions are not fulfilled or the two tests
above do not apply (not looking at population differences).
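All three of the tests named above are available in scipy, as this hedged sketch shows; the small samples are invented for illustration.

    from scipy import stats

    a = [3, 4, 2, 5, 4, 3]
    b = [5, 4, 5, 3, 5, 4]
    c = [2, 3, 2, 4, 3, 2]

    print(stats.mannwhitneyu(a, b))   # compare two populations' distributions
    print(stats.kruskal(a, b, c))     # extend the comparison to several populations
    print(stats.wilcoxon(a, b))       # paired differences for equal-length samples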
Finally, there is a test that is somewhat analogous to regression analysis to deal
with ranked data. Spearman Rank Correlations are useful in determining how closely
associated individual observations may be to specific samples or populations. For
example, item analysis in multiple-choice exams can be analyzed using this procedure.
Students who perform well on the entire exam can have their overall scores assessed
with respect to specific questions. In this example, if the top students all performed well
on question 1, their rank correlation would be near 1. However, if they performed well
overall but did relatively poorly on question 2, that correlation would be closer to zero,
say .35. If, on the other hand, none of the students who performed well got question 2
correct, then we would observe a zero correlation for that question. This method is
useful in marketing research and in behavioral studies, as well as to college professors
with too much time on their hands.
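The exam item analysis described above can be sketched with scipy's Spearman rank correlation; the scores are invented.

    from scipy import stats

    overall = [95, 90, 88, 84, 79, 75, 70, 62]     # overall exam scores
    item1 = [1, 1, 1, 1, 0, 1, 0, 0]               # 1 = got question 1 right

    rho, p_value = stats.spearmanr(overall, item1)
    print(rho, p_value)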
There are several other nonparametric methods available. This section simply
presented the highlights so that you are aware that discrete data can be analyzed with
well-developed and generally accepted statistical techniques, which are well supported
by commercially available "canned" computer programs. Just because you have
discrete data does not mean that you have no reasonable methods to analyze that data.
Conclusions
This chapter presented a brief overview of some of the forecasting and statistical
methods commonly used in business for the analysis of specific issues concerning the
business environment. This was done more as a menu of what's available rather than a
seven-course meal in statistical fun and games.
There is a well-developed literature concerning these methods and numerous
excellent sources on the detailed nuts and bolts in applying these and other related
statistical methods. For the multivariate techniques6 there is an excellent text by Hair,
Anderson et al., which is often the basic textbook for the M509 course. Also, for a basic
treatment of the nonparametric methods discussed here see Mendenhall, Reinmuth and
Beaver’s text.7 This text also presents an excellent discussion of regression analysis
and some of the related topics. In this discussion very little was offered concerning
survey research and the particular problems associated with this specialized topic. For
those desiring more concerning survey methods Sekaran’s text is an excellent primary
source.8
For those whose tastes run along the lines of the basic models used for
forecasting described in the first section of this chapter, you may wish to consult a text
by Wilson and Keating for further details.9 Of course, no course would be complete
concerning forecasting without some mention of the most advanced text books in the
field. For more in-depth and advanced discussions of forecasting see Makridakis,
Wheelwright and McGee.10 There are also several classic textbooks concerning
econometric methods. Still the most complete and comprehensive of these books is the
one by Henri Theil.11 Finally, there is an article that is pretty much the authoritative
manager's guide to statistical and forecasting methods. This article appears in
6
J. F. Hair, R. E. Anderson, R. L. Tatham, and W. C. Black, Multivariate Data Analysis
with Readings, third edition. New York: Macmillan Publishing Company, 1992.
7
W. Mendenhall, J. E. Reinmuth, and R. Beaver, Statistics for Management and
Economics, sixth edition. Boston: PWS-Kent Publishing Company, 1989.
8
U. Sekaran, Research Methods for Business, second edition. New York: John Wiley &
Sons, 1992.
9
J. H. Wilson and B. Keating, Business Forecasting. Homewood, IL: Richard D. Irwin,
Co., 1990.
10
S. Makridakis, S. C. Wheelwright, and V. E. McGee, Forecasting: Methods and
Applications, second edition. New York: John Wiley & Sons, 1983.
11
H. Theil, Principles of Econometrics. New York: John Wiley & Sons, 1976.
the Harvard Business Review in 1986.12 The reason for all of these citations, when we
seemed to live quite nicely without them to this point in the course, is that knowledge
comes in two varieties: either you know something yourself, or you know where to go
find the information you need. These citations fall into the latter category.
12
D. M. Georgoff and R. G. Murdick, “Manager’s guide to forecasting,” Harvard
Business Review, January-February 1986.
KEY CONCEPTS
Time Series Methods
Trend Line
Moving Averages
Exponential Smoothing
Decomposition
Autoregressive Integrated Moving Average
Correlative Methods
Regression
Correlation
Statistical Significance
Heteroskedasticity
Serial Correlation
Dummy Variables
Other Continuous Data Methods
Analysis of Variance
Multivariate Analysis of Variance
Analysis of Covariance
Factor Analysis
Cluster Analysis
Canonical Correlation
Probit Analysis
Nonparametric Methods
Mann-Whitney U
Kruskal-Wallis
Wilcoxon Signed-Rank
Spearman Rank Correlations
STUDY GUIDE
Food for Thought:
What are the problems encountered in the data that may face you in applying
regression analysis to time series data? Explain.
Keeping things simple has obvious advantages with respect to cost and expertise, but
what are the drawbacks? Be specific.
Compare and contrast the occasions in which the various methods presented here may
be appropriate or inappropriate to the analysis of business conditions.
What is meant by statistical significance? Why is this of importance? Explain.
Critically evaluate de-seasonalizing time series data.
Sample Questions:
Multiple Choice:
Which of the following models are methods which rely in some respect on correlation?
A. Exponential Smoothing
B. Wilcoxon Signed-Rank
C. Trend Line
D. None of the above
Serial correlation is:
A. The value of the simple r
B. Correlation between the slope coefficients in a bi-variate regression
C. The residuals being correlated with one another in a multi-variate
regression
D. None of the above
True / False
A significant t-statistic for a regression coefficient means we do not know that coefficient
differs from zero. {False}
Correlation is the strength of statistical association between two variables, and R
squared is the amount of variation explained in the dependent variable by the
independent variables. {True}
Chapter 16
Data: In the Beginning
This chapter is the follow-on of Chapter 13, but based, in part, on the discussions
found in Chapter 15. We again return to the subject of data, and what to do with it so
as to accomplish the analyses or forecasts we wish to complete.
The purpose of this chapter is to present a brief discussion of the preliminary
examination of data for the purposes of various forecasting and analytical techniques.
This chapter presents some basic ideas on how to become familiar with, and handle,
problems which commonly arise in empirical economics. It is through these methods
that an analyst can best decide what methods are appropriate to her needs, and how
best to torture the data to obtain the desired results.
Data Inspection
Just as in the goods-producing segments of the economy, there are certain
quality control procedures that must be implemented with data before you begin using it
for testing theories or forecasting. Familiarity with the data, its characteristics and its
limitations will help determine not only the methods that will be applied to it, but also the
problems and benefits one may expect to encounter with its use.
The following sections provide a basic set of tools and considerations for the
"quality control" of data. The use of these methods can save a great deal of grief and
aggravation and is highly recommended.
Becoming Familiar with the Data
The simple single variable forecasting techniques are not very sensitive to data
problems which may create considerable difficulties with correlative methods.
However, the time series data gathered may very well determine what single variable
forecasting techniques are best suited for forecasting purposes. Trend line, quite
naturally, is a best guess method and is not subject to many of these problems; it is
simply "eye-balling" the data. However, exponential smoothing is not particularly robust
with data that has large variations and little trend to it, whereas decomposition may be
better suited to data which can have some of the variation removed through
elimination of seasonality, outliers, or other identifiable issues with the data.
Calculating means, medians, and modes will give you some idea what the
distribution of the data looks like. Mean is the arithmetic average, median is the midpoint in the data series, and mode is the most commonly appearing observation in the
data. In a normal distribution, the mean, median, and mode are all equal to one
another.
The first thing one should do when examining a data set, is to calculate each of
these descriptive statistics to determine how its distribution looks. Further, some
measure of variation should also be calculated, variance σ2 or standard deviation σ
which will provide some insight into how the data may be distributed. Plots of the data
will also help the analyst to come to some better understandings of how the data are
distributed as well.
Consider the following equations for the mean and for the variance for grouped data
that is roughly normally distributed:

Mean:
X̄ = Σ fi mi / n

Variance:
σ² = [Σ fi mi² - (Σ fi mi)² / n] / (n - 1)

Standard Deviation:
σ = {[Σ fi mi² - (Σ fi mi)² / n] / (n - 1)}^½

where:
mi = midpoint of class i
fi = frequency of observations in class i
n = total number of observations, with each sum running over the k classes.
The mean is used in the calculation of variance; however, it is a descriptive
statistic which gives the analyst some indication of the magnitude of the observations.
When compared with the median and the mode, the mean also provides some
indication of how the data may be distributed.
Variance is also a useful idea. If data are distributed very tightly around the
mean, the variance will tend to be small. If, on the other hand, there are large numbers
of outliers, or the data is widely dispersed, the variance will tend to be large. Therefore,
variance, or its square root, the standard deviation, is useful in determining how tightly
distributed the data set is. Variance is commonly used, but as one can easily see from
the equation, it is the square of the standard deviation. Standard deviation is more
easily interpreted because it is in the same numeraire as the data itself.
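A first-pass sketch of this quality-control step with Python's standard statistics module; the data are the quarterly series used earlier in the course.

    import statistics as st

    data = [10, 12, 13, 11, 11, 13, 14, 11, 10, 13, 13, 11]

    print(st.mean(data), st.median(data), st.mode(data))
    print(st.variance(data), st.stdev(data))   # sample variance (n - 1) and its root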
The data series may be systematically distributed in some fashion that will have
some impact on modeling or technique decisions. Perhaps the most common of these
problems is the data being skewed. The following graph shows skewed data:
[Figure: a distribution plotted in three dimensions over variables X, Y, and Z, with the mode, median, and mean marked at different points and the tail extending to the left]
This graph is presented with three variables, X, Y, and Z. The colored-in area is
the plane created in this three-dimensional space, rather than the two-dimensional line
you are used to seeing. This data set is skewed to the left. If the tail observed is to
the other side of the distribution, this is referred to as the data being skewed to the right.
Notice that the mean, median and mode are all different numbers in this graph.
[Figure: a two-dimensional distribution of X against Y with the mode, median, and mean marked and the tail extending to the right]
In examining a data set with the tail to the other side, and this time in two
dimensions (just variables X and Y) we can observe a data set that is skewed to the
right.
There is a statistical principle, the central limit theorem, related to the law of large
numbers. For purposes of most statistical analyses, if the data are a random sample
from a population with the same variance, and are continuous, then samples of more
than 30 observations are assumed to approximate a normal distribution.
If the mode appears precisely in the middle of the plot, and the mean, mode and
median are all the same number, with a sombrero-shaped distribution, this would be
what is called a normal distribution. The following diagram presents a normal
distribution.
[Figure: a symmetric, bell-shaped normal distribution of X against Y, with the mean, mode and median coinciding at the center]
Correlated Independent Variables
In the discussion of regression analysis in the previous chapter, there was
mention of correlated residuals, sometimes referred to as serial correlation. A related
problem, multicollinearity, occurs when two or more of the independent variables –
explanatory variables – are correlated with one another. That is, they have a
statistically significant correlation; as a rule of thumb, a correlation which is .3 or
higher (ignoring the sign).
The best way to handle this issue with respect to forecasting models is to make
sure beforehand that you are not using correlated independent variables.
Unfortunately, you may not have a choice in the selection of variables with a structural
model, or if you do, the instrumental variable you choose instead of the correlated
variable may still exhibit a significant correlation with the remaining independent
variables. The simplest and quickest method for dealing with multicollinear variables is
to run a correlation matrix of the variables being considered for use in the forecasting
model. A correlation matrix simply presents the correlation between each pair of the
variables.
Consider the following correlation matrix, where we have correlated the
unemployment rate with manufacturing inventories (in millions of dollars), a coincident
index of copper prices, and the Standard and Poor's 500 index of common stock prices
in the U.S.
_____________________________________________________________________
                              Unemployment   Inventories   Copper Prices   S&P
                              Rate           Mfg.          (Coincident)    500
_____________________________________________________________________
Unemployment Rate              1.000
Inventories Mfg.              -0.881          1.000
Copper Prices (coincident)    -0.793          0.535         1.000
S&P 500                       -0.679          0.790         0.243          1.000
_____________________________________________________________________
From inspection of this matrix it is clear that all of these variables are highly
correlated with one another, except for coincident copper prices and the S&P 500
stock index. The unemployment rate is negatively associated with each of the other
variables, and manufacturing inventories are positively associated with both the price of
copper and the S&P 500 index – although the statistical association between
inventories and copper prices is not as high as for the remaining variables. Copper prices
and the S&P 500 may be usable in the same regression equation without causing
significant statistical problems.
In examining these variables it should come as no surprise that they are highly
correlated. The unemployment rate is highly correlated with persons unemployed (.98),
which is also statistically associated with the lagging economic indicator "average
duration of unemployment." The S&P 500 is very highly correlated with most of the
major categories within the National Income Accounts and is a leading economic
indicator. With some understanding of the economic theory which underpins these
indicators, an analyst could very easily pick out the types of variables which would be
expected to be highly correlated with one another.
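As a sketch, pandas produces such a correlation matrix in one call; the short series below are made up for illustration.

    import pandas as pd

    df = pd.DataFrame({
        "unemployment": [5.0, 5.5, 6.1, 5.8, 5.2, 4.9],
        "inventories":  [410, 400, 380, 390, 405, 415],
        "copper":       [95, 92, 85, 88, 94, 97],
        "sp500":        [900, 880, 840, 860, 905, 930],
    })

    # Pairwise correlations of every candidate variable; screen out pairs
    # above the chapter's rough .3 threshold before regressing.
    print(df.corr().round(3))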
Factor analysis will provide essentially the same sorts of information, and many
scholars will utilize factor analysis because it also gives some basis for making other
decisions. If there are highly correlated variables which are important to the analysis,
and you have these sorts of difficulties on both sides of the equation – independent and
potential dependent variables that are highly correlated – then this is precisely what
canonical correlation techniques are for. Therefore, these sorts of preliminary analyses
may be very useful in determining what methods are most appropriate to the task at
hand.
Forecasting versus Structural Models
It is wise to remember that business conditions analysis may involve two different
sorts of activities. There is the need to confirm theory – academic econometrics – and
the need to get some fix on future developments – forecasting. There are differences in
both the rigor and the activities required when conducting research for the purposes of
understanding and explaining economic phenomena or testing theories, versus just
forecasting. With empirical examination of a theory, there is the need to fully specify
your model. Missing variables will bias results, and such under-specified models will
rarely get past a referee. However, the formalities of academic research are less
rigorously observed when it comes to forecasting.
The large econometric forecasting models, such as the Wharton Model and other
such simultaneous equations models, are formal models of the U.S. economy and
attempt to model each aspect of what is measured in the National Income Accounts.
This sort of forecasting borders on academic research and is not the nuts and bolts sort
of thing we expect from simple forecasts of a single variable or of something far less
complex. Forecasting is more pragmatic. Forecasters will look to find those variables
that give consistent guidance as to what might be expected in the selected dependent
variable, and if a variable can be found which predicts turn-around points, the forecaster
is typically elated, particularly if some connection can be theoretically found as
confirmation for the result.
One should always remember why you came to the swamp. Draining the
swamp doesn’t necessarily mean you have to wrestle every alligator that wanders into
camp. Forecasting is a down and dirty application of statistical and economic
knowledge, which is clearly more focused on pragmatics than what is expected of
empirical tests of economic theory.
Conclusions
The more information you have concerning the nature and statistical properties of
the data with which you have to work, the better choices you will be able to make
concerning the selection of variables and statistical techniques. You are generally well
advised to choose variables to explain variations in your dependent variable which
have sound theoretical reasons for their inclusion, but which cause the least amount of
statistical headaches.
Prior planning does prevent poor performance. In this case, get to know the data
you are going to torture; it will help in the long run.
KEY CONCEPTS
Know the Data
Independent variables
Dependent variables
Descriptive statistics
Mean
Mode
Median
Variance
Skewed Data
Correlation matrix
Choice of techniques related to data
Forecasting versus Structural equations
STUDY GUIDE
Food for Thought:
What sorts of things would be useful to know about the variables you are going to use for
independent variables? Explain.
What are the differences between simple forecasting, and academic research
concerning testing theories? Explain.
In what ways will getting to know your data help in analyzing it? Explain.
Critically evaluate the use of descriptive statistics, correlations matrices, and data plots
to select variables for inclusion in a forecasting model.
Multiple Choice:
When the mode is greater than the mean and median what is this called?
A. Normal distribution
B. Skewed to the left data
C. Skewed to the right data
D. None of the above
Which of the following problems is associated with independent variables, which are
highly correlated?
A. Residuals are correlated
B. The estimated coefficients may be biased
C. The estimated coefficient may be unstable
D. All of the above
True / False
Standard deviation is one measure of the variability of a data set. {True}
A normal distribution has a mean, mode and median which are all equal, among other
characteristics. {True}
Chapter 17
Epilog:
Management and Business Conditions Analysis
The purpose of this chapter is to examine the managerial aspects of business
conditions analysis. Data gathering and data sources are the starting point of any
attempt to analyze business conditions. However, this must be done on a firm
foundation of sound economic analysis. The uses of the analyses, and by whom, will in
large measure determine the rigor of the analyses, and how they are communicated. Finally, in
this chapter, the discussion of the managerial aspects will end with an examination of
the role of this subject in strategic planning.
Information and Managerial Empowerment
To this point in the course, the discussion has focused almost entirely on the
development of macroeconomic models used to examine and explain the business
environment in which an enterprise must function. In the broadest sense, it is the
macroeconomic environment in which an enterprise must function. However, this
course, in a more practical sense, is a management course. All of the economics and
data analysis in the world doesn’t mean very much unless it can be translated into some
productive activities.
Information should, first and foremost, be one means by which people in an
organization are empowered to contribute to the success of the organization. If the
information technology gurus and the economists are the only ones with access to
information and the results of analyses, then a barrier is being erected between the
people who are responsible for the organization's performance and the information
necessary to discharge those responsibilities. Empowerment in a managerial sense
requires that those closest to the events should be making the calls, within broad policy
constraints. To do this effectively, it is also clear that these people must have access to
the analyses and the data necessary for effective decision making. How this is done is
of critical importance.
First and foremost, it must be remembered that information (its acquisition and
dissemination) is a support function. Accounting, information technology and economic
analysis are staff functions, which are useful to line functions in determining the best
strategies for the organization to follow. These support functions are subsidiary
functions. There is a serious danger here: too many times these support functions take
on decision-making authority simply because they control the information necessary to
effective decision-making. THIS SHOULD NEVER HAPPEN. Economists and
accountants are not physicians, engineers or Indian chiefs, and are not the people with
the line authority or the expertise to properly execute that line authority.
Policy parameters must be set for the gathering, analysis and use of data. Open
access to these data and the resultant analyses, regardless of where it is performed, is
essential if it is to be useful within the context of overall management of the
organization. It is an extremely delicate balancing act, but discretion within the policy
parameters is the essence of managerial empowerment. If you delegate authority, you
must permit discretion to exercise that authority, and information is one tool to make
sure that discretion is exercised wisely.
Policy and Strategy
The purpose of a strategic plan is to provide not only guidance for managerial
actions, hence policy, but also to provide a set of goals by which the directed actions of
the enterprise can be assessed. An effective strategic plan is not just simply a
roadmap. An effective strategic plan accounts for contingencies, and provides for
guidance so as to reduce ambiguities with respect to what and how a firm should be
performing. Naturally, business conditions analysis plays a very important role in
formulating business strategies. Both opportunities and risks arise from the economic
environment in which an enterprise operates. Therefore, anticipation of those conditions,
and contingencies for situations in which the anticipation proves inaccurate, are important
results of business conditions analysis.
Decision-making is a fact of life in any organization. The culture that is
developed in an organization concerning how decision-making occurs is often one of
the most important determinants of the organization’s success or failure. Consistently,
studies of why enterprises fail suggest that poor planning is often at the heart of failure,
and that effective planning is predictive of success. How this planning gets
implemented is part and parcel of decision-making.
Effective organizations empower those with line authority throughout the
organization. However, effectively empowering line managers requires that their efforts
be focused and coordinated. Unfortunately, to some that means micro-managing, and
the organization falls victim to all the ills associated with this bankrupt managerial
model. Effective organizations delegate authority, and have a mission statement that
provides general direction, and a series of well understood and reasonable policies to
guide those individuals to whom that authority has been delegated.
The role of information and its analysis in formulating and evaluating the goals
these policies are developed to achieve is critical. Not only are external business
conditions opportunities and threats, but how the organization responds to these issues,
given the information, is also of critical importance and should be a crucial part of the
planning process.
Qualitative Issues
What has been done in this course is primarily quantitative analysis; however,
this is not to minimize the qualitative aspects of a business environment. It is difficult to
measure ethics or business regulations; however, both of these have significant
implications for the environment in which businesses operate.
There are other issues, which tend to be more internal, that are also shaped by
the business environment. The culture and design of organizations are often heavily
influenced by what the business environment is anticipated to be. Enron's culture was in
large measure the result of the rise of derivatives and fast moving financial markets.
Enron's management was successful in playing a sort of shell game, moving losses and
assets to give the appearance of a thriving company. It is doubtful this culture would
have developed at all, except for the financial market environment in which they
operated.
As intrusive as Sarbanes-Oxley is for businesses, the legislative result was
predictable from the behavior of Ken Lay, Jeff Skilling, and Bernie Ebbers. When these
sorts of scandals arise, which cost pension funds and mutual funds billions of dollars,
there will be a public outcry for action to prevent such conduct in the future. Such
qualitative aspects of the business environment do not lend themselves to quantification
and statistical analysis. Yet these matters are important aspects of the conditions in
which business operates.
Forecasting and Planning Interrelations
Forecasting and planning cannot be separated, as a practical matter. Strategies
require a vision of the future. However, as pointed out earlier in the course, there are
costs to the gathering and analysis of information. Therefore, there is some optimal
amount of analysis and forecasting that is consistent with an effective business plan.
There are some general rules, which help in finding the optimal amount of
analysis for the purposes of planning. Perhaps the foundation of these rules is a
requisite flexibility to react to forecast "vision" errors. The religious adherence to plans
is a formula for disaster. Plans are subject to forces, which alter the results expected. It
is absolutely essential that the feed-forward, feed-back processes that generate the
original plans remain open during implementation so contingencies can be made
operable and goals revised as conditions require. This culture of flexibility requires a
certain humility and a willingness to permit participation across the organization. For
those with work experience, it is clear that this requirement is far easier said than done.
Over time and across organizations we have become far better at understanding
the planning process than we were even twenty years ago. There are clear
relationships between goals specified in the plan and what the authors of those plans
anticipated. A good strategic plan is going to specify specific objectives as guides to
whether goals are being met. Those specific objectives will be measurable, either
qualitatively or quantitatively. There will be provision for revision of goals that the
objectives suggest are not being attained or are being exceeded. Perhaps most
importantly, the process will be ongoing and one of continuous improvement.
Forecasting and business conditions analysis is just one critical cog in this overall
wheel of strategic management. Goals are, in an important sense, nothing more than
forecasts, and the enabling objectives are simply road-signs along the way that allow us
to gauge how well we are doing.
Risk and Uncertainty
Risk is simply the down-side of opportunity. We could call this chapter clichés
centered on quantitative methods, but sometimes those clichés are accurate
descriptions of the real world. Just because they are cliché doesn’t mean you shouldn’t
pay attention. Risk is the reason entrepreneurs are rewarded. Risk is also the reason
enterprises fail. The trick is to take those risks you understand and can appropriately
manage. That sounds very simple, but it's not, or we would all be rich. To understand
the risk one faces requires insight and hard work – hard work to do the data gathering
and crunching necessary to have a handle on what you are up against. Most people
simply don't gain the insights because they are either unwilling or unable to do the
homework necessary to make some enterprise work.
Stochastics are what breed uncertainty, which, in turn, breeds ambiguity. When
we don't know what to expect, that is sometimes the most anxiety-producing situation.
Remember when you were a kid and you transgressed some rule your parents made:
the anxiety waiting for the verdict as to the appropriate corrective action was almost
always worse than the punishment. That's what uncertainty does to you. To the extent
that the knowledge acquired in this course is useful to you as an individual, it can make
your economic well-being less stressful. The lessons here, properly applied, can help
you to understand your world a little better, and if extended to your organization can
make the culture of the place you work less stressful and more informed.
Understanding the macroeconomy and having at your disposal a few forecasting
techniques can help minimize uncertainty, and if extended through your
organization can help co-workers and the enterprise deal with ambiguities – at least
those that arise from the business environment.
Conclusions
The issues examined in this course have not only practical implications for the
gathering and analysis of information; they also concern where that analysis and
information gathering fit in with the successful management of an organization.
Information and its uses are a major part of what a manager does in the day to day
operations of her part of an organization. However, it is unlikely that anyone in an
M.B.A. program will be put in charge of a major forecasting model for a big brokerage or
a Federal Reserve Bank. Just as unlikely is that the typical M.B.A. will progress through
their career without dealing with the issues presented here on a routine basis.
Therefore it is of critical importance that you at least know what the macroeconomic
environment is, and how analysts deal with the business cycle.
Go forth and prosper!!!
E550
Lecture Notes
1. Introduction to Economics
Lecture Notes
1. Economics Defined - Economics is the study of the allocation of SCARCE
resources to meet unlimited human wants.
a. Microeconomics - is concerned with decision-making by individual
economic agents such as firms and consumers.
b. Macroeconomics - is concerned with the aggregate performance of the
entire economic system.
c. Empirical economics - relies upon facts to present a description of
economic activity.
d. Economic theory - relies upon principles to analyze behavior of economic
agents.
e. Inductive logic - creates principles from observation.
f. Deductive logic - hypothesis is formulated and tested.
2. Usefulness of economics - economics provides an objective mode of analysis,
with rigorous models that are predictive of human behavior.
3. Assumptions in Economics - economic models of human behavior are built upon
assumptions; or simplifications that permit rigorous analysis of real world events,
without irrelevant complications.
a. Model building - models are abstractions from reality - the best model is
the one that best describes reality and is the simplest.
b. simplifications:
1. Ceteris paribus - means all other things equal.
2. Problems with abstractions, based on assumptions. Too often, the
models built are inconsistent with observed reality - therefore they are
faulty and require modification. When a model is so complex that it
cannot be easily communicated or its implications understood - it is
less useful.
4. Goals and their Relations - Positive economics is concerned with what is;
normative economics is concerned with what should be. Economic goals are
value statements.
a. Most societies have one or more of the following goals:
1. Economic efficiency,
2. Economic growth,
3. Economic freedom,
4. Economic security,
5. Equitable distribution of income,
6. Full employment,
7. Price level stability, and
8. Reasonable balance of trade.
5. Goals are subject to:
a. Interpretation - precise meanings and measurements will often become
the subject of different points of view, often caused by politics.
b. Complementary - goals that are complementary are consistent and can
often be accomplished together.
c. Conflicting - where one goal precludes or is inconsistent with another.
d. Priorities - rank ordering from most important to least important; again
involving value judgments.
6. The Formulation of Public and Private Policy - Policy is the creation of guidelines,
regulations or law designed to affect the accomplishment of specific economic
goals.
a. Steps in formulating policy:
1. Stating goals - must be measurable with specific stated objective to be
accomplished.
2. options - identify the various actions that will accomplish the stated
goals & select one, and
3. Evaluation - gather and analyze evidence to determine whether policy
was effective in accomplishing goal, if not re-examine options and
select option most likely to be effective.
7. Objective Thinking:
a. Bias - most people bring many misconceptions and biases to economics.
Because of political beliefs and other value system components rational,
objective thinking concerning various issues requires the shedding of
these preconceptions and biases.
b. Fallacy of composition - is simply the mistaken belief that what is true for
the individual, must be true for the group.
c. Cause and effect - post hoc, ergo propter hoc - after this, because of this
fallacy.
1. Correlation - statistical association of two or more variables.
2. Causation - where one variable actually causes another. Granger
causality states that the thing that causes another must occur first, that
the explainer must add to the correlation, and must be sensible.
d. Forecasting - Steps
1. Selection of variables
2. Sophistication
3. Model
4. Specification
5. Re-evaluation of model
6. Reports etc.
2. National Income Accounting
Lecture Notes
1. Gross Domestic Product - (GDP) the total value of all goods and services
produced within the borders of the United States (or country under analysis).
2. Gross National Product - (GNP) the total value of all goods and services
produced by Americans regardless of whether in the United States or overseas.
3. National Income Accounts are the aggregate data used to measure the well-being of an economy.
a. The mechanics of these various accounts are:
Gross Domestic Product
- Depreciation =
Net Domestic Product
+ Net American Income Earned Abroad
- Indirect Business Taxes =
National Income
- Social Security Contributions
- Corporate Income Taxes
- Undistributed Corporate Profits
+ Transfer Payments =
Personal Income
- Personal Taxes =
Disposable Income
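The ladder above is easy to script. Below is a minimal Python sketch of the same arithmetic; every dollar figure is hypothetical, chosen only to illustrate the mechanics.

# A minimal sketch of the national income accounting ladder.
# All dollar figures are hypothetical (billions), for illustration only.

gdp = 10_000.0
depreciation = 1_300.0
net_american_income_abroad = 50.0
indirect_business_taxes = 700.0
social_security_contributions = 750.0
corporate_income_taxes = 250.0
undistributed_corporate_profits = 150.0
transfer_payments = 1_100.0
personal_taxes = 1_200.0

net_domestic_product = gdp - depreciation
national_income = (net_domestic_product
                   + net_american_income_abroad
                   - indirect_business_taxes)
personal_income = (national_income
                   - social_security_contributions
                   - corporate_income_taxes
                   - undistributed_corporate_profits
                   + transfer_payments)
disposable_income = personal_income - personal_taxes

print(f"Net Domestic Product: {net_domestic_product:,.0f}")
print(f"National Income:      {national_income:,.0f}")
print(f"Personal Income:      {personal_income:,.0f}")
print(f"Disposable Income:    {disposable_income:,.0f}")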
4. Expenditures Approach vs. Incomes Approach
a. Factor payments + Nonincome charges - GNP/GDP adjustments = GDP is
the incomes approach
b. Y = C + Ig + G + Xn
is the expenditures approach (where Y = GDP)
5. Social Welfare & GDP - GDP and GNP are nothing more than measures of total
output (or income). More information is necessary before conclusions can be
drawn concerning social welfare. There are problems with both measures,
among these are:
a. Nonmarket transactions such as household-provided services or barter
are not included in GDP.
b. Leisure is an economic good, but time away from work is not counted;
however, movie tickets, skis, and other commodities used in leisure time
are.
c. Product quality - no pretense is made in GDP to account for product or
service quality.
d. Composition & Distribution of Output - no attempt is made in GDP data to
account for the composition or distribution of income or output. We must
look at sectors to determine composition and other information for
distribution.
e. Per capita income - is GDP divided by population, very rough guide to
individual income, but still mostly fails to account for distribution.
f. Environmental problems - damage done to the environment in production
or consumption is not counted in GDP data unless market transactions
occur to clean-up the damage.
g. Underground economy - estimates suggest that underground economic
activity may be as much as one-third of total U.S. output. Criminal
activities, tax evasion, and other such activities make up the
underground economy.
6. Price Indices - are the way we attempt to measure inflation. Price indices are far
from perfect measures and are based on surveys of prices of a specific market
basket of goods.
a. Market basket surveys - The market basket of goods and services are
selected periodically in an attempt to approximate what the average family
of four purchases at that time.
b. CPI (U) is for urban consumers & CPI (W) is for urban wage earners.
GDP Deflator is based on a broader market basket and may be more
useful in measuring inflation.
1. Standard of living - is eroded if there is inflation and no equal increase
in wages.
2. COLAs - escalator clauses that tie earnings or other payments to
the rate of inflation, but only proportionally.
3. Other indices - the American Chamber of Commerce Research
Association in Indianapolis does a cross-sectional survey; there are
also wholesale price indices and several others designed for specific
purposes.
c. Inflation/Deflation - throughout most of U.S. economic history we have
experienced deflation - which is a general decline in all prices. Inflation is
primarily a post-World War II event and is defined to be a general increase
in all prices.
d. Nominal versus Real measures - economists use the term nominal to
describe money values or prices (not adjusted for inflation); real is used to
describe data that are adjusted for inflation.
7. Paasche and Laspeyres Indices
L = (∑PnQ0 / ∑P0Q0) × 100
Laspeyres index is the most widely used index in constructing price indices. It
has the disadvantage of not reflecting changes in the quantities of the
commodities purchased over time. The Paasche index overcomes this problem
by permitting the quantities to vary over time, which requires more extensive data
grubbing. The Paasche index is:
P = (∑PnQn / ∑P0Qn) × 100
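For a concrete sense of the two formulas, here is a small Python sketch computing both indices for a two-good market basket; the prices and quantities are made up for illustration.

# Laspeyres and Paasche indices for a two-good market basket.
# Prices and quantities are hypothetical, for illustration only.

p0 = [2.00, 5.00]   # base-year prices
q0 = [10, 4]        # base-year quantities
pn = [2.50, 6.00]   # current-year prices
qn = [12, 3]        # current-year quantities

def laspeyres(p0, q0, pn):
    # L = (sum Pn*Q0 / sum P0*Q0) * 100 -- base-year quantities held fixed
    return sum(p * q for p, q in zip(pn, q0)) / sum(p * q for p, q in zip(p0, q0)) * 100

def paasche(p0, pn, qn):
    # P = (sum Pn*Qn / sum P0*Qn) * 100 -- current-year quantities used
    return sum(p * q for p, q in zip(pn, qn)) / sum(p * q for p, q in zip(p0, qn)) * 100

print(f"Laspeyres: {laspeyres(p0, q0, pn):.1f}")   # 122.5
print(f"Paasche:   {paasche(p0, pn, qn):.1f}")     # 123.1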
8. Measuring the price level
a. CPI = (current year market basket/ base year market basket) X 100 the
index number for the base year will be 100.00 (or 1 X 100)
b. Inflating is the adjustment of prices to a higher level, for years when the
index is less than 100.
c. Deflating is the adjustment of prices to a lower level, for years when the
index is more than 100.
1. To change nominal into real the following equation is used:
Nominal value/ (price index/100)
d. Changing base years - a price index base year can be changed to create
a consistent series (remembering that market baskets also change, hence the
process has a fault). The process is a simple one: to convert a 1982
base-year index to be consistent with a 1987 base year, divide each
observation in the 1982-base series by that series' index number for the
year 1987 and multiply by 100.
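The deflating and rebasing rules above can be sketched in a few lines of Python; all index numbers and dollar amounts here are hypothetical.

# Deflating nominal values and rebasing a price index, following the
# rules above. All index numbers and dollar amounts are hypothetical.

nominal = {1985: 4200.0, 1990: 5800.0}                  # billions
index_1982 = {1985: 107.6, 1987: 113.6, 1990: 130.7}    # 1982 = 100

# Real value = nominal value / (price index / 100)
real = {yr: nominal[yr] / (index_1982[yr] / 100) for yr in nominal}
print(real)    # inflation-adjusted values in 1982 dollars

# Rebasing to 1987: divide every observation in the old series by the
# old series' 1987 value and multiply by 100 (1987 then equals 100).
rebased = {yr: v / index_1982[1987] * 100 for yr, v in index_1982.items()}
print(rebased)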
3. Unemployment and Inflation
Lecture Notes
1. Business Cycle - is the recurrent ups and downs in economic activity observed in
market economies.
a. troughs are where employment and output bottom-out during a recession
(downturn)
b. peaks are where employment and output top-out during a recovery
(upturn)
c. seasonal trends are variations in data that are associated with a particular
season in the year.
d. secular trends are long-run trends (generally 25 or more years) in
macroeconomic data.
[Figure: the business cycle - output plotted against years, with peaks and troughs fluctuating around a rising secular trend]
2. Unemployment - there are various causes of unemployment, including:
a. frictional - consists of search and wait unemployment which is caused by
people searching for employment or waiting to take a job in the near
future.
b. structural - is caused by a change in composition of output, change in
technology, or a change in the structure of demand.
c. cyclical - due to recessions, (business cycle).
3. Full employment - is not zero unemployment; the full-employment rate of
unemployment is the same as the natural rate.
a. natural rate - is thought to be about 4% and is structural + frictional
unemployment.
1. potential output - is the output of the economy at full employment.
4. Unemployment rate - is the percentage of the workforce that is unemployed.
a. labor force - those employed or unemployed who are willing, able and
searching for work; the labor force is about 50% of the total population.
b. part-time employment - those who do not have 40 hours of work (or the
equivalent) available to them; about 6 million U.S. workers were
involuntarily part-time, and about 10 million were voluntarily part-time
employees in 1992.
c. discouraged workers - those persons who dropped out of labor force
because they could not find an acceptable job.
d. false search - those individuals who claim to be searching for employment,
but really were not, some because of unemployment compensation
benefits.
5. Okun's law
a. Okun's Law states that for each 1% by which unemployment exceeds the
natural rate there will be a gap of 2.5% between actual GDP and potential GDP.
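Okun's law as stated above is a one-line calculation; the sketch below uses hypothetical figures.

# Okun's law: each point of unemployment in excess of the natural rate
# opens a 2.5% gap between actual and potential GDP. Figures hypothetical.

def okun_gap_pct(unemployment, natural_rate=4.0, coefficient=2.5):
    """Percentage shortfall of actual GDP below potential GDP."""
    return max(unemployment - natural_rate, 0.0) * coefficient

potential_gdp = 12_000.0                      # billions, hypothetical
gap = okun_gap_pct(7.0)                       # unemployment 3 points above natural
actual_gdp = potential_gdp * (1 - gap / 100)
print(f"GDP gap: {gap:.1f}% -> actual GDP about {actual_gdp:,.0f}")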
6. Burden of unemployment differs by several factors, these are:
a. Occupation - mostly due to structural changes.
b. Age - young people tend to experience more frictional unemployment.
c. Race and gender reflect discrimination in the labor market and sometimes
in educational opportunities.
7. Inflation - general increase in all prices.
a. CPI - is the measure used to monitor inflation.
b. Rule of 70 -- the number of years for the price level to double =
70/%annual rate of increase.
8. Demand-pull inflation
[Figure: aggregate demand shifting outward along an upward-sloping aggregate supply curve; the price level and output both rise]
Using a naive aggregate demand - aggregate supply model (similar to the supply
and demand diagrams for a market, except that supply is total output in all
markets and demand is total demand in all markets), as the aggregate demand
curve shifts outward, prices increase, but so does output.
9. Cost-push inflation - again using a naive aggregate supply - aggregate demand
approach, cost-push inflation results from a decrease in aggregate supply:
[Figure: aggregate supply shifting inward along a downward-sloping aggregate demand curve; the price level rises and output falls]
a. pure inflation results from an increase in aggregate demand that is equal
to a decrease in aggregate supply:
[Figure: an increase in aggregate demand matched by an equal decrease in aggregate supply; the price level rises while output is unchanged]
10. Effects of inflation impact different people in different ways. If inflation is fully
anticipated and people can adjust their nominal income to account for inflation,
then there will be no adverse effects; however, if people cannot adjust their
nominal income or the inflation is unanticipated, those individuals will see their
standard of living eroded.
a. Debtors typically benefit from inflation because they can pay off loans
in the future with money that is worth less; creditors are thereby harmed
by inflation.
b. Inflation typically creates expectations among people of increasing prices,
which may contribute to future inflation.
c. Savers generally lose money because of inflation if the rate of return on
their savings is not sufficient to cover the inflation rate.
4. Aggregate Supply & Aggregate Demand
Lecture Notes
1. Aggregate demand is a downward sloping function that shows the inverse
relation between the price level and domestic output. The reasons that the
aggregate demand curve slopes down and to the right differ from the reasoning
offered for individual market demand curves (substitution & income effects -
these do not work with aggregates). The reasons for the downward sloping
aggregate demand curve are:
a. wealth or real balance effect - higher price level reduces the real
purchasing power of consumers' accumulated financial assets.
b. interest-rate effect - assuming a fixed supply of money, an increase in the
price level increases interest rates, which in turn reduces interest-sensitive
expenditures on goods and services (e.g., consumer durables - cars, etc.).
c. foreign purchases effect - if prices of domestic goods rise relative to
foreign goods, domestic consumers will purchase more foreign goods as
substitutes.
2. Determinants of aggregate demand - factors that shift the aggregate demand
curve.
a. expectations concerning real income or inflation (including profits from
investments in business sector),
b. Consumer indebtedness,
c. Personal taxes,
d. Interest rates,
e. Changes in technology,
f. Amount of current excess capacity in industry,
g. Government spending,
h. Net exports,
i. National income abroad, and
j. Exchange rates.
3. Aggregate Supply shows amount of domestic output available at each price level.
The aggregate supply curve has three ranges, the Keynesian range (horizontal),
the intermediate range (curved), and the classical range (vertical).
a. Keynesian range - is the result of downward rigidity in prices and wages.
b. Classical range - classical economists believed that the aggregate supply
curve goes vertical at the full employment level of output.
c. Intermediate range - is the range in aggregate supply where rising output
is accompanied by rising price levels.
4. Determinants of Aggregate Supply cause the aggregate supply curve to shift.
a. Changes in input prices,
b. Changes in input productivity, and
c. Changes in the legal/institutional environment.
5. Macroeconomic Equilibrium - the intersection of aggregate supply and
aggregate demand.
6. Ratchet Effect - where there is a decrease in aggregate demand, producers
are unwilling to accept lower prices (rigid prices and wages); there is therefore
a ratcheting of the aggregate supply curve (a decrease in the intermediate and
Keynesian ranges) which keeps the price level the same, but with reduced
output. In other words, prices can move up (forward) but not down (backward).
An increase in aggregate demand from AD1 to AD2 moves the equilibrium from point a
to point b with real output and the price level increasing. However, if prices are
inflexible downward, then a decline in aggregate demand from AD2 to AD1 will not
restore the economy to its original equilibrium at point a. Instead, the new equilibrium
will be at point c with the price level remaining at the higher level and output falling to
the lowest point. The ratchet effect means that the aggregate supply curve has shifted
upward (a decrease) in both the Keynesian and intermediate ranges.
5. Classical and Keynesian Models
Lecture Notes
1. Classical theory of employment (macroeconomics) rests upon two founding
principles, these are:
a. underspending unlikely - spending in amounts less than sufficient to
purchase the full employment level of output is not likely.
b. even if underspending should occur, then price/wage flexibility will prevent
output declines because prices and wages would adjust to keep the
economy at the full employment level of output.
2. Say's Law "Supply creates its own demand" (well not exactly)
a. in other words, every level of output creates enough income to purchase
exactly what was produced.
b. among others, there is one glaring omission in Say's Law -- what about
savings?
3. Savings
a. output produces incomes, but savings is a leakage
b. savings give rise to investment, and the interest rate is what links
savings and investment.
4. Wage-Price flexibility
a. the classicists believed that a laissez faire economy would result in
macroeconomic equilibria and that only the government could cause
disequilibria.
5. Keynesian Model - beginning in the 1930s the classical models failed to explain
what was going on, hence a new model was developed -- the Keynesian Model.
a. full employment is not guaranteed, because interest motivates both
consumers & businesses differently - just because households save does
not guarantee businesses will invest.
b. price-wage rigidity, rather than flexibility, was assumed by Keynes.
6. The Consumption schedule - income & consumption
a. consumption schedule - the 45-degree line is every point where
disposable income is totally consumed.
b. saving schedule - shows the amount of savings associated with the
consumption function.
The consumption schedule intersects the 45-degree line at 400 in
disposable income; this is also where the savings function intersects zero
(in the graph below the consumption function). To the left of the
intersection of the consumption function and the 45-degree line, the
consumption function lies above the 45-degree line. The distance
between the 45-degree line and the consumption schedule are dissavings,
shown in the savings schedule graph by the savings function falling below
zero. To the right of the intersection of the consumption function with the
45 degree line the consumption schedule is below the 45-degree line.
The distance that the consumption function is below the 45-degree line is
called savings, shown in the bottom graph by the savings function rising
above zero.
c. Marginal Propensity to Consume (MPC) is the proportion of any increase
in disposable income spent on consumption (if all is spent MPC is 1, if
none is spent MPC is zero). The Marginal Propensity to Save (MPS) is
the proportion of any increase in disposable income saved. The relation
between MPC and MPS is:
1. MPC + MPS = 1
d. The slope (rise divided by the run) of the consumption function is the MPC
and the slope of the savings function is the MPS. Add the slope of the
consumption function (8/10) to the slope of the savings function (2/10)
and they equal one (10/10).
e. The Average Propensity to Consume (APC) is total consumption divided
by total income, Average Propensity to Save (APS) is total savings divided
by total income. Again, if income can be either saved or consumed (and
nothing else) then the following relation holds:
1. APC + APS = 1
7. The nonincome determinants of consumption and saving are (these cause shifts
in the consumption and saving schedules):
a. Wealth,
b. Prices,
c. Expectations concerning future prices, incomes and availability of
commodities,
d. Consumer debts, and
e. Taxes.
8. Investment
a. investment demand curve is downward sloping.
b. determinants of investment demand are:
1. acquisition, maintenance & operating costs,
2. business taxes,
3. technology,
4. stock of capital on hand, and
5. expectations concerning profits in future.
c. Autonomous (determined outside of system) v. induced investment
(function of GDP):
1. Instability in investment has marked U.S. economic history.
2. Causes of this instability are:
a. Variations in the durability of capital,
b. Irregularity of innovation,
c. Variability of profits, and
d. Expectations of investors.
6. Equilibrium in the Keynesian Model
Lecture Notes
1. Equilibrium GDP - is that output that will create total spending just sufficient to
buy that output (where aggregate expenditure schedule intersects 45-degree
line).
a. Disequilibrium - where spending is insufficient (recessionary gap) or too
high for level of output (inflationary gap).
2. Expenditures - Output Approach
a. Y = C + I + G + X is the identity for income where Y = GDP, C =
Consumption, I = Investment, G= Government expenditures, and X = Net
exports (exports minus imports)
The equilibrium level of GDP is indicated above where C + I is equal to the
45-degree line. Investment in this model is autonomous and the amount
of investment is the vertical distance between the C and the C + I lines.
This model assumes no government and that net exports are zero.
3. Leakages - Injections Approach relies on the equality of investment and savings
at equilibrium.
a. I = S is equilibrium in the leakages - injections approach.
b. planned v. actual investment - the reason that the leakages - injections
approach works is that planned investment must equal savings. Inventories
can increase beyond what was planned, meaning output is not being
purchased, which is recessionary; or intended inventories can be depleted,
which is inflationary.
The original equilibrium is where I1 is equal to S and that level of GDP is
shown with the solid indicator line. If we experience a decrease in
investment we move down to I2 and if an increase in investment is
observed it will be observed at I3.
4. If there is an increase in expenditures, there will be a respending effect. In other
words, if $10 is injected into the system, then it is income to someone. That first
person will spend a portion of the income and save a portion. If MPC is .90 then
the first individual will save $1 and spend $9.00. The second person receives
$9.00 in income and will spend $8.10 and save $0.90. This process continues
until there is no money left to be spent. Instead of summing all of the income,
expenditures, and/or savings there is a short-hand method of determining the
total effect -- this is called the Multiplier, which is:
a. Multiplier M = 1/(1 - MPC) = 1/MPS
b. significance - any increase in expenditures will have a multiple effect on
the GDP.
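The respending process and the short-hand multiplier give the same answer, which a few lines of Python can verify; the $10 injection and MPC of .90 follow the example above.

# The respending process simulated round by round, checked against the
# short-hand multiplier M = 1/(1 - MPC) = 1/MPS.

mpc = 0.90
injection = 10.0

total, spending = 0.0, injection
while spending > 0.0001:      # stop once a round becomes negligible
    total += spending
    spending *= mpc           # each recipient spends MPC of what arrives

print(f"Summed rounds:     {total:.2f}")                  # about 100.00
print(f"Multiplier method: {injection / (1 - mpc):.2f}")  # exactly 100.00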
5. Paradox of thrift - the curious observation that if society tries to save more it may
actually save the same amount: unless investment moves up as a result of the
savings, all that happens is that GDP declines, and if investment is autonomous
then savings remain the same.
6. Full Employment level of GDP may not be where the aggregate expenditures line
intersects the 45-degree line. There are two possibilities, (1) a recessionary gap
or (2) an inflationary gap, both are illustrated below.
a. Recessionary gap
The distance between the C + I line and the 45-degree line along the
dashed indicator line is the recessionary gap. The dotted line shows the
current macroeconomic equilibrium.
For example, if actual GDP of $1,000 billion is 90% of potential
(.9 × potential GDP = $1,000 billion), divide each side by .9 to get
potential GDP of $1,111.11 billion.
b. Inflationary gap
The distance between the C + I line and the 45-degree line along the
dashed indicator line is the inflationary gap. The dotted indicator line
shows the current macroeconomic equilibrium.
7. Reconciling AD/AS with the Keynesian Cross - the various C + I and 45-degree
line intersections, if multiplied by the appropriate price level, will each yield one
point on the aggregate demand curve. Shifts in aggregate demand can be shown
by holding the price level constant and showing increases or decreases in C + I in
the Keynesian Cross model. Both models can be used to analyze essentially the
same macroeconomic events.
7. John Maynard Keynes and Fiscal Policy
Lecture Notes
1. Discretionary Fiscal Policy - involves government expenditures and/or taxes to
stabilize the economy.
a. Employment Act of 1946 - formalized the government's responsibility in
promoting economic stability.
b. simplifying assumptions for the analyses presented here:
1. exogenous I & X,
2. G does not initially impact private decisions,
3. all taxes are personal taxes,
4. some exogenous taxes collected,
5. no monetary effects, fixed initial price level, and
6. fiscal policy impacts only demand side.
2. Changes in Government Expenditures - can be made for several reasons:
a. Stabilization of the economy,
1. To close a recessionary gap the government must spend an
amount that, times the multiplier, will equal the total gap.
2. To close an inflationary gap the government must cut
expenditures by an amount that, times the multiplier, will equal the
inflationary gap.
b. Political goals, and
c. Provision of necessary goods & services.
An increased government expenditure of $20 billion results in an increase
in GDP of $80 billion with an MPC of .75, hence a multiplier of 4.
3. Taxation affects both consumption and savings.
a. If the government uses a lump sum tax increase to reduce an inflationary
gap the reduction in GDP occurs thusly:
1. The lump sum tax must be multiplied by the MPC to obtain the
reduction in consumption;
2. The reduction in consumption is then multiplied by the multiplier.
b. A decrease in taxes works the same way: the total impact is the lump sum
reduction times the MPC to obtain the increase in consumption, which is,
in turn, multiplied by the multiplier to obtain the full impact on GDP.
c. A short-cut method with taxes is to calculate the multiplier, as you would
with an increase in government expenditures and deduct one from it.
4. The balanced budget multiplier is always one.
a. Occurs when the amount of government expenditures goes up by the
same amount that a lump sum tax is increased.
b. That is because only the initial expenditure increases GDP; the
remaining multiplier effects are offset by taxation.
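The three multipliers can be checked with a short Python sketch, using the MPC of .75 and the $20 billion figures from the expenditure example above.

# Expenditure, lump-sum tax, and balanced-budget effects.

mpc = 0.75
m = 1 / (1 - mpc)        # expenditure multiplier = 4

delta_g = 20.0           # government spending increase (billions)
print(delta_g * m)       # 80.0 -- spending change times the multiplier

delta_t = 20.0           # lump-sum tax increase (billions)
# Step 1: the tax times the MPC gives the initial fall in consumption;
# Step 2: that fall is then multiplied by the multiplier.
print(-delta_t * mpc * m)    # -60.0 -- the tax multiplier is m - 1 = 3

# Balanced budget: equal changes in G and lump-sum taxes net to
# delta_g times one, so the balanced budget multiplier is one.
print(delta_g * m - delta_t * mpc * m)   # 20.0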
5. Tax structure refers to the burden of the tax:
a. progressive is where the effective tax rate increases with ability to pay,
b. regressive is where the effective tax rate increases as ability to pay
decreases,
c. proportional is where a fixed proportion of ability to pay is taken in taxes.
6. Automatic stabilizers help to smooth business cycles without further legislative
action:
a. Progressive income taxes,
b. Unemployment compensation,
c. Government entitlement programs
7. Fiscal Lag - there are numerous lags involved with the implementation of fiscal
policy. It is not uncommon for fiscal policy to take 2 or 3 years to have a
noticeable effect, after Congress begins to enact fiscal measures.
a. Recognition lag - how long to start to react.
b. Administrative lag - how long to have legislation enacted & implemented.
c. Operational lag - how long it takes to have effects in economy.
8. Politics and Fiscal Policy.
a. Public choice economists claim that politicians maximize their own utility
by legislative action.
b. Log-rolling and negotiations result in many bills that impose costs.
9. Government deficits and crowding-out. It is alleged that private spending is
displaced when the government borrows to finance spending:
a. Ricardian Equivalence - deficit financing has the same effect on GDP as
an increased tax.
10. Open economy problems. Because there is a foreign sector that impacts GDP
there are potential problems for fiscal policy arising from foreign sources.
a. increased interest - net export effect
1. An increase in the interest rate domestically will attract foreign
capital, but this increases the demand for dollars which increases
their value and thereby reduces exports, hence GDP.
b. foreign shocks - in addition to currency exchange rates.
1. Oil crises increased costs of production in the U.S.
8. Money and Banking
Lecture Notes
1. Functions of Money - there are three functions of money:
a. Medium of exchange - accepted as "legal tender" or something of general
and specified value.
1. Use avoids reliance on barter.
2. Barter requires a coincidence of wants and severely complicates
a market economy.
b. Measure of value - permits value to be stated in terms of a common
and universally understood standard.
c. Store of value - can be saved with little risk, chance of spoilage and
virtually no cost and later exchanged for commodities without these
positive storage characteristics.
2. Supply of Money
a. There are numerous definitions of money; M1 through M3 are the most
commonly used.
1. M1 is currency + checkable deposits.
2. M2 is M1 + noncheckable savings accounts, time deposits of less
than $100,000, Money Market Deposit Accounts, and Money Market
Mutual Funds.
3. M3 is M2 + large time deposits (larger than $100,000).
3. Near Money - are items that fulfill portions of the requirements of the functions of
money.
a. Credit cards - fulfill exchange function, but are not a measure of value and
if there is a credit line, can be used to store value.
b. Other forms of near money:
1. Precious metals - store of value, but not easily exchanged
2. Stocks and Bonds - earnings instruments, but can be used as
store of value.
c. Implications for near money - stability, spending habits & policy
4. What gives money value
a. No more gold standard
1. Nixon eliminated the gold standard.
b. The Value of money depends upon:
1. acceptability for payment,
2. because the government claims it is legal tender, and
3. its relative scarcity.
c. Exchange rates
5. Value of dollar = D = 1/P
6. Demand for Money - three components of money demand:
a. Transactions demand
b. Asset demand
c. Total demand
The money supply curve is vertical because the supply of money is exogenously
determined by the Federal Reserve. The money demand curve slopes downward and
to the right. The intersection of the money demand and money supply curves
represents equilibrium in the money market and determines the interest rate (price of
money).
7. Money market
a. With bonds that pay a specified interest payment per quarter then:
1. interest rate and value of bond inversely related
8. U.S. Financial System
a. FDIC - Federal Deposit Insurance Corporation - guarantees bank
deposits.
b. Federal Reserve System - is comprised of member banks. The Board of
Governors and the Chairman are nominated by the President of the United
States. The structure of the system is:
1. Board of Governors
2. Open Market Committee
3. Federal Advisory Council
4. 12 regional banks: Atlanta, Boston, Chicago, Cleveland, Dallas,
Kansas City, Minneapolis, New York, Philadelphia, Richmond,
San Francisco, St. Louis
c. Functions
1. Set reserve requirements,
2. Check clearing services,
3. Fiscal agents for U.S. government,
4. Supervision of banks,
5. Control money supply through FOMC,
9. Moral hazard - insurance reduces the insured's incentive to ensure the risk
does not happen.
9. Multiple Expansion of Money
Lecture Notes
1. Balance sheet (T accounts -- assets = liabilities + net worth)
a. is nothing more than a convenient reporting tool.
2. Fractional Reserve Requirements
a. Goldsmiths used to issue paper money receipts, backed by stocks of gold.
The stocks of gold acted as a reserve to assure payment if the paper
claims were presented for payment.
1. Genghis Khan first issued paper money in the thirteenth century; it was backed by nothing except the Khan's authority.
b. The U.S. did not have a central banking system from the 1820s through
1914. In the early part of the twentieth century, several financial panics
pointed to the need for central banking and financial regulation.
1. States and private companies used to issue paper money.
2. In the early days of U.S. history Spanish silver coins were widely
circulated in the U.S.
3. The first U.S. paper money was issued during the Civil War (The
Greenback Act), which included fractional currency (paper dimes
& nickels!).
c. Today, the Federal Reserve requires banks to keep a portion of their
deposits as reserves.
1. The purposes are to keep banks solvent & prevent financial panics.
3. RRR (Required Reserve Ratio) and multiple expansion of money supply through
T accounts
a. How reserves are kept
1. Loans from Fed - discount rate at which Fed loans reserves to
members
2. Vault cash
3. Deposits with Fed
4. Fed funds rate - the rate at which members loan each other
reserves
b. RRR = Required reserve/demand deposit liabilities
c. actual, required, and excess reserves
4. Money created through deposit/loan redepositing
a. Money is created by a bank receiving a deposit and then loaning the
non-reserve portion of that deposit; the loan is redeposited, and further
loans are made against those deposits.
1. If the required reserve ratio is .10, then a bank must retain 10% of
each deposit as a reserve and can loan 90% of the deposit; the
multiple expansion of money, assuming a required reserve ratio of
.10, is therefore:
Deposit        Loan
$10.00          9.00
  9.00          8.10
  8.10          7.29
   .             .
   .             .
_______        _______
$100.00        $90.00
Total new money is the initial deposit of $10 + $90 of multiple
expansions for a total of $100.00 in new money.
5. Money multiplier Mm = 1/RRR
a. Is the short-hand method of calculating the entries in banks' T accounts
and shows how much an initial injection of money into the system
generates in total money supply.
b. With a required reserve ratio of .05 the money multiplier is 20 & with a
required reserve ratio of .20 the money multiplier is 5.
c. the Federal Reserve needs to inject only that fraction of money that,
times the multiplier, will increase the money supply to the desired level.
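A short Python sketch can reproduce the T-account expansion round by round and check it against Mm = 1/RRR; it uses the $10.00 deposit and .10 reserve ratio from the example above.

# Multiple expansion of the money supply through repeated
# deposit-and-loan rounds, checked against Mm = 1/RRR.

rrr = 0.10
initial_deposit = 10.0

total_money, deposit = 0.0, initial_deposit
while deposit > 0.0001:
    total_money += deposit
    deposit *= (1 - rrr)    # each bank loans out the non-reserve portion

print(f"Summed deposits: {total_money:.2f}")             # about 100.00
print(f"Mm * deposit:    {initial_deposit / rrr:.2f}")   # exactly 100.00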
10. Federal Reserve and Monetary Policy
Lecture Notes
1. Monetary policy, defined and objectives
a. Monetary policy is carried out by the Federal Reserve System and is
focused on controlling the money supply.
b. The fundamental objective of monetary policy is to assist the economy in
attaining a full employment, non-inflationary equilibrium.
2. Tools of Monetary Policy
a. Open Market Operations is the selling and buying of U.S. treasury
obligations in the open market.
b. Expansionary monetary policy involves the buying of bonds.
1. When the Fed buys bonds, it puts money into the hands of those
who had held the bonds.
c. Contractionary monetary policy involves the selling of bonds.
1. When the Fed sells bonds, it removes money from the system and
replaces it with bonds.
3. Required Reserve Ratio - the Fed can raise or lower the required reserve ratio.
a. Increasing the required reserve ratio reduces the money multiplier, hence
reduces the amount by which multiple expansions of the money supply
can occur.
1. decreasing the required reserve ratio increases the money
multiplier, and permits more multiple expansion of the money
supply.
4. The Discount Rate is the rate at which the Fed will loan reserves to member
banks for short periods of time.
5. Velocity of Money - is how often the money supply turns over.
a. The quantity theory of money is: MV = PQ
1. This equation has velocity (V) which is nearly constant and output
(Q) which grows slowly, so what happens to the money supply
(M) should be directly reflected in the price level (P).
6. Target Dilemma in Monetary Policy
a. Interest rates and the business cycle may present a dilemma.
Expansionary monetary policy may result in higher interest rates, which
dampen expansionary policies.
b. Fiscal and monetary policies may also be contradictory.
7. Easy Money - lowering interest rates, expanding money supply.
a. mitigate recession and stimulate growth.
8. Tight Money - increasing interest rates, contracting money supply.
a. mitigate inflation and slow growth.
9. Monetary rules - Milton Friedman
a. Discretionary monetary policy often misses targets in U.S. monetary
history
b. Suspicion of Fed and FOMC – perhaps overblown
11. Interest Rates and Output: Hicks' IS/LM Model
Lecture Notes
1. IS Curve
The IS curve shows the level of real GDP for each level of the real interest
rate. The derivation of the IS curve is a rather straightforward matter; observe
the following diagram.
[Figure: two stacked panels - an income/expenditure diagram (expenditure or aggregate demand against GDP) and an autonomous spending panel (interest rate against autonomous spending) - from which the downward-sloping IS curve (interest rate against GDP) is derived]
The Income-Expenditure diagram determines for each level of aggregate
demand the associated level of GDP.
The intercept of the IS Curve is the level of GDP that would obtain at a zero real
interest rate. The slope of the IS Curve is the multiplier, 1/(1 - MPC), times the sum of
the marginal propensity to invest (with respect to a change in real interest rates) and
the marginal propensity to export (with respect to a change in real interest rates).
The IS curve can be shifted by fiscal policy. Changes in C0 or I0 will also move
the IS curve.
Anytime the economy moves away from the IS curve there are forces within the
system which push the economy back onto the IS curve. Observe the following
diagram:
[Figure: a downward-sloping IS curve in (real GDP, real interest rate) space; point A lies above the curve and point B below it, with arrows pushing each back toward the curve]
If the economy is at point A there is a relatively high level of real interest rates, which
results in planned expenditure being less than production; hence inventories are
accumulating and production will fall, pushing the economy toward the IS curve.
On the other hand, at point B the relatively low real interest rate results in planned
expenditure being greater than production; hence inventories are being sold into the
marketplace, and production will rise to bring us back to the IS curve.
2. LM Curve
The LM Curve is derived in a fashion similar to that of the IS curve. Consider the
following diagram:
[Figure: left panel - the money market, with a vertical money supply curve and money demand curves D1, D2, D3 corresponding to high, moderate, and low income (Y), plotted against the real quantity of money; right panel - the resulting upward-sloping LM curve, interest rate against GDP (Y)]
As can be readily observed from this diagram, the LM curve is the schedule of
interest rates associated with levels of income (GDP). The interest rate, in this case,
is determined in the money market.
With a fixed money supply each level of demand for money creates a different
interest rate.
If the money market is to remain in equilibrium and the supply of money is fixed,
then as incomes rise so too must the interest rate.
The shifting of the LM curve is obtained through inflation or monetary policy.
3. Equilibrium in IS-LM
The intersection of the IS and LM curves results in an equilibrium in
the macroeconomy.
[Figure: a fixed LM curve intersected by three IS curves (IS2, ISe, IS1); each intersection determines an equilibrium interest rate r and level of GDP Y]
Where the IS and LM curves intersect is where there is an equilibrium in this
economy. With this tool in hand the effects of the interest rate on GDP can be directly
observed. As the IS curve shifts to the right along the LM curve, notice that there is an
increase in GDP, but with a higher interest rate (IS1); just the opposite occurs as the
IS curve shifts back toward the origin (IS2). From the discussion above it is clear what
sorts of things shift the IS curve: fiscal policy or changes in C0 or I0.
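Because these notes do not give explicit equations for IS and LM, the following Python sketch assumes simple linear forms with made-up coefficients, purely to illustrate how the intersection determines r and Y and how an IS shift moves both.

# A linear IS-LM equilibrium solved for the interest rate and GDP.
# Every functional form and coefficient here is hypothetical.

import numpy as np

# IS:  Y = a - b*r   (spending falls as the real interest rate rises)
# LM:  Y = c + d*r   (money-market equilibrium: higher Y requires higher r)
a, b = 1200.0, 40.0
c, d = 400.0, 60.0

# Solve the 2x2 system  Y + b*r = a  and  Y - d*r = c.
A = np.array([[1.0, b], [1.0, -d]])
Y, r = np.linalg.solve(A, np.array([a, c]))
print(f"Equilibrium: Y = {Y:.1f}, r = {r:.2f}")

# Expansionary fiscal policy shifts IS outward (raise a): Y and r both rise.
Y2, r2 = np.linalg.solve(A, np.array([a + 100.0, c]))
print(f"After IS shift: Y = {Y2:.1f}, r = {r2:.2f}")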
4. Expansionary and Contractionary Monetary Policies
The LM curve, too, can be moved about by changes in policy. If the money
supply is increased or decreased that will have obvious implications for the LM curve
and hence the interest rate and equilibrium level of GDP. Consider the following
diagram:
[Figure: a fixed IS curve intersected by three LM curves (LM2, LMe, LM1); easy money shifts LM to the right, lowering the interest rate r and raising GDP Y]
As the Fed engages in easy monetary policies the LM Curve shifts to the right, and the
interest rate falls, as GDP increases.
5. Foreign Shocks
The U.S. economy is not a closed economy, and the IS-LM Model permits us to
examine foreign shocks to the U.S. economy. There are three of these foreign shocks
worthy of examination here; these are (1) increase in demand for our exports, (2)
increases in foreign interest rates, and (3) currency speculators' expectations of an
increase in the exchange rate of our currency with respect to some foreign currency.
Each of these foreign shocks results in an outward expansion of the IS Curve as
portrayed in the following diagram:
[Figure: the IS curve shifts outward from ISo to IS1 along a fixed LM curve; the interest rate rises from r1 to r2 and GDP rises from Y1 to Y2]
12. Economic Stability and Policy
Lecture Notes
1. Inflation, Unemployment and Economic Policy
a. The misery index is the inflation rate plus the unemployment rate.
2. The Phillips Curve is a statistical relation between unemployment and inflation
named for A. W. Phillips who examined the relation in the United Kingdom and
published his results in 1958. (Actually Irving Fisher had done earlier work on
the subject in 1926).
a. Short run trade-off
Often used to support an activist role for government; however, the short-run
trade-off view of the Phillips curve demonstrates that there is a cruel choice
between increased inflation and increased unemployment -- low inflation and
low unemployment together are not possible.
b. Long run Phillips Curve is alleged to be vertical at the natural rate of
unemployment.
This long-run view of the Phillips Curve is also called the Natural Rate
Hypothesis. It is based on the idea that people constantly adapt to current economic
conditions and that their expectations are subject to "adaptive" revisions almost
constantly. If this is the case, then businesses and consumers cannot be fooled into
accepting more unemployment to cure inflation or vice versa.
c. A positively sloped Phillips curve was hypothesized by Milton Friedman.
Friedman was of the opinion that there may be a transitional Phillips curve
while people adapt both their expectations and institutions to new economic
realities. The positively sloped Phillips curve is shown in the following
picture:
[Figure: a positively sloped transitional Phillips curve, inflation against unemployment]
The positively sloped transitional Phillips Curve is consistent with the
observations of the early 1980s when both high rates of unemployment existed together
with high rates of inflation -- a condition called stagflation.
d. Cruel choices only exist in the case of the short-run trade-off view of the
Phillips Curve. However, there may be a "Lady and Tiger Dilemma" for
policy makers relying on the Phillips Curve to make policy decisions.
3. Rational expectations is a theory that businesses and consumers will be able to
accurately forecast prices (and other relevant economic variables). If the
accuracy of consumers' and business expectations permit them to behave as
though they know what will happen, then it is argued that only a vertical Phillips
Curve is possible, as long as political and economic institutions remain stable.
4. Market policies are concerned with correcting specific observed economic woes.
a. Equity policies are designed to assure "a social safety net" at the minimum
and at the liberal extreme to redistribute income.
1. The Lorenz Curve and Gini Coefficients are used to measure income
distribution in economies.
The Lorenz curve maps the distribution of income across the
population. The 45-degree line shows what the distribution of
income would be if income were uniformly distributed across the
population. The Lorenz curve, however, drops below the 45-degree
line, showing that poorer people receive less than rich people.
The Gini coefficient is the share of the triangle under the 45-degree
line (bounded by the 45-degree line, the indicator line from the top
of the 45-degree line to the percentage-of-income axis, and the
axis itself) that is accounted for by the area between the Lorenz
curve and the 45-degree line. If the Gini coefficient is near zero,
income is close to uniformly distributed; if it is near 1, then income
is maldistributed.
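A Gini coefficient can be computed directly from income data by measuring the area between the Lorenz curve and the 45-degree line; the sketch below uses a hypothetical five-person income distribution.

# Gini coefficient from the Lorenz curve: the area between the
# 45-degree line and the Lorenz curve relative to the whole triangle.

import numpy as np

incomes = np.sort(np.array([12.0, 18.0, 25.0, 40.0, 105.0]))   # poorest first

pop = np.arange(len(incomes) + 1) / len(incomes)                # cumulative population share
lorenz = np.insert(np.cumsum(incomes), 0, 0.0) / incomes.sum()  # cumulative income share

# Area under the Lorenz curve by the trapezoid rule; Gini = 1 - 2*area.
area = sum((lorenz[i] + lorenz[i + 1]) / 2 * (pop[i + 1] - pop[i])
           for i in range(len(pop) - 1))
gini = 1 - 2 * area
print(f"Gini coefficient: {gini:.3f}")   # 0 = uniform; near 1 = maldistributed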
b. Productivity is also the subject of specific policies. The Investment Tax
Credit, WIN program, and various state and federal training programs are
focused on increasing productivity.
c. Trade barriers have been reduced through NAFTA and GATT in hopes of
fostering more U.S. exports.
5. Wage-Price Policies
a. Attempts have been made to directly control inflation through price
controls; this seemed to work reasonably well during World War II. Carter
tried voluntary guidelines that failed, and Nixon tried controls that simply
were a disaster.
6. Supply Side Economics of the Reagan Administration was based on the theory
that stimulating the economy would prevent deficits as government spending for
the military was increased. This failed theory was based on something called the
Laffer Curve.
a. Laffer Curve (named for Arthur Laffer) is a relation between tax rates and
tax receipts. Laffer's idea was rather simple: he posited that there was an
optimal tax rate, above which receipts went down and below which receipts
were also lower. The Laffer curve is shown below:
[Figure: the Laffer curve - tax receipts rise with the tax rate up to an optimum, then fall]
The Laffer Curve shows that the same tax receipts will be collected at the
rates labeled both "too high" and "too low." What the supply-siders
thought was that tax rates were too high and that a reduction in tax rates
would permit them to slide down and to the right on the Laffer Curve and
collect more revenue. In other words, they thought the tax rate was above
the optimal. We got a big tax rate reduction and found, unfortunately, that
we were below the optimal and tax revenues fell, while we dramatically
increased the budget. In other words, record-breaking deficits and debt.
b. There were other tenets of the supply-side view of the world. These
economists thought there was too much government regulation. After
Jimmy Carter de-regulated trucking and airlines, there was much rhetoric
and little action to de-regulate other aspects of American economic life.
7. Unfortunately, the reality of American economic policy is that politics is often the
main motivation for policy.
a. Tax cuts are popular, tax increases are not.
b. Deficits are a natural propensity for politicians unwilling to cut pork from
their own districts and unwilling to increase taxes.
8. Politics - economics provides a scientific approach to understanding; politics is
the "art of the possible." What is good economically may be horrible politically,
and vice versa.
a. Public choice literature
1. Politicians act in self-interest just like the rest of us
13. Data: Categories, Sources, Problems and Costs
Lecture Notes
1. Data Types
A. Time series data are metrics for a specific variable observed over time.
B. Cross-sectional data, on the other hand, are data for a specific time
period, for a specific variable, but across subjects.
C. The combination of cross-sectional data over time is called panel-data.
D. Discrete data are metrics that have specific and finite number of potential
values.
E. Continuous data, on the other hand, are metrics which can take on any
value, either from negative infinity to plus infinity or within specific limits.
F. A stock variable is generally a variable that takes a specific value at
some point in time.
G. A flow variable is generally something that has a beginning, ending, or
changing value over time.
2. Survey data is generally reported by an individual who may or may not have
some motivation to accurately report the data requested in the survey.
A. Validity has two broad dimensions.
1. The response rate from the sample of a population will define how
representative that sample is of the population.
2. The second issue involves whether the items on the survey actually
capture what the researcher intends.
a. This second issue with validity also has three distinct
dimensions; these are: (1) content validity, (2) criterion-related
validity, and (3) construct validity.
B. Reliability has to do with assuring the accuracy of responses.
1. Internal checks
2. States of nature data involve things like market prices, numbers of
persons, tons of coal, and other such metrics that can be physically
observed or measured.
3. Interviews and observational methods are also often utilized to
gather what is sometimes referred to as primary data.
3. Seasonal Adjustments - Conceptually, there are a number of economic time
series which have specific, predictable biases created by the time of year.
4. Data Sources
A. Public, Federal, State, International Organizations, and Foreign Countries
1. Commerce Department
2. Bureau of Labor Statistics
3. CIA
4. Fed
5. OECD
6. World Bank, IMF, and UN
B. Private data sources include both proprietary and non-proprietary sources.
1. Market data - S&P Dow Jones etc.
2. ACCRA
5. Information is not free; there are costs associated with gathering and analyzing
data.
A. Sutcliffe and Weber
1. Perceptual Accuracy
2. Dangers in Anecdotal evidence
3. Training, perception, experience, and humility
Sutcliffe and Weber argue that the accuracy of data is an expensive proposition
and that too often managers have neither the expertise nor the time to fully
analyze what imperfect information they can gather. What Sutcliffe and Weber
contend is that intuition is perhaps a better modus operandi -- what they call a
humble optimism, based on what knowledge can be gleaned without becoming
a victim of the old cliché "paralysis by analysis."
[Figure: two panels from Sutcliffe and Weber - the magnitude of organizational change has an inverted-U relation to perceptual accuracy, while organizational performance declines as perceptual accuracy rises]
Sutcliffe and Weber's studies suggest that organizational change has an inverted-U-
shaped relation to perceptual accuracy. In other words, over the initial range there
are increasing returns to perceptual accuracy for positive organizational change,
up to some optimal point. Beyond that optimum, additional accuracy actually
results in negative returns. On the other hand, organizational performance is
negatively correlated with perceptual accuracy. The more resources (time, etc.)
are devoted to the perceptual accuracy of analysis or the underlying data, the
more likely performance will decline.
6. Professor James Brian Quinn has observed that the move into "high-tech"
(science-based) business has forced many firms to examine their strategies and
how they deal with information. In fact, Dr. Quinn believes that these science-
based industries have been forced into a situation where knowledge sharing is
of critical importance to the success of industries and of firms within those
industries. The following quotation states his point:
14. Economic Indicators
Lecture Notes
1. Cyclical Indicators: Background
A. Arthur F. Burns and Wesley C. Mitchell at the National Bureau of Economic
Research began to examine economic time series data in the 1930s to
determine whether there were data which were useful in predicting business
cycles.
B. These indicators are described in the Bureau of Economic Analysis' Handbook
of Cyclical Indicators. The indicators were selected using six criteria, which are
described in John J. McAuley's Economic Forecasting for Business: Concepts
and Applications, to wit:
Cyclical indicators are classified on the basis of six criteria: (1) the economic
significance of the indicator; (2) the statistical adequacy of the data; (3) the timing of
the series – whether it leads, coincides with, or lags turning points in overall economic
activity; (4) conformity of the series to overall business cycles; a series conforms
positively if it rises in expansions and declines in downturns; it conforms inversely if it
declines in expansions and rises in downturns: a high conformity score indicates the
series consistently followed the same pattern; (5) the smoothness of the series’
movement over time; and (6) timeliness of the frequency of release – monthly or
quarterly – and the lag in release following the activity measured.
C. Indicators are considered, in general, to be useful, simple guides to the
variations in the business cycle in the U.S.
2. Leading Indicators.
A. Leading indicators are those variables which precede changes in the
business cycle.
______________________________________________________________________
Table 1: Leading Economic Indicator Index

Variable                                                                  Weight in Index
1. Average workweek of production workers, manufacturing                        1.014
2. Average weekly initial claims, State Unemployment Insurance (inverted)       1.041
3. Vendor performance (% change, first difference)                              1.081
4. Change in credit outstanding                                                  .959
5. Percent change in sensitive prices, smoothed                                  .892
6. Contracts and orders, plant and equipment (in dollars)                        .946
7. Index of net business formation                                               .973
8. Index of stock prices (S&P 500)                                              1.149
9. M-2 Money Supply                                                              .932
10. New orders, consumer goods and materials (in dollars)                        .973
11. Building permits, private housing                                           1.054
12. Change in inventories on hand and on order (in dollars, smoothed)            .985

Source: Business Conditions Digest, 1983
______________________________________________________________________
B. Efficient Market Hypothesis – Eugene Fama
3. Coincident Indicators
A. These indicators are the ones which occur as we are in a particular
phase of the business cycle; in other words, they happen concurrently with
the business cycle. Statistically their variations follow the leading indicators
and occur prior to the lagging indicators.
______________________________________________________________________
Table 2: Index of Coincident Indicators

Variable                                          Weight
1. Employees on nonagricultural payrolls           1.064
2. Index of industrial production, total           1.028
3. Personal income, less transfer payments         1.003
4. Manufacturing and trade sales                    .905

Source: Business Conditions Digest, 1983.
______________________________________________________________________
4. Lagging Indicators
A. Lagging indicators are those which trail the phase of the business cycle
that the economy just experienced.
______________________________________________________________________
Table 3: Lagging Indicators

Variable                                                                   Weight
1. Average duration of unemployment                                         1.098
2. Labor cost per unit of output, manufacturing                              .868
3. Ratio, constant dollar inventories to sales, manufacturing and trade      .894
4. Commercial and industrial loans outstanding                              1.009
5. Average prime rate charged by banks                                      1.123
6. Ratio, consumer installment debt to personal income                      1.009

Source: Business Conditions Digest, 1983.
______________________________________________________________________
5. Other Indicators
A. The University of Michigan publishes a “Consumer Confidence” index and the
Conference Board also does something similar for producers.
B. Real Estate firms have a lobby and research group that does surveys of new
home construction and sales of existing housing, which play on essentially
the same sort of economic activities captured by the "Building permits, private
housing" variable found in the Index of Leading Indicators.
C. The Federal Reserve is not to be outdone in this forecasting business. The
Philadelphia Federal Reserve Bank does an extensive survey of business
conditions and plans in the Philadelphia Federal Reserve District and
publishes the results monthly. This report is commonly referred to as "The
Beige Book," and it is eagerly anticipated as providing insights into what to
expect in the near future in the eastern part of the United States.
D. There are also numerous proprietary forecasts and indices constructed for
every purpose imaginable. Stock price data have been subjected to nearly
every torture imaginable in an attempt to gain some sort of trading advantage.
Bollinger Bands, Dow Theory, and even Sun Spots have been used in an
attempt to forecast what will happen in the immediate short run with financial
markets.
E. Investment Services
1. Zacks
2. Standard and Poors
3. Morningstar
4. Others
15. Guide to Analytical and Forecasting Techniques
Lecture Notes
1. Time Series Methods
These are: (1) trend line, (2) moving averages, (3) exponential smoothing, (4)
decomposition, and (5) Autoregressive Integrated Moving Average (ARIMA).
Each of these methods is readily accessible and easily mastered (with the
possible exception of certain variants of ARIMA). The first four of these methods
involve little more than junior high school arithmetic and can be employed for a
variety of useful things.
A. Trend Line
1. This method is also commonly called a naive method, in that it simply
fits a representative line to the observed data -- the "black thread" method.
[Figure: a scatter of data points with a representative trend line fitted by eye]
B. Moving Averages
1. If we have three data points - 6, 7, and 8 - the moving average is
calculated as:
MA = (6 + 7 + 8) / 3 = 7
2. The moving average of this series is 7. However, it is clear that if the
series continues, the next number will be 9. Therefore, a weighted
moving average may give a more satisfactory result. If we use a
method that weights the last observation in the series more than the first
observation, we will get closer to the next number in the series, which we
presume is 9. Therefore, the following weighted average is used:
MA = ((.5)(6) + 7 + (1.5)(8)) / 3 = 7.333
3. This weighting scheme still undershoots the forecast. Therefore we try
a scheme in which the last number is weighted still more heavily:
MA = ((.5)(6) + 7 + (2)(8) + 1) / 3 = 9
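The simple and weighted moving averages above translate directly into Python:

# The simple and weighted moving averages from the example above.

def moving_average(series):
    return sum(series) / len(series)

def weighted_moving_average(series, weights):
    # Weight later observations more heavily than earlier ones.
    return sum(w * x for w, x in zip(weights, series)) / len(series)

data = [6, 7, 8]
print(moving_average(data))                          # 7.0
print(weighted_moving_average(data, [0.5, 1, 1.5]))  # 7.333...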
C. Exponential Smoothing
1. Exponential smoothing is an effort to take some of the guess work out,
or at least formalize the guess work. The equation for the forecast
model is:
F = (α) X2 + (1 - α) S1
2. Where alpha is the smoothing statistic, X is the last observation in the
data set, and S is the first predicted value from the data set. For
example, if we use the following series:
25, 23, 22, 21, 24
3. To obtain the initial value of S, or S1, we add 25 and 23 and divide the
sum by 2 to obtain 24. The X2 in this case is our last observation, 25.
All that is left is the selection of α. Normally, an analyst will select .3
for a series which does not have a linear trend upward or downward.
There is a systematic method of determining the appropriate value of
α: make predictions using α beginning at .1 and working up to .9,
determine which α gives the forecast with the least error over a range
of the latest available data, and use that α until the errors begin to
increase. To complete the example, let's plug and chug to get a forecast.
F = (.3)(25) + (1 - .3)(24) = 7.5 + 16.8 = 24.3
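The grid search over α described above can be automated; the sketch below seeds S1 with the mean of the first two observations, as in the worked example.

# Simple exponential smoothing with a grid search over alpha.

def one_step_forecasts(series, alpha):
    s = (series[0] + series[1]) / 2       # initial smoothed value S1
    forecasts = []
    for x in series:
        forecasts.append(s)
        s = alpha * x + (1 - alpha) * s   # F = alpha*X + (1 - alpha)*S
    return forecasts

data = [25, 23, 22, 21, 24]

best_alpha, best_sse = None, float("inf")
for i in range(1, 10):                    # alpha = .1, .2, ..., .9
    alpha = i / 10
    preds = one_step_forecasts(data, alpha)
    sse = sum((x - f) ** 2 for x, f in zip(data, preds))
    if sse < best_sse:
        best_alpha, best_sse = alpha, sse

print(f"best alpha = {best_alpha}, sum of squared errors = {best_sse:.2f}")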
D. Decomposition
1. The process begins with calculating a moving average for the data
series. For quarterly data the moving average is:
MA = (Xt-2 + Xt-1 + Xt + Xt+1) / 4
We will apply this method to the following data:

                    Year 1    Year 2    Year 3
First Quarter         10        11        10
Second Quarter        12        13        13
Third Quarter         13        14        13
Fourth Quarter        11        11        11
Applying our moving average method:
MA3 = (10+12+13+11)/4 = 11.50
MA4 = (12+13+11+11)/4 = 11.75
MA5 = (13+11+11+13)/4 = 12.00
MA6 = (11+13+14+11)/4 = 12.25
The data are now presented as a moving average of the four quarters centered
on the quarter indicated by the number following the MA. The moving average
centered on the third quarter of year 1 is 11.50; centered on the fourth quarter it
is 11.75; and centered on the first quarter of year 2 it is 12.00.
Now we can calculate an index for seasonal adjustment, in a rather simple and
straightforward fashion. The index is calculated using the following equation:
SI t = Y t / MA t
Where SI t is the seasonal index, Y t is the actual observation for that quarter, and
MA t is the moving average centered on that quarter, from the method shown
above. A seasonally adjusted index can then be created which allows the
seasonality of the data to be removed.
Using the three years' worth of data above we can construct a seasonality index
which allows us to remove the seasonal effects from the data. The average for
each quarter is calculated:
Y1 = (10+11+10)/3 = 10.33
Y2 = (12+13+13)/3 = 12.67
Y3 = (13+14+13)/3 = 13.33
Y4 = (11+11+11)/3 = 11.00
Therefore, plugging the values for Yt and MAt into the equation above allows us
to construct an index of seasonality from these data:
SI 1 = 10.33/11.50 = .8983
SI 2 = 12.67/11.75 = 1.0783
SI 3 = 13.33/12.00 = 1.1108
SI 4 = 11.00/12.25 = .8980
Seasonally adjusting the raw data is then a simple matter of dividing the raw data
by the appropriate seasonal index.
                    Year 1                 Year 2                 Year 3
First Quarter    10/0.8983 = 11.1321    11/0.8983 = 12.2453    10/0.8983 = 11.1321
Second Quarter   12/1.0783 = 11.1286    13/1.0783 = 12.0560    13/1.0783 = 12.0560
Third Quarter    13/1.1108 = 11.7033    14/1.1108 = 12.6035    13/1.1108 = 11.7033
Fourth Quarter   11/0.8980 = 12.2494    11/0.8980 = 12.2494    11/0.8980 = 12.2494
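The whole decomposition - moving averages, quarterly averages, seasonal indices, and deseasonalized data - can be scripted; note that the unrounded computer results differ slightly from the hand-rounded figures above.

# Full decomposition from the example: four-quarter moving averages,
# quarterly averages, seasonal indices, and deseasonalized data.

data = [10, 12, 13, 11,    # year 1
        11, 13, 14, 11,    # year 2
        10, 13, 13, 11]    # year 3

# Four-quarter moving averages: MA3 = (X1+X2+X3+X4)/4, and so on.
ma = [sum(data[i:i + 4]) / 4 for i in range(len(data) - 3)]
print(ma[:4])    # [11.5, 11.75, 12.0, 12.25]

# Average each quarter across the three years.
quarter_avg = [sum(data[q::4]) / 3 for q in range(4)]

# Seasonal index: quarter average over the MA centered on that quarter.
si = [quarter_avg[q] / ma[q] for q in range(4)]

# Deseasonalize by dividing each observation by its quarter's index.
adjusted = [round(x / si[i % 4], 4) for i, x in enumerate(data)]
print(adjusted)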
E. ARIMA – Autoregressive Integrated Moving Average.
1. This method uses past values of a time series to estimate future values
of the same time series. According to John McAuley, the Nobel Prize-
winning economist Clive Granger is alleged to have noted that ARIMA
"has been said to be so difficult that it should never be tried for the first
time." However, it is a useful tool in many applications, and therefore at
least something more than a passing mention is required here.
There are basically three parts of an ARIMA model; as described in
McAuley's text, these three components are: [1]

[1] John J. McAuley, Economic Forecasting for Business: Concepts and Applications,
Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1985.
"(1) The series can be described by an autoregressive (AR) component,
such that the most recent value (Xt) can be estimated by a weighted
average of past values in the series going back p periods:
Xt = Θ1(Xt-1) + . . . + Θp(Xt-p) + δ + ε
where Θ1 . . . Θp = the weights of the lagged values,
δ = a constant term related to the series mean, and
ε = the stochastic term
(2) The series can have a moving average (MA) component based on a q
period moving average of the stochastic errors:
Xt = μ + εt - Θ1εt-1 - . . . - Θqεt-q
where μ = the mean of the series,
Θ1 . . . Θq = the weights of the lagged error terms, and
ε = the stochastic errors.
(3) Most economic series are not stationary (though they can be transformed
through differencing into a stationary process), but instead display a trend or
cyclical movements over time. Most series, however, can be made
stationary by differencing. For instance, the first difference is the
period-to-period change in the series:
ΔXt = Xt - Xt-1"

1 John J. McAuley, Economic Forecasting for Business: Concepts and Applications,
Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1985.
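As a minimal illustration of the differencing step, the Python sketch below
(with hypothetical data) takes first and second differences of a trending
series; in practice, packaged routines such as the ARIMA estimator in Python's
statsmodels library fit the AR, I, and MA components jointly.

# First differencing: ΔX(t) = X(t) - X(t-1). Hypothetical trending series.
series = [100, 104, 109, 115, 122]

first_diff = [series[t] - series[t - 1] for t in range(1, len(series))]
print(first_diff)   # [4, 5, 6, 7] -- still trending, so difference again

second_diff = [first_diff[t] - first_diff[t - 1] for t in range(1, len(first_diff))]
print(second_diff)  # [1, 1, 1] -- now constant, i.e., stationary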
2. Correlative Methods
A. Correlative methods rely on the concept of Ordinary Least Squares, or
O.L.S., which is also sometimes referred to as regression analysis. O.L.S.
refers to the fact that the representative function of the data is determined
by minimizing the sum of the squared errors of each of the data points
from that line.
B. Graphical depiction of Ordinary Least Squares:
[Figure: scatter of data points about the fitted regression line, with axes X and Y]
1. The lines from each data point to the representative line are the error
terms. Squaring the error terms eliminates the signs, and the
regression analysis minimizes these squared errors in order to
determine the most representative line for these data points.
C. The general form of the representative equation for a bi-variate regression
is:
Y = α + βX + ε
where Y is the dependent variable, and X is the independent
variable; α is the intercept term, β is the slope coefficient, and ε is the
random error term.
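A bi-variate OLS fit can be computed directly from the closed-form formulas;
the Python sketch below uses hypothetical data, and the variable names are ours.

import numpy as np

# Hypothetical observations of the independent (X) and dependent (Y) variables.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# OLS estimates: beta = Σ(X - X̄)(Y - Ȳ) / Σ(X - X̄)², alpha = Ȳ - beta·X̄.
beta = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
alpha = Y.mean() - beta * X.mean()

# The residuals are the ε terms: the vertical distances from the fitted line.
residuals = Y - (alpha + beta * X)
print(alpha, beta)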
D. Correlation
1. Determining what a representative line is requires some further
analysis. The model is said to be a good fit if the R squared is
relatively high. The R squared can vary from 0 (no correlation) to 1
(perfect correlation). Typically an F statistic is generated by the
regression program for the R squared, which can be checked against a
table of critical values to determine whether the R squared is
significantly different from zero. If the F statistic is significant, then
the R squared is significantly greater than zero.
The simple r is calculated as:
r = Σ (X - X̄)(Y - Ȳ) / √[ Σ (X - X̄)² Σ (Y - Ȳ)² ]
In addition, the simple r is squared to obtain the R squared.
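Continuing the hypothetical OLS sketch above, the simple r and R squared
follow directly from the formula:

# Simple r: the covariance term divided by the square root of the product
# of the sums of squared deviations; R squared is just r squared.
r = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sqrt(
    np.sum((X - X.mean()) ** 2) * np.sum((Y - Y.mean()) ** 2))
r_squared = r ** 2  # lies between 0 (no correlation) and 1 (perfect correlation)
print(r, r_squared)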
E. Significance of Coefficients
1. There are also statistical tests to determine the significance of the
intercept coefficient and the slope coefficient; these are simple
t-statistics. The test of significance is whether the coefficient is
zero. If the t-statistic is significant for the number of degrees of
freedom, then the coefficient is other than zero.
In general, the analyst is looking for coefficients which are
significant and of a sign that makes theoretical sense. If the
slope coefficient for a particular variable is zero, it suggests that
there is no statistical association between that independent
variable and the dependent variable.
2. There is also a problem in which adding data to the series, or adding
another variable, will sometimes cause the sign of a coefficient
already in the model to swap. This can be a symptom of a
problem with the data called heteroskedasticity. This problem
results from the data not conforming to an underlying assumption of
ordinary least squares, namely that the errors or residuals
have a common variance. You basically have two ways in which
to deal with this problem: select a different analytical technique, or
try transforming the data. If you transform the data, log
transformations or deflating the data into some normalized series
will sometimes eliminate the problem and give unbiased, efficient
estimates.
3. With time series data there also arises a problem called serial
correlation. That is, the error terms are not random as
assumed; in other words, the error terms are correlated with one
another. There is a test for this problem, called the Durbin-Watson
(d) statistic. When dealing with time series data one should, as a
matter of routine, have the program calculate a DW (d) and check it
against a table of critical values so as to eliminate any problems of
inference associated with correlated error terms. If this problem
arises, often using the first difference of observations in the data
will eliminate the problem. However, a low DW (d) statistic is
often an indication that the regression equation may be mis-specified,
which results in further difficulties, both statistical and
conceptual.
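The Durbin-Watson (d) statistic itself is easy to compute from the residuals;
here is a minimal sketch, reusing the residuals from the hypothetical OLS
example above.

def durbin_watson(e):
    # d = Σ(e_t - e_{t-1})² / Σ e_t²; values near 2 suggest no first-order
    # serial correlation, values near 0 suggest positive serial correlation.
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(et ** 2 for et in e)
    return num / den

print(durbin_watson(residuals))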
F. Other Issues
1. Accessibility; you can run regressions in Excel spreadsheets.
2. Dummy variables.
a. The dummy variable trap - including a dummy for every
category creates perfect multicollinearity, so the equation
cannot be estimated, and other unpleasantries.
3. Instrumental variables are also sometimes used. These variables
are constructed for the purpose of substituting for something we
could not directly measure or quantify.
3. Other Methods
A. Continuous Data
1. Analysis of variance is typically used to isolate and evaluate the
sources of variation within independent treatment variables to
determine how these variables interact. An analysis of variance is
generally part of the output of any regression package and
provides additional information concerning the behavior of the
data subjected to the regression.
2. Multivariate analysis of variance provides information on the
behavior of the variables in the data set, controlling for the
effects of the other variables in the analysis.
3. Analysis of covariance goes a step further, as the name
suggests, and provides an analysis of how the multiple
variables interact statistically.
4. Factor analysis relies on correlative techniques to determine
which variables are statistically related to one another. A factor
may have several different dimensions. For example, consider men
and women: in general, men tend to be heavier, taller, and live
shorter lives. These variables would form a factor of
characteristics of human beings.
5. Cluster analysis is analogous, except that rather than a particular
theoretical requirement producing the factors, the groupings may
reflect simple tendencies in the data, such as behavioral variables.
6. Canonical correlation analysis is a method whereby factors are
identified and the importance of each variable in a factor is also
identified. These elements are calculated for one collection of
variables, which is analogous to the dependent variable in a
regression analysis, and another collection of variables, which is
analogous to the independent variables in a regression. This gives
the researcher the ability to deal with issues which may be
multi-dimensional and do not lend themselves well to a single
dependent variable. For example, strike activity in the United
States has several different dimensions, and data measuring
those dimensions. The competing theories explaining why
strikes happen and why they continue can then be simultaneously
tested against the array of strike measurements, lending to more
profound insights than if just one variable were used.
7. Probit analysis uses a categorical (discrete) dependent variable
and explains the variation in that variable with a mixture of
continuous and discrete variables. The output of these computer
programs permits inference much the same as is done with
regression analysis.
a. If there is a stochastic dependent variable, logit analysis is
an alternative to probit.
B. Discrete Data – There are several nonparametric methods devised to deal
specifically with discrete data.
1. The Mann-Whitney U test provides a method whereby two
populations’ distributions can be compared.
2. The Kruskal-Wallis test is an extension of the Mann-Whitney U
test, in that this method provides an opportunity to compare
several populations.
3. These tests can be used when the distributions of the data do not
fit the underlying assumptions necessary to apply an F test
(random samples drawn from normally distributed populations
with equal variances).
4. There are also methods of dealing with ordinally ranked data. For
example, the Wilcoxon Signed-Rank test can be utilized to
analyze paired-differences in experimental data. This is
particularly useful in educational studies or in some types of
behavioral studies where the F statistic assumptions are not
fulfilled or the two tests above do not apply (not looking at
population differences).
5. Spearman Rank Correlations are useful in determining how
closely associated individual observations may be to specific
samples or populations. For example, item analysis in multiple
choice exams can be analyzed using this procedure.
a. Students who perform well on the entire exam can have
their overall scores assessed with respect to specific
questions. In this example, if the top students all
performed well on question 1, their rank correlation would
be near 1. However, if they all performed well overall but
did relatively poorly on question 2, that correlation
would be closer to zero, say .35. If, on the other hand,
none of the students who performed well got question 2
correct, then we would observe a zero correlation for that
question. This method is useful in marketing research
and in behavioral studies, as well as to college professors
with too much time on their hands.
16. Data: In the Beginning
Lecture Notes
1. Becoming Familiar with the Data
A. Calculating descriptive statistics to determine what the data “look” like.
1. Mode - the most frequent observation
2. Median - the midpoint of the distribution
3. Mean:
X̄ = ( Σ i=1..k fi mi ) / n
4. Variance:
σ² = [ Σ i=1..k fi mi² - ( Σ i=1..k fi mi )² / n ] / (n - 1)
5. Standard Deviation:
σ = √( [ Σ i=1..k fi mi² - ( Σ i=1..k fi mi )² / n ] / (n - 1) )
where:
mi = midpoint of interval i of the data series
fi = frequency of observations in interval i
n = the number of observations, and k = the number of intervals
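A minimal Python sketch of these grouped-data formulas, using hypothetical
class midpoints and frequencies:

import math

midpoints   = [5, 15, 25, 35]   # m_i, the midpoint of each interval
frequencies = [2, 7, 8, 3]      # f_i, observations falling in each interval
n = sum(frequencies)

sum_fm  = sum(f * m for f, m in zip(frequencies, midpoints))       # Σ f·m
sum_fm2 = sum(f * m ** 2 for f, m in zip(frequencies, midpoints))  # Σ f·m²

mean = sum_fm / n
variance = (sum_fm2 - sum_fm ** 2 / n) / (n - 1)   # grouped-data variance
std_dev = math.sqrt(variance)
print(mean, variance, std_dev)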
B. Skewed data
1. Skewed left (three dimensional)
[Figure: left-skewed distribution with the mode, median, and mean marked along the X axis]
2. Skewed right (two dimensional)
[Figure: right-skewed distribution with the mode, median, and mean marked along the X axis]
3. Normal distribution
[Figure: symmetric distribution in which the mean, mode, and median coincide]
4. There is a statistical principle (related to the central limit theorem, and
often loosely called the law of large numbers): for purposes of most statistical
analyses, if the data are a random sample drawn from a population with a
common variance and are continuous, then a sample of more than 30
observations is assumed to approximate a normal distribution.
2. Correlated Independent Variables
A. Multicollinearity, serial correlation, and correlated errors or residuals
B. Calculation of a correlation matrix
_____________________________________________________________________
                            Unemployment   Inventories   Copper Prices   S&P
                            Rate           Mfg.          (coincident)    500
_____________________________________________________________________
Unemployment Rate             1.000
Inventories Mfg.             -0.881         1.000
Copper Prices (coincident)   -0.793         0.535          1.000
S&P 500                      -0.679         0.790          0.243         1.000
_____________________________________________________________________
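Such a matrix can be produced directly with numpy's corrcoef; the series below
are short hypothetical stand-ins, not the actual indicator data behind the
table above.

import numpy as np

# Hypothetical monthly readings for three of the indicators.
unemployment = [5.8, 5.6, 5.4, 5.7, 6.0, 5.9]
inventories  = [1.02, 1.05, 1.08, 1.04, 1.00, 1.01]
sp500        = [900, 940, 980, 950, 910, 920]

# Each row of the input is one variable; the output is a symmetric matrix
# with 1.000 on the diagonal, like the table above.
corr = np.corrcoef([unemployment, inventories, sp500])
print(np.round(corr, 3))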
C. Forecasting versus Structural Models
1. Structural equations versus forecasting models
2. Rigor and theory
17. Epilog: Management and Business Conditions Analysis
Lecture Notes
1. Information and Managerial Empowerment
A. Information should, first and foremost, be one means by which
people in an organization are empowered to contribute to the success of
the organization.
1. Empowerment in a managerial sense requires that those closest
to the events should be making the calls, within broad policy
constraints.
2. Information (its acquisition and dissemination) is a support
function.
3. Policy parameters must be set for the gathering, analysis and use
of data.
a. Open access
b. Discretion to exercise delegated authority, and information
is one tool to make sure that discretion is exercised wisely.
2. Policy and Strategy
A. The purpose of a strategic plan is to provide not only guidance for
managerial actions, hence policy, but also to provide a set of goals
by which the directed actions of the enterprise can be assessed.
B. The culture that is developed in an organization concerning how
decision-making occurs is often one of the most important
determinants of the organization’s success or failure.
C. Effective organizations empower those with line authority throughout
the organization.
3. Qualitative Issues
A. Organizational Culture
B. Regulatory Environment
C. Other issues
4. Forecasting and Planning Interrelations – Forecasting and planning cannot
be separated, as a practical matter. Strategies require a vision of the
future.
A. Flexibility to react to vision errors
B. Contingency planning
C. Measurable outcomes
D. Scientific approach to creation of plan and goals.
5. Risk and Uncertainty
A. Risk is simply the down-side of opportunity. Risk is the reason
entrepreneurs are rewarded. Risk is also the reason enterprises fail. The
trick is to take those risks you understand and can appropriately manage.
B. Stochastic elements are what breed uncertainty, which, in turn, breeds
ambiguity. Not knowing what to expect is sometimes the most
anxiety-producing situation.
6. Go forth and prosper!!!
E550
Quizzes for Intersession
2006
August 7 - August 20, 2006
E550, Business Conditions Analysis
First Quiz, August 8, Chapters 1, 2, 3
Dr. David A. Dilts
Name_____________________________________
True/False (1 point each)
___1. Macroeconomics is concerned with the aggregate performance of the entire
economic system.
___2. Inductive logic involves formulating and testing hypotheses.
___3. Positive economics is concerned with what is, and normative economics is
concerned with what should be.
___4. Correlation is the causal relation between two variables.
___5. Gross Domestic Product is the total value of all goods and services produced
within the borders of the United States.
___6. The difference between Gross Domestic Product and Net Domestic Product is
Indirect Business Taxes.
___7. The GDP is overstated because of the relatively large amount of economic
activity that occurs in the underground economy.
___8. Non-economists are no less or no more biased about economics than they are
about physics or chemistry.
___9. Assumptions are used to simplify the real world so that it may be rigorously
analyzed.
___10. Ceteris Paribus means “consumer beware.”
(Multiple Choice, each worth 2 points)
__1. People who are unemployed due to a change in technology that results in a
decline in their industry fit into which category of unemployment?
A. Frictional
B. Structural
C. Cyclical
D. Natural
___2. An increase in the price of natural gas will result in:
A. Demand-pull inflation
B. Cost-push inflation
C. Pure inflation
D. None of the above
___3. The aggregate demand curve is most likely to shift to the right (increase) when
there is a decrease in:
A. The overall price level
B. The personal income tax rates
C. The average wage received by workers
D. Consumer and business confidence in the economy
___4. An increase in the real wage will:
A. Increase aggregate demand
B. Decrease aggregate supply
C. Cause cost-push inflation
D. All of the above
___5. Which of the following are not determinants of aggregate supply?
A. Changes in input prices
B. Changes in input productivity
C. Changes in the institutional environment
D. All of the above are determinants of aggregate supply
E550, Business Conditions Analysis
Second Quiz, August 10, Chapters 4,5,6,7
Dr. David A. Dilts
True/False (1 point each)
NAME______________________________
___1. The reasons the aggregate demand curve slopes downward are the (1) real
balance effect, (2) interest rate effect, and (3) foreign purchases effect.
___2. The ratchet effect results from the aggregate supply curve shifting upwards –
which is what is often called price rigidity.
___3. Say’s Law is an accounting identity that is associated with aggregate demand
being equal to aggregate supply in macroeconomic equilibrium.
___4. Marginal propensity to consume is the amount of consumption expenditures a
person makes on average.
___5. MPC+MPS = 0
___6. The Simple multiplier is 1/MPS.
Multiple Choice (2 points each)
___1. Which of the following is not a determinant of aggregate demand?
A. Personal taxes
B. Government spending
C. Exchange rates
D. All of the above are
___2. The non-income determinants of consumption and savings are:
A. Wealth
B. Consumer debts
C. Prices
D. All of the above are
___3. A recessionary gap is:
A. The amount by which C+I+G exceeds the 45 degree line at or above the full
employment level of output
B. The amount by which C+I+G is below the 45 degree line at or above the full
employment level of output
C. The amount by which the full employment level of output exceeds the current
level of output
D. None of the above
Short Answer (points as indicated)
With a marginal propensity to consume of .4 the simple multiplier effect is ____ (2
points)
Okun’s Law states that if unemployment is 8% we have lost what percent of potential
GDP____ (3 points)
Briefly explain the paradox of thrift (3 points)
______________________________________________________________________
______________________________________________________________
E550, Business Conditions Analysis
Third Quiz, August 15, Chapters 8,9,10, 11,12
Dr. David A. Dilts
NAME______________________________
Short Answer (points as indicated)
1. The relationship between unemployment and inflation is shown by what
model_________________________________________________________________
________________ (2 points)
2. An optimal tax rate and the revenues available at each rate are shown
by:___________________________________________________________________
(2 points)
3. (2 points) The way economists measure income distribution is with a: ___________
curve and a ______ coefficient.
4. To close a recessionary gap of $100, with a full employment level of GDP of $1000
and a marginal propensity to consume of .8, requires how much in:
additional government expenditures? __________ (2 points); tax cuts? ______________
(2 points)
5. (2 points) What is Gresham’s Law?
______________________________________________
True / False (1 point each)
___1. For barter to work requires a coincidence of wants.
___2. The U.S. dollar is backed by gold on deposit at Fort Knox, Kentucky.
___3. If the Fed wishes to decrease the money supply it can buy bonds.
___4. With a fractional required reserve ratio of .05, the money multiplier is 20.
___5. The Federal Funds Rate is the overnight rate of interest charged by the Fed to
loan member banks reserves.
___6. Tight monetary policy is associated with the Fed wishing to control inflation.
___7. Easy monetary policy is associated with the Fed wishing to bring the economy
out of recession.
___8. The largest proportion of the money supply is currency.
E550, Business Conditions Analysis
Fourth Quiz, August 16, Chapters 13, 14
Dr. David A. Dilts
NAME______________________________
Short Answers (points as indicated)
1. Identify the criteria on which cyclical indicators are
classified______________________________________________________________
_____________________________________________________________ (3 points)
2. (2 points) What is the largest proportion of GDP? _________________________
3. (2 points) What are NET exports ______________________ and are they positive or
negative in the national income accounts? _________
4. (3 points) List the components of the index of coincident indicators
___________________________________________________________________
True/ False (1 point each)
___1. The Laspeyres index uses a constant “market basket” whereas the Paasche
index uses changing weights to always have a current “market basket.”
___2. Seasonally adjusted unemployment differs from not seasonally adjusted in that
the entrants into the market in the summer and at Christmas time are washed out of the
data.
___3. Establishment data for unemployment counts workers twice who are employed in
two places, whereas Household data does not.
___4. Consumption expenditures are divided up among consumer durables, consumer
nondurables, and services.
___5. Index of stock prices is a leading economic indicator.
___6. Index of industrial production is a coincident economic indicator.
___7. Average duration of unemployment is a lagging economic indicator.
___8. Two-thirds of consumer spending is accounted for by “other durable goods” and
“housing services.”
___9. Producers’ durable equipment is reported in three categories: motor vehicles,
aircraft, and other producer durables.
___10. The simplest forecasting method is the use of leading indicators.
E550, Business Conditions Analysis
Fifth Quiz, August 17, Chapter 15
Dr. David A. Dilts
NAME______________________________
1. (3 points) In what three ways do economic forecasts vary? a. _________________,
b. ________________, c. ________________
2. What comprises the M3 money supply? (4 points) a.____________ b.____________
c. _____________ d. _________________
3. What are the two needs of an economic
forecaster?__________________________(2 points)
4. What are the five phases of the business cycle? (2 points)
______________________________________________________________________
______________________________________________________________________
5. Briefly outline what a regression model is (3 points)
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
True/False (1 point each)
___1. The domestic sector is C + I + G.
___2. Consumption is dependent on disposable income, and wealth is built in as a
stream of potential earnings.
___3. Investment is sensitive to interest rates.
___4. Public savings is the amount the government spends minus the aggregate
budget surpluses.
___5. A rise in interest rates abroad will reduce the value of the domestic currency at
each domestic interest rate.
___6. A decrease in the domestic interest rate will cause an increase in the flow of
savings in the loanable funds market domestically.
E550, Business Conditions Analysis
Sixth Quiz, August 18, Chapters 16, 17
Dr. David Dilts
NAME______________________________
True / False (1 point each)
___1. Quinn claims that successful strategic management will be to recognize patterns
of impending change, anomalies, or promising interactions, then monitor, reinforce, and
exploit them.
___2. Perceptual accuracy seems to be negatively correlated with performance as
measured by profits according to Sutcliffe and Weber.
___3. Top managers should focus on managing ambiguity for subordinates, rather than
worrying about the accuracy of data or forecasts according to Sutcliffe and Weber.
___4. Forecasting is just one tool in managing ambiguity and uncertainty.
Short Answers (as indicated)
1. Describe briefly nine forecasting or analytical methods presented in this course. Tell
what type of data each uses, what the output will be, and whether it is accessible. (9 points)
a.
b.
c.
d.
e.
f.
g.
h.
i.
2. Briefly explain each of the following forecasting methods: (4 points)
Judgment methods
Counting methods
Time Series methods
Association or causal methods
3. Managers can improve their projections in the following three ways: (3 points)
a.
b.
c.