Lecture 12: Joint Hypothesis Tests
(Chapter 9.1–9.3, 9.5–9.6)
Copyright © 2006 Pearson Addison-Wesley. All rights reserved.
Today’s Agenda
• Review
• Joint Hypotheses (Chapter 9.1)
• F-tests (Chapter 9.2–9.3)
• Applications of F-tests (Chapter 9.5–9.6)
Review
• Perfect multicollinearity occurs when
2 or more of your explanators are jointly
perfectly correlated.
• That is, you can write one of your
explanators as a linear function of
other explanators:
X1 = a·X2 + b·X3
Review (cont.)
• OLS breaks down with perfect
multicollinearity (and standard errors blow up
with near perfect multicollinearity).
• Multicollinearity most frequently occurs when
you want to include:
– Time, age, and birth year effects
– A dummy variable for each category, plus
a constant
Review (cont.)
• Dummy variables (also called binary
variables) take on only the values 0 or 1.
• Dummy variables let you estimate separate
intercepts and slopes for different groups.
• To avoid multicollinearity while including a
constant, you need to omit the dummy
variable for one group (e.g. males or
non-Hispanic whites). You want to pick one of
the larger groups to omit.
Review (cont.)
Yi = β0 + β1·D_1i + β2·D_2i + γ1·Xi + γ2·Xi·D_1i + γ3·Xi·D_2i + εi
β0 is the intercept for the omitted category.
β0 + β1 is the intercept for the category coded by D_1.
β0 + β2 is the intercept for the category coded by D_2.
You can test whether group D_1 has the same intercept
as the omitted group by testing H0: β1 = 0.
γ1 is the slope for the omitted category.
γ1 + γ2 is the slope for the category coded by D_1.
γ1 + γ3 is the slope for the category coded by D_2.
You can test whether group D_1 has the same slope
as the omitted group by testing H0: γ2 = 0.
Review (cont.)
• You can multiply 2 variables together to
create interaction terms.
• Interaction terms let the slope of each
variable depend on the value of the
other variable.
Review (cont.)
Yi = β0 + β1·X1i + β2·X2i + β3·X1i·X2i + εi
∂Yi/∂X1i = β1 + β3·X2i
Review (cont.)
• With many of the specifications covered last
time, we encountered hypotheses that
required us to test multiple conditions
simultaneously. For example, to test:
– All categories have the same intercept (with 3 or
more categories)
– All categories have the same slope (with 3 or
more categories)
– One explanator has no effect on Y, when that
explanator has been used in an interaction term
Review (cont.)
• In economics, many processes are
non-linear. Economic theory relies
heavily on diminishing marginal returns,
decreasing returns to scale, etc.
• We want a specification that lets the 50th
unit of X have a different marginal effect
than the 1st unit of X.
Review (cont.)
• If we regress not
Yi = β0 + β1·Xi + εi
but rather
Yi = β0 + β1·Xi + β2·Xi² + εi
then the marginal benefit of a unit of X changes to:
∂Y/∂X = β1 + 2β2·X
Review (cont.)
Yi = β0 + β1·Xi + β2·Xi² + εi
∂Y/∂X = β1 + 2β2·X
• If 2 > 0, then the marginal impact of X
is increasing. If 2 = 0, then X has a
constant marginal effect. If 2 < 0, then
the marginal impact of X is decreasing.
Review (cont.)
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• If β2 > 0 and β3 < 0, then this equation
traces an inverted parabola.
• Earnings increase quickly with
experience at first, but then flatten out.
Joint Hypotheses (Chapter 9.1)
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• To test the hypothesis that experience is
not an explanator of log(earnings), you
need to test H0: β2 = 0 AND β3 = 0
• WARNING: you CANNOT simply look
individually at the t-test for β2 = 0 and
the t-test for β3 = 0
Joint Hypotheses (cont.)
• You CANNOT test a JOINT hypothesis
by combining multiple t-tests.
• Suppose you are testing
H0: β1 = 0 AND β2 = 0
• A t-test rejects β1 = 0 if the data would
be very surprising to see, given that
β1 = 0. A t-test does NOT reject β1 = 0 if
the data would only be pretty surprising.
Joint Hypotheses (cont.)
• Each t-test could fail to reject the null if the
data would only be “pretty surprising” under
each null, taken one at a time.
• However, it might be “very surprising” to see
two “pretty surprising” events.
• We do not know the “size” of a joint test
conducted by stacking together many t-tests.
Joint Hypotheses (cont.)
• Another problem with t-tests: suppose
X1 and X2 are heavily correlated with
each other (though not so much as to
create perfect multicollinearity). Then
each coefficient will have a large
standard error.
Joint Hypotheses (cont.)
• Another problem with t-tests: suppose
X1 and X2 are heavily correlated with
each other.
• If you remove either variable—leaving
in the other—then you lose very little
explanatory power. The other variable
simply picks up the slack (through the
omitted variables bias formula).
Joint Hypotheses (cont.)
• However, to test the null hypothesis that
neither variable has explanatory power, we
want to consider removing both variables at
the same time. The two of them together may
share a lot of explanatory power, even if
either one could do the job nearly as well as
both together.
• We need a new type of test that lets us
consider multiple hypotheses at once.
Joint Hypotheses (cont.)
• Simply including more than one coefficient in
the hypothesis does NOT make a joint
hypothesis. For example, suppose you
believed that X1 and X2 had identical effects.
You could test this claim with:
H0: β1 = β2
• This test is a single hypothesis, and can be
tested using a t-test. The calculation requires
you to know the covariance of the two
coefficients. See Chapter 7.5.
Joint Hypotheses (cont.)
• A joint hypothesis tests more than one
condition simultaneously. The easiest way to
see how many conditions are being tested is
to count the number of equal signs.
• E.g. H0: β1 = 0 AND β2 = 0 has two equal
signs, so there are two conditions being
tested. This is a joint test.
• This hypothesis is often written H0: β1 = β2 = 0
F-tests (Chapter 9.2–9.3)
• How can we test multiple conditions
simultaneously?
• Intuition: run a regression normally,
and then also run a regression where
you assume the conditions are true. See
if imposing the conditions makes a big
difference.
F-tests (cont.)
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• To test the hypothesis that experience is not
an explanator of log(earnings), you need to
test H0: β2 = 0 AND β3 = 0
• If these conditions are true, then there should
be little difference between our “unrestricted”
regression and the “restricted” version:
log(earnings)i = β0 + β1·Edi + 0·Expi + 0·Expi² + εi
             = β0 + β1·Edi + εi
F-tests (cont.)
• If the conditions we are testing are true,
then there should be little difference between
our “unrestricted” regression and the
“restricted” version.
• What do we mean by “little difference”?
• Does imposing the restrictions we wish to
test greatly affect the model’s ability to fit
the data?
• We can turn to our measure of fit, R2
F-tests (cont.)
• To measure the difference in the quality of fit
before and after we impose the restrictions
we are testing, we can turn to our measure of
fit, R2
R² = 1 − SSR/TSS = 1 − [Σ(Yi − β̂0 − β̂1·Xi)²] / [Σ(Yi − Ȳ)²]
• Notice that the Total Sum of Squares is the
same for both versions of the regression, so
we can focus on the Sum of Squares of
the Residuals.
F-tests (cont.)
• Does imposing the restrictions from our null
hypothesis greatly increase the SSR ?
(Remember, we want a low SSR.)
• Run both regressions and calculate the SSR.
• Call the SSR for the unrestricted version
the SSRu
• Call the SSR for the restricted (constrained)
version the SSRc
F-tests (cont.)
• Call the SSR for the unconstrained version
the SSRu
SSRu = Σ(log(earnings)i − β̂0 − β̂1·Edi − β̂2·Expi − β̂3·Expi²)²
• Call the SSR for the constrained version
the SSRc
SSRc = Σ(log(earnings)i − β̂0 − β̂1·Edi)²
• If the null hypothesis (β2 = β3 = 0) is true,
then imposing the restrictions will not change
the SSR much. We will have a “small”
SSRc − SSRu
F-tests (cont.)
• If the null hypothesis is true, then imposing
the restrictions will not change the SSR much.
We will have a “small” SSRc − SSRu
• Remember, OLS finds the smallest possible
SSR. So SSRc ≥ SSRu
• The more restrictions we impose, the larger
SSRc will get, even if the restrictions are true.
• We need to adjust for the number of
restrictions (r) we impose.
F-tests (cont.)
• To measure how large an effect our
constraints have, look at:
(SSRc − SSRu) / r
• What constitutes a large difference? We want
to compare the difference in SSR to the
original SSRu. An increase of 100 units is
more worrisome if we start from SSRu = 200
than if SSRu = 20,000.
F-tests (cont.)
• The more data we have, the more we
trust our unconstrained regression.
• Also, the more data we have, the more
seriously we want to take a deterioration
in SSR.
• To capture the effect of more data, we
weight by n-k-1.
F-tests (cont.)
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)]
F-tests (cont.)
• When the εi are distributed normally, the
F-statistic will be distributed according to the
F-distribution with r and n − k − 1 degrees of freedom.
• We know how to compute an F-statistic from
the data.
• We know the distribution of the F-statistic
under the null hypothesis.
• The F-statistic meets all the needs of a
test statistic.
F-tests (cont.)
• If our null hypothesis is true, then imposing
the hypothesized values as constraints on the
regression should not change SSR much.
Under the null, we expect a low value of F.
• If we see a large value of F, then we can build
a compelling case against the null hypothesis.
• The F-table tells you the critical values of F
for different values of r and n-k-1.
F-tests (cont.)
• Let’s return to the earnings test example, with
the polynomial specification
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• To test the hypothesis that experience is
not an explanator of log(earnings), you need
to test H0: β2 = 0 AND β3 = 0
F-tests (cont.)
H0: β2 = β3 = 0
r = 2;  n − k − 1 = 6540 − 3 − 1 = 6536
Unconstrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
SSRu = 3844
Constrained Regression:
log(earnings)i = β0 + β1·Edi + vi
SSRc = 3959
F-tests (cont.)
H0: β2 = β3 = 0
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)] = [(3959 − 3844) / 2] / [3844 / 6536] = 97.77
The 5% critical value for F(2, 6536) is 3. We can reject the
null hypothesis that experience is not an explanator
of log(earnings).
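As a sanity check, the F-statistic above can be reproduced with a few lines of Python. This is a sketch; `f_stat` is a helper name introduced here, and the SSR values are the ones reported on this slide.

```python
# F-statistic for a joint test, comparing constrained and
# unconstrained sums of squared residuals.
def f_stat(ssr_c, ssr_u, r, df):
    """F = [(SSRc - SSRu) / r] / [SSRu / df], where df = n - k - 1."""
    return ((ssr_c - ssr_u) / r) / (ssr_u / df)

# Earnings example from the slide: H0: beta2 = beta3 = 0
f = f_stat(ssr_c=3959, ssr_u=3844, r=2, df=6536)
# f is about 97.77, far above the 5% critical value of 3,
# so we reject the null.
```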
F-tests (cont.)
• Note: we must be able to impose the
restrictions as part of an OLS estimation.
We can impose only linear restrictions.
• For example, we CAN test:
3  14
1  2  3  0
1  42 and 3 - 34  5
F-tests (cont.)
• However, we CANNOT test:
1 ·2   3
1  
3
2
F-tests (cont.)
• Example:
Y = β0 + β1·X1 + β2·X2 + β3·X3 + β4·X4 + β5·X5 + ε
H0: β1 = 4β2 and β3 − 3β4 = β5
• There are two equal signs. r = 2.
• How do we impose the restrictions?
Y = β0 + (4β2)X1 + β2·X2 + β3·X3 + β4·X4 + (β3 − 3β4)X5 + ε
• How do we enter this regression into
the computer?
F-tests (cont.)
• To enter a regression into the computer, we
need to regroup so that all our explanators
receive a single coefficient apiece.
• We need to transform this expression from
one with separated explanators and linear
combinations of coefficients to one with
separated coefficients and linear combinations
of explanators.
F-tests (cont.)
Y = β0 + (4β2)X1 + β2·X2 + β3·X3 + β4·X4 + (β3 − 3β4)X5 + ε
Y = β0 + β2(4X1 + X2) + β3(X3 + X5) + β4(X4 − 3X5) + ε
• To find the constrained sum of squares,
we need to regress Y on a constant,
(4X1 + X2), (X3 + X5), and (X4 − 3X5). The
SSR from this regression is our SSRc.
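The regrouping step can be sketched in Python. The data vectors here are hypothetical; only the transformation matters.

```python
# Build the constrained regressors z1 = 4*X1 + X2, z2 = X3 + X5,
# z3 = X4 - 3*X5 (hypothetical data, just to show the transformation).
x1 = [1.0, 2.0]
x2 = [0.5, 1.0]
x3 = [3.0, 1.0]
x4 = [2.0, 4.0]
x5 = [1.0, 0.0]

z1 = [4 * a + b for a, b in zip(x1, x2)]   # carries coefficient beta2
z2 = [a + b for a, b in zip(x3, x5)]       # carries coefficient beta3
z3 = [a - 3 * b for a, b in zip(x4, x5)]   # carries coefficient beta4
# Regressing Y on a constant, z1, z2, and z3 yields SSRc.
```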
Checking Understanding
• You regress
Y = β0 + β1·X1 + β2·X2 + ε
• You want to test
H0: β1 = −β2
• What are the constrained and
unconstrained regressions? What is r ?
Checking Understanding (cont.)
Y = β0 + β1·X1 + β2·X2 + ε
H0: β1 = −β2
Unconstrained regression:
Y = β0 + β1·X1 + β2·X2 + ε
Constrained regression:
Y = β0 + (−β2)X1 + β2·X2 + ε
  = β0 + β2(X2 − X1) + ε
r = 1
F-tests
• Note: when r = 1, you have a choice
between using a t-test or an F-test.
• When r = 1, F = t². F-tests and t-tests
will give the same results.
• When r > 1, you cannot use a t-test.
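The F = t² relationship also links the critical values. A small illustration (the t-statistic here is hypothetical; 1.96 is the familiar two-sided 5% t critical value, and its square is the 5% critical value of F with 1 and many degrees of freedom):

```python
# When r = 1, the F-statistic is the square of the t-statistic,
# and the F critical value is the square of the t critical value.
t_crit = 1.96
f_crit = t_crit ** 2        # about 3.84, the 5% critical value of F(1, large df)

t_stat = -2.5               # hypothetical t-statistic
f_stat = t_stat ** 2        # 6.25: both tests reject at the 5% level
```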
F-tests (cont.)
• A frequently encountered test is the null
hypothesis that all the coefficients (except the
constant) are 0. This test asks whether the
entire model is useless. Do our explanators
do a better job at predicting Y than simply
guessing the mean?
• Many econometrics programs automatically
calculate this F-statistic when they perform
a regression.
An Application of F-tests (Chapter 9.5)
• Let’s use F-tests to re-examine the
differences in earnings equations between
black women and black men in the NLSY.
• Regress the following for black workers:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + εi
• where Edi = years of education,
Expi = years of experience, and
D_Fi = 1 if the worker is female
An Application of F-tests (cont.)
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + εi
• To test whether black males and black females have
the same intercept, we can use a simple t-test with
H0: β3 = 0
• Our estimated coefficient is -0.201 with a standard
error of 0.036, yielding a t-statistic of -5.566
• The magnitude of this t-statistic exceeds our critical value of 1.96
• We can reject the null hypothesis at the 5% level
TABLE 9.1 Earnings Equation for Black
Men and Women (NLSY Data)
An Application of F-tests (cont.)
• We have rejected the null hypothesis
that black men and black women have
the same intercept.
• Could they also have different slopes
for education and experience?
• We can use dummy variable
interaction terms.
An Application of F-tests (cont.)
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi
              + β4·Edi·D_Fi + β5·Expi·D_Fi + εi
Case: worker is male (D_Fi = 0):
log(earnings)i = β0 + β1·Edi + β2·Expi + εi
Case: worker is female (D_Fi = 1):
log(earnings)i = (β0 + β3) + (β1 + β4)Edi + (β2 + β5)Expi + εi
• To test the null hypothesis that black men and black
women have identical earnings equations, we need to
test the joint hypothesis:
H0: β3 = β4 = β5 = 0
An Application of F-tests (cont.)
H0: β3 = β4 = β5 = 0
Unconstrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi
              + β4·Edi·D_Fi + β5·Expi·D_Fi + εi
SSRu = 1002.75
Constrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + vi
SSRc = 1020.37
r = 3,  n − k − 1 = 1795
An Application of F-tests (cont.)
H0: β3 = β4 = β5 = 0
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)] = [(1020.37 − 1002.75) / 3] / [1002.75 / 1795] = 10.51
The critical value at the 5% significance level for F(3, 1795) is 2.60.
We can reject the null hypothesis at the 5% level.
An Application of F-tests (cont.)
• We can reject the null hypothesis that black
men and black women have identical
earnings functions.
• Do we really need the interaction terms, or do
we get the same explanatory power by simply
giving black women a different intercept?
• Let’s test the null hypothesis that the
interaction coefficients are both 0.
An Application of F-tests (cont.)
H0: β4 = β5 = 0
Unconstrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi
              + β4·Edi·D_Fi + β5·Expi·D_Fi + εi
SSRu = 1002.75
Constrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + vi
SSRc = 1003.08
r = 2,  n − k − 1 = 1795
An Application of F-tests (cont.)
H0: β4 = β5 = 0
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)] = [(1003.08 − 1002.75) / 2] / [1002.75 / 1795] = 0.30
The critical value at the 5% significance level for F(2, 1795) is 3.00.
We fail to reject the null hypothesis at the 5% level.
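Both NLSY F-statistics can be checked directly from the reported SSRs (a sketch, using only the values on the slides):

```python
# Checking the two NLSY F-statistics (SSRs, r, and df from the slides).
df = 1795  # n - k - 1

# Joint test H0: beta3 = beta4 = beta5 = 0
f_joint = ((1020.37 - 1002.75) / 3) / (1002.75 / df)
# about 10.51 > 2.60, so reject

# Interaction-only test H0: beta4 = beta5 = 0
f_interact = ((1003.08 - 1002.75) / 2) / (1002.75 / df)
# about 0.30 < 3.00, so fail to reject
```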
F-tests and Regime Shifts (Chapter 9.6)
• What is the relationship between Federal
budget deficits and long-term interest rates?
• We have time-series data from 1960–1994.
F-tests and Regime Shifts (cont.)
• Our dependent variable is long-term
interest rates (LongTermt)
• Our explanators are expected inflation
(Inflationt), short-term interest rates
(ShortTermt), change in real per-capita
income (DeltaInct), and the real
per-capita budget deficit (Deficitt).
F-tests and Regime Shifts (cont.)
LongTermt = β0 + β1·Inflationt + β2·ShortTermt
          + β3·DeltaInct + β4·Deficitt + εt
• Note that we index observations by t, not i.
• β4 is the change in long-term interest rates from a
$1 increase in the Federal deficit (measured in
1996 dollars).
• Financial market de-regulation began in 1982.
• Was the relationship between long-term interest rates
and Federal deficits altered by the de-regulation?
F-tests and Regime Shifts (cont.)
• We can let the coefficient on Deficitt vary
before and after 1982 by interacting it with a
dummy variable.
• Create the variable D_1982t = 1 if the
observation is for year 1983 or later
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct
          + β4·Deficitt + β5·D_1982t + β6·Deficitt·D_1982t + εt
• To test whether the slope on Deficitt changes
after 1982, conduct a t-test of the hypothesis
H0: β6 = 0
F-tests and Regime Shifts (cont.)
Dependent Variable: LongTermt
Independent Variables, with standard errors:
  Inflationt:          0.765     (0.0372)
  ShortTermt:          0.822     (0.0586)
  DeltaInct:          -6.4·10⁻⁶  (1.6·10⁻⁴)
  Deficitt:            0.002     (0.0004)
  D_1982t:            -0.739     (0.6608)
  Deficitt·D_1982t:    0.0005    (0.0007)
  Constant:            1.277     (0.2135)
F-tests and Regime Shifts (cont.)
• For the period 1960–1982, the slope on
Deficitt is 0.0022. A $1 increase in the Federal
deficit per capita increases long-term interest
rates by 0.0022 points.
• For the period 1983–1994, the slope on
Deficitt is 0.0022 + 0.0005 = 0.0027.
• The t-statistic for Deficitt·D_1982t is 0.63.
We fail to reject the null hypothesis that the
slope is the same in both periods.
F-tests and Regime Shifts (cont.)
• For the period 1960–1982, the slope on
Deficitt is 0.0022. A $1 increase in the Federal
deficit per capita increases long-term interest
rates by 0.0022 points.
• Is this change important in magnitude?
One quick, crude way to assess magnitudes
is to ask, “How many standard deviations
does Y change when I change X by 1
standard deviation?”
F-tests and Regime Shifts (cont.)
• Standard Deviation of Deficitt = 463
• Standard Deviation of LongTermt = 2.65
• A 1-standard-deviation change in Deficitt is
predicted to cause a 463·0.0022 = 1.02
percentage point change in LongTermt, or
about a third of a standard deviation
• At first glance, the effect of Federal deficits
on interest rates is non-negligible, but not
massive, either.
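The standardized-magnitude arithmetic above is easy to verify (numbers taken from the slide):

```python
# Quick magnitude check using the slide's numbers.
sd_deficit = 463        # standard deviation of Deficit_t
sd_longterm = 2.65      # standard deviation of LongTerm_t
slope = 0.0022          # pre-1982 slope on Deficit_t

effect = sd_deficit * slope      # about 1.02 percentage points
in_sds = effect / sd_longterm    # about 0.38 standard deviations of LongTerm_t
```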
F-tests and Regime Shifts (cont.)
• Let’s test a more complicated hypothesis.
• Does the entire financial regime shift
after 1982?
• Let’s let every coefficient vary between the
two time periods.
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct
          + β4·Deficitt + β5·D_1982t + β6·Inflationt·D_1982t
          + β7·ShortTermt·D_1982t + β8·DeltaInct·D_1982t
          + β9·Deficitt·D_1982t + εt
F-tests and Regime Shifts (cont.)
• Does the entire financial regime shift
after 1982?
• Test the joint hypothesis that every
interaction term is 0:
H0: β5 = β6 = β7 = β8 = β9 = 0
• We need an F-test
• There are 5 equal signs, so r = 5
F-tests and Regime Shifts (cont.)
H0: β5 = β6 = β7 = β8 = β9 = 0
Unconstrained regression:
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct
          + β4·Deficitt + β5·D_1982t + β6·Inflationt·D_1982t
          + β7·ShortTermt·D_1982t + β8·DeltaInct·D_1982t
          + β9·Deficitt·D_1982t + εt
Constrained regression:
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct
          + β4·Deficitt + vt
F-tests and Regime Shifts (cont.)
H0: β5 = β6 = β7 = β8 = β9 = 0
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)] = [(4.93 − 4.15) / 5] / [4.15 / 25] = 0.94
The critical value for an F-test with 5 and 25 degrees
of freedom is 2.60. We cannot reject the null
hypothesis that there is no regime shift.
F-tests and Regime Shifts (cont.)
• Instead of using dummy variables, we could
conduct this same test by running the same
regression on 3 separate datasets.
• For the constrained regression (there is no
regime shift), we use all the data, 1960–1994.
• For the unconstrained regression (there is a
regime shift), we run separate regressions for
1960–1982 and 1983–1994.
F-tests and Regime Shifts (cont.)
LongTermt = β0 + β1·Inflationt + β2·ShortTermt
          + β3·DeltaInct + β4·Deficitt + εt
F-tests and Regime Shifts (cont.)
• For each regression, we record the SSR
• SSRc is the SSR from the regression for
1960–1994, SSR1960–1994
• SSRu is the sum SSR1960–1982 + SSR1983–1994
SSRu = Σ over t = 1960–1982 of et²  +  Σ over t = 1983–1994 of et²;
SSRc = Σ over t = 1960–1994 of et²
• Using these SSR ’s, we can compute F
F-tests and Regime Shifts (cont.)
SSRu = SSR1960–1982 + SSR1983–1994 = 2.17 + 1.98 = 4.15
SSRc = SSR1960–1994 = 4.93
r = 5
n − k − 1 = 25
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)] = [(4.93 − 4.15) / 5] / [4.15 / 25] = 0.94
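The separate-sample version of the test can be sketched in Python (the SSR values are from the slide; variable names are illustrative):

```python
# Regime-shift F-test built from separate subsample regressions.
ssr_sub1 = 2.17      # SSR from the 1960-1982 regression
ssr_sub2 = 1.98      # SSR from the 1983-1994 regression
ssr_u = ssr_sub1 + ssr_sub2   # unconstrained SSR = 4.15
ssr_c = 4.93                  # SSR from the pooled 1960-1994 regression
r, df = 5, 25                 # restrictions and n - k - 1

f = ((ssr_c - ssr_u) / r) / (ssr_u / df)
# about 0.94 < 2.60, so we cannot reject "no regime shift"
```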
F-tests and Regime Shifts (cont.)
• See Chapter 9.7 for additional tests for
regime shifts.
Review
• How can we test multiple conditions
simultaneously?
• Intuition: run a regression normally, and
then also run a regression where you
constrain the parameters to make the
null hypothesis true. See if imposing the
conditions makes a big difference.
Review (cont.)
• Does imposing the restrictions from our null
hypothesis greatly increase the SSR ?
(Remember, we want a low SSR.)
• Run both regressions and calculate the SSR
• Call the SSR for the unrestricted version
the SSRu
• Call the SSR for the restricted (constrained)
version the SSRc
Review (cont.)
• Run both regressions and calculate the SSR
• Call the SSR for the unrestricted version
the SSRu
• Call the SSR for the restricted (constrained) version the SSRc
• If the null hypothesis is true, then imposing the
restrictions will not change the SSR much. We
will have a “small” SSRc − SSRu
Review (cont.)
F = [(SSRc − SSRu) / r] / [SSRu / (n − k − 1)]
Review (cont.)
• When the εi are distributed normally, the
F-statistic will be distributed according to the
F-distribution with r and n − k − 1 degrees of freedom
• We know how to compute an F-statistic from
the data
• We know the distribution of the F-statistic
under the null hypothesis
• The F-statistic meets all the needs of a
test statistic
Review (cont.)
• If our null hypothesis is true, then imposing
the hypothesized values as constraints on the
regression should not change SSR much.
Under the null, we expect a low value of F.
• If we see a large value of F, then we can build
a compelling case against the null hypothesis.
• The F table tells you the critical values of F for
different values of r and n-k-1.