Research Methods For Economists
Lecture 6: Heteroscedasticity and Autocorrelation
Causes
Detection
Remedial measures
Assumptions of a Regression Model
Yᵢ = β₁ + β₂Xᵢ + eᵢ
E[eᵢ] = 0
var[eᵢ] = σ²
cov(eᵢ, eⱼ) = 0 for all i ≠ j
E[eᵢXᵢ] = 0:  Xᵢ is exogenous, not random
Why may the variance of the errors change over time?
• Learning reduces errors: driving practice reduces driving errors and accidents, typing practice reduces typing errors, and defects in production fall as machines improve.
• Improved data collection: better formulas and good software.
• Heteroscedasticity is more common in cross-section data than in time-series data.
Homoscedasticity
[Figure: scatter of the squared residuals ê²ᵢ against Y; the spread of the points is roughly constant across all values of Y.]

Heteroscedasticity (patterns Y1-Y5 and X1-X5)
[Figures: scatters of ê²ᵢ (and of the estimated variances σ̂²ᵢ) against Y and against X; the spread widens or changes systematically with Y or X, which is the visual signature of heteroscedasticity.]
Meaning of Homoscedasticity
[Figure: regression line Yᵢ = β₁ + β₂Xᵢ + eᵢ in the X-Y plane, with error terms e₁, …, e₅ drawn at successive values of X; every error distribution has the same spread, var[eᵢ] = σ².]
The variance of the error term is the same across all values of X.
Meaning of Heteroscedasticity
[Figure: regression line Yᵢ = β₁ + β₂Xᵢ + eᵢ with error terms e₁, …, e₅ whose spreads differ: var[e₁] = σ₁², var[e₂] = σ₂², var[e₃] = σ₃², and so on.]
The variance of the error term varies with the value of X.
Nature and Causes
• The least squares assumption is that the variance of eᵢ is constant for every iᵗʰ observation: var[eᵢ] = σ², so that
  var(β̂₂) = σ̂² / Σᵢ(xᵢ − x̄)²,
  but it is possible that σᵢ² = σ²xᵢ.
• Causes: learning, growth, improved data collection, outliers, omitted variables.
Linearity, Unbiasedness and Minimum Variance Properties of an Estimator (BLUE Property)

Linearity:  β̂₂ = Σᵢ(Xᵢ − X̄)Yᵢ / Σᵢ(Xᵢ − X̄)² = Σᵢ wᵢYᵢ

Unbiasedness:  E(β̂₁) = β₁

Minimum variance:  var(β̂₂) = σ̂² / Σᵢ(Xᵢ − X̄)²

[Figure: sampling distributions f(β̂₁), f(β̂₂) and f(β̂₃); β̂₁ is centred on the true value (E(β̂₁) = β₁), while β̂₂ and β̂₃ are biased.]
Consequences of Heteroscedasticity
The OLS estimator is still unbiased, but it is no longer efficient.

Unbiasedness:
E(β̂₂) = E[Σ wᵢyᵢ] = E[Σ wᵢ(β₁ + β₂xᵢ + eᵢ)] = E[β₁Σ wᵢ + β₂Σ wᵢxᵢ + Σ wᵢeᵢ] = β₂

Variance under heteroscedasticity:
var(β̂₂) = Σᵢ(xᵢ − x̄)²σᵢ² / [Σᵢ(xᵢ − x̄)²]²
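The lecture illustrates these points later with PcGive and Shazam output; purely as an illustration, the following Python sketch (simulated data, statsmodels assumed available) checks both claims by Monte Carlo: the slope estimate stays centred on the true value, but the classical OLS standard error no longer matches the true sampling variability.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps, beta1, beta2 = 200, 2000, 1.0, 0.5
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)

slopes, reported_se = [], []
for _ in range(reps):
    e = rng.normal(0, np.sqrt(x))            # var(e_i) = sigma^2 * x_i: heteroscedastic errors
    y = beta1 + beta2 * x + e
    res = sm.OLS(y, X).fit()
    slopes.append(res.params[1])
    reported_se.append(res.bse[1])

print("mean of slope estimates  :", np.mean(slopes))       # close to 0.5, so unbiased
print("true sampling s.d.       :", np.std(slopes))
print("average OLS-reported s.e.:", np.mean(reported_se))  # differs from the line above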
Tests for Heteroscedasticity
There are several tests.

Glejser test
Model: Yᵢ = β₁ + β₂xᵢ + eᵢ
Regress the absolute residuals on various functions of X, for example:
|eᵢ| = β₁ + β₂Xᵢ + vᵢ
|eᵢ| = β₁ + β₂√Xᵢ + vᵢ
|eᵢ| = β₁ + β₂(1/Xᵢ) + vᵢ
|eᵢ| = β₁ + β₂(1/√Xᵢ) + vᵢ
|eᵢ| = β₁ + β₂Xᵢ² + vᵢ
In each case do a t-test of H₀: β₂ = 0 against Hₐ: β₂ ≠ 0. If β₂ is significant, that is evidence of heteroscedasticity.

Goldfeld-Quandt test
Model: Yᵢ = β₁ + β₂xᵢ + eᵢ   (1)
Steps:
1. Rank the observations in ascending order of one of the X variables.
2. Omit c central observations, leaving two groups of (n − c)/2 observations each.
3. Fit OLS to the first (n − c)/2 and the last (n − c)/2 observations and find the sum of squared errors from both.
4. Set the hypotheses H₀: σ₁² = σ₂² against Hₐ: σ₁² ≠ σ₂².
5. Compute λ = (ERSS₂/df₂) / (ERSS₁/df₁); it follows an F distribution.
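The lecture runs these tests in Shazam; as an illustrative alternative, statsmodels provides the same Goldfeld-Quandt procedure. The sketch below uses simulated data whose error spread grows with x (variable names are examples, not the lecture's data).

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_goldfeldquandt

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(1, 10, 120))
y = 2.0 + 0.8 * x + rng.normal(0, 0.5 * x)     # error spread grows with x
X = sm.add_constant(x)

# Sorts by the chosen column, drops a central fraction of observations,
# fits OLS to each tail and returns the F ratio of the two residual sums of squares.
F, p_value, ordering = het_goldfeldquandt(y, X, idx=1, drop=0.2)
print(f"Goldfeld-Quandt F = {F:.2f}, p = {p_value:.4f}")   # small p-value: heteroscedasticity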
Tests for Heteroscedasticity
There are a series of formal methods developed in the econometrics literature to detect heteroscedasticity in a given regression model.

Park test
Model: Yᵢ = β₁ + β₂xᵢ + eᵢ   (1)
Error variance: σᵢ² = σ²xᵢ^β e^(vᵢ)   (2)
Or, taking logs: ln σᵢ² = ln σ² + β ln xᵢ + vᵢ   (2')
Steps: run the OLS regression for (1) and obtain the residuals eᵢ. Square them and run a regression of ln eᵢ² on ln xᵢ. Do a t-test of H₀: β = 0 against Hₐ: β ≠ 0. If β is significant, that is evidence of heteroscedasticity.

Spearman's rank correlation test
rₛ = 1 − 6[Σᵢ dᵢ² / (n(n² − 1))]
Steps:
1. Run OLS of y on x.
2. Obtain the residuals e.
3. Rank the residuals (in absolute value) and y or x.
4. Find the differences dᵢ between the ranks.
5. Assuming n > 8, test H₀: ρₛ = 0 with the t-statistic
   t = rₛ√(n − 2) / √(1 − rₛ²), with (n − 2) degrees of freedom.
If t_cal > t_crit, there is heteroscedasticity.
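As a hedged illustration (the lecture itself uses Shazam and PcGive), both tests can be sketched in Python with simulated data; scipy's spearmanr and a log-log auxiliary regression stand in for the manual steps.

import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 100)
y = 1.0 + 0.5 * x + rng.normal(0, 0.4 * x)
X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

# Park test: regress ln(e_i^2) on ln(x_i) and t-test the slope.
park = sm.OLS(np.log(resid ** 2), sm.add_constant(np.log(x))).fit()
print("Park test: t =", park.tvalues[1], "p =", park.pvalues[1])

# Spearman test: rank correlation between |e_i| and x_i.
r_s, p = spearmanr(np.abs(resid), x)
t_stat = r_s * np.sqrt(len(x) - 2) / np.sqrt(1 - r_s ** 2)
print("Spearman r_s =", r_s, "t =", t_stat)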
Tests for Heteroscedasticity

Breusch-Pagan-Godfrey test
Model: Yᵢ = β₁ + β₂x₂,ᵢ + … + β_k x_k,ᵢ + eᵢ
1. Run OLS and obtain the squared residuals êᵢ².
2. Obtain the average squared residual σ̃² = Σᵢ êᵢ² / n and construct pᵢ = êᵢ² / σ̃².
3. Regress pᵢ on a set of explanatory variables: pᵢ = α₁ + α₂x₂,ᵢ + … + α_k x_k,ᵢ + vᵢ.
4. Obtain the explained sum of squares (EXSS) of this regression.
5. Compute θ = ½(EXSS) ≈ χ²ₘ₋₁, where m is the number of parameters in the auxiliary regression.
H₀: α₂ = … = α_k = 0 (no heteroscedasticity, so σ² = α₁, a constant). If the calculated χ²ₘ₋₁ is greater than the table value, there is evidence of heteroscedasticity.

White test
This is a more general test.
Model: Yᵢ = β₁ + β₂x₂,ᵢ + β₃x₃,ᵢ + eᵢ
Run OLS on this model and obtain êᵢ. Then run the auxiliary regression
êᵢ² = α₁ + α₂x₂,ᵢ + α₃x₃,ᵢ + α₄x₂,ᵢ² + α₅x₃,ᵢ² + α₆x₂,ᵢx₃,ᵢ + vᵢ
and compute the test statistic n·R² ~ χ²_df. Again, if the calculated χ²_df is greater than the table value, there is evidence of heteroscedasticity.
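Both tests are available directly in statsmodels; the following sketch (simulated regressors x2 and x3, not the lecture's data) shows the calls and the chi-square-type statistics they return.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(3)
x2, x3 = rng.uniform(1, 10, 200), rng.uniform(1, 10, 200)
y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(0, 0.3 * x2)
X = sm.add_constant(np.column_stack([x2, x3]))
resid = sm.OLS(y, X).fit().resid

lm, lm_p, f, f_p = het_breuschpagan(resid, X)     # LM statistic, asymptotically chi-square
print(f"Breusch-Pagan LM = {lm:.2f}, p = {lm_p:.4f}")

# White: auxiliary regression on levels, squares and cross-products; n*R^2 ~ chi-square
w_lm, w_lm_p, w_f, w_f_p = het_white(resid, X)
print(f"White n*R^2 = {w_lm:.2f}, p = {w_lm_p:.4f}")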
ARCH-GARCH Tests
Yₜ = β₁ + β₂Yₜ₋₁ + β₃Xₜ + β₄Xₜ₋₁ + εₜ

ARCH(p) process:
σₜ² = α₀ + α₁εₜ₋₁² + α₂εₜ₋₂² + α₃εₜ₋₃² + … + α_p εₜ₋ₚ²

GARCH(p,q) process:
σₜ² = α₀ + α₁εₜ₋₁² + α₂εₜ₋₂² + α₃εₜ₋₃² + … + α_p εₜ₋ₚ² + φ₁σₜ₋₁² + φ₂σₜ₋₂² + φ₃σₜ₋₃² + … + φ_q σₜ₋q²
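Engle's ARCH-LM test checks whether squared residuals are predictable from their own lags; a minimal sketch with statsmodels (simulated ARCH(1)-type series; in practice you would use the residuals of your own model):

import numpy as np
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(4)
eps = np.zeros(500)
for t in range(1, 500):
    sigma2_t = 0.2 + 0.6 * eps[t - 1] ** 2        # conditional variance depends on the last squared shock
    eps[t] = rng.normal(0, np.sqrt(sigma2_t))

lm, lm_p, f, f_p = het_arch(eps, nlags=4)
print(f"ARCH LM = {lm:.2f}, p = {lm_p:.4f}")      # small p-value: ARCH effects present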
Heteroscedasticity and Autocorrelation Depend on the Variance-Covariance Terms of the Error Term

In general (for five observations):

            ⎡ var(e₁)    cov(e₁e₂)  cov(e₁e₃)  cov(e₁e₄)  cov(e₁e₅) ⎤
            ⎢ cov(e₂e₁)  var(e₂)    cov(e₂e₃)  cov(e₂e₄)  cov(e₂e₅) ⎥
E[ee′]  =   ⎢ cov(e₃e₁)  cov(e₃e₂)  var(e₃)    cov(e₃e₄)  cov(e₃e₅) ⎥
            ⎢ cov(e₄e₁)  cov(e₄e₂)  cov(e₄e₃)  var(e₄)    cov(e₄e₅) ⎥
            ⎣ cov(e₅e₁)  cov(e₅e₂)  cov(e₅e₃)  cov(e₅e₄)  var(e₅)   ⎦

            ⎡ σ₁₁  σ₁₂  σ₁₃  σ₁₄  σ₁₅ ⎤
            ⎢ σ₂₁  σ₂₂  σ₂₃  σ₂₄  σ₂₅ ⎥
        =   ⎢ σ₃₁  σ₃₂  σ₃₃  σ₃₄  σ₃₅ ⎥
            ⎢ σ₄₁  σ₄₂  σ₄₃  σ₄₄  σ₄₅ ⎥
            ⎣ σ₅₁  σ₅₂  σ₅₃  σ₅₄  σ₅₅ ⎦

With heteroscedasticity but no autocorrelation the off-diagonal terms are zero and the diagonal terms differ:

            ⎡ σ₁²  0    0    0    0   ⎤
            ⎢ 0    σ₂²  0    0    0   ⎥
E[ee′]  =   ⎢ 0    0    σ₃²  0    0   ⎥
            ⎢ 0    0    0    σ₄²  0   ⎥
            ⎣ 0    0    0    0    σ₅² ⎦

Under homoscedasticity and no autocorrelation, E[ee′] = σ²I.
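A small numpy illustration of these structures (five observations, made-up variances; the AR(1) case is added here only as an example of non-zero covariances):

import numpy as np

n, sigma2 = 5, 2.0

homoscedastic = sigma2 * np.eye(n)                      # E[ee'] = sigma^2 * I
heteroscedastic = np.diag([1.0, 2.5, 0.7, 4.0, 1.8])    # different variances, zero covariances

# With autocorrelation the off-diagonal terms are non-zero, e.g. AR(1) errors with rho = 0.6:
rho = 0.6
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
autocorrelated = sigma2 * rho ** lags                   # cov(e_i, e_j) = sigma_e^2 * rho^|i-j|
print(np.round(autocorrelated, 3))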
Remedial Measures
Weighted Least Squares and GLS: when σᵢ² is known, divide the whole equation by σᵢ and apply OLS to the transformed variables:

Yᵢ/σᵢ = β₁(1/σᵢ) + β₂(X₂ᵢ/σᵢ) + β₃(X₃ᵢ/σᵢ) + β₄(X₄ᵢ/σᵢ) + … + β_k(X_kᵢ/σᵢ) + eᵢ/σᵢ

The variance of the error in this transformed model equals 1.

Another example: Yᵢ = β₁ + β₂xᵢ + eᵢ with E[eᵢ²] = σ²xᵢ². Dividing through by xᵢ:

Yᵢ/xᵢ = β₁(1/xᵢ) + β₂ + eᵢ/xᵢ,   and   E[(eᵢ/xᵢ)²] = σ²xᵢ²/xᵢ² = σ².

In matrix notation: PY = PXβ + Pe with P′P = Ω⁻¹, so Y* = X*β + e* and

β̂_GLS = (X*′X*)⁻¹X*′Y* = (X′Ω⁻¹X)⁻¹X′Ω⁻¹Y,

where Ω is the variance-covariance matrix of the errors. The transformation makes the residual variance constant across all observations. When σᵢ² is unknown, estimate it from the sample information and follow the same procedure (Gujarati is a good text for heteroscedasticity).
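A hedged Python sketch of this correction under the assumption var(eᵢ) = σ²xᵢ² (simulated data): dividing through by xᵢ is the same as weighted least squares with weights 1/xᵢ², and the diagonal-Ω GLS call gives the same coefficients.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(1, 10, 200)
y = 2.0 + 0.5 * x + x * rng.normal(0, 1.0, 200)   # e_i = x_i * u_i, so var(e_i) is proportional to x_i^2
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
wls = sm.WLS(y, X, weights=1.0 / x ** 2).fit()    # weights proportional to 1 / var(e_i)
gls = sm.GLS(y, X, sigma=x ** 2).fit()            # sigma given as the diagonal of Omega

print("OLS:", ols.params, ols.bse)
print("WLS:", wls.params, wls.bse)                # usually a smaller s.e. on the slope
print("GLS:", gls.params)                         # matches WLS here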
Detection: Graphical Method (Errors in Work-hours on Pay and Taxes)
[Figure: density of the regression residuals (r:WHRS) plotted against the N(0,1) density; the residual distribution departs markedly from the normal curve.]
WHRS = +16.86 + 0.0002153*PAY − 0.001377*TAX
(SE)    (0.298)  (0.00039)       (0.00119)

sigma                19.9025      RSS                 2681268.38
R^2                  0.00028724   F(2,6769) = 0.9724 [0.378]
log-likelihood       -29861.6     DW                  1.81
no. of observations  6772         no. of parameters   3
mean(WHRS)           16.7634      var(WHRS)           396.048

Errors are not normal:
Normality test:  Chi^2(2) = 2147.0 [0.0000]**
No presence of heteroscedasticity:
hetero test:     F(4,6764) = 1.2780 [0.2761]
hetero-X test:   F(5,6763) = 1.0353 [0.3948]
RESET test:      F(1,6768) = 0.0042512 [0.9480]
Correction of Heteroscedasticity
Heteroscedasticity-consistent standard errors

           Coefficient    SE           HACSE        HCSE         JHCSE
Constant   16.856         0.29808      0.31282      0.27802      0.28423
PAY        0.00021532     0.00039000   0.00031787   0.00032990   0.0003866
TAX        -0.0013765     0.0011895    0.0010469    0.0010719    0.0012026

           Coefficient    t-SE         t-HACSE      t-HCSE       t-JHCSE
Constant   16.856         56.550       53.884       60.629       59.306
PAY        0.00021532     0.55210      0.67739      0.65268      0.55685
TAX        -0.0013765     -1.1572      -1.3148      -1.2842      -1.1446
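These PcGive columns (HCSE, JHCSE, HACSE) correspond to White-type, jackknife-type and autocorrelation-robust covariance estimators. As an illustration only, statsmodels exposes comparable options through cov_type; the data below are simulated stand-ins for PAY, TAX and WHRS, and pairing HC3 with JHCSE is an assumption of rough equivalence, not an exact match.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
pay = rng.uniform(0, 3000, 500)
tax = 0.2 * pay + rng.normal(0, 50, 500)
whrs = 16.9 + 0.0002 * pay - 0.001 * tax + rng.normal(0, 20, 500)
X = sm.add_constant(np.column_stack([pay, tax]))

classical = sm.OLS(whrs, X).fit()                                    # ordinary SE
hc = sm.OLS(whrs, X).fit(cov_type="HC1")                             # White-type HCSE
hc3 = sm.OLS(whrs, X).fit(cov_type="HC3")                            # close in spirit to the jackknife
hac = sm.OLS(whrs, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})   # HAC (Newey-West) SE

for name, r in [("SE", classical), ("HCSE", hc), ("HC3", hc3), ("HACSE", hac)]:
    print(name, np.round(r.bse, 5))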
Regression of Pay on Work-hours and Tax Payment among British Households
PAY = +308 + 0.2091*WHRS + 2.565*TAX
(SE)  (10.6)  (0.379)       (0.0201)

R^2                  0.707293   F(2,6769) = 8178 [0.000]**
log-likelihood       -53152.3   DW                 1.97
no. of observations  6772       no. of parameters  3
mean(PAY)            809.312    var(PAY)           1.31373e+006

Errors are not normal and not homoscedastic:
Normality test:  Chi^2(2) = 3.3707e+005 [0.0000]**
hetero test:     F(4,6764) = 696.91 [0.0000]**
hetero-X test:   F(5,6763) = 564.35 [0.0000]**
RESET test:      F(1,6768) = 306.40 [0.0000]**

Sample from the BHPS.
Heteroscedasticity-Consistent Standard Errors (White-type)
Heteroscedasticity-consistent standard errors

           Coefficient   SE          HACSE       HCSE        JHCSE
Constant   307.97        10.633      58.458      58.086      64.117
WHRS       0.20913       0.37878     0.28170     0.29554     0.29859
TAX        2.5652        0.020059    0.31721     0.31539     0.34842

           Coefficient   t-SE        t-HACSE     t-HCSE      t-JHCSE
Constant   307.97        28.965      5.2681      5.3019      4.8032
WHRS       0.20913       0.55210     0.74237     0.70760     0.70039
TAX        2.5652        127.88      8.0869      8.1337      7.3624
Some Heteroscedasticity Tests in Shazam
*GLS estimation
*Weighted Least Square estimation
*Glejser-Harvey-Pagan and ARCH tests
ols y x1 x2 /
diagnos/het
*Harvey and Phillips tests
ols y x1 x2 /
diagnos/recur
*Chow and Godfrey and Quandt test
ols y x1 x2 /
diagnos/chowone=5
*Jackknife test
ols y x1 x2 /cov=b anova
diagnos/jackknife
*Ramsey reset specification test
ols y x1 x2 /cov=b anova
diagnos/reset
Autocorrelation
Causes
Consequences
Remedies
Assumptions of a Regression Model
Yᵢ = β₁ + β₂Xᵢ + eᵢ
E[eᵢ] = 0
var[eᵢ] = σ²
cov(eᵢ, eⱼ) = 0 for all i ≠ j
E[eᵢXᵢ] = 0:  Xᵢ is exogenous, not random
Linearity, Unbiasedness and Minimum Variance Properties of an Estimator (BLUE Property)

Linearity:  β̂₂ = Σᵢ(Xᵢ − X̄)Yᵢ / Σᵢ(Xᵢ − X̄)² = Σᵢ wᵢYᵢ

Unbiasedness:  E(β̂₁) = β₁

Minimum variance:  var(β̂₂) = σ̂² / Σᵢ(Xᵢ − X̄)²

[Figure: sampling distributions f(β̂₁), f(β̂₂) and f(β̂₃); β̂₁ is centred on the true value (E(β̂₁) = β₁), while β̂₂ and β̂₃ are biased.]
What is an autocorrelation?
[Figure: residuals eᵢ plotted against time, following the AR(1) process eᵢ = ρeᵢ₋₁ + vᵢ with ρ < 0; successive residuals tend to alternate around zero.]
What is an autocorrelation?
[Figure: residuals eᵢ plotted against time, following the AR(2) process eᵢ = ρ₁eᵢ₋₁ + ρ₂eᵢ₋₂ + vᵢ.]
What is autocorrelation?
Assumption behind OLS: cov(eᵢ, eⱼ) = 0 for all i ≠ j.
Autocorrelation exists when cov(eᵢ, eⱼ) ≠ 0 for some i ≠ j, for example
eᵢ = ρeᵢ₋₁ + vᵢ,   vᵢ ~ N(0, σ²),
where ρ is a correlation coefficient between −1 and 1.
Causes, Consequences and Remedial Measures for Autocorrelation
Causes: inertia, specification bias, cobweb phenomena, manipulation of data.
Consequences: the estimators are still linear and unbiased, but they are no longer the best; they are inefficient.
Remedial measures: when ρ is known, transform the model; when ρ is unknown, estimate it and then transform the model.
Variance of the Error Term with an AR(1) Process

var(eᵢ) = var(ρeᵢ₋₁ + vᵢ) = ρ²var(eᵢ₋₁) + var(vᵢ) + 2ρ·cov(eᵢ₋₁, vᵢ)

Since cov(eᵢ₋₁, vᵢ) = 0 and var(eᵢ) = var(eᵢ₋₁) = σₑ², this gives

σₑ² = ρ²σₑ² + σᵥ²   =>   σₑ² = σᵥ² / (1 − ρ²)

Specifically:
ρ = cov(eₜ, eₜ₋₁) / √(var(eₜ)·var(eₜ₋₁));   cov(eₜ, eₜ₋₁) = ρσₑ²   (9.3)
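A quick numerical check of this variance formula (an illustrative simulation, not part of the lecture):

import numpy as np

rng = np.random.default_rng(7)
rho, sigma_v = 0.7, 1.0
e = np.zeros(200_000)
for t in range(1, len(e)):
    e[t] = rho * e[t - 1] + rng.normal(0, sigma_v)   # AR(1) error process

print("simulated var(e)          :", e[1000:].var())                       # burn-in discarded
print("theory sigma_v^2/(1-rho^2):", sigma_v ** 2 / (1 - rho ** 2))        # about 1.96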
Durbin-Watson (DW) test

d̂ = Σₜ(êₜ − êₜ₋₁)² / Σₜ êₜ²
  = (Σ êₜ² − 2Σ êₜêₜ₋₁ + Σ êₜ₋₁²) / Σ êₜ²
  ≈ 2(1 − ρ̂),   since Σ êₜ₋₁² ≈ Σ êₜ².          (9.8)

d = 0 corresponds to ρ = 1; d = 2 to ρ = 0; d = 4 to ρ = −1.
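The statistic is easy to compute from the residuals; a sketch in Python (simulated positively autocorrelated errors), using both the formula above and the statsmodels helper:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(8)
x = np.arange(120, dtype=float)
e = np.zeros(120)
for t in range(1, 120):
    e[t] = 0.8 * e[t - 1] + rng.normal()              # positively autocorrelated errors
y = 1.0 + 0.05 * x + e

resid = sm.OLS(y, sm.add_constant(x)).fit().resid
d_manual = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print("d by formula :", d_manual)
print("durbin_watson:", durbin_watson(resid))          # well below 2: positive autocorrelation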
Durbin-Watson (DW) test

d̂ = Σₜ(êₜ − êₜ₋₁)² / Σₜ êₜ² = (Σ êₜ² − 2Σ êₜêₜ₋₁ + Σ êₜ₋₁²) / Σ êₜ² ≈ 2(1 − ρ̂)

Critical values of DW depend on the number of parameters in the model, k, and the number of observations.

[Figure: the DW scale from 0 to 4. Near d = 0 (ρ = 1): positive autocorrelation; between d_L and d_U: inconclusive region; around d = 2 (ρ = 0): no autocorrelation; between 4 − d_U and 4 − d_L: inconclusive region; near d = 4 (ρ = −1): negative autocorrelation.]
Consequence of Autocorrelation
The OLS estimator is still linear and unbiased, but its variance is larger, which tends to make t-statistics insignificant.

Proof of unbiasedness:
E(β̂₂) = E[Σ wᵢyᵢ] = E[Σ wᵢ(β₁ + β₂xᵢ + eᵢ)] = E[β₁Σ wᵢ + β₂Σ wᵢxᵢ + Σ wᵢeᵢ] = β₂

Variance:
var(β̂₂) = E[(β̂₂ − β₂)²] = E[(Σ wᵢeᵢ)²]
        = Σᵢ(xᵢ − x̄)²E(eᵢ²) / [Σᵢ(xᵢ − x̄)²]²  +  2ΣΣᵢ<ⱼ (xᵢ − x̄)(xⱼ − x̄)E(eᵢeⱼ) / [Σᵢ(xᵢ − x̄)²]²

With an AR(1) error process this becomes

var(β̂₂) = [σ̂² / Σᵢ(xᵢ − x̄)²] · [1 + 2ρ·Σxᵢxᵢ₋₁/Σxᵢ² + 2ρ²·Σxᵢxᵢ₋₂/Σxᵢ² + … + 2ρⁿ⁻¹·x₁xₙ/Σxᵢ²]
Remedial Measure when ρ is Known
Take a one-period lag of the original model, multiply it by ρ, and subtract it from the original model to obtain a transformed model:

ρYₜ₋₁ = ρβ₁ + ρβ₂xₜ₋₁ + ρeₜ₋₁

Yₜ − ρYₜ₋₁ = β₁ − ρβ₁ + β₂xₜ − ρβ₂xₜ₋₁ + eₜ − ρeₜ₋₁

Transformed model: Yₜ* = β₁* + β₂*xₜ* + eₜ*, where
Yₜ* = Yₜ − ρYₜ₋₁;   Xₜ* = Xₜ − ρXₜ₋₁;   eₜ* = eₜ − ρeₜ₋₁;   β₁* = β₁ − ρβ₁.

OLS estimates of the unknown parameters of this transformed model will be BLUE. Retrieve β₁ of the original model from the estimate of β₁*.
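A minimal Python sketch of this transformation, assuming ρ is known (here both ρ and the data are simulated for illustration):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
rho, n = 0.7, 200
x = rng.uniform(0, 10, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal()
y = 3.0 + 1.5 * x + e

# Quasi-differenced variables: Y*_t = Y_t - rho*Y_{t-1}, X*_t = X_t - rho*X_{t-1}
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]
res = sm.OLS(y_star, sm.add_constant(x_star)).fit()

beta1_star, beta2_hat = res.params
beta1_hat = beta1_star / (1 - rho)    # retrieve the original intercept, since beta1* = beta1(1 - rho)
print("beta1 =", beta1_hat, "beta2 =", beta2_hat)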
Cochrane-Orcutt iterative procedure
This method is similar to the one above, except that it involves multiple iterations for estimating ρ̂. The steps are as follows:
1. Estimate β̂₁ and β̂₂ from the original model; obtain the residuals and use them to estimate ρ̂.
2. Transform the original model by multiplying its lag by ρ̂ and taking the quasi-difference:
   Yₜ − ρ̂Yₜ₋₁ = β₁ − ρ̂β₁ + β₂xₜ − ρ̂β₂xₜ₋₁ + eₜ − ρ̂eₜ₋₁
3. Estimate β̂₁ and β̂₂ from this transformed model and obtain its residuals.
4. From those residuals estimate ρ̂ again and use the new value to transform the original model once more.
5. Continue this iteration until ρ̂ converges (successive estimates change by a negligible amount).
The diagnos /ACF option after OLS in Shazam will generate these iterations.
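In Shazam this is the auto command with /iter; as an illustrative alternative, statsmodels' GLSAR carries out the same kind of iteration, alternating between estimating ρ from the residuals and re-estimating the betas on the transformed data (simulated data below):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n, rho = 200, 0.6
x = rng.uniform(0, 10, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal()
y = 3.0 + 1.5 * x + e

model = sm.GLSAR(y, sm.add_constant(x), rho=1)   # rho=1 requests one autoregressive lag
res = model.iterative_fit(maxiter=20)            # iterate: estimate rho, transform, re-estimate betas
print("estimated rho:", model.rho)
print("coefficients :", res.params)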
Relation between the Exchange Rate and the Interest Rate in the UK
ER = +1.555 + 0.03491*RPI
(SE)  (0.0458) (0.00469)

R^2             0.319598   F(1,118) = 55.43 [0.000]**
log-likelihood  -22.354    DW        0.102

Tests suggest that the errors are not normal, and are autocorrelated and heteroscedastic:
AR 1-5 test:    F(5,113) = 150.53 [0.0000]**
ARCH 1-4 test:  F(4,110) = 183.65 [0.0000]**
Normality test: Chi^2(2) = 15.844 [0.0004]**
hetero test:    F(2,115) = 5.8804 [0.0037]**
hetero-X test:  F(2,115) = 5.8804 [0.0037]**
RESET test:     F(1,117) = 13.176 [0.0004]**

Durbin-Watson bounds: d_L(1,150) = 1.720, d_U(1,150) = 1.746.
Evidence of positive autocorrelation (PcGive results): the DW statistic of 0.102 lies below d_L.

Transformation of the model by the first difference
Exchange Rate and the Interest Rate
DER = -0.006429 + 0.009026*DRPI
(SE)   (0.00778)  (0.00506)

R^2             0.0264972   F(1,117) = 3.185 [0.077]
log-likelihood  125.689     DW         1.62

AR 1-5 test:    F(5,112) = 1.1384 [0.3444]
ARCH 1-4 test:  F(4,109) = 0.72343 [0.5778]
Normality test: Chi^2(2) = 9.1530 [0.0103]*
hetero test:    F(2,114) = 1.6454 [0.1975]
hetero-X test:  F(2,114) = 1.6454 [0.1975]
RESET test:     F(1,116) = 0.055692 [0.8139]

The autocorrelation has gone, but the relationship has also disappeared.
Illustration of a Shazam Program for Autocorrelation-Corrected Estimation

read year y m4 p l r
1970 418032 26643 19.6 23797 6.56
dim w1 32
ols y p l /list resid=w1
diagnos/ACF
matrix w=diag(w1)
matrix ww=w*w
auto y p l /order=8 list
*maximum likelihood estimator
auto y p l /ml
*Cochrane-Orcutt iterative procedure
auto y p l /iter=20
*generalised least squares estimator
gls y p l /pmatrix=ww
stop