Econ 399 Chapter 8b


8.4 Weighted Least Squares Estimation

Before the existence of heteroskedasticity-robust statistics, one needed to know the form of heteroskedasticity

-Het was then corrected using WEIGHTED LEAST SQUARES (WLS)

-This method is still useful today: if heteroskedasticity can be correctly modeled, WLS becomes more efficient than OLS

-ie: WLS becomes BLUE

8.4 Known Heteroskedasticity

-Assume first that the form of heteroskedasticity is known and expressed as:

Var(u | X) = \sigma^2 h(X)

-Where h(X) is some function of the independent variables

-since variance must be positive, h(X)>0 for all valid combinations of X

-given a random sample, we can write:

\sigma_i^2 = Var(u_i | X_i) = \sigma^2 h_i

8.4 Known Het Example

-Assume that sanity is a function of econometrics knowledge and other factors:

crazy = \beta_0 + \beta_1 econ + otherfactors + u

-However, by studying econometrics two things happen: either one becomes more sane as one understands the world, or one becomes more crazy as one is pulled into a never-ending vortex of causal relationships. Therefore:

Var(u_i | X_i) = \sigma^2 econ_i
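A one-line check that weighting by this variance function works (a sketch of the algebra that the next slide carries out in general):

```latex
\[
\operatorname{Var}\!\left(\frac{u_i}{\sqrt{econ_i}} \,\middle|\, X_i\right)
  = \frac{1}{econ_i}\operatorname{Var}(u_i \mid X_i)
  = \frac{\sigma^2\, econ_i}{econ_i}
  = \sigma^2
\]
```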

8.4 Known Heteroskedasticity

-Since h is a function of x, we know that:

E(u_i / \sqrt{h_i} \mid X) = 0  and  Var(u_i | X) = E(u_i^2 | X) = \sigma^2 h_i

-Therefore

E[(u_i / \sqrt{h_i})^2 \mid X] = E(u_i^2 \mid X) / h_i = \sigma^2 h_i / h_i = \sigma^2

-So dividing each term in our model by \sqrt{h_i} can solve heteroskedasticity

8.4 Fixing Het – And Stay Down!

-We therefore have the modified equation:

y_i / \sqrt{h_i} = \beta_0 / \sqrt{h_i} + \beta_1 (x_{i1} / \sqrt{h_i}) + ... + \beta_k (x_{ik} / \sqrt{h_i}) + u_i / \sqrt{h_i}

-Or alternately:

y_i^* = \beta_0 x_{i0}^* + \beta_1 x_{i1}^* + ... + \beta_k x_{ik}^* + u_i^*     (8.26)

where x_{i0}^* = 1 / \sqrt{h_i}

-Note that although our estimates for \beta_j will change (and their standard errors become valid), their interpretation is the same as in the straightforward OLS model (don't try to bring h into your interpretation)
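As an illustration, here is a minimal sketch of estimating the transformed equation in Python with statsmodels. The DataFrame df, its column names, and the variance function h_i = econ_i are assumptions taken from the crazy/econ example above, not something provided with the slides.

```python
# A sketch of WLS when the form of heteroskedasticity is known.
# Assumes a pandas DataFrame `df` with columns 'crazy' and 'econ'
# (hypothetical names) and Var(u|X) = sigma^2 * econ_i.
import numpy as np
import statsmodels.api as sm

X = sm.add_constant(df[['econ']])   # regressors, including an intercept
y = df['crazy']
h = df['econ']                      # known variance function h(X)

# Option 1: divide every variable (including the constant) by sqrt(h_i)
# and run plain OLS on the starred variables, as in (8.26).
w = 1.0 / np.sqrt(h)
ols_starred = sm.OLS(y * w, X.mul(w, axis=0)).fit()

# Option 2: equivalently, let the software weight the observations;
# statsmodels' WLS expects weights proportional to 1/h_i.
wls = sm.WLS(y, X, weights=1.0 / h).fit()

print(wls.params)   # coefficients are interpreted exactly as in OLS
```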

8.4 Het Fixing – “I am the law”

-(8.26) is linear and satisfies MLR.1

-if the original sample was random, nothing changes, so MLR.2 is satisfied

-If no perfect collinearity existed before, MLR.3 is still satisfied now

-E(u_i^* | X_i^*) = 0, so MLR.4 is satisfied

-Var(u_i^* | X_i^*) = \sigma^2, so MLR.5 is satisfied

-if u_i has a normal distribution, so does u_i^*, so MLR.6 is satisfied

-Thus if the original model satisfies everything but het, the new model satisfies MLR.1 to MLR.6

8.4 Het Fix – Control the Het Pop

-These \beta_j^* estimates are different from typical OLS estimates and are examples of GENERALIZED LEAST SQUARES (GLS) ESTIMATORS

-this GLS estimation provides standard errors, t statistics and F statistics that are valid

-Since these estimates satisfy the Gauss-Markov assumptions, they are BLUE, and GLS is therefore more efficient than OLS

-Note that OLS is a special case of GLS where h_i = 1

8.4 Het Fix – Who broke it anyhow?

-Note that the R^2 obtained from this regression is useful for F statistics but is NOT useful for its typical interpretation

-this is because it measures how well X* explains y*, not how well X explains y

-when GLS estimators are used to correct for heteroskedasticity, they are called WEIGHTED LEAST SQUARES (WLS) ESTIMATORS

-most econometric programs have commands to minimize the weighted sum of squared residuals:

min \sum_i (y_i - b_0 - b_1 x_{i1} - ... - b_k x_{ik})^2 / h_i
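As a sketch of what such a command does, the weighted SSR can be minimized directly and compared with a canned WLS routine; the arrays y, X (with a leading column of ones), and h are assumed to already exist (hypothetical names).

```python
# Minimizing the weighted sum of squared residuals by brute force and
# checking it against statsmodels' WLS (weights = 1/h_i).
import numpy as np
from scipy.optimize import minimize
import statsmodels.api as sm

def weighted_ssr(b, y, X, h):
    resid = y - X @ b
    return np.sum(resid**2 / h)

b_start = np.zeros(X.shape[1])
b_direct = minimize(weighted_ssr, b_start, args=(y, X, h), method='BFGS').x

wls = sm.WLS(y, X, weights=1.0 / h).fit()

print(b_direct)     # the two answers agree up to numerical error
print(wls.params)
```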

8.4 Incorrect Correcting?

What happens if h(x) is misspecified and WLS is run (ie: if one expects x_1 to cause het but x_3 actually causes het)?

1) WLS is still unbiased and consistent (similar to OLS)

2) Standard Errors (thus t and F tests) are no longer valid

-to avoid this, one can always apply a fully robust inference for WLS (as we saw for OLS in 8.2)

-this can be tedious

8.4 Incorrect Correcting?

WLS is often criticized as being better than OLS ONLY IF the form of het is correctly chosen

-one may argue that making some correction for het is better than none at all

-there is always the option of using robust WLS estimation

-in cases of doubt, both robust WLS and robust OLS results can be reported
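-For example, a minimal sketch of reporting both sets of results, assuming y, X, and a guessed variance function h are already defined (hypothetical names); cov_type='HC1' requests heteroskedasticity-robust standard errors in statsmodels:

```python
# Robust WLS: weight by the guessed 1/h_i, but use robust standard errors
# so inference remains valid even if h(X) is misspecified.
import statsmodels.api as sm

wls_robust = sm.WLS(y, X, weights=1.0 / h).fit(cov_type='HC1')
ols_robust = sm.OLS(y, X).fit(cov_type='HC1')

# In cases of doubt, report both side by side.
print(wls_robust.summary())
print(ols_robust.summary())
```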

8.4 Averages and Het

Heteroskedasticity will always exist when AVERAGES are used

-when using averages, each observation is the sum of all individual observations divided by group size:

\bar{x}_i = \sum x_i / m_i

-Therefore in our true regression, our error term is the sum of all individual observations' error terms divided by group size:

\bar{u}_i = \sum u_i / m_i

8.4 Averages and Het

If the individual model is homoskedastic, and no correlation exists between groups, then the average equation is heteroskedastic with h_i = 1/m_i

-In this way larger groups receive more weight in the regression; this is due to the fact that

Var(\bar{u}_i) = \sigma^2 / m_i

For example, assume that we run a regression on how math knowledge impacts grades in econ classes. Bigger classes (Econ 299) would be weighted to give more information than smaller classes (Econ 349.5 – Love and Econ.)
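A sketch of that weighting, assuming a hypothetical DataFrame grouped of class-level averages with columns 'avg_grade', 'avg_math', and 'class_size' (= m_i); with h_i = 1/m_i the WLS weight 1/h_i is simply the class size:

```python
# WLS for a regression on group averages: larger groups get more weight.
import statsmodels.api as sm

X = sm.add_constant(grouped[['avg_math']])
y = grouped['avg_grade']

wls = sm.WLS(y, X, weights=grouped['class_size']).fit()  # weights = m_i
print(wls.summary())
```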

8.4 Feasible GLS

-In the previous section we assumed that we knew the form of the heteroskedasticity, h(X)

-often this is not the case and we need to use data to estimate ĥ_i

-this yields an estimator called FEASIBLE GLS (FGLS) or ESTIMATED GLS (EGLS)

-Although h(x) can be measured many ways, we assume that

Var ( u | X )

 

2 e

0

 

1 x

1

...

  k x k (8.30) h ( X )

 e

0

 

1 x

1

...

  k x k

8.4 Feasible GLS

-Note that while the BP test for Het assumed Het was linear, here we allow for non-linear Het

-although testing for linear Het is effective, correcting for Het has issues with linear models as h(X) could be negative, making Var(u|X) negative

-since delta is unknown, it must be estimated

-using (8.30),

u^2 = \sigma^2 \exp(\delta_0 + \delta_1 x_1 + ... + \delta_k x_k) \, v

-Where v, conditional on X, has a mean of unity

8.4 Feasible GLS

-If we assume v is independent of X,

\log(u^2) = \alpha_0 + \delta_1 x_1 + ... + \delta_k x_k + e

-Where e has zero mean and is independent of X

-note that the intercept changes, which is unavoidable but not drastically important

-as usual, we only have residuals, not errors, so we run the regression and obtain fitted values

\widehat{\log(\hat{u}^2)} = \hat{\delta}_0 + \hat{\delta}_1 x_1 + ... + \hat{\delta}_k x_k

-To obtain:

\hat{h}_i = \exp(\widehat{\log(\hat{u}_i^2)})

8.4 FGLS

To use FGLS to correct for Heteroskedasticity,

1) Regress y on all x’s and obtain residuals uhat

2) Create log(uhat^2)

3) Regress log(uhat^2) on all x's and obtain fitted values ghat

4) Estimate hhat=exp(ghat)

5) Run WLS using weights 1/hhat
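A minimal sketch of these five steps, assuming numpy arrays y and X (with a column of ones) are already in memory:

```python
# Feasible GLS: estimate h(X) from the data, then run WLS.
import numpy as np
import statsmodels.api as sm

# 1) Regress y on all x's and keep the residuals uhat.
ols = sm.OLS(y, X).fit()
uhat = ols.resid

# 2) Create log(uhat^2).
log_u2 = np.log(uhat**2)

# 3) Regress log(uhat^2) on all x's and keep the fitted values ghat.
ghat = sm.OLS(log_u2, X).fit().fittedvalues

# 4) Estimate hhat = exp(ghat).
hhat = np.exp(ghat)

# 5) Run WLS using weights 1/hhat.
fgls = sm.WLS(y, X, weights=1.0 / hhat).fit()
print(fgls.summary())
```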

8.4 FGLS

If we used the actual h(X), our estimator would be unbiased and BEST

-since h(X) is estimated using the same data as FGLS, it is biased and therefore not BEST

-however, FGLS is consistent and asymptotically more efficient than OLS

-therefore FGLS is a good alternative to OLS in large samples

-note that FGLS estimates are interpreted the same as OLS

-note also that heteroskedasticity-robust standard errors can always be calculated in cases of doubt

8.4 FGLS Alternative

One alternative is to estimate ghat as:

\widehat{\log(\hat{u}^2)} = \hat{\delta}_0 + \hat{\delta}_1 \hat{y} + \hat{\delta}_2 \hat{y}^2

Using fitted y values from the OLS equation

-This changes step 3 above, but the remaining steps are the same

-Note that the Park (1966) test is based on FGLS but is inferior to our previous tests due to FGLS only being consistent
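A sketch of this alternative, under the same assumptions as the earlier FGLS sketch (numpy arrays y and X with a constant column); only step 3 changes:

```python
# Alternative FGLS: regress log(uhat^2) on yhat and yhat^2 instead of on the x's.
import numpy as np
import statsmodels.api as sm

ols = sm.OLS(y, X).fit()                 # step 1
yhat = ols.fittedvalues
log_u2 = np.log(ols.resid**2)            # step 2

Z = sm.add_constant(np.column_stack([yhat, yhat**2]))
ghat = sm.OLS(log_u2, Z).fit().fittedvalues   # modified step 3

hhat = np.exp(ghat)                                  # step 4
fgls_alt = sm.WLS(y, X, weights=1.0 / hhat).fit()    # step 5
print(fgls_alt.params)
```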

8.4 F Tests and WLS

When conducting F tests using WLS,

1) First estimate the restricted and unrestricted model using OLS

2) After determining weights, use these weights on both the restricted and unrestricted model

3) Conduct F tests

Luckily most econometric programs have commands for joint tests
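A sketch of such a test computed by hand, assuming y, design matrices X_unrestricted and X_restricted, and weights w = 1/hhat already exist (hypothetical names):

```python
# F test under WLS: apply the SAME weights to both models, then form the
# usual F statistic from the weighted sums of squared residuals.
import statsmodels.api as sm

ur = sm.WLS(y, X_unrestricted, weights=w).fit()
r = sm.WLS(y, X_restricted, weights=w).fit()

q = X_unrestricted.shape[1] - X_restricted.shape[1]   # number of exclusions
F = ((r.ssr - ur.ssr) / q) / (ur.ssr / ur.df_resid)
print(F)

# Most packages automate this, e.g. ur.f_test(R) for a suitable
# restriction matrix R, or a joint Wald test on the unrestricted fit.
```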

8.4 WLS vs. OLS – Cage Match

In general, WLS and OLS estimates should always differ due to sampling error

-However, some differences are problematic:

1) If significant variables change signs

2) If significant variables drastically change magnitudes

-This usually indicates a violation of a Gauss-Markov assumption, generally the zero conditional mean assumption (MLR.4)

-this violation would cause bias

-the Hausman (1978) test exists to test for this, but “eyeballing” is generally sufficient
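A sketch of that informal comparison, assuming y, X, and WLS weights w are already defined:

```python
# Put OLS and WLS estimates side by side and look for sign flips or large
# changes in magnitude among the significant coefficients.
import pandas as pd
import statsmodels.api as sm

ols = sm.OLS(y, X).fit()
wls = sm.WLS(y, X, weights=w).fit()

comparison = pd.DataFrame({'OLS coef': ols.params, 'WLS coef': wls.params,
                           'OLS t': ols.tvalues, 'WLS t': wls.tvalues})
print(comparison)
```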

8.5 Linear Probability Model

We've already seen that the Linear Probability Model (LPM), where y is a Dummy Variable, is subject to Heteroskedasticity

-the simplest way to deal with this Het is to use OLS estimation with heteroskedasticity-robust standard errors

-since OLS estimators are generally inefficient in LPM, we can use FGLS:

8.5 LPM and FGLS

We know that:

Var(y | X) = p(X)[1 - p(X)]

p(X) = \beta_0 + \beta_1 x_1 + ... + \beta_k x_k

Where p(X) is the response probability; probability that y=1

-OLS gives us fitted values and estimates the variance using

\hat{h}_i = \hat{y}_i [1 - \hat{y}_i]

Given that we now have hhat, we can apply FGLS, except for one catch…

8.5 LPM and FGLS

If our fitted values, yhat, are outside our (0,1) range, hhat becomes negative or zero

-if this happens WLS cannot be done, as each observation i is multiplied by 1/\sqrt{\hat{h}_i}

The easiest way to fix this is to use OLS and heteroskedasticity-robust statistics

-One alternative is to modify yhat to fit in the range, for example, let yhat=0.01 if yhat is too low and yhat=0.99 if yhat is too high

-unfortunately this adjustment is arbitrary, so results can differ across estimations

8.5 LPM and FGLS

To estimate LPM using FGLS,

1) Estimate the model using OLS to obtain yhat

2) If some values of yhat are outside the unit interval (0,1), adjust those yhat values

3) Estimate the variance using: hhat_i = yhat_i[1 - yhat_i]

4) Perform WLS estimation using weights 1/hhat
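A minimal sketch of these four steps, assuming a 0/1 outcome vector y and a regressor matrix X (with a constant column); the 0.01/0.99 bounds are the arbitrary adjustment discussed earlier:

```python
# FGLS for the linear probability model.
import numpy as np
import statsmodels.api as sm

# 1) OLS to get fitted probabilities yhat.
yhat = sm.OLS(y, X).fit().fittedvalues

# 2) Adjust fitted values that fall outside the unit interval.
yhat_adj = np.clip(yhat, 0.01, 0.99)

# 3) Estimate the variance hhat_i = yhat_i * (1 - yhat_i).
hhat = yhat_adj * (1.0 - yhat_adj)

# 4) WLS with weights 1/hhat.
lpm_fgls = sm.WLS(y, X, weights=1.0 / hhat).fit()
print(lpm_fgls.summary())
```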

8. Heteroskedasticity Review

1) Heteroskedasticity does not affect consistency or unbiasedness, but does affect standard errors and all tests

2) 2 ways to test for Het are: a) Breusch-Pagan Test b) White Test

3) If the form of Het is known, WLS is superior to OLS

4) If the form of Het is unknown, FGLS can be run and is asymptotically superior to OLS

5) Failing 3 or 4, heteroskedastic-robust standard errors can be used in OLS
