The proof that b2 is an unbiased estimator of β2
(Ref. to Gujarati (2003), pp. 100-101)
The least squares formula (estimator) for the slope in the simple regression case:

$$b_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \sum k_i y_i = \sum k_i (Y_i - \bar{Y}) = \sum k_i Y_i - \bar{Y} \sum k_i = \sum k_i Y_i$$

where $k_i = \dfrac{x_i}{\sum x_i^2}$ and $\sum k_i = 0$.
Substitute the PRF $Y_i = \beta_1 + \beta_2 X_i + u_i$ into the $b_2$ formula:

$$b_2 = \sum k_i(\beta_1 + \beta_2 X_i + u_i) = \beta_1 \sum k_i + \beta_2 \sum k_i X_i + \sum k_i u_i = \beta_2 + \sum k_i u_i$$

Take the expectation on both sides:

$$E(b_2) = \beta_2 + \sum k_i E(u_i) = \beta_2 \qquad \text{(unbiased estimator)}$$
This uses $\sum k_i = 0$ and $\sum k_i X_i = 1$, together with the assumptions

$$E(u_i) = 0, \qquad E(u_i u_j) = 0 \ (i \neq j), \qquad \mathrm{Var}(u_i) = \sigma^2.$$
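As an illustration (not part of Gujarati's text), the following Monte Carlo sketch in Python checks the unbiasedness result numerically; the values of beta1, beta2, sigma, the sample size, and the fixed X grid are arbitrary choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, sigma, n = 2.0, 0.5, 1.0, 50   # arbitrary illustrative values
X = np.linspace(0, 10, n)                     # fixed regressor values
x = X - X.mean()                              # deviations from the mean
k = x / np.sum(x**2)                          # the k_i weights from the proof

b2_draws = []
for _ in range(20_000):
    u = rng.normal(0.0, sigma, n)             # E(u_i) = 0, Var(u_i) = sigma^2
    Y = beta1 + beta2 * X + u
    b2_draws.append(np.sum(k * Y))            # b2 = sum k_i Y_i

print(np.mean(b2_draws))   # close to beta2 = 0.5, illustrating E(b2) = beta2
```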
The proof of the variance of b2 (Ref. to Gujarati (2003), pp. 101-102)
(A parallel argument shows $E(b_1) = \beta_1$, so the intercept estimator is unbiased as well.)

$$\mathrm{Var}(b_2) = E[b_2 - E(b_2)]^2 = E[b_2 - \beta_2]^2 = E\Big[\sum k_i u_i\Big]^2$$

$$= E[k_1^2 u_1^2 + k_2^2 u_2^2 + k_3^2 u_3^2 + \cdots + 2k_1 k_2 u_1 u_2 + 2k_1 k_3 u_1 u_3 + \cdots]$$

$$= k_1^2 E(u_1^2) + k_2^2 E(u_2^2) + k_3^2 E(u_3^2) + \cdots + 2k_1 k_2 E(u_1 u_2) + 2k_1 k_3 E(u_1 u_3) + \cdots$$

$$= k_1^2 \sigma^2 + k_2^2 \sigma^2 + k_3^2 \sigma^2 + \cdots + 0 + 0 + 0 + \cdots$$

$$= \sigma^2 \sum k_i^2 = \sigma^2 \sum \left(\frac{x_i}{\sum x_i^2}\right)^2 = \frac{\sigma^2 \sum x_i^2}{\left(\sum x_i^2\right)^2} = \frac{\sigma^2}{\sum x_i^2}$$
Again this uses the assumptions

$$E(u_i) = 0, \qquad E(u_i u_j) = 0 \ (i \neq j), \qquad \mathrm{Var}(u_i) = \sigma^2.$$
The proof of the covariance of b1 and b2: cov(b1, b2)
(Ref. to Gujarati (2003), p. 102)
By definition:

$$\mathrm{Cov}(b_1, b_2) = E\{[b_1 - E(b_1)][b_2 - E(b_2)]\}$$

$$= E\{[(\bar{Y} - b_2\bar{X}) - (\bar{Y} - \beta_2\bar{X})][b_2 - \beta_2]\}$$

$$= E\{[-\bar{X}(b_2 - \beta_2)][b_2 - \beta_2]\} = -\bar{X}\, E(b_2 - \beta_2)^2 = -\bar{X}\,\frac{\sigma^2}{\sum x_i^2}$$
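The covariance formula can be checked the same way; the sketch below (illustrative values only, not from the source) compares the simulated Cov(b1, b2) with -X̄σ²/Σxᵢ².

```python
import numpy as np

rng = np.random.default_rng(2)
beta1, beta2, sigma, n = 2.0, 0.5, 1.0, 50   # illustrative values
X = np.linspace(0, 10, n)
x = X - X.mean()

b1_draws, b2_draws = [], []
for _ in range(20_000):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, n)
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)   # OLS slope
    b1 = Y.mean() - b2 * X.mean()                    # OLS intercept
    b1_draws.append(b1)
    b2_draws.append(b2)

print(np.cov(b1_draws, b2_draws)[0, 1])      # empirical Cov(b1, b2)
print(-X.mean() * sigma**2 / np.sum(x**2))   # theoretical -Xbar * sigma^2 / sum(x_i^2)
```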
The proof of the minimum variance property of OLS
(Ref. to Gujarati (2003), pp. 104-105)
The OLS estimator of β2 is

$$b_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \sum k_i Y_i$$

Now suppose another linear estimator of β2 is $b_2^* = \sum w_i Y_i$, where the weights $w_i$ need not equal $k_i$.
Since

$$b_2^* = \sum w_i Y_i = \sum w_i(\beta_1 + \beta_2 X_i + u_i) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i + \sum w_i u_i$$

taking the expectation of $b_2^*$ gives

$$E(b_2^*) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i + \sum w_i E(u_i) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i$$

since $E(u_i) = 0$. For $b_2^*$ to be unbiased, i.e. $E(b_2^*) = \beta_2$, we must have

$$\sum w_i = 0 \qquad \text{and} \qquad \sum w_i X_i = 1.$$
If $b_2^*$ is an unbiased estimator, then

$$b_2^* = \sum w_i Y_i = \sum w_i(\beta_1 + \beta_2 X_i + u_i) = \beta_2 + \sum w_i u_i$$

Therefore,

$$b_2^* - \beta_2 = \sum w_i u_i$$

and the variance of $b_2^*$ is

$$\mathrm{Var}(b_2^*) = E[b_2^* - E(b_2^*)]^2 = E[b_2^* - \beta_2]^2 = E\Big(\sum w_i u_i\Big)^2 = \sum w_i^2 E(u_i^2) = \sigma^2 \sum w_i^2$$
= 2  [(wi - ki) + ki)]2
=0
= 2 (wi - ki)2 + 2 ki2 + 2 2(wi - ki)ki
= 2 (wi - ki)2 + 2 ki2
Since ki =0
= 2 (wi - ki)2 + 2/xi2
= 2 (wi - ki)2 + Var(b2)
Therefore, only if wi = ki, then Var(b2*) = Var(b2), Hence the OLS
estimator b2 is the min. variance. If it is not min.=>OLS isn’t the best
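The Gauss-Markov argument can also be illustrated numerically. The sketch below (not from Gujarati; all parameter values are illustrative) constructs alternative weights $w_i = k_i + d_i$, where $d_i$ is a hypothetical perturbation chosen so that $\sum w_i = 0$ and $\sum w_i X_i = 1$ still hold; the resulting estimator remains unbiased but has larger variance than OLS, as the proof requires.

```python
import numpy as np

rng = np.random.default_rng(3)
beta1, beta2, sigma, n = 2.0, 0.5, 1.0, 50    # illustrative values
X = np.linspace(0, 10, n)
x = X - X.mean()
k = x / np.sum(x**2)                           # OLS weights

# Alternative unbiased weights w = k + d, with d orthogonal to 1 and x
d = rng.normal(size=n)
d -= d.mean()                                  # enforce sum d_i = 0
d -= x * np.sum(d * x) / np.sum(x**2)          # enforce sum d_i x_i = 0
w = k + 0.01 * d                               # still satisfies sum w = 0, sum w X = 1

b2_ols, b2_alt = [], []
for _ in range(20_000):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, n)
    b2_ols.append(np.sum(k * Y))
    b2_alt.append(np.sum(w * Y))

print(np.mean(b2_alt))                 # still close to 0.5: the alternative is unbiased
print(np.var(b2_ols), np.var(b2_alt))  # Var(b2*) exceeds Var(b2), as the proof shows
```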
The proof that σ̂² is an unbiased estimator of σ²
(Ref. to Gujarati (2003), pp. 102-103)

Here $e_i = \hat{u}_i$ denotes the OLS residual, $\sigma^2 = \mathrm{Var}(u_i) = E(u_i^2)$, and $\hat{\sigma}^2$ is the estimator defined below.
Since Y = 1 + 2X + u and Y = 1 + 2X + u
=> y = 2x + (u - u )
Deviation form
e = Y – b1 – b2X and 0 = Y – b1 – b2X
=> e = y – b2x
e = 2x + (u - u ) – b2x = (2 – b2)x + (u - u )
Take squares and summing on both sides:
e2 = (2 –b2)2x2+ (u - u )2 – 2(2 –b2)x(u - u)
Taking expectations on both sides:

$$E\Big(\sum e_i^2\Big) = \underbrace{\sum x_i^2\, E[(b_2 - \beta_2)^2]}_{\text{I}} + \underbrace{E\Big[\sum (u_i - \bar{u})^2\Big]}_{\text{II}} \; \underbrace{-\, 2\,E\Big[(b_2 - \beta_2)\sum x_i(u_i - \bar{u})\Big]}_{\text{III}}$$

Using the OLS assumptions $E(u_i) = 0$, $E(u_i^2) = \sigma^2$ and $E(u_i u_j) = 0$ for $i \neq j$, the three terms are

$$\text{I} = \sigma^2, \qquad \text{II} = (n-1)\sigma^2, \qquad \text{III} = -2\sigma^2$$

Substituting these three terms, I, II, and III, into the equation gives

$$E\Big(\sum e_i^2\Big) = (n-2)\sigma^2$$
If we define

$$\hat{\sigma}^2 = \frac{\sum e_i^2}{n-2}$$

then the expected value is

$$E(\hat{\sigma}^2) = \frac{E\big(\sum e_i^2\big)}{n-2} = \frac{(n-2)\sigma^2}{n-2} = \sigma^2$$
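Finally, a sketch with the same kind of arbitrary illustrative setup (not part of the source) verifies that the average of Σeᵢ²/(n-2) across replications is close to σ².

```python
import numpy as np

rng = np.random.default_rng(4)
beta1, beta2, sigma, n = 2.0, 0.5, 1.0, 50    # illustrative values
X = np.linspace(0, 10, n)
x = X - X.mean()

sigma2_hats = []
for _ in range(20_000):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, n)
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    b1 = Y.mean() - b2 * X.mean()
    e = Y - b1 - b2 * X                         # OLS residuals
    sigma2_hats.append(np.sum(e**2) / (n - 2))  # sigma_hat^2 = sum e_i^2 / (n - 2)

print(np.mean(sigma2_hats))   # close to sigma^2 = 1.0, illustrating E(sigma_hat^2) = sigma^2
```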