HEC Lausanne - Advanced Econometrics

Christophe HURLIN
Correction Final Exam. January 2014. C. Hurlin
Exercise 1: MLE, parametric tests and the trilogy (30 points)
Part I: Maximum Likelihood Estimation (MLE)
Question 1 (2 points): since the random variables X_1, ..., X_N are i.i.d. (0.5 point), the log-likelihood of the sample x_1, ..., x_N is defined as:

\ell_N(\theta; x) = \sum_{i=1}^{N} \ln f_X(x_i; \theta)    (1)

with (0.5 point)

\ln f_X(x_i; \theta) = -\ln(\theta) - \frac{1}{\theta} \ln(c) + \left( \frac{1}{\theta} - 1 \right) \ln(x_i)    (2)

So, we have (1 point):

\ell_N(\theta; x) = -N \ln(\theta) - \frac{N}{\theta} \ln(c) + \left( \frac{1}{\theta} - 1 \right) \sum_{i=1}^{N} \ln(x_i)    (3)
Question 2 (2 points): the ML estimator of \theta is defined as (0.5 point):

\hat{\theta} = \arg\max_{\theta \in \mathbb{R}^+} \ell_N(\theta; x)    (4)

The log-likelihood equation (FOC) is the following:

g_N(\hat{\theta}; x) = \left. \frac{\partial \ell_N(\theta; x)}{\partial \theta} \right|_{\hat{\theta}} = 0    (5)

with (0.5 point)

\left. \frac{\partial \ell_N(\theta; x)}{\partial \theta} \right|_{\hat{\theta}} = -\frac{N}{\hat{\theta}} + \frac{N}{\hat{\theta}^2} \ln(c) - \frac{1}{\hat{\theta}^2} \sum_{i=1}^{N} \ln(x_i) = 0    (6)

By solving this equation, we get:

\hat{\theta} = \ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(x_i)    (7)
Mid-term 2013. Advanced Econometrics. HEC Lausanne. C. Hurlin
The SOC is based on the Hessian number:

H_N(\hat{\theta}; x) = \left. \frac{\partial^2 \ell_N(\theta; x)}{\partial \theta^2} \right|_{\hat{\theta}} = \frac{N}{\hat{\theta}^2} - \frac{2N}{\hat{\theta}^3} \left( \ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(x_i) \right)    (8)

Given the FOC, \ln(c) - N^{-1} \sum_{i=1}^{N} \ln(x_i) = \hat{\theta}, so the Hessian number is equal to (0.5 point):

H_N(\hat{\theta}; x) = \frac{N}{\hat{\theta}^2} - \frac{2N \hat{\theta}}{\hat{\theta}^3}    (9)
= -\frac{N}{\hat{\theta}^2} < 0    (10)

This number is negative, so we have a maximum. The maximum likelihood estimator of the parameter \theta is defined as (0.5 point):

\hat{\theta} = \ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(X_i)    (11)
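The closed-form estimator (11) can be checked numerically. A minimal simulation sketch, not part of the exam: by inverting the cdf implied by (2), draws can be generated as X_i = c U_i^\theta with U_i uniform on (0, 1), so that E[\ln X_i] = \ln(c) - \theta.

```python
import numpy as np

# Illustrative simulation check (not exam material): X_i = c * U_i**theta with
# U_i ~ Uniform(0, 1) follows the density implied by (2), so E[ln X_i] = ln(c) - theta
# and the MLE (11) should recover theta.
rng = np.random.default_rng(0)
theta, c, N = 1.5, np.exp(4.0), 1_000_000
x = c * rng.uniform(size=N) ** theta

theta_hat = np.log(c) - np.mean(np.log(x))  # equation (11)
print(theta_hat)  # close to 1.5
```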
Question 3 (2 points): The sequence of i.i.d. (0.5 point) random variables \ln(X_1), ..., \ln(X_N) satisfies E(\ln(X_i)) = \ln(c) - \theta. Given the WLLN, we have (0.5 point):

\frac{1}{N} \sum_{i=1}^{N} \ln(X_i) \xrightarrow{p} E(\ln(X_i)) = \ln(c) - \theta    (12)

By using the CMP theorem for the function g(z) = \ln(c) - z (0.5 point), we have:

g\left( \frac{1}{N} \sum_{i=1}^{N} \ln(X_i) \right) \xrightarrow{p} g(\ln(c) - \theta)    (13)

or equivalently

\ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(X_i) \xrightarrow{p} \ln(c) - \ln(c) + \theta = \theta    (14)

As a consequence (0.5 point):

\hat{\theta} \xrightarrow{p} \theta

The estimator \hat{\theta} is (weakly) consistent.
Question 4 (2 points): Since the problem is regular (0.5 point), we have:

\sqrt{N} \left( \hat{\theta} - \theta_0 \right) \xrightarrow{d} N\left( 0, I^{-1}(\theta_0) \right)    (15)

where \theta_0 denotes the true value of the parameter and I(\theta_0) the (average) Fisher information matrix for one observation. The Fisher information matrix associated to the sample
is equal to:

I_N(\theta_0) = E\left( -H_N(\theta_0; X) \right)    (16)
= -E\left( \frac{N}{\theta_0^2} - \frac{2N}{\theta_0^3} \left( \ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(X_i) \right) \right)    (17)
= -\frac{N}{\theta_0^2} + \frac{2N}{\theta_0^3} \left( \ln(c) - \frac{1}{N} \sum_{i=1}^{N} E(\ln(X_i)) \right)    (18)
= -\frac{N}{\theta_0^2} + \frac{2N}{\theta_0^3} \left( \ln(c) - \ln(c) + \theta_0 \right)    (19)
= -\frac{N}{\theta_0^2} + \frac{2N}{\theta_0^2}    (20)
= \frac{N}{\theta_0^2}    (21)

Since the sample is i.i.d., we have

I(\theta_0) = \frac{1}{N} I_N(\theta_0) = \frac{1}{\theta_0^2}    (22)

As a consequence (1.5 point):

\sqrt{N} \left( \hat{\theta} - \theta_0 \right) \xrightarrow{d} N\left( 0, \theta_0^2 \right)    (23)

or equivalently

\hat{\theta} \overset{asy}{\sim} N\left( \theta_0, \frac{\theta_0^2}{N} \right)    (24)
Question 5 (2 points). The Fisher information matrix for one observation is equal to:

I(\theta_0) = \frac{1}{\theta_0^2}    (25)

A first natural consistent estimator is given by (0.5 point):

\hat{I}\left( \hat{\theta} \right) = \frac{1}{\hat{\theta}^2}    (26)

A second possible estimator is the BHHH estimator based on the cross-product of the gradients:

\hat{I}\left( \hat{\theta} \right) = \frac{1}{N} \sum_{i=1}^{N} g_i^2\left( x_i; \hat{\theta} \right)    (27)

with

g_i\left( x_i; \hat{\theta} \right) = -\frac{1}{\hat{\theta}} + \frac{\ln(c) - \ln(x_i)}{\hat{\theta}^2}    (28)

So, a second consistent estimator is given by (0.5 point):

\hat{I}\left( \hat{\theta} \right) = \frac{1}{N} \sum_{i=1}^{N} \left( -\frac{1}{\hat{\theta}} + \frac{\ln(c) - \ln(x_i)}{\hat{\theta}^2} \right)^2    (29)

The asymptotic variance of \hat{\theta} is equal to (0.5 point):

V_{asy}\left( \hat{\theta} \right) = \frac{1}{N} I^{-1}(\theta_0) = \frac{\theta_0^2}{N}    (30)

So, two alternative estimators of the asymptotic variance of \hat{\theta} are given by (0.5 point for each estimator):

\hat{V}_{asy}\left( \hat{\theta} \right) = \frac{\hat{\theta}^2}{N}    (31)

\hat{V}_{asy}\left( \hat{\theta} \right) = \left( \sum_{i=1}^{N} \left( -\frac{1}{\hat{\theta}} + \frac{\ln(c) - \ln(x_i)}{\hat{\theta}^2} \right)^2 \right)^{-1}    (32)
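The two variance estimators (31) and (32) can be compared on simulated data. A sketch under the same illustrative data-generating process X_i = c U_i^\theta as above (an assumption of the sketch, not exam material):

```python
import numpy as np

# Illustrative comparison of the variance estimators (31) and (32); the
# data-generating process X_i = c * U_i**theta is an assumption of the sketch.
rng = np.random.default_rng(1)
theta, c, N = 1.5, np.exp(4.0), 100_000
x = c * rng.uniform(size=N) ** theta

theta_hat = np.log(c) - np.mean(np.log(x))                       # MLE, equation (11)
v1 = theta_hat**2 / N                                            # estimator (31)
g = -1.0 / theta_hat + (np.log(c) - np.log(x)) / theta_hat**2    # gradients (28)
v2 = 1.0 / np.sum(g**2)                                          # BHHH-based estimator (32)
print(v1, v2, theta**2 / N)  # all three should be close
```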
Part II: Parametric tests
Question 6 (2 points). Given the Neyman-Pearson lemma, the rejection region of the UMP test of size \alpha is given by (0.5 point):

\frac{L_N(\theta_0; x)}{L_N(\theta_1; x)} < K    (33)

where K is a constant determined by the size \alpha, or equivalently

\ell_N(\theta_0; x) - \ell_N(\theta_1; x) < \ln(K)    (34)

It gives (0.5 point):

-N \ln(\theta_0) - \frac{N}{\theta_0} \ln(c) + \left( \frac{1}{\theta_0} - 1 \right) \sum_{i=1}^{N} \ln(x_i) + N \ln(\theta_1) + \frac{N}{\theta_1} \ln(c) - \left( \frac{1}{\theta_1} - 1 \right) \sum_{i=1}^{N} \ln(x_i) < \ln(K)    (35)

or equivalently

\frac{\theta_1 - \theta_0}{\theta_0 \theta_1} \sum_{i=1}^{N} \ln(x_i) < K_2    (36)

where K_2 = \ln(K) + N \left( \ln(\theta_0) - \ln(\theta_1) \right) + N \ln(c) \frac{\theta_1 - \theta_0}{\theta_0 \theta_1} is a constant term. Since \theta_1 < \theta_0, the factor (\theta_1 - \theta_0) / (\theta_0 \theta_1) is negative, so dividing by it reverses the inequality and we have (1 point):

\sum_{i=1}^{N} \ln(x_i) > K_3    (37)

with K_3 = K_2 \theta_0 \theta_1 / (\theta_1 - \theta_0). This inequality can be re-expressed as:

\ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(x_i) < K_4    (38)

with K_4 = \ln(c) - K_3 / N. The rejection region of the UMP test of size \alpha has the general form (1 point):

W = \left\{ x : \hat{\theta}(x) < A \right\}    (39)

where A is a constant determined by the size \alpha.

Remark: the expressions of the constant terms K_2, K_3 and K_4 are useless (no point for these expressions).
Question 7 (2 points). Given the definition of the type I error (0.5 point):

\alpha = \Pr(W \mid H_0)    (40)

So, here we have (0.5 point):

\alpha = \Pr\left( \hat{\theta} < A \mid H_0 \right) \quad \text{with} \quad \hat{\theta} \overset{asy}{\sim} N\left( \theta_0, \frac{\theta_0^2}{N} \right) \text{ under } H_0    (41)

Then, we have:

\alpha = \Pr\left( \frac{\hat{\theta} - \theta_0}{\theta_0 / \sqrt{N}} < \frac{A - \theta_0}{\theta_0 / \sqrt{N}} \right) = \Phi\left( \frac{A - \theta_0}{\theta_0 / \sqrt{N}} \right)    (42)

since (\hat{\theta} - \theta_0) / (\theta_0 / \sqrt{N}) is asymptotically N(0, 1), where \Phi(.) denotes the cdf of the standard normal distribution. From this expression, we can deduce the critical value of the UMP test of size \alpha (1 point):

A = \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}(\alpha)    (43)
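For illustration, the critical value (43) can be computed with the standard normal quantile; the values \theta_0 = 1.5, N = 100 and \alpha = 5% are assumed here, taken from the numerical questions later in the correction:

```python
from math import sqrt
from statistics import NormalDist

# Critical value (43) for theta0 = 1.5, N = 100, alpha = 5% (values assumed
# from the rest of the correction).
theta0, N, alpha = 1.5, 100, 0.05
A = theta0 + theta0 / sqrt(N) * NormalDist().inv_cdf(alpha)
print(round(A, 4))  # 1.2533, since the 5% normal quantile is about -1.6449
```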
Question 8 (2 points). Consider the test

H_0 : \theta = \theta_0 \quad \text{against} \quad H_1 : \theta = \theta_1

with \theta_1 < \theta_0. The rejection region of the UMP test of size \alpha is (1 point):

W = \left\{ x : \hat{\theta}(x) < \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}(\alpha) \right\}    (44)

W does not depend on \theta_1 (0.5 point). It is also the rejection region of the UMP one-sided test (0.5 point):

H_0 : \theta = \theta_0 \quad \text{against} \quad H_1 : \theta < \theta_0
Question 9 (2 points). Consider the one-sided tests:

Test A: H_0 : \theta = \theta_0 against H_1 : \theta < \theta_0
Test B: H_0 : \theta = \theta_0 against H_1 : \theta > \theta_0

The non-rejection regions of the UMP tests of size \alpha/2 are (0.5 point):

\bar{W}_A = \left\{ x : \hat{\theta}(x) > \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( \frac{\alpha}{2} \right) \right\}    (45)

\bar{W}_B = \left\{ x : \hat{\theta}(x) < \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right\}    (46)

So, the non-rejection region of the two-sided test of size \alpha is (1 point):

\bar{W} = \left\{ x : \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( \frac{\alpha}{2} \right) < \hat{\theta}(x) < \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right\}    (47)

Since \Phi^{-1}(\alpha/2) = -\Phi^{-1}(1 - \alpha/2), this region can be rewritten as:

\bar{W} = \left\{ x : \left| \hat{\theta}(x) - \theta_0 \right| < \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right\}    (48)

The rejection region of the two-sided test of size \alpha is (0.5 point):

W = \left\{ x : \left| \hat{\theta}(x) - \theta_0 \right| > \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right\}    (49)
Question 10 (2 points). By definition of the power function, we have (0.5 point):

P(\theta) = \Pr(W \mid H_1), \quad \forall \theta \neq \theta_0    (50)

Under the alternative H_1 with \theta \neq \theta_0:

\hat{\theta} \overset{asy}{\sim} N\left( \theta, \frac{\theta^2}{N} \right)    (51)

The power function can be expressed as:

P(\theta) = 1 - \Pr\left( \bar{W} \mid H_1 \right)
= 1 - \Pr\left( \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( \frac{\alpha}{2} \right) < \hat{\theta} < \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right)
= 1 - \Pr\left( \hat{\theta} < \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right) + \Pr\left( \hat{\theta} < \theta_0 + \frac{\theta_0}{\sqrt{N}} \Phi^{-1}\left( \frac{\alpha}{2} \right) \right)    (52)

Standardising \hat{\theta} under H_1, the power function is (0.5 point):

P(\theta) = 1 - \Phi\left( \sqrt{N} \frac{\theta_0 - \theta}{\theta} + \frac{\theta_0}{\theta} \Phi^{-1}\left( 1 - \frac{\alpha}{2} \right) \right) + \Phi\left( \sqrt{N} \frac{\theta_0 - \theta}{\theta} + \frac{\theta_0}{\theta} \Phi^{-1}\left( \frac{\alpha}{2} \right) \right)    (53)

When N tends to infinity, two cases have to be considered. If \theta > \theta_0, then \sqrt{N} (\theta_0 - \theta)/\theta \to -\infty and we have (0.5 point):

\lim_{N \to \infty} P(\theta) = 1 - \Phi(-\infty) + \Phi(-\infty) = 1 - 0 + 0 = 1    (54)

If \theta < \theta_0, then \sqrt{N} (\theta_0 - \theta)/\theta \to +\infty and we have (0.5 point):

\lim_{N \to \infty} P(\theta) = 1 - \Phi(+\infty) + \Phi(+\infty) = 1 - 1 + 1 = 1    (55)

Whatever the value of \theta, the power function tends to one:

\lim_{N \to \infty} P(\theta) = 1    (56)

The test is consistent.
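The behaviour of the power function (53) can also be illustrated numerically; the values \theta_0 = 1.5, \theta = 2 and \alpha = 5% are assumptions of this sketch:

```python
from math import sqrt
from statistics import NormalDist

# Numerical illustration of the power function (53); theta0, theta and alpha
# are assumed values for the sketch.
Phi, Phi_inv = NormalDist().cdf, NormalDist().inv_cdf
theta0, alpha = 1.5, 0.05

def power(theta, N):
    z = sqrt(N) * (theta0 - theta) / theta
    r = theta0 / theta
    return 1 - Phi(z + r * Phi_inv(1 - alpha / 2)) + Phi(z + r * Phi_inv(alpha / 2))

print([round(power(2.0, N), 3) for N in (25, 100, 400)])  # increases towards 1
```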
Part III: The trilogy
Question 11 (3 points). The LR test statistic is defined as (0.5 point):

LR = -2 \left( \ell_N\left( \hat{\theta}_{H_0}; x \right) - \ell_N\left( \hat{\theta}_{H_1}; x \right) \right)    (57)
where \hat{\theta}_{H_0} and \hat{\theta}_{H_1} respectively denote the ML estimator of the parameter \theta under the null and under the alternative. We know that under the alternative:

\hat{\theta}_{H_1}(x) = \ln(c) - \frac{1}{N} \sum_{i=1}^{N} \ln(x_i) = 2    (58)

As a consequence, we have (0.5 point):

\sum_{i=1}^{N} \ln(x_i) = N \left( \ln(c) - \hat{\theta}_{H_1}(x) \right) = 100 \times (4 - 2) = 200    (59)

The log-likelihood of the sample under the alternative is equal to (0.5 point):

\ell_N\left( \hat{\theta}_{H_1}; x \right) = -N \ln\left( \hat{\theta}_{H_1} \right) - \frac{N}{\hat{\theta}_{H_1}} \ln(c) + \left( \frac{1}{\hat{\theta}_{H_1}} - 1 \right) \sum_{i=1}^{N} \ln(x_i)
= -100 \ln(2) - \frac{100}{2} \times 4 + \left( \frac{1}{2} - 1 \right) \times 200
= -369.31    (60)

Under the null, the parameter \theta is known (\theta_0 = 1.5) and the log-likelihood of the sample is equal to (0.5 point):

\ell_N\left( \hat{\theta}_{H_0}; x \right) = -N \ln(\theta_0) - \frac{N}{\theta_0} \ln(c) + \left( \frac{1}{\theta_0} - 1 \right) \sum_{i=1}^{N} \ln(x_i)
= -100 \ln(1.5) - \frac{100}{1.5} \times 4 + \left( \frac{1}{1.5} - 1 \right) \times 200
= -373.87    (61)

The LR test statistic (realisation) is equal to (0.5 point):

LR(x) = -2 \left( \ell_N\left( \hat{\theta}_{H_0}; x \right) - \ell_N\left( \hat{\theta}_{H_1}; x \right) \right) = -2 \times (-373.87 + 369.31) = 9.13    (62)

Under some regularity conditions and under the null, we have:

LR \xrightarrow{d} \chi^2(1)    (63)

since there is only one restriction imposed. The critical region for a 5% significance level is:

W = \left\{ x : LR(x) > \chi^2_{0.95}(1) = 3.8415 \right\}    (64)

Conclusion: for a 5% significance level, we reject the null H_0 : \theta = 1.5 (0.5 point).
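The numerical steps of Question 11 can be replayed in a few lines; the exam values N = 100, \ln(c) = 4, \hat{\theta}_{H_1} = 2 and \theta_0 = 1.5 are taken from the text (Python sketch; the original exam used Matlab):

```python
from math import log

# Reproducing the LR computation (62) with the exam's numbers: N = 100,
# ln(c) = 4, theta_hat_H1 = 2, theta_0 = 1.5, so sum of ln(x_i) = 200.
N, ln_c, sum_ln_x = 100, 4.0, 200.0

def loglik(theta):
    # log-likelihood (3): -N ln(theta) - (N/theta) ln(c) + (1/theta - 1) sum ln(x_i)
    return -N * log(theta) - N / theta * ln_c + (1 / theta - 1) * sum_ln_x

LR = -2 * (loglik(1.5) - loglik(2.0))
print(round(LR, 2))  # 9.13 > 3.8415, so H0 is rejected at the 5% level
```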
Question 12 (3 points). In this case, the Wald test statistic (realisation) is equal to (1.5 point):

Wald(x) = \frac{ \left( \hat{\theta}_{H_1} - \theta_0 \right)^2 }{ \hat{V}_{asy}\left( \hat{\theta}_{H_1} \right) } = \frac{ \left( \hat{\theta}_{H_1} - \theta_0 \right)^2 }{ \hat{\theta}_{H_1}^2 / N } = \frac{ (2 - 1.5)^2 }{ 2^2 / 100 } = 6.25    (65)

Under some regularity conditions and under the null, we have (0.5 point):

Wald \xrightarrow{d} \chi^2(1)    (66)

since there is only one restriction imposed. The critical region for a 5% significance level is (0.5 point):

W = \left\{ x : Wald(x) > \chi^2_{0.95}(1) = 3.8415 \right\}    (67)

Conclusion: for a 5% significance level, we reject the null H_0 : \theta = 1.5 (0.5 point).
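The Wald computation (65) in code, with the same exam numbers:

```python
# Reproducing the Wald statistic (65) with the exam's numbers.
theta_hat, theta0, N = 2.0, 1.5, 100
wald = (theta_hat - theta0) ** 2 / (theta_hat ** 2 / N)
print(wald)  # 6.25, larger than the 5% critical value 3.8415
```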
Question 13 (4 points). The LM test statistic is defined as (0.5 point):

LM = \frac{ s_N^2(\theta_c; x) }{ \hat{I}_N(\theta_c) }    (68)

where s_N(\theta_c; x) is the score of the unconstrained model evaluated at \theta = \theta_c. In our case, the score is defined by (0.5 point):

s_N(\theta; X) = \frac{\partial \ell_N(\theta; X)}{\partial \theta} = -\frac{N}{\theta} + \frac{N}{\theta^2} \ln(c) - \frac{1}{\theta^2} \sum_{i=1}^{N} \ln(X_i)    (69)

The realisation of the score evaluated at \theta_c is equal to (1 point):

s_N(\theta_c; x) = -\frac{N}{\theta_c} + \frac{N}{\theta_c^2} \ln(c) - \frac{1}{\theta_c^2} \sum_{i=1}^{N} \ln(x_i)
= -\frac{100}{1.5} + \frac{100}{1.5^2} \times 4 - \frac{1}{1.5^2} \times 200
= 22.2222    (70)

The estimate of the Fisher information number associated to the sample, \hat{I}_N(\theta_c), is equal to (0.5 point):

\hat{I}_N(\theta_c) = \frac{N}{\theta_c^2} = \frac{100}{1.5^2} = 44.4444    (71)

So, the realisation of the LM test statistic is (0.5 point):

LM(x) = \frac{ s_N^2(\theta_c; x) }{ \hat{I}_N(\theta_c) } = \frac{22.2222^2}{44.4444} = 11.1111    (72)

Under some regularity conditions and under the null, we have (0.5 point):

LM \xrightarrow{d} \chi^2(1)    (73)

since there is only one restriction imposed. The critical region for a 5% significance level is:

W = \left\{ x : LM(x) > \chi^2_{0.95}(1) = 3.8415 \right\}    (74)

Conclusion: for a 5% significance level, we reject the null H_0 : \theta = 1.5 (0.5 point).
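The LM computation (70) through (72) in code, with the same exam numbers:

```python
# Reproducing the LM statistic (70)-(72) with the exam's numbers.
N, ln_c, sum_ln_x, theta_c = 100, 4.0, 200.0, 1.5

score = -N / theta_c + N * ln_c / theta_c**2 - sum_ln_x / theta_c**2   # (70)
info = N / theta_c**2                                                  # (71)
LM = score**2 / info                                                   # (72)
print(round(score, 4), round(LM, 4))  # about 22.2222 and 11.1111
```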
Exercise 2: Logit model and MLE (15 points)
Question 1 (2 points). The conditional distribution of the dependent variable Y_i is a Bernoulli distribution with a (conditional) success probability equal to p_i = \Pr(y_i = 1 \mid x_i) (0.5 point):

y_i \mid x_i \sim \text{Bernoulli}(p_i)

Since the variables \{y_i, x_i\}_{i=1}^{N} are i.i.d. (0.5 point), the conditional log-likelihood of the sample \{y_i, x_i\}_{i=1}^{N} corresponds to the sum of the (log) probability mass functions associated to the conditional distributions of y_i given x_i for i = 1, ..., N (0.5 point):

\ell_N(\beta; y \mid x) = \sum_{i=1}^{N} \ln f_{y|x}(y_i \mid x_i; \beta)    (75)

with

f_{y|x}(y_i \mid x_i; \beta) = p_i^{y_i} (1 - p_i)^{1 - y_i}    (76)

As a consequence, the (conditional) log-likelihood of the sample \{y_i, x_i\}_{i=1}^{N} is equal to (0.5 point):

\ell_N(\beta; y \mid x) = \sum_{i=1}^{N} y_i \ln \Lambda(x_i^\top \beta) + \sum_{i=1}^{N} (1 - y_i) \ln\left( 1 - \Lambda(x_i^\top \beta) \right)    (77)

or equivalently

\ell_N(\beta; y \mid x) = \sum_{i=1}^{N} y_i \ln\left( \frac{\exp(x_i^\top \beta)}{1 + \exp(x_i^\top \beta)} \right) + \sum_{i=1}^{N} (1 - y_i) \ln\left( 1 - \frac{\exp(x_i^\top \beta)}{1 + \exp(x_i^\top \beta)} \right)    (78)
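The log-likelihood (78) translates directly into code. A minimal Python sketch (the argument names beta, y, X are assumptions of the illustration):

```python
import numpy as np

# Minimal sketch of the conditional log-likelihood (78); variable names
# (beta, y, X) are assumptions of this illustration.
def logit_loglik(beta, y, X):
    """Sum of Bernoulli log-pmfs with p_i = Lambda(x_i' beta)."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # logistic cdf Lambda(x_i' beta)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# With beta = 0, every p_i = 1/2 and each observation contributes ln(1/2).
print(logit_loglik(np.zeros(1), np.array([1.0, 0.0]), np.ones((2, 1))))
```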
Question 2 (2 points). By definition of the score vector, we have (1 point):

s_N(\beta; y \mid x) = \frac{\partial \ell_N(\beta; y \mid x)}{\partial \beta}
= \sum_{i=1}^{N} y_i \frac{\lambda(x_i^\top \beta)}{\Lambda(x_i^\top \beta)} x_i - \sum_{i=1}^{N} (1 - y_i) \frac{\lambda(x_i^\top \beta)}{1 - \Lambda(x_i^\top \beta)} x_i    (79)

where \lambda(.) denotes the logistic pdf. Since \lambda(x) = \Lambda(x) \left( 1 - \Lambda(x) \right), this expression can be simplified as follows (0.5 point):

s_N(\beta; y \mid x) = \sum_{i=1}^{N} y_i \left( 1 - \Lambda(x_i^\top \beta) \right) x_i - \sum_{i=1}^{N} (1 - y_i) \Lambda(x_i^\top \beta) x_i
= \sum_{i=1}^{N} \left( y_i - y_i \Lambda(x_i^\top \beta) - \Lambda(x_i^\top \beta) + y_i \Lambda(x_i^\top \beta) \right) x_i
= \sum_{i=1}^{N} \left( y_i - \Lambda(x_i^\top \beta) \right) x_i    (80)

So, the score vector of the logit model is simply defined by (0.5 point):

s_N(\beta; y \mid x) = \sum_{i=1}^{N} x_i \left( y_i - \Lambda(x_i^\top \beta) \right)    (81)
Question 3 (2 points). The Hessian matrix is a K \times K matrix given by (0.5 point):

H_N(\beta; y \mid x) = \frac{\partial^2 \ell_N(\beta; y \mid x)}{\partial \beta \, \partial \beta^\top} = \frac{\partial s_N(\beta; y \mid x)}{\partial \beta^\top}    (82)

So, we have (0.5 point):

H_N(\beta; y \mid x) = -\sum_{i=1}^{N} x_i \lambda(x_i^\top \beta) \frac{\partial x_i^\top \beta}{\partial \beta^\top}    (83)

The Hessian matrix is then (1 point):

H_N(\beta; y \mid x) = -\sum_{i=1}^{N} x_i \lambda(x_i^\top \beta) x_i^\top    (84)

or equivalently (since \lambda(x_i^\top \beta) is a scalar)

H_N(\beta; y \mid x) = -\sum_{i=1}^{N} \lambda(x_i^\top \beta) x_i x_i^\top    (85)
Question 4 (4 points). The first step consists in writing a Matlab function that returns the opposite of the log-likelihood. The following syntax (2 points) is proposed, cf. Figure 1:

Figure 1: Log-likelihood function

The second step consists in calling a numerical optimisation algorithm of Matlab with the following syntax, cf. Figure 2 (2 points).

Figure 2: Main program
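The Matlab code of Figures 1 and 2 is not reproduced in this text. As an equivalent sketch (in Python rather than Matlab; all names are assumptions), one can code the negative log-likelihood and maximise it by Newton-Raphson, which uses exactly the score (81) and Hessian (85):

```python
import numpy as np

# Equivalent sketch of the missing Figures 1-2 (Python instead of Matlab;
# function and variable names are assumptions): negative log-likelihood of the
# logit model, maximised by Newton-Raphson using the score (81) and Hessian (85).
def neg_loglik(beta, y, X):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def logit_mle(y, X, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        score = X.T @ (y - p)                 # equation (81)
        hess = -(X.T * (p * (1 - p))) @ X     # equation (85)
        beta -= np.linalg.solve(hess, score)  # Newton step
    return beta
```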
Question 5 (2 points). Since the problem is regular, the asymptotic variance-covariance matrix of the MLE estimator \hat{\beta} is (0.5 point):

V_{asy}\left( \hat{\beta} \right) = \frac{1}{N} I^{-1}(\beta) = I_N^{-1}(\beta)    (86)

A consistent estimator of the average Fisher information matrix, based on the Hessian matrix, is (0.5 point):

\hat{I}\left( \hat{\beta} \right) = -\frac{1}{N} \sum_{i=1}^{N} H_i\left( \hat{\beta}; y_i \mid x_i \right) = -\frac{1}{N} H_N\left( \hat{\beta}; y \mid x \right)    (87)

So, an estimator of the asymptotic variance-covariance matrix of the MLE estimator \hat{\beta} is given by (0.5 point):

\hat{V}_{asy}\left( \hat{\beta} \right) = \frac{1}{N} \hat{I}^{-1}\left( \hat{\beta} \right) = -H_N^{-1}\left( \hat{\beta}; y \mid x \right)    (88)

In this case, we have (0.5 point):

\hat{V}_{asy}\left( \hat{\beta} \right) = \left( \sum_{i=1}^{N} \lambda(x_i^\top \hat{\beta}) x_i x_i^\top \right)^{-1}    (89)

Question 6 (3 points). The following syntax is proposed, cf. Figure 3:

Figure 3: Asymptotic standard errors
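The Matlab code of Figure 3 is likewise not reproduced here; an equivalent Python sketch of the standard errors based on (89) (function and variable names assumed):

```python
import numpy as np

# Equivalent sketch of the missing Figure 3 (names assumed): asymptotic
# standard errors (89), i.e. the square roots of the diagonal of the inverse
# of sum_i lambda(x_i' beta_hat) x_i x_i'.
def logit_std_errors(beta_hat, X):
    p = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
    lam = p * (1 - p)                # lambda(x_i' beta_hat), logistic pdf
    info = (X.T * lam) @ X           # sum_i lambda_i x_i x_i'
    V = np.linalg.inv(info)          # estimated asymptotic variance (89)
    return np.sqrt(np.diag(V))
```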
Exercise 3: OLS and heteroscedasticity (5 points)
The aim of this code is to compute the OLS estimate of the parameter vector of a linear model, denoted beta in the code (0.5 point), and the corresponding White consistent standard errors (1 point). The matrix M denotes the estimator defined by (0.5 point):

M = \frac{1}{N} \sum_{i=1}^{N} \hat{\varepsilon}_i^2 x_i x_i^\top    (90)

where \hat{\varepsilon}_i denotes the OLS residual for the i-th unit and x_i the corresponding vector of explanatory variables. The code uses a finite-sample correction (0.5 point) for M (Davidson and MacKinnon, 1993). The matrix V corresponds to the White consistent estimate of the asymptotic variance-covariance matrix of the OLS estimator (1 point). The vector std corresponds to the robust asymptotic standard errors (0.5 point).
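The exam's code is Matlab and is not shown in this text. A Python sketch of what such a code computes (the names beta, M, V and std follow the description above; everything else is an assumption of the sketch):

```python
import numpy as np

# Sketch of White heteroscedasticity-robust standard errors for OLS
# (Python stand-in for the exam's Matlab code; names follow the text).
def white_ols(y, X):
    N, K = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate
    e = y - X @ beta                           # OLS residuals
    M = (X.T * e**2) @ X / N                   # equation (90)
    M *= N / (N - K)                           # finite-sample correction (Davidson-MacKinnon)
    XtX_inv = np.linalg.inv(X.T @ X / N)
    V = XtX_inv @ M @ XtX_inv / N              # White sandwich estimator
    std = np.sqrt(np.diag(V))                  # robust standard errors
    return beta, std
```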