HEC Lausanne - Advanced Econometrics

Christophe HURLIN
Correction Mid-term exam. December 2013. C. Hurlin
Exercise 1 (10 points): Log-normal distribution
Question 1 (2 points): since the random variables $X_1, \ldots, X_N$ are $i.i.d.$ (0.5 point), the log-likelihood of the sample $x_1, \ldots, x_N$ is defined to be:

$$\ell_N(\theta; x) = \sum_{i=1}^{N} \ln f_X(x_i; \theta) \quad (1)$$

with (0.5 point)

$$\ln f_X(x_i; \theta) = -\ln(x_i) - \frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln\left(\sigma^2\right) - \frac{(\ln(x_i) - \mu)^2}{2\sigma^2} \quad (2)$$

So, we have (1 point):

$$\ell_N(\theta; x) = -\sum_{i=1}^{N}\ln(x_i) - \frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln\left(\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(\ln(x_i) - \mu)^2 \quad (3)$$
Question 2 (2 points): the ML estimator of $\theta = (\mu : \sigma^2)^\top$ is defined as (0.5 point):

$$\hat{\theta} = \underset{\mu \in \mathbb{R},\ \sigma^2 \in \mathbb{R}^+}{\arg\max}\ \ell_N(\theta; x) \quad (4)$$

The log-likelihood equations (FOC) are the following:

$$g_N\left(\hat{\theta}; x\right) = \begin{pmatrix} \left.\dfrac{\partial \ell_N(\theta; x)}{\partial \mu}\right|_{\hat{\theta}} \\ \left.\dfrac{\partial \ell_N(\theta; x)}{\partial \sigma^2}\right|_{\hat{\theta}} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad (5)$$

with (0.5 point)

$$\left.\frac{\partial \ell_N(\theta; x)}{\partial \mu}\right|_{\hat{\theta}} = \frac{1}{\hat{\sigma}^2}\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu}) = 0 \quad (6)$$

$$\left.\frac{\partial \ell_N(\theta; x)}{\partial \sigma^2}\right|_{\hat{\theta}} = -\frac{N}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu})^2 = 0 \quad (7)$$

By solving this system, we get (0.5 point):

$$\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N}\ln(x_i) \qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu})^2 \quad (8)$$
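These closed-form solutions can be checked by simulation. A minimal sketch, where the true values $\mu = 0.5$ and $\sigma^2 = 2$ are illustrative assumptions (not taken from the exam):

```python
import math
import random

# Sketch: check the closed-form ML estimators of the log-normal model,
# mu_hat = (1/N) * sum(ln x_i) and sigma2_hat = (1/N) * sum((ln x_i - mu_hat)^2).
# The true values mu = 0.5, sigma2 = 2.0 are illustrative assumptions.
random.seed(42)
mu, sigma2, N = 0.5, 2.0, 200_000
x = [random.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(N)]

log_x = [math.log(xi) for xi in x]
mu_hat = sum(log_x) / N
sigma2_hat = sum((l - mu_hat) ** 2 for l in log_x) / N
# Both estimators should be close to the true values for large N (consistency).
```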
The SOC are based on the Hessian matrix:

$$H_N\left(\hat{\theta}; x\right) = \left.\frac{\partial^2 \ell_N(\theta; x)}{\partial \theta\, \partial \theta^\top}\right|_{\hat{\theta}} = \begin{pmatrix} -\dfrac{N}{\hat{\sigma}^2} & -\dfrac{1}{\hat{\sigma}^4}\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu}) \\ -\dfrac{1}{\hat{\sigma}^4}\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu}) & \dfrac{N}{2\hat{\sigma}^4} - \dfrac{1}{\hat{\sigma}^6}\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu})^2 \end{pmatrix} \quad (9)$$

Given the FOC, we have $\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu}) = 0$ and $\sum_{i=1}^{N}(\ln(x_i) - \hat{\mu})^2 = N\hat{\sigma}^2$, so the Hessian matrix is equal to (0.5 point):

$$H_N\left(\hat{\theta}; x\right) = \begin{pmatrix} -\dfrac{N}{\hat{\sigma}^2} & 0 \\ 0 & -\dfrac{N}{2\hat{\sigma}^4} \end{pmatrix} \quad (10)$$

This matrix is negative definite, so we have a maximum. The maximum likelihood estimators of the parameters $\mu$ and $\sigma^2$ are:

$$\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N}\ln(X_i) \qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}(\ln(X_i) - \hat{\mu})^2 \quad (11)$$
Question 3 (1 point): there are several ways to prove the consistency of the ML estimator $\hat{\mu}$. By using the weak law of large numbers (Khinchine's theorem), the result is direct. The sequence of $i.i.d.$ random variables $\ln(X_1), \ldots, \ln(X_N)$ satisfies $\mathbb{E}(\ln(X_i)) = \mu$. Given the WLLN, we have

$$\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N}\ln(X_i) \xrightarrow{\ p\ } \mathbb{E}(\ln(X_i)) = \mu \quad (12)$$

The estimator $\hat{\mu}$ is (weakly) consistent.
Question 4 (1 point): consider the sequence of $i.i.d.$ random variables $\ln(X_1), \ldots, \ln(X_N)$ with $\ln(X_i) \sim \mathcal{N}(\mu, \sigma^2)$. So, $\hat{\mu}$ is defined as the sum of independent normal variables: it also has a normal distribution (0.5 point):

$$\hat{\mu} \sim \mathcal{N}\left(\mu, \frac{\sigma^2}{N}\right) \quad (13)$$

The estimator $\hat{\sigma}^2$ is defined as:

$$\hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}(\ln(X_i) - \hat{\mu})^2 \quad (14)$$

Since the variables $\ln(X_1), \ldots, \ln(X_N)$ are $i.i.d.$ with $\ln(X_i) \sim \mathcal{N}(\mu, \sigma^2)$, we know that (0.5 point):

$$\frac{N\hat{\sigma}^2}{\sigma^2} \sim \chi^2(N - 1) \quad (15)$$
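This distributional result can be illustrated by Monte Carlo: since $N\hat{\sigma}^2/\sigma^2 \sim \chi^2(N-1)$, its average over many replications should be close to $N-1$. A minimal sketch, where $N = 10$ and the number of replications are illustrative choices:

```python
import math
import random

# Sketch: Monte Carlo check that N * sigma2_hat / sigma2 has mean N - 1,
# as implied by the chi-square(N - 1) distribution. Values are illustrative.
random.seed(1)
mu, sigma2, N, R = 0.0, 1.0, 10, 20_000
draws = []
for _ in range(R):
    logs = [random.gauss(mu, math.sqrt(sigma2)) for _ in range(N)]
    m = sum(logs) / N
    s2 = sum((l - m) ** 2 for l in logs) / N  # ML (biased) variance estimator
    draws.append(N * s2 / sigma2)
mean_stat = sum(draws) / R  # should be close to N - 1 = 9
```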
Question 5 (2 points): since the problem is regular (0.5 point), we have:

$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) \xrightarrow{\ d\ } \mathcal{N}\left(0, \mathcal{I}^{-1}(\theta_0)\right) \quad (16)$$

where $\theta_0$ denotes the true value of the parameter and $\mathcal{I}(\theta_0)$ the (average) Fisher information matrix for one observation. The Fisher information matrix associated to the sample is equal to:

$$\mathcal{I}_N(\theta_0) = \mathbb{E}\left(-H_N(\theta_0; x)\right) = \begin{pmatrix} \dfrac{N}{\sigma^2} & 0 \\ 0 & \dfrac{N}{2\sigma^4} \end{pmatrix} \quad (17)$$

Since the sample is $i.i.d.$, we have

$$\mathcal{I}(\theta_0) = \frac{1}{N}\mathcal{I}_N(\theta_0) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\ 0 & \dfrac{1}{2\sigma^4} \end{pmatrix} \quad (18)$$

As a consequence (1 point):

$$\sqrt{N}\begin{pmatrix} \hat{\mu} - \mu_0 \\ \hat{\sigma}^2 - \sigma_0^2 \end{pmatrix} \xrightarrow{\ d\ } \mathcal{N}\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma^2 & 0 \\ 0 & 2\sigma^4 \end{pmatrix}\right) \quad (19)$$

or equivalently (0.5 point)

$$\hat{\theta} = \begin{pmatrix} \hat{\mu} \\ \hat{\sigma}^2 \end{pmatrix} \overset{asy}{\sim} \mathcal{N}\left(\theta, \begin{pmatrix} \dfrac{\sigma^2}{N} & 0 \\ 0 & \dfrac{2\sigma^4}{N} \end{pmatrix}\right) \quad (20)$$
Question 6 (2 points): the asymptotic variance covariance matrix of $\hat{\theta}$ is equal to (0.5 point):

$$\mathbb{V}_{asy}\left(\hat{\theta}\right) = \begin{pmatrix} \dfrac{\sigma^2}{N} & 0 \\ 0 & \dfrac{2\sigma^4}{N} \end{pmatrix} \quad (21)$$

A natural estimator is given by (1 point):

$$\widehat{\mathbb{V}}_{asy}\left(\hat{\theta}\right) = \begin{pmatrix} \dfrac{\hat{\sigma}^2}{N} & 0 \\ 0 & \dfrac{2\hat{\sigma}^4}{N} \end{pmatrix} \quad (22)$$

Since $\hat{\sigma}^2$ is a consistent estimator of $\sigma^2$, by using the continuous mapping theorem and Slutsky's theorem, it is easy to show that $\widehat{\mathbb{V}}_{asy}(\hat{\theta})$ is a consistent estimator of $\mathbb{V}_{asy}(\hat{\theta})$ (0.5 point). A second possible estimator is the BHHH estimator based on the cross-product of the gradients.
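The plug-in estimator in equation (22) is straightforward to compute; a minimal sketch, where the values of $\hat{\sigma}^2$ and $N$ are illustrative assumptions:

```python
import math

# Sketch: plug-in estimator of the asymptotic variance-covariance matrix,
# diag(sigma2_hat / N, 2 * sigma2_hat^2 / N). Values below are illustrative.
sigma2_hat, N = 1.5, 100
V_hat = [[sigma2_hat / N, 0.0],
         [0.0, 2 * sigma2_hat ** 2 / N]]
se_mu = math.sqrt(V_hat[0][0])      # estimated std. error of mu_hat
se_sigma2 = math.sqrt(V_hat[1][1])  # estimated std. error of sigma2_hat
```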
Exercise 2 (12 points): Probit model
Question 1 (1 point): under these assumptions, the conditional probabilities to observe the event $Y_i = 1$ for the two types of individuals are equal to:

$$\Pr(Y_i = 1 \mid X_i = 1) = \Phi(\alpha + \beta) \quad (23)$$

$$\Pr(Y_i = 1 \mid X_i = -1) = \Phi(\alpha - \beta) \quad (24)$$

Question 2 (2 points): the conditional distribution of the dependent variable $Y_i$ is a Bernoulli distribution with a (conditional) success probability equal to $p_i = \Pr(Y_i = 1 \mid X_i = x_i)$ (0.5 point):

$$Y_i \mid X_i = x_i \sim \text{Bernoulli}(p_i)$$

Since the variables $\{Y_i, X_i\}_{i=1}^{N}$ are $i.i.d.$ (0.5 point), the conditional log-likelihood of the sample $\{y_i, x_i\}_{i=1}^{N}$ corresponds to the sum of the (log) probability mass functions associated to the conditional distributions of $Y_i$ given $X_i = x_i$ for $i = 1, \ldots, N$:

$$\ell_N(\theta; y \mid x) = \sum_{i=1}^{N}\ln f_{Y|X}(y_i \mid x_i; \theta) \quad (25)$$

with $\theta = (\alpha : \beta)^\top$ and

$$f_{Y|X}(y_i \mid x_i; \theta) = p_i^{y_i}(1 - p_i)^{1 - y_i} \quad (26)$$

As a consequence, the (conditional) log-likelihood of the sample $\{y_i, x_i\}_{i=1}^{N}$ is equal to (0.5 point):

$$\ell_N(\theta; y \mid x) = \sum_{i=1}^{N}\left[y_i \ln \Phi(\alpha + \beta x_i) + (1 - y_i)\ln(1 - \Phi(\alpha + \beta x_i))\right] \quad (27)$$
Given the observations, we have (0.5 point):

$$\ell_N(\theta; y \mid x) = 50\ln(\Phi(\alpha + \beta)) + 30\ln(\Phi(\alpha - \beta)) + 50\ln(1 - \Phi(\alpha + \beta)) + 60\ln(1 - \Phi(\alpha - \beta)) \quad (28)$$

Question 3 (1 point): denote $a = \Phi(\alpha - \beta)$ and $b = \Phi(\alpha + \beta)$; we get immediately (1 point):

$$\ell_N(\theta; y \mid x) = 50\ln b + 30\ln a + 50\ln(1 - b) + 60\ln(1 - a) \quad (29)$$

Question 4 (2 points): the maximum likelihood estimate $\hat{\theta} = (\hat{a} : \hat{b})^\top$ is the solution of the maximisation problem (0.5 point):

$$\hat{\theta} = \underset{\theta \in \mathbb{R}^2}{\arg\max}\ \ell_N(\theta; y \mid x) \quad (30)$$
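The FOC solutions $\hat{a} = 1/3$ and $\hat{b} = 1/2$ (the empirical success frequencies for each type of individual) maximize the concentrated log-likelihood of equation (29); a quick numerical sketch confirms that nearby points do worse:

```python
import math

# Sketch: concentrated log-likelihood in (a, b) from the exam counts
# (30 successes / 60 failures when x_i = -1; 50 / 50 when x_i = +1).
def loglik(a, b):
    return (30 * math.log(a) + 60 * math.log(1 - a)
            + 50 * math.log(b) + 50 * math.log(1 - b))

a_hat, b_hat = 1 / 3, 1 / 2   # FOC solutions: the empirical frequencies
best = loglik(a_hat, b_hat)
# Any nearby point should give a strictly lower log-likelihood.
```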
The log-likelihood equations (FOC) are:

$$\left.\frac{\partial \ell_N(\theta; y \mid x)}{\partial \theta}\right|_{\hat{\theta}} = \begin{pmatrix} \left.\dfrac{\partial \ell_N(\theta; y \mid x)}{\partial a}\right|_{\hat{\theta}} \\ \left.\dfrac{\partial \ell_N(\theta; y \mid x)}{\partial b}\right|_{\hat{\theta}} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad (31)$$

with:

$$\left.\frac{\partial \ell_N(\theta; y \mid x)}{\partial a}\right|_{\hat{\theta}} = \frac{30}{\hat{a}} - \frac{60}{1 - \hat{a}} = 0 \quad (32)$$

$$\left.\frac{\partial \ell_N(\theta; y \mid x)}{\partial b}\right|_{\hat{\theta}} = \frac{50}{\hat{b}} - \frac{50}{1 - \hat{b}} = 0 \quad (33)$$

By solving the system, we get (1 point):

$$\hat{a} = \frac{1}{3} \qquad \hat{b} = \frac{1}{2}$$

The SOC are (0.5 point):

$$\left.\frac{\partial^2 \ell_N(\theta; y \mid x)}{\partial \theta\, \partial \theta^\top}\right|_{\hat{\theta}} = \begin{pmatrix} -\dfrac{30}{\hat{a}^2} - \dfrac{60}{(1 - \hat{a})^2} & 0 \\ 0 & -\dfrac{50}{\hat{b}^2} - \dfrac{50}{(1 - \hat{b})^2} \end{pmatrix} \quad (34)$$

or equivalently

$$\left.\frac{\partial^2 \ell_N(\theta; y \mid x)}{\partial \theta\, \partial \theta^\top}\right|_{\hat{\theta}} = \begin{pmatrix} -405 & 0 \\ 0 & -400 \end{pmatrix} \quad (35)$$

The Hessian matrix is negative definite, so we have a maximum.
Question 5 (2 points): the equivariance (or invariance) principle states that, under suitable regularity conditions, the maximum likelihood estimator of a function $g(\theta)$ of the parameter $\theta$ is $g(\hat{\theta})$, where $\hat{\theta}$ is the maximum likelihood estimator of $\theta = (\alpha : \beta)^\top$ (0.5 point). Let us define a vectorial function $g(\cdot)$ such that

$$\begin{pmatrix} a \\ b \end{pmatrix} = g(\theta) \quad (36)$$

This function can be expressed as (0.5 point):

$$g(\theta) = \begin{pmatrix} \Phi(\alpha - \beta) \\ \Phi(\alpha + \beta) \end{pmatrix} \quad (37)$$

Then, we have

$$\begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix} = g\left(\hat{\theta}\right) \quad (38)$$

Since $g(\cdot)$ is invertible:

$$\hat{\theta} = g^{-1}\left(\hat{a}, \hat{b}\right) \quad (39)$$

Given the values of $(\hat{a}, \hat{b}) = (1/3, 1/2)$, we have

$$\hat{\alpha} + \hat{\beta} = \Phi^{-1}\left(\frac{1}{2}\right) = 0 \quad (40)$$

$$\hat{\alpha} - \hat{\beta} = \Phi^{-1}\left(\frac{1}{3}\right) \quad (41)$$

Solving this system allows us to get the ML estimates $\hat{\alpha}$ and $\hat{\beta}$ (1 point):

$$\hat{\alpha} = \frac{1}{2}\,\Phi^{-1}\left(\frac{1}{3}\right) \simeq -0.2154 \quad (42)$$

$$\hat{\beta} = -\frac{1}{2}\,\Phi^{-1}\left(\frac{1}{3}\right) \simeq 0.2154 \quad (43)$$
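The numerical values follow from inverting the standard normal CDF. A stdlib-only sketch, where the bisection-based `norm_ppf` is a stand-in for a library quantile function such as `scipy.stats.norm.ppf`:

```python
import math

# Sketch: recover (alpha_hat, beta_hat) from (a_hat, b_hat) = (1/3, 1/2).
def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    # Invert the CDF by bisection (stand-in for a library quantile function).
    for _ in range(200):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# alpha + beta = Phi^{-1}(1/2) = 0 and alpha - beta = Phi^{-1}(1/3)
s = norm_ppf(1 / 2)
d = norm_ppf(1 / 3)
alpha_hat = (s + d) / 2   # approximately -0.2154
beta_hat = (s - d) / 2    # approximately  0.2154
```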
Question 6 (2 points):

$$\ell_N(\theta; y \mid z) = \sum_{i=1}^{N}\left[y_i \ln \Phi(z_i\theta) + (1 - y_i)\ln(1 - \Phi(z_i\theta))\right] \quad (44)$$

The score vector is defined as (0.5 point):

$$S_N(\theta; Y \mid z) = \frac{\partial \ell_N(\theta; Y \mid z)}{\partial \theta} = \begin{pmatrix} \dfrac{\partial \ell_N(\theta; Y \mid z)}{\partial \alpha} \\ \dfrac{\partial \ell_N(\theta; Y \mid z)}{\partial \beta} \end{pmatrix} \quad (45)$$

We have (1 point)

$$S_N(\theta; Y \mid z) = \sum_{i=1}^{N}\left[Y_i \frac{\phi(z_i\theta)}{\Phi(z_i\theta)} z_i^\top - (1 - Y_i)\frac{\phi(z_i\theta)}{1 - \Phi(z_i\theta)} z_i^\top\right] \quad (46)$$

where $\phi(\cdot)$ denotes the pdf of the standard normal distribution. This expression can be expressed as:

$$S_N(\theta; Y \mid z) = \sum_{i=1}^{N}\frac{(Y_i - \Phi(z_i\theta))\,\phi(z_i\theta)}{\Phi(z_i\theta)(1 - \Phi(z_i\theta))}\, z_i^\top \quad (47)$$

As a consequence

$$\mathbb{E}(S_N(\theta; Y \mid z)) = \sum_{i=1}^{N}\left[\mathbb{E}(Y_i)\frac{\phi(z_i\theta)}{\Phi(z_i\theta)} z_i^\top - (1 - \mathbb{E}(Y_i))\frac{\phi(z_i\theta)}{1 - \Phi(z_i\theta)} z_i^\top\right] \quad (48)$$

For a dummy variable, $\mathbb{E}(Y_i) = \Pr(Y_i = 1 \mid Z_i = z_i) = \Phi(z_i\theta)$, so we have:

$$\mathbb{E}(S_N(\theta; Y \mid z)) = \sum_{i=1}^{N}\left[\Phi(z_i\theta)\frac{\phi(z_i\theta)}{\Phi(z_i\theta)} z_i^\top - (1 - \Phi(z_i\theta))\frac{\phi(z_i\theta)}{1 - \Phi(z_i\theta)} z_i^\top\right] = \sum_{i=1}^{N}\left[\phi(z_i\theta) z_i^\top - \phi(z_i\theta) z_i^\top\right] \quad (49)$$

As usual, the expectation of the score is equal to zero (0.5 point):

$$\mathbb{E}(S_N(\theta; Y \mid z)) = 0_{2 \times 1}$$

where $\mathbb{E}$ denotes the expectation with respect to the conditional distribution of $Y_i$ given $Z_i = z_i$.
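The zero-expectation property can be checked directly for one observation, since the expectation over $Y_i \in \{0, 1\}$ is a two-term sum. A minimal sketch, where the values of $z_i$ and $\theta$ are illustrative assumptions:

```python
import math

# Sketch: E[score] = 0 for one probit observation, taking the expectation
# over Y in {0, 1} with P(Y = 1) = Phi(z * theta). z, theta are illustrative.
def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def score(y, z, theta):
    # (Y - Phi(z*theta)) * phi(z*theta) / (Phi * (1 - Phi)) * z, cf. eq. (47)
    u = z * theta
    return (y - norm_cdf(u)) * norm_pdf(u) / (norm_cdf(u) * (1 - norm_cdf(u))) * z

z, theta = 1.3, 0.7
p = norm_cdf(z * theta)
expected_score = p * score(1, z, theta) + (1 - p) * score(0, z, theta)
# The two terms cancel exactly: phi(u)*z - phi(u)*z = 0.
```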
Question 7 (2 points): the problem is regular, so:

$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) \xrightarrow{\ d\ } \mathcal{N}\left(0, \mathcal{I}^{-1}(\theta_0)\right)$$

where $\theta_0$ denotes the true value of the parameter and $\mathcal{I}(\theta_0)$ the (average) Fisher information matrix for one observation (0.5 point). Equivalently, we have

$$\hat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0, N^{-1}\mathcal{I}^{-1}(\theta_0)\right)$$

with $N\mathcal{I}(\theta_0) = \mathcal{I}_N(\theta_0)$. The asymptotic variance covariance matrix of the ML estimator $\hat{\theta} = (\hat{\alpha} : \hat{\beta})^\top$ is given by (0.5 point):

$$\mathbb{V}_{asy}\left(\hat{\theta}\right) = \mathcal{I}_N^{-1}(\theta_0) \quad (50)$$
An estimator of the asymptotic variance covariance matrix of the ML estimator $\hat{\theta} = (\hat{\alpha} : \hat{\beta})^\top$ is given by:

$$\widehat{\mathbb{V}}_{asy}\left(\hat{\theta}\right) = \widehat{\mathcal{I}}_N^{-1}\left(\hat{\theta}\right) \quad (51)$$

The corresponding estimate is equal to:

$$\widehat{\mathbb{V}}_{asy}\left(\hat{\theta}\right) = \begin{pmatrix} 117.2049 & 10.1190 \\ 10.1190 & 117.2049 \end{pmatrix}^{-1} \simeq \begin{pmatrix} 0.0086 & -0.0007 \\ -0.0007 & 0.0086 \end{pmatrix} \quad (52)$$

The estimates of the standard errors of $\hat{\alpha}$ and $\hat{\beta}$ are then equal to:

$$\sqrt{\widehat{\mathbb{V}}_{asy}(\hat{\alpha})} = \sqrt{\widehat{\mathbb{V}}_{asy}(\hat{\beta})} \simeq \sqrt{0.0086} = 0.0927 \quad (53)$$

These results are equal to those reported by Eviews.
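The last step is just a 2x2 matrix inversion followed by a square root; a quick sketch reproducing the numbers from the estimated information matrix:

```python
import math

# Sketch: invert the estimated Fisher information of the sample to get the
# variance-covariance estimate and the standard errors of alpha_hat, beta_hat.
I_hat = [[117.2049, 10.1190],
         [10.1190, 117.2049]]
det = I_hat[0][0] * I_hat[1][1] - I_hat[0][1] * I_hat[1][0]
V_hat = [[ I_hat[1][1] / det, -I_hat[0][1] / det],
         [-I_hat[1][0] / det,  I_hat[0][0] / det]]
se = math.sqrt(V_hat[0][0])   # same for alpha_hat and beta_hat by symmetry
```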