Exercises Chapter 2
Maximum Likelihood Estimation
Advanced Econometrics - HEC Lausanne
Christophe Hurlin
University of Orléans
November 2013
Exercise 1
MLE and Geometric Distribution
Problem (MLE and geometric distribution)

We consider a sample $X_1, X_2, \dots, X_N$ of i.i.d. discrete random variables, where $X_i$ has a geometric distribution with a pmf given by:
$$f_X(x; \theta) = \Pr(X = x) = \theta (1-\theta)^{x-1} \quad \forall x \in \{1, 2, 3, \dots\}$$
where the success probability $\theta$ satisfies $0 < \theta < 1$ and is unknown. We assume that:
$$\mathbb{E}(X) = \frac{1}{\theta} \qquad \mathbb{V}(X) = \frac{1-\theta}{\theta^2}$$

Question 1: Write the log-likelihood function of the sample $\{x_1, x_2, \dots, x_N\}$.
Solution

$$f_X(x; \theta) = \Pr(X = x) = \theta (1-\theta)^{x-1} \quad \forall x \in \{1, 2, 3, \dots\}$$
Since the $X_1, X_2, \dots, X_N$ are i.i.d., the likelihood is:
$$L_N(\theta; x_1, \dots, x_N) = \prod_{i=1}^N f_X(x_i; \theta) = \theta^N (1-\theta)^{\sum_{i=1}^N (x_i - 1)}$$
and the log-likelihood is:
$$\ell_N(\theta; x_1, \dots, x_N) = \sum_{i=1}^N \ln f_X(x_i; \theta) = N \ln(\theta) + \ln(1-\theta) \sum_{i=1}^N (x_i - 1)$$
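As a quick numerical check of the closed-form log-likelihood above (not part of the original slides), the following Python/NumPy sketch compares the formula against a direct sum of log pmf terms; `scipy.stats.geom` uses the same support convention $x \in \{1, 2, \dots\}$ as this exercise.

```python
import numpy as np
from scipy.stats import geom

rng = np.random.default_rng(0)
theta = 0.3
x = rng.geometric(theta, size=50)  # support {1, 2, ...}, as in the exercise

# Direct evaluation: sum of log pmf terms
ll_direct = geom.logpmf(x, theta).sum()

# Closed form: N ln(theta) + ln(1 - theta) * sum(x_i - 1)
ll_formula = len(x) * np.log(theta) + np.log(1 - theta) * np.sum(x - 1)

print(ll_direct, ll_formula)  # the two values agree
```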
Problem (MLE and geometric distribution)
Question 2: Determine the maximum likelihood estimator of the success
probability θ.
Solution

The maximum likelihood estimate of the success probability $\theta$ is defined by:
$$\widehat{\theta} = \arg\max_{0 < \theta < 1} \ell_N(\theta; x) = \arg\max_{0 < \theta < 1} \; N \ln(\theta) + \ln(1-\theta) \sum_{i=1}^N (x_i - 1)$$
The gradient and the Hessian (deterministic) are defined by:
$$\frac{\partial \ell_N(\theta; x)}{\partial \theta} = \frac{N}{\theta} - \frac{1}{1-\theta} \sum_{i=1}^N (x_i - 1)$$
$$\frac{\partial^2 \ell_N(\theta; x)}{\partial \theta^2} = -\frac{N}{\theta^2} - \left(\frac{1}{1-\theta}\right)^2 \sum_{i=1}^N (x_i - 1)$$
Solution (cont'd)

So, the FOC (likelihood equation) is:
$$\left.\frac{\partial \ell_N(\theta; x)}{\partial \theta}\right|_{\widehat{\theta}} = \frac{N}{\widehat{\theta}} - \frac{1}{1-\widehat{\theta}} \sum_{i=1}^N (x_i - 1) = 0$$
So we have:
$$\frac{1-\widehat{\theta}}{\widehat{\theta}} = \frac{1}{N} \sum_{i=1}^N x_i - 1 \iff \frac{1}{\widehat{\theta}} = \frac{1}{N} \sum_{i=1}^N x_i \iff \widehat{\theta} = \frac{1}{\overline{x}_N}$$
where $\overline{x}_N$ denotes the realisation of the sample mean $\overline{X}_N = N^{-1} \sum_{i=1}^N X_i$.
Solution (cont'd)

The SOC is:
$$\left.\frac{\partial^2 \ell_N(\theta; x)}{\partial \theta^2}\right|_{\widehat{\theta}} = -\frac{N}{\widehat{\theta}^2} - \left(\frac{1}{1-\widehat{\theta}}\right)^2 \sum_{i=1}^N (x_i - 1)$$
Since $\widehat{\theta} = 1/\overline{x}_N$, we have:
$$\sum_{i=1}^N (x_i - 1) = \sum_{i=1}^N x_i - N = N \overline{x}_N - N = N \frac{1-\widehat{\theta}}{\widehat{\theta}}$$
So, we have:
$$\left.\frac{\partial^2 \ell_N(\theta; x)}{\partial \theta^2}\right|_{\widehat{\theta}} = -\frac{N}{\widehat{\theta}^2} - \frac{N}{\widehat{\theta}\left(1-\widehat{\theta}\right)} = -N \left( \frac{1}{\widehat{\theta}^2} + \frac{1}{\widehat{\theta}\left(1-\widehat{\theta}\right)} \right)$$
Solution (cont'd)

$$\left.\frac{\partial^2 \ell_N(\theta; x)}{\partial \theta^2}\right|_{\widehat{\theta}} = -N \left( \frac{1}{\widehat{\theta}^2} + \frac{1}{\widehat{\theta}\left(1-\widehat{\theta}\right)} \right) = -\frac{N}{\widehat{\theta}^2 \left(1-\widehat{\theta}\right)} < 0$$
We have a maximum since $0 < \widehat{\theta} < 1$.

Conclusion: the ML estimator of $\theta$ is equal to the inverse of the sample mean:
$$\widehat{\theta} = \frac{1}{\overline{X}_N}$$
Problem (MLE and geometric distribution)
Question 3: Show that the maximum likelihood estimator of the success
probability θ is weakly consistent.
Solution

In two lines...

1. Since the $X_1, X_2, \dots, X_N$ are i.i.d., according to Khinchin's theorem (WLLN), we have:
$$\overline{X}_N \xrightarrow{p} \mathbb{E}(X_i) = \frac{1}{\theta}$$
2. Given that $\widehat{\theta} = 1/\overline{X}_N$, by using the continuous mapping theorem (CMT) for the function $g(x) = 1/x$, we get:
$$\widehat{\theta} = g\left(\overline{X}_N\right) \xrightarrow{p} g\left(\frac{1}{\theta}\right)$$
or equivalently
$$\widehat{\theta} \xrightarrow{p} \theta$$
The estimator $\widehat{\theta}$ is (weakly) consistent.
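The weak consistency above can be illustrated by a small Monte Carlo sketch in Python/NumPy (the slides do not include this; it is only an illustration): the estimate $\widehat{\theta} = 1/\overline{x}_N$ concentrates around the true $\theta$ as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3

# theta_hat = 1 / sample mean, on samples of increasing size
estimates = {N: 1.0 / rng.geometric(theta, size=N).mean()
             for N in (100, 10_000, 1_000_000)}
print(estimates)  # the estimates concentrate around theta = 0.3
```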
Problem (MLE and geometric distribution)

Question 4: By using the asymptotic properties of the MLE, derive the asymptotic distribution of the ML estimator $\widehat{\theta} = 1/\overline{X}_N$.
Solution

1. The log-likelihood function $\ln f_X(\theta; x_i)$ satisfies the regularity conditions.
2. So, the ML estimator is asymptotically normally distributed with
$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0, \mathcal{I}^{-1}(\theta_0)\right)$$
where $\theta_0$ denotes the true value of the parameter and $\mathcal{I}(\theta_0)$ the (average) Fisher information number for one observation.
Solution (cont'd)

3. Compute the Fisher information number for one observation. Since we consider a marginal log-likelihood, the Fisher information number associated to $X_i$ is the same for all observations $i$. We have three definitions for $\mathcal{I}(\theta)$:
$$\mathcal{I}(\theta) = \mathbb{V}_\theta\left(\frac{\partial \ell_i(\theta; X_i)}{\partial \theta}\right) = \mathbb{E}_\theta\left(\frac{\partial \ell_i(\theta; X_i)}{\partial \theta} \frac{\partial \ell_i(\theta; X_i)}{\partial \theta}^\top\right) = -\mathbb{E}_\theta\left(\frac{\partial^2 \ell_i(\theta; X_i)}{\partial \theta^2}\right)$$
Solution (cont'd)

Let us consider the third one:
$$\mathcal{I}(\theta) = -\mathbb{E}_\theta\left(\frac{\partial^2 \ell_i(\theta; X_i)}{\partial \theta^2}\right) = \mathbb{E}_\theta\left(\frac{1}{\theta^2} + \left(\frac{1}{1-\theta}\right)^2 (X_i - 1)\right) = \frac{1}{\theta^2} + \left(\frac{1}{1-\theta}\right)^2 \left(\mathbb{E}_\theta(X_i) - 1\right)$$
$$= \frac{1}{\theta^2} + \left(\frac{1}{1-\theta}\right)^2 \left(\frac{1}{\theta} - 1\right) = \frac{1}{\theta^2} + \frac{1}{\theta(1-\theta)} = \frac{1}{\theta^2 (1-\theta)}$$
Solution (cont'd)

The asymptotic distribution of the ML estimator is:
$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow[N \to \infty]{d} \mathcal{N}\left(0, \theta_0^2 (1-\theta_0)\right)$$
where $\theta_0$ denotes the true value of the parameter. Or equivalently:
$$\widehat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0, \frac{\theta_0^2 (1-\theta_0)}{N}\right)$$
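A short simulation sketch (my addition, not from the slides) can confirm the asymptotic variance $\theta_0^2(1-\theta_0)$: the empirical standard deviation of $\sqrt{N}(\widehat{\theta} - \theta_0)$ over many replications should be close to $\theta_0\sqrt{1-\theta_0}$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, N, R = 0.3, 1_000, 2_000

# R replications of sqrt(N) * (theta_hat - theta0), theta_hat = 1/sample mean
x = rng.geometric(theta0, size=(R, N))
scaled = np.sqrt(N) * (1.0 / x.mean(axis=1) - theta0)

print(scaled.std())                       # empirical std of the scaled estimator
print(np.sqrt(theta0**2 * (1 - theta0)))  # theoretical std: theta0 * sqrt(1 - theta0)
```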
Problem (MLE and geometric distribution)

Question 5: By using the central limit theorem and the delta method, find the asymptotic distribution of the ML estimator $\widehat{\theta} = 1/\overline{X}_N$.
Solution

1. Since the $X_1, X_2, \dots, X_N$ are i.i.d. with $\mathbb{E}(X) = 1/\theta_0$ and $\mathbb{V}(X) = (1-\theta_0)/\theta_0^2$, according to the Lindeberg-Lévy CLT we get immediately:
$$\sqrt{N}\left(\overline{X}_N - \frac{1}{\theta_0}\right) \xrightarrow{d} \mathcal{N}\left(0, \frac{1-\theta_0}{\theta_0^2}\right)$$
2. Our ML estimator is defined by $\widehat{\theta} = 1/\overline{X}_N$. Let us consider the function $g(z) = 1/z$. Since $g(\cdot)$ is a continuous and continuously differentiable function with $g(1/\theta) = \theta \neq 0$ and not involving $N$, the delta method implies:
$$\sqrt{N}\left(g\left(\overline{X}_N\right) - g\left(\frac{1}{\theta_0}\right)\right) \xrightarrow{d} \mathcal{N}\left(0, \left(\left.\frac{\partial g(z)}{\partial z}\right|_{1/\theta_0}\right)^2 \frac{1-\theta_0}{\theta_0^2}\right)$$
Solution (cont'd)

$$\sqrt{N}\left(g\left(\overline{X}_N\right) - g\left(\frac{1}{\theta_0}\right)\right) \xrightarrow{d} \mathcal{N}\left(0, \left(\left.\frac{\partial g(z)}{\partial z}\right|_{1/\theta_0}\right)^2 \frac{1-\theta_0}{\theta_0^2}\right)$$
We know that $g(z) = 1/z$ and $\partial g(z)/\partial z = -1/z^2$, so we have:
$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0, \theta_0^4 \, \frac{1-\theta_0}{\theta_0^2}\right)$$
Finally, we get the same result as in the previous question:
$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0, \theta_0^2 (1-\theta_0)\right)$$
Problem (MLE and geometric distribution)

Question 6: Determine the FDCR or Cramer-Rao bound. Is the ML estimator $\widehat{\theta}$ efficient and/or asymptotically efficient?
Solution

The FDCR or Cramer-Rao bound is defined by:
$$\text{FDCR} = \mathcal{I}_N^{-1}(\theta_0)$$
where $\mathcal{I}_N(\theta_0)$ denotes the Fisher information number for the sample evaluated at the true value $\theta_0$. There are three alternative definitions for $\mathcal{I}_N(\theta_0)$:
$$\mathcal{I}_N(\theta_0) = \mathbb{V}_\theta\left(\left.\frac{\partial \ell_N(\theta; X)}{\partial \theta}\right|_{\theta_0}\right) = \mathbb{E}_\theta\left(\left.\frac{\partial \ell_N(\theta; X)}{\partial \theta}\right|_{\theta_0} \left.\frac{\partial \ell_N(\theta; X)}{\partial \theta}^\top\right|_{\theta_0}\right) = -\mathbb{E}_\theta\left(\left.\frac{\partial^2 \ell_N(\theta; X)}{\partial \theta \partial \theta^\top}\right|_{\theta_0}\right)$$
Solution (cont'd)

Let us consider the third one:
$$\mathcal{I}_N(\theta_0) = -\mathbb{E}_\theta\left(\frac{\partial^2 \ell_N(\theta; X)}{\partial \theta \partial \theta^\top}\right) = \mathbb{E}_\theta\left(\frac{N}{\theta_0^2} + \left(\frac{1}{1-\theta_0}\right)^2 \sum_{i=1}^N (X_i - 1)\right) = \frac{N}{\theta_0^2} + \left(\frac{1}{1-\theta_0}\right)^2 \sum_{i=1}^N \left(\mathbb{E}_\theta(X_i) - 1\right)$$
$$= \frac{N}{\theta_0^2} + \left(\frac{1}{1-\theta_0}\right)^2 N \left(\frac{1}{\theta_0} - 1\right) = \frac{N}{\theta_0^2} + \frac{N}{\theta_0(1-\theta_0)} = \frac{N}{\theta_0^2 (1-\theta_0)}$$
Solution (cont'd)

So, the FDCR or Cramer-Rao bound is defined by:
$$\text{FDCR} = \mathcal{I}_N^{-1}(\theta_0) = \frac{\theta_0^2 (1-\theta_0)}{N}$$
We don't know if $\widehat{\theta}$ is efficient... For that we would need to compute the variance $\mathbb{V}(\widehat{\theta}) = \mathbb{V}(1/\overline{X}_N)$.

Since the log-likelihood function $\ln f_X(\theta; x_i)$ satisfies the regularity conditions, the MLE is asymptotically efficient.

Remark: we showed that for $N$ large:
$$\mathbb{V}_{asy}\left(\widehat{\theta}\right) = \mathcal{I}_N^{-1}(\theta_0) = \frac{\theta_0^2 (1-\theta_0)}{N}$$
Remark

How to get the Fisher information number for the sample (and as a consequence the FDCR or Cramer-Rao bound) in one line from question 4? Since the sample is i.i.d., we have:
$$\mathcal{I}_N(\theta_0) = N \times \mathcal{I}(\theta_0) = \frac{N}{\theta_0^2 (1-\theta_0)}$$
Problem (MLE and geometric distribution)

Question 7: Propose a consistent estimator for the asymptotic variance of the ML estimator $\widehat{\theta}$.
Solution

We have:
$$\mathbb{V}_{asy}\left(\widehat{\theta}\right) = \frac{\theta_0^2 (1-\theta_0)}{N}$$
and we know that the ML estimator $\widehat{\theta}$ is a (weakly) consistent estimator of $\theta_0$:
$$\widehat{\theta} \xrightarrow{p} \theta_0$$
A natural estimator for the asymptotic variance is given by:
$$\widehat{\mathbb{V}}_{asy}\left(\widehat{\theta}\right) = \frac{\widehat{\theta}^2 \left(1-\widehat{\theta}\right)}{N}$$
Given the CMT and Slutsky's theorem, it is easy to show that:
$$\widehat{\mathbb{V}}_{asy}\left(\widehat{\theta}\right) \xrightarrow{p} \mathbb{V}_{asy}\left(\widehat{\theta}\right)$$
Problem (MLE and geometric distribution)

Question 8: Write a Matlab code in order to
(1) Generate a sample of size N = 1,000 of i.i.d. random variables distributed according to a geometric distribution with a success probability θ = 0.3 by using the function geornd.
(2) Estimate by MLE the parameter θ. Compare your estimate with the sample mean.

Remark: There are two definitions of the geometric distribution:
$$\Pr(X = x) = \theta (1-\theta)^{x-1} \quad \forall x \in \{1, 2, \dots\} \quad \text{(used in this exercise)}$$
$$\Pr(X = x) = \theta (1-\theta)^{x} \quad \forall x \in \{0, 1, 2, \dots\} \quad \text{(used by Matlab for geornd)}$$
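The original Matlab code slides for Question 8 are not reproduced in this text version. As a substitute, here is an equivalent Python/NumPy sketch; note that NumPy's `geometric` already counts trials up to and including the first success (support $\{1, 2, \dots\}$, the convention of this exercise), whereas Matlab's `geornd` counts failures (support $\{0, 1, \dots\}$), hence the `+1` needed there.

```python
import numpy as np

rng = np.random.default_rng(42)
N, theta = 1_000, 0.3

# (1) Simulate; support {1, 2, ...} matches the exercise's pmf directly.
# In Matlab one would use: x = geornd(0.3, N, 1) + 1;
x = rng.geometric(theta, size=N)

# (2) MLE: inverse of the sample mean, plus the plug-in asymptotic variance
theta_hat = 1.0 / x.mean()
v_hat = theta_hat**2 * (1 - theta_hat) / N

print(f"sample mean : {x.mean():.4f}")
print(f"theta_hat   : {theta_hat:.4f}")
print(f"std. error  : {np.sqrt(v_hat):.4f}")
```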
Exercise 2
MLE and AR(p) processes
Definition (AR(1) process)

A stationary Gaussian AR(1) process takes the form
$$Y_t = c + \rho Y_{t-1} + \varepsilon_t$$
with $\varepsilon_t$ i.i.d. $\mathcal{N}(0, \sigma^2)$, $|\rho| < 1$ and:
$$\mathbb{E}(Y_t) = \frac{c}{1-\rho} \qquad \mathbb{V}(Y_t) = \frac{\sigma^2}{1-\rho^2}$$
Problem (MLE and AR processes)

Question 1: Denote $\theta = (c; \rho; \sigma^2)^\top$ the $3 \times 1$ vector of parameters and write the likelihood and the log-likelihood of the first observation $y_1$.
Solution

Since the variable $Y_1$ is Gaussian with
$$\mathbb{E}(Y_t) = \frac{c}{1-\rho} \qquad \mathbb{V}(Y_t) = \frac{\sigma^2}{1-\rho^2}$$
the (unconditional) likelihood of $y_1$ is equal to:
$$L_1(\theta; y_1) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma^2/(1-\rho^2)}} \exp\left(-\frac{1}{2} \frac{(y_1 - c/(1-\rho))^2}{\sigma^2/(1-\rho^2)}\right)$$
The (unconditional) log-likelihood of $y_1$ is equal to:
$$\ell_1(\theta; y_1) = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln\left(\frac{\sigma^2}{1-\rho^2}\right) - \frac{1}{2} \frac{(y_1 - c/(1-\rho))^2}{\sigma^2/(1-\rho^2)}$$
Problem (MLE and AR processes)

Question 2: What is the conditional distribution of $Y_2$ given $Y_1 = y_1$? Write the (conditional) likelihood and the (conditional) log-likelihood of the second observation $y_2$.
Solution

For t = 2, we have:
$$Y_2 = c + \rho Y_1 + \varepsilon_2$$
where $\varepsilon_2 \sim \mathcal{N}(0, \sigma^2)$. As a consequence, the conditional distribution of $Y_2$ given $Y_1 = y_1$ is also normal:
$$Y_2 \mid Y_1 = y_1 \sim \mathcal{N}\left(c + \rho y_1, \sigma^2\right)$$
Solution (cont'd)

Given
$$Y_2 \mid Y_1 = y_1 \sim \mathcal{N}\left(c + \rho y_1, \sigma^2\right)$$
the conditional likelihood of $y_2$ is equal to:
$$L_2(\theta; y_2 \mid y_1) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2} \frac{(y_2 - c - \rho y_1)^2}{\sigma^2}\right)$$
The conditional log-likelihood of $y_2$ is equal to:
$$\ell_2(\theta; y_2 \mid y_1) = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{1}{2} \frac{(y_2 - c - \rho y_1)^2}{\sigma^2}$$
Problem (MLE and AR processes)

Question 3: Consider a sample $\{y_1, y_2\}$ of size T = 2. Write the exact likelihood (or full likelihood) and the exact log-likelihood of the AR(1) model for the sample $\{y_1, y_2\}$.

Note that for two continuous random variables X and Y, the pdf of the joint distribution (X, Y) can be written as:
$$f_{X,Y}(x, y) = f_{X|Y=y}(x \mid y) \times f_Y(y)$$
Solution

The exact (or full) likelihood of the sample $\{y_1, y_2\}$ corresponds to the pdf of the joint distribution of $(Y_1, Y_2)$:
$$L_T(\theta; y_1, y_2) = f_{Y_1, Y_2}(y_1, y_2)$$
This joint density can be rewritten as the product of the marginal density of $Y_1$ by the conditional density of $Y_2$ given $Y_1 = y_1$:
$$L_T(\theta; y_1, y_2) = f_{Y_2 | Y_1 = y_1}(y_2 \mid y_1; \theta) \times f_{Y_1}(y_1; \theta)$$
or equivalently:
$$L_T(\theta; y_1, y_2) = L_2(\theta; y_2 \mid y_1) \times L_1(\theta; y_1)$$
Solution (cont'd)

The exact (or full) likelihood of the sample $\{y_1, y_2\}$ is equal to:
$$L_T(\theta; y_1, y_2) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma^2/(1-\rho^2)}} \exp\left(-\frac{1}{2} \frac{(y_1 - c/(1-\rho))^2}{\sigma^2/(1-\rho^2)}\right) \times \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2} \frac{(y_2 - c - \rho y_1)^2}{\sigma^2}\right)$$
Solution (cont'd)

Similarly, the exact (or full) log-likelihood of the sample $\{y_1, y_2\}$ is equal to:
$$\ell_T(\theta; y_1, y_2) = \ell_2(\theta; y_2 \mid y_1) + \ell_1(\theta; y_1)$$
Then, we get:
$$\ell_T(\theta; y_1, y_2) = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln\left(\frac{\sigma^2}{1-\rho^2}\right) - \frac{1}{2} \frac{(y_1 - c/(1-\rho))^2}{\sigma^2/(1-\rho^2)} - \frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{1}{2} \frac{(y_2 - c - \rho y_1)^2}{\sigma^2}$$
Problem (MLE and AR processes)

Question 4: Write the exact likelihood (or full likelihood) and the exact log-likelihood of the AR(1) model for a sample $\{y_1, y_2, \dots, y_T\}$ of size T.
Solution

More generally, we have:
$$L_T(\theta; y_1, \dots, y_T) = L_1(\theta; y_1) \times \prod_{t=2}^T L_t(\theta; y_t \mid y_{t-1})$$
$$\ell_T(\theta; y_1, \dots, y_T) = \ell_1(\theta; y_1) + \sum_{t=2}^T \ell_t(\theta; y_t \mid y_{t-1})$$
Solution (cont'd)

$$L_T(\theta; y) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma^2/(1-\rho^2)}} \exp\left(-\frac{1}{2} \frac{(y_1 - c/(1-\rho))^2}{\sigma^2/(1-\rho^2)}\right) \times \prod_{t=2}^T \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2} \frac{(y_t - c - \rho y_{t-1})^2}{\sigma^2}\right)$$
$$\ell_T(\theta; y) = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln\left(\frac{\sigma^2}{1-\rho^2}\right) - \frac{1}{2} \frac{(y_1 - c/(1-\rho))^2}{\sigma^2/(1-\rho^2)} + \sum_{t=2}^T \left(-\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{1}{2} \frac{(y_t - c - \rho y_{t-1})^2}{\sigma^2}\right)$$
Problem (MLE and AR processes)

Question 5: The exact log-likelihood function is a non-linear function of the parameters $\theta$, so there is no closed-form solution for the exact MLEs. The exact MLE $\widehat{\theta} = (\widehat{c}; \widehat{\rho}; \widehat{\sigma}^2)^\top$ must be determined by numerically maximizing the exact log-likelihood function. Write a Matlab code to
(1) generate a sample of size T = 1,000 from an AR(1) process with c = 1, ρ = 0.5 and σ² = 1. Remark: for the initial condition, generate a normal random variable.
(2) compute the exact MLE.
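The original Matlab code slides for Question 5 are not reproduced here. The following Python/SciPy sketch is an equivalent: it simulates the AR(1), builds the exact log-likelihood of the slides (marginal term for $y_1$ plus Gaussian conditional terms), and maximizes it numerically. The function name `neg_exact_loglik` and the optimizer settings are my choices, not from the slides.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, c0, rho0, s2_0 = 1_000, 1.0, 0.5, 1.0

# (1) Simulate the AR(1); initial condition drawn from the stationary
# distribution N(c/(1-rho), sigma^2/(1-rho^2)).
y = np.empty(T)
y[0] = c0 / (1 - rho0) + np.sqrt(s2_0 / (1 - rho0**2)) * rng.standard_normal()
for t in range(1, T):
    y[t] = c0 + rho0 * y[t - 1] + np.sqrt(s2_0) * rng.standard_normal()

# (2) Exact (full) negative log-likelihood
def neg_exact_loglik(theta, y):
    c, rho, s2 = theta
    if s2 <= 0 or abs(rho) >= 1:
        return np.inf
    m1, v1 = c / (1 - rho), s2 / (1 - rho**2)   # marginal of y_1
    ll = -0.5 * (np.log(2 * np.pi * v1) + (y[0] - m1) ** 2 / v1)
    resid = y[1:] - c - rho * y[:-1]            # conditional terms, t = 2..T
    ll += -0.5 * np.sum(np.log(2 * np.pi * s2) + resid**2 / s2)
    return -ll

res = minimize(neg_exact_loglik, x0=[0.0, 0.0, 2.0], args=(y,),
               method="Nelder-Mead", options={"maxiter": 5000})
print(res.x)  # estimates of (c, rho, sigma^2), close to (1, 0.5, 1)
```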
Problem (MLE and AR processes)

Question 6: Now we consider the first observation $y_1$ as given (deterministic). Then, we have $f_{Y_1}(y_1; \theta) = 1$. Write the conditional log-likelihood of the AR(1) model for a sample $\{y_1, y_2, \dots, y_T\}$ of size T.
Solution

The conditional likelihood is defined by:
$$L_T(\theta; y_2, \dots, y_T \mid y_1) = \prod_{t=2}^T f_{Y_t | Y_{t-1}, Y_1 = y_1}(y_t \mid y_{t-1}, y_1; \theta) \times f_{Y_1}(y_1; \theta) = \prod_{t=2}^T f_{Y_t | Y_{t-1}}(y_t \mid y_{t-1}; \theta)$$
The conditional log-likelihood is defined by:
$$\ell_T(\theta; y_2, \dots, y_T \mid y_1) = \ell_1(\theta; y_1) + \sum_{t=2}^T \ell_t(\theta; y_t \mid y_{t-1}, y_1) = \sum_{t=2}^T \ell_t(\theta; y_t \mid y_{t-1})$$
since $\ell_1(\theta; y_1) = \ln 1 = 0$, where $\ell_t(\theta; y_t \mid y_{t-1}) = \ln f_{Y_t | Y_{t-1}}(y_t \mid y_{t-1}; \theta)$.
Solution (cont'd)

The conditional log-likelihood is then equal to:
$$\ell_T(\theta; y) = \sum_{t=2}^T \left(-\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{1}{2} \frac{(y_t - c - \rho y_{t-1})^2}{\sigma^2}\right)$$
or equivalently
$$\ell_T(\theta; y) = -\frac{(T-1)}{2}\ln(2\pi) - \frac{(T-1)}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2} \sum_{t=2}^T (y_t - c - \rho y_{t-1})^2$$
Problem (MLE and AR processes)
Question 7: Write the likelihood equations associated to the conditional
log-likelihood.
Solution

The ML estimator $\widehat{\theta} = (\widehat{c}; \widehat{\rho}; \widehat{\sigma}^2)^\top$ of $\theta$ is defined by:
$$\widehat{\theta} = \arg\max_{\theta \in \Theta} \ell_T(\theta; y_1, \dots, y_T)$$
The log-likelihood equations are:
$$\left.\frac{\partial \ell_T(\theta; y)}{\partial \theta}\right|_{\widehat{\theta}} = \begin{pmatrix} \left.\partial \ell_T(\theta; y)/\partial c\right|_{\widehat{\theta}} \\ \left.\partial \ell_T(\theta; y)/\partial \rho\right|_{\widehat{\theta}} \\ \left.\partial \ell_T(\theta; y)/\partial \sigma^2\right|_{\widehat{\theta}} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Solution (cont'd)

$$\ell_T(\theta; y) = -\frac{(T-1)}{2}\ln(2\pi) - \frac{(T-1)}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2} \sum_{t=2}^T (y_t - c - \rho y_{t-1})^2$$
$$\left.\frac{\partial \ell_T(\theta; y)}{\partial c}\right|_{\widehat{\theta}} = \frac{1}{\widehat{\sigma}^2} \sum_{t=2}^T \left(y_t - \widehat{c} - \widehat{\rho}\, y_{t-1}\right) = 0$$
$$\left.\frac{\partial \ell_T(\theta; y)}{\partial \rho}\right|_{\widehat{\theta}} = \frac{1}{\widehat{\sigma}^2} \sum_{t=2}^T \left(y_t - \widehat{c} - \widehat{\rho}\, y_{t-1}\right) y_{t-1} = 0$$
$$\left.\frac{\partial \ell_T(\theta; y)}{\partial \sigma^2}\right|_{\widehat{\theta}} = -\frac{(T-1)}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4} \sum_{t=2}^T \left(y_t - \widehat{c} - \widehat{\rho}\, y_{t-1}\right)^2 = 0$$
Problem (MLE and AR processes)

Question 8: Show that the conditional ML estimators $\widehat{c}$ and $\widehat{\rho}$ correspond to the OLS estimators. Give the estimator of $\sigma^2$. Remark: do not verify the SOC at this step.
Solution

The maximisation of $\ell_T(\theta; y)$ with respect to c and ρ
$$\ell_T(\theta; y) = -\frac{(T-1)}{2}\ln(2\pi) - \frac{(T-1)}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2} \sum_{t=2}^T (y_t - c - \rho y_{t-1})^2$$
is equivalent to the minimisation of
$$\sum_{t=2}^T (y_t - c - \rho y_{t-1})^2 = (y - X\beta)^\top (y - X\beta)$$
with $y = (y_2; \dots; y_T)^\top$, $\beta = (c; \rho)^\top$ and $X = (1 : y_{-1})$ with $y_{-1} = (y_1; \dots; y_{T-1})^\top$.
Solution

The conditional ML estimators of c and ρ are equivalent to the ordinary least squares (OLS) estimators obtained in the regression of $y_t$ on a constant and its own lagged value:
$$y_t = c + \rho y_{t-1} + \varepsilon_t$$
$$\begin{pmatrix} \widehat{c} \\ \widehat{\rho} \end{pmatrix} = \begin{pmatrix} T-1 & \sum_{t=2}^T y_{t-1} \\ \sum_{t=2}^T y_{t-1} & \sum_{t=2}^T y_{t-1}^2 \end{pmatrix}^{-1} \begin{pmatrix} \sum_{t=2}^T y_t \\ \sum_{t=2}^T y_{t-1} y_t \end{pmatrix}$$
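The normal-equations formula above can be checked numerically against a generic least-squares routine; this is only an illustrative sketch on simulated data, not part of the original slides.

```python
import numpy as np

rng = np.random.default_rng(1)
T, c, rho = 200, 1.0, 0.5
y = np.empty(T)
y[0] = rng.standard_normal()
for t in range(1, T):
    y[t] = c + rho * y[t - 1] + rng.standard_normal()

# Normal equations written with the sums of the slide
S = np.array([[T - 1, y[:-1].sum()],
              [y[:-1].sum(), (y[:-1] ** 2).sum()]])
b = np.array([y[1:].sum(), (y[:-1] * y[1:]).sum()])
beta_sums = np.linalg.solve(S, b)

# Same estimate from the design-matrix form (X'X)^{-1} X'y
X = np.column_stack([np.ones(T - 1), y[:-1]])
beta_ols, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

print(beta_sums, beta_ols)  # the two vectors coincide
```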
Solution

The ML estimator $\widehat{\sigma}^2$ is defined by:
$$\left.\frac{\partial \ell_T(\theta; y)}{\partial \sigma^2}\right|_{\widehat{\theta}} = -\frac{(T-1)}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4} \sum_{t=2}^T \left(y_t - \widehat{c} - \widehat{\rho}\, y_{t-1}\right)^2 = 0$$
Then, we get:
$$\widehat{\sigma}^2 = \frac{1}{T-1} \sum_{t=2}^T \left(y_t - \widehat{c} - \widehat{\rho}\, y_{t-1}\right)^2 = \frac{1}{T-1} \sum_{t=2}^T \widehat{\varepsilon}_t^2$$
Problem (MLE and AR processes)

Question 9: Write a Matlab code to compute the conditional maximum likelihood estimator $\widehat{\theta} = (\widehat{c}; \widehat{\rho}; \widehat{\sigma}^2)^\top$.
(1) Generate a sample of size T = 1,000 from an AR(1) process with c = 1, ρ = 0.5 and σ² = 1. Remark: for the initial condition, generate a normal random variable.
(2) Compute the conditional MLE.
(3) Compare the ML estimators $\widehat{c}$ and $\widehat{\rho}$ to the OLS ones.
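The original Matlab code slides for Question 9 are not reproduced here. An equivalent Python/SciPy sketch follows: it maximizes the conditional log-likelihood numerically and confirms that the resulting $\widehat{c}$ and $\widehat{\rho}$ coincide with OLS; the helper name `neg_cond_loglik` is mine.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
T = 1_000
y = np.empty(T)
y[0] = rng.standard_normal()  # initial condition: a normal draw
for t in range(1, T):
    y[t] = 1.0 + 0.5 * y[t - 1] + rng.standard_normal()

# Conditional negative log-likelihood (y_1 treated as given)
def neg_cond_loglik(theta, y):
    c, rho, s2 = theta
    if s2 <= 0:
        return np.inf
    resid = y[1:] - c - rho * y[:-1]
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + resid**2 / s2)

mle = minimize(neg_cond_loglik, x0=[0.0, 0.0, 2.0], args=(y,),
               method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000}).x

# OLS of y_t on (1, y_{t-1}) gives the same c and rho in closed form,
# and sigma2_hat is the mean squared residual over T-1 observations.
X = np.column_stack([np.ones(T - 1), y[:-1]])
beta_ols, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
s2_ols = np.mean((y[1:] - X @ beta_ols) ** 2)

print(mle[:2], beta_ols)  # the two pairs coincide (up to optimizer tolerance)
print(mle[2], s2_ols)
```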
Problem (MLE and AR processes)

Question 10: Write the average Fisher information matrix associated to the conditional likelihood.
Solution

In general, for a conditional model, in order to compute the average information matrix $\mathcal{I}(\theta)$ for one observation:

Step 1: Compute the Hessian matrix or the score vector for one observation
$$H_i(\theta; Y_i \mid x_i) = \frac{\partial^2 \ell_i(\theta; Y_i \mid x_i)}{\partial \theta \partial \theta^\top} \qquad s_i(\theta; Y_i \mid x_i) = \frac{\partial \ell_i(\theta; Y_i \mid x_i)}{\partial \theta}$$
Step 2: Take the expectation (or the variance) with respect to the conditional distribution $Y_i \mid X_i = x_i$:
$$\mathcal{I}_i(\theta) = \mathbb{V}_\theta\left(s_i(\theta; Y_i \mid x_i)\right) = -\mathbb{E}_\theta\left(H_i(\theta; Y_i \mid x_i)\right)$$
Step 3: Then take the expectation with respect to the conditioning variable X:
$$\mathcal{I}(\theta) = \mathbb{E}_X\left(\mathcal{I}_i(\theta)\right)$$
Solution (cont'd)

Step 1:
$$\frac{\partial \ell_t(\theta; y_t)}{\partial c} = \frac{1}{\sigma^2}(y_t - c - \rho y_{t-1}) \qquad \frac{\partial \ell_t(\theta; y_t)}{\partial \rho} = \frac{1}{\sigma^2}(y_t - c - \rho y_{t-1})\, y_{t-1}$$
$$\frac{\partial \ell_t(\theta; y_t)}{\partial \sigma^2} = -\frac{1}{2\sigma^2} + \frac{1}{2\sigma^4}(y_t - c - \rho y_{t-1})^2$$
The Hessian matrix for one observation is defined by:
$$H_t(\theta; Y_t \mid y_{t-1}) = \begin{pmatrix} -1/\sigma^2 & -y_{t-1}/\sigma^2 & -\varepsilon_t/\sigma^4 \\ -y_{t-1}/\sigma^2 & -y_{t-1}^2/\sigma^2 & -\varepsilon_t y_{t-1}/\sigma^4 \\ -\varepsilon_t/\sigma^4 & -\varepsilon_t y_{t-1}/\sigma^4 & 1/2\sigma^4 - \varepsilon_t^2/\sigma^6 \end{pmatrix}$$
with $\varepsilon_t = y_t - c - \rho y_{t-1}$.
Solution (cont'd)

$$H_t(\theta; Y_t \mid y_{t-1}) = \begin{pmatrix} -1/\sigma^2 & -y_{t-1}/\sigma^2 & -\varepsilon_t/\sigma^4 \\ -y_{t-1}/\sigma^2 & -y_{t-1}^2/\sigma^2 & -\varepsilon_t y_{t-1}/\sigma^4 \\ -\varepsilon_t/\sigma^4 & -\varepsilon_t y_{t-1}/\sigma^4 & 1/2\sigma^4 - \varepsilon_t^2/\sigma^6 \end{pmatrix}$$
Step 2: Take the expectation (or the variance) with respect to the conditional distribution $Y_t \mid Y_{t-1} = y_{t-1}$:
$$\mathcal{I}_t(\theta) = -\mathbb{E}_\theta\left(H_t(\theta; Y_t \mid y_{t-1})\right) = \begin{pmatrix} 1/\sigma^2 & y_{t-1}/\sigma^2 & 0 \\ y_{t-1}/\sigma^2 & y_{t-1}^2/\sigma^2 & 0 \\ 0 & 0 & 1/2\sigma^4 \end{pmatrix}$$
since $\mathbb{E}_\theta(\varepsilon_t) = 0$, $\mathbb{E}_\theta(\varepsilon_t y_{t-1}) = y_{t-1}\mathbb{E}_\theta(\varepsilon_t) = 0$ and $\mathbb{E}_\theta(\varepsilon_t^2) = \sigma^2$.
Solution (cont'd)

$$\mathcal{I}_t(\theta) = \begin{pmatrix} 1/\sigma^2 & y_{t-1}/\sigma^2 & 0 \\ y_{t-1}/\sigma^2 & y_{t-1}^2/\sigma^2 & 0 \\ 0 & 0 & 1/2\sigma^4 \end{pmatrix}$$
Step 3: Then take the expectation with respect to the conditioning variable $x_t = (1 : y_{t-1})$:
$$\mathcal{I}(\theta) = \mathbb{E}_X\left(\mathcal{I}_t(\theta)\right) = \begin{pmatrix} 1/\sigma^2 & \mathbb{E}_X(y_{t-1})/\sigma^2 & 0 \\ \mathbb{E}_X(y_{t-1})/\sigma^2 & \mathbb{E}_X(y_{t-1}^2)/\sigma^2 & 0 \\ 0 & 0 & 1/2\sigma^4 \end{pmatrix}$$
Problem (MLE and AR processes)

Question 11: What is the asymptotic distribution of the conditional MLE? Propose an estimator for the asymptotic variance-covariance matrix of $\widehat{\theta} = (\widehat{c}; \widehat{\rho}; \widehat{\sigma}^2)^\top$.
Solution

Since the log-likelihood is regular, we have:
$$\sqrt{T}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0, \mathcal{I}^{-1}(\theta_0)\right)$$
or equivalently
$$\widehat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0, \frac{1}{T}\mathcal{I}^{-1}(\theta_0)\right)$$
with
$$\mathcal{I}(\theta) = \begin{pmatrix} 1/\sigma^2 & \mathbb{E}_X(y_{t-1})/\sigma^2 & 0 \\ \mathbb{E}_X(y_{t-1})/\sigma^2 & \mathbb{E}_X(y_{t-1}^2)/\sigma^2 & 0 \\ 0 & 0 & 1/2\sigma^4 \end{pmatrix}$$
Solution (cont'd)

An estimator of the asymptotic variance-covariance matrix can be derived from:
$$\widehat{\mathcal{I}}(\widehat{\theta}) = \begin{pmatrix} 1/\widehat{\sigma}^2 & (T-1)^{-1}\sum_{t=2}^T y_{t-1}/\widehat{\sigma}^2 & 0 \\ (T-1)^{-1}\sum_{t=2}^T y_{t-1}/\widehat{\sigma}^2 & (T-1)^{-1}\sum_{t=2}^T y_{t-1}^2/\widehat{\sigma}^2 & 0 \\ 0 & 0 & 1/2\widehat{\sigma}^4 \end{pmatrix}$$
$$\widehat{\mathbb{V}}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T-1}\widehat{\mathcal{I}}^{-1}(\widehat{\theta})$$
where $\widehat{\sigma}^2$ is the ML estimator of $\sigma^2$.
Solution (cont'd)

If we denote $X = (1 : y_{-1})$, then we have:
$$\widehat{\mathcal{I}}(\widehat{\theta}) = \begin{pmatrix} (T-1)^{-1} X^\top X / \widehat{\sigma}^2 & 0_{2 \times 1} \\ 0_{1 \times 2} & 1/2\widehat{\sigma}^4 \end{pmatrix}$$
since
$$X^\top X = \begin{pmatrix} T-1 & \sum_{t=2}^T y_{t-1} \\ \sum_{t=2}^T y_{t-1} & \sum_{t=2}^T y_{t-1}^2 \end{pmatrix}$$
Problem (MLE and AR processes)

Question 12: Write a Matlab code to compute the asymptotic variance-covariance matrix associated to the conditional maximum likelihood estimator $\widehat{\theta} = (\widehat{c}; \widehat{\rho}; \widehat{\sigma}^2)^\top$.
(1) Import the data from the Excel file Chapter2_Exercice2.xls
(2) Compute the asymptotic variance-covariance matrix of the conditional MLE.
(3) Compare your results with the results reported in Eviews.
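The original Matlab code slide for Question 12 is not reproduced here, and the Excel file Chapter2_Exercice2.xls is not available in this text version. The following Python/NumPy sketch applies the variance-covariance formula of the previous slide to a simulated AR(1) series instead of the file's data; the structure (OLS step, block-diagonal $\widehat{\mathcal{I}}$, inversion) is what the exercise asks for.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1_000
y = np.empty(T)
y[0] = rng.standard_normal()
for t in range(1, T):
    y[t] = 1.0 + 0.5 * y[t - 1] + rng.standard_normal()

# Conditional MLE: (c, rho) via OLS, sigma2_hat from the residuals
X = np.column_stack([np.ones(T - 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ beta
s2 = resid @ resid / (T - 1)

# Estimated average information matrix, block-diagonal as on the slide
I_hat = np.zeros((3, 3))
I_hat[:2, :2] = X.T @ X / ((T - 1) * s2)
I_hat[2, 2] = 1 / (2 * s2**2)

V_asy = np.linalg.inv(I_hat) / (T - 1)
print(np.sqrt(np.diag(V_asy)))  # standard errors of (c_hat, rho_hat, s2_hat)
```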
End of Exercises - Chapter 2