Errors estimation and the asymptotic distribution of probabilistic estimates

$$\operatorname{cov}(\bar u) \;=\; \frac{\sigma^2}{1-\rho^2}
\begin{pmatrix}
1 & \rho & \dots & \rho^{T-1}\\
\rho & 1 & \dots & \rho^{T-2}\\
\vdots & \vdots & \ddots & \vdots\\
\rho^{T-1} & \rho^{T-2} & \dots & 1
\end{pmatrix}
\;=\; \sigma^2 V$$
and we can decompose it as $V^{-1} \equiv M'M$, where

$$M = \begin{pmatrix}
\sqrt{1-\rho^2} & 0 & 0 & \dots & 0 & 0\\
-\rho & 1 & 0 & \dots & 0 & 0\\
0 & -\rho & 1 & \dots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \dots & -\rho & 1
\end{pmatrix}.$$
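As a quick numerical sketch (not part of the original text), the identity $V^{-1} = M'M$ can be verified with NumPy; the dimension $T$ and the value of $\rho$ below are arbitrary illustrative choices.

```python
import numpy as np

rho, T = 0.6, 5  # illustrative values

# V from the text: cov(u) = sigma^2 * V with V = Toeplitz(rho^|i-j|) / (1 - rho^2)
V = np.array([[rho**abs(i - j) for j in range(T)] for i in range(T)]) / (1 - rho**2)

# M: sqrt(1-rho^2) in the top-left entry, 1 on the rest of the diagonal, -rho just below it
M = np.eye(T)
M[0, 0] = np.sqrt(1 - rho**2)
M[np.arange(1, T), np.arange(T - 1)] = -rho

# check the decomposition V^{-1} = M'M
print(np.allclose(np.linalg.inv(V), M.T @ M))  # True
```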
For $\rho$ and $\beta$ we obtain the estimates by the least-squares method and, since $\lim_{T\to\infty}(1-\rho^2)^{1/T} = 1$ for $\rho \in (-1,1)$, by minimizing

$$\frac{\hat\sigma^2(\hat\rho)}{(1-\hat\rho^2)^{1/T}}. \qquad (9)$$
Thus, globally maximizing the likelihood function is equivalent to globally minimizing (9). From an asymptotic viewpoint the two procedures are equivalent, since $\lim_{T\to\infty}(1-\rho^2)^{1/T} = 1$ for every $\rho \in (-1,1)$.
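The equivalence between minimizing (9) and maximizing the concentrated log-likelihood $-\frac{T}{2}\ln\hat\sigma^2(\rho) + \frac{1}{2}\ln(1-\rho^2)$ can be illustrated numerically. The sketch below (not from the original text; the simulated model and all variable names are illustrative assumptions) evaluates both criteria on a grid of $\rho$ values and finds the same optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho_true, beta_true = 200, 0.5, 2.0  # illustrative simulation settings

# simulate y = x*beta + u with AR(1) errors u_t = rho*u_{t-1} + e_t
x = rng.normal(size=T)
u = np.zeros(T)
u[0] = rng.normal() / np.sqrt(1 - rho_true**2)
for t in range(1, T):
    u[t] = rho_true * u[t - 1] + rng.normal()
y = beta_true * x + u

def sigma2_hat(rho):
    # GLS residual variance sigma^2(rho) = u'V^{-1}u / T via the transform M
    M = np.eye(T)
    M[0, 0] = np.sqrt(1 - rho**2)
    M[np.arange(1, T), np.arange(T - 1)] = -rho
    xs, ys = M @ x, M @ y
    beta = xs @ ys / (xs @ xs)
    return np.mean((ys - beta * xs)**2)

grid = np.linspace(-0.95, 0.95, 381)
crit = [sigma2_hat(r) / (1 - r**2)**(1 / T) for r in grid]  # criterion (9)
loglik = [-T / 2 * np.log(sigma2_hat(r)) + 0.5 * np.log(1 - r**2) for r in grid]

# both criteria pick the same rho-hat on the grid
print(grid[np.argmin(crit)], grid[np.argmax(loglik)])
```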
III. The asymptotic distribution of the maximum likelihood estimates in the autoregressive errors model
We now return to the estimates of Section II and study their asymptotic distribution.
We first introduce the notation $\gamma' = (\sigma^2, \rho, \beta')$ and observe that the estimate satisfies the equality $\frac{\partial L}{\partial\gamma} = 0$.
Since the gradient vanishes at the estimate $\bar\gamma$, expanding the gradient of the likelihood function around the vector $\gamma_0$ gives

$$\frac{\partial L}{\partial\gamma}(\gamma_0) = -\frac{\partial^2 L}{\partial\gamma\,\partial\gamma'}(\gamma_0)\,(\bar\gamma - \gamma_0) + \text{third-order terms}. \qquad (10)$$
We now "drop" the third-order terms above, because in this context they tend to zero. Here $\frac{\partial L}{\partial\gamma}(\gamma_0)$ is the gradient of the likelihood function evaluated at $\gamma_0$.
We write $\frac{\partial L}{\partial\rho}$ or $\frac{\partial^2 L}{\partial\gamma\,\partial\gamma'}$ (the argument $\gamma_0$ being implicitly understood), and then observe that:

$$\frac{\partial L}{\partial\sigma^2} = -\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4}\,u'V^{-1}u, \qquad \frac{\partial L}{\partial\rho} = -\frac{\rho}{1-\rho^2}\,\cdots$$
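The formula for $\frac{\partial L}{\partial\sigma^2}$ follows from the Gaussian log-likelihood $L = -\frac{T}{2}\ln(2\pi\sigma^2) + \frac{1}{2}\ln(1-\rho^2) - \frac{1}{2\sigma^2}u'V^{-1}u$. As a sanity check (not from the original text; all parameter values below are illustrative), the sketch compares the analytic derivative with a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho, s2 = 50, 0.4, 1.5  # illustrative values
u = rng.normal(size=T)

# V and its inverse as in the text
V = np.array([[rho**abs(i - j) for j in range(T)] for i in range(T)]) / (1 - rho**2)
q = u @ np.linalg.inv(V) @ u  # u'V^{-1}u

def L(s2):
    # log-likelihood as a function of sigma^2 (rho and u held fixed)
    return -T / 2 * np.log(2 * np.pi * s2) + 0.5 * np.log(1 - rho**2) - q / (2 * s2)

analytic = -T / (2 * s2) + q / (2 * s2**2)          # dL/d(sigma^2) from the text
h = 1e-6
numeric = (L(s2 + h) - L(s2 - h)) / (2 * h)          # central finite difference
print(analytic, numeric)  # the two values agree closely
```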