Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2010, Article ID 956907, 30 pages
doi:10.1155/2010/956907
Research Article
QML Estimators in Linear Regression Models with
Functional Coefficient Autoregressive Processes
Hongchang Hu
School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China
Correspondence should be addressed to Hongchang Hu, retutome@163.com
Received 30 December 2009; Revised 19 March 2010; Accepted 6 April 2010
Academic Editor: Massimo Scalia
Copyright © 2010 Hongchang Hu. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
This paper studies a linear regression model whose errors are functional coefficient autoregressive
processes. First, the quasi-maximum likelihood (QML) estimators of the unknown parameters
are given. Second, under general conditions, the asymptotic properties (existence, consistency,
and asymptotic distributions) of the QML estimators are investigated. These results extend those
of Maller (2003), White (1959), Brockwell and Davis (1987), and others. Last, the validity and
feasibility of the method are illustrated by a simulation example and a real example.
1. Introduction
Consider the following linear regression model:

    y_t = x_t^T β + ε_t,  t = 1, 2, ..., n,  (1.1)

where the y_t are scalar response variables, the x_t are explanatory variables, β is a d-dimensional
unknown parameter, and the ε_t are functional coefficient autoregressive processes given by

    ε_1 = η_1,  ε_t = f_t(θ) ε_{t−1} + η_t,  t = 2, 3, ..., n,  (1.2)

where the η_t are independent and identically distributed random errors with zero mean and
finite variance σ², θ is a one-dimensional unknown parameter, and f_t(θ) is a real-valued
function defined on a compact set Θ ⊂ R¹ which contains the true value θ_0 as an interior point.
The values of θ_0 and σ² are unknown.
2
Mathematical Problems in Engineering
Model (1.1) includes many special cases, such as the ordinary linear regression model
when f_t(θ) ≡ 0 (see [1–11]). In the sequel, we always assume that f_t(θ) ≠ 0 for some θ ∈ Θ.
Model (1.1) is a linear regression model with constant coefficient autoregressive errors when
f_t(θ) = θ (see Maller [12], Pere [13], and Fuller [14]); a time-dependent and functional
coefficient autoregressive process when β = 0 (see Kwoun and Yajima [15]); a constant
coefficient autoregressive process when f_t(θ) = θ and β = 0 (see White [16, 17], Hamilton [18],
Brockwell and Davis [19], and Abadir and Lucas [20]); and a time-dependent or time-varying
autoregressive process when f_t(θ) = a_t and β = 0 (see Carsoule and Franses [21], Azrak
and Mélard [22], and Dahlhaus [23]).
Regression analysis is one of the most mature and widely applied branches of
statistics, and linear regression is among the most widely used statistical techniques.
Its applications occur in almost every field, including engineering, economics, the physical
sciences, management, the life and biological sciences, and the social sciences. The linear
regression model is the most important and popular model in the statistical literature, and
many statisticians have worked on estimating its coefficients. For the ordinary linear
regression model with independent and identically distributed errors, Bai and Guo [1],
Chen [2], Anderson and Taylor [3], Drygas [4], González-Rodríguez et al. [5], Hampel et al. [6],
He [7], Cui [8], Durbin [9], Hoerl and Kennard [10], Li and Yang [11], and Zhang et al. [24]
used various estimation methods (least squares, robust estimation, biased estimation, and
Bayes estimation) to obtain estimators of the unknown parameters in (1.1) and discussed
large- and small-sample properties of these estimators.
However, the independence assumption for the errors is not always appropriate in
applications, especially for sequentially collected economic and physical data, which often
exhibit evident dependence in the errors. Recently, linear regression with serially correlated
errors has attracted increasing attention from statisticians. One case of considerable interest
is that in which the errors are autoregressive processes; the asymptotic theory of the
corresponding estimator was developed by Hannan and Kavalieris [25]. Fox and Taqqu [26]
established its asymptotic normality in the case of long-memory stationary Gaussian error
observations. Giraitis and Surgailis [27] extended this result to non-Gaussian linear
sequences. The asymptotic distribution of the maximum likelihood estimator was studied by
Giraitis and Koul [28] and Koul [29] when the errors are nonlinear instantaneous functions of
a Gaussian long-memory sequence. Koul and Surgailis [30] established the asymptotic
normality of the Whittle estimator in linear regression models with non-Gaussian long-memory
moving average errors. When the errors are Gaussian, or a function of Gaussian random
variables that are strictly stationary and long-range dependent, Koul and Mukherjee [31]
investigated the linear model. Shiohama and Taniguchi [32] estimated the regression
parameters in a linear regression model with autoregressive errors.
The constant, functional, or random coefficient autoregressive model has itself
attracted much attention and has been applied in many fields, such as economics, physics,
geography, geology, biology, and agriculture. Fan and Yao [33], Berk [34], Hannan and
Kavalieris [35], Goldenshluger and Zeevi [36], Liebscher [37], An et al. [38], Elsebach [39],
Carsoule and Franses [21], Baran et al. [40], Distaso [41], and Harvill and Ray [42] used
various estimation methods (least squares, the Yule–Walker method, stochastic
approximation, and robust estimation) to obtain estimators and discussed their asymptotic
properties, or investigated hypothesis testing.
This paper discusses the model (1.1)-(1.2), covering both stationary and explosive
processes. The organization of the paper is as follows. In Section 2, estimators of β, θ,
and σ² are obtained by the quasi-maximum likelihood method. In Section 3, the existence,
consistency, and asymptotic normality of the QML estimators are investigated under general
conditions. Some preliminary lemmas are presented in Section 4. The main proofs are given
in Section 5, with some examples in Section 6.
2. Estimation Method

Write the "true" model as

    y_t = x_t^T β_0 + e_t,  t = 1, 2, ..., n,  (2.1)
    e_1 = η_1,  e_t = f_t(θ_0) e_{t−1} + η_t,  t = 2, 3, ..., n,  (2.2)

where f'_t(θ_0) = df_t(θ)/dθ |_{θ=θ_0} ≠ 0 and the η_t are i.i.d. errors with zero mean and finite
variance σ_0². Define Π_{i=0}^{−1} f_{t−i}(θ_0) = 1; then by (2.2) we have

    e_t = Σ_{j=0}^{t−1} ( Π_{i=0}^{j−1} f_{t−i}(θ_0) ) η_{t−j}.  (2.3)

Thus e_t is measurable with respect to the σ-field H_t generated by η_1, η_2, ..., η_t, and

    E e_t = 0,  Var e_t = σ_0² Σ_{j=0}^{t−1} Π_{i=0}^{j−1} f_{t−i}²(θ_0).  (2.4)
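The representation (2.3) and the variance formula (2.4) can be checked numerically. The following sketch (not from the paper) simulates the recursion (2.2) for a hypothetical coefficient function f_t(θ_0) = 0.8 cos t, chosen only for illustration, and compares the Monte Carlo variance of e_t with (2.4):

```python
import math
import random

random.seed(3)

# Numeric check of (2.3)-(2.4); f_t(theta0) = 0.8*cos(t) is a hypothetical
# stable coefficient function used only for illustration, with sigma0 = 1.
sigma0 = 1.0

def f(t):
    return 0.8 * math.cos(t)

def var_formula(t):
    """Var e_t from (2.4): sigma0^2 * sum_{j=0}^{t-1} prod_{i=0}^{j-1} f(t-i)^2."""
    total = 0.0
    for j in range(t):
        prod = 1.0
        for i in range(j):
            prod *= f(t - i) ** 2
        total += prod
    return sigma0 ** 2 * total

t_max, reps, acc = 10, 100000, 0.0
for _ in range(reps):
    e = random.gauss(0.0, sigma0)                 # e_1 = eta_1
    for t in range(2, t_max + 1):
        e = f(t) * e + random.gauss(0.0, sigma0)  # e_t = f_t(theta0)*e_{t-1} + eta_t
    acc += e * e
mc_var = acc / reps
print(mc_var, var_formula(t_max))  # the two values agree closely
```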
Assume at first that the η_t are i.i.d. N(0, σ²). Using arguments similar to those of
Fuller [14] or Maller [12], we get the log-likelihood of y_2, y_3, ..., y_n conditional on y_1:

    Ψ_n(β, θ, σ²) = log L_n = −((n−1)/2) log σ² − (1/(2σ²)) Σ_{t=2}^n (ε_t − f_t(θ)ε_{t−1})² − ((n−1)/2) log(2π).  (2.5)

At this stage we drop the normality assumption but still maximize (2.5) to obtain QML
estimators, denoted by (σ̂_n², β̂_n, θ̂_n) when they exist:

    ∂Ψ_n/∂σ² = −(n−1)/(2σ²) + (1/(2σ⁴)) Σ_{t=2}^n (ε_t − f_t(θ)ε_{t−1})²,  (2.6)
    ∂Ψ_n/∂θ = (1/σ²) Σ_{t=2}^n f'_t(θ) (ε_t − f_t(θ)ε_{t−1}) ε_{t−1},  (2.7)
    ∂Ψ_n/∂β = (1/σ²) Σ_{t=2}^n (ε_t − f_t(θ)ε_{t−1}) (x_t − f_t(θ)x_{t−1}).  (2.8)
Thus (σ̂_n², β̂_n, θ̂_n) satisfy the following estimating equations:

    σ̂_n² = (1/(n−1)) Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1})²,  (2.9)
    Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1}) f'_t(θ̂_n) ε̂_{t−1} = 0,  (2.10)
    Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1}) (x_t − f_t(θ̂_n)x_{t−1}) = 0,  (2.11)

where

    ε̂_t = y_t − x_t^T β̂_n.  (2.12)
Remark 2.1. If f_t(θ) = θ, then the above equations become the same as Maller's [12]; our
QML estimators therefore extend those of Maller [12].
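As an illustration (a sketch on simulated data, not code from the paper), in the constant-coefficient case f_t(θ) = θ of Remark 2.1 the estimating equations (2.9)–(2.12) can be solved by simple alternation: given θ, (2.11) is a least squares problem in β, and given β, (2.10) gives the AR(1) slope of the residuals, a Cochrane–Orcutt style iteration:

```python
import random

random.seed(1)

# Simulated data for (1.1)-(1.2) with f_t(theta) = theta (Remark 2.1):
# y_t = x_t*beta + eps_t, eps_t = theta*eps_{t-1} + eta_t; scalar x_t for brevity.
n, beta0, theta0 = 500, 3.5, 0.6
x = [t / 20.0 for t in range(1, n + 1)]
eps = [random.gauss(0.0, 1.0)]
for _ in range(1, n):
    eps.append(theta0 * eps[-1] + random.gauss(0.0, 1.0))
y = [x[i] * beta0 + eps[i] for i in range(n)]

# Alternate between the estimating equations: given theta, (2.11) is least
# squares of y_t - theta*y_{t-1} on x_t - theta*x_{t-1}; given beta, (2.10)
# gives the AR(1) slope of the residuals (2.12).
beta_hat, theta_hat = 0.0, 0.0
for _ in range(50):
    ys = [y[t] - theta_hat * y[t - 1] for t in range(1, n)]
    xs = [x[t] - theta_hat * x[t - 1] for t in range(1, n)]
    beta_hat = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)
    res = [y[t] - x[t] * beta_hat for t in range(n)]            # eq. (2.12)
    theta_hat = (sum(res[t] * res[t - 1] for t in range(1, n))
                 / sum(res[t - 1] ** 2 for t in range(1, n)))   # eq. (2.10)

res = [y[t] - x[t] * beta_hat for t in range(n)]
sigma2_hat = sum((res[t] - theta_hat * res[t - 1]) ** 2
                 for t in range(1, n)) / (n - 1)                # eq. (2.9)
print(beta_hat, theta_hat, sigma2_hat)
```

With the sample size above, the three estimates land close to the generating values (3.5, 0.6, 1).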
To compute the QML estimates numerically, one may use the grid search method, the
steepest ascent method, the Newton–Raphson method, or a modified Newton–Raphson
method. For the computations in Section 6, we use the popular modified Newton–Raphson
method proposed by Davidon, Fletcher, and Powell (see Hamilton [18]).
Let the (d+2)×1 vector ϑ^{(m)} = (σ^{(m)2}, β^{(m)}, θ^{(m)}) denote the estimate of ϑ = (σ², β, θ)
calculated at the m-th iteration, and let A_m denote an estimate of H(ϑ^{(m)})^{−1}. The new
estimate ϑ^{(m+1)} is given by

    ϑ^{(m+1)} = ϑ^{(m)} + s A_m g(ϑ^{(m)}),  (2.13)

where s is the positive scalar that maximizes Ψ_n(ϑ^{(m)} + s A_m g(ϑ^{(m)})), the (d+2)×1
gradient vector is

    g(ϑ^{(m)}) = ∂Ψ_n(ϑ)/∂ϑ |_{ϑ=ϑ^{(m)}} = ( ∂Ψ_n/∂σ² |_{σ²=σ^{(m)2}}, ∂Ψ_n/∂β |_{β=β^{(m)}}, ∂Ψ_n/∂θ |_{θ=θ^{(m)}} )^T,  (2.14)

and the (d+2)×(d+2) symmetric matrix is

    H(ϑ^{(m)}) = [ ∂²Ψ_n/∂(σ²)²   ∂²Ψ_n/∂σ²∂β   ∂²Ψ_n/∂σ²∂θ ;
                   *              ∂²Ψ_n/∂β∂β^T  ∂²Ψ_n/∂β∂θ ;
                   *              *             ∂²Ψ_n/∂θ²  ] |_{ϑ=ϑ^{(m)}},  (2.15)

where the entries marked * are filled in by symmetry.
Here

    ∂²Ψ_n/∂(σ²)² = (n−1)/(2σ⁴) − (1/σ⁶) Σ_{t=2}^n (ε_t − f_t(θ)ε_{t−1})²,
    ∂²Ψ_n/∂σ²∂β = −(1/σ⁴) Σ_{t=2}^n (ε_t − f_t(θ)ε_{t−1}) (x_t − f_t(θ)x_{t−1})^T,
    ∂²Ψ_n/∂σ²∂θ = −(1/σ⁴) Σ_{t=2}^n (ε_t − f_t(θ)ε_{t−1}) f'_t(θ) ε_{t−1},  (2.16)

    ∂²Ψ_n/∂β∂β^T = −(1/σ²) Σ_{t=2}^n (x_t − f_t(θ)x_{t−1}) (x_t − f_t(θ)x_{t−1})^T,
    ∂²Ψ_n/∂β∂θ = −(1/σ²) Σ_{t=2}^n ( f'_t(θ)ε_{t−1}x_t + f'_t(θ)ε_t x_{t−1} − 2 f_t(θ)f'_t(θ) x_{t−1}ε_{t−1} ),  (2.17)

    ∂²Ψ_n/∂θ² = −(1/σ²) Σ_{t=2}^n ( (f'_t²(θ) + f_t(θ)f''_t(θ)) ε²_{t−1} − f''_t(θ) ε_t ε_{t−1} ).  (2.18)
Once ϑ^{(m+1)} and the gradient at ϑ^{(m+1)} have been calculated, a new estimate A_{m+1}
is found from

    A_{m+1} = A_m − [ A_m Δg^{(m+1)} (Δg^{(m+1)})^T A_m ] / [ (Δg^{(m+1)})^T A_m Δg^{(m+1)} ]
              + [ Δϑ^{(m+1)} (Δϑ^{(m+1)})^T ] / [ (Δϑ^{(m+1)})^T Δg^{(m+1)} ],  (2.19)

where

    Δϑ^{(m+1)} = ϑ^{(m+1)} − ϑ^{(m)},  Δg^{(m+1)} = g(ϑ^{(m+1)}) − g(ϑ^{(m)}).  (2.20)
It is well known that the least squares estimators in the ordinary linear regression model
are very good estimators, so a natural procedure is to start the iteration with (β^{(0)}, σ^{(0)2}),
the least squares estimators of β and σ², respectively, and to take θ^{(0)} such that
f_t(θ^{(0)}) = 0. Iterations are stopped when a termination criterion is reached, for example,
when

    ‖ϑ^{(m+1)} − ϑ^{(m)}‖ / ‖ϑ^{(m)}‖ < δ  (2.21)

for some prechosen small number δ > 0.
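A generic sketch of the iteration (2.13)–(2.21) is below (an illustration, not the paper's code). It maximizes a toy concave quadratic rather than the likelihood (2.5), and a crude backtracking search stands in for the exact maximizing scalar s:

```python
# Sketch of the Davidon-Fletcher-Powell iteration (2.13)-(2.21); illustrative
# only, applied to a toy concave quadratic instead of the likelihood (2.5).

def dfp_maximize(psi, grad, x0, delta=1e-9, max_iter=200):
    dim = len(x0)
    A = [[float(i == j) for j in range(dim)] for i in range(dim)]  # A_0 = I
    x, g = list(x0), grad(x0)
    for _ in range(max_iter):
        d = [sum(A[i][j] * g[j] for j in range(dim)) for i in range(dim)]
        s = 1.0                                      # step length for (2.13)
        while True:
            x_new = [x[i] + s * d[i] for i in range(dim)]
            if psi(x_new) > psi(x) or s < 1e-12:
                break
            s *= 0.5
        g_new = grad(x_new)
        dx = [x_new[i] - x[i] for i in range(dim)]   # Delta-theta of (2.20)
        dg = [g_new[i] - g[i] for i in range(dim)]   # Delta-g of (2.20)
        Adg = [sum(A[i][j] * dg[j] for j in range(dim)) for i in range(dim)]
        dgAdg = sum(dg[i] * Adg[i] for i in range(dim))
        dxdg = sum(dx[i] * dg[i] for i in range(dim))
        if dgAdg > 1e-12 and dxdg < -1e-12:
            # rank-two update in the spirit of (2.19); along an ascent step
            # dx.dg < 0, so dividing by -dxdg keeps A positive definite
            for i in range(dim):
                for j in range(dim):
                    A[i][j] += dx[i] * dx[j] / (-dxdg) - Adg[i] * Adg[j] / dgAdg
        x, g = x_new, g_new
        if sum(v * v for v in dx) ** 0.5 < delta:    # stopping rule as in (2.21)
            break
    return x

# Toy objective Psi(x) = -(x1 - 1)^2 - 2*(x2 + 3)^2, maximized at (1, -3).
xhat = dfp_maximize(lambda v: -(v[0] - 1) ** 2 - 2 * (v[1] + 3) ** 2,
                    lambda v: [-2 * (v[0] - 1), -4 * (v[1] + 3)],
                    [0.0, 0.0])
print(xhat)
```

The sign convention in the update is written for maximization; with an exact line search it reduces to the standard DFP scheme.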
Up to this point, we have obtained the QML estimates when the function f_t(θ) = f(t, θ) is
known. In practice, however, f_t(θ) is never known, and we have to estimate it. By (2.12)
and (1.2), we obtain

    f̂(t, θ̂_n) = ε̂_t / ε̂_{t−1},  t = 2, 3, ..., n.  (2.22)

Based on the dataset {f̂(t, θ̂_n), t = 2, 3, ..., n}, we may obtain an estimate f̂(t, θ) of
f(t, θ) by smoothing methods (see Simonoff [43], Fan and Yao [33], Green and
Silverman [44], Fan and Gijbels [45], etc.).
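The smoothing step can be sketched as follows (an illustration with synthetic data, not the paper's implementation): a Nadaraya–Watson kernel smoother, one standard choice among the methods cited, applied to stand-in ratio data of the kind produced by (2.22):

```python
import math

# Synthetic stand-ins for the ratios eps_t/eps_{t-1} of (2.22): a smooth
# trend 0.5*sin(pi*u) plus a high-frequency disturbance (hypothetical data).
t_grid = [t / 100.0 for t in range(2, 101)]
f_hat = [0.5 * math.sin(math.pi * u) + 0.05 * math.cos(37.0 * u) for u in t_grid]

def nw_smooth(ts, ys, t0, h):
    """Nadaraya-Watson smoother with a Gaussian kernel and bandwidth h."""
    w = [math.exp(-0.5 * ((t0 - t) / h) ** 2) for t in ts]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

smoothed = [nw_smooth(t_grid, f_hat, u, 0.05) for u in t_grid]
print(smoothed[49])  # near the trend value 0.5*sin(pi*0.51)
```

Away from the boundary, the smoothed curve tracks the trend while damping the high-frequency component; bandwidth choice is the usual bias-variance trade-off.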
To obtain our results, the following conditions are sufficient.
(A1) X_n = Σ_{t=2}^n x_t x_t^T is positive definite for sufficiently large n,

    lim_{n→∞} max_{1≤t≤n} x_t^T X_n^{−1} x_t = 0,  (2.23)
    limsup_{n→∞} |λ|_max( X_n^{−1/2} Z_n X_n^{−T/2} ) < 1,  (2.24)

where Z_n = (1/2) Σ_{t=2}^n (x_t x_{t−1}^T + x_{t−1} x_t^T) and |λ|_max(·) denotes the maximum in
absolute value of the eigenvalues of a symmetric matrix.

(A2) There is a constant α > 0 such that

    Σ_{j=1}^{t} Π_{i=0}^{j−1} f_{t−i}²(θ) ≤ α  (2.25)

for any t ∈ {1, 2, ..., n} and θ ∈ Θ.

(A3) The derivatives f'_t(θ) = df_t(θ)/dθ and f''_t(θ) = d²f_t(θ)/dθ² exist and are bounded for
any t and θ ∈ Θ.
Remark 2.2. Maller [12] applied condition (A1), and Kwoun and Yajima [15] used
conditions (A2) and (A3); thus our conditions are general. (A1) delineates the class of x_t for
which our results hold in the sense required; it is further discussed by Maller [12]. Kwoun
and Yajima [15] call {e_t} stable if Var e_t is bounded; thus (A2) implies that {e_t} is stable.
However, {e_t} is not stationary. In fact, by (2.3) we obtain

    Cov(e_t, e_{t+k}) = σ_0² ( Π_{i=0}^{k−1} f_{t+k−i}(θ_0) + f_t(θ_0) Π_{i=0}^{k} f_{t+k−i}(θ_0) + ··· + Π_{l=0}^{t−2} f_{t−l}(θ_0) Π_{i=0}^{t+k−2} f_{t+k−i}(θ_0) ),  (2.26)

which depends on t.
For ease of exposition, we introduce the following notation, which will be used later in the
paper. Define the (d+1)-vector φ = (β, θ), and

    S_n(φ) = σ² ∂Ψ_n/∂φ = σ² ( ∂Ψ_n/∂β, ∂Ψ_n/∂θ ),  F_n(φ) = −σ² ∂²Ψ_n/∂φ∂φ^T.  (2.27)
By (2.7) and (2.8), we get

    F_n(φ) = [ X_n(θ)   Σ_{t=2}^n ( f'_t(θ)ε_{t−1}x_t + f'_t(θ)ε_t x_{t−1} − 2 f_t(θ)f'_t(θ) x_{t−1}ε_{t−1} ) ;
               *        Σ_{t=2}^n ( (f'_t²(θ) + f_t(θ)f''_t(θ)) ε²_{t−1} − f''_t(θ) ε_t ε_{t−1} ) ],  (2.28)

where X_n(θ) = −σ² ∂²Ψ_n/∂β∂β^T = Σ_{t=2}^n (x_t − f_t(θ)x_{t−1})(x_t − f_t(θ)x_{t−1})^T and the *
indicates that the element is filled in by symmetry. Thus

    D_n = E F_n(φ_0)
        = [ X_n(θ_0)  0 ;  *  Σ_{t=2}^n ( (f'_t²(θ_0) + f_t(θ_0)f''_t(θ_0)) E e²_{t−1} − f''_t(θ_0) E e_t e_{t−1} ) ]
        = [ X_n(θ_0)  0 ;  *  Σ_{t=2}^n f'_t²(θ_0) E e²_{t−1} ]
        = [ X_n(θ_0)  0 ;  *  Δ_n(θ_0, σ_0) ],  (2.29)

since E e_t e_{t−1} = f_t(θ_0) E e²_{t−1}, where

    Δ_n(θ_0, σ_0) = Σ_{t=2}^n f'_t²(θ_0) E e²_{t−1} = σ_0² Σ_{t=2}^n f'_t²(θ_0) Σ_{j=0}^{t−2} Π_{i=0}^{j−1} f²_{t−1−i}(θ_0) = O(n).  (2.30)
3. Statement of Main Results

Theorem 3.1. Suppose that conditions (A1)–(A3) hold. Then there is a sequence A_n ↓ 0 such that,
for each A > 0, as n → ∞,

    P( there are estimators (φ̂_n, σ̂_n²) with S_n(φ̂_n) = 0 and (φ̂_n, σ̂_n²) ∈ Ñ_n(A) ) → 1.  (3.1)

Furthermore,

    (φ̂_n, σ̂_n²) →_p (φ_0, σ_0²),  n → ∞,  (3.2)

where, for each n = 1, 2, ..., A > 0, and A_n ∈ (0, σ_0²), the neighborhoods are defined by

    N_n(A) = { φ ∈ R^{d+1} : (φ − φ_0)^T D_n (φ − φ_0) ≤ A² },
    Ñ_n(A) = N_n(A) ∩ { σ² ∈ [σ_0² − A_n, σ_0² + A_n] }.  (3.3)

Theorem 3.2. Suppose that conditions (A1)–(A3) hold. Then

    (1/σ̂_n) F_n^{T/2}(φ̂_n) (φ̂_n − φ_0) →_D N(0, I_{d+1}),  n → ∞.  (3.4)
Remark 3.3. For θ ∈ R^m, m ∈ N, our results still hold.

In the following, we investigate some special cases of the model (1.1)-(1.2).
Although the following results are obtained directly from Theorems 3.1 and 3.2, we state
them in order to compare them with the corresponding results in the literature.

Corollary 3.4. Let f_t(θ) = θ. If condition (A1) holds, then, for |θ| ≠ 1, (3.1), (3.2), and (3.4) hold.

Remark 3.5. These results are the same as the corresponding results of Maller [12].

Corollary 3.6. If β = 0 and f_t(θ) = θ, then, for |θ| ≠ 1,

    ( Σ_{t=2}^n ε²_{t−1} )^{1/2} (σ̂_n)^{−1} (θ̂_n − θ_0) →_D N(0, 1),  n → ∞,  (3.5)

where

    σ̂_n² = (1/(n−1)) Σ_{t=2}^n (ε_t − θ̂_n ε_{t−1})²,  θ̂_n = Σ_{t=2}^n ε_t ε_{t−1} / Σ_{t=2}^n ε²_{t−1}.  (3.6)

Remark 3.7. These estimators are the same as the least squares estimators (see White [16]).
For |θ| > 1, {ε_t} are explosive processes; in that case, the corollary coincides with the results
of White [17]. When |θ| < 1, note that σ̂_n² →_p σ_0² and (1/(n−1)) Σ_{t=2}^n ε²_{t−1} →_p E ε_t² =
σ_0²/(1 − θ_0²); by Corollary 3.6 we then obtain

    √n (θ̂_n − θ_0) →_D N(0, 1 − θ_0²).  (3.7)

This result was discussed by many authors, such as Fujikoshi and Ochi [46] and Brockwell
and Davis [19].
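As an illustrative numeric check (a sketch, not from the paper), one can simulate a stationary AR(1) with |θ_0| < 1 and verify that the estimator (3.6) centers on θ_0 with spread of the order √((1 − θ_0²)/n) suggested by (3.7):

```python
import random

random.seed(7)

def ar1_theta_hat(theta0, n):
    """Simulate eps_t = theta0*eps_{t-1} + eta_t and return the estimator
    theta_hat = sum eps_t*eps_{t-1} / sum eps_{t-1}^2 of (3.6)."""
    eps = [random.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        eps.append(theta0 * eps[-1] + random.gauss(0.0, 1.0))
    num = sum(eps[t] * eps[t - 1] for t in range(1, n))
    den = sum(eps[t - 1] ** 2 for t in range(1, n))
    return num / den

theta0, n = 0.6, 2000
est = [ar1_theta_hat(theta0, n) for _ in range(200)]
mean = sum(est) / len(est)
sd = (sum((e - mean) ** 2 for e in est) / len(est)) ** 0.5
print(mean, sd)  # mean near 0.6; sd near sqrt((1 - 0.36)/2000), about 0.018
```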
Corollary 3.8. Let β = 0. If conditions (A2) and (A3) hold, then

    F_n^{1/2}(θ̂_n) (σ̂_n)^{−1} (θ̂_n − θ_0) →_D N(0, 1),  n → ∞,  (3.8)

where

    F_n(θ̂_n) = Σ_{t=2}^n ( (f'_t²(θ̂_n) + f_t(θ̂_n)f''_t(θ̂_n)) ε²_{t−1} − f''_t(θ̂_n) ε_t ε_{t−1} ),
    σ̂_n² = (1/(n−1)) Σ_{t=2}^n (ε_t − f_t(θ̂_n)ε_{t−1})².  (3.9)

Corollary 3.9. Let f_t(θ) = a_t. If condition (A1) holds, then

    (1/σ̂_n) ( Σ_{t=2}^n (x_t − a_t x_{t−1})(x_t − a_t x_{t−1})^T )^{T/2} (β̂_n − β_0) →_D N(0, I_d),  n → ∞.  (3.10)

Remark 3.10. Let a_t = 0. Noting that Σ_{t=2}^n x_t x_t^T = O(n) and σ̂_n² →_p σ_0², we easily
obtain from the corollary the asymptotic normality of the quasi-maximum likelihood
(equivalently, least squares) estimator in ordinary linear regression models.
4. Some Lemmas
To prove Theorems 3.1 and 3.2, we first introduce the following lemmas.
Lemma 4.1. The matrix D_n is positive definite for large enough n, with E S_n(φ_0) = 0 and
Var S_n(φ_0) = σ_0² D_n.
Proof. It is easy to show that D_n is positive definite for large enough n. By (2.8), we have

    σ_0² E ∂Ψ_n/∂β |_{β=β_0} = Σ_{t=2}^n E (e_t − f_t(θ_0)e_{t−1})(x_t − f_t(θ_0)x_{t−1})
        = Σ_{t=2}^n (x_t − f_t(θ_0)x_{t−1}) E η_t = 0.  (4.1)

Note that e_{t−1} and η_t are independent of each other; thus by (2.7) and E η_t = 0, we have

    σ_0² E ∂Ψ_n/∂θ |_{θ=θ_0} = Σ_{t=2}^n E (e_t − f_t(θ_0)e_{t−1}) f'_t(θ_0) e_{t−1}
        = Σ_{t=2}^n f'_t(θ_0) E η_t e_{t−1} = 0.  (4.2)

Hence, from (4.1) and (4.2),

    E S_n(φ_0) = σ_0² E ( ∂Ψ_n/∂β |_{β=β_0}, ∂Ψ_n/∂θ |_{θ=θ_0} ) = 0.  (4.3)

By (2.8) and (2.17), we have

    Var( σ_0² ∂Ψ_n/∂β |_{β=β_0} ) = Var( Σ_{t=2}^n (e_t − f_t(θ_0)e_{t−1})(x_t − f_t(θ_0)x_{t−1}) )
        = Var( Σ_{t=2}^n η_t (x_t − f_t(θ_0)x_{t−1}) ) = σ_0² X_n(θ_0).  (4.4)

Note that {f'_t(θ_0)η_t e_{t−1}, H_t} is a martingale difference sequence with

    Var( f'_t(θ_0)η_t e_{t−1} ) = f'_t²(θ_0) E η_t² E e²_{t−1} = σ_0² f'_t²(θ_0) E e²_{t−1},  (4.5)

so

    Var( σ_0² ∂Ψ_n/∂θ |_{θ=θ_0} ) = Var( Σ_{t=2}^n η_t f'_t(θ_0) e_{t−1} )
        = Σ_{t=2}^n σ_0² f'_t²(θ_0) E e²_{t−1} = σ_0² Δ_n(θ_0, σ_0).  (4.6)

By (2.7) and (2.8), and noting again that e_{t−1} and η_t are independent, we have

    Cov( σ_0² ∂Ψ_n/∂β |_{β=β_0}, σ_0² ∂Ψ_n/∂θ |_{θ=θ_0} )
        = E Σ_{t=2}^n η_t (x_t − f_t(θ_0)x_{t−1}) Σ_{s=2}^n η_s f'_s(θ_0) e_{s−1}
        = E Σ_{t=3}^n η_t (x_t − f_t(θ_0)x_{t−1}) Σ_{s=2}^{t−1} η_s f'_s(θ_0) e_{s−1}
            + E Σ_{s=3}^n η_s f'_s(θ_0) e_{s−1} Σ_{t=2}^{s−1} η_t (x_t − f_t(θ_0)x_{t−1}) = 0,  (4.7)

the terms with s = t vanishing as well since E e_{t−1} = 0. From (4.4)–(4.7), it follows that
Var S_n(φ_0) = σ_0² D_n.
Lemma 4.2. If condition (A1) holds, then, for any θ ∈ Θ, the matrix X_n(θ) is positive definite for
large enough n, and

    lim_{n→∞} max_{1≤t≤n} x_t^T X_n^{−1}(θ) x_t = 0.  (4.8)

Proof. Let λ_1 and λ_d be the smallest and largest roots of |Z_n − λ X_n| = 0. Then, from Rao
[47, Ex. 22.1],

    λ_1 ≤ (u^T Z_n u)/(u^T X_n u) ≤ λ_d  (4.9)

for unit vectors u. Thus by (2.24) there are some δ ∈ ( max{0, 1 − (1 + min_{2≤t≤n} f_t²(θ))/
max_{2≤t≤n} |f_t(θ)|}, 1 ) and n_0(δ) such that n ≥ n_0(δ) implies

    |u^T Z_n u| ≤ (1 − δ) u^T X_n u.  (4.10)

Expanding the square and applying (4.10) to the cross term, we have

    u^T X_n(θ) u = Σ_{t=2}^n ( u^T x_t − f_t(θ) u^T x_{t−1} )²
        ≥ ( 1 + min_{2≤t≤n} f_t²(θ) − max_{2≤t≤n} |f_t(θ)| (1 − δ) ) u^T X_n u = C(θ, δ) u^T X_n u,  (4.11)

with C(θ, δ) > 0 by the choice of δ. By Rao [47, page 60] and (2.23), we have

    sup_u (u^T x_t)² / (u^T X_n u) → 0.  (4.12)

From (4.12) and C(θ, δ) > 0,

    x_t^T X_n^{−1}(θ) x_t = sup_u (u^T x_t)² / (u^T X_n(θ) u) ≤ sup_u (u^T x_t)² / ( C(θ, δ) u^T X_n u ) → 0.  (4.13)
Lemma 4.3 (see [48]). Let W_n be a symmetric random matrix with eigenvalues λ_j(n), 1 ≤ j ≤ d.
Then

    W_n →_p I ⇔ λ_j(n) →_p 1,  n → ∞.  (4.14)
Lemma 4.4. For each A > 0,

    sup_{φ∈N_n(A)} ‖ D_n^{−1/2} F_n(φ) D_n^{−T/2} − Φ_n ‖ →_p 0,  n → ∞,  (4.15)

and also

    Φ_n →_D Φ,  (4.16)
    lim_{c→0} limsup_{A→∞} limsup_{n→∞} P( inf_{φ∈N_n(A)} λ_min( D_n^{−1/2} F_n(φ) D_n^{−T/2} ) ≤ c ) = 0,  (4.17)

where

    Φ_n = [ I_d  0 ;  0  ( Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) / Δ_n(θ_0, σ_0) ],  Φ = I_{d+1}.  (4.18)
Proof. Let X_n(θ_0) = X_n^{1/2}(θ_0) X_n^{T/2}(θ_0) be a square root decomposition of X_n(θ_0). Then

    D_n = [ X_n^{1/2}(θ_0)  0 ;  *  √(Δ_n(θ_0, σ_0)) ] [ X_n^{T/2}(θ_0)  0 ;  *  √(Δ_n(θ_0, σ_0)) ] = D_n^{1/2} D_n^{T/2}.  (4.19)

Let φ ∈ N_n(A). Then

    (φ − φ_0)^T D_n (φ − φ_0) = (β − β_0)^T X_n(θ_0)(β − β_0) + (θ − θ_0)² Δ_n(θ_0, σ_0) ≤ A².  (4.20)

From (2.28), (2.29), and (4.18), the matrix D_n^{−1/2} F_n(φ) D_n^{−T/2} − Φ_n equals

    [ X_n^{−1/2}(θ_0) X_n(θ) X_n^{−T/2}(θ_0) − I_d ,
      X_n^{−1/2}(θ_0) Σ_{t=2}^n ( f'_t(θ)ε_{t−1}x_t + f'_t(θ)ε_t x_{t−1} − 2f_t(θ)f'_t(θ)ε_{t−1}x_{t−1} ) / √(Δ_n(θ_0, σ_0)) ;
      * ,
      ( Σ_{t=2}^n ( (f'_t²(θ) + f_t(θ)f''_t(θ))ε²_{t−1} − f''_t(θ)ε_t ε_{t−1} ) − Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) / Δ_n(θ_0, σ_0) ].  (4.21)

Let

    N_n^β(A) = { β : ‖(β − β_0)^T X_n^{1/2}(θ_0)‖² ≤ A² },  (4.22)
    N_n^θ(A) = { θ : |θ − θ_0| ≤ A/√(Δ_n(θ_0, σ_0)) }.  (4.23)

In the first step, we show that, for each A > 0,

    sup_{θ∈N_n^θ(A)} ‖ X_n^{−1/2}(θ_0) X_n(θ) X_n^{−T/2}(θ_0) − I_d ‖ → 0,  n → ∞.  (4.24)

In fact,

    X_n^{−1/2}(θ_0) X_n(θ) X_n^{−T/2}(θ_0) − I_d = X_n^{−1/2}(θ_0) ( X_n(θ) − X_n(θ_0) ) X_n^{−T/2}(θ_0)
        = X_n^{−1/2}(θ_0) (T_1 + T_2 + T_3) X_n^{−T/2}(θ_0),  (4.25)

where

    T_1 = Σ_{t=2}^n (f_t(θ_0) − f_t(θ)) x_{t−1} (x_t − f_t(θ_0)x_{t−1})^T,
    T_2 = Σ_{t=2}^n (f_t(θ_0) − f_t(θ)) (x_t − f_t(θ_0)x_{t−1}) x_{t−1}^T,
    T_3 = Σ_{t=2}^n (f_t(θ_0) − f_t(θ))² x_{t−1} x_{t−1}^T.  (4.26)

Let u, v ∈ R^d with |u| = |v| = 1, and put u_n^T = u^T X_n^{−1/2}(θ_0) and v_n = X_n^{−T/2}(θ_0) v.
By the Cauchy–Schwarz inequality, Lemma 4.2, condition (A3), the mean value theorem (with
θ̄ = aθ + (1 − a)θ_0 for some 0 ≤ a ≤ 1), and θ ∈ N_n^θ(A), we have

    |u_n^T T_1 v_n| ≤ max_{2≤t≤n} |f_t(θ_0) − f_t(θ)| ( Σ_{t=2}^n u_n^T x_{t−1} x_{t−1}^T u_n )^{1/2}
            × ( Σ_{t=2}^n v_n^T (x_t − f_t(θ_0)x_{t−1})(x_t − f_t(θ_0)x_{t−1})^T v_n )^{1/2}
        ≤ max_{2≤t≤n} |f'_t(θ̄)| |θ_0 − θ| ( n max_{1≤t≤n} x_t^T X_n^{−1}(θ_0) x_t )^{1/2}
        ≤ C √( n/Δ_n(θ_0, σ_0) ) o(1) → 0.  (4.27)

Similarly to the proof for T_1, we easily obtain

    u_n^T T_2 v_n → 0.  (4.28)

By the Cauchy–Schwarz inequality, Lemma 4.2, condition (A3), and θ ∈ N_n^θ(A), we have

    |u_n^T T_3 v_n| ≤ n max_{2≤t≤n} f'_t²(θ̄) |θ_0 − θ|² max_{1≤t≤n} x_t^T X_n^{−1}(θ_0) x_t
        ≤ ( nA²/Δ_n(θ_0, σ_0) ) o(1) → 0.  (4.29)

Hence (4.24) follows from (4.25)–(4.29).

In the second step, we show that

    X_n^{−1/2}(θ_0) Σ_{t=2}^n ( f'_t(θ)ε_{t−1}x_t + f'_t(θ)ε_t x_{t−1} − 2f_t(θ)f'_t(θ)ε_{t−1}x_{t−1} ) / √(Δ_n(θ_0, σ_0)) →_p 0.  (4.30)

Note that

    ε_t = y_t − x_t^T β = x_t^T (β_0 − β) + e_t,  ε_t − f_t(θ_0)ε_{t−1} = (x_t − f_t(θ_0)x_{t−1})^T (β_0 − β) + η_t.  (4.31)

Consider

    J = Σ_{t=2}^n ( f'_t(θ)ε_{t−1}x_t + f'_t(θ)ε_t x_{t−1} − 2f_t(θ)f'_t(θ)ε_{t−1}x_{t−1} )
      = Σ_{t=2}^n ( ε_{t−1} f'_t(θ)(x_t − f_t(θ_0)x_{t−1}) + f'_t(θ)(ε_t − f_t(θ_0)ε_{t−1})x_{t−1}
            + 2f'_t(θ)(f_t(θ_0) − f_t(θ))ε_{t−1}x_{t−1} )
      = T_1 + T_2 + T_3 + T_4 + 2T_5 + 2T_6,  (4.32)

where, by (4.31),

    T_1 = Σ_{t=2}^n f'_t(θ) x_{t−1}^T(β_0 − β) (x_t − f_t(θ_0)x_{t−1}),  T_2 = Σ_{t=2}^n f'_t(θ) e_{t−1} (x_t − f_t(θ_0)x_{t−1}),
    T_3 = Σ_{t=2}^n f'_t(θ) (x_t − f_t(θ_0)x_{t−1})^T(β_0 − β) x_{t−1},  T_4 = Σ_{t=2}^n f'_t(θ) η_t x_{t−1},
    T_5 = Σ_{t=2}^n f'_t(θ)(f_t(θ_0) − f_t(θ)) x_{t−1}^T(β_0 − β) x_{t−1},  T_6 = Σ_{t=2}^n f'_t(θ)(f_t(θ_0) − f_t(θ)) e_{t−1} x_{t−1}.  (4.33)

For β ∈ N_n^β(A) and each A > 0, we have

    ( (β_0 − β)^T x_t )² = (β_0 − β)^T X_n^{1/2}(θ_0) X_n^{−1/2}(θ_0) x_t x_t^T X_n^{−T/2}(θ_0) X_n^{T/2}(θ_0) (β_0 − β)
        ≤ max_{1≤t≤n} x_t^T X_n^{−1}(θ_0) x_t (β_0 − β)^T X_n(θ_0)(β_0 − β)
        ≤ A² max_{1≤t≤n} x_t^T X_n^{−1}(θ_0) x_t.  (4.34)

By (4.34) and Lemma 4.2, we have

    sup_{β∈N_n^β(A)} max_{1≤t≤n} |(β_0 − β)^T x_t| → 0,  n → ∞, A > 0.  (4.35)

Using the Cauchy–Schwarz inequality, condition (A3), and (4.35), we obtain

    |u_n^T T_1| ≤ √n max_{1≤t≤n} |(β_0 − β)^T x_t| max_{2≤t≤n} |f'_t(θ)| = o(√n).  (4.36)

Let

    a_tn = u_n^T f'_t(θ) (x_t − f_t(θ_0)x_{t−1}).  (4.37)

Then, from Lemma 4.2,

    max_{2≤t≤n} a_tn² ≤ max_{2≤t≤n} f'_t²(θ) max_{2≤t≤n} (x_t − f_t(θ_0)x_{t−1})^T X_n^{−1}(θ_0) (x_t − f_t(θ_0)x_{t−1}) = o(1).  (4.38)

By condition (A2), the representation (2.3), and (4.38), we have

    Var( u_n^T T_2 ) = Var( Σ_{t=2}^n a_tn e_{t−1} ) = σ_0² Σ_{j=1}^{n−1} ( Σ_{t=j+1}^n a_tn Π_{i=0}^{t−j−2} f_{t−1−i}(θ_0) )²
        ≤ α σ_0² max_{2≤t≤n} |a_tn| n = o(n).  (4.39)

Thus, by the Chebyshev inequality and (4.39),

    u_n^T T_2 = o_p(√n).  (4.40)

Using an argument similar to that for T_1, we obtain

    u_n^T T_3 = o_p(√n).  (4.41)

Using an argument similar to that for T_2, we obtain

    u_n^T T_4 = o_p(√n),  u_n^T T_6 = o_p(√n).  (4.42)

By the Cauchy–Schwarz inequality, (4.35), and (4.27), we get

    |u_n^T T_5| ≤ max_{2≤t≤n} |f'_t(θ)| max_{2≤t≤n} |f'_t(θ̄)| |θ_0 − θ|
            × ( Σ_{t=2}^n (x_{t−1}^T(β_0 − β))² )^{1/2} ( Σ_{t=2}^n (u_n^T x_{t−1})² )^{1/2}
        ≤ C ( A/√(Δ_n(θ_0, σ_0)) ) √n o(1) = o(√n).  (4.43)

Thus (4.30) follows immediately from (4.32), (4.36), and (4.40)–(4.43).

In the third step, we show that

    ( Σ_{t=2}^n ( (f'_t²(θ) + f_t(θ)f''_t(θ))ε²_{t−1} − f''_t(θ)ε_t ε_{t−1} ) − Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) / Δ_n(θ_0, σ_0) →_p 0.  (4.44)

Let

    J̃ = Σ_{t=2}^n ( (f'_t²(θ) + f_t(θ)f''_t(θ))ε²_{t−1} − f''_t(θ)ε_t ε_{t−1} ) − Σ_{t=2}^n f'_t²(θ_0) e²_{t−1}.  (4.45)

Substituting ε_t − f_t(θ_0)ε_{t−1} = (x_t − f_t(θ_0)x_{t−1})^T(β_0 − β) + η_t and ε_{t−1} =
x_{t−1}^T(β_0 − β) + e_{t−1}, and collecting terms, we can write

    J̃ = T_1 + T_2 + 2T_3 − T_4 − T_5,  (4.46)

where

    T_1 = Σ_{t=2}^n f'_t²(θ) ( x_{t−1}^T(β_0 − β) )²,  T_2 = Σ_{t=2}^n ( f'_t²(θ) − f'_t²(θ_0) ) e²_{t−1},
    T_3 = Σ_{t=2}^n f'_t²(θ) x_{t−1}^T(β_0 − β) e_{t−1},  T_4 = Σ_{t=2}^n f''_t(θ) x_{t−1}^T(β_0 − β) η_t,
    T_5 = Σ_{t=2}^n f''_t(θ) η_t e_{t−1}.

By (4.34), it is easy to show that

    T_1 = o(n).  (4.47)

From condition (A3), (2.30), and (4.23), we obtain

    |E T_2| = | Σ_{t=2}^n ( (f'_t²(θ) − f'_t²(θ_0)) / f'_t²(θ_0) ) f'_t²(θ_0) E e²_{t−1} |
        ≤ max_{2≤t≤n} | (f'_t(θ) + f'_t(θ_0)) f''_t(θ̄) / f'_t²(θ_0) | ( A/√(Δ_n(θ_0, σ_0)) ) Δ_n(θ_0, σ_0)
        = o( Δ_n(θ_0, σ_0) ).  (4.48)

Hence, by the Markov inequality,

    T_2 = o_p( Δ_n(θ_0, σ_0) ).  (4.49)

Using an argument similar to (4.40), we easily obtain

    T_3 = o_p(√n).  (4.50)

By the Markov inequality and noting that

    Var(T_4) ≤ σ_0² n max_{2≤t≤n} f''_t²(θ) max_{2≤t≤n} ( x_t^T(β_0 − β) )² = o(n),  (4.51)

we have

    T_4 = o_p(√n).  (4.52)

Using an argument similar to (4.6), we easily obtain

    T_5 = O_p( √(Δ_n(θ_0, σ_0)) ).  (4.53)

Hence (4.44) follows immediately from (4.46), (4.47), and (4.49)–(4.53).

This completes the proof of (4.15), by (4.21), (4.24), (4.30), and (4.44). To prove (4.16), it
remains to show that

    Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} / Δ_n(θ_0, σ_0) →_p 1,  n → ∞,  (4.54)

which follows from (2.30) and the Markov inequality.

Finally, we prove (4.17). By (4.15) and (4.16), we have

    D_n^{−1/2} F_n(φ) D_n^{−T/2} →_p I,  n → ∞,  (4.55)

uniformly in φ ∈ N_n(A) for each A > 0. Thus, by Lemma 4.3,

    λ_min( D_n^{−1/2} F_n(φ) D_n^{−T/2} ) →_p 1,  n → ∞,  (4.56)

and this implies (4.17).
Lemma 4.5 (see [49]). Let {S_ni, F_ni, 1 ≤ i ≤ k_n, n ≥ 1} be a zero-mean, square-integrable
martingale array with differences X_ni, and let η² be an a.s. finite random variable. Suppose that
(i) Σ_i E{ X_ni² I(|X_ni| > ε) | F_{n,i−1} } →_p 0 for all ε > 0, and (ii) Σ_i E{ X_ni² | F_{n,i−1} } →_p η².
Then

    S_{n k_n} = Σ_i X_ni →_D Z,  (4.57)

where the r.v. Z has characteristic function E{ exp(−(1/2) η² t²) }.
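As an illustrative sketch (not from the paper), the martingale central limit behavior in Lemma 4.5 can be seen numerically: a self-normalized martingale sum Σ_t η_t e_{t−1} / (Σ_t e²_{t−1})^{1/2}, built from a stable AR(1) recursion chosen here only for illustration, is approximately standard normal:

```python
import random

random.seed(11)

def normalized_martingale_sum(n, phi=0.5):
    """One replication of S = sum eta_t * e_{t-1} / sqrt(sum e_{t-1}^2),
    a self-normalized martingale of the kind covered by Lemma 4.5
    (eta_t standard normal, e_t = phi*e_{t-1} + eta_t an illustrative AR(1))."""
    e_prev, num, den = random.gauss(0.0, 1.0), 0.0, 0.0
    for _ in range(n - 1):
        eta = random.gauss(0.0, 1.0)
        num += eta * e_prev          # martingale increment eta_t * e_{t-1}
        den += e_prev ** 2           # conditional variance accumulator
        e_prev = phi * e_prev + eta
    return num / den ** 0.5

samples = [normalized_martingale_sum(400) for _ in range(500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # approximately 0 and 1
```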
5. Proofs of Theorems

5.1. Proof of Theorem 3.1

Take A > 0, let

    M_n(A) = { φ ∈ R^{d+1} : (φ − φ_0)^T D_n (φ − φ_0) = A² }  (5.1)

be the boundary of N_n(A), and let φ ∈ M_n(A). Using (2.27) and a Taylor expansion, for each
σ² > 0 we have

    Ψ_n(φ, σ²) = Ψ_n(φ_0, σ²) + (φ − φ_0)^T ∂Ψ_n(φ_0, σ²)/∂φ + (1/2)(φ − φ_0)^T ∂²Ψ_n(φ̄, σ²)/∂φ∂φ^T (φ − φ_0)
        = Ψ_n(φ_0, σ²) + (1/σ²)(φ − φ_0)^T S_n(φ_0) − (1/(2σ²))(φ − φ_0)^T F_n(φ̄)(φ − φ_0),  (5.2)

where φ̄ = aφ + (1 − a)φ_0 for some 0 ≤ a ≤ 1.

Let Q_n(φ) = (1/2)(φ − φ_0)^T F_n(φ̄)(φ − φ_0) and v_n(φ) = (1/A) D_n^{T/2}(φ − φ_0). Take c > 0
and φ ∈ M_n(A); by (5.2) we obtain

    P( Ψ_n(φ, σ²) ≥ Ψ_n(φ_0, σ²) for some φ ∈ M_n(A) )
        ≤ P( (φ − φ_0)^T S_n(φ_0) ≥ Q_n(φ), Q_n(φ) > cA² for some φ ∈ M_n(A) )
            + P( Q_n(φ) ≤ cA² for some φ ∈ M_n(A) )
        ≤ P( v_n^T(φ) D_n^{−1/2} S_n(φ_0) > cA for some φ ∈ M_n(A) )
            + P( v_n^T(φ) D_n^{−1/2} F_n(φ̄) D_n^{−T/2} v_n(φ) ≤ c for some φ ∈ M_n(A) )
        ≤ P( ‖D_n^{−1/2} S_n(φ_0)‖ > cA ) + P( inf_{φ∈N_n(A)} λ_min( D_n^{−1/2} F_n(φ̄) D_n^{−T/2} ) ≤ c ).  (5.3)
By Lemma 4.1 and the Chebyshev inequality, we obtain

    P( ‖D_n^{−1/2} S_n(φ_0)‖ > cA ) ≤ Var( D_n^{−1/2} S_n(φ_0) ) / (c²A²) = σ_0²/(c²A²).  (5.4)

Letting A → ∞ and then c ↓ 0, and using (4.17), we have

    P( inf_{φ∈N_n(A)} λ_min( D_n^{−1/2} F_n(φ̄) D_n^{−T/2} ) ≤ c ) → 0.  (5.5)

By (5.3)–(5.5), we have

    lim_{A→∞} liminf_{n→∞} P( Ψ_n(φ, σ²) < Ψ_n(φ_0, σ²) for all φ ∈ M_n(A) ) = 1.  (5.6)

By Lemma 4.2, λ_min(X_n(θ_0)) → ∞ as n → ∞; hence λ_min(D_n) → ∞. Moreover, from (4.17),
we have

    inf_{φ∈N_n(A)} λ_min( F_n(φ) ) →_p ∞.  (5.7)

This implies that Ψ_n(φ, σ²) is concave on N_n(A). Noting this fact and (5.6), we get

    lim_{A→∞} liminf_{n→∞} P( sup_{φ∈M_n(A)} Ψ_n(φ, σ²) < Ψ_n(φ_0, σ²), Ψ_n(φ, σ²) is concave on N_n(A) ) = 1.  (5.8)

On the event in the brackets, the continuous function Ψ_n(φ, σ²) has a unique maximum in φ
over the compact neighborhood N_n(A). Hence

    lim_{A→∞} liminf_{n→∞} P( S_n(φ̂_n(A)) = 0 for a unique φ̂_n(A) ∈ N_n(A) ) = 1.  (5.9)

Moreover, there is a sequence A_n → ∞ such that φ̂_n = φ̂_n(A_n) satisfies

    liminf_{n→∞} P( S_n(φ̂_n) = 0 and φ̂_n maximizes Ψ_n(φ, σ²) uniquely in N_n(A) ) = 1.  (5.10)

Thus φ̂_n = (β̂_n, θ̂_n) is a QML estimator of φ_0. It is clearly consistent, and

    lim_{A→∞} liminf_{n→∞} P( φ̂_n ∈ N_n(A) ) = 1.  (5.11)

Since φ̂_n = (β̂_n, θ̂_n) is a QML estimator of φ_0, σ̂_n² is a QML estimator of σ_0² from (2.9).
To complete the proof, we show that σ̂_n² →_p σ_0² as n → ∞. If φ̂_n ∈ N_n(A), then
β̂_n ∈ N_n^β(A) and θ̂_n ∈ N_n^θ(A). By (2.12) and (2.1), we have

    ε̂_t − f_t(θ̂_n)ε̂_{t−1} = (x_t − f_t(θ̂_n)x_{t−1})^T (β_0 − β̂_n) + e_t − f_t(θ̂_n)e_{t−1}.  (5.12)

By (2.9), (2.11), and (5.12), we have

    (n − 1) σ̂_n² = Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1})²
        = Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1}) [ (x_t − f_t(θ̂_n)x_{t−1})^T (β_0 − β̂_n) + e_t − f_t(θ̂_n)e_{t−1} ]
        = Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1}) (e_t − f_t(θ̂_n)e_{t−1}),  (5.13)

the first sum vanishing by (2.11). From (5.12) and (2.11), it also follows that

    Σ_{t=2}^n (ε̂_t − f_t(θ̂_n)ε̂_{t−1}) (e_t − f_t(θ̂_n)e_{t−1})
        = Σ_{t=2}^n (e_t − f_t(θ̂_n)e_{t−1})² − Σ_{t=2}^n ( (x_t − f_t(θ̂_n)x_{t−1})^T (β_0 − β̂_n) )².  (5.14)

From (2.2),

    Σ_{t=2}^n (e_t − f_t(θ̂_n)e_{t−1})² = Σ_{t=2}^n ( η_t + (f_t(θ_0) − f_t(θ̂_n)) e_{t−1} )²
        = Σ_{t=2}^n η_t² + 2 Σ_{t=2}^n (f_t(θ_0) − f_t(θ̂_n)) η_t e_{t−1} + Σ_{t=2}^n (f_t(θ_0) − f_t(θ̂_n))² e²_{t−1}.  (5.15)

By (5.13)–(5.15), we have

    (n − 1) σ̂_n² = Σ_{t=2}^n η_t² + 2 Σ_{t=2}^n (f_t(θ_0) − f_t(θ̂_n)) η_t e_{t−1}
        + Σ_{t=2}^n (f_t(θ_0) − f_t(θ̂_n))² e²_{t−1} − Σ_{t=2}^n ( (x_t − f_t(θ̂_n)x_{t−1})^T (β_0 − β̂_n) )²
        = T_1 + 2T_2 + T_3 − T_4.  (5.16)

By the law of large numbers,

    (1/(n − 1)) T_1 = (1/(n − 1)) Σ_{t=2}^n η_t² → σ_0²,  a.s., n → ∞.  (5.17)

Since { (f_t(θ_0) − f_t(θ̂_n)) η_t e_{t−1}, H_t } is a martingale difference sequence with

    Var( (f_t(θ_0) − f_t(θ̂_n)) η_t e_{t−1} ) = (f_t(θ_0) − f_t(θ̂_n))² σ_0² E e²_{t−1},

we obtain, using θ̂_n ∈ N_n^θ(A), condition (A3), and (2.30),

    Var(T_2) = σ_0² Σ_{t=2}^n (f_t(θ_0) − f_t(θ̂_n))² E e²_{t−1}
        = σ_0² Σ_{t=2}^n ( (f_t(θ_0) − f_t(θ̂_n))² / f'_t²(θ_0) ) f'_t²(θ_0) E e²_{t−1}
        ≤ max_{2≤t≤n} ( f'_t²(θ̄) / f'_t²(θ_0) ) (θ_0 − θ̂_n)² Δ_n(θ_0, σ_0) ≤ C A².  (5.18)

By the Chebyshev inequality, we have

    (1/(n − 1)) T_2 →_p 0,  n → ∞.  (5.19)

By the Markov inequality and noting that E T_3 ≤ C A², we obtain

    (1/(n − 1)) T_3 →_p 0,  n → ∞.  (5.20)

Write

    T_4 = Σ_{t=2}^n ( (x_t − f_t(θ_0)x_{t−1})^T (β_0 − β̂_n) + (f_t(θ_0) − f_t(θ̂_n)) x_{t−1}^T (β_0 − β̂_n) )²
        = I_1 + I_2 + 2I_3,  (5.21)

where

    I_1 = (β_0 − β̂_n)^T X_n(θ_0) (β_0 − β̂_n),
    I_2 = Σ_{t=2}^n (f_t(θ_0) − f_t(θ̂_n))² ( x_{t−1}^T (β_0 − β̂_n) )²,
    I_3 = Σ_{t=2}^n (x_t − f_t(θ_0)x_{t−1})^T (β_0 − β̂_n) (f_t(θ_0) − f_t(θ̂_n)) x_{t−1}^T (β_0 − β̂_n).

Noting that β̂_n ∈ N_n^β(A), we have

    I_1 = O_p(1),  n → ∞.  (5.22)

By (4.34) and condition (A3), we have

    |I_2| ≤ max_{2≤t≤n} ( x_{t−1}^T (β_0 − β̂_n) )² n max_{2≤t≤n} f'_t²(θ̄) (θ_0 − θ̂_n)² = o_p(1),  n → ∞.  (5.23)

By the Cauchy–Schwarz inequality,

    I_3² ≤ I_1 · I_2 = O_p(1) o_p(1) = o_p(1).  (5.24)

By (5.21)–(5.24), we obtain

    (1/(n − 1)) T_4 →_p 0,  n → ∞.  (5.25)

From (5.16), (5.17), (5.19), (5.20), and (5.25), we have σ̂_n² →_p σ_0². This completes the
proof of Theorem 3.1.
5.2. Proof of Theorem 3.2

From Theorem 3.1, S_n(φ̂_n) = 0 and F_n(φ̂_n) is nonsingular. By a Taylor expansion, we have

    0 = S_n(φ̂_n) = S_n(φ_0) − F_n(φ̃_n)(φ̂_n − φ_0),  (5.26)

where φ̃_n lies between φ̂_n and φ_0. Since φ̂_n ∈ N_n(A), also φ̃_n ∈ N_n(A). By (4.15), we have

    F_n(φ̃_n) = D_n^{1/2} (Φ_n + Ã_n) D_n^{T/2},  (5.27)

where Ã_n is a symmetric matrix with Ã_n →_p 0. By (5.26) and (5.27), we have

    D_n^{T/2} (φ̂_n − φ_0) = D_n^{T/2} F_n^{−1}(φ̃_n) S_n(φ_0) = (Φ_n + Ã_n)^{−1} D_n^{−1/2} S_n(φ_0).  (5.28)

Similarly to (5.27), we have

    F_n(φ̂_n) = D_n^{1/2} (Φ_n + Â_n) D_n^{T/2} = F_n^{1/2}(φ̂_n) F_n^{T/2}(φ̂_n),
    F_n^{T/2}(φ̂_n) = (Φ_n + Â_n)^{T/2} D_n^{T/2},  (5.29)

where Â_n →_p 0. By (5.28), (5.29), and noting that σ̂_n² →_p σ_0² and D_n^{−1/2} S_n(φ_0) = O_p(1),
we obtain

    (1/σ̂_n) F_n^{T/2}(φ̂_n)(φ̂_n − φ_0) = (1/σ̂_n)(Φ_n + Â_n)^{T/2}(Φ_n + Ã_n)^{−1} D_n^{−1/2} S_n(φ_0)
        = Φ_n^{−1/2} D_n^{−1/2} S_n(φ_0)/σ_0 + o_p(1).  (5.30)

From (2.7) and (2.8), we have

    S_n(φ_0)/σ_0 = (1/σ_0) ( Σ_{t=2}^n η_t (x_t − f_t(θ_0)x_{t−1}), Σ_{t=2}^n f'_t(θ_0) η_t e_{t−1} ).  (5.31)

From (2.29) and (4.18), we have

    Φ_n^{−1/2} D_n^{−1/2} = [ I_d  0 ;  0  √( Δ_n(θ_0, σ_0) / Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) ]
            × [ X_n^{−1/2}(θ_0)  0 ;  0  1/√(Δ_n(θ_0, σ_0)) ]
        = [ X_n^{−1/2}(θ_0)  0 ;  0  1/√( Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) ].  (5.32)

By (5.30)–(5.32), we have

    Φ_n^{−1/2} D_n^{−1/2} S_n(φ_0)/σ_0
        = (1/σ_0) ( Σ_{t=2}^n η_t X_n^{−1/2}(θ_0)(x_t − f_t(θ_0)x_{t−1}),
            Σ_{t=2}^n f'_t(θ_0) η_t e_{t−1} / √( Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) ).  (5.33)

Let u ∈ R^d with |u| = 1, and a_tn = u^T X_n^{−1/2}(θ_0)(x_t − f_t(θ_0)x_{t−1}). Then
max_{2≤t≤n} |a_tn| = o(1), and we consider the limiting distribution of the 2-vector

    (1/σ_0) ( Σ_{t=2}^n a_tn η_t, Σ_{t=2}^n f'_t(θ_0) η_t e_{t−1} / √( Σ_{t=2}^n f'_t²(θ_0) e²_{t−1} ) ).  (5.34)

By the Cramér–Wold device and (4.54), it suffices to find the asymptotic distribution of

    Σ_{t=2}^n η_t ( u_1 a_tn/σ_0 + u_2 f'_t(θ_0) e_{t−1} / ( σ_0 √(Δ_n(θ_0, σ_0)) ) ) = Σ_{t=2}^n η_t m_t(n),  (5.35)

where u_1, u_2 ∈ R with u_1² + u_2² = 1. Note that E{ η_t m_t(n) | H_{t−1} } = 0, so the sums in
(5.35) are partial sums of a martingale triangular array with respect to {H_t}, and we verify
the Lindeberg conditions for their convergence to normality as follows:

    Σ_{t=2}^n E{ η_t² m_t²(n) | H_{t−1} } = σ_0² Σ_{t=2}^n m_t²(n)
        = u_1² Σ_{t=2}^n a_tn² + ( 2u_1u_2/√(Δ_n(θ_0, σ_0)) ) Σ_{t=2}^n f'_t(θ_0) a_tn e_{t−1}
            + ( u_2²/Δ_n(θ_0, σ_0) ) Σ_{t=2}^n f'_t²(θ_0) e²_{t−1}
        = u_1² + o_p(1) + u_2² (1 + o_p(1)).  (5.36)

Noting that max_{1≤t≤n} e_t²/n = o_p(1) and max_{2≤t≤n} |a_tn| = o(1), we obtain

    max_{2≤t≤n} |m_t(n)| ≤ max_{2≤t≤n} | u_1 a_tn/σ_0 | + max_{2≤t≤n} | u_2 f'_t(θ_0) e_{t−1} / ( σ_0 √(Δ_n(θ_0, σ_0)) ) | = o_p(1).  (5.37)

Hence, for given δ > 0, there is a set whose probability approaches 1 as n → ∞ on which
max_{2≤t≤n} |m_t(n)| ≤ δ. On this event, for any c > 0,

    Σ_{t=2}^n E{ η_t² m_t²(n) I( |η_t m_t(n)| > c ) | H_{t−1} }
        = Σ_{t=2}^n m_t²(n) ∫_{|y| > c/|m_t(n)|} y² dP( η_1 ≤ y | H_{t−1} )
        ≤ Σ_{t=2}^n m_t²(n) ∫_{|y| > c/δ} y² dP( η_1 ≤ y | H_{t−1} )
        = o(δ) O_p(1) → 0,  n → ∞,  (5.38)

where o(δ) → 0 as δ → 0. This verifies the Lindeberg conditions, and by Lemma 4.5

    Σ_{t=2}^n η_t m_t(n) →_D N(0, 1).  (5.39)

This completes the proof of Theorem 3.2.
6. Numerical Examples

6.1. Simulation Example

We simulate the regression model (1.1) with x_t = t/20, β = 3.5, t = 1, 2, ..., 100, and random
errors

    ε_t = (1/2) sin( (π/2) t θ ) ε_{t−1} + η_t,  (6.1)

where θ = 2 and η_t ~ N(0, 1).

By the ordinary least squares method, we obtain the least squares estimates
β̂_LS = 3.5136 and σ̂²_LS = 1.0347. So we take β^{(0)} = 3.5136, σ^{(0)2} = 1.0347, θ^{(0)} = π/2,
and δ = 0.01. Using the iterative method of Section 2, we obtain

    ϑ^{(1)} = (1.0159, 3.5076, 1.6573)^T,  ϑ^{(2)} = (1.0283, 3.5076, 1.7599)^T.  (6.2)

Since ‖ϑ^{(2)} − ϑ^{(1)}‖ / ‖ϑ^{(1)}‖ < 0.01, the QML estimates of σ², β, and θ are

    σ̂_n² = 1.0283,  β̂_n = 3.5076,  θ̂_n = 1.7599.  (6.3)

These values closely approximate the true values, so our method is successful, especially in
estimating the parameters β and σ².
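The design of this simulation can be sketched as follows (an illustration only, not the paper's code). The coefficient function f_t(θ) = (1/2) sin((π/2)tθ) is one reading of (6.1) and is assumed here for concreteness, and θ is handled by a simple grid search in place of the DFP iteration; given θ, (2.11) reduces to least squares:

```python
import math
import random

random.seed(5)

# Sketch of the Section 6.1 design; f_t(theta) = 0.5*sin((pi/2)*t*theta)
# is an assumed reading of (6.1).  Grid search over theta replaces the
# DFP iteration of Section 2.
n, beta0, theta0 = 100, 3.5, 2.0

def f(t, th):
    return 0.5 * math.sin(0.5 * math.pi * t * th)

x = [t / 20.0 for t in range(1, n + 1)]
eps = [random.gauss(0.0, 1.0)]
for t in range(2, n + 1):
    eps.append(f(t, theta0) * eps[-1] + random.gauss(0.0, 1.0))
y = [x[i] * beta0 + eps[i] for i in range(n)]

def profile_fit(th):
    """Given theta, (2.11) is least squares of y_t - f_t*y_{t-1} on
    x_t - f_t*x_{t-1}; returns (RSS, beta_hat)."""
    ys = [y[i] - f(i + 1, th) * y[i - 1] for i in range(1, n)]
    xs = [x[i] - f(i + 1, th) * x[i - 1] for i in range(1, n)]
    b = sum(a * c for a, c in zip(xs, ys)) / sum(a * a for a in xs)
    rss = sum((c - b * a) ** 2 for a, c in zip(xs, ys))
    return rss, b

rss, beta_hat = min((profile_fit(1.0 + 0.005 * k) for k in range(401)),
                    key=lambda r: r[0])
sigma2_hat = rss / (n - 1)                     # as in (2.9)
print(beta_hat, sigma2_hat)
```

As in the paper's experiment, β and σ² are recovered accurately, while θ is harder to pin down because the generating data are used with the same assumed f.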
6.2. Empirical Example
We use the data studied by Fuller in [14]. The data pertain to the consumption of spirits in the United Kingdom from 1870 to 1938. The dependent variable \(y_t\) is the annual per capita consumption of spirits in the United Kingdom. The explanatory variables \(x_{t1}\) and \(x_{t2}\) are per capita income and the price of spirits, respectively, both deflated by a general price index. All data are in logarithms. The model suggested by Prest can be written as

\[
y_t = \beta_0 + \beta_1 x_{t1} + \beta_2 x_{t2} + \beta_3 x_{t3} + \beta_4 x_{t4} + \varepsilon_t,
\tag{6.4}
\]

where 1869 is the origin for \(t\), \(x_{t3} = t/100\), and \(x_{t4} = (t - 35)^2/10^4\), and \(\varepsilon_t\) is assumed to be a stationary time series.
Fuller [14] obtained the estimated generalized least squares equation

\[
\hat{y}_t = 2.36 + 0.72\, x_{t1} - 0.80\, x_{t2} - 0.81\, x_{t3} - 0.92\, x_{t4}, \qquad
\hat{\varepsilon}_t = 0.7633\, \hat{\varepsilon}_{t-1} + \eta_t,
\tag{6.5}
\]

where \(\eta_t\) is a sequence of uncorrelated \((0, 0.000417)\) random variables.
Using our method, we obtain the following models:

\[
\hat{y}_t = 2.3607 + 0.7437\, x_{t1} - 0.8210\, x_{t2} - 0.7857\, x_{t3} - 0.9178\, x_{t4}, \qquad
\hat{\varepsilon}_t = (0.70054 + 0.00424\, t)\, \hat{\varepsilon}_{t-1} + \eta_t,
\tag{6.6}
\]

where \(\eta_t\) is a sequence of uncorrelated \((0, 0.000413)\) random variables.
By the models (6.6), the residual mean square is 0.000413, which is smaller than the 0.000417 calculated from the models (6.5).
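The time-varying AR(1) error model in (6.6) is linear in its two coefficients, so given a residual series it can be fitted by regressing \(\hat{\varepsilon}_t\) on \(\hat{\varepsilon}_{t-1}\) and \(t\,\hat{\varepsilon}_{t-1}\). The Python sketch below illustrates this on synthetic residuals generated from the fitted coefficients of (6.6); the spirits data themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic residual series following the fitted error model in (6.6)
n = 69                                  # 1870-1938, one observation per year
a, b = 0.70054, 0.00424
sigma_eta = 0.000413 ** 0.5
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = (a + b * (i + 1)) * eps[i - 1] + sigma_eta * rng.standard_normal()

# eps_t = (a + b t) eps_{t-1} + eta_t is linear in (a, b): regress
# eps_t on eps_{t-1} and t * eps_{t-1}
t = np.arange(2, n + 1)
X = np.column_stack([eps[:-1], t * eps[:-1]])
coef, *_ = np.linalg.lstsq(X, eps[1:], rcond=None)
rms = np.mean((eps[1:] - X @ coef) ** 2)   # residual mean square
print(coef, rms)
```

The residual mean square computed this way is the quantity compared between models (6.5) and (6.6) in the text.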
From the above examples, it can be seen that our method is successful and valid.
However, a further discussion of how to fit the function \(f_t(\theta)\) is needed so that a
good method can be found for practical applications.
Acknowledgments
The author would like to thank an anonymous referee for useful comments and suggestions.
The paper was supported by the Key Project of the Chinese Ministry of Education (no. 209078)
and a Scientific Research Item of the Department of Education, Hubei (no. D20092207).
References
[1] Z. D. Bai and M. Guo, “A paradox in least-squares estimation of linear regression models,” Statistics & Probability Letters, vol. 42, no. 2, pp. 167–174, 1999.
[2] X. Chen, “Consistency of LS estimates of multiple regression under a lower order moment condition,” Science in China A, vol. 38, no. 12, pp. 1420–1431, 1995.
[3] T. W. Anderson and J. B. Taylor, “Strong consistency of least squares estimates in normal linear regression,” The Annals of Statistics, vol. 4, no. 4, pp. 788–790, 1976.
[4] H. Drygas, “Weak and strong consistency of the least squares estimators in regression models,” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 34, no. 2, pp. 119–127, 1976.
[5] G. González-Rodríguez, A. Blanco, N. Corral, and A. Colubi, “Least squares estimation of linear regression models for convex compact random sets,” Advances in Data Analysis and Classification, vol. 1, no. 1, pp. 67–81, 2007.
[6] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel, Robust Statistics, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1986.
[7] X. He, “A local breakdown property of robust tests in linear regression,” Journal of Multivariate Analysis, vol. 38, no. 2, pp. 294–305, 1991.
[8] H. Cui, “On asymptotics of t-type regression estimation in multiple linear model,” Science in China A, vol. 47, no. 4, pp. 628–639, 2004.
[9] J. Durbin, “A note on regression when there is extraneous information about one of the coefficients,” Journal of the American Statistical Association, vol. 48, pp. 799–808, 1953.
[10] A. E. Hoerl and R. W. Kennard, “Ridge regression: biased estimation for nonorthogonal problems,” Technometrics, vol. 12, no. 1, pp. 55–67, 1970.
[11] Y. Li and H. Yang, “A new stochastic mixed ridge estimator in linear regression model,” Statistical Papers, in press.
[12] R. A. Maller, “Asymptotics of regressions with stationary and nonstationary residuals,” Stochastic Processes and Their Applications, vol. 105, no. 1, pp. 33–67, 2003.
[13] P. Pere, “Adjusted estimates and Wald statistics for the AR(1) model with constant,” Journal of Econometrics, vol. 98, no. 2, pp. 335–363, 2000.
[14] W. A. Fuller, Introduction to Statistical Time Series, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 2nd edition, 1996.
[15] G. H. Kwoun and Y. Yajima, “On an autoregressive model with time-dependent coefficients,” Annals of the Institute of Statistical Mathematics, vol. 38, no. 2, pp. 297–309, 1986.
[16] J. S. White, “The limiting distribution of the serial correlation coefficient in the explosive case,” Annals of Mathematical Statistics, vol. 29, pp. 1188–1197, 1958.
[17] J. S. White, “The limiting distribution of the serial correlation coefficient in the explosive case. II,” Annals of Mathematical Statistics, vol. 30, pp. 831–834, 1959.
[18] J. D. Hamilton, Time Series Analysis, Princeton University Press, Princeton, NJ, USA, 1994.
[19] P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, Springer Series in Statistics, Springer, New York, NY, USA, 1987.
[20] K. M. Abadir and A. Lucas, “A comparison of minimum MSE and maximum power for the nearly integrated non-Gaussian model,” Journal of Econometrics, vol. 119, no. 1, pp. 45–71, 2004.
[21] F. Carsoule and P. H. Franses, “A note on monitoring time-varying parameters in an autoregression,” Metrika, vol. 57, no. 1, pp. 51–62, 2003.
[22] R. Azrak and G. Mélard, “Asymptotic properties of quasi-maximum likelihood estimators for ARMA models with time-dependent coefficients,” Statistical Inference for Stochastic Processes, vol. 9, no. 3, pp. 279–330, 2006.
[23] R. Dahlhaus, “Fitting time series models to nonstationary processes,” The Annals of Statistics, vol. 25, no. 1, pp. 1–37, 1997.
[24] W. Zhang, L. Wei, and Y. Yang, “The superiority of empirical Bayes estimator of parameters in linear model,” Statistics and Probability Letters, vol. 72, no. 1, pp. 43–50, 2005.
[25] E. J. Hannan, “The asymptotic theory of linear time-series models,” Journal of Applied Probability, vol. 10, pp. 130–145, 1973.
[26] R. Fox and M. S. Taqqu, “Large-sample properties of parameter estimates for strongly dependent stationary Gaussian time series,” The Annals of Statistics, vol. 14, no. 2, pp. 517–532, 1986.
[27] L. Giraitis and D. Surgailis, “A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotical normality of Whittle’s estimate,” Probability Theory and Related Fields, vol. 86, no. 1, pp. 87–104, 1990.
[28] L. Giraitis and H. Koul, “Estimation of the dependence parameter in linear regression with long-range-dependent errors,” Stochastic Processes and Their Applications, vol. 71, no. 2, pp. 207–224, 1997.
[29] H. L. Koul, “Estimation of the dependence parameter in non-linear regression with random designs and long memory errors,” in Proceedings of the Triennial Conference on Nonparametric Inference and Related Fields, Calcutta University, 1997.
[30] H. L. Koul and D. Surgailis, “Asymptotic normality of the Whittle estimator in linear regression models with long memory errors,” Statistical Inference for Stochastic Processes, vol. 3, no. 1-2, pp. 129–147, 2000.
[31] H. L. Koul and K. Mukherjee, “Asymptotics of R-, MD- and LAD-estimators in linear regression models with long range dependent errors,” Probability Theory and Related Fields, vol. 95, no. 4, pp. 535–553, 1993.
[32] T. Shiohama and M. Taniguchi, “Sequential estimation for time series regression models,” Journal of Statistical Planning and Inference, vol. 123, no. 2, pp. 295–312, 2004.
[33] J. Fan and Q. Yao, Nonlinear Time Series: Nonparametric and Parametric Methods, Springer, New York, NY, USA, 2005.
[34] K. N. Berk, “Consistent autoregressive spectral estimates,” The Annals of Statistics, vol. 2, pp. 489–502, 1974.
[35] E. J. Hannan and L. Kavalieris, “Regression, autoregression models,” Journal of Time Series Analysis, vol. 7, no. 1, pp. 27–49, 1986.
[36] A. Goldenshluger and A. Zeevi, “Nonasymptotic bounds for autoregressive time series modeling,” The Annals of Statistics, vol. 29, no. 2, pp. 417–444, 2001.
[37] E. Liebscher, “Strong convergence of estimators in nonlinear autoregressive models,” Journal of Multivariate Analysis, vol. 84, no. 2, pp. 247–261, 2003.
[38] H.-Z. An, L.-X. Zhu, and R.-Z. Li, “A mixed-type test for linearity in time series,” Journal of Statistical Planning and Inference, vol. 88, no. 2, pp. 339–353, 2000.
[39] R. Elsebach, “Evaluation of forecasts in AR models with outliers,” OR Spectrum, vol. 16, no. 1, pp. 41–45, 1994.
[40] S. Baran, G. Pap, and M. C. A. van Zuijlen, “Asymptotic inference for unit roots in spatial triangular autoregression,” Acta Applicandae Mathematicae, vol. 96, no. 1–3, pp. 17–42, 2007.
[41] W. Distaso, “Testing for unit root processes in random coefficient autoregressive models,” Journal of Econometrics, vol. 142, no. 1, pp. 581–609, 2008.
[42] J. L. Harvill and B. K. Ray, “Functional coefficient autoregressive models for vector time series,” Computational Statistics and Data Analysis, vol. 50, no. 12, pp. 3547–3566, 2006.
[43] J. S. Simonoff, Smoothing Methods in Statistics, Springer Series in Statistics, Springer, New York, NY, USA, 1996.
[44] P. J. Green and B. W. Silverman, Nonparametric Regression and Generalized Linear Models, vol. 58 of Monographs on Statistics and Applied Probability, Chapman & Hall, London, UK, 1994.
[45] J. Fan and I. Gijbels, Local Polynomial Modelling and Its Applications, vol. 66 of Monographs on Statistics and Applied Probability, Chapman & Hall, London, UK, 1996.
[46] Y. Fujikoshi and Y. Ochi, “Asymptotic properties of the maximum likelihood estimate in the first order autoregressive process,” Annals of the Institute of Statistical Mathematics, vol. 36, no. 1, pp. 119–128, 1984.
[47] C. R. Rao, Linear Statistical Inference and Its Applications, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 2nd edition, 1973.
[48] R. A. Maller, “Quadratic negligibility and the asymptotic normality of operator normed sums,” Journal of Multivariate Analysis, vol. 44, no. 2, pp. 191–219, 1993.
[49] P. Hall and C. C. Heyde, Martingale Limit Theory and Its Application, Probability and Mathematical Statistics, Academic Press, New York, NY, USA, 1980.