Acta Math. Univ. Comenianae
Vol. LXV, 1(1996), pp. 141–148
SOME LIMIT PROPERTIES OF AN APPROXIMATE LEAST
SQUARES ESTIMATOR IN A NONLINEAR REGRESSION
MODEL WITH CORRELATED NONZERO MEAN ERRORS
J. KALICKÁ
Abstract. A nonlinear regression model with correlated, normally distributed errors with nonzero means is investigated. The limit properties of the bias and of the mean square error matrix of the approximate least squares estimator of the regression parameters are studied.
1. Introduction
Let us consider a linear regression model
(1.1)    Xn×1 = Fn×k βk×1 + εn×1,    E(ε) = 0,    Var(ε) = Σ,

where the n × k matrix F is known, β ∈ Rk (the k-dimensional Euclidean space) is an unknown vector parameter and ε is the n × 1 vector of errors.
Under the condition of stationarity of the covariance function:

(1.2)    Σ = Σ_{i=1}^{n} R(i − 1) Ui,

where

(1.3)    (Ui)kl = 1 for k − l = i − 1, and (Ui)kl = 0 otherwise.
We will consider that R(·) is a nonlinear function of a p × 1 parameter θ (θ ∈ Rp) and therefore we denote it by Rθ(·).
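As an illustrative sketch (the parametric form Rθ(t) = θ1 · θ2^t and all code below are assumptions, not from the paper), the stationary structure (1.2)–(1.3) makes Σ a symmetric Toeplitz matrix with entries Σkl = R(|k − l|):

```python
import numpy as np

def sigma(R, n):
    idx = np.arange(n)
    # Σ_kl = R(|k - l|): the symmetric (Toeplitz) reading of (1.2)-(1.3)
    return R(np.abs(np.subtract.outer(idx, idx)))

R = lambda t: 1.0 * 0.5 ** t      # assumed parametric form R_θ(t) = θ1 · θ2^t
S = sigma(R, 4)
print(S)                          # constant along diagonals, S[k, l] = R(|k - l|)
```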
Its estimator is given by

(1.4)    R̂θ(t) = (1/(n − t)) Σ_{i=1}^{n−t} (X(i + t) − F̂β(i + t))(X(i) − F̂β(i))
Received December 1, 1993; revised October 27, 1995.
1980 Mathematics Subject Classification (1991 Revision). Primary 62M20, 62M10.
Key words and phrases. Nonlinear regression, covariance function, parametric and nonparametric estimator, approximate least squares estimator.
for t = 0, 1, . . . , n − 1, where F̂β = (F̂β(1), . . . , F̂β(n)) (see [6]) and β̂ = (F′F)⁻¹F′X is the LSE of β. Note that the nonparametric estimator of Rθ given in (1.4) is only asymptotically unbiased (see [6]).
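The estimator (1.4) can be sketched numerically as follows; the design matrix and the iid error law below are assumed examples, not from the paper:

```python
import numpy as np

def r_hat(X, F, t):
    """R̂(t) = (1/(n-t)) Σ_{i=1}^{n-t} e(i+t) e(i), with e = X - F β̂ the OLS residuals."""
    n = len(X)
    beta_hat = np.linalg.solve(F.T @ F, F.T @ X)     # β̂ = (F'F)^{-1} F'X
    e = X - F @ beta_hat                             # residual vector X - F β̂
    return (e[t:] * e[:n - t]).sum() / (n - t)

rng = np.random.default_rng(0)
n = 500
F = np.column_stack([np.ones(n), np.arange(n) / n])  # assumed design, k = 2
X = F @ np.array([1.0, 2.0]) + rng.normal(size=n)    # iid N(0, 1) errors
print(r_hat(X, F, 0), r_hat(X, F, 5))                # near 1 and near 0 for iid errors
```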
We consider further

(1.5)    R̂θ(t) = Rθ(t) + (R̂θ(t) − Rθ(t))

for t = 0, 1, . . . , m − 1, where m < n − k + 1. The function Rθ(·) is assumed to be known, continuous and twice continuously differentiable in θ.

Let ζθ(t) = R̂θ(t) − Rθ(t) and E(ζθ(t)) = µn(t) for t = 0, 1, . . . , m − 1 and m < n − k + 1.
We will investigate the nonlinear model

(1.6)    Yn(t) = fn(xt, θ) + ζθ(t)    for t = 0, 1, . . . , m − 1 and m < n − k + 1,

for n → ∞ and fixed m, where fn is a nonlinear function of the parameter θ = (θ1, . . . , θp), continuous and twice continuously differentiable in θ. Further, ζθ = (ζθ(0), . . . , ζθ(m − 1))′ is the m × 1 vector of errors with E(ζθ) = µn, Var(ζθ) = Σn, and we will assume that limn→∞ µn = 0. This condition is fulfilled for ζθ(t) = R̂θ(t) − Rθ(t), t = 0, 1, . . . , m − 1, m < n − k + 1.
2. An Approximate Least Squares Estimator
Let us consider the model described by (1.6). The approximate least squares estimator θ̃ is based on a method due to Box (see [2]) for the derivation of an approximate bias of the LSE θ̂. Let us denote by fn(θ) the m × 1 vector (f(x0, θ), . . . , f(xm−1, θ))′ and let jt(θ) be the p × 1 vector with components

jt(θ) = (∂f(xt, θ)/∂θi)_{i=1,...,p},    t = 0, . . . , m − 1, m < n − k + 1.


Let J(θ) = (j0(θ), . . . , jm−1(θ))′ be the m × p matrix of the first derivatives of fn(θ). Let Ht, t = 0, . . . , m − 1, be the p × p matrix of second derivatives,

(Ht)ij = ∂²f(xt, θ)/∂θi∂θj    for i, j = 1, . . . , p.
Since θ̂ is the least squares estimator of θ, the following equality holds:

(2.1)    J′(θ̂) · (Y − fn(θ̂)) = 0,
where Y = (Y0, . . . , Ym−1)′. By [2], from (2.1) and the Taylor expansions of J(θ) and fn(θ), it follows that the LSE θ̂ of θ can be approximated by the estimator given by

(2.2)    θ̃m = θ + (J′J)⁻¹J′ζθ + (J′J)⁻¹ ( U′(ζθ)Mζθ − (1/2) J′H(ζθ) ),

where M = I − J(J′J)⁻¹J′, A = (J′J)⁻¹J′, U(ζθ) is the m × p matrix whose t-th row is ζθ′A′Ht, t = 0, . . . , m − 1, and H(ζθ) is the m × 1 random vector with components ζθ′A′HtAζθ for t = 0, 1, . . . , m − 1. For the j-th component of the random vector U′(ζθ)Mζθ we get

(U′(ζθ)Mζθ)j = Σ_{i=0}^{m−1} (U′(ζθ))ji (Mζθ)i = Σ_{k} Σ_{l} ( Σ_{i} (HiA)jk (M)il ) ζθ(k)ζθ(l) = ζθ′Njζθ

for j = 1, . . . , p, where (Nj)kl = Σ_{i=0}^{m−1} (HiA)jk · (M)il, k, l = 0, . . . , m − 1, and (U′(ζθ)Mζθ)j = ζθ′ ((Nj + Nj′)/2) ζθ is a quadratic form with a symmetric matrix.
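The algebra above can be checked numerically. The model f(x, θ) = θ1 exp(−θ2 x) below is an assumed example, not from the paper; the code builds J, M, A = (J′J)⁻¹J′ and the matrices Nj, and verifies the identity (U′(ζθ)Mζθ)j = ζθ′Njζθ:

```python
import numpy as np

def jac_hess(theta, x):
    """J and H_t for the assumed model f(x, θ) = θ1 · exp(-θ2 x)."""
    t1, t2 = theta
    e = np.exp(-t2 * x)
    J = np.column_stack([e, -t1 * x * e])            # m x p matrix of first derivatives
    H = np.array([[[0.0, -xt * et],                  # p x p second-derivative
                   [-xt * et, t1 * xt ** 2 * et]]    # matrices H_t (symmetric)
                  for xt, et in zip(x, e)])
    return J, H

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 6)                         # m = 6 design points
J, H = jac_hess(np.array([1.5, 0.7]), x)
m, p = J.shape
G = np.linalg.inv(J.T @ J)
A = G @ J.T                                          # A = (J'J)^{-1} J'
M = np.eye(m) - J @ G @ J.T                          # M = I - J (J'J)^{-1} J'

zeta = rng.normal(size=m)
U = np.array([zeta @ A.T @ H[t] for t in range(m)])  # t-th row: ζ' A' H_t
lhs = U.T @ M @ zeta                                 # (U'(ζ) M ζ)_j, j = 1, ..., p
N = [sum((H[i] @ A)[j, :, None] * M[i, None, :] for i in range(m))
     for j in range(p)]                              # (N_j)_kl = Σ_i (H_i A)_jk (M)_il
rhs = np.array([zeta @ N[j] @ zeta for j in range(p)])
print(np.allclose(lhs, rhs))                         # the two sides agree
```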
3. The Mean and the Mean Square Error Matrix
of an Approximate Least Squares Estimator
Let θ̃m be in the form (2.2) and let θ̃m be the approximate LSE of θ in the model (1.6). We can write:

(2.3)    Eθ(θ̃m) = θ + Aµn + (J′J)⁻¹ [ tr(NΣn) + µn′Nµn − (1/2) J′( tr(A′HAΣn) + µn′A′HAµn ) ],

where tr(NΣn) and µn′Nµn denote the vectors with components tr(NjΣn) and µn′Njµn, j = 1, . . . , p, while tr(A′HAΣn) and µn′A′HAµn denote the vectors with components tr(A′HtAΣn) and µn′A′HtAµn, t = 0, . . . , m − 1.
We will bound each term of (2.3) in turn. First,

(2.4)    (Aµn)j = Σ_{k=0}^{m−1} ajk µn(k),    where ajk = ((J′J)⁻¹J′)jk.

Since m is a fixed number, m < n − k + 1, and limn→∞ µn(k) = 0 for k = 0, . . . , m − 1, this term tends to zero for every fixed m < n − k + 1 as n → ∞.
In what follows we use the relations

tr(AB′) = Σ_{i,j} Aij Bij,    |tr(AB′)| ≤ ‖A‖ · ‖B‖,    where ‖A‖ = ( Σ_{i,j} Aij² )^{1/2}

is the Euclidean norm of the matrix A, for which the inequality ‖AB‖² ≤ ‖A‖² ‖B‖² holds.
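These trace and norm relations are easy to check numerically; the random matrices below are assumed examples:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 5))
B = rng.normal(size=(4, 5))
fro = lambda M: np.sqrt((M ** 2).sum())               # ‖M‖ = (Σ_ij M_ij^2)^{1/2}

assert np.isclose(np.trace(A @ B.T), (A * B).sum())   # tr(AB') = Σ A_ij B_ij
assert abs(np.trace(A @ B.T)) <= fro(A) * fro(B)      # |tr(AB')| ≤ ‖A‖ · ‖B‖
assert fro(A @ B.T) <= fro(A) * fro(B.T)              # ‖AB‖ ≤ ‖A‖ · ‖B‖
print("all three relations hold")
```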

(2.5)    tr(NΣn) = ( tr(N1Σn), . . . , tr(NpΣn) )′  =⇒  |tr(NjΣn)| ≤ ‖Nj‖ · ‖Σn‖,

and for every fixed m, m < n − k + 1, and limn→∞ ‖Σn‖ = 0 this term tends to zero as n → ∞.

(2.6)    µn′Nµn = ( µn′N1µn, . . . , µn′Npµn )′  =⇒  tr(µn′Njµn) = tr(Njµnµn′) = tr(NjWn),

where Wn = µnµn′ and j = 1, . . . , p; then |tr(NjWn)| ≤ ‖Nj‖ · ‖Wn‖, which for every fixed m < n − k + 1 tends to zero as n → ∞.
Now

(2.7)    tr(µn′A′HjAµn) = tr(µnµn′A′HjA) = tr(WnA′HjA) ≤ ‖Wn‖ · ‖A‖² · ‖Hj‖.

This term tends to zero for every fixed m < n − k + 1 as n → ∞. We can easily see that for the last term of (2.3) we have
 
(2.8)    ( J′ ( tr(µn′A′H0Aµn), . . . , tr(µn′A′Hm−1Aµn) )′ )j = Σ_{i=0}^{m−1} (ji)j tr(A′HiAWn) = tr( A′ ( Σ_{i=0}^{m−1} (ji)j Hi ) A Wn ),

and hence it is sufficient to bound |tr(A′BjAWn)| ≤ ‖A′BjAWn‖, where Bj = Σ_{i=0}^{m−1} (ji)j Hi.
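The collapse of the weighted sum of traces into tr(A′BjAWn) used in (2.8) can be spot-checked numerically; the matrices below are random assumed examples, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
m, p = 6, 2
J = rng.normal(size=(m, p))                      # rows j_t' (assumed Jacobian)
A = np.linalg.inv(J.T @ J) @ J.T                 # A = (J'J)^{-1} J'
H = rng.normal(size=(m, p, p))
H = (H + H.transpose(0, 2, 1)) / 2               # symmetric p x p matrices H_i
mu = rng.normal(size=m)
W = np.outer(mu, mu)                             # W_n = µ_n µ_n'

j = 1                                            # fix a component j
lhs = sum(J[i, j] * np.trace(A.T @ H[i] @ A @ W) for i in range(m))
B_j = sum(J[i, j] * H[i] for i in range(m))      # B_j = Σ_i (j_i)_j H_i
rhs = np.trace(A.T @ B_j @ A @ W)
print(np.isclose(lhs, rhs))                      # linearity of the trace in H_i
```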
Now we can state
Theorem 1.1. Let the following conditions be fulfilled in the model (1.6):
(i) limn→∞ µn(t) = 0 for t = 0, 1, . . . , m − 1 and m < n − k + 1,
(ii) limn→∞ ‖Σn‖ = 0 for m < n − k + 1.
Then, for every fixed m < n − k + 1, the approximate least squares estimator θ̃m is asymptotically unbiased, i.e. limn→∞ Eθ(θ̃m) = θ.

Proof. The result follows directly from (2.4)–(2.8) and from (i), (ii).
Now, let ζθ ∼ N(µn, Σn). Writing N(ζθ) = U′(ζθ)Mζθ, we can express Eθ[(θ̃m − θ)(θ̃m − θ)′] as follows:

Eθ[(θ̃m − θ)(θ̃m − θ)′]
= AEθ(ζθζθ′)A′ + (J′J)⁻¹ Eθ[ ( N(ζθ) − (1/2) J′H(ζθ) )( N(ζθ) − (1/2) J′H(ζθ) )′ ] (J′J)⁻¹
= AEθ(ζθζθ′)A′ + (J′J)⁻¹ [ Eθ(N(ζθ)N(ζθ)′) − (1/2) Eθ(N(ζθ)H(ζθ)′)J − (1/2) J′Eθ(H(ζθ)N(ζθ)′) + (1/4) J′Eθ(H(ζθ)H(ζθ)′)J ] (J′J)⁻¹.
We will bound the terms in the last expression member by member.

1. AEθ(ζθζθ′)A′ = A(Σn + µnµn′)A′ = AΣnA′ + AWnA′, and in the norm:

(2.9)    ‖AΣnA′‖ ≤ ‖Σn‖ · ‖AA′‖ = ‖Σn‖ · ‖(J′J)⁻¹‖,    ‖AWnA′‖ ≤ ‖Wn‖ · ‖(J′J)⁻¹‖.

This term tends to zero as n → ∞ for every fixed m, m < n − k + 1.
2. We express the (i, j)-element of the term Eθ(N(ζθ)N(ζθ)′):

[Eθ(N(ζθ)N(ζθ)′)]i,j = 2 tr(NiΣnNjΣn) + tr(NiΣn) tr(NjΣn) + µn′Niµn tr(NjΣn)
+ µn′Njµn tr(NiΣn) + 4µn′NiΣnNjµn + µn′Niµn µn′Njµn.

Now we have

|tr(NiΣnNjΣn)| ≤ ‖Σn‖² · ‖Ni‖ · ‖Nj‖,
tr(NiΣn) tr(NjΣn) ≤ ‖Σn‖² · ‖Ni‖ · ‖Nj‖,
µn′Niµn tr(NjΣn) = tr(WnNi) tr(NjΣn), so that |tr(WnNi) tr(NjΣn)| ≤ ‖Ni‖ · ‖Nj‖ · ‖Wn‖ · ‖Σn‖,
µn′NiΣnNjµn = tr(NiΣnNjWn) ≤ ‖Ni‖ · ‖Nj‖ · ‖Σn‖ · ‖Wn‖,
µn′Niµn µn′Njµn = tr(WnNiWnNj) ≤ ‖Wn‖² · ‖Ni‖ · ‖Nj‖,

and consequently

(2.10)    [Eθ(N(ζθ)N(ζθ)′)]i,j ≤ 3‖Ni‖ · ‖Nj‖ · ‖Σn‖² + 6‖Ni‖ · ‖Nj‖ · ‖Wn‖ · ‖Σn‖ + ‖Ni‖ · ‖Nj‖ · ‖Wn‖²
= ‖Ni‖ · ‖Nj‖ ( 3‖Σn‖² + 6‖Σn‖ · ‖Wn‖ + ‖Wn‖² ).
This term tends to zero as n → ∞ for every fixed m < n − k + 1.
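The expansion of [Eθ(N(ζθ)N(ζθ)′)]i,j above is the standard Gaussian moment formula for E(ζ′Niζ · ζ′Njζ) with ζ ∼ N(µ, Σ). A Monte-Carlo sanity check, with assumed random matrices rather than the paper's quantities:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3
S = rng.normal(size=(m, m))
Sigma = S @ S.T + np.eye(m)                    # a positive definite Σ_n
mu = rng.normal(size=m)                        # µ_n
Bi = rng.normal(size=(m, m)); Ni = Bi @ Bi.T   # symmetric N_i, N_j (PSD, so the
Bj = rng.normal(size=(m, m)); Nj = Bj @ Bj.T   # expectation is well away from 0)

Z = rng.multivariate_normal(mu, Sigma, size=200_000)
q = lambda N: np.einsum('ti,ij,tj->t', Z, N, Z)   # ζ' N ζ for each sample
mc = np.mean(q(Ni) * q(Nj))

tr = np.trace
exact = (2 * tr(Ni @ Sigma @ Nj @ Sigma) + tr(Ni @ Sigma) * tr(Nj @ Sigma)
         + (mu @ Ni @ mu) * tr(Nj @ Sigma) + (mu @ Nj @ mu) * tr(Ni @ Sigma)
         + 4 * (mu @ Ni @ Sigma @ Nj @ mu) + (mu @ Ni @ mu) * (mu @ Nj @ mu))
print(mc, exact)   # the Monte-Carlo mean matches the closed form to MC accuracy
```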
3. Step by step, we express Eθ(ζθ′Nζθ ζθ′A′HAζθ)k,p and (Eθ(ζθ′Nζθ ζθ′A′HAζθ) · J)k,p:

Eθ(ζθ′Nζθ ζθ′A′HAζθ)k,p = 2 tr(NkΣnA′HpAΣn) + tr(NkΣn) tr(A′HpAΣn)
+ µn′Nkµn tr(A′HpAΣn) + µn′A′HpAµn tr(NkΣn)
+ 4µn′NkΣnA′HpAµn + µn′NkWnA′HpAµn.

The first two members of (Eθ(ζθ′Nζθ ζθ′A′HAζθ) · J)k,p are bounded (see [7]):

(2.11)    Σ_{j=0}^{m−1} ( 2 tr(NkΣnA′HjAΣn) + tr(NkΣn) · tr(A′HjAΣn) ) (jj)p ≤ 3‖Nk‖ · ‖A′BpA‖ · ‖Σn‖².

It remains to bound the last four members of (Eθ(ζθ′Nζθ ζθ′A′HAζθ) · J)k,p:

(Eθ(ζθ′Nζθ ζθ′A′HAζθ) · J)k,p
= Σ_{j=0}^{m−1} ( µn′Nkµn tr(A′HjAΣn) + µn′A′HjAµn tr(NkΣn) + 4µn′NkΣnA′HjAµn + µn′NkWnA′HjAµn ) (jj)p
= Σ_{j=0}^{m−1} ( tr(WnNk) tr(A′HjAΣn) + tr(WnA′HjA) tr(NkΣn) + 4 tr(WnNkΣnA′HjA) + tr(WnNkWnA′HjA) ) (jj)p
= tr(WnNk) · tr(A′BpAΣn) + tr(WnA′BpA) · tr(NkΣn) + 4 tr(WnNkΣnA′BpA) + tr(WnNkWnA′BpA),

where Bp = Σ_{j=0}^{m−1} (jj)p Hj. Now we have

tr(WnNk) tr(A′BpAΣn) ≤ ‖Wn‖ · ‖Nk‖ · ‖Σn‖ · ‖A′BpA‖,
tr(WnA′BpA) tr(NkΣn) ≤ ‖Wn‖ · ‖Nk‖ · ‖Σn‖ · ‖A′BpA‖,
tr(WnNkΣnA′BpA) ≤ ‖Wn‖ · ‖Nk‖ · ‖Σn‖ · ‖A′BpA‖,
tr(WnNkWnA′BpA) ≤ ‖Wn‖² · ‖Nk‖ · ‖A′BpA‖
and consequently

(2.12)    (Eθ(ζθ′Nζθ ζθ′A′HAζθ) · J)k,p ≤ 6‖Wn‖ · ‖Nk‖ · ‖A′BpA‖ · ‖Σn‖ + ‖Wn‖² · ‖Nk‖ · ‖A′BpA‖
= ‖A′BpA‖ · ‖Nk‖ · ‖Wn‖ ( 6‖Σn‖ + ‖Wn‖ ).

This term tends to zero for every fixed m < n − k + 1 as n → ∞.
4. The case of the third member, Eθ(ζθ′A′HAζθ ζθ′Nζθ)k,p, is analogous. For (J′ Eθ(ζθ′A′HAζθ ζθ′Nζθ))k,p we have:

(2.13)    (J′ Eθ(ζθ′A′HAζθ ζθ′Nζθ))k,p ≤ ‖Wn‖ · ‖A′BkA‖ · ‖Np‖ · ( 6‖Σn‖ + ‖Wn‖ ).
5. By expressing the last member in the form J′ Eθ(H(ζθ)H(ζθ)′) J, we first calculate the members Eθ(ζθ′A′HAζθ ζθ′A′HAζθ)i,j and (J′ Eθ(ζθ′A′HAζθ ζθ′A′HAζθ) J)k,p.

Eθ(ζθ′A′HAζθ ζθ′A′HAζθ)i,j = 2 tr(A′HiAΣnA′HjAΣn) + tr(A′HiAΣn) · tr(A′HjAΣn)
+ µn′A′HiAµn tr(A′HjAΣn) + µn′A′HjAµn tr(A′HiAΣn)
+ 4µn′A′HiAΣnA′HjAµn + µn′A′HiAWnA′HjAµn.

For the first two members in this formula we can write (see Štulajter [7])

[J′ ( 2 tr(A′HAΣnA′HAΣn) + tr(A′HAΣn) tr(A′HAΣn) ) J]k,p ≤ 3‖A′BkA‖ · ‖A′BpA‖ · ‖Σn‖².

We denote the sum of the last four members of this formula by Zi,j. Then

(J′ZJ)k,p = Σ_{i=0}^{m−1} Σ_{j=0}^{m−1} (ji)k (Zi,j) (jj)p
= tr(WnA′BkA) tr(A′BpAΣn) + tr(WnA′BpA) tr(A′BkAΣn)
+ 4 tr(WnA′BkAΣnA′BpA) + tr(WnA′BkAWnA′BpA).

It is easy to see that

(2.14)    (J′ Eθ(ζθ′A′HAζθ ζθ′A′HAζθ) J)k,p ≤ ‖Wn‖ · ‖A′BkA‖ · ‖A′BpA‖ · ‖Σn‖
+ ‖Wn‖ · ‖A′BkA‖ · ‖A′BpA‖ · ‖Σn‖
+ 4‖Wn‖ · ‖A′BkA‖ · ‖A′BpA‖ · ‖Σn‖
+ ‖Wn‖² · ‖A′BkA‖ · ‖A′BpA‖
≤ ‖Wn‖ · ‖A′BkA‖ · ‖A′BpA‖ · ( 6‖Σn‖ + ‖Wn‖ ).
Now, we are ready to prove the following result.
Theorem 1.2. Let ζθ ∼ N(µn, Σn) and let the assumptions of Theorem 1.1 be fulfilled. Then the approximate LSE θ̃m of the parameter θ fulfils

limn→∞ Eθ[(θ̃m − θ)(θ̃m − θ)′] = 0.

Proof. Based on (2.9)–(2.14) and conditions (i), (ii) of Theorem 1.1 we can easily see that every member of the mean square error matrix of the approximate estimator θ̃m converges to zero for every fixed m < n − k + 1 as n tends to infinity.
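Theorems 1.1 and 1.2 can be illustrated by a small simulation. The one-parameter model f(x, θ) = exp(−θx), the damped Gauss-Newton solver and the choices µn = 0.5/n, Σn = n⁻¹I are all assumptions for illustration, not from the paper; under conditions (i), (ii) both the estimated bias and the mean square error shrink as n grows while m stays fixed:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.2, 1.0, 5)                  # m = 5 design points, fixed
theta0 = 1.0

def lse(y, theta=1.0, iters=25):
    """Damped Gauss-Newton for the one-parameter model f(x, θ) = exp(-θ x)."""
    for _ in range(iters):
        f = np.exp(-theta * x)
        J = -x * f                            # df/dθ at each design point
        step = (J @ (y - f)) / (J @ J)
        theta += np.clip(step, -0.5, 0.5)     # damping keeps the iteration stable
    return theta

res = {}
for n in (10, 100, 1000):
    mu_n = 0.5 / n                            # mean of the errors -> 0 (condition (i))
    sd_n = 1.0 / np.sqrt(n)                   # Σ_n = I/n, so ||Σ_n|| -> 0 (condition (ii))
    est = np.array([lse(np.exp(-theta0 * x) + mu_n + sd_n * rng.normal(size=5))
                    for _ in range(2000)])
    res[n] = (abs(est.mean() - theta0), ((est - theta0) ** 2).mean())
    print(n, res[n])                          # (estimated bias, estimated MSE)
```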
References

1. Bates D. M. and Watts D. G., Relative curvature measures of nonlinearity, J. Roy. Stat. Soc. 42(B) (1980), 1–25.
2. Box M. J., Bias in nonlinear estimation, J. Roy. Stat. Soc. 33(B) (1971), 171–201.
3. Cook R. D. and Tsai C. L., Residuals in nonlinear regression, Biometrika 72 (1985).
4. Cook R. D. and Tsai C. L., Bias in nonlinear regression, Biometrika 73 (1986).
5. Kalická J., Some limit properties of an approximate least squares estimator, Probastat'94.
6. Štulajter F., Odhady v náhodných procesoch (Estimation in Random Processes), Alfa, Bratislava, 1989.
7. Štulajter F., Mean squares error matrix of an approximate least squares estimator in nonlinear regression model with correlated errors, Acta Math. Univ. Comenianae LXI(2) (1992), 251–261.
8. Štulajter F. and Hudáková J., An approximate least squares estimator in nonlinear regression, Probastat'91.
J. Kalická, Department of Mathematics, Slovak Technical University, Radlinského 11, 813 68
Bratislava, Slovakia