Massachusetts Institute of Technology
Department of Economics
Working Paper Series

BIAS CORRECTED INSTRUMENTAL VARIABLES ESTIMATION
FOR DYNAMIC PANEL MODELS WITH FIXED EFFECTS

Jinyong Hahn
Jerry Hausman
Guido Kuersteiner

Working Paper 01-24
June 2001

Room E52-251
50 Memorial Drive
Cambridge, MA 02142

This paper can be downloaded without charge from the
Social Science Research Network Paper Collection at
http://papers.ssrn.com/paper.taf?abstract_id=276592
Bias Corrected Instrumental Variables Estimation for
Dynamic Panel Models with Fixed Effects

Jinyong Hahn (Brown University)
Jerry Hausman (MIT)
Guido Kuersteiner (MIT)

June, 2001

Acknowledgment: We are grateful to Yu Hu for exceptional research assistance. Helpful comments by Guido Imbens, Whitney Newey, James Powell, and participants of workshops at Arizona State University, Berkeley, Harvard/MIT, Maryland, Triangle, UCL, UCLA, and UCSD are appreciated.
Abstract

This paper analyzes the second order bias of instrumental variables estimators for a dynamic panel model with fixed effects. Three different methods of second order bias correction are considered. Simulation experiments show that these methods perform well if the model does not have a root near unity, but break down near the unit circle. To remedy the problem near the unit circle, a weak instrument approximation is used. We show that an estimator based on long differencing the model approximately achieves the minimal bias in a certain class of instrumental variables (IV) estimators. Simulation experiments document the performance of the proposed procedure in finite samples.

Keywords: dynamic panel, bias correction, second order, unit root, weak instrument
JEL: C13, C23, C51
1 Introduction

We are concerned with estimation of the dynamic panel model with fixed effects. Under large n, fixed T asymptotics it is well known from Nickell (1981) that the standard maximum likelihood estimator suffers from an incidental parameter problem leading to inconsistency. In order to avoid this problem the literature has focused on instrumental variables estimation (GMM) applied to first differences. Examples include Anderson and Hsiao (1982), Holtz-Eakin, Newey, and Rosen (1988), and Arellano and Bond (1991). Ahn and Schmidt (1995), Hahn (1997), and Blundell and Bond (1998) considered further moment restrictions. Comparisons of the information contents of varieties of moment restrictions made by Ahn and Schmidt (1995) and Hahn (1999) suggest that, unless stationarity of the initial level y_i0 is somehow exploited as in Blundell and Bond (1998), the orthogonality of lagged levels with first differences provides the largest source of information. Unfortunately, the standard GMM estimator obtained after first differencing has been found to suffer from substantial finite sample biases. See Alonso-Borrego and Arellano (1996). Motivated by this problem, modifications of likelihood based estimators emerged in the literature. See Kiviet (1995), Lancaster (1997), and Hahn and Kuersteiner (2000). The likelihood based estimators do reduce finite sample bias compared to the standard GMM, but the remaining bias is still substantial for small T.

In this paper, we attempt to eliminate the finite sample bias of the standard GMM estimator obtained after first differencing. We view the standard GMM estimator as a minimum distance estimator that combines T−1 instrumental variable estimators (2SLS) applied to first differences. This view has been adopted by Chamberlain (1984) and Griliches and Hausman (1986). It has been noted for quite a while that IV estimators can be quite biased in finite samples. See Nagar (1959), Mariano and Sawa (1972), Rothenberg (1983), Bekker (1994), Donald and Newey (1998), and Kuersteiner (2000). If the ingredients of the minimum distance estimator are all biased, it is natural to expect such bias in the resultant minimum distance estimator, or equivalently, the standard GMM estimator. We propose to eliminate the bias of the GMM estimator by replacing all the ingredients with Nagar type bias corrected instrumental variable estimators. To our knowledge, the idea of applying a minimum distance estimator to bias corrected instrumental variables estimators is new in the literature.

We consider a second order approach to the bias of the GMM estimator using the formula contained in Hahn and Hausman (2000). We find that the standard GMM estimator suffers from significant bias. The bias arises from two primary sources: the correlation of the structural equation error with the reduced form error, and the low explanatory power of the instruments. We attempt to solve these problems by using the "long difference technique" of Griliches and Hausman (1986). Griliches and Hausman noted that bias is reduced when long differences are used in the errors in variables problem, and a similar result works here with the second order bias. Long differencing also increases the explanatory power of the instruments, which further reduces the finite sample bias and also decreases the MSE of the estimator. To increase further the explanatory power of the instruments, we use the technique of using estimated residuals as additional instruments, a technique introduced in the simultaneous equations model by Hausman, Newey, and Taylor (1987) and used in the dynamic panel data context by Ahn and Schmidt (1995). Monte Carlo results demonstrate that the long difference estimator performs quite well, even for high positive values of the lagged variable coefficient where previous estimators are badly biased.

However, the second order bias calculations do not predict well the performance of the estimator for these high values of the coefficient. Simulation evidence shows that our approximations do not work well near the unit circle, where the model suffers from a near non-identification problem. In order to analyze the bias of standard GMM procedures under these circumstances we consider a local to non-identification asymptotic approximation. The alternative asymptotic approximation of Staiger and Stock (1997) and Stock and Wright (2000) is based on letting the correlation between instruments and regressors decrease at a prescribed rate of the sample size. In their work, it is assumed that the number of instruments is held fixed as the sample size increases. Their limit distribution is nonstandard and in special cases corresponds to exact small sample distributions such as the one obtained by Richardson (1968) for the bivariate simultaneous equations model. This approach is related to the work by Phillips (1989) and Choi and Phillips (1992) on the asymptotics of 2SLS in the partially identified case. Dufour (1997), Wang and Zivot (1998), and Nelson, Startz and Zivot (1998) analyze valid inference and tests in the presence of weak instruments. The associated bias and mean squared error of 2SLS under weak instrument assumptions was obtained by Chao and Swanson (2000). In this paper we use weak instrument asymptotic approximations to analyze 2SLS for the dynamic panel model. We analyze the impact of stationarity assumptions on the nonstandard limit distribution. Here we let the autoregressive parameter tend to unity in a similar way as in the near unit root literature. Nevertheless, we are not considering time series cases, since in our approximation the number of time periods T is held constant while the number of cross-sectional observations n tends to infinity.

Our limiting distribution for the GMM estimator shows that only moment conditions involving initial conditions are asymptotically relevant. We define a class of estimators based on linear combinations of asymptotically relevant moment conditions and show that a bias minimal estimator within this class can approximately be based on taking long differences of the dynamic panel model. In general, it turns out that under near non-identification asymptotics the optimal procedures of Alvarez and Arellano (1998), Arellano and Bond (1991), and Ahn and Schmidt (1995, 1997) are suboptimal from a bias point of view, and inference optimally should be based on a set of moment conditions smaller than the full set. We show that a bias minimal estimator can be obtained by using a particular linear combination of the original moment conditions. We use the weak instrument asymptotic approximation to the distribution of the IV estimator to derive the form of the optimal linear combination.
2 Review of the Bias of the GMM Estimator
Consider the usual dynamic panel model with fixed effects:

y_it = α_i + β y_i,t−1 + ε_it,    i = 1, ..., n;  t = 1, ..., T    (1)

where n is large and T is small. It has been common in the literature to consider the GMM estimator based on the first difference form of the model

y_it − y_i,t−1 = β (y_i,t−1 − y_i,t−2) + (ε_it − ε_i,t−1)

where the instruments are based on the orthogonality

E[y_is (ε_it − ε_i,t−1)] = 0,    s = 0, ..., t − 2.
Instead, we consider a version of the GMM estimator developed by Arellano and Bover (1995), which simplifies the characterization of the "weight matrix" in GMM estimation. We define the innovation u_it = α_i + ε_it. Arellano and Bover (1995) eliminate the fixed effect α_i in (1) by applying Helmert's transformation

u_it* = √((T−t)/(T−t+1)) · (u_it − (u_i,t+1 + ... + u_iT)/(T−t)),    t = 1, ..., T−1

instead of first differencing.¹ The transformation produces

y_it* = β x_it* + ε_it*,    t = 1, ..., T−1

where x_it* denotes the same transformation applied to y_i,t−1. Let z_it = (y_i0, ..., y_i,t−1)′. Our moment restriction is summarized by

E[z_it ε_it*] = 0,    t = 1, ..., T−1.
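As a concrete illustration (our own sketch, not part of the paper; the array layout and function name are assumptions for the example), the forward orthogonal deviations above can be implemented in a few lines:

```python
import numpy as np

def helmert(u):
    """Forward orthogonal deviations of an (n, T) panel array.

    Column t (0-based, corresponding to t+1 in the text) equals
    sqrt((T-t-1)/(T-t)) * (u_t - mean(u_{t+1}, ..., u_{T-1})),
    so the output has T-1 columns and any individual fixed effect drops out.
    """
    n, T = u.shape
    out = np.empty((n, T - 1))
    for t in range(T - 1):
        c = np.sqrt((T - t - 1) / (T - t))
        out[:, t] = c * (u[:, t] - u[:, t + 1:].mean(axis=1))
    return out

# A row that is constant over time (a pure fixed effect) is annihilated:
alpha = np.arange(5.0)
print(np.allclose(helmert(np.outer(alpha, np.ones(4))), 0.0))  # True
```

The transformation also preserves the i.i.d. structure of the errors: if u_it is i.i.d. with variance σ², the transformed errors satisfy E[u*u*′] = σ²I, which is the property behind footnote 1.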
It can be shown that, with the homoscedasticity assumption on ε_it, the optimal "weight matrix" is proportional to a block-diagonal matrix, with typical diagonal block equal to E[z_it z_it′]. Therefore, the optimal GMM estimator is equal to

b_GMM = (Σ_{t=1}^{T−1} x_t*′ P_t y_t*) / (Σ_{t=1}^{T−1} x_t*′ P_t x_t*)    (2)

where x_t* = (x_1t*, ..., x_nt*)′, y_t* = (y_1t*, ..., y_nt*)′, Z_t = (z_1t, ..., z_nt)′, and P_t = Z_t (Z_t′Z_t)⁻¹ Z_t′. Now, let b_2SLS,t denote the 2SLS of y_t* on x_t*:

b_2SLS,t = (x_t*′ P_t y_t*) / (x_t*′ P_t x_t*),    t = 1, ..., T−1.

¹Arellano and Bover (1995) note that the efficiency of the resultant GMM estimator is not affected whether or not Helmert's transformation is used instead of first differencing.
If ε_it are i.i.d. across i and t, then under the standard (first order) asymptotics where T is fixed and n grows to infinity, it can be shown that

√n (b_2SLS,1 − β, ..., b_2SLS,T−1 − β)′ → N(0, Ψ)

where Ψ is a diagonal matrix with t-th diagonal element equal to Var(ε_it)/(plim n⁻¹ x_t*′P_t x_t*). Therefore, we may consider a minimum distance estimator, which solves

min_b (b_2SLS,1 − b, ..., b_2SLS,T−1 − b) diag(x_1*′P_1x_1*, ..., x_{T−1}*′P_{T−1}x_{T−1}*) (b_2SLS,1 − b, ..., b_2SLS,T−1 − b)′.

The resultant minimum distance estimator is numerically identical to the GMM estimator in (2):

b_GMM = (Σ_{t=1}^{T−1} x_t*′P_t x_t* · b_2SLS,t) / (Σ_{t=1}^{T−1} x_t*′P_t x_t*)    (3)
Therefore, the GMM estimator b_GMM may be understood as a linear combination of the 2SLS estimators b_2SLS,1, ..., b_2SLS,T−1. It has long been known that 2SLS may be subject to substantial finite sample bias. See Nagar (1959), Rothenberg (1983), Bekker (1994), and Donald and Newey (1998) for related discussion. It is therefore natural to conjecture that a linear combination of the 2SLS estimators may be subject to quite substantial finite sample bias as well.
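The minimum distance interpretation can be made concrete with a small simulation (our own sketch, with plain first differences in place of the Helmert transform for brevity; all variable names are assumptions): the pooled IV estimator coincides numerically with the variance-weighted combination of the per-period 2SLS estimators, as in (3).

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, beta = 500, 4, 0.5

# DGP: y_it = a_i + beta * y_{i,t-1} + e_it, with a stationary start
a = rng.normal(size=n)
y = np.empty((n, T + 1))
y[:, 0] = a / (1 - beta) + rng.normal(size=n) / np.sqrt(1 - beta**2)
for t in range(1, T + 1):
    y[:, t] = a + beta * y[:, t - 1] + rng.normal(size=n)

num = den = 0.0
b2sls, weights = [], []
for t in range(2, T + 1):            # first-difference equation at period t
    dy = y[:, t] - y[:, t - 1]
    dx = y[:, t - 1] - y[:, t - 2]
    Z = y[:, :t - 1]                 # instruments: levels y_0, ..., y_{t-2}
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    xPx, xPy = dx @ P @ dx, dx @ P @ dy
    b2sls.append(xPy / xPx)          # per-period 2SLS
    weights.append(xPx)
    num, den = num + xPy, den + xPx

b_gmm = num / den                                                  # pooled, as in (2)
b_md = sum(w * b for w, b in zip(weights, b2sls)) / sum(weights)   # weighted, as in (3)
print(np.isclose(b_gmm, b_md))                                     # True
```

The equality holds by construction, since each b_2SLS,t is a ratio with denominator equal to its own weight; the sketch merely verifies the algebra behind (3).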
3 Bias Correction using Alternative Asymptotics
In this section, we consider the usual dynamic panel model with fixed effects (1) using the alternative asymptotics where n and T grow to infinity at the same rate. Such approximation was originally developed by Bekker (1994), and was adopted by Alvarez and Arellano (1998) and Hahn and Kuersteiner (2000) in the dynamic panel context. We also assume stationarity on y_i0 and normality on ε_it and α_i:

Condition 1  ε_it ∼ N(0, σ²) over i and t.

Condition 2  y_i0 ∼ N(α_i/(1−β), σ²/(1−β²)) and α_i ∼ N(0, σ_α²).²

In order to guarantee that Z_t′Z_t is nonsingular, we will assume:

Condition 3  T/n → ρ, where 0 ≤ ρ < 1.³

²This condition allows us to use lots of intermediate results in Alvarez and Arellano (1998). Our results are expected to be robust to violation of this condition.
Alvarez and Arellano (1998) show that, under Conditions 1-3,

√(nT) (b_GMM − β + (1/n)(1+β)) → N(0, 1−β²)    (4)

where b_GMM is defined in (2) and (3). By examining (4), we can develop a bias-corrected estimator. The bias-corrected estimator is given by

b̃_GMM = b_GMM + (1/n)(1 + b_GMM)    (5)

Combining (4) and (5), we can easily obtain:

Theorem 1  Suppose that Conditions 1-3 are satisfied, where n and T grow to infinity at the same rate. Then √(nT)(b̃_GMM − β) → N(0, 1−β²).

Hahn and Kuersteiner (2000) establish by a Hajek-type convolution theorem that N(0, 1−β²) is the minimal asymptotic distribution. As such, the bias corrected GMM estimator b̃_GMM is efficient.

Although the bias corrected GMM estimator does have a desirable property under the alternative asymptotics, it would not be easy to generalize the development leading to (5) to models involving other exogenous variables. Such a generalization would require the characterization of the asymptotic distribution (4) of the standard GMM estimator under the alternative asymptotics, which may not be trivial. We therefore consider eliminating the biases in b_2SLS,t instead. For example, we may use the Nagar type estimator that removes the higher order bias of b_2SLS,t:

b_Nagar,t = (x_t*′P_t y_t* − λ_t x_t*′M_t y_t*) / (x_t*′P_t x_t* − λ_t x_t*′M_t x_t*),    λ_t = K_t/(n − K_t),  M_t = I − P_t,

where K_t denotes the number of instruments for the t-th equation. See Donald and Newey (1998). We may also use LIML for the t-th equation, in which case λ_t would be estimated by the usual minimum eigenvalue search.
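For a single cross-section IV equation, the Nagar-type correction above can be sketched as follows (our own illustration; the data interface and names are assumptions, and plain 2SLS is recovered by setting λ = 0):

```python
import numpy as np

def nagar_iv(y, x, Z):
    """Nagar-type bias-corrected 2SLS for a scalar regressor x with
    instruments Z (n x K): lambda = K/(n-K), M = I - P."""
    n, K = Z.shape
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    lam = K / (n - K)
    xPy, xPx = x @ P @ y, x @ P @ x
    xMy, xMx = x @ y - xPy, x @ x - xPx   # x'My = x'y - x'Py since M = I - P
    return (xPy - lam * xMy) / (xPx - lam * xMx)
```

The correction subtracts λ times the "unexplained" cross-products, which removes the leading O(K/n) term of the 2SLS bias under homoscedasticity.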
We now examine properties of the corresponding minimum distance estimator. One possible weight matrix for this problem is given by the diagonal matrix with entries

(x_1*′P_1x_1* − λ_1 x_1*′M_1x_1*), ..., (x_{T−1}*′P_{T−1}x_{T−1}* − λ_{T−1} x_{T−1}*′M_{T−1}x_{T−1}*).

With this weight matrix, it can be shown that the minimum distance estimator is given by

b_Nagar = Σ_{t=1}^{T−1} (x_t*′P_t y_t* − λ_t x_t*′M_t y_t*) / Σ_{t=1}^{T−1} (x_t*′P_t x_t* − λ_t x_t*′M_t x_t*)    (6)

One possible way to examine the finite sample property of the new estimator is to use the alternative asymptotics:

³Alvarez and Arellano (1998) only require 0 ≤ ρ < ∞. We require ρ < 1 to guarantee that Z_t′Z_t is nonsingular for every t.
Theorem 2  Suppose that Conditions 1-3 are satisfied. Also suppose that n and T grow to infinity at the same rate. Then √(nT)(b_Nagar − β) → N(0, 1−β²).

Proof. Lemmas 10 and 11 in Appendix A, along with Lemma 2 of Alvarez and Arellano (1998), establish that

(nT)^{−1/2} Σ_{t=1}^{T−1} (x_t*′P_t ε_t* − (K_t/(n−K_t)) x_t*′M_t ε_t*) → N(0, σ⁴/(1−β²))

and

(nT)^{−1} Σ_{t=1}^{T−1} (x_t*′P_t x_t* − (K_t/(n−K_t)) x_t*′M_t x_t*) → σ²/(1−β²),

from which the conclusion follows. ∎
In Table 1, we summarize the finite sample properties of b_Nagar and b_LIML approximated by 10000 Monte Carlo runs.⁴ Here, b_LIML is the estimator where the λ_t in (6) are replaced by the corresponding "eigenvalues". In general, we find that b_Nagar and b_LIML successfully remove the finite sample bias unless β is close to one and the sample size is small.
4 Bias Correction using Higher Order Expansions
In the previous section, we explained the bias of the GMM estimator as a result of the biases of the 2SLS estimators. In this section, we consider elimination of the bias by adopting the second order Taylor type approximation. This perspective has been adopted by Nagar (1959) and Rothenberg (1983). For this purpose we first analyze the second order bias of a general minimization estimator b of a single parameter β defined by

b ≡ argmin_{c∈C} Q_n(c)    (7)

for some C ⊂ R. The criterion function Q_n(c) is assumed to be of the form Q_n(c) = g(c)′ G(c)⁻¹ g(c), where g(c) and G(c) are defined in Condition 7 below. The score for the minimization problem is denoted by S_n(c) = dQ_n(c)/dc. The criterion function depends on primitive functions δ and ψ, where δ(w_i, c) and ψ(w_i, c) are mappings from R^d × C into R^m for d ≥ 1 such that E[δ(w_i, β)] = 0, and where the w_i are i.i.d. observations. We impose the following additional conditions on w_i, δ, and ψ.

⁴In our Monte Carlo experiment, we let ε_it ∼ N(0,1), α_i ∼ N(0,1), and y_i0 ∼ N(α_i/(1−β), 1/(1−β²)).
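To fix ideas, here is a toy instance of the minimization problem (7) (our own sketch; the scalar moment δ(w, c) = w − c and the grid search are assumptions made purely for illustration). With ψ = δ the criterion is a continuously updated GMM objective, and its minimizer recovers the sample mean:

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(loc=2.0, size=500)       # observations w_i, true beta = 2

def Qn(c):
    d = w - c                           # delta(w_i, c); E[delta(w_i, beta)] = 0
    g = d.mean()                        # g(c) = sample mean of the moment
    G = (d * d).mean()                  # G(c) with psi = delta (CUE case)
    return g / G * g                    # Q_n(c) = g(c)' G(c)^{-1} g(c)

grid = np.linspace(0.0, 4.0, 4001)
b = grid[np.argmin([Qn(c) for c in grid])]
print(abs(b - w.mean()) < 1e-2)         # True: the minimizer is the sample mean
```

In this scalar case Q_n(c) = (w̄ − c)² / (s² + (w̄ − c)²), which is monotone in (w̄ − c)², so the argmin is exactly the sample mean; the grid search only introduces rounding at the grid resolution.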
Condition 4  The random variables w_i are i.i.d.

Condition 5  The functions δ(w, c) and ψ(w, c) are three times differentiable in c for c ∈ int C, where C ⊂ R is a compact set such that β ∈ int C. Assume that δ and ψ satisfy a Lipschitz condition ‖δ(w_i, c_1) − δ(w_i, c_2)‖ ≤ M_δ(w_i)|c_1 − c_2| for c_1, c_2 ∈ C and some function M_δ(·): R^d → R with E[M_δ(w_i)] < ∞ and E|M_ψ(w_i)|² < ∞, with the same statement holding for ψ.

Condition 6  Let δ_j(w_i, c) = ∂^j δ(w_i, c)/∂c^j, Ψ(w_i, c) = ψ(w_i, c)ψ(w_i, c)′, and Ψ_j(w_i, c) = ∂^j Ψ(w_i, c)/∂c^j. Then λ_j(c) = E[δ_j(w_i, c)] and Λ_j(c) = E[Ψ_j(w_i, c)] all exist and are finite for j = 0, ..., 3. For simplicity, we use the notation λ_j = λ_j(β), Λ_j = Λ_j(β), λ(c) = λ_0(c), and Λ(c) = Λ_0(c).

Condition 7  Let g(c) = n⁻¹ Σ_{i=1}^n δ(w_i, c), g_j(c) = n⁻¹ Σ_{i=1}^n δ_j(w_i, c), G(c) = n⁻¹ Σ_{i=1}^n ψ(w_i, c)ψ(w_i, c)′, and G_j(c) = n⁻¹ Σ_{i=1}^n Ψ_j(w_i, c). Then g(c) → E[δ(w_i, c)], g_j(c) → E[δ_j(w_i, c)], G(c) → E[Ψ(w_i, c)], and G_j(c) → E[Ψ_j(w_i, c)] for all c ∈ C.
Our asymptotic approximation of the second order bias of b is based on an approximate estimator b̃ such that b̃ − b = o_p(n⁻¹), while the original estimator b need not necessarily possess moments of any order. The approximate bias of b is then defined as E[b̃] − β. In order to justify our approximation we need to establish that b is √n-consistent and that S_n(b) = 0 with probability tending to one. For this purpose we introduce the following additional conditions.

Condition 8  (i) There exists some finite M < ∞ such that the eigenvalues of E[Ψ(w_i, c)] are contained in the compact interval [M⁻¹, M] for all c ∈ C; (ii) the vector E[δ(w_i, c)] = 0 if and only if c = β; (iii) λ_1 ≠ 0.

Condition 9  There exists some η > 0 such that E sup_{c∈C} ‖ψ(w_i, c)‖^{2+η} < ∞, E sup_{c∈C} ‖δ(w_i, c)‖^{2+η} < ∞, E[M_δ(w_i)^{2+η}] < ∞, and E[M_ψ(w_i)^{2+η}] < ∞.

Condition 8 is an identification condition that guarantees the existence of a unique interior minimum of the limiting criterion function. Condition 9 corresponds to Assumption B of Andrews (1994) and is used to establish a stochastic equicontinuity property of the criterion function.
Lemma 1  Under Conditions 4-9, b defined in (7) satisfies √n(b − β) = O_p(1) and S_n(b) = 0 with probability tending to 1.

Proof. See Appendix B. ∎
Based on Lemma 1, the first order condition for (7) can be characterized by

0 = 2g_1(b)′G(b)⁻¹g(b) − g(b)′G(b)⁻¹G_1(b)G(b)⁻¹g(b)    wp → 1.    (8)

A second order Taylor expansion of (8) around β leads to a representation of b − β up to terms of order o_p(n⁻¹). In Appendix B, it is shown that

√n(b̃ − β) = Ψ* + (1/√n)(T* + Φ* + Ξ*) + o_p(1/√n)    (9)

(See Definition 2 in Appendix B for the definitions of Ψ*, T*, Φ*, Ξ*, and F.) Ignoring the o_p(1/√n) term in (9) and taking expectations, we obtain the "approximate mean" of √n(b̃ − β). We present the second order bias of b in the next Theorem.
Theorem 3  Under Conditions 4-9, the second order bias of b is equal to

(1/n)(E[T*] + E[Φ*] + E[Ξ*])    (10)

where

E[Ψ*] = 0,

E[T*] = 2 trace(Λ⁻¹ E[δ_1i ψ_i′]) − 2λ_1′Λ⁻¹ E[ψ_i ψ_i′ Λ⁻¹ δ_i] − trace(Λ⁻¹ Λ_1 Λ⁻¹ E[δ_i δ_i′]),

E[Ξ*] = 8λ_1′Λ⁻¹ E[δ_i ψ_i′] Λ⁻¹ Λ_1 Λ⁻¹ λ_1 − 4λ_1′Λ⁻¹ E[δ_i λ_1′ Λ⁻¹ ψ_i ψ_i′] Λ⁻¹ λ_1 − 8λ_1′Λ⁻¹ E[δ_i δ_i′] Λ⁻¹ Λ_1 Λ⁻¹ λ_1 + 4λ_1′Λ⁻¹ Λ_1 Λ⁻¹ E[δ_i δ_i′] Λ⁻¹ λ_1,

and

E[Φ*] = 4λ_1′Λ⁻¹ E[δ_i δ_i′] Λ⁻¹ λ_1.

Proof. See Appendix B. ∎
Remark 1  For the particular case where ψ_i = δ_i, i.e., when b is a CUE, the bias formula (10) exactly coincides with Newey and Smith's (2000).

We now apply these general results to the GMM estimator b_GMM of the dynamic panel model. The GMM estimator b_GMM can be understood to be a solution to the minimization problem

min_{c∈C} m_n(c)′ W_n⁻¹ m_n(c)

where C is some closed interval on the real line containing the true parameter value β,

m_n(c) = n⁻¹ Σ_{i=1}^n (z_i1(y_i1* − x_i1*c), ..., z_i,T−1(y_i,T−1* − x_i,T−1*c))

stacked into a column vector, and

W_n = n⁻¹ Σ_{i=1}^n diag(z_i1z_i1′, ..., z_i,T−1z_i,T−1′).

We now characterize the finite sample bias of the GMM estimator b_GMM of the dynamic panel model. Using Theorem 3, it can be shown that:
Theorem 4  Under Conditions 1-3, the second order bias of b_GMM is equal to

(1/n)(B_1 + B_2 + B_3)    (11)

where, with Γ_t^{xz} = E[x_it* z_it′], Γ_t^{zz} = E[z_it z_it′], Γ_t^{zzεx} = E[z_it z_it′ ε_it* x_it*], B_{3,1}(t, s) = E[ε_it* ε_is* z_it z_is′], and Γ_1 = Σ_{t=1}^{T−1} Γ_t^{xz} (Γ_t^{zz})⁻¹ Γ_t^{zx},

B_1 = (1/Γ_1) Σ_{t=1}^{T−1} trace((Γ_t^{zz})⁻¹ Γ_t^{zzεx}),

B_2 = −(2/Γ_1²) Σ_{t=1}^{T−1} Γ_t^{xz} (Γ_t^{zz})⁻¹ Γ_t^{zzεx} (Γ_t^{zz})⁻¹ Γ_t^{zx},

B_3 = (1/Γ_1²) Σ_{t=1}^{T−1} Σ_{s=1}^{T−1} Γ_t^{xz} (Γ_t^{zz})⁻¹ B_{3,1}(t, s) (Γ_s^{zz})⁻¹ Γ_s^{zx}.

Proof. See Appendix C. ∎
In Table 2, we compare the actual performance of b_GMM and the prediction of its bias based on Theorem 4. Table 2 tabulates the actual bias of the estimator approximated by 10000 Monte Carlo runs, and compares it with the second order bias based on the formula (11). It is clear that the second order theory does a reasonably good job except when β is close to the unit circle and n is small.

Theorem 4 suggests a natural way of eliminating the bias. Suppose that B̂_1, B̂_2, B̂_3 are √n-consistent estimators of B_1, B_2, B_3. Then

b_BC1 = b_GMM − (1/n)(B̂_1 + B̂_2 + B̂_3)    (12)

is first order equivalent to b_GMM, and has second order bias equal to zero. Define

Γ̂_t^{xz} = n⁻¹ Σ_{i=1}^n x_it* z_it′,    Γ̂_t^{zz} = n⁻¹ Σ_{i=1}^n z_it z_it′,    Γ̂_t^{zzεx} = n⁻¹ Σ_{i=1}^n z_it z_it′ ε̂_it* x_it*,    B̂_{3,1}(t, s) = n⁻¹ Σ_{i=1}^n ε̂_it* ε̂_is* z_it z_is′,

where ε̂_it* = y_it* − x_it* b_GMM. Let B̂_1, B̂_2 and B̂_3 be defined by replacing Γ_t^{xz}, Γ_t^{zz}, Γ_t^{zzεx}, and B_{3,1}(t, s) in B_1, B_2 and B_3 by these sample counterparts. The B̂s will then satisfy the √n-consistency requirement, and hence the estimator (12) will have zero second order bias. Because the summand in the numerator of B_3 is equal to zero for s < t, we may instead consider

b_BC2 = b_GMM − (1/n)(B̂_1 + B̂_2 + B̃_3)    (13)

where

B̃_3 = (1/Γ̂_1²) Σ_{t=1}^{T−1} Σ_{s=t}^{T−1} Γ̂_t^{xz} (Γ̂_t^{zz})⁻¹ B̂_{3,1}(t, s) (Γ̂_s^{zz})⁻¹ Γ̂_s^{zx}.
Second order asymptotic theory predicts approximately that b_BC2 would be relatively free of bias. We examined whether such prediction is reasonably accurate in finite samples by 5000 Monte Carlo runs.⁵ Table 3 summarizes the properties of b_BC2. We have seen in Table 2 that the second order theory is reasonably accurate unless β is close to one. It is therefore sensible to conjecture that b_BC2 would have a reasonable finite sample bias property as long as β is not too close to one. Such a conjecture is verified in Table 3.
5 Long Difference Specification

In previous sections, we noted that even the second order asymptotics "fails" to be a good approximation around β ≈ 1. This phenomenon can be explained by the "weak instrument" problem. See Staiger and Stock (1997). Blundell and Bond (1998) argued that the weak instrument problem can be alleviated by assuming stationarity of the initial observation y_i0. Such a stationarity assumption may not be appropriate for particular applications. Further, the stationarity assumption turns out to be a predominant source of information around the unit circle, as noted by Hahn (1999). We therefore turn to some other method to overcome the weak instrument problem around the unit circle avoiding the stationarity assumption.

5.1 Intuition

We argue that some of the difficulties of inference around the unit circle would be alleviated by taking a long difference. To be specific, we focus on a single equation

y_iT − y_i1 = β (y_i,T−1 − y_i0) + (ε_iT − ε_i1)    (14)

It is easy to see that the initial observation y_i0 would serve as a valid instrument. Using similar intuition as in Hausman and Taylor (1983) or Ahn and Schmidt (1995), we can see that y_i,T−1 − βy_i,T−2, ..., y_i2 − βy_i1 would be valid instruments as well.

Using the Hahn-Hausman (HH) formula, we can see that the first order bias of 2SLS (GMM) depends on 4 factors: the "explained" variance of the first stage reduced form equation, the covariance between the stochastic disturbance of the structural equation and the reduced form equation, the number of instruments, and the sample size:

bias ≈ (number of instruments) × ("covariance") / (n × "explained" variance of the first stage reduced form equation)

Similarly, the Donald-Newey (DN) (1999) MSE formula depends on the same 4 factors. We now consider first differences (FD) and long differences (LD) to see why LD does so much better in our Monte Carlo experiments.

⁵The difference of Monte Carlo runs here induced some minor numerical difference (in properties of b_GMM) across Tables 1-3.
Assume that T = 4. The long difference specification is:

y_i4 − y_i1 = β (y_i3 − y_i0) + ε_i4 − ε_i1

and we use y_i0 as an instrument. For comparison, the first difference setup uses the equation

y_i2 − y_i1 = β (y_i1 − y_i0) + ε_i2 − ε_i1    (15)

with y_i0 as the instrument. Assuming stationarity, y_i0 = α_i/(1−β) + ξ_i0, where ξ_i0 ∼ N(0, σ²/(1−β²)). It can be shown that the covariance between the first stage error and the structural error in the first difference setup is equal to −σ², and that the ratio of the covariance to the "explained" variance of the first stage, which determines the bias of 2SLS, is

−(1+β)/(1−β),

which is equal to −19 for β = .9. For a sample size of n = 100 and 5 instruments, the HH formula then implies a percentage bias of 5 × (−19)/(100 × 0.9) × 100 = −105.56.

We now turn to the moments under the "ideal conditions" where β is known, so that the nonlinear Ahn-Schmidt (AS) restrictions become linear restrictions and the "residuals" y_i3 − βy_i2 and y_i2 − βy_i1 can be used as additional instruments in the long difference equation. A similar calculation shows that the corresponding ratio for the long difference setup is equal to −.37408 for β = .9. Note that the maximum value this ratio can take in absolute terms is .37408, which is much smaller than 19.

We therefore conclude that the long difference increases R² and decreases the covariance. Further, the number of instruments is smaller in the long difference specification, so we should expect even smaller bias. Thus, all of the factors in the HH bias equation, except sample size, cause the LD estimator to have smaller bias.
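The back-of-envelope numbers above can be reproduced directly (a sketch of the arithmetic only; the variable names and the single-instrument long-difference case are our own choices):

```python
beta, n, m = 0.9, 100, 5               # AR(1) coefficient, sample size, instruments

# First differences: covariance / explained variance = -(1+beta)/(1-beta)
ratio_fd = -(1 + beta) / (1 - beta)
print(round(ratio_fd, 2))              # -19.0

# Approximate percentage bias from the HH formula: m * ratio / (n * beta) * 100
pct_bias_fd = m * ratio_fd / (n * beta) * 100
print(round(pct_bias_fd, 2))           # -105.56

# Long differences: the ratio is bounded by .37408 in absolute value at
# beta = .9, so with a single instrument the same formula gives
pct_bias_ld = 1 * (-0.37408) / (n * beta) * 100
print(round(pct_bias_ld, 2))           # -0.42
```

The three orders of magnitude between the two percentage biases are exactly the point of the long difference specification.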
5.2 Monte Carlo: Finite Iteration
For the long difference specification, we can use y_i0 as well as the "residuals" y_i,T−1 − βy_i,T−2, ..., y_i2 − βy_i1 as valid instruments.⁶ We may estimate β by applying 2SLS to the long difference equation (14) using y_i0 as instrument. We may then use (y_i0, y_i,T−1 − b_2SLS y_i,T−2, ..., y_i2 − b_2SLS y_i1) as instruments to the long difference equation (14) to estimate β. Call the estimator b_2SLS,1. By iterating this procedure, we can define b_2SLS,2, b_2SLS,3, .... Similarly, we may first estimate β by the Arellano and Bover approach, and use (y_i0, y_i,T−1 − b_GMM y_i,T−2, ..., y_i2 − b_GMM y_i1) as instruments to the long difference equation (14) to estimate β. Call the estimator b_GMM,1. By iterating this procedure, we can define b_GMM,2, b_GMM,3, .... Likewise, we may estimate β by b_LIML, and use (y_i0, y_i,T−1 − b_LIML y_i,T−2, ..., y_i2 − b_LIML y_i1) as instruments to the long difference equation (14) to estimate β. Call the estimator b_LIML,1. By iterating this procedure, we can define b_LIML,2, b_LIML,3, ....

We implemented these procedures for T = 5 and n = 100, with β = 0.9 and σ_α² = σ² = 1, i.e., ε_it ∼ N(0,1). Our finding based on 5000 Monte Carlo runs is summarized in Table 4. In general, we found that the iteration of the long difference estimator works quite well.⁷
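The iteration just described can be sketched as follows (our own illustration, not the paper's code; we start from the y_i0-only 2SLS and rebuild the residual instruments each round, and all names are assumptions):

```python
import numpy as np

def iterated_ld(y, iters=3):
    """Iterated 2SLS on the long difference
    y_iT - y_i1 = beta (y_i,T-1 - y_i0) + (e_iT - e_i1),
    adding residual instruments y_it - b*y_i,t-1 after the first pass."""
    n, Tp1 = y.shape
    T = Tp1 - 1
    dy = y[:, T] - y[:, 1]
    dx = y[:, T - 1] - y[:, 0]
    z0 = y[:, 0]
    b = (z0 @ dy) / (z0 @ dx)               # first pass: y_i0 as the only instrument
    for _ in range(iters):
        resid = [y[:, t] - b * y[:, t - 1] for t in range(2, T)]
        Z = np.column_stack([z0] + resid)   # (y_i0, y_i,T-1 - b y_i,T-2, ..., y_i2 - b y_i1)
        P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
        b = (dx @ P @ dy) / (dx @ P @ dx)
    return b

# Example run under a stationary DGP with beta = 0.9 (large n for stability)
rng = np.random.default_rng(3)
n, T, beta = 2000, 5, 0.9
a = rng.normal(size=n)
y = np.empty((n, T + 1))
y[:, 0] = a / (1 - beta) + rng.normal(size=n) / np.sqrt(1 - beta**2)
for t in range(1, T + 1):
    y[:, t] = a + beta * y[:, t - 1] + rng.normal(size=n)
print(round(iterated_ld(y), 2))
```

The paper also initializes the residual instruments from the Arellano-Bover GMM estimate or from LIML; only the 2SLS-initialized variant is sketched here.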
We compared the performance of our estimator with Blundell and Bond's (1998) estimator, which uses additional information, i.e., stationarity. We compared four versions of their estimators b_BB1, ..., b_BB4; for their exact definitions, see Appendix E. Of the four versions, b_BB3 and b_BB4 are the ones reported in their Monte Carlo section. In our Monte Carlo exercise, we set β = 0.9, σ_α² = 1, and ε_it ∼ N(0,1). Our finding with 5000 Monte Carlo runs is contained in Table 5. In terms of bias, we find that Blundell and Bond's estimators b_BB3 and b_BB4 have similar properties as the long difference estimator(s), although the former dominates the latter in terms of variability.

⁶We acknowledge that the residual instruments are irrelevant under the near unity asymptotics.
⁷Second order theory does not seem to explain the behavior of the long difference estimator. In Table 7, we compare the actual performance of the long difference based estimators with the second order theory developed in Appendix F.
(We note, however, that b_BB1 and b_BB2 are seriously biased. This indicates that the choice of weight matrix matters in implementing Blundell and Bond's procedure.) This result is not surprising, because the long difference estimator does not use the information contained in the initial condition. See Hahn (1999) for a related discussion.

We also wanted to examine the sensitivity of Blundell and Bond's estimator to misspecification, i.e., a nonstationary distribution of y_i0. For this situation their estimator will be inconsistent. In order to assess the finite sample sensitivity, we considered the cases where y_i0 ∼ N(α_i/(1−β_F), 1/(1−β_F²)) for β_F = .5 and β_F = 0. Our Monte Carlo results based on 5000 runs are contained in Table 6. We find that the long difference estimator is quite robust, whereas b_BB3 and b_BB4 become quite biased, as predicted by the first order theory. (We note that b_BB1 and b_BB2 are less sensitive to misspecification. Such robustness consideration suggests that the choice of weight matrix is not straightforward in implementing Blundell and Bond's procedure.) We conclude that the long difference estimator works quite well even compared to Blundell and Bond's (1998) estimator.⁸

6 Near Unit Root Approximation

Our Monte Carlo simulation results summarized in Tables 1, 2, and 3 indicate that the previously discussed approximations, and the bias corrections that are based on them, do not work well near the unit circle. This is because the identification of the model becomes "weak" near the unit circle. See Blundell and Bond (1998), who related the problem to the analysis of weak instruments by Staiger and Stock (1997). In this Section, we formally adopt approximations local to the points in the parameter space that are not identified. To be specific, we consider model (1) for T fixed and n → ∞ when also β_n tends to unity, and analyze the bias of the associated weak instrument limit distribution. Following Ahn and Schmidt (1995, 1997), we analyze the class of GMM estimators that exploit the moment conditions

E[u_i u_i′] = σ² I + σ_α² 11′,    E[u_i y_i0] = σ_{αy0} 1

with 1 = [1, ..., 1]′ a vector of dimension T and u_i = [u_i1, ..., u_iT]′. We show that a strict subset of the full set of moment restrictions should be used in estimation in order to minimize bias, and we argue that this subset of moment restrictions leads to inference based on the "long difference" specification. The moment conditions can be written more compactly as

vech E[u_i u_i′] = σ² vech(I) + σ_α² vech(11′),    E[u_i y_i0] = σ_{αy0} 1    (16)

⁸We did try to compare the sensitivity of the two sets of moment restrictions by implementing CUE, but we experienced some numerical problems. Numerical problems seem to be an issue with CUE in general; Windmeijer (2000) reports similar problems with CUE.
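The covariance structure in the moment conditions above is easy to confirm by simulation (our own sketch; the variance values are arbitrary assumptions made for the check):

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, s2_a, s2_e = 400000, 4, 1.5, 1.0

alpha = rng.normal(scale=np.sqrt(s2_a), size=(n, 1))
u = alpha + rng.normal(scale=np.sqrt(s2_e), size=(n, T))   # u_it = alpha_i + eps_it

S = u.T @ u / n                                            # sample E[u_i u_i']
target = s2_e * np.eye(T) + s2_a * np.ones((T, T))
print(np.allclose(S, target, atol=0.05))                   # True
```

Every diagonal entry converges to σ² + σ_α² and every off-diagonal entry to σ_α², which is exactly the σ²I + σ_α²11′ structure that the vech representation compresses.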
where the redundant moment conditions have been eliminated by use of the vech operator, which extracts the upper diagonal elements from a symmetric matrix. Representation (16) makes clear that the vector b ∈ R^{T(T+1)/2+T} is contained in a 3 dimensional subspace, which is another way of stating that there are G = T(T+1)/2 + T − 3 moment conditions. This statement is equivalent to Ahn and Schmidt's (1997) analysis of the number of moment conditions. The set of all GMM estimators leading to consistent estimates of β can therefore be described by a (T(T+1)/2 + T) × G matrix A spanning the orthogonal complement of b, such that b'A = 0. This matrix A, which contains all the vectors satisfying b'A = 0, imposes restrictions on the moment conditions by eliminating the unknown parameters (σ_ε², σ_α², σ_{αy0}). It will be convenient to choose A such that

   b'A = [E(u_it Δu_is), E(u_iT Δu_ij), E(u_i1 Δu_ik), E(Δu_i' y_i0)] = 0,
       s = 2, ..., T;  t = 1, ..., s−2;  j = 2, ..., T−1;  k = 2, ..., T,

where Δu_i = [u_i2 − u_i1, ..., u_iT − u_iT−1]. For our purposes it becomes transparent that any other representation of the moment conditions can be obtained by applying a corresponding nonsingular linear operator C to the matrix A. It can be checked that there exists a nonsingular matrix C such that b'AC = 0 is identical to the moment conditions (4a)-(4c) in Ahn and Schmidt (1997).
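The orthogonal-complement construction above can be illustrated numerically (a sketch of our own, not from the paper): A can be computed as the null space of the 3-dimensional subspace via an SVD, after which b'A = 0 holds for any admissible parameter values.

```python
import numpy as np

def vech(M):
    return M[np.triu_indices(M.shape[0])]

T = 5
ones = np.ones((T, 1))
# the three directions spanning the subspace that contains b for every
# value of (sigma_eps^2, sigma_alpha^2, sigma_{alpha y0})
B = np.column_stack([
    np.concatenate([vech(np.eye(T)), np.zeros(T)]),
    np.concatenate([vech(ones @ ones.T), np.zeros(T)]),
    np.concatenate([np.zeros(T * (T + 1) // 2), np.ones(T)]),
])

# A spans the orthogonal complement: right singular vectors of B' with zero singular value
_, _, Vt = np.linalg.svd(B.T)
A = Vt[3:].T            # (T(T+1)/2 + T) x G, here 20 x 17

# b'A = 0 for an arbitrary admissible b
b = 1.3 * B[:, 0] + 0.7 * B[:, 1] + 0.2 * B[:, 2]
print(A.shape, float(np.max(np.abs(b @ A))))
```

The maximal entry of b'A is numerically zero, confirming that A annihilates the whole subspace, not just one particular b.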
We investigate the properties of (infeasible) GMM estimators based on the transformed set of moment conditions

   E[u_it Δu_is(β)] = 0,   E[u_iT Δu_ij(β)] = 0,   E[u_i1 Δu_ik(β)] = 0,   E[y_i0 Δu_it(β)] = 0,

obtained by setting Δu_it(β) = Δy_it − β Δy_it−1. Here, we assume that the instruments u_it are observable. Let g_i1(β) denote a column vector consisting of u_it Δu_is(β), u_iT Δu_ij(β) and u_i1 Δu_ik(β). Also let g_i2(β) = [y_i0 Δu_i(β)]. Finally, let g_i(β) = [g_i1(β)', g_i2(β)']', g_n(β) = n^{−3/2} Σ_{i=1}^n g_i(β), and let the infeasible optimal weight matrix be Ω_n = E[g_i(β_n) g_i(β_n)']. We analyze GMM estimators based on the transformed moment conditions C'g_n(β), where C is a G × r matrix for 1 ≤ r ≤ G such that C'C = I_r and rank(C(C'Ω_nC)^+ C') ≥ 1. Choosing r less than G allows to exclude certain moment conditions. The infeasible GMM estimator of a possibly transformed set of moment conditions then solves

   β̂_2SLS = argmin_β g_n(β)' C (C'Ω_nC)^+ C' g_n(β),    (17)

where we use (C'Ω_nC)^+ to denote the Moore-Penrose inverse. We thus allow the use of a singular weight matrix. With λ_i1 = −∂g_i1(β)/∂β, λ_i2 = −∂g_i2(β)/∂β and f_n = n^{−3/2} Σ_{i=1}^n [λ_i1', λ_i2']', the 2SLS estimator can be written as

   β̂_2SLS − β_n = (f_n' C (C'Ω_nC)^+ C' f_n)^{−1} f_n' C (C'Ω_nC)^+ C' g_n(β_n).    (18)

We analyze the behavior of β̂_2SLS − β_n under the following additional assumptions.^

^Kruiniger (2000) considers similar local-to-unity asymptotics.
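The mechanics of (17)-(18), in particular the use of the Moore-Penrose inverse with a moment-selecting C, can be sketched in a toy linear moment model (our own illustration; the moment vector, its Jacobian and all numbers below are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear moment vector g_n(beta) = a - beta * d; C keeps r of the G conditions
G, r, beta0 = 6, 3, 0.9
d = rng.normal(1.0, 0.1, G)                # hypothetical Jacobian entries
a = beta0 * d + rng.normal(0.0, 0.01, G)   # moments are small noise at beta0

C = np.eye(G)[:, :r]                       # C'C = I_r: exclude the last G - r moments
Omega = np.eye(G)
W = C @ np.linalg.pinv(C.T @ Omega @ C) @ C.T   # Moore-Penrose weight as in (17)

def objective(beta):
    g = a - beta * d
    return g @ W @ g

# linear-model analogue of (18): closed-form minimizer of the quadratic objective
beta_hat = (d @ W @ a) / (d @ W @ d)
grid = np.linspace(0.5, 1.3, 1601)
beta_grid = grid[np.argmin([objective(b) for b in grid])]
print(beta_hat, beta_grid)   # both near 0.9
```

The grid minimizer of (17) and the closed form (18) coincide up to grid resolution, and the projection W is singular whenever r < G.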
We make the following assumption about the data generating process and the local-to-unity parametrization.

Condition 10 Let y_it = α_i + β_n y_it−1 + ε_it with β_n = exp(−c/n) for some c > 0, ε_it ~ N(0, σ_ε²), α_i ~ N(0, σ_α²), and

   y_i0 = α_i/(1 − β_n) + Σ_{s=1}^n β_n^{s−1} ε_i,1−s + O_p(n^{−1}),   where Σ_{s=1}^n β_n^{s−1} ε_i,1−s ~ N(0, σ_ε² (1 − β_n^{2n})/(1 − β_n²)).

Under the generating mechanism described in the previous definition the following Lemma can be established.
Lemma 2 Assume β_n = exp(−c/n) for some c > 0. For T fixed and as n → ∞,

   n^{−3/2} Σ_{i=1}^n f_i2 ⇒ ξ_λ   and   n^{−3/2} Σ_{i=1}^n g_i2(β_n) ⇒ ξ_y

jointly, where [ξ_λ', ξ_y']' ~ N(0, Σ) with

   Σ = [ Σ11  Σ12 ; Σ21  Σ22 ],   Σ21 = Σ12',   Σ12 = S M1,   Σ22 = S M2,

where S is a positive constant depending on c, σ_α² and σ_ε²,

        [ -1   0  ...   0 ]                 [  2  -1   0  ... ]
   M1 = [  1  -1  ...   0 ]    and     M2 = [ -1   2  -1  ... ]
        [       ...        ]                [       ...        ]
        [  0  ...   1  -1 ]                 [ ...   0  -1   2 ]

are (T−1) × (T−1) matrices, M1 bidiagonal with −1 on the diagonal and 1 on the first subdiagonal, and M2 the tridiagonal second difference matrix, so that M1 + M1' = −M2. We also have that Σ11 is diagonal.
Proof. See Appendix D.
Using Lemma 2, the limiting distribution of β̂_2SLS is stated in the next corollary. For this purpose we define the augmented vectors ξ_λ* = [0, ..., 0, ξ_λ']' and ξ_y* = [0, ..., 0, ξ_y']' and partition C = [C0', C1']' such that C'ξ_λ* = C1'ξ_λ and C'ξ_y* = C1'ξ_y. Let r1 denote the rank of C1.

Corollary 1 Let β̂_2SLS − β_n be given by (18). If Condition 10 is satisfied then

   β̂_2SLS − β_n ⇒ (ξ_λ'C1(C1'Σ22C1)^+ C1'ξ_λ)^{−1} ξ_λ'C1(C1'Σ22C1)^+ C1'ξ_y ≡ X(C1, Σ22).    (19)
Unlike the limiting distribution for the standard weak instrument problem, X(C1, Σ22), defined in (19), is based on normal vectors that have zero mean. This degeneracy is generated by the presence of the fixed effect in the initial condition, scaled up appropriately to satisfy the stationarity requirement^ for the process y_it. Inspection of the proof shows that the usual concentration parameter appearing in the limit distribution is dominated by a stochastic component related to the fixed effect. This situation seems to be similar to time series models where deterministic trends can dominate the asymptotic distribution.

Based on Corollary 1, we define the following class of 2SLS estimators for the dynamic panel model. The class contains estimators that only use a nonredundant set of moment conditions involving the initial conditions y_i0.
Definition 1 Let β̂_2SLS be defined as

   β̂_2SLS = argmin_β g_{2,n}(β)' C1 (C1'ΩC1)^+ C1' g_{2,n}(β),

where g_{2,n}(β) = n^{−3/2} Σ_{i=1}^n g_i2(β), Ω is a symmetric positive definite (T−1) × (T−1) matrix of constants and C1 is a (T−1) × r1 matrix of full column rank r1 ≤ T−1 such that C1'C1 = I.

Next we turn to the analysis of the asymptotic bias for the estimator β̂_2SLS in the dynamic panel model. Since the limit only depends on zero mean normal random vectors we can apply the results of Smith (1993).
Theorem 5 Let X*(C1, Ω) be the limiting distribution of β̂_2SLS in the class defined in Definition 1 under Condition 10. Let

   D = (C1'ΩC1)^+ C1'M1'C1,   D̄ = (D + D')/2.

Then the mean of X*(C1, Ω) can be written as an infinite series

   E[X*(C1, Ω)] = Σ_{k=0}^∞ c_k C_k(D̄, I_{r1} − A^{−1}Λ1),

where the coefficients c_k are ratios of Pochhammer symbols (a)_k = Γ(a + k)/Γ(a), C_k(·,·) is a top order invariant polynomial defined by Davis (1980), Λ1 is the diagonal matrix of eigenvalues of (C1'ΩC1)^+ and A is its largest eigenvalue. The mean E[X*(C1, Ω)] exists for r1 ≥ 1.

Proof. See Appendix D.

Moments of estimators in the class β̂_2SLS defined in Definition 1 are intractably complicated unless Ω = I_{T−1} or Ω = Σ22, the cases analyzed under the stationarity assumptions discussed in the paper. While finding an exact analytical minimum of the above bias expression as a function of the weight matrix is probably infeasible due to the complexity of the formula, one sees relatively easily that for large T the minimum must approximately be achieved for C1 = 1_{T−1}, thus leading approximately to the long difference estimator.

^An alternative way to parametrize the stationarity condition is to set y_it = α_i + β_n y_it−1 + ε_it, with ε_it ~ N(0, σ_ε²), α_i ~ N(0, σ_α²) and y_i0 = μ_i + η_i0, where μ_i = α_i/(1 − β_n). It can be shown that an estimator solely based on the moment condition E[u_i1 Δu_ik(β_0)] = 0 is consistent. Restricting attention to estimators that exploit all moment conditions except the condition E[u_i1 Δu_ik(β_0)] = 0, one can show that they can be analyzed by the same methods as in the paper; their limiting distributions are denoted X*(C1, ·) as well.
The Theorem shows that the bias of β̂_2SLS depends both on the choice of C1 and the weight matrix Ω. Note for example that E[X(C1, I_{T−1})] = tr D̄/r1. The problem of minimizing the bias by choosing optimal matrices C1 and Ω does not seem to lead to an analytical solution, but could in principle be carried out numerically for a given number of time periods T. For our purpose we are not interested in such an exact minimum. We show however that for two particular choices of Ω, where Ω = I_{T−1} or Ω = Σ22, an analytical solution for the bias minimal estimator can be found. It turns out that the optimum is the same for both weight matrices, and that the optimum can be reasonably well approximated by a procedure that is also very easy to implement.
Theorem 6 Let X(C1, Ω) be as defined in Definition 1. Let D = (C1'ΩC1)^+ C1'M1'C1 and D̄ = (D + D')/2. Then

   min_{C1'C1 = I_{r1}, r1 = 1,...,T−1} |E[X(C1, I_{T−1})]| = min_{C1'C1 = I_{r1}, r1 = 1,...,T−1} |E[X(C1, Σ22)]|.

Moreover, E[X(C1, I_{T−1})] = tr D̄/r1.
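The minimization in Theorem 6 is easy to carry out numerically. Assuming the reconstructed forms of M1 and M2 given in Lemma 2 above (so that the absolute bias for Ω = I_{T−1} is the Rayleigh quotient of M2/2 at the chosen instrument weights), the following sketch compares the optimal single weight vector with the equal-weight "long difference" vector:

```python
import numpy as np

r = 9   # T - 1 moment conditions involving y_i0 (illustrative T = 10)
# M2: tridiagonal second-difference matrix from Sigma_22 = S * M2
M2 = 2 * np.eye(r) - np.eye(r, k=1) - np.eye(r, k=-1)

eigval, eigvec = np.linalg.eigh(M2)
opt_bias = eigval[0] / 2           # minimal |E[X(C1, I)]|, attained at C1 = p1
p1 = eigvec[:, 0]                  # optimal instrument weights (eigenvector)

w = np.ones(r) / np.sqrt(r)        # equal weights: the long difference heuristic
heuristic_bias = (w @ M2 @ w) / 2  # equals 1/(T-1)

print(opt_bias, heuristic_bias)    # both small; both shrink as T grows
```

For this matrix the equal-weight quotient is exactly 1/(T−1) while the smallest eigenvalue is 1 − cos(π/T); both vanish as T → ∞, which is the sense in which the heuristic is nearly optimal for large T.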
Let 1 = [1, ..., 1]' be a T−1 vector, let l_j denote the eigenvalues of −(M1 + M1') = M2, and let C1* = argmin_{C1} |E[X(C1, I_{T−1})]| subject to C1'C1 = I_{r1}, r1 = 1. Then C1* = p1, where p1 is the eigenvector corresponding to the smallest eigenvalue of D̄. Thus, min_{C1} |tr D̄/r1| = min_j l_j/2. As T → ∞ the smallest eigenvalue of D̄ tends to zero, and for C1 = 1/(1'1)^{1/2} it follows that tr D̄ → 0 as T → ∞.

Theorem 6 shows that the estimator that minimizes the bias is based only on a single moment condition, which is a linear combination of all moment conditions involving y_i0, where the weights are the elements of the eigenvector p1 corresponding to the smallest eigenvalue. This eigenvalue can be easily computed for any given T. The Theorem also shows that, at least for large T, a heuristic method which puts equal weight on all moment conditions involving y_i0, i.e. C1 = 1/(1'1)^{1/2}, leads to essentially the same bias reduction as the optimal procedure. The heuristic procedure turns out to be equal to the moment condition E[(u_iT − u_i1) y_i0], which can be motivated by taking "long differences" of the model equation y_it = α_i + β_n y_it−1 + ε_it. It can also be shown that a 2SLS estimator that uses only y_i0 as instrument remains biased even as T → ∞.
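The long difference moment condition is straightforward to use in practice. The following Monte Carlo sketch (our own illustration with hypothetical parameter values) simulates the stationary fixed-effects AR(1) panel and estimates β by 2SLS on the long-differenced equation with y_i0 as instrument:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, beta = 20000, 5, 0.9
sigma_a, sigma_e = 1.0, 1.0

# stationary start: y_i0 = alpha_i/(1-beta) + N(0, sigma_e^2/(1-beta^2))
alpha = rng.normal(0.0, sigma_a, n)
y = np.empty((n, T + 1))
y[:, 0] = alpha / (1 - beta) + rng.normal(0.0, sigma_e / np.sqrt(1 - beta**2), n)
for t in range(1, T + 1):
    y[:, t] = alpha + beta * y[:, t - 1] + rng.normal(0.0, sigma_e, n)

# long difference: y_iT - y_i1 = beta (y_iT-1 - y_i0) + (eps_iT - eps_i1),
# instrumented by y_i0
dy = y[:, T] - y[:, 1]
dx = y[:, T - 1] - y[:, 0]
z = y[:, 0]
beta_ld = (z @ dy) / (z @ dx)
print(beta_ld)   # close to 0.9
```

The instrument is valid because y_i0 is uncorrelated with ε_iT − ε_i1, and relevant through mean reversion of y_iT−1 toward the individual mean.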
7 Long Difference Specification: Infinite Iteration

We found that the iteration of the long difference estimator works quite well. In the (l+1)-th iteration, our iterated estimator estimates the model

   y_iT − y_i1 = β (y_iT−1 − y_i0) + ε_iT − ε_i1

based on 2SLS using instruments z_i(β̂_(l)) = [y_i0, y_i2 − β̂_(l) y_i1, ..., y_iT−1 − β̂_(l) y_iT−2], where β̂_(l) is the estimator obtained in the previous iteration. We might want to examine properties of an estimator based on an infinite iteration, and see if it improves the bias property. If we continue the iteration and it converges^, the estimator is a fixed point to the minimization problem

   min_b (Σ_{i=1}^n z_i(β̂_I2SLS) ξ_i(b))' (Σ_{i=1}^n z_i(β̂_I2SLS) z_i(β̂_I2SLS)')^{−1} (Σ_{i=1}^n z_i(β̂_I2SLS) ξ_i(b)),

where ξ_i(b) = (y_iT − y_i1) − b(y_iT−1 − y_i0). Call the minimizer the infinitely iterated 2SLS and denote it β̂_I2SLS. Another estimator which resembles β̂_I2SLS is the CUE, which solves

   min_b (Σ_{i=1}^N z_i(b) ξ_i(b))' (Σ_{i=1}^N z_i(b) z_i(b)')^{−1} (Σ_{i=1}^N z_i(b) ξ_i(b)).
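The iteration toward the fixed point can be sketched as follows (our own illustration with hypothetical parameter values): starting from the simple long difference 2SLS, the instrument set is re-evaluated at the current estimate until convergence.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, beta = 20000, 5, 0.9
alpha = rng.normal(0.0, 1.0, n)
y = np.empty((n, T + 1))
y[:, 0] = alpha / (1 - beta) + rng.normal(0.0, 1.0 / np.sqrt(1 - beta**2), n)
for t in range(1, T + 1):
    y[:, t] = alpha + beta * y[:, t - 1] + rng.normal(0.0, 1.0, n)

dy = y[:, T] - y[:, 1]        # long difference of the dependent variable
dx = y[:, T - 1] - y[:, 0]    # long difference of the regressor

def tsls(b):
    # 2SLS with instruments z_i(b) = [y_i0, y_i2 - b y_i1, ..., y_iT-1 - b y_iT-2]
    Z = np.column_stack([y[:, 0]] + [y[:, t] - b * y[:, t - 1] for t in range(2, T)])
    fitted = Z @ np.linalg.solve(Z.T @ Z, Z.T @ dx)   # first-stage fitted values
    return (fitted @ dy) / (fitted @ dx)

b = (y[:, 0] @ dy) / (y[:, 0] @ dx)   # initial long difference estimate
for _ in range(50):                    # iterate to a (numerical) fixed point
    b_new = tsls(b)
    if abs(b_new - b) < 1e-10:
        break
    b = b_new
print(b)
```

In this sketch the iteration settles quickly; whether it is a contraction in general is exactly the open question raised in the footnote below.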
Their actual performance approximated by 5000 Monte Carlo runs, along with the biases predicted by second order theory in Theorem 4, are summarized in Tables 8 and 9. We find that the long difference based estimators have quite reasonable finite sample properties even when β is close to 1. Similar to the finite iteration in the previous section, the second order theory seems to be next to irrelevant for β close to 1.

We compared performances of our estimators with Ahn and Schmidt's (1995) estimator as well as Blundell and Bond's (1998) estimator. Both estimators are defined in two-step GMM procedures. In order to make an accurate comparison with our long difference strategy, for which there is no ambiguity of weight matrix, we decided to apply the continuous updating estimator to their moment restrictions. We had difficulty finding the global minimum for Ahn and Schmidt's (1995) moment restrictions. We therefore used a Rothenberg type two step iteration, which would have the same second order property as the CUE itself. (See Appendix I.) Again, in order to make an accurate comparison, we applied the two step iteration idea to our long difference and Blundell and Bond (1998) as well. We call these estimators β̂_CUE2,AS, β̂_CUE2,LD, and β̂_CUE2,BB. We set n = 100 and T = 5. Again the number of Monte Carlo runs was set equal to 5000. Our results are reported in Tables 10 and 11. We can see that the long difference estimator has a comparable property to Ahn and Schmidt's estimator. We do not know why, but β̂_CUE2,BB has a large median bias at β = .95, whereas β̂_CUE2,LD does not have such a problem.

^There is no a priori reason to believe that the iterations converge to the fixed point. To show that, one would have to prove that the iterations are a contraction mapping.
8 Conclusion

We have investigated the bias of the dynamic panel effects estimators using second order approximations and Monte Carlo simulations. The second order approximations confirm the presence of significant bias as the parameter becomes large, as has previously been found in Monte Carlo investigations. Use of the second order asymptotics to define a second order unbiased estimator using the Nagar approach improves matters, but unfortunately does not solve the problem. Thus, we propose and investigate a new estimator, the long difference estimator of Griliches and Hausman (1986). We find in Monte Carlo experiments that this estimator works quite well, removing most of the bias even for quite high values of the parameter. Indeed, the long differences estimator does considerably better than "standard" second order asymptotics would predict. Thus, we consider alternative asymptotics with a near unit circle approximation. These asymptotics indicate that the previously proposed estimators for the dynamic fixed effects problem suffer from large biases. The calculations also demonstrate that the long difference estimator should work well in eliminating the finite sample bias previously found. Thus, the alternative asymptotics explain our Monte Carlo finding of the excellent performance of the long differences estimator.
Technical Appendix

A Technical Details for Section 3

Lemma 3

   E[ x_t*'P_t ε_t* − (K_t/(n − K_t)) x_t*'M_t ε_t* ] = 0.

Proof. We have

   E[ x_t*'P_t ε_t* − (K_t/(n − K_t)) x_t*'M_t ε_t* ]
     = E[trace(P_t E_t[ε_t* x_t*'])] − (K_t/(n − K_t)) E[trace(M_t E_t[ε_t* x_t*'])],

where E_t[·] denotes the conditional expectation given Z_t. Because E_t[ε_t* x_t*'] = E_t[ε_it* x_it*'] I_n, which does not depend on Z_t due to joint normality and cross-sectional independence, and using the fact that trace(P_t) = K_t and trace(M_t) = n − K_t, we have

   E[ x_t*'P_t ε_t* − (K_t/(n − K_t)) x_t*'M_t ε_t* ] = K_t E_t[ε_it* x_it*'] − (K_t/(n − K_t))(n − K_t) E_t[ε_it* x_it*'] = 0,

from which the conclusion follows.
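The trace identities used in this proof, trace(P) = K and trace(M) = n − K for a projection on K instruments, can be checked directly (a small numeric illustration of our own):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 50, 4
Z = rng.normal(size=(n, K))                  # instrument matrix, full column rank
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto the column space of Z
M = np.eye(n) - P                            # annihilator

print(np.trace(P), np.trace(M))              # K and n - K
```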
Lemma 4 We have

   Var(x_t*'M_t ε_t*) = (n − t) σ_ε² E[v_t*²] + (n − t) (E[v_t* ε_t*])²,
   Cov(x_s*'M_s ε_s*, x_t*'M_t ε_t*) = (n − t) E[v_s* ε_s*] E[v_t* ε_t*],   s < t,

where v_t* = x_t* − E[x_t* | z_it].

Proof. Follows by modifying the developments from (A23) to (A30) and from (A31) to (A34) in Alvarez and Arellano (1998).
Lemma 5 Suppose that s < t. We have

   E[v_t*²] = (T−t)/((T−t+1)(T−t)²(1−β)²) { ((T−t) − (β − β^{T−t+1})/(1−β))² E[(α_i − E[α_i|z_it])²]
              + σ_ε² ((T−t) − 2(β − β^{T−t+1})/(1−β) + (β² − β^{2(T−t)+2})/(1−β²)) },

   E[v_t* ε_t*] = −σ_ε² (T−t)/((T−t+1)(T−t)(1−β)) { (1 − β^{T−t}) − ((T−t−1) − (β − β^{T−t})/(1−β))/(T−t) },

and

   E[v_s* ε_t*] = −σ_ε² √((T−s)(T−t)/((T−s+1)(T−t+1))) (1/((T−s)(1−β))) { (1 − β^{T−t}) − ((T−t−1) − (β − β^{T−t})/(1−β))/(T−t) }.

Proof. We first characterize v_t*. We have x_i,t+1 = y_i,t, ..., x_i,T = y_i,T−1 and

   y_i,t+j = ((1 − β^{j+1})/(1 − β)) α_i + β^{j+1} y_i,t−1 + (ε_i,t+j + β ε_i,t+j−1 + ... + β^j ε_i,t),

and hence

   x_t* = √((T−t)/(T−t+1)) [ (1 − (β − β^{T−t+1})/((T−t)(1−β))) y_i,t−1
        − (((T−t) − (β − β^{T−t+1})/(1−β))/((T−t)(1−β))) α_i
        − ((1−β) ε_i,T−1 + (1−β²) ε_i,T−2 + ... + (1−β^{T−t}) ε_i,t)/((T−t)(1−β)) ].    (20)

It follows that

   v_t* = x_t* − E[x_t*|z_it]
        = −√((T−t)/(T−t+1)) [ (((T−t) − (β − β^{T−t+1})/(1−β))/((T−t)(1−β))) (α_i − E[α_i|z_it])
        + ((1−β) ε_i,T−1 + ... + (1−β^{T−t}) ε_i,t)/((T−t)(1−β)) ].

We now compute E[(α_i − E[α_i|z_it])²] = Var[α_i|z_it]. It can be shown that

   Cov(α_i, (y_i0, ..., y_i,t−1)') = (σ_α²/(1−β)) ι'   and   Var((y_i0, ..., y_i,t−1)') = (σ_α²/(1−β)²) ιι' + (σ_ε²/(1−β²)) Q,

where ι is a t-dimensional column vector of ones and Q is the matrix with (i, j) element β^{|i−j|}. Therefore, the conditional variance is given by

   Var[α_i|z_it] = σ_α² − (σ_α⁴/(1−β)²) ι' ((σ_α²/(1−β)²) ιι' + (σ_ε²/(1−β²)) Q)^{−1} ι.

Because

   (a ιι' + b Q)^{−1} = b^{−1} Q^{−1} − a b^{−2} Q^{−1} ιι' Q^{−1}/(1 + a b^{−1} ι'Q^{−1}ι),

we obtain Var[α_i|z_it] = σ_α² b/(b + a ι'Q^{−1}ι) with a = σ_α²/(1−β)² and b = σ_ε²/(1−β²). Now, it can be shown that^

   ι'Q^{−1}ι = (2(1−β) + (t−2)(1−β)²)/(1−β²),

from which we obtain

   E[(α_i − E[α_i|z_it])²] = σ_α² σ_ε²/(σ_ε² + σ_α² (2/(1−β) + t − 2)).    (21)

^See Amemiya (1985, p. 164), for example.

We now characterize E[v_t*²]. Using (20), (21), the independence of the ε_it, and

   (1−β)² + (1−β²)² + ... + (1−β^{T−t})² = (T−t) − 2(β − β^{T−t+1})/(1−β) + (β² − β^{2(T−t)+2})/(1−β²),

we obtain the first conclusion. Next note that ε_t* = √((T−t)/(T−t+1)) (ε_it − (ε_i,t+1 + ... + ε_i,T)/(T−t)). Combining with (20), we obtain

   E[v_t* ε_t*] = −σ_ε² ((T−t)/(T−t+1)) (1/((T−t)(1−β))) { (1 − β^{T−t}) − ((T−t−1) − (β − β^{T−t})/(1−β))/(T−t) },

from which follows the second conclusion. As for E[v_s* ε_t*] with s < t, we note that the ε-terms of v_s* and ε_t* overlap on ε_i,t, ..., ε_i,T−1, and the same computation yields the third conclusion.
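The conditional variance formula (21) can be verified numerically against the brute-force Gaussian conditioning it summarizes (a check of our own, using arbitrary hypothetical parameter values):

```python
import numpy as np

t, beta, s_a2, s_e2 = 6, 0.8, 1.5, 0.7

idx = np.arange(t)
Q = beta ** np.abs(idx[:, None] - idx[None, :])     # matrix with entries beta^{|i-j|}
V = s_a2 / (1 - beta) ** 2 * np.ones((t, t)) + s_e2 / (1 - beta**2) * Q
c = s_a2 / (1 - beta) * np.ones(t)                  # Cov(alpha_i, (y_i0,...,y_it-1)')

var_direct = s_a2 - c @ np.linalg.solve(V, c)       # Var(alpha_i | z_it) by conditioning
var_formula = s_a2 * s_e2 / (s_e2 + s_a2 * (2 / (1 - beta) + t - 2))
print(var_direct, var_formula)                      # agree
```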
Lemma 6

   (1/nT) Σ_t (t/(n − t)) E[v_t*²] = o(1).

Proof. Write E[v_t*²] as in Lemma 5. The sum of the first two terms on the right can be bounded above by C/t, and the third term can be bounded above in absolute value by C/(T − t), where C is a generic constant. Therefore, we have

   (1/nT) Σ_t (t/(n − t)) E[v_t*²] ≤ (C/nT) Σ_t 1/(n − t) + (C/nT) Σ_t t/((n − t)(T − t)).

It can be shown that (1/T) Σ_t t/(T − t) = O(log T) and (1/T) Σ_t 1 = O(1). Using the assumption that T/n = O(1), we obtain the desired conclusion.
Lemma 7

   (1/nT) Σ_t (t/(n − t)) (E[v_t* ε_t*])² = o(1).

Proof. We can bound (E[v_t* ε_t*])² by C/(T − t)², where C is a generic constant. The conclusion easily follows by adopting the same proof as in Lemma 6.
Lemma 8

   (1/nT) Σ_{s<t} (t/(n − t)) |E[v_s* ε_t*] E[v_t* ε_s*]| = o(1).

Proof. We can bound |E[v_s* ε_t*] E[v_t* ε_s*]| by C/(T − s)². Therefore, we have

   (1/nT) Σ_{s<t} (t/(n − t)) |E[v_s* ε_t*] E[v_t* ε_s*]| ≤ (C/nT) Σ_{s<t} t/((n − t)(T − s)²).

But because Σ_{s=1}^{T−1} (T − s)^{−2} = O(1) and t/(n − t) = O(1) uniformly in t under the maintained sampling assumptions, we can bound the expression further by (C/nT) · T · O(1) = O(n^{−1}), and the conclusion follows.
Lemma 9

   Var( (1/nT) Σ_t (t/(n − t)) x_t*'M_t ε_t* ) = o(1).

Proof. Note that

   Var( (1/nT) Σ_t (t/(n − t)) x_t*'M_t ε_t* )
     = (1/n²T²) Σ_t (t/(n − t))² Var(x_t*'M_t ε_t*) + (2/n²T²) Σ_{s<t} (s/(n − s))(t/(n − t)) Cov(x_s*'M_s ε_s*, x_t*'M_t ε_t*).

Here, the second equality is based on Lemma 4. Lemmas 6, 7, and 8 establish that the variances of the three terms on the far right are all of order o(1).
Lemma 10

   (1/nT) Σ_t ( x_t*'P_t ε_t* − (t/(n − t)) x_t*'M_t ε_t* ) = o_p(1).

Proof. Follows easily by combining Lemma 9 and the proof of Theorem 2 in Alvarez and Arellano (1998).
Lemma 11

   (1/nT) Σ_t (1/(n − t)) x_t*'M_t x_t* = o_p(1)·E[v_t*²]-order, in the sense made precise below.

Proof. First, note that x_t*'M_t x_t* = v_t*'M_t v_t* by normality. By conditioning, it can be shown that E[v_t*'M_t v_t*] = (n − t) E[v_t*²]. We therefore have

   E[ (1/nT) Σ_t (1/(n − t)) x_t*'M_t x_t* ] = (1/nT) Σ_t E[v_t*²].

Modifying the proof of Lemma 6, we can establish that the right side is o(1). We now show that the variance is o(1) as well. We have

   Var( (1/nT) Σ_t (1/(n − t)) v_t*'M_t v_t* )
     = (1/n²T²) Σ_t (n − t)^{−2} Var(v_t*'M_t v_t*) + (2/n²T²) Σ_{s<t} ((n − s)(n − t))^{−1} Cov(v_s*'M_s v_s*, v_t*'M_t v_t*).

Modifying the development from (A53) to (A58) in Alvarez and Arellano (1998) and using normality, we can show that

   Var(v_t*'M_t v_t*) = 2(n − t)(E[v_t*²])²,   Cov(v_s*'M_s v_s*, v_t*'M_t v_t*) = 2(n − t)(E[v_s* v_t*])².

Using (20), we can show that

   E[v_s* v_t*] = √((T−t)(T−s)/((T−t+1)(T−s+1))) [ (((T−t) − (β − β^{T−t+1})/(1−β))((T−s) − (β − β^{T−s+1})/(1−β))/((T−t)(T−s)(1−β)²)) E[(α_i − E[α_i|z_is])(α_i − E[α_i|z_it])]
        + (σ_ε²/((T−t)(T−s)(1−β)²)) Σ_{j=1}^{T−t} (1 − β^j)(1 − β^{j+t−s}) ].

Adopting the same argument as in the proofs for Lemmas 6-8, we can show that the variance is o(1).
B Technical Details for Section 4

Definition 2 Let

   Φ = 2 λ1'Λ^{−1} √n g,
   Υ = 2n (g1 − λ1)'Λ^{−1} g − 2n λ1'Λ^{−1}(G − Λ)Λ^{−1} g − n g'Λ^{−1}Λ1Λ^{−1} g,
   Ξ = 4 (g1 − λ1)'Λ^{−1}λ1 − 2 λ1'Λ^{−1}(G − Λ)Λ^{−1}λ1 − 4 λ1'Λ^{−1}Λ1Λ^{−1} g + 2 λ2'Λ^{−1} g.

Proof of Lemma 1. By Lemma 1(a) of Andrews (1992) and Conditions 4-7 it follows that sup_{c∈C} |Q_n(c) − Q(c)| = o_p(1), where Q(c) = λ(c)'Λ(c)^{−1}λ(c). Let B(β, ε) be an open interval of length ε centered at β. It follows that inf_{c∉B(β,ε)} Q(c) > 0 for all ε > 0, and it then follows from standard arguments that b − β = o_p(1), i.e. Pr(b ∈ B(β, ε)) → 1 for any ε > 0. By Condition 8, 1 − Pr(b ∈ int C) ≤ Pr(b ∈ ∂C) → 0, where ∂C denotes the boundary of C.

Using similar arguments as in Pakes and Pollard (1989) we write

   |Q(b)| = |λ(b)'Λ(b)^{−1}λ(b)|
     ≤ |g(b)'G(b)^{−1}g(b) − λ(b)'Λ(b)^{−1}λ(b) − g(β)'G(b)^{−1}g(β)| + |g(b)'G(b)^{−1}g(b)| + |g(β)'G(b)^{−1}g(β)|.

We have g(b)'G(b)^{−1}g(b) = O_p(n^{−1}) by the definition of b and Condition 9. We also have G(b)^{−1} = O_p(1) by consistency of b and the uniform law of large numbers, from which we obtain g(β)'G(b)^{−1}g(β) = O_p(n^{−1}). Next,

   g(b)'G(b)^{−1}g(b) − λ(b)'G(b)^{−1}λ(b) − g(β)'G(b)^{−1}g(β)
     = g(b)'G(b)^{−1}(g(b) − g(β) − λ(b)) + (g(b) − g(β) − λ(b))'G(b)^{−1}λ(b)
       + 2 g(β)'G(b)^{−1}λ(b) + (g(b) − g(β) − λ(b))'G(b)^{−1}g(β).

Therefore, we obtain

   |Q(b)| ≤ |λ(b)'G(b)^{−1}λ(b) − λ(b)'Λ(b)^{−1}λ(b)|
     + |g(b)'G(b)^{−1}(g(b) − g(β) − λ(b))| + |(g(b) − g(β) − λ(b))'G(b)^{−1}λ(b)|
     + 2|g(β)'G(b)^{−1}λ(b)| + |(g(b) − g(β) − λ(b))'G(b)^{−1}g(β)| + O_p(n^{−1}).

The terms G(β)^{−1}, Λ(b)^{−1}, g(b) and λ(b) are O_p(1) by consistency of b and the uniform law of large numbers. From Theorems 1 and 2 in Andrews (1994) and Conditions 4-9 it follows that g(b) − λ(b) − g(β) = o_p(n^{−1/2}). From a standard CLT and consistency of b it follows that (g(b) − g(β) − λ(b))'G(b)^{−1}g(β) = o_p(n^{−1}), and g(β)'G(b)^{−1} = O_p(n^{−1/2}). These results show that

   |Q(b)| = ||G(b)^{−1} − Λ(b)^{−1}|| ||λ(b)||² + O_p(n^{−1/2}) ||λ(b)|| + O_p(n^{−1})
          = o_p(1) ||λ(b)||² + O_p(n^{−1/2}) ||λ(b)|| + O_p(n^{−1}).

Because |Q(b)| = |λ(b)'Λ(b)^{−1}λ(b)| is bounded below by a positive multiple of ||λ(b)||², we conclude that

   ||λ(b)||² − O_p(n^{−1/2}) ||λ(b)|| ≤ O_p(n^{−1}),   or   ||λ(b)|| = O_p(n^{−1/2}),

which implies that b − β = O_p(n^{−1/2}).
Proof of Theorem 3. Note that we have

   g(b)  = g + (1/√n) g1 √n(b − β) + (1/2n) g2 (√n(b − β))² + O_p(n^{−3/2}),
   g1(b) = g1 + (1/√n) g2 √n(b − β) + (1/2n) g3 (√n(b − β))² + O_p(n^{−3/2}),
   G1(b) = G1 + (1/√n) G2 √n(b − β) + (1/2n) G3 (√n(b − β))² + O_p(n^{−3/2}),
   G(b)^{−1} = G^{−1} − (1/√n) G^{−1}G1G^{−1} √n(b − β) + (1/2n) (2G^{−1}G1G^{−1}G1G^{−1} − G^{−1}G2G^{−1}) (√n(b − β))² + O_p(n^{−3/2}),

where g, g_j, G, G_j and √n(b − β) are O_p(1) by Conditions 6 and 7 and Lemma 1. Therefore, we have

   g1(b)'G(b)^{−1}g(b) = g1'G^{−1}g + (1/√n) h1 √n(b − β) + (1/n) h2 (√n(b − β))² + o_p(n^{−1}),
   g(b)'G(b)^{−1}G1(b)G(b)^{−1}g(b) = g'G^{−1}G1G^{−1}g + (1/√n) h3 √n(b − β) + (1/n) h4 (√n(b − β))² + o_p(n^{−1}),

where

   h1 = g2'G^{−1}g − g1'G^{−1}G1G^{−1}g + g1'G^{−1}g1,
   h2 = (1/2) g3'G^{−1}g + (1/2) g1'(2G^{−1}G1G^{−1}G1G^{−1} − G^{−1}G2G^{−1})g + (1/2) g1'G^{−1}g2
        − g2'G^{−1}G1G^{−1}g − g1'G^{−1}G1G^{−1}g1 + g2'G^{−1}g1,
   h3 = 2 g1'G^{−1}G1G^{−1}g − 2 g'G^{−1}G1G^{−1}G1G^{−1}g + g'G^{−1}G2G^{−1}g,

and h4 collects the corresponding second order terms of the second expansion,

   h4 = (1/2) g2'G^{−1}G1G^{−1}g + (1/2) g'(2G^{−1}G1G^{−1}G1G^{−1} − G^{−1}G2G^{−1})G1G^{−1}g + (1/2) g'G^{−1}G3G^{−1}g
        + (1/2) g'G^{−1}G1(2G^{−1}G1G^{−1}G1G^{−1} − G^{−1}G2G^{−1})g + (1/2) g'G^{−1}G1G^{−1}g2
        − g1'G^{−1}G1G^{−1}G1G^{−1}g + g1'G^{−1}G2G^{−1}g − g'G^{−1}G1G^{−1}G1G^{−1}g1 + g1'G^{−1}G1G^{−1}g1
        − g'G^{−1}G1G^{−1}G2G^{−1}g + g'G^{−1}G1G^{−1}G1G^{−1}G1G^{−1}g − g'G^{−1}G2G^{−1}G1G^{−1}g + g'G^{−1}G2G^{−1}g1.

We may therefore rewrite the score as

   S_n(b) = (2 g1'G^{−1}g − g'G^{−1}G1G^{−1}g) + (1/√n)(2h1 − h3) √n(b − β) + (1/n)(2h2 − h4)(√n(b − β))² + O_p(n^{−3/2}).    (22)

Next note that the approximation error in (22) satisfies P(|S_n(b) − S̃_n(b)| > ε n^{−1}) → 0 for any ε > 0 because of Lemma 1, so we can subsume it into the O_p(n^{−3/2}) term of (22). Using these arguments and Lemmas 12, 13, and 14 below, we may rewrite the first order condition (22) as

   0 = (1/√n) Φ + (1/n) Υ + (1/√n)(2 λ1'Λ^{−1}λ1 + Ξ) √n(b − β) + (1/n)(3 λ1'Λ^{−1}λ2 − 3 λ1'Λ^{−1}Λ1Λ^{−1}λ1)(√n(b − β))² + O_p(n^{−3/2}),

based on which we obtain (9). Noting that

   E[Φ] = 2√n λ1'Λ^{−1} E[g] = 0,    (23)

   E[Υ] = 2n E[(g1 − λ1)'Λ^{−1}g] − 2n λ1'Λ^{−1} E[(G − Λ)Λ^{−1}g] − n E[g'Λ^{−1}Λ1Λ^{−1}g]
        = 2 trace(Λ^{−1} E[δ_i (g_i1 − λ_i1)']) − 2 λ1'Λ^{−1} E[ψ_iψ_i'Λ^{−1}δ_i] − trace(Λ^{−1}Λ1Λ^{−1} E[δ_iδ_i']),    (24)

   E[ΦΞ] = 8 λ1'Λ^{−1} E[δ_i (g_i1 − λ_i1)'] Λ^{−1}λ1 − 4 λ1'Λ^{−1} E[δ_i λ1'Λ^{−1}ψ_iψ_i'] Λ^{−1}λ1
           − 8 λ1'Λ^{−1} E[δ_iδ_i'] Λ^{−1}Λ1Λ^{−1}λ1 + 4 λ1'Λ^{−1} E[δ_iδ_i'] Λ^{−1}λ2,    (25)

   E[Φ²] = 4 λ1'Λ^{−1} E[δ_iδ_i'] Λ^{−1}λ1,    (26)

we obtain the desired conclusion.

Lemma 12 Under Conditions 6 and 7,

   h2 = (3/2) λ1'Λ^{−1}λ2 − λ1'Λ^{−1}Λ1Λ^{−1}λ1 + o_p(1),   h4 = λ1'Λ^{−1}Λ1Λ^{−1}λ1 + o_p(1).

Proof. Follows from plim g = 0.

Lemma 13 Under Conditions 6 and 7,

   2h1 − h3 = 2 λ1'Λ^{−1}λ1 + Ξ + O_p(n^{−1}).

Proof. Because

   g1'G^{−1}G1G^{−1}g = λ1'Λ^{−1}Λ1Λ^{−1}g + O_p(n^{−1})

and

   g1'G^{−1}g1 = (λ1 + (g1 − λ1))'(Λ^{−1} − Λ^{−1}(G − Λ)Λ^{−1} + O_p(n^{−1}))(λ1 + (g1 − λ1))
               = λ1'Λ^{−1}λ1 + 2 (g1 − λ1)'Λ^{−1}λ1 − λ1'Λ^{−1}(G − Λ)Λ^{−1}λ1 + O_p(n^{−1}),

we obtain

   h1 = λ2'Λ^{−1}g − λ1'Λ^{−1}Λ1Λ^{−1}g + λ1'Λ^{−1}λ1 + 2 (g1 − λ1)'Λ^{−1}λ1 − λ1'Λ^{−1}(G − Λ)Λ^{−1}λ1 + O_p(n^{−1}).

Similarly, we obtain h3 = 2 λ1'Λ^{−1}Λ1Λ^{−1}g + O_p(n^{−1}). The conclusion follows.

Lemma 14 Under Conditions 6 and 7,

   2 g1'G^{−1}g − g'G^{−1}G1G^{−1}g = (1/√n) Φ + (1/n) Υ + o_p(n^{−1}).

Proof. We have

   g1'G^{−1}g = (λ1 + (g1 − λ1))'(Λ^{−1} − Λ^{−1}(G − Λ)Λ^{−1} + O_p(n^{−1})) g
              = λ1'Λ^{−1}g + (g1 − λ1)'Λ^{−1}g − λ1'Λ^{−1}(G − Λ)Λ^{−1}g + O_p(n^{−3/2})

and

   g'G^{−1}G1G^{−1}g = g'Λ^{−1}Λ1Λ^{−1}g + O_p(n^{−3/2}),

from which the conclusion follows.
C Proof of Theorem 4

The second order bias is computed using Theorem 3. Because the "weight matrix" here does not involve the parameter of interest, we have Λ1 = 0 and Λ2 = 0, which renders the third, seventh and eighth terms of the bias expression equal to zero. Also, because the moment restriction is linear in the parameter of interest, the sixth and last terms are equal to zero. Furthermore, because under conditional symmetry of ε_it* the numerator in the second term, which involves E[z_it z_it' E[z_it z_it']^{−1} z_it ε_it*³], is equal to zero, the second term is equal to zero as well. We obtain the desired conclusion by noting that

   λ1'Λ^{−1}λ1 = Σ_{t=1}^{T−1} E[z_it x_it*]' E[z_it z_it']^{−1} E[z_it x_it*],

   trace( Λ^{−1} E[δ_i ∂δ_i'/∂β] ) = −Σ_{t=1}^{T−1} trace( E[z_it z_it']^{−1} E[ε_it* x_it* z_it z_it'] ),

   λ1'Λ^{−1} E[∂δ_i/∂β δ_i'] Λ^{−1} λ1
     = −Σ_{t=1}^{T−1} Σ_{s=1}^{T−1} E[z_it x_it*]' E[z_it z_it']^{−1} E[ε_is* x_is* z_it z_is'] E[z_is z_is']^{−1} E[z_is x_is*],

and

   λ1'Λ^{−1} E[δ_i λ1'Λ^{−1}ψ_iψ_i'] Λ^{−1} λ1
     = −Σ_{t=1}^{T−1} Σ_{s=1}^{T−1} E[z_it x_it*]' E[z_it z_it']^{−1} E[ε_it* z_it z_is'] E[z_is z_is']^{−1} E[z_is x_is*].
D Proofs for Section 6

Proof of Lemma 2. Note that

   E[|u_iT Δy_is−1|] ≤ √(E[u_iT²]) √(E[(Δy_is−1)²]),

and

   Δy_is−1 = β_n^{s−2}(β_n − 1) y_i0 + ε_is−1 + (β_n − 1) Σ_{r=1}^{s−2} β_n^{r−1} ε_is−1−r,

so that E[(Δy_is−1)²] = O(1). By independence of u_iT Δy_is−1 across i and the same reasoning, we obtain n^{−1/2} Σ_{i=1}^n u_iT Δy_is−1 = O_p(1) and therefore n^{−3/2} Σ_{i=1}^n u_iT Δy_is−1 = o_p(1). We can similarly obtain n^{−3/2} Σ_i u_it Δy_is−1 = o_p(1) and n^{−3/2} Σ_i u_i1 Δy_ik−1 = o_p(1), so that n^{−3/2} Σ_i g_i1 = o_p(1) and n^{−3/2} Σ_i f_i1 = o_p(1): the moments not involving y_i0 are asymptotically negligible.

Next we consider n^{−3/2} Σ_i f_i2 and n^{−3/2} Σ_i g_i2(β_0). Note that

   E[Δy_it y_i0] = (β_n − 1) E[y_it−1 y_i0] + E[α_i y_i0] + O(1) = O(n),

and E[(Δy_it y_i0)²] = O(n²), such that Var(n^{−3/2} Σ_i Δy_it y_i0) = O(1). For n^{−3/2} Σ_i g_i2(β_0) we have from the moment conditions that E[g_i2(β_0)] = 0 and

   Var(Δu_it(β_0) y_i0) = 2 σ_α²σ_ε² (1 − β_n)^{−2} + O(n) = (2 σ_α²σ_ε²/c²) n² + O(n).

By previous calculations we have found the diagonal elements of Σ11,n and Σ22,n. The off-diagonal elements of Σ11,n are found to be of lower order of magnitude, while n^{−1}(E[Δy_it y_i0])² = O(1); thus n^{−2}Σ11,n converges to a diagonal matrix. The off-diagonal elements of Σ22,n are obtained from

   E[Δu_it(β_0) Δu_is(β_0) y_i0²] = −(σ_α²σ_ε²/c²) n² + O(n)  if |t − s| = 1,  and O(n) otherwise,

and the elements of Σ12,n are obtained from

   E[Δy_it−1 Δu_is(β_0) y_i0²] = (σ_α²σ_ε²/c²) n² (M1)_{ts} + O(n),

which produces the matrices M1 and M2 of the Lemma. It then follows that for any conformable vector ℓ with ℓ'Σℓ = 1,

   n^{−3/2} Σ_{i=1}^n ℓ'([f_i2', g_i2(β_0)']' − E[f_i2', 0']') → N(0, 1)

by the Lindeberg-Feller CLT for triangular arrays. It then follows from a straightforward application of the Cramér-Wold theorem and the continuous mapping theorem that n^{−3/2} Σ_i [f_i2', g_i2(β_0)']' ⇒ [ξ_λ', ξ_y']' ~ N(0, Σ). Note that n^{−3/2} Σ_i E[f_i2] = O(n^{−1/2}) and thus does not affect the limit distribution. Finally note that E[g_i1 g_i1'] = O(1).
Proof of Theorem 5. Use the transformation z = (C1'Σ11C1)^{−1/2} C1'ξ_λ and let L = (C1'Σ11C1)^{1/2}, so that C1'ξ_λ = Lz with z ~ N(0, I_{r1}). Write W = (C1'ΩC1)^+. Then X(C1, Ω) = (z'L'WLz)^{−1} z'L'W C1'ξ_y. Using a conditioning argument with

   E[C1'ξ_y | C1'ξ_λ] = C1'Σ21C1 (C1'Σ11C1)^{−1} C1'ξ_λ = FLz,   F = C1'Σ21C1 (C1'Σ11C1)^{−1},

it then follows that

   E[X(C1, Ω)] = E[ z'L'WFLz / z'L'WLz ].

Note that z'Dz = z'D̄z, where D̄ = (D + D')/2 is symmetric, and that W is symmetric positive semidefinite. The result then follows from Smith (1993, Eq. 2.4, p. 273).

Proof of Theorem 6. We first analyze E[X(C1, Σ22)]. Note that in this case W = (C1'Σ22C1)^{−1} = δ^{−1}(C1'M2C1)^{−1} for a scalar δ > 0, such that

   E[X(C1, Σ22)] = E[ z'δD̄z / z'(C1'M2C1)^{−1}z ]   and   δ tr D = (δ/2) tr W C1'(M1' + M1)C1.

Let Γ1 be an orthogonal matrix of eigenvectors of (C1'M2C1)^{−1} with corresponding diagonal matrix of eigenvalues Λ1, such that (C1'M2C1)^{−1} = Γ1Λ1Γ1', and let z̃ = Γ1'z. Let A be the largest element of Λ1. Then it follows from Smith (1993, p. 273 and Appendix A) that E[X(C1, Σ22)] can be written as an infinite series whose k-th term is proportional to tr(Γ1'D̄Γ1 (I_{r1} − A^{−1}Λ1)^k). Since all elements λ_j of Λ1 satisfy 0 < A^{−1}λ_j ≤ 1, the matrix I_{r1} − A^{−1}Λ1 is positive semidefinite, and A^{−1}λ_j = 1 for all j only if all eigenvalues λ_j are the same. Also note that if Y2, Y3 are diagonal and Y1 is symmetric, then tr Y3Y1Y2 = tr Y1Y3Y2, such that

   tr Γ1'D̄Γ1 (I_{r1} − A^{−1}Λ1)^k
     = (δ^{−1}/2) tr (Λ1 Γ1'C1'M1'C1Γ1 + Γ1'C1'M1C1Γ1 Λ1)(I_{r1} − A^{−1}Λ1)^k
     = −(δ^{−1}/2) tr Γ1'C1'M2C1Γ1 Λ1 (I_{r1} − A^{−1}Λ1)^k
     = −(δ^{−1}/2) tr (I_{r1} − A^{−1}Λ1)^k,

using M1 + M1' = −M2 and Γ1'C1'M2C1Γ1 = Λ1^{−1}. This shows that all the terms of the series have the same sign, and therefore

   min_{C1'C1 = I_{r1}, r1 = 1,...,T−1} |E[X(C1, Σ22)]| ≥ min_j l_j/2,    (28)

where min_j l_j is the smallest eigenvalue of M2 and the last inequality follows from Magnus and Neudecker (1988, Theorem 10, p. 209). Now let r1 = 1 and C1 = p1, where p1 is the eigenvector corresponding to the smallest eigenvalue of M2. Then |E[X(C1, Σ22)]| = A^{−1}|tr Γ1'D̄Γ1| = min_j l_j/2, so that inequality (28) holds with equality.

Next consider E[X(C1, I_{T−1})] = tr(C1'M̄1C1)/r1 with M̄1 = (M1' + M1)/2 = −M2/2. It is now useful to choose an orthogonal matrix R with columns p_j such that R'R = RR' = I and −M̄1 = RLR' = Σ_j (l_j/2) p_j p_j', where L is the diagonal matrix of eigenvalues of −M̄1. Then it follows that −tr(C1'M̄1C1) = Σ_j (l_j/2) p_j'C1C1'p_j. Next note that all the eigenvalues of C1C1' are either zero or one, such that p_j'C1C1'p_j ≤ 1. To show that |tr D̄/r1| is minimized for r1 = 1 and C1 = p1, consider augmenting C1 = p1 by a column vector x such that x'x = 1 and p1'x = 0. Then C1'C1 = I_2, r1 = 2 and

   −tr(C1'M̄1C1) = l_1/2 + Σ_j (l_j/2)(p_j'x)².

By Parseval's equality, Σ_j (p_j'x)² = 1. Since l_j ≥ l_1, we can bound −tr(C1'M̄1C1) ≥ l_1, so that the average bias −tr(C1'M̄1C1)/2 is never below l_1/2. This argument can be repeated for more than one orthogonal addition x. It now follows that E[X(C1, I_{T−1})] = tr D̄/r1 is minimized in absolute value for r1 = 1 and C1 = p1, where p1 is the eigenvector corresponding to the smallest eigenvalue.

Next note that for any x with x'x = 1 it follows that min_j l_j/2 ≤ −x'M̄1x ≤ max_j l_j/2, such that min_j l_j/2 ≤ −1'M̄1 1/(1'1) = (T − 1)^{−1}, which shows that the smallest eigenvalue is bounded by a monotonically decreasing function of the number of moment conditions. The last part of the result follows from 1'M̄1 1/(1'1) → 0 as T → ∞.
Blundell and Bond's (1998) Estimator and Weight Matrix

Blundell and Bond (1998) suggest a new set of moment restrictions. If T = 5, they can be written as E[q_i(β)] = 0, where

q_i(b) = [ y_i0 ((y_i2 - y_i1) - b (y_i1 - y_i0)),
           y_i0 ((y_i3 - y_i2) - b (y_i2 - y_i1)),
           y_i1 ((y_i3 - y_i2) - b (y_i2 - y_i1)),
           y_i0 ((y_i4 - y_i3) - b (y_i3 - y_i2)),
           y_i1 ((y_i4 - y_i3) - b (y_i3 - y_i2)),
           y_i2 ((y_i4 - y_i3) - b (y_i3 - y_i2)),
           y_i0 ((y_i5 - y_i4) - b (y_i4 - y_i3)),
           y_i1 ((y_i5 - y_i4) - b (y_i4 - y_i3)),
           y_i2 ((y_i5 - y_i4) - b (y_i4 - y_i3)),
           y_i3 ((y_i5 - y_i4) - b (y_i4 - y_i3)),
           (y_i1 - y_i0)(y_i2 - b y_i1),
           (y_i2 - y_i1)(y_i3 - b y_i2),
           (y_i3 - y_i2)(y_i4 - b y_i3),
           (y_i4 - y_i3)(y_i5 - b y_i4) ]'.

They suggest a GMM estimation:

min_b (Σ_{i=1}^n q_i(b))' Λ^{-1} (Σ_{i=1}^n q_i(b)).

We examine properties of Blundell and Bond's moment restriction for β near unity. We consider four methods of computing Λ̂, which in principle is a consistent estimator of E[q_i(β) q_i(β)']:

1. We can use b̂_LIML as our consistent estimator and use

   Λ̂₁ = (1/n) Σ_{i=1}^n q_i(b̂_LIML) q_i(b̂_LIML)'.

   This gives us a GMM estimator that minimizes (Σ_{i=1}^n q_i(b))' Λ̂₁^{-1} (Σ_{i=1}^n q_i(b)). We call it b̂_BB1.

2. We can compute Λ̂₂ = (1/n) Σ_{i=1}^n q_i(b̂_BB1) q_i(b̂_BB1)' and obtain a GMM estimator that minimizes (Σ_{i=1}^n q_i(b))' Λ̂₂^{-1} (Σ_{i=1}^n q_i(b)). We call it b̂_BB2.

3. We can compute Λ̂₃ = (1/n) Σ_{i=1}^n Z_i'Z_i, where Z_i is the block-diagonal instrument matrix built from the levels y_i0, ..., y_i,T-2 for the differenced equations and the differences Δy_i1, ..., Δy_i,T-1 for the level equations, and obtain a GMM estimator that minimizes (Σ_{i=1}^n q_i(b))' Λ̂₃^{-1} (Σ_{i=1}^n q_i(b)). We call it b̂_BB3. This is one of the estimators considered by Blundell and Bond (1998) in their Monte Carlo.

4. We can compute

   Λ̂₄ = (1/n) Σ_{i=1}^n q_i(b̂_BB3) q_i(b̂_BB3)'

   and obtain a GMM estimator that minimizes (Σ_{i=1}^n q_i(b))' Λ̂₄^{-1} (Σ_{i=1}^n q_i(b)). We call it b̂_BB4. Again, this is one of the estimators considered by Blundell and Bond (1998) in their Monte Carlo.
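As an illustration of the estimators above, the following sketch builds the T = 5 moment vector and exploits its linearity in b to solve the GMM problem in closed form. The DGP, the identity weight matrix, and all function names are our own assumptions for the sketch, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bb_moments(y, b):
    """Blundell-Bond moment vector q_i(b) for T = 5 (one unit): lagged
    levels instrument the differenced equation; lagged differences
    instrument the levels equation."""
    q = []
    for t in range(2, 6):                       # differenced equations
        for s in range(t - 1):                  # instruments y_{i0..t-2}
            q.append(y[s] * ((y[t] - y[t-1]) - b * (y[t-1] - y[t-2])))
    for t in range(2, 6):                       # level equations
        q.append((y[t-1] - y[t-2]) * (y[t] - b * y[t-1]))
    return np.array(q)

# Simulate an AR(1) panel with fixed effects and a mean-stationary start,
# so that the level moments are valid (illustrative DGP).
n, beta = 500, 0.5
alpha = rng.standard_normal(n)
Y = np.zeros((n, 6))
Y[:, 0] = alpha / (1 - beta) + rng.standard_normal(n)
for t in range(1, 6):
    Y[:, t] = beta * Y[:, t-1] + alpha + rng.standard_normal(n)

# q_i(b) = a_i - b * c_i is linear in b, so the minimizer of
# (sum q_i(b))' W (sum q_i(b)) has a closed form for any weight W.
A = sum(bb_moments(y, 0.0) for y in Y)                        # a-part
C = sum(bb_moments(y, 0.0) - bb_moments(y, 1.0) for y in Y)   # c-part
W = np.eye(len(A))                                            # identity weight
b_hat = (C @ W @ A) / (C @ W @ C)
print(abs(b_hat - beta) < 0.2)
```

Replacing W with any of the Λ̂_j^{-1} above produces the corresponding b̂_BBj in the same closed form.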
Second Order Theory for Finitely Iterated Long Difference Estimator

We examine the second order bias of finitely iterated 2SLS. For this purpose, we consider 2SLS

b = (Σ_i x_i ẑ_i' (Σ_i ẑ_i ẑ_i')^{-1} Σ_i ẑ_i x_i)^{-1} (Σ_i x_i ẑ_i' (Σ_i ẑ_i ẑ_i')^{-1} Σ_i ẑ_i y_i)   (29)

applied to the single equation

y_i = β x_i + ε_i   (30)

using instrument ẑ_i = z_i - (f̂/√n) w_i, where f̂ = √n(β̂ - β). Here, z_i is the "proper" instrument. We assume that

f̂ = (1/√n) Σ_{i=1}^n f_i + (1/√n) Q_n + o_p(1/√n),   (31)

where f_i is i.i.d. and has mean zero, and Q_n = O_p(1). It can be seen that, under our assumption (31), if ε_i in (30) is symmetrically distributed given z_i, then the second order bias of b is equal to 1/n times an expression (32) that consists of the usual 2SLS term involving (K - 2)σ_uε / (λ'A^{-1}λ), where K denotes the number of instruments, plus correction terms built from the moments E[f_i w_i ε_i], E[f_i z_i ε_i], E[f_i z_i x_i], E[f_i z_i z_i'], and E[f_i²], where λ = E[z_i x_i], A = E[z_i z_i'], μ = E[w_i x_i], Δ = E[w_i z_i' + z_i w_i'], and φ = E[w_i ε_i].
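A minimal simulation of the plug-in 2SLS in (29)-(31) can be sketched as follows. The DGP is our own, and the preliminary estimate beta0 is an artificial stand-in constructed to be root-n consistent; in an application it would come from a first-stage estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# y_i = beta x_i + e_i with proper instrument z_i; the feasible instrument
# perturbs z_i in the direction w_i, scaled by fhat = sqrt(n)(beta0 - beta).
n, beta = 2000, 0.9
z = rng.standard_normal((n, 2))                 # proper instruments
u = rng.standard_normal(n)
e = 0.5 * u + rng.standard_normal(n)            # endogeneity through u
x = z @ np.array([1.0, 0.5]) + u
y = beta * x + e
w = rng.standard_normal((n, 2))                 # direction of instrument error

beta0 = beta + 1.5 / np.sqrt(n)                 # stand-in preliminary estimate
fhat = np.sqrt(n) * (beta0 - beta)
zhat = z - (fhat / np.sqrt(n)) * w              # feasible instrument

# 2SLS of y on x using zhat, as in (29).
Szz = zhat.T @ zhat
Szx = zhat.T @ x
Szy = zhat.T @ y
b = (Szx @ np.linalg.solve(Szz, Szy)) / (Szx @ np.linalg.solve(Szz, Szx))
print(abs(b - beta) < 0.15)
```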
Using (32), we can characterize the second order bias of iterated 2SLS applied to the long difference equation using a LIML-like estimator as the initial estimator. For this purpose, we need to have the second order bias of the LIML-like estimator. In Appendix G, we present a second order bias characterization of the LIML-like estimator. In fact, based on 5000 runs, we found in our Monte Carlo experiments that the biases of b̂_LIML,1 and b̂_LIML,2 are smaller than predicted by the second order theory. In Table 7, we compare the actual performance of the long difference based estimators with the second order theory.

It is sometimes of interest to construct a consistent estimator for the asymptotic variance. Such an exercise may appear to be related only to first order asymptotics, but a consistent estimator of the asymptotic variance could be useful in practice for refinement of confidence intervals as well: the pivotal bootstrap as considered by Hall and Horowitz (1996) requires such a consistent estimator. In Appendix H, we present a first order asymptotic result as well as a consistent estimator of the asymptotic variance.

Proof of Theorem 7. We first present an expansion for √n(b - β) for the 2SLS using instrument ẑ_i = z_i - (f̂/√n) w_i. We have

√n(b - β) = ((1/n Σ_i x_i ẑ_i')(1/n Σ_i ẑ_i ẑ_i')^{-1}(1/n Σ_i ẑ_i x_i))^{-1} (1/n Σ_i x_i ẑ_i')(1/n Σ_i ẑ_i ẑ_i')^{-1}(1/√n Σ_i ẑ_i ε_i).   (33)
Write

(1/n) Σ_i ẑ_i ẑ_i' = A + (1/√n)((1/√n) Σ_i (z_i z_i' - A)) - (f̂/n)((1/n) Σ_i (w_i z_i' + z_i w_i')) + O_p(n^{-3/2}).

Recalling (31), we can derive analogous expansions for (1/n) Σ_i ẑ_i x_i and (1/√n) Σ_i ẑ_i ε_i. Here, φ and Δ are as defined in (32). Using arguments similar to the derivation of (27), we obtain an expansion of the form

√n(b - β) = (λ'A^{-1}(1/√n Σ_i z_i ε_i))/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ))((1/√n) Σ_i f_i) - (1/√n) B₁ - (1/√n) B₂ + o_p(1/√n),   (34)

where B₁ collects the higher order terms that are present even when the proper instrument z_i is used, and B₂ collects the terms generated by the estimated instrument ẑ_i. The first two terms on the right side of (34) capture the standard first order asymptotics of the plug-in estimator, which establishes Lemma 15. Obviously, they have mean equal to zero. The third term -(1/√n)B₁ is the standard second order expansion term that arises when the proper instrument is known exactly; it can be shown that, under conditional symmetry of ε_i,

E[B₁] = (K - 2)σ_uε / (λ'A^{-1}λ).   (35)

The fourth term -(1/√n)B₂ is the correction to the second order expansion to accommodate the plug-in nature of the estimation. It is not difficult to see that E[B₂] is composed of the moments E[f_i w_i ε_i], E[f_i z_i ε_i], E[f_i z_i x_i], E[f_i z_i z_i'], and E[f_i²], combined with λ, A, Δ, and φ exactly as in the corresponding correction terms of (32).   (36)

Using (34), (35), and (36), we can obtain the desired conclusion.
G  Second Order Bias of b_LIML

Our b_LIML modifies Arellano and Bover's estimator. It is given by (37), and we make the second order expansion of √n(b - β). We first make a digression to the discussion of a single equation model.⁸

⁸ The digression mostly confirms the usual higher order analysis of LIML readily available in the literature. The only reason we consider such analysis at all is because all the analyses we found in the literature are conditional analyses given instruments: They all assume that the instruments are nonstochastic. Our purpose is to make a marginal second order analysis, which is more natural in the dynamic panel model context.

G.1  Characterization of Second Order Bias of LIML

Consider a simple simultaneous equations model

y_i = β x_i + ε_i,   x_i = z_i'π + u_i,

and examine the LIML b that solves

min_c (e(c)'P e(c)) / (e(c)'e(c)),

where e(c) = y - xc and P denotes the projection onto the instruments z.
Here, the first order condition is given by

[(-2x'Pe(b)) e(b)'e(b) - e(b)'Pe(b) (-2x'e(b))] / (e(b)'e(b))² = 0,

or G_n(b) = 0, where

G_n(b) = ((1/n) x'Pe(b)) ((1/n) e(b)'e(b)) - ((1/n) x'e(b)) ((1/n) e(b)'Pe(b)).

Note that

∂G_n(b)/∂b = (-(1/n)x'Px)((1/n)e(b)'e(b)) + ((1/n)x'Pe(b))(-2(1/n)x'e(b)) + ((1/n)x'x)((1/n)e(b)'Pe(b)) + ((1/n)x'e(b))(2(1/n)x'Pe(b)),

and

∂²G_n(b)/∂b² = 2((1/n)x'Px)((1/n)x'e(b)) - 2((1/n)x'Pe(b))((1/n)x'x).

We now expand G_n(β), using √n-consistency of b. First, note that

G_n(β) = ((1/n)x'Pε)((1/n)ε'ε) - ((1/n)x'ε)((1/n)ε'Pε).   (38)

Because

(1/n) Σ_i z_i x_i = λ + (1/√n)((1/√n) Σ_i (z_i x_i - λ)),

where λ = E[z_i x_i] and A = E[z_i z_i'], we have

G_n(β) = (1/√n) Φ + (1/n) Γ + o_p(1/n),   (39)

where Φ = σ_ε² λ'A^{-1}((1/√n) Σ_i z_i ε_i) and Γ collects the remaining second order terms. Now, note that

∂G_n(β)/∂b = Υ + (1/√n) Υ₁ + o_p(1/√n),   (40)

where Υ = -σ_ε² λ'A^{-1}λ and Υ₁ has mean zero. Finally, note that

∂²G_n(β)/∂b² = 2((1/n)x'Px)((1/n)x'ε) - 2((1/n)x'Pε)((1/n)x'x) = 2Ψ + o_p(1),   (41)

where Ψ = σ_uε λ'A^{-1}λ. Combining (38), (39), (40), and (41), we obtain a second order expansion of √n(b - β). Note that Φ has a mean equal to zero. Therefore, under symmetry, the second order bias of b is qualitatively of the same form as Rothenberg's mean.
G.2  Higher Order Analysis of the "Eigenvalue"

Let

κ̂ = e(b)'P e(b) / e(b)'e(b).

Getting back to the first order condition

x'P e(b) - (e(b)'Pe(b) / e(b)'e(b)) x'e(b) = 0,

we can write

b = (x'Py - κ̂ x'y) / (x'Px - κ̂ x'x),

the usual expression. Note that

κ̂ = [(1/n)ε'Pε - (2/n)(b - β) ε'Px + (1/n)(b - β)² x'Px] / [(1/n)ε'ε - (2/n)(b - β) ε'x + (1/n)(b - β)² x'x].

The numerator and the denominator may be rewritten by expanding around b = β, and

(1/n)ε'ε - (2/n)(b - β) ε'x + (1/n)(b - β)² x'x = σ_ε² + o_p(1).

We may therefore write

κ̂ = σ_ε^{-2} ((1/n)ε'Pε - (2/n)(b - β) ε'Px + (1/n)(b - β)² x'Px) + o_p(1/n).
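The eigenvalue characterization translates directly into computation: κ̂ is the smallest generalized eigenvalue of the cross-product matrices of (y, x), after which b follows from "the usual expression". A sketch under an assumed simulated design (all names and the DGP are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal LIML sketch for y = beta*x + e, x = z'pi + u.
n, beta = 1000, 1.0
z = rng.standard_normal((n, 3))
u = rng.standard_normal(n)
x = z @ np.array([1.0, 0.5, 0.5]) + u
y = beta * x + 0.6 * u + rng.standard_normal(n)

P = z @ np.linalg.solve(z.T @ z, z.T)           # projection on instruments

# kappa-hat = min_c e(c)'P e(c) / e(c)'e(c): the smallest eigenvalue of a
# 2x2 generalized eigenproblem in (y, x).
Y = np.column_stack([y, x])
A = Y.T @ P @ Y
B = Y.T @ Y
kappa = min(np.linalg.eigvals(np.linalg.solve(B, A)).real)

# b = (x'Py - kappa x'y) / (x'Px - kappa x'x).
b = (x @ P @ y - kappa * (x @ y)) / (x @ P @ x - kappa * (x @ x))
print(abs(b - beta) < 0.15)
```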
G.3  Application to Dynamic Panel Model

We now adopt obvious notations, and make a second order analysis of the right side of (37). First, note that

(1/√n)(x_t*'P_t ε_t* - κ_t x_t*'ε_t*) = λ_t'A_t^{-1}((1/√n) Σ_i z_it ε_it) + higher order terms,

and

(1/n)(x_t*'P_t x_t* - κ_t x_t*'x_t*) = λ_t'A_t^{-1}λ_t + O_p(1/√n).

It therefore follows that

√n(b - β) = (Σ_{t=1}^{T-1} λ_t'A_t^{-1}((1/√n) Σ_i z_it ε_it)) / (Σ_{t=1}^{T-1} λ_t'A_t^{-1}λ_t) + higher order terms.

Therefore, under symmetry, the second order bias of the LIML-like estimator is given by 1/n times an expression of the form

(Σ_{t=1}^{T-1} σ_uε,t) / (Σ_{t=1}^{T-1} λ_t'A_t^{-1}λ_t)
  - 2 (Σ_{t=1}^{T-1} Σ_{s=1}^{T-1} λ_t'A_t^{-1} E[(z_it ε_it)(z_is x_is* - λ_s)'] A_s^{-1}λ_s) / (Σ_{t=1}^{T-1} λ_t'A_t^{-1}λ_t)²
  + (Σ_{t=1}^{T-1} Σ_{s=1}^{T-1} λ_t'A_t^{-1} E[(z_it ε_it)(z_is z_is' - A_s)] A_s^{-1}λ_s) / (Σ_{t=1}^{T-1} λ_t'A_t^{-1}λ_t)².
H  First Order Asymptotic Theory for Finitely Iterated Long Difference Estimator

Lemma 15  Let b denote the 2SLS in (29). We have

√n(b - β) ⇒ N(0, (λ'A^{-1}Σ A^{-1}λ) / (λ'A^{-1}λ)²),

where Σ = E[(z_i ε_i - f_i φ)(z_i ε_i - f_i φ)'].

Proof. See Appendix F.

Lemma 15 can be used to establish the influence function of the iterated 2SLS estimators b̂_LIML,1, ... applied to the long difference. We first note that the influence function of the initial b̂_LIML is given by ψ_LIML,i, where λ_t = E[z_it x_it*] and A_t = E[z_it z_it']. We also note that y_i = y_iT - y_i1 and x_i = y_iT-1 - y_i0, and, because at each iteration we use the instrument of the form z_i = (y_i0, y_iT-1 - β̂ y_iT-2, ..., y_i2 - β̂ y_i1)', where β̂ is some preliminary estimator of β, we have w_i = (0, y_iT-2, ..., y_i1)'. By Lemma 15, the influence function of b̂_LIML,1 is equal to

(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) ψ_LIML,i.   (42)

Using Lemma 15 again, we can see that the influence function of b̂_LIML,2 is

(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) [(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) ψ_LIML,i].   (43)

Likewise, we can see that the influence functions of b̂_LIML,3 and b̂_LIML,4 are equal to

(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) [(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) [(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) ψ_LIML,i]]   (44)

and

(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) [(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) [(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) [(λ'A^{-1}z_i ε_i)/(λ'A^{-1}λ) - (λ'A^{-1}φ/(λ'A^{-1}λ)) ψ_LIML,i]]].   (45)
Using (42) - (45), we can easily construct consistent estimators of the asymptotic variances of √n(b̂_LIML,1 - β), √n(b̂_LIML,2 - β), √n(b̂_LIML,3 - β), and √n(b̂_LIML,4 - β). Suppose that Â, λ̂, and φ̂ are consistent estimators of A, λ, and φ. Likewise, let Â_t and λ̂_t denote some consistent estimators of A_t and λ_t. For example,

λ̂ = (1/n) Σ_i z_i x_i,   Â = (1/n) Σ_i z_i z_i',   φ̂ = (1/n) Σ_i w_i (y_i - β̂ x_i),

Â_t = (1/n) Σ_i z_it z_it',   λ̂_t = (1/n) Σ_i z_it x_it*,

where z_i = (y_i0, y_iT-1 - β̂ y_iT-2, ..., y_i2 - β̂ y_i1)', β̂ is any √n-consistent estimator of β, and ε̂_i = y_i - β̂ x_i. Also, let

ψ̂_LIML,i = (Σ_{t=1}^{T-1} λ̂_t'Â_t^{-1} z_it ε̂_it) / (Σ_{t=1}^{T-1} λ̂_t'Â_t^{-1}λ̂_t).

From (42) - (45), it then follows that

(1/n) Σ_i ((λ̂'Â^{-1}z_i ε̂_i)/(λ̂'Â^{-1}λ̂) - (λ̂'Â^{-1}φ̂/(λ̂'Â^{-1}λ̂)) ψ̂_LIML,i)²,

together with the analogous sample second moments of the estimated influence functions obtained by substituting the estimated quantities into (43), (44), and (45), are consistent estimators of the asymptotic variances of √n(b̂_LIML,1 - β), √n(b̂_LIML,2 - β), √n(b̂_LIML,3 - β), and √n(b̂_LIML,4 - β).
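The plug-in variance construction can be sketched as follows. Everything here, including the stand-in data and the stand-in initial influence function, is hypothetical and only exercises the recursion in (42)-(45).

```python
import numpy as np

def iterated_variance(lam, Ainv, phi, Z, ehat, psi0):
    """Estimated influence function of the next iterate, eq. (42), and its
    sample variance. lam, phi: (k,); Ainv: (k,k); Z: (n,k); ehat, psi0: (n,)."""
    denom = lam @ Ainv @ lam
    a = (Z * ehat[:, None]) @ Ainv @ lam / denom     # lambda'A^{-1} z_i e_i term
    c = (lam @ Ainv @ phi) / denom                   # lambda'A^{-1} phi term
    psi = a - c * psi0                               # influence function (42)
    return np.mean(psi ** 2), psi

# Hypothetical small example just to exercise the recursion.
rng = np.random.default_rng(4)
n, k = 400, 2
Z = rng.standard_normal((n, k))
x = Z @ np.ones(k) + rng.standard_normal(n)
ehat = rng.standard_normal(n)
lam = Z.T @ x / n
Ainv = np.linalg.inv(Z.T @ Z / n)
phi = rng.standard_normal(k) * 0.1
psi0 = rng.standard_normal(n)                        # stand-in initial IF

v1, psi1 = iterated_variance(lam, Ainv, phi, Z, ehat, psi0)
v2, _ = iterated_variance(lam, Ainv, phi, Z, ehat, psi1)   # next iterate: (43)
print(v1 > 0 and v2 > 0)
```

Feeding the output influence function back in reproduces the nesting of (43)-(45), one call per iteration.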
I  Approximation of CUE

We examine an easier method of calculating an estimator that is equivalent to the CUE up to the second order, adapting Rothenberg's (1984) argument, who was concerned about properties of the linearized version of the MLE. We basically argue that two Newton iterations suffice for second order bias removal.

The CUE solves

min_c L(c) = min_c ḡ(c)' Ω̂(c)^{-1} ḡ(c),

where ḡ(c) = (1/n) Σ_{i=1}^n g_i(c). Let b_CUE denote the minimizer, and let L_j(c) denote the j-th derivative of L(c). We consider an iterated version of the CUE. Suppose that we have a √n-consistent estimator b₀. Such an estimator can be easily found by the usual GMM estimation method. Note that we would have b₀ - b_CUE = O_p(1/√n) for any √n-consistent estimator b₀. Assume that L₃(b) = O_p(1). (This condition is expected to be satisfied for most estimators.) Let

b_{r+1} = b_r - L₁(b_r)/L₂(b_r).

Expanding around b_CUE, and noting that L₁(b_CUE) = 0, we can obtain

b₁ - b_CUE = b₀ - b_CUE - L₁(b₀)/L₂(b₀)
  = b₀ - b_CUE - [L₂(b_CUE)(b₀ - b_CUE) + (1/2) L₃(b_CUE)(b₀ - b_CUE)² + O_p((b₀ - b_CUE)³)] / [L₂(b_CUE) + L₃(b_CUE)(b₀ - b_CUE) + O_p((b₀ - b_CUE)²)]
  = (L₃(b_CUE)/(2 L₂(b_CUE))) (b₀ - b_CUE)² + O_p((b₀ - b_CUE)³).

It follows that

b₁ - b_CUE = O_p(1/n).

We can similarly show that

b₂ - b_CUE = (L₃(b_CUE)/(2 L₂(b_CUE))) (b₁ - b_CUE)² + O_p((b₁ - b_CUE)³) = O_p(1/n²),

or

√n(b₂ - b_CUE) = O_p(n^{-3/2}).

This implies that b₂ has very similar properties as b_CUE: its (approximate) mean and variance up to O(n^{-1}) coincide with those of b_CUE.
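The two-Newton-step argument can be sketched as follows, with numerically differentiated L₁ and L₂ applied to a continuously updated objective. The linear IV moment, the DGP, and all names are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative linear IV moments g_i(c) = z_i (y_i - c x_i).
n, beta = 800, 0.5
z = rng.standard_normal((n, 3))
u = rng.standard_normal(n)
x = z @ np.array([1.0, 0.6, 0.4]) + u
y = beta * x + 0.5 * u + rng.standard_normal(n)
g = lambda c: z * (y - c * x)[:, None]              # (n, 3) moment contributions

def L(c):
    """CUE objective: n * gbar(c)' Omega(c)^{-1} gbar(c)."""
    G = g(c)
    gbar = G.mean(axis=0)
    Omega = G.T @ G / n                             # continuously updated weight
    return n * gbar @ np.linalg.solve(Omega, gbar)

def newton_step(c, h=1e-4):
    L1 = (L(c + h) - L(c - h)) / (2 * h)            # numerical first derivative
    L2 = (L(c + h) - 2 * L(c) + L(c - h)) / h**2    # numerical second derivative
    return c - L1 / L2

b0 = 0.3                                            # any root-n consistent start
b1 = newton_step(b0)
b2 = newton_step(b1)                                # matches the CUE to O(1/n^2)
print(abs(b2 - beta) < 0.15)
```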
References

[1] Ahn, S., and P. Schmidt, 1995, "Efficient Estimation of Models for Dynamic Panel Data", Journal of Econometrics 68, 5 - 28.

[2] Ahn, S., and P. Schmidt, 1997, "Efficient Estimation of Dynamic Panel Data Models: Alternative Assumptions and Simplified Estimation", Journal of Econometrics 76, 309 - 321.

[3] Alonso-Borrego, C. and M. Arellano, 1999, "Symmetrically Normalized Instrumental-Variable Estimation Using Panel Data", Journal of Business and Economic Statistics 17, 36 - 49.

[4] Andrews, D.W.K., 1992, "Generic Uniform Convergence", Econometric Theory 8, 241 - 257.

[5] Andrews, D.W.K., 1994, "Empirical Process Methods in Econometrics", in R.F. Engle and D.L. McFadden, eds., Handbook of Econometrics, Vol. 4. Amsterdam: North-Holland.

[6] Alvarez, J. and M. Arellano, 1998, "The Time Series and Cross-Section Asymptotics of Dynamic Panel Data Estimators", unpublished manuscript.

[7] Anderson, T.W. and C. Hsiao, 1982, "Formulation and Estimation of Dynamic Models Using Panel Data", Journal of Econometrics 18, 47 - 82.

[8] Arellano, M. and S.R. Bond, 1991, "Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations", Review of Economic Studies 58, 277 - 297.

[9] Arellano, M. and O. Bover, 1995, "Another Look at the Instrumental Variable Estimation of Error-Component Models", Journal of Econometrics 68, 29 - 51.

[10] Bekker, P., 1994, "Alternative Approximations to the Distributions of Instrumental Variable Estimators", Econometrica 62, 657 - 681.

[11] Blundell, R. and S. Bond, 1998, "Initial Conditions and Moment Restrictions in Dynamic Panel Data Models", Journal of Econometrics 87, 115 - 143.

[12] Chamberlain, G., 1984, "Panel Data", in Griliches, Z. and M.D. Intriligator, eds., Handbook of Econometrics, Vol. 2. Amsterdam: North-Holland.

[13] Chao, J. and N. Swanson, 2000, "An Analysis of the Bias and MSE of the IV Estimator under Weak Identification with Application to Bias Correction", unpublished manuscript.

[14] Choi, I. and P.C.B. Phillips, 1992, "Asymptotic and Finite Sample Distribution Theory for IV Estimators and Tests in Partially Identified Structural Equations", Journal of Econometrics 51, 113 - 150.

[15] Davis, A.W., 1980, "Invariant Polynomials with Two Matrix Arguments, Extending the Zonal Polynomials", in P.R. Krishnaiah, ed., Multivariate Analysis V, 287 - 299. Amsterdam: North-Holland.

[16] Donald, S. and W. Newey, 1998, "Choosing the Number of Instruments", unpublished manuscript.

[17] Dufour, J.-M., 1997, "Some Impossibility Theorems in Econometrics with Applications to Structural and Dynamic Models", Econometrica 65, 1365 - 1388.

[18] Griliches, Z. and J. Hausman, 1986, "Errors in Variables in Panel Data", Journal of Econometrics 31, 93 - 118.

[19] Hahn, J., 1997, "Efficient Estimation of Panel Data Models with Sequential Moment Restrictions", Journal of Econometrics 79, 1 - 21.

[20] Hahn, J., 1999, "How Informative Is the Initial Condition in the Dynamic Panel Model with Fixed Effects?", Journal of Econometrics 93, 309 - 326.

[21] Hahn, J. and J. Hausman, 2000, "A New Specification Test for the Validity of Instrumental Variables", forthcoming, Econometrica.

[22] Hahn, J. and G. Kuersteiner, 2000, "Asymptotically Unbiased Inference for a Dynamic Panel Model with Fixed Effects When Both n and T are Large", unpublished manuscript.

[23] Hall, P. and J. Horowitz, 1996, "Bootstrap Critical Values for Tests Based on Generalized-Method-of-Moments Estimators", Econometrica 64, 891 - 916.

[24] Hausman, J.A. and W.E. Taylor, 1983, "Identification in Linear Simultaneous Equations Models with Covariance Restrictions: An Instrumental Variables Interpretation", Econometrica 51, 1527 - 1550.

[25] Hausman, J.A., W.K. Newey, and W.E. Taylor, 1987, "Efficient Estimation and Identification of Simultaneous Equation Models with Covariance Restrictions", Econometrica 55, 849 - 874.

[26] Holtz-Eakin, D., W.K. Newey, and H. Rosen, 1988, "Estimating Vector Autoregressions with Panel Data", Econometrica 56, 1371 - 1395.

[27] Kiviet, J.F., 1995, "On Bias, Inconsistency, and Efficiency of Various Estimators in Dynamic Panel Data Models", Journal of Econometrics 68, 53 - 78.

[28] Kruiniger, H., "GMM Estimation of Dynamic Panel Data Models with Persistent Data", unpublished manuscript.

[29] Kuersteiner, G.M., 2000, "Mean Squared Error Reduction for GMM Estimators of Linear Time Series Models", unpublished manuscript.

[30] Lancaster, T., 1997, "Orthogonal Parameters and Panel Data", unpublished manuscript.

[31] Magnus, J. and H. Neudecker, 1988, Matrix Differential Calculus with Applications in Statistics and Econometrics. Chichester: John Wiley & Sons.

[32] Mariano, R.S. and T. Sawa, 1972, "The Exact Finite-Sample Distribution of the Limited-Information Maximum Likelihood Estimator in the Case of Two Included Endogenous Variables", Journal of the American Statistical Association 67, 159 - 163.

[33] Moon, H.R. and P.C.B. Phillips, 2000, "GMM Estimation of Autoregressive Roots Near Unity with Panel Data", unpublished manuscript.

[34] Nagar, A.L., 1959, "The Bias and Moment Matrix of the General k-Class Estimators of the Parameters in Simultaneous Equations", Econometrica 27, 575 - 595.

[35] Nelson, C., R. Startz, and E. Zivot, 1998, "Valid Confidence Intervals and Inference in the Presence of Weak Instruments", International Economic Review 39, 1119 - 1144.

[36] Newey, W.K. and R.J. Smith, 2000, "Asymptotic Equivalence of Empirical Likelihood and the Continuous Updating Estimator", unpublished manuscript.

[37] Nickell, S., 1981, "Biases in Dynamic Models with Fixed Effects", Econometrica 49, 1417 - 1426.

[38] Pakes, A. and D. Pollard, 1989, "Simulation and the Asymptotics of Optimization Estimators", Econometrica 57, 1027 - 1057.

[39] Phillips, P.C.B., 1989, "Partially Identified Econometric Models", Econometric Theory 5, 181 - 240.

[40] Richardson, D.J., 1968, "The Exact Distribution of a Structural Coefficient Estimator", Journal of the American Statistical Association 63, 1214 - 1226.

[41] Rothenberg, T.J., 1983, "Asymptotic Properties of Some Estimators in Structural Models", in Studies in Econometrics, Time Series, and Multivariate Statistics.

[42] Rothenberg, T.J., 1984, "Approximating the Distributions of Econometric Estimators and Test Statistics", in Griliches, Z. and M.D. Intriligator, eds., Handbook of Econometrics, Vol. 2. Amsterdam: North-Holland.

[43] Smith, M.D., 1993, "Expectations of Ratios of Quadratic Forms in Normal Variables: Evaluating Some Top-Order Invariant Polynomials", Australian Journal of Statistics 35, 271 - 282.

[44] Wang, J. and E. Zivot, 1998, "Inference on Structural Parameters in Instrumental Variables Regression with Weak Instruments", Econometrica 66, 1389 - 1404.

[45] Windmeijer, F., 2000, "GMM Estimation of Linear Dynamic Panel Data Models using Nonlinear Moment Conditions", unpublished manuscript.
Table 1: Performance of b_Nagar and b_LIML

                            %bias                          RMSE
  T     n     β     b_GMM  b_Nagar  b_LIML     b_GMM   b_Nagar   b_LIML
  5   100   0.1      -16      3       -3       0.081     0.084    0.082
 10   100   0.1      -14      1       -1       0.046     0.046    0.045
  5   500   0.1       -4      ·       -1       0.036     0.036    0.036
 10   500   0.1       -3      ·       -1       0.020     0.020    0.020
  5   100   0.3       -9      ·       -3       0.099     0.103    0.099
 10   100   0.3       -7      ·       -1       0.053     0.051    0.050
  5   500   0.3       -2      ·       -1       0.044     0.044    0.044
 10   500   0.3       -2      ·        1       0.023     0.023    0.023
  5   100   0.5      -10      ·       -3       0.132     0.140    0.130
 10   100   0.5       -7      ·       -1       0.064     0.059    0.058
  5   500   0.5       -2      ·       -1       0.057     0.057    0.057
 10   500   0.5       -2      ·        ·       0.027     0.026    0.026
  5   100   0.8      -28      ·      -15       0.321   102.156    0.327
 10   100   0.8      -14      ·       -5       0.136     0.128    0.109
  5   500   0.8       -7      ·       -3       0.130     0.141    0.127
 10   500   0.8       -3      ·       -1       0.050     0.044    0.044
  5   100   0.9      -51    -70      -41       0.555    26.984    0.604
 10   100   0.9      -24     -4      -15       0.250     4.712    0.229
  5   500   0.9      -20    -41      -10       0.278    46.933    0.277
 10   500   0.9       -9      ·       -2       0.102     0.087    0.080

(· = not available)
Table 2: Performance of Second Order Theory in Predicting Properties of b_GMM

  T     n     β    Actual Bias   Actual %Bias   Second Order Bias   Second Order %Bias
  5   100   0.1      -0.016         -16.00           -0.018              -17.71
 10   100   0.1      -0.014         -14.26           -0.016              -15.78
  5   500   0.1      -0.004          -3.72           -0.004               -3.54
 10   500   0.1      -0.003          -3.20           -0.003               -3.16
  5   100   0.3      -0.028          -9.23           -0.032              -10.60
 10   100   0.3      -0.021          -7.11           -0.024               -8.13
  5   500   0.3      -0.006          -2.08           -0.006               -2.12
 10   500   0.3      -0.005          -1.58           -0.005               -1.63
  5   100   0.5      -0.052         -10.32           -0.060              -12.09
 10   100   0.5      -0.034          -6.78           -0.040               -8.00
  5   500   0.5      -0.011          -2.29           -0.012               -2.42
 10   500   0.5      -0.008          -1.51           -0.008               -1.60
  5   100   0.8      -0.224         -28.06           -0.302              -37.81
 10   100   0.8      -0.108         -13.53           -0.152              -18.98
  5   500   0.8      -0.056          -7.02           -0.060               -7.56
 10   500   0.8      -0.027          -3.44           -0.030               -3.80
  5   100   0.9      -0.455         -50.56           -1.068             -118.64
 10   100   0.9      -0.220         -24.47           -0.474              -52.66
  5   500   0.9      -0.184         -20.48           -0.214              -23.73
 10   500   0.9      -0.078          -8.64           -0.095              -10.53
Table 3: Performance of b_BC2

  T     n     β    %bias(b_GMM)   %bias(b_BC2)   RMSE(b_GMM)   RMSE(b_BC2)
  5   100   0.1       -14.96          0.25          0.08          0.08
 10   100   0.1       -14.06         -0.77          0.05          0.05
  5   500   0.1        -3.68         -0.38          0.04          0.04
 10   500   0.1        -3.15         -0.16          0.02          0.02
  5   100   0.3        -8.86         -0.47          0.10          0.10
 10   100   0.3        -7.06         -0.66          0.05          0.05
  5   500   0.3        -2.03         -0.16          0.04          0.04
 10   500   0.3        -1.58         -0.10          0.02          0.02
  5   100   0.5       -10.05         -1.14          0.13          0.13
 10   100   0.5        -6.76         -0.93          0.06          0.06
  5   500   0.5        -2.25         -0.15          0.06          0.06
 10   500   0.5        -1.53         -0.11          0.03          0.03
  5   100   0.8       -27.65        -11.33          0.32          0.34
 10   100   0.8       -13.45         -4.55          0.14          0.11
  5   500   0.8        -6.98         -0.72          0.13          0.13
 10   500   0.8        -3.48         -0.37          0.05          0.04
  5   100   0.9       -50.22        -42.10          0.55          0.78
 10   100   0.9       -24.27        -15.82          0.25          0.23
  5   500   0.9       -20.50         -6.23          0.28          0.30
 10   500   0.9        -8.74         -2.02          0.10          0.08
Table 4: Performance of Iterated Long Difference Estimator

           b_2SLS,1   b_2SLS,2   b_2SLS,3   b_2SLS,4
Bias        -0.0813    -0.0471    -0.0235    -0.0033
%Bias       -9.0316    -5.2316    -2.6072    -0.3644
RMSE         0.3802     0.2863     0.2479     0.2536

           b_GMM,1    b_GMM,2    b_GMM,3    b_GMM,4
Bias        -0.0770    -0.0374     0.0006     0.0104
%Bias       -8.5505    -4.1599     0.0622     1.1570
RMSE         0.1699     0.1954     0.2545     0.2851

           b_LIML,1   b_LIML,2   b_LIML,3   b_LIML,4
Bias        -0.0878    -0.0475    -0.0186     0.0074
%Bias       -9.7571    -5.2756    -2.0698     0.8251
RMSE         0.2458     0.2391     0.2292     0.2638
Table 5: Comparison with Blundell and Bond's (1998) Estimator: β = .9, T = 5, N = 100

                  b_BB1      b_BB2     b_BB3    b_BB4    b_LIML,1   b_LIML,2   b_LIML,3
Mean % Bias     -33.8148   -29.4131   4.7432   4.2551    -9.7571    -5.2755    -2.0697
Median % Bias   -31.1881   -25.9085   5.9111   5.6280   -15.3878    -9.0639    -6.9573
RMSE              0.4796     0.4257   0.0823   0.0882     0.2458     0.2391     0.2292

Table 6: Sensitivity of Blundell and Bond's (1998) Estimator: β = .9, T = 5, N = 100

β_f = ·
                  b_BB1      b_BB2     b_BB3     b_BB4    b_LIML,1   b_LIML,2   b_LIML,3
Mean % Bias       8.9525    14.4790   20.9971   21.5154    0.0252     0.1691     0.2334
Median % Bias     9.5207    15.4609   21.1202   21.6144   -0.2163    -0.2214    -0.2469
RMSE              0.0994     0.1400    0.1899    0.1944    0.0570     0.0611     0.0630

β_f = ·
                  b_BB1      b_BB2     b_BB3     b_BB4    b_LIML,1   b_LIML,2   b_LIML,3
Mean % Bias      10.8819    17.3840   24.8534   25.4517    0.0429     0.1455     0.1860
Median % Bias    11.4178    18.2542   24.9990   25.5079   -0.1621    -0.1890    -0.2168
RMSE              0.1156     0.1654    0.2246    0.2299    0.0521     0.0543     0.0555

(· = not available)
Table 7: Performance of Iterated Long Difference Estimator for T = 5

N = 100                       b_LIML,1   b_LIML,2   b_LIML,3   b_LIML,4
β = .75
  Actual Mean % bias                ·          ·          ·          ·
  Actual Median % bias              ·          ·          ·          ·
  2nd order Mean % bias        -.1358     2.8043     4.6720     6.5872
  RMSE                              ·          ·          ·          ·
β = .80
  Actual Mean % bias                ·          ·          ·          ·
  Actual Median % bias              ·          ·          ·          ·
  2nd order Mean % bias        -.4020     4.6019     7.3205     9.8596
  RMSE                              ·          ·          ·          ·
β = .85
  Actual Mean % bias           1.6201     3.6981          ·          ·
  Actual Median % bias        -5.4235    -4.0059    -3.5471          ·
  2nd order Mean % bias        -.8477     9.2416    14.3129    18.0912
  RMSE                         0.2333     0.2494     0.2532     0.2848
β = .90
  Actual Mean % bias          -9.7571    -5.2756    -2.0698     0.8251
  Actual Median % bias       -15.3889    -9.0667    -6.9556    -5.6444
  2nd order Mean % bias       -1.7413    25.3502    40.2274    49.3254
  RMSE                         0.2458     0.2391     0.2292     0.2638
β = .95
  Actual Mean % bias         -15.2028    -9.5738    -6.0855    -2.8321
  Actual Median % bias       -19.6368   -12.4895    -9.6105    -8.0684
  2nd order Mean % bias       -4.4189   132.5028   229.9208   298.9023
  RMSE                         0.2518     0.2191     0.2124     0.2397

N = 200                       b_LIML,1   b_LIML,2   b_LIML,3   b_LIML,4
β = .75
  Actual Mean % bias           1.2054     3.0110     3.8420     5.1421
  Actual Median % bias        -1.6533    -0.2333    -0.1000    -0.4733
  2nd order Mean % bias        -.0679     1.4022     2.3360     3.2936
  RMSE                         0.1336     0.1630     0.1906     0.2189
β = .80
  Actual Mean % bias           1.4085     3.7041     4.3488     4.8453
  Actual Median % bias        -1.1813    -0.5938    -1.1500          ·
  2nd order Mean % bias        -.2010     2.3010     3.6602     4.9210
  RMSE                         0.1740     0.2071     0.2245     0.2435
β = .85
  Actual Mean % bias           0.0299     1.7783     1.7835     3.6882
  Actual Median % bias        -6.8412    -3.7059    -2.8588    -2.6000
  2nd order Mean % bias        -.4238     4.6208     7.1565     9.0456
  RMSE                         0.2239     0.2363     0.2288     0.2513
β = .90
  Actual Mean % bias          -5.8274    -2.7803    -1.3073     0.1639
  Actual Median % bias       -13.1000    -7.9111    -5.9278    -5.2333
  2nd order Mean % bias        -.8706    12.6751    20.1137    24.6627
  RMSE                         0.2406     0.2257     0.2252     0.2396
β = .95
  Actual Mean % bias         -13.3638    -8.7034    -6.2646    -4.1416
  Actual Median % bias       -19.3737   -12.3211    -9.5579    -8.0526
  2nd order Mean % bias       -2.2094    66.2511   111.9604   149.1511
  RMSE                         0.2515     0.2156     0.1991     0.2020

(· = not available)
Table 8: Performance of β_I2SLS and β_CUE for T = 5, N = 100

                                β_I2SLS,LD    β_CUE,LD
β = 0.75
  Actual Mean % Bias               1.3811      11.5527
  Second Order Mean % Bias         5.4224       7.6105
  Actual Median % Bias             5.5331       7.4700
  RMSE                             0.1761       0.2132
  InterQuartile Range              0.2434       0.3067
β = 0.80
  Actual Mean % Bias               4.3037      10.4126
  Second Order Mean % Bias         9.6240      13.0702
  Actual Median % Bias             1.4569       8.6510
  RMSE                             0.1727       0.2048
  InterQuartile Range              0.2422       0.3031
β = 0.85
  Actual Mean % Bias               1.9659       7.9833
  Second Order Mean % Bias        20.9080      27.0025
  Actual Median % Bias             0.0656       7.5588
  RMSE                             0.1604       0.1947
  InterQuartile Range              0.2270       0.2900
β = 0.90
  Actual Mean % Bias              -0.7710       6.1387
  Second Order Mean % Bias        65.0609      78.5269
  Actual Median % Bias            -2.3467       6.1147
  RMSE                             0.1534       0.1803
  InterQuartile Range              0.2115       0.2668
β = 0.95
  Actual Mean % Bias              -3.3676       3.1244
  Second Order Mean % Bias       481.1993     533.4268
  Actual Median % Bias            -4.7764       3.1365
  RMSE                             0.1494       0.1655
  InterQuartile Range              0.2002       0.2512
Table 9: Performance of β_I2SLS and β_CUE for T = 5, N = 200

                                β_I2SLS,LD    β_CUE,LD
β = 0.75
  Actual Mean % Bias               5.9078       8.8638
  Second Order Mean % Bias         2.7112       3.8052
  Actual Median % Bias             1.5982       4.2172
  RMSE                             0.1519       0.1704
  InterQuartile Range              0.1896       0.2297
β = 0.80
  Actual Mean % Bias               4.9410       8.3701
  Second Order Mean % Bias         4.8120       6.5351
  Actual Median % Bias             1.8273       5.4765
  RMSE                             0.1447       0.1674
  InterQuartile Range              0.1997       0.2468
β = 0.85
  Actual Mean % Bias               2.7966       7.3021
  Second Order Mean % Bias        10.4540      13.5012
  Actual Median % Bias             1.0718       5.8672
  RMSE                             0.1373       0.1585
  InterQuartile Range              0.1909       0.2341
β = 0.90
  Actual Mean % Bias               0.8948       5.4221
  Second Order Mean % Bias        32.5304      39.2635
  Actual Median % Bias            -0.0204       5.3657
  RMSE                             0.1271       0.1448
  InterQuartile Range              0.1750       0.2101
β = 0.95
  Actual Mean % Bias              -2.0943       2.8482
  Second Order Mean % Bias       240.5997     266.7134
  Actual Median % Bias            -2.5881       2.8984
  RMSE                             0.1216       0.1331
  InterQuartile Range              0.1594       0.1915
Table 10: Performance of β_CUE,LD, β_CUE,BB, β_CUE2,AS, β_CUE2,LD, and β_CUE2,BB for T = 5, N = 100

                        β_CUE,LD   β_CUE,BB   β_CUE2,AS    β_CUE2,LD   β_CUE2,BB
β = .75
  Median % Bias           7.4700     1.2705      6.6814       4.2643      2.0471
  Interquartile Range     0.3067     0.1480      0.2864       0.2911      0.2456
  Mean % Bias            11.5527     0.4852   -296.6631    1250.1149   -136.4730
  RMSE                    0.2132     0.1050    152.2249     676.8912    117.9397
β = .8
  Median % Bias           8.6510     1.2629      4.7391       1.6364      0.6595
  Interquartile Range     0.3031     0.1540      0.3206       0.3410      0.3676
  Mean % Bias            10.4126    -0.0913     33.6498     -15.0393   -125.6554
  RMSE                    0.2048     0.1092     29.2436      12.6828     74.6934
β = .85
  Median % Bias           7.5588     1.9808      0.9468      -2.2980     -1.0482
  Interquartile Range     0.2900     0.1645      0.4253       1.2817      0.4902
  Mean % Bias             7.9833     0.3824   -100.7981    -161.9267      6.4686
  RMSE                    0.1947     0.1225     28.4546      25.6932     23.8489
β = .9
  Median % Bias           6.1147     3.0423     -4.2248     -16.4693     -6.9530
  Interquartile Range     0.2668     0.1637      2.3282       1.5503      2.3169
  Mean % Bias             6.1387     1.2087    -30.2898    -177.0842    495.0171
  RMSE                    0.1803     0.1344     24.6733     131.8341    193.9465
β = .95
  Median % Bias           3.1365     3.4897    -17.7102    -129.4765    -21.5058
  Interquartile Range     0.2512     0.1452      2.5936       1.6277      2.5714
  Mean % Bias             3.1244     1.0877   -290.6542     -42.6293    -32.0973
  RMSE                    0.1655     0.1347    166.1415      67.0635     98.0361
Table 11: Performance of β_CUE,LD, β_CUE,BB, β_CUE2,AS, β_CUE2,LD, and β_CUE2,BB for T = 5, N = 200

                        β_CUE,LD   β_CUE,BB   β_CUE2,AS    β_CUE2,LD   β_CUE2,BB
β = .75
  Median % Bias           4.2172     0.4242      3.4952       4.0943      1.2644
  Interquartile Range     0.2297     0.1032      0.1855       0.2034      0.1195
  Mean % Bias             8.8638     0.1604    116.0861      29.5421      4.4327
  RMSE                    0.1704     0.0719     85.9117      10.6641      4.5595
β = .8
  Median % Bias           5.4765     0.5388      5.8182       5.3421      0.8893
  Interquartile Range     0.2468     0.1063      0.2105       0.2132      0.1472
  Mean % Bias             8.3701    -0.1898     16.9181    -233.6177    -21.1440
  RMSE                    0.1674     0.0736     13.7393     127.6513      9.5825
β = .85
  Median % Bias           5.8672     0.6441      5.1660       3.7226      1.0708
  Interquartile Range     0.2341     0.1143      0.2295       0.2347      0.2619
  Mean % Bias             7.3021    -0.7076    688.7913     -50.0828     59.6610
  RMSE                    0.1585     0.0779    455.6137      19.8972     66.8390
β = .9
  Median % Bias           5.3657     0.9204      0.9958      -2.8551     -1.0774
  Interquartile Range     0.2101     0.1152      0.3766       1.3425      0.5870
  Mean % Bias             5.4221    -0.3893    479.7193      31.6093    -29.9555
  RMSE                    0.1448     0.0913    381.8271      23.2206     42.2422
β = .95
  Median % Bias           2.8984     2.5208    -11.2026    -125.6884    -12.6677
  Interquartile Range     0.1915     0.1099      2.5978       1.5877      2.6203
  Mean % Bias             2.8482     0.9733    -82.2370   -6464.7709   -177.6883
  RMSE                    0.1331     0.1044     39.5396    4315.6181    116.8096