FWF - Antragsunterlagen - Erwin-Schrödinger

TECHNICAL APPENDIX
The proofs for the panel data error component SARAR(R,S) framework are given in full to the benefit of the reader. They build on and generalize analogous proofs by Kelejian and Prucha (2010) for a cross-sectional SARAR(1,1) model and by Badinger and Egger (2011) for a cross-sectional SARAR(R,S) model, as well as analogous proofs for a panel SARAR(0,1) model with nonstochastic regressors by Kapoor, Kelejian, and Prucha (2007).
APPENDIX A
Notation
We adopt the standard convention to refer to matrices and vectors with acronyms in boldface. Let $\mathbf{A}_N$ denote some matrix. Its elements are referred to as $a_{ij,N}$; $\mathbf{a}_{i.,N}$ and $\mathbf{a}_{.i,N}$ denote the $i$-th row and the $i$-th column of $\mathbf{A}_N$, respectively. If $\mathbf{A}_N$ is a square matrix, $\mathbf{A}_N^{-1}$ denotes its inverse; if $\mathbf{A}_N$ is singular, $\mathbf{A}_N^{+}$ denotes its generalized inverse. If $\mathbf{A}_N$ is a square, symmetric, and positive definite matrix, $\mathbf{A}_N^{1/2}$ denotes the unique positive definite square root of $\mathbf{A}_N$ and $\mathbf{A}_N^{-1/2}$ denotes $(\mathbf{A}_N^{-1})^{1/2}$. The (submultiplicative) matrix norm $\|\cdot\|$ is defined as $\|\mathbf{A}_N\| = [\mathrm{Tr}(\mathbf{A}_N'\mathbf{A}_N)]^{1/2}$. Finally, unless stated otherwise, for expressions involving sums over elements of vectors or matrices that are stacked over all time periods, we adopt the convention to use single indexation $i$, running from $i = 1, \ldots, NT$, to denote elements of the stacked vectors or matrices.¹
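The norm defined here is the Frobenius norm. A minimal numerical sketch (using NumPy; the matrices are arbitrary examples, not quantities from the paper) checks the definition against NumPy's built-in Frobenius norm and the submultiplicativity property invoked later in the proofs:

```python
import numpy as np

def frob_norm(A):
    """Matrix norm ||A|| = [Tr(A'A)]^{1/2}, as defined in the text."""
    return np.sqrt(np.trace(A.T @ A))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.5, -1.0], [2.0, 1.5]])

# Agrees with NumPy's built-in Frobenius norm ...
assert np.isclose(frob_norm(A), np.linalg.norm(A, 'fro'))
# ... and is submultiplicative: ||AB|| <= ||A|| ||B||
assert frob_norm(A @ B) <= frob_norm(A) * frob_norm(B) + 1e-12
```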
Remark A.1
i) Definition of row and column sum boundedness (compare Kapoor, Kelejian, and Prucha, 2007, p. 99): Let $\mathbf{B}_N$, $N \geq 1$, be some sequence of $NT \times NT$ matrices with $T$ some fixed positive integer. We will then say that the row and column sums of the (sequence of) matrices $\mathbf{B}_N$ are bounded uniformly in absolute value if there exists a constant $c < \infty$, which does not depend on $N$, such that
$$\max_{1 \leq i \leq NT} \sum_{j=1}^{NT} |b_{ij,N}| \leq c \quad \text{and} \quad \max_{1 \leq j \leq NT} \sum_{i=1}^{NT} |b_{ij,N}| \leq c \quad \text{for all } N \geq 1.$$
The following results will be repeatedly used in the subsequent proofs.
1
Take the vector u N  [uN (1),..., uN (T )] , for example. Using indexation i  1,..., NT , the
elements ui , N , i  1,..., N refer to t  1, elements ui , N , i  N  1,...,2 N refer to t  2 , etc., and
elements ui , N , i  (T  1) N  1,..., NT refer to t  T . The major advantage of this notation is
that it avoids the use of double indexation for the cross-section and time dimension.
Moreover, it allows us the invoke several results referring to the case of a single cross-section,
which still apply to the case of T stacked cross-sections.
ii) Let $\mathbf{R}_N$ be a (sequence of) $N \times N$ matrices whose row and column sums are bounded uniformly in absolute value, and let $\mathbf{S}$ be some $T \times T$ matrix (with $T \geq 1$ fixed). Then the row and column sums of the matrix $\mathbf{S} \otimes \mathbf{R}_N$ are bounded uniformly in absolute value (compare Kapoor, Kelejian, and Prucha, 2007, p. 118).
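Part (ii) can be illustrated numerically: the maximal absolute row sum of $\mathbf{S} \otimes \mathbf{R}_N$ factorizes into the product of the maximal absolute row sums of the two factors, so it does not grow with $N$. A sketch (NumPy; both matrices are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 3, 40
S = rng.normal(size=(T, T))          # fixed T x T matrix
R = np.eye(N, k=1) * 0.5             # N x N with row/column sums <= 0.5

K = np.kron(S, R)                    # NT x NT Kronecker product

def max_row_sum(A):
    return np.abs(A).sum(axis=1).max()

# |s r| = |s||r|, so the absolute row sums of S (x) R factorize, and the
# maximum is the product of the two maxima -- a constant free of N.
assert np.isclose(max_row_sum(K), max_row_sum(S) * max_row_sum(R))
```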
iii) If $\mathbf{A}_N$ and $\mathbf{B}_N$ are (sequences of) $NT \times NT$ matrices (with $T \geq 1$ fixed), whose row and column sums are bounded uniformly in absolute value, then so are the row and column sums of $\mathbf{A}_N\mathbf{B}_N$ and $\mathbf{A}_N + \mathbf{B}_N$. If $\mathbf{Z}_N$ is a (sequence of) $NT \times P$ matrices whose elements are bounded uniformly in absolute value, then so are the elements of $\mathbf{A}_N\mathbf{Z}_N$ and $(NT)^{-1}\mathbf{Z}_N'\mathbf{A}_N\mathbf{Z}_N$. Of course, this also covers the case $(NT)^{-1}\mathbf{Z}_N'\mathbf{Z}_N$ for $\mathbf{A}_N = \mathbf{I}_{NT}$ (compare Kapoor, Kelejian, and Prucha, 2007, p. 119).
iv) Suppose that the row and column sums of the $NT \times NT$ matrices $\mathbf{A}_N = (a_{ij,N})$ are bounded uniformly in absolute value by some finite constant $c_A$; then $\sum_{i=1}^{NT}|a_{ij,N}|^q \leq c_A^q$ for $q \geq 1$ (see Kelejian and Prucha, 2008, Remark C.1).
v) Let $\boldsymbol{\xi}_N$ and $\boldsymbol{\eta}_N$ be $NT \times 1$ random vectors (with $T \geq 1$ fixed), where, for each $N$, the elements are independently distributed with zero mean and finite variances. Then the elements of $(NT)^{-1/2}\mathbf{Z}_N'\boldsymbol{\xi}_N$ are $O_p(1)$ and $(NT)^{-1}\boldsymbol{\xi}_N'\mathbf{A}_N\boldsymbol{\eta}_N$ is $O_p(1)$ (compare Kelejian and Prucha, 2004, Remark A.1).²
vi) Let $\boldsymbol{\zeta}_N$ be an $NT \times 1$ random vector (with $T \geq 1$ fixed), where, for each $N$, the elements are distributed with zero mean and finite fourth moments. Let $\boldsymbol{\pi}_N$ be some nonstochastic $NT \times 1$ vector whose elements are bounded uniformly in absolute value, and let $\boldsymbol{\Pi}_N$ be an $NT \times NT$ nonstochastic matrix whose row and column sums are bounded uniformly in absolute value. Define the column vector $\mathbf{d}_N = \boldsymbol{\pi}_N + \boldsymbol{\Pi}_N\boldsymbol{\zeta}_N$. It follows that the elements of $\mathbf{d}_N$ have finite fourth moments. (Compare Kelejian and Prucha, 2008, Lemma C.2, for the case $T = 1$ and independent elements of $\boldsymbol{\zeta}_N$.)³
² Kelejian and Prucha (2004) consider the case $T = 1$ and where the elements of $\boldsymbol{\xi}_N$ and $\boldsymbol{\eta}_N$ are identically distributed. Obviously, the result also holds for (fixed) $T \geq 1$ and under heteroskedasticity, as long as the variances of the elements of $\boldsymbol{\xi}_N$ and $\boldsymbol{\eta}_N$ are bounded uniformly in absolute value.
³ The extension to (fixed) $T \geq 1$ is obvious. Independence of the elements of $\boldsymbol{\zeta}_N$ is not required for the result to hold. The fourth moments of the elements of $\mathbf{d}_N = \boldsymbol{\pi}_N + \boldsymbol{\Pi}_N\boldsymbol{\zeta}_N$ are given by
$$E\Big(\pi_{i,N} + \sum_{j=1}^{NT}\Pi_{ij,N}\zeta_{j,N}\Big)^4 \leq 2^4 E\Big[\pi_{i,N}^4 + \Big(\sum_{j=1}^{NT}\Pi_{ij,N}\zeta_{j,N}\Big)^4\Big] \leq 2^4\Big[\pi_{i,N}^4 + \sum_{j=1}^{NT}\sum_{k=1}^{NT}\sum_{l=1}^{NT}\sum_{m=1}^{NT}|\Pi_{ij,N}||\Pi_{ik,N}||\Pi_{il,N}||\Pi_{im,N}|\,|E(\zeta_{j,N}\zeta_{k,N}\zeta_{l,N}\zeta_{m,N})|\Big] \leq K < \infty,$$
by Hölder's inequality, as long as the fourth moments of the elements of $\boldsymbol{\zeta}_N$ are bounded uniformly.
Remark A.2
The matrices $\mathbf{Q}_{0,N}$ and $\mathbf{Q}_{1,N}$ have the following properties (see Kapoor, Kelejian, and Prucha, 2007, p. 101):
$$\mathrm{tr}(\mathbf{Q}_{0,N}) = N(T-1), \quad \mathrm{tr}(\mathbf{Q}_{1,N}) = N, \quad \mathbf{Q}_{0,N}(\mathbf{e}_T \otimes \mathbf{I}_N) = \mathbf{0}, \quad \mathbf{Q}_{1,N}(\mathbf{e}_T \otimes \mathbf{I}_N) = \mathbf{e}_T \otimes \mathbf{I}_N,$$
$$\mathbf{Q}_{0,N}\boldsymbol{\varepsilon}_N = \mathbf{Q}_{0,N}\mathbf{v}_N, \quad \mathbf{Q}_{1,N}\boldsymbol{\varepsilon}_N = (\mathbf{e}_T \otimes \mathbf{I}_N)\boldsymbol{\mu}_N + \mathbf{Q}_{1,N}\mathbf{v}_N, \quad (\mathbf{I}_T \otimes \mathbf{D}_N)\mathbf{Q}_{0,N} = \mathbf{Q}_{0,N}(\mathbf{I}_T \otimes \mathbf{D}_N),$$
$$(\mathbf{I}_T \otimes \mathbf{D}_N)\mathbf{Q}_{1,N} = \mathbf{Q}_{1,N}(\mathbf{I}_T \otimes \mathbf{D}_N), \quad \mathrm{tr}[(\mathbf{I}_T \otimes \mathbf{D}_N)\mathbf{Q}_{0,N}] = (T-1)\,\mathrm{tr}(\mathbf{D}_N), \quad \mathrm{tr}[(\mathbf{I}_T \otimes \mathbf{D}_N)\mathbf{Q}_{1,N}] = \mathrm{tr}(\mathbf{D}_N),$$
where $\mathbf{D}_N$ is an arbitrary $N \times N$ matrix. Obviously, the row and column sums of $\mathbf{Q}_{0,N}$ and $\mathbf{Q}_{1,N}$ are bounded uniformly in absolute value.
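The properties in Remark A.2 can be verified numerically, assuming the standard Kapoor, Kelejian, and Prucha (2007) definitions $\mathbf{Q}_{1,N} = (\mathbf{J}_T/T) \otimes \mathbf{I}_N$ and $\mathbf{Q}_{0,N} = \mathbf{I}_{NT} - \mathbf{Q}_{1,N}$, where $\mathbf{J}_T = \mathbf{e}_T\mathbf{e}_T'$ (these definitions come from the cited reference, not from this appendix):

```python
import numpy as np

T, N = 4, 5
J_T = np.ones((T, T))
e_T = np.ones((T, 1))
Q1 = np.kron(J_T / T, np.eye(N))     # "between" projector
Q0 = np.eye(N * T) - Q1              # "within" projector

rng = np.random.default_rng(1)
D = rng.normal(size=(N, N))          # arbitrary N x N matrix
ITD = np.kron(np.eye(T), D)

assert np.isclose(np.trace(Q0), N * (T - 1))
assert np.isclose(np.trace(Q1), N)
assert np.allclose(Q0 @ np.kron(e_T, np.eye(N)), 0.0)
assert np.allclose(Q1 @ np.kron(e_T, np.eye(N)), np.kron(e_T, np.eye(N)))
assert np.allclose(ITD @ Q0, Q0 @ ITD)   # (I_T kron D) commutes with Q0, Q1
assert np.allclose(ITD @ Q1, Q1 @ ITD)
assert np.isclose(np.trace(ITD @ Q0), (T - 1) * np.trace(D))
assert np.isclose(np.trace(ITD @ Q1), np.trace(D))
```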
APPENDIX B
The following lemma will be repeatedly used in the subsequent proofs.

Lemma B.1⁴
Let $\mathbf{A}_N$ be some nonstochastic $NT \times NT$ matrix (with $T$ fixed), whose row and column sums are bounded uniformly in absolute value. Let $\mathbf{u}_N$ be defined by (2c) and let $\tilde{\mathbf{u}}_N$ be a predictor for $\mathbf{u}_N$. Suppose that Assumptions 1 to 4 hold. Then
(a) $N^{-1}E|\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N| = O(1)$, $\mathrm{Var}(N^{-1}\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N) = o(1)$, and $N^{-1}(\tilde{\mathbf{u}}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N) - N^{-1}E(\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N) = o_p(1)$.
(b) $N^{-1}E|\mathbf{d}_{.j,N}'\mathbf{A}_N\mathbf{u}_N| = O(1)$, $j = 1, \ldots, P$, where $\mathbf{d}_{.j,N}$ is the $j$-th column of the $NT \times P$ matrix $\mathbf{D}_N$, and $N^{-1}\mathbf{D}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N - N^{-1}E(\mathbf{D}_N'\mathbf{A}_N\mathbf{u}_N) = o_p(1)$.
(c) If furthermore Assumption 6 holds, then
$$N^{-1/2}\tilde{\mathbf{u}}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N = N^{-1/2}\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N + \boldsymbol{\alpha}_N' N^{1/2}\boldsymbol{\Delta}_N + o_p(1), \quad \text{with} \quad \boldsymbol{\alpha}_N = N^{-1}E[\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\mathbf{u}_N].$$
In light of (b), we have $\boldsymbol{\alpha}_N = O(1)$ and $N^{-1}\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\tilde{\mathbf{u}}_N - \boldsymbol{\alpha}_N = o_p(1)$.

⁴ Compare Lemma C.1 in Kelejian and Prucha (2008) for the case of a cross-sectional SARAR(1,1) model and Lemma C.1 in Badinger and Egger (2011) for the case of a cross-sectional SARAR(R,S) model.
Proof of part (a)
Let
$$\varphi_N = N^{-1}\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N \quad \text{and} \quad \tilde{\varphi}_N = N^{-1}\tilde{\mathbf{u}}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N; \tag{B.1}$$
then, given (4a), we have $\varphi_N = N^{-1}\boldsymbol{\varepsilon}_N'\mathbf{B}_N\boldsymbol{\varepsilon}_N$, with the symmetric $NT \times NT$ matrix $\mathbf{B}_N$ defined as
$$\mathbf{B}_N = \tfrac{1}{2}\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \textstyle\sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}'\big)^{-1}\Big](\mathbf{A}_N + \mathbf{A}_N')\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \textstyle\sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)^{-1}\Big]. \tag{B.2}$$
By Assumptions 1-3 and Remark A.1 in Appendix A, the row and column sums of the matrices $\mathbf{B}_N$ and $\boldsymbol{\Omega}_{\varepsilon,N}$ are bounded uniformly in absolute value. It follows that the row and column sums of the matrices $\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}$ are bounded uniformly in absolute value. In the following, let $K < \infty$ be a common bound for the row and column sums of the absolute values of the elements of $\mathbf{B}_N$, $\boldsymbol{\Omega}_{\varepsilon,N}$, and $\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}$, and for the absolute values of their respective elements. Then
$$E|\varphi_N| = E\Big|N^{-1}\sum_{i=1}^{NT}\sum_{j=1}^{NT}b_{ij,N}\varepsilon_{i,N}\varepsilon_{j,N}\Big| \leq N^{-1}\sum_{i=1}^{NT}\sum_{j=1}^{NT}|b_{ij,N}|\,E|\varepsilon_{i,N}\varepsilon_{j,N}| \leq N^{-1}\sum_{i=1}^{NT}\sum_{j=1}^{NT}|b_{ij,N}|\,\sigma_{\varepsilon,i}\sigma_{\varepsilon,j} \leq TK^3, \tag{B.3}$$
where we used Hölder's inequality in the second step. This proves that $E|\varphi_N|$ is $O(1)$.
Now consider $\mathrm{Var}(\varphi_N)$, invoking Lemma A.1 in Kelejian and Prucha (2008):
$$\mathrm{Var}(\varphi_N) = \mathrm{Cov}(N^{-1}\boldsymbol{\varepsilon}_N'\mathbf{B}_N\boldsymbol{\varepsilon}_N,\, N^{-1}\boldsymbol{\varepsilon}_N'\mathbf{B}_N\boldsymbol{\varepsilon}_N) \tag{B.4a}$$
$$= 2N^{-2}\mathrm{Tr}(\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}) + N^{-2}\sum_{i=1}^{NT}b_{ii,N}^{*2}\,[\mu_{i,N}^{(4)} - 3]$$
$$= 2N^{-2}\mathrm{Tr}(\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}\mathbf{B}_N\boldsymbol{\Omega}_{\varepsilon,N}) + N^{-2}\mathrm{Tr}\{\mathrm{diag}_{i=1,\ldots,NT}(b_{ii,N}^{*2})\,\mathrm{diag}_{i=1,\ldots,NT}[E(\eta_{i,N}^4) - 3]\}, \tag{B.4b}$$
where $b_{ii,N}^{*}$ is the $i$-th diagonal element of $\mathbf{B}_N^{*} = (b_{ij,N}^{*}) = \mathbf{S}_N\mathbf{B}_N\mathbf{S}_N$ and $\mu_{i,N}^{(4)}$ is the fourth moment of the $i$-th element of the $NT \times 1$ vector $\boldsymbol{\eta}_N = \mathbf{S}_N^{-1}\boldsymbol{\varepsilon}_N$, i.e., $\mu_{i,N}^{(4)} = E(\eta_{i,N}^4)$. In light of Assumption 1 and the properties of $\mathbf{Q}_{0,N}$ and $\mathbf{Q}_{1,N}$, the row and column sums (and the elements) of $\mathbf{S}_N = \sigma_v\mathbf{Q}_{0,N} + \sigma_1\mathbf{Q}_{1,N}$ are bounded uniformly in absolute value by some finite constant, say $K^{*}$. Without loss of generality we can choose the bound $K$ used above such that $K^{*} \leq K$. Moreover, the row and column sums (and the elements) of $\mathbf{S}_N^{-1} = \sigma_v^{-1}\mathbf{Q}_{0,N} + \sigma_1^{-1}\mathbf{Q}_{1,N}$ are also bounded uniformly in absolute value by some constant $K^{**}$. Without loss of generality we can choose $K$ such that $K^{**} \leq K$.
In light of Remark A.1 and Assumption 1 it follows that the elements of $\boldsymbol{\eta}_N = \mathbf{S}_N^{-1}\boldsymbol{\varepsilon}_N$ have finite fourth moments. Denote their bound by $K^{***}$. Without loss of generality we assume that $K^{***} \leq K$. Hence, we have
$$\mathrm{Var}(\varphi_N) \leq 2N^{-2}\mathrm{Tr}(K\mathbf{I}_{NT}) + N^{-2}\mathrm{Tr}[\mathrm{diag}_{i=1,\ldots,NT}(K^2 K)] \leq 2N^{-1}TK + N^{-1}TK^3 = N^{-1}(2TK + TK^3) = o(1).$$
The claim in part (a) of Lemma B.1 that $N^{-1}(\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N) - N^{-1}E(\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N) = o_p(1)$ now follows from Chebyshev's inequality (see, for example, White, 2001, p. 35).
1
~ A u
~

We now prove the second part of (a), i.e., N 1 (u
N
N N )  N E (u N A N u N )  o p (1) . Since
~
N  E (N )  o p (1) , it suffices to show that N  N  o p (1) . By Assumption 4, we have
~  u  D Δ , where D  (d ,..., d
~  u  D Δ into the
) . Substituting u
u
N
N
N
N
N
1., N
NT ., N
N
N
N
N
~
expression for N in (B.1), we obtain
~
N  N  N 1 (uN  ΔN DN )A N (u N  DN ΔN )  N 1uN A N u N
(B.5)
 N 1 (uN A N u N  ΔN DN A N u N  uN A N D N Δ N  ΔN DN A N D N Δ N  uN A N u N )
 N 1 (ΔN DN A N u N  uN A N D N Δ N  ΔN DN A N D N Δ N )
 N 1 (ΔN DN A N u N  ΔN DN AN u N  ΔN DN A N D N Δ N )
 N 1[ΔN DN ( A N  AN )u N  ΔN DN A N D N Δ N ]
 N   N ,
where
$$\tau_N = N^{-1}[\boldsymbol{\Delta}_N'\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\mathbf{u}_N] = N^{-1}\Big\{\boldsymbol{\Delta}_N'\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \textstyle\sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)^{-1}\Big]\boldsymbol{\varepsilon}_N\Big\} = N^{-1}(\boldsymbol{\Delta}_N'\mathbf{D}_N'\mathbf{C}_N\boldsymbol{\varepsilon}_N), \tag{B.6}$$
with
$$\mathbf{C}_N = (\mathbf{A}_N + \mathbf{A}_N')\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \textstyle\sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)^{-1}\Big] = (\mathbf{c}_{1.,N}', \ldots, \mathbf{c}_{NT.,N}')',$$
$$\kappa_N = N^{-1}\boldsymbol{\Delta}_N'\mathbf{D}_N'\mathbf{A}_N\mathbf{D}_N\boldsymbol{\Delta}_N. \tag{B.7}$$
By Assumption 3 and Remark A.1, the row and column sums of $\mathbf{C}_N$ are bounded uniformly in absolute value. We next prove that $\tau_N = o_p(1)$ and $\kappa_N = o_p(1)$.
Proof that $\tau_N = o_p(1)$:
$$|\tau_N| = N^{-1}|\boldsymbol{\Delta}_N'\mathbf{D}_N'\mathbf{C}_N\boldsymbol{\varepsilon}_N| = N^{-1}\Big|\sum_{i=1}^{NT}\boldsymbol{\Delta}_N'\mathbf{d}_{i.,N}'\mathbf{c}_{i.,N}\boldsymbol{\varepsilon}_N\Big| \leq N^{-1}\sum_{i=1}^{NT}\|\boldsymbol{\Delta}_N\|\,\|\mathbf{d}_{i.,N}\|\,|\mathbf{c}_{i.,N}\boldsymbol{\varepsilon}_N| \tag{B.8}$$
$$\leq N^{-1}\|\boldsymbol{\Delta}_N\|\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|\sum_{j=1}^{NT}|c_{ij,N}|\,|\varepsilon_{j,N}| = N^{-1}\|\boldsymbol{\Delta}_N\|\sum_{j=1}^{NT}|\varepsilon_{j,N}|\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|\,|c_{ij,N}|$$
$$\leq N^{-1}\|\boldsymbol{\Delta}_N\|\sum_{j=1}^{NT}|\varepsilon_{j,N}|\Big(\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|^p\Big)^{1/p}\Big(\sum_{i=1}^{NT}|c_{ij,N}|^q\Big)^{1/q},$$
using Hölder's inequality with $1/p + 1/q = 1$ in the last step. In the following we denote by $K$ the uniform bound for the row and column sums of the absolute values of the elements of $\mathbf{A}_N$ and $\mathbf{C}_N$. From Remark C.1 in Kelejian and Prucha (2008) (see Remark A.1(iv) in Appendix A) it follows that $\sum_{i=1}^{NT}|c_{ij,N}|^q \leq K^q$ and thus $\big(\sum_{i=1}^{NT}|c_{ij,N}|^q\big)^{1/q} \leq K$. Factoring $K$ out of the sum yields
$$|\tau_N| \leq K N^{1/p - 1/2}\big(N^{1/2}\|\boldsymbol{\Delta}_N\|\big)\Big(N^{-1}\sum_{j=1}^{NT}|\varepsilon_{j,N}|\Big)\Big(T(NT)^{-1}\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|^p\Big)^{1/p}.$$
This holds for $p = 2 + \delta$ for some $\delta > 0$ as in Assumption 4 and $1/p + 1/q = 1$. By Assumption 4, $N^{1/2}\|\boldsymbol{\Delta}_N\| = O_p(1)$. Assumption 4 also implies that $\big[(NT)^{-1}\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|^p\big]^{1/p} = O_p(1)$ for $p = 2 + \delta$ and some $\delta > 0$. Moreover, $E|\varepsilon_{j,N}| \leq K < \infty$, which implies that $N^{-1}\sum_{j=1}^{NT}|\varepsilon_{j,N}| = O_p(1)$. Since $N^{1/p - 1/2} \to 0$ as $N \to \infty$, it follows that $\tau_N = o_p(1)$.
Next consider
$$|\kappa_N| = N^{-1}|\boldsymbol{\Delta}_N'\mathbf{D}_N'\mathbf{A}_N\mathbf{D}_N\boldsymbol{\Delta}_N| = N^{-1}\Big|\sum_{i=1}^{NT}\sum_{j=1}^{NT}\boldsymbol{\Delta}_N'\mathbf{d}_{i.,N}'a_{ij,N}\mathbf{d}_{j.,N}\boldsymbol{\Delta}_N\Big| \tag{B.9}$$
$$\leq N^{-1}\|\boldsymbol{\Delta}_N\|^2\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|\sum_{j=1}^{NT}|a_{ij,N}|\,\|\mathbf{d}_{j.,N}\| \leq N^{-1}\|\boldsymbol{\Delta}_N\|^2\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|\Big(\sum_{j=1}^{NT}\|\mathbf{d}_{j.,N}\|^p\Big)^{1/p}\Big(\sum_{j=1}^{NT}|a_{ij,N}|^q\Big)^{1/q}$$
$$\leq K T^{1+1/p} N^{1/p-1}\big(N^{1/2}\|\boldsymbol{\Delta}_N\|\big)^2\Big[(NT)^{-1}\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|^p\Big]^{2/p} = o_p(1),$$
where we used Hölder's inequality twice (once for the sum over $j$ and once, via $\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\| \leq (NT)^{1/q}\big(\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|^p\big)^{1/p}$, for the sum over $i$) as well as $\big(\sum_{j=1}^{NT}|a_{ij,N}|^q\big)^{1/q} \leq K$. The right-hand side is $o_p(1)$, since $N^{1/2}\|\boldsymbol{\Delta}_N\| = O_p(1)$ and $\big[(NT)^{-1}\sum_{i=1}^{NT}\|\mathbf{d}_{i.,N}\|^p\big]^{1/p} = O_p(1)$ by Assumption 4, and $N^{1/p-1} = o(1)$ for $p = 2 + \delta$. From the last inequality we can also see that $N^{1/2}\kappa_N = o_p(1)$, since $N^{1/2}N^{1/p-1} = N^{1/p-1/2} \to 0$ for $p > 2$. Summing up, we have proved that $\tilde{\varphi}_N - \varphi_N = \tau_N + \kappa_N = o_p(1)$.
Proof of part (b)
Denote by $\phi_{s,N}^{*}$ the $s$-th element of $N^{-1}\mathbf{D}_N'\mathbf{A}_N\mathbf{u}_N$. By Assumptions 3 and 4 and Remark A.1 in Appendix A there exists a constant $K < \infty$ such that $E(u_{i,N}^2) \leq K$ and $E|d_{ij,N}|^p \leq K$ with $p = 2 + \delta$ for some $\delta > 0$. Without loss of generality we assume that the row and column sums of the matrices $\mathbf{A}_N$ are bounded uniformly by $K < \infty$. Notice first that
$$E|u_{i,N}d_{js,N}| \leq [E u_{i,N}^2]^{1/2}[E d_{js,N}^2]^{1/2} \leq [E u_{i,N}^2]^{1/2}[E|d_{js,N}|^p]^{1/p} \leq K^{1/2}K^{1/p} = K^{1/2+1/p},$$
with $p$ as before. It follows that
$$E|\phi_{s,N}^{*}| \leq N^{-1}\sum_{i=1}^{NT}\sum_{j=1}^{NT}|a_{ij,N}|\,E|u_{i,N}d_{js,N}| \leq K^{1/2+1/p}N^{-1}\sum_{i=1}^{NT}\sum_{j=1}^{NT}|a_{ij,N}| \leq K^{1/2+1/p}N^{-1}NTK = TK^{3/2+1/p} < \infty, \tag{B.10}$$
which shows that indeed $E|N^{-1}\mathbf{d}_{.s,N}'\mathbf{A}_N\mathbf{u}_N| = O(1)$. Of course, the argument also shows that $\boldsymbol{\alpha}_N = N^{-1}E[\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\mathbf{u}_N] = O(1)$.
It is readily verified that $\mathrm{Var}(\phi_{s,N}^{*}) = o(1)$, such that we have $\phi_{s,N}^{*} - E(\phi_{s,N}^{*}) = o_p(1)$. Next observe that
$$N^{-1}\mathbf{D}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N = N^{-1}\mathbf{D}_N'\mathbf{A}_N\mathbf{u}_N + \boldsymbol{\kappa}_N^{*}, \quad \text{where} \quad \boldsymbol{\kappa}_N^{*} = N^{-1}\mathbf{D}_N'\mathbf{A}_N\mathbf{D}_N\boldsymbol{\Delta}_N. \tag{B.11}$$
By arguments analogous to the proof that $\tau_N = N^{-1}[\boldsymbol{\Delta}_N'\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\mathbf{u}_N] = o_p(1)$, it follows that $\boldsymbol{\kappa}_N^{*} = o_p(1)$. Denoting by $\tilde{\phi}_{s,N}^{*}$ the $s$-th element of $N^{-1}\mathbf{D}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N$, we hence have $\tilde{\phi}_{s,N}^{*} - \phi_{s,N}^{*} = o_p(1)$, and thus $\tilde{\phi}_{s,N}^{*} - E(\phi_{s,N}^{*}) = o_p(1)$, which also shows that $N^{-1}\mathbf{D}_N'(\mathbf{A}_N + \mathbf{A}_N')\tilde{\mathbf{u}}_N - \boldsymbol{\alpha}_N = o_p(1)$.
Proof of part (c)
In light of the proof of part (a),
$$N^{-1/2}\tilde{\mathbf{u}}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N = N^{-1/2}\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N + [N^{-1}\mathbf{u}_N'(\mathbf{A}_N + \mathbf{A}_N')\mathbf{D}_N]N^{1/2}\boldsymbol{\Delta}_N + N^{1/2}\kappa_N, \tag{B.12}$$
where $N^{1/2}\kappa_N = o_p(1)$ as shown above. In light of (b), and since $N^{1/2}\boldsymbol{\Delta}_N = O_p(1)$ by Assumption 4, we have
$$N^{-1/2}\tilde{\mathbf{u}}_N'\mathbf{A}_N\tilde{\mathbf{u}}_N = N^{-1/2}\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N + \boldsymbol{\alpha}_N'N^{1/2}\boldsymbol{\Delta}_N + o_p(1). \tag{B.13}$$
Proof of Theorem 1a. Consistency of the Initial GM Estimator $\tilde{\boldsymbol{\theta}}_N^0$
The objective function of the nonlinear least squares estimator in (17a) and its nonstochastic counterpart are given by
$$\tilde{R}_N^0(\boldsymbol{\theta}^0) = (\tilde{\boldsymbol{\gamma}}_N^0 - \tilde{\boldsymbol{\Gamma}}_N^0\mathbf{b}^0)'(\tilde{\boldsymbol{\gamma}}_N^0 - \tilde{\boldsymbol{\Gamma}}_N^0\mathbf{b}^0) \quad \text{and} \tag{B.14a}$$
$$R_N^0(\boldsymbol{\theta}^0) = (\boldsymbol{\gamma}_N^0 - \boldsymbol{\Gamma}_N^0\mathbf{b}^0)'(\boldsymbol{\gamma}_N^0 - \boldsymbol{\Gamma}_N^0\mathbf{b}^0). \tag{B.14b}$$
Since $\boldsymbol{\gamma}_N^0 - \boldsymbol{\Gamma}_N^0\mathbf{b}_N^0 = \mathbf{0}$, we have $R_N^0(\boldsymbol{\theta}_N^0) = 0$, i.e., $R_N^0(\boldsymbol{\theta}^0) = 0$ at the true parameter vector $\boldsymbol{\theta}_N^0 = (\rho_1, \ldots, \rho_S, \sigma_v^2)'$. Hence,
$$R_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}_N^0) = (\mathbf{b}^0 - \mathbf{b}_N^0)'\boldsymbol{\Gamma}_N^{0\prime}\boldsymbol{\Gamma}_N^0(\mathbf{b}^0 - \mathbf{b}_N^0). \tag{B.15}$$
In light of Rao (1973, p. 62) and Assumption 5, it follows that
$$R_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}_N^0) \geq \lambda_{\min}(\boldsymbol{\Gamma}_N^{0\prime}\boldsymbol{\Gamma}_N^0)(\mathbf{b}^0 - \mathbf{b}_N^0)'(\mathbf{b}^0 - \mathbf{b}_N^0) \geq \lambda_*(\mathbf{b}^0 - \mathbf{b}_N^0)'(\mathbf{b}^0 - \mathbf{b}_N^0).$$
By the properties of the norm $\|\mathbf{A}\| = [\mathrm{tr}(\mathbf{A}'\mathbf{A})]^{1/2}$, we have $\|\boldsymbol{\theta}^0 - \boldsymbol{\theta}_N^0\|^2 \leq (\mathbf{b}^0 - \mathbf{b}_N^0)'(\mathbf{b}^0 - \mathbf{b}_N^0)$, such that $R_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}_N^0) \geq \lambda_*\|\boldsymbol{\theta}^0 - \boldsymbol{\theta}_N^0\|^2$. Hence, for every $\epsilon > 0$,
$$\liminf_{N \to \infty}\;\inf_{\{\boldsymbol{\theta}^0:\,\|\boldsymbol{\theta}^0 - \boldsymbol{\theta}_N^0\| \geq \epsilon\}}[R_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}_N^0)] \geq \inf_{\{\boldsymbol{\theta}^0:\,\|\boldsymbol{\theta}^0 - \boldsymbol{\theta}_N^0\| \geq \epsilon\}}\lambda_*\|\boldsymbol{\theta}^0 - \boldsymbol{\theta}_N^0\|^2 \geq \lambda_*\epsilon^2 > 0, \tag{B.16}$$
which proves that the true parameter vector $\boldsymbol{\theta}_N^0$ is identifiably unique.
Next, let $\mathbf{F}_N^0 = (\tilde{\boldsymbol{\gamma}}_N^0, \tilde{\boldsymbol{\Gamma}}_N^0)$ and $\boldsymbol{\Phi}_N^0 = (\boldsymbol{\gamma}_N^0, \boldsymbol{\Gamma}_N^0)$. The objective function and its nonstochastic counterpart can then be written as
$$\tilde{R}_N^0(\boldsymbol{\theta}^0) = (1, -\mathbf{b}^{0\prime})\mathbf{F}_N^{0\prime}\mathbf{F}_N^0(1, -\mathbf{b}^{0\prime})' \quad \text{and} \quad R_N^0(\boldsymbol{\theta}^0) = (1, -\mathbf{b}^{0\prime})\boldsymbol{\Phi}_N^{0\prime}\boldsymbol{\Phi}_N^0(1, -\mathbf{b}^{0\prime})'.$$
Hence, for $\boldsymbol{\rho} \in [-a^\rho, a^\rho]$⁵ and $\sigma_v^2 \in [0, b]$ it holds that
$$|\tilde{R}_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}^0)| = |(1, -\mathbf{b}^{0\prime})(\mathbf{F}_N^{0\prime}\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^{0\prime}\boldsymbol{\Phi}_N^0)(1, -\mathbf{b}^{0\prime})'|.$$
Moreover, since the norm $\|\cdot\|$ is submultiplicative, i.e., $\|\mathbf{A}\mathbf{B}\| \leq \|\mathbf{A}\|\,\|\mathbf{B}\|$, we have
$$|\tilde{R}_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}^0)| \leq \|\mathbf{F}_N^{0\prime}\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^{0\prime}\boldsymbol{\Phi}_N^0\|\,\|(1, -\mathbf{b}^{0\prime})\|^2 \leq \|\mathbf{F}_N^{0\prime}\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^{0\prime}\boldsymbol{\Phi}_N^0\|\Big[1 + S(a^\rho)^2 + \frac{2S + S(S-1)}{2}(a^\rho)^4 + b^2\Big].$$
It is readily observed from (16) that the elements of $\boldsymbol{\gamma}_N^0$ and $\boldsymbol{\Gamma}_N^0$ are all of the form $\mathbf{u}_N'\mathbf{A}_N\mathbf{u}_N$, where the $\mathbf{A}_N$ are nonstochastic $NT \times NT$ matrices (with $T$ fixed), whose row and column sums are bounded uniformly in absolute value. In light of Lemma B.1, the elements of $\boldsymbol{\Phi}_N^0$ are $O(1)$, and it follows that $\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^0 \xrightarrow{p} 0$ and $\mathbf{F}_N^{0\prime}\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^{0\prime}\boldsymbol{\Phi}_N^0 \xrightarrow{p} 0$ as $N \to \infty$. As a consequence, we have (for finite $S$)
$$\sup_{\boldsymbol{\rho}\in[-a^\rho, a^\rho],\,\sigma_v^2\in[0,b]}|\tilde{R}_N^0(\boldsymbol{\theta}^0) - R_N^0(\boldsymbol{\theta}^0)| \leq \|\mathbf{F}_N^{0\prime}\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^{0\prime}\boldsymbol{\Phi}_N^0\|\Big[1 + S(a^\rho)^2 + \frac{2S + S(S-1)}{2}(a^\rho)^4 + b^2\Big] \xrightarrow{p} 0 \text{ as } N \to \infty. \tag{B.17}$$
Together with identifiable uniqueness, the consistency of $\tilde{\boldsymbol{\theta}}_N^0 = (\tilde{\rho}_{1,N}, \ldots, \tilde{\rho}_{S,N}, \tilde{\sigma}_{v,N}^2)'$ now follows directly from Lemma 3.1 in Pötscher and Prucha (1997).
Having proved that the estimators $\tilde{\rho}_{1,N}, \ldots, \tilde{\rho}_{S,N}, \tilde{\sigma}_{v,N}^2$ are consistent for $\rho_{1,N}, \ldots, \rho_{S,N}, \sigma_v^2$, we now show that $\sigma_1^2$ can be estimated consistently from the last line (row $4S+2$) of equation system (12), using
$$\tilde{\sigma}_{1,N}^2 = \tilde{\gamma}_{4S+2,N} - \tilde{\gamma}_{4S+2,1,N}\tilde{\rho}_{1,N} - \ldots - \tilde{\gamma}_{4S+2,S,N}\tilde{\rho}_{S,N} - \tilde{\gamma}_{4S+2,S+1,N}\tilde{\rho}_{1,N}^2 - \ldots - \tilde{\gamma}_{4S+2,2S,N}\tilde{\rho}_{S,N}^2 - \tilde{\gamma}_{4S+2,2S+1,N}\tilde{\rho}_{1,N}\tilde{\rho}_{2,N} - \ldots - \tilde{\gamma}_{4S+2,2S+S(S-1)/2,N}\tilde{\rho}_{S-1,N}\tilde{\rho}_{S,N}. \tag{B.18}$$

⁵ This should be read as $\rho_s \in [-a^\rho, a^\rho]$ for all $s = 1, \ldots, S$.

Since $\boldsymbol{\gamma}_N - \boldsymbol{\Gamma}_N\mathbf{b}_N = \mathbf{0}$, we have
$$\tilde{\sigma}_{1,N}^2 - \sigma_1^2 = (\tilde{\gamma}_{4S+2,N} - \gamma_{4S+2,N}) - (\tilde{\gamma}_{4S+2,1,N} - \gamma_{4S+2,1,N})\tilde{\rho}_{1,N} - \ldots - (\tilde{\gamma}_{4S+2,S,N} - \gamma_{4S+2,S,N})\tilde{\rho}_{S,N}$$
$$- (\tilde{\gamma}_{4S+2,S+1,N} - \gamma_{4S+2,S+1,N})\tilde{\rho}_{1,N}^2 - \ldots - (\tilde{\gamma}_{4S+2,2S,N} - \gamma_{4S+2,2S,N})\tilde{\rho}_{S,N}^2$$
$$- (\tilde{\gamma}_{4S+2,2S+1,N} - \gamma_{4S+2,2S+1,N})\tilde{\rho}_{1,N}\tilde{\rho}_{2,N} - \ldots - (\tilde{\gamma}_{4S+2,2S+S(S-1)/2,N} - \gamma_{4S+2,2S+S(S-1)/2,N})\tilde{\rho}_{S-1,N}\tilde{\rho}_{S,N}$$
$$- \gamma_{4S+2,1,N}(\tilde{\rho}_{1,N} - \rho_{1,N}) - \ldots - \gamma_{4S+2,S,N}(\tilde{\rho}_{S,N} - \rho_{S,N}) - \gamma_{4S+2,S+1,N}(\tilde{\rho}_{1,N}^2 - \rho_{1,N}^2) - \ldots - \gamma_{4S+2,2S,N}(\tilde{\rho}_{S,N}^2 - \rho_{S,N}^2)$$
$$- \gamma_{4S+2,2S+1,N}(\tilde{\rho}_{1,N}\tilde{\rho}_{2,N} - \rho_{1,N}\rho_{2,N}) - \ldots - \gamma_{4S+2,2S+S(S-1)/2,N}(\tilde{\rho}_{S-1,N}\tilde{\rho}_{S,N} - \rho_{S-1,N}\rho_{S,N}). \tag{B.19}$$
Since $\mathbf{F}_N^0 - \boldsymbol{\Phi}_N^0 \xrightarrow{p} 0$ as $N \to \infty$ and the elements of $\boldsymbol{\Phi}_N$ are $O(1)$, it follows from the consistency of $\tilde{\rho}_{1,N}, \ldots, \tilde{\rho}_{S,N}$ that $\tilde{\sigma}_{1,N}^2 - \sigma_1^2 \xrightarrow{p} 0$ as $N \to \infty$.
Proof of Theorem 1b. Consistency of the Weighted GM Estimator
The objective function of the weighted GM estimator and its nonstochastic counterpart are given by
$$\tilde{R}_N(\boldsymbol{\theta}) = (\tilde{\boldsymbol{\gamma}}_N - \tilde{\boldsymbol{\Gamma}}_N\mathbf{b})'\boldsymbol{\Theta}_N(\tilde{\boldsymbol{\gamma}}_N - \tilde{\boldsymbol{\Gamma}}_N\mathbf{b}) \quad \text{and} \tag{B.20a}$$
$$R_N(\boldsymbol{\theta}) = (\boldsymbol{\gamma}_N - \boldsymbol{\Gamma}_N\mathbf{b})'\boldsymbol{\Theta}_N(\boldsymbol{\gamma}_N - \boldsymbol{\Gamma}_N\mathbf{b}). \tag{B.20b}$$
First, in order to ensure identifiable uniqueness, we show that Assumption 5 also implies that the smallest eigenvalue of $\boldsymbol{\Gamma}_N'\boldsymbol{\Theta}_N\boldsymbol{\Gamma}_N$ is bounded away from zero, i.e.,
$$\lambda_{\min}(\boldsymbol{\Gamma}_N'\boldsymbol{\Theta}_N\boldsymbol{\Gamma}_N) \geq \lambda_0 \quad \text{for some } \lambda_0 > 0. \tag{B.21}$$
Let $\mathbf{A} = (a_{ij}) = \boldsymbol{\Gamma}_N^{0\prime}\boldsymbol{\Gamma}_N^0$ and $\mathbf{B} = (b_{ij}) = \boldsymbol{\Gamma}_N^{1\prime}\boldsymbol{\Gamma}_N^1$. Note that $\boldsymbol{\Gamma}_N^0$ and $\boldsymbol{\Gamma}_N^1$ are of dimension $(2S+1) \times [2S + S(S-1)/2 + 1]$ (i.e., they have half the rows and one column less than $\boldsymbol{\Gamma}_N$). $\mathbf{A}$ and $\mathbf{B}$ are of order $[2S + S(S-1)/2 + 1] \times [2S + S(S-1)/2 + 1]$ (i.e., they have one row and column less than $\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N$).
Next define $\boldsymbol{\Gamma}_N^{+} = (\boldsymbol{\Gamma}_N^{0+\prime}, \boldsymbol{\Gamma}_N^{1+\prime})'$, which differs from $\boldsymbol{\Gamma}_N$ only by the ordering of the rows. $\boldsymbol{\Gamma}_N^{0+}$ corresponds to $\boldsymbol{\Gamma}_N^0$ with a zero column appended as last column, i.e., $\boldsymbol{\Gamma}_N^{0+} = (\boldsymbol{\Gamma}_N^0, \mathbf{0})$, such that
$$\boldsymbol{\Gamma}_N^{0+\prime}\boldsymbol{\Gamma}_N^{0+} = \begin{pmatrix} \boldsymbol{\Gamma}_N^{0\prime}\boldsymbol{\Gamma}_N^{0} & \mathbf{0} \\ \mathbf{0}' & 0 \end{pmatrix} = \begin{pmatrix} a_{1,1} & \cdots & a_{1,\,2S+S(S-1)/2+1} & 0 \\ \vdots & & \vdots & \vdots \\ a_{2S+S(S-1)/2+1,\,1} & \cdots & a_{2S+S(S-1)/2+1,\,2S+S(S-1)/2+1} & 0 \\ 0 & \cdots & 0 & 0 \end{pmatrix}. \tag{B.22a}$$
($\boldsymbol{\Gamma}_N^{0+\prime}\boldsymbol{\Gamma}_N^{0+}$ is of the same dimension as $\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N$, i.e., $[2S + S(S-1)/2 + 2] \times [2S + S(S-1)/2 + 2]$.)
$\boldsymbol{\Gamma}_N^{1+}$ is a modified version of $\boldsymbol{\Gamma}_N^1$, with a zero column included as second-last column, such that
$$\boldsymbol{\Gamma}_N^{1+\prime}\boldsymbol{\Gamma}_N^{1+} = \begin{pmatrix} b_{1,1} & \cdots & 0 & b_{1,\,2S+S(S-1)/2+1} \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \\ b_{2S+S(S-1)/2+1,\,1} & \cdots & 0 & b_{2S+S(S-1)/2+1,\,2S+S(S-1)/2+1} \end{pmatrix}. \tag{B.22b}$$
($\boldsymbol{\Gamma}_N^{1+\prime}\boldsymbol{\Gamma}_N^{1+}$ is of the same dimension as $\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N$, i.e., $[2S + S(S-1)/2 + 2] \times [2S + S(S-1)/2 + 2]$; its second-last row and column are zero.)
Since $\boldsymbol{\Gamma}_N^{+} = (\boldsymbol{\Gamma}_N^{0+\prime}, \boldsymbol{\Gamma}_N^{1+\prime})'$ differs from $\boldsymbol{\Gamma}_N$ only by the ordering of the rows, it follows that
$$\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N = \boldsymbol{\Gamma}_N^{+\prime}\boldsymbol{\Gamma}_N^{+} = \boldsymbol{\Gamma}_N^{0+\prime}\boldsymbol{\Gamma}_N^{0+} + \boldsymbol{\Gamma}_N^{1+\prime}\boldsymbol{\Gamma}_N^{1+}, \tag{B.23}$$
i.e., $\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N$ is the sum of the two bordered matrices displayed in (B.22a) and (B.22b). We can thus write
$$\mathbf{x}'\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N\mathbf{x} = \mathbf{x}'\boldsymbol{\Gamma}_N^{0+\prime}\boldsymbol{\Gamma}_N^{0+}\mathbf{x} + \mathbf{x}'\boldsymbol{\Gamma}_N^{1+\prime}\boldsymbol{\Gamma}_N^{1+}\mathbf{x} = \mathbf{x}_A'\mathbf{A}\mathbf{x}_A + \mathbf{x}_B'\mathbf{B}\mathbf{x}_B. \tag{B.24}$$
The vector $\mathbf{x}$ is of dimension $[2S + S(S-1)/2 + 2] \times 1$ (corresponding to the number of columns of $\boldsymbol{\Gamma}_N$), whereas $\mathbf{x}_A$ and $\mathbf{x}_B$ are of dimension $[2S + S(S-1)/2 + 1] \times 1$, i.e., both have one row less: $\mathbf{x}_A$ excludes the last element of $\mathbf{x}$, i.e., $x_{2S+S(S-1)/2+2}$, and $\mathbf{x}_B$ excludes the second-last element of $\mathbf{x}$, i.e., $x_{2S+S(S-1)/2+1}$.
Again, we invoke Rao (1973, p. 62) for each quadratic form. It follows that
$$\mathbf{x}_A'\mathbf{A}\mathbf{x}_A + \mathbf{x}_B'\mathbf{B}\mathbf{x}_B \geq \lambda_{\min}(\mathbf{A})\mathbf{x}_A'\mathbf{x}_A + \lambda_{\min}(\mathbf{B})\mathbf{x}_B'\mathbf{x}_B \geq \lambda_*(\mathbf{x}_A'\mathbf{x}_A + \mathbf{x}_B'\mathbf{x}_B) \geq \lambda_*\mathbf{x}'\mathbf{x} \tag{B.25}$$
for any $\mathbf{x} = (x_1, \ldots, x_{2S+S(S-1)/2+2})'$.
Hence, we have shown that $\mathbf{x}'\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N\mathbf{x} \geq \lambda_*\mathbf{x}'\mathbf{x}$, or, equivalently,
$$\frac{\mathbf{x}'\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N\mathbf{x}}{\mathbf{x}'\mathbf{x}} \geq \lambda_* \quad \text{for } \mathbf{x} \neq \mathbf{0}. \tag{B.26}$$
Next, note that in light of Rao (1973, p. 62),
$$\lambda_{\min}(\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N) = \inf_{\mathbf{x}}\frac{\mathbf{x}'\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N\mathbf{x}}{\mathbf{x}'\mathbf{x}} \geq \lambda_* > 0. \tag{B.27}$$
Using Mittelhammer (1996, p. 254), we have
$$\lambda_{\min}(\boldsymbol{\Gamma}_N'\boldsymbol{\Theta}_N\boldsymbol{\Gamma}_N) = \inf_{\mathbf{x}}\frac{\mathbf{x}'\boldsymbol{\Gamma}_N'\boldsymbol{\Theta}_N\boldsymbol{\Gamma}_N\mathbf{x}}{\mathbf{x}'\mathbf{x}} \geq \lambda_{\min}(\boldsymbol{\Theta}_N)\inf_{\mathbf{x}}\frac{\mathbf{x}'\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N\mathbf{x}}{\mathbf{x}'\mathbf{x}} \geq \lambda_{\min}(\boldsymbol{\Theta}_N)\lambda_{\min}(\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N) \geq \lambda_0 > 0, \tag{B.28}$$
with $\lambda_0 = \lambda_*\lambda_{**}$, since $\lambda_{\min}(\boldsymbol{\Theta}_N) \geq \lambda_{**} > 0$ by assumption (see Theorem 2).
This ensures that the true parameter vector $\boldsymbol{\theta}_N = (\rho_{1,N}, \ldots, \rho_{S,N}, \sigma_v^2, \sigma_1^2)'$ is identifiably unique.
Next note that, in light of the assumptions in Theorem 2, $\|\boldsymbol{\Theta}_N\|$ is $O(1)$ by the equivalence of matrix norms.
Analogous to the proof of Theorem 1a, observe that $R_N(\boldsymbol{\theta}_N) = 0$, i.e., $R_N(\boldsymbol{\theta}) = 0$ at the true parameter vector $\boldsymbol{\theta}_N = (\rho_{1,N}, \ldots, \rho_{S,N}, \sigma_v^2, \sigma_1^2)'$. It follows that
$$R_N(\boldsymbol{\theta}) - R_N(\boldsymbol{\theta}_N) = (\mathbf{b} - \mathbf{b}_N)'\boldsymbol{\Gamma}_N'\boldsymbol{\Theta}_N\boldsymbol{\Gamma}_N(\mathbf{b} - \mathbf{b}_N). \tag{B.29}$$
Moreover, let $\mathbf{F}_N = (\tilde{\boldsymbol{\gamma}}_N, \tilde{\boldsymbol{\Gamma}}_N)$ and $\boldsymbol{\Phi}_N = (\boldsymbol{\gamma}_N, \boldsymbol{\Gamma}_N)$; then
$$\tilde{R}_N(\boldsymbol{\theta}) = (1, -\mathbf{b}')\mathbf{F}_N'\boldsymbol{\Theta}_N\mathbf{F}_N(1, -\mathbf{b}')' \quad \text{and} \tag{B.30a}$$
$$R_N(\boldsymbol{\theta}) = (1, -\mathbf{b}')\boldsymbol{\Phi}_N'\boldsymbol{\Theta}_N\boldsymbol{\Phi}_N(1, -\mathbf{b}')'. \tag{B.30b}$$
The remainder of the proof is now analogous to that of Theorem 1a.
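Schematically, the weighted GM estimator minimizes a weighted quadratic criterion in the moment conditions, and the proof rests on the criterion being zero at the true parameter and bounded away from zero elsewhere. The following toy sketch illustrates that identification logic; all quantities are hypothetical stand-ins for $\tilde{\boldsymbol{\gamma}}_N$, $\tilde{\boldsymbol{\Gamma}}_N$, $\boldsymbol{\Theta}_N$, and the nonlinear map $\mathbf{b}(\boldsymbol{\theta})$ is reduced to $\mathbf{b}(\rho) = (\rho, \rho^2)'$:

```python
import numpy as np

def gm_objective(rho, gamma, Gamma, Theta):
    """Weighted GM criterion R = (gamma - Gamma b)' Theta (gamma - Gamma b),
    with b(rho) = (rho, rho^2)' as a toy stand-in for the nonlinear b(theta)."""
    b = np.array([rho, rho**2])
    r = gamma - Gamma @ b
    return r @ Theta @ r

# Hypothetical "moments" constructed so that the true rho is 0.4.
rho_true = 0.4
Gamma = np.array([[1.0, 0.3], [0.2, 1.0], [0.5, 0.5]])   # full column rank
gamma = Gamma @ np.array([rho_true, rho_true**2])
Theta = np.diag([1.0, 2.0, 0.5])                         # positive definite weights

# Criterion is zero at the true parameter and positive elsewhere,
# so a simple grid search recovers rho_true.
grid = np.linspace(-0.9, 0.9, 181)
vals = [gm_objective(r, gamma, Gamma, Theta) for r in grid]
rho_hat = grid[int(np.argmin(vals))]
assert abs(gm_objective(rho_true, gamma, Gamma, Theta)) < 1e-12
assert abs(rho_hat - rho_true) < 0.011
```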
Proof of Theorem 2. Asymptotic Normality of $\tilde{\boldsymbol{\theta}}_N$
To derive the asymptotic distribution of the vector $\mathbf{q}_N$ defined in (30), we invoke the central limit theorem for vectors of linear quadratic forms given by Kelejian and Prucha (2010, Theorem A.1), which is an extension of the central limit theorem for a single linear quadratic form by Kelejian and Prucha (2001, Theorem 1). The vector of quadratic forms in the present context, to which the theorem is applied, is $\mathbf{q}_N^*$. The variance-covariance matrix of $\mathbf{q}_N$ was derived above and is denoted $\boldsymbol{\Psi}_N$. Accordingly, the variance-covariance matrix of $\mathbf{q}_N^* = N^{1/2}\mathbf{q}_N$ is given by $\boldsymbol{\Psi}_N^* = N\boldsymbol{\Psi}_N$ and $(\boldsymbol{\Psi}_N^*)^{-1/2} = N^{-1/2}\boldsymbol{\Psi}_N^{-1/2}$.
Note that in light of Assumptions 1, 2, and 7 (and Lemma B.1), the stacked innovations $\boldsymbol{\xi}_N$, the matrices $\mathbf{A}_{1,s,N}, \ldots, \mathbf{A}_{4,s,N}$, $s = 1, \ldots, S$, $\mathbf{A}_{a,N}$, and $\mathbf{A}_{b,N}$, and the vectors $\mathbf{a}_{1,s,N}, \ldots, \mathbf{a}_{4,s,N}$, $s = 1, \ldots, S$, $\mathbf{a}_{a,N}$, and $\mathbf{a}_{b,N}$ satisfy the assumptions of the central limit theorem by Kelejian and Prucha (2010, Theorem A.1). In the application of Theorem A.1, note that the sample size is given by $NT = N + N(T-1)$ rather than $N$. As Kelejian and Prucha (2001, p. 227, fn. 13) point out, Theorem A.1 "also holds if the sample size is taken to be $k_n$ rather than $n$ (with $k_n \to \infty$ as $n \to \infty$)." In the present case we have $k_N = N + (T-1)N$, with $T \geq 1$ and fixed, which ensures that $k_N \to \infty$ as $N \to \infty$. Consequently, Theorem A.1 still applies to each quadratic form in $\mathbf{q}_N^*$. Moreover, as can be observed from the proof of Theorem A.1 in Kelejian and Prucha (2010), the extension of the theorem from a scalar to a vector of quadratic forms holds up under this alternative definition of the sample size.
It follows that
$$(\boldsymbol{\Psi}_N^*)^{-1/2}\mathbf{q}_N^* = N^{-1/2}\boldsymbol{\Psi}_N^{-1/2}\mathbf{q}_N^* = \boldsymbol{\Psi}_N^{-1/2}\mathbf{q}_N \xrightarrow{d} N(\mathbf{0}, \mathbf{I}_{4S+2}), \tag{B.31}$$
since $N^{-1}\lambda_{\min}(\boldsymbol{\Psi}_N^*) = N^{-1}\lambda_{\min}(N\boldsymbol{\Psi}_N) = \lambda_{\min}(\boldsymbol{\Psi}_N) > 0$ by assumption, as required in Theorem A.1.
Since the row and column sums of the matrices $\mathbf{A}_{1,s,N}, \ldots, \mathbf{A}_{4,s,N}$, $s = 1, \ldots, S$, $\mathbf{A}_{a,N}$, and $\mathbf{A}_{b,N}$, and the vectors $\mathbf{a}_{1,s,N}, \ldots, \mathbf{a}_{4,s,N}$, $s = 1, \ldots, S$, $\mathbf{a}_{a,N}$, and $\mathbf{a}_{b,N}$, and the variances $\sigma_{v,N}^2$ and $\sigma_{\mu,N}^2$ are bounded uniformly in absolute value, it follows in light of (38) that the elements of $\boldsymbol{\Psi}_N$, and also those of $\boldsymbol{\Psi}_N^{1/2}$, are bounded uniformly in absolute value.
We next turn to the derivation of the limiting distribution of the GM estimator $\tilde{\boldsymbol{\theta}}_N$. In Theorem 1 we showed that the GM estimator $\tilde{\boldsymbol{\theta}}_N$ defined by (18) is consistent. It follows that, apart from a set of the sample space whose probability tends to zero, the estimator satisfies the first-order condition⁶
$$\frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N\mathbf{q}_N(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N) = \mathbf{0}, \tag{B.32}$$
which is an $(S+2) \times 1$ vector, the rows corresponding to the partial derivatives of the criterion function with respect to $\rho_{s,N}$, $s = 1, \ldots, S$, $\sigma_v^2$, and $\sigma_1^2$.
Substituting the mean value theorem expression
$$\mathbf{q}_N(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N) = \mathbf{q}_N(\boldsymbol{\theta}_N, \tilde{\boldsymbol{\Delta}}_N) + \frac{\partial \mathbf{q}_N(\bar{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}'}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N), \tag{B.33}$$
where $\bar{\boldsymbol{\theta}}_N$ is some between value, into the first-order condition yields
$$\frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N\frac{\partial \mathbf{q}_N(\bar{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}'}N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) = -\frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N N^{1/2}\mathbf{q}_N(\boldsymbol{\theta}_N, \tilde{\boldsymbol{\Delta}}_N). \tag{B.34}$$
Observe that
$$\frac{\partial \mathbf{q}_N(\boldsymbol{\theta}, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}'} = -\tilde{\boldsymbol{\Gamma}}_N\mathbf{B}_N$$
and consider the two $(S+2) \times (S+2)$ matrices
$$\tilde{\boldsymbol{\Xi}}_N = \frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N\frac{\partial \mathbf{q}_N(\bar{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}'} = \tilde{\mathbf{B}}_N'\tilde{\boldsymbol{\Gamma}}_N'\tilde{\boldsymbol{\Theta}}_N\tilde{\boldsymbol{\Gamma}}_N\bar{\mathbf{B}}_N, \tag{B.35}$$
$$\boldsymbol{\Xi}_N = \mathbf{B}_N'\boldsymbol{\Gamma}_N'\boldsymbol{\Theta}_N\boldsymbol{\Gamma}_N\mathbf{B}_N, \tag{B.36}$$
where $\tilde{\mathbf{B}}_N$ and $\bar{\mathbf{B}}_N$ correspond to $\mathbf{B}_N$ as defined above with $\tilde{\boldsymbol{\theta}}_N$ and $\bar{\boldsymbol{\theta}}_N$ substituted for $\boldsymbol{\theta}_N$. Notice that $\boldsymbol{\Xi}_N$ is positive definite, since $\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N$ and $\boldsymbol{\Theta}_N$ are positive definite by assumption and the $[2S + S(S-1)/2 + 2] \times (S+2)$ matrix $\mathbf{B}_N$ has full column rank.

⁶ The leading factor two and the negative sign are ignored without further consequences for the proof.
In the proof of Theorem 1 (and Lemma B.1) we have demonstrated that $\tilde{\boldsymbol{\Gamma}}_N - \boldsymbol{\Gamma}_N \xrightarrow{p} 0$ and that the elements of $\boldsymbol{\Gamma}_N$ and $\tilde{\boldsymbol{\Gamma}}_N$ are $O(1)$ and $O_p(1)$, respectively. By Assumption 5, $\tilde{\boldsymbol{\Theta}}_N - \boldsymbol{\Theta}_N = o_p(1)$, $\boldsymbol{\Theta}_N = O(1)$, and $\tilde{\boldsymbol{\Theta}}_N = O_p(1)$. Since $\tilde{\boldsymbol{\rho}}_N$ and $\bar{\boldsymbol{\rho}}_N$ (and thus also $\tilde{\mathbf{B}}_N$ and $\bar{\mathbf{B}}_N$) are consistent and bounded uniformly in probability, it follows that $\tilde{\boldsymbol{\Xi}}_N - \boldsymbol{\Xi}_N = o_p(1)$, $\tilde{\boldsymbol{\Xi}}_N = O_p(1)$, and $\boldsymbol{\Xi}_N = O(1)$. Moreover, $\boldsymbol{\Xi}_N$ is positive definite and thus invertible, and its inverse $\boldsymbol{\Xi}_N^{-1}$ is also $O(1)$.
Denote by $\tilde{\boldsymbol{\Xi}}_N^{+}$ the generalized inverse of $\tilde{\boldsymbol{\Xi}}_N$. It then follows as a special case of Lemma F1 in Pötscher and Prucha (1997) that $\tilde{\boldsymbol{\Xi}}_N$ is nonsingular with probability approaching 1 as $N \to \infty$, that $\tilde{\boldsymbol{\Xi}}_N^{+}$ is $O_p(1)$, and that $\tilde{\boldsymbol{\Xi}}_N^{+} - \boldsymbol{\Xi}_N^{-1} = o_p(1)$.
Pre-multiplying (B.34) with $\tilde{\boldsymbol{\Xi}}_N^{+}$ we obtain, after rearranging terms,
$$N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) = (\mathbf{I}_{S+2} - \tilde{\boldsymbol{\Xi}}_N^{+}\tilde{\boldsymbol{\Xi}}_N)N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) - N^{1/2}\tilde{\boldsymbol{\Xi}}_N^{+}\frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N\mathbf{q}_N(\boldsymbol{\theta}_N, \tilde{\boldsymbol{\Delta}}_N). \tag{B.37}$$
In light of the discussion above, the first term on the right-hand side is zero on $\omega$-sets of probability approaching 1 (compare Pötscher and Prucha, 1997, pp. 228). This yields
$$N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) = -\tilde{\boldsymbol{\Xi}}_N^{+}\frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N N^{1/2}\mathbf{q}_N(\boldsymbol{\theta}_N, \tilde{\boldsymbol{\Delta}}_N) + o_p(1). \tag{B.38}$$
Next observe that
$$\tilde{\boldsymbol{\Xi}}_N^{+}\frac{\partial \mathbf{q}_N'(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)}{\partial \boldsymbol{\theta}}\tilde{\boldsymbol{\Theta}}_N = -\boldsymbol{\Xi}_N^{-1}\mathbf{J}_N'\boldsymbol{\Theta}_N + o_p(1), \quad \text{with } \mathbf{J}_N = \boldsymbol{\Gamma}_N\mathbf{B}_N, \tag{B.39}$$
since $\tilde{\boldsymbol{\Xi}}_N^{+} - \boldsymbol{\Xi}_N^{-1} = o_p(1)$ and $\partial \mathbf{q}_N(\tilde{\boldsymbol{\theta}}_N, \tilde{\boldsymbol{\Delta}}_N)/\partial \boldsymbol{\theta}' = -\boldsymbol{\Gamma}_N\mathbf{B}_N + o_p(1)$.
As we showed in Section III, the elements of $N^{1/2}\mathbf{q}_N(\boldsymbol{\theta}_N, \tilde{\boldsymbol{\Delta}}_N)$ can be expressed as
$$N^{1/2}\mathbf{q}_N(\boldsymbol{\theta}_N, \tilde{\boldsymbol{\Delta}}_N) = \mathbf{q}_N^* + o_p(1), \tag{B.40}$$
where $\mathbf{q}_N^*$ is defined in (27), and that
$$(\boldsymbol{\Psi}_N^*)^{-1/2}\mathbf{q}_N^* = N^{-1/2}\boldsymbol{\Psi}_N^{-1/2}\mathbf{q}_N^* = \boldsymbol{\Psi}_N^{-1/2}\mathbf{q}_N \xrightarrow{d} N(\mathbf{0}, \mathbf{I}_{4S+2}). \tag{B.41}$$
It now follows from (B.38), (B.39), and (B.40) that
$$N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) = \boldsymbol{\Xi}_N^{-1}\mathbf{J}_N'\boldsymbol{\Theta}_N\boldsymbol{\Psi}_N^{1/2}(\boldsymbol{\Psi}_N^{-1/2}\mathbf{q}_N) + o_p(1). \tag{B.42}$$
Since all nonstochastic terms on the right-hand side of (B.42) are $O(1)$, it follows that $N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N)$ is $O_p(1)$. To derive the asymptotic distribution of $N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N)$, we invoke Corollary F4 in Pötscher and Prucha (1997). In the present context, we have
$$\boldsymbol{\zeta}_N = \boldsymbol{\Psi}_N^{-1/2}\mathbf{q}_N \xrightarrow{d} \boldsymbol{\zeta} \sim N(\mathbf{0}, \mathbf{I}_{4S+2}),$$
$$N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) = \mathbf{A}_N\boldsymbol{\zeta}_N + o_p(1), \quad \text{with} \quad \mathbf{A}_N = \boldsymbol{\Xi}_N^{-1}\mathbf{J}_N'\boldsymbol{\Theta}_N\boldsymbol{\Psi}_N^{1/2}.$$
Furthermore, $N^{1/2}(\tilde{\boldsymbol{\theta}}_N - \boldsymbol{\theta}_N) = O_p(1)$ and its variance-covariance matrix is
$$\boldsymbol{\Omega}_{\tilde{\boldsymbol{\theta}}_N}(\boldsymbol{\Theta}_N) = (\mathbf{J}_N'\boldsymbol{\Theta}_N\mathbf{J}_N)^{-1}\mathbf{J}_N'\boldsymbol{\Theta}_N\boldsymbol{\Psi}_N\boldsymbol{\Theta}_N\mathbf{J}_N(\mathbf{J}_N'\boldsymbol{\Theta}_N\mathbf{J}_N)^{-1},$$
where $\boldsymbol{\Omega}_{\tilde{\boldsymbol{\theta}}_N}$ is positive definite.
As a final point, it has to be shown that $\liminf_{N\to\infty}\lambda_{\min}(\mathbf{A}_N\mathbf{A}_N') > 0$, as required in Corollary F4 in Pötscher and Prucha (1997). Observe that
$$\lambda_{\min}(\mathbf{A}_N\mathbf{A}_N') = \lambda_{\min}(\boldsymbol{\Xi}_N^{-1}\mathbf{J}_N'\boldsymbol{\Theta}_N\boldsymbol{\Psi}_N\boldsymbol{\Theta}_N\mathbf{J}_N\boldsymbol{\Xi}_N^{-1}) \geq \lambda_{\min}(\boldsymbol{\Psi}_N)\lambda_{\min}(\boldsymbol{\Theta}_N\boldsymbol{\Theta}_N)\lambda_{\min}(\boldsymbol{\Xi}_N^{-1}\boldsymbol{\Xi}_N^{-1})\lambda_{\min}(\boldsymbol{\Gamma}_N'\boldsymbol{\Gamma}_N)\lambda_{\min}(\mathbf{B}_N'\mathbf{B}_N) > 0, \tag{B.43}$$
since the matrices involved are all positive definite.
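The limiting variance-covariance matrix $\boldsymbol{\Omega}_{\tilde{\boldsymbol{\theta}}_N}(\boldsymbol{\Theta}_N)$ has the familiar GMM sandwich form. A small numerical sketch (NumPy; $\mathbf{J}$ and $\boldsymbol{\Psi}$ are arbitrary placeholders, and the efficiency property of the weighting $\boldsymbol{\Theta}_N = \boldsymbol{\Psi}_N^{-1}$ is standard GMM theory rather than a statement proved in this appendix):

```python
import numpy as np

def sandwich_cov(J, Theta, Psi):
    """Omega(Theta) = (J'Theta J)^{-1} J'Theta Psi Theta J (J'Theta J)^{-1}."""
    bread = np.linalg.inv(J.T @ Theta @ J)
    return bread @ (J.T @ Theta @ Psi @ Theta @ J) @ bread

rng = np.random.default_rng(2)
J = rng.normal(size=(6, 3))              # placeholder J_N (full column rank)
A = rng.normal(size=(6, 6))
Psi = A @ A.T + 6 * np.eye(6)            # placeholder positive definite Psi_N

# With the efficient weighting Theta = Psi^{-1}, the sandwich collapses
# to (J' Psi^{-1} J)^{-1}.
Omega_eff = sandwich_cov(J, np.linalg.inv(Psi), Psi)
assert np.allclose(Omega_eff, np.linalg.inv(J.T @ np.linalg.inv(Psi) @ J))

# Any other positive definite weighting is (weakly) less efficient:
Omega_id = sandwich_cov(J, np.eye(6), Psi)
eigs = np.linalg.eigvalsh(Omega_id - Omega_eff)
assert eigs.min() >= -1e-9
```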
Consistency Proof for Estimates of Third and Fourth Moments of the Error Components
Consistent estimates for the second moments of $v_{it,N}$ and $\mu_{i,N}$ are delivered by the GM estimators defined in (17) and (18), respectively (see Theorems 1a and 1b). In the following we prove that the estimators for the third and fourth moments of $v_{it,N}$ and $\varepsilon_{it,N}$, defined in (39) and (40), are also consistent. The proof draws on Gilbert (2002), who considers the estimation of third and fourth moments in error component models without spatial lags and without spatially autoregressive disturbances. For reasons that will become clear below, we depart from the convention adopted so far of using indexation $i = 1, \ldots, NT$ for the stacked series. In the subsequent proof we use the double indexation $it$, with $i = 1, \ldots, N$, $t = 1, \ldots, T$.
Preliminary Remarks
Note first that
$$\tilde{\boldsymbol{\varepsilon}}_N = \boldsymbol{\varepsilon}_N + \boldsymbol{\eta}_N, \tag{B.44}$$
where
$$\boldsymbol{\eta}_N = \Big[\sum_{m=1}^{S}(\rho_{m,N} - \tilde{\rho}_{m,N})\mathbf{I}_T \otimes \mathbf{M}_{m,N}\Big]\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)^{-1}\Big]\boldsymbol{\varepsilon}_N$$
$$+ \Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)\Big]\mathbf{D}_N\boldsymbol{\Delta}_N + \Big[\sum_{m=1}^{S}(\rho_{m,N} - \tilde{\rho}_{m,N})(\mathbf{I}_T \otimes \mathbf{M}_{m,N})\Big]\mathbf{D}_N\boldsymbol{\Delta}_N.$$
This can also be written as
$$\boldsymbol{\eta}_N = \mathbf{R}_N\mathbf{g}_N, \tag{B.45}$$
where $\mathbf{R}_N = (\mathbf{R}_{1,N}, \mathbf{R}_{2,N}, \mathbf{R}_{3,N})$ with
$$\mathbf{R}_{1,N} = \Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)\Big]\mathbf{D}_N,$$
$$\mathbf{R}_{2,N} = \Big\{(\mathbf{I}_T \otimes \mathbf{M}_{1,N})\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)^{-1}\Big]\boldsymbol{\varepsilon}_N, \ldots, (\mathbf{I}_T \otimes \mathbf{M}_{S,N})\Big[\mathbf{I}_T \otimes \big(\mathbf{I}_N - \sum_{m=1}^{S}\rho_{m,N}\mathbf{M}_{m,N}\big)^{-1}\Big]\boldsymbol{\varepsilon}_N\Big\},$$
$$\mathbf{R}_{3,N} = [(\mathbf{I}_T \otimes \mathbf{M}_{1,N})\mathbf{D}_N, \ldots, (\mathbf{I}_T \otimes \mathbf{M}_{S,N})\mathbf{D}_N],$$
$$\mathbf{g}_N = \begin{pmatrix} \boldsymbol{\Delta}_N \\ \boldsymbol{\rho}_N - \tilde{\boldsymbol{\rho}}_N \\ (\boldsymbol{\rho}_N - \tilde{\boldsymbol{\rho}}_N) \otimes \boldsymbol{\Delta}_N \end{pmatrix}.$$
In light of Assumption 3, and since the elements of $\mathbf{D}_N = (\mathbf{d}_{1.,N}', \ldots, \mathbf{d}_{NT.,N}')'$ have bounded fourth moments (by assumption in Theorem 3), each column of the matrix $\mathbf{R}_N$ is of the form $\boldsymbol{\pi}_N + \boldsymbol{\Pi}_N\boldsymbol{\xi}_N$, where the elements of the $NT \times 1$ vector $\boldsymbol{\pi}_N$ are bounded uniformly in absolute value by some finite constant, the row and column sums of the $NT \times NT$ matrix $\boldsymbol{\Pi}_N$ are bounded uniformly in absolute value by some finite constant, and the fourth moments of the elements of $\boldsymbol{\xi}_N$ are also bounded by some finite constant. It follows that the fourth moments of the elements of $\mathbf{R}_N$ are also bounded by some finite constant by Lemma C.2 in Kelejian and Prucha (2010).⁷
As a consequence of $\boldsymbol{\eta}_N = \mathbf{R}_N\mathbf{g}_N$, we have for the $i$-th element of the $NT \times 1$ vector $\boldsymbol{\eta}_N$
$$|\eta_{i,N}| = |\mathbf{g}_N'\mathbf{r}_{i.,N}'| \leq \lambda_N\nu_{i,N}, \tag{B.46}$$
where $\mathbf{r}_{i.,N}$ denotes the $i$-th row of $\mathbf{R}_N$, $\lambda_N = \|\mathbf{g}_N\|$, and $\nu_{i,N} = \|\mathbf{r}_{i.,N}\|$ with $E|\nu_{i,N}|^4 \leq K_\nu < \infty$. Without loss of generality we can select $K_\nu$ such that $E(\nu_{i,N}^\delta) \leq K_\nu$ for $\delta \leq 4$. By Assumption 1 there is also some $K_\varepsilon$ such that $E|\varepsilon_{i,N}|^\delta \leq K_\varepsilon < \infty$ for $\delta \leq 4$. In the following we use $K$ to denote the larger bound, i.e., $K = \max(K_\nu, K_\varepsilon)$. Also note that $N^{1/2}\lambda_N = O_p(1)$.
Estimation of Third Moments

We first consider σ_ε^(3) = E(ε_{it,N}³) and its estimate σ̃_{ε,N}^(3) = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε̃_{it,N}³. Using (B.44) we have

σ̃_{ε,N}^(3) = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T (ε_{it,N} − η_{it,N})³                  (B.47)
            = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T (ε_{it,N}³ − 3ε_{it,N}²η_{it,N} + 3η_{it,N}²ε_{it,N} − η_{it,N}³)
            = Φ_{1,N} − Φ_{2,N} + Φ_{3,N} − Φ_{4,N} ,

where Φ_{1,N} = (NT)^{-1} Σ_i Σ_t ε_{it,N}³, Φ_{2,N} = 3(NT)^{-1} Σ_i Σ_t ε_{it,N}²η_{it,N}, Φ_{3,N} = 3(NT)^{-1} Σ_i Σ_t η_{it,N}²ε_{it,N}, and Φ_{4,N} = (NT)^{-1} Σ_i Σ_t η_{it,N}³. By the weak law of large numbers for i.i.d. random variables, Φ_{1,N} converges to σ_ε^(3) as N → ∞.
⁷ See also Remark A.1 in Appendix A.
Next consider

|Φ_{2,N}| = 3(NT)^{-1} |Σ_{i=1}^N Σ_{t=1}^T ε_{it,N}² η_{it,N}|                      (B.48)
          ≤ 3Δ̄_N (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε_{it,N}² ξ̄_{it,N}
          ≤ 3(N^{1/2}Δ̄_N) N^{-1/2} [(NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε_{it,N}⁴]^{1/2} [(NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ξ̄_{it,N}²]^{1/2} = o_p(1) ,

since Eε_{it,N}⁴ ≤ K < ∞ and thus [(NT)^{-1} Σ_i Σ_t ε_{it,N}⁴]^{1/2} = O_p(1), Eξ̄_{it,N}² ≤ K < ∞ and thus [(NT)^{-1} Σ_i Σ_t ξ̄_{it,N}²]^{1/2} = O_p(1), N^{1/2}Δ̄_N = O_p(1), and N^{-1/2} = o(1).
Observe further that

|Φ_{3,N}| = 3(NT)^{-1} |Σ_{i=1}^N Σ_{t=1}^T η_{it,N}² ε_{it,N}|                      (B.49)
          ≤ 3Δ̄_N² (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ξ̄_{it,N}² |ε_{it,N}|
          ≤ 3(N^{1/2}Δ̄_N)² N^{-1} [(NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε_{it,N}²]^{1/2} [(NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ξ̄_{it,N}⁴]^{1/2} = o_p(1) ,

since Eε_{it,N}² ≤ K < ∞ and thus [(NT)^{-1} Σ_i Σ_t ε_{it,N}²]^{1/2} = O_p(1), Eξ̄_{it,N}⁴ ≤ K < ∞ and thus [(NT)^{-1} Σ_i Σ_t ξ̄_{it,N}⁴]^{1/2} = O_p(1), N^{1/2}Δ̄_N = O_p(1), and N^{-1} = o(1).
Finally,

|Φ_{4,N}| = (NT)^{-1} |Σ_{i=1}^N Σ_{t=1}^T η_{it,N}³|                                (B.50)
          ≤ Δ̄_N³ (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ξ̄_{it,N}³
          = (N^{1/2}Δ̄_N)³ N^{-3/2} (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ξ̄_{it,N}³ = o_p(1) ,

since (NT)^{-1} Σ_i Σ_t ξ̄_{it,N}³ = O_p(1) (as E|ξ̄_{it,N}|³ ≤ K < ∞), N^{1/2}Δ̄_N = O_p(1), and N^{-3/2} = o(1). As a consequence, we have σ̃_{ε,N}^(3) = σ_ε^(3) + o_p(1), σ_ε^(3) = O(1) by Assumption 1, and thus σ̃_{ε,N}^(3) = O_p(1).
Next consider the third moment of the unit-specific error component, σ_μ^(3), and its estimate σ̃_{μ,N}^(3), which can be expressed (compare Gilbert, 2002, p. 48ff.) as

σ_μ^(3) = E(ε_{is,N} ε_{it,N}²)  for any given i and s ≠ t,                          (B.50a)

σ̃_{μ,N}^(3) = [NT(T−1)]^{-1} Σ_{i=1}^N Σ_{s=1}^T Σ_{t=1,t≠s}^T ε̃_{is,N} ε̃_{it,N}² . (B.50b)

Notice that by Assumption 1, σ_μ^(3) is invariant to the choice of i, s, and t. Using (B.44), we have

σ̃_{μ,N}^(3) = [NT(T−1)]^{-1} Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s} (ε_{is,N} − η_{is,N})(ε_{it,N} − η_{it,N})²    (B.51)
 = [NT(T−1)]^{-1} Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s} (ε_{is,N}ε_{it,N}² − 2ε_{is,N}ε_{it,N}η_{it,N} + ε_{is,N}η_{it,N}² − η_{is,N}ε_{it,N}² + 2η_{is,N}ε_{it,N}η_{it,N} − η_{is,N}η_{it,N}²)
 = ψ_{1,N} − ψ_{2,N} + ψ_{3,N} − ψ_{4,N} + ψ_{5,N} − ψ_{6,N} ,

where ψ_{j,N}, j = 1, ..., 6, denotes the average of the j-th term in parentheses (with the factor 2 included and the sign omitted).
Observe that

ψ_{1,N} = [NT(T−1)]^{-1} Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s} ε_{is,N} ε_{it,N}² = [NT(T−1)]^{-1} Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s} (μ_{i,N} + v_{is,N})(μ_{i,N} + v_{it,N})²    (B.52)
 = [NT(T−1)]^{-1} Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s} (μ_{i,N}³ + 2μ_{i,N}²v_{it,N} + μ_{i,N}v_{it,N}² + v_{is,N}μ_{i,N}² + 2μ_{i,N}v_{it,N}v_{is,N} + v_{it,N}²v_{is,N})
 = ψ_{11,N} + ψ_{12,N} + ψ_{13,N} + ψ_{14,N} + ψ_{15,N} + ψ_{16,N} .

By the weak law of large numbers, ψ_{11,N} converges in probability to σ_μ^(3). Notice further that, by the properties of v_{it,N} and μ_{i,N} (see Assumption 1), ψ_{12,N}, ψ_{13,N}, ψ_{14,N}, ψ_{15,N}, and ψ_{16,N} are all o_p(1). As a consequence, ψ_{1,N} converges in probability to σ_μ^(3).
Next observe that (writing Σ′ as shorthand for Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s})

|ψ_{2,N}| = 2[NT(T−1)]^{-1} |Σ′ ε_{is,N}ε_{it,N}η_{it,N}|                            (B.53)
 ≤ 2Δ̄_N [NT(T−1)]^{-1} Σ′ |ε_{is,N}ε_{it,N}| ξ̄_{it,N}
 ≤ 2(N^{1/2}Δ̄_N) N^{-1/2} {[NT(T−1)]^{-1} Σ′ ε_{is,N}²}^{1/2} {[NT(T−1)]^{-1} Σ′ ε_{it,N}²ξ̄_{it,N}²}^{1/2} = o_p(1) ,

|ψ_{3,N}| = [NT(T−1)]^{-1} |Σ′ ε_{is,N}η_{it,N}²|                                    (B.54)
 ≤ (N^{1/2}Δ̄_N)² N^{-1} {[NT(T−1)]^{-1} Σ′ ε_{is,N}²}^{1/2} {[NT(T−1)]^{-1} Σ′ ξ̄_{it,N}⁴}^{1/2} = o_p(1) ,

|ψ_{4,N}| = [NT(T−1)]^{-1} |Σ′ η_{is,N}ε_{it,N}²|                                    (B.55)
 ≤ (N^{1/2}Δ̄_N) N^{-1/2} {[NT(T−1)]^{-1} Σ′ ξ̄_{is,N}²}^{1/2} {[NT(T−1)]^{-1} Σ′ ε_{it,N}⁴}^{1/2} = o_p(1) ,

|ψ_{5,N}| = 2[NT(T−1)]^{-1} |Σ′ η_{is,N}ε_{it,N}η_{it,N}|                            (B.56)
 ≤ 2(N^{1/2}Δ̄_N)² N^{-1} {[NT(T−1)]^{-1} Σ′ ξ̄_{is,N}²}^{1/2} {[NT(T−1)]^{-1} Σ′ ξ̄_{it,N}²ε_{it,N}²}^{1/2} = o_p(1) ,

|ψ_{6,N}| = [NT(T−1)]^{-1} |Σ′ η_{is,N}η_{it,N}²|                                    (B.57)
 ≤ (N^{1/2}Δ̄_N)³ N^{-3/2} {[NT(T−1)]^{-1} Σ′ ξ̄_{is,N}²}^{1/2} {[NT(T−1)]^{-1} Σ′ ξ̄_{it,N}⁴}^{1/2} = o_p(1) ,

because N^{1/2}Δ̄_N is O_p(1), the powers of N^{-1/2} are o(1), and the terms in brackets are all O_p(1), since E|ξ̄_{is,N}|^σ ≤ K < ∞ and E|ε_{it,N}|^σ ≤ K < ∞ for σ ≤ 4 and all N (so that, by the Cauchy–Schwarz inequality, also E(ξ̄_{it,N}²ε_{it,N}²) ≤ K). It follows that σ̃_{μ,N}^(3) = σ_μ^(3) + o_p(1), σ_μ^(3) = O(1) by Assumption 1, and that σ̃_{μ,N}^(3) = O_p(1). Obviously, we then also have that σ̃_{v,N}^(3) = σ̃_{ε,N}^(3) − σ̃_{μ,N}^(3) = σ_v^(3) + o_p(1).
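The identity (B.50a) underlying σ̃_μ^(3) can be checked by simulation: under the error-components structure ε_it = μ_i + v_it with zero-mean independent components, the cross moment E(ε_is ε_it²), s ≠ t, isolates E(μ_i³). A sketch with hypothetical distributions (centered exponential μ, Gaussian v):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical error-components design: mu_i centered exponential
# (E mu^3 = 2), v_it standard normal (symmetric, E v^3 = 0).
N, T = 400_000, 4
mu = rng.exponential(1.0, size=(N, 1)) - 1.0
v = rng.normal(0.0, 1.0, size=(N, T))
eps = mu + v                              # eps_it = mu_i + v_it

# Sample analogue of (B.50b): average eps_is * eps_it^2 over all s != t
acc, cnt = 0.0, 0
for s in range(T):
    for t in range(T):
        if s != t:
            acc += float(np.mean(eps[:, s] * eps[:, t] ** 2))
            cnt += 1
sigma_mu3_hat = acc / cnt                 # should approach E(mu^3) = 2
```

The symmetric v contributes nothing in expectation, so only the skewness of the unit effect survives in the cross moment.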
Estimation of Fourth Moments

Consider the fourth moment of μ_{i,N} and its estimate, which can be expressed as (compare Gilbert, 2002, p. 48ff.):

σ_μ^(4) = E(ε_{is}ε_{it}³) − 3E(ε_{is}ε_{it})[E(ε_{it}²) − E(ε_{is}ε_{it})]  for any given i and s ≠ t,    (B.58a)

σ̃_{μ,N}^(4) = [NT(T−1)]^{-1} Σ′ ε̃_{is,N}ε̃_{it,N}³
 − 3{[NT(T−1)]^{-1} Σ′ ε̃_{is,N}ε̃_{it,N}}[(NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε̃_{it,N}² − [NT(T−1)]^{-1} Σ′ ε̃_{is,N}ε̃_{it,N}]    (B.58b)
 = Λ_{1,N} − 3Λ_{2,N}(σ̃_N² − Λ_{2,N}) ,

where Σ′ denotes Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s} and σ̃_N² = (NT)^{-1} Σ_i Σ_t ε̃_{it,N}². Observe that

Λ_{1,N} = [NT(T−1)]^{-1} Σ′ (ε_{is,N} − η_{is,N})(ε_{it,N} − η_{it,N})³              (B.59)
 = [NT(T−1)]^{-1} Σ′ (ε_{is,N}ε_{it,N}³ − 3ε_{is,N}ε_{it,N}²η_{it,N} + 3ε_{is,N}ε_{it,N}η_{it,N}² − ε_{is,N}η_{it,N}³ − η_{is,N}ε_{it,N}³ + 3η_{is,N}ε_{it,N}²η_{it,N} − 3η_{is,N}ε_{it,N}η_{it,N}² + η_{is,N}η_{it,N}³)
 = Λ_{11,N} + Λ_{12,N} + Λ_{13,N} + Λ_{14,N} + Λ_{15,N} + Λ_{16,N} + Λ_{17,N} + Λ_{18,N} ,

where Λ_{1j,N}, j = 1, ..., 8, denotes the average of the j-th term in parentheses (sign included). The first term, Λ_{11,N}, can also be written as

Λ_{11,N} = [NT(T−1)]^{-1} Σ′ ε_{is,N}ε_{it,N}³ = [NT(T−1)]^{-1} Σ′ (μ_{i,N} + v_{is,N})(μ_{i,N} + v_{it,N})³    (B.60)
 = [NT(T−1)]^{-1} Σ′ (μ_{i,N}⁴ + 3μ_{i,N}³v_{it,N} + 3μ_{i,N}²v_{it,N}² + μ_{i,N}v_{it,N}³ + v_{is,N}μ_{i,N}³ + 3μ_{i,N}²v_{is,N}v_{it,N} + 3μ_{i,N}v_{is,N}v_{it,N}² + v_{is,N}v_{it,N}³) .

By the properties of v_{it,N} and μ_{i,N} (see Assumption 1), Λ_{11,N} converges in probability to σ_μ^(4) + 3σ_μ²σ_v². Moreover, it follows from the properties of v_N and μ_N (see Assumption 1) that the terms Λ_{12,N}, ..., Λ_{18,N} are o_p(1). It follows that Λ_{1,N} converges in probability to σ_μ^(4) + 3σ_μ²σ_v².
Next consider

Λ_{2,N} = [NT(T−1)]^{-1} Σ′ (ε_{is,N} − η_{is,N})(ε_{it,N} − η_{it,N})              (B.61)
 = [NT(T−1)]^{-1} Σ′ (ε_{is,N}ε_{it,N} − ε_{is,N}η_{it,N} − η_{is,N}ε_{it,N} + η_{is,N}η_{it,N}) ,

where Σ′ denotes Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s}, which converges to σ_μ² by the weak law of large numbers, since E(ε_{is,N}ε_{it,N}) = σ_μ² for s ≠ t by the properties of v_{it,N} and μ_{i,N}, and since the remainder terms appearing in Λ_{2,N} are o_p(1) by arguments analogous to those for ψ_{2,N} and ψ_{5,N} (see B.53 and B.56). Finally, by arguments similar to the proof of the consistency of the estimate of the third moment, it follows that σ̃_N² = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε̃_{it,N}² converges in probability to E(ε_{it,N}²) = σ_μ² + σ_v². As a consequence, σ̃_{μ,N}^(4) = σ_μ^(4) + o_p(1), σ_μ^(4) = O(1) by Assumption 1, and σ̃_{μ,N}^(4) = O_p(1).
We next consider the fourth moment of v_{it,N} and its estimate, which can be written as (compare Gilbert, 2002, p. 48ff.):

σ_v^(4) = E(ε_{it}⁴) − E(ε_{is}ε_{it}³) − 3E(ε_{is}ε_{it})[E(ε_{it}²) − E(ε_{is}ε_{it})]  for any given i and s ≠ t,    (B.62a)

σ̃_{v,N}^(4) = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T ε̃_{it,N}⁴ − [NT(T−1)]^{-1} Σ′ ε̃_{is,N}ε̃_{it,N}³
 − 3{[NT(T−1)]^{-1} Σ′ ε̃_{is,N}ε̃_{it,N}}[(NT)^{-1} Σ_i Σ_t ε̃_{it,N}² − [NT(T−1)]^{-1} Σ′ ε̃_{is,N}ε̃_{it,N}]    (B.62b)
 = Γ_{1,N} − Λ_{1,N} − 3Λ_{2,N}(σ̃_N² − Λ_{2,N}) ,

where Σ′ denotes Σ_{i=1}^N Σ_{s=1}^T Σ_{t≠s}. We have already shown that Λ_{1,N} converges in probability to σ_μ^(4) + 3σ_μ²σ_v² and that 3Λ_{2,N}(σ̃_N² − Λ_{2,N}) converges in probability to 3σ_μ²σ_v². Next, expanding Γ_{1,N}, we obtain

Γ_{1,N} = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T (ε_{it,N} − η_{it,N})⁴                       (B.63)
 = (NT)^{-1} Σ_{i=1}^N Σ_{t=1}^T (ε_{it,N}⁴ − 4ε_{it,N}³η_{it,N} + 6ε_{it,N}²η_{it,N}² − 4ε_{it,N}η_{it,N}³ + η_{it,N}⁴) .

Observing that (NT)^{-1} Σ_i Σ_t ε_{it,N}⁴ converges in probability to E(ε_{it,N}⁴) = σ_μ^(4) + σ_v^(4) + 6σ_μ²σ_v² and that the remainder terms of Γ_{1,N} are all o_p(1), it follows that σ̃_{v,N}^(4) = σ_v^(4) + o_p(1), σ_v^(4) = O(1) by Assumption 1, and σ̃_{v,N}^(4) = O_p(1).
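The two moment identities (B.58a) and (B.62a) can be verified directly from the implied moments of ε_it = μ_i + v_it. The sketch below uses hypothetical component moments (centered-exponential μ, Gaussian v) and checks both identities in exact arithmetic:

```python
# Hypothetical component moments: mu_i = Exp(1) - 1 (variance 1, fourth
# central moment 9), v_it ~ N(0, 2) (fourth moment 3 * 2^2 = 12).
sig_mu2, sig_mu4 = 1.0, 9.0
sig_v2, sig_v4 = 2.0, 12.0

# Moments of eps_it = mu_i + v_it implied by independence and zero means:
E_es_et = sig_mu2                             # E(eps_is eps_it), s != t
E_e2 = sig_mu2 + sig_v2                       # E(eps_it^2)
E_es_et3 = sig_mu4 + 3.0 * sig_mu2 * sig_v2   # E(eps_is eps_it^3)
E_e4 = sig_mu4 + 6.0 * sig_mu2 * sig_v2 + sig_v4  # E(eps_it^4)

# (B.58a): recover sigma_mu^(4) from eps-moments alone
mu4_rec = E_es_et3 - 3.0 * E_es_et * (E_e2 - E_es_et)
# (B.62a): recover sigma_v^(4)
v4_rec = E_e4 - E_es_et3 - 3.0 * E_es_et * (E_e2 - E_es_et)
```

Both recovered values match the component moments exactly (mu4_rec = 9, v4_rec = 12), which is precisely the algebra behind the plug-in estimators σ̃_μ^(4) and σ̃_v^(4).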
III. Proof of Theorem 3 (Variance-Covariance Estimation)

Lemma B.2

Suppose Assumptions 1–4 hold. Furthermore, assume that sup_N Σ_{m=1}^S |ρ_{m,N}| < 1 and that the row and column sums of M_{m,N}, m = 1, ..., S, are bounded uniformly in absolute value by 1 and some finite constant, respectively. Let σ̃_v², σ̃_1², and ρ̃_{s,N}, s = 1, ..., S, be estimators satisfying σ̃_v² − σ_v² = o_p(1), σ̃_1² − σ_1² = o_p(1), and ρ̃_{s,N} − ρ_{s,N} = o_p(1), s = 1, ..., S. Let the NT × P* or N × P* matrix F_N be of the form (compare Lemmata 1 and 2):

(a)  F_{v,N} = [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})^{-1}]H_N ,
     F_{μ,N} = (e_T′ ⊗ I_N)[I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})^{-1}]H_N ,

(b)  F_{v,N}** = (σ_v^{-2}Q_{0,N} + σ_1^{-2}Q_{1,N})H_N* = Ω_{ε,N}^{-1}[I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]H_N ,
     F_{μ,N}** = [σ_1^{-2}(e_T′ ⊗ I_N)]H_N* = [σ_1^{-2}(e_T′ ⊗ I_N)][I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]H_N ,

where H_N is an NT × P* matrix whose elements are bounded uniformly in absolute value by some constant c < ∞ and H_N* = [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]H_N. The corresponding estimates F̃_{v,N}, F̃_{μ,N}, F̃_{v,N}**, and F̃_{μ,N}** are defined analogously, replacing σ_v², σ_1², Ω_{ε,N}, and ρ_{m,N}, m = 1, ..., S, with σ̃_v², σ̃_1², Ω̃_{ε,N}, and ρ̃_{m,N}, m = 1, ..., S, respectively.

(i) Then N^{-1}F̃_N′F̃_N − N^{-1}F_N′F_N = o_p(1) and N^{-1}F_N′F_N = O(1).

(ii) Let a_N be some NT × 1 or N × 1 vector whose elements are bounded uniformly in absolute value. Then N^{-1}F̃_N′a_N − N^{-1}F_N′a_N = o_p(1) and N^{-1}F_N′a_N = O(1).
Proof of part (i) of Lemma B.2

Under the maintained assumptions there exists a ρ* with sup_N Σ_{m=1}^S |ρ_{m,N}| < ρ* < 1. It follows immediately from the properties of the matrices M_{m,N} that the row and column sums of ρ*^{-1} Σ_{m=1}^S ρ_{m,N}M_{m,N} are bounded uniformly in absolute value by 1 and some finite constant, respectively. For later reference, note that the elements of the vector ρ*^{-k}(Σ_{m=1}^S ρ_{m,N}M_{m,N})^k h_{·s,N} are then also bounded uniformly in absolute value by c for every finite integer k.

Next define F_N = G_N K_N with K_N = [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})^{-1}]H_N. Denote the (r,s)-th element of the difference N^{-1}F̃_N′F̃_N − N^{-1}F_N′F_N as ν_N, which is given by

ν_N = N^{-1}(f̃_{·r,N}′f̃_{·s,N} − f_{·r,N}′f_{·s,N}) , r, s = 1, ..., P* ,           (B.64a)

or

ν_N = N^{-1}(k̃_{·r,N}′G̃_N′G̃_N k̃_{·s,N} − k_{·r,N}′G_N′G_N k_{·s,N}) .             (B.64b)

Define further E_N = G_N′G_N, such that ν_N = N^{-1}(k̃_{·r,N}′Ẽ_N k̃_{·s,N} − k_{·r,N}′E_N k_{·s,N}).
Proof under Assumption (a)

Consider first the case F_N = F_{μ,N}; then it holds that E_N = Ẽ_N = TQ_{1,N}.⁸ (The subsequent proof also covers the case F_N = F_{v,N}; in that case E_N = Ẽ_N = I_{NT}.) Hence, ν_N = N^{-1}(k̃_{·r,N}′E_N k̃_{·s,N} − k_{·r,N}′E_N k_{·s,N}), which can be written as ν_N = Σ_{i=1}^3 δ_{i,N} with

δ_{1,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′E_N(k̃_{·s,N} − k_{·s,N}) ,                   (B.65)
δ_{2,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′E_N k_{·s,N} ,
δ_{3,N} = N^{-1}k_{·r,N}′E_N(k̃_{·s,N} − k_{·s,N}) .

Note that

k̃_{·s,N} − k_{·s,N} = [I_T ⊗ (I_N − Σ_{m=1}^S ρ̃_{m,N}M_{m,N})^{-1}]h_{·s,N} − [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})^{-1}]h_{·s,N} .    (B.66)

We next show that δ_{i,N} = o_p(1), i = 1, ..., 3, invoking the following theorem (see, e.g., Resnick, 1999, p. 171): Let (X, X_N, N ≥ 1) be real valued random variables. Then X_N →_p X if and only if each subsequence X_{N_a} contains a further subsequence X_{N_{a′}} that converges almost surely to X.

As we show below, we will be confronted with terms of the form

θ_N^{(k,l)} = N^{-1} ρ*^{l+k} h_{·r,N}′(I_T ⊗ M̈_N^l)′E_N(I_T ⊗ M̈_N^k)h_{·s,N} ,    (B.67)

where M̈_N is a matrix whose row and column sums are bounded uniformly in absolute value by some constant c_M. By the properties of E_N = TQ_{1,N}, the row and column sums of the matrix (I_T ⊗ M̈_N^l)′E_N(I_T ⊗ M̈_N^k) are bounded uniformly in absolute value, and θ_N^{(k,l)} = O(1) (compare Remark A.1 in the Appendix).

⁸ For F_N = F_{μ,N}, we have G_N = (e_T′ ⊗ I_N). Hence G_N′G_N = (e_T′ ⊗ I_N)′(e_T′ ⊗ I_N) = (e_T e_T′ ⊗ I_N) = (J_T ⊗ I_N) = TQ_{1,N}.
Now, let the index N_a denote some subsequence. In light of the aforementioned equivalence, there exists a subsequence of this subsequence, (N_{a′}), such that for events ω ∈ A, with P(A^C) = 0, it holds that

ρ̃_{m,N_{a′}}(ω) − ρ_{m,N_{a′}} → 0 , m = 1, ..., S,                                 (B.68)

and that for some N_{a′} ≥ N̄,

θ_{N_{a′}}^{(k,l)}(ω) ≤ K̄ for some K̄ with 0 < K̄ < ∞, and                           (B.69)

Σ_{m=1}^S |ρ̃_{m,N_{a′}}(ω)| ≤ ρ** , where ρ** = [sup_N Σ_{m=1}^S |ρ_{m,N}| + ρ*]/2 < 1.    (B.70)

In the following, assume that N_{a′} ≥ N̄. Since Σ_{m=1}^S |ρ̃_{m,N_{a′}}(ω)| < 1, it follows from Horn and Johnson (1985, p. 301) that (I_N − Σ_{m=1}^S ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}) is invertible, such that

k̃_{·s,N_{a′}} − k_{·s,N_{a′}} = [I_T ⊗ (I_N − Σ_{m=1}^S ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}})^{-1} − I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N_{a′}}M_{m,N_{a′}})^{-1}]h_{·s,N_{a′}}    (B.71)
 = {I_T ⊗ Σ_{l=1}^∞ [Σ_{m=1}^S ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}]^l − I_T ⊗ Σ_{l=1}^∞ [Σ_{m=1}^S ρ_{m,N_{a′}}M_{m,N_{a′}}]^l}h_{·s,N_{a′}}
 = {I_T ⊗ Σ_{l=1}^∞ ([Σ_{m=1}^S ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}]^l − [Σ_{m=1}^S ρ_{m,N_{a′}}M_{m,N_{a′}}]^l)}h_{·s,N_{a′}} .

Substituting into δ_{1,N_{a′}}, we have

δ_{1,N_{a′}} = N_{a′}^{-1}(k̃_{·r,N_{a′}} − k_{·r,N_{a′}})′E_{N_{a′}}(k̃_{·s,N_{a′}} − k_{·s,N_{a′}})    (B.72)
 = N_{a′}^{-1} Σ_{k=1}^∞ Σ_{l=1}^∞ h_{·r,N_{a′}}′{I_T ⊗ ([Σ_m ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}]^l − [Σ_m ρ_{m,N_{a′}}M_{m,N_{a′}}]^l)}′ E_{N_{a′}} {I_T ⊗ ([Σ_m ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}]^k − [Σ_m ρ_{m,N_{a′}}M_{m,N_{a′}}]^k)}h_{·s,N_{a′}} ,

where E_{N_{a′}} = TQ_{1,N_{a′}} for F_N = F_{μ,N} (and E_N = I_{NT} for F_N = F_{v,N}). A single element with index (k,l) of this infinite double sum over k and l is given by

N_{a′}^{-1} h_{·r,N_{a′}}′{I_T ⊗ ([Σ_m ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}]^l − [Σ_m ρ_{m,N_{a′}}M_{m,N_{a′}}]^l)}′ E_{N_{a′}} {I_T ⊗ ([Σ_m ρ̃_{m,N_{a′}}(ω)M_{m,N_{a′}}]^k − [Σ_m ρ_{m,N_{a′}}M_{m,N_{a′}}]^k)}h_{·s,N_{a′}} .    (B.73)
ρ N a ( ) there exist matrices M N a and M N a ,
Next note that for any values of ρ N a and any ~
whose row and column sums are bounded uniformly in absolute value, such that:
S

m 1
m , N a
S

Mm , N a    m, N a M N a and
m 1
S
 ~
m 1
m , N a
S

( )Mm , N a   ~m, N a ( )M N a .
(B.74)
m 1


M N a and M N a can thus be factored out of the sum, yielding
l
l
 S
 l
 S
 l 
~
   m, N a ( )  M N a     m, N a  M N a  .

 m 1

 m 1

(B.75)
 S

 S

By the same reasoning, for any values of   ~m, N a ( )  and    m, N a  , there exists a matrix
 m 1

 m1

MN a , whose row and column sums are bounded uniformly in absolute value, such that:
l
l
l
l
 S
 l
 S
  l   S ~
  S
 l
~
   m, N a ( )  M N a     m, N a  M N a      m, N a ( )      m, N a   M N a .

 m 1

  m 1
 
 m 1
  m 1
(B.76)
Substituting MN a into  1, N a , we obtain
 1, N 
a
l
l 
 S
  S
 l
~
 N a  h.r , Na IT     m, Na ( )      m, Na   MNa
k 1 l 1
  m1
 
 m1
1


29
k
k
 S
  S
  k
~
 E N a I T     m, N a ( )      m, N a   M N a h .s , N a .
  m1
 
 m1
Hence, we can write


 1, N    X N( k,l ) ,
a
k 1 l 1
(B.77)
a
where X N( ka,l )  aN( ka,l )(Nka,l ) with
aN( ka,l ) 
l
l
 S
  S

~
   m, N a      m, N a  
  m 1
 
 m 1
*l
k
k
 S
  S
 
~
   m, N a      m, N a  
  m 1
 
 m 1
*k
(Nka,l )  Na1 p*l  k h.r , N a (IT  MNl a )EN a (IT  MNk )h.s, N a .
and
(B.78)
(B.79)
Note that aN( ka,l )  0 in light of the aforementioned results and since (Nka,l )  K   , it
follows that X N( ka ,l )  0 . Moreover,
l
aN a
l
l
 S ~
  S
  S ~
  S

   m , N a      m , N a     m , N a      m , N a 
  m 1
  m 1
  m 1

  m 1
l
k
*
 
 2 ** 
 * 
l
k
*
 
 
2 **   4 ** 
 * 
 * 
k
(B.80)
lk
.
For N_{a′} ≥ N̄, θ_{N_{a′}}^{(k,l)}(ω) ≤ K̄, such that we have

|X_{N_{a′}}^{(k,l)}| ≤ B^{(l,k)} = 4K̄(ρ**/ρ*)^{l+k} .                              (B.81)

Hence, there exists a dominating function B^{(l,k)} for all values of k and l. Moreover, since ρ**/ρ* < 1 by construction, we also have that

Σ_{k=1}^∞ Σ_{l=1}^∞ B^{(l,k)} < ∞ ,                                                 (B.82)

i.e., the dominating function is integrable (summable). It follows from dominated convergence that

lim_{N_{a′}→∞} δ_{1,N_{a′}} = 0 .                                                   (B.83)

The same holds for δ_{i,N_{a′}}, i = 2, 3. It follows that δ_{i,N_{a′}} → 0 as N_{a′} → ∞ and, in light of Resnick (1999, p. 171), that ν_N = o_p(1). Thus, N^{-1}F̃_N′F̃_N − N^{-1}F_N′F_N = o_p(1). That N^{-1}F_N′F_N = N^{-1}K_N′E_N K_N = O(1) follows from the properties of K_N and E_N (compare Remark A.1). As already mentioned above, an analogous proof applies to the case F_N = F_{v,N} with E_N = Ẽ_N = I_{NT}.
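The summability claim (B.82) rests on the dominating function being geometric in l + k; its double sum has the closed form Σ_kΣ_l 4K̄(ρ**/ρ*)^{l+k} = 4K̄[r/(1−r)]² with r = ρ**/ρ* < 1. A quick numerical check with hypothetical values of K̄ and r:

```python
# Hypothetical values: K_bar bounds theta^(k,l) as in (B.69), and
# r = rho** / rho* < 1 by construction.
K_bar, r = 2.5, 0.6

# Truncated double sum of the dominating function B^(l,k) over k,l >= 1
partial = sum(4.0 * K_bar * r ** (l + k)
              for k in range(1, 200) for l in range(1, 200))
# Closed form: 4 K_bar * (sum_{k>=1} r^k)^2 = 4 K_bar (r / (1 - r))^2
closed_form = 4.0 * K_bar * (r / (1.0 - r)) ** 2
```

The truncated sum agrees with the closed form to machine precision, confirming that the dominating function is summable whenever ρ** < ρ*.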
Proof under Assumption (b)

We first consider the case F_N = F_{v,N}**. Then F_N = G_N K_N with G_N = Ω_{ε,N}^{-1} and K_N = [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]H_N, such that E_N = Ω_{ε,N}^{-1}Ω_{ε,N}^{-1} = Ω_{ε,N}^{-2} = (σ_v^{-4}Q_{0,N} + σ_1^{-4}Q_{1,N}) and Ẽ_N = Ω̃_{ε,N}^{-2} = (σ̃_v^{-4}Q_{0,N} + σ̃_1^{-4}Q_{1,N}). Notice that the subsequent proof also covers the case F_N = F_{μ,N}**.⁹

First, we rewrite ν_N = N^{-1}(k̃_{·r,N}′Ẽ_N k̃_{·s,N} − k_{·r,N}′E_N k_{·s,N}) as ν_N = Σ_{i=1}^7 δ_{i,N}, with

δ_{1,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′(Ẽ_N − E_N)(k̃_{·s,N} − k_{·s,N}) ,           (B.84)
δ_{2,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′(Ẽ_N − E_N)k_{·s,N} ,
δ_{3,N} = N^{-1}k_{·r,N}′(Ẽ_N − E_N)(k̃_{·s,N} − k_{·s,N}) ,
δ_{4,N} = N^{-1}k_{·r,N}′(Ẽ_N − E_N)k_{·s,N} ,
δ_{5,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′E_N(k̃_{·s,N} − k_{·s,N}) ,
δ_{6,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′E_N k_{·s,N} ,
δ_{7,N} = N^{-1}k_{·r,N}′E_N(k̃_{·s,N} − k_{·s,N}) .

⁹ In case F_N = F_{μ,N}** = [σ_1^{-2}(e_T′ ⊗ I_N)][I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]H_N, we have G_N = [σ_1^{-2}(e_T′ ⊗ I_N)], such that E_N = G_N′G_N = σ_1^{-4}(e_T e_T′ ⊗ I_N) = σ_1^{-4}(J_T ⊗ I_N) = Tσ_1^{-4}Q_{1,N}.
For the sake of simplicity, we define E_N = E_{0,N} + E_{1,N}, where E_{0,N} = σ_v^{-4}Q_{0,N} and E_{1,N} = σ_1^{-4}Q_{1,N}, and consider only E_{1,N} in the following; it is obvious that an analogous proof applies to E_{0,N} and thus also to E_N. Next consider

k̃_{·s,N} − k_{·s,N} = [I_T ⊗ (I_N − Σ_{m=1}^S ρ̃_{m,N}M_{m,N})]h_{·s,N} − [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]h_{·s,N} .    (B.85)

We next show that δ_{i,N} = o_p(1), i = 1, ..., 7. Consider

I_T ⊗ (I_N − Σ_{m=1}^S ρ̃_{m,N}M_{m,N}) − I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N}) = −I_T ⊗ Σ_{m=1}^S (ρ̃_{m,N} − ρ_{m,N})M_{m,N}    (B.86)

and note that for any values of ρ̃_{m,N} and ρ_{m,N}, there exists a matrix M̄_N, whose row and column sums are bounded uniformly in absolute value, such that

I_T ⊗ Σ_{m=1}^S (ρ̃_{m,N} − ρ_{m,N})M_{m,N} = I_T ⊗ [Σ_{m=1}^S (ρ̃_{m,N} − ρ_{m,N})]M̄_N .    (B.87)

Substituting M̄_N into (the first part of) the expression for δ_{1,N}, we obtain

δ_{1,N} = N^{-1}h_{·r,N}′{I_T ⊗ [Σ_{m=1}^S (ρ̃_{m,N} − ρ_{m,N})]M̄_N}′(Ẽ_{1,N} − E_{1,N}){I_T ⊗ [Σ_{m=1}^S (ρ̃_{m,N} − ρ_{m,N})]M̄_N}h_{·s,N}    (B.88)
 = (σ̃_1^{-4} − σ_1^{-4})[Σ_{m=1}^S (ρ̃_{m,N} − ρ_{m,N})]² N^{-1}h_{·r,N}′(I_T ⊗ M̄_N)′Q_{1,N}(I_T ⊗ M̄_N)h_{·s,N}
 = o_p(1)o_p(1)O(1) = o_p(1) .

The same holds for δ_{i,N}, i = 2, ..., 7. Obviously, an analogous proof applies for E_{0,N} and thus also for E_N = E_{0,N} + E_{1,N}. It follows that ν_N = o_p(1). Thus, N^{-1}F̃_N′F̃_N − N^{-1}F_N′F_N = o_p(1). That N^{-1}F_N′F_N = O(1) follows from the properties of F_N and the elements of E_{1,N} (E_{0,N}).
Proof of part (ii) of Lemma B.2

Denote the r-th element of the difference N^{-1}F̃_N′a_N − N^{-1}F_N′a_N as w_N, which is given by

w_N = N^{-1}(f̃_{·r,N}′a_N − f_{·r,N}′a_N) , r = 1, ..., P* ,                        (B.89)

or

w_N = N^{-1}(k̃_{·r,N}′G̃_N′a_N − k_{·r,N}′G_N′a_N) .

Proof under Assumption (a)

Consider first the case F_N = F_{μ,N} = (e_T′ ⊗ I_N)[I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})^{-1}]H_N. We then have F_N = G_N K_N with K_N = [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})^{-1}]H_N and G_N = (e_T′ ⊗ I_N); hence, G̃_N = G_N = (e_T′ ⊗ I_N). Obviously, the subsequent proof also covers the case F_N = F_{v,N}. It follows that

w_N = N^{-1}(k̃_{·r,N} − k_{·r,N})′a_N⁺ ,                                            (B.90)

where the elements of the vector a_N⁺ = (e_T ⊗ I_N)a_N are uniformly bounded in absolute value, since the row and column sums of (e_T ⊗ I_N) are uniformly bounded in absolute value and since the elements of a_N are uniformly bounded in absolute value. (See Remark A.1 in the Appendix.)

In light of the properties of Q_{0,N}, Q_{1,N}, and a_N⁺, it follows directly from the proof of part (i) of Lemma B.2, where we showed that δ_{2,N} = N^{-1}(k̃_{·r,N} − k_{·r,N})′E_N k_{·s,N} = o_p(1), that w_N = o_p(1) and thus N^{-1}F̃_N′a_N − N^{-1}F_N′a_N = o_p(1). That N^{-1}F_N′a_N = O(1) follows immediately from the properties of F_N and a_N by Remark A.1.
Proof under Assumption (b)

Consider first the case F_N = F_{v,N}**. Then we have F_N = G_N K_N with G_N = Ω_{ε,N}^{-1} and K_N = [I_T ⊗ (I_N − Σ_{m=1}^S ρ_{m,N}M_{m,N})]H_N. Notice that the subsequent proof also covers the case F_N = F_{μ,N}**. In that case,

w_N = N^{-1}(k̃_{·r,N}′G̃_N′a_N − k_{·r,N}′G_N′a_N)                                  (B.91)
 = N^{-1}(k̃_{·r,N} − k_{·r,N})′Ω̃_{ε,N}^{-1}a_N + N^{-1}k_{·r,N}′(Ω̃_{ε,N}^{-1} − Ω_{ε,N}^{-1})a_N .

Substituting Ω̃_{ε,N}^{-1} = σ̃_v^{-2}Q_{0,N} + σ̃_1^{-2}Q_{1,N}, we have w_N = w_N^v + w_N^1, where

w_N^v = N^{-1}k̃_{·r,N}′σ̃_v^{-2}Q_{0,N}a_N − N^{-1}k_{·r,N}′σ_v^{-2}Q_{0,N}a_N ,
w_N^1 = σ̃_1^{-2}N^{-1}k̃_{·r,N}′Q_{1,N}a_N − σ_1^{-2}N^{-1}k_{·r,N}′Q_{1,N}a_N .

Considering w_N^v, we have w_N^v = w_{1,N}^v + w_{2,N}^v, where

w_{1,N}^v = σ̃_v^{-2}N^{-1}(k̃_{·r,N} − k_{·r,N})′Q_{0,N}a_N ,                       (B.92)
w_{2,N}^v = (σ̃_v^{-2} − σ_v^{-2})N^{-1}k_{·r,N}′Q_{0,N}a_N .

The first term, w_{1,N}^v, is o_p(1) by arguments analogous to those in the proof of part (i) under Assumption (b), using the consistency of ρ̃_{s,N}, s = 1, ..., S. In light of the properties of k_{·r,N}, Q_{0,N}, and a_N, we have N^{-1}k_{·r,N}′Q_{0,N}a_N = O(1), and since (σ̃_v^{-2} − σ_v^{-2}) = o_p(1), it follows that w_{2,N}^v = o_p(1); this amounts to substituting Ω̃_{ε,N}^{-1} − Ω_{ε,N}^{-1} = (σ̃_v^{-2} − σ_v^{-2})Q_{0,N} + (σ̃_1^{-2} − σ_1^{-2})Q_{1,N}. Analogous arguments apply to w_N^1, such that w_N = o_p(1). Since this holds for all r = 1, ..., P*, we have that N^{-1}F̃_{v,N}**′a_N − N^{-1}F_{v,N}**′a_N = o_p(1). That N^{-1}F_{v,N}**′a_N = O(1) follows immediately from the properties of F_{v,N}** and a_N by Remark A.1. An analogous proof can be used to show that N^{-1}F̃_{μ,N}**′a_N − N^{-1}F_{μ,N}**′a_N = o_p(1) and that N^{-1}F_{μ,N}**′a_N = O(1), which completes the proof.
Proof of Theorem 3
As part of proving Theorem 3, it has to be shown that Ψ̃_N − Ψ_N = o_p(1). Observe that, in light of (15), each element of Ψ̃_N and the corresponding element of Ψ_N can be written as

Ẽ_{r,s,N}^{p,q} = Ẽ_{r,s,N}^{p,q,*} + Ẽ_{r,s,N}^{p,q,**} + Ẽ_{r,s,N}^{p,q,***} + Ẽ_{r,s,N}^{p,q,****} ,    (B.93)
E_{r,s,N}^{p,q} = E_{r,s,N}^{p,q,*} + E_{r,s,N}^{p,q,**} + E_{r,s,N}^{p,q,***} + E_{r,s,N}^{p,q,****} ,

for p, q = 1, ..., 4, (a,b), r, s = 1, ..., S+1, where¹⁰

Ẽ_{r,s,N}^{p,q,*} = 2N^{-1}σ̃_{v,N}⁴Tr(A_{p,r,N}^v A_{q,s,N}^v) + 2N^{-1}σ̃_{μ,N}⁴Tr(A_{p,r,N}^μ A_{q,s,N}^μ) + 4σ̃_{μ,N}²σ̃_{v,N}²N^{-1}Tr[(A_{p,r,N}^{v,μ})′A_{q,s,N}^{v,μ}] ,

Ẽ_{r,s,N}^{p,q,**} = σ̃_{v,N}²N^{-1}ã_{p,r,N}^{v}′ã_{q,s,N}^v + σ̃_{μ,N}²N^{-1}ã_{p,r,N}^{μ}′ã_{q,s,N}^μ ,

Ẽ_{r,s,N}^{p,q,***} = σ̃_{v,N}^(3)N^{-1}Σ_{i=1}^{NT}(ã_{p,r,i,N}^v ã_{q,s,ii,N}^v + ã_{p,r,ii,N}^v ã_{q,s,i,N}^v) + σ̃_{μ,N}^(3)N^{-1}Σ_{i=1}^{N}(ã_{p,r,i,N}^μ ã_{q,s,ii,N}^μ + ã_{p,r,ii,N}^μ ã_{q,s,i,N}^μ) ,

Ẽ_{r,s,N}^{p,q,****} = (σ̃_{v,N}^(4) − 3σ̃_{v,N}⁴)N^{-1}Σ_{i=1}^{NT}ã_{p,r,ii,N}^v ã_{q,s,ii,N}^v + (σ̃_{μ,N}^(4) − 3σ̃_{μ,N}⁴)N^{-1}Σ_{i=1}^{N}ã_{p,r,ii,N}^μ ã_{q,s,ii,N}^μ ,

and E_{r,s,N}^{p,q,*}, E_{r,s,N}^{p,q,**}, E_{r,s,N}^{p,q,***}, and E_{r,s,N}^{p,q,****} are defined analogously, with the true moments σ_v⁴, σ_μ⁴, σ_μ²σ_v², σ_v², σ_μ², σ_v^(3), σ_μ^(3), σ_v^(4), and σ_μ^(4) and the elements of the untilded vectors and matrices in place of their estimates.
To prove that Ψ̃_N − Ψ_N = o_p(1), we show that Ẽ_{r,s,N}^{p,q,*} − E_{r,s,N}^{p,q,*} = o_p(1), Ẽ_{r,s,N}^{p,q,**} − E_{r,s,N}^{p,q,**} = o_p(1), Ẽ_{r,s,N}^{p,q,***} − E_{r,s,N}^{p,q,***} = o_p(1), and Ẽ_{r,s,N}^{p,q,****} − E_{r,s,N}^{p,q,****} = o_p(1).

i) Proof that Ẽ_{r,s,N}^{p,q,*} − E_{r,s,N}^{p,q,*} = o_p(1)

Consider

Ẽ_{r,s,N}^{p,q,*} − E_{r,s,N}^{p,q,*} = (σ̃_{v,N}⁴ − σ_v⁴)2N^{-1}Tr(A_{p,r,N}^v A_{q,s,N}^v) + (σ̃_{μ,N}⁴ − σ_μ⁴)2N^{-1}Tr(A_{p,r,N}^μ A_{q,s,N}^μ) + (σ̃_{μ,N}²σ̃_{v,N}² − σ_μ²σ_v²)4N^{-1}Tr[(A_{p,r,N}^{v,μ})′A_{q,s,N}^{v,μ}] ,
¹⁰ See Eqs. (34) in the main text for the structure of the matrix Ψ_N and the proper indexation of its elements.
and note that the row and column sums of the matrices A_{p,r,N}^v, A_{q,s,N}^v, A_{p,r,N}^μ, A_{q,s,N}^μ, A_{p,r,N}^{v,μ}, and A_{q,s,N}^{v,μ} are uniformly bounded in absolute value by some constant, say K_A. It follows that

2N^{-1}|Tr(A_{p,r,N}^v A_{q,s,N}^v)| ≤ 2N^{-1}NTK_A² = 2TK_A² = O(1) ,
2N^{-1}|Tr(A_{p,r,N}^μ A_{q,s,N}^μ)| ≤ 2N^{-1}NK_A² = 2K_A² = O(1) ,
4N^{-1}|Tr[(A_{p,r,N}^{v,μ})′A_{q,s,N}^{v,μ}]| ≤ 4N^{-1}NTK_A² = 4TK_A² = O(1) ,

and thus Ẽ_{r,s,N}^{p,q,*} − E_{r,s,N}^{p,q,*} = o_p(1).
ii) Proof that Ẽ_{r,s,N}^{p,q,**} − E_{r,s,N}^{p,q,**} = o_p(1)

Note that

Ẽ_{r,s,N}^{p,q,**} − E_{r,s,N}^{p,q,**} = σ̃_{v,N}²N^{-1}α̃_{p,r,N}′P̃_N′F̃_{v,N}′F̃_{v,N}P̃_N α̃_{q,s,N} + σ̃_{μ,N}²N^{-1}α̃_{p,r,N}′P̃_N′F̃_{μ,N}′F̃_{μ,N}P̃_N α̃_{q,s,N}
 − σ_v²N^{-1}α_{p,r,N}′P_N′F_{v,N}′F_{v,N}P_N α_{q,s,N} − σ_μ²N^{-1}α_{p,r,N}′P_N′F_{μ,N}′F_{μ,N}P_N α_{q,s,N}
 = φ_{r,s,N}^{p,q,**,v} + φ_{r,s,N}^{p,q,**,μ} ,

where

φ_{r,s,N}^{p,q,**,v} = σ̃_{v,N}²N^{-1}α̃_{p,r,N}′P̃_N′F̃_{v,N}′F̃_{v,N}P̃_N α̃_{q,s,N} − σ_v²N^{-1}α_{p,r,N}′P_N′F_{v,N}′F_{v,N}P_N α_{q,s,N} , and
φ_{r,s,N}^{p,q,**,μ} = σ̃_{μ,N}²N^{-1}α̃_{p,r,N}′P̃_N′F̃_{μ,N}′F̃_{μ,N}P̃_N α̃_{q,s,N} − σ_μ²N^{-1}α_{p,r,N}′P_N′F_{μ,N}′F_{μ,N}P_N α_{q,s,N} .

Consider φ_{r,s,N}^{p,q,**,v}, which can be written as

φ_{r,s,N}^{p,q,**,v} = (σ̃_{v,N}² − σ_v²)N^{-1}α_{p,r,N}′P_N′F_{v,N}′F_{v,N}P_N α_{q,s,N}
 + σ̃_{v,N}²(N^{-1}α̃_{p,r,N}′P̃_N′F̃_{v,N}′F̃_{v,N}P̃_N α̃_{q,s,N} − N^{-1}α_{p,r,N}′P_N′F_{v,N}′F_{v,N}P_N α_{q,s,N}) .

Note that (σ̃_{v,N}² − σ_v²) = o_p(1) and that σ̃_{v,N}² = O_p(1). By assumption, P̃_N − P_N = o_p(1), P_N = O(1) and thus P̃_N = O_p(1), where the dimension of P_N is P* × P; by Lemma B.1, α̃_N − α_N = o_p(1), α_N = O(1) and thus α̃_N = O_p(1), where the dimension of α_N is P × 1. Moreover, N^{-1}F̃_{v,N}′F̃_{v,N} − N^{-1}F_{v,N}′F_{v,N} is o_p(1) and N^{-1}F_{v,N}′F_{v,N} = O(1) by Lemma B.2. It follows that φ_{r,s,N}^{p,q,**,v} = o_p(1). By Lemma B.2 it also holds that N^{-1}F̃_{μ,N}′F̃_{μ,N} − N^{-1}F_{μ,N}′F_{μ,N} = o_p(1) and, accordingly, φ_{r,s,N}^{p,q,**,μ} = o_p(1). It follows that Ẽ_{r,s,N}^{p,q,**} − E_{r,s,N}^{p,q,**} = o_p(1), and in light of Lemma B.2 it is readily observed that this also holds when F_{μ,N}** and F_{v,N}** are used instead of F_{μ,N} and F_{v,N}.
iii) Proof that Ẽ_{r,s,N}^{p,q,***} − E_{r,s,N}^{p,q,***} = o_p(1)

Observe that

Ẽ_{r,s,N}^{p,q,***} − E_{r,s,N}^{p,q,***} = φ_{r,s,N}^{p,q,***,v} + φ_{r,s,N}^{p,q,***,μ} ,

where

φ_{r,s,N}^{p,q,***,v} = σ̃_{v,N}^(3)N^{-1}Σ_{i=1}^{NT}(ã_{p,r,i,N}^v ã_{q,s,ii,N}^v + ã_{p,r,ii,N}^v ã_{q,s,i,N}^v) − σ_{v,N}^(3)N^{-1}Σ_{i=1}^{NT}(a_{p,r,i,N}^v a_{q,s,ii,N}^v + a_{p,r,ii,N}^v a_{q,s,i,N}^v) ,
φ_{r,s,N}^{p,q,***,μ} = σ̃_{μ,N}^(3)N^{-1}Σ_{i=1}^{N}(ã_{p,r,i,N}^μ ã_{q,s,ii,N}^μ + ã_{p,r,ii,N}^μ ã_{q,s,i,N}^μ) − σ_{μ,N}^(3)N^{-1}Σ_{i=1}^{N}(a_{p,r,i,N}^μ a_{q,s,ii,N}^μ + a_{p,r,ii,N}^μ a_{q,s,i,N}^μ) .

Next consider φ_{r,s,N}^{p,q,***,v} and note that ã_{p,r,N}^v = F̃_{v,N}P̃_N α̃_{p,r,N}; moreover, let d_{p,r,N}^v denote the NT × 1 vector made up of the main diagonal elements a_{p,r,ii,N}^v of the matrix A_{p,r,N}^v. It follows that

φ_{r,s,N}^{p,q,***,v} = σ̃_{v,N}^(3)N^{-1}(ã_{p,r,N}^{v}′d_{q,s,N}^v − a_{p,r,N}^{v}′d_{q,s,N}^v) + σ̃_{v,N}^(3)N^{-1}(d_{p,r,N}^{v}′ã_{q,s,N}^v − d_{p,r,N}^{v}′a_{q,s,N}^v)
 + (σ̃_{v,N}^(3) − σ_{v,N}^(3))N^{-1}(a_{p,r,N}^{v}′d_{q,s,N}^v + d_{p,r,N}^{v}′a_{q,s,N}^v) .

Note that σ_v^(3) = O(1), (σ̃_{v,N}^(3) − σ_v^(3)) = o_p(1), and σ̃_{v,N}^(3) = O_p(1). Moreover, by the properties of a_{p,r,N}^v and d_{p,r,N}^v, N^{-1}(a_{p,r,N}^{v}′d_{q,s,N}^v + d_{p,r,N}^{v}′a_{q,s,N}^v) = O(1), such that the term in the second line is o_p(1). Next note that ã_{p,r,N}^{v}′d_{q,s,N}^v = α̃_{p,r,N}′P̃_N′F̃_{v,N}′d_{q,s,N}^v and a_{p,r,N}^{v}′d_{q,s,N}^v = α_{p,r,N}′P_N′F_{v,N}′d_{q,s,N}^v, such that the first term in the first line is given by σ̃_{v,N}^(3)(α̃_{p,r,N}′P̃_N′N^{-1}F̃_{v,N}′d_{q,s,N}^v − α_{p,r,N}′P_N′N^{-1}F_{v,N}′d_{q,s,N}^v). By assumption, P̃_N − P_N = o_p(1), P_N = O(1) and thus P̃_N = O_p(1), where the dimension of P_N is P* × P. Moreover, α̃_N − α_N = o_p(1), α_N = O(1) and thus α̃_N = O_p(1), where the dimension of α_N is P × 1. By Lemma B.2, part (ii), we have that N^{-1}F̃_{v,N}′d_{q,s,N}^v − N^{-1}F_{v,N}′d_{q,s,N}^v = o_p(1) and N^{-1}F_{v,N}′d_{q,s,N}^v = O(1). It follows that α̃_{p,r,N}′P̃_N′N^{-1}F̃_{v,N}′d_{q,s,N}^v − α_{p,r,N}′P_N′N^{-1}F_{v,N}′d_{q,s,N}^v = o_p(1). By analogous arguments, the second term in the first line is o_p(1), from which it follows that φ_{r,s,N}^{p,q,***,v} = o_p(1). An analogous proof can be used to show that φ_{r,s,N}^{p,q,***,μ} = o_p(1). Hence, Ẽ_{r,s,N}^{p,q,***} − E_{r,s,N}^{p,q,***} = o_p(1).
37
iv) Proof that $\tilde E^{p,q,***}_{r,s,N}-E^{p,q,***}_{r,s,N}=o_p(1)$

Consider
$$\tilde E^{p,q,***}_{r,s,N}-E^{p,q,***}_{r,s,N}=[(\tilde\sigma^{(4)}_v-3\tilde\sigma^4_v)-(\sigma^{(4)}_v-3\sigma^4_v)]N^{-1}a^{v\prime}_{p,r,N}a^v_{q,s,N}+[(\tilde\sigma^{(4)}_\mu-3\tilde\sigma^4_\mu)-(\sigma^{(4)}_\mu-3\sigma^4_\mu)]N^{-1}a^{\mu\prime}_{p,r,N}a^\mu_{q,s,N}.$$
By the properties of the matrices $A^v_{p,r,N}$, $A^v_{q,s,N}$, $A^\mu_{p,r,N}$, and $A^\mu_{q,s,N}$, it follows by arguments analogous to above that $N^{-1}a^{v\prime}_{p,r,N}a^v_{q,s,N}=O(1)$ and that $N^{-1}a^{\mu\prime}_{p,r,N}a^\mu_{q,s,N}=O(1)$. Since $(\tilde\sigma^{(4)}_{v,N}-\sigma^{(4)}_v)=o_p(1)$ and $(\tilde\sigma^{(4)}_{\mu,N}-\sigma^{(4)}_\mu)=o_p(1)$, and since $(\tilde\sigma^2_{v,N}-\sigma^2_v)=o_p(1)$ and $(\tilde\sigma^2_{\mu,N}-\sigma^2_\mu)=o_p(1)$, and thus also $(\tilde\sigma^4_{v,N}-\sigma^4_v)=o_p(1)$ and $(\tilde\sigma^4_{\mu,N}-\sigma^4_\mu)=o_p(1)$, it follows that $\tilde E^{p,q,***}_{r,s,N}-E^{p,q,***}_{r,s,N}=o_p(1)$.

We have shown that $\tilde E^{p,q,*}_{r,s,N}-E^{p,q,*}_{r,s,N}=o_p(1)$, $\tilde E^{p,q,**}_{r,s,N}-E^{p,q,**}_{r,s,N}=o_p(1)$, and $\tilde E^{p,q,***}_{r,s,N}-E^{p,q,***}_{r,s,N}=o_p(1)$ for all elements of $\Psi_N$, $p,q=1,\dots,4,(a,b)$, $r,s=1,\dots,S+1$. It follows that $\tilde\Psi_N-\Psi_N=o_p(1)$.
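The moment-consistency statements invoked throughout this section can be illustrated numerically. A minimal sketch (Python/NumPy; illustrative only, not part of the formal argument) checks that the sample analogues of $\sigma^2_v$, $\sigma^{(3)}_v$, and $\sigma^{(4)}_v$ approach their population counterparts for iid standard normal innovations, for which $\sigma^2_v=1$, $\sigma^{(3)}_v=0$, and $\sigma^{(4)}_v=3\sigma^4_v=3$:

```python
import numpy as np

# Illustrative only: iid N(0,1) innovations, so the population moments are
# sigma_v^2 = 1, sigma_v^(3) = E[v^3] = 0, sigma_v^(4) = E[v^4] = 3.
rng = np.random.default_rng(42)
n = 1_000_000
v = rng.standard_normal(n)

s2 = np.mean(v**2)   # sample analogue of sigma_v^2
s3 = np.mean(v**3)   # sample analogue of sigma_v^(3)
s4 = np.mean(v**4)   # sample analogue of sigma_v^(4)
# by the law of large numbers, (s2, s3, s4) -> (1, 0, 3) as n grows
```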
We are now ready to prove consistency of the estimate of the variance-covariance matrix, i.e., that $\tilde\Omega_{\tilde\theta_N}-\Omega_{\tilde\theta_N}=o_p(1)$, where
$$\Omega_{\tilde\theta_N}(\Theta_N)=(J'_N\Theta_N J_N)^{-1}J'_N\Theta_N\Psi_N\Theta_N J_N(J'_N\Theta_N J_N)^{-1}.$$
As proved above, we have $\tilde\Psi_N-\Psi_N=o_p(1)$. By Assumption 5, we have $\tilde\Theta_N-\Theta_N=o_p(1)$, $\Theta_N=O(1)$, and $\tilde\Theta_N=O_p(1)$. Let $\Xi_N=J'_N\Theta_N J_N=B'_N\Gamma'_N\Theta_N\Gamma_N B_N$ as defined in Theorem 2 and $\tilde\Xi_N=\tilde J'_N\tilde\Theta_N\tilde J_N=\tilde B'_N\tilde\Gamma'_N\tilde\Theta_N\tilde\Gamma_N\tilde B_N$.¹¹ In Theorem 2, we showed that $\tilde J_N=O_p(1)$, $J_N=O(1)$, and $\tilde J_N-J_N=o_p(1)$, and that $\tilde\Xi_N=O_p(1)$, $\tilde\Xi^{-1}_N=O_p(1)$, and $\tilde\Xi^{-1}_N-\Xi^{-1}_N=o_p(1)$. It now follows that $\tilde\Omega_{\tilde\theta_N}-\Omega_{\tilde\theta_N}=o_p(1)$.

¹¹ There is a slight discrepancy to the definition of $\Xi_N$ in Theorem 2: here $\tilde B_N$ is used rather than $\hat B_N$, which does not affect the proof, however, noting that both $\tilde\rho_N$ and $\hat\rho_N$ are consistent.
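The sandwich form of $\Omega_{\tilde\theta_N}(\Theta_N)$ can also be cross-checked numerically: for the efficient weighting $\Theta_N=\Psi^{-1}_N$ the sandwich collapses to $(J'_N\Psi^{-1}_N J_N)^{-1}$. A minimal sketch (Python/NumPy; dimensions and matrices are hypothetical and for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def sandwich(J, Theta, Psi):
    # (J' Theta J)^{-1} J' Theta Psi Theta J (J' Theta J)^{-1}
    bread = np.linalg.inv(J.T @ Theta @ J)
    return bread @ (J.T @ Theta @ Psi @ Theta @ J) @ bread

# hypothetical sizes: 6 moment conditions, 3 parameters
J = rng.normal(size=(6, 3))        # Jacobian-type matrix J_N
A = rng.normal(size=(6, 6))
Psi = A @ A.T + 6 * np.eye(6)      # positive definite Psi_N

# efficient weighting Theta = Psi^{-1} collapses the sandwich
Theta = np.linalg.inv(Psi)
lhs = sandwich(J, Theta, Psi)
rhs = np.linalg.inv(J.T @ Theta @ J)
ok = np.allclose(lhs, rhs)         # the two forms coincide
```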
III. Proof of Theorem 4 (Joint Distribution of $\tilde\rho_N$ and Other Model Parameters)

The subsequent proof will focus on the case of $F_{v,N}$ and $F_{\mu,N}$; this also covers the case of $F^{**}_{v,N}$ and $F^{**}_{\mu,N}$. The first line in Theorem 4 holds in light of Assumption 7 (for $N^{1/2}\Delta_N$), bearing in mind that $T_{v,N}=F_{v,N}P_N$ and $T_{\mu,N}=F_{\mu,N}P_N$, and Theorem 2 (for $N^{1/2}(\tilde\theta_N-\theta_N)$).

We next prove that $\xi_{o,N}=\Psi^{-1/2}_{o,N}[(NT)^{-1/2}\xi'_N F_N, q'_N]' \stackrel{d}{\rightarrow} N(0, I_{P^*+4S+2})$ by verifying that the assumptions of the central limit theorem A.1 by Kelejian and Prucha (2010) are fulfilled. Note that $\lambda_{\min}(\Psi_{o,N})\ge c^*_{\Psi_o}>0$ by assumption. In Theorem 2, we verified that the stacked innovations $\xi_N$, the matrices $A_{1,s,N},\dots,A_{4,s,N}$, $s=1,\dots,S$, $A_{a,N}$, and $A_{b,N}$, and the vectors $a_{1,s,N},\dots,a_{4,s,N}$, $s=1,\dots,S$, $a_{a,N}$, and $a_{b,N}$ satisfy the assumptions of the central limit theorem by Kelejian and Prucha (2010, Theorem A.1).
Next, consider the two blocks of $F_N=(F_{v,N},F_{\mu,N})$, which are given by
$$F_{v,N}=[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]'H_N, \text{ and}$$
$$F_{\mu,N}=[\sigma_1^{-2}(e_T\otimes I_N)]'\Omega_{\varepsilon,N}[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]'H_N.$$
Since the row and column sums of $[\sigma_1^{-2}(e_T\otimes I_N)]$, $\Omega_{\varepsilon,N}$, and $[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]$ are uniformly bounded in absolute value, and since the elements of the matrix $H_N$ are uniformly bounded in absolute value, it follows that the elements of $F_N$ are also uniformly bounded in absolute value. Hence, the linear form $F'_N\xi_N=F'_{v,N}v_N+F'_{\mu,N}\mu_N$ also fulfils the assumptions of Theorem A.1. As a consequence, $\xi_{o,N}\stackrel{d}{\rightarrow}N(0, I_{P^*+4S+2})$.
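The block $F_{v,N}=[I_T\otimes(I_N-\sum_m\rho_{m,N}M_{m,N})^{-1}]'H_N$ can be formed without ever building the $NT\times NT$ inverse, by solving one $N\times N$ system per time period. A minimal sketch (Python/NumPy; $N$, $T$, $S$, the weight matrices, and the parameter values are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, S, P = 8, 3, 2, 4  # hypothetical small panel: N units, T periods, S weight matrices, P instruments

# hypothetical row-normalized spatial weight matrices M_m (zero diagonal)
M = []
for _ in range(S):
    W = rng.random((N, N))
    np.fill_diagonal(W, 0.0)
    M.append(W / W.sum(axis=1, keepdims=True))
rho = np.array([0.3, 0.2])          # spatial parameters with sum of |rho_m| < 1
H = rng.normal(size=(N * T, P))     # instrument matrix H_N, stacked by period

R = np.eye(N) - sum(r * Mm for r, Mm in zip(rho, M))   # I_N - sum_m rho_m M_m

# direct Kronecker construction: F_v = [I_T kron R^{-1}]' H
F_direct = np.kron(np.eye(T), np.linalg.inv(R)).T @ H

# equivalent period-by-period solve, avoiding the NT x NT matrix:
# the t-th block of [I_T kron R^{-1}]' H is (R')^{-1} H_t = solve(R', H_t)
F_block = np.vstack([np.linalg.solve(R.T, H[t * N:(t + 1) * N]) for t in range(T)])

same = np.allclose(F_direct, F_block)   # both constructions agree
```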
In the proofs of Theorems 2 and 3, we showed that $\tilde\Psi_N-\Psi_N=o_p(1)$, $\Psi_N=O(1)$, and $\tilde\Psi_N=O_p(1)$. By analogous arguments, this also holds for the corresponding submatrices of $\Psi_{o,N}$. Hence, $\tilde\Psi_{o,N}-\Psi_{o,N}=o_p(1)$ and $\Psi_{o,N}=O(1)$, and thus $\tilde\Psi_{o,N}=O_p(1)$. By assumption, $\tilde P_N-P_N=o_p(1)$, $P_N=O(1)$, and $\tilde P_N=O_p(1)$, as well as $\tilde\Theta_N-\Theta_N=o_p(1)$, $\Theta_N=O(1)$, and $\tilde\Theta_N=O_p(1)$. In the proof of Theorem 2 we showed that $\tilde J_N-J_N=o_p(1)$, $J_N=O(1)$, and $\tilde J_N=O_p(1)$, and that $(\tilde J'_N\tilde\Theta_N\tilde J_N)^{-1}-(J'_N\Theta_N J_N)^{-1}=o_p(1)$, $(J'_N\Theta_N J_N)^{-1}=O(1)$, and $(\tilde J'_N\tilde\Theta_N\tilde J_N)^{-1}=O_p(1)$. It now follows that $\tilde\Omega_{o,N}-\Omega_{o,N}=o_p(1)$ and $\Omega_{o,N}=O(1)$, and thus $\tilde\Omega_{o,N}=O_p(1)$.
APPENDIX C.
Proof of Lemma 1.
In light of Eqs. (4a) and (4b), Assumptions 3 and 8, as well as $\sup_N\|\beta_N\|\le b<\infty$, it follows that all columns of $Z_N=(X_N,Y_N)$ are of the form $\pi_N+\Pi_N\varepsilon_N$, where the elements of the vector $\pi_N$ and the row and column sums of the matrix $\Pi_N$ are bounded uniformly in absolute value (see Remark A.1 in Appendix A). It follows from Lemma C.2 in Kelejian and Prucha (2010) that the fourth moments of the elements of the matrix $D_N=-Z_N$ are bounded uniformly by some finite constant and that Assumption 6 holds.
Next, note that
$$(NT)^{1/2}(\tilde\delta_N-\delta_N)=\tilde P_N(NT)^{-1/2}F'_{v,N}v_N+\tilde P_N(NT)^{-1/2}F'_{\mu,N}\mu_N,$$
where $\tilde P_N$ is defined in the Lemma, and
$$F_{v,N}=[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]'H_N, \text{ and}$$
$$F_{\mu,N}=[\sigma_1^{-2}(e_T\otimes I_N)]'\Omega_{\varepsilon,N}[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]'H_N.$$
In light of Assumption 10, $\tilde P_N-P_N=o_p(1)$ and $P_N=O(1)$, with $P_N$ as defined in the Lemma. By Assumptions 2, 3, and 9, the elements of $F_{v,N}$ and $F_{\mu,N}$ are bounded uniformly in absolute value. By Assumption 2, $E(v_N)=0$, $E(\mu_N)=0$, and the diagonal variance-covariance matrices of $v_N$ and $\mu_N$ have uniformly bounded elements. Thus, $E[(NT)^{-1/2}F'_{v,N}v_N]=0$ and the elements of the variance-covariance matrix of $(NT)^{-1/2}F'_{v,N}v_N$, i.e., $(NT)^{-1}\sigma^2_v F'_{v,N}F_{v,N}$, are bounded uniformly in absolute value (see Remark A.1 in Appendix A). Moreover, $E[(NT)^{-1/2}F'_{\mu,N}\mu_N]=0$, and the elements of the variance-covariance matrix of $(NT)^{-1/2}F'_{\mu,N}\mu_N$, i.e., $(NT)^{-1}\sigma^2_\mu F'_{\mu,N}F_{\mu,N}$, are bounded uniformly in absolute value. It follows from Chebychev's inequality that $(NT)^{-1/2}F'_{v,N}v_N=O_p(1)$ and $(NT)^{-1/2}F'_{\mu,N}\mu_N=O_p(1)$, and consequently
$$(NT)^{1/2}(\tilde\delta_N-\delta_N)=P_N(NT)^{-1/2}F'_{v,N}v_N+P_N(NT)^{-1/2}F'_{\mu,N}\mu_N+o_p(1)$$
and $P_N(NT)^{-1/2}F'_{v,N}v_N+P_N(NT)^{-1/2}F'_{\mu,N}\mu_N=O_p(1)$. This completes the proof, recalling that $T_N=(T_{v,N},T_{\mu,N})$ with $T_{v,N}=F_{v,N}P_N$ and $T_{\mu,N}=F_{\mu,N}P_N$.
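The proofs repeatedly exploit the spectral structure of the error-component covariance, in particular that $\Omega^{-1}_{\varepsilon,N}=\sigma_v^{-2}Q_{0,N}+\sigma_1^{-2}Q_{1,N}$. A minimal numerical sketch (Python/NumPy), assuming the standard error-component definitions $Q_{0,N}=(I_T-J_T/T)\otimes I_N$, $Q_{1,N}=(J_T/T)\otimes I_N$, and $\sigma_1^2=\sigma_v^2+T\sigma_\mu^2$ as in Kapoor, Kelejian, and Prucha (2007), with hypothetical parameter values:

```python
import numpy as np

N, T = 5, 4
sv2, smu2 = 1.5, 0.7          # hypothetical sigma_v^2 and sigma_mu^2
s12 = sv2 + T * smu2          # sigma_1^2 = sigma_v^2 + T * sigma_mu^2

JT = np.ones((T, T)) / T      # e_T e_T' / T
Q0 = np.kron(np.eye(T) - JT, np.eye(N))   # within transformation Q_0
Q1 = np.kron(JT, np.eye(N))               # between transformation Q_1

# covariance of eps = (e_T kron I_N) mu + v and its spectral inverse
Omega = sv2 * Q0 + s12 * Q1
Omega_inv = (1 / sv2) * Q0 + (1 / s12) * Q1
inv_ok = np.allclose(Omega @ Omega_inv, np.eye(N * T))

# Q_1 absorbs (e_T kron I_N): Omega_inv (e_T kron I_N) = sigma_1^{-2} (e_T kron I_N),
# which is the step behind the sigma_1^{-2}(e_T kron I_N) factors in the text
lift = np.kron(np.ones((T, 1)), np.eye(N))
lift_ok = np.allclose(Omega_inv @ lift, lift / s12)
```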
Proof of Lemma 2.
The FGTSLS estimator is given by
$$\hat\delta_N=(\hat Z^{**\prime}_N\hat Z^{**}_N)^{-1}\hat Z^{**\prime}_N\hat y^{**}_N, \quad\text{where}\quad \hat y^{**}_N=\hat Z^{**}_N\delta_N+\hat u^{**}_N$$
with
$$\hat u^{**}_N=\hat\Omega^{-1/2}_{\varepsilon,N}[I_T\otimes(I_N-\sum_{m=1}^S\hat\rho_{m,N}M_{m,N})]u_N.$$
Substituting $\hat Z^{**}_N=P_{\hat H^{**}}\hat Z^{**}_N=\hat H^{**}_N(\hat H^{**\prime}_N\hat H^{**}_N)^{-1}\hat H^{**\prime}_N\hat Z^{**}_N$, we obtain
$$(NT)^{1/2}[\hat\delta_N-\delta_N]=(NT)^{1/2}\hat\Delta^{**}_N=(NT)^{-1/2}\hat P^{**}_N\hat H^{**\prime}_N\hat u^{**}_N,$$
with
$$\hat P^{**}_N=\{[(NT)^{-1}\hat Z^{**\prime}_N\hat H^{**}_N][(NT)^{-1}\hat H^{**\prime}_N\hat H^{**}_N]^{-1}[(NT)^{-1}\hat H^{**\prime}_N\hat Z^{**}_N]\}^{-1}[(NT)^{-1}\hat Z^{**\prime}_N\hat H^{**}_N][(NT)^{-1}\hat H^{**\prime}_N\hat H^{**}_N]^{-1}.$$
Next note that
$$\hat u^{**}_N(\hat\theta_N)-u^{**}_N(\theta_N)=-\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)u_N$$
$$+(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]u_N$$
$$-(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)u_N$$
$$-(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})(I_T\otimes\bar M_N)u_N,$$
where $\bar M_N$ is a matrix, whose row and column sums are bounded uniformly in absolute value, satisfying
$$\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})M_{m,N}=\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})\bar M_N.$$
Substituting for $\hat u^{**}_N$, we obtain
$$(NT)^{1/2}(\hat\delta_N-\delta_N)=(NT)^{1/2}\hat\Delta^{**}_N=d_{1,N}+d_{2,N}+d_{3,N}+d_{4,N}+d_{5,N}=\sum_{i=1}^5 d_{i,N},\quad\text{where}$$
$$d_{1,N}=(NT)^{-1/2}\hat P^{**}_N\hat H^{**\prime}_N\Omega^{-1/2}_{\varepsilon,N}\varepsilon_N,$$
$$d_{2,N}=-(NT)^{-1/2}\hat P^{**}_N\hat H^{**\prime}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{3,N}=(NT)^{-1/2}\hat P^{**}_N\hat H^{**\prime}_N(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\varepsilon_N,$$
$$d_{4,N}=-(NT)^{-1/2}\hat P^{**}_N\hat H^{**\prime}_N(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{5,N}=-(NT)^{-1/2}\hat P^{**}_N\hat H^{**\prime}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N.$$
Note that the FGTSLS estimator uses (generated) transformed instruments, which are based on the estimate $\hat\theta_N$. Observe that
$$\hat H^{**}_N=H^{**}_N-\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)H_N$$
$$+(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]H_N$$
$$-(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)H_N$$
$$-(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})(I_T\otimes\bar M_N)H_N.$$
Substituting for $\hat H^{**}_N$, we obtain $d_{i,N}=\sum_{j=1}^5 d_{ij,N}$, $i=1,\dots,5$. Considering $d_{1,N}$, we have
$$d_{11,N}=(NT)^{-1/2}\hat P^{**}_N F^{**\prime}_{1,N}v_N+(NT)^{-1/2}\hat P^{**}_N F^{**\prime}_{2,N}\mu_N,$$
with $F^{**\prime}_{1,N}=H^{*\prime}_N(\sigma_v^{-2}Q_{0,N}+\sigma_1^{-2}Q_{1,N})=H^{*\prime}_N\Omega^{-1}_{\varepsilon,N}$, and $F^{**\prime}_{2,N}=\sigma_1^{-2}H^{*\prime}_N(e_T\otimes I_N)$. Further,
$$d_{12,N}=-(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'\Omega^{-1}_{\varepsilon,N}\varepsilon_N,$$
$$d_{13,N}=(NT)^{-1/2}\hat P^{**}_N H'_N[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\Omega^{-1/2}_{\varepsilon,N}\varepsilon_N,$$
$$d_{14,N}=-(NT)^{-1/2}\hat P^{**}_N H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\Omega^{-1/2}_{\varepsilon,N}\varepsilon_N,$$
$$d_{15,N}=-(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\Omega^{-1/2}_{\varepsilon,N}\varepsilon_N.$$
Regarding $d_{2,N}$ we have
$$d_{21,N}=-(NT)^{-1/2}\hat P^{**}_N H^{**\prime}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{22,N}=(NT)^{-1/2}\hat P^{**}_N[\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})]^2 H'_N(I_T\otimes\bar M_N)'\Omega^{-1}_{\varepsilon,N}(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{23,N}=-(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{24,N}=(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{25,N}=(NT)^{-1/2}\hat P^{**}_N[\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})]^2 H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\Omega^{-1/2}_{\varepsilon,N}(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N.$$
Regarding $d_{3,N}$ we have
$$d_{31,N}=(NT)^{-1/2}\hat P^{**}_N H^{**\prime}_N(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\varepsilon_N,$$
$$d_{32,N}=-(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'\Omega^{-1/2}_{\varepsilon,N}(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\varepsilon_N,$$
$$d_{33,N}=(NT)^{-1/2}\hat P^{**}_N H'_N[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})\varepsilon_N=(NT)^{-1/2}\hat P^{**}_N H'_N[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2\varepsilon_N,$$
$$d_{34,N}=-(NT)^{-1/2}\hat P^{**}_N H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2\varepsilon_N,$$
$$d_{35,N}=-(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2\varepsilon_N.$$
Regarding $d_{4,N}$ we have
$$d_{41,N}=-(NT)^{-1/2}\hat P^{**}_N H^{**\prime}_N(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{42,N}=(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'\Omega^{-1/2}_{\varepsilon,N}(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{43,N}=-(NT)^{-1/2}\hat P^{**}_N H'_N[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{44,N}=(NT)^{-1/2}\hat P^{**}_N H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{45,N}=(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N.$$
Regarding $d_{5,N}$ we have
$$d_{51,N}=-(NT)^{-1/2}\hat P^{**}_N H^{**\prime}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{52,N}=(NT)^{-1/2}\hat P^{**}_N[\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})]^2 H'_N(I_T\otimes\bar M_N)'\Omega^{-1/2}_{\varepsilon,N}(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{53,N}=-(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{54,N}=(NT)^{-1/2}\hat P^{**}_N\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N,$$
$$d_{55,N}=(NT)^{-1/2}\hat P^{**}_N[\sum_{m=1}^S(\hat\rho_{m,N}-\rho_{m,N})]^2 H'_N(I_T\otimes\bar M_N)'(\hat\Omega^{-1/2}_{\varepsilon,N}-\Omega^{-1/2}_{\varepsilon,N})^2(I_T\otimes\bar M_N)[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]\varepsilon_N.$$
Summing up, we have
$$(NT)^{1/2}\hat\Delta^{**}_N=\sum_{i=1}^5\sum_{j=1}^5 d_{ij,N}.$$
Next note that, in light of Assumption 10 and since $\hat\theta_N$ is $N^{1/2}$-consistent, it follows that
$$(NT)^{-1}\hat Z^{**\prime}_N\hat Z^{**}_N-Q'_{H^{**}Z^{**}}Q^{-1}_{H^{**}H^{**}}Q_{H^{**}Z^{**}}=o_p(1).$$
By Assumption 10b we also have $Q'_{H^{**}Z^{**}}Q^{-1}_{H^{**}H^{**}}Q_{H^{**}Z^{**}}=O(1)$ and thus $(Q'_{H^{**}Z^{**}}Q^{-1}_{H^{**}H^{**}}Q_{H^{**}Z^{**}})^{-1}=O(1)$. It follows as a special case of Pötscher and Prucha (1997, Lemma F1) that
$$[(NT)^{-1}\hat Z^{**\prime}_N\hat Z^{**}_N]^{-1}-(Q'_{H^{**}Z^{**}}Q^{-1}_{H^{**}H^{**}}Q_{H^{**}Z^{**}})^{-1}=o_p(1).$$
It follows further that $\hat P^{**}_N-P^{**}_N=o_p(1)$ and $P^{**}_N=O(1)$, with $P^{**}_N$ defined in the Lemma.
Next observe that $(\hat\rho_N-\rho_N)=o_p(1)$ and that $(\hat\Omega_{\varepsilon,N}-\Omega_{\varepsilon,N})=o_p(1)I_{NT}$. Note further that all terms $d_{ij,N}$ except for $d_{11,N}$ are of the form $o_p(1)\hat P^{**}_N(NT)^{-1/2}D'_N\varepsilon_N$, where the $D_N$ are $NT\times P^*$ matrices involving products of $(I_T\otimes\bar M_N)$, $[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})^{-1}]$, $H_N$, $\Omega_{\varepsilon,N}$ ($\Omega^{-1}_{\varepsilon,N}$), and $\Omega^{1/2}_{\varepsilon,N}$ ($\Omega^{-1/2}_{\varepsilon,N}$). By the maintained assumptions regarding these matrices, it follows that the elements of $D_N$ are bounded uniformly in absolute value. As a consequence, $E[(NT)^{-1/2}D'_N\varepsilon_N]=0$ and the elements of the variance-covariance matrix of $(NT)^{-1/2}D'_N\varepsilon_N$, i.e., $(NT)^{-1}D'_N\Omega_{\varepsilon,N}D_N$, are bounded uniformly in absolute value (see Remark A.1 in Appendix A). It follows from Chebychev's inequality that $(NT)^{-1/2}D'_N\varepsilon_N=O_p(1)$. As a consequence, all terms $d_{ij,N}$ except for $d_{11,N}$ are $o_p(1)$, and $d_{11,N}=O_p(1)$. Finally, observe that
$$d_{11,N}=(NT)^{-1/2}\hat P^{**}_N F^{**\prime}_{v,N}v_N+(NT)^{-1/2}\hat P^{**}_N F^{**\prime}_{\mu,N}\mu_N,$$
with
$$F^{**}_{v,N}=(\sigma_v^{-2}Q_{0,N}+\sigma_1^{-2}Q_{1,N})H^*_N=\Omega^{-1}_{\varepsilon,N}[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]H_N, \text{ and}$$
$$F^{**}_{\mu,N}=[\sigma_1^{-2}(e_T\otimes I_N)]'H^*_N=[\sigma_1^{-2}(e_T\otimes I_N)]'[I_T\otimes(I_N-\sum_{m=1}^S\rho_{m,N}M_{m,N})]H_N,$$
which completes the proof, recalling that $T^{**}_N=F^{**}_N P^{**}_N$.