Class 9. GMM Estimation, Dynamic Models, Arellano/Bond/Bover

Part 9: GMM Estimation [ 1/57]
Econometric Analysis of Panel Data
William Greene
Department of Economics
Stern School of Business
Part 9: GMM Estimation [ 2/57]
http://people.stern.nyu.edu/wgreene/CumulantInstruments-Racicot-AE(2014)_46(10).pdf
Part 9: GMM Estimation [ 3/57]
Part 9: GMM Estimation [ 4/57]
The NYU No Action Letter
Part 9: GMM Estimation [ 5/57]
Part 9: GMM Estimation [ 6/57]
GMM Estimation for One Equation
$$\bar g(\beta) = \frac{1}{N}\sum_{i=1}^{N} z_i (y_i - x_i'\beta) = \frac{1}{N}\sum_{i=1}^{N} z_i \varepsilon_i$$

$$\mathrm{Asy.Var}[\bar g(\beta)] = \frac{1}{N}\left[\frac{1}{N}\sum_{i=1}^{N}\sigma_i^2 z_i z_i'\right], \quad \text{estimated with } \frac{1}{N}\left[\frac{1}{N}\sum_{i=1}^{N} e_i^2 z_i z_i'\right]$$

based on 2SLS residuals e_i. The GMM estimator then minimizes

$$q = \left[\frac{1}{N}\sum_{i=1}^{N} z_i (y_i - x_i'\beta)\right]'\left[\frac{1}{N}\left(\frac{1}{N}\sum_{i=1}^{N} e_i^2 z_i z_i'\right)\right]^{-1}\left[\frac{1}{N}\sum_{i=1}^{N} z_i (y_i - x_i'\beta)\right].$$
Part 9: GMM Estimation [ 7/57]
GMM for a System of Equations
Simultaneous equations
Labor supply:
$$\text{hours} = f(\text{wage}, x_h): \quad h = x_h'\beta_h + \varepsilon_h$$
$$\text{wage} = f(\text{hours}, x_w): \quad w = x_w'\beta_w + \varepsilon_w$$
Product market equilibrium:
Quantity demanded = f(Price, ...)
Price = f(market demand, ...)
General format:
$$y_1 = x_1'\beta_1 + \varepsilon_1,\quad y_2 = x_2'\beta_2 + \varepsilon_2,\quad \ldots,\quad y_M = x_M'\beta_M + \varepsilon_M$$
Part 9: GMM Estimation [ 8/57]
SUR Model with Endogenous
RHS Variables
SUR System
$$y_1 = x_1'\beta_1 + \varepsilon_1, \quad E[\varepsilon_1 \mid x_1, x_2, \ldots, x_G] \neq 0$$
$$y_2 = x_2'\beta_2 + \varepsilon_2, \ \ldots$$
$$y_G = x_G'\beta_G + \varepsilon_G, \ \ldots$$
Each equation has a set of $L_g \ge K_g$ instruments, $z_g$.
Each equation can be fit by 2SLS, IV, GMM, as before.
Part 9: GMM Estimation [ 9/57]
GMM for the System - Notation
Index: i = 1,...,N for individuals
g = 1,...,G for equations (this would be t=1,...T for a panel)
Data matrices, G rows:
$$\mathbf{y}_i = \begin{bmatrix} y_{i1} \\ y_{i2} \\ \vdots \\ y_{iG} \end{bmatrix},\quad
\mathbf{X}_i = \begin{bmatrix} x_{i1}' & 0 & \cdots & 0 \\ 0 & x_{i2}' & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & x_{iG}' \end{bmatrix},\quad
\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_G \end{bmatrix},\quad
\varepsilon_i = \begin{bmatrix} \varepsilon_{i1} \\ \varepsilon_{i2} \\ \vdots \\ \varepsilon_{iG} \end{bmatrix}$$
The blocks of $\mathbf{X}_i$ have $K_1, K_2, \ldots, K_G$ columns, and
$$\mathbf{y}_i = \mathbf{X}_i\beta + \varepsilon_i.$$
Part 9: GMM Estimation [ 10/57]
Instruments
$$\mathbf{Z}_i = \begin{bmatrix} z_{i1}' & 0 & \cdots & 0 \\ 0 & z_{i2}' & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & z_{iG}' \end{bmatrix}, \quad G \text{ rows (1 for each equation)},\ L_1 + L_2 + \cdots + L_G \text{ columns}$$
Such that
$$E[z_{i1}\varepsilon_{i1}] = E\begin{bmatrix} z_{i1,1}\,\varepsilon_{i1} \\ z_{i1,2}\,\varepsilon_{i1} \\ \vdots \\ z_{i1,L_1}\varepsilon_{i1} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad \text{for } L_1 \text{ instrumental variables.}$$
Same for $z_{i2}\varepsilon_{i2}, \ldots$
Part 9: GMM Estimation [ 11/57]
Moment Equations
$$E[\mathbf{Z}_i'\varepsilon_i] = E\begin{bmatrix} z_{i1}\varepsilon_{i1} \\ z_{i2}\varepsilon_{i2} \\ \vdots \\ z_{iG}\varepsilon_{iG} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\ \begin{matrix} L_1 \text{ rows} \\ L_2 \text{ rows} \\ \vdots \\ L_G \text{ rows} \end{matrix}, \quad \text{for observation } i.$$
Summing over i gives the orthogonality condition,
$$E\left[\frac{1}{N}\sum_{i=1}^{N}\mathbf{Z}_i'\varepsilon_i\right] = E\left[\frac{1}{N}\sum_{i=1}^{N}\begin{bmatrix} z_{i1}\varepsilon_{i1} \\ z_{i2}\varepsilon_{i2} \\ \vdots \\ z_{iG}\varepsilon_{iG} \end{bmatrix}\right] = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\ \begin{matrix} L_1 \text{ rows} \\ L_2 \text{ rows} \\ \vdots \\ L_G \text{ rows} \end{matrix}$$
Part 9: GMM Estimation [ 12/57]
Estimation-1
$$\varepsilon_{ig} = y_{ig} - x_{ig}'\beta_g$$
For one equation, $\hat\beta_g$ = the minimizer of
$$q_g = \sum_{m=1}^{M}\left[\frac{1}{N}\sum_{i=1}^{N} z_{ig,m}\,(y_{ig} - x_{ig}'\hat\beta_g)\right]^2 = \bar g_g(\beta_g)'\,\bar g_g(\beta_g)$$
Leads to 2SLS.
For all equations at the same time, $\hat\beta$ = the minimizer of
$$q = \sum_{g=1}^{G}\sum_{m=1}^{M}\left[\frac{1}{N}\sum_{i=1}^{N} z_{ig,m}\,(y_{ig} - x_{ig}'\hat\beta_g)\right]^2 = \sum_{g=1}^{G}\bar g_g(\beta_g)'\,\bar g_g(\beta_g)$$
If the $\beta_g$'s are all different, this is still equation-by-equation 2SLS.
Part 9: GMM Estimation [ 13/57]
Estimation-2
Assuming the $\varepsilon_{ig}$ are all uncorrelated, equation-by-equation GMM:
$$q_g = \left[\frac{1}{N}\sum_{i=1}^{N} z_{ig}(y_{ig} - x_{ig}'\beta_g)\right]'
\left[\frac{1}{N}\left(\frac{1}{N}\sum_{i=1}^{N} e_{ig}^2\, z_{ig} z_{ig}'\right)\right]^{-1}
\left[\frac{1}{N}\sum_{i=1}^{N} z_{ig}(y_{ig} - x_{ig}'\beta_g)\right].$$
For the system, $q = \sum_{g=1}^{G} q_g$.
Cases to consider:
(1) Coefficient vectors have elements in common or are restricted.
(2) Disturbances are correlated.
Part 9: GMM Estimation [ 14/57]
Estimation-3
Combining GMM criteria:
$$q = \sum_{g=1}^{G}\left[\frac{1}{N}\sum_{i=1}^{N} z_{ig}(y_{ig}-x_{ig}'\beta_g)\right]'
\left[\frac{1}{N}\left(\frac{1}{N}\sum_{i=1}^{N}\hat\varepsilon_{ig}^2\, z_{ig}z_{ig}'\right)\right]^{-1}
\left[\frac{1}{N}\sum_{i=1}^{N} z_{ig}(y_{ig}-x_{ig}'\beta_g)\right]$$
Stacking the G equations (the 1/N factors cancel), this is
$$q = \begin{bmatrix}\sum_{i=1}^N z_{i1}(y_{i1}-x_{i1}'\beta_1)\\ \sum_{i=1}^N z_{i2}(y_{i2}-x_{i2}'\beta_2)\\ \vdots\\ \sum_{i=1}^N z_{iG}(y_{iG}-x_{iG}'\beta_G)\end{bmatrix}'
\begin{bmatrix}\sum_{i=1}^N \hat\varepsilon_{i1}^2 z_{i1}z_{i1}' & 0 & \cdots & 0\\ 0 & \sum_{i=1}^N \hat\varepsilon_{i2}^2 z_{i2}z_{i2}' & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \sum_{i=1}^N \hat\varepsilon_{iG}^2 z_{iG}z_{iG}'\end{bmatrix}^{-1}
\begin{bmatrix}\sum_{i=1}^N z_{i1}(y_{i1}-x_{i1}'\beta_1)\\ \sum_{i=1}^N z_{i2}(y_{i2}-x_{i2}'\beta_2)\\ \vdots\\ \sum_{i=1}^N z_{iG}(y_{iG}-x_{iG}'\beta_G)\end{bmatrix}$$
Part 9: GMM Estimation [ 15/57]
Estimation-4
If disturbances are correlated across equations,
$$q = \begin{bmatrix}\sum_{i=1}^N z_{i1}(y_{i1}-x_{i1}'\beta_1)\\ \sum_{i=1}^N z_{i2}(y_{i2}-x_{i2}'\beta_2)\\ \vdots\\ \sum_{i=1}^N z_{iG}(y_{iG}-x_{iG}'\beta_G)\end{bmatrix}'
\begin{bmatrix}\sum_i \hat\varepsilon_{i1}^2 z_{i1}z_{i1}' & \sum_i \hat\varepsilon_{i1}\hat\varepsilon_{i2}\, z_{i1}z_{i2}' & \cdots & \sum_i \hat\varepsilon_{i1}\hat\varepsilon_{iG}\, z_{i1}z_{iG}'\\
\sum_i \hat\varepsilon_{i2}\hat\varepsilon_{i1}\, z_{i2}z_{i1}' & \sum_i \hat\varepsilon_{i2}^2 z_{i2}z_{i2}' & \cdots & \sum_i \hat\varepsilon_{i2}\hat\varepsilon_{iG}\, z_{i2}z_{iG}'\\
\vdots & \vdots & \ddots & \vdots\\
\sum_i \hat\varepsilon_{iG}\hat\varepsilon_{i1}\, z_{iG}z_{i1}' & \cdots & \cdots & \sum_i \hat\varepsilon_{iG}^2 z_{iG}z_{iG}'\end{bmatrix}^{-1}
\begin{bmatrix}\sum_{i=1}^N z_{i1}(y_{i1}-x_{i1}'\beta_1)\\ \sum_{i=1}^N z_{i2}(y_{i2}-x_{i2}'\beta_2)\\ \vdots\\ \sum_{i=1}^N z_{iG}(y_{iG}-x_{iG}'\beta_G)\end{bmatrix}$$
Part 9: GMM Estimation [ 16/57]
Estimation-5
If disturbances are correlated across equations,
$$q = \sum_{g=1}^{G}\sum_{h=1}^{G}\left[\frac{1}{N}\sum_{i=1}^{N} z_{ig}(y_{ig}-x_{ig}'\beta_g)\right]'\hat{\mathbf{W}}^{gh}\left[\frac{1}{N}\sum_{i=1}^{N} z_{ih}(y_{ih}-x_{ih}'\beta_h)\right]$$
where $\hat{\mathbf{W}}^{gh}$ = the (g,h) block of the inverse of the matrix
$$\begin{bmatrix}\sum_i \hat\varepsilon_{i1}^2 z_{i1}z_{i1}' & \sum_i \hat\varepsilon_{i1}\hat\varepsilon_{i2}\, z_{i1}z_{i2}' & \cdots & \sum_i \hat\varepsilon_{i1}\hat\varepsilon_{iG}\, z_{i1}z_{iG}'\\
\sum_i \hat\varepsilon_{i2}\hat\varepsilon_{i1}\, z_{i2}z_{i1}' & \sum_i \hat\varepsilon_{i2}^2 z_{i2}z_{i2}' & \cdots & \sum_i \hat\varepsilon_{i2}\hat\varepsilon_{iG}\, z_{i2}z_{iG}'\\
\vdots & \vdots & \ddots & \vdots\\
\sum_i \hat\varepsilon_{iG}\hat\varepsilon_{i1}\, z_{iG}z_{i1}' & \cdots & \cdots & \sum_i \hat\varepsilon_{iG}^2 z_{iG}z_{iG}'\end{bmatrix}.$$
Part 9: GMM Estimation [ 17/57]
The Panel Data Case
β is the same in every equation.
If there are L moment equations per period, $E[z_{it}\varepsilon_{it}] = 0$, the number of moment equations is $T \times L$.
If every disturbance at time t is also orthogonal to every set of instruments in every other period s, then $E[z_{it}\varepsilon_{is}] = 0$: TL per period, for T periods, or $T^2 L$ in total.
E.g., L = 10 instruments, T = 5 periods, K = 5 parameters: 250 moment equations (!) for fitting 5 parameters.
Part 9: GMM Estimation [ 18/57]
Hausman and Taylor FE/RE Model
$$y_{it} = x_{1it}'\beta_1 + x_{2it}'\beta_2 + z_{1i}'\alpha_1 + z_{2i}'\alpha_2 + \varepsilon_{it} + u_i$$
$$E[u_i \mid x_{1it}, z_{1i}] = 0$$
$$E[u_i \mid x_{2it}, z_{2i}] \neq 0 \;\Rightarrow\; \text{OLS and GLS are inconsistent}$$
$$\mathrm{Var}[u_i \mid x_{1it}, x_{2it}, z_{1i}, z_{2i}] = \sigma_u^2$$
$$E[\varepsilon_{it} \mid x_{1it}, x_{2it}, z_{1i}, z_{2i}] = 0$$
$$\mathrm{Var}[\varepsilon_{it} \mid x_{1it}, x_{2it}, z_{1i}, z_{2i}] = \sigma_\varepsilon^2$$
$$\mathrm{Cov}[\varepsilon_{it}, u_i \mid x_{1it}, x_{2it}, z_{1i}, z_{2i}] = 0$$
$$\mathrm{Var}[\varepsilon_{it} + u_i \mid x_{1it}, x_{2it}, z_{1i}, z_{2i}] = \sigma_\varepsilon^2 + \sigma_u^2$$
$$\mathrm{Cov}[\varepsilon_{it} + u_i, \varepsilon_{is} + u_i \mid x_{1it}, x_{2it}, z_{1i}, z_{2i}] = \sigma_u^2$$
Part 9: GMM Estimation [ 19/57]
Useful Result: LSDV is an IV Estimator
$$y = X\beta + D\alpha + \varepsilon = X\beta + w$$
$$\mathrm{plim}\,\frac{1}{NT}X'w \neq 0,\ \text{so } X \text{ is endogenous; correlated with } w \text{ because of } D\alpha.$$
$M_D X = X^*$ = x's in group mean deviations.
$$\frac{1}{NT}X^{*\prime}w = \frac{1}{NT}X^{*\prime}(D\alpha + \varepsilon) = \frac{1}{NT}\left(X'M_D D\alpha + X'M_D\varepsilon\right) = \frac{1}{NT}\left(X'\mathbf{0}\,\alpha + X'M_D\varepsilon\right) = \frac{1}{NT}X'M_D\varepsilon,$$
so $\mathrm{plim}\,\frac{1}{NT}X^{*\prime}w = \mathrm{plim}\,\frac{1}{NT}X'M_D\varepsilon = 0$.
$$\mathrm{plim}\,\frac{1}{NT}X^{*\prime}X = \mathrm{plim}\,\frac{1}{NT}X'M_D X = \text{within-groups sums of squares} \neq 0.$$
$X^*$ is a valid instrument.
$$\mathrm{plim}\ b^* = \mathrm{plim}\left[(X^*)'X\right]^{-1}(X^*)'y = \beta$$
Part 9: GMM Estimation [ 20/57]
Hausman and Taylor
$$y_{it} = x_{1it}'\beta_1 + x_{2it}'\beta_2 + z_{1i}'\alpha_1 + z_{2i}'\alpha_2 + \varepsilon_{it} + u_i$$
Deviations from group means remove all time-invariant variables:
$$y_{it} - \bar y_i = (x_{1it} - \bar x_{1i})'\beta_1 + (x_{2it} - \bar x_{2i})'\beta_2 + \varepsilon_{it} - \bar\varepsilon_i$$
Implication: $\beta_1, \beta_2$ are consistently estimated by LSDV.
$(x_{1it} - \bar x_{1i}) = M_D X_1$: $K_1$ instrumental variables
$(x_{2it} - \bar x_{2i}) = M_D X_2$: $K_2$ instrumental variables
$z_{1i}$: $L_1$ instrumental variables (uncorrelated with u)
$z_{2i}$: $L_2$ instrumental variables (where do we get them?)
H&T: $\bar x_{1i} = (I - M_D)X_1$: $K_1$ additional instrumental variables. Needs $K_1 \ge L_2$.
Part 9: GMM Estimation [ 21/57]
H&T’s FGLS Estimator
(1) LSDV estimates of $\beta_1$, $\beta_2$, $\sigma_\varepsilon^2$.
(2) $(\mathbf{e}^*)' = \left[(e_1, e_1, \ldots, e_1),\ (e_2, e_2, \ldots, e_2),\ \ldots,\ (e_N, e_N, \ldots, e_N)\right]$  ($\sum_{i=1}^N T_i$ observations).
$$\mathbf{Z}_i^* = \begin{bmatrix} z_{i1}' & z_{i2}' \\ z_{i1}' & z_{i2}' \\ \vdots & \vdots \\ z_{i1}' & z_{i2}' \end{bmatrix}\quad T_i \text{ rows (repeat the time-invariant variables)},\ L_1 + L_2 \text{ columns}$$
$$\mathbf{W}_i = \begin{bmatrix} z_{i1}' & x_{i1,1}' \\ z_{i1}' & x_{i1,2}' \\ \vdots & \vdots \\ z_{i1}' & x_{i1,T_i}' \end{bmatrix}\quad T_i \text{ rows (repeat } z_{i1}\text{, time-varying } x_{i1,t}),\ L_1 + K_1 \text{ columns}$$
Part 9: GMM Estimation [ 22/57]
H&T’s FGLS Estimator (cont.)
(2 cont.) IV regression of $\mathbf{e}^*$ on $\mathbf{Z}^*$ with instruments $\mathbf{W}_i$ consistently estimates $\alpha_1$ and $\alpha_2$.
(3) With fixed T, the residual variance in (2) estimates $\sigma_u^2 + \sigma_\varepsilon^2/T$. With an unbalanced panel, it estimates $\sigma_u^2 + \sigma_\varepsilon^2/\bar T$ or something resembling this. (1) provided an estimate of $\sigma_\varepsilon^2$, so use the two to obtain estimates of $\sigma_u^2$ and $\sigma_\varepsilon^2$. For each group, compute
$$\hat\theta_i = 1 - \sqrt{\hat\sigma_\varepsilon^2 \big/ (\hat\sigma_\varepsilon^2 + T_i\hat\sigma_u^2)}$$
(4) Transform $[x_{it1}, x_{it2}, z_{i1}, z_{i2}]$ to
$$\mathbf{W}_i^* = [x_{it1}, x_{it2}, z_{i1}, z_{i2}] - \hat\theta_i\,[\bar x_{i1}, \bar x_{i2}, z_{i1}, z_{i2}]$$
and $y_{it}$ to $y_{it}^* = y_{it} - \hat\theta_i\,\bar y_i$.
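A small sketch of steps (3) and (4), assuming the variance estimates from steps (1) and (2) are already in hand. The input values below are illustrative placeholders, not from the slides.

```python
import numpy as np

# Assumed inputs (illustrative values):
sig2_eps = 0.81                     # sigma_eps^2 from LSDV, step (1)
s2_step2 = 0.50                     # residual variance from the IV regression, step (2)
T_i = np.array([5, 5, 7, 4])        # group sizes (unbalanced panel)

# Step (3): s2_step2 roughly estimates sigma_u^2 + sigma_eps^2 / T-bar
Tbar = T_i.mean()
sig2_u = max(s2_step2 - sig2_eps / Tbar, 0.0)

# theta_i = 1 - sqrt(sigma_eps^2 / (sigma_eps^2 + T_i * sigma_u^2))
theta_i = 1.0 - np.sqrt(sig2_eps / (sig2_eps + T_i * sig2_u))
print(theta_i)
# Step (4) then quasi-demeans each group's data:
# y*_it = y_it - theta_i * ybar_i, and likewise for the columns of [x1 x2 z1 z2].
```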
Part 9: GMM Estimation [ 23/57]
H&T’s 4 STEP IV Estimator
Instrumental variables $\mathbf{V}_i$:
$(x_{1it} - \bar x_{1i})$: $K_1$ instrumental variables
$(x_{2it} - \bar x_{2i})$: $K_2$ instrumental variables
$z_{1i}$: $L_1$ instrumental variables (uncorrelated with u)
$\bar x_{1i}$: $K_1$ additional instrumental variables
Now do 2SLS of $y^*$ on $\mathbf{W}^*$ with instruments $\mathbf{V}$ to estimate all parameters. I.e.,
$$[\hat\beta_1, \hat\beta_2, \hat\alpha_1, \hat\alpha_2] = (\hat{\mathbf{W}}^{*\prime}\mathbf{W}^*)^{-1}\hat{\mathbf{W}}^{*\prime}y^*.$$
Part 9: GMM Estimation [ 24/57]
Part 9: GMM Estimation [ 25/57]
Part 9: GMM Estimation [ 26/57]
Part 9: GMM Estimation [ 27/57]
Part 9: GMM Estimation [ 28/57]
Part 9: GMM Estimation [ 29/57]
Dynamic (Linear) Panel Data (DPD) Models
Application
Bias in Conventional Estimation
Development of Consistent Estimators
Efficient GMM Estimators
Part 9: GMM Estimation [ 30/57]
Dynamic Linear Model
Balestra-Nerlove (1966), 36 States, 11 Years
Demand for Natural Gas
Structure
New demand: $G^*_{i,t} = G_{i,t} - (1-\delta)G_{i,t-1}$
Demand function: $G^*_{i,t} = \beta_1 + \beta_2 P_{i,t} + \beta_3 \Delta N_{i,t} + \beta_4 N_{i,t} + \beta_5 \Delta Y_{i,t} + \beta_6 Y_{i,t} + \varepsilon_{i,t}$
G = gas demand
N = population
P = price
Y = per capita income
Reduced Form
$$G_{i,t} = \alpha_1 + \alpha_2 P_{i,t} + \alpha_3 \Delta N_{i,t} + \alpha_4 N_{i,t} + \alpha_5 \Delta Y_{i,t} + \alpha_6 Y_{i,t} + \alpha_7 G_{i,t-1} + u_i + \varepsilon_{i,t}$$
Part 9: GMM Estimation [ 31/57]
A General DPD model
$$y_{i,t} = x_{i,t}'\beta + \gamma\, y_{i,t-1} + c_i + \varepsilon_{i,t}$$
$$E[\varepsilon_{i,t} \mid X_i, c_i] = 0$$
$$E[\varepsilon_{i,t}^2 \mid X_i, c_i] = \sigma_\varepsilon^2, \quad E[\varepsilon_{i,t}\varepsilon_{i,s} \mid X_i, c_i] = 0 \text{ if } t \neq s$$
$$E[c_i \mid X_i] = g(X_i)$$
No correlation across individuals.
Part 9: GMM Estimation [ 32/57]
OLS and GLS are inconsistent
$$y_{i,t} = x_{i,t}'\beta + \gamma\, y_{i,t-1} + c_i + \varepsilon_{i,t}$$
$$\mathrm{Cov}[y_{i,t-1}, (c_i + \varepsilon_{i,t})] = \sigma_c^2 + \gamma\,\mathrm{Cov}[y_{i,t-2}, (c_i + \varepsilon_{i,t})]$$
If T were large and $-1 < \gamma < 1$, this would approach $\dfrac{\sigma_c^2}{1-\gamma}$.
Implication: both OLS and GLS are inconsistent.
Part 9: GMM Estimation [ 33/57]
LSDV is Inconsistent
[(Steven) Nickell Bias]
$$y_{i,t} - \bar y_i = (x_{i,t} - \bar x_i)'\beta + \gamma(y_{i,t-1} - \bar y_i) + (\varepsilon_{i,t} - \bar\varepsilon_i)$$
$$\mathrm{Cov}[(y_{i,t-1} - \bar y_i),\, (\varepsilon_{i,t} - \bar\varepsilon_i)] = -\frac{\sigma_\varepsilon^2}{T^2}\cdot\frac{(T-1) - T\gamma + \gamma^T}{(1-\gamma)^2}$$
Large when T is moderate or small.
The proportional bias for conventional T (5 to 15) is on the order of 15% to 60%.
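A small Monte Carlo sketch (mine, not from the slides) illustrating the point: with T = 5 the LSDV estimate of γ is biased downward by a substantial amount.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, gamma, reps = 200, 5, 0.5, 200
bias = []
for _ in range(reps):
    c = rng.normal(size=N)                           # fixed effects
    y = np.empty((N, T + 1))
    y[:, 0] = c + rng.normal(size=N)                 # rough start-up value
    for t in range(1, T + 1):
        y[:, t] = gamma * y[:, t - 1] + c + rng.normal(size=N)
    ylag, ycur = y[:, :-1], y[:, 1:]
    # LSDV: regress within-demeaned y_t on within-demeaned y_{t-1}
    yd = ycur - ycur.mean(axis=1, keepdims=True)
    xd = ylag - ylag.mean(axis=1, keepdims=True)
    g_hat = (xd * yd).sum() / (xd * xd).sum()
    bias.append(g_hat - gamma)
print("mean LSDV bias for gamma:", np.mean(bias))    # clearly negative for T = 5
```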
Part 9: GMM Estimation [ 34/57]
Anderson Hsiao IV Estimator
Based on first differences:
$$y_{i,t} - y_{i,t-1} = (x_{i,t} - x_{i,t-1})'\beta + \gamma(y_{i,t-1} - y_{i,t-2}) + (\varepsilon_{i,t} - \varepsilon_{i,t-1})$$
Instrumental variables:
$$y_{i,3} - y_{i,2} = (x_{i,3} - x_{i,2})'\beta + \gamma(y_{i,2} - y_{i,1}) + (\varepsilon_{i,3} - \varepsilon_{i,2})$$
Can use $y_{i,1}$.
$$y_{i,4} - y_{i,3} = (x_{i,4} - x_{i,3})'\beta + \gamma(y_{i,3} - y_{i,2}) + (\varepsilon_{i,4} - \varepsilon_{i,3})$$
Can use $y_{i,2}$ or $(y_{i,2} - y_{i,1})$.
And so on.
Levels or lagged differences? Levels allow you to use more data, and the asymptotic variance of the estimator is smaller with levels.
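A minimal sketch of the Anderson-Hsiao idea for the pure AR(1) panel (simulated data and indexing are my own; the simulation includes a starting value y_0, so the usable differenced equations begin at t = 2): instrument the lagged difference with the twice-lagged level.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, gamma = 500, 8, 0.5

c = rng.normal(size=N)
y = np.empty((N, T + 1))                 # columns y_0, y_1, ..., y_T
y[:, 0] = c + rng.normal(size=N)
for t in range(1, T + 1):
    y[:, t] = gamma * y[:, t - 1] + c + rng.normal(size=N)

# Differenced equation for t = 2,...,T:  dy_t = gamma * dy_{t-1} + dEps_t
dy = np.diff(y, axis=1)                  # dy[:, j] = y_{j+1} - y_j
lhs = dy[:, 1:].ravel()                  # dy_2, ..., dy_T
rhs = dy[:, :-1].ravel()                 # dy_1, ..., dy_{T-1}
inst = y[:, :T - 1].ravel()              # levels y_0, ..., y_{T-2} as instruments

# Simple IV with one instrument and one regressor: gamma_hat = (z'y) / (z'x)
gamma_ah = (inst @ lhs) / (inst @ rhs)
print("Anderson-Hsiao estimate of gamma:", gamma_ah)
```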
Part 9: GMM Estimation [ 35/57]
Arellano and Bond Estimator - 1
Based on first differences:
$$y_{i,t} - y_{i,t-1} = (x_{i,t} - x_{i,t-1})'\beta + \gamma(y_{i,t-1} - y_{i,t-2}) + (\varepsilon_{i,t} - \varepsilon_{i,t-1})$$
Instrumental variables:
$$y_{i,3} - y_{i,2} = (x_{i,3} - x_{i,2})'\beta + \gamma(y_{i,2} - y_{i,1}) + (\varepsilon_{i,3} - \varepsilon_{i,2})$$
Can use $y_{i,1}$.
$$y_{i,4} - y_{i,3} = (x_{i,4} - x_{i,3})'\beta + \gamma(y_{i,3} - y_{i,2}) + (\varepsilon_{i,4} - \varepsilon_{i,3})$$
Can use $y_{i,1}$ and $y_{i,2}$.
$$y_{i,5} - y_{i,4} = (x_{i,5} - x_{i,4})'\beta + \gamma(y_{i,4} - y_{i,3}) + (\varepsilon_{i,5} - \varepsilon_{i,4})$$
Can use $y_{i,1}$, $y_{i,2}$, and $y_{i,3}$.
Part 9: GMM Estimation [ 36/57]
Arellano and Bond Estimator - 2
More instrumental variables - Predetermined X
$$y_{i,3} - y_{i,2} = (x_{i,3} - x_{i,2})'\beta + \gamma(y_{i,2} - y_{i,1}) + (\varepsilon_{i,3} - \varepsilon_{i,2})$$
Can use $y_{i,1}$ and $x_{i,1}, x_{i,2}$.
$$y_{i,4} - y_{i,3} = (x_{i,4} - x_{i,3})'\beta + \gamma(y_{i,3} - y_{i,2}) + (\varepsilon_{i,4} - \varepsilon_{i,3})$$
Can use $y_{i,1}, y_{i,2}$ and $x_{i,1}, x_{i,2}, x_{i,3}$.
$$y_{i,5} - y_{i,4} = (x_{i,5} - x_{i,4})'\beta + \gamma(y_{i,4} - y_{i,3}) + (\varepsilon_{i,5} - \varepsilon_{i,4})$$
Can use $y_{i,1}, y_{i,2}, y_{i,3}$ and $x_{i,1}, x_{i,2}, x_{i,3}, x_{i,4}$.
Part 9: GMM Estimation [ 37/57]
Arellano and Bond Estimator - 3
Even more instrumental variables - Strictly exogenous X
$$y_{i,3} - y_{i,2} = (x_{i,3} - x_{i,2})'\beta + \gamma(y_{i,2} - y_{i,1}) + (\varepsilon_{i,3} - \varepsilon_{i,2})$$
Can use $y_{i,1}$ and $x_{i,1}, x_{i,2}, \ldots, x_{i,T}$ (all periods).
$$y_{i,4} - y_{i,3} = (x_{i,4} - x_{i,3})'\beta + \gamma(y_{i,3} - y_{i,2}) + (\varepsilon_{i,4} - \varepsilon_{i,3})$$
Can use $y_{i,1}, y_{i,2}$ and $x_{i,1}, x_{i,2}, \ldots, x_{i,T}$.
$$y_{i,5} - y_{i,4} = (x_{i,5} - x_{i,4})'\beta + \gamma(y_{i,4} - y_{i,3}) + (\varepsilon_{i,5} - \varepsilon_{i,4})$$
Can use $y_{i,1}, y_{i,2}, y_{i,3}$ and $x_{i,1}, x_{i,2}, \ldots, x_{i,T}$.
The number of potential instruments is huge. These define the rows of $Z_i$, and they can be used for simple instrumental variable estimation.
Part 9: GMM Estimation [ 38/57]
Instrumental Variables
Predetermined variables:
$$Z_i = \begin{bmatrix}
(y_{i,1},\, x_{i,1}',\, x_{i,2}') & 0 & \cdots & 0 \\
0 & (y_{i,1},\, y_{i,2},\, x_{i,1}',\, x_{i,2}',\, x_{i,3}') & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & (y_{i,1},\, y_{i,2},\, \ldots,\, y_{i,T-2},\, x_{i,1}',\, x_{i,2}',\, \ldots,\, x_{i,T-1}')
\end{bmatrix}$$
(one row for each differenced equation, t = 3, ..., T)

Strictly exogenous variables:
$$Z_i = \begin{bmatrix}
(y_{i,1},\, x_{i,1}',\, x_{i,2}',\, \ldots,\, x_{i,T}') & 0 & \cdots & 0 \\
0 & (y_{i,1},\, y_{i,2},\, x_{i,1}',\, x_{i,2}',\, \ldots,\, x_{i,T}') & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & (y_{i,1},\, y_{i,2},\, \ldots,\, y_{i,T-2},\, x_{i,1}',\, x_{i,2}',\, \ldots,\, x_{i,T}')
\end{bmatrix}$$
(one row for each differenced equation, t = 3, ..., T)
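A sketch of how the predetermined-variables blocks of Z_i can be assembled for one individual. The helper name and the tiny example data are mine, not from the slides.

```python
import numpy as np
from scipy.linalg import block_diag

def ab_instruments_predetermined(y_i, X_i):
    """Block-diagonal Z_i for the differenced equations t = 3,...,T.
    y_i : (T,) array of levels y_{i,1},...,y_{i,T}
    X_i : (T, K) array of x_{i,1}',...,x_{i,T}'
    The block for equation t uses y_{i,1},...,y_{i,t-2} and x_{i,1},...,x_{i,t-1}."""
    T = len(y_i)
    blocks = []
    for t in range(3, T + 1):                        # differenced equation dated t
        z_row = np.concatenate([y_i[:t - 2], X_i[:t - 1].ravel()])
        blocks.append(z_row.reshape(1, -1))
    return block_diag(*blocks)                       # (T - 2) rows

# Tiny example: T = 5, K = 1
y_i = np.arange(1.0, 6.0)                 # y_{i,1},...,y_{i,5}
X_i = np.arange(10.0, 15.0).reshape(5, 1)
Z_i = ab_instruments_predetermined(y_i, X_i)
print(Z_i.shape)                          # 3 rows, (1+2) + (2+3) + (3+4) = 15 columns
```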
Part 9: GMM Estimation [ 39/57]
Simple IV Estimation
$$\hat\theta = \left[\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'y_i\right)$$
This is two stage least squares.
$$\mathrm{Est.Asy.Var}[\hat\theta] = \hat\sigma^2\left[\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}$$
$$\hat\sigma^2 = \frac{\sum_{i=1}^N\sum_{t=3}^{T_i}\left[(y_{i,t}-y_{i,t-1}) - (x_{i,t}-x_{i,t-1})'\hat\beta - \hat\gamma(y_{i,t-1}-y_{i,t-2})\right]^2}{\sum_{i=1}^N (T_i - 2)}$$
Note that this variance estimator understates the true asymptotic variance because observations are autocorrelated for one period:
$$(y_{i,t}-y_{i,t-1}) = \cdots + (\varepsilon_{i,t} - \varepsilon_{i,t-1}) = \cdots + v_{i,t}$$
$$\mathrm{Cov}[v_{i,t}, v_{i,t-1}] = -\sigma_\varepsilon^2 \quad (0 \text{ for longer lags and leads}).$$
Use a "White" robust estimator:
$$\mathrm{Est.Asy.Var}[\hat\theta] = \left[\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'\hat v_i\hat v_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}$$
Part 9: GMM Estimation [ 40/57]
Arellano/Bond
First Difference Formulation
$$y_{it} = x_{it}'\beta + \gamma\, y_{i,t-1} + \varepsilon_{it}$$
Parameters: $\theta = [\beta', \gamma]'$
The data (in first differences):
$$y_i = \begin{bmatrix} \Delta y_{i3} \\ \Delta y_{i4} \\ \vdots \\ \Delta y_{iT_i} \end{bmatrix},\qquad
X_i = \begin{bmatrix} \Delta x_{i3}' & y_{i,2} - y_{i,1} \\ \Delta x_{i4}' & y_{i,3} - y_{i,2} \\ \vdots & \vdots \\ \Delta x_{iT_i}' & y_{i,T-1} - y_{i,T-2} \end{bmatrix}\quad (T_i - 2 \text{ rows};\ K \text{ and } 1 \text{ columns})$$
Part 9: GMM Estimation [ 41/57]
Arellano/Bond - GLS
$$y_{i,t} - y_{i,t-1} = (x_{i,t} - x_{i,t-1})'\beta + \gamma(y_{i,t-1} - y_{i,t-2}) + (\varepsilon_{i,t} - \varepsilon_{i,t-1})$$
$$\mathrm{Cov}\begin{bmatrix} \varepsilon_{i,3} - \varepsilon_{i,2} \\ \varepsilon_{i,4} - \varepsilon_{i,3} \\ \varepsilon_{i,5} - \varepsilon_{i,4} \\ \vdots \\ \varepsilon_{i,T} - \varepsilon_{i,T-1} \end{bmatrix}
= \sigma_\varepsilon^2\begin{bmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ 0 & -1 & 2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & -1 \\ 0 & 0 & \cdots & -1 & 2 \end{bmatrix} = \sigma_\varepsilon^2\,\Omega_i$$
Part 9: GMM Estimation [ 42/57]
Arellano/Bond GLS Estimator
$$\hat\theta = \left[\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'\Omega_i Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'\Omega_i Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'y_i\right)$$
$$= \left[X'Z\left(Z'\Omega Z\right)^{-1}Z'X\right]^{-1}\left[X'Z\left(Z'\Omega Z\right)^{-1}Z'y\right]$$
Part 9: GMM Estimation [ 43/57]
GMM Estimator
$$y_{i,t} = x_{i,t}'\beta + \gamma\, y_{i,t-1} + \varepsilon_{i,t}$$
We make no assumptions about the disturbance. In first differences,
$$y_{i,t} - y_{i,t-1} = (x_{i,t} - x_{i,t-1})'\beta + \gamma(y_{i,t-1} - y_{i,t-2}) + (\varepsilon_{i,t} - \varepsilon_{i,t-1})$$
(1) Two stage least squares:
$$\hat\theta = \left[\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}\left(\sum_{i=1}^N X_i'Z_i\right)\left(\sum_{i=1}^N Z_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'y_i\right)$$
(2) Form the weighting matrix for GMM:
$$\hat{\mathbf{W}} = \frac{1}{N^2}\sum_{i=1}^N Z_i'\hat v_i\hat v_i'Z_i$$
The criterion for GMM estimation is
$$q = \left[\frac{1}{N}\sum_{i=1}^N v_i'Z_i\right]\hat{\mathbf{W}}^{-1}\left[\frac{1}{N}\sum_{i=1}^N Z_i'v_i\right]$$
$$\hat\theta_{GMM} = \left[\left(\sum_{i=1}^N X_i'Z_i\right)\hat{\mathbf{W}}^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}\left(\sum_{i=1}^N X_i'Z_i\right)\hat{\mathbf{W}}^{-1}\left(\sum_{i=1}^N Z_i'y_i\right)$$
$$\mathrm{Est.Asy.Var}[\hat\theta_{GMM}] = \left[\left(\sum_{i=1}^N X_i'Z_i\right)\hat{\mathbf{W}}^{-1}\left(\sum_{i=1}^N Z_i'X_i\right)\right]^{-1}$$
Part 9: GMM Estimation [ 44/57]
Arellano/Bond/Bover’s Formulation
Start with H&T
$$y_{it} = x_{1it}'\beta_1 + x_{2it}'\beta_2 + z_{1i}'\alpha_1 + z_{2i}'\alpha_2 + \varepsilon_{it} + u_i$$
Instrumental variables for period t:
$(x_{1it} - \bar x_{1i})$: $K_1$ instrumental variables
$(x_{2it} - \bar x_{2i})$: $K_2$ instrumental variables
$z_{1i}$: $L_1$ instrumental variables (uncorrelated with u)
$\bar x_{1i}$: $K_1$ additional instrumental variables; $K_1 \ge L_2$.
Let $v_{it} = \varepsilon_{it} + u_i$ and $z_{it} = [(x_{1it} - \bar x_{1i})', (x_{2it} - \bar x_{2i})', z_{1i}', \bar x_{1i}']$.
Then $E[z_{it} v_{it}] = 0$.
We formulate this for the $T_i$ observations in group i.
Part 9: GMM Estimation [ 45/57]
Arellano/Bond/Bover’s Formulation
Dynamic Model
$$y_{it} = \gamma\, y_{i,t-1} + x_{1it}'\beta_1 + x_{2it}'\beta_2 + z_{1i}'\alpha_1 + z_{2i}'\alpha_2 + \varepsilon_{it} + u_i$$
Parameters: $\theta = [\gamma, \beta_1', \beta_2', \alpha_1', \alpha_2']'$
The data:
$$y_i = \begin{bmatrix} y_{i,2} \\ y_{i,3} \\ \vdots \\ y_{i,T_i} \end{bmatrix},\qquad
X_i = \begin{bmatrix} y_{i,1} & x_{1i2}' & x_{2i2}' & z_{1i}' & z_{2i}' \\ y_{i,2} & x_{1i3}' & x_{2i3}' & z_{1i}' & z_{2i}' \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ y_{i,T-1} & x_{1iT_i}' & x_{2iT_i}' & z_{1i}' & z_{2i}' \end{bmatrix},\quad T_i - 1 \text{ rows};\ 1, K_1, K_2, L_1, L_2 \text{ columns}$$
Part 9: GMM Estimation [ 46/57]
Arellano/Bond/Bover’s Formulation
$$y_{it} = \gamma\, y_{i,t-1} + x_{1it}'\beta_1 + x_{2it}'\beta_2 + z_{1i}'\alpha_1 + z_{2i}'\alpha_2 + \varepsilon_{it} + u_i$$
Instrumental variables for period t as developed above:
Let $z_{it} = [y_{i,1}, y_{i,2}, \ldots, (x_{1it} - \bar x_{1i})', (x_{2it} - \bar x_{2i})', z_{1i}', \bar x_{1i}']$.
Combine the H&T treatment with the DPD GMM estimator. Instrumental variable creation is based on group mean deviations rather than first differences.
Part 9: GMM Estimation [ 47/57]
Arellano/Bond/Bover’s Formulation
zit  [y i,1  y i ,..., y i,t 1  y i , ( x1it - x1i )',( x2it - x2 i )',z1i , x1']
Then E[zit v it ]  0
We formulate this for the last Ti -1 observations in group i.
(0,0,0)
(0,0,0)
(y i,1 , x1i2 ,x2i2 ,z1i )

(0,0,0)
(y i,1 , y i,2 , x1i3 ,x2i3 ,z1i )
(0,0,0)


Zi 
(0,0,0)
(0,0,0)
(y i,1 , y i,2 , y i,3 , x1i4 ,x2i,4 ,z1i )


(0,0,0)
(0,0,0)
(0,0,0)

(0,0,0)
(0,0,0)
(0,0,0)

1/(Ti  1) 



1/(Ti  1) 
i

H'i  MD,(T -1)
, where MiD,(T -1)  MDi without the last column.
i
i


...


1/(Ti  1) 

...
...
...
...
...
(0,0,0)


(0,0,0)


(0,0,0)


(y i,1 ,...,y i,T-2 , x1i,(T-1) ,x2i,(T-1) ,z1i )

(y i,1 ,...,y i,T-1 , z1, x1i ) 
(0,0,0)

(0,0,0)
(0,0,0)
(0,0,0)
These blocks may contain all previous
exogenous variables, or all exogenous
variables for all periods.
This may contain the all periods of data on x1 rather than just the group mean. (Amemiya and MaCurdy).
Part 9: GMM Estimation [ 48/57]
Arellano/Bond/Bover’s Formulation
For unbalanced panels, the number of columns in Zi varies: given the form of Zi, it depends on Ti. We need all Zi to have the same number of columns, so matrices with fewer columns than the largest one are padded with extra columns of zeros.
Part 9: GMM Estimation [ 49/57]
Arellano/Bond/Bover’s Formulation
The covariance matrix defines the model:
$\Omega_i = \sigma_\varepsilon^2 I$: classical (pooled) regression model (no effects)
$\Omega_i = \sigma_\varepsilon^2 I + \sigma_u^2\, ii'$: random effects model
$\Omega_i$ = a positive definite $T_i \times T_i$ matrix: GR (generalized regression) model
Part 9: GMM Estimation [ 50/57]
Arellano/Bond/Bover Estimator
$$\hat\delta = \left[\left(\sum_{i=1}^N X_i'H_i'Z_i\right)\left(\sum_{i=1}^N Z_i'H_i\hat\Omega_i H_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'H_iX_i\right)\right]^{-1}\left(\sum_{i=1}^N X_i'H_i'Z_i\right)\left(\sum_{i=1}^N Z_i'H_i\hat\Omega_i H_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'H_iy_i\right)$$
Two step (GMM) estimation:
(1) Use $\hat\Omega_i = I$. Compute residuals $\hat v_i = y_i - X_i\hat\delta$. Then use
$$\frac{1}{N}\sum_{i=1}^N H_i\hat v_i\hat v_i'H_i' \quad \text{in place of } H_i\hat\Omega_i H_i'.$$
(2) Recompute $\hat\delta$.
$$\mathrm{Est.Asy.Var}[\hat\delta] = \left[\left(\sum_{i=1}^N X_i'H_i'Z_i\right)\left(\sum_{i=1}^N Z_i'H_i\hat\Omega_i H_i'Z_i\right)^{-1}\left(\sum_{i=1}^N Z_i'H_iX_i\right)\right]^{-1}$$
Part 9: GMM Estimation [ 51/57]
GMM Criterion
The GMM criterion which produces this estimator is
$$q = \left[\sum_{i=1}^N \hat v_i'H_i'Z_i\right]\left[\sum_{i=1}^N Z_i'H_i\hat\Omega_i H_i'Z_i\right]^{-1}\left[\sum_{i=1}^N Z_i'H_i\hat v_i\right]$$
Post estimation, use this as a $\chi^2[\mathrm{DF}]$ statistic to test the overidentifying restrictions. The degrees of freedom is the total number of moment conditions (columns in Z) minus the number of parameters in δ.
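A short sketch of this post-estimation step, assuming the minimized criterion q, the number of moment columns, and the number of parameters are available from a previous GMM fit. The numbers below are placeholders, not results.

```python
from scipy.stats import chi2

# Assumed available from a previous GMM fit (illustrative values):
q = 23.7           # minimized GMM criterion
n_moments = 25     # total number of moment conditions (columns in Z)
n_params = 4       # number of elements of delta

df = n_moments - n_params
p_value = chi2.sf(q, df)
print(f"Overidentification test: q = {q:.2f}, df = {df}, p-value = {p_value:.3f}")
# A small p-value rejects the overidentifying restrictions (instrument validity).
```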
Part 9: GMM Estimation [ 52/57]
Application: Maquiladora
Part 9: GMM Estimation [ 53/57]
Maquiladora
Part 9: GMM Estimation [ 54/57]
Part 9: GMM Estimation [ 55/57]
Side Issue
How does y(t) = 1.220175 y(t-1) - 0.262198 y(t-2) + a behave?
y(t) = 1.220175 y(t-1) + a
is obviously explosive.
1.220175 0.262198
How to tell: A = 

1
0


Smallest (possibly complex) root must be greater than 1.0.
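A quick numpy check of the two equivalent criteria (my own sketch): the roots of the lag polynomial must lie outside the unit circle, or equivalently the eigenvalues of the companion matrix A must lie inside it.

```python
import numpy as np

# y(t) = 1.220175 y(t-1) - 0.262198 y(t-2) + a(t)
a1, a2 = 1.220175, -0.262198

# Roots of the characteristic (lag) polynomial 1 - a1*z - a2*z^2:
# stability requires every root to exceed 1 in modulus.
lag_roots = np.roots([-a2, -a1, 1.0])
print("lag polynomial roots:", lag_roots)               # approx. 3.59 and 1.06 -> stable

# Equivalent check: companion-matrix eigenvalues must be inside the unit circle
A = np.array([[a1, a2],
              [1.0, 0.0]])
print("companion eigenvalues:", np.linalg.eigvals(A))   # approx. 0.94 and 0.28
```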
Part 9: GMM Estimation [ 56/57]
Postscript
There is no theoretical guidance on the instrument set.
There is no theoretical guidance on the form of the covariance matrix.
There is no theoretical guidance on the number of lags at any level of the model.
There is no theoretical guidance on the form of the exogeneity, and it is not testable.
Results vary wildly with small variations in the assumptions.
Part 9: GMM Estimation [ 57/57]
Ahn and Schmidt
$$y_{it} = \gamma\, y_{i,t-1} + x_{1it}'\beta_1 + x_{2it}'\beta_2 + z_{1i}'\alpha_1 + z_{2i}'\alpha_2 + \varepsilon_{it} + u_i$$
There are (huge numbers of) additional moments.
(1) Initial condition, $y_{i,0} = x_{i,0}'\lambda + \varepsilon_{i,0}$: $E[\varepsilon_{i,t}\, y_{i,0}] = 0$ implies T more estimating equations.
(2) Uncorrelatedness with differences: $E[y_{i,s}(\varepsilon_{i,t} - \varepsilon_{i,t-1})] = 0$, $t = 2, \ldots, T$, $s = 0, \ldots, t-2$, is $T(T-1)/2$ conditions.
(3) (Nonlinear) $E[\varepsilon_{iT}(\varepsilon_{i,t} - \varepsilon_{i,t-1})] = 0$ implies $T-2$ restrictions.
And so on.
Even moderately sized models embed potentially thousands of such estimating equations for usually very small numbers (say 5 or 10) of parameters.
How much efficiency can be gained? Is there a cost?