Chapter 10
Two Stage Sampling (Subsampling)
In cluster sampling, all the elements in the selected clusters are surveyed. Moreover, the efficiency of cluster sampling depends on the size of the cluster: as the size increases, the efficiency decreases. This suggests that higher precision can be attained by distributing a given number of elements over a larger number of clusters and then surveying only a sample of the elements within each selected cluster, rather than enumerating all of them. This is achieved in subsampling.
In subsampling,
- divide the population into clusters;
- select a sample of clusters [first stage];
- from each selected cluster, select a sample of a specified number of elements [second stage].
The clusters which form the units of sampling at the first stage are called the first stage units, and the units or groups of units within clusters which form the units of sampling at the second stage are called the second stage units or subunits.
The procedure generalizes to three or more stages and is then termed multistage sampling.
For example, in a crop survey,
- villages are the first stage units,
- fields within the villages are the second stage units, and
- plots within the fields are the third stage units.
In another example, to obtain a sample of fishes from a commercial fishery,
- first take a sample of boats, and
- then take a sample of fishes from each selected boat.
Two stage sampling with equal first stage units:
Assume that
- the population consists of $NM$ elements;
- the $NM$ elements are grouped into $N$ first stage units of $M$ second stage units each (i.e., $N$ clusters, each cluster of size $M$);
- a sample of $n$ first stage units is selected (i.e., choose $n$ clusters);
- a sample of $m$ second stage units is selected from each selected first stage unit (i.e., choose $m$ units from each cluster);
- units at each stage are selected with SRSWOR.
Cluster sampling is a special case of two stage sampling in the sense that, from a population of $N$ clusters of equal size, $m = M$, a sample of $n$ clusters is chosen.
If further $M = m = 1$, we get SRSWOR.
If $n = N$, we have the case of stratified sampling.
$y_{ij}$: value of the characteristic under study for the $j$th second stage unit of the $i$th first stage unit; $i = 1,2,\ldots,N$; $j = 1,2,\ldots,M$.

$\bar{Y}_i = \dfrac{1}{M}\displaystyle\sum_{j=1}^{M} y_{ij}$ : mean per second stage unit of the $i$th first stage unit in the population.

$\bar{Y} = \dfrac{1}{MN}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij} = \dfrac{1}{N}\sum_{i=1}^{N}\bar{Y}_i = \bar{Y}_{MN}$ : mean per second stage unit in the population.

$\bar{y}_i = \dfrac{1}{m}\displaystyle\sum_{j=1}^{m} y_{ij}$ : mean per second stage unit in the $i$th first stage unit in the sample.

$\bar{y} = \dfrac{1}{mn}\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{m} y_{ij} = \dfrac{1}{n}\sum_{i=1}^{n}\bar{y}_i = \bar{y}_{mn}$ : mean per second stage unit in the sample.
Advantages:
The principal advantage of two stage sampling is that it is more flexible than one stage sampling. It reduces to one stage sampling when $m = M$, but unless this is the best choice of $m$, we have the opportunity of taking some smaller value that appears more efficient. As usual, this choice reduces to a balance between statistical precision and cost. When the units of the first stage agree very closely, considerations of precision suggest a small value of $m$. On the other hand, it is sometimes as cheap to measure the whole of a unit as to sample from it, for example, when the unit is a household and a single respondent can give data as accurate as that obtained from all the members of the household.
A pictorial scheme of the two stage sampling scheme is as follows:
[Diagram: the population of $MN$ units consists of $N$ clusters (large in number) of $M$ units each; the first stage sample selects $n$ clusters (small in number) of $M$ units each; the second stage sample selects $m$ units from each of the $n$ selected clusters, giving $mn$ units in all.]
Note: The expectations under a two stage sampling scheme depend on the stages. For example, the expectation at the second stage depends on the first stage, in the sense that a second stage unit can be in the sample only if its first stage unit was selected at the first stage.
To calculate the average,
- first average the estimator over all the second stage selections that can be drawn from a fixed set of $n$ units that the plan selects;
- then average over all the possible selections of $n$ units by the plan.
In the case of two stage sampling,
$$E(\hat{\theta}) = E_1\big[E_2(\hat{\theta})\big],$$
where $E_2$ denotes the average over all possible second stage selections from a fixed set of first stage units, and $E_1$ denotes the average over all first stage samples.
In the case of three stage sampling,
$$E(\hat{\theta}) = E_1\Big[E_2\big\{E_3(\hat{\theta})\big\}\Big].$$
To calculate the variance, we proceed as follows. In the case of two stage sampling,
$$Var(\hat{\theta}) = E(\hat{\theta}-\theta)^2 = E_1E_2(\hat{\theta}-\theta)^2.$$
Consider
$$E_2(\hat{\theta}-\theta)^2 = E_2(\hat{\theta}^2) - 2\theta E_2(\hat{\theta}) + \theta^2 = \Big[\big\{E_2(\hat{\theta})\big\}^2 + V_2(\hat{\theta})\Big] - 2\theta E_2(\hat{\theta}) + \theta^2.$$
Now average over the first stage selection (using $E_1E_2(\hat{\theta})=\theta$ for an unbiased estimator):
$$E_1E_2(\hat{\theta}-\theta)^2 = E_1\Big[\big\{E_2(\hat{\theta})\big\}^2\Big] - 2\theta E_1E_2(\hat{\theta}) + \theta^2 + E_1\big[V_2(\hat{\theta})\big] = E_1\Big[\big\{E_2(\hat{\theta})\big\}^2 - \theta^2\Big] + E_1\big[V_2(\hat{\theta})\big],$$
so that
$$Var(\hat{\theta}) = V_1\big[E_2(\hat{\theta})\big] + E_1\big[V_2(\hat{\theta})\big].$$
In the case of three stage sampling,
$$Var(\hat{\theta}) = V_1\Big[E_2\big\{E_3(\hat{\theta})\big\}\Big] + E_1\Big[V_2\big\{E_3(\hat{\theta})\big\}\Big] + E_1\Big[E_2\big\{V_3(\hat{\theta})\big\}\Big].$$
Estimation of population mean:
Consider $\bar{y} = \bar{y}_{mn}$ as an estimator of the population mean $\bar{Y}$.
Bias:
Consider
$$E(\bar{y}) = E_1\big[E_2(\bar{y}_{mn}\mid i)\big] \quad \text{(as the 2nd stage depends on the 1st stage)}$$
$$= E_1\left[\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_i\right] \quad \text{(as $\bar{y}_i$ is unbiased for $\bar{Y}_i$ under SRSWOR)}$$
$$= \frac{1}{N}\sum_{i=1}^{N}\bar{Y}_i = \bar{Y}.$$
Thus $\bar{y}_{mn}$ is an unbiased estimator of the population mean.
Variance:
$$Var(\bar{y}) = E_1\big[V_2(\bar{y}\mid i)\big] + V_1\big[E_2(\bar{y}\mid i)\big]$$
$$= E_1\left[V_2\left(\frac{1}{n}\sum_{i=1}^{n}\bar{y}_i \,\Big|\, i\right)\right] + V_1\left[E_2\left(\frac{1}{n}\sum_{i=1}^{n}\bar{y}_i \,\Big|\, i\right)\right]$$
$$= E_1\left[\frac{1}{n^2}\sum_{i=1}^{n}V(\bar{y}_i\mid i)\right] + V_1\left[\frac{1}{n}\sum_{i=1}^{n}E_2(\bar{y}_i\mid i)\right]$$
$$= E_1\left[\frac{1}{n^2}\sum_{i=1}^{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_i^2\right] + V_1\left[\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_i\right]$$
$$= \frac{1}{n^2}\left(\frac{1}{m}-\frac{1}{M}\right)\sum_{i=1}^{n}E_1(S_i^2) + V_1(\bar{y}_c) \quad \text{(where $\bar{y}_c$ is based on cluster means, as in cluster sampling)}$$
$$= \frac{1}{n^2}\,n\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \frac{N-n}{Nn}S_b^2$$
$$= \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)S_b^2,$$
where
$$S_w^2 = \frac{1}{N}\sum_{i=1}^{N}S_i^2 = \frac{1}{N(M-1)}\sum_{i=1}^{N}\sum_{j=1}^{M}\big(y_{ij}-\bar{Y}_i\big)^2, \qquad S_b^2 = \frac{1}{N-1}\sum_{i=1}^{N}\big(\bar{Y}_i-\bar{Y}\big)^2.$$
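The variance formula above can be checked numerically. The sketch below builds a small artificial population (all numbers are invented for illustration), computes $S_w^2$, $S_b^2$ and the theoretical variance, and compares the latter with the Monte Carlo variance of $\bar{y}_{mn}$ over repeated two stage SRSWOR draws; the empirical mean also matches $\bar{Y}$, illustrating the unbiasedness shown earlier.

```python
import random

random.seed(1)

# Artificial population (values invented): N = 30 clusters of M = 8 units.
N, M = 30, 8
n, m = 6, 3
Y = [[random.gauss(10 + 0.5 * i, 2.0) for _ in range(M)] for i in range(N)]

# Population quantities from the definitions above.
Ybar_i = [sum(row) / M for row in Y]                      # cluster means
Ybar = sum(Ybar_i) / N                                    # overall mean
S2_i = [sum((y - Ybar_i[i]) ** 2 for y in Y[i]) / (M - 1) for i in range(N)]
Sw2 = sum(S2_i) / N                                       # within mean square
Sb2 = sum((yb - Ybar) ** 2 for yb in Ybar_i) / (N - 1)    # between mean square

# Theoretical variance of the two stage sample mean.
var_theory = (1 / n) * (1 / m - 1 / M) * Sw2 + (1 / n - 1 / N) * Sb2

# Monte Carlo over repeated SRSWOR draws at both stages.
reps = 100_000
means = []
for _ in range(reps):
    fsu = random.sample(range(N), n)                      # first stage
    means.append(sum(sum(random.sample(Y[i], m)) / m for i in fsu) / n)

mc_mean = sum(means) / reps
mc_var = sum((x - mc_mean) ** 2 for x in means) / reps
print(mc_mean, Ybar)       # close: ybar_mn is unbiased
print(mc_var, var_theory)  # close: empirical vs theoretical variance
```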
Estimate of variance
An unbiased estimator of the variance of $\bar{y}$ can be obtained by replacing $S_b^2$ and $S_w^2$ by their unbiased estimators in the expression for the variance of $\bar{y}$.
Consider
$$s_w^2 = \frac{1}{n}\sum_{i=1}^{n}s_i^2, \quad \text{where } s_i^2 = \frac{1}{m-1}\sum_{j=1}^{m}\big(y_{ij}-\bar{y}_i\big)^2,$$
as an estimator of
$$S_w^2 = \frac{1}{N}\sum_{i=1}^{N}S_i^2, \quad \text{where } S_i^2 = \frac{1}{M-1}\sum_{j=1}^{M}\big(y_{ij}-\bar{Y}_i\big)^2.$$
So
$$E(s_w^2) = E_1E_2\big(s_w^2\mid i\big) = E_1\left[\frac{1}{n}\sum_{i=1}^{n}E_2\big(s_i^2\mid i\big)\right] = E_1\left[\frac{1}{n}\sum_{i=1}^{n}S_i^2\right] \quad \text{(as SRSWOR is used)}$$
$$= \frac{1}{n}\sum_{i=1}^{n}E_1(S_i^2) = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{N}\sum_{i=1}^{N}S_i^2\right) = \frac{1}{N}\sum_{i=1}^{N}S_i^2 = S_w^2,$$
so $s_w^2$ is an unbiased estimator of $S_w^2$.
Consider
$$s_b^2 = \frac{1}{n-1}\sum_{i=1}^{n}\big(\bar{y}_i-\bar{y}\big)^2$$
as an estimator of
$$S_b^2 = \frac{1}{N-1}\sum_{i=1}^{N}\big(\bar{Y}_i-\bar{Y}\big)^2.$$
So
$$E(s_b^2) = \frac{1}{n-1}E\left[\sum_{i=1}^{n}\big(\bar{y}_i-\bar{y}\big)^2\right]$$
$$(n-1)E(s_b^2) = E\left[\sum_{i=1}^{n}\bar{y}_i^2 - n\bar{y}^2\right] = E\left[\sum_{i=1}^{n}\bar{y}_i^2\right] - nE(\bar{y}^2)$$
$$= E_1\left[\sum_{i=1}^{n}E_2\big(\bar{y}_i^2\mid i\big)\right] - n\Big[Var(\bar{y}) + \big\{E(\bar{y})\big\}^2\Big]$$
$$= E_1\left[\sum_{i=1}^{n}\Big\{Var\big(\bar{y}_i\mid i\big) + \big\{E_2(\bar{y}_i\mid i)\big\}^2\Big\}\right] - n\left[\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2 + \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \bar{Y}^2\right]$$
$$= E_1\left[\sum_{i=1}^{n}\left\{\left(\frac{1}{m}-\frac{1}{M}\right)S_i^2 + \bar{Y}_i^2\right\}\right] - n\left[\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2 + \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \bar{Y}^2\right]$$
$$= nE_1\left[\frac{1}{n}\sum_{i=1}^{n}\left\{\left(\frac{1}{m}-\frac{1}{M}\right)S_i^2 + \bar{Y}_i^2\right\}\right] - n\left[\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2 + \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \bar{Y}^2\right]$$
$$= n\left[\left(\frac{1}{m}-\frac{1}{M}\right)\frac{1}{N}\sum_{i=1}^{N}S_i^2 + \frac{1}{N}\sum_{i=1}^{N}\bar{Y}_i^2\right] - n\left[\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2 + \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \bar{Y}^2\right]$$
$$= n\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \frac{n}{N}\sum_{i=1}^{N}\bar{Y}_i^2 - n\bar{Y}^2 - n\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2 - \left(\frac{1}{m}-\frac{1}{M}\right)S_w^2$$
$$= (n-1)\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \frac{n}{N}\left[\sum_{i=1}^{N}\bar{Y}_i^2 - N\bar{Y}^2\right] - n\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2$$
$$= (n-1)\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \frac{n}{N}(N-1)S_b^2 - n\left(\frac{1}{n}-\frac{1}{N}\right)S_b^2$$
$$= (n-1)\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + (n-1)S_b^2.$$
Hence
$$E(s_b^2) = \left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + S_b^2,$$
or
$$E\left[s_b^2 - \left(\frac{1}{m}-\frac{1}{M}\right)s_w^2\right] = S_b^2.$$
Thus
$$\widehat{Var}(\bar{y}) = \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)\hat{S}_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)\hat{S}_b^2$$
$$= \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)s_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)\left[s_b^2 - \left(\frac{1}{m}-\frac{1}{M}\right)s_w^2\right]$$
$$= \frac{1}{N}\left(\frac{1}{m}-\frac{1}{M}\right)s_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)s_b^2.$$
Allocation of sample to the two stages: equal first stage units:
The variance of the sample mean in the case of two stage sampling is
$$Var(\bar{y}) = \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)S_b^2.$$
It depends on $S_b^2$, $S_w^2$, $n$ and $m$, so the cost of surveying the units in the two stage sample depends on $n$ and $m$.
Case 1. When cost is fixed
We find the values of $n$ and $m$ so that the variance is minimum for a given cost.
(I) When the cost function is $C = kmn$
Let the cost of the survey be proportional to the sample size, $C = knm$, where $C$ is the total cost and $k$ is a constant. When the cost is fixed as $C = C_0$, substituting $m = \dfrac{C_0}{kn}$ in $Var(\bar{y})$ gives
$$Var(\bar{y}) = \frac{1}{n}\left(S_b^2 - \frac{S_w^2}{M}\right) - \left(\frac{S_b^2}{N} - \frac{kS_w^2}{C_0}\right).$$
This variance is a monotonically decreasing function of $n$ if $\left(S_b^2 - \dfrac{S_w^2}{M}\right) > 0$. The variance is then minimum when $n$ assumes its maximum value, i.e., $\hat{n} = \dfrac{C_0}{k}$, corresponding to $m = 1$.
If $\left(S_b^2 - \dfrac{S_w^2}{M}\right) < 0$ (i.e., the intraclass correlation is negative for large $N$), then the variance is a monotonically increasing function of $n$. It reaches its minimum when $n$ assumes its minimum value, i.e., $\hat{n} = \dfrac{C_0}{kM}$, corresponding to $m = M$ (i.e., no subsampling).
(II) When the cost function is $C = k_1n + k_2mn$
Let the cost $C$ be fixed as $C_0 = k_1n + k_2mn$, where $k_1$ and $k_2$ are positive constants denoting the cost per unit of observation in the first and second stages, respectively. Minimize the variance of the sample mean under two stage sampling with respect to $m$, subject to the restriction $C_0 = k_1n + k_2mn$. We have
$$C_0\left[Var(\bar{y}) + \frac{S_b^2}{N}\right] = \left[k_1\left(S_b^2 - \frac{S_w^2}{M}\right) + k_2S_w^2\right] + \left[mk_2\left(S_b^2 - \frac{S_w^2}{M}\right) + \frac{k_1S_w^2}{m}\right].$$
When $\left(S_b^2 - \dfrac{S_w^2}{M}\right) > 0$, then
$$C_0\left[Var(\bar{y}) + \frac{S_b^2}{N}\right] = \left[\sqrt{k_1\left(S_b^2 - \frac{S_w^2}{M}\right)} + \sqrt{k_2S_w^2}\right]^2 + \left[\sqrt{mk_2\left(S_b^2 - \frac{S_w^2}{M}\right)} - \sqrt{\frac{k_1S_w^2}{m}}\right]^2,$$
which is minimum when the second term on the right hand side is zero. So we obtain
$$\hat{m} = \sqrt{\frac{k_1}{k_2}\cdot\frac{S_w^2}{S_b^2 - \dfrac{S_w^2}{M}}}.$$
The optimum $n$ follows from $C_0 = k_1n + k_2mn$ as
$$\hat{n} = \frac{C_0}{k_1 + k_2\hat{m}}.$$
When $\left(S_b^2 - \dfrac{S_w^2}{M}\right) < 0$, then
$$C_0\left[Var(\bar{y}) + \frac{S_b^2}{N}\right] = \left[k_1\left(S_b^2 - \frac{S_w^2}{M}\right) + k_2S_w^2\right] + \left[mk_2\left(S_b^2 - \frac{S_w^2}{M}\right) + \frac{k_1S_w^2}{m}\right]$$
is minimum if $m$ is the greatest attainable integer. Hence, in this case, when $C_0 \ge k_1 + k_2M$, $\hat{m} = M$ and $\hat{n} = \dfrac{C_0}{k_1 + k_2M}$. If $C_0 < k_1 + k_2M$, then $\hat{m} = \dfrac{C_0 - k_1}{k_2}$ and $\hat{n} = 1$.
If $N$ is large, then
$$S_w^2 \approx S^2(1-\rho), \qquad S_b^2 - \frac{S_w^2}{M} \approx \rho S^2,$$
and
$$\hat{m} = \sqrt{\frac{k_1}{k_2}\left(\frac{1}{\rho}-1\right)}.$$
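A minimal sketch of the optimum allocation under the cost function $C_0 = k_1n + k_2mn$, with all parameter values assumed for illustration. A small numerical check confirms that perturbing $m$ along the fixed-cost curve cannot reduce the variance below its value at $\hat{m}$:

```python
import math

# Illustrative allocation under cost C0 = k1*n + k2*m*n, assuming
# Sb^2 - Sw^2/M > 0 (the usual positive intraclass correlation case).
# All numbers below are made up for illustration.
Sb2, Sw2, M, N = 12.0, 40.0, 50, 400
k1, k2, C0 = 90.0, 4.0, 3000.0

A = Sb2 - Sw2 / M                          # Sb^2 - Sw^2/M
m_opt = math.sqrt((k1 / k2) * Sw2 / A)     # optimum m derived above
n_opt = C0 / (k1 + k2 * m_opt)             # optimum n from the cost constraint

def var_ybar(n, m):
    # Var(ybar) = (1/n)(1/m - 1/M) Sw^2 + (1/n - 1/N) Sb^2
    return (1 / n) * (1 / m - 1 / M) * Sw2 + (1 / n - 1 / N) * Sb2

print(m_opt, n_opt)
# Perturbing m while keeping the cost C0 fixed cannot do better.
for m in (m_opt / 2, m_opt, 2 * m_opt):
    n = C0 / (k1 + k2 * m)
    print(m, var_ybar(n, m))
```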
Case 2: When variance is fixed
Now we find the sample sizes when the variance is fixed, say as $V_0$:
$$V_0 = \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)S_b^2,$$
so that
$$n = \frac{S_b^2 + \left(\dfrac{1}{m}-\dfrac{1}{M}\right)S_w^2}{V_0 + \dfrac{S_b^2}{N}}.$$
So
$$C = kmn = \frac{km\left(S_b^2 - \dfrac{S_w^2}{M}\right)}{V_0 + \dfrac{S_b^2}{N}} + \frac{kS_w^2}{V_0 + \dfrac{S_b^2}{N}}.$$
If $\left(S_b^2 - \dfrac{S_w^2}{M}\right) > 0$, $C$ attains its minimum when $m$ assumes the smallest integral value, i.e., 1.
If $\left(S_b^2 - \dfrac{S_w^2}{M}\right) < 0$, $C$ attains its minimum when $\hat{m} = M$.
Comparison of two stage sampling with one stage sampling
One stage sampling procedures are comparable with two stage sampling procedures when either
(i) sampling $mn$ elements in one single stage, or
(ii) sampling $\dfrac{mn}{M}$ first stage units as clusters without subsampling.
We consider both cases.
Case 1: Sampling mn elements in one single stage
The variance of the sample mean based on
- $mn$ elements selected by SRSWOR (one stage) is
$$V(\bar{y}_{SRS}) = \left(\frac{1}{mn}-\frac{1}{MN}\right)S^2;$$
- two stage sampling is
$$V(\bar{y}_{TS}) = \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)S_b^2.$$
The intraclass correlation coefficient is
$$\rho = \frac{\dfrac{N-1}{N}MS_b^2 - \dfrac{NM-1}{NM}S^2}{(M-1)\dfrac{NM-1}{NM}S^2} = \frac{M(N-1)S_b^2 - NS_w^2}{(MN-1)S^2}; \qquad -\frac{1}{M-1} \le \rho \le 1, \qquad (1)$$
and using the identity
$$\sum_{i=1}^{N}\sum_{j=1}^{M}\big(y_{ij}-\bar{Y}\big)^2 = \sum_{i=1}^{N}\sum_{j=1}^{M}\big(y_{ij}-\bar{Y}_i\big)^2 + \sum_{i=1}^{N}\sum_{j=1}^{M}\big(\bar{Y}_i-\bar{Y}\big)^2,$$
i.e.,
$$(NM-1)S^2 = (N-1)MS_b^2 + N(M-1)S_w^2, \qquad (2)$$
where
$$\bar{Y} = \frac{1}{MN}\sum_{i=1}^{N}\sum_{j=1}^{M}y_{ij}, \qquad \bar{Y}_i = \frac{1}{M}\sum_{j=1}^{M}y_{ij}.$$
Now we need to find $S_b^2$ and $S_w^2$ from (1) and (2) in terms of $S^2$. From (1), we have
$$S_w^2 = \frac{N-1}{N}MS_b^2 - \rho\,\frac{MN-1}{N}S^2. \qquad (3)$$
Substituting this in (2) gives
$$(NM-1)S^2 = (N-1)MS_b^2 + N(M-1)\left[\frac{N-1}{N}MS_b^2 - \rho\,\frac{MN-1}{N}S^2\right]$$
$$(NM-1)S^2\big[1 + \rho(M-1)\big] = (N-1)MS_b^2\big[1 + (M-1)\big]$$
$$\Rightarrow\quad S_b^2 = \frac{(MN-1)S^2}{M^2(N-1)}\big[1 + \rho(M-1)\big].$$
Substituting this back in (2) gives
$$N(M-1)S_w^2 = (NM-1)S^2 - (N-1)MS_b^2 = (NM-1)S^2\left[1 - \frac{1+\rho(M-1)}{M}\right] = (NM-1)S^2\,\frac{(M-1)(1-\rho)}{M}$$
$$\Rightarrow\quad S_w^2 = \frac{MN-1}{MN}S^2(1-\rho).$$
Substituting $S_b^2$ and $S_w^2$ in $Var(\bar{y}_{TS})$ gives
$$V(\bar{y}_{TS}) = \frac{MN-1}{MN}\cdot\frac{S^2}{mn}\left[\frac{M-m}{M}(1-\rho) + \frac{m(N-n)}{M(N-1)}\big\{1+\rho(M-1)\big\}\right].$$
When the subsampling rate $\dfrac{m}{M}$ is small, $MN-1 \approx MN$ and $M-1 \approx M$, so
$$V(\bar{y}_{SRS}) \approx \frac{S^2}{mn}$$
$$V(\bar{y}_{TS}) \approx \frac{S^2}{mn}\left[1 + \rho\left(\frac{N-n}{N-1}\,m - 1\right)\right].$$
The relative efficiency of two stage sampling in relation to one stage sampling by SRSWOR is
$$RE = \frac{Var(\bar{y}_{TS})}{Var(\bar{y}_{SRS})} = 1 + \rho\left(\frac{N-n}{N-1}\,m - 1\right).$$
If $N-1 \approx N$ and the finite population correction is ignorable, then $\dfrac{N-n}{N-1} \approx \dfrac{N-n}{N} \approx 1$, and
$$RE \approx 1 + \rho\,(m-1).$$
Case 2: Comparison with cluster sampling
Suppose a random sample of $\dfrac{mn}{M}$ clusters, without further subsampling, is selected. The variance of the sample mean of the equivalent $mn/M$ clusters is
$$Var(\bar{y}_{cl}) = \left(\frac{M}{mn}-\frac{1}{N}\right)S_b^2.$$
The variance of the sample mean under two stage sampling is
$$Var(\bar{y}_{TS}) = \frac{1}{n}\left(\frac{1}{m}-\frac{1}{M}\right)S_w^2 + \left(\frac{1}{n}-\frac{1}{N}\right)S_b^2.$$
So $Var(\bar{y}_{cl})$ exceeds $Var(\bar{y}_{TS})$ by
$$\frac{1}{n}\left(\frac{M}{m}-1\right)\left(S_b^2 - \frac{1}{M}S_w^2\right),$$
which, for large $N$, is approximately
$$\frac{1}{n}\left(\frac{M}{m}-1\right)\rho S^2 \;\ge 0 \quad \text{when } \left(S_b^2 - \frac{S_w^2}{M}\right) \ge 0,$$
where
$$S_b^2 = \frac{MN-1}{M^2(N-1)}S^2\big[1+\rho(M-1)\big], \qquad S_w^2 = \frac{MN-1}{MN}S^2(1-\rho).$$
So the smaller $m/M$ is, the larger the reduction in the variance of a two stage sample over a cluster sample. When $\left(S_b^2 - \dfrac{S_w^2}{M}\right) < 0$, subsampling leads to a loss in precision.
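The exact identity for the excess of $Var(\bar{y}_{cl})$ over $Var(\bar{y}_{TS})$ is easy to verify numerically (all values assumed for illustration):

```python
# Illustrative comparison (assumed numbers) of a cluster sample of mn/M
# clusters with a two stage sample of m units from each of n clusters.
Sb2, Sw2, M, N, n, m = 12.0, 40.0, 20, 200, 10, 4

var_ts = (1 / n) * (1 / m - 1 / M) * Sw2 + (1 / n - 1 / N) * Sb2
var_cl = (M / (m * n) - 1 / N) * Sb2
gap = (1 / n) * (M / m - 1) * (Sb2 - Sw2 / M)   # identity derived above

print(var_cl, var_ts, var_cl - var_ts, gap)     # the last two agree exactly
```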

Two stage sampling with unequal first stage units:
Consider two stage sampling when the first stage units are of unequal size and SRSWOR is employed at each stage.
Let
- $y_{ij}$: value of the $j$th second stage unit of the $i$th first stage unit;
- $M_i$: number of second stage units in the $i$th first stage unit $(i = 1,2,\ldots,N)$;
- $M_0 = \displaystyle\sum_{i=1}^{N}M_i$: total number of second stage units in the population;
- $m_i$: number of second stage units to be selected from the $i$th first stage unit, if it is in the sample;
- $m_0 = \displaystyle\sum_{i=1}^{n}m_i$: total number of second stage units in the sample;
- $\bar{y}_{i(m_i)} = \dfrac{1}{m_i}\displaystyle\sum_{j=1}^{m_i}y_{ij}$;
- $\bar{Y}_i = \dfrac{1}{M_i}\displaystyle\sum_{j=1}^{M_i}y_{ij}$;
- $\bar{Y}_N = \dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\bar{Y}_i$;
- $\bar{Y} = \dfrac{\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{M_i}y_{ij}}{\displaystyle\sum_{i=1}^{N}M_i} = \dfrac{\displaystyle\sum_{i=1}^{N}M_i\bar{Y}_i}{\bar{M}N} = \dfrac{1}{N}\displaystyle\sum_{i=1}^{N}u_i\bar{Y}_i$;
- $u_i = \dfrac{M_i}{\bar{M}}$, where $\bar{M} = \dfrac{1}{N}\displaystyle\sum_{i=1}^{N}M_i$.
The pictorial scheme of two stage sampling with unequal first stage units is as follows:
[Diagram: the population consists of $N$ clusters of sizes $M_1, M_2, \ldots, M_N$; the first stage sample selects $n$ clusters (small in number); the second stage sample selects $m_1, m_2, \ldots, m_n$ units from the $n$ selected clusters.]
Now we consider different estimators for the estimation of the population mean.
1. Estimator based on the first stage unit means in the sample:
$$\hat{\bar{Y}} = \bar{y}_{S2} = \frac{1}{n}\sum_{i=1}^{n}\bar{y}_{i(m_i)}$$
Bias:
$$E(\bar{y}_{S2}) = E_1\left[\frac{1}{n}\sum_{i=1}^{n}E_2\big(\bar{y}_{i(m_i)}\mid i\big)\right] = E_1\left[\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_i\right]$$
[since a sample of size $m_i$ is selected out of $M_i$ units by SRSWOR]
$$= \frac{1}{N}\sum_{i=1}^{N}\bar{Y}_i = \bar{Y}_N \neq \bar{Y}.$$
So $\bar{y}_{S2}$ is a biased estimator of $\bar{Y}$, and its bias is given by
$$Bias(\bar{y}_{S2}) = E(\bar{y}_{S2}) - \bar{Y} = \frac{1}{N}\sum_{i=1}^{N}\bar{Y}_i - \frac{1}{N\bar{M}}\sum_{i=1}^{N}M_i\bar{Y}_i$$
$$= -\frac{1}{N\bar{M}}\left[\sum_{i=1}^{N}M_i\bar{Y}_i - \frac{1}{N}\left(\sum_{i=1}^{N}\bar{Y}_i\right)\left(\sum_{i=1}^{N}M_i\right)\right]$$
$$= -\frac{1}{N\bar{M}}\sum_{i=1}^{N}\big(M_i-\bar{M}\big)\big(\bar{Y}_i-\bar{Y}_N\big).$$
This bias can be estimated by
$$\widehat{Bias}(\bar{y}_{S2}) = -\frac{N-1}{N\bar{M}(n-1)}\sum_{i=1}^{n}\big(M_i-\bar{m}\big)\big(\bar{y}_{i(m_i)}-\bar{y}_{S2}\big),$$
where $\bar{m} = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n}M_i$,
which can be seen as follows:
$$E\big[\widehat{Bias}(\bar{y}_{S2})\big] = -\frac{N-1}{N\bar{M}}E_1\left[\frac{1}{n-1}\sum_{i=1}^{n}E_2\Big\{\big(M_i-\bar{m}\big)\big(\bar{y}_{i(m_i)}-\bar{y}_{S2}\big)\,\Big|\,n\Big\}\right]$$
$$= -\frac{N-1}{N\bar{M}}E_1\left[\frac{1}{n-1}\sum_{i=1}^{n}\big(M_i-\bar{m}\big)\big(\bar{Y}_i-\bar{y}_n\big)\right]$$
$$= -\frac{1}{N\bar{M}}\sum_{i=1}^{N}\big(M_i-\bar{M}\big)\big(\bar{Y}_i-\bar{Y}_N\big) = \bar{Y}_N - \bar{Y},$$
where $\bar{y}_n = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\bar{Y}_i$. (The middle step uses the fact that under SRSWOR of first stage units, the sample covariance is an unbiased estimator of the population covariance: $E_1\Big[\frac{1}{n-1}\sum_{i=1}^{n}(M_i-\bar{m})(\bar{Y}_i-\bar{y}_n)\Big] = \frac{1}{N-1}\sum_{i=1}^{N}(M_i-\bar{M})(\bar{Y}_i-\bar{Y}_N)$.)
An unbiased estimator of the population mean $\bar{Y}$ is thus obtained as
$$\bar{y}_{S2} + \frac{N-1}{N\bar{M}}\cdot\frac{1}{n-1}\sum_{i=1}^{n}\big(M_i-\bar{m}\big)\big(\bar{y}_{i(m_i)}-\bar{y}_{S2}\big).$$
Note that the bias arises due to the inequality of the sizes of the first stage units, so that the probability of selection of a second stage unit varies from one first stage unit to another.
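A Monte Carlo sketch (population, cluster sizes and seed all invented for illustration) showing that $\bar{y}_{S2}$ is biased when the $M_i$ are unequal and correlated with the cluster means, while the bias-corrected estimator above is unbiased:

```python
import random

random.seed(3)

# Invented population with unequal cluster sizes M_i; cluster means grow
# with M_i so that ybar_S2 is visibly biased for Ybar.
N, n = 40, 8
M = [random.choice([4, 8, 12, 16]) for _ in range(N)]
Y = [[random.gauss(5.0 + Mi, 1.0) for _ in range(Mi)] for Mi in M]
m = [max(2, Mi // 2) for Mi in M]            # m_i per cluster if selected

Mbar = sum(M) / N
Ybar = sum(sum(row) for row in Y) / sum(M)   # true population mean

reps = 50_000
est_plain, est_corrected = [], []
for _ in range(reps):
    fsu = random.sample(range(N), n)
    ybars = [sum(random.sample(Y[i], m[i])) / m[i] for i in fsu]
    ys2 = sum(ybars) / n
    mbar = sum(M[i] for i in fsu) / n
    # bias correction derived above
    corr = (N - 1) / (N * Mbar * (n - 1)) * sum(
        (M[i] - mbar) * (yb - ys2) for i, yb in zip(fsu, ybars))
    est_plain.append(ys2)
    est_corrected.append(ys2 + corr)

print(sum(est_plain) / reps, sum(est_corrected) / reps, Ybar)
```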
Variance:
$$Var(\bar{y}_{S2}) = Var\big[E(\bar{y}_{S2}\mid n)\big] + E\big[Var(\bar{y}_{S2}\mid n)\big]$$
$$= Var\left[\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_i\right] + E\left[\frac{1}{n^2}\sum_{i=1}^{n}Var\big(\bar{y}_{i(m_i)}\mid i\big)\right]$$
$$= \left(\frac{1}{n}-\frac{1}{N}\right)S_b^2 + \frac{1}{Nn}\sum_{i=1}^{N}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_i^2,$$
where
$$S_b^2 = \frac{1}{N-1}\sum_{i=1}^{N}\big(\bar{Y}_i-\bar{Y}_N\big)^2, \qquad S_i^2 = \frac{1}{M_i-1}\sum_{j=1}^{M_i}\big(y_{ij}-\bar{Y}_i\big)^2.$$
The MSE can be obtained as
$$MSE(\bar{y}_{S2}) = Var(\bar{y}_{S2}) + \big[Bias(\bar{y}_{S2})\big]^2.$$
Estimation of variance:
Consider the mean square between cluster means in the sample,
$$s_b^2 = \frac{1}{n-1}\sum_{i=1}^{n}\big(\bar{y}_{i(m_i)}-\bar{y}_{S2}\big)^2.$$
It can be shown that
$$E(s_b^2) = S_b^2 + \frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_i^2.$$
Also, with
$$s_i^2 = \frac{1}{m_i-1}\sum_{j=1}^{m_i}\big(y_{ij}-\bar{y}_{i(m_i)}\big)^2, \qquad E(s_i^2) = S_i^2 = \frac{1}{M_i-1}\sum_{j=1}^{M_i}\big(y_{ij}-\bar{Y}_i\big)^2,$$
we have
$$E\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_i^2\right] = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_i^2.$$
Thus
$$E(s_b^2) = S_b^2 + E\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_i^2\right],$$
and an unbiased estimator of $S_b^2$ is
$$\hat{S}_b^2 = s_b^2 - \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_i^2.$$
So an estimator of the variance can be obtained by replacing $S_b^2$ and $S_i^2$ by their unbiased estimators as
$$\widehat{Var}(\bar{y}_{S2}) = \left(\frac{1}{n}-\frac{1}{N}\right)\hat{S}_b^2 + \frac{1}{Nn}\sum_{i=1}^{N}\left(\frac{1}{m_i}-\frac{1}{M_i}\right)\hat{S}_i^2.$$
2. Estimator based on the first stage unit totals:
$$\hat{\bar{Y}} = \bar{y}_{S2}^{*} = \frac{1}{n}\sum_{i=1}^{n}\frac{M_i\bar{y}_{i(m_i)}}{\bar{M}} = \frac{1}{n}\sum_{i=1}^{n}u_i\bar{y}_{i(m_i)},$$
where $u_i = \dfrac{M_i}{\bar{M}}$.
Bias:
$$E(\bar{y}_{S2}^{*}) = E\left[\frac{1}{n}\sum_{i=1}^{n}u_iE_2\big(\bar{y}_{i(m_i)}\mid i\big)\right] = E\left[\frac{1}{n}\sum_{i=1}^{n}u_i\bar{Y}_i\right] = \frac{1}{N}\sum_{i=1}^{N}u_i\bar{Y}_i = \bar{Y}.$$
Thus $\bar{y}_{S2}^{*}$ is an unbiased estimator of $\bar{Y}$.
Variance:
$$Var(\bar{y}_{S2}^{*}) = Var\big[E(\bar{y}_{S2}^{*}\mid n)\big] + E\big[Var(\bar{y}_{S2}^{*}\mid n)\big]$$
$$= Var\left[\frac{1}{n}\sum_{i=1}^{n}u_i\bar{Y}_i\right] + E\left[\frac{1}{n^2}\sum_{i=1}^{n}u_i^2\,Var\big(\bar{y}_{i(m_i)}\mid i\big)\right]$$
$$= \left(\frac{1}{n}-\frac{1}{N}\right)S_b^{*2} + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_i^2,$$
where
$$S_i^2 = \frac{1}{M_i-1}\sum_{j=1}^{M_i}\big(y_{ij}-\bar{Y}_i\big)^2, \qquad S_b^{*2} = \frac{1}{N-1}\sum_{i=1}^{N}\big(u_i\bar{Y}_i-\bar{Y}\big)^2.$$
3. Estimator based on the ratio method of estimation:
$$\hat{\bar{Y}} = \bar{y}_{S2}^{**} = \frac{\displaystyle\sum_{i=1}^{n}M_i\bar{y}_{i(m_i)}}{\displaystyle\sum_{i=1}^{n}M_i} = \frac{\displaystyle\sum_{i=1}^{n}u_i\bar{y}_{i(m_i)}}{\displaystyle\sum_{i=1}^{n}u_i} = \frac{\bar{y}_{S2}^{*}}{\bar{u}_n},$$
where $u_i = \dfrac{M_i}{\bar{M}}$ and $\bar{u}_n = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n}u_i$.
This estimator can be seen as arising from the ratio method of estimation as follows. Let
$$y_i^{*} = u_i\bar{y}_{i(m_i)}, \qquad x_i^{*} = u_i = \frac{M_i}{\bar{M}}, \quad i = 1,2,\ldots,N,$$
be the values of the study variable and the auxiliary variable in reference to the ratio method of estimation. Then
$$\bar{y}^{*} = \frac{1}{n}\sum_{i=1}^{n}y_i^{*} = \bar{y}_{S2}^{*}, \qquad \bar{x}^{*} = \frac{1}{n}\sum_{i=1}^{n}x_i^{*} = \bar{u}_n, \qquad \bar{X}^{*} = \frac{1}{N}\sum_{i=1}^{N}X_i^{*} = 1.$$
The corresponding ratio estimator of $\bar{Y}$ is
$$\hat{\bar{Y}}_R = \frac{\bar{y}^{*}}{\bar{x}^{*}}\bar{X}^{*} = \frac{\bar{y}_{S2}^{*}}{\bar{u}_n}\cdot 1 = \bar{y}_{S2}^{**}.$$
So the bias and mean squared error of $\bar{y}_{S2}^{**}$ can be obtained directly from the results for the ratio estimator. Recall that in the ratio method of estimation, the bias and MSE of the ratio estimator up to second order of approximation are
$$Bias(\hat{\bar{Y}}_R) = \frac{N-n}{Nn}\bar{Y}\big(C_x^2 - \rho C_xC_y\big) = \bar{Y}\left[\frac{Var(\bar{x})}{\bar{X}^2} - \frac{Cov(\bar{x},\bar{y})}{\bar{X}\bar{Y}}\right]$$
$$MSE(\hat{\bar{Y}}_R) = Var(\bar{y}) + R^2Var(\bar{x}) - 2R\,Cov(\bar{x},\bar{y}),$$
where $R = \dfrac{\bar{Y}}{\bar{X}}$.
Bias:
The bias of $\bar{y}_{S2}^{**}$ up to second order of approximation is
$$Bias(\bar{y}_{S2}^{**}) = \bar{Y}\left[\frac{Var(\bar{x}_{S2}^{*})}{\bar{X}^2} - \frac{Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*})}{\bar{X}\bar{Y}}\right],$$
where $\bar{x}_{S2}^{*}$ is the mean of the auxiliary variable defined analogously to $\bar{y}_{S2}^{*}$, i.e., $\bar{x}_{S2}^{*} = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n}u_i\bar{x}_{i(m_i)}$.
Now we find $Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*})$:
$$Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*}) = Cov\left[E\left(\frac{1}{n}\sum_{i=1}^{n}u_i\bar{x}_{i(m_i)}\,\Big|\,i\right), E\left(\frac{1}{n}\sum_{i=1}^{n}u_i\bar{y}_{i(m_i)}\,\Big|\,i\right)\right] + E\left[Cov\left(\frac{1}{n}\sum_{i=1}^{n}u_i\bar{x}_{i(m_i)}, \frac{1}{n}\sum_{i=1}^{n}u_i\bar{y}_{i(m_i)}\,\Big|\,i\right)\right]$$
$$= Cov\left[\frac{1}{n}\sum_{i=1}^{n}u_i\bar{X}_i, \frac{1}{n}\sum_{i=1}^{n}u_i\bar{Y}_i\right] + E\left[\frac{1}{n^2}\sum_{i=1}^{n}u_i^2\,Cov\big(\bar{x}_{i(m_i)},\bar{y}_{i(m_i)}\mid i\big)\right]$$
$$= \left(\frac{1}{n}-\frac{1}{N}\right)S_{bxy}^{*} + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ixy},$$
where
$$S_{bxy}^{*} = \frac{1}{N-1}\sum_{i=1}^{N}\big(u_i\bar{X}_i-\bar{X}\big)\big(u_i\bar{Y}_i-\bar{Y}\big), \qquad S_{ixy} = \frac{1}{M_i-1}\sum_{j=1}^{M_i}\big(x_{ij}-\bar{X}_i\big)\big(y_{ij}-\bar{Y}_i\big).$$
Similarly, $Var(\bar{x}_{S2}^{*})$ can be obtained by replacing $y$ by $x$ in $Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*})$ as
$$Var(\bar{x}_{S2}^{*}) = \left(\frac{1}{n}-\frac{1}{N}\right)S_{bx}^{*2} + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ix}^2,$$
where
$$S_{bx}^{*2} = \frac{1}{N-1}\sum_{i=1}^{N}\big(u_i\bar{X}_i-\bar{X}\big)^2, \qquad S_{ix}^2 = \frac{1}{M_i-1}\sum_{j=1}^{M_i}\big(x_{ij}-\bar{X}_i\big)^2.$$
Substituting $Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*})$ and $Var(\bar{x}_{S2}^{*})$ in $Bias(\bar{y}_{S2}^{**})$, we obtain the approximate bias as
$$Bias(\bar{y}_{S2}^{**}) = \bar{Y}\left[\left(\frac{1}{n}-\frac{1}{N}\right)\left(\frac{S_{bx}^{*2}}{\bar{X}^2} - \frac{S_{bxy}^{*}}{\bar{X}\bar{Y}}\right) + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)\left(\frac{S_{ix}^2}{\bar{X}^2} - \frac{S_{ixy}}{\bar{X}\bar{Y}}\right)\right].$$
Mean squared error:
$$MSE(\bar{y}_{S2}^{**}) = Var(\bar{y}_{S2}^{*}) + R^{*2}Var(\bar{x}_{S2}^{*}) - 2R^{*}Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*}),$$
where
$$Var(\bar{y}_{S2}^{*}) = \left(\frac{1}{n}-\frac{1}{N}\right)S_{by}^{*2} + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{iy}^2$$
$$Var(\bar{x}_{S2}^{*}) = \left(\frac{1}{n}-\frac{1}{N}\right)S_{bx}^{*2} + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ix}^2$$
$$Cov(\bar{x}_{S2}^{*},\bar{y}_{S2}^{*}) = \left(\frac{1}{n}-\frac{1}{N}\right)S_{bxy}^{*} + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ixy},$$
with
$$S_{by}^{*2} = \frac{1}{N-1}\sum_{i=1}^{N}\big(u_i\bar{Y}_i-\bar{Y}\big)^2, \qquad S_{iy}^2 = \frac{1}{M_i-1}\sum_{j=1}^{M_i}\big(y_{ij}-\bar{Y}_i\big)^2, \qquad R^{*} = \frac{\bar{Y}}{\bar{X}} = \bar{Y}.$$
Thus
$$MSE(\bar{y}_{S2}^{**}) = \left(\frac{1}{n}-\frac{1}{N}\right)\Big[S_{by}^{*2} - 2R^{*}S_{bxy}^{*} + R^{*2}S_{bx}^{*2}\Big] + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)\Big[S_{iy}^2 - 2R^{*}S_{ixy} + R^{*2}S_{ix}^2\Big].$$
Also,
$$MSE(\bar{y}_{S2}^{**}) = \left(\frac{1}{n}-\frac{1}{N}\right)\frac{1}{N-1}\sum_{i=1}^{N}u_i^2\big(\bar{Y}_i - R^{*}\bar{X}_i\big)^2 + \frac{1}{nN}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)\Big[S_{iy}^2 - 2R^{*}S_{ixy} + R^{*2}S_{ix}^2\Big].$$
Estimate of variance
Consider
$$s_{bxy}^{*} = \frac{1}{n-1}\sum_{i=1}^{n}\big(u_i\bar{y}_{i(m_i)}-\bar{y}_{S2}^{*}\big)\big(u_i\bar{x}_{i(m_i)}-\bar{x}_{S2}^{*}\big),$$
$$s_{ixy} = \frac{1}{m_i-1}\sum_{j=1}^{m_i}\big(x_{ij}-\bar{x}_{i(m_i)}\big)\big(y_{ij}-\bar{y}_{i(m_i)}\big).$$
It can be shown that
$$E\big(s_{bxy}^{*}\big) = S_{bxy}^{*} + \frac{1}{N}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ixy}, \qquad E(s_{ixy}) = S_{ixy}.$$
So
$$E\left[\frac{1}{n}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_{ixy}\right] = \frac{1}{N}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ixy}.$$
Thus
$$\hat{S}_{bxy}^{*} = s_{bxy}^{*} - \frac{1}{n}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_{ixy}$$
$$\hat{S}_{bx}^{*2} = s_{bx}^{*2} - \frac{1}{n}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_{ix}^2$$
$$\hat{S}_{by}^{*2} = s_{by}^{*2} - \frac{1}{n}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_{iy}^2.$$
Also,
$$E\left[\frac{1}{n}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_{ix}^2\right] = \frac{1}{N}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{ix}^2,$$
$$E\left[\frac{1}{n}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)s_{iy}^2\right] = \frac{1}{N}\sum_{i=1}^{N}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)S_{iy}^2.$$
A consistent estimator of the MSE of $\bar{y}_{S2}^{**}$ can be obtained by substituting the unbiased estimators of the respective statistics in $MSE(\bar{y}_{S2}^{**})$ as
$$\widehat{MSE}(\bar{y}_{S2}^{**}) = \left(\frac{1}{n}-\frac{1}{N}\right)\Big[s_{by}^{*2} - 2r^{*}s_{bxy}^{*} + r^{*2}s_{bx}^{*2}\Big] + \frac{1}{nN}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)\Big[s_{iy}^2 - 2r^{*}s_{ixy} + r^{*2}s_{ix}^2\Big]$$
$$= \left(\frac{1}{n}-\frac{1}{N}\right)\frac{1}{n-1}\sum_{i=1}^{n}u_i^2\big(\bar{y}_{i(m_i)} - r^{*}\bar{x}_{i(m_i)}\big)^2 + \frac{1}{nN}\sum_{i=1}^{n}u_i^2\left(\frac{1}{m_i}-\frac{1}{M_i}\right)\Big[s_{iy}^2 - 2r^{*}s_{ixy} + r^{*2}s_{ix}^2\Big],$$
where $r^{*} = \dfrac{\bar{y}_{S2}^{*}}{\bar{x}_{S2}^{*}}$.