RUNNING HEAD: Comparison of Weights
Comparison of Weights in Meta-analysis Under Realistic Conditions
Michael T. Brannick
Liuqin Yang
Guy Cafri
University of South Florida
Poster presented at the 23rd annual conference of the Society for Industrial and
Organizational Psychology, San Francisco, CA, April 2008.
Abstract
We compared several weighting procedures for random-effects meta-analysis under
realistic conditions. Weighting schemes included unit weights, sample-size weights, inverse-variance weights in r
and in z, empirical Bayes weights, and a combination procedure. Unit weights worked
surprisingly well, and the Hunter and Schmidt (2004) procedures appeared to work best
overall.
Comparison of Weights in Meta-analysis Under Realistic Conditions
Meta-analysis refers to the quantitative analysis of the results of empirical studies
(e.g., Hedges & Vevea, 1998; Hunter & Schmidt, 2004; Lipsey & Wilson, 2001). Meta-analysis is often used as a means of review and synthesis of previous studies, and also to
test theoretical propositions that cannot be tested in the primary research (e.g., Bond &
Smith, 1996). A general aim of most meta-analyses is to estimate the mean of the
distribution of effect sizes from multiple studies, and to estimate and explain variance in
the distribution of effect sizes.
A major distinction is between fixed- and random-effects meta-analysis (National
Research Council, 1992). In fixed-effects analysis, the observed effect sizes are all taken
to be estimates of a common underlying parameter, so that if one could collect an infinite
sample size for each study, the results would be identical across studies. In random-effects meta-analysis, the underlying parameter is assumed to have a distribution, so that
if one could collect infinite samples for each study, the studies would result in different
estimates. It seems likely that the latter condition (random-effects) is a better
representation of real data because of differences across studies such as measures,
procedures, treatments, and participant populations. In this paper, therefore, we confine
our discussion to random-effects procedures.
Commonly Used Methods
The two methods for random-effects meta-analysis that have received the most attention
are those developed by Hedges and Vevea (1998) and Hunter and Schmidt (2004). Both
methods have evolved somewhat (see Hedges, 1983; Hedges & Olkin, 1985; Schmidt &
Hunter, 1977; Hunter & Schmidt, 1990), but we will generally refer to the recent versions
(i.e., Hedges & Vevea, 1998; Hunter & Schmidt, 2004). Both methods provide an
estimate of the overall mean effect size and an estimate of the variability of infinite-sample effect sizes. For convenience and because of its common use, we will be dealing
with the effect size r, the correlation coefficient. The overall mean will be denoted $\bar{\rho}$
(for any given study context, the local mean is $\rho_i$), and the variance of infinite-sample
effect sizes (the random-effects variance component, REVC) will be denoted $\tau^2$.
In previous comparisons of the two approaches, the Hunter and Schmidt (2004)
approach has generally provided more accurate results than has the Hedges and Vevea
(1998) approach (e.g., Field, 2001; Hall & Brannick, 2002; Schulze, 2004). Such a result
is unexpected because Hedges and Olkin (1985) showed that the maximum likelihood
estimator of the mean in the random-effects case depends upon both the sampling variance of
the individual studies and the variance of infinite-sample effect sizes (the REVC, $\tau^2$),
but the Hunter and Schmidt (2004) procedure uses sample size weights, which do not
incorporate the REVC. Thus, the Hunter and Schmidt (2004) weights can be shown to be
suboptimal from a statistical/mathematical standpoint. However, both the individual
study values ($r_i$) and the REVC estimate are subject to sampling error, and thus in practice
the statistically optimal weights may not provide more accurate estimates, particularly if the individual study sample sizes
are small. In addition, the first step in the Hedges approach is to transform the correlation
from r to z, which creates other problems (described shortly; see also Schmidt, Hunter, &
Raju, 1988; Silver & Dunlap, 1987).
The current paper was an attempt to better understand the reason for the
unexpected findings and to improve current estimators by exploring several different
weighting schemes as well as the r to z transformation. To conserve space, we do not
present either the Hunter and Schmidt (2004) or the Hedges and Vevea (1998) methods,
as they have appeared in several books and articles; we trust that the reader has a basic
familiarity with them. We describe the rest of the methods and our rationale for their
choice next.
Other Weighting Schemes
Hedges & Vevea in r. A main advantage of transforming r to z is that the
sampling variance of z is independent of $\rho$. There are some drawbacks to its use,
however. First, the REVC is in the metric of z, and thus cannot be directly interpreted as
the variance of correlations. Second, in the random-effects case, the average of z values
back-transformed to r will not generally equal the average of the r values (Schulze, 2004).
For example, if the population values are .4 and .6, their average is .5, but the back-transformed
average based on z is about .51. Finally, the analysis of moderators is complicated
by the use of z (Mason, Allam, & Brannick, in press). If r is linearly related to some
moderator, z cannot be, and vice versa. Therefore, it might be advantageous to compute
inverse variance weights in r rather than z. The (fixed-effects) weight in such a case is
computed as (see Hunter & Schmidt, 2004, who present the estimated sampling variance
for a study):
$$w_i = \frac{N_i - 1}{(1 - r_i^2)^2}. \qquad (1)$$
Note that this estimator is likely to produce biased estimates, particularly when N is small,
because large absolute values of r will receive greater weight. Other than the choice of
weight, this method proceeds just as the method described by Hedges and Vevea (1998). If
this method produces estimates that are as good as the original Hedges and Vevea (1998)
method, then it is preferable because it avoids the interpretational problem introduced by
the z transformation.
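To make this procedure concrete, the sketch below (our Python rendering, not the authors' program) computes the Equation 1 weights and then follows the Hedges and Vevea logic in r, using the usual method-of-moments REVC; the function name hv_in_r and the use of numpy are our own assumptions.

import numpy as np

def hv_in_r(r, n):
    # Random-effects mean and REVC with inverse-variance weights in r,
    # following the Hedges & Vevea logic without the r-to-z transformation.
    r, n = np.asarray(r, float), np.asarray(n, float)
    v = (1 - r**2)**2 / (n - 1)              # estimated sampling variance of each r
    w = 1 / v                                # fixed-effects weights (Equation 1)
    mean_fe = np.sum(w * r) / np.sum(w)      # fixed-effects mean
    q = np.sum(w * (r - mean_fe)**2)         # homogeneity statistic Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)  # method-of-moments REVC
    w_re = 1 / (v + tau2)                    # random-effects weights
    return np.sum(w_re * r) / np.sum(w_re), tau2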
Shrunken estimates. In Equation 1, large values of r will occur by sampling error,
and those large values will receive large weights. Therefore, it might be advantageous to
use values of $\hat{\rho}_i$ that fall somewhere between $\bar{r}$ and $r_i$. One way to compute values that
fall between the mean and the individual study values is to use Empirical Bayes (EB)
estimates (e.g., Burr & Doss, 2005). Empirical Bayes estimates pull individual study
estimates toward the overall mean effect size by computing a weighted average
composed of the overall mean and the initial study value. Individual studies are thus
shrunken toward the mean depending on the amount of information provided by the
overall mean and the individual study. To compute the EB estimates, first compute a
mean and REVC using the Hunter and Schmidt (2004) method. Then compute the
sampling variance of that mean, using
$$V_M = \frac{(1 - \bar{r}^2)^2}{N_t - 1}, \qquad (2)$$

where $N_t$ is the total sample size for the meta-analysis. The weight for the mean
is computed as

$$w_M = \frac{1}{V_M + \hat{\tau}^2}. \qquad (3)$$
Note that the weight for the mean will become very large with large values of
total sample size and small values of the REVC; we see the greatest shrinkage with small
sample size studies that show little or no variability beyond that expected by sampling
error. The shrunken (EB) estimates are computed as a weighted average of the mean
effect size and the study effect size, thus:
$$EB_i = \frac{N_i r_i + w_M \bar{r}}{N_i + w_M}. \qquad (4)$$
The EB estimates are substituted for the raw correlations used to calculate weights
(but not the correlations themselves) in the Hedges and Vevea algorithm for raw
correlations to compute an overall mean. We do not have a mathematical justification for
the use of EB estimates in this context. They appear to be a rational compromise
between the maximum likelihood and sample size weights, however, and are subject to
empirical evaluation, just as the other methods are.
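A minimal sketch of Equations 2 through 4 follows, assuming the mean (r_bar) and REVC (tau2_hat) come from a prior Hunter-Schmidt bare-bones pass; the function name is our own.

import numpy as np

def eb_estimates(r, n, r_bar, tau2_hat):
    # Shrink each study correlation toward the overall mean (Equations 2-4).
    r, n = np.asarray(r, float), np.asarray(n, float)
    v_m = (1 - r_bar**2)**2 / (np.sum(n) - 1)  # Equation 2: sampling variance of the mean
    w_m = 1 / (v_m + tau2_hat)                 # Equation 3: weight for the mean
    return (n * r + w_m * r_bar) / (n + w_m)   # Equation 4: shrunken (EB) estimates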
Combination Estimates. Because previous studies showed an advantage to the
Hunter and Schmidt method of estimating the REVC, we speculated that incorporating
this REVC into the optimal weights provided by Hedges and Vevea (1998) might
prove advantageous. Therefore, we included an estimator that coupled the Hunter and
Schmidt method of estimating the REVC with the Hedges and Vevea (1998)
method of estimating the mean (using r rather than z as the effect size).
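Under one plausible reading of this procedure, the Hunter-Schmidt REVC simply replaces the method-of-moments REVC in the random-effects weights in r; the sketch below (our own interpretation, with a hypothetical function name) shows that reading.

import numpy as np

def combination_mean(r, n, tau2_hs):
    # Inverse-variance random-effects weights in r, with the REVC
    # estimated by the Hunter-Schmidt method supplied as tau2_hs.
    r, n = np.asarray(r, float), np.asarray(n, float)
    v = (1 - r**2)**2 / (n - 1)   # per-study sampling variance in r
    w = 1 / (v + tau2_hs)         # weights combining both sources of variance
    return np.sum(w * r) / np.sum(w)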
Unit weights. Most meta-analysts use weights of one sort or another when
calculating the overall mean and variance of effect sizes. Unit weights ignore the
possibility that different studies provide different amounts of information (precision).
Unit weights are thus generally considered inferior. We have three reasons for including
them in our simulations. First, they serve as a baseline so that we can judge the
effectiveness of weighting schemes against the simplest of alternatives. Second, many
earlier meta-analyses (and some recent ones) used unit weights (e.g., Vacha-Haase, 1998),
so it is of interest to see whether precision weights provide enough of an advantage to
warrant redoing the original, unit-weighted analyses. Finally, the incorporation of the REVC into the optimal weights makes the
weights approach unit weights as the REVC becomes large relative to the sampling error
of the individual studies. Thus, in the limit, unit weights should become increasingly
viable, and eventually excel, as the true between-studies variance grows large relative to
individual study sampling error.
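To state the limiting argument just given explicitly: with per-study sampling variances $v_i$, the random-effects weights are

$$w_i^{*} = \frac{1}{v_i + \tau^2},$$

and as $\tau^2$ grows large relative to every $v_i$, each $w_i^{*}$ approaches the common constant $1/\tau^2$; weighting every study by the same constant is equivalent to unit weighting.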
Realistic Simulation
Previous simulations have usually not been guided by representative values of
parameters ($\bar{\rho}$, REVC). The choice of individual study sample sizes and the number of
studies to be synthesized has also been entirely at the discretion of the researchers, who
must guess at values of interest. However, now that so many meta-analyses have been published, it is possible to base simulations on published analyses.
Basing simulations on published meta-analyses helps establish an argument for the
generalizability of the simulation.
The simulations reported in this paper are based (except where noted) on
previously published meta-analyses. The approach was to sample actual numbers of
studies and sample sizes directly from the published meta-analyses. The underlying
distributions of rho were based on the published meta-analyses as well, but we had to
make some assumptions about those distributions as they can only be estimated and
cannot be known exactly with real data.
Method
The general approach was to sample effect sizes, sample sizes, and numbers of
studies from actual published meta-analyses. The published distributions were also used
to inform the selection of parameters for simulations. Having known parameters allowed
the evaluation of the quality of estimation from the various methods of meta-analysis.
The simulations were thus closely linked to what is encountered in practice and are
thought to provide evaluations of meta-analytic methods under realistic conditions.
Inclusion criteria and coding. Three journals were chosen to represent meta-analyses in a broad array of applied topics: Academy of Management Journal, Journal of
Applied Psychology, and Personnel Psychology. Each of the journals was searched by
hand for the period 1979 to 2005. All of the meta-analyses were selected and coded
provided that the article included the effect sizes and the effect sizes were either
correlation coefficients or were easily converted to correlations. The selection and coding
process resulted in a database with 48 meta-analyses and 1,837 effect sizes. Fourteen of
the 48 meta-analyses were randomly chosen for double-coding by two coders. The
intraclass correlations (Shrout & Fleiss, 1979) ICC(2, 1) and ICC(3, 1) were 1.0 for the
coding of sample sizes, and .99 for the coding of effect sizes.
Design
Simulation conditions for N_bar and N_skew. The distribution of the
study sample sizes (N) in each of the 48 meta-analyses was investigated. Most meta-analyses
contained a rather skewed distribution of sample sizes. Based on inspection of the
distributions of sample size for meta-analyses, we decided to divide the meta-analyses
into groups depending on the average sample size and degree of skew. For each meta-analysis, we computed the average N (N_bar) and the skewness of N (N_skew) for that
meta-analysis. Then the distribution of averages and skewness values was computed
across meta-analyses (essentially an empirical sampling distribution). The empirical
sampling distribution of average sample size had a median of 168.57. The empirical
sampling distribution of skewness had a median of 2.25. Each of the meta-analyses was
classified for convenience into one of four conditions based on the medians for sample
size and skew. Representative distributions of sample sizes from each of the four
conditions are shown in Figure 1.
Insert Figure 1 about here
Number of studies (k). The numbers of studies used for our simulations were
sampled from actual meta-analyses. Overall, k varied from 10 to 97. When one meta-analysis was randomly chosen from our pool of 48 meta-analyses, the number of studies
and sample sizes of each study were used for that particular simulation.
Choice of parameters. The sample sizes and numbers of studies for each meta-analysis were sampled in our simulations. The values of the parameters ($\bar{\rho}$ and $\tau$) could
not be sampled directly because they are unknown. However, an attempt was made to
choose values of parameters that are plausible, which was done in the following way. All
the coded effect sizes from each of our 48 meta-analyses were meta-analyzed with the
Hunter and Schmidt (2004) approach without any artifact corrections (the 'bare bones'
approach), which resulted in an estimated $\bar{\rho}$ and an estimated $\tau$ for each meta-analysis.
The distribution of $|\hat{\rho}|$ across meta-analyses showed a 10th percentile of .10, a median
of .22, and a 90th percentile of .44. The distribution of $\hat{\tau}^2$ showed a 10th percentile
of .0005, a median of .0128, and a 90th percentile of .0328. These values were
used to create a 3 ($\bar{\rho} = .10, .22, .44$) by 3 ($\tau = .02, .11, .18$) design of parameter
conditions for the simulations by which to compare the meta-analysis approaches in their
ability to recover parameters.
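For reference, here is a compact Python sketch of the bare-bones computation as we read Hunter and Schmidt (2004); the function name and numpy usage are our own assumptions, not the authors' code.

import numpy as np

def bare_bones_hs(r, n):
    # Hunter-Schmidt 'bare bones' meta-analysis: N-weighted mean r, with the
    # REVC estimated as observed variance minus expected sampling-error variance.
    r, n = np.asarray(r, float), np.asarray(n, float)
    r_bar = np.sum(n * r) / np.sum(n)                   # N-weighted mean correlation
    var_obs = np.sum(n * (r - r_bar)**2) / np.sum(n)    # N-weighted observed variance
    var_err = (1 - r_bar**2)**2 / (np.mean(n) - 1)      # expected sampling-error variance
    return r_bar, max(0.0, var_obs - var_err)           # mean and REVC estimate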
Data Generation. A Monte Carlo program was written in SAS IML. The
program first picked a quadrant from which to sample studies (that is, the combination of
distributions of k and N). Then the program picked a combination of parameters (values
of  and   ). These defined an underlying normal distribution of rho (underlying
values were constrained to fall within plus and minus .96; larger absolute values were
resampled). Then the program simulated k studies of various sample sizes drawn from a
population with the chosen parameters (observed correlations greater than .99 in absolute
value were resampled). The k studies thus sampled were analyzed by meta-analysis. The
meta-analysis of studies was repeated 5000 times to create an empirical sampling
distribution of meta-analysis estimates.
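The original program was written in SAS IML and is not reproduced here; the Python sketch below (our own, under the assumption that each observed r is computed from simulated bivariate-normal data) illustrates the sampling scheme just described.

import numpy as np

def draw_study_r(rho, n, rng):
    # Sample correlation from n bivariate-normal pairs with population
    # correlation rho; resample if |r| > .99, as described in the text.
    while True:
        x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
        r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
        if abs(r) <= 0.99:
            return r

def simulate_meta(rho_bar, tau, sample_sizes, rng):
    # One simulated meta-analysis: draw a rho for each study from a normal
    # distribution truncated at +/-.96, then draw the observed r.
    rs = []
    for n in sample_sizes:
        rho = rng.normal(rho_bar, tau)
        while abs(rho) > 0.96:        # resample out-of-range parameter values
            rho = rng.normal(rho_bar, tau)
        rs.append(draw_study_r(rho, int(n), rng))
    return np.array(rs)

# Example: rs = simulate_meta(.22, .11, [50, 120, 200], np.random.default_rng(0))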
Estimators. Six approaches (described in the introduction) were examined in our
simulations. These six approaches were (1) unit weights with r as the effect size, (2) Hunter
& Schmidt (2004, ‘bare bones’) with r as the effect size, (3) Hedges & Vevea (1998) with
z as the effect size, (4) inverse variance weights (based on the logic of Hedges & Vevea,
1998) with r as the effect size, (5) Empirical Bayes weights with r as the effect size, and
(6) the combination of H&S and H&V with r as the effect size.
Data analysis. The data were meta-analyzed for each of the 5000 trials; $\bar{\rho}$ and
$\tau^2$ were estimated with each of the six chosen meta-analysis approaches, and the root
mean square residuals (RMSR, that is, the root-mean-square difference between the
parameter and the estimate) for $\hat{\rho}$ and for $\hat{\tau}^2$ were calculated over trials for each
approach. The RMSR essentially shows the average distance of the estimate to the
parameter, and thus estimators with small RMSR are preferred.
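The criterion is simple to state in code; a one-function Python sketch, with names of our choosing:

import numpy as np

def rmsr(estimates, parameter):
    # Root-mean-square residual of an empirical sampling distribution
    # of estimates around the known parameter value.
    estimates = np.asarray(estimates, float)
    return float(np.sqrt(np.mean((estimates - parameter)**2)))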
Results
The preliminary results of simulation showed that skew in the distribution of
sample sizes had essentially no effect on the outcome. Therefore, we deleted this factor
and reran all the simulations in order to simplify the presentation of the results. The
results thus correspond to a design with two levels of average sample size and nine
combinations of underlying parameters (three levels of $\bar{\rho}$ and three levels of $\tau$).
The results are summarized in Figures 2 and 3. Figure 2 shows results for
estimating the grand mean (distributions of $\hat{\rho}$). Figure 3 shows empirical sampling
distributions of $\hat{\tau}^2$. Each figure contains a large amount of information beyond the
shape of the distribution of estimates. For example, in Figure 2, the numbers at the very
top of the graph show the root-mean-square residual for the distribution of the estimator
(for the first distribution of unit weights, that value is .030). The numbers at the bottom
of each graph indicate the mean of the empirical distributions. For example, the unit
weight estimator at the top of Figure 2 produced a mean of .099. The value of the
underlying parameter is also shown in the figures as a horizontal line. In the top graph in
Figure 2, the parameter $\bar{\rho}$ was .10. The numbers serving as subscripts for the labels at
the bottom of the figures indicate the sample sizes of the meta-analyses included in the
distributions. For example, in Figure 2, UW1 means unit weights with smaller average
sample sizes, and UW2 means unit weights with larger average sample sizes.
Only three of the nine design cells corresponding to underlying parameters are
illustrated in Figures 2 and 3 (those cells are $\bar{\rho} = .10$, $\tau^2 = .0005$; $\bar{\rho} = .22$, $\tau^2 = .013$;
$\bar{\rho} = .44$, $\tau^2 = .033$). Figures 2 and 3 are representative of the pattern of the results; full
results (those cells not shown) are available from the senior author upon request.
As can be seen in the figures, the design elements had their generally expected
impacts on the estimates. The empirical sampling distributions are generally more
compact given larger average sample sizes. The means of the sampling distributions get
larger as the underlying parameters ($\bar{\rho}$ and $\tau^2$) increase. The variance of the distribution
of $\hat{\rho}$ increases as $\tau^2$ increases. There are also interesting differences across estimators.
We discuss those in the following section.
Discussion
The goal of the current study was to examine the quality of different estimators of
the overall mean and REVC in meta-analysis under realistic conditions of interest to
industrial and organizational psychologists. The results suggest several conclusions of
interest.
First, unit weights provided surprisingly good estimates, particularly when the
underlying mean and REVC are large. As one might expect, when the sample sizes are
large, study weights become essentially irrelevant. However, it was not clear from the
literature that unit weights would actually prove superior to popular weighting schemes
when estimating the REVC. It appears that published meta-analyses using unit weights
are likely to provide reasonable estimates of parameters, and need not be redone simply
to provide a new estimate of the mean or REVC. Of course, unit weights do not provide
standard errors other than those employed in primary research, but this is a matter beyond
the scope of this paper.
The methods that relied on alternate weighting schemes for r rather than z did not
appear to result in an improvement over the sample size weights advocated by Hunter and
Schmidt (1990; 2004). It appears that sampling error for r renders the inverse sampling
error weights in r problematic. The EB estimates should have provided estimates
somewhere between the Hunter and Schmidt and the HVr estimates. According to our
results, they did not, and this result is puzzling and awaits further confirmation. The
Hedges and Vevea (1998) estimates performed as expected: there was a slight
overestimate of the grand mean, which increased as the true grand mean and REVC
increased. The REVC estimated in z is not in the same metric as r, so it is illustrated in
Figure 3 for reference only. The combination estimator worked rather well,
but did not appear to work better than the Hunter and Schmidt (2004) approach.
Overall, the Hunter and Schmidt (2004) method provided estimates that were
either the best or near the best for the given condition. The study was unable to find
estimates that generally out-performed the Hunter and Schmidt procedure.
Contributions of the Study
Through the coding of published studies, the current paper provides a database
and quantitative summary of published meta-analyses of interest to industrial and
organizational psychologists (the database is available online through the senior author’s
website). The current study is the first to use empirical data to derive population values
and sample values for a Monte Carlo simulation of meta-analysis. The study also
provides a methodology for assuring that a simulation is representative of conditions of
interest to authors in a given area.
The finding that unit weights provide estimates nearly as good as, and sometimes
better than, more sophisticated estimates of the mean and REVC is important for both the
previously published meta-analyses and the literature on weights in meta-analysis. In
areas of research in which the sample sizes for studies are large (e.g., survey research,
current studies of personnel selection), the choice of weights for meta-analysis appears of
little concern. For areas in which the sample sizes are small or in which there are few
studies, weights will be of greater concern.
The current study provides another example in which the Hunter and Schmidt
(2004) approach to meta-analyzing the correlation coefficient tends to provide the most
accurate estimates of the overall mean and REVC (see, e.g., Field, 2001; Hall & Brannick,
2002; Schulze, 2004). It appears that the more sophisticated (statistically optimal)
weights are offset in practice by the sampling error in r and in the REVC. Future
research might determine at what point the balance tips in the other direction, that is, at
what point N (sample size) and k (number of studies) become large enough to produce
smaller standard errors for the mean and the REVC.
References
Bond, R., & Smith, P. B. (1996). Culture and conformity: A meta-analysis of studies
using Asch’s (1952b, 1956) line judgment task. Psychological Bulletin, 119, 111-137.
Burr, D., & Doss, H. (2005). A Bayesian semiparametric model for random-effects
meta-analysis. Journal of the American Statistical Association, 100, 242-251.
Field, A. P. (2001). Meta-analysis of correlation coefficients: A Monte Carlo
comparison of fixed- and random-effects methods. Psychological Methods, 6,
161-180.
Hall, S. M., & Brannick, M. T. (2002). Comparison of two random-effects methods of
meta-analysis. Journal of Applied Psychology, 87, 377-389.
Hedges, L. V. (1983). A random effects model for effect sizes. Psychological Bulletin,
93, 388-395.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL:
Academic Press.
Hedges, L. V. & Vevea, J. L. (1998). Fixed- and random-effects models in meta-analysis.
Psychological Methods, 3, 486-504.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and
bias in research findings. Newbury Park, CA: Sage.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and
bias in research findings (2nd ed.). Thousand Oaks, CA: Sage.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA:
Sage.
Mason, C., Allam, R., & Brannick, M. T. (in press). How to meta-analyze coefficient of
stability estimates: Some recommendations based on Monte Carlo studies.
Educational and Psychological Measurement.
National Research Council (1992). Combining information: Statistical issues and
opportunities for research (pp. 5-15). Washington, DC: National Academy Press.
Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem
of validity generalization. Journal of Applied Psychology, 62, 529-540.
Schmidt, F. L., Hunter, J. E., & Raju, N. S. (1988). Validity generalization and
situational specificity: A second look at the 75% rule and Fisher’s z
transformation. Journal of Applied Psychology, 73, 665-672.
Schulze, R. (2004). Meta-analysis: A comparison of approaches. Göttingen, Germany:
Hogrefe & Huber.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater
reliability. Psychological Bulletin, 86, 420-428.
Silver, N., & Dunlap, W. (1987). Averaging coefficients: Should Fisher’s z
transformation be used? Journal of Applied Psychology, 72, 146-148.
Vacha-Haase, T. (1998). Reliability generalization: Exploring variance in measurement
error affecting score reliability across studies. Educational and Psychological
Measurement, 58, 6-20.
Figure Captions
Figure 1. The distribution of study sample sizes from representative meta-analyses. Note.
HN_HS = high sample size, high skew; HN_LS = high sample size, low skew; LN_HS =
low sample size, high skew; LN_LS = low sample size, low skew.
Figure 2. Distributions of estimated rho.
Figure 3. Distributions of estimated REVC.
Figure 1
[Figure 1: Distributions of study sample sizes (0 to 1,000) for representative meta-analyses in each condition: HN_HS, HN_LS, LN_HS, LN_LS.]
Figure 2
[Figure 2: Empirical sampling distributions of estimated rho (0.0 to 0.75) for each estimator (UW, SH, HVr, HVz, EB, C; subscript 1 = smaller average N, subscript 2 = larger average N) in three panels, Rho = .10, .22, and .44. RMSR values appear above, and means of the rho estimates below, each distribution.]
Figure 3
[Figure 3: Empirical sampling distributions of estimated tau-square (0.0 to 0.12) for the same estimators in three panels, Tau-square = .0005, .013, and .033. RMSR values appear above, and means of the tau-square estimates below, each distribution.]