Estimation of a Common Mean and Weighted Means Statistics

Andrew L. Rukhin and Mark G. Vangel
Abstract
Measurements made by several laboratories may exhibit non-negligible
between-laboratory variability, as well as different within-laboratory
variances. Also, the numbers of measurements made at each laboratory
often differ. A question of fundamental importance in the analysis of
such data is how to form a best consensus mean, and what uncertainty
to attach to this estimate. An estimation equation approach due to
Mandel and Paule is often used at the National Institute of Standards
and Technology (NIST), particularly when certifying standard reference materials.
Primary goals of this work are to study the theoretical properties
of this method, and to compare it with some alternative methods, in
particular with the maximum likelihood estimator. Towards this end, we
show that the Mandel-Paule solution can be interpreted as a simplified
version of the maximum likelihood method. A class of weighted means
statistics is investigated for situations where the number of laboratories
is large. This class includes a modified maximum likelihood estimator and the Mandel-Paule procedure. Large sample behavior of the
distribution of these estimators is investigated. This study leads to a
utilizable estimate of the variance of the Mandel-Paule statistic and to
an approximate confidence interval for the common mean. It is shown
that the Mandel-Paule estimator of the between-laboratory variance
is inconsistent in this setting. The results of a numerical comparison of
mean squared errors of these estimators for a special distribution of
within-laboratory variances are also reported.
Keywords: heteroscedasticity, interlaboratory study, Mandel-Paule algorithm,
maximum likelihood estimator, unbalanced one-way ANOVA model, variance components
Andrew L. Rukhin is a Professor in the Department of Mathematics and
Statistics at the University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250. Mark G. Vangel is a Mathematical Statistician
in the Statistical Engineering Division, National Institute of Standards and
Technology, Building 820, Gaithersburg, MD 20899-0001. This work was
done during the sabbatical leave of A. Rukhin at NIST. The authors
are grateful to Stefan Leigh for stimulating discussions and to Brad Biggerstaff for his careful reading of and commenting on the original draft of this
paper. The helpful comments of the associate editor and two referees are
also acknowledged.
1 Statement of the problem
Consider the situation where measurements are made by each of $p$ laboratories. Assume that the $i$th laboratory repeats its measurements $n_i$ times,
and that the data $\{x_{ij}\}$, $i = 1, \ldots, p$, $j = 1, \ldots, n_i$, follow a one-way random-effects ANOVA model, which may be both unbalanced and
heteroscedastic, i.e.

$$x_{ij} = \mu + b_i + e_{ij},$$

with mutually independent $b_i \sim N(0, \sigma^2)$ and $e_{ij} \sim N(0, \sigma_i^2)$. Thus $\sigma_i^2$
and $\sigma^2$ are the nuisance parameters: the within-laboratory and between-laboratory variances, respectively. A fundamental problem in the analysis
of data from interlaboratory studies is to estimate the structural parameter
$\mu$, and to provide a standard error for this estimate. See Mandel (1991) or
Crowder (1992) for further discussion.
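To make the model concrete, here is a minimal simulation sketch (our own illustration, not code from the paper); the function name and the particular parameter values are hypothetical. It generates unbalanced, heteroscedastic data according to the model above and reduces each laboratory to the summaries used in the sequel.

```python
# Hypothetical sketch: simulate x_ij = mu + b_i + e_ij with b_i ~ N(0, sigma^2)
# and e_ij ~ N(0, sigma_i^2), then reduce each laboratory to (xbar_i, s_i^2).
import numpy as np

rng = np.random.default_rng(0)

def simulate_labs(mu, sigma2, sigma2_i, n_i):
    """Return per-laboratory means xbar_i and variances s_i^2 (illustration only)."""
    xbar, s2 = [], []
    for var_i, n in zip(sigma2_i, n_i):
        b = rng.normal(0.0, np.sqrt(sigma2))                   # between-laboratory effect
        x = mu + b + rng.normal(0.0, np.sqrt(var_i), size=n)   # within-laboratory errors
        xbar.append(x.mean())
        s2.append(x.var(ddof=1))                               # divisor n_i - 1
    return np.array(xbar), np.array(s2)

# Example: p = 5 laboratories, unbalanced (one with only 2 replicates) and heteroscedastic.
xbar, s2 = simulate_labs(mu=13.0, sigma2=1.9,
                         sigma2_i=[0.1, 0.5, 2.0, 0.3, 6.0],
                         n_i=[5, 5, 2, 5, 5])
```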
In the following section we discuss the classical maximum likelihood estimate (MLE) which goes back to Cochran (1937, 1954); see also Rao (1981).
2 Maximum likelihood estimation
It will be convenient to use the additional notation $\tau_i^2 = \sigma_i^2/n_i$, $\nu_i = n_i - 1$
and $\gamma_i = \sigma^2/(\sigma^2 + \tau_i^2)$. The usual estimators of the laboratory means and
within-laboratory variances are $\bar x_i = \sum_{j=1}^{n_i} x_{ij}/n_i$ and $s_i^2 = \sum_{j=1}^{n_i}(x_{ij} - \bar x_i)^2/(n_i - 1)$, with $t_i^2 = s_i^2/n_i$; the pairs $(\bar x_i, s_i^2)$, $i = 1, \ldots, p$, form a sufficient statistic.
Up to the factor $-2$, the loglikelihood function can be written in the reparametrized form

$$\frac{1}{\sigma^2}\left[\sum_{i=1}^p \gamma_i(\bar x_i - \mu)^2 + \sum_{i=1}^p \frac{\nu_i \gamma_i t_i^2}{1-\gamma_i}\right] + (N+p)\log\sigma^2 - \sum_{i=1}^p \nu_i\log\frac{\gamma_i}{1-\gamma_i} - \sum_{i=1}^p \log\gamma_i + C,$$

where $N = \sum_1^p \nu_i$ and $C$ does not depend on the unknown parameters $\mu$, $\sigma_i^2$, $\sigma^2$.
It is easy to see that the MLE of $\mu$ has the form

$$\hat x = \frac{\sum_{i=1}^p \hat\gamma_i \bar x_i}{\sum_{i=1}^p \hat\gamma_i}, \qquad (1)$$

and the MLE of $\sigma^2$ is

$$\hat\sigma^2 = \frac{\sum_{i=1}^p \hat\gamma_i(\bar x_i - \hat x)^2 + \sum_{i=1}^p \frac{\nu_i \hat\gamma_i t_i^2}{1-\hat\gamma_i}}{N+p}, \qquad (2)$$
where the MLE $\hat\gamma_i$ of $\gamma_i$ is found by minimizing

$$(N+p)\log\left[\sum_{i=1}^p \gamma_i(\bar x_i - \hat x)^2 + \sum_{i=1}^p \frac{\nu_i \gamma_i t_i^2}{1-\gamma_i}\right] - \sum_{i=1}^p \log\gamma_i + \sum_{i=1}^p \nu_i\log\frac{1-\gamma_i}{\gamma_i}. \qquad (3)$$
Evaluating the partial derivatives of (3) and using (2), we obtain

$$\hat\gamma_j(1-\hat\gamma_j)(\bar x_j - \hat x)^2 + \frac{\nu_j \hat\gamma_j t_j^2}{1-\hat\gamma_j} = \hat\sigma^2\left[\nu_j + 1 - \hat\gamma_j\right]. \qquad (4)$$
Adding these equations shows that, again because of (2),

$$\sum_{i=1}^p \hat\gamma_i^2(\bar x_i - \hat x)^2 = \hat\sigma^2 \sum_{i=1}^p \hat\gamma_i. \qquad (5)$$
The formulas $\hat\gamma_i\hat\tau_i^2/(1-\hat\gamma_i) = \hat\sigma^2$, $i = 1, \ldots, p$, and (4) imply the identity

$$\sum_{i=1}^p \frac{\nu_i}{\hat\tau_i^2}\left[1 - \frac{t_i^2}{\hat\tau_i^2}\right] = \sum_{i=1}^p \frac{(1-\hat\gamma_i)^2(\bar x_i - \hat x)^2}{\hat\tau_i^4} - \sum_{i=1}^p \frac{1-\hat\gamma_i}{\hat\tau_i^2} = \frac{\sum_{i=1}^p \hat\gamma_i^2(\bar x_i - \hat x)^2}{\hat\sigma^4} - \frac{\sum_{i=1}^p \hat\gamma_i}{\hat\sigma^2} = 0. \qquad (6)$$

We will make use of (2) and (6) later.
Notice that the likelihood equations (4) are a particular case of the likelihood equations for the general variance components setting investigated
in Harville (1977) (see Chapters 6 and 8 of Searle, Casella and McCulloch,
1992). However, our results are more specific in view of the special nature
of the problem and its reduction by sufficiency.
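As a computational aside (our own sketch, not the authors' code), the profile criterion (3) can be minimized numerically over the weights $\gamma_i$, after which $\hat x$ and $\hat\sigma^2$ follow from (1) and (2). The function name, the logit parametrization, and the use of a generic optimizer are illustrative assumptions; a purpose-built iteration would be faster.

```python
# A minimal sketch of maximum likelihood estimation via the profile criterion (3).
import numpy as np
from scipy.optimize import minimize

def mle_common_mean(xbar, t2, nu):
    """xbar: lab means, t2: t_i^2 = s_i^2/n_i, nu: n_i - 1.
    Returns (mu_hat, sigma2_hat, gamma_hat); illustration only."""
    xbar, t2, nu = map(lambda a: np.asarray(a, float), (xbar, t2, nu))
    N, p = nu.sum(), len(xbar)

    def criterion(z):                         # z = logit(gamma), so gamma stays in (0, 1)
        g = 1.0 / (1.0 + np.exp(-z))
        xhat = np.sum(g * xbar) / np.sum(g)   # equation (1) for the current weights
        S = np.sum(g * (xbar - xhat) ** 2) + np.sum(nu * g * t2 / (1.0 - g))
        # profile criterion (3)
        return (N + p) * np.log(S) - np.sum(np.log(g)) + np.sum(nu * np.log((1.0 - g) / g))

    res = minimize(criterion, x0=np.zeros(p), method="Nelder-Mead",
                   options={"maxiter": 50000, "fatol": 1e-10, "xatol": 1e-8})
    g = 1.0 / (1.0 + np.exp(-res.x))
    mu_hat = np.sum(g * xbar) / np.sum(g)
    sigma2_hat = (np.sum(g * (xbar - mu_hat) ** 2)
                  + np.sum(nu * g * t2 / (1.0 - g))) / (N + p)   # equation (2)
    return mu_hat, sigma2_hat, g
```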
3 Mandel-Paule algorithm
Because of the rather complicated form of the likelihood equations, simpler
procedures are desirable in practice. One such method was first suggested
by Mandel and Paule (1970) for equal $\sigma_i^2$'s, and later developed in a more
general form by Paule and Mandel (1982). It is now widely used in applications, particularly in analytical chemistry. Experience has shown that this
method, which we will refer to as the Mandel-Paule algorithm, often provides
reasonable estimates, and is recommended (Schiller and Eberhardt, 1992)
for use in the preparation of standard reference materials. The goal of this
section is to relate the nature of this estimator to the maximum likelihood
estimator.
The Mandel-Paule algorithm consists in using weights of the form

$$w_i = \frac{1}{y + t_i^2} \qquad (7)$$

in the estimator of the common mean, which is a weighted means statistic

$$\tilde x = \frac{\sum_{i=1}^p w_i \bar x_i}{\sum_{i=1}^p w_i}. \qquad (8)$$

Here $y$ is designed to estimate $\sigma^2$, and it is found from the equation

$$\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{y + t_i^2} = p - 1. \qquad (9)$$

We define the modified Mandel-Paule procedure to be as above with $p$ instead
of $p - 1$ in the right-hand side of (9).
Thus the Mandel-Paule rule provides the estimate $\tilde x$ of the common
mean along with the estimate $y$ of $\sigma^2$. As we shall see, the first is a quite
satisfactory rule from many perspectives, although the latter lacks some
desirable properties.

Notice that the Mandel-Paule rule is well-defined, i.e. that (9) has at
most one positive solution. Indeed, one can show that the left-hand side
of (9) is a monotonically decreasing convex function of $y$. Thus (9) can have at
most one positive root, $y$, which is taken to be zero when negative.
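Since the left-hand side of (9) decreases in $y$, the root can be located by a simple bracketing search, as in the following sketch (ours, not NIST reference code; the function name and defaults are illustrative).

```python
# A minimal sketch of the Mandel-Paule procedure: solve (9) for y by bisection,
# then form the weighted mean (8) with weights (7).
import numpy as np

def mandel_paule(xbar, t2, rhs=None, tol=1e-10, max_iter=200):
    """xbar: lab means, t2: t_i^2 = s_i^2/n_i.  rhs defaults to p-1
    (pass p for the modified rule).  Returns (x_tilde, y)."""
    xbar, t2 = np.asarray(xbar, float), np.asarray(t2, float)
    p = len(xbar)
    rhs = p - 1 if rhs is None else rhs

    def F(y):                      # left-hand side of (9) minus its right-hand side
        w = 1.0 / (y + t2)
        xt = np.sum(w * xbar) / np.sum(w)
        return np.sum(w * (xbar - xt) ** 2) - rhs

    if F(0.0) <= 0.0:              # no positive root: take y = 0
        y = 0.0
    else:
        lo, hi = 0.0, 1.0
        while F(hi) > 0.0:         # bracket the root
            hi *= 2.0
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if F(mid) > 0.0:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        y = 0.5 * (lo + hi)

    w = 1.0 / (y + t2)
    return np.sum(w * xbar) / np.sum(w), y
```

Bisection suffices here precisely because of the monotonicity noted above; this is what fails for equation (14) discussed in the next section.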
Mandel and Paule's original motivation for (9) was that (even without
the normality assumption) the optimal weights $w_i^0$, $i = 1, \ldots, p$, for the
weighted means statistic are

$$w_i^0 = \frac{1}{\sigma^2 + \tau_i^2}. \qquad (10)$$

With these weights $E\sum_i w_i^0(\bar x_i - \tilde x)^2 = p - 1$. By employing the idea behind
the method of moments, this identity can be used as the estimating equation
for $\mu$ and $\sigma^2$, provided that the $\tau_i^2$ are estimated by $t_i^2$.
We show next that the modified Mandel-Paule estimator usually will be
close to the MLE. To see this, divide the equations (4) by $1 - \hat\gamma_j$ and add
them, obtaining the identity

$$\sum_{i=1}^p \hat\gamma_i(\bar x_i - \hat x)^2 - p\hat\sigma^2 + \sum_{i=1}^p\left[\frac{\nu_i \hat\gamma_i t_i^2}{(1-\hat\gamma_i)^2} - \frac{\nu_i \hat\sigma^2}{1-\hat\gamma_i}\right] = 0.$$
Notice that

$$\sum_{i=1}^p \frac{\nu_i \hat\gamma_i \hat\tau_i^2}{(1-\hat\gamma_i)^2} = \sum_{i=1}^p \frac{\nu_i \hat\sigma^2}{1-\hat\gamma_i},$$

so that because of (6) and the formula $(1-\hat\gamma_i)^{-1} = 1 + \hat\sigma^2/\hat\tau_i^2$, one has

$$\sum_{i=1}^p \hat\gamma_i(\bar x_i - \hat x)^2 - p\hat\sigma^2 = \sum_{i=1}^p \frac{\nu_i \hat\gamma_i}{(1-\hat\gamma_i)^2}\left[\hat\tau_i^2 - t_i^2\right] = \hat\sigma^2\sum_{i=1}^p \nu_i\left[1 + \frac{\hat\sigma^2}{\hat\tau_i^2}\right]\left[1 - \frac{t_i^2}{\hat\tau_i^2}\right] = \hat\sigma^2\sum_{i=1}^p \nu_i\left[1 - \frac{t_i^2}{\hat\tau_i^2}\right]. \qquad (11)$$
It is reasonable to expect that the MLEs $\hat\tau_i^2$ will be close to the $t_i^2$, so if
one makes the approximation $\hat\tau_i^2 \approx t_i^2$ (which is suggested in Cochran 1937, p. 113)
in the right-hand side of (11), then the modified Mandel-Paule
estimate obtains via $\sum_{i=1}^p \hat\gamma_i(\bar x_i - \hat x)^2 \approx p\hat\sigma^2$, or more revealingly

$$\sum_{i=1}^p \frac{(\bar x_i - \hat x)^2}{\hat\sigma^2 + t_i^2} \approx p.$$

A somewhat different manifestation of the same phenomenon appears if
one assumes that the $\gamma_i$ from (3) have the form corresponding to that in (7), i.e.
if

$$\gamma_i = \frac{1}{1 + t_i^2/y} = \frac{y}{y + t_i^2} \qquad (12)$$

for some positive $y$. Then $\gamma_i/(1-\gamma_i) = y/t_i^2$, and

$$\hat\sigma^2 = \frac{\sum_{i=1}^p \gamma_i(\bar x_i - \hat x)^2 + \sum_{i=1}^p \frac{\nu_i \gamma_i t_i^2}{1-\gamma_i}}{N+p} = y\,\frac{\sum_{i=1}^p \frac{(\bar x_i - \hat x)^2}{y + t_i^2} + N}{N+p}.$$
Therefore the modified Mandel-Paule rule is characterized by the following fact: $\hat\sigma^2 = y$ if the weights $\gamma_i$ admit representation (12).
As a consequence, the corresponding weighted means statistic (8) must
be close to the maximum likelihood estimator (1), so that both parts of
the modified Mandel-Paule estimator are close to their maximum likelihood
counterparts.
Theorem 3.1 The Mandel-Paule procedure determined by the weights of
the form (7), with $y$ found from (9), is well-defined. When the maximum
likelihood weights have the form (12), this procedure coincides with the solution of the likelihood equations (4), and the maximum likelihood estimator
of $\sigma^2$ coincides with the solution of (9).
4 Discussion and Modifications of the Mandel-Paule Algorithm
One can think of $\hat\gamma_i/\hat\sigma^2$ from (3) as estimates of the $w_i^0$ from (10). In view of
(2) the modified Mandel-Paule estimator of the common mean can be interpreted as follows. This is the procedure which uses the weights of the
form $1/(1 + t_i^2/y)$ instead of the more difficult to find values $\hat\gamma_i$, and still
maintains the same estimate of $\sigma^2$ as maximum likelihood. Thus, except
for the use of $p-1$ instead of $p$, what Mandel and Paule have proposed
is equivalent to replacing the nuisance parameters $\tau_i^2$ in the appropriately
transformed likelihood equations with the estimates $t_i^2$, and solving the two remaining ML equations for $\mu$ and $\sigma^2$. For this reason the Mandel-Paule rule
is quite natural. It also suggests using the weights (12), with $y$ determined
from the Mandel-Paule equation (9), as a first approximation to the true
weights $\hat\gamma_i$ in (4).
Because of (5), if the maximum likelihood weights have the form (12),
then

$$\hat\sigma^2 = y\,\frac{\sum_{i=1}^p \frac{(\bar x_i - \hat x)^2}{(y + t_i^2)^2}}{\sum_{i=1}^p \frac{1}{y + t_i^2}}.$$
Thus if the weights of this form satisfy the likelihood equations, then

$$\frac{\sum_{i=1}^p \frac{(\bar x_i - \hat x)^2}{y + t_i^2} + N}{N+p} = \frac{\sum_{i=1}^p \frac{(\bar x_i - \hat x)^2}{(y + t_i^2)^2}}{\sum_{i=1}^p \frac{1}{y + t_i^2}}. \qquad (13)$$
An alternative (modified maximum likelihood) version of the Mandel-Paule rule, suggested by Cochran (1937), arises from this formula. Rather
than solving (9), one equates to one the right-hand side of (13), i.e. determines
$y$ from the equation

$$\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{(y + t_i^2)^2} = \sum_{i=1}^p \frac{1}{y + t_i^2}. \qquad (14)$$
However, unlike the Mandel-Paule estimating equation, the relevant function of $y$ in
(14) is, generally speaking, not monotone, and the numerical determination
of the needed root can be much more difficult.
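One workable approach (our own sketch, with an illustrative function name) is to scan a grid of $y$ values for sign changes of the difference of the two sides of (14) and refine each bracket; unlike (9), several roots, or none, may turn up.

```python
# Sketch: locate roots of equation (14) by grid scanning plus Brent's method.
import numpy as np
from scipy.optimize import brentq

def cochran_ml_y(xbar, t2, y_grid):
    """Return all roots of (14) found on the grid; the caller must choose among them."""
    xbar, t2 = np.asarray(xbar, float), np.asarray(t2, float)
    y_grid = np.asarray(y_grid, float)

    def g(y):                      # left-hand side minus right-hand side of (14)
        w = 1.0 / (y + t2)
        xt = np.sum(w * xbar) / np.sum(w)
        return np.sum(w ** 2 * (xbar - xt) ** 2) - np.sum(w)

    vals = np.array([g(y) for y in y_grid])
    return [brentq(g, a, b) for a, b, fa, fb in
            zip(y_grid[:-1], y_grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
```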
As another simplified (Mandel-Paule approach inspired) version of maximum likelihood, one can use the weights of the form (7), where $y$ is determined directly from (3), or from (5). Thus $y$ is found as

$$\min_y\left\{(N+p)\log\left[\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{y + t_i^2} + N\right] + \sum_{i=1}^p \log(y + t_i^2)\right\}$$

or from the equation

$$(N+p)\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{(y + t_i^2)^2} = \left[\sum_{i=1}^p \frac{1}{y + t_i^2}\right]\left[\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{y + t_i^2} + N\right]. \qquad (15)$$
Simulations show that this method performs somewhat better than the
Mandel-Paule procedure.
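A sketch of this estimator (ours; the function name and the search bounds are illustrative assumptions) minimizes the one-dimensional criterion displayed above directly, which sidesteps the root-multiplicity issue of (14) and (15).

```python
# Sketch of the estimator defined by (15): minimize the profile criterion over y >= 0.
import numpy as np
from scipy.optimize import minimize_scalar

def modified_ml_y(xbar, t2, nu, y_max=None):
    """Return (x_tilde, y) with weights 1/(y + t_i^2), y chosen by minimizing
    (N+p) log[sum (xbar_i - x~)^2/(y+t_i^2) + N] + sum log(y+t_i^2)."""
    xbar, t2, nu = map(lambda a: np.asarray(a, float), (xbar, t2, nu))
    N, p = nu.sum(), len(xbar)
    y_max = y_max if y_max is not None else 100.0 * (np.var(xbar) + t2.max())

    def criterion(y):
        w = 1.0 / (y + t2)
        xt = np.sum(w * xbar) / np.sum(w)
        return (N + p) * np.log(np.sum(w * (xbar - xt) ** 2) + N) + np.sum(np.log(y + t2))

    y = minimize_scalar(criterion, bounds=(0.0, y_max), method="bounded").x
    w = 1.0 / (y + t2)
    return np.sum(w * xbar) / np.sum(w), y
```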
5 Asymptotic Behavior of Weighted Means
In this section we look at the asymptotic behavior of the class of statistics
including the modified maximum likelihood procedure and the Mandel-Paule
rule.

This class is formed by general weighted means statistics $\tilde x$ of the form
(8) with $w_i = 1/(y + t_i^2)$, $i = 1, \ldots, p$. The value of $y$ is determined from
an estimating equation such as (9), (14) or (15). As it happens, under the
following assumptions this value converges with probability one to a constant
obtained from the limiting form of the estimating equations. For this reason
we look at the asymptotic behavior of the statistics (8) for a fixed positive $y$.

To perform the asymptotic comparison of these statistics we assume
that $p \to \infty$. Thus the number of nuisance parameters increases, and we
face a Neyman-Scott type of problem. Our (essentially Bayesian) model
bears some resemblance to the parametrization of the likelihood function
used originally by Hartley and Rao (1967) (see also Section 6.2.c of Searle,
Casella and McCulloch, 1992).
In many interlaboratory studies it is believed that, despite unbalancedness, the relative within-laboratory variances $\sigma_i^2/n_i$ can usefully be regarded
as realizations of a random variable. We will assume that the unobservable
nuisance parameters $\theta_i^2 = \tau_i^2/\sigma^2$, $i = 1, \ldots, p$, are i.i.d. replicas of a random variable $\theta^2$ with some fixed but otherwise arbitrary distribution function $G(\cdot)$. Although the elicitation of the form of $G$ from practitioners is
very difficult, the following setting is useful since, for example, the asymptotic estimation of uncertainty in the statistics (8) becomes possible. The
asymptotic variance so obtained agrees closely with estimates derived by
other methods as well as with simulation results (see Section 6). We stress
that this estimation is not valid under the assumptions of Section 1 alone, as the
following limiting normality of $\tilde x$ may not hold in general. An alternative
condition is that the $\theta_i$ can be non-identically distributed (as it might be more
natural to assume the $\sigma_i^2$ to be i.i.d. random variables), but such that an
analogue of the Lindeberg condition for the independent random variables
$\bar x_i/(y + t_i^2)$ is satisfied.
It will be assumed that the observable i.i.d. random variables $\bar x_i$, $t_i^2$
and $\nu_i$, $i = 1, 2, \ldots, p$, are realizations of the random vector $(X, T, \mathcal N)$ such
that the joint distribution of $(X, T, \mathcal N, \theta)$ has the following form: $(X, T)$ are
conditionally (for given $\mathcal N = \nu$ and $\theta$) independent, with the conditional
distribution of $X$ being $N(\mu, \sigma^2(1+\theta^2))$ for some unknown $\sigma^2$, and the
conditional distribution of $T^2$ being $\sigma^2\theta^2\chi^2_\nu/\nu$. Thus $T^2 = \sigma^2\theta^2 V$ with $V$
denoting a random variable whose conditional distribution given $\mathcal N = \nu$ has
the form $\chi^2_\nu/\nu$. Thus $E(X \mid \theta, \nu) = \mu$ and $E([X-\mu]^2 \mid \theta, \nu) = \sigma^2(1+\theta^2)$.
When $\mathcal N = \nu$, the conditional density of the random vector $(X, T)$ has the
form

$$p_\nu\!\left(\frac{x-\mu}{\sigma}, \frac{t}{\sigma}\right) \propto t^{\nu-1}\int \frac{1}{\theta^{\nu}\sqrt{1+\theta^2}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2(1+\theta^2)} - \frac{\nu t^2}{2\sigma^2\theta^2}\right\} dG(\theta).$$
The only remaining parameters in this setting are $\mu$ and $\sigma^2$. The random
vectors $(X_i, T_i)$, $i = 1, 2, \ldots, p$, are independent and identically distributed
with the density $\sigma^{-2}P(\mathcal N = \nu)\,p_\nu((x-\mu)/\sigma, t/\sigma)$, so that by the law of
large numbers, for a fixed $y$,

$$\frac{1}{p}\sum_{i=1}^p \frac{1}{y + T_i^2} \to E\frac{1}{y + T^2}$$

and

$$\tilde x \to \frac{E\frac{X}{y+T^2}}{E\frac{1}{y+T^2}} = \frac{E\left[E(X \mid \theta, \nu)\,E\!\left(\frac{1}{y+T^2} \,\Big|\, \theta, \nu\right)\right]}{E\left[E\!\left(\frac{1}{y+T^2} \,\Big|\, \theta, \nu\right)\right]} = \mu.$$
Thus under our assumptions, $\tilde x$ is a consistent estimator of $\mu$ (this also
follows from Kiefer and Wolfowitz, 1956). According to the Central Limit
Theorem, $p^{-1/2}\sum_1^p w_i(X_i - \mu) = p^{-1/2}(\tilde x - \mu)\sum_1^p w_i$ has asymptotically a
normal distribution with zero mean and the variance $E(X-\mu)^2(y+T^2)^{-2}$.
Therefore $p^{1/2}(\tilde x - \mu)$ is asymptotically normally distributed with zero mean
and the variance

$$\frac{E\frac{(X-\mu)^2}{(y+T^2)^2}}{\left[E\frac{1}{y+T^2}\right]^2} = \sigma^2\,\frac{E\frac{1+\theta^2}{(\lambda+\theta^2 V)^2}}{\left[E\frac{1}{\lambda+\theta^2 V}\right]^2} = \sigma^2 R(\lambda). \qquad (16)$$
Here $\lambda = y/\sigma^2$, and the expected value is taken with respect to the joint
distribution of $\theta$ and $\mathcal N$. Thus (16) leads to the asymptotically optimal
value of $y = \lambda\sigma^2$, with $\lambda$ being the minimizer of $R(\lambda)$. The determination of
the optimal $\lambda$ involves the prior density $g$ and the known distribution of $\mathcal N$. To
choose an asymptotically optimal statistic (8), one must determine $\lambda$ as the
minimizer of (16) and then find a consistent estimate $\tilde\sigma^2$ of $\sigma^2$, with $y = \lambda\tilde\sigma^2$.

For the weighted means statistics where the random $y = y_p$ is determined from the estimating equations (9), (14) or (15), one has $y_p \to y_\infty$ or
$y_p/\sigma^2 \to \lambda = y_\infty/\sigma^2$ in probability. Thus for these statistics, the estimation
of $\sigma^2$ is performed automatically via the formula $\tilde\sigma^2 = y_p/\lambda$.
Under our assumptions the solution $y_p$ of (9) for the Mandel-Paule procedure (or its modified version) converges to the one found from the equation

$$E\frac{(X-\mu)^2}{y+T^2} = E\frac{1+\theta^2}{\lambda+\theta^2 V} = 1. \qquad (17)$$

Since, by Jensen's inequality,

$$E\frac{1+\theta^2}{\lambda+\theta^2 V} > E\frac{1+\theta^2}{\lambda+\theta^2},$$

the solution $\lambda$ of (17) is always larger than one. It follows that the Mandel-Paule estimate $y_p$ of $\sigma^2$ cannot be consistent. To obtain a consistent estimate
one has to use $y_p/\lambda$ with $\lambda$ found from (17).
In our model the asymptotic variance in (16) can be estimated consistently by $p\hat\Delta_1$ with

$$\hat\Delta_1 = \sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{(y + t_i^2)^2}\left[\sum_{i=1}^p \frac{1}{y + t_i^2}\right]^{-2}. \qquad (18)$$

Indeed, as indicated above, $p^{-1}\sum_1^p \frac{1}{y+t_i^2} \to E\frac{1}{y+T^2}$ and

$$\frac{1}{p}\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{(y+t_i^2)^2} \to E\frac{(X-\mu)^2}{(y+T^2)^2},$$

so that $Var(\tilde x)/\hat\Delta_1 \to 1$ and $p\hat\Delta_1$ converges to the value in (16). Therefore,
with $z_{\alpha/2}$ denoting the critical point of the standard normal distribution, the
interval

$$\tilde x \pm z_{\alpha/2}\,\frac{\sqrt{\sum_{i=1}^p \frac{(\bar x_i - \tilde x)^2}{(y+t_i^2)^2}}}{\sum_{i=1}^p \frac{1}{y+t_i^2}} \qquad (19)$$

provides an approximate $100(1-\alpha)\%$ confidence interval for the common mean
$\mu$. Notice that this interval does not depend on the specific form of the
distribution $G$.
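A small sketch (ours, assuming scipy for the normal quantile; the function name is illustrative) of the variance estimate (18) and the interval (19):

```python
# Sketch: approximate 100(1-alpha)% interval (19) for the common mean,
# given the laboratory summaries and the y obtained from equation (9).
import numpy as np
from scipy.stats import norm

def mp_confidence_interval(xbar, t2, y, alpha=0.05):
    xbar, t2 = np.asarray(xbar, float), np.asarray(t2, float)
    w = 1.0 / (y + t2)
    x_tilde = np.sum(w * xbar) / np.sum(w)
    delta1 = np.sum(w ** 2 * (xbar - x_tilde) ** 2) / np.sum(w) ** 2   # equation (18)
    half_width = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(delta1)
    return x_tilde - half_width, x_tilde + half_width
```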
This argument shows that it is more reasonable to estimate the variance
of $\tilde x$ arising from the Mandel-Paule rule by $\hat\Delta_1$, with $y$ determined by (9), than by

$$\hat\Delta_0 = \left[\sum_{i=1}^p \frac{1}{y + t_i^2}\right]^{-1},$$

as advocated in Mandel (1991), p. 72. Indeed, the limit of $p\hat\Delta_0$ is not (16)
but $\sigma^2\left[E\frac{1}{\lambda+\theta^2 V}\right]^{-1}$, so that this estimator of the limiting variance of the
Mandel-Paule procedure typically is not even consistent.

Numerical simulations also show that $\hat\Delta_1$ is a better estimator of the
accuracy of the Mandel-Paule statistic $\tilde x$ than $\hat\Delta_0$, even for small $p$. This
accuracy estimation is of importance in standard reference materials
studies.
For the modified maximum likelihood estimator defined by (14) the limiting value $\lambda$ satisfies the equation

$$E\frac{1+\theta^2}{(\lambda+\theta^2 V)^2} = E\frac{1}{\lambda+\theta^2 V}, \qquad (20)$$

and for the procedure from (15) the equation

$$\frac{(E\mathcal N + 1)\,E\frac{1+\theta^2}{(\lambda+\theta^2 V)^2}}{E\mathcal N + E\frac{1+\theta^2}{\lambda+\theta^2 V}} = E\frac{1}{\lambda+\theta^2 V}. \qquad (21)$$

Note that the equation (20) obtains from (21) when $E\mathcal N \to \infty$.
Theorem 5.1 Under the conditions of this section, for any weighted means statistic (8), $\sqrt{p}\,[\tilde x - \mu]$ is asymptotically normal with zero mean and the variance
given by (16). For the Mandel-Paule procedure the limiting value $\lambda$ is found
from (17), and this procedure cannot provide a consistent estimate of $\sigma^2$. For the
modified maximum likelihood rule the value of $\lambda$ is determined from (20), and
for the procedure defined by (15) from (21). The interval (19) is an asymptotic $100(1-\alpha)\%$ confidence interval for $\mu$, as the statistic (18) provides a
consistent estimator of the variance of the limiting distribution of $\tilde x$, so that
$Var(\tilde x)/\hat\Delta_1 \to 1$. In general, $p\hat\Delta_0$, with $y$ found from (9), is not a consistent
estimator of this variance.
Observe that when $\theta^2 \equiv \theta_0^2$ with probability one,

$$R(\lambda) = (1+\theta_0^2)\,\frac{E\frac{1}{(\lambda+\theta_0^2 V)^2}}{\left[E\frac{1}{\lambda+\theta_0^2 V}\right]^2} \ge 1 + \theta_0^2.$$

Therefore the best choice of $\lambda$ in this (and only in this) situation is $\lambda = \infty$,
with the corresponding statistic $\hat\mu = \bar x$, the unweighted average of the $\bar x_i$. This fact makes sense since in this
case the optimal weights (10) are all equal.
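To illustrate how the quantities entering Theorem 5.1 can be evaluated for a specific $G$, here is a hedged Monte Carlo sketch (ours; it is not the numerical integration used for Table 2, and the sample size and search bounds are arbitrary). It approximates $R(\lambda)$ of (16) and its minimizer for the simulation design used in Section 6 below.

```python
# Sketch: Monte Carlo approximation of R(lambda) in (16) and of its minimizer,
# for theta^2 = U/(1-U) with U ~ Uniform(0,1) and nu = 4 or 1 w.p. 27/28, 1/28.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n_mc = 500_000
u = rng.uniform(size=n_mc)
theta2 = u / (1.0 - u)                              # draws of theta^2 from G
df = rng.choice([4, 1], size=n_mc, p=[27 / 28, 1 / 28])
V = rng.chisquare(df) / df                          # V | nu ~ chi^2_nu / nu

def R(lam):
    """Common-random-numbers Monte Carlo estimate of R(lambda)."""
    num = np.mean((1.0 + theta2) / (lam + theta2 * V) ** 2)
    den = np.mean(1.0 / (lam + theta2 * V)) ** 2
    return num / den

res = minimize_scalar(R, bounds=(0.01, 50.0), method="bounded")
# res.x approximates the minimizer lambda_o; R(res.x) the corresponding limiting variance.
```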
6 An Example: Arsenic in Oyster Tissue and Monte Carlo Results

The results of part of a typical interlaboratory study are presented in Table
1. Homogeneous samples of oyster tissue were distributed to 28 laboratories, and each made replicate measurements of the concentration of several
trace metals (Willie and Berman, 1995). The data for arsenic will be used
in this example. As is often the case in such studies, the data are somewhat
unbalanced, here because one of the laboratories provided only two measurements. All sufficient statistics are calculated from $n_i = 5$ observations,
except for the laboratory indicated with an asterisk, for which $n_i = 2$. The
weights $w_i^{MP} = w_i/\sum_k w_k$ are from the Mandel-Paule equation (9), and the
weights $w_i^{ML} = \hat\gamma_i/\sum_k \hat\gamma_k$ are maximum-likelihood estimates.
The MLEs are $\tilde x_{ML} = 13.22$ and $\hat\sigma_{ML} = 1.36$; for the Mandel-Paule
method, $\tilde x_{MP} = 13.23$ and $\tilde\sigma_{MP} = 1.38$. The corresponding weights for
maximum likelihood and the Mandel-Paule approach are given in Table
1. The close agreement of these two methods is apparent. The asymptotic
approximation for the variance of $\tilde x$ in (16) is 0.263 for this example.
This agrees closely with 0.254 obtained from the delta method, and (for
maximum likelihood) with 0.262 determined from the observed Fisher information matrix and with 0.269 calculated from a Bayesian hierarchical
model (values which, of course, were computationally much more difficult
to obtain).
Table 1. Arsenic in NIST SRM 1566a (oyster tissue).

  $\bar x_i$   $s_i$    $p\,w_i^{MP}$   $p\,w_i^{ML}$
   9.78       0.30     1.04            1.04
  10.18       0.46     1.03            1.03
  10.35       0.04     1.05            1.05
  11.60       0.78     0.99            0.98
  12.01       2.62     0.61            0.62
  12.26       0.83     0.98            0.98
  12.88       0.59     1.01            1.01
  12.88       0.29     1.04            1.04
  12.96       0.52     1.02            1.02
  13.00       0.86     0.97            0.97
  13.08       0.43     1.03            1.03
  13.30       0.16     1.05            1.05
  13.46       0.21     1.04            1.05
  13.48       0.41     1.03            1.03
  13.48       0.47     1.02            1.03
  13.55       0.06     1.05            1.05
  13.61       0.36     1.03            1.04
  13.78       0.61     1.01            1.01
  13.82       0.33     1.04            1.04
  13.86       0.28     1.04            1.04
  13.94       0.15     1.05            1.05
  13.98       0.80     0.98            0.98
  14.22       0.88     0.97            0.97
  14.60       0.43     1.03            1.03
  14.68       0.33     1.04            1.04
  15.00       0.71     1.00            1.00
  15.08       0.18     1.05            1.05
  15.48       1.64     0.82            0.80
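For illustration, the mandel_paule and mp_confidence_interval sketches from the earlier sections can be applied to the Table 1 summaries. This is our own usage example: every laboratory is treated as having $n_i = 5$ (the single $n_i = 2$ laboratory is not identified here), so the numbers only approximate the quoted $\tilde x_{MP} = 13.23$ and $\tilde\sigma_{MP} = 1.38$.

```python
# Illustrative use of the earlier sketches on the Table 1 summaries (all n_i taken as 5).
import numpy as np

xbar = np.array([9.78, 10.18, 10.35, 11.60, 12.01, 12.26, 12.88, 12.88, 12.96, 13.00,
                 13.08, 13.30, 13.46, 13.48, 13.48, 13.55, 13.61, 13.78, 13.82, 13.86,
                 13.94, 13.98, 14.22, 14.60, 14.68, 15.00, 15.08, 15.48])
s = np.array([0.30, 0.46, 0.04, 0.78, 2.62, 0.83, 0.59, 0.29, 0.52, 0.86,
              0.43, 0.16, 0.21, 0.41, 0.47, 0.06, 0.36, 0.61, 0.33, 0.28,
              0.15, 0.80, 0.88, 0.43, 0.33, 0.71, 0.18, 1.64])
t2 = s ** 2 / 5.0                                     # t_i^2 = s_i^2 / n_i
x_tilde, y = mandel_paule(xbar, t2)                   # consensus mean and y from (9)
lo, hi = mp_confidence_interval(xbar, t2, y)          # approximate interval (19)
```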
A Monte Carlo simulation study was done on the basis of this example,
assuming that the random variable $\mathcal N$ takes on the two values $\nu_1 = 4$ and $\nu_2 = 1$
with probabilities $27/28$ and $1/28$ respectively, and that $\theta^2$ has the
distribution function of the form $y/(1+y)$, $0 \le y < \infty$. In other terms, $\theta^2 =
U/(1-U)$ with a random variable $U$ whose distribution is uniform on the
interval $(0, 1)$, so that the first moment of $\theta^2$ is infinite. All expected values
needed to determine $R(\lambda)$ in (16) have been evaluated to obtain the following
Table 2, which contains values of $\lambda_o$, the minimizer of $R(\lambda)$ in (16); of
$\lambda_{MP}$, the Mandel-Paule choice from (17); of $\lambda_{MML}$, the modified maximum
likelihood method value from (20); and of $\lambda_1$ from (21) corresponding to the
procedure defined by (15). The corresponding values of the limiting variance
$R(\lambda)$ are given in the second line. These values are to be compared with
the mean squared errors of the corresponding estimators (9), (14) and (15)
obtained from a Monte Carlo simulation with $p = 28$. These mean squared
errors, rescaled by $\sigma^2/p$, are given in the third line.
Table 2. The values of $\lambda$ for different estimators, the limiting
variance $R(\lambda)$, and the Monte Carlo mean squared errors.

                 $\lambda_o$   $\lambda_{MP}$   $\lambda_{MML}$   $\lambda_1$
  $\lambda$       1.00          4.11             5.31              0.96
  $R(\lambda)$    2.23          2.32             2.29              2.24
  M.S.E.                        2.31             2.39              2.28
The confidence coefficient of the interval (19) for all four estimators is
about 0.93. Note that the modified Mandel-Paule rule exhibits the same
performance as the original procedure (9). Also, in these simulations the
four estimators above systematically outperformed the estimators of the
mean based on MINQUE type estimators of $\sigma_i^2$ and $\sigma^2$ considered in Rao et
al. (1981).

Finally, we mention that the estimates $p\hat\Delta_1$ and $p\hat\Delta_0$ of the variance of the
Mandel-Paule rule in the same simulation show a different behavior. The
estimator $p\hat\Delta_1$ is indeed preferable to $p\hat\Delta_0$, which tends to underestimate the
true variance of the Mandel-Paule rule. Similar results hold when $\theta^2$ has an
exponential distribution, except that in this situation the ASR type estimator
proposed by Rao et al. (1981), Sec. 2.5, has the smallest mean squared error.

In conclusion, we stress that the Mandel-Paule method provides a good
approximation to maximum likelihood and as such has potential applicability in other problems with variance components. The corresponding
estimate of the interlaboratory variance is not consistent and cannot be
recommended unless the number of laboratories is fairly small.
References
[1] Cochran, W. G. (1937), Problems arising in the analysis of a series of similar experiments. Journal of the Royal Statistical Society, Supplement, 4, 102–118.

[2] Cochran, W. G. (1954), The combination of estimates from different experiments. Biometrics, 10, 101–129.

[3] Crowder, M. (1992), Interlaboratory comparisons: Round robins with random effects. Applied Statistics, 41, 409–425.

[4] Hartley, H. O., and Rao, J. N. K. (1967), Maximum likelihood estimation for the mixed analysis of variance model. Biometrika, 54, 93–108.

[5] Harville, D. (1977), Maximum likelihood approaches to variance component estimation and related problems. Journal of the American Statistical Association, 72, 320–340.

[6] Kiefer, J., and Wolfowitz, J. (1956), Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Annals of Mathematical Statistics, 27, 887–906.

[7] Mandel, J. (1991), Evaluation and Control of Measurements. M. Dekker, New York.

[8] Mandel, J., and Paule, R. C. (1970), Interlaboratory evaluation of a material with unequal number of replicates. Analytical Chemistry, 42, 1194–1197.

[9] Paule, R. C., and Mandel, J. (1982), Consensus values and weighting factors. Journal of Research of the National Bureau of Standards, 87, 377–385.

[10] Rao, P. S. R. S. (1981), Cochran's contributions to variance component models for combining estimates. In P. Rao and J. Sedransk, editors, W. G. Cochran's Impact on Statistics. J. Wiley, New York.

[11] Rao, P. S. R. S., Kaplan, J., and Cochran, W. G. (1981), Estimators for the one-way random effects model with unequal error variances. Journal of the American Statistical Association, 76, 89–97.

[12] Schiller, S., and Eberhardt, K. (1992), Combining data from independent chemical analysis methods. Spectrochimica Acta, 12, 1607–1613.

[13] Searle, S., Casella, G., and McCulloch, C. (1992), Variance Components. J. Wiley, New York.

[14] Willie, S., and Berman, S. (1995), NOAA National Status and Trends Program Ninth Round Intercomparison Exercise Results for Trace Metals in Marine Sediments and Biological Tissues. NOAA Technical Memorandum NOS ORCA 93, U.S. Department of Commerce.