A FURTHER CONTRIBUTION TO THE THEORY OF SYSTEMATIC STATISTICS J·unjiro. Ogawa .

advertisement
A FURTHER CONTRIBUTION TO THE THEORY OF SYSTEMATIC STATISTICS
.;
i/'.t:
:PlfK
by
J·unjiro. Ogawa .
University of North Car9}ina
Sponsored by the Office of
Naval Research under the contract
for research in probability and
statistics at Chapel Hill. Reproduction in whole or in part is
permitted for any purpose of the
United States government. Table
3.1 produced under sponsorship of
the Office of Ordnance Research.
Institute of Statistics
Mimeograph Series No. 168
April, 1957
A FURTHER CONTRIBUTION TO THE THEORY OF SYSTEMATIC STATISTICS
by
l
Junjiro Ogawa, Institute of Statistics, University of North Carolina
Introduction.
Until 1945 the main interest in the field of statistical
estimation seems to have been in the so-called
lI
efficient" estimators.
But
from the point of view of economy in practical use, it seems reasonable to 1nquire whether the output of information is worth the input measured in money,
man-hours, or otherwise.
Thus we may ask whether comparable results could
have been obtained by a smaller expenditure. From this standpoint F. Mosteller
2
proposed £2_7 in 1946 the use of order statistics for such purposes on the
ground that, however large the sample size, the observations can easily be
•
ordered in magnitude With the help of punch-card eqUipment.
He considered
the problem of estimation of the mean and standard deviation of a univariate
normal population, and that of estimation of the correlation coefficient of
a bivariate normal distribution.
Then in 1951 J. Ogawa
£4_7 considered
more systematically the problem of estimation of the location and the scale
parameter of a population whose density function depends only on these two
parameters, and obtained the optimum solutions in some cases for the univariate
normal population.
There are many cases in which the samples are by their very nature
lSponsored by the Office of Naval Research under the contract for
research in probability and statistics at Chapel Hill. Reproduction in whole
or 1npart is permitted for any purpose of the United states government.
Table ;.1 produced under sponsorship of the Office of Ordnance Research.
2The numbers in square brackets refer to the bibliography listed at
the end.
2
ordered in magnitude as for example in a life test of electric lamps or a
fatigue test of a certain material.
Usually in such cases the population
probability density functions are assumed to be exponential.
So at least
for the exponential distribution estimation and testing of an hypothesis
based upon systematic statistics are of great importance, from the point of
view of application.
The first two sections of this paper will be devoted to the general
theory of systematic statistics from the population whose density function
depends only upon the location and scale parameters for sUfficiently large
values of sample size.
Then in § 3 the application of the general theory
to the exponential distribution will be given and optimum spacing of the
selected sample quantl1es and the corresponding best estimators will be
tabulated.
Finally, in
§ 4,
discussion on testing an hypothesis will be
presented.
1.
Relative Efficiencies.
The fundamental probability distribution of
the large sample theory of systematic statistics from the population whose
probability density function depends only on the location parameter m and
the scale parameter a is as follows.
The limiting frequency function 1s
,
(1.1)
where f(u) denotes the standardized frequency function of the population
and u i stands for the Ai-quantile of the population, i.e.,
i
= 1,2, ••• ,k
( 1.2)
and
f.1. = f(u.),
1.
i
= 1,2, ••• ,k
•
(1.3)
Finally, we put
AO
= 0,
~+ 1 = 1.
(1.5)
In the first place, we define the relative efficiency of the systematic
statistics with respect to parameter to be the ratio of the amount of information in Fisher's sense derived from (1.1) to that derived from the
original whole sample.
We shall consider the following two cases separately:
Case I.
The case in which the scale parameter a is known and the location
parameter m is unknown.
For the sake of convenience, put
(1.6)
4
where the right-hand side is the same as the expression in the
ex~~,~
in the density function h(X(n )'X(n )""'X(n » except for the constant
k
2
l
factor - n/2 1,2, then we have
(1.7)
ThuS it follows by differentiation 'with respect to m that
(1.8)
where
k+l
Kl = r,
i=l
and f k+l
= 0 and f O should be equal to lor"
0 as the case may be:.- Hence the
amount of information Is(m) of the systematic statistics With respect to the
location parameter
ill
is asymptotically equal to
(1.10)
The likelihood function of the original whole sample, considered as
a random sample of size n, is
L = a-
n
n
TI
i=l
xi-m.
f( -a- )
I
(1.11)
5
hence we have
n
log L
L
==
xi-m
--a- ) -
log f(
(1.12)
n log cr ,
i=l
and consequently the amount of information IO(m) of the ori$inal whole sample
with respect to the location parameter m is equal to
2
(1.13)
where
e
Ui
= (Xi
- m)/cr, i.e. U stands for the standardized
variate.
If f(t)
)== (21C
\
1/2exp 1.-
t
2/2 } , then
IO(m) ==
and i f f(t)
=e
J.
-\I
for t
> 0,
n
-;- ,
(1.14)
then
IO(m)
=
n
2
-;
(1.15)
Thus the relative efficiency of the systematic statistics with respect to
the location parameter m is defined as:
2
)
1)(m)
~
(1.16)
In particular, for the normal distribution we have
,,(m)
= Kl
'
(1.17)
6
and for the exponential distribution we get
T}
e
(m)
=
~J
= .! ,-...n
n
0
(1.18)
In the case of the one-sided exponential distribution defined by
- x-m
g(x) = -1Cf e - Cf
o
=
we know already that
min
for x
> m,
otherWise,
xi i.e. x(l) is the maximum likelihood estimator,
!<i<n
and further that this is the uniformly minimum variance estimator of m for
~
all values of n.
Thus, the estimator based upon the frequency function (1.1)
becomes meaningless.
Case 2.
The case in which the location parameter m is known and the scale
parameter a is unknown.
log h = - k log
In this case
n
Cf
-
---
2(;'2·
S + a term independent of a.
(1.20)
Therefore we get
(1.21)
and since it can easily be seen that
(1.22)
7
where
( 1.23)
we obtain finally the amount of information of the systematic statistics
with respect to the scale parameter
0,
i.e.
(1.24)
or
I
S
(0)
On the other hand we have
o log
ao
h _
1
~
- -;
1=1
n
- - (j
X.
-m
(1.26)
f(~~:---)
Hence by making use of the general relation
I
[
(Ufffg~)
=
1,
(1.27)
we obtain
(1.28)
In particular, for the normal distribution we have
8
I O( a)
2n
= 2'
(J
and for the exponential distribution we have
n
(1.30)
= -;
Thus we obtain the relative efficiency of the systematic statistics with
respect to the scale parameter a as follows:
'1 ( a)
,
E
2
(U;(bY») - 1
(1.31)
In particula; for the normal distribution we have
(1.32)
and for the exponential distribution we have
(1.33)
2.
The Best Linear Unbiased Estimators of the Unknown Parameters Based
upon the Selected Sample Quantiles for Sufficiently Large Sample Size n.
In the circumstance now under consideration, the basic distribution
is given by (1.1), and the unknown parameters are m and a.
~
Since the dis-
tribution given in (1.1) is a k-dimensional normal distrtubtion, we can apply
9
the extended Gauss-Markov theorem on least squares
the best linear unbiased estimators which are
estimators,
L:'_7 in
order to find
asymptotical~etticient
Hence, as far as the large sample problems are concerned there
are no other estimators which are more efficient than the best linear unbiased ones.
Case I.
The case in which the scale parameter a is known and the location
parameter m is unknown.
1\
In this case, we can find the best linear unbiased estimator m of the
location parameter m by solving the single normal equation
oS
I
1\
dill m=m
=0
'
(2.1)
which turns out to be
(2.2)
where
k+l
X =
(fi-fi-1)(fiX(ni)-fi-1X(n1_1»
.E
)..1-)..i-1
i=l
and
(2.4)
;\
Hence the best linear unbiased estimator m of m is
1
m= - X -
.J\
Kl
10
and it can easily be seen after small calculation that
(2.6)
From (2.6), we can see that the best linear unbiased estimator In? is an
"efficient estimator" (so to speak) from the point of view of the relative
efficiency given in the preceding section.
If in particular the frequency function f(t) of the population is
symmetric with respect to the origin, for example the normal distribution,
)' x(n2)""'x(~)
and the spacing of the selected sample quantiles X(n
i
is also symmetric, i.e.,
n i + nk _1+ 1
n,;
i
==
Ai + 'K-i+l = 1,;
i
= 1,2, ••• ,k,
u + uk _ + = 0,;
i
i 1
i=1,2, ••• ,k
==
1,2, ••• ,k
(2.7)
or in terms of hi
(2.8)
then we have
(2.9)
and
fi
e
==
fk_i+l;
1 == 1,2, ... ,k.
In such e. case, it follows clearly that
(2.10)
11
(2.11)
Hence we have
/\
1
m =-K X,
l
(2.12)
and
Case 2.
The case in which the location parameter m 1s known and the scale
parameter
(J
is unknown.
In a quite similar manner, we can find the best linear unbiased
estimator
/\
(J
of the scale parameter
(J
by solVing the normal equation
~I·
ocr
.cr = 1'=0.
(J
(2.l4)
This comes to be
where
(2.16)
Hence we get
(2.17)
12
and
1
!'
Var( 0)
(2.18)
K2
In the particular case in which f{t) and the spacing of the selected sample
quant11es are symmetric we get as before
and the variance of
3.
1\
0
is the same as before.
Determination of the Optimum Spacings for the Estimation of the Scale
Parameter
of the One-Parameter Exponential Distribution Based on the
Selected Sample Quantiles for SUfficient~y Large Sample Size n. ~5_7
In
L:4_7,
the author developed the general theory of estimation' of
the location and the scale parameters based upon the selected sample quantiles
for sUfficiently large sample size n and determined the optimum spacings
in the case of the normal population.
Owing to its importance in practical applications, optimum spacings
are calculated herein for the estimation of the scale parameter
q
of the
exponential distribution whose density funotion is given as follows:
x
1
Ci e
g(x)
=
i
0
,
if x
>0
(3.1)
0
, otherwise
13
Suppose we are given an ordered sample of size n, where n is sufficiently
large.
In other words, the sample size n is l.arge enough so that the con-
elusions drawn making use of the limit distribution are valid with enough
accuracy.
For given k real numbers such that
o < ).,1 < A2 < ••• < \: <
we select k sample Ai-quantiles for i
= 1,2, ••• ,k,
l,
i.e., k order statistics
where
and the symbol ~nAi_7 stands for the greatest integer not exceeding nAif
Furthermore, let the A.-quantile
of the standardized exponential
1
distribution, whose density is given by
__ [ eo-X:
f(x)
l ,
if
x>
0
(3.4)
otherwise
be u i ' and that of (3.1) be ·x , then it is clear that
i
The limiting joint distribution of the k sample quant11es x(n ), ••• ,x(D )
1
k
haa the density function
14
~
= Cn, kexp[ ..
2(Jc:.
where
Ai
e
= l-e -u.1
1
I
= 1,2, ••• ,k
(3.7)
•
Hence we get
log h
=- ~
c;
log
(J2
-
n
2l-
S + term which is independent of
(J
(3.8)
where
k
- 2
r:
1=2
e
Thus we get the amount of information of the systematic statistics with
respect to the scale parameter cr in this case
15
(3.10)
where
K
2
=
k+1
t
1=1
==
k+l
t
(3.11)
i=l
Consequently we get the relative efficiency of the systematic statistics
with respect to
G
as
(3.12)
The best linear unbiased estimator ~ of
where
Y=
k+l
t
i=l
and this can be written as
k
Y
~
where
= r
i=l
a x
i (n i )
G
is
16
(3.16)
The coefficients a
for the optimum spacings are calculated in Table 3.1.
i
From (3.13) and (3.15) we have
where
(3.18)
I
Although it may appear to be more convenient to tabulate a. than a., for
1
k
= 10,
for instance, there are ten rounding errolBin using
1
(3.17), while
/'0
the calculation of a by the formula
I
has the error obtained by only one rounding process.
The required optimum spacing is the set
(Al'A2?"~'~)'
or
equivalently the set of values (u i ,u 2 , ••• ,uk ) which gives the maximum value
of the relative efficiency K2 • We can obtain the optimum spacings step by
step as follows:
17
(i)
The case where k
= 1:
in this case it is easily seen that
(3.20)
= 0.6471 at u1 = 1.59, and the corresponding
which has only one maximum K
2
probability is ~l
(ii)
=1 -
e- l •59
The case where k
= 0.7961.
= 2:
put u2
= ul
+
+ x, then we have
~2
).
>
e -1
and this function of x and ul is maximized at the point
x
= 1.59
i.e.,
and the maximum value of K2 is
(iii)
The case where k
= 3:
put
then it follows after some calculations that
18
2
2
x + x+
e -x { -x
e -1
It is easily seen that this function of x,y and ul is maximized at
the point
Hence we get the optimum spacing in this case as follows:
and the correspoinding maximum value of K2 is
In a similar way, the following results in Table 3.1 have been
calculated.
In Table 3.1, u i stands for the Ai-quantile of the standardized
population. The bottom row of the table which representing the relative
efficiencies K2 has been graphed as Fig. 1 to show the increasing rate of
relative efficiencies against the number of sample quentiles which has been
selected.
It will be seen from Fig. 1 that after k
relative efficiency is not appreciable.
= 10,
the gain in
e.
.e
Table 3.1
1
1
1
2
2
.s....
optimum Spacings for Estimates of Relative Efficiencies and the Coefficients of Best Estimates*.
1
2
1.59
1.02
.79607 .63941
.40731 .4283
2.1
.92647
.14687
3
3
3
4
.4
4
5
5
5
6
6
6
7
7
7
3
0.75
.52763
.30974
1.77
.82967
.20233
3.36
.96527
.06938
4
0.61
-.45665
.36098
l.3
.74334
.21719
5
0.50
.39347
.33051
loll
.67044
.21896
2~38
1.86
.90745 .84433
.10994 .13173
3.97 • 2.88
.98113 .94387
.03770 .06668
4.47
.98855
.02286
8
6
7
0.45
0.37
.34949 .30927
.29900 .27352.
0.93
0.0
.60545.55067
.21499 .20654 _
1.54
1.30
.78562'.727 47
.14242 .14850
2.29
1.91
.89873 .85192
.08571 .09839
3.31
2.66
.96348 .93005
.04394 .05919
4.90
3.68
.99255 .97478
.01487 .02996
5.27
.99486
.01027
8
8
9
9
9
10
10
lO
11
11
11
12
12
12
13
13
13
14
14
14
15
15
15
.64761
.82026
.89049
.92691
.94757
.96056
.96926
8
0.33
.28108
.24989
0.70
.50341
.19671
1.13
.67697
.14845
1.63
.80407 .10677'
2.24
.89354
.07074
2.99
.94971
.04255
4.01
.98187
.02154
5.60
.99630
.00739
_
.9'1)37
9
10
11
12 _
13
0.30
0.27
0.25
0.23
0.21
.25918 .23662 .22120 .20541 .18942
.23224 .21644 .20181 .19003 .17766
0.3 . 0.57
0.52
-0.
O. _
.46741 .43447 .40548 .38122 .35596
.18514 .17729 .16860 .16032 ..15409
1.00
0.90
0.82
0.75
0.69
.63212 .59343 .55956 .52762 .49842
.14569 .14135 .13803 .13398 .13001
1.43
1.27
1.15
1.05
0.96
.76069 .71917 .68336 .65006, .61711
.10999 .11122 .11012 .10971 .10856
1.93
1.70
1.52
1.38
1.26
.85485 .81732 .78129 .74842 .71635
.07910 .08396 .08661 .08748 .08887
2.54
2.20
1.95
1.75
1.59
.92113 .88920.85773 .82623 .79607
.05239 .06037 .06539 .068$0 .07093
3.29
2.81
2.45
2.18
1.96
.96275 .93980 .91371 .88696 ..85914
.03152 .04000 .04702 .05195 .05578
4.31
3.56
3.06
2.68
2.39
.98657 .97156 .95311 .93144 .90837
.01596 .02407 .03115 .03736 .04213
3.81
3.29
2.89
5·90
4.58
.99726 .98975 ..97785 .96275 .94442
.00547 ..01218 .01874 .02475.03028.
6.17
4.83. 4.04
3.50
.99791 .99201 .98240 .96980
.00418 .00948 .01489 .02006
6.42
5.06
4.25
.99837 .99365 .98574
.00325 .00754 .01207
6.65
5.22
.99871 .99486
.00258 .00611
6.86
.99895
.00210
e
14
0.20
.18127
.16759
0.41
.33635
.145=:;2
0.64
.47271
.12609
0.89
.58934
.10644
1.16
.68651
.08890
1.46
.76776
.07281
1.79
.83304
.05802
2.16
.88467
.04570
2.59
.924 98
.034 47
3.09
.95450
.02479
3 .70
.97528
.01643
4.45
.98832
.009':\9
5.47
.99579
.00500
7.06
.99914
.00172
15
0.19
.17304
.16096
0.39
.3229 h
.1386::
0.60
.45119
.12031
0.83
.56395
.10430
1.08
.66040
.03203
1.35
.74076
.07350
1.65
.80795
.06019
1.98
.86193
.04300
2.3)
.90463
.03778
2.7B
.93 7 96
.02851
3 .28
.96237
.02051
3.89
.97956
.01358
4.64
.9~034
.00817
5.66
.99652
.00414
( .2)
9°02 0
:0)142
.97982
.98316
.98374
.98739
.98939
.99071
.99180
20
Fig. 1. The curve of relative efficiencies against
the number of selected sample quantiles to be used
l.000
.980
.960
.940
.920
.900
.880
.860
e
.840
.820
.800
.780
.760
.740
.720
.700
.680
.6 0
.6 0
.6
0
1
e
2
3
4
5
6
7
8
9
10
11
Number of Sample Quantiles
12
13
14
15
21
Example:
As an illustration of the estimation procedure, we shall
calculate the estimate of the standard deviation of the time intervals
in days between explosions in mines, involving more than 10 men killed
from 6th December 1875 to 29th May 1951. The data are from Maguire,
Pearson and Wynn
£1_7.
It should be noted that this example may
not be the best to point out the value of this method because the sample
size (n
= 109)
is not so large that the labor necessary for the classical
estimation is almDst cG:lmparable
'tOt) ,
that necessary for our estimate.
This example should be regarded as an illustration of the calculating
procedure itself.
There are cases, which are tmportant in applications
such as in life testing of the electric lamps and in fatigue tests of
certain material, where the samples are ordered in magnitudes in natural
order.
In such a case the procedure here may be of great help in getting
quickly the estimate of
0,
especially for a large bulk of data.
The Table 1 of B. A. Maguire, E. S. Pearson and A. H. A. Wynn
is reproduced here with observations arranged in order of magnitude as
follows:
22
e
Table 3.2. Time intervals 1n days between explosions in mines,
involving more than 10 men killed from 6 Dec. 1875 to 29 May 1951.
(B. A. Maguire, E. S. Pearson and A. H. A. wynn)
Order
1
2
e
e
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
.34
35
36
37
38
39
40
41
42
43
44
45
Observation Order
1
46
4
47
4
48
49
7
11
50
13
51
15
52
15
53
17
54
18
55
19
56
19
57
20
58
20
59
22
60
61
23
62
28
63
29
64
31
32
65
66
36
67
37
68
47
48
69
49
70
50
71
72
54
54
73
74
55
58
75
76
59
77
59
61
78
61
79
66
80
81
72
·82
72
83
75
84
78
78
85
81
86
87
93
88
96
89
99
108
90
Observation
11;
114
120
120
123
124
129
131
137
145
151
156
171
176
182
188
189
195
203
208
215
217
217
217
224
228
233
255
271
275
275
275
286
286
312
312
315
326
326
329
330
336
338
345
348
Order
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
Observation
354
·361
364
369
378
390
457
467
498
517
566
644
745
871
1205
1312
1357
1613
1630
23
For k
= 10,
we can see from Table 3.1 that
1 ..
1 109 x
,23662_7 + 1 = 26
n2 = £n>-'2_7 + 1 =
1 109 x
.1j.3447_7 + 1
= 48
.59343_7 + 1
= 65
£ 109 x .71917_7 + 1
= 79
n1 =
n
1 n>-'1_7 +
3 = £ n>-'3-7 + 1
n4
=
£n>-'4-7 + 1
n = £n>-'5-7 + 1
5
f
n;"6-7 + 1
n6
==
n
= £n;"7_7
7
= £109
=
x
= £109
x .81732_7 + 1 = 90
= £109 x .88920_7 + 1 = 97
+ 1 =
£ 109 x
.93980_7 + 1 = 103
nS
= 1n).8-7 +
1
= 1 109 x .97156_7 + 1
n
9
= £nr.,)-7 +
1
= £109 x .98975_7 + 1 = loB
n10= .Ln;"1~7+ 1
= 1 109
x .99791_7 + 1
==
106
= 109
Thus, we used the ordered observations as follows:
Order(n.)
J.
e
Observations(x( 11 »
1
Coeff1c1ents(a.1- )
26
50
.21646
48
120
.17729
65
208
.14135
79
291
.11122
90
348
.08396
97
459
.06037
103
745
.04000
106
1312
.02407
108
1613
.01218
109
1630
.00418
24
Thus we calculate the expression
and consequently we have
If we compare the estimate
0
= 241
which was calculated using all of
the sample by the classical method with the above, there is quite good
agreement.
4.
Testing Statistical Hypothesis.
In order to test the hypothesis
(4.1)
if we start with the frequency function given by (l.l), then this can be
done as follows:
n
~
k
E
{ 1=1
(A
if the null-hypothesis H is true, then
Ai + l-A i _1
A }{~ A
)
i+1- 1 i- i-1
O
2
2
f1(XCn.)-uiOO)
1
25
is distributed according to the X2-distribution with degrees of freedom k.
This comes out as
The minumum value So of S under the variation of a is given by
S
~
and
a
=
k+l
L
1=1
(fiX( n. )-f.~- lX( n. 1 »2
1.
~-
(4.4)
n 2 So 1s distributed according to the ;t2-distribution with degrees
0'0
of freedom k-l, provided EO is true.
Hence
is independent of So and is distributed according to the;t2-distr1bution
with degree of freedom 1.
Thus the statistic
t =
~
vk-I
(4.6)
follows the Student's t-distribution with degrees of freedom k-l if the
26
null-hypothesis Ho is true.
If
H:o:::
the null-hypothesis H is not true, and an alternative hypothesis
O
0(1°0) is true, then the distribution of t in (4.6) follows the non-
central t distribution 'With non-centrality parameter-
5
= ,/if;
(1 -
°0
(1
)
(4.7)
,
and the power functWJ1 of' the t-test is an increasing function of the
absolute value of 5.
Hence"tt Ls reasonable to
se.~e8t
sample quantiles the
spacing of which makes K maximum, i.e. optimum spacing for estimation purpose
2
is also optimum for testing purpose.
Thus we can use Table ;.1 also for
testing purposes.
Finally, it should be noted that the confidence interval of a with
confidence coefficient 100(1-0:) per cent it given by
(4.8)
where
tk_l(lO(~)
with degrees of
Example:
stands for the 1000 per cent point of the t-distribution
freedom k-l.
We shall calculate the 95%
confidence interval for
° in
the
example of the preceding section, as an illuetration of the procedure
explained in this section.
We consider case k
= 10.
Then we have
27
e
""'0
--
11
'£
i=l
(f' 1.. x ( 01 ) -f.J.- 1x ( n _
i 1
l
_ K 1,2
2
)..1- A1_1
2
.-
11
l..:
1=1
(fiX(ni)-f1-1X(ni_l»
""i- A1_1
10
2
(E18 i X(n. »
1
K
2
and calculation Ylill be executed as shown in the following table.
(4.9)
The 95%
confidence interval calculated below seems to be somewhat wide, which may
be due to the smallness of the sample size.
e
.
. .e
Calculating SCheme for
!
l:fo
~O!
.
So
2
I_~_n__
f1X(n~)
fIX(n )
Al-~O
D
1
!
.
f1
I
I· 76:;:;8
I ~-~
a~
I .19785 11'2
X(D )
1
\5 0
IX(D
f 1X(D )
1
138.16900
)
2
38.16900
16.13093
f'2X(n )-f 1X(n )
2
1
I :f2x(n2 ) -fIx(n1 )
'L-lL
.~
If X(n I 29.69460
2
2
a3
1"3
1.15896
1.56553 1120
167.86360
If:;
t:;x(n:;)
X(n,)
I t 3X(D
.
3
) -f2X(n )
2
16.70296
a4
I A4
1·1257!~
.40657 208
11'4
IX(n4)
04.56656
f
(D4) -f ,x(n,)
a5
l~
1~-A4 1.28083
I.09815 If,
If4X(n4) I -2.84503
.08396 1.81732 "'6-"'5
,.18268
81.72153
X(D )
5
f
1:;48
5X(n5 )
~
2
-18.14889
16:;.57264 ' f 6X(n6)-f5X(n )
5
) -:f 4x(n )
4
~-A4
f 5X("5)-f 4X(D4)
5
.~
2
(f :;x en :;) -f2X(n
2
I
>"6-~
a 2 x(D )
2
,)
I
&3x {n }
3
(f 4X(n4}-f:;X(D:;»2
"'4-"'3
t
a4x(D~)
(f5X(n ) -f 4X(n4»2
5
~-A4
I
-184.90973
f 6X(n6)-:f5X(n )
5
1X(D )
1
-lL
I
I
D
2
-2.26263
f ,X(D
291
(f 2X{n )-f1X(n »
2
1
3X(n.:;) -f 2X(n )
2
'" _~
3
"'4-"'3
.
.11122 1.71917
,
·-1
f4x(n4)-f~(n:;)
.141:;5 I .59:;43 ;\.4-"'3
I
10.50765
I
4X
~
15.00864
)
f
.17729 I .43447 I "':;-A2
-
Al
1
.21644 I .2366'2
1).,2
l
l'1 x (n )
!'1. I .23662
~
(f 6X(n )-f5X(D
6
~6-~
5
»
2
a 5x(D )
5
Calculating Scheme for
11.
a6
6
1'6
.88920 A.-r-1I.6
.06037
a
.0'1~~
;"7
7
.11080 457
•05060
f
.04000 .93980 1\8-11.
7
Aa
aa
1'6x (n6)
x(n6)
.03176
7
f
r (n 7)
•
-5.78660
f 8x (DS) -f
fS
fSX(oa)
-7.16a36
(oa)
.97156
~9-1I.S
.02872 1312
37.68064
a
~
.01819
f'9
f 9X(n
x(n )
9
9
)
-179·9ts100
r (n7) -f6x(n6)
44.a4900
.02407
9
f
.06020 745
x
(ContLuUen)
-12.93701~
50.63560
x(D )
7
So
f't (D,) -f6x (n6)
A.-r-A6
.98975 1"10-1'9
.01025 1613
16.53325
r (n7)
r
fax(nS) -f (n )
7
! 11.10
.00816
flO
I
Pt(n
10
) f 1<f(n
10
1\9-1'8
.00209 1630
A11 -"-10
I
-
IAll
.00209
i
3.40670
10x (n10 )-1'9x (n )
9
)
10
e-
aax (08)
(1' 9X(09) -1'ax(D »
a
A9-hg
2
a x
9 (D )
9
C:.
(f10x(D10)-f9x(09»
['10-11.9
)
l-~O
a10x (n
f
C:
C
10 x(n
10
1-11,10
10
)
)
-1630.00000
Total
it
r (n7)
"a-A.r
:'10-~
-1' 10x (n
10
-3.40670
2
-1608.64563
) -13.12655
-f 1<f(n
(faK(na)-f rCn7 »
-1162.58329
10x (D 10 ) -1' 9x (n )
i
.co418 1.99791
a
1\s-~
1'9X(D )-f8X(n8)
9
9
a10
2
-225.70403
f 9x (n ) -f~ (118 )
9
-21.1 4739
l'
(f't(n )-f6X(n »
6
7
'1-1\6
-114.35969
l'
.01~18
a6x (n6)
60461.82398
238.50937
·e
30
= 2600.72348
2 ?62 x
• <;.
/
/2600.72348
9 x .98316
=
38.77971
31
REFERENCES
£1_7 Maguire, B. A., Pearson, E. S. and Wynn, A. H. A., "The time
intervals between indus tria.1 accidents," Biometrika, Vol. 39
(1957).
pp. 168-180.
£2_7 Mosteller,
F., "On some useful 'inefficient' statistics," Annals
of Mathematical Statistics, Vol. 17(1948). pp. 377-408.
£3_7 Neyman, J and David, F. N., "Extension of the Markoff's theorem
on least squares," Statistical Research
Memoir~,
Vol. 1(1936).
pp. 105-116.
£4_7 Ogawa.,
I,"
J., "Contributions to the theory of systematic sta.tistics,
Osaka Mathematical Journal, Vol 3(1951). pp. 175-213.
15_7 Ogawa, J., Determination of optimum spacings for the estimation
of the scale parameter of the exponential distribution based on
sample quantiles • .Typew~ltten manuscript
<JfBtostat~CS'-f
a~ th~ Depar~nt
SChool -Ofl"'Ptlblie Health, University of North
-Carolina, Chapel J{iJ:l, North Carolina -
32
ACKNOWLEDGEMENT
The author wishes to" express his thanks to
Professor B. G. Greenberg and Dr. A. E. Sarhan of the
Department of Biostatistics, School of Public Health,
University of North Carolina for their kind encouragements, essential suggestions and discussions given to
him while this paper was prepared.
The author I s
thanks are also due to Professor HSl:cHd 'Hotelling and
Dr. David ':8; :.Dunc'anr.0f tHe, Ud ver,j-ity.of
for their
v.alua.bl~"comments on
North~C.grolina
this paper.
Download