A MULTIPLE DECISION PROCEDURE FOR TESTING WITH ORDERED ALTERNATIVES A Thesis by

advertisement
A MULTIPLE DECISION PROCEDURE FOR TESTING WITH ORDERED
ALTERNATIVES
A Thesis by
Alexandra Kathleen Echart
Bachelor of Science, Wichita State University, 2014
Submitted to the Department of Mathematics, Statistics, & Physics
and the faculty of the Graduate School of
Wichita State University
in partial fulfillment of
the requirements for the degree of
Master of Science
July 2015
c
Copyright
2015 by Alexandra Kathleen Echart
All Rights Reserved
A MULTIPLE DECISION PROCEDURE FOR TESTING WITH ORDERED
ALTERNATIVES
The following faculty members have examined the final copy of this thesis for form and
content, and recommend that it be accepted in partial fulfillment of the requirement for the
degree of Master of Science with a major in Mathematics.
Hari Mukerjee, Committee Chair
Kirk Lancaster, Committee Member
Xiaomi Hu, Committee Member
Rhonda Lewis, Committee Member
iii
DEDICATION
To my parents, brother, friends, and family
iv
ACKNOWLEDGMENTS
I would like to thank my advisor, Hari Mukerjee, for his support throughout my undergraduate and graduate studies. Thanks are also due to Hammou El Barmi who constructed
a PAVA code in R. I would also like to thank the members of my committee, Kirk Lancaster,
Xiaomi Hu, and Rhonda Lewis.
v
ABSTRACT
In testing hypotheses with ordered alternatives, the conclusion of an order restriction
when rejecting the null hypothesis is very sensitive to the correctness of the assumed model.
In other words, when the null hypothesis is rejected, there is no protection against the fact
that both the null and the ordered alternatives might be false. In this thesis we suggest a
novel method of providing this protection. This entails redefining the classical concept of
hypothesis testing to one where multiple decisions are made: (1) Decide that there is not
enough evidence against the null hypothesis, (2) there is a strong evidence in favor of the
ordered alternative, or, (3) there is a strong evidence against both the null and the ordered
alternatives. By simulations and examples, we show that this new procedure provides very
substantial protections against false conclusions of the order restriction while reducing the
power of the test very little when the ordering is correct.
vi
TABLE OF CONTENTS
Chapter
Page
I. INTRODUCTION
1
II. IMPLEMENTING THE MULTIPLE DECISION PROCEDURE
8
III. SIMULATIONS
10
IV. EXAMPLES
21
V. SUMMARY AND CONCLUDING REMARKS
22
BIBLIOGRAPHY
23
APPENDICES
25
A.
B.
C.
D.
E.
F.
G.
H.
I.
J.
Code
Code
Code
Code
Code
Code
Code
Code
Code
Code
for the Pool Adjacent Violators Algorithm
and Output for Obtaining Some Simulation Results
and Output for Obtaining Some Simulation Results
and Output for Obtaining Some Simulation Results
and Output for Obtaining Some Simulation Results
and Output for Obtaining Results For Example 1
and Output for Obtaining Results For Example 2
and Output for Obtaining Results For Example 3
and Output for Obtaining Results For Example 4
and Output for Obtaining Results For Example 5
vii
when
when
when
when
K
K
K
K
=
=
=
=
3
4
5
6
26
27
33
39
42
45
47
49
52
55
LIST OF TABLES
Table
Page
3.1 Comparison of Equal Alpha Levels
11
3.2 α1 = 0.10 Fixed
12
3.3 α1 = 0.05 Fixed
13
3.4 α1 = 0.01 Fixed
14
3.5 α2 = 0.10 Fixed
15
3.6 α2 = 0.05 Fixed
16
3.7 α2 = 0.01 Fixed
17
3.8 Results for k = 5, α1 = α2 = 0.05
18
3.9 Results for k = 6, α1 = α2 = 0.05
19
3.10 Comparison of NNT and MDP Tests
20
4.1 Examples 1 - 5
21
viii
LIST OF ABBREVIATIONS
ANOVA
Analysis of Variance
CCC
Closed Convex Cone
IID
Independent Identically Distributed
JT
Jonckheere and Terpstra
LRT
Likelihood Ratio Test
LS
Least Squares
MDP
Multiple Decision Procedure
MLE
Maximum Likelihood Estimate
NNT
New Nonparametric Test
NNTa
New Nonparametric Test Based on Asymptotics
NNTs
New Nonparametric Test Based on Simulations
ORI
Order Restricted Inference
PAVA
Pool Adjacent Violators Algorithm
RWD
Robertson, Wright, and Dykstra
TM
Terpstra and Magel
ix
LIST OF SYMBOLS
α1
The Level of Significance for Testing H0 vs. H1 − H0
α2
The Level of Significance for Testing H1 vs. H2 − H1
Ba,b
A Beta Variable with Parameters a and b
c
Critical Value
c1
A Critical Value for Testing H0 vs. H1 − H0
c2
A Critical Value for Testing H1 vs. H2 − H1
C
A Closed Convex Cone
CC
The Complement of a Closed Convex Cone
χ̄201
The Test Statistic for Testing H0 vs. H1 − H0 , Variances Known
χ̄212
The Test Statistic for Testing H1 vs. H2 − H1 , Variances Known
χ2a
A Chi-Square Random Variable with a Degrees of Freedom
D0
Conclude No Strong Evidence Against H0
D1
Conclude Against H0 in Favor of H1 − H0
D2
Conclude Against H1 in Favor of H2 − H1
2
Ē01
The Test Statistic for Testing H0 vs. H1 − H0 , Variances Unknown
2
Ē12
The Test Statistic for Testing H1 vs. H2 − H1 , Variances Unknown
f
Function Vector
fi
Function for the ith Population
I(·)
Indicator Function
∞
Infinity
k
Denotes the Number of Populations/Samples
L
The Largest Linear Subspace of the CCC C
L⊥
The Orthogonal Complement of L
x
LIST OF SYMBOLS (continued)
M
The Number of Level Sets
µ
The Mean Vector for the Populations
µi
The Mean of the ith Population
µ̄0
The Estimate of the Common Mean under H0
µ̂
The Mean Vector for the Samples
µ̂i
The Mean of the ith Sample
µ∗
The LS Projection of µ̂ onto C
µ∗i
The ith Component of the LS Projection of µ̂ onto C
N
The Sum of the i Sample Sizes ni
N (a, b)
A Normal Distribution with Mean a and Variance b
Nk (a, b)
A Multivariate Normal Distribution with Mean a and Variance b
ni
The Sample Size of the ith Sample
P (·)
The Probability of an Event
P (A|B)
The Conditional Probability of an Event A given B
P (l, k, w)
The Probability that the Number of Level Sets M = l
p1
The Supremum over H0 of P (χ̄201 ≥ c1 |H0 )
p2
The Supremum over H1 of P (χ̄212 ≥ c2 |H1 )
Πw (µ̂|C)
The LS Projection of µ̂ onto C
Rk
k-Dimensional Euclidean Space
S01
2
A Transformed Version of Ē01
that is Asymptotically χ̄201
S12
2
A Transformed Version of Ē12
that is Asymptotically χ̄212
σ2
The Variance Vector of the Population
σ̂ 2
The Estimate of σ 2
xi
LIST OF SYMBOLS (continued)
sup
The Supremum
w
The Weight Vector for the Sample
wi
The Weight for the ith Sample
W −1
The k × k Matrix diag{w1−1 , w2−1 , . . . , wk−1 }
Xij
IID Continuous Random Variables from N (0, 1)
Yij
The jth Observation Belonging to the ith Sample
Ȳi
The Mean of the ith Sample
xii
CHAPTER 1
INTRODUCTION
An important problem in order restricted inference (ORI) is to establish that a kdimensional parameter µ = (µ1 , µ2 , ..., µk ) lies in a closed convex cone (CCC) C. The
standard procedure is to test the null hypothesis µ ∈ L against the alternative that µ ∈ C
but not in L, where L is the largest linear subspace of C. The most common application
is in a one-way analysis of variance (ANOVA). For independent random samples from k
populations, with mean µi for the ith population, and letting
L = {µ : µ1 = µ2 = · · · = µk } and C = {µ : µ1 ≤ µ2 ≤ · · · ≤ µk },
(1.1)
H0 : µ ∈ L vs H1 : µ ∈ C − L.
(1.2)
we test
Of course, the µi ’s could be decreasing also. Another common application occurs when the
µi ’s are partially ordered in an umbrella shape:
C = {µ : µ1 ≤ · · · ≤ µc ≥ µc+1 ≥ · · · ≥ µk },
(1.3)
where 1 < c < k is known or unknown. Under the usual assumption of independent identically distributed (IID) N (0, σ 2 ) errors, Bartholomew (1961 a,b) developed the likelihood
ratio test (LRT) for equation (1.2); the monograph by Robertson, Wright, and Dykstra
(1988), heretofore referred to as RWD (1988), contains an excellent account of it and further
developments. The procedure is as follows. Let the sample size be ni for the ith population
P
with N =
i ni , and let {Yij : 1 ≤ i ≤ k, 1 ≤ j ≤ ni } be the observations with the
population mean vector µ̂ = {Ȳ1 , Ȳ2 , ..., Ȳk }, the maximum likelihood estimator (MLE) of µ
when there are no restrictions placed on µ. The MLE µ∗ under the restriction that µ ∈ C
can be shown to be the least squares (LS) projection of µ̂ onto C, denoted by Πw (µ̂|C), that
1
minimizes
k
X
wi (µ̂i − fi )2
in the class of f = {fi } ∈ C,
(1.4)
i=1
where w is a weight vector with wi = ni /σ 2 . The popular PAVA (pool adjacent violator
algorithm) could be used to compute µ∗ ; see RWD (1988) for this and all un-referenced
results in ORI stated in this paper. The µ̂i ’s have k distinct values. The restricted estimator
averages a number of adjacent µ̂i ’s giving rise to M so-called level sets; the number of level
sets M is random with 1 ≤ M ≤ k. This is a result of the fact that C is a polyhydral cone
and the LS projection of µ̂ could be in any one of the l-dimensional faces of C, 1 ≤ l ≤ k. Let
P
P
µ̄0 = ki=1 wi Ȳi / ki=1 wi , the estimate of the common mean under H0 . When the variance
σ 2 is known, the LRT rejects H0 for large values of the test statistic
χ̄201
=
k
X
wi (µ∗i − µ̄0 )2 .
(1.5)
i=1
The null distribution of the test statistic is given by
P (χ̄201
≥ c) =
k
X
P (l, k, w)P (χ2l−1 ≥ c),
(1.6)
l=1
where P (l, k, w) = P (number of level sets M = l), χ2l−1 is an ordinary χ2 random variable
with l −1 degrees of freedom, and χ20 ≡ 0. The P (l, k, w)’s are probabilities of certain wedges
(the pre-images of sectors of Rk that project onto the l-dimensional faces of C) of Rk under
the Nk (0, W −1 ) distribution, where W −1 = diag{w1−1 , w2−1 , ..., wk−1 }. It should be noted that
the P (l, k, w)’s depend on w only through its direction, and not its magnitude. For some
simple cases these can be computed exactly, and they have been tabulated in RWD (1988).
Many approximation schemes had been developed, but, with the advent of fast computers,
these probabilities are now found by simulations in the general case.
2
When σ 2 is unknown, it is estimated as in ANOVA:
σ̂ 2 =
ni
k X
X
(Yij − Ȳi )2 /(N − k).
(1.7)
i=1 j=1
One rejects H0 for large values of the test statistic
2
Ē01
Pk
wi (µ∗i − µ̄0 )2
.
= Pk Pni
2
i=1
j=1 (Yij − µ̄0 )
i=1
(1.8)
Its null distribution is given by
2
P (Ē01
≥ c) =
k
X
P (l, k, w)P (B(l−1)/2,(N −l)/2 ≥ c),
(1.9)
l=1
where Ba,b is a beta variable with parameters a and b (B0,b = 0). It can be shown that a
transformed version is asymptotically χ̄201 :
S01 ≡
2
(N − k)Ē01
d
→ χ̄201
2
1 − Ē01
as N − k → ∞.
(1.10)
The discussion above is an abbreviated description of the test in equation (1.2): µ ∈ H0
vs µ ∈ H1 − H0 in the usual ANOVA setting under the normality assumption (We use
H() both as a symbol for a hypothesis as well as a symbol for the set µ is in under H() ;
the possible confusion is minimal.). Jonckheere (1954) and Terpstra (1952) independently
provided a nonparametric test (hereafter called the JT test) for linearly ordered location
parameters of k populations:
H0 : µ1 = · · · = µk vs H1 : µ1 ≤ · · · ≤ µk with at least one strict inequality,
(1.11)
based on independent random samples:
{Yij = Xij + µi : 1 ≤ i ≤ k, 1 ≤ j ≤ ni ; Xij ∼ IID continuous}.
3
(1.12)
The paper by Terpstra and Magel (2003) (referred to as TM (2003) from now on) has an
extensive bibliography of modifications of the JT test and other proposed nonparametric
tests. Parametric or nonparametric, all of these tests conclude in favor of H1 − H0 when
H0 is rejected, ignoring the possibility that the model may be incorrect. In one dramatic
simulation experiment TM (2003) show that the JT test rejects H0 with probabaility 0.932
at the level of significance of 0.05 when
(µ1 , µ2 , µ3 , µ4 ) = (0, 1.5, 0.5, 1, 5), with IID N (0, 1) errors, and ni ≡ 20.
(1.13)
As they point out, this could lead to devastating consequences in many applications. They
stated that a good test should have the following three properties:
P1. The power of the test should be approximately equal to the stated significance level
when H0 is true,
P2. The test should have a higher power than the general alternative test when H1 is
true, and
P3. The test should have low power for any alternative that does not fit the profile given
in H1 .
TM (2003) then propose their own test statistic
T =
n1
X
i1 =1
···
nk
X
[I(Y1i1 ≤ · · · ≤ Ykik ) − I[(Y1i1 = · · · = Ykik )].
(1.14)
ik =1
It measures the number of times the population-permuted observations follow the order reQ
striction of µ ∈ H1 − H0 . There are N !/ ki=1 ni ! partitions of (1,...N) of this form, all
equally likely under H0 . Although the exact distribution of T can be found in principle, the
computation is not feasible even for small numbers. For example, the number of partitions
is approximately 1.7 × 1010 when k = 4 and ni ≡ 5. However, they were able to derive the
asymptotic distribution. Additionally, it is possible to perform a Monte Carlo approxima4
tion of the exact distribution by taking a random sample of the partitions. Even then, the
computations are very computer intensive. They called their test the NNT (new nonparametric test), and NNTa and NNTs for that test based on the asymptotics and that based
on simulations (but only for very small numbers), respectively. Their main result was that
both NNT’s lose little power compared to the JT test when the alternatives are in C − L,
but lose a large percentage of the power when the alternatives are in C C . In particular, for
the example in equation (1.13), the JT test rejects H0 93.2% of the time while the NNTa
rejects only 53.9% of the time. Using a variety of alternatives, both in C − L and in C C , they
show that NNT is clearly favorable over the JT test if one wants some protection against
“false positives.” However, this was done only for k = 3 and k = 4 because of computational
complexity.
In this paper we introduce an entirely new approach to protection against the false
conclusion of H1 − H0 in the normal error model. Frequently it is possible to perform a
goodness-of-fit LRT of H1 vs H2 − H1 , where H2 puts no restriction on µ, i.e., µ ∈ Rk .
Suppose we reject H0 in favor of H1 − H0 if the LRT statistic χ̄201 is large, and we reject H1
in favor of H2 −H1 if the LRT statistic χ̄212 is large. We suggest a multiple decision procedure
(MDP) whereby we make three possible decisions:
D0 : Conclude that there is no strong evidence against H0 if χ̄201 is small and χ̄212 is small.
D1 : Conclude against H0 and in favor of H1 − H0 if χ̄201 is large and χ̄212 is small.
D2 : Conclude against H1 in favor of H2 − H1 if χ̄212 is large, suggesting possible further
multiple pairwise comparisons.
Although the MDP uses two standard tests of the ordinary type, it differs from a usual test
in several ways.
1. If α1 and α2 are the levels of significance for the χ̄201 and χ̄212 tests, respectively, they
are chosen independently, e.g., a large α2 will be called for if avoidance of a false D1 is very
5
important.
2. The level of significance = supH0 P (Reject H0 |H0 true) is of no interest, although one
could define an entity, supH0 P (Conclude D1 |H0 true).
3. The power of the MDP under various alternatives does not seem to make any sense.
4. As a related matter, the p-value does not make any sense either, although, given the
observed χ̄201 = c1 and χ̄212 = c2 , one could define a bivariate p-value as
(p1 , p2 ) = (sup P (χ̄201 ≥ c1 |H0 ), sup P (χ̄212 > c2 |H1 )).
H0
(1.15)
H1
The case in favor of H1 − H0 gets stronger as p1 goes down and p2 goes up (easier to reject
H1 in favor of H2 − H1 ); the maximum value of p2 occurring when c2 = 0, i.e., µ̂ = µ∗ . The
case against H1 − H0 gets stronger as p1 gets larger and/or p2 gets smaller.
Our main concerns will be the probabilities of concluding D1 when µ ∈ H0 , µ ∈ H1 − H0 ,
and when µ ∈ H2 − H1 . It is clear from the description of the MDP that the probability
of concluding in favor of H1 − H0 will be less than that using the LRT when µ ∈ H1 − H0 .
We will show by simulations that this loss is very small in all cases, but the reduction in the
probability is very substantial when µ ∈ H2 − H1 . The computational procedure, that will
be described in the next section, is simplified in the normal case because χ̄201 and χ̄212 are
independent conditional on the level sets.
In Chapter 2, we provide the details of applying the MDP. Although similar procedures
could be employed for other polyhydral cones, we concentrate only on the isotonic cone
given in equation (1.1) in this initial effort. In Chapter 3, we give some simulation results
to show how the probability of concluding H1 − H0 when µ ∈ H1 − H0 is reduced very little
from the LRT (as described in equations (1.5)-(1.10)) and how the probability of concluding
H1 − H0 is reduced very substantially when µ ∈ H2 − H1 . In particular, for the example
in equation (1.13), the LRT concludes D1 99.8% of the time, while the MDP concludes D1
6
only (47.0%,24.6%,15.7%) of the time when α2 = (0.01, 0.05, 0.10) with α1 held fixed at
0.05 as seen in Table 3.3. We also compare these reductions with those of the NNT of TM
(2003) from the JT tests. In Chapter 4, we analyze the five data sets given in TM (2003) to
show that the MDP analyses seem to corroborate the intuitive results from the data while
the LRTs do not in some cases. In Chapter 5, we summarize our results and make some
concluding remarks.
7
CHAPTER 2
IMPLEMENTING THE MULTIPLE DECISION PROCEDURE
We continue using the set-up and notation given in the Introduction. We have already
described the LRT for testing H0 : µ ∈ L vs H1 − H0 : µ ∈ C − L in equations (1.5)-(1.10).
We now describe the LRT for testing H1 : µ ∈ C vs H2 − H1 : µ ∈ C C . When σ 2 is known,
the LRT rejects H1 in favor of H2 − H1 for large values of
χ̄212
≡
k
X
wi (Ȳi − µ∗i )2 ,
where wi = ni /σ 2 .
(2.1)
i=1
Its null distribution is given by
P (χ̄212
> c) =
k
X
P (l, k, w) P (χ2k−l > c).
(2.2)
l=1
When σ 2 is unknown, the LRT rejects H1 for large values of
2
Ē12
(ν)
Pk
wi (Ȳi − µ∗i )2
.
≡ Pk i=1
Pni
∗ 2
(Y
−
µ
)
ij
i
j=1
i=1
(2.3)
Its null distribution is given by
2
P (Ē12
> c) =
k
X
P (l, k, w) P (B(l−1)/2,(N −k)/2 > c.),
(2.4)
l=1
with Ba,b as defined in (1.9). Also,
S12 ≡
2
(N − k)Ē12
d
→ χ̄212
2
1 − Ē12
as N − k → ∞.
(2.5)
The following conditional independence results for computing joint probabilities are stated
in the form of a theorem.
8
Theorem 1 (Theorem 2.3.1 of RWD (1988)) For µ ∈ H0 ,
P (χ̄201
≥
c1 , χ̄212
> c2 ) =
k
X
P (l, k, w)P (χ̄201 ≥ c1 ) P (χ̄212 > c2 ),
(2.6)
2
2
> c2 ).
≥ c1 ) P (Ē12
P (l, k, w)P (Ē01
(2.7)
l=1
2
P (Ē01
≥
2
c1 , Ē12
> c2 ) =
k
X
l=1
It is instructive to graphically compare the regions of µ̂ where the LRT and the MDP conclude
D1 . We show this in Figure 2.1 for k = 3 on the plane L⊥ .
(a) χ̄201 Test
(b) χ̄212 G.O.F. Test
Figure 2.1: Regions of µ̂ for Decision D1 when k = 3
9
(c) MDP
CHAPTER 3
SIMULATIONS
Using 50,000 simulations in each case, we have computed the percentage of the time the
MDP concludes D0 , D1 and D2 for a variety of µ’s, both in C and in C C , for k = 3, 4, 5, 6. In
each case we have also computed the percentage of the time the χ̄201 LRT concludes D1 and
noted the reduction percentage of (LRT−MDP)/LRT. We have considered all combinations
of αi = 0.01, 0.05, and 0.10 for i = 1, 2, where α1 is the level of significance of the χ̄201 test and
α2 is the level of significance of χ̄212 test. We present only a small fraction of our results. It is
clear that, irrespective of the value α2 in these ranges, the reduction percentage is minimal
when µ ∈ C, whereas it is large to very large when µ ∈ C C .
For the cases when k = 3, 4, not only did we compare different combinations of alpha
levels, we also compared the MDP results (for α1 = 0.05 and α2 = 0.05) to that of the
NNTa and NNTs from TM (2003). For comparison purposes, we used 20,000 simulations
for each case when k = 3 and 3,000 simulations for the cases when k = 4. In Table 3.10, we
see that NNTa and NNTs reject H0 for µ = (0, 1.5, 0.5, 1.5) 53.9% and 49.2% of the time,
respectively, whereas the MDP reduces this percentage to 25.0%.
10
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
11
D0 %
97.9
23.5
13.2
0.0
98.0
0.1
0.0
0.0
0.0
D0 %
89.9
8.0
3.6
0.0
90.2
0.0
0.0
0.0
0.0
D0 %
80.6
4.0
1.5
0.0
80.6
0.0
0.0
0.0
0.0
D1 %
1.0
76.5
86.5
100.0
0.9
99.8
99.9
99.9
99.9
D1 %
5.1
92.0
94.7
100.0
4.8
99.3
99.4
99.4
99.3
D1 %
9.6
95.9
94.9
100.0
9.6
98.5
98.5
98.5
98.5
D2 %
1.1
0.0
0.3
0.0
1.0
0.1
0.1
0.1
0.1
D2 %
5.1
0.0
1.7
0.0
5.0
0.7
0.6
0.6
0.7
D2 %
9.8
0.1
4.6
0.0
9.9
1.5
1.5
1.5
1.5
χ̄201 %
1.0
76.5
86.8
100.0
0.9
99.9
100.0
100.0
100.0
χ̄201 %
5.2
92.0
96.3
100.0
4.9
100.0
100.0
100.0
100.0
9.9
96.0
98.5
100.0
10.1
100.0
100.0
100.0
100.0
χ̄201 %
α1 = α2
Red %
3.03
0.10
3.65
0.00
4.95
1.50
1.50
1.50
1.50
α1 = α2
Red %
1.92
0.00
1.66
0.00
2.04
0.70
0.60
0.60
0.70
α1 = α2
Red %
0.00
0.00
0.35
0.00
0.00
0.10
0.10
0.10
0.10
0.0
0.0
0.0
0.0
(0, 1.5, 0.5, 1.5)
(0, 2, 1, 1)
(0, 1, 2.5, 1)
(0, 0, 3, 0.5)
= 0.05
µ
0.1
0.0
0.0
0.0
(0, 1.5, 0.5, 1.5)
(0, 2, 1, 1)
(0, 1, 2.5, 1)
(0, 0, 3, 0.5)
= 0.01
µ
0.6
0.2
0.0
0.0
(0,
(0,
(0,
(0,
1.5, 0.5, 1.5)
2, 1, 1)
1, 2.5, 1)
0, 3, 0.5)
39.4
27.2
0.1
(0, 1, 0.5)
(1, 0, 1)
(0, 2, 1)
D0 %
15.0
8.4
0.0
(0, 1, 0.5)
(1, 0, 1)
(0, 2, 1)
D0 %
7.5
3.8
0.0
D0 %
(0, 1, 0.5)
(1, 0, 1)
(0, 2, 1)
= 0.10
µ
46.5
23.6
5.1
0.0
48.8
7.4
34.5
D1 %
24.6
9.2
1.2
0.0
56.0
6.8
15.3
D1 %
16.1
4.9
0.5
0.0
51.7
5.3
8.8
D1 %
52.9
76.2
94.9
100.0
11.8
65.3
65.4
D2 %
75.3
90.8
98.8
100.0
29.0
84.8
84.7
D2 %
83.9
95.1
99.5
100.0
40.8
90.9
91.2
D2 %
98.8
99.1
100.0
100.0
55.3
21.5
99.8
χ̄201 %
99.8
99.9
100.0
100.0
78.8
44.8
100.0
χ̄201 %
99.9
100.0
100.0
100.0
87.4
58.8
100.0
χ̄201 %
52.94
76.19
94.90
100.00
11.75
65.58
65.43
Red %
75.35
90.79
98.80
100.00
28.93
84.82
84.70
Red %
83.88
95.10
99.50
100.00
40.85
90.99
91.20
Red %
TABLE 3.1
COMPARISON OF EQUAL ALPHA LEVELS
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
12
D0 %
89.2
3.8
1.5
0.0
89.1
0.0
0.0
0.0
0.0
D0 %
85.4
3.9
1.6
0.0
85.2
0.0
0.0
0.0
0.0
D0 %
80.6
4.0
1.5
0.0
80.6
0.0
0.0
0.0
0.0
D1 %
9.8
96.2
98.2
100.0
9.9
99.9
99.9
99.9
99.9
D1 %
9.5
96.0
96.7
100.0
9.7
99.4
99.3
99.4
99.3
D1 %
9.6
95.9
94.9
100.0
9.6
98.5
98.5
98.5
98.5
D2 %
1.0
0.0
0.2
0.0
1.0
0.1
0.1
0.1
0.1
D2 %
5.0
0.0
1.7
0.0
5.1
0.6
0.7
0.6
0.7
D2 %
9.8
0.1
4.6
0.0
9.9
1.5
1.5
1.5
1.5
α1 = α2 = 0.10
Red % µ
9.9
3.03
96.0
0.10 (0, 1, 0.5)
98.5
3.65 (1, 0, 1)
100.0
0.00 (0, 2, 1)
10.1
4.95
100.0
1.50 (0, 1.5, 0.5, 1.5)
100.0
1.50 (0, 2, 1, 1)
100.0
1.50 (0, 1, 2.5, 1)
100.0
1.50 (0, 0, 3, 0.5)
α1 = 0.10, α2 = 0.05
χ̄201 % Red % µ
9.7
2.06
96.0
0.00 (0, 1, 0.5)
98.3
1.73 (1, 0, 1)
100.0
0.00 (0, 2, 1)
9.9
2.02
100.0
0.60 (0, 1.5, 0.5, 1.5)
100.0
0.70 (0, 2, 1, 1)
100.0
0.60 (0, 1, 2.5, 1)
100.0
0.70 (0, 0, 3, 0.5)
α1 = 0.10, α2 = 0.01
χ̄201 % Red % µ
9.8
0.00
96.2
0.00 (0, 1, 0.5)
98.4
0.20 (1, 0, 1)
100.0
0.00 (0, 2, 1)
9.9
0.00
99.9
0.00 (0, 1.5, 0.5, 1.5)
100.0
0.10 (0, 2, 1, 1)
100.0
0.10 (0, 1, 2.5, 1)
100.0
0.10 (0, 0, 3, 0.5)
χ̄201 %
76.9
20.5
35.0
47.0
24.3
4.9
0.0
11.4
14.2
0.0
0.0
0.0
0.0
0.0
D1 %
24.5
9.0
1.3
0.0
0.1
0.0
0.0
0.0
D0 %
62.6
9.0
15.1
8.9
6.3
0.0
D1 %
16.1
4.9
0.5
0.0
0.0
0.0
0.0
0.0
D0 %
51.7
5.3
8.8
D1 %
7.5
3.8
0.0
D0 %
53.0
75.7
95.1
100.0
11.7
65.3
65.0
D2 %
75.5
91.0
98.7
100.0
28.5
84.8
84.9
D2 %
83.9
95.1
99.5
100.0
40.8
90.9
91.2
D2 %
99.9
100.0
100.0
100.0
52.95
75.70
95.10
100.00
11.71
65.08
65.00
Red %
χ̄201 %
87.1
58.7
100.0
75.48
91.00
98.70
100.00
99.9
100.0
100.0
100.0
30.18
84.67
84.90
Red %
χ̄201 %
87.5
58.7
100.0
83.88
95.10
99.50
100.00
40.85
90.99
91.20
Red %
99.9
100.0
100.0
100.0
87.4
58.8
100.0
χ̄201 %
TABLE 3.2
α1 = 0.10 FIXED
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
13
D0 %
94.1
8.2
3.9
0.0
94.2
0.0
0.0
0.0
0.0
D0 %
89.9
8.0
3.6
0.0
90.2
0.0
0.0
0.0
0.0
D0 %
85.4
8.0
3.6
0.0
85.0
0.0
0.0
0.0
0.0
D1 %
4.9
91.8
95.8
100.0
4.8
99.9
99.9
99.9
99.9
D1 %
5.1
92.0
94.7
100.0
4.8
99.3
99.4
99.4
99.3
D1 %
4.9
91.9
92.9
100.0
4.9
98.4
98.5
98.5
98.5
D2 %
1.0
0.0
0.3
0.0
1.0
0.1
0.1
0.1
0.1
D2 %
5.1
0.0
1.7
0.0
5.0
0.7
0.6
0.6
0.7
D2 %
9.7
0.1
3.5
0.0
10.1
1.6
1.5
1.5
1.5
α1 = 0.05, α2 = 0.10
Red % µ
5.1
3.92
92.0
0.11 (0, 1, 0.5)
96.2
3.43 (1, 0, 1)
100.0
0.00 (0, 2, 1)
5.1
3.92
100.0
1.60 (0, 1.5, 0.5, 1.5)
100.0
1.50 (0, 2, 1, 1)
100.0
1.50 (0, 1, 2.5, 1)
100.0
1.50 (0, 0, 3, 0.5)
α1 = α2 = 0.05
χ̄201 % Red % µ
5.2
1.92
92.0
0.00 (0, 1, 0.5)
96.3
1.66 (1, 0, 1)
100.0
0.00 (0, 2, 1)
4.9
2.04
100.0
0.70 (0, 1.5, 0.5, 1.5)
100.0
0.60 (0, 2, 1, 1)
100.0
0.60 (0, 1, 2.5, 1)
100.0
0.70 (0, 0, 3, 0.5)
α1 = 0.05, α2 = 0.01
χ̄201 % Red % µ
4.9
0.00
91.8
0.00 (0, 1, 0.5)
96.1
0.31 (1, 0, 1)
100.0
0.00 (0, 2, 1)
4.9
2.04
100.0
0.10 (0, 1.5, 0.5, 1.5)
100.0
0.10 (0, 2, 1, 1)
100.0
0.10 (0, 1, 2.5, 1)
100.0
0.10 (0, 0, 3, 0.5)
χ̄201 %
69.3
15.6
34.9
47.0
24.2
5.0
0.0
18.9
19.0
0.0
0.1
0.1
0.0
0.0
D1 %
24.6
9.2
1.2
0.0
0.1
0.0
0.0
0.0
D0 %
56.0
6.8
15.3
15.0
8.4
0.0
D1 %
15.7
4.8
0.5
0.0
0.0
0.0
0.0
0.0
D0 %
46.3
3.9
8.7
D1 %
12.5
4.9
0.0
D0 %
52.9
75.8
95.0
100.0
11.8
65.4
65.1
D2 %
75.3
90.8
98.8
100.0
29.0
84.8
84.7
D2 %
84.3
95.2
99.5
100.0
41.2
91.2
91.3
D2 %
99.8
99.9
100.0
100.0
52.91
75.78
95.00
100.00
11.72
65.49
65.10
Red %
χ̄201 %
78.5
45.2
100.0
75.35
90.79
98.80
100.00
99.8
99.9
100.0
100.0
28.93
84.82
84.70
Red %
χ̄201 %
78.8
44.8
100.0
95.20
95.20
99.50
100.00
41.17
91.28
91.30
Red %
99.8
100.0
100.0
100.0
78.7
44.7
100.0
χ̄201 %
TABLE 3.3
α1 = 0.05 FIXED
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
14
D0 %
97.9
23.5
13.2
0.0
98.0
0.1
0.0
0.0
0.0
D0 %
94.0
23.5
13.0
0.0
94.0
0.1
0.0
0.0
0.0
D0 %
89.3
23.3
12.5
0.0
89.0
0.0
0.0
0.0
0.0
D1 %
1.0
76.5
86.5
100.0
0.9
99.8
99.9
99.9
99.9
D1 %
0.9
76.4
85.5
100.0
0.9
99.2
99.4
99.4
99.3
D1 %
1.1
76.7
84.0
100.0
9.5
98.4
98.5
98.5
98.4
D2 %
1.1
0.0
0.3
0.0
1.0
0.1
0.1
0.1
0.1
D2 %
5.1
0.0
1.5
0.0
5.1
0.7
0.6
0.6
0.7
D2 %
9.7
0.0
3.5
0.0
10.1
1.6
1.5
1.5
1.6
α1 = .01, α2 = 0.10
Red % µ
1.1
0.00
76.7
0.00 (0, 1, 0.5)
87.0
3.45 (1, 0, 1)
100.0
0.00 (0, 2, 1)
9.9
10.00
99.9
1.50 (0, 1.5, 0.5, 1.5)
100.0
1.50 (0, 2, 1, 1)
100.0
1.50 (0, 1, 2.5, 1)
100.0
1.60 (0, 0, 3, 0.5)
α1 = 0.01, α2 = 0.05
χ̄201 % Red % µ
0.9
0.00
76.4
0.00 (0, 1, 0.5)
86.7
1.38 (1, 0, 1)
100.0
0.00 (0, 2, 1)
0.9
0.00
100.0
0.70 (0, 1.5, 0.5, 1.5)
100.0
0.60 (0, 2, 1, 1)
100.0
0.60 (0, 1, 2.5, 1)
100.0
0.70 (0, 0, 3, 0.5)
α1 = α2 = 0.01
χ̄201 % Red % µ
1.0
0.00
76.5
0.00 (0, 1, 0.5)
86.8
0.35 (1, 0, 1)
100.0
0.00 (0, 2, 1)
0.9
0.00
99.9
0.10 (0, 1.5, 0.5, 1.5)
100.0
0.10 (0, 2, 1, 1)
100.0
0.10 (0, 1, 2.5, 1)
100.0
0.10 (0, 0, 3, 0.5)
χ̄201 %
48.8
7.4
34.5
46.5
23.6
5.1
0.0
39.4
27.2
0.1
0.6
0.2
0.0
0.0
D1 %
24.7
9.0
1.3
0.0
0.4
0.1
0.0
0.0
D0 %
39.3
3.3
15.5
31.9
12.0
0.0
D1 %
15.8
4.9
0.5
0.0
0.2
0.0
0.0
0.0
D0 %
32.5
2.0
8.9
D1 %
26.6
7.1
0.0
D0 %
52.9
76.2
94.9
100.0
11.8
65.3
65.4
D2 %
74.9
90.9
98.7
100.0
28.8
84.7
84.5
D2 %
83.9
95.1
99.5
100.0
40.9
90.9
91.1
D2 %
98.8
99.1
100.0
100.0
52.94
76.19
94.90
100.00
11.75
65.58
65.43
Red %
χ̄201 %
55.3
21.5
99.8
74.87
90.10
98.70
100.00
98.7
99.1
100.0
100.0
28.93
84.72
84.47
Red %
χ̄201 %
55.3
21.6
99.8
84.01
95.06
99.50
100.00
41.02
90.83
91.08
Red %
98.8
99.2
100.0
100.0
55.1
21.8
99.8
χ̄201 %
TABLE 3.4
α1 = 0.01 FIXED
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
15
D0 %
89.3
23.3
12.5
0.0
89.0
0.0
0.0
0.0
0.0
D0 %
85.4
8.0
3.6
0.0
85.0
0.0
0.0
0.0
0.0
D0 %
80.6
4.0
1.5
0.0
80.6
0.0
0.0
0.0
0.0
D1 %
1.1
76.7
84.0
100.0
9.5
98.4
98.5
98.5
98.4
D1 %
4.9
91.9
92.9
100.0
4.9
98.4
98.5
98.5
98.5
D1 %
9.6
95.9
94.9
100.0
9.6
98.5
98.5
98.5
98.5
D2 %
9.7
0.0
3.5
0.0
10.1
1.6
1.5
1.5
1.6
D2 %
9.7
0.1
3.5
0.0
10.1
1.6
1.5
1.5
1.5
D2 %
9.8
0.1
4.6
0.0
9.9
1.5
1.5
1.5
1.5
α1 = α2 = 0.10
Red % µ
9.9
3.03
96.0
0.10 (0, 1, 0.5)
98.5
3.65 (1, 0, 1)
100.0
0.00 (0, 2, 1)
10.1
4.95
100.0
1.50 (0, 1.5, 0.5, 1.5)
100.0
1.50 (0, 2, 1, 1)
100.0
1.50 (0, 1, 2.5, 1)
100.0
1.50 (0, 0, 3, 0.5)
α1 = 0.05, α2 = 0.10
χ̄201 % Red % µ
5.1
3.92
92.0
0.11 (0, 1, 0.5)
96.2
3.43 (1, 0, 1)
100.0
0.00 (0, 2, 1)
5.1
3.92
100.0
1.60 (0, 1.5, 0.5, 1.5)
100.0
1.50 (0, 2, 1, 1)
100.0
1.50 (0, 1, 2.5, 1)
100.0
1.50 (0, 0, 3, 0.5)
α1 = .01, α2 = 0.10
χ̄201 % Red % µ
1.1
0.00
76.7
0.00 (0, 1, 0.5)
87.0
3.45 (1, 0, 1)
100.0
0.00 (0, 2, 1)
9.9
10.00
99.9
1.50 (0, 1.5, 0.5, 1.5)
100.0
1.50 (0, 2, 1, 1)
100.0
1.50 (0, 1, 2.5, 1)
100.0
1.60 (0, 0, 3, 0.5)
χ̄201 %
32.5
2.0
8.9
15.8
4.9
0.5
0.0
26.6
7.1
0.0
0.2
0.0
0.0
0.0
D1 %
15.7
4.8
0.5
0.0
0.0
0.0
0.0
0.0
D0 %
46.3
3.9
8.7
12.5
4.9
0.0
D1 %
16.1
4.9
0.5
0.0
0.0
0.0
0.0
0.0
D0 %
51.7
5.3
8.8
D1 %
7.5
3.8
0.0
D0 %
83.9
95.1
99.5
100.0
40.9
90.9
91.1
D2 %
84.3
95.2
99.5
100.0
41.2
91.2
91.3
D2 %
83.9
95.1
99.5
100.0
40.8
90.9
91.2
D2 %
98.8
99.2
100.0
100.0
84.01
95.06
99.50
100.00
41.02
90.83
91.08
Red %
χ̄201 %
55.1
21.8
99.8
95.20
95.20
99.50
100.00
99.8
100.0
100.0
100.0
41.17
91.28
91.30
Red %
χ̄201 %
78.7
44.7
100.0
83.88
95.10
99.50
100.00
40.85
90.99
91.20
Red %
99.9
100.0
100.0
100.0
87.4
58.8
100.0
χ̄201 %
TABLE 3.5
α2 = 0.10 FIXED
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
16
D0 %
94.0
23.5
13.0
0.0
94.0
0.1
0.0
0.0
0.0
D0 %
89.9
8.0
3.6
0.0
90.2
0.0
0.0
0.0
0.0
D0 %
85.4
3.9
1.6
0.0
85.2
0.0
0.0
0.0
0.0
D1 %
0.9
76.4
85.5
100.0
0.9
99.2
99.4
99.4
99.3
D1 %
5.1
92.0
94.7
100.0
4.8
99.3
99.4
99.4
99.3
D1 %
9.5
96.0
96.7
100.0
9.7
99.4
99.3
99.4
99.3
D2 %
5.1
0.0
1.5
0.0
5.1
0.7
0.6
0.6
0.7
D2 %
5.1
0.0
1.7
0.0
5.0
0.7
0.6
0.6
0.7
D2 %
5.0
0.0
1.7
0.0
5.1
0.6
0.7
0.6
0.7
α1 = 0.10, α2 = 0.05
Red % µ
9.7
2.06
96.0
0.00 (0, 1, 0.5)
98.3
1.73 (1, 0, 1)
100.0
0.00 (0, 2, 1)
9.9
2.02
100.0
0.60 (0, 1.5, 0.5, 1.5)
100.0
0.70 (0, 2, 1, 1)
100.0
0.60 (0, 1, 2.5, 1)
100.0
0.70 (0, 0, 3, 0.5)
α1 = α2 = 0.05
χ̄201 % Red % µ
5.2
1.92
92.0
0.00 (0, 1, 0.5)
96.3
1.66 (1, 0, 1)
100.0
0.00 (0, 2, 1)
4.9
2.04
100.0
0.70 (0, 1.5, 0.5, 1.5)
100.0
0.60 (0, 2, 1, 1)
100.0
0.60 (0, 1, 2.5, 1)
100.0
0.70 (0, 0, 3, 0.5)
α1 = 0.01, α2 = 0.05
χ̄201 % Red % µ
0.9
0.00
76.4
0.00 (0, 1, 0.5)
86.7
1.38 (1, 0, 1)
100.0
0.00 (0, 2, 1)
0.9
0.00
100.0
0.70 (0, 1.5, 0.5, 1.5)
100.0
0.60 (0, 2, 1, 1)
100.0
0.60 (0, 1, 2.5, 1)
100.0
0.70 (0, 0, 3, 0.5)
χ̄201 %
39.3
3.3
15.5
24.7
9.0
1.3
0.0
31.9
12.0
0.0
0.4
0.1
0.0
0.0
D1 %
24.6
9.2
1.2
0.0
0.1
0.0
0.0
0.0
D0 %
56.0
6.8
15.3
15.0
8.4
0.0
D1 %
24.5
9.0
1.3
0.0
0.1
0.0
0.0
0.0
D0 %
62.6
9.0
15.1
D1 %
8.9
6.3
0.0
D0 %
74.9
90.9
98.7
100.0
28.8
84.7
84.5
D2 %
75.3
90.8
98.8
100.0
29.0
84.8
84.7
D2 %
75.5
91.0
98.7
100.0
28.5
84.8
84.9
D2 %
98.7
99.1
100.0
100.0
74.87
90.10
98.70
100.00
28.93
84.72
84.47
Red %
χ̄201 %
55.3
21.6
99.8
75.35
90.79
98.80
100.00
99.8
99.9
100.0
100.0
28.93
84.82
84.70
Red %
χ̄201 %
78.8
44.8
100.0
75.48
91.00
98.70
100.00
30.18
84.67
84.90
Red %
99.9
100.0
100.0
100.0
87.5
58.7
100.0
χ̄201 %
TABLE 3.6
α2 = 0.05 FIXED
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
0, 0)
0.5, 1)
1, 1)
1, 2)
0, 0, 0)
0.5, 1.5, 1.5)
1, 1, 2)
1, 1, 2.5)
0, 0.5, 3)
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
17
D0 %
97.9
23.5
13.2
0.0
98.0
0.1
0.0
0.0
0.0
D0 %
94.1
8.2
3.9
0.0
94.2
0.0
0.0
0.0
0.0
D0 %
89.2
3.8
1.5
0.0
89.1
0.0
0.0
0.0
0.0
D1 %
1.0
76.5
86.5
100.0
0.9
99.8
99.9
99.9
99.9
D1 %
4.9
91.8
95.8
100.0
4.8
99.9
99.9
99.9
99.9
D1 %
9.8
96.2
98.2
100.0
9.9
99.9
99.9
99.9
99.9
D2 %
1.1
0.0
0.3
0.0
1.0
0.1
0.1
0.1
0.1
D2 %
1.0
0.0
0.3
0.0
1.0
0.1
0.1
0.1
0.1
D2 %
1.0
0.0
0.2
0.0
1.0
0.1
0.1
0.1
0.1
α1 = 0.10, α2 = 0.01
Red % µ
9.8
0.00
96.2
0.00 (0, 1, 0.5)
98.4
0.20 (1, 0, 1)
100.0
0.00 (0, 2, 1)
9.9
0.00
99.9
0.00 (0, 1.5, 0.5, 1.5)
100.0
0.10 (0, 2, 1, 1)
100.0
0.10 (0, 1, 2.5, 1)
100.0
0.10 (0, 0, 3, 0.5)
α1 = 0.05, α2 = 0.01
χ̄201 % Red % µ
4.9
0.00
91.8
0.00 (0, 1, 0.5)
96.1
0.31 (1, 0, 1)
100.0
0.00 (0, 2, 1)
4.9
2.04
100.0
0.10 (0, 1.5, 0.5, 1.5)
100.0
0.10 (0, 2, 1, 1)
100.0
0.10 (0, 1, 2.5, 1)
100.0
0.10 (0, 0, 3, 0.5)
α1 = α2 = 0.01
χ̄201 % Red % µ
1.0
0.00
76.5
0.00 (0, 1, 0.5)
86.8
0.35 (1, 0, 1)
100.0
0.00 (0, 2, 1)
0.9
0.00
99.9
0.10 (0, 1.5, 0.5, 1.5)
100.0
0.10 (0, 2, 1, 1)
100.0
0.10 (0, 1, 2.5, 1)
100.0
0.10 (0, 0, 3, 0.5)
χ̄201 %
48.8
7.4
34.5
46.5
23.6
5.1
0.0
39.4
27.2
0.1
0.6
0.2
0.0
0.0
D1 %
47.0
24.2
5.0
0.0
0.1
0.1
0.0
0.0
D0 %
69.3
15.6
34.9
18.9
19.0
0.0
D1 %
47.0
24.3
4.9
0.0
0.0
0.0
0.0
0.0
D0 %
76.9
20.5
35.0
D1 %
11.4
14.2
0.0
D0 %
52.9
76.2
94.9
100.0
11.8
65.3
65.4
D2 %
52.9
75.8
95.0
100.0
11.8
65.4
65.1
D2 %
53.0
75.7
95.1
100.0
11.7
65.3
65.0
D2 %
98.8
99.1
100.0
100.0
52.94
76.19
94.90
100.00
11.75
65.58
65.43
Red %
χ̄201 %
55.3
21.5
99.8
52.91
75.78
95.00
100.00
99.8
99.9
100.0
100.0
11.72
65.49
65.10
Red %
χ̄201 %
78.5
45.2
100.0
52.95
75.70
95.10
100.00
11.71
65.08
65.00
Red %
99.9
100.0
100.0
100.0
87.1
58.7
100.0
χ̄201 %
TABLE 3.7
α2 = 0.01 FIXED
TABLE 3.8
RESULTS FOR k = 5, α1 = α2 = 0.05
µ
(0, 0, 0, 0, 0)
(0, 1, 1, 1.5, 1.5)
(0, 1, 1.5, 1, 1.5)
(0, 1, 1.5, 1.5, 1)
(0, 0.5, 1, 1, 1.5)
(0, 1, 1.5, 1, 0.5)
(0, 1, 1, 1.5, 0.5)
(0, 0.5, 1, 1.5, 1)
(0.5, 0.5, 1, 1, 1.5)
(0.5, 1, 0.5, 1, 1.5)
(0.5, 1.5, 0.5, 1, 1)
(0.5, 1, 1.5, 1, 0.5)
(0, 1, 1, 1.5, 2)
(0, 1, 1, 2, 1.5)
(0, 1, 1.5, 1, 2)
(0, 1, 2, 1.5, 1)
(0, 0, 0, 0.5, 3)
(0, 0, 0, 3, 0.5)
(0, 0, 0.5, 0.5, 3)
(0, 0.5, 3, 0.5, 0)
(0, 0, 0.5, 3, 0.5)
(0, 0.5, 0.5, 1, 3)
(0, 0.5, 0.5, 3, 1)
(0, 0.5, 3, 1, 0.5)
(0, 0.5, 1, 3, 0.5)
(0, 0, 0, 0.5, 2)
(0, 2, 0.5, 0, 0)
(0, 1, 1, 2, 2)
(0, 1, 2, 2, 1)
(1, 1, 2, 2, 0)
(0, 1, 2, 1, 2)
(0, 2, 1, 2, 1)
(0, 1.5, 1.5, 2, 2)
(0, 1.5, 2, 1.5, 2)
(0, 1.5, 2, 2, 1.5)
(0, 2, 1.5, 2, 1.5)
D0 %
89.9
0.0
0.1
0.1
0.1
0.9
0.7
0.2
3.4
6.1
14.2
13.7
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
D1 %
4.9
99.2
87.2
76.1
99.6
24.9
26.9
87.3
95.8
81.2
13.4
12.0
99.7
85.6
88.0
28.5
99.0
0.0
99.2
0.0
0.0
99.7
0.0
0.0
0.0
1.7
0.0
99.2
14.9
0.0
34.4
2.9
99.2
87.4
76.3
58.8
18
D2 %
5.2
0.8
12.7
23.8
0.3
74.2
72.4
12.5
0.8
12.7
72.4
74.3
0.3
14.4
12.0
71.5
1.0
100.0
0.8
100.0
100.0
0.3
100.0
100.0
100.0
98.3
100.0
0.8
85.1
100.0
65.6
97.1
0.8
12.6
23.7
41.2
χ̄201
5.1
100.0
99.9
99.9
99.9
96.6
97.1
99.8
96.5
93.0
47.9
45.0
100.0
100.0
100.0
100.0
100.0
100.0
100.0
99.6
100.0
100.0
100.0
100.0
100.0
100.0
0.6
100.0
100.0
31.5
100.0
100.0
100.0
100.0
100.0
100.0
Reduction %
3.92
0.80
12.71
23.82
0.30
74.22
72.30
12.53
0.73
12.69
72.03
73.33
0.30
14.40
12.00
71.50
1.00
100.00
0.80
100.00
100.00
0.30
100.00
100.00
100.00
98.30
100.00
0.80
85.10
100.00
65.6
97.1
0.80
12.60
23.70
41.20
TABLE 3.8 (continued)
RESULTS FOR k = 5, α1 = α2 = 0.05
µ
(0.5, 0.5, 0.5, 1.5, 1.5)
(0.5, 1.5, 1.5, 0.5, 0.5)
(0.5, 0.5, 1.5, 0.5, 1.5)
(0.5, 0.5, 1.5, 1.5, 0.5)
(0.5, 1.5, 0.5, 1.5, 0.5)
(0, 0.5, 1.5, 1.5, 2)
(0, 0.5, 1.5, 2, 1.5)
(0, 1.5, 2, 1.5, 0.5)
(0, 0.5, 2, 1.5, 1.5)
(0, 0.5, 0.5, 1.5, 2)
(0, 0.5, 2, 1.5, 0.5)
(0, 0.5, 1.5, 0.5, 2)
(0, 0.5, 1.5, 2, 0.5)
D0 %
0.2
1.4
1.2
1.8
1.5
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
D1 %
97.8
1.0
30.3
11.8
1.3
99.7
87.9
1.0
76.4
99.7
1.4
34.8
1.4
D2 %
2.0
97.6
68.5
86.5
97.2
0.3
12.1
99.0
23.6
0.3
98.6
65.2
98.6
χ̄201
99.8
40.1
96.0
86.9
45.2
100.0
100.0
100.0
100.0
100.0
99.9
100.0
100.0
Reduction %
2.00
97.51
68.44
86.42
97.12
0.30
12.10
99.00
23.60
0.30
98.60
65.20
98.60
TABLE 3.9
RESULTS FOR k = 6, α1 = α2 = 0.05
µ
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
(0,
0, 0, 0, 0, 0)
0, 0.5, 0.5, 1, 1)
0.5, 1, 1, 0.5, 0)
0, 0.5, 1, 0.5, 1)
0, 0.5, 1, 1, 0.5)
0.5, 1, 1, 1.5, 2)
0.5, 1, 2, 1.5, 1)
1, 1.5, 2, 1, 0.5)
0.5, 1, 1.5, 1, 2)
1, 1, 2, 2, 3)
1, 2, 3, 2, 1)
1, 2, 1, 2, 3)
0, 1.5, 1.5, 3, 3)
0, 1.5, 3, 1.5, 3)
1.5, 3, 3, 1.5, 0)
D0 %
90.3
0.6
5.6
1.2
1.9
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
D1 %
5.0
98.5
9.5
88.5
78.7
99.9
37.8
1.2
92.3
99.6
0.0
44.1
99.1
3.6
0.0
19
D2 %
4.8
0.8
84.9
10.3
19.3
0.1
62.2
98.8
7.7
0.4
100.0
55.9
0.9
96.4
100.0
χ̄201
5.1
99.4
62.9
98.7
97.6
100.0
100.0
99.8
100.0
100.0
100.0
100.0
100.0
100.0
100.0
Reduction %
1.96
0.91
84.90
10.33
19.36
0.10
62.20
98.80
7.70
0.40
100.00
55.90
0.90
96.40
100.00
20
(0, 0.5, 1)
(0, 1, 1)
(0, 1, 2)
(0, 0.5, 1.5, 1.5)
(0, 1, 1, 2)
(0, 1, 1, 2.5)
(0, 0, 0.5, 3)
Z ∼ Exp(1)
k=3
N S = 20, 000
Z ∼ Exp(1)
k=4
N S = 3, 000
(0, 0.5, 1)
(0, 1, 1)
(0, 1, 2)
Z ∼ t(3)
k=3
N S = 20, 000
(0, 0.5, 1.5, 1.5)
(0, 1, 1, 2)
(0, 1, 1, 2.5)
(0, 0, 0.5, 3)
(0, 0.5, 1.5, 1.5)
(0, 1, 1, 2)
(0, 1, 1, 2.5)
(0, 0, 0.5, 3)
Z ∼ N (0, 1)
k=4
N S = 3, 000
Z ∼ t(3)
k=4
N S = 3, 000
Ordered
Alternative
(0, 0.5, 1)
(0, 1, 1)
(0, 1, 2)
Model
Specifications
Z ∼ N (0, 1)
k=3
N S = 20, 000
1.000
1.000
1.000
0.999
0.992
0.975
1.000
0.970
0.984
0.997
0.998
0.762
0.683
0.998
0.990
0.997
1.000
1.000
NNTa
0.886
0.820
1.000
1.000
1.000
1.000
1.000
0.987
0.934
1.000
0.946
0.979
0.997
0.994
0.723
0.647
0.996
1.000
1.000
0.999
1.000
NNTs
0.862
0.792
1.000
0.989
0.990
0.994
0.993
0.922
0.939
1.000
0.916
0.924
0.931
0.922
0.817
0.805
0.991
0.997
0.992
0.994
0.992
MDP
0.918
0.950
1.000
(0, 1.5, 0.5, 1.5)
(0, 2, 1, 1)
(0, 1, 2.5, 1)
(0, 0, 3, 0.5)
(0, 1, 0.5)
(1, 0, 1)
(0, 2, 1)
(0, 1.5, 0.5, 1.5)
(0, 2, 1, 1)
(0, 1, 2.5, 1)
(0, 0, 3, 0.5)
(0, 1, 0.5)
(1, 0, 1)
(0, 2, 1)
(0, 1.5, 0.5, 1.5)
(0, 2, 1, 1)
(0, 1, 2.5, 1)
(0, 0, 3, 0.5)
Non-ordered
Alternative
(0, 1, 0.5)
(1, 0, 1)
(0, 2, 1)
0.189
0.003
0.483
0.017
0.353
0.000
0.111
0.396
0.071
0.389
0.031
0.213
0.019
0.194
0.539
0.065
0.297
0.001
NNTa
0.243
0.011
0.176
0.148
0.002
0.470
0.009
0.317
0.000
0.091
0.346
0.046
0.336
0.024
0.180
0.015
0.167
0.492
0.001
0.261
0.000
NNTs
0.213
0.006
0.145
0.242
0.074
0.013
0.000
0.558
0.055
0.152
0.320
0.158
0.081
0.001
0.450
0.124
0.250
0.250
0.084
0.011
0.000
MDP
0.554
0.068
0.156
TABLE 3.10
COMPARISON OF NNT AND MDP TESTS
CHAPTER 4
EXAMPLES
TM (2003) considered five real-life examples and compared the JT test with their NNT.
We do the same, but comparing the LRT with the MDP. We have used the asymptotic forms
of S01 and S12 given in (1.10) and (2.5) and used the critical values given in RWD (1988). It
has been shown by extensive simulations that the critical values are insensitive to moderate
departures from the equal weights case for the P (l, k, w)’s. In particular, if the largest to the
smallest wi ’s is less than or equal to 1.5, there is very little difference in the critical values.
For simplicity, we have used the equal weight approximation in all cases.
Examples in Table 4.1 correspond to the data given in (1) Table 5 (Example 2) of
Neuhauser et al. (1998, p. 907), (2) Table 6.16 of Daniel (1990, p. 237), (3) Table 6.7
of Hollander and Wolfe (1999, p. 211), (4) Table 1 (Replicate 2) of Simpson and Margolin
(1986, p. 589), and (5) the same data as in (4) except that we have added 10 to each
observation in group 6. The reason TM (2003) chose example 5 is that example 4 clearly
shows an umbrella ordering, and this ordering seems to be retained in the modified data set
in example 5, but harder to detect.
TABLE 4.1
EXAMPLES 1 - 5
Example
Example
Example
Example
Example
1
2
3
4
5
JT p-value NNT p-value
s01
s12
< 0.001
0.001 23.160 0.148
< 0.001
< 0.001 71.663 0.000
0.015
< 0.001 5.985 0.917
0.225
0.591 1.761 33.840
0.066
0.591 5.142 14.841
21
p1
0.000
0.000
0.032
0.291
0.058
p2
0.847
1.000
0.739
0.000
0.004
CHAPTER 5
SUMMARY AND CONCLUDING REMARKS
In a one-way ANOVA, sometimes there are reasons to believe that population means
are in increasing (or decreasing) order. Under the usual normality assumptions there is a
LRT that tests homogeneity against an increasing order,– rejection of the null is supposed to
provide evidence for the linear order. However, it has been found that the LRT rejects the
null with a high probability even when the ordering is strongly violated, possibly resulting
in dire consequences. Using the fact that there is also a LRT for goodness-of-fit of ordering
against all alternatives, we have developed a multiple decision procedure by combining the
two tests, resulting in three possible decisions: (1) there is not enough evidence against
homogeneity, (2) there is a strong evidence for the linear ordering, and (3) there is strong
evidence against both homogeneity and the ordering. Extensive simulations show that, when
the ordering holds, the MDP and the LRT makes the correct decision at almost the same
rate, while, when the ordering is violated, the MDP correctly identifies it considerably more
frequently than the LRT.
The MDP procedure is applicable in all cases where the mean vector of k populations are
assumed to lie in a CCC, C, with L as its largest linear subspace, there is a test for µ ∈ L
vs µ ∈ C − L, and there is a test for µ ∈ C vs µ ∈ C C . These C’s include among others
the partially ordered cones of simple tree orderings, and still others in infinite dimensional
spaces. We leave these for future research.
22
BIBLIOGRAPHY
23
BIBLIOGRAPHY
Daniel, W. W. (1990). Applied Nonparamteric Statistics, 2nd Ed. PWS-Kent.
Hollander, M. and Wolfe, D. L. (1999). Nonparametric Statistical Methods, 2nd Ed. John
Wiley and Sons Inc. New York.
Jonkheere, A. R. (1954). A distributio-free k-sample test against ordered alternatives.
Biometrika, 41, 133-145.
Neuhauser, M., Liu, P.-Y. and Hothorn, L. (1998). Nonparametric tests for trends: Jonkheere’s test, a modification and a maximum test. Biometrical Journal, 40(8), 899-909.
Robertson, T., Wright, F. T. and Dykstra, R. L. (1988). Order Restricted Inferences, Wiley,
New York.
Simpson, D. G. and Margolin, B. H. (1986). Recursive nonparametric testing for doseresponse relationships subject to downturns at high doses. Biometrika, 78(3), 589-596.
Terpstra, T. (1952). Asymptotic normality and consistency of Kendall’s test against trend
when ties are present in one ranking. Indagationes Mathematica, 14, 337-333.
Terpstra, J. T. and Magel, R. C. (2003). A new nonparametric test for the ordered alternative
problem. Nonparametric Statistics, 15(3), 289-301.
24
APPENDICES
25
APPENDIX A
CODE FOR THE POOL ADJACENT VIOLATORS ALGORITHM
The PAVA code was used extensively throughout the simulations and examples and is
thanks to Hammou El Barmi, Adjunct Professor in the Department of Statistics at Columbia
University in the City of New York.
> # The Pool Adjacent Violators Algorithm (PAVA)
rpava < − function(x, wt = rep(1, length(x)) ){
+ n < − length(x)
+ if (n <= 1) return (x)
+ if (any(is.na(x)) || any(is.na(wt))) {
+ stop(”Missing Values in ’x’ or ’wt’ not allowed”)
+}
+ lvlsets < − (1:n)
+ repeat {
+ viol < − (as.vector (diff(x)) < 0) # Find adjacent violators
+ if (! (any(viol)) ) break
+ i < − min( (1:(n-1))[viol]) # Pool first pair of violators
+ lvl1 < − lvlsets[i]
+ lvl2 < − lvlsets[i + 1]
+ ilvl < − ( lvlsets == lvl1 | lvlsets == lvl2 )
+ x[ilvl] < − sum( x[ilvl]*wt[ilvl] )/sum( wt[ilvl] )
+ lvlsets [ilvl] < − lvl1 }
+x
+}
26
APPENDIX B
CODE AND OUTPUT FOR OBTAINING SOME SIMULATION
RESULTS WHEN K = 3
The following code is used for obtaining the simulations results seen in Tables 3.1 through
3.7 and Table 3.10 for the case when k = 3. Notice that we have included only the output for
the specific cases when µ = c(0, 0, 0), µ = (0, 0.5, 1), and µ = (0, 1, 0.5), for α1 = α2 = 0.05,
n = 20 and number of simulations (nsim) equal to 50,000. In order to obtain all other
simulation results for k = 3, we need only change the values of alpha13 (α1 ), alpha23 (α2 ),
nsim, and the µ vectors.
> # After inputing the rpava code (Appendix A), we read our table of critical values.
See RWD TABLE
> critval = read.table(”Chi values for equal weights.txt”)
> # Specify starting parameters
> nsim = 50000 # The number of simulations
> n = 20 # The sample size
> k3 = 3 # k = 3
> # Choose the critical values.
> alpha13 = critval[2,3] # Critical value for Alpha1 = 0.05, k = 3
> alpha23 = critval[8,3] # Critical value for Alpha2 = 0.05, k = 3
> # k = 3, ordered alternatives, N(0, 1)
> # Specify the mean vector
> mu = c(0, 0, 0)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k3))
27
APPENDIX B (continued)
> mustar < − array(0, c(nsim, k3))
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k3))
+ {for (j in 1:k3){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k3){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for each sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k3)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha13 & chival[i,2] < alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha13 & chival[i,2] < alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha13, 1, 0)}
> # Finds the percent of the time the MDP concludes for each decision and LRT percent.
> sum(chival[,3])/nsim # D2
28
APPENDIX B (continued)
[1] 0.0507
> sum(chival[,4])/nsim # D1
[1] 0.04788
> sum(chival[,5])/nsim # D0
[1] 0.90142
> sum(chival[,6])/nsim # LRT
[1] 0.0486
> # Specify the new mean vector
> mu = c(0, 0.5, 1)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k3))
> mustar < − array(0, c(nsim, k3))
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k3))
+ {for (j in 1:k3){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k3){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k3)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
29
APPENDIX B (continued)
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha13 & chival[i,2] < alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha13 & chival[i,2] < alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha13, 1, 0)}
> # Finds the percent of the time the MDP concludes for each decision and LRT percent.
> sum(chival[,3])/nsim # D2
[1] 2e-04
> sum(chival[,4])/nsim # D1
[1] 0.91632
> sum(chival[,5])/nsim # D0
[1] 0.08348
> sum(chival[,6])/nsim # LRT
[1] 0.9165
> # k = 3, non-ordered alternatives, N(0, 1)
> # Specify the new mean vector
> mu = c(0, 1, 0.5)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k3))
30
APPENDIX B (continued)
> mustar < − array(0, c(nsim, k3))
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k3))
+ {for (j in 1:k3){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k3){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k3)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha13 & chival[i,2] < alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha13 & chival[i,2] < alpha23, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha13, 1, 0)}
> # Finds the percent of the time the MDP concludes for each decision and LRT percent.
> sum(chival[,3])/nsim # D2
31
APPENDIX B (continued)
[1] 0.28964
> sum(chival[,4])/nsim # D1
[1] 0.55634
> sum(chival[,5])/nsim # D0
[1] 0.15402
> sum(chival[,6])/nsim # LRT
[1] 0.7837
32
APPENDIX C
CODE AND OUTPUT FOR OBTAINING SOME SIMULATION
RESULTS WHEN K = 4
The following code is used for obtaining the simulations results seen in Tables 3.1 through
3.7 and Table 3.10 for the case when k = 4. Notice that we have included only the output
for the specific cases when µ = c(0, 0, 0, 0), µ = (0, 0.5, 1.5, 1.5), and µ = (0, 1.5, 0.5, 1.5),
for α1 = α2 = 0.05, n = 20 and number of simulations (nsim) equal to 50,000. In order to
obtain all other simulation results for k = 4, we need only change the values of alpha14 (α1 ),
alpha24 (α2 ), nsim, and the µ vectors.
> # After inputing the rpava code (Appendix A), we read our table of critical values.
See RWD TABLE
> critval = read.table(”Chi values for equal weights.txt”)
> # Specify starting parameters
> nsim = 50000 # The number of simulations
> n = 20 # The sample size
> k4 = 4 # k = 4
> # Choose the critical values.
> alpha14 = critval[2,4] # Critical value for Alpha1 = 0.05, k = 4
> alpha24 = critval[8,4] # Critical value for Alpha2 = 0.05, k = 4
> # k = 4, ordered alternatives, N(0, 1)
> # Specify the new mean vector
> mu = c(0, 0, 0, 0)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k4))
33
APPENDIX C (continued)
> mustar < − array(0, c(nsim, k4))
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k4))
+ {for (j in 1:k4){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k4){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k4)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha14 & chival[i,2] < alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha14 & chival[i,2] < alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha14, 1, 0)}
> # Finds the percent of the time the MDP concludes for each decision and LRT percent.
> sum(chival[,3])/nsim # D2
34
APPENDIX C (continued)
[1] 0.04782
> sum(chival[,4])/nsim # D1
[1] 0.04812
> sum(chival[,5])/nsim # D0
[1] 0.90406
> sum(chival[,6])/nsim # LRT
[1] 0.04926
> # Specify the new mean vector
> mu = c(0, 0.5, 1.5, 1.5)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k4))
> mustar < − array(0, c(nsim, k4))
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k4))
+ {for (j in 1:k4){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k4){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for each sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k4)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
35
APPENDIX C (continued)
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha14 & chival[i,2] < alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha14 & chival[i,2] < alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha14, 1, 0)}
> # Finds the percent of the time the MDP concludes for each decision and LRT percent.
> sum(chival[,3])/nsim # D2
[1] 0.00686
> sum(chival[,4])/nsim # D1
[1] 0.99306
> sum(chival[,5])/nsim # D0
[1] 8e-05
> sum(chival[,6])/nsim # LRT
[1] 0.99992
> # k = 4, non-ordered alternative, N(0, 1)
> # Specify the new mean vector
> mu = c(0, 1.5, 0.5, 1.5)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k4))
36
APPENDIX C (continued)
> mustar < − array(0, c(nsim, k4))
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k4))
+ {for (j in 1:k4){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k4){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for each sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k4)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha14 & chival[i,2] < alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha14 & chival[i,2] < alpha24, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha14, 1, 0)}
> # Finds the percent of the time the MDP concludes for each decision and LRT percent.
> sum(chival[,3])/nsim # D2
37
APPENDIX C (continued)
[1] 0.75188
> sum(chival[,4])/nsim # D1
[1] 0.24756
> sum(chival[,5])/nsim # D0
[1] 0.00056
> sum(chival[,6])/nsim # LRT
[1] 0.9981
38
APPENDIX D
CODE AND OUTPUT FOR OBTAINING SOME SIMULATION
RESULTS WHEN K = 5
The following code is used for obtaining the simulations results seen in Table 3.8 for the
case when k = 5. Notice that we have included only the output for the specific case when
µ = c(0, 0, 0, 0, 0) for α1 = α2 = 0.05, n = 20 and number of simulations (nsim) equal to
50,000. In order to obtain all other simulation results for k = 5, we need only change the
values of alpha15 (α1 ), alpha25 (α2 ), nsim, and the µ vectors.
> # After inputing the rpava code (Appendix A), we read our table of critical values.
See RWD TABLE
> critval = read.table(”Chi values for equal weights.txt”)
> # Specify starting parameters
> nsim = 50000 # The number of simulations
> n = 20 # The sample size
> k5 = 5 # k = 5
> # Choose the appropriate critical values.
> alpha15 = critval[2,5] # Critical value for Alpha1 = 0.05, k = 5
> alpha25 = critval[8,5] # Critical value for Alpha2 = 0.05, k = 5
> # k = 5, ordered alternatives, N(0, 1)
> # Specify the new mean vector
> mu = c(0, 0, 0, 0, 0)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k5))
> mustar < − array(0, c(nsim, k5))
39
APPENDIX D (continued)
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k5))
+ {for (j in 1:k5){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k5){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k5)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha25, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha15 & chival[i,2] < alpha25, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha15 & chival[i,2] < alpha25, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha15, 1, 0)}
> sum(chival[,3])/nsim # D2
[1] 0.04986
> sum(chival[,4])/nsim # D1
40
APPENDIX D (continued)
[1] 0.04862
> sum(chival[,5])/nsim # DO
[1] 0.90152
> sum(chival[,6])/nsim # LRT
[1] 0.04988
41
APPENDIX E
CODE AND OUTPUT FOR OBTAINING SOME SIMULATION
RESULTS WHEN K = 6
The following code is used for obtaining the simulations results seen in Table 3.9 for the
case when k = 6. Notice that we have included only the output for the specific cases when
µ = c(0, 0, 0, 0, 0, 0) for α1 = α2 = 0.05, n = 20 and number of simulations (nsim) equal to
50,000. In order to obtain all other simulation results for k = 6, we need only change the
values of alpha16 (α1 ), alpha26 (α2 ), nsim, and the µ vectors.
> # After inputing the rpava code (Appendix A), we read our table of critical values.
See RWD TABLE
> critval = read.table(”Chi values for equal weights.txt”)
> # Specify starting parameters
> nsim = 50000 # The number of simulations
> n = 20 # The sample size
> k6 = 6 # k = 6
> # Choose the appropriate critical values.
> alpha16 = critval[2,6] # Critical value for Alpha1 = 0.05, k = 6
> alpha26 = critval[8,6] # Critical value for Alpha2 = 0.05, k = 6
> # k = 6, ordered alternatives, N(0, 1)
> # Specify the new mean vector
> mu = c(0, 0, 0, 0, 0, 0)
> # Create arrays to store values
> ybar < − array(0, c(nsim, k6))
> mustar < − array(0, c(nsim, k6))
42
APPENDIX E (continued)
> muhat < − array(0, c(nsim, 1))
> chival < − array(0, c(nsim, 6))
> # A loop that finds the observations, calculates the sample mean, and finds the mu
star and mu hat values.
> for(i in 1:nsim){
+ x < − array(0, c(n,k6))
+ {for (j in 1:k6){x[,j] < − (rnorm(n) + mu[j])}} # Creates the Xij’s
+ {for (j in 1:k6){ybar[i,j] = mean(x[,j])}} # Finds the Yi bars for sample
+ mustar[i,] = rpava(ybar[i,], wt = rep(n, k6)) # Finds the mu star values
+ muhat[i] = mean(mustar[i,]) # Finds the mu hat values
+}
> # A sequence of loops that find the chibar square values, then determines whether the
MDP concludes D2, D1, D0.
> # Also determines whether the LRT rejects or fails to reject.
> for(i in 1:nsim){chival[i, 1] = sum(n*(mustar[i,] - muhat[i])ˆ2)} # Finds Chibarˆ2 01
> for(i in 1:nsim){chival[i, 2] = sum(n*(ybar[i,] - mustar[i,])ˆ2)} # Finds Chibarˆ2 12
> for(i in 1:nsim){chival[i, 3] = ifelse(chival[i,2] > alpha26, 1, 0)}
> for(i in 1:nsim){chival[i, 4] = ifelse(chival[i,1] > alpha16 & chival[i,2] < alpha26, 1, 0)}
> for(i in 1:nsim){chival[i, 5] = ifelse(chival[i,1] < alpha16 & chival[i,2] < alpha26, 1, 0)}
> for(i in 1:nsim){chival[i, 6] = ifelse(chival[i,1] > alpha16, 1, 0)}
> sum(chival[,3])/nsim # D2
[1] 0.04832
> sum(chival[,4])/nsim # D1
43
APPENDIX E (continued)
[1] 0.04922
> sum(chival[,5])/nsim # DO
[1] 0.90246
> sum(chival[,6])/nsim # LRT
[1] 0.05042
44
APPENDIX F
CODE AND OUTPUT FOR OBTAINING RESULTS FOR EXAMPLE 1
The following code is used for obtaining the sample statistics s01 and s12 and our bivariate
p-value (p1 , p2 ) for Example 1. The initial data can be obtained from Example 2 in Table 5
(Example data sets) of Neuhauser et al. (1998) page 907.
> # Neuhauser Data: Example 1
>k=4
> w = c(49, 50, 53, 57, 59, 70, 71, 78, 82, 85) # Group 0
> x = c(67, 75, 90, 94, 97, 114, 127, 130, 140, 151) # Group 1
> y = c(60, 63, 66, 81, 98, 110, 128, 133, 144, 157) # Group 2
> z = c(89, 91, 100, 104, 118, 120, 123, 137, 147, 149) # Group 3
> # Find the mean for each of the samples
> wm = mean(w); xm = mean(x); ym = mean(y); zm = mean(z)
> # Find the weight vector
> n = c(length(w), length(x), length(y), length(z)); a = rep(1, length(n)); wt = n/a
> # Display the mean vector
> ybar = c(wm, xm, ym, zm); ybar
[1] 65.4 108.5 104.0 117.8
> # Find and output mu star via the PAVA
> mustar = rpava(ybar, wt); mustar
[1] 65.40 106.25 106.25 117.80
> # Calculate and output the weighted overall mean
> muhat = sum(wt*mustar)/sum(wt); muhat
[1] 98.925
45
APPENDIX F (continued)
> # Calculate and output each of the relevant test statistic values
> chival01 = sum(wt*(mustar - muhat)ˆ2); chival01
[1] 15875.02
> chival12 = sum(wt*(ybar - mustar)ˆ2); chival12
[1] 101.25
> n1 = sum((w - wm)ˆ2); n2 = sum((x - xm)ˆ2)
> n3 = sum((y - ym)ˆ2); n4 = sum((z - zm)ˆ2)
> ssd = c(n1, n2, n3, n4); > ssda = sum((1/a)*ssd)
> ebar01 = chival01/(chival01 + chival12 + ssda); ebar01
[1] 0.3914851
> s01 = (sum(n) - k)*ebar01/(1 - ebar01); s01
[1] 23.16043
> ebar12 = chival12/(chival12 + ssda); ebar12
[1] 0.004103219
> s12 = (sum(n) - k)*ebar12/(1 - ebar12); s12
[1] 0.1483245
> # Calculate the values for our bivariate p-value using the P(l, k, w)s from RWD.
> p1 = .25*(1 - pchisq(s01, 0)) + .45833*(1 - pchisq(s01, 1)) + .250*(1 - pchisq(s01,
2)) + .04165*(1 - pchisq(s01, 3)); p1
[1] 4.577671e-06
> p2 = .25*(1 - pchisq(s12, 3)) + .45833*(1 - pchisq(s12, 2)) + .250*(1 - pchisq(s12,
1)) + .04165*(1 - pchisq(s12, 0)); p2
[1] 0.8469709
46
APPENDIX G
CODE AND OUTPUT FOR OBTAINING RESULTS FOR EXAMPLE 2
The following code is used for obtaining the sample statistics s01 and s12 and our bivariate
p-value (p1 , p2 ) for Example 2. The initial data can be obtained from Table 6.16 (Differential
plasmatocyte counts, percent from larvae of Drosophila algonquin 27 hours after parasitization by Pseudeucoila bochei (host age 91 hours when parasitized)) of Daniel (1990) page
237.
> # Daniels Data: Example 2
>k=3
> x = c(54.0, 67.0, 47.2, 71.1, 62.7, 44.8, 67.4, 80.2) # Successful host reactions
> y = c(79.8, 82.0, 88.8, 79.6, 85.7, 81.7, 88.5) # Unsuccessful host reactions
> z = c(98.6, 99.5, 95.8, 93.3, 98.9, 91.1, 94.5) # No visible host reactions
> # Find the mean for each of the samples
> xm = mean(x); ym = mean(y); zm = mean(z)
> # Find the weight vector
> n = c(length(x), length(y), length(z)); a = rep(1, length(n)); wt = n/a
> # Display the mean vector
> ybar = c(xm, ym, zm); ybar
[1] 61.80000 83.72857 95.95714
> # Find and output mu star via the PAVA
> mustar = rpava(ybar, wt); mustar
[1] 61.80000 83.72857 95.95714
> # Calculate and output the weighted overall mean
> muhat = sum(wt*mustar)/sum(wt); muhat
47
APPENDIX G (continued)
[1] 79.64545
> # Calculate and output each of the relevant test statistic values
> chival01 = sum(wt*(mustar - muhat)ˆ2); chival01
[1] 4526.883
> chival12 = sum(wt*(ybar - mustar)ˆ2); chival12
[1] 0
> n1 = sum((x - xm)ˆ2); n2 = sum((y - ym)ˆ2); n3 = sum((z - zm)ˆ2)
> ssd = c(n1, n2, n3); ssda = sum((1/a)*ssd)
> ebar01 = chival01/(chival01 + chival12 + ssda); ebar01
[1] 0.7904328
> s01 = (sum(n) - k)*ebar01/(1 - ebar01); s01
[1] 71.66302
> ebar12 = chival12/(chival12 + ssda); ebar12
[1] 0
> s12 = (sum(n) - k)*ebar12/(1 - ebar12); s12
[1] 0
> # Calculate the values for our bivariate p-value using the P(l, k, w)s from RWD.
> p1 = .33333*(1 - pchisq(s01, 0)) + .5*(1 - pchisq(s01, 1)) + .16667*(1 - pchisq(s01,
2)); p1
[1] 3.700817e-17
> p2 = .33333*(1 - pchisq(s12, 2)) + .5*(1 - pchisq(s12, 1)) + .16667*(1 - pchisq(s12,
0)); p2
[1] 1
48
APPENDIX H
CODE AND OUTPUT FOR OBTAINING RESULTS FOR EXAMPLE 3
The following code is used for obtaining the sample statistics s01 and s12 and our bivariate
p-value (p1 , p2 ) for Example 3. The initial data can be obtained from Table 6.7 (Average
Basal Are Increment (BAI) Values for Oak Strands in Souteastern Ohio) of Hollander and
Wolfe (1999) page 211.
> # Hollander and Wolfe Data: Example 3
>k=5
> v = c(1.91, 1.53, 2.08, 1.71) # Growing site index interval: 66-68
> w = c(2.44) # Growing site index interval: 69-71
> x = c(2.45, 2.04, 1.60, 2.37) # Growing site index interval: 72-74
> y = c(2.52, 2.36, 2.73) # Growing site index interval: 75-77
> z = c(1.66, 2.10, 2.88, 2.78) # Growing site index interval: 78-80
> # Find the mean for each of the samples
> vm = mean(v); wm = mean(w); xm = mean(x)
> ym = mean(y); zm = mean(z)
> # Find the weight vector
> n = c(length(v), length(w), length(x), length(y), length(z))
> a = rep(1, length(n)); wt = n/a
> # Display the mean vector
> ybar = c(vm, wm, xm, ym, zm); ybar
[1] 1.807500 2.440000 2.115000 2.536667 2.355000
> # Find and output mu star via the PAVA
> mustar = rpava(ybar, wt); mustar
49
APPENDIX H (continued)
[1] 1.807500 2.180000 2.180000 2.432857 2.432857
> # Calculate and output the weighted overall mean
> muhat = sum(wt*mustar)/sum(wt); muhat
[1] 2.1975
> # Calculate and output each of the relevant test statistic values
> chival01 = sum(wt*(mustar - muhat)ˆ2); chival01
[1] 0.9976821
> chival12 = sum(wt*(ybar - mustar)ˆ2); chival12
[1] 0.1410762
> n1 = sum((v - vm)ˆ2); n2 = sum((w - wm)ˆ2); n3 = sum((x - xm)ˆ2)
> n4 = sum((y - ym)ˆ2); n5 = sum((z - zm)ˆ2)
> ssd = c(n1, n2, n3, n4, n5)
> ssda = sum((1/a)*ssd)
> ebar01 = chival01/(chival01 + chival12 + ssda); ebar01
[1] 0.352376
> s01 = (sum(n) - k)*ebar01/(1 - ebar01); s01
[1] 5.985164
> ebar12 = chival12/(chival12 + ssda); ebar12
[1] 0.07693871
> s12 = (sum(n) - k)*ebar12/(1 - ebar12); s12
[1] 0.9168685
> # Calculate the values for our bivariate p-value using the P(l, k, w)s from RWD.
50
APPENDIX H (continued)
> p1 = .2*(1 - pchisq(s01, 0)) + .41667*(1 - pchisq(s01, 1)) + .29167*(1 - pchisq(s01,
2)) + .08333*(1 - pchisq(s01, 3)) + .00833*(1 - pchisq(s01, 4)); p1
[1] 0.03166966
> p2 = .2*(1 - pchisq(s12, 4)) + .41667*(1 - pchisq(s12, 3)) + .29167*(1 - pchisq(s12,
2)) + .08333*(1 - pchisq(s12, 1)) + .00833*(1 - pchisq(s12, 0)); p2
[1] 0.7392652
51
APPENDIX I
CODE AND OUTPUT FOR OBTAINING RESULTS FOR EXAMPLE 4
The following code is used for obtaining the sample statistics s01 and s12 and our bivariate
p-value (p1 , p2 ) for Example 4. The initial data can be obtained from Replicate 2 in Table
1 (Revertant colonies for Acid Red 114, TA98, Hamster liver activation) of Simpson and
Margolin (1986) page 589.
> # Simpson and Margolin: Example 4
>k=6
> u = c(19, 17, 16) # Group 1 (0 micrograms per milliliter)
> v = c(15, 25, 24) # Group 2 (100 micrograms per milliliter)
> w = c(26, 17, 31) # Group 3 (333 micrograms per milliliter)
> x = c(39, 44, 30) # Group 4 (1000 micrograms per milliliter)
> y = c(33, 26, 23) # Group 5 (3333 micrograms per milliliter)
> z = c(10, 8) # Group 6 (10000 micrograms per milliliter)
> # Find the mean for each of the samples
> um = mean(u); vm = mean(v); wm = mean(w)
> xm = mean(x); ym = mean(y); zm = mean(z)
> # Find the weight vector
> n = c(length(u), length(v), length(w), length(x), length(y), length(z))
> a = rep(1, length(n)); wt = n/a
> # Display the mean vector
> ybar = c(um, vm, wm, xm, ym, zm); ybar
[1] 17.33333 21.33333 24.66667 37.66667 27.33333 9.00000
> # Find and output mu star via the PAVA
52
APPENDIX I (continued)
> mustar = rpava(ybar, wt); mustar
[1] 17.33333 21.33333 24.66667 26.62500 26.62500 26.62500
> # Calculate and output the weighted overall mean
> muhat = sum(wt*mustar)/sum(wt); muhat
[1] 23.70588
> # Calculate and output each of the relevant test statistic values
> chival01 = sum(wt*(mustar - muhat)ˆ2); chival01
[1] 209.6544
> chival12 = sum(wt*(ybar - mustar)ˆ2); chival12
[1] 988.5417
> n1 = sum((u - um)ˆ2); n2 = sum((v - vm)ˆ2); n3 = sum((w - wm)ˆ2)
> n4 = sum((x - xm)ˆ2); n5 = sum((y - ym)ˆ2); n6 = sum((z - zm)ˆ2)
> ssd = c(n1, n2, n3, n4, n5, n6)
> ssda = sum((1/a)*ssd)
> ebar01 = chival01/(chival01 + chival12 + ssda); ebar01
[1] 0.1379733
> s01 = (sum(n) - k)*ebar01/(1 - ebar01); s01
[1] 1.760625
> ebar12 = chival12/(chival12 + ssda); ebar12
[1] 0.754684
> s12 = (sum(n) - k)*ebar12/(1 - ebar12); s12
[1] 33.84012
> # Calculate the values for our bivariate p-value using the P(l, k, w)s from RWD.
53
APPENDIX I (continued)
> p1 = .16667*(1 - pchisq(s01, 0)) + .38056*(1 - pchisq(s01, 1)) + .31250*(1 - pchisq(s01,
2)) + .11806*(1 - pchisq(s01, 3)) + .02083*(1 - pchisq(s01, 4)) + .00139*(1 - pchisq(s01,
5)); p1
[1] 0.2908909
> p2 = .16667*(1 - pchisq(s12, 5)) + .38056*(1 - pchisq(s12, 4)) + .31250*(1 - pchisq(s12,
3)) + .11806*(1 - pchisq(s12, 2)) + .02083*(1 - pchisq(s12, 1)) + .00139*(1 - pchisq(s12,
0)); p2
[1] 8.051721e-07
54
APPENDIX J
CODE AND OUTPUT FOR OBTAINING RESULTS FOR EXAMPLE 5
The following code is used for obtaining the sample statistics s01 and s12 and our bivariate
p-value (p1 , p2 ) for Example 5. The initial data can be obtained from Replicate 2 in Table 1
of Simpson and Margolin (1986) page 589; notice that we added 10 to each observation in
group 6 (or vector z below).
> # Simpson and Margolin Edited: Example 5
>k=6
> u = c(19, 17, 16) # Group 1 (0 micrograms per milliliter)
> v = c(15, 25, 24) # Group 2 (100 micrograms per milliliter)
> w = c(26, 17, 31) # Group 3 (333 micrograms per milliliter)
> x = c(39, 44, 30) # Group 4 (1000 micrograms per milliliter)
> y = c(33, 26, 23) # Group 5 (3333 micrograms per milliliter)
> z = c(20, 18) # Group 6 (10000 micrograms per milliliter)
> # Find the mean for each of the samples
> um = mean(u); vm = mean(v); wm = mean(w)
> xm = mean(x); ym = mean(y); zm = mean(z)
> # Find the weight vector
> n = c(length(u), length(v), length(w), length(x), length(y), length(z))
> a = rep(1, length(n)); wt = n/a
> # Display the mean vector
> ybar = c(um, vm, wm, xm, ym, zm); ybar
[1] 17.33333 21.33333 24.66667 37.66667 27.33333 19.00000
> # Find and output mu star via the PAVA
55
APPENDIX J (continued)
> mustar = rpava(ybar, wt); mustar
[1] 17.33333 21.33333 24.66667 29.12500 29.12500 29.12500
> # Calculate and output the weighted overall mean
> muhat = sum(wt*mustar)/sum(wt); muhat
[1] 24.88235
> # Calculate and output each of the relevant test statistic values
> chival01 = sum(wt*(mustar - muhat)ˆ2); chival01
[1] 352.8897
> chival12 = sum(wt*(ybar - mustar)ˆ2); chival12
[1] 433.5417
> n1 = sum((u - um)ˆ2); n2 = sum((v - vm)ˆ2); n3 = sum((w - wm)ˆ2)
> n4 = sum((x - xm)ˆ2); n5 = sum((y - ym)ˆ2); n6 = sum((z - zm)ˆ2)
> ssd = c(n1, n2, n3, n4, n5, n6)
> ssda = sum((1/a)*ssd)
> ebar01 = chival01/(chival01 + chival12 + ssda); ebar01
[1] 0.3185602
> s01 = (sum(n) - k)*ebar01/(1 - ebar01); s01
[1] 5.142291
> ebar12 = chival12/(chival12 + ssda); ebar12
[1] 0.5743225
> s12 = (sum(n) - k)*ebar12/(1 - ebar12); s12
[1] 14.84116
> # Calculate the values for our bivariate p-value using the P(l, k, w)s from RWD.
56
APPENDIX J (continued)
> p1 = .16667*(1 - pchisq(s01, 0)) + .38056*(1 - pchisq(s01, 1)) + .31250*(1 - pchisq(s01,
2)) + .11806*(1 - pchisq(s01, 3)) + .02083*(1 - pchisq(s01, 4)) + .00139*(1 - pchisq(s01,
5)); p1
[1] 0.0581037
> p2 = .16667*(1 - pchisq(s12, 5)) + .38056*(1 - pchisq(s12, 4)) + .31250*(1 - pchisq(s12,
3)) + .11806*(1 - pchisq(s12, 2)) + .02083*(1 - pchisq(s12, 1)) + .00139*(1 - pchisq(s12,
0)); p2
[1] 0.004447629
57
Download