Statistical Compendium
Edited by Dr. ir. E.E.M. van Berkum and Dr. A. Di Bucchianico
α                 0.10    0.05    0.025   0.01
zα (one-sided)    1.282   1.645   1.960   2.326
zα/2 (two-sided)  1.645   1.960   2.241   2.576
© 2016
Contents

Preface  iv

1 Probability  1
   1.1 Probability and events  1
   1.2 Discrete random variables  2
   1.3 Continuous random variables  2
   1.4 Rules for expectations and (co)-variances  4

2 Discrete Distributions  5

3 Continuous Distributions  9

4 Estimation and Statistical Testing  17
   4.1 Estimation  17
   4.2 Statistical testing  19
   4.3 Formulas for minimum required sample size  22
       4.3.1 Estimation  22
       4.3.2 Testing  22

5 Linear Regression  23
   5.1 Simple linear regression  23
       5.1.1 Confidence intervals and tests  24
       5.1.2 Correlation  25
   5.2 Multiple linear regression  26
       5.2.1 Confidence intervals and tests  27
       5.2.2 Model diagnostics  28

6 Analysis of Variance  30
   6.1 One-way analysis of variance  30
   6.2 One-way analysis of variance with blocks  31
   6.3 Two-way crossed analysis of variance  32

7 Contingency Tables  33

8 Design of Experiments with Factors at Two Levels  34

9 Error Propagation  36

10 Tables  37
   10.1 Standard normal distribution  38
   10.2 Student t-distribution (t_{ν;α})  40
   10.3 χ²-distribution (χ²_{ν;α})  41
   10.4 F-distribution f^m_{n;α} with α = 0.10 (and α = 0.90)  42
   10.5 F-distribution f^m_{n;α} with α = 0.05 (and α = 0.95)  44
   10.6 F-distribution f^m_{n;α} with α = 0.025 (and α = 0.975)  46
   10.7 F-distribution f^m_{n;α} with α = 0.01 (and α = 0.99)  48
   10.8 F-distribution f^m_{n;α} with α = 0.005 (and α = 0.995)  50
   10.9 Studentized range q_{a,f}(α) with α = 0.10  52
   10.10 Studentized range q_{a,f}(α) with α = 0.05  53
   10.11 Studentized range q_{a,f}(α) with α = 0.01  54
   10.12 Cumulative binomial probabilities (1 ≤ n ≤ 7)  55
   10.13 Cumulative binomial probabilities (8 ≤ n ≤ 11)  56
   10.14 Cumulative binomial probabilities (n = 12, 13, 14)  57
   10.15 Cumulative binomial probabilities (n = 15, 20)  58
   10.16 Cumulative Poisson probabilities (0.1 ≤ λ ≤ 5.0)  59
   10.17 Cumulative Poisson probabilities (5.5 ≤ λ ≤ 9.5)  60
   10.18 Cumulative Poisson probabilities (10.0 ≤ λ ≤ 15.0)  61
   10.19 Wilcoxon rank sum test  62
   10.20 Wilcoxon signed rank test  64
   10.21 Kendall rank correlation test  65
   10.22 Spearman rank correlation test  66
   10.23 Kruskal-Wallis test  67
   10.24 Friedman test  68
   10.25 Orthogonal polynomials  69

11 Dictionary English-Dutch  70

Bibliography  74

Index  75
Preface
This booklet is the fourth English version of a collection of statistical tables developed by the
statistics group of Eindhoven University of Technology. The first version of this collection
of tables was developed by Mrs. Bosch and Kamps in the 1960s. From 1993 to 1995, their
collection was drastically revised in view of the changes in computing power: obsolete tables
were removed and new tables were added. All tables were recomputed using the computer
algebra software Mathematica, except for the tables of the studentized range distribution for
which we used a Turbo Pascal procedure from [2]. The tables of nonparametric statistics
contain exact values computed in Mathematica using algorithms developed at the Eindhoven
University of Technology (see [11] and [12]). Moreover, explanations of basic issues in statistics
and probability theory were added in order to make the material more accessible for students.
This was a major effort by several colleagues. The number of contributing colleagues has
grown to such an extent that we refrain from mentioning them here by name.
Because of the introduction of the Bachelor-Master system in the early 2000s, the need
arose for an English version. We would like to thank our former colleague Maarten Jansen
for providing us with a translation into English of the Dutch version. The current version is
a slightly improved version of the English version of July 2014. Some minor mistakes in
the uniform distribution have been corrected, and the explanation of the tables of probability
distributions has been improved slightly. Remarks and suggestions are welcome and can be
sent by email to e.e.m.v.berkum@tue.nl or a.d.bucchianico@tue.nl.
E.E.M. van Berkum and A. Di Bucchianico
Eindhoven, April 2016
Only clean copies, i.e., copies without written notes, of the Statistical
Compendium are allowed during official examinations!
Chapter 1
Probability
1.1 Probability and events
Complement rule (A′ = Aᶜ = complement of A)

P(A′) = 1 − P(A).

De Morgan's laws

(A ∪ B)′ = A′ ∩ B′,   (A ∩ B)′ = A′ ∪ B′.

Difference rule

P(A\B) = P(A) − P(A ∩ B).

Boole's law

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Conditional probability (P(B) ≠ 0)

P(A | B) = P(A ∩ B) / P(B).

Mutually exclusive (disjoint) events

A ∩ B = ∅.

Total probability (stratified sampling)

P(A) = P(A | B) P(B) + P(A | B′) P(B′).
P(A) = Σ_{i=1}^{n} P(B_i) P(A | B_i), if {B_i}_{1≤i≤n} is a partition.

Bayes's rule

P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | A′) P(A′)],
P(A_j | B) = P(B | A_j) P(A_j) / Σ_{i=1}^{n} P(B | A_i) P(A_i), where {A_i}_{1≤i≤n} is a partition.

Chain rule

P(A ∩ B ∩ C ∩ D) = P(A) P(B | A) P(C | A ∩ B) P(D | A ∩ B ∩ C).

Independent events

P(A ∩ B) = P(A) P(B)   (⇔ P(B | A) = P(B) if P(A) ≠ 0).
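The two-event form of Bayes's rule can be checked numerically. A minimal sketch with made-up screening numbers (the prevalence, sensitivity, and false-positive rate below are illustrative assumptions, not values from this compendium):

```python
# Bayes's rule: P(A | B) = P(B | A) P(A) / (P(B | A) P(A) + P(B | A') P(A'))
# Assumed example: A = "has condition", B = "test positive".
p_A = 0.01           # prior P(A)
p_B_given_A = 0.95   # sensitivity P(B | A)
p_B_given_Ac = 0.05  # false-positive rate P(B | A')

# Denominator is the total-probability rule for P(B).
p_B = p_B_given_A * p_A + p_B_given_Ac * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B

print(round(p_A_given_B, 4))  # posterior is far below the sensitivity 0.95
```

The point of the numbers: with a rare condition, most positives come from the large B | A′ term in the denominator, so the posterior stays small.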
1.2 Discrete random variables

Discrete random variables – scalars

The random variable X takes values x_i with probability p_i for i = 0, 1, . . . , m (where m may be infinity).

F_X(x) = P(X ≤ x) = Σ_{i : x_i ≤ x} p_i,   Σ_{i=0}^{m} p_i = 1.

E(X) = μ_X = Σ_{i=0}^{m} x_i p_i,   E g(X) = Σ_{i=0}^{m} g(x_i) p_i.

If X takes values on 0, 1, 2, . . ., then E(X) = Σ_{i=0}^{∞} P(X > i) = Σ_{i=1}^{∞} P(X ≥ i).

V(X) = σ²_X = E(X − μ_X)² = Σ_{i=0}^{m} (x_i − μ_X)² p_i = Σ_{i=0}^{m} x_i² p_i − μ²_X = E(X²) − μ²_X.
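The tail-sum identity E(X) = Σ_{i≥0} P(X > i) is easy to verify for a small example. A sketch with a fair six-sided die (a hypothetical choice; any nonnegative integer-valued variable works):

```python
# Check E(X) = sum_{i>=0} P(X > i) for a fair six-sided die with values 1..6.
values = list(range(1, 7))
p = {k: 1 / 6 for k in values}

mean_direct = sum(k * p[k] for k in values)          # sum of x_i p_i
mean_tail = sum(sum(p[k] for k in values if k > i)   # sum of P(X > i)
                for i in range(6))

print(mean_direct, mean_tail)  # both equal 3.5
```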
Pairs of discrete random variables – discrete random vectors

The pair (X, Y) takes values (x_i, y_j) with probability p_{ij} for i = 0, 1, . . . , m and j = 0, 1, . . . , n (with m and/or n possibly infinity).

Σ_{i=0}^{m} Σ_{j=0}^{n} p_{ij} = 1.

p_{i.} = P(X = x_i) = Σ_{k=0}^{n} p_{ik},   p_{.j} = P(Y = y_j) = Σ_{k=0}^{m} p_{kj}.

P(X = x_i | Y = y_j) = p_{ij} / p_{.j},   E g(X, Y) = Σ_{i=0}^{m} Σ_{j=0}^{n} g(x_i, y_j) p_{ij}.

Mutual independence (of random variables)

X and Y independent ⇐⇒ p_{ij} = p_{i.} p_{.j} for all i and j.

If X and Y are independent, nonnegative and integer valued, and if Z = X + Y, then we have (convolution)

P(Z = k) = Σ_{i=0}^{k} P(X = i) P(Y = k − i).
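The discrete convolution formula can be applied directly. A sketch for the sum of two independent fair dice (an illustrative example, not from the compendium):

```python
# Convolution P(Z = k) = sum_i P(X = i) P(Y = k - i) for Z = X + Y,
# with X and Y two independent fair dice.
px = {k: 1 / 6 for k in range(1, 7)}
py = dict(px)

pz = {}
for k in range(2, 13):
    pz[k] = sum(px.get(i, 0) * py.get(k - i, 0) for i in range(k + 1))

print(round(pz[7], 4))  # 7 is the most likely sum: 6/36
```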
1.3 Continuous random variables

Continuous random variables – scalars

F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(u) du,   f_X(x) = d/dx F_X(x),   ∫_{−∞}^{∞} f_X(x) dx = 1.

E(X) = μ_X = ∫_{−∞}^{∞} x f_X(x) dx,   E g(X) = ∫_{−∞}^{∞} g(x) f_X(x) dx.
If X is nonnegative, then E(X) = ∫_{0}^{∞} (1 − F(x)) dx.

V(X) = σ²_X = E(X − μ_X)² = ∫_{−∞}^{∞} (x − μ_X)² f_X(x) dx = ∫_{−∞}^{∞} x² f_X(x) dx − μ²_X.

If Y = g(X) with g a strictly increasing function, then

F_Y(y) = P(Y ≤ y) = P(g(X) ≤ y) = P(X ≤ g⁻¹(y)) = F_X(g⁻¹(y)).

In general: if U = g(X) and h is the inverse of g, then

f_U(u) = f_X(h(u)) |h′(u)|.
Pairs of continuous random variables

F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_{X,Y}(u, v) dv du.

f_{X,Y}(x, y) = ∂²/∂x∂y F_{X,Y}(x, y),   ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1.

F_X(x) = lim_{y→∞} F_{X,Y}(x, y),   f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy.

f_X(x | Y = y) = f_{X,Y}(x, y) / f_Y(y),   E g(X, Y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f_{X,Y}(x, y) dx dy.

If the pair (U, V) constitutes an invertible function of the pair (X, Y), then

f_{U,V}(u, v) = f_{X,Y}(x(u, v), y(u, v)) |∂x/∂u ∂y/∂v − ∂x/∂v ∂y/∂u|.
Mutual independence (of random variables)

X and Y (mutually) independent ⇐⇒ F_{X,Y}(x, y) = F_X(x) F_Y(y) for all x and y
⇐⇒ f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x and y.

If X and Y are mutually independent, and if Z = X + Y, then (convolution)

f_Z(z) = ∫_{−∞}^{∞} f_Y(z − x) f_X(x) dx.

If both X and Y are also nonnegative, then the convolution reduces to

f_Z(z) = ∫_{0}^{z} f_Y(z − x) f_X(x) dx.
Vectors of more than two continuous random variables

F_{X_1,...,X_n}(x_1, . . . , x_n) = P(X_1 ≤ x_1, . . . , X_n ≤ x_n).

F_{X_1,...,X_n}(x_1, . . . , x_n) = ∫_{−∞}^{x_1} · · · ∫_{−∞}^{x_n} f_{X_1,...,X_n}(u_1, . . . , u_n) du_1 . . . du_n.

f_{X_1,...,X_n}(x_1, . . . , x_n) = ∂/∂x_1 · · · ∂/∂x_n F_{X_1,...,X_n}(x_1, . . . , x_n).

If X = (X_1, . . . , X_n)′ is a vector of n random variables, then the n × n covariance matrix Cov(X) is defined as

Cov(X)_{ij} = Cov(X_i, X_j) = E((X_i − E(X_i))(X_j − E(X_j))) = E(X_i X_j) − E(X_i) E(X_j).
1.4 Rules for expectations and (co)-variances

Expectation

μ_X = E(X).
E(X + Y) = E(X) + E(Y).
E(aX) = a E(X) and E(aX + bY) = a E(X) + b E(Y).
X and Y independent =⇒ E(XY) = E(X) E(Y) (not ⇐=).

Variance

V(X) = σ²_X = E(X − μ_X)² = E(X²) − μ²_X.

Covariance

Cov(X, Y) = E((X − μ_X)(Y − μ_Y)) = E(XY) − μ_X μ_Y.

Correlation coefficient

ρ_XY = Cov(X, Y) / √(V(X) V(Y)).

Cov(X_1 + X_2, Y) = Cov(X_1, Y) + Cov(X_2, Y) and Cov(aX, Y) = a Cov(X, Y).
Cov(X, X) = V(X).
V(X + Y) = V(X) + V(Y) + 2 Cov(X, Y).
V(aX) = a² V(X) and V(aX + bY) = a² V(X) + b² V(Y) + 2ab Cov(X, Y).
Corollary: if X and Y are independent, then V(X + Y) = V(X) + V(Y).

Conditional expectation

E(X) = E(E(X | Y)).
V(X) = E(V(X | Y)) + V(E(X | Y)).

Random sum (Wald's formula). If the positive, integer-valued random variable N is independent of X_i for all i, and if all X_i are mutually independent with the same mean E(X_i) = μ, then

E(Σ_{i=1}^{N} X_i) = μ E(N).
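Wald's formula can be illustrated by simulation. A sketch (the geometric N and uniform X_i below are assumptions chosen for the example; they are not prescribed by the compendium):

```python
import random

# Monte Carlo check of Wald's formula E(sum_{i=1}^N X_i) = mu * E(N).
# Assumed setup: N ~ geometric(p) with E(N) = 1/p, X_i ~ uniform on [0, 2]
# with mean mu = 1, independent of N.
random.seed(1)
p, mu = 0.25, 1.0
trials = 20000

total = 0.0
n_sum = 0
for _ in range(trials):
    n = 1
    while random.random() > p:   # draw N ~ geometric(p)
        n += 1
    n_sum += n
    total += sum(random.uniform(0, 2) for _ in range(n))

# Sample mean of the random sum vs. mu times the sample mean of N;
# both should be near E(N) = 1/p = 4.
print(round(total / trials, 2), round(mu * n_sum / trials, 2))
```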
Chapter 2
Discrete Distributions
This chapter contains an overview of common discrete distributions, in alphabetical order.
For more information on these distributions, we refer to [5]. Some generating functions can
be expressed in terms of hypergeometric functions (see also [5]). The symbol X always refers
to a random variable with the distribution being discussed.
Bernoulli distribution
A special case of the binomial distribution, namely n = 1. Often q stands for 1 − p.
• Parameter: 0 ≤ p ≤ 1
• Values: 0, 1
• Probability mass function: P (X = 1) = p, P (X = 0) = 1 − p
• Expected value: p
• Variance: p (1 − p)
• Probability generating function: p t + (1 − p)
• Moment generating function: p et + (1 − p).
Binomial distribution
The binomial distribution describes the number of successes among n independent trials with
equal success probability p. Often q denotes 1 − p. The binomial distribution is a special
case of the multinomial distribution, with m = 2. The binomial distribution converges (in
distribution) for n → ∞ and np = λ fixed to a Poisson distribution with parameter λ. For
p ≤ 0.10, the binomial distribution can be approximated by a Poisson distribution. For np > 5
and n(1 − p) > 5, the binomial distribution can be approximated by a normal distribution.
• Parameters: n = 1, 2, . . ., 0 ≤ p ≤ 1
• Values: 0, 1, . . . , n
• Probability mass function: P(X = k) = (n choose k) p^k (1 − p)^{n−k} for k = 0, 1, . . . , n and 0 otherwise
• Expected value: np
• Variance: np(1 − p)
• Probability generating function: (p t + 1 − p)^n
• Moment generating function: (p e^t + 1 − p)^n
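The pmf above, and the Poisson approximation mentioned in the introduction, can be checked with a short computation (n = 100 and p = 0.03 are illustrative values):

```python
from math import comb, exp, factorial

# Binomial pmf (n choose k) p^k (1-p)^(n-k) vs. its Poisson approximation
# with lambda = n p, for small p and moderate n.
n, p = 100, 0.03
lam = n * p

for k in range(4):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = exp(-lam) * lam**k / factorial(k)
    print(k, round(binom, 4), round(poisson, 4))
```

For p this small the two columns agree to about two decimal places, as the approximation rule in the text suggests.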
Geometric distribution
This is a special case of the negative binomial distribution, with r = 1. The geometric
distribution measures the number of independent trials, each with success probability p, until
the first success (successful trial included in the total number). Note that some authors
only consider the number of failures, i.e., they consider X − 1 instead of X. The geometric
distribution has no memory, i.e., P (X > n + m | X > n) = P (X > m). It is therefore the
discrete counterpart of the exponential distribution.
• Parameter: 0 ≤ p ≤ 1
• Values: 1, 2, . . .
• Probability mass function: P (X = k) = p (1 − p)k−1 for k = 1, 2, . . . and 0 otherwise
• Expected value: 1/p
• Variance: (1 − p)/p²
• Probability generating function: p t / (1 − (1 − p) t)
• Moment generating function: p e^t / (1 − (1 − p) e^t)
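The memorylessness property P(X > n + m | X > n) = P(X > m) follows from the geometric tail P(X > k) = (1 − p)^k, and can be verified directly (p, n, m below are illustrative):

```python
# Memorylessness of the geometric distribution, using P(X > k) = (1 - p)^k.
p = 0.3
n, m = 4, 3

tail = lambda k: (1 - p) ** k        # P(X > k)
lhs = tail(n + m) / tail(n)          # P(X > n + m | X > n)
rhs = tail(m)                        # P(X > m)

print(abs(lhs - rhs) < 1e-12)  # True: the first n failures are forgotten
```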
Hypergeometric distribution
The hypergeometric distribution counts the number of successes when n elements are selected without replacement from a group of N elements, of which M mean "success" and N − M mean "failure".
• Parameters: N = 1, 2, . . ., n = 0, 1, 2, . . . , N , M = 0, 1, 2, . . . , N .
• Values: max(0, n − (N − M )), . . . , min(n, M )
• Probability mass function: P(X = k) = (M choose k)(N − M choose n − k) / (N choose n) for k = max(0, n − (N − M)), . . . , min(n, M) and 0 otherwise
• Expected value: nM/N
• Variance: n M (N − M)(N − n) / (N²(N − 1))
• Probability generating function: ₂F₁(−n, −M; −N; 1 − t), where ₂F₁ is a hypergeometric function.
• Moment generating function: ₂F₁(−n, −M; −N; 1 − e^t), where ₂F₁ is a hypergeometric function.
Multinomial distribution
The multinomial distribution generalizes the binomial distribution. The multinomial distribution describes a sequence of n mutually independent experiments with a fixed finite number m (m ≥ 2) of possible outcomes. Let X_i denote the number of occurrences of the ith possible result (i = 1, . . . , m) and p_i the probability that the ith possible result occurs in one experiment.
• Parameters: n = 1, 2, . . ., m = 1, 2, . . ., 0 ≤ pi ≤ 1 with p1 + . . . + pm = 1
• Values: {(k_1, . . . , k_m) | k_i ∈ {0, 1, . . . , n} (i = 1, . . . , m) and Σ_{i=1}^{m} k_i = n}
• Probability mass function: P((X_1, . . . , X_m) = (k_1, . . . , k_m)) = n!/(k_1! · · · k_m!) p_1^{k_1} · · · p_m^{k_m} on the set of values above and 0 otherwise
• Vector of expected values: (np1 , . . . , npm )
• Covariance matrix: Cov(X_i, X_j) = −n p_i p_j (i ≠ j), Var(X_i) = n p_i (1 − p_i)
• Probability generating function: (Σ_{i=1}^{m} p_i t_i)^n
• Moment generating function: (Σ_{i=1}^{m} p_i e^{t_i})^n
Negative binomial distribution
This distribution counts the total number of independent Bernoulli experiments with equal success probability p that is necessary to arrive at r successful experiments (the total number including the rth success). If U_i are mutually independent and all geometrically distributed with parameter p, then X = Σ_{i=1}^{r} U_i has the negative binomial distribution with parameters p and r. Note that some authors only consider the number of failures, i.e., they consider X − r instead of X.
• Parameters: 0 ≤ p ≤ 1, r = 1, 2, . . .
• Values: r, r + 1, . . .
• Probability mass function: P(X = k) = (k − 1 choose r − 1) p^r (1 − p)^{k−r} for k = r, r + 1, . . . and 0 otherwise
• Expected value: r/p
• Variance: r(1 − p)/p²
• Probability generating function: (p t / (1 − (1 − p) t))^r
• Moment generating function: (p e^t / (1 − (1 − p) e^t))^r
Poisson distribution
This important distribution is often used to describe counts of events that occur
within a fixed time or space unit. For λ > 15, the Poisson probabilities are well approximated
using the normal distribution. The binomial distribution with n → ∞ and np = λ fixed
converges (in distribution) to a Poisson distribution with parameter λ.
• Parameter: λ > 0
• Values: 0, 1, . . .
• Probability mass function: P(X = k) = e^{−λ} λ^k / k! for k = 0, 1, . . . and 0 otherwise
• Expected value: λ
• Variance: λ
• Probability generating function: e^{λ(t − 1)}
• Moment generating function: e^{λ(e^t − 1)}
Uniform distribution (discrete)
The discrete uniform distribution should not be confused with the continuous uniform distribution. The uniform distributions are sometimes also called homogeneous distributions.
• Values: a, a + 1, . . . , b with a ≤ b
• Probability mass function: P(X = k) = 1/(b − a + 1) for k = a, a + 1, . . . , b and 0 otherwise
• Expected value: (a + b)/2
• Variance: ((b − a + 1)² − 1)/12
• Probability generating function: (t^a − t^{b+1}) / ((b − a + 1)(1 − t))
• Moment generating function: (e^{at} − e^{(b+1)t}) / ((b − a + 1)(1 − e^t))
Chapter 3
Continuous Distributions
This chapter contains an overview of common continuous distributions, in alphabetical order.
For more information on the distributions discussed in this chapter, we refer to [6] and [7].
Some expressions involve the gamma function. This function is defined for positive x as

Γ(x) = ∫_{0}^{∞} e^{−t} t^{x−1} dt.
Useful properties of the gamma function are
• Γ(n + 1) = n! for non-negative integer n
• Γ(x + 1) = x Γ(x)
• Γ(1/2) = √π
The second property also defines the gamma function for negative, non-integer x.
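The three properties can be checked numerically with the standard-library gamma function (Python is used here purely for illustration; the compendium itself contains no code):

```python
from math import gamma, sqrt, pi, factorial

# Numerical checks of the gamma-function properties listed above.
print(abs(gamma(5) - factorial(4)) < 1e-9)        # Gamma(n + 1) = n!
print(abs(gamma(4.5) - 3.5 * gamma(3.5)) < 1e-9)  # Gamma(x + 1) = x Gamma(x)
print(abs(gamma(0.5) - sqrt(pi)) < 1e-9)          # Gamma(1/2) = sqrt(pi)
```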
Beta distribution
This distribution appears when studying the order statistics of a sample from a uniform
random variable. If X is beta distributed with integer parameters α and β, then P (X ≤ t) =
P (α ≤ Y ≤ α + β − 1), where Y is binomial with parameters n = α + β − 1 and p = t.
• Parameters: α > 0, β > 0
• Values: (0, 1)
• Density:
xα−1 (1 − x)β−1
for 0 < x < 1, where B(α, β) is the beta function defined by
B(α, β)
Z 1
Γ(α) Γ(β)
B(α, β) :=
=
y α−1 (1 − y)β−1 dy
Γ(α + β)
0
• Expected value:
• Variance:
α
α+β
αβ
(α + β + 1) (α + β)2
• Characteristic function: M (α, α+β, it), where M is a confluent hypergeometric function.
Cauchy distribution
The ratio of two independent normally distributed random variables with zero mean is Cauchy
distributed. The Cauchy distribution with λ = 1 and θ = 0 coincides with the Student t-distribution with one degree of freedom.
• Parameters: λ > 0, −∞ < θ < ∞
• Values: (−∞, ∞)
• Density:
1
#
x−θ 2
πλ 1 +
λ
"
• Expected value: does not exist
• Variance: does not exist
• Characteristic function: eitθ − |t|λ
χ²-distribution

The χ²-distribution is characterized by one parameter, denoted here by ν and known as the "degrees of freedom". Notation: χ²_ν. The name of the χ²-distribution is derived from its relation to the standard normal distribution: if Z is a standard normal random variable, then its square X = Z² is χ² distributed with one degree of freedom. If X_i are χ² distributed and mutually independent, then the sum X = Σ_i X_i is χ² distributed, and the parameter (degrees of freedom) is the sum of the parameters of the individual X_i. The χ²-distribution is also a special case of the gamma distribution, with α = ν/2 and λ = 1/2. The χ²-distribution is of great importance in the Analysis of Variance (ANOVA), contingency table tests, and goodness-of-fit tests.
• Parameters: ν = 1, 2, . . .
• Values: (0, ∞)
• Density:
e−x/2 x(ν−2)/2
for x > 0 and 0 otherwise
2ν/2 Γ(ν/2)
• Expected value: ν
• Variance: 2ν
• Characteristic function: (1 − 2it)^{−ν/2}
Erlang distribution
This is a special case of the gamma distribution for positive integer values of α. It measures
the time until the nth event in a Poisson process. If X1 is Erlang distributed with parameters
n and λ and if X2 is Erlang distributed with parameters m and λ, and if X1 and X2 are
independent, then X1 + X2 is Erlang distributed with parameters n + m and λ. For n = 1,
the Erlang distribution is the exponential distribution. If X_i are mutually independent and exponentially distributed with intensity λ, then Σ_{i=1}^{n} X_i is Erlang distributed with parameters n and λ. Sometimes β = 1/λ is used as parameter.
• Parameters: n = 1, 2, . . ., λ > 0
• Values: (0, ∞)
• Density:
xn−1 λn e− λ x
for x > 0 and 0 otherwise
(n − 1)!
• Expected value:
• Variance:
n
λ
n
λ2
• Characteristic function:
t
1−i
λ
−n
Exponential distribution
This is a special case of both the gamma and the Weibull distributions. The exponential
distribution has the lack-of-memory property, in the sense that P (X > s + t | X > s) =
P (X > t). This property defines the exponential distribution, i.e., no other continuous random
variable has this property. The times between events in a Poisson process are exponentially
distributed.
If Xi are mutually independent and exponentially distributed with intensity λ,
then Σ_{i=1}^{n} X_i is Erlang distributed with parameters n and λ. Note that it is also common to use β = 1/λ as parameter.
• Parameters: λ > 0; sometimes β = 1/λ is used
• Values: (0, ∞)
• Density: λe−λx for x > 0 and 0 otherwise
• Cumulative distribution function: 1 − e−λx for x > 0 and 0 otherwise
• Expected value: 1/λ
• Variance: 1/λ2
• Characteristic function: 1 / (1 − it/λ)
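The tail formula from Section 1.3, E(X) = ∫₀^∞ (1 − F(x)) dx, can be checked for the exponential distribution, whose mean is 1/λ. A sketch using a crude Riemann sum (λ = 0.5 and the integration grid are illustrative choices):

```python
from math import exp

# For X exponential with intensity lam, 1 - F(x) = exp(-lam x), so the
# tail integral of the survival function should return E(X) = 1/lam = 2.
lam = 0.5
dx = 1e-4

# Left Riemann sum over [0, 60]; the truncated tail beyond 60 is negligible.
tail_integral = sum(exp(-lam * (i * dx)) * dx for i in range(600000))

print(round(tail_integral, 3))  # close to 2.0
```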
F -distribution
The F -distribution, named after the famous statistician Fisher, is the distribution of a ratio of
two independent χ2 random variables. It has two parameters, denoted by m and n, which are
called the degrees of freedom of the numerator and the denominator, respectively. Notation:
Fnm . If X is Student t-distributed with n degrees of freedom, then X 2 is an Fn1 variable. If U
12
CHAPTER 3. CONTINUOUS DISTRIBUTIONS
is χ2 distributed with m degrees of freedom, V is χ2 distributed with n degrees of freedom,
m
m
and if U and V are independent, then X = U/m
V /n is an Fn variable. The values fn;α are defined
m
by P Fnm > fn;α
= α (so they do not follow the customary definition of quantiles). From
m
n .
the definition of Fnm as a ratio of two χ2 variables, it follows that fn;1−α
= 1/fm;α
• Parameters: m = 1, 2, . . ., n = 1, 2, . . .
• Values: (0, ∞)
• Density: Γ((m + n)/2) m^{m/2} n^{n/2} x^{(m/2)−1} / (Γ(m/2) Γ(n/2) (n + mx)^{(m+n)/2}) for x > 0 and 0 otherwise
• Expected value: n/(n − 2) if n ≥ 3; not defined for n = 1 or n = 2.
• Variance: 2n²(m + n − 2) / (m (n − 2)²(n − 4)) (n = 5, 6, . . .)
• Characteristic function: M(m/2; −n/2; −(n/m) it), where M is a confluent hypergeometric function.
Gamma distribution
Special cases of the gamma distribution include the χ²-distribution (α = ν/2 and λ = 1/2), the Erlang distribution (α positive integer) and the exponential distribution (α = 1). Sometimes β = 1/λ is used as parameter.
• Parameters: α > 0, λ > 0
• Values: (0, ∞)
• Density: λ^α x^{α−1} e^{−λx} / Γ(α) for x > 0 and 0 otherwise
• Expected value: α/λ
• Variance: α/λ²
• Characteristic function: (1 − it/λ)^{−α}
Gumbel distribution
The Gumbel distribution is one of the limiting distributions in extreme value theory.
• Parameters: −∞ < α < ∞, β > 0
• Values: (−∞, ∞)
• Cumulative distribution function: e^{−e^{−(x−α)/β}}
• Expected value: α + βγ, where γ ≈ 0.577216 (Euler's constant)
• Variance: π²β²/6
• Characteristic function: eiαt Γ(1 − iβt)
Logistic distribution
This distribution is often used in the description of growth curves.
• Parameters: −∞ < α < ∞, β > 0
• Values: (−∞, ∞)
• Cumulative distribution function: (1 + e^{−(x−α)/β})^{−1}
• Expected value: α
• Variance: π²β²/3
• Characteristic function: e^{iαt} πβt / sinh(πβt)
Lognormal distribution
X has a lognormal distribution if ln X ∼ N (µ, σ 2 ).
• Parameters: −∞ < µ < ∞, σ > 0
• Values: (0, ∞)
• Density:
1
√
σx 2π
−
e
(ln x − µ)2
2σ 2
for x > 0 and 0 otherwise
1
µ + σ2
2
• Expected value: e
2
2
• Variance: e2µ + 2σ − e2µ + σ
• Characteristic function: No closed expression known
Normal distribution
As suggested by its name, the normal distribution is the most important probability distribution in view of the Central Limit Theorem. Notation: X ∼ N (µ, σ 2 ). The special case µ = 0
and σ = 1 is called standard normal distribution, and a standard normal variable is most often
denoted with the letter Z. The standard normal density is mostly written as ϕ(z) and the
cumulative distribution function as Φ(z). It holds that Φ(z) = 1 − Φ(−z). The value z_α is defined by P(Z > z_α) = α (so it does not follow the customary definition of quantiles).
• Parameters: −∞ < µ < ∞, σ > 0
• Values: (−∞, ∞)
• Density: (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}
• Expected value: μ
• Variance: σ²
• Characteristic function: e^{iμt − t²σ²/2}
Pareto distribution
The Pareto distribution is often used in economic applications, such as the study of household
incomes.
• Parameters: a > 0, θ > 0
• Values: (a, ∞)
• Cumulative distribution function: 1 − (a/x)^θ for x > a and 0 otherwise
• Expected value: θa/(θ − 1) (if θ > 1)
• Variance: θa² / ((θ − 1)²(θ − 2)) (if θ > 2)
• Characteristic function: No closed expression known
Student t-distribution
If Z is a standard normal variable and U is a χ² variable with n degrees of freedom, and if Z and U are independent, then Z/√(U/n) has a Student t-distribution with parameter n. Notation: T_n. The parameter is called the number of degrees of freedom. The standardized sample mean (X̄ − μ)/(S/√n) of a sample of normal random variables is Student t distributed with parameter n − 1. The values t_{n;α} are defined by P(T_n > t_{n;α}) = α (so they do not follow the customary definition of quantiles).
The Student t-distribution is named after the statistician William Gosset. His employer, the
Guinness breweries, prohibited any scientific publication by its employees. Hence, Gosset
published using a pen name, Student.
• Parameters: n = 1, 2, . . .
• Values: (−∞, ∞)
• Density: Γ((n + 1)/2) / (√(nπ) Γ(n/2)) · (1 + x²/n)^{−(n+1)/2}
• Expected value: 0 if n ≥ 2, not defined for n = 1.
• Variance: n/(n − 2) (n ≥ 3)
• Characteristic function: (1/B(1/2, n/2)) ∫_{−∞}^{∞} e^{itz√n} (1 + z²)^{−(n+1)/2} dz, where B(a, b) is the beta function defined by B(a, b) = Γ(a) Γ(b)/Γ(a + b) = ∫_{0}^{1} y^{a−1}(1 − y)^{b−1} dy.
Uniform distribution (continuous)
Also known as homogeneous distribution. This distribution should not be confused with the discrete uniform distribution.
• Parameters: −∞ < a < b < ∞
• Values: (a, b)
• Density: 1/(b − a) for a < x < b and 0 otherwise
• Cumulative distribution function: 0 for x ≤ a; (x − a)/(b − a) for a < x < b; 1 for x ≥ b
• Expected value: (a + b)/2
• Variance: (b − a)²/12
• Characteristic function: (e^{itb} − e^{ita}) / (it(b − a))
Weibull distribution
The Weibull distribution often models survival times when the lack of memory property does
not hold. The exponential distribution is a special case (β = 1 and λ = 1/δ).
• Parameters: β > 0, δ > 0
• Values: (0, ∞)
• Density: (β/δ)(x/δ)^{β−1} e^{−(x/δ)^β} for x > 0 and 0 otherwise
• Cumulative distribution function: 1 − e^{−(x/δ)^β} for x > 0 and 0 otherwise
• Expected value: δ Γ(1 + 1/β)
• Variance: δ² [Γ(1 + 2/β) − Γ²(1 + 1/β)]
• Characteristic function: no closed expression known.
Chapter 4
Estimation and Statistical Testing
4.1 Estimation
A statistic is a function of sample observations. Any statistic T can be used as point estimator
for a parameter θ. The value that appears in an estimation procedure is called the estimate.
Standard estimators

For a sample X_1, . . . , X_n, we may use the following estimators:

Sample mean: X̄ = (1/n) Σ_{i=1}^{n} X_i

Sample variance: S² = (1/(n − 1)) Σ_{i=1}^{n} (X_i − X̄)² = (1/(n − 1)) (Σ_{i=1}^{n} X_i² − n X̄²)
Definitions

1. T is an unbiased estimator for θ if E(T) = θ. The bias of T is defined as E(T) − θ.

2. T_n is a consistent estimator for θ if lim_{n→∞} P[|T_n − θ| > ε] = 0 for all ε > 0, where n denotes the sample size.

3. T is a Minimum Variance Unbiased (MVU) estimator for θ if, among all unbiased estimators for θ, the estimator T has the smallest variance.

4. T is a sufficient estimator for θ if for any other estimator T′ it holds that the conditional density f(T′ | T = t) is independent of θ. (Loosely speaking: given the value t of T, no information about θ is lost by summarising the sample values through T′.)

5. The Mean Squared Error (MSE) of T is MSE(T) = E(T − θ)² = Var(T) + (E(T) − θ)².

6. A 100(1 − α)% confidence interval is the realization of a random interval which with probability 1 − α contains the true value of the parameter. The next table contains two-sided confidence intervals for some common situations. A one-sided confidence interval can be constructed in a similar way. E.g., in the case of a normal distribution with known σ, a right-sided 100(1 − α)% confidence interval for μ equals x̄ − z_α σ/√n < μ < ∞.
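The two-sided interval for a normal mean with known σ, x̄ ± z_{α/2} σ/√n, can be computed in a few lines. A sketch with made-up data (the sample values and σ are assumptions for illustration):

```python
from math import sqrt

# Two-sided 95% confidence interval for a normal mean with known sigma:
# x_bar - z_{alpha/2} sigma / sqrt(n) < mu < x_bar + z_{alpha/2} sigma / sqrt(n)
x = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.8]   # made-up measurements
sigma = 0.2                                     # assumed known
z = 1.960                                       # z_{0.025}, i.e. alpha = 0.05

n = len(x)
x_bar = sum(x) / n
half_width = z * sigma / sqrt(n)

print(round(x_bar - half_width, 3), round(x_bar + half_width, 3))
```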
Table 4.1 Overview of estimation procedures

Normal mean μ (σ² known)
  Point estimate: x̄
  Two-sided 100(1 − α)% confidence interval:
  x̄ − z_{α/2} σ/√n < μ < x̄ + z_{α/2} σ/√n

Normal mean μ (σ² unknown)
  Point estimate: x̄
  Interval: x̄ − t_{n−1;α/2} s/√n < μ < x̄ + t_{n−1;α/2} s/√n,
  where s² = (1/(n − 1)) Σ_{i=1}^{n} (x_i − x̄)²

Normal means μ₁ − μ₂ (σ₁² and σ₂² known)
  Point estimate: x̄₁ − x̄₂
  Interval: x̄₁ − x̄₂ − z_{α/2} √(σ₁²/n₁ + σ₂²/n₂) < μ₁ − μ₂ < x̄₁ − x̄₂ + z_{α/2} √(σ₁²/n₁ + σ₂²/n₂)

Normal means μ₁ − μ₂ (σ₁² = σ₂² but unknown)
  Point estimate: x̄₁ − x̄₂
  Interval: x̄₁ − x̄₂ − t_{n₁+n₂−2;α/2} s_p √(1/n₁ + 1/n₂) < μ₁ − μ₂ < x̄₁ − x̄₂ + t_{n₁+n₂−2;α/2} s_p √(1/n₁ + 1/n₂),
  where s_p = √(((n₁ − 1) s₁² + (n₂ − 1) s₂²) / (n₁ + n₂ − 2))

Normal means μ₁ − μ₂ (σ₁² and σ₂² unknown)
  Point estimate: x̄₁ − x̄₂
  Interval: x̄₁ − x̄₂ − t_{ν;α/2} √(s₁²/n₁ + s₂²/n₂) < μ₁ − μ₂ < x̄₁ − x̄₂ + t_{ν;α/2} √(s₁²/n₁ + s₂²/n₂),
  where ν = (s₁²/n₁ + s₂²/n₂)² / ((s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1))

Normal means μ₁ − μ₂ (paired samples with μ_d = μ₁ − μ₂)
  Point estimate: d̄
  Interval: d̄ − t_{n−1;α/2} s_d/√n < μ_d < d̄ + t_{n−1;α/2} s_d/√n

Variance σ²
  Point estimate: s²
  Interval: (n − 1)s²/χ²_{n−1;α/2} < σ² < (n − 1)s²/χ²_{n−1;1−α/2}

Ratio of variances σ₁²/σ₂²
  Point estimate: s₁²/s₂²
  Interval: (s₁²/s₂²) f^{n₂−1}_{n₁−1;1−α/2} < σ₁²/σ₂² < (s₁²/s₂²) f^{n₂−1}_{n₁−1;α/2},
  where f^{n₂−1}_{n₁−1;1−α/2} = 1/f^{n₁−1}_{n₂−1;α/2}

Proportion p (binomial distribution)
  Point estimate: p̂
  Interval: p̂ − z_{α/2} √(p̂(1 − p̂)/n) < p < p̂ + z_{α/2} √(p̂(1 − p̂)/n)

Proportions p₁ − p₂ (binomial distributions)
  Point estimate: p̂₁ − p̂₂
  Interval: p̂₁ − p̂₂ − z_{α/2} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂) < p₁ − p₂ < p̂₁ − p̂₂ + z_{α/2} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂)

Correlation coefficient ρ
  Point estimate: r
  Interval: tanh(arctanh r − z_{α/2}/√(n − 3)) < ρ < tanh(arctanh r + z_{α/2}/√(n − 3));
  see also page 25
4.2 Statistical testing
H₀: hypothesis to be tested, i.e., null hypothesis
H₁: alternative hypothesis
C: critical region, i.e., if X ∈ C, then H₀ is rejected.
α: level of significance
   = P(type I error)
   = P(H₀ being rejected while H₀ valid)
   = P_{H₀}(X ∈ C)
β: P(type II error)
   = P(H₀ not being rejected while H₀ not valid)
   = P_{H₁}(X ∉ C)
            | H₀ not rejected                  | H₀ rejected
H₀ valid    | 1 − α (confidence)               | α (significance)
H₁ valid    | β (probability of type II error) | 1 − β (power)
Table 4.2
H0
µ = µ0
Overview of testing procedures
z0 =
x − µ0
√
σ/ n
(σ 2 known)
µ = µ0
(σ 2 unknown)
µ1 − µ2 = ∆0
(σ12 and σ22 known)
µ1 − µ2 = ∆0
(σ12 = σ22
but unknown)
µ1 − µ2 = ∆0
(σ12 and σ22 unknown)
H1
Critical region
H1 : µ 6= µ0
z0 ≥ zα/2 or z0 ≤ −zα/2
H1 : µ > µ0
z0 ≥ zα
H1 : µ < µ0
z0 ≤ −zα
H1 : µ 6= µ0
t0 ≥ tn−1;α/2 or t0 ≤ −tn−1;α/2
H1 : µ > µ0
t0 ≥ tn−1;α
H1 : µ < µ0
t0 ≤ −tn−1;α
H1 : µ1 − µ2 6= ∆0
z0 ≥ zα/2 or z0 ≤ −zα/2
H1 : µ1 − µ2 > ∆0
z0 ≥ zα
H1 : µ1 − µ2 < ∆0
z0 ≤ −zα
H1 : µ1 − µ2 6= ∆0
|t0 | ≥ tn1 +n2 −2;α/2
H1 : µ1 − µ2 > ∆0
t0 ≥ tn1 +n2 −2;α
H1 : µ1 − µ2 < ∆0
t0 ≤ −tn1 +n2 −2;α
H1 : µ1 − µ2 6= ∆0
t0 ≥ tν;α/2 or t0 ≤ −tν;α/2
H1 : µ1 − µ2 > ∆0
t0 ≥ tν;α
H1 : µ1 − µ2 < ∆0
t0 ≤ −tν;α
Test statistic
t0 =
x − µ0
√
s/ n
where s2
1 Pn
2
= n−1
i=1 (xi − x)
x1 − x2 − ∆0
z0 = s
σ12 σ22
+
n1 n2
t0 =
x1 − x2 − ∆0
r
1
1
sp
+
n1 n2
with sp as on page 18
x1 − x2 − ∆0
t0 = s
s21
s2
+ 2
n1 n2


2
2 2


s
s


1
+ 2


n1 n2


ν= 2

 (s1 /n1 )2 (s22 /n2 )2 
+
n1 − 1
n2 − 1
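As an illustrative sketch (code is not part of the compendium), the last entry of the table, the two-sample t statistic with unequal variances and its degrees of freedom ν, can be computed as follows; the function name `welch_t` is ours:

```python
import math

def welch_t(x1, x2, delta0=0.0):
    """t0 and the degrees of freedom nu from Table 4.2 for
    H0: mu1 - mu2 = delta0 with sigma1^2 and sigma2^2 unknown."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    s1sq = sum((x - m1) ** 2 for x in x1) / (n1 - 1)   # sample variances
    s2sq = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    se2 = s1sq / n1 + s2sq / n2
    t0 = (m1 - m2 - delta0) / math.sqrt(se2)
    nu = se2 ** 2 / ((s1sq / n1) ** 2 / (n1 - 1) + (s2sq / n2) ** 2 / (n2 - 1))
    return t0, nu
```

The resulting t0 is compared with t_{ν;α} (or t_{ν;α/2}) from the t table; ν is generally not an integer, so in practice one rounds it down.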
Table 4.2   Overview of testing procedures (continued)

H0: µd = 0 (paired observations)
    Test statistic: t0 = d̄/(s_d/√n)
        H1: µd ≠ 0    reject H0 if t0 ≥ t_{n−1;α/2} or t0 ≤ −t_{n−1;α/2}
        H1: µd > 0    reject H0 if t0 ≥ t_{n−1;α}
        H1: µd < 0    reject H0 if t0 ≤ −t_{n−1;α}

H0: σ² = σ0²
    Test statistic: χ0² = (n − 1) s²/σ0²
        H1: σ² ≠ σ0²    reject H0 if χ0² ≥ χ²_{n−1;α/2} or χ0² ≤ χ²_{n−1;1−α/2}
        H1: σ² > σ0²    reject H0 if χ0² ≥ χ²_{n−1;α}
        H1: σ² < σ0²    reject H0 if χ0² ≤ χ²_{n−1;1−α}

H0: σ1² = σ2²
    Test statistic: f0 = s1²/s2²
        H1: σ1² ≠ σ2²    reject H0 if f0 ≥ f^{n1−1}_{n2−1;α/2} or
                         f0 ≤ f^{n1−1}_{n2−1;1−α/2} = 1/f^{n2−1}_{n1−1;α/2}
        H1: σ1² > σ2²    reject H0 if f0 ≥ f^{n1−1}_{n2−1;α}

H0: p = p0
    Test statistic: z0 = (x − n p0)/√( n p0 (1 − p0) )
        H1: p ≠ p0    reject H0 if z0 ≥ z_{α/2} or z0 ≤ −z_{α/2}
        H1: p > p0    reject H0 if z0 ≥ z_α
        H1: p < p0    reject H0 if z0 ≤ −z_α

H0: p1 = p2
    Test statistic: z0 = (p̂1 − p̂2)/√( p̂(1 − p̂)(1/n1 + 1/n2) ),
    with p̂ = (n1 p̂1 + n2 p̂2)/(n1 + n2)
        H1: p1 ≠ p2    reject H0 if z0 ≥ z_{α/2} or z0 ≤ −z_{α/2}
        H1: p1 > p2    reject H0 if z0 ≥ z_α
        H1: p1 < p2    reject H0 if z0 ≤ −z_α

H0: ρ = 0
    Test statistic: t0 = r √(n − 2)/√(1 − r²), with r = Sxy/√(Sxx Syy);
    see also page 25
        H1: ρ ≠ 0    reject H0 if t0 ≥ t_{n−2;α/2} or t0 ≤ −t_{n−2;α/2}
        H1: ρ > 0    reject H0 if t0 ≥ t_{n−2;α}
        H1: ρ < 0    reject H0 if t0 ≤ −t_{n−2;α}
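As a sketch of one row of the table (the code and the function name `two_proportion_z` are ours, not part of the compendium), the pooled z statistic for H0: p1 = p2 can be computed from the two observed success counts:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z statistic for H0: p1 = p2 from Table 4.2 (continued);
    x1 and x2 are the observed numbers of successes."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)      # pooled estimate (n1*p1 + n2*p2)/(n1 + n2)
    return (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
```

For a two-sided test at level α the result is compared with ±z_{α/2} from the table at the front of the compendium.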
4.3 Formulas for minimum required sample size

4.3.1 Estimation

Consider the case of a normal distribution with known σ. The sample size required for
estimating µ with maximum allowed deviation E = |x̄ − µ| and with confidence level 1 − α is at least

    n = ( z_{α/2} σ / E )².
The width of the two-sided confidence interval is at most 2E.
4.3.2 Testing

Consider the case of a normal distribution with known σ and the test

    H0: µ = µ0    versus    H1: µ ≠ µ0.
The probability of a type II error for µ = µ0 + δ equals

    β = Φ( z_{α/2} − δ√n/σ ) − Φ( −z_{α/2} − δ√n/σ ).

Given an upper bound β for the probability of a type II error and an upper bound α for the
probability of a type I error, then (approximately) the sample size n must be at least

    n ≈ (z_{α/2} + z_β)² σ² / δ²,

where δ = µ − µ0. This approximation is valid when Φ( −z_{α/2} − δ√n/σ ) is small compared
to β.
Consider the case of two independent samples, of size n1 and n2 , from two normal variables
with variances σ1² and σ2², and consider the test

    H0: µ1 − µ2 = ∆0    versus    H1: µ1 − µ2 ≠ ∆0.

The probability of a type II error for µ1 − µ2 = ∆ equals

    β = Φ( z_{α/2} − (∆ − ∆0)/√(σ1²/n1 + σ2²/n2) ) − Φ( −z_{α/2} − (∆ − ∆0)/√(σ1²/n1 + σ2²/n2) ).

Consider n1 = n2 = n. Given an upper bound β for the probability of a type II error and an
upper bound α for the probability of a type I error, then (approximately) the sample size n
must be at least

    n ≈ (z_{α/2} + z_β)² (σ1² + σ2²) / (∆ − ∆0)².

This approximation is valid when Φ( −z_{α/2} − (∆ − ∆0)√n/√(σ1² + σ2²) ) is small compared to
β.
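The two sample-size formulas of this section can be sketched as follows (an illustration only; the function names are ours, and the z-quantiles are taken from the table at the front of the compendium, e.g. z_{0.025} = 1.960, z_{0.10} = 1.282):

```python
import math

def n_for_estimation(z_half_alpha, sigma, E):
    """Minimum n for estimating mu with maximum deviation E
    (Section 4.3.1): n = (z_{alpha/2} sigma / E)^2, rounded up."""
    return math.ceil((z_half_alpha * sigma / E) ** 2)

def n_for_test(z_half_alpha, z_beta, sigma, delta):
    """Approximate minimum n for the two-sided test of Section 4.3.2:
    n ~ (z_{alpha/2} + z_beta)^2 sigma^2 / delta^2, rounded up."""
    return math.ceil((z_half_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)
```

Rounding up is needed because the formulas give lower bounds on an integer sample size.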
Chapter 5
Linear Regression
5.1 Simple linear regression
The simple linear regression model is
    Y_i = β0 + β1 x_i + ε_i    (i = 1, . . . , n),

where V(ε_i) = σ² (i = 1, . . . , n) and Cov(ε_i, ε_j) = 0 (i ≠ j). The least squares estimators are:
    β̂1 := Sxy/Sxx    and    β̂0 := ȳ − β̂1 x̄,

where

    Sxx = Σ_{i=1}^{n} (x_i − x̄)²          = Σ_{i=1}^{n} x_i² − (1/n) ( Σ_{i=1}^{n} x_i )²,
    Syy = Σ_{i=1}^{n} (y_i − ȳ)²          = Σ_{i=1}^{n} y_i² − (1/n) ( Σ_{i=1}^{n} y_i )²,
    Sxy = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ)  = Σ_{i=1}^{n} x_i y_i − (1/n) ( Σ_{i=1}^{n} x_i )( Σ_{i=1}^{n} y_i ).
It holds that

    E(β̂0) = β0        V(β̂0) = σ² ( 1/n + x̄²/Sxx )
    E(β̂1) = β1        V(β̂1) = σ²/Sxx
    Cov(β̂0, β̂1) = −x̄ σ²/Sxx

The fitted values are ŷ_i = β̂0 + β̂1 x_i (i = 1, . . . , n). The residuals are e_i = y_i − ŷ_i (i = 1, . . . , n).
The error sum of squares SSE is defined as

    SSE = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i − ŷ_i)².
The decomposition of the sum of squares according to the Analysis of Variance is

    SST = SSReg + SSE,
    Σ_{i=1}^{n} (y_i − ȳ)² = Σ_{i=1}^{n} (ŷ_i − ȳ)² + Σ_{i=1}^{n} (y_i − ŷ_i)².

It holds that

    SSReg = β̂1 Sxy = Sxy²/Sxx    and    SSE = (Sxx Syy − Sxy²)/Sxx.

An unbiased estimator of σ² is σ̂² = SSE/(n − 2).
The coefficient of determination is defined by R² = SSReg/SST.
5.1.1 Confidence intervals and tests

In order to construct confidence intervals and test hypotheses we assume that all error terms
ε_i are independent and follow a normal distribution ε_i ∼ N(0, σ²).
It holds that (n − 2) σ̂²/σ² ∼ χ²_{n−2}. The statistics

    (β̂1 − β1) / √( σ̂²/Sxx )    and    (β̂0 − β0) / √( σ̂² (1/n + x̄²/Sxx) )

both follow a Student t-distribution with n − 2 degrees of freedom.
From this distributional fact, the following test statistics are constructed for the hypotheses
H0: β0 = a and H0: β1 = b.
For testing H0: β1 = 0 versus H1: β1 ≠ 0 one can also use the statistic

    F0 = (SSReg/1) / (SSE/(n − 2)) = MSReg/MSE,

which under H0 has an F-distribution with 1 degree of freedom for the numerator and n − 2
degrees of freedom for the denominator. The null hypothesis is rejected if F0 > f^{1}_{n−2;α}.
A 100(1 − α)% confidence interval for β1 is

    β̂1 ± t_{n−2;α/2} √( σ̂²/Sxx ).

A 100(1 − α)% confidence interval for β0 is

    β̂0 ± t_{n−2;α/2} √( σ̂² (1/n + x̄²/Sxx) ).

An estimator for the expected value of the response at x0 is µ̂_{Y|x0} = β̂0 + β̂1 x0. A 100(1 − α)%
confidence interval for the expected response at a point x = x0 is given by

    µ̂_{Y|x0} ± t_{n−2;α/2} √( σ̂² ( 1/n + (x0 − x̄)²/Sxx ) ).
An estimator of the response at x0 is ŷ0 = β̂0 + β̂1 x0. A 100(1 − α)% prediction interval for
the response at a point x = x0 is given by

    ŷ0 ± t_{n−2;α/2} √( σ̂² ( 1 + 1/n + (x0 − x̄)²/Sxx ) ).
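The half-widths of the confidence interval for the expected response and of the prediction interval can be sketched as follows (illustrative code, not part of the compendium; `t_crit` must be looked up as t_{n−2;α/2} in the t table):

```python
import math

def interval_halfwidths(x, y, x0, t_crit):
    """Half-widths of the CI for E(Y|x0) and of the PI for a new
    response at x0, using the formulas of Section 5.1.1."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    Sxx = sum((xi - xbar) ** 2 for xi in x)
    Syy = sum((yi - ybar) ** 2 for yi in y)
    Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sse = (Sxx * Syy - Sxy ** 2) / Sxx
    s2 = sse / (n - 2)                    # sigma-hat^2 = SSE/(n-2)
    h = 1 / n + (x0 - xbar) ** 2 / Sxx
    ci = t_crit * math.sqrt(s2 * h)       # confidence interval half-width
    pi = t_crit * math.sqrt(s2 * (1 + h)) # prediction interval half-width
    return ci, pi
```

The prediction interval is always wider than the confidence interval, because it includes the extra variance σ² of the new observation itself.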
Lack-of-fit

If there are repeated observations for the same value of x, the variance can be estimated in a
model-independent way. The sum of squares of the repeated measurements (SSPE) is

    SSPE = Σ_{i=1}^{m} Σ_{u=1}^{n_i} (y_{iu} − ȳ_{i.})²,

where m is the number of different levels, n_i ≥ 1 the number of measurements at x_i and ȳ_{i.} the
average of the measurements at x_i. This leads to a decomposition of the error sum of squares

    SSE = SSLOF + SSPE.

The statistic to test whether the model fits well (“lack-of-fit”) is

    F0 = (SSLOF/(m − 2)) / (SSPE/(n − m)) = MSLOF/MSPE.
5.1.2 Correlation

In many applications both X and Y are random. We assume that (X, Y) has a bivariate normal
distribution with parameters µX, µY, σX, σY and correlation coefficient ρ = Cov(X, Y)/√(V(X)V(Y)).
Define

    β0 = µY − µX ρ σY/σX    and    β1 = ρ σY/σX,

and consider the model E(Y | X = x) = β0 + β1 x. The maximum likelihood estimators are

    β̂0 = Ȳ − β̂1 X̄    and    β̂1 = SXY/SXX,

where SXX, SXY and SYY are defined as on page 23.
The estimator of ρ is the sample correlation coefficient R = SXY/√(SXX SYY).
To test H0: ρ = 0 one uses the statistic T_{n−2} = R√(n − 2)/√(1 − R²), which has a Student t-distribution
with n − 2 degrees of freedom.
To test H0: ρ = ρ0 a different statistic is used, namely Z0 = (arctanh R − arctanh ρ0)√(n − 3),
which follows approximately a standard normal distribution, where arctanh x = ½ ln( (1 + x)/(1 − x) ).
Remark: tanh y = (e^y − e^{−y})/(e^y + e^{−y}). This test statistic can be used to construct a confidence interval.
See page 18.
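The confidence interval for ρ based on Fisher's z-transform (page 18) can be sketched as follows; the code and the function name are ours:

```python
import math

def rho_confidence_interval(r, n, z_half_alpha):
    """CI for rho via the z-transform:
    tanh(arctanh r - z/(sqrt(n-3))) < rho < tanh(arctanh r + z/(sqrt(n-3)))."""
    z = math.atanh(r)                       # arctanh r = 0.5*ln((1+r)/(1-r))
    half = z_half_alpha / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

Because tanh is strictly increasing, the interval always contains r and stays inside (−1, 1), which a naive interval r ± c would not guarantee.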
5.2 Multiple linear regression

In multiple regression there is more than one (say k) regression variable (also called predictor variables or explanatory variables). The model is
Yi = β0 + β1 xi1 + . . . + βk xik + εi
(i = 1, . . . , n),
where x_ij is the value of the jth regressor at the ith observation. It holds that V(ε_i) = σ² for
i = 1, . . . , n and Cov(ε_i, ε_j) = 0 for i ≠ j. In matrix notation the model is written as
Y = X β + ε,
where Y is a column vector of response values (length n), X is the design matrix with n rows
and p = k + 1 columns, β is a column vector of parameters (length p), and ε is the vector of
errors. The least squares estimator for β is given by
    β̂ := (XᵀX)⁻¹ Xᵀ Y.

The column vector (length n) of the fitted values is ŷ = X β̂.
The residuals are e_i = y_i − ŷ_i (i = 1, . . . , n) and e is the column vector of residuals.
The error sum of squares SSE is

    SSE = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i − ŷ_i)² = eᵀe = yᵀy − β̂ᵀXᵀy.

The variance σ² is estimated by

    σ̂² = SSE/(n − p).

The estimator β̂ is unbiased: E(β̂) = β.
The covariance matrix of β̂ is

    Cov(β̂) = σ² (XᵀX)⁻¹ = σ² C,
where

    C = (XᵀX)⁻¹ = [ C00  C01  C02  · · ·  C0k
                    C10  C11  C12  · · ·  C1k
                     ..   ..   ..          ..
                    Ck0  Ck1  Ck2  · · ·  Ckk ].

Hence, the variances and covariances of the estimators are

    V(β̂j) = σ² Cjj    and    Cov(β̂i, β̂j) = σ² Cij.
The decomposition of the sum of squares according to the Analysis of Variance is
SST = SSReg + SSE ,
where
    SSReg = β̂ᵀXᵀy − nȳ²    and    SSE = yᵀy − β̂ᵀXᵀy.
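The least squares estimator β̂ = (XᵀX)⁻¹XᵀY above amounts to solving the normal equations (XᵀX)β = Xᵀy. A minimal pure-Python sketch (not part of the compendium; in practice one would use a linear algebra library) using Gaussian elimination:

```python
def lstsq(X, y):
    """Solve the normal equations (X^T X) b = X^T y; X is a list of rows,
    each row starting with the intercept column of 1s."""
    m, p = len(X), len(X[0])
    # Build X^T X and X^T y.
    A = [[sum(X[i][j] * X[i][k] for i in range(m)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(m)) for j in range(p)]
    # Forward elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    # Back substitution.
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return beta
```

Forming XᵀX explicitly is fine as a sketch, but numerically a QR decomposition of X is preferred when X is ill-conditioned.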
5.2.1 Confidence intervals and tests
Confidence intervals and tests
In order to construct confidence intervals and test hypotheses we assume that all error terms
ε_i are independent and follow a normal distribution ε_i ∼ N(0, σ²). This can also be denoted
as ε ∼ N_n(0, σ²I), where N_n(µ, σ²I) is the n-dimensional normal distribution with mean
vector µ and covariance matrix σ²I. It holds that

    (n − p) σ̂²/σ² ∼ χ²_{n−p}    and    β̂ ∼ N_p(β, σ² (XᵀX)⁻¹).

For the vector of fitted values Ŷ = X β̂ it holds that

    Ŷ ∼ N_n( Xβ, σ² X(XᵀX)⁻¹Xᵀ ).
Test for significance of regression
The hypotheses are

    H0: β1 = β2 = . . . = βk = 0    versus    H1: βj ≠ 0 for at least one j.

Under the null hypothesis, the statistic

    F0 = (SSReg/k) / (SSE/(n − p)) = MSReg/MSE

follows an F-distribution with k degrees of freedom for the numerator and n − p degrees of
freedom for the denominator. The null hypothesis is rejected if F0 > f^{k}_{n−p;α}.
Testing and estimating one parameter
For the hypotheses H0: βi = βi0 versus H1: βi ≠ βi0, one uses the test statistic

    (β̂i − βi0) / √( σ̂² Cii ),

where Cii is defined as on page 26. Under H0, this statistic has a Student t-distribution
with n − p degrees of freedom. This can be used to test and construct a confidence interval
for βi.
Expected value of the response and prediction interval
Let x0 denote a vector of regression variables. Of interest are the prediction Y0 of the response
at the value x0 and the expected value µ_{Y|x0} at that value. Both are estimated by x0ᵀβ̂, so

    µ̂_{Y|x0} = Ŷ0 = x0ᵀβ̂.

The statistic

    (µ̂_{Y|x0} − µ_{Y|x0}) / √( σ̂² x0ᵀ(XᵀX)⁻¹x0 )

has a Student t-distribution with n − p degrees of freedom. This can be used to construct a
test. A 100(1 − α)% confidence interval for µ_{Y|x0} is

    µ̂_{Y|x0} ± t_{n−p;α/2} √( σ̂² x0ᵀ(XᵀX)⁻¹x0 ).

The statistic

    (Ŷ0 − Y0) / √( σ̂² (1 + x0ᵀ(XᵀX)⁻¹x0) )

has a Student t-distribution with n − p degrees of freedom. A 100(1 − α)% prediction interval
for Y0 is

    Ŷ0 ± t_{n−p;α/2} √( σ̂² (1 + x0ᵀ(XᵀX)⁻¹x0) ).
Testing partial hypotheses
Consider
    H0: β(1) = 0    versus    H1: β(1) ≠ 0,

where β(1) is a vector of length r of model parameters. Denote by β(2) the vector of the remaining
parameters in the model. Hence, β(1) is only a part of all parameters. Define SSReg(β) as the
regression sum of squares in the model with all parameters and SSReg(β(2)) as the regression
sum of squares in the model with only the parameters β(2).
A test statistic is

    F0 = ( [SSReg(β) − SSReg(β(2))]/r ) / MSE,

where MSE = SSE/(n − p) is the mean squared error in the model with all parameters and
F0 ∼ F^{r}_{n−p}.
5.2.2 Model diagnostics

The coefficient of determination in a model with k regression variables is

    Rp² = SSReg/SST    (p = k + 1).

The standardized residuals are

    d_i = e_i/√MSE.

The studentized residuals are

    r_i = e_i / √( MSE (1 − h_ii) ),

where

    h_ii = x_iᵀ(XᵀX)⁻¹x_i.

Cook's distance of the ith value is

    D_i = (r_i²/p) · h_ii/(1 − h_ii).

The adjusted coefficient of determination of a model with k regression variables is

    R̄p² = 1 − ((n − 1)/(n − p)) (1 − Rp²).
The Cp criterion for a model with k variables is

    Cp = SSE(p)/σ̂² − n + 2p,

where σ̂² is the estimator of σ² in the complete model.
The variance inflation factor (VIF) is

    VIF(β̂j) = 1/(1 − Rj²),

where Rj² is the coefficient of determination when variable xj is the response variable and the
other xi's are regression variables.
Chapter 6
Analysis of Variance
6.1 One-way analysis of variance

Model: Y_ij = µ + τ_i + ε_ij,  i = 1 . . . a,  j = 1 . . . n_i,  µ_i := µ + τ_i,
Σ_{i=1}^{a} τ_i n_i = 0,  ε_ij ∼ N(0, σ²) and independent,  N = Σ_{i=1}^{a} n_i.

Source       SS (sum of squares)                          df      MS (mean sum of sq.)   F
Treatments   SSA = Σ_i y_{i.}²/n_i − y_{..}²/N            a − 1   SSA/(a − 1)            MSA/MSE
Error        SSE = Σ_{i,j} y_ij² − Σ_i y_{i.}²/n_i        N − a   SSE/(N − a)
Total        SST = Σ_{i,j} y_ij² − y_{..}²/N              N − 1

Estimation and testing

The estimators of the parameters are

    µ̂ = ȳ_{..},    τ̂_i = ȳ_{i.} − ȳ_{..},    µ̂_i = ȳ_{i.}.

A confidence interval for µ_i is

    ȳ_{i.} − t_{N−a;α/2} √(MSE/n_i) < µ_i < ȳ_{i.} + t_{N−a;α/2} √(MSE/n_i).

To test whether the expected values at level i and level j differ significantly, we use the LSD
(Least Significant Difference)

    LSD = t_{N−a;α/2} √( MSE (1/n_i + 1/n_j) ).
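The one-way ANOVA table can be sketched as follows (illustrative code, not part of the compendium; the F value must be compared with f^{a−1}_{N−a;α} from the F tables):

```python
def one_way_anova(groups):
    """SSA, SSE and the F ratio of the one-way ANOVA table;
    groups is a list of samples (lists of observations)."""
    a = len(groups)
    N = sum(len(g) for g in groups)
    total = sum(sum(g) for g in groups)                 # y..
    sq_total = sum(x * x for g in groups for x in g)    # sum of y_ij^2
    between = sum(sum(g) ** 2 / len(g) for g in groups) # sum of y_i.^2/n_i
    ssa = between - total ** 2 / N
    sse = sq_total - between
    msa = ssa / (a - 1)
    mse = sse / (N - a)
    return ssa, sse, msa / mse
```

Note that SSA + SSE equals SST = Σ y_ij² − y_{..}²/N, so the decomposition can serve as an arithmetic check.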
6.2 One-way analysis of variance with blocks

Model: Y_ij = µ + τ_i + β_j + ε_ij,  i = 1 . . . a,  j = 1 . . . b,
ε_ij ∼ N(0, σ²) and independent,  Σ_{i=1}^{a} τ_i = Σ_{j=1}^{b} β_j = 0.

Source           SS (sum of squares)                                                    df               MS                     F
Factor A         SSA = Σ_i y_{i.}²/b − y_{..}²/(ab)                                     a − 1            SSA/(a − 1)            MSA/MSE
Block factor B   SSB = Σ_j y_{.j}²/a − y_{..}²/(ab)                                     b − 1            SSB/(b − 1)            MSB/MSE
Error            SSE = Σ_{i,j} y_ij² − Σ_i y_{i.}²/b − Σ_j y_{.j}²/a + y_{..}²/(ab)     (a − 1)(b − 1)   SSE/((a − 1)(b − 1))
Total            SST = Σ_{i,j} y_ij² − y_{..}²/(ab)                                     ab − 1

Estimation and testing

The estimators of the parameters are

    µ̂ = ȳ_{..},    τ̂_i = ȳ_{i.} − ȳ_{..}.

Note that because of the nature of blocking factors, estimation of and testing on the block
effects is not of interest.
To test whether the expected value at a certain combination of treatments (say, 1) differs
significantly from that at a different combination (say, 2), we use the LSD (Least Significant
Difference)

    LSD = t_{ν;α/2} √( MSE (1/n1 + 1/n2) ),

where ν is the degrees of freedom of the error sum of squares, n1 the sample size at treatment
1 and n2 the sample size at treatment 2.
6.3 Two-way crossed analysis of variance

Model: Y_ijk = µ + τ_i + β_j + (τβ)_ij + ε_ijk,  i = 1 . . . a,  j = 1 . . . b,  k = 1 . . . n,
ε_ijk ∼ N(0, σ²) and independent,
Σ_{i=1}^{a} τ_i = Σ_{j=1}^{b} β_j = Σ_{i=1}^{a} (τβ)_ij = Σ_{j=1}^{b} (τβ)_ij = 0.

Source        SS (sum of squares)                                                                 df               MS                       F
Factor A      SSA = Σ_i y_{i..}²/(bn) − y_{...}²/(abn)                                            a − 1            SSA/(a − 1)              MSA/MSE
Factor B      SSB = Σ_j y_{.j.}²/(an) − y_{...}²/(abn)                                            b − 1            SSB/(b − 1)              MSB/MSE
Interaction   SSAB = Σ_{i,j} y_{ij.}²/n − Σ_i y_{i..}²/(bn) − Σ_j y_{.j.}²/(an) + y_{...}²/(abn)  (a − 1)(b − 1)   SSAB/((a − 1)(b − 1))    MSAB/MSE
Error         SSE = Σ_{i,j,k} y_ijk² − Σ_{i,j} y_{ij.}²/n                                         ab(n − 1)        SSE/(ab(n − 1))
Total         SST = Σ_{i,j,k} y_ijk² − y_{...}²/(abn)                                             abn − 1

Estimation and testing

The estimators of the parameters are

    µ̂ = ȳ_{...},
    τ̂_i = ȳ_{i..} − ȳ_{...},
    β̂_j = ȳ_{.j.} − ȳ_{...},
    (τβ)̂_ij = ȳ_{ij.} − ȳ_{i..} − ȳ_{.j.} + ȳ_{...}.

To test whether the expected value at a certain combination of treatments (say, 1) differs
significantly from that at a different combination (say, 2), we use the LSD (Least Significant
Difference)

    LSD = t_{ν;α/2} √( MSE (1/n1 + 1/n2) ),

where ν is the degrees of freedom of the error sum of squares, n1 the sample size at treatment
1 and n2 the sample size at treatment 2.
Chapter 7
Contingency Tables
A contingency table is a table of frequency counts. In two-way contingency tables the elements
of a sample are classified by two variables.
                     classification B
                     1     2     · · ·   c
classification A
    1                O11   O12   · · ·   O1c    O1.
    2                O21   O22   · · ·   O2c    O2.
    ..               ..    ..            ..     ..
    r                Or1   Or2   · · ·   Orc    Or.
                     O.1   O.2   · · ·   O.c    O..
Denote by r the number of levels for the first classification, and by c that of the second
classification. The random variable Oij denotes the frequency of elements in cell (i, j) (level
i of the first classification and level j of the second classification) and Eij is the expectation
of Oij under the null hypothesis. The test statistic

    X0² = Σ_{i=1}^{r} Σ_{j=1}^{c} (Oij − Eij)²/Eij

is used to test the null hypothesis that the two classification variables are independent.
It is approximately χ²-distributed with (r − 1)(c − 1) degrees of freedom. Under H0 it holds
that

    Eij = Oi. O.j / O..,

with

    Oi. = Σ_{j=1}^{c} Oij,    O.j = Σ_{i=1}^{r} Oij    and    O.. = Σ_{i=1}^{r} Σ_{j=1}^{c} Oij.

Adjusted residuals can be computed using the formula

    (Oij − Eij) / √( Eij (1 − Oi./O..)(1 − O.j/O..) ).
Chapter 8
Design of Experiments with Factors at Two Levels
This chapter deals with 2k designs. These are experiments where all (k) factors have two
levels: there is the low level (−) and the high level (+). Factors are denoted with capital
letters A, B, C, . . .. Hence, in a 2k design there are 2k different combinations of the factor
levels. A level combination is denoted by a combination of small letters. If a factor X has
a high level in such a level combination, the letter x appears in the letter combination. If a
factor X has a low level, the letter x does not appear. The notation for the level combination
where all factors have the low level is (1). The 8 level combinations of a 23 design are denoted
(1), a, b, ab, c, ac, bc, abc .
This notation is also used to denote the (sum of the) measurement(s) at the corresponding
level combination. A contrast of an effect (main effect or interaction) is equal to the sum of
the measurements at the high level of this effect, minus the sum of the measurements at the
low level of that effect.
One way of finding the sign of an interaction is by multiplication. For example, for a
2³-scheme we have

Level                               Effect
combination    I    A    B    AB    C    AC    BC    ABC
(1)            +    −    −    +     −    +     +     −
a              +    +    −    −     −    −     +     +
b              +    −    +    −     −    +     −     +
ab             +    +    +    +     −    −     −     −
c              +    −    −    +     +    −     −     +
ac             +    +    −    −     +    +     −     −
bc             +    −    +    −     +    −     +     −
abc            +    +    +    +     +    +     +     +
From this table it follows that
ContrastABC = −(1) + a + b − ab + c − ac − bc + abc.
This also holds in the case of a fractional design. Let N be the total number of measurements of the
design. In a complete design with n repetitions, we have that N = 2^k n. An estimator of an
effect (main effect or interaction) is given by

    Êff = Contrast/(N/2) = Contrast/(2^{k−1} n).

The variance of the estimator of the effect is

    V(Êff) = σ²/(N/4).

We use this to construct a confidence interval for the effect. This has the form

    Êff ± t_{ν;α/2} √( σ̂²/(N/4) ),

with ν the degrees of freedom of the error sum of squares.
The sum of squares of an effect is

    SSEffect = (Contrast)²/N = (Êff)² N/4.
Chapter 9
Error Propagation
In experiments, one is often not interested in the actual observed quantities, but rather in
quantities that cannot be observed on their own, but which can be computed from the observed
quantities.
Consider the model η = f (µ1 , µ2 ). We assume that f is a known function. The unknown
parameters µ1 and µ2 are estimated from observations X1 and X2 with Xi = µi + εi, E(εi) = 0,
V(εi) = σi² (i = 1, 2) and E(ε1 ε2) = 0. A straightforward estimator for η is Y = f(X1, X2).
We have the following results concerning expected value and variance of this estimator.
Expansion of Y = f(X1, X2) in a Taylor series around µ = (µ1, µ2) yields:

    Y = f(µ) + (X1 − µ1) f′1(µ) + (X2 − µ2) f′2(µ)
        + ½ [ (X1 − µ1)² f″11(µ) + 2(X1 − µ1)(X2 − µ2) f″12(µ) + (X2 − µ2)² f″22(µ) ] + · · ·

In this expression, f′i(µ) = ∂f/∂xi |µ, f″ii(µ) = ∂²f/∂xi² |µ, and f″12(µ) = ∂²f/(∂x1∂x2) |µ = ∂²f/(∂x2∂x1) |µ.
Reorganising terms, we obtain:

    Y = η + ε1 f′1(µ) + ε2 f′2(µ) + ½ [ ε1² f″11(µ) + 2 ε1 ε2 f″12(µ) + ε2² f″22(µ) ] + · · ·

For µy = E(Y) it then holds, approximately:

    µy ≈ η + ½ σ1² f″11(µ) + ½ σ2² f″22(µ).

For σy² = V(Y) we have the Law of Propagation of (Random) Errors:

    σy² ≈ (f′1(µ))² σ1² + (f′2(µ))² σ2².

A special case is Y = X1^{a1} X2^{a2}. For this situation, we have the Law of Propagation of Relative
Errors

    Vy² ≈ a1² V1² + a2² V2².

In this expression V = σ/µ stands for the variation coefficient.
Remarks
1. Unknown quantities are replaced by their estimates.
2. Expressions for error propagation based on Y = f(X) yield different results than expressions based
on X = f⁻¹(Y). These expressions may therefore only be applied if a causal link exists between
the variables.
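The Law of Propagation of Errors can be sketched numerically (illustrative code, not part of the compendium; the partial derivatives are approximated by central differences rather than supplied analytically):

```python
import math

def propagated_sd(f, mu1, mu2, s1, s2, h=1e-6):
    """sigma_y ~ sqrt((f'_1)^2 s1^2 + (f'_2)^2 s2^2), with the partial
    derivatives of f evaluated at (mu1, mu2) by central differences."""
    d1 = (f(mu1 + h, mu2) - f(mu1 - h, mu2)) / (2 * h)
    d2 = (f(mu1, mu2 + h) - f(mu1, mu2 - h)) / (2 * h)
    return math.sqrt(d1 ** 2 * s1 ** 2 + d2 ** 2 * s2 ** 2)
```

For Y = X1 X2 this reproduces the Law of Propagation of Relative Errors: (σy/µy)² ≈ V1² + V2².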
Chapter 10
Tables
Contents

10.1   Standard normal distribution . . . . . . . . . . . . . . . . . . . . . . . . . .  38
10.2   Student t-distribution (t_{ν;α}) . . . . . . . . . . . . . . . . . . . . . . . .  40
10.3   χ²-distribution (χ²_{ν;α}) . . . . . . . . . . . . . . . . . . . . . . . . . . .  41
10.4   F-distribution f^{m}_{n;α} with α = 0.10 (and α = 0.90) . . . . . . . . . . . . .  42
10.5   F-distribution f^{m}_{n;α} with α = 0.05 (and α = 0.95) . . . . . . . . . . . . .  44
10.6   F-distribution f^{m}_{n;α} with α = 0.025 (and α = 0.975) . . . . . . . . . . . .  46
10.7   F-distribution f^{m}_{n;α} with α = 0.01 (and α = 0.99) . . . . . . . . . . . . .  48
10.8   F-distribution f^{m}_{n;α} with α = 0.005 (and α = 0.995) . . . . . . . . . . . .  50
10.9   Studentized range q_{a,f}(α) with α = 0.10 . . . . . . . . . . . . . . . . . . .  52
10.10  Studentized range q_{a,f}(α) with α = 0.05 . . . . . . . . . . . . . . . . . . .  53
10.11  Studentized range q_{a,f}(α) with α = 0.01 . . . . . . . . . . . . . . . . . . .  54
10.12  Cumulative binomial probabilities (1 ≤ n ≤ 7) . . . . . . . . . . . . . . . . .  55
10.13  Cumulative binomial probabilities (8 ≤ n ≤ 11) . . . . . . . . . . . . . . . . .  56
10.14  Cumulative binomial probabilities (n = 12, 13, 14) . . . . . . . . . . . . . . .  57
10.15  Cumulative binomial probabilities (n = 15, 20) . . . . . . . . . . . . . . . . .  58
10.16  Cumulative Poisson probabilities (0.1 ≤ λ ≤ 5.0) . . . . . . . . . . . . . . . .  59
10.17  Cumulative Poisson probabilities (5.5 ≤ λ ≤ 9.5) . . . . . . . . . . . . . . . .  60
10.18  Cumulative Poisson probabilities (10.0 ≤ λ ≤ 15.0) . . . . . . . . . . . . . . .  61
10.19  Wilcoxon rank sum test . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  62
10.20  Wilcoxon signed rank test . . . . . . . . . . . . . . . . . . . . . . . . . . . .  64
10.21  Kendall rank correlation test . . . . . . . . . . . . . . . . . . . . . . . . . .  65
10.22  Spearman rank correlation test . . . . . . . . . . . . . . . . . . . . . . . . .  66
10.23  Kruskal-Wallis test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  67
10.24  Friedman test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  68
10.25  Orthogonal polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  69

For tables that are not included in this Statistical Compendium, we refer to [1], [3] and [10]
in the bibliography.
10.1 Standard normal distribution

Example: Φ(−0.31) = P(Z ≤ −0.31) = 0.3783.
  z      0     −0.01  −0.02  −0.03  −0.04  −0.05  −0.06  −0.07  −0.08  −0.09
−0.0   .5000  .4960  .4920  .4880  .4840  .4801  .4761  .4721  .4681  .4641
−0.1   .4602  .4562  .4522  .4483  .4443  .4404  .4364  .4325  .4286  .4247
−0.2   .4207  .4168  .4129  .4090  .4052  .4013  .3974  .3936  .3897  .3859
−0.3   .3821  .3783  .3745  .3707  .3669  .3632  .3594  .3557  .3520  .3483
−0.4   .3446  .3409  .3372  .3336  .3300  .3264  .3228  .3192  .3156  .3121
−0.5   .3085  .3050  .3015  .2981  .2946  .2912  .2877  .2843  .2810  .2776
−0.6   .2743  .2709  .2676  .2643  .2611  .2578  .2546  .2514  .2483  .2451
−0.7   .2420  .2389  .2358  .2327  .2296  .2266  .2236  .2206  .2177  .2148
−0.8   .2119  .2090  .2061  .2033  .2005  .1977  .1949  .1922  .1894  .1867
−0.9   .1841  .1814  .1788  .1762  .1736  .1711  .1685  .1660  .1635  .1611
−1.0   .1587  .1562  .1539  .1515  .1492  .1469  .1446  .1423  .1401  .1379
−1.1   .1357  .1335  .1314  .1292  .1271  .1251  .1230  .1210  .1190  .1170
−1.2   .1151  .1131  .1112  .1093  .1075  .1056  .1038  .1020  .1003  .0985
−1.3   .0968  .0951  .0934  .0918  .0901  .0885  .0869  .0853  .0838  .0823
−1.4   .0808  .0793  .0778  .0764  .0749  .0735  .0721  .0708  .0694  .0681
−1.5   .0668  .0655  .0643  .0630  .0618  .0606  .0594  .0582  .0571  .0559
−1.6   .0548  .0537  .0526  .0516  .0505  .0495  .0485  .0475  .0465  .0455
−1.7   .0446  .0436  .0427  .0418  .0409  .0401  .0392  .0384  .0375  .0367
−1.8   .0359  .0351  .0344  .0336  .0329  .0322  .0314  .0307  .0301  .0294
−1.9   .0287  .0281  .0274  .0268  .0262  .0256  .0250  .0244  .0239  .0233
−2.0   .0228  .0222  .0217  .0212  .0207  .0202  .0197  .0192  .0188  .0183
−2.1   .0179  .0174  .0170  .0166  .0162  .0158  .0154  .0150  .0146  .0143
−2.2   .0139  .0136  .0132  .0129  .0125  .0122  .0119  .0116  .0113  .0110
−2.3   .0107  .0104  .0102  .0099  .0096  .0094  .0091  .0089  .0087  .0084
−2.4   .0082  .0080  .0078  .0075  .0073  .0071  .0069  .0068  .0066  .0064
−2.5   .0062  .0060  .0059  .0057  .0055  .0054  .0052  .0051  .0049  .0048
−2.6   .0047  .0045  .0044  .0043  .0041  .0040  .0039  .0038  .0037  .0036
−2.7   .0035  .0034  .0033  .0032  .0031  .0030  .0029  .0028  .0027  .0026
−2.8   .0026  .0025  .0024  .0023  .0023  .0022  .0021  .0021  .0020  .0019
−2.9   .0019  .0018  .0018  .0017  .0016  .0016  .0015  .0015  .0014  .0014
−3.0   .0013  .0013  .0013  .0012  .0012  .0011  .0011  .0011  .0010  .0010
−3.1   .0010  .0009  .0009  .0009  .0008  .0008  .0008  .0008  .0007  .0007
−3.2   .0007  .0007  .0006  .0006  .0006  .0006  .0006  .0005  .0005  .0005
−3.3   .0005  .0005  .0005  .0004  .0004  .0004  .0004  .0004  .0004  .0003
−3.4   .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0002
−3.5   .0002  .0002  .0002  .0002  .0002  .0002  .0002  .0002  .0002  .0002
−3.6   .0002  .0002  .0001  .0001  .0001  .0001  .0001  .0001  .0001  .0001
Standard normal distribution (continued)

Example: Φ(0.31) = P(Z ≤ 0.31) = 0.6217.
  z      0     .01    .02    .03    .04    .05    .06    .07    .08    .09
 0.0   .5000  .5040  .5080  .5120  .5160  .5199  .5239  .5279  .5319  .5359
 0.1   .5398  .5438  .5478  .5517  .5557  .5596  .5636  .5675  .5714  .5753
 0.2   .5793  .5832  .5871  .5910  .5948  .5987  .6026  .6064  .6103  .6141
 0.3   .6179  .6217  .6255  .6293  .6331  .6368  .6406  .6443  .6480  .6517
 0.4   .6554  .6591  .6628  .6664  .6700  .6736  .6772  .6808  .6844  .6879
 0.5   .6915  .6950  .6985  .7019  .7054  .7088  .7123  .7157  .7190  .7224
 0.6   .7257  .7291  .7324  .7357  .7389  .7422  .7454  .7486  .7517  .7549
 0.7   .7580  .7611  .7642  .7673  .7704  .7734  .7764  .7794  .7823  .7852
 0.8   .7881  .7910  .7939  .7967  .7995  .8023  .8051  .8078  .8106  .8133
 0.9   .8159  .8186  .8212  .8238  .8264  .8289  .8315  .8340  .8365  .8389
 1.0   .8413  .8438  .8461  .8485  .8508  .8531  .8554  .8577  .8599  .8621
 1.1   .8643  .8665  .8686  .8708  .8729  .8749  .8770  .8790  .8810  .8830
 1.2   .8849  .8869  .8888  .8907  .8925  .8944  .8962  .8980  .8997  .9015
 1.3   .9032  .9049  .9066  .9082  .9099  .9115  .9131  .9147  .9162  .9177
 1.4   .9192  .9207  .9222  .9236  .9251  .9265  .9279  .9292  .9306  .9319
 1.5   .9332  .9345  .9357  .9370  .9382  .9394  .9406  .9418  .9429  .9441
 1.6   .9452  .9463  .9474  .9484  .9495  .9505  .9515  .9525  .9535  .9545
 1.7   .9554  .9564  .9573  .9582  .9591  .9599  .9608  .9616  .9625  .9633
 1.8   .9641  .9649  .9656  .9664  .9671  .9678  .9686  .9693  .9699  .9706
 1.9   .9713  .9719  .9726  .9732  .9738  .9744  .9750  .9756  .9761  .9767
 2.0   .9772  .9778  .9783  .9788  .9793  .9798  .9803  .9808  .9812  .9817
 2.1   .9821  .9826  .9830  .9834  .9838  .9842  .9846  .9850  .9854  .9857
 2.2   .9861  .9864  .9868  .9871  .9875  .9878  .9881  .9884  .9887  .9890
 2.3   .9893  .9896  .9898  .9901  .9904  .9906  .9909  .9911  .9913  .9916
 2.4   .9918  .9920  .9922  .9925  .9927  .9929  .9931  .9932  .9934  .9936
 2.5   .9938  .9940  .9941  .9943  .9945  .9946  .9948  .9949  .9951  .9952
 2.6   .9953  .9955  .9956  .9957  .9959  .9960  .9961  .9962  .9963  .9964
 2.7   .9965  .9966  .9967  .9968  .9969  .9970  .9971  .9972  .9973  .9974
 2.8   .9974  .9975  .9976  .9977  .9977  .9978  .9979  .9979  .9980  .9981
 2.9   .9981  .9982  .9982  .9983  .9984  .9984  .9985  .9985  .9986  .9986
 3.0   .9987  .9987  .9987  .9988  .9988  .9989  .9989  .9989  .9990  .9990
 3.1   .9990  .9991  .9991  .9991  .9992  .9992  .9992  .9992  .9993  .9993
 3.2   .9993  .9993  .9994  .9994  .9994  .9994  .9994  .9995  .9995  .9995
 3.3   .9995  .9995  .9995  .9996  .9996  .9996  .9996  .9996  .9996  .9997
 3.4   .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9998
 3.5   .9998  .9998  .9998  .9998  .9998  .9998  .9998  .9998  .9998  .9998
 3.6   .9998  .9998  .9999  .9999  .9999  .9999  .9999  .9999  .9999  .9999
10.2 Student t-distribution (t_{ν;α})

Example: P(T₃ ≥ 1.638) = 0.1, thus t_{3;0.1} = 1.638.
 ν \ α   0.3    0.2    0.15   0.1    0.05   0.025  0.02   0.01   0.005  0.0025  0.001
   1    0.727  1.376  1.963  3.078  6.314  12.71  15.90  31.82  63.66  127.3   318.3
   2    0.617  1.061  1.386  1.886  2.920  4.303  4.849  6.965  9.925  14.10   22.33
   3    0.584  0.978  1.250  1.638  2.353  3.182  3.482  4.541  5.841  7.453   10.215
   4    0.569  0.941  1.190  1.533  2.132  2.776  2.999  3.747  4.604  5.598   7.173
   5    0.559  0.920  1.156  1.476  2.015  2.571  2.757  3.365  4.032  4.773   5.893
   6    0.553  0.906  1.134  1.440  1.943  2.447  2.612  3.143  3.707  4.317   5.208
   7    0.549  0.896  1.119  1.415  1.895  2.365  2.517  2.998  3.499  4.029   4.785
   8    0.546  0.889  1.108  1.397  1.860  2.306  2.449  2.896  3.355  3.833   4.501
   9    0.543  0.883  1.100  1.383  1.833  2.262  2.398  2.821  3.250  3.690   4.297
  10    0.542  0.879  1.093  1.372  1.812  2.228  2.359  2.764  3.169  3.581   4.144
  11    0.540  0.876  1.088  1.363  1.796  2.201  2.328  2.718  3.106  3.497   4.025
  12    0.539  0.873  1.083  1.356  1.782  2.179  2.303  2.681  3.055  3.428   3.930
  13    0.538  0.870  1.079  1.350  1.771  2.160  2.282  2.650  3.012  3.372   3.852
  14    0.537  0.868  1.076  1.345  1.761  2.145  2.264  2.624  2.977  3.326   3.787
  15    0.536  0.866  1.074  1.341  1.753  2.131  2.249  2.602  2.947  3.286   3.733
  16    0.535  0.865  1.071  1.337  1.746  2.120  2.235  2.583  2.921  3.252   3.686
  17    0.534  0.863  1.069  1.333  1.740  2.110  2.224  2.567  2.898  3.222   3.646
  18    0.534  0.862  1.067  1.330  1.734  2.101  2.214  2.552  2.878  3.197   3.610
  19    0.533  0.861  1.066  1.328  1.729  2.093  2.205  2.539  2.861  3.174   3.579
  20    0.533  0.860  1.064  1.325  1.725  2.086  2.197  2.528  2.845  3.153   3.552
  21    0.532  0.859  1.063  1.323  1.721  2.080  2.189  2.518  2.831  3.135   3.527
  22    0.532  0.858  1.061  1.321  1.717  2.074  2.183  2.508  2.819  3.119   3.505
  23    0.532  0.858  1.060  1.319  1.714  2.069  2.177  2.500  2.807  3.104   3.485
  24    0.531  0.857  1.059  1.318  1.711  2.064  2.172  2.492  2.797  3.091   3.467
  25    0.531  0.856  1.058  1.316  1.708  2.060  2.167  2.485  2.787  3.078   3.450
  26    0.531  0.856  1.058  1.315  1.706  2.056  2.162  2.479  2.779  3.067   3.435
  27    0.531  0.855  1.057  1.314  1.703  2.052  2.158  2.473  2.771  3.057   3.421
  28    0.530  0.855  1.056  1.313  1.701  2.048  2.154  2.467  2.763  3.047   3.408
  29    0.530  0.854  1.055  1.311  1.699  2.045  2.150  2.462  2.756  3.038   3.396
  30    0.530  0.854  1.055  1.310  1.697  2.042  2.147  2.457  2.750  3.030   3.385
  31    0.530  0.853  1.054  1.309  1.696  2.040  2.144  2.453  2.744  3.022   3.375
  32    0.530  0.853  1.054  1.309  1.694  2.037  2.141  2.449  2.738  3.015   3.365
  33    0.530  0.853  1.053  1.308  1.692  2.035  2.138  2.445  2.733  3.008   3.356
  34    0.529  0.852  1.052  1.307  1.691  2.032  2.136  2.441  2.728  3.002   3.348
  35    0.529  0.852  1.052  1.306  1.690  2.030  2.133  2.438  2.724  2.996   3.340
  36    0.529  0.852  1.052  1.306  1.688  2.028  2.131  2.434  2.719  2.990   3.333
  37    0.529  0.851  1.051  1.305  1.687  2.026  2.129  2.431  2.715  2.985   3.326
  38    0.529  0.851  1.051  1.304  1.686  2.024  2.127  2.429  2.712  2.980   3.319
  39    0.529  0.851  1.050  1.304  1.685  2.023  2.125  2.426  2.708  2.976   3.313
  40    0.529  0.851  1.050  1.303  1.684  2.021  2.123  2.423  2.704  2.971   3.307
  50    0.528  0.849  1.047  1.299  1.676  2.009  2.109  2.403  2.678  2.937   3.261
 100    0.526  0.845  1.042  1.290  1.660  1.984  2.081  2.364  2.626  2.871   3.174
 200    0.525  0.843  1.039  1.286  1.653  1.972  2.067  2.345  2.601  2.839   3.131
  ∞     0.524  0.842  1.036  1.282  1.645  1.960  2.054  2.326  2.576  2.807   3.090
10.3 χ²-distribution (χ²_{ν;α})

Example: P(χ²₃ ≥ 6.25) = 0.1, thus χ²_{3;0.1} = 6.25.
@ α
0.005 0.01 0.025 0.05
ν @@
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
u
7.88
10.6
12.8
14.9
16.7
18.5
20.3
22.0
23.6
25.2
26.8
28.3
29.8
31.3
32.8
34.3
35.7
37.2
38.6
40.0
41.4
42.8
44.2
45.6
46.9
48.3
49.6
51.0
52.3
53.7
55.0
56.3
57.6
59.0
60.3
61.6
62.9
64.2
65.5
66.8
2.58
6.63
9.21
11.3
13.3
15.1
16.8
18.5
20.1
21.7
23.2
24.7
26.2
27.7
29.1
30.6
32.0
33.4
34.8
36.2
37.6
38.9
40.3
41.6
43.0
44.3
45.6
47.0
48.3
49.6
50.9
52.2
53.5
54.8
56.1
57.3
58.6
59.9
61.2
62.4
63.7
2.33
5.02
7.38
9.35
11.1
12.8
14.4
16.0
17.5
19.0
20.5
21.9
23.3
24.7
26.1
27.5
28.8
30.2
31.5
32.9
34.2
35.5
36.8
38.1
39.4
40.6
41.9
43.2
44.5
45.7
47.0
48.2
49.5
50.7
52.0
53.2
54.4
55.7
56.9
58.1
59.3
1.96
3.84
5.99
7.81
9.49
11.1
12.6
14.1
15.5
16.9
18.3
19.7
21.0
22.4
23.7
25.0
26.3
27.6
28.9
30.1
31.4
32.7
33.9
35.2
36.4
37.7
38.9
40.1
41.3
42.6
43.8
45.0
46.2
47.4
48.6
49.8
51.0
52.2
53.4
54.6
55.8
1.64
0.1 0.25
2.71
4.61
6.25
7.78
9.24
10.6
12.0
13.4
14.7
16.0
17.3
18.5
19.8
21.1
22.3
23.5
24.8
26.0
27.2
28.4
29.6
30.8
32.0
33.2
34.4
35.6
36.7
37.9
39.1
40.3
41.4
42.6
43.7
44.9
46.1
47.2
48.4
49.5
50.7
51.8
1.28
1.32
2.77
4.11
5.39
6.63
7.84
9.04
10.2
11.4
12.5
13.7
14.8
16.0
17.1
18.2
19.4
20.5
21.6
22.7
23.8
24.9
26.0
27.1
28.2
29.3
30.4
31.5
32.6
33.7
34.8
35.9
37.0
38.1
39.1
40.2
41.3
42.4
43.5
44.5
45.6
0.67
0.5
0.75
0.9
0.95 0.975
0.99 0.995
.455 .102 .016 .004 .000 .000 .000
1.39 .575 .211 .103 .051 .020 .010
2.37 1.21 .584 .352 .216 .115 .072
3.36 1.92 1.06 .711 .484 .297 .207
4.35 2.67 1.61 1.15 .831 0.55 .412
5.35 3.45 2.20 1.64 1.24 0.87 .676
6.35 4.25 2.83 2.17 1.69 1.24 .989
7.34 5.07 3.49 2.73 2.18 1.65 1.34
8.34 5.90 4.17 3.33 2.70 2.09 1.73
9.34 6.74 4.87 3.94 3.25 2.56 2.16
10.3 7.58 5.58 4.57 3.82 3.05 2.60
11.3 8.44 6.30 5.23 4.40 3.57 3.07
12.3 9.30 7.04 5.89 5.01 4.11 3.57
13.3 10.2 7.79 6.57 5.63 4.66 4.07
14.3 11.0 8.55 7.26 6.26 5.23 4.60
15.3 11.9 9.31 7.96 6.91 5.81 5.14
16.3 12.8 10.1 8.67 7.56 6.41 5.70
17.3 13.7 10.9 9.39 8.23 7.01 6.26
18.3 14.6 11.7 10.1 8.91 7.63 6.84
19.3 15.5 12.4 10.9 9.59 8.26 7.43
20.3 16.3 13.2 11.6 10.3 8.90 8.03
21.3 17.2 14.0 12.3 11.0 9.54 8.64
22.3 18.1 14.8 13.1 11.7 10.2 9.26
23.3 19.0 15.7 13.8 12.4 10.9 9.89
24.3 19.9 16.5 14.6 13.1 11.5 10.5
25.3 20.8 17.3 15.4 13.8 12.2 11.2
26.3 21.7 18.1 16.2 14.6 12.9 11.8
27.3 22.7 18.9 16.9 15.3 13.6 12.5
28.3 23.6 19.8 17.7 16.0 14.3 13.1
29.3 24.5 20.6 18.5 16.8 15.0 13.8
30.3 25.4 21.4 19.3 17.5 15.7 14.5
31.3 26.3 22.3 20.1 18.3 16.4 15.1
32.3 27.2 23.1 20.9 19.0 17.1 15.8
33.3 28.1 24.0 21.7 19.8 17.8 16.5
34.3 29.1 24.8 22.5 20.6 18.5 17.2
35.3 30.0 25.6 23.3 21.3 19.2 17.9
36.3 30.9 26.5 24.1 22.1 20.0 18.6
37.3 31.8 27.3 24.9 22.9 20.7 19.3
38.3 32.7 28.2 25.7 23.7 21.4 20.0
39.3 33.7 29.1 26.5 24.4 22.2 20.7
0.00 −0.67 −1.28 −1.64 −1.96 −2.33 −2.58
For values not in the table one can use the approximation of Wilson and Hilferty (see [6, p. 176]) to obtain the critical value: χ²_{ν;α} = ν · ( u·√(2/(9ν)) + 1 − 2/(9ν) )³, where u is given in the bottom line of the table.
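The Wilson–Hilferty approximation is easy to evaluate directly; a minimal sketch (Python assumed, with u = z_α taken from the bottom line of the table):

```python
import math

# Wilson-Hilferty approximation for chi-square critical values:
#   chi2_{nu;alpha} ~= nu * (u*sqrt(2/(9*nu)) + 1 - 2/(9*nu))**3,
# where u is the standard normal upper critical value z_alpha
# (the bottom line of the table, e.g. u = 1.64 for alpha = 0.05).
def chi2_critical(nu: int, u: float) -> float:
    c = 2.0 / (9.0 * nu)
    return nu * (u * math.sqrt(c) + 1.0 - c) ** 3

# Check against tabulated values (u = 1.645 for alpha = 0.05):
print(round(chi2_critical(10, 1.645), 1))  # table: 18.3
print(round(chi2_critical(40, 1.645), 1))  # table: 55.8
```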
CHAPTER 10. TABLES

10.4 F-distribution f^m_{n;α} with α = 0.10 (and α = 0.90)

Example: P(F^2_3 ≥ 5.46) = 0.1, thus f^2_{3;0.10} = 5.46. Lower-tail critical values follow from f^m_{n;0.90} = 1/f^n_{m;0.10}.

n\m 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1 39.9 49.5 53.6 55.8 57.2 58.2 58.9 59.4 59.9 60.2 60.5 60.7 60.9 61.1
2 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38 9.39 9.40 9.41 9.41 9.42
3 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24 5.23 5.22 5.22 5.21 5.20
4 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94 3.92 3.91 3.90 3.89 3.88
5 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32 3.30 3.28 3.27 3.26 3.25
6 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96 2.94 2.92 2.90 2.89 2.88
7 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72 2.70 2.68 2.67 2.65 2.64
8 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56 2.54 2.52 2.50 2.49 2.48
9 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 2.42 2.40 2.38 2.36 2.35
10 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35 2.32 2.30 2.28 2.27 2.26
11 3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27 2.25 2.23 2.21 2.19 2.18
12 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21 2.19 2.17 2.15 2.13 2.12
13 3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16 2.14 2.12 2.10 2.08 2.07
14 3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12 2.10 2.07 2.05 2.04 2.02
15 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09 2.06 2.04 2.02 2.00 1.99
16 3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06 2.03 2.01 1.99 1.97 1.95
17 3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03 2.00 1.98 1.96 1.94 1.93
18 3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00 1.98 1.95 1.93 1.92 1.90
19 2.99 2.61 2.40 2.27 2.18 2.11 2.06 2.02 1.98 1.96 1.93 1.91 1.89 1.88
20 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96 1.94 1.91 1.89 1.87 1.86
21 2.96 2.57 2.36 2.23 2.14 2.08 2.02 1.98 1.95 1.92 1.90 1.87 1.86 1.84
22 2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93 1.90 1.88 1.86 1.84 1.83
23 2.94 2.55 2.34 2.21 2.11 2.05 1.99 1.95 1.92 1.89 1.87 1.84 1.83 1.81
24 2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91 1.88 1.85 1.83 1.81 1.80
25 2.92 2.53 2.32 2.18 2.09 2.02 1.97 1.93 1.89 1.87 1.84 1.82 1.80 1.79
30 2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85 1.82 1.79 1.77 1.75 1.74
40 2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79 1.76 1.74 1.71 1.70 1.68
50 2.81 2.41 2.20 2.06 1.97 1.90 1.84 1.80 1.76 1.73 1.70 1.68 1.66 1.64
100 2.76 2.36 2.14 2.00 1.91 1.83 1.78 1.73 1.69 1.66 1.64 1.61 1.59 1.57
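The reciprocal rule f^m_{n;1−α} = 1/f^n_{m;α} lets the same table serve both tails; a minimal sketch of the lookup arithmetic (table value hard-coded from this section):

```python
# Lower-tail F critical values are obtained from the tabulated upper-tail
# values via f^m_{n;1-alpha} = 1 / f^n_{m;alpha} (degrees of freedom swapped).
def f_lower_tail(f_upper_swapped_df: float) -> float:
    """Given the tabulated f^n_{m;alpha}, return f^m_{n;1-alpha}."""
    return 1.0 / f_upper_swapped_df

# Example: f^2_{3;0.90} = 1 / f^3_{2;0.10}; the table gives f^3_{2;0.10} = 9.16.
print(round(f_lower_tail(9.16), 3))  # 0.109
```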
10.4 F-distribution f^m_{n;α} with α = 0.10 (and α = 0.90), continued

Example: P(F^17_3 ≥ 5.19) = 0.1, thus f^17_{3;0.10} = 5.19. f^m_{n;0.90} = 1/f^n_{m;0.10}.

n\m 15 16 17 18 19 20 21 22 23 24 25 30 40 50 100
1 61.2 61.3 61.5 61.6 61.7 61.7 61.8 61.9 61.9 62.0 62.1 62.3 62.5 62.7 63.0
2 9.42 9.43 9.43 9.44 9.44 9.44 9.44 9.45 9.45 9.45 9.45 9.46 9.47 9.47 9.48
3 5.20 5.20 5.19 5.19 5.19 5.18 5.18 5.18 5.18 5.18 5.17 5.17 5.16 5.15 5.14
4 3.87 3.86 3.86 3.85 3.85 3.84 3.84 3.84 3.83 3.83 3.83 3.82 3.80 3.80 3.78
5 3.24 3.23 3.22 3.22 3.21 3.21 3.20 3.20 3.19 3.19 3.19 3.17 3.16 3.15 3.13
6 2.87 2.86 2.85 2.85 2.84 2.84 2.83 2.83 2.82 2.82 2.81 2.80 2.78 2.77 2.75
7 2.63 2.62 2.61 2.61 2.60 2.59 2.59 2.58 2.58 2.58 2.57 2.56 2.54 2.52 2.50
8 2.46 2.45 2.45 2.44 2.43 2.42 2.42 2.41 2.41 2.40 2.40 2.38 2.36 2.35 2.32
9 2.34 2.33 2.32 2.31 2.30 2.30 2.29 2.29 2.28 2.28 2.27 2.25 2.23 2.22 2.19
10 2.24 2.23 2.22 2.22 2.21 2.20 2.19 2.19 2.18 2.18 2.17 2.16 2.13 2.12 2.09
11 2.17 2.16 2.15 2.14 2.13 2.12 2.12 2.11 2.11 2.10 2.10 2.08 2.05 2.04 2.01
12 2.10 2.09 2.08 2.08 2.07 2.06 2.05 2.05 2.04 2.04 2.03 2.01 1.99 1.97 1.94
13 2.05 2.04 2.03 2.02 2.01 2.01 2.00 1.99 1.99 1.98 1.98 1.96 1.93 1.92 1.88
14 2.01 2.00 1.99 1.98 1.97 1.96 1.96 1.95 1.94 1.94 1.93 1.91 1.89 1.87 1.83
15 1.97 1.96 1.95 1.94 1.93 1.92 1.92 1.91 1.90 1.90 1.89 1.87 1.85 1.83 1.79
16 1.94 1.93 1.92 1.91 1.90 1.89 1.88 1.88 1.87 1.87 1.86 1.84 1.81 1.79 1.76
17 1.91 1.90 1.89 1.88 1.87 1.86 1.86 1.85 1.84 1.84 1.83 1.81 1.78 1.76 1.73
18 1.89 1.87 1.86 1.85 1.84 1.84 1.83 1.82 1.82 1.81 1.80 1.78 1.75 1.74 1.70
19 1.86 1.85 1.84 1.83 1.82 1.81 1.81 1.80 1.79 1.79 1.78 1.76 1.73 1.71 1.67
20 1.84 1.83 1.82 1.81 1.80 1.79 1.79 1.78 1.77 1.77 1.76 1.74 1.71 1.69 1.65
21 1.83 1.81 1.80 1.79 1.78 1.78 1.77 1.76 1.75 1.75 1.74 1.72 1.69 1.67 1.63
22 1.81 1.80 1.79 1.78 1.77 1.76 1.75 1.74 1.74 1.73 1.73 1.70 1.67 1.65 1.61
23 1.80 1.78 1.77 1.76 1.75 1.74 1.74 1.73 1.72 1.72 1.71 1.69 1.66 1.64 1.59
24 1.78 1.77 1.76 1.75 1.74 1.73 1.72 1.71 1.71 1.70 1.70 1.67 1.64 1.62 1.58
25 1.77 1.76 1.75 1.74 1.73 1.72 1.71 1.70 1.70 1.69 1.68 1.66 1.63 1.61 1.56
30 1.72 1.71 1.70 1.69 1.68 1.67 1.66 1.65 1.64 1.64 1.63 1.61 1.57 1.55 1.51
40 1.66 1.65 1.64 1.62 1.61 1.61 1.60 1.59 1.58 1.57 1.57 1.54 1.51 1.48 1.43
50 1.63 1.61 1.60 1.59 1.58 1.57 1.56 1.55 1.54 1.54 1.53 1.50 1.46 1.44 1.39
100 1.56 1.54 1.53 1.52 1.50 1.49 1.48 1.48 1.47 1.46 1.45 1.42 1.38 1.35 1.29
10.5 F-distribution f^m_{n;α} with α = 0.05 (and α = 0.95)

Example: P(F^2_3 ≥ 9.55) = 0.05, thus f^2_{3;0.05} = 9.55. f^m_{n;0.95} = 1/f^n_{m;0.05}.

n\m 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 161 199 216 225 230 234 237 239 241 242 243 244 245 245 246
2 18.5 19.0 19.2 19.2 19.3 19.3 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.4
3 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.76 8.74 8.73 8.71 8.70
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.94 5.91 5.89 5.87 5.86
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.70 4.68 4.66 4.64 4.62
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.03 4.00 3.98 3.96 3.94
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.60 3.57 3.55 3.53 3.51
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.31 3.28 3.26 3.24 3.22
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.10 3.07 3.05 3.03 3.01
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.94 2.91 2.89 2.86 2.85
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.82 2.79 2.76 2.74 2.72
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.72 2.69 2.66 2.64 2.62
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.63 2.60 2.58 2.55 2.53
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.57 2.53 2.51 2.48 2.46
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.51 2.48 2.45 2.42 2.40
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.46 2.42 2.40 2.37 2.35
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.41 2.38 2.35 2.33 2.31
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.37 2.34 2.31 2.29 2.27
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.34 2.31 2.28 2.26 2.23
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.31 2.28 2.25 2.22 2.20
21 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 2.32 2.28 2.25 2.22 2.20 2.18
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.26 2.23 2.20 2.17 2.15
23 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 2.27 2.24 2.20 2.18 2.15 2.13
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.22 2.18 2.15 2.13 2.11
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.20 2.16 2.14 2.11 2.09
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.13 2.09 2.06 2.04 2.01
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.04 2.00 1.97 1.95 1.92
50 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07 2.03 1.99 1.95 1.92 1.89 1.87
100 3.94 3.09 2.70 2.46 2.31 2.19 2.10 2.03 1.97 1.93 1.89 1.85 1.82 1.79 1.77
10.5 F-distribution f^m_{n;α} with α = 0.05 (and α = 0.95), continued

Example: P(F^17_3 ≥ 8.68) = 0.05, thus f^17_{3;0.05} = 8.68. f^m_{n;0.95} = 1/f^n_{m;0.05}.

n\m 16 17 18 19 20 21 22 23 24 25 30 40 50 100
1 246 247 247 248 248 248 249 249 249 249 250 251 252 253
2 19.4 19.4 19.4 19.4 19.4 19.4 19.5 19.5 19.5 19.5 19.5 19.5 19.5 19.5
3 8.69 8.68 8.67 8.67 8.66 8.65 8.65 8.64 8.64 8.63 8.62 8.59 8.58 8.55
4 5.84 5.83 5.82 5.81 5.80 5.79 5.79 5.78 5.77 5.77 5.75 5.72 5.70 5.66
5 4.60 4.59 4.58 4.57 4.56 4.55 4.54 4.53 4.53 4.52 4.50 4.46 4.44 4.41
6 3.92 3.91 3.90 3.88 3.87 3.86 3.86 3.85 3.84 3.83 3.81 3.77 3.75 3.71
7 3.49 3.48 3.47 3.46 3.44 3.43 3.43 3.42 3.41 3.40 3.38 3.34 3.32 3.27
8 3.20 3.19 3.17 3.16 3.15 3.14 3.13 3.12 3.12 3.11 3.08 3.04 3.02 2.97
9 2.99 2.97 2.96 2.95 2.94 2.93 2.92 2.91 2.90 2.89 2.86 2.83 2.80 2.76
10 2.83 2.81 2.80 2.79 2.77 2.76 2.75 2.75 2.74 2.73 2.70 2.66 2.64 2.59
11 2.70 2.69 2.67 2.66 2.65 2.64 2.63 2.62 2.61 2.60 2.57 2.53 2.51 2.46
12 2.60 2.58 2.57 2.56 2.54 2.53 2.52 2.51 2.51 2.50 2.47 2.43 2.40 2.35
13 2.51 2.50 2.48 2.47 2.46 2.45 2.44 2.43 2.42 2.41 2.38 2.34 2.31 2.26
14 2.44 2.43 2.41 2.40 2.39 2.38 2.37 2.36 2.35 2.34 2.31 2.27 2.24 2.19
15 2.38 2.37 2.35 2.34 2.33 2.32 2.31 2.30 2.29 2.28 2.25 2.20 2.18 2.12
16 2.33 2.32 2.30 2.29 2.28 2.26 2.25 2.24 2.24 2.23 2.19 2.15 2.12 2.07
17 2.29 2.27 2.26 2.24 2.23 2.22 2.21 2.20 2.19 2.18 2.15 2.10 2.08 2.02
18 2.25 2.23 2.22 2.20 2.19 2.18 2.17 2.16 2.15 2.14 2.11 2.06 2.04 1.98
19 2.21 2.20 2.18 2.17 2.16 2.14 2.13 2.12 2.11 2.11 2.07 2.03 2.00 1.94
20 2.18 2.17 2.15 2.14 2.12 2.11 2.10 2.09 2.08 2.07 2.04 1.99 1.97 1.91
21 2.16 2.14 2.12 2.11 2.10 2.08 2.07 2.06 2.05 2.05 2.01 1.96 1.94 1.88
22 2.13 2.11 2.10 2.08 2.07 2.06 2.05 2.04 2.03 2.02 1.98 1.94 1.91 1.85
23 2.11 2.09 2.08 2.06 2.05 2.04 2.02 2.01 2.01 2.00 1.96 1.91 1.88 1.82
24 2.09 2.07 2.05 2.04 2.03 2.01 2.00 1.99 1.98 1.97 1.94 1.89 1.86 1.80
25 2.07 2.05 2.04 2.02 2.01 2.00 1.98 1.97 1.96 1.96 1.92 1.87 1.84 1.78
30 1.99 1.98 1.96 1.95 1.93 1.92 1.91 1.90 1.89 1.88 1.84 1.79 1.76 1.70
40 1.90 1.89 1.87 1.85 1.84 1.83 1.81 1.80 1.79 1.78 1.74 1.69 1.66 1.59
50 1.85 1.83 1.81 1.80 1.78 1.77 1.76 1.75 1.74 1.73 1.69 1.63 1.60 1.52
100 1.75 1.73 1.71 1.69 1.68 1.66 1.65 1.64 1.63 1.62 1.57 1.52 1.48 1.39
10.6 F-distribution f^m_{n;α} with α = 0.025 (and α = 0.975)

Example: P(F^2_3 ≥ 16.0) = 0.025, thus f^2_{3;0.025} = 16.0. f^m_{n;0.975} = 1/f^n_{m;0.025}.

n\m 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1 648 799 864 900 922 937 948 957 963 969 973 977 980 983
2 38.5 39.0 39.2 39.2 39.3 39.3 39.4 39.4 39.4 39.4 39.4 39.4 39.4 39.4
3 17.4 16.0 15.4 15.1 14.9 14.7 14.6 14.5 14.5 14.4 14.4 14.3 14.3 14.3
4 12.2 10.6 9.98 9.60 9.36 9.20 9.07 8.98 8.90 8.84 8.79 8.75 8.71 8.68
5 10.0 8.43 7.76 7.39 7.15 6.98 6.85 6.76 6.68 6.62 6.57 6.52 6.49 6.46
6 8.81 7.26 6.60 6.23 5.99 5.82 5.70 5.60 5.52 5.46 5.41 5.37 5.33 5.30
7 8.07 6.54 5.89 5.52 5.29 5.12 4.99 4.90 4.82 4.76 4.71 4.67 4.63 4.60
8 7.57 6.06 5.42 5.05 4.82 4.65 4.53 4.43 4.36 4.30 4.24 4.20 4.16 4.13
9 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 3.96 3.91 3.87 3.83 3.80
10 6.94 5.46 4.83 4.47 4.24 4.07 3.95 3.85 3.78 3.72 3.66 3.62 3.58 3.55
11 6.72 5.26 4.63 4.28 4.04 3.88 3.76 3.66 3.59 3.53 3.47 3.43 3.39 3.36
12 6.55 5.10 4.47 4.12 3.89 3.73 3.61 3.51 3.44 3.37 3.32 3.28 3.24 3.21
13 6.41 4.97 4.35 4.00 3.77 3.60 3.48 3.39 3.31 3.25 3.20 3.15 3.12 3.08
14 6.30 4.86 4.24 3.89 3.66 3.50 3.38 3.29 3.21 3.15 3.09 3.05 3.01 2.98
15 6.20 4.77 4.15 3.80 3.58 3.41 3.29 3.20 3.12 3.06 3.01 2.96 2.92 2.89
16 6.12 4.69 4.08 3.73 3.50 3.34 3.22 3.12 3.05 2.99 2.93 2.89 2.85 2.82
17 6.04 4.62 4.01 3.66 3.44 3.28 3.16 3.06 2.98 2.92 2.87 2.82 2.79 2.75
18 5.98 4.56 3.95 3.61 3.38 3.22 3.10 3.01 2.93 2.87 2.81 2.77 2.73 2.70
19 5.92 4.51 3.90 3.56 3.33 3.17 3.05 2.96 2.88 2.82 2.76 2.72 2.68 2.65
20 5.87 4.46 3.86 3.51 3.29 3.13 3.01 2.91 2.84 2.77 2.72 2.68 2.64 2.60
21 5.83 4.42 3.82 3.48 3.25 3.09 2.97 2.87 2.80 2.73 2.68 2.64 2.60 2.56
22 5.79 4.38 3.78 3.44 3.22 3.05 2.93 2.84 2.76 2.70 2.65 2.60 2.56 2.53
23 5.75 4.35 3.75 3.41 3.18 3.02 2.90 2.81 2.73 2.67 2.62 2.57 2.53 2.50
24 5.72 4.32 3.72 3.38 3.15 2.99 2.87 2.78 2.70 2.64 2.59 2.54 2.50 2.47
25 5.69 4.29 3.69 3.35 3.13 2.97 2.85 2.75 2.68 2.61 2.56 2.51 2.48 2.44
30 5.57 4.18 3.59 3.25 3.03 2.87 2.75 2.65 2.57 2.51 2.46 2.41 2.37 2.34
40 5.42 4.05 3.46 3.13 2.90 2.74 2.62 2.53 2.45 2.39 2.33 2.29 2.25 2.21
50 5.34 3.97 3.39 3.05 2.83 2.67 2.55 2.46 2.38 2.32 2.26 2.22 2.18 2.14
100 5.18 3.83 3.25 2.92 2.70 2.54 2.42 2.32 2.24 2.18 2.12 2.08 2.04 2.00
10.6 F-distribution f^m_{n;α} with α = 0.025 (and α = 0.975), continued

Example: P(F^17_4 ≥ 8.61) = 0.025, thus f^17_{4;0.025} = 8.61. f^m_{n;0.975} = 1/f^n_{m;0.025}.

n\m 15 16 17 18 19 20 21 22 23 24 25 30 40 50 100
1 985 987 989 990 992 993 994 995 996 997 998 1000 1010 1010 1010
2 39.4 39.4 39.4 39.4 39.4 39.4 39.5 39.5 39.5 39.5 39.5 39.5 39.5 39.5 39.5
3 14.3 14.2 14.2 14.2 14.2 14.2 14.2 14.1 14.1 14.1 14.1 14.1 14.0 14.0 14.0
4 8.66 8.63 8.61 8.59 8.58 8.56 8.55 8.53 8.52 8.51 8.50 8.46 8.41 8.38 8.32
5 6.43 6.40 6.38 6.36 6.34 6.33 6.31 6.30 6.29 6.28 6.27 6.23 6.18 6.14 6.08
6 5.27 5.24 5.22 5.20 5.18 5.17 5.15 5.14 5.13 5.12 5.11 5.07 5.01 4.98 4.92
7 4.57 4.54 4.52 4.50 4.48 4.47 4.45 4.44 4.43 4.41 4.40 4.36 4.31 4.28 4.21
8 4.10 4.08 4.05 4.03 4.02 4.00 3.98 3.97 3.96 3.95 3.94 3.89 3.84 3.81 3.74
9 3.77 3.74 3.72 3.70 3.68 3.67 3.65 3.64 3.63 3.61 3.60 3.56 3.51 3.47 3.40
10 3.52 3.50 3.47 3.45 3.44 3.42 3.40 3.39 3.38 3.37 3.35 3.31 3.26 3.22 3.15
11 3.33 3.30 3.28 3.26 3.24 3.23 3.21 3.20 3.18 3.17 3.16 3.12 3.06 3.03 2.96
12 3.18 3.15 3.13 3.11 3.09 3.07 3.06 3.04 3.03 3.02 3.01 2.96 2.91 2.87 2.80
13 3.05 3.03 3.00 2.98 2.96 2.95 2.93 2.92 2.91 2.89 2.88 2.84 2.78 2.74 2.67
14 2.95 2.92 2.90 2.88 2.86 2.84 2.83 2.81 2.80 2.79 2.78 2.73 2.67 2.64 2.56
15 2.86 2.84 2.81 2.79 2.77 2.76 2.74 2.73 2.71 2.70 2.69 2.64 2.59 2.55 2.47
16 2.79 2.76 2.74 2.72 2.70 2.68 2.67 2.65 2.64 2.63 2.61 2.57 2.51 2.47 2.40
17 2.72 2.70 2.67 2.65 2.63 2.62 2.60 2.59 2.57 2.56 2.55 2.50 2.44 2.41 2.33
18 2.67 2.64 2.62 2.60 2.58 2.56 2.54 2.53 2.52 2.50 2.49 2.44 2.38 2.35 2.27
19 2.62 2.59 2.57 2.55 2.53 2.51 2.49 2.48 2.46 2.45 2.44 2.39 2.33 2.30 2.22
20 2.57 2.55 2.52 2.50 2.48 2.46 2.45 2.43 2.42 2.41 2.40 2.35 2.29 2.25 2.17
21 2.53 2.51 2.48 2.46 2.44 2.42 2.41 2.39 2.38 2.37 2.36 2.31 2.25 2.21 2.13
22 2.50 2.47 2.45 2.43 2.41 2.39 2.37 2.36 2.34 2.33 2.32 2.27 2.21 2.17 2.09
23 2.47 2.44 2.42 2.39 2.37 2.36 2.34 2.33 2.31 2.30 2.29 2.24 2.18 2.14 2.06
24 2.44 2.41 2.39 2.36 2.35 2.33 2.31 2.30 2.28 2.27 2.26 2.21 2.15 2.11 2.02
25 2.41 2.38 2.36 2.34 2.32 2.30 2.28 2.27 2.26 2.24 2.23 2.18 2.12 2.08 2.00
30 2.31 2.28 2.26 2.23 2.21 2.20 2.18 2.16 2.15 2.14 2.12 2.07 2.01 1.97 1.88
40 2.18 2.15 2.13 2.11 2.09 2.07 2.05 2.03 2.02 2.01 1.99 1.94 1.88 1.83 1.74
50 2.11 2.08 2.06 2.03 2.01 1.99 1.98 1.96 1.95 1.93 1.92 1.87 1.80 1.75 1.66
100 1.97 1.94 1.91 1.89 1.87 1.85 1.83 1.81 1.80 1.78 1.77 1.71 1.64 1.59 1.48
10.7 F-distribution f^m_{n;α} with α = 0.01 (and α = 0.99)

Example: P(F^2_3 ≥ 30.8) = 0.01, thus f^2_{3;0.01} = 30.8. f^m_{n;0.99} = 1/f^n_{m;0.01}.

n\m 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1 4050 5000 5400 5620 5760 5860 5930 5980 6020 6060 6080 6110 6130 6140
2 98.5 99.0 99.2 99.2 99.3 99.3 99.4 99.4 99.4 99.4 99.4 99.4 99.4 99.4
3 34.1 30.8 29.5 28.7 28.2 27.9 27.7 27.5 27.3 27.2 27.1 27.1 27.0 26.9
4 21.2 18.0 16.7 16.0 15.5 15.2 15.0 14.8 14.7 14.5 14.5 14.4 14.3 14.2
5 16.3 13.3 12.1 11.4 11.0 10.7 10.5 10.3 10.2 10.1 9.96 9.89 9.82 9.77
6 13.7 10.9 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.79 7.72 7.66 7.60
7 12.2 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.54 6.47 6.41 6.36
8 11.3 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.73 5.67 5.61 5.56
9 10.6 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.18 5.11 5.05 5.01
10 10.0 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.77 4.71 4.65 4.60
11 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 4.54 4.46 4.40 4.34 4.29
12 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.22 4.16 4.10 4.05
13 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 4.10 4.02 3.96 3.91 3.86
14 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 3.94 3.86 3.80 3.75 3.70
15 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.73 3.67 3.61 3.56
16 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 3.69 3.62 3.55 3.50 3.45
17 8.40 6.11 5.18 4.67 4.34 4.10 3.93 3.79 3.68 3.59 3.52 3.46 3.40 3.35
18 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 3.51 3.43 3.37 3.32 3.27
19 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52 3.43 3.36 3.30 3.24 3.19
20 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.29 3.23 3.18 3.13
21 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40 3.31 3.24 3.17 3.12 3.07
22 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 3.26 3.18 3.12 3.07 3.02
23 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30 3.21 3.14 3.07 3.02 2.97
24 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.17 3.09 3.03 2.98 2.93
25 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22 3.13 3.06 2.99 2.94 2.89
30 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.91 2.84 2.79 2.74
40 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.73 2.66 2.61 2.56
50 7.17 5.06 4.20 3.72 3.41 3.19 3.02 2.89 2.78 2.70 2.63 2.56 2.51 2.46
100 6.90 4.82 3.98 3.51 3.21 2.99 2.82 2.69 2.59 2.50 2.43 2.37 2.31 2.27
10.7 F-distribution f^m_{n;α} with α = 0.01 (and α = 0.99), continued

Example: P(F^17_5 ≥ 9.64) = 0.01, thus f^17_{5;0.01} = 9.64. f^m_{n;0.99} = 1/f^n_{m;0.01}.

n\m 15 16 17 18 19 20 21 22 23 24 25 30 40 50 100
1 6160 6170 6180 6190 6200 6210 6220 6220 6230 6230 6240 6260 6290 6300 6330
2 99.4 99.4 99.4 99.4 99.4 99.4 99.5 99.5 99.5 99.5 99.5 99.5 99.5 99.5 99.5
3 26.9 26.8 26.8 26.8 26.7 26.7 26.7 26.6 26.6 26.6 26.6 26.5 26.4 26.4 26.2
4 14.2 14.2 14.1 14.1 14.0 14.0 14.0 14.0 13.9 13.9 13.9 13.8 13.7 13.7 13.6
5 9.72 9.68 9.64 9.61 9.58 9.55 9.53 9.51 9.49 9.47 9.45 9.38 9.29 9.24 9.13
6 7.56 7.52 7.48 7.45 7.42 7.40 7.37 7.35 7.33 7.31 7.30 7.23 7.14 7.09 6.99
7 6.31 6.28 6.24 6.21 6.18 6.16 6.13 6.11 6.09 6.07 6.06 5.99 5.91 5.86 5.75
8 5.52 5.48 5.44 5.41 5.38 5.36 5.34 5.32 5.30 5.28 5.26 5.20 5.12 5.07 4.96
9 4.96 4.92 4.89 4.86 4.83 4.81 4.79 4.77 4.75 4.73 4.71 4.65 4.57 4.52 4.41
10 4.56 4.52 4.49 4.46 4.43 4.41 4.38 4.36 4.34 4.33 4.31 4.25 4.17 4.12 4.01
11 4.25 4.21 4.18 4.15 4.12 4.10 4.08 4.06 4.04 4.02 4.01 3.94 3.86 3.81 3.71
12 4.01 3.97 3.94 3.91 3.88 3.86 3.84 3.82 3.80 3.78 3.76 3.70 3.62 3.57 3.47
13 3.82 3.78 3.75 3.72 3.69 3.66 3.64 3.62 3.60 3.59 3.57 3.51 3.43 3.38 3.27
14 3.66 3.62 3.59 3.56 3.53 3.51 3.48 3.46 3.44 3.43 3.41 3.35 3.27 3.22 3.11
15 3.52 3.49 3.45 3.42 3.40 3.37 3.35 3.33 3.31 3.29 3.28 3.21 3.13 3.08 2.98
16 3.41 3.37 3.34 3.31 3.28 3.26 3.24 3.22 3.20 3.18 3.16 3.10 3.02 2.97 2.86
17 3.31 3.27 3.24 3.21 3.19 3.16 3.14 3.12 3.10 3.08 3.07 3.00 2.92 2.87 2.76
18 3.23 3.19 3.16 3.13 3.10 3.08 3.05 3.03 3.02 3.00 2.98 2.92 2.84 2.78 2.68
19 3.15 3.12 3.08 3.05 3.03 3.00 2.98 2.96 2.94 2.92 2.91 2.84 2.76 2.71 2.60
20 3.09 3.05 3.02 2.99 2.96 2.94 2.92 2.90 2.88 2.86 2.84 2.78 2.69 2.64 2.54
21 3.03 2.99 2.96 2.93 2.90 2.88 2.86 2.84 2.82 2.80 2.79 2.72 2.64 2.58 2.48
22 2.98 2.94 2.91 2.88 2.85 2.83 2.81 2.78 2.77 2.75 2.73 2.67 2.58 2.53 2.42
23 2.93 2.89 2.86 2.83 2.80 2.78 2.76 2.74 2.72 2.70 2.69 2.62 2.54 2.48 2.37
24 2.89 2.85 2.82 2.79 2.76 2.74 2.72 2.70 2.68 2.66 2.64 2.58 2.49 2.44 2.33
25 2.85 2.81 2.78 2.75 2.72 2.70 2.68 2.66 2.64 2.62 2.60 2.54 2.45 2.40 2.29
30 2.70 2.66 2.63 2.60 2.57 2.55 2.53 2.51 2.49 2.47 2.45 2.39 2.30 2.25 2.13
40 2.52 2.48 2.45 2.42 2.39 2.37 2.35 2.33 2.31 2.29 2.27 2.20 2.11 2.06 1.94
50 2.42 2.38 2.35 2.32 2.29 2.27 2.24 2.22 2.20 2.18 2.17 2.10 2.01 1.95 1.82
100 2.22 2.19 2.15 2.12 2.09 2.07 2.04 2.02 2.00 1.98 1.97 1.89 1.80 1.74 1.60
10.8 F-distribution f^m_{n;α} with α = 0.005 (and α = 0.995)

Example: P(F^2_3 ≥ 49.8) = 0.005, thus f^2_{3;0.005} = 49.8. f^m_{n;0.995} = 1/f^n_{m;0.005}.

n\m 1 2 3 4 5 6 7 8 9 10 11 12 13 14
2 199 199 199 199 199 199 199 199 199 199 199 199 199 199
3 55.6 49.8 47.5 46.2 45.4 44.8 44.4 44.1 43.9 43.7 43.5 43.4 43.3 43.2
4 31.3 26.3 24.3 23.2 22.5 22.0 21.6 21.4 21.1 21.0 20.8 20.7 20.6 20.5
5 22.8 18.3 16.5 15.6 14.9 14.5 14.2 14.0 13.8 13.6 13.5 13.4 13.3 13.2
6 18.6 14.5 12.9 12.0 11.5 11.1 10.8 10.6 10.4 10.3 10.1 10.0 9.95 9.88
7 16.2 12.4 10.9 10.1 9.52 9.16 8.89 8.68 8.51 8.38 8.27 8.18 8.10 8.03
8 14.7 11.0 9.60 8.81 8.30 7.95 7.69 7.50 7.34 7.21 7.10 7.01 6.94 6.87
9 13.6 10.1 8.72 7.96 7.47 7.13 6.88 6.69 6.54 6.42 6.31 6.23 6.15 6.09
10 12.8 9.43 8.08 7.34 6.87 6.54 6.30 6.12 5.97 5.85 5.75 5.66 5.59 5.53
11 12.2 8.91 7.60 6.88 6.42 6.10 5.86 5.68 5.54 5.42 5.32 5.24 5.16 5.10
12 11.8 8.51 7.23 6.52 6.07 5.76 5.52 5.35 5.20 5.09 4.99 4.91 4.84 4.77
13 11.4 8.19 6.93 6.23 5.79 5.48 5.25 5.08 4.94 4.82 4.72 4.64 4.57 4.51
14 11.1 7.92 6.68 6.00 5.56 5.26 5.03 4.86 4.72 4.60 4.51 4.43 4.36 4.30
15 10.8 7.70 6.48 5.80 5.37 5.07 4.85 4.67 4.54 4.42 4.33 4.25 4.18 4.12
16 10.6 7.51 6.30 5.64 5.21 4.91 4.69 4.52 4.38 4.27 4.18 4.10 4.03 3.97
17 10.4 7.35 6.16 5.50 5.07 4.78 4.56 4.39 4.25 4.14 4.05 3.97 3.90 3.84
18 10.2 7.21 6.03 5.37 4.96 4.66 4.44 4.28 4.14 4.03 3.94 3.86 3.79 3.73
19 10.1 7.09 5.92 5.27 4.85 4.56 4.34 4.18 4.04 3.93 3.84 3.76 3.70 3.64
20 9.94 6.99 5.82 5.17 4.76 4.47 4.26 4.09 3.96 3.85 3.76 3.68 3.61 3.55
21 9.83 6.89 5.73 5.09 4.68 4.39 4.18 4.01 3.88 3.77 3.68 3.60 3.54 3.48
22 9.73 6.81 5.65 5.02 4.61 4.32 4.11 3.94 3.81 3.70 3.61 3.54 3.47 3.41
23 9.63 6.73 5.58 4.95 4.54 4.26 4.05 3.88 3.75 3.64 3.55 3.47 3.41 3.35
24 9.55 6.66 5.52 4.89 4.49 4.20 3.99 3.83 3.69 3.59 3.50 3.42 3.35 3.30
25 9.48 6.60 5.46 4.84 4.43 4.15 3.94 3.78 3.64 3.54 3.45 3.37 3.30 3.25
30 9.18 6.35 5.24 4.62 4.23 3.95 3.74 3.58 3.45 3.34 3.25 3.18 3.11 3.06
40 8.83 6.07 4.98 4.37 3.99 3.71 3.51 3.35 3.22 3.12 3.03 2.95 2.89 2.83
50 8.63 5.90 4.83 4.23 3.85 3.58 3.38 3.22 3.09 2.99 2.90 2.82 2.76 2.70
100 8.24 5.59 4.54 3.96 3.59 3.33 3.13 2.97 2.85 2.74 2.66 2.58 2.52 2.46
10.8 F-distribution f^m_{n;α} with α = 0.005 (and α = 0.995), continued

Example: P(F^16_3 ≥ 43.0) = 0.005, thus f^16_{3;0.005} = 43.0. f^m_{n;0.995} = 1/f^n_{m;0.005}.

n\m 15 16 17 18 19 20 21 22 23 24 25 30 40 50 100
2 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199
3 43.1 43.0 42.9 42.9 42.8 42.8 42.7 42.7 42.7 42.6 42.6 42.5 42.3 42.2 42.0
4 20.4 20.4 20.3 20.3 20.2 20.2 20.1 20.1 20.1 20.0 20.0 19.9 19.8 19.7 19.5
5 13.1 13.1 13.0 13.0 12.9 12.9 12.9 12.8 12.8 12.8 12.8 12.7 12.5 12.5 12.3
6 9.81 9.76 9.71 9.66 9.62 9.59 9.56 9.53 9.50 9.47 9.45 9.36 9.24 9.17 9.03
7 7.97 7.91 7.87 7.83 7.79 7.75 7.72 7.69 7.67 7.64 7.62 7.53 7.42 7.35 7.22
8 6.81 6.76 6.72 6.68 6.64 6.61 6.58 6.55 6.53 6.50 6.48 6.40 6.29 6.22 6.09
9 6.03 5.98 5.94 5.90 5.86 5.83 5.80 5.78 5.75 5.73 5.71 5.62 5.52 5.45 5.32
10 5.47 5.42 5.38 5.34 5.31 5.27 5.25 5.22 5.20 5.17 5.15 5.07 4.97 4.90 4.77
11 5.05 5.00 4.96 4.92 4.89 4.86 4.83 4.80 4.78 4.76 4.74 4.65 4.55 4.49 4.36
12 4.72 4.67 4.63 4.59 4.56 4.53 4.50 4.48 4.45 4.43 4.41 4.33 4.23 4.17 4.04
13 4.46 4.41 4.37 4.33 4.30 4.27 4.24 4.22 4.19 4.17 4.15 4.07 3.97 3.91 3.78
14 4.25 4.20 4.16 4.12 4.09 4.06 4.03 4.01 3.98 3.96 3.94 3.86 3.76 3.70 3.57
15 4.07 4.02 3.98 3.95 3.91 3.88 3.86 3.83 3.81 3.79 3.77 3.69 3.58 3.52 3.39
16 3.92 3.87 3.83 3.80 3.76 3.73 3.71 3.68 3.66 3.64 3.62 3.54 3.44 3.37 3.25
17 3.79 3.75 3.71 3.67 3.64 3.61 3.58 3.56 3.53 3.51 3.49 3.41 3.31 3.25 3.12
18 3.68 3.64 3.60 3.56 3.53 3.50 3.47 3.45 3.42 3.40 3.38 3.30 3.20 3.14 3.01
19 3.59 3.54 3.50 3.46 3.43 3.40 3.37 3.35 3.33 3.31 3.29 3.21 3.11 3.04 2.91
20 3.50 3.46 3.42 3.38 3.35 3.32 3.29 3.27 3.24 3.22 3.20 3.12 3.02 2.96 2.83
21 3.43 3.38 3.34 3.31 3.27 3.24 3.22 3.19 3.17 3.15 3.13 3.05 2.95 2.88 2.75
22 3.36 3.31 3.27 3.24 3.21 3.18 3.15 3.12 3.10 3.08 3.06 2.98 2.88 2.82 2.69
23 3.30 3.25 3.21 3.18 3.15 3.12 3.09 3.06 3.04 3.02 3.00 2.92 2.82 2.76 2.62
24 3.25 3.20 3.16 3.12 3.09 3.06 3.04 3.01 2.99 2.97 2.95 2.87 2.77 2.70 2.57
25 3.20 3.15 3.11 3.08 3.04 3.01 2.99 2.96 2.94 2.92 2.90 2.82 2.72 2.65 2.52
30 3.01 2.96 2.92 2.89 2.85 2.82 2.80 2.77 2.75 2.73 2.71 2.63 2.52 2.46 2.32
40 2.78 2.74 2.70 2.66 2.63 2.60 2.57 2.55 2.52 2.50 2.48 2.40 2.30 2.23 2.09
50 2.65 2.61 2.57 2.53 2.50 2.47 2.44 2.42 2.39 2.37 2.35 2.27 2.16 2.10 1.95
100 2.41 2.37 2.33 2.29 2.26 2.23 2.20 2.17 2.15 2.13 2.11 2.02 1.91 1.84 1.68
10.9 Studentized range q_{a,f}(α) with α = 0.10

Let Y_ij = μ + α_i + ε_ij (i = 1, …, a and j = 1, …, n_i) with corresponding multiple-comparison confidence intervals Ȳ_{i1·} − Ȳ_{i2·} ± q_{a,f}(α) · √( (MSE/2)(1/n_{i1} + 1/n_{i2}) ), with a treatments and f = Σ_{i=1}^{a} n_i − a degrees of freedom of MSE. Example: q_{2,3}(0.10) = 3.328.

Rows: f = 1, …, 40; columns: a = 2, …, 10. (The values below appear column by column: the 40 entries for a = 2 first, then those for a = 3, and so on.)
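As a sketch of how a tabulated q_{a,f}(α) is used (Python; the MSE value and group sizes below are made-up illustration numbers, not from the text):

```python
import math

# Half-width of the Tukey multiple-comparison interval for mean_i1 - mean_i2:
#   q_{a,f}(alpha) * sqrt((MSE/2) * (1/n_i1 + 1/n_i2))
def tukey_halfwidth(q: float, mse: float, n1: int, n2: int) -> float:
    return q * math.sqrt((mse / 2.0) * (1.0 / n1 + 1.0 / n2))

# a = 2 treatments, f = 3 df for MSE, alpha = 0.10 -> q_{2,3}(0.10) = 3.328
# (from the table). MSE = 2.0, n1 = 3, n2 = 2 are hypothetical.
print(round(tukey_halfwidth(3.328, 2.0, 3, 2), 3))  # 3.038
```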
8.929
4.129
3.328
3.015
2.850
2.748
2.679
2.630
2.592
2.563
2.540
2.521
2.504
2.491
2.479
2.469
2.460
2.452
2.445
2.439
2.433
2.428
2.424
2.420
2.416
2.412
2.409
2.406
2.403
2.400
2.398
2.396
2.393
2.391
2.389
2.388
2.386
2.384
2.383
2.381
13.44
5.733
4.467
3.976
3.717
3.558
3.451
3.374
3.316
3.270
3.234
3.204
3.179
3.158
3.140
3.124
3.110
3.098
3.087
3.077
3.069
3.061
3.054
3.047
3.041
3.036
3.030
3.026
3.021
3.017
3.013
3.010
3.006
3.003
3.000
2.998
2.995
2.992
2.990
2.988
16.36
6.772
5.199
4.586
4.264
4.065
3.931
3.834
3.761
3.704
3.658
3.621
3.589
3.563
3.540
3.520
3.503
3.487
3.474
3.462
3.451
3.441
3.432
3.423
3.416
3.409
3.402
3.396
3.391
3.386
3.381
3.376
3.372
3.368
3.364
3.361
3.357
3.354
3.351
3.348
18.49
7.538
5.738
5.035
4.664
4.435
4.280
4.169
4.084
4.018
3.965
3.921
3.885
3.854
3.828
3.804
3.784
3.766
3.751
3.736
3.724
3.712
3.701
3.692
3.683
3.675
3.667
3.660
3.654
3.648
3.642
3.637
3.632
3.627
3.623
3.619
3.615
3.611
3.608
3.605
20.15
8.139
6.162
5.388
4.979
4.726
4.555
4.431
4.337
4.264
4.205
4.156
4.116
4.081
4.052
4.026
4.003
3.984
3.966
3.950
3.936
3.923
3.911
3.900
3.890
3.881
3.873
3.865
3.858
3.851
3.845
3.839
3.833
3.828
3.823
3.819
3.814
3.810
3.806
3.802
21.50
8.633
6.511
5.679
5.238
4.966
4.780
4.646
4.545
4.465
4.401
4.349
4.304
4.267
4.235
4.207
4.182
4.161
4.142
4.124
4.109
4.095
4.082
4.070
4.059
4.049
4.040
4.032
4.024
4.016
4.009
4.003
3.997
3.991
3.986
3.981
3.976
3.972
3.967
3.963
22.64
9.049
6.806
5.926
5.458
5.168
4.971
4.829
4.721
4.636
4.567
4.511
4.464
4.424
4.390
4.360
4.334
4.310
4.290
4.271
4.255
4.239
4.226
4.213
4.201
4.191
4.181
4.172
4.163
4.155
4.148
4.141
4.135
4.129
4.123
4.117
4.112
4.107
4.103
4.099
23.62
9.409
7.062
6.139
5.648
5.344
5.137
4.987
4.873
4.783
4.711
4.652
4.602
4.560
4.524
4.492
4.464
4.440
4.418
4.398
4.380
4.364
4.350
4.336
4.324
4.313
4.302
4.293
4.284
4.275
4.268
4.260
4.253
4.247
4.241
4.235
4.230
4.224
4.220
4.215
24.48
9.725
7.287
6.327
5.816
5.499
5.283
5.126
5.007
4.913
4.838
4.776
4.724
4.679
4.641
4.608
4.579
4.553
4.530
4.510
4.491
4.474
4.459
4.445
4.432
4.420
4.409
4.399
4.389
4.381
4.372
4.365
4.357
4.351
4.344
4.338
4.332
4.327
4.322
4.317
10.10 Studentized range q_{a,f}(α) with α = 0.05

Let Y_ij = μ + α_i + ε_ij (i = 1, …, a and j = 1, …, n_i) with corresponding multiple-comparison confidence intervals Ȳ_{i1·} − Ȳ_{i2·} ± q_{a,f}(α) · √( (MSE/2)(1/n_{i1} + 1/n_{i2}) ), with a treatments and f = Σ_{i=1}^{a} n_i − a degrees of freedom of MSE. Example: q_{2,3}(0.05) = 4.501.

Rows: f = 1, …, 40; columns: a = 2, …, 10. (The values below appear column by column: the 40 entries for a = 2 first, then those for a = 3, and so on.)
17.97
6.085
4.501
3.926
3.635
3.460
3.344
3.261
3.199
3.151
3.113
3.081
3.055
3.033
3.014
2.998
2.984
2.971
2.960
2.950
2.941
2.933
2.926
2.919
2.913
2.907
2.902
2.897
2.892
2.888
2.884
2.881
2.877
2.874
2.871
2.868
2.865
2.863
2.861
2.858
26.98
8.331
5.910
5.040
4.602
4.339
4.165
4.041
3.948
3.877
3.820
3.773
3.734
3.701
3.673
3.649
3.628
3.609
3.593
3.578
3.565
3.553
3.542
3.532
3.523
3.514
3.506
3.499
3.493
3.486
3.481
3.475
3.470
3.465
3.461
3.457
3.453
3.449
3.445
3.442
32.82
9.798
6.825
5.757
5.218
4.896
4.681
4.529
4.415
4.327
4.256
4.199
4.151
4.111
4.076
4.046
4.020
3.997
3.977
3.958
3.942
3.927
3.914
3.901
3.890
3.880
3.870
3.861
3.853
3.845
3.838
3.832
3.825
3.820
3.814
3.809
3.804
3.799
3.795
3.791
37.08
10.88
7.502
6.287
5.673
5.305
5.060
4.886
4.755
4.654
4.574
4.508
4.453
4.407
4.367
4.333
4.303
4.276
4.253
4.232
4.213
4.196
4.180
4.166
4.153
4.141
4.130
4.120
4.111
4.102
4.094
4.086
4.079
4.072
4.066
4.060
4.054
4.049
4.044
4.039
40.41
11.73
8.037
6.706
6.033
5.628
5.359
5.167
5.024
4.912
4.823
4.750
4.690
4.639
4.595
4.557
4.524
4.494
4.468
4.445
4.424
4.405
4.388
4.373
4.358
4.345
4.333
4.322
4.311
4.301
4.292
4.284
4.276
4.268
4.261
4.255
4.249
4.243
4.237
4.232
43.12
12.44
8.478
7.053
6.330
5.895
5.606
5.399
5.244
5.124
5.028
4.950
4.884
4.829
4.782
4.741
4.705
4.673
4.645
4.620
4.597
4.577
4.558
4.541
4.526
4.511
4.498
4.486
4.475
4.464
4.454
4.445
4.436
4.428
4.421
4.414
4.407
4.400
4.394
4.388
45.40
13.03
8.853
7.347
6.582
6.122
5.815
5.596
5.432
5.304
5.202
5.119
5.049
4.990
4.940
4.896
4.858
4.824
4.794
4.768
4.743
4.722
4.702
4.684
4.667
4.652
4.638
4.625
4.613
4.601
4.591
4.581
4.572
4.563
4.555
4.547
4.540
4.533
4.527
4.521
47.36
13.54
9.177
7.602
6.801
6.319
5.997
5.767
5.595
5.461
5.353
5.265
5.192
5.130
5.077
5.031
4.991
4.955
4.924
4.895
4.870
4.847
4.826
4.807
4.789
4.773
4.758
4.745
4.732
4.720
4.709
4.698
4.689
4.680
4.671
4.663
4.655
4.648
4.641
4.634
49.07
13.99
9.462
7.826
6.995
6.493
6.158
5.918
5.738
5.598
5.486
5.395
5.318
5.253
5.198
5.150
5.108
5.071
5.037
5.008
4.981
4.957
4.935
4.915
4.897
4.880
4.864
4.850
4.837
4.824
4.812
4.802
4.791
4.782
4.773
4.764
4.756
4.749
4.741
4.735
10.11 Studentized range q_{a,f}(α) with α = 0.01

Let Y_ij = μ + α_i + ε_ij (i = 1, …, a and j = 1, …, n_i) with corresponding multiple-comparison confidence intervals Ȳ_{i1·} − Ȳ_{i2·} ± q_{a,f}(α) · √( (MSE/2)(1/n_{i1} + 1/n_{i2}) ), with a treatments and f = Σ_{i=1}^{a} n_i − a degrees of freedom of MSE. Example: q_{2,3}(0.01) = 8.260.

Rows: f = 1, …, 40; columns: a = 2, …, 10. (The values below appear column by column: the 40 entries for a = 2 first, then those for a = 3, and so on.)
90.02
14.04
8.260
6.511
5.702
5.243
4.949
4.745
4.596
4.482
4.392
4.320
4.260
4.210
4.167
4.131
4.099
4.071
4.046
4.024
4.004
3.986
3.970
3.955
3.942
3.930
3.918
3.908
3.898
3.889
3.881
3.873
3.865
3.859
3.852
3.846
3.840
3.835
3.830
3.825
135.0
19.02
10.62
8.120
6.976
6.331
5.919
5.635
5.428
5.270
5.146
5.046
4.964
4.895
4.836
4.786
4.742
4.703
4.669
4.639
4.612
4.588
4.566
4.546
4.527
4.510
4.495
4.481
4.467
4.455
4.443
4.433
4.423
4.413
4.404
4.396
4.388
4.381
4.374
4.367
164.3
22.29
12.17
9.173
7.804
7.033
6.543
6.204
5.957
5.769
5.621
5.502
5.404
5.322
5.252
5.192
5.140
5.094
5.054
5.018
4.986
4.957
4.931
4.907
4.885
4.865
4.847
4.830
4.814
4.799
4.786
4.773
4.761
4.750
4.739
4.729
4.720
4.711
4.703
4.695
185.6
24.72
13.32
9.958
8.422
7.556
7.005
6.625
6.347
6.136
5.970
5.836
5.726
5.634
5.556
5.489
5.430
5.379
5.334
5.293
5.257
5.225
5.195
5.168
5.144
5.121
5.101
5.082
5.064
5.048
5.032
5.018
5.005
4.992
4.980
4.969
4.959
4.949
4.940
4.931
202.2
26.63
14.24
10.58
8.913
7.972
7.373
6.960
6.658
6.428
6.247
6.101
5.981
5.881
5.796
5.722
5.659
5.603
5.553
5.510
5.470
5.435
5.403
5.373
5.347
5.322
5.300
5.279
5.260
5.242
5.225
5.210
5.195
5.181
5.169
5.156
5.145
5.134
5.124
5.114
215.8
28.20
15.00
11.10
9.321
8.318
7.679
7.237
6.915
6.669
6.476
6.320
6.192
6.085
5.994
5.915
5.847
5.787
5.735
5.688
5.646
5.608
5.573
5.542
5.514
5.487
5.463
5.441
5.420
5.401
5.383
5.367
5.351
5.336
5.323
5.310
5.298
5.286
5.275
5.265
227.2
29.53
15.64
11.54
9.669
8.613
7.939
7.474
7.134
6.875
6.671
6.507
6.372
6.258
6.162
6.079
6.007
5.944
5.889
5.839
5.794
5.754
5.718
5.685
5.655
5.627
5.602
5.578
5.556
5.536
5.517
5.500
5.483
5.468
5.453
5.440
5.427
5.414
5.403
5.392
237.0
30.68
16.20
11.93
9.972
8.870
8.166
7.681
7.325
7.055
6.842
6.670
6.528
6.410
6.309
6.222
6.147
6.081
6.022
5.970
5.924
5.882
5.844
5.809
5.778
5.749
5.722
5.697
5.674
5.653
5.633
5.615
5.598
5.581
5.566
5.552
5.538
5.526
5.513
5.502
245.5
31.69
16.69
12.26
10.24
9.097
8.368
7.863
7.495
7.214
6.992
6.814
6.667
6.543
6.439
6.348
6.270
6.201
6.141
6.087
6.038
5.994
5.955
5.919
5.886
5.856
5.828
5.802
5.778
5.756
5.736
5.716
5.698
5.682
5.666
5.651
5.637
5.623
5.611
5.599
10.12 Cumulative binomial probabilities (1 ≤ n ≤ 7)
Examples: P (Bin(5, 0.3) ≤ 2) = 0.8369; P (Bin(7, 0.6) ≤ 2) = P (Bin(7, 0.4) ≥ 5).
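For parameter values not in the table, the cumulative probabilities can be reproduced directly from the binomial pmf; a minimal sketch in Python (the function name is hypothetical):

```python
from math import comb

def binom_cdf(n, p, k):
    """P(Bin(n, p) <= k), summing the binomial pmf term by term."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Reproduces the tabulated P(Bin(5, 0.3) <= 2) = 0.8369, and the symmetry
# P(Bin(n, p) <= k) = P(Bin(n, 1 - p) >= n - k) used in the example above.
```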
n k 0.05
1 0 0.9500
1 1.0000
2 0 0.9025
1 0.9975
2 1.0000
3 0 0.8574
1 0.9928
2 0.9999
3 1.0000
4 0 0.8145
1 0.9860
2 0.9995
3 1.0000
4 1.0000
5 0 0.7738
1 0.9774
2 0.9988
3 1.0000
4 1.0000
5 1.0000
6 0 0.7351
1 0.9672
2 0.9978
3 0.9999
4 1.0000
5 1.0000
6 1.0000
7 0 0.6983
1 0.9556
2 0.9962
3 0.9998
4 1.0000
5 1.0000
6 1.0000
7 1.0000
0.1
0.9000
1.0000
0.8100
0.9900
1.0000
0.7290
0.9720
0.9990
1.0000
0.6561
0.9477
0.9963
0.9999
1.0000
0.5905
0.9185
0.9914
0.9995
1.0000
1.0000
0.5314
0.8857
0.9842
0.9987
0.9999
1.0000
1.0000
0.4783
0.8503
0.9743
0.9973
0.9998
1.0000
1.0000
1.0000
0.15
0.8500
1.0000
0.7225
0.9775
1.0000
0.6141
0.9393
0.9966
1.0000
0.5220
0.8905
0.9880
0.9995
1.0000
0.4437
0.8352
0.9734
0.9978
0.9999
1.0000
0.3771
0.7765
0.9527
0.9941
0.9996
1.0000
1.0000
0.3206
0.7166
0.9262
0.9879
0.9988
0.9999
1.0000
1.0000
0.2
0.8000
1.0000
0.6400
0.9600
1.0000
0.5120
0.8960
0.9920
1.0000
0.4096
0.8192
0.9728
0.9984
1.0000
0.3277
0.7373
0.9421
0.9933
0.9997
1.0000
0.2621
0.6554
0.9011
0.9830
0.9984
0.9999
1.0000
0.2097
0.5767
0.8520
0.9667
0.9953
0.9996
1.0000
1.0000
0.25
0.7500
1.0000
0.5625
0.9375
1.0000
0.4219
0.8438
0.9844
1.0000
0.3164
0.7383
0.9492
0.9961
1.0000
0.2373
0.6328
0.8965
0.9844
0.9990
1.0000
0.1780
0.5339
0.8306
0.9624
0.9954
0.9998
1.0000
0.1335
0.4449
0.7564
0.9294
0.9871
0.9987
0.9999
1.0000
p
0.3
0.7000
1.0000
0.4900
0.9100
1.0000
0.3430
0.7840
0.9730
1.0000
0.2401
0.6517
0.9163
0.9919
1.0000
0.1681
0.5282
0.8369
0.9692
0.9976
1.0000
0.1176
0.4202
0.7443
0.9295
0.9891
0.9993
1.0000
0.0824
0.3294
0.6471
0.8740
0.9712
0.9962
0.9998
1.0000
0.35
0.6500
1.0000
0.4225
0.8775
1.0000
0.2746
0.7183
0.9571
1.0000
0.1785
0.5630
0.8735
0.9850
1.0000
0.1160
0.4284
0.7648
0.9460
0.9947
1.0000
0.0754
0.3191
0.6471
0.8826
0.9777
0.9982
1.0000
0.0490
0.2338
0.5323
0.8002
0.9444
0.9910
0.9994
1.0000
0.4
0.6000
1.0000
0.3600
0.8400
1.0000
0.2160
0.6480
0.9360
1.0000
0.1296
0.4752
0.8208
0.9744
1.0000
0.0778
0.3370
0.6826
0.9130
0.9898
1.0000
0.0467
0.2333
0.5443
0.8208
0.9590
0.9959
1.0000
0.0280
0.1586
0.4199
0.7102
0.9037
0.9812
0.9984
1.0000
1/2
1/6
1/3
0.5000
1.0000
0.2500
0.7500
1.0000
0.1250
0.5000
0.8750
1.0000
0.0625
0.3125
0.6875
0.9375
1.0000
0.0313
0.1875
0.5000
0.8125
0.9688
1.0000
0.0156
0.1094
0.3438
0.6562
0.8906
0.9844
1.0000
0.0078
0.0625
0.2266
0.5000
0.7734
0.9375
0.9922
1.0000
0.8333
1.0000
0.6944
0.9722
1.0000
0.5787
0.9259
0.9954
1.0000
0.4823
0.8681
0.9838
0.9992
1.0000
0.4019
0.8038
0.9645
0.9967
0.9999
1.0000
0.3349
0.7368
0.9377
0.9913
0.9993
1.0000
1.0000
0.2791
0.6698
0.9042
0.9824
0.9980
0.9999
1.0000
1.0000
0.6667
1.0000
0.4444
0.8889
1.0000
0.2963
0.7407
0.9630
1.0000
0.1975
0.5926
0.8889
0.9877
1.0000
0.1317
0.4609
0.7901
0.9547
0.9959
1.0000
0.0878
0.3512
0.6804
0.8999
0.9822
0.9986
1.0000
0.0585
0.2634
0.5706
0.8267
0.9547
0.9931
0.9995
1.0000
10.13 Cumulative binomial probabilities (8 ≤ n ≤ 11)
Examples: P (Bin(9, 0.3) ≤ 2) = 0.4628; P (Bin(9, 0.6) ≤ 2) = P (Bin(9, 0.4) ≥ 7).
n k
8 0
1
2
3
4
5
6
7
8
9 0
1
2
3
4
5
6
7
8
9
10 0
1
2
3
4
5
6
7
8
9
10
11 0
1
2
3
4
5
6
7
8
9
10
11
0.05
0.6634
0.9428
0.9942
0.9996
1.0000
1.0000
1.0000
1.0000
1.0000
0.6302
0.9288
0.9916
0.9994
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.5987
0.9139
0.9885
0.9990
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.5688
0.8981
0.9848
0.9984
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1
0.4305
0.8131
0.9619
0.9950
0.9996
1.0000
1.0000
1.0000
1.0000
0.3874
0.7748
0.9470
0.9917
0.9991
0.9999
1.0000
1.0000
1.0000
1.0000
0.3487
0.7361
0.9298
0.9872
0.9984
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
0.3138
0.6974
0.9104
0.9815
0.9972
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.15
0.2725
0.6572
0.8948
0.9786
0.9971
0.9998
1.0000
1.0000
1.0000
0.2316
0.5995
0.8591
0.9661
0.9944
0.9994
1.0000
1.0000
1.0000
1.0000
0.1969
0.5443
0.8202
0.9500
0.9901
0.9986
0.9999
1.0000
1.0000
1.0000
1.0000
0.1673
0.4922
0.7788
0.9306
0.9841
0.9973
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
0.2
0.1678
0.5033
0.7969
0.9437
0.9896
0.9988
0.9999
1.0000
1.0000
0.1342
0.4362
0.7382
0.9144
0.9804
0.9969
0.9997
1.0000
1.0000
1.0000
0.1074
0.3758
0.6778
0.8791
0.9672
0.9936
0.9991
0.9999
1.0000
1.0000
1.0000
0.0859
0.3221
0.6174
0.8389
0.9496
0.9883
0.9980
0.9998
1.0000
1.0000
1.0000
1.0000
0.25
0.1001
0.3671
0.6785
0.8862
0.9727
0.9958
0.9996
1.0000
1.0000
0.0751
0.3003
0.6007
0.8343
0.9511
0.9900
0.9987
0.9999
1.0000
1.0000
0.0563
0.2440
0.5256
0.7759
0.9219
0.9803
0.9965
0.9996
1.0000
1.0000
1.0000
0.0422
0.1971
0.4552
0.7133
0.8854
0.9657
0.9924
0.9988
0.9999
1.0000
1.0000
1.0000
p
0.3
0.0576
0.2553
0.5518
0.8059
0.9420
0.9887
0.9987
0.9999
1.0000
0.0404
0.1960
0.4628
0.7297
0.9012
0.9747
0.9957
0.9996
1.0000
1.0000
0.0282
0.1493
0.3828
0.6496
0.8497
0.9527
0.9894
0.9984
0.9999
1.0000
1.0000
0.0198
0.1130
0.3127
0.5696
0.7897
0.9218
0.9784
0.9957
0.9994
1.0000
1.0000
1.0000
0.35
0.0319
0.1691
0.4278
0.7064
0.8939
0.9747
0.9964
0.9998
1.0000
0.0207
0.1211
0.3373
0.6089
0.8283
0.9464
0.9888
0.9986
0.9999
1.0000
0.0135
0.0860
0.2616
0.5138
0.7515
0.9051
0.9740
0.9952
0.9995
1.0000
1.0000
0.0088
0.0606
0.2001
0.4256
0.6683
0.8513
0.9499
0.9878
0.9980
0.9998
1.0000
1.0000
0.4
0.0168
0.1064
0.3154
0.5941
0.8263
0.9502
0.9915
0.9993
1.0000
0.0101
0.0705
0.2318
0.4826
0.7334
0.9006
0.9750
0.9962
0.9997
1.0000
0.0060
0.0464
0.1673
0.3823
0.6331
0.8338
0.9452
0.9877
0.9983
0.9999
1.0000
0.0036
0.0302
0.1189
0.2963
0.5328
0.7535
0.9006
0.9707
0.9941
0.9993
1.0000
1.0000
1/2
1/6
1/3
0.0039
0.0352
0.1445
0.3633
0.6367
0.8555
0.9648
0.9961
1.0000
0.0020
0.0195
0.0898
0.2539
0.5000
0.7461
0.9102
0.9805
0.9980
1.0000
0.0010
0.0107
0.0547
0.1719
0.3770
0.6230
0.8281
0.9453
0.9893
0.9990
1.0000
0.0005
0.0059
0.0327
0.1133
0.2744
0.5000
0.7256
0.8867
0.9673
0.9941
0.9995
1.0000
0.2326
0.6047
0.8652
0.9693
0.9954
0.9996
1.0000
1.0000
1.0000
0.1938
0.5427
0.8217
0.9520
0.9910
0.9989
0.9999
1.0000
1.0000
1.0000
0.1615
0.4845
0.7752
0.9303
0.9845
0.9976
0.9997
1.0000
1.0000
1.0000
1.0000
0.1346
0.4307
0.7268
0.9044
0.9755
0.9954
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
0.0390
0.1951
0.4682
0.7414
0.9121
0.9803
0.9974
0.9998
1.0000
0.0260
0.1431
0.3772
0.6503
0.8552
0.9576
0.9917
0.9990
0.9999
1.0000
0.0173
0.1040
0.2991
0.5593
0.7869
0.9234
0.9803
0.9966
0.9996
1.0000
1.0000
0.0116
0.0751
0.2341
0.4726
0.7110
0.8779
0.9614
0.9912
0.9986
0.9999
1.0000
1.0000
10.14 Cumulative binomial probabilities (n = 12, 13, 14)
Examples: P (Bin(13, 0.3) ≤ 2) = 0.2025; P (Bin(13, 0.6) ≤ 2) = P (Bin(13, 0.4) ≥ 11).
n k
12 0
1
2
3
4
5
6
7
8
9
10
11
12
13 0
1
2
3
4
5
6
7
8
9
10
11
12
13
14 0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
0.05
0.5404
0.8816
0.9804
0.9978
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.5133
0.8646
0.9755
0.9969
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.4877
0.8470
0.9699
0.9958
0.9996
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1
0.2824
0.6590
0.8891
0.9744
0.9957
0.9995
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2542
0.6213
0.8661
0.9658
0.9935
0.9991
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2288
0.5846
0.8416
0.9559
0.9908
0.9985
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.15
0.1422
0.4435
0.7358
0.9078
0.9761
0.9954
0.9993
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
0.1209
0.3983
0.6920
0.8820
0.9658
0.9925
0.9987
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1028
0.3567
0.6479
0.8535
0.9533
0.9885
0.9978
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2
0.0687
0.2749
0.5583
0.7946
0.9274
0.9806
0.9961
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
0.0550
0.2336
0.5017
0.7473
0.9009
0.9700
0.9930
0.9988
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
0.0440
0.1979
0.4481
0.6982
0.8702
0.9561
0.9884
0.9976
0.9996
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.25
0.0317
0.1584
0.3907
0.6488
0.8424
0.9456
0.9857
0.9972
0.9996
1.0000
1.0000
1.0000
1.0000
0.0238
0.1267
0.3326
0.5843
0.7940
0.9198
0.9757
0.9944
0.9990
0.9999
1.0000
1.0000
1.0000
1.0000
0.0178
0.1010
0.2811
0.5213
0.7415
0.8883
0.9617
0.9897
0.9978
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
p
0.3
0.0138
0.0850
0.2528
0.4925
0.7237
0.8822
0.9614
0.9905
0.9983
0.9998
1.0000
1.0000
1.0000
0.0097
0.0637
0.2025
0.4206
0.6543
0.8346
0.9376
0.9818
0.9960
0.9993
0.9999
1.0000
1.0000
1.0000
0.0068
0.0475
0.1608
0.3552
0.5842
0.7805
0.9067
0.9685
0.9917
0.9983
0.9998
1.0000
1.0000
1.0000
1.0000
0.35
0.0057
0.0424
0.1513
0.3467
0.5833
0.7873
0.9154
0.9745
0.9944
0.9992
0.9999
1.0000
1.0000
0.0037
0.0296
0.1132
0.2783
0.5005
0.7159
0.8705
0.9538
0.9874
0.9975
0.9997
1.0000
1.0000
1.0000
0.0024
0.0205
0.0839
0.2205
0.4227
0.6405
0.8164
0.9247
0.9757
0.9940
0.9989
0.9999
1.0000
1.0000
1.0000
0.4
0.0022
0.0196
0.0834
0.2253
0.4382
0.6652
0.8418
0.9427
0.9847
0.9972
0.9997
1.0000
1.0000
0.0013
0.0126
0.0579
0.1686
0.3530
0.5744
0.7712
0.9023
0.9679
0.9922
0.9987
0.9999
1.0000
1.0000
0.0008
0.0081
0.0398
0.1243
0.2793
0.4859
0.6925
0.8499
0.9417
0.9825
0.9961
0.9994
0.9999
1.0000
1.0000
1/2
1/6
1/3
0.0002
0.0032
0.0193
0.0730
0.1938
0.3872
0.6128
0.8062
0.9270
0.9807
0.9968
0.9998
1.0000
0.0001
0.0017
0.0112
0.0461
0.1334
0.2905
0.5000
0.7095
0.8666
0.9539
0.9888
0.9983
0.9999
1.0000
0.0001
0.0009
0.0065
0.0287
0.0898
0.2120
0.3953
0.6047
0.7880
0.9102
0.9713
0.9935
0.9991
0.9999
1.0000
0.1122
0.3813
0.6774
0.8748
0.9636
0.9921
0.9987
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
0.0935
0.3365
0.6281
0.8419
0.9488
0.9873
0.9976
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0779
0.2960
0.5795
0.8063
0.9310
0.9809
0.9959
0.9993
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0077
0.0540
0.1811
0.3931
0.6315
0.8223
0.9336
0.9812
0.9961
0.9995
1.0000
1.0000
1.0000
0.0051
0.0385
0.1387
0.3224
0.5520
0.7587
0.8965
0.9653
0.9912
0.9984
0.9998
1.0000
1.0000
1.0000
0.0034
0.0274
0.1053
0.2612
0.4755
0.6898
0.8505
0.9424
0.9826
0.9960
0.9993
0.9999
1.0000
1.0000
1.0000
10.15 Cumulative binomial probabilities (n = 15, 20)
Examples: P (Bin(15, 0.3) ≤ 2) = 0.1268; P (Bin(15, 0.6) ≤ 2) = P (Bin(15, 0.4) ≥ 13).
n k
0.05
0.1
0.15
0.2
0.25
p
0.3
15 0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
0.4633
0.8290
0.9638
0.9945
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2059
0.5490
0.8159
0.9444
0.9873
0.9978
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0874
0.3186
0.6042
0.8227
0.9383
0.9832
0.9964
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0352
0.1671
0.3980
0.6482
0.8358
0.9389
0.9819
0.9958
0.9992
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0134
0.0802
0.2361
0.4613
0.6865
0.8516
0.9434
0.9827
0.9958
0.9992
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
0.0047
0.0353
0.1268
0.2969
0.5155
0.7216
0.8689
0.9500
0.9848
0.9963
0.9993
0.9999
1.0000
1.0000
1.0000
1.0000
0.0016
0.0142
0.0617
0.1727
0.3519
0.5643
0.7548
0.8868
0.9578
0.9876
0.9972
0.9995
0.9999
1.0000
1.0000
1.0000
0.0005
0.0052
0.0271
0.0905
0.2173
0.4032
0.6098
0.7869
0.9050
0.9662
0.9907
0.9981
0.9997
1.0000
1.0000
1.0000
0.0000
0.0005
0.0037
0.0176
0.0592
0.1509
0.3036
0.5000
0.6964
0.8491
0.9408
0.9824
0.9963
0.9995
1.0000
1.0000
0.0649
0.2596
0.5322
0.7685
0.9102
0.9726
0.9934
0.9987
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0023
0.0194
0.0794
0.2092
0.4041
0.6184
0.7970
0.9118
0.9692
0.9915
0.9982
0.9997
1.0000
1.0000
1.0000
1.0000
20 0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
0.3585
0.7358
0.9245
0.9841
0.9974
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1216
0.3917
0.6769
0.8670
0.9568
0.9887
0.9976
0.9996
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0388
0.1756
0.4049
0.6477
0.8298
0.9327
0.9781
0.9941
0.9987
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0115
0.0692
0.2061
0.4114
0.6296
0.8042
0.9133
0.9679
0.9900
0.9974
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0032
0.0243
0.0913
0.2252
0.4148
0.6172
0.7858
0.8982
0.9591
0.9861
0.9961
0.9991
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0008
0.0076
0.0355
0.1071
0.2375
0.4164
0.6080
0.7723
0.8867
0.9520
0.9829
0.9949
0.9987
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0002
0.0021
0.0121
0.0444
0.1182
0.2454
0.4166
0.6010
0.7624
0.8782
0.9468
0.9804
0.9940
0.9985
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0005
0.0036
0.0160
0.0510
0.1256
0.2500
0.4159
0.5956
0.7553
0.8725
0.9435
0.9790
0.9935
0.9984
0.9997
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0000
0.0002
0.0013
0.0059
0.0207
0.0577
0.1316
0.2517
0.4119
0.5881
0.7483
0.8684
0.9423
0.9793
0.9941
0.9987
0.9998
1.0000
1.0000
1.0000
0.0261
0.1304
0.3287
0.5665
0.7687
0.8982
0.9629
0.9887
0.9972
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0003
0.0033
0.0176
0.0604
0.1515
0.2972
0.4793
0.6615
0.8095
0.9081
0.9624
0.9870
0.9963
0.9991
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.35
0.4
1/2
1/6
1/3
10.16 Cumulative Poisson probabilities (0.1 ≤ λ ≤ 5.0)
Example: P (Poisson(0.3) ≤ 2) = 0.9964.
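Values for a λ between or beyond the tabulated ones can be computed from the Poisson pmf; a minimal sketch in Python (the function name is hypothetical):

```python
from math import exp, factorial

def poisson_cdf(lam, x):
    """P(Poisson(lam) <= x), summing the Poisson pmf term by term."""
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(x + 1))
```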
x \ λ
0
1
2
3
4
5
6
x \ λ
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.9048
0.9953
0.9998
1.0000
1.0000
1.0000
1.0000
0.8187
0.9825
0.9989
0.9999
1.0000
1.0000
1.0000
0.7408
0.9631
0.9964
0.9997
1.0000
1.0000
1.0000
0.6703
0.9384
0.9921
0.9992
0.9999
1.0000
1.0000
0.6065
0.9098
0.9856
0.9982
0.9998
1.0000
1.0000
0.5488
0.8781
0.9769
0.9966
0.9996
1.0000
1.0000
0.4966
0.8442
0.9659
0.9942
0.9992
0.9999
1.0000
0.4493
0.8088
0.9526
0.9909
0.9986
0.9998
1.0000
0.4066
0.7725
0.9371
0.9865
0.9977
0.9997
1.0000
1
1.5
2
2.5
3
3.5
4
4.5
5
0.3679
0.7358
0.9197
0.9810
0.9963
0.9994
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2231
0.5578
0.8088
0.9344
0.9814
0.9955
0.9991
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1353
0.4060
0.6767
0.8571
0.9473
0.9834
0.9955
0.9989
0.9998
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0821
0.2873
0.5438
0.7576
0.8912
0.9580
0.9858
0.9958
0.9989
0.9997
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0498
0.1991
0.4232
0.6472
0.8153
0.9161
0.9665
0.9881
0.9962
0.9989
0.9997
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
0.0302
0.1359
0.3208
0.5366
0.7254
0.8576
0.9347
0.9733
0.9901
0.9967
0.9990
0.9997
0.9999
1.0000
1.0000
1.0000
1.0000
0.0183
0.0916
0.2381
0.4335
0.6288
0.7851
0.8893
0.9489
0.9786
0.9919
0.9972
0.9991
0.9997
0.9999
1.0000
1.0000
1.0000
0.0111
0.0611
0.1736
0.3423
0.5321
0.7029
0.8311
0.9134
0.9597
0.9829
0.9933
0.9976
0.9992
0.9997
0.9999
1.0000
1.0000
0.0067
0.0404
0.1247
0.2650
0.4405
0.6160
0.7622
0.8666
0.9319
0.9682
0.9863
0.9945
0.9980
0.9993
0.9998
0.9999
1.0000
10.17 Cumulative Poisson probabilities (5.5 ≤ λ ≤ 9.5)
Example: P (Poisson(6.5) ≤ 2) = 0.0430.
x \ λ
5.5
6.0
6.5
7.0
7.5
8.0
8.5
9.0
9.5
0
0.0041
0.0025
0.0015
0.0009
0.0006
0.0003
0.0002
0.0001
0.0000
1
0.0266
0.0174
0.0113
0.0073
0.0047
0.0030
0.0019
0.0012
0.0008
2
0.0884
0.0620
0.0430
0.0296
0.0203
0.0138
0.0093
0.0062
0.0042
3
0.2017
0.1512
0.1118
0.0818
0.0591
0.0424
0.0301
0.0212
0.0149
4
0.3575
0.2851
0.2237
0.1730
0.1321
0.0996
0.0744
0.0550
0.0403
5
0.5289
0.4457
0.3690
0.3007
0.2414
0.1912
0.1496
0.1157
0.0885
6
0.6860
0.6063
0.5265
0.4497
0.3782
0.3134
0.2562
0.2068
0.1649
7
0.8095
0.7440
0.6728
0.5987
0.5246
0.4530
0.3856
0.3239
0.2687
8
0.8944
0.8472
0.7916
0.7291
0.6620
0.5925
0.5231
0.4557
0.3918
9
0.9462
0.9161
0.8774
0.8305
0.7764
0.7166
0.6530
0.5874
0.5218
10
0.9747
0.9574
0.9332
0.9015
0.8622
0.8159
0.7634
0.706
0.6453
11
0.9890
0.9799
0.9661
0.9467
0.9208
0.8881
0.8487
0.803
0.7520
12
0.9955
0.9912
0.984
0.973
0.9573
0.9362
0.9091
0.8758
0.8364
13
0.9983
0.9964
0.9929
0.9872
0.9784
0.9658
0.9486
0.9261
0.8981
14
0.9994
0.9986
0.9970
0.9943
0.9897
0.9827
0.9726
0.9585
0.9400
15
0.9998
0.9995
0.9988
0.9976
0.9954
0.9918
0.9862
0.9780
0.9665
16
0.9999
0.9998
0.9996
0.9990
0.9980
0.9963
0.9934
0.9889
0.9823
17
1.0000
0.9999
0.9998
0.9996
0.9992
0.9984
0.9970
0.9947
0.9911
18
1.0000
1.0000
0.9999
0.9999
0.9997
0.9993
0.9987
0.9976
0.9957
19
1.0000
1.0000
1.0000
1.0000
0.9999
0.9997
0.9995
0.9989
0.9980
20
1.0000
1.0000
1.0000
1.0000
1.0000
0.9999
0.9998
0.9996
0.9991
21
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9999
0.9998
0.9996
22
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9999
0.9999
23
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9999
24
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
10.18 Cumulative Poisson probabilities (10.0 ≤ λ ≤ 15.0)
Example: P (Poisson(11.5) ≤ 2) = 0.0008.
x \ λ
10.0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
0.0000
0.0005
0.0028
0.0103
0.0293
0.0671
0.1301
0.2202
0.3328
0.4579
0.5830
0.6968
0.7916
0.8645
0.9165
0.9513
0.9730
0.9857
0.9928
0.9965
0.9984
0.9993
0.9997
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
10.5
11.0
11.5
12.0
12.5
13.0
13.5
14.0
14.5
15.0
0.0000
0.0003
0.0018
0.0071
0.0211
0.0504
0.1016
0.1785
0.2794
0.3971
0.5207
0.6387
0.7420
0.8253
0.8879
0.9317
0.9604
0.9781
0.9885
0.9942
0.9972
0.9987
0.9994
0.9998
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0002
0.0012
0.0049
0.0151
0.0375
0.0786
0.1432
0.2320
0.3405
0.4599
0.5793
0.6887
0.7813
0.8540
0.9074
0.9441
0.9678
0.9823
0.9907
0.9953
0.9977
0.9990
0.9995
0.9998
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0001
0.0008
0.0034
0.0108
0.0277
0.0603
0.1137
0.1906
0.2888
0.4017
0.5198
0.6329
0.7330
0.8153
0.8783
0.9236
0.9542
0.9738
0.9857
0.9925
0.9962
0.9982
0.9992
0.9996
0.9998
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0001
0.0005
0.0023
0.0076
0.0203
0.0458
0.0895
0.1550
0.2424
0.3472
0.4616
0.5760
0.6815
0.7720
0.8444
0.8987
0.9370
0.9626
0.9787
0.9884
0.9939
0.9970
0.9985
0.9993
0.9997
0.9999
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0001
0.0003
0.0016
0.0054
0.0148
0.0346
0.0698
0.1249
0.2014
0.2971
0.4058
0.5190
0.6278
0.7250
0.8060
0.8693
0.9158
0.9481
0.9694
0.9827
0.9906
0.9951
0.9975
0.9988
0.9994
0.9997
0.9999
1.0000
1.0000
1.0000
1.0000
1.0000
0.0000
0.0000
0.0002
0.0011
0.0037
0.0107
0.0259
0.0540
0.0998
0.1658
0.2517
0.3532
0.4631
0.5730
0.6751
0.7636
0.8355
0.8905
0.9302
0.9573
0.9750
0.9859
0.9924
0.9960
0.9980
0.9990
0.9995
0.9998
0.9999
1.0000
1.0000
1.0000
1.0000
0.0000
0.0000
0.0001
0.0007
0.0026
0.0077
0.0193
0.0415
0.0790
0.1353
0.2112
0.3045
0.4093
0.5182
0.6233
0.7178
0.7975
0.8609
0.9084
0.9421
0.9649
0.9796
0.9885
0.9938
0.9968
0.9984
0.9992
0.9996
0.9998
0.9999
1.0000
1.0000
1.0000
0.0000
0.0000
0.0001
0.0005
0.0018
0.0055
0.0142
0.0316
0.0621
0.1094
0.1757
0.2600
0.3585
0.4644
0.5704
0.6694
0.7559
0.8272
0.8826
0.9235
0.9521
0.9712
0.9833
0.9907
0.9950
0.9974
0.9987
0.9994
0.9997
0.9999
0.9999
1.0000
1.0000
0.0000
0.0000
0.0001
0.0003
0.0012
0.0039
0.0105
0.0239
0.0484
0.0878
0.1449
0.2201
0.3111
0.4125
0.5176
0.6192
0.7112
0.7897
0.8530
0.9012
0.9362
0.9604
0.9763
0.9863
0.9924
0.9959
0.9979
0.9989
0.9995
0.9998
0.9999
1.0000
1.0000
0.0000
0.0000
0.0000
0.0002
0.0009
0.0028
0.0076
0.0180
0.0375
0.0699
0.1185
0.1848
0.2676
0.3632
0.4657
0.5681
0.6641
0.7489
0.8195
0.8752
0.9170
0.9469
0.9673
0.9805
0.9888
0.9938
0.9967
0.9983
0.9991
0.9996
0.9998
0.9999
1.0000
10.19 Wilcoxon rank sum test
Usage The table contains the left critical values of the Wilcoxon rank sum test. This is a
test for two samples X1 , . . . , Xm and Y1 , . . . , Yn where m ≥ n (interchange the two samples
if necessary). The corresponding statistic W is the sum of the ranks of the Yi ’s (the smaller
sample) in the combined sample. The right critical value WR can be found from the left
critical value WL by using the formula
WR = n (m + n + 1) − WL .
A ∗ indicates that there does not exist a test with the given value of the significance level α.
An equivalent form of this test consists in calculating the Mann–Whitney statistic Mm,n . This
statistic is defined as the sum over all j of the number of Yi ’s that are less than Xj .
The critical values are part of the critical area. Example: m = 5, n = 3 and α = 0.05. The
critical area consists of two parts, {W ≤ 6} and {W ≥ 21}.
For values of n which are not in the table, use that
(W − n (m + n + 1)/2) / sqrt( m n (m + n + 1)/12 )
is approximately distributed as a standard normal distribution.
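This normal approximation can be sketched in Python as follows (the function name is hypothetical; ties are not handled):

```python
from math import sqrt

def rank_sum_zscore(w, m, n):
    """Normal approximation for the Wilcoxon rank sum statistic W
    (sum of the ranks of the smaller sample of size n, with m >= n)."""
    mean = n * (m + n + 1) / 2
    sd = sqrt(m * n * (m + n + 1) / 12)
    return (w - mean) / sd
```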
0.2
m
1
2
3
4
5
6
n
1
1
2
1
2
3
1
2
3
4
1
2
3
4
5
1
2
3
4
5
6
0.1
∗
∗
∗
∗
3
7
∗
3
7
13
∗
4
8
14
20
∗
4
9
15
22
30
α (two-sided)
0.05 0.02
α (one-sided)
0.05 0.025 0.01
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
6
∗
∗
∗
∗
∗
∗
∗
∗
6
∗
∗
11
10
∗
∗
∗
∗
3
∗
∗
7
6
∗
12
11
10
19
17
16
∗
∗
∗
3
∗
∗
8
7
∗
13
12
11
20
18
17
28
26
24
0.1
0.01
0.005
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
∗
15
∗
∗
∗
10
16
23
0.2
m
7
8
n
1
2
3
4
5
6
7
1
2
3
4
5
6
7
8
0.1
∗
4
10
16
23
32
41
∗
5
11
17
25
34
44
55
α (two-sided)
0.05 0.02
α (one-sided)
0.05 0.025 0.01
∗
∗
∗
3
∗
∗
8
7
6
14
13
11
21
20
18
29
27
25
39
36
34
∗
∗
∗
4
3
∗
9
8
6
15
14
12
23
21
19
31
29
27
41
38
35
51
49
45
0.1
0.01
0.005
∗
∗
∗
10
16
24
32
∗
∗
∗
11
17
25
34
43
Wilcoxon rank sum test (continued)
0.2
m
9
10
11
12
n
1
2
3
4
5
6
7
8
9
1
2
3
4
5
6
7
8
9
10
1
2
3
4
5
6
7
8
9
10
11
1
2
3
4
5
6
7
8
9
10
11
12
0.1
1
5
11
19
27
36
46
58
70
1
6
12
20
28
38
49
60
73
87
1
6
13
21
30
40
51
63
76
91
106
1
7
14
22
32
42
54
66
80
94
110
127
α
0.1
α
0.05
∗
4
10
16
24
33
43
54
66
∗
4
10
17
26
35
45
56
69
82
∗
4
11
18
27
37
47
59
72
86
100
∗
5
11
19
28
38
49
62
75
89
104
120
(two-sided)
0.05 0.02
(one-sided)
0.025 0.01
∗
∗
3
∗
8
7
14
13
22
20
31
28
40
37
51
47
62
59
∗
∗
3
∗
9
7
15
13
23
21
32
29
42
39
53
49
65
61
78
74
∗
∗
3
∗
9
7
16
14
24
22
34
30
44
40
55
51
68
63
81
77
96
91
∗
∗
4
∗
10
8
17
15
26
23
35
32
46
42
58
53
71
66
84
79
99
94
115
109
0.01
0.005
∗
∗
6
11
18
26
35
45
56
∗
∗
6
12
19
27
37
47
58
71
∗
∗
6
12
20
28
38
49
61
73
87
∗
∗
7
13
21
30
40
51
63
76
90
105
0.2
m
13
14
15
n
1
2
3
4
5
6
7
8
9
10
11
12
13
1
2
3
4
5
6
7
8
9
10
11
12
13
14
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
0.1
1
7
15
23
33
44
56
69
83
98
114
131
149
1
8
16
25
35
46
59
72
86
102
118
136
154
174
1
8
16
26
37
48
61
75
90
106
123
141
159
179
200
α
0.1
α
0.05
∗
5
12
20
30
40
52
64
78
92
108
125
142
∗
6
13
21
31
42
54
67
81
96
112
129
147
166
∗
6
13
22
33
44
56
69
84
99
116
133
152
171
192
(two-sided)
0.05 0.02
(one-sided)
0.025 0.01
∗
∗
4
3
10
8
18
15
27
24
37
33
48
44
60
56
73
68
88
82
103
97
119
113
136
130
∗
∗
4
3
11
8
19
16
28
25
38
34
50
45
62
58
76
71
91
85
106
100
123
116
141
134
160
152
∗
∗
4
3
11
9
20
17
29
26
40
36
52
47
65
60
79
73
94
88
110
103
127
120
145
138
164
156
184
176
0.01
0.005
∗
∗
7
13
22
31
41
53
65
79
93
109
125
∗
∗
7
14
22
32
43
54
67
81
96
112
129
147
∗
∗
8
15
23
33
44
56
69
84
99
115
133
151
171
10.20 Wilcoxon signed rank test
0.2
n
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
0.1
0
2
3
5
8
10
14
17
21
26
31
36
42
48
55
62
69
77
86
94
104
113
α
0.1
α
0.05
∗
0
2
3
5
8
10
13
17
21
25
30
35
41
47
53
60
67
75
83
91
100
(two-sided)
0.05 0.02
(one-sided)
0.025 0.01
∗
∗
∗
∗
0
∗
2
0
3
1
5
3
8
5
10
7
13
9
17
12
21
15
25
19
29
23
34
27
40
32
46
37
52
43
58
49
65
55
73
62
81
69
89
76
0.01
0.005
∗
∗
∗
∗
0
1
3
5
7
9
12
15
19
23
27
32
37
42
48
54
61
68
0.2
n
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
50
0.1
124
134
145
157
169
181
194
207
221
235
250
265
281
297
313
330
348
365
384
402
503
α
0.1
α
0.05
110
119
130
140
151
163
175
187
200
213
227
241
256
271
286
302
319
336
353
371
466
(two-sided)
0.05 0.02
(one-sided)
0.025 0.01
98
84
107
92
116
101
126
110
137
120
147
130
159
140
170
151
182
162
195
173
208
185
221
198
235
211
249
224
264
238
279
252
294
266
310
281
327
296
343
312
434
397
0.01
0.005
75
83
91
100
109
118
128
138
148
159
171
182
194
207
220
233
247
261
276
291
373
Usage The table contains the left critical values of the Wilcoxon signed rank test. The
corresponding statistic W is the sum of the ranks of the absolute values corresponding to the
positive values. The right critical value WR can be found from the left critical value WL by
using the formula
WR = n (n + 1)/2 − WL .
A star indicates that there does not exist a test with the given value of the significance level
α.
The critical values are part of the critical area. Example: n = 10 and α = 0.05. The critical
area consists of two parts, {0 ≤ W ≤ 8} and {47 ≤ W ≤ 55}.
For values of n which are not in the table, use that
(W − n (n + 1)/4) / sqrt( n (n + 1) (2n + 1)/24 )
is approximately distributed as the standard normal distribution.
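Both the right critical value and the normal approximation are short computations; a minimal sketch in Python (the helper names are hypothetical; ties are not handled):

```python
from math import sqrt

def signed_rank_right_critical(n, w_left):
    """Right critical value from the left one: W_R = n(n+1)/2 - W_L."""
    return n * (n + 1) // 2 - w_left

def signed_rank_zscore(w, n):
    """Normal approximation: (W - n(n+1)/4) / sqrt(n(n+1)(2n+1)/24)."""
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w - mean) / sd

# For n = 10 and alpha = 0.05, the tabulated left critical value 8 gives
# the right critical value 55 - 8 = 47, as in the example above.
```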
10.21 Kendall rank correlation test
0.2
n
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
0.1
6
8
9
11
12
14
17
19
20
24
25
29
30
34
37
39
42
α (two-sided)
0.05 0.02
α (one-sided)
0.05 0.025 0.01
6
∗
∗
8
10
10
11
13
13
13
15
17
16
18
20
18
20
24
21
23
27
23
27
31
26
30
36
28
34
40
33
37
43
35
41
49
38
46
52
42
50
58
45
53
63
49
57
67
52
62
72
0.1
0.01
0.2
0.005
∗
∗
15
19
22
26
29
33
38
44
47
53
58
64
69
75
80
n
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
0.1
44
47
51
54
58
61
63
68
70
75
77
82
86
89
93
96
100
105
109
112
α
0.1
α
0.05
56
61
65
68
72
77
81
86
90
95
99
104
108
113
117
122
128
133
139
144
(two-sided)
0.05 0.02
(one-sided)
0.025 0.01
66
78
71
83
75
89
80
94
86
100
91
107
95
113
100
118
106
126
111
131
117
137
122
144
128
152
133
157
139
165
146
172
152
178
157
185
163
193
170
200
0.01
0.005
86
91
99
104
110
117
125
130
138
145
151
160
166
175
181
190
198
205
213
222
Usage Given are pairs (Xi , Yi ). The table contains the right critical values of Kendall’s rank
correlation test. For each Yi one considers the values Yj with j < i (remember that the x-values have to be ordered from small to large). The corresponding statistic S is the number
of positive differences, minus the number of negative differences. The left critical value is
equal to minus the right critical value. A star indicates that there does not exist a test with
the given value of the significance level α.
The use of Kendall’s rank correlation test in nonparametric regression is known as Theil’s
zero-slope test.
The critical values are part of the critical area. Example: n = 10 and α = 0.05. The critical
area consists of two parts, {S ≤ −23} and {S ≥ 23}.
For values of n which are not in the table, use that
( | S | − 1 ) / sqrt( n (n − 1) (2n + 5)/18 )
is approximately distributed as the standard normal distribution.
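The statistic S itself is easy to compute from the pairs; a minimal sketch in Python (the function name is hypothetical; ties are not handled):

```python
def kendall_s(xs, ys):
    """Kendall's S: order the pairs by x, then count positive minus
    negative differences y_i - y_j over all j < i (no ties assumed)."""
    y = [b for _, b in sorted(zip(xs, ys))]
    s = 0
    for i in range(len(y)):
        for j in range(i):
            if y[i] > y[j]:
                s += 1
            elif y[i] < y[j]:
                s -= 1
    return s
```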
10.22 Spearman rank correlation test
α (two-sided)
0.2
0.1
0.05
0.02
0.01
α (one-sided)
n
0.1
0.05
0.025
0.01
0.005
3
*
*
*
*
*
4
1.000
1.000
*
*
*
5
0.800
0.900
1.000
1.000
*
6
0.657
0.829
0.886
0.943
1.000
7
0.571
0.710
0.786
0.893
0.929
8
0.524
0.643
0.738
0.833
0.881
9
0.483
0.600
0.700
0.783
0.833
10
0.455
0.564
0.648
0.745
0.794
11
0.427
0.536
0.618
0.709
0.755
12
0.406
0.503
0.587
0.678
0.727
13
0.385
0.484
0.560
0.648
0.703
14
0.367
0.464
0.538
0.626
0.679
15
0.354
0.446
0.521
0.604
0.654
16
0.341
0.429
0.503
0.582
0.635
17
0.328
0.414
0.488
0.566
0.618
18
0.317
0.401
0.472
0.550
0.600
19
0.309
0.391
0.460
0.535
0.584
20
0.299
0.380
0.447
0.522
0.570
21
0.292
0.370
0.436
0.509
0.556
22
0.284
0.361
0.425
0.497
0.544
Usage Given are pairs (Xi , Yi ). Separately order the Xi ’s and Yi ’s. The table contains right
critical values of Spearman’s rank correlation test. The corresponding statistic is
rS = 1 − 6 ∑ni=1 di² / (n³ − n),
where di is the difference of ranks of Xi and Yi . The left critical value is equal to minus the
right critical value. A star indicates that there does not exist a test with the given value of
the significance level α. The critical values are part of the critical area. Example: n = 9 and
α = 0.10. The critical area consists of two parts, {rS ≤ −0.6} and {rS ≥ 0.6}. For values of
n which are not in the table, use that
rS · sqrt( (n − 2) / (1 − rS²) )
is approximately Student-t distributed with n − 2 degrees of freedom.
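Given the two rank sequences, rS is a one-line computation; a minimal sketch in Python (the function name is hypothetical; ties are not handled):

```python
def spearman_rs(rank_x, rank_y):
    """Spearman's rS from the two rank sequences (no ties assumed):
    rS = 1 - 6 * sum(d_i^2) / (n^3 - n)."""
    n = len(rank_x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n**3 - n)
```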
10.23 Kruskal-Wallis test
n1
2
2
2
3
3
3
3
3
3
4
4
4
4
4
4
4
4
4
4
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
n2
1
2
2
1
2
2
3
3
3
1
2
2
3
3
3
4
4
4
4
1
2
2
3
3
3
4
4
4
4
5
5
5
5
5
n3
1
1
2
1
1
2
1
2
3
1
1
2
1
2
3
1
2
3
4
1
1
2
1
2
3
1
2
3
4
1
2
3
4
5
0.10
*
*
4.571
*
4.286
4.500
4.571
4.556
4.622
*
4.500
4.458
4.056
4.511
4.709
4.167
4.555
4.545
4.654
*
4.200
4.373
4.018
4.651
4.533
3.987
4.541
4.549
4.668
4.109
4.623
4.545
4.523
4.560
0.05
*
*
*
*
*
4.714
5.143
5.361
5.600
*
*
5.333
5.208
5.444
5.791
4.967
5.455
5.598
5.692
*
5.000
5.160
4.960
5.251
5.648
4.985
5.273
5.656
5.657
5.127
5.338
5.705
5.666
5.780
0.01
*
*
*
*
*
*
*
*
7.200
*
*
*
*
6.444
6.745
6.667
7.036
7.144
7.654
*
*
6.533
*
6.909
7.079
6.955
7.205
7.445
7.760
7.309
7.338
7.578
7.823
8.000
0.001
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
8.909
9.269
*
*
*
*
*
8.727
*
8.591
8.795
9.168
*
8.938
9.284
9.606
9.920
Let ni be the sample size of the ith sample, k the number of samples, n = ∑ki=1 ni and Ri
the sum of the ranks of the ith sample (treatment). The table contains the critical values of
the statistic
H = 12/(n (n + 1)) · ∑ki=1 Ri²/ni − 3 (n + 1).
A ∗ indicates that there does not exist a test with the given value of the significance level α.
Example: the critical area for sample sizes 4, 2 and 2 with α = 0.05 is {H ≥ 5.333}.
For values not in the table, use (see [4]) the statistic
J = (H/2) · (1 + (n − k)/(n − 1 − H)),
with approximate critical values given by Jα = ((k − 1) fα,k−1,n−k + χ²α,k−1 )/2.
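The statistic H can be computed from the raw samples as follows; a minimal sketch in Python (the function name is hypothetical; ties are not handled):

```python
def kruskal_h(samples):
    """Kruskal-Wallis H from raw observations (no ties assumed):
    H = 12 / (n(n+1)) * sum(R_i^2 / n_i) - 3(n+1)."""
    # Pool all observations, remembering which sample each came from.
    pooled = sorted((v, gi) for gi, s in enumerate(samples) for v in s)
    n = len(pooled)
    rank_sums = [0.0] * len(samples)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12 / (n * (n + 1)) * sum(
        r * r / len(s) for r, s in zip(rank_sums, samples)
    ) - 3 * (n + 1)
```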
10.24 Friedman test
α = 0.05
α = 0.01
number of treatments
number of treatments
number of blocks
3
4
5
6
7
number of blocks
3
4
5
6
7
3
18
37
64
104
158
3
*
45
76
123
186
4
26
52
89
144
217
4
32
64
109
176
265
5
32
65
113
183
277
5
42
83
143
229
344
6
42
76
137
222
336
6
54
102
176
282
423
7
50
91
167
272
412
7
62
123
216
348
519
8
50
102
190
310
471
8
72
140
260
420
628
9
56
115
214
349
529
9
78
161
296
475
706
10
62
128
238
388
588
10
96
178
332
528
785
11
72
144
261
427
647
11
104
209
365
581
862
12
78
157
285
465
706
12
114
228
398
633
941
13
86
170
309
504
764
13
122
247
432
686
1019
14
86
183
333
543
823
14
126
267
465
739
1098
15
96
196
356
582
882
15
134
284
498
792
1177
Let n be the number of treatments, m the number of blocks and Tj the sum of the ranks of
the jth treatment. The table contains the critical values of the statistic

    S = Σ_{j=1}^{n} Tj² − ¼ m² n (n + 1)².

For values not in the table use that 12S / (mn(n + 1)) is approximately distributed as χ²
with n − 1 degrees of freedom.
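As an illustration (a sketch in plain Python with hypothetical function names): with m = 3 blocks that all rank n = 3 treatments identically, the rank sums are T = (3, 6, 9) and S reaches 18, exactly the tabulated critical value for α = 0.05.

```python
def friedman_s(ranks):
    """S for m blocks; each row gives the within-block ranks 1..n
    of the n treatments."""
    m, n = len(ranks), len(ranks[0])
    t = [sum(row[j] for row in ranks) for j in range(n)]  # T_j
    # S = sum_j T_j^2 - m^2 n (n+1)^2 / 4
    return sum(tj ** 2 for tj in t) - m ** 2 * n * (n + 1) ** 2 / 4

def friedman_chi2(s, m, n):
    """12 S / (m n (n+1)), approximately chi-square with n - 1 df."""
    return 12 * s / (m * n * (n + 1))
```

For example, `friedman_s([[1, 2, 3]] * 3)` evaluates to 18.0, and the corresponding χ² approximation is 12 · 18 / 36 = 6.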
10.25  Orthogonal polynomials

The table contains orthogonal polynomials in the transformed variable

    x = (χ − X̄) / d,

where χ is the original observation level and d is the distance between two consecutive
observation levels.

n   polynomial                 numerical values
3   f1 = x                      −1   0   1
3   f2 = 3x² − 2                 1  −2   1
4   f1 = 2x                     −3  −1   1   3
4   f2 = x² − 5/4                1  −1  −1   1
4   f3 = x(20x² − 41)/6         −1   3  −3   1
5   f1 = x                      −2  −1   0   1   2
5   f2 = x² − 2                  2  −1  −2  −1   2
5   f3 = x(5x² − 17)/6          −1   2   0  −2   1
6   f1 = 2x                     −5  −3  −1   1   3   5
6   f2 = (12x² − 35)/8           5  −1  −4  −4  −1   5
6   f3 = x(20x² − 101)/12       −5   7   4  −4  −7   5
7   f1 = x                      −3  −2  −1   0   1   2   3
7   f2 = x² − 4                  5   0  −3  −4  −3   0   5
7   f3 = (x³ − 7x)/6            −1   1   1   0  −1  −1   1
8   f1 = 2x                     −7  −5  −3  −1   1   3   5   7
8   f2 = x² − 21/4               7   1  −3  −5  −5  −3   1   7
8   f3 = x(4x² − 37)/6          −7   5   7   3  −3  −7  −5   7
9   f1 = x                      −4  −3  −2  −1   0   1   2   3   4
9   f2 = 3x² − 20               28   7  −8 −17 −20 −17  −8   7  28
9   f3 = x(5x² − 59)/6         −14   7  13   9   0  −9 −13  −7  14
10  f1 = 2x                     −9  −7  −5  −3  −1   1   3   5   7   9
10  f2 = (4x² − 33)/8            6   2  −1  −3  −4  −4  −3  −1   2   6
10  f3 = x(20x² − 293)/12      −42  14  35  31  12 −12 −31 −35 −14  42
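The tabulated values can be checked mechanically: evaluating each fi at the levels reproduces the integer columns, and distinct polynomials are orthogonal, i.e. their products sum to zero over the levels. A sketch for n = 5 (plain Python with exact rational arithmetic; not part of the compendium):

```python
from fractions import Fraction

# the n = 5 levels of the transformed variable x
xs = [Fraction(k) for k in (-2, -1, 0, 1, 2)]

f1 = [x for x in xs]                          # f1 = x
f2 = [x ** 2 - 2 for x in xs]                 # f2 = x^2 - 2
f3 = [x * (5 * x ** 2 - 17) / 6 for x in xs]  # f3 = x(5x^2 - 17)/6

# products of distinct rows summed over the levels (all zero)
dots = [sum(p * q for p, q in zip(a, b))
        for a, b in ((f1, f2), (f1, f3), (f2, f3))]
```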
Chapter 11
Dictionary English-Dutch
Translation of some terms from probability and statistics
alternative hypothesis – alternatieve hypothese
approximate methods – benaderende methoden
asymptotic variance – asymptotische variantie
asymptotically unbiased – asymptotisch zuiver
at least – minstens
Bayes’ rule – regel van Bayes
biased estimator – onzuivere schatter
binomial distribution – binomiale verdeling
bootstrap – bootstrap
central limit theorem – centrale limietstelling
chi-square distribution – chi-kwadraatverdeling
coin – munt
combination – combinatie
complement – complement
compound Poisson distribution – samengestelde Poissonverdeling
conditional – voorwaardelijk
conditional density – voorwaardelijke kansdichtheid
conditional distribution – voorwaardelijke verdeling
conditional expectation – voorwaardelijke verwachting
conditional probability – voorwaardelijke kans
confidence interval – betrouwbaarheidsinterval
consistent – consistent
contingency tables – afhankelijkheidstabellen
continuity theorem – continuïteitsstelling
convergence almost surely – bijna zekere convergentie
convergence in distribution – convergentie in verdeling
convergence in probability – convergentie in kans
convolution – convolutie
correlation – correlatie
countably infinite – aftelbaar oneindig
covariance – covariantie
critical region – kritieke gebied
cumulative distribution function – cumulatieve verdelingsfunctie
degree of freedom (df) – vrijheidsgraad
denominator – noemer
density function – (kans)dichtheid
dependent – afhankelijk
Design of Experiments – proefopzetten
die – dobbelsteen
discrete – discreet
disjoint – disjunct
empty set – lege verzameling
estimated standard error – geschatte standaardfout
estimate – schatting
estimator – schatter
event – gebeurtenis
expected value, mean – verwachting
finite population correction – eindige-populatiecorrectie
Fisher-distribution (F-distribution) – Fisher-verdeling (F-verdeling)
fit – aanpassing
frequency function – kansfunctie
geometric distribution – geometrische verdeling
head – kop
hypergeometric distribution – hypergeometrische verdeling
independence – onafhankelijkheid
independent – onafhankelijk
independent random variables – onafhankelijke stochasten
indicator function – indicatorfunctie
intersection – doorsnede
joint density – gezamenlijke kansdichtheid
joint distribution – gezamenlijke verdeling
joint frequency function – gezamenlijke kansfunctie
law of large numbers – wet van de grote aantallen
law of total expectation – wet van de totale verwachting
likelihood – aannemelijkheid
limit theorems – limietstellingen
marginal density – marginale kansdichtheid
marginal distribution – marginale verdeling
marginal frequency function – marginale kansfunctie
maximum likelihood estimator (MLE) – meest aannemelijke schatter
measurement error – meetfout
median – mediaan
memoryless – geheugenloos
mode – modus
moment generating function – momentgenererende functie
Monte Carlo method – Monte Carlo methode
multinomial coefficient – multinomiaalcoëfficiënt
multiplication principle – vermenigvuldigingsregel
mutually independent – onafhankelijk
negative binomial distribution – negatief binomiale verdeling
normal approximation – normale benadering
null distribution – nulverdeling
null hypothesis – nulhypothese
numerator – teller
order statistics – geordende statistische grootheden
ordered sample – geordende steekproef
p-value – p-waarde
permutation – permutatie
population correlation coefficient – populatiecorrelatiecoëfficiënt
population covariance – populatiecovariantie
population mean – populatiegemiddelde
population standard deviation – populatiestandaarddeviatie
population variance – populatievariantie
power – onderscheidingsvermogen
probability – kans
probability (mass) function – kansfunctie
probability measure – kansmaat
quantile – kwantiel
quartile – kwartiel
random sum – stochastische som
random variable – stochastische variabele, stochast
range – bereik
rank – rang
ratio estimate – quotiëntschatting
sample – steekproef
sample distribution – steekproefverdeling
sample mean – steekproefgemiddelde
sample moment – steekproefmoment
sample space – uitkomstenruimte
sample variance – steekproefvariantie
sampling fraction – steekproeffractie
scale parameter – schaalparameter
set – verzameling
shape parameter – vormparameter
signed rank test – rangtekentoets
simulation – simulatie
skewness – scheefheid
standard error – standaardfout
standardization – standaardisering
Student-distribution (t-distribution) – Student-verdeling (t-verdeling)
tail – staart (van verdeling), munt
test – toets
test statistic – toetsingsgrootheid
toss – worp
type I error – type I fout
type II error – type II fout
unbiased estimator – zuivere schatter
union – vereniging
unordered sample – ongeordende steekproef
with replacement – met teruglegging
without replacement – zonder teruglegging
Bibliography
[1] M. Abramowitz and I.A. Stegun (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York, 1965.
[2] J.B. Dijkstra, M.J.M. Rietjens and L.P.F.M. van Reij, Statistische Routine-bibliotheek
PP-4 in Turbo Pascal, PP-4.111 Verdelingsfuncties, Eindhoven University of Technology
Computer Centre, 1993.
[3] H.L. Harter and D.B. Owen (eds.), Selected Tables in Mathematical Statistics (7 volumes), Markham, Chicago, 1970-1980.
[4] R.L. Iman and J.M. Davenport, New approximations to the exact distribution of the
Kruskal-Wallis test statistic, Commun. Statist. Theor. Meth. A 5 (1976), 1335-1348.
[5] N.L. Johnson, S. Kotz and A.W. Kemp, Univariate Discrete Distributions, Wiley, 1993.
[6] N.L. Johnson, S. Kotz and N. Balakrishnan, Continuous Univariate Distributions volume
1, 2nd ed., Wiley, New York, 1994.
[7] N.L. Johnson, S. Kotz and N. Balakrishnan, Continuous Univariate Distributions volume
2, 2nd ed., Wiley, New York, 1995.
[8] D.C. Montgomery and G.C. Runger, Applied Statistics and Probability for Engineers, 6th
ed., Wiley, New York, 2014.
[9] S. Kotz, N.L. Johnson (eds.), Encyclopaedia of Statistical Sciences (9 volumes), Wiley,
New York, 1982-1988.
[10] E.S. Pearson and H.O. Hartley (eds.), Biometrika Tables for Statisticians, Cambridge
University Press, Cambridge, 1954.
[11] M.A. van de Wiel, Exact null distributions of quadratic distribution-free statistics for
two-way classification, J. Statist. Plann. Inference 120 (2004), 29–40.
[12] M.A. van de Wiel and A. Di Bucchianico, Fast computation of the exact null distribution
of Spearman’s ρ and Page’s L statistic for samples with and without ties, J. Statist.
Plann. Inference 92 (2001), 133–145.
Index
2^k design, 34
adjusted coefficient of determination, 28
alternative hypothesis, 19
Analysis of Variance, 30
one-way, 30
one-way with blocks, 31
two-way, 32
ANOVA
one-way, 30
one-way with blocks, 31
two-way, 32
average
sample, 17
Bayes’s rule, 1
Bernoulli distribution, 5
beta
distribution, 9
function, 9, 15
bibliography, 74
binomial distribution
table, 55–58
binomial distribution, 5
block factor, 31
Boole’s law, 1
Cauchy distribution, 10
chain rule, 1
χ²-distribution, 10
table, 41
coefficient
of determination, 24, 28
of variation, 36
complement’s rule, 1
conditional
expectation, 4
probability, 1
confidence interval, 18, 24
β0 , 24
β1 , 24
expected response, 24, 27
consistent estimator, 17
contingency tables, 33
contrast, 34
convolution, 2, 3
Cook, 28
Cook’s distance, 28
correlation, 25
coefficient, 4, 25
correlation coefficient, 18, 21
covariance, 4
matrix, 4, 26
Cp , 29
critical region, 19
De Morgan’s laws, 1
design
factorial, 34
fractional, 34
matrix, 26
Design of Experiments, 34
dictionary
English-Dutch, 70
difference rule, 1
disjoint events, 1
distribution
Bernoulli, 5
beta, 9
binomial, 5
Cauchy, 10
χ², 10
continuous, 9
discrete, 5
Erlang, 10
exponential, 11
F, 11
gamma, 12
geometric, 6
Gumbel, 12
hypergeometric, 6
logistic, 13
lognormal, 13
multinomial, 7
negative binomial, 7
normal, 14
Pareto, 14
Poisson, 8
standard normal, 14
Student-t, 14
t, 14
uniform
continuous, 15
discrete, 8
Weibull, 16
DOE, 34
factor, 34
factorial design, 30, 34
formula
Wald, 4
fractional design, 34
Friedman test, 68
function
beta, 9, 15
gamma, 9
F-distribution, 11
table, 42
effect
interaction, 35
main, 35
Erlang distribution, 10
error
propagation, 36
sum of squares, 23, 26, 28
type I, 19
type II, 19
estimation, 17
procedures, 18
estimator
consistent, 17
minimum variance unbiased, 17
MVU, 17
sufficient, 17
unbiased, 17
events, 1
disjoint, 1
independent, 1
mutually exclusive, 1
expectation
conditional, 4
explanatory variables, 26
exponential distribution, 11
independence, 2
independent, 3
events, 1
interaction, 34
effect, 35
gamma
distribution, 12
function, 9
geometric distribution, 6
Gumbel distribution, 12
hypergeometric distribution, 6
hypothesis
alternative, 19
null, 19
Kendall rank correlation test, 65
Kruskal-Wallis test, 67
lack-of-fit, 25
law
Boole’s, 1
De Morgan’s, 1
propagation of random errors, 36
propagation of relative errors, 36
Least Significant Difference, 30–32
level
combination, 34
of significance, 19
linear regression, 23
literature, 74
logistic distribution, 13
lognormal distribution, 13
LSD, 30–32
main effect, 35
matrix
covariance, 4
design, 26
mean
sample, 17
minimum variance unbiased estimator, 17
MSE, 17
multinomial distribution, 7
mutually exclusive events, 1
negative binomial distribution, 7
normal distribution, 14
standard, 14
table, 38–39
null hypothesis, 19
orthogonal polynomials
table, 69
Pareto distribution, 14
point estimator, 17
Poisson
distribution, 8
distribution
table, 59–61
power, 19
prediction interval
response, 25, 27
predictor variables, 26
probability distribution, see distribution
propagation of errors, 36
random, 36
relative, 36
R², 24, 28
rank
correlation test
Kendall, 65
Spearman, 66
sum test of Wilcoxon, 62–63
region
critical, 19
regression
multiple linear, 26
simple linear, 23
variables, 26
residuals, 23, 26
standardized, 28
studentized, 28
ρ, 4, 25
Rp², 28
rule
(co)-variances, 4
expectations, 4
Bayes’s, 1
chain, 1
complement’s, 1
difference, 1
s², 18
sample
average, 17
correlation coefficient, 25
mean, 17
size, 22
variance, 17
signed rank test of Wilcoxon, 64
significance, 19
s²p, 18
Spearman rank correlation test, 66
SSE, 23, 26, 28
SSLOF, 25
SSPE, 25
SSReg, 24
SST, 24
standard normal distribution, 14
standardized residuals, 28
statistical testing, 19
Student t-distribution, 14
table, 40
studentized
range distribution table, 52–54
residuals, 28
sufficient estimator, 17
sum of squares, 24
Sxx, 23
Sxy, 23
Syy, 23
table
binomial distribution, 55–58
χ² distribution, 41
contingency, 33
Friedman test, 68
F -distribution, 42
Kendall rank correlation test, 65
Kruskal-Wallis test, 67
normal distribution, 38–39
orthogonal polynomials, 69
Poisson distribution, 59–61
Spearman rank correlation test, 66
Student t-distribution, 40
studentized range distribution, 52–54
t-distribution, 40
Theil zero-slope test, 65
Wilcoxon
rank sum test, 62–63
signed rank test, 64
test
Friedman, 68
Kendall, 65
Kruskal-Wallis, 67
power, 19
Spearman, 66
studentized range, 52–54
Theil, 65
Wilcoxon
rank sum test, 62–63
signed rank test, 64
testing procedures, 20–21
Theil
zero-slope test, 65
total probability, 1
t-distribution, 14
table, 40
unbiased estimator, 17
uniform distribution
continuous, 15
discrete, 8
variance, 4
inflation factor, 29
sample, 17
variation coefficient, 36
VIF, 29
Wald’s formula, 4
Weibull distribution, 16
Wilcoxon
rank sum test, 62–63
signed rank test, 64