Stat 330 Formula Sheet Final Exam

probability mass function (pmf) properties: 0 ≤ pX(k) ≤ 1 for all k = 0, 1, 2, . . . and ∑_k pX(k) = 1
cumulative distribution function properties:
FX(t) non-decreasing in t, lim_{t→−∞} FX(t) = 0, lim_{t→∞} FX(t) = 1; for discrete X, FX is a step function that jumps at each possible value k.
expected value & variance properties:
E[aX + bY] = aE[X] + bE[Y], E[X^2] = Var[X] + (E[X])^2, Var[aX + b] = a^2 Var[X]
Var[X + Y] = Var[X] + Var[Y] if X, Y are independent.
E[X · Y] = E[X] · E[Y] if X, Y are independent.
covariance of X and Y: Cov(X, Y) = ∑_{(x,y)} (x − E[X])(y − E[Y]) pX,Y(x, y)
correlation of X and Y: Corr(X, Y) = Cov(X, Y) / √(Var[X] Var[Y]),
correlation is in [−1, 1], unitless.
discrete distribution(s):
Binomial B(n, p): X = number of successes in n indep. trials, p = P (success) (Bernoulli trials).
pmf: pX(k) = (n choose k) p^k (1 − p)^(n−k), for FX(k) see Binomial table, E[X] = np, Var[X] = np(1 − p)
Geometric Geo(p): X = number of Bernoulli trials until 1st success, p = P (success).
pmf: pX(k) = (1 − p)^(k−1) p, FX(k) = 1 − (1 − p)^k, E[X] = 1/p, Var[X] = (1 − p)/p^2
Poisson P oi(λ): X = number of events in some unit of time/space,
λ = rate of events in some unit of time/space.
pmf: pX(k) = e^(−λ) λ^k / k!, for FX(k) see Poisson table, E[X] = λ, Var[X] = λ
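As a quick sanity check, the binomial pmf above can be evaluated directly; a minimal Python sketch (the helper name `binom_pmf` and the chosen n, p are illustrative):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial pmf: P(X = k) = (n choose k) p^k (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
total = sum(pmf)                                          # pmf sums to 1
mean = sum(k * q for k, q in enumerate(pmf))              # E[X] = np = 3.0
var = sum((k - mean)**2 * q for k, q in enumerate(pmf))   # Var[X] = np(1-p) = 2.1
print(total, mean, var)
```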
continuous random variable X: uncountable set of possible outcomes (= interval),
probability density function: fX(x) = F′X(x),
cumulative distribution function: FX(t) = P(X ≤ t),
expected value E[X] = ∫_{−∞}^{∞} x fX(x) dx, variance Var[X] = ∫_{−∞}^{∞} (x − E[X])^2 fX(x) dx.
probability density function (pdf) properties: fX(x) ≥ 0 for all x and ∫_{−∞}^{∞} fX(x) dx = 1.
cumulative distribution function (cdf) properties:
FX(t) non-decreasing in t, lim_{t→−∞} FX(t) = 0, lim_{t→∞} FX(t) = 1.
rules for densities & distribution functions
P(a ≤ X ≤ b) = ∫_a^b fX(x) dx = FX(b) − FX(a), P(X = a) = 0, FX(t) = ∫_{−∞}^t fX(x) dx.
Uniform U (a, b), X = random value between a and b, all ranges of values in [a, b] are equally likely,
pdf: f(x) = 1/(b − a) for x ∈ [a, b], cdf: FX(x) = (x − a)/(b − a) for x ∈ [a, b],
E[X] = (a + b)/2, Var[X] = (b − a)^2/12.
Exponential Exp(λ), X = time/space until 1st occurrence of event,
λ = rate of events in some unit of time/space.
pdf: f(x) = λe^(−λx) for x ≥ 0, cdf: FX(x) = 1 − e^(−λx) for x ≥ 0,
E[X] = 1/λ, Var[X] = 1/λ^2.
The Exponential distribution is memoryless, i.e. P(X ≤ s + t | X ≥ s) = P(X ≤ t).
Erlang Distribution Erlang(k, λ): If Y1 , . . . , Yk are k independent exponential random variables
with parameter λ, their sum X has an Erlang distribution:
X := ∑_{i=1}^k Yi is Erlang(k, λ); k is the stage parameter, λ is the rate parameter.
Erlang density: f(x) = λe^(−λx) · (λx)^(k−1)/(k − 1)! for x ≥ 0,
E[X] = k/λ, Var[X] = k/λ^2.
The Erlang cdf is calculated using the Poisson cdf: FX(t) = 1 − FY(k − 1),
where X ∼ Erlang(k, λ) and Y ∼ Poi(λt), so use the Poisson cdf table with parameter λt.
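The Erlang–Poisson cdf identity can be checked numerically; a sketch (the helper name `erlang_cdf` is illustrative):

```python
from math import exp, factorial

def erlang_cdf(t, k, lam):
    """F_X(t) for X ~ Erlang(k, lam) via the Poisson identity:
    F_X(t) = 1 - F_Y(k - 1), where Y ~ Poi(lam * t)."""
    poisson_cdf = sum(exp(-lam * t) * (lam * t)**j / factorial(j) for j in range(k))
    return 1.0 - poisson_cdf

# k = 1 must reduce to the exponential cdf 1 - e^(-lam * t)
print(erlang_cdf(2.0, 1, 0.5))  # = 1 - e^(-1)
```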
Normal r.v.: X ∼ N(µ, σ^2), Normal density is “bell-shaped”: f(x) = 1/√(2πσ^2) · e^(−(x−µ)^2/(2σ^2)),
E[X] = µ, Var[X] = σ^2.
standardization: FX(x) = Φ((x − µ)/σ); Z ∼ N(0, 1), Φ(z) ≡ FZ(z), and Φ(−z) = 1 − Φ(z).
X ∼ N(µx, σx^2), Y ∼ N(µy, σy^2), then W := aX + bY has a normal distribution W ∼ N(µW, σW^2),
where µW = aµx + bµy and σW^2 = a^2 σx^2 + b^2 σy^2 + 2ab Cov(X, Y).
Central Limit Theorem (CLT): If X1, X2, . . ., Xn are i.i.d. r.v.'s with E[Xi] = µ, Var[Xi] = σ^2,
then X̄ := (1/n) ∑_{i=1}^n Xi is approx. N(µ, σ^2/n) and Sn = ∑_i Xi is approx. N(nµ, nσ^2).
Bin(n, p) is approx. N(np, np(1 − p)) for large n (if np > 5),
Poi(λ) is approx. N(λ, λ) for large λ.
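A small simulation illustrating the CLT for the sample mean of U(0, 1) draws, whose true mean is 1/2 and variance 1/12 (a sketch; the seed and sample sizes are arbitrary choices):

```python
import random
import statistics

random.seed(330)  # arbitrary seed, for reproducibility only

def sample_mean(n):
    """Mean of n i.i.d. U(0,1) draws; the CLT says X-bar ~ approx N(1/2, (1/12)/n)."""
    return statistics.fmean(random.random() for _ in range(n))

n, reps = 50, 2000
means = [sample_mean(n) for _ in range(reps)]
m = statistics.fmean(means)     # close to mu = 0.5
v = statistics.variance(means)  # close to sigma^2/n = (1/12)/50
print(m, v)
```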
Linear Congruential Method: x0 seed, xn ≡ (a·xn−1 + c) mod m for a, c, m; then ui := xi/m is from uniform U(0, 1).
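A minimal implementation of the method (a sketch; the constants a, c, m are glibc-style example values, not ones prescribed by the sheet):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31, count=5):
    """Linear congruential generator: x_n = (a*x_{n-1} + c) mod m,
    returning u_n = x_n / m in [0, 1)."""
    x, us = seed, []
    for _ in range(count):
        x = (a * x + c) % m
        us.append(x / m)
    return us

print(lcg(42))
```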
inverse method for discrete distributions: Let p(x1), p(x2), . . ., p(xn) for x1 < x2 < . . . < xn
be a pmf; then for a random number u, X := xj ⇐⇒ F(xj−1) < u ≤ F(xj), where
F(xj−1) = ∑_{k=1}^{j−1} p(xk) and F(xj) = ∑_{k=1}^{j} p(xk).
inverse method for continuous distributions: xi = FX^(−1)(ui).
Algorithm: first find FX, then set u = FX(x) and solve for x.
Simulating from distributions: X ∼ Exp(λ): X := −(1/λ) ln U; X ∼ Bin(n, p): take n uniform
random numbers and count how many fall under p;
X ∼ Geo(p): count how many random numbers are needed until the first falls under p.
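The Exp(λ) and Geo(p) recipes above, sketched in Python (function names, seed, and sample sizes are illustrative):

```python
import math
import random
import statistics

random.seed(1)  # arbitrary seed

def exp_inverse(lam):
    """Inverse-transform draw from Exp(lam): solve u = 1 - e^(-lam x) for x.
    The sheet's -ln(U)/lam is equivalent, since U and 1-U are both U(0,1)."""
    return -math.log(1 - random.random()) / lam

def geo_by_counting(p):
    """Geo(p): count uniforms until the first falls under p."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

lam, p, n = 2.0, 0.25, 20000
m_exp = statistics.fmean(exp_inverse(lam) for _ in range(n))    # close to 1/lam = 0.5
m_geo = statistics.fmean(geo_by_counting(p) for _ in range(n))  # close to 1/p = 4.0
print(m_exp, m_geo)
```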
Poisson Process with rate λ: X(t) ∈ {0, 1, 2, 3, . . .}, X(t2) − X(t1) ∼ Poi(λ(t2 − t1)) for 0 ≤ t1 < t2;
for any 0 ≤ t1 < t2 ≤ t3 < t4, X(t2) − X(t1) is independent of X(t4) − X(t3).
time of jth occurrence: Oj ∼ Erlang(j, λ),
time between (j − 1)st and jth arrival: Ij ∼ Exp(λ).
X(t) is a Poisson process with rate λ ⇐⇒ Ij ∼ Exp(λ).
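The interarrival characterization gives a direct way to simulate X(t): sum Exp(λ) gaps until they pass t. A sketch (seed and parameters arbitrary):

```python
import math
import random
import statistics

random.seed(7)  # arbitrary seed

def poisson_count(lam, t):
    """Simulate X(t) by accumulating Exp(lam) interarrival times I_j;
    the number of arrivals in [0, t] is then Poi(lam * t)."""
    arrivals, clock = 0, 0.0
    while True:
        clock += -math.log(random.random()) / lam  # next interarrival ~ Exp(lam)
        if clock > t:
            return arrivals
        arrivals += 1

lam, t, reps = 3.0, 2.0, 5000
counts = [poisson_count(lam, t) for _ in range(reps)]
m = statistics.fmean(counts)     # close to lam*t = 6
v = statistics.variance(counts)  # close to lam*t = 6 (Poisson: mean = variance)
print(m, v)
```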
Birth & Death Processes: X(t) ∈ {0, 1, 2, 3, . . .} for all t; visualize with a state diagram.
steady-state probabilities: lim_{t→∞} P(X(t) = k) = pk.
Balance equations: (λk + µk) pk = λk−1 pk−1 + µk+1 pk+1.
S = 1 + ∑_{k=1}^∞ (λ0 λ1 · . . . · λk−1)/(µ1 µ2 · . . . · µk);
the B&D process is stable only if S exists (the sum converges).
Then p0 = S^(−1) and pk = (λ0 λ1 · . . . · λk−1)/(µ1 µ2 · . . . · µk) · p0.
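The steady-state computation can be sketched for a truncated birth–death chain (the rate functions and truncation level are illustrative; with constant λk = 1, µk = 2 this reproduces the M/M/1 result pk = a^k(1 − a) with a = 1/2):

```python
def bd_steady_state(lam, mu, n_states):
    """Steady state of a birth-death chain truncated at n_states:
    p_k is proportional to (lam(0)...lam(k-1)) / (mu(1)...mu(k))."""
    weights = [1.0]
    for k in range(1, n_states):
        weights.append(weights[-1] * lam(k - 1) / mu(k))
    s = sum(weights)  # finite stand-in for S; valid when the tail is negligible
    return [w / s for w in weights]

p = bd_steady_state(lambda k: 1.0, lambda k: 2.0, 50)
print(p[0], p[1])  # close to 0.5 and 0.25
```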
Queueing Theory with arrival rate λ, service rate µ: traffic intensity a = λ/µ, ρ = a/c.
M/M/1 Queue
p0 = 1 − a, pk = a^k (1 − a),
L = a/(1 − a), Lq = a^2/(1 − a),
W = (1/µ) · 1/(1 − a), Ws = 1/µ, Wq = (1/µ) · a/(1 − a),
P(q(t) ≤ x) = 1 − a·e^(−x(µ−λ)), where q(t) is the time spent in the queue.
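The M/M/1 formulas can be bundled into a small helper and cross-checked against Little's law, L = λW and Lq = λWq (a sketch; names are illustrative):

```python
def mm1_metrics(lam, mu):
    """M/M/1 steady-state metrics from the formulas above; needs a = lam/mu < 1."""
    a = lam / mu
    assert a < 1, "unstable queue: requires lam < mu"
    return {
        "L": a / (1 - a),            # mean number in system
        "Lq": a * a / (1 - a),       # mean number in queue
        "W": 1 / (mu * (1 - a)),     # mean time in system
        "Wq": a / (mu * (1 - a)),    # mean time in queue
    }

m = mm1_metrics(lam=2.0, mu=5.0)
print(m)
# Little's law sanity checks: L = lam * W and Lq = lam * Wq
```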
M/M/1/K Queue
p0 = (1 − a)/(1 − a^(K+1)), pk = a^k (1 − a)/(1 − a^(K+1)),
L = a/(1 − a) − (K + 1)a^(K+1)/(1 − a^(K+1)),
λa = (1 − pK)λ,
W = L/λa, Ws = 1/µ, Wq = W − Ws, Lq = Wq · λa
M/M/c Queue (ρ = a/c)
p0 = [ ∑_{k=0}^{c−1} a^k/k! + (a^c/c!) · 1/(1 − ρ) ]^(−1),
pk = (a^k/k!) p0 for 0 ≤ k ≤ c − 1, pk = (a^k/(c! c^(k−c))) p0 for k ≥ c,
C(c, a) = p0 · a^c/(c!(1 − ρ)),
Lq = ρ/(1 − ρ) · C(c, a), Wq = 1/(cµ(1 − ρ)) · C(c, a), Ws = 1/µ,
W = Wq + Ws, L = a + ρ/(1 − ρ) · C(c, a)
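C(c, a) — the probability an arrival has to wait — can be computed directly from p0 (a sketch; for c = 1 it must reduce to the M/M/1 value a):

```python
from math import factorial

def erlang_c(c, a):
    """Waiting probability C(c, a) in M/M/c, with a = lam/mu and rho = a/c < 1."""
    rho = a / c
    assert rho < 1, "unstable queue: requires a < c"
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    return p0 * a**c / (factorial(c) * (1 - rho))

print(erlang_c(1, 0.5))  # reduces to a = 0.5 for a single server
print(erlang_c(3, 2.0))
```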
Estimation and Confidence Intervals
Parameter   Estimate    (1 − α) · 100% Confidence Interval
µ           x̄           x̄ ± zα/2 · √(s^2/n)
p           p̂           p̂ ± zα/2 · √(p̂(1 − p̂)/n)   (substitution)
                        p̂ ± zα/2 · 1/(2√n)          (conservative)
µ1 − µ2     x̄1 − x̄2     x̄1 − x̄2 ± zα/2 · √(s1^2/n1 + s2^2/n2)
p1 − p2     p̂1 − p̂2     p̂1 − p̂2 ± zα/2 · √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)   (substitution)
                        p̂1 − p̂2 ± zα/2 · (1/2)√(1/n1 + 1/n2)                 (conservative)
where zα/2 = Φ^(−1)(1 − α/2), and
α:      0.1    0.05   0.02   0.01
zα/2:   1.65   1.96   2.33   2.58
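A sketch turning the first two interval formulas into code, using the zα/2 values from the table above (helper names are illustrative):

```python
from math import sqrt

Z = {0.10: 1.65, 0.05: 1.96, 0.02: 2.33, 0.01: 2.58}  # z_{alpha/2} table values

def mean_ci(xbar, s2, n, alpha=0.05):
    """(1 - alpha)*100% CI for mu: xbar +/- z_{alpha/2} * sqrt(s^2 / n)."""
    half = Z[alpha] * sqrt(s2 / n)
    return (xbar - half, xbar + half)

def prop_ci(phat, n, alpha=0.05):
    """Substitution CI for p: phat +/- z_{alpha/2} * sqrt(phat(1-phat)/n)."""
    half = Z[alpha] * sqrt(phat * (1 - phat) / n)
    return (phat - half, phat + half)

print(mean_ci(10.0, 4.0, 100))  # approximately (9.608, 10.392)
print(prop_ci(0.4, 400))
```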
Hypothesis Testing
Null Hypothesis (H0)   Alternative Hypothesis (Ha)                 Test Statistic
µ = #                  µ > #, µ < # or µ ≠ #                       z = (x̄ − #)/(s/√n)
p = #                  p > #, p < # or p ≠ #                       z = (p̂ − #)/√(#(1 − #)/n)
µ1 − µ2 = #            µ1 − µ2 > #, µ1 − µ2 < # or µ1 − µ2 ≠ #    z = (x̄1 − x̄2 − #)/√(s1^2/n1 + s2^2/n2)
p1 − p2 = #            p1 − p2 > #, p1 − p2 < # or p1 − p2 ≠ #    z = (p̂1 − p̂2 − #)/√(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)  *, **
* If # = 0, can also use z = (p̂1 − p̂2 − #)/√(p̂(1 − p̂)(1/n1 + 1/n2)), where p̂ = (n1 p̂1 + n2 p̂2)/(n1 + n2).
** For large sample size, z = (p̂1 − p̂2 − #)/√(p̂(1 − p̂)(1/n1 + 1/n2)) is equivalent to
z = (p̂1 − p̂2 − #)/√(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2).
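The two-proportion test statistic for H0: p1 − p2 = 0, with both the pooled (*) and unpooled (**) standard errors (a sketch; names and sample values are illustrative):

```python
from math import sqrt

def two_prop_z(phat1, n1, phat2, n2, pooled=True):
    """z statistic for H0: p1 - p2 = 0.  pooled=True uses
    phat = (n1*phat1 + n2*phat2)/(n1 + n2) in the standard error (the * variant);
    pooled=False uses the unpooled standard error (the ** variant)."""
    diff = phat1 - phat2
    if pooled:
        phat = (n1 * phat1 + n2 * phat2) / (n1 + n2)
        se = sqrt(phat * (1 - phat) * (1 / n1 + 1 / n2))
    else:
        se = sqrt(phat1 * (1 - phat1) / n1 + phat2 * (1 - phat2) / n2)
    return diff / se

print(two_prop_z(0.55, 200, 0.45, 200, pooled=True))
print(two_prop_z(0.55, 200, 0.45, 200, pooled=False))
```

With the large, balanced samples here the two variants give nearly the same z, which is the point of the ** remark.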