Stat 330 (Spring 2015)
Slide set 14
Last update: February 3, 2015
Gamma Example (Baron 4.7)

Compilation of a computer program consists of 3 blocks that are processed
sequentially, one after the other. Each block takes Exponential time with
a mean of 5 minutes, independently of the other blocks.

(a) Compute the expectation and variance of the total compilation time.

The total compilation time T is the sum of three independent Exponential(1/5)
times, so T ∼ Gamma(α = 3, λ = 1/5). For a Gamma random variable T with
α = 3 and λ = 1/5,

E(T) = α/λ = 3/(1/5) = 15 (min)  and  Var(T) = α/λ² = 3/(1/5)² = 75 (min²)

(b) Compute the probability for the entire program to be compiled in less
than 12 minutes.

We need P(T < 12) where T ∼ Gamma(3, 1/5). This can be done using repeated
integration by parts (see Baron p. 87).

Gamma Example (Cont'd)

However, we will use the Gamma-Poisson formula: for T ∼ Gamma(α, λ) and
X ∼ Po(λt),

P(T > t) = P(X < α)  and  P(T ≤ t) = P(X ≥ α)

Note that t = 12, so X ∼ Po(12/5), i.e., X ∼ Po(2.4). From the Gamma-Poisson
formula,

P(T < 12) = P(T ≤ 12) = P(X ≥ 3) = 1 − Po_{2.4}(2) = 1 − 0.5697 = 0.4303
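Both parts can be cross-checked numerically. The following is a minimal sketch in Python (assuming the scipy library is available; the slides themselves rely on tables, not code):

```python
from scipy.stats import gamma, poisson

alpha, lam = 3, 1/5                      # T ~ Gamma(alpha, lam), rate parametrization

# (a) E(T) = alpha/lam and Var(T) = alpha/lam^2
print(alpha / lam, alpha / lam**2)       # 15.0 75.0

# (b) Gamma-Poisson formula: P(T <= 12) = P(X >= 3) for X ~ Po(2.4)
p_gp = 1 - poisson.cdf(alpha - 1, lam * 12)   # 1 - Po_{2.4}(2)
p_direct = gamma.cdf(12, alpha, scale=1/lam)  # scipy's scale is 1/lambda
print(p_gp, p_direct)                         # both ~0.4303
```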
Erlang distribution

Hits on a web page. Recall: we modeled the waiting time until the first hit
as Exp(2). How long do we have to wait for the second hit?

To calculate the waiting time for the second hit, we add the waiting time
until the first hit and the time between the first and the second hit.

Let Y1 = the waiting time until the first hit. Then Y1 ∼ Exponential with
λ = 2.

Let Y2 = the time between the first and second hit. By the memoryless
property of the exponential distribution, Y2 has the same distribution as
the waiting time for the first hit. That is, Y2 ∼ Exponential with λ = 2.

We want the total time until the second hit, X := Y1 + Y2. This is the sum
of two independent exponential random variables.

Erlang distribution (cont'd)

If Y1, . . . , Yk are k independent exponential random variables with
parameter λ, their sum X has an Erlang distribution:

X := Σ_{i=1}^{k} Yi  is Erlang(k, λ).

The Erlang density f_{k,λ} is

f(x) = λ^k x^{k−1} e^{−λx} / (k − 1)!  for x ≥ 0,

where k is called the stage parameter and λ is the rate parameter.

Note this is the same density as that of Gamma(α, λ) with α = k an integer.
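As a sanity check on this definition, one can simulate sums of k independent exponentials and compare against the Erlang cdf. A sketch assuming Python with numpy and scipy (seed and sample size are arbitrary choices):

```python
import numpy as np
from scipy.stats import erlang

k, lam = 2, 2                          # stage and rate parameters
rng = np.random.default_rng(330)       # arbitrary seed

# X = Y1 + ... + Yk with Yi ~ Exponential(lam); numpy's scale is 1/lam
x = rng.exponential(scale=1/lam, size=(100_000, k)).sum(axis=1)

t = 1.0
print((x <= t).mean())                 # empirical P(X <= t), ~0.59
print(erlang.cdf(t, k, scale=1/lam))   # exact value, ~0.594
```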
Erlang distribution (cont'd)

The expected value and variance of an Erlang distributed variable X can be
computed using the properties of expected value and variance for sums of
independent random variables:

E[X] = E[Σ_{i=1}^{k} Yi] = Σ_{i=1}^{k} E[Yi] = k · (1/λ)

Var[X] = Var[Σ_{i=1}^{k} Yi] = Σ_{i=1}^{k} Var[Yi] = k · (1/λ²)

Alternatively, we can use the formulas for the expectation and variance of a
Gamma(k, λ) random variable. This is so because the Erlang(k, λ) distribution
is the same as a Gamma(k, λ) distribution where k is an integer.

Erlang distribution (cont'd)

We need the cdf of the Erlang random variable X, denoted by Erlang_{k,λ}(x):

Erlang_{k,λ}(t) = P(X ≤ t)

In order to use the Gamma-Poisson formula, we now consider the distribution
of X as X ∼ Gamma(k, λ). Thus, in order to compute the distribution function,
we can use the Gamma-Poisson formula:

P(X ≤ t) = P(Y ≥ k)  where Y ∼ Po(λt)

Thus

Erlang_{k,λ}(t) = P(Y ≥ k) = 1 − P(Y ≤ k − 1) = 1 − Po_{λt}(k − 1)
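The last identity translates directly into code. A short sketch (again assuming scipy) that implements the Erlang cdf through the Poisson cdf and cross-checks it against scipy's built-in Erlang distribution:

```python
from scipy.stats import erlang, poisson

def erlang_cdf(t, k, lam):
    # Gamma-Poisson formula: Erlang_{k,lam}(t) = 1 - Po_{lam*t}(k - 1)
    return 1 - poisson.cdf(k - 1, lam * t)

for k, lam, t in [(2, 2, 1.0), (3, 2, 1.0), (3, 1/5, 12.0)]:
    print(erlang_cdf(t, k, lam), erlang.cdf(t, k, scale=1/lam))
    # each printed pair of numbers agrees
```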
Erlang distribution: Example

Hits on a web page (continued)

1. What is the density of the waiting time until the second hit?

We previously defined X as the sum of two exponential variables, each with
rate λ = 2. Thus X has an Erlang distribution with stage parameter k = 2,
and the density of X is

f_X(x) = f_{2,2}(x) = 4x e^{−2x}  for x ≥ 0

2. Find the probability that we have to wait > 1 min for the 3rd hit.

Z := the waiting time until the third hit has an Erlang(3, 2) distribution.
Thus

P(Z > 1) = 1 − Erlang_{3,2}(1) = 1 − (1 − Po_{2·1}(3 − 1)) = Po_2(2) = 0.677

We will come across the Erlang distribution again when modelling the waiting
times in queueing systems, where customers arrive at a Poisson rate and need
exponential time to be served.
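Both answers in the example above can be verified numerically; a sketch assuming scipy:

```python
import math
from scipy.stats import erlang, poisson

# 1. density of the second-hit waiting time at an arbitrary point x
x = 0.5
print(4 * x * math.exp(-2 * x))        # f_{2,2}(x) = 4x e^{-2x} ~ 0.7358
print(erlang.pdf(x, 2, scale=1/2))     # same value from scipy

# 2. P(Z > 1) for Z ~ Erlang(3, 2) via the Gamma-Poisson formula
print(poisson.cdf(2, 2.0))             # Po_2(2) ~ 0.677
```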
Normal distribution

The normal density is a "bell-shaped" density. The density has two
parameters, μ and σ², and is

f_{μ,σ²}(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}  for −∞ < x < ∞

The expected value and variance of a normally distributed r.v. X are:

E[X] = ∫_{−∞}^{∞} x f_{μ,σ²}(x) dx = . . . = μ

Var[X] = ∫_{−∞}^{∞} (x − μ)² f_{μ,σ²}(x) dx = . . . = σ²

Thus, the parameters μ and σ² are actually the mean and the variance of the
N(μ, σ²) distribution.

Normal distribution (cont'd)

[Figure: normal densities for several parameter choices]

μ determines the location of the peak on the x-axis; σ² determines the
"width" of the bell.

The cumulative distribution function (cdf) of X is

N_{μ,σ²}(t) := F_{μ,σ²}(t) = ∫_{−∞}^{t} f_{μ,σ²}(x) dx
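The ". . ." steps in E[X] and Var[X] above can be spot-checked by numerical integration for any particular μ and σ². A sketch assuming scipy (parameter values chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 1.0, 2.0
f = lambda x: norm.pdf(x, loc=mu, scale=sigma)   # f_{mu, sigma^2}

mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)
var, _ = quad(lambda x: (x - mu)**2 * f(x), -np.inf, np.inf)
print(mean, var)    # ~1.0 and ~4.0, i.e. mu and sigma^2
```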
Standard Normal distribution

To get probabilities, we need to evaluate this integral. Unfortunately,
there does not exist a closed form for this integral.

Fortunately, tables of the cdf of the standard normal distribution N(0, 1),
the normal distribution that has mean 0 and variance 1, are available.

We can use these tables to compute the cdf of the normal distribution
N(μ, σ²) for any set of values of μ and σ. How?

We use the fact that X ∼ N(μ, σ²) can be standardized to obtain a random
variable Z ∼ N(0, 1) as follows:

Z = (X − μ)/σ

If X ∼ N(μ, σ²), then Z = (X − μ)/σ ∼ N(0, 1). Thus

E[Z] = (1/σ)(E[X] − μ) = 0  and  Var[Z] = (1/σ²) Var[X] = 1

It is common practice to denote the cdf N_{0,1}(t) by Φ(t) (more commonly
represented as Φ(z)).

The values of Φ(z) are tabulated in tables usually called standard normal
tables (or Z tables); however, these tables are (sometimes) only available
for positive values of z.

This table is sufficient because Φ(−z) = 1 − Φ(z), as f_{0,1} is symmetric
around 0.
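When no Z table is at hand, Φ can be evaluated through the error function, since Φ(z) = (1 + erf(z/√2))/2. A minimal sketch using only Python's standard library:

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(Phi(1.0))                   # ~0.8413, matches the Z table
print(Phi(-1.0), 1 - Phi(1.0))    # symmetry: both ~0.1587

# standardization: P(X <= x) = Phi((x - mu)/sigma) for X ~ N(mu, sigma^2)
mu, sigma = 1.0, math.sqrt(2)
print(Phi((2.0 - mu) / sigma))    # P(X <= 2) for X ~ N(1, 2), ~0.7602
```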
Standard Normal distribution (cont'd)

Recall, the area under the density to the left of a specified vertical line
at z represents the probability P(Z < z).

It is easy to see that the area in the left tail is equal to the area in the
right tail:

P(Z ≤ −z) = P(Z ≥ +z).

This is true because f_{0,1} is symmetric around 0, so Φ(−z) = 1 − Φ(z),
while P(Z ≥ +z) = 1 − P(Z ≤ z) = 1 − Φ(z); the two tail probabilities are
therefore equal.

Using the Z-table

Suppose Z is a standard normal random variable.

• P(Z < 1) = Φ(1) = 0.8413. (straight look-up)

• P(0 < Z < 1) = P(Z < 1) − P(Z < 0) = Φ(1) − Φ(0) = 0.8413 − 0.5 = 0.3413.
  (look-up)

• P(Z < −2.31) = 1 − Φ(2.31) = 1 − 0.9896 = 0.0104,
  or P(Z < −2.31) = Φ(−2.31) = 0.0104. (look-up)

• P(|Z| > 2) = P(Z < −2) + P(Z > 2) = 2(1 − Φ(2)) = 2(1 − 0.9772) = 0.0456,
  or P(|Z| > 2) = 2Φ(−2) = 2 × 0.0228 = 0.0456. (look-up)
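The four look-ups above are easy to reproduce in code; a sketch assuming scipy:

```python
from scipy.stats import norm

Phi = norm.cdf                        # standard normal cdf
print(Phi(1))                         # 0.8413
print(Phi(1) - Phi(0))                # 0.3413
print(1 - Phi(2.31), Phi(-2.31))      # 0.0104, computed both ways
print(2 * (1 - Phi(2)), 2 * Phi(-2))  # ~0.0455; the table's 0.9772 gives 0.0456
```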
Using the Z-table (cont'd)

Suppose X ∼ N(1, 2) and that we need to calculate P(1 < X < 2).

A standardization of X gives Z := (X − 1)/√2. Thus:

P(1 < X < 2) = P((1 − 1)/√2 < (X − 1)/√2 < (2 − 1)/√2)
             = P(0 < Z < 1/√2) ≈ Φ(0.71) − Φ(0)
             = 0.7611 − 0.5 = 0.2611.

Note that the standard normal table only shows probabilities for z < 3.99.
This is all we need, though, since P(Z ≥ 4) ≤ 0.0001.

Review Examples 4.10, 4.11, and 4.12 from Baron.
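The calculation above can also be done numerically (a sketch assuming scipy). The exact value is Φ(1/√2) − 0.5 ≈ 0.2602; the table answer 0.2611 reflects rounding z to 0.71:

```python
import math
from scipy.stats import norm

# directly, with scipy's scale = standard deviation = sqrt(2)
sd = math.sqrt(2)
p = norm.cdf(2, loc=1, scale=sd) - norm.cdf(1, loc=1, scale=sd)
print(p)                               # ~0.2602

# after standardizing by hand
z = (2 - 1) / sd
print(norm.cdf(z) - norm.cdf(0))       # same value
```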