ESD.86. Queueing & Transitions
Sampling from Distributions, Gauss
Richard C. Larson
March 12, 2007
[Image: CLT illustration, courtesy of Dr. Sam Savage, www.AnalyCorp.com. Used with permission. http://simulationtutorials.com/images/CLT.gif]
Outline
- One More Time: Markov Birth and Death Queueing Systems
- Central Limit Theorem
- Monte Carlo Sampling from Distributions
- 'Q&A'

Buy one, get the other 3 for free!
$$W = \frac{1}{\mu} + W_q$$
$$L = L_q + L_{SF} = L_q + \frac{\lambda}{\mu}$$
$$L = \lambda W$$
Optional Exercise:
Is it "better" to enter a single-server queue with service rate $\mu$, or a 2-server queue in which each server works at rate $\mu/2$?
Can someone draw one or both of the state-rate-transition diagrams? Then what do you do?
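One way to make progress after drawing the diagrams is to compare the two systems numerically. Below is a minimal sketch using the standard M/M/c steady-state formulas; the values of $\lambda$ and $\mu$ are assumptions chosen only for illustration.

```python
# A minimal sketch (illustrative lambda, mu assumed) comparing mean time
# in system W for an M/M/1 queue with service rate mu against an M/M/2
# queue whose two servers each work at rate mu/2.
import math

def mm_c_time_in_system(lam, s, c):
    """Mean time in system W for an M/M/c queue with per-server rate s."""
    a = lam / s                           # offered load (Erlangs)
    rho = a / c                           # per-server utilization; need rho < 1
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1.0 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1.0 - rho)**2)  # mean queue length
    return lq / lam + 1.0 / s             # W = Wq + E[service time], via Little's Law

lam, mu = 0.8, 1.0                        # assumed rates for illustration
print("M/M/1, rate mu   : W =", mm_c_time_in_system(lam, mu, 1))
print("M/M/2, rate mu/2 : W =", mm_c_time_in_system(lam, mu / 2, 2))
# The single fast server wins on W here; the comparison for Wq comes out the other way.
```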
Final Example: Single Server, Discouraged Arrivals

[State-rate-transition diagram: arrivals are progressively discouraged, entering at rate $\lambda/(k+1)$ when the system is in state $k$ (i.e., $\lambda, \lambda/2, \lambda/3, \lambda/4, \lambda/5, \ldots$), with service completions at rate $\mu$ from every occupied state.]
$$P_k = \frac{1}{k!}\left(\frac{\lambda}{\mu}\right)^k P_0$$
$$P_0 = \left[1 + \frac{\lambda}{\mu} + \frac{1}{2!}\left(\frac{\lambda}{\mu}\right)^2 + \frac{1}{3!}\left(\frac{\lambda}{\mu}\right)^3 + \cdots + \frac{1}{k!}\left(\frac{\lambda}{\mu}\right)^k + \cdots\right]^{-1}$$
$$P_0 = \left(e^{\lambda/\mu}\right)^{-1} = e^{-\lambda/\mu} > 0$$
$$\rho = \text{utilization factor} = 1 - P_0 = 1 - e^{-\lambda/\mu} < 1$$
$$P_k = \frac{(\lambda/\mu)^k}{k!}\, e^{-\lambda/\mu}, \quad k = 0, 1, 2, \ldots \qquad \text{Poisson distribution!}$$
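A quick numeric check of these formulas (the values of $\lambda$ and $\mu$ below are assumptions for illustration): the $P_k$ should sum to 1 and match a Poisson distribution with parameter $\lambda/\mu$.

```python
# Numeric check (assumed rates) that the steady-state probabilities of the
# discouraged-arrivals queue form a Poisson(lambda/mu) distribution.
import math

lam, mu = 2.0, 3.0                       # assumed rates for illustration
r = lam / mu
p = [r**k / math.factorial(k) * math.exp(-r) for k in range(25)]
print("P0            =", p[0], "= exp(-lam/mu) =", math.exp(-r))
print("rho = 1 - P0  =", 1.0 - p[0])
print("sum of P_k    =", sum(p))                                  # ~ 1
print("L = sum k*P_k =", sum(k * pk for k, pk in enumerate(p)))   # ~ lam/mu
```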
$$L = \text{time-average number in system} = \lambda/\mu \qquad \text{How?}$$
$$L = \lambda_A W \quad \text{(Little's Law)}, \quad \text{where } \lambda_A \equiv \text{average rate of accepted arrivals into the system}$$
Apply Little's Law to the Service Facility
$$\rho = \lambda_A \times (\text{average service time})$$
$$\rho = \text{average number in service facility} = \lambda_A/\mu$$
$$\lambda_A = \mu\rho = \mu\left(1 - e^{-\lambda/\mu}\right)$$
$$W = \frac{L}{\lambda_A} = \frac{\lambda/\mu}{\mu\left(1 - e^{-\lambda/\mu}\right)} = \frac{\lambda}{\mu^2\left(1 - e^{-\lambda/\mu}\right)}$$
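Continuing the same numeric sketch (assumed illustrative rates again), we can confirm that $W = L/\lambda_A$ agrees with the closed-form expression above.

```python
# Check W = L / lambda_A against the closed form (assumed rates as above).
import math

lam, mu = 2.0, 3.0
rho = 1.0 - math.exp(-lam / mu)          # utilization = 1 - P0
lam_a = mu * rho                         # average rate of accepted arrivals
L = lam / mu                             # time-average number in system
print("W = L/lambda_A =", L / lam_a)
print("closed form    =", lam / (mu**2 * (1.0 - math.exp(-lam / mu))))
```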
Central Limit Theorem Demo
Thanks to Prof. Dan Frey! :)
[Figure series (ESD.86): the pdf $p_n(x)$ of the sum of $n$ iid uniformly distributed random variables, for $n = 1, 2, 3, 4, 5$, each plotted against the Gaussian pdf $\frac{1}{\sqrt{2\pi}\,\sigma\sqrt{n}}\, e^{-(x - \mu n)^2 / (2(\sigma\sqrt{n})^2)}$ of matching mean and variance. Panel titles: "One Uniformly Distributed Random Variable"; "Sum of 2 iid Uniformly Distributed Random Variables"; and likewise for sums of 3, 4, and 5. By $n = 4$ or $5$ the two curves are already nearly indistinguishable.]
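The demo is easy to reproduce. Here is a minimal sketch; the choice of Uniform$[-0.5, 0.5]$ and the bin count are assumptions, since the demo's exact settings are not shown.

```python
# Sketch of the CLT demo: empirical pdf of the sum of n iid
# Uniform[-0.5, 0.5] variables versus the matching Gaussian.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 100_000
sums = rng.uniform(-0.5, 0.5, size=(trials, n)).sum(axis=1)

# The sum has mean 0 and variance n * (1/12).
hist, edges = np.histogram(sums, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-centers**2 / (2 * (n / 12.0))) / np.sqrt(2 * np.pi * n / 12.0)
print("max |empirical - Gaussian| =", np.abs(hist - gauss).max())
```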
It’s Movie Time!
The Gaussian or Normal PDF
$$f_Y(y) = \frac{1}{\sigma_Y\sqrt{2\pi}}\, e^{-(y - E[Y])^2 / (2\sigma_Y^2)}, \quad -\infty < y < \infty$$

[Figure by MIT OCW: the bell-shaped curve $f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left\{-\frac{(x - m)^2}{2\sigma^2}\right\}$, centered at $m$ with spread $\sigma$.]
Central Limit Theorem
- Consider the sum $S_n$ of $n$ iid random variables $X_i$, where
$$E[X_i] = m_X < \infty, \qquad \mathrm{VAR}[X_i] = \sigma_X^2 < \infty$$
$$S_n = X_1 + X_2 + \cdots + X_n = \sum_{i=1}^{n} X_i$$
- Then, as $n$ "gets large," $S_n$ tends to a Gaussian or Normal distribution with mean equal to $n m_X$ and variance equal to $n\sigma_X^2$:
$$f_{S_n}(y) = \frac{1}{\sigma_X\sqrt{2\pi n}}\, e^{-(y - n m_X)^2 / (2 n \sigma_X^2)}, \quad -\infty < y < \infty$$

[Figure by MIT OCW: $f_{S_n}(y)$, a Gaussian centered at $n m_X$ with standard deviation $\sigma_X\sqrt{n}$.]
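The theorem holds for any iid summands with finite mean and variance, not just uniforms. A quick empirical check with a skewed summand (Exponential(1), an illustrative choice):

```python
# Empirical CLT check with skewed summands: sums of n iid Exponential(1)
# variables have mean n*m_X = n and variance n*sigma_X^2 = n, and the
# standardized skewness shrinks toward the Gaussian's value of 0.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 200_000
s = rng.exponential(1.0, size=(trials, n)).sum(axis=1)
print("mean:", s.mean(), " (theory:", n, ")")
print("var :", s.var(), " (theory:", n, ")")
print("skewness:", ((s - s.mean())**3).mean() / s.std()**3)  # ~ 2/sqrt(n)
```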
Normalizing Random Variables

Suppose we have a r.v. $W$ having
$$\text{Mean} = E[W] = a \quad \text{and} \quad \text{Variance} = E[(W - a)^2] = \sigma_W^2$$
Define a new r.v. $X \equiv W - a$. Then
$$E[X] = E[W - a] = E[W] - a = a - a = 0$$
$$\mathrm{VAR}[X] = \mathrm{VAR}[W] = \sigma_W^2$$
Or suppose we define $Y \equiv \gamma W$. Then
$$E[Y] = \gamma E[W] = \gamma a$$
$$\sigma_Y^2 = E[(\gamma W - \gamma a)^2] = \gamma^2 E[(W - a)^2] = \gamma^2 \sigma_W^2$$
Thus, if we define $Z \equiv (W - a)/\sigma_W$, then
$$E[Z] = 0, \qquad \sigma_Z^2 = 1$$
$Z$ is called a normalized r.v. Most table lookups of the Gaussian are via the CDF, with a normalized r.v.
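In code, normalization is one line; the Gamma distribution below is an arbitrary illustrative choice of $W$.

```python
# Z = (W - a) / sigma_W has mean 0 and variance 1, whatever W is
# (as long as its variance is finite).
import numpy as np

rng = np.random.default_rng(2)
w = rng.gamma(shape=3.0, scale=2.0, size=100_000)  # illustrative W
z = (w - w.mean()) / w.std()
print("E[Z] ~", z.mean(), "  VAR[Z] ~", z.var())
```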
Obtaining Samples of the Gaussian R.V.
In Monte Carlo simulations, one often uses the Central Limit Theorem (CLT) to approximate the Gaussian.
Example 1: An Erlang of order $N$, for large $N$, should be approximately "Gaussian."
Example 2: Sum and normalize 12 uniforms over [0,1]. Good idea? (See the sketch below.)
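A sketch of Example 2 (the sample size below is an arbitrary choice): since 12 iid Uniform[0,1] variables have mean $12 \times 1/2 = 6$ and variance $12 \times 1/12 = 1$, their sum minus 6 is approximately a standard Gaussian.

```python
# Example 2 in code: sum 12 uniforms and subtract 6.
import numpy as np

rng = np.random.default_rng(3)
z = rng.uniform(0.0, 1.0, size=(100_000, 12)).sum(axis=1) - 6.0
print("mean ~", z.mean(), "  var ~", z.var())
# Caveat on "Good idea?": the support is bounded to [-6, 6], so the
# Gaussian's tails beyond 6 standard deviations are entirely missing.
```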
Let's talk about Monte Carlo sampling: the Inverse Method. Uses the CDF, and it never fails!
[Figure series: the inverse method on a discrete r.v. $X$ with pmf masses 1/4, 1/2, and 1/4 at $y = 1, 2, 3$. Each of the four panels draws a uniform sample $R = r_1, r_2, r_3, r_4$ on the vertical axis of the staircase CDF $F_X(y)$ and reads off the value of $y$ where the horizontal line at $R$ first meets the CDF.]
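The figure's procedure translates directly to code; this sketch hard-codes the pmf as read from the figure.

```python
# Discrete inverse method for the pmf in the figure:
# P{X=1} = 1/4, P{X=2} = 1/2, P{X=3} = 1/4.
# Draw R ~ U[0,1], return the smallest y with F_X(y) >= R.
import random

def sample_x():
    r = random.random()
    if r < 0.25:          # F_X(1) = 1/4
        return 1
    elif r < 0.75:        # F_X(2) = 3/4
        return 2
    return 3              # F_X(3) = 1

random.seed(4)
draws = [sample_x() for _ in range(100_000)]
for y in (1, 2, 3):
    print(y, draws.count(y) / len(draws))   # ~ 0.25, 0.50, 0.25
```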
Inverse Method Also Works for Continuous Random Variables

[Figure series: a piecewise-constant pdf $f_X(y)$ (levels 1/4 and 1/2) on $[1, 4]$ and its piecewise-linear CDF $F_X(y)$, with gridlines at 1/4 and 3/4. Each of the four panels maps a uniform sample $R = r_1, \ldots, r_4$ horizontally to the CDF and down to the axis, i.e., $y = F_X^{-1}(R)$.]
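In the continuous case the recipe is $X = F_X^{-1}(R)$ with $R \sim U[0,1]$. As a worked example (the exponential here is a standard illustration, not the figure's pdf): $F(x) = 1 - e^{-\mu x}$ inverts to $x = -\ln(1 - R)/\mu$.

```python
# Continuous inverse method, illustrated with the exponential CDF.
import math
import random

def sample_exponential(mu):
    r = random.random()               # R ~ U[0,1]
    return -math.log(1.0 - r) / mu    # x = F^{-1}(R)

random.seed(5)
mu = 2.0
xs = [sample_exponential(mu) for _ in range(100_000)]
print("sample mean ~", sum(xs) / len(xs), " (theory: 1/mu =", 1.0 / mu, ")")
```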
Time to Buckle your Seatbelts!
Example 3: The "Relationships Method"
$$f_X(x) = f_Y(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2/2\sigma^2}, \quad -\infty < x < \infty$$
$X$ and $Y$ are zero-mean independent Gaussian r.v.'s.
$$R \equiv \sqrt{X^2 + Y^2}$$
$$F_R(r) \equiv P\{R \le r\} = P\left\{\sqrt{X^2 + Y^2} \le r\right\}$$
$$F_R(r) = \iint_{\text{circle of radius } r} \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/2\sigma^2}\, dx\, dy$$
Changing to polar coordinates:
$$f_R(\rho)\, d\rho = \int_{\theta=0}^{2\pi} d\theta\, \rho\, d\rho\, \frac{1}{2\pi\sigma^2}\, e^{-\rho^2/2\sigma^2} = \frac{\rho}{\sigma^2}\, e^{-\rho^2/2\sigma^2}\, d\rho, \quad \rho \ge 0$$
$$f_R(\rho) = \frac{\rho}{\sigma^2}\, e^{-\rho^2/2\sigma^2}, \quad \rho \ge 0 \qquad \text{A Rayleigh pdf with parameter } 1/\sigma$$
$$F_R(\rho) \equiv P\{R \le \rho\} = 1 - e^{-\rho^2/2\sigma^2}, \quad \rho \ge 0$$
$R_1 \equiv$ sample from a uniform pdf over [0,1]:
$$R_1 = 1 - e^{-\rho^2/2\sigma^2}, \quad \text{which implies that} \quad \rho = \sigma\sqrt{-2\ln(1 - R_1)}$$
$$\theta = 2\pi R_2$$
$$X = \rho\cos\theta = \sigma\sqrt{-2\ln(1 - R_1)}\,\cos(2\pi R_2)$$
$$Y = \rho\sin\theta = \sigma\sqrt{-2\ln(1 - R_1)}\,\sin(2\pi R_2)$$
Here we have 2 exact samples from the Gaussian pdf, with no approximation from the CLT!
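A sketch of the method in code (this construction is commonly known as the Box-Muller transform):

```python
# Two uniforms R1, R2 -> two exact, independent Gaussian samples X, Y.
import math
import random

def gaussian_pair(sigma=1.0):
    r1, r2 = random.random(), random.random()
    rho = sigma * math.sqrt(-2.0 * math.log(1.0 - r1))  # Rayleigh radius
    theta = 2.0 * math.pi * r2                          # uniform angle
    return rho * math.cos(theta), rho * math.sin(theta)

random.seed(6)
xs = [gaussian_pair()[0] for _ in range(100_000)]
m = sum(xs) / len(xs)
v = sum((x - m)**2 for x in xs) / len(xs)
print("mean ~", m, "  var ~", v)   # ~ 0 and ~ sigma^2 = 1
```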
Spin the Flashlight
And so, is finite variance just a professor's oral exam trick question? :)

[Figure: a flashlight mounted 1 unit above the x axis is spun to a random angle $\theta$; its beam illuminates the point $x$ on the axis.]

1. R.V.'s: $X$, $\Theta$
2. Sample space for $\Theta$: $[-\pi/2, \pi/2]$
3. $\Theta$ uniform over $[-\pi/2, \pi/2]$
4. (a) $F_X(x) = P\{X < x\} = P\{\tan\Theta < x\} = P\{\Theta < \tan^{-1}(x)\} = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}(x)$
   (b) $f_X(x) = \frac{d}{dx} F_X(x) = \frac{1}{\pi(1 + x^2)}$, all $x$

A Cauchy pdf. Mean = ?, Variance = ????
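The flashlight experiment is easy to simulate, and it shows why the missing mean and infinite variance are more than a trick question: the running sample mean of Cauchy draws never settles down.

```python
# X = tan(Theta), Theta ~ U[-pi/2, pi/2], is Cauchy: watch the running
# sample mean fail to converge as the number of draws grows.
import math
import random

random.seed(7)
for trials in (10**3, 10**4, 10**5, 10**6):
    s = sum(math.tan(random.uniform(-math.pi / 2, math.pi / 2))
            for _ in range(trials))
    print(f"{trials:>8} draws: running mean = {s / trials:.3f}")
```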