6.041/6.431 Probabilistic Systems Analysis
Quiz II Review
Fall 2010
Probability Density Functions (PDF)

For a continuous RV X with PDF f_X(x),

    P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx

    P(X ∈ A) = ∫_A f_X(x) dx

Properties:
• Nonnegativity: f_X(x) ≥ 0 ∀x
• Normalization: ∫_{−∞}^{∞} f_X(x) dx = 1
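As a quick numerical sanity check of these two integrals, the sketch below approximates P(a ≤ X ≤ b) with a midpoint Riemann sum; the exponential PDF with rate 1 is just an illustrative choice, not part of the slides.

```python
import math

def prob_between(pdf, a, b, n=100_000):
    """Approximate P(a <= X <= b) = integral of the PDF over [a, b]
    with a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(pdf(a + (i + 0.5) * h) for i in range(n)) * h

# Illustrative PDF: exponential with rate 1.
pdf = lambda x: math.exp(-x) if x >= 0 else 0.0

p = prob_between(pdf, 0.0, 1.0)       # exact value: 1 - e^{-1}
total = prob_between(pdf, 0.0, 50.0)  # normalization (tail beyond 50 is negligible)
print(p, total)
```

The same `prob_between` works for P(X ∈ A) whenever A is an interval; a general event is a union of such pieces.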
PDF Interpretation

Caution: f_X(x) is not P(X = x):
• if X is continuous, P(X = x) = 0 ∀x!
• f_X(x) can be ≥ 1

Interpretation: “probability per unit length” for “small” lengths
around x:

    P(x ≤ X ≤ x + δ) ≈ f_X(x)δ

Mean and Variance of a Continuous RV

    E[X] = ∫_{−∞}^{∞} x f_X(x) dx

    Var(X) = E[(X − E[X])²] = ∫_{−∞}^{∞} (x − E[X])² f_X(x) dx
           = E[X²] − (E[X])²   (≥ 0)

    E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx

    E[aX + b] = aE[X] + b
    Var(aX + b) = a² Var(X)
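A minimal numerical sketch of these formulas, assuming an illustrative exponential PDF with rate λ = 2 (so the exact mean is 1/2 and the exact variance 1/4): it computes E[X] and Var(X) by integration and confirms the two variance expressions agree.

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lam = 2.0                                 # illustrative rate, not from the slides
pdf = lambda x: lam * math.exp(-lam * x)  # exponential PDF on [0, inf)

# Truncate the upper limit: the tail beyond 40 is negligible here.
mean = integrate(lambda x: x * pdf(x), 0, 40)            # E[X]
var = integrate(lambda x: (x - mean) ** 2 * pdf(x), 0, 40)  # E[(X - E[X])^2]
ex2 = integrate(lambda x: x * x * pdf(x), 0, 40)         # E[X^2]

print(mean, var, ex2 - mean ** 2)
```

Both variance formulas give the same number, as the algebra above promises.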
Cumulative Distribution Functions

Definition:

    F_X(x) = P(X ≤ x),

monotonically increasing from 0 (at −∞) to 1 (at +∞).

• Continuous RV (CDF is continuous in x):

    F_X(x) = P(X ≤ x) = ∫_{−∞}^x f_X(t) dt,    f_X(x) = dF_X/dx (x)

• Discrete RV (CDF is piecewise constant):

    F_X(x) = P(X ≤ x) = Σ_{k≤x} p_X(k),    p_X(k) = F_X(k) − F_X(k − 1)

Uniform Random Variable

If X is a uniform random variable over the interval [a, b]:

    f_X(x) = 1/(b − a) if a ≤ x ≤ b,   0 otherwise

    F_X(x) = 0 if x ≤ a,   (x − a)/(b − a) if a ≤ x ≤ b,   1 otherwise (x > b)

    E[X] = (a + b)/2,    var(X) = (b − a)²/12
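The uniform PDF/CDF pair above can be transcribed directly; the interval [2, 6] below is an arbitrary illustrative choice.

```python
def uniform_pdf(x, a, b):
    """PDF of a uniform RV on [a, b]: 1/(b - a) inside, 0 outside."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def uniform_cdf(x, a, b):
    """CDF of a uniform RV on [a, b]."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

a, b = 2.0, 6.0              # illustrative interval
mean = (a + b) / 2           # E[X]
var = (b - a) ** 2 / 12      # var(X)
print(uniform_pdf(3.0, a, b), uniform_cdf(4.0, a, b), mean, var)
```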
Exponential Random Variable

X is an exponential random variable with parameter λ:

    f_X(x) = λe^{−λx} if x ≥ 0,   0 otherwise

    F_X(x) = 1 − e^{−λx} if x ≥ 0,   0 otherwise

    E[X] = 1/λ,    var(X) = 1/λ²

Memoryless Property: Given that X > t, X − t is an
exponential RV with parameter λ.

Normal/Gaussian Random Variables

General normal RV: N(μ, σ²):

    f_X(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}

    E[X] = μ,    Var(X) = σ²

Property: If X ∼ N(μ, σ²) and Y = aX + b,
then Y ∼ N(aμ + b, a²σ²).
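The memoryless property can be checked directly from the exponential tail P(X > x) = e^{−λx}: the conditional tail P(X > s + t | X > s) equals the unconditional tail P(X > t). The rate and the values of s, t below are illustrative.

```python
import math

lam = 1.5                            # illustrative rate
tail = lambda x: math.exp(-lam * x)  # P(X > x) for an exponential RV

s, t = 0.7, 1.2
cond = tail(s + t) / tail(s)         # P(X > s + t | X > s)
print(cond, tail(t))                 # the two tails coincide
```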
Normal CDF

Standard Normal RV: N(0, 1)

CDF of standard normal RV Y at y: Φ(y)
• given in tables for y ≥ 0
• for y < 0, use the result: Φ(y) = 1 − Φ(−y)

To evaluate the CDF of a general normal RV, express it as a
function of a standard normal:

    X ∼ N(μ, σ²)  ⇔  (X − μ)/σ ∼ N(0, 1)

    P(X ≤ x) = P((X − μ)/σ ≤ (x − μ)/σ) = Φ((x − μ)/σ)
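In place of a table, Φ can be computed from the error function (a standard identity: Φ(y) = (1 + erf(y/√2))/2); the sketch below also applies the standardization step for a general normal RV with illustrative parameters.

```python
import math

def Phi(y):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), by standardizing: Phi((x - mu)/sigma)."""
    return Phi((x - mu) / sigma)

print(Phi(0.0))                     # 0.5 by symmetry
print(Phi(-1.0), 1 - Phi(1.0))      # the table trick: Phi(y) = 1 - Phi(-y)
print(normal_cdf(10.0, 10.0, 3.0))  # x at the mean: 0.5
```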
Joint PDF

Joint PDF of two continuous RVs X and Y: f_{X,Y}(x, y)

    P(A) = ∫∫_A f_{X,Y}(x, y) dx dy

Marginal PDF:

    f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy

    E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f_{X,Y}(x, y) dx dy

Joint CDF: F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y)
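A small sketch of the marginalization step, assuming an illustrative joint PDF f(x, y) = x + y on the unit square (which integrates to 1); its exact marginal is f_X(x) = x + 1/2.

```python
def joint_pdf(x, y):
    """Illustrative joint PDF on the unit square: f(x, y) = x + y."""
    return x + y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def marginal_x(x, n=20_000):
    """f_X(x) = integral over y of f_{X,Y}(x, y), via a midpoint sum."""
    h = 1.0 / n
    return sum(joint_pdf(x, (j + 0.5) * h) for j in range(n)) * h

print(marginal_x(0.3))   # exact marginal: 0.3 + 1/2 = 0.8
```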
Independence

By definition,

    X, Y independent  ⇔  f_{X,Y}(x, y) = f_X(x) f_Y(y) ∀(x, y)

If X and Y are independent:
• E[XY] = E[X]E[Y]
• g(X) and h(Y) are independent
• E[g(X)h(Y)] = E[g(X)]E[h(Y)]

Conditioning on an Event

Let X be a continuous RV and A be an event with P(A) > 0.

    f_{X|A}(x) = f_X(x)/P(X ∈ A) if x ∈ A,   0 otherwise

    P(X ∈ B | X ∈ A) = ∫_B f_{X|A}(x) dx

    E[X|A] = ∫_{−∞}^{∞} x f_{X|A}(x) dx

    E[g(X)|A] = ∫_{−∞}^{∞} g(x) f_{X|A}(x) dx
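A concrete instance of the conditional PDF formula, with illustrative choices: X uniform on [0, 4] and A = {X ∈ [1, 3]}. Then f_{X|A} is uniform on [1, 3] with density 0.5, and E[X|A] = 2.

```python
def cond_pdf(x, a=0.0, b=4.0, lo=1.0, hi=3.0):
    """f_{X|A}(x) = f_X(x) / P(X in A) on A, 0 elsewhere,
    for X uniform on [a, b] and A = {X in [lo, hi]} (illustrative)."""
    p_A = (hi - lo) / (b - a)              # P(X in A)
    return (1.0 / (b - a)) / p_A if lo <= x <= hi else 0.0

def cond_mean(lo=1.0, hi=3.0, n=100_000):
    """E[X|A] = integral of x * f_{X|A}(x) over A (midpoint sum)."""
    h = (hi - lo) / n
    return sum((lo + (i + 0.5) * h) * cond_pdf(lo + (i + 0.5) * h)
               for i in range(n)) * h

print(cond_pdf(2.0), cond_mean())   # density 0.5 on A; conditional mean 2
```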
If A_1, …, A_n are disjoint events that form a partition of the sample
space,

    f_X(x) = Σ_{i=1}^n P(A_i) f_{X|A_i}(x)   (≈ total probability theorem)

    E[X] = Σ_{i=1}^n P(A_i) E[X|A_i]   (total expectation theorem)

    E[g(X)] = Σ_{i=1}^n P(A_i) E[g(X)|A_i]

Conditioning on a RV

X, Y continuous RVs:

    f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y)

    f_X(x) = ∫_{−∞}^{∞} f_Y(y) f_{X|Y}(x|y) dy   (≈ total probability theorem)

Conditional Expectation:

    E[X|Y = y] = ∫_{−∞}^{∞} x f_{X|Y}(x|y) dx

    E[g(X)|Y = y] = ∫_{−∞}^{∞} g(x) f_{X|Y}(x|y) dx

    E[g(X, Y)|Y = y] = ∫_{−∞}^{∞} g(x, y) f_{X|Y}(x|y) dx

Total Expectation Theorem:

    E[X] = ∫_{−∞}^{∞} E[X|Y = y] f_Y(y) dy

    E[g(X)] = ∫_{−∞}^{∞} E[g(X)|Y = y] f_Y(y) dy

    E[g(X, Y)] = ∫_{−∞}^{∞} E[g(X, Y)|Y = y] f_Y(y) dy

Continuous Bayes’ Rule

X, Y continuous RVs, N a discrete RV, A an event.

    f_{X|Y}(x|y) = f_{Y|X}(y|x) f_X(x) / f_Y(y)
                 = f_{Y|X}(y|x) f_X(x) / ∫_{−∞}^{∞} f_{Y|X}(y|t) f_X(t) dt

    P(A|Y = y) = P(A) f_{Y|A}(y) / f_Y(y)
               = P(A) f_{Y|A}(y) / [f_{Y|A}(y)P(A) + f_{Y|A^c}(y)P(A^c)]

    P(N = n|Y = y) = p_N(n) f_{Y|N}(y|n) / f_Y(y)
                   = p_N(n) f_{Y|N}(y|n) / Σ_i p_N(i) f_{Y|N}(y|i)
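The last Bayes formula (discrete N, continuous Y) can be sketched numerically. Everything below is an illustrative setup, not from the slides: N is 0 or 1 with equal prior probability, and Y | N = n ~ N(n, 1).

```python
import math

def normal_pdf(y, mu, sigma):
    """N(mu, sigma^2) density."""
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

prior = {0: 0.5, 1: 0.5}                     # illustrative p_N
likelihood = lambda y, n: normal_pdf(y, n, 1.0)  # illustrative f_{Y|N}

def posterior(n, y):
    """P(N = n | Y = y) = p_N(n) f_{Y|N}(y|n) / sum_i p_N(i) f_{Y|N}(y|i)."""
    denom = sum(prior[i] * likelihood(y, i) for i in prior)
    return prior[n] * likelihood(y, n) / denom

print(posterior(1, 0.5))                      # y halfway between the means: 0.5
print(posterior(0, 0.5) + posterior(1, 0.5))  # posteriors sum to 1
```

An observation y closer to one mean tips the posterior toward that value of N, exactly as the formula's numerator suggests.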
Derived Distributions

Def: PDF of a function of a RV X with known PDF: Y = g(X).

Method:
• Get the CDF:

    F_Y(y) = P(Y ≤ y) = P(g(X) ≤ y) = ∫_{x : g(x) ≤ y} f_X(x) dx

• Differentiate:

    f_Y(y) = dF_Y/dy (y)

Special case: if Y = g(X) = aX + b,

    f_Y(y) = (1/|a|) f_X((y − b)/a)

Convolution

W = X + Y, with X, Y independent.

• Discrete case:

    p_W(w) = Σ_x p_X(x) p_Y(w − x)

• Continuous case:

    f_W(w) = ∫_{−∞}^{∞} f_X(x) f_Y(w − x) dx

Graphical Method:
• flip the PMF (or PDF) of Y
• shift the flipped PMF (or PDF) of Y by w
• put the PMFs (or PDFs) on top of each other
• cross-multiply and add (or evaluate the integral)

In particular, if X, Y are independent and normal, then
W = X + Y is normal.

Law of Iterated Expectations

E[X|Y = y] = f(y) is a number.
E[X|Y] = f(Y) is a random variable
(the expectation is taken with respect to X).

To compute E[X|Y], first express E[X|Y = y] as a function of y.

Law of iterated expectations:

    E[X] = E[E[X|Y]]   (equality between two real numbers)
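The discrete convolution formula translates directly into code. As an illustration (not from the slides), convolving a fair die's PMF with itself gives the familiar two-dice distribution.

```python
def convolve(pX, pY):
    """PMF of W = X + Y for independent X, Y:
    p_W(w) = sum_x p_X(x) * p_Y(w - x)."""
    pW = {}
    for x, px in pX.items():
        for y, py in pY.items():
            pW[x + y] = pW.get(x + y, 0.0) + px * py
    return pW

die = {k: 1 / 6 for k in range(1, 7)}   # fair six-sided die
two_dice = convolve(die, die)
print(two_dice[7], two_dice[2])          # 6/36 and 1/36
```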
Law of Total Variance

Var(X|Y) is a random variable that is a function of Y
(the variance is taken with respect to X).

To compute Var(X|Y), first express

    Var(X|Y = y) = E[(X − E[X|Y = y])² | Y = y]

as a function of y.

Law of total variance:

    Var(X) = E[Var(X|Y)] + Var(E[X|Y])   (equality between two real numbers)

Sum of a Random Number of i.i.d. RVs

N a discrete RV, X_i i.i.d. and independent of N.
Y = X_1 + … + X_N. Then:

    E[Y] = E[X]E[N]

    Var(Y) = E[N]Var(X) + (E[X])² Var(N)
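Both random-sum formulas can be verified by exact enumeration on a small illustrative example (not from the slides): N is 1 or 2 with probability 1/2 each, and each X_i is an independent fair die.

```python
from itertools import product

die = range(1, 7)
EX, VarX = 3.5, 35 / 12        # mean and variance of one fair die
EN, VarN = 1.5, 0.25           # mean and variance of N

# Enumerate Y = X_1 + ... + X_N exactly over all outcomes.
EY = EY2 = 0.0
for n, pn in [(1, 0.5), (2, 0.5)]:
    for xs in product(die, repeat=n):
        p = pn * (1 / 6) ** n
        y = sum(xs)
        EY += p * y
        EY2 += p * y * y
VarY = EY2 - EY ** 2

print(EY, EX * EN)                        # both 5.25
print(VarY, EN * VarX + EX ** 2 * VarN)   # both 7.4375
```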
Covariance and Correlation

    Cov(X, Y) = E[(X − E[X])(Y − E[Y])]
              = E[XY] − E[X]E[Y]

• By definition, X, Y are uncorrelated ⇔ Cov(X, Y) = 0.
• If X, Y are independent ⇒ X and Y are uncorrelated
  (the converse is not true).

Correlation Coefficient (dimensionless):

    ρ = Cov(X, Y)/(σ_X σ_Y)  ∈ [−1, 1]

• ρ = 0 ⇔ X and Y are uncorrelated.
• |ρ| = 1 ⇔ X − E[X] = c[Y − E[Y]] (linearly related)
• In general, Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y).
• If X and Y are uncorrelated, Cov(X, Y) = 0 and
  Var(X + Y) = Var(X) + Var(Y).
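A sketch that exercises the covariance formulas on a small illustrative joint PMF, and confirms the Var(X + Y) identity above.

```python
# Illustrative joint PMF of (X, Y) on four points (probabilities sum to 1).
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

def expect(f):
    """E[f(X, Y)] under the joint PMF."""
    return sum(p * f(x, y) for (x, y), p in joint.items())

EX = expect(lambda x, y: x)
EY = expect(lambda x, y: y)
cov = expect(lambda x, y: x * y) - EX * EY        # Cov = E[XY] - E[X]E[Y]

VarX = expect(lambda x, y: x * x) - EX ** 2
VarY = expect(lambda x, y: y * y) - EY ** 2
rho = cov / (VarX ** 0.5 * VarY ** 0.5)           # correlation coefficient

VarSum = expect(lambda x, y: (x + y) ** 2) - (EX + EY) ** 2
print(cov, rho, VarSum, VarX + VarY + 2 * cov)
```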
MIT OpenCourseWare
http://ocw.mit.edu
6.041SC Probabilistic Systems Analysis and Applied Probability
Fall 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.