Random Variables

Randomness
• The word random effectively means unpredictable
• In engineering practice we may treat some signals as random to simplify the analysis even though they may not actually be random
Random Variable Defined
A random variable $X(\zeta)$ is the assignment of numerical values to the outcomes $\zeta$ of experiments.
Random Variables
Examples of assignments of numbers to the outcomes of
experiments.
Discrete-Value vs. Continuous-Value Random Variables
• A discrete-value (DV) random variable has a set of distinct values separated by values that cannot occur
• A random variable associated with the outcomes of coin flips, card draws, dice tosses, etc. would be a DV random variable
• A continuous-value (CV) random variable may take on any value in a continuum of values, which may be finite or infinite in size
Distribution Functions
The distribution function of a random variable X is the probability that it is less than or equal to some value, as a function of that value.

$$F_X(x) = P[X \le x]$$

Since the distribution function is a probability it must satisfy the requirements for a probability.

$$0 \le F_X(x) \le 1 \;,\; -\infty < x < \infty$$

$$F_X(-\infty) = 0 \;,\quad F_X(+\infty) = 1$$

$$P[x_1 < X \le x_2] = F_X(x_2) - F_X(x_1)$$

$F_X(x)$ is a monotonic function and its derivative is never negative.
Distribution Functions
The distribution function for tossing a single die:

$$F_X(x) = \frac{1}{6}\big[\,u(x-1) + u(x-2) + u(x-3) + u(x-4) + u(x-5) + u(x-6)\,\big]$$
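As a quick numeric illustration, a minimal Python sketch (the helper name die_cdf is ours) that evaluates this staircase CDF:

```python
import numpy as np

def die_cdf(x):
    """CDF of a fair die: sum of six unit steps u(x - k), k = 1..6, scaled by 1/6."""
    x = np.asarray(x, dtype=float)
    return sum(np.where(x >= k, 1.0, 0.0) for k in range(1, 7)) / 6.0

# F_X steps from 0 to 1 in increments of 1/6
print(die_cdf([0.5, 1.0, 3.7, 6.0]))   # [0.  0.1667  0.5  1. ]
```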
Distribution Functions
A possible distribution function for a continuous random
variable
Probability Mass and Density
The derivative of the distribution function is the probability density function (PDF)

$$f_X(x) \equiv \frac{d}{dx}\big(F_X(x)\big)$$

Probability density can also be defined by

$$f_X(x)\,dx = P[x < X \le x + dx]$$
Properties

$$f_X(x) \ge 0 \;,\; -\infty < x < +\infty$$

$$\int_{-\infty}^{\infty} f_X(x)\,dx = 1$$

$$F_X(x) = \int_{-\infty}^{x} f_X(\lambda)\,d\lambda$$

$$P[x_1 < X \le x_2] = \int_{x_1}^{x_2} f_X(x)\,dx$$
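These properties are easy to verify numerically; a minimal Python sketch using an arbitrary triangular test PDF (our choice, not from the slides):

```python
import numpy as np

# Test PDF: triangular, f_X(x) = 1 - |x| on (-1, 1), zero elsewhere
f = lambda x: np.maximum(1.0 - np.abs(x), 0.0)

x = np.linspace(-2.0, 2.0, 100_001)
print(np.trapz(f(x), x))               # total area: ~1.0

mask = (x > 0.0) & (x <= 0.5)
print(np.trapz(f(x[mask]), x[mask]))   # P[0 < X <= 0.5] = 0.375
```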
Probability Mass and Density
The PDF for tossing a die
Probability Mass and Density
A DV random variable X is a Bernoulli random variable if it takes on only two values, 0 and 1, and its PMF is

$$p_X(x) = P[X = x] = \begin{cases} 1-p &, x = 0 \\ p &, x = 1 \\ 0 &, \text{otherwise} \end{cases}$$

and its PDF is

$$f_X(x) = (1-p)\,\delta(x) + p\,\delta(x-1)$$

$p_X(x)$ is called the probability mass function (PMF).
Probability Mass and Density
Bernoulli PMF
Bernoulli PDF
Probability Mass and Density
If we perform n trials of an experiment whose outcome is Bernoulli distributed and if X represents the total number of 1's that occur in those n trials, then X is said to be a Binomial random variable and its PMF is

$$p_X(x) = \begin{cases} \dbinom{n}{x} p^x (1-p)^{n-x} &, x \in \{0,1,2,\dots,n\} \\ 0 &, \text{otherwise} \end{cases}$$

and its PDF is

$$f_X(x) = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\,\delta(x-k)$$
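To connect the two distributions numerically, a short sketch simulating n Bernoulli trials and comparing empirical counts with the binomial PMF via scipy.stats.binom (parameter values are arbitrary test choices):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n, p = 10, 0.3

# X = number of 1's in n Bernoulli(p) trials, repeated many times
X = rng.binomial(1, p, size=(100_000, n)).sum(axis=1)

for x in range(n + 1):
    print(x, np.mean(X == x), binom.pmf(x, n, p))  # empirical vs analytic PMF
```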
Probability Mass and Density
Binomial PMF
Binomial PDF
Probability Mass and Density
If we perform Bernoulli trials until a 1 (success) occurs and the probability of a 1 on any single trial is p, the probability that the first success will occur on the kth trial is $p(1-p)^{k-1}$. A DV random variable X is said to be a Geometric random variable if its PMF is

$$p_X(x) = \begin{cases} p(1-p)^{x-1} &, x \in \{1,2,3,\dots\} \\ 0 &, \text{otherwise} \end{cases}$$

and its PDF is

$$f_X(x) = \sum_{k=1}^{\infty} p(1-p)^{k-1}\,\delta(x-k)$$
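A quick sanity check of the geometric PMF by simulation; numpy's geometric sampler and scipy's geom both use the same trial-number convention as the slides:

```python
import numpy as np
from scipy.stats import geom

rng = np.random.default_rng(0)
p = 0.25

# Trial number of the first success, repeated many times
samples = rng.geometric(p, size=100_000)

for k in range(1, 6):
    print(k, np.mean(samples == k), geom.pmf(k, p))  # empirical vs p(1-p)^(k-1)
```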
Probability Mass and Density
Geometric PMF
Geometric PDF
Probability Mass and Density
If we perform Bernoulli trials until the rth 1 occurs and the probability of a 1 on any single trial is p, the probability that the rth success will occur on the kth trial is

$$P[r\text{th success on }k\text{th trial}] = \binom{k-1}{r-1} p^r (1-p)^{k-r}$$

A DV random variable Y is said to be a Negative-Binomial or Pascal random variable with parameters r and p if its PMF is

$$p_Y(y) = \begin{cases} \dbinom{y-1}{r-1} p^r (1-p)^{y-r} &, y \in \{r, r+1, \dots\} \\ 0 &, \text{otherwise} \end{cases}$$

and its PDF is

$$f_Y(y) = \sum_{k=r}^{\infty} \binom{k-1}{r-1} p^r (1-p)^{k-r}\,\delta(y-k)$$
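scipy.stats.nbinom parameterizes the Pascal distribution by the number of failures before the rth success rather than by the trial number, so the two agree after a shift of r; a short sketch (test values are ours):

```python
from math import comb
from scipy.stats import nbinom

r, p = 3, 0.4

for k in range(r, r + 5):
    via_scipy = nbinom.pmf(k - r, r, p)                       # k - r failures before rth success
    analytic  = comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)  # PMF as given above
    print(k, via_scipy, analytic)                             # identical
```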
Probability Mass and Density
Negative Binomial
(Pascal) PMF
Negative Binomial
(Pascal) PDF
Probability Mass and Density
Suppose we randomly place n points in the time interval $0 \le t < T$, with each point being equally likely to fall anywhere in that range. The probability that k of them fall inside an interval of length $\Delta t < T$ inside that range is

$$P[k \text{ inside } \Delta t] = \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}$$

where $p = \Delta t / T$ is the probability that any single point falls within $\Delta t$. Further, suppose that as $n \to \infty$, $n/T = \lambda$, a constant. If $\lambda$ is constant and $n \to \infty$, that implies that $T \to \infty$ and $p \to 0$. Then $\lambda$ is the average number of points per unit time, over all time.
Probability Mass and Density
Events occurring at random times
Probability Mass and Density
It can be shown that

$$P[k \text{ inside } \Delta t] = \frac{\alpha^k}{k!} \lim_{n \to \infty} \left(1 - \frac{\alpha}{n}\right)^n = \frac{\alpha^k}{k!}\, e^{-\alpha}$$

where $\alpha = \lambda \Delta t$. A DV random variable is a Poisson random variable with parameter $\alpha$ if its PMF is

$$p_X(x) = \begin{cases} \dfrac{\alpha^x}{x!}\, e^{-\alpha} &, x \in \{0,1,2,\dots\} \\ 0 &, \text{otherwise} \end{cases}$$

and its PDF is

$$f_X(x) = \sum_{k=0}^{\infty} \frac{\alpha^k}{k!}\, e^{-\alpha}\,\delta(x-k)$$
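A minimal numeric illustration of this limit: holding $np = \alpha$ fixed while n grows, the binomial PMF converges to the Poisson PMF:

```python
from scipy.stats import binom, poisson

alpha, k = 2.0, 3

for n in (10, 100, 10_000):
    p = alpha / n                      # np = alpha held constant
    print(n, binom.pmf(k, n, p))       # approaches the Poisson value below

print(poisson.pmf(k, alpha))           # e^-2 * 2^3 / 3! ≈ 0.1804
```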
Probability Mass and Density
The uniform PDF
The probabilities of X being in Range #1 and X being in Range #2 are the same as long as both ranges have the same width and lie entirely between a and b.
Probability Mass and Density
The Exponential PDF
The arrival times of photons in a light beam at a surface are random and have a Poisson distribution. Let the time between photons be $T$ and let the mean time between photons be $\overline{T}$. The probability that a photon arrives in any very short length of time $\Delta t$ located randomly in time is

$$P[\text{photon arrival during } \Delta t] = \frac{\Delta t}{\overline{T}}$$

Let a photon arrive at time $t_0$. What is the probability that the next photon will arrive within the next $t$ seconds?
Probability Mass and Density
The Exponential PDF
From one point of view, the probability that a photon arrives within the time range $t_0 + t < T \le t_0 + t + \Delta t$ is

$$P[t_0 + t < T \le t_0 + t + \Delta t] = F_T(t + \Delta t) - F_T(t)$$

This probability is also the product of the probability of a photon arriving in any length of time $\Delta t$, which is $\Delta t / \overline{T}$, and the probability that no photon arrives before that time, which is

$$P[\text{no photon before } t_0 + t] = 1 - F_T(t)$$
Probability Mass and Density
The Exponential PDF
$$F_T(t + \Delta t) - F_T(t) = \frac{\Delta t}{\overline{T}}\,\big[1 - F_T(t)\big]$$

Dividing both sides by $\Delta t$ and letting it approach zero,

$$\lim_{\Delta t \to 0} \frac{F_T(t + \Delta t) - F_T(t)}{\Delta t} = \frac{d}{dt}\big(F_T(t)\big) = \frac{1 - F_T(t)}{\overline{T}}$$

Solving the differential equation,

$$F_T(t) = \left(1 - e^{-t/\overline{T}}\right) u(t) \;\Rightarrow\; f_T(t) = \frac{e^{-t/\overline{T}}}{\overline{T}}\, u(t)$$
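A short simulation consistent with this result: exponential inter-arrival gaps with mean $\overline{T}$ reproduce the derived CDF ($\overline{T} = 2$ is an arbitrary test value):

```python
import numpy as np

rng = np.random.default_rng(0)
T_bar = 2.0                                # mean time between photons (test value)

# Poisson arrivals: exponentially distributed gaps with mean T_bar
gaps = rng.exponential(T_bar, size=100_000)

t = 1.0
print(np.mean(gaps <= t))                  # empirical F_T(t)
print(1.0 - np.exp(-t / T_bar))            # analytic 1 - e^(-t/T_bar) ≈ 0.3935
```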
Probability Mass and Density
The Erlang PDF
The Erlang PDF is a generalization of the exponential PDF. It is the PDF of the time between one event and the kth event later in time.

$$f_{T,k}(t) = \frac{t^{k-1}\, e^{-t/\overline{T}}}{\overline{T}^{\,k}\,(k-1)!}\, u(t) \;,\; k = 1,2,3,\dots$$

Notice that, for $k = 1$,

$$f_{T,1}(t) = \frac{e^{-t/\overline{T}}}{\overline{T}}\, u(t)$$

which is the same as the exponential PDF.
Probability Mass and Density
The Gamma PDF
The Gamma PDF is based on the Gamma function

$$\Gamma(x) = \int_0^{\infty} \lambda^{x-1} e^{-\lambda}\, d\lambda \;,\; x > 0$$

Some important properties of the Gamma function are

$$\Gamma(x+1) = x\,\Gamma(x) \;,\; x > 0$$

$$\Gamma(n+1) = n! \;,\; n \ge 0 \text{ and } n \text{ an integer}$$

$$\Gamma(1/2) = \sqrt{\pi} \;,\quad \Gamma(3/2) = (1/2)\sqrt{\pi}$$

$$\binom{n}{k} = \frac{\Gamma(n+1)}{\Gamma(k+1)\,\Gamma(n-k+1)}$$
Probability Mass and Density
The Gamma PDF
$$f_X(x;\alpha,\beta) = \frac{x^{\alpha-1}\, e^{-x/\beta}}{\beta^{\alpha}\,\Gamma(\alpha)}\, u(x) \;,\; \alpha > 0 \;,\; \beta > 0$$
Probability Mass and Density
The Gamma PDF
If $\beta = 1$, the Gamma PDF is called the standard Gamma PDF

$$f_X(x;\alpha) = \frac{x^{\alpha-1}\, e^{-x}}{\Gamma(\alpha)}\, u(x) \;,\; \alpha > 0$$

$$F_X(x;\alpha) = \int_0^x f_X(\lambda)\, d\lambda = \int_0^x \frac{\lambda^{\alpha-1}\, e^{-\lambda}}{\Gamma(\alpha)}\, d\lambda$$

The distribution function $F_X(x;\alpha)$ is also called, in mathematics, the incomplete Gamma function.
Probability Mass and Density
The Gamma PDF
Two special cases of the Gamma PDF are important. If we let $\alpha = 1$ and $\beta = 1/\lambda$ we get the exponential PDF

$$f_X(x) = \lambda\, e^{-\lambda x}\, u(x)$$

If we let $\alpha = N/2$ and $\beta = 2$ we get the chi-squared PDF

$$f_X(x) = \frac{x^{N/2-1}\, e^{-x/2}}{2^{N/2}\,\Gamma(N/2)}\, u(x)$$

in which N is called the number of degrees of freedom of X.
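A small numeric confirmation of both special cases using scipy.stats, whose gamma distribution takes $\alpha$ as the shape and $\beta$ as the scale:

```python
import numpy as np
from scipy.stats import gamma, expon, chi2

x = np.linspace(0.1, 5.0, 5)
lam, N = 1.5, 4

# alpha = 1, beta = 1/lambda  ->  exponential with rate lambda
print(gamma.pdf(x, a=1, scale=1/lam))
print(expon.pdf(x, scale=1/lam))

# alpha = N/2, beta = 2  ->  chi-squared with N degrees of freedom
print(gamma.pdf(x, a=N/2, scale=2))
print(chi2.pdf(x, df=N))
```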
Functions of a Random Variable
Consider first a transformation from a DV random variable X to another DV random variable Y through $Y = g(X)$. If the function g is invertible, then $X = g^{-1}(Y)$ and the PMF for Y is then

$$p_Y(y) = p_X\big(g^{-1}(y)\big)$$

where $p_X(x)$ is the PMF for X. For the DV random variable X the PDF consists only of impulses

$$f_X(x) = \sum_{i=1}^{N} a_i\,\delta(x - x_i)$$

where N is the number of impulses. The PDF of Y also consists only of impulses and each impulse in the PDF of Y corresponds to an impulse in the PDF of X

$$f_Y(y) = \sum_{i=1}^{N} a_i\,\delta(y - y_i) = \sum_{i=1}^{N} a_i\,\delta\big(y - g(x_i)\big)$$
Functions of a Random Variable
If the function g is not invertible, the PMF and PDF of Y can be found by finding the probability of each value of Y. Each value of X with non-zero probability causes a non-zero probability for the corresponding value of Y. So, for the ith value of Y,

$$P[Y = y_i] = P[X = x_{i,1}] + P[X = x_{i,2}] + \cdots + P[X = x_{i,n}] = \sum_{k=1}^{n} P[X = x_{i,k}]$$
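A minimal sketch of this accumulation for a non-invertible g, here $g(x) = x^2$ applied to a small test PMF (the values are ours, chosen to sum to 1):

```python
from collections import defaultdict

# PMF of X on {-2, -1, 0, 1, 2} (arbitrary test values summing to 1)
p_X = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}
g = lambda x: x * x                     # non-invertible transformation

# p_Y(y_i) = sum of p_X(x) over all x with g(x) = y_i
p_Y = defaultdict(float)
for x, p in p_X.items():
    p_Y[g(x)] += p

print(dict(p_Y))   # {4: 0.2, 1: 0.4, 0: 0.4}
```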
Functions of a Random Variable
Let X and Y be CV random variables and let $Y = g(X)$. Also, let the function g be invertible, meaning that an inverse function $X = g^{-1}(Y)$ exists and is single-valued, as in the illustrations below.
Functions of a Random Variable
Then it can be shown that the PDFs of X and Y are related by

$$f_Y(y) = \frac{f_X\big(g^{-1}(y)\big)}{\left| dy/dx \right|}$$
Functions of a Random Variable
Let the PDF of X be

$$f_X(x) = \frac{1}{4}\,\mathrm{rect}\!\left(\frac{x-1}{4}\right)$$

and let $Y = 2X + 1$. Then

$$X = g^{-1}(Y) = \frac{Y-1}{2} \;,\quad \frac{dY}{dX} = 2$$
Functions of a Random Variable
$$f_Y(y) = \frac{f_X\!\left(\dfrac{y-1}{2}\right)}{2} = \frac{\dfrac{1}{4}\,\mathrm{rect}\!\left(\dfrac{(y-1)/2 - 1}{4}\right)}{2} = \frac{1}{8}\,\mathrm{rect}\!\left(\frac{y-3}{8}\right)$$
Functions of a Random Variable
Now let

$$Y = -2X + 5 \;\Rightarrow\; X = g^{-1}(Y) = \frac{5-Y}{2} \;,\quad \frac{dY}{dX} = -2$$

$$f_Y(y) = \frac{f_X\!\left(\dfrac{5-y}{2}\right)}{2} = \frac{1}{8}\,\mathrm{rect}\!\left(\frac{3-y}{8}\right) = \frac{1}{8}\,\mathrm{rect}\!\left(\frac{y-3}{8}\right)$$
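A quick Monte Carlo check of the first linear case, $Y = 2X + 1$: the samples should be uniform on $(-1, 7)$ with density 1/8:

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ (1/4) rect((x - 1)/4), i.e. uniform on (-1, 3)
X = rng.uniform(-1.0, 3.0, size=200_000)
Y = 2.0 * X + 1.0                     # should be uniform on (-1, 7), density 1/8

hist, edges = np.histogram(Y, bins=8, range=(-1.0, 7.0), density=True)
print(hist)                           # each bin ≈ 0.125
```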
Functions of a Random Variable
Now let $Y = g(X) = X^2$. This is more complicated because the event $\{1 < Y \le 4\}$ is caused by the event $\{1 < X \le 2\}$, but it is also caused by the event $\{-2 \le X < -1\}$. If we make the transformation from the PDF of X to the PDF of Y in two steps, the process is simpler to see. Let $Z = |X|$.
Functions of a Random Variable
$$f_Z(z) = \frac{1}{4}\,\mathrm{rect}\!\left(z - \frac{1}{2}\right) + \frac{1}{4}\,\mathrm{rect}\!\left(\frac{z - 3/2}{3}\right)$$

$$Z = \sqrt{Y} \;,\; Y \ge 0 \;,\quad \frac{dY}{dZ} = 2Z = 2\sqrt{Y} \;,\; Y \ge 0$$

$$f_Y(y) = \begin{cases} \dfrac{f_Z\big(g^{-1}(y)\big)}{\left| dy/dz \right|} &, y \ge 0 \\ 0 &, y < 0 \end{cases} = \frac{\dfrac{1}{4}\,\mathrm{rect}\!\left(\dfrac{\sqrt{y} - 3/2}{3}\right) + \dfrac{1}{4}\,\mathrm{rect}\!\left(\sqrt{y} - \dfrac{1}{2}\right)}{2\sqrt{y}}\, u(y)$$

$$f_Y(y) = \frac{1}{8\sqrt{y}} \left[\, \mathrm{rect}\!\left(\frac{2\sqrt{y} - 3}{6}\right) + \mathrm{rect}\!\left(\sqrt{y} - \frac{1}{2}\right) \right] u(y)$$
Functions of a Random Variable
$$\int_{-\infty}^{\infty} f_Y(y)\, dy = 1$$
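The same result can be checked by Monte Carlo, histogramming $Y = X^2$ against the closed form (staying away from the integrable $1/\sqrt{y}$ spike at the origin):

```python
import numpy as np

rng = np.random.default_rng(0)

# X uniform on (-1, 3); Y = X^2
Y = rng.uniform(-1.0, 3.0, size=500_000) ** 2

def f_Y(y):
    """Closed form: (1/(8 sqrt(y))) [rect((2 sqrt(y)-3)/6) + rect(sqrt(y)-1/2)] u(y)."""
    r = np.sqrt(y)
    rect1 = (np.abs((2*r - 3) / 6) < 0.5).astype(float)    # nonzero for 0 < y < 9
    rect2 = (np.abs(r - 0.5) < 0.5).astype(float)          # nonzero for 0 < y < 1
    return (rect1 + rect2) / (8*r)

counts, edges = np.histogram(Y, bins=34, range=(0.5, 9.0))
width = edges[1] - edges[0]
est = counts / (Y.size * width)            # empirical density over (0.5, 9)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(est - f_Y(mids))))     # small
```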
Functions of a Random Variable
In general, if $Y = g(X)$ and the real solutions of this equation are $x_1, x_2, \dots, x_N$, then, for those ranges of Y for which there is a corresponding X through $Y = g(X)$, we can find the PDF of Y. Notice that for some ranges of X there are multiple real solutions and for other ranges there may be fewer. In this figure there are three real values of x which produce $y_1$, but there is only one value of x which produces $y_2$. Only the real solutions are used. So in some transformations, the transformation used may depend on the range of values of X and Y being considered.
Functions of a Random Variable
For those ranges of Y for which there is a corresponding real X through $Y = g(X)$,

$$f_Y(y) = \frac{f_X(x_1)}{\left|\dfrac{dY}{dX}\right|_{X=x_1}} + \frac{f_X(x_2)}{\left|\dfrac{dY}{dX}\right|_{X=x_2}} + \cdots + \frac{f_X(x_N)}{\left|\dfrac{dY}{dX}\right|_{X=x_N}}$$

In the previous example, $Y = g(X) = X^2$ and $x_{1,2} = \pm\sqrt{y}$:

$$f_Y(y) = \begin{cases} \dfrac{f_X(\sqrt{y})}{2\sqrt{y}} + \dfrac{f_X(-\sqrt{y})}{2\sqrt{y}} &, y \ge 0 \\ 0 &, y < 0 \end{cases} = \begin{cases} \dfrac{\mathrm{rect}\!\left(\dfrac{\sqrt{y}-1}{4}\right) + \mathrm{rect}\!\left(\dfrac{-\sqrt{y}-1}{4}\right)}{8\sqrt{y}} &, y \ge 0 \\ 0 &, y < 0 \end{cases}$$
Functions of a Random Variable
One problem that arises when transforming CV random variables occurs when the derivative is zero. This occurs in any type of transformation for which Y is constant over a non-zero range of X. Since division by zero is undefined, the formula

$$f_Y(y) = \frac{f_X(x_1)}{\left| dy/dx \right|_{x=x_1}} + \frac{f_X(x_2)}{\left| dy/dx \right|_{x=x_2}} + \cdots + \frac{f_X(x_N)}{\left| dy/dx \right|_{x=x_N}}$$

is not usable in the range of X in which Y is a constant. In these cases it is better to utilize a more fundamental relation between X and Y.
Functions of a Random Variable
Let

$$f_X(x) = \frac{1}{6}\,\mathrm{rect}\!\left(\frac{x}{6}\right) \quad\text{and}\quad Y = \begin{cases} 3X - 2 &, X \ge 1 \\ 1 &, X < 1 \end{cases}$$

We can say $P[Y = 1] = P[X < 1] = 2/3$. So Y has a probability of 2/3 of being exactly one. That means that there must be an impulse in $f_Y(y)$ at $y = 1$, and the strength of the impulse is 2/3. In the remaining range of Y we can use

$$f_Y(y) = \frac{f_X(x_1)}{\left| dy/dx \right|_{x=x_1}} + \frac{f_X(x_2)}{\left| dy/dx \right|_{x=x_2}} + \cdots + \frac{f_X(x_N)}{\left| dy/dx \right|_{x=x_N}}$$
Functions of a Random Variable
For this example, the PDF of Y would be

$$f_Y(y) = (2/3)\,\delta(y-1) + (1/18)\,\mathrm{rect}\!\left(\frac{y+2}{18}\right) u(y-1)$$

or, simplifying,

$$f_Y(y) = (2/3)\,\delta(y-1) + (1/18)\,\mathrm{rect}\!\left(\frac{y-4}{6}\right)$$
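A Monte Carlo check of this mixed PDF; the impulse appears as a point mass at $y = 1$ and the continuous part as a flat density of 1/18 on $(1, 7)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ (1/6) rect(x/6), i.e. uniform on (-3, 3)
X = rng.uniform(-3.0, 3.0, size=300_000)
Y = np.where(X >= 1.0, 3.0*X - 2.0, 1.0)

print(np.mean(Y == 1.0))                        # point mass: ≈ 2/3
mask = Y > 1.0
hist, _ = np.histogram(Y[mask], bins=6, range=(1.0, 7.0), density=True)
print(hist * np.mean(mask))                     # continuous part: ≈ 1/18 per bin
```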
The Inverse Problem
Given a random variable X with a known PDF $f_X(x)$, it is desired to find a functional transformation $Y = g(X)$ that produces a random variable Y with a desired PDF $f_Y(y)$. First let the PDF of Y be uniform in the interval $0 < y \le 1$. The function that converts the PDF of X to this uniform PDF of Y is $Y = F_X(X)$. This implies that $y = P[X \le x]$ and therefore $0 < y \le 1$. If $X \le x$, since $F_X(x)$ is monotonic, then $Y \le y$, implying that $P[Y \le y] = P[X \le x]$. Combining,

$$F_Y(y) = P[Y \le y] = P[X \le x] = F_X(x) = y \;,\; 0 < y \le 1$$

and $f_Y(y) = 1 \;,\; 0 < y \le 1$.
The Inverse Problem
Now suppose the PDF of X is uniform and we want an arbitrary PDF of Y, $f_Y(y)$. This is simply the previous exercise in reverse. If choosing $Y = F_X(X)$ converts an arbitrary PDF of X into a uniform PDF for Y between 0 and 1, then $Y = F_Y^{-1}(X)$ converts a uniform PDF for X between 0 and 1 into an arbitrary PDF for Y. Now, to convert an arbitrary PDF of one random variable to an arbitrary PDF of another random variable, we just combine these two operations into one. The simplest solution is

$$Y = F_Y^{-1}\big(F_X(X)\big)$$
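This is the standard inverse-transform sampling recipe. A minimal sketch, taking X uniform on (0, 1) (so $F_X(X) = X$) and an exponential target as an example:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0

# X uniform on (0, 1): F_X(X) = X, so Y = F_Y^{-1}(X)
X = rng.uniform(0.0, 1.0, size=100_000)

# Target: exponential, F_Y(y) = 1 - e^(-lam y)  =>  F_Y^{-1}(u) = -ln(1 - u)/lam
Y = -np.log(1.0 - X) / lam

print(Y.mean())                                     # ≈ 1/lam = 0.5
print(np.mean(Y <= 0.5), 1 - np.exp(-lam * 0.5))    # empirical vs analytic CDF
```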
The Inverse Problem
Another approach to the problem of changing one PDF into another is to start with the general relationship

$$f_Y(y) = \frac{f_X\big(g^{-1}(y)\big)}{\left| dy/dx \right|}$$

If $f_Y(y)$ is to be the constant one in the range $0 < y \le 1$, then $\left| dy/dx \right|$ must be the same as $f_X\big(g^{-1}(y)\big)$ in that same range. So we want the magnitude of the derivative of $y = g(x)$ to be the same as $f_X(x)$, which is the derivative of $F_X(x)$. So if we choose $g(x)$ to be the same as $F_X(x)$, when we differentiate we will get the correct denominator to make the PDF of Y uniform. Because of the magnitude operation there is another solution, $Y = 1 - F_X(X)$. In this second solution the mapping between X and Y is different, but the distribution of values is the same.
The Inverse Problem
Example

A random variable X has a PDF

$$f_X(x) = \begin{cases} 2x &, 0 < x < 1 \\ 0 &, \text{otherwise} \end{cases}$$

and we want to convert it into another random variable Y with a PDF

$$f_Y(y) = \begin{cases} 1 - |y| &, |y| < 1 \\ 0 &, \text{otherwise} \end{cases}$$

What is the desired function $Y = g(X)$?
The Inverse Problem
$$f_X(x) = \begin{cases} 2x &, 0 < x < 1 \\ 0 &, \text{otherwise} \end{cases} \;,\quad f_Y(y) = \begin{cases} 1 - |y| &, |y| < 1 \\ 0 &, \text{otherwise} \end{cases}$$

Integration:

$$F_X(x) = \begin{cases} 0 &, x < 0 \\ x^2 &, 0 < x < 1 \\ 1 &, x > 1 \end{cases} \;,\quad F_Y(y) = \begin{cases} 0 &, y < -1 \\ y^2/2 + y + 1/2 &, -1 < y < 0 \\ y - y^2/2 + 1/2 &, 0 < y < 1 \\ 1 &, y > 1 \end{cases}$$

Inversion:

$$F_Y^{-1}(y) = \begin{cases} -1 + \sqrt{2y} &, 0 < y < 1/2 \\ 1 - \sqrt{2(1-y)} &, 1/2 < y < 1 \end{cases}$$

$$F_Y^{-1}\big(F_X(x)\big) = \begin{cases} -1 + \sqrt{2}\,x &, 0 < x^2 < 1/2 \\ 1 - \sqrt{2(1-x^2)} &, 1/2 < x^2 < 1 \end{cases}$$
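A sketch implementing this $g(x)$ and checking it by histogram; X itself is sampled by the same inverse-CDF trick, since $F_X^{-1}(u) = \sqrt{u}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample X with f_X(x) = 2x on (0, 1): F_X(x) = x^2, so X = sqrt(U)
X = np.sqrt(rng.uniform(0.0, 1.0, size=200_000))

# Y = F_Y^{-1}(F_X(X)), piecewise as derived above
u = X**2
Y = np.where(u < 0.5, -1.0 + np.sqrt(2.0*u), 1.0 - np.sqrt(2.0*(1.0 - u)))

hist, edges = np.histogram(Y, bins=20, range=(-1.0, 1.0), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - (1.0 - np.abs(mids)))))   # small: histogram ≈ 1 - |y|
```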
The Inverse Problem
Expectation and Moments
Imagine an experiment with M possible distinct outcomes performed N times. The average of those N outcomes is

$$\overline{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i$$

where $x_i$ is the ith distinct value of X and $n_i$ is the number of times that value occurred. Then

$$\overline{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i = \sum_{i=1}^{M} \frac{n_i}{N}\, x_i = \sum_{i=1}^{M} r_i x_i$$

The expected value of X is

$$\mathrm{E}(X) = \lim_{N \to \infty} \sum_{i=1}^{M} \frac{n_i}{N}\, x_i = \lim_{N \to \infty} \sum_{i=1}^{M} r_i x_i = \sum_{i=1}^{M} P[X = x_i]\, x_i$$
Expectation and Moments
The probability that X lies within some small range can be approximated by

$$P\!\left[ x_i - \frac{\Delta x}{2} < X \le x_i + \frac{\Delta x}{2} \right] \cong f_X(x_i)\,\Delta x$$

and the expected value is then approximated by

$$\mathrm{E}(X) = \sum_{i=1}^{M} P\!\left[ x_i - \frac{\Delta x}{2} < X \le x_i + \frac{\Delta x}{2} \right] x_i \cong \sum_{i=1}^{M} x_i\, f_X(x_i)\,\Delta x$$

where M is now the number of subdivisions of width $\Delta x$ of the range of the random variable.
Expectation and Moments
In the limit as $\Delta x$ approaches zero,

$$\mathrm{E}(X) = \int_{-\infty}^{\infty} x\, f_X(x)\, dx$$

Similarly,

$$\mathrm{E}\big(g(X)\big) = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx$$

The nth moment of a random variable is

$$\mathrm{E}(X^n) = \int_{-\infty}^{\infty} x^n f_X(x)\, dx$$
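These integrals are easy to approximate numerically; a short sketch using the $f_X(x) = 2x$ PDF from the inverse-problem example above:

```python
import numpy as np

# f_X(x) = 2x on (0, 1)
x = np.linspace(0.0, 1.0, 100_001)
f = 2.0 * x

E_X  = np.trapz(x * f, x)        # first moment: 2/3
E_X2 = np.trapz(x**2 * f, x)     # second moment: 1/2
print(E_X, E_X2)
```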
Expectation and Moments
The first moment of a random variable is its expected value

$$\mathrm{E}(X) = \int_{-\infty}^{\infty} x\, f_X(x)\, dx$$

The second moment of a random variable is its mean-squared value (which is the mean of its square, not the square of its mean).

$$\mathrm{E}(X^2) = \int_{-\infty}^{\infty} x^2 f_X(x)\, dx$$
Expectation and Moments
A central moment of a random variable is the moment of that random variable after its expected value is subtracted.

$$\mathrm{E}\Big( \big[X - \mathrm{E}(X)\big]^n \Big) = \int_{-\infty}^{\infty} \big[x - \mathrm{E}(X)\big]^n f_X(x)\, dx$$

The first central moment is always zero. The second central moment (for real-valued random variables) is the variance,

$$\sigma_X^2 = \mathrm{E}\Big( \big[X - \mathrm{E}(X)\big]^2 \Big) = \int_{-\infty}^{\infty} \big[x - \mathrm{E}(X)\big]^2 f_X(x)\, dx$$

The positive square root of the variance is the standard deviation.
Expectation and Moments
Properties of Expectation

$$\mathrm{E}(a) = a \;,\quad \mathrm{E}(aX) = a\,\mathrm{E}(X) \;,\quad \mathrm{E}\!\left( \sum_n X_n \right) = \sum_n \mathrm{E}(X_n)$$

where a is a constant. These properties can be used to prove the handy relationship

$$\sigma_X^2 = \mathrm{E}(X^2) - \mathrm{E}^2(X)$$

The variance of a random variable is the mean of its square minus the square of its mean.
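A one-line sanity check of this identity on simulated data (any test distribution will do):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(2.0, size=100_000)   # arbitrary test distribution

print(np.mean(X**2) - np.mean(X)**2)     # E(X^2) - E^2(X)
print(np.var(X))                         # matches the variance
```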
Expectation and Moments
For complex-valued random variables, absolute moments are useful. The nth absolute moment of a random variable is defined by

$$\mathrm{E}\big( |X|^n \big) = \int_{-\infty}^{\infty} |x|^n f_X(x)\, dx$$

and the nth absolute central moment is defined by

$$\mathrm{E}\Big( \big|X - \mathrm{E}(X)\big|^n \Big) = \int_{-\infty}^{\infty} \big|x - \mathrm{E}(X)\big|^n f_X(x)\, dx$$
Expectation and Moments
Let $Z = X + jY$. Then

$$\mathrm{E}\big( |Z|^2 \big) = \mathrm{E}\big( |X + jY|^2 \big) = \mathrm{E}(X^2) + \mathrm{E}(Y^2)$$

$$\sigma_Z^2 = \mathrm{E}\Big( \big|Z - \mathrm{E}(Z)\big|^2 \Big) = \mathrm{E}\Big( \big|X + jY - \mathrm{E}(X + jY)\big|^2 \Big)$$

and it can be shown that

$$\sigma_Z^2 = \sigma_X^2 + \sigma_Y^2 = \mathrm{E}\big( |Z|^2 \big) - \big| \mathrm{E}(Z) \big|^2$$

Notice that for a real-valued random variable X, if n is an even number,

$$\big|X - \mathrm{E}(X)\big|^n = \big[X - \mathrm{E}(X)\big]^n$$
Conditional Probability
Distribution Function
$$F_{X|A}(x) = P[X \le x \,|\, A] = \frac{P[X \le x \cap A]}{P[A]}$$

This is the distribution function for X given that the condition A exists.

$$0 \le F_{X|A}(x) \le 1 \;,\; -\infty < x < \infty$$

$$F_{X|A}(-\infty) = 0 \quad\text{and}\quad F_{X|A}(+\infty) = 1$$

$$P[x_1 < X \le x_2 \,|\, A] = F_{X|A}(x_2) - F_{X|A}(x_1)$$

$F_{X|A}(x)$ is a monotonic function of x.
Conditional Probability
Let A be $A = \{X \le a\}$ where a is a constant.

$$F_{X|A}(x) = P[X \le x \,|\, X \le a] = \frac{P[X \le x \cap X \le a]}{P[X \le a]}$$

If $a \le x$, then $P[X \le x \cap X \le a] = P[X \le a]$ and

$$F_{X|A}(x) = P[X \le x \,|\, X \le a] = \frac{P[X \le a]}{P[X \le a]} = 1$$

If $a \ge x$, then $P[X \le x \cap X \le a] = P[X \le x]$ and

$$F_{X|A}(x) = P[X \le x \,|\, X \le a] = \frac{P[X \le x]}{P[X \le a]} = \frac{F_X(x)}{F_X(a)}$$
Conditional Probability
Conditional PDF:

$$f_{X|A}(x) = \frac{d}{dx}\big( F_{X|A}(x) \big)$$

Conditional expected value of a function:

$$\mathrm{E}\big( g(X) \,|\, A \big) = \int_{-\infty}^{\infty} g(x)\, f_{X|A}(x)\, dx$$

Conditional mean:

$$\mathrm{E}(X \,|\, A) = \int_{-\infty}^{\infty} x\, f_{X|A}(x)\, dx$$
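A numeric illustration with $A = \{X \le a\}$ for an exponential X ($\lambda$ and a are arbitrary test values): restrict the PDF, renormalize by $F_X(a)$, and integrate:

```python
import numpy as np

lam, a = 1.0, 2.0
x = np.linspace(0.0, a, 100_001)

f_X = lam * np.exp(-lam * x)               # exponential PDF on [0, a]
F_a = 1.0 - np.exp(-lam * a)               # P[X <= a]

f_cond = f_X / F_a                         # f_{X|A}(x) = f_X(x)/F_X(a) for x <= a
E_cond = np.trapz(x * f_cond, x)           # E(X | X <= a)
print(E_cond)                              # < 1/lam, since conditioning removes the tail
```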