Discrete Random Variables

Randomness
• The word "random" effectively means unpredictable
• In engineering practice we may treat some signals as random to simplify the analysis, even though they may not actually be random
Random Variable Defined
A random variable $X(\cdot)$ is the assignment of numerical values to the outcomes of experiments.
Random Variables
[Figure: examples of assignments of numbers to the outcomes of experiments.]
Discrete-Value vs. Continuous-Value Random Variables
• A discrete-value (DV) random variable has a set of distinct values separated by values that cannot occur
• A random variable associated with the outcomes of coin flips, card draws, dice tosses, etc. would be a DV random variable
• A continuous-value (CV) random variable may take on any value in a continuum of values, which may be finite or infinite in size
Probability Mass Functions
The probability mass function (pmf) for a discrete random variable X is
$$P_X(x) = P(X = x).$$
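As a concrete illustration (not from the slides), a pmf for a fair six-sided die can be written as a small lookup table in Python:

```python
# A pmf maps each possible value of X to its probability.
# Fair die: each face is equally likely.
pmf = {x: 1/6 for x in range(1, 7)}

print(pmf[3])             # P(X = 3) = 1/6
print(sum(pmf.values()))  # a valid pmf sums to 1
```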
A DV random variable X is a Bernoulli random variable if it takes on only the two values 0 and 1 and its pmf is
$$P_X(x) = \begin{cases} 1-p, & x = 0 \\ p, & x = 1 \\ 0, & \text{otherwise} \end{cases}$$
where $0 < p < 1$.
[Figure: example of a Bernoulli pmf.]
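A minimal simulation sketch checking the Bernoulli pmf against relative frequency; the value $p = 0.3$ is an arbitrary choice for illustration:

```python
import random

p = 0.3                  # assumed example value, not from the slides
pmf = {0: 1 - p, 1: p}   # Bernoulli pmf as defined above

# The relative frequency of 1s over many trials should approach P(X = 1).
N = 100_000
ones = sum(1 for _ in range(N) if random.random() < p)
print(pmf[1], ones / N)
```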
If we perform n trials of an experiment whose outcome is Bernoulli distributed and if X represents the total number of 1s that occur in those n trials, then X is said to be a Binomial random variable and its pmf is
$$P_X(x) = \begin{cases} \dbinom{n}{x} p^x (1-p)^{n-x}, & x \in \{0,1,2,\dots,n\} \\ 0, & \text{otherwise} \end{cases}$$
[Figure: Binomial pmf.]
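A direct translation of the Binomial pmf into Python using the standard-library `math.comb`; the parameter values are arbitrary choices for illustration:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for a Binomial random variable (formula above)."""
    if 0 <= x <= n:
        return comb(n, x) * p**x * (1 - p)**(n - x)
    return 0.0  # zero otherwise

n, p = 10, 0.3  # assumed example parameters
print([round(binomial_pmf(x, n, p), 4) for x in range(n + 1)])
print(sum(binomial_pmf(x, n, p) for x in range(n + 1)))  # sums to 1
```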
If we perform Bernoulli trials until a 1 (success) occurs and the probability of a 1 on any single trial is p, the probability that the first success will occur on the kth trial is $p(1-p)^{k-1}$. A DV random variable X is said to be a Geometric random variable if its pmf is
$$P_X(x) = \begin{cases} p(1-p)^{x-1}, & x \in \{1,2,3,\dots\} \\ 0, & \text{otherwise} \end{cases}$$
[Figure: Geometric pmf.]
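A sketch that simulates Bernoulli trials until the first success and compares the empirical frequency with the Geometric pmf; the parameter values are assumptions for illustration:

```python
import random

def geometric_pmf(x, p):
    """P(X = x) = p(1-p)**(x-1) for x = 1, 2, 3, ... (formula above)."""
    return p * (1 - p)**(x - 1) if x >= 1 else 0.0

def trials_until_first_success(p):
    """Run Bernoulli trials until the first 1 occurs; return the trial count."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

p, N = 0.25, 100_000  # assumed example values
samples = [trials_until_first_success(p) for _ in range(N)]
print(samples.count(3) / N, geometric_pmf(3, p))  # empirical vs. theoretical
```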
If we perform Bernoulli trials until the rth 1 occurs and the probability of a 1 on any single trial is p, the probability that the rth success will occur on the kth trial is
$$P(\text{rth success on kth trial}) = \binom{k-1}{r-1} p^r (1-p)^{k-r}.$$
A DV random variable Y is said to be a negative-Binomial or Pascal random variable with parameters r and p if its pmf is
$$P_Y(y) = \begin{cases} \dbinom{y-1}{r-1} p^r (1-p)^{y-r}, & y \in \{r, r+1, \dots\} \\ 0, & \text{otherwise} \end{cases}$$
[Figure: Negative Binomial (Pascal) pmf.]
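The Pascal pmf coded directly from the formula above; r and p are arbitrary illustration values:

```python
from math import comb

def pascal_pmf(y, r, p):
    """P(Y = y): probability the r-th success occurs on trial y (formula above)."""
    if y >= r:
        return comb(y - 1, r - 1) * p**r * (1 - p)**(y - r)
    return 0.0

r, p = 3, 0.4  # assumed example parameters
print([round(pascal_pmf(y, r, p), 4) for y in range(r, r + 8)])
```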
Suppose we randomly place n points in the time interval $0 \le t < T$, with each point being equally likely to fall anywhere in that range. The probability that k of them fall inside an interval of length $\Delta t < T$ inside that range is
$$P(k \text{ inside } \Delta t) = \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}$$
where $p = \Delta t / T$ is the probability that any single point falls within $\Delta t$. Further, suppose that as $n \to \infty$, $n/T = \lambda$, a constant. If $\lambda$ is constant and $n \to \infty$, that implies that $T \to \infty$ and $p \to 0$. Then $\lambda$ is the average number of points per unit time, over all time.
[Figure: events occurring at random times.]
It can be shown that
$$\lim_{n \to \infty} \left(1 - \frac{\alpha}{n}\right)^{n} = e^{-\alpha}
\quad\Rightarrow\quad
P(k \text{ inside } \Delta t) \to \frac{\alpha^k}{k!}\, e^{-\alpha}$$
where $\alpha = \lambda\,\Delta t$. A DV random variable is a Poisson random variable with parameter $\alpha$ if its pmf is
$$P_X(x) = \begin{cases} \dfrac{\alpha^x e^{-\alpha}}{x!}, & x \in \{0,1,2,\dots\} \\ 0, & \text{otherwise} \end{cases}$$
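A numerical check of the limiting argument (a sketch, with arbitrary example values): holding $\alpha = np$ fixed while n grows, the Binomial pmf approaches the Poisson pmf.

```python
from math import comb, exp, factorial

def poisson_pmf(x, alpha):
    """P(X = x) = alpha**x * exp(-alpha) / x! (formula above)."""
    return alpha**x * exp(-alpha) / factorial(x) if x >= 0 else 0.0

alpha, k = 2.0, 3  # assumed example values
for n in (10, 100, 1000):
    p = alpha / n  # p -> 0 as n grows, with n*p = alpha held fixed
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    print(n, round(binom, 6), round(poisson_pmf(k, alpha), 6))
```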
Cumulative Distribution Functions
The cumulative distribution function (CDF) is defined by
$$F_X(x) = P(X \le x).$$
For example, the CDF for tossing a single die is
$$F_X(x) = (1/6)\left[u(x-1) + u(x-2) + u(x-3) + u(x-4) + u(x-5) + u(x-6)\right]$$
where
$$u(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$$
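A minimal sketch coding the die CDF directly from the unit-step form above:

```python
def u(x):
    """Unit step as defined above: 1 for x >= 0, else 0."""
    return 1.0 if x >= 0 else 0.0

def die_cdf(x):
    """CDF for tossing a single fair die (formula above)."""
    return sum(u(x - k) for k in range(1, 7)) / 6

print(die_cdf(0.5), die_cdf(3), die_cdf(3.7), die_cdf(6))  # 0.0 0.5 0.5 1.0
```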
Functions of a Random Variable
Consider a transformation from a DV random variable X to another DV random variable Y through $Y = g(X)$. If the function g is invertible, then $X = g^{-1}(Y)$ and the pmf for Y is
$$P_Y(y) = P_X\!\left(g^{-1}(y)\right)$$
where $P_X(x)$ is the pmf for X.
If the function g is not invertible, the pmf of Y can be found by finding the probability of each value of Y. Each value of X with non-zero probability causes a non-zero probability for the corresponding value of Y. So, for the ith value of Y,
$$P(Y = y_i) = P(X = x_{i,1}) + P(X = x_{i,2}) + \cdots + P(X = x_{i,n}) = \sum_{k=1}^{n} P(X = x_{i,k})$$
[Figure: an example of a non-invertible function.]
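A sketch of the non-invertible case, using $Y = X^2$ as the transformation; the pmf of X is an arbitrary choice for illustration:

```python
from collections import defaultdict

pmf_X = {x: 0.2 for x in (-2, -1, 0, 1, 2)}  # assumed example pmf

# Y = X**2 is not invertible: x = -2 and x = 2 both map to y = 4,
# so their probabilities add, as in the sum over x_{i,k} above.
pmf_Y = defaultdict(float)
for x, px in pmf_X.items():
    pmf_Y[x**2] += px

print(dict(pmf_Y))  # {4: 0.4, 1: 0.4, 0: 0.2}
```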
Expectation and Moments
Imagine an experiment with M possible distinct outcomes performed N times. The average of those N outcomes is
$$\bar{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i$$
where $x_i$ is the ith distinct value of X and $n_i$ is the number of times that value occurred. Then
$$\bar{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i = \sum_{i=1}^{M} \frac{n_i}{N}\, x_i = \sum_{i=1}^{M} r_i x_i.$$
The expected value of X is
$$\mathrm{E}[X] = \lim_{N\to\infty} \sum_{i=1}^{M} \frac{n_i}{N}\, x_i = \lim_{N\to\infty} \sum_{i=1}^{M} r_i x_i = \sum_{i=1}^{M} P(X = x_i)\, x_i.$$
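A quick simulation (a sketch, not from the slides) showing the relative-frequency average of fair-die rolls converging to $\mathrm{E}[X] = 3.5$:

```python
import random

expected = sum(x * (1/6) for x in range(1, 7))  # E[X] = 3.5 for a fair die
for N in (100, 10_000, 1_000_000):
    avg = sum(random.randint(1, 6) for _ in range(N)) / N
    print(N, round(avg, 4), expected)  # avg -> E[X] as N grows
```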
Three common measures used in statistics to indicate an "average" of a random variable are the mean, the mode, and the median. The mean is the sum of the values divided by the number of values,
$$\bar{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i.$$
The mode is the value that occurs most often,
$$P_X(x_{\text{mode}}) \ge P_X(x) \text{ for all } x.$$
The median is the value for which an equal number of values fall above and below,
$$P_X(X > x_{\text{median}}) = P_X(X < x_{\text{median}}).$$
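The three measures computed with the Python standard library on a small, arbitrary data set (illustration only):

```python
from statistics import mean, median, mode

data = [1, 2, 2, 3, 3, 3, 4, 5, 9]  # assumed example data
print(mean(data), mode(data), median(data))  # 3.555..., 3, 3
```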
The first moment of a random variable is its expected value,
$$\mathrm{E}[X] = \sum_{i=1}^{M} x_i\, P(X = x_i).$$
The second moment of a random variable is its mean-squared value (which is the mean of its square, not the square of its mean),
$$\mathrm{E}[X^2] = \sum_{i=1}^{M} x_i^2\, P(X = x_i).$$
The name "moment" comes from the fact that it is mathematically the same as a moment in classical mechanics.
The nth moment of a random variable is defined by
$$\mathrm{E}[X^n] = \sum_{i=1}^{M} x_i^n\, P(X = x_i).$$
The expected value of a function g of a random variable is
$$\mathrm{E}[g(X)] = \sum_{i=1}^{M} g(x_i)\, P(X = x_i).$$
A central moment of a random variable is the moment of that random variable after its expected value is subtracted,
$$\mathrm{E}\!\left[(X - \mathrm{E}[X])^n\right] = \sum_{i=1}^{M} (x_i - \mathrm{E}[X])^n\, P(X = x_i).$$
The first central moment is always zero. The second central moment (for real-valued random variables) is the variance,
$$\sigma_X^2 = \mathrm{E}\!\left[(X - \mathrm{E}[X])^2\right] = \sum_{i=1}^{M} (x_i - \mathrm{E}[X])^2\, P(X = x_i).$$
The variance of X can also be written as Var(X). The positive square root of the variance is the standard deviation.
Properties of expectation:
$$\mathrm{E}[a] = a, \qquad \mathrm{E}[aX] = a\,\mathrm{E}[X], \qquad \mathrm{E}\!\left[\sum_n X_n\right] = \sum_n \mathrm{E}[X_n]$$
where a is a constant. These properties can be used to prove the handy relationship
$$\sigma_X^2 = \mathrm{E}[X^2] - \mathrm{E}^2[X].$$
The variance of a random variable is the mean of its square minus the square of its mean. Another handy relation is
$$\mathrm{Var}(aX + b) = a^2\, \mathrm{Var}(X).$$
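A numerical check of both identities on a small pmf; the pmf values and the constants a, b are arbitrary choices:

```python
pmf = {0: 0.2, 1: 0.5, 3: 0.3}  # assumed example pmf

E   = sum(x * p for x, p in pmf.items())           # first moment E[X]
E2  = sum(x**2 * p for x, p in pmf.items())        # second moment E[X^2]
var = sum((x - E)**2 * p for x, p in pmf.items())  # second central moment

print(var, E2 - E**2)  # sigma_X^2 = E[X^2] - E^2[X]

a, b = 2.0, 5.0
var_aXb = sum((a*x + b - (a*E + b))**2 * p for x, p in pmf.items())
print(var_aXb, a**2 * var)  # Var(aX + b) = a^2 Var(X)
```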
Conditional Probability Mass Functions
The concept of conditional probability can be extended to a conditional probability mass function, defined by
$$P_{X|A}(x) = \begin{cases} \dfrac{P_X(x)}{P(A)}, & x \in A \\ 0, & \text{otherwise} \end{cases}$$
where A is the condition that affects the probability of X. Similarly, the conditional expected value of X is
$$\mathrm{E}[X \mid A] = \sum_{x \in A} x\, P_{X|A}(x)$$
and the conditional cumulative distribution function for X is
$$F_{X|A}(x) = P(X \le x \mid A).$$
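A sketch of the conditional pmf for a fair die with the condition $A = \{X \text{ is even}\}$; the condition is an example choice, not from the slides:

```python
pmf_X = {x: 1/6 for x in range(1, 7)}  # fair die
A = {2, 4, 6}                          # assumed example condition
P_A = sum(pmf_X[x] for x in A)         # P(A) = 1/2

# P_{X|A}(x) = P_X(x)/P(A) for x in A, 0 otherwise (formula above).
pmf_X_given_A = {x: (pmf_X[x] / P_A if x in A else 0.0) for x in pmf_X}
print(pmf_X_given_A)

E_X_given_A = sum(x * p for x, p in pmf_X_given_A.items())
print(E_X_given_A)  # conditional expected value, 4.0
```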
Conditional Probability
Let A be the event $A = \{X \le a\}$ where a is a constant. Then
$$F_{X|A}(x) = P(X \le x \mid X \le a) = \frac{P(X \le x,\; X \le a)}{P(X \le a)}.$$
If $a \le x$ then $P(X \le x,\; X \le a) = P(X \le a)$ and
$$F_{X|A}(x) = \frac{P(X \le a)}{P(X \le a)} = 1.$$
If $a \ge x$ then $P(X \le x,\; X \le a) = P(X \le x)$ and
$$F_{X|A}(x) = \frac{P(X \le x)}{P(X \le a)} = \frac{F_X(x)}{F_X(a)}.$$
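A numerical check (a sketch) of $F_{X|A}$ for a fair die with $a = 4$, an arbitrary example choice:

```python
pmf_X = {x: 1/6 for x in range(1, 7)}  # fair die

def F(x):
    """Unconditional CDF F_X(x)."""
    return sum(p for v, p in pmf_X.items() if v <= x)

a = 4  # assumed example constant; A = {X <= a}
for x in (2, 4, 6):
    # F_{X|A}(x) = F_X(x)/F_X(a) for x <= a, and 1 for x >= a (cases above)
    F_cond = F(min(x, a)) / F(a)
    print(x, round(F_cond, 3))  # 0.5, 1.0, 1.0
```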