PROBABILITY MODELS IN
ENGINEERING
COURSE NOTES
ECE2191
Dr Faezeh Marzbanrad
Department of Electrical and
Computer Systems Engineering
Monash University
Lecturers:
Dr Faezeh Marzbanrad (Clayton)
Dr Wynita Griggs (Clayton)
Dr Mohamed Hisham (Malaysia)
2020
Contents

1 Preliminary Concepts
  1.1 Probability Models in Engineering
  1.2 Review of Set Theory
  1.3 Operations on sets
  1.4 Other Notations
  1.5 Random Experiments
      1.5.1 Tree Diagrams
      1.5.2 Coordinate System

2 Probability Theory
  2.1 Definition of Probability
      2.1.1 Relative Frequency Definition
      2.1.2 Axiomatic Definition
  2.2 Joint Probabilities
  2.3 Conditional Probabilities
      2.3.1 Bayes’s Theorem
  2.4 Independence
  2.5 Basic Combinatorics
      2.5.1 Sequence of Experiments
      2.5.2 Sampling with Replacement and with Ordering
      2.5.3 Sampling without Replacement and with Ordering
      2.5.4 Sampling without Replacement and without Ordering
      2.5.5 Sampling with Replacement and without Ordering

3 Random Variables
  3.1 The Notion of a Random Variable
  3.2 Discrete Random Variables
      3.2.1 Probability Mass Function
      3.2.2 The Cumulative Distribution Function
      3.2.3 Expected Value and Moments
      3.2.4 Conditional Probability Mass Function and Expectation
      3.2.5 Common Discrete Random Variables
  3.3 Continuous Random Variables
      3.3.1 The Probability Density Function
      3.3.2 Conditional CDF and PDF
      3.3.3 The Expected Value and Moments
      3.3.4 Important Continuous Random Variables
  3.4 The Markov and Chebyshev Inequalities

4 Two or More Random Variables
  4.1 Pairs of Random Variables
      4.1.1 Joint Cumulative Distribution Function
      4.1.2 Joint Probability Density Functions
      4.1.3 Joint Probability Mass Functions
      4.1.4 Conditional Probabilities and Densities
      4.1.5 Expected Values and Moments Involving Pairs of Random Variables
      4.1.6 Independence of Random Variables
      4.1.7 Pairs of Jointly Gaussian Random Variables
  4.2 Multiple Random Variables
      4.2.1 Vector Random Variables
      4.2.2 Joint and Conditional PMFs, CDFs and PDFs
      4.2.3 Expectations Involving Multiple Random Variables
      4.2.4 Multi-Dimensional Gaussian Random Variables

5 Random Sums and Sequences
  5.1 Independent and Identically Distributed Random Variables
  5.2 Mean and Variance of Sums of Random Variables
  5.3 The Sample Mean
  5.4 Laws of Large Numbers
  5.5 The Central Limit Theorem
  5.6 Convergence of Sequences of Random Variables
      5.6.1 Sure Convergence
      5.6.2 Almost-Sure Convergence
      5.6.3 Convergence in Probability
      5.6.4 Convergence in the Mean Square Sense
      5.6.5 Convergence in Distribution
  5.7 Confidence Intervals
1 Preliminary Concepts
1.1 Probability Models in Engineering
In many real world situations, the outcome is uncertain. Many systems involve phenomena with
unpredictable variation and randomness. We often deal with random experiments in which the
outcome varies unpredictably when the experiment is repeated under the same conditions. In
those cases, deterministic models are not appropriate, since they predict the same outcome for
each repetition of an experiment. Probability models are intended for such random experiments.
In engineering problems in particular, the occurrence of many events is either uncertain or the
outcome cannot be specified by a precise value or formula. The exact value of the power line
voltage during high activity in the summer is an example which cannot be described in any
deterministic way. In communications, the events can frequently be reduced to a series of binary
digits, while the sequence of these digits is uncertain and that is how it carries the information.
Therefore in engineering applications, probability models play a fundamental role.
1.2 Review of Set Theory
In random experiments we are interested in the occurrence of events that are represented by sets.
Before proceeding with further discussion of events and random experiments, we present some
essential concepts from set theory. As we will see, the definitions and concepts presented here
will clarify and unify the mathematical foundations of probability theory.
Definition 1.1. Set: A set is an unordered collection of objects.
We typically use a capital letter to denote a set, listing the objects within braces or by graphing.
The notation ๐ด = {๐‘ฅ : ๐‘ฅ > 0, ๐‘ฅ ≤ 2} is read as “the set ๐ด contains all ๐‘ฅ such that ๐‘ฅ is greater than
zero and less than or equal to two.” The notation ๐œ ∈ ๐ด is read as “the object zeta is in the set A.”
Two sets are equal if they have exactly the same objects in them; i.e., ๐ด = ๐ต if ๐ด contains exactly
the same elements that are contained in ๐ต.
Definition 1.2. Null set: denoted ∅, is the empty set and contains no objects.
Definition 1.3. Universal set: denoted ๐‘†, is the set of all objects in the universe. The universe can
be anything we define it to be.
For example, we sometimes consider ๐‘† = ๐‘…, the set of all real numbers.
Definition 1.4. Subset: If every object in set ๐ด is also an object in set ๐ต, then ๐ด is a subset of ๐ต,
denoted by ๐ด ⊂ ๐ต.
The expression ๐ต ⊃ ๐ด read as “๐ด contains ๐ต” is equivalent to ๐ด ⊂ ๐ต.
Definition 1.5. Union: The union of sets ๐ด and ๐ต, denoted ๐ด ∪ ๐ต, is the set of objects that belong
to ๐ด or ๐ต or both, i.e., ๐ด ∪ ๐ต = {๐œ : ๐œ ∈ ๐ด or ๐œ ∈ ๐ต}.
Definition 1.6. Intersection: The intersection of sets ๐ด and ๐ต, denoted ๐ด ∩ ๐ต, is the set of objects
common to both ๐ด and ๐ต; i.e ., ๐ด ∩ ๐ต = {๐œ : ๐œ ∈ ๐ด and ๐œ ∈ ๐ต}
Note that if ๐ด ⊂ ๐ต, then ๐ด ∩ ๐ต = ๐ด. In particular, we always have ๐ด ∩ ๐‘† = ๐ด.
Definition 1.7. Complement: The complement of a set ๐ด, denoted ๐ด๐‘ , is the collection of all objects
in ๐‘† not included in ๐ด; i.e ., ๐ด๐‘ = {๐œ ∈ ๐‘† : ๐œ ∉ ๐ด} .
Definition 1.8. Difference: The relative complement or difference of sets ๐ด and ๐ต is the set of
elements in ๐ด that are not in ๐ต: ๐ด − ๐ต = {๐œ : ๐œ ∈ ๐ด and ๐œ ∉ ๐ต}
Note that ๐ด − ๐ต = ๐ด ∩ ๐ต๐‘ .
These definitions and relationships among sets are illustrated in Figure 1.1. These diagrams are
called Venn diagrams, which represent sets by simple plane areas within the universal set, pictured
as a rectangle. Venn diagrams are important visual aids to understand relationships among sets.
Figure 1.1: Venn diagrams representing sets: (a) universal set ๐‘†; (b) set ๐ด; (c) set ๐ต; (d) set ๐ด๐‘ ; (e) set ๐ด ∪ ๐ต; (f) set ๐ด ∩ ๐ต; (g) ๐ด ⊂ ๐ต; (h) disjoint sets ๐ด and ๐ต; (i) set ๐ด − ๐ต.
Theorem 1.1
Let ๐ด ⊂ ๐ต and ๐ต ⊂ ๐ด. Then ๐ด = ๐ต.
Proof. Since the empty set is a subset of any set, if ๐ด = ∅ then ๐ต ⊂ ๐ด implies that ๐ต = ∅.
Similarly, if ๐ต = ∅ then ๐ด ⊂ ๐ต implies that ๐ด = ∅. The theorem is obviously true if ๐ด and ๐ต
are both empty. If ๐ด and ๐ต are nonempty, since ๐ด ⊂ ๐ต, if ๐œ ∈ ๐ด then ๐œ ∈ ๐ต. Since ๐ต ⊂ ๐ด, if
๐œ ∈ ๐ต then ๐œ ∈ ๐ด. We therefore conclude that ๐ด = ๐ต.
The converse of the above theorem is also true: If ๐ด = ๐ต then ๐ด ⊂ ๐ต and ๐ต ⊂ ๐ด.
Example 1.1
Let ๐ด = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฆ ≤ ๐‘ฅ }, ๐ต = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฅ ≤ ๐‘ฆ + 1}, ๐ถ = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฆ < 1}, and
๐ท = {(๐‘ฅ, ๐‘ฆ) : 0 ≤ ๐‘ฆ}. Find and sketch ๐ธ = ๐ด ∩ ๐ต, ๐น = ๐ถ ∩ ๐ท, ๐บ = ๐ธ ∩ ๐น , and
๐ป = {(๐‘ฅ, ๐‘ฆ) : (−๐‘ฅ, ๐‘ฆ + 1) ∈ ๐บ }.
Solution. We first sketch the boundaries of the given sets ๐ด, ๐ต, ๐ถ, and ๐ท. Note that if the
boundary of the region is included in the set, it is indicated with a solid line, and if not, it is
indicated with a dotted line. We have
๐ธ = ๐ด ∩ ๐ต = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฅ − 1 ≤ ๐‘ฆ ≤ ๐‘ฅ }
and
๐น = ๐ถ ∩ ๐ท = {(๐‘ฅ, ๐‘ฆ) : 0 ≤ ๐‘ฆ < 1}.
The set ๐บ is the set of all ordered pairs (๐‘ฅ, ๐‘ฆ) satisfying both ๐‘ฅ − 1 ≤ ๐‘ฆ ≤ ๐‘ฅ and 0 ≤ ๐‘ฆ < 1.
Using 1− to denote a value just less than 1, the second inequality may be expressed as
0 ≤ ๐‘ฆ ≤ 1− . We may then express the set ๐บ as
๐บ = {(๐‘ฅ, ๐‘ฆ) : ๐‘š๐‘Ž๐‘ฅ {0, ๐‘ฅ − 1} ≤ ๐‘ฆ ≤ ๐‘š๐‘–๐‘›{๐‘ฅ, 1− }},
The set ๐ป is obtained from ๐บ by folding about the y-axis and translating down one unit.
This can be seen from the definitions of G and H by noting that (๐‘ฅ, ๐‘ฆ) ∈ ๐ป if (−๐‘ฅ, ๐‘ฆ + 1) ∈ ๐บ;
hence, we replace ๐‘ฅ with −๐‘ฅ and ๐‘ฆ with ๐‘ฆ + 1 in the above result for ๐บ to obtain
๐ป = {(๐‘ฅ, ๐‘ฆ) : ๐‘š๐‘Ž๐‘ฅ {0, −๐‘ฅ − 1} ≤ ๐‘ฆ + 1 ≤ ๐‘š๐‘–๐‘›{−๐‘ฅ, 1− }},
or
๐ป = {(๐‘ฅ, ๐‘ฆ) : ๐‘š๐‘Ž๐‘ฅ {−1, −๐‘ฅ − 2} ≤ ๐‘ฆ ≤ ๐‘š๐‘–๐‘›{−1 − ๐‘ฅ, 0− }}.
The sets are illustrated in Figure 1.2.
Figure 1.2: Sketches of the sets ๐ด, ๐ต, ๐ถ, ๐ท, ๐ธ, ๐น, ๐บ, and ๐ป of Example 1.1.
1.3 Operations on sets
Throughout probability theory it is often required to establish relationships between sets. The
set operations ∪ and ∩ operate on sets in much the same way the operations + and × operate
on real numbers. Similarly, the special sets ∅ and ๐‘† correspond to the additive identity 0 and
the multiplicative identity 1, respectively. This correspondence between operations on sets and
operations on real numbers is made explicit by the theorem below, which can be proved by
applying the definitions of the basic set operations stated above.
Theorem 1.2: Properties of Set Operations
Commutative Properties:
๐ด ∪ ๐ต = ๐ต ∪ ๐ด    (1.1)
๐ด ∩ ๐ต = ๐ต ∩ ๐ด    (1.2)
Associative Properties:
๐ด ∪ (๐ต ∪ ๐ถ) = (๐ด ∪ ๐ต) ∪ ๐ถ    (1.3)
๐ด ∩ (๐ต ∩ ๐ถ) = (๐ด ∩ ๐ต) ∩ ๐ถ    (1.4)
Distributive Properties:
๐ด ∩ (๐ต ∪ ๐ถ) = (๐ด ∩ ๐ต) ∪ (๐ด ∩ ๐ถ)    (1.5)
๐ด ∪ (๐ต ∩ ๐ถ) = (๐ด ∪ ๐ต) ∩ (๐ด ∪ ๐ถ)    (1.6)
De Morgan’s Laws:
(๐ด ∪ ๐ต)๐‘ = ๐ด๐‘ ∩ ๐ต๐‘    (1.7)
(๐ด ∩ ๐ต)๐‘ = ๐ด๐‘ ∪ ๐ต๐‘    (1.8)
Identities involving ∅ and ๐‘†:
๐ด ∪ ∅ = ๐ด    (1.9)
๐ด ∩ ๐‘† = ๐ด    (1.10)
๐ด ∩ ∅ = ∅    (1.11)
๐ด ∪ ๐‘† = ๐‘†    (1.12)
Identities involving complements:
๐ด ∩ ๐ด๐‘ = ∅    (1.13)
๐ด ∪ ๐ด๐‘ = ๐‘†    (1.14)
(๐ด๐‘ )๐‘ = ๐ด    (1.15)
Example 1.2
Prove De Morgan’s rules.
Solution. First suppose that ๐œ ∈ (๐ด ∪ ๐ต)๐‘ , then ๐œ ∉ (๐ด ∪ ๐ต). In particular, we have ๐œ ∉ ๐ด
which implies ๐œ ∈ ๐ด๐‘ . Similarly, we have ๐œ ∉ ๐ต which implies ๐œ ∈ ๐ต๐‘ . Hence ๐œ is in both ๐ด๐‘
and ๐ต๐‘ that is, ๐œ ∈ ๐ด๐‘ ∩ ๐ต๐‘ . We have shown that (๐ด ∪ ๐ต)๐‘ ⊂ ๐ด๐‘ ∩ ๐ต๐‘ .
To prove inclusion in the other direction, suppose that ๐œ ∈ ๐ด๐‘ ∩ ๐ต๐‘ . This implies that ๐œ ∈ ๐ด๐‘
so ๐œ ∉ ๐ด. Similarly, ๐œ ∈ ๐ต๐‘ so ๐œ ∉ ๐ต. Therefore, ๐œ ∉ (๐ด ∪ ๐ต) and so ๐œ ∈ (๐ด ∪ ๐ต)๐‘ . We have
shown that ๐ด๐‘ ∩ ๐ต๐‘ ⊂ (๐ด ∪ ๐ต)๐‘ . This proves that (๐ด ∪ ๐ต)๐‘ = ๐ด๐‘ ∩ ๐ต๐‘ .
To prove the second De Morgan rule, apply the first De Morgan rule to ๐ด๐‘ and to ๐ต๐‘ to
obtain: (๐ด๐‘ ∪ ๐ต๐‘ )๐‘ = (๐ด๐‘ )๐‘ ∩ (๐ต๐‘ )๐‘ = ๐ด ∩ ๐ต, where we used the identity (๐ด๐‘ )๐‘ = ๐ด. Now
take complements of both sides: ๐ด๐‘ ∪ ๐ต๐‘ = (๐ด ∩ ๐ต)๐‘ .
[Exercise] Use a Venn diagram to demonstrate De Morgan’s rules.
Additional insight to operations on sets is provided by the correspondence between the algebra
of set inclusion and Boolean algebra. An element either belongs to a set or it does not. Thus,
interpreting sets as Boolean (logical) variables having values of 0 or 1, the ∪ operation as the
logical "OR", the ∩ as the logical "AND" operation, and the ๐‘ as the logical complement "NOT",
any expression involving set operations can be treated as a Boolean expression.
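As a quick illustration (not part of the original notes), these identities can be checked numerically with Python's built-in set type; the universal set and the sets ๐ด, ๐ต, and ๐ถ below are arbitrary choices.

    # Minimal sketch: verifying set identities with Python sets (example sets are arbitrary).
    S = set(range(10))          # universal set
    A, B, C = {1, 2, 3, 4}, {3, 4, 5, 6}, {2, 6, 8}
    Ac, Bc = S - A, S - B       # complements relative to S

    # De Morgan's laws (Equations 1.7 and 1.8)
    assert S - (A | B) == Ac & Bc
    assert S - (A & B) == Ac | Bc

    # A distributive property (Equation 1.5)
    assert A & (B | C) == (A & B) | (A & C)

    # Difference identity: A - B = A intersect B^c
    assert A - B == A & Bc
    print("All identities hold for these example sets.")

The same checks can be repeated for any other choice of finite sets, which is a convenient sanity test when manipulating set expressions.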
Theorem 1.3
Negative Absorption Theorem:
๐ด ∪ (๐ด๐‘ ∩ ๐ต) = ๐ด ∪ ๐ต.
(1.16)
Proof. Using the distributive property,
๐ด ∪ (๐ด๐‘ ∩ ๐ต) = (๐ด ∪ ๐ด๐‘ ) ∩ (๐ด ∪ ๐ต)
= ๐‘† ∩ (๐ด ∪ ๐ต)
= ๐ด ∪ ๐ต.
Theorem 1.4
Principle of Duality: Any set identity remains true if the symbols ∪,∩, S, and ∅, are replaced
with the symbols ∩,∪,∅, and S, respectively.
Proof. The proof follows by applying De Morgan’s Laws and renaming sets ๐ด๐‘ , ๐ต๐‘ , etc. as
๐ด, ๐ต, etc.
Properties of set operations are easily extended to deal with any finite number of sets. To do this,
we need notation for the union and intersection of a collection of sets.
Definition 1.9. Union: We define the union of a collection of sets (or “set of sets”)
{๐ด๐‘– : ๐‘– ∈ ๐ผ }
(1.17)
by:
⋃_{๐‘– ∈ ๐ผ} ๐ด๐‘– = {๐œ ∈ ๐‘† : ๐œ ∈ ๐ด๐‘– for some ๐‘– ∈ ๐ผ }    (1.18)
Definition 1.10. Intersection: We define the intersection of a collection of sets
{๐ด๐‘– : ๐‘– ∈ ๐ผ }
(1.19)
by:
⋂_{๐‘– ∈ ๐ผ} ๐ด๐‘– = {๐œ ∈ ๐‘† : ๐œ ∈ ๐ด๐‘– for every ๐‘– ∈ ๐ผ }    (1.20)
Theorem 1.5: Properties of Set Operations (extended)
Commutative and Associative Properties:
⋃_{๐‘–=1}^{๐‘›} ๐ด๐‘– = ๐ด1 ∪ ๐ด2 ∪ ... ∪ ๐ด๐‘› = ๐ด๐‘–1 ∪ ๐ด๐‘–2 ∪ ... ∪ ๐ด๐‘–๐‘› ,    (1.21)
⋂_{๐‘–=1}^{๐‘›} ๐ด๐‘– = ๐ด1 ∩ ๐ด2 ∩ ... ∩ ๐ด๐‘› = ๐ด๐‘–1 ∩ ๐ด๐‘–2 ∩ ... ∩ ๐ด๐‘–๐‘› ,    (1.22)
where ๐‘–1 ∈ {1, 2, ..., ๐‘›} = ๐ผ1, ๐‘–2 ∈ ๐ผ2 = ๐ผ1 ∩ {๐‘–1 }๐‘ , and ๐‘–๐‘™ ∈ ๐ผ๐‘™ = ๐ผ๐‘™−1 ∩ {๐‘–๐‘™−1 }๐‘ , ๐‘™ = 2, 3, ..., ๐‘›. In
other words, the union (or intersection) of ๐‘› sets is independent of the order in which the
unions (or intersections) are taken.
Distributive Properties:
๐ต ∩ ⋃_{๐‘–=1}^{๐‘›} ๐ด๐‘– = ⋃_{๐‘–=1}^{๐‘›} (๐ต ∩ ๐ด๐‘– )    (1.23)
๐ต ∪ ⋂_{๐‘–=1}^{๐‘›} ๐ด๐‘– = ⋂_{๐‘–=1}^{๐‘›} (๐ต ∪ ๐ด๐‘– )    (1.24)
De Morgan’s Laws:
(⋂_{๐‘–=1}^{๐‘›} ๐ด๐‘– )๐‘ = ⋃_{๐‘–=1}^{๐‘›} ๐ด๐‘–๐‘    (1.25)
(⋃_{๐‘–=1}^{๐‘›} ๐ด๐‘– )๐‘ = ⋂_{๐‘–=1}^{๐‘›} ๐ด๐‘–๐‘    (1.26)
Throughout much of probability, it is useful to decompose a set into a union of simpler, nonoverlapping sets. This is an application of the “divide and conquer” approach to problem solving.
Necessary terminology is established in the following definitions.
Definition 1.11. Mutually Exclusive: The sets ๐ด1, ๐ด2, ..., ๐ด๐‘› are mutually exclusive (or disjoint)
if ๐ด๐‘– ∩ ๐ด ๐‘— = ∅ for all ๐‘– and ๐‘— with ๐‘– ≠ ๐‘— .
Definition 1.12. Partition: The sets ๐ด1, ๐ด2, ..., ๐ด๐‘› form a partition of the set ๐ต if they are mutually
exclusive and ๐ต = ๐ด1 ∪ ๐ด2 ∪ ... ∪ ๐ด๐‘› = ⋃_{๐‘–=1}^{๐‘›} ๐ด๐‘– .
Definition 1.13. Collectively Exhaustive: The sets ๐ด1, ๐ด2, ..., ๐ด๐‘› are collectively exhaustive if
๐‘† = ๐ด1 ∪ ๐ด2 ∪ ... ∪ ๐ด๐‘› = ⋃_{๐‘–=1}^{๐‘›} ๐ด๐‘– .
Example 1.3
Let ๐‘† = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฅ ≥ 0, ๐‘ฆ ≥ 0}, ๐ด = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฅ + ๐‘ฆ < 1}, ๐ต = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฅ < ๐‘ฆ}, and
๐ถ = {(๐‘ฅ, ๐‘ฆ) : ๐‘ฅ๐‘ฆ > 1/4}. Are the sets ๐ด, ๐ต, and ๐ถ mutually exclusive, collectively exhaustive,
and/or a partition of ๐‘†?
Solution. Since ๐ด ∩ ๐ถ = ∅, the sets ๐ด and ๐ถ are mutually exclusive; however, ๐ด ∩ ๐ต ≠ ∅ and
๐ต ∩ ๐ถ ≠ ∅, so ๐ด and ๐ต, and ๐ต and ๐ถ are not mutually exclusive. Since ๐ด ∪ ๐ต ∪ ๐ถ ≠ ๐‘†, the
events are not collectively exhaustive. The events ๐ด, ๐ต, and ๐ถ are not a partition of S since
they are not mutually exclusive and collectively exhaustive.
Definition 1.14. Cartesian Product: The Cartesian product of sets ๐ด and ๐ต is a set of ordered
pairs of elements of ๐ด and ๐ต:
๐ด × ๐ต = {๐œ = (๐œ 1, ๐œ 2 ) : ๐œ 1 ∈ ๐ด, ๐œ 2 ∈ ๐ต}.
(1.27)
The Cartesian product of sets ๐ด1, ๐ด2, ..., ๐ด๐‘› is a set of n-tuples (an ordered list of ๐‘› elements) of
elements of ๐ด1, ๐ด2, ..., ๐ด๐‘› :
๐ด1 × ๐ด2 × ... × ๐ด๐‘› = {๐œ = (๐œ 1, ๐œ 2, ...๐œ๐‘› ) : ๐œ 1 ∈ ๐ด1, ๐œ 2 ∈ ๐ด2, ..., ๐œ๐‘› ∈ ๐ด๐‘› }.
(1.28)
An important example of a Cartesian product is the usual n-dimensional real Euclidean space:
๐‘…๐‘› = ๐‘… × ๐‘… × ... × ๐‘… .
|
{z
}
(1.29)
๐‘› terms
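As an aside (not from the notes), Cartesian products of finite sets can be enumerated directly with Python's itertools.product, which is a convenient way to list combined sample spaces such as the two-dice experiment of Example 1.7.

    from itertools import product

    # Cartesian product of two finite sets (Equation 1.27); the sets are arbitrary examples.
    A = [1, 2, 3]
    B = ['x', 'y']
    AxB = list(product(A, B))          # [(1, 'x'), (1, 'y'), (2, 'x'), ...]
    print(len(AxB))                    # |A x B| = |A| * |B| = 6

    # n-fold product (Equation 1.28): the sample space of rolling two dice.
    die = range(1, 7)
    S = list(product(die, repeat=2))   # 36 ordered pairs (first roll, second roll)
    print(len(S))                      # 36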
1.4 Other Notations
Some special sets of real numbers will often be encountered:
(๐‘Ž, ๐‘) = ๐‘ฅ : ๐‘Ž < ๐‘ฅ < ๐‘,
(๐‘Ž, ๐‘] = ๐‘ฅ : ๐‘Ž < ๐‘ฅ ≤ ๐‘,
[๐‘Ž, ๐‘) = ๐‘ฅ : ๐‘Ž ≤ ๐‘ฅ < ๐‘,
[๐‘Ž, ๐‘] = ๐‘ฅ : ๐‘Ž ≤ ๐‘ฅ ≤ ๐‘.
Note that if ๐‘Ž > ๐‘, then (๐‘Ž, ๐‘) = (๐‘Ž, ๐‘] = [๐‘Ž, ๐‘) = [๐‘Ž, ๐‘] = ∅. If ๐‘Ž = ๐‘, then (๐‘Ž, ๐‘) = (๐‘Ž, ๐‘] = [๐‘Ž, ๐‘) =
∅ and [๐‘Ž, ๐‘] = ๐‘Ž. The notation (๐‘Ž, ๐‘) is also used to denote an ordered pair—we depend on the
context to determine whether (๐‘Ž, ๐‘) represents an open interval of real numbers or an ordered
pair.
1.5 Random Experiments
To further clarify the basics of random experiments, we begin with a few simple definitions.
Definition 1.15. Experiment: An experiment is a procedure we perform (quite often hypothetical)
that produces some result.
Often the letter ๐ธ is used to designate an experiment. For example, the experiment ๐ธ 5 might
consist of tossing a coin five times.
Definition 1.16. Outcome: An outcome is a possible result of an experiment.
The letter ๐œ is often used to represent outcomes. For example, the outcome ๐œ 1 of experiment ๐ธ 5
might represent the sequence of tosses heads-heads-tails-heads-tails; or concisely, HHTHT.
Definition 1.17. Event: An event is a certain set of outcomes of an experiment.
For example, the event ๐ถ associated with experiment ๐ธ 5 might be ๐ถ = {all outcomes consisting of
an even number of heads}
Definition 1.18. Sample space: The sample space is the collection or set of “all possible” distinct
(collectively exhaustive and mutually exclusive) outcomes of an experiment.
The letter ๐‘† is used to designate the sample space, which is the universal set of outcomes of an
experiment. Note that in the coin tossing experiment, the coin may land on edge. But experience
has shown us that such a result is highly unlikely to occur. Therefore, our sample space for such
experiments typically excludes such unlikely outcomes, and only includes all possible outcomes.
For now, we assume all outcomes to be distinct. Consequently, we are considering only the set of
simple outcomes that are collectively exhaustive and mutually exclusive.
A sample space is called discrete if it is a finite or a countably infinite set. It is called continuous
or a continuum otherwise. The set of all real numbers between 0 and 1 is an example of an
uncountable sample space. For now, we only deal with discrete sample spaces.
Example 1.4
Consider the experiment of flipping a fair coin once, where fair means that the coin is not
biased in weight to a particular side. There are two possible outcomes: head (๐œ 1 = ๐ป ) or a
tail (๐œ 2 = ๐‘‡ ). Thus, the sample space ๐‘†, consists of two outcomes, ๐œ 1 = ๐ป and ๐œ 2 = ๐‘‡ .
Example 1.5
Now consider flipping the coin until a tail occurs, at which point the experiment is terminated.
The sample space consists of a collection of sequences of coin tosses. The outcomes are
๐œ๐‘› , ๐‘› = 1, 2, 3, .... The final toss in any particular sequence is a tail and terminates the
sequence. The preceding tosses prior to the occurrence of the tail must be heads. The
possible outcomes that may occur are: ๐œ 1 = (๐‘‡ ), ๐œ 2 = (๐ป,๐‘‡ ), ๐œ 3 = (๐ป, ๐ป,๐‘‡ ), ...
Note that in this case, n can extend to infinity. This is a combined sample space resulting
from conducting independent but identical experiments. In this example, the sample space
is countably infinite.
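Purely as an illustration (not part of the notes), this experiment is easy to simulate; each simulated run produces one outcome ๐œ๐‘›, a string of heads terminated by the first tail.

    import random

    def flip_until_tail(p_head=0.5):
        """Simulate one outcome of Example 1.5: toss until the first tail appears."""
        outcome = []
        while True:
            toss = 'H' if random.random() < p_head else 'T'
            outcome.append(toss)
            if toss == 'T':
                return tuple(outcome)

    random.seed(0)
    for _ in range(5):
        print(flip_until_tail())   # e.g. ('T',) or ('H', 'H', 'T'), each one an outcome in S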
Example 1.6
A cubical die with numbered faces is rolled and the result observed. The sample space
consists of six possible outcomes, ๐œ 1 = 1, ๐œ 2 = 2, ..., ๐œ 6 = 6, indicating the possible observed
faces of the cubical die.
Example 1.7
Now consider the experiment of rolling two dice and observing the results. The sample space
consists of 36 outcomes: ๐œ 1 = (1, 1), ๐œ 2 = (1, 2), ..., ๐œ 6 = (1, 6), ๐œ 7 = (2, 1), ๐œ 8 = (2, 2), ..., ๐œ 3 6 =
(6, 6) the first component in the ordered pair indicates the result of the toss of the first die,
and the second component indicates the result of the toss of the second die. Alternatively
we can consider this experiment as two distinct experiments, each consisting of rolling
a single die. The sample spaces (๐‘† 1 and ๐‘† 2 ) for each of the two experiments are identical,
namely, the same as Example 1.6. We may now consider the sample space of the original
experiment ๐‘†, to be the combination of the sample spaces ๐‘† 1 and ๐‘† 2 , which consists of
all possible combinations of the elements of both ๐‘† 1 and ๐‘† 2 . This is another example of a
combined sample space. Several interesting events can be also defined from this experiment,
such as:
๐ด = {the sum of the outcomes of the two rolls = 4},
๐ต = {the outcomes of the two rolls are identical},
๐ถ = {the first roll was bigger than the second}.
The choice of a particular sample space depends upon the questions that are to be answered
concerning the experiment. Suppose that in Example 1.7, we were asked to record after each roll
the sum of the numbers shown on the two faces. Then, the sample space could be represented
by eleven outcomes, ๐œ1 = 2, ๐œ2 = 3, ..., ๐œ11 = 12. However, the original sample space is in
some way more fundamental, because the sum can be determined from the numbers on the die faces, whereas the sum alone is not sufficient to specify the sequence of numbers that occurred.
1.5.1 Tree Diagrams
Many experiments consist of a sequence of simpler “sub-experiments” as, for example, the
sequential tossing of a coin or the sequential die rolling. A tree diagram is a useful graphical
representation of a sequence of experiments, particularly when each sub-experiment has a small
number of possible outcomes.
Example 1.8
The coin in Example 1.4 is tossed twice. Illustrate the sample space with a tree diagram.
Let ๐ป๐‘– and ๐‘‡๐‘– denote the outcome of a head or a tale on the the ๐‘– ๐‘กโ„Ž toss, respectively. The
sample space is: ๐‘† = {๐ป 1๐ป 2, ๐ป 1๐‘‡2,๐‘‡1๐ป 2,๐‘‡1๐‘‡2 } The tree diagram illustrating the sample space
for this sequence of two coin tosses is shown in Figure 1.3.
Figure 1.3: Tree diagram for Example 1.8
Each node represents an outcome of one coin toss and the branches of the tree connect
the nodes. The number of branches to the right of each node corresponds to the number
of outcomes for the next coin toss (or experiment). A sequence of samples connected by
branches in a left to right path from the origin to a terminal node represents a sample point
for the combined experiment. There is a one-to-one correspondence between the paths in
the tree diagram and the sample points in the sample space for the combined experiment.
1.5.2 Coordinate System
Coordinate system representation is another way to illustrate the sample space, especially useful
for a combination of two experiments with numerical outcomes. With this method, each axis lists
the outcomes for each sub-experiment. In Example 1.7, if a die is tossed twice, the coordinate
system can represent the sample space as shown in Figure 1.4.
Figure 1.4: Coordinate system representation for Example 1.7
Note that there are 36 sample points in the experiment. Additionally, we distinguish between
sample points with regard to order; e.g., (1,2) is different from (2,1).
Further Reading
1. John D. Enderle, David C. Farden, Daniel J. Krause, Basic Probability Theory for Biomedical
Engineers, Morgan & Claypool, 2006: sections 1.1 and 1.2
2. Scott L. Miller, Donald Childers, Probability and random processes: with applications to
signal processing and communications, 2nd ed., Elsevier 2012: section 2.1
3. Alberto Leon-Garcia, Probability, statistics, and random processes for electrical engineering,
3rd ed. Pearson, 2007: sections 1.3 and 2.1
4. Charles W. Therrien, Probability for electrical and computer engineers, CRC Press, 2004:
chapter 1
2 Probability Theory
2.1 Definition of Probability
Now that the concepts of experiments, outcomes, and events have been introduced, the next
step is to assign probabilities to various outcomes and events. This requires a careful definition
of probability. It should be clear from our everyday usage of the word probability that it is a
measure of the likelihood of various events. In general terms, probability is a function of an event
that produces a numerical quantity that measures the likelihood of that event. More specifically,
probability is a real number between 0 and 1, with probability = 0 meaning that the event is
extremely unlikely to occur and probability = 1 meaning that the event is almost certain to occur.
Several approaches to probability theory have been taken. Two definitions are discussed in this
section.
2.1.1 Relative Frequency Definition
The relative frequency definition of probability is based on observation or experimental evidence
and not on prior knowledge. If an experiment is repeated ๐‘ times and a certain event ๐ด occurs in
๐‘๐ด out of ๐‘ trials, then the probability of ๐ด is defined to be:
๐‘ƒ (๐ด) = lim_{๐‘ →+∞} ๐‘๐ด / ๐‘    (2.1)
For example, if a six-sided die is rolled a large number of times and the numbers on the face of the
die come up in approximately equal proportions, then we could say that the probability of each
number on the upturned face of the die is 1/6. The difficulty with this definition is determining
when ๐‘ is sufficiently large and indeed if the limit actually exists. We will certainly use this
definition in practice, relating deduced probabilities to the physical world, but we will not develop
probability theory from it.
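A minimal sketch (assuming a fair six-sided die and Python's pseudo-random generator) of the relative frequency idea in Equation 2.1: as the number of trials ๐‘ grows, ๐‘๐ด/๐‘ settles near 1/6.

    import random

    random.seed(1)
    A_count = 0                      # N_A: number of trials in which A = {die shows 6} occurs
    N = 100_000                      # number of trials
    for _ in range(N):
        if random.randint(1, 6) == 6:
            A_count += 1
    print(A_count / N)               # relative frequency, close to 1/6 = 0.1667 for large N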
2.1.2 Axiomatic Definition
For now, we consider the event space (denoted by ๐น ) to be simply the space containing all events to
which we wish to assign a probability. We start with three axioms that any method for assigning
probabilities must satisfy:
1. For any event ๐ด ∈ ๐น , ๐‘ƒ (๐ด) ≥ 0 (a negative probability does not make sense).
2. If ๐‘† is the sample space for a given experiment, ๐‘ƒ (๐‘†) = 1 (probabilities are normalized so
that the maximum value is unity).
3. If ๐ด ∩ ๐ต = ∅, then ๐‘ƒ (๐ด ∪ ๐ต) = ๐‘ƒ (๐ด) + ๐‘ƒ (๐ต). In general if ๐ด1, ๐ด2, ... are mutually exclusive
events in ๐น , i.e. ๐ด๐‘– ∩ ๐ด ๐‘— = for all ๐‘– ≠ ๐‘—, then:
๐‘ƒ(
∞
Ø
๐ด๐‘– ) =
๐‘–=1
∞
Õ
๐‘–=1
14
๐‘ƒ (๐ด๐‘– )
The following theorem is a direct consequence of the axioms of probability, which is useful for
solving probability problems.
Theorem 2.1
Assuming that all events indicated are in the event space ๐น , we have:
(i) ๐‘ƒ (๐ด๐‘ ) = 1 − ๐‘ƒ (๐ด),
(ii) ๐‘ƒ (∅) = 0,
(iii) 0 ≤ ๐‘ƒ (๐ด) ≤ 1,
(iv) ๐‘ƒ (๐ด ∪ ๐ต) = ๐‘ƒ (๐ด) + ๐‘ƒ (๐ต) − ๐‘ƒ (๐ด ∩ ๐ต)
(v) ๐‘ƒ (๐ต) ≤ ๐‘ƒ (๐ด) if ๐ต ⊂ ๐ด.
Proof.
(i) Since ๐‘† = ๐ด ∪ ๐ด๐‘ and ๐ด ∩ ๐ด๐‘ = ∅, we apply the second and third axioms of probability
to obtain ๐‘ƒ (๐‘†) = 1 = ๐‘ƒ (๐ด) + ๐‘ƒ (๐ด๐‘ ), from which (i) follows.
(ii) Applying (i) with ๐ด = ๐‘† we have ๐ด๐‘ = ∅ so that ๐‘ƒ (∅) = 1 − ๐‘ƒ (๐‘†) = 0.
(iii) From (i) we have ๐‘ƒ (๐ด) = 1 − ๐‘ƒ (๐ด๐‘ ), from the first axiom we have ๐‘ƒ (๐ด) ≥ 0 and
๐‘ƒ (๐ด๐‘ ) ≥ 0; consequently, 0 ≤ ๐‘ƒ (๐ด) ≤ 1.
(iv) Let ๐ถ = ๐ต ∩ ๐ด๐‘ . Then ๐ด ∪ ๐ถ = ๐ด ∪ (๐ต ∩ ๐ด๐‘ ) = (๐ด ∪ ๐ต) ∩ (๐ด ∪ ๐ด๐‘ ) = ๐ด ∪ ๐ต, and
๐ด ∩ ๐ถ = ๐ด ∩ ๐ต ∩ ๐ด๐‘ = ∅, so that ๐‘ƒ (๐ด ∪ ๐ต) = ๐‘ƒ (๐ด ∪ ๐ถ) = ๐‘ƒ (๐ด) + ๐‘ƒ (๐ถ). Now we find
๐‘ƒ (๐ถ). Since ๐ต = ๐ต ∩ ๐‘† = ๐ต ∩ (๐ด ∪ ๐ด๐‘ ) = (๐ต ∩ ๐ด) ∪ (๐ต ∩ ๐ด๐‘ ) and (๐ต ∩ ๐ด) ∩ (๐ต ∩ ๐ด๐‘ ) = ∅,
๐‘ƒ (๐ต) = ๐‘ƒ (๐ต ∩ ๐ด๐‘ ) + ๐‘ƒ (๐ด ∩ ๐ต) = ๐‘ƒ (๐ถ) + ๐‘ƒ (๐ด ∩ ๐ต), so ๐‘ƒ (๐ถ) = ๐‘ƒ (๐ต) − ๐‘ƒ (๐ด ∩ ๐ต).
(v) We have ๐ด = ๐ด ∩ (๐ต ∪ ๐ต๐‘ ) = (๐ด ∩ ๐ต) ∪ (๐ด ∩ ๐ต๐‘ ), and if ๐ต ⊂ ๐ด, then ๐ด = ๐ต ∪ (๐ด ∩ ๐ต๐‘ ).
Since ๐ต ∩ (๐ด ∩ ๐ต๐‘ ) = ∅, consequently, ๐‘ƒ (๐ด) = ๐‘ƒ (๐ต) + ๐‘ƒ (๐ด ∩ ๐ต๐‘ ) ≥ ๐‘ƒ (๐ต).
[Exercise−] Visualize this theorem by drawing a Venn diagram.
Example 2.1
Given ๐‘ƒ (๐ด) = 0.4, ๐‘ƒ (๐ด ∩ ๐ต๐‘ ) = 0.2, and ๐‘ƒ (๐ด ∪ ๐ต) = 0.6, find ๐‘ƒ (๐ด ∩ ๐ต) and ๐‘ƒ (๐ต).
Solution. We have ๐‘ƒ (๐ด) = ๐‘ƒ (๐ด ∩ ๐ต) + ๐‘ƒ (๐ด ∩ ๐ต๐‘ ) so that ๐‘ƒ (๐ด ∩ ๐ต) = 0.4 − 0.2 = 0.2. Similarly,
๐‘ƒ (๐ต๐‘ ) = ๐‘ƒ (๐ต๐‘ ∩ ๐ด) + ๐‘ƒ (๐ต๐‘ ∩ ๐ด๐‘ ) = 0.2 + 1 − ๐‘ƒ (๐ด ∪ ๐ต) = 0.6. Hence, ๐‘ƒ (๐ต) = 1 − ๐‘ƒ (๐ต๐‘ ) = 0.4.
Note that since probabilities are non-negative (theorem 2.1 (iii)), the theorem 2.1 (iv) implies that
the probability of the union of two events is no greater than the sum of the individual event
probabilities:
๐‘ƒ (๐ด ∪ ๐ต) ≤ ๐‘ƒ (๐ด) + ๐‘ƒ (๐ต)
(2.2)
This can be extended to Boole’s Inequality, described as follows.
Theorem 2.2
Boole’s inequality: Let ๐ด1, ๐ด2, ... all belong to ๐น . Then
๐‘ƒ (⋃_{๐‘–=1}^{∞} ๐ด๐‘– ) = Σ_{๐‘˜=1}^{∞} (๐‘ƒ (๐ด๐‘˜ ) − ๐‘ƒ (๐ด๐‘˜ ∩ ๐ต๐‘˜ )) ≤ Σ_{๐‘˜=1}^{∞} ๐‘ƒ (๐ด๐‘˜ )
where
๐ต๐‘˜ = ⋃_{๐‘–=1}^{๐‘˜−1} ๐ด๐‘–
Proof. Note that ๐ต1 = ∅, ๐ต2 = ๐ด1, ๐ต3 = ๐ด1 ∪ ๐ด2, ..., ๐ต๐‘˜ = ๐ด1 ∪ ๐ด2 ∪ ... ∪ ๐ด๐‘˜−1 ; as ๐‘˜ increases,
the size of ๐ต๐‘˜ is non-decreasing. Let ๐ถ๐‘˜ = ๐ด๐‘˜ ∩ ๐ต๐‘˜๐‘ ; thus, ๐ถ๐‘˜ = ๐ด๐‘˜ ∩ (๐ด1๐‘ ∩ ๐ด2๐‘ ∩ ... ∩ ๐ด๐‘˜−1๐‘ )
consists of all elements in ๐ด๐‘˜ and not in any ๐ด๐‘– , ๐‘– = 1, 2, ..., ๐‘˜ − 1. Then
๐ต๐‘˜+1 = ⋃_{๐‘–=1}^{๐‘˜} ๐ด๐‘– = ๐ต๐‘˜ ∪ (๐ด๐‘˜ ∩ ๐ต๐‘˜๐‘ ) = ๐ต๐‘˜ ∪ ๐ถ๐‘˜ ,
and ๐‘ƒ (๐ต๐‘˜+1 ) = ๐‘ƒ (๐ต๐‘˜ ) + ๐‘ƒ (๐ถ๐‘˜ ). We have ๐‘ƒ (๐ต2 ) = ๐‘ƒ (๐ถ1 ), ๐‘ƒ (๐ต3 ) = ๐‘ƒ (๐ถ1 ) + ๐‘ƒ (๐ถ2 ), and
๐‘ƒ (๐ต๐‘˜+1 ) = ๐‘ƒ (⋃_{๐‘–=1}^{๐‘˜} ๐ด๐‘– ) = Σ_{๐‘–=1}^{๐‘˜} ๐‘ƒ (๐ถ๐‘– )
The desired result follows by noting that ๐‘ƒ (๐ถ๐‘– ) = ๐‘ƒ (๐ด๐‘– ) − ๐‘ƒ (๐ด๐‘– ∩ ๐ต๐‘– ).
Example 2.2
Let ๐‘† = [0, 1] (the set of real numbers ๐‘ฅ : 0 ≤ ๐‘ฅ ≤ 1). Let ๐ด1 = [0, 0.5], ๐ด2 = (0.45, 0.7),
๐ด3 = [0.6, 0.8), and assume ๐‘ƒ (๐œ ∈ ๐ผ ) = length of the interval ๐ผ ∩ ๐‘†, so that ๐‘ƒ (๐ด1 ) = 0.5,
๐‘ƒ (๐ด2 ) = 0.25, and ๐‘ƒ (๐ด3 ) = 0.2. Find ๐‘ƒ (๐ด1 ∪ ๐ด2 ∪ ๐ด3 ).
Solution. Let ๐ถ 1 = ๐ด1, ๐ถ 2 = ๐ด2 ∩ ๐ด๐‘1 = (0.5, 0.7), and ๐ถ 3 = ๐ด3 ∩ ๐ด๐‘1 ∩ ๐ด๐‘2 = [0.7, 0.8). Then
๐ถ 1, ๐ถ 2, and ๐ถ 3 are mutually exclusive and ๐ด1 ∪๐ด2 ∪๐ด3 = ๐ถ 1 ∪๐ถ 2 ∪๐ถ 3 ; hence ๐‘ƒ (๐ด1 ∪๐ด2 ∪๐ด3 ) =
๐‘ƒ (๐ถ 1 ∪ ๐ถ 2 ∪ ๐ถ 3 ) = 0.5 + 0.2 + 0.1 = 0.8. Note that for this example, Boole’s inequality yields
๐‘ƒ (๐ด1 ∪ ๐ด2 ∪ ๐ด3 ) ≤ 0.5 + 0.25 + 0.2 = 0.95.
2.2 Joint Probabilities
Suppose that we have two sets, ๐ด and ๐ต. We saw a few results in the previous section that dealt
with how to calculate the probability of the union of two sets, ๐ด ∪ ๐ต. At least as frequently, we
are interested in calculating the probability of the intersection of two sets, ๐ด ∩ ๐ต.
Definition 2.1. Joint probability: The probability of the intersection of two sets, ๐ด ∩ ๐ต is referred
to as the joint probability of the sets ๐ด and ๐ต, ๐‘ƒ (๐ด ∩ ๐ต), usually denoted by ๐‘ƒ (๐ด, ๐ต).
Extending to an arbitrary number of sets, the joint probability of the sets ๐ด1, ๐ด2, ..., ๐ด๐‘€ , denoted
๐‘ƒ (๐ด1, ๐ด2, ..., ๐ด๐‘€ ), is ๐‘ƒ (๐ด1 ∩ ๐ด2 ∩ ... ∩ ๐ด๐‘€ ).
From the relative frequency definition, in practice we may let ๐‘›๐ด,๐ต be the number of times that ๐ด
and ๐ต simultaneously occur in ๐‘› trials. Then,
๐‘›๐ด,๐ต
๐‘ƒ (๐ด, ๐ต) = lim
(2.3)
๐‘›→∞ ๐‘›
Example 2.3
A standard deck of playing cards has 52 cards that can be divided in several manners. There
are four suits (spades, hearts, diamonds, and clubs), each of which has 13 cards (ace, 2, 3, 4,
... , 10, jack, queen, king). There are two red suits (hearts and diamonds) and two black suits
(spades and clubs). Also, the jacks, queens, and kings are referred to as face cards, while
the others are number cards. Suppose the cards are sufficiently shuffled (randomized) and
one card is drawn from the deck. The experiment has 52 outcomes corresponding to the 52
individual cards that could have been selected. Hence, each outcome has a probability of
1/52. Define the events:
A = {red card selected},
B = {number card selected},
C = {heart selected}.
Since the event A consists of 26 outcomes (there are 26 red cards), then ๐‘ƒ (๐ด) = 26/52 = 1/2.
Likewise, ๐‘ƒ (๐ต) = 40/52 = 10/13 and ๐‘ƒ (๐ถ) = 13/52 = 1/4. Events A and B have 20
outcomes in common, hence ๐‘ƒ (๐ด, ๐ต) = 20/52 = 5/13. Likewise, ๐‘ƒ (๐ต, ๐ถ) = 10/52 = 5/26
and ๐‘ƒ (๐ด, ๐ถ) = 13/52 = 1/4. It is interesting to note that in this example, ๐‘ƒ (๐ด, ๐ถ) = ๐‘ƒ (๐ถ),
because ๐ถ ⊂ ๐ด and as a result ๐ด ∩ ๐ถ = ๐ถ.
2.3 Conditional Probabilities
Often the occurrence of one event may be dependent upon the occurrence of another. In Example
2.3, the event A = {a red card is selected} had a probability of ๐‘ƒ (๐ด) = 1/2. If it is known that event
C = {a heart is selected} has occurred, then the event A is now certain (probability equal to 1),
since all cards in the heart suit are red. Likewise, if it is known that the event C did not occur, then
there are 39 cards remaining, 13 of which are red (all the diamonds). Hence, the probability of
event A in that case becomes 1/3. Clearly, the probability of event A depends on the occurrence of
event C. We say that the probability of A is conditional on C, or the probability of A conditioned
on knowing that C has occurred.
Definition 2.2. Conditional probability: the probability of A given knowledge that the event B
has occurred is referred to as the conditional probability of A given B, denoted by ๐‘ƒ (๐ด|๐ต), i.e.:
๐‘ƒ (๐ด|๐ต) =
๐‘ƒ (๐ด, ๐ต)
๐‘ƒ (๐ต)
(2.4)
provided that ๐‘ƒ (๐ต) is nonzero.
The conditional probability measure is a legitimate probability measure that satisfies each of the
axioms of probability. Note carefully that ๐‘ƒ (๐ต|๐ด) ≠ ๐‘ƒ (๐ด|๐ต). If we interpret probability as relative
frequency, then ๐‘ƒ (๐ด|๐ต) should be the relative frequency of the event ๐ด ∩ ๐ต in experiments where
๐ต occurred. Suppose that the experiment is performed ๐‘› times, and suppose that event ๐ต occurs
๐‘›๐ต times, and that event ๐ด ∩ ๐ต occurs ๐‘›๐ด,๐ต times. The relative frequency of interest is then:
๐‘›๐ด,๐ต /๐‘›
๐‘›๐ด,๐ต
๐‘ƒ (๐ด, ๐ต)
= lim
= lim
๐‘›→∞
๐‘›→∞
๐‘ƒ (๐ต)
๐‘›๐ต /๐‘›
๐‘›๐ต
17
(2.5)
provided that ๐‘ƒ (๐ต) is nonzero.
We may find in some cases that conditional probabilities are easier to compute than the corresponding joint probabilities, and hence this formula offers a convenient way to compute joint
probabilities:
๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ต|๐ด)๐‘ƒ (๐ด) = ๐‘ƒ (๐ด|๐ต)๐‘ƒ (๐ต)
(2.6)
This idea can be extended to more than two events. Consider finding the joint probability of three
events, ๐ด, ๐ต, and ๐ถ:
๐‘ƒ (๐ด, ๐ต, ๐ถ) = ๐‘ƒ (๐ถ |๐ด, ๐ต)๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ถ |๐ด, ๐ต)๐‘ƒ (๐ต|๐ด)๐‘ƒ (๐ด)
(2.7)
In general, for ๐‘€ events, ๐ด1, ๐ด2, ..., ๐ด๐‘€ ,
๐‘ƒ (๐ด1, ๐ด2, ..., ๐ด๐‘€ ) = ๐‘ƒ (๐ด๐‘€ |๐ด1, ๐ด2, ..., ๐ด๐‘€−1 )๐‘ƒ (๐ด๐‘€−1 |๐ด1, ๐ด2, ..., ๐ด๐‘€−2 )... × ๐‘ƒ (๐ด2 |๐ด1 )๐‘ƒ (๐ด1 )
(2.8)
Example 2.4
Return to the experiment of drawing cards from a deck as described in Example 2.3. Suppose
now that we select two cards at random from the deck. When we select the second card,
we do not return the first card to the deck. In this case, we say that we are selecting cards
without replacement. As a result, the probabilities associated with selecting the second card
are slightly different if we have knowledge of which card was drawn on the first selection.
To illustrate this, let:
A = {first card was a spade} and
B = {second card was a spade}.
The probability of the event A can be calculated as in the previous example to be ๐‘ƒ (๐ด) =
13/52 = 1/4. Likewise, if we have no knowledge of what was drawn on the first selection, the
probability of the event B is the same, ๐‘ƒ (๐ต) = 1/4. To calculate the joint probability of A and
B, we have to do some counting. To begin, when we select the first card there are 52 possible
outcomes. Since this card is not returned to the deck, there are only 51 possible outcomes
for the second card. Hence, this experiment of selecting two cards from the deck has 52 ∗ 51
possible outcomes each of which is equally likely. Similarly, there are 13 ∗ 12 outcomes that
belong to the joint event ๐ด ∩ ๐ต. Therefore, the joint probability for A and B is ๐‘ƒ (๐ด, ๐ต) =
(13 ∗ 12)/(52 ∗ 51) = 1/17. The conditional probability of the second card being a spade
given that the first card is a spade is then ๐‘ƒ (๐ต|๐ด) = ๐‘ƒ (๐ด, ๐ต)/๐‘ƒ (๐ด) = (1/17)/(1/4) = 4/17.
However, calculating this conditional probability directly is probably easier than calculating
the joint probability. Given that we know the first card selected was a spade, there are now
51 cards left in the deck, 12 of which are spades, thus ๐‘ƒ (๐ต|๐ด) = 12/51 = 4/17.
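The same numbers fall out of a direct enumeration of ordered two-card draws; a brief sketch (the card labelling is an arbitrary assumption):

    from fractions import Fraction
    from itertools import permutations

    cards = range(52)                       # label the cards 0..51; let 0..12 be the spades
    def is_spade(c):
        return c < 13

    draws = list(permutations(cards, 2))    # 52 * 51 equally likely ordered pairs
    A  = [d for d in draws if is_spade(d[0])]                     # first card a spade
    AB = [d for d in draws if is_spade(d[0]) and is_spade(d[1])]  # both cards spades

    P_A  = Fraction(len(A), len(draws))     # 1/4
    P_AB = Fraction(len(AB), len(draws))    # 1/17
    print(P_A, P_AB, P_AB / P_A)            # P(B|A) = 4/17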
2.3.1 Bayes’s Theorem
The concept of conditional probability leads us to the following theorem.
Theorem 2.3
For any events ๐ด and ๐ต such that ๐‘ƒ (๐ต) ≠ 0,
๐‘ƒ (๐ด|๐ต) =
๐‘ƒ (๐ต|๐ด)๐‘ƒ (๐ด)
๐‘ƒ (๐ต)
18
(2.9)
Proof. From definition 2.2,
๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ด|๐ต)๐‘ƒ (๐ต) = ๐‘ƒ (๐ต|๐ด)๐‘ƒ (๐ด).
(2.10)
It follows directly by dividing the preceding equations by ๐‘ƒ (๐ต).
Theorem 2.3 is useful for calculating certain conditional probabilities since, in many problems, it
may be quite difficult to compute ๐‘ƒ (๐ด|๐ต) directly, whereas calculating ๐‘ƒ (๐ต|๐ด) may be straightforward.
Theorem 2.4: Theorem of Total Probability
Let ๐ต 1, ๐ต 2, ..., ๐ต๐‘› be a set of mutually exclusive and collectively exhaustive events. That is,
๐ต๐‘– ∩ ๐ต ๐‘— = for all ๐‘– ≠ ๐‘— and
๐‘›
๐‘›
Ø
Õ
๐ต๐‘– = ๐‘† ⇒
๐‘ƒ (๐ต๐‘– ) = 1
(2.11)
๐‘–=1
๐‘–=1
then
๐‘ƒ (๐ด) =
๐‘›
Õ
๐‘ƒ (๐ด|๐ต๐‘– )๐‘ƒ (๐ต๐‘– )
(2.12)
๐‘–=1
Proof. From the Venn diagram in Figure 2.1, it can be seen that the event ๐ด can be written
as:
๐ด = (๐ด∩๐ต 1 ) ∪ (๐ด∩๐ต 2 ) ∪...∪ (๐ด∩๐ต๐‘› ) ⇒ ๐‘ƒ (๐ด) = ๐‘ƒ ({๐ด∩๐ต 1 }∪{๐ด∩๐ต 2 }∪...∪{๐ด∩๐ต๐‘› }) (2.13)
Also, since the ๐ต๐‘– are all mutually exclusive, then the {๐ด ∩ ๐ต๐‘– } are also mutually exclusive,
so that
๐‘›
๐‘›
Õ
Õ
๐‘ƒ (๐ด) =
๐‘ƒ (๐ด, ๐ต๐‘– ) =
๐‘ƒ (๐ด|๐ต๐‘– )๐‘ƒ (๐ต๐‘– ) (by Theorem 2.3).
(2.14)
๐‘–=1
๐‘–=1
Figure 2.1: Venn diagram used to help prove the theorem of total probability
By combining the results of Theorems 2.3 and 2.4, we get what has come to be known as Bayes’s
theorem.
Theorem 2.5: Bayes’s Theorem
Let ๐ต 1, ๐ต 2, ..., ๐ต๐‘› be a set of mutually exclusive and collectively exhaustive events. Then:
๐‘ƒ (๐ด|๐ต๐‘– )๐‘ƒ (๐ต๐‘– )
๐‘ƒ (๐ต๐‘– |๐ด) = Í๐‘›
๐‘–=1 ๐‘ƒ (๐ด|๐ต๐‘– )๐‘ƒ (๐ต๐‘– )
(2.15)
๐‘ƒ (๐ต๐‘– ) is often referred to as the a priori probability of event ๐ต๐‘– , while ๐‘ƒ (๐ต๐‘– |๐ด) is known as the a
posteriori probability of event ๐ต๐‘– given ๐ด.
Example 2.5
A certain auditorium has 30 rows of seats. Row 1 has 11 seats, while Row 2 has 12 seats, Row
3 has 13 seats, and so on to the back of the auditorium where Row 30 has 40 seats. A door
prize is to be given away by randomly selecting a row (with equal probability of selecting
any of the 30 rows) and then randomly selecting a seat within that row (with each seat in
the row equally likely to be selected). Find the probability that Seat 15 was selected given
that Row 20 was selected and also find the probability that Row 20 was selected given that
Seat 15 was selected.
Solution. The first task is straightforward. Given that Row 20 was selected, there are 30
possible seats in Row 20 that are equally likely to be selected. Hence, ๐‘ƒ (๐‘†๐‘’๐‘Ž๐‘ก15|๐‘…๐‘œ๐‘ค20) =
1/30. Without the help of Bayes’s theorem, finding the probability that Row 20 was selected
given that we know Seat 15 was selected would seem to be a formidable problem. Using
Bayes’s theorem,
๐‘ƒ (๐‘…๐‘œ๐‘ค20|๐‘†๐‘’๐‘Ž๐‘ก15) = ๐‘ƒ (๐‘†๐‘’๐‘Ž๐‘ก15|๐‘…๐‘œ๐‘ค20)๐‘ƒ (๐‘…๐‘œ๐‘ค20)/๐‘ƒ (๐‘†๐‘’๐‘Ž๐‘ก15).
The two terms in the numerator on the right-hand side are both equal to 1/30. The term in
the denominator is calculated using the help of the theorem of total probability.
๐‘ƒ (๐‘†๐‘’๐‘Ž๐‘ก15) =
30
Õ
1 1
= 0.0342
๐‘˜ + 10 30
๐‘˜=5
With this calculation completed, the a posteriori probability of Row 20 being selected given
seat 15 was selected is given by:
๐‘ƒ (๐‘…๐‘œ๐‘ค20|๐‘†๐‘’๐‘Ž๐‘ก15) =
1/30 ∗ 1/30
= 0.0325
0.0342
Note that the a priori probability that Row 20 was selected is 1/30 = 0.0333. Therefore, the
additional information that Seat 15 was selected makes the event that Row 20 was selected
slightly less likely. In some sense, this may be counterintuitive, since we know that if Seat
15 was selected, there are certain rows that could not have been selected (i.e., Rows 1–4
have fewer than 15 seats) and, therefore, we might expect Row 20 to have a slightly higher
probability of being selected compared to when we have no information about which seat
was selected. To see why the probability actually goes down, try computing the probability
that Row 5 was selected given that Seat 15 was selected. The event that Seat 15 was selected
makes some rows much more probable, while it makes others less probable and a few rows
now impossible.
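The arithmetic in this example is easy to reproduce (and the suggested check for Row 5 is included); a minimal sketch:

    # Row k (k = 1..30) has k + 10 seats; a row is picked uniformly, then a seat within it.
    rows = range(1, 31)
    P_row = 1 / 30

    # Theorem of total probability: only rows with at least 15 seats contribute.
    P_seat15 = sum(P_row * (1 / (k + 10)) for k in rows if k + 10 >= 15)
    print(round(P_seat15, 4))                     # about 0.0342

    # Bayes' theorem: P(Row 20 | Seat 15)
    print(round((1 / 30) * P_row / P_seat15, 4))  # about 0.0325, slightly below the prior 1/30

    # For contrast, Row 5 (15 seats) becomes noticeably more likely given Seat 15:
    print(round((1 / 15) * P_row / P_seat15, 4))  # about 0.065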
2.4 Independence
In Example 2.5, it was seen that observing one event can change the probability of the occurrence
of another event. In that particular case, the fact that it was known that Seat 15 was selected,
lowered the probability that Row 20 was selected. We say that the event ๐ด = {Row 20 was
selected} is statistically dependent on the event ๐ต = {Seat 15 was selected}. If the description of
the auditorium were changed so that each row had an equal number of seats (e.g., say all 30 rows
had 20 seats each), then observing the event ๐ต = {Seat 15 was selected} would not give us any new
information about the likelihood of the event ๐ด = {Row 20 was selected}. In that case, we say
that the events ๐ด and ๐ต are statistically independent.
Mathematically, two events ๐ด and ๐ต are independent if ๐‘ƒ (๐ด|๐ต) = ๐‘ƒ (๐ด). That is, the a priori
probability of event ๐ด is identical to the a posteriori probability of ๐ด given ๐ต. Note that if
๐‘ƒ (๐ด|๐ต) = ๐‘ƒ (๐ด), then the following conditions also hold: ๐‘ƒ (๐ต|๐ด) = ๐‘ƒ (๐ต) and ๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ด)๐‘ƒ (๐ต).
Furthermore, if ๐‘ƒ (๐ด|๐ต) ≠ ๐‘ƒ (๐ด), then the other two conditions also do not hold. We can thereby
conclude that any of these three conditions can be used as a test for independence and the other
two forms must follow. We use the last form as a definition of independence since it is symmetric
relative to the events A and B.
Definition 2.3. Independence: Two events are statistically independent if and only if:
๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ด)๐‘ƒ (๐ต)
(2.16)
Example 2.6
Consider the experiment of tossing two numbered dice and observing the numbers that
appear on the two upper faces. For convenience, let the dice be distinguished by color, with
the first die tossed being red and the second being white. Let:
A = {number on the red die is less than or equal to 2},
B = {number on the white die is greater than or equal to 4},
C = {the sum of the numbers on the two dice is 3}.
As mentioned in the preceding text, there are several ways to establish independence (or
lack thereof) of a pair of events. One possible way is to compare ๐‘ƒ (๐ด, ๐ต) with ๐‘ƒ (๐ด)๐‘ƒ (๐ต).
Note that for the events defined here, ๐‘ƒ (๐ด) = 1/3, ๐‘ƒ (๐ต) = 1/2, ๐‘ƒ (๐ถ) = 1/18. Also, of the 36
possible outcomes of the experiment, six belong to the event ๐ด ∩ ๐ต and hence ๐‘ƒ (๐ด, ๐ต) = 1/6.
Since ๐‘ƒ (๐ด)๐‘ƒ (๐ต) = 1/6 as well, we conclude that the events ๐ด and ๐ต are independent. This
agrees with intuition since we would not expect the outcome of the roll of one die to affect
the outcome of the other. What about the events ๐ด and ๐ถ? Of the 36 possible outcomes of the
experiment, two belong to the event ๐ด∩๐ถ and hence ๐‘ƒ (๐ด, ๐ถ) = 1/18. Since ๐‘ƒ (๐ด)๐‘ƒ (๐ถ) = 1/54,
the events ๐ด and ๐ถ are not independent. Again, this is intuitive since whenever the event ๐ถ
occurs, the event ๐ด must also occur and so the two must be dependent. Finally, we look at
the pair of events ๐ต and ๐ถ. Clearly, ๐ต and ๐ถ are mutually exclusive. If the white die shows a
number greater than or equal to 4, there is no way the sum can be 3. Hence, ๐‘ƒ (๐ต, ๐ถ) = 0 and
since ๐‘ƒ (๐ต)๐‘ƒ (๐ถ) = 1/36, these two events are also dependent.
Note that mutually exclusive events are not the same as independent events. For two events ๐ด
and ๐ต for which ๐‘ƒ (๐ด) ≠ 0 and ๐‘ƒ (๐ต) ≠ 0, ๐ด and ๐ต can never be both independent and mutually
exclusive. Thus, mutually exclusive events are necessarily statistically dependent.
Generalizing the definition of independence to three events, ๐ด, ๐ต, and ๐ถ are mutually independent
if each pair of events is independent;
๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ด)๐‘ƒ (๐ต)
21
(2.17)
๐‘ƒ (๐ด, ๐ถ) = ๐‘ƒ (๐ด)๐‘ƒ (๐ถ)
(2.18)
๐‘ƒ (๐ต, ๐ถ) = ๐‘ƒ (๐ต)๐‘ƒ (๐ถ)
(2.19)
๐‘ƒ (๐ด, ๐ต, ๐ถ) = ๐‘ƒ (๐ด)๐‘ƒ (๐ต)๐‘ƒ (๐ถ)
(2.20)
and in addition,
Definition 2.4. The events ๐ด1, ๐ด2, ..., ๐ด๐‘› are independent if any subset of ๐‘˜ < ๐‘› of these events are
independent, and in addition
๐‘ƒ (๐ด1, ๐ด2, ..., ๐ด๐‘› ) = ๐‘ƒ (๐ด1 )๐‘ƒ (๐ด2 )...๐‘ƒ (๐ด๐‘› )
(2.21)
There are basically two ways in which we can use the idea of independence. We can compute
joint or conditional probabilities and apply one of the definitions as a test for independence.
Alternatively, we can assume independence and use the definitions to compute joint or conditional
probabilities that otherwise may be difficult to find. The latter approach is used extensively in
engineering applications. For example, certain types of noise signals can be modeled in this
way. Suppose we have some time waveform ๐‘‹ (๐‘ก) which represents a noisy signal that we wish
to sample at various points in time, ๐‘ก 1, ๐‘ก 2, ..., ๐‘ก๐‘› . Perhaps we are interested in the probabilities
that these samples might exceed some threshold, so we define the events ๐ด๐‘– = {๐‘‹ (๐‘ก๐‘– ) > ๐‘‡ },
๐‘– = 1, 2, ..., ๐‘›. In some cases, we can assume that the value of the noise at one point in time does
not affect the value of the noise at another point in time. Hence, we assume that these events are
independent and therefore ๐‘ƒ (๐ด1, ๐ด2, ..., ๐ด๐‘› ) = ๐‘ƒ (๐ด1 )๐‘ƒ (๐ด2 )...๐‘ƒ (๐ด๐‘› ).
2.5 Basic Combinatorics
In many situations, the probability of each possible outcome of an experiment is taken to be
equally likely. The card drawing and dice rolling examples can fall into this category, where
finding the probability of a certain event ๐ด can be obtained by counting.
๐‘ƒ (๐ด) =
Number of outcomes in A
Number of outcomes in entire sample space
(2.22)
Sometimes, when the scope of the experiment is fairly small, it is straightforward to count the
number of outcomes. On the other hand, for problems where the experiment is fairly complicated,
the number of outcomes involved can quickly become astronomical, and the corresponding
exercise in counting can be quite daunting. In this section, we present some fairly simple tools
that are helpful for counting the number of outcomes in a variety of commonly encountered
situations.
2.5.1 Sequence of Experiments
Suppose a combined experiment (๐ธ = ๐ธ 1 ×๐ธ 2 ×๐ธ 3 ×...×๐ธ๐‘˜ ) is performed where the first experiment
๐ธ 1 has ๐‘› 1 possible outcomes, followed by a second experiment ๐ธ 2 which has ๐‘› 2 possible outcomes
and so on. A sequence of ๐‘˜ such experiments thus has
๐‘› = ๐‘› 1๐‘› 2 ...๐‘›๐‘˜ =
๐‘˜
Ö
๐‘›๐‘–
(2.23)
๐‘–=1
possible outcomes. This result allows us to quickly calculate the number of sample points in a
sequence of experiments.
Example 2.7
How many odd two digit numbers can be formed from the digits 2, 7, 8, and 9, if each digit
can be used only once?
Solution. As the first experiment, there are two ways of selecting a number for the unit’s
place (either 7 or 9). In each case of the first experiment, there are three ways of selecting a
number for the ten’s place in the second experiment, excluding the digit used for the unit’s
place. The number of outcomes in the combined experiment is therefore 2 × 3 = 6.
Example 2.8
An analog-to-digital converter outputs an 8-bit word to represent an input analog voltage
in the range −5 to +5 V. Determine the total number of words possible and the maximum
sampling (quantization) error.
Solution. Since each bit (or binary digit) in a computer word is either a one or a zero, and
there are 8 bits, then the total number of computer words is ๐‘› = 28 = 256. To determine
the maximum sampling error, first compute the range of voltage assigned to each computer
word which equals 10 V/256 words = 0.0390625 V/word and then divide by two (i.e. round
off to the nearest level), which yields a maximum error of 0.0195312 V/word.
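The same arithmetic as a short sketch (the variable names are arbitrary):

    bits = 8
    v_min, v_max = -5.0, 5.0

    n_words = 2 ** bits                      # 256 distinct 8-bit words
    step = (v_max - v_min) / n_words         # voltage range per word ~ 0.0390625 V
    max_error = step / 2                     # rounding to the nearest level ~ 0.0195312 V
    print(n_words, step, max_error)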
2.5.2 Sampling with Replacement and with Ordering
Suppose we choose ๐‘˜ objects in an order from a set ๐ด with ๐‘› distinct objects, in a way that after
selecting each object and noting its identity in an ordered list, we place it back in the set before
the next choice is made. Therefore the same choice can be repeated. We will refer to the set ๐ด as
the “population.” The experiment produces an ordered ๐‘˜−tuple
(๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ๐‘˜ )
where ๐‘ฅ๐‘– ∈ ๐ด and ๐‘– = 1, 2, ..., ๐‘˜. Equation 2.23 with ๐‘› 1 = ๐‘› 2 = ... = ๐‘›๐‘˜ = ๐‘› implies that number of
distinct ordered ๐‘˜−tuples = ๐‘›๐‘˜ .
Example 2.9
How many k-digit binary numbers are there?
Solution. There are 2๐‘˜ different binary numbers. Note that the digits are "ordered", and
repeated 0 and 1 digits are possible.
Example 2.10
An urn contains five balls numbered 1 to 5. Suppose we select two balls from the urn with
replacement. How many distinct ordered pairs are possible? What is the probability that
the two draws yield the same number?
Solution. The number of ordered pairs is 5^2 = 25. Figure 2.2 shows the 25 possible pairs.
Five of the 25 outcomes have the two draws with the same number; if we suppose that all
pairs are equiprobable, then the probability that the two draws yield the same number is
5/25 = 0.2.
Figure 2.2: Possible outcomes in sampling with replacement and with ordering of two balls
from an urn containing five distinct balls
2.5.3 Sampling without Replacement and with Ordering
Suppose we choose ๐‘˜ objects in an order from a population ๐ด of ๐‘› distinct objects, without
replacement. Clearly ๐‘˜ ≤ ๐‘›. The number of possible outcomes in the first draw is ๐‘› 1 = ๐‘›; the
number of possible outcomes in the second draw is ๐‘› 2 = ๐‘› − 1, namely all ๐‘› objects except the
one selected in the first draw; and so on, up to ๐‘›๐‘˜ = ๐‘› − (๐‘˜ − 1) in the final draw. The number of
distinct ordered ๐‘˜−tuples is:
๐‘ƒ๐‘˜๐‘› = ๐‘›(๐‘› − 1)...(๐‘› − ๐‘˜ + 1) =
๐‘›!
(๐‘› − ๐‘˜)!
(2.24)
The quantity ๐‘ƒ๐‘˜๐‘› is also called the number of permutations of ๐‘› things taken ๐‘˜ at a time, or
๐‘˜−permutations.
Example 2.11
An urn contains five balls numbered 1 to 5. Suppose we select two balls in succession
without replacement. How many distinct ordered pairs are possible? What is the probability
that the first ball has a number larger than that of the second ball?
Solution. Equation 2.24 states that the number of ordered pairs is 5 × 4 = 20, as shown in
figure 2.3. Ten ordered pairs (in the dashed triangle) have the first number larger than the
second number ; thus the probability of this event is 10/20 = 0.5.
Figure 2.3: Possible outcomes in sampling without replacement and with ordering.
Example 2.12
An urn contains five balls numbered 1 to 5. Suppose we draw three balls with replacement.
What is the probability that all three balls are different?
Solution. From Equation 2.23 there are 5^3 = 125 possible outcomes, which we will suppose
are equiprobable. The number of these outcomes for which the three draws are different is
given by Equation 2.24: 5 × 4 × 3 = 60. Thus the probability that all three balls are different
is 60/125 = 0.48.
In many problems of interest, we seek to find the number of different ways that we can rearrange
or order several items. The number of permutations can easily be determined from equation 2.24
and is given as follows. Consider drawing ๐‘› objects from an urn containing ๐‘› distinct objects until
the urn is empty, i.e. sampling without replacement with ๐‘˜ = ๐‘›. Thus, the number of possible
orderings, i.e. permutations of ๐‘› distinct objects is:
number of permutations of ๐‘› objects = ๐‘›(๐‘› − 1)...(2) (1) = ๐‘›!
(2.25)
2.5.4 Sampling without Replacement and without Ordering
Suppose we pick $k$ objects from a set of $n$ distinct objects without replacement and that we record the result without regard to order. (You can imagine that you have no record of the order in which the selection was done.) We call the resulting subset of $k$ selected objects a combination of size $k$. The number of different combinations of size $k$ from a set of size $n$ ($k \le n$) is:
$$C_k^n = \binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k!} = \frac{n!}{(n-k)!\,k!} \qquad (2.26)$$
The expression $\binom{n}{k}$ is also called a binomial coefficient and is read “n choose k.” Note that choosing $k$ objects out of a set of $n$ is equivalent to choosing the objects that are to be left out, since
$$C_k^n = C_{n-k}^n \qquad (2.27)$$
Note that from Equation 2.25, there are ๐‘˜! possible orders in which the ๐‘˜ selected objects could
have been selected. Thus in the case of ๐‘˜−permutations ๐‘ƒ๐‘˜๐‘› , the total number of distinct ordered
samples of $k$ objects is:
$$P_k^n = C_k^n\,k! \qquad (2.28)$$
Example 2.13
Find the number of ways of selecting two balls from five balls numbered 1 to 5, without
replacement and without regard to order.
Solution. From Equation 2.26:
$$\binom{5}{2} = \frac{5!}{2!\,3!} = 10$$
Figure 2.4 shows the 10 pairs.
Figure 2.4: Possible outcomes in sampling without replacement and without ordering.
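The same count can be checked programmatically; the sketch below (Python, standard library only, not part of the original notes) enumerates the combinations of Example 2.13.

```python
# Illustrative sketch for Example 2.13: sampling without replacement and without ordering.
from itertools import combinations
from math import comb

pairs = list(combinations(range(1, 6), 2))  # unordered pairs, no repetition
print(len(pairs))   # 10, the pairs shown in Figure 2.4
print(comb(5, 2))   # 10, the binomial coefficient "5 choose 2"
```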
Example 2.14
Find the number of distinct permutations of 2 white balls and 3 black balls.
Solution. This problem is equivalent to the sampling problem: Assume 5 possible positions
for the balls, then pick a combination of 2 positions out of 5 and arrange the 2 white balls
accordingly. Each combination leads to a distinct arrangement (permutation) of 2 white
balls and 3 black balls. Thus the number of distinct permutations of 2 white balls and 3 black
balls is $C_2^5 = 10$. The 10 distinct permutations with 2 whites (zeros) and 3 blacks (ones) are:
00111 01011 01101 01110 10011 10101 10110 11001 11010 11100. Note that the position of
whites (zeros) can be represented by the pair of numbers on the two selected balls in figure
2.4.
Example 2.14 shows that sampling without replacement and without ordering is equivalent to
partitioning the set of ๐‘› distinct objects into two sets: ๐ต, containing the ๐‘˜ items that are picked
from the urn, and $B^c$ containing the $n - k$ left behind. Suppose we partition a set of $n$ distinct objects into $J$ subsets $B_1, B_2, \ldots, B_J$, where $B_j$ is assigned $k_j$ elements and $k_1 + k_2 + \cdots + k_J = n$. It can be shown that the number of distinct partitions is:
$$\frac{n!}{k_1!\,k_2!\cdots k_J!} \qquad (2.29)$$
which is called the multinomial coefficient. The binomial coefficient is a special case of the
multinomial coefficient where ๐ฝ = 2.
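A small helper makes Equation 2.29 concrete; the function below is an illustrative sketch (not from the notes) that evaluates the multinomial coefficient for any list of subset sizes.

```python
# Illustrative sketch: evaluate the multinomial coefficient n!/(k_1! k_2! ... k_J!).
from math import factorial

def multinomial(*ks):
    n = sum(ks)                  # total number of distinct objects being partitioned
    coeff = factorial(n)
    for k in ks:
        coeff //= factorial(k)   # divide out the orderings within each subset
    return coeff

print(multinomial(2, 3))      # 10, the binomial case J = 2 (Example 2.14)
print(multinomial(2, 2, 1))   # 30, partitioning 5 objects into subsets of sizes 2, 2 and 1
```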
2.5.5 Sampling with Replacement and without Ordering
Suppose we pick ๐‘˜ objects from a set of ๐‘› distinct objects with replacement and we record the
result without regard to order. This can be done by filling out a form which has ๐‘› columns, one
for each distinct object. Each time an object is selected, an “×” is placed in the corresponding
column. For example, if we are picking 5 objects from 4 distinct objects, one possible form would
look like this:
Object 1
××
Object 2
Object 3
×
Object 4
××
Note that this form can be summarized by the sequence ×× | | × | ×× where the "|" s indicate the
lines between columns, and where nothing appears between consecutive |s if the corresponding
object was not selected. Each different arrangement of 5 ×s and 3 |s leads to a distinct form. If
we identify ×s with “white balls” and |s with “black balls,” then this problem becomes similar to Example 2.14, and the number of different arrangements is given by $\binom{8}{3}$. In the general case
the form will involve ๐‘˜ ×s and (๐‘› − 1) |s. Thus the number of different ways of picking ๐‘˜ objects
from a set of $n$ distinct objects with replacement and without ordering is given by:
$$\binom{n-1+k}{k} = \binom{n-1+k}{n-1} \qquad (2.30)$$
Example 2.15
Find the number of ways of selecting two balls from five balls numbered 1 to 5, with replacement but without regard to order.
Solution. From Equation 2.30:
$$\binom{5-1+2}{2} = \frac{6!}{2!\,4!} = 15$$
Figure 2.5 shows the 15 pairs. Note that because of the replacement after each selection, the
same ball can be selected twice for each pair.
Figure 2.5: Possible outcomes in sampling with replacement and without ordering.
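Equation 2.30 can also be checked by direct enumeration; the following illustrative Python sketch (standard library only, not part of the original notes) reproduces the 15 pairs of Example 2.15.

```python
# Illustrative sketch for Example 2.15: sampling with replacement and without ordering.
from itertools import combinations_with_replacement
from math import comb

pairs = list(combinations_with_replacement(range(1, 6), 2))
print(len(pairs))          # 15
print(comb(5 - 1 + 2, 2))  # 15, matching Equation 2.30
```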
Further Reading
1. John D. Enderle, David C. Farden, Daniel J. Krause, Basic Probability Theory for Biomedical
Engineers, Morgan & Claypool, 2006: sections 1.2.3 to 1.9
2. Scott L. Miller, Donald Childers, Probability and random processes: with applications to
signal processing and communications, 2nd ed., Elsevier 2012: section 2.2 to 2.7
3. Alberto Leon-Garcia, Probability, statistics, and random processes for electrical engineering,
3rd ed. Pearson, 2007: sections 2.2 to 2.6
3 Random Variables
In most random experiments, we are interested in a numerical attribute of the outcome of the
experiment. A random variable is defined as a function that assigns a numerical value to the
outcome of the experiment.
3.1 The Notion of a Random Variable
The outcome of a random experiment need not be a number. However, we are usually interested
not in the outcome itself, but rather in some measurement or numerical attribute of the outcome.
For example, in ๐‘› tosses of a coin, we may be interested in the total number of heads and not
in the specific order in which heads and tails occur. In a randomly selected Web document, we
may be interested only in the length of the document. In each of these examples, a measurement
assigns a numerical value to the outcome of the random experiment. Since the outcomes are
random, the results of the measurements will also be random. Hence it makes sense to talk about
the probabilities of the resulting numerical values.
Definition 3.1. Random variable: A random variable is a real valued function of the elements
of a sample space, ๐‘†. A random variable ๐‘‹ is a function that assigns a real number, ๐‘‹ (๐œ ), to each
outcome ๐œ in the sample space, ๐‘†, of a random experiment, ๐ธ. If the mapping ๐‘‹ (๐œ ) is such that the
random variable ๐‘‹ takes on a finite or countably infinite number of values, then we refer to ๐‘‹ as
a discrete random variable; whereas, if the range of ๐‘‹ (๐œ ) is an uncountably infinite number of
points, we refer to ๐‘‹ as a continuous random variable.
Figure 3.1 illustrates how a random variable assigns a number to an outcome in the sample space.
The sample space ๐‘† is the domain of the random variable, and the set ๐‘†๐‘ฅ of all values taken on by
๐‘‹ is the range of the random variable. Thus ๐‘†๐‘ฅ is a subset of the set of all real numbers. We will
use the capital letters (๐‘‹ , ๐‘Œ , etc.) to denote random variables, and lower case letters (๐‘ฅ, ๐‘ฆ, etc.) to
denote possible values of the random variables.
Figure 3.1: A random variable assigns a number ๐‘‹ (๐œ ) to each outcome ๐œ in the sample space ๐‘† of
a random experiment.
Since ๐‘‹ (๐œ ) is a random variable whose numerical value depends on the outcome of an experiment,
we cannot describe the random variable by stating its value; rather, we describe the probabilities
that the variable takes on a specific value or values (e.g. ๐‘ƒ (๐‘‹ = 3) or ๐‘ƒ (๐‘‹ > 8)).
Example 3.1
A coin is tossed three times and the sequence of heads and tails is noted. The sample space
for this experiment is ๐‘† ={ HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. (a) Let ๐‘‹ be the
number of heads in the three tosses. Find the random variable ๐‘‹ (๐œ ) for each outcome ๐œ . (b)
Now find the probability of the event {๐‘‹ = 2}.
Solution. (a) ๐‘‹ assigns each outcome ๐œ in ๐‘† a number from the set ๐‘†๐‘ฅ = {0, 1, 2, 3}. The table
below lists the eight outcomes of $S$ and the corresponding values of $X$:

ζ:      HHH   HHT   HTH   THH   HTT   THT   TTH   TTT
X(ζ):    3     2     2     2     1     1     1     0
(b) Note that ๐‘‹ (๐œ ) = 2 if and only if ๐œ is in { HHT, HTH, THH}, therefore:
๐‘ƒ (๐‘‹ = 2) = ๐‘ƒ ({ HHT, HTH, THH})
= ๐‘ƒ ({HHT}) + ๐‘ƒ ({HTH}) + ๐‘ƒ ({THH})
= 3/8
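The mapping used in Example 3.1 is easy to reproduce in code; the sketch below (Python, not part of the original notes) builds X(ζ) for every outcome and computes P(X = 2) assuming all eight outcomes are equiprobable.

```python
# Illustrative sketch for Example 3.1: a random variable as a function on the sample space.
from itertools import product

S = ["".join(t) for t in product("HT", repeat=3)]   # sample space of the 8 outcomes
X = {zeta: zeta.count("H") for zeta in S}           # X(zeta) = number of heads

A = [zeta for zeta in S if X[zeta] == 2]            # equivalent event {zeta : X(zeta) = 2}
print(A)                 # ['HHT', 'HTH', 'THH']
print(len(A) / len(S))   # 0.375 = 3/8
```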
Example 3.1 shows a general technique for finding the probabilities of events involving the random
variable ๐‘‹ . Let the underlying random experiment have sample space ๐‘†. To find the probability
of a subset ๐ต of ๐‘…, e.g., ๐ต = {๐‘ฅ๐‘˜ }, we need to find the outcomes in ๐‘† that are mapped to ๐ต, i.e.:
๐ด = {๐œ : ๐‘‹ (๐œ ) ∈ ๐ต}
(3.1)
As shown in figure 3.2. If event ๐ด occurs then ๐‘‹ (๐œ ) ∈ ๐ต, so event ๐ต occurs. Conversely, if event ๐ต
occurs, then the value ๐‘‹ (๐œ ) implies that ๐œ is in ๐ด, so event ๐ด occurs. Thus the probability that ๐‘‹
is in ๐ต is given by:
๐‘ƒ (๐‘‹ ∈ ๐ต) = ๐‘ƒ (๐ด) = ๐‘ƒ ({๐œ : ๐‘‹ (๐œ ) ∈ ๐ต})
(3.2)
We refer to ๐ด and ๐ต as equivalent events. In some random experiments the outcome ๐œ is already
the numerical value we are interested in. In such cases we simply let ๐‘‹ (๐œ ) = ๐œ that is, the identity
function, to obtain a random variable.
Figure 3.2: An illustration of ๐‘ƒ (๐‘‹ ∈ ๐ต) = ๐‘ƒ (๐œ ∈ ๐ด).
3.2 Discrete Random Variables
Definition 3.2. Discrete random variable: A random variable ๐‘‹ that assumes values from a
countable set, that is, ๐‘†๐‘ฅ = {๐‘ฅ 1, ๐‘ฅ 2, ๐‘ฅ 3, ...}.
A discrete random variable is said to be finite if its range is finite, that is, ๐‘†๐‘ฅ = {๐‘ฅ 1, ๐‘ฅ 2, ๐‘ฅ 3, ..., ๐‘ฅ๐‘› }.
We are interested in finding the probabilities of events involving a discrete random variable
๐‘‹ . Since the sample space is discrete, we only need to obtain the probabilities for the events
๐ด๐‘˜ = {๐œ : ๐‘‹ (๐œ ) = ๐‘ฅ๐‘˜ } in the underlying random experiment. The probabilities of all events
involving ๐‘‹ can be found from the probabilities of the ๐ด๐‘˜ s.
3.2.1 Probability Mass Function
Definition 3.3. Probability mass function: The probability mass function (PMF), ๐‘ƒ๐‘‹ (๐‘ฅ), of a
random variable, ๐‘‹ , is a function that assigns a probability to each possible value of the random
variable.
The probability that the random variable ๐‘‹ takes on the specific value ๐‘ฅ is the value of the
probability mass function for $x$. That is,
$$P_X(x) = P(X = x) = P(\{\zeta : X(\zeta) = x\}) \qquad \text{for } x \text{ a real number} \qquad (3.3)$$
Note that we use the convention that upper case variables represent random variables while lower
case variables represent fixed values that the random variable can assume. The PMF satisfies the
following properties that provide all the information required to calculate probabilities for events
involving the discrete random variable ๐‘‹ :
(i) $P_X(x) \ge 0$ for all $x$
(ii) $\sum_{x \in S_X} P_X(x) = \sum_k P_X(x_k) = \sum_k P(A_k) = 1$
(iii) $P(X \in B) = \sum_{x \in B} P_X(x)$ where $B \subset S_X$
Example 3.2
Let ๐‘‹ be the number of heads in three independent tosses of a fair coin. Find the PMF of ๐‘‹ .
Solution. As seen in Example 3.1:
๐‘ƒ๐‘‹ (0) = ๐‘ƒ (๐‘‹ = 0) = ๐‘ƒ ({TTT}) = 1/8
๐‘ƒ๐‘‹ (1) = ๐‘ƒ (๐‘‹ = 1) = ๐‘ƒ ({HTT}) + ๐‘ƒ ({THT}) + ๐‘ƒ ({TTH}) = 3/8
๐‘ƒ๐‘‹ (2) = ๐‘ƒ (๐‘‹ = 2) = ๐‘ƒ ({HHT}) + ๐‘ƒ ({HTH}) + ๐‘ƒ ({THH}) = 3/8
๐‘ƒ๐‘‹ (3) = ๐‘ƒ (๐‘‹ = 3) = ๐‘ƒ ({HHH}) = 1/8
Note that: ๐‘ƒ๐‘‹ (0) + ๐‘ƒ๐‘‹ (1) + ๐‘ƒ๐‘‹ (2) + ๐‘ƒ๐‘‹ (3) = 1
Figure 3.3 shows the graph of $P_X(x)$ versus $x$ for the random variable in this example.
Generally the graph of the PMF of a discrete random variable has vertical arrows of height ๐‘ƒ๐‘‹ (๐‘ฅ๐‘˜ )
at the values ๐‘ฅ๐‘˜ in ๐‘†๐‘ฅ . The relative values of PMF at different points give an indication of the
relative likelihoods of occurrence.
Finally, let’s consider the relationship between relative frequencies and the PMF. Suppose we
perform ๐‘› independent repetitions to obtain ๐‘› observations of the discrete random variable ๐‘‹ .
Let ๐‘๐‘˜ (๐‘›) be the number of times the event ๐‘‹ = ๐‘ฅ๐‘˜ occurs and let ๐‘“๐‘˜ (๐‘›) = ๐‘๐‘˜ (๐‘›)/๐‘› be the
corresponding relative frequency. As ๐‘› becomes large we expect that ๐‘“๐‘˜ (๐‘›) → ๐‘ƒ๐‘‹ (๐‘ฅ๐‘˜ ). Therefore
the graph of relative frequencies should approach the graph of the PMF. For the experiment in
Example 3.2, 1000 repetitions of an experiment of tossing a coin may generate a graph of relative
frequencies shown in Figure 3.3.
Figure 3.3: Relative frequencies and corresponding PMF for the experiment in Example 3.2
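The convergence of relative frequencies to the PMF can be illustrated with a short simulation; the following sketch (Python, standard library only, arbitrary seed, not part of the original notes) repeats the three-toss experiment 1000 times.

```python
# Illustrative simulation: relative frequencies f_k(n) versus the PMF of Example 3.2.
import random
from collections import Counter

random.seed(0)
n = 1000
counts = Counter(sum(random.randint(0, 1) for _ in range(3)) for _ in range(n))

pmf = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
for k in range(4):
    print(k, counts[k] / n, pmf[k])   # relative frequency should be close to P_X(k)
```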
3.2.2 The Cumulative Distribution Function
The PMF of a discrete random variable was defined in terms of events of the form {๐‘‹ = ๐‘}. The
cumulative distribution function is an alternative approach which uses events of the form {๐‘‹ ≤ ๐‘}.
The cumulative distribution function has the advantage that it is not limited to discrete random
variables and applies to all types of random variables.
Definition 3.4. Cumulative distribution function: The cumulative distribution function (CDF)
of a random variable ๐‘‹ is defined as the probability of the event {๐‘‹ ≤ ๐‘ฅ }:
๐น๐‘‹ (๐‘ฅ) = ๐‘ƒ (๐‘‹ ≤ ๐‘ฅ)
for −∞ < ๐‘ฅ < +∞
(3.4)
In other words, the CDF is the probability that the random variable ๐‘‹ takes on a value in the
set (−∞, ๐‘ฅ]. In terms of the underlying sample space, the CDF is the probability of the event
$\{\zeta : X(\zeta) \le x\}$. The event $\{X \le x\}$ and its probability vary as $x$ is varied; thus $F_X(x)$ is a function of the variable $x$.
From the definition of CDF, the following property can be derived:
๐‘ƒ (๐‘‹ > ๐‘ฅ) = 1 − ๐น๐‘‹ (๐‘ฅ)
(3.5)
The CDF has the following interpretation in terms of relative frequency. Suppose that the
experiment that yields the outcome ๐œ and hence ๐‘‹ (๐œ ) is performed a large number of times.
๐น๐‘‹ (๐‘) is then the long-term proportion of times in which ๐‘‹ (๐œ ) ≤ ๐‘.
Like the PMF, the CDF summarizes the probabilistic properties of a random variable. Knowledge
of either of them allows the other function to be calculated. For example, suppose that the PMF is
known. The CDF can then be calculated from the expression:
$$F_X(x) = \sum_{y \le x} P(X = y) = \sum_{y \le x} P_X(y) \qquad (3.6)$$
In other words, the value of $F_X(x)$ is constructed by simply adding together the probabilities $P_X(y)$ for values $y$ that are no larger than $x$. Note that:
๐‘ƒ (๐‘Ž < ๐‘‹ ≤ ๐‘) = ๐น๐‘‹ (๐‘) − ๐น๐‘‹ (๐‘Ž)
(3.7)
The CDF is an increasing step function with steps at the values taken by the random variable.
The heights of the steps are the probabilities of taking these values. Mathematically, the PMF can
be obtained from the CDF through the relationship:
๐‘ƒ๐‘‹ (๐‘ฅ) = ๐น๐‘‹ (๐‘ฅ) − ๐น๐‘‹ (๐‘ฅ − )
(3.8)
where ๐น๐‘‹ (๐‘ฅ − ) is the limiting value from below of the cumulative distribution function. If there is
no step in the cumulative distribution function at a point ๐‘ฅ, then ๐น๐‘‹ (๐‘ฅ) = ๐น๐‘‹ (๐‘ฅ − ) and ๐‘ƒ๐‘‹ (๐‘ฅ) = 0.
If there is a step at a point ๐‘ฅ, then ๐น๐‘‹ (๐‘ฅ) is the value of the CDF at the top of the step, and ๐น๐‘‹ (๐‘ฅ − )
is the value of the CDF at the bottom of the step, so that ๐‘ƒ๐‘‹ (๐‘ฅ) is the height of the step. These
relationships are illustrated in the following example.
Example 3.3
Similar to Example 3.2, let ๐‘‹ be the number of heads in three tosses of a fair coin. Find the
CDF of X.
Solution. From Example 3.2, we know that ๐‘‹ takes on only the values 0, 1, 2, and 3 with
probabilities 1/8, 3/8, 3/8, and 1/8, respectively, so ๐น๐‘‹ (๐‘ฅ) is simply the sum of the probabilities of the outcomes from {0, 1, 2, 3} that are less than or equal to ๐‘ฅ. The resulting CDF is a
non-decreasing staircase function that grows from 0 to 1. It has jumps at the points 0, 1, 2, 3
of magnitudes 1/8, 3/8, 3/8, and 1/8, respectively.
Let us take a closer look at one of these discontinuities, say, in the vicinity of ๐‘ฅ = 1. For a
small positive number ๐›ฟ, we have:
๐น๐‘‹ (1− ) = ๐น๐‘‹ (1 − ๐›ฟ) = ๐‘ƒ (๐‘‹ ≤ 1 − ๐›ฟ) = ๐‘ƒ ({0 heads}) = 1/8
so the limit of the CDF as ๐‘ฅ approaches 1 from the left is 1/8. However,
๐น๐‘‹ (1) = ๐‘ƒ (๐‘‹ ≤ 1) = ๐‘ƒ ({0 or 1 heads}) = 1/8 + 3/8 = 1/2
Thus the CDF is continuous from the right and equal to 1/2 at the point ๐‘ฅ = 1. Indeed, we
note the magnitude of the step at the point ๐‘ฅ = 1 is ๐‘ƒ (๐‘‹ = 1) = 1/2 − 1/8 = 3/8. The CDF
can be written compactly in terms of the unit step function:
$$F_X(x) = \tfrac{1}{8}u(x) + \tfrac{3}{8}u(x-1) + \tfrac{3}{8}u(x-2) + \tfrac{1}{8}u(x-3)$$
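Written this way, the CDF is straightforward to evaluate numerically; the snippet below is an illustrative sketch (not from the notes) of the staircase CDF of Example 3.3.

```python
# Illustrative sketch: the staircase CDF of Example 3.3 via the unit step function.
def u(x):
    return 1.0 if x >= 0 else 0.0   # right-continuous unit step

def F_X(x):
    return (1/8)*u(x) + (3/8)*u(x - 1) + (3/8)*u(x - 2) + (1/8)*u(x - 3)

print(F_X(0.99))  # 0.125, the limit from the left at x = 1
print(F_X(1.0))   # 0.5, so the jump at x = 1 has height P(X = 1) = 3/8
```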
3.2.3 Expected Value and Moments
Expected Value
In some situations we are interested in a few parameters that summarize the information provided
by the PMF. For example, Figure 3.4 shows the results of many repetitions of an experiment that
produces two random variables. It can be observed that the random variable ๐‘Œ varies about the
value 0, whereas the random variable ๐‘‹ varies around the value 5. It is also clear that ๐‘‹ is more
spread out than ๐‘Œ . We may just need some parameters that quantify these properties.
Figure 3.4: The graphs show 150 repetitions of the experiments yielding ๐‘‹ and ๐‘Œ . It is clear that
๐‘‹ is centered about the value 5 while ๐‘Œ is centered about 0. It is also clear that ๐‘‹ is
more spread out than ๐‘Œ (Taken from Alberto Leon-Garcia, Probability, statistics, and
random processes for electrical engineering,3rd ed. Pearson, 2007).
Definition 3.5. Expected value: The expected value or expectation or mean of a discrete random
variable $X$, with a probability mass function $P_X(x)$, is defined by:
$$m_X = E[X] = \sum_k x_k P_X(x_k) \qquad (3.9)$$
$E[X]$ provides a summary measure of the average value taken by the random variable and is also known as the mean of the random variable. The expected value $E[X]$ is defined if the above sum converges absolutely, that is:
$$E[|X|] = \sum_k |x_k|\,P_X(x_k) < \infty \qquad (3.10)$$
otherwise the expected value does not exist.
Random variables with unbounded expected value are not uncommon and appear in models
where outcomes that have extremely large values are not that rare. Examples include the sizes
of files in Web transfers, frequencies of words in large bodies of text, and various financial and
economic problems.
If we view ๐‘ƒ๐‘‹ (๐‘ฅ) as the distribution of mass on the points ๐‘ฅ 1, ๐‘ฅ 2, ... on the real line, then ๐ธ [๐‘‹ ]
represents the center of mass of this distribution.
Example 3.4
Revisiting Example 3.1, let ๐‘‹ be the number of heads in three tosses of a fair coin. Find
๐ธ [๐‘‹ ].
Solution. From Example 3.2 and the PMF of $X$:
$$E[X] = \sum_{k=0}^{3} k\,P_X(k) = 0(1/8) + 1(3/8) + 2(3/8) + 3(1/8) = 1.5$$
The use of the term “expected value” does not mean that we expect to observe ๐ธ [๐‘‹ ] when we
perform the experiment that generates ๐‘‹ . For example, the expected value of the number of heads
in Example 3.4 is 1.5, but its outcomes can only be 0, 1, 2 or 3.
๐ธ [๐‘‹ ] can be explained as an average of ๐‘‹ in a large number of observations of ๐‘‹ . Suppose we
perform ๐‘› independent repetitions of the experiment that generates ๐‘‹ , and we record the observed
values as ๐‘ฅ (1), ๐‘ฅ (2), ..., ๐‘ฅ (๐‘›), where ๐‘ฅ ( ๐‘—) is the observation in the ๐‘— ๐‘ก โ„Ž experiment. Let ๐‘๐‘˜ (๐‘›) be
the number of times ๐‘ฅ๐‘˜ is observed (๐‘˜ = 1, 2, ..., ๐พ), and let ๐‘“๐‘˜ (๐‘›) = ๐‘๐‘˜ (๐‘›)/๐‘› be the corresponding
relative frequency. The arithmetic average, or sample mean of the observations, is:
$$\langle X \rangle_n = \frac{x(1) + x(2) + \cdots + x(n)}{n} = \frac{x_1 N_1(n) + x_2 N_2(n) + \cdots + x_K N_K(n)}{n} \qquad (3.11)$$
$$= x_1 f_1(n) + x_2 f_2(n) + \cdots + x_K f_K(n) \qquad (3.12)$$
$$= \sum_k x_k f_k(n) \qquad (3.13)$$
The first numerator adds the observations in the order in which they occur, and the second
numerator counts how many times each ๐‘ฅ๐‘˜ occurs and then computes the total. As ๐‘› becomes
large, we expect relative frequencies to approach the probabilities ๐‘ƒ๐‘‹ (๐‘ฅ๐‘˜ ):
$$\lim_{n \to \infty} f_k(n) = P_X(x_k) \qquad \text{for all } k \qquad (3.14)$$
Equation 3.13 then implies that:
$$\langle X \rangle_n = \sum_k x_k f_k(n) \to \sum_k x_k P_X(x_k) = E[X] \qquad (3.15)$$
Thus we expect the sample mean to converge to ๐ธ [๐‘‹ ] as ๐‘› becomes large.
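This convergence is easy to observe in a simulation; the sketch below (Python, illustrative only, arbitrary seed) computes the sample mean of the number of heads in three fair tosses for increasing n.

```python
# Illustrative simulation: the sample mean <X>_n approaches E[X] = 1.5 as n grows.
import random

random.seed(1)
for n in (10, 100, 10_000, 100_000):
    total = sum(sum(random.randint(0, 1) for _ in range(3)) for _ in range(n))
    print(n, total / n)   # sample mean; tends to 1.5
```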
We can also easily find the expected value of functions of a random variable. Let ๐‘‹ be a discrete
random variable, and let $Z = g(X)$. Since $X$ is discrete, $Z = g(X)$ will assume a countable set of
values of the form ๐‘”(๐‘ฅ๐‘˜ ) where ๐‘ฅ๐‘˜ ∈ ๐‘†๐‘‹ . One way to find the expectation of ๐‘ is to use Equation
3.9, which requires that we first find the PMF of $Z$. Another way is to use the following:
$$E[Z] = E[g(X)] = \sum_k g(x_k)\,P_X(x_k) \qquad (3.16)$$
Let ๐‘ be the function:
๐‘ = ๐‘Ž๐‘”(๐‘‹ ) + ๐‘โ„Ž(๐‘‹ ) + ๐‘
where ๐‘Ž, ๐‘, and ๐‘ are real numbers, then:
๐ธ [๐‘ ] = ๐‘Ž๐ธ [๐‘”(๐‘‹ )] + ๐‘๐ธ [โ„Ž(๐‘‹ )] + ๐‘
(3.17)
๐ธ [๐‘”(๐‘‹ ) + โ„Ž(๐‘‹ )] = ๐ธ [๐‘”(๐‘‹ )] + ๐ธ [โ„Ž(๐‘‹ )]
(3.18)
๐ธ [๐‘Ž๐‘‹ ] = ๐‘Ž๐ธ [๐‘‹ ]
(3.19)
๐ธ [๐‘‹ + ๐‘] = ๐ธ [๐‘‹ ] + ๐‘
(3.20)
๐ธ [๐‘] = ๐‘
(3.21)
It further implies that:
Variance of a Random Variable
We usually need more information about $X$ than the expected value $E[X]$ provides. For example,
if we know that ๐ธ [๐‘‹ ] = 0 then it could be that ๐‘‹ is zero all the time, or it takes on extremely
large positive and negative values. We are therefore interested not only in the mean of a random
variable, but also in the extent of the random variable’s variation about its mean. Let the deviation
of the random variable ๐‘‹ about its mean be ๐‘‹ − ๐ธ [๐‘‹ ] which can take on positive and negative
values. Since we are interested in the magnitude of the variations only, it is convenient to work
with the square of the deviation, which is always positive, (๐‘‹ − ๐ธ [๐‘‹ ]) 2 .
Definition 3.6. Variance: The variance of the random variable $X$ is defined as:
$$\sigma_X^2 = VAR[X] = E[(X - m_X)^2] = \sum_{x \in S_X} (x - m_X)^2\,P_X(x) \qquad (3.22)$$
The variance is a positive quantity that measures the spread of the distribution of the random
variable about its mean value. Larger values of the variance indicate that the distribution is more
spread out. For example in Figure 3.4, ๐‘‹ has a larger variance than ๐‘Œ .
Definition 3.7. Standard deviation: The standard deviation of the random variable ๐‘‹ is defined
by:
๐œŽ๐‘‹ = ๐‘†๐‘‡ ๐ท (๐‘‹ ) = ๐‘‰ ๐ด๐‘… [๐‘‹ ] 1/2
(3.23)
By taking the square root of the variance, we obtain a quantity with the same units as ๐‘‹ .
An alternative expression for the variance can be obtained as follows:
$$VAR[X] = E[(X - m_X)^2] = E[X^2 - 2m_X X + m_X^2] \qquad (3.24)$$
$$= E[X^2] - 2m_X E[X] + m_X^2 \qquad (3.25)$$
$$= E[X^2] - m_X^2 \qquad (3.26)$$
๐ธ [๐‘‹ 2 ] is called the second moment of ๐‘‹ .
Definition 3.8. Moment: The ๐‘›๐‘กโ„Ž moment of ๐‘‹ is defined as: ๐ธ [๐‘‹ ๐‘› ].
Example 3.5
Revisiting Example 3.1, let ๐‘‹ be the number of heads in three tosses of a fair coin. Find
๐‘‰ ๐ด๐‘… [๐‘‹ ].
Solution.
$$E[X^2] = \sum_{k=0}^{3} k^2 P_X(k) = 0(1/8) + 1^2(3/8) + 2^2(3/8) + 3^2(1/8) = 3$$
$$VAR[X] = E[X^2] - (E[X])^2 = 3 - (1.5)^2 = 0.75$$
Let $Y = X + c$, then:
$$VAR[X + c] = E[(X + c - (E[X] + c))^2] \qquad (3.27)$$
$$= E[(X - E[X])^2] = VAR[X] \qquad (3.28)$$
Adding a constant to a random variable does not affect the variance.
Let $Z = cX$, then:
$$VAR[cX] = E[(cX - cE[X])^2] \qquad (3.29)$$
$$= E[c^2 (X - E[X])^2] \qquad (3.30)$$
$$= c^2\,VAR[X] \qquad (3.31)$$
Scaling a random variable by $c$ scales the variance by $c^2$ and the standard deviation by $|c|$.
Note that a random variable that is equal to a constant ๐‘‹ = ๐‘ with probability 1, has zero variance:
๐‘‰ ๐ด๐‘… [๐‘‹ ] = ๐ธ [(๐‘‹ − ๐‘) 2 ] = ๐ธ [0] = 0
Finally, variance is a special case of the central moments, for $n = 2$, where the $n$th central moment is defined as follows.
Definition 3.9. Central Moments: The ๐‘›๐‘กโ„Ž central moment of a random variable is defined as:
๐ธ [(๐‘‹ − ๐‘š๐‘‹ )๐‘› ].
3.2.4 Conditional Probability Mass Function and Expectation
In many situations we have partial information about a random variable ๐‘‹ or about the outcome
of its underlying random experiment. We are interested in how this information changes the
probability of events involving the random variable.
Definition 3.10. Conditional Probability Mass Function: Let ๐‘‹ be a discrete random variable
with PMF ๐‘ƒ๐‘‹ (๐‘ฅ) and let ๐ถ be an event that has nonzero probability, ๐‘ƒ (๐ถ) > 0. The conditional
probability mass function of $X$ is defined by the conditional probability:
$$P_{X|C}(x) = P(X = x \mid C) \qquad \text{for } x \text{ a real number} \qquad (3.32)$$
Applying the definition of conditional probability we have:
$$P_{X|C}(x) = \frac{P(\{X = x\} \cap C)}{P(C)} \qquad (3.33)$$
As illustrated in Figure 3.5, the above expression has a nice intuitive interpretation: The conditional
probability of the event {๐‘‹ = ๐‘ฅ๐‘˜ } is given by the probabilities of outcomes ๐œ for which both
๐‘‹ (๐œ ) = ๐‘ฅ๐‘˜ and ๐œ are in ๐ถ, normalized by ๐‘ƒ (๐ถ).
Figure 3.5: Conditional PMF of ๐‘‹ given event ๐ถ.
The conditional PMF has the same properties as the PMF. If $S$ is partitioned by $A_k = \{X = x_k\}$, then $C = \bigcup_k (A_k \cap C)$ and:
$$\sum_{x_k \in S_X} P_{X|C}(x_k) = \sum_k P_{X|C}(x_k) = \sum_k \frac{P(\{X = x_k\} \cap C)}{P(C)} = \frac{1}{P(C)} \sum_k P(A_k \cap C) = \frac{P(C)}{P(C)} = 1$$
Most of the time the event ๐ถ is defined in terms of ๐‘‹ , for example ๐ถ = {๐‘Ž ≤ ๐‘‹ ≤ ๐‘}. For ๐‘ฅ๐‘˜ ∈ ๐‘†๐‘‹ ,
we have the following result:
$$P_{X|C}(x_k) = \begin{cases} \dfrac{P_X(x_k)}{P(C)} & \text{if } x_k \in C \\[4pt] 0 & \text{if } x_k \notin C \end{cases} \qquad (3.34)$$
Example 3.6
Let ๐‘‹ be the number of heads in three tosses of a fair coin. Find the conditional PMF of ๐‘‹
given that we know the observed number was less than 2.
Solution. We condition on the event ๐ถ = {๐‘‹ < 2}. From Example 3.2:
๐‘ƒ (๐ถ) = ๐‘ƒ๐‘‹ (0) + ๐‘ƒ๐‘‹ (1) = 1/8 + 3/8 = 1/2.
Therefore:
$$P_{X|C}(0) = \frac{P_X(0)}{P(C)} = \frac{1/8}{1/2} = 1/4, \qquad P_{X|C}(1) = \frac{P_X(1)}{P(C)} = \frac{3/8}{1/2} = 3/4$$
and ๐‘ƒ๐‘‹ |๐ถ (๐‘ฅ๐‘˜ ) is zero otherwise. Note that ๐‘ƒ๐‘‹ |๐ถ (0) + ๐‘ƒ๐‘‹ |๐ถ (1) = 1.
Many random experiments have natural ways of partitioning the sample space ๐‘† into the union
of disjoint events ๐ต 1, ๐ต 2, ..., ๐ต๐‘› . Let ๐‘ƒ๐‘‹ |๐ต๐‘– (๐‘ฅ) be the conditional PMF of ๐‘‹ given event ๐ต๐‘– . The
theorem on total probability allows us to find the PMF of $X$ in terms of the conditional PMFs:
$$P_X(x) = \sum_{i=1}^{n} P_{X|B_i}(x)\,P(B_i) \qquad (3.35)$$
Definition 3.11. Conditional Expected Value: Let ๐‘‹ be a discrete random variable, and suppose
that we know that event $B$ has occurred. The conditional expected value of $X$ given $B$ is defined as:
$$m_{X|B} = E[X|B] = \sum_{x \in S_X} x\,P_{X|B}(x) = \sum_k x_k\,P_{X|B}(x_k) \qquad (3.36)$$
where we apply the absolute convergence requirement on the summation.
Definition 3.12. Conditional Variance: Let ๐‘‹ be a discrete random variable, and suppose that
we know that event $B$ has occurred. The conditional variance of $X$ given $B$ is defined as:
$$\sigma_{X|B}^2 = VAR[X|B] = E[(X - m_{X|B})^2 \mid B] = \sum_k (x_k - m_{X|B})^2\,P_{X|B}(x_k) \qquad (3.37)$$
$$= E[X^2|B] - m_{X|B}^2 \qquad (3.38)$$
Note that the variation is measured with respect to ๐‘š๐‘‹ |๐ต not ๐‘š๐‘‹ .
Let $B_1, B_2, \ldots, B_n$ be the partition of $S$, and let $P_{X|B_i}(x)$ be the conditional PMF of $X$ given event $B_i$. $E[X]$ can be calculated from the conditional expectations $E[X|B_i]$:
$$E[X] = \sum_{i=1}^{n} E[X|B_i]\,P(B_i) \qquad (3.39)$$
By the theorem on total probability we have:
$$E[X] = \sum_k x_k P_X(x_k) = \sum_k x_k \Big\{ \sum_{i=1}^{n} P_{X|B_i}(x_k)\,P(B_i) \Big\} \qquad (3.40)$$
$$= \sum_{i=1}^{n} \Big\{ \sum_k x_k P_{X|B_i}(x_k) \Big\} P(B_i) = \sum_{i=1}^{n} E[X|B_i]\,P(B_i) \qquad (3.41)$$
where we first express ๐‘ƒ๐‘‹ (๐‘ฅ๐‘˜ ) in terms of the conditional PMFs, and we then change the order of
summation. Using the same approach we can also show:
$$E[g(X)] = \sum_{i=1}^{n} E[g(X)|B_i]\,P(B_i) \qquad (3.42)$$
Example 3.7
Let ๐‘‹ be the number of heads in three tosses of a fair coin. Find the expected value and
variance of $X$, if we know that at least one head was observed.
Solution. We are given $C = \{X > 0\}$, so for $x_k = 1, 2, 3$:
$$P(C) = 1 - P_X(0) = 7/8$$
$$E[X|C] = \sum_k x_k P_{X|C}(x_k) = 1\Big(\frac{P_X(1)}{P(C)}\Big) + 2\Big(\frac{P_X(2)}{P(C)}\Big) + 3\Big(\frac{P_X(3)}{P(C)}\Big) = 1\Big(\frac{3/8}{7/8}\Big) + 2\Big(\frac{3/8}{7/8}\Big) + 3\Big(\frac{1/8}{7/8}\Big) = 12/7 \approx 1.7$$
which is larger than $E[X] = 1.5$ found in Example 3.4.
$$E[X^2|C] = \sum_k x_k^2 P_{X|C}(x_k) = 1\Big(\frac{3/8}{7/8}\Big) + 4\Big(\frac{3/8}{7/8}\Big) + 9\Big(\frac{1/8}{7/8}\Big) = 24/7$$
$$VAR[X|C] = E[X^2|C] - (E[X|C])^2 \approx 0.49$$
3.2.5 Common Discrete Random Variables
In this section we present the most important of the discrete random variables and their basic
properties and applications.
Bernoulli Random Variable
Definition 3.13. Bernoulli trial: A Bernoulli trial involves performing an experiment once and
noting whether a particular event ๐ด occurs. The outcome of the Bernoulli trial is said to be a “success”
if ๐ด occurs and a “failure” otherwise.
We can view the outcome of a single Bernoulli trial as the outcome of a toss of a coin for which
the probability of heads (success) is ๐‘ = ๐‘ƒ (๐ด). The probability of ๐‘˜ successes in ๐‘› Bernoulli trials
is then equal to the probability of ๐‘˜ heads in ๐‘› tosses of the coin.
Definition 3.14. Bernoulli random variable: Let ๐ด be an event related to the outcomes of some
random experiment. The Bernoulli random variable ๐ผ๐ด equals one if the event ๐ด occurs, and zero
otherwise, and is given by the indicator function for $A$:
$$I_A(\zeta) = \begin{cases} 1 & \text{if } \zeta \in A \\ 0 & \text{if } \zeta \notin A \end{cases} \qquad (3.43)$$
๐ผ๐ด is a discrete random variable with range = {0, 1}.
• The PMF of ๐ผ๐ด is:
๐‘ƒ๐ผ (1) = ๐‘
and ๐‘ƒ๐ผ (0) = 1 − ๐‘ = ๐‘ž
(3.44)
where ๐‘ƒ (๐ด) = ๐‘.
• The mean of $I_A$ is $E[I_A] = 1 \times P_I(1) + 0 \times P_I(0) = p$. The sample mean in $n$ independent Bernoulli trials is simply the relative frequency of successes and converges to $p$ as $n$ increases:
$$\langle I_A \rangle_n = \frac{0\,N_0(n) + 1\,N_1(n)}{n} \to p \qquad (3.45)$$
• The variance of $I_A$ can be found as follows:
$$E[I_A^2] = 1 \times P_I(1) + 0 \times P_I(0) = p$$
$$\sigma_I^2 = VAR[I_A] = p - p^2 = p(1 - p) = pq \qquad (3.46)$$
The variance is quadratic in ๐‘, with value zero at ๐‘ = 0 and ๐‘ = 1 and maximum at ๐‘ = 1/2.
This agrees with intuition since values of ๐‘ close to 0 or to 1 imply a preponderance of
successes or failures and hence less variability in the observed values. The maximum
variability occurs when $p = 1/2$, which corresponds to the case that is most difficult to predict. Every
Bernoulli trial, regardless of the event ๐ด, is equivalent to the tossing of a biased coin with
probability of heads ๐‘.
Binomial Random Variable
Consider $n$ independent Bernoulli trials in which $k$ successes occur (e.g. Example 3.1). Outcomes
of the repeated trials are represented as ๐‘› element vectors whose elements are taken from ๐‘† = {0, 1},
therefore the repeated experiment has a sample space of ๐‘†๐‘› = {0, 1}๐‘› , which is referred to as a
Cartesian space. For example consider the following outcome:
$$\zeta_k = (\underbrace{1, 1, \ldots, 1}_{k\ \text{times}}, \underbrace{0, 0, \ldots, 0}_{n-k\ \text{times}})$$
The probability of this outcome occurring is:
$$P(\zeta_k) = p^k (1 - p)^{n-k} \qquad (3.47)$$
In fact, the order of the 1s and 0s in the sequence is irrelevant. Any outcome with exactly ๐‘˜ 1s
and ๐‘› − ๐‘˜ 0s would have the same probability. The number of outcomes in the event of exactly ๐‘˜
successes, is just the number of combinations of ๐‘› trials taken ๐‘˜ successes at a time.
Theorem 3.1: Binomial probability law
Let $k$ be the number of successes in $n$ independent Bernoulli trials; then the probabilities of $k$ are given by the binomial probability law:
$$P_n(k) = \binom{n}{k} p^k (1 - p)^{n-k} \qquad \text{for } k = 0, \ldots, n \qquad (3.48)$$
where $\binom{n}{k}$ is the binomial coefficient (see Equation 2.26).
Now let the random variable ๐‘‹ represent the number of successes occurred in the sequence of ๐‘›
trials.
Definition 3.15. Binomial random variable: let ๐‘‹ be the number of times a certain event ๐ด
occurs in ๐‘› independent Bernoulli trials. ๐‘‹ is called the Binomial random variable.
For example, ๐‘‹ could be the number of heads in ๐‘› tosses of a coin (as seen in Examples 3.2 to 3.5,
where ๐‘› = 3 and ๐‘ = 1/2).
• The PMF of the binomial random variable $X$ is:
$$P(X = k) = P_X(k) = \binom{n}{k} p^k (1 - p)^{n-k} \qquad \text{for } k = 0, \ldots, n \qquad (3.49)$$
• The expected value of $X$ is:
$$E[X] = \sum_{k=0}^{n} k\,P_X(k) = \sum_{k=0}^{n} k \binom{n}{k} p^k (1 - p)^{n-k} \qquad (3.50)$$
Since the $k = 0$ term of the summation is zero,
$$E[X] = \sum_{k=1}^{n} k\,\frac{n!}{k!(n-k)!}\,p^k (1 - p)^{n-k} \qquad (3.51)$$
$$= np \sum_{k=1}^{n} \frac{(n-1)!}{(k-1)!(n-k)!}\,p^{k-1} (1 - p)^{n-k} \qquad (3.52)$$
$$= np \sum_{j=0}^{n-1} \frac{(n-1)!}{j!(n-1-j)!}\,p^{j} (1 - p)^{n-1-j} \qquad (3.53)$$
Note that the last summation equals one, since it adds all the terms of a binomial PMF with parameters $n - 1$ and $p$, so:
$$E[X] = np \times 1 = np \qquad (3.54)$$
It agrees with our intuition since we expect a fraction ๐‘ of the outcomes to result in success.
To find the variance of $X$:
$$E[X^2] = \sum_{k=0}^{n} k^2\,\frac{n!}{k!(n-k)!}\,p^k (1 - p)^{n-k} = \sum_{k=1}^{n} k\,\frac{n!}{(k-1)!(n-k)!}\,p^k (1 - p)^{n-k} \qquad (3.55)$$
$$= np \sum_{j=0}^{n-1} (j + 1) \binom{n-1}{j} p^{j} (1 - p)^{n-1-j} \qquad (3.56)$$
$$= np \Big( \sum_{j=0}^{n-1} j \binom{n-1}{j} p^{j} (1 - p)^{n-1-j} + \sum_{j=0}^{n-1} \binom{n-1}{j} p^{j} (1 - p)^{n-1-j} \Big) \qquad (3.57)$$
In the third line, the first sum is the mean of a binomial random variable with parameters $n - 1$ and $p$, and hence equal to $(n-1)p$. The second sum is the sum of the binomial probabilities and hence equal to 1. Therefore,
$$E[X^2] = np(np + 1 - p) \qquad (3.58)$$
$$VAR[X] = E[X^2] - E[X]^2 = np(np + 1 - p) - (np)^2 = np(1 - p) = npq \qquad (3.59)$$
We see that the variance of the binomial is ๐‘› times the variance of a Bernoulli random variable.
We observe that values of p close to 0 or to 1 imply smaller variance, and that the maximum
variability is when ๐‘ = 1/2.
The binomial random variable arises in applications where there are two types of objects (i.e.,
heads/tails, correct/erroneous bits, good/defective items, active/silent speakers), and we are
interested in the number of type 1 objects in a randomly selected batch of size ๐‘›, where the type
of each object is independent of the types of the other objects in the batch.
Example 3.8
A binary communications channel introduces a bit error in a transmission with probability
๐‘. Let ๐‘‹ be the number of errors in ๐‘› independent transmissions. Find the probability of
one or fewer errors.
Solution. $X$ is a binomial random variable, and the probability of $k$ errors in $n$ bit transmissions is given by the PMF in Equation 3.49:
$$P(X \le 1) = \binom{n}{0} p^0 (1 - p)^{n} + \binom{n}{1} p^1 (1 - p)^{n-1} = (1 - p)^{n} + np(1 - p)^{n-1}$$
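For concrete numbers, the expression can be evaluated directly; the sketch below is illustrative only and uses assumed example values n = 1000 and p = 10⁻³ that are not given in the notes.

```python
# Illustrative sketch for Example 3.8: probability of one or fewer bit errors.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 1000, 1e-3                       # assumed example values
prob = binom_pmf(0, n, p) + binom_pmf(1, n, p)
print(prob)                             # (1-p)^n + n*p*(1-p)^(n-1) ≈ 0.736
```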
Geometric Random Variable
Definition 3.16. Geometric random variable: The geometric random variable is defined as the
number ๐‘‹ of independent Bernoulli trials until the first occurrence of a success.
Note that the event ๐‘‹ = ๐‘˜ occurs if the underlying experiment finds ๐‘˜ − 1 consecutive failures,
followed by one success. If the probability of “success” in each Bernoulli trial is ๐‘, then:
• Therefore the PMF is:
$$P_X(k) = P(00\ldots01) = (1 - p)^{k-1} p = q^{k-1} p \qquad \text{for } k = 1, 2, \ldots \qquad (3.60)$$
Note that the PMF decays geometrically with ๐‘˜, and the ratio 1 − ๐‘ = ๐‘ž. As ๐‘ increases, the
PMF decays more rapidly.
• The probability that $X \le k$ can be written in closed form:
$$P(X \le k) = \sum_{j=1}^{k} q^{j-1} p = p \sum_{j=0}^{k-1} q^{j} = p\,\frac{1 - q^k}{1 - q} = 1 - q^k \qquad (3.61)$$
• The expectation of $X$ is:
$$E[X] = \sum_{k=1}^{\infty} k\,p\,q^{k-1} = p \sum_{k=1}^{\infty} k\,q^{k-1} \qquad (3.62)$$
This expression can be evaluated by differentiating the series:
$$\frac{1}{1 - x} = \sum_{k=0}^{\infty} x^k \qquad (3.63)$$
to obtain:
$$\frac{1}{(1 - x)^2} = \sum_{k=1}^{\infty} k\,x^{k-1} \qquad (3.64)$$
Letting $x = q$:
$$E[X] = p\,\frac{1}{(1 - q)^2} = 1/p \qquad (3.65)$$
which is finite as long as ๐‘ > 0.
• If Equation 3.64 is differentiated once more, we obtain:
$$\frac{2}{(1 - x)^3} = \sum_{k=2}^{\infty} k(k-1)\,x^{k-2} \qquad (3.66)$$
Let $x = q$ and multiply both sides by $pq$ to obtain:
$$\frac{2pq}{(1 - q)^3} = pq \sum_{k=2}^{\infty} k(k-1)\,q^{k-2} = \sum_{k=1}^{\infty} (k^2 - k)\,p\,q^{k-1} = E[X^2] - E[X]$$
So the second moment is:
$$E[X^2] = \frac{2pq}{(1 - q)^3} + E[X] = 2q/p^2 + 1/p = \frac{1 + q}{p^2} \qquad (3.67)$$
$$VAR[X] = E[X^2] - E[X]^2 = \frac{1 + q}{p^2} - 1/p^2 = q/p^2 \qquad (3.68)$$
We see that the mean and variance increase as ๐‘, the success probability, decreases.
Sometimes we are interested in $M$, the number of failures before a success occurs, which is also referred to as a (modified) geometric random variable. Its PMF is:
๐‘ƒ (๐‘€ = ๐‘˜) = (1 − ๐‘)๐‘˜ ๐‘
(3.69)
The geometric random variable is the only discrete random variable that satisfies the memoryless
property:
๐‘ƒ (๐‘‹ ≥ ๐‘˜ + ๐‘— |๐‘‹ > ๐‘—) = ๐‘ƒ (๐‘‹ ≥ ๐‘˜)
(3.70)
The above expression states that if a success has not occurred in the first ๐‘— trials, then the
probability of having to perform at least ๐‘˜ more trials is the same as the probability of initially
having to perform at least ๐‘˜ trials. Thus, each time a failure occurs, the system “forgets” and
begins anew as if it were performing the first trial.
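A short simulation makes the geometric model and its mean and variance tangible; the sketch below (illustrative only, arbitrary seed and an assumed value p = 0.25) draws geometric samples by repeating Bernoulli trials until the first success.

```python
# Illustrative simulation: geometric random variable as trials up to the first success.
import random

random.seed(2)
p = 0.25

def geometric(p):
    k = 1
    while random.random() >= p:   # failure with probability q = 1 - p
        k += 1
    return k

samples = [geometric(p) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean)**2 for x in samples) / len(samples)
print(mean, 1 / p)              # both close to the mean 1/p = 4
print(var, (1 - p) / p**2)      # both close to the variance q/p^2 = 12
```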
The geometric random variable arises in applications where one is interested in the time (i.e.,
number of trials) that elapses between the occurrence of events in a sequence of independent
experiments. Examples where the modified geometric random variable arises are: number of
customers awaiting service in a queuing system; number of white dots between successive black
dots in a scan of a black-and-white document.
Example 3.9
A production line yields two types of devices. Type 1 devices occur with probability ๐›ผ
and work for a relatively short time that is geometrically distributed with parameter ๐‘Ÿ .
Type 2 devices work much longer, occur with probability 1 − ๐›ผ and have a lifetime that is
geometrically distributed with parameter ๐‘ . Let ๐‘‹ be the lifetime of an arbitrary device. Find
the PMF, mean and variance of ๐‘‹ .
Solution. The random experiment that generates ๐‘‹ involves selecting a device type and then
observing its lifetime. We can partition the sets of outcomes in this experiment into event
๐ต 1 consisting of those outcomes in which the device is type 1, and ๐ต 2 consisting of those
outcomes in which the device is type 2. From the theorem of total probability:
๐‘ƒ๐‘‹ (๐‘˜) = ๐‘ƒ๐‘‹ |๐ต1 (๐‘˜)๐‘ƒ (๐ต 1 ) + ๐‘ƒ๐‘‹ |๐ต2 (๐‘˜)๐‘ƒ (๐ต 2 )
= (1 − ๐‘Ÿ )๐‘˜−1๐‘Ÿ (๐›ผ) + (1 − ๐‘ )๐‘˜−1๐‘  (1 − ๐›ผ) for ๐‘˜ = 1, 2, ...
The conditional mean and second moment of each device type is that of a geometric random
variable with the corresponding parameter:
๐ธ [๐‘‹ |๐ต 1 ] = 1/๐‘Ÿ
๐ธ [๐‘‹ |๐ต 2 ] = 1/๐‘ 
๐ธ [๐‘‹ 2 |๐ต 1 ] = (1 + 1 − ๐‘Ÿ )/๐‘Ÿ 2
๐ธ [๐‘‹ 2 |๐ต 2 ] = (1 + 1 − ๐‘ )/๐‘  2
The mean and the second moment of ๐‘‹ are then:
๐ธ [๐‘‹ ] = (๐ธ [๐‘‹ |๐ต 1 ]) (๐›ผ) + (๐ธ [๐‘‹ |๐ต 2 ]) (1 − ๐›ผ) = ๐›ผ/๐‘Ÿ + (1 − ๐›ผ)/๐‘ 
๐ธ [๐‘‹ 2 ] = ๐ธ [๐‘‹ 2 |๐ต 1 ] (๐›ผ) + ๐ธ [๐‘‹ 2 |๐ต 2 ] (1 − ๐›ผ) = ๐›ผ (2 − ๐‘Ÿ )/๐‘Ÿ 2 + (1 − ๐›ผ) (2 − ๐‘ )/๐‘  2
๐‘‰ ๐ด๐‘… [๐‘‹ ] = ๐ธ [๐‘‹ 2 ] − ๐ธ [๐‘‹ ] 2 = ๐›ผ (2 − ๐‘Ÿ )/๐‘Ÿ 2 + (1 − ๐›ผ) (2 − ๐‘ )/๐‘  2 − (๐›ผ/๐‘Ÿ + (1 − ๐›ผ)/๐‘ ) 2
Note that we do not use the conditional variances to find ๐‘‰ ๐ด๐‘… [๐‘‹ ], since the Equation 3.42
does not similarly apply to the conditional variances.
Poisson Random Variable
In many applications, we are interested in counting the number of occurrences of an event in
a certain time period or in a certain region in space. The Poisson random variable arises in
situations where the events occur “completely at random” in time or space. For example, the
Poisson random variable arises in counts of emissions from radioactive substances, in counts of
demands for telephone connections, and in counts of defects in a semiconductor chip, in queuing
theory and in communication networks. The number of customers arriving at a cashier in a store
during some time interval may be well modeled as a Poisson random variable as may the number
of data packets arriving at a node in a computer network.
• The PMF of the Poisson random variable is given by:
$$P_X(k) = \frac{\alpha^k}{k!} e^{-\alpha}, \qquad k = 0, 1, 2, \ldots \qquad (3.71)$$
where ๐›ผ is the average number of event occurrences in a specified time interval or region
in space. The PMF sums to one, as required, since:
$$\sum_{k=0}^{\infty} \frac{\alpha^k}{k!} e^{-\alpha} = e^{-\alpha} \sum_{k=0}^{\infty} \frac{\alpha^k}{k!} = e^{-\alpha} e^{\alpha} = 1$$
where we used the fact that the second summation is the infinite series expansion for ๐‘’ ๐›ผ .
• The mean can be found as follows:
$$E[X] = \sum_{k=0}^{\infty} k\,\frac{\alpha^k}{k!} e^{-\alpha} = e^{-\alpha} \sum_{k=1}^{\infty} \frac{\alpha^k}{(k-1)!} = e^{-\alpha} \alpha \sum_{k=1}^{\infty} \frac{\alpha^{k-1}}{(k-1)!} = e^{-\alpha} \alpha\,e^{\alpha} = \alpha \qquad (3.72)$$
• [Exercise−] It can be shown that the variance is:
๐‘‰ ๐ด๐‘… [๐‘‹ ] = ๐›ผ
(3.73)
One of the applications of the Poisson probabilities is to approximate the binomial probabilities
when the number of repeated trials, ๐‘› , is very large and the probability of success in each
individual trial,๐‘ , is very small. Then the binomial random variable can be well approximated by
a Poisson random variable. That is, the Poisson random variable is a limiting case of the binomial
random variable. Let ๐‘› approach infinity and ๐‘ approach 0 in such a way that lim๐‘›→∞ ๐‘›๐‘ = ๐›ผ,
then the binomial PMF converges to the PMF of the Poisson random variable:
$$\binom{n}{k} p^k (1 - p)^{n-k} \to \frac{\alpha^k}{k!} e^{-\alpha} \qquad \text{for } k = 0, 1, 2, \ldots \qquad (3.74)$$
The Poisson random variable appears in numerous physical situations because many models are
very large in scale and involve very rare events. For example, the Poisson PMF gives an accurate
prediction for the relative frequencies of the number of particles emitted by a radioactive mass
during a fixed time period.
The Poisson random variable also comes up in situations where we can imagine a sequence of
Bernoulli trials taking place in time or space. Suppose we count the number of event occurrences
in a T-second interval. Divide the time interval into a very large number, ๐‘›, of sub-intervals. A
pulse in a sub-interval indicates the occurrence of an event. Each sub-interval can be viewed as
one in a sequence of independent Bernoulli trials if the following conditions hold: (1) At most one
event can occur in a sub-interval, that is, the probability of more than one event occurrence is
negligible; (2) the outcomes in different sub-intervals are independent; and (3) the probability of
an event occurrence in a sub-interval is ๐‘ = ๐›ผ/๐‘› where ๐›ผ is the average number of events observed
in a 1-second interval. The number ๐‘ of events in 1 second is a binomial random variable with
parameters ๐‘› and ๐‘ = ๐›ผ/๐‘›. Thus as ๐‘› → ∞ ๐‘ becomes a Poisson random variable with parameter
๐›ผ.
Example 3.10
An optical communication system transmits information at a rate of $10^9$ bits/second. The probability of a bit error in the optical communication system is $10^{-9}$. Find the probability of five or more errors in 1 second.
Solution. Each bit transmission corresponds to a Bernoulli trial with a “success” corresponding to a bit error in transmission. The probability of $k$ errors in $n = 10^9$ transmissions (1 second) is then given by the binomial probability with $n = 10^9$ and $p = 10^{-9}$.
The Poisson approximation uses $\alpha = np = 10^9(10^{-9}) = 1$. Thus:
$$P(X \ge 5) = 1 - P(X < 5) = 1 - \sum_{k=0}^{4} \frac{\alpha^k}{k!} e^{-\alpha} = 1 - e^{-1}(1 + 1/1! + 1/2! + 1/3! + 1/4!) = 0.00366$$
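The quality of the Poisson approximation in Example 3.10 can be checked numerically; the sketch below (illustrative only, not from the notes) compares it with the exact binomial tail for n = 10⁹ and p = 10⁻⁹.

```python
# Illustrative check of Example 3.10: Poisson approximation versus the exact binomial tail.
from math import comb, exp, factorial

n, p = 10**9, 10**-9
alpha = n * p   # = 1

poisson_tail = 1 - sum(exp(-alpha) * alpha**k / factorial(k) for k in range(5))
binom_tail = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5))

print(poisson_tail)  # ≈ 0.00366
print(binom_tail)    # essentially the same value
```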
Uniform Random Variable
Definition 3.17. Uniform random variable: The discrete uniform random variable ๐‘‹ takes on
values in a set of consecutive integers ๐‘†๐‘‹ = { ๐‘— + 1, ..., ๐‘— + ๐ฟ} with equal probability.
• The PMF of the uniform random variable is:
$$P_X(k) = 1/L \qquad \text{for } k \in \{j + 1, \ldots, j + L\}$$
• [Exercise] It can be shown that the mean is:
$$E[X] = j + \frac{L + 1}{2} \qquad (3.75)$$
• [Exercise] It is easy to show that the variance is:
$$VAR[X] = \frac{L^2 - 1}{12} \qquad (3.76)$$
This random variable occurs whenever outcomes are equally likely, e.g., toss of a fair coin or a
fair die, spinning of an arrow in a wheel divided into equal segments, selection of numbers from
an urn.
Example 3.11
Let ๐‘‹ be the time required to transmit a message, where ๐‘‹ is a uniform random variable
with ๐‘†๐‘‹ = {1, ..., ๐ฟ}. Suppose that a message has already been transmitting for ๐‘š time units,
find the probability that the remaining transmission time is ๐‘— time units and the expected
value of the remaining transmission time.
Solution. We are given the condition $C = \{X > m\}$, so for $m + 1 \le m + j \le L$:
$$P_{X|C}(m + j) = \frac{P(X = m + j)}{P(X > m)} = \frac{1/L}{(L - m)/L} = \frac{1}{L - m}, \qquad \text{for } m + 1 \le m + j \le L$$
$$E[X|C] = \sum_{j=m+1}^{L} j\,\frac{1}{L - m} = \frac{L + m + 1}{2}$$
The expectation can also be directly calculated from Equation 3.75, replacing the parameters
๐ฟ and ๐‘— by ๐ฟ − ๐‘š and ๐‘š, respectively.
3.3 Continuous Random Variables
Consider a discrete uniform random variable, ๐‘‹ , that takes on values from the set {0, 1/๐‘ , 2/๐‘ , ..., (๐‘ −
1)/๐‘ }, with PMF of 1/๐‘ . If ๐‘ is a large number so that it appears that the random number can
be anything in the continuous range [0, 1), i.e. ๐‘ → ∞, then the PMF approaches zero! That
is, each point has zero probability of occurring, or in other words, every possible outcome has
probability zero. Yet, something has to occur! Since a continuous random variable typically
has a zero probability of taking on a specific value, the pmf cannot be used to characterize the
probabilities of ๐‘‹ . Therefore we define it by its CDF property.
Definition 3.18. Continuous random variable: A random variable whose CDF ๐น๐‘‹ (๐‘ฅ) is continuous everywhere, and which, in addition, is sufficiently smooth that it can be written as an integral
of some non-negative function $f(x)$:
$$F_X(x) = \int_{-\infty}^{x} f(t)\,dt \qquad (3.77)$$
For continuous random variables, we calculate probabilities as integrals of “probability densities”
over intervals of the real line.
A random variable can also be of mixed type, that is a random variable with a CDF that has
jumps on a countable set of points ๐‘ฅ 0, ๐‘ฅ 1, ๐‘ฅ 2, ... but that also increases continuously over at least
one interval of values of ๐‘ฅ. The CDF for these random variables has the form:
๐น๐‘‹ (๐‘ฅ) = ๐‘๐น 1 (๐‘ฅ) + (1 − ๐‘)๐น 2 (๐‘ฅ)
where 0 < ๐‘ < 1 and ๐น 1 (๐‘ฅ) is the CDF of a discrete random variable and ๐น 2 (๐‘ฅ) is the CDF of a
continuous random variable. Random variables of mixed type can be viewed as being produced
by a two-step process: A coin is tossed; if the outcome of the toss is heads, a discrete random
variable is generated according to ๐น 1 (๐‘ฅ) otherwise, a continuous random variable is generated
according to ๐น 2 (๐‘ฅ).
3.3.1 The Probability Density Function
While the CDF represents a mathematical tool to statistically describe a random variable, it is
often quite cumbersome to work with CDFs or to infer various properties of a random variable
from its CDF. To help circumvent these problems, an alternative and often more convenient
description known as the probability density function is often used.
Definition 3.19. Probability density function: The probability density function of ๐‘‹ (PDF), if
it exists, is defined as the derivative of $F_X(x)$:
$$f_X(x) = \frac{dF_X(x)}{dx} \qquad (3.78)$$
The PDF represents the “density” of probability at the point ๐‘ฅ in the following sense: The
probability that $X$ is in a small interval in the vicinity of $x$, i.e. $x < X \le x + h$, is:
$$P(x < X \le x + h) = F_X(x + h) - F_X(x) = \frac{F_X(x + h) - F_X(x)}{h}\,h \qquad (3.79)$$
If the CDF has a derivative at ๐‘ฅ, then as โ„Ž becomes very small,
๐‘ƒ (๐‘ฅ < ๐‘‹ ≤ ๐‘ฅ + โ„Ž) ≈ ๐‘“๐‘‹ (๐‘ฅ)โ„Ž
(3.80)
Thus $f_X(x)$ represents the “density” of probability at the point $x$ in the sense that the probability that
๐‘‹ is in a small interval in the vicinity of ๐‘ฅ is approximately ๐‘“๐‘‹ (๐‘ฅ)โ„Ž. The derivative of the CDF,
when it exists, is positive since the CDF is a non-decreasing function of ๐‘ฅ, thus:
๐‘“๐‘‹ (๐‘ฅ) ≥ 0
(3.81)
Note that the PDF specifies the probabilities of events of the form “๐‘‹ falls in a small interval of
width ๐‘‘๐‘ฅ about the point ๐‘ฅ”. Therefore probabilities of events involving ๐‘‹ in a certain range can
be expressed in terms of the PDF by adding the probabilities of intervals of width ๐‘‘๐‘ฅ. As the
widths of the intervals approach zero, we obtain an integral in terms of the PDF:
$$P(a \le X \le b) = \int_{a}^{b} f_X(x)\,dx \qquad (3.82)$$
The probability of an interval is therefore the area under ๐‘“๐‘‹ (๐‘ฅ) in that interval.
Figure 3.6: (a) The probability density function specifies the probability of intervals of infinitesimal
width. (b) The probability of an interval [๐‘Ž, ๐‘] is the area under the PDF in that interval.
(Taken from Alberto Leon-Garcia, Probability, statistics, and random processes for
electrical engineering,3rd ed. Pearson, 2007)
The probability of any event that consists of the union of disjoint intervals can thus be found by
adding the integrals of the PDF over each of the intervals.
The CDF of $X$ can be obtained by integrating the PDF:
$$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt \qquad (3.83)$$
Since the probabilities of all events involving ๐‘‹ can be written in terms of the CDF, it then follows
that these probabilities can be written in terms of the PDF. Thus the PDF completely specifies the
behavior of continuous random variables.
By letting $x$ tend to infinity in Equation 3.83, we obtain:
$$1 = \int_{-\infty}^{\infty} f_X(t)\,dt \qquad (3.84)$$
A valid PDF can be formed by normalising any non-negative, piecewise continuous function ๐‘”(๐‘ฅ)
that has a finite integral over all real values of ๐‘ฅ.
Example 3.12
The PDF for the random variable $X$ is:
$$f_X(x) = \begin{cases} \beta x^2 & -1 < x < 2 \\ 0 & \text{otherwise} \end{cases}$$
Find ๐›ฝ so that ๐‘“๐‘‹ (๐‘ฅ) is a PDF, and find the CDF ๐น๐‘‹ (๐‘ฅ).
Solution. We require:
$$1 = \int_{-\infty}^{\infty} f_X(t)\,dt = \beta \int_{-1}^{2} x^2\,dx = (\beta/3)(8 + 1) = 3\beta$$
So $\beta = 1/3$, which is positive, as required. To find the CDF, for $-1 \le x < 2$:
$$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \int_{-1}^{x} (1/3)\,t^2\,dt = (1/9)(x^3 + 1)$$
Finally, since $f_X(x) = 0$ for $x > 2$, $F_X(x) = 1$ for $x \ge 2$.
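The normalisation and CDF of Example 3.12 can also be verified numerically; the following sketch is illustrative only and assumes SciPy is available for the quadrature.

```python
# Illustrative numerical check of Example 3.12 using scipy.integrate.quad.
from scipy.integrate import quad

beta = 1/3
f = lambda x: beta * x**2 if -1 < x < 2 else 0.0   # the PDF with the normalising constant

total, _ = quad(f, -1, 2)                          # area under the PDF
F = lambda x: quad(f, -1, min(x, 2))[0] if x > -1 else 0.0

print(total)            # ≈ 1.0
print(F(0.0))           # ≈ (1/9)(0**3 + 1) ≈ 0.111
print(F(2.0), F(5.0))   # ≈ 1.0 for x >= 2
```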
PDF of Discrete Random Variables
The derivative of the CDF does not exist at points where the CDF is not continuous. As seen
in section 3.2.2, CDF of discrete random variables has discontinuities, where the notion of PDF
cannot be applied. We can generalize the definition of the PDF by noting the relation between the
unit step function $u(x)$ and Dirac delta function $\delta(x)$:
$$u(x) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0 \end{cases} \qquad (3.85)$$
$$u(x) = \int_{-\infty}^{x} \delta(t)\,dt \qquad (3.86)$$
Recall that the delta function ๐›ฟ (๐‘ฅ) is zero everywhere except at ๐‘ฅ = 0, where it is unbounded. To
maintain the right continuity of the step function at 0, we use the convention:
$$u(0) = 1 = \int_{-\infty}^{0} \delta(t)\,dt \qquad (3.87)$$
The PDF for a discrete random variable can be defined by:
$$f_X(x) = \frac{d}{dx} F_X(x) = \sum_k P_X(x_k)\,\delta(x - x_k) \qquad (3.88)$$
Thus the generalized definition of PDF places a delta function of weight ๐‘ƒ (๐‘‹ = ๐‘ฅ๐‘˜ ) at the points
๐‘ฅ๐‘˜ where the CDF is discontinuous.
Example 3.13
Find the PDF of ๐‘‹ in Example 3.3.
Solution. We found that the CDF of $X$ is:
$$F_X(x) = \tfrac{1}{8}u(x) + \tfrac{3}{8}u(x - 1) + \tfrac{3}{8}u(x - 2) + \tfrac{1}{8}u(x - 3)$$
Therefore the PDF of $X$ is given by:
$$f_X(x) = \tfrac{1}{8}\delta(x) + \tfrac{3}{8}\delta(x - 1) + \tfrac{3}{8}\delta(x - 2) + \tfrac{1}{8}\delta(x - 3)$$
3.3.2 Conditional CDF and PDF
Definition 3.20. Conditional cumulative distribution function: Suppose that event ๐ถ is given
and that $P(C) > 0$. The conditional CDF of $X$ given $C$ is defined by:
$$F_{X|C}(x) = \frac{P(\{X \le x\} \cap C)}{P(C)} \qquad (3.89)$$
and satisfies all the properties of a CDF.
The conditional PDF of $X$ given $C$ is then defined by:
$$f_{X|C}(x) = \frac{d}{dx} F_{X|C}(x) \qquad (3.90)$$
Example 3.14
The lifetime ๐‘‹ of a machine has a continuous CDF ๐น๐‘‹ (๐‘ฅ). Find the conditional CDF and
PDF given the event ๐ถ = {๐‘‹ > ๐‘ก } (i.e., “machine is still working at time ๐‘ก”).
Solution. The conditional CDF is:
$$F_{X|C}(x) = P(X \le x \mid X > t) = \frac{P(\{X \le x\} \cap \{X > t\})}{P(X > t)}$$
The intersection of the two events in the numerator is equal to the empty set when $x < t$ and to $\{t < X \le x\}$ when $x \ge t$. Then:
$$F_{X|C}(x) = \begin{cases} \dfrac{F_X(x) - F_X(t)}{1 - F_X(t)} & x > t \\[4pt] 0 & x \le t \end{cases}$$
The conditional PDF is found by differentiating with respect to $x$:
$$f_{X|C}(x) = \frac{f_X(x)}{1 - F_X(t)}$$
Now suppose that we have a partition of the sample space $S$ into the union of disjoint events $B_1, B_2, \ldots, B_n$. Let $F_{X|B_i}(x)$ be the conditional CDF of $X$ given event $B_i$. The theorem on total probability allows us to find the CDF of $X$ in terms of the conditional CDFs:
\[
F_X(x) = P(X \le x) = \sum_{i=1}^{n} P(X \le x \mid B_i)\,P(B_i) = \sum_{i=1}^{n} F_{X|B_i}(x)\,P(B_i) \qquad (3.91)
\]
The PDF is obtained by differentiation:
\[
f_X(x) = \frac{d}{dx} F_X(x) = \sum_{i=1}^{n} f_{X|B_i}(x)\,P(B_i) \qquad (3.92)
\]
3.3.3 The Expected Value and Moments
Expected Value
We discussed the expectation for discrete random variables in Section 3.2.3, and found that the sample mean of independent observations of a random variable approaches $E[X]$. Suppose we perform a series of such experiments for continuous random variables. Since for continuous random variables we have $P(X = x) = 0$ for any specific value of $x$, we divide the real line into small intervals and count the number of times the observations fall in the interval $x_k < X < x_k + \Delta$. As $n$ becomes large, the relative frequency $f_k(n) = N_k(n)/n$ will approach $f_X(x_k)\Delta$, the probability of the interval. We calculate the sample mean in terms of the relative frequencies and let $n \to \infty$:
\[
\langle X \rangle_n = \sum_k x_k f_k(n) \to \sum_k x_k f_X(x_k)\Delta
\]
The expression on the right-hand side approaches an integral as we decrease $\Delta$. Thus, the expected value or mean of a continuous random variable $X$ is defined by:
\[
E[X] = \int_{-\infty}^{+\infty} t\, f_X(t)\,dt \qquad (3.93)
\]
The expected value $E[X]$ is defined if the above integral converges absolutely, that is,
\[
E[|X|] = \int_{-\infty}^{+\infty} |t|\, f_X(t)\,dt < \infty
\]
We already discussed $E[X]$ for discrete random variables in detail, but the definition in Equation 3.93 is also applicable if we express the PDF of a discrete random variable using delta ($\delta$) functions:
\[
E[X] = \int_{-\infty}^{+\infty} t \sum_k P_X(x_k)\,\delta(t - x_k)\,dt
= \sum_k P_X(x_k) \int_{-\infty}^{+\infty} t\,\delta(t - x_k)\,dt
= \sum_k P_X(x_k)\, x_k
\]
Example 3.15
The PDF of the uniform random variable is a constant value over a certain range and zero outside that range:
\[
f_X(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & x < a \text{ and } x > b \end{cases}
\]
Find the expectation $E[X]$.
Solution.
\[
E[X] = \int_{a}^{b} t\, \frac{1}{b-a}\,dt = \frac{a+b}{2}
\]
which is the midpoint of the interval $[a, b]$.
The result in Example 3.15 could have been found immediately by noting that $E[X] = m$ when the PDF is symmetric about a point $m$. That is, if $f_X(m - x) = f_X(m + x)$ for all $x$, then, assuming that the mean exists,
\[
0 = \int_{-\infty}^{+\infty} (m - t)\, f_X(t)\,dt = m - \int_{-\infty}^{+\infty} t\, f_X(t)\,dt
\]
The first equality above follows from the symmetry of $f_X(t)$ about $t = m$ and the odd symmetry of $(m - t)$ about the same point. We then have that $E[X] = m$.
The following expressions are useful when $X$ is a nonnegative random variable:
\[
E[X] = \int_{0}^{\infty} \big(1 - F_X(t)\big)\,dt \quad \text{if } X \text{ continuous and nonnegative} \qquad (3.94)
\]
\[
E[X] = \sum_{k=0}^{\infty} P(X > k) \quad \text{if } X \text{ nonnegative, integer-valued} \qquad (3.95)
\]
Functions of a Random Variable
The concept of expectation can be applied to functions of random variables as well. This allows us to define many other parameters that describe various aspects of a continuous random variable.
Definition 3.21. Given a continuous random variable $X$ with PDF $f_X(x)$, the expected value of a function $g(X)$ of that random variable is given by:
\[
E[g(X)] = \int_{-\infty}^{+\infty} g(x)\, f_X(x)\,dx \qquad (3.96)
\]
Example 3.16
If $Y = aX + b$, where $X$ is a continuous random variable with expected value $E[X]$ and $a$ and $b$ are constant values, find $E[Y]$.
Solution.
\[
E[Y] = E[aX + b] = \int_{-\infty}^{+\infty} (ax + b)\, f_X(x)\,dx = a \int_{-\infty}^{+\infty} x\, f_X(x)\,dx + b = aE[X] + b
\]
In general, expectation is a linear operation and the expectation operator can be exchanged (in order) with any other linear operation. For any linear combination of functions:
\[
E\Big[\sum_k a_k g_k(X)\Big] = \int_{-\infty}^{\infty} \Big(\sum_k a_k g_k(x)\Big) f_X(x)\,dx = \sum_k a_k \int_{-\infty}^{\infty} g_k(x)\, f_X(x)\,dx = \sum_k a_k E[g_k(X)] \qquad (3.97)
\]
Moments
Definition 3.22. Moment: The $n$th moment of a continuous random variable $X$ is defined as:
\[
E[X^n] = \int_{-\infty}^{+\infty} x^n f_X(x)\,dx \qquad (3.98)
\]
The zeroth moment is simply the area under the PDF and must be one for any random variable.
The most commonly used moments are the first and second moments. The first moment is the
expected value. For some random variables, the second moment might be a more meaningful
characterization than the first. For example, suppose ๐‘‹ is a sample of a noise waveform. We might
expect that the distribution of the noise is symmetric about zero and hence the first moment will
be zero. It only shows that the noise does not have a bias. However, the second moment of the
random noise is in some sense a measure of the strength of the noise, which can give us some
useful physical insight into the power of the noise.
Under certain conditions, a PDF is completely specified if the expected values of all the moments
of ๐‘‹ are known.
Variance
Similar to the definition of variance for discrete random variables, for a continuous random variable $X$ the variance is defined as:
\[
VAR[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2 \qquad (3.99)
\]
and the standard deviation is defined by $STD[X] = \sigma_X = VAR[X]^{1/2}$.
Example 3.17
Find the variance of the continuous uniform random variable in Example 3.15.
Solution.
\[
VAR[X] = \int_{a}^{b} \Big(x - \frac{a+b}{2}\Big)^2 \frac{1}{b-a}\,dx
\]
Let $y = x - \frac{a+b}{2}$; then
\[
VAR[X] = \frac{1}{b-a} \int_{-(b-a)/2}^{(b-a)/2} y^2\,dy = \frac{(b-a)^2}{12}
\]
The properties derived in Section 3.2.3 can be similarly derived for the variance of continuous random variables:
\[
VAR[c] = 0 \qquad (3.100)
\]
\[
VAR[X + c] = VAR[X] \qquad (3.101)
\]
\[
VAR[cX] = c^2\, VAR[X] \qquad (3.102)
\]
where $c$ is a constant.
The mean and variance are the two most important parameters used in summarizing the PDF of a random variable. Other parameters and moments are occasionally used. For example, the skewness, defined by $E[(X - E[X])^3] / STD[X]^3$, measures the degree of asymmetry about the mean. It is easy to show that if a PDF is symmetric about its mean, then its skewness is zero. The point to note with these parameters of the PDF is that each involves the expected value of a higher power of $X$.
3.3.4 Important Continuous Random Variables
The Uniform Random Variable
The uniform random variable arises in situations where all values in an interval of the real line are equally likely to occur.
• As introduced in Example 3.15, the uniform random variable $U$ on the interval $[a, b]$ has PDF:
\[
f_U(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & x < a \text{ and } x > b \end{cases} \qquad (3.103)
\]
• and CDF:
\[
F_U(x) = \begin{cases} 0 & x < a \\ \dfrac{x-a}{b-a} & a \le x \le b \\ 1 & x > b \end{cases} \qquad (3.104)
\]
• As found in Examples 3.15 and 3.17:
\[
E[U] = \frac{a+b}{2} \qquad (3.105)
\]
\[
VAR[U] = \frac{(b-a)^2}{12} \qquad (3.106)
\]
The uniform random variable appears in many situations that involve equally likely continuous random variables. Obviously $U$ can only be defined over intervals that are finite in length.
The Exponential Random Variable
The exponential random variable arises in the modeling of the time between occurrences of events (e.g., the time between customer demands for call connections), and in the modeling of the lifetime of devices and systems.
• The exponential random variable $X$ with parameter $\lambda$ has PDF:
\[
f_X(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases} \qquad (3.107)
\]
• and CDF:
\[
F_X(x) = \begin{cases} 1 - e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases} \qquad (3.108)
\]
The parameter $\lambda$ is the rate at which events occur, so $F_X(x)$, the probability of an event occurring by time $x$, increases as the rate $\lambda$ increases.
• The expectation is given by:
\[
E[X] = \int_{0}^{\infty} t\,\lambda e^{-\lambda t}\,dt \qquad (3.109)
\]
Using integration by parts ($\int u\,dv = uv - \int v\,du$), with $u = t$ and $dv = \lambda e^{-\lambda t}\,dt$:
\[
E[X] = \Big[-t e^{-\lambda t}\Big]_0^{\infty} + \int_{0}^{\infty} e^{-\lambda t}\,dt
= \lim_{t \to \infty} \big(-t e^{-\lambda t}\big) - 0 + \Big[\frac{-e^{-\lambda t}}{\lambda}\Big]_0^{\infty}
= \lim_{t \to \infty} \frac{-e^{-\lambda t}}{\lambda} + \frac{1}{\lambda} = \frac{1}{\lambda} \qquad (3.110)
\]
where we have used the fact that $e^{-\lambda t}$ and $t e^{-\lambda t}$ go to zero as $t$ approaches infinity.
• [Exercise] It can be shown that the variance is:
\[
VAR[X] = \frac{1}{\lambda^2} \qquad (3.111)
\]
In event inter-arrival situations, $\lambda$ is in units of events per second and $1/\lambda$ is in units of seconds per event inter-arrival.
The exponential random variable satisfies the memoryless property:
\[
P(X > t + h \mid X > t) = P(X > h) \qquad (3.112)
\]
The expression on the left side is the probability of having to wait at least $h$ additional seconds given that one has already been waiting $t$ seconds. The expression on the right side is the probability of waiting at least $h$ seconds when one first begins to wait. Thus the probability of waiting at least an additional $h$ seconds is the same regardless of how long one has already been waiting! This property can be proved as follows:
\[
P(X > t + h \mid X > t) = \frac{P(\{X > t + h\} \cap \{X > t\})}{P(X > t)} \quad \text{for } h > 0
= \frac{P(X > t + h)}{P(X > t)} = \frac{e^{-\lambda(t+h)}}{e^{-\lambda t}} = e^{-\lambda h} = P(X > h)
\]
The memoryless property of the exponential random variable makes it the cornerstone of the theory of Markov chains, which is used extensively in evaluating the performance of computer systems and communication networks. It can be shown that the exponential random variable is the only continuous random variable that satisfies the memoryless property.
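The memoryless property can also be seen empirically. The following sketch (illustrative parameter values, not from the notes) simulates exponential waiting times and compares $P(X > t + h \mid X > t)$ with $P(X > h)$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, h = 2.0, 0.5, 0.3                        # illustrative rate and waiting times
x = rng.exponential(scale=1/lam, size=1_000_000)

p_uncond = np.mean(x > h)                        # P(X > h)
p_cond = np.mean(x[x > t] > t + h)               # P(X > t + h | X > t)
print(p_uncond, p_cond)                          # both close to exp(-lam*h) = 0.5488
```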
The Gaussian (Normal) Random Variable
There are many real-world situations where one deals with a random variable ๐‘‹ that consists
of the sum of a large number of “small” random variables. The exact description of the PDF
of ๐‘‹ in terms of the component random variables can become quite complex and unwieldy.
However, under very general conditions, as the number of components becomes large, the CDF
of ๐‘‹ approaches that of the Gaussian random variable. This random variable appears so often in
problems involving randomness that it is known as the “normal” random variable.
Figure 3.7: Probability density function of Gaussian random variable.
• The PDF for the Gaussian random variable $X$ is given by:
\[
f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-m)^2 / 2\sigma^2}, \qquad -\infty < x < \infty \qquad (3.113)
\]
where $m$ and $\sigma > 0$ are real numbers, denoting the mean and standard deviation of $X$. As shown in Figure 3.7, the Gaussian PDF is a "bell-shaped" curve centered and symmetric about $m$ and whose "width" increases with $\sigma$. In general, the Gaussian PDF is centered about the point $x = m$ and has a width that is proportional to $\sigma$. The special case when $m = 0$ and $\sigma = 1$ is called the "standard normal" random variable. Because Gaussian random variables are so commonly used in such a wide variety of applications, it is standard practice to introduce a shorthand notation to describe a Gaussian random variable, $X \sim N(m, \sigma^2)$.
• The CDF of the Gaussian random variable is given by:
\[
P(X \le x) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{x} e^{-(x'-m)^2 / 2\sigma^2}\,dx' \qquad (3.114)
\]
The change of variable $t = (x' - m)/\sigma$ results in:
\[
F_X(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{(x-m)/\sigma} e^{-t^2/2}\,dt = \Phi\Big(\frac{x - m}{\sigma}\Big) \qquad (3.115)
\]
where $\Phi(x)$ is the CDF of a Gaussian random variable with $m = 0$ and $\sigma = 1$:
\[
\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt
\]
Therefore any probability involving an arbitrary Gaussian random variable can be expressed in terms of $\Phi(x)$.
• Note that the PDF of a Gaussian random variable is symmetric about the point $m$. Therefore the mean is $E[X] = m$ (as also defined above).
• Since $\sigma$ is the standard deviation, the variance is $VAR[X] = \sigma^2$.
In electrical engineering it is customary to work with the Q-function, which is defined by:
\[
Q(x) = 1 - \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-t^2/2}\,dt \qquad (3.116)
\]
$Q(x)$ is simply the probability of the "tail" of the PDF. The symmetry of the PDF implies that:
\[
Q(0) = 1/2 \quad \text{and} \quad Q(-x) = 1 - Q(x) \qquad (3.117)
\]
From Equation 3.114, which corresponds to $P(X \le x)$, the following can be derived:
\[
P(X > x) = Q\Big(\frac{x - m}{\sigma}\Big) \qquad (3.118)
\]
Figure 3.8: Standardized integrals related to the Gaussian CDF and the Φ and ๐‘„ functions.
Figure 3.8 shows the standardized integrals related to the Gaussian CDF and the Φ and ๐‘„ functions.
It can be shown that it is impossible to express the CDF integral in closed form. However, as with other important integrals that cannot be expressed in closed form (e.g., Bessel functions), one can always look up values of the required CDF in tables, or use numerical approximations of the desired integral to any desired accuracy. The following expression has been found to give good accuracy for $Q(x)$ over the entire range $0 < x < \infty$:
\[
Q(x) \approx \Big(\frac{1}{(1-a)x + a\sqrt{x^2 + b}}\Big) \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \qquad (3.119)
\]
where $a = 1/\pi$ and $b = 2\pi$.
In some problems, we are interested in finding the value of $x$ for which $Q(x) = 10^{-k}$. Table 3.1 gives these values for $k = 1, \ldots, 10$.
Table 3.1: Look-up table for $Q(x) = 10^{-k}$.

 k    x such that Q(x) = 10^{-k}
 1    1.2815
 2    2.3263
 3    3.0902
 4    3.7190
 5    4.2649
 6    4.7535
 7    5.1993
 8    5.6120
 9    5.9978
10    6.3613
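As a brief check (a sketch added here, not part of the notes), the exact Q-function can be written in terms of the complementary error function and compared against the approximation in Equation 3.119 at a few of the arguments listed in Table 3.1.

```python
import math

def Q_exact(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_approx(x, a=1/math.pi, b=2*math.pi):
    # Approximation from Equation 3.119
    return math.exp(-x**2 / 2) / (((1 - a) * x + a * math.sqrt(x**2 + b)) * math.sqrt(2 * math.pi))

for x in (1.2815, 2.3263, 4.2649):
    print(x, Q_exact(x), Q_approx(x))   # Q_exact returns roughly 1e-1, 1e-2, 1e-5
```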
The Gaussian random variable plays a very important role in communication systems, where
transmission signals are corrupted by noise voltages resulting from the thermal motion of electrons.
It can be shown from physical principles that these voltages will have a Gaussian PDF.
Example 3.18
A communication system accepts a positive voltage $V$ as input and outputs a voltage $Y = \alpha V + N$, where $\alpha = 10^{-2}$ and $N$ is a Gaussian random variable with parameters $m = 0$ and $\sigma = 2$. Find the value of $V$ that gives $P(Y < 0) = 10^{-6}$.
Solution. The probability $P(Y < 0)$ is written in terms of $N$ as follows:
\[
P(Y < 0) = P(\alpha V + N < 0) = P(N < -\alpha V) = \Phi\Big(\frac{-\alpha V}{\sigma}\Big) = Q\Big(\frac{\alpha V}{\sigma}\Big) = 10^{-6}
\]
From Table 3.1 we see that the argument of the Q-function should be $\alpha V / \sigma = 4.753$. Thus $V = 950.6$.
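The same answer can be obtained numerically (a sketch, assuming SciPy is available); `norm.isf` inverts the Gaussian tail probability, playing the role of the Q-function look-up.

```python
from scipy.stats import norm

alpha, sigma, target = 1e-2, 2.0, 1e-6
q_arg = norm.isf(target)        # x such that Q(x) = 1e-6, approximately 4.7534
V = q_arg * sigma / alpha
print(q_arg, V)                 # approximately 4.7534 and 950.7, matching the table-based answer
```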
The Gamma Random Variable
The Gamma random variable is a versatile random variable that appears in many applications.
For example, it is used to model the time required to service customers in queueing systems, the
lifetime of devices and systems in reliability studies, and the defect clustering behavior in VLSI
chips.
• The PDF of the gamma random variable has two parameters, $\alpha > 0$ and $\lambda > 0$, and is given by:
\[
f_X(x) = \frac{\lambda (\lambda x)^{\alpha - 1} e^{-\lambda x}}{\Gamma(\alpha)}, \qquad 0 < x < \infty \qquad (3.120)
\]
where $\Gamma$ is the gamma function, which is defined by:
\[
\Gamma(\alpha) = \int_{0}^{\infty} x^{\alpha - 1} e^{-x}\,dx, \qquad \alpha > 0 \qquad (3.121)
\]
The gamma function has the following properties:
\[
\Gamma(1/2) = \sqrt{\pi}, \qquad
\Gamma(\alpha + 1) = \alpha\,\Gamma(\alpha) \ \text{ for } \alpha > 0, \qquad
\Gamma(m + 1) = m! \ \text{ for } m \text{ a nonnegative integer}
\]
• The CDF of the gamma random variable is given by:
\[
F_X(x) = \frac{\gamma(\alpha, \lambda x)}{\Gamma(\alpha)}, \qquad 0 < x < \infty \qquad (3.122)
\]
where the incomplete gamma function $\gamma$ is given by:
\[
\gamma(\alpha, \beta) = \int_{0}^{\beta} x^{\alpha - 1} e^{-x}\,dx \qquad (3.123)
\]
• The mean of the gamma random variable is:
\[
E[X] = \alpha / \lambda \qquad (3.124)
\]
• The variance of the gamma random variable is:
\[
VAR[X] = \alpha / \lambda^2 \qquad (3.125)
\]
The versatility of the gamma random variable is due to the richness of the gamma function Γ(๐›ผ).
Figure 3.9: Probability density function of gamma random variable.
The PDF of the gamma random variable can assume a variety of shapes as shown in Figure 3.9. By
varying the parameters ๐œ† and ๐›ผ it is possible to fit the gamma PDF to many types of experimental
data. The exponential random variable is obtained by letting ๐›ผ = 1. By letting ๐œ† = 1/2 and ๐›ผ = ๐‘˜/2,
where ๐‘˜ is a positive integer, we obtain the Chi-square random variable, which appears in
certain statistical problems and wireless communications applications. The m-Erlang random
variable is obtained when ๐›ผ = ๐‘š a positive integer. The m-Erlang random variable is used in the
system reliability models and in queueing systems models and plays a fundamental role in the
study of wireline telecommunication networks.
In general, the CDF of the gamma random variable does not have a closed-form expression.
However, the special case of the m-Erlang random variable does have a closed-form expression.
3.4 The Markov and Chebyshev Inequalities
In general, the mean and variance of a random variable do not provide enough information to determine the CDF/PDF. However, the mean and variance of a random variable $X$ do allow us to obtain bounds for probabilities of the form $P(|X| \ge t)$.
Definition 3.23. Markov inequality: Suppose first that $X$ is a nonnegative random variable with mean $E[X]$. The Markov inequality then states that:
\[
P(X \ge a) \le \frac{E[X]}{a} \quad \text{for } X \text{ nonnegative} \qquad (3.126)
\]
The Markov inequality can be obtained as follows:
\[
E[X] = \int_{0}^{a} t f_X(t)\,dt + \int_{a}^{\infty} t f_X(t)\,dt \ge \int_{a}^{\infty} t f_X(t)\,dt \ge \int_{a}^{\infty} a f_X(t)\,dt \ge a P(X \ge a)
\]
Example 3.19
The mean height of children in a kindergarten class is 70 cm. Find the bound on the probability that a kid in the class is taller than 140 cm.
Solution. The Markov inequality gives $P(H \ge 140) \le 70/140 = 0.5$.
The bound in the above example appears to be ridiculous. However, a bound, by its very nature,
must take the worst case into consideration. One can easily construct a random variable for which
the bound given by the Markov inequality is exact. The reason we know that the bound in the
above example is ridiculous is that we have knowledge about the variability of the children’s
height about their mean.
Definition 3.24. Chebyshev inequality: Suppose that the mean $E[X] = m$ and the variance $VAR[X] = \sigma^2$ of a random variable are known, and that we are interested in bounding $P(|X - m| \ge a)$. The Chebyshev inequality states that:
\[
P(|X - m| \ge a) \le \frac{\sigma^2}{a^2} \qquad (3.127)
\]
The Chebyshev inequality is a consequence of the Markov inequality. Let $D^2 = (X - m)^2$ be the squared deviation from the mean. Then the Markov inequality applied to $D^2$ gives:
\[
P(D^2 \ge a^2) \le \frac{E[(X - m)^2]}{a^2} = \frac{\sigma^2}{a^2}
\]
and note that $\{D^2 \ge a^2\}$ and $\{|X - m| \ge a\}$ are equivalent events. Suppose that a random variable $X$ has zero variance; then the Chebyshev inequality implies that $P(X = m) = 1$, i.e., the random variable is equal to its mean with probability one, and hence is constant in almost all experiments.
Example 3.20
If $X$ is a Gaussian random variable with mean $m$ and variance $\sigma^2$, find the upper bound for $P(|X - m| \ge k\sigma)$ according to the Chebyshev inequality.
Solution. The Chebyshev inequality for $a = k\sigma$ gives:
\[
P(|X - m| \ge k\sigma) \le \frac{1}{k^2}
\]
If $k = 2$, the Chebyshev inequality gives the upper bound 0.25. We also know that for Gaussian random variables:
\[
P(|X - m| \ge 2\sigma) = 2Q(2) \approx 0.0456
\]
We see that for certain random variables, the Chebyshev inequality can give rather loose bounds.
Nevertheless, the inequality is useful in situations in which we have no knowledge about the
distribution of a given random variable other than its mean and variance. We will later use the
Chebyshev inequality to prove that the arithmetic average of independent measurements of the
same random variable is highly likely to be close to the expected value of the random variable
when the number of measurements is large.
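The looseness of the bound in Example 3.20 can be tabulated directly; the short sketch below (not part of the notes) compares the Chebyshev bound $1/k^2$ with the exact Gaussian tail $2Q(k)$ for a few values of $k$.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

for k in (1, 2, 3, 4):
    chebyshev = 1 / k**2      # Chebyshev bound on P(|X - m| >= k*sigma)
    gaussian = 2 * Q(k)       # exact value for a Gaussian random variable
    print(k, chebyshev, gaussian)
# For k = 2 this prints 0.25 versus roughly 0.0455.
```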
If more information is available than just the mean and variance, then it is possible to obtain bounds that are tighter than the Markov and Chebyshev inequalities. Consider the Markov inequality again. The region of interest is $A = \{t \ge a\}$, so let $I_A(t)$ be the indicator function, i.e. $I_A(t) = 1$ if $t \in A$ and $I_A(t) = 0$ otherwise. The key step in the derivation is to note that $t/a \ge 1$ in the region of interest. In effect we bounded $I_A(t)$ by $t/a$ and then have:
\[
P(X \ge a) = \int_{0}^{\infty} I_A(t)\, f_X(t)\,dt \le \int_{0}^{\infty} (t/a)\, f_X(t)\,dt = E[X]/a
\]
By changing the upper bound on $I_A(t)$, we can obtain different bounds on $P(X \ge a)$. Consider the bound $I_A(t) \le e^{s(t-a)}$, also shown in Figure 3.10, where $s > 0$; then the following bound can be obtained.
Definition 3.25. Chernoff bound: Suppose $X$ is a random variable, then:
\[
P(X \ge a) \le \int_{0}^{\infty} e^{s(t-a)}\, f_X(t)\,dt = e^{-sa}\, E[e^{sX}] \qquad (3.128)
\]
This bound is called the Chernoff bound, which can be seen to depend on the expected value of an exponential function of $X$. This function is called the moment generating function.
Figure 3.10: Bounds on indicator function for ๐ด = {๐‘ก ≥ ๐‘Ž}.
Further Reading
1. Alberto Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering, 3rd ed., Pearson, 2007: chapters 3 and 4.
2. Scott L. Miller, Donald Childers, Probability and Random Processes: With Applications to Signal Processing and Communications, 2nd ed., Elsevier, 2012: sections 2.8 and 2.9, and chapters 3 and 4.
3. Anthony Hayter, Probability and Statistics for Engineers and Scientists, 4th ed., Brooks/Cole, Cengage Learning, 2012: chapters 2 to 5.
4 Two or More Random Variables
Many random experiments involve several random variables. In some experiments a number of
different quantities are measured. For example, the voltage signals at several points in a circuit at
some specific time may be of interest. Other experiments involve the repeated measurement of
a certain quantity such as the repeated measurement (“sampling”) of the amplitude of an audio
or video signal that varies with time. In this chapter, we extend the random variable concepts
already introduced to two or more random variables. In a sense we have already covered all the
fundamental concepts of probability and random variables, and we are “simply” elaborating on the
case of two or more random variables. Nevertheless, there are significant analytical techniques
that need to be learned.
4.1 Pairs of Random Variables
Some experiments involve two random variables, e.g. the study of a system with a random input.
Due to the randomness of the input, the output will naturally be random as well. Quite often it is
necessary to characterize the relationship between the input and the output. A pair of random
variables can be used to characterize this relationship: one for the input and another for the output.
Another class of examples involving random variables are those involving spatial coordinates in
two dimensions. A pair of random variables can be used to probabilistically describe the position
of an object which is subject to various random forces. There are endless examples of situations
where we are interested in two random quantities that may or may not be related to one another,
for example, the height and weight of a student, or the temperature and relative humidity at a
certain place and time.
Consider an experiment ๐ธ whose outcomes lie in a sample space, ๐‘†. A two-dimensional random variable is a mapping of the points in the sample space to ordered pairs (๐‘ฅ, ๐‘ฆ). Usually, when
dealing with a pair of random variables, the sample space naturally partitions itself so that it can
be viewed as a combination of two simpler sample spaces. For example, suppose the experiment
was to observe the height and weight of a typical student. The range of student heights could
fall within some set which we call sample space ๐‘† 1 , while the range of student weights could
fall within the space ๐‘† 2 . The overall sample space of the experiment could then be viewed as
๐‘† 1 × ๐‘† 2 . For any outcome ๐‘  ∈ ๐‘† of this experiment, the pair of random variables (๐‘‹, ๐‘Œ ) is merely a
mapping of the outcome ๐‘  to a pair of numerical values ๐‘ฅ (๐‘ ), ๐‘ฆ (๐‘ ). In the case of our height/weight
experiment, it would be natural to choose ๐‘ฅ (๐‘ ) to be the height of the student, while ๐‘ฆ (๐‘ ) is the
weight of the student. While the density functions ๐‘“๐‘‹ (๐‘ฅ) and ๐‘“๐‘Œ (๐‘ฆ) do partially characterize
the experiment, they do not completely describe the situation. It would be natural to expect
that the height and weight are somehow related to each other. While it may not be very rare to
have a student 180 cm tall nor unusual to have a student who weighs 55 kg, it is probably rare
indeed to have a student who is both 180 cm tall and weighs 55 kg. Therefore, to characterize the
relationship between a pair of random variables, it is necessary to look at the joint probabilities
of events relating to both random variables.
4.1.1 Joint Cumulative Distribution Function
We start with the notion of a joint CDF.
Definition 4.1. Joint Cumulative Distribution Function: The joint CDF of a pair of random
variables, {๐‘‹, ๐‘Œ }, is ๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) = ๐‘ƒ (๐‘‹ ≤ ๐‘ฅ, ๐‘Œ ≤ ๐‘ฆ). That is, the joint CDF is the joint probability of
the two events {๐‘‹ ≤ ๐‘ฅ } and {๐‘Œ ≤ ๐‘ฆ}.
As with the CDF of a single random variable, not any function can be a joint CDF. The joint CDF
of a pair of random variables will satisfy properties similar to those satisfied by the CDFs of single
random variables.
• Since the joint CDF is a probability, it must take on a value between 0 and 1, i.e. 0 ≤
๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) ≤ 1
• ๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) evaluated at either ๐‘ฅ = −∞ or ๐‘ฆ = −∞ (or both) must be zero and ๐น๐‘‹ ,๐‘Œ (∞, ∞)
must be one, i.e.
๐น๐‘‹ ,๐‘Œ (−∞, −∞) = 0
๐น๐‘‹ ,๐‘Œ (−∞, ๐‘ฆ) = 0
๐น๐‘‹ ,๐‘Œ (๐‘ฅ, −∞) = 0
๐น๐‘‹ ,๐‘Œ (∞, ∞) = 1
• For ๐‘ฅ 1 ≤ ๐‘ฅ 2 and ๐‘ฆ1 ≤ ๐‘ฆ2 , {๐‘‹ ≤ ๐‘ฅ 1 } ∩ {๐‘Œ ≤ ๐‘ฆ1 } is a subset of {๐‘‹ ≤ ๐‘ฅ 2 } ∩ {๐‘Œ ≤ ๐‘ฆ2 } so that
๐น๐‘‹ ,๐‘Œ (๐‘ฅ 1, ๐‘ฆ1 ) ≤ ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 2, ๐‘ฆ2 ). That is, the CDF is a monotonic, non-decreasing function of
both ๐‘ฅ and ๐‘ฆ.
• Since the event {๐‘‹ ≤ ∞} must happen, then {๐‘‹ ≤ ∞} ∩ {๐‘Œ ≤ ๐‘ฆ} = {๐‘Œ ≤ ๐‘ฆ}, so that
๐น๐‘‹ ,๐‘Œ (∞, ๐‘ฆ) = ๐น๐‘Œ (๐‘ฆ). Likewise, ๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ∞) = ๐น๐‘‹ (๐‘ฅ). In the context of joint CDFs, ๐น๐‘‹ (๐‘ฅ) and
๐น๐‘Œ (๐‘ฆ) are referred to as the marginal CDFs of ๐‘‹ and ๐‘Œ , respectively.
• Consider using a joint CDF to evaluate the probability that the pair of random variables
(๐‘‹, ๐‘Œ ) falls into a rectangular region bounded by the points (๐‘ฅ 1, ๐‘ฆ1 ), (๐‘ฅ 2, ๐‘ฆ1 ), (๐‘ฅ 1, ๐‘ฆ2 ) and
(๐‘ฅ 2, ๐‘ฆ2 ) (white rectangle is figure 4.1). Evaluating ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 2, ๐‘ฆ2 ) gives the probability that the
random variable falls anywhere below or to the left of the point (๐‘ฅ 2, ๐‘ฆ2 ); this includes all
of the area in the desired rectangle, plus everything below and to the left of the desired
rectangle. The probability of the random variable falling to the left of the rectangle can be
subtracted off using ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 1, ๐‘ฆ2 ). Similarly, the region below the rectangle can be subtracted
off using ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 2, ๐‘ฆ1 ) (two shaded regions). In subtracting off these two quantities, we have
subtracted twice the probability of the pair falling both below and to the left of the desired
rectangle (dark-shaded region). Hence we must add back this probability using ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 1, ๐‘ฆ1 ).
That is:
๐‘ƒ (๐‘ฅ 1 < ๐‘‹ ≤ ๐‘ฅ 2, ๐‘ฆ1 < ๐‘Œ ≤ ๐‘ฆ2 ) = ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 2, ๐‘ฆ2 ) − ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 1, ๐‘ฆ2 ) − ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 2, ๐‘ฆ1 ) + ๐น๐‘‹ ,๐‘Œ (๐‘ฅ 1, ๐‘ฆ1 ) ≥ 0.
(4.1)
Figure 4.1: Illustrating the evaluation of the probability of a pair of random variables falling in a
rectangular region.
Equation 4.1 tells us how to calculate the probability of the pair of random variables falling in
a rectangular region. Often, we are interested in also calculating the probability of the pair of
random variables falling in non rectangular (e.g., a circle or triangle) region. This can be done by
forming the required region using many infinitesimal rectangles and then repeatedly applying
Equation 4.1.
Example 4.1
Consider a pair of random variables which are uniformly distributed over the unit square
(i.e., 0 < ๐‘ฅ < 1, 0 < ๐‘ฆ < 1). Find the joint CDF.
Solution. The CDF is:
\[
F_{X,Y}(x, y) = \begin{cases}
0, & x < 0 \text{ or } y < 0 \\
x, & 0 \le x \le 1,\ y > 1 \\
y, & x > 1,\ 0 \le y \le 1 \\
xy, & 0 \le x \le 1,\ 0 \le y \le 1 \\
1, & x > 1,\ y > 1
\end{cases}
\]
Even this very simple example leads to a rather cumbersome function. Nevertheless, it is straightforward to verify that this function does indeed satisfy all the properties of a joint CDF. From this joint CDF, the marginal CDF of $X$ can be found to be:
\[
F_X(x) = F_{X,Y}(x, \infty) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}
\]
Hence, the marginal CDF of $X$ is a uniform distribution. The same statement holds for $Y$ as well.
4.1.2 Joint Probability Density Functions
As seen in Example 4.1, even the simplest joint random variables can lead to CDFs which are
quite unwieldy. As a result, working with joint CDFs can be difficult. In order to avoid extensive
use of joint CDFs, attention is now turned to the two dimensional equivalent of the PDF.
Definition 4.2. Joint Probability Density Function: The joint probability density function of a pair of random variables $(X, Y)$ evaluated at the point $(x, y)$ is:
\[
f_{X,Y}(x, y) = \lim_{\varepsilon_x \to 0,\ \varepsilon_y \to 0} \frac{P(x \le X < x + \varepsilon_x,\ y \le Y < y + \varepsilon_y)}{\varepsilon_x \varepsilon_y} \qquad (4.2)
\]
Similar to the one-dimensional case, the joint PDF is the probability that the pair of random variables $(X, Y)$ lies in an infinitesimal region defined by the point $(x, y)$, normalised by the area of the region.
For a single random variable, the PDF was the derivative of the CDF. By applying Equation 4.1 to the definition of the joint PDF, a similar relationship is obtained.
Theorem 4.1
The joint PDF $f_{X,Y}(x, y)$ can be obtained from the joint CDF $F_{X,Y}(x, y)$ by taking a partial derivative with respect to each variable. That is,
\[
f_{X,Y}(x, y) = \frac{\partial^2}{\partial x\, \partial y} F_{X,Y}(x, y) \qquad (4.3)
\]
Proof. Using Equation 4.1,
\[
P(x \le X < x + \varepsilon_x,\ y \le Y < y + \varepsilon_y)
= F_{X,Y}(x + \varepsilon_x, y + \varepsilon_y) - F_{X,Y}(x, y + \varepsilon_y) - F_{X,Y}(x + \varepsilon_x, y) + F_{X,Y}(x, y)
\]
\[
= \big[F_{X,Y}(x + \varepsilon_x, y + \varepsilon_y) - F_{X,Y}(x, y + \varepsilon_y)\big] - \big[F_{X,Y}(x + \varepsilon_x, y) - F_{X,Y}(x, y)\big]
\]
Dividing by $\varepsilon_x$ and taking the limit as $\varepsilon_x \to 0$ results in
\[
\lim_{\varepsilon_x \to 0} \frac{P(x \le X < x + \varepsilon_x,\ y \le Y < y + \varepsilon_y)}{\varepsilon_x}
= \frac{\partial}{\partial x} F_{X,Y}(x, y + \varepsilon_y) - \frac{\partial}{\partial x} F_{X,Y}(x, y)
\]
Dividing by $\varepsilon_y$ and taking the limit as $\varepsilon_y \to 0$ gives the desired result:
\[
f_{X,Y}(x, y) = \lim_{\varepsilon_x \to 0,\ \varepsilon_y \to 0} \frac{P(x \le X < x + \varepsilon_x,\ y \le Y < y + \varepsilon_y)}{\varepsilon_x \varepsilon_y}
= \lim_{\varepsilon_y \to 0} \frac{\frac{\partial}{\partial x} F_{X,Y}(x, y + \varepsilon_y) - \frac{\partial}{\partial x} F_{X,Y}(x, y)}{\varepsilon_y}
= \frac{\partial^2}{\partial x\, \partial y} F_{X,Y}(x, y)
\]
This theorem shows that we can obtain a joint PDF from a joint CDF by differentiating with respect to each variable. The converse of this statement is that we can obtain a joint CDF from a joint PDF by integrating with respect to each variable. Specifically:
\[
F_{X,Y}(x, y) = \int_{-\infty}^{y} \int_{-\infty}^{x} f_{X,Y}(u, v)\,du\,dv \qquad (4.4)
\]
Example 4.2
Consider the pair of random variables with uniform distribution in Example 4.1. Find the joint PDF.
Solution. By differentiating the joint CDF with respect to both $x$ and $y$, the joint PDF is
\[
f_{X,Y}(x, y) = \begin{cases} 1, & 0 < x < 1 \text{ and } 0 < y < 1 \\ 0, & \text{otherwise} \end{cases}
\]
which is much simpler than the joint CDF.
From the definition of the joint PDF and its relationship with the joint CDF, several properties of joint PDFs can be inferred:
(i) $f_{X,Y}(x, y) \ge 0$
(ii) $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1$
(iii) $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$ and $f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx$
(iv) $P(x_1 < X \le x_2,\ y_1 < Y \le y_2) = \int_{y_1}^{y_2}\int_{x_1}^{x_2} f_{X,Y}(x, y)\,dx\,dy$
Property (i) follows directly from the definition of the joint PDF since both the numerator and denominator there are nonnegative. Property (ii) results from the relationship in Equation 4.4 together with the fact that $F_{X,Y}(\infty, \infty) = 1$. This is the normalization integral for joint PDFs. These first two properties form a set of sufficient conditions for a function of two variables to be a valid joint PDF. Property (iii) is obtained by noting that the marginal CDF of $X$ is $F_X(x) = F_{X,Y}(x, \infty)$. Using Equation 4.4 then results in $F_X(x) = \int_{-\infty}^{\infty}\int_{-\infty}^{x} f_{X,Y}(u, y)\,du\,dy$. Differentiating this expression with respect to $x$ produces the expression in property (iii) for the marginal PDF of $X$. A similar derivation produces the marginal PDF of $Y$. Hence, the marginal PDFs are obtained by integrating out the unwanted variable in the joint PDF. The last property is obtained by combining Equations 4.1 and 4.4.
Property (iv) of joint PDFs specifies how to compute the probability that a pair of random variables
takes on a value in a rectangular region. Often, we are interested in computing the probability
that the pair of random variables falls in a region which is not rectangularly shaped. In general,
suppose we wish to compute ๐‘ƒ ((๐‘‹, ๐‘Œ ) ∈ ๐ด), where ๐ด is the region illustrated in Figure 4.2. This
general region can be approximated as a union of many nonoverlapping rectangular regions as
shown in the figure. In fact, as we make the rectangles ever smaller, the approximation improves
to the point where the representation becomes exact in the limit as the rectangles get infinitely
small. That is, any region can be represented as an infinite number of infinitesimal rectangular
regions so that $A = \bigcup_i R_i$, where $R_i$ represents the $i$th rectangular region. The probability that the random pair falls in $A$ is then computed as:
\[
P((X, Y) \in A) = \sum_i P((X, Y) \in R_i) = \sum_i \iint_{R_i} f_{X,Y}(x, y)\,dx\,dy \qquad (4.5)
\]
The sum of the integrals over the rectangular regions can be replaced by an integral over the original region $A$:
\[
P((X, Y) \in A) = \iint_{A} f_{X,Y}(x, y)\,dx\,dy \qquad (4.6)
\]
This important result shows that the probability of a pair of random variables falling in some
two-dimensional region ๐ด is found by integrating the joint PDF of the two random variables over
the region ๐ด.
Figure 4.2: Approximation of an arbitrary region by a series of infinitesimal rectangles.
Example 4.3
Suppose that a pair of random variables has the joint PDF given by:
\[
f_{X,Y}(x, y) = c\, e^{-x}\, e^{-y/2}\, u(x)\, u(y)
\]
Find (a) the constant value $c$ and (b) the probability of the event $\{X > Y\}$.
Solution. (a) The constant $c$ is found using the normalization integral:
\[
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = \int_{0}^{\infty}\int_{0}^{\infty} c\, e^{-x}\, e^{-y/2}\,dx\,dy = 1 \;\Rightarrow\; c = 1/2
\]
(b) This probability can be viewed as the probability of the pair $(X, Y)$ falling in the region $A$ that is now defined as $A = \{(x, y) : x > y\}$. This probability is calculated as:
\[
P(X > Y) = \iint_{x > y} f_{X,Y}(x, y)\,dx\,dy = \int_{0}^{\infty}\int_{y}^{\infty} \tfrac{1}{2} e^{-x} e^{-y/2}\,dx\,dy = \int_{0}^{\infty} \tfrac{1}{2} e^{-3y/2}\,dy = 1/3
\]
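Part (b) of Example 4.3 is easy to check by simulation (a sketch with an illustrative sample size): the joint PDF factors into independent exponential densities, so $X$ and $Y$ can be drawn separately.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.exponential(scale=1.0, size=n)   # X has PDF e^{-x} u(x)
y = rng.exponential(scale=2.0, size=n)   # Y has PDF (1/2) e^{-y/2} u(y)
print(np.mean(x > y))                    # approximately 1/3
```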
4.1.3 Joint Probability Mass Functions
When the random variables are discrete rather than continuous, it is often more convenient to
work with probability mass functions (PMFs) rather than PDFs or CDFs. It is straightforward to
extend the concept of the PMF to a pair of random variables.
Definition 4.3. Joint Probability Mass Function: The joint PMF for a pair of discrete random
variables ๐‘‹ and ๐‘Œ is given by: ๐‘ƒ๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) = ๐‘ƒ ({๐‘‹ = ๐‘ฅ } ∩ {๐‘Œ = ๐‘ฆ})
In particular, suppose the random variable ๐‘‹ takes on values from the set {๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘€ } and
the random variable ๐‘Œ takes on values from the set {๐‘ฆ1, ๐‘ฆ2, ..., ๐‘ฆ๐‘ }. Here, either ๐‘€ or ๐‘ could be
potentially infinite, or both could be finite. Several properties of the joint PMF analogous to those
developed for joint PDFs should be apparent.
(i)
\[
0 \le P_{X,Y}(x, y) \le 1 \qquad (4.7)
\]
(ii)
\[
\sum_{m=1}^{M} \sum_{n=1}^{N} P_{X,Y}(x_m, y_n) = 1 \qquad (4.8)
\]
(iii)
\[
\sum_{n=1}^{N} P_{X,Y}(x_m, y_n) = P_X(x_m), \qquad \sum_{m=1}^{M} P_{X,Y}(x_m, y_n) = P_Y(y_n) \qquad (4.9)
\]
(iv)
\[
P((X, Y) \in A) = \sum_{(x, y) \in A} P_{X,Y}(x, y) \qquad (4.10)
\]
Furthermore, the joint PDF or the joint CDF of a pair of discrete random variables can be related to the joint PMF through the use of delta functions or step functions by:
\[
f_{X,Y}(x, y) = \sum_{m=1}^{M} \sum_{n=1}^{N} P_{X,Y}(x_m, y_n)\,\delta(x - x_m)\,\delta(y - y_n) \qquad (4.11)
\]
\[
F_{X,Y}(x, y) = \sum_{m=1}^{M} \sum_{n=1}^{N} P_{X,Y}(x_m, y_n)\,u(x - x_m)\,u(y - y_n) \qquad (4.12)
\]
Usually, it is most convenient to work with PMFs when the random variables are discrete. However, if the random variables are mixed (i.e., one is discrete and one is continuous), then it becomes necessary to work with PDFs or CDFs, since the PMF will not be meaningful for the continuous random variable.
Example 4.4
Two discrete random variables $N$ and $M$ have a joint PMF given by:
\[
P_{N,M}(n, m) = \frac{(n+m)!}{n!\,m!} \frac{a^n b^m}{(a + b + 1)^{n+m+1}}, \qquad m = 0, 1, 2, 3, \ldots,\ n = 0, 1, 2, 3, \ldots
\]
Find the marginal PMFs $P_N(n)$ and $P_M(m)$.
Solution. The marginal PMF of $N$ can be found by summing over $m$ in the joint PMF:
\[
P_N(n) = \sum_{m=0}^{\infty} P_{N,M}(n, m) = \sum_{m=0}^{\infty} \frac{(n+m)!}{n!\,m!} \frac{a^n b^m}{(a + b + 1)^{n+m+1}}
\]
To evaluate this series, the following identity is used:
\[
\sum_{m=0}^{\infty} \frac{(n+m)!}{n!\,m!}\, x^m = \Big(\frac{1}{1-x}\Big)^{n+1}
\]
The marginal PMF then reduces to
\[
P_N(n) = \frac{a^n}{(a + b + 1)^{n+1}} \sum_{m=0}^{\infty} \frac{(n+m)!}{n!\,m!} \frac{b^m}{(a + b + 1)^m}
= \frac{a^n}{(a + b + 1)^{n+1}} \Big(\frac{1}{1 - \frac{b}{a+b+1}}\Big)^{n+1}
= \frac{a^n}{(1 + a)^{n+1}}
\]
Likewise, by symmetry, the marginal PMF of $M$ is
\[
P_M(m) = \frac{b^m}{(1 + b)^{m+1}}
\]
Hence, the random variables $M$ and $N$ both follow a geometric distribution.
4.1.4 Conditional Probabilities and densities
The notion of conditional distribution functions and conditional density functions can be extended
to the case where the conditioning event is related to another random variable. For example,
we might want to know the distribution of a random variable representing the score a student
achieves on a test given the value of another random variable representing the number of hours
the student studied for the test. Or, perhaps we want to know the probability density function of
the outside temperature, given that the humidity is known to be below 50%.
To start with, consider a pair of discrete random variables ๐‘‹ and ๐‘Œ with a PMF, ๐‘ƒ๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ). Suppose
we would like to know the PMF of the random variable X given that the value of ๐‘Œ has been
observed. Then, according to the definition of conditional probability:
๐‘ƒ (๐‘‹ = ๐‘ฅ |๐‘Œ = ๐‘ฆ) =
๐‘ƒ (๐‘‹ = ๐‘ฅ, ๐‘Œ = ๐‘ฆ) ๐‘ƒ๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ)
=
๐‘ƒ (๐‘Œ = ๐‘ฆ)
๐‘ƒ๐‘Œ (๐‘ฆ)
(4.13)
We refer to this as the conditional PMF of ๐‘‹ given ๐‘Œ . By way of notation we write:
๐‘ƒ๐‘‹ |๐‘Œ (๐‘ฅ |๐‘ฆ) =
๐‘ƒ๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ)
๐‘ƒ๐‘Œ (๐‘ฆ)
(4.14)
Example 4.5
Using the joint PMF given in Example 4.4, along with the marginal PMF found in that example, find the conditional PMF $P_{N|M}(n|m)$.
Solution.
\[
P_{N|M}(n|m) = \frac{P_{M,N}(m, n)}{P_M(m)}
= \frac{(n+m)!}{n!\,m!} \frac{a^n b^m}{(a + b + 1)^{n+m+1}} \cdot \frac{(1 + b)^{m+1}}{b^m}
= \frac{(n+m)!}{n!\,m!} \frac{a^n (1 + b)^{m+1}}{(a + b + 1)^{n+m+1}}
\]
Note that the conditional PMF of $N$ given $M$ is quite different from the marginal PMF of $N$. That is, knowing $M$ changes the distribution of $N$.
The simple result developed in Equation 4.13 can be extended to the case of continuous random variables and PDFs.
Definition 4.4. Conditional probability density function: The conditional PDF of a random variable $X$ given that $Y = y$ is:
\[
f_{X|Y}(x|y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} \qquad (4.15)
\]
Integrating both sides of this equation with respect to $x$ produces the conditional CDF:
Definition 4.5. Conditional cumulative distribution function: The conditional CDF of a random variable $X$ given that $Y = y$ is:
\[
F_{X|Y}(x|y) = \frac{\int_{-\infty}^{x} f_{X,Y}(x', y)\,dx'}{f_Y(y)} \qquad (4.16)
\]
Usually, the conditional PDF is much easier to work with, so the conditional CDF will not be discussed further.
Example 4.6
A certain pair of random variables has a joint PDF given by:
\[
f_{X,Y}(x, y) = \frac{2abc}{(ax + by + c)^3}\, u(x)\, u(y)
\]
for some positive constants $a$, $b$, and $c$. Find the conditional PDF of $X$ given $Y$ and of $Y$ given $X$.
Solution. The marginal PDFs are easily found to be:
\[
f_X(x) = \int_{0}^{\infty} f_{X,Y}(x, y)\,dy = \frac{ac}{(ax + c)^2}\, u(x)
\]
\[
f_Y(y) = \int_{0}^{\infty} f_{X,Y}(x, y)\,dx = \frac{bc}{(by + c)^2}\, u(y)
\]
The conditional PDF of $X$ given $Y$ then works out to be:
\[
f_{X|Y}(x|y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{2a(by + c)^2}{(ax + by + c)^3}\, u(x)
\]
The conditional PDF of $Y$ given $X$ can be determined in a similar way:
\[
f_{Y|X}(y|x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{2b(ax + c)^2}{(ax + by + c)^3}\, u(y)
\]
Example 4.7
$X$ and $Y$ are two Gaussian random variables with a joint PDF:
\[
f_{X,Y}(x, y) = \frac{1}{\pi\sqrt{3}} \exp\Big(-\frac{2}{3}\big(x^2 - xy + y^2\big)\Big)
\]
Find the marginal PDFs and the conditional PDF of $X$ given $Y$.
Solution. The marginal PDF is found as follows:
\[
f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy
= \frac{1}{\pi\sqrt{3}} \exp\Big(-\frac{2}{3}x^2\Big) \int_{-\infty}^{\infty} \exp\Big(-\frac{2}{3}\big(y^2 - xy\big)\Big)\,dy
\]
\[
= \frac{1}{\pi\sqrt{3}} \exp\Big(-\frac{x^2}{2}\Big) \int_{-\infty}^{\infty} \exp\Big(-\frac{2}{3}\Big(y^2 - xy + \frac{x^2}{4}\Big)\Big)\,dy
= \frac{1}{\pi\sqrt{3}} \exp\Big(-\frac{x^2}{2}\Big) \int_{-\infty}^{\infty} \exp\Big(-\frac{2}{3}\big(y - x/2\big)^2\Big)\,dy
\]
Now the integrand is a Gaussian-looking function. If the appropriate constant is added to the integrand, the integrand will be a valid PDF and hence must integrate to one. In this case, the constant is $\sqrt{2/(3\pi)}$. Therefore, the integral as just written must evaluate to $\sqrt{(3\pi)/2}$. So:
\[
f_X(x) = \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big)
\]
and we see that $X$ is a zero-mean, unit-variance Gaussian (i.e., standard normal) random variable. By symmetry, the marginal PDF of $Y$ must also be of the same form.
The conditional PDF of $X$ given $Y$ is
\[
f_{X|Y}(x|y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}
= \frac{\frac{1}{\pi\sqrt{3}} \exp\big(-\frac{2}{3}(x^2 - xy + y^2)\big)}{\frac{1}{\sqrt{2\pi}} \exp\big(-\frac{y^2}{2}\big)}
= \sqrt{\frac{2}{3\pi}} \exp\Big(-\frac{2}{3}\Big(x - \frac{y}{2}\Big)^2\Big)
\]
So, the conditional PDF of $X$ given $Y$ is also Gaussian. But, given that it is known that $Y = y$, the mean of $X$ is now $y/2$ (instead of zero), and the variance of $X$ is $3/4$ (instead of one). In this example, knowledge of $Y$ has shifted the mean and reduced the variance of $X$.
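The factorisation derived in Example 4.7 also gives a convenient way to simulate the pair (a sketch, with an illustrative slice width for the conditioning): draw $X$ as standard normal, then draw $Y$ given $X$ from $N(X/2, 3/4)$, and check the conditional statistics of $X$ given $Y$ near a chosen value.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(0.0, 1.0, size=n)                 # marginal of X: standard normal
y = rng.normal(x / 2, np.sqrt(3 / 4), size=n)    # Y given X = x: mean x/2, variance 3/4

print(y.mean(), y.var())                         # approximately 0 and 1, as symmetry predicts
sel = np.abs(y - 1.0) < 0.01                     # narrow slice around y = 1
print(x[sel].mean(), x[sel].var())               # approximately 0.5 and 0.75, i.e. y/2 and 3/4
```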
In addition to conditioning on a random variable taking on a point value such as ๐‘Œ = ๐‘ฆ, the
conditioning can also occur on an interval of the form ๐‘ฆ1 ≤ ๐‘Œ ≤ ๐‘ฆ2 . To simplify notation, let the
conditioning event ๐ด be ๐ด = {๐‘ฆ1 ≤ ๐‘Œ ≤ ๐‘ฆ2 }. The relevant conditional PMF, PDF, and CDF are
then given, respectively, by:
Í๐‘ฆ2
๐‘ฆ=๐‘ฆ1 ๐‘ƒ๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ)
๐‘ƒ๐‘‹ |๐ด (๐‘ฅ) = Í๐‘ฆ2
(4.17)
๐‘ฆ=๐‘ฆ1 ๐‘ƒ๐‘Œ (๐‘ฆ)
73
4 Two or More Random Variables
๐‘ฆ2
๐‘“ (๐‘ฅ, ๐‘ฆ)๐‘‘๐‘ฆ
๐‘ฆ1 ๐‘‹ ,๐‘Œ
∫ ๐‘ฆ2
๐‘“ (๐‘ฆ)๐‘‘๐‘ฆ
๐‘ฆ1 ๐‘Œ
(4.18)
๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ2 ) − ๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ1 )
๐น๐‘Œ (๐‘ฆ2 ) − ๐น๐‘Œ (๐‘ฆ1 )
(4.19)
∫
๐‘“๐‘‹ |๐ด (๐‘ฅ) =
๐น๐‘‹ |๐ด (๐‘ฅ) =
Example 4.8
Using the joint PDF of Example 4.7, determine the conditional PDF of $X$ given that $Y > y_0$.
Solution.
\[
\int_{y_0}^{\infty} f_{X,Y}(x, y)\,dy
= \int_{y_0}^{\infty} \frac{1}{\pi\sqrt{3}} \exp\Big(-\frac{2}{3}\big(x^2 - xy + y^2\big)\Big)\,dy
= \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big) \int_{y_0}^{\infty} \sqrt{\frac{2}{3\pi}} \exp\Big(-\frac{2}{3}\Big(y - \frac{x}{2}\Big)^2\Big)\,dy
= \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big)\, Q\Big(\frac{2y_0 - x}{\sqrt{3}}\Big)
\]
Since the marginal PDF of $Y$ is a zero-mean, unit-variance Gaussian PDF,
\[
\int_{y_0}^{\infty} f_Y(y)\,dy = \int_{y_0}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{y^2}{2}\Big)\,dy = Q(y_0)
\]
Therefore, the PDF of $X$ given $Y > y_0$ is:
\[
f_{X|Y > y_0}(x) = \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big) \frac{Q\big(\frac{2y_0 - x}{\sqrt{3}}\big)}{Q(y_0)}
\]
Note that when the conditioning event was a point condition on $Y$, the conditional PDF of $X$ was Gaussian; yet, when the conditioning event is an interval condition on $Y$, the resulting conditional PDF of $X$ is not Gaussian at all.
4.1.5 Expected Values and Moments Involving Pairs of Random Variables
We are often interested in how two variables ๐‘‹ and ๐‘Œ vary together. In particular, we are interested
in whether the variation of ๐‘‹ and ๐‘Œ are correlated. For example, if ๐‘‹ increases does ๐‘Œ tend to
increase or to decrease? The joint moments of ๐‘‹ and ๐‘Œ provide this information.
Definition 4.6. Let $g(x, y)$ be an arbitrary two-dimensional function. The expected value of $g(X, Y)$, where $X$ and $Y$ are random variables, is
\[
E[g(X, Y)] = \iint g(x, y)\, f_{X,Y}(x, y)\,dx\,dy \qquad (4.20)
\]
For discrete random variables, the equivalent expression in terms of the joint PMF is:
\[
E[g(X, Y)] = \sum_m \sum_n g(x_m, y_n)\, P_{X,Y}(x_m, y_n) \qquad (4.21)
\]
If the function $g(x, y)$ is actually a function of only a single variable, say $x$, then this definition reduces to the definition of expected values for functions of a single random variable:
\[
E[g(X)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x)\, f_{X,Y}(x, y)\,dx\,dy
= \int_{-\infty}^{\infty} g(x) \Big(\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy\Big)\,dx
= \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx \qquad (4.22)
\]
To start with, consider an arbitrary linear function of the two variables, $g(x, y) = ax + by$, where $a$ and $b$ are constants. Then:
\[
E[aX + bY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (ax + by)\, f_{X,Y}(x, y)\,dx\,dy
= a \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\, f_{X,Y}(x, y)\,dx\,dy + b \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\, f_{X,Y}(x, y)\,dx\,dy
= aE[X] + bE[Y]
\]
This result merely states that expectation is a linear operation.
Definition 4.7. Correlation: The correlation between two random variables is defined as:
\[
R_{X,Y} = E[XY] = \iint xy\, f_{X,Y}(x, y)\,dx\,dy \qquad (4.23)
\]
Furthermore, two random variables which have a correlation of zero are said to be orthogonal.
One instance in which the correlation appears is in calculating the second moment of a sum of two random variables. That is, consider finding the expected value of $g(X, Y) = (X + Y)^2$:
\[
E[(X + Y)^2] = E[X^2 + 2XY + Y^2] = E[X^2] + E[Y^2] + 2E[XY] \qquad (4.24)
\]
Hence the second moment of the sum is the sum of the second moments plus twice the correlation.
Definition 4.8. Covariance: The covariance between two random variables is:
\[
COV(X, Y) = E[(X - E[X])(Y - E[Y])] = \iint (x - E[X])(y - E[Y])\, f_{X,Y}(x, y)\,dx\,dy \qquad (4.25)
\]
If two random variables have a covariance of zero, they are said to be uncorrelated.
Theorem 4.2
The correlation and covariance are strongly related to one another as follows:
\[
COV(X, Y) = R_{X,Y} - E[X]E[Y] \qquad (4.26)
\]
Proof.
\[
COV(X, Y) = E[(X - E[X])(Y - E[Y])] = E[XY - E[X]Y - E[Y]X + E[X]E[Y]]
= E[XY] - E[X]E[Y] - E[Y]E[X] + E[X]E[Y] = E[XY] - E[X]E[Y]
\]
As a result, if either $X$ or $Y$ (or both) has a mean of zero, correlation and covariance are equivalent.
The covariance function occurs when calculating the variance of a sum of two random variables:
\[
VAR[X + Y] = VAR[X] + VAR[Y] + 2\,COV(X, Y) \qquad (4.27)
\]
This result can be obtained from Equation 4.24 by replacing $X$ with $X - E[X]$ and $Y$ with $Y - E[Y]$.
Another statistical parameter related to a pair of random variables is the correlation coefficient, which is nothing more than a normalized version of the covariance.
Definition 4.9. Correlation coefficient: The correlation coefficient of two random variables $X$ and $Y$, $\rho_{XY}$, is defined as
\[
\rho_{XY} = \frac{E[(X - E[X])(Y - E[Y])]}{\sqrt{VAR(X)\,VAR(Y)}} = \frac{COV(X, Y)}{\sigma_X \sigma_Y} \qquad (4.28)
\]
The next theorem quantifies the nature of the normalization.
Theorem 4.3
The correlation coefficient is less than or equal to 1 in absolute value.
Proof. Consider taking the second moment of $X + aY$, where $a$ is a real constant:
\[
E[(X + aY)^2] = E[X^2] + 2aE[XY] + a^2 E[Y^2] \ge 0
\]
Since this is true for any $a$, we can tighten the bound by choosing the value of $a$ that minimizes the left-hand side. This value of $a$ turns out to be
\[
a = \frac{-E[XY]}{E[Y^2]}
\]
Plugging in this value gives
\[
E[X^2] + \frac{E[XY]^2}{E[Y^2]} - 2\frac{E[XY]^2}{E[Y^2]} \ge 0 \;\Rightarrow\; E[XY]^2 \le E[X^2]E[Y^2]
\]
If we replace $X$ with $X - E[X]$ and $Y$ with $Y - E[Y]$, the result is
\[
\big(COV(X, Y)\big)^2 \le VAR[X]\,VAR[Y]
\]
Rearranging terms then gives the desired result:
\[
|\rho_{XY}| = \Big|\frac{COV(X, Y)}{\sqrt{VAR[X]\,VAR[Y]}}\Big| \le 1
\]
Note that we can also infer from the proof that equality holds if ๐‘Œ is a constant times ๐‘‹ . That is, a
correlation coefficient of 1 (or −1) implies that ๐‘‹ and ๐‘Œ are completely correlated (knowing ๐‘Œ
determines ๐‘‹ ). Furthermore, uncorrelated random variables will have a correlation coefficient
of zero. Therefore, as its name implies, the correlation coefficient is a quantitative measure of
the correlation between two random variables. It should be emphasized at this point that zero
correlation is not to be confused with independence. These two concepts are not the same.
Example 4.9
Consider once again the joint PDF of Example 4.7. Find $R_{X,Y}$, $COV(X, Y)$, and $\rho_{X,Y}$.
Solution. The correlation for these random variables is:
\[
E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{xy}{\pi\sqrt{3}} \exp\Big(-\frac{2}{3}\big(x^2 - xy + y^2\big)\Big)\,dy\,dx
\]
In order to evaluate this integral, the joint PDF is rewritten as $f_{X,Y}(x, y) = f_{Y|X}(y|x)\,f_X(x)$, and then those terms involving only $x$ are pulled outside the inner integral over $y$:
\[
E[XY] = \int_{-\infty}^{\infty} \frac{x}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big) \Big(\int_{-\infty}^{\infty} y \sqrt{\frac{2}{3\pi}} \exp\Big(-\frac{2}{3}\Big(y - \frac{x}{2}\Big)^2\Big)\,dy\Big)\,dx
\]
The inner integral (in brackets) is the expected value of a Gaussian random variable with a mean of $x/2$ and variance of $3/4$, which thus evaluates to $x/2$. Hence,
\[
E[XY] = \frac{1}{2} \int_{-\infty}^{\infty} \frac{x^2}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big)\,dx
\]
The remaining integral is the second moment of a Gaussian random variable with zero mean and unit variance, which integrates to 1. The correlation of these two random variables is therefore $E[XY] = 1/2$. Since both $X$ and $Y$ have zero means, $COV(X, Y)$ is also equal to $1/2$. Finally, the correlation coefficient is also $\rho_{XY} = 1/2$ due to the fact that both $X$ and $Y$ have unit variance.
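The same sampling scheme used after Example 4.7 provides a quick numerical check of this result (a sketch, not part of the notes): the estimated correlation, covariance, and correlation coefficient should all be close to 1/2.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.normal(size=n)
y = rng.normal(x / 2, np.sqrt(0.75), size=n)

R = np.mean(x * y)                                # correlation E[XY]
cov = np.mean((x - x.mean()) * (y - y.mean()))    # covariance
rho = cov / (x.std() * y.std())                   # correlation coefficient
print(R, cov, rho)                                # all approximately 0.5
```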
The concepts of correlation and covariance can be generalized to higher-order moments as given
in the following definition.
Definition 4.10. Joint moment: The $(m, n)$th joint moment of two random variables $X$ and $Y$ is:
\[
E[X^m Y^n] = \iint x^m y^n\, f_{X,Y}(x, y)\,dx\,dy \qquad (4.29)
\]
Definition 4.11. Joint central moment: The $(m, n)$th joint central moment of two random variables $X$ and $Y$ is:
\[
E[(X - E[X])^m (Y - E[Y])^n] = \iint (x - E[X])^m (y - E[Y])^n\, f_{X,Y}(x, y)\,dx\,dy \qquad (4.30)
\]
These higher-order joint moments are not frequently used. As with single random variables, a conditional expected value can also be defined, for which the expectation is carried out with respect to the appropriate conditional density function.
Definition 4.12. The conditional expected value of a function $g(X)$ of a random variable $X$ given that $Y = y$ is:
\[
E[g(X)|Y] = \int_{-\infty}^{\infty} g(x)\, f_{X|Y}(x|y)\,dx \qquad (4.31)
\]
Conditional expected values can be particularly useful in calculating expected values of functions
of two random variables that can be factored into the product of two one-dimensional functions.
That is, consider a function of the form $g(x, y) = g_1(x)g_2(y)$. Then:
\[
E[g_1(X)g_2(Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g_1(x)g_2(y)\, f_{X,Y}(x, y)\,dx\,dy
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g_1(x)g_2(y)\, f_X(x)\, f_{Y|X}(y|x)\,dx\,dy
\]
\[
= \int_{-\infty}^{\infty} g_1(x)\, f_X(x) \Big(\int_{-\infty}^{\infty} g_2(y)\, f_{Y|X}(y|x)\,dy\Big)\,dx
= \int_{-\infty}^{\infty} g_1(x)\, f_X(x)\, E_Y[g_2(Y)|X]\,dx
= E_X\big[g_1(X)\, E_Y[g_2(Y)|X]\big]
\]
Here, the subscripts on the expectation operator have been included for clarity to emphasize that the outer expectation is with respect to the random variable $X$, while the inner expectation is with respect to the random variable $Y$ (conditioned on $X$). This result allows us to break a two-dimensional expectation into two one-dimensional expectations. This technique was used in Example 4.9, where the correlation between two variables was essentially written as:
\[
R_{X,Y} = E_X\big[X\, E_Y[Y|X]\big] \qquad (4.32)
\]
In that example, the conditional PDF of Y given X was Gaussian, thus finding the conditional
mean was accomplished by inspection. The outer expectation then required finding the second
moment of a Gaussian random variable, which is also straightforward.
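The iterated-expectation identity in Equation 4.32 can also be illustrated by simulation. The following sketch, which is not part of the notes, draws samples consistent with the pair of Example 4.9 (where E[Y|X] = X/2) and compares the direct average of XY with the average of X·E[Y|X].

```python
# Monte Carlo illustration of Equation 4.32 (illustrative only, assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                              # X ~ N(0, 1)
y = 0.5 * x + np.sqrt(0.75) * rng.standard_normal(n)    # Y | X ~ N(X/2, 3/4)

direct = np.mean(x * y)             # two-dimensional expectation E[XY]
iterated = np.mean(x * (0.5 * x))   # E_X[X * E[Y|X]] using the known conditional mean

print(direct, iterated)             # both approach 0.5
```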
4.1.6 Independence of Random Variables
The concept of independent events was introduced in section 2.4. In this section, we extend
this concept to the realm of random variables. To make that extension, consider the events
๐ด = {๐‘‹ ≤ ๐‘ฅ } and ๐ต = {๐‘Œ ≤ ๐‘ฆ} related to the random variables ๐‘‹ and ๐‘Œ . The two events ๐ด and ๐ต
are statistically independent if ๐‘ƒ (๐ด, ๐ต) = ๐‘ƒ (๐ด)๐‘ƒ (๐ต). Restated in terms of the random variables,
this condition becomes
๐‘ƒ (๐‘‹ ≤ ๐‘ฅ, ๐‘Œ ≤ ๐‘ฆ) = ๐‘ƒ (๐‘‹ ≤ ๐‘ฅ)๐‘ƒ (๐‘Œ ≤ ๐‘ฆ) ⇒ ๐น๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) = ๐น๐‘‹ (๐‘ฅ)๐น๐‘Œ (๐‘ฆ)
(4.33)
Hence, two random variables are statistically independent if their joint CDF factors into a product
of the marginal CDFs. Differentiating both sides of this equation with respect to both ๐‘ฅ and ๐‘ฆ
reveals that the same statement applies to the PDF as well. That is, for statistically independent
random variables, the joint PDF factors into a product of the marginal PDFs:
๐‘“๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) = ๐‘“๐‘‹ (๐‘ฅ) ๐‘“๐‘Œ (๐‘ฆ)
(4.34)
It is not difficult to show that the same statement applies to PMFs as well. The preceding condition
can also be restated in terms of conditional PDFs. Dividing both sides of Equation 4.34 by ๐‘“๐‘‹ (๐‘ฅ)
results in
๐‘“๐‘Œ |๐‘‹ (๐‘ฆ|๐‘ฅ) = ๐‘“๐‘Œ (๐‘ฆ)
(4.35)
A similar result involving the conditional PDF of X given Y could have been obtained by dividing
both sides by the PDF of Y. In other words, if X and Y are independent, knowing the value of the
random variable X should not change the distribution of Y and vice versa.
Example 4.10
Revisiting Example 4.7 once again, determine whether 𝑋 and 𝑌 are independent. Find 𝑓𝑋 (𝑥) and 𝑓𝑋 |𝑌 (𝑥 |𝑦) and compare them.
Solution. Since the marginal PDF of X is
$$f_X(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)$$
and the conditional PDF of X given Y is
$$f_{X|Y}(x|y) = \sqrt{\frac{2}{3\pi}}\exp\left(-\frac{2}{3}\left(x - \frac{y}{2}\right)^2\right)$$
which are not equal, these two random variables are not independent.
Example 4.11
Suppose the random variables X and Y are uniformly distributed on the square defined by
0 ≤ ๐‘ฅ, ๐‘ฆ ≤ 1. Are these two random variables independent?
Solution. The joint PDF of X and Y is:
$$f_{X,Y}(x,y) = \begin{cases} 1, & 0 \le x, y \le 1 \\ 0, & \text{otherwise} \end{cases}$$
and the marginal PDFs of X and Y are:
$$f_X(x) = \begin{cases} 1, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad\qquad
f_Y(y) = \begin{cases} 1, & 0 \le y \le 1 \\ 0, & \text{otherwise} \end{cases}$$
These random variables are statistically independent since $f_{X,Y}(x,y) = f_X(x) f_Y(y)$.
Theorem 4.4
Let ๐‘‹ and ๐‘Œ be two independent random variables and consider forming two new random
variables ๐‘ˆ = ๐‘”1 (๐‘‹ ) and ๐‘‰ = ๐‘”2 (๐‘Œ ). These new random variables ๐‘ˆ and ๐‘‰ are also
independent
Another important result deals with the correlation, covariance, and correlation coefficients of
independent random variables.
Theorem 4.5
If ๐‘‹ and ๐‘Œ are independent random variables, then ๐‘…๐‘‹ ,๐‘Œ = ๐ธ [๐‘‹ ]๐ธ [๐‘Œ ], ๐ถ๐‘œ๐‘ฃ (๐‘‹, ๐‘Œ ) = 0, and
๐œŒ๐‘‹ ,๐‘Œ = 0.
Proof.
$$E[XY] = \iint xy\, f_{X,Y}(x,y)\, dx\, dy = \iint xy\, f_X(x) f_Y(y)\, dx\, dy = \int x f_X(x)\, dx \int y f_Y(y)\, dy = E[X]E[Y]$$
The conditions involving covariance and correlation coefficient follow directly from this
result.
Therefore, independent random variables are necessarily uncorrelated, but the converse is not
always true. Uncorrelated random variables do not have to be independent as demonstrated by
the next example.
Example 4.12: Uncorrelated but Dependent Random Variables
Consider a pair of random variables ๐‘‹ and ๐‘Œ that are uniformly distributed over the unit
circle so that:
$$f_{X,Y}(x,y) = \begin{cases} 1/\pi, & x^2 + y^2 \le 1 \\ 0, & \text{otherwise} \end{cases}$$
The marginal PDF of ๐‘‹ can be found as follows:
$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dy = \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{1}{\pi}\, dy = \frac{2}{\pi}\sqrt{1-x^2}, \qquad -1 \le x \le 1$$
By symmetry, the marginal PDF of Y must take on the same functional form. Hence, the product of the marginal PDFs is
$$f_X(x) f_Y(y) = \frac{4}{\pi^2}\sqrt{(1-x^2)(1-y^2)}, \qquad -1 \le x, y \le 1$$
Clearly, this is not equal to the joint PDF, and therefore, the two random variables are
dependent. This conclusion could have been determined in a simpler manner. Note that
if we are told that ๐‘‹ = 1, then necessarily ๐‘Œ = 0, whereas if we know that ๐‘‹ = 0, then ๐‘Œ
can range anywhere from -1 to 1. Therefore, conditioning on different values of ๐‘‹ leads to
different distributions for ๐‘Œ . Next, the correlation between ๐‘‹ and ๐‘Œ is calculated.
๐‘ฅ๐‘ฆ
1
๐‘‘๐‘ฅ๐‘‘๐‘ฆ =
๐œ‹
๐‘ฅ 2 +๐‘ฆ 2 ≤1 ๐œ‹
โˆฌ
๐‘…๐‘‹ ,๐‘Œ = ๐ธ [๐‘‹๐‘Œ ] =
∫
√
1
∫
๐‘ฅ(
−1
1−๐‘ฅ 2
√
− 1−๐‘ฅ 2
๐‘ฆ๐‘‘๐‘ฆ)๐‘‘๐‘ฅ
Since the inner integrand is an odd function (of ๐‘ฆ) and the limits of integration are symmetric
about zero, the integral is zero. Hence, ๐‘…๐‘‹ ,๐‘Œ = 0. Note from the marginal PDFs just found
that both ๐‘‹ and ๐‘Œ are zero-mean. So, it is seen for this example that while the two random
variables are uncorrelated, they are not independent.
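A brief simulation can make this distinction concrete. The sketch below, which is illustrative only and assumes NumPy is available, draws points uniformly on the unit circle by rejection sampling; the sample correlation between X and Y is near zero, yet functions such as X² and Y² are clearly correlated, which rules out independence.

```python
# Simulation in the spirit of Example 4.12: uncorrelated but dependent.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(400_000, 2))
pts = pts[np.sum(pts**2, axis=1) <= 1]        # rejection sampling onto the unit disk
x, y = pts[:, 0], pts[:, 1]

print(np.corrcoef(x, y)[0, 1])                # close to 0: uncorrelated
print(np.corrcoef(x**2, y**2)[0, 1])          # clearly nonzero: not independent
```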
4.1.7 Pairs of Jointly Gaussian Random Variables
As with single random variables, the most common and important example of a two-dimensional
probability distribution is that of a joint Gaussian distribution. The jointly Gaussian random
variables appear in numerous applications in electrical engineering. They are frequently used to
model signals in signal processing applications, and they are the most important model used in
communication systems that involve dealing with signals in the presence of noise. They also play
a central role in many statistical methods.
Definition 4.13. Jointly Gaussian random variables: A pair of random variables ๐‘‹ and ๐‘Œ is
said to be jointly Gaussian if their joint PDF is of the general form:
๐‘“๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) =
(
1
๐‘’๐‘ฅ๐‘ (−
q
2
2๐œ‹๐œŽ๐‘‹ ๐œŽ๐‘Œ 1 − ๐œŒ๐‘‹๐‘Œ
๐‘ฅ−๐‘š๐‘‹ 2
๐œŽ๐‘‹ )
๐‘‹
− 2๐œŒ๐‘‹๐‘Œ ( ๐‘ฅ−๐‘š
๐œŽ๐‘‹ ) (
๐‘ฆ−๐‘š๐‘Œ
๐œŽ๐‘Œ
)+(
2 )
2(1 − ๐œŒ๐‘‹๐‘Œ
๐‘ฆ−๐‘š๐‘Œ 2
๐œŽ๐‘Œ )
)
(4.36)
where ๐‘š๐‘‹ and ๐‘š๐‘Œ are the means of ๐‘‹ and ๐‘Œ , respectively; ๐œŽ๐‘‹ and ๐œŽ๐‘Œ are the standard deviations of
๐‘‹ and ๐‘Œ , respectively; and ๐œŒ๐‘‹๐‘Œ is the correlation coefficient of ๐‘‹ and ๐‘Œ .
It can be shown that this joint PDF results in Gaussian marginal PDFs:
$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dy = \frac{1}{\sqrt{2\pi}\sigma_X}\exp\left(-\frac{(x-m_X)^2}{2\sigma_X^2}\right) \qquad (4.37)$$
$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dx = \frac{1}{\sqrt{2\pi}\sigma_Y}\exp\left(-\frac{(y-m_Y)^2}{2\sigma_Y^2}\right) \qquad (4.38)$$
Furthermore, if ๐‘‹ and ๐‘Œ are jointly Gaussian, then the conditional PDF of ๐‘‹ given ๐‘Œ = ๐‘ฆ is also
2 ).
Gaussian, with a mean of ๐‘š๐‘‹ + ๐œŒ๐‘‹๐‘Œ (๐œŽ๐‘‹ /๐œŽ๐‘Œ ) (๐‘ฆ − ๐‘š๐‘Œ ) and a variance of ๐œŽ๐‘‹2 (1 − ๐œŒ๐‘‹๐‘Œ
Figure 4.3 shows the joint Gaussian PDF for three different values of the correlation coefficient.
In Figure 4.3(a), the correlation coefficient is ๐œŒ๐‘‹๐‘Œ = 0 and thus the two random variables are
uncorrelated. Figure 4.3(b) shows the joint PDF when the correlation coefficient is large and
positive, ๐œŒ๐‘‹๐‘Œ = 0.9. Note how the surface has become taller and thinner and largely lies above
the line ๐‘ฆ = ๐‘ฅ. In Figure 4.3(c), the correlation is now large and negative, ๐œŒ๐‘‹๐‘Œ = −0.9. Note that
this is the same picture as in Figure 4.3(b), except that it has been rotated by 90°. Now the surface
lies largely above the line ๐‘ฆ = −๐‘ฅ. In all three figures, the means of both ๐‘‹ and ๐‘Œ are zero and
the variances of both ๐‘‹ and ๐‘Œ are 1. Changing the means would simply translate the surface but
would not change the shape. Changing the variances would expand or contract the surface along
either the ๐‘‹ − or ๐‘Œ −axis depending on which variance was changed.
Figure 4.3: The joint Gaussian PDF: (a) ๐‘š๐‘‹ = ๐‘š๐‘Œ = 0, ๐œŽ๐‘‹ = ๐œŽ๐‘Œ = 1, ๐œŒ๐‘‹๐‘Œ = 0; (b) ๐‘š๐‘‹ = ๐‘š๐‘Œ = 0,
๐œŽ๐‘‹ = ๐œŽ๐‘Œ = 1, ๐œŒ๐‘‹๐‘Œ = 0.9; (c) ๐‘š๐‘‹ = ๐‘š๐‘Œ = 0, ๐œŽ๐‘‹ = ๐œŽ๐‘Œ = 1, ๐œŒ๐‘‹๐‘Œ = −0.9
Example 4.13
The joint Gaussian PDF is given by Equation 4.36. Suppose we have the following equation:
$$\left(\frac{x-m_X}{\sigma_X}\right)^2 - 2\rho_{XY}\left(\frac{x-m_X}{\sigma_X}\right)\left(\frac{y-m_Y}{\sigma_Y}\right) + \left(\frac{y-m_Y}{\sigma_Y}\right)^2 = c^2$$
This is the equation for an ellipse. Plotting these ellipses for different values of ๐‘ results in
what is known as a contour plot. Figure 4.4 shows such plots for the two-dimensional joint
Gaussian PDF.
Figure 4.4: Contour plots for joint Gaussian random variables
Theorem 4.6
Uncorrelated Gaussian random variables are independent.
Proof. Uncorrelated Gaussian random variables have a correlation coefficient of zero. Plugging ๐œŒ๐‘‹๐‘Œ = 0 into the general joint Gaussian PDF results in
$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y}\exp\left(-\frac{\left(\frac{x-m_X}{\sigma_X}\right)^2 + \left(\frac{y-m_Y}{\sigma_Y}\right)^2}{2}\right)$$
This clearly factors into the product of the marginal Gaussian PDFs:
$$f_{X,Y}(x,y) = \frac{1}{\sqrt{2\pi}\sigma_X}\exp\left(-\frac{(x-m_X)^2}{2\sigma_X^2}\right)\frac{1}{\sqrt{2\pi}\sigma_Y}\exp\left(-\frac{(y-m_Y)^2}{2\sigma_Y^2}\right) = f_X(x) f_Y(y)$$
Example 4.12 demonstrated that this property does not hold for all random variables; however, it is true for Gaussian random variables. This allows us to give a stronger interpretation
to the correlation coefficient when dealing with Gaussian random variables. Previously, it was
stated that the correlation coefficient is a quantitative measure of the amount of correlation
between two variables. While this is true, it is a rather vague statement. We see that in the case
of Gaussian random variables, we can make the connection between correlation and statistical
dependence. Hence, for jointly Gaussian random variables, the correlation coefficient can indeed
be viewed as a quantitative measure of statistical dependence.
4.2 Multiple Random Variables
In many applications, it is necessary to deal with a large number of random variables. Often, the
number of variables can be arbitrary. Therefore we extend the concepts developed previously
for single random variables and pairs of random variables to allow for an arbitrary number of
random variables. A common example is multidimensional Gaussian random variables, while
most non-Gaussian random variables are difficult to deal with in many dimensions. One of the
main goals here is to develop a vector/matrix notation which will allow us to represent potentially
large sequences of random variables with a compact notation.
4.2.1 Vector Random Variables
The notion of a random variable is easily generalized to the case where several quantities are of
interest.
Definition 4.14. Vector random variables: A vector random variable X is a function that assigns
a vector of real numbers to each outcome ๐œ in ๐‘†, the sample space of the random experiment.
We use uppercase boldface notation for vector random variables. By convention, X is a column vector (n rows by 1 column), so the vector random variable with components X_1, X_2, ..., X_n corresponds to
$$\mathbf{X} = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} = [X_1, X_2, ..., X_n]^T \qquad (4.39)$$
where T denotes the transpose of a matrix or vector. Possible values of the vector random variable
are denoted by x= [๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ๐‘› ]๐‘‡ where ๐‘ฅ๐‘– corresponds to the value of ๐‘‹๐‘– .
Example 4.14: Samples of an Audio Signal
Let the outcome of a random experiment be an audio signal ๐‘‹ (๐‘ก). Let the random variable
๐‘‹๐‘˜ = ๐‘‹ (๐‘˜๐‘‡ ) be the sample of the signal taken at time ๐‘˜๐‘‡ . An MP3 codec processes the audio
in blocks of ๐‘› samples X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› ]๐‘‡ . X is a vector random variable.
Each event ๐ด involving X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› ]๐‘‡ has a corresponding region in an n-dimensional real
space ๐‘…๐‘› . As before, we use “rectangular” product-form sets in ๐‘…๐‘› as building blocks. For the
n-dimensional random variable X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› ]๐‘‡ we are interested in events that have the
product form:
๐ด = {๐‘‹ 1 in ๐ด1 } ∩ {๐‘‹ 2 in ๐ด2 } ∩ ... ∩ {๐‘‹๐‘› in ๐ด๐‘› }
(4.40)
where each ๐ด๐‘˜ is a one-dimensional event (i.e., subset of the real line) that involves ๐‘‹๐‘˜ only. The
event ๐ด occurs when all of the events {๐‘‹๐‘˜ in ๐ด๐‘˜ } occur jointly. We are interested in obtaining the
probabilities of these product-form events:
๐‘ƒ (๐ด) = ๐‘ƒ (X ∈ A) = ๐‘ƒ ({๐‘‹ 1 in ๐ด1 } ∩ {๐‘‹ 2 in ๐ด2 } ∩ ... ∩ {๐‘‹๐‘› in ๐ด๐‘› })
, ๐‘ƒ (๐‘‹ 1 in ๐ด1, ๐‘‹ 2 in ๐ด2, ..., ๐‘‹๐‘› in ๐ด๐‘› )
(4.41)
(4.42)
In principle, this probability is obtained by finding the probability of the equivalent event in the
underlying sample space, that is,
๐‘ƒ (๐ด) = ๐‘ƒ ({๐œ in ๐‘† : X(๐œ ) in A})
= ๐‘ƒ ({๐œ in ๐‘† : ๐‘‹ 1 (๐œ ) ∈ ๐ด1, ๐‘‹ 2 (๐œ ) ∈ ๐ด2, ..., ๐‘‹๐‘› (๐œ ) ∈ ๐ด๐‘› })
84
(4.43)
(4.44)
4.2.2 Joint and Conditional PMFs, CDFs and PDFs
The concepts of PMF, CDF, PDF are easily extended to an arbitrary number of random variables.
Definition 4.15. For a vector of ๐‘ random variables X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹ ๐‘ ]๐‘‡ , with possible values
x= [๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘ ]๐‘‡ , the joint PMF, CDF, and PDF are given, respectively, by:
๐‘ƒ X (x) = ๐‘ƒ๐‘‹1,๐‘‹2,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘ ) = ๐‘ƒ (๐‘‹ 1 = ๐‘ฅ 1, ๐‘‹ 2 = ๐‘ฅ 2, ..., ๐‘‹ ๐‘ = ๐‘ฅ ๐‘ )
(4.45)
๐น X (x) = ๐น๐‘‹1,๐‘‹2,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘ ) = ๐‘ƒ (๐‘‹ 1 ≤ ๐‘ฅ 1, ๐‘‹ 2 ≤ ๐‘ฅ 2, ..., ๐‘‹ ๐‘ ≤ ๐‘ฅ ๐‘ )
(4.46)
๐œ•๐‘
๐น๐‘‹ ,๐‘‹ ,...,๐‘‹ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘ )
๐œ•๐‘ฅ 1 ๐œ•๐‘ฅ 2 ...๐œ•๐‘ฅ ๐‘ 1 2 ๐‘›
๐‘“X (x) = ๐‘“๐‘‹1,๐‘‹2,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘ ) =
(4.47)
Marginal CDFs can be found for a subset of the variables by evaluating the joint CDF at infinity
for the unwanted variables. For example, if we are only interested in a subset {๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘€ } of
X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹ ๐‘ ]๐‘‡ , where ๐‘ ≥ ๐‘€:
๐น๐‘‹1,๐‘‹2,...,๐‘‹๐‘€ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘€ ) = ๐น๐‘‹1,๐‘‹2,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘€ , ..., ∞, ∞, ..., ∞)
(4.48)
Marginal PDFs are found from the joint PDF by integrating out the unwanted variables. Similarly,
marginal PMFs are obtained from the joint PMF by summing out the unwanted variables.
$$f_{X_1,X_2,...,X_M}(x_1, x_2, ..., x_M) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1,X_2,...,X_N}(x_1, x_2, ..., x_N)\, dx_{M+1}\, dx_{M+2} \cdots dx_N \qquad (4.49)$$
$$P_{X_1,X_2,...,X_M}(x_1, x_2, ..., x_M) = \sum_{x_{M+1}}\sum_{x_{M+2}} \cdots \sum_{x_N} P_{X_1,X_2,...,X_N}(x_1, x_2, ..., x_N) \qquad (4.50)$$
Similar to that done for pairs of random variables, we can also establish conditional PMFs and
PDFs.
Definition 4.16. For a set of ๐‘ random variables ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹ ๐‘ , the conditional PMF and PDF of
๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘€ conditioned on ๐‘‹๐‘€+1, ๐‘‹๐‘€+2, ..., ๐‘‹ ๐‘ are given by
๐‘ƒ๐‘‹1,๐‘‹2,...,๐‘‹๐‘€ |๐‘‹๐‘€+1,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘€ |๐‘ฅ ๐‘€+1, ..., ๐‘ฅ ๐‘ ) =
๐‘ƒ (๐‘‹ 1 = ๐‘ฅ 1, ๐‘‹ 2 = ๐‘ฅ 2, ..., ๐‘‹ ๐‘ = ๐‘ฅ ๐‘ )
๐‘ƒ (๐‘‹๐‘€+1 = ๐‘ฅ ๐‘€+1, ..., ๐‘‹ ๐‘ = ๐‘ฅ ๐‘ )
๐‘“๐‘‹1,๐‘‹2,...,๐‘‹๐‘€ |๐‘‹๐‘€+1,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘€ |๐‘ฅ ๐‘€+1, ..., ๐‘ฅ ๐‘ ) =
๐‘“๐‘‹1,๐‘‹2,...,๐‘‹๐‘ (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ ๐‘ )
๐‘“๐‘‹๐‘€+1,...,๐‘‹๐‘ (๐‘ฅ ๐‘€+1, ..., ๐‘ฅ ๐‘ )
(4.51)
(4.52)
Using conditional PDFs, many interesting factorization results can be established for joint PDFs
involving multiple random variables. For example, consider four random variables, ๐‘‹ 1, ๐‘‹ 2, ๐‘‹ 3, ๐‘‹ 4 .
๐‘“๐‘‹1,๐‘‹2,๐‘‹3,๐‘‹4 (๐‘ฅ 1, ๐‘ฅ 2, ๐‘ฅ 3, ๐‘ฅ 4 ) = ๐‘“๐‘‹1 |๐‘‹2,๐‘‹3,๐‘‹4 (๐‘ฅ 1 |๐‘ฅ 2, ๐‘ฅ 3, ๐‘ฅ 4 ) ๐‘“๐‘‹2,๐‘‹3,๐‘‹4 (๐‘ฅ 2, ๐‘ฅ 3, ๐‘ฅ 4 )
= ๐‘“๐‘‹1 |๐‘‹2,๐‘‹3,๐‘‹4 (๐‘ฅ 1 |๐‘ฅ 2, ๐‘ฅ 3, ๐‘ฅ 4 ) ๐‘“๐‘‹2 |๐‘‹3,๐‘‹4 (๐‘ฅ 2 |๐‘ฅ 3, ๐‘ฅ 4 ) ๐‘“๐‘‹3,๐‘‹4 (๐‘ฅ 3, ๐‘ฅ 4 )
= ๐‘“๐‘‹1 |๐‘‹2,๐‘‹3,๐‘‹4 (๐‘ฅ 1 |๐‘ฅ 2, ๐‘ฅ 3, ๐‘ฅ 4 ) ๐‘“๐‘‹2 |๐‘‹3,๐‘‹4 (๐‘ฅ 2 |๐‘ฅ 3, ๐‘ฅ 4 ) ๐‘“๐‘‹3 |๐‘‹4 (๐‘ฅ 3 |๐‘ฅ 4 )๐‘“๐‘‹4 (๐‘ฅ 4 )
Almost endless other possibilities exist as well.
Definition 4.17. A set of ๐‘ random variables are statistically independent if any subset of the
random variables are independent of any other disjoint subset. In particular, any joint PDF of ๐‘€ ≤ ๐‘
variables should factor into a product of the corresponding marginal PDFs.
As an example, consider three random variables, ๐‘‹ , ๐‘Œ , ๐‘ . For these three random variables to be
independent, we must have each pair independent. This implies that:
๐‘“๐‘‹ ,๐‘Œ (๐‘ฅ, ๐‘ฆ) = ๐‘“๐‘‹ (๐‘ฅ) ๐‘“๐‘Œ (๐‘ฆ)
๐‘“๐‘‹ ,๐‘ (๐‘ฅ, ๐‘ง) = ๐‘“๐‘‹ (๐‘ฅ)๐‘“๐‘ (๐‘ง)
(4.53)
๐‘“๐‘Œ ,๐‘ (๐‘ฆ, ๐‘ง) = ๐‘“๐‘Œ (๐‘ฆ)๐‘“๐‘ (๐‘ง)
In addition, the joint PDF of all three must also factor into a product of the marginals,
๐‘“๐‘‹ ,๐‘Œ ,๐‘ (๐‘ฅ, ๐‘ฆ, ๐‘ง) = ๐‘“๐‘‹ (๐‘ฅ)๐‘“๐‘Œ (๐‘ฆ) ๐‘“๐‘ (๐‘ง)
(4.54)
Note that all three conditions in Equation 4.53 follow directly from the single condition in
Equation 4.54. Hence, Equation 4.54 is a necessary and sufficient condition for three variables to
be statistically independent. Naturally, this result can be extended to any number of variables.
That is, the elements of a random vector X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹ ๐‘ ]๐‘‡ are independent if
$$f_{\mathbf{X}}(\mathbf{x}) = \prod_{n=1}^{N} f_{X_n}(x_n) \qquad (4.55)$$
4.2.3 Expectations Involving Multiple Random Variables
For a vector of random variables X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹ ๐‘ ]๐‘‡ , we can construct a corresponding mean
vector that is a column vector of the same dimension and whose components are the means
of the elements of X. Mathematically, we say ๐‘š = ๐ธ [X] = [๐ธ [๐‘‹ 1 ], ๐ธ [๐‘‹ 2 ], ..., ๐ธ [๐‘‹ ๐‘ ]]๐‘‡ . Two
other important quantities associated with the random vector are the correlation and covariance
matrices.
Definition 4.18. For a random vector X= [๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹ ๐‘ ]๐‘‡ , the correlation matrix is defined as
RXX = ๐ธ [XX๐‘‡ ]. That is, the (๐‘–, ๐‘—)๐‘กโ„Ž element of the ๐‘ × ๐‘ matrix RXX is ๐ธ [๐‘‹๐‘– ๐‘‹ ๐‘— ]. Similarly, the
covariance matrix is defined as CXX = ๐ธ [(X − ๐‘š) (X − ๐‘š)๐‘‡ ] so that the (๐‘–, ๐‘—)๐‘กโ„Ž element of CXX is
COV(๐‘‹๐‘– , ๐‘‹ ๐‘— ).
Theorem 4.7
Correlation matrices and covariance matrices are symmetric and positive definite.
Proof. Recall that a square matrix, RXX , is symmetric if RXX = R๐‘‡XX . Equivalently, the
(i, j)th element must be the same as the (j, i)th element. This is clearly the case here since
๐ธ [๐‘‹๐‘– ๐‘‹ ๐‘— ] = ๐ธ [๐‘‹ ๐‘— ๐‘‹๐‘– ]. Recall that the matrix is positive definite if z๐‘‡ RXX z > 0 for any vector
z such that ||๐‘ง|| > 0.
z๐‘‡ RXX z = z๐‘‡ ๐ธ [XX๐‘‡ ]z = ๐ธ [z๐‘‡ XX๐‘‡ z] = ๐ธ [(z๐‘‡ X) 2 ]
(4.56)
Note that z๐‘‡ X is a scalar random variable (a linear combination of the components of X).
Since the second moment of any random variable is positive (except for the pathological
case of a random variable which is identically equal to zero), then the correlation matrix is
positive definite. As an aside, this also implies that the eigenvalues of the correlation matrix
are all positive. Identical steps can be followed to prove the same properties hold for the
covariance matrix.
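A quick numerical illustration of Theorem 4.7, assuming NumPy is available: the sample covariance matrix of a random vector is symmetric and its eigenvalues are positive. The matrix A below is an arbitrary illustrative choice used only to generate correlated components.

```python
# Empirical check: sample covariance matrices are symmetric with positive eigenvalues.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
X = A @ rng.standard_normal((3, 100_000))   # 3-dimensional random vector, 10^5 samples

C = np.cov(X)                               # sample covariance matrix C_XX
print(np.allclose(C, C.T))                  # symmetric
print(np.linalg.eigvalsh(C))                # all eigenvalues positive
```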
Next, consider a linear transformation of a vector random variable. That is, create a new set of ๐‘€
random variables, Y = [๐‘Œ1, ๐‘Œ2, ..., ๐‘Œ๐‘€ ]๐‘‡ , according to:
๐‘Œ1 =๐‘Ž 1,1๐‘‹ 1 + ๐‘Ž 1,2๐‘‹ 2 + ... + ๐‘Ž 1,๐‘ ๐‘‹ ๐‘ + ๐‘ 1
๐‘Œ2 =๐‘Ž 2,1๐‘‹ 1 + ๐‘Ž 2,2๐‘‹ 2 + ... + ๐‘Ž 2,๐‘ ๐‘‹ ๐‘ + ๐‘ 2
..
.
(4.57)
๐‘Œ๐‘€ =๐‘Ž๐‘€,1๐‘‹ 1 + ๐‘Ž๐‘€,2๐‘‹ 2 + ... + ๐‘Ž๐‘€,๐‘ ๐‘‹ ๐‘ + ๐‘ ๐‘€
The number of new variables, M, does not have to be the same as the number of original variables,
N. To write this type of linear transformation in a compact fashion, define a matrix A whose
(๐‘–, ๐‘—)๐‘กโ„Ž element is the coefficient ๐‘Ž๐‘–,๐‘— and a column vector, b= [๐‘ 1, ๐‘ 2, ..., ๐‘ ๐‘€ ]๐‘‡ . Then the linear
transformation of Equation 4.57 is written in vector/matrix form as Y = AX + b. The next theorem
describes the relationship between the means of X and Y and the correlation matrices of X and Y.
Theorem 4.8
For a linear transformation of vector random variables of the form Y = AX + b, the means
of X and Y are related by:
$$\mathbf{m_Y} = \mathbf{A}\mathbf{m_X} + \mathbf{b} \qquad (4.58)$$
Also, the correlation matrices of X and Y are related by:
$$\mathbf{R_{YY}} = \mathbf{A}\mathbf{R_{XX}}\mathbf{A}^T + \mathbf{A}\mathbf{m_X}\mathbf{b}^T + \mathbf{b}\mathbf{m_X}^T\mathbf{A}^T + \mathbf{b}\mathbf{b}^T \qquad (4.59)$$
and the covariance matrices of X and Y are related by:
$$\mathbf{C_{YY}} = \mathbf{A}\mathbf{C_{XX}}\mathbf{A}^T \qquad (4.60)$$
Proof. For the mean vector,
$$\mathbf{m_Y} = E[\mathbf{Y}] = E[\mathbf{A}\mathbf{X} + \mathbf{b}] = \mathbf{A}E[\mathbf{X}] + \mathbf{b} = \mathbf{A}\mathbf{m_X} + \mathbf{b} \qquad (4.61)$$
Similarly, for the correlation matrix,
$$\begin{aligned}
\mathbf{R_{YY}} = E[\mathbf{Y}\mathbf{Y}^T] &= E[(\mathbf{A}\mathbf{X} + \mathbf{b})(\mathbf{A}\mathbf{X} + \mathbf{b})^T] \\
&= E[\mathbf{A}\mathbf{X}\mathbf{X}^T\mathbf{A}^T] + E[\mathbf{b}\mathbf{X}^T\mathbf{A}^T] + E[\mathbf{A}\mathbf{X}\mathbf{b}^T] + E[\mathbf{b}\mathbf{b}^T] \\
&= \mathbf{A}\mathbf{R_{XX}}\mathbf{A}^T + \mathbf{b}\mathbf{m_X}^T\mathbf{A}^T + \mathbf{A}\mathbf{m_X}\mathbf{b}^T + \mathbf{b}\mathbf{b}^T
\end{aligned} \qquad (4.62)$$
To prove the result for the covariance matrix, write Y − m_Y as
$$\mathbf{Y} - \mathbf{m_Y} = (\mathbf{A}\mathbf{X} + \mathbf{b}) - (\mathbf{A}\mathbf{m_X} + \mathbf{b}) = \mathbf{A}(\mathbf{X} - \mathbf{m_X}) \qquad (4.63)$$
Then,
$$\mathbf{C_{YY}} = E[(\mathbf{Y} - \mathbf{m_Y})(\mathbf{Y} - \mathbf{m_Y})^T] = E[\mathbf{A}(\mathbf{X} - \mathbf{m_X})(\mathbf{X} - \mathbf{m_X})^T\mathbf{A}^T] = \mathbf{A}E[(\mathbf{X} - \mathbf{m_X})(\mathbf{X} - \mathbf{m_X})^T]\mathbf{A}^T = \mathbf{A}\mathbf{C_{XX}}\mathbf{A}^T \qquad (4.64)$$
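Theorem 4.8 is easy to check empirically. The sketch below, which uses purely illustrative values and assumes NumPy, forms Y = AX + b from samples of X and compares the sample covariance of Y with the theoretical A C_XX A^T.

```python
# Sanity check of C_YY = A C_XX A^T for a linear transformation Y = AX + b.
import numpy as np

rng = np.random.default_rng(4)
N, M, samples = 4, 2, 200_000
Cxx = np.diag([1.0, 2.0, 0.5, 3.0])                 # covariance of X (diagonal here)
X = rng.standard_normal((N, samples)) * np.sqrt(np.diag(Cxx))[:, None]

A = rng.standard_normal((M, N))
b = np.array([[1.0], [-2.0]])
Y = A @ X + b                                        # linear transformation

print(A @ Cxx @ A.T)      # theoretical C_YY
print(np.cov(Y))          # empirical C_YY, close to the line above
```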
4.2.4 Multi-Dimensional Gaussian Random Variables
Recall from the study of two-dimensional random variables in the previous chapter that the
functional form of the joint Gaussian PDF was fairly complicated. It would seem that the prospects
of forming a joint Gaussian PDF for an arbitrary number of dimensions are grim. However, the
vector/matrix notation developed in the previous sections make this task manageable and, in fact,
the resulting joint Gaussian PDF is quite simple.
Definition 4.19. The joint Gaussian PDF for a vector of ๐‘ random variables, ๐‘‹ , with mean vector,
m๐‘‹ , and covariance matrix, CXX , is given by
$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{C_{XX}})}}\exp\left(-\frac{1}{2}(\mathbf{x} - \mathbf{m_X})^T\mathbf{C_{XX}}^{-1}(\mathbf{x} - \mathbf{m_X})\right) \qquad (4.65)$$
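Equation 4.65 can be evaluated directly with a few lines of linear algebra. The sketch below is not part of the notes; it computes the joint Gaussian PDF from the definition and, assuming SciPy is available, cross-checks it against scipy.stats.multivariate_normal. The mean vector, covariance matrix and evaluation point are arbitrary illustrative values.

```python
# Direct evaluation of the multidimensional Gaussian PDF (Equation 4.65).
import numpy as np
from scipy.stats import multivariate_normal

m = np.array([1.0, -1.0])            # mean vector m_X
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])           # covariance matrix C_XX
x = np.array([0.5, 0.0])             # point at which to evaluate the PDF

d = x - m
quad = d @ np.linalg.inv(C) @ d      # (x - m)^T C^-1 (x - m)
pdf_manual = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** len(m) * np.linalg.det(C))

print(pdf_manual)
print(multivariate_normal(mean=m, cov=C).pdf(x))   # same value
```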
Example 4.15
To demonstrate the use of this matrix notation, suppose X is a two-element vector and the
mean vector and covariance matrix are given by their general forms:
$$\mathbf{m_X} = \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} \qquad \text{and} \qquad \mathbf{C_{XX}} = \begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$
The determinant of the covariance matrix is
$$\det(\mathbf{C_{XX}}) = \sigma_1^2\sigma_2^2 - (\rho\sigma_1\sigma_2)^2 = \sigma_1^2\sigma_2^2(1 - \rho^2)$$
while the inverse is
$$\mathbf{C_{XX}}^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)}\begin{bmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{bmatrix} = \frac{1}{1-\rho^2}\begin{bmatrix} \sigma_1^{-2} & -\rho\sigma_1^{-1}\sigma_2^{-1} \\ -\rho\sigma_1^{-1}\sigma_2^{-1} & \sigma_2^{-2} \end{bmatrix}$$
The quadratic form in the exponent then works out to be
$$\begin{aligned}
(\mathbf{x} - \mathbf{m_X})^T\mathbf{C_{XX}}^{-1}(\mathbf{x} - \mathbf{m_X}) &= \frac{1}{1-\rho^2}\begin{bmatrix} x_1 - m_1 & x_2 - m_2 \end{bmatrix}\begin{bmatrix} \sigma_1^{-2} & -\rho\sigma_1^{-1}\sigma_2^{-1} \\ -\rho\sigma_1^{-1}\sigma_2^{-1} & \sigma_2^{-2} \end{bmatrix}\begin{bmatrix} x_1 - m_1 \\ x_2 - m_2 \end{bmatrix} \\
&= \frac{\left(\frac{x_1-m_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x_1-m_1}{\sigma_1}\right)\left(\frac{x_2-m_2}{\sigma_2}\right) + \left(\frac{x_2-m_2}{\sigma_2}\right)^2}{1-\rho^2}
\end{aligned}$$
Plugging all these results into the general form for the joint Gaussian PDF gives
$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{\sqrt{(2\pi)^2\sigma_1^2\sigma_2^2(1-\rho^2)}}\exp\left(-\frac{\left(\frac{x_1-m_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x_1-m_1}{\sigma_1}\right)\left(\frac{x_2-m_2}{\sigma_2}\right) + \left(\frac{x_2-m_2}{\sigma_2}\right)^2}{2(1-\rho^2)}\right) \qquad (4.66)$$
This is exactly the form of the two-dimensional joint Gaussian PDF defined in Equation 4.36.
Example 4.16
Suppose ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› are jointly Gaussian random variables with ๐ถ๐‘‚๐‘‰ (๐‘‹๐‘– , ๐‘‹ ๐‘— ) = 0 for ๐‘– ≠ ๐‘—.
Show that ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› are independent random variables.
Solution. Since COV(X_i, X_j) = 0 for all i ≠ j, all of the off-diagonal elements of the covariance matrix of X are zero. In other words, C_XX is a diagonal matrix of the general form:
$$\mathbf{C_{XX}} = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N^2 \end{bmatrix}$$
The determinant of a diagonal matrix is the product of the diagonal entries, so in this case $\det(\mathbf{C_{XX}}) = \sigma_1^2\sigma_2^2\cdots\sigma_N^2$. The inverse is also trivial to compute and takes on the form
$$\mathbf{C_{XX}}^{-1} = \begin{bmatrix} \sigma_1^{-2} & 0 & \cdots & 0 \\ 0 & \sigma_2^{-2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N^{-2} \end{bmatrix}$$
The quadratic form that appears in the exponent of the Gaussian PDF becomes
$$(\mathbf{x} - \mathbf{m_X})^T\mathbf{C_{XX}}^{-1}(\mathbf{x} - \mathbf{m_X}) = \begin{bmatrix} x_1 - m_1 & x_2 - m_2 & \cdots & x_N - m_N \end{bmatrix}\begin{bmatrix} \sigma_1^{-2} & 0 & \cdots & 0 \\ 0 & \sigma_2^{-2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N^{-2} \end{bmatrix}\begin{bmatrix} x_1 - m_1 \\ x_2 - m_2 \\ \vdots \\ x_N - m_N \end{bmatrix} = \sum_{n=1}^{N}\left(\frac{x_n - m_n}{\sigma_n}\right)^2$$
The joint Gaussian PDF for a vector of uncorrelated random variables is then
$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^N\sigma_1^2\sigma_2^2\cdots\sigma_N^2}}\exp\left(-\frac{1}{2}\sum_{n=1}^{N}\left(\frac{x_n - m_n}{\sigma_n}\right)^2\right) = \prod_{n=1}^{N}\frac{1}{\sqrt{2\pi}\sigma_n}\exp\left(-\frac{(x_n - m_n)^2}{2\sigma_n^2}\right)$$
This shows that for any number of uncorrelated Gaussian random variables, the joint PDF factors into the product of marginal PDFs and hence uncorrelated Gaussian random variables are independent. This is a generalization of the same result for two Gaussian random variables.
Further Reading
1. Scott L. Miller, Donald Childers, Probability and random processes: with applications to
signal processing and communications, Elsevier 2012: sections 5.1 to 5.7 and 6.1 to 6.3
2. Alberto Leon-Garcia, Probability, statistics, and random processes for electrical engineering,
3rd ed. Pearson, 2007: sections 5.1 to 5.9 and 6.1 to 6.4
3. Charles W. Therrien, Probability for electrical and computer engineers, CRC Press, 2004:
chapter 5
5 Random Sums and Sequences
Many problems involve the counting of the number of occurrences of events, the measurement
of cumulative effects, or the computation of arithmetic averages in a series of measurements.
Usually these problems can be reduced to the problem of finding, exactly or approximately, the
distribution of a random variable that consists of the sum of ๐‘› independent, identically distributed
random variables. In this chapter, we investigate sums of random variables and their properties
as ๐‘› becomes large.
5.1 Independent and Identically Distributed Random Variables
In many applications, we are able to observe an experiment repeatedly. Each new observation can
occur with an independent realization of whatever random phenomena control the experiment.
This sort of situation gives rise to independent and identically distributed (IID or i.i.d.) random
variables.
Definition 5.1. Independent and Identically Distributed: A sequence of random variables
๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› is IID if
๐น๐‘‹ ๐‘– (๐‘ฅ) = ๐น๐‘‹ (๐‘ฅ) ∀๐‘– = 1, 2, ..., ๐‘›
(5.1)
and
๐น๐‘‹1,๐‘‹2,...,๐‘‹๐‘› (๐‘ฅ 1, ๐‘ฅ 2, ..., ๐‘ฅ๐‘› ) =
๐‘›
Ö
๐น๐‘‹ ๐‘– (๐‘ฅ๐‘– )
(5.2)
๐‘–=1
For continuous random variables, the CDFs can be replaced with PDFs in Equations 5.1 and 5.2, while
for discrete random variables, the CDFs can be replaced by PMFs.
Suppose, for example, we wish to measure the voltage produced by a certain sensor. The sensor
might be measuring the relative humidity outside. Our sensor converts the humidity to a voltage
level which we can then easily measure. However, as with any measuring equipment, the voltage
we measure is random due to noise generated in the sensor as well as in the measuring equipment.
Suppose the voltage we measure is represented by a random variable ๐‘‹ given by ๐‘‹ = ๐‘ฃ (โ„Ž) + ๐‘ ,
where ๐‘ฃ (โ„Ž) is the true voltage that should be presented by the sensor when the humidity is โ„Ž,
and ๐‘ is the noise in the measurement. Assuming that the noise is zero-mean, then ๐ธ [๐‘‹ ] = ๐‘ฃ (โ„Ž).
That is, on the average, the measurement will be equal to the true voltage ๐‘ฃ (โ„Ž). Furthermore,
if the variance of the noise is sufficiently small, then the measurement will tend to be close to
the true value we are trying to measure. But what if the variance is not small? Then the noise
will tend to distort our measurement making our system unreliable. In such a case, we might be
able to improve our measurement system by taking several measurements. This will allow us to
“average out” the effects of the noise.
Suppose we have the ability to make several measurements and observe a sequence of measurements ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› . It might be reasonable to expect that the noise that corrupts a given
measurement has the same distribution each time (and hence the ๐‘‹๐‘– are identically distributed)
and is independent of the noise in any other measurement (so that the ๐‘‹๐‘– are independent). Then
the ๐‘› measurements form a sequence of IID random variables. A fundamental question is then:
How do we process an IID sequence to extract the desired information from it? In the preceding
case, the parameter of interest, ๐‘ฃ (โ„Ž), happens to be the mean of the distribution of the ๐‘‹๐‘– . This
turns out to be a fairly common problem and so we address that in the following sections.
5.2 Mean and Variance of Sums of Random Variables
Let ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› be a sequence of random variables, and let ๐‘†๐‘› be their sum:
๐‘†๐‘› = ๐‘‹ 1 + ๐‘‹ 2 + ... + ๐‘‹๐‘›
It was shown in section 3.2.3 that regardless of statistical dependence of ๐‘‹๐‘– s, the expected value
of a sum of ๐‘› random variables is equal to the sum of the expected values:
๐ธ [๐‘‹ 1 + ๐‘‹ 2 + ... + ๐‘‹๐‘› ] = ๐ธ [๐‘‹ 1 ] + ๐ธ [๐‘‹ 2 ] + ... + ๐ธ [๐‘‹๐‘› ]
Thus knowledge of the means of the ๐‘‹๐‘– s suffices to find the mean of ๐‘†๐‘› . The following example
shows that in order to compute the variance of a sum of random variables, we need to know the
variances and covariances of the ๐‘‹๐‘– s.
Example 5.1
Find the variance of ๐‘ = ๐‘‹ + ๐‘Œ .
Solution. The variance of ๐‘ is:
๐‘‰ ๐ด๐‘… [๐‘ ] = ๐ธ [(๐‘ − ๐ธ [๐‘ ]) 2 ] = ๐ธ [(๐‘‹ + ๐‘Œ − ๐ธ [๐‘‹ ] − ๐ธ [๐‘Œ ]) 2 ]
= ๐ธ [((๐‘‹ − ๐ธ [๐‘‹ ]) + (๐‘Œ − ๐ธ [๐‘Œ ])) 2 ]
= ๐ธ [(๐‘‹ − ๐ธ [๐‘‹ ]) 2 + (๐‘Œ − ๐ธ [๐‘Œ ]) 2 + (๐‘‹ − ๐ธ [๐‘‹ ])(๐‘Œ − ๐ธ [๐‘Œ ]) + (๐‘Œ − ๐ธ [๐‘Œ ])(๐‘‹ − ๐ธ [๐‘‹ ])]
= ๐‘‰ ๐ด๐‘… [๐‘‹ ] + ๐‘‰ ๐ด๐‘… [๐‘Œ ] + ๐ถ๐‘‚๐‘‰ (๐‘‹, ๐‘Œ ) + ๐ถ๐‘‚๐‘‰ (๐‘Œ , ๐‘‹ )
= ๐‘‰ ๐ด๐‘… [๐‘‹ ] + ๐‘‰ ๐ด๐‘… [๐‘Œ ] + 2๐ถ๐‘‚๐‘‰ (๐‘‹, ๐‘Œ )
In general, the covariance ๐ถ๐‘‚๐‘‰ (๐‘‹, ๐‘Œ ) is not equal to zero, so the variance of a sum is not
necessarily equal to the sum of the individual variances.
The result in Example 5.1 can be generalized to the case of ๐‘› random variables:
๐‘‰ ๐ด๐‘… [๐‘‹ 1 + ๐‘‹ 2 + ... + ๐‘‹๐‘› ] = ๐ธ [
=
=
๐‘›
Õ
(๐‘‹ ๐‘— − ๐ธ [๐‘‹ ๐‘— ])
๐‘—=1
๐‘›
๐‘›
ÕÕ
๐‘›
Õ
(๐‘‹๐‘˜ − ๐ธ [๐‘‹๐‘˜ ])]
๐‘˜=1
๐ธ [(๐‘‹ ๐‘— − ๐ธ [๐‘‹ ๐‘— ]) (๐‘‹๐‘˜ − ๐ธ [๐‘‹๐‘˜ ])]
๐‘—=1 ๐‘˜=1
๐‘›
Õ
๐‘› Õ
๐‘›
Õ
๐ถ๐‘‚๐‘‰ (๐‘‹ ๐‘— , ๐‘‹๐‘˜ )
๐‘‰ ๐ด๐‘… [๐‘‹๐‘˜ ] +
(5.3)
๐‘—=1 ๐‘˜=1
๐‘—≠๐‘˜
๐‘˜=1
Thus in general, the variance of a sum of random variables is not equal to the sum of the individual
variances.
An important special case is when the ๐‘‹ ๐‘— s are independent random variables. If ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› are
independent random variables, then ๐ถ๐‘‚๐‘‰ (๐‘‹ ๐‘— , ๐‘‹๐‘˜ ) = 0 for ๐‘— ≠ ๐‘˜ and:
๐‘‰ ๐ด๐‘… [๐‘‹ 1 + ๐‘‹ 2 + ... + ๐‘‹๐‘› ] =
๐‘›
Õ
๐‘‰ ๐ด๐‘… [๐‘‹๐‘˜ ]
(5.4)
๐‘˜=1
Now suppose ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› are ๐‘› IID random variables, each with mean ๐‘š and variance ๐œŽ 2 , then
the sum of ๐‘‹๐‘– s, ๐‘†๐‘› , has the following mean:
๐ธ [๐‘†๐‘› ] = ๐ธ [๐‘‹ 1 ] + ๐ธ [๐‘‹ 2 ] + ... + ๐ธ [๐‘‹๐‘› ] = ๐‘›๐‘š
(5.5)
The covariance of pairs of independent random variables is zero, so:
๐‘‰ ๐ด๐‘… [๐‘†๐‘› ] =
๐‘›
Õ
๐‘‰ ๐ด๐‘… [๐‘‹๐‘˜ ] = ๐‘›๐‘‰ ๐ด๐‘… [๐‘‹๐‘– ] = ๐‘›๐œŽ 2
(5.6)
๐‘˜=1
5.3 The Sample Mean
Definition 5.2. Sample Mean: Let ๐‘‹ be a random variable for which the mean, ๐ธ [๐‘‹ ] = ๐‘š is
unknown. Let ๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› denote ๐‘› independent, repeated measurements of ๐‘‹ ; i.e. ๐‘‹๐‘– s are IID
random variables with the same PDF as ๐‘‹ . The sample mean of the sequence is used to estimate
๐ธ [๐‘‹ ]:
๐‘›
1Õ
๐‘€๐‘› =
๐‘‹๐‘—
(5.7)
๐‘› ๐‘—=1
The sample variance is then defined as:
๐œŽ๐‘›2
๐‘›
1Õ
=
(๐‘‹ ๐‘— − ๐‘€๐‘› ) 2
๐‘› ๐‘—=1
(5.8)
The sample mean is itself a random variable, so it will exhibit random variation. Our aim is to
verify if ๐‘€๐‘› can be a good estimator of ๐ธ [๐‘‹ ] = ๐‘š. A good estimator is expected to have the
following two properties:
1. On the average, it should give the correct expected value (with no bias): ๐ธ [๐‘€๐‘› ] = ๐‘š
2. It should not vary too much about the correct value of this parameter, that is, ๐ธ [(๐‘€๐‘› − ๐‘š) 2 ]
(variance) is small.
The expected value of the sample mean is given by:
๐ธ [๐‘€๐‘› ] = ๐ธ [
๐‘›
๐‘›
1Õ
1Õ
๐‘‹๐‘—] =
๐ธ [๐‘‹ ๐‘— ] = ๐‘š
๐‘› ๐‘—=1
๐‘› ๐‘—=1
(5.9)
since ๐ธ [๐‘‹ ๐‘— ] = ๐ธ [๐‘‹ ] = ๐‘š for all ๐‘—. Thus the sample mean is equal to ๐ธ [๐‘‹ ] = ๐‘š on the average.
For this reason, we say that the sample mean is an unbiased estimator for ๐‘š.
The mean square error of the sample mean about ๐‘š is equal to the variance of ๐‘€๐‘› that is,
๐ธ [(๐‘€๐‘› − ๐‘š) 2 ] = ๐ธ [(๐‘€๐‘› − ๐ธ [๐‘€๐‘› ]) 2 ]
92
(5.10)
Note that ๐‘€๐‘› = ๐‘†๐‘› /๐‘› where ๐‘†๐‘› = ๐‘‹ 1 + ๐‘‹ 2 + ... + ๐‘‹๐‘› . From Equation 5.6, ๐‘‰ ๐ด๐‘… [๐‘†๐‘› ] = ๐‘›๐œŽ 2 , since the
๐‘‹ ๐‘— s are IID random variables. Thus
๐‘‰ ๐ด๐‘… [๐‘€๐‘› ] =
๐‘›๐œŽ 2 ๐œŽ 2
1
๐‘‰
๐ด๐‘…
[๐‘†
]
=
=
๐‘›
๐‘›2
๐‘›2
๐‘›
(5.11)
Therefore the variance of the sample mean approaches zero as the number of samples is increased.
This implies that the probability that the sample mean is close to the true mean approaches one as
๐‘› becomes very large. We can formalize this statement by using the Chebyshev inequality from
Equation 3.127:
๐‘‰ ๐ด๐‘… [๐‘€๐‘› ]
๐‘ƒ (|๐‘€๐‘› − ๐ธ [๐‘€๐‘› ] | ≥ ๐œ€) ≤
(5.12)
๐œ€2
Substituting for ๐ธ [๐‘€๐‘› ] and ๐‘‰ ๐ด๐‘… [๐‘€๐‘› ], we obtain
๐‘ƒ (|๐‘€๐‘› − ๐‘š| ≥ ๐œ€) ≤
๐œŽ2
๐‘›๐œ€ 2
(5.13)
If we consider the complement, we obtain
๐‘ƒ (|๐‘€๐‘› − ๐‘š| < ๐œ€) ≥ 1 −
๐œŽ2
๐‘›๐œ€ 2
(5.14)
Thus for any choice of error ๐œ€ and probability 1 − ๐›ฟ, we can select the number of samples ๐‘› so
that ๐‘€๐‘› is within ๐œ€ of the true mean with probability 1 − ๐›ฟ or greater. The following example
illustrates this.
Example 5.2
A voltage of constant, but unknown, value is to be measured. Each measurement ๐‘‹ ๐‘— is
actually the sum of the desired voltage ๐‘ฃ and a noise voltage ๐‘ ๐‘— of zero mean and standard
deviation of 1 microvolt (๐œ‡๐‘‰ ):
๐‘‹๐‘— = ๐‘ฃ + ๐‘๐‘—
Assume that the noise voltages are independent random variables. How many measurements
are required so that the probability that ๐‘€๐‘› is within ๐œ€ = 1๐œ‡๐‘‰ of the true mean is at least 0.99?
Solution. Each measurement ๐‘‹ ๐‘— has mean ๐‘ฃ and variance 1, so from Equation 5.14 we require
that ๐‘› satisfy:
$$1 - \frac{\sigma^2}{n\varepsilon^2} = 1 - \frac{1}{n} = 0.99$$
This implies that ๐‘› = 100. Thus if we were to repeat the measurement 100 times and compute
the sample mean, on the average, at least 99 times out of 100, the resulting sample mean
will be within 1๐œ‡๐‘‰ of the true mean.
5.4 Laws of Large Numbers
Note that if we let ๐‘› approach infinity in Equation 5.14 we obtain
lim ๐‘ƒ (|๐‘€๐‘› − ๐‘š| < ๐œ€) = 1
๐‘›→∞
(5.15)
Equation 5.14 requires that the ๐‘‹ ๐‘— s have finite variance. It can be shown that this limit holds even
if the variance of the ๐‘‹ ๐‘— s does not exist.
Theorem 5.1: Weak Law of Large Numbers
Let ๐‘‹ 1, ๐‘‹ 2, ... be a sequence of IID random variables with finite mean ๐ธ [๐‘‹ ] = ๐‘š, then for
๐œ€ > 0,
lim ๐‘ƒ (|๐‘€๐‘› − ๐‘š| < ๐œ€) = 1
(5.16)
๐‘›→∞
The weak law of large numbers states that for a large enough fixed value of ๐‘›, the sample mean
using ๐‘› samples will be close to the true mean with high probability. The weak law of large
numbers does not address the question about what happens to the sample mean as a function
of ๐‘› as we make additional measurements. This question is taken up by the strong law of large
numbers.
Suppose we make a series of independent measurements of the same random variable. Let
๐‘‹ 1, ๐‘‹ 2, ... be the resulting sequence of IID random variables with mean ๐‘š. Now consider the
sequence of sample means that results from the above measurements: ๐‘€1, ๐‘€2, ... where ๐‘€ ๐‘— is
the sample mean computed using ๐‘‹ 1 through ๐‘‹ ๐‘— . We expect that with high probability, each
particular sequence of sample means approaches ๐‘š and stays there:
๐‘ƒ ( lim ๐‘€๐‘› = ๐‘š) = 1
๐‘›→∞
(5.17)
that is, with virtual certainty, every sequence of sample mean calculations converges to the true
mean of the quantity (The proof of this result is beyond the level of this unit).
Theorem 5.2: Strong Law of Large Numbers
Let ๐‘‹ 1, ๐‘‹ 2, ... be a sequence of IID random variables with finite mean ๐ธ [๐‘‹ ] = ๐‘š, and finite
variance, then,
๐‘ƒ ( lim ๐‘€๐‘› = ๐‘š) = 1
(5.18)
๐‘›→∞
Equation 5.18 appears similar to Equation 5.16, but in fact it makes a dramatically different
statement. It states that with probability 1, every sequence of sample mean calculations will
eventually approach and stay close to ๐ธ [๐‘‹ ] = ๐‘š. This is the type of convergence we expect in
physical situations where statistical regularity holds.
Although under certain conditions, the theory predicts the convergence of sample means to
expected values, there are still gaps between the mathematical theory and the real world (i.e., we
can never actually carry out an infinite number of measurements and compute an infinite number
of sample means). Nevertheless, the strong law of large numbers demonstrates the remarkable
consistency between the theory and the observed physical behavior.
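The behaviour described by the laws of large numbers is easy to visualise numerically. The following sketch (illustrative only, assuming NumPy) computes the running sample mean of IID uniform(0, 1) samples and shows it settling near the true mean of 0.5 as n grows.

```python
# Running sample mean of IID uniform(0,1) variables approaching the true mean 0.5.
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, size=100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)   # M_1, M_2, ..., M_n

for n in (10, 100, 1_000, 100_000):
    print(n, running_mean[n - 1])   # approaches 0.5
```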
Note that the relative frequencies discussed in previous chapters are special cases of sample averages.
If we apply the weak law of large numbers to the relative frequency of an event ๐ด, ๐‘“๐ด (๐‘›), in a
sequence of independent repetitions of a random experiment, we obtain
lim ๐‘ƒ (|๐‘“๐ด (๐‘›) − ๐‘ƒ (๐ด)| < ๐œ€) = 1
๐‘›→∞
(5.19)
If we apply the strong law of large numbers, we obtain:
๐‘ƒ ( lim ๐‘“๐ด (๐‘›) = ๐‘ƒ (๐ด)) = 1
๐‘›→∞
94
(5.20)
Example 5.3
In order to estimate the probability of an event ๐ด, a sequence of Bernoulli trials is carried
out and the relative frequency of ๐ด is observed. How large should ๐‘› be in order to have a
0.95 probability that the relative frequency is within 0.01 of ๐‘ = ๐‘ƒ (๐ด)?
Solution. Let ๐‘‹ = ๐ผ๐ด be the indicator function of ๐ด. From Equations 3.45 and 3.46 we have
that the mean of is ๐‘š = ๐‘ and the variance is ๐œŽ 2 = ๐‘ (1 − ๐‘). Since ๐‘ is unknown, ๐œŽ 2 is also
unknown. However, it is easy to show that ๐‘ (1 − ๐‘) is at most 1/4 for 0 ≤ ๐‘ ≤ 1 Therefore,
by Equation 5.13,
๐œŽ2
1
๐‘ƒ (|๐‘“๐ด (๐‘›) − ๐‘ | ≥ ๐œ€) ≤ 2 ≤
๐‘›๐œ€
4๐‘›๐œ€ 2
The desired accuracy is ๐œ€ = 0.01 and the desired probability is:
1 − 0.95 =
1
4๐‘›๐œ€ 2
We then solve this for ๐‘› and obtain ๐‘› = 50, 000. It has already been pointed out that the
Chebyshev inequality gives very loose bounds, so we expect that this value for ๐‘› is probably
overly conservative. In the next section, we present a better estimate for the required value
of ๐‘›.
5.5 The Central Limit Theorem
Probably the most important result dealing with sums of random variables is the central limit
theorem which states that under some mild conditions, these sums converge to a Gaussian random
variable in distribution. This result provides the basis for many theoretical models of random
phenomena. The central limit theorem explains why the Gaussian random variable appears in so
many diverse applications. In nature, many macroscopic phenomena result from the addition of
numerous independent, microscopic processes; this gives rise to the Gaussian random variable.
In many man-made problems, we are interested in averages that often consist of the sum of
independent random variables. This again gives rise to the Gaussian random variable.
Let ๐‘‹ 1, ๐‘‹ 2, ... be a sequence of IID random variables with finite mean ๐ธ [๐‘‹ ] = ๐‘š, and finite
variance ๐œŽ 2 , and let ๐‘†๐‘› be the sum of the first ๐‘› random variables in the sequence. We present
the central limit theorem, which states that, as ๐‘› becomes large, the CDF of a properly normalized ๐‘†๐‘› approaches that of a Gaussian random variable. This enables us to approximate
the CDF of ๐‘†๐‘› with that of a Gaussian random variable. We know from equations 5.5 and 5.6
that if the ๐‘‹ ๐‘— s are IID, then ๐‘†๐‘› has mean ๐‘›๐‘š and variance ๐‘›๐œŽ 2 . The central limit theorem states
that the CDF of a suitably normalized version of ๐‘†๐‘› approaches that of a Gaussian random variable.
Central Limit Theorem:
Let ๐‘‹ ๐‘— be a sequence of IID random variables with mean ๐‘š and variance ๐œŽ 2 . Define a new random
variable, ๐‘ , as a (shifted and scaled) sum of the ๐‘‹ ๐‘— s:
๐‘›
1 Õ ๐‘‹๐‘— − ๐‘š
๐‘=√
๐‘› ๐‘—=1 ๐œŽ
(5.21)
Note that ๐‘ has been constructed such that ๐ธ [๐‘ ] = 0 and ๐‘‰ ๐ด๐‘… [๐‘ ] = 1. In the limit as ๐‘› approaches
infinity, the random variable ๐‘ converges in distribution to a standard Gaussian random variable.
Several remarks about this theorem are in order at this point. First, no restrictions were put on
the distribution of the ๐‘‹ ๐‘— s, since it applies to any infinite sum of IID random variables, regardless
of the distribution.
From a practical standpoint, the central limit theorem implies that for the sum of a sufficiently
large (but finite) number of random variables, the sum is approximately Gaussian distributed. Of
course, the goodness of this approximation depends on how many terms are in the sum and also
the distribution of the individual terms in the sum.
Figures 5.1 to 5.3 compare the exact CDF and the Gaussian approximation for the sums of Bernoulli,
uniform, and exponential random variables, respectively. In all three cases, it can be seen that the
approximation improves as the number of terms in the sum increases.
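The comparisons shown in Figures 5.1 to 5.3 can be reproduced approximately with a few lines of code. The sketch below is illustrative only (assuming NumPy and SciPy) and compares the empirical CDF of a sum of five uniform(0, 1) random variables with the Gaussian CDF of the same mean and variance at a few points.

```python
# CLT illustration: empirical CDF of a sum of uniforms vs. the matching Gaussian CDF.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, trials = 5, 200_000
S = rng.uniform(0, 1, size=(trials, n)).sum(axis=1)   # sum of 5 uniform(0,1) variables

mean, std = n * 0.5, np.sqrt(n / 12.0)                # n*m and sqrt(n*sigma^2)
for s in (1.5, 2.5, 3.5):
    empirical = np.mean(S <= s)
    gaussian = norm.cdf(s, loc=mean, scale=std)
    print(s, empirical, gaussian)                     # already close for n = 5
```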
Figure 5.1: (a) The CDF of the sum of five independent Bernoulli random variables with ๐‘ = 1/2
and the CDF of a Gaussian random variable of the same mean and variance. (b) The
CDF of the sum of 25 independent Bernoulli random variables with ๐‘ = 1/2 and the
CDF of a Gaussian random variable of the same mean and variance.
Figure 5.2: The CDF of the sum of five independent discrete, uniform random variables from the
set {0, 1, 2, ..., 9} and the CDF of a Gaussian random variable of the same mean and
variance.
The central limit theorem guarantees that the sum converges in "distribution" to Gaussian, but
this does not necessarily imply convergence in "density". As a counter example, suppose that the
Figure 5.3: (a) The CDF of the sum of five independent exponential random variables of mean 1
and the CDF of a Gaussian random variable of the same mean and variance. (b) The
CDF of the sum of 50 independent exponential random variables of mean 1 and the
CDF of a Gaussian random variable of the same mean and variance.
๐‘‹ ๐‘— s are discrete random variables, then the sum must also be a discrete random variable. Strictly
speaking, the density of ๐‘ would then not exist, and it would not be meaningful to say that the
density of ๐‘ is Gaussian. From a practical standpoint, the probability density of ๐‘ would be a
series of impulses. While the envelope of these impulses would have a Gaussian shape to it, the
density is clearly not Gaussian. If the ๐‘‹ ๐‘— s are continuous random variables, the convergence in
density generally occurs as well.
The IID assumption is not needed in many cases. The central limit theorem also applies to
independent random variables that are not necessarily identically distributed. Loosely speaking, all that is required is that no term (or small number of terms) dominates the sum, and
the resulting infinite sum of independent random variables will approach a Gaussian distribution in the limit as the number of terms in the sum goes to infinity. The central limit theorem
also applies to some cases of dependent random variables, but we will not consider such cases here.
Example 5.4
The times between events in a certain random experiment are IID exponential random variables with mean 𝑚 seconds. Find the probability that the 1000th event occurs in the time interval
(1000 ± 50)๐‘š.
Solution. Let ๐‘‹๐‘– be the time between events and let ๐‘†๐‘› be the time of the ๐‘›th event, then
๐‘†๐‘› = ๐‘‹ 1 + ๐‘‹ 2 + ... + ๐‘‹๐‘› . We know that the mean and variance of the exponential random
variable ๐‘‹ ๐‘— is given by ๐ธ [๐‘‹ ๐‘— ] = ๐‘š and ๐‘‰ ๐ด๐‘… [๐‘‹ ๐‘— ] = ๐‘š 2 . The mean and variance of ๐‘†๐‘› are
then ๐ธ [๐‘†๐‘› ] = ๐‘›๐ธ [๐‘‹ ๐‘— ] = ๐‘›๐‘š and ๐‘‰ ๐ด๐‘… [๐‘†๐‘› ] = ๐‘›๐‘‰ ๐ด๐‘… [๐‘‹ ๐‘— ] = ๐‘›๐‘š 2 . The central limit theorem
then gives
950๐‘š − 1000๐‘š
1050๐‘š − 1000๐‘š
≤ ๐‘๐‘› ≤
)
√
√
๐‘š 1000
๐‘š 1000
' ๐‘„ (1.58) − ๐‘„ (−1.58) = 1 − 2๐‘„ (1.58) = 0.8866
๐‘ƒ (950๐‘š ≤ ๐‘† 1000 ≤ 1050๐‘š) = ๐‘ƒ (
Thus as ๐‘› becomes large, ๐‘†๐‘› is very likely to be close to its mean ๐‘›๐‘š. We can therefore
conjecture that the long-term average rate at which events occur is
๐‘›
1
๐‘› events
=
= ๐‘’๐‘ฃ๐‘’๐‘›๐‘ก๐‘ /๐‘ ๐‘’๐‘๐‘œ๐‘›๐‘‘
๐‘†๐‘› seconds ๐‘›๐‘š ๐‘š
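As an illustrative cross-check of Example 5.4 (not part of the notes), the Q-function values can be computed with scipy.stats.norm.sf, and the probability can also be estimated by direct Monte Carlo simulation with m = 1.

```python
# Numeric and Monte Carlo cross-check of Example 5.4 (with m = 1, assumes SciPy).
import numpy as np
from scipy.stats import norm

z = 50 / np.sqrt(1000)                    # (1050m - 1000m)/(m*sqrt(1000)) = 1.58
print(1 - 2 * norm.sf(z))                 # 1 - 2Q(1.58), approximately 0.886

rng = np.random.default_rng(10)
S = rng.exponential(scale=1.0, size=(10_000, 1000)).sum(axis=1)   # S_1000 samples
print(np.mean((S >= 950) & (S <= 1050)))  # Monte Carlo estimate, close to the value above
```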
5.6 Convergence of Sequences of Random Variables
We discussed the convergence of the sequence of arithmetic averages ๐‘€๐‘› of IID random variables
to the expected value ๐‘š:
๐‘€๐‘› → ๐‘š as ๐‘› → ∞
(5.22)
The weak law and strong law of large numbers describe two ways in which the sequence of random variables 𝑀𝑛 converges to the constant value given by 𝑚. In this section we consider the
more general situation where a sequence of random variables (usually not IID) ๐‘‹ 1, ๐‘‹ 2, ... converges
to some random variable ๐‘‹ :
๐‘‹๐‘› → ๐‘‹ as ๐‘› → ∞
(5.23)
We will describe several ways in which this convergence can take place. Note that Equation 5.22
is a special case of Equation 5.23 where the limiting random variable ๐‘‹ is given by the constant ๐‘š.
To understand the meaning of Equation 5.23, we first need to revisit the definition of a vector
random variable X= (๐‘‹ 1, ๐‘‹ 2, ..., ๐‘‹๐‘› ). X was defined as a function that assigns a vector of real
values to each outcome ๐œ from some sample space ๐‘†:
๐‘‹ (๐œ ) = (๐‘‹ 1 (๐œ ), ๐‘‹ 2 (๐œ ), ..., ๐‘‹๐‘› (๐œ ))
(5.24)
The randomness in the vector random variable was induced by the randomness in the underlying
probability law governing the selection of ๐œ . We obtain a sequence of random variables by letting
๐‘› increase without bound, that is, a sequence of random variables ๐‘‹ is a function that assigns a
countably infinite number of real values to each outcome ๐œ from some sample space ๐‘†:
๐‘‹ (๐œ ) = (๐‘‹ 1 (๐œ ), ๐‘‹ 2 (๐œ ), ..., ๐‘‹๐‘› (๐œ ), ...)
(5.25)
From now on, we will use the notation {๐‘‹๐‘› (๐œ )} or {๐‘‹๐‘› } instead of X(๐œ ) to denote the sequence
of random variables.
A sequence of random variables can be viewed as a sequence of functions of ๐œ . On the other hand,
it is more natural to instead imagine that each point in ๐‘†, say ๐œ , produces a particular sequence of
real numbers,
๐‘ฅ 1, ๐‘ฅ 2, ๐‘ฅ 3, ...
where ๐‘ฅ 1 = ๐‘‹ 1 (๐œ ), ๐‘ฅ 2 = ๐‘‹ 2 (๐œ ) and so on. This sequence is called the sample sequence for the
point ๐œ .
Example 5.5
Let ๐œ be selected at random from the interval ๐‘† = [0, 1] where we assume that the probability
that ๐œ is in a sub-interval of ๐‘† is equal to the length of the sub-interval. For ๐‘› = 1, 2, ... we
define the sequence of random variables:
$$V_n(\zeta) = \zeta\left(1 - \frac{1}{n}\right)$$
The two ways of looking at sequences of random variables is evident here. First, we can
view ๐‘‰๐‘› (๐œ ) as a sequence of functions of ๐œ as shown in Figure 5.4(a). Alternatively, we can
imagine that we first perform the random experiment that yields ๐œ and that we then observe
the corresponding sequence of real numbers ๐‘‰๐‘› (๐œ ) as shown in Figure 5.4(b).
Figure 5.4: Two ways of looking at sequences of random variables: (a) Sequence of random
variables as a sequence of functions of ๐œ , (b) Sequence of random variables as a
sequence of real numbers determined by ๐œ
The standard methods from calculus can be used to determine the convergence of the sample
sequence for each point ๐œ . Intuitively, we say that the sequence of real numbers ๐‘ฅ๐‘› converges
to the real number ๐‘ฅ if the difference |๐‘ฅ๐‘› − ๐‘ฅ | approaches zero as ๐‘› approaches infinity. More
formally, we say that:
The sequence ๐‘ฅ๐‘› converges to ๐‘ฅ if, given any ๐œ€ > 0, we can specify an integer ๐‘ such that for all
values of ๐‘› beyond ๐‘ we can guarantee that |๐‘ฅ๐‘› − ๐‘ฅ | < ๐œ€
Thus if a sequence converges, then for any ๐œ€ we can find an ๐‘ so that the sequence remains inside
a 2๐œ€ corridor about ๐‘ฅ, as shown in Figure 5.5.
Figure 5.5: Convergence of a sequence of numbers
If we make ๐œ€ smaller, ๐‘ becomes larger. Hence we arrive at our intuitive view that ๐‘ฅ๐‘› becomes
closer and closer to ๐‘ฅ. If the limiting value ๐‘ฅ is not known, we can still determine whether a
sequence converges by applying the Cauchy criterion:
The sequence ๐‘ฅ๐‘› converges if and only if, given ๐œ€ > 0 we can specify integer ๐‘ 0 such that for ๐‘š and
๐‘› greater than ๐‘ 0, |๐‘ฅ๐‘› − ๐‘ฅ๐‘š | < ๐œ€
The Cauchy criterion states that the maximum variation in the sequence for points beyond ๐‘ 0 is
less than ๐œ€.
Example 5.6
Let ๐‘‰๐‘› (๐œ ) be the sequence of random variables from Example 5.5. Does the sequence of real
numbers corresponding to a fixed ๐œ converge?
Solution. From Figure 5.4(a), we expect that for a fixed value ๐œ , ๐‘‰๐‘› (๐œ ) will converge to the
limit ๐œ . Therefore, we consider the difference between the ๐‘›th number in the sequence and
the limit:
$$|V_n(\zeta) - \zeta| = \left|\zeta\left(1 - \frac{1}{n}\right) - \zeta\right| = \left|\frac{\zeta}{n}\right| < \frac{1}{n}$$
where the last inequality follows from the fact that ζ is always less than one. In order to keep the above difference less than ε we choose n so that
$$|V_n(\zeta) - \zeta| < \frac{1}{n} < \varepsilon$$
that is, we select n > N = 1/ε. Thus the sequence of real numbers V_n(ζ) converges to ζ.
When we talk about the convergence of sequences of random variables, we are concerned with
questions such as: Do all (or almost all) sample sequences converge, and if so, do they all converge
to the same values or to different values? The first two definitions of convergence address these
questions.
5.6.1 Sure Convergence
Definition 5.3. Sure Convergence: The sequence of random variables {๐‘‹๐‘› (๐œ )} converges surely
to the random variable ๐‘‹ (๐œ ) if the sequence of functions ๐‘‹๐‘› (๐œ ) converges to the function ๐‘‹ (๐œ ) as
๐‘› → ∞ for all ๐œ in ๐‘†:
๐‘‹๐‘› (๐œ ) → ๐‘‹ (๐œ ) as ๐‘› → ∞ for all ๐œ ∈ ๐‘†
Example 5.7
Let ๐‘‹ be a random variable uniformly distributed over [0, 1). Then define the random
sequence
๐‘‹
๐‘‹๐‘› =
, ๐‘› = 1, 2, 3, ...
1 + ๐‘›2
In this case, for any realization ๐‘‹ = ๐‘ฅ, a sequence is produced of the form:
๐‘ฅ๐‘› =
๐‘ฅ
1 + ๐‘›2
which converges to lim๐‘›→∞ ๐‘ฅ๐‘› = 0. We say that the sequence converges surely to lim๐‘›→∞ ๐‘‹๐‘› =
0.
Sure convergence requires that the sample sequence corresponding to every ๐œ converges. Note
that it does not require that all the sample sequences converge to the same values; that is, the
sample sequences for different points ๐œ and ๐œ 0 can converge to different values.
Example 5.8
Let ๐‘‹ be a random variable uniformly distributed over [0, 1). Then define the random
sequence
๐‘›๐‘‹
, ๐‘› = 1, 2, 3, ...
๐‘‹๐‘› =
1 + ๐‘›2
In this case, for any realization ๐‘‹ = ๐‘ฅ, a sequence is produced of the form:
๐‘ฅ๐‘› =
๐‘›๐‘ฅ
1 + ๐‘›2
which converges to lim๐‘›→∞ ๐‘ฅ๐‘› = ๐‘ฅ. We say that the sequence converges surely to a random
variable lim๐‘›→∞ ๐‘‹๐‘› = ๐‘‹ . In this case, the value that the sequence converges to depends on
the particular realization of the random variable ๐‘‹ .
5.6.2 Almost-Sure Convergence
Definition 5.4. Almost-Sure Convergence: The sequence of random variables {๐‘‹๐‘› (๐œ )} converges
almost surely to the random variable ๐‘‹ (๐œ ) if the sequence of functions ๐‘‹๐‘› (๐œ ) converges to the function
๐‘‹ (๐œ ) as ๐‘› → ∞ for all ๐œ in ๐‘†, except possibly on a set of probability zero; that is:
๐‘ƒ (๐œ : ๐‘‹๐‘› (๐œ ) → ๐‘‹ (๐œ ) as ๐‘› → ∞) = 1
In Figure 5.6 we illustrate almost-sure convergence for the case where sample sequences converge
to the same value ๐‘ฅ; we see that almost all sequences must eventually enter and remain inside a
2๐œ€ corridor. In almost-sure convergence some of the sample sequences may not converge, but
these must all belong to ๐œ s that are in a set that has probability zero.
Figure 5.6: Almost-sure convergence for sample sequences
The strong law of large numbers is an example of almost-sure convergence. Note that sure
convergence implies almost-sure convergence.
Example 5.9
As an example of a sequence that converges almost surely, consider the random sequence
$$X_n = \frac{\sin(n\pi X)}{n\pi X}$$
where X is a random variable uniformly distributed over [0, 1). For almost every realization X = x, the sequence
$$x_n = \frac{\sin(n\pi x)}{n\pi x}$$
converges to lim_{n→∞} x_n = 0. The one exception is the realization X = 0, in which case the sequence becomes x_n = 1, which converges, but not to the same value. Therefore, we say that the sequence X_n converges almost surely to lim_{n→∞} X_n = 0, since the one exception to this convergence occurs with zero probability; that is, P(X = 0) = 0.
5.6.3 Convergence in Probability
Definition 5.5. Convergence in Probability: The sequence of random variables {๐‘‹๐‘› (๐œ )} converges in probability to the random variable ๐‘‹ (๐œ ) if for any ๐œ€ > 0:
๐‘ƒ (|๐‘‹๐‘› (๐œ ) − ๐‘‹ (๐œ )| > ๐œ€) → 0 as ๐‘› → ∞
In Figure 5.7 we illustrate convergence in probability for the case where the limiting random
variable is a constant ๐‘ฅ; we see that at the specified time ๐‘›₀ most sample sequences must be within
๐œ€ of ๐‘ฅ. However, the sequences are not required to remain inside a 2๐œ€ corridor. The weak law of
large numbers is an example of convergence in probability. Thus we see that the fundamental
difference between almost-sure convergence and convergence in probability is the same as that
between the strong law and the weak law of large numbers.
Figure 5.7: Convergence in probability for sample sequences
Example 5.10
Let ๐‘‹๐‘˜ , ๐‘˜ = 1, 2, 3, ... be a sequence of IID Gaussian random variables with mean ๐‘š and
Í
variance ๐œŽ 2 . Suppose we form the sequence of sample means ๐‘€๐‘› = ๐‘›1 ๐‘›๐‘˜=1 ๐‘‹๐‘˜ , ๐‘› = 1, 2, 3, ....
Since the ๐‘€๐‘› are linear combinations of Gaussian random variables, then they are also
Gaussian with ๐ธ [๐‘€๐‘› ] = ๐‘š and ๐‘‰ ๐ด๐‘… [๐‘€๐‘› ] = ๐œŽ 2 /๐‘›. Therefore, the probability that the sample
mean is removed from the true mean by more than ๐œ€ is
๐‘ƒ (|๐‘€๐‘› − ๐‘š| > ๐œ€) = 2๐‘„ (๐œ€√๐‘›/๐œŽ)
As ๐‘› → ∞, this quantity clearly approaches zero, so that this sequence of sample means
converges in probability to the true mean.
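This behaviour can also be observed by simulation. The following is a minimal sketch, assuming NumPy is available and using the illustrative values ๐‘š = 1, ๐œŽ = 2 and ๐œ€ = 0.1: it estimates ๐‘ƒ (|๐‘€๐‘› − ๐‘š| > ๐œ€) from repeated trials and compares it with 2๐‘„ (๐œ€√๐‘›/๐œŽ).

```python
# Minimal sketch of convergence in probability (Example 5.10); assumes NumPy.
import math
import numpy as np

def Q(z):                                   # standard Gaussian tail probability
    return 0.5 * math.erfc(z / math.sqrt(2.0))

m, sigma, eps, trials = 1.0, 2.0, 0.1, 10_000   # illustrative parameter choices
rng = np.random.default_rng(2)

for n in (10, 100, 1000):
    M_n = rng.normal(m, sigma, size=(trials, n)).mean(axis=1)   # sample means
    empirical = np.mean(np.abs(M_n - m) > eps)
    exact = 2.0 * Q(eps * math.sqrt(n) / sigma)
    print(f"n = {n:4d}:  empirical {empirical:.4f},  theoretical {exact:.4f}")
```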
5.6.4 Convergence in the Mean Square Sense
Definition 5.6. Convergence in the Mean Square Sense: The sequence of random variables
{๐‘‹๐‘› (๐œ )} converges in the Mean Square (MS) sense to the random variable ๐‘‹ (๐œ ) if:
๐ธ [(๐‘‹๐‘› (๐œ ) − ๐‘‹ (๐œ )) 2 ] → 0 as ๐‘› → ∞
Mean square convergence is of great practical interest in electrical engineering applications
because of its analytical simplicity and because of the interpretation of ๐ธ [(๐‘‹๐‘› − ๐‘‹ ) 2 ] as the
“power” in an error signal.
Example 5.11
Consider the sequence of sample means of IID Gaussian random variables described in
Example 5.10. This sequence also converges in the MS sense since:
๐ธ [(๐‘€๐‘› − ๐‘š)²] = VAR[๐‘€๐‘› ] = ๐œŽ²/๐‘›
This sequence of variances converges to 0 as ๐‘› → ∞, thus producing convergence
of the random sequence in the MS sense.
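The same simulation idea applies here. A minimal sketch, assuming NumPy and reusing the illustrative ๐‘š and ๐œŽ from the previous sketch, estimates ๐ธ [(๐‘€๐‘› − ๐‘š)²] empirically and compares it with ๐œŽ²/๐‘›.

```python
# Minimal sketch of mean-square convergence (Example 5.11); assumes NumPy.
import numpy as np

m, sigma, trials = 1.0, 2.0, 10_000         # illustrative parameter choices
rng = np.random.default_rng(3)

for n in (10, 100, 1000):
    M_n = rng.normal(m, sigma, size=(trials, n)).mean(axis=1)
    mse = np.mean((M_n - m) ** 2)           # estimate of E[(M_n - m)^2]
    print(f"n = {n:4d}:  empirical MSE {mse:.5f},  sigma^2/n = {sigma**2 / n:.5f}")
```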
5.6.5 Convergence in Distribution
Definition 5.7. Convergence in Distribution: The sequence of random variables {๐‘‹๐‘› } with
cumulative distribution functions {๐น๐‘› (๐‘ฅ)} converges in distribution to the random variable ๐‘‹ with
cumulative distribution ๐น (๐‘ฅ) if:
๐น๐‘› (๐‘ฅ) → ๐น (๐‘ฅ) as ๐‘› → ∞
for all ๐‘ฅ at which ๐น (๐‘ฅ) is continuous.
The central limit theorem is an example of convergence in distribution.
Example 5.12
Consider once again the sequence of sample means of IID Gaussian random variables
described in Example 5.10. Since ๐‘€๐‘› is Gaussian with mean ๐‘š and variance ๐œŽ 2 /๐‘›, its CDF
takes the form
๐‘ฅ −๐‘š
๐น๐‘€๐‘› (๐‘ฅ) = 1 − ๐‘„ ( √ )
๐œŽ/ ๐‘›
For any ๐‘ฅ > ๐‘š, lim๐‘›→∞ ๐น๐‘€๐‘› (๐‘ฅ) = 1, while for any ๐‘ฅ < ๐‘š, lim๐‘›→∞ ๐น๐‘€๐‘› (๐‘ฅ) = 0. Thus, the
limiting form of the CDF is:
lim๐‘›→∞ ๐น๐‘€๐‘› (๐‘ฅ) = ๐‘ข (๐‘ฅ − ๐‘š)
where ๐‘ข (๐‘ฅ) is the unit step function. Note that the point ๐‘ฅ = ๐‘š is not a point of continuity
of this limiting CDF, so convergence of ๐น๐‘€๐‘› (๐‘ฅ) is not required there.
It should be noted, as was seen in the previous sequence of examples, that a random sequence
may converge in several of these senses at once. In fact, some forms of convergence necessarily
imply others. Table 5.1 illustrates these relationships. For example,
convergence in distribution is the weakest form of convergence and does not necessarily imply
any of the other forms of convergence. Conversely, if a sequence converges in any of the other
modes presented, it will also converge in distribution.
Table 5.1: Relationships between convergence modes, showing whether the convergence mode in
each row implies the convergence mode in each column
This ↓ implies this →   Sure   Almost Sure   Probability   Mean Square   Distribution
Sure                     X        Yes            Yes            No            Yes
Almost Sure              No       X              Yes            No            Yes
Probability              No       No             X              No            Yes
Mean Square              No       No             Yes            X             Yes
Distribution             No       No             No             No            X
5.7 Confidence Intervals
Consider once again the problem of estimating the mean of a distribution from ๐‘› IID random
variables. When the sample mean ๐‘€๐‘› is formed, it could be said that (hopefully) the true mean is
“close” to the sample mean. While this is a vague statement, with the help of the central limit
theorem, we can make the statement mathematically precise.
If a sufficient number of samples are taken, the sample mean can be well approximated by a
Gaussian random variable with a mean of ๐ธ [๐‘€๐‘› ] = ๐‘š (Equation 5.9) and variance of ๐‘‰ ๐ด๐‘… [๐‘€๐‘› ] =
๐œŽ 2 /๐‘› (Equation 5.11). Using the Gaussian distribution, the probability of the sample mean being
within some amount ๐œ€ of the true mean can be easily calculated,
๐‘ƒ (|๐‘€๐‘› − ๐‘š| < ๐œ€) = ๐‘ƒ (๐‘š − ๐œ€ < ๐‘€๐‘› < ๐‘š + ๐œ€) = 1 − 2๐‘„ (๐œ€√๐‘›/๐œŽ)    (5.26)
Stated another way, let ๐œ€๐‘Ž be the value of ๐œ€ such that the right hand side of the above equation is
1 − ๐‘Ž; that is,
๐œ€๐‘Ž = (๐œŽ/√๐‘›) ๐‘„⁻¹(๐‘Ž/2)    (5.27)
where ๐‘„ −1 is the inverse of the Q-function. Then, given ๐‘› samples which lead to a sample mean
๐‘€๐‘› , the true mean will fall in the interval (๐‘€๐‘› − ๐œ€๐‘Ž , ๐‘€๐‘› + ๐œ€๐‘Ž ) with probability 1 − ๐‘Ž. The interval
(๐‘€๐‘› −๐œ€๐‘Ž , ๐‘€๐‘› +๐œ€๐‘Ž ) is referred to as the confidence interval while the probability is the confidence
level or, alternatively, is the level of significance. The confidence level and level of significance
are usually expressed as percentages. The corresponding values of the quantity ๐‘ ๐‘Ž = ๐‘„ −1 (๐‘Ž/2)
are provided in Table 5.2 for several typical values of ๐‘Ž.
Table 5.2: Reference values to calculate confidence intervals
Confidence Level (1 − ๐‘Ž) × 100%   Level of Significance ๐‘Ž × 100%   ๐‘ ๐‘Ž = ๐‘„⁻¹(๐‘Ž/2)
90                                 10                                1.64
95                                 5                                 1.96
99                                 1                                 2.58
99.9                               0.1                               3.29
99.99                              0.01                              3.89
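The last column of Table 5.2 can be reproduced directly, since ๐‘„⁻¹(๐‘Ž/2) is the (1 − ๐‘Ž/2) quantile of the standard normal distribution. A minimal sketch using only the Python standard library:

```python
# Minimal sketch reproducing c_a = Q^{-1}(a/2) from Table 5.2 (standard library only).
from statistics import NormalDist

std_normal = NormalDist()                   # zero mean, unit standard deviation
for a in (0.10, 0.05, 0.01, 0.001, 0.0001):
    c_a = std_normal.inv_cdf(1 - a / 2)     # Q^{-1}(a/2)
    print(f"a = {a:.4f}  ->  c_a = {c_a:.2f}")
```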
Example 5.13
Suppose the IID random variables each have a variance of ๐œŽ 2 = 4. A sample of ๐‘› = 100 values
is taken and the sample mean is found to be ๐‘€๐‘› = 10.2. (a) Determine the 95% confidence
interval for the true mean ๐‘š. (b) Suppose we want to be 99% confident that the true mean
falls within ±0.5 of the sample mean. How many samples need to be taken in
forming the sample mean?
Solution. (a) In this case ๐œŽ/√๐‘› = 0.2, and the appropriate value of ๐‘ ๐‘Ž is ๐‘ 0.05 = 1.96 from
Table 5.2. The 95% confidence interval is then:
(๐‘€๐‘› − (๐œŽ/√๐‘›)๐‘ 0.05 , ๐‘€๐‘› + (๐œŽ/√๐‘›)๐‘ 0.05 ) = (9.808, 10.592)
(b) To ensure this level of confidence, it is required that
(๐œŽ/√๐‘›)๐‘ 0.01 = 0.5
and therefore
๐‘› = (๐‘ 0.01 ๐œŽ/0.5)² = (2.58 × 2/0.5)² = 106.5
Since ๐‘› must be an integer, it is concluded that at least 107 samples must be taken.
In summary, to achieve a level of significance specified by ๐‘Ž, we note that by virtue of the central
limit theorem, the normalized sample mean
๐‘ˆ๐‘› = (๐‘€๐‘› − ๐‘š)/(๐œŽ/√๐‘›)    (5.28)
approximately follows a standard normal distribution. We can then easily specify a symmetric
interval about zero in which a standard Gaussian random variable will fall with probability 1 − ๐‘Ž.
As long as ๐‘› is sufficiently large, the original distribution of the IID random variables does not
matter.
Note that in order to form the confidence interval as specified, the standard deviation of the ๐‘‹๐‘—
must be known. While this may be a reasonable assumption in some cases, in many applications
the standard deviation is also unknown. The most obvious thing to do in that case would be to
replace the true standard deviation with the sample standard deviation.
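A minimal sketch of that substitution is given below, assuming NumPy is available; the data are simulated purely for illustration. For small ๐‘› a Student-t quantile would normally be used in place of the Gaussian one, but for large ๐‘› the difference is minor.

```python
# Minimal sketch: confidence interval using the sample standard deviation (assumes NumPy).
import math
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(10.0, 2.0, size=200)         # hypothetical data; true sigma treated as unknown

n, a = len(x), 0.05                         # 95% confidence level
M_n = x.mean()
s = x.std(ddof=1)                           # sample standard deviation
c_a = NormalDist().inv_cdf(1 - a / 2)
half = c_a * s / math.sqrt(n)
print(f"95% confidence interval: ({M_n - half:.3f}, {M_n + half:.3f})")
```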
Further Reading
1. Scott L. Miller, Donald Childers, Probability and random processes: with applications to
signal processing and communications, Elsevier 2012: chapter 7.
2. Alberto Leon-Garcia, Probability, statistics, and random processes for electrical engineering,
3rd ed. Pearson, 2007: chapter 7.