Unit 1 Probability

Probabilities and Events
When we do an experiment, the set of all possible outcomes of the experiment
is called the sample space.
Example 1 (i) Consider an experiment consisting of flipping a coin, and let the
outcome be the side that lands face up. Thus, the sample space of this experiment
is

S = {h, t},

where the outcome is h if the coin shows heads and t if it shows tails.
(ii) If the experiment consists of rolling a pair of dice – with the outcome
being the pair (i, j), where i is the value that appears on the first die and j the
value on the second – then the sample space consists of 36 outcomes (What are
they?).
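As a quick illustrative sketch (not part of the text), the 36 outcomes can be enumerated in Python:

```python
# Enumerate the sample space of rolling a pair of dice:
# every ordered pair (i, j) with i, j in {1, ..., 6}.
sample_space = [(i, j) for i in range(1, 7) for j in range(1, 7)]

print(len(sample_space))  # 36
```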
Consider once again an experiment with the sample space S = {1, 2, …, m}.
We will now suppose that there are numbers p1, …, pm, with

pi ≥ 0, i = 1, …, m, and p1 + p2 + … + pm = 1,

and such that pi is the probability that i is the outcome of the experiment.
Any set of possible outcomes of the experiment is called an event. That is,
an event is a subset of S, the set of all possible outcomes. For any event A, we
say that A occurs whenever the outcome of the experiment is a point in A. The
probability of A is defined by

P(A) = Σ_{i∈A} pi        (1)

This implies

P(S) = Σ_i pi = 1        (2)
Example 2
Suppose the experiment consists of rolling a pair of fair dice. If A
is the event that the sum of the dice is equal to 7, then
P(A) = 6/36 = 1/6 (why?)
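The count behind this answer can be checked by enumeration; a minimal sketch in Python (illustrative only):

```python
# Count the outcomes of two fair dice whose sum is 7; each of the
# 36 equally likely outcomes has probability 1/36.
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = [(i, j) for (i, j) in outcomes if i + j == 7]
p = Fraction(len(A), len(outcomes))

print(A)  # [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]
print(p)  # 1/6
```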
For any event A, we let A^c, called the complement of A, be the event
containing all those outcomes in S that are not in A. We see that

P(A) + P(A^c) = 1        (3)

Example 3
Let the experiment consist of rolling a pair of dice. If A is the
event that the sum is 10 and B is the event that both dice land on even numbers
greater than 3, then

A = {(4,6), (5,5), (6,4)},  B = {(4,4), (4,6), (6,4), (6,6)}.

We have

A ∪ B = {(4,4), (4,6), (5,5), (6,4), (6,6)}  (union of events)
AB = {(4,6), (6,4)}  (intersection of events)
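These event operations map directly onto set operations; a small sketch (our own, not from the text):

```python
# Events from Example 3 as Python sets; union and intersection of
# events correspond to the set operations | and &.
A = {(4, 6), (5, 5), (6, 4)}           # sum of the dice is 10
B = {(4, 4), (4, 6), (6, 4), (6, 6)}   # both dice even and greater than 3

print(A | B)  # union: five outcomes
print(A & B)  # intersection: {(4, 6), (6, 4)}
```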
The addition theorem of probability states that

P(A ∪ B) = P(A) + P(B) − P(AB)
Example 4 Suppose the probability that the Dow-Jones stock index increases
today is 0.54, that it increases tomorrow is 0.54, and that it increases on both
days is 0.28. What is the probability that it does not increase on either day?
Letting A and B denote the events that the index increases today and tomorrow,
respectively, the addition theorem gives

P(A ∪ B) = P(A) + P(B) − P(AB) = 0.54 + 0.54 − 0.28 = 0.80

Therefore, the probability that it increases on neither day is 1 − 0.80 = 0.20.
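The same arithmetic, written out as a short Python sketch:

```python
# Example 4 via the addition theorem: P(A ∪ B) = P(A) + P(B) - P(AB).
p_A, p_B, p_AB = 0.54, 0.54, 0.28
p_union = p_A + p_B - p_AB   # index increases on at least one day
p_neither = 1 - p_union      # complement: increases on neither day

print(p_union, p_neither)  # 0.80 and 0.20, up to floating-point rounding
```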
If AB = ∅ (the empty set), we say that A and B are mutually exclusive or disjoint.
In that case,

P(A ∪ B) = P(A) + P(B)
Conditional Probability
The conditional probability of A given that B has occurred is denoted by
P(A | B) and is given by the formula

P(A | B) = P(AB) / P(B)        (4)
Example 5 Suppose that two balls are to be drawn, without replacement, from
an urn that contains 9 blue and 7 yellow balls. If each ball drawn is equally likely
to be any of the balls in the urn at the time, what is the probability that both balls
are blue?
Solution Let B1 and B2 denote, respectively, the events that the first and
second balls withdrawn are blue. Then

P(B1 B2) = P(B2 | B1) P(B1) = (8/15)(9/16) = 3/10
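The two factors can be checked with exact fractions; a minimal sketch (our own):

```python
# Example 5 with exact arithmetic: P(B1 B2) = P(B1) * P(B2 | B1).
from fractions import Fraction

p_b1 = Fraction(9, 16)           # 9 blue among the 16 balls on the first draw
p_b2_given_b1 = Fraction(8, 15)  # 8 blue left among 15 after a blue is drawn
p_both_blue = p_b1 * p_b2_given_b1

print(p_both_blue)  # 3/10
```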
We say that A and B are independent if knowing that one occurs does not
change the probability of the other; equivalently, if

P(AB) = P(A)P(B)        (5)

Example 6
Suppose that, with probability 0.52, the closing price of a stock is
at least as high as the close on the previous day, and that the results for
successive days are independent. Find the probability that the closing price goes
down on each of the next four days, but not on the following day.
Solution Let Ai be the event that the closing price goes down on day i, so
P(Ai) = 1 − 0.52 = 0.48. Then, by independence, we have

P(A1 A2 A3 A4 A5^c) = P(A1) P(A2) P(A3) P(A4) P(A5^c) = (0.48)^4 (0.52) ≈ 0.0276
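A quick numerical check of this product (illustrative sketch):

```python
# Example 6: the price goes down on any day with probability 1 - 0.52 = 0.48;
# by independence, multiply across the five days.
p_down = 1 - 0.52
p = p_down**4 * 0.52   # down four days, then not down on the fifth

print(round(p, 4))  # 0.0276
```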
Random Variables and Expected Values
Definition If X is a random variable whose possible values are x1, x2, …, xn,
with P{X = xj} = Pj, then the expected value of X, denoted by E[X], is defined by

E[X] = Σ_{j=1}^{n} xj Pj
Example 7 Verify the following formula:

E[aX + b] = a E[X] + b        (6)

Solution Let Y = aX + b. Then

E[Y] = Σ_{j=1}^{n} (a xj + b) Pj = a Σ_{j=1}^{n} xj Pj + b Σ_{j=1}^{n} Pj = a E[X] + b
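The linearity formula can be checked numerically on a small distribution; the values below are ours, chosen only for illustration:

```python
# Check E[aX + b] = a E[X] + b on an arbitrary three-point distribution.
xs = [1, 2, 3]
ps = [0.2, 0.5, 0.3]   # probabilities summing to 1
a, b = 4, 7

E_X = sum(x * p for x, p in zip(xs, ps))
E_Y = sum((a * x + b) * p for x, p in zip(xs, ps))

print(E_X)               # 2.1, up to floating-point rounding
print(E_Y, a * E_X + b)  # both 15.4, up to floating-point rounding
```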
Definition The variance of X, denoted by Var(X), is defined by

Var(X) = E[(X − E[X])^2]
Example 8 Verify the following formula, where a and b are constants:

Var(aX + b) = a^2 Var(X)

Solution

Var(aX + b) = E[(aX + b − E[aX + b])^2] = E[a^2 (X − E[X])^2] = a^2 Var(X)
Proposition If X1, …, Xk are independent random variables, then

Var(Σ_{j=1}^{k} Xj) = Σ_{j=1}^{k} Var(Xj)
Example 9 Find Var(X) when X is a Bernoulli random variable, which is equal
to 1 with probability p and to 0 with probability 1 − p.
Solution Because E[X] = p, we have

(X − E[X])^2 = (1 − p)^2 with probability p
(X − E[X])^2 = p^2 with probability 1 − p

Hence

Var(X) = E[(X − E[X])^2] = (1 − p)^2 p + p^2 (1 − p) = p(1 − p)
Example 10 Find the variance of X, a binomial random variable with parameters
n and p.
Solution Recalling that X represents the number of successes in n independent
trials, each of which is a success with probability p, we can represent it as

X = Σ_{j=1}^{n} Xj

where Xj is defined to equal 1 if trial j is a success and 0 otherwise. Hence, by
the preceding proposition and Example 9,

Var(X) = Σ_{j=1}^{n} Var(Xj) = Σ_{j=1}^{n} p(1 − p) = np(1 − p)
The square root of the variance is called the standard deviation. As we shall see, a
random variable tends to lie within a few standard deviations of its expected
value.
Covariance and Correlation
The covariance of any two random variables X and Y, denoted by Cov(X, Y), is
defined by

Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y]

A positive value of the covariance indicates that X and Y both tend to be
large at the same time, whereas a negative value indicates that when one is large
the other tends to be small. Independent random variables have covariance
equal to 0.
The degree to which large values of X tend to be associated with large values
of Y is measured by the correlation between X and Y, denoted as ρ(X, Y) and
defined by

ρ(X, Y) = Cov(X, Y) / √(Var(X) Var(Y))

It can be shown that

−1 ≤ ρ(X, Y) ≤ 1

If X and Y are linearly related by the equation

Y = a + bX

then ρ(X, Y) equals 1 when b is positive and −1 when b is negative.
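The linear case can be verified numerically; the distribution and constants below are ours, chosen only for illustration:

```python
# Correlation of linearly related variables Y = a + bX: rho(X, Y)
# should equal +1 whenever b > 0, regardless of the distribution of X.
import math

xs = [1, 2, 3, 4]
ps = [0.25] * 4       # uniform distribution over four points
a, b = 2.0, 3.0
ys = [a + b * x for x in xs]

def E(vals):
    """Expected value of a discrete variable with probabilities ps."""
    return sum(v * p for v, p in zip(vals, ps))

E_X, E_Y = E(xs), E(ys)
cov = E([x * y for x, y in zip(xs, ys)]) - E_X * E_Y
var_X = E([x * x for x in xs]) - E_X**2
var_Y = E([y * y for y in ys]) - E_Y**2
rho = cov / math.sqrt(var_X * var_Y)

print(round(rho, 6))  # 1.0
```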
Exercises
1. A family picnic scheduled for tomorrow will be postponed if it is either
cloudy or rainy. If the probability that it will be cloudy is 0.40, the
probability that it will be rainy is 0.30, and the probability that it will be both
rainy and cloudy is 0.20, what is the probability that the picnic will not be
postponed?
2. A club has 120 members, of whom 35 play chess, 58 play bridge, and 27
play both chess and bridge. If a member of the club is randomly chosen,
what is the conditional probability that she
(a) plays chess given that she plays bridge?
(b) plays bridge given that she plays chess?
3. A lawyer must decide whether to charge a fixed fee of $5,000 or take a
contingency fee of $25,000 if she wins the case (and $0 if she loses the case).
She estimates that her probability of winning is 0.30. Determine the mean
and standard deviation of her fee if
(a) she takes the fixed fee;
(b) she takes the contingency fee.
4. Let X1, …, Xn be independent random variables, all having the same
distribution with expected value μ and variance σ^2. The random variable X̄,
defined as the arithmetic average of these variables, is called the sample
mean. That is, the sample mean and the random variable S^2 are defined
respectively by

X̄ = (Σ_{i=1}^{n} Xi) / n   and   S^2 = (Σ_{i=1}^{n} (Xi − X̄)^2) / (n − 1)

(a) Show that E[X̄] = μ.
(b) Show that Var(X̄) = σ^2 / n.
(c) Show that E[S^2] = σ^2.
Reference Book
Sheldon M. Ross: An Elementary Introduction to Mathematical Finance,
Cambridge University Press, 2003 (China Machine Press, 2004).