Chapter 3: Probability
One day there was a fire in the wastebasket in the Dean’s
office. In rushed a physicist, a chemist, and a statistician.
The physicist immediately started calculations to
determine how much energy would have to be removed
from the fire to stop combustion.
The chemist tried to figure out what chemical reagent
would have to be added to the fire to prevent oxidation.
While they were doing this, the statistician set fires in all
the other wastebaskets in the office.
“What are you doing?” they demanded. “Well, to solve the
problem, you obviously need a larger sample size,” the
statistician replied.
The Monty Hall Problem
• On the game show “Let’s Make a Deal,” a contestant was presented with three doors. Behind one door was a prize (often a new car) and behind the other two were goats. The contestant would choose a door. Then Monty would open one of the other doors, exposing a goat. The contestant could then switch or stay.
• What should the contestant do?

Dice Simulation
• Do Excel Demo
• Points to be made:
  • Randomness means unpredictable results
  • Probability means the long run is predictable
  • We imagine a mechanism, rule, or law that produces results with definite probabilities
  • If we know the “probability law,” we can make meaningful predictions about the likelihood of various outcomes
Why we play silly games
• In the study of probability, we need simple examples to learn from. Some of these may seem silly or unrealistic, but they are actually models of “real” problems. If we understand the examples, we can tackle real problems by formulating them in terms of simple examples like dice games, coin tosses, spinners, etc. Recognizing a familiar set-up is often the key to success.
Counting 3’s
• Another dice game: Roll two dice and record the number of 3’s.
• The possible outcomes are 0, 1, or 2.
• We will count the frequency of each outcome as we repeat the process.
• (Excel Simulation)
Properties of this Experiment
If we continue this experiment indefinitely:
• The frequencies will have approximately a 25:10:1 ratio (we need to find out why)
• The relative frequencies will settle down.
A computer simulation of experimental outcomes is a helpful tool that may lead to important insights regarding a probability problem. But it is not a substitute for the theoretical development that we will begin now.
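
As a rough illustration of this kind of simulation (in Python rather than the Excel demo the slides refer to; the function and variable names below are illustrative choices, not from the slides), one might count 3’s in repeated rolls of two dice like this:

import random
from collections import Counter

def count_threes(num_trials=100_000, seed=1):
    """Roll two fair dice num_trials times; tally how many 3's appear in each roll."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(num_trials):
        roll = (rng.randint(1, 6), rng.randint(1, 6))
        counts[roll.count(3)] += 1
    return counts

counts = count_threes()
for k in (0, 1, 2):
    print(k, "threes:", counts[k], "relative frequency:", counts[k] / sum(counts.values()))
# The relative frequencies should settle near 25/36, 10/36, and 1/36 (the 25:10:1 ratio).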
Definitions
• Probability Experiment: A repeatable process that yields a result or observation.
• Trial: One repetition (yielding one observation).
• Outcome: One possible result of an experiment.
• The language of set theory is used:
• The set of all possible outcomes is the Sample Space, often denoted by S.
• Events are subsets of the Sample Space; they contain one or more outcomes, and are often denoted by A, B, or E.
Some Examples
• An Experiment: Select two students at random and ask them if they have cars on campus. Record Y if the answer is “yes” and N if the answer is “no.”
• There are two trials, because each student yields one observation. Each trial has an outcome of Y or N.
• But the outcomes of the experiment are ordered pairs.
• The Sample Space: S={(N,N), (N,Y), (Y,N), (Y,Y)}
• One example event: both students have cars. A={(Y,Y)}
• Another event: only one student has a car. B={(Y,N),(N,Y)}
• Yet another event: at least one has a car. C={(Y,N),(N,Y),(Y,Y)}
More Examples
• Toss one coin, then toss one die.
  S={H1, H2, H3, H4, H5, H6, T1, T2, T3, T4, T5, T6}
  (The notation has been simplified.)
  A={The coin was a head}
  B={The die toss was an even number}
• Randomly select three voters and ask if they favor an increase in property taxes for road construction in the county.
  S={NNN, NNY, NYN, NYY, YNN, YNY, YYN, YYY}
  C={At least one voter said yes}
• Exercise: List the elements of A, B, and C. (One way a computer could enumerate them is sketched below.)
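
A quick sketch of how these sample spaces and events could be enumerated in Python (itertools is part of the standard library; the variable names are illustrative, not from the slides):

from itertools import product

# Coin-then-die experiment: outcomes look like ('H', 3).
coin_die = set(product("HT", range(1, 7)))
A = {o for o in coin_die if o[0] == "H"}       # the coin was a head
B = {o for o in coin_die if o[1] % 2 == 0}     # the die toss was even

# Three-voter experiment: outcomes look like ('Y', 'N', 'N').
voters = set(product("YN", repeat=3))
C = {o for o in voters if "Y" in o}            # at least one voter said yes

print(sorted(A), sorted(B), sorted(C), sep="\n")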
More Terms
• Outcomes are also called sample points.
• n(S): the number of outcomes in the sample space.
• Events containing only one outcome are called Simple Events.
• Events containing two or more outcomes are called Compound Events.
Notes
• The outcomes in a sample space can never overlap (they are mutually exclusive).
• The sample space must contain all possible outcomes (relate to exhaustive, below).
• Two events may or may not be mutually exclusive.
• If two or more events together include all outcomes, they are called exhaustive.
• In some cases a collection of events may be both mutually exclusive and exhaustive.
When Events Occur
• Remember, events can contain multiple outcomes.
• An event occurs if it contains the actual outcome of the experiment.
• More than one event can occur for a single trial (if not mutually exclusive).
• Example: On the way to work, some employees at a certain company stop for a bagel and/or a cup of coffee. Possible outcomes for (bagel, coffee) are:
  (n,n): Don’t stop
  (b,n): Get only a bagel
  (n,c): Get only coffee
  (b,c): Get bagel and coffee
Example: Not Mutually Exclusive/Exhaustive
• Define event B as “gets bagel”
• Define event C as “gets coffee”
• Then B={(b,n),(b,c)} and C={(n,c),(b,c)}
• B∩C={(b,c)}, so B and C are not mutually exclusive.
• If the outcome is (b,c), then both B and C have occurred.
• It is also true that B and C are not exhaustive, since B∪C≠S.
The accompanying Venn diagram illustrates the choices of the employees for a randomly selected work day.
[Venn diagram: circles labeled Bagel and Coffee; the region counts shown are 18, 16, 32, and 11.]
Exercises
• For the three dice games that we simulated:
  • Give the Sample Space
  • Construct several events, demonstrating:
    • Simple events
    • Compound events
    • Mutually Exclusive events
    • Exhaustive events
    • Mutually Exclusive and Exhaustive events
  • Find n(S) and n(A) for several events.
Determining Probability
• Probability of an Event: The expected relative frequency of the event.
• Three ways to determine the probability of an event:
  • Empirically
  • Theoretically
  • Subjectively
Empirical Probability
• Based on counts of data. It is the observed relative frequency.
• Use prime notation: P'(A) = n'(A)/n
  • n'(A): number of times the event A has occurred
  • n: number of trials or observations, or sample size.
• The Law of Large Numbers says the larger the number of experimental trials n, the closer the empirical probability P'(A) is expected to be to the true probability P(A).
• In symbols: as n → ∞, P'(A) → P(A).
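
A minimal sketch of the Law of Large Numbers in Python (the coin-flip event and all names are my own illustrative assumptions): the empirical probability of “heads” drifts toward the true value 1/2 as n grows.

import random

def empirical_prob_heads(n, seed=2):
    """Flip a fair coin n times and return the observed relative frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, round(empirical_prob_heads(n), 4))
# As n increases, P'(heads) = n'(heads)/n is expected to approach P(heads) = 0.5.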
Theoretical and Subjective
• Theoretical Probability, P(A), is the expected relative frequency (long run).
• P(A) is based on knowledge (or assumptions) of the fundamental properties of the experiment.
• Subjective Probability is based on someone’s opinion and/or experience. It is usually just a guess and subject to bias.
Theoretical Probability
• Toss a fair coin. Let event H be the occurrence of a head. What is P(H)?
  • In a single toss of the coin, there are two possible outcomes.
  • Since the coin is fair, each outcome is equally likely.
  • Therefore it follows that P(H) = 1/2.
• This doesn’t mean one head occurs in every two tosses.
• After many trials, the proportion of heads is expected to be close to half, not based on data, but by reasoning from the fundamental properties of the experiment.
Equally Likely Outcomes
• The previous example of a coin toss is an example of an experiment in which all outcomes are equally likely.
• Many common problems (coins, dice, cards, SRS) have this property.
• If this property holds, the probability of an event A is the ratio of the number of outcomes in A to the number of outcomes in S:
  P(A) = n(A)/n(S)
Examples
• A die toss has six equally likely outcomes.
  • S={1,2,3,4,5,6}, thus n(S)=6.
  • Define event E as E={2,4,6}. Then n(E)=3.
  • P(E)=3/6=1/2.
• Toss two coins; there are 4 outcomes.
  • S={TT,TH,HT,HH}.
  • Define event E as E={at least one head}.
  • E={TH,HT,HH}
  • P(E)=3/4
Example: A fair coin is tossed 5 times, and a head (H) or a tail (T) is recorded each time. What is the probability of
A = {exactly one head in 5 tosses}, and
B = {exactly 5 heads}?
The outcomes consist of a sequence of 5 H’s and T’s.
A typical outcome: HHTTH
There are 32 possible outcomes, all equally likely.
A = {HTTTT, THTTT, TTHTT, TTTHT, TTTTH}
B = {HHHHH}
P(A) = n(A)/n(S) = 5/32
P(B) = n(B)/n(S) = 1/32
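
These counts are small enough to verify by brute-force enumeration; here is a short Python sketch (names are illustrative) that lists all 32 sequences and counts those in A and B:

from itertools import product

outcomes = list(product("HT", repeat=5))          # all 32 equally likely sequences
A = [o for o in outcomes if o.count("H") == 1]    # exactly one head
B = [o for o in outcomes if o.count("H") == 5]    # five heads

print(len(outcomes), len(A) / len(outcomes), len(B) / len(outcomes))
# Expected output: 32 0.15625 0.03125, i.e. P(A) = 5/32 and P(B) = 1/32.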
The Monty Hall Problem
• On the game show “Let’s Make a Deal,” a contestant was presented with three doors. Behind one door was a prize (often a new car) and behind the other two were goats. The contestant would choose a door. Then Monty would open one of the other doors, exposing a goat. The contestant could then switch or stay.
• What should the contestant do?
Solve the Monty Hall Problem
• List the elements of the sample space under each strategy.
  • This is probably the part that makes the problem difficult.
  • There are two parts to each outcome: choice of door and location of car.
  • Represent an outcome by an ordered pair (door chosen, door with car).
• Determine the probability of winning under each strategy.
Solution to Monty Hall
• S={(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)}
• Strategy=Stay
  Win={(1,1),(2,2),(3,3)}
  P(Win)=3/9=1/3
• Strategy=Switch
  Win={(1,2),(1,3),(2,1),(2,3),(3,1),(3,2)}
  P(Win)=6/9=2/3
• Note: The Win events under each strategy are complements of each other.
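
For readers who prefer to see the long-run behavior, here is a rough Python simulation of the game (the door numbering and function names are my own assumptions); it should show winning fractions near 1/3 for staying and 2/3 for switching.

import random

def play(switch, rng):
    """Simulate one round of the Monty Hall game; return True if the contestant wins."""
    doors = [1, 2, 3]
    car = rng.choice(doors)
    choice = rng.choice(doors)
    # Monty opens a door that is neither the contestant's choice nor the car.
    opened = rng.choice([d for d in doors if d != choice and d != car])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == car

rng = random.Random(3)
n = 100_000
print("stay:  ", sum(play(False, rng) for _ in range(n)) / n)
print("switch:", sum(play(True, rng) for _ in range(n)) / n)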
Revisit a Previous Example
• An Experiment: Select two students at random and ask them if they have cars on campus. Record Y if the answer is “yes” and N if the answer is “no.”
• We can use a tree diagram to enumerate the elements of the sample space.
Hmmm…
• How many statisticians does it take to screw in a light bulb?
• We don’t know yet…the entire sample was skewed to the left.
• Hope that didn’t go right by you…
Tree Diagram of Sample Space
  Student 1   Student 2   Outcomes
  Y           Y           Y, Y
              N           Y, N
  N           Y           N, Y
              N           N, N
-Tree diagrams start from a common point, or “root”
-This tree has four branches (from root to ends)
-There are 2 first- and 4 second-generation branches.
-The path along each branch shows a possible outcome.
Example: An experiment consists of selecting electronic parts from an assembly line and testing each to see if it passes inspection (P) or fails (F). The experiment terminates as soon as one acceptable part is found or after three parts are tested. Construct the sample space.
  Part 1   Part 2   Part 3   Outcome
  F        F        F        FFF
  F        F        P        FFP
  F        P                 FP
  P                          P
S = { FFF, FFP, FP, P }
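
One way to build this sample space programmatically (a sketch under the assumption that at most three parts are tested; the recursive helper below is my own construction, not from the slides):

def sample_space(max_tests=3):
    """Enumerate outcome strings for testing parts until one passes or max_tests parts are tested."""
    outcomes = []

    def extend(prefix):
        if prefix.endswith("P") or len(prefix) == max_tests:
            outcomes.append(prefix)    # the experiment terminates here
            return
        extend(prefix + "F")           # the next part fails; keep testing
        extend(prefix + "P")           # the next part passes; stop

    extend("")
    return outcomes

print(sample_space())   # ['FFF', 'FFP', 'FP', 'P']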
Laws or Axioms of Probability
• The probability of any event A is between 0 and 1:
  0 ≤ P(A) ≤ 1
• The sum of the probabilities of all outcomes is 1:
  Σ P(A) = 1, where the sum runs over all simple events A.
• A probability of 0 means the event cannot occur.
• A probability of 1 means the event is certain; it must occur every time.
Introducing Odds
Example: On the way to work, Bob’s personal judgment is that he is four times more likely to get caught in a traffic jam (J) than have an easy commute (E). What values should be assigned to P(J) and P(E)?
  P(J) = 4·P(E) and P(J) + P(E) = 1
  4·P(E) + P(E) = 1
  5·P(E) = 1
  P(E) = 1/5
  P(J) = 4·P(E) = 4·(1/5) = 4/5
Definition of Odds
• The complement of A is denoted by A̅. A̅ contains all outcomes in S not in A.
• Two events are complementary if they are mutually exclusive and exhaustive.
• Odds are a way of expressing probabilities for complementary events as a ratio of expected frequencies.
• If the odds in favor of an event A are a to b, then the odds against A are b to a.
• Then the probability that A occurs is P(A) = a/(a+b).
• The probability A does not occur is P(A̅) = b/(a+b).
Example:
1. The complement of the event “success” is “failure.”
2. The complement of the event “rain” is “no rain.”
3. The complement of the event “at least 3 patients recover” out of 5 patients is “2 or fewer recover.”
Notes:
1. P(A) + P(A̅) = 1 for any event A
2. P(A̅) = 1 - P(A)
3. Every event A has a complementary event A̅
4. Useful in calculations such as when the question asks for the probability of “at least one.”
5. The complement of S is Ø, the empty set.
6. Obviously, P(Ø) = 1 - P(S) = 1 - 1 = 0.
Addition Rules
• If A and B occur, the outcome is in both, i.e., A∩B has occurred. So P(A and B) = P(A∩B).
• If A and B are mutually exclusive, A∩B=Ø, so P(A and B) = P(Ø) = 0.
• If A or B occurs, the outcome is in at least one of them, i.e., A∪B has occurred. So
  P(A or B) = P(A∪B) = P(A) + P(B) - P(A∩B).
• Note: If A and B are NOT mutually exclusive, just adding P(A)+P(B) would count the outcomes in the intersection twice, so we have to correct for this double-count.
• But if A and B ARE mutually exclusive, P(A∩B)=0, so
  P(A or B) = P(A∪B) = P(A) + P(B).
Example
This diagram shows the probability that a randomly selected consumer has tried a snack food (F) is .5, tried a new soft drink (D) is .6, and tried both the snack food and the soft drink is .2.
[Venn diagram in S: F only .3, F∩D .2, D only .4, outside both circles .1]
P(Tried the snack food or the soft drink)
  = P(F or D) = P(F) + P(D) - P(F and D) = .5 + .6 - .2 = .9
P(Tried neither the snack food nor the soft drink)
  = P(F̅ and D̅) = 1 - P(F or D) = 1 - .9 = .1
Examples
• Suppose A and B are mutually exclusive, and P(A)=.12 and P(B)=.34. Find P(A∪B).
• Suppose P(A)=.6, P(A∪B)=.9, and P(B)=.5. Find P(A∩B).
• Suppose A, B, and C are mutually exclusive and exhaustive. If P(A)=.2 and P(B)=.4, find P(C).
Conditional Probability
• Sometimes two events are related in such a way that the probability of one depends upon whether the other occurs.
• Partial information about the outcome may alter our assessment of the probabilities.
• The symbol P(A | B) represents the probability that A will occur given B is known (assured). This is called conditional probability.
• Suppose I toss a die and show you that there is a 3 on the front face. What can you say about the probabilities for the top face?
  • What is P(1 on top | 3 on front)?
  • What is P(4 on top | 3 on front)?
Attention!!
• It is crucial to realize we are not talking about two sequential events. This is for one outcome of one trial of an experiment, for which we have partial information, allowing us to remove some of the uncertainty.
• When I show you the three on the front face, the toss has already occurred, but you don’t know the result. The “chance” involved is in your ability to guess the correct value, rather than in a particular value coming up.
Die Example
• Normally, there are six possibilities with P=1/6 for each.
• With the three showing on front, we eliminate two outcomes (the top face can be neither the 3 itself nor the 4 on the opposite face), restricting the sample space.
• The four remaining numbers (1, 2, 5, 6) are equally likely, with P=1/4.
[Illustration: the six faces 1–6, then the restricted set with 3 and 4 removed, leaving 1, 2, 5, 6.]
Calculating Conditional Probability
Recall our definition of probability in terms of frequencies of equally likely outcomes:
  P(A) = n(A)/n(S)
Given B has occurred, the numerator becomes the number of outcomes of A that are still in the sample space. Any outcomes in A that were not in B are eliminated now. The denominator is the number of outcomes in B, the new sample space:
  P(A | B) = n(A∩B)/n(B)
To relate this back to the original probabilities, divide the numerator and denominator by n(S):
  P(A | B) = [n(A∩B)/n(S)] / [n(B)/n(S)] = P(A∩B)/P(B)
Though this formula was derived using the idea of equal probabilities for all outcomes, the final form works in general.
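
Here is a brief Python check of the die example using this counting formula (the dictionary of opposite faces reflects the standard 1–6, 2–5, 3–4 arrangement; everything else is illustrative naming):

# Outcome: the face showing on top of a fair die.
S = {1, 2, 3, 4, 5, 6}
opposite = {1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}

# B: outcomes still possible when a 3 shows on the front face
# (the top face can be neither the 3 itself nor its opposite, 4).
B = {face for face in S if face != 3 and face != opposite[3]}

A = {1}                                # "1 on top"
print(B, len(A & B) / len(B))          # {1, 2, 5, 6} 0.25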
Independent Events
• Two events, defined for one trial of an experiment, are independent iff P(A | B) = P(A) or P(B | A) = P(B).
• This should be understood to mean that if A and B are independent, the occurrence of B does not affect the probability of A, and vice versa.
• If A and B are independent, then so are: A and B̅, A̅ and B, A̅ and B̅.
Example of Independent Events
Consider the experiment in which a single fair die is rolled: S = {1, 2, 3, 4, 5, 6}.
Define the following events:
A = {1, 2}
B = “an odd number occurs”
P(A | B) = P(A∩B)/P(B) = (1/6)/(3/6) = 1/3 = P(A)
P(B | A) = P(A∩B)/P(A) = (1/6)/(1/3) = 1/2 = P(B)
Example of Non-Independent Events
Consider the experiment in which a single fair die is rolled: S = {1, 2, 3, 4, 5, 6}.
Define the following events:
A = {1}
B = “an odd number occurs”
P(A | B) = P(A∩B)/P(B) = (1/6)/(3/6) = 1/3 ≠ 1/6 = P(A)
P(B | A) = P(A∩B)/P(A) = (1/6)/(1/6) = 1 ≠ 1/2 = P(B)
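
A small sketch (with my own helper names) that checks independence for both die examples by comparing P(A∩B) with P(A)·P(B):

from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event & S), len(S))

def independent(A, B):
    return prob(A & B) == prob(A) * prob(B)

odd = {1, 3, 5}
print(independent({1, 2}, odd))   # True:  P(A∩B) = 1/6 equals (1/3)(1/2)
print(independent({1}, odd))      # False: P(A∩B) = 1/6 but (1/6)(1/2) = 1/12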
General Multiplication Rule
• A little algebra gives this variation:
  P(A | B) = P(A∩B)/P(B), so P(A∩B) = P(A | B)·P(B)
• Which might be more usefully thought of as:
  P(A and B) = P(A | B)·P(B)
Note: How to recognize phrasing that indicates intersections:
1. Both A and B: A∩B
2. A but not B: A∩B̅
3. Neither A nor B = Not A and Not B = Not (A or B): A̅∩B̅, the complement of A∪B
4. Not (A and B) = Not A or Not B: the complement of A∩B, which is A̅∪B̅
Special Multiplication Rule
If A and B are independent events in S, then P(A | B) = P(A), so P(A and B) = P(A)·P(B).
Example: Suppose the event A is “Allen gets a cold this winter,” B is “Bob gets a cold this winter,” and C is “Chris gets a cold this winter.” P(A) = .15, P(B) = .25, P(C) = .3, and all three events are independent. Find the probability that:
1. All three get colds this winter.
2. Allen and Bob get a cold but Chris does not.
3. None of the three gets a cold this winter.
Solution:
P(All three get colds this winter)
  = P(A and B and C) = P(A)·P(B)·P(C) = (.15)(.25)(.30) ≈ .0113
P(Allen and Bob get a cold, but Chris does not)
  = P(A and B and C̅) = P(A)·P(B)·P(C̅) = (.15)(.25)(.70) ≈ .0263
P(None of the three gets a cold this winter)
  = P(A̅ and B̅ and C̅) = P(A̅)·P(B̅)·P(C̅) = (.85)(.75)(.70) ≈ .4463
Summary Notes
• Independent and mutually exclusive are two very different concepts.
  • Mutually exclusive says the two events cannot occur together, that is, they have no intersection.
  • Independence says each event does not affect the other event’s probability.
• P(A and B) = P(A)·P(B) when A and B are independent.
  • Since P(A) and P(B) are not zero, P(A and B) is nonzero.
  • Thus, independent events have an intersection.
• Events cannot be both mutually exclusive and independent.
  • If two events are independent, then they are not mutually exclusive.
  • If two events are mutually exclusive, then they are not independent.
Tree Diagrams
Tree diagrams can be used to calculate probabilities that involve the multiplication and addition rules.
• A set of branches that initiate from a single point has a total probability of 1.
• Each outcome for the experiment is represented by a branch that begins at the common starting point and ends at a terminal point at the right.
Example: A certain company uses three overnight delivery services: A, B, and C. The probability of selecting service A is 1/2, of selecting B is 3/10, and of selecting C is 1/5. Suppose the event T is “on-time delivery.” P(T|A) = 9/10, P(T|B) = 7/10, and P(T|C) = 4/5. A service is randomly selected to deliver a package overnight. Construct a tree diagram representing this experiment.
The resulting tree diagram:
  Service        Delivery
  A (1/2)        T (9/10),  T̅ (1/10)
  B (3/10)       T (7/10),  T̅ (3/10)
  C (1/5)        T (4/5),   T̅ (1/5)
Using the tree diagram:
1. The probability of selecting service A and having the package delivered on time:
  P(A and T) = P(A)·P(T|A) = (1/2)(9/10) = 9/20
2. The probability of having the package delivered on time:
  P(T) = P(A and T) + P(B and T) + P(C and T)
       = P(A)·P(T|A) + P(B)·P(T|B) + P(C)·P(T|C)
       = (1/2)(9/10) + (3/10)(7/10) + (1/5)(4/5)
       = 9/20 + 21/100 + 4/25
       = 41/50
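
The same total-probability calculation in a short Python sketch (the dictionary representation and names are my own convenience, not from the slides):

from fractions import Fraction as F

p_service = {"A": F(1, 2), "B": F(3, 10), "C": F(1, 5)}
p_on_time = {"A": F(9, 10), "B": F(7, 10), "C": F(4, 5)}

p_A_and_T = p_service["A"] * p_on_time["A"]
p_T = sum(p_service[s] * p_on_time[s] for s in p_service)

print(p_A_and_T, p_T)   # 9/20 41/50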
Example: This problem involves testing individuals for the presence of a disease. Suppose the probability of having the disease (D) is .001. If a person has the disease, the probability of a positive test result (Pos) is .90. If a person does not have the disease, the probability of a negative test result (Neg) is .95. For a person selected at random:
1. Find the probability of a negative test result given the person has the disease (false negative).
2. Find the probability of having the disease and a positive test result.
3. Find the probability of a positive test result.
4. Find the probability a person has the disease, given a positive test result.
Tree diagram:
  Disease        Test Result
  D (.001)       Pos (.90),  Neg (.10)
  D̅ (.999)       Pos (.05),  Neg (.95)
1. Find the probability of a negative test result given the person has the disease (false negative).
  Answer: P(Neg | D) = .10
2. Find the probability of having the disease and a positive test result.
  Answer: P(D and Pos) = (.001)(.90) = .0009
3. Find the probability of a positive test result.
  Answer: P(Pos) = (.001)(.90) + (.999)(.05) = .05085
4. Find the probability a person has the disease, given a positive test result. This is a difficult question to answer directly from the tree, because its branches condition on disease status rather than on the test result. A table of expected counts per 100,000 people makes it easier:
            Pos       Neg        Total
  D          90        10          100
  D̅       4,995    94,905       99,900
  Total   5,085    94,915      100,000
  Answer: P(D | Pos) = 90/5,085 ≈ .0177
Bayes’ Theorem
• Simplifies problems like this, in which we essentially need to reverse the direction of a conditional probability.
• The complete theorem is written for multiple events, but for two events it simplifies like this:
  P(B | A) = P(A | B)·P(B) / P(A)
• For our problem:
  P(D | Pos) = P(Pos | D)·P(D) / P(Pos) = (.90)(.001) / .05085 ≈ .0177
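
A compact Python version of this Bayes calculation (the variable names are my own; the denominator is the total probability of a positive test found in question 3):

p_d = 0.001               # P(D): has the disease
p_pos_given_d = 0.90      # P(Pos | D)
p_pos_given_not_d = 0.05  # P(Pos | not D) = 1 - .95

p_pos = p_d * p_pos_given_d + (1 - p_d) * p_pos_given_not_d   # law of total probability
p_d_given_pos = p_pos_given_d * p_d / p_pos                   # Bayes' theorem

print(round(p_pos, 5), round(p_d_given_pos, 4))   # 0.05085 0.0177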
Outcomes with Unequal Probabilities
• A scenario in which outcomes have different probabilities may be illustrated by “urn” problems.
• Suppose we have an urn (an opaque container from which we may randomly select items) containing marbles of different colors, such as:
  • Two red
  • Three blue
  • Five white
• Represent the outcome of one draw as R, B, or W. Clearly,
  P(R)=.2, P(B)=.3, P(W)=.5
Clarification
• There are only three outcomes, R, B, and W. This is because the information obtained from a draw is the color, not the particular marble.
• However, we realize that there are several marbles associated with each outcome.
• If we choose to use the notation n(A) in this case, we will have to define it as the number of marbles associated with the event A, and n(S) would be the total number of marbles. Doing this will enable us to correctly use the definition of probability: P(A)=n(A)/n(S)
Two Draws with Replacement
• Suppose we draw a marble, return it to the urn, and draw again. Since the first marble is replaced, the first draw has no effect on the probability of the second draw; thus the draws are independent.
  P(R,R) = (.2)(.2) = .04
  P(W,R) = (.5)(.2) = .10
  P(1 red) = (.2)(.3) + (.2)(.5) + (.3)(.2) + (.5)(.2) = .32
  P(at least one red) = .32 + (.2)(.2) = .36
  P(no red) = 1 - .36 = .64
  P(1 red and 1 blue) = (.2)(.3) + (.3)(.2) = .12
Two Draws without Replacement
• Suppose we draw two marbles, sequentially. When the first marble is taken out, the proportions of the remaining marbles change; thus the draws are not independent.
  P(R,R) = (2/10)(1/9) = 1/45 ≈ .022
  P(W,R) = (5/10)(2/9) = 1/9 ≈ .111
  P(1 red) = 1/15 + 1/9 + 1/15 + 1/9 = 16/45 ≈ .356
  P(at least one red) = 16/45 + 1/45 = 17/45 ≈ .378
  P(no red) = 1 - 17/45 = 28/45 ≈ .622
  P(1 red and 1 blue) = 1/15 + 1/15 = 2/15 ≈ .133
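
These without-replacement values can be double-checked by enumerating ordered pairs of distinct marbles (a hedged sketch; labeling the ten marbles individually is my own device, not something the slides do):

from fractions import Fraction
from itertools import permutations

marbles = ["R"] * 2 + ["B"] * 3 + ["W"] * 5
pairs = list(permutations(range(10), 2))   # ordered draws of two distinct marbles

def prob(condition):
    hits = sum(condition(marbles[i], marbles[j]) for i, j in pairs)
    return Fraction(hits, len(pairs))

print(prob(lambda a, b: a == "R" and b == "R"))        # 1/45
print(prob(lambda a, b: a == "W" and b == "R"))        # 1/9
print(prob(lambda a, b: (a == "R") != (b == "R")))     # exactly one red: 16/45
print(prob(lambda a, b: {a, b} == {"R", "B"}))         # one red and one blue: 2/15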
Spinners and Unequal Probability
• A spinner is a device with a rotating pointer or wheel and markings that determine an outcome when the device stops spinning. Many children’s games have these; roulette wheels and “Wheels of Fortune” are fancier examples.
• The probability that the spinner stops in any particular arc (part of the circle) is determined by the proportion of the circle taken up by the arc, or by the corresponding angle divided by 360º.
• By changing the angles (arcs) corresponding to different outcomes, any desired set of probabilities can be achieved.
Dice and Unequal Probability
• Dice can also be used in a variety of imaginative ways to mimic unequal probability. For example, re-label a die so that one side is 1, two sides are 2, and three sides are 3. Then the corresponding probabilities are 1/6, 1/3, and 1/2.
• Coins can be used too. For example, toss a coin twice. Record a 2 for two heads, a 1 for one head, and a 0 for no heads. The probabilities are 1/4, 1/2, and 1/4.
• Recall the die toss simulation where we counted the number of threes in two tosses:
  • There are 36 outcomes total:
  • 1 with 2 threes {33}
  • 10 with 1 three {13 23 43 53 63 31 32 34 35 36}
  • and the other 25 have no threes.
  • Hence the ratio of 25:10:1 as demonstrated in the simulation.
Hmmm…
• Why did the statistician cross the interstate?
• To get data from the other side of the median.