CHAPTER 5
Jointly Distributed Random Variables
Joint Probability Mass Function
Let X and Y be two discrete rv’s defined on the
sample space of an experiment. The joint
probability mass function p(x, y) is defined for
each pair of numbers (x, y) by
$$p(x, y) = P(X = x \text{ and } Y = y)$$
Let A be the set consisting of pairs of (x, y)
values, then
P  X , Y   A 
  p ( x, y )
 x, y  A
Marginal Probability Mass Functions
The marginal probability mass
functions of X and Y, denoted pX(x) and
pY(y) are given by
$$p_X(x) = \sum_y p(x, y) \qquad\qquad p_Y(y) = \sum_x p(x, y)$$
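As an illustration, here is a minimal Python sketch (the joint pmf values are made up for the example) that stores p(x, y) as a dictionary and recovers both marginals by summing over the other variable:

    # Hypothetical joint pmf, stored as {(x, y): probability}.
    joint_pmf = {(0, 0): 0.10, (0, 1): 0.20,
                 (1, 0): 0.30, (1, 1): 0.40}

    # p_X(x) sums p(x, y) over y; p_Y(y) sums over x.
    p_X, p_Y = {}, {}
    for (x, y), p in joint_pmf.items():
        p_X[x] = p_X.get(x, 0.0) + p
        p_Y[y] = p_Y.get(y, 0.0) + p

    print(p_X)  # approx {0: 0.3, 1: 0.7}
    print(p_Y)  # approx {0: 0.4, 1: 0.6}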

Examples of jointly distributed random variables:
– the height and weight of a person;
– the temperature and rainfall of a day;
– the two coordinates of a needle randomly dropped on a table;
– the number of 1s and the number of 6s in 10 rolls of a die.
Example. We are interested in the effect of seat belt use on saving lives.
Consider the random variables X1 and X2 defined as follows:

X1 = 0 if the child survived, X1 = 1 if the child did not survive;
X2 = 0 if no belt was used, X2 = 1 if an adult belt was used,
X2 = 2 if a child seat was used.
The following table represents the joint probability distribution of X1
and X2 . In general we write P(X1 = x1 , X2 = x2 ) = p(x1 , x2) and call
p(x1 , x2) the joint probability function of (X1 , X2).
         X1 = 0   X1 = 1 | Total
X2 = 0 |  0.38     0.17  | 0.55
X2 = 1 |  0.14     0.02  | 0.16
X2 = 2 |  0.24     0.05  | 0.29
-------+-----------------+------
Total  |  0.76     0.24  | 1.00

The probability that a child both survives and is in a child seat
when involved in an accident is:

P(X1 = 0, X2 = 2) = 0.24

Probability that a child will be in a child seat:

P(X2 = 2) = P(X1 = 0, X2 = 2) + P(X1 = 1, X2 = 2) = 0.24 + 0.05 = 0.29
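To double-check this arithmetic, a minimal NumPy sketch (the array layout simply mirrors the table above):

    import numpy as np

    # Rows are X2 = 0, 1, 2; columns are X1 = 0, 1, as in the table above.
    joint = np.array([[0.38, 0.17],
                      [0.14, 0.02],
                      [0.24, 0.05]])

    print(joint[2, 0])        # P(X1 = 0, X2 = 2) = 0.24
    print(joint[2].sum())     # P(X2 = 2) = 0.29
    print(joint.sum(axis=0))  # marginal pmf of X1: [0.76, 0.24]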
Joint Probability Density Function
Let X and Y be continuous rv’s. Then f (x, y)
is a joint probability density function for X
and Y if for any two-dimensional set A
P  X , Y   A   f ( x, y )dxdy
A
If A is the two-dimensional rectangle
( x, y) : a  x  b, c  y  d ,
bd
P  X , Y   A    f ( x, y )dydx
ac
Marginal Probability Density Functions
The marginal probability density functions of X
and Y, denoted fX(x) and fY(y), are given by

$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy \quad \text{for } -\infty < x < \infty$$

$$f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx \quad \text{for } -\infty < y < \infty$$
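Numerically, a marginal density can be recovered by one-dimensional integration; a minimal SciPy sketch (the joint pdf f(x, y) = x + y on the unit square is an illustrative assumption, not from these notes):

    from scipy.integrate import quad

    # Hypothetical joint pdf on the unit square: f(x, y) = x + y.
    def f(x, y):
        return x + y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

    # f_X(x) integrates the joint pdf over y (here the support is [0, 1]).
    def f_X(x):
        return quad(lambda y: f(x, y), 0, 1)[0]

    print(f_X(0.5))  # analytically f_X(x) = x + 1/2, so this prints 1.0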

Independent Random Variables
Two random variables X and Y are said to be
independent if for every pair of x and y
values
$$p(x, y) = p_X(x) \cdot p_Y(y)$$
when X and Y are discrete, or
$$f(x, y) = f_X(x) \cdot f_Y(y)$$
when X and Y are continuous. If the
conditions are not satisfied for all (x, y) then
X and Y are dependent.
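The definition gives a direct numerical test: compare the joint pmf with the product of the marginals in every cell. A minimal sketch using the seat-belt table from the earlier example:

    import numpy as np

    # Seat-belt example: rows X2 = 0, 1, 2; columns X1 = 0, 1.
    joint = np.array([[0.38, 0.17],
                      [0.14, 0.02],
                      [0.24, 0.05]])
    p_x1 = joint.sum(axis=0)  # [0.76, 0.24]
    p_x2 = joint.sum(axis=1)  # [0.55, 0.16, 0.29]

    # Independence requires joint[i, j] == p_x2[i] * p_x1[j] for every cell.
    print(np.allclose(joint, np.outer(p_x2, p_x1)))  # False: dependent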
Conditional Probability Function
Let X and Y be two continuous rv’s with joint pdf
f (x, y) and marginal X pdf fX(x). Then for any X
value x for which fX(x) > 0, the conditional
probability density function of Y given that X = x
is
$$f_{Y|X}(y \mid x) = \frac{f(x, y)}{f_X(x)}, \qquad -\infty < y < \infty$$
If X and Y are discrete, replacing pdf’s by pmf’s
gives the conditional probability mass function
of Y when X = x.
Let X and Y denote the proportion of two different chemicals in a sample
mixture of chemicals used as an insecticide. Suppose X and Y have joint
probability density given by:
2, 0  x  1,0  y  1,0  x  y  1
f ( x, y )  
elsewhere
0,
(Note that X + Y must be at most unity since the random variables denote
proportions within the same sample).



1) Find the marginal density functions for X and Y.
2) Are X and Y independent?
3) Find P(X > 1/2 | Y =1/4).
1)

$$f_1(x) = \begin{cases} \displaystyle\int_0^{1-x} 2\,dy = 2(1 - x), & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$$

$$f_2(y) = \begin{cases} \displaystyle\int_0^{1-y} 2\,dx = 2(1 - y), & 0 \le y \le 1 \\ 0, & \text{otherwise} \end{cases}$$

2) $f_1(x) \cdot f_2(y) = 2(1 - x) \cdot 2(1 - y) \ne 2 = f(x, y)$ for $0 \le x \le 1 - y$.
Therefore X and Y are not independent.

3)

$$P\left(X > \tfrac{1}{2} \,\Big|\, Y = \tfrac{1}{4}\right) = \int_{1/2}^{3/4} f\left(x \mid y = \tfrac{1}{4}\right) dx = \int_{1/2}^{3/4} \frac{f\left(x, \tfrac{1}{4}\right)}{f_2\left(\tfrac{1}{4}\right)}\,dx = \int_{1/2}^{3/4} \frac{2}{2\left(1 - \tfrac{1}{4}\right)}\,dx = \frac{4}{3}\left(\frac{3}{4} - \frac{1}{2}\right) = \frac{1}{3}$$

(The upper limit is 3/4 because, given Y = 1/4, the support requires x + y ≤ 1.)
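These closed-form answers are easy to verify symbolically; a minimal SymPy sketch (the variable names are my own):

    import sympy as sp

    x, y = sp.symbols('x y', nonnegative=True)
    f = 2  # joint pdf on the triangle 0 <= x, 0 <= y, x + y <= 1

    print(sp.integrate(f, (y, 0, 1 - x)))  # marginal f_1(x) = 2 - 2*x
    f2 = sp.integrate(f, (x, 0, 1 - y))    # marginal f_2(y) = 2 - 2*y

    # P(X > 1/2 | Y = 1/4): conditional density f / f_2 at y = 1/4, then integrate.
    cond = f / f2.subs(y, sp.Rational(1, 4))
    print(sp.integrate(cond, (x, sp.Rational(1, 2), sp.Rational(3, 4))))  # 1/3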
5.2 Expected Values, Covariance, and Correlation
Let X and Y be jointly distributed rv’s with pmf
p(x, y) or pdf f (x, y) according to whether the
variables are discrete or continuous. Then the
expected value of a function h(X, Y), denoted
E[h(X, Y)] or $\mu_{h(X,Y)}$, is

$$E[h(X, Y)] = \begin{cases} \displaystyle\sum_x \sum_y h(x, y)\, p(x, y), & \text{discrete} \\ \displaystyle\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y)\, f(x, y)\,dx\,dy, & \text{continuous} \end{cases}$$
Covariance
The covariance between two rv’s X and Y is
Cov  X , Y   E  X   X Y  Y  
 ( x   X )( y  Y ) p ( x, y ) discrete
 x y

 

( x   X )( y  Y ) f ( x, y )dxdy continuous


  
Short-cut Formula for Covariance
Cov  X , Y   E  XY    X  Y
Cov(X, Y) = 0 does not imply X and Y are independent!!
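A standard illustration of that warning (my own choice of example, not from these notes): X uniform on {−1, 0, 1} with Y = X² has Cov(X, Y) = 0 even though Y is a function of X.

    from fractions import Fraction

    # X uniform on {-1, 0, 1}; Y = X^2 is completely determined by X.
    support = [-1, 0, 1]
    p = Fraction(1, 3)

    E_X  = sum(x * p for x in support)     # 0
    E_Y  = sum(x**2 * p for x in support)  # 2/3
    E_XY = sum(x**3 * p for x in support)  # E[X * Y] = E[X^3] = 0

    print(E_XY - E_X * E_Y)  # Cov(X, Y) = 0, yet X and Y are dependent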
Correlation Proposition
1. If a and c are either both positive or both negative, then
Corr(aX + b, cY + d) = Corr(X, Y).
2. For any two rv's X and Y, $-1 \le \text{Corr}(X, Y) \le 1$.
Correlation Proposition
1. If X and Y are independent, then $\rho = 0$,
but $\rho = 0$ does not imply independence.
2. $\rho = 1$ or $\rho = -1$ iff $Y = aX + b$
for some numbers a and b with $a \ne 0$.
The proportions X and Y of two chemicals found in samples of
an insecticide have the joint probability density function
2, 0  x  1,0  y  1,0  x  y  1
f ( x, y )  
elsewhere
0,
The random variable Z=X + Y denotes the proportion of the
insecticide due to both chemicals combined.
1) Find E(Z) and V(Z).
2) Find the correlation between X and Y and interpret its meaning.
1)

$$E(X + Y) = \int_0^1 \int_0^{1-x} (x + y)\, 2\,dy\,dx = \int_0^1 (1 - x^2)\,dx = \frac{2}{3}$$

$$E[(X + Y)^2] = \int_0^1 \int_0^{1-x} (x + y)^2\, 2\,dy\,dx = \int_0^1 \frac{2}{3}(1 - x^3)\,dx = \frac{2}{3}\left(x - \frac{x^4}{4}\right)\Bigg|_0^1 = \frac{2}{3} \cdot \frac{3}{4} = \frac{1}{2}$$

$$V(Z) = E(Z^2) - E(Z)^2 = \frac{1}{2} - \left(\frac{2}{3}\right)^2 = \frac{1}{2} - \frac{4}{9} = \frac{1}{18}$$
2) First the marginal densities and moments:

$$f_X(x) = \int_0^{1-x} f(x, y)\,dy = \int_0^{1-x} 2\,dy = 2(1 - x), \quad 0 \le x \le 1$$

$$f_Y(y) = \int_0^{1-y} f(x, y)\,dx = \int_0^{1-y} 2\,dx = 2(1 - y), \quad 0 \le y \le 1$$

$$E(X) = E(Y) = \int_0^1 z \cdot 2(1 - z)\,dz = 2\left(\frac{z^2}{2} - \frac{z^3}{3}\right)\Bigg|_0^1 = 2 \cdot \frac{1}{6} = \frac{1}{3}$$

$$E(X^2) = E(Y^2) = \int_0^1 z^2 \cdot 2(1 - z)\,dz = 2\left(\frac{z^3}{3} - \frac{z^4}{4}\right)\Bigg|_0^1 = 2 \cdot \frac{1}{12} = \frac{1}{6}$$

$$\text{Var}(X) = \text{Var}(Y) = E(X^2) - (E(X))^2 = \frac{1}{6} - \frac{1}{9} = \frac{1}{18}$$
$$E(XY) = \int_0^1 \int_0^{1-x} xy \cdot 2\,dy\,dx = \int_0^1 x \left(y^2 \Big|_0^{1-x}\right) dx = \int_0^1 x(1 - x)^2\,dx = \left(\frac{x^2}{2} - \frac{2x^3}{3} + \frac{x^4}{4}\right)\Bigg|_0^1 = \frac{1}{12}$$

$$\text{Cov}(X, Y) = E(XY) - E(X)E(Y) = \frac{1}{12} - \frac{1}{3} \cdot \frac{1}{3} = \frac{1}{12} - \frac{1}{9} = -\frac{1}{36}$$

$$\rho = \frac{\text{Cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{-1/36}{\sqrt{1/18}\,\sqrt{1/18}} = \frac{-1/36}{1/18} = -\frac{1}{2}$$

The negative correlation is expected: because X + Y ≤ 1, a larger proportion of one chemical forces a smaller proportion of the other.
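A quick Monte Carlo check of E(Z) = 2/3, V(Z) = 1/18 ≈ 0.0556, and ρ = −1/2 (rejection sampling from the triangular support is my own choice of method):

    import numpy as np

    rng = np.random.default_rng(0)
    # The joint pdf is the constant 2 on the triangle x, y >= 0, x + y <= 1,
    # so uniform points on the triangle are draws from f; sample by rejection.
    pts = rng.random((2_000_000, 2))
    x, y = pts[pts.sum(axis=1) <= 1].T

    z = x + y
    print(z.mean(), z.var())        # approx 0.667 and 0.0556
    print(np.corrcoef(x, y)[0, 1])  # approx -0.5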
A statistic is any quantity whose value can be
calculated from sample data. Prior to obtaining
data, there is uncertainty as to what value of any
particular statistic will result. A statistic is a
random variable denoted by an uppercase letter;
a lowercase letter is used to represent the
calculated or observed value of the statistic.
Random Samples
The rv’s X1,…,Xn are said to form a (simple)
random sample of size n if
1. The Xi’s are independent rv’s.
2. Every Xi has the same probability
distribution.
Simulation Experiments
The following characteristics must be specified:
1. The statistic of interest.
2. The population distribution.
3. The sample size n.
4. The number of replications k.
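A minimal skeleton of such an experiment (the concrete choices here — the sample mean as the statistic, a standard normal population, n = 10, k = 500 — are illustrative, not prescribed by the notes):

    import numpy as np

    rng = np.random.default_rng(42)
    n, k = 10, 500   # 3. sample size and 4. number of replications

    # 1. statistic: the sample mean; 2. population: standard normal.
    xbars = np.array([rng.standard_normal(n).mean() for _ in range(k)])

    # The k observed values of x-bar approximate its sampling distribution.
    print(xbars.mean(), xbars.std())  # roughly 0 and 1/sqrt(10) ~ 0.316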
Using the Sample Mean
Let X1,…, Xn be a random sample from a
distribution with mean value $\mu$ and standard
deviation $\sigma$. Then

1. $E(\bar{X}) = \mu_{\bar{X}} = \mu$
2. $V(\bar{X}) = \sigma^2_{\bar{X}} = \sigma^2/n$ and $\sigma_{\bar{X}} = \sigma/\sqrt{n}$

In addition, with $T_o = X_1 + \cdots + X_n$,
$E(T_o) = n\mu$, $V(T_o) = n\sigma^2$, and $\sigma_{T_o} = \sqrt{n}\,\sigma$.
Normal Population Distribution
Let X1,…, Xn be a random sample from a
normal distribution with mean value $\mu$ and
standard deviation $\sigma$. Then for any n, $\bar{X}$
is normally distributed, as is $T_o$.
The Central Limit Theorem
Let X1,…, Xn be a random sample from a
distribution with mean value $\mu$ and variance $\sigma^2$.
Then if n is sufficiently large, $\bar{X}$ has
approximately a normal distribution with
$\mu_{\bar{X}} = \mu$ and $\sigma^2_{\bar{X}} = \sigma^2/n$, and $T_o$ also has
approximately a normal distribution with
$\mu_{T_o} = n\mu$ and $\sigma^2_{T_o} = n\sigma^2$. The larger the value of
n, the better the approximation.
[Figure: the population distribution compared with the sampling distribution of $\bar{X}$ for small-to-moderate n and for large n, illustrating the Central Limit Theorem.]
Rule of Thumb
If n > 30, the Central Limit Theorem can
be used.
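A minimal demonstration of the theorem (a strongly skewed exponential population; n = 30 per the rule of thumb above):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n, k = 30, 50_000

    # k sample means from an exponential population with mu = sigma = 1.
    xbars = rng.exponential(size=(k, n)).mean(axis=1)

    # Standardize x-bar and compare empirical probabilities with Phi(c).
    z = (xbars - 1.0) / (1.0 / np.sqrt(n))
    for c in (-1.0, 0.0, 1.0):
        print((z <= c).mean(), stats.norm.cdf(c))  # the two columns nearly agree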