Homework 8 Solutions
1. Ross 6.27 p. 316
Let $X \sim U(0, 1)$ and $Y \sim \text{Exp}(1)$ be independent random variables. Then the joint density of $X$ and $Y$ is $f(x, y) = f_X(x) f_Y(y) = (1)(e^{-y}) = e^{-y}$.
a) If $Z = X + Y$, the cdf of $Z$ is
$$F_Z(a) = P(X + Y \le a) = \begin{cases} \displaystyle \int_0^a \!\!\int_0^{a-x} e^{-y}\, dy\, dx = \int_0^a \left(1 - e^{x-a}\right) dx = a - 1 + e^{-a}, & a \le 1 \\[8pt] \displaystyle \int_0^1 \!\!\int_0^{a-x} e^{-y}\, dy\, dx = \int_0^1 \left(1 - e^{x-a}\right) dx = 1 - (e - 1)e^{-a}, & a > 1 \end{cases}$$
Taking the derivative with respect to $a$ gives us
$$f_Z(a) = \begin{cases} 1 - e^{-a}, & a \le 1 \\ (e - 1)e^{-a}, & a > 1 \end{cases}$$
b) If $Z = X / Y$, the cdf of $Z$ is
$$F_Z(a) = P\left(\frac{X}{Y} \le a\right) = P\left(Y \ge \frac{X}{a}\right) = \int_0^1 \!\!\int_{x/a}^{\infty} e^{-y}\, dy\, dx = \int_0^1 e^{-x/a}\, dx = a\left(1 - e^{-1/a}\right)$$
Taking the derivative of this with respect to $a$ gives us
$$f_Z(a) = 1 - \left(1 + \frac{1}{a}\right) e^{-1/a}.$$
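As a quick sanity check (not part of the textbook solution), the following Python sketch simulates $X$ and $Y$ and compares the empirical cdfs of $X + Y$ and $X/Y$ against the closed forms derived above; the seed and sample size are arbitrary choices.

```python
# Monte Carlo check of Problem 1: empirical vs. derived cdfs.
import numpy as np

rng = np.random.default_rng(0)      # arbitrary seed
n = 1_000_000                       # arbitrary sample size
x = rng.uniform(0.0, 1.0, n)        # X ~ U(0, 1)
y = rng.exponential(1.0, n)         # Y ~ Exp(1)

def cdf_sum(a):
    """Derived cdf of X + Y: a - 1 + e^{-a} if a <= 1, else 1 - (e - 1)e^{-a}."""
    return a - 1 + np.exp(-a) if a <= 1 else 1 - (np.e - 1) * np.exp(-a)

def cdf_ratio(a):
    """Derived cdf of X / Y: a(1 - e^{-1/a})."""
    return a * (1 - np.exp(-1.0 / a))

for a in [0.5, 1.0, 2.0]:
    print(f"a={a}: sum   empirical={np.mean(x + y <= a):.4f} exact={cdf_sum(a):.4f}")
    print(f"a={a}: ratio empirical={np.mean(x / y <= a):.4f} exact={cdf_ratio(a):.4f}")
```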
2. Ross 6.29 p. 316
If $W = I^2 R$, then the cdf of $W$ is
$$F_W(w) = P(I^2 R \le w) = \iint_{x^2 y \le w;\; x, y \in [0,1]} 6x(1 - x) \cdot 2y \, dy\, dx$$
$$= \int_0^{\sqrt{w}} \!\!\int_0^1 12x(1 - x)\, y \, dy\, dx + \int_{\sqrt{w}}^1 \!\!\int_0^{w/x^2} 12x(1 - x)\, y \, dy\, dx$$
$$= \left(3w - 2w^{3/2}\right) + \left(3w^2 + 3w - 6w^{3/2}\right) = 3w^2 - 8w^{3/2} + 6w, \quad 0 \le w \le 1$$
Take the derivative of this with respect to $w$ to obtain the pdf $f_W(w) = 6\left(1 + w - 2\sqrt{w}\right)$.
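As a check (not part of the textbook solution), note that the density $6x(1-x)$ in the integrand above is the Beta(2, 2) density and $2y$ is the Beta(2, 1) density, so $W$ can be simulated directly; this sketch compares the empirical cdf of $W = I^2 R$ to $3w^2 - 8w^{3/2} + 6w$.

```python
# Monte Carlo check of Problem 2: I ~ Beta(2,2) has density 6x(1-x),
# R ~ Beta(2,1) has density 2y; compare the empirical cdf of W = I^2 R
# to the derived cdf 3w^2 - 8w^{3/2} + 6w.
import numpy as np

rng = np.random.default_rng(1)              # arbitrary seed
n = 1_000_000
i_samp = rng.beta(2, 2, n)
r_samp = rng.beta(2, 1, n)
w_samp = i_samp**2 * r_samp

for w in [0.1, 0.25, 0.5, 0.9]:
    exact = 3 * w**2 - 8 * w**1.5 + 6 * w
    print(f"w={w}: empirical={np.mean(w_samp <= w):.4f} exact={exact:.4f}")
```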
3. Ross 6.34 p. 317
a) Let $X$ and $Y$ denote, respectively, the number of males and females in the sample that never eat breakfast. Then $X$ has a binomial distribution with mean $(200)(0.252) = 50.4$ and variance $(200)(0.252)(0.748) = 37.6992$, and $Y$ has a binomial distribution with mean $(200)(0.236) = 47.2$ and variance $(200)(0.236)(0.764) = 36.0608$. Using the normal approximation to the binomial distribution, and the fact that the sum of 2 normal random variables is itself a normal random variable, we have that $T = X + Y$ is approximately normally distributed with mean 97.6 and variance 73.76; hence, the probability that at least 110 of the 400 people in the sample never eat breakfast is (with the continuity correction)
$$P(X + Y \ge 110) = P(T \ge 109.5) = P\left(Z \ge \frac{109.5 - 97.6}{\sqrt{73.76}}\right) = P(Z \ge 1.3856) \approx 0.0829.$$
b) Let $D = Y - X$ be the difference between the number of females who never eat breakfast and the number of males who never eat breakfast in our sample. Similar to (a), we have that $D$ is approximately normally distributed with mean $-3.2$ and variance 73.76; hence, the probability that the number of women who never eat breakfast is at least as large as the number of men who never eat breakfast is (again with the continuity correction)
$$P(Y \ge X) = P(D \ge -0.5) = P\left(Z \ge \frac{-0.5 - (-3.2)}{\sqrt{73.76}}\right) = P(Z \ge 0.3144) \approx 0.3766.$$
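The arithmetic in both parts can be reproduced numerically; a minimal sketch using SciPy (a convenience choice, not something the solution requires):

```python
# Reproduce the normal-approximation numbers in Problem 3.
from math import sqrt
from scipy.stats import norm

mu_x, var_x = 200 * 0.252, 200 * 0.252 * 0.748   # mean/variance of X
mu_y, var_y = 200 * 0.236, 200 * 0.236 * 0.764   # mean/variance of Y
sd = sqrt(var_x + var_y)                         # sqrt(73.76)

# (a) P(X + Y >= 110), continuity-corrected
print(norm.sf((109.5 - (mu_x + mu_y)) / sd))     # ~0.0829
# (b) P(Y >= X) = P(Y - X >= 0), continuity-corrected
print(norm.sf((-0.5 - (mu_y - mu_x)) / sd))      # ~0.3766
```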
4. Ross 6.39 p. 317
a) If $X$ is chosen at random from the set $\{1, 2, 3, 4, 5\}$ and $Y$ is chosen at random from the subset $\{1, \ldots, X\}$, then the joint mass function of $X$ and $Y$ is
$$P(X = j, Y = i) = P(X = j)\, P(Y = i \mid X = j) = \left(\frac{1}{5}\right)\left(\frac{1}{j}\right), \quad j = 1, \ldots, 5;\; i = 1, \ldots, j$$
b) The conditional mass function of $X$ given that $Y = i$ is just
$$P(X = j \mid Y = i) = \frac{P(X = j, Y = i)}{P(Y = i)} = \left(\frac{1}{5j}\right) \bigg/ \left(\sum_{k=i}^{5} \frac{1}{5k}\right) = \left(\frac{1}{j}\right) \bigg/ \left(\sum_{k=i}^{5} \frac{1}{k}\right), \quad i \le j \le 5.$$
c) $X$ and $Y$ are not independent. One way of explaining why is to note that $P(X = j) = 1/5$ for all $j$, but $P(X = j \mid Y = i) \ne 1/5$ for every valid pair $(i, j)$; for example, $P(X = 1 \mid Y = 1) = 1 \big/ \left(1 + \tfrac{1}{2} + \tfrac{1}{3} + \tfrac{1}{4} + \tfrac{1}{5}\right) = 60/137 \ne 1/5$. That is, the conditional probability is not equal to the unconditional probability.
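Since the sample space is tiny, the pmfs in (a)-(c) can be verified by exact enumeration; a short sketch:

```python
# Enumerate the joint pmf of Problem 4 and check (b) and (c) exactly.
from fractions import Fraction

# (a) P(X = j, Y = i) = (1/5)(1/j) for i = 1..j
joint = {(j, i): Fraction(1, 5 * j)
         for j in range(1, 6) for i in range(1, j + 1)}

# marginal P(Y = i) = sum_{j >= i} P(X = j, Y = i)
p_y = {i: sum(p for (j, k), p in joint.items() if k == i) for i in range(1, 6)}

# (b)/(c): conditional pmf of X given Y = 1 -- none of the values equal 1/5
for j in range(1, 6):
    print(j, joint[(j, 1)] / p_y[1])   # 60/137, 30/137, 20/137, 15/137, 12/137
```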
5. Ross 6.42 p. 318
The joint density of $X$ and $Y$ is $f(x, y) = x e^{-x(y+1)}$, where $X$ and $Y$ are positive-valued random variables.
a) To obtain the conditional density of $X$ given $Y = y$, and that of $Y$ given $X = x$, we will first need to find the marginal densities of $X$ and $Y$.
The marginal density of $X$:
$$f_X(x) = \int_0^\infty x e^{-x(y+1)}\, dy = x e^{-x} \left[-\frac{1}{x} e^{-xy}\right]_{y=0}^{\infty} = x e^{-x} \left(0 + \frac{1}{x}\right) = e^{-x}.$$
The marginal density of $Y$ (using integration by parts and then noting that the first term evaluates to 0):
$$f_Y(y) = \int_0^\infty x e^{-x(y+1)}\, dx = \left[-\frac{x\, e^{-x(y+1)}}{y+1}\right]_{x=0}^{\infty} + \int_0^\infty \frac{1}{y+1}\, e^{-x(y+1)}\, dx = \left[-\frac{e^{-x(y+1)}}{(y+1)^2}\right]_{x=0}^{\infty} = \frac{1}{(y+1)^2}$$
Hence, the conditional density of $X$ given $Y = y$ is just
$$f_{X|Y}(x \mid y) = \frac{f(x, y)}{f_Y(y)} = (y+1)^2\, x e^{-x(y+1)}, \quad x > 0,$$
and the conditional density of $Y$ given $X = x$ is just
$$f_{Y|X}(y \mid x) = \frac{f(x, y)}{f_X(x)} = x e^{-xy}, \quad y > 0.$$
b) The density function of $Z = XY$ can be obtained by first determining the cdf $F_Z(a)$ and then differentiating with respect to $a$:
$$F_Z(a) = P(XY \le a) = \int_0^\infty \!\!\int_0^{a/x} x e^{-x(y+1)}\, dy\, dx = \int_0^\infty x e^{-x} \left(\frac{1}{x} - \frac{1}{x} e^{-a}\right) dx$$
$$= \left(1 - e^{-a}\right) \int_0^\infty e^{-x}\, dx = \left(1 - e^{-a}\right)\left[-e^{-x}\right]_0^{\infty} = 1 - e^{-a}$$
$$\Rightarrow\; f_Z(a) = F_Z'(a) = e^{-a}, \quad a > 0$$
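The conclusion that $Z = XY \sim \text{Exp}(1)$ is easy to check by simulation, sampling $X$ from its marginal and then $Y$ from its conditional distribution given $X$; a sketch (seed and sample size arbitrary):

```python
# Monte Carlo check of Problem 5b: X ~ Exp(1) marginally, and Y | X = x
# is exponential with rate x (scale 1/x), so Z = XY should satisfy
# P(Z <= a) = 1 - e^{-a}.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.exponential(1.0, n)        # f_X(x) = e^{-x}
y = rng.exponential(1.0 / x)       # f_{Y|X}(y|x) = x e^{-xy}
z = x * y

for a in [0.5, 1.0, 2.0]:
    print(f"a={a}: empirical={np.mean(z <= a):.4f} exact={1 - np.exp(-a):.4f}")
```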
6. Ross 6.45 p. 318
$X_1, X_2, X_3$ iid $\sim U(0, 1)$, and we want the probability that the largest is greater than the sum of the others:
$$P(X_1 > X_2 + X_3) + P(X_2 > X_1 + X_3) + P(X_3 > X_1 + X_2) = 3P(X_1 > X_2 + X_3)$$
(Since the 3 variables are iid, the probability that $X_i$ is larger than the sum of the other two is the same for all $i$, so we can just examine the case where $X_1$ is largest and then multiply the result by 3.)
$$3P(X_1 > X_2 + X_3) = 3\iiint_{x_1 > x_2 + x_3;\; 0 \le x_i \le 1} dx_1\, dx_2\, dx_3 = 3\int_0^1 \!\!\int_0^{1-x_3} \!\!\int_{x_2 + x_3}^1 dx_1\, dx_2\, dx_3$$
$$= 3\int_0^1 \!\!\int_0^{1-x_3} (1 - x_2 - x_3)\, dx_2\, dx_3 = 3\int_0^1 \frac{(1 - x_3)^2}{2}\, dx_3 = 3\left[-\frac{(1 - x_3)^3}{6}\right]_0^1 = 3 \cdot \frac{1}{6} = \frac{1}{2}$$
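A quick Monte Carlo confirmation of the answer $1/2$ (not part of the solution proper):

```python
# Estimate P(largest of three iid U(0,1) exceeds the sum of the other two).
import numpy as np

rng = np.random.default_rng(3)
u = rng.uniform(size=(1_000_000, 3))
largest = u.max(axis=1)
rest = u.sum(axis=1) - largest
print(np.mean(largest > rest))   # ~0.5
```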
7. Ross 6.46 p. 318
Since the machine needs at least 3 motors to operate, the length of time that the machine functions is just the length of time that the 3rd-largest (or, with 5 motors, equivalently the 3rd-smallest) motor functions. I.e., if we look at the order statistics $X_{(1)}, X_{(2)}, X_{(3)}, X_{(4)}$, and $X_{(5)}$, then the density function of the length of time that the machine functions is just the density function of $X_{(3)}$. Then, using Equation (6.2), the density function of $X_{(3)}$ is just
$$f_{X_{(3)}}(x) = \frac{5!}{(5-3)!\,(3-1)!} \left(\int_0^x t e^{-t}\, dt\right)^{3-1} \left(\int_x^\infty t e^{-t}\, dt\right)^{5-3} x e^{-x}$$
$$= 30 \left[1 - (x+1)e^{-x}\right]^2 \left[(x+1)e^{-x}\right]^2 x e^{-x} = 30\, x (x+1)^2 e^{-3x} \left[1 - (x+1)e^{-x}\right]^2$$
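As a check on the order-statistic formula (not required by the problem), note that the lifetime density $x e^{-x}$ is the Gamma(2, 1) density, so the motors can be simulated directly; the sketch below compares the empirical cdf of $X_{(3)}$ with the binomial form $\sum_{k=3}^{5} \binom{5}{k} F(x)^k (1 - F(x))^{5-k}$, where $F(x) = 1 - (x+1)e^{-x}$.

```python
# Monte Carlo check of Problem 7: 3rd order statistic of 5 Gamma(2,1) lifetimes.
import numpy as np
from math import comb, exp

rng = np.random.default_rng(4)
samples = np.sort(rng.gamma(shape=2.0, scale=1.0, size=(1_000_000, 5)), axis=1)
x3 = samples[:, 2]                             # 3rd smallest of 5

def cdf_x3(x):
    F = 1 - (x + 1) * exp(-x)                  # cdf of a single lifetime
    return sum(comb(5, k) * F**k * (1 - F)**(5 - k) for k in range(3, 6))

for x in [1.0, 2.0, 3.0]:
    print(f"x={x}: empirical={np.mean(x3 <= x):.4f} exact={cdf_x3(x):.4f}")
```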
8. Ross 6.57 p. 319
We have that $Y_1 = X_1 + X_2$ and $Y_2 = e^{X_1}$. Solving these for $x_1$ and $x_2$ gives us $x_1 = \log(y_2)$ and $x_2 = y_1 - \log(y_2)$. Also,
$$J(x_1, x_2) = \begin{vmatrix} \partial g_1 / \partial x_1 & \partial g_1 / \partial x_2 \\ \partial g_2 / \partial x_1 & \partial g_2 / \partial x_2 \end{vmatrix} = \begin{vmatrix} 1 & 1 \\ e^{x_1} & 0 \end{vmatrix} = -e^{x_1} = -y_2.$$
With $X_1$ and $X_2$ being independent $\text{Exp}(\lambda)$ random variables, their joint density is just $f_{X_1, X_2}(x_1, x_2) = \lambda^2 e^{-\lambda(x_1 + x_2)}$. Hence, from Equation (7.1), we have
$$f_{Y_1, Y_2}(y_1, y_2) = f_{X_1, X_2}\left(\log(y_2),\; y_1 - \log(y_2)\right) |J|^{-1} = \frac{1}{y_2}\, \lambda^2 e^{-\lambda y_1}, \quad y_2 \ge 1,\; y_1 \ge \log(y_2).$$