Chapter 3 - Elsevier

CHAPTER 3
Additional Topics in Probability
3.1 Introduction
3.2 Some special distribution functions
3.3 Joint distributions
3.4 Functions of random variables
3.5 Limit Theorems
3.6 Chapter summary
3.7 Computer examples
Projects for Chapter 3
Exercises 3.2
3.2.1
(a)
P(X = 7) = C(10, 7)(0.5)^7(0.5)^3 = 120(0.5)^7(0.5)^3 ≈ 0.117
(b)
P(X ≤ 7) = 1 − P(8) − P(9) − P(10) = 1 − 0.044 − 0.010 − 0.001 = 0.945
(c)
P(X > 0) = 1 − P(0) = 1 − 0.001 = 0.999
(d)
E(X) = 10(0.5) = 5
Var(X) = 10(0.5)(1 − 0.5) = 2.5
3.2.3
(a) P(Z > 1.645) = 0.05, so z0 = 1.645
(b) P(Z < 1.175) = 0.88, so z0 = 1.175
(c) P(Z > 1.28) = 0.10, so z0 = 1.28
(d) P(Z < 1.645) = 0.95, so z0 = 1.645
3.2.5
(a)
P(X ≤ 20) = P(Z ≤ (20 − 10)/5) = P(Z ≤ 2) = 0.9772
(b)
P(X ≥ 5) = P(Z ≥ (5 − 10)/5) = P(Z ≥ −1) = 0.8413
(c)
P(12 ≤ X ≤ 15) = P((12 − 10)/5 ≤ Z ≤ (15 − 10)/5) = P(0.4 ≤ Z ≤ 1)
  = P(Z ≤ 1) − P(Z ≤ 0.4) = 0.8413 − 0.6554 = 0.1859
(d)
P(|X − 12| ≤ 15) = P(−15 ≤ X − 12 ≤ 15) = P(−3 ≤ X ≤ 27)
  = P((−3 − 10)/5 ≤ Z ≤ (27 − 10)/5) = P(−2.6 ≤ Z ≤ 3.4)
  = P(Z ≤ 3.4) − P(Z ≤ −2.6) = 0.9997 − 0.0047 = 0.9950
3.2.7
Let X = the number of people satisfied with their health coverage, then n = 15 and
p = 0.7.
(a)
P(X = 10) = C(15, 10)(0.7)^10(0.3)^5 = 3003(0.7)^10(0.3)^5 ≈ 0.206
There is a 20.6% chance that exactly 10 people are satisfied with their health coverage.
(b)
P(X ≤ 10) = 1 − P(X = 11) − P(X = 12) − P(X = 13) − P(X = 14) − P(X = 15)
  = 1 − (0.219 + 0.170 + 0.092 + 0.031 + 0.005) = 1 − 0.517 = 0.483
There is a 48.3% chance that no more than 10 people are satisfied with their health coverage.
(c)
E(X) = np = 15(0.7) = 10.5
3.2.9
Let X = the number of defective tubes in a certain box of 400, then n = 400 and
p = 3/100=0.03.
(a)
P(X = r) = C(400, r)(0.03)^r (0.97)^(400−r)
(b)
P(X ≥ k) = Σ_{i=k}^{400} C(400, i)(0.03)^i (0.97)^(400−i)
(c)
P(X ≤ 1) = P(X = 0) + P(X = 1) ≈ 0.0000684
(d)
Part (c) shows that the probability of at most one defective is 0.0000684, which is very small.
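The tiny probability in part (c) is easy to confirm directly, since Python's integer-exact `comb` avoids overflow for n = 400 (a quick check, not part of the original solution):

```python
from math import comb

n, p = 400, 0.03
# P(X <= 1) = P(X = 0) + P(X = 1) for X ~ Binomial(400, 0.03)
p_at_most_1 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
```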
3.2.11
e  x
 0 since   0 and x  0.
x!
Since each p( x)  0, then
p( x) 
e  x
 x 
1  2
 e  x
e (1  
 ...)
x!
x!
1! 2!
 e   (e )  1 here we apply Taylor's expansion on e .
p( x)   x p( x)   x
This shows that p( x)  0 and

x
p ( x)  1.
3.2.13
The probability density function is given by
f(x) = 1/10, 0 ≤ x ≤ 10, and 0 otherwise.
Hence,
P(5 ≤ X ≤ 9) = ∫_5^9 (1/10) dx = 0.4.
Hence, there is a 40% chance that a piece chosen at random will be suitable for kitchen use.
3.2.15
The probability density function is given by
f(x) = 1/100, 0 ≤ x ≤ 100, and 0 otherwise.
(a) P(60 ≤ X ≤ 80) = ∫_60^80 (1/100) dx = 0.2.
(b) P(X > 90) = ∫_90^100 (1/100) dx = 0.1.
(c)
There is a 20% chance that the efficiency is between 60 and 80 units; there is a 10% chance that the efficiency is greater than 90 units.
3.2.17
Let X = the failure time of the component. X follows an exponential distribution with rate 0.05, so the p.d.f. of X is f(x) = 0.05 e^(−0.05x), x ≥ 0.
Hence,
R(10) = 1 − F(10) = 1 − ∫_0^10 0.05 e^(−0.05x) dx = 1 − (1 − e^(−0.5)) = e^(−0.5) ≈ 0.607
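The survival calculation reduces to a single exponential, which a one-line helper makes explicit (a sketch; the name `reliability` is ours):

```python
from math import exp

rate = 0.05

def reliability(t):
    # R(t) = 1 - F(t) = e^(-rate * t) for an exponential lifetime
    return exp(-rate * t)
```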
3.2.19
The uniform probability density function is given by
f(x) = 1, 0 ≤ x ≤ 1.
Hence,
P(0.5 < X < 0.65 | X < 0.75) = P(0.5 < X < 0.65 and X < 0.75) / P(X < 0.75)
  = P(0.5 < X < 0.65) / P(X < 0.75)
  = (∫_0.5^0.65 1 dx) / (∫_0^0.75 1 dx) = 0.15/0.75 = 0.2
3.2.21
First, find z0 such that P(Z > z0) = 0.15.
P(Z > 1.036) = 0.15, so z0 = 1.036.
x0 = 72 + 1.036 × 6 = 78.22
The minimum score that a student has to get for an "A" grade is 78.22.
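Looking up z0 can be replaced by a small numeric inversion of the normal CDF; a sketch using bisection (the helper names `phi` and `z_upper` are ours):

```python
from math import erf, sqrt

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

def z_upper(alpha, lo=-10.0, hi=10.0):
    # bisection for z0 with P(Z > z0) = alpha
    for _ in range(100):
        mid = (lo + hi) / 2
        if 1 - phi(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z0 = z_upper(0.15)     # about 1.036
x0 = 72 + z0 * 6       # minimum score for an "A"
```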
3.2.23
P(1.9 ≤ X ≤ 2.02) = P((1.9 − 1.96)/0.04 ≤ Z ≤ (2.02 − 1.96)/0.04) = P(−1.5 ≤ Z ≤ 1.5) = 0.866
P(X < 1.9 or X > 2.02) = 1 − P(1.9 ≤ X ≤ 2.02) = 0.134
13.4% of the balls manufactured by the company are defective.
3.2.25
(a)
P(X > 125) = P(Z > (125 − 115)/10) = P(Z > 1) = 0.159
(b)
P(X < 95) = P(Z < (95 − 115)/10) = P(Z < −2) = 0.023
(c)
First, find z0 such that P(Z < z0) = 0.95.
P(Z < 1.645) = 0.95, so z0 = 1.645.
x0 = 115 + 1.645 × 10 = 131.45
(d)
There is a 16% chance that a child chosen at random will have a systolic pressure greater than 125 mm Hg. There is a 2.3% chance that a child will have a systolic pressure less than 95 mm Hg. 95% of this population have a systolic blood pressure below 131.45.
3.2.27
First find z1, z2 and z3 such that
P(Z < z1) = 0.2, P(Z < z2) = 0.5 and P(Z < z3) = 0.8.
Using the standard normal table, we find z1 = −0.842, z2 = 0 and z3 = 0.842.
Then
y1 = 0 + (−0.842)(0.65) = −0.5473, so x1 = e^(y1) ≈ 0.58; similarly we obtain x2 = 1 and x3 = 1.73.
For survival probabilities 0.2, 0.5 and 0.8, the experimenter should choose doses 0.58, 1 and 1.73, respectively.
3.2.29
(a)
M_X(t) = E(e^(tX)) = ∫_0^∞ e^(tx) [1/(Γ(α) β^α)] x^(α−1) e^(−x/β) dx
  = [1/(Γ(α) β^α)] ∫_0^∞ x^(α−1) exp(−((1 − βt)/β) x) dx
  = [1/(Γ(α) β^α)] (β/(1 − βt))^α ∫_0^∞ u^(α−1) e^(−u) du,
    by letting u = ((1 − βt)/β) x with 1 − βt > 0.
Note that the integrand is the kernel of the Gamma(α, 1) density, so the last integral equals Γ(α). Hence
M_X(t) = [Γ(α)/Γ(α)] (1 − βt)^(−α) = (1 − βt)^(−α), when t < 1/β.
(b)
E(X) = M′_X(0) = αβ(1 − βt)^(−α−1) |_{t=0} = αβ, and
E(X²) = M″_X(0) = d/dt [αβ(1 − βt)^(−α−1)] |_{t=0} = α(α + 1)β²(1 − βt)^(−α−2) |_{t=0} = α(α + 1)β².
Var(X) = E(X²) − [E(X)]² = α(α + 1)β² − (αβ)² = αβ².
3.2.31
(a)
First consider the product
Γ(α)Γ(β) = ∫_0^∞ u^(α−1) e^(−u) du · ∫_0^∞ v^(β−1) e^(−v) dv
  = ∫_0^∞ x^(2(α−1)) e^(−x²) 2x dx · ∫_0^∞ y^(2(β−1)) e^(−y²) 2y dy   (by letting u = x² and v = y²)
  = [2 ∫_0^∞ x^(2α−1) e^(−x²) dx] [2 ∫_0^∞ y^(2β−1) e^(−y²) dy]
  = 4 ∫_0^∞ ∫_0^∞ x^(2α−1) y^(2β−1) e^(−(x²+y²)) dx dy.
Transforming to polar coordinates with x = r cos θ and y = r sin θ (the first quadrant corresponds to 0 < θ < π/2),
Γ(α)Γ(β) = 4 ∫_0^(π/2) ∫_0^∞ (r cos θ)^(2α−1) (r sin θ)^(2β−1) e^(−r²) r dr dθ
  = [2 ∫_0^∞ r^(2(α+β)−1) e^(−r²) dr] [2 ∫_0^(π/2) (cos θ)^(2α−1) (sin θ)^(2β−1) dθ]
  = [∫_0^∞ s^(α+β−1) e^(−s) ds] [2 ∫_0^(π/2) (cos θ)^(2α−1) (sin θ)^(2β−1) dθ]   (by letting s = r²)
  = Γ(α+β) · 2 ∫_0^(π/2) (cos θ)^(2α−1) (sin θ)^(2β−1) dθ
  = Γ(α+β) ∫_0^1 t^(α−1) (1−t)^(β−1) dt   (by letting t = cos²θ, so that dθ = −dt/(2√(t(1−t))))
  = Γ(α+β) B(α, β).
Hence, we have shown that
B(α, β) = Γ(α)Γ(β)/Γ(α+β).
(b)
E(X) = [1/B(α, β)] ∫_0^1 x · x^(α−1)(1−x)^(β−1) dx
  = [B(α+1, β)/B(α, β)] ∫_0^1 x^((α+1)−1)(1−x)^(β−1)/B(α+1, β) dx = B(α+1, β)/B(α, β)
  = [Γ(α+1)Γ(β)/Γ(α+β+1)] · [Γ(α+β)/(Γ(α)Γ(β))] = α/(α+β), and
E(X²) = [1/B(α, β)] ∫_0^1 x² · x^(α−1)(1−x)^(β−1) dx
  = [B(α+2, β)/B(α, β)] ∫_0^1 x^((α+2)−1)(1−x)^(β−1)/B(α+2, β) dx = B(α+2, β)/B(α, β)
  = [Γ(α+2)Γ(β)/Γ(α+β+2)] · [Γ(α+β)/(Γ(α)Γ(β))] = α(α+1)/((α+β)(α+β+1)).
Then Var(X) = E(X²) − [E(X)]² = α(α+1)/((α+β)(α+β+1)) − [α/(α+β)]² = αβ/((α+β)²(α+β+1)).
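The identity and the moment formulas can be spot-checked numerically with `math.gamma` (a sketch; the parameter values are arbitrary choices of ours):

```python
from math import gamma

def B(a, b):
    # beta function via the identity B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return gamma(a) * gamma(b) / gamma(a + b)

a, b = 2.5, 4.0               # arbitrary positive parameters for the check
ex = B(a + 1, b) / B(a, b)    # E(X), should equal a/(a+b)
ex2 = B(a + 2, b) / B(a, b)   # E(X^2), should equal a(a+1)/((a+b)(a+b+1))
var = ex2 - ex**2             # should equal ab/((a+b)^2 (a+b+1))
```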
3.2.33
In this case, the number of breakdowns per month can be assumed to have Poisson
distribution with mean 3.
(a)
P(X = 1) = e^(−3) 3¹/1! = 0.1494. There is a 14.94% chance that there will be just one network breakdown during December.
(b)
P(X ≥ 4) = 1 − P(0) − P(1) − P(2) − P(3) = 0.3528. There is a 35.28% chance that there will be at least 4 network breakdowns during December.
(c)
P(X ≤ 7) = Σ_{x=0}^{7} e^(−3) 3^x/x! = 0.9881. There is a 98.81% chance that there will be at most 7 network breakdowns during December.
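All three Poisson probabilities follow from one pmf helper (a quick Python check; `pois` is our name for it):

```python
from math import exp, factorial

def pois(x, lam=3):
    # Poisson pmf with mean lam
    return exp(-lam) * lam**x / factorial(x)

p_a = pois(1)                              # P(X = 1)
p_b = 1 - sum(pois(x) for x in range(4))   # P(X >= 4)
p_c = sum(pois(x) for x in range(8))       # P(X <= 7)
```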
3.2.35
(a)
P(1 ≤ X ≤ 4) = ∫_1^4 [1/(Γ(2) · 1²)] x^(2−1) e^(−x/1) dx = ∫_1^4 x e^(−x) dx
  = [−x e^(−x)]_1^4 + ∫_1^4 e^(−x) dx   (by integration by parts)
  = 0.6442
The probability that an acid solution made by this procedure will satisfactorily etch a tray is 0.6442.
(b)
P(1 ≤ X ≤ 4) = ∫_1^4 [1/(Γ(1) · 2¹)] x^(1−1) e^(−x/2) dx = ∫_1^4 (1/2) e^(−x/2) dx = [−e^(−x/2)]_1^4 = 0.4712.
The probability that an acid solution made by this procedure will satisfactorily etch a tray is 0.4712.
3.2.37
Note the MGFs of the exponential and gamma distributions:
(1 − θt)^(−1) and (1 − θt)^(−α), for t < 1/θ.
Then
M_Y(t) = E(e^(t Σ_{i=1}^k X_i)) = E(Π_{i=1}^k e^(t X_i)) = Π_{i=1}^k E(e^(t X_i))
  = [1/(1 − θt)][1/(1 − θt)] ··· [1/(1 − θt)] = [1/(1 − θt)]^k
By the uniqueness of the MGF,
Y = Σ_{i=1}^k X_i ~ Gamma(k, θ)
Exercises 3.3
3.3.1
(a)
The joint probability function is
P(X = x, Y = y) = C(8, x) C(6, y) C(10, 4−x−y) / C(24, 4),
where 0 ≤ x ≤ 4, 0 ≤ y ≤ 4, and 0 ≤ x + y ≤ 4.
(b)
P(X = 3, Y = 0) = C(8, 3) C(6, 0) C(10, 1) / C(24, 4) ≈ 0.053.
(c)
P(X < 3, Y = 1) = Σ_{x=0}^{2} P(X = x, Y = 1) = Σ_{x=0}^{2} C(8, x) C(6, 1) C(10, 3−x) / C(24, 4) ≈ 0.429.
(d)
                          y
 x        0      1      2      3      4     Sum
 0     0.020  0.068  0.064  0.019  0.001   0.172
 1     0.090  0.203  0.113  0.015          0.421
 2     0.119  0.158  0.040                 0.317
 3     0.053  0.032                        0.085
 4     0.007                               0.007
 Sum   0.289  0.461  0.217  0.034  0.001   1.00
3.3.3
1
 1

 (1  x)dx  (1  y )dy 
c
(
1

x
)(
1

y
)
dxdy

c


 

11
 1
 1

1
1

x 2 
y 2 

c x
y
 4c

2 1 
2 1 



1 1

1 1
f ( x, y )dxdy 
11
1 1
Thus, if c = 1/4, then
  f ( x, y)dxdy  1 . And we also see that
f ( x, y )  0 for all x and y.
11
Hence, f ( x, y ) is a joint probability density function.
3.3.5
By definition, the marginal pdf of X is given by the row sums, and the marginal pdf of Y is obtained by the column sums. Hence,

 x_i         −1     3     5   otherwise
 f_X(x_i)   0.6   0.3   0.1       0

 y_i         −2     0     1     4   otherwise
 f_Y(y_i)   0.4   0.3   0.1   0.2       0
3.3.7
From Exercise 3.3.5 we can calculate
P(X = −1 | Y = 0) = P(X = −1, Y = 0)/f_Y(0) = 0.1/0.3 ≈ 0.33.
3.3.9
(a) The marginal of X is
f_X(x) = ∫_x^2 f(x, y) dy = ∫_x^2 (8/9) x y dy = (4/9)(4x − x³), 1 ≤ x ≤ 2.
(b)
P(1.5 ≤ X ≤ 1.75, Y ≥ 1) = ∫_{1.5}^{1.75} (∫_x^2 (8/9) x y dy) dx = ∫_{1.5}^{1.75} (4/9)(4x − x³) dx
  = (4/9)[2x² − x⁴/4]_{1.5}^{1.75} ≈ 0.2426
3.3.11
Using the joint density in Exercise 3.3.9 we can obtain the joint mgf of (X, Y) as
M_{(X,Y)}(t1, t2) = E(e^(t1 X + t2 Y)) = ∫_1^2 ∫_x^2 e^(t1 x + t2 y) (8/9) x y dy dx
  = (8/9) ∫_1^2 x e^(t1 x) [∫_x^2 y e^(t2 y) dy] dx
  = (8/9) ∫_1^2 x e^(t1 x) [K − (x/t2) e^(t2 x) + (1/t2²) e^(t2 x)] dx,
where K = e^(2 t2)(2 t2 − 1)/t2²,
  = (8/9) K ∫_1^2 x e^(t1 x) dx − [8/(9 t2)] ∫_1^2 x² e^((t1+t2) x) dx + [8/(9 t2²)] ∫_1^2 x e^((t1+t2) x) dx
  = (8/9) K [(x/t1) e^(t1 x) − (1/t1²) e^(t1 x)]_1^2
    − [8/(9 t2)] [(x²/(t1+t2)) e^((t1+t2) x) − (2x/(t1+t2)²) e^((t1+t2) x) + (2/(t1+t2)³) e^((t1+t2) x)]_1^2
    + [8/(9 t2²)] [(x/(t1+t2)) e^((t1+t2) x) − (1/(t1+t2)²) e^((t1+t2) x)]_1^2
After simplification we then have
M_{(X,Y)}(t1, t2) = (8/9) { [(t1 + 3t2 − t1² − 3t2² − 4t1t2 + t1²t2 + 2t1t2² + t2³)/(t2²(t1+t2)³)] e^(t1+t2)
  + [(2t2 − 1)(1 − t1)/(t1² t2²)] e^(t1+2t2)
  − [ (t1 + 3t2 − 2t1² − 6t2² − 8t1t2 + 4t1²t2 + 8t1t2² + 4t2³)/(t2²(t1+t2)³)
      − (2t2 − 1)(2t1 − 1)/(t1² t2²) ] e^(2t1+2t2) }
3.3.13
(a)
Using Σ_{y=1}^{n} y² = n(n+1)(2n+1)/6,
f_X(x) = Σ_{y=1}^{n} f(x, y) = Σ_{y=1}^{n} [6xy/(n(n+1)(2n+1))]²
  = [36x²/(n(n+1)(2n+1))²] Σ_{y=1}^{n} y² = 6x²/(n(n+1)(2n+1)), x = 1, 2, ..., n.
f_Y(y) = Σ_{x=1}^{n} f(x, y) = [36y²/(n(n+1)(2n+1))²] Σ_{x=1}^{n} x² = 6y²/(n(n+1)(2n+1)), y = 1, 2, ..., n.
Given y = 1, 2, ..., n, we have
f(x | y) = f(x, y)/f_Y(y) = [6xy/(n(n+1)(2n+1))]² / [6y²/(n(n+1)(2n+1))]
  = 6x²/(n(n+1)(2n+1)), x = 1, 2, ..., n.
(b)
Given x = 1, 2, ..., n, we have
f(y | x) = f(x, y)/f_X(x) = [6xy/(n(n+1)(2n+1))]² / [6x²/(n(n+1)(2n+1))]
  = 6y²/(n(n+1)(2n+1)), y = 1, 2, ..., n.
3.3.15
(a)
E(XY) = Σ_{x=1}^{3} Σ_{y=1}^{3} x y f(x, y) = 35/12
(b)
E(X) = Σ_{x=1}^{3} x f_X(x) = 5/3, and
E(Y) = Σ_{y=1}^{3} y f_Y(y) = 11/6.
Then, Cov(X, Y) = E(XY) − E(X)E(Y) = 35/12 − (5/3)(11/6) = −5/36
(c)
Var(X) = Σ_x [x − E(X)]² f_X(x) = Σ_{x=1}^{3} (x − 5/3)² f_X(x) = 5/9, and
Var(Y) = Σ_y [y − E(Y)]² f_Y(y) = Σ_{y=1}^{3} (y − 11/6)² f_Y(y) = 23/36.
Then, ρ_XY = Cov(X, Y)/√(Var(X)Var(Y)) = (−5/36)/√((5/9)(23/36)) ≈ −0.233.
3.3.17
Assume that a and c are nonzero.
Cov(U, V) = Cov(aX + b, cY + d) = ac Cov(X, Y),
Var(U) = Var(aX + b) = a² Var(X), and Var(V) = Var(cY + d) = c² Var(Y).
Then,
ρ_UV = Cov(U, V)/√(Var(U)Var(V)) = ac Cov(X, Y)/√(a² Var(X) c² Var(Y)) = [ac/|ac|] ρ_XY
  = ρ_XY, if ac > 0
  = −ρ_XY, otherwise.
3.3.19
We first state the famous Cauchy–Schwarz inequality:
[E(XY)]² ≤ E(X²)E(Y²), and the equality holds if and only if there exist constants α and β, not both zero, such that P(αX = βY) = 1.
Now, consider
ρ_XY = ±1 ⟺ [Cov(X, Y)]² = Var(X)Var(Y)
  ⟺ [E((X − μ_X)(Y − μ_Y))]² = E(X − μ_X)² E(Y − μ_Y)²
By the Cauchy–Schwarz equality condition we have
P(X − μ_X = K(Y − μ_Y)) = 1 for some constant K
  ⟹ P(X = aY + b) = 1 for some constants a and b.
3.3.21
(a)
First, we compute the marginal densities.
f_X(x) = ∫_x^∞ f(x, y) dy = ∫_x^∞ e^(−y) dy = e^(−x), x > 0, and
f_Y(y) = ∫_0^y f(x, y) dx = ∫_0^y e^(−y) dx = y e^(−y), y > 0.
For given y > 0, we have the conditional density
f(x | Y = y) = f(x, y)/f_Y(y) = e^(−y)/(y e^(−y)) = 1/y, 0 < x < y.
Then, (X | Y = y) follows Uniform(0, y). Thus, E(X | Y = y) = y/2.
(b)
E(XY) = ∫∫ x y f(x, y) dx dy = ∫_0^∞ (∫_0^y x y e^(−y) dx) dy = ∫_0^∞ (1/2) y³ e^(−y) dy = 3,
E(X) = ∫_0^∞ x f_X(x) dx = ∫_0^∞ x e^(−x) dx = 1, and
E(Y) = ∫_0^∞ y f_Y(y) dy = ∫_0^∞ y² e^(−y) dy = 2.
Then, Cov(X, Y) = E(XY) − E(X)E(Y) = 3 − 1 · 2 = 1.
(c)
To check for independence of X and Y:
f_X(1) f_Y(1) = e^(−1) · e^(−1) = e^(−2) ≠ e^(−1) = f(1, 1).
Hence, X and Y are not independent.
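These moments can be verified by simulation: Y has density y e^(−y), i.e. Gamma(2, 1), which we can sample as a sum of two Exp(1) draws, and X | Y = y is Uniform(0, y) by part (a). A sketch (sample sizes and names are our choices):

```python
import random

random.seed(7)
n = 200_000
xs, ys = [], []
for _ in range(n):
    # Y ~ Gamma(2, 1): sum of two independent Exp(1) variables
    y = random.expovariate(1.0) + random.expovariate(1.0)
    # given Y = y, X is Uniform(0, y)
    xs.append(random.uniform(0, y))
    ys.append(y)

ex = sum(xs) / n                                   # near E(X) = 1
ey = sum(ys) / n                                   # near E(Y) = 2
exy = sum(x * y for x, y in zip(xs, ys)) / n       # near E(XY) = 3
cov = exy - ex * ey                                # near Cov(X, Y) = 1
```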
3.3.23
Let  2  Var ( X )  Var (Y ) . Since X and Y are independent, we have
Cov( X , Y )  E ( XY )  E ( X ) E (Y )  E ( X ) E (Y )  E ( X )(Y )  0 . Then,
Cov( X , aX  Y )  aCov( X , X )  Cov( X , Y )  aVar ( X )  a 2 , and
Var (aX  Y )  a 2Var ( X )  Var (Y )  (a 2  1) 2 .
P a g e | 47
Thus,  X ,aX Y 
Cov( X , aX  Y )
Var ( X )Var (aX  Y )

a 2
a
 2 (a 2  1) 2
a2 1
Exercises 3.4
3.4.1
The pdf of X is f_X(x) = 1/a if 0 < x < a and zero otherwise. Assuming c > 0,
F_Y(y) = P(Y ≤ y) = P(cX + d ≤ y) = P(X ≤ (y − d)/c) = ∫_{−∞}^{(y−d)/c} f_X(x) dx
  = 0, if y < d
  = (y − d)/(ac), if d ≤ y ≤ ac + d
  = 1, if y > ac + d
Hence,
f_Y(y) = dF_Y(y)/dy = 1/(ac), d < y < ac + d, and 0 otherwise.
3.4.3
Let U  XY and V  Y .
Then X  U / V and Y  V , and
x
J  u
y
u
x
1
u
 2 1
v 
v
v  v.
y
0
1
v
Then the joint pdf of U and V is given by
u
u
1
fU ,V (u, v)  f X ,Y ( , v)  J  f X ,Y ( , v)
v
v
v
Then the pdf of U is given by
fU (u )   fU ,V (u, v)dv 
v



u
1
f X ,Y ( , v) dv .
v
v
.
P a g e | 48
3.4.5
The joint pdf of (X, Y) is
f_{X,Y}(x, y) = [1/(2π σ1 σ2)] exp{−(x²/(2σ1²) + y²/(2σ2²))}, −∞ < x < ∞, −∞ < y < ∞; σ1, σ2 > 0.
We can easily show that the marginal densities are
f_X(x) = ∫ f(x, y) dy = [1/(√(2π) σ1)] e^(−x²/(2σ1²)), −∞ < x < ∞, and
f_Y(y) = ∫ f(x, y) dx = [1/(√(2π) σ2)] e^(−y²/(2σ2²)), −∞ < y < ∞.
This implies that X ~ N(0, σ1²) and Y ~ N(0, σ2²).
Also, notice that f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x and y; thus X and Y are independent.
By the definition of the chi-square distribution and the independence of X and Y, we know that
(1/σ1²)X² + (1/σ2²)Y² ~ χ²(2). Therefore, the pdf of U = (1/σ1²)X² + (1/σ2²)Y² is
f_U(u) = (1/2) e^(−u/2), u > 0.
3.4.7
Let U  X  Y and V  Y .
Then X  U  V and Y  V , and
x
J  u
y
u
x
v  1  1  1 .
y 0 1
v
Then the joint pdf of U and V is given by
fU ,V (u, v)  f X ,Y (u  v, v)  J  f X ,Y (u  v, v) .

Thus, the pdf of U is given by fU (u ) 
 f X ,Y (u  v, v)dv .

3.4.9
(a)
Here let g(x) = (x − μ)/σ, and hence g⁻¹(z) = σz + μ. Thus, d g⁻¹(z)/dz = σ.
Also, f_X(x) = [1/(√(2π) σ)] e^(−(1/2)((x−μ)/σ)²), −∞ < x < ∞. Therefore, the pdf of Z is
f_Z(z) = f_X(g⁻¹(z)) |d g⁻¹(z)/dz| = [1/√(2π)] e^(−z²/2), −∞ < z < ∞,
which is the pdf of N(0, 1).
(b)
The cdf of U is given by
F_U(u) = P(U ≤ u) = P((X − μ)²/σ² ≤ u) = P(−√u ≤ (X − μ)/σ ≤ √u) = P(−√u ≤ Z ≤ √u)
  = ∫_{−√u}^{√u} [1/√(2π)] e^(−z²/2) dz = 2 ∫_0^{√u} [1/√(2π)] e^(−z²/2) dz,
since the integrand is an even function.
Hence, the pdf of U is
f_U(u) = dF_U(u)/du = 2 [1/√(2π)] e^(−u/2) [1/(2√u)] = [1/√(2π)] u^(−1/2) e^(−u/2), u > 0,
and zero otherwise, which is the pdf of χ²(1).
3.4.11
Since the support of the pdf of V is v > 0, g(v) = (1/2)mv² is a one-to-one function on the support. Hence, g⁻¹(y) = √(2y/m), and
d g⁻¹(y)/dy = (1/2)(2y/m)^(−1/2)(2/m) = 1/√(2my).
Therefore, writing the Maxwell–Boltzmann density as f_V(v) = c v² e^(−βv²), the pdf of E is given by
f(y) = f_V(g⁻¹(y)) |d g⁻¹(y)/dy| = c (2y/m) e^(−2βy/m) [1/√(2my)] = c √(2y/m³) e^(−2βy/m), y > 0.
3.4.13
Let U = √(X² + Y²) and V = tan⁻¹(Y/X). Here U is considered to be the radius and V the angle; this is a polar transformation and hence is one-to-one.
Then X = U cos V and Y = U sin V, and
J = det [ ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v ] = det [ cos v  −u sin v ; sin v  u cos v ] = u cos²v + u sin²v = u.
Then the joint pdf of U and V is given by
f_{U,V}(u, v) = f_{X,Y}(u cos v, u sin v) |J| = [1/(2πσ²)] exp{−(u² cos²v + u² sin²v)/(2σ²)} u
  = [u/(2πσ²)] e^(−u²/(2σ²)), u > 0, 0 < v < 2π.
3.4.15
The joint pdf of (X, Y) is
f_{X,Y}(x, y) = f_X(x) f_Y(y) = (1/4) e^(−(x+y)/2), x, y > 0.
Apply the result in Exercise 3.4.14 with θ = 2. We have the joint pdf of U = (X − Y)/2 and V = Y as
f_{U,V}(u, v) = (1/2) e^(−(u+v)), v > −2u, v > 0. Thus, the pdf of U is given by
f_U(u) = ∫_v f_{U,V}(u, v) dv
  = ∫_0^∞ (1/2) e^(−(u+v)) dv = (1/2) e^(−u), for u ≥ 0
  = ∫_{−2u}^∞ (1/2) e^(−(u+v)) dv = (1/2) e^(u), for u < 0.
Exercises 3.5
3.5.1
(a)
Note that X follows Beta(5, 5); applying the result in Exercise 3.2.31 we have μ = 1/2 and σ² = 1/44. From Chebyshev's theorem,
P(μ − Kσ < X < μ + Kσ) ≥ 1 − 1/K².
Equating μ − Kσ to 0.2 and μ + Kσ to 0.8 with μ = 1/2 and σ = √(1/44) ≈ 0.15, we obtain K ≈ 2. Hence,
P(0.2 < X < 0.8) ≥ 1 − 1/2² = 0.75
(b)
P(0.2 < X < 0.8) = ∫_{0.2}^{0.8} 630 x⁴ (1 − x)⁴ dx ≈ 0.961
3.5.3
σ² = E(X − μ)² = Σ_x (x − μ)² f(x)
  = Σ_{x ≤ μ−Kσ} (x − μ)² f(x) + Σ_{μ−Kσ < x < μ+Kσ} (x − μ)² f(x) + Σ_{x ≥ μ+Kσ} (x − μ)² f(x)
  ≥ Σ_{x ≤ μ−Kσ} (x − μ)² f(x) + Σ_{x ≥ μ+Kσ} (x − μ)² f(x)
Note that (x − μ)² ≥ K²σ² for x ≤ μ − Kσ or x ≥ μ + Kσ. Then the above inequality can be written as
σ² ≥ K²σ² [Σ_{x ≤ μ−Kσ} f(x) + Σ_{x ≥ μ+Kσ} f(x)] = K²σ² [P(X ≤ μ − Kσ) + P(X ≥ μ + Kσ)]
   = K²σ² P(|X − μ| ≥ Kσ)
This implies that P(|X − μ| ≥ Kσ) ≤ 1/K², or P(|X − μ| < Kσ) ≥ 1 − 1/K².
3.5.5
Applying Chebyshev's theorem we have
P(|X_n/n − p| < 0.1) = 1 − P(|X_n − np| ≥ 0.1n) ≥ 1 − E(X_n − np)²/(0.1n)²
  = 1 − Var(X_n)/(0.01n²) = 1 − np(1 − p)/(0.01n²) = 1 − (100/n) p(1 − p)
  ≥ 1 − (100/n)(1/4) = 1 − 25/n, since p(1 − p) ≤ 1/4.
This implies that we want to find n such that
1 − 25/n ≥ 0.9 ⟺ 25/n ≤ 0.1 ⟺ n ≥ 250
3.5.7
Let X_1, ..., X_n denote the tosses of the coin, with value 1 if a head occurs and 0 otherwise. Then X_1, ..., X_n are independent variables following a Bernoulli distribution with p = 1/2. Thus, E(X_i) = 1/2 and Var(X_i) = 1/4. For any ε > 0, from the law of large numbers we have
P(|S_n/n − 1/2| < ε) → 1 as n → ∞, i.e. S_n/n will be near 1/2 for large n.
If the coin is not fair, then the fraction of heads, S_n/n, will be near the true probability of getting a head for large n.
3.5.9
Note that E(X_i) = 0 and Var(X_i) = i. Hence, X_1, ..., X_n are not identically distributed, so the conditions of the law of large numbers stated in the text are not satisfied.
There is a weaker version of the weak law of large numbers which requires only E(X_i) < ∞ and Var(X̄) → 0 as n → ∞. However, in this case
Var(X̄) = Var(Σ_{i=1}^n X_i / n) = Σ_{i=1}^n Var(X_i)/n² = Σ_{i=1}^n i/n² = n(n+1)/(2n²) → 1/2 as n → ∞.
Therefore, the conditions of the weaker version are not satisfied, either.
3.5.11
First note that E(X_i) = 2 and Var(X_i) = 2. From the CLT, (X̄_100 − 2)/√(2/100) approximately follows N(0, 1). Hence, we have
P(X̄_100 > 2) = P((X̄_100 − 2)/√(2/100) > (2 − 2)/√(2/100)) = P(Z > 0) = 0.5, where Z ~ N(0, 1).
3.5.13
First note that E(X_i) = 1/2 and Var(X_i) = 1/12. Then, by the CLT we know that
Z_n = (S_n − n/2)/√(n/12) = (X̄ − 1/2)/√(1/(12n))
approximately follows N(0, 1) for large n.
3.5.15
From Chebyshev's theorem, P(μ − Kσ < X < μ + Kσ) ≥ 1 − 1/K². Equating μ − Kσ to 104 and μ + Kσ to 140 with μ = 122 and σ = 2, we obtain K = 9. Hence,
P(104 < X < 140) ≥ 1 − 1/9² ≈ 0.988
3.5.17
Let X_i = 1 if the ith person in the sample is color blind and 0 otherwise. Then each X_i follows a Bernoulli distribution with estimated probability 0.02, and E(X_i) = 0.02 and Var(X_i) = 0.0196. Let S_n = Σ_{i=1}^n X_i.
We want P(S_n ≥ 1) ≥ 0.99. By the CLT, (S_n/n − 0.02)/√(0.0196/n) follows approximately N(0, 1). Then,
0.99 = P(S_n ≥ 1) = P(Z ≥ (1/n − 0.02)/√(0.0196/n))
Using the normal table, (1/n − 0.02)/√(0.0196/n) = −2.33. Solving this equation, we have n ≈ 359.05.
Thus, the sample size must be at least 360.
3.5.19
Let X_1, ..., X_100 be iid with μ = 1 and σ² = 0.04. Let X̄ = (1/100) Σ_{i=1}^{100} X_i.
By the CLT, (X̄ − 1)/√(0.04/100) follows approximately N(0, 1). Then,
P(0.99 ≤ X̄ ≤ 1) = P((0.99 − 1)/√(0.04/100) ≤ Z ≤ (1 − 1)/√(0.04/100)) = P(−0.5 ≤ Z ≤ 0) = 0.1915
3.5.21
Let X_i = 1 if the ith dropper in the sample is defective and 0 otherwise. Then each X_i follows a Bernoulli distribution with estimated probability 500/10000 = 0.05, and E(X_i) = 0.05 and Var(X_i) = 0.0475. Let S_125 = Σ_{i=1}^{125} X_i. From the CLT,
(S_125 − 125 × 0.05)/√(0.0475 × 125) follows approximately N(0, 1). Hence, we have
P(S_125 ≤ 2) = P(Z ≤ (2 − 125 × 0.05)/√(0.0475 × 125)) = P(Z ≤ −1.744) ≈ 0.041