Ch4

4-13【Question】
If X is a nonnegative continuous random variable, show that

E(X) = \int_0^\infty [1 - F(x)]\,dx
Apply this result to find the mean of the exponential distribution.
【Solution】
X is a nonnegative continuous r.v.

(i) E(X) = \int_0^\infty x f(x)\,dx = \lim_{b\to\infty} \int_0^b x f(x)\,dx

Integrating by parts (u = x, dv = f(x)\,dx, v = F(x) - 1),

= \lim_{b\to\infty} \left( [x(F(x)-1)]_0^b - \int_0^b (F(x)-1)\,dx \right)

= \lim_{b\to\infty} b(F(b)-1) + \int_0^\infty (1-F(x))\,dx

Claim: \lim_{b\to\infty} b(F(b)-1) = 0.

Consider: 0 \le b(1-F(b)) = b \int_b^\infty f(x)\,dx \le \int_b^\infty x f(x)\,dx \to 0

\therefore \lim_{b\to\infty} b(1-F(b)) = 0

\therefore E(X) = \int_0^\infty (1-F(x))\,dx

(ii) X \sim \mathrm{exp}(\lambda): F(x) = 1 - e^{-\lambda x}, so

E(X) = \int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}
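A quick Monte Carlo check of both results, sketched with NumPy (the rate lam = 2 is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                  # arbitrary rate parameter
x = rng.exponential(1 / lam, size=1_000_000)
grid = np.linspace(0, 20 / lam, 200_000)
surv = np.exp(-lam * grid)                 # 1 - F(x) = e^{-lam x}
tail_integral = surv.sum() * (grid[1] - grid[0])   # Riemann sum of the survival function
print(x.mean(), tail_integral, 1 / lam)    # all three close to 0.5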
4-18【Question】
If U_1, \dots, U_n are independent uniform random variables, find E(U_{(n)} - U_{(1)}).
【Solution】
U_1, \dots, U_n \sim \mathrm{Uniform}(0,1)

The joint density of the minimum and maximum is

f_{U_{(1)},U_{(n)}}(u_1, u_n) = \frac{n!}{0!\,(n-2)!\,0!} \cdot 1 \cdot 1 \cdot (u_n - u_1)^{n-2} = n(n-1)(u_n - u_1)^{n-2}, \quad 0 < u_1 < u_n < 1

Let R = U_{(n)} - U_{(1)} and Y = U_{(1)}, so that U_{(1)} = Y and U_{(n)} = R + Y, with Jacobian

|J| = \left| \det \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} \right| = 1

\therefore f_{R,Y}(r,y) = n(n-1) r^{n-2} \cdot 1, \quad 0 < y < y + r < 1

f_R(r) = \int_0^{1-r} n(n-1) r^{n-2}\,dy = n(n-1)(1-r) r^{n-2}, \quad 0 < r < 1

E(R) = \int_0^1 r \cdot n(n-1)(1-r) r^{n-2}\,dr = n(n-1) \int_0^1 (1-r) r^{n-1}\,dr

= n(n-1) \frac{\Gamma(n)\Gamma(2)}{\Gamma(n+2)} = \frac{n(n-1)(n-1)!}{(n+1)!} = \frac{n-1}{n+1}

(a Beta integral with \alpha = n, \beta = 2)
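A short simulation of the range, sketched with NumPy (n = 5 is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.uniform(size=(500_000, n))
r = u.max(axis=1) - u.min(axis=1)     # sample range U_(n) - U_(1)
print(r.mean(), (n - 1) / (n + 1))    # both close to 4/6 ≈ 0.667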
4-25【Question】
If X_1 and X_2 are independent random variables following a gamma
distribution with parameters \alpha and \lambda, find E(R^2), where R^2 = X_1^2 + X_2^2.
【Solution】
X_1, X_2 \sim \mathrm{Gamma}(\alpha, \lambda), independent

f_{X_1,X_2}(x_1,x_2) = \frac{\lambda^\alpha}{\Gamma(\alpha)} x_1^{\alpha-1} e^{-\lambda x_1} \cdot \frac{\lambda^\alpha}{\Gamma(\alpha)} x_2^{\alpha-1} e^{-\lambda x_2}, \quad x_1, x_2 > 0

E(R^2) = E(X_1^2 + X_2^2) = \int_0^\infty \int_0^\infty (x_1^2 + x_2^2) f_{X_1,X_2}(x_1,x_2)\,dx_1\,dx_2

By symmetry the two terms contribute equally:

E(R^2) = 2 \int_0^\infty \frac{\lambda^\alpha}{\Gamma(\alpha)} x_1^{\alpha+1} e^{-\lambda x_1}\,dx_1 \int_0^\infty \frac{\lambda^\alpha}{\Gamma(\alpha)} x_2^{\alpha-1} e^{-\lambda x_2}\,dx_2

= 2 \cdot \frac{\Gamma(\alpha+2)}{\lambda^2 \Gamma(\alpha)} \cdot 1 = \frac{2\alpha(\alpha+1)}{\lambda^2}
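A Monte Carlo check, sketched with NumPy (alpha = 3 and lam = 2 are arbitrary; note that NumPy's gamma sampler takes scale = 1/rate):

import numpy as np

rng = np.random.default_rng(0)
alpha, lam = 3.0, 2.0
x1 = rng.gamma(alpha, 1 / lam, size=1_000_000)
x2 = rng.gamma(alpha, 1 / lam, size=1_000_000)
print((x1**2 + x2**2).mean(), 2 * alpha * (alpha + 1) / lam**2)   # both near 6.0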
4-26【Question】
Referring to Example B in Section 4.1.2, what is the expected number of coupons
needed to collect r different types, where r \le n?
【Solution】
X_1: the number of trials up to and including the trial on which the first coupon is
collected.
X_2: the number of trials from that point up to and including the trial on which a
coupon different from the first is obtained, ... and so on, up to X_r.

\therefore X_k \sim \mathrm{Geometric}\!\left( \frac{n-k+1}{n} \right), \quad E(X_k) = \frac{n}{n-k+1}, \quad k = 1, \dots, r, and the X_k are independent.

Let X = X_1 + X_2 + \dots + X_r.

E(X) = E(X_1 + \dots + X_r) = \sum_{k=1}^r E(X_k) = \frac{n}{n} + \frac{n}{n-1} + \dots + \frac{n}{n-r+1}

= n \left( \sum_{k=1}^n \frac{1}{k} - \sum_{k=1}^{n-r} \frac{1}{k} \right)

\approx n(\log n + \gamma_n) - n(\log(n-r) + \gamma_{n-r})

= n \left( \log \frac{n}{n-r} + \gamma_n - \gamma_{n-r} \right), \quad r < n
4-30【Question】
Find E\!\left( \frac{1}{X+1} \right), where X is a Poisson random variable.
【Solution】
X \sim \mathrm{Poisson}(\lambda), \quad \lambda > 0
p( X  k )  e 
E(
k
k  0,1,2,  
k!



1
1
k
k
k 1
)  (
)e 
 e  
 e  
X  1 k 0 k  1
k!
k 0 (k  1)!
k 1 k!
=
e 


k
k 0
k!
(
 1) 
e 

(e   1) 
1  e 

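A one-line Monte Carlo check, sketched with NumPy (lam = 1.5 is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
lam = 1.5
x = rng.poisson(lam, size=1_000_000)
print((1 / (x + 1)).mean(), (1 - np.exp(-lam)) / lam)   # both near 0.518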
4-34【Question】
Let X be uniform on [0,1], and let Y = \sqrt{X}. Find E(Y) by (a) finding the
density of Y and then finding the expectation and (b) using Theorem A of
Section 4.1.1.
【Solution】
X \sim U[0,1] \Rightarrow f_X(x) = 1, \quad x \in [0,1]

Y = \sqrt{X} \Rightarrow X = Y^2, \quad \frac{dx}{dy} = 2y

(a) f_Y(y) = f_X(y^2) \left| \frac{dx}{dy} \right| = 1 \cdot 2y = 2y, \quad y \in [0,1]

E(Y) = \int_0^1 y \cdot 2y\,dy = \int_0^1 2y^2\,dy = \frac{2}{3}

(b) E(Y) = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}
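Checked numerically (a NumPy sketch):

import numpy as np

rng = np.random.default_rng(0)
print(np.sqrt(rng.uniform(size=1_000_000)).mean(), 2 / 3)   # both near 0.667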
4-35【Question】
Find the mean of a negative binomial random variable. (Hint: Express the
random variable as a sum.)
【Solution】
The negative binomial random variable Y is the number of the trial on which the r-th
success occurs, in a sequence of independent trials with constant probability p of
success on each trial. Let X_i denote the number of the trial on which the i-th success
occurs, for i = 1, 2, \dots, r. Now define

W_i = X_i - X_{i-1}, \quad i = 1, 2, \dots, r

where X_0 is defined to be zero. Then we can write Y = X_r = \sum_{i=1}^r W_i.
Notice that the random variables W_1, W_2, \dots, W_r have identical geometric
distributions and are mutually independent.

\therefore E(Y) = E\!\left( \sum_{i=1}^r W_i \right) = \sum_{i=1}^r E(W_i) = \sum_{i=1}^r \frac{1}{p} = \frac{r}{p}
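The sum representation also gives a direct simulation, sketched with NumPy (r = 4, p = 0.3 arbitrary):

import numpy as np

rng = np.random.default_rng(0)
r, p = 4, 0.3
w = rng.geometric(p, size=(500_000, r))   # r independent geometric waiting times
print(w.sum(axis=1).mean(), r / p)        # both near 13.33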
4-37【Question】
For what values of p is the group testing of Example C in Section 4.1.2 inferior
to testing every individual?
【Solution】
From the text (p. 121), with n = mk individuals split into m groups of size k (n, m, k \in \mathbb{N}),

E(N) = n \left( 1 - p^k + \frac{1}{k} \right)

Group testing is inferior when E(N) > n:

n \left( 1 - p^k + \frac{1}{k} \right) > n \iff \frac{1}{k} > p^k \iff p < \left( \frac{1}{k} \right)^{1/k} = k^{-1/k}

If p < k^{-1/k}, then the group testing is inferior to testing every individual.
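Evaluating the threshold k^{-1/k} numerically (a NumPy sketch):

import numpy as np

k = np.arange(2, 11)
threshold = (1 / k) ** (1 / k)            # group testing is inferior when p < k**(-1/k)
for ki, ti in zip(k, threshold):
    print(int(ki), round(float(ti), 4))   # e.g. k = 2 -> 0.7071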
4-38【Question】
Let X be an exponential random variable with standard deviation \sigma. Find
P(|X - E(X)| > k\sigma) for k = 2, 3, 4, and compare the results to the bounds from
Chebyshev’s inequality.
【Solution】
Let X \sim \mathrm{Exponential}(\lambda). Then \mu = E(X) = \frac{1}{\lambda} and \sigma = \sqrt{\mathrm{Var}(X)} = \frac{1}{\lambda}.

P(|X - \mu| > k\sigma) = P\!\left( X > \frac{1}{\lambda} + \frac{k}{\lambda} \right) + P\!\left( X < \frac{1}{\lambda} - \frac{k}{\lambda} \right) = P\!\left( X > \frac{k+1}{\lambda} \right)

(since k \ge 2 \Rightarrow 1 - k < 0 \Rightarrow P\!\left( X < \frac{1-k}{\lambda} \right) = 0)
= k 1 e x dx  e k 1 , k  2,3,  

Chebyshev’s inequality
k
p( x    k )  e
 k 1
p ( x    k ) 
2
e 3  0.04979
1
 0.25
4
3
e 4  0.01832
1
 0.1111
9
1
 0.0625
16
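A simulation comparing the exact tail with the bound, sketched with NumPy (lambda = 1 without loss of generality):

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=2_000_000)   # lambda = 1, so mu = sigma = 1
for k in (2, 3, 4):
    mc = (np.abs(x - 1) > k).mean()        # P(|X - mu| > k sigma)
    print(k, mc, np.exp(-(k + 1)), 1 / k**2)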
39 Show that \mathrm{Var}(X - Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) - 2\,\mathrm{Cov}(X, Y).

Ans: \mathrm{Var}(X - Y) = E[(X - Y)^2] - [E(X - Y)]^2

= E(X^2 - 2XY + Y^2) - [E(X) - E(Y)]^2

= E(X^2) - 2E(XY) + E(Y^2) - E^2(X) + 2E(X)E(Y) - E^2(Y)

= E(X^2) - E^2(X) + E(Y^2) - E^2(Y) - 2[E(XY) - E(X)E(Y)]

= \mathrm{Var}(X) + \mathrm{Var}(Y) - 2\,\mathrm{Cov}(X, Y)
41 Find the covariance and the correlation of N_i and N_j, where N_1, N_2, \dots, N_r
are multinomial random variables. (Hint: Express them as sums.)
Ans: N_1, N_2, \dots, N_r: multinomial r.v.'s with

P_{N_1 \dots N_r}(n_1, \dots, n_r) = \frac{n!}{n_1! \cdots n_r!} p_1^{n_1} \cdots p_r^{n_r}, \quad \sum_{i=1}^r n_i = n, \quad \sum_{i=1}^r p_i = 1

Marginally, P_{N_i}(n_i) = \binom{n}{n_i} p_i^{n_i} (1-p_i)^{n-n_i}, i.e. N_i \sim \mathrm{Bin}(n, p_i), i = 1, \dots, r, so

E(N_i) = n p_i, \quad \mathrm{Var}(N_i) = n p_i (1 - p_i)

The pair (N_i, N_j) is trinomial, so

E(N_i N_j) = \sum_{n_j=0}^{n} \sum_{n_i=0}^{n-n_j} n_i n_j \frac{n!}{n_i!\,n_j!\,(n-n_i-n_j)!} p_i^{n_i} p_j^{n_j} (1-p_i-p_j)^{n-n_i-n_j}

= \sum_{n_j=0}^{n-1} \frac{n_j\, n!\, p_j^{n_j}\, p_i}{n_j!\,(n-n_j-1)!} \left[ \sum_{n_i=1}^{n-n_j} \frac{(n-n_j-1)!}{(n_i-1)!\,(n-n_i-n_j)!} p_i^{n_i-1} (1-p_i-p_j)^{n-n_i-n_j} \right]

The bracket is the binomial expansion of (p_i + 1 - p_i - p_j)^{n-n_j-1} = (1-p_j)^{n-n_j-1}, so

= \sum_{n_j=1}^{n-1} \frac{n!\, p_j^{n_j}\, p_i}{(n_j-1)!\,(n-n_j-1)!} (1-p_j)^{n-n_j-1}

= n(n-1) p_i p_j \sum_{n_j=1}^{n-1} \binom{n-2}{n_j-1} p_j^{n_j-1} (1-p_j)^{n-n_j-1}

= n(n-1) p_i p_j (p_j + 1 - p_j)^{n-2} = n(n-1) p_i p_j

\therefore \mathrm{Cov}(N_i, N_j) = E(N_i N_j) - E(N_i)E(N_j) = n(n-1) p_i p_j - n p_i \cdot n p_j = -n p_i p_j

\rho = \frac{\mathrm{Cov}(N_i, N_j)}{\sqrt{\mathrm{Var}(N_i)\,\mathrm{Var}(N_j)}} = \frac{-n p_i p_j}{\sqrt{n p_i (1-p_i) \cdot n p_j (1-p_j)}} = -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}
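A quick check with NumPy's multinomial sampler (a sketch; n and p are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, np.array([0.2, 0.3, 0.5])
counts = rng.multinomial(n, p, size=500_000)
cov = np.cov(counts[:, 0], counts[:, 1])[0, 1]   # sample Cov(N_1, N_2)
print(cov, -n * p[0] * p[1])                     # both near -1.2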
45 Two independent measurements, X and Y, are taken of a quantity μ. E(X)=E(Y)=
μ, but σ X and σ Y are unequal. The two measurements are combined by means
of a weighted average to give
Z = αX + (1-α)Y
where \alpha is a scalar and 0 \le \alpha \le 1.
a Show that E(Z) = μ.
b Find α in terms of σ X and σ Y to minimize Var(Z).
c Under what circumstances is it better to use the average (X + Y)/2 than
either X or Y alone?
Ans: (a) E(Z) = E(\alpha X + (1-\alpha)Y) = \alpha E(X) + (1-\alpha)E(Y) = \alpha\mu + (1-\alpha)\mu = \mu

(b) \mathrm{Var}(Z) = \mathrm{Var}(\alpha X + (1-\alpha)Y) = \alpha^2 \sigma_X^2 + (1-\alpha)^2 \sigma_Y^2 \equiv f(\alpha)

f'(\alpha) = 2\alpha \sigma_X^2 - 2(1-\alpha)\sigma_Y^2, \quad f''(\alpha) = 2\sigma_X^2 + 2\sigma_Y^2 > 0

so the minimum occurs where f'(\alpha) = 0:

2\alpha \sigma_X^2 - 2(1-\alpha)\sigma_Y^2 = 0 \Rightarrow \alpha = \frac{\sigma_Y^2}{\sigma_X^2 + \sigma_Y^2}

(c) \mathrm{Var}(Z) = \mathrm{Var}\!\left( \frac{X+Y}{2} \right) = \frac{\sigma_X^2 + \sigma_Y^2}{4}, \quad \mathrm{Var}(X) = \sigma_X^2, \quad \mathrm{Var}(Y) = \sigma_Y^2

\frac{\sigma_X^2 + \sigma_Y^2}{4} < \sigma_X^2 \iff \sigma_Y^2 < 3\sigma_X^2, \qquad \frac{\sigma_X^2 + \sigma_Y^2}{4} < \sigma_Y^2 \iff \sigma_X^2 < 3\sigma_Y^2

\therefore \frac{X+Y}{2} is better than either X or Y alone when \frac{1}{3}\sigma_Y^2 < \sigma_X^2 < 3\sigma_Y^2
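Minimizing Var(Z) numerically over a grid reproduces the optimal weight (a NumPy sketch; the two variances are arbitrary):

import numpy as np

sx2, sy2 = 1.0, 4.0                      # arbitrary sigma_X^2 and sigma_Y^2
alphas = np.linspace(0, 1, 100_001)
var_z = alphas**2 * sx2 + (1 - alphas)**2 * sy2
print(alphas[np.argmin(var_z)], sy2 / (sx2 + sy2))   # both 0.8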
49
Let T = \sum_{k=1}^n k X_k, where the X_k are independent random variables with means \mu
and variances \sigma^2. Find E(T) and Var(T).
Ans: E(T) = E\!\left( \sum_{k=1}^n k X_k \right) = \sum_{k=1}^n E(k X_k) = \sum_{k=1}^n k E(X_k) = \mu \sum_{k=1}^n k = \frac{n(n+1)}{2} \mu
\mathrm{Var}(T) = \mathrm{Var}\!\left( \sum_{k=1}^n k X_k \right) \quad (X_1, X_2, \dots, X_n are independent)

= \sum_{k=1}^n \mathrm{Var}(k X_k) = \sum_{k=1}^n k^2 \mathrm{Var}(X_k) = \sigma^2 \sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6} \sigma^2
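Both moments check out in simulation; a NumPy sketch (normal X_k and all parameter values are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 6, 1.5, 2.0
k = np.arange(1, n + 1)
t = (k * rng.normal(mu, sigma, size=(400_000, n))).sum(axis=1)   # T = sum of k * X_k
print(t.mean(), mu * n * (n + 1) / 2)                      # both near 31.5
print(t.var(), sigma**2 * n * (n + 1) * (2 * n + 1) / 6)   # both near 364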
51
If X and Y are independent random variables, find Var(XY) in terms of the
means and variances of X and Y.
Ans: X, Y independent

\mathrm{Var}(XY) = E[(XY)^2] - [E(XY)]^2

= E(X^2 Y^2) - E^2(X) E^2(Y) \quad (X, Y indep. \Rightarrow E(XY) = E(X)E(Y))

= E(X^2) E(Y^2) - E^2(X) E^2(Y) \quad (X, Y indep. \Rightarrow X^2, Y^2 indep. \Rightarrow E(X^2 Y^2) = E(X^2)E(Y^2))

= (\mathrm{Var}X + E^2 X)(\mathrm{Var}Y + E^2 Y) - E^2 X\, E^2 Y

= \mathrm{Var}X\,\mathrm{Var}Y + \mathrm{Var}X\, E^2 Y + E^2 X\, \mathrm{Var}Y

= \sigma_X^2 \sigma_Y^2 + \sigma_X^2 \mu_Y^2 + \mu_X^2 \sigma_Y^2
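A simulation check of the formula, sketched with NumPy (independent normals with arbitrary moments):

import numpy as np

rng = np.random.default_rng(0)
mx, my, sx2, sy2 = 1.0, 2.0, 0.5, 1.5
x = rng.normal(mx, np.sqrt(sx2), size=2_000_000)
y = rng.normal(my, np.sqrt(sy2), size=2_000_000)
print((x * y).var(), sx2 * sy2 + sx2 * my**2 + mx**2 * sy2)   # both near 4.25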
53
Let (X,Y) be a random point uniformly distributed on a unit disk. Show that
Cov(X,Y)=0, but that X and Y are not independent.
Ans: f_{X,Y}(x,y) = \frac{1}{\pi}, \quad x^2 + y^2 \le 1

f_X(x) = \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{1}{\pi}\,dy = \frac{2}{\pi}\sqrt{1-x^2}, \quad x \in [-1,1]

Similarly, f_Y(y) = \frac{2}{\pi}\sqrt{1-y^2}, \quad y \in [-1,1]

\therefore f_{X,Y}(x,y) \ne f_X(x) f_Y(y) for x^2 + y^2 \le 1

\therefore X, Y are not independent
E(X) = \int_{-1}^1 \frac{2}{\pi} x \sqrt{1-x^2}\,dx = 0 \quad (\frac{2}{\pi} x\sqrt{1-x^2} is an odd function)

Similarly, E(Y) = 0.

E(XY) = \int_{-1}^1 \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{xy}{\pi}\,dy\,dx = \int_{-1}^1 \frac{x}{\pi} \left[ \frac{y^2}{2} \right]_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} dx = \int_{-1}^1 x \cdot 0\,dx = 0

\therefore \mathrm{Cov}(X,Y) = E(XY) - E(X)E(Y) = 0 - 0 = 0

\therefore \mathrm{Cov}(X,Y) = 0, but X and Y are not independent.
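Rejection sampling on the disk illustrates both facts (a NumPy sketch; the dependence shows up in, e.g., the correlation of |X| and |Y|):

import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(4_000_000, 2))
x, y = pts[(pts**2).sum(axis=1) <= 1].T          # keep points inside the unit disk
print(np.cov(x, y)[0, 1])                        # near 0
print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])   # clearly negative: not independent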
54 Let Y have a density that is symmetric about zero, and let X = SY, where S is an
independent random variable taking on the values +1 and -1 with probability 1/2
each. Show that Cov(X,Y) = 0, but that X and Y are not independent.
Ans: E(S) = 1 \cdot P(S = 1) + (-1) \cdot P(S = -1) = \frac{1}{2} - \frac{1}{2} = 0

\mathrm{Cov}(X,Y) = \mathrm{Cov}(SY, Y) = E(SY \cdot Y) - E(SY)E(Y)

= E(S)E(Y^2) - E(S)E(Y)E(Y) \quad (S and Y indep.)

= 0 - 0 = 0

But X|Y = y takes only the two values \pm y:

f_{X|Y}(x|y) = \begin{cases} 1/2, & x = y \\ 1/2, & x = -y \end{cases}

while f_X(x) = \frac{1}{2} f_Y(x) + \frac{1}{2} f_Y(-x) = f_Y(x) (by symmetry)

\therefore f_{X|Y} \ne f_X, so X, Y are not independent.
55
In Section 3.7, the joint density of the minimum and maximum of n
independent uniform random variables was found. In the case n = 2, this
amounts to X and Y, the minimum and maximum, respectively, of two
independent random variables uniform on [0,1], having the joint density

f(x,y) = 2, \quad 0 \le x \le y \le 1
a Find the covariance and the correlation of X and Y. Does the sign of the
correlation make sense intuitively?
b Find E(X|Y=y) and E(Y|X=x). Do these results make sense intuitively?
c Find the probability density functions of the random variables E(X|Y) and
E(Y|X).
d What is the linear predictor of Y in terms of X (denoted by \hat{Y} = a + bX) that
has minimal mean squared error? What is the mean square prediction error?
e What is the predictor of Y in terms of X (\hat{Y} = h(X)) that has minimal mean
squared error? What is the mean square prediction error?
Ans: f_{X,Y}(x,y) = 2, \quad 0 \le x \le y \le 1

a.
f_Y(y) = \int_0^y f_{X,Y}(x,y)\,dx = \int_0^y 2\,dx = 2y, \quad y \in (0,1)

f_X(x) = \int_x^1 2\,dy = 2(1-x), \quad x \in (0,1)

E(XY) = \int_0^1 \int_0^y xy\, f_{X,Y}(x,y)\,dx\,dy = \int_0^1 \int_0^y 2xy\,dx\,dy = \int_0^1 y^3\,dy = \frac{1}{4}

E(X) = \int_0^1 x f_X(x)\,dx = \int_0^1 2x(1-x)\,dx = \frac{1}{3}

E(X^2) = \int_0^1 x^2 f_X(x)\,dx = \int_0^1 2x^2(1-x)\,dx = \frac{1}{6}

E(Y) = \int_0^1 y f_Y(y)\,dy = \int_0^1 2y^2\,dy = \frac{2}{3}

E(Y^2) = \int_0^1 y^2 f_Y(y)\,dy = \int_0^1 2y^3\,dy = \frac{1}{2}

\mathrm{Var}(X) = E(X^2) - E^2(X) = \frac{1}{6} - \left( \frac{1}{3} \right)^2 = \frac{1}{18}

\mathrm{Var}(Y) = E(Y^2) - E^2(Y) = \frac{1}{2} - \left( \frac{2}{3} \right)^2 = \frac{1}{18}

\mathrm{Cov}(X,Y) = E(XY) - E(X)E(Y) = \frac{1}{4} - \frac{1}{3} \cdot \frac{2}{3} = \frac{1}{36}

\rho = \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}X\,\mathrm{Var}Y}} = \frac{1/36}{\sqrt{(1/18)(1/18)}} = \frac{1}{2}

The positive sign makes sense intuitively: a large minimum forces the maximum to be large as well.

b.
f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{y}, \quad y \in (0,1), \; x \in (0,y) \Rightarrow X|Y = y \sim U(0,y)

\therefore E(X|Y = y) = \frac{y}{2}, \quad y \in (0,1)

f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{1-x}, \quad x \in (0,1), \; y \in (x,1) \Rightarrow Y|X = x \sim U(x,1)

\therefore E(Y|X = x) = \frac{x+1}{2}, \quad x \in (0,1)

These make sense: given the maximum y, the minimum is uniform on (0,y), and given the minimum x, the maximum is uniform on (x,1).

c.
Let W_1 = E(X|Y) = \frac{Y}{2}: \quad f_{W_1}(w_1) = 2 f_Y(2w_1) = 8 w_1, \quad w_1 \in (0, \tfrac{1}{2})

Let W_2 = E(Y|X) = \frac{X+1}{2}: \quad f_{W_2}(w_2) = 2 f_X(2w_2 - 1) = 8(1 - w_2), \quad w_2 \in (\tfrac{1}{2}, 1)

d. The best linear predictor of Y is \hat{Y} = a + bX with

b = \rho \frac{\sigma_Y}{\sigma_X} = \frac{1}{2} \cdot 1 = \frac{1}{2}, \quad a = \mu_Y - b\mu_X = \frac{2}{3} - \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{2}

\therefore \hat{Y} = \frac{1}{2} + \frac{1}{2} X, \quad x \in (0,1)

The mean square prediction error is E(Y - a - bX)^2 = \sigma_Y^2 (1 - \rho^2) = \frac{1}{18} \left( 1 - \frac{1}{4} \right) = \frac{1}{24}

e. The best predictor of Y is \hat{Y} = E(Y|X) = \frac{X+1}{2}, \quad x \in (0,1)

The mean square prediction error is

E(Y - E(Y|X))^2 = E(Y^2) - E(E^2(Y|X)) = \frac{1}{2} - E\!\left( \frac{(X+1)^2}{4} \right)

= \frac{1}{2} - \frac{1}{4} \int_0^1 (x+1)^2 \cdot 2(1-x)\,dx = \frac{1}{2} - \frac{1}{2} \cdot \frac{11}{12} = \frac{1}{24}
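Simulating the min and max of two uniforms confirms the correlation and the prediction error (a NumPy sketch):

import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=(2_000_000, 2))
x, y = u.min(axis=1), u.max(axis=1)          # minimum and maximum
print(np.corrcoef(x, y)[0, 1])               # near 1/2
print(((y - (0.5 + 0.5 * x))**2).mean())     # MSE of the predictor: near 1/24 ≈ 0.0417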
58 Let X and Y be jointly distributed random variables with correlation \rho_{XY};
define the standardized random variables \tilde{X} and \tilde{Y} as
\tilde{X} = (X - E(X)) / \sqrt{\mathrm{Var}(X)} and \tilde{Y} = (Y - E(Y)) / \sqrt{\mathrm{Var}(Y)}.
Show that \mathrm{Cov}(\tilde{X}, \tilde{Y}) = \rho_{XY}.
Ans: \tilde{X} = \frac{X - EX}{\sqrt{\mathrm{Var}X}}, \quad \tilde{Y} = \frac{Y - EY}{\sqrt{\mathrm{Var}Y}}

\mathrm{Cov}(\tilde{X}, \tilde{Y}) = E(\tilde{X}\tilde{Y}) - E(\tilde{X})E(\tilde{Y}) = E\!\left( \frac{(X - EX)(Y - EY)}{\sqrt{\mathrm{Var}X}\sqrt{\mathrm{Var}Y}} \right) - E\!\left( \frac{X - EX}{\sqrt{\mathrm{Var}X}} \right) E\!\left( \frac{Y - EY}{\sqrt{\mathrm{Var}Y}} \right)

= \frac{E[(X - EX)(Y - EY)]}{\sqrt{\mathrm{Var}X\,\mathrm{Var}Y}} - 0 = \rho_{XY}
67 A fair coin is tossed n times, and the number of heads, N, is counted. The coin is
then tossed N more times. Find the expected total number of heads generated by
this process.
Ans: N \sim B(n, \frac{1}{2})

Let X denote the number of heads in the N = m further tosses, m = 0, 1, 2, \dots, n.

\therefore X|N = m \sim B(m, \frac{1}{2}), \quad x = 0, 1, 2, \dots, m

\therefore E(X) = E(E(X|N)) = E\!\left( \frac{N}{2} \right) = \frac{E(N)}{2} = \frac{n}{4}

(Note: if X \sim B(n, p), then E(X) = np; here E(N) = \frac{n}{2})

\therefore The expected total number of heads generated by this process is

E(N) + E(X) = \frac{n}{2} + \frac{n}{4} = \frac{3}{4} n
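A two-stage simulation, sketched with NumPy (n = 10 arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 10
n_heads = rng.binomial(n, 0.5, size=1_000_000)   # N from the first n tosses
extra = rng.binomial(n_heads, 0.5)               # heads in the N further tosses
print((n_heads + extra).mean(), 3 * n / 4)       # both near 7.5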
70
Let the point (X,Y) be uniformly distributed over the half disk x^2 + y^2 \le 1,
where y \ge 0. If you observe X, what is the best prediction for Y? If you observe
Y, what is the best prediction for X? For both questions, "best" means having
the minimum mean squared error.
sol
f_{X,Y}(x,y) = \frac{2}{\pi}, \quad x^2 + y^2 \le 1, \; y \ge 0

f_X(x) = \int_0^{\sqrt{1-x^2}} f_{X,Y}(x,y)\,dy = \int_0^{\sqrt{1-x^2}} \frac{2}{\pi}\,dy = \frac{2}{\pi}\sqrt{1-x^2}, \quad x \in (-1,1)

f_Y(y) = \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} f_{X,Y}(x,y)\,dx = \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} \frac{2}{\pi}\,dx = \frac{4}{\pi}\sqrt{1-y^2}, \quad y \in (0,1)

f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{\sqrt{1-x^2}}, \quad x \in (-1,1), \; y \in (0, \sqrt{1-x^2})

f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{2\sqrt{1-y^2}}, \quad y \in (0,1), \; x \in (-\sqrt{1-y^2}, \sqrt{1-y^2})

Given X = x, the best prediction of Y is

E(Y|X = x) = \frac{\sqrt{1-x^2}}{2}, \quad x \in (-1,1) \quad (Note: Y|X = x \sim U(0, \sqrt{1-x^2}))

Given Y = y, the best prediction of X is

E(X|Y = y) = 0, \quad y \in (0,1) \quad (Note: X|Y = y \sim U(-\sqrt{1-y^2}, \sqrt{1-y^2}))
P159 #71
Let X and Y have the joint density

f_{X,Y}(x,y) = e^{-y}, \quad 0 \le x \le y < \infty

a Find Cov(X,Y) and the correlation of X and Y.
b Find E(X|Y = y) and E(Y|X = x).
c Find the density functions of the random variables E(X|Y) and E(Y|X).
sol
a.
f_X(x) = \int_x^\infty f_{X,Y}(x,y)\,dy = \int_x^\infty e^{-y}\,dy = e^{-x}, \quad x \in (0, \infty)

\therefore E(X) = 1, \mathrm{Var}(X) = 1 \quad (X \sim \Gamma(1), and if Y \sim \Gamma(\alpha) then E(Y) = \alpha, \mathrm{Var}(Y) = \alpha)

f_Y(y) = \int_0^y f_{X,Y}(x,y)\,dx = \int_0^y e^{-y}\,dx = y e^{-y}, \quad y \in (0, \infty)

\therefore E(Y) = \int_0^\infty y \cdot y e^{-y}\,dy = \int_0^\infty y^2 e^{-y}\,dy = 2, \quad E(Y^2) = \int_0^\infty y^3 e^{-y}\,dy = 6

E(XY) = \int_0^\infty \int_0^y xy\, f_{X,Y}(x,y)\,dx\,dy = \int_0^\infty \int_0^y xy e^{-y}\,dx\,dy = \int_0^\infty \frac{1}{2} y^3 e^{-y}\,dy = 3

\therefore \mathrm{Cov}(X,Y) = E(XY) - E(X)E(Y) = 3 - 1 \cdot 2 = 1, \quad \mathrm{Var}(Y) = E(Y^2) - E^2(Y) = 6 - 4 = 2

correlation \rho_{XY} = \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}X \cdot \mathrm{Var}Y}} = \frac{1}{\sqrt{1 \cdot 2}} = \frac{1}{\sqrt{2}}

b.
f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{e^{-y}}{y e^{-y}} = \frac{1}{y}, \quad x \in (0, y), \; y \in (0, \infty) \quad (X|Y = y \sim U(0, y))

\therefore E(X|Y = y) = \frac{y}{2}, \quad y \in (0, \infty)

f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{e^{-y}}{e^{-x}} = e^{-(y-x)}, \quad x \in (0, \infty), \; y \in (x, \infty)

\therefore E(Y|X = x) = \int_x^\infty y\, e^{-(y-x)}\,dy = x + 1, \quad x \in (0, \infty)

c.
Let W_1 = E(X|Y) = \frac{Y}{2}: \quad f_{W_1}(w_1) = 2 f_Y(2w_1) = 4 w_1 e^{-2w_1}, \quad w_1 \in (0, \infty)

Let W_2 = E(Y|X) = X + 1: \quad f_{W_2}(w_2) = f_X(w_2 - 1) = e^{-(w_2 - 1)}, \quad w_2 \in (1, \infty)
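Sampling (X,Y) hierarchically (Y from f_Y, then X|Y uniform) reproduces this joint density and checks part (a); a NumPy sketch:

import numpy as np

rng = np.random.default_rng(0)
y = rng.gamma(2, 1.0, size=1_000_000)   # f_Y(y) = y e^{-y}
x = rng.uniform(0, y)                   # X | Y = y ~ U(0, y)
c = np.cov(x, y)
print(c[0, 1], c[0, 1] / np.sqrt(c[0, 0] * c[1, 1]))   # near 1 and 1/sqrt(2) ≈ 0.707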
P160 #75
Find the moment-generating function of a Bernoulli random variable, and use it
to find the mean, variance, and third moment.
sol
P( X  0)  1  p, P( X  1)  p
M X (t )  E (e tx )  (1  p )  e 0  pe  pe t , t  R
EX  M X (0)  p, EX 2  M X (0)  p, EX 3  M X (0)  p
VarX  EX 2  E 2 X  p  p 2  p (1  p )
P160 #84
Assuming that X \sim N(0, \sigma^2), use the m.g.f. to show that the odd moments are
zero and the even moments are

\mu_{2n} = \frac{(2n)!\,\sigma^{2n}}{2^n (n!)}
sol
M(t) = e^{\sigma^2 t^2 / 2}, and all derivatives of M(t) at t = 0 exist for every n \ge 1, so by
Property B every moment of X exists. Using the Taylor expansion of e^{\sigma^2 t^2 / 2}:

M(t) = e^{\sigma^2 t^2 / 2} = \sum_{n=0}^\infty \frac{1}{n!} \left( \frac{\sigma^2 t^2}{2} \right)^n = \sum_{n=0}^\infty \frac{\sigma^{2n} t^{2n}}{2^n (n!)} = \sum_{n=0}^\infty \frac{(2n)!\,\sigma^{2n}}{2^n (n!)} \cdot \frac{t^{2n}}{(2n)!}

Matching coefficients with M(t) = \sum_k \mu_k t^k / k! gives

\mu_{2n+1} = 0, \quad \mu_{2n} = \frac{(2n)!\,\sigma^{2n}}{2^n (n!)}, \quad n \ge 0
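Monte Carlo moments agree with the formula; a NumPy sketch (sigma = 1.3 arbitrary):

import numpy as np
from math import factorial

rng = np.random.default_rng(0)
sigma = 1.3
x = rng.normal(0, sigma, size=4_000_000)
for n in (1, 2, 3):
    exact = factorial(2 * n) * sigma**(2 * n) / (2**n * factorial(n))
    odd = (x**(2 * n - 1)).mean()          # odd moments: near 0
    print((x**(2 * n)).mean(), exact, odd)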
P161 #88
If X is a nonnegative integer-valued random variable, the probability-generating
function of X is defined to be

G(s) = \sum_{k=0}^\infty s^k p_k

where p_k = P(X = k).

a Show that p_k = \frac{1}{k!} \frac{d^k}{ds^k} G(s) \Big|_{s=0}

b Show that \frac{dG}{ds} \Big|_{s=1} = E(X) and \frac{d^2 G}{ds^2} \Big|_{s=1} = E[X(X-1)]

c Express the probability-generating function in terms of the moment-generating
function.

d Find the probability-generating function of the Poisson distribution.
sol
a, b.
G(s) converges uniformly for |s| \le 1, so the power series can be differentiated
term by term to get the derivatives of G(s):

G'(s) = \sum_{k=1}^\infty k\, p_k s^{k-1} \quad \dots (1)

G''(s) = \sum_{k=2}^\infty k(k-1)\, p_k s^{k-2} \quad \dots (2)

In general, G^{(n)}(s) = \sum_{k=n}^\infty k(k-1)(k-2) \cdots (k-n+1)\, p_k s^{k-n} = \sum_{k=n}^\infty \binom{k}{n} n!\, p_k s^{k-n}

and this series still converges for |s| \le 1. Setting s = 0, every term except the
constant term vanishes, which gives

p_n = \frac{G^{(n)}(0)}{n!}

If E(X) exists, then (1) gives E(X) = G'(1); if E[X(X-1)] also exists, then (2)
gives E[X(X-1)] = G''(1).

c
Assume s > 0 and let the mgf of X be M_X(t). Then

G_X(s) = E(s^X) = E(e^{X \ln s}) = M_X(\ln s), \quad s > 0

d
From Example A on p. 144 of the text, if X \sim P(\lambda), then the mgf of X is
M_X(t) = \exp[\lambda(e^t - 1)].

By the result of (c), G_X(s) = M_X(\ln s) = \exp[\lambda(s - 1)] for all s > 0.
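Verifying the Poisson p.g.f. by simulation; a NumPy sketch (lam and s are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
lam, s = 2.0, 0.7
x = rng.poisson(lam, size=1_000_000)
print((s**x).mean(), np.exp(lam * (s - 1)))   # both near e^{-0.6} ≈ 0.549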
P161 #93
Find expressions for the approximate mean and variance of Y = g(X) for
(a) g(x) = \sqrt{x}, (b) g(x) = \log x, and (c) g(x) = \sin^{-1} x.
sol
Throughout, E(X) = \mu and \mathrm{Var}(X) = \sigma^2, and the propagation-of-error formulas give

E(g(X)) \approx g(\mu), \quad \mathrm{Var}(g(X)) \approx [g'(\mu)]^2 \sigma^2

a
g(x) = \sqrt{x}:

E(g(X)) \approx g(E(X)) = \sqrt{\mu}, \quad \mathrm{Var}(g(X)) \approx \left( \frac{1}{2\sqrt{\mu}} \right)^2 \sigma^2 = \frac{\sigma^2}{4\mu}

b
g(x) = \log x:

E(g(X)) \approx \log \mu, \quad \mathrm{Var}(g(X)) \approx \left( \frac{1}{\mu} \right)^2 \sigma^2 = \frac{\sigma^2}{\mu^2}

c
g(x) = \sin^{-1} x:

E(g(X)) \approx \sin^{-1} \mu, \quad \mathrm{Var}(g(X)) \approx \left( \frac{1}{\sqrt{1 - \mu^2}} \right)^2 \sigma^2 = \frac{\sigma^2}{1 - \mu^2}
P161 #96
Two sides, x_0 and y_0, of a right triangle are independently measured as X and
Y, where E(X) = x_0, E(Y) = y_0, and \mathrm{Var}(X) = \mathrm{Var}(Y) = \sigma^2. The angle between
the two sides is then determined as

\theta = \tan^{-1}\!\left( \frac{Y}{X} \right)

Find the approximate mean and variance of \theta.
sol
From p. 152 of the text, for Z = g(X, Y),

E(Z) \approx g(\mu) + \frac{1}{2}\sigma_X^2 \frac{\partial^2 g(\mu)}{\partial x^2} + \frac{1}{2}\sigma_Y^2 \frac{\partial^2 g(\mu)}{\partial y^2} + \sigma_{XY} \frac{\partial^2 g(\mu)}{\partial x \partial y}

\mathrm{Var}(Z) \approx \sigma_X^2 \left( \frac{\partial g(\mu)}{\partial x} \right)^2 + \sigma_Y^2 \left( \frac{\partial g(\mu)}{\partial y} \right)^2 + 2\sigma_{XY} \left( \frac{\partial g(\mu)}{\partial x} \right) \left( \frac{\partial g(\mu)}{\partial y} \right)

where \mu denotes the point (\mu_X, \mu_Y).

Here X, Y are independent random variables, E(X) = x_0, E(Y) = y_0, and
\mathrm{Var}(X) = \mathrm{Var}(Y) = \sigma^2. Let \theta(x, y) = \tan^{-1}(y/x). Then

\frac{\partial \theta}{\partial x} = \frac{-y}{x^2 + y^2}, \quad \frac{\partial \theta}{\partial y} = \frac{x}{x^2 + y^2}

\frac{\partial^2 \theta}{\partial x^2} = \frac{2xy}{(x^2 + y^2)^2}, \quad \frac{\partial^2 \theta}{\partial y^2} = \frac{-2xy}{(x^2 + y^2)^2}, \quad \frac{\partial^2 \theta}{\partial x \partial y} = \frac{y^2 - x^2}{(x^2 + y^2)^2}

\therefore E(\theta) \approx \tan^{-1}\!\left( \frac{y_0}{x_0} \right) + \frac{\sigma^2 x_0 y_0}{(x_0^2 + y_0^2)^2} - \frac{\sigma^2 x_0 y_0}{(x_0^2 + y_0^2)^2} = \tan^{-1}\!\left( \frac{y_0}{x_0} \right) \quad (X, Y independent \Rightarrow \sigma_{XY} = 0)

\mathrm{Var}(\theta) \approx \frac{y_0^2 \sigma^2}{(x_0^2 + y_0^2)^2} + \frac{x_0^2 \sigma^2}{(x_0^2 + y_0^2)^2} = \frac{\sigma^2}{x_0^2 + y_0^2}
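A simulation with small measurement error agrees with both approximations; a NumPy sketch (x_0 = 3, y_0 = 4, sigma = 0.05 are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x0, y0, sigma = 3.0, 4.0, 0.05
x = rng.normal(x0, sigma, size=1_000_000)
y = rng.normal(y0, sigma, size=1_000_000)
theta = np.arctan2(y, x)
print(theta.mean(), np.arctan2(y0, x0))          # both near 0.927
print(theta.var(), sigma**2 / (x0**2 + y0**2))   # both near 1e-4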