CHAPTER 4

Section 4.1

1. a. P(X ≤ 1) = ∫₀¹ ½x dx = ¼x² |₀¹ = .25
   b. P(.5 ≤ X ≤ 1.5) = ∫₀.₅^1.5 ½x dx = ¼x² |₀.₅^1.5 = .5
   c. P(X > 1.5) = ∫₁.₅² ½x dx = ¼x² |₁.₅² = .4375 ≈ .438

2. f(x) = 1/10 for −5 ≤ x ≤ 5, and = 0 otherwise.
   a. P(X < 0) = ∫₋₅⁰ (1/10) dx = .5
   b. P(−2.5 < X < 2.5) = ∫₋₂.₅^2.5 (1/10) dx = .5
   c. P(−2 ≤ X ≤ 3) = ∫₋₂³ (1/10) dx = .5
   d. P(k < X < k + 4) = ∫ₖ^{k+4} (1/10) dx = (1/10)x |ₖ^{k+4} = (1/10)[(k + 4) − k] = .4

3. a. [Graph of f(x) = .09375(4 − x²) on −2 ≤ x ≤ 2]
   b. P(X > 0) = ∫₀² .09375(4 − x²) dx = .09375(4x − x³/3) |₀² = .5
   c. P(−1 < X < 1) = ∫₋₁¹ .09375(4 − x²) dx = .6875
   d. P(X < −.5 or X > .5) = 1 − P(−.5 ≤ X ≤ .5) = 1 − ∫₋₀.₅^.5 .09375(4 − x²) dx = 1 − .3672 = .6328

4. a. ∫ f(x; θ) dx = ∫₀^∞ (x/θ²)e^{−x²/2θ²} dx = −e^{−x²/2θ²} |₀^∞ = 0 − (−1) = 1
   b. With θ = 100, P(X ≤ 200) = 1 − e^{−(200)²/(2·100²)} = 1 − e^{−2} = .8647.
      P(X < 200) = P(X ≤ 200) = .8647, since X is continuous.
      P(X ≥ 200) = 1 − P(X < 200) = .1353
   c. P(100 ≤ X ≤ 200) = ∫₁₀₀²⁰⁰ f(x; θ) dx = e^{−.5} − e^{−2} = .4712
   d. For x > 0, P(X ≤ x) = ∫₀ˣ (y/θ²)e^{−y²/2θ²} dy = 1 − e^{−x²/2θ²}

5. a. 1 = ∫₀² kx² dx = k·x³/3 |₀² = 8k/3, so k = 3/8   [Graph of f(x) = (3/8)x²]
   b. P(0 ≤ X ≤ 1) = ∫₀¹ (3/8)x² dx = (1/8)x³ |₀¹ = 1/8 = .125
   c. P(1 ≤ X ≤ 1.5) = ∫₁^1.5 (3/8)x² dx = (1/8)x³ |₁^1.5 = (1/8)(3/2)³ − (1/8) = 19/64 = .2969
   d. P(X ≥ 1.5) = 1 − ∫₀^1.5 (3/8)x² dx = 1 − (1/8)(3/2)³ = 37/64 = .5781

6. a. [Graph of f(x) = k[1 − (x − 3)²] on 2 ≤ x ≤ 4]
   b. 1 = ∫₂⁴ k[1 − (x − 3)²] dx = k∫₋₁¹ [1 − u²] du = (4/3)k, so k = 3/4
   c. P(X > 3) = ∫₃⁴ (3/4)[1 − (x − 3)²] dx = .5 by symmetry of the pdf
   d. P(11/4 ≤ X ≤ 13/4) = ∫_{11/4}^{13/4} (3/4)[1 − (x − 3)²] dx = (3/4)∫_{−1/4}^{1/4} [1 − u²] du = 47/128 ≈ .367
   e. P(|X − 3| > .5) = 1 − P(|X − 3| ≤ .5) = 1 − P(2.5 ≤ X ≤ 3.5) = 1 − (3/4)∫_{−.5}^{.5} [1 − u²] du = 5/16 ≈ .313

7. a. f(x) = 1/10 for 25 ≤ x ≤ 35, and = 0 otherwise
   b. P(X > 33) = ∫₃₃³⁵ (1/10) dx = .2
   c. E(X) = ∫₂₅³⁵ x·(1/10) dx = x²/20 |₂₅³⁵ = 30. The interval from μ − 2 to μ + 2 is from 28 to 32 minutes: P(28 < X < 32) = ∫₂₈³² (1/10) dx = .4
   d. P(a ≤ X ≤ a + 2) = ∫ₐ^{a+2} (1/10) dx = .2, since the interval has length 2.

8. a. [Graph of f(y)]
   b. ∫ f(y) dy = ∫₀⁵ (y/25) dy + ∫₅¹⁰ (2/5 − y/25) dy = y²/50 |₀⁵ + (2y/5 − y²/50) |₅¹⁰ = ½ + [(4 − 2) − (2 − ½)] = ½ + ½ = 1
   c. P(Y ≤ 3) = ∫₀³ (y/25) dy = 9/50 = .18
   d. P(Y ≤ 8) = ∫₀⁵ (y/25) dy + ∫₅⁸ (2/5 − y/25) dy = 23/25 = .92
   e. P(3 ≤ Y ≤ 8) = P(Y ≤ 8) − P(Y < 3) = 46/50 − 9/50 = 37/50 = .74
   f. P(Y < 2 or Y > 6) = ∫₀² (y/25) dy + ∫₆¹⁰ (2/5 − y/25) dy = .4

9. a. P(X ≤ 6) = ∫₀.₅⁶ .15e^{−.15(x−.5)} dx = .15∫₀^{5.5} e^{−.15u} du (after u = x − .5) = −e^{−.15u} |₀^{5.5} = 1 − e^{−.825} = .562
   b. P(X > 6) = 1 − .562 = .438; P(X ≥ 6) = .438 as well
   c. P(5 ≤ X ≤ 6) = P(X ≤ 6) − P(X ≤ 5) = .562 − .491 = .071

10. a. ∫ f(x; k, θ) dx = ∫_θ^∞ (kθᵏ/x^{k+1}) dx = −(θ/x)ᵏ |_θ^∞ = 0 − (−1) = 1
    b. P(X ≤ b) = ∫_θ^b (kθᵏ/x^{k+1}) dx = −(θ/x)ᵏ |_θ^b = 1 − (θ/b)ᵏ
    c. P(a ≤ X ≤ b) = ∫ₐᵇ (kθᵏ/x^{k+1}) dx = (θ/a)ᵏ − (θ/b)ᵏ

Section 4.2

11. a. P(X ≤ 1) = F(1) = ¼ = .25
    b. P(.5 ≤ X ≤ 1) = F(1) − F(.5) = 3/16 = .1875
    c. P(X > .5) = 1 − P(X ≤ .5) = 1 − F(.5) = 15/16 = .9375
    d. .5 = F(μ̃) = μ̃²/4, so μ̃² = 2 and μ̃ = √2 ≈ 1.414
    e. f(x) = F′(x) = x/2 for 0 ≤ x < 2, and = 0 otherwise
    f. E(X) = ∫ x f(x) dx = ∫₀² x·(x/2) dx = x³/6 |₀² = 8/6 ≈ 1.333
    g. E(X²) = ∫ x² f(x) dx = ∫₀² x²·(x/2) dx = x⁴/8 |₀² = 2, so Var(X) = E(X²) − [E(X)]² = 2 − (8/6)² ≈ .222 and σ_X ≈ .471
    h. From g, E(X²) = 2.
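The Section 4.1 and 4.2 answers above all come from integrating a pdf. As an illustrative check (not part of the original solutions), the Exercise 11 moments can be reproduced by numerical integration with scipy; the pdf f(x) = x/2 on [0, 2) is taken from that exercise.

# Numerical check of the Exercise 11 moments for f(x) = x/2 on [0, 2).
from scipy.integrate import quad

f = lambda x: x / 2
EX, _ = quad(lambda x: x * f(x), 0, 2)        # E(X) is about 1.333
EX2, _ = quad(lambda x: x**2 * f(x), 0, 2)    # E(X^2) = 2.0
print(EX, EX2, EX2 - EX**2)                   # the variance is about .222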
12. a. P(X < 0) = F(0) = .5
    b. P(−1 ≤ X ≤ 1) = F(1) − F(−1) = 11/16 = .6875
    c. P(X > .5) = 1 − P(X ≤ .5) = 1 − F(.5) = 1 − .6836 = .3164
    d. f(x) = F′(x) = d/dx[½ + (3/32)(4x − x³/3)] = (3/32)(4 − x²) = .09375(4 − x²) for −2 ≤ x ≤ 2
    e. F(μ̃) = .5 by definition; F(0) = .5 from a above, which is as desired (μ̃ = 0).

13. a. 1 = ∫₁^∞ (k/x⁴) dx = k·(−x⁻³/3) |₁^∞ = k/3, so k = 3
    b. cdf: for x ≥ 1, F(x) = ∫₁ˣ 3y⁻⁴ dy = −y⁻³ |₁ˣ = 1 − 1/x³. So F(x) = 0 for x ≤ 1 and F(x) = 1 − 1/x³ for x > 1.
    c. P(X > 2) = 1 − F(2) = 1/8 = .125; P(2 < X < 3) = F(3) − F(2) = (1 − 1/27) − (1 − 1/8) = .963 − .875 = .088
    d. E(X) = ∫₁^∞ x·(3/x⁴) dx = ∫₁^∞ 3x⁻³ dx = 3/2 = 1.5;
       E(X²) = ∫₁^∞ x²·(3/x⁴) dx = ∫₁^∞ 3x⁻² dx = 3;
       V(X) = E(X²) − [E(X)]² = 3 − 9/4 = 3/4 = .75, and σ_X = √.75 = .866
    e. P(1.5 − .866 ≤ X ≤ 1.5 + .866) = P(X ≤ 2.366) = F(2.366) = 1 − (2.366)⁻³ = .9245 (the lower endpoint lies below 1, where F = 0)

14. a. If X is uniformly distributed on the interval from A to B, then E(X) = ∫ₐᴮ x·1/(B − A) dx = (A + B)/2, E(X²) = (A² + AB + B²)/3, and V(X) = E(X²) − [E(X)]² = (B − A)²/12. With A = 7.5 and B = 20, E(X) = 13.75 and V(X) = 13.02.
    b. F(x) = 0 for x < 7.5; = (x − 7.5)/12.5 for 7.5 ≤ x < 20; = 1 for x ≥ 20
    c. P(X ≤ 10) = F(10) = .200; P(10 ≤ X ≤ 15) = F(15) − F(10) = .4
    d. σ = 3.61, so (μ − σ, μ + σ) = (10.14, 17.36). Thus P(μ − σ ≤ X ≤ μ + σ) = F(17.36) − F(10.14) = .5776. Similarly, P(μ − 2σ ≤ X ≤ μ + 2σ) = P(6.53 ≤ X ≤ 20.97) = 1.

15. a. F(x) = 0 for x ≤ 0, = 1 for x ≥ 1, and for 0 < x < 1,
       F(x) = ∫₀ˣ f(y) dy = ∫₀ˣ 90y⁸(1 − y) dy = 90∫₀ˣ (y⁸ − y⁹) dy = 90[y⁹/9 − y¹⁰/10] |₀ˣ = 10x⁹ − 9x¹⁰   [Graphs of f(x) and F(x)]
    b. F(.5) = 10(.5)⁹ − 9(.5)¹⁰ ≈ .0107
    c. P(.25 ≤ X ≤ .5) = F(.5) − F(.25) ≈ .0107 − [10(.25)⁹ − 9(.25)¹⁰] ≈ .0107 − .0000 = .0107
    d. The 75th percentile is the value of x for which F(x) = .75: .75 = 10x⁹ − 9x¹⁰ gives x ≈ .9036
    e. E(X) = ∫₀¹ x f(x) dx = ∫₀¹ x·90x⁸(1 − x) dx = 90∫₀¹ x⁹(1 − x) dx = 90[x¹⁰/10 − x¹¹/11] |₀¹ = 9/11 ≈ .8182;
       E(X²) = ∫₀¹ x²·90x⁸(1 − x) dx = 90∫₀¹ x¹⁰(1 − x) dx = 90[x¹¹/11 − x¹²/12] |₀¹ = .6818;
       V(X) ≈ .6818 − (.8182)² = .0124, σ_X ≈ .1113
    f. (μ − σ, μ + σ) = (.7068, .9295). Thus P(μ − σ ≤ X ≤ μ + σ) = F(.9295) − F(.7068) = .8465 − .1602 = .6863, so the probability X is more than 1 sd from its mean equals 1 − .6863 = .3137.

16. a. F(x) = 0 for x < 0 and F(x) = 1 for x > 2. For 0 ≤ x ≤ 2, F(x) = ∫₀ˣ (3/8)y² dy = (1/8)y³ |₀ˣ = x³/8   [Graph of F(x)]
    b. P(X ≤ .5) = F(.5) = 1/64
    c. P(.25 ≤ X ≤ .5) = F(.5) − F(.25) = 1/64 − 1/512 = 7/512 = .0137
    d. .75 = F(x) = x³/8, so x³ = 6 and x ≈ 1.817
    e. E(X) = ∫ x f(x) dx = ∫₀² x·(3/8)x² dx = (3/32)x⁴ |₀² = 1.5;
       E(X²) = ∫₀² x²·(3/8)x² dx = (3/40)x⁵ |₀² = 2.4;
       V(X) = 2.4 − (1.5)² = .15, σ_X = .3873
    f. (μ − σ, μ + σ) = (1.1127, 1.8873). Thus P(μ − σ ≤ X ≤ μ + σ) = F(1.8873) − F(1.1127) = .8403 − .1722 = .6681

17. a. p = F(x) = (x − A)/(B − A), so x = (100p)th percentile = A + (B − A)p
    b. E(X) = ∫ₐᴮ x·1/(B − A) dx = (1/(B − A))·x²/2 |ₐᴮ = (B² − A²)/[2(B − A)] = (A + B)/2;
       E(X²) = (B³ − A³)/[3(B − A)] = (A² + AB + B²)/3;
       V(X) = (A² + AB + B²)/3 − [(A + B)/2]² = (B − A)²/12, σ_X = (B − A)/√12
    c. E(Xⁿ) = ∫ₐᴮ xⁿ·1/(B − A) dx = (B^{n+1} − A^{n+1})/[(n + 1)(B − A)]

18. f(x) = ½ for −1 ≤ x ≤ 1.
    a. P(Y = .5) = P(X ≥ .5) = ∫₀.₅¹ ½ dx = .25
    b. P(Y = −.5) = .25 as well, due to symmetry. For −.5 < y < .5, F(y) = .25 + ∫₋₀.₅^y ½ dx = .25 + .5(y + .5) = .5 + .5y. Since Y ≤ .5, F(.5) = 1 and F(y) = 1 for y > .5 as well. That is,
       F(y) = 0 for y < −.5; = .5 + .5y for −.5 ≤ y < .5; = 1 for y ≥ .5   [Graph of F(y)]
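Percentile equations like the one in Exercise 15d above rarely have closed-form solutions. A minimal numerical sketch, assuming scipy is available (brentq is a convenience choice, not the method used in the text):

# Solve F(x) = 10x^9 - 9x^10 = .75 for the 75th percentile of Exercise 15.
from scipy.optimize import brentq

F = lambda x: 10 * x**9 - 9 * x**10
x75 = brentq(lambda x: F(x) - 0.75, 0, 1)
print(round(x75, 4))    # approximately 0.9036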
19. a. P(X ≤ 1) = F(1) = .25[1 + ln(4)] ≈ .597
    b. P(1 ≤ X ≤ 3) = F(3) − F(1) ≈ .966 − .597 = .369
    c. f(x) = F′(x) = .25 ln(4) − .25 ln(x) for 0 < x < 4

20. a. For 0 ≤ y ≤ 5, F(y) = ∫₀^y (u/25) du = y²/50. For 5 ≤ y ≤ 10,
       F(y) = ∫₀⁵ f(u) du + ∫₅^y f(u) du = ½ + ∫₅^y (2/5 − u/25) du = 2y/5 − y²/50 − 1   [Graph of F(y)]
    b. For 0 < p ≤ .5, p = F(y_p) = y_p²/50, so y_p = (50p)^{1/2}. For .5 < p ≤ 1, p = 2y_p/5 − y_p²/50 − 1, so y_p = 10 − 5√(2(1 − p)).
    c. E(Y) = 5 by straightforward integration (or by the symmetry of f(y)), and similarly V(Y) = 50/12 ≈ 4.1667. For the waiting time X for a single bus, E(X) = 2.5 and V(X) = 25/12.

21. E(area) = E(πR²) = ∫ πr² f(r) dr = ∫₉¹¹ πr²·(3/4)[1 − (10 − r)²] dr = (501/5)π ≈ 314.79

22. a. For 1 ≤ x ≤ 2, F(x) = ∫₁ˣ 2(1 − 1/y²) dy = 2(y + 1/y) |₁ˣ = 2(x + 1/x) − 4, so
       F(x) = 0 for x < 1; = 2(x + 1/x) − 4 for 1 ≤ x ≤ 2; = 1 for x > 2
    b. p = F(x_p) gives 2x_p² − (4 + p)x_p + 2 = 0, so x_p = ¼[4 + p + √(p² + 8p)]. To find μ̃, set p = .5: μ̃ = 1.64
    c. E(X) = ∫₁² x·2(1 − 1/x²) dx = 2∫₁² (x − 1/x) dx = 2[x²/2 − ln(x)] |₁² ≈ 1.614;
       E(X²) = ∫₁² x²·2(1 − 1/x²) dx = 2∫₁² (x² − 1) dx = 2[x³/3 − x] |₁² = 8/3;
       Var(X) = 8/3 − (1.614)² ≈ .0626
    d. Amount left = max(1.5 − X, 0), so E(amount left) = ∫₁² max(1.5 − x, 0) f(x) dx = ∫₁^{1.5} (1.5 − x)·2(1 − 1/x²) dx ≈ .061

23. With X = temperature in °C, the temperature in °F is (9/5)X + 32, so E[(9/5)X + 32] = (9/5)(120) + 32 = 248 and Var[(9/5)X + 32] = (9/5)²·Var(X) = (9/5)²(2)² = 12.96, so σ = 3.6.

24. a. E(X) = ∫_θ^∞ x·(kθᵏ/x^{k+1}) dx = kθᵏ∫_θ^∞ x^{−k} dx = kθ/(k − 1)
    b. If k = 1, ∫_θ^∞ x^{−1} dx diverges, so E(X) is not finite.
    c. E(X²) = kθᵏ∫_θ^∞ x^{1−k} dx = kθ²/(k − 2), so Var(X) = kθ²/(k − 2) − [kθ/(k − 1)]² = kθ²/[(k − 2)(k − 1)²]
    d. Var(X) = ∞, since E(X²) = ∞ when k = 2.
    e. E(Xⁿ) = kθᵏ∫_θ^∞ x^{n−(k+1)} dx, which will be finite if n − (k + 1) < −1, i.e. if n < k.

25. a. P(Y ≤ 1.8μ̃ + 32) = P(1.8X + 32 ≤ 1.8μ̃ + 32) = P(X ≤ μ̃) = .5, so 1.8μ̃ + 32 is the median of Y.
    b. The 90th percentile for Y is 1.8η(.9) + 32, where η(.9) is the 90th percentile for X, since P(Y ≤ 1.8η(.9) + 32) = P(1.8X + 32 ≤ 1.8η(.9) + 32) = P(X ≤ η(.9)) = .9, as desired.
    c. The (100p)th percentile for Y is 1.8η(p) + 32, verified by substituting p for .9 in the argument of b. When Y = aX + b (i.e. a linear transformation of X), and the (100p)th percentile of the X distribution is η(p), then the corresponding (100p)th percentile of the Y distribution is aη(p) + b (the same linear transformation applied to X's percentile).

26. a. 1 = ∫₀^∞ k(1 + x/2.5)^{−7} dx = −(2.5k/6)(1 + x/2.5)^{−6} |₀^∞ = 2.5k/6, so k = 2.4
    b. [Graph of f(x)]
    c. E(X) = ∫₀^∞ x·2.4(1 + x/2.5)^{−7} dx = ∫₁^∞ 2.5(u − 1)·2.4u^{−7}·2.5 du = 0.5, or $500. Similarly, E(X²) = ∫₀^∞ x²·2.4(1 + x/2.5)^{−7} dx = ∫₁^∞ [2.5(u − 1)]²·2.4u^{−7}·2.5 du = 0.625, so V(X) = 0.625 − (0.5)² = 0.375 and σ_X = √0.375 = 0.612, or $612.
    d. The maximum out-of-pocket expense, $2500, occurs when $500 + 20%(X − $500) equals $2500; this accounts for the $500 deductible and the 20% of costs above $500 not paid by the insurance plan. Solve: $2,500 = $500 + 20%(X − $500) gives X = $10,500. At that point, the insurance plan has already paid $8,000, and the plan will pay all expenses thereafter. Recall that the units on X are thousands of dollars. If Y denotes the expenses paid by the company (also in $1000s), Y = 0 for X ≤ 0.5; Y = .8(X − 0.5) for 0.5 ≤ X ≤ 10.5; and Y = (X − 10.5) + 8 for X > 10.5. From this,
       E(Y) = 0 + ∫_{0.5}^{10.5} .8(x − 0.5)·2.4(1 + x/2.5)^{−7} dx + ∫_{10.5}^∞ (x − 10.5 + 8)·2.4(1 + x/2.5)^{−7} dx = 0 + 0.16024 + 0.00013, or $160.37.

27. Since X is uniform on [0, 360], E(X) = (0 + 360)/2 = 180 and σ_X = 360/√12 = 30√12. Using the linear representation of Y, E(Y) = (2π/360)E(X) − π = (2π/360)(180) − π = 0, and σ_Y = (2π/360)σ_X = (2π/360)(30√12) ≈ 1.814. (In fact, Y is uniform on [−π, π].)
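The piecewise expectation in Exercise 26d above is tedious by hand. The following sketch checks it numerically; the payment rule and pdf are those stated in that solution, while the finite upper integration limit of 200 (standing in for infinity) is a convenience assumption.

# Numerical check of E(Y) in Exercise 26d (X in thousands of dollars).
from scipy.integrate import quad

f = lambda x: 2.4 * (1 + x / 2.5)**(-7)               # pdf of total medical expenses
pay = lambda x: 0.0 if x <= 0.5 else (0.8 * (x - 0.5) if x <= 10.5 else (x - 10.5) + 8)
EY, _ = quad(lambda x: pay(x) * f(x), 0, 200)         # 200 stands in for infinity
print(round(EY, 5))                                   # about 0.160, i.e. roughly $160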
Section 4.3

28. a. P(0 ≤ Z ≤ 2.17) = Φ(2.17) − Φ(0) = .4850
    b. Φ(1) − Φ(0) = .3413
    c. Φ(0) − Φ(−2.50) = .4938
    d. Φ(2.50) − Φ(−2.50) = .9876
    e. Φ(1.37) = .9147
    f. P(−1.75 ≤ Z) = 1 − P(Z < −1.75) = 1 − Φ(−1.75) = .9599
    g. Φ(2) − Φ(−1.50) = .9104
    h. Φ(2.50) − Φ(1.37) = .0791
    i. 1 − Φ(1.50) = .0668
    j. P(|Z| ≤ 2.50) = P(−2.50 ≤ Z ≤ 2.50) = Φ(2.50) − Φ(−2.50) = .9876

29. a. .9838 is found in the 2.1 row and the .04 column of the standard normal table, so c = 2.14.
    b. P(0 ≤ Z ≤ c) = .291 gives Φ(c) = .7910, so c = .81
    c. P(c ≤ Z) = .121 gives Φ(c) = P(Z < c) = 1 − .121 = .8790, so c = 1.17
    d. P(−c ≤ Z ≤ c) = Φ(c) − Φ(−c) = Φ(c) − (1 − Φ(c)) = 2Φ(c) − 1 = .668, so Φ(c) = .8340 and c = .97
    e. P(c ≤ |Z|) = .016, so 1 − .016 = .9840 = P(|Z| < c) = P(−c < Z < c) = 2Φ(c) − 1, giving Φ(c) = .9920 and c = 2.41

30. a. Φ(c) = .9100 gives c ≈ 1.34 (.9099 is the entry in the 1.3 row, .04 column)
    b. 9th percentile = −(91st percentile) = −1.34
    c. Φ(c) = .7500 gives c ≈ .675, since .7486 and .7517 are the .67 and .68 entries, respectively
    d. 25th percentile = −(75th percentile) = −.675
    e. Φ(c) = .06 gives c ≈ −1.555 (.0594 and .0606 appear as the −1.56 and −1.55 entries, respectively)

31. a. The area under the Z curve above z.0055 is .0055, which implies that Φ(z.0055) = 1 − .0055 = .9945, so z.0055 = 2.54
    b. Φ(z.09) = .9100 gives z.09 = 1.34 (since .9099 appears as the 1.34 entry)
    c. Φ(z.663) = area below z.663 = .3370 gives z.663 ≈ −.42

32. a. P(X ≤ 100) = P(Z ≤ (100 − 80)/10) = P(Z ≤ 2) = Φ(2.00) = .9772
    b. P(X ≤ 80) = P(Z ≤ (80 − 80)/10) = P(Z ≤ 0) = Φ(0.00) = .5
    c. P(65 ≤ X ≤ 100) = P((65 − 80)/10 ≤ Z ≤ (100 − 80)/10) = P(−1.50 ≤ Z ≤ 2) = Φ(2.00) − Φ(−1.50) = .9772 − .0668 = .9104
    d. P(70 ≤ X) = P(−1.00 ≤ Z) = 1 − Φ(−1.00) = .8413
    e. P(85 ≤ X ≤ 95) = P(.50 ≤ Z ≤ 1.50) = Φ(1.50) − Φ(.50) = .2417
    f. P(|X − 80| ≤ 10) = P(−10 ≤ X − 80 ≤ 10) = P(70 ≤ X ≤ 90) = P(−1.00 ≤ Z ≤ 1.00) = .6826

33. a. P(X ≤ 18) = P(Z ≤ (18 − 15)/1.25) = P(Z ≤ 2.4) = Φ(2.4) = .9452
    b. P(10 ≤ X ≤ 12) = P(−4.00 ≤ Z ≤ −2.40) ≈ P(Z ≤ −2.40) = Φ(−2.40) = .0082
    c. P(|X − 15| ≤ 1.5(1.25)) = P(|Z| ≤ 1.5) = P(−1.5 ≤ Z ≤ 1.5) = Φ(1.5) − Φ(−1.5) = .8664

34. a. P(X > .25) = P(Z > −.83) = 1 − .2033 = .7967
    b. P(X ≤ .10) = Φ(−3.33) = .0004
    c. We want the value c of the distribution that is the 95th percentile (5% of the values are higher). The 95th percentile of the standard normal distribution is 1.645, so c = .30 + (1.645)(.06) = .3987. The largest 5% of all concentration values are above .3987 mg/cm³.

35. a. P(X ≥ 10) = P(Z ≥ .43) = 1 − Φ(.43) = 1 − .6664 = .3336. P(X > 10) = P(X ≥ 10) = .3336, since for any continuous distribution, P(X = a) = 0.
    b. P(X > 20) = P(Z > 4) ≈ 0
    c. P(5 ≤ X ≤ 10) = P(−1.36 ≤ Z ≤ .43) = Φ(.43) − Φ(−1.36) = .6664 − .0869 = .5795
    d. P(8.8 − c ≤ X ≤ 8.8 + c) = .98, so 8.8 − c and 8.8 + c are at the 1st and the 99th percentile of the given distribution, respectively. The 1st percentile of the standard normal distribution is −2.33, so 8.8 − c = μ + (−2.33)σ = 8.8 − 2.33(2.8), giving c = 2.33(2.8) = 6.524.
    e. From a, P(X > 10) = .3336. Define event A as {diameter > 10}; then P(at least one Aᵢ) = 1 − P(no Aᵢ) = 1 − [1 − P(A)]⁴ = 1 − (1 − .3336)⁴ = 1 − .1972 = .8028

36. a. P(X < 1500) = P(Z < 3) = Φ(3) = .9987; P(X ≥ 1000) = P(Z ≥ −.33) = 1 − Φ(−.33) = 1 − .3707 = .6293
    b. P(1000 < X < 1500) = P(−.33 < Z < 3) = Φ(3) − Φ(−.33) = .9987 − .3707 = .6280
    c. From the table, Φ(z) = .02 gives z ≈ −2.05, so x = 1050 − 2.05(150) = 742.5 μm. The smallest 2% of droplets are those smaller than 742.5 μm in size.
    d. P(at least one droplet in 5 exceeds 1500 μm) = 1 − P(all 5 are less than 1500 μm) = 1 − (.9987)⁵ = 1 − .9935 = .0065
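The answers in Exercises 28 through 36 all rest on Table A.3. A small sketch, assuming scipy is available, reproduces a few of the values above with norm.cdf and norm.ppf instead of a table lookup.

# Reproduce several standard normal values used in Exercises 28-32.
from scipy.stats import norm

print(norm.cdf(2.17) - norm.cdf(0))     # Exercise 28a: .4850
print(norm.cdf(1.37))                   # Exercise 28e: .9147
print(norm.ppf(0.9945))                 # Exercise 31a: z_.0055 = 2.54
print(norm.cdf(2) - norm.cdf(-1.5))     # Exercise 32c: .9104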
37. a. P(X = 105) = 0, since the normal distribution is continuous; P(X < 105) = P(Z < 0.2) = Φ(0.2) = .5793; P(X ≤ 105) = .5793 as well, since X is continuous.
    b. No, the answer does not depend on μ or σ. For any normal rv, P(|X − μ| > σ) = P(|Z| > 1) = P(Z < −1 or Z > 1) = 2Φ(−1) = 2(.1587) = .3174
    c. From the table, Φ(z) = .001 gives z ≈ −3.09, so x = 104 − 3.09(5) = 88.55 mmol/L. The smallest 0.1% of chloride concentration values are those less than 88.55 mmol/L.

38. Let X denote the diameter of a randomly selected cork made by the first machine, and let Y be defined analogously for the second machine.
    P(2.9 ≤ X ≤ 3.1) = P(−1.00 ≤ Z ≤ 1.00) = .6826
    P(2.9 ≤ Y ≤ 3.1) = P(−7.00 ≤ Z ≤ 3.00) = .9987
    So the second machine wins handily.

39. a. μ + σ·(91st percentile from the standard normal) = 30 + 5(1.34) = 36.7
    b. 30 + 5(−1.555) = 22.225
    c. μ = 3.000 μm, σ = 0.140. We desire the 90th percentile: 3.000 + 1.28(0.140) = 3.179

40. μ = 43, σ = 4.5
    a. P(X < 40) = P(Z < (40 − 43)/4.5) = P(Z < −0.667) = .2514; P(X > 60) = P(Z > (60 − 43)/4.5) = P(Z > 3.778) ≈ 0
    b. 43 + (−0.67)(4.5) = 39.985

41. P(damage) = P(X < 100) = P(Z < (100 − 200)/30) = P(Z < −3.33) = .0004
    P(at least one among five is damaged) = 1 − P(none damaged) = 1 − (.9996)⁵ = 1 − .998 = .002

42. From Table A.3, P(−1.96 ≤ Z ≤ 1.96) = .95. Then P(μ − .1 ≤ X ≤ μ + .1) = .95 implies that P(−.1/σ ≤ Z ≤ .1/σ) = .95, so .1/σ = 1.96 and thus σ = .1/1.96 = .0510.

43. Since 1.28 is the 90th z percentile (z.10 = 1.28) and −1.645 is the 5th z percentile (z.05 = 1.645), the given information implies that μ + σ(1.28) = 10.256 and μ + σ(−1.645) = 9.671, from which σ(−2.925) = −.585, σ = .2000, and μ = 10.

44. a. P(μ − 1.5σ ≤ X ≤ μ + 1.5σ) = P(−1.5 ≤ Z ≤ 1.5) = Φ(1.50) − Φ(−1.50) = .8664
    b. P(X < μ − 2.5σ or X > μ + 2.5σ) = 1 − P(μ − 2.5σ ≤ X ≤ μ + 2.5σ) = 1 − P(−2.5 ≤ Z ≤ 2.5) = 1 − .9876 = .0124
    c. P(μ − 2σ ≤ X ≤ μ − σ or μ + σ ≤ X ≤ μ + 2σ) = P(within 2 sd's) − P(within 1 sd) = P(μ − 2σ ≤ X ≤ μ + 2σ) − P(μ − σ ≤ X ≤ μ + σ) = .9544 − .6826 = .2718

45. With μ = .500 inches, the acceptable range for the diameter is between .496 and .504 inches, so unacceptable bearings will have diameters smaller than .496 or larger than .504. The new distribution has μ = .499 and σ = .002. P(X < .496 or X > .504) = P(Z < (.496 − .499)/.002) + P(Z > (.504 − .499)/.002) = P(Z < −1.5) + P(Z > 2.5) = Φ(−1.5) + [1 − Φ(2.5)] = .073. 7.3% of the bearings will be unacceptable.

46. a. P(67 ≤ X ≤ 75) = P(−1.00 ≤ Z ≤ 1.67) = .7938
    b. P(70 − c ≤ X ≤ 70 + c) = P(−c/3 ≤ Z ≤ c/3) = 2Φ(c/3) − 1 = .95, so Φ(c/3) = .9750, c/3 = 1.96, and c = 5.88
    c. 10·P(a single one is acceptable) = 10(.7938) = 7.938
    d. p = P(X < 73.84) = P(Z < 1.28) = .9, so P(Y ≤ 8) = B(8; 10, .9) = .264

47. The stated condition implies that 99% of the area under the normal curve with μ = 12 and σ = 3.5 is to the left of c − 1, so c − 1 is the 99th percentile of the distribution. Thus c − 1 = μ + σ(2.33) = 20.155, and c = 21.155.

48. a. By symmetry, P(−1.72 ≤ Z ≤ −.55) = P(.55 ≤ Z ≤ 1.72) = Φ(1.72) − Φ(.55)
    b. P(−1.72 ≤ Z ≤ .55) = Φ(.55) − Φ(−1.72) = Φ(.55) − [1 − Φ(1.72)]. No; this follows from the symmetry of the Z curve about 0.
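Exercise 43 above determines μ and σ from two percentile conditions. Those conditions are linear in μ and σ, so they can also be solved as a 2×2 system; the numpy sketch below only illustrates that observation, it is not the method used in the text.

# Solve mu + 1.28*sigma = 10.256 and mu - 1.645*sigma = 9.671 (Exercise 43).
import numpy as np

A = np.array([[1.0, 1.28], [1.0, -1.645]])
b = np.array([10.256, 9.671])
mu, sigma = np.linalg.solve(A, b)
print(round(mu, 3), round(sigma, 4))    # 10.0 and 0.2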
49. X ~ N(3432, 482)
    a. P(X > 4000) = P(Z > (4000 − 3432)/482) = P(Z > 1.18) = 1 − Φ(1.18) = 1 − .8810 = .1190;
       P(3000 < X < 4000) = P((3000 − 3432)/482 < Z < (4000 − 3432)/482) = P(−.90 < Z < 1.18) = Φ(1.18) − Φ(−.90) = .8810 − .1841 = .6969
    b. P(X < 2000 or X > 5000) = P(Z < (2000 − 3432)/482) + P(Z > (5000 − 3432)/482) = Φ(−2.97) + [1 − Φ(3.25)] = .0015 + .0006 = .0021
    c. We will use the conversion 1 lb = 454 g, so 7 lbs = 3178 grams, and we wish to find P(X > 3178) = P(Z > (3178 − 3432)/482) = 1 − Φ(−.53) = .7019
    d. We need the top .0005 and the bottom .0005 of the distribution. Using the Z table, both .9995 and .0005 have multiple z values, so we will use a middle value, ±3.295. Then 3432 ± (482)(3.295) = 1844 and 5020; the most extreme 0.1% of all birth weights are less than 1844 g and more than 5020 g.
    e. Converting to lbs yields mean 7.5595 and sd 1.0608. Then P(X > 7) = P(Z > (7 − 7.5595)/1.0608) = 1 − Φ(−.53) = .7019. This yields the same answer as in part c.

50. We use a normal approximation to the binomial distribution: X ~ Bin(1000, .03) ≈ N(30, 5.394).
    a. P(X ≥ 40) = 1 − P(X ≤ 39) ≈ 1 − Φ((39.5 − 30)/5.394) = 1 − Φ(1.76) = 1 − .9608 = .0392
    b. 5% of 1000 is 50: P(X ≤ 50) ≈ Φ((50.5 − 30)/5.394) = Φ(3.80) ≈ 1.00

51. P(|X − μ| ≥ σ) = P(X ≤ μ − σ or X ≥ μ + σ) = 1 − P(μ − σ ≤ X ≤ μ + σ) = 1 − P(−1 ≤ Z ≤ 1) = .3174. Similarly, P(|X − μ| ≥ 2σ) = 1 − P(−2 ≤ Z ≤ 2) = .0456 and P(|X − μ| ≥ 3σ) = .0026. These are considerably less than the bounds 1, .25, and .11 given by Chebyshev's inequality.

52. a. P(20 − .5 ≤ X ≤ 30 + .5) = P(19.5 ≤ X ≤ 30.5) ≈ P(−1.1 ≤ Z ≤ 1.1) = .7286
    b. P(at most 30) = P(X ≤ 30 + .5) ≈ P(Z ≤ 1.1) = .8643. P(less than 30) = P(X < 30 − .5) ≈ P(Z < .9) = .8159

53. p = .5 gives μ = 12.5 and σ² = 6.25; p = .6 gives μ = 15 and σ² = 6; p = .8 gives μ = 20 and σ² = 4.
    a. Exact P(15 ≤ X ≤ 20) versus the approximation P(14.5 ≤ Normal ≤ 20.5):
       p = .5: .212 versus P(.80 ≤ Z ≤ 3.20) = .2112
       p = .6: .577 versus P(−.20 ≤ Z ≤ 2.24) = .5668
       p = .8: .573 versus P(−2.75 ≤ Z ≤ .25) = .5957
    b. Exact P(X ≤ 15) versus P(Normal ≤ 15.5):
       p = .5: .885 versus P(Z ≤ 1.20) = .8849
       p = .6: .575 versus P(Z ≤ .20) = .5793
       p = .8: .017 versus P(Z ≤ −2.25) = .0122
    c. Exact P(X ≥ 20) versus P(Normal ≥ 19.5):
       p = .5: .002 versus P(Z ≥ 2.80) = .0026
       p = .6: .029 versus P(Z ≥ 1.84) = .0329
       p = .8: .617 versus P(Z ≥ −0.25) = .5987

54. p = .10, n = 200; np = 20, npq = 18.
    a. P(X ≤ 30) ≈ Φ((30 + .5 − 20)/√18) = Φ(2.47) = .9932
    b. P(X < 30) = P(X ≤ 29) ≈ Φ((29 + .5 − 20)/√18) = Φ(2.24) = .9875
    c. P(15 ≤ X ≤ 25) = P(X ≤ 25) − P(X ≤ 14) ≈ Φ((25.5 − 20)/√18) − Φ((14.5 − 20)/√18) = Φ(1.30) − Φ(−1.30) = .9032 − .0968 = .8064

55. n = 500, p = .75; μ = 375, σ = 9.68246.
    a. P(360 ≤ X ≤ 400) ≈ P(359.5 ≤ Normal ≤ 400.5) = P(−1.60 ≤ Z ≤ 2.58) = .9409
    b. P(X < 400) = P(X ≤ 399.5) ≈ P(Z ≤ 2.53) = .9943

56. P(X ≤ μ + σ·[(100p)th percentile for the standard normal]) = P((X − μ)/σ ≤ [(100p)th percentile]) = P(Z ≤ [(100p)th percentile]) = p, as desired.

57. a. F_Y(y) = P(Y ≤ y) = P(aX + b ≤ y) = P(X ≤ (y − b)/a) (for a > 0). Now differentiate with respect to y to obtain
       f_Y(y) = F_Y′(y) = [1/(√(2π)·aσ)]·exp(−[y − (aμ + b)]²/(2a²σ²)),
       so Y is normal with mean aμ + b and variance a²σ².
    b. Normal, mean (9/5)(115) + 32 = 239, variance = 12.96

58. Using the approximation P(Z ≥ z) ≈ .5 exp{−[(83z + 351)z + 562]/(703/z + 165)}:
    a. P(Z ≥ 1) ≈ .5 exp(−996/868) = .1587
    b. P(Z > 3) ≈ .5 exp(−2362/399.3333) = .0013
    c. P(Z > 4) ≈ .5 exp(−3294/340.75) = .0000317, so P(−4 < Z < 4) ≈ 1 − 2(.0000317) = .999937
    d. P(Z > 5) ≈ .5 exp(−4392/305.6) = .00000029
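Exercises 50 through 55 use the continuity-corrected normal approximation to the binomial. A brief sketch comparing the exact and approximate values for Exercise 54a, assuming scipy is available:

# Exact binomial vs. continuity-corrected normal approximation (n = 200, p = .10).
from scipy.stats import binom, norm
import math

n, p = 200, 0.10
mu, sd = n * p, math.sqrt(n * p * (1 - p))
print(binom.cdf(30, n, p))                  # exact P(X <= 30)
print(norm.cdf((30 + 0.5 - mu) / sd))       # approximation, about .9932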
Section 4.4

59. a. E(X) = 1/λ = 1
    b. σ = 1/λ = 1
    c. P(X ≤ 4) = 1 − e^{−(1)(4)} = 1 − e^{−4} = .982
    d. P(2 ≤ X ≤ 5) = (1 − e^{−(1)(5)}) − (1 − e^{−(1)(2)}) = e^{−2} − e^{−5} = .129

60. a. P(X ≤ 100) = 1 − e^{−(100)(.01386)} = 1 − e^{−1.386} = .7499;
       P(X ≤ 200) = 1 − e^{−(200)(.01386)} = 1 − e^{−2.772} = .9375;
       P(100 ≤ X ≤ 200) = P(X ≤ 200) − P(X ≤ 100) = .9375 − .7499 = .1876
    b. μ = 1/.01386 = 72.15, σ = 72.15. P(X > μ + 2σ) = P(X > 72.15 + 2(72.15)) = P(X > 216.45) = e^{−(216.45)(.01386)} = e^{−3.0} = .0498
    c. .5 = P(X ≤ μ̃) = 1 − e^{−μ̃(.01386)}, so e^{−μ̃(.01386)} = .5, μ̃(.01386) = −ln(.5) = .693, and μ̃ = 50

61. Mean = 1/λ = 25,000 implies λ = .00004.
    a. P(X > 20,000) = 1 − P(X ≤ 20,000) = 1 − F(20,000; .00004) = e^{−(.00004)(20,000)} = e^{−.8} = .449;
       P(X ≤ 30,000) = F(30,000; .00004) = 1 − e^{−1.2} = .699;
       P(20,000 ≤ X ≤ 30,000) = .699 − .551 = .148
    b. σ = 1/λ = 25,000, so P(X > μ + 2σ) = P(X > 75,000) = 1 − F(75,000; .00004) = .05. Similarly, P(X > μ + 3σ) = P(X > 100,000) = .018

62. a. Clearly E(X) = 0 by symmetry, so V(X) = E(X²) = ∫ x²·(λ/2)e^{−λ|x|} dx = λ∫₀^∞ x²e^{−λx} dx = λ·Γ(3)/λ³ = 2/λ². Solving 2/λ² = (40.9)² yields λ = √2/40.9 = 0.034577
    b. P(|X − 0| ≤ 40.9) = ∫_{−40.9}^{40.9} (λ/2)e^{−λ|x|} dx = ∫₀^{40.9} λe^{−λx} dx = 1 − e^{−40.9λ} = .75688

63. a. If a customer's calls are typically short, the first calling plan makes more sense. If a customer's calls are somewhat longer, then the second plan makes more sense, viz. 99¢ is less than 20 min × (10¢/min) = $2 for the first 20 minutes under the first (flat-rate) plan.
    b. h₁(X) = 10X, while h₂(X) = 99 for X ≤ 20 and 99 + 10(X − 20) for X > 20. With μ = 1/λ for the exponential distribution, it's obvious that E[h₁(X)] = 10E[X] = 10μ. On the other hand,
       E[h₂(X)] = 99 + 10∫₂₀^∞ (x − 20)·λe^{−λx} dx = 99 + (10/λ)e^{−20λ} = 99 + 10μe^{−20/μ}.
       When μ = 10, E[h₁(X)] = 100¢ = $1.00 while E[h₂(X)] = 99 + 100e^{−2} ≈ $1.13. When μ = 15, E[h₁(X)] = 150¢ = $1.50 while E[h₂(X)] = 99 + 150e^{−4/3} ≈ $1.39. As predicted, the first plan is better when expected call length is lower, and the second plan is better when expected call length is somewhat higher.

64. a. Γ(6) = 5! = 120
    b. Γ(5/2) = (3/2)(1/2)Γ(1/2) = (3/4)√π ≈ 1.329
    c. F(4; 5) = .371 from row 4, column 5 of Table A.4
    d. F(5; 4) = .735
    e. F(0; 4) = P(X ≤ 0; α = 4) = 0

65. a. P(X ≤ 5) = F(5; 7) = .238
    b. P(X < 5) = P(X ≤ 5) = .238
    c. P(X > 8) = 1 − P(X ≤ 8) = 1 − F(8; 7) = .313
    d. P(3 ≤ X ≤ 8) = F(8; 7) − F(3; 7) = .653
    e. P(3 < X < 8) = .653
    f. P(X < 4 or X > 6) = 1 − P(4 ≤ X ≤ 6) = 1 − [F(6; 7) − F(4; 7)] = .713

66. a. μ = 20 and σ² = 80 give αβ = 20 and αβ² = 80, so β = 80/20 = 4 and α = 5
    b. P(X ≤ 24) = F(24/4; 5) = F(6; 5) = .715
    c. P(20 ≤ X ≤ 40) = F(10; 5) − F(5; 5) = .411

67. μ = 24 and σ² = 144 give αβ = 24 and αβ² = 144, so β = 6 and α = 4.
    a. P(12 ≤ X ≤ 24) = F(4; 4) − F(2; 4) = .424
    b. P(X ≤ 24) = F(4; 4) = .567, so while the mean is 24, the median is less than 24, since P(X ≤ μ̃) = .5. This is a result of the positive skew of the gamma distribution.
    c. We want a value x for which F(x/6; 4) = .99. In Table A.4, F(10; 4) = .990. So with β = 6, the 99th percentile is 6(10) = 60.
    d. We want a value t for which F(t/6; 4) = .995. In the table, F(11; 4) = .995, so t = 6(11) = 66. At 66 weeks, only .5% of all transistors would still be operating.

68. a. E(X) = nμ = n·(1/λ); for λ = .5 and n = 10, E(X) = 20
    b. P(X ≤ 30) = F(30/2; 10) = F(15; 10) = .930
    c. P(X ≤ t) = P(at least n events in time t) = P(Y ≥ n) when Y is Poisson with parameter λt. Thus P(X ≤ t) = 1 − P(Y < n) = 1 − P(Y ≤ n − 1) = 1 − Σ_{k=0}^{n−1} e^{−λt}(λt)ᵏ/k!

69. a. {X ≥ t} = A₁ ∩ A₂ ∩ A₃ ∩ A₄ ∩ A₅
    b. P(X ≥ t) = P(A₁)·P(A₂)·P(A₃)·P(A₄)·P(A₅) = (e^{−.01t})⁵ = e^{−.05t}, so F_X(t) = P(X ≤ t) = 1 − e^{−.05t} and f_X(t) = .05e^{−.05t} for t ≥ 0. Thus X also has an exponential distribution, but with parameter λ = .05.
    c. By the same reasoning, P(X ≤ t) = 1 − e^{−nλt}, so X has an exponential distribution with parameter nλ.

70. With x_p = (100p)th percentile, p = F(x_p) = 1 − e^{−λx_p}, so e^{−λx_p} = 1 − p and x_p = −ln(1 − p)/λ. For p = .5, x.5 = μ̃ = .693/λ.

71. a. {X² ≤ y} = {−√y ≤ X ≤ √y}
    b. F_Y(y) = P(X² ≤ y) = ∫_{−√y}^{√y} (1/√(2π))e^{−z²/2} dz, so
       f_Y(y) = F_Y′(y) = (1/√(2π))e^{−y/2}·(½)y^{−1/2} + (1/√(2π))e^{−y/2}·(½)y^{−1/2} = (1/√(2π))·y^{−1/2}e^{−y/2}.
       We recognize this as the chi-squared pdf with ν = 1.

Section 4.5

72. a. E(X) = 3Γ(1 + ½) = 3Γ(3/2) = 2.66; Var(X) = 9[Γ(2) − Γ²(3/2)] = 1.926
    b. P(X ≤ 6) = 1 − e^{−(6/3)²} = 1 − e^{−4} = .982
    c. P(1.5 ≤ X ≤ 6) = (1 − e^{−(6/3)²}) − (1 − e^{−(1.5/3)²}) = e^{−.25} − e^{−4} = .760
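The incomplete gamma values F(x; α) above come from Table A.4. A sketch of the same calculations with scipy.stats.gamma, where the text's F(x; α) with scale parameter β corresponds to gamma(a=α, scale=β):

# Gamma probabilities from Exercises 66 and 67 without Table A.4.
from scipy.stats import gamma

print(gamma.cdf(24, a=5, scale=4))                               # Exercise 66b: about .715
print(gamma.cdf(40, a=5, scale=4) - gamma.cdf(20, a=5, scale=4)) # Exercise 66c: about .411
print(gamma.ppf(0.99, a=4, scale=6))                             # Exercise 67c: about 60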
73. a. P(X ≤ 250) = F(250; 2.5, 200) = 1 − e^{−(250/200)^2.5} = 1 − e^{−1.75} = .8257;
       P(X < 250) = P(X ≤ 250) = .8257;
       P(X > 300) = 1 − F(300; 2.5, 200) = e^{−(300/200)^2.5} = e^{−(1.5)^2.5} = .0636
    b. P(100 ≤ X ≤ 250) = F(250; 2.5, 200) − F(100; 2.5, 200) = .8257 − .162 = .6637
    c. The median μ̃ is requested. The equation F(μ̃) = .5 reduces to .5 = e^{−(μ̃/200)^2.5}, i.e., (μ̃/200)^2.5 = −ln(.5) = .693, so μ̃ = (.693)^{.4}(200) = 172.727.

74. a. For x > 3.5, F(x) = P(X ≤ x) = P(X − 3.5 ≤ x − 3.5) = 1 − e^{−[(x−3.5)/1.5]²}
    b. E(X − 3.5) = 1.5Γ(3/2) = 1.329, so E(X) = 4.829; Var(X) = Var(X − 3.5) = (1.5)²[Γ(2) − Γ²(3/2)] = .483
    c. P(X > 5) = 1 − P(X ≤ 5) = e^{−(1.5/1.5)²} = e^{−1} = .368
    d. P(5 ≤ X ≤ 8) = (1 − e^{−(4.5/1.5)²}) − (1 − e^{−(1.5/1.5)²}) = e^{−1} − e^{−9} = .3679 − .0001 = .3678

75. Using the substitution y = (x/β)^α and dy = (α/β^α)x^{α−1} dx,
    E(X) = ∫₀^∞ x·(α/β^α)x^{α−1}e^{−(x/β)^α} dx = ∫₀^∞ βy^{1/α}e^{−y} dy = βΓ(1 + 1/α), by definition of the gamma function.

76. a. .5 = F(μ̃) = 1 − e^{−(μ̃/3)²} gives e^{−μ̃²/9} = .5, μ̃² = −9 ln(.5) = 6.2383, and μ̃ = 2.50
    b. .5 = 1 − e^{−[(μ̃−3.5)/1.5]²} gives (μ̃ − 3.5)² = −2.25 ln(.5) = 1.5596 and μ̃ = 4.75
    c. p = F(x_p) = 1 − e^{−(x_p/β)^α} gives (x_p/β)^α = −ln(1 − p), so x_p = β[−ln(1 − p)]^{1/α}
    d. The desired value of t is the 90th percentile (since 90% will not be refused and 10% will be). From c, the 90th percentile of the distribution of X − 3.5 is 1.5[−ln(.1)]^{1/2} = 2.2761, so t = 3.5 + 2.2761 = 5.7761.

77. a. E(X) = e^{4.5+(.8)²/2} = e^{4.82} = 123.97; V(X) = e^{2(4.5)+(.8)²}(e^{(.8)²} − 1) = (15,367.34)(.8964) = 13,776.53, σ_X = 117.373
    b. P(X ≤ 100) = P(Z ≤ (ln(100) − 4.5)/.8) = Φ(0.13) = .5517
    c. P(X ≥ 200) = 1 − P(X < 200) = 1 − P(Z < (ln(200) − 4.5)/.8) = 1 − Φ(1.00) = 1 − .8413 = .1587; P(X > 200) = .1587 as well, since X is continuous.

78. a. P(X ≤ 0.5) = F(0.5) = 1 − exp[−(0.5/β)^α] = .3099
    b. Using a computer, Γ(1 + 1/1.817) = Γ(1.55) = 0.889 and Γ(1 + 2/1.817) = Γ(2.10) = 1.047. From these we find μ = (.863)(0.889) = 0.785 and σ² = (.863)²{1.047 − [0.889]²} = .1911, or σ = 0.437. Hence, P(X > μ + σ) = P(X > 1.222) = 1 − F(1.222) = exp[−(1.222/β)^α] = .1524
    c. F(x) = ½ gives exp[−(x/β)^α] = ½, so (x/β)^α = ln 2 and x = β(ln 2)^{1/α} = .863(ln 2)^{1/1.817} = .7054
    d. Using the same math as part c, η(p) = β(−ln(1 − p))^{1/α} = .863(−ln(1 − p))^{1/1.817}

79. a. E(X) = e^{3.5+(1.2)²/2} = 68.0335; V(X) = e^{2(3.5)+(1.2)²}(e^{(1.2)²} − 1) = 14,907.168, σ_X = 122.0949
    b. P(50 ≤ X ≤ 250) = P((ln(50) − 3.5)/1.2 ≤ Z ≤ (ln(250) − 3.5)/1.2) = P(.34 ≤ Z ≤ 1.68) = Φ(1.68) − Φ(.34) = .9535 − .6331 = .3204
    c. P(X ≤ 68.0335) = P(Z ≤ (ln(68.0335) − 3.5)/1.2) = P(Z ≤ .60) = .7257. The lognormal distribution is not a symmetric distribution.

80. a. .5 = F(μ̃) = Φ((ln(μ̃) − μ)/σ) (where μ̃ refers to the lognormal distribution and μ and σ to the underlying normal distribution). Since the median of the standard normal distribution is 0, (ln(μ̃) − μ)/σ = 0, so ln(μ̃) = μ and μ̃ = e^μ. For the power distribution, μ̃ = e^{3.5} = 33.12.
    b. 1 − α = Φ(z_α) = P(Z ≤ z_α) = P((ln(X) − μ)/σ ≤ z_α) = P(ln(X) ≤ μ + σz_α) = P(X ≤ e^{μ+σz_α}), so the 100(1 − α)th percentile is e^{μ+σz_α}. For the power distribution, the 95th percentile is e^{3.5+(1.645)(1.2)} = e^{5.474} = 238.41.

81. a. E(X) = e^{5+(.01)/2} = e^{5.005} = 149.157; Var(X) = e^{10+(.01)}(e^{.01} − 1) = 223.594
    b. P(X > 125) = 1 − P(X ≤ 125) = 1 − P(Z ≤ (ln(125) − 5)/.1) = 1 − Φ(−1.72) = .9573
    c. P(110 ≤ X ≤ 125) = Φ(−1.72) − Φ((ln(110) − 5)/.1) = Φ(−1.72) − Φ(−3.00) = .0427 − .0013 = .0414
    d. μ̃ = e⁵ = 148.41
    e. P(any particular one has X > 125) = .9573, so the expected number is 10(.9573) = 9.573
    f. We wish the 5th percentile: e^{5+(−1.645)(.1)} = e^{4.8355} = 125.90
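A sketch of the Weibull and lognormal calculations above using scipy.stats: the text's Weibull(α, β) maps to weibull_min(c=α, scale=β), and a lognormal with underlying normal parameters μ and σ maps to lognorm(s=σ, scale=exp(μ)).

# Reproduce values from Exercises 73 and 79.
import numpy as np
from scipy.stats import weibull_min, lognorm

print(weibull_min.cdf(250, c=2.5, scale=200))    # Exercise 73a: .8257
print(weibull_min.ppf(0.5, c=2.5, scale=200))    # Exercise 73c, the median: about 172.7
X = lognorm(s=1.2, scale=np.exp(3.5))            # Exercise 79
print(X.mean(), X.cdf(250) - X.cdf(50))          # about 68.03 and .32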
82. a. E(X) = e^{1.9+(.9)²/2} = 10.024; Var(X) = e^{3.8+.81}(e^{.81} − 1) = 125.395, σ_X = 11.20
    b. P(X ≤ 10) = P(ln(X) ≤ 2.3026) = P(Z ≤ .45) = .6736;
       P(5 ≤ X ≤ 10) = P(1.6094 ≤ ln(X) ≤ 2.3026) = P(−.32 ≤ Z ≤ .45) = .6736 − .3745 = .2991

83. The point of symmetry must be ½, so we require that f(½ − μ) = f(½ + μ), i.e., (½ − μ)^{α−1}(½ + μ)^{β−1} = (½ + μ)^{α−1}(½ − μ)^{β−1}, which in turn implies that α = β.

84. a. E(X) = 5/(5 + 2) = 5/7 = .714; V(X) = (5)(2)/[(7)²(8)] = .0255
    b. f(x) = [Γ(7)/(Γ(5)Γ(2))]·x⁴(1 − x) = 30(x⁴ − x⁵) for 0 ≤ x ≤ 1, so P(X ≤ .2) = ∫₀^{.2} 30(x⁴ − x⁵) dx = .0016
    c. P(.2 ≤ X ≤ .4) = ∫_{.2}^{.4} 30(x⁴ − x⁵) dx = .03936
    d. E(1 − X) = 1 − E(X) = 1 − 5/7 = 2/7 = .286

85. a. E(X) = ∫₀¹ x·[Γ(α + β)/(Γ(α)Γ(β))]·x^{α−1}(1 − x)^{β−1} dx = [Γ(α + β)/(Γ(α)Γ(β))]∫₀¹ x^α(1 − x)^{β−1} dx = [Γ(α + β)/(Γ(α)Γ(β))]·Γ(α + 1)Γ(β)/Γ(α + β + 1) = α/(α + β)
    b. E[(1 − X)^m] = [Γ(α + β)/(Γ(α)Γ(β))]∫₀¹ (1 − x)^m·x^{α−1}(1 − x)^{β−1} dx = Γ(α + β)Γ(m + β)/[Γ(β)Γ(α + β + m)]. For m = 1, E(1 − X) = β/(α + β).

86. a. E(Y) = 10 gives E(Y/20) = ½ = α/(α + β), and Var(Y) = 100/7 gives Var(Y/20) = (100/7)/400 = 1/28 = αβ/[(α + β)²(α + β + 1)]; these yield α = β = 3, after some algebra.
    b. P(8 ≤ Y ≤ 12) = F(12/20; 3, 3) − F(8/20; 3, 3) = F(.6; 3, 3) − F(.4; 3, 3). The standardized density function here is 30y²(1 − y)², so P(8 ≤ Y ≤ 12) = ∫_{.4}^{.6} 30y²(1 − y)² dy = .365.
    c. We expect it to snap at 10, so P(Y < 8 or Y > 12) = 1 − P(8 ≤ Y ≤ 12) = 1 − .365 = .665.

Section 4.6

87. The given probability plot is quite linear, and thus it is quite plausible that the tension distribution is normal.

88. The z percentiles and observations are below, along with the probability plot. The plot is quite straight except for the point corresponding to the largest observation. This observation is clearly much larger than what would be expected in a normal random sample. Because of this outlier, it would be inadvisable to analyze the data using any inferential method that depended on assuming a normal population distribution.
    Percentile:   −1.645  −1.040  −0.670  −0.390  −0.130   0.130   0.390   0.670   1.040   1.645
    Observation:   152.7   172.0   172.5   173.3   193.0   204.7   216.5   234.9   262.6   422.6
    [Normal probability plot of lifetime versus z percentile]

89. The z percentile values are as follows: −1.86, −1.32, −1.01, −0.78, −0.58, −0.40, −0.24, −0.08, 0.08, 0.24, 0.40, 0.58, 0.78, 1.01, 1.30, and 1.86. The accompanying probability plot is reasonably straight, and thus it would be reasonable to use estimating methods that assume a normal population distribution.
    [Normal probability plot of thickness versus z percentile]

90. The Weibull plot uses ln(observations) and the extreme value percentiles of the pᵢ values given; i.e., η(p) = ln[−ln(1 − p)]. The accompanying probability plot appears sufficiently straight to lead us to agree with the argument that the distribution of fracture toughness in concrete specimens could well be modeled by a Weibull distribution.
    [Weibull probability plot: ln(observation) versus extreme value percentile]

91. The (z percentile, observation) pairs are (−1.66, .736), (−1.32, .863), (−1.01, .865), (−.78, .913), (−.58, .915), (−.40, .937), (−.24, .983), (−.08, 1.007), (.08, 1.011), (.24, 1.064), (.40, 1.109), (.58, 1.132), (.78, 1.140), (1.01, 1.153), (1.32, 1.253), (1.86, 1.394). The accompanying probability plot is straight, suggesting that an assumption of population normality is plausible.
    [Normal probability plot of the observations versus z percentiles]
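The probability plots in this section pair ordered observations with standard normal percentiles. A minimal sketch for the Exercise 88 data, assuming scipy and matplotlib are available (scipy's default plotting positions differ slightly from the (i − .5)/n convention used in the text):

# Normal probability plot for the Exercise 88 lifetimes.
import matplotlib.pyplot as plt
from scipy import stats

lifetime = [152.7, 172.0, 172.5, 173.3, 193.0, 204.7, 216.5, 234.9, 262.6, 422.6]
stats.probplot(lifetime, dist="norm", plot=plt)
plt.xlabel("z percentile")
plt.ylabel("lifetime")
plt.show()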
92. a. The 10 largest z percentiles are 1.96, 1.44, 1.15, .93, .76, .60, .45, .32, .19 and .06; the remaining 10 are the negatives of these values. The accompanying normal probability plot is reasonably straight. An assumption of population distribution normality is plausible.
       [Normal probability plot of load life versus z percentile]
    b. For a Weibull probability plot, the natural logs of the observations are plotted against extreme value percentiles; these percentiles are −3.68, −2.55, −2.01, −1.65, −1.37, −1.13, −.93, −.76, −.59, −.44, −.30, −.16, −.02, .12, .26, .40, .56, .73, .95, and 1.31. The accompanying probability plot is roughly as straight as the one for checking normality (a plot of ln(x) versus the z percentiles, appropriate for checking the plausibility of a lognormal distribution, is also reasonably straight), so any of 3 different families of population distributions seems plausible.
       [Weibull probability plot of ln(load life) versus extreme value percentile]

93. To check for plausibility of a lognormal population distribution for the rainfall data of Exercise 81 in Chapter 1, take the natural logs and construct a normal probability plot. This plot and a normal probability plot for the original data appear below. Clearly the log transformation gives quite a straight plot, so lognormality is plausible. The curvature in the plot for the original data implies a positively skewed population distribution, like the lognormal distribution.
    [Normal probability plots of ln(rainfall) and of rainfall versus z percentiles]

94. a. The plot of the original (untransformed) data appears somewhat curved.   [Normal probability plot of precip]
    b. The square root transformation results in a very straight plot. It is reasonable that this distribution is normally distributed.   [Normal probability plot of sqrt(precip)]
    c. The cube root transformation also results in a very straight plot. It is very reasonable that the distribution is normally distributed.   [Normal probability plot of the cube root of precip]

95. The pattern in the plot (below, generated by Minitab) is reasonably linear. By visual inspection alone, it is plausible that strength is normally distributed.
    [Minitab normal probability plot of strength; Average: 134.902, StDev: 4.54186, N: 153; Anderson-Darling normality test: A² = 1.065, p-value = 0.008]

96. We use the data (table below) to create the desired half-normal plot. This plot reveals some extreme values, without which the distribution may appear to be normal.
    Ordered absolute values (wᵢ): 0.89, 1.15, 1.27, 1.44, 2.34, 3.78, 3.96, 12.38, 30.84, 43.4
    Probabilities:                .525, .575, .625, .675, .725, .775, .825, .875, .925, .975
    z values:                     0.063, 0.19, 0.32, 0.454, 0.6, 0.755, 0.935, 1.15, 1.44, 1.96
    [Half-normal plot of z values versus wᵢ]

97. The (100p)th percentile η(p) for the exponential distribution with λ = 1 satisfies F(η(p)) = 1 − exp[−η(p)] = p, i.e., η(p) = −ln(1 − p). With n = 16, we need η(p) for p = .5/16, 1.5/16, ..., 15.5/16. These are .032, .098, .170, .247, .330, .421, .521, .633, .758, .901, 1.068, 1.269, 1.520, 1.856, 2.367, 3.466. The plot of the observations against these percentiles exhibits substantial curvature, casting doubt on the assumption of an exponential population distribution. Because λ is a scale parameter (as σ is for the normal family), λ = 1 can be used to assess the plausibility of the entire exponential family.
    [Plot of failure time versus exponential percentile]

Supplementary Exercises

98. a. P(10 ≤ X ≤ 20) = 10/25 = .4
    b. P(X ≥ 10) = P(10 ≤ X ≤ 25) = 15/25 = .6
    c. For 0 ≤ x ≤ 25, F(x) = ∫₀ˣ (1/25) dy = x/25. F(x) = 0 for x < 0 and = 1 for x > 25.
    d. E(X) = (A + B)/2 = (0 + 25)/2 = 12.5; Var(X) = (B − A)²/12 = 625/12 = 52.083, so σ_X = 7.22
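The Weibull plot coordinates described in Exercise 92b can be computed directly: plot ln(x₍ᵢ₎) against the extreme value percentiles ln[−ln(1 − pᵢ)] with pᵢ = (i − .5)/n. The sketch below uses the Exercise 88 lifetimes purely as illustrative data.

# Coordinates for a Weibull probability plot (Exercise 92b recipe).
import numpy as np

x = np.sort(np.array([152.7, 172.0, 172.5, 173.3, 193.0, 204.7, 216.5, 234.9, 262.6, 422.6]))
n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n
ev_percentile = np.log(-np.log(1 - p))     # horizontal coordinates
log_obs = np.log(x)                        # vertical coordinates
print(np.column_stack([ev_percentile.round(2), log_obs.round(3)]))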
99. a. For 0 ≤ y ≤ 12, F(y) = ∫₀^y (1/24)u(1 − u/12) du = (1/24)(y²/2 − y³/36) = y²/48 − y³/864. Thus
       F(y) = 0 for y < 0; = y²/48 − y³/864 for 0 ≤ y ≤ 12; = 1 for y > 12
    b. P(Y ≤ 4) = F(4) = .259; P(Y > 6) = 1 − F(6) = .5; P(4 ≤ Y ≤ 6) = F(6) − F(4) = .5 − .259 = .241
    c. E(Y) = ∫₀^12 y·(1/24)y(1 − y/12) dy = (1/24)∫₀^12 (y² − y³/12) dy = (1/24)[y³/3 − y⁴/48] |₀^12 = 6;
       E(Y²) = (1/24)∫₀^12 (y³ − y⁴/12) dy = 43.2, so V(Y) = 43.2 − 36 = 7.2
    d. P(Y < 4 or Y > 8) = 1 − P(4 ≤ Y ≤ 8) = 1 − [F(8) − F(4)] = .518
    e. The shorter segment has length min(Y, 12 − Y), so
       E[min(Y, 12 − Y)] = ∫₀^12 min(y, 12 − y) f(y) dy = ∫₀⁶ y f(y) dy + ∫₆^12 (12 − y) f(y) dy = 90/24 = 3.75

100. a. Clearly f(x) ≥ 0. The cdf, for x > 0, is F(x) = ∫₀ˣ 32/(y + 4)³ dy = −16/(y + 4)² |₀ˣ = 1 − 16/(x + 4)² (F(x) = 0 for x ≤ 0). Since F(∞) = ∫ f(y) dy = 1, f(x) is a legitimate pdf.
     b. See a above: F(x) = 1 − 16/(x + 4)² for x > 0, and F(x) = 0 for x ≤ 0.
     c. P(2 ≤ X ≤ 5) = F(5) − F(2) = 16/36 − 16/81 = .247
     d. E(X) = ∫₀^∞ x·32/(x + 4)³ dx = 32∫₀^∞ [(x + 4) − 4]/(x + 4)³ dx = 32∫₀^∞ [(x + 4)⁻² − 4(x + 4)⁻³] dx = 32[¼ − ⅛] = 4
     e. E(salvage value) = E[100/(X + 4)] = ∫₀^∞ [100/(x + 4)]·32/(x + 4)³ dx = 3200∫₀^∞ (x + 4)⁻⁴ dx = 3200/[(3)(64)] = 16.67

101. a. By differentiation, f(x) = x² for 0 ≤ x < 1; = 7/4 − (3/4)x for 1 ≤ x ≤ 7/3; = 0 otherwise   [Graph of f(x)]
     b. P(.5 ≤ X ≤ 2) = F(2) − F(.5) = 23/24 − 1/24 = 11/12 = .917
     c. E(X) = ∫₀¹ x·x² dx + ∫₁^{7/3} x·(7/4 − (3/4)x) dx = ¼ + [7x²/8 − x³/4] |₁^{7/3} = 131/108 ≈ 1.213

102. μ = 40 V, σ = 1.5 V
     a. P(39 < X < 42) = P((39 − 40)/1.5 < Z < (42 − 40)/1.5) = P(−.67 < Z < 1.33) = Φ(1.33) − Φ(−.67) = .9082 − .2514 = .6568
     b. We desire the 85th percentile: 40 + (1.04)(1.5) = 41.56
     c. P(X > 42) = 1 − P(X ≤ 42) = 1 − Φ((42 − 40)/1.5) = 1 − Φ(1.33) = .0918.
        Let D represent the number of diodes out of 4 with voltage exceeding 42; then P(D ≥ 1) = 1 − P(D = 0) = 1 − (1 − .0918)⁴ = 1 − .6803 = .3197

103. μ = 137.2 oz, σ = 1.6 oz
     a. P(X > 135) = 1 − Φ((135 − 137.2)/1.6) = 1 − Φ(−1.38) = 1 − .0838 = .9162
     b. With Y = the number among ten that contain more than 135 oz, Y ~ Bin(10, .9162), so P(Y ≥ 8) = b(8; 10, .9162) + b(9; 10, .9162) + b(10; 10, .9162) = .9549
     c. With μ = 137.2, requiring P(X > 135) = .95 means (135 − 137.2)/σ = −1.65, so σ = 2.2/1.65 = 1.33

104. a. Let S = defective. Then p = P(S) = .05 and n = 250, so μ = np = 12.5 and σ = 3.446. The random variable X = the number of defectives in the batch of 250 is binomial. Since np = 12.5 ≥ 10 and nq = 237.5 ≥ 10, we can use the normal approximation.
        P(X ≥ 25) ≈ 1 − Φ((24.5 − 12.5)/3.446) = 1 − Φ(3.48) = 1 − .9997 = .0003
     b. P(X = 10) ≈ P(X_norm ≤ 10.5) − P(X_norm ≤ 9.5) = Φ(−.58) − Φ(−.87) = .2810 − .1922 = .0888

105. a. P(X > 100) = 1 − Φ((100 − 96)/14) = 1 − Φ(.29) = 1 − .6141 = .3859
     b. P(50 < X < 80) = Φ((80 − 96)/14) − Φ((50 − 96)/14) = Φ(−1.14) − Φ(−3.29) = .1271 − .0005 = .1266
     c. a = 5th percentile = 96 + (−1.645)(14) = 72.97; b = 95th percentile = 96 + (1.645)(14) = 119.03. The interval (72.97, 119.03) contains the central 90% of all grain sizes.

106. a. F(x) = 0 for x < 1 and = 1 for x > 3. For 1 ≤ x ≤ 3, F(x) = ∫₁ˣ 1.5y⁻² dy = 1.5(1 − 1/x)
     b. P(X ≤ 2.5) = F(2.5) = 1.5(1 − .4) = .9; P(1.5 ≤ X ≤ 2.5) = F(2.5) − F(1.5) = .4
     c. E(X) = ∫₁³ x·1.5x⁻² dx = ∫₁³ (1.5/x) dx = 1.5 ln(x) |₁³ = 1.648
     d. E(X²) = ∫₁³ x²·1.5x⁻² dx = ∫₁³ 1.5 dx = 3, so V(X) = E(X²) − [E(X)]² = 3 − (1.648)² = .284 and σ = .553
     e. h(x) = 0 for 1 ≤ x ≤ 1.5; = x − 1.5 for 1.5 ≤ x ≤ 2.5; = 1 for 2.5 ≤ x ≤ 3, so
        E[h(X)] = ∫_{1.5}^{2.5} (x − 1.5)·1.5x⁻² dx + ∫_{2.5}^{3} (1)·1.5x⁻² dx = .267
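Exercise 103 above chains a normal probability into a binomial tail probability. A short sketch of that two-stage calculation, assuming scipy is available:

# Exercise 103: a normal probability feeds a binomial tail.
from scipy.stats import norm, binom

p = 1 - norm.cdf(135, loc=137.2, scale=1.6)   # P(X > 135), about .9162
print(p)
print(1 - binom.cdf(7, 10, p))                # P(Y >= 8), about .9549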
107. a. [Graph of f(x) = (1/9)(4 − x²) on −1 ≤ x ≤ 2]
     b. F(x) = 0 for x < −1 and = 1 for x > 2. For −1 ≤ x ≤ 2, F(x) = ∫₋₁ˣ (1/9)(4 − y²) dy = (1/9)(4x − x³/3) + 11/27
     c. The median is 0 iff F(0) = .5. Since F(0) = 11/27, this is not the case. Because 11/27 < .5, the median must be greater than 0.
     d. Y is a binomial rv with n = 10 and p = P(X > 1) = 1 − F(1) = 5/27

108. a. E(X) = 1/λ = 1/.93 = 1.075, σ = 1/λ = 1.075
     b. P(3.0 < X) = 1 − P(X ≤ 3.0) = 1 − F(3.0) = e^{−.93(3.0)} = .0614; P(1.0 ≤ X ≤ 3.0) = F(3.0) − F(1.0) = .333
     c. The 90th percentile is requested; denoting it by c, we have .9 = F(c) = 1 − e^{−(.93)c}, whence c = −ln(.1)/(.93) = 2.476

109. a. P(X ≤ 150) = exp[−exp(−(150 − 150)/90)] = exp[−exp(0)] = exp(−1) = .368, where exp(u) = eᵘ. P(X ≤ 300) = exp[−exp(−(300 − 150)/90)] = exp[−exp(−1.6667)] = .828, and P(150 ≤ X ≤ 300) = .828 − .368 = .460.
     b. The desired value c is the 90th percentile, so c satisfies .9 = exp[−exp(−(c − 150)/90)]. Taking the natural log of each side twice in succession yields ln[−ln(.9)] = −(c − 150)/90, so c = 90(2.250367) + 150 = 352.53.
     c. f(x) = F′(x) = (1/β)·exp(−(x − α)/β)·exp[−exp(−(x − α)/β)]
     d. We wish the value of x for which f(x) is a maximum; this is the same as the value of x for which ln[f(x)] is a maximum. The equation d/dx[ln f(x)] = 0 gives −1/β + (1/β)exp(−(x − α)/β) = 0, which implies that x = α. Thus the mode is α.
     e. E(X) = .5772β + α = 201.95, whereas the mode is 150 and the median is −(90)ln[−ln(.5)] + 150 = 182.99. The distribution is positively skewed.

110. a. E(cX) = cE(X) = c/λ
     b. E[c(1 − .5e^{−aX})] = ∫₀^∞ c(1 − .5e^{−ax})·λe^{−λx} dx = c[1 − .5λ/(a + λ)]

111. a. From a graph of f(x; μ, σ) or by differentiation, x* = μ.
     b. No; the density function has constant height for A ≤ x ≤ B.
     c. f(x; λ) is largest for x = 0 (the derivative at 0 does not exist since f is not continuous there), so x* = 0.
     d. ln[f(x; α, β)] = −ln(Γ(α)) − α ln(β) + (α − 1)ln(x) − x/β; d/dx ln[f(x; α, β)] = (α − 1)/x − 1/β = 0 gives x* = (α − 1)β
     e. From d, x* = (ν/2 − 1)·2 = ν − 2

112. a. ∫ f(x) dx = ∫_{−∞}^{0} .1e^{.2x} dx + ∫₀^∞ .1e^{−.2x} dx = .5 + .5 = 1   [Graph of f(x)]
     b. For x ≤ 0, F(x) = ∫_{−∞}^{x} .1e^{.2y} dy = .5e^{.2x}. For x > 0, F(x) = .5 + ∫₀ˣ .1e^{−.2y} dy = 1 − .5e^{−.2x}.
     c. P(X < 0) = F(0) = .5; P(X < 2) = F(2) = 1 − .5e^{−.4} = .665; P(−1 ≤ X ≤ 2) = F(2) − F(−1) = .256; 1 − P(−2 ≤ X ≤ 2) = .670

113. a. Clearly f(x; λ₁, λ₂, p) ≥ 0 for all x, and ∫₀^∞ f(x; λ₁, λ₂, p) dx = p∫₀^∞ λ₁e^{−λ₁x} dx + (1 − p)∫₀^∞ λ₂e^{−λ₂x} dx = p + (1 − p) = 1
     b. For x > 0, F(x; λ₁, λ₂, p) = ∫₀ˣ f(y; λ₁, λ₂, p) dy = p(1 − e^{−λ₁x}) + (1 − p)(1 − e^{−λ₂x})
     c. E(X) = ∫₀^∞ x[pλ₁e^{−λ₁x} + (1 − p)λ₂e^{−λ₂x}] dx = p∫₀^∞ xλ₁e^{−λ₁x} dx + (1 − p)∫₀^∞ xλ₂e^{−λ₂x} dx = p/λ₁ + (1 − p)/λ₂
     d. E(X²) = 2p/λ₁² + 2(1 − p)/λ₂², so Var(X) = 2p/λ₁² + 2(1 − p)/λ₂² − [p/λ₁ + (1 − p)/λ₂]²
     e. For an exponential rv, CV = σ/μ = (1/λ)/(1/λ) = 1. For X hyperexponential,
        CV = √(E(X²) − μ²)/μ = [E(X²)/μ² − 1]^{1/2} = [2r − 1]^{1/2}, where r = [pλ₂² + (1 − p)λ₁²]/[pλ₂ + (1 − p)λ₁]².
        But straightforward algebra shows that r > 1 provided λ₁ ≠ λ₂, so that CV > 1.
     f. For the Erlang distribution, μ = n/λ and σ² = n/λ², so CV = (√n/λ)/(n/λ) = 1/√n < 1 if n > 1.

114. a. 1 = ∫₅^∞ k·x^{−α} dx = k·5^{1−α}/(α − 1), so k = (α − 1)5^{α−1}, where we must have α > 1.
     b. For x ≥ 5, F(x) = ∫₅ˣ k·y^{−α} dy = [k/(α − 1)](5^{1−α} − x^{1−α}) = 1 − (5/x)^{α−1}
     c. E(X) = ∫₅^∞ x·k·x^{−α} dx = k∫₅^∞ x^{1−α} dx = 5(α − 1)/(α − 2), provided α > 2.
     d. P(ln(X/5) ≤ y) = P(X/5 ≤ e^y) = P(X ≤ 5e^y) = F(5e^y) = 1 − (5/(5e^y))^{α−1} = 1 − e^{−(α−1)y}, the cdf of an exponential rv with parameter α − 1.

115. a. A lognormal distribution, since ln(I₀/Iᵢ) is a normal rv.
     b. P(I₀/Iᵢ > 2) = P(ln(I₀/Iᵢ) > ln 2) = 1 − Φ((ln 2 − 1)/.05) = 1 − Φ(−6.14) ≈ 1
     c. E(I₀/Iᵢ) = e^{1+.0025/2} = 2.72; Var(I₀/Iᵢ) = e^{2+.0025}(e^{.0025} − 1) = .0185
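The coefficient-of-variation claim in Exercise 113e can be checked numerically for any particular parameter choice; the values of p, λ₁, λ₂ below are arbitrary illustrative assumptions, and the Monte Carlo portion is only a sanity check of the moment formulas.

# Check CV > 1 for a hyperexponential distribution (Exercise 113e).
import numpy as np

p, lam1, lam2 = 0.3, 1.0, 5.0
mean = p / lam1 + (1 - p) / lam2
second = 2 * p / lam1**2 + 2 * (1 - p) / lam2**2
print(np.sqrt(second - mean**2) / mean)       # exceeds 1, as claimed

rng = np.random.default_rng(0)                # simulate the mixture directly
lam = np.where(rng.random(100_000) < p, lam1, lam2)
x = rng.exponential(1 / lam)
print(x.std() / x.mean())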
116. a. The Weibull pdf is positive for all x ≥ 0, but only a section of it is graphed below.   [Graph of f(x; 9, 180) for roughly 150 ≤ x ≤ 190]
     b. P(X > 175) = 1 − F(175; 9, 180) = e^{−(175/180)⁹} = .4602;
        P(150 ≤ X ≤ 175) = F(175; 9, 180) − F(150; 9, 180) = .5398 − .1762 = .3636
     c. P(at least one) = 1 − P(none) = 1 − (1 − .3636)² = .5950
     d. We want the 10th percentile: .10 = F(x; 9, 180) = 1 − e^{−(x/180)⁹}. A small bit of algebra leads us to x = 140.178. Thus 10% of all tensile strengths will be less than 140.178 MPa.

117. F(y) = P(Y ≤ y) = P(σZ + μ ≤ y) = P(Z ≤ (y − μ)/σ) = ∫_{−∞}^{(y−μ)/σ} (1/√(2π))e^{−z²/2} dz. Now differentiate with respect to y to obtain a normal pdf with parameters μ and σ.

118. a. F_Y(y) = P(Y ≤ y) = P(60X ≤ y) = P(X ≤ y/60) = F(y/60; α, β). Thus f_Y(y) = f(y/60; α, β)·(1/60) = [1/((60β)^α·Γ(α))]·y^{α−1}e^{−y/(60β)}, which shows that Y has a gamma distribution with parameters α and 60β.
     b. With c replacing 60 in a, the same argument shows that cX has a gamma distribution with parameters α and cβ.

119. a. Y = −ln(X) gives x = e^{−y} = k(y), so k′(y) = −e^{−y}. Thus, since f(x) = 1, g(y) = 1·|−e^{−y}| = e^{−y} for 0 < y < ∞, so Y has an exponential distribution with parameter λ = 1.
     b. y = σz + μ gives z = k(y) = (y − μ)/σ and k′(y) = 1/σ, from which the result follows easily.
     c. y = h(x) = cx gives x = k(y) = y/c and k′(y) = 1/c, from which the result follows easily.

120. a. If we let α = 2 and β = √2·σ, then we can manipulate f(v) as follows:
        f(v) = (v/σ²)e^{−v²/2σ²} = (2/(√2σ)²)·v^{2−1}·e^{−(v/√2σ)²} = (α/β^α)v^{α−1}e^{−(v/β)^α},
        which is in the Weibull family of distributions.
     b. The cdf is F(v; 2, √2σ) = 1 − e^{−v²/2σ²}, so (with 2σ² = 800) P(V ≤ 25) = 1 − e^{−625/800} = 1 − .458 = .542

121. a. Assuming independence, P(all 3 births occur on March 11) = (1/365)³ = .00000002
     b. P(all 3 births occur on the same day) = (365)(1/365)³ = .0000073
     c. Let X = deviation from due date, with X ~ N(0, 19.88). The baby due on March 15 was 4 days early, and P(X = −4) ≈ P(−4.5 < X < −3.5) = Φ(−3.5/19.88) − Φ(−4.5/19.88) = Φ(−.18) − Φ(−.23) = .4286 − .4090 = .0196.
        Similarly, the baby due on April 1 was 21 days early, and P(X = −21) ≈ Φ(−1.03) − Φ(−1.08) = .1515 − .1401 = .0114.
        The baby due on April 4 was 24 days early, and P(X = −24) ≈ .0097.
        Again assuming independence, P(all 3 births occurred on March 11) = (.0196)(.0114)(.0097) ≈ .0000022
     d. To calculate the probability of the three births happening on any day, we could make similar calculations as in part c for each possible day, and then add the probabilities.

122. a. F(x) = 1 − e^{−λx} and f(x) = λe^{−λx}, so r(x) = f(x)/[1 − F(x)] = λe^{−λx}/e^{−λx} = λ, a constant; this is consistent with the memoryless property of the exponential distribution.
     b. For the Weibull distribution, r(x) = f(x)/[1 − F(x)] = (α/β^α)x^{α−1}; for α > 1 this is increasing, while for α < 1 it is a decreasing function.
     c. ln(1 − F(x)) = −∫ α(1 − x/β) dx = −α(x − x²/2β), so F(x) = 1 − e^{−α(x − x²/2β)} and f(x) = F′(x) = α(1 − x/β)e^{−α(x − x²/2β)} for 0 ≤ x ≤ β

123. a. F_X(x) = P(X ≤ x) = P(−(1/λ)ln(1 − U) ≤ x) = P(ln(1 − U) ≥ −λx) = P(1 − U ≥ e^{−λx}) = P(U ≤ 1 − e^{−λx}) = 1 − e^{−λx}, since F_U(u) = u (U is uniform on [0, 1]). Thus X has an exponential distribution with parameter λ.
     b. By taking successive random numbers u₁, u₂, u₃, ... and computing xᵢ = −(1/10)ln(1 − uᵢ), we obtain a sequence of values generated from an exponential distribution with parameter λ = 10.

124. a. E(g(X)) ≈ E[g(μ) + g′(μ)(X − μ)] = E(g(μ)) + g′(μ)·E(X − μ), but E(X) − μ = 0 and E(g(μ)) = g(μ) (since g(μ) is constant), giving E(g(X)) ≈ g(μ).
        V(g(X)) ≈ V[g(μ) + g′(μ)(X − μ)] = V[g′(μ)(X − μ)] = (g′(μ))²·V(X).
     b. g(I) = v/I and g′(I) = −v/I², so μ_R = E(g(I)) ≈ v/μ_I = v/20 and σ_R ≈ |g′(μ_I)|·σ_I = (v/20²)·σ_I = v/800

125. g(μ) + g′(μ)(X − μ) ≤ g(X) implies that E[g(μ) + g′(μ)(X − μ)] = E(g(μ)) = g(μ) ≤ E(g(X)), i.e. that g(E(X)) ≤ E(g(X)).
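The inverse-cdf method of Exercise 123b is easy to carry out with a random number generator; a minimal sketch with numpy, using λ = 10 as in the text:

# Generate exponential values via x_i = -(1/lambda) ln(1 - u_i)  (Exercise 123b).
import numpy as np

rng = np.random.default_rng(1)
lam = 10.0
u = rng.random(100_000)
x = -np.log(1 - u) / lam
print(x.mean(), 1 / lam)    # the sample mean should be close to 1/lambda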
126. For y > 0, F(y) = P(Y ≤ y) = P(2X²/β² ≤ y) = P(X² ≤ β²y/2) = P(X ≤ β√(y/2)). Now take the cdf of X (Weibull), replace x by β√(y/2), and then differentiate with respect to y to obtain the desired result f_Y(y).

127. a. E(X) = 150 + (850 − 150)·8/(8 + 2) = 710; V(X) = (850 − 150)²·(8)(2)/[(8 + 2)²(8 + 2 + 1)], so σ_X ≈ 84.423.
        P(|X − 710| ≤ 84.423) = P(625.577 ≤ X ≤ 794.423) = ∫_{625.577}^{794.423} [Γ(10)/(Γ(8)Γ(2))]·(1/700)·((x − 150)/700)⁷·((850 − x)/700) dx = .684. The computation of this integral requires a calculator or computer.
     b. P(X > 750) = ∫_{750}^{850} [Γ(10)/(Γ(8)Γ(2))]·(1/700)·((x − 150)/700)⁷·((850 − x)/700) dx = .376. Again, the computation of the requested integral requires a calculator or computer.

128. a. For w < 0, F(w) = 0. F(0) = P(V ≤ v_d) = 1 − exp(−λv_d), since V is exponential. For w > 0, F(w) = P(W ≤ w) = P(k(V − v_d) ≤ w) = P(V ≤ v_d + w/k) = 1 − exp(−λ[v_d + w/k]). This can be written more compactly as
        F(w) = 1 − exp(−λ[v_d + w/k]) for w ≥ 0, and F(w) = 0 otherwise
     b. For w > 0, f(w) = F′(w) = (λ/k)exp(−λ[v_d + w/k]). Since the only other possible value of W is zero, which would not contribute to the expected value, we can compute E(W) as the appropriate integral:
        E(W) = ∫₀^∞ w·(λ/k)exp(−λ[v_d + w/k]) dw = exp(−λv_d)∫₀^∞ w·(λ/k)exp(−λw/k) dw = (k/λ)e^{−λv_d}
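The formula E(W) = (k/λ)e^{−λv_d} from Exercise 128b can be sanity-checked by simulation. The parameter values below are illustrative assumptions, not values from the exercise.

# Simulate W = k(V - v_d) when V exceeds v_d, else 0, and compare with the formula.
import numpy as np

lam, vd, k = 2.0, 0.5, 3.0
rng = np.random.default_rng(2)
v = rng.exponential(1 / lam, size=200_000)
w = np.where(v > vd, k * (v - vd), 0.0)
print(w.mean(), (k / lam) * np.exp(-lam * vd))   # simulated vs. theoretical expected value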