Student Solutions Manual to Accompany
LOSS MODELS
WILEY SERIES IN PROBABILITY AND STATISTICS
Established by WALTER A. SHEWHART and SAMUEL S. WILKS
Editors: David J. Balding, Noel A. C. Cressie, Garrett M. Fitzmaurice,
Harvey Goldstein, Iain M. Johnstone, Geert Molenberghs, David W. Scott,
Adrian F. M. Smith, Ruey S. Tsay, Sanford Weisberg
Editors Emeriti: Vic Barnett, J. Stuart Hunter, Joseph B. Kadane, Jozef L. Teugels
A complete list of the titles in this series appears at the end of this volume.
Student Solutions Manual to Accompany
LOSS MODELS
From Data to Decisions
Fourth Edition
Stuart A. Klugman
Society of Actuaries
Harry H. Panjer
University of Waterloo
Gordon E.Willmot
University of Waterloo
SOCIETY OF ACTUARIES
©WILEY
A JOHN WILEY & SONS, INC., PUBLICATION
Copyright © 2012 by John Wiley & Sons, Inc. All rights reserved
Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to
the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax
(978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should
be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ
07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in
preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be
suitable for your situation. You should consult with a professional where appropriate. Neither the
publisher nor author shall be liable for any loss of profit or any other commercial damages, including
but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our
Customer Care Department within the United States at (800) 762-2974, outside the United States at
(317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may
not be available in electronic formats. For more information about Wiley products, visit our web site at
www.wiley.com.
Library of Congress Cataloging-in-Publication Data is available.
ISBN 978-1-118-31531-6
10 9 8 7 6 5 4 3 2 1
CONTENTS

1 Introduction 1
2 Chapter 2 solutions 3
2.1 Section 2.2 3
3 Chapter 3 solutions 9
3.1 Section 3.1 9
3.2 Section 3.2 14
3.3 Section 3.3 15
3.4 Section 3.4 16
3.5 Section 3.5 20
4 Chapter 4 solutions 23
4.1 Section 4.2 23
5 Chapter 5 solutions 29
5.1 Section 5.2 29
5.2 Section 5.3 40
5.3 Section 5.4 41
6 Chapter 6 solutions 43
6.1 Section 6.1 43
6.2 Section 6.5 43
6.3 Section 6.6 44
7 Chapter 7 solutions 47
7.1 Section 7.1 47
7.2 Section 7.2 48
7.3 Section 7.3 50
7.4 Section 7.5 52
8 Chapter 8 solutions 55
8.1 Section 8.2 55
8.2 Section 8.3 57
8.3 Section 8.4 59
8.4 Section 8.5 60
8.5 Section 8.6 64
9 Chapter 9 solutions 67
9.1 Section 9.1 67
9.2 Section 9.2 68
9.3 Section 9.3 68
9.4 Section 9.4 78
9.5 Section 9.6 79
9.6 Section 9.7 85
9.7 Section 9.8 87
10 Chapter 10 solutions 93
10.1 Section 10.2 93
10.2 Section 10.3 96
10.3 Section 10.4 97
11 Chapter 11 solutions 99
11.1 Section 11.2 99
11.2 Section 11.3 100
12 Chapter 12 solutions 105
12.1 Section 12.1 105
12.2 Section 12.2 111
12.3 Section 12.3 114
12.4 Section 12.4 116
13 Chapter 13 solutions 119
13.1 Section 13.1 119
13.2 Section 13.2 124
13.3 Section 13.3 137
13.4 Section 13.4 145
13.5 Section 13.5 146
14 Chapter 14 solutions 147
14.1 Section 14.7 147
15 Chapter 15 solutions 151
15.1 Section 15.2 151
15.2 Section 15.3 161
16 Chapter 16 solutions 165
16.1 Section 16.3 165
16.2 Section 16.4 166
16.3 Section 16.5 175
17 Chapter 17 solutions 183
17.1 Section 17.7 183
18 Chapter 18 solutions 187
18.1 Section 18.9 187
19 Chapter 19 solutions 219
19.1 Section 19.5 219
20 Chapter 20 solutions 227
20.1 Section 20.1 227
20.2 Section 20.2 228
20.3 Section 20.3 229
20.4 Section 20.4 230
CHAPTER 1
INTRODUCTION
The solutions presented in this manual reflect the authors' best attempt to
provide insights and answers. While we have done our best to be complete and
accurate, errors may occur and there may be more elegant solutions. Errata
will be posted at the ftp site dedicated to the text and solutions manual:
ftp://ftp.wiley.com/public/sci_tech_med/loss_models/
Should you find errors or would like to provide improved solutions, please
send your comments to Stuart Klugman at sklugman@soa.org.
Student Solutions Manual to Accompany Loss Models: From Data to Decisions, Fourth
Edition. By Stuart A. Klugman, Harry H. Panjer, Gordon E. Willmot
Copyright © 2012 John Wiley & Sons, Inc.
CHAPTER 2
CHAPTER 2 SOLUTIONS
2.1
SECTION 2.2
2.1 From Model 5,

S_5(x) = 1 − F_5(x) = { 1 − 0.01x, 0 ≤ x < 50,
                        1.5 − 0.02x, 50 ≤ x < 75,

and so

h_5(x) = f_5(x)/S_5(x) = { 0.01/(1 − 0.01x) = 1/(100 − x), 0 < x < 50,
                           0.02/(1.5 − 0.02x) = 1/(75 − x), 50 < x < 75.
2.2 The requested plots follow. The triangular spike at zero in the density
function for Model 4 indicates the 0.7 of discrete probability at zero.
[Plots omitted from this extraction. The printed manual shows:
Distribution function for Model 3.
Distribution function for Model 4.
Distribution function for Model 5.
Probability function for Model 3.
Density function for Model 4.
Density function for Model 5.
Hazard rate for Model 4.
Hazard rate for Model 5.]
2.3 f′(x) = 4(1 + x²)⁻³ − 24x²(1 + x²)⁻⁴. Setting the derivative equal to zero
and multiplying by (1 + x²)⁴ give the equation 4(1 + x²) − 24x² = 0. This is
equivalent to x² = 1/5. The only positive solution is the mode of 1/√5.
2.4 The survival function can be recovered as

0.5 = S(0.4) = exp[ −∫_0^0.4 (λ + e^{2x}) dx ] = exp( −0.4λ − 0.5e^{0.8} + 0.5 ).

Taking logarithms gives

−0.693147 = −0.4λ − 1.112770 + 0.5,

and thus λ = 0.2009.
2.5 The ratio is

[10,000/(10,000 + d)]² ÷ [20,000/(20,000 + d²)]²
= [(20,000 + d²)/(20,000 + 2d)]²
= (20,000² + 40,000d² + d⁴)/(20,000² + 80,000d + 4d²).

From observation or two applications of L'Hôpital's rule, we see that the limit
is infinity.
CHAPTER 3
CHAPTER 3 SOLUTIONS
3.1
SECTION 3.1
3.1

μ₃ = ∫_{−∞}^{∞} (x − μ)³ f(x) dx = ∫_{−∞}^{∞} (x³ − 3x²μ + 3xμ² − μ³) f(x) dx
   = μ′₃ − 3μ′₂μ + 2μ³,

μ₄ = ∫_{−∞}^{∞} (x − μ)⁴ f(x) dx = ∫_{−∞}^{∞} (x⁴ − 4x³μ + 6x²μ² − 4xμ³ + μ⁴) f(x) dx
   = μ′₄ − 4μ′₃μ + 6μ′₂μ² − 3μ⁴.
3.2 For Model 1, σ² = 3,333.33 − 50² = 833.33, σ = 28.8675.
μ′₃ = ∫_0^100 x³(0.01) dx = 250,000, μ₃ = 0, γ₁ = 0.
μ′₄ = ∫_0^100 x⁴(0.01) dx = 20,000,000, μ₄ = 1,250,000, γ₂ = 1.8.

For Model 2, σ² = 4,000,000 − 1,000² = 3,000,000, σ = 1,732.05. μ′₃ and
μ′₄ are both infinite, so the skewness and kurtosis are not defined.

For Model 3, σ² = 2.25 − 0.93² = 1.3851, σ = 1.1769.
μ′₃ = 0(0.5) + 1(0.25) + 8(0.12) + 27(0.08) + 64(0.05) = 6.57, μ₃ = 1.9012,
γ₁ = 1.1663. μ′₄ = 0(0.5) + 1(0.25) + 16(0.12) + 81(0.08) + 256(0.05) = 21.45,
μ₄ = 6.4416, γ₂ = 3.3576.

For Model 4, σ² = 6,000,000,000 − 30,000² = 5,100,000,000, σ = 71,414.
μ′₃ = 0³(0.7) + ∫_0^∞ x³(0.000003)e^{−0.00001x} dx = 1.8 × 10¹⁵,
μ₃ = 1.314 × 10¹⁵, γ₁ = 3.6078.
μ′₄ = ∫_0^∞ x⁴(0.000003)e^{−0.00001x} dx = 7.2 × 10²⁰, μ₄ = 5.3397 × 10²⁰,
γ₂ = 20.5294.

For Model 5, σ² = 2,395.83 − 43.75² = 481.77, σ = 21.95.
μ′₃ = ∫_0^50 x³(0.01) dx + ∫_50^75 x³(0.02) dx = 142,578.125, μ₃ = −4,394.53,
γ₁ = −0.4156.
μ′₄ = ∫_0^50 x⁴(0.01) dx + ∫_50^75 x⁴(0.02) dx = 8,867,187.5, μ₄ = 439,758.30,
γ₂ = 1.8947.
3.3 The standard deviation is the mean times the coefficient of variation, or 4,
and so the variance is 16. From (3.3) the second raw moment is 16 + 2² = 20.
The third central moment is (using Exercise 3.1) 136 − 3(20)(2) + 2(2)³ = 32.
The skewness is the third central moment divided by the cube of the standard
deviation, or 32/4³ = 1/2.
3.4 For a gamma distribution the mean is αθ. The second raw moment
is α(α + 1)θ², and so the variance is αθ². The coefficient of variation is
√(αθ²)/(αθ) = α^{−1/2} = 1. Therefore α = 1. The third raw moment is
α(α + 1)(α + 2)θ³ = 6θ³. From Exercise 3.1, the third central moment is
6θ³ − 3(2θ²)θ + 2θ³ = 2θ³ and the skewness is 2θ³/(θ²)^{3/2} = 2.
3.5 For Model 1,

e(d) = ∫_d^100 (1 − 0.01x) dx / (1 − 0.01d) = (100 − d)/2, 0 ≤ d < 100.

For Model 2,

e(d) = ∫_d^∞ [2,000/(x + 2,000)]³ dx / [2,000/(d + 2,000)]³ = (2,000 + d)/2.

For Model 3,

e(d) = [0.25(1 − d) + 0.12(2 − d) + 0.08(3 − d) + 0.05(4 − d)]/0.5 = 1.86 − d, 0 ≤ d < 1,
     = [0.12(2 − d) + 0.08(3 − d) + 0.05(4 − d)]/0.25 = 2.72 − d, 1 ≤ d < 2,
     = [0.08(3 − d) + 0.05(4 − d)]/0.13 = 3.3846 − d, 2 ≤ d < 3,
     = 0.05(4 − d)/0.05 = 4 − d, 3 ≤ d < 4.

For Model 4,

e(d) = ∫_d^∞ 0.3e^{−0.00001x} dx / (0.3e^{−0.00001d}) = 100,000.
The functions are straight lines for Models 1, 2, and 4. Model 1 has negative
slope, Model 2 has positive slope, and Model 4 is horizontal.
3.6 For a uniform distribution on the interval from 0 to w, the density function
is f(x) = 1/w. The mean residual life is

e(d) = ∫_d^w (x − d)(1/w) dx / ∫_d^w (1/w) dx
     = [(w − d)²/(2w)] / [(w − d)/w]
     = (w − d)/2.

The equation becomes

(w − 30)/2 = (100 − 30)/2 + 4,

with a solution of w = 108.
3.7 From the definition,

e(λ) = ∫_λ^∞ (x − λ)λ⁻¹e^{−x/λ} dx / ∫_λ^∞ λ⁻¹e^{−x/λ} dx = λ.
3.8

E(X) = ∫_0^∞ x f(x) dx = ∫_0^d x f(x) dx + ∫_d^∞ x f(x) dx
     = ∫_0^d x f(x) dx + ∫_d^∞ d f(x) dx + ∫_d^∞ (x − d) f(x) dx
     = ∫_0^d x f(x) dx + d[1 − F(d)] + e(d)S(d)
     = E[X ∧ d] + e(d)S(d).
3.9 For Model 1, from (3.8),

E[X ∧ u] = ∫_0^u x(0.01) dx + u(1 − 0.01u) = u(1 − 0.005u),

and from (3.10),

E[X ∧ u] = 50 − [(100 − u)/2](1 − 0.01u) = u(1 − 0.005u).

From (3.9),

E[X ∧ u] = ∫_0^u (1 − 0.01x) dx = u − 0.005u² = u(1 − 0.005u).

For Model 2, from (3.8),

E[X ∧ u] = ∫_0^u x · 3(2,000)³/(x + 2,000)⁴ dx + u[2,000/(2,000 + u)]³
         = 1,000[1 − 4,000,000/(2,000 + u)²],

and from (3.10),

E[X ∧ u] = 1,000 − [(2,000 + u)/2][2,000/(2,000 + u)]³
         = 1,000[1 − 4,000,000/(2,000 + u)²].

From (3.9),

E[X ∧ u] = ∫_0^u [2,000/(2,000 + x)]³ dx = 1,000[1 − 4,000,000/(2,000 + u)²].

For Model 3, from (3.8),

E[X ∧ u] = 0(0.5) + u(0.5) = 0.5u, 0 ≤ u < 1,
         = 0(0.5) + 1(0.25) + u(0.25) = 0.25 + 0.25u, 1 ≤ u < 2,
         = 0(0.5) + 1(0.25) + 2(0.12) + u(0.13) = 0.49 + 0.13u, 2 ≤ u < 3,
         = 0(0.5) + 1(0.25) + 2(0.12) + 3(0.08) + u(0.05) = 0.73 + 0.05u, 3 ≤ u < 4,

and from (3.10),

E[X ∧ u] = 0.93 − (1.86 − u)(0.5) = 0.5u, 0 ≤ u < 1,
         = 0.93 − (2.72 − u)(0.25) = 0.25 + 0.25u, 1 ≤ u < 2,
         = 0.93 − (3.3846 − u)(0.13) = 0.49 + 0.13u, 2 ≤ u < 3,
         = 0.93 − (4 − u)(0.05) = 0.73 + 0.05u, 3 ≤ u < 4.

For Model 4, from (3.8),

E[X ∧ u] = ∫_0^u x(0.000003)e^{−0.00001x} dx + u(0.3)e^{−0.00001u}
         = 30,000[1 − e^{−0.00001u}],

and from (3.10),

E[X ∧ u] = 30,000 − 100,000(0.3e^{−0.00001u}) = 30,000[1 − e^{−0.00001u}].
3.10 For a discrete distribution (which all empirical distributions are), the
mean residual life function is

e(d) = Σ_{x_j > d} (x_j − d) p(x_j) / Σ_{x_j > d} p(x_j).

When d is equal to a possible value of X, the function cannot be continuous
because there is a jump in the denominator but not in the numerator. For an
exponential distribution, argue as in Exercise 3.7 to see that it is constant.
For the Pareto distribution,

e(d) = [E(X) − E(X ∧ d)] / S(d)
     = { θ/(α − 1) − [θ/(α − 1)][1 − (θ/(θ + d))^{α−1}] } / [θ/(θ + d)]^α
     = [θ/(α − 1)] · (θ + d)/θ
     = (θ + d)/(α − 1),

which is increasing in d. Only the second statement is true.
3.11 Applying the formula from the solution to Exercise 3.10 gives

e(10,000) = (10,000 + 10,000)/(0.5 − 1) = −40,000,

which cannot be correct. Recall that the numerator of the mean residual life
is E(X) − E(X ∧ d). However, when α < 1, the expected value is infinite and
so is the mean residual life.
3.12 The right truncated variable is defined as Y = X given that X < u.
When X > u, this variable is not defined. The kth moment is

E(Y^k) = ∫_0^u x^k f(x) dx / F(u).
3.13 This is a single-parameter Pareto distribution with parameters α = 2.5
and θ = 1. The moments are μ = 2.5/1.5 = 5/3 and σ² = 2.5/0.5 − (5/3)² =
20/9. The coefficient of variation is √(20/9)/(5/3) = 0.89443.
3.14 μ = 0.05(100) + 0.2(200) + 0.5(300) + 0.2(400) + 0.05(500) = 300.
σ² = 0.05(−200)² + 0.2(−100)² + 0.5(0)² + 0.2(100)² + 0.05(200)² = 8,000.
μ₃ = 0.05(−200)³ + 0.2(−100)³ + 0.5(0)³ + 0.2(100)³ + 0.05(200)³ = 0.
μ₄ = 0.05(−200)⁴ + 0.2(−100)⁴ + 0.5(0)⁴ + 0.2(100)⁴ + 0.05(200)⁴ = 200,000,000.
Skewness is γ₁ = μ₃/σ³ = 0. Kurtosis is γ₂ = μ₄/σ⁴ = 200,000,000/8,000² =
3.125.
3.15 The Pareto mean residual life function is e_X(d) = (d + θ)/(α − 1),
and so e_X(2θ)/e_X(θ) = (2θ + θ)/(θ + θ) = 1.5.
3.16 Sample mean: 0.2(400) + 0.7(800) + 0.1(1,600) = 800. Sample variance:
0.2(−400)² + 0.7(0)² + 0.1(800)² = 96,000. Sample third central moment:
0.2(−400)³ + 0.7(0)³ + 0.1(800)³ = 38,400,000. Skewness coefficient:
38,400,000/96,000^{1.5} = 1.29.
3.2
SECTION 3.2
3.17 The pdf is f(x) = 2x⁻³, x > 1. The mean is ∫_1^∞ 2x⁻² dx = 2. The
median is the solution to 0.5 = F(x) = 1 − x⁻², which is 1.4142. The mode is
the value where the pdf is highest. Because the pdf is strictly decreasing, the
mode is at its smallest value, 1.
3.18 For Model 2, solve p = 1 − [2,000/(2,000 + π_p)]³, and so
π_p = 2,000[(1 − p)^{−1/3} − 1], and the requested percentiles are 519.84 and
1,419.95.

For Model 4, the distribution function jumps from 0 to 0.7 at zero, and
so π_{0.5} = 0. For percentiles above 70, solve p = 1 − 0.3e^{−0.00001π_p}, and so
π_p = −100,000 ln[(1 − p)/0.3] and π_{0.8} = 40,546.51.

For Model 5, the distribution function has two specifications. From x = 0
to x = 50 it rises from 0.0 to 0.5, and so for percentiles at 50 or below,
the equation to solve is p = 0.01π_p for π_p = 100p. For 50 < x < 75, the
distribution function rises from 0.5 to 1.0, and so for percentiles from 50 to
100 the equation to solve is p = 0.02π_p − 0.5 for π_p = 50p + 25. The requested
percentiles are 50 and 65.
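As a quick numerical check (our addition, not part of the printed solution; the function names are ours), the Model 2 and Model 4 percentile formulas can be evaluated directly:

```python
import math

# Model 2 is Pareto with alpha = 3, theta = 2,000:
# pi_p = theta * [(1 - p)^(-1/alpha) - 1]
def pareto_percentile(p, theta=2000.0, alpha=3.0):
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0)

# Model 4 (for p > 0.7): pi_p = -100,000 * ln[(1 - p)/0.3]
def model4_percentile(p):
    return -100000.0 * math.log((1.0 - p) / 0.3)

print(pareto_percentile(0.5))   # ~519.84
print(pareto_percentile(0.8))   # ~1,419.95
print(model4_percentile(0.8))   # ~40,546.51
```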
3.19 The two percentiles imply

0.1 = 1 − [θ/(θ + θ − k)]^α,
0.9 = 1 − [θ/(θ + 5θ − 3k)]^α.

Rearranging the equations and taking their ratio yield

[(6θ − 3k)/(2θ − k)]^α = 0.9/0.1 = 9.

Taking logarithms of both sides gives ln 9 = α ln 3 for α = ln 9/ln 3 = 2.
3.20 The two percentiles imply

0.25 = 1 − e^{−(1,000/θ)^τ},
0.75 = 1 − e^{−(100,000/θ)^τ}.

Subtracting each side from 1 and taking logarithms give

ln 0.75 = −(1,000/θ)^τ,
ln 0.25 = −(100,000/θ)^τ.

Dividing the second equation by the first gives

ln 0.25 / ln 0.75 = 100^τ.

Finally, taking logarithms of both sides gives τ ln 100 = ln[ln 0.25/ln 0.75] for
τ = 0.3415.
3.3
SECTION 3.3
3.21 The sum has a gamma distribution with parameters α = 16 and θ = 250.
Then, Pr(S₁₆ > 6,000) = 1 − Γ(16; 6,000/250) = 1 − Γ(16; 24). From the
Central Limit Theorem, the sum has an approximate normal distribution
with mean αθ = 4,000 and variance αθ² = 1,000,000 for a standard deviation
of 1,000. The probability of exceeding 6,000 is 1 − Φ[(6,000 − 4,000)/1,000] =
1 − Φ(2) = 0.0228.
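Both quantities can be checked with standard-library Python (our addition, not part of the printed solution). For an integer shape a, the regularized incomplete gamma satisfies Γ(a; x) = Pr(N ≥ a) for N ~ Poisson(x), a standard identity that lets the exact tail be computed as a finite sum:

```python
import math

# Exact: Pr(S16 > 6,000) = 1 - Gamma(16; 24) = Pr(Poisson(24) <= 15)
exact = sum(math.exp(-24.0) * 24.0 ** k / math.factorial(k) for k in range(16))

# Normal approximation: mean 4,000, standard deviation 1,000
approx = 0.5 * (1.0 - math.erf((6000.0 - 4000.0) / (1000.0 * math.sqrt(2.0))))

print(exact)   # exact tail (larger than approx, since the gamma is right-skewed)
print(approx)  # ~0.0228
```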
3.22 A single claim has mean 8,000/(5/3) = 4,800 and variance

2(8,000)²/[(5/3)(2/3)] − 4,800² = 92,160,000.

The sum of 100 claims has mean 480,000 and variance 9,216,000,000, which
is a standard deviation of 96,000. The probability of exceeding 600,000 is
approximately

1 − Φ[(600,000 − 480,000)/96,000] = 1 − Φ(1.25) = 0.106.
3.23 The mean of the gamma distribution is 5(1,000) = 5,000 and the variance
is 5(1,000)² = 5,000,000. For 100 independent claims, the mean is 500,000
and the variance is 500,000,000 for a standard deviation of 22,360.68. The
probability of total claims exceeding 525,000 is

1 − Φ[(525,000 − 500,000)/22,360.68] = 1 − Φ(1.118) = 0.13178.
3.24 The sum of 2,500 contracts has an approximate normal distribution with
mean 2,500(1,300) = 3,250,000 and standard deviation √2,500(400) = 20,000.
The answer is Pr(X > 3,282,500) = Pr[Z > (3,282,500 − 3,250,000)/20,000] =
Pr(Z > 1.625) = 0.052.
3.4
SECTION 3.4
3.25 While the Weibull distribution has all positive moments, for the inverse
Weibull moments exist only for k < τ. Thus, by this criterion, the inverse
Weibull distribution has a heavier tail. With regard to the ratio of density
functions, it is (with the inverse Weibull in the numerator and its
parameters marked with asterisks)

[τ*(θ*)^{τ*} x^{−τ*−1} e^{−(θ*/x)^{τ*}}] / [τθ^{−τ} x^{τ−1} e^{−(x/θ)^τ}]
∝ x^{−(τ+τ*)} e^{−(θ*/x)^{τ*} + (x/θ)^τ}.

The logarithm is

(x/θ)^τ − (θ*/x)^{τ*} − (τ + τ*) ln x.

The middle term goes to zero, so the issue is the limit of (x/θ)^τ − (τ + τ*) ln x,
which is clearly infinite. With regard to the hazard rate, for the Weibull
distribution we have

h(x) = f(x)/S(x) = [τx^{τ−1}θ^{−τ} e^{−(x/θ)^τ}] / e^{−(x/θ)^τ} = τx^{τ−1}/θ^τ,

which is clearly increasing when τ > 1, constant when τ = 1, and decreasing
when τ < 1. For the inverse Weibull,

h(x) = f(x)/S(x) = [τθ^τ x^{−τ−1} e^{−(θ/x)^τ}] / [1 − e^{−(θ/x)^τ}]
     ∝ 1 / { x^{τ+1}[e^{(θ/x)^τ} − 1] }.

The derivative of the denominator is

(τ + 1)x^τ[e^{(θ/x)^τ} − 1] + x^{τ+1} e^{(θ/x)^τ} θ^τ(−τ)x^{−τ−1},

and the limiting value of this expression is θ^τ > 0. Therefore, in the limit,
the denominator is increasing and thus the hazard rate is decreasing.

Figure 3.1 displays a portion of the density functions for Weibull (τ = 3,
θ = 10) and inverse Weibull (τ = 4.4744, θ = 7.4934) distributions with the
same mean and variance. The heavier tail of the inverse Weibull distribution
is clear.

[Figure 3.1: Tails of a Weibull and inverse Weibull distribution. Plot omitted
from this extraction.]
3.26 Means:

Gamma: (0.2)(500) = 100.
Lognormal: exp(3.70929 + 1.33856²/2) = 99.9999.
Pareto: 150/(2.5 − 1) = 100.

Second moments:

Gamma: 500²(0.2)(1.2) = 60,000.
Lognormal: exp[2(3.70929) + 2(1.33856)²] = 59,999.88.
Pareto: 150²(2)/[(1.5)(0.5)] = 60,000.

Density functions:

Gamma: 0.062851 x^{−0.8} e^{−0.002x}.
Lognormal: (2π)^{−1/2}(1.338566x)^{−1} exp[ −(1/2)((ln x − 3.70929)/1.338566)² ].
Pareto: 688,919(x + 150)^{−3.5}.

The gamma and lognormal densities are equal when x = 2,221 while the
lognormal and Pareto densities are equal when x = 9,678. Numerical evaluation
indicates that the ordering is as expected.
3.27 For the Pareto distribution,

e(x) = ∫_0^∞ S(x + y) dy / S(x)
     = (θ + x)^α ∫_0^∞ (θ + y + x)^{−α} dy
     = (θ + x)^α · (θ + x)^{−α+1}/(α − 1)
     = (θ + x)/(α − 1).

Thus e(x) is clearly increasing, indicating a heavy tail.

The square of the coefficient of variation is

{ 2θ²/[(α − 1)(α − 2)] − θ²/(α − 1)² } / [θ/(α − 1)]² = 2(α − 1)/(α − 2) − 1
= α/(α − 2),

which is greater than 1, also indicating a heavy tail.
3.28

M_Y(t) = ∫_0^∞ e^{ty} S_X(y)/E(X) dy
       = [e^{ty} S_X(y)]_0^∞ / [tE(X)] + (1/[tE(X)]) ∫_0^∞ e^{ty} f_X(y) dy
       = −1/[tE(X)] + M_X(t)/[tE(X)]
       = [M_X(t) − 1] / [tE(X)].

This result assumes lim_{y→∞} e^{ty} S_X(y) = 0. An application of L'Hôpital's rule
shows that this is the same limit as t^{−1} lim_{y→∞} e^{ty} f_X(y). This limit must
be zero, otherwise the integral defining M_X(t) will not converge.
3.29 (a)

S(x) = ∫_x^∞ (1 + 2t²)e^{−2t} dt = −(1 + t + t²)e^{−2t} |_x^∞
     = (1 + x + x²)e^{−2x}, x ≥ 0.

(b)

h(x) = −(d/dx) ln S(x) = −(d/dx)[ln(1 + x + x²) − 2x]
     = 2 − (1 + 2x)/(1 + x + x²).

(c) For y ≥ 0,

∫_y^∞ S(t) dt = ∫_y^∞ (1 + t + t²)e^{−2t} dt = (1 + y + (1/2)y²)e^{−2y}.

(d) Thus, from (a) and (c),

e(x) = ∫_x^∞ S(t) dt / S(x) = (1 + x + (1/2)x²)/(1 + x + x²).

(e) Using (b),

lim_{x→∞} h(x) = lim_{x→∞} [2 − (1 + 2x)/(1 + x + x²)] = 2,
lim_{x→∞} e(x) = 1/2 = 1 / lim_{x→∞} h(x).

(f) Using (d), we have

e′(x) = [(1 + x)(1 + x + x²) − (1 + x + (1/2)x²)(1 + 2x)] / (1 + x + x²)²
      = −x(1 + (1/2)x) / (1 + x + x²)² ≤ 0.

Also, from (b) and (e), h(0) = 1 and h(∞) = 2.
3.30 (a) Integration by parts yields

∫_x^∞ (y − x) f(y) dy = ∫_x^∞ S(y) dy,

and hence S_e(x) = ∫_x^∞ f_e(y) dy = ∫_x^∞ S(y) dy / E(X), from which the result
follows.

(b) It follows from (a) that

E(X)S_e(x) = ∫_x^∞ y f(y) dy − x ∫_x^∞ f(y) dy = ∫_x^∞ y f(y) dy − xS(x),

which gives the result by addition of xS(x) to both sides.

(c) Because e(x) = E(X)S_e(x)/S(x), from (b),

∫_x^∞ y f(y) dy = xS(x) + E(X)S_e(x) = S(x)[x + e(x)],

and the first result follows by division of both sides by x + e(x). The inequality
then follows from E(X) = ∫_0^∞ y f(y) dy ≥ ∫_x^∞ y f(y) dy.

(d) Because e(0) = E(X), the result follows from the inequality in (c) with
e(x) replaced by E(X) in the denominator.

(e) As in (c), it follows from (b) that

∫_x^∞ y f(y) dy = E(X)S_e(x)[x/e(x) + 1],

from which the first result follows by solving for S_e(x). The inequality then
follows from the first result since E(X) = ∫_0^∞ y f(y) dy ≥ ∫_x^∞ y f(y) dy.

3.5
SECTION 3.5
3.31 Denote the risk measures by ρ(X) = μ_X + kσ_X, ρ(Y) = μ_Y + kσ_Y, and
ρ(X + Y) = μ_{X+Y} + kσ_{X+Y}. Note that μ_{X+Y} = μ_X + μ_Y and

σ²_{X+Y} = σ²_X + σ²_Y + 2ρσ_Xσ_Y ≤ σ²_X + σ²_Y + 2σ_Xσ_Y = (σ_X + σ_Y)²,

where ρ here is the correlation coefficient, not the risk measure, and must be
less than or equal to one. Therefore, σ_{X+Y} ≤ σ_X + σ_Y. Thus,

ρ(X + Y) ≤ μ_X + μ_Y + k(σ_X + σ_Y) = ρ(X) + ρ(Y),

which establishes subadditivity.

Because μ_{cX} = cμ_X and σ_{cX} = cσ_X, ρ(cX) = cμ_X + kcσ_X = cρ(X), which
establishes positive homogeneity.

Because μ_{X+c} = μ_X + c and σ_{X+c} = σ_X, ρ(X + c) = μ_X + c + kσ_X =
ρ(X) + c, which establishes translation invariance.

For the example given, note that X ≤ Y for all possible pairs of outcomes.
Also, μ_X = 3, μ_Y = 4, σ_X = √3, σ_Y = 0. With k = 1, ρ(X) = 3 + √3 =
4.732, which is greater than ρ(Y) = 4 + 0 = 4, violating monotonicity.
3.32 The cdf is F(x) = 1 − exp(−x/θ), and so the percentile solves p = 1 −
exp(−π_p/θ); the solution is VaR_p(X) = π_p = −θ ln(1 − p). Because the
exponential distribution is memoryless, e(π_p) = θ, and so TVaR_p(X) = VaR_p(X) +
e(π_p) = π_p + θ.
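These closed forms are easy to sketch numerically (our addition; the function names are ours, and θ = 500 is borrowed from Exercise 3.36 purely as an example):

```python
import math

def var_exponential(p, theta):
    # VaR_p(X) = pi_p = -theta * ln(1 - p)
    return -theta * math.log(1.0 - p)

def tvar_exponential(p, theta):
    # Memoryless: e(pi_p) = theta, so TVaR_p(X) = VaR_p(X) + theta.
    return var_exponential(p, theta) + theta

print(var_exponential(0.95, 500.0))   # ~1,497.87
print(tvar_exponential(0.95, 500.0))  # ~1,997.87
```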
3.33 The cdf is F(x) = 1 − [θ/(θ + x)]^α, and so the percentile solves p =
1 − [θ/(θ + π_p)]^α; the solution is VaR_p(X) = π_p = θ[(1 − p)^{−1/α} − 1]. From
(3.13) and Appendix A,

TVaR_p(X) = π_p + [E(X) − E(X ∧ π_p)] / [1 − F(π_p)]
          = π_p + { θ/(α − 1) − [θ/(α − 1)][1 − (θ/(θ + π_p))^{α−1}] } / (1 − p)
          = π_p + [θ/(α − 1)][θ/(θ + π_p)]^{α−1} / (1 − p).

Substitute 1 − p for [θ/(θ + π_p)]^α to obtain

TVaR_p(X) = π_p + [θ/(α − 1)] · (θ + π_p)/θ = π_p + (θ + π_p)/(α − 1).
3.34 First, obtain

Φ⁻¹(0.90) = 1.28155, Φ⁻¹(0.99) = 2.32635, and Φ⁻¹(0.999) = 3.09023.

Using π_p = μ + σΦ⁻¹(p), the results for the normal distribution are obtained.

For the Pareto distribution, using θ = 150 and α = 2.5 and the formulas
in Example 3.17,

VaR_p(X) = 150[(1 − p)^{−1/2.5} − 1]

will yield the results.

For the Weibull(50, 0.5) distribution and the formulas in Appendix A,

p = 1 − exp[ −(π_p/50)^{0.5} ],

from which

VaR_p(X) = π_p = 50{[−ln(1 − p)]²},

which gives the results.
3.35 From Exercise 3.34, VaR_{0.999}(X) = π_{0.999} = 2,385.85. The mean is
E(X) = θΓ(1 + 1/τ) = 50Γ(3) = 100. From Appendix A,

E(X ∧ π_{0.999}) = θΓ(1 + 1/τ)Γ[1 + 1/τ; (π_{0.999}/θ)^τ] + π_{0.999} exp[−(π_{0.999}/θ)^τ]
               = 50Γ(3)Γ(3; 6.9078) + 2,385.85 exp(−6.9078)
               = 100Γ(3; 6.9078) + 2,385.85 exp(−6.9078).

The value of Γ(3; 6.9078) can be obtained using the Excel function
GAMMADIST(6.90775, 3, 1, TRUE). It is 0.968234.

Therefore, E(X ∧ π_{0.999}) = 99.209. From these results using (3.13),
TVaR_{0.999}(X) = 2,385.85 + 1,000(100 − 99.209) = 3,176.63.
(The answer is sensitive to the number of decimal places retained.)
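The whole calculation can also be reproduced without Excel. The sketch below is our addition; it uses the closed form Γ(3; x) = 1 − e^{−x}(1 + x + x²/2) for the regularized incomplete gamma with integer shape 3:

```python
import math

tau, theta, p = 0.5, 50.0, 0.999

pi_p = theta * (-math.log(1.0 - p)) ** (1.0 / tau)  # VaR_0.999 = 2,385.85
x = (pi_p / theta) ** tau                           # = 6.9078

# Gamma(3; x) = 1 - e^(-x) * (1 + x + x^2/2) for integer shape 3
inc_gamma = 1.0 - math.exp(-x) * (1.0 + x + x * x / 2.0)

mean = theta * math.gamma(1.0 + 1.0 / tau)          # 50 * Gamma(3) = 100
limited = mean * inc_gamma + pi_p * math.exp(-x)    # E(X ^ pi) = 99.209
tvar = pi_p + (mean - limited) / (1.0 - p)

print(inc_gamma)  # ~0.968234
print(tvar)       # ~3,176.6
```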
3.36 For the exponential distribution,

VaR_{0.95}(X) = 500[−ln(0.05)] = 1,497.866.

Therefore TVaR_{0.95}(X) = 1,997.866.

From Example 3.19, for the Pareto distribution,

VaR_{0.95}(X) = 1,000[(0.05)^{−1/3} − 1] = 1,714.4176,
E(X ∧ π_{0.95}) = 500[1 − (1,000/2,714.4176)²] = 432.140,
TVaR_{0.95}(X) = 1,714.4176 + (500 − 432.140)/0.05 = 3,071.63.
3.37 For x > x₀,

(d/dx) E(X | X > x) = (d/dx) [ ∫_x^∞ y f(y) dy / S(x) ]
= [ −x f(x)S(x) + f(x) ∫_x^∞ y f(y) dy ] / [S(x)]²
= [f(x)/S(x)] [ ∫_x^∞ y f(y) dy / S(x) − x ]
= h(x) e(x).

Because h(x) ≥ 0 and e(x) ≥ 0, the derivative must be nonnegative.
CHAPTER 4
CHAPTER 4 SOLUTIONS
4.1
SECTION 4.2
4.1 Arguing as in the examples,

F_Y(y) = Pr(X ≤ y/c) = Φ[ (ln(y/c) − μ)/σ ] = Φ[ (ln y − (ln c + μ))/σ ],

which indicates that Y has the lognormal distribution with parameters μ + ln c
and σ. Because no parameter was multiplied by c, there is no scale parameter.
To introduce a scale parameter, define the lognormal distribution function as
F(x) = Φ[ (ln x − ln ν)/σ ]. Note that the new parameter ν is simply e^μ. Then,
arguing as before,

F_Y(y) = Pr(X ≤ y/c) = Φ[ (ln(y/c) − ln ν)/σ ]
       = Φ[ (ln y − (ln c + ln ν))/σ ]
       = Φ[ (ln y − ln cν)/σ ],

demonstrating that ν is a scale parameter.
4.2 The following is not the only possible set of answers to this question.
Model 1 is a uniform distribution on the interval 0 to 100 with parameters
0 and 100. It is also a beta distribution with parameters a = 1, b = 1, and
θ = 100. Model 2 is a Pareto distribution with parameters α = 3 and θ =
2,000. Model 3 would not normally be considered a parametric distribution.
However, we could define a parametric discrete distribution with arbitrary
probabilities at 0, 1, 2, 3, and 4 being the parameters. Conventional usage
would not accept this as a parametric distribution. Similarly, Model 4 is not a
standard parametric distribution, but we could define one as having arbitrary
probability p at zero and an exponential distribution elsewhere. Model 5 could
be from a parametric distribution with uniform probability from a to b and a
different uniform probability from b to c.
4.3 For this year,

Pr(X > d) = 1 − F(d) = [θ/(θ + d)]².

For next year, because θ is a scale parameter, claims will have a Pareto
distribution with parameters α = 2 and 1.06θ. That makes the probability

[1.06θ/(1.06θ + d)]².

The ratio of next year's probability to this year's is

[1.06(θ + d)/(1.06θ + d)]² = (1.1236θ² + 2.2472θd + 1.1236d²)/(1.1236θ² + 2.12θd + d²).

Then

lim_{d→∞} (1.1236θ² + 2.2472θd + 1.1236d²)/(1.1236θ² + 2.12θd + d²)
= lim_{d→∞} (2.2472θ + 2.2472d)/(2.12θ + 2d)
= lim_{d→∞} 2.2472/2 = 1.1236.
4.4 The mth moment of a k-point mixture distribution is

E(Y^m) = ∫ y^m [a₁ f_{X₁}(y) + ··· + a_k f_{X_k}(y)] dy = a₁E(X₁^m) + ··· + a_kE(X_k^m).

For this problem, the first moment is

a · θ/(α + 1) + (1 − a) · θ/(α − 1) if α > 1.

Similarly, the second moment is

a · 2θ²/[(α + 1)α] + (1 − a) · 2θ²/[(α − 1)(α − 2)] if α > 2.
4.5 Using the results from Exercise 4.4, E(X) = Σ_i a_i μ_i, and for the gamma
distribution this becomes Σ_i a_i α_i θ_i. Similarly, for the second moment
we have E(X²) = Σ_i a_i μ′_{2,i}, which, for the gamma distribution, becomes
Σ_i a_i α_i(α_i + 1)θ_i².
4.6 Parametric distribution families: It would be difficult to consider Model
1 as being from a parametric family (although the uniform distribution could
be considered as a special case of the beta distribution). Model 2 is a Pareto
distribution and as such is a member of the transformed beta family. As a
stretch, Model 3 could be considered a member of a family that places
probability (the parameters) on a given number of non-negative integers. Model
4 could be a member of the "exponential plus family," where the plus means
the possibility of discrete probability at zero. Creating a family for Model 5
seems difficult.

Variable-component mixture distributions: Only Model 5 seems to be a
good candidate. It is a mixture of uniform distributions in which the
component uniform distributions are on adjoining intervals.
4.7 For this mixture distribution,

F(5,000) = 0.75Φ[ (5,000 − 3,000)/1,000 ] + 0.25Φ[ (5,000 − 4,000)/1,000 ]
         = 0.75Φ(2) + 0.25Φ(1)
         = 0.75(0.9772) + 0.25(0.8413) = 0.9432.

The probability of exceeding 5,000 is 1 − 0.9432 = 0.0568.
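As an added check (ours, not in the printed solution), Φ can be evaluated through the standard-library error function:

```python
import math

def norm_cdf(z):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# 75/25 mixture of normals with means 3,000 and 4,000, both with sd 1,000.
F_5000 = 0.75 * norm_cdf((5000.0 - 3000.0) / 1000.0) \
       + 0.25 * norm_cdf((5000.0 - 4000.0) / 1000.0)

print(1.0 - F_5000)  # ~0.0568
```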
4.8 The distribution function of Z is

F(z) = 0.5[ 1 − 1/(1 + (z/√1,000)²) ] + 0.5[ 1 − 1/(1 + z/1,000) ]
     = 1 − 0.5 · 1,000/(1,000 + z²) − 0.5 · 1,000/(1,000 + z).

The median is the solution to 0.5 = F(m), or

1,000(1,000 + m) + 1,000(1,000 + m²) = (1,000 + m²)(1,000 + m)
2(1,000)² + 1,000m + 1,000m² = 1,000² + 1,000m + 1,000m² + m³
m³ = 1,000²
m = 100.
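A quick check of the median (our addition, with F written directly from the mixture form above):

```python
# 50/50 mixture of a Burr (alpha = 1, gamma = 2, theta = sqrt(1,000)) cdf
# and a Pareto (alpha = 1, theta = 1,000) cdf; the median should be 100.
def F(z):
    burr = 1.0 - 1.0 / (1.0 + z * z / 1000.0)
    pareto = 1.0 - 1.0 / (1.0 + z / 1000.0)
    return 0.5 * burr + 0.5 * pareto

print(F(100.0))  # ~0.5
```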
The distribution function of W is

F_W(w) = Pr(W ≤ w) = Pr(1.1Z ≤ w) = Pr(Z ≤ w/1.1) = F_Z(w/1.1)
       = 0.5[ 1 − 1/(1 + (w/(1.1√1,000))²) ] + 0.5[ 1 − 1/(1 + w/1,100) ].

This is a 50/50 mixture of a Burr distribution with parameters α = 1, γ = 2,
and θ = 1.1√1,000 and a Pareto distribution with parameters α = 1 and
θ = 1,100.
4.9 Right censoring creates a mixed distribution with discrete probability at
the censoring point. Therefore, Z is matched with (c). X is similar to Model
5, which has a continuous distribution function but the density function has
a jump at 2. Therefore, X is matched with (b). The sum of two continuous
random variables will be continuous as well, in this case over the interval from
0 to 5. Therefore, Y is matched with (a).
4.10 The density function is the sum of five functions. They are (where it is
understood that the function is zero where not defined)

f₁(x) = 0.03125, 1 < x < 5,
f₂(x) = 0.03125, 3 < x < 7,
f₃(x) = 0.09375, 4 < x < 8,
f₄(x) = 0.06250, 5 < x < 9,
f₅(x) = 0.03125, 8 < x < 12.

Adding the functions yields

f(x) = 0.03125, 1 < x < 3,
     = 0.06250, 3 < x < 4,
     = 0.15625, 4 < x < 5,
     = 0.18750, 5 < x < 7,
     = 0.15625, 7 < x < 8,
     = 0.09375, 8 < x < 9,
     = 0.03125, 9 < x < 12.

This is a mixture of seven uniform distributions, each being uniform over the
indicated interval. The weight for mixing is the value of the density function
multiplied by the width of the interval.
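The mixing weights can be tabulated directly; this small script is our addition and simply confirms that the weights sum to one:

```python
# (height, left endpoint, right endpoint) for each piece of the density
pieces = [
    (0.03125, 1, 3),
    (0.06250, 3, 4),
    (0.15625, 4, 5),
    (0.18750, 5, 7),
    (0.15625, 7, 8),
    (0.09375, 8, 9),
    (0.03125, 9, 12),
]

# mixing weight = density height times interval width
weights = [h * (b - a) for h, a, b in pieces]

print(weights)       # [0.0625, 0.0625, 0.15625, 0.375, 0.15625, 0.09375, 0.09375]
print(sum(weights))  # 1.0
```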
4.11

F_Y(y) = F_X(y/c)
       = Φ[ ((y/c − μ)/μ)(θ/(y/c))^{1/2} ] + exp(2θ/μ) Φ[ −((y/c + μ)/μ)(θ/(y/c))^{1/2} ]
       = Φ[ ((y − cμ)/(cμ))(cθ/y)^{1/2} ] + exp(2cθ/(cμ)) Φ[ −((y + cμ)/(cμ))(cθ/y)^{1/2} ],

and so Y is inverse Gaussian with parameters cμ and cθ. Because it is still
inverse Gaussian, it is a scale family. Because both μ and θ change, there is
no scale parameter.

4.12 F_Y(y) = F_X(y/c) = 1 − exp[−(y/(cθ))^τ], which is a Weibull distribution
with parameters τ and cθ.
CHAPTER 5 SOLUTIONS
5.1 SECTION 5.2

5.1 F_Y(y) = 1 − (1 + y/θ)^{−α} = 1 − [θ/(θ + y)]^α. This is the cdf of the Pareto
distribution. The pdf is f_Y(y) = dF_Y(y)/dy = αθ^α/(θ + y)^{α+1}.
5.2 After three years, values are inflated by 1.331. Let X be the 1995 variable
and Y = 1.331X be the 1998 variable. We want

Pr(Y > 500) = Pr(X > 500/1.331) = Pr(X > 376).

From the given information we have Pr(X > 350) = 0.55 and Pr(X > 400) =
0.50. Therefore, the desired probability must be between these two values.
5.3 Inverse: with Y = X^{−1},

F_Y(y) = Pr(X ≥ y^{−1}) = [θ/(θ + y^{−1})]^α = [y/(y + θ^{−1})]^α.
Student Solutions Manual to Accompany Loss Models: From Data to Decisions, Fourth
Edition. By Stuart A. Klugman, Harry H. Panjer, Gordon E. Willmot.
Copyright © 2012 John Wiley & Sons, Inc.
This is the inverse Pareto distribution with τ = α and θ = 1/θ. Transformed: with Y = X^{1/τ},

F_Y(y) = 1 − [θ/(θ + y^τ)]^α = 1 − [1 + (y/θ^{1/τ})^τ]^{−α}.

This is the Burr distribution with α = α, γ = τ, and θ = θ^{1/τ}.
Inverse transformed: with Y = X^{−1/τ},

F_Y(y) = Pr(X ≥ y^{−τ}) = [θ/(θ + y^{−τ})]^α = [y^τ/(y^τ + θ^{−1})]^α.

This is the inverse Burr distribution with τ = α, γ = τ, and θ = θ^{−1/τ}.
5.4

F_Y(y) = 1 − [1/(θy)]^γ/{1 + [1/(θy)]^γ}
= 1/{1 + [1/(θy)]^γ}
= (θy)^γ/[1 + (θy)^γ]
= (y/θ^{−1})^γ/[1 + (y/θ^{−1})^γ].

This is the loglogistic distribution with γ unchanged and θ replaced by 1/θ.
5.5

F_Z(z) = Φ[(ln(z/θ) − μ)/σ] = Φ[(ln z − (ln θ + μ))/σ],

which is the cdf of another lognormal distribution with μ replaced by ln θ + μ and σ unchanged.
5.6 The distribution function of Y is

F_Y(y) = Pr(Y ≤ y) = Pr[ln(1 + X/θ) ≤ y]
= Pr(1 + X/θ ≤ e^y)
= Pr[X ≤ θ(e^y − 1)]
= 1 − {θ/[θ + θ(e^y − 1)]}^α
= 1 − e^{−αy}.

This is the distribution function of an exponential random variable with parameter 1/α.
5.7 X|Θ = θ has pdf

f_{X|Θ}(x|θ) = τ(x/θ)^{τα} exp[−(x/θ)^τ]/[xΓ(α)],

and Θ has pdf

f_Θ(θ) = τ(δ/θ)^{τβ} exp[−(δ/θ)^τ]/[θΓ(β)].

The mixture distribution has pdf

f_X(x) = ∫_0^∞ {τ(x/θ)^{τα} e^{−(x/θ)^τ}/[xΓ(α)]}{τ(δ/θ)^{τβ} e^{−(δ/θ)^τ}/[θΓ(β)]} dθ
= [τ²x^{τα}δ^{τβ}/(xΓ(α)Γ(β))] · Γ(α + β)/[τ(x^τ + δ^τ)^{α+β}]
= Γ(α + β)τ x^{τα−1}δ^{τβ}/[Γ(α)Γ(β)(x^τ + δ^τ)^{α+β}],

which is a transformed beta pdf with γ = τ, τ = α, α = β, and θ = δ.
5.8 The requested gamma distribution has αθ = 1 and αθ² = 2, for α = 0.5
and θ = 2. Then

Pr(N = 1) = ∫_0^∞ λe^{−λ} · λ^{−0.5}e^{−λ/2}/[2^{0.5}Γ(0.5)] dλ
= [1/(Γ(0.5)√2)] ∫_0^∞ λ^{0.5}e^{−1.5λ} dλ
= [1/(Γ(0.5)√2)] ∫_0^∞ (y/1.5)^{0.5}e^{−y}(1/1.5) dy
= Γ(1.5)/[1.5^{1.5}Γ(0.5)√2]
= Γ(1.5)/[1.5Γ(0.5)√3]
= 0.5/(1.5√3) = 0.19245.

Line 3 follows from the substitution y = 1.5λ, and the final line follows from the gamma
function identity Γ(1.5) = 0.5Γ(0.5). N has a negative binomial distribution,
and its parameters can be determined by matching moments. In particular,
we have E(N) = E[E(N|Λ)] = E(Λ) = 1 and Var(N) = E[Var(N|Λ)] +
Var[E(N|Λ)] = E(Λ) + Var(Λ) = 1 + 2 = 3.
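The value 0.19245 can be confirmed numerically. The sketch below is not part of the printed solution; it assumes only the gamma parameters α = 0.5, θ = 2 derived above and the matched negative binomial r = 0.5, β = 2 (so that the mean is 1 and the variance is 3):

```python
# Check Pr(N = 1) three ways: numerical integration of the Poisson-gamma
# mixture, the closed form 0.5/(1.5*sqrt(3)), and the negative binomial pf.
import math

def integrand(lam):
    poisson = lam * math.exp(-lam)                      # Pr(N = 1 | lam)
    gamma_pdf = lam**(-0.5) * math.exp(-lam / 2) / (2**0.5 * math.gamma(0.5))
    return poisson * gamma_pdf

n = 200_000                       # midpoint rule on (0, 40)
h = 40 / n
approx = sum(integrand((i + 0.5) * h) for i in range(n)) * h

closed_form = 0.5 / (1.5 * math.sqrt(3))
r, beta = 0.5, 2.0
neg_bin_p1 = r * beta / (1 + beta) ** (r + 1)           # negative binomial p_1

print(approx, closed_form, neg_bin_p1)  # all three are about 0.19245
```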
5.9 The hazard rate for an exponential distribution is h(x) = f(x)/S(x) =
θ^{−1}. Here θ is the parameter of the exponential distribution, not the value
from the exercise. But this means that the θ in the exercise is the reciprocal
of the exponential parameter, and thus the distribution function is to be written
F(x) = 1 − e^{−θx}. The unconditional distribution function is

F_X(x) = ∫_1^{11} (1 − e^{−θx})(0.1) dθ
= 0.1(θ + x^{−1}e^{−θx})|_{θ=1}^{11}
= 1 + 0.1x^{−1}(e^{−11x} − e^{−x}).

Then, S(0.5) = 1 − F_X(0.5) = −0.2(e^{−5.5} − e^{−0.5}) = 0.1205.
5.10 We have

Pr(N ≥ 2) = 1 − F_N(1)
= 1 − ∫_0^5 (e^{−λ} + λe^{−λ})(0.2) dλ
= 1 − 0.2[−(1 + λ)e^{−λ} − e^{−λ}]|_0^5
= 1 + 1.2e^{−5} + 0.2e^{−5} − 0.2 − 0.2
= 0.6094.
5.11 With probability p, p1 = 0.5² = 0.25. With probability 1 − p, p2 =
C(4,2)(0.5)⁴ = 0.375. The mixed probability is 0.25p + 0.375(1 − p) = 0.375 − 0.125p.
5.12 It follows from (5.3) that

f_X(x) = −S'(x) = −M'_Λ[−A(x)][−a(x)] = a(x)M'_Λ[−A(x)].

Then

h_X(x) = f_X(x)/S_X(x) = a(x)M'_Λ[−A(x)]/M_Λ[−A(x)].
5.13 It follows from Example 5.7 with α = 1 that S_X(x) = (1 + θx^γ)^{−1}, which
is a loglogistic distribution with the usual (i.e., in Appendix A) parameter θ
replaced by θ^{−1/γ}.
5.14 (a) Clearly, a(x) > 0, and we have

A(x) = ∫_0^x a(t) dt = (θ/2)∫_0^x (1 + θt)^{−1/2} dt = √(1 + θt)|_0^x = √(1 + θx) − 1.

Because A(∞) = ∞, it follows that h_{X|Λ}(x|λ) = λa(x) satisfies h_{X|Λ}(x|λ) > 0
and ∫_0^∞ h_{X|Λ}(x|λ) dx = ∞.

(b) Using (a), we find that

S_{X|Λ}(x|λ) = e^{−λ(√(1+θx)−1)}.

It is useful to note that this conditional survival function may itself be shown
to be an exponential mixture with inverse Gaussian frailty.

(c) It follows from Example 3.7 that M_Λ(t) = (1 − t)^{−2α}, and thus from (5.3)
that X has a Pareto distribution with survival function S_X(x) = M_Λ(1 −
√(1 + θx)) = (1 + θx)^{−α}.
(d) The survival function S_X(x) = (1 + θx)^{−α} is also of the form given by
(5.3) with mixed exponential survival function S_{X|Λ}(x|λ) = e^{−λx} and gamma
frailty with moment generating function M_Λ(t) = (1 − θt)^{−α}, as in Example
3.7.

5.15 Write F_X(x) = 1 − M_Λ[−A(x)], and thus

{1 − M_Λ[−A(x)]}/[E(Λ)A(x)] = M_1[−A(x)],

where

M_1(t) = [M_Λ(t) − 1]/[tE(Λ)]

is the moment generating function of the equilibrium distribution of Λ, as
is clear from Exercise 3.28. Therefore, S_1(x) is again of the form given by
(5.3), but with the distribution of Λ now given by the equilibrium pdf Pr(Λ >
λ)/E(Λ).
5.16 (a)

M_{Λ_s}(t) = ∫_{all λ} e^{tλ} f_{Λ_s}(λ) dλ = ∫_{all λ} e^{tλ} e^{−sλ}f_Λ(λ)/M_Λ(−s) dλ = M_Λ(t − s)/M_Λ(−s),

with the integral replaced by a sum if Λ is discrete.

(b) c'_Λ(t) = M'_Λ(t)/M_Λ(t), and replacement of t by −s yields

c'_Λ(−s) = M'_Λ(−s)/M_Λ(−s).

Also, M'_{Λ_s}(t) = M'_Λ(t − s)/M_Λ(−s), and thus

E(Λ_s) = M'_{Λ_s}(0) = M'_Λ(−s)/M_Λ(−s) = c'_Λ(−s).

Similarly,

c''_Λ(t) = (d/dt)[M'_Λ(t)/M_Λ(t)] = M''_Λ(t)/M_Λ(t) − [M'_Λ(t)/M_Λ(t)]²,

and so

c''_Λ(−s) = M''_Λ(−s)/M_Λ(−s) − [M'_Λ(−s)/M_Λ(−s)]² = E(Λ_s²) − [E(Λ_s)]² = Var(Λ_s).

(c)

h_X(x) = −(d/dx) ln S_X(x) = −(d/dx) ln M_Λ[−A(x)] = −(d/dx) c_Λ[−A(x)] = a(x)c'_Λ[−A(x)].

(d)

h'_X(x) = a'(x)c'_Λ[−A(x)] + a(x)(d/dx)c'_Λ[−A(x)] = a'(x)c'_Λ[−A(x)] − [a(x)]²c''_Λ[−A(x)].

(e) Using (b) and (d), we have

h'_X(x) = a'(x)E[Λ_{A(x)}] − [a(x)]²Var[Λ_{A(x)}] < 0

if a'(x) ≤ 0 because E[Λ_{A(x)}] > 0 and Var[Λ_{A(x)}] > 0. But (∂/∂x)h_{X|Λ}(x|λ) =
λa'(x), and thus a'(x) ≤ 0 when (∂/∂x)h_{X|Λ}(x|λ) ≤ 0.
5.17 Using the first definition of a spliced model, we have

f_X(x) = τ,          0 < x < 1,000,
         γe^{−x/θ},  x > 1,000,

where the coefficient τ is a1 multiplied by the uniform density of 0.001 and the
coefficient γ is a2 multiplied by the scaled exponential coefficient. To ensure
continuity we must have τ = γe^{−1,000/θ}. Finally, to ensure that the density
integrates to 1, we have

1 = ∫_0^{1,000} γe^{−1,000/θ} dx + ∫_{1,000}^∞ γe^{−x/θ} dx = 1,000γe^{−1,000/θ} + γθe^{−1,000/θ},

which implies γ = [(1,000 + θ)e^{−1,000/θ}]^{−1}. The final density, a one-parameter
distribution, is

f_X(x) = 1/(1,000 + θ),                         0 < x < 1,000,
         e^{−x/θ}/[(1,000 + θ)e^{−1,000/θ}],   x > 1,000.

Figure 5.1 presents this density function for the value θ = 1,000.
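A short numerical sketch (not part of the printed solution) confirms the two defining properties of this spliced density for θ = 1,000, continuity at the splice point and total probability 1:

```python
# Spliced density from Exercise 5.17 with theta = 1000: check continuity
# at x = 1000 and that the total probability is 1.
import math

theta = 1000.0

def f(x):
    if 0 < x < 1000:
        return 1 / (1000 + theta)                  # uniform piece
    return math.exp(-x / theta) / ((1000 + theta) * math.exp(-1000 / theta))

left, right = f(999.999999), f(1000.000001)        # values at the splice
uniform_mass = 1000 / (1000 + theta)               # integral of uniform piece
# Exponential tail integrates to theta * exp(-1000/theta), then rescale.
tail_mass = theta * math.exp(-1000 / theta) / ((1000 + theta) * math.exp(-1000 / theta))
print(left, right, uniform_mass + tail_mass)       # halves match; total is 1.0
```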
5.18 f_Y(y) = exp(−|ln y|/θ)/(2θy). For x < 0, F_X(x) = ∫_{−∞}^x (2θ)^{−1}e^{t/θ} dt =
(1/2)e^{x/θ}. For x > 0 it is 1/2 + ∫_0^x (2θ)^{−1}e^{−t/θ} dt = 1 − (1/2)e^{−x/θ}. With
exponentiation the two descriptions are F_Y(y) = (1/2)e^{(ln y)/θ} = (1/2)y^{1/θ},
0 < y ≤ 1, and F_Y(y) = 1 − (1/2)y^{−1/θ}, y > 1.

5.19 F(x) = ∫_1^x 3t^{−4} dt = 1 − x^{−3}. With Y = 1.1X,

F_Y(y) = 1 − (y/1.1)^{−3},

and Pr(Y > 2.2) = 1 − F_Y(2.2) = (2.2/1.1)^{−3} = 0.125.
Figure 5.1  Continuous spliced density function.
5.20 (a) Pr(X^{−1} ≤ x) = Pr(X ≥ 1/x) = Pr(X > 1/x), and the pdf is, therefore,

f_{1/X}(x) = (d/dx) Pr(X > 1/x) = x^{−2} f_X(x^{−1})
= x^{−2} [θx³/(2π)]^{1/2} exp[−(θx/2)((x^{−1} − μ)/μ)²]
= [θ/(2πx)]^{1/2} exp[−(θ/(2x))((1 − μx)/μ)²], x > 0.

(b) The inverse Gaussian pdf may be expressed as

f(x) = [θ/(2π)]^{1/2} e^{θ/μ} x^{−1.5} exp[−θx/(2μ²) − θ/(2x)].

Thus, ∫_0^∞ f(x) dx = 1 may be expressed as

∫_0^∞ x^{−1.5} exp[−θx/(2μ²) − θ/(2x)] dx = (2π/θ)^{1/2} e^{−θ/μ},

valid for θ > 0 and μ > 0. Then

∫_0^∞ e^{t1·x + t2/x} f(x) dx = [θ/(2π)]^{1/2} e^{θ/μ} ∫_0^∞ x^{−1.5} exp[−(θ/(2μ²) − t1)x − (θ/2 − t2)/x] dx
= [θ/(2π)]^{1/2} e^{θ/μ} ∫_0^∞ x^{−1.5} exp[−θ*x/(2μ*²) − θ*/(2x)] dx,

where θ* = θ − 2t2 and μ* = μ[(θ − 2t2)/(θ − 2μ²t1)]^{1/2}. Therefore, if θ* > 0 and
μ* > 0,

M(t1, t2) = [θ/(2π)]^{1/2} e^{θ/μ} (2π/θ*)^{1/2} e^{−θ*/μ*} = (θ/θ*)^{1/2} exp(θ/μ − θ*/μ*).
(c) Because t2 = 0 gives θ* = θ and θ*/μ* = [θ(θ − 2μ²t)]^{1/2}/μ,

M_X(t) = M(t, 0) = exp{θ/μ − [θ(θ − 2μ²t)]^{1/2}/μ} = exp{(θ/μ)[1 − (1 − 2μ²t/θ)^{1/2}]}.

(d)

M_{1/X}(t) = M(0, t) = [θ/(θ − 2t)]^{1/2} exp{θ/μ − [θ(θ − 2t)]^{1/2}/μ}
= (1 − 2t/θ)^{−1/2} exp{(θ1/μ1)[1 − (1 − 2μ1²t/θ1)^{1/2}]},

where θ1 = θ/μ² and μ1 = 1/μ. Thus M_{1/X}(t) = M_{Z1}(t)M_{Z2}(t) with
M_{Z1}(t) = [1 − (2/θ)t]^{−1/2}, the mgf of a gamma distribution with α = 1/2 and
θ replaced by 2/θ (in the notation of Example 3.7). Also, M_{Z2}(t) is the mgf
of an inverse Gaussian distribution with θ replaced by θ1 = θ/μ² and μ by
μ1 = 1/μ, as is clear from (c).
(e) With Z = μ^{−2}X − 2μ^{−1} + X^{−1} = (X − μ)²/(μ²X),

M_Z(t) = E{exp[t(μ^{−2}X − 2μ^{−1} + X^{−1})]} = e^{−2t/μ} M(t/μ², t),

and, therefore,

M_Z(t) = e^{−2t/μ} [θ/(θ − 2t)]^{1/2} exp{[θ − (θ − 2t)]/μ} = (1 − 2t/θ)^{−1/2},

the same gamma mgf discussed in (d).
(f) First rewrite the mgf as

M_X(t) = exp[θ/μ − (2θ)^{1/2}(θ/(2μ²) − t)^{1/2}].

Then,

M_X^{(1)}(t) = (θ/2)^{1/2}(θ/(2μ²) − t)^{−1/2} M_X(t).

Similarly,

M_X^{(2)}(t) = (1/2)(θ/2)^{1/2}(θ/(2μ²) − t)^{−3/2} M_X(t) + (θ/2)(θ/(2μ²) − t)^{−1} M_X(t),

and the result holds for k = 1, 2, where the result is the claim that

M_X^{(k)}(t) = M_X(t) Σ_{n=0}^{k−1} {(k − 1 + n)!/[(k − 1 − n)! n! 4^n]} (θ/2)^{(k−n)/2} (θ/(2μ²) − t)^{−(k+n)/2}.

Assuming that the claim holds for k, differentiating again produces two terms
from each summand, one from the factor (θ/(2μ²) − t)^{−(k+n)/2} and one from
M_X^{(1)}(t). Changing the index of summation in the second sum, separating
out the two extreme terms, and combining the others confirms the formula for
k + 1, so the kth derivative of M_X(t) is as claimed by induction. Then, because

(θ/2)^{(k−n)/2} [θ/(2μ²)]^{−(k+n)/2} = 2^n θ^{−n} μ^{k+n},

the result follows from

E(X^k) = M_X^{(k)}(0) = Σ_{n=0}^{k−1} {(k − 1 + n)!/[(k − 1 − n)! n!]} μ^{k+n}/(2θ)^n.
(g) Clearly, from (f), for m = 1, 2, ..., the moment result may be stated as

E(X^m) = Σ_{n=0}^{m−1} {(m − 1 + n)!/[(m − 1 − n)! n!]} μ^{m+n}/(2θ)^n,

which is also obviously true when m = 0. But

f(x) = [θ/(2π)]^{1/2} e^{θ/μ} x^{−1.5} exp[−θx/(2μ²) − θ/(2x)],

and thus

[θ/(2π)]^{1/2} e^{θ/μ} ∫_0^∞ x^{m−1.5} exp[−θx/(2μ²) − θ/(2x)] dx = E(X^m),

or, equivalently,

∫_0^∞ x^{m−1.5} e^{−θx/(2μ²) − θ/(2x)} dx = (2π/θ)^{1/2} e^{−θ/μ} Σ_{n=0}^{m−1} {(m − 1 + n)!/[(m − 1 − n)! n!]} μ^{m+n}/(2θ)^n.

Let α = θ/(2μ²), implying that μ = [θ/(2α)]^{1/2} and also that θ/μ = (2θα)^{1/2}.
Substituting these into the previous identity gives the stated integral formula.
5.2 SECTION 5.3

5.21 Let τ = α/θ, and then in the Pareto distribution substitute τθ for α.
The limiting distribution function is

lim_{θ→∞} [1 − (θ/(x + θ))^{τθ}].

The limit of the logarithm of (θ/(x + θ))^{τθ} is

lim_{θ→∞} τθ[ln θ − ln(x + θ)] = τ lim_{θ→∞} [ln θ − ln(x + θ)]/θ^{−1}
= τ lim_{θ→∞} [θ^{−1} − (x + θ)^{−1}]/(−θ^{−2})
= −τ lim_{θ→∞} θ²x/[θ(x + θ)]
= −τ lim_{θ→∞} θx/(x + θ)
= −τx.

The second line and the final limit use L'Hôpital's rule. The limit of the
distribution function is then 1 − exp(−τx), an exponential distribution.
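The convergence can be seen numerically; the sketch below (illustrative values of τ and x, not from the text) evaluates the Pareto cdf with α = τθ for increasing θ:

```python
# Exercise 5.21 numerically: 1 - (theta/(x+theta))^(tau*theta) approaches
# the exponential cdf 1 - exp(-tau*x) as theta grows.
import math

tau, x = 0.002, 700.0
for theta in (1e2, 1e4, 1e6):
    pareto_cdf = 1 - (theta / (x + theta)) ** (tau * theta)
    print(theta, pareto_cdf)
print("limit:", 1 - math.exp(-tau * x))
```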
5.22 The generalized Pareto distribution is the transformed beta distribution
with γ = 1. The limiting distribution is then transformed gamma with τ = 1,
which is a gamma distribution. The gamma parameters are α = τ and θ = ξ.
5.23 Hold α constant and let θτ^{1/γ} → ξ. Then let θ = ξτ^{−1/γ}. The transformed
beta pdf may be written

f(x) = Γ(α + τ)γθ^{γα}x^{−γα−1}/{Γ(α)Γ(τ)[1 + x^{−γ}θ^{γ}]^{α+τ}}
= Γ(α + τ)γξ^{γα}τ^{−α}x^{−γα−1}/{Γ(α)Γ(τ)[1 + τ^{−1}(ξ/x)^γ]^{α+τ}}.

By Stirling's approximation, Γ(c) ≈ e^{−c}c^{c−1/2}(2π)^{1/2}, so Γ(α + τ)/[Γ(τ)τ^α] → 1
as τ → ∞, while [1 + τ^{−1}(ξ/x)^γ]^{α+τ} → e^{(ξ/x)^γ}. Hence

f(x) → γξ^{γα}x^{−γα−1}e^{−(ξ/x)^γ}/Γ(α) = γ(ξ/x)^{γα}e^{−(ξ/x)^γ}/[xΓ(α)],

which is the pdf of an inverse transformed gamma distribution.
5.3 SECTION 5.4

5.24 From Example 5.12, r(θ) = −1/θ and q(θ) = θ^α. Then, r'(θ) = θ^{−2}
and q'(θ) = αθ^{α−1}. From (5.9),

μ(θ) = q'(θ)/[r'(θ)q(θ)] = αθ^{α−1}/(θ^{−2}θ^α) = αθ.

With μ'(θ) = α and (5.10),

Var(X) = μ'(θ)/r'(θ) = α/θ^{−2} = αθ².
5.25 For the mean,

ln f(x;θ) = ln p(m,x) + mr(θ)x − m ln q(θ),
∂ ln f(x;θ)/∂θ = mr'(θ)x − mq'(θ)/q(θ),
∂f(x;θ)/∂θ = [mr'(θ)x − mq'(θ)/q(θ)] f(x;θ).

Integrating with respect to x yields

0 = ∫ ∂f(x;θ)/∂θ dx = mr'(θ) ∫ xf(x;θ) dx − [mq'(θ)/q(θ)] ∫ f(x;θ) dx
= mr'(θ)E(X) − mq'(θ)/q(θ),

and so E(X) = q'(θ)/[r'(θ)q(θ)] = μ(θ).

For the variance,

∂f(x;θ)/∂θ = mr'(θ)[x − μ(θ)]f(x;θ),
∂²f(x;θ)/∂θ² = mr''(θ)[x − μ(θ)]f(x;θ) − mr'(θ)μ'(θ)f(x;θ)
               + m²[r'(θ)]²[x − μ(θ)]²f(x;θ).

Integrating with respect to x yields

0 = ∫ ∂²f(x;θ)/∂θ² dx = mr''(θ)(0) − mr'(θ)μ'(θ) + m²[r'(θ)]²E{[X − μ(θ)]²},

and so Var(X) = μ'(θ)/[mr'(θ)].
5.26 (a) We prove the result by induction on m. Assume that X_j has a
continuous distribution (if X_j is discrete, simply replace the integrals by sums
in what follows). If m = 1, then S = X_1, and the result holds with p(x) =
p_1(x). If true for m with p(x) = p*_m(x), then for m + 1, S has pdf (by the
convolution formula) of the form

∫_0^s [p*_m(s − x)e^{r(θ)(s−x)}/∏_{j=1}^m q_j(θ)][p_{m+1}(x)e^{r(θ)x}/q_{m+1}(θ)] dx
= e^{r(θ)s}[∫_0^s p*_m(s − x)p_{m+1}(x) dx]/∏_{j=1}^{m+1} q_j(θ),

which is of the desired form with p*_{m+1}(s) = ∫_0^s p*_m(s − x)p_{m+1}(x) dx. It
is instructive to note that p(x) is clearly the convolution of the functions
p_1(x), p_2(x), ..., p_m(x).

(b) By (a), S has pdf (or pf) of the form

f(s;θ) = p*_m(s)e^{r(θ)s}/∏_{j=1}^m q_j(θ).

If X_j has a continuous distribution and S has pdf f(s;θ), then the pdf of X̄ = S/m is

f_{X̄}(x|θ) = mf(mx;θ) = mp*_m(mx)e^{mr(θ)x}/∏_{j=1}^m q_j(θ),

of the desired form with p(m,x) = mp*_m(mx). If X_j is discrete, the pf of X̄
is clearly of the desired form with p(m,x) = p*_m(mx).
CHAPTER 6 SOLUTIONS
6.1 SECTION 6.1

6.1

P_N(z) = Σ_{k=0}^∞ p_k z^k = Σ_{k=0}^∞ p_k e^{k ln z} = M_N(ln z),
P'_N(z) = z^{−1}M'_N(ln z),
P'_N(1) = M'_N(0) = E(N),
P''_N(z) = z^{−2}M''_N(ln z) − z^{−2}M'_N(ln z),
P''_N(1) = M''_N(0) − M'_N(0) = E(N²) − E(N) = E[N(N − 1)].
6.2 SECTION 6.5
6.2 For Exercise 14.3, the values at k = 1, 2, 3 are 0.1000, 0.0994, and 0.1333,
which are nearly constant. The Poisson distribution is recommended. For
Exercise 14.5, the values at k = 1, 2, 3, 4 are 0.1405, 0.2149, 0.6923, and
1.3333, an increasing pattern. The geometric/negative binomial is recommended
(although the pattern looks more quadratic than linear).
6.3 For the Poisson, λ > 0 and so it must be that a = 0 and b > 0. For the
binomial, m must be a positive integer and 0 < q < 1. This requires a < 0
and b > 0 provided −b/a is an integer ≥ 2. For the negative binomial, both
r and β must be positive, so a > 0 and b can be anything provided b/a > −1.

The pair a = −1 and b = 1.5 cannot work because the binomial is the
only possibility, but −b/a = 1.5, which is not an integer. For proof, let p0 be
arbitrary. Then p1 = (−1 + 1.5/1)p0 = 0.5p0 and p2 = (−1 + 1.5/2)(0.5p0) =
−0.125p0 < 0.
6.3 SECTION 6.6

6.4

p_k = p_{k−1} [β/(1 + β)] (k + r − 1)/k
= p_{k−2} [β/(1 + β)]² [(k + r − 1)/k][(k + r − 2)/(k − 1)]
⋮
= p_1 [β/(1 + β)]^{k−1} [(k + r − 1)/k][(k + r − 2)/(k − 1)] ⋯ [(r + 1)/2].

The factors will be positive (and thus p_k will be positive) provided p_1 > 0,
β > 0, r > −1, and r ≠ 0.

To see that the probabilities sum to a finite amount,

Σ_{k=1}^∞ p_k = p_1 Σ_{k=1}^∞ [β/(1 + β)]^{k−1} (r + 1)(r + 2)⋯(k + r − 1)/k!
= p_1 [(1 + β)^{r+1}/(rβ)] Σ_{k=1}^∞ [r(r + 1)⋯(k + r − 1)/k!] [1/(1 + β)]^r [β/(1 + β)]^k.

The terms of the summand are the pf of the negative binomial distribution
and so must sum to a number less than one (p_0 is missing), and so the original
sum must converge.
6.5 From the previous solution (with r = 0),

1 = Σ_{k=1}^∞ p_k = p_1 [(1 + β)/β] Σ_{k=1}^∞ (1/k)[β/(1 + β)]^k = p_1 [(1 + β)/β] ln(1 + β),

using the Taylor series expansion for ln(1 − x). Thus

p_1 = β/[(1 + β)ln(1 + β)],

and

p_k = [β/(1 + β)]^k / [k ln(1 + β)].

6.6

P(z) = Σ_{k=1}^∞ p_k z^k = [1/ln(1 + β)] Σ_{k=1}^∞ (1/k)[zβ/(1 + β)]^k
= −ln[1 − zβ/(1 + β)]/ln(1 + β)
= ln[(1 + β)/(1 + β − zβ)]/ln(1 + β)
= 1 − ln[1 − β(z − 1)]/ln(1 + β).
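A quick check (not part of the printed solution, with an illustrative β = 3) that the logarithmic pf sums to 1 and is consistent with the pgf just derived:

```python
# Logarithmic distribution: p_k = (beta/(1+beta))^k / (k ln(1+beta)).
import math

beta = 3.0
def p(k):
    return (beta / (1 + beta)) ** k / (k * math.log(1 + beta))

total = sum(p(k) for k in range(1, 200))          # should be (nearly) 1
z = 0.6
pgf_direct = sum(p(k) * z**k for k in range(1, 200))
pgf_formula = 1 - math.log(1 - beta * (z - 1)) / math.log(1 + beta)
print(total, pgf_direct, pgf_formula)
```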
6.7 The pgf goes to 1 − (1 − z)^{−r} as β → ∞. The derivative with respect to
z is −r(1 − z)^{−r−1}. The expected value is this derivative evaluated at z = 1,
which is infinite due to the negative exponent (for −1 < r < 0, the exponent
−r − 1 is negative).
CHAPTER 7 SOLUTIONS
7.1 SECTION 7.1

7.1 Poisson: P(z) = e^{λ(z−1)}, B(z) = e^z, θ = λ.
Negative binomial: P(z) = [1 − β(z − 1)]^{−r}, B(z) = (1 − z)^{−r}, θ = β.
Binomial: P(z) = [1 + q(z − 1)]^m, B(z) = (1 + z)^m, θ = q.
7.2 The geometric-geometric distribution has pgf

P_GG(z) = {1 − β1([1 − β2(z − 1)]^{−1} − 1)}^{−1}
= [1 − β2(z − 1)]/[1 − β2(1 + β1)(z − 1)].

The Bernoulli-geometric distribution has pgf

P_BG(z) = 1 + q{[1 − β(z − 1)]^{−1} − 1}
= [1 − β(1 − q)(z − 1)]/[1 − β(z − 1)].
The ZM geometric distribution has pgf

P_ZMG(z) = p0 + (1 − p0){[1 − β*(z − 1)]^{−1} − (1 + β*)^{−1}}/{1 − (1 + β*)^{−1}}
= [1 − (p0·β* + p0 − 1)(z − 1)]/[1 − β*(z − 1)].

In P_BG(z), replace 1 − q with (1 + β1)^{−1} and replace β with β2(1 + β1)
to see that it matches P_GG(z). It is clear that the new parameters will stay
within the allowed ranges.

In P_BG(z), replace q with (1 − p0)(1 + β*)/β* and replace β with β*. Some
algebra leads to P_ZMG(z).
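The first substitution can be confirmed numerically; this sketch (illustrative parameter values, not from the text) compares the two pgfs at a few points:

```python
# Exercise 7.2 check: Bernoulli-geometric equals geometric-geometric when
# 1 - q = 1/(1+beta1) and beta = beta2*(1+beta1).
beta1, beta2 = 0.7, 2.0

def p_gg(z):
    p2 = 1 / (1 - beta2 * (z - 1))          # secondary geometric pgf
    return 1 / (1 - beta1 * (p2 - 1))       # primary geometric applied

def p_bg(z, q, beta):
    p2 = 1 / (1 - beta * (z - 1))
    return 1 + q * (p2 - 1)                 # Bernoulli primary applied

q = 1 - 1 / (1 + beta1)                     # so that 1 - q = (1+beta1)^(-1)
beta = beta2 * (1 + beta1)
for z in (0.0, 0.3, 0.9):
    assert abs(p_gg(z) - p_bg(z, q, beta)) < 1e-12
print("pgfs match")
```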
7.3 The binomial-geometric distribution has pgf

P_BG(z) = {1 + q([1 − β(z − 1)]^{−1} − 1)}^m = {[1 − β(1 − q)(z − 1)]/[1 − β(z − 1)]}^m.

The negative binomial-geometric distribution has pgf

P_NBG(z) = {1 − β1([1 − β2(z − 1)]^{−1} − 1)}^{−r}
= {[1 − β2(1 + β1)(z − 1)]/[1 − β2(z − 1)]}^{−r}
= {[1 − β2(z − 1)]/[1 − β2(1 + β1)(z − 1)]}^r.

In the binomial-geometric pgf, replace m with r, β(1 − q) with β2, and β
with (1 + β1)β2 to obtain P_NBG(z).
7.2 SECTION 7.2

7.4

P(z) = ∏_{i=1}^n exp{λ_i[P_Z(z) − 1]} = exp{(Σ_{i=1}^n λ_i)[P_Z(z) − 1]}.

This is a compound distribution. The primary distribution is Poisson with
parameter Σ_{i=1}^n λ_i. The secondary distribution has pgf P_Z(z).
7.5 P(z) = Σ_{k=0}^∞ z^k p_k, P^{(1)}(z) = Σ_{k=0}^∞ k z^{k−1} p_k, and in general

P^{(j)}(z) = Σ_{k=0}^∞ k(k − 1)⋯(k − j + 1)z^{k−j} p_k,
P^{(j)}(1) = Σ_{k=0}^∞ k(k − 1)⋯(k − j + 1)p_k = E[N(N − 1)⋯(N − j + 1)].

For the compound Poisson distribution,

P(z) = exp{λ[P_2(z) − 1]},
P^{(1)}(z) = λP(z)P_2^{(1)}(z),
P^{(1)}(1) = λP(1)m_1' = λm_1' = μ,
P^{(2)}(z) = λ²P(z)[P_2^{(1)}(z)]² + λP(z)P_2^{(2)}(z),
P^{(2)}(1) = λ²(m_1')² + λ(m_2' − m_1').

Then μ_2' = P^{(2)}(1) + μ = λ²(m_1')² + λm_2' and μ_2 = μ_2' − μ² = λm_2'.

P^{(3)}(z) = λ³P(z)[P_2^{(1)}(z)]³ + 3λ²P(z)P_2^{(1)}(z)P_2^{(2)}(z) + λP(z)P_2^{(3)}(z),
P^{(3)}(1) = μ_3' − 3μ_2' + 2μ = λ³(m_1')³ + 3λ²m_1'(m_2' − m_1') + λ(m_3' − 3m_2' + 2m_1').

Then,

μ_3 = μ_3' − 3μ_2'μ + 2μ³
= (μ_3' − 3μ_2' + 2μ) + 3μ_2' − 2μ − 3μ_2'μ + 2μ³
= λ³(m_1')³ + 3λ²m_1'(m_2' − m_1') + λ(m_3' − 3m_2' + 2m_1')
  + 3[λ²(m_1')² + λm_2'] − 2λm_1' − 3λm_1'[λ²(m_1')² + λm_2'] + 2λ³(m_1')³
= λ³[(m_1')³ − 3(m_1')³ + 2(m_1')³]
  + λ²[3m_1'm_2' − 3(m_1')² + 3(m_1')² − 3m_1'm_2']
  + λ[m_3' − 3m_2' + 2m_1' + 3m_2' − 2m_1']
= λm_3'.
7.6 For the binomial distribution, m_1' = mq and m_2' = mq(1 − q) + m²q².
For the third moment, P_2^{(3)}(z) = m(m − 1)(m − 2)[1 + q(z − 1)]^{m−3}q³
and P_2^{(3)}(1) = m(m − 1)(m − 2)q³. Then, m_3' = m(m − 1)(m − 2)q³ + 3mq −
3mq² + 3m²q² − 2mq.

μ = λm_1' = λmq.
σ² = λm_2' = λ(mq − mq² + m²q²) = λmq(1 − q + mq) = μ[1 + q(m − 1)].

μ_3 = λm_3' = λmq(m²q² − 3mq² + 2q² + 3 − 3q + 3mq − 2)
= 3λmq(1 − q + mq) − 2λmq + λmq(m²q² − 3mq² + 2q²)
= 3σ² − 2μ + λmq(m − 1)(m − 2)q².

Now,

[(m − 2)/(m − 1)](σ² − μ)²/μ = [(m − 2)/(m − 1)][λm(m − 1)q²]²/(λmq)
= λmq(m − 1)(m − 2)q² = μ_3 − 3σ² + 2μ,

and the relationship is shown.
7.3 SECTION 7.3

7.7 For the compound distribution, the pgf is P(z) = P_NB[P_P(z)], which is
exactly the same as for the mixed distribution because the mixing distribution
goes on the outside.
7.8

P_N(z) = ∏_{i=1}^n P_{N_i}(z) = ∏_{i=1}^n P_i[e^{λ(z−1)}].

Then N is mixed Poisson. The mixing distribution has pgf P(z) = ∏_{i=1}^n P_i(z).
7.9 Because Θ has a scale parameter, we can write Θ = cX, where X has density
(probability, if discrete) function f_X(x). Then, for the mixed distribution
(with formulas for the continuous case),

p_k = ∫ e^{−λθ}(λθ)^k/k! · f_X(θ/c)c^{−1} dθ = ∫ e^{−λcx}(λcx)^k/k! · f_X(x) dx.

The parameter λ appears only in the product λc. Therefore, the mixed distribution
does not depend on λ, as a compensating change in c leads to the same distribution.
7.10

p_k = ∫_0^∞ [e^{−θ}θ^k/k!][α²/(α + 1)](θ + 1)e^{−αθ} dθ
= α²/[k!(α + 1)] ∫_0^∞ (θ^{k+1} + θ^k)e^{−(α+1)θ} dθ
= α²/[k!(α + 1)] [Γ(k + 2)/(α + 1)^{k+2} + Γ(k + 1)/(α + 1)^{k+1}]
= α²[(k + 1)/(α + 1) + 1]/(α + 1)^{k+2}.

The pgf of the mixing distribution is

P(z) = ∫_0^∞ z^θ [α²/(α + 1)](θ + 1)e^{−αθ} dθ
= [α²/(α + 1)] ∫_0^∞ (θ + 1)e^{−(α − ln z)θ} dθ
= [α²/(α + 1)][1/(α − ln z)² + 1/(α − ln z)],

and so the pgf of the mixed distribution [obtained by replacing ln z with
λ(z − 1)] is

P(z) = [α²/(α + 1)]{1/[α − λ(z − 1)] + 1/[α − λ(z − 1)]²}
= [α/(α + 1)]{α/[α − λ(z − 1)]} + [1/(α + 1)]{α/[α − λ(z − 1)]}²,

which is a two-point mixture of negative binomial variables with identical
values of β = λ/α (and r = 1 and r = 2). Each is also a Poisson-logarithmic
distribution with identical logarithmic secondary distributions. The equivalent
distribution has a logarithmic secondary distribution with β = λ/α and a
primary distribution that is a mixture of Poissons. The first Poisson has
λ = ln(1 + λ/α) and the second Poisson has λ = 2 ln(1 + λ/α). To see that this
is correct, note that the pgf is

P(z) = [α/(α + 1)] exp{ln(1 + λ/α)[P_log(z) − 1]} + [1/(α + 1)] exp{2 ln(1 + λ/α)[P_log(z) − 1]},

where P_log(z) = 1 − ln[1 − λ(z − 1)/α]/ln(1 + λ/α). Because ln(1 + λ/α)[P_log(z) − 1]
= −ln[1 − λ(z − 1)/α], this reduces to

P(z) = [α/(α + 1)][1 − (λ/α)(z − 1)]^{−1} + [1/(α + 1)][1 − (λ/α)(z − 1)]^{−2},

as before.
7.4 SECTION 7.5

7.11 (a) p0 = e^{−4} = 0.01832. a = 0, b = 4, and then p1 = 4p0 = 0.07326 and
p2 = (4/2)p1 = 0.14653.

(b) p0 = 5^{−1} = 0.2. a = 4/5 = 0.8, b = 0, and then p1 = (4/5)p0 = 0.16
and p2 = (4/5)p1 = 0.128.

(c) p0 = (1 + 2)^{−2} = 0.11111. a = 2/3, b = (2 − 1)(2/3) = 2/3, and then
p1 = (2/3 + 2/3)p0 = 0.14815 and p2 = (2/3 + 2/6)p1 = 0.14815.

(d) p0 = (1 − 0.5)⁸ = 0.00391. a = −0.5/(1 − 0.5) = −1, b = (8 + 1)(1) = 9,
and then p1 = (−1 + 9)p0 = 0.03125 and p2 = (−1 + 9/2)p1 = 0.10938.

(e) p0 = 0, p1 = 4/[5 ln(5)] = 0.49707. a = 4/5 = 0.8, b = −0.8, and then
p2 = (0.8 − 0.8/2)p1 = 0.19883.

(f) p0 = 0, p1 = (−0.5)(4)/(5^{0.5} − 5) = 0.72361. a = 4/5 = 0.8, b = (−0.5 − 1)(0.8) =
−1.2, and then p2 = (0.8 − 1.2/2)p1 = 0.14472.

(g) The secondary probabilities were found in part (f). Then g0 = e^{−2} =
0.13534, g1 = (2/1)(1)(0.72361)(0.13534) = 0.19587, and

g2 = (2/2)[1(0.72361)(0.19587) + 2(0.14472)(0.13534)] = 0.18091.

(h) p0^M = 0.5 is given. p1^T = 1/5 = 0.2 and then p1^M = (1 − 0.5)(0.2) = 0.1.
From part (b), a = 0.8, b = 0, and then p2^M = (0.8)p1^M = 0.08.

(i) The secondary Poisson distribution has f0 = e^{−1} = 0.36788, f1 =
0.36788, and f2 = 0.18394. Then, g0 = e^{−4(1−0.36788)} = 0.07978. From
the recursive formula, g1 = (4/1)(1)(0.36788)(0.07978) = 0.11740, and g2 =
(4/2)[1(0.36788)(0.11740) + 2(0.18394)(0.07978)] = 0.14508.

(j) For the secondary ETNB distribution, f0 = 0 and f1 = 0.53333.
With a = b = 1/3, f2 = (1/3 + 1/6)f1 = 0.26667. Then, g0 = e^{−4} = 0.01832,
g1 = (4/1)(1)(0.53333)(0.01832) = 0.03908, and

g2 = (4/2)[1(0.53333)(0.03908) + 2(0.26667)(0.01832)] = 0.06123.

(k) With f0 = 0.5 (given), the other probabilities from part (j) must be
multiplied by 0.5 to produce f1 = 0.26667 and f2 = 0.13333. Then, g0 =
e^{−8(1−0.5)} = 0.01832, g1 = (8/1)(1)(0.26667)(0.01832) = 0.03908, and

g2 = (8/2)[1(0.26667)(0.03908) + 2(0.13333)(0.01832)] = 0.06123.
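The arithmetic in these parts follows two recursions, the (a,b,0) recursion p_k = (a + b/k)p_{k−1} and the compound Poisson recursion g_k = (λ/k)Σ_{j=1}^k j·f_j·g_{k−j}. A sketch (not part of the printed solution) reproducing parts (b) and (j):

```python
# Recursions used in Exercise 7.11.
import math

def ab0(p0, a, b, kmax):
    """(a,b,0) recursion: p_k = (a + b/k) * p_{k-1}."""
    p = [p0]
    for k in range(1, kmax + 1):
        p.append((a + b / k) * p[-1])
    return p

def compound_poisson(lam, f, kmax):
    """g_k = (lam/k) * sum_j j * f_j * g_{k-j}, with g_0 = exp(-lam*(1-f_0))."""
    g = [math.exp(-lam * (1 - f[0]))]
    for k in range(1, kmax + 1):
        g.append(lam / k * sum(j * f[j] * g[k - j] for j in range(1, k + 1)))
    return g

# Part (b): geometric with beta = 4.
print([round(p, 5) for p in ab0(0.2, 0.8, 0.0, 2)])   # [0.2, 0.16, 0.128]
# Part (j): Poisson(4) with ETNB secondary (0, 0.53333, 0.26667).
print([round(g, 5) for g in compound_poisson(4, [0.0, 0.53333, 0.26667], 2)])
```

The second line reproduces 0.01832, 0.03908, 0.06123 up to rounding of the inputs.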
7.12 The ratio p_k/p_{k−1} cannot be written as a + b/k for any choices of a, b, and p,
and for all k.
CHAPTER 8 SOLUTIONS
8.1 SECTION 8.2

8.1 For the excess loss variable,

f_Y(y) = 0.000003e^{−0.00001(y+5,000)}/[0.3e^{−0.00001(5,000)}] = 0.00001e^{−0.00001y},
F_Y(y) = 1 − e^{−0.00001y}.

For the left censored and shifted variable,

f_Y(y) = 1 − 0.3e^{−0.05} = 0.71463,              y = 0,
         0.000003e^{−0.00001(y+5,000)},           y > 0,
F_Y(y) = 0.71463,                                 y = 0,
         1 − 0.3e^{−0.00001(y+5,000)},            y > 0,

and it is interesting to note that the excess loss variable has an exponential
distribution.
8.2 For the per-payment variable,

f_Y(y) = 0.000003e^{−0.00001y}/[0.3e^{−0.00001(5,000)}] = 0.00001e^{−0.00001(y−5,000)}, y > 5,000,
F_Y(y) = 1 − e^{−0.00001(y−5,000)}, y > 5,000.

For the per-loss variable,

f_Y(y) = 1 − 0.3e^{−0.05} = 0.71463,   y = 0,
         0.000003e^{−0.00001y},        y > 5,000,
F_Y(y) = 0.71463,                      0 ≤ y < 5,000,
         1 − 0.3e^{−0.00001y},         y ≥ 5,000.
8.3 From Example 3.1, E(X) = 30,000, and from Exercise 3.9,

E(X ∧ 5,000) = 30,000[1 − e^{−0.00001(5,000)}] = 1,463.12.

Also, F(5,000) = 1 − 0.3e^{−0.00001(5,000)} = 0.71463, and so for an ordinary
deductible the expected cost per loss is 30,000 − 1,463.12 = 28,536.88 and per
payment is 28,536.88/0.28537 = 100,000. For the franchise deductible, the
expected costs are 28,536.88 + 5,000(0.28537) = 29,963.73 per loss and
100,000 + 5,000 = 105,000 per payment.
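These four quantities can be reproduced in a few lines; the sketch below (not part of the printed solution) assumes the model of Exercises 8.1-8.3, with S(x) = 0.3e^{−0.00001x} for x > 0 and E(X) = 30,000:

```python
# Deductible costs for the model of Exercises 8.1-8.3.
import math

EX = 30_000.0
def S(x):                                  # survival function for x > 0
    return 0.3 * math.exp(-0.00001 * x)

E_lim = EX * (1 - math.exp(-0.00001 * 5000))   # E(X ^ 5000)
per_loss = EX - E_lim                          # ordinary deductible, per loss
per_payment = per_loss / S(5000)               # ordinary deductible, per payment
franchise_per_loss = per_loss + 5000 * S(5000)
franchise_per_payment = per_payment + 5000
print(round(per_loss, 2), round(per_payment, 2))          # 28536.88 100000.0
print(round(franchise_per_loss, 2), franchise_per_payment)
```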
8.4 For risk 1,

E(X) − E(X ∧ k) = θ/(α − 1) − [θ/(α − 1)][1 − (θ/(θ + k))^{α−1}] = θ^α/[(α − 1)(θ + k)^{α−1}].

For risk 2, α is replaced by 0.8α, giving θ^{0.8α}/[(0.8α − 1)(θ + k)^{0.8α−1}].
The ratio is then

{θ^{0.8α}/[(0.8α − 1)(θ + k)^{0.8α−1}]} / {θ^α/[(α − 1)(θ + k)^{α−1}]}
= (θ + k)^{0.2α}(α − 1)/[θ^{0.2α}(0.8α − 1)].

As k goes to infinity, the limit is infinity.
8.5 The expected cost per payment with the 10,000 deductible is

[E(X) − E(X ∧ 10,000)]/[1 − F(10,000)] = (20,000 − 6,000)/(1 − 0.60) = 35,000.

At the old deductible, 40% of losses become payments. The new deductible
must have 20% of losses become payments, and so the new deductible is 22,500.
The expected cost per payment is

[E(X) − E(X ∧ 22,500)]/[1 − F(22,500)] = (20,000 − 9,500)/(1 − 0.80) = 52,500.

The increase is 17,500/35,000 = 50%.
8.2 SECTION 8.3

8.6 From Exercise 8.3, the loss elimination ratio is

(30,000 − 28,536.88)/30,000 = 0.0488.

8.7 With inflation at 10% we need

E(X ∧ 5,000/1.1) = 30,000[1 − e^{−0.00001(5,000/1.1)}] = 1,333.11.

After inflation, the expected cost per loss is 1.1(30,000 − 1,333.11) = 31,533.58,
an increase of 10.50%. For the per-payment calculation we need F(5,000/1.1) =
1 − 0.3e^{−0.00001(5,000/1.1)} = 0.71333 for an expected cost of
31,533.58/(1 − 0.71333) = 110,000, an increase of exactly 10%.
8.8 E(X) = exp(7 + 2²/2) = exp(9) = 8,103.08. The limited expected value is

E(X ∧ 2,000) = e⁹Φ[(ln 2,000 − 7 − 2²)/2] + 2,000{1 − Φ[(ln 2,000 − 7)/2]}
= 8,103.08Φ(−1.7) + 2,000[1 − Φ(0.3)]
= 8,103.08(0.0446) + 2,000(0.3821) = 1,125.60.

The loss elimination ratio is 1,125.60/8,103.08 = 0.139. With 20% inflation,
the probability of exceeding the deductible is

Pr(1.2X > 2,000) = Pr(X > 2,000/1.2) = 1 − Φ[(ln(2,000/1.2) − 7)/2]
= 1 − Φ(0.2093) = 0.4171,

and, therefore, 4.171 losses can be expected to produce payments.
8.9 The loss elimination ratio prior to inflation is

E(X ∧ 2k)/E(X) = {[k/(2 − 1)][1 − k/(2k + k)]}/[k/(2 − 1)] = 1 − k/(2k + k) = 2/3.

Because θ is a scale parameter, inflation of 100% will double it to equal 2k.
Repeating the preceding calculation gives the new loss elimination ratio of

1 − 2k/(2k + 2k) = 1/2.
8.10 The original loss elimination ratio is

E(X ∧ 500)/E(X) = 1,000(1 − e^{−500/1,000})/1,000 = 0.39347.

Doubling it produces the equation

0.78694 = 1,000(1 − e^{−d/1,000})/1,000 = 1 − e^{−d/1,000}.

The solution is d = 1,546.
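Solving the displayed equation for d is a one-line computation (a sketch, not part of the printed solution):

```python
# Exercise 8.10: solve 1 - exp(-d/1000) = 2*(original LER) for d.
import math

target = 2 * (1 - math.exp(-500 / 1000))   # doubled loss elimination ratio
d = -1000 * math.log(1 - target)
print(round(d))   # 1546
```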
8.11 For the current year the expected cost per payment is

[E(X) − E(X ∧ 15,000)]/[1 − F(15,000)] = (20,000 − 7,700)/(1 − 0.70) = 41,000.

After 50% inflation it is

1.5[E(X) − E(X ∧ 15,000/1.5)]/[1 − F(15,000/1.5)]
= 1.5[E(X) − E(X ∧ 10,000)]/[1 − F(10,000)]
= 1.5(20,000 − 6,000)/(1 − 0.60) = 52,500.
8.12 The ratio desired is

E(X ∧ 10,000)/E(X ∧ 1,000)
= {e^{6.9078+1.5174²/2}Φ[(ln 10,000 − 6.9078 − 1.5174²)/1.5174] + 10,000(1 − Φ[(ln 10,000 − 6.9078)/1.5174])}
  / {e^{6.9078+1.5174²/2}Φ[(ln 1,000 − 6.9078 − 1.5174²)/1.5174] + 1,000(1 − Φ[(ln 1,000 − 6.9078)/1.5174])}
= {e^{8.059}Φ(0) + 10,000[1 − Φ(1.5174)]}/{e^{8.059}Φ(−1.5174) + 1,000[1 − Φ(0)]}
= [3,162(0.5) + 10,000(0.0647)]/[3,162(0.0647) + 1,000(0.5)] = 3.162.

This year, the probability of exceeding 1,000 is Pr(X > 1,000) = 1 −
Φ[(ln 1,000 − 6.9078)/1.5174] = 1 − Φ(0) = 0.5. With 10% inflation, the distribution is lognormal
with parameters μ = 6.9078 + ln 1.1 = 7.0031 and σ = 1.5174. The probability is
1 − Φ[(ln 1,000 − 7.0031)/1.5174] = 1 − Φ(−0.0628) = 0.525, an increase of
5%. Alternatively, the original lognormal distribution could be used and then
Pr(X > 1,000/1.1) computed.
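The lognormal limited expected value used repeatedly here can be packaged as a small helper (a sketch, not part of the printed solution; the function name is illustrative):

```python
# E(X ^ u) for a lognormal: exp(mu + s^2/2)*Phi((ln u - mu - s^2)/s)
#                           + u*(1 - Phi((ln u - mu)/s)).
import math

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def lev_lognormal(u, mu, s):
    return (math.exp(mu + s * s / 2) * Phi((math.log(u) - mu - s * s) / s)
            + u * (1 - Phi((math.log(u) - mu) / s)))

mu, s = 6.9078, 1.5174
ratio = lev_lognormal(10_000, mu, s) / lev_lognormal(1_000, mu, s)
print(round(ratio, 3))   # 3.162, matching the solution
```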
8.13 The desired quantity is the expected value of a right truncated variable.
It is

∫_0^{1,000} x f(x) dx / F(1,000) = {E(X ∧ 1,000) − 1,000[1 − F(1,000)]}/F(1,000)
= [E(X ∧ 1,000) − 400]/0.6.

From the loss elimination ratio,

0.30 = E(X ∧ 1,000)/E(X) = E(X ∧ 1,000)/2,000,

and so E(X ∧ 1,000) = 600, making the answer 200/0.6 = 333.
8.3 SECTION 8.4

8.14 From Exercise 3.9 we have

E(X ∧ 150,000) = 30,000[1 − e^{−0.00001(150,000)}] = 23,306.10.

After 10% inflation the expected cost is

1.1E(X ∧ 150,000/1.1) = 33,000[1 − e^{−0.00001(150,000/1.1)}] = 24,560.94,

for an increase of 5.38%.

8.15 From Exercise 3.10 we have e_X(d) = θ + d = 100 + d. Therefore, the
range is 100 to infinity. With 10% inflation, θ becomes 110 and
the mean residual life is e_Y(d) = 110 + d. The ratio is (110 + d)/(100 + d). As d increases,
the ratio decreases from 1.1 to 1. The mean residual life of the limited variable is

e_Z(d) = [∫_d^{500} (x − d)f(x) dx + ∫_{500}^∞ (500 − d)f(x) dx] / S(d).

This function is a quadratic function of d. It starts at 83.33, increases to a
maximum of 150 at d = 200, and decreases to 0 when d = 500. The range is
0 to 150.
8.4 SECTION 8.5

8.16 25 = E(X) = ∫_0^w (1 − x/w) dx = w/2, for w = 50. S(10) = 1 − 10/50 = 0.8.
Then

E(Y) = ∫_{10}^{50} (x − 10)(1/50) dx / 0.8 = 20,
E(Y²) = ∫_{10}^{50} (x − 10)²(1/50) dx / 0.8 = 533.33,
Var(Y) = 533.33 − 20² = 133.33.
8.17 The bonus is B = 500,000(0.7 − L/500,000)/3 = (350,000 − L)/3 if
positive. The bonus is positive provided L < 350,000. The expected bonus is

E(B) = (1/3) ∫_0^{350,000} (350,000 − l)f_L(l) dl
= (1/3){350,000F_L(350,000) − [E(L ∧ 350,000) − 350,000(1 − F_L(350,000))]}
= (1/3)[350,000 − E(L ∧ 350,000)]
= (1/3){350,000 − (600,000/2)[1 − (600,000/950,000)²]}
= 56,556.
8.18 The quantity we seek is

1.1[E(X ∧ 22/1.1) − E(X ∧ 11/1.1)]/[E(X ∧ 22) − E(X ∧ 11)]
= 1.1(17.25 − 10)/(18.1 − 10.95) = 1.115.
8.19 This exercise asks for quantities on a per-loss basis. The expected value
is

E(X) − E(X ∧ 100) = 1,000 − 1,000(1 − e^{−100/1,000}) = 904.84.

To obtain the second moment, we need

E[(X ∧ 100)²] = ∫_0^{100} x²(0.001)e^{−0.001x} dx + (100)²e^{−100/1,000}
= −e^{−0.001x}(x² + 2,000x + 2,000,000)|_0^{100} + 9,048.37
= 9,357.68.

The second moment is

E(X²) − E[(X ∧ 100)²] − 200E(X) + 200E(X ∧ 100)
= 2(1,000)² − 9,357.68 − 200(1,000) + 200(95.16)
= 1,809,674.32,

for a variance of 1,809,674.32 − 904.84² = 990,938.89.
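These moments can be checked directly from limited-moment formulas; the sketch below (not part of the printed solution) works in full precision, so the figures agree with the solution up to its intermediate rounding:

```python
# Exercise 8.19: exponential theta = 1000, deductible d = 100.
import math

theta, d = 1000.0, 100.0
S_d = math.exp(-d / theta)
EX, EX2 = theta, 2 * theta**2
E_lim = theta * (1 - S_d)                          # E(X ^ d)
# E[(X ^ d)^2] = 2*theta^2 - S_d*(2*theta*d + 2*theta^2)
E_lim2 = 2 * theta**2 - S_d * (2 * theta * d + 2 * theta**2)

mean = EX - E_lim                                  # per-loss mean, 904.84
second = EX2 - E_lim2 - 2 * d * EX + 2 * d * E_lim # per-loss second moment
var = second - mean**2
print(round(mean, 2), round(second, 2), round(var, 2))
```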
8.20 Under the old plan the expected cost is 500/1 = 500. Under the new
plan, the expected claim cost is K. The bonus is

B = 0.5(500 − X),        X < 500,
    0.5(500 − 500) = 0,  X ≥ 500,

which is 250 less a benefit with a limit of 500 and a coinsurance of 0.5, that is,
B = 250 − 0.5(X ∧ 500). Therefore,

E(B) = 250 − 0.5E(X ∧ 500) = 250 − 0.5[K − K²/(K + 500)] = 250 − K/2 + K²/(2K + 1,000).

The equation to solve is

500 = K + 250 − K/2 + K²/(2K + 1,000),

and the solution is K = 354.
8.21 For year a, expected losses per claim are 2,000, and thus 5,000 claims
are expected. Per loss, the reinsurer's expected cost is

E(X) − E(X ∧ 3,000) = 2,000 − 2,000[1 − 2,000/(2,000 + 3,000)] = 800,

and, therefore, the total reinsurance premium is 1.1(5,000)(800) = 4,400,000.

For year b, there are still 5,000 claims expected. Per loss, the reinsurer's
expected cost is

1.05[E(X) − E(X ∧ 3,000/1.05)] = 1.05{2,000 − 2,000[1 − 2,000/(2,000 + 3,000/1.05)]} = 864.706,

and the total reinsurance premium is 1.1(5,000)(864.706) = 4,755,882. The
ratio is 1.0809.
8.22 For this uniform distribution,

E(X ∧ u) = ∫_0^u x(0.00002) dx + ∫_u^{50,000} u(0.00002) dx
= 0.00001u² + 0.00002u(50,000 − u)
= u − 0.00001u².

From Theorem 8.7, the expected payment per payment is

[E(X ∧ 25,000) − E(X ∧ 5,000)]/[1 − F(5,000)] = (18,750 − 4,750)/(1 − 5,000/50,000) = 15,556.
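A short sketch (not part of the printed solution) reproducing this from the limited expected value just derived:

```python
# Exercise 8.22: uniform(0, 50000), deductible 5000, limit 25000.
def lev(u):
    return u - 0.00001 * u * u    # E(X ^ u), valid for 0 <= u <= 50000

F5000 = 5000 / 50000
per_payment = (lev(25_000) - lev(5_000)) / (1 - F5000)
print(round(per_payment, 2))   # 15555.56
```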
8.23 This is a combination of franchise deductible and policy limit, so none
of the earlier results apply. From the definition of this policy, the expected cost per
loss is

∫_{50,000}^{100,000} x f(x) dx + 100,000[1 − F(100,000)]
= ∫_0^{100,000} x f(x) dx + 100,000[1 − F(100,000)] − ∫_0^{50,000} x f(x) dx
= E(X ∧ 100,000) − E(X ∧ 50,000) + 50,000[1 − F(50,000)].

Alternatively, it could be argued that this policy has two components. The
first is an ordinary deductible of 50,000 and the second is a bonus payment
of 50,000 whenever there is a payment. The first two terms in the preceding
formula reflect the cost of the ordinary deductible, and the third term is the
extra cost of the bonus. Using the lognormal distribution (the 50,000 terms
cancel), the answer is

e^{10.5}Φ[(ln 100,000 − 10 − 1)/1] + 100,000{1 − Φ[(ln 100,000 − 10)/1]} − e^{10.5}Φ[(ln 50,000 − 10 − 1)/1]
= e^{10.5}Φ(0.513) + 100,000[1 − Φ(1.513)] − e^{10.5}Φ(−0.180)
= e^{10.5}(0.6959) + 100,000(0.0652) − e^{10.5}(0.4285) = 16,231.
8.24 E(Y^L) = E(X) − E(X ∧ 30,000) = 10,000 − 10,000(1 − e^{−3}) = 497.87. E(Y^P) = 497.87/e^{−3} = 10,000.
E[(Y^L)²] = E(X²) − E[(X ∧ 30,000)²] − 60,000E(X) + 60,000E(X ∧ 30,000)
= 2(10,000)² − 10,000²(2)Γ(3; 3) − 30,000²e^{−3} − 60,000(10,000) + 60,000(9,502.13)
= 9,957,413.67.
E[(Y^P)²] = 9,957,413.67/e^{−3} = 200,000,000. Then Var(Y^L) = 9,957,413.67 − 497.87² = 9,709,539.14, and the coefficient of variation is 9,709,539.14^{1/2}/497.87 = 6.259. For Y^P, the variance is 200,000,000 − 10,000² = 100,000,000 and the coefficient of variation is 1.
8.25 The density function over the respective intervals is 0.3/50 = 0.006, 0.36/50 = 0.0072, 0.18/100 = 0.0018, and 0.16/200 = 0.0008. The answer is
∫₀^{50} x²(0.006) dx + ∫_{50}^{100} x²(0.0072) dx + ∫_{100}^{200} x²(0.0018) dx
+ ∫_{200}^{350} x²(0.0008) dx + ∫_{350}^{400} 350²(0.0008) dx = 20,750.
8.26 The 10,000 payment by the insured is reached when losses hit 14,000.
The breakpoints are at 1,000, 6,000, and 14,000. So the answer must be of
the form
aE(X ∧ 1,000) + bE(X ∧ 6,000) + cE(X ∧ 14,000) + dE(X).
For the insurance to pay 90% above 14,000, it must be that d = 0.9. To pay
nothing from 6,000 to 14,000, c + d = 0, so c = -0.9. To pay 80% from 1,000
to 6,000, b + c + d = 0.8, so b = 0.8. Finally, to pay nothing below 1,000,
a + b + c + d = 0, so a = —0.8. The answer is
-0.8(833.33) + 0.8(2,727.27) - 0.9(3,684.21) + 0.9(5,000) = 2,699.36.
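The limited expected values quoted in this solution are consistent with a Pareto distribution with α = 2 and θ = 5,000; taking that as a working assumption (it is made here only for illustration), the coefficient decomposition can be checked numerically:

```python
# Sketch of the coefficient decomposition in Exercise 8.26, assuming a
# Pareto distribution with alpha = 2, theta = 5,000 (consistent with the
# limited expected values 833.33, 2,727.27, 3,684.21 and mean 5,000).

def lev_pareto(u, alpha=2.0, theta=5000.0):
    """E(X ^ u) for a Pareto distribution (alpha != 1)."""
    return theta / (alpha - 1) * (1 - (theta / (theta + u)) ** (alpha - 1))

a, b, c, d = -0.8, 0.8, -0.9, 0.9
expected = (a * lev_pareto(1000) + b * lev_pareto(6000)
            + c * lev_pareto(14000) + d * 5000)   # E(X) = 5,000
print(round(expected, 2))
# → 2699.36
```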
8.27 Without any modifications the expected payment is λ = 3. With the coinsurance alone, the expected payment is 3a. For the deductible,
E(X ∧ 2) = 0(e^{−3}) + 1(3e^{−3}) + 2(1 − e^{−3} − 3e^{−3}) = 1.751065.
The expected cost with the deductible is 3 − 1.751065 = 1.248935. Then, a = 1.248935/3 = 0.4163.
8.28 The maximum covered loss is 95, because a payment of 0.8(95 − 20) = 60 exhausts the limit. The expected payment per payment is
0.8[E(X ∧ 95) − E(X ∧ 20)]/[1 − F(20)],
where, for F(x) = x²/10,000 on (0, 100),
E(X ∧ u) = ∫₀^u (1 − x²/10,000) dx = u − u³/30,000.
Then
0.8[(95 − 95³/30,000) − (20 − 20³/30,000)]/(1 − 20²/10,000) = 0.8(66.421 − 19.733)/0.96 = 38.91.
SECTION 8.6
8.29 (a) For frequency, the probability of a claim is 0.03, and for severity, the probability of a 10,000 claim is 1/3 and of a 20,000 claim is 2/3.
(b) Pr(X = 0) = 1/3 and Pr(X = 10,000) = 2/3.
(c) For frequency, the probability of a claim is 0.02, and the severity distribution places probability 1 at 10,000.
8.30 v = [1 − F(1,000)]/[1 − F(500)] = 9/16. The original frequency distribution is P-ETNB with λ = 3, r = −0.5, and β = 2. The new frequency is P-ZMNB with λ = 3, r = −0.5, β = 2(9/16) = 9/8, and
p₀^M = [(1 + 9/8)^{0.5} − (1 + 2)^{0.5}]/[1 − (1 + 2)^{0.5}] = [(17/8)^{0.5} − 3^{0.5}]/(1 − 3^{0.5}) = 0.37472.
This is equivalent to a P-ETNB with λ = 3(1 − 0.37472) = 1.87584, β = 9/8, and r = −0.5, which is P-IG with λ = 1.87584 and β = 9/8.
The new severity distribution has cdf
F*(x) = [F(x + 500) − F(500)]/[1 − F(500)] = 1 − [1,500/(1,500 + x)]²,
which is Pareto with α = 2 and θ = 1,500.
8.31 The value of p₀^M would be negative, and so there is no appropriate frequency distribution to describe the effect of lowering the deductible.
8.32 We have
P_{N^P}(z) = P_{N^L}(1 − v + vz)
= 1 − {1 − (1 − v + vz)}^{−r}
= 1 − (v − vz)^{−r}
= 1 − v^{−r}(1 − z)^{−r}
= 1 − v^{−r}[1 − P_{N^L}(z)]
= (1 − v^{−r}) + v^{−r}P_{N^L}(z).
Therefore,
Pr(N^P = 0) = 1 − v^{−r},
and
Pr(N^P = n) = [1 − Pr(N^P = 0)] Pr(N^L = n), n = 1, 2, 3, ....
8.33 Before the deductible, the expected number of losses is rβ = 15. From the Weibull distribution, S(200) = exp[−(200/1,000)^{0.3}] = 0.5395. The expected number of payments is 15(0.5395) = 8.0925.
8.34 S(20) = 1 − (20/100)² = 0.96. The expected number of claims without the deductible is 4/0.96 = 4.1667.
CHAPTER 9
CHAPTER 9 SOLUTIONS
9.1
SECTION 9.1
9.1 The number of claims, N, has a binomial distribution with m = number of policies and q = 0.1. The claim amount variables, X₁, X₂, ..., are all discrete with Pr(X_j = 5,000) = 1 for all j.
9.2 (a) An individual model is best because each insured has a unique distribution. (b) A collective model is best because each malpractice claim has the same distribution and there is a random number of such claims. (c) Each family can be modeled with a collective model using a compound frequency distribution. There is a distribution for the number of family members, and then each family member has a random number of claims.
Student Solutions Manual to Accompany Loss Models: From Data to Decisions, Fourth Edition. By Stuart A. Klugman, Harry H. Panjer, Gordon E. Willmot.
Copyright © 2012 John Wiley & Sons, Inc.
9.2
SECTION 9.2
9.3 With P_N(z) = [Q(z)]^a,
E(N) = P_N^{(1)}(1),
P_N^{(1)}(z) = aQ(z)^{a−1}Q^{(1)}(z),
P_N^{(1)}(1) = aQ(1)^{a−1}Q^{(1)}(1) = aQ^{(1)}(1),
because Q(1) = 1.
9.4 The Poisson and all compound distributions with a Poisson primary distribution have a pgf of the form P(z) = exp{λ[P₂(z) − 1]} = [Q(z)]^λ, where Q(z) = exp[P₂(z) − 1].
The negative binomial and geometric distributions and all compound distributions with a negative binomial or geometric primary distribution have P(z) = {1 − β[P₂(z) − 1]}^{−r} = [Q(z)]^r, where Q(z) = {1 − β[P₂(z) − 1]}^{−1}.
The same is true for the binomial distribution and binomial-X compound distributions with a = m and Q(z) = 1 + q[P₂(z) − 1].
The zero-truncated and zero-modified distributions cannot be written in this form.
9.3
SECTION 9.3
9.5 To simplify writing the expressions, let
N_{jp} = E(N^j), N_j = E[(N − N_{1p})^j], X_{jp} = E(X^j), X_j = E[(X − X_{1p})^j],
and similarly for S. For the first moment, P_S^{(1)}(z) = P_N^{(1)}[P_X(z)]P_X^{(1)}(z), so
E(S) = P_S^{(1)}(1) = P_N^{(1)}[P_X(1)]P_X^{(1)}(1) = N_{1p}X_{1p} = E(N)E(X).
For the second moment use
P_S^{(2)}(1) = S_{2p} − S_{1p} = P_N^{(2)}[P_X(1)][P_X^{(1)}(1)]² + P_N^{(1)}[P_X(1)]P_X^{(2)}(1)
= (N_{2p} − N_{1p})(X_{1p})² + N_{1p}(X_{2p} − X_{1p}).
Then
Var(S) = S_2 = S_{2p} − (S_{1p})² = S_{2p} − S_{1p} + S_{1p} − (S_{1p})²
= (N_{2p} − N_{1p})(X_{1p})² + N_{1p}(X_{2p} − X_{1p}) + N_{1p}X_{1p} − (N_{1p})²(X_{1p})²
= N_{1p}[X_{2p} − (X_{1p})²] + [N_{2p} − (N_{1p})²](X_{1p})²
= N_{1p}X_2 + N_2(X_{1p})².
For the third moment use
P_S^{(3)}(1) = S_{3p} − 3S_{2p} + 2S_{1p}
= P_N^{(3)}[P_X(1)][P_X^{(1)}(1)]³ + 3P_N^{(2)}[P_X(1)]P_X^{(1)}(1)P_X^{(2)}(1) + P_N^{(1)}[P_X(1)]P_X^{(3)}(1)
= (N_{3p} − 3N_{2p} + 2N_{1p})(X_{1p})³ + 3(N_{2p} − N_{1p})X_{1p}(X_{2p} − X_{1p}) + N_{1p}(X_{3p} − 3X_{2p} + 2X_{1p}).
Then, since S_3 = S_{3p} − 3S_{2p}S_{1p} + 2(S_{1p})³, substituting the expression above together with S_{2p} = S_2 + (S_{1p})² and S_{1p} = N_{1p}X_{1p} and simplifying (the raw moments of N and X cancel) gives
S_3 = N_{1p}X_3 + 3N_2X_{1p}X_2 + N_3(X_{1p})³.

9.6
E(X) = 1,000 + 0.8(500) = 1,400.
Var(X) = Var(X₁) + 0.8²Var(X₂) + 2(0.8)Cov(X₁, X₂)
= 500² + 0.64(300)² + 2(0.8)(100,000) = 467,600.
E(S) = E(N)E(X) = 4(1,400) = 5,600.
Var(S) = E(N)Var(X) + Var(N)E(X)² = 4(467,600) + 4(1,400)² = 9,710,400.
9.7 E(S) = 15(5)(5) = 375.
Var(S) = 15(5)(100/12) + 15(5)(6)(5)² = 11,875. StDev(S) = 108.97.
The 95th percentile is 375 + 1.645(108.97) = 554.26.
9.8
Var(N) = E[Var(N|Λ)] + Var[E(N|Λ)] = E(Λ) + Var(Λ),
E(Λ) = 0.25(5) + 0.25(3) + 0.5(2) = 3,
Var(Λ) = 0.25(25) + 0.25(9) + 0.5(4) − 9 = 1.5,
Var(N) = 3 + 1.5 = 4.5.
9.9 The calculations appear in Table 9.1.

Table 9.1  Results for Exercise 9.9.

x    f₁(x)   f₂(x)   f₃(x)   f_{1*2}(x)   f_S(x)
0    0.9     0.5     0.25    0.45         0.1125
1    0.1     0.3     0.25    0.32         0.1925
2            0.2     0.25    0.21         0.2450
3                    0.25    0.02         0.2500
4                                         0.1375
5                                         0.0575
6                                         0.0050
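The convolutions behind Table 9.1 can be checked with a direct discrete convolution:

```python
# Sketch of the convolutions in Exercise 9.9: f_S = f1 * f2 * f3.

def convolve(f, g):
    """Convolution of two discrete densities given as lists on 0, 1, 2, ...."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f1 = [0.9, 0.1]
f2 = [0.5, 0.3, 0.2]
f3 = [0.25, 0.25, 0.25, 0.25]
f12 = convolve(f1, f2)
fS = convolve(f12, f3)
print([round(p, 4) for p in fS])
# [0.1125, 0.1925, 0.245, 0.25, 0.1375, 0.0575, 0.005]
```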
9.10 The calculations appear in Table 9.2.

Table 9.2  Results for Exercise 9.10.

x    f₁(x)   f₂(x)   f₃(x)   f_{1*2}(x)   f_S(x)
0    0.6     0.25    p       0.150        0.15p
1    0.2     0.25    1−p     0.200        0.15 + 0.05p
2    0.1     0.25            0.225        0.20 + 0.025p
3    0.1     0.25            0.250        0.225 + 0.025p
4                            0.100        0.25 − 0.15p
5                            0.050        0.10 − 0.05p
6                            0.025        0.05 − 0.025p
7                                         0.025 − 0.025p

Then 0.06 = f_S(5) = 0.1 − 0.05p, so p = 0.8.

9.11 If all 10 do not have AIDS,
E(S) = 10(1,000) = 10,000,
Var(S) = 10(250,000) = 2,500,000,
and so the premium is 10,000 + 0.1(2,500,000)^{1/2} = 10,158.
If the number with AIDS has the binomial distribution with m = 10 and q = 0.01, then, letting N be the number with AIDS,
E(S) = E[E(S|N)] = E[70,000N + 1,000(10 − N)]
= 10,000 + 69,000[10(0.01)]
= 16,900,
Var(S) = Var[E(S|N)] + E[Var(S|N)]
= Var[70,000N + 1,000(10 − N)] + E[1,600,000N + 250,000(10 − N)]
= 69,000²[10(0.01)(0.99)] + 2,500,000 + 1,350,000[10(0.01)]
= 473,974,000,
and so the premium is 16,900 + 0.1(473,974,000)^{1/2} = 19,077. The ratio is 10,158/19,077 = 0.532.
9.12 Let M be the random number of males and C be the number of cigarettes smoked. Then E(C) = E[E(C|M)] = E[6M + 3(8 − M)] = 3E(M) + 24. But M has the binomial distribution with mean 8(0.4) = 3.2, and so E(C) = 3(3.2) + 24 = 33.6.
Var(C) = E[Var(C|M)] + Var[E(C|M)]
= E[64M + 31(8 − M)] + Var(3M + 24)
= 33E(M) + 248 + 9Var(M)
= 33(3.2) + 248 + 9(8)(0.4)(0.6) = 370.88.
The answer is 33.6 + √370.88 = 52.86.
9.13 For insurer A, the group pays the net premium, so the expected total cost is just the expected total claims, that is, E(S) = 5.
For insurer B, the cost to the group is 7 − D, where D is the dividend. We have
D = 7k − S for S < 7k and D = 0 for S ≥ 7k.
Then E(D) = ∫₀^{7k} (7k − s)(0.1) ds = 2.45k². We want 5 = E(7 − D) = 7 − 2.45k², and so k = 0.9035.
9.14 Let θ be the underwriter's estimated mean. The underwriter computes the premium as
2E[(S − 1.25θ)₊] = 2∫_{1.25θ}^∞ (s − 1.25θ)θ^{−1}e^{−s/θ} ds = 2θe^{−1.25}.
Let μ be the true mean. Then θ = 0.9μ. The true expected loss is
E[(S − 1.25(0.9)μ)₊] = ∫_{1.125μ}^∞ (s − 1.125μ)μ^{−1}e^{−s/μ} ds = μe^{−1.125}.
The loading is
2(0.9)μe^{−1.25}/(μe^{−1.125}) − 1 = 0.5885.
9.15 A convenient relationship is, for discrete distributions on whole numbers, E[(X − d − 1)₊] = E[(X − d)₊] − 1 + F(d). For this problem, E[(X − 0)₊] = E(X) = 4, E[(X − 1)₊] = 4 − 1 + 0.05 = 3.05, E[(X − 2)₊] = 3.05 − 1 + 0.11 = 2.16, and E[(X − 3)₊] = 2.16 − 1 + 0.36 = 1.52. Then, by linear interpolation, d = 2 + (2 − 2.16)/(1.52 − 2.16) = 2.25.
9.16 15 = ∫_{100}^∞ [1 − F(s)] ds, 10 = ∫_{120}^∞ [1 − F(s)] ds, and F(120) − F(80) = 0.
Subtracting the second equality from the first yields 5 = ∫_{100}^{120} [1 − F(s)] ds, but over this range F(s) = F(80), and so
5 = ∫_{100}^{120} [1 − F(s)] ds = ∫_{100}^{120} [1 − F(80)] ds = 20[1 − F(80)],
and, therefore, F(80) = 0.75.
9.17
E(A) = ∫_{50k}^{100} (x/k − 50)(0.01) dx = [x²/(2k) − 50x](0.01)|_{50k}^{100} = 50k^{−1} − 50 + 12.5k.
E(B) = ∫₀^{100} kx(0.01) dx = [kx²/2](0.01)|₀^{100} = 50k.
The solution to 50k^{−1} − 50 + 12.5k = 50k is k = 2/3.
9.18 E(X) = 440, F(30) = 0.3, f(x) = 0.01 for 0 < x < 30.
E(benefits) = ∫₀^{30} 20x(0.01) dx + ∫_{30}^∞ [600 + 100(x − 30)]f(x) dx
= 90 + ∫_{30}^∞ (−2,400)f(x) dx + 100∫_{30}^∞ xf(x) dx
= 90 − 2,400[1 − F(30)] + 100∫₀^∞ xf(x) dx − 100∫₀^{30} xf(x) dx
= 90 − 2,400(0.7) + 100(440) − 100∫₀^{30} x(0.01) dx
= 90 − 1,680 + 44,000 − 450 = 41,960.

9.19
E(S) = E(N)E(X) = [0(0.5) + 1(0.4) + 3(0.1)][1(0.9) + 10(0.1)] = 0.7(1.9) = 1.33.
We require Pr(S > 3.99). Using the calculation in Table 9.3, Pr(S > 3.99) = 1 − 0.5 − 0.36 − 0.0729 = 0.0671.
Table 9.3  Calculations for Exercise 9.19.

x    f_X^{*0}(x)   f_X^{*1}(x)   f_X^{*2}(x)   f_X^{*3}(x)   p_n     f_S(x)
0    1             0             0             0             0.5     0.5000
1    0             0.9           0             0             0.4     0.3600
2    0             0             0.81          0             0       0
3    0             0             0             0.729         0.1     0.0729
9.20 For 100 independent lives, E(S) = 100mq and Var(S) = 100m²q(1 − q) = 250,000. The premium is 100mq + 500. For this particular group,
E(S) = 97(mq) + (3m)q = 100mq,
Var(S) = 97m²q(1 − q) + (3m)²q(1 − q) = 106m²q(1 − q) = 265,000,
and the premium is 100mq + 514.78. The difference is 14.78.
9.21
E(S) = 1(8,000)(0.025) + 2(8,000)(0.025) = 600,
Var(S) = 1²(8,000)(0.025)(0.975) + 2²(8,000)(0.025)(0.975) = 975.
The cost of reinsurance is 0.03(2)(4,500) = 270.
Pr(S + 270 > 1,000) = Pr(S > 730) = Pr[(S − 600)/√975 > (730 − 600)/√975],
and (730 − 600)/√975 = 4.163, so K = 4.163.
9.22
E(Z) = ∫_{10}^{100} 0.8(y − 10)(0.02)(1 − 0.01y) dy = 0.016∫_{10}^{100} (−0.01y² + 1.1y − 10) dy = 19.44.

9.23
Pr(S > 100) = Σ_{n=0}^∞ Pr(N = n) Pr(X^{*n} > 100)
= 0.5(0) + 0.2 Pr(X > 100) + 0.2 Pr(X^{*2} > 100) + 0.1 Pr(X^{*3} > 100).
Because X ~ N(100, 9), X^{*2} ~ N(200, 18), and X^{*3} ~ N(300, 27),
Pr(S > 100) = 0.2 Pr[Z > (100 − 100)/√9] + 0.2 Pr[Z > (100 − 200)/√18] + 0.1 Pr[Z > (100 − 300)/√27]
= 0.2(0.5) + 0.2(1) + 0.1(1) = 0.4.
9.24 The calculations are in Table 9.4. The frequency probabilities are binomial with m = 4 and q = 0.5 (p₀ = 1/16, p₁ = 1/4, p₂ = 3/8), and the severity distribution places probability 0.4 at 2,000 and 0.6 at 3,000, so f_X^{*2}(4,000) = 0.4² = 0.16.

Table 9.4  Calculations for Exercise 9.24.

s        f_S(s)
0        0.0625
1,000    0.0000
2,000    0.1000
3,000    0.1500
4,000    0.0600
5,000+   0.6275

The expected retained payment is 2,000(0.1) + 3,000(0.15) + 4,000(0.06) + 5,000(0.6275) = 4,027.5, and the total cost is 4,027.5 + 1,472 = 5,499.5.
9.25 In general, paying for days a through b, the expected number of days is
Σ_{k=a}^{b} (k − a + 1)p_k + (b − a + 1)[1 − F(b)]
= Σ_{k=a}^{b} Σ_{j=a}^{k} p_k + (b − a + 1)[1 − F(b)]
= Σ_{j=a}^{b} [F(b) − F(j − 1)] + (b − a + 1)[1 − F(b)]
= Σ_{j=a}^{b} [1 − F(j − 1)] = Σ_{j=a}^{b} (0.8)^{j−1} = [(0.8)^{a−1} − (0.8)^{b}]/0.2.
For the 4 through 10 policy, the expected number of days is (0.8³ − 0.8^{10})/0.2 = 2.02313. For the 4 through 17 policy, the expected number of days is (0.8³ − 0.8^{17})/0.2 = 2.44741. This is a 21% increase.
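The closed form just derived can be checked directly:

```python
# Sketch for Exercise 9.25: expected number of days paid for a policy
# covering days a through b when 1 - F(j - 1) = 0.8**(j - 1).

def expected_days(a, b):
    # sum_{j=a}^{b} 0.8**(j-1) = (0.8**(a-1) - 0.8**b) / 0.2
    return (0.8 ** (a - 1) - 0.8 ** b) / 0.2

d_4_10 = expected_days(4, 10)
d_4_17 = expected_days(4, 17)
print(round(d_4_10, 5), round(d_4_17, 5), round(d_4_17 / d_4_10 - 1, 3))
# 2.02313 2.44741 0.21
```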
9.26
E(X) = ∫₁^∞ x(3x^{−4}) dx = 3/2, E(X²) = ∫₁^∞ x²(3x^{−4}) dx = 3,
Var(X) = 3 − (3/2)² = 3/4.
Then
0.9 = ∫₁^{(1+θ)(3/2)} 3x^{−4} dx = 1 − [(1 + θ)(3/2)]^{−3},
and so θ = 0.43629. Similarly,
0.9 = ∫₁^{1.5+λ√(3/4)} 3x^{−4} dx = 1 − [1.5 + λ√(3/4)]^{−3},
and so λ = 0.75568.
9.27 The answer is the sum of the following:
0.80R_{100}, which pays 80% of all losses in excess of 100,
0.10R_{1,100}, which pays an additional 10% in excess of 1,100, and
0.10R_{2,100}, which pays an additional 10% in excess of 2,100.
9.28 Pr(S = 0) = Σ_{n=0}^∞ p_n Pr(B_n = 0), where B_n ~ bin(n, 0.25). Thus
Pr(S = 0) = Σ_{n=0}^∞ [e^{−2}2ⁿ/n!](0.75)ⁿ = e^{−2}e^{1.5} = e^{−1/2}.
9.29 Let μ be the mean of the exponential distribution. Then,
E(S) = E[E(S|μ)] = E(μ) = 3,000,000.
9.30 E(N) = 0.1.
E(X) = ∫₀^{30} 100t f(t) dt + 3,000 Pr(T > 30)
= ∫₀^{10} 100t(0.04) dt + ∫_{10}^{20} 100t(0.035) dt + ∫_{20}^{30} 100t(0.02) dt + 3,000(0.05)
= 200 + 525 + 500 + 150 = 1,350.
E(S) = 0.1(1,350) = 135.
9.31 Total claims are compound Poisson with λ = 5 and severity distribution
f_X(x) = 0.4f₁(x) + 0.6f₂(x) = 0.4(0.001) + 0.6(0.005) = 0.0034 for 0 < x < 200, and
f_X(x) = 0.4(0.001) + 0.6(0) = 0.0004 for 200 < x < 1,000.
Then,
E[(X − 100)₊] = ∫_{100}^{1,000} (x − 100)f_X(x) dx
= ∫_{100}^{200} (x − 100)(0.0034) dx + ∫_{200}^{1,000} (x − 100)(0.0004) dx
= 177.
9.32 The calculations appear in Table 9.5.

Table 9.5  Calculations for Exercise 9.32.

s       Pr(S = s)    d    d·Pr(S = s)
0       0.031676     4    0.12671
1       0.126705     3    0.38012
2       0.232293     2    0.46459
3       0.258104     1    0.25810
4+      0.351222     0    0
Total                     1.22952
9.33 The negative binomial probabilities are p₀ = 0.026, p₁ = 0.061, and p₂ = 0.092, and the remaining probability is 0.821. The mean is 6, and so the total expected payments are 600. The expected policyholder payment is
0(0.026) + 100(0.061) + 200(0.092) + 300(0.821) = 270.80.
The insurance company expected payment is 600 − 270.80 = 329.20.
9.34 Because the only two ways to get 600 in aggregate claims are one 100 plus one 500 and six 100s, their probabilities are
Pr(N = 2)(2)(0.80)(0.16) = 0.02156 and Pr(N = 6)(0.80)⁶ = 0.03833,
for a total of 0.05989. The recursive formula can also be used.
9.35 The moments are E(N) = 16(6) = 96, Var(N) = 16(6)(7) = 672, E(X) = 8/2 = 4, Var(X) = 8²/12 = 5.33, E(S) = 96(4) = 384, and Var(S) = 96(5.33) + 672(4)² = 11,264. The 95th percentile is, from the normal distribution, 384 + 1.645(11,264)^{1/2} = 558.59.
9.36 The moments are (noting that X has a Pareto distribution) E(N) = 3, Var(N) = 3, E(X) = 2/(3 − 1) = 1, Var(X) = 2²(2)/[(3 − 1)(3 − 2)] − 1² = 3, and Var(S) = 3(3) + 3(1)² = 12.
9.4
SECTION 9.4
9.37 Because this is a compound distribution defined on the nonnegative integers, we can use Theorem 7.4. With an appropriate adaptation of notation, the severity probability f_X(0) at zero can be absorbed into the frequency distribution, so just replace β by β* = β[1 − f_X(0)].

9.38 (a) With logarithmic frequency probabilities
p_n = βⁿ/[n(1 + β)ⁿ ln(1 + β)], n = 1, 2, ...,
the aggregate density is
f_S(x) = Σ_{n=1}^∞ p_n f_X^{*n}(x).
(b) With exponential severities, f_X^{*n}(x) = x^{n−1}e^{−x/θ}/[θⁿ(n − 1)!], and so
f_S(x) = Σ_{n=1}^∞ {βⁿ/[n(1 + β)ⁿ ln(1 + β)]}{x^{n−1}e^{−x/θ}/[θⁿ(n − 1)!]}
= {e^{−x/θ}/[x ln(1 + β)]} Σ_{n=1}^∞ (1/n!)[xβ/(θ(1 + β))]ⁿ
= {e^{−x/θ}/[x ln(1 + β)]}{exp[xβ/(θ(1 + β))] − 1}.
9.39 To use (9.15) we need the binomial probabilities p₀ = 0.6³ = 0.216, p₁ = 3(0.6)²(0.4) = 0.432, p₂ = 3(0.6)(0.4)² = 0.288, and p₃ = 0.4³ = 0.064. Then P̄₀ = 0.432 + 0.288 + 0.064 = 0.784, P̄₁ = 0.288 + 0.064 = 0.352, and P̄₂ = 0.064. Then
F_S(300) = 1 − e^{−300/100}[0.784(300/100)⁰/0! + 0.352(300/100)¹/1! + 0.064(300/100)²/2!]
= 0.89405.
9.40 For the three policy types, the means are 10,000(0.01) = 100, 20,000(0.02) = 400, and 40,000(0.03) = 1,200. Because the Poisson variance is equal to the mean, the variances are 10,000²(0.01) = 1,000,000, 20,000²(0.02) = 8,000,000, and 40,000²(0.03) = 48,000,000. The overall mean is 5,000(100) + 3,000(400) + 1,000(1,200) = 2,900,000 and the variance is 5,000(1,000,000) + 3,000(8,000,000) + 1,000(48,000,000) = 7.7 × 10¹⁰. Total claims are the sum of 9,000 compound Poisson distributions, which is itself compound Poisson with λ = 5,000(0.01) + 3,000(0.02) + 1,000(0.03) = 140 and a severity distribution that places probability 50/140 on 10,000, 60/140 on 20,000, and 30/140 on 40,000. Using the recursive formula with units of 10,000,
Pr(S = 0) = e^{−140},
Pr(S = 1) = (140/1)(1)(50/140)e^{−140} = 50e^{−140},
Pr(S = 2) = (140/2)[(1)(50/140)(50)e^{−140} + (2)(60/140)e^{−140}] = 1,310e^{−140},
Pr(S = 3) = (140/3)[(1)(50/140)(1,310)e^{−140} + (2)(60/140)(50)e^{−140}] = 23,833.33e^{−140},
Pr(S ≥ 4) = 1 − 25,194.33e^{−140}.
9.5
SECTION 9.6
9.41 Define, for k = 0, 1, 2, ...,
m_k⁰ = ∫_{kh}^{(k+1)h} {[(k + 1)h − x]/h} f(x) dx and m_k¹ = ∫_{kh}^{(k+1)h} [(x − kh)/h] f(x) dx.
Writing the integrals in terms of limited expected values, using
E[X ∧ (k + 1)h] = ∫₀^{(k+1)h} x f(x) dx + (k + 1)h{1 − F[(k + 1)h]},
gives
m_k⁰ = (1/h)E(X ∧ kh) − (1/h)E[X ∧ (k + 1)h] + 1 − F(kh),
m_k¹ = (1/h)E[X ∧ (k + 1)h] − (1/h)E(X ∧ kh) − 1 + F[(k + 1)h].
For k = 1, 2, ...,
f_k = m_{k−1}¹ + m_k⁰
= (1/h)E(X ∧ kh) − (1/h)E[X ∧ (k − 1)h] − 1 + F(kh)
+ (1/h)E(X ∧ kh) − (1/h)E[X ∧ (k + 1)h] + 1 − F(kh)
= (1/h){2E(X ∧ kh) − E[X ∧ (k − 1)h] − E[X ∧ (k + 1)h]}.
Also, f₀ = m₀⁰ = 1 − E(X ∧ h)/h. All of the m's have nonnegative integrands and, therefore, all of the f_k are nonnegative. To be a valid probability function, they must add to one:
Σ_{k=0}^∞ f_k = 1 − (1/h)E(X ∧ h) + (1/h)Σ_{k=1}^∞ {E(X ∧ kh) − E[X ∧ (k − 1)h]}
− (1/h)Σ_{k=1}^∞ {E[X ∧ (k + 1)h] − E(X ∧ kh)}
= 1 − (1/h)E(X ∧ h) + (1/h)E(X) − (1/h)[E(X) − E(X ∧ h)] = 1,
because both sums are telescoping. The mean of the discretized distribution is
Σ_{k=1}^∞ hk f_k = Σ_{k=1}^∞ k{E(X ∧ kh) − E[X ∧ (k − 1)h]} − Σ_{k=1}^∞ k{E[X ∧ (k + 1)h] − E(X ∧ kh)}
= E(X ∧ h) + Σ_{k=1}^∞ (k + 1){E[X ∧ (k + 1)h] − E(X ∧ kh)}
− Σ_{k=1}^∞ k{E[X ∧ (k + 1)h] − E(X ∧ kh)}
= E(X ∧ h) + Σ_{k=1}^∞ {E[X ∧ (k + 1)h] − E(X ∧ kh)} = E(X),
because E(X ∧ ∞) = E(X).
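The discretization can be applied numerically; a sketch for an exponential distribution with mean 1,000 (an arbitrary choice for illustration, not part of the exercise), verifying that the masses sum to 1 and preserve the mean:

```python
# Sketch of the mean-preserving discretization derived in Exercise 9.41,
# applied to an exponential distribution with mean 1,000 (illustrative).

import math

def lev_exp(u, theta=1000.0):
    """E(X ^ u) for an exponential distribution with mean theta."""
    return theta * (1 - math.exp(-u / theta))

h, n = 50.0, 400                      # span and number of points (n*h >> theta)
f = [1 - lev_exp(h) / h]              # f_0 = 1 - E(X ^ h)/h
f += [(2 * lev_exp(k * h) - lev_exp((k - 1) * h) - lev_exp((k + 1) * h)) / h
      for k in range(1, n)]
mean = sum(k * h * fk for k, fk in enumerate(f))
print(round(sum(f), 6), round(mean, 2))
# 1.0 1000.0
```

The tiny discrepancies come only from truncating the support at n·h.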
9.42 Assume x = 1. Then g₀ = exp[−200(1 − 0.76)] = exp(−48). The recursive formula gives
g_k = (200/k)(0.14g_{k−1} + 0.10g_{k−2} + 0.06g_{k−3} + 0.12g_{k−4}),
with g_k = 0 for k < 0. Now use a spreadsheet to recursively compute probabilities until they accumulate to 0.95 (leaving tail probability 0.05), which happens at k = 62. Then 62x = 4,000,000 for x = 64,516. The expected compensation is
200(0.14 + 0.10 + 0.06 + 0.12)(64,516) = 5,419,344.
9.43 (a) P_N(z) = wP₁(z) + (1 − w)P₂(z). Then
P_S(z) = P_N[P_X(z)] = wP₁[P_X(z)] + (1 − w)P₂[P_X(z)] = wP_{S₁}(z) + (1 − w)P_{S₂}(z),
and therefore
f_S(x) = wf_{S₁}(x) + (1 − w)f_{S₂}(x), x = 0, 1, 2, ....
(b) First use (9.22) to compute f_{S₁}(x) and f_{S₂}(x). Then take a weighted average of the results.
(c) Yes. Any distributions P₁(z) and P₂(z) to which (9.22) applies can be used.
9.44 From (9.22), the recursion for the compound Poisson distribution,
f_S(0) = e^{−5},
f_S(x) = (5/x)Σ_{y=1}^{x} y f_X(y) f_S(x − y).
Then
f_S(1) = 5f_X(1)e^{−5},
and so f_X(1) = 1/5 since f_S(1) = e^{−5}. Then,
f_S(2) = (5/2)[f_X(1)f_S(1) + 2f_X(2)f_S(0)] = (5/2)[(1/5)e^{−5} + 2f_X(2)e^{−5}],
and, since f_S(2) = (3/2)e^{−5}, we obtain f_X(2) = 1/5.
9.45 f_S(7) = (6/7)[(1)f_X(1)f_S(6) + (2)f_X(2)f_S(5) + (4)f_X(4)f_S(3)]. Therefore, 0.041 = (6/7)[(1/3)f_S(6) + (2/3)(0.0271) + (4/3)(0.0132)], for f_S(6) = 0.0365.
9.46 From (9.21) with f_X(0) = 0 and x replaced by z:
f_S(z) = Σ_{y=1}^{M} (a + by/z) f_X(y) f_S(z − y)
= Σ_{y=1}^{M−1} (a + by/z) f_X(y) f_S(z − y) + (a + bM/z) f_X(M) f_S(z − M).
Let z = x + M:
f_S(x + M) = Σ_{y=1}^{M−1} [a + by/(x + M)] f_X(y) f_S(x + M − y) + [a + bM/(x + M)] f_X(M) f_S(x).
Rearrangement (solving for f_S(x)) gives the result.
(b) The maximum number of claims is m and the maximum claim amount is M. Thus, the maximum value of S is mM, with probability [q f_X(M)]^m. This becomes the starting point for a backward recursion.
9.47 We are looking for 6 − E(S) − E(D). First,
E(X) = (1/4)(1) + (3/4)(2) = 7/4 and E(S) = (7/4)(2) = 7/2.
The dividend is
D = 4.5 − S for S < 4.5 and D = 0 for S ≥ 4.5.
From the recursive formula,
f_S(0) = e^{−2},
f_S(1) = (2/1)(1)(1/4)e^{−2} = (1/2)e^{−2},
f_S(2) = (2/2)[(1)(1/4)(1/2)e^{−2} + (2)(3/4)e^{−2}] = (13/8)e^{−2},
f_S(3) = (2/3)[(1)(1/4)(13/8)e^{−2} + (2)(3/4)(1/2)e^{−2}] = (37/48)e^{−2},
f_S(4) = (2/4)[(1)(1/4)(37/48)e^{−2} + (2)(3/4)(13/8)e^{−2}] = (505/384)e^{−2}.
Then E(D) = [4.5 + 3.5(1/2) + 2.5(13/8) + 1.5(37/48) + 0.5(505/384)]e^{−2} = 1.6411. The answer is 6 − 3.5 − 1.6411 = 0.8589.
9.48 For adults, the distribution is compound Poisson with λ = 3 and severity probabilities 0.4 and 0.6 on 1 and 2 (in units of 200). For children, it is λ = 2 with severity probabilities 0.9 and 0.1. The sum of the two distributions is also compound Poisson, with λ = 5. The probability at 1 is [3(0.4) + 2(0.9)]/5 = 0.6 and the remaining probability, 0.4, is at 2. The initial aggregate probabilities are
f_S(0) = e^{−5},
f_S(1) = (5/1)(1)(0.6)e^{−5} = 3e^{−5},
f_S(2) = (5/2)[(1)(0.6)(3)e^{−5} + (2)(0.4)e^{−5}] = (13/2)e^{−5},
f_S(3) = (5/3)[(1)(0.6)(13/2)e^{−5} + (2)(0.4)(3)e^{−5}] = (21/2)e^{−5},
f_S(4) = (5/4)[(1)(0.6)(21/2)e^{−5} + (2)(0.4)(13/2)e^{−5}] = (115/8)e^{−5}.
The probability of claims being 800 or less is the sum
(1 + 3 + 13/2 + 21/2 + 115/8)e^{−5} = 35.375e^{−5} = 0.2384.
9.49 The aggregate distribution is 2 times a Poisson variable with mean 1. The probabilities are Pr(S = 0) = e^{−1}, Pr(S = 2) = Pr(N = 1) = e^{−1}, and Pr(S = 4) = Pr(N = 2) = (1/2)e^{−1}. E(D) = (6 − 0)e^{−1} + (6 − 2)e^{−1} + (6 − 4)(1/2)e^{−1} = 11e^{−1} = 4.0467.
9.50 λ = 1 + 1 = 2, f_X(1) = [1(1) + 1(0.5)]/2 = 0.75, f_X(2) = 0.25. The calculations appear in Table 9.6. The answer is F_X^{*4}(6) = (81 + 108 + 54)/256 = 243/256 = 0.94922.

Table 9.6  Calculations for Exercise 9.50.

x    f_X^{*0}(x)   f_X^{*1}(x)   f_X^{*2}(x)   f_X^{*3}(x)   f_X^{*4}(x)
0    1             0             0             0             0
1    0             3/4           0             0             0
2    0             1/4           9/16          0             0
3    0             0             6/16          27/64         0
4    0             0             1/16          27/64         81/256
5    0             0             0             9/64          108/256
6    0             0             0             1/64          54/256
9.51 56 = 29E(X), so E(X) = 56/29. 126 = 29E(X²), so E(X²) = 126/29. Let f_i = Pr(X = i). Then there are three equations:
f₁ + f₂ + f₃ = 1,
f₁ + 2f₂ + 3f₃ = 56/29,
f₁ + 4f₂ + 9f₃ = 126/29.
The solution is f₂ = 11/29 (also, f₁ = 10/29 and f₃ = 8/29), and the expected count is 29(11/29) = 11.
9.52 Let f_j = Pr(X = j). 0.16 = λf₁, k = 2λf₂, and 0.72 = 3λf₃. Then f₁ = 0.16/λ and f₃ = 0.24/λ, and so f₂ = 1 − 0.16/λ − 0.24/λ. 1.68 = λ[1(0.16/λ) + 2(1 − 0.4/λ) + 3(0.24/λ)] = 0.08 + 2λ, and so λ = 0.8.
9.53 1 − F(6) = 0.04 − 0.02 = 0.02, so F(6) = 0.98. 1 − F(4) = 0.20 − 0.10 = 0.10, so F(4) = 0.90. Pr(S = 5 or 6) = F(6) − F(4) = 0.08.
9.54 For 1,500 lives, λ = 0.01(1,500) = 15. In units of 20, the severity distribution is Pr(X = 0) = 0.5 and Pr(X = x) = 0.1 for x = 1, 2, 3, 4, 5. Then E(X²) = 0.5(0) + 0.1(1 + 4 + 9 + 16 + 25) = 5.5 and Var(S) = 15(5.5) = 82.5. In payment units it is 20²(82.5) = 33,000.
9.55 Pr(N = 2) = (1/4)Pr(N = 2|class I) + (3/4)Pr(N = 2|class II). Pr(N = 2|class I) = ∫ [β²/(1 + β)³] f(β) dβ = 0.068147. [Hint: Use the substitution y = β/(1 + β).] Pr(N = 2|class II) = (0.25)²/(1.25)³ = 0.032. Pr(N = 2) = 0.25(0.068147) + 0.75(0.032) = 0.04104.
9.56 With E(N) = 5 and E(X) = 5(0.6) + k(0.4) = 3 + 0.4k, E(S) = 15 + 2k. With Pr(S = 0) = e^{−5} = 0.006738, the expected cost not covered by the insurance is 0(0.006738) + 5(0.993262) = 4.96631. Therefore, 28.03 = 15 + 2k − 4.96631 for k = 8.9982.
9.57 Using the recursive formula, Pr(S = 0) = e^{−3}, Pr(S = 1) = (3/1)(0.4)e^{−3} = 1.2e^{−3}, Pr(S = 2) = (3/2)[0.4(1.2)e^{−3} + 2(0.3)e^{−3}] = 1.62e^{−3}, and Pr(S = 3) = (3/3)[0.4(1.62)e^{−3} + 2(0.3)(1.2)e^{−3} + 3(0.2)e^{−3}] = 1.968e^{−3}. Pr(S ≤ 3) = 5.788e^{−3} = 0.28817.
9.58 Using the recursive formula, Pr(S = 0) = (1 + 0.5)^{−6} = 0.08779, a = 0.5/(1 + 0.5) = 1/3, and b = (6 − 1)(1/3) = 5/3.
Pr(S = 1) = (1/3 + 5/3)(0.4)(0.08779) = 0.07023,
Pr(S = 2) = [1/3 + (5/3)(1/2)](0.4)(0.07023) + [1/3 + (5/3)(2/2)](0.3)(0.08779) = 0.08545,
Pr(S = 3) = [1/3 + (5/3)(1/3)](0.4)(0.08545) + [1/3 + (5/3)(2/3)](0.3)(0.07023)
+ [1/3 + (5/3)(3/3)](0.2)(0.08779) = 0.09593.
Then, Pr(S ≤ 3) = 0.33940.
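The (a, b, 0) recursion used in this solution can be sketched directly with the same inputs (negative binomial r = 6, β = 0.5; severity probabilities 0.4, 0.3, 0.2 at 1, 2, 3 with nothing at 0):

```python
# Sketch of the (a, b, 0) recursion from Exercise 9.58.

r, beta = 6, 0.5
a = beta / (1 + beta)
b = (r - 1) * a
f = {1: 0.4, 2: 0.3, 3: 0.2}           # severity probabilities, f_X(0) = 0

g = [(1 + beta) ** (-r)]               # Pr(S = 0) = P_N(0) for f_X(0) = 0
for x in range(1, 4):
    # No 1/(1 - a*f_0) factor is needed because f_X(0) = 0.
    g.append(sum((a + b * y / x) * f.get(y, 0.0) * g[x - y]
                 for y in range(1, x + 1)))
print([round(p, 5) for p in g], round(sum(g), 5))
# [0.08779, 0.07023, 0.08545, 0.09593] 0.33941
```

The cumulative value 0.33941 matches the 0.33940 in the solution up to the rounding of the displayed terms.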
9.59 The result is 0.016.
9.60 The result is 0.055. Using a normal approximation with a mean of 250 and a variance of 8,000, the probability is 0.047.
9.61 If F_X(x) = 1 − e^{−μx}, x > 0, then
f₀ = F_X(h/2) = 1 − e^{−μh/2},
and for j = 1, 2, 3, ...,
f_j = F_X(jh + h/2) − F_X(jh − h/2) = e^{−μh(j−1/2)} − e^{−μh(j+1/2)} = (1 − f₀)(1 − φ)φ^{j−1},
where φ = e^{−μh}.
9.6
SECTION 9.7
9.62 In this case, v = 1 − F_X(d) = e^{−μd}, and from (9.30),
P_{N^P}(z) = B[θ(1 − v + vz − 1)] = B[θv(z − 1)] = B[θe^{−μd}(z − 1)].
Also, Y^P = αZ, where
Pr(Z > z) = Pr(X > z + d)/Pr(X > d) = e^{−μ(z+d)}/e^{−μd} = e^{−μz}.
That is, F_{Y^P}(y) = 1 − Pr(Y^P > y) = 1 − Pr(Z > y/α) = 1 − e^{−μy/α}.
9.63 (a) The cumulative distribution function of the individual losses is, from Appendix A, given by F_X(x) = 1 − (1 + x/100)e^{−x/100}, x > 0. Also,
E(X ∧ x) = 200Γ(3; x/100) + x[1 − Γ(2; x/100)].
Then
E(X ∧ 175) = 200(0.25603) + 175(1 − 0.52212) = 134.835,
E(X ∧ 50) = 200(0.01439) + 50(1 − 0.09020) = 48.368,
and the individual mean payment amount on a per-loss basis is
E(Y^L) = E(X ∧ 175) − E(X ∧ 50) = 134.835 − 48.368 = 86.467.
Similarly, for the second moment,
E[(X ∧ x)²] = 60,000Γ(4; x/100) + x²[1 − Γ(2; x/100)],
and, in particular,
E[(X ∧ 175)²] = 60,000(0.10081) + 30,625(1 − 0.52212) = 20,683.645,
E[(X ∧ 50)²] = 60,000(0.00175) + 2,500(1 − 0.09020) = 2,379.587.
Therefore,
E[(Y^L)²] = E[(X ∧ 175)²] − E[(X ∧ 50)²] − 100E(X ∧ 175) + 100E(X ∧ 50)
= 20,683.645 − 2,379.587 − 100(134.835) + 100(48.368)
= 9,657.358.
Consequently,
Var(Y^L) = 9,657.358 − (86.467)² = 2,180.816 = (46.70)².
For the negative binomially distributed number of losses,
E(N^L) = (2)(1.5) = 3 and Var(N^L) = (2)(1.5)(1 + 1.5) = 7.5.
The mean of the aggregate payments is, therefore,
E(S) = E(N^L)E(Y^L) = (3)(86.467) = 259.401,
and using Equation (9.9), the variance is
Var(S) = E(N^L)Var(Y^L) + Var(N^L)[E(Y^L)]²
= (3)(2,180.816) + (7.5)(86.467)²
= 62,616.514 = (250.233)².
(b) The number of payments N^P has a negative binomial distribution with r* = r = 2 and β* = β[1 − F_X(50)] = (1.5)(1 − 0.09020) = 1.36469.
(c) The maximum payment amount is 175 − 50 = 125. Thus F_{Y^P}(y) = 1 for y ≥ 125. For y < 125,
F_{Y^P}(y) = 1 − Pr(X > y + 50)/Pr(X > 50)
= 1 − [1 + (y + 50)/100]e^{−(y+50)/100}/[(1.5)e^{−0.5}]
= 1 − [(150 + y)/150]e^{−y/100}.
(d)
f₀ = F_{Y^P}(20) = 0.072105,
f₁ = F_{Y^P}(60) − F_{Y^P}(20) = 0.231664 − 0.072105 = 0.159559,
f₂ = F_{Y^P}(100) − F_{Y^P}(60) = 0.386868 − 0.231664 = 0.155204,
f₃ = F_{Y^P}(140) − F_{Y^P}(100) = 1 − 0.386868 = 0.613132.
(e)
g₀ = [1 + β*(1 − f₀)]^{−2} = [1 + (1.36469)(1 − 0.072105)]^{−2} = 0.194702,
a = 1.36469/2.36469 = 0.577112, b = (2 − 1)a = 0.577112.
The recursion is
g_x = Σ_{y=1}^{x} (a + by/x) f_y g_{x−y}/(1 − af₀),
and, because b = a here, a + by/x = a(1 + y/x), so the constant a/(1 − af₀) = 0.602169 can be factored out:
g₁ = (0.602169)(2)(0.159559)(0.194702) = 0.037415,
g₂ = (0.602169)[(1.5)(0.159559)(0.037415) + (2)(0.155204)(0.194702)] = 0.041786,
g₃ = (0.602169)[(4/3)(0.159559)(0.041786) + (5/3)(0.155204)(0.037415)
+ (2)(0.613132)(0.194702)] = 0.154953.

9.7
SECTION 9.8
9.64 Write
E(S) = Σ_{j=1}^{n} E(I_jB_j) = Σ_{j=1}^{n} E[E(I_jB_j|I_j)],
and then E(I_jB_j|I_j) = 0 with probability 1 − q_j and = μ_j with probability q_j. Then
E[E(I_jB_j|I_j)] = 0(1 − q_j) + μ_jq_j = μ_jq_j,
thus establishing (9.35). For the variance,
Var(S) = Σ_{j=1}^{n} Var(I_jB_j) = Σ_{j=1}^{n} {Var[E(I_jB_j|I_j)] + E[Var(I_jB_j|I_j)]}.
For the first term,
Var[E(I_jB_j|I_j)] = 0²(1 − q_j) + μ_j²q_j − (μ_jq_j)².
For the second term, Var(I_jB_j|I_j) = 0 with probability 1 − q_j and = σ_j² with probability q_j. Then,
E[Var(I_jB_j|I_j)] = 0(1 − q_j) + σ_j²q_j.
Inserting the two terms into the sum establishes (9.36).
9.65 Let S be the true random variable and let S₁, S₂, and S₃ be the three approximations. For the true variable,
E(S) = Σ_{j=1}^{n} q_jμ_j, Var(S) = Σ_{j=1}^{n} q_j(1 − q_j)μ_j².
For all three approximations,
E(S_i) = Σ_{j=1}^{n} λ_jμ_j, Var(S_i) = Σ_{j=1}^{n} λ_jμ_j².
For the first approximation, with λ_j = q_j it is clear that E(S₁) = E(S), and because q_j > q_j(1 − q_j), Var(S₁) > Var(S).
For the second approximation, with λ_j = −ln(1 − q_j), note that
−ln(1 − q_j) = q_j + q_j²/2 + q_j³/3 + ⋯ > q_j,
and then it is clear that E(S₂) > E(S₁). Because the variance also involves λ_j, the same is true for the variance.
For the third approximation, again using a Taylor series expansion,
q_j/(1 − q_j) = q_j + q_j² + q_j³ + ⋯ > −ln(1 − q_j),
and, therefore, the quantities for this approximation are the highest of all.
9.66 For A,
E(S) = 40(2) + 60(4) = 320, Var(S) = 40(4) + 60(10) = 760,
A = E(S) + 2SD(S) = 320 + 2√760 = 375.136.
For B,
E(S) = E[2N + 4(100 − N)] = 400 − 2E(N) = 400 − 2(40) = 320,
Var(S) = Var[E(S|N)] + E[Var(S|N)]
= Var(400 − 2N) + E[4N + 10(100 − N)]
= 4Var(N) − 6E(N) + 1,000
= 4(100)(0.4)(0.6) − 6(100)(0.4) + 1,000 = 856.
Therefore, B = 320 + 2√856 = 378.515 and B/A = 378.515/375.136 = 1.009.
9.67 The premium per person is 1.1(1,000)[0.2(0.02) + 0.8(0.01)] = 13.20. With 30% smokers,
E(S) = 1,000[0.3(0.02) + 0.7(0.01)] = 13,
Var(S) = 1,000²[0.3(0.02)(0.98) + 0.7(0.01)(0.99)] = 12,810,
per policy. With n policies, the probability of claims exceeding premium is
Pr(S > 13.2n) = Pr[Z > (13.2n − 13n)/√(12,810n)] = Pr(Z > 0.0017671√n) = 0.20.
Therefore, 0.0017671√n = 0.84162 for n = 226,836.
9.68 Let the policy being changed be the nth policy and let F_n(x) represent the distribution function using n policies. Then, originally, F_n(x) = 0.8F_{n−1}(x) + 0.2F_{n−1}(x − 1). Starting with x = 0:
0.40 = 0.80F_{n−1}(0) + 0.2(0), F_{n−1}(0) = 0.50,
0.58 = 0.80F_{n−1}(1) + 0.2(0.50), F_{n−1}(1) = 0.60,
0.64 = 0.80F_{n−1}(2) + 0.2(0.60), F_{n−1}(2) = 0.65,
0.69 = 0.80F_{n−1}(3) + 0.2(0.65), F_{n−1}(3) = 0.70,
0.70 = 0.80F_{n−1}(4) + 0.2(0.70), F_{n−1}(4) = 0.70,
0.78 = 0.80F_{n−1}(5) + 0.2(0.70), F_{n−1}(5) = 0.80.
With the amount changed, F_n(5) = 0.8F_{n−1}(5) + 0.2F_{n−1}(3) = 0.8(0.8) + 0.2(0.7) = 0.78.
9.69 E(S) = 400(0.03)(5) + 300(0.07)(3) + 200(0.10)(2) = 163.
For a single insured with claim probability q and exponential mean θ,
Var(S) = E(N)Var(X) + Var(N)E(X)² = qθ² + q(1 − q)θ² = q(2 − q)θ².
Then
Var(S) = 400(0.03)(1.97)(25) + 300(0.07)(1.93)(9) + 200(0.10)(1.90)(4) = 1,107.77.
P = E(S) + 1.645SD(S) = 163 + 1.645√1,107.77 = 217.75.
9.70 For one member, the mean is
0.7(160) + 0.2(600) + 0.5(240) = 352
and the variance is
0.7(4,900) + 0.21(160)² + 0.2(20,000) + 0.16(600)² + 0.5(8,100) + 0.25(240)² = 88,856.
For n members, the goal is
0.1 ≥ Pr[S > 1.15(352)n] = Pr[Z > (1.15(352)n − 352n)/√(88,856n)],
and thus
[1.15(352)n − 352n]/√(88,856n) ≥ 1.28155,
which yields n ≥ 52.35. So 53 members are needed.
9.71 The mean is
100,000 = 0.2k(3,500) + 0.6ak(2,000) = (700 + 1,200a)k,
and the variance is
Var(S) = 0.2(0.8)k²(3,500) + 0.6(0.4)(ak)²(2,000) = (560 + 480a²)k².
Solving the first equation for k and substituting give
Var(S) = (560 + 480a²)[100,000/(700 + 1,200a)]².
Because the goal is to minimize the variance, constants can be removed, thus leaving
(56 + 48a²)/(7 + 12a)².
Taking the derivative leads to a numerator of (the denominator is not important because the next step is to set the derivative equal to zero)
(7 + 12a)²(96a) − (56 + 48a²)(2)(7 + 12a)(12).
Setting this expression equal to zero, dividing by 96, and rearranging lead to the quadratic equation
84a² − 119a − 98 = 0,
and the only positive root is the solution a = 2.
9.72 λ = 500(0.01) + 500(0.02) = 15, f_X(x) = 500(0.01)/15 = 1/3, f_X(2x) = 2/3. E(X²) = (1/3)x² + (2/3)(2x)² = 3x². Var(S) = 15(3x²) = 45x² = 4,500, so x = 10.
9.73 All work is done in units of 100,000. The first group of 500 policies is
not relevant. The others have amounts 1, 2, 1, and 1.
E(S) = 500(0.02)(1) + 500(0.02)(2) + 300(0.1)(1) + 500(0.1)(1) = 110,
Var(S) = 500(0.02)(0.98)(1) + 500(0.02)(0.98)(4) + 300(0.1)(0.9)(1) + 500(0.1)(0.9)(1) = 121.
(a) P = 110 + 1.645√121 = 128.095.
(b) μ + σ²/2 = ln 110 = 4.70048. 2μ + 2σ² = ln(121 + 110²) = 9.41091. μ = 4.695505 and σ = 0.0997497. ln P = 4.695505 + 1.645(0.0997497) = 4.859593, P = 128.97.
(c) αθ = 110, αθ² = 121, θ = 121/110 = 1.1, α = 100. This is a chi-square distribution with 200 degrees of freedom. The 95th percentile is (using the Excel®¹ GAMMAINV function) 128.70.
(d) λ = 500(0.02) + 500(0.02) + 300(0.10) + 500(0.10) = 100. f_X(1) = (10 + 30 + 50)/100 = 0.9 and f_X(2) = 0.1. Using the recursive formula for the compound Poisson distribution, we find F_S(128) = 0.94454 and F_S(129) = 0.95320, and so, to be safe at the 5% level, a premium of 129 is required.
¹Excel® is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.
(e) One way to use the software is to note that S = X₁ + 2X₂ + X₃, where each X is binomial with m = 500, 500, and 800 and q = 0.02, 0.02, and 0.10. The results are F_S(128) = 0.94647 and F_S(129) = 0.95635, and so, to be safe at the 5% level, a premium of 129 is required.
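The recursion in part (d) can be reproduced with the standard Panjer recursion for a compound Poisson distribution; this is a sketch, not code from the text:

```python
import math

lam = 100.0
fx = {1: 0.9, 2: 0.1}  # severity probabilities from part (d)

# Panjer recursion: f_S(0) = exp(-lam), f_S(s) = (lam/s) * sum_y y*f_X(y)*f_S(s-y).
f = [math.exp(-lam)]
for s in range(1, 130):
    f.append(lam / s * sum(y * fx[y] * f[s - y] for y in fx if y <= s))

F128 = sum(f[:129])
F129 = F128 + f[129]
print(round(F128, 5), round(F129, 5))
```

Since F_S(128) falls below 0.95 and F_S(129) does not, the smallest safe integer premium is 129.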
9.74 From (9.35), E(S) = 500(0.02)(500) + 250(0.03)(750) + 250(0.04)(1,000) = 20,625. From (9.36),

Var(S) = 500[0.02(500)² + 0.02(0.98)(500)²] + 250[0.03(750)² + 0.03(0.97)(750)²] + 250[0.04(1,000)² + 0.04(0.96)(1,000)²] = 32,860,937.5.

The compound Poisson model has λ = 500(0.02) + 250(0.03) + 250(0.04) = 27.5, and the claim amount distribution has density function, from (9.38),

f_X(x) = (10/27.5)(1/500)e^{−x/500} + (7.5/27.5)(1/750)e^{−x/750} + (10/27.5)(1/1,000)e^{−x/1,000}.

This mixture of exponentials distribution has mean

[10(500) + 7.5(750) + 10(1,000)]/27.5 = 750

and variance

[10(2)(500)² + 7.5(2)(750)² + 10(2)(1,000)²]/27.5 − 750² = 653,409.09.

Then

E(S) = 27.5(750) = 20,625,
Var(S) = 27.5(653,409.09) + 27.5(750)² = 33,437,500.
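A short script (a sketch) reproduces both variance calculations:

```python
# Individual model (9.36) versus compound Poisson approximation for Exercise 9.74.
groups = [(500, 0.02, 500), (250, 0.03, 750), (250, 0.04, 1000)]  # (n, q, exponential mean)

mean = sum(n * q * theta for n, q, theta in groups)
var_ind = sum(n * (q * theta ** 2 + q * (1 - q) * theta ** 2) for n, q, theta in groups)

lam = sum(n * q for n, q, _ in groups)  # 27.5
ex2 = sum(n * q * 2 * theta ** 2 for n, q, theta in groups) / lam  # E(X^2) of the mixture
var_cp = lam * ex2
print(round(mean, 2), round(var_ind, 2), round(var_cp, 2))  # 20625.0 32860937.5 33437500.0
```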
CHAPTER 10
CHAPTER 10 SOLUTIONS
10.1 SECTION 10.2
10.1 When three observations are taken without replacement, there are only
four possible results. They are 1,3,5; 1,3,9; 1,5,9; and 3,5,9. The four sample
means are 9/3, 13/3, 15/3, and 17/3. The expected value (each has probability
1/4) is 54/12 or 4.5, which equals the population mean. The four sample
medians are 3, 3, 5, and 5. The expected value is 4, and so the median is
biased.
10.2 E(X̄) = E[(X₁ + ··· + Xₙ)/n] = (1/n)[E(X₁) + ··· + E(Xₙ)] = (1/n)(μ + ··· + μ) = μ.
10.3 For a sample of size 3 from a continuous distribution, the density function of the median is 6f(x)F(x)[1 − F(x)]. For this exercise, F(x) = (x − θ + 2)/4,
θ − 2 < x < θ + 2. The density function for the median is

f_med(x) = 6(0.25)[(x − θ + 2)/4][(2 − x + θ)/4] = (6/64)(x − θ + 2)(2 − x + θ),

and the expected value is

(6/64)∫_{θ−2}^{θ+2} x(x − θ + 2)(2 − x + θ) dx = (6/64)∫₀⁴ (y + θ − 2)y(4 − y) dy
= (6/64)∫₀⁴ [−y³ + (6 − θ)y² + 4(θ − 2)y] dy
= (6/64)[−64 + (6 − θ)(64/3) + 32(θ − 2)]
= θ,

where the first line used the substitution y = x − θ + 2.
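The conclusion E[median] = θ can be checked numerically (a sketch; θ = 7.3 is an arbitrary test value):

```python
# Midpoint-rule integration of x * f_med(x) over (theta - 2, theta + 2).
theta = 7.3
n = 200000
h = 4.0 / n
total = 0.0
for i in range(n):
    x = theta - 2 + (i + 0.5) * h
    fmed = (6 / 64) * (x - theta + 2) * (2 - x + theta)  # median density derived above
    total += x * fmed * h
print(round(total, 4))  # 7.3
```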
10.4 Because the mean of a Pareto distribution does not always exist, it is not reasonable to discuss unbiasedness or consistency. Had the problem been restricted to Pareto distributions with α > 1, then consistency could be established. It turns out that for the sample mean to be consistent, only the first moment needs to exist (the variance having a limit of zero is a sufficient, but not a necessary, condition for consistency).
10.5 The mean is unbiased, so its MSE is its variance. It is

MSE_mean(θ) = Var(X̄) = 4²/[12(3)] = 4/9.

The median is also unbiased. The variance is

∫_{θ−2}^{θ+2} (x − θ)²(x − θ + 2)(2 − x + θ)(6/64) dx = (6/64)∫₀⁴ (y − 2)²y(4 − y) dy
= (6/64)∫₀⁴ (−y⁴ + 8y³ − 20y² + 16y) dy
= (6/64)[−4⁵/5 + 8(4)⁴/4 − 20(4)³/3 + 16(4)²/2]
= 4/5,

and so the sample mean has the smaller MSE.
10.6 We have

Var(θ_C) = Var[wθ_A + (1 − w)θ_B] = w²(160,000) + (1 − w)²(40,000) = 200,000w² − 80,000w + 40,000.

The derivative is 400,000w − 80,000, and setting it equal to zero provides the solution, w = 0.2.
10.7 MSE = Var + bias². 1 = Var + (0.2)², Var = 0.96.
10.8 To be unbiased,

m = E(Z) = a(0.8m) + βm = (0.8a + β)m,

and so 1 = 0.8a + β, or β = 1 − 0.8a. Then

Var(Z) = a²m² + β²(1.5m²) = [a² + (1 − 0.8a)²(1.5)]m²,

which is minimized when a² + 1.5 − 2.4a + 0.96a² is minimized. This result occurs when 3.92a − 2.4 = 0, or a = 0.6122. Then β = 1 − 0.8(0.6122) = 0.5102.
10.9 One way to solve this problem is to list the 20 possible samples of size 3 and assign probability 1/20 to each. The population mean is (1 + 1 + 2 + 3 + 5 + 10)/6 = 11/3.
(a) The 20 sample means have an average of 11/3, and so the bias is zero. The variance of the sample means (dividing by 20 because this is the population of sample means) is 1.9778, which is also the MSE.
(b) The 20 sample medians have an average of 2.7, and so the bias is 2.7 − 11/3 = −0.9667. The variance is 1.81 and the MSE is 2.7444.
(c) The 20 sample midranges have an average of 4.15, and so the bias is 4.15 − 11/3 = 0.4833. The variance is 2.65 and the MSE is 2.8861.
(d) E(aX₍₁₎ + bX₍₂₎ + cX₍₃₎) = 1.25a + 2.7b + 7.05c, where the expected values of the order statistics can be found by averaging the 20 values from the enumerated population. To be unbiased, the expected value must be 11/3, and so the restriction is 1.25a + 2.7b + 7.05c = 11/3. With this restriction, the MSE is minimized at a = 1.445337, b = 0.043733, and c = 0.247080 with an MSE of 1.620325. With no restrictions, the minimum is at a = 1.289870, b = 0.039029, and c = 0.220507 with an MSE of 1.446047 (and a bias of −0.3944).
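The enumeration in parts (a)–(b) is easy to reproduce (a sketch, not from the text):

```python
from itertools import combinations

population = [1, 1, 2, 3, 5, 10]
values = [sorted(population[i] for i in s) for s in combinations(range(6), 3)]  # 20 samples

medians = [v[1] for v in values]
med_mean = sum(medians) / 20
med_var = sum((m - med_mean) ** 2 for m in medians) / 20
mse = med_var + (med_mean - 11 / 3) ** 2
print(med_mean, round(med_var, 2), round(mse, 4))  # 2.7 1.81 2.7444
```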
10.10 bias(θ₁) = 165/75 − 2 = 0.2, Var(θ₁) = 375/75 − (165/75)² = 0.16, MSE(θ₁) = 0.16 + (0.2)² = 0.2. bias(θ₂) = 147/75 − 2 = −0.04, Var(θ₂) = 312/75 − (147/75)² = 0.3184, MSE(θ₂) = 0.3184 + (−0.04)² = 0.32. The relative efficiency is 0.2/0.32 = 0.625, or 0.32/0.20 = 1.6.
10.2 SECTION 10.3
10.11 (a) From the information given in the problem, we can begin with

0.95 = Pr(a < X̄/θ < b),

where X̄/θ is known to have the gamma distribution with α = 50 and θ = 0.02. This does not uniquely specify the endpoints. However, if 2.5% probability is allocated to each side, then a = 0.7422 and b = 1.2956. Inserting these values in the inequality, taking reciprocals, and multiplying through by X̄ give

0.95 = Pr(0.7718X̄ < θ < 1.3473X̄).

Inserting the sample mean gives an interval of 212.25 to 370.51.
(b) The sample mean has E(X̄) = θ and Var(X̄) = θ²/50. Then, using 275²/50 as the approximate variance,

0.95 = Pr[−1.96 < (X̄ − θ)/(275/√50) < 1.96] = Pr(−76.23 < X̄ − θ < 76.23).

Inserting the sample mean of 275 gives the interval 275 ± 76.23, or 198.77 to 351.23.
(c) Leaving the θ in the variance alone,

0.95 = Pr[−1.96 < (X̄ − θ)/(θ/√50) < 1.96]
= Pr(−0.2772θ < X̄ − θ < 0.2772θ)
= Pr(X̄/1.2772 < θ < X̄/0.7228)

for an interval of 215.31 to 380.46.
10.12 The estimated probability of one or more claims is 400/2,000 = 0.2. For this binomial distribution, the variance is estimated as 0.2(0.8)/2,000 = 0.00008. The upper bound is 0.2 + 1.96√0.00008 = 0.21753.
10.3 SECTION 10.4
10.13 For the exact test, the null hypothesis should be rejected if X̄ < c. The value of c comes from

0.05 = Pr(X̄ < c | θ = 325) = Pr(X̄/325 < c/325),

where X̄/325 has the gamma distribution with parameters 50 and 0.02. From that distribution, c/325 = 0.7793 for c = 253.27. Because the sample mean of 275 is not below this value, the null hypothesis is not rejected. The p-value is obtained from

Pr(X̄ < 275 | θ = 325) = Pr(X̄/325 < 275/325).

From the gamma distribution with parameters 50 and 0.02, this probability is 0.1353.

For the normal approximation,

0.05 = Pr(X̄ < c | θ = 325) = Pr[Z < (c − 325)/(325/√50)].

Solving (c − 325)/(325/√50) = −1.645 produces c = 249.39. Again, the null hypothesis is not rejected. For the p-value,

Pr[Z < (275 − 325)/(325/√50) = −1.0879] = 0.1383.
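The normal-approximation numbers can be reproduced with the standard normal cdf written via the error function (a sketch):

```python
import math

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

se = 325 / math.sqrt(50)
c = 325 - 1.645 * se              # critical value
p = norm_cdf((275 - 325) / se)    # p-value at the observed mean
print(round(c, 2), round(p, 4))   # 249.39 0.1383
```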
CHAPTER 11
CHAPTER 11 SOLUTIONS
11.1 SECTION 11.2
11.1 When all information is available, the calculations are in Table 11.1. As
in Example 11.4, values apply from the current y-value to the next one.
11.2 (a) μ = Σxᵢ/35 = 204,900. μ′₂ = Σxᵢ²/35 = 1.4134 × 10¹¹, σ = 325,807. μ′₃ = 1.70087 × 10¹⁷, μ₃ = 9.62339 × 10¹⁶, c = 325,807/204,900 = 1.590078, γ₁ = 2.78257.
(b) Ê(X ∧ 500,000) = [Σ_{xⱼ<500,000} xⱼ + 5(500,000)]/35 = 153,139.
Ê[(X ∧ 500,000)²] = [Σ_{xⱼ<500,000} xⱼ² + 5(500,000)²]/35 = 53,732,687,032.
11.3

μ′₁ = [2(2,000) + 6(4,000) + 12(6,000) + 10(8,000)]/30 = 6,000,
μ₂ = [2(−4,000)² + 6(−2,000)² + 12(0)² + 10(2,000)²]/30 = 3,200,000,
μ₃ = [2(−4,000)³ + 6(−2,000)³ + 12(0)³ + 10(2,000)³]/30 = −3,200,000,000,
γ₁ = −3,200,000,000/(3,200,000)^{1.5} = −0.55902.
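The grouped moments can be verified with a short script (a sketch):

```python
data = [(2000, 2), (4000, 6), (6000, 12), (8000, 10)]  # (value, count)
n = sum(c for _, c in data)

mu = sum(v * c for v, c in data) / n
mu2 = sum((v - mu) ** 2 * c for v, c in data) / n
mu3 = sum((v - mu) ** 3 * c for v, c in data) / n
gamma1 = mu3 / mu2 ** 1.5
print(mu, mu2, mu3, round(gamma1, 5))  # 6000.0 3200000.0 -3200000000.0 -0.55902
```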
Table 11.1 Calculations for Exercise 11.1.

j | yⱼ | sⱼ | rⱼ | F₃₀(yⱼ) | Ĥ(yⱼ) | F̂(yⱼ)*
1 | 0.1 | 1 | 30 | 1 − 29/30 = 0.0333 | 1/30 = 0.0333 | 0.0328
2 | 0.5 | 1 | 29 | 1 − 28/30 = 0.0667 | 0.0333 + 1/29 = 0.0678 | 0.0656
3 | 0.8 | 1 | 28 | 1 − 27/30 = 0.1000 | 0.0678 + 1/28 = 0.1035 | 0.0983
4 | 1.8 | 2 | 27 | 1 − 25/30 = 0.1667 | 0.1035 + 2/27 = 0.1776 | 0.1627
5 | 2.1 | 1 | 25 | 1 − 24/30 = 0.2000 | 0.1776 + 1/25 = 0.2176 | 0.1956
6 | 2.5 | 1 | 24 | 1 − 23/30 = 0.2333 | 0.2176 + 1/24 = 0.2593 | 0.2284
7 | 2.8 | 1 | 23 | 1 − 22/30 = 0.2667 | 0.2593 + 1/23 = 0.3027 | 0.2612
8 | 3.9 | 2 | 22 | 1 − 20/30 = 0.3333 | 0.3027 + 2/22 = 0.3937 | 0.3254
9 | 4.0 | 1 | 20 | 1 − 19/30 = 0.3667 | 0.3937 + 1/20 = 0.4437 | 0.3583
10 | 4.1 | 1 | 19 | 1 − 18/30 = 0.4000 | 0.4437 + 1/19 = 0.4963 | 0.3912
11 | 4.6 | 2 | 18 | 1 − 16/30 = 0.4667 | 0.4963 + 2/18 = 0.6074 | 0.4552
12 | 4.8 | 2 | 16 | 1 − 14/30 = 0.5333 | 0.6074 + 2/16 = 0.7324 | 0.5192
13 | 5.0 | 14 | 14 | 1 − 0/30 = 1.0000 | 0.7324 + 14/14 = 1.7324 | 0.8231

*F̂(x) = 1 − e^{−Ĥ(x)}.
Table 11.2 Calculations for Exercise 11.4.

Payment range | Number of payments | Ogive value | Histogram value
0–25 | 6 | 6/392 = 0.0153 | 6/[392(25)] = 0.000612
25–50 | 24 | 30/392 = 0.0765 | 24/[392(25)] = 0.002449
50–75 | 30 | 60/392 = 0.1531 | 30/[392(25)] = 0.003061
75–100 | 31 | 91/392 = 0.2321 | 31/[392(25)] = 0.003163
100–150 | 57 | 148/392 = 0.3776 | 57/[392(50)] = 0.002908
150–250 | 80 | 228/392 = 0.5816 | 80/[392(100)] = 0.002041
250–500 | 85 | 313/392 = 0.7985 | 85/[392(250)] = 0.000867
500–1,000 | 54 | 367/392 = 0.9362 | 54/[392(500)] = 0.000276
1,000–2,000 | 15 | 382/392 = 0.9745 | 15/[392(1,000)] = 0.000038
2,000–4,000 | 10 | 392/392 = 1.0000 | 10/[392(2,000)] = 0.000013
11.2 SECTION 11.3

11.4 There are 392 observations and the calculations are in Table 11.2. For each interval, the ogive value is for the right-hand endpoint of the interval, while the histogram value is for the entire interval. Graphs of the ogive and histogram appear in Figures 11.1 and 11.2.
Figure 11.1 Ogive for Exercise 11.4.

Figure 11.2 Histogram for Exercise 11.4.
11.5 (a) The ogive connects the points (0.5, 0), (2.5, 0.35), (8.5, 0.65), (15.5, 0.85), and (29.5, 1).
(b) The histogram has height 0.35/2 = 0.175 on the interval (0.5, 2.5), height 0.3/6 = 0.05 on the interval (2.5, 8.5), height 0.2/7 = 0.028571 on the interval (8.5, 15.5), and height 0.15/14 = 0.010714 on the interval (15.5, 29.5).
11.6 The plot appears in Figure 11.3. The points are the complements of the survival probabilities at the indicated times.
Figure 11.3 Ogive for mortgage lifetime for Exercise 11.6.
Because one curve lies completely above the other, it appears possible that
original issues have a shorter lifetime.
11.7 The heights of the histogram bars are, respectively, 0.5/2 = 0.25, 0.2/8 =
0.025, 0.2/90 = 0.00222, 0.1/900 = 0.000111. The histogram appears in
Figure 11.4.
11.8 The empirical model places probability 1/n at each data point. Then

E(X ∧ 2) = Σ_{xⱼ<2} xⱼ(1/40) + Σ_{xⱼ≥2} 2(1/40) = (20 + 15)(1/40) + (14)(2)(1/40) = 1.575.
Figure 11.4 Histogram for Exercise 11.7.
11.9 We have

E(X ∧ 7,000) = (1/2,000)[Σ_{xⱼ≤6,000} xⱼ + Σ_{6,000<xⱼ≤7,000} xⱼ + Σ_{xⱼ>7,000} 7,000]
= (1/2,000)[Σ_{xⱼ≤6,000} xⱼ + Σ_{xⱼ>6,000} 6,000] + (1/2,000)[Σ_{6,000<xⱼ≤7,000} (xⱼ − 6,000) + Σ_{xⱼ>7,000} 1,000]
= E(X ∧ 6,000) + [200,000 − 30(6,000) + 270(1,000)]/2,000
= 1,955.
11.10 Let n be the sample size. The equations are

0.21 = 36/n + 0.4x/n and 0.51 = 36/n + x/n + 0.6y/n.

Also, n = 200 + x + y. The equations can be rewritten as

0.21(200 + x + y) = 36 + 0.4x and 0.51(200 + x + y) = 36 + x + 0.6y.

The linear equations can be solved for x = 120.
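The linear system can be solved exactly with Cramer's rule (a sketch, not from the text):

```python
from fractions import Fraction as F

# Rearranged equations: 0.19x - 0.21y = 6 and 0.49x + 0.09y = 66.
a1, b1, c1 = F(19, 100), F(-21, 100), F(6)
a2, b2, c2 = F(49, 100), F(9, 100), F(66)
det = a1 * b2 - a2 * b1
x = (c1 * b2 - c2 * b1) / det
y = (a1 * c2 - a2 * c1) / det
print(x, y)  # 120 80
```

So x = 120 (with y = 80 and sample size n = 400).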
CHAPTER 12
CHAPTER 12 SOLUTIONS
12.1 SECTION 12.1
12.1 The calculations are in Tables 12.1 and 12.2.
Table 12.1 Calculations for Exercise 12.1.

Individuals i = 1–15 all have dᵢ = 0. Eleven are deaths, at xᵢ = 0.1, 0.5, 0.8, 1.8, 1.8, 2.1, 2.5, 2.8, 3.9, 4.0, 4.1, and four are censored, at uᵢ = 0.8, 2.9, 2.9, 4.0.
Table 12.2 Further calculations for Exercise 12.1.

Individuals i = 16–18 and 19–30 also have dᵢ = 0, while i = 31–40 enter at dᵢ = 0.3, 0.7, 1.0, 1.8, 2.1, 2.9, 2.9, 3.2, 3.4, 3.9. Among these 25 individuals, four are censored, at uᵢ = 3.1, 4.0, 4.1, 4.8, and the rest are deaths: one at xᵢ = 3.9, three at 4.8, and seventeen (including the twelve individuals 19–30) at 5.0.
The death times, death counts, and risk sets are then:

j | yⱼ | sⱼ | rⱼ
1 | 0.1 | 1 | 30 − 0 − 0 = 30, or 0 + 30 − 0 − 0 = 30
2 | 0.5 | 1 | 31 − 1 − 0 = 30, or 30 + 1 − 1 − 0 = 30
3 | 0.8 | 1 | 32 − 2 − 0 = 30, or 30 + 1 − 1 − 0 = 30
4 | 1.8 | 2 | 33 − 3 − 1 = 29, or 30 + 1 − 1 − 1 = 29
5 | 2.1 | 1 | 34 − 5 − 1 = 28, or 29 + 1 − 2 − 0 = 28
6 | 2.5 | 1 | 35 − 6 − 1 = 28, or 28 + 1 − 1 − 0 = 28
7 | 2.8 | 1 | 35 − 7 − 1 = 27, or 28 + 0 − 1 − 0 = 27
8 | 3.9 | 2 | 39 − 8 − 4 = 27, or 27 + 4 − 1 − 3 = 27
9 | 4.0 | 1 | 40 − 10 − 4 = 26, or 27 + 1 − 2 − 0 = 26
10 | 4.1 | 1 | 40 − 11 − 6 = 23, or 26 + 0 − 1 − 2 = 23
11 | 4.8 | 3 | 40 − 12 − 7 = 21, or 23 + 0 − 1 − 1 = 21
12 | 5.0 | 17 | 40 − 15 − 8 = 17, or 21 + 0 − 3 − 1 = 17
12.2

S₄₀(t) =
1, 0 ≤ t < 0.1,
29/30 = 0.9667, 0.1 ≤ t < 0.5,
0.9667(29/30) = 0.9344, 0.5 ≤ t < 0.8,
0.9344(29/30) = 0.9033, 0.8 ≤ t < 1.8,
0.9033(27/29) = 0.8410, 1.8 ≤ t < 2.1,
0.8410(27/28) = 0.8110, 2.1 ≤ t < 2.5,
0.8110(27/28) = 0.7820, 2.5 ≤ t < 2.8,
0.7820(26/27) = 0.7530, 2.8 ≤ t < 3.9,
0.7530(25/27) = 0.6973, 3.9 ≤ t < 4.0,
0.6973(25/26) = 0.6704, 4.0 ≤ t < 4.1,
0.6704(22/23) = 0.6413, 4.1 ≤ t < 4.8,
0.6413(18/21) = 0.5497, 4.8 ≤ t < 5.0,
0.5497(0/17) = 0, t ≥ 5.0.
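The product-limit values follow mechanically from the (rⱼ, sⱼ) pairs in the Exercise 12.1 tables; a sketch:

```python
# Kaplan-Meier product from the (r_j, s_j) pairs of Exercise 12.1.
risk = [(30, 1), (30, 1), (30, 1), (29, 2), (28, 1), (28, 1),
        (27, 1), (27, 2), (26, 1), (23, 1), (21, 3), (17, 17)]
s = 1.0
surv = []
for r, d in risk:
    s *= (r - d) / r
    surv.append(round(s, 4))
print(surv)
```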
12.3

Ĥ(t) =
0, 0 ≤ t < 0.1,
1/30 = 0.0333, 0.1 ≤ t < 0.5,
0.0333 + 1/30 = 0.0667, 0.5 ≤ t < 0.8,
0.0667 + 1/30 = 0.1000, 0.8 ≤ t < 1.8,
0.1000 + 2/29 = 0.1690, 1.8 ≤ t < 2.1,
0.1690 + 1/28 = 0.2047, 2.1 ≤ t < 2.5,
0.2047 + 1/28 = 0.2404, 2.5 ≤ t < 2.8,
0.2404 + 1/27 = 0.2774, 2.8 ≤ t < 3.9,
0.2774 + 2/27 = 0.3515, 3.9 ≤ t < 4.0,
0.3515 + 1/26 = 0.3900, 4.0 ≤ t < 4.1,
0.3900 + 1/23 = 0.4334, 4.1 ≤ t < 4.8,
0.4334 + 3/21 = 0.5763, 4.8 ≤ t < 5.0,
0.5763 + 17/17 = 1.5763, t ≥ 5.0.

Ŝ(t) = e^{−Ĥ(t)} =
1, 0 ≤ t < 0.1,
e^{−0.0333} = 0.9672, 0.1 ≤ t < 0.5,
e^{−0.0667} = 0.9355, 0.5 ≤ t < 0.8,
e^{−0.1000} = 0.9048, 0.8 ≤ t < 1.8,
e^{−0.1690} = 0.8445, 1.8 ≤ t < 2.1,
e^{−0.2047} = 0.8149, 2.1 ≤ t < 2.5,
e^{−0.2404} = 0.7863, 2.5 ≤ t < 2.8,
e^{−0.2774} = 0.7578, 2.8 ≤ t < 3.9,
e^{−0.3515} = 0.7036, 3.9 ≤ t < 4.0,
e^{−0.3900} = 0.6771, 4.0 ≤ t < 4.1,
e^{−0.4334} = 0.6483, 4.1 ≤ t < 4.8,
e^{−0.5763} = 0.5620, 4.8 ≤ t < 5.0,
e^{−1.5763} = 0.2076, t ≥ 5.0.
12.4 Using the raw data, the results are in Table 12.3. When the deductible
and limit are imposed, the results are as in Table 12.4. Because 1,000 is a
censoring point and not an observed loss value, there is no change in the
survival function at 1,000.
Table 12.3 Calculations for Exercise 12.4.

Value (x) | r | s | S_KM(x) | H_NA(x) | S_NA(x)
27 | 20 | 1 | 0.95 | 0.0500 | 0.9512
82 | 19 | 1 | 0.90 | 0.1026 | 0.9025
115 | 18 | 1 | 0.85 | 0.1582 | 0.8537
126 | 17 | 1 | 0.80 | 0.2170 | 0.8049
155 | 16 | 1 | 0.75 | 0.2795 | 0.7562
161 | 15 | 1 | 0.70 | 0.3462 | 0.7074
243 | 14 | 1 | 0.65 | 0.4176 | 0.6586
294 | 13 | 1 | 0.60 | 0.4945 | 0.6099
340 | 12 | 1 | 0.55 | 0.5779 | 0.5611
384 | 11 | 1 | 0.50 | 0.6688 | 0.5123
457 | 10 | 1 | 0.45 | 0.7688 | 0.4636
680 | 9 | 1 | 0.40 | 0.8799 | 0.4148
855 | 8 | 1 | 0.35 | 1.0049 | 0.3661
877 | 7 | 1 | 0.30 | 1.1477 | 0.3174
974 | 6 | 1 | 0.25 | 1.3144 | 0.2686
1,193 | 5 | 1 | 0.20 | 1.5144 | 0.2199
1,340 | 4 | 1 | 0.15 | 1.7644 | 0.1713
1,884 | 3 | 1 | 0.10 | 2.0977 | 0.1227
2,558 | 2 | 1 | 0.05 | 2.5977 | 0.0744
15,743 | 1 | 1 | 0.00 | 3.5977 | 0.0274

Table 12.4 Further calculations for Exercise 12.4.

Value (x) | r | s | S_KM(x) | H_NA(x) | S_NA(x)
115 | 18 | 1 | 0.9444 | 0.0556 | 0.9459
126 | 17 | 1 | 0.8889 | 0.1144 | 0.8919
155 | 16 | 1 | 0.8333 | 0.1769 | 0.8379
161 | 15 | 1 | 0.7778 | 0.2435 | 0.7839
243 | 14 | 1 | 0.7222 | 0.3150 | 0.7298
294 | 13 | 1 | 0.6667 | 0.3919 | 0.6758
340 | 12 | 1 | 0.6111 | 0.4752 | 0.6218
384 | 11 | 1 | 0.5556 | 0.5661 | 0.5677
457 | 10 | 1 | 0.5000 | 0.6661 | 0.5137
680 | 9 | 1 | 0.4444 | 0.7773 | 0.4596
855 | 8 | 1 | 0.3889 | 0.9023 | 0.4056
877 | 7 | 1 | 0.3333 | 1.0451 | 0.3517
974 | 6 | 1 | 0.2778 | 1.2118 | 0.2977
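With complete data (r falling from 20 and every s equal to 1), the Table 12.3 columns can be regenerated directly (a sketch):

```python
import math

n = 20
H = 0.0
h_list, s_na = [], []
for i in range(n):
    H += 1 / (n - i)          # Nelson-Aalen increment 1/r
    h_list.append(round(H, 4))
    s_na.append(round(math.exp(-H), 4))
print(h_list[1], h_list[-1], s_na[0], s_na[-1])  # 0.1026 3.5977 0.9512 0.0274
```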
12.5 Suppose the lapse was at time 1. The estimate of S(4) is (3/4)(2/3)(1/2) = 0.25. If it is at time 2, the estimate is (4/5)(2/3)(1/2) = 0.27. If it is at time 3, the estimate is (4/5)(3/4)(1/2) = 0.30. If it is at time 4, the estimate is (4/5)(3/4)(2/3) = 0.40. If it is at time 5, the estimate is (4/5)(3/4)(2/3)(1/2) = 0.20. Therefore, the answer is time 5.
Table 12.5 Calculations for Exercise 12.7.

There are 300 lives at age 0, and the successive survival values are

S(1) = 294/300 = 0.98,
S(2) = 0.98(304/314) = 0.94879,
S(3) = 0.94879(294/304) = 0.91758,
S(4) = 0.892, with risk set 279 − a = 270, so a = 9,
S(5) = 0.856, with risk set 244 − a − b = 235 − b, so b = 4.

12.6 Ĥ(12) = 0.65. The estimate of the survival function is Ŝ(12) = e^{−0.65} = 0.522.

12.7 The information may be organized as in Table 12.5.
12.8 Ĥ(t₁₀) − Ĥ(t₉) = 1/r₁₀ = 0.077, so r₁₀ = 13 and n = 22. Then Ĥ(t₃) = 1/22 + 1/21 + 1/20 = 0.14307 and Ŝ(t₃) = e^{−0.14307} = 0.8667.
12.9 0.60 = 0.72(r₄ − 2)/r₄, so r₄ = 12. 0.50 = 0.60(r₅ − 1)/r₅, so r₅ = 6. With two deaths at the fourth death time and the risk set decreasing by 6, there must have been four censored observations.
12.10 0.575 = Ŝ(10) = e^{−Ĥ(10)}, so Ĥ(10) = −ln(0.575) = 0.5534. Setting the Nelson–Åalen sum equal to this value, the solution is k = 36.
12.11 With no censoring, the r-values are 12, 9, 8, 7, 6, 4, and 3, and the s-values are 3, 1, 1, 1, 2, 1, and 1. Then

Ĥ(7,000) = 3/12 + 1/9 + 1/8 + 1/7 + 2/6 + 1/4 + 1/3 = 1.5456.

With censoring, there are only five uncensored values with r-values 9, 8, 7, 4, and 3, and all five s-values are 1. Then

Ĥ(7,000) = 1/9 + 1/8 + 1/7 + 1/4 + 1/3 = 0.9623.
12.12 The Kaplan–Meier ratio at the fourth death time gives r₄ = 13. The risk set at the fifth death time is the 13 from the previous death time less the 3 who died and less the 6 who were censored. That leaves r₅ = 13 − 3 − 6 = 4. Then (r₅ − s₅)/r₅ = 0.5, and so s₅ = 2.

12.2 SECTION 12.2
12.13 To proceed, we need the estimated survival function at whole-number durations. From Exercise 11.1, we have S₃₀(1) = 27/30, S₃₀(2) = 25/30, S₃₀(3) = 22/30, S₃₀(4) = 19/30, and S₃₀(5) = 14/30. Then the mortality estimates are q₀ = 3/30, q₁ = 2/27, q₂ = 3/25, q₃ = 3/22, q₄ = 5/19, and ₅p₀ = 14/30. The six variances are

Var(q₀) = 3(27)/30³ = 0.003,
Var(q₁) = 2(25)/27³ = 0.002540,
Var(q₂) = 3(22)/25³ = 0.004224,
Var(q₃) = 3(19)/22³ = 0.005353,
Var(q₄) = 5(14)/19³ = 0.010206,
Var(₅p₀) = 14(16)/30³ = 0.008296.

The first and last variances are unconditional.
12.14 From the data set, there were 1,915 out of 94,935 that had two or more accidents. The estimated probability is 1,915/94,935 = 0.02017, and the estimated variance is

0.02017(0.97983)/94,935 = 2.08176 × 10⁻⁷.
12.15 From Exercise 12.13, S₃₀(3) = 22/30, and the direct estimate of its variance is 22(8)/30³ = 0.0065185. Using Greenwood's formula, the estimated variance is

(22/30)²{1/[30(29)] + 1/[29(28)] + 1/[28(27)] + 2/[27(25)] + 1/[25(24)] + 1/[24(23)] + 1/[23(22)]} = 22(8)/30³.

For ₂q₃, the point estimate is 8/22, and the direct estimate of the variance is 8(14)/22³ = 0.010518. Greenwood's estimate is

(14/22)²{2/[22(20)] + 1/[20(19)] + 1/[19(18)] + 2/[18(16)] + 2/[16(14)]} = 8(14)/22³.
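The equality of Greenwood's formula and the direct binomial variance under complete data can be confirmed numerically (a sketch using the (r, s) pairs from Exercise 11.1):

```python
rs = [(30, 1), (29, 1), (28, 1), (27, 2), (25, 1), (24, 1), (23, 1)]  # deaths up to time 3
S = 1.0
g = 0.0
for r, s in rs:
    S *= (r - s) / r
    g += s / (r * (r - s))
greenwood = S * S * g
direct = 22 * 8 / 30 ** 3
print(round(greenwood, 7), round(direct, 7))  # 0.0065185 0.0065185
```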
12.16 From Exercise 12.2, S₄₀(3) = 0.7530, and Greenwood's formula provides a variance of

0.7530²{1/[30(29)] + 1/[30(29)] + 1/[30(29)] + 2/[29(27)] + 1/[28(27)] + 1/[28(27)] + 1/[27(26)]} = 0.00571.

Also, ₂q₃ = (0.7530 − 0.5497)/0.7530 = 0.26999, and the variance is estimated as

0.73001²{2/[27(25)] + 1/[26(25)] + 1/[23(22)] + 3/[21(18)]} = 0.00768.
12.17 Using the log-transformed method,

U = exp[1.96√0.00571/(0.753 ln 0.753)] = 0.49991.

The lower limit is 0.753^{1/0.49991} = 0.56695, and the upper limit is 0.753^{0.49991} = 0.86778.
12.18 From Exercise 12.3, Ĥ(3) = 0.2774. The variance is estimated as

3/30² + 2/29² + 2/28² + 1/27² = 0.0096342.

The linear confidence interval is

0.2774 ± 1.96√0.0096342, or 0.0850 to 0.4698.

The log-transformed interval requires

U = exp[±1.96√0.0096342/0.2774] = 2.00074, or 0.49981.

The lower limit is 0.2774(0.49981) = 0.13865 and the upper limit is 0.2774(2.00074) = 0.55501.
12.19 Without any distributional assumptions, the variance is estimated as (1/5)(4/5)/5 = 0.032. From the distributional assumption, the true value of ₃q₇ is [(8/15) − (5/15)]/(8/15) = 3/8, and the variance is (3/8)(5/8)/5 = 0.046875. The difference is −0.014875.
12.20 First, obtain the estimated survival probability as 0.4032. Greenwood's formula then gives an estimated variance of 0.00551.
12.21 The Nelson–Åalen estimates are the centers of the confidence intervals, which are 0.15 and 0.27121. Therefore, s_{i+1}/r_{i+1} = 0.12121. From the first confidence interval, the estimated variance is (0.07875/1.96)² = 0.0016143, while for the second interval it is (0.11514/1.96)² = 0.0034510 and, therefore, s_{i+1}/r²_{i+1} = 0.0018367. Dividing the first result by the second gives r_{i+1} = 66. The first equation then yields s_{i+1} = 8.
12.22 Greenwood's estimate is

v = S²{2/[50(48)] + 4/[45(41)] + 8/[(41 − c)(33 − c)]}.

Then,

0.011467 = v/S² = 0.003001 + 8/[(41 − c)(33 − c)],
(41 − c)(33 − c) = 8/0.008466 = 945.

Solving the quadratic equation yields c = 6.
12.23 For the Nelson–Åalen estimate,

1.5641 = Ĥ(35) = 2/15 + 3/13 + 2/10 + d/8 + 2/(8 − d),
(1.5641 − 0.5641)8(8 − d) = d(8 − d) + 16,
0 = d² − 16d + 48,
d = 4.

The variance is

2/15² + 3/13² + 2/10² + 4/8² + 2/4² = 0.23414.
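The quadratic and the variance can be rechecked quickly (a sketch):

```python
# Solve 1 = d/8 + 2/(8 - d) for an integer death count d, then recompute the variance.
sols = [d for d in range(1, 8) if d * d - 16 * d + 48 == 0]
d = sols[0]
var = 2 / 15 ** 2 + 3 / 13 ** 2 + 2 / 10 ** 2 + d / 8 ** 2 + 2 / (8 - d) ** 2
print(d, round(var, 5))  # 4 0.23414
```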
12.24 The standard deviation is

(15/100² + 20/65² + 13/40²)^{1/2} = 0.11983.
Table 12.6 Data for Exercise 12.28.

yⱼ | p(yⱼ)
0.1 | 0.0333
0.5 | 0.0323
0.8 | 0.0311
1.8 | 0.0623
2.1 | 0.0300
2.5 | 0.0290
2.8 | 0.0290
3.9 | 0.0557
4.0 | 0.0269
4.1 | 0.0291
4.8 | 0.0916
12.25 The uncensored observations are 4 and 8, the two r-values are 10 and 5, and the two s-values are 2 and 1. Then Ŝ(11) = (8/10)(4/5) = 0.64. Greenwood's estimate is

(0.64)²{2/[10(8)] + 1/[5(4)]} = 0.03072.
12.26 At day 8 (the first uncensored time), r = 7 and s = 1. At day 12, r = 5 and s = 2. Then Ĥ(12) = 1/7 + 2/5 = 0.5429. Also, Var[Ĥ(12)] = 1/7² + 2/5² = 0.1004. The interval is 0.5429 ± 1.645√0.1004, which is (0.0217, 1.0641).
12.3 SECTION 12.3
12.27 In order for the mean to be equal to y, we must have θ/(α − 1) = y. Letting α be arbitrary (and greater than 1), use a Pareto distribution with θ = y(α − 1). This makes the kernel function

k_y(x) = α[(α − 1)y]^α / [(α − 1)y + x]^{α+1}.
12.28 The data points and probabilities can be taken from Exercise 12.2. They are given in Table 12.6.

The probability at 5.0 is discrete and so should not be spread out by the kernel density estimator. Because of the value at 0.1, the largest available bandwidth is 0.1. Using this bandwidth and the triangular kernel produces the graph in Figure 12.1.

Figure 12.1 Triangular kernel for Exercise 12.28.

This graph is clearly not satisfactory. The gamma kernel is not available because there would be positive probability at values greater than 5. Your author tried to solve this by using the beta distribution. With θ known to
be 5, the mean (to be set equal to y) is 5a/(a + b). To have some smoothing control and to determine parameter values, the sum a + b was fixed. Using a value of 50 for the sum, the kernel is

k_y(x) = [Γ(50)/(Γ(10y)Γ(50 − 10y))](x/5)^{10y}(1 − x/5)^{50−10y−1}(1/x), 0 < x < 5,

and the resulting smoothed estimate appears in Figure 12.2.
12.29 With a bandwidth of 60, the height of the kernel is 1/120. At a value
of 100, the following data points contribute probability 1/20: 47, 75, and 156.
Therefore, the height is 3(1/20)(1/120) = 1/800.
12.30 The uniform kernel spreads the probability of 0.1 to 10 units on either side of an observation. The observation at 25 contributes a density of 0.005 from 15 to 35, thus contributing nothing to survival past age 40. The same applies to the point at 30. The points at 35 each contribute probability from 25 to 45, and 0.25 of that probability is above 40. Together they contribute 2(0.25)(0.1) = 0.05. The point at 37 contributes (7/20)(0.1) = 0.035. The next four points contribute a total of (9/20 + 15/20 + 17/20 + 19/20)(0.1) = 0.3. The final point (at 55) contributes all its probability at points above 40 and so contributes 0.1 to the total, which is 0.05 + 0.035 + 0.3 + 0.1 = 0.485.
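A sketch of the calculation; the data points 39, 45, 47, and 49 are inferred from the stated contributions 9/20, 15/20, 17/20, and 19/20, so they are assumptions rather than values given in the text:

```python
points = [25, 30, 35, 35, 37, 39, 45, 47, 49, 55]  # last four interior points are assumed

def tail_mass(x, t, b=10):
    # mass a uniform kernel centered at x (bandwidth b) places above t
    return max(0.0, min(1.0, (x + b - t) / (2 * b)))

S40 = sum(0.1 * tail_mass(x, 40) for x in points)
print(round(S40, 3))  # 0.485
```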
Figure 12.2 Beta kernel for Exercise 12.28.
12.31 (a) With two of the five points below 150, the empirical estimate is 2/5 = 0.4.
(b) The point at 82 has probability 0.2 spread from 32 to 132; all is below 150, so the contribution is 0.2. The point at 126 has probability from 76 to 176; the contribution below 150 is (74/100)(0.2) = 0.148. The point at 161 contributes (39/100)(0.2) = 0.078. The last two points contribute nothing, so the estimate is 0.2 + 0.148 + 0.078 = 0.426.
(c) As in part (b), the first point contributes 0.2 and the last two points contribute nothing. The triangle has base 100 and area 0.2, so the height must be 0.004. For the point at 126, the probability excluded is the triangle from 150 to 176, which has base 26 and height at 150 of 0.004(26/50) = 0.00208. The area is 0.5(26)(0.00208) = 0.02704, and the contribution is 0.17296. For the point at 161, the area included is the triangle from 111 to 150, which has base 39 and height at 150 of 0.004(39/50) = 0.00312. The area is 0.5(39)(0.00312) = 0.06084. The estimate is 0.2 + 0.17296 + 0.06084 = 0.4338.
12.4 SECTION 12.4
12.32 The only change is that the entries in the dⱼ and wⱼ columns are swapped. The exposures then change. For example, at j = 2, the exposure is 28 + 0 + 3(1/2) − 2(1/2) = 28.5.
Table 12.7 Kaplan–Meier calculations for Exercise 12.33.

x | r | s | Ŝ(x)
45.3 | 8 | 1 | 7/8 = 0.875
45.4 | 7 | 1 | 0.875(6/7) = 0.750
46.2 | 8 | 1 | 0.750(7/8) = 0.656
46.4 | 6 | 1 | 0.656(5/6) = 0.547
46.7 | 5 | 1 | 0.547(4/5) = 0.438

Table 12.8 Calculations for Exercise 12.34.

cⱼ | pⱼ | nⱼ | wⱼ | dⱼ | dⱼ/rⱼ | Ŝ(cⱼ)
250 | 0 | 7 | 0 | 1 | 1/7 | 1.000
500 | 6 | 8 | 0 | 2 | 2/14 | 1.000(6/7) = 0.857
1,000 | 12 | 7 | 1 | 4 | 4/19 | 0.857(12/14) = 0.735
2,750 | 14 | 0 | 1 | 1 | 1/14 | 0.735(15/19) = 0.580
3,000 | 12 | 0 | 0 | 0 | 0/12 | 0.580(13/14) = 0.539
3,500 | 12 | 0 | 1 | 6 | 6/12 | 0.539(12/12) = 0.539
5,250 | 5 | 0 | 1 | 0 | 0/5 | 0.539(6/12) = 0.269
5,500 | 4 | 0 | 1 | 1 | 1/4 | 0.269(5/5) = 0.269
6,000 | 2 | 0 | 0 | 1 | 1/2 | 0.269(3/4) = 0.202
10,250 | 1 | 0 | 1 | 0 | 0/1 | 0.202(1/2) = 0.101
10,500 | 0 | 0 | 0 | 0 | — | 0.101(1/1) = 0.101
12.33 The Kaplan–Meier calculations appear in Table 12.7. Then q₄₅ = 1 − 0.750 = 0.250 and q₄₆ = 1 − 0.438/0.750 = 0.416.

For exact exposure at age 45, the first eight individuals contribute 1 + 1 + 0.3 + 1 + 0.4 + 1 + 0.4 + 0.8 = 5.9 of exposure, for q₄₅ = 1 − e^{−2/5.9} = 0.28751. For age 46, eight individuals contribute for an exposure of 0.7 + 1 + 1 + 1 + 0.3 + 0.2 + 0.4 + 0.9 = 5.5, for q₄₆ = 1 − e^{−3/5.5} = 0.42042.

For actuarial exposure at age 45, the two deaths add 1.3 to the exposure, for q₄₅ = 2/7.2 = 0.27778. For age 46, the three deaths add 1.7, for q₄₆ = 3/7.2 = 0.41667.
12.34 The calculations are in Table 12.8. The intervals were selected so that all deductibles and limits are at the boundaries and so no assumption is needed about the intermediate values. This setup means that no assumptions need to be made about the timing of entrants and withdrawals. When working with the observations, the deductible must be added to the payment in order to produce the loss amount. The requested probability is Ŝ(5,500)/Ŝ(500) = 0.269/0.857 = 0.314.
12.35 For the intervals, the n-values are 6, 6, 7, 0, 0, 0, and 0, the w-values are 0, 1, 1, 1, 0, 0, and 0, and the d-values are 1, 2, 4, 7, 1, 1, and 0. The P-values are then 6, 6 − 0 − 1 + 6 = 11, 11 − 1 − 2 + 7 = 15, 15 − 1 − 4 + 0 = 10, 10 − 1 − 7 + 0 = 2, 2 − 0 − 1 + 0 = 1, and 1 − 0 − 1 + 0 = 0. The estimates are Ŝ(500) = 5/6 and Ŝ(6,000) = (5/6)(9/11)(11/15)(3/10)(1/2) = 3/40, where both estimates are conditioned on survival to 250. The answer is the ratio (3/40)/(5/6) = 9/100 = 0.09.
CHAPTER 13
CHAPTER 13 SOLUTIONS
13.1 SECTION 13.1
13.1 The mean of the data is μ′₁ = [27 + 82 + 115 + 126 + 155 + 161 + 243 + 13(250)]/20 = 207.95. The expected value of a single observation censored at 250 is E(X ∧ 250) = θ(1 − e^{−250/θ}). Setting the two equal and solving produce θ = 657.26.
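The equation θ(1 − e^{−250/θ}) = 207.95 has no closed form; a bisection sketch (not from the text) recovers the estimate:

```python
import math

target = 207.95

def lev(theta):  # E(X ^ 250) for an exponential with mean theta
    return theta * (1 - math.exp(-250 / theta))

lo, hi = 100.0, 5000.0  # lev is increasing in theta, so bisection applies
for _ in range(100):
    mid = (lo + hi) / 2
    if lev(mid) < target:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2
print(round(theta, 2))
```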
13.2 The equations to solve are

exp(μ + σ²/2) = 1,424.4,
exp(2μ + 2σ²) = 13,238,441.9.

Taking logarithms yields

μ + σ²/2 = 7.261506,
2μ + 2σ² = 16.398635.

The solution of these two equations is μ = 6.323695 and σ² = 1.875623, and then σ = 1.369534.
13.3 The two equations to solve are

0.2 = 1 − e^{−(5/θ)^τ},
0.8 = 1 − e^{−(12/θ)^τ}.

Moving the 1 to the left-hand side and taking logarithms produces

0.22314 = (5/θ)^τ,
1.60944 = (12/θ)^τ.

Dividing the second equation by the first equation produces

7.21269 = 2.4^τ.

Taking logarithms again produces

1.97584 = 0.87547τ,

and so τ = 2.25689. Using the first equation,

θ = 5/(0.22314^{1/2.25689}) = 9.71868.

Then S(8) = e^{−(8/9.71868)^{2.25689}} = 0.52490.
13.4 The equations to solve are

0.5 = 1 − exp[−(10,000/θ)^τ],
0.9 = 1 − exp[−(100,000/θ)^τ].

Then

ln 0.5 = −0.69315 = −(10,000/θ)^τ,
ln 0.1 = −2.30259 = −(100,000/θ)^τ.

Dividing the second equation by the first gives

3.32192 = 10^τ,
ln 3.32192 = 1.20054 = τ ln 10 = 2.30259τ,
τ = 1.20054/2.30259 = 0.52139.

Then

0.69315 = (10,000/θ)^{0.52139},
0.69315^{1/0.52139} = 0.49512 = 10,000/θ,
θ = 20,197.
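The two-percentile Weibull match can be written in one general formula; a sketch for this exercise:

```python
import math

p1, x1 = 0.5, 10_000
p2, x2 = 0.9, 100_000

# From -ln(1-p) = (x/theta)^tau at the two percentiles:
tau = math.log(math.log(1 - p2) / math.log(1 - p1)) / math.log(x2 / x1)
theta = x1 / (-math.log(1 - p1)) ** (1 / tau)
print(round(tau, 5), round(theta))  # 0.52139 20197
```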
13.5 The two moment equations are

(4 + 5 + 21 + 99 + 421)/5 = 110 = θ/(α − 1),
(4² + 5² + 21² + 99² + 421²)/5 = 37,504.8 = 2θ²/[(α − 1)(α − 2)].

Dividing the second equation by the square of the first equation gives

3.0996 = 2(α − 1)/(α − 2).

The solution is α̂ = 3.8188. From the first equation, θ̂ = 110(2.8188) =
310.068. For the 95th percentile,

0.95 = 1 − [310.068/(310.068 + π₀.₉₅)]^{3.8188}

for π̂₀.₉₅ = 369.37.
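Both the moment match and the percentile inversion have closed forms for the Pareto; a sketch reproducing the numbers above:

```python
# Method of moments for the Pareto (Exercise 13.5):
# m2/m1² = 2(α−1)/(α−2), then θ = m1(α−1), and the 95th percentile
# solves (θ/(θ+π))^α = 0.05.
xs = [4, 5, 21, 99, 421]
m1 = sum(xs) / len(xs)
m2 = sum(x * x for x in xs) / len(xs)
r = m2 / m1 ** 2                    # = 2(α−1)/(α−2)
alpha = (2 * r - 2) / (r - 2)
theta = m1 * (alpha - 1)
pi95 = theta * (0.05 ** (-1.0 / alpha) - 1.0)
print(round(alpha, 4), round(theta, 3), round(pi95, 2))
```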
13.6 After inflation, the 100 claims from year 1 total 100(10,000)(1.1)2 =
1,210,000, and the 200 claims from year 2 total 200(12,500)(1.1) = 2,750,000.
The average of the 300 inflated claims is 3,960,000/300 = 13,200. The moment
equation is 13,200 = θ/(3 − 1), which yields θ̂ = 26,400.
13.7 The equations to solve are

0.2 = F(18.25) = Φ[(ln 18.25 − μ)/σ],
0.8 = F(35.8) = Φ[(ln 35.8 − μ)/σ].

The 20th and 80th percentiles of the normal distribution are −0.842 and 0.842,
respectively. The equations become

−0.842 = (2.904 − μ)/σ,
0.842 = (3.578 − μ)/σ.

Dividing the first equation by the second yields

−1 = (2.904 − μ)/(3.578 − μ).

The solution is μ̂ = 3.241, and substituting in either equation yields σ̂ = 0.4.
The probability of exceeding 30 is

Pr(X > 30) = 1 − F(30) = 1 − Φ[(ln 30 − 3.241)/0.4] = 1 − Φ(0.4) = 1 − 0.6554 = 0.3446.
13.8 For a mixture, the mean and second moment are a combination of the
individual moments. The first two moments are

E(X) = p(1) + (1 − p)(10) = 10 − 9p,
E(X²) = p(2) + (1 − p)(200) = 200 − 198p,
Var(X) = 200 − 198p − (10 − 9p)² = 100 − 18p − 81p² = 4.

The only positive root of the quadratic equation is p̂ = 0.983.
13.9 We need the 0.6(21) = 12.6th smallest observation. It is 0.4(38) +
0.6(39) = 38.6.
13.10 We need the 0.75(21) = 15.75th smallest observation. It is 0.25(13) +
0.75(14) = 13.75.
13.11 μ̂ = 975, μ̂'₂ = 977,916 2/3, σ̂² = 977,916 2/3 − 975² = 27,291 2/3. The moment
equations are 975 = αθ and 27,291 2/3 = αθ². The solutions are α̂ = 34.8321
and θ̂ = 27.9915.
13.12 F(x) = (x/θ)^γ/[1 + (x/θ)^γ]. The equations are 0.2 = (100/θ)^γ/[1 +
(100/θ)^γ] and 0.8 = (400/θ)^γ/[1 + (400/θ)^γ]. From the first equation, 0.2 =
0.8(100/θ)^γ, or θ^γ = 4(100)^γ. Insert this result in the second equation to get
0.8 = 4^{γ−1}/(1 + 4^{γ−1}), and so γ̂ = 2 and then θ̂ = 200.
13.13 E(X) = ∫₀¹ x·px^{p−1} dx = p/(1 + p) = x̄, and so p̂ = x̄/(1 − x̄).
13.14 μ̂ = 3,800 = αθ. μ̂'₂ = 16,332,000, σ̂² = 16,332,000 − 3,800² = 1,892,000 = αθ². α̂ = 7.6321,
θ̂ = 497.89.
13.15 μ̂ = 2,000 = exp(μ + σ²/2), μ̂'₂ = 6,000,000 = exp(2μ + 2σ²). Thus 7.600902 =
μ + σ²/2 and 15.60727 = 2μ + 2σ². The solutions are μ̂ = 7.39817 and
σ̂ = 0.636761. Then

Pr(X > 4,500) = 1 − Φ[(ln 4,500 − 7.39817)/0.636761] = 1 − Φ(1.5919) = 0.056.

13.16 μ̂ = 4.2 = (β/2)√(2π). β̂ = 3.35112.
13.17 X is Pareto, and so E(X) = 1,000/(α − 1) = x̄ = 318.4. α̂ = 4.141.
13.18 rβ = 0.1001, rβ(1 + β) = 0.1103 − 0.1001² = 0.10027999. Then 1 + β =
1.0017981, β̂ = 0.0017981, r̂ = 55.670.
13.19 rβ = 0.166, rβ(1 + β) = 0.252 − 0.166² = 0.224444. Then 1 + β = 1.352072,
β̂ = 0.352072, r̂ = 0.47149.
13.20 With n + 1 = 16, we need the 0.3(16) = 4.8 and 0.65(16) = 10.4 smallest
observations. They are 0.2(280) + 0.8(350) = 336 and 0.6(450) + 0.4(490) =
466. The equations are

0.3 = 1 − [θ^γ/(θ^γ + 336^γ)]²  and  0.65 = 1 − [θ^γ/(θ^γ + 466^γ)]²,

which can be rewritten as

(0.7)^{−1/2} = 1 + (336/θ)^γ  and  (0.35)^{−1/2} = 1 + (466/θ)^γ.

Dividing the two equations (after subtracting 1 from each) gives

ln(0.282814) = γ ln(336/466), γ̂ = 3.8614,
(0.19523)^{1/3.8614} = 336/θ, θ̂ = 558.74.
13.21 With n + 1 = 17, we need the 0.2(17) = 3.4 and 0.7(17) = 11.9 smallest
observations. They are 0.6(75) + 0.4(81) = 77.4 and 0.1(122) + 0.9(125) =
124.7. The equations are

0.2 = 1 − e^{−(77.4/θ)^τ}  and  0.7 = 1 − e^{−(124.7/θ)^τ},
−ln 0.8 = 0.22314 = (77.4/θ)^τ  and  −ln 0.3 = 1.20397 = (124.7/θ)^τ,
1.20397/0.22314 = 5.39558 = (124.7/77.4)^τ,
τ̂ = ln(5.39558)/ln(124.7/77.4) = 3.53427,
θ̂ = 77.4/0.22314^{1/3.53427} = 118.32.
13.22 Shifting adds δ to the mean and median. The median of the unshifted
distribution is the solution to 0.5 = S(m) = e^{−m/θ} for m = θ ln(2). The
equations to solve are

300 = θ + δ  and  240 = θ ln(2) + δ.

Subtracting the second equation from the first gives 60 = θ[1 − ln(2)] for
θ̂ = 195.53. From the first equation, δ̂ = 104.47.
13.2
SECTION 13.2
13.23 For the inverse exponential distribution,

l(θ) = Σ_{j=1}^n (ln θ − θx_j^{−1} − 2 ln x_j) = n ln θ − nȳθ − 2Σ_{j=1}^n ln x_j,
l'(θ) = nθ^{−1} − nȳ,
θ̂ = ȳ^{−1}, where ȳ = (1/n)Σ_{j=1}^n 1/x_j.

For Data Set B, we have θ̂ = 197.72 and the loglikelihood value is −159.78.
Because the mean does not exist for the inverse exponential distribution,
there is no traditional method-of-moments estimate available. However, it
is possible to obtain a method-of-moments estimate using the negative first
moment rather than the positive first moment. That is, equate the average
reciprocal to the expected reciprocal:

(1/n)Σ_{j=1}^n x_j^{−1} = E(X^{−1}) = θ^{−1}.

This special method-of-moments estimate is identical to the maximum likelihood estimate.
For the inverse gamma distribution with α = 2,

l(θ) = Σ_{j=1}^n (2 ln θ − θx_j^{−1} − 3 ln x_j) = 2n ln θ − nȳθ − 3Σ_{j=1}^n ln x_j,
l'(θ) = 2nθ^{−1} − nȳ, θ̂ = 2/ȳ.

For Data Set B, θ̂ = 395.44 and the value of the loglikelihood function is
−169.07. The method-of-moments estimate solves the equation

1,424.4 = θ/(α − 1) = θ, θ̃ = 1,424.4,

which differs from the maximum likelihood estimate.
For the inverse gamma distribution with both parameters unknown,

f(x|θ) = θ^α e^{−θ/x}/[x^{α+1}Γ(α)], ln f(x|θ) = α ln θ − θx^{−1} − (α + 1)ln x − ln Γ(α).

The likelihood function must be maximized numerically. The answer is α̂ =
0.70888 and θ̂ = 140.16 and the loglikelihood value is −158.88. The method-of-moments
estimate is the solution to the two equations

1,424.4 = θ/(α − 1),
13,238,441.9 = θ²/[(α − 1)(α − 2)].

Squaring the first equation and dividing it into the second give

6.52489 = (α − 1)/(α − 2),

which leads to α̃ = 2.181 and then θ̃ = 1,682.2. This result does not match
the maximum likelihood estimate (which had to be the case because the mle
produces a model that does not have a mean).

Table 13.1  Estimates for Exercise 13.25.

Model             Original                       Censored
Exponential       θ̂ = 1,424.4                   θ̂ = 594.14
Gamma             α̂ = 0.55616, θ̂ = 2,561.1      α̂ = 1.5183, θ̂ = 295.69
Inv. exponential  θ̂ = 197.72                    θ̂ = 189.78
Inv. gamma        α̂ = 0.70888, θ̂ = 140.16       α̂ = 0.41612, θ̂ = 86.290
13.24 For the inverse exponential distribution, the cdf is F(x) = e^{−θ/x}. Numerical
maximization yields θ̂ = 6,662.39 and the value of the loglikelihood
function is −365.40. For the gamma distribution, the cdf requires numerical
evaluation. In Excel® the function GAMMADIST(x, α, θ, true) can be
used. The estimates are α̂ = 0.37139 and θ̂ = 83,020. The value of the loglikelihood
function is −360.50. For the inverse gamma distribution, the cdf
is available in Excel® as 1−GAMMADIST(1/x, α, 1/θ, true). The estimates
are α̂ = 0.83556 and θ̂ = 5,113. The value of the loglikelihood function is
−363.92.
13.25 In each case the likelihood function is f(27)f(82)···f(243)[1 − F(250)]^{13}.
Table 13.1 provides the estimates for both the original and censored data sets.
The censoring tended to disguise the true nature of these numbers and, in general, had a large impact on the estimates.
13.26 The calculations are done as in Example 13.6, but with θ unknown.
The likelihood must be numerically maximized. For the shifted data, the
estimates are α̂ = 1.4521 and θ̂ = 907.98. The two expected costs are
907.98/0.4521 = 2,008 and 1,107.98/0.4521 = 2,451 for the 200 and 400 deductibles,
respectively. For the unshifted data, the estimates are α̂ = 1.4521
and θ̂ = 707.98. The three expected costs are 707.98/0.4521 = 1,566, 2,008,
and 2,451 for the 0, 200, and 400 deductibles, respectively. While it is always
the case that for the Pareto distribution the two approaches produce identical
answers, that will not be true in general.

Table 13.2  Probabilities for Exercise 13.29.

Event                              Probability
Observed at age 35.4 and died      [F(1) − F(0.4)]/[1 − F(0.4)] = 0.6w/(1 − 0.4w)
Observed at age 35.4 and survived  [1 − F(1)]/[1 − F(0.4)] = (1 − w)/(1 − 0.4w)
Observed at age 35 and died        F(1) = w
Observed at age 35 and survived    1 − F(1) = 1 − w
13.27 Table 13.1 can be used. The only difference is that observations that
were surrenders are now treated as x-values and deaths are treated as y-values.
Observations that ended at 5.0 continue to be treated as y-values. Once again
there is no estimate for a Pareto model. The gamma parameter estimates are
α̂ = 1.229 and θ̂ = 6.452.
13.28 The contribution to the likelihood for the first five values (number of
drivers having zero through four accidents) is unchanged. However, for the
last seven drivers, the contribution is

[Pr(X ≥ 5)]⁷ = [1 − p(0) − p(1) − p(2) − p(3) − p(4)]⁷,

and the maximum must be obtained numerically. The estimated values are
λ̂ = 0.16313 and q̂ = 0.02039. These answers are similar to those for Example
13.8 because the probability of six or more accidents is so small.
13.29 There are four cases, with the likelihood function being the product
of the probabilities for those cases raised to a power equal to the number of
times each occurred. Table 13.2 provides the probabilities.
The likelihood function is

L = [0.6w/(1 − 0.4w)]⁸[(1 − w)/(1 − 0.4w)]²w⁶(1 − w)^{14} ∝ w^{14}(1 − w)^{16}(1 − 0.4w)^{−10},

and its logarithm is

l = 14 ln w + 16 ln(1 − w) − 10 ln(1 − 0.4w).

The derivative is

l' = 14/w − 16/(1 − w) + 4/(1 − 0.4w).

Set the derivative equal to zero and clear the denominators to produce the
equation

0 = 14(1 − w)(1 − 0.4w) − 16w(1 − 0.4w) + 4w(1 − w)
  = 14 − 31.6w + 8w²,

and the solution is ŵ = 0.508 (the other root is greater than one and
so cannot be the solution).
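The final quadratic can be checked directly with the usual formula; a short sketch:

```python
import math

# Exercise 13.29: the likelihood equation reduces to 8w² − 31.6w + 14 = 0;
# the admissible root is the one below 1.
a, b, c = 8.0, -31.6, 14.0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
w = min(roots)          # the other root exceeds 1
print(round(w, 3))
```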
13.30 The survival function is

S(t) = e^{−λ₁t},               0 < t ≤ 2,
     = e^{−2λ₁−(t−2)λ₂},       t > 2,

and the density function is

f(t) = −S'(t) = λ₁e^{−λ₁t},            0 < t ≤ 2,
              = λ₂e^{−2λ₁−(t−2)λ₂},    t > 2.

The likelihood function is

L = f(1.7)S(1.5)S(2.6)f(3.3)S(3.5)
  = λ₁e^{−1.7λ₁} e^{−1.5λ₁} e^{−2λ₁−0.6λ₂} λ₂e^{−2λ₁−1.3λ₂} e^{−2λ₁−1.5λ₂}
  = λ₁λ₂e^{−9.2λ₁−3.4λ₂}.

The logarithm and partial derivatives are

l = ln λ₁ + ln λ₂ − 9.2λ₁ − 3.4λ₂,
∂l/∂λ₁ = λ₁^{−1} − 9.2 = 0,
∂l/∂λ₂ = λ₂^{−1} − 3.4 = 0,

and the solutions are λ̂₁ = 0.10870 and λ̂₂ = 0.29412.
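For a piecewise-constant hazard, this maximization is equivalent to "deaths in a piece divided by exposure in that piece." A small sketch of that restatement (variable names are mine, not the text's):

```python
# Exercise 13.30 restated: λ̂ in each piece = deaths / exposure.
# Observations: deaths at 1.7 and 3.3; censored at 1.5, 2.6, and 3.5.
times  = [1.7, 1.5, 2.6, 3.3, 3.5]
deaths = [True, False, False, True, False]

exp1 = sum(min(t, 2.0) for t in times)               # exposure on (0, 2]: 9.2
exp2 = sum(max(t - 2.0, 0.0) for t in times)         # exposure beyond 2: 3.4
d1 = sum(d and t <= 2.0 for t, d in zip(times, deaths))   # deaths on (0, 2]
d2 = sum(d and t > 2.0 for t, d in zip(times, deaths))    # deaths beyond 2
lam1, lam2 = d1 / exp1, d2 / exp2
print(round(lam1, 5), round(lam2, 5))
```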
13.31 Let f(t) = w be the density function. For the eight lives that lived the
full year, the contribution to the likelihood is Pr(T > 1) = 1 − w. For the one
censored life, the contribution is Pr(T > 0.5) = 1 − 0.5w. For the one death,
the contribution is Pr(T ≤ 1) = w. Then

L = (1 − w)⁸(1 − 0.5w)w,
ln L = 8 ln(1 − w) + ln(1 − 0.5w) + ln w,
d ln L/dw = −8/(1 − w) − 0.5/(1 − 0.5w) + 1/w.

Setting the derivative equal to zero and clearing the denominators give

0 = −8w(1 − 0.5w) − 0.5w(1 − w) + (1 − w)(1 − 0.5w).

The only root of this quadratic that is less than one is ŵ = 0.10557 = q̂ₓ.
13.32 For the two lives that died, the contribution to the likelihood function
is f(10), while for the eight lives that were censored, the contribution is S(10).
We have

S(t) = (1 − t/k)^{1/2}, f(t) = −S'(t) = (1/2k)(1 − t/k)^{−1/2},
L = f(10)²S(10)⁸ = (2k)^{−2}(1 − 10/k)^{−1}(1 − 10/k)⁴ ∝ (k − 10)³/k⁵,
ln L = 3 ln(k − 10) − 5 ln k,
d ln L/dk = 3/(k − 10) − 5/k,
0 = 3k − 5(k − 10) = 50 − 2k.

Therefore, k̂ = 25.
13.33 We have

L = f(1,100)f(3,200)f(3,300)f(3,500)f(3,900)[S(4,000)]^{495}
  = θ^{−1}e^{−1,100/θ} θ^{−1}e^{−3,200/θ} θ^{−1}e^{−3,300/θ} θ^{−1}e^{−3,500/θ} θ^{−1}e^{−3,900/θ} [e^{−4,000/θ}]^{495}
  = θ^{−5}e^{−1,995,000/θ},
ln L = −5 ln θ − 1,995,000/θ,
d ln L/dθ = −5/θ + 1,995,000/θ² = 0,

and the solution is θ̂ = 1,995,000/5 = 399,000.
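The censored-exponential mle is simply total observed time divided by the number of deaths; the arithmetic above, checked:

```python
# Exercise 13.33: exponential MLE with right censoring
# = (total time observed) / (number of deaths).
death_amounts = [1_100, 3_200, 3_300, 3_500, 3_900]
censored_total = 495 * 4_000
theta = (sum(death_amounts) + censored_total) / len(death_amounts)
print(theta)  # 399000.0
```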
13.34 For maximum likelihood, the contributions to the likelihood function
are (where q denotes the constant value of the time-to-death density function)

Event              Contribution
Survive to 36      Pr(T > 1) = 1 − q
Censored at 35.6   Pr(T > 0.6) = 1 − 0.6q
Die prior to 35.6  Pr(T ≤ 0.6) = 0.6q
Die after 35.6     Pr(0.6 < T ≤ 1) = 0.4q

Then,

L = (1 − q)^{72}(1 − 0.6q)^{15}(0.6q)^{10}(0.4q)³ ∝ (1 − q)^{72}(1 − 0.6q)^{15}q^{13},
ln L = 72 ln(1 − q) + 15 ln(1 − 0.6q) + 13 ln q,
d ln L/dq = −72/(1 − q) − 9/(1 − 0.6q) + 13/q.

The solution to the quadratic equation is q̂ = 0.13911 = q̂₃₅.

For the product-limit estimate, the risk set at time zero has 100 members.
The first 10 observations are all deaths, and so the successive factors are
(99/100), (98/99), ..., (90/91) and, therefore, Ŝ(0.6) = 90/100 = 0.9. The
15 censored observations reduce the risk set to 75. The next three deaths each
reduce the risk set by one, and so Ŝ(1) = 0.9(74/75)(73/74)(72/73) = 0.864.
Then q̂₃₅ = 1 − 0.864 = 0.136.
13.35 The density function is f(t) = −S'(t) = 1/w. For actuary X, the
likelihood function is

f(1)f(3)f(4)f(4)S(5) = (1/w)⁴(1 − 5/w) = w^{−4} − 5w^{−5}.

Setting the derivative equal to zero gives

0 = −4w^{−5} + 25w^{−6}, 4w = 25, ŵ = 6.25.

For actuary Y the likelihood function is

f(1)f(3)f(4)f(4)f(6) = w^{−5}.

This function appears to be strictly decreasing and, therefore, is maximized at
w = 0, an unsatisfactory answer. Most of the time the support of the random
variable can be ignored, but not this time. In this case, f(t) = 1/w only for
0 ≤ t ≤ w and is zero otherwise. Therefore, the likelihood function is only
w^{−5} when all the observed values are less than or equal to w; otherwise the
function is zero. In other words,

L(w) = 0,      w < 6,
     = w^{−5}, w ≥ 6.

This result makes sense because the likelihood that w is less than 6 should be
zero. After all, such values of w are not possible, given the sampled values.
This likelihood function is not continuous and, therefore, derivatives cannot
be used to locate the maximum. Inspection quickly shows that the maximum
occurs at ŵ = 6.
13.36 The likelihood function is

L(w) = f(4)f(5)f(7)S(3 + r)/S(3)⁴ = [w^{−1}w^{−1}w^{−1}(w − 3 − r)w^{−1}]/[(w − 3)w^{−1}]⁴
     = (w − 3 − r)/(w − 3)⁴,
l(w) = ln(w − 3 − r) − 4 ln(w − 3).

The derivative of the logarithm is

l'(w) = 1/(w − 3 − r) − 4/(w − 3).

Inserting the estimate of w (for which ŵ − 3 = 10.67) and setting the derivative
equal to zero yield

0 = 1/(10.67 − r) − 4/10.67,
0 = 10.67 − 4(10.67 − r) = −32.01 + 4r,
r̂ = 8.
13.37 The survival function is

S(t) = e^{−λ₁t},                       0 ≤ t < 5,
     = e^{−5λ₁−(t−5)λ₂},               5 ≤ t < 10,
     = e^{−5λ₁−5λ₂−(t−10)λ₃},          t ≥ 10,

and the density function is

f(t) = λ₁e^{−λ₁t},                     0 ≤ t < 5,
     = λ₂e^{−5λ₁−(t−5)λ₂},             5 ≤ t < 10,
     = λ₃e^{−5λ₁−5λ₂−(t−10)λ₃},        t ≥ 10.

The likelihood function and its logarithm are

L(λ₁, λ₂, λ₃) = λ₁λ₂²λ₃⁴e^{−48λ₁−40λ₂−26λ₃},
ln L(λ₁, λ₂, λ₃) = ln λ₁ − 48λ₁ + 2 ln λ₂ − 40λ₂ + 4 ln λ₃ − 26λ₃.

The partial derivative with respect to λ₁ is λ₁^{−1} − 48 = 0 for λ̂₁ = 1/48.
Similarly, λ̂₂ = 2/40 and λ̂₃ = 4/26.
13.38 The density function is the derivative f(x) = α500^α x^{−α−1}. The likelihood function is

L(α) = α⁵500^{5α}(Πx_j)^{−α−1},

and its logarithm is

l(α) = 5 ln α + (5α) ln 500 − (α + 1)Σ ln x_j
     = 5 ln α + 31.073α − 33.111(α + 1).

Setting the derivative equal to zero gives

0 = 5α^{−1} − 2.038.

The estimate is α̂ = 5/2.038 = 2.45.
13.39 The coefficient (2πx)^{−1/2} is not relevant because it does not involve μ.
The logarithm of the likelihood function is

l(μ) = −(11 − μ)²/22 − (15.2 − μ)²/30.4 − (18 − μ)²/36 − (21 − μ)²/42 − (25.8 − μ)²/51.6.

The derivative is

l'(μ) = (11 − μ)/11 + (15.2 − μ)/15.2 + (18 − μ)/18 + (21 − μ)/21 + (25.8 − μ)/25.8
      = 5 − 0.29863μ.

Setting the derivative equal to zero yields μ̂ = 16.74.
13.40 The distribution and density function are

F(x) = 1 − e^{−(x/θ)²}, f(x) = (2x/θ²)e^{−(x/θ)²}.

The likelihood function is

L(θ) = f(20)f(30)f(45)[1 − F(50)]² ∝ θ^{−6}e^{−8,325/θ²}.

The logarithm and derivative are

l(θ) = −6 ln θ − 8,325θ^{−2},
l'(θ) = −6θ^{−1} + 16,650θ^{−3}.

Setting the derivative equal to zero yields θ̂ = (16,650/6)^{1/2} = 52.68.
13.41 For the exponential distribution, the maximum likelihood estimate is
the sample mean, and so x̄_P = 1,000 and x̄_S = 1,500. The likelihood with the
restriction is (using i to index observations from Phil's bulbs and j to index
observations from Sylvia's bulbs)

L(θ*) = Π_{i=1}^{20}(θ*)^{−1}exp(−x_i/θ*) Π_{j=1}^{10}(2θ*)^{−1}exp(−x_j/(2θ*))
      ∝ (θ*)^{−30}exp(−20x̄_P/θ* − 10x̄_S/(2θ*))
      = (θ*)^{−30}exp(−20,000/θ* − 7,500/θ*).

Taking logarithms and differentiating yields

l(θ*) = −30 ln θ* − 27,500/θ*,
l'(θ*) = −30(θ*)^{−1} + 27,500(θ*)^{−2}.

Setting the derivative equal to zero gives θ̂* = 27,500/30 = 916.67.
13.42 For the first part,

L(θ) = F(1,000)^{62}[1 − F(1,000)]^{38} = (1 − e^{−1,000/θ})^{62}(e^{−1,000/θ})^{38}.

Let x = e^{−1,000/θ}. Then

L(x) = (1 − x)^{62}x^{38},
l(x) = 62 ln(1 − x) + 38 ln x,
l'(x) = −62/(1 − x) + 38/x.

Setting the derivative equal to zero yields

0 = −62x + 38(1 − x) = 38 − 100x,
x̂ = 0.38,

and then θ̂ = −1,000/ln 0.38 = 1,033.50.

With the additional information,

L(θ) = [Π_{j=1}^{62} θ^{−1}e^{−x_j/θ}][1 − F(1,000)]^{38} = θ^{−62}e^{−28,140/θ}e^{−38,000/θ} = θ^{−62}e^{−66,140/θ},
l(θ) = −62 ln θ − 66,140/θ,
l'(θ) = −62/θ + 66,140/θ² = 0,
0 = −62θ + 66,140,
θ̂ = 66,140/62 = 1,066.77.
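Both estimates in Exercise 13.42 reduce to one-line computations; a quick check:

```python
import math

# Grouped data: x = e^(−1,000/θ) = 38/100, so θ = −1,000/ln(0.38).
# Exact data: total of 28,140 (uncensored) + 38 × 1,000 (censored), 62 deaths.
x = 38 / 100
theta_grouped = -1_000 / math.log(x)
theta_exact = (28_140 + 38 * 1_000) / 62
print(round(theta_grouped, 2), round(theta_exact, 2))
```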
Table 13.3  Likelihood contributions for Exercise 13.44.

Observation  Probability                                                        Loglikelihood
1997-1       Pr(N=1)/[Pr(N=1)+Pr(N=2)] = (1−p)p/[(1−p)p+(1−p)p²] = 1/(1+p)     −3 ln(1+p)
1997-2       Pr(N=2)/[Pr(N=1)+Pr(N=2)] = (1−p)p²/[(1−p)p+(1−p)p²] = p/(1+p)    ln p − ln(1+p)
1998-0       Pr(N=0)/[Pr(N=0)+Pr(N=1)] = (1−p)/[(1−p)+(1−p)p] = 1/(1+p)        −5 ln(1+p)
1998-1       Pr(N=1)/[Pr(N=0)+Pr(N=1)] = (1−p)p/[(1−p)+(1−p)p] = p/(1+p)       2 ln p − 2 ln(1+p)
1999-0       Pr(N=0)/Pr(N=0) = 1                                               0
13.43 The density function is

f(x) = 0.5x^{−0.5}θ^{−0.5}e^{−(x/θ)^{0.5}}.

The likelihood function and subsequent calculations are

L(θ) = Π_{j=1}^{10} 0.5x_j^{−0.5}θ^{−0.5}e^{−(x_j/θ)^{0.5}} ∝ θ^{−5}exp(−θ^{−0.5}Σ_{j=1}^{10} x_j^{0.5})
     = θ^{−5}exp(−488.97θ^{−0.5}),
l(θ) = −5 ln θ − 488.97θ^{−0.5},
l'(θ) = −5θ^{−1} + 244.485θ^{−1.5} = 0,
0 = −5θ^{0.5} + 244.485,

and so θ̂ = (244.485/5)² = 2,391.
13.44 Each observation has a uniquely determined conditional probability.
The contribution to the loglikelihood is given in Table 13.3. The total is
l(p) = 3 ln p − 11 ln(1 + p). Setting the derivative equal to zero gives 0 =
l'(p) = 3p^{−1} − 11(1 + p)^{−1}, and the solution is p̂ = 3/8.
13.45 L = 2^n θ^n(Πx_j)exp(−θΣx_j²), l = n ln 2 + n ln θ + Σ ln x_j − θΣx_j²,
l' = nθ^{−1} − Σx_j² = 0, θ̂ = n/Σx_j².
13.46 f(x) = px^{p−1}, L = p^n(Πx_j)^{p−1}, l = n ln p + (p − 1)Σ ln x_j, l' =
np^{−1} + Σ ln x_j = 0, p̂ = −n/Σ ln x_j.
13.47 (a) L = (Πx_j)^{α−1}exp(−Σx_j/θ)[Γ(α)]^{−n}θ^{−nα},

l = (α − 1)Σ ln x_j − θ^{−1}Σx_j − n ln Γ(α) − nα ln θ
  = 81.61837(α − 1) − 38,000θ^{−1} − 10 ln Γ(α) − 10α ln θ,
∂l/∂θ = 38,000θ^{−2} − 10αθ^{−1} = 0,
θ̂ = 38,000/(10 · 12) = 316.67.

(b) l is maximized at α̂ = 6.341 and θ̂ = 599.3 (with l = −86.835).
13.48 μ̂ = (1/n)Σ ln x_j = 7.33429. σ̂² = (1/n)Σ(ln x_j)² − 7.33429² = 0.567405,
σ̂ = 0.753263.

Pr(X > 4,500) = 1 − Φ[(ln 4,500 − 7.33429)/0.753263] = 1 − Φ(1.4305) = 0.076.
13.49 L = θ^{−n}exp(−Σx_j/θ), l = −n ln θ − θ^{−1}Σx_j, l' = −nθ^{−1} + θ^{−2}Σx_j = 0,
θ̂ = Σx_j/n.
13.50 L = θ^{−10}(Πx_j)exp[−Σx_j²/(2θ²)], l = −10 ln θ + Σ ln x_j − θ^{−2}Σx_j²/2,
l' = −10θ^{−1} + θ^{−3}Σx_j² = 0, θ̂ = √(Σx_j²/10) = 3.20031.
13.51 f(x) = αx^{−α−1}. L = α^n(Πx_j)^{−α−1}, l = n ln α − (α + 1)Σ ln x_j,
l' = nα^{−1} − Σ ln x_j = 0, α̂ = n/Σ ln x_j.
13.52 L(θ) ∝ (10 − θ)⁹θ^{11},

l(θ) = 9 ln(10 − θ) + 11 ln θ,
l'(θ) = −9/(10 − θ) + 11/θ = 0,
9θ = 11(10 − θ),
θ̂ = 110/20 = 5.5.
13.53

L(w) = (w − 4 − p)²/(w − 4)⁵,
l(w) = 2 ln(w − 4 − p) − 5 ln(w − 4),
l'(w) = 2/(w − 4 − p) − 5/(w − 4) = 0,

with solutions ŵ = 10, p̂ = 1.5.
13.54 The density function is f(x) = θx^{−2}e^{−θ/x}. The likelihood function is

L(θ) = θ(66^{−2})e^{−θ/66} θ(91^{−2})e^{−θ/91} θ(186^{−2})e^{−θ/186}(e^{−θ/66})⁷
     ∝ θ³e^{−0.148184θ},
l(θ) = 3 ln θ − 0.148184θ,
l'(θ) = 3θ^{−1} − 0.148184 = 0,
θ̂ = 3/0.148184 = 20.25.

The mode is θ̂/2 = 10.125.
13.55 The likelihood function, with truncation at 100, is

L(α) = Π_{j=1}^{7} f(x_j|α)/[1 − F(100|α)] = Π_{j=1}^{7} [α400^α(400 + x_j)^{−α−1}]/(400/500)^α,
l(α) = 7 ln α + 7α ln 400 − (α + 1)Σ ln(400 + x_j) − 7α ln 0.8
     = 7 ln α − 3.79α − 47.29,
l'(α) = 7α^{−1} − 3.79 = 0,
α̂ = 7/3.79 = 1.847.
13.56 With λ = 1,000,

L = α⁵λ^{5α}[Π(λ + x_j)]^{−α−1},
l = 5 ln α + 5α ln 1,000 − (α + 1)Σ ln(1,000 + x_j),
l' = 5α^{−1} + 34.5388 − 35.8331 = 0,
α̂ = 3.8629.
13.57 (a) Three observations exceed 200. The empirical estimate is 3/20 =
0.15.
(b) E(X) = 100α/(α − 1) = x̄ = 154.5, α̂ = 154.5/54.5 = 2.835, Pr(X >
200) = (100/200)^{2.835} = 0.140.
(c) f(x) = α100^α x^{−α−1}. L = α^{20}100^{20α}(Πx_j)^{−α−1},
l = 20 ln α + 20α ln 100 − (α + 1)Σ ln x_j,
l' = 20α^{−1} + 20 ln 100 − Σ ln x_j = 0,
α̂ = 20/(Σ ln x_j − 20 ln 100) = 20/(99.125 − 92.103) = 2.848.
Pr(X > 200) = (100/200)^{2.848} = 0.139.
13.58 The maximum likelihood estimate is Θ = 93.188.
13.59 (a) Expanding the square term by term,

Σ_{j=1}^n (x_j − μ)²/x_j = Σ_j x_j − 2nμ + μ²Σ_j x_j^{−1}.

(b) The loglikelihood is

l(μ, θ) = (n/2) ln θ − (θ/2)Σ_j (x_j − μ)²/(μ²x_j) + constant.

Differentiating with respect to μ,

∂l/∂μ = (θ/μ³)Σ_j (x_j − μ) = 0, μ̂ = x̄,

and differentiating with respect to θ,

∂l/∂θ = n/(2θ) − (1/2)Σ_j (x_j − μ̂)²/(μ̂²x_j) = 0,
θ̂ = nx̄²/Σ_j (x_j − x̄)²/x_j = n/Σ_j (x_j^{−1} − x̄^{−1}),

where the last equality follows from the expansion in part (a).

13.60

L(μ, θ) ∝ θ^{n/2}exp[−(θ/2)Σ_j m_j(x_j − μ)²],
l(μ, θ) = (n/2) ln θ − (θ/2)Σ_j m_j(x_j − μ)² + constant,
∂l/∂μ = θΣ_j m_j(x_j − μ) = 0, μ̂ = Σ_j m_jx_j/Σ_j m_j,
∂²l/∂μ² = −θΣ_j m_j < 0, hence, maximum,
∂l/∂θ = n/(2θ) − (1/2)Σ_j m_j(x_j − μ̂)² = 0, θ̂ = n[Σ_j m_j(x_j − μ̂)²]^{−1},
∂²l/∂θ² = −(n/2)θ^{−2} < 0, hence, maximum.

13.61

l(θ) = Σ_j ln p(x_j) + r(θ)Σ_j x_j − n ln q(θ),
l'(θ) = r'(θ)Σ_j x_j − n q'(θ)/q(θ) = 0.

Therefore,

q'(θ)/[r'(θ)q(θ)] = x̄.

But,

q'(θ)/[r'(θ)q(θ)] = E(X) = μ(θ),

and so μ(θ̂) = x̄.
13.3
SECTION 13.3
13.62 In general, for the exponential distribution,

l'(θ) = −nθ^{−1} + nx̄θ^{−2},
l''(θ) = nθ^{−2} − 2nx̄θ^{−3},
E[l''(θ)] = nθ^{−2} − 2nθθ^{−3} = −nθ^{−2},
Var(θ̂) ≈ θ²/n,

where the third line follows from E(X̄) = E(X) = θ. The estimated variance
for Data Set B is V̂ar(θ̂) = 1,424.4²/20 = 101,445.77 and the 95% confidence
interval is 1,424.4 ± 1.96(101,445.77)^{1/2}, or 1,424.4 ± 624.27. Note that in this
particular case Theorem 13.5 gives the exact value of the variance, because

Var(θ̂) = Var(X̄) = Var(X)/n = θ²/n.

For the gamma distribution,

l(α, θ) = (α − 1)Σ_{j=1}^n ln x_j − Σ_{j=1}^n x_jθ^{−1} − n ln Γ(α) − nα ln θ,
∂l(α, θ)/∂α = Σ_{j=1}^n ln x_j − nΓ'(α)/Γ(α) − n ln θ,
∂l(α, θ)/∂θ = Σ_{j=1}^n x_jθ^{−2} − nαθ^{−1},
∂²l(α, θ)/∂α² = −n[Γ(α)Γ''(α) − Γ'(α)²]/Γ(α)²,
∂²l(α, θ)/∂α∂θ = −nθ^{−1},
∂²l(α, θ)/∂θ² = −2Σ_{j=1}^n x_jθ^{−3} + nαθ^{−2} = −2nx̄θ^{−3} + nαθ^{−2}.

The first two second partial derivatives do not contain x_j, and so the expected
value is equal to the indicated quantity. For the final second partial derivative,
E(X̄) = E(X) = αθ. Therefore,

I(α, θ) = [ n[Γ(α)Γ''(α) − Γ'(α)²]/Γ(α)²   nθ^{−1}
            nθ^{−1}                         nαθ^{−2} ].

The derivatives of the gamma function are available in some better computer
packages, but are not available in Excel®. Using numerical derivatives of the
gamma function yields

I(α̂, θ̂) = [ 82.467      0.0078091
             0.0078091   0.0000016958 ]

and the covariance matrix is

[ 0.021503   −99.0188
  −99.0188   1,045,668 ].

Numerical second derivatives of the likelihood function (using h₁ = 0.00005
and h₂ = 0.25) yield

I(α̂, θ̂) = [ 82.467      0.0078091
             0.0078091   0.0000016959 ]

and covariance matrix

[ 0.021502   −99.0143
  −99.0143   1,045,620 ].

The confidence interval for α is 0.55616 ± 1.96(0.021502)^{1/2}, or 0.55616 ±
0.28741, and for θ is 2,561.1 ± 1.96(1,045,620)^{1/2}, or 2,561.1 ± 2,004.2.
13.63 The density function is f(x|θ) = θ^{−1}, 0 ≤ x ≤ θ. The likelihood
function is

L(θ) = θ^{−n}, 0 ≤ x₁, ..., xₙ ≤ θ,
     = 0, otherwise.

As a function of θ, the likelihood function is sometimes zero, and sometimes
θ^{−n}. In particular, it is θ^{−n} only when θ is greater than or equal to all the
xs. Equivalently, we have

L(θ) = 0, θ < max(x₁, ..., xₙ),
     = θ^{−n}, θ ≥ max(x₁, ..., xₙ).

Therefore, the likelihood function is maximized at θ̂ = max(x₁, ..., xₙ). Note
that the calculus technique of setting the derivative equal to zero does not
work here because the likelihood function is not continuous (and therefore
not differentiable) at the maximum. From Examples 10.5 and 10.7 we know
that this estimator is asymptotically unbiased and consistent, and we have its
variance without recourse to Theorem 13.5. According to Theorem 13.5, we
need

l(θ) = −n ln θ, θ ≥ max(x₁, ..., xₙ),
l'(θ) = −nθ^{−1}, θ ≥ max(x₁, ..., xₙ),
l''(θ) = nθ^{−2}, θ ≥ max(x₁, ..., xₙ).

Then E[l''(θ)] = nθ^{−2} because with regard to the random variables, nθ^{−2} is a
constant and, therefore, its expected value is itself. The information is then
the negative of this number and must be negative. With regard to assumption
(ii) of Theorem 13.5, the assumption fails because the region of support
depends on θ.

13.64 From Exercise 13.62 we have α̂ = 0.55616, θ̂ = 2,561.1, and covariance
matrix

[ 0.021503   −99.0188
  −99.0188   1,045,668 ].
The function to be estimated is g(α, θ) = αθ with partial derivatives g_α = θ and
g_θ = α. The approximated variance is

[ 2,561.1  0.55616 ] [ 0.021503   −99.0188  ] [ 2,561.1 ]
                     [ −99.0188   1,045,668 ] [ 0.55616 ]  = 182,402.

The confidence interval is 1,424.4 ± 1.96√182,402, or 1,424.4 ± 837.1.
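The delta-method quadratic form above is plain 2×2 arithmetic; a dependency-free check using the covariance matrix from Exercise 13.62:

```python
# Exercise 13.64: delta-method variance of g(α,θ) = αθ.
alpha, theta = 0.55616, 2_561.1
cov = [[0.021503, -99.0188], [-99.0188, 1_045_668.0]]
grad = [theta, alpha]                  # ∂(αθ)/∂α = θ, ∂(αθ)/∂θ = α

# variance ≈ gradᵀ · cov · grad
tmp = [cov[0][0] * grad[0] + cov[0][1] * grad[1],
       cov[1][0] * grad[0] + cov[1][1] * grad[1]]
var = grad[0] * tmp[0] + grad[1] * tmp[1]
half_width = 1.96 * var ** 0.5
print(round(var), round(half_width, 1))
```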
13.65 The partial derivatives of the mean are

∂e^{μ+σ²/2}/∂μ = e^{μ+σ²/2} = 123.017,
∂e^{μ+σ²/2}/∂σ = σe^{μ+σ²/2} = 134.458.

The estimated variance is then

[ 123.017  134.458 ] [ 0.1195  0      ] [ 123.017 ]
                     [ 0       0.0597 ] [ 134.458 ]  = 2,887.73.

13.66 The first partial derivatives are

∂l(α, β)/∂α = −5α − 3β + 50,
∂l(α, β)/∂β = −3α − 2β + 2.

The second partial derivatives are

∂²l(α, β)/∂α² = −5,
∂²l(α, β)/∂β² = −2,
∂²l(α, β)/∂α∂β = −3,

and so the information matrix is

[ 5  3
  3  2 ].

The covariance matrix is the inverse of the information, or

[ 2   −3
  −3  5 ].
13.67 For the first case, the observed loglikelihood and its derivatives are

l(θ) = 62 ln[1 − e^{−1,000/θ}] + 38 ln e^{−1,000/θ} = 62 ln[1 − e^{−1,000/θ}] − 38,000/θ,
l'(θ) = −62e^{−1,000/θ}(1,000/θ²)/(1 − e^{−1,000/θ}) + 38,000/θ²
      = [38,000e^{1,000/θ} − 100,000]/[θ²(e^{1,000/θ} − 1)].

Evaluating the second derivative at θ̂ = 1,033.50 and changing the sign give
Î(θ) = 0.00005372. The reciprocal gives the variance estimate of 18,614.
Similarly, for the case with more information,

l(θ) = −62 ln θ − 66,140θ^{−1},
l'(θ) = −62θ^{−1} + 66,140θ^{−2},
l''(θ) = 62θ^{−2} − 132,280θ^{−3},
l''(1,066.77) = −0.000054482.

The variance estimate is the negative reciprocal, or 18,355.2.
13.68 (a) f(x) = px^{p−1}, ln f(x) = ln p + (p − 1)ln x, ∂² ln f(x)/∂p² = −p^{−2},
I(p) = nE(p^{−2}) = np^{−2}, Var(p̂) = p²/n.
(b) From Exercise 13.46, p̂ = −n/Σ ln x_j. The CI is p̂ ± 1.96p̂/√n.
(c) μ = p/(1 + p). ∂μ/∂p = (1 + p)^{−2}. Var(μ̂) = (1 + p)^{−4}p²/n.
The CI is p̂(1 + p̂)^{−1} ± 1.96p̂(1 + p̂)^{−2}/√n.
13.69 (a) ln f(x) = −ln θ − x/θ, ∂² ln f(x)/∂θ² = θ^{−2} − 2θ^{−3}x, I(θ) =
nE(−θ^{−2} + 2θ^{−3}X) = nθ^{−2}, Var(θ̂) = θ²/n.
(b) From Exercise 13.49, θ̂ = x̄. The CI is x̄ ± 1.96x̄/√n.
(c) Var(X) = θ². dVar(X)/dθ = 2θ. V̂ar(X) = x̄². Var[V̂ar(X)] =
(2θ)²θ²/n = 4θ⁴/n. The CI is x̄² ± 1.96(2x̄²)/√n.
Comparing the variances at the same θ value would be more useful.
13.70 ln f(x) = −(1/2)ln(2πθ) − x²/(2θ), ∂² ln f(x)/∂θ² = (2θ²)^{−1} − x²θ^{−3},
and I(θ) = nE[−(2θ²)^{−1} + X²θ^{−3}] = n(2θ²)^{−1} since X ~ N(0, θ). Then
MSE(θ̂) = Var(θ̂) = 2θ²/n = 2θ̂²/n = 8/40 = 0.2.
13.71 The maximum likelihood estimate is θ̂ = x̄ = 1,000. Var(θ̂) = Var(X̄) =
θ²/6. The quantity to be estimated is S(θ) = e^{−1,500/θ}, and then

S'(θ) = 1,500θ^{−2}e^{−1,500/θ}.

From the delta method,

Var[S(θ̂)] ≈ [S'(θ̂)]²Var(θ̂)
           = [1,500(1,000)^{−2}e^{−1,500/1,000}]²(1,000²/6) = 0.01867.

The standard deviation is 0.13664 and with S(1,000) = 0.22313, the confidence interval is 0.22313 ± 1.96(0.13664), or 0.22313 ± 0.26781. An alternative
that does not use the delta method is to start with a confidence interval for θ:
1,000 ± 1.96(1,000)/√6, which is 1,000 ± 800.17. Putting the endpoints into
S(θ) produces the interval 0.00055 to 0.43463.
13.72 (a) L = F(2)[1 − F(2)]³. With f(x) = 2λxe^{−λx²}, F(2) = ∫₀² 2λxe^{−λx²}dx =
1 − e^{−4λ}. Then l = ln(1 − e^{−4λ}) − 12λ and

dl/dλ = (1 − e^{−4λ})^{−1}4e^{−4λ} − 12 = 0, e^{−4λ} = 3/4, λ̂ = (1/4)ln(4/3).

(b) P₁(λ) = 1 − e^{−4λ}, P₂(λ) = e^{−4λ}, P₁'(λ) = 4e^{−4λ}, P₂'(λ) = −4e^{−4λ}.

I(λ) = 4[16e^{−8λ}/(1 − e^{−4λ}) + 16e^{−8λ}/e^{−4λ}].

Var(λ̂) = {4[16(9/16)/(1/4) + 16(9/16)/(3/4)]}^{−1} = 1/192.
13.73 The loglikelihood function is l = 81.61837(α − 1) − 38,000θ^{−1} − 10 ln Γ(α)
− 10α ln θ. Also, α̂ = 6.341 and θ̂ = 599.3. Using v = 4 (so h₁ = 0.0006341 and
h₂ = 0.05993), the approximate second partial derivatives are

∂²l(α, θ)/∂α² ≈ [l(6.3416341, 599.3) − 2l(6.341, 599.3) + l(6.3403659, 599.3)]/(0.0006341)²
             = −1.70790,
∂²l(α, θ)/∂α∂θ ≈ [l(6.34131705, 599.329965) − l(6.34131705, 599.270035)
             − l(6.34068295, 599.329965) + l(6.34068295, 599.270035)]/[(0.0006341)(0.05993)]
             = −0.0166861,
∂²l(α, θ)/∂θ² ≈ [l(6.341, 599.35993) − 2l(6.341, 599.3) + l(6.341, 599.24007)]/(0.05993)²
             = −0.000176536.

The information matrix is

I(α̂, θ̂) = [ 1.70790    0.0166861
             0.0166861  0.000176536 ]

and its inverse is

V̂ar = [ 7.64976   −723.055
        −723.055  74,007.7 ].

The mean is αθ, and so the derivative vector is [ 599.3  6.341 ]. The variance
of α̂θ̂ is estimated as 227,763 and a 95% CI is 3,800 ± 1.96√227,763 = 3,800 ±
935.
13.74 α̂ = 3.8629. ln f(x) = ln α + α ln λ − (α + 1)ln(λ + x). ∂² ln f(x)/∂α² =
−α^{−2}. I(α) = nα^{−2}. Var(α̂) = α²/n. Inserting the estimate gives 2.9844.

E(X ∧ 500) = ∫₀^{500} xα1,000^α(1,000 + x)^{−α−1}dx + 500(1,000/1,500)^α
           = [1,000/(α − 1)][1 − (2/3)^{α−1}].

Evaluated at α̂, it is 239.88. The derivative with respect to α is

−[1,000/(α − 1)²][1 − (2/3)^{α−1}] − [1,000/(α − 1)](2/3)^{α−1}ln(2/3),

which is −39.428 when evaluated at α̂. The variance of the LEV estimator is
(−39.428)²(2.9844) = 4,639.5, and the CI is 239.88 ± 133.50.
13.75 (a) Let θ = μ/Γ(1 + τ^{−1}). From Appendix A, E(X) = θΓ(1 + τ^{−1}) = μ.
(b) The density function is f(x) = (τ/θ)(x/θ)^{τ−1}exp[−(x/θ)^τ], and with the
substitution above its logarithm is

ln f(x) = −[Γ(1 + τ^{−1})x/μ]^τ + ln τ + τ ln[Γ(1 + τ^{−1})/μ] + (τ − 1)ln x.

The loglikelihood function is

l(μ) = Σ_{j=1}^n {−[Γ(1 + τ^{−1})x_j/μ]^τ + ln τ + τ ln[Γ(1 + τ^{−1})/μ] + (τ − 1)ln x_j},

and its derivative is

l'(μ) = Σ_{j=1}^n {τ[Γ(1 + τ^{−1})x_j]^τμ^{−τ−1} − τ/μ}.

Setting the derivative equal to zero, moving the last term to the right-hand
side, multiplying by μ, and dividing by τ produce the equation

Σ_{j=1}^n [Γ(1 + τ^{−1})x_j/μ]^τ = n    (13.1)

and, finally,

μ̂ = Γ(1 + τ^{−1})[Σ_{j=1}^n x_j^τ/n]^{1/τ}.

(c) The second derivative of the loglikelihood function is

l''(μ) = −τ(τ + 1)[Γ(1 + τ^{−1})]^τμ^{−τ−2}Σ_{j=1}^n x_j^τ + nτ/μ².

From (13.1), Σ_{j=1}^n x_j^τ can be written as nμ̂^τ/[Γ(1 + τ^{−1})]^τ, and therefore
the observed information is

−l''(μ̂) = τ(τ + 1)n/μ̂² − τn/μ̂² = nτ²/μ̂²,

and the reciprocal, μ̂²/(nτ²), provides the variance estimate.
(d) The information requires the expected value of X^τ. From Appendix A, it
is θ^τΓ(1 + 1) = θ^τ = [μ/Γ(1 + τ^{−1})]^τ. Then

E[l''(μ)] = −τ(τ + 1)[Γ(1 + τ^{−1})]^τμ^{−τ−2}n[μ/Γ(1 + τ^{−1})]^τ + nτ/μ²
          = −τ(τ + 1)n/μ² + τn/μ² = −nτ²/μ².

Changing the sign, inverting, and substituting μ̂ for μ produce the same estimated variance as in part (c).
(e) To obtain the distribution of μ̂, first obtain the distribution of Y = X^τ.
We have

S_Y(y) = Pr(Y > y) = Pr(X^τ > y) = Pr(X > y^{1/τ})
       = exp{−[Γ(1 + τ^{−1})y^{1/τ}/μ]^τ} = exp{−[Γ(1 + τ^{−1})]^τy/μ^τ},

which is an exponential distribution with mean [μ/Γ(1 + τ^{−1})]^τ. Then Σ_{j=1}^n X_j^τ
has a gamma distribution with parameters n and [μ/Γ(1 + τ^{−1})]^τ. Next look
at

[Γ(1 + τ^{−1})/μ]^τ Σ_{j=1}^n X_j^τ.

Multiplying by a constant changes the scale parameter, so this variable has
a gamma distribution with parameters n and 1. Now raise this expression to
the 1/τ power. The result has a transformed gamma distribution with parameters α = n, θ = 1, and
τ = τ. To create μ̂, this quantity must be multiplied by μ/n^{1/τ}, which changes
the scale parameter to θ = μ/n^{1/τ}. From Appendix A,

E(μ̂) = μΓ(n + τ^{−1})/[n^{1/τ}Γ(n)].

A similar argument provides the second moment and then a variance of

Var(μ̂) = μ²Γ(n + 2τ^{−1})/[n^{2/τ}Γ(n)] − μ²Γ(n + τ^{−1})²/[n^{2/τ}Γ(n)²].

13.4
SECTION 13.4
13.76 The likelihood function is L(θ) = θ^{−20}e^{−28,488/θ}. Then l(θ) = −20 ln θ −
28,488/θ. With θ̂ = 1,424.4, l(θ̂) = −165.23. A 95% confidence region solves
l(θ) = −165.23 − 1.92 = −167.15. The solutions are 946.87 and 2,285.07. Inserting
the solutions in S(200) = e^{−200/θ} produces the interval 0.810 to 0.916.
The symmetric interval was 0.816 to 0.922.
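The two boundary solutions come from solving l(θ) = l(θ̂) − 1.92 on each side of the peak; a numerical sketch (bisection, any root-finder works):

```python
import math

# Exercise 13.76: profile loglikelihood l(θ) = −20 ln θ − 28,488/θ.
def loglik(theta):
    return -20.0 * math.log(theta) - 28_488.0 / theta

target = loglik(1_424.4) - 1.92   # 95% cutoff: drop of 1.92 = 3.84/2

def bisect(f, lo, hi, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

g = lambda t: loglik(t) - target
lower = bisect(g, 100.0, 1_424.4)     # l is increasing below the peak
upper = bisect(g, 1_424.4, 10_000.0)  # and decreasing above it
s_lo, s_hi = math.exp(-200.0 / lower), math.exp(-200.0 / upper)
print(round(lower, 1), round(upper, 1), round(s_lo, 3), round(s_hi, 3))
```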
13.5
SECTION 13.5
13.77 For the Kaplan–Meier estimates, the variances are (by Greenwood's
formula)

V̂ar(q̂₄₅) = (0.750)²[1/(8·7) + 1/(7·6)] = 0.02344,
V̂ar(q̂₄₆) = (0.584)²[1/(6·5) + 1/(5·4)] = 0.03451.

For exact exposure, the deaths were 2 and 3 and the exposures 5.9 and 5.5
for the two ages. The estimated variances are

V̂ar(q̂₄₅) = (1 − 0.28751)²(2/5.9²) = 0.02917,
V̂ar(q̂₄₆) = (1 − 0.42042)²(3/5.5²) = 0.03331.

The actuarial exposures are 7.2 for both ages. The estimated variances are

V̂ar(q̂₄₅) = 0.28751(0.71249)/7.2 = 0.02845,
V̂ar(q̂₄₆) = 0.42042(0.57958)/7.2 = 0.03384.
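Greenwood's approximation is a short sum over the death times; a sketch for q̂₄₅ (risk sets 8 and 7, one death at each time, as above — function name is mine):

```python
def greenwood(surv, risk_deaths):
    """Greenwood's approximation: surv² · Σ d/(r(r−d)) over death times."""
    return surv ** 2 * sum(d / (r * (r - d)) for r, d in risk_deaths)

# q̂45: product-limit survival 0.750; one death at each of two times
# with risk sets 8 and 7.
var_q45 = greenwood(0.750, [(8, 1), (7, 1)])
print(round(var_q45, 5))  # ≈ 0.02344
```

The same function applies at any age once the risk-set sizes and death counts are read off the data.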
CHAPTER 14 SOLUTIONS
14.1
SECTION 14.7
14.1 (a) q̂ = X̄/m, E(q̂) = E(X̄)/m = mq/m = q.

(b) Var(q̂) = Var(X̄)/m² = Var(X)/(nm²) = mq(1 - q)/(nm²) = q(1 - q)/(nm).

(c)

l = Σ_{j=1}^n [ln(m choose x_j) + x_j ln q + (m - x_j) ln(1 - q)],
l' = Σ_{j=1}^n [x_j q^{-1} - (m - x_j)(1 - q)^{-1}],
l'' = Σ_{j=1}^n [-x_j q^{-2} - (m - x_j)(1 - q)^{-2}],
I(q) = E(-l'') = n[mq·q^{-2} + (m - mq)(1 - q)^{-2}] = nm[q^{-1} + (1 - q)^{-1}].

The reciprocal is (1 - q)q/(nm).
Student Solutions Manual to Accompany Loss Models: From Data to Decisions, Fourth Edition. By Stuart A. Klugman, Harry H. Panjer, Gordon E. Willmot.
Copyright © 2012 John Wiley & Sons, Inc.
(d) q̂ ± z_{α/2}√(q̂(1 - q̂)/(nm)). Then,

1 - α = Pr[-z_{α/2} < (q̂ - q)/√(q(1 - q)/(nm)) < z_{α/2}],

and so

|q̂ - q| < z_{α/2}√(q(1 - q)/(nm)),

which implies nm(q̂ - q)² < z²_{α/2}q(1 - q). Then,

(nm + z²_{α/2})q² - (2nmq̂ + z²_{α/2})q + nmq̂² < 0.

The boundaries of the CI are the roots of this quadratic:

[2nmq̂ + z²_{α/2} ± z_{α/2}√(z²_{α/2} + 4nmq̂(1 - q̂))] / [2(nm + z²_{α/2})].
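The quadratic-root interval is easy to evaluate; a sketch (checked against the numbers that appear in Exercise 14.3(e): nm = 40,000 and q̂ = 0.025025):

```python
import math

def score_ci(q_hat, nm, z=1.96):
    # roots of (nm + z^2) q^2 - (2 nm q_hat + z^2) q + nm q_hat^2 = 0
    a = nm + z * z
    b = 2.0 * nm * q_hat + z * z
    disc = z * math.sqrt(z * z + 4.0 * nm * q_hat * (1.0 - q_hat))
    return (b - disc) / (2.0 * a), (b + disc) / (2.0 * a)

lo, hi = score_ci(0.025025, 40000)
center, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
print(center, half)
```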
14.2 Because r = 1, β̂ = X̄. Then

Var(β̂) = Var(X̄) = Var(X)/n = β(1 + β)/n,

l = Σ_{j=1}^n ln Pr(N = x_j) = Σ_{j=1}^n ln[β^{x_j}(1 + β)^{-x_j-1}] = Σ_{j=1}^n [x_j ln β - (x_j + 1) ln(1 + β)],

l' = Σ_{j=1}^n [x_j β^{-1} - (x_j + 1)(1 + β)^{-1}],

l'' = Σ_{j=1}^n [-x_j β^{-2} + (x_j + 1)(1 + β)^{-2}],

E(-l'') = nββ^{-2} - n(β + 1)(1 + β)^{-2} = n/[β(1 + β)].

The reciprocal matches the true variance of the mle.
14.3 (a) The mle is the sample mean, [905 + 2(45) + 3(2)]/10,000 = 0.1001, and the CI is 0.1001 ± 1.96√(0.1001/10,000) = 0.1001 ± 0.0062, or (0.0939, 0.1063).

(b) The mle is the sample mean, 0.1001. The CI is

0.1001 ± 1.96√(0.1001(1.1001)/10,000) = 0.1001 ± 0.0065,

or (0.0936, 0.1066).

(c) Numerical methods yield r̂ = 56.1856 and β̂ = 0.00178159.

(d) q̂ = x̄/4 = 0.025025.

(e) 0.025025 ± 1.96√(0.025025(0.974975)/40,000) = 0.025025 ± 0.001531 and

[2(10,000)(4)(0.025025) + 1.96² ± 1.96√(1.96² + 4(10,000)(4)(0.025025)(0.974975))] / {2[10,000(4) + 1.96²]},

or 0.025071 ± 0.001531.

(f) The likelihood function increases as m → ∞ and q̂ → 0.
14.4 (a) The sample means are underinsured, 109/1,000 = 0.109, and insured, 57/1,000 = 0.057.

(b) The Poisson parameter is the sum of the individual parameters, 0.109 + 0.057 = 0.166.
14.5 (a) λ̂ is the sample mean, 0.166.

(b) Let n_ij be the number of observations of j counts from population i, where j = 0, 1, ... and i = 1, 2. The individual estimators are λ̂_i = Σ_{j=0}^∞ j n_ij/1,000. The estimator for the sum is the sum of the estimators, λ̂ = Σ_{j=0}^∞ j(n_1j + n_2j)/1,000, which is also the estimator from the combined sample.

(c) β̂ = 0.166.

(d) Numerical methods yield r̂ = 0.656060 and β̂ = 0.253026.

(e) q̂ = 0.166/7 = 0.0237143.

(f) The likelihood function increases as m → ∞ and q̂ → 0.
14.6 (a) λ̂ = 15.688. When writing the likelihood function, a typical term is (p_4 + p_5 + p_6)^{47}, and the likelihood must be numerically maximized.

(b) β̂ = 19.145.

(c) r̂ = 0.56418 and β̂ = 37.903.
14.7 The maximum likelihood estimate is the sample mean, which is 1. The variance of the sample mean is the variance of a single observation divided by the sample size. For the Poisson distribution, the variance equals the mean, so the estimated variance is 1/3,000. The confidence interval is 1 ± 1.645√(1/3,000) = 1 ± 0.030, which is 0.970 to 1.030.

14.8 The estimator is unbiased, so the mean is λ. The variance of the sample mean is the variance divided by the sample size, or λ/n. The coefficient of variation is √(λ/n)/λ = 1/√(nλ). The maximum likelihood estimate of λ is the sample mean, 3.9. The estimated coefficient of variation is 1/√(10(3.9)) = 0.1601.
CHAPTER 15
15.1
SECTION 15.2
15.1 Let

f(y) = 12(4.801121)^{12} / {y[0.195951 + ln y]^{13}}, y > 100.

Let W = ln(Y/100), so y = 100e^w and dy = 100e^w dw. Then

f_W(w) = 12(4.801121)^{12}(100e^w) / {100e^w[0.195951 + w + ln 100]^{13}} = 12(4.801121)^{12}/(4.801121 + w)^{13},

using 0.195951 + ln 100 = 4.801121, which is a Pareto density with α = 12 and θ = 4.801121.
15.2

π(α|x) ∝ [α^{10}100^{10α} / Π_j x_j^{α+1}] α^{γ-1}e^{-α/θ}
       ∝ α^{10+γ-1} exp[-α(θ^{-1} - 10 ln 100 + Σ_j ln x_j)],

which is a gamma distribution with parameters 10 + γ and (θ^{-1} - 10 ln 100 + Σ ln x_j)^{-1}. The mean is the Bayes estimate,

α̂_Bayes = (10 + γ)(θ^{-1} - 10 ln 100 + Σ ln x_j)^{-1}.

For the mle:

l = 10 ln α + 10α ln 100 - (α + 1)Σ_j ln x_j,
l' = 10α^{-1} + 10 ln 100 - Σ_j ln x_j = 0,

for α̂_mle = 10(Σ_j ln x_j - 10 ln 100)^{-1}. The two estimators are equal when γ = 0 and θ = ∞, which corresponds to π(α) = α^{-1}, an improper prior.
15.3 (a) π(μ, σ|x) ∝ σ^{-(n+1)} exp[-(1/2)σ^{-2}Σ_j(ln x_j - μ)²].

(b) Let

l = ln π(μ, σ|x) = -(n + 1) ln σ - (1/2)σ^{-2}Σ_j(ln x_j - μ)².

Then

∂l/∂μ = σ^{-2}Σ_j(ln x_j - μ) = 0,

and the solution is μ̂ = (1/n)Σ_j ln x_j. Also,

∂l/∂σ = -(n + 1)σ^{-1} + σ^{-3}Σ_j(ln x_j - μ)² = 0,

and so σ̂² = [1/(n + 1)]Σ_j(ln x_j - μ̂)².

(c)

π(μ|σ, x) ∝ exp[-(1/2)Σ_j((ln x_j - μ)/σ)²]
          ∝ exp[-(1/(2σ²))(nμ² - 2μΣ_j ln x_j + Σ_j(ln x_j)²)]
          ∝ exp[-(n/(2σ²))(μ² - 2μμ̂ + μ̂²)],

which is a normal pdf with mean μ̂ and variance σ²/n. The 95% HPD interval is μ̂ ± 1.96σ/√n.
15.4 (a)

π(θ|x) ∝ [e^{-30,000/θ}/θ^{200}][λ^β e^{-λ/θ}/θ^{β+1}] ∝ θ^{-(201+β)} exp[-θ^{-1}(30,000 + λ)],

which is an inverse gamma pdf with parameters 200 + β and 30,000 + λ.

(b) E(2Θ|x) = 2(30,000 + λ)/(199 + β). At β = λ = 0, it is 2(30,000)/199 = 301.51, while at β = 2 and λ = 250, it is 2(30,250)/201 = 301.00. For the first case, the inverse gamma parameters are 200 and 30,000. For the lower limit,

0.025 = Pr(2Θ ≤ a) = F(a/2) = 1 - Γ(200; 60,000/a)

for a = 262.41. Similarly, the upper limit is 346.34. With parameters 202 and 30,250, the interval is (262.14, 345.51).

(c)

Var(2Θ|x) = 4 Var(Θ|x) = 4{(30,000 + λ)²/[(199 + β)(198 + β)] - [(30,000 + λ)/(199 + β)]²}.

The two variances are 459.13 and 452.99. The two CIs are 301.51 ± 42.00 and 301.00 ± 41.72.

(d) l = -θ^{-1}(30,000) - 200 ln θ, l' = θ^{-2}(30,000) - 200θ^{-1} = 0, θ̂ = 150. For the variance, l' = θ^{-2}Σx_j - 200θ^{-1}, l'' = -2θ^{-3}Σx_j + 200θ^{-2}, E(-l'') = 2θ^{-3}(200θ) - 200θ^{-2} = 200θ^{-2}, and so Var(θ̂) = θ²/200 and Var(2θ̂) = θ²/50. An approximate CI is 300 ± 1.96(150)/√50 = 300 ± 41.58.
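The limits in part (b) can be checked numerically. Because Γ(200; ·) has an integer shape, the regularized incomplete gamma reduces to a Poisson-style sum; this sketch (function names are mine, not the book's) solves Pr(2Θ ≤ a) = 0.025 and 0.975 by bisection:

```python
import math

def gamma_upper_reg(n, x):
    # Q(n, x) = Pr(Gamma(n, 1) >= x) = sum_{k=0}^{n-1} e^{-x} x^k / k!  (integer n)
    return sum(math.exp(-x + k * math.log(x) - math.lgamma(k + 1)) for k in range(n))

def cdf_2theta(a):
    # Theta ~ inverse gamma(200; 30,000), so Pr(2 Theta <= a) = Q(200, 60,000/a)
    return gamma_upper_reg(200, 60000.0 / a)

def solve(p, lo, hi):
    # bisection for the a with cdf_2theta(a) = p (cdf is increasing in a)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cdf_2theta(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lower, upper = solve(0.025, 200.0, 320.0), solve(0.975, 300.0, 420.0)
print(lower, upper)
```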
15.5 (a)

f(x) = ∫_0^1 (K choose x)θ^x(1 - θ)^{K-x} [Γ(a + b)/(Γ(a)Γ(b))]θ^{a-1}(1 - θ)^{b-1} dθ
     = (K choose x)[Γ(a + b)/(Γ(a)Γ(b))][Γ(x + a)Γ(K - x + b)/Γ(a + b + K)]
     = [K!/(x!(K - x)!)][Γ(x + a)/Γ(a)][Γ(K - x + b)/Γ(b)][Γ(a + b)/Γ(a + b + K)]
     = [K!/(x!(K - x)!)] a(a + 1)···(x + a - 1) b(b + 1)···(K - x + b - 1) / [(a + b)(a + b + 1)···(K + a + b - 1)].

Also,

E(X|Θ) = KΘ, E(X) = E[E(X|Θ)] = E(KΘ) = Ka/(a + b).

(b) π(θ|x) ∝ θ^{Σx_j}(1 - θ)^{Σ(K_j - x_j)}θ^{a-1}(1 - θ)^{b-1}, which is a beta density. Therefore, the actual posterior distribution is beta with parameters a + Σx_j and b + Σ(K_j - x_j), with mean

E(Θ|x) = (a + Σx_j)/(a + b + ΣK_j).
15.6 (a)

f(x) = ∫_0^∞ θe^{-θx}[θ^{α-1}e^{-θ/β}/(Γ(α)β^α)] dθ
     = [1/(Γ(α)β^α)] ∫_0^∞ θ^α e^{-θ(x+β^{-1})} dθ
     = Γ(α + 1)(x + β^{-1})^{-α-1}/(Γ(α)β^α)
     = αβ^{-α}(β^{-1} + x)^{-α-1}.

Using E(X|Θ) = Θ^{-1},

E(X) = E[E(X|Θ)] = E(Θ^{-1}) = ∫_0^∞ θ^{-1}θ^{α-1}e^{-θ/β}/(Γ(α)β^α) dθ = Γ(α - 1)β^{α-1}/(Γ(α)β^α) = 1/[β(α - 1)].

(b) π(θ|x) ∝ θ^n e^{-θΣx_j}θ^{α-1}e^{-θ/β} = θ^{n+α-1}e^{-θ(Σx_j+β^{-1})}, which is a gamma density. Therefore, the actual posterior distribution is

π(θ|x) = θ^{n+α-1}e^{-θ(Σx_j+β^{-1})}(Σx_j + β^{-1})^{n+α}/Γ(n + α),

with mean

E(Θ|x) = (n + α)/(Σx_j + β^{-1}).
15.7 (a)

f(x) = ∫ f(x|θ)b(θ)dθ
     = (r + x - 1 choose x)[Γ(a + b)/(Γ(a)Γ(b))][Γ(a + r)Γ(b + x)/Γ(a + r + b + x)]
     = [Γ(r + x)/(Γ(r)x!)][Γ(a + b)/(Γ(a)Γ(b))][Γ(a + r)Γ(b + x)/Γ(a + r + b + x)].

(b)

π(θ|x) ∝ [Π_j f(x_j|θ)]b(θ) ∝ θ^{a+nr-1}(1 - θ)^{b+Σx_j-1}.

Hence, π(θ|x) is beta with parameters

a* = a + nr, b* = b + Σx_j,

and

E(Θ|x) = a*/(a* + b*) = (a + nr)/(a + nr + b + Σx_j).
15.8 (a)

f(x) = ∫_0^∞ f(x|θ)c(θ)dθ = ∫_0^∞ √(θ/2π) e^{-θ(x-μ)²/2}[β^α θ^{α-1}e^{-βθ}/Γ(α)] dθ
     = β^α Γ(α + 1/2) / {(2π)^{1/2}Γ(α)[(x - μ)²/2 + β]^{α+1/2}}.

(b)

π(θ|x) ∝ [Π_j f(x_j|θ)]c(θ) ∝ θ^{n/2} exp[-(θ/2)Σ_j(x_j - μ)²]θ^{α-1}e^{-βθ} = θ^{α*-1}e^{-β*θ},

where α* = α + n/2 and β* = β + (1/2)Σ_j(x_j - μ)². Therefore, π(θ|x) is gamma with parameters α* and 1/β*, and E(Θ|x) = α*/β* = (α + n/2)/[β + (1/2)Σ(x_j - μ)²].
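The update in part (b) is a one-line computation; a sketch with made-up data (the function name and numbers are mine), assuming a known mean μ and a gamma(shape α, rate β) prior on the normal precision θ:

```python
def posterior_precision(xs, mu, alpha, beta):
    # gamma prior (shape alpha, rate beta) on the normal precision, known mean mu
    n = len(xs)
    alpha_star = alpha + n / 2.0
    beta_star = beta + 0.5 * sum((x - mu) ** 2 for x in xs)
    return alpha_star, beta_star  # posterior shape and rate

a_star, b_star = posterior_precision([1.0, 3.0, 2.0, 2.0], mu=2.0, alpha=2.0, beta=1.0)
print(a_star, b_star, a_star / b_star)  # posterior mean of the precision is shape/rate
```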
15.9 (a) By the convolution formula,

f_{S|Θ}(s|θ) = Σ_{y=0}^s (N choose y)[θ/(1 + θ)]^y [1/(1 + θ)]^{N-y} e^{-θ}θ^{s-y}/(s - y)!
             = [Σ_{y=0}^s (N choose y)/(s - y)!] θ^s / [e^θ(1 + θ)^N].

The pf is of the form (5.6) with r(θ) = ln θ, p(s) = Σ_{y=0}^s (N choose y)/(s - y)!, and q(θ) = e^θ(1 + θ)^N.

(b) Theorem 15.18 applies, yielding

π(θ) = [q(θ)]^{-k}e^{μk r(θ)}r'(θ)/c(μ, k) = θ^{μk-1}(1 + θ)^{-Nk}e^{-kθ}/c(μ, k).

Because ∫_0^∞ π(θ)dθ = 1,

c(μ, k) = ∫_0^∞ θ^{μk-1}(1 + θ)^{-Nk}e^{-kθ} dθ = Γ(μk)Ψ(μk, μk - Nk + 1, k),

where Ψ is the confluent hypergeometric function of the second kind.
15.10 (a) Let N be Poisson(λ). Then

Pr(X = x) = Σ_{n=x}^∞ (n choose x)p^x(1 - p)^{n-x} e^{-λ}λ^n/n!
          = [p/(1 - p)]^x (e^{-λ}/x!) Σ_{n=x}^∞ [(1 - p)λ]^n/(n - x)!
          = [p/(1 - p)]^x (e^{-λ}/x!) [(1 - p)λ]^x e^{(1-p)λ}
          = e^{-pλ}(pλ)^x/x!,

a Poisson distribution with parameter pλ.

(b) Let N be binomial(m, r). Then

Pr(X = x) = Σ_{n=x}^m [n!/(x!(n - x)!)]p^x(1 - p)^{n-x}[m!/(n!(m - n)!)]r^n(1 - r)^{m-n}.

Substituting n → n + x gives

Pr(X = x) = [m!/x!] p^x Σ_{n=0}^{m-x} (1 - p)^n r^{n+x}(1 - r)^{m-n-x}/[n!(m - n - x)!]
          = [m!/(x!(m - x)!)](pr)^x(1 - r)^{m-x} Σ_{n=0}^{m-x} [(m - x)!/(n!(m - n - x)!)][(1 - p)r/(1 - r)]^n
          = [m!/(x!(m - x)!)](pr)^x(1 - r)^{m-x}[1 + (1 - p)r/(1 - r)]^{m-x}
          = [m!/(x!(m - x)!)](pr)^x(1 - pr)^{m-x},

which is binomial(m, pr).

(c) Let N be negative binomial(r, β). Then

Pr(X = x) = Σ_{n=x}^∞ [n!/(x!(n - x)!)]p^x(1 - p)^{n-x}[r(r + 1)···(r + n - 1)/n!][1/(1 + β)]^r[β/(1 + β)]^n.

Substituting n → n + x,

Pr(X = x) = [1/(1 + β)]^r[pβ/(1 + β)]^x[r(r + 1)···(r + x - 1)/x!] Σ_{n=0}^∞ [(r + x)···(r + n + x - 1)/n!][(1 - p)β/(1 + β)]^n,

and the summand is almost a negative binomial density with the term

[1 - (1 - p)β/(1 + β)]^{r+x} = [(1 + pβ)/(1 + β)]^{r+x}

missing. Place this term in the sum so the sum is one and then divide by it to produce

Pr(X = x) = [r(r + 1)···(r + x - 1)/x!][pβ/(1 + pβ)]^x[1/(1 + pβ)]^r,

which is negative binomial with parameters r and pβ.
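A quick simulation check of part (a), not part of the original solution: thinning a Poisson(λ) count by keeping each event independently with probability p should again give a Poisson, with mean (and variance) pλ.

```python
import math
import random

random.seed(12345)

def poisson_sample(lam):
    # inversion sampling for Poisson(lam)
    u, k = random.random(), 0
    term = math.exp(-lam)
    cum = term
    while u > cum:
        k += 1
        term *= lam / k
        cum += term
    return k

def thinned(lam, p):
    # keep each of the N events independently with probability p
    return sum(1 for _ in range(poisson_sample(lam)) if random.random() < p)

lam, p = 3.0, 0.4
xs = [thinned(lam, p) for _ in range(20000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)  # both should be near p * lam = 1.2
```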
15.11 Let D be the die. Then

Pr(D = 2|2, 3, 4, 1, 4)
= Pr(2, 3, 4, 1, 4|D = 2) Pr(D = 2) / [Pr(2, 3, 4, 1, 4|D = 1) Pr(D = 1) + Pr(2, 3, 4, 1, 4|D = 2) Pr(D = 2)]
= (1/6)(1/6)(3/6)(1/6)(3/6)(1/2) / [(3/6)(1/6)(1/6)(1/6)(1/6)(1/2) + (1/6)(1/6)(3/6)(1/6)(3/6)(1/2)]
= 9/(3 + 9) = 3/4.
15.12 (a) Pr(Y = 0) = ∫_0^1 e^{-θ}(1)dθ = 1 - e^{-1} = 0.63212 > 0.35.

(b) Pr(Y = 0) = ∫_0^1 θ²(1)dθ = 1/3 < 0.35.

(c) Pr(Y = 0) = ∫_0^1 (1 - θ)²(1)dθ = 1/3 < 0.35.

Only (a) is possible.
15.13

Pr(H = 1/4|d = 1)
= Pr(d = 1|H = 1/4) Pr(H = 1/4) / [Pr(d = 1|H = 1/4) Pr(H = 1/4) + Pr(d = 1|H = 1/2) Pr(H = 1/2)]
= (1/4)(4/5) / [(1/4)(4/5) + (1/2)(1/5)]
= (1/5)/(1/5 + 1/10) = 2/3.
15.14 The equations are αθ = 0.14 and αθ² = 0.0004. The solution is α = 49 and θ = 1/350. From Example 15.8, the posterior distribution is gamma with α = 49 + 110 = 159 and θ = (1/350)/[620(1/350) + 1] = 1/970. The mean is 159/970 = 0.16392 and the variance is 159/970² = 0.00016899.
15.15 The posterior pdf is proportional to

(te^{-5t})(te^{-t}) = t²e^{-6t},

which is a gamma distribution with α = 3 and θ = 1/6. The posterior pdf is π(t|x = 5) = 108t²e^{-6t}.
15.16 S_m|Q ~ bin(m, Q) and Q ~ beta(1, 99). Then,

E(S_m) = E[E(S_m|Q)] = E(mQ) = m(1)/(1 + 99) = 0.01m.

For the mean to be at least 50, m must be at least 5,000.
15.17 π(θ|x) ∝ f(x|θ)π(θ) = θ^{-1}e^{-x/θ}(100θ^{-3}e^{-10/θ}) ∝ θ^{-4}e^{-(10+x)/θ}. This is an inverse gamma distribution with α = 3 and θ = 10 + x. The posterior mean is θ/(α - 1) = (10 + x)/2.
15.18 The likelihood function is

L(θ) = θ^{-1}e^{-1,100/θ} θ^{-1}e^{-3,200/θ} θ^{-1}e^{-3,300/θ} θ^{-1}e^{-3,500/θ} θ^{-1}e^{-3,900/θ} (e^{-4,000/θ})^{495} = θ^{-5}e^{-1,995,000/θ}.

The posterior density is proportional to

π(θ|x) ∝ θ^{-6}e^{-1,995,000/θ}.

This is an inverse gamma distribution with parameters 5 and 1,995,000. The posterior mean is 1,995,000/4 = 498,750. The maximum likelihood estimate divides by 5 rather than 4.
15.19 (a)

f(x|θ₁, θ₂) = (θ₂/2π)^{1/2} exp[-(θ₂/2)(x - θ₁)²],
b(θ₁|θ₂) = [θ₂/(2πσ²)]^{1/2} exp[-(θ₂/(2σ²))(θ₁ - μ)²],
b(θ₂) = [β^α/Γ(α)]θ₂^{α-1}e^{-βθ₂}.

The joint posterior is

π(θ₁, θ₂|x) ∝ [Π_j f(x_j|θ₁, θ₂)] b(θ₁|θ₂)b(θ₂)
∝ θ₂^{n/2} exp[-(θ₂/2)Σ_j(x_j - θ₁)²] θ₂^{1/2} exp[-(θ₂/(2σ²))(θ₁ - μ)²] θ₂^{α-1}e^{-βθ₂}
= θ₂^{α+(n+1)/2-1} exp{-(θ₂/2)[(θ₁ - μ)²/σ² + Σ_j(x_j - θ₁)²]} e^{-βθ₂}.

Then

π(θ₁|θ₂, x) ∝ π(θ₁, θ₂|x) ∝ exp{-(θ₂/2)[(θ₁ - μ)²/σ² + Σ_j(x_j - θ₁)²]},

which is normal with variance σ*² = [θ₂(σ^{-2} + n)]^{-1} = σ²/[θ₂(1 + nσ²)] and mean μ*, which satisfies μ*/σ*² = θ₂μ/σ² + θ₂nx̄. Then,

μ* = μ/(1 + nσ²) + nσ²x̄/(1 + nσ²).

For the posterior distribution of Θ₂,

π(θ₂|x) ∝ ∫ π(θ₁, θ₂|x)dθ₁ = θ₂^{α+(n+1)/2-1}e^{-βθ₂} ∫ exp{-(θ₂/2)[(θ₁ - μ)²/σ² + Σ_j(x_j - θ₁)²]}dθ₁.

Now Σ(x_j - θ₁)² = Σ(x_j - x̄)² + n(x̄ - θ₁)² and, therefore,

π(θ₂|x) ∝ θ₂^{α+(n+1)/2-1} exp{-θ₂[β + (1/2)Σ(x_j - x̄)²]} ∫ exp{-(θ₂/2)[(θ₁ - μ)²/σ² + n(x̄ - θ₁)²]}dθ₁.

To evaluate the integral, complete the square as follows:

(θ₁ - μ)²/σ² + n(x̄ - θ₁)²
= θ₁²(σ^{-2} + n) - 2θ₁(μ/σ² + nx̄) + μ²/σ² + nx̄²
= (σ^{-2} + n)[θ₁ - (μ + nσ²x̄)/(1 + nσ²)]² + μ²/σ² + nx̄² - (μ + nσ²x̄)²/[σ²(1 + nσ²)].

The first term is a normal density in θ₁ and integrates to a multiple of [θ₂(σ^{-2} + n)]^{-1/2}. The second term does not involve θ₁ and so factors out of the integral. The posterior density then contains θ₂ raised to the α + n/2 - 1 power and an exponential term in θ₂ multiplied by

β + (1/2)Σ(x_j - x̄)² + (1/2){μ²/σ² + nx̄² - (μ + nσ²x̄)²/[σ²(1 + nσ²)]}
= β + (1/2)Σ(x_j - x̄)² + n(x̄ - μ)²/[2(1 + nσ²)].

This constitutes a gamma density with the desired parameters.

(b) Because the mean of Θ₁ given Θ₂ and x does not depend on θ₂, it is also the mean of Θ₁ given just x, which is μ*. The mean of Θ₂ given x is the ratio of the parameters, or

E(Θ₂|x) = (α + n/2) / {β + (1/2)Σ(x_j - x̄)² + n(x̄ - μ)²/[2(1 + nσ²)]}.

15.2
SECTION 15.3
15.20 By Exercise 5.26(a), S|Θ has pf of the form

f_{S|Θ}(s|θ) = p_n(s)e^{r(θ)s}/[q(θ)]^n.

Thus,

π(θ|x) ∝ [Π_{j=1}^n f(x_j|θ)]π(θ) = [Π_{j=1}^n p(x_j)]e^{r(θ)Σx_j}[q(θ)]^{-n}π(θ) ∝ e^{r(θ)Σx_j}[q(θ)]^{-n}π(θ).

But π(θ|s) ∝ f_{S|Θ}(s|θ)π(θ) ∝ e^{r(θ)s}[q(θ)]^{-n}π(θ) also, with s = Σx_j, so the two posteriors are the same.
15.21 The posterior pdf is proportional to

(e^{-θ}θ⁰/0!)e^{-θ} = e^{-2θ}.

This is an exponential distribution. The pdf is π(θ|y = 0) = 2e^{-2θ}.
15.22 The posterior pdf is proportional to

(e^{-θ}θ¹/1!)(θe^{-θ}) = θ²e^{-2θ}.

This is a gamma distribution with parameters 3 and 0.5. The pdf is π(θ|y = 1) = 4θ²e^{-2θ}.
15.23 From Example 15.8, the posterior distribution is gamma with α = 50 + 177 = 227 and θ = 0.002/[1,850(0.002) + 1] = 1/2,350. The mean is αθ = 0.096596 and the variance is αθ² = 0.000041105. The coefficient of variation is √(αθ²)/(αθ) = 1/√227 = 0.066372.
15.24 The posterior pdf is proportional to

(3 choose 1)θ(1 - θ)^{3-1} · 6θ(1 - θ) ∝ θ²(1 - θ)³.

This is a beta distribution with pdf π(θ|x = 1) = 60θ²(1 - θ)³.
15.25 The prior exponential distribution is also a gamma distribution with α = 1 and θ = 2. From Example 15.8, the posterior distribution is gamma with α = 1 + 3 = 4 and θ = 2/[1(2) + 1] = 2/3. The pdf is

π(λ|y = 3) = 27λ³e^{-3λ/2}/32.
15.26 (a) The posterior distribution is proportional to

θ²(1 - θ)² · θ³(1 - θ)³ ∝ θ⁵(1 - θ)⁵,

which is a beta distribution. The pdf is π(θ|y = 2) = 2772θ⁵(1 - θ)⁵.

(b) The mean is 6/(6 + 6) = 0.5.
15.27 The posterior distribution is π(q|2) ∝ 6q²(1 - q)² · 6q(1 - q) ∝ q³(1 - q)³. The mode can be determined by setting the derivative equal to zero: 0 = 3q²(1 - q)³ - 3q³(1 - q)², which is equivalent to q = 1 - q, for q̂ = 0.5.
CHAPTER 16 SOLUTIONS
16.1
SECTION 16.3
16.1 For Data Set B truncated at 50, the maximum likelihood parameter estimates are τ̂ = 0.80990 and θ̂ = 675.25, leading to the graph in Figure 16.1.

For Data Set B censored at 1,000, the estimates are τ̂ = 0.99984 and θ̂ = 718.00. The graph is in Figure 16.2.

For Data Set C, the parameter estimates are τ̂ = 0.47936 and θ̂ = 11,976. The plot is given in Figure 16.3.

16.2 For Data Set B truncated at 50, the plot is given in Figure 16.4.

For Data Set B censored at 1,000, the plot is given in Figure 16.5.

16.3 The plot for Data Set B truncated at 50 is given in Figure 16.6.

For Data Set B censored at 1,000, the plot is given in Figure 16.7.
Figure 16.1 Cdf plot for Data Set B truncated at 50 (Weibull fit, model vs. empirical; figure not reproduced).

Figure 16.2 Cdf plot for Data Set B censored at 1,000 (Weibull fit, model vs. empirical; figure not reproduced).

16.2
SECTION 16.4
16.4 For Data Set B truncated at 50, the test statistic is 0.0887, while the critical value is unchanged from the example (0.3120). The null hypothesis is not rejected, and it is plausible that the data came from a Weibull population. For Data Set B censored at 1,000, the test statistic is 0.0991, while the critical value is 0.3041. The null hypothesis is not rejected.
Figure 16.3 Pdf and histogram for Data Set C (Weibull fit; figure not reproduced).

Figure 16.4 Difference plot for Data Set B truncated at 50 (figure not reproduced).
16.5 The first step is to obtain the distribution function. It can be recognized as an inverse exponential distribution, or the calculation done as

F(x) = ∫_0^x 2y^{-2}e^{-2/y}dy = ∫_{2/x}^∞ 2(2/z)^{-2}e^{-z}(2z^{-2})dz = ∫_{2/x}^∞ e^{-z}dz = e^{-2/x}.

In the first line, the substitution z = 2/y was used. The calculations are in Table 16.1. The test statistic is the maximum from the final column, or 0.168.
Figure 16.5 Difference plot for Data Set B censored at 1,000 (figure not reproduced).
Figure 16.6 p-p plot for Data Set B truncated at 50 (figure not reproduced).
16.6 The distribution function is

F(x) = ∫_0^x 2(1 + y)^{-3}dy = -(1 + y)^{-2}|_0^x = 1 - (1 + x)^{-2}.

The calculations are in Table 16.2. The test statistic is the maximum from the final column, 0.189.
16.7 For Data Set B truncated at 50, the test statistic is 0.1631, which is less than the critical value of 2.492, and the null hypothesis is not rejected.
Figure 16.7 p-p plot for Data Set B censored at 1,000 (figure not reproduced).
Table 16.1 Calculations for Exercise 16.5.

x     F(x)     Compare to    Max difference
1     0.135    0, 0.2        0.135
2     0.368    0.2, 0.4      0.168
3     0.513    0.4, 0.6      0.113
5     0.670    0.6, 0.8      0.130
13    0.857    0.8, 1        0.143
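The Table 16.1 computation can be sketched directly: for each observation, the fitted F(x) is compared with the empirical cdf just before and just after its jump, and the Kolmogorov-Smirnov statistic is the largest gap.

```python
import math

data = [1, 2, 3, 5, 13]
F = lambda x: math.exp(-2.0 / x)  # fitted inverse exponential cdf
n = len(data)

# K-S statistic: max over points of the larger one-sided gap
D = max(
    max(F(x) - i / n, (i + 1) / n - F(x))
    for i, x in enumerate(sorted(data))
)
print(round(D, 3))
```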
Table 16.2 Calculations for Exercise 16.6.

x      F(x)     Compare to    Max difference
0.1    0.174    0, 0.2        0.174
0.2    0.306    0.2, 0.4      0.106
0.5    0.556    0.4, 0.6      0.156
1.0    0.750    0.6, 0.8      0.150
1.3    0.811    0.8, 1        0.189
For Data Set B censored at 1,000, the test statistic is 0.1712 and the null hypothesis is again not rejected.

16.8 The calculations for Data Set B truncated at 50 are in Table 16.3. The sum is 0.3615. With three degrees of freedom, the critical value is 7.8147 and the Weibull model is not rejected. The p-value is 0.9481.

The calculations for Data Set B censored at 1,000 are in Table 16.4. The sum is 0.5947. With two degrees of freedom, the critical value is 5.9915 and the Weibull model is not rejected. The p-value is 0.7428.
Table 16.3 Data Set B truncated at 50 for Exercise 16.8.

Range          p        Expected   Observed   Chi-square
50-150         0.1599   3.038      3          0.0005
150-250        0.1181   2.244      3          0.2545
250-500        0.2064   3.922      4          0.0015
500-1,000      0.2299   4.368      4          0.0310
1,000-2,000    0.1842   3.500      3          0.0713
2,000-∞        0.1015   1.928      2          0.0027

Table 16.4 Data Set B censored at 1,000 for Exercise 16.8.

Range        p        Expected   Observed   Chi-square
0-150        0.1886   3.772      4          0.0138
150-250      0.1055   2.110      3          0.3754
250-500      0.2076   4.151      4          0.0055
500-1,000    0.2500   4.999      4          0.1997
1,000-∞      0.2484   4.968      5          0.0002

Table 16.5 Data Set C truncated at 7,500 for Exercise 16.8.

Range             p        Expected   Observed   Chi-square
7,500-17,500      0.3299   42.230     42         0.0013
17,500-32,500     0.2273   29.096     29         0.0003
32,500-67,500     0.2178   27.878     28         0.0005
67,500-125,000    0.1226   15.690     17         0.1094
125,000-300,000   0.0818   10.472     9          0.2070
300,000-∞         0.0206   2.632      3          0.0513

The calculations for Data Set C are in Table 16.5. The sum is 0.3698. With three degrees of freedom, the critical value is 7.8147 and the Weibull model is not rejected. The p-value is 0.9464.
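The chi-square statistic behind these tables is just Σ(O - E)²/E over the cells; a quick check against the Table 16.3 column (small differences from 0.3615 come from the rounded expected counts):

```python
expected = [3.038, 2.244, 3.922, 4.368, 3.500, 1.928]
observed = [3, 3, 4, 4, 3, 2]

# Pearson chi-square goodness-of-fit statistic
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 4))
```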
16.9 The calculations are in Table 16.6. For the test, there are three degrees of freedom (four groups less zero estimated parameters less one) and at a 5% significance level, the critical value is 7.81. The null hypothesis is accepted and, therefore, the data may have come from a population with the given survival function.

Table 16.6 Calculations for Exercise 16.9.

Interval   Observed   Expected                            Chi-square
0 to 1     21         150F(1) = 150(2/20) = 15            36/15 = 2.40
1 to 2     27         150[F(2) - F(1)] = 150(4/20) = 30   9/30 = 0.30
2 to 3     39         150[F(3) - F(2)] = 150(6/20) = 45   36/45 = 0.80
3 to 4     63         150[F(4) - F(3)] = 150(8/20) = 60   9/60 = 0.15
Total      150        150                                 3.65

Table 16.7 Calculations for Exercise 16.10.

No. of claims   Observed   Expected                              Chi-square
0               50         365e^{-1.6438} = 70.53                5.98
1               122        365(1.6438)e^{-1.6438} = 115.94       0.32
2               101        365(1.6438)²e^{-1.6438}/2 = 95.29     0.34
3 or more       92         365 - 70.53 - 115.94 - 95.29 = 83.24  0.92
16.10 Either recall that for a Poisson distribution the mle is the sample mean or derive it from

L(λ) = (e^{-λ})^{50}(λe^{-λ})^{122}(λ²e^{-λ}/2)^{101}(λ³e^{-λ}/6)^{92} ∝ λ^{600}e^{-365λ},
ln L(λ) = 600 ln λ - 365λ,
0 = 600λ^{-1} - 365,
λ̂ = 600/365 = 1.6438.

For the goodness-of-fit test, the calculations are in Table 16.7, where the last two groups were combined. The total is 7.56. There are two degrees of freedom (four groups less one estimated parameter less one). At a 2.5% significance level, the critical value is 7.38 and, therefore, the null hypothesis is rejected. The Poisson model is not appropriate.
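The mle and test statistic in 16.10 can be reproduced in a few lines (counts taken from Table 16.7; following the solution, the "3 or more" cell is treated as 3 claims when computing the mean):

```python
import math

counts = {0: 50, 1: 122, 2: 101, 3: 92}  # the "3" cell is actually "3 or more"
n = sum(counts.values())                  # 365 observations
lam = sum(k * c for k, c in counts.items()) / n

expected = [n * math.exp(-lam) * lam ** k / math.factorial(k) for k in range(3)]
expected.append(n - sum(expected))        # "3 or more" gets the remainder

chi_sq = sum((o - e) ** 2 / e for o, e in zip([50, 122, 101, 92], expected))
print(round(lam, 4), round(chi_sq, 2))
```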
16.11 With 365 observations, the expected count for k accidents is

365 Pr(N = k) = 365e^{-0.6}(0.6)^k/k!.

The test statistic is calculated in Table 16.8. The total of the last column is the test statistic of 2.85. With three degrees of freedom (four groups less one less zero estimated parameters), the critical value is 7.81 and the null hypothesis of a Poisson distribution cannot be rejected.

Table 16.8 Calculations for Exercise 16.11.

No. of accidents   Observed   Expected   Chi-square
0                  209        200.32     0.38
1                  111        120.19     0.70
2                  33         36.06      0.26
3                  7          7.21
4                  3          1.08
5 or more          2          0.14*      1.51

*This is 365 less the sum of the other entries to reflect the expected for five or more accidents. The last three cells are grouped for an observed of 12 and an expected of 8.43, producing the chi-square contribution of 1.51.
16.12

χ² = Σ_{j=1}^{20} (O_j - 50)²/50
   = 0.02[Σ_{j=1}^{20} O_j² - 100Σ_{j=1}^{20} O_j + 20(50)²]
   = 0.02[51,850 - 100(1,000) + 50,000] = 37.

With 19 degrees of freedom, the probability of exceeding 37 is 0.007935.
16.13 The density function and likelihood function are

f(x) = αθ^α/(x + θ)^{α+1}, L(α, θ) = α^{20}θ^{20α} / Π_{j=1}^{20}(x_j + θ)^{α+1}.

Then,

l(α, θ) = 20 ln(α) + 20α ln(θ) - (α + 1)Σ_{j=1}^{20} ln(x_j + θ).

Under the null hypothesis, α = 2 and θ = 3.1, and the loglikelihood value is -58.7810. Under the alternative hypothesis, α = 2 and θ̂ = 7.0, and the loglikelihood value is -55.3307. Twice the difference is 6.901. There is one degree of freedom (no estimated parameters in the null hypothesis versus one in the alternative). The p-value is the probability of exceeding 6.901, which is 0.008615.
16.14

L ∝ Π_{k=1}^6 [e_k(e_k + 1)···(e_k + n_k - 1)/n_k!] β^{n_k}(1 + β)^{-(n_k+e_k)} ∝ β^{Σn_k}(1 + β)^{-Σ(n_k+e_k)}.

The logarithm is

(ln β)Σ_{k=1}^6 n_k - [ln(1 + β)]Σ_{k=1}^6 (n_k + e_k),

and setting the derivative equal to zero yields

β^{-1}Σ_{k=1}^6 n_k - (1 + β)^{-1}Σ_{k=1}^6 (n_k + e_k) = 0

for β̂ = Σ_{k=1}^6 n_k / Σ_{k=1}^6 e_k = 0.09772. The expected number is E_k = β̂e_k, which is exactly the same as for the Poisson model. Because the variance is e_kβ(1 + β), the goodness-of-fit test statistic equals the Poisson test statistic divided by 1 + β̂, or 6.19/1.09772 = 5.64. The geometric model is accepted.
16.15 The null hypothesis is that the data come from a gamma distribution with α = 1, that is, from an exponential distribution. The alternative hypothesis is that α has some other value. From Example 13.4, for the exponential distribution, θ̂ = 1,424.4 and the loglikelihood value is -165.230. For the gamma distribution, α̂ = 0.55616 and θ̂ = 2,561.1. The loglikelihood at the maximum is -162.293. The test statistic is 2(-162.293 + 165.230) = 5.874. The p-value based on one degree of freedom is 0.0154, indicating there is considerable evidence to support the gamma model over the exponential model.

16.16 For the exponential distribution, θ̂ = 29,721 and the loglikelihood is -406.027. For the gamma distribution, α̂ = 0.37139, θ̂ = 83,020, and the loglikelihood is -360.496. For the transformed gamma distribution, α̂ = 3.02515, θ̂ = 489.97, τ̂ = 0.32781, and the value of the loglikelihood function is -357.535. The models can only be compared in pairs. For exponential (null) versus gamma (alternative), the test statistic is 91.061 and the p-value is essentially zero. The exponential model is convincingly rejected. For gamma (null) versus transformed gamma (alternative), the test statistic is 5.923 with a p-value of 0.015 and there is strong evidence for the transformed gamma model. Exponential versus transformed gamma could also be tested (using two degrees of freedom), but there is no point.
16.17 Poisson expected counts are

0: 10,000e^{-0.1001} = 9,047.47,
1: 10,000(0.1001)e^{-0.1001} = 905.65,
2: 10,000(0.1001)²e^{-0.1001}/2 = 45.33,
3 or more: 10,000 - 9,047.47 - 905.65 - 45.33 = 1.55.

The test statistic is

(9,048 - 9,047.47)²/9,047.47 + (905 - 905.65)²/905.65 + (45 - 45.33)²/45.33 + (2 - 1.55)²/1.55 = 0.13.

There are two degrees of freedom (four groups less one and less one estimated parameter) and so the 5% critical value is 5.99 and the null hypothesis (and, therefore, the Poisson model) is accepted.

Geometric expected counts are 9,090.08, 827.12, 75.26, and 7.54, and the test statistic is 23.77. With two degrees of freedom, the model is rejected.

Negative binomial expected counts are 9,048.28, 904.12, 45.97, and 1.63, and the test statistic is 0.11. With one degree of freedom, the model is accepted.

Binomial (m = 4) expected counts are 9,035.95, 927.71, and 36.34 (extra grouping is needed to keep the counts above 1), and the test statistic is 3.70. With one degree of freedom, the model is accepted (critical value is 3.84).
16.18 Poisson expected counts are 847.05, 140.61, and 12.34 (grouping needed to keep expected counts above 1), and the test statistic is 5.56. With one degree of freedom, the model is rejected.

Geometric expected counts are 857.63, 122.10, 17.38, and 2.89, and the test statistic is 2.67. With two degrees of freedom, the model is accepted.

Negative binomial expected counts are 862.45, 114.26, 19.10, and 4.19, and the test statistic is 2.50. With one degree of freedom, the model is accepted.

Binomial (m = 7) expected counts are 845.34, 143.74, and 10.92, and the test statistic is 8.48. With one degree of freedom, the model is rejected.

16.19 (a) To achieve a reasonable expected count, the first three rows are combined as well as the last two. The test statistic is 16,308. With five degrees of freedom, the model is clearly rejected.

(b) All nine rows can now be used. The test statistic is 146.84. With seven degrees of freedom, the model is clearly rejected.

(c) The test statistic is 30.16. With six degrees of freedom, the model is clearly rejected.
16.20 x̄ = 0.155140 and s² = 0.179314. E(N) = λ₁λ₂ and Var(N) = λ₁(λ₂ + λ₂²). Solving the equations yields λ̂₁ = 0.995630 and λ̂₂ = 0.155821. For the secondary distribution, f_j = e^{-0.155821}(0.155821)^j/j! and then g₀ = exp[-0.99563(1 - e^{-0.155821})] = 0.866185. Then,

g_k = (0.99563/k)Σ_{j=1}^k j f_j g_{k-j}.

For the goodness-of-fit test with two degrees of freedom, see Table 16.9, and the model is clearly rejected.
Table 16.9 Calculations for Exercise 16.20.

Value   Observed   Expected    Chi-square
0       103,704    103,814.9   0.12
1       14,075     13,782.0    6.23
2       1,766      1,988.6     24.92
3       255        238.9       1.09
4+      53         28.6        20.82
Total                          53.18
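The recursion for the Poisson-Poisson (Neyman Type A) probabilities in 16.20 can be sketched as follows (the total sample size of 119,853 is the sum of the observed column in Table 16.9):

```python
import math

lam1, lam2 = 0.99563, 0.155821  # fitted primary and secondary Poisson means

def secondary(j):
    # Poisson(lam2) secondary probabilities f_j
    return math.exp(-lam2) * lam2 ** j / math.factorial(j)

g = [math.exp(-lam1 * (1.0 - secondary(0)))]  # g_0 = exp(-lam1 (1 - f_0))
for k in range(1, 5):
    # compound Poisson (Panjer) recursion: g_k = (lam1/k) sum_j j f_j g_{k-j}
    g.append(lam1 / k * sum(j * secondary(j) * g[k - j] for j in range(1, k + 1)))

n = 119853  # total observations
expected = [n * g[k] for k in range(4)] + [n * (1 - sum(g[:4]))]
print([round(e, 1) for e in expected])
```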
16.21 (a)

(20,592 - 20,596.76)²/20,596.76 + (2,651 - 2,631.03)²/2,631.03 + (297 - 318.37)²/318.37 + (41 - 37.81)²/37.81 + (8 - 5.03)²/5.03
= 0.0011 + 0.1516 + 1.4344 + 0.2691 + 1.7537 = 3.6099,

df = 5 - 2 - 1 = 2. Given α = 0.05, χ²_{0.05}(2) = 5.99 > 3.6099, so the fit is acceptable; equivalently, Pr(χ²(2) > 3.6099) ≈ 0.16, so the fit is acceptable.

(b) For the (a, b, 0) class, p_k/p_{k-1} = a + b/k. For the negative binomial,

p₁/p₀ = rβ/(1 + β), p₂/p₁ = [(r + 1)/2][β/(1 + β)].

From the fitted values,

p̂₁/p̂₀ = 2,631.03/20,596.76 = 0.12774, p̂₂/p̂₁ = 318.37/2,631.03 = 0.12101,

so

2r/(r + 1) = 0.12774/0.12101 = 1.05565 ⇒ r̂(2 - 1.05565) = 1.05565 ⇒ r̂ = 1.11786,

and β̂/(1 + β̂) = 0.12774/1.11786 = 0.11427.
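The ratio calculation in part (b) is mechanical; a sketch (the variable names are mine):

```python
p0, p1, p2 = 20596.76, 2631.03, 318.37  # fitted cell counts

ratio = (p1 / p0) / (p2 / p1)  # equals 2r/(r+1) for a negative binomial
r = ratio / (2.0 - ratio)
b_frac = (p1 / p0) / r         # beta/(1+beta)
beta = b_frac / (1.0 - b_frac)
print(round(r, 5), round(b_frac, 5), round(beta, 5))
```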
16.3
SECTION 16.5
16.22 These are discrete data from a discrete population, so the normal and gamma models are not appropriate. There are two ways to distinguish among the three discrete options. One is to look at successive values of kn_k/n_{k-1}. They are 2.67, 2.33, 2.01, 1.67, 1.32, and 1.04. The sequence is linear and decreasing, indicating a binomial distribution is appropriate. An alternative is to compute the sample mean and variance. They are 2 and 1.494, respectively. The variance is considerably less than the mean, indicating a binomial model.

16.23 The various tests for the three data sets produce the following results. For Data Set B truncated at 50, the estimates are α̂ = 0.40982, τ̂ = 1.24069, and θ̂ = 1,642.31. For Data Set B censored at 1,000, there is no maximum likelihood estimate. For Data Set C, the maximum likelihood estimate is α̂ = 4.50624, τ̂ = 0.28154, and θ̂ = 71.6242. The results of the tests are in Table 16.10.

For Data Set B truncated at 50, there is no reason to use a three-parameter distribution. For Data Set C, the transformed gamma distribution does not provide sufficient improvement to drop the Weibull as the model of choice.

Table 16.10 Tests for Exercise 16.23.

B truncated at 50
Criterion       Exponential   Weibull    Trans. gam.
K-S             0.1340        0.0887     0.0775
A-D             0.4292        0.1631     0.1649
χ²              1.4034        0.3615     0.5169
p-value         0.8436        0.9481     0.7723
Loglikelihood   -146.063      -145.683   -145.661
SBC             -147.535      -148.628   -150.078

B censored at 1,000
Criterion       Exponential   Weibull    Trans. gam.
K-S             0.0991        0.0991     N/A
A-D             0.1713        0.1712     N/A
χ²              0.5951        0.5947     N/A
p-value         0.8976        0.7428     N/A
Loglikelihood   -113.647      -113.647   N/A
SBC             -115.145      -116.643   N/A

C
Criterion       Exponential   Weibull    Trans. gam.
K-S             N/A           N/A        N/A
A-D             N/A           N/A        N/A
χ²              61.913        0.3698     0.3148
p-value         10^{-12}      0.9464     0.8544
Loglikelihood   -214.924      -202.077   -202.077
SBC             -217.350      -206.929   -209.324
16.24 The loglikelihood values for the two models are -385.9 for the Poisson and -382.4 for the negative binomial. The test statistic is 2(-382.4 + 385.9) = 7.0. There is one degree of freedom (two parameters minus one parameter), and so the critical value is 3.84. The null hypothesis is rejected, and so the data favor the negative binomial distribution.
16.25 The penalty function subtracts ln(100)/2 = 2.3 for each additional parameter. For the five models, the penalized loglikelihoods are: generalized Pareto, -219.1 - 6.9 = -226.0; Burr, -219.2 - 6.9 = -226.1; Pareto, -221.2 - 4.6 = -225.8; lognormal, -221.4 - 4.6 = -226.0; and inverse exponential, -224.3 - 2.3 = -226.6. The largest value is for the Pareto distribution.
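The penalized comparison can be sketched as follows (loglikelihoods and parameter counts from the solution above; n = 100 is implied by the ln(100)/2 penalty):

```python
import math

models = {  # loglikelihood, number of parameters
    "generalized Pareto": (-219.1, 3),
    "Burr": (-219.2, 3),
    "Pareto": (-221.2, 2),
    "lognormal": (-221.4, 2),
    "inverse exponential": (-224.3, 1),
}

penalty = math.log(100) / 2.0  # about 2.3 per parameter for n = 100
sbc = {name: ll - k * penalty for name, (ll, k) in models.items()}
best = max(sbc, key=sbc.get)
print(best, round(sbc[best], 1))
```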
16.26 The loglikelihood function for an exponential distribution is

l(θ) = ln Π_j θ^{-1}e^{-x_j/θ} = Σ_j (-ln θ - x_j/θ) = -n ln θ - (Σ_j x_j)/θ.

Under the null hypothesis that Sylvia's mean is double that of Phil's, the maximum likelihood estimates of the mean are 916.67 for Phil and 1,833.33 for Sylvia. The loglikelihood value is

l_null = -20 ln 916.67 - 20,000/916.67 - 10 ln 1,833.33 - 15,000/1,833.33 = -241.55.

Under the alternative hypothesis of arbitrary parameters, the loglikelihood value is

l_alt = -20 ln 1,000 - 20,000/1,000 - 10 ln 1,500 - 15,000/1,500 = -241.29.

The likelihood ratio test statistic is 2(-241.29 + 241.55) = 0.52, which is not significant (the critical value is 3.84 with one degree of freedom). To add a parameter, the SBC requires an improvement of ln(30)/2 = 1.70. Both procedures indicate that there is not sufficient evidence in the data to dispute Sylvia's claim.
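The restricted mle and the test statistic can be reproduced as below; the closed-form pooled estimate of Phil's mean under H0 (derived by setting the restricted score to zero) is my own algebra, not stated in the solution:

```python
import math

def exp_loglik(n, total, theta):
    # l(theta) = -n ln(theta) - total/theta for an exponential sample
    return -n * math.log(theta) - total / theta

# Phil: 20 claims totaling 20,000; Sylvia: 10 claims totaling 15,000
l_alt = exp_loglik(20, 20000, 1000.0) + exp_loglik(10, 15000, 1500.0)

# under H0, Sylvia's mean is twice Phil's; the restricted mle of Phil's mean is
theta0 = (20000 + 15000 / 2.0) / 30.0
l_null = exp_loglik(20, 20000, theta0) + exp_loglik(10, 15000, 2.0 * theta0)

lrt = 2.0 * (l_alt - l_null)
print(round(theta0, 2), round(lrt, 2))
```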
16.27 The deduction to compute the SBC is (r/2) ln(n) = (r/2) ln(260) =
2.78r, where r is the number of parameters. The SBC values are —416.78,
-417.56, -419.34, -420.12, and -425.68. The largest value is the first one,
so that is the model to select.
16.28 Both the Poisson and negative binomial have acceptable p-values (0.93
and 0.74) with the Poisson favored. The Poisson has a higher loglikelihood
value than the geometric. The negative binomial improves the loglikelihood
by 0.01 over the Poisson, less than the 1.92 required by LRT or the 3.45
required by SBC. The Poisson is acceptable and preferred.
Table 16.11  Calculations for Exercise 16.31.

Model              Parameters                   NLL       Chi-square  df
Poisson            λ = 1.74128                  2,532.86  1,080.80    5
Geometric          β = 1.74128                  2,217.71  170.72      7
Negative binomial  r = 0.867043, β = 2.00830    2,216.07  165.57      6
16.29 Both the geometric and negative binomial have acceptable p-values
(0.26 and 0.11), with the geometric favored. The geometric has a higher
loglikelihood value than the Poisson. The negative binomial improves the
loglikelihood by 0.71 over the geometric, less than the 1.92 required by the LRT
or the 2.30 required by the SBC. The geometric model is acceptable and preferred.
16.30 The negative binomial distribution has the best p-value (0.000037),
though it is clearly unacceptable. For the two one-parameter distributions, the
geometric distribution has a better loglikelihood and SBC than the Poisson. The
negative binomial model improves the loglikelihood by 1,132.25 - 1,098.64 =
33.61, more than the 1.92 required by the likelihood ratio test and more than
the 0.5 ln 503 = 3.11 required by the SBC. Of the three models, the negative
binomial is the best, but it is not very good. It turns out that zero-modified
models should be used.
16.31 (a) For k = 1, 2, 3, 4, 5, 6, 7, the values are 0.276, 2.315, 2.432, 2.891,
4.394, 2.828, and 4.268, which, if anything, are increasing. The negative
binomial or geometric models may work well.

(b) The values appear in Table 16.11. Because the sample variance exceeds
the sample mean, there is no mle for the binomial distribution.

(c) The geometric is better than the Poisson by both likelihood and chi-square
measures. The negative binomial distribution is not an improvement over the
geometric, as the NLL decreases by only 1.64. When doubled, 3.28 does not
exceed the critical value of 3.841. The best choice is the geometric, but it does
not pass the goodness-of-fit test.
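The doubling argument in (c) can be sketched with the NLL values from Table 16.11 (a verification aid, not part of the original solution):

```python
# Exercise 16.31(c): likelihood ratio test of negative binomial vs geometric.
nll_geometric = 2217.71
nll_negbin = 2216.07

statistic = 2 * (nll_geometric - nll_negbin)   # doubled NLL improvement
critical = 3.841                               # chi-square(1) at the 5% level
print(round(statistic, 2), statistic > critical)
```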
16.32 For each data set and model, Table 16.12 first gives the negative loglikelihood and then the chi-square test statistic, degrees of freedom, and p-value.
If there are not enough degrees of freedom to do the test, no p-value is given.

For Exercise 14.3, the Poisson is the clear choice. It is the only one-parameter distribution acceptable by the goodness-of-fit test, and no two-parameter distribution improves the NLL by more than 0.03.
SECTION 16.5

Table 16.12  Results for Exercise 16.32.

Model            Ex. 14.3       Ex. 14.5       Ex. 14.6        Ex. 16.31
Poisson          3,339.66       488.241        3,578.58        2,532.86
                 .00;1;.9773    5.56;1;.0184   16,308;4;0      1,081;5;0
Geometric        3,353.39       477.171        1,132.25        2,217.71
                 23.76;2;0      .28;1;.5987    146.84;7;0      170.72;7;0
Neg. bin.        3,339.65       476.457        1,098.64        2,216.07
                 .01;0          1.60;0         30.16;6;0       165.57;6;0
ZM Poisson       3,339.66       480.638        2,388.37        2,134.59
                 .00;0          1.76;0         900.42;3;0      37.49;6;0
ZM geometric     3,339.67       476.855        1,083.56        2,200.56
                 .00;0          1.12;0         1.52;6;.9581    135.43;6;0
ZM logarithmic   3,339.91       475.175        1,171.13        2,308.82
                 .03;0          .66;0          186.05;6;0      361.18;6;0
ZM neg. bin.     3,339.63       473.594        1,083.47        2,132.42
                 .00;-1         .05;0          1.32;5;.9331    28.43;5;0
For Exercise 14.5, the geometric is the best one-parameter distribution and
is acceptable by the goodness-of-fit test. The best two-parameter distribution
is the negative binomial, but the improvement in the NLL is only 0.714, which
is not significant. (The test statistic with one degree of freedom is 1.428.) The
three-parameter ZM negative binomial improves the NLL by 3.577 over the
geometric. This is significant (with two degrees of freedom). So an argument
could be made for the ZM negative binomial, but the simpler geometric still
looks to be a good choice.
For Exercise 14.6, the best one-parameter distribution is the geometric, but
it is not acceptable. The best two-parameter distribution is the ZM geometric,
which does pass the goodness-of-fit test and has a much lower NLL. The ZM
negative binomial lowers the NLL by 0.09 and is not a significant improvement.

For Exercise 16.31, none of the distributions fit well. According to the
NLL, the ZM negative binomial is the best choice, but it does not look very
promising.
16.33 (a) The mle is p = 3.0416 using numerical methods.

(b) The test statistic is 785.18 and with three degrees of freedom the model
is clearly not acceptable.
Table 16.13  Results for Exercise 16.34.

Model            Ex. 14.3    Ex. 14.5    Ex. 14.6         Ex. 16.31
Poisson-Poisson  3,339.65    478.306     1,198.28         2,151.88
                 0.01;0      1.35;0      381.25;6;0       51.85;6;0
Polya-Aeppli     3,339.65    477.322     1,084.95         2,183.48
                 0.01;0      1.58;0      4.32;6;0.6335    105.95;6;0
Poisson-I.G.     3,339.65    475.241     1,174.82         2,265.34
                 0.01;0      1.30;0      206.08;6;0       262.74;6;0
Poisson-ETNB     3,339.65    473.624     1,083.56         did not
                 0.01;-1     0.02;-1     1.52;5;0.9112    converge
16.34 Results appear in Table 16.13. The entries are the negative loglikelihood, the chi-square test statistic, degrees of freedom, and the p-value (if the
degrees of freedom are positive).

For Exercise 14.3, the Poisson cannot be topped. These four improve the
loglikelihood by only 0.01 and all have more parameters.

For Exercise 14.5, both the Poisson-inverse Gaussian and Poisson-ETNB
improve the loglikelihood over the geometric. The improvements are 1.93
and 3.547. When doubled, they are slightly significant (p-values of 0.04945 and
0.0288, respectively, with one and two degrees of freedom). The goodness-of-fit
test cannot be done. The geometric model, which easily passed the
goodness-of-fit test, still looks good.

For Exercise 14.6, none of the models improved the loglikelihood over the
ZM geometric (although the Poisson-ETNB, with one more parameter, tied).
As well, the ZM geometric has the highest p-value and is clearly acceptable.

For Exercise 16.31, none of the models has a superior loglikelihood versus
the ZM negative binomial. Although this model is not acceptable, it is the
best one from among the available choices.
16.35 The coefficient discussed in the section is computed from the empirical
estimates for the five data sets. For the last two sets, the final category
is for some number or more. For these cases, estimate by assuming the observations were all at that highest value. The five coefficient estimates are (a)
-0.85689, (b) -1,817.27, (c) -5.47728, (d) 1.48726, and (e) -0.42125. For
all but Data Set (d), it appears that a compound Poisson model will not be
appropriate. For Data Set (d), it appears that the Polya-Aeppli model will
do well. Table 16.14 summarizes a variety of fits. The entries are the negative
loglikelihood, the chi-square test statistic, the degrees of freedom, and the
p-value.
Table 16.14  Results for Exercise 16.35.

Model         (a)           (b)           (c)           (d)           (e)
Poisson       36,188.3      206.107       481.886       1,461.49      841.113
              190.75;2;0    0.06;1;.8021  2.40;2;.3017  267.52;2;0    6.88;6;.3323
Geometric     36,123.6      213.052       494.524       1,309.64      937.975
              41.99;3;0     14.02;2;0     24.03;3;0     80.13;4;0     189.47;9;0
NB            36,104.1      did not       481.054       1,278.59      837.455
              0.09;1;.7631  converge      0.31;1;.5779  1.57;4;.8139  3.46;6;.7963
P-bin. m=2    36,106.9      did not       481.034       1,320.71      837.438
              2.38;1;.1228  converge      0.38;1;.5357  69.74;2;0     3.10;6;.7963
P-bin. m=3    36,106.1      did not       481.034       1,303.00      837.442
              1.20;1;.2728  converge      0.35;1;.5515  63.10;3;0     3.19;6;.7853
Polya-Aeppli  36,104.6      did not       481.044       1,280.59      837.452
              0.15;1;.6966  converge      0.32;1;.5731  6.52;4;.1638  3.37;6;.7615
Neyman-A      36,105.3      did not       481.037       1,288.56      837.448
              0.49;1;.4820  converge      0.33;1;.5642  24.14;3;0     3.28;6;.7736
P-IG          36,103.6      did not       481.079       1,281.90      837.455
              0.57;1;.4487  converge      0.31;1;.5761  8.75;4;.0676  3.63;6;.7260
P-ETNB        36,103.5      did not       did not       1,278.58      did not
              5.43;1;.0198  converge      converge      1.54;3;.6740  converge
(a) All two-parameter distributions are superior to the two one-parameter
distributions. The best two-parameter distributions are the negative binomial
(best loglikelihood) and Poisson-inverse Gaussian (best p-value). The three-parameter Poisson-ETNB is not a significant improvement by the likelihood
ratio test. The simpler negative binomial is an excellent choice.

(b) Because the sample variance is less than the sample mean, mles exist only
for the Poisson and geometric distributions. The Poisson is clearly acceptable
by the goodness-of-fit test.

(c) None of the two-parameter models are significant improvements over the
Poisson according to the likelihood ratio test. Even though some have superior
p-values, the Poisson is acceptable and should be our choice.

(d) The two-parameter models are better by the likelihood ratio test. The
best is the negative binomial on all measures. The Poisson-ETNB is not better,
and so use the negative binomial, which passes the goodness-of-fit test. The
moment analysis supported the Polya-Aeppli, which was acceptable, but not
as good as the negative binomial.

(e) The two-parameter models are better by the likelihood ratio test. The
best is Poisson-binomial with m = 2, though the simpler and more popular
negative binomial is a close alternative.
16.36 Excel® Solver reports the following mles to four decimals: p = 0.9312,
λ1 = 0.1064, and λ2 = 0.6560. The negative loglikelihood is 10,221.9. Rounding these numbers to two decimals produces a negative loglikelihood of 10,223.3,
while Tröbliger's solution is superior at 10,222.1. A better two-decimal solution is (0.94, 0.11, 0.69), which gives 10,222.0. The negative binomial distribution was found to have a negative loglikelihood of 10,223.4. The extra
parameter for the two-point mixture cannot be justified (using the likelihood
ratio test).
CHAPTER 17

CHAPTER 17 SOLUTIONS

17.1 SECTION 17.7
17.1

λ0 = (1.96/0.05)^2 = 1,536.64,
E(X) = ∫_0^100 x(100 - x)/5,000 dx = 33 1/3,
E(X^2) = ∫_0^100 x^2(100 - x)/5,000 dx = 1,666 2/3,
Var(X) = 1,666 2/3 - (33 1/3)^2 = 555 5/9,
nλ = 1,536.64 [1 + 555 5/9 / (33 1/3)^2] = 2,304.96.

2,305 claims are needed.
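A quick numerical check of Exercise 17.1, using the closed-form moments directly (an illustration, not part of the original solution):

```python
# Exercise 17.1: full-credibility standard lambda0 * (1 + Var(X)/E(X)^2).
lambda0 = (1.96 / 0.05) ** 2     # 1,536.64

EX = 100 / 3                     # 33 1/3
EX2 = 5000 / 3                   # 1,666 2/3
var = EX2 - EX ** 2              # 555 5/9

standard = lambda0 * (1 + var / EX ** 2)
print(round(lambda0, 2), round(standard, 2))
```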
17.2 0.81 = nλ/λ0 and 0.64 = nλ/[λ0(1 + α^{-1})] = 0.81(1 + α^{-1})^{-1}, and so
α = 3.7647. Then αβ = 100 gives β = 26.5625.
17.3 λ0 = 1,082.41 and μ = 600. Estimate the variance as σ^2 = [0^2 + 75^2 + (-75)^2]/2 =
75^2. The standard for full credibility is 1,082.41(75/600)^2 = 16.913. Z =
√(3/16.913) = 0.4212. The credibility pure premium is

0.4212(475) + 0.5788(600) = 547.35.
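The partial-credibility arithmetic of Exercise 17.3 can be reproduced as follows (a sketch using the values above):

```python
from math import sqrt

# Exercise 17.3: credibility for severity with three observations.
lambda0 = 1082.41
mu, sigma = 600.0, 75.0

standard = lambda0 * (sigma / mu) ** 2   # full-credibility standard, 16.913
Z = sqrt(3 / standard)                   # 0.4212
premium = Z * 475 + (1 - Z) * mu
print(round(standard, 3), round(Z, 4), round(premium, 2))
```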
17.4 μ = sβθ_Y and σ^2 = sβσ_Y^2 + sβ(1 + β)θ_Y^2, where s is used in place of the
customary negative binomial parameter, r. Then

1.645 = rμ√n/σ = 0.05√n sβθ_Y / [sβσ_Y^2 + sβ(1 + β)θ_Y^2]^{1/2},

and so

1,082.41 = (1.645/0.05)^2 = n(sβθ_Y)^2 / [sβσ_Y^2 + sβ(1 + β)θ_Y^2].

The standard for full credibility is

nsβ = 1,082.41 [σ_Y^2 + (1 + β)θ_Y^2]/θ_Y^2 = 1,082.41 (1 + β + σ_Y^2/θ_Y^2).

Partial credibility is obtained by taking the square root of the ratio of the
number of claims to the standard for full credibility.
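The identity derived in Exercise 17.4 can be spot-checked numerically; the values of s, β, θ_Y, and σ_Y^2 below are hypothetical, chosen only for the check.

```python
# Exercise 17.4: n*s*beta = lambda0*(1 + beta + sigY2/thetaY^2),
# checked against the direct route through lambda0 * sigma^2 / mu^2.
lambda0 = 1082.41
s, beta = 3.0, 0.4                  # hypothetical frequency parameters
thetaY, sigY2 = 500.0, 90000.0      # hypothetical severity mean and variance

mu = s * beta * thetaY
sig2 = s * beta * sigY2 + s * beta * (1 + beta) * thetaY ** 2

n = lambda0 * sig2 / mu ** 2        # exposures for full credibility
expected_claims = n * s * beta
direct = lambda0 * (1 + beta + sigY2 / thetaY ** 2)
print(abs(expected_claims - direct) < 1e-6)
```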
17.5 λ0 = (2.2414/0.03)^2 = 5,582.08, and so 5,583 claims are required. (From
the wording of the problem, λ = λ0.)

17.6

E(X) = ∫_0^200,000 x/200,000 dx = 100,000,
E(X^2) = ∫_0^200,000 x^2/200,000 dx = 200,000^2/3 = 13,333,333,333 1/3,
Var(X) = 13,333,333,333 1/3 - 100,000^2 = 3,333,333,333 1/3.

The standard for full credibility is

1,082.41(1 + 3,333,333,333.33/10,000,000,000) = 1,443.21.

Z = √(1,082/1,443.21) = 0.86586.
17.7 (1.645/0.06)^2(1 + 7,500^2/1,500^2) = 19,543.51, or 19,544 claims.
17.8 Z = √(6,000/19,543.51) = 0.55408. The credibility estimate is

0.55408(15,600,000) + 0.44592(16,500,000) = 16,001,328.
17.9 For the standard for estimating the number of claims, 800 = (y_p/0.05)^2,
and so y_p = √2.

E(X) = ∫_0^100 0.0002x(100 - x) dx = 33 1/3,
E(X^2) = ∫_0^100 0.0002x^2(100 - x) dx = 1,666 2/3,
Var(X) = 1,666 2/3 - (33 1/3)^2 = 555 5/9.

The standard for full credibility is (√2/0.1)^2 [1 + 555 5/9/(33 1/3)^2] = 300.
17.10 1,000 = (1.96/0.1)^2(1 + C^2). The solution is the coefficient of variation,
1.2661.
17.11 Z = √(10,000/17,500) = 0.75593. The credibility estimate is

0.75593(25,000,000) + 0.24407(20,000,000) = 23,779,650.
17.12 The standard for estimating claim numbers is (2.326/0.05)^2 = 2,164.11.
For estimating the amount of claims, we have E(X) = ∫_1^∞ 2.5x^{-2.5} dx =
5/3, E(X^2) = ∫_1^∞ 2.5x^{-1.5} dx = 5, and Var(X) = 5 - (5/3)^2 = 20/9. Then
2,164.11 = (1.96/K)^2 [1 + (20/9)/(25/9)], and so K = 0.056527.
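The moments and the solved K in Exercise 17.12 can be verified directly:

```python
from math import sqrt

# Exercise 17.12: severity moments for f(x) = 2.5 x^{-3.5}, x > 1.
EX = 2.5 / 1.5            # = 5/3, from integrating 2.5 x^{-2.5}
EX2 = 2.5 / 0.5           # = 5, from integrating 2.5 x^{-1.5}
var = EX2 - EX ** 2       # = 20/9

standard = (2.326 / 0.05) ** 2                  # 2,164.11
K = 1.96 / sqrt(standard / (1 + var / EX ** 2))
print(round(standard, 2), round(K, 5))
```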
17.13 E(X) = 0.5(1) + 0.3(2) + 0.2(10) = 3.1, E(X^2) = 0.5(1) + 0.3(4) +
0.2(100) = 21.7, Var(X) = 21.7 - 3.1^2 = 12.09. The standard for full credibility is (1.645/0.1)^2(1 + 12.09/3.1^2) = 611.04, and so 612 claims are needed.
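Exercise 17.13 computed step by step:

```python
from math import ceil

# Exercise 17.13: discrete severity taking 1, 2, 10 with probabilities 0.5, 0.3, 0.2.
pts = [(1, 0.5), (2, 0.3), (10, 0.2)]
EX = sum(x * p for x, p in pts)
EX2 = sum(x * x * p for x, p in pts)
var = EX2 - EX ** 2

standard = (1.645 / 0.1) ** 2 * (1 + var / EX ** 2)
print(EX, round(var, 2), round(standard, 2), ceil(standard))
```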
17.14 3,415 = (1.96/k)^2(1 + 4), k = 0.075, or 7.5%.
17.15 Z = √(n/F), R = ZO + (1 - Z)P, Z = (R - P)/(O - P) = √(n/F),
n/F = (R - P)^2/(O - P)^2, and n = F(R - P)^2/(O - P)^2.
17.16 For the severity distribution, the mean is 5,000 and the variance is
10,000^2/12. For credibility based on accuracy with regard to the number of
claims,

2,000 = (z/0.05)^2,

where z is the appropriate value from the standard normal distribution. For
credibility based on accuracy with regard to the total cost of claims, the
number of claims needed is

(z^2/0.05^2) [1 + (10,000^2/12)/5,000^2] = 2,000(1 + 1/3) = 2,666.67.
CHAPTER 18

CHAPTER 18 SOLUTIONS

18.1 SECTION 18.9
18.1 The conditional distribution of X given that Y = y is

Pr(X = x|Y = y) = Pr(X = x, Z = y - x)/Pr(Y = y)
= [ (λ1^x e^{-λ1}/x!)(λ2^{y-x} e^{-λ2}/(y - x)!) ] / [ (λ1 + λ2)^y e^{-λ1-λ2}/y! ]
= (y choose x) [λ1/(λ1 + λ2)]^x [λ2/(λ1 + λ2)]^{y-x},

for x = 0, 1, 2, ..., y. This is a binomial distribution with parameters m = y
and q = λ1/(λ1 + λ2).
18.2

f(x|y) = f(x, y)/f(y) = Pr(X = x, Z = y - x)/Pr(Y = y)
= [ (n1 choose x) p^x (1-p)^{n1-x} (n2 choose y-x) p^{y-x} (1-p)^{n2-y+x} ] / [ (n1+n2 choose y) p^y (1-p)^{n1+n2-y} ]
= (n1 choose x)(n2 choose y-x) / (n1+n2 choose y).

This is the hypergeometric distribution.
18.3 Using (18.3) and conditioning on N yields

E(X) = E[E(X|N)] = E[N E(Y1)] = E[μ_Y N] = μ_Y E(N) = λμ_Y.

For the variance, use (18.6) to obtain

Var(X) = E[Var(X|N)] + Var[E(X|N)]
= E[N Var(Y1)] + Var[N E(Y1)]
= E[σ_Y^2 N] + Var[μ_Y N]
= σ_Y^2 E(N) + μ_Y^2 Var(N)
= λ(μ_Y^2 + σ_Y^2)
= λE(Y1^2).
18.4 (a) f_X(0) = 0.3, f_X(1) = 0.4, f_X(2) = 0.3.
f_Y(0) = 0.25, f_Y(1) = 0.3, f_Y(2) = 0.45.

(b) The following array presents the conditional probabilities for x = 0, 1, 2:

f(x|Y = 0) = 0.2/0.25 = 4/5, 0/0.25 = 0, 0.05/0.25 = 1/5,
f(x|Y = 1) = 0/0.3 = 0, 0.15/0.3 = 1/2, 0.15/0.3 = 1/2,
f(x|Y = 2) = 0.1/0.45 = 2/9, 0.25/0.45 = 5/9, 0.1/0.45 = 2/9.

(c)

E(X|Y = 0) = 0(4/5) + 1(0) + 2(1/5) = 2/5,
E(X|Y = 1) = 0(0) + 1(1/2) + 2(1/2) = 3/2,
E(X|Y = 2) = 0(2/9) + 1(5/9) + 2(2/9) = 1,
E(X^2|Y = 0) = 0(4/5) + 1(0) + 4(1/5) = 4/5,
E(X^2|Y = 1) = 0(0) + 1(1/2) + 4(1/2) = 5/2,
E(X^2|Y = 2) = 0(2/9) + 1(5/9) + 4(2/9) = 13/9,
Var(X|Y = 0) = 4/5 - (2/5)^2 = 16/25,
Var(X|Y = 1) = 5/2 - (3/2)^2 = 1/4,
Var(X|Y = 2) = 13/9 - 1^2 = 4/9.

(d)

E(X) = (2/5)(0.25) + (3/2)(0.3) + 1(0.45) = 1,
E[Var(X|Y)] = (16/25)(0.25) + (1/4)(0.3) + (4/9)(0.45) = 0.435,
Var[E(X|Y)] = (2/5)^2(0.25) + (3/2)^2(0.3) + 1^2(0.45) - 1^2 = 0.165,
Var(X) = 0.435 + 0.165 = 0.6.
18.5 (a)

f(x, y) ∝ exp{ -[((x-μ1)/σ1)^2 - 2ρ((x-μ1)/σ1)((y-μ2)/σ2) + ((y-μ2)/σ2)^2] / [2(1-ρ^2)] }
∝ exp{ -[x^2 - 2x(μ1 + ρ(σ1/σ2)(y-μ2))] / [2(1-ρ^2)σ1^2] },

as a function of x. Now a normal density N(μ, σ^2) has pdf f(x) ∝ exp[-(x^2 - 2μx)/(2σ^2)]. Then

f_{X|Y}(x|y) ∝ f(x, y) is N(μ1 + ρ(σ1/σ2)(y - μ2), (1 - ρ^2)σ1^2).
(b)

f_X(x) = ∫ f(x, y) dy
∝ exp[-(1/2)((x-μ1)/σ1)^2] ∫ exp{ -(1/2)[(y - μ2 - ρ(σ2/σ1)(x-μ1)) / (σ2√(1-ρ^2))]^2 } dy
= exp[-(1/2)((x-μ1)/σ1)^2] √(2π) σ2 √(1-ρ^2)
∝ exp[-(1/2)((x-μ1)/σ1)^2].

Since the normal density is f(x) = (2πσ^2)^{-1/2} exp[-(1/2)((x-μ)/σ)^2] in general, we have
X ~ N(μ1, σ1^2).
(c) Suppose f_X(x)f_Y(y) = f_{X,Y}(x, y). Then f_{X,Y}(x, y) = f_{X|Y}(x|y)f_Y(y),
and therefore f_X(x) = f_{X|Y}(x|y). From the results of (a) and (b), ρ = 0.
Conversely, if ρ = 0, then

f_{X,Y}(x, y) ∝ exp[-(1/2)((x-μ1)/σ1)^2 - (1/2)((y-μ2)/σ2)^2] ∝ f_X(x)f_Y(y).

18.6 (a)

E(X) = E[E(X|Θ1, Θ2)] = E(Θ1),
Var(X) = E[Var(X|Θ1, Θ2)] + Var[E(X|Θ1, Θ2)] = E(Θ2) + Var(Θ1).
(b) f_{X|Θ1,Θ2}(x|θ1, θ2) = (2πθ2)^{-1/2} exp[-(x - θ1)^2/(2θ2)], and so

f_X(x) = ∫∫ (2πθ2)^{-1/2} exp[-(x - θ1)^2/(2θ2)] π1(θ1)π2(θ2) dθ1 dθ2.

We are also given f_{Y|Θ2}(y|θ2) = (2πθ2)^{-1/2} exp[-y^2/(2θ2)]. Let Z = Y + Θ1.
Then

f_{Z|Θ2}(z|θ2) = ∫ f_{Y|Θ2}(z - θ1|θ2) π1(θ1) dθ1

and

f_Z(z) = ∫ f_{Z|Θ2}(z|θ2) π2(θ2) dθ2
= ∫∫ f_{Y|Θ2}(z - θ1|θ2) π1(θ1) dθ1 π2(θ2) dθ2
= ∫∫ (2πθ2)^{-1/2} exp[-(z - θ1)^2/(2θ2)] π1(θ1)π2(θ2) dθ1 dθ2
= f_X(z).
18.7

f_Y(y) = e^{-α}α^y/y!,
f_Z(z) = ∫ (e^{-θ}θ^z/z!) π(θ) dθ,
f_X(x) = ∫ (e^{-θ}θ^x/x!) π(θ - α) dθ.

Let W = Y + Z. Then

f_W(w) = Σ_{y=0}^w f_Y(y) f_Z(w - y)
= Σ_{y=0}^w (e^{-α}α^y/y!) ∫ (e^{-θ}θ^{w-y}/(w - y)!) π(θ) dθ
= ∫ [e^{-(α+θ)}(α + θ)^w/w!] Σ_{y=0}^w (w choose y) [α/(α+θ)]^y [θ/(α+θ)]^{w-y} π(θ) dθ
= ∫ [e^{-(α+θ)}(α + θ)^w/w!] π(θ) dθ,

with the last line following because the sum contains binomial probabilities.
Let r = α + θ, and so

f_W(w) = ∫ (e^{-r}r^w/w!) π(r - α) dr = f_X(w).
18.8 (a) π(θ_ij) = 1/6 for die i and spinner j.

(b),(c) The calculations are in Table 18.1.

Table 18.1  Calculations for Exercise 18.8(b) and (c).

i  j  Pr(X=0|θ_ij)  Pr(X=3|θ_ij)  Pr(X=8|θ_ij)  μ(θ_ij)  v(θ_ij)
1  1  25/30         1/30          4/30          35/30    6,725/900
1  2  25/30         2/30          3/30          30/30    5,400/900
1  3  25/30         4/30          1/30          20/30    2,600/900
2  1  10/30         4/30          16/30         140/30   12,200/900
2  2  10/30         8/30          12/30         120/30   10,800/900
2  3  10/30         16/30         4/30          80/30    5,600/900

(d) Pr(X1 = 3) = (1 + 2 + 4 + 4 + 8 + 16)/(30 · 6) = 35/180.

(e) The calculations are in Table 18.2.

(f) The calculations are in Table 18.3.
Table 18.2  Calculations for Exercise 18.8(e).

i  j  Pr(X1=3|θ_ij)  Pr(Θ=θ_ij|X1=3) = Pr(X1=3|θ_ij)(1/6)/(35/180)
1  1  1/30           1/35
1  2  2/30           2/35
1  3  4/30           4/35
2  1  4/30           4/35
2  2  8/30           8/35
2  3  16/30          16/35
Table 18.3  Calculations for Exercise 18.8(f).

x2  Pr(X2=x2|X1=3) = Σ Pr(X2=x2|Θ=θ_ij)Pr(Θ=θ_ij|X1=3)
0   [25(1) + 25(2) + 25(4) + 10(4) + 10(8) + 10(16)]/(30 · 35) = 455/1,050
3   [1(1) + 2(2) + 4(4) + 4(4) + 8(8) + 16(16)]/(30 · 35) = 357/1,050
8   [4(1) + 3(2) + 1(4) + 16(4) + 12(8) + 4(16)]/(30 · 35) = 238/1,050
(g)

E(X2|X1 = 3) = [35(1) + 30(2) + 20(4) + 140(4) + 120(8) + 80(16)]/(30 · 35) = 2,975/1,050.
(h)

Pr(X2 = 0, X1 = 3) = (1/6)[25(1) + 25(2) + 25(4) + 10(4) + 10(8) + 10(16)]/900 = 455/5,400,
Pr(X2 = 3, X1 = 3) = 357/5,400,
Pr(X2 = 8, X1 = 3) = 238/5,400.

(i) Divide the answers to (h) by Pr(X1 = 3) = 35/180 to obtain the answers to
(f).
(j) E(X2|X1 = 3) = 0(455/1,050) + 3(357/1,050) + 8(238/1,050) = 2,975/1,050.

(k)

μ = (1/180)(35 + 30 + 20 + 140 + 120 + 80) = 425/180,
v = (1/5,400)(6,725 + 5,400 + 2,600 + 12,200 + 10,800 + 5,600) = 43,325/5,400,
a = (1/5,400)(35^2 + 30^2 + 20^2 + 140^2 + 120^2 + 80^2) - (425/180)^2 = 76,925/32,400.

(l)

Z = [1 + (43,325/5,400)/(76,925/32,400)]^{-1} = 0.228349,
P_c = 0.228349(3) + 0.771651(425/180) = 2.507.
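The Bühlmann quantities in (k) and (l) follow mechanically from Table 18.1; exact arithmetic with fractions reproduces them (a verification sketch):

```python
from fractions import Fraction as F

# Exercise 18.8(k),(l): structural parameters from the hypothetical means
# and process variances in Table 18.1, all equally likely.
mu_list = [F(35, 30), F(30, 30), F(20, 30), F(140, 30), F(120, 30), F(80, 30)]
v_list = [F(6725, 900), F(5400, 900), F(2600, 900), F(12200, 900), F(10800, 900), F(5600, 900)]

mu = sum(mu_list) / 6
v = sum(v_list) / 6
a = sum(m * m for m in mu_list) / 6 - mu ** 2

k = v / a
Z = 1 / (1 + k)                # one observation
Pc = Z * 3 + (1 - Z) * mu      # X1 = 3
print(mu, a, round(float(Z), 6), round(float(Pc), 3))
```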
18.9 (a) π(θ_i) = 1/3, i = 1, 2, 3.

(b),(c) The calculations appear in Table 18.4.

Table 18.4  Calculations for Exercise 18.9(b) and (c).

i  Pr(X=0|θ_i)  Pr(X=1|θ_i)  Pr(X=2|θ_i)  Pr(X=3|θ_i)  Pr(X=4|θ_i)  μ(θ_i)  v(θ_i)
1  0.1600       0.2800       0.3225       0.1750       0.0625       1.7     1.255
2  0.0625       0.0500       0.3350       0.1300       0.4225       2.8     1.480
3  0.2500       0.1500       0.3725       0.1050       0.1225       1.7     1.655

(d) Pr(X1 = 2) = (1/3)(0.3225 + 0.3350 + 0.3725) = 0.34333.
(e) The calculations appear in Table 18.5.

Table 18.5  Calculations for Exercise 18.9(e).

i  Pr(X1=2|θ_i)  Pr(Θ=θ_i|X1=2) = Pr(X1=2|θ_i)(1/3)/0.34333
1  0.3225        0.313107
2  0.3350        0.325243
3  0.3725        0.361650
(f) The calculations appear in Table 18.6.

(g)

E(X2|X1 = 2) = 1.7(0.313107) + 2.8(0.325243) + 1.7(0.361650) = 2.057767.
Table 18.6  Calculations for Exercise 18.9(f).

x2  Pr(X2=x2|X1=2) = Σ Pr(X2=x2|Θ=θ_i)Pr(Θ=θ_i|X1=2)
0   0.16(0.313107) + 0.0625(0.325243) + 0.25(0.361650) = 0.160837
1   0.28(0.313107) + 0.05(0.325243) + 0.15(0.361650) = 0.158180
2   0.3225(0.313107) + 0.335(0.325243) + 0.3725(0.361650) = 0.344648
3   0.175(0.313107) + 0.13(0.325243) + 0.105(0.361650) = 0.135049
4   0.0625(0.313107) + 0.4225(0.325243) + 0.1225(0.361650) = 0.201286
(h)

Pr(X2 = 0, X1 = 2) = [0.16(0.3225) + 0.0625(0.335) + 0.25(0.3725)]/3 = 0.055221,
Pr(X2 = 1, X1 = 2) = 0.054308, Pr(X2 = 2, X1 = 2) = 0.118329,
Pr(X2 = 3, X1 = 2) = 0.046367, Pr(X2 = 4, X1 = 2) = 0.069108.

(i) Divide the answers to (h) by Pr(X1 = 2) = 0.343333 to obtain the answers to
(f).

(j)

E(X2|X1 = 2) = 0(0.160837) + 1(0.158180) + 2(0.344648)
+ 3(0.135049) + 4(0.201286) = 2.057767.
(k)

μ = (1/3)(1.7 + 2.8 + 1.7) = 2.06667,
v = (1/3)(1.255 + 1.48 + 1.655) = 1.463333,
a = (1/3)(1.7^2 + 2.8^2 + 1.7^2) - 2.06667^2 = 0.268889.

(l)

Z = [1 + 1.463333/0.268889]^{-1} = 0.155228,
P_c = 0.155228(2) + 0.844772(2.06667) = 2.056321.
(m) Table 18.4 becomes Table 18.7 and the quantities become μ = 1.033333,
v = 0.731667, and a = 0.067222. Then Z = 2/(2 + 0.731667/0.067222) = 0.155228.

Table 18.7  Calculations for Exercise 18.9(m).

i  Pr(X=0|θ_i)  Pr(X=1|θ_i)  Pr(X=2|θ_i)  μ(θ_i)  v(θ_i)
1  0.40         0.35         0.25         0.85    0.6275
2  0.25         0.10         0.65         1.40    0.7400
3  0.50         0.15         0.35         0.85    0.8275
SECTION 18.9
18.10
= 0.2(200) = 40,
E(S\eA)
E(S\9B)
=
=
Ε(Ν\ΘΑ)Ε(Χ\ΘΑ)
0.7(100) = 70,
Var(S|0 A )
Var (S\9B)
μ3
vs
=
=
=
=
=
as
=
Ε(Ν\θΑ)νΆτ(Χ\θΑ)+νΒΐ(Ν\θΑ)[Ε(Χ\θΑ))2
0.2(400) + 0.2(40,000) = 8,800,
0.7(1500) + 0.3(10,000) = 4,050,
| 4 0 + ±70 = 50,
18,800
3 " ' " " " T+ 3±4,050 = 7,216.67,
k
=
^ = 36.083,
as
Z
=
4T4Ö83= 0 1 0 '
Pc
=
2
_i__
(
νβτ[μ(0)] = |2/in2
402 +
±70
- 502 = 200,
0.10(125) + 0.90(50) = 57.50.
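The structural-parameter arithmetic of Exercise 18.10 in code (the printed solution rounds Z to 0.10, so the final premium differs slightly here):

```python
# Exercise 18.10: compound-risk Buhlmann credibility with weights 2/3 and 1/3.
w = [2 / 3, 1 / 3]
means = [40.0, 70.0]
pvars = [8800.0, 4050.0]

mu = sum(wi * m for wi, m in zip(w, means))
v = sum(wi * pv for wi, pv in zip(w, pvars))
a = sum(wi * m * m for wi, m in zip(w, means)) - mu ** 2

k = v / a
Z = 4 / (4 + k)              # four observations
Pc = Z * 125 + (1 - Z) * mu
print(round(mu, 2), round(v, 2), round(Z, 4), round(Pc, 2))
```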
18.11 Let S denote total claims. Then μ_S = μ_N μ_Y = 0.1(100) = 10.

v_S = E[Var(S|Θ1, Θ2)]
= E{E(N|Θ1)Var(Y|Θ2) + Var(N|Θ1)[E(Y|Θ2)]^2}
= E[E(N|Θ1)]E[Var(Y|Θ2)] + E[Var(N|Θ1)]E{[E(Y|Θ2)]^2}
= μ_N v_Y + v_N E{[E(Y|Θ2)]^2}.

Since a_Y = Var[μ_Y(Θ2)] = E{[μ_Y(Θ2)]^2} - {E[μ_Y(Θ2)]}^2, we have v_S = μ_N v_Y + v_N(a_Y +
μ_Y^2). Then a_Y = Var[E(Y|Θ2)] = Var(Θ2) since Y is exponentially distributed.
But, again using the exponential distribution,

Var(Θ2) = E(Θ2^2) - [E(Θ2)]^2 = E[Var(Y|Θ2)] - {E[E(Y|Θ2)]}^2 = v_Y - μ_Y^2,

from which a_Y + μ_Y^2 = v_Y. Then

v_S = μ_N v_Y + v_N(v_Y - μ_Y^2 + μ_Y^2) = μ_N v_Y + v_N v_Y
= 0.1(25,000) + 0.1(25,000) = 5,000.

Also,

a_S = Var[μ_S(Θ1, Θ2)] = E{[μ_S(Θ1, Θ2)]^2} - {E[μ_S(Θ1, Θ2)]}^2
= E{[μ_N(Θ1)]^2}E{[μ_Y(Θ2)]^2} - μ_N^2 μ_Y^2
= (a_N + μ_N^2)(a_Y + μ_Y^2) - μ_N^2 μ_Y^2
= [0.05 + (0.1)^2](25,000) - (0.1)^2(100)^2 = 1,400.

Therefore, k = v_S/a_S = 5,000/1,400 = 3.5714, Z = 3/(3 + 3.5714) = 0.4565,
and P_c = 0.4565(200/3) + 0.5435(10) = 35.87.
18.12 (a) E(X_j) = E[E(X_j|Θ)] = E[β_j μ(Θ)] = β_j E[μ(Θ)] = β_j μ.

Var(X_j) = E[Var(X_j|Θ)] + Var[E(X_j|Θ)]
= E[τ_j(Θ) + ψ_j v(Θ)] + Var[β_j μ(Θ)]
= τ_j + ψ_j v + β_j^2 a.

For i ≠ j,

Cov(X_i, X_j) = E(X_i X_j) - E(X_i)E(X_j)
= E[E(X_i X_j|Θ)] - β_i β_j μ^2
= E[E(X_i|Θ)E(X_j|Θ)] - β_i β_j μ^2
= E{β_i β_j [μ(Θ)]^2} - β_i β_j μ^2
= β_i β_j (E{[μ(Θ)]^2} - {E[μ(Θ)]}^2) = β_i β_j a.

(b) The normal equations are

E(X_{n+1}) = α̂_0 + Σ_{j=1}^n α̂_j E(X_j),
Cov(X_i, X_{n+1}) = Σ_{j=1}^n α̂_j Cov(X_i, X_j),

where E(X_{n+1}) = β_{n+1}μ, E(X_j) = β_j μ, Cov(X_i, X_{n+1}) = β_i β_{n+1} a, and
Cov(X_i, X_i) = Var(X_i) = τ_i + ψ_i v + β_i^2 a. On substitution,

β_i β_{n+1} a = Σ_{j=1}^n α̂_j β_i β_j a + α̂_i (τ_i + ψ_i v).

Hence,

(β_{n+1} - Σ_{j=1}^n α̂_j β_j) β_i a = α̂_i (τ_i + ψ_i v),

yielding

α̂_i = β_i a (β_{n+1} - Σ_{j=1}^n α̂_j β_j) / (τ_i + ψ_i v).

Let m_i = β_i^2/(τ_i + ψ_i v) and m = Σ_{i=1}^n m_i. Multiplying the previous equation
by β_i and summing over i gives Σ α̂_j β_j = am(β_{n+1} - Σ α̂_j β_j), so that
Σ α̂_j β_j = am β_{n+1}/(1 + am). Thus,

α̂_0 = β_{n+1} μ - Σ_{j=1}^n α̂_j β_j μ = β_{n+1} μ [1 - am/(1 + am)] = β_{n+1} μ/(1 + am)

and

α̂_i = [β_i β_{n+1}/(1 + am)] · a/(τ_i + ψ_i v) = a β_{n+1} m_i/[β_i (1 + am)].

The credibility premium is

α̂_0 + Σ_{j=1}^n α̂_j X_j = β_{n+1} μ/(1 + am) + [a β_{n+1}/(1 + am)] Σ_{j=1}^n m_j X_j/β_j
= [1/(1 + am)] E(X_{n+1}) + [am/(1 + am)] β_{n+1} X̄
= (1 - Z) E(X_{n+1}) + Z β_{n+1} X̄,

where Z = am/(1 + am) and X̄ = (1/m) Σ_{j=1}^n m_j X_j/β_j.
18.13 The posterior distribution is

π(θ|x) ∝ [Π_{j=1}^n f(x_j|θ)] π(θ) ∝ [Π_{j=1}^n θ^{x_j}(1-θ)^{K_j - x_j}] θ^{a-1}(1-θ)^{b-1}
= θ^{Σx_j + a - 1}(1-θ)^{Σ(K_j - x_j) + b - 1},

which is the kernel of the beta distribution with parameters a_* = Σ x_j + a
and b_* = Σ(K_j - x_j) + b. So

E(X_{n+1}|x) = ∫_0^1 μ_{n+1}(θ) π(θ|x) dθ
= K_{n+1} ∫_0^1 θ [Γ(a_* + b_*)/(Γ(a_*)Γ(b_*))] θ^{a_*-1}(1-θ)^{b_*-1} dθ
= K_{n+1} Γ(a_* + b_*)Γ(a_* + 1)/[Γ(a_*)Γ(a_* + 1 + b_*)]
= K_{n+1} a_*/(a_* + b_*)
= K_{n+1} (Σ x_j + a)/(Σ K_j + a + b).

Let

Z = Σ K_j/(Σ K_j + a + b),  X̄ = Σ x_j/Σ K_j,  μ = a/(a + b).

Then E(X_{n+1}|x) = K_{n+1}[Z X̄ + (1 - Z)μ], the credibility premium.
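The beta-binomial credibility identity of Exercise 18.13 can be verified exactly with made-up data (a, b, the (K_j, x_j) pairs, and K_{n+1} below are illustrative values, not from the exercise):

```python
from fractions import Fraction as F

# Exercise 18.13: posterior mean K_{n+1}(sum x_j + a)/(sum K_j + a + b)
# equals the credibility form K_{n+1}[Z*xbar + (1 - Z)*mu].
a, b = F(2), F(3)                    # hypothetical beta prior parameters
data = [(10, 3), (8, 2), (12, 5)]    # hypothetical (K_j, x_j) pairs
K_next = 9

sum_x = sum(x for _, x in data)
sum_K = sum(K for K, _ in data)

posterior_mean = K_next * (sum_x + a) / (sum_K + a + b)

Z = F(sum_K) / (sum_K + a + b)
xbar = F(sum_x) / sum_K
mu = a / (a + b)
credibility = K_next * (Z * xbar + (1 - Z) * mu)
print(posterior_mean, posterior_mean == credibility)
```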
18.14 (a) f_{X_j|Θ}(x_j|θ) = p(x_j) e^{r(θ)x_j}/q(θ), where

p(x) = Γ(α + x)/[Γ(α)x!], r(θ) = ln(θ),

and q(θ) = (1 - θ)^{-α}. Thus for 0 < θ < 1, π(θ) ∝ (1 - θ)^{αk}θ^{μk}(1/θ), which is
the kernel of a beta pdf (Appendix A) with parameters a = μk and b = αk + 1.
As a > 0 and b > 0, the result follows.
(b) μ(θ) = q'(θ)/[r'(θ)q(θ)] = αθ/(1 - θ), and thus

E[μ(Θ)] = ∫_0^1 [αθ/(1 - θ)] π(θ) dθ
= [αΓ(μk + αk + 1)/(Γ(μk)Γ(αk + 1))] ∫_0^1 θ^{μk}(1 - θ)^{αk-1} dθ,

which is infinite if αk ≤ 0 (or k ≤ 0 because α > 0). If αk > 0,

E[μ(Θ)] = [αΓ(μk + αk + 1)/(Γ(μk)Γ(αk + 1))] · [Γ(μk + 1)Γ(αk)/Γ(μk + αk + 1)] = μ.

Alternatively, because

π(θ)/[kr'(θ)] = [Γ(μk + αk + 1)/(kΓ(μk)Γ(αk + 1))] θ^{μk}(1 - θ)^{αk}

equals zero for θ = 0 and θ = 1 unless k < 0 (in which case it is unbounded for
θ = 1), the result follows directly from (18.38).
(c) Here

v(θ) = μ'(θ)/r'(θ) = [α/(1 - θ)^2] θ = αθ/(1 - θ)^2,

and

E[v(Θ)] = [αΓ(μk + αk + 1)/(Γ(μk)Γ(αk + 1))] ∫_0^1 θ^{μk}(1 - θ)^{αk-2} dθ.

As in (b), E[v(Θ)] = ∞ if αk ≤ 1, whereas if αk > 1,

E[v(Θ)] = [αΓ(μk + αk + 1)/(Γ(μk)Γ(αk + 1))] · [Γ(μk + 1)Γ(αk - 1)/Γ(μk + αk)]
= α(μk + αk)μk/[αk(αk - 1)]
= μk(μ + α)/(αk - 1).

Similarly, [μ(θ)]^2 = α^2θ^2/(1 - θ)^2 and

E{[μ(Θ)]^2} = [α^2Γ(μk + αk + 1)/(Γ(μk)Γ(αk + 1))] ∫_0^1 θ^{μk+1}(1 - θ)^{αk-2} dθ,

which is infinite for αk ≤ 1, whereas if αk > 1,

E{[μ(Θ)]^2} = [α^2Γ(μk + αk + 1)/(Γ(μk)Γ(αk + 1))] · [Γ(μk + 2)Γ(αk - 1)/Γ(μk + αk + 1)]
= α^2(μk)(μk + 1)/[(αk)(αk - 1)].

As k > 1/α > 0, E[μ(Θ)] = μ from (b), and thus

Var[μ(Θ)] = E{[μ(Θ)]^2} - {E[μ(Θ)]}^2
= αμ(μk + 1)/(αk - 1) - μ^2
= [μ^2(αk) + αμ - μ^2(αk - 1)]/(αk - 1)
= μ(α + μ)/(αk - 1).

Clearly, E[v(Θ)] = k Var[μ(Θ)].

Alternatively, because k > 1/α > 0, π(θ)/[kr'(θ)] = 0 for θ = 0 and θ = 1,
implying that (18.52) holds. Furthermore, the corresponding expression with
μ'(θ) equals zero for θ = 0 and θ = 1 if k > 1/α, implying, in turn, that
(18.52) reduces to k = E[v(Θ)]/Var[μ(Θ)].
(d)

π(θ|x) ∝ [Π_{j=1}^n θ^{x_j}(1 - θ)^α] (1 - θ)^{αk}θ^{μk-1} = θ^{μ_*k_*-1}(1 - θ)^{αk_*},

with k_* = k + n and μ_* = (μk + nx̄)/(k + n). As shown in (b),

E(X_{n+1}|X = x) = E[μ(Θ)|X = x] = μ_*

if k_* > 0 and is infinite if -1/α < k_* < 0. But k_* = k + n, and k_* > 0 if and
only if k > -n. But k_* > -1/α is the same as k > -n - 1/α, which is clearly
satisfied because k > -1/α. Also, k_* < 0 is the same as k < -n. Therefore,
if n < 1/α, it is possible to have -1/α < k < -n, or n - 1/α < k_* < 0, in
which case the Bayesian premium is infinite. If k = -n, then k_* = 0, and

π(θ|x) ∝ θ^{-1},  0 < θ < 1,

which is impossible, that is, there is no posterior pdf in this case.
(e) If k > 1/α, then the credibility premium exists from (b), and the Bayesian
premium is a linear function of the x_j's from (d). Hence, they must be equal.
18.15 (a) The posterior distribution is

π(θ|x) ∝ [Π_{j=1}^n exp(-mθx_j)/[q(θ)]^m] [q(θ)]^{-k} exp(-θμk)
= [q(θ)]^{-k-mn} exp[-θ(m Σ x_j + μk)].

This is the same form as the prior distribution, with

k_* = k + mn  and  μ_* = (m Σ_{j=1}^n x_j + μk)/(mn + k).

The Bayesian premium is clearly given by (18.44), but with these new definitions of k_* and μ_*, because the derivation of (18.44) from (18.41) is completely
analogous to that of (18.38) from (18.36).

(b) Because the Bayesian premium is linear in the x_j's as long as (18.45) holds,
the credibility premium must be given by μ_* as defined in (a).

(c) The inverse Gaussian pdf can be written as

f(x) = [θ/(2πx^3)]^{1/2} exp[-θ(x - μ)^2/(2μ^2 x)].

Replace θ with m and μ with (2θ)^{-1/2} to obtain

f(x) = [m/(2πx^3)]^{1/2} exp[-mθx + m(2θ)^{1/2} - m/(2x)].

Now let p(m, x) = [m/(2πx^3)]^{1/2} exp[-m/(2x)], r(θ) = -θ, and q(θ) = exp[-(2θ)^{1/2}]
to see that f(x) has the desired form.
18.16 (a) They are true when X1, X2, ... represent values in successive years
and τ is an inflation factor for each year.

(b) This is a special case of Exercise 18.12 with β_j = τ^j, τ_j(θ) = 0, and ψ_j =
τ^{2j}/m_j for j = 1, ..., n.

(c) From Exercise 18.12(b),

α̂_0 + Σ_{j=1}^n α̂_j X_j = (1 - Z)E(X_{n+1}) + Z τ^{n+1} X̄,

where

X̄ = Σ_{j=1}^n (m_j/m)(X_j/τ^j),  m = Σ_{j=1}^n m_j,

and

Z = (a/v)Σ m_j / [1 + (a/v)Σ m_j] = m/(k + m),  k = v/a.

Then the credibility premium is

[k/(k + m)] E(X_{n+1}) + [m/(k + m)] τ^{n+1} X̄
= [k/(k + m)] τ^{n+1} μ + [τ^{n+1}/(k + m)] Σ_{j=1}^n m_j X_j/τ^j.

(d) This is the usual Bühlmann-Straub credibility premium updated with
inflation to next year's dollars.
(e) ln f_{X_j|Θ}(x_j|θ) = m_j τ^{-j} x_j r(θ) - m_j ln q(θ) + ln p(x_j, m_j, τ), and so

∂/∂θ f_{X_j|Θ}(x_j|θ) = [m_j τ^{-j} x_j r'(θ) - m_j q'(θ)/q(θ)] f_{X_j|Θ}(x_j|θ)
= m_j r'(θ)[τ^{-j} x_j - μ(θ)] f_{X_j|Θ}(x_j|θ).

Integrate over all x_j and use Leibniz's rule to obtain

0 = m_j r'(θ)[τ^{-j} E(X_j|Θ = θ) - μ(θ)],

that is, E(X_j|Θ) = τ^j μ(Θ). Also,

∂^2/∂θ^2 f_{X_j|Θ}(x_j|θ) = m_j r''(θ)[τ^{-j} x_j - μ(θ)] f_{X_j|Θ}(x_j|θ)
- m_j r'(θ)μ'(θ) f_{X_j|Θ}(x_j|θ)
+ m_j^2 [r'(θ)]^2 [τ^{-j} x_j - μ(θ)]^2 f_{X_j|Θ}(x_j|θ).

Again integrating over all x_j yields

0 = 0 - m_j r'(θ)μ'(θ) + m_j^2 τ^{-2j} [r'(θ)]^2 Var(X_j|Θ = θ),

and solving for Var(X_j|Θ) yields Var(X_j|Θ) = τ^{2j} μ'(Θ)/[m_j r'(Θ)] = τ^{2j} v(Θ)/m_j.
(f)

π(θ|x) ∝ [Π_{j=1}^n f_{X_j|Θ}(x_j|θ)] π(θ)
∝ [q(θ)]^{-k-m} exp{r(θ)[μk + Σ_{j=1}^n τ^{-j} m_j x_j]} r'(θ)
= [q(θ)]^{-k_*} e^{μ_* k_* r(θ)} r'(θ),

where k_* = k + m and

μ_* = (μk + Σ_{j=1}^n τ^{-j} m_j x_j)/(k + m).

Thus, π(θ|x) is the same as (18.41), and from (18.44),

E[μ(Θ)|X = x] = μ_* = [k/(k + m)]μ + [m/(k + m)] X̄.

But from (e) and (18.13), the Bayesian premium is

E(X_{n+1}|X = x) = E_{Θ|X=x}[E(X_{n+1}|Θ)]
= E_{Θ|X=x}[τ^{n+1} μ(Θ)]
= τ^{n+1} E[μ(Θ)|X = x].
18.17 (a) The Poisson pgf of each X_j is P_{X_j}(z|θ) = e^{θ(z-1)}, and so the pgf of
S is P_S(z|θ) = e^{nθ(z-1)}, which is Poisson with mean nθ. Hence,

f_{S|Θ}(s|θ) = (nθ)^s e^{-nθ}/s!

and so

f_S(s) = ∫ [(nθ)^s e^{-nθ}/s!] π(θ) dθ.

(b) μ(θ) = E(X_j|θ) = θ and π(θ|x) = [Π_{j=1}^n f(x_j|θ)] π(θ)/f(x). We have

f(x) = ∫ [Π_{j=1}^n θ^{x_j} e^{-θ}/x_j!] π(θ) dθ = (Π x_j!)^{-1} ∫ θ^{Σx_j} e^{-nθ} π(θ) dθ.

Therefore,

π(θ|x) = θ^{Σx_j} e^{-nθ} π(θ) / ∫ θ^{Σx_j} e^{-nθ} π(θ) dθ.

The Bayesian premium is

E(X_{n+1}|x) = ∫ θ π(θ|x) dθ = ∫ θ^{Σx_j + 1} e^{-nθ} π(θ) dθ / ∫ θ^{Σx_j} e^{-nθ} π(θ) dθ.

With s = Σ x_j, (a) gives ∫ θ^s e^{-nθ} π(θ) dθ = s! f_S(s)/n^s, and so

E(X_{n+1}|x) = [(s + 1)! f_S(s + 1)/n^{s+1}] / [s! f_S(s)/n^s] = [(s + 1)/n] f_S(s + 1)/f_S(s).
(c) With a gamma prior π(θ) = θ^{α-1} e^{-θ/β}/[Γ(α)β^α],

f_S(s) = [n^s/(s! Γ(α)β^α)] ∫_0^∞ θ^{s+α-1} e^{-(n + β^{-1})θ} dθ
= n^s Γ(s + α)/[s! Γ(α)β^α (n + β^{-1})^{s+α}]
= [Γ(s + α)/(Γ(s + 1)Γ(α))] [1/(1 + nβ)]^α [nβ/(1 + nβ)]^s,

where β_* = nβ and r = α. This is a negative binomial distribution.
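Part (c)'s mixing result can be confirmed numerically: integrating the Poisson(nθ) pf against a gamma density reproduces the negative binomial pf (α, β, n below are arbitrary illustration values):

```python
from math import exp, lgamma, log

# Exercise 18.17(c): a gamma(alpha, beta) mixing distribution gives S a
# negative binomial pf with r = alpha and beta* = n * beta.
alpha, beta, n = 2.5, 0.8, 4     # illustrative values

def nb_pf(s, r, b):
    # negative binomial pf: Gamma(r+s)/(Gamma(r) s!) (1/(1+b))^r (b/(1+b))^s
    return exp(lgamma(r + s) - lgamma(r) - lgamma(s + 1)
               - (r + s) * log(1 + b) + s * log(b))

def mixed_pf(s, steps=100000, upper=40.0):
    # midpoint-rule integral of the Poisson(n*t) pf times the gamma(alpha, beta) pdf
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        log_pois = s * log(n * t) - n * t - lgamma(s + 1)
        log_gam = (alpha - 1) * log(t) - t / beta - lgamma(alpha) - alpha * log(beta)
        total += exp(log_pois + log_gam) * h
    return total

max_err = max(abs(mixed_pf(s) - nb_pf(s, alpha, n * beta)) for s in range(5))
print(max_err < 1e-5)
```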
18.18 (a) Clearly,

π_{Θ|x}(θ|x) ∝ θ^{nx̄} e^{-nθ} π(θ) ∝ θ^{nx̄ - 3/2} exp[-α(n)θ - γ/(2θ)],

where α(n) = n + γ/(2μ^2), because the inverse Gaussian prior contributes
θ^{-3/2} exp[-γθ/(2μ^2) - γ/(2θ)]. From Exercise 5.20(g) with m = nx̄ (a nonnegative integer because of the
Poisson assumption) and θ replaced by γ, the normalizing constant follows from

∫_0^∞ θ^{nx̄ - 3/2} e^{-α(n)θ - γ/(2θ)} dθ = 2 [γ/(2α(n))]^{(nx̄ - 1/2)/2} K_{nx̄ - 1/2}[√(2α(n)γ)].

(b)

f_{X_{n+1}|x}(x_{n+1}|x) = ∫_0^∞ f_{X_{n+1}|Θ}(x_{n+1}|θ) π_{Θ|x}(θ|x) dθ
= ∫_0^∞ [θ^{x_{n+1}} e^{-θ}/x_{n+1}!] π_{Θ|x}(θ|x) dθ
= [1/(x_{n+1}!)] ∫_0^∞ θ^{x_{n+1} + nx̄ - 3/2} e^{-α(n+1)θ - γ/(2θ)} dθ
/ {2 [γ/(2α(n))]^{(nx̄ - 1/2)/2} K_{nx̄ - 1/2}[√(2α(n)γ)]},

because α(n) + 1 = α(n + 1). Again using Exercise 5.20(g) yields

f_{X_{n+1}|x}(x_{n+1}|x) = [γ/(2α(n+1))]^{(x_{n+1} + nx̄ - 1/2)/2} K_{x_{n+1} + nx̄ - 1/2}[√(2α(n+1)γ)]
/ {(x_{n+1}!) [γ/(2α(n))]^{(nx̄ - 1/2)/2} K_{nx̄ - 1/2}[√(2α(n)γ)]}.

(c) Let x = nθ, and so θ = x/n and dθ = (dx)/n. Thus,

f_S(s) = ∫_0^∞ [(nθ)^s e^{-nθ}/s!] π(θ) dθ = ∫_0^∞ [x^s e^{-x}/s!] (1/n)π(x/n) dx.

But

(1/n)π(x/n) = [nγ/(2πx^3)]^{1/2} exp[-nγ(x - nμ)^2/(2(nμ)^2 x)],

which is the pdf of the inverse Gaussian distribution with parameters γ and
μ replaced by nγ and nμ, respectively. As in Example 7.17, let μ_* = nμ and
β_* = nμ^2/γ, which implies that nγ = n^2μ^2/β_* = μ_*^2/β_*. Then

(1/n)π(x/n) = [μ_*^2/(2πβ_*x^3)]^{1/2} exp[-(x - μ_*)^2/(2β_*x)].

Therefore, S has a Poisson-inverse Gaussian distribution [or, equivalently, a
compound Poisson distribution with ETNB (r = -1/2) secondary distribution]
with parameters μ_* = nμ and β_* = nμ^2/γ, and the pf f_S(s) can be computed
recursively. As shown in Exercise 18.17(b), the Bayesian premium is then
given by

E(X_{n+1}|X = x) = [(1 + nx̄)/n] f_S(1 + nx̄)/f_S(nx̄).
18.19 The posterior distribution is

π(θ|x) ∝ [Π_{j=1}^n exp(-(x_j - θ)^2/(2v))] exp(-(θ - μ)^2/(2a))
= exp[-(2v)^{-1}(Σ x_j^2 - 2θnx̄ + nθ^2) - (2a)^{-1}(θ^2 - 2θμ + μ^2)]
∝ exp[-(θ^2/2)(n/v + 1/a) + θ(nx̄/v + μ/a)].

Let p = n/v + 1/a and q = nx̄/v + μ/a, and then note that -pθ^2/2 + qθ =
-(p/2)(θ - q/p)^2 + q^2/(2p). Then

π(θ|x) ∝ exp[-(1/2)((θ - q/p)/p^{-1/2})^2],

and the posterior distribution is normal with mean

μ_* = q/p = (nx̄/v + μ/a)/(n/v + 1/a)

and variance

a_* = 1/p = (n/v + 1/a)^{-1}.

Then (18.10) implies that X_{n+1}|x is a mixture with X_{n+1}|θ having a normal
distribution with mean θ, and Θ|x is normal with mean μ_* and variance a_*.
From Example 5.5, X_{n+1}|x is normally distributed with mean μ_* and variance
a_* + v, that is,

f_{X_{n+1}|X}(x_{n+1}|x) = [2π(a_* + v)]^{-1/2} exp[-(x_{n+1} - μ_*)^2/(2(a_* + v))],  -∞ < x_{n+1} < ∞.

The Bayesian estimate is the mean of the predictive distribution, μ_*, which
can be written as Zx̄ + (1 - Z)μ, where

Z = (n/v)/(n/v + 1/a) = na/(na + v) = n/(n + v/a).

Because the Bayesian estimate is a linear function of the data, it must be the
Bühlmann estimate as well. To see this directly,

μ(θ) = θ,  μ = E(Θ) = μ,
v(θ) = v (not random),  v = E[v(Θ)] = v,
a = Var(Θ) = a,

thus indicating that the quantities in the question were chosen to align with
the text. Then, k = v/a and Z = n/(n + k).
18.20 (a) Let Θ represent the selected urn. Then f_X(x|Θ = 1) = 1/4, x =
1, 2, 3, 4 and f_X(x|Θ = 2) = 1/6, x = 1, 2, ..., 6. Then, μ(1) = E(X|Θ = 1) =
2.5 and μ(2) = E(X|Θ = 2) = 3.5.
For the Bayesian solution, the marginal probability of drawing a 4 is

f_X(4) = (1/2)(1/4) + (1/2)(1/6) = 5/24,

and the posterior probability for urn 1 is

π(Θ = 1|X = 4) = f_{X|Θ}(4|1)π(1)/f_X(4) = (1/4)(1/2)/(5/24) = 3/5,

and for urn 2 is

π(Θ = 2|X = 4) = 1 − 3/5 = 2/5.

The expected value of the next observation is

E(X₂|X₁ = 4) = 2.5(3/5) + 3.5(2/5) = 2.9.

(b) Using Bühlmann credibility,

μ = (1/2)(2.5 + 3.5) = 3,
v(1) = (1/4)(1² + 2² + 3² + 4²) − (2.5)² = 1.25,
v(2) = (1/6)(1² + ··· + 6²) − (3.5)² = 2.9167,
v = (1/2)v(1) + (1/2)v(2) = 2.0833,
a = Var[μ(Θ)] = E{[μ(Θ)]²} − {E[μ(Θ)]}² = (1/2)[2.5² + 3.5²] − 3² = 0.25,
k = v/a = 8.3333, Z = 1/(1 + 8.3333) = 0.10714.

The credibility premium is Pc = Zx̄ + (1 − Z)μ = 0.10714(4) + 0.89286(3) = 3.10714.
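The two-urn results above can be checked numerically. A minimal sketch (the urns, the equal prior, and the observed value 4 are those stated in the exercise):

```python
# Urn 1: uniform on {1,2,3,4}; Urn 2: uniform on {1,...,6}; each chosen with prob 1/2.
urns = {1: [1, 2, 3, 4], 2: [1, 2, 3, 4, 5, 6]}
prior = {1: 0.5, 2: 0.5}

# Bayesian premium after observing X1 = 4.
marginal = sum(prior[u] / len(faces) for u, faces in urns.items() if 4 in faces)
post = {u: prior[u] * (1 / len(urns[u])) / marginal for u in urns}
means = {u: sum(faces) / len(faces) for u, faces in urns.items()}
bayes = sum(post[u] * means[u] for u in urns)  # 2.9

# Buhlmann premium: v = E[process variance], a = Var[hypothetical mean].
mu = sum(prior[u] * means[u] for u in urns)
second = {u: sum(f * f for f in urns[u]) / len(urns[u]) for u in urns}
v = sum(prior[u] * (second[u] - means[u] ** 2) for u in urns)
a = sum(prior[u] * means[u] ** 2 for u in urns) - mu ** 2
Z = 1 / (1 + v / a)
buhlmann = Z * 4 + (1 - Z) * mu  # about 3.1071
```

With exact fractions, Z = 3/28 and the credibility premium is 87/28 ≈ 3.1071, confirming the rounded figures above.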
18.21 (a)

μ(θ) = v(θ) = θ,
μ = v = E(Θ) = ∫₁^∞ θ · 3θ^{−4} dθ = ∫₁^∞ 3θ^{−3} dθ = 1.5,
a = Var(Θ) = ∫₁^∞ 3θ^{−2} dθ − 2.25 = 0.75,
k = 1.5/0.75 = 2,
Z = 2/(2 + 2) = 0.5,
Pc = 0.5(10) + 0.5(1.5) = 5.75.
(b) Because the support of the prior distribution is θ > 1, that is also the
support of the posterior distribution. Therefore, the posterior distribution
is not gamma. π(θ|N₁ + N₂ = 20) ∝ e^{−2θ}θ^{20}θ^{−4} = e^{−2θ}θ^{16}. The required
constant is

∫₁^∞ e^{−2θ}θ^{16} dθ = e^{−2} (16!/2^{17}) Σ_{k=0}^{16} 2^k/k! = 1,179,501,863e^{−2},

and the posterior distribution is π(θ|N₁ + N₂ = 20) = e^{−2θ}θ^{16}/(1,179,501,863e^{−2}),
θ > 1. The posterior mean is

∫₁^∞ θe^{−2θ}θ^{16} dθ / (1,179,501,863e^{−2}) ≈ 8.5.

The mean differs only negligibly from 8.5, which would be the exact answer
if the integrals were taken from zero to infinity to create a gamma posterior.
18.22 From Z = 0.5/(0.5 + k) = 0.5, k = 0.5. Then Z = 3/(3 + 0.5) = 6/7.
18.23 (a)

Pr(X₁ = 1|A) = 3(0.9)(0.1)² = 0.027,
Pr(X₁ = 1|B) = 3(0.4)(0.6)² = 0.432,
Pr(X₁ = 1|C) = 3(0.2)(0.8)² = 0.384,
Pr(A|X₁ = 1) = 0.027/(0.027 + 0.432 + 0.384) = 27/843 = 9/281,
Pr(B|X₁ = 1) = 144/281,
Pr(C|X₁ = 1) = 128/281,
μ(A) = 3(0.9) = 2.7,
μ(B) = 3(0.4) = 1.2,
μ(C) = 3(0.2) = 0.6,
E(X₂|X₁ = 1) = [9(2.7) + 144(1.2) + 128(0.6)]/281 = 0.97473.

(b)

μ = (2.7 + 1.2 + 0.6)/3 = 1.5,
a = (2.7² + 1.2² + 0.6²)/3 − 2.25 = 0.78,
v(A) = 3(0.9)(0.1) = 0.27, v(B) = 0.72, v(C) = 0.48,
v = (0.27 + 0.72 + 0.48)/3 = 0.49,
k = 49/78, Z = (1 + 49/78)^{−1} = 78/127,
Pc = (78/127)(1) + (49/127)(1.5) = 1.19291.
18.24 (a)

μ(λ) = v(λ) = λ,
μ = v = E(λ) = ∫₁^∞ 4λ^{−4} dλ = 4/3,
a = Var(λ) = ∫₁^∞ 4λ^{−3} dλ − 16/9 = 2/9,
k = (4/3)/(2/9) = 6, Z = 3/(3 + 6) = 1/3,
Pc = (1/3)(1) + (2/3)(4/3) = 11/9.

(b)

μ = v = ∫₀¹ λ dλ = 1/2,
a = ∫₀¹ λ² dλ − 1/4 = 1/12,
k = (1/2)/(1/12) = 6, Z = 3/(3 + 6) = 1/3,
Pc = (1/3)(1) + (2/3)(1/2) = 2/3.
18.25 μ(h) = h, μ = E(h) = 2, v(h) = h, v = E(h) = 2, a = Var(h) = 2,
k = 2/2 = 1, Z = 1/(1 + 1) = 1/2.
18.26 (a) X ~ bin(3, θ), π(θ) = 6θ(1 − θ).

π(θ|X = 1) ∝ 3θ(1 − θ)² · 6θ(1 − θ) ∝ θ²(1 − θ)³,

and so the posterior distribution is beta with parameters 3 and 4. Then the
expected next observation is E(3Θ|X = 1) = 3(3/7) = 9/7.

(b)

μ(θ) = 3θ, v(θ) = 3θ(1 − θ),
μ = E(3Θ) = 3∫₀¹ θ · 6θ(1 − θ) dθ = 1.5,
v = E[3Θ(1 − Θ)] = 3∫₀¹ θ(1 − θ) · 6θ(1 − θ) dθ = 0.6,
a = Var(3Θ) = 9∫₀¹ θ² · 6θ(1 − θ) dθ − 2.25 = 0.45,
k = 0.6/0.45 = 4/3, Z = (1 + 4/3)^{−1} = 3/7,
Pc = (3/7)(1) + (4/7)(1.5) = 9/7.
18.27 (a)

μ(A) = 20, μ(B) = 12, μ(C) = 10,
v(A) = 416, v(B) = 288, v(C) = 308,
μ = (20 + 12 + 10)/3 = 14,
v = (416 + 288 + 308)/3 = 337.33,
a = (20² + 12² + 10²)/3 − 14² = 18.67,
k = 337.33/18.67 = 18.07,
Z = (1 + 18.07)^{−1} = 14/267,
Pc = (14/267)(0) + (253/267)(14) = 13.2659.

(b)

π(A|X = 0) = 2/(2 + 3 + 4) = 2/9,
π(B|X = 0) = 3/9, π(C|X = 0) = 4/9,
E(X₂|X₁ = 0) = (2/9)(20) + (3/9)(12) + (4/9)(10) = 12.89.
18.28 (a) Pr(N = 0) = ∫₁³ e^{−λ}(0.5) dλ = (e^{−1} − e^{−3})/2 = 0.159046.

(b)

μ = v = E(λ) = ∫₁³ λ(0.5) dλ = 2,
a = Var(λ) = ∫₁³ λ²(0.5) dλ − 4 = 1/3,
k = 2/(1/3) = 6, Z = 1/(1 + 6) = 1/7,
Pc = (1/7)(1) + (6/7)(2) = 13/7.

(c) π(λ|X₁ = 1) = e^{−λ}λ(0.5)/∫₁³ e^{−λ}λ(0.5) dλ = e^{−λ}λ/(2e^{−1} − 4e^{−3}), so

E(λ|X₁ = 1) = ∫₁³ e^{−λ}λ² dλ/(2e^{−1} − 4e^{−3}) = (5e^{−1} − 17e^{−3})/(2e^{−1} − 4e^{−3}) = 1.8505.

18.29 (a)

μ(A) = (1/6)(4) = 2/3, μ(B) = (5/6)(2) = 5/3,
v(A) = (1/6)(20) + (5/36)(16) = 50/9,
v(B) = (5/6)(5) + (5/36)(4) = 85/18,
μ = [(2/3) + (5/3)]/2 = 7/6,
v = [(50/9) + (85/18)]/2 = 185/36,
a = [(2/3)² + (5/3)²]/2 − 49/36 = 1/4,
k = (185/36)/(1/4) = 185/9,
Z = 4/(4 + 185/9) = 36/221.
(b) (36/221)(0.25) + (185/221)(7/6) = 1,349/1,326 = 1.01735.
18.30

E(X₂) = (1 + 8 + 12)/3 = 7 = E[E(X₂|X₁)] = [2.6 + 7.8 + E(X₂|X₁ = 12)]/3,

and so E(X₂|X₁ = 12) = 10.6.
18.31 (a) X ~ Poisson(λ), π(λ) = e^{−λ/2}/2. The posterior distribution with
three claims is proportional to e^{−λ}λ³e^{−λ/2} = λ³e^{−1.5λ}, which is gamma with
parameters 4 and 1/1.5. The mean is 4/1.5 = 8/3.

(b)

μ(λ) = v(λ) = λ,
μ = v = E(λ) = 2,
a = Var(λ) = 4,
k = 2/4 = 0.5, Z = 1/(1 + 0.5) = 2/3,
Pc = (2/3)(3) + (1/3)(2) = 8/3.
18.32 (a) X ~ bin(3, θ), π(θ) = 280θ³(1 − θ)⁴, which is beta(4, 5).

π(θ|X = 2) ∝ 3θ²(1 − θ) · 280θ³(1 − θ)⁴ ∝ θ⁵(1 − θ)⁵,

and so the posterior distribution is beta with parameters 6 and 6. Then the
expected next observation is E(3Θ|X = 2) = 3(6/12) = 1.5.

(b)

μ(θ) = 3θ, v(θ) = 3θ(1 − θ),
μ = E(3Θ) = 3(4/9) = 4/3,
v = E[3Θ(1 − Θ)] = 3∫₀¹ θ(1 − θ) · 280θ³(1 − θ)⁴ dθ = 840(4!5!/10!) = 2/3,
a = Var(3Θ) = 9∫₀¹ θ² · 280θ³(1 − θ)⁴ dθ − 16/9 = 2,520(5!4!/10!) − 16/9 = 2 − 16/9 = 2/9,
k = (2/3)/(2/9) = 3,
Z = (1 + 3)^{−1} = 1/4,
Pc = (1/4)(2) + (3/4)(4/3) = 1.5.
18.33 (a)

μ(A₁) = 0.15, μ(A₂) = 0.05,
v(A₁) = 0.1275, v(A₂) = 0.0475,
μ = (0.15 + 0.05)/2 = 0.1,
v = (0.1275 + 0.0475)/2 = 0.0875,
a = (0.15² + 0.05²)/2 − 0.1² = 0.0025,
k = 0.0875/0.0025 = 35,
Z = 3/(3 + 35) = 3/38.

Thus, the estimated frequency is (3/38)(1/3) + (35/38)(0.1) = 9/76.

μ(B₁) = 24, μ(B₂) = 34,
v(B₁) = 64, v(B₂) = 84,
μ = (24 + 34)/2 = 29,
v = (64 + 84)/2 = 74,
a = (24² + 34²)/2 − 29² = 25,
k = 74/25,
Z = 1/(1 + 74/25) = 25/99.

Thus, the estimated severity is (25/99)(20) + (74/99)(29) = 294/11.
The estimated total is (9/76)(294/11) = 1,323/418 = 3.1651.

(b) Information about the various spinner combinations is given in Table 18.8.

Table 18.8 Calculations for Exercise 18.33.

Spinners     μ        v
A₁, B₁      3.6     83.04
A₁, B₂      5.1    159.99
A₂, B₁      1.2     30.56
A₂, B₂      1.7     59.11

μ = (3.6 + 5.1 + 1.2 + 1.7)/4 = 2.9,
v = (83.04 + 159.99 + 30.56 + 59.11)/4 = 83.175,
a = (3.6² + 5.1² + 1.2² + 1.7²)/4 − 2.9² = 2.415,
k = 83.175/2.415 = 34.441, Z = 3/(3 + 34.441) = 0.080126.

Thus, the estimated total is 0.080126(20/3) + 0.919874(2.9) = 3.2018.
(c) For part (a),

Pr(1|A₁) = 3(0.15)(0.85)² = 0.325125,
Pr(1|A₂) = 3(0.05)(0.95)² = 0.135375,
Pr(A₁|1) = 0.325125/(0.325125 + 0.135375) = 0.706026,
Pr(A₂|1) = 1 − 0.706026 = 0.293974.

Thus, the estimated frequency is 0.706026(0.15) + 0.293974(0.05) = 0.120603.

Pr(20|B₁) = 0.8,
Pr(20|B₂) = 0.3,
Pr(B₁|20) = 0.8/(0.8 + 0.3) = 8/11,
Pr(B₂|20) = 3/11.

Thus, the estimated severity is (8/11)(24) + (3/11)(34) = 26.727272.
The estimated total is 0.120603(26.727272) = 3.2234.

For part (b),

Pr(0, 20, 0|A₁, B₁) = (0.85)²(0.12) = 0.0867,
Pr(0, 20, 0|A₁, B₂) = (0.85)²(0.045) = 0.0325125,
Pr(0, 20, 0|A₂, B₁) = (0.95)²(0.04) = 0.0361,
Pr(0, 20, 0|A₂, B₂) = (0.95)²(0.015) = 0.0135375.

The posterior probabilities are 0.51347, 0.19255, 0.21380, and 0.08017, and the
estimated total is

0.51347(3.6) + 0.19255(5.1) + 0.21380(1.2) + 0.08017(1.7) = 3.2234.
(d)

Pr(X₁ = 0, ..., X_{n−1} = 0|A₁, B₁) = (0.85)^{n−1},
Pr(X₁ = 0, ..., X_{n−1} = 0|A₁, B₂) = (0.85)^{n−1},
Pr(X₁ = 0, ..., X_{n−1} = 0|A₂, B₁) = (0.95)^{n−1},
Pr(X₁ = 0, ..., X_{n−1} = 0|A₂, B₂) = (0.95)^{n−1}.

E(X_n|X₁ = 0, ..., X_{n−1} = 0)
= [(0.85)^{n−1}(3.6) + (0.85)^{n−1}(5.1) + (0.95)^{n−1}(1.2) + (0.95)^{n−1}(1.7)] / [2(0.85)^{n−1} + 2(0.95)^{n−1}]
= [2.9 + 8.7(0.85/0.95)^{n−1}] / [2 + 2(0.85/0.95)^{n−1}],

and the limit as n → ∞ is 2.9/2 = 1.45.
18.34

f(X = 0.12|A) = [1/(√(2π)(0.03))] exp[−(0.12 − 0.1)²/(2(0.0009))] = 10.6483,

f(X = 0.12|B) = f(X = 0.12|C) = 0 (actually, just very close to zero), so
Pr(A|X = 0.12) = 1. The Bayesian estimate is μ(A) = 0.1.
18.35 E(X₂|X₁ = 4) = 2 = Z(4) + (1 − Z)(1), so Z = 1/3 = 1/(1 + k),
k = 2 = v/a = 3/a, a = 1.5.
18.36 v = E(v) = 8, a = Var(μ) = 4, k = 8/4 = 2, Z = 3/(3 + 2) = 0.6.
18.37 (a)

f(y) = ∫₀^∞ λ^{−1}e^{−y/λ} · 400λ^{−3}e^{−20/λ} dλ = 400∫₀^∞ λ^{−4}e^{−(20+y)/λ} dλ.

Let θ = (20 + y)/λ, so λ = (20 + y)/θ and dλ = −(20 + y)θ^{−2} dθ, and so

f(y) = 400∫₀^∞ (20 + y)^{−3}θ²e^{−θ} dθ = 800(20 + y)^{−3},

which is Pareto with parameters 2 and 20, and so the mean is 20/(2 − 1) = 20.

(b) μ(λ) = λ, v(λ) = λ². The distribution of λ is inverse gamma with α = 2
and θ = 20. Then μ = E(λ) = 20/(2 − 1) = 20 and v = E(λ²), which does
not exist. The Bühlmann estimate does not exist.

(c) π(λ|15, 25) ∝ λ^{−1}e^{−15/λ}λ^{−1}e^{−25/λ}400λ^{−3}e^{−20/λ} ∝ λ^{−5}e^{−60/λ}, which is
inverse gamma with α = 4 and θ = 60. The posterior mean is 60/(4 − 1) = 20.
18.38

μ(θ) = θ, v(θ) = θ(1 − θ),
a = Var(Θ) = 0.07,
v = E(Θ − Θ²) = E(Θ) − Var(Θ) − [E(Θ)]² = 0.25 − 0.07 − (0.25)² = 0.1175,
k = 0.1175/0.07 = 1.67857,
Z = 1/(1 + 1.67857) = 0.37333.
18.39 (a) The means are 0, 2, 4, and 6, while the variances are all 9. Thus

μ = (0 + 2 + 4 + 6)/4 = 3,
v = 9, a = (0 + 4 + 16 + 36)/4 − 3² = 5,
Z = 1/(1 + 9/5) = 5/14 = 0.35714.

(b)(i) v = 9, a = 20, Z = 1/(1 + 9/20) = 20/29 = 0.68966.
(b)(ii) v = 3.24, a = 5, Z = 1/(1 + 3.24/5) = 5/8.24 = 0.60680.
(b)(iii) v = 9, a = (4 + 4 + 100 + 100)/4 − 36 = 16, Z = 1/(1 + 9/16) = 16/25 = 0.64.
(b)(iv) Z = 15/24 = 0.625.
(b)(v) a = 5, v = (9 + 9 + 2.25 + 2.25)/4 = 5.625, Z = 10/15.625 = 0.64.

The answer is (i).
18.40 (a) Preliminary calculations are given in Table 18.9.

Table 18.9 Calculations for Exercise 18.40.

Risk    Pr(100)   Pr(1,000)   Pr(20,000)      μ            v
1        0.5        0.3         0.2         4,350    61,382,500
2        0.7        0.2         0.1         2,270    35,054,100

Pr(100|1) = 0.5, Pr(100|2) = 0.7,
Pr(1|100) = 0.5(2/3)/[0.5(2/3) + 0.7(1/3)] = 10/17, Pr(2|100) = 7/17.

The expected value is (10/17)(4,350) + (7/17)(2,270) = 3,493.53.

(b)

μ = (2/3)(4,350) + (1/3)(2,270) = 3,656.67,
v = (2/3)(61,382,500) + (1/3)(35,054,100) = 52,606,366.67,
a = (2/3)(4,350)² + (1/3)(2,270)² − 3,656.67² = 961,422.22,
k = 54.72, Z = 1/55.72 = 0.017948.

The estimate is 0.017948(100) + 0.982052(3,656.67) = 3,592.84.
18.41 (a)

μ(μ, λ) = μλ, v(μ, λ) = μ(2λ²) = 2μλ²,
v = 2E(μλ²) = 2E(μ)[Var(λ) + E(λ)²]
  = 2(0.1)(640,000 + 1,000²)
  = 328,000.

(b)

a = Var(μλ) = E(μ²)E(λ²) − E(μ)²E(λ)²
  = [Var(μ) + E(μ)²][Var(λ) + E(λ)²] − E(μ)²E(λ)²
  = (0.0025 + 0.1²)(640,000 + 1,000²) − 0.1²(1,000²)
  = 10,500.
18.42 For the Bayesian estimate,

Pr(λ = 1|X₁ = r) = Pr(X₁ = r|λ = 1)Pr(λ = 1) / [Pr(X₁ = r|λ = 1)Pr(λ = 1) + Pr(X₁ = r|λ = 3)Pr(λ = 3)]
= [e^{−1}(0.75)/r!] / [e^{−1}(0.75)/r! + e^{−3}3^r(0.25)/r!]
= 0.2759/[0.2759 + 0.01245(3^r)].

Then,

2.98 = (1) 0.2759/[0.2759 + 0.01245(3^r)] + (3) 0.01245(3^r)/[0.2759 + 0.01245(3^r)].

The solution is r = 7. Because the risks are Poisson,

μ = v = E(λ) = 0.75(1) + 0.25(3) = 1.5,
a = Var(λ) = 0.75(1) + 0.25(9) − 2.25 = 0.75,
Z = 1/(1 + 1.5/0.75) = 1/3,

and the estimate is (1/3)(7) + (2/3)(1.5) = 3.33.
18.43 For the Bayesian estimate,

Pr(θ = 8|X₁ = 5) = Pr(X₁ = 5|θ = 8)Pr(θ = 8) / [Pr(X₁ = 5|θ = 8)Pr(θ = 8) + Pr(X₁ = 5|θ = 2)Pr(θ = 2)]
= (1/8)e^{−5/8}(0.8) / [(1/8)e^{−5/8}(0.8) + (1/2)e^{−5/2}(0.2)]
= 0.867035.

Then,

E(X₂|X₁ = 5) = E(θ|X₁ = 5) = 0.867035(8) + 0.132965(2) = 7.202.

For the Bühlmann estimate,

μ = 0.8(8) + 0.2(2) = 6.8,
v = 0.8(64) + 0.2(4) = 52.8,
a = 0.8(64) + 0.2(4) − 6.8² = 6.56,
Z = 1/(1 + 52.8/6.56) = 0.110512,

and the estimate is 0.110512(5) + 0.889488(6.8) = 6.601.
18.44 The posterior distribution is

π(λ|x) ∝ (e^{−λ})^{90}(λe^{−λ})⁷(λ²e^{−λ})²(λ³e^{−λ})λ³e^{−50λ} = λ^{17}e^{−150λ}.

This is a gamma distribution with α = 18 and θ = 1/150. The estimate for one
risk is the mean, 18(1/150) = 3/25, and for 100 risks it is 300/25 = 12.
Because a Poisson model with a gamma prior is a case where the Bayes
and Bühlmann estimates are the same, the Bühlmann estimate is also 12.
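The gamma update behind 18.44 can be sketched directly. The claim counts (90 risks with 0 claims, 7 with 1, 2 with 2, 1 with 3) are inferred from the exponents in the posterior, so they are an assumption here:

```python
# Poisson/gamma conjugate update for Exercise 18.44.
# Prior kernel lambda^3 e^(-50*lambda), i.e., gamma with alpha = 4, rate = 50.
prior_alpha, prior_rate = 4, 50

# Counts inferred from the posterior exponents: 90 zeros, 7 ones, 2 twos, 1 three.
counts = {0: 90, 1: 7, 2: 2, 3: 1}
n = sum(counts.values())                        # 100 risks
claims = sum(k * m for k, m in counts.items())  # 14 claims in total

post_alpha = prior_alpha + claims  # 18
post_rate = prior_rate + n         # 150
per_risk = post_alpha / post_rate  # 3/25
hundred_risks = 100 * per_risk     # 12
```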
18.45 We have

μ = 0.6(2,000) + 0.3(3,000) + 0.1(4,000) = 2,500,
v = 1,000²,
a = 0.6(2,000)² + 0.3(3,000)² + 0.1(4,000)² − 2,500² = 450,000,
Z = 80/(80 + 1,000,000/450,000) = 0.97297,
x̄ = (24,000 + 36,000 + 28,000)/80 = 1,100,

and the estimate is 0.97297(1,100) + 0.02703(2,500) = 1,137.84.
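The 18.45 numbers follow from a direct Bühlmann calculation over a three-point prior; a minimal sketch:

```python
# Buhlmann credibility for Exercise 18.45: three-point prior on the mean.
probs = [0.6, 0.3, 0.1]
means = [2000.0, 3000.0, 4000.0]

mu = sum(p * m for p, m in zip(probs, means))                # 2,500
v = 1000.0 ** 2                                              # given process variance
a = sum(p * m * m for p, m in zip(probs, means)) - mu ** 2   # 450,000

n = 80  # exposures
Z = n / (n + v / a)
xbar = (24000 + 36000 + 28000) / n  # 1,100
estimate = Z * xbar + (1 - Z) * mu  # about 1,137.84
```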
18.46 For one year,

π(q|x₁) ∝ q^{x₁}(1 − q)^{8−x₁}q^{a−1}(1 − q)⁸ = q^{x₁+a−1}(1 − q)^{16−x₁},

which is a beta distribution with parameters x₁ + a and 17 − x₁. The mean
is the Bayesian estimate of q, and 8 times that value is the estimate of the
expected number of claims in year 2. Then, x₁ = 2 implies

2.5455 = 8(2 + a)/(17 + a)

for a = 5. For two years,

π(q|x₁ = 2, x₂ = k) ∝ q²(1 − q)⁶q^k(1 − q)^{8−k}q⁴(1 − q)⁸ = q^{6+k}(1 − q)^{22−k}.

Then,

3.73333 = 8(7 + k)/30

for k = 7.
18.47 If ρ = 0, then the claims from successive years are uncorrelated, and
hence the past data x = (X₁, ..., X_n) are of no value in helping to predict
X_{n+1}, so more reliance should be placed on μ. (Unlike most models in this
chapter, here we get to know μ, as opposed to only knowing a probability
distribution concerning μ.) Conversely, if ρ = 1, then X_{n+1} is a perfect linear
function of x. Thus, no reliance need be placed on μ.
18.48 (a)

E{[X_{n+1} − g(X)]²}
= E{[X_{n+1} − E(X_{n+1}|X) + E(X_{n+1}|X) − g(X)]²}
= E{[X_{n+1} − E(X_{n+1}|X)]²} + E{[E(X_{n+1}|X) − g(X)]²}
  + 2E{[X_{n+1} − E(X_{n+1}|X)][E(X_{n+1}|X) − g(X)]}.

The third term is

2E{[X_{n+1} − E(X_{n+1}|X)][E(X_{n+1}|X) − g(X)]}
= 2E(E{[X_{n+1} − E(X_{n+1}|X)][E(X_{n+1}|X) − g(X)]|X})
= 2E{[E(X_{n+1}|X) − E(X_{n+1}|X)][E(X_{n+1}|X) − g(X)]}
= 0,

completing the proof.

(b) The objective function is minimized when E{[E(X_{n+1}|X) − g(X)]²} is
minimized. If g(X) is set equal to E(X_{n+1}|X), the expectation is of a random
variable that is identically zero, and so is zero. Because an expected square
cannot be negative, this is the minimum. But this is the Bayesian premium.

(c) Inserting a linear function, the mean-squared error to minimize is

E{[E(X_{n+1}|X) − α₀ − Σ_{j=1}^n α_j X_j]²}.

But this is (18.19), which is minimized by the linear credibility premium.
CHAPTER 19 SOLUTIONS

19.1 SECTION 19.5

19.1

x̄₁ = 733.33, x̄₂ = 633.33, x̄₃ = 900, x̄ = 755.56,
v̂₁ = (16.67² + 66.67² + 83.33²)/2 = 5,833.33,
v̂₂ = (8.33² + 33.33² + 41.67²)/2 = 1,458.33,
v̂₃ = (0² + 50² + 50²)/2 = 2,500,
v̂ = 3,263.89,
â = (22.22² + 122.22² + 144.44²)/2 − 3,263.89/3 = 17,060.19,
k̂ = 3,263.89/17,060.19 = 0.191316,
Z = 3/(3 + 0.191316) = 0.94005.

The three estimates are

0.94005(733.33) + 0.05995(755.56) = 734.67,
0.94005(633.33) + 0.05995(755.56) = 640.66,
0.94005(900) + 0.05995(755.56) = 891.34.
Student Solutions Manual to Accompany Loss Models: From Data to Decisions, Fourth
Edition. By Stuart A. Klugman, Harry H. Panjer, Gordon E. Willmot.
Copyright © 2012 John Wiley & Sons, Inc.
19.2

x̄₁ = 45,000/220 = 204.55,
x̄₂ = 54,000/235 = 229.79,
x̄₃ = 91,000/505 = 180.20,
μ̂ = x̄ = 190,000/960 = 197.91.

v̂ = [100(4.55)² + 120(3.78)² + 90(18.68)² + 75(10.21)² + 70(13.07)²
  + 150(6.87)² + 175(8.77)² + 180(14.24)²]/(1 + 2 + 2)
  = 22,401,

â = [220(204.55 − 197.91)² + 235(229.79 − 197.91)² + 505(180.20 − 197.91)² − 22,401(2)]
  / [960 − (220² + 235² + 505²)/960]
  = 617.54,

k̂ = 36.27, Z₁ = 0.8585, Z₂ = 0.8663, Z₃ = 0.9330.

The estimates are

0.8585(204.55) + 0.1415(197.91) = 203.61,
0.8663(229.79) + 0.1337(197.91) = 225.53,
0.9330(180.20) + 0.0670(197.91) = 181.39.

Using the alternative method,

μ̂ = [0.8585(204.55) + 0.8663(229.79) + 0.9330(180.20)] / (0.8585 + 0.8663 + 0.9330) = 204.23,

and the estimates are

0.8585(204.55) + 0.1415(204.23) = 204.51,
0.8663(229.79) + 0.1337(204.23) = 226.37,
0.9330(180.20) + 0.0670(204.23) = 181.81.
19.3 x̄ = 475, v̂ = (0² + 75² + 75²)/2 = 5,625. With μ known to be 600,
â = (475 − 600)² − 5,625/3 = 13,750, k = 5,625/13,750 = 0.4091, Z =
3/3.4091 = 0.8800. The premium is 0.88(475) + 0.12(600) = 490.
19.4 (a)

Var(X_ij) = E[Var(X_ij|Θᵢ)] + Var[E(X_ij|Θᵢ)] = E[v(Θᵢ)] + Var[μ(Θᵢ)] = v + a.

(b) This follows from (19.6).
(c)

Σᵢ₌₁^r Σⱼ₌₁^n (X_ij − X̄)² = Σᵢ Σⱼ (X_ij − X̄ᵢ + X̄ᵢ − X̄)²
= Σᵢ Σⱼ (X_ij − X̄ᵢ)² + 2Σᵢ (X̄ᵢ − X̄) Σⱼ (X_ij − X̄ᵢ) + n Σᵢ (X̄ᵢ − X̄)².

The middle term is zero because Σⱼ₌₁^n (X_ij − X̄ᵢ) = Σⱼ₌₁^n X_ij − nX̄ᵢ = 0.

(d)

E[(nr − 1)^{−1} Σᵢ Σⱼ (X_ij − X̄)²]
= (nr − 1)^{−1} E[Σᵢ Σⱼ (X_ij − X̄ᵢ)²] + n(nr − 1)^{−1} E[Σᵢ (X̄ᵢ − X̄)²].

We know that (n − 1)^{−1} Σⱼ (X_ij − X̄ᵢ)² is an unbiased estimator of v(θᵢ), and so
the expected value of the first term is

(nr − 1)^{−1} E[Σᵢ (n − 1)v(Θᵢ)] = r(n − 1)(nr − 1)^{−1} v.

For the second term, note that Var(X̄ᵢ) = E[v(Θᵢ)/n] + Var[μ(Θᵢ)] = v/n + a,
and (r − 1)^{−1} Σᵢ (X̄ᵢ − X̄)² is an unbiased estimator of v/n + a. Then the second
term is

n(nr − 1)^{−1} (r − 1)(v/n + a).

Then the total is

[r(n − 1)/(nr − 1)] v + [n(r − 1)/(nr − 1)] (v/n + a) = v + a − [(n − 1)/(nr − 1)] a.

(e) Unconditionally, all of the X_ij are assumed to have the same mean when,
in fact, they do not. They also have a conditional variance that is smaller, and
so the variance from X̄ is not as great as it appears from (b).
19.5 With E(Yⱼ) = γ, Var(Yⱼ) = aⱼ + σ²/bⱼ, b = Σⱼ bⱼ, and Ȳ = b^{−1} Σⱼ bⱼYⱼ,

E[Σⱼ bⱼ(Yⱼ − Ȳ)²] = E[Σⱼ bⱼ(Yⱼ − γ + γ − Ȳ)²]
= E[Σⱼ bⱼ(Yⱼ − γ)²] + E[b(γ − Ȳ)²] + 2E[Σⱼ bⱼ(Yⱼ − γ)(γ − Ȳ)]
= Σⱼ bⱼ(aⱼ + σ²/bⱼ) + b Var(Ȳ) − 2b Var(Ȳ)
= Σⱼ bⱼ(aⱼ + σ²/bⱼ) − b Var(Ȳ),

where the cross term uses Σⱼ bⱼ(Yⱼ − γ) = b(Ȳ − γ). Because

Var(Ȳ) = b^{−2} Σⱼ bⱼ² Var(Yⱼ) = b^{−2} Σⱼ bⱼ²(aⱼ + σ²/bⱼ) = b^{−2} Σⱼ bⱼ²aⱼ + σ²/b,

it follows that

E[Σⱼ bⱼ(Yⱼ − Ȳ)²] = Σⱼ bⱼaⱼ + nσ² − b^{−1} Σⱼ bⱼ²aⱼ − σ² = Σⱼ aⱼ(bⱼ − bⱼ²/b) + (n − 1)σ².
19.6 x̄ = 333/2,787 = 0.11948 = v̂. The sample variance is 447/2,787 −
0.11948² = 0.14611, and so â = 0.14611 − 0.11948 = 0.02663. Then, k =
0.11948/0.02663 = 4.4867 and Z = 1/5.4867 = 0.18226. The premiums are
given in Table 19.1.

Table 19.1 Calculations for Exercise 19.6.

No. of claims    Premium
0    0.18226(0) + 0.81774(0.11948) = 0.09770
1    0.18226(1) + 0.81774(0.11948) = 0.27996
2    0.18226(2) + 0.81774(0.11948) = 0.46222
3    0.18226(3) + 0.81774(0.11948) = 0.64448
4    0.18226(4) + 0.81774(0.11948) = 0.82674
19.7 (a) See Appendix B.

(b)

a = Var[μ(Θ)] = Var(Θ),
v = E[v(Θ)] = E[Θ(1 + Θ)],
μ = E[μ(Θ)] = E(Θ),
v − μ − μ² = E(Θ) + E(Θ²) − E(Θ) − E(Θ)² = Var(Θ) = a.

(c)

μ̂ = x̄ = 0.11948,
â + v̂ = 0.14611,
â − v̂ = −0.11948 − 0.11948² = −0.133755.

The solution is â = 0.0061775 and v̂ = 0.1399325. Then

k = 0.1399325/0.0061775 = 22.652

and Z = 1/23.652 = 0.04228. The premiums are given in Table 19.2.

Table 19.2 Calculations for Exercise 19.7.

No. of claims    Premium
0    0.04228(0) + 0.95772(0.11948) = 0.11443
1    0.04228(1) + 0.95772(0.11948) = 0.15671
2    0.04228(2) + 0.95772(0.11948) = 0.19899
3    0.04228(3) + 0.95772(0.11948) = 0.24127
4    0.04228(4) + 0.95772(0.11948) = 0.28355
19.8

f_{X_i}(x_i) = ∫₀^∞ [∏ⱼ₌₁^{nᵢ} (m_ijθ)^{x_ij} e^{−m_ijθ}/x_ij!] μ^{−1}e^{−θ/μ} dθ
∝ μ^{−1} ∫₀^∞ θ^{tᵢ} e^{−θ(μ^{−1} + mᵢ)} dθ
∝ μ^{−1}(μ^{−1} + mᵢ)^{−(tᵢ+1)},

where tᵢ = Σⱼ₌₁^{nᵢ} x_ij and mᵢ = Σⱼ₌₁^{nᵢ} m_ij. Then the likelihood function is

L(μ) ∝ ∏ᵢ₌₁^r μ^{−1}(μ^{−1} + mᵢ)^{−(tᵢ+1)},

and the logarithm is

l(μ) = −r ln(μ) − Σᵢ₌₁^r (tᵢ + 1) ln(μ^{−1} + mᵢ)

and

l′(μ) = −rμ^{−1} − Σᵢ₌₁^r (tᵢ + 1)(μ^{−1} + mᵢ)^{−1}(−μ^{−2}) = 0.

The equation to be solved is

Σᵢ₌₁^r (tᵢ + 1)/(1 + mᵢμ) = r.
19.9 (a)

Σᵢ₌₁^r Σⱼ₌₁^{nᵢ} m_ij(X_ij − X̄)² = Σᵢ Σⱼ m_ij(X_ij − X̄ᵢ + X̄ᵢ − X̄)²
= Σᵢ Σⱼ m_ij(X_ij − X̄ᵢ)² + 2Σᵢ (X̄ᵢ − X̄) Σⱼ m_ij(X_ij − X̄ᵢ) + Σᵢ mᵢ(X̄ᵢ − X̄)²
= Σᵢ Σⱼ m_ij(X_ij − X̄ᵢ)² + Σᵢ mᵢ(X̄ᵢ − X̄)².

The middle term vanishes because Σⱼ₌₁^{nᵢ} m_ij(X_ij − X̄ᵢ) = 0 from the definition
of X̄ᵢ.

(b) Substituting (a) into the estimator gives

â = (m − m^{−1} Σᵢ mᵢ²)^{−1} [Σᵢ mᵢ(X̄ᵢ − X̄)² − (r − 1)v̂]
  = (m − m^{−1} Σᵢ mᵢ²)^{−1} [Σᵢ Σⱼ m_ij(X_ij − X̄)² − Σᵢ Σⱼ m_ij(X_ij − X̄ᵢ)² − (r − 1)v̂]
  = (m − m^{−1} Σᵢ mᵢ²)^{−1} [Σᵢ Σⱼ m_ij(X_ij − X̄)² − v̂ Σᵢ (nᵢ − 1) − (r − 1)v̂].

Also, with

m* = (m − m^{−1} Σᵢ mᵢ²)/(Σᵢ nᵢ − 1),

and because Σᵢ(nᵢ − 1) + (r − 1) = Σᵢ nᵢ − 1,

â = m*^{−1} [(Σᵢ nᵢ − 1)^{−1} Σᵢ Σⱼ m_ij(X_ij − X̄)² − v̂].
19.10 The sample mean is 21/34 and the sample variance is 370/340 —
(21/34) 2 = 817/1,156. Then, v = 21/34 and a = 817/1,156 - 21/34 =
103/1,156. k = (21/34)/(103/l,156) = 714/103 and Z = 1/(1 + 714/103) =
103/817. The estimate is (103/817) (2) + (714/817)(21/34) = 0.79192.
19.11 The sample means for the three policyholders are 3, 5, and 4, and the
overall mean is μ̂ = 4. The other estimates are

v̂ = (1² + 0² + 0² + 1² + 0² + 0² + 1² + 1² + 1² + 1² + 1² + 1²)/[3(4 − 1)] = 8/9,
â = [(3 − 4)² + (5 − 4)² + (4 − 4)²]/(3 − 1) − (8/9)/4 = 1 − 2/9 = 7/9,
Z = 4/[4 + (8/9)/(7/9)] = 7/9.

The three estimates are (7/9)(3) + (2/9)(4) = 29/9, (7/9)(5) + (2/9)(4) = 43/9,
and (7/9)(4) + (2/9)(4) = 4.
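These estimates can be reproduced with a generic nonparametric (equal-sample-size) Bühlmann estimator. The data rows below are hypothetical values chosen to be consistent with the stated sample means and squared deviations, since the raw observations are not reprinted here:

```python
# Nonparametric empirical Bayes estimation, as in Exercise 19.11.
# Hypothetical data matching the stated means (3, 5, 4) and squared deviations.
data = [[4, 3, 3, 2], [5, 5, 6, 4], [5, 5, 3, 3]]

r, n = len(data), len(data[0])
xbar_i = [sum(row) / n for row in data]     # per-policyholder means
xbar = sum(xbar_i) / r                      # overall mean (equal sizes)
v = sum((x - xbar_i[i]) ** 2
        for i, row in enumerate(data) for x in row) / (r * (n - 1))
a = sum((m - xbar) ** 2 for m in xbar_i) / (r - 1) - v / n
Z = n / (n + v / a)
estimates = [Z * m + (1 - Z) * xbar for m in xbar_i]  # 29/9, 43/9, 4
```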
226
CHAPTER 19
19.12 The estimate of the overall mean, μ, is the sample mean, per vehicle,
which is 7/10 = 0.7. With the Poisson assumption, this is also the estimate
of v. Then,
-
=
5(0.4 - 0.7)2 + 5(1.0 - 0.7)2 - (2 - 1)(0.7) _
io-(5'2
Z
-
5 + 0.7/0.04 ~
2/9
+ 52)
10
·
For insured A, the estimate is (2/9)(0.4) + (7/9)(0.7) = 0.6333, and for insured
B, it is (2/9)(1.0) + (7/9)(0.7) = 0.7667.
19.13
46
34
( ! ) +1 0 103 ( 2 ) + 5 ( 3 ) + 2 ( 4 ) - 0 . 8n 3oo
,
μ?, -_
v.-■ -- x =.
- ...
2 _
-
46(-0.83) 2 + 34(0.17)2 + 13(1.17)2 + 5(2.17)2 + 2(3.17)2 _
- U.J5UÜ1,
9g
ä
=
0.95061-0.83 = 0.12061,
Z
-
1+0.83/0.12061
s
(°) +
=0 12688
·
·
The estimated number of claims in five years for this pohcyholder is 0.12688(3) +
0.87312(0.83) = 1.10533. For one year, the estimate is 1.10533/5 = 0.221.
CHAPTER 20 SOLUTIONS

20.1 SECTION 20.1
20.1 The first seven values of the cumulative distribution function for a Poisson(3) variable are 0.0498, 0.1991, 0.4232, 0.6472, 0.8153, 0.9161, and 0.9665.
With 0.0498 < 0.1247 < 0.1991, the first simulated value is x = 1. With
0.9161 < 0.9321 < 0.9665, the second simulated value is x = 6. With
0.6472 < 0.6873 < 0.8153, the third simulated value is x = 4.
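The table lookup above can be automated by accumulating the Poisson cdf until it first exceeds u; a sketch:

```python
from math import exp

def sim_poisson(u, lam=3.0):
    """Inversion method: smallest x with F(x) > u for a Poisson(lam) variable."""
    x, p = 0, exp(-lam)
    cdf = p
    while u >= cdf:
        x += 1
        p *= lam / x   # Poisson pf recursion: p(x) = p(x-1) * lam / x
        cdf += p
    return x

values = [sim_poisson(u) for u in (0.1247, 0.9321, 0.6873)]  # [1, 6, 4]
```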
20.2 The cumulative distribution function is

F(x) = 0.25x,       0 ≤ x < 2,
F(x) = 0.5,         2 ≤ x < 4,
F(x) = 0.1x + 0.1,  4 ≤ x ≤ 9.

For u = 0.2, solve 0.2 = 0.25x for x = 0.8. The function is constant at 0.5
from 2 to 4, so the second simulated value is x = 4. For the third value, solve
0.7 = 0.1x + 0.1 for x = 6.
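A sketch of the piecewise inversion, with the flat segment handled by the convention used in the solution (u = 0.5 maps to the right endpoint, x = 4):

```python
def invert_piecewise(u):
    # F(x) = 0.25x on [0,2); 0.5 on [2,4); 0.1x + 0.1 on [4,9].
    if u < 0.5:
        return u / 0.25
    if u == 0.5:
        # F is flat at 0.5 on [2,4]; the solution takes the right endpoint.
        return 4.0
    return (u - 0.1) / 0.1

sims = [invert_piecewise(u) for u in (0.2, 0.5, 0.7)]  # approximately [0.8, 4.0, 6.0]
```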
20.2 SECTION 20.2
20.3 Because 0.372 < 0.4, the first value is from the Pareto distribution. Solve
0.693 = 1 − [100/(100 + x)]³ for x = 100[(1 − 0.693)^{−1/3} − 1] = 48.24. Because
0.702 > 0.4, the second value is from the inverse Weibull distribution. Solve
0.284 = e^{−(200/x)²} for x = 200[−ln(0.284)]^{−1/2} = 178.26.
20.4 For the first year, the number who remain employees is bin(200, 0.90)
and the inversion method produces a simulated value of 175. The number
alive but no longer employed is bin(25, 0.08/0.10 = 0.80) and the simulated
value is 22. The remaining 3 employees die during the year. For year 2, the
number who remain employed is bin(175, 0.90) and the simulated value is 155.
The number of them who are alive but no longer employed is bin(20, 0.80) and
the simulated value is 17, leaving 3 deaths. For the 22 who began the year
alive but no longer employed, the number who remain alive is bin(22, 0.95)
and the simulated value is 21. At the end of year 2, there are 155 alive and
employed, 17 + 21 = 38 alive but no longer employed, and 3 + 3 + 1 = 7
dead. The normal approximation produces the same values in all cases.
Also note that, because the number in each state at time 1 was not required,
an alternative is to determine the theoretical probabilities for time 2, given
employed at time 0, as 0.81, 0.148, and 0.042 and then do a single simulation
from this trinomial distribution.
20.5

d = ln(0.3) = −1.204, c = −3(−1.204) = 3.612,
λ₀ = 3.612, t₀ = −ln(0.857)/3.612 = 0.0427,
λ₁ = 3.612 − 1.204(1) = 2.408, s₁ = −ln(0.704)/2.408 = 0.1458, t₁ = 0.0427 + 0.1458 = 0.1885,
λ₂ = 3.612 − 1.204(2) = 1.204, s₂ = −ln(0.997)/1.204 = 0.0025, t₂ = 0.1885 + 0.0025 = 0.1910.

Because t_{m−1} = t₂ < 2, the simulated value is m = 3.
20.6
xi
w
=
=
x2 = 2(0.108) - 1 = - 0 . 7 8 4 ,
2(0.942) - 1 = 0.884,
2
(0.884) + ( - 0 . 7 8 4 )
2
= 1.396 > 1,
and so these values cannot be used. Moving to the next pair,
X!
w
=
=
2(0.217) - 1 = - 0 . 5 6 6 ,
2
x2 = 2(0.841) - 1 = 0.682,
2
( - 0 . 5 6 6 ) + (0.682) = 0.78548 < 1.
These values can be used. Then,
y
=
>/-21n(0.78548)/0.78548 = 0.78410,
ζλ
=
-0.566(0.78410) = -0.44380,
z2
=
0.682(0.78410) = 0.53476.
The simulated lognormal values are exp[5 + 1.5(−0.44380)] = 76.27 and
exp[5 + 1.5(0.53476)] = 331.01.
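The rejection step and the accepted pair follow the polar method; a sketch (the lognormal parameters 5 and 1.5 are those used above):

```python
from math import exp, log, sqrt

def polar_pair(u1, u2):
    """One attempt of the polar (Marsaglia) method: returns a pair of standard
    normals, or None when the transformed point falls outside the unit circle."""
    x1, x2 = 2 * u1 - 1, 2 * u2 - 1
    w = x1 * x1 + x2 * x2
    if w >= 1:
        return None
    y = sqrt(-2 * log(w) / w)
    return x1 * y, x2 * y

assert polar_pair(0.942, 0.108) is None   # w = 1.396 > 1, pair rejected
z1, z2 = polar_pair(0.217, 0.841)         # about -0.44380 and 0.53476
lognormals = [exp(5 + 1.5 * z) for z in (z1, z2)]  # about 76.27 and 331.01
```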
20.3 SECTION 20.3
20.7 The requested probability is the same as the probability that at least a
of the observations are at or below π₀.₉ and at most b are at or below π₀.₉.
The number of such observations has a binomial distribution with a sample
size of n and a success probability of Pr(X ≤ π₀.₉) = 0.9. Let N be this
binomial variable. We want 0.95 = Pr(a ≤ N ≤ b). From the central limit
theorem,

0.95 = Pr[(a − 0.9n)/√(0.09n) ≤ Z ≤ (b − 0.9n)/√(0.09n)],

and a symmetric interval implies

−1.96 = (a − 0.9n)/√(0.09n),

giving a = 0.9n − 1.96√[0.9(0.1)n]. Because a must be an integer, the result
should be rounded down. A similar calculation can be done for b.
20.8 The mean and standard deviation are both 100. The analog of (20.1) is

0.02μ/(σ/√n) = 1.645.

Substituting the mean and standard deviation produces n = (1.645/0.02)² =
6,766, where the answer has been rounded up for safety. For the probability
at 200, the true value is F(200) = 1 − exp(−2) = 0.8647. The equation to
solve is

0.02(0.8647)/√[0.8647(0.1353)/n] = 1.645

for n = 1,059. When doing these simulations, the goal should be achieved
90% of the time.
20.9 The answer depends on the particular simulation.
20.10 The sample variance for the five observations is

[(1 − 3)² + (2 − 3)² + (3 − 3)² + (4 − 3)² + (5 − 3)²]/4 = 2.5.

The estimate is the sample mean and its standard deviation is σ/√n, which
is approximated by √(2.5/n). Setting this equal to the goal of 0.05 gives the
answer, n = 1,000.
20.4 SECTION 20.4

20.11 The inversion method requires a solution to u = Φ[(x − 15,000)/2,000],
where Φ is the standard normal cdf. For u = 0.5398, the equation to solve is
(x − 15,000)/2,000 = 0.1 for x = 15,200. After the deductible is applied, the
insurer's cost for the first month is 5,200. The next equation is −1.2 =
(x − 15,000)/2,000 for x = 12,600 and a cost of 2,600. The third month's
equation is −3.0 = (x − 15,000)/2,000 for x = 9,000 and a cost of zero. The
final month uses 0.8 = (x − 15,000)/2,000 for x = 16,600 and a cost of 6,600.
The total cost is 14,400.
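A sketch of this calculation using the standard library's normal distribution. The uniform values for months 2–4 are not printed in the solution, so the ones below are recovered (approximately) from the quoted z-values −1.2, −3.0, and 0.8:

```python
from statistics import NormalDist

nd = NormalDist(mu=15000, sigma=2000)
us = [0.5398, 0.11507, 0.00135, 0.78814]      # month 2-4 uniforms are reconstructed
losses = [nd.inv_cdf(u) for u in us]          # about 15,200; 12,600; 9,000; 16,600
costs = [max(x - 10000, 0) for x in losses]   # 10,000 deductible each month
total = sum(costs)                            # about 14,400
```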
20.12 The equation to solve for the inversion method is u = Φ[(ln x − 0.01)/0.02].
When u = 0.1587, the equation is −1 = (ln x − 0.01)/0.02 for x = 0.99005, and the
first year's price is 99.005. For the second year, solve 1.5 = (ln x − 0.01)/0.02 for
x = 1.04081, and the price is 99.005(1.04081) = 103.045.
20.13 For this binomial distribution, the probability of no claims is 0.03424,
and so if 0 ≤ u < 0.03424, the simulated value is 0. The probability of one
claim is 0.11751, and so for 0.03424 ≤ u < 0.15175, the simulated value is 1.
The probability of two claims is 0.19962, and so for 0.15175 ≤ u < 0.35137
the simulated value is 2. Because the value 0.18 is in this interval, the
simulated value is 2.
20.14 For claim times, the equation to solve is u = 1 − exp(−x/2) for x =
−2 ln(1 − u). The simulated times are 0.886, 0.388, and two more we don't
need. The first claim is at time 0.886 and the second claim is after time 1, so
there is only one claim in the simulated year. For the amount of that claim,
solve 0.89 = F(x) for the given Pareto claim size distribution, giving x = 2,015.
At the time of the first claim, the surplus is 2,000 + 2,200(0.886) − 2,015 = 1,934.2.
Because ruin has not occurred, premiums continue to be collected for an ending
surplus of 1,934.2 + 2,200(0.114) = 2,185.
20.15 The empirical distribution places probability 1/2 on each of the two
points. The mean of this distribution is 2 and the variance of this distribution
is 1. There are four possible bootstrap samples: (1,1), (3,1), (1,3), and (3,3).
The value of the estimator for each is 0, 1, 1, and 0. The MSE is
(1/4)[(0 − 1)² + (1 − 1)² + (1 − 1)² + (0 − 1)²] = 0.5.
20.16 (a) The distribution function is F(x) = x/10 and so is 0.2, 0.4, and
0.7 at the three sample values. These are to be compared with the empirical
values of 0, 1/3, 2/3, and 1. The maximum difference is 0.7 versus 1 for a test
statistic of 0.3.
(b) Simulations by the authors produced an estimated p-value of 0.8882.
20.17 (a) The estimate is 4(7)/3 = 9.33. The estimated MSE is (28/3)²/15 =
5.807.

(b) There are 27 equally weighted bootstrap samples. For example, the
sample 2, 2, 4 produces an estimate of 4(4)/3 = 5.33 and a contribution to
the MSE of (5.33 − 8.0494)² = 7.3769. Averaging the 27 values produces an
estimated MSE of 4.146.
20.18 With ρ = 0.6, the matrix L is

L =
1     0
0.6   0.8

For the first simulation, the inverse normals are 2.1201 and −0.1181. The
correlated normals are 2.1201 and 0.6(2.1201) + 0.8(−0.1181) = 1.1776.
Next, apply the normal cdf to obtain the correlated uniform values 0.9830
and 0.8805. Finally, apply the Pareto and exponential inverse cdf functions
to obtain the simulated loss of 28,891 and expenses of 850. The insured pays
500 of the loss, the insurer pays 10,000, and the reinsurer pays the remaining
18,391. The insurer's share of expenses is (10,000/28,391)(850) = 299 and the
reinsurer pays 551. For the second simulation, the same calculations lead to
a loss of 929 and expenses of 174. The total for the insured is 1,000, for the
insurer is 10,000 + 299 + 429 + 174 = 10,902, and for the reinsurer is
18,391 + 551 = 18,942.
20.19 The extra step requires a simulated gamma value with α = 3 and
θ = 1/3. With u = 0.319, the simulated value is 0.6613. Dividing its
square root into the simulated normals gives 2.1201/0.6613^{1/2} = 2.6070 and
1.1776/0.6613^{1/2} = 1.4480. Using the normal cdf to make them uniform and
the Pareto and exponential inverse cdfs, respectively, gives a loss of 50,272
and an expense of 1,043. For the second simulation, the loss is 710 and the
expense is 155. After dividing them up, the insured pays 1,000, the insurer
pays 10,000 + 210 + 210 + 155 = 10,575, and the reinsurer pays
39,772 + 833 = 40,605.
20.20 The annual charges are simulated from u = 1 − e^{−x/1,000}, or x =
−1,000 ln(1 − u). The four simulated values are 356.67, 2,525.73, 1,203.97,
and 83.38. The reimbursements are 205.34 (80% of 256.67), 1,000 (the maximum),
883.18 (80% of 1,103.97), and 0. The total is 2,088.52 and the average
is 522.13.
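A sketch of this simulation. The uniform values are not printed in the solution; those below are recovered from the simulated charges (e.g., 356.67 = −1,000 ln(1 − 0.30)), and the 100 deductible is inferred from "80% of 256.67":

```python
from math import log

# Reconstructed uniforms (assumptions inferred from the stated charges).
us = [0.30, 0.92, 0.70, 0.08]
charges = [-1000 * log(1 - u) for u in us]  # about 356.67, 2,525.73, 1,203.97, 83.38

# 100 deductible (inferred), 80% coinsurance, 1,000 annual maximum reimbursement.
reimb = [min(0.8 * max(x - 100, 0), 1000) for x in charges]
average = sum(reimb) / len(reimb)  # about 522.13
```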
20.21 The simulated paid loss is exp[0.494Φ^{−1}(u) + 13.294]. The four simulated
paid losses are 450,161; 330,041; 939,798; and 688,451, for an average of
602,113. The multiplier for unpaid losses is

0.801(2006 − 2005)^{0.851} e^{−0.747(2006−2005)} = 0.3795

for an answer of 228,502.
20.22 0.95 = Pr(0.95μ ≤ X̄ ≤ 1.05μ) and X̄ ~ N(μ, σ²/n = 1.44μ²/n).
Then,

0.95 = Pr[(0.95μ − μ)/(1.2μ/√n) ≤ Z ≤ (1.05μ − μ)/(1.2μ/√n)]
     = Pr(−0.05√n/1.2 ≤ Z ≤ 0.05√n/1.2),

and therefore 0.05√n/1.2 = 1.96 for n = 2,212.76.
20.23 F(300) = 1 − e^{−300/100} = 0.9502. The variance of the estimate is
0.9502(0.0498)/n. The equation to solve is

0.9502(0.01) = 2.576√[0.9502(0.0498)/n].

The solution is n = 3,477.81.
20.24 There are 4 possible bootstrap samples for the 2 fire losses and 4 for the wind losses, making 16 equally likely outcomes. There are 9 unique cases, as follows. The losses are presented as (first fire loss, second fire loss, first wind loss, second wind loss).

Case 1: loss is (3,3,0,0); total is 6; eliminated is 0; fraction is 0; squared error is (0 − 0.2)² = 0.04; probability 1/16.
Case 2: loss is (3,3,0,3); total 9; eliminated 2; fraction 0.2222; error 0.0005; probability 2/16 [includes (3,3,3,0)].
Case 3: loss is (3,3,3,3); total 12; eliminated 4; fraction 0.3333; error 0.0178; probability 1/16.
Case 4: loss is (3,4,0,0); total 7; eliminated 0; fraction 0; error 0.04; probability 2/16.
Case 5: loss is (3,4,0,3); total 10; eliminated 2; fraction 0.2; error 0; probability 4/16.
Case 6: loss is (3,4,3,3); total 13; eliminated 4; fraction 0.3077; error 0.0116; probability 2/16.
Case 7: loss is (4,4,0,0); total 8; eliminated 0; fraction 0; error 0.04; probability 1/16.
Case 8: loss is (4,4,0,3); total 11; eliminated 2; fraction 0.1818; error 0.0003; probability 2/16.
Case 9: loss is (4,4,3,3); total 14; eliminated 4; fraction 0.2857; error 0.0073; probability 1/16.

The MSE is [1(0.04) + 2(0.0005) + ··· + 1(0.0073)]/16 = 0.0131.
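The full enumeration can be reproduced by looping over all 16 resamples. Here the fire losses are {3, 4}, the wind losses are {0, 3}, each wind loss eliminates min(w, 2) (an amount inferred from the eliminated totals above), and the original-sample fraction eliminated is 2/10 = 0.2:

```python
from itertools import product

fire = [3, 4]  # observed fire losses
wind = [0, 3]  # observed wind losses

errors = []
for f1, f2, w1, w2 in product(fire, fire, wind, wind):
    total = f1 + f2 + w1 + w2
    eliminated = min(w1, 2) + min(w2, 2)  # amount eliminated per wind loss
    frac = eliminated / total
    errors.append((frac - 0.2) ** 2)      # squared error vs. the sample value 0.2

mse = sum(errors) / len(errors)  # 0.0131
```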
WILEY SERIES IN PROBABILITY AND STATISTICS
ESTABLISHED BY WALTER A. SHEWHART AND SAMUEL S. WILKS
Editors: David J. Balding, Noel A. C. Cressie, Garrett M. Fitzmaurice,
Harvey Goldstein, Iain M. Johnstone, Geert Molenberghs, David W. Scott,
Adrian F. M. Smith, Ruey S. Tsay, Sanford Weisberg
Editors Emeriti: Vic Barnett, J. Stuart Hunter, Joseph B. Kadane, Jozef L. Teugels
The Wiley Series in Probability and Statistics is well established and authoritative. It covers
many topics of current research interest in both pure and applied statistics and probability
theory. Written by leading statisticians and institutions, the titles span both state-of-the-art
developments in the field and classical methods.
Reflecting the wide range of current research in statistics, the series encompasses applied,
methodological and theoretical statistics, ranging from applications and new techniques
made possible by advances in computerized practice to rigorous treatment of theoretical
approaches.
This series provides essential and invaluable reading for all statisticians, whether in academia, industry, government, or research.
† ABRAHAM and LEDOLTER · Statistical Methods for Forecasting
AGRESTI · Analysis of Ordinal Categorical Data, Second Edition
AGRESTI · An Introduction to Categorical Data Analysis, Second Edition
AGRESTI · Categorical Data Analysis, Second Edition
ALTMAN, GILL, and McDONALD · Numerical Issues in Statistical Computing for the
Social Scientist
AMARATUNGA and CABRERA · Exploration and Analysis of DNA Microarray and
Protein Array Data
ANDEL · Mathematics of Chance
ANDERSON · An Introduction to Multivariate Statistical Analysis, Third Edition
* ANDERSON · The Statistical Analysis of Time Series
ANDERSON, AUQUIER, HAUCK, OAKES, VANDAELE, and WEISBERG ·
Statistical Methods for Comparative Studies
ANDERSON and LOYNES · The Teaching of Practical Statistics
ARMITAGE and DAVID (editors) · Advances in Biometry
ARNOLD, BALAKRISHNAN, and NAGARAJA · Records
* ARTHANARI and DODGE · Mathematical Programming in Statistics
* BAILEY · The Elements of Stochastic Processes with Applications to the Natural
Sciences
BAJORSKI · Statistics for Imaging, Optics, and Photonics
BALAKRISHNAN and KOUTRAS · Runs and Scans with Applications
BALAKRISHNAN and NG · Precedence-Type Tests and Applications
BARNETT · Comparative Statistical Inference, Third Edition
BARNETT · Environmental Statistics
BARNETT and LEWIS · Outliers in Statistical Data, Third Edition
BARTHOLOMEW, KNOTT, and MOUSTAKI · Latent Variable Models and Factor
Analysis: A Unified Approach, Third Edition
BARTOSZYNSKI and NIEWIADOMSKA-BUGAJ · Probability and Statistical
Inference, Second Edition
BASILEVSKY · Statistical Factor Analysis and Related Methods: Theory and
Applications
BATES and WATTS · Nonlinear Regression Analysis and Its Applications
BECHHOFER, SANTNER, and GOLDSMAN · Design and Analysis of Experiments for
Statistical Selection, Screening, and Multiple Comparisons
*Now available in a lower priced paperback edition in the Wiley Classics Library.
†Now available in a lower priced paperback edition in the Wiley-Interscience Paperback Series.
BEIRLANT, GOEGEBEUR, SEGERS, TEUGELS, and DE WAAL · Statistics of
Extremes: Theory and Applications
BELSLEY · Conditioning Diagnostics: Collinearity and Weak Data in Regression
BELSLEY, KUH, and WELSCH · Regression Diagnostics: Identifying Influential
Data and Sources of Collinearity
BENDAT and PIERSOL · Random Data: Analysis and Measurement Procedures,
Fourth Edition
BERNARDO and SMITH · Bayesian Theory
BERZUINI, DAWID, and BERNARDINELLI · Causality: Statistical Perspectives and
Applications
BHAT and MILLER · Elements of Applied Stochastic Processes, Third Edition
BHATTACHARYA and WAYMIRE · Stochastic Processes with Applications
BIEMER, GROVES, LYBERG, MATHIOWETZ, and SUDMAN · Measurement Errors
in Surveys
BILLINGSLEY · Convergence of Probability Measures, Second Edition
BILLINGSLEY · Probability and Measure, Anniversary Edition
BIRKES and DODGE · Alternative Methods of Regression
BISGAARD and KULAHCI · Time Series Analysis and Forecasting by Example
BISWAS, DATTA, FINE, and SEGAL · Statistical Advances in the Biomedical Sciences:
Clinical Trials, Epidemiology, Survival Analysis, and Bioinformatics
BLISCHKE and MURTHY (editors) · Case Studies in Reliability and Maintenance
BLISCHKE and MURTHY · Reliability: Modeling, Prediction, and Optimization
BLOOMFIELD · Fourier Analysis of Time Series: An Introduction, Second Edition
BOLLEN · Structural Equations with Latent Variables
BOLLEN and CURRAN · Latent Curve Models: A Structural Equation Perspective
BOROVKOV · Ergodicity and Stability of Stochastic Processes
BOSQ and BLANKE · Inference and Prediction in Large Dimensions
BOULEAU · Numerical Methods for Stochastic Processes
BOX and TIAO · Bayesian Inference in Statistical Analysis
BOX · Improving Almost Anything, Revised Edition
BOX and DRAPER · Evolutionary Operation: A Statistical Method for Process
Improvement
BOX and DRAPER · Response Surfaces, Mixtures, and Ridge Analyses, Second Edition
BOX, HUNTER, and HUNTER · Statistics for Experimenters: Design, Innovation,
and Discovery, Second Edition
BOX, JENKINS, and REINSEL · Time Series Analysis: Forecasting and Control, Fourth
Edition
BOX, LUCENO, and PANIAGUA-QUINONES · Statistical Control by Monitoring
and Adjustment, Second Edition
BROWN and HOLLANDER · Statistics: A Biomedical Introduction
CAIROLI and DALANG · Sequential Stochastic Optimization
CASTILLO, HADI, BALAKRISHNAN, and SARABIA · Extreme Value and Related
Models with Applications in Engineering and Science
CHAN · Time Series: Applications to Finance with R and S-Plus®, Second Edition
CHARALAMBIDES · Combinatorial Methods in Discrete Distributions
CHATTERJEE and HADI · Regression Analysis by Example, Fifth Edition
CHATTERJEE and HADI · Sensitivity Analysis in Linear Regression
CHERNICK · Bootstrap Methods: A Guide for Practitioners and Researchers,
Second Edition
CHERNICK and FRIIS · Introductory Biostatistics for the Health Sciences
CHILES and DELFINER · Geostatistics: Modeling Spatial Uncertainty, Second Edition
CHOW and LIU · Design and Analysis of Clinical Trials: Concepts and Methodologies,
Second Edition
CLARKE · Linear Models: The Theory and Application of Analysis of Variance
CLARKE and DISNEY · Probability and Random Processes: A First Course with
Applications, Second Edition
* COCHRAN and COX · Experimental Designs, Second Edition
COLLINS and LANZA · Latent Class and Latent Transition Analysis: With Applications
in the Social, Behavioral, and Health Sciences
CONGDON · Applied Bayesian Modelling
CONGDON · Bayesian Models for Categorical Data
CONGDON · Bayesian Statistical Modelling, Second Edition
CONOVER · Practical Nonparametric Statistics, Third Edition
COOK · Regression Graphics
COOK and WEISBERG · An Introduction to Regression Graphics
COOK and WEISBERG · Applied Regression Including Computing and Graphics
CORNELL · A Primer on Experiments with Mixtures
CORNELL · Experiments with Mixtures, Designs, Models, and the Analysis of Mixture
Data, Third Edition
COX · A Handbook of Introductory Statistical Methods
CRESSIE · Statistics for Spatial Data, Revised Edition
CRESSIE and WIKLE · Statistics for Spatio-Temporal Data
CSÖRGŐ and HORVÁTH · Limit Theorems in Change-Point Analysis
DAGPUNAR · Simulation and Monte Carlo: With Applications in Finance and MCMC
DANIEL · Applications of Statistics to Industrial Experimentation
DANIEL · Biostatistics: A Foundation for Analysis in the Health Sciences, Eighth Edition
* DANIEL · Fitting Equations to Data: Computer Analysis of Multifactor Data,
Second Edition
DASU and JOHNSON · Exploratory Data Mining and Data Cleaning
DAVID and NAGARAJA · Order Statistics, Third Edition
* DEGROOT, FIENBERG, and KADANE · Statistics and the Law
DEL CASTILLO · Statistical Process Adjustment for Quality Control
DEMARIS · Regression with Social Data: Modeling Continuous and Limited Response
Variables
DEMIDENKO · Mixed Models: Theory and Applications
DENISON, HOLMES, MALLICK and SMITH · Bayesian Methods for Nonlinear
Classification and Regression
DETTE and STUDDEN · The Theory of Canonical Moments with Applications in
Statistics, Probability, and Analysis
DEY and MUKERJEE · Fractional Factorial Plans
DILLON and GOLDSTEIN · Multivariate Analysis: Methods and Applications
* DODGE and ROMIG · Sampling Inspection Tables, Second Edition
* DOOB · Stochastic Processes
DOWDY, WEARDEN, and CHILKO · Statistics for Research, Third Edition
DRAPER and SMITH · Applied Regression Analysis, Third Edition
DRYDEN and MARDIA · Statistical Shape Analysis
DUDEWICZ and MISHRA · Modern Mathematical Statistics
DUNN and CLARK · Basic Statistics: A Primer for the Biomedical Sciences,
Fourth Edition
DUPUIS and ELLIS · A Weak Convergence Approach to the Theory of Large Deviations
EDLER and KITSOS · Recent Advances in Quantitative Methods in Cancer and Human
Health Risk Assessment
* ELANDT-JOHNSON and JOHNSON · Survival Models and Data Analysis
ENDERS · Applied Econometric Time Series, Third Edition
† ETHIER and KURTZ · Markov Processes: Characterization and Convergence
EVANS, HASTINGS, and PEACOCK · Statistical Distributions, Third Edition
EVERITT, LANDAU, LEESE, and STAHL · Cluster Analysis, Fifth Edition
FEDERER and KING · Variations on Split Plot and Split Block Experiment Designs
FELLER · An Introduction to Probability Theory and Its Applications, Volume I,
Third Edition, Revised; Volume II, Second Edition
FITZMAURICE, LAIRD, and WARE · Applied Longitudinal Analysis, Second Edition
* FLEISS · The Design and Analysis of Clinical Experiments
FLEISS · Statistical Methods for Rates and Proportions, Third Edition
† FLEMING and HARRINGTON · Counting Processes and Survival Analysis
FUJIKOSHI, ULYANOV, and SHIMIZU · Multivariate Statistics: High-Dimensional and
Large-Sample Approximations
FULLER · Introduction to Statistical Time Series, Second Edition
† FULLER · Measurement Error Models
GALLANT · Nonlinear Statistical Models
GEISSER · Modes of Parametric Statistical Inference
GELMAN and MENG · Applied Bayesian Modeling and Causal Inference from
Incomplete-Data Perspectives
GEWEKE · Contemporary Bayesian Econometrics and Statistics
GHOSH, MUKHOPADHYAY, and SEN · Sequential Estimation
GIESBRECHT and GUMPERTZ · Planning, Construction, and Statistical Analysis of
Comparative Experiments
GIFI · Nonlinear Multivariate Analysis
GIVENS and HOETING · Computational Statistics
GLASSERMAN and YAO · Monotone Structure in Discrete-Event Systems
GNANADESIKAN · Methods for Statistical Data Analysis of Multivariate Observations,
Second Edition
GOLDSTEIN · Multilevel Statistical Models, Fourth Edition
GOLDSTEIN and LEWIS · Assessment: Problems, Development, and Statistical Issues
GOLDSTEIN and WOOFF · Bayes Linear Statistics
GREENWOOD and NIKULIN · A Guide to Chi-Squared Testing
GROSS, SHORTLE, THOMPSON, and HARRIS · Fundamentals of Queueing Theory,
Fourth Edition
GROSS, SHORTLE, THOMPSON, and HARRIS · Solutions Manual to Accompany
Fundamentals of Queueing Theory, Fourth Edition
* HAHN and SHAPIRO · Statistical Models in Engineering
HAHN and MEEKER · Statistical Intervals: A Guide for Practitioners
HALD · A History of Probability and Statistics and their Applications Before 1750
* HAMPEL · Robust Statistics: The Approach Based on Influence Functions
HARTUNG, KNAPP, and SINHA · Statistical Meta-Analysis with Applications
HEIBERGER · Computation for the Analysis of Designed Experiments
HEDAYAT and SINHA · Design and Inference in Finite Population Sampling
HEDEKER and GIBBONS · Longitudinal Data Analysis
HELLER · MACSYMA for Statisticians
HERITIER, CANTONI, COPT, and VICTORIA-FESER · Robust Methods in
Biostatistics
HINKELMANN and KEMPTHORNE · Design and Analysis of Experiments, Volume 1:
Introduction to Experimental Design, Second Edition
HINKELMANN and KEMPTHORNE · Design and Analysis of Experiments, Volume 2:
Advanced Experimental Design
HINKELMANN (editor) · Design and Analysis of Experiments, Volume 3: Special
Designs and Applications
HOAGLIN, MOSTELLER, and TUKEY · Fundamentals of Exploratory Analysis
of Variance
* HOAGLIN, MOSTELLER, and TUKEY · Exploring Data Tables, Trends and Shapes
* HOAGLIN, MOSTELLER, and TUKEY · Understanding Robust and Exploratory
Data Analysis
HOCHBERG and TAMHANE · Multiple Comparison Procedures
HOCKING · Methods and Applications of Linear Models: Regression and the Analysis
of Variance, Second Edition
HOEL · Introduction to Mathematical Statistics, Fifth Edition
HOGG and KLUGMAN · Loss Distributions
HOLLANDER and WOLFE · Nonparametric Statistical Methods, Second Edition
HOSMER and LEMESHOW · Applied Logistic Regression, Second Edition
HOSMER, LEMESHOW, and MAY · Applied Survival Analysis: Regression Modeling
of Time-to-Event Data, Second Edition
HUBER · Data Analysis: What Can Be Learned From the Past 50 Years
HUBER · Robust Statistics
† HUBER and RONCHETTI · Robust Statistics, Second Edition
HUBERTY · Applied Discriminant Analysis, Second Edition
HUBERTY and OLEJNIK · Applied MANOVA and Discriminant Analysis,
Second Edition
HUITEMA · The Analysis of Covariance and Alternatives: Statistical Methods for
Experiments, Quasi-Experiments, and Single-Case Studies, Second Edition
HUNT and KENNEDY · Financial Derivatives in Theory and Practice, Revised Edition
HURD and MIAMEE · Periodically Correlated Random Sequences: Spectral Theory
and Practice
HUSKOVA, BERAN, and DUPAC · Collected Works of Jaroslav Hajek—
with Commentary
HUZURBAZAR · Flowgraph Models for Multistate Time-to-Event Data
JACKMAN · Bayesian Analysis for the Social Sciences
† JACKSON · A User's Guide to Principal Components
JOHN · Statistical Methods in Engineering and Quality Assurance
JOHNSON · Multivariate Statistical Simulation
JOHNSON and BALAKRISHNAN · Advances in the Theory and Practice of Statistics: A
Volume in Honor of Samuel Kotz
JOHNSON, KEMP, and KOTZ · Univariate Discrete Distributions, Third Edition
JOHNSON and KOTZ (editors) · Leading Personalities in Statistical Sciences: From the
Seventeenth Century to the Present
JOHNSON, KOTZ, and BALAKRISHNAN · Continuous Univariate Distributions,
Volume 1, Second Edition
JOHNSON, KOTZ, and BALAKRISHNAN · Continuous Univariate Distributions,
Volume 2, Second Edition
JOHNSON, KOTZ, and BALAKRISHNAN · Discrete Multivariate Distributions
JUDGE, GRIFFITHS, HILL, LÜTKEPOHL, and LEE · The Theory and Practice of
Econometrics, Second Edition
JUREK and MASON · Operator-Limit Distributions in Probability Theory
KADANE · Bayesian Methods and Ethics in a Clinical Trial Design
KADANE and SCHUM · A Probabilistic Analysis of the Sacco and Vanzetti Evidence
KALBFLEISCH and PRENTICE · The Statistical Analysis of Failure Time Data, Second
Edition
KARIYA and KURATA · Generalized Least Squares
KASS and VOS · Geometrical Foundations of Asymptotic Inference
† KAUFMAN and ROUSSEEUW · Finding Groups in Data: An Introduction to Cluster
Analysis
KEDEM and FOKIANOS · Regression Models for Time Series Analysis
KENDALL, BARDEN, CARNE, and LE · Shape and Shape Theory
KHURI · Advanced Calculus with Applications in Statistics, Second Edition
KHURI, MATHEW, and SINHA · Statistical Tests for Mixed Linear Models
* KISH · Statistical Design for Research
KLEIBER and KOTZ · Statistical Size Distributions in Economics and Actuarial Sciences
KLEMELÄ · Smoothing of Multivariate Data: Density Estimation and Visualization
KLUGMAN, PANJER, and WILLMOT · Loss Models: From Data to Decisions,
Fourth Edition
KLUGMAN, PANJER, and WILLMOT · Student Solutions Manual to Accompany Loss
Models: From Data to Decisions, Fourth Edition
KOSKI and NOBLE · Bayesian Networks: An Introduction
KOTZ, BALAKRISHNAN, and JOHNSON · Continuous Multivariate Distributions,
Volume 1, Second Edition
KOTZ and JOHNSON (editors) · Encyclopedia of Statistical Sciences: Volumes 1 to 9
with Index
KOTZ and JOHNSON (editors) · Encyclopedia of Statistical Sciences: Supplement
Volume
KOTZ, READ, and BANKS (editors) · Encyclopedia of Statistical Sciences: Update
Volume 1
KOTZ, READ, and BANKS (editors) · Encyclopedia of Statistical Sciences: Update
Volume 2
KOWALSKI and TU · Modern Applied U-Statistics
KRISHNAMOORTHY and MATHEW · Statistical Tolerance Regions: Theory,
Applications, and Computation
KROESE, TAIMRE, and BOTEV · Handbook of Monte Carlo Methods
KROONENBERG · Applied Multiway Data Analysis
KULINSKAYA, MORGENTHALER, and STAUDTE · Meta Analysis: A Guide to
Calibrating and Combining Statistical Evidence
KULKARNI and HARMAN · An Elementary Introduction to Statistical Learning Theory
KUROWICKA and COOKE · Uncertainty Analysis with High Dimensional Dependence
Modelling
KVAM and VIDAKOVIC · Nonparametric Statistics with Applications to Science
and Engineering
LACHIN · Biostatistical Methods: The Assessment of Relative Risks, Second Edition
LAD · Operational Subjective Statistical Methods: A Mathematical, Philosophical, and
Historical Introduction
LAMPERTI · Probability: A Survey of the Mathematical Theory, Second Edition
LAWLESS · Statistical Models and Methods for Lifetime Data, Second Edition
LAWSON · Statistical Methods in Spatial Epidemiology, Second Edition
LE · Applied Categorical Data Analysis, Second Edition
LE · Applied Survival Analysis
LEE · Structural Equation Modeling: A Bayesian Approach
LEE and WANG · Statistical Methods for Survival Data Analysis, Third Edition
LEPAGE and BILLARD · Exploring the Limits of Bootstrap
LESSLER and KALSBEEK · Nonsampling Errors in Surveys
LEYLAND and GOLDSTEIN (editors) · Multilevel Modelling of Health Statistics
LIAO · Statistical Group Comparison
LIN · Introductory Stochastic Analysis for Finance and Insurance
LITTLE and RUBIN · Statistical Analysis with Missing Data, Second Edition
LLOYD · The Statistical Analysis of Categorical Data
LOWEN and TEICH · Fractal-Based Point Processes
MAGNUS and NEUDECKER · Matrix Differential Calculus with Applications in
Statistics and Econometrics, Revised Edition
MALLER and ZHOU · Survival Analysis with Long Term Survivors
MARCHETTE · Random Graphs for Statistical Pattern Recognition
MARDIA and JUPP · Directional Statistics
MARKOVICH · Nonparametric Analysis of Univariate Heavy-Tailed Data: Research and
Practice
MARONNA, MARTIN and YOHAI · Robust Statistics: Theory and Methods
MASON, GUNST, and HESS · Statistical Design and Analysis of Experiments with
Applications to Engineering and Science, Second Edition
McCOOL · Using the Weibull Distribution: Reliability, Modeling, and Inference
McCULLOCH, SEARLE, and NEUHAUS · Generalized, Linear, and Mixed Models,
Second Edition
McFADDEN · Management of Data in Clinical Trials, Second Edition
* McLACHLAN · Discriminant Analysis and Statistical Pattern Recognition
McLACHLAN, DO, and AMBROISE · Analyzing Microarray Gene Expression Data
McLACHLAN and KRISHNAN · The EM Algorithm and Extensions, Second Edition
McLACHLAN and PEEL · Finite Mixture Models
McNEIL · Epidemiological Research Methods
MEEKER and ESCOBAR · Statistical Methods for Reliability Data
MEERSCHAERT and SCHEFFLER · Limit Distributions for Sums of Independent
Random Vectors: Heavy Tails in Theory and Practice
MENGERSEN, ROBERT, and TITTERINGTON · Mixtures: Estimation and
Applications
MICKEY, DUNN, and CLARK · Applied Statistics: Analysis of Variance and
Regression, Third Edition
* MILLER · Survival Analysis, Second Edition
MONTGOMERY, JENNINGS, and KULAHCI · Introduction to Time Series Analysis
and Forecasting
MONTGOMERY, PECK, and VINING · Introduction to Linear Regression Analysis,
Fifth Edition
MORGENTHALER and TUKEY · Configural Polysampling: A Route to Practical
Robustness
MUIRHEAD · Aspects of Multivariate Statistical Theory
MÜLLER and STOYAN · Comparison Methods for Stochastic Models and Risks
MURTHY, XIE, and JIANG · Weibull Models
MYERS, MONTGOMERY, and ANDERSON-COOK · Response Surface Methodology:
Process and Product Optimization Using Designed Experiments, Third Edition
MYERS, MONTGOMERY, VINING, and ROBINSON · Generalized Linear Models:
With Applications in Engineering and the Sciences, Second Edition
NATVIG · Multistate Systems Reliability Theory With Applications
† NELSON · Accelerated Testing, Statistical Models, Test Plans, and Data Analyses
† NELSON · Applied Life Data Analysis
NEWMAN · Biostatistical Methods in Epidemiology
NG, TIAN, and TANG · Dirichlet and Related Distributions: Theory, Methods and Applications
OKABE, BOOTS, SUGIHARA, and CHIU · Spatial Tessellations: Concepts and
Applications of Voronoi Diagrams, Second Edition
OLIVER and SMITH · Influence Diagrams, Belief Nets and Decision Analysis
PALTA · Quantitative Methods in Population Health: Extensions of Ordinary Regressions
PANJER · Operational Risk: Modeling and Analytics
PANKRATZ · Forecasting with Dynamic Regression Models
PANKRATZ · Forecasting with Univariate Box-Jenkins Models: Concepts and Cases
PARDOUX · Markov Processes and Applications: Algorithms, Networks, Genome and
Finance
PARMIGIANI and INOUE · Decision Theory: Principles and Approaches
* PARZEN · Modern Probability Theory and Its Applications
PENA, TIAO, and TSAY · A Course in Time Series Analysis
PESARIN and SALMASO · Permutation Tests for Complex Data: Applications and
Software
PIANTADOSI · Clinical Trials: A Methodologic Perspective, Second Edition
POURAHMADI · Foundations of Time Series Analysis and Prediction Theory
POWELL · Approximate Dynamic Programming: Solving the Curses of Dimensionality,
Second Edition
POWELL and RYZHOV · Optimal Learning
PRESS · Subjective and Objective Bayesian Statistics, Second Edition
PRESS and TANUR · The Subjectivity of Scientists and the Bayesian Approach
PURI, VILAPLANA, and WERTZ · New Perspectives in Theoretical and Applied
Statistics
† PUTERMAN · Markov Decision Processes: Discrete Stochastic Dynamic Programming
QIU · Image Processing and Jump Regression Analysis
* RAO · Linear Statistical Inference and Its Applications, Second Edition
RAO · Statistical Inference for Fractional Diffusion Processes
RAUSAND and H0YLAND · System Reliability Theory: Models, Statistical Methods,
and Applications, Second Edition
RAYNER, THAS, and BEST · Smooth Tests of Goodness of Fit: Using R, Second Edition
RENCHER and SCHAALJE · Linear Models in Statistics, Second Edition
RENCHER and CHRISTENSEN · Methods of Multivariate Analysis, Third Edition
RENCHER · Multivariate Statistical Inference with Applications
RIGDON and BASU · Statistical Methods for the Reliability of Repairable Systems
* RIPLEY · Spatial Statistics
* RIPLEY · Stochastic Simulation
ROHATGI and SALEH · An Introduction to Probability and Statistics, Second Edition
ROLSKI, SCHMIDLI, SCHMIDT, and TEUGELS · Stochastic Processes for Insurance
and Finance
ROSENBERGER and LACHIN · Randomization in Clinical Trials: Theory and Practice
ROSSI, ALLENBY, and McCULLOCH · Bayesian Statistics and Marketing
† ROUSSEEUW and LEROY · Robust Regression and Outlier Detection
ROYSTON and SAUERBREI · Multivariate Model Building: A Pragmatic Approach to
Regression Analysis Based on Fractional Polynomials for Modeling Continuous
Variables
* RUBIN · Multiple Imputation for Nonresponse in Surveys
RUBINSTEIN and KROESE · Simulation and the Monte Carlo Method, Second Edition
RUBINSTEIN and MELAMED · Modern Simulation and Modeling
RYAN · Modern Engineering Statistics
RYAN · Modern Experimental Design
RYAN · Modern Regression Methods, Second Edition
RYAN · Statistical Methods for Quality Improvement, Third Edition
SALEH · Theory of Preliminary Test and Stein-Type Estimation with Applications
SALTELLI, CHAN, and SCOTT (editors) · Sensitivity Analysis
SCHERER · Batch Effects and Noise in Microarray Experiments: Sources and Solutions
* SCHEFFÉ · The Analysis of Variance
SCHIMEK · Smoothing and Regression: Approaches, Computation, and Application
SCHOTT · Matrix Analysis for Statistics, Second Edition
SCHOUTENS · Levy Processes in Finance: Pricing Financial Derivatives
SCOTT · Multivariate Density Estimation: Theory, Practice, and Visualization
* SEARLE · Linear Models
† SEARLE · Linear Models for Unbalanced Data
† SEARLE · Matrix Algebra Useful for Statistics
† SEARLE, CASELLA, and McCULLOCH · Variance Components
SEARLE and WILLETT · Matrix Algebra for Applied Economics
SEBER · A Matrix Handbook For Statisticians
† SEBER · Multivariate Observations
SEBER and LEE · Linear Regression Analysis, Second Edition
* SEBER and WILD · Nonlinear Regression
SENNOTT · Stochastic Dynamic Programming and the Control of Queueing Systems
* SERFLING · Approximation Theorems of Mathematical Statistics
SHAFER and VOVK · Probability and Finance: It's Only a Game!
SHERMAN · Spatial Statistics and Spatio-Temporal Data: Covariance Functions and
Directional Properties
SILVAPULLE and SEN · Constrained Statistical Inference: Inequality, Order, and Shape
Restrictions
SINGPURWALLA · Reliability and Risk: A Bayesian Perspective
SMALL and McLEISH · Hilbert Space Methods in Probability and Statistical Inference
SRIVASTAVA · Methods of Multivariate Statistics
STAPLETON · Linear Statistical Models, Second Edition
STAPLETON · Models for Probability and Statistical Inference: Theory and Applications
STAUDTE and SHEATHER · Robust Estimation and Testing
STOYANOV · Counterexamples in Probability, Second Edition
STOYAN, KENDALL, and MECKE · Stochastic Geometry and Its Applications, Second
Edition
STOYAN and STOYAN · Fractals, Random Shapes and Point Fields: Methods of
Geometrical Statistics
STREET and BURGESS · The Construction of Optimal Stated Choice Experiments:
Theory and Methods
STYAN · The Collected Papers of T. W. Anderson: 1943-1985
SUTTON, ABRAMS, JONES, SHELDON, and SONG · Methods for Meta-Analysis in
Medical Research
TAKEZAWA · Introduction to Nonparametric Regression
TAMHANE · Statistical Analysis of Designed Experiments: Theory and Applications
TANAKA · Time Series Analysis: Nonstationary and Noninvertible Distribution Theory
THOMPSON · Empirical Model Building: Data, Models, and Reality, Second Edition
THOMPSON · Sampling, Third Edition
THOMPSON · Simulation: A Modeler's Approach
THOMPSON and SEBER · Adaptive Sampling
THOMPSON, WILLIAMS, and FINDLAY · Models for Investors in Real World Markets
TIERNEY · LISP-STAT: An Object-Oriented Environment for Statistical Computing
and Dynamic Graphics
TSAY · Analysis of Financial Time Series, Third Edition
TSAY · An Introduction to Analysis of Financial Data with R
UPTON and FINGLETON · Spatial Data Analysis by Example, Volume II:
Categorical and Directional Data
† VAN BELLE · Statistical Rules of Thumb, Second Edition
VAN BELLE, FISHER, HEAGERTY, and LUMLEY · Biostatistics: A Methodology for
the Health Sciences, Second Edition
VESTRUP · The Theory of Measures and Integration
VIDAKOVIC · Statistical Modeling by Wavelets
VIERTL · Statistical Methods for Fuzzy Data
VINOD and REAGLE · Preparing for the Worst: Incorporating Downside Risk in Stock
Market Investments
WALLER and GOTWAY · Applied Spatial Statistics for Public Health Data
WEISBERG · Applied Linear Regression, Third Edition
WEISBERG · Bias and Causation: Models and Judgment for Valid Comparisons
WELSH · Aspects of Statistical Inference
WESTFALL and YOUNG · Resampling-Based Multiple Testing: Examples and
Methods for p-Value Adjustment
* WHITTAKER · Graphical Models in Applied Multivariate Statistics
WINKER · Optimization Heuristics in Economics: Applications of Threshold Accepting
WOODWORTH · Biostatistics: A Bayesian Introduction
WOOLSON and CLARKE · Statistical Methods for the Analysis of Biomedical Data,
Second Edition
WU and HAMADA · Experiments: Planning, Analysis, and Parameter Design
Optimization, Second Edition
WU and ZHANG · Nonparametric Regression Methods for Longitudinal Data Analysis
YIN · Clinical Trial Design: Bayesian and Frequentist Adaptive Methods
YOUNG, VALERO-MORA, and FRIENDLY · Visual Statistics: Seeing Data with
Dynamic Interactive Graphics
ZACKS · Stage-Wise Adaptive Designs
* ZELLNER · An Introduction to Bayesian Inference in Econometrics
ZELTERMAN · Discrete Distributions—Applications in the Health Sciences
ZHOU, OBUCHOWSKI, and McCLISH · Statistical Methods in Diagnostic
Medicine, Second Edition
Printed in the United States of America
ED-08-30-12