Chapter #6 – Functions of Random Variables

Question #2: Let $X \sim \mathrm{UNIF}(0,1)$. Use the CDF technique to find the PDF of the following random variables: a) $Y = X^{1/4}$, b) $W = e^{-X}$, c) $Z = 1 - e^{-X}$, and d) $U = X(1-X)$.

a) Since $X \sim \mathrm{UNIF}(0,1)$, we know that the density function is $f_X(x) = 1$ if $x \in (0,1)$ and $0$ otherwise, while the distribution function is $F_X(x) = 0$ if $x \in (-\infty, 0]$, $F_X(x) = x$ if $x \in (0,1)$, and $F_X(x) = 1$ if $x \in [1, \infty)$. We can then use the CDF technique to find $F_Y(y) = P(Y \le y) = P(X^{1/4} \le y) = P(X \le y^4) = F_X(y^4)$, so that $f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{d}{dy}F_X(y^4) = f_X(y^4)\,4y^3 = (1)4y^3$. Since $0 < x < 1$, the bounds are $0 < y^4 < 1$, or $0 < y < 1$. Therefore, $f_Y(y) = 4y^3$ if $y \in (0,1)$ and $0$ otherwise.
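As a quick numerical sanity check (not part of the original solution), the CDF found in part a) can be compared against simulated draws; the seed, sample size, and test point below are arbitrary choices.

```python
import random

# Monte Carlo check of part a): if X ~ UNIF(0,1) and Y = X**(1/4),
# then F_Y(y) = y**4 on (0,1).  The test point y = 0.8 is arbitrary.
random.seed(1)
N = 100_000
y = 0.8
hits = sum(random.random() ** 0.25 <= y for _ in range(N))
emp_cdf = hits / N       # empirical P(Y <= y)
theory = y ** 4          # F_Y(0.8) = 0.4096
```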
b) We have $F_W(w) = P(W \le w) = P(e^{-X} \le w) = P(X \ge -\ln(w)) = 1 - F_X(-\ln(w))$, so that $f_W(w) = \frac{d}{dw}F_W(w) = \frac{d}{dw}[1 - F_X(-\ln(w))] = -f_X(-\ln(w))\left(-\frac{1}{w}\right) = \frac{1}{w}$. Since $0 < x < 1$, the bounds are $0 < -\ln(w) < 1$, or $e^{-1} < w < 1$. The probability density function of the random variable $W$ is therefore $f_W(w) = \frac{1}{w}$ if $w \in (e^{-1}, 1)$ and $0$ otherwise.
c) By the CDF technique, we have that $F_Z(z) = P(Z \le z) = P(1 - e^{-X} \le z) = P(e^{-X} \ge 1 - z) = P(X \le -\ln(1-z)) = F_X(-\ln(1-z))$, so we have that $f_Z(z) = \frac{d}{dz}F_Z(z) = \frac{d}{dz}F_X(-\ln(1-z)) = f_X(-\ln(1-z))\left(\frac{1}{1-z}\right) = \frac{1}{1-z}$. Since $0 < x < 1$, the bounds are $0 < -\ln(1-z) < 1$, or $0 < z < 1 - e^{-1}$. Therefore, the probability density function of this random variable is $f_Z(z) = \frac{1}{1-z}$ if $z \in (0, 1 - e^{-1})$ and $0$ otherwise.
d) We will use Theorem 6.3.2 for this question. Suppose that $X$ is a continuous random variable with density $f_X(x)$ and assume that $U = u(X)$ is a one-to-one transformation with inverse $x = w(u)$. If $\frac{d}{du}w(u)$ is continuous and nonzero, the density of $U$ is given by $f_U(u) = f_X(w(u))\left|\frac{d}{du}w(u)\right|$. Here, the transformation is $u = x(1-x) = -x^2 + x$ and, based on the $u$-values of the graph over the $x$-values of $(0,1)$, the range over which the density of $U$ is defined is $(0, \frac{1}{4})$. Since this transformation is not one-to-one, we must partition the interval $(0,1)$ into two parts, $(0, \frac{1}{2})$ and $(\frac{1}{2}, 1)$, and then solve the transformation by completing the square: $u = -x^2 + x \rightarrow -u = x^2 - x \rightarrow \frac{1}{4} - u = x^2 - x + \frac{1}{4} \rightarrow \frac{1}{4} - u = \left(x - \frac{1}{2}\right)^2 \rightarrow x - \frac{1}{2} = \pm\sqrt{\frac{1}{4} - u} \rightarrow x = \frac{1}{2} \pm \sqrt{\frac{1}{4} - u}$. Since $x = w(u) = \frac{1}{2} \pm \sqrt{\frac{1}{4} - u}$, we have $w'(u) = \pm\frac{1}{2}\left(\frac{1}{4} - u\right)^{-1/2}(-1) = \frac{\mp 1}{2\sqrt{\frac{1}{4} - u}}$. Thus, $f_U(u) = f_X\left(\frac{1}{2} + \sqrt{\frac{1}{4} - u}\right)|w'(u)| + f_X\left(\frac{1}{2} - \sqrt{\frac{1}{4} - u}\right)|w'(u)| = (1 + 1)|w'(u)| = \left(\frac{1}{4} - u\right)^{-1/2}$, so we can conclude that the density function is $f_U(u) = \left(\frac{1}{4} - u\right)^{-1/2}$ if $u \in (0, \frac{1}{4})$ and $0$ otherwise.
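Integrating the density in part d) gives $F_U(u) = 1 - \sqrt{1 - 4u}$ on $(0, \frac{1}{4})$, which a simulation of $X(1-X)$ should reproduce. A minimal sketch (seed and test point are arbitrary choices, not from the text):

```python
import math, random

# Check of part d): f_U(u) = (1/4 - u)**(-1/2) integrates to
# F_U(u) = 1 - sqrt(1 - 4u); compare against simulated X*(1 - X).
random.seed(2)
N = 100_000
u = 0.2
hits = 0
for _ in range(N):
    x = random.random()
    if x * (1 - x) <= u:
        hits += 1
emp_cdf = hits / N
theory = 1 - math.sqrt(1 - 4 * u)   # about 0.5528
```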
Question #3: The measured radius $R$ of a circle has PDF $f_R(r) = 6r(1-r)$ if $r \in (0,1)$ and $f_R(r) = 0$ otherwise. Find the distribution of a) the circumference and b) the area of the circle.

a) The circumference of a circle is given by $C = 2\pi R$, so by the CDF technique we have that $F_C(c) = P(C \le c) = P(2\pi R \le c) = P\left(R \le \frac{c}{2\pi}\right) = F_R\left(\frac{c}{2\pi}\right)$. The density is thus $f_C(c) = \frac{d}{dc}F_C(c) = \frac{d}{dc}F_R\left(\frac{c}{2\pi}\right) = f_R\left(\frac{c}{2\pi}\right)\left(\frac{1}{2\pi}\right) = 6\left(\frac{c}{2\pi}\right)\left(1 - \frac{c}{2\pi}\right)\left(\frac{1}{2\pi}\right) = \cdots = \frac{6c(2\pi - c)}{(2\pi)^3}$. Since $0 < r < 1$, we have $0 < \frac{c}{2\pi} < 1$, so that $0 < c < 2\pi$. Therefore, the probability density function of the circumference is given by $f_C(c) = \frac{6c(2\pi - c)}{(2\pi)^3}$ if $c \in (0, 2\pi)$ and $0$ otherwise.
b) The area is given by $A = \pi R^2$, so we have $F_A(a) = P(A \le a) = P(\pi R^2 \le a) = P\left(R^2 \le \frac{a}{\pi}\right) = P\left(|R| \le \sqrt{\frac{a}{\pi}}\right) = P\left(-\sqrt{\frac{a}{\pi}} \le R \le \sqrt{\frac{a}{\pi}}\right) = F_R\left(\sqrt{\frac{a}{\pi}}\right) - F_R\left(-\sqrt{\frac{a}{\pi}}\right)$. Thus, we have $f_A(a) = \frac{d}{da}F_A(a) = \frac{d}{da}\left[F_R\left(\sqrt{\frac{a}{\pi}}\right) - F_R\left(-\sqrt{\frac{a}{\pi}}\right)\right] = \cdots = \frac{3(\sqrt{\pi} - \sqrt{a})}{\pi^{3/2}}$. Since we have $0 < r < 1$, then $0 < \sqrt{\frac{a}{\pi}} < 1$, so that $0 < a < \pi$ and $f_A(a) = \frac{3(\sqrt{\pi} - \sqrt{a})}{\pi^{3/2}}$ if $a \in (0, \pi)$ and $0$ otherwise.
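Since $F_R(r) = 3r^2 - 2r^3$ on $(0,1)$, the area CDF satisfies $F_A(\pi/4) = F_R(1/2) = 0.5$, which makes a convenient spot check. The sketch below samples $R$ by rejection from $f_R(r) = 6r(1-r)$, whose maximum on $(0,1)$ is $1.5$ (the sampler is our own construction, not from the text):

```python
import math, random

# Check of part b): P(A <= pi/4) = F_R(1/2) = 3*(1/2)**2 - 2*(1/2)**3 = 0.5.
random.seed(3)

def sample_r():
    # Rejection sampling from f_R(r) = 6r(1-r), bounded above by 1.5.
    while True:
        r = random.random()
        if random.random() * 1.5 <= 6 * r * (1 - r):
            return r

N = 100_000
a = math.pi / 4
emp_cdf = sum(math.pi * sample_r() ** 2 <= a for _ in range(N)) / N
theory = 0.5
```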
Question #10: Suppose $X$ has density $f_X(x) = \frac{1}{2}e^{-|x|}$ for all $x \in \mathbb{R}$. a) Find the density of the random variable $Y = |X|$. b) If $W = 0$ when $X \le 0$ and $W = 1$ when $X > 0$, find the CDF of $W$.

a) We have $F_Y(y) = P(Y \le y) = P(|X| \le y) = P(-y \le X \le y) = F_X(y) - F_X(-y)$, so that we obtain $f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{d}{dy}[F_X(y) - F_X(-y)] = f_X(y)(1) - f_X(-y)(-1) = \frac{1}{2}e^{-|y|} + \frac{1}{2}e^{-|-y|} = e^{-|y|}$. Since the transformation was an absolute value function and we have the bounds $-\infty < x < \infty$, the bounds become $0 < y < \infty$. This allows us to write the probability density function of $Y$ as $f_Y(y) = e^{-y}$ if $y \in (0, \infty)$ and $0$ otherwise.

b) We see that $P(W = 0) = \frac{1}{2}$ and $P(W = 1) = \frac{1}{2}$ since $f_X(x)$ is symmetric about zero. This allows us to write the cumulative distribution function as $F_W(w) = 0$ if $w \in (-\infty, 0)$, $F_W(w) = \frac{1}{2}$ if $w \in [0, 1)$, and $F_W(w) = 1$ if $w \in [1, \infty)$.
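Part a) says that the absolute value of a double-exponential variable is $\mathrm{EXP}(1)$, so $P(|X| \le 1)$ should equal $1 - e^{-1}$. The sketch below simulates a Laplace draw as the difference of two independent $\mathrm{EXP}(1)$ draws (a standard construction we assume here, not taken from the text):

```python
import math, random

# Check of part a): if X has density (1/2)e^{-|x|}, then |X| ~ EXP(1),
# so P(|X| <= 1) = 1 - e^{-1}.
random.seed(4)
N = 100_000
hits = 0
for _ in range(N):
    x = random.expovariate(1.0) - random.expovariate(1.0)  # Laplace(0,1)
    if abs(x) <= 1.0:
        hits += 1
emp = hits / N
theory = 1 - math.exp(-1)   # F_Y(1), about 0.6321
```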
Question #13: Suppose $X$ has density $f_X(x) = \frac{1}{24}x^2$ for $x \in (-2, 4)$ and $f_X(x) = 0$ otherwise. Find the probability density function of the random variable $Y = X^2$.

• We will use Theorem 6.3.2 for this question: Suppose that $X$ is a continuous random variable with density $f_X(x)$ and assume that $Y = u(X)$ is a one-to-one transformation with inverse $x = w(y)$. If $\frac{d}{dy}w(y)$ is continuous and nonzero, the density of $Y$ is given by $f_Y(y) = f_X(w(y))\left|\frac{d}{dy}w(y)\right|$. Here, the transformation is $u = x^2$ and, based on the $y$-values of the graph over the $x$-values of $(-2, 4)$, the range over which the density of $Y$ is defined is $(0, 16)$. Solving the transformation then gives $x = w(y) = \pm\sqrt{y}$, so that $w'(y) = \pm\frac{1}{2\sqrt{y}}$. We must consider two cases in the interval $(0, 16)$: over $(0, 4)$ the transformation is not one-to-one, and over $(4, 16)$ it is one-to-one. The density is thus $f_Y(y) = f_X(-\sqrt{y})|w'(y)| + f_X(\sqrt{y})|w'(y)| = \frac{\sqrt{y}}{24}$ if $y \in (0, 4)$, $f_Y(y) = f_X(\sqrt{y})|w'(y)| = \frac{\sqrt{y}}{48}$ if $y \in [4, 16)$, and $f_Y(y) = 0$ otherwise.
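Integrating the $(0, 4)$ branch gives $F_Y(4) = \int_0^4 \frac{\sqrt{y}}{24}\,dy = \frac{2}{9}$, which can be checked against draws of $X^2$. The sketch below samples $X$ by rejection from $f_X(x) = x^2/24$ on $(-2, 4)$, whose maximum is $16/24$ (the sampler is an assumption of ours):

```python
import random

# Check of the two-branch density for Y = X^2: F_Y(4) = 2/9.
random.seed(5)

def sample_x():
    # Rejection sampling from f_X(x) = x^2/24 on (-2, 4); max density 16/24.
    while True:
        x = random.uniform(-2, 4)
        if random.random() * (16 / 24) <= x * x / 24:
            return x

N = 100_000
emp = sum(sample_x() ** 2 <= 4 for _ in range(N)) / N
theory = 2 / 9
```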
Question #16: Let $X_1$ and $X_2$ be independent random variables, each having density function $f_{X_i}(x) = \frac{1}{x^2}$ for $x \in [1, \infty)$ and $f_{X_i}(x) = 0$ otherwise. a) Find the joint PDF of $U = X_1 X_2$ and $V = X_1$. b) Find the marginal probability density function of the random variable $U$.

a) Since $X_1$ and $X_2$ are independent, their joint density is simply the product of their marginal densities, so $f_{X_1 X_2}(x_1, x_2) = \left(\frac{1}{x_1^2}\right)\left(\frac{1}{x_2^2}\right) = \frac{1}{(x_1 x_2)^2}$ whenever $(x_1, x_2) \in [1, \infty) \times [1, \infty)$ and zero otherwise. We will use Theorem 6.3.6, which says that if $u = g(x_1, x_2)$ and $v = h(x_1, x_2)$ and we can solve uniquely for $x_1$ and $x_2$, then we have $f_{UV}(u, v) = f_{X_1 X_2}(x_1(u,v), x_2(u,v))\,|J|$, where $J = \det\begin{bmatrix} \partial x_1/\partial u & \partial x_1/\partial v \\ \partial x_2/\partial u & \partial x_2/\partial v \end{bmatrix}$. Here, we have that $v = x_1$ so $x_1 = v$, and $u = x_1 x_2$ so $x_2 = \frac{u}{x_1} = \frac{u}{v}$, so we can calculate the Jacobian as $J = \det\begin{bmatrix} 0 & 1 \\ \frac{1}{v} & -\frac{u}{v^2} \end{bmatrix} = -\frac{1}{v}$. We can therefore find the joint density as $f_{UV}(u, v) = f_{X_1 X_2}\left(v, \frac{u}{v}\right)\left|-\frac{1}{v}\right| = \frac{1}{\left(v \cdot \frac{u}{v}\right)^2}\left(\frac{1}{v}\right) = \frac{1}{u^2 v}$ if $1 < v < u < \infty$. We can find this region of integration by substituting into the constraints of $f_{X_1 X_2}(x_1, x_2)$. The first is that $1 < x_1 < \infty$, so $1 < v < \infty$, while the second is $1 < x_2 < \infty$, so $1 < \frac{u}{v} < \infty$, which reduces to $v < u < \infty$. Combining these gives the required bounds $1 < v < u < \infty$.

b) We have $f_U(u) = \int_1^u f_{UV}(u, v)\,dv = \int_1^u \frac{1}{u^2 v}\,dv = \left[\frac{1}{u^2}\ln(v)\right]_1^u = \frac{\ln(u)}{u^2}$ if $1 < u < \infty$.
Question #18: Let $X$ and $Y$ have joint density function $f_{XY}(x, y) = e^{-y}$ for $0 < x < y < \infty$ and $f_{XY}(x, y) = 0$ otherwise. a) Find the joint density function of $S = X + Y$ and $T = X$. b) Find the marginal density function of $S$. c) Find the marginal density function of $T$.

a) We have that $t = x$ so $x = t$, and $s = x + y$ so $y = s - x = s - t$, so we can calculate the Jacobian as $J = \det\begin{bmatrix} \partial x/\partial s & \partial x/\partial t \\ \partial y/\partial s & \partial y/\partial t \end{bmatrix} = \det\begin{bmatrix} 0 & 1 \\ 1 & -1 \end{bmatrix} = -1$. We can therefore find the joint density as $f_{ST}(s, t) = f_{XY}(t, s - t)\,|-1| = \left(e^{-(s-t)}\right)(1) = e^{t-s}$ if $0 < t < \frac{s}{2} < \infty$. We can find these bounds by substituting our solved transformations into the bounds $0 < x < y < \infty$, which gives $0 < t < s - t < \infty$, or $0 < 2t < s < \infty$, or $0 < t < \frac{s}{2} < \infty$.

b) We have that $f_S(s) = \int_0^{s/2} f_{ST}(s, t)\,dt = \int_0^{s/2} e^{t-s}\,dt = \int_0^{s/2} e^t e^{-s}\,dt = \left[e^t e^{-s}\right]_0^{s/2} = e^{-s}\left(e^{s/2} - 1\right)$ if $0 < s < \infty$.

c) We have that $f_T(t) = \int_{2t}^{\infty} f_{ST}(s, t)\,ds = \int_{2t}^{\infty} e^{t-s}\,ds = \int_{2t}^{\infty} e^t e^{-s}\,ds = \left[-e^t e^{-s}\right]_{2t}^{\infty} = -e^t\left(0 - e^{-2t}\right) = e^{-t}$ if $0 < t < \infty$. Note that we have omitted the steps where the infinite limit of integration is replaced by a parameter and a limit to infinity with that parameter is evaluated to show that it goes to zero.
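Integrating the marginal from part b) gives $F_S(s) = \int_0^s (e^{-t/2} - e^{-t})\,dt = (1 - e^{-s/2})^2$. A pair with joint density $e^{-y}$ on $0 < x < y$ can be simulated as $Y \sim \mathrm{GAM}(2,1)$ (the sum of two $\mathrm{EXP}(1)$ draws, since the marginal of $Y$ is $y e^{-y}$) with $X \mid Y$ uniform on $(0, Y)$; this sampling decomposition is our own construction, not from the text.

```python
import math, random

# Check of part b): F_S(s) = (1 - e^{-s/2})^2 for S = X + Y.
random.seed(7)
N = 100_000
s0 = 2.0
hits = 0
for _ in range(N):
    y = random.expovariate(1.0) + random.expovariate(1.0)  # Y ~ GAM(2,1)
    x = random.uniform(0.0, y)                             # X | Y ~ UNIF(0,Y)
    if x + y <= s0:
        hits += 1
emp = hits / N
theory = (1 - math.exp(-s0 / 2)) ** 2   # about 0.3996
```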
Question #21: Let $X$ and $Y$ have joint density $f_{XY}(x, y) = 2(x + y)$ for $0 < x < y < 1$ and $f_{XY}(x, y) = 0$ otherwise. a) Find the joint probability density function of $S = X$ and $T = XY$. b) Find the marginal probability density function of the random variable $T$.

a) We have that $s = x$ so $x = s$, and $t = xy$ so $y = \frac{t}{x} = \frac{t}{s}$, so we can calculate the Jacobian as $J = \det\begin{bmatrix} \partial x/\partial s & \partial x/\partial t \\ \partial y/\partial s & \partial y/\partial t \end{bmatrix} = \det\begin{bmatrix} 1 & 0 \\ -\frac{t}{s^2} & \frac{1}{s} \end{bmatrix} = \frac{1}{s}$. The joint probability density function is thus $f_{ST}(s, t) = f_{XY}\left(s, \frac{t}{s}\right)\left|\frac{1}{s}\right| = 2\left(s + \frac{t}{s}\right)\left(\frac{1}{s}\right) = 2\left(1 + \frac{t}{s^2}\right)$. The region is then found by substituting into $0 < x < y < 1$, so we have $0 < s < \frac{t}{s} < 1$, or $0 < s^2 < t < s < 1$. This can be visualized as the region between $t = s$ and $t = s^2$ on the $st$-plane.

b) The marginal probability density function is given by $f_T(t) = \int_t^{\sqrt{t}} f_{ST}(s, t)\,ds = \int_t^{\sqrt{t}} 2\left(1 + \frac{t}{s^2}\right)ds = \left[2s - \frac{2t}{s}\right]_t^{\sqrt{t}} = \left(2\sqrt{t} - 2\sqrt{t}\right) - (2t - 2) = 2 - 2t$ if $0 < t < 1$.
Question #25: Let $X_1, X_2, X_3, X_4$ be independent random variables. Assume that $X_2, X_3, X_4$ are each distributed Poisson with parameter 5 and the random variable $Y = X_1 + X_2 + X_3 + X_4$ is distributed Poisson with parameter 25. a) What is the distribution of the random variable $X_1$? b) What is the distribution of the random variable $W = X_1 + X_2$?

a) We first note that while $X_1, X_2, X_3, X_4$ are independent, they are not iid, since only $X_2, X_3, X_4 \sim \mathrm{POI}(5)$, with $X_1$ not being listed. Thus, we must use the unsimplified formula 6.4.4, which says that if $X_1, \ldots, X_n$ are independent random variables with moment generating functions $M_{X_i}(t)$ and $Y = X_1 + \cdots + X_n$, then the moment generating function of $Y$ is $M_Y(t) = [M_{X_1}(t)] \cdots [M_{X_n}(t)]$. We use the fact that if some $X \sim \mathrm{POI}(\lambda)$, then $M_X(t) = e^{\lambda(e^t - 1)}$, to solve this problem. If we let $Y = X_1 + X_2 + X_3 + X_4$, we have that $M_Y(t) = [M_{X_1}(t)][M_{X_2}(t)][M_{X_3}(t)][M_{X_4}(t)] = e^{25(e^t - 1)}$ since $Y \sim \mathrm{POI}(25)$. Substituting gives $[M_{X_1}(t)]\left[e^{5(e^t - 1)}\right]\left[e^{5(e^t - 1)}\right]\left[e^{5(e^t - 1)}\right] = e^{25(e^t - 1)}$, which reduces to $[M_{X_1}(t)]\left[e^{5(e^t - 1)}\right]^3 = e^{25(e^t - 1)}$, so that $[M_{X_1}(t)] = \frac{e^{25(e^t - 1)}}{\left[e^{5(e^t - 1)}\right]^3} = \frac{e^{25(e^t - 1)}}{e^{15(e^t - 1)}} = e^{10(e^t - 1)}$. This is the moment generating function of a Poisson(10) random variable, so $X_1 \sim \mathrm{POI}(10)$.

b) We have $M_W(t) = [M_{X_1}(t)][M_{X_2}(t)] = \left[e^{10(e^t - 1)}\right]\left[e^{5(e^t - 1)}\right] = e^{15(e^t - 1)}$, which is the moment generating function of a Poisson(15) random variable, so $W \sim \mathrm{POI}(15)$. We can see a general pattern here: if $X_i \sim \mathrm{POI}(\lambda)$ for $i = 1, \ldots, n$ are independent random variables and we define some $Y = \sum_{i=1}^k X_i$ for $k \le n$, then we have that $Y \sim \mathrm{POI}(k\lambda)$.
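The conclusion of part a) can be checked numerically: drawing $X_1 \sim \mathrm{POI}(10)$ and adding three independent $\mathrm{POI}(5)$ draws should produce a sample whose mean and variance are both near 25, as a $\mathrm{POI}(25)$ variable requires. The Poisson sampler below uses Knuth's product-of-uniforms method (a generic choice, not from the text).

```python
import math, random, statistics

def poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^{-lam}.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(8)
N = 50_000
sums = [poisson(10, rng) + poisson(5, rng) + poisson(5, rng) + poisson(5, rng)
        for _ in range(N)]
mean_y = statistics.fmean(sums)      # should be near 25
var_y = statistics.pvariance(sums)   # should also be near 25
```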
Question #17: Suppose that $X_1$ and $X_2$ denote a random sample of size 2 from a gamma distribution such that $X_i \sim \mathrm{GAM}(2, \frac{1}{2})$. Find the PDF of a) $Y = \sqrt{X_1 + X_2}$ and b) $W = \frac{X_1}{X_2}$.

a) We know that if some $X \sim \mathrm{GAM}(\theta, \kappa)$, then $f_X(x) = \frac{1}{\theta^\kappa \Gamma(\kappa)} x^{\kappa - 1} e^{-x/\theta}$ if $x > 0$. Since $X_1$ and $X_2$ are independent, their joint density function is given by $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \left(\frac{1}{\sqrt{2}\,\Gamma(\frac{1}{2})} x_1^{-1/2} e^{-x_1/2}\right)\left(\frac{1}{\sqrt{2}\,\Gamma(\frac{1}{2})} x_2^{-1/2} e^{-x_2/2}\right) = \frac{1}{2\pi}\frac{1}{\sqrt{x_1}\sqrt{x_2}}\,e^{-\frac{x_1 + x_2}{2}}$ if $x_1, x_2 > 0$, since $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$. We have the transformation $y = \sqrt{x_1 + x_2}$ and generate another transformation $w = x_1$, so we have $x_1 = w$ and $x_2 = y^2 - w$, which allows us to find $J = \det\begin{bmatrix} 1 & 0 \\ -1 & 2y \end{bmatrix} = 2y$. Then the joint density of $W$ and $Y$ is given by $f_{WY}(w, y) = f_{X_1 X_2}(w, y^2 - w)\,|J| = \frac{1}{\pi}\frac{y}{\sqrt{w}\sqrt{y^2 - w}}\,e^{-y^2/2}$ if $w > 0$ and $y^2 - w > 0$; these bounds can be combined to give $0 < w < y^2$. Finally, the density of $Y$ is $f_Y(y) = \int_{-\infty}^{\infty} f_{WY}(w, y)\,dw = \frac{1}{\pi}\int_0^{y^2} \frac{y}{\sqrt{w}\sqrt{y^2 - w}}\,e^{-y^2/2}\,dw = \cdots = y e^{-y^2/2}$ if $y > 0$ and zero otherwise. The evaluation of this integral has been omitted, but it can be computed with two substitutions.

b) We have $w = \frac{x_1}{x_2}$ and generate $z = x_1$, so that $x_1 = z$ and $x_2 = \frac{z}{w}$, which allows us to calculate $J = \det\begin{bmatrix} 1 & 0 \\ \frac{1}{w} & -\frac{z}{w^2} \end{bmatrix} = -\frac{z}{w^2}$. Then we have $f_{ZW}(z, w) = f_{X_1 X_2}\left(z, \frac{z}{w}\right)|J| = \frac{1}{2\pi}\frac{1}{\sqrt{z}\sqrt{z/w}}\,e^{-\frac{1}{2}\left(z + \frac{z}{w}\right)}\frac{z}{w^2} = \frac{1}{2\pi}\frac{1}{w^{3/2}}\,e^{-\frac{1}{2}\left(z + \frac{z}{w}\right)}$ if $z, w > 0$, so that the density of $W$ is given by $f_W(w) = \int_{-\infty}^{\infty} f_{ZW}(z, w)\,dz = \frac{1}{2\pi}\int_0^{\infty} \frac{1}{w^{3/2}}\,e^{-\frac{1}{2}\left(z + \frac{z}{w}\right)}\,dz = \cdots = \frac{1}{\pi\sqrt{w}(w + 1)}$ if $w > 0$. The evaluation of this integral has been omitted, but it can be computed by substitution.
Question #26: Let $X_1$ and $X_2$ be independent negative binomial random variables such that $X_1 \sim \mathrm{NB}(r_1, p)$ and $X_2 \sim \mathrm{NB}(r_2, p)$. a) Find the MGF and distribution of $Y = X_1 + X_2$.

• We use Theorem 6.4.3, which says that if the random variables $X_i$ are independent with respective MGFs $M_{X_i}(t)$, then the MGF of the random variable that is their sum is simply the product of their respective MGFs. Also, if some discrete random variable $X \sim \mathrm{NB}(r, p)$, then $M_X(t) = \left(\frac{pe^t}{1 - qe^t}\right)^r$, where $q = 1 - p$. Therefore, the moment generating function of $Y$ is $M_Y(t) = [M_{X_1}(t)][M_{X_2}(t)] = \left(\frac{pe^t}{1 - qe^t}\right)^{r_1}\left(\frac{pe^t}{1 - qe^t}\right)^{r_2} = \left(\frac{pe^t}{1 - qe^t}\right)^{r_1 + r_2}$. This then allows us to determine the distribution of $Y = X_1 + X_2$, namely that $Y \sim \mathrm{NB}(r_1 + r_2, p)$.
Question #27: Recall that $X \sim \mathrm{LOGN}(\mu, \sigma^2)$ if $\ln(X) \sim N(\mu, \sigma^2)$. Assume that $X_i \sim \mathrm{LOGN}(\mu_i, \sigma_i^2)$ for $i = 1, \ldots, n$ are independent. Find the distribution functions of the following random variables: a) $A = \prod_{i=1}^n X_i$, b) $B = \prod_{i=1}^n X_i^{a_i}$, c) $C = \frac{X_1}{X_2}$, and d) find $E(A) = E\left(\prod_{i=1}^n X_i\right)$.

a) We have that $\ln(A) = \ln\left(\prod_{i=1}^n X_i\right) = \sum_{i=1}^n \ln(X_i) = \ln(X_1) + \cdots + \ln(X_n)$, so the random variable $\ln(A)$ is the sum of $n$ normally distributed random variables. This implies that $\ln(A) \sim N\left(\sum_{i=1}^n \mu_i, \sum_{i=1}^n \sigma_i^2\right)$, which means $A \sim \mathrm{LOGN}\left(\sum_{i=1}^n \mu_i, \sum_{i=1}^n \sigma_i^2\right)$.

b) The random variable $\ln(B) = \ln\left(\prod_{i=1}^n X_i^{a_i}\right) = \sum_{i=1}^n \ln(X_i^{a_i}) = \sum_{i=1}^n a_i \ln(X_i) = a_1\ln(X_1) + \cdots + a_n\ln(X_n)$. We use that if some $X \sim N(\mu, \sigma^2)$, then $aX \sim N(a\mu, a^2\sigma^2)$, to conclude that $\ln(B) \sim N\left(\sum_{i=1}^n a_i\mu_i, \sum_{i=1}^n a_i^2\sigma_i^2\right)$, so $B \sim \mathrm{LOGN}\left(\sum_{i=1}^n a_i\mu_i, \sum_{i=1}^n a_i^2\sigma_i^2\right)$.

c) We have that $\ln(C) = \ln\left(\frac{X_1}{X_2}\right) = \ln(X_1) - \ln(X_2)$, so the random variable $\ln(C)$ is the difference of two normally distributed random variables. Thus, $\ln(C) \sim N(\mu_1 - \mu_2, \sigma_1^2 + \sigma_2^2)$, which implies that the distribution of $C = \frac{X_1}{X_2}$ is $C \sim \mathrm{LOGN}(\mu_1 - \mu_2, \sigma_1^2 + \sigma_2^2)$.

d) For $X \sim N(\mu, \sigma^2)$, we have $M_X(t) = E(e^{tX}) = e^{\mu t + \sigma^2 t^2/2}$, and for $Z \sim N(0,1)$, we have $M_Z(t) = E(e^{tZ}) = e^{t^2/2}$. Thus, the expected value is given by $E(X_i) = E(e^{\mu_i + \sigma_i Z}) = e^{\mu_i + \sigma_i^2/2}$. Since the random variables $X_i$ are all independent, we therefore have that $E(A) = E\left(\prod_{i=1}^n X_i\right) = \prod_{i=1}^n E(X_i) = \prod_{i=1}^n e^{\mu_i + \sigma_i^2/2} = \exp\left\{\sum_{i=1}^n \mu_i + \sum_{i=1}^n \sigma_i^2/2\right\}$.
Question #28: Let $X_1$ and $X_2$ be a random sample of size 2 from a continuous distribution with PDF of the form $f_X(x) = 2x$ if $0 < x < 1$ and zero otherwise. a) Find the marginal densities of $Y_1$ and $Y_2$, the smallest and largest order statistics, b) find the joint probability density function of $Y_1$ and $Y_2$, and c) find the density of the sample range $R = Y_2 - Y_1$.

a) Since $f_X(x) = 2x$ if $x \in (0,1)$ and zero otherwise, we know that $F_X(x) = 0$ if $x \in (-\infty, 0]$, $F_X(x) = x^2$ if $x \in (0,1)$, and $F_X(x) = 1$ if $x \in [1, \infty)$. Then from Theorem 6.5.2, we have that $g_1(y_1) = n f_X(y_1)[1 - F_X(y_1)]^{n-1}$, so we can calculate the smallest order statistic as $g_1(y_1) = 2[2y_1][1 - y_1^2]^{2-1} = 4y_1 - 4y_1^3$ whenever $y_1 \in (0,1)$. Similarly, $g_n(y_n) = n f_X(y_n)[F_X(y_n)]^{n-1}$, so we can calculate the largest order statistic as $g_2(y_2) = 2[2y_2][y_2^2]^{2-1} = 4y_2^3$ whenever $y_2 \in (0,1)$.

b) From Theorem 6.5.1, the joint probability density function of the order statistics is $g(y_1, \ldots, y_n) = n!\,f_X(y_1)\cdots f_X(y_n)$. In this question, we have that $g_{Y_1 Y_2}(y_1, y_2) = 2!\,f_X(y_1)f_X(y_2) = 2(2y_1)(2y_2) = 8y_1 y_2$ whenever we have $0 < y_1 < y_2 < 1$.

c) We first find the joint density of the smallest and largest order statistics in order to make a transformation to get the marginal density of the sample range. From the work we did above, we have that $g_{Y_1 Y_2}(y_1, y_2) = 8y_1 y_2$. We have the transformation $r = y_2 - y_1$ and generate $s = y_1$, so we have $y_1 = s$ and $y_2 = s + r$, which allows us to calculate $J = \det\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = 1$. The joint density of $S$ and $R$ is therefore $f_{SR}(s, r) = g_{Y_1 Y_2}(s, s + r)|J| = 8s(s + r)(1) = 8s^2 + 8sr$ if $0 < s < s + r < 1$, which can also be written as $0 < s < 1 - r$. The marginal density is thus $f_R(r) = \int_{-\infty}^{\infty} f_{SR}(s, r)\,ds = \int_0^{1-r}(8s^2 + 8sr)\,ds = \left[\frac{8}{3}s^3 + 4rs^2\right]_0^{1-r} = \frac{8}{3}(1 - r)^3 + 4r(1 - r)^2$ if $0 < r < 1$.
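Integrating $r f_R(r)$ over $(0,1)$ gives $E(R) = \frac{4}{15}$, which can be checked by simulating pairs directly. Each $X$ with $F(x) = x^2$ is drawn by inverse transform as $\sqrt{U}$ (the sampler is our own construction):

```python
import math, random, statistics

# Check of part c): with f_R(r) = (8/3)(1-r)^3 + 4r(1-r)^2, the mean of the
# sample range is E(R) = 4/15.
random.seed(9)
N = 100_000
ranges = []
for _ in range(N):
    a = math.sqrt(random.random())   # inverse transform of F(x) = x^2
    b = math.sqrt(random.random())
    ranges.append(abs(a - b))
mean_r = statistics.fmean(ranges)
theory = 4 / 15                      # about 0.2667
```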
Question #31: Consider a random sample of size $n$ from an exponential distribution such that $X_i \sim \mathrm{EXP}(1)$. Give the density of a) the smallest order statistic, denoted by $Y_1$, b) the largest order statistic, denoted by $Y_n$, and c) the sample range of the order statistics, $R = Y_n - Y_1$.

a) Since $f_{X_i}(x_i) = e^{-x_i}$ if $x_i \in (0, \infty)$ and zero otherwise, we have $F_{X_i}(x_i) = 1 - e^{-x_i}$ if $x_i \in (0, \infty)$ and zero otherwise. Then we have that $g_1(y_1) = n f_X(y_1)[1 - F_X(y_1)]^{n-1} = n e^{-y_1}\left[1 - (1 - e^{-y_1})\right]^{n-1} = n e^{-y_1}(e^{-y_1})^{n-1} = n e^{-ny_1}$ if $y_1 > 0$ and zero otherwise.

b) Similarly, we have $g_n(y_n) = n f_X(y_n)[F_X(y_n)]^{n-1} = n e^{-y_n}[1 - e^{-y_n}]^{n-1}$ for $y_n > 0$.

c) Since the exponential distribution has the memoryless property, the difference $R = Y_n - Y_1$ will not be conditional on the value of $Y_1$. This allows us to treat $Y_1 = 0$, so that $R = Y_n - Y_1 = Y_n - 0 = Y_n$. We then use the fact that the range of a set of $n$ order statistics from an exponential distribution has the same distribution as the largest order statistic from a sample of size $n - 1$. From above, we have that $g_n(y_n) = n e^{-y_n}[1 - e^{-y_n}]^{n-1}$, so substituting $n - 1$ gives $f_R(r) = (n - 1)[1 - e^{-r}]^{n-2}e^{-r}$.
Question #32: A system is composed of five independent components connected in series, one after the other. a) If the PDF of the time to failure of each component is $X_i \sim \mathrm{EXP}(1)$, give the PDF of the time to failure of the system; b) if the components are connected in parallel, so that all must fail before the system fails, give the PDF of the time to failure.

a) Since $X_i \sim \mathrm{EXP}(1)$, we know that the density is given by $f_{X_i}(x_i) = e^{-x_i}$ for $x_i > 0$, and $F_{X_i}(x_i) = 1 - e^{-x_i}$ for $x_i > 0$ and zero otherwise. The system in series fails whenever the earliest component fails, which happens at time $X_{(1)} = Y_1$, the first order statistic. Thus, the probability density function of the time to failure is given by $g_{Y_1}(y_1) = n f_X(y_1)[1 - F_X(y_1)]^{n-1} = 5e^{-y_1}[e^{-y_1}]^4 = 5e^{-5y_1}$ whenever $y_1 > 0$.

b) The system in parallel fails whenever the last component fails, which happens at time $X_{(5)} = Y_5$, the greatest order statistic. Thus, the density is $g_{Y_5}(y_5) = n f_X(y_5)[F_X(y_5)]^{n-1} = 5e^{-y_5}[1 - e^{-y_5}]^4$ whenever $y_5 > 0$.
Question #33: Consider a random sample of size $n$ from a geometric distribution such that $X_i \sim \mathrm{GEO}(p)$. Give the CDF of a) the minimum $Y_1$, b) the $k$th smallest $Y_k$, and c) the maximum $Y_n$.

a) If some $X_i \sim \mathrm{GEO}(p)$, then $f_{X_i}(x) = p(1-p)^{x-1}$ for $x = 1, 2, \ldots$ and $F_{X_i}(x) = 1 - (1-p)^x$. Consider the event $X_{(1)} > m$, which happens if and only if $X_i > m$ for all $i = 1, \ldots, n$. Therefore, we have $P(X_{(1)} > m) = P(X_i > m)^n = \left[1 - F_{X_i}(m)\right]^n = \left[(1-p)^m\right]^n = (1-p)^{nm}$, which implies that the CDF is given by $F_{X_{(1)}}(m) = P(X_{(1)} \le m) = 1 - (1-p)^{nm}$.

b) The event $X_{(k)} \le m$ happens when at least $k$ of the $X_i$ satisfy $X_i \le m$ and the remaining ones satisfy $X_i > m$. Summing over the possible counts $j = k, \ldots, n$, we have $P(X_{(k)} \le m) = \sum_{j=k}^{n}\binom{n}{j}\left[1 - (1-p)^m\right]^j\left[(1-p)^m\right]^{n-j}$, which is the distribution function of $Y_k$.

c) The event $X_{(n)} \le m$ happens when $X_i \le m$ for all $i = 1, \ldots, n$. Thus, $P(X_{(n)} \le m) = P(X_i \le m)^n = \left(\sum_{i=1}^{m} p(1-p)^{i-1}\right)^n = \left(p\,\frac{1 - (1-p)^m}{1 - (1-p)}\right)^n = \left[1 - (1-p)^m\right]^n$.
Chapter #7 – Limiting Distributions
Question #30: If $X \sim \mathrm{PAR}(\theta, \kappa)$, then $f_X(x) = \frac{\kappa}{\theta\left(1 + \frac{x}{\theta}\right)^{\kappa + 1}}$ if $x > 0$ and zero otherwise. Consider a random sample of size $n = 5$ from a Pareto distribution where $X \sim \mathrm{PAR}(1, 2)$; that is, suppose that $X_1, \ldots, X_5$ are drawn from the given Pareto distribution above. a) Find the joint PDF of the second and fourth order statistics, given by $Y_2 = X_{(2)}$ and $Y_4 = X_{(4)}$, and b) find the joint PDF of the first three order statistics, given by $Y_1 = X_{(1)}$, $Y_2 = X_{(2)}$, and $Y_3 = X_{(3)}$.

a) The CDF of the population is given by $F_X(x) = \int_0^x f_X(t)\,dt = \int_0^x \frac{2}{(1+t)^3}\,dt = 1 - \frac{1}{(1+x)^2}$ for $x > 0$, so that we can calculate the joint density using Corollary 6.5.1 as $g_{Y_2 Y_4}(y_2, y_4) = \frac{5!}{(2-1)!\,(4-2-1)!\,(5-4)!}\left[F_X(y_2)\right]^{2-1}f_X(y_2)\left[F_X(y_4) - F_X(y_2)\right]^{4-2-1}f_X(y_4)\left[1 - F_X(y_4)\right]^{5-4} = 5!\,f_X(y_2)f_X(y_4)\left[F_X(y_2)\right]\left[F_X(y_4) - F_X(y_2)\right]\left[1 - F_X(y_4)\right]$ if $0 < y_2 < y_4 < \infty$.

b) From Theorem 6.5.4, we have $g(y_1, \ldots, y_k) = \frac{n!}{(n-k)!}\left[1 - F_X(y_k)\right]^{n-k}\left[f_X(y_1)\cdots f_X(y_k)\right]$, so we may calculate that $g_{Y_1 Y_2 Y_3}(y_1, y_2, y_3) = 60\left[1 - F_X(y_3)\right]^2\left[f_X(y_1)f_X(y_2)f_X(y_3)\right]$ whenever $0 < y_1 < y_2 < y_3 < \infty$, since we have that $\frac{n!}{(n-k)!} = \frac{5!}{2!} = 60$.
Question #1: Consider a random sample of size $n$ from a distribution with cumulative distribution function $F_X(x) = 1 - \frac{1}{x}$ whenever $1 \le x < \infty$ and zero otherwise. That is, let the random variables $X_1, \ldots, X_n$ be iid from the distribution with CDF $F_X(x)$. a) Derive the CDF of the smallest order statistic, given by $X_{(1)} = X_{1:n}$; b) find the limiting distribution of $X_{1:n}$; that is, if $G_n(y)$ denotes the CDF from part a), find $\lim_{n\to\infty} G_n(y)$; c) find the limiting distribution of $X_{1:n}^n$; that is, find the CDF of $X_{(1)}^n$ and its limit as $n \to \infty$.

a) We can compute that $F_{X_{1:n}}(y) = P(X_{1:n} \le y) = 1 - P(X_{1:n} > y) = 1 - P(X_i > y)^n = 1 - \left[1 - \left(1 - \frac{1}{y}\right)\right]^n = 1 - \left(\frac{1}{y}\right)^n$ whenever $1 \le y < \infty$. We thus have that the CDF of the smallest order statistic is $F_{X_{1:n}}(y) = 1 - (1/y)^n$ if $y \ge 1$ and $0$ if $y < 1$. Finally, we note that $P(X_{1:n} > y) \equiv P(X_i > y)^n$, since the smallest order statistic is greater than some $y$ if and only if all $n$ of the independent samples are also greater than $y$. We can use this approach for any order statistic, including the largest, by changing the exponent.

b) We have that $\lim_{n\to\infty} G_n(y) = \lim_{n\to\infty}\left[1 - (1/y)^n\right] = 1$ if $y > 1$ and $\lim_{n\to\infty} G_n(y) = 0$ if $y \le 1$, so the limiting distribution $G(y)$ of $X_{1:n}$ is degenerate at $y = 1$. From Definition 7.2.2, this means that $G(y)$ is the cumulative distribution function of some discrete distribution $f(y)$ that assigns probability one at $y = 1$ and zero otherwise.
c) As before, we have $F_{X_{1:n}^n}(y) = P(X_{1:n}^n \le y) = P(X_{1:n} \le y^{1/n}) = 1 - P(X_{1:n} > y^{1/n}) = 1 - P(X_i > y^{1/n})^n = 1 - \left[1 - \left(1 - \frac{1}{y^{1/n}}\right)\right]^n = 1 - \frac{1}{y}$ whenever $y \ge 1$. Therefore, it is clear that the limiting distribution of this sequence of random variables is given by $\lim_{n\to\infty} F_{X_{1:n}^n}(y) = 1 - \frac{1}{y}$ if $y \ge 1$ and $0$ if $y < 1$, which equals $G(y)$, since there is no dependence on $n$.
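Part c) says $P(X_{1:n}^n \le y) = 1 - 1/y$ exactly for every $n$, so even a moderate $n$ should reproduce it. A minimal sketch (the inverse-transform sampler $X = 1/(1-U)$ for $F(x) = 1 - 1/x$ and the choices $n = 50$, $y = 2$ are ours):

```python
import random

# Check of part c): P(X_{1:n}**n <= y) = 1 - 1/y for any n.
random.seed(10)
n, N = 50, 50_000
y0 = 2.0
hits = 0
for _ in range(N):
    m = min(1 / (1 - random.random()) for _ in range(n))  # X_{1:n}
    if m ** n <= y0:
        hits += 1
emp = hits / N
theory = 1 - 1 / y0   # 0.5
```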
Question #2: Consider a random sample of size $n$ from a distribution with CDF given by $F(x) = \frac{1}{1 + e^{-x}}$ for all $x \in \mathbb{R}$. Find the limiting distribution of a) $X_{n:n}$ and b) $X_{n:n} - \ln(n)$.

a) We have $F_{X_{n:n}}(y) = P(X_{n:n} \le y) = P(X_i \le y)^n = \left(\frac{1}{1 + e^{-y}}\right)^n$ for all $y \in \mathbb{R}$. Since $\lim_{n\to\infty}\left(\frac{1}{1 + e^{-y}}\right)^n = 0$, we conclude that $X_{n:n}$ does not have a limiting distribution.

b) We calculate that $F_{X_{n:n} - \ln(n)}(y) = P(X_{n:n} - \ln(n) \le y) = P(X_{n:n} \le y + \ln(n)) = F_{X_{n:n}}(y + \ln(n)) = \left(\frac{1}{1 + e^{-(y + \ln n)}}\right)^n = \left(\frac{1}{1 + e^{-y}e^{-\ln n}}\right)^n = \left(\frac{1}{1 + \frac{e^{-y}}{n}}\right)^n$. Evaluating this limit gives $\lim_{n\to\infty}\left[F_{X_{n:n}}(y + \ln(n))\right] = \lim_{n\to\infty}\left[\left(\frac{1}{1 + \frac{e^{-y}}{n}}\right)^n\right] = e^{-e^{-y}}$ for all $y \in \mathbb{R}$.
Question #3: Consider a random sample of size $n$ from the distribution $F(x) = 1 - x^{-2}$ if $x > 1$ and zero otherwise. Find the limiting distribution of a) $X_{1:n}$, b) $X_{n:n}$, and c) $\frac{1}{\sqrt{n}}X_{n:n}$.

a) We can compute that $F_{X_{1:n}}(y) = P(X_{1:n} \le y) = 1 - P(X_{1:n} > y) = 1 - P(X_i > y)^n = 1 - \left[1 - \left(1 - \frac{1}{y^2}\right)\right]^n = 1 - \frac{1}{y^{2n}}$ if $y > 1$. Thus, $F_{X_{1:n}}(y) = 1 - y^{-2n}$ if $y > 1$ and $0$ if $y \le 1$, so the limiting distribution is $\lim_{n\to\infty} F_{X_{1:n}}(y) = 1$ if $y > 1$ and $0$ if $y \le 1$. We therefore say that the limiting distribution is degenerate at $y = 1$.

b) We have $F_{X_{n:n}}(y) = P(X_{n:n} \le y) = P(X_i \le y)^n = \left(1 - \frac{1}{y^2}\right)^n$ whenever $y > 1$ and zero otherwise. Since $\lim_{n\to\infty}\left(1 - \frac{1}{y^2}\right)^n = 0$ for every fixed $y > 1$, the limit is zero everywhere, and we would therefore conclude that there is no limiting distribution for $X_{n:n}$.

c) We compute that $F_{\frac{1}{\sqrt{n}}X_{n:n}}(y) = P\left(\frac{1}{\sqrt{n}}X_{n:n} \le y\right) = P(X_{n:n} \le \sqrt{n}\,y) = F_{X_{n:n}}(\sqrt{n}\,y) = \left(1 - \frac{1}{(\sqrt{n}\,y)^2}\right)^n = \left(1 - \frac{1}{ny^2}\right)^n$ whenever $\sqrt{n}\,y > 1$, or $y > \frac{1}{\sqrt{n}}$. We can therefore compute the limit as $\lim_{n\to\infty} F_{\frac{1}{\sqrt{n}}X_{n:n}}(y) = e^{-1/y^2}$ if $y > 0$ and $0$ if $y \le 0$.
Question #5: Suppose that $Z_i \sim N(0,1)$ and that the $Z_i$ are all independent. Use moment generating functions to find the limiting distribution of $A_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n\left(Z_i + \frac{1}{n}\right)$ as $n \to \infty$.

• We have $A_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n\left(Z_i + \frac{1}{n}\right) = \frac{\left(\sum_{i=1}^n Z_i\right) + n\left(\frac{1}{n}\right)}{\sqrt{n}} = \frac{\sum_{i=1}^n Z_i}{\sqrt{n}} + \frac{1}{\sqrt{n}}$, so the MGF is $M_{A_n}(t) = \left[M_{Z/\sqrt{n}}(t)\right]^n\left[E\left(e^{\frac{t}{\sqrt{n}}}\right)\right]$, since $A_n$ is the sum of two parts, so we can multiply their respective MGFs. The MGF of a standard normal random variable with $\mu = 0$ and $\sigma^2 = 1$ is given by $M_Z(t) = e^{t^2/2}$, which allows us to calculate $M_{Z/\sqrt{n}}(t) = e^{\left(\frac{t}{\sqrt{n}}\right)^2/2} = e^{\frac{t^2}{2n}}$. Also, we have that $E\left(e^{\frac{t}{\sqrt{n}}}\right) = e^{\frac{t}{\sqrt{n}}}$, since this term is a constant, so combining these gives $M_{A_n}(t) = \left[e^{\frac{t^2}{2n}}\right]^n\left[e^{\frac{t}{\sqrt{n}}}\right] = \left[e^{\frac{t^2}{2}}\right]\left[e^{\frac{t}{\sqrt{n}}}\right]$. Then we can use Theorem 7.3.1 to calculate $\lim_{n\to\infty} M_{A_n}(t) = \lim_{n\to\infty}\left[e^{\frac{t^2}{2}}\right]\left[e^{\frac{t}{\sqrt{n}}}\right] = e^{\frac{t^2}{2}} = M(t)$, which we know is the MGF of a standard normal, so the limiting distribution is $A \sim N(0,1)$. Note that this is also a direct consequence of the Central Limit Theorem.
Question #9: Let $X_1, X_2, \ldots, X_{100}$ be a random sample of size $n = 100$ from an exponential distribution such that each $X_i \sim \mathrm{EXP}(1)$, and let $Y = X_1 + X_2 + \cdots + X_{100}$. a) Give a normal approximation for the probability $P(Y > 110)$, and b) if $\bar{X} = \frac{Y}{100}$ is the sample mean, give a normal approximation to the probability $P(1.1 < \bar{X} < 1.2)$.

a) Since each $X_i \sim \mathrm{EXP}(1)$, we know that $E(X_i) = \mu = 1$ while $\mathrm{Var}(X_i) = \sigma^2 = 1$. Due to the independence of the $X_i$s, we have that $E(Y) = E\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} E(X_i) = 100$ and $\mathrm{Var}(Y) = \mathrm{Var}\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} \mathrm{Var}(X_i) = 100$, so $sd(Y) = \sqrt{100} = 10$. We can therefore calculate that $P(Y > 110) = 1 - P(Y \le 110) = 1 - P\left(\frac{\sum_{i=1}^{100} X_i - 100}{10} \le \frac{110 - 100}{10}\right) \approx 1 - P(Z \le 1) = 1 - \Phi(1) = 1 - 0.8413 = 0.1587$, where $Z$ denotes the standard normal distribution with $\mu = 0$ and $\sigma^2 = 1$.

b) We know that $Z_n = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \to_d N(0,1)$ by the Central Limit Theorem. We then have that $E(\bar{X}) = E\left(\frac{Y}{100}\right) = \frac{1}{100}E(Y) = 1$ and $\mathrm{Var}(\bar{X}) = \mathrm{Var}\left(\frac{Y}{100}\right) = \frac{1}{10{,}000}\mathrm{Var}(Y) = \frac{1}{100}$, so $sd(\bar{X}) = \frac{1}{10}$, which allows us to find $P(1.1 < \bar{X} < 1.2) = P\left(\frac{1.1 - 1}{1/10} < \frac{\bar{X} - 1}{1/10} < \frac{1.2 - 1}{1/10}\right) \approx P(1 < Z_n < 2) = \Phi(2) - \Phi(1) = 0.9772 - 0.8413 = 0.1359$. Here, we have used the fact that $\mu = 1$ and $\sigma = 1$, which come from the population distribution $X_i \sim \mathrm{EXP}(1)$.
Question #11: Let $X_i \sim \mathrm{UNIF}(0,1)$, where $X_1, X_2, \ldots, X_{20}$ are all independent. Find a normal approximation for the probability $P\left(\sum_{i=1}^{20} X_i \le 12\right)$.

• Since each $X_i \sim \mathrm{UNIF}(0,1)$, we know that $E(X_i) = 1/2$ while $\mathrm{Var}(X_i) = 1/12$. Due to the independence of the $X_i$s, we have that $E\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20} E(X_i) = 10$ and $\mathrm{Var}\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20} \mathrm{Var}(X_i) = 5/3$, so that $sd\left(\sum_{i=1}^{20} X_i\right) = \sqrt{5/3}$. This allows us to find $P\left(\sum_{i=1}^{20} X_i \le 12\right) = P\left(\frac{\sum_{i=1}^{20} X_i - 10}{\sqrt{5/3}} \le \frac{12 - 10}{\sqrt{5/3}}\right) \approx P(Z \le 1.55) = \Phi(1.55) = 0.9394$.
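For a sum of 20 uniforms the normal approximation is already quite accurate, which a direct simulation of the exact probability confirms (sample size and seed are arbitrary choices):

```python
import random

# Check of the CLT approximation: P(sum of 20 UNIF(0,1) draws <= 12)
# should be close to Phi(1.55) = 0.9394.
random.seed(11)
N = 100_000
hits = sum(sum(random.random() for _ in range(20)) <= 12 for _ in range(N))
emp = hits / N
approx = 0.9394
```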
Chapter #8 – Statistics and Sampling Distributions
Question #1: Let $X$ denote the weight in pounds of a single bag of feed, where $X \sim N(101, 4)$. What is the probability that 20 bags will weigh at least 2,000 pounds?

• Let $Y = \sum_{i=1}^{20} X_i$, where $X_i \sim N(101, 4)$. We have that $E(Y) = E\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20} E(X_i) = 20(101) = 2{,}020$ and $\mathrm{Var}(Y) = \mathrm{Var}\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20} \mathrm{Var}(X_i) = 20(4) = 80$, such that $sd(Y) = \sqrt{80} = 4\sqrt{5}$. We can thus calculate the probability $P(Y \ge 2{,}000) = P\left(\frac{\sum_{i=1}^{20} X_i - 2{,}020}{4\sqrt{5}} \ge \frac{2{,}000 - 2{,}020}{4\sqrt{5}}\right) = P\left(Z \ge -\frac{20}{4\sqrt{5}}\right) = P(Z \ge -2.24) = 1 - \Phi(-2.24) = 0.987$, where $Z \sim N(0,1)$.
Question #2: Let $S$ denote the diameter of a shaft and $B$ the diameter of a bearing, where both $S$ and $B$ are independent and $S \sim N(1, 0.0004)$ and $B \sim N(1.01, 0.0009)$. a) If a shaft and bearing are selected at random, what is the probability that the shaft diameter will exceed the bearing diameter? b) Now assume equal variances ($\sigma_S^2 = \sigma_B^2 = \sigma^2$), such that we have $S \sim N(1, \sigma^2)$ and $B \sim N(1.01, \sigma^2)$. Find the value of $\sigma$ that will yield a probability of noninterference of 0.95 (which means the shaft diameter exceeds the bearing diameter).

a) Define $Y = S - B$, since we wish to find $P(S > B) = P(S - B > 0) = P(Y > 0)$. We have that $E(Y) = E(S - B) = E(S) - E(B) = 1 - 1.01 = -0.01$ and $\mathrm{Var}(Y) = \mathrm{Var}(S - B) = \mathrm{Var}(S) + \mathrm{Var}(B) = 0.0004 + 0.0009 = 0.0013$, such that $sd(Y) = \sqrt{0.0013} = 0.036$. Thus, we have $P(S > B) = P(Y > 0) = P\left(\frac{Y - E(Y)}{sd(Y)} > \frac{0 - E(Y)}{sd(Y)}\right) = P\left(\frac{Y + 0.01}{0.036} > \frac{0.01}{0.036}\right) \approx P(Z > 0.28) = 1 - \Phi(0.28) = 0.39$.

b) For $Y = S - B$, we have that $E(Y) = -0.01$ but $\mathrm{Var}(Y) = 2\sigma^2$, so $sd(Y) = \sqrt{2}\,\sigma$. We wish to find $\sigma$ so that $P(Y > 0) = 0.95$, i.e., $1 - P\left(Z \le \frac{0.01}{\sqrt{2}\,\sigma}\right) = 0.95$, or $\Phi\left(\frac{0.01}{\sqrt{2}\,\sigma}\right) = 0.05$. Since only the critical value $z_\alpha = -1.645$ ensures that $\Phi(-1.645) = 0.05$, we must solve $\frac{0.01}{\sqrt{2}\,\sigma} = -1.645 \rightarrow \sigma = -0.004$. But since we must have $\sigma \ge 0$, no such $\sigma$ exists.
Question #3: Let $X_1, \ldots, X_n$ be a random sample of size $n$, iid such that $X_i \sim N(\mu, \sigma^2)$, and define $Y_1 = \sum_{i=1}^n X_i$ and $Y_2 = \sum_{i=1}^n X_i^2$. a) Find a statistic that is a function of $Y_1$ and $Y_2$ and unbiased for the parameter $\theta = 2\mu - 5\sigma^2$. b) Find a statistic that is unbiased for $\gamma = \sigma^2 + \mu^2$. c) If $c$ is a constant and $Z_i = 1$ if $X_i \le c$ and zero otherwise, find a statistic that is a function of $Z_1, \ldots, Z_n$ and is unbiased for $F_X(c) = \Phi\left(\frac{c - \mu}{\sigma}\right) = \int_{-\infty}^{(c-\mu)/\sigma} \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}\,dt$.

a) We first note that $\mu = E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}E(Y_1)$, and then that $\sigma^2 = E(S^2) = E\left(\frac{1}{n-1}\sum_{i=1}^n\left(X_i - \bar{X}\right)^2\right) = E\left(\frac{1}{n-1}\left[\sum_{i=1}^n X_i^2 - n\bar{X}^2\right]\right) = \frac{1}{n-1}E\left[Y_2 - n\left(\frac{Y_1}{n}\right)^2\right] = \frac{1}{n-1}E\left[Y_2 - \frac{Y_1^2}{n}\right]$. We thus have $\theta = 2\mu - 5\sigma^2 = E\left[\frac{2Y_1}{n} - \frac{5}{n-1}\left(Y_2 - \frac{Y_1^2}{n}\right)\right]$, so the statistic $\hat{\theta} = \frac{2Y_1}{n} - \frac{5}{n-1}\left(Y_2 - \frac{Y_1^2}{n}\right)$ is an unbiased estimator of $\theta$ since $E(\hat{\theta}) = \cdots = \theta$.
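The unbiasedness of the statistic in part a) can be checked numerically by averaging it over many simulated samples; the parameter values $\mu = 3$, $\sigma = 2$, and $n = 10$ below are arbitrary choices.

```python
import random, statistics

# Check of part a): the statistic 2*Y1/n - 5/(n-1)*(Y2 - Y1**2/n) should
# average to theta = 2*mu - 5*sigma**2 over repeated samples.
random.seed(12)
mu, sigma, n = 3.0, 2.0, 10
reps = 20_000
vals = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    y1 = sum(xs)
    y2 = sum(x * x for x in xs)
    vals.append(2 * y1 / n - 5 / (n - 1) * (y2 - y1 * y1 / n))
est_mean = statistics.fmean(vals)
theory = 2 * mu - 5 * sigma ** 2   # -14
```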
b) Since $E(X_i^2) = \mathrm{Var}(X_i) + [E(X_i)]^2 = \sigma^2 + \mu^2 = \gamma$ for each $i$, we have $E\left(\frac{Y_2}{n}\right) = \frac{1}{n}E\left(\sum_{i=1}^n X_i^2\right) = \frac{1}{n}\sum_{i=1}^n E(X_i^2) = \frac{1}{n}\cdot n(\sigma^2 + \mu^2) = \sigma^2 + \mu^2$. Therefore $\hat{\gamma} = \frac{Y_2}{n}$ is an unbiased estimator of $\gamma$ since $E(\hat{\gamma}) = \gamma$.
ππ −π
c) We have π(ππ = 1) = π(ππ ≤ π) = π (
and
π
≤
π−π
π
) = π (π ≤
π−π
π
π−π
πΈ(ππ ) = 1 β π(ππ = 1) + 0 β π(ππ = 0) = π(ππ = 1) = Φ (
π−π
) = Φ(
π
π
) = πΉπ (π)
) = πΉπ (π).
Then,
1
1
1
1
π−π
π−π
πΈ(πΜ
) = πΈ (π ∑ππ=1 ππ ) = π πΈ(∑ππ=1 ππ ) = π ∑ππ=1 πΈ(ππ ) = π πΦ ( π ) = Φ ( π ) = πΉπ (π),
which means that πΜ
is an unbiased estimator of πΉπ (π) = Φ((π − π)/π).
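As a quick numerical sanity check (not part of the original derivation), the unbiasedness of the statistic in part a) can be illustrated by simulation; the parameter values, sample size, and replication count below are arbitrary choices:

```python
import random

random.seed(1)
mu, sigma, n, reps = 2.0, 1.5, 10, 20000
theta = 2 * mu - 5 * sigma**2  # target parameter: 2*mu - 5*sigma^2 = -7.25

estimates = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    y1 = sum(xs)                 # Y1 = sum of X_i
    y2 = sum(x * x for x in xs)  # Y2 = sum of X_i^2
    t = 2 * y1 / n - 5 / (n - 1) * (y2 - y1**2 / n)
    estimates.append(t)

avg = sum(estimates) / reps
print(theta, round(avg, 3))  # the Monte Carlo average should be close to theta
```

With 20,000 replications the average of the simulated statistics lands within a few hundredths of $\theta = -7.25$, consistent with $E(T) = \theta$.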
Question #4: Assume that $X_1$ and $X_2$ are independent normal random variables such that each $X_i \sim N(\mu, \sigma^2)$, and define $Y_1 = X_1 + X_2$ and $Y_2 = X_1 - X_2$. Show that the random variables $Y_1$ and $Y_2$ are independent and normally distributed.

• Since $X_1$ and $X_2$ are independent normal random variables, we know that their joint density function is $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \left[\frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x_1-\mu)^2}{2\sigma^2}}\right]\left[\frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x_2-\mu)^2}{2\sigma^2}}\right] = \frac{1}{2\pi\sigma^2} e^{-\frac{1}{2\sigma^2}\left[(x_1-\mu)^2 + (x_2-\mu)^2\right]}$. We have the transformation $y_1 = x_1 + x_2$ and $y_2 = x_1 - x_2$, which can be solved to obtain $x_1 = \frac{y_1 + y_2}{2}$ and $x_2 = \frac{y_1 - y_2}{2}$. This allows us to calculate the Jacobian $J = \det\begin{bmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{bmatrix} = -\frac12$, so we can compute the joint density $f_{Y_1 Y_2}(y_1, y_2) = f_{X_1 X_2}\left(\frac{y_1+y_2}{2}, \frac{y_1-y_2}{2}\right)|J| = \frac12\left[\frac{1}{2\pi\sigma^2} e^{-\frac{1}{2\sigma^2}\left(\left(\frac{y_1+y_2}{2}-\mu\right)^2 + \left(\frac{y_1-y_2}{2}-\mu\right)^2\right)}\right]$. After simplifying this expression, we have $f_{Y_1 Y_2}(y_1, y_2) = \frac{1}{4\pi\sigma^2} e^{-\frac{1}{4\sigma^2}[y_1 - 2\mu]^2} e^{-\frac{1}{4\sigma^2}[y_2]^2}$. Since the joint density separates into a function of $y_1$ alone times a function of $y_2$ alone, this shows that $Y_1$ and $Y_2$ are independent and normally distributed. Moreover, we see that $Y_1 \sim N(2\mu, 2\sigma^2)$ and $Y_2 \sim N(0, 2\sigma^2)$.
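The factorization can also be illustrated numerically; in the sketch below (with arbitrary $\mu$ and $\sigma$), the simulated $Y_1$ and $Y_2$ should show means near $2\mu$ and $0$, variances near $2\sigma^2$, and a sample correlation near zero:

```python
import random

random.seed(2)
mu, sigma, reps = 1.0, 2.0, 50000
y1s, y2s = [], []
for _ in range(reps):
    x1, x2 = random.gauss(mu, sigma), random.gauss(mu, sigma)
    y1s.append(x1 + x2)  # Y1 = X1 + X2 ~ N(2*mu, 2*sigma^2)
    y2s.append(x1 - x2)  # Y2 = X1 - X2 ~ N(0, 2*sigma^2)

m1, m2 = sum(y1s) / reps, sum(y2s) / reps
v1 = sum((y - m1) ** 2 for y in y1s) / reps
v2 = sum((y - m2) ** 2 for y in y2s) / reps
cov = sum((a - m1) * (b - m2) for a, b in zip(y1s, y2s)) / reps
corr = cov / (v1 * v2) ** 0.5
print(round(m1, 2), round(v1, 2), round(m2, 2), round(v2, 2), round(corr, 3))
```

Zero correlation alone would not prove independence in general, but here it is consistent with the exact factorization derived above.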
Question #12: The distance in feet by which a parachutist misses a target is $D = \sqrt{Z_1^2 + Z_2^2}$, where $Z_1$ and $Z_2$ are independent with each $Z_i \sim N(0, 25)$. Find the probability $P(D \le 12.25)$.

• We wish to find $P(D \le 12.25) = P\left(\sqrt{Z_1^2 + Z_2^2} \le 12.25\right) = P\left[Z_1^2 + Z_2^2 \le (12.25)^2\right] = P\left[(Z_1 - 0)^2 + (Z_2 - 0)^2 \le (12.25)^2\right] = P\left[(Z_1 - \mu)^2 + (Z_2 - \mu)^2 \le (12.25)^2\right] = P\left[\frac{(Z_1-\mu)^2}{\sigma^2} + \frac{(Z_2-\mu)^2}{\sigma^2} \le \frac{(12.25)^2}{\sigma^2}\right] = P\left[\sum_{i=1}^2 \frac{(Z_i-\mu)^2}{\sigma^2} \le \frac{(12.25)^2}{\sigma^2}\right]$. Since $\mu = 0$ and $\sigma^2 = 25$, we have $P\left[\sum_{i=1}^2 \frac{(Z_i-0)^2}{25} \le \frac{(12.25)^2}{25}\right] \approx P\left[\chi^2(2) \le 6\right] = 0.95$. Note that we have used Corollary 8.3.4 to transform the question into one using the chi-square distribution, since $\sum_{i=1}^n \left(\frac{X_i - \mu}{\sigma}\right)^2 \sim \chi^2(n)$. This is because $X_i \sim N(\mu, \sigma^2)$ implies that $\frac{X_i - \mu}{\sigma} \sim N(0,1)$, so $\left(\frac{X_i-\mu}{\sigma}\right)^2 \sim \chi^2(1)$, and the sum of $n$ independent chi-square distributed random variables is distributed $\chi^2(n)$.
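As a side computation (not in the original solution), a $\chi^2(2)$ variable is exponential with mean 2, so the probability here has the closed form $1 - e^{-w/2}$ and the table value 0.95 can be checked exactly:

```python
import math

w = 12.25 ** 2 / 25          # (12.25)^2 / 25 = 6.0025
prob = 1 - math.exp(-w / 2)  # CDF of a chi-square with 2 df evaluated at w
print(round(w, 4), round(prob, 4))
```

The exact value is about 0.9503, so rounding the cutoff to 6 and reading 0.95 off the table is harmless.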
Chapter #8 – Statistics and Sampling Distributions

Question #8: Suppose that $X$ and $Y$ are independent and distributed $X \sim \chi^2(m)$ and $Y \sim \chi^2(n)$. Is the random variable $D = Y - X$ distributed chi-square if we have $n > m$?

• No. The random variable $D = Y - X$ can clearly take on negative values, whereas a random variable following the chi-square distribution must be nonnegative.
Question #9: Suppose that $X \sim \chi^2(m)$, $Y = X + Z \sim \chi^2(m+n)$, and that $X$ and $Z$ are independent random variables. Use moment generating functions to show that $Y - X \sim \chi^2(n)$.

• We know that if some $A \sim \chi^2(v)$, then its MGF is given by $M_A(t) = (1-2t)^{-v/2}$. We thus have $M_X(t) = (1-2t)^{-m/2}$ and $M_Y(t) = M_{X+Z}(t) = (1-2t)^{-(m+n)/2}$. Since $X$ and $Z$ are independent, we know that $M_{X+Z}(t) = M_X(t)M_Z(t)$, which implies that $M_{Y-X}(t) = M_{(X+Z)-X}(t) = M_Z(t) = \frac{M_{X+Z}(t)}{M_X(t)} = \frac{(1-2t)^{-(m+n)/2}}{(1-2t)^{-m/2}} = (1-2t)^{-n/2}$. Thus, we have that $Z = Y - X$ is distributed chi-square with $n$ degrees of freedom.
Question #14: If $T \sim t(\nu)$, find the distribution of the random variable $T^2$.

• We know that if $Z \sim N(0,1)$ and $V \sim \chi^2(\nu)$ are independent random variables, then the distribution of $T = \frac{Z}{\sqrt{V/\nu}}$ is Student's t distribution. But then we can square this to produce $T^2 = \frac{Z^2}{V/\nu} = \frac{Z^2/1}{V/\nu}$, which makes it clear that $T^2 \sim F(1, \nu)$. The reason for this is that we know if some $Z \sim N(0,1)$, then $Z^2 \sim \chi^2(1)$. Moreover, we are already given that $V \sim \chi^2(\nu)$. Combine these results with the fact that if some $V_1 \sim \chi^2(v_1)$ and $V_2 \sim \chi^2(v_2)$ are independent, then the random variable $F = \frac{V_1/v_1}{V_2/v_2} \sim F(v_1, v_2)$. Therefore, $T^2$ follows the F distribution with 1 and $\nu$ degrees of freedom whenever $T \sim t(\nu)$.
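A small simulation can illustrate the identity (a sketch; the degrees of freedom, cutoffs, and replication count are arbitrary): drawing $T = Z/\sqrt{V/\nu}$ and, independently, an $F(1,\nu) = \frac{Z^2/1}{V/\nu}$ variate, the empirical CDFs of $T^2$ and of the F variate should agree.

```python
import random

random.seed(3)
nu, reps = 4, 40000

def chi2(df):
    # sum of df independent squared standard normals
    return sum(random.gauss(0, 1) ** 2 for _ in range(df))

t_sq, f_var = [], []
for _ in range(reps):
    t = random.gauss(0, 1) / (chi2(nu) / nu) ** 0.5  # T ~ t(nu)
    t_sq.append(t * t)
    f_var.append(random.gauss(0, 1) ** 2 / (chi2(nu) / nu))  # F(1, nu)

cuts = (0.5, 1.0, 2.0, 4.0)
probs_t = [sum(v <= c for v in t_sq) / reps for c in cuts]
probs_f = [sum(v <= c for v in f_var) / reps for c in cuts]
print([round(p, 3) for p in probs_t])
print([round(p, 3) for p in probs_f])
```

The two printed rows of empirical probabilities match to within Monte Carlo error at every cutoff.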
Question #15: Suppose that $X_i \sim N(\mu, \sigma^2)$ for $i = 1, \dots, n$ and $Z_i \sim N(0,1)$ for $i = 1, \dots, n$, and that all variables are independent. Find the distribution of the following random variables.

a) $X_1 - X_2 \sim N(\mu - \mu, \sigma^2 + \sigma^2) \equiv N(0, 2\sigma^2)$

b) $X_2 + 2X_3 \sim N(\mu + 2\mu, \sigma^2 + 4\sigma^2) \equiv N(3\mu, 5\sigma^2)$

c) $Z_1^2 \sim \chi^2(1)$ since the square of a standard normal random variable is chi-square.

d) $\frac{X_1 - X_2}{\sigma S_Z \sqrt{2}} \sim t(n-1)$ since $X_1 - X_2 \sim N(0, 2\sigma^2)$ implies that $\frac{X_1 - X_2}{\sigma\sqrt{2}} \sim N(0,1)$, and dividing this by the sample standard deviation $S_Z$ of the $Z$ sample makes it clear that the ratio is $\sim t(n-1)$.

e) $\frac{\sqrt{n}(\bar X - \mu)}{\sigma S_Z} \sim t(n-1)$ since $Z = \frac{\sqrt{n}(\bar X - \mu)}{\sigma} \sim N(0,1)$, $V = \frac{(n-1)S_Z^2}{1^2} \sim \chi^2(n-1)$, and we can write $\frac{\sqrt{n}(\bar X - \mu)}{\sigma S_Z} = \frac{Z}{S_Z} = \frac{Z}{\sqrt{V/(n-1)}} \sim t(n-1)$ by the definition of the t distribution (see above).

f) $Z_1^2 + Z_2^2 \sim \chi^2(1+1) \equiv \chi^2(2)$ since we can simply add the parameters for a sum of independent chi-square random variables.

g) $Z_1^2 - Z_2^2$ → the distribution is unknown.

h) $\frac{Z_1}{\sqrt{Z_2^2}} \sim t(1)$ since $V = Z_2^2 \sim \chi^2(1)$ and we can write $\frac{Z_1}{\sqrt{Z_2^2}} = \frac{Z_1}{\sqrt{Z_2^2/1}} = \frac{Z_1}{\sqrt{V/1}} \sim t(1)$.

i) $\frac{Z_1^2}{Z_2^2} \sim F(1,1)$ since $V_1 = Z_1^2 \sim \chi^2(1)$, $V_2 = Z_2^2 \sim \chi^2(1)$, and we have $\frac{Z_1^2}{Z_2^2} = \frac{V_1/1}{V_2/1} \sim F(1,1)$.

j) $\frac{Z_1}{Z_2} \sim CAU(1,0)$ since we can generate the joint transformation $u = z_1/z_2$ and $v = z_2$, calculate the joint density $f_{UV}(u,v)$, and integrate out $v$ to find $f_U(u) = \frac{1}{\pi(u^2+1)}$.

k) $\frac{\bar X}{\bar Z}$ → the distribution is unknown.

l) $\frac{n(\bar X - \mu)}{\sigma\sqrt{\sum_{i=1}^n Z_i^2}} \sim t(n)$ since $Z = \frac{\sqrt{n}(\bar X - \mu)}{\sigma} \sim N(0,1)$ and $V = \sum_{i=1}^n Z_i^2 \sim \chi^2(n)$, and we can write the expression $\frac{n(\bar X - \mu)}{\sigma\sqrt{\sum_{i=1}^n Z_i^2}} = \frac{\sqrt{n}(\bar X - \mu)/\sigma}{\sqrt{\sum_{i=1}^n Z_i^2\,/\,n}} = \frac{Z}{\sqrt{V/n}} \sim t(n)$ by the definition of the distribution.

m) $\sum_{i=1}^n \frac{(X_i - \mu)^2}{\sigma^2} + \sum_{i=1}^n (Z_i - \bar Z)^2 \sim \chi^2(n + n - 1)$ since $\sum_{i=1}^n \frac{(X_i-\mu)^2}{\sigma^2} \sim \chi^2(n)$ by Corollary 8.3.4 and $\sum_{i=1}^n (Z_i - \bar Z)^2 = (n-1)S_Z^2 = \frac{(n-1)S_Z^2}{1^2} \sim \chi^2(n-1)$ by Theorem 8.3.6. Thus, we have the sum of two independent chi-square random variables, so we sum the parameters.

n) $\frac{\bar X}{\sigma^2} + \frac1n\sum_{i=1}^n Z_i \sim N\left(\frac{\mu}{\sigma^2}, \frac{1}{n\sigma^2} + \frac1n\right)$ since $\bar X \sim N\left(\mu, \frac{\sigma^2}{n}\right)$ implies that the random variable $\frac{\bar X}{\sigma^2} \sim N\left(\frac{\mu}{\sigma^2}, \frac{1}{n\sigma^2}\right)$. Also, we have $\frac1n\sum_{i=1}^n Z_i = \bar Z \sim N\left(0, \frac1n\right)$, so the distribution of their sum is normal and we sum their respective means and variances to conclude that $\frac{\bar X}{\sigma^2} + \frac1n\sum_{i=1}^n Z_i \sim N\left(\frac{\mu}{\sigma^2}, \frac{1}{n\sigma^2} + \frac1n\right)$.

o) $n\bar Z^2 \sim \chi^2(1)$ since $\sqrt{n}\bar Z \sim N(0,1)$, so it must be that $\left(\sqrt{n}\bar Z\right)^2 = n\bar Z^2 \sim \chi^2(1)$.

p) $\frac{(n-1)\sum_{i=1}^n (X_i - \bar X)^2}{(n-1)\sigma^2\sum_{i=1}^n (Z_i - \bar Z)^2} \sim F(n-1, n-1)$ since we can simplify the random variable as $\frac{(n-1)\sum_{i=1}^n(X_i-\bar X)^2}{(n-1)\sigma^2\sum_{i=1}^n(Z_i-\bar Z)^2} = \frac{\frac{1}{\sigma^2}\sum_{i=1}^n(X_i-\bar X)^2\,/\,(n-1)}{\frac{1}{1^2}\sum_{i=1}^n(Z_i-\bar Z)^2\,/\,(n-1)}$, where $\frac{(n-1)S_X^2}{\sigma^2} \sim \chi^2(n-1)$ and $\frac{(n-1)S_Z^2}{1^2} \sim \chi^2(n-1)$. We thus have the ratio of two independent chi-square random variables, each divided by its degrees of freedom, which we know follows the F distribution.
Question #18: Assume that $Z \sim N(0,1)$, $W_1 \sim \chi^2(5)$ and $W_2 \sim \chi^2(9)$ are all independent. Then compute a) $P(W_1 + W_2 < 8.6)$, b) $P\left(\frac{Z}{\sqrt{W_1/5}} < 2.015\right)$, c) $P(Z > 0.611\sqrt{W_2})$, d) $P\left(\frac{W_1}{W_2} < 1.45\right)$, and e) find the value of $b$ such that $P\left(\frac{W_1}{W_1+W_2} < b\right) = 0.9$.

a) Since $W_1 \sim \chi^2(5)$ and $W_2 \sim \chi^2(9)$, we know that $W_1 + W_2 \sim \chi^2(14)$. This allows us to compute $P(W_1 + W_2 < 8.6) = 0.144$ using the tables for the chi-square distribution.

b) We know that if $Z \sim N(0,1)$ and some $V \sim \chi^2(\nu)$ are independent random variables, then $T = \frac{Z}{\sqrt{V/\nu}}$ follows the t distribution with $\nu$ degrees of freedom. We thus have that $T = \frac{Z}{\sqrt{W_1/5}} \sim t(5)$, so we can compute $P\left(\frac{Z}{\sqrt{W_1/5}} < 2.015\right) = 0.95$ using the t-table.

c) We wish to compute $P(Z > 0.611\sqrt{W_2}) = P\left(\frac{Z}{\sqrt{W_2}} > 0.611\right) = P\left(\frac{Z}{\sqrt{W_2}}\cdot 3 > 0.611\cdot 3\right) = P\left(\frac{Z}{\sqrt{W_2/9}} > 1.833\right) = 0.05$, using the t-table, since we know that $\frac{Z}{\sqrt{W_2/9}} \sim t(9)$.

d) We wish to compute $P\left(\frac{W_1}{W_2} < 1.45\right) = P\left(\frac{W_1/5}{W_2/9} < 1.45\left(\frac95\right)\right) = P\left(\frac{W_1/5}{W_2/9} < 2.61\right)$. We know that if some $V_1 \sim \chi^2(v_1)$ and $V_2 \sim \chi^2(v_2)$ are independent, then the random variable given by $F = \frac{V_1/v_1}{V_2/v_2} \sim F(v_1, v_2)$. We thus have that $\frac{W_1/5}{W_2/9} \sim F(5,9)$, so we can therefore use the F-table to compute the desired probability as $P\left(\frac{W_1/5}{W_2/9} < 2.61\right) = 0.9$.

e) We wish to compute $b$ such that $P\left(\frac{W_1}{W_1+W_2} < b\right) = P\left(\frac{W_1+W_2}{W_1} > \frac1b\right) = P\left(1 + \frac{W_2}{W_1} > \frac1b\right) = P\left(\frac{W_2}{W_1} > \frac1b - 1\right) = P\left(\frac{W_2/9}{W_1/5} > \frac59\left(\frac1b - 1\right)\right) = 0.9$. But we know that $F = \frac{W_2/9}{W_1/5} \sim F(9,5)$, so we can use tables to find that $P(F > 0.383) = 0.9$. This means that we must solve the equation $\frac59\left(\frac1b - 1\right) = 0.383 \Rightarrow \frac1b = \frac95(0.383) + 1 \Rightarrow b = \frac{1}{\frac95(0.383) + 1} = 0.592$.
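The final arithmetic in part e) is easy to verify directly:

```python
# Solve (5/9)(1/b - 1) = 0.383 for b, as in part e)
f_crit = 0.383                    # table value with P(F(9,5) > 0.383) = 0.9
b = 1 / ((9 / 5) * f_crit + 1)
print(round(b, 3))  # → 0.592
```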
Question #19: Suppose that $T \sim t(1)$. a) Show that the CDF of $T$ is $F_T(t) = \frac12 + \frac1\pi\arctan(t)$, and b) show that the $100\cdot\gamma^{th}$ percentile is given by $t_\gamma(1) = \tan\left[\pi\left(\gamma - \frac12\right)\right]$.

a) If some $T \sim t(\nu)$, then its density is given by $f_T(t) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac\nu2\right)\sqrt{\nu\pi}}\left(1 + \frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}$. When $\nu = 1$, we have $f_T(t) = \frac{\Gamma(1)}{\Gamma\left(\frac12\right)\sqrt\pi}(1 + t^2)^{-1} = \frac1\pi\frac{1}{1+t^2}$ since $\Gamma(1) = 1$ and $\Gamma\left(\frac12\right) = \sqrt\pi$. We thus have that $f_T(t) = \frac1\pi\frac{1}{1+t^2}$ when $\nu = 1$, which is the density of a Cauchy random variable. To find the cumulative distribution function, we simply compute $F_T(t) = \frac1\pi\int_{-\infty}^t\frac{1}{1+x^2}\,dx = \frac1\pi[\arctan(x)]_{-\infty}^t = \frac1\pi\left(\arctan(t) - \left(-\frac\pi2\right)\right) = \frac1\pi\arctan(t) + \frac12$.

b) The $100\cdot\gamma^{th}$ percentile is the value of $t$ such that $F_T(t) = \gamma$. From the work above, we have $\frac1\pi\arctan(t) + \frac12 = \gamma \Rightarrow \arctan(t) = \left(\gamma - \frac12\right)\pi \Rightarrow t = \tan\left[\left(\gamma - \frac12\right)\pi\right]$. This proves that the $100\cdot\gamma^{th}$ percentile is given by $t_\gamma(1) = \tan\left[\left(\gamma - \frac12\right)\pi\right]$.
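The CDF and percentile formulas invert each other, which is easy to confirm numerically:

```python
import math

def cauchy_cdf(t):
    # F_T(t) = 1/2 + (1/pi) * arctan(t), the t(1) CDF derived above
    return 0.5 + math.atan(t) / math.pi

def cauchy_percentile(gamma):
    # t_gamma(1) = tan[pi * (gamma - 1/2)]
    return math.tan(math.pi * (gamma - 0.5))

for g in (0.1, 0.25, 0.5, 0.75, 0.9):
    t = cauchy_percentile(g)
    print(g, round(t, 4), round(cauchy_cdf(t), 4))
```

For example, the 75th percentile is $\tan(\pi/4) = 1$, and plugging it back into the CDF returns 0.75.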
Chapter #8 – Statistics and Sampling Distributions

Question #22: Compute $E(X^r)$ for $r > 0$ if we have that $X \sim BETA(a, b)$.

• Since $X \sim BETA(a,b)$, its PDF is $f_X(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}$ whenever $0 < x < 1$, where $a > 0$ and $b > 0$. Then, using the definition of expected value, we can compute $E(X^r) = \int_{-\infty}^\infty x^r f_X(x)\,dx = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_0^1 x^r x^{a-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\cdot\frac{\Gamma(a+r)\Gamma(b)}{\Gamma(a+b+r)} = \frac{\Gamma(a+b)\Gamma(a+r)}{\Gamma(a)\Gamma(a+b+r)}$, since we have $\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_0^1 x^{a-1}(1-x)^{b-1}\,dx = 1$, so we can solve for the integral to conclude that $\int_0^1 x^{a-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$. In this case, we are solving $\int_0^1 x^r x^{a-1}(1-x)^{b-1}\,dx = \int_0^1 x^{a+r-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a+r)\Gamma(b)}{\Gamma(a+b+r)}$. Therefore, all of the moments of the beta distribution for some fixed $r > 0$ can be written in terms of the gamma function, which can be evaluated numerically.
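The closed form can be checked against a direct numerical integration of $x^r f_X(x)$; the values of $a$, $b$, and $r$ below are arbitrary choices:

```python
import math

a, b, r = 2.0, 3.0, 1.5  # arbitrary shape parameters and moment order
const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def beta_pdf(x):
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# Closed form derived above: E(X^r) = Gamma(a+b)Gamma(a+r) / (Gamma(a)Gamma(a+b+r))
closed = math.gamma(a + b) * math.gamma(a + r) / (math.gamma(a) * math.gamma(a + b + r))

# Midpoint-rule approximation of E(X^r) = integral of x^r f(x) dx over (0,1)
m = 200000
numeric = sum(((k + 0.5) / m) ** r * beta_pdf((k + 0.5) / m) for k in range(m)) / m
print(round(closed, 6), round(numeric, 6))
```

The two printed values agree to several decimal places, as expected.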
Question #24: Suppose that $Y_\nu \sim \chi^2(\nu)$. Use moment generating functions to find the limiting distribution of the transformed random variable $\frac{Y_\nu - \nu}{\sqrt{2\nu}}$ as $\nu \to \infty$.

• This result follows directly from the Central Limit Theorem. If we let $Y_\nu = \sum_{i=1}^\nu X_i$ where $X_i \sim \chi^2(1)$ for $i = 1, \dots, \nu$, then $Y_\nu \sim \chi^2(\nu)$ so that $E(Y_\nu) = \nu$, $Var(Y_\nu) = 2\nu$ and $sd(Y_\nu) = \sqrt{2\nu}$. Therefore, $\frac{Y_\nu - E(Y_\nu)}{sd(Y_\nu)} = \frac{Y_\nu - \nu}{\sqrt{2\nu}} \to Z \sim N(0,1)$ as $\nu \to \infty$. We will now prove this result using moment generating functions. By the definition of MGFs, we have

$M_{\frac{Y_\nu-\nu}{\sqrt{2\nu}}}(t) = E\left[e^{t\left(\frac{Y_\nu-\nu}{\sqrt{2\nu}}\right)}\right] = e^{-t\sqrt{\nu/2}}\,E\left[e^{\frac{t}{\sqrt{2\nu}}Y_\nu}\right] = e^{-t\sqrt{\nu/2}}\,M_{Y_\nu}\!\left(\frac{t}{\sqrt{2\nu}}\right) = e^{-t\sqrt{\nu/2}}\left(1 - \frac{2t}{\sqrt{2\nu}}\right)^{-\nu/2} = e^{-t\sqrt{\nu/2}}\left(1 - t\frac{\sqrt2}{\sqrt\nu}\right)^{-\nu/2}.$

In order to evaluate $\lim_{\nu\to\infty} M_{\frac{Y_\nu-\nu}{\sqrt{2\nu}}}(t)$, we first take logarithms and then exponentiate the result. This implies that

$\ln\left[M_{\frac{Y_\nu-\nu}{\sqrt{2\nu}}}(t)\right] = \ln\left[e^{-t\sqrt{\nu/2}}\right] + \ln\left[\left(1 - t\frac{\sqrt2}{\sqrt\nu}\right)^{-\nu/2}\right] = -t\sqrt{\frac\nu2} - \frac\nu2\ln\left(1 - t\frac{\sqrt2}{\sqrt\nu}\right).$

From here, we use the Taylor series $\ln(1-z) = -z - \frac{z^2}{2} - \frac{z^3}{3} - \cdots$ for $z = t\frac{\sqrt2}{\sqrt\nu}$ to evaluate the limit, which then gives

$\lim_{\nu\to\infty}\ln\left[M_{\frac{Y_\nu-\nu}{\sqrt{2\nu}}}(t)\right] = \lim_{\nu\to\infty}\left[-t\sqrt{\frac\nu2} - \frac\nu2\left(-t\frac{\sqrt2}{\sqrt\nu} - \frac{t^2}{\nu} - \frac{2\sqrt2\,t^3}{3\nu^{3/2}} - \cdots\right)\right] = \lim_{\nu\to\infty}\left[-t\sqrt{\frac\nu2} + t\sqrt{\frac\nu2} + \frac{t^2}{2} + \frac{t^3\sqrt2}{3\sqrt\nu} + \cdots\right] = \frac{t^2}{2} + 0 + \cdots.$

This result therefore implies that the limit $\lim_{\nu\to\infty} M_{\frac{Y_\nu-\nu}{\sqrt{2\nu}}}(t) = e^{\lim_{\nu\to\infty}\ln\left[M_{\frac{Y_\nu-\nu}{\sqrt{2\nu}}}(t)\right]} = e^{t^2/2}$, which is the moment generating function of a random variable that follows a standard normal distribution. This proves that the random variable $\frac{Y_\nu - \nu}{\sqrt{2\nu}} \to Z \sim N(0,1)$ as $\nu \to \infty$, just as is guaranteed by the CLT.
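The convergence can be illustrated by simulating $\frac{Y_\nu - \nu}{\sqrt{2\nu}}$ for a moderately large $\nu$; the standardized values should have mean near 0, variance near 1, and tail probabilities close to the standard normal's (a sketch; $\nu$, the seed, and the replication count are arbitrary):

```python
import random

random.seed(8)
nu, reps = 100, 20000
vals = []
for _ in range(reps):
    y = sum(random.gauss(0, 1) ** 2 for _ in range(nu))  # Y ~ chi-square(nu)
    vals.append((y - nu) / (2 * nu) ** 0.5)              # standardized value

mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / reps
frac_below = sum(v <= 1.645 for v in vals) / reps  # compare with Phi(1.645) = 0.95
print(round(mean, 3), round(var, 3), round(frac_below, 3))
```

At $\nu = 100$ the right tail is still slightly heavy (the chi-square is skewed), so the 0.95 comparison is only approximate; the agreement improves as $\nu$ grows.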
Chapter #9 – Point Estimation

Question #1: Assume that $X_1, \dots, X_n$ are independent and identically distributed with common density $f(x; \theta)$, where $\theta > 0$ is an unknown parameter. Find the method of moments estimator (MME) of $\theta$ if the density function is a) $f(x;\theta) = \theta x^{\theta-1}$ for $0 < x < 1$, b) $f(x;\theta) = (\theta+1)x^{-\theta-2}$ whenever $x > 1$, and c) $f(x;\theta) = \theta^2 x e^{-\theta x}$ whenever $x > 0$.

a) We begin by computing the first population moment, so $E(X) = \int_0^1 x f(x;\theta)\,dx = \int_0^1 x(\theta x^{\theta-1})\,dx = \theta\int_0^1 x^\theta\,dx = \frac{\theta}{\theta+1}\left[x^{\theta+1}\right]_0^1 = \frac{\theta}{\theta+1}(1-0) = \frac{\theta}{\theta+1}$. We therefore have $E(X) = \frac{\theta}{\theta+1}$. Next, we equate the first population moment with the first sample moment, which gives $\mu_1' = M_1' \Rightarrow \frac{\theta}{\theta+1} = \frac1n\sum_{i=1}^n X_i \Rightarrow \frac{\theta}{\theta+1} = \bar X$. Finally, we replace $\theta$ by $\hat\theta$ and solve the equation $\frac{\hat\theta}{\hat\theta+1} = \bar X$ for $\hat\theta$, which implies that $\hat\theta_{MME} = \frac{\bar X}{1 - \bar X}$.

b) Just as above, we first compute $E(X) = \int_1^\infty x f(x;\theta)\,dx = \int_1^\infty x\left[(\theta+1)x^{-\theta-2}\right]dx = (\theta+1)\int_1^\infty x^{-\theta-1}\,dx = \frac{\theta+1}{-\theta}\left[x^{-\theta}\right]_1^\infty = -\frac{\theta+1}{\theta}[0-1] = \frac{\theta+1}{\theta}$. Thus, we have $E(X) = \frac{\theta+1}{\theta}$, which means that $\mu_1' = M_1' \Rightarrow \frac{\theta+1}{\theta} = \frac1n\sum_{i=1}^n X_i \Rightarrow \frac{\theta+1}{\theta} = \bar X$ and $\hat\theta_{MME} = \frac{1}{\bar X - 1}$.

c) We have $E(X) = \int_0^\infty x f(x;\theta)\,dx = \int_0^\infty x\left[\theta^2 x e^{-\theta x}\right]dx = \theta^2\int_0^\infty x^2 e^{-\theta x}\,dx = \cdots = \frac2\theta$ after doing integration by parts. We can also find this directly by noting that the density $f(x;\theta) = \theta^2 x e^{-\theta x}$ suggests that $X \sim GAMMA\left(\frac1\theta, 2\right)$. This then implies that $E(X) = 2\cdot\frac1\theta = \frac2\theta$. We therefore set $\mu_1' = M_1'$ such that $\frac2\theta = \frac1n\sum_{i=1}^n X_i$, or $\frac2\theta = \bar X$, and then solve for the method of moments estimator, which is given by $\hat\theta_{MME} = \frac{2}{\bar X}$.
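Each of these estimators can be checked by simulation. For part a), the CDF is $F(x) = x^\theta$, so $X = U^{1/\theta}$ for $U \sim UNIF(0,1)$; with a large sample the MME should land near the true $\theta$ (a sketch; the seed, $\theta$, and $n$ are arbitrary):

```python
import random

random.seed(9)
theta, n = 3.0, 200000
# inverse-CDF draw from f(x; theta) = theta * x^(theta - 1) on (0, 1)
xs = [random.random() ** (1 / theta) for _ in range(n)]
xbar = sum(xs) / n
theta_mme = xbar / (1 - xbar)  # MME from part a)
print(round(xbar, 4), round(theta_mme, 3))
```

Here $\bar X$ should be near $\theta/(\theta+1) = 0.75$ and the estimator near 3.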
Question #2: Assume that $X_1, \dots, X_n$ are independent and identically distributed. Find the method of moments estimator (MME) of the unknown parameters if the random sample comes from a) $X \sim NB(3, p)$, b) $X \sim GAMMA(2, \kappa)$, c) $X \sim WEI\left(\theta, \frac12\right)$, and d) $X \sim PAR(\theta, \kappa)$.

a) Since $X \sim NB(3, p)$, we know that $E(X) = \frac rp = \frac3p$. Equating this with the first sample moment gives $\mu_1' = M_1' \Rightarrow \frac3p = \bar X$, so the estimator is $\hat p_{MME} = \frac{3}{\bar X}$.

b) Since $X \sim GAMMA(2, \kappa)$, we know that $E(X) = \theta\kappa = 2\kappa$. Equating this with the first sample moment gives $\mu_1' = M_1' \Rightarrow 2\kappa = \bar X$, so the estimator is $\hat\kappa_{MME} = \frac{\bar X}{2}$.

c) Since $X \sim WEI\left(\theta, \frac12\right)$, we know that $E(X) = \theta\Gamma\left(1 + \frac1\beta\right) = \theta\Gamma\left(1 + \frac{1}{1/2}\right) = \theta\Gamma(3) = \theta(3-1)! = 2\theta$. Thus, we have $\mu_1' = M_1' \Rightarrow 2\theta = \bar X$, so the estimator is $\hat\theta_{MME} = \frac{\bar X}{2}$.

d) Since $X \sim PAR(\theta, \kappa)$, we have $\mu_1 = \frac{\theta}{\kappa-1}$ and $\mu_2' = \sigma^2 + \mu_1^2 = \frac{\theta^2\kappa}{(\kappa-2)(\kappa-1)^2} + \frac{\theta^2}{(\kappa-1)^2}$. This means that $\mu_1 = \frac{\theta}{\kappa-1} = M_1' = \bar X$ and $\mu_2' = \frac{\theta^2\kappa}{(\kappa-2)(\kappa-1)^2} + \frac{\theta^2}{(\kappa-1)^2} = M_2' = \frac1n\sum_{i=1}^n X_i^2$. We must solve for the unknown parameters $\theta$ and $\kappa$ in terms of the two sample moments $\bar X$ and $\frac1n\sum_{i=1}^n X_i^2$. From the first equation, we can solve to find $\theta = (\kappa-1)\bar X$ and substitute into the second equation to find $\frac{\bar X^2(\kappa-1)^2\kappa}{(\kappa-2)(\kappa-1)^2} + \frac{\bar X^2(\kappa-1)^2}{(\kappa-1)^2} = \frac1n\sum_{i=1}^n X_i^2 \Rightarrow \frac{\bar X^2\kappa}{\kappa-2} + \bar X^2 = \frac1n\sum_{i=1}^n X_i^2 \Rightarrow \bar X^2\left(\frac{\kappa}{\kappa-2} + 1\right) = \frac1n\sum_{i=1}^n X_i^2 \Rightarrow \bar X^2\cdot\frac{2\kappa-2}{\kappa-2} = \frac{\sum_{i=1}^n X_i^2}{n}$. But this means $n\bar X^2(2\kappa-2) = (\kappa-2)\sum_{i=1}^n X_i^2 \Rightarrow 2n\bar X^2\kappa - 2n\bar X^2 = \kappa\sum_{i=1}^n X_i^2 - 2\sum_{i=1}^n X_i^2$, so that $2n\bar X^2\kappa - \kappa\sum_{i=1}^n X_i^2 = 2n\bar X^2 - 2\sum_{i=1}^n X_i^2 \Rightarrow \kappa\left(2n\bar X^2 - \sum_{i=1}^n X_i^2\right) = 2n\bar X^2 - 2\sum_{i=1}^n X_i^2$. Finally, we divide through to find $\kappa = \frac{2n\bar X^2 - 2\sum_{i=1}^n X_i^2}{2n\bar X^2 - \sum_{i=1}^n X_i^2}$. Plugging in to the other equation implies that $\theta = (\kappa-1)\bar X = \left(\frac{2n\bar X^2 - 2\sum_{i=1}^n X_i^2}{2n\bar X^2 - \sum_{i=1}^n X_i^2} - 1\right)\bar X$, so that the two method of moments estimators are $\hat\kappa_{MME} = \frac{2n\bar X^2 - 2\sum_{i=1}^n X_i^2}{2n\bar X^2 - \sum_{i=1}^n X_i^2}$ and $\hat\theta_{MME} = \left(\frac{2n\bar X^2 - 2\sum_{i=1}^n X_i^2}{2n\bar X^2 - \sum_{i=1}^n X_i^2} - 1\right)\bar X$.
Question #3: Assume that $X_1, \dots, X_n$ are independent and identically distributed with common density $f(x;\theta)$, where $\theta > 0$ is an unknown parameter. Find the maximum likelihood estimator (MLE) for $\theta$ when the PDF is a) $f(x;\theta) = \theta x^{\theta-1}$ whenever $0 < x < 1$, b) $f(x;\theta) = (\theta+1)x^{-\theta-2}$ whenever $x > 1$, and c) $f(x;\theta) = \theta^2 x e^{-\theta x}$ whenever $x > 0$.

a) We first find the likelihood function based on the joint density of $X_1, \dots, X_n$, which is $L(\theta) = f(x_1;\theta)\cdots f(x_n;\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n \theta x_i^{\theta-1} = \theta^n(x_1\cdots x_n)^{\theta-1}$. Next, we construct the log likelihood function, since it is easier to differentiate and achieves a maximum at the same point as the likelihood function. This gives $\ln[L(\theta)] = \ln\left[\theta^n(x_1\cdots x_n)^{\theta-1}\right] = n\ln(\theta) + (\theta-1)[\ln(x_1) + \cdots + \ln(x_n)]$, which we differentiate so $\frac{d}{d\theta}\ln[L(\theta)] = \frac{d}{d\theta}\left[n\ln(\theta) + (\theta-1)\sum_{i=1}^n\ln(x_i)\right] = \frac n\theta + \sum_{i=1}^n\ln(x_i)$. We then solve for the value of $\theta$ which makes the derivative equal zero, so $\frac n\theta + \sum_{i=1}^n\ln(x_i) = 0 \Rightarrow \theta = -\frac{n}{\sum_{i=1}^n\ln(x_i)}$. Since it is clear that the second derivative of $\ln[L(\theta)]$ is negative, we have found that the maximum likelihood estimator is $\hat\theta_{MLE} = -\frac{n}{\sum_{i=1}^n\ln(X_i)}$. (Note that we must capitalize the $X_i$ from $x_i$ when presenting the estimator.)

b) We have $L(\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n(\theta+1)x_i^{-\theta-2} = (\theta+1)^n(x_1\cdots x_n)^{-\theta-2}$, so that $\ln[L(\theta)] = \ln\left[(\theta+1)^n(x_1\cdots x_n)^{-\theta-2}\right] = n\ln(\theta+1) - (\theta+2)\sum_{i=1}^n\ln(x_i)$. Then we find $\frac{d}{d\theta}\ln[L(\theta)] = \frac{d}{d\theta}\left[n\ln(\theta+1) - (\theta+2)\sum_{i=1}^n\ln(x_i)\right] = \frac{n}{\theta+1} - \sum_{i=1}^n\ln(x_i)$. Finally, we must solve $\frac{n}{\theta+1} - \sum_{i=1}^n\ln(x_i) = 0 \Rightarrow \theta = \frac{n}{\sum_{i=1}^n\ln(x_i)} - 1$. Since the second derivative of $\ln[L(\theta)]$ will be negative, we have found that $\hat\theta_{MLE} = \frac{n}{\sum_{i=1}^n\ln(X_i)} - 1$.

c) We have $L(\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n\theta^2 x_i e^{-\theta x_i} = \theta^{2n}(x_1\cdots x_n)e^{-\theta(x_1+\cdots+x_n)}$, so that $\ln[L(\theta)] = \ln\left[\theta^{2n}(x_1\cdots x_n)e^{-\theta\left(\sum_{i=1}^n x_i\right)}\right] = 2n\ln(\theta) + \sum_{i=1}^n\ln(x_i) - \theta\sum_{i=1}^n x_i$. Then we have $\frac{d}{d\theta}\ln[L(\theta)] = \frac{d}{d\theta}\left[2n\ln(\theta) + \sum_{i=1}^n\ln(x_i) - \theta\sum_{i=1}^n x_i\right] = \frac{2n}{\theta} - \sum_{i=1}^n x_i$. Finally, we must solve $\frac{2n}{\theta} - \sum_{i=1}^n x_i = 0 \Rightarrow \theta = \frac{2n}{\sum_{i=1}^n x_i} = \frac{2}{\bar x}$, which implies that $\hat\theta_{MLE} = \frac{2}{\bar X}$.
Question #4: Assume that $X_1, \dots, X_n$ are independent and identically distributed. Find the maximum likelihood estimator (MLE) of the parameter if the distribution is a) $X_i \sim BIN(1, p)$, b) $X_i \sim GEO(p)$, c) $X_i \sim NB(3, p)$, d) $X_i \sim GAMMA(\theta, 2)$, e) $X_i \sim WEI\left(\theta, \frac12\right)$, and f) $X_i \sim PAR(1, \kappa)$.

a) Since the density of $X \sim BIN(1,p)$ is $f(x;p) = \binom1x p^x(1-p)^{1-x} = p^x(1-p)^{1-x}$, we have $L(p) = \prod_{i=1}^n f(x_i;p) = \prod_{i=1}^n p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^n x_i}(1-p)^{n-\sum_{i=1}^n x_i}$ and then $\ln[L(p)] = \ln\left[p^{\sum_{i=1}^n x_i}(1-p)^{n-\sum_{i=1}^n x_i}\right] = \left(\sum_{i=1}^n x_i\right)\ln(p) + \left(n - \sum_{i=1}^n x_i\right)\ln(1-p)$. Differentiating gives $\frac{d}{dp}\ln[L(p)] = \frac{\sum_{i=1}^n x_i}{p} - \frac{n-\sum_{i=1}^n x_i}{1-p} = 0 \Rightarrow \frac{\sum_{i=1}^n x_i}{p} = \frac{n-\sum_{i=1}^n x_i}{1-p} \Rightarrow (1-p)\sum_{i=1}^n x_i = p\left(n - \sum_{i=1}^n x_i\right) \Rightarrow \sum_{i=1}^n x_i - p\sum_{i=1}^n x_i = pn - p\sum_{i=1}^n x_i \Rightarrow \sum_{i=1}^n x_i = pn \Rightarrow p = \frac{\sum_{i=1}^n x_i}{n} = \bar x$. Since the second derivative will be negative, we have found that $\hat p_{MLE} = \bar X$.

b) Since $f(x;p) = p(1-p)^{x-1}$, we have $L(p) = \prod_{i=1}^n f(x_i;p) = \prod_{i=1}^n p(1-p)^{x_i-1} = p^n(1-p)^{\left[\sum_{i=1}^n x_i\right]-n}$, and then the log likelihood function becomes $\ln[L(p)] = n\ln(p) + \left\{\left[\sum_{i=1}^n x_i\right] - n\right\}\ln(1-p)$. Differentiating gives $\frac{d}{dp}\ln[L(p)] = \frac np - \frac{\left[\sum_{i=1}^n x_i\right]-n}{1-p}$. Equating this with zero implies $\frac np = \frac{\left[\sum_{i=1}^n x_i\right]-n}{1-p} \Rightarrow (1-p)n = p\left[\sum_{i=1}^n x_i - n\right] \Rightarrow n - np = p\sum_{i=1}^n x_i - np \Rightarrow n = p\sum_{i=1}^n x_i \Rightarrow p = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\frac1n\sum_{i=1}^n x_i} = \frac{1}{\bar x}$. Since the second derivative will be negative, we have found that $\hat p_{MLE} = \frac{1}{\bar X}$.

c) Since $X \sim NB(3,p)$, we have $f(x;p) = \binom{x-1}{3-1}p^3(1-p)^{x-3} = \frac{(x-1)!}{2(x-3)!}p^3(1-p)^{x-3} = \frac{(x-1)(x-2)}{2}p^3(1-p)^{x-3} = \frac12(x^2 - 3x + 2)p^3(1-p)^{x-3}$. This implies that the likelihood function is $L(p) = \prod_{i=1}^n f(x_i;p) = \prod_{i=1}^n\left[\frac12(x_i^2 - 3x_i + 2)p^3(1-p)^{x_i-3}\right] = 2^{-n}\left[\prod_{i=1}^n(x_i^2 - 3x_i + 2)\right]p^{3n}(1-p)^{\left[\sum_{i=1}^n x_i\right]-3n}$, so the log likelihood function is $\ln[L(p)] = -n\ln(2) + \sum_{i=1}^n\ln(x_i^2 - 3x_i + 2) + 3n\ln(p) + \left\{\left[\sum_{i=1}^n x_i\right] - 3n\right\}\ln(1-p)$. Differentiating this then gives $\frac{d}{dp}\ln[L(p)] = \frac{3n}{p} - \frac{\left[\sum_{i=1}^n x_i\right]-3n}{1-p} = 0 \Rightarrow \cdots \Rightarrow p = \frac{3}{\bar x}$. Therefore, we have that $\hat p_{MLE} = \frac{3}{\bar X}$.

d) Since $X \sim GAMMA(\theta, 2)$, we have $f(x;\theta) = \frac{1}{\theta^2\Gamma(2)}x^{2-1}e^{-x/\theta} = \frac{1}{\theta^2}xe^{-x/\theta}$. This means $L(\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n\left[\frac{1}{\theta^2}x_i e^{-x_i/\theta}\right] = \theta^{-2n}(x_1\cdots x_n)e^{-\frac1\theta\sum_{i=1}^n x_i}$, so that $\ln[L(\theta)] = \ln\left[\theta^{-2n}(x_1\cdots x_n)e^{-\frac1\theta\sum_{i=1}^n x_i}\right] = -2n\ln(\theta) + \sum_{i=1}^n\ln(x_i) - \frac1\theta\sum_{i=1}^n x_i$. Differentiating gives $\frac{d}{d\theta}\ln[L(\theta)] = -\frac{2n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^n x_i$. Then we solve $\frac{1}{\theta^2}\sum_{i=1}^n x_i - \frac{2n}{\theta} = 0 \Rightarrow \frac{1}{\theta^2}\sum_{i=1}^n x_i = \frac{2n}{\theta} \Rightarrow \sum_{i=1}^n x_i = 2n\theta \Rightarrow \theta = \frac{\sum_{i=1}^n x_i}{2n} = \frac{\bar x}{2}$. Since the second derivative will be negative at this critical point, we have found that $\hat\theta_{MLE} = \frac{\bar X}{2}$.

e) Since $X \sim WEI\left(\theta, \frac12\right)$, we have $f(x;\theta) = \frac{1}{2\sqrt\theta}x^{-\frac12}e^{-\sqrt{x/\theta}}$. Thus, we have $L(\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n\left[\frac{1}{2\sqrt\theta}x_i^{-\frac12}e^{-\sqrt{x_i/\theta}}\right] = \frac{1}{2^n\theta^{n/2}}(x_1\cdots x_n)^{-\frac12}e^{-\frac{1}{\sqrt\theta}\sum_{i=1}^n\sqrt{x_i}}$, so that the log of the likelihood function is $\ln[L(\theta)] = -n\ln(2) - \frac n2\ln(\theta) - \frac12\sum_{i=1}^n\ln(x_i) - \frac{1}{\sqrt\theta}\sum_{i=1}^n\sqrt{x_i}$. Differentiating this gives $\frac{d}{d\theta}\ln[L(\theta)] = -\frac{n}{2\theta} + \frac{\sum_{i=1}^n\sqrt{x_i}}{2\theta^{3/2}}$. Setting this equal to zero and solving implies $-\frac{n}{2\theta} + \frac{\sum_{i=1}^n\sqrt{x_i}}{2\theta^{3/2}} = 0 \Rightarrow \frac{\sum_{i=1}^n\sqrt{x_i}}{2\theta^{3/2}} = \frac{n}{2\theta} \Rightarrow \frac{\sum_{i=1}^n\sqrt{x_i}}{\sqrt\theta} = n \Rightarrow \theta = \left[\frac{\sum_{i=1}^n\sqrt{x_i}}{n}\right]^2$. Therefore, we have found $\hat\theta_{MLE} = \left[\frac{\sum_{i=1}^n\sqrt{X_i}}{n}\right]^2$.

f) Since $X \sim PAR(1, \kappa)$, we have $f(x;\kappa) = \frac{\kappa}{(1+x)^{\kappa+1}}$, so the likelihood function is $L(\kappa) = \prod_{i=1}^n f(x_i;\kappa) = \prod_{i=1}^n\kappa(1+x_i)^{-\kappa-1} = \kappa^n\prod_{i=1}^n(1+x_i)^{-\kappa-1}$. Then we have that $\ln[L(\kappa)] = \ln\left[\kappa^n\prod_{i=1}^n(1+x_i)^{-\kappa-1}\right] = n\ln(\kappa) - (\kappa+1)\sum_{i=1}^n\ln(1+x_i)$. Next, we compute the derivative so that $\frac{d}{d\kappa}\ln[L(\kappa)] = \frac{d}{d\kappa}\left[n\ln(\kappa) - (\kappa+1)\sum_{i=1}^n\ln(1+x_i)\right] = \frac n\kappa - \sum_{i=1}^n\ln(1+x_i)$. Finally, we set this result equal to zero and solve for $\kappa$ to find that $\frac n\kappa - \sum_{i=1}^n\ln(1+x_i) = 0 \Rightarrow \frac n\kappa = \sum_{i=1}^n\ln(1+x_i) \Rightarrow \kappa = \frac{n}{\sum_{i=1}^n\ln(1+x_i)}$. Since the second derivative will be negative, we have found that $\hat\kappa_{MLE} = \frac{n}{\sum_{i=1}^n\ln(1+X_i)}$.
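As one numerical check of part e) (a sketch under the parametrization used above, where $F(x) = 1 - e^{-\sqrt{x/\theta}}$, so that $X = \theta(-\ln U)^2$ for $U \sim UNIF(0,1)$; the seed, $\theta$, and $n$ are arbitrary):

```python
import math
import random

random.seed(11)
theta, n = 4.0, 100000
# inverse-CDF draw from WEI(theta, 1/2): X = theta * (-ln U)^2
xs = [theta * (-math.log(random.random())) ** 2 for _ in range(n)]
theta_mle = (sum(math.sqrt(x) for x in xs) / n) ** 2  # MLE from part e)
print(round(theta_mle, 3))
```

Since $\sqrt{X} = \sqrt{\theta}\,(-\ln U)$ has mean $\sqrt{\theta}$, the squared average of the $\sqrt{X_i}$ converges to $\theta$.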
Chapter #9 – Point Estimation

Question #7: Let $X_1, \dots, X_n$ be a random sample from $X \sim GEO(p)$. Find the Maximum Likelihood Estimator (MLE) for a) $E(X) = \frac1p$, b) $Var(X) = \frac{1-p}{p^2}$, and c) $P(X > k) = (1-p)^k$ where $k \in \{1, 2, \dots\}$. Do it both ways for each part to verify the Invariance Property.

a) We begin by computing $\hat p_{MLE}$, by first calculating the likelihood function $L(p) = \prod_{i=1}^n f(x_i;p) = \prod_{i=1}^n(1-p)^{x_i-1}p = p^n(1-p)^{\sum_{i=1}^n(x_i-1)}$. Then we can compute $\ln[L(p)] = \ln\left[p^n(1-p)^{\sum_{i=1}^n(x_i-1)}\right] = n\ln(p) + \left[\sum_{i=1}^n(x_i-1)\right]\ln(1-p)$ and differentiate $\frac{d}{dp}\ln[L(p)] = \frac{d}{dp}\left[n\ln(p) + \left[\sum_{i=1}^n(x_i-1)\right]\ln(1-p)\right] = \frac np - \frac{\sum_{i=1}^n(x_i-1)}{1-p}$. Setting equal to zero and solving for $p$ gives $\frac np - \frac{\sum_{i=1}^n x_i - n}{1-p} = 0 \Rightarrow \frac np = \frac{\sum_{i=1}^n x_i - n}{1-p} \Rightarrow (1-p)n = p\sum_{i=1}^n x_i - pn \Rightarrow n - np = p\sum_{i=1}^n x_i - pn \Rightarrow n = p\sum_{i=1}^n x_i$. This then implies that $p = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\bar x}$. Since the second derivative will be negative, we have found that $\hat p_{MLE} = \frac{1}{\bar X}$. By the Invariance Property of the Maximum Likelihood Estimator, we have that $\tau(\hat p) = \frac{1}{\hat p_{MLE}} = \frac{1}{1/\bar X} = \bar X$ is the MLE for $\tau(p) = E(X) = \frac1p$.

b) Since $\hat p_{MLE} = \frac{1}{\bar X}$ and $\tau(p) = Var(X) = \frac{1-p}{p^2}$, then $\tau(\hat p) = \frac{1 - \hat p_{MLE}}{\hat p_{MLE}^2} = \frac{1 - 1/\bar X}{(1/\bar X)^2} = \bar X(\bar X - 1)$, by the Invariance Property of the Maximum Likelihood Estimator.

c) Since $\hat p_{MLE} = \frac{1}{\bar X}$ and $\tau(p) = (1-p)^k$, then $\tau(\hat p) = (1 - \hat p_{MLE})^k = (1 - 1/\bar X)^k$, by the Invariance Property of the Maximum Likelihood Estimator.
Question #12: Let $X_1, \dots, X_n$ be a random sample from $X \sim LOGN(\mu, \sigma^2)$. Find the Maximum Likelihood Estimator (MLE) for a) the parameters $\mu$ and $\sigma^2$, and b) $\tau(\mu, \sigma^2) = E(X)$.

a) We have that the density function of $X$ is $f_X(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\frac1x e^{-\frac{1}{2\sigma^2}(\ln x - \mu)^2}$, so that the likelihood function of the sample is given by $L(\mu,\sigma^2) = \prod_{i=1}^n f(x_i;\mu,\sigma^2) = \prod_{i=1}^n\left[\frac{1}{\sqrt{2\pi\sigma^2}}\frac{1}{x_i}e^{-\frac{1}{2\sigma^2}(\ln x_i - \mu)^2}\right] = (2\pi\sigma^2)^{-\frac n2}(x_1\cdots x_n)^{-1}e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n(\ln x_i - \mu)^2}$. Then the log likelihood function is $\ln[L(\mu,\sigma^2)] = \ln\left[(2\pi\sigma^2)^{-\frac n2}(x_1\cdots x_n)^{-1}e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n(\ln x_i - \mu)^2}\right] = -\frac n2\ln(2\pi\sigma^2) - \sum_{i=1}^n\ln(x_i) - \frac{1}{2\sigma^2}\sum_{i=1}^n(\ln x_i - \mu)^2$. We differentiate this with respect to both parameters and set the resulting expressions equal to zero so we can simultaneously solve for the parameters: $\frac{\partial}{\partial\mu}\ln[L(\mu,\sigma^2)] = \frac{1}{\sigma^2}\sum_{i=1}^n(\ln x_i - \mu) = 0$ and $\frac{\partial}{\partial\sigma^2}\ln[L(\mu,\sigma^2)] = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n(\ln x_i - \mu)^2 = 0$. The first equation implies $\frac{1}{\sigma^2}\sum_{i=1}^n(\ln x_i - \mu) = 0 \Rightarrow \sum_{i=1}^n(\ln x_i - \mu) = 0 \Rightarrow \sum_{i=1}^n(\ln x_i) - n\mu = 0 \Rightarrow \mu = \frac1n\sum_{i=1}^n(\ln x_i)$, and the second $-\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n(\ln x_i - \mu)^2 = 0 \Rightarrow \frac{1}{2\sigma^4}\sum_{i=1}^n(\ln x_i - \mu)^2 = \frac{n}{2\sigma^2} \Rightarrow \frac{1}{\sigma^2}\sum_{i=1}^n(\ln x_i - \mu)^2 = n \Rightarrow \sigma^2 = \frac1n\sum_{i=1}^n(\ln x_i - \mu)^2$. Thus, we have that the maximum likelihood estimators are $\hat\mu_{MLE} = \frac1n\sum_{i=1}^n(\ln X_i)$ and $\hat\sigma^2_{MLE} = \frac1n\sum_{i=1}^n(\ln X_i - \hat\mu_{MLE})^2$.

b) We know that $X \sim LOGN(\mu, \sigma^2)$ if and only if $Y = \ln(X) \sim N(\mu, \sigma^2)$. But $Y = \ln(X)$ if and only if $X = e^Y$, which implies that $E(X) = E(e^Y) = M_Y(1) = e^{\mu(1) + \frac{\sigma^2 1^2}{2}} = e^{\mu + \frac{\sigma^2}{2}}$. By the Invariance Property of the Maximum Likelihood Estimator, we can conclude that $\tau(\hat\mu, \hat\sigma^2)_{MLE} = \tau\left(\hat\mu_{MLE}, \hat\sigma^2_{MLE}\right) = e^{\hat\mu_{MLE} + \frac12\hat\sigma^2_{MLE}}$ is the MLE for $\tau(\mu, \sigma^2) = E(X) = e^{\mu + \frac{\sigma^2}{2}}$.
Question #17: Let $X_1, \dots, X_n$ be a random sample from $X \sim UNIF(\theta - 1, \theta + 1)$. a) Show that the sample mean $\bar X$ is an unbiased estimator for $\theta$; b) show that the midrange $M = \frac{X_{(1)} + X_{(n)}}{2}$ is also an unbiased estimator for the parameter $\theta$; c) which one has a smaller variance?

a) To show that $\bar X$ is an unbiased estimator for $\theta$, we must verify that $E(\bar X) = \theta$. But we see that $E(\bar X) = E\left(\frac1n\sum_{i=1}^n X_i\right) = \frac1n E\left(\sum_{i=1}^n X_i\right) = \frac1n\sum_{i=1}^n E(X_i) = \frac1n\sum_{i=1}^n\left[\frac{(\theta+1)+(\theta-1)}{2}\right] = \frac1n\sum_{i=1}^n\theta = \frac1n n\theta = \theta$, so it is clear that the sample mean is an unbiased estimator for $\theta$.

b) We have that $E(M) = E\left(\frac{X_{(1)} + X_{(n)}}{2}\right) = \frac12 E\left(X_{(1)} + X_{(n)}\right) = \frac12\left[E\left(X_{(1)}\right) + E\left(X_{(n)}\right)\right]$. We must therefore compute the mean of the smallest and largest order statistics, which we can do by first finding their density functions. We first note that since $X \sim UNIF(\theta-1, \theta+1)$, then $f_X(t) = \frac{1}{(\theta+1)-(\theta-1)} = \frac12$ whenever $t \in (\theta-1, \theta+1)$, and $F_X(t) = \int_{\theta-1}^t\frac12\,dx = \frac12[x]_{\theta-1}^t = \frac{t-(\theta-1)}{2}$ whenever $t \in (\theta-1, \theta+1)$. Then the distribution function of $X_{(n)}$ is given by $F_{X_{(n)}}(t) = P\left(X_{(n)} \le t\right) = P(X_i \le t)^n = \left(\frac{t-(\theta-1)}{2}\right)^n = \frac{(t-\theta+1)^n}{2^n}$, so the density function of $X_{(n)}$ is $f_{X_{(n)}}(t) = \frac{d}{dt}F_{X_{(n)}}(t) = \frac{n(t-\theta+1)^{n-1}}{2^n}$. We can then compute the mean of $X_{(n)}$ as $E\left(X_{(n)}\right) = \int_{\theta-1}^{\theta+1} t f_{X_{(n)}}(t)\,dt = \int_{\theta-1}^{\theta+1} t\,\frac{n(t-\theta+1)^{n-1}}{2^n}\,dt$. This integral can be calculated by completing the substitution $u = t - \theta + 1$, so that $du = dt$ and $t = u + \theta - 1$. This then implies $\int_{\theta-1}^{\theta+1} t\,\frac{n(t-\theta+1)^{n-1}}{2^n}\,dt = \frac{n}{2^n}\int_0^2(u + \theta - 1)u^{n-1}\,du = \frac{n}{2^n}\int_0^2 u^n + \theta u^{n-1} - u^{n-1}\,du = \frac{n}{2^n}\left[\frac{u^{n+1}}{n+1} + \frac{\theta u^n}{n} - \frac{u^n}{n}\right]_0^2 = \frac{n}{2^n}\left[\frac{2^{n+1}}{n+1} + \frac{\theta 2^n}{n} - \frac{2^n}{n}\right] = \frac{2n}{n+1} + \theta - 1$. We can similarly compute that the expected value of the first order statistic is $E\left(X_{(1)}\right) = -\frac{2n}{n+1} + \theta + 1$. Thus, we have that $E(M) = \frac12\left[E\left(X_{(1)}\right) + E\left(X_{(n)}\right)\right] = \frac12\left[\left(\theta - \frac{2n}{n+1} + 1\right) + \left(\theta + \frac{2n}{n+1} - 1\right)\right] = \frac12[2\theta] = \theta$, so the midrange is also an unbiased estimator for the parameter $\theta$.

c) We have that $Var(\bar X) = Var\left(\frac1n\sum_{i=1}^n X_i\right) = \frac{1}{n^2}Var\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\sum_{i=1}^n\frac{[(\theta+1)-(\theta-1)]^2}{12} = \frac{1}{n^2}\sum_{i=1}^n\frac{4}{12} = \frac{1}{n^2}\sum_{i=1}^n\frac13 = \frac{1}{n^2}\cdot\frac n3 = \frac{1}{3n}$. Similarly, we can calculate that $Var(M) = Var\left(\frac{X_{(1)} + X_{(n)}}{2}\right) = \frac14 Var\left(X_{(1)} + X_{(n)}\right)$.
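The comparison in part c) can be made concrete by simulation: for $UNIF(\theta-1, \theta+1)$ the midrange's variance shrinks on the order of $1/n^2$, so it beats the sample mean's $\frac{1}{3n}$ for moderate $n$ (a sketch; $\theta$, $n$, the seed, and the replication count are arbitrary):

```python
import random

random.seed(17)
theta, n, reps = 5.0, 20, 20000
means, midranges = [], []
for _ in range(reps):
    xs = [random.uniform(theta - 1, theta + 1) for _ in range(n)]
    means.append(sum(xs) / n)
    midranges.append((min(xs) + max(xs)) / 2)  # midrange estimator

def var(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

print(round(var(means), 5), round(1 / (3 * n), 5), round(var(midranges), 5))
```

With $n = 20$ the simulated variance of the mean is near $1/60 \approx 0.0167$, while the midrange's is several times smaller, so the midrange is the better estimator here.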
Question #21: Let $X_1, \dots, X_n$ be a random sample from $X \sim BIN(1, p)$. a) Find the Cramer-Rao lower bound for the variances of all unbiased estimators of $p$; b) find the Cramer-Rao lower bound for the variances of unbiased estimators of $p(1-p)$; c) find a UMVUE of $p$.

a) We have that $CRLB = \frac{[\tau'(p)]^2}{nE\left[\left(\frac{\partial}{\partial p}\ln f(X;p)\right)^2\right]}$, so we compute each of these parts individually. First, we have $\tau(p) = p$, so $\tau'(p) = 1$ and $[\tau'(p)]^2 = 1$. Next, since $X \sim BIN(1,p)$ we know that $f_X(x) = \binom1x p^x(1-p)^{1-x} = p^x(1-p)^{1-x}$, and so $f(X;p) = p^X(1-p)^{1-X}$, which means that $\ln f(X;p) = X\ln(p) + (1-X)\ln(1-p)$. Taking the derivative gives $\frac{\partial}{\partial p}\ln f(X;p) = \frac Xp - \frac{1-X}{1-p} = \frac{(1-p)X - p(1-X)}{p(1-p)} = \frac{X - pX - p + pX}{p(1-p)} = \frac{X-p}{p(1-p)}$, and squaring gives $\left(\frac{\partial}{\partial p}\ln f(X;p)\right)^2 = \left(\frac{X-p}{p(1-p)}\right)^2 = \frac{X^2 - 2pX + p^2}{p^2(1-p)^2}$. Finally, we compute $E\left[\frac{X^2 - 2pX + p^2}{p^2(1-p)^2}\right] = \frac{1}{p^2(1-p)^2}\left[E(X^2) - 2pE(X) + p^2\right] = \frac{1}{p^2(1-p)^2}\left[\left(p(1-p) + p^2\right) - 2p(p) + p^2\right] = \frac{1}{p^2(1-p)^2}\left[p - p^2 + p^2 - 2p^2 + p^2\right] = \frac{1}{p^2(1-p)^2}\left[p - p^2\right] = \frac{p(1-p)}{p^2(1-p)^2} = \frac{1}{p(1-p)}$. Thus, we have found that $CRLB = \frac{p(1-p)}{n}$.

b) Now, $\tau(p) = p(1-p) = p - p^2$, so $[\tau'(p)]^2 = [1-2p]^2 = 1 - 4p + 4p^2$, so the Cramer-Rao Lower Bound becomes $CRLB = \frac{(1 - 4p + 4p^2)p(1-p)}{n}$.

c) Since for the estimator $\hat p = \bar X$, we have $E(\hat p) = E(\bar X) = E\left(\frac1n\sum_{i=1}^n X_i\right) = \frac1n E\left(\sum_{i=1}^n X_i\right) = \frac1n\sum_{i=1}^n E(X_i) = \frac1n\sum_{i=1}^n p = \frac1n np = p$, and then $Var(\hat p) = Var(\bar X) = Var\left(\frac1n\sum_{i=1}^n X_i\right) = \frac{1}{n^2}Var\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\sum_{i=1}^n p(1-p) = \frac{1}{n^2}np(1-p) = \frac{p(1-p)}{n} = CRLB$, we can conclude that $\hat p = \bar X$ is a Uniform Minimum Variance Unbiased Estimator (UMVUE) for the parameter $p$ in $X \sim BIN(1,p)$.
Question #22: Let $X_1, \dots, X_n$ be a random sample from $X \sim N(\mu, 9)$. a) Find the Cramer-Rao lower bound for the variances of unbiased estimators of $\mu$; b) is the Maximum Likelihood Estimator $\hat\mu_{MLE} = \bar X$ a UMVUE for the parameter $\mu$?

a) We have $\tau(\mu) = \mu$, so $\tau'(\mu) = 1$ and $[\tau'(\mu)]^2 = 1$. Next, since $X \sim N(\mu, 9)$ we know that the density is $f_X(x) = \frac{1}{\sqrt{18\pi}}e^{-\frac{1}{18}(x-\mu)^2}$, so that we have $f(X;\mu) = \frac{1}{\sqrt{18\pi}}e^{-\frac{1}{18}(X-\mu)^2}$ and $\ln f(X;\mu) = -\frac12\ln(18\pi) - \frac{1}{18}(X-\mu)^2$. We then differentiate twice to obtain $\frac{\partial}{\partial\mu}\ln f(X;\mu) = \frac19(X-\mu) \Rightarrow \frac{\partial^2}{\partial\mu^2}\ln f(X;\mu) = -\frac19$. Since we have shown that the expression $E\left[\left(\frac{\partial}{\partial\mu}\ln f(X;\mu)\right)^2\right] = -E\left[\frac{\partial^2}{\partial\mu^2}\ln f(X;\mu)\right]$, and $-E\left(-\frac19\right) = \frac19$, we can conclude that the Cramer-Rao Lower Bound is $CRLB = \frac9n$. This then means that $Var(T) \ge \frac9n$ for any unbiased estimator $T$ of the parameter $\mu$ in $X \sim N(\mu, 9)$.

b) We first verify that $E(\hat\mu_{MLE}) = E(\bar X) = E\left(\frac1n\sum_{i=1}^n X_i\right) = \frac1n E\left(\sum_{i=1}^n X_i\right) = \frac1n\sum_{i=1}^n E(X_i) = \frac1n\sum_{i=1}^n\mu = \frac1n n\mu = \mu$, so $\hat\mu_{MLE} = \bar X$ is an unbiased estimator for $\mu$. Then we compute $Var(\hat\mu_{MLE}) = Var(\bar X) = Var\left(\frac1n\sum_{i=1}^n X_i\right) = \frac{1}{n^2}Var\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\sum_{i=1}^n 9 = \frac{1}{n^2}9n = \frac9n = CRLB$, so that $\hat\mu_{MLE} = \bar X$ is a UMVUE for the parameter $\mu$.
Question #23: Let $X_1, \dots, X_n$ be a random sample from $X \sim N(0, \theta)$. a) Is the Maximum Likelihood Estimator (MLE) for $\theta$ unbiased? b) Is the MLE also a UMVUE for $\theta$?

a) We first find $\hat\theta_{MLE}$ by noting that since $X \sim N(0, \theta)$, its density function is $f_X(x) = \frac{1}{\sqrt{2\pi\theta}}e^{-\frac{1}{2\theta}x^2}$, so the likelihood function is $L(\theta) = \prod_{i=1}^n f_X(x_i;\theta) = \prod_{i=1}^n\frac{1}{\sqrt{2\pi\theta}}e^{-\frac{1}{2\theta}x_i^2} = (2\pi\theta)^{-\frac n2}e^{-\frac{1}{2\theta}\sum_{i=1}^n x_i^2}$, and then $\ln[L(\theta)] = -\frac n2\ln(2\pi\theta) - \frac{1}{2\theta}\sum_{i=1}^n x_i^2$. Next, we differentiate so that $\frac{d}{d\theta}\ln[L(\theta)] = -\frac{n}{2\theta} + \frac{1}{2\theta^2}\sum_{i=1}^n x_i^2 = 0 \Rightarrow \frac{1}{2\theta^2}\sum_{i=1}^n x_i^2 = \frac{n}{2\theta} \Rightarrow \sum_{i=1}^n x_i^2 = n\theta \Rightarrow \theta = \frac1n\sum_{i=1}^n x_i^2$. Since the second derivative is negative, we have $\hat\theta_{MLE} = \frac1n\sum_{i=1}^n X_i^2$. We verify unbiasedness by computing $E(\hat\theta_{MLE}) = E\left(\frac1n\sum_{i=1}^n X_i^2\right) = \frac1n E\left(\sum_{i=1}^n X_i^2\right) = \frac1n\sum_{i=1}^n E(X_i^2) = \frac1n\sum_{i=1}^n(\theta + 0^2) = \frac1n\sum_{i=1}^n\theta = \frac1n n\theta = \theta$.

b) The estimator $\hat\theta_{MLE}$ will be a UMVUE for $\theta$ if $Var(\hat\theta_{MLE}) = CRLB$. We therefore begin by computing the Cramer-Rao Lower Bound. First, we have $\tau(\theta) = \theta$, so $\tau'(\theta) = 1$ and $[\tau'(\theta)]^2 = 1$. Next, since we previously found that $f(X;\theta) = \frac{1}{\sqrt{2\pi\theta}}e^{-\frac{1}{2\theta}X^2}$, we have $\ln f(X;\theta) = -\frac12\ln(2\pi\theta) - \frac{1}{2\theta}X^2$, so that $\frac{\partial}{\partial\theta}\ln f(X;\theta) = -\frac{1}{2\theta} + \frac{X^2}{2\theta^2}$. We then find $\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta) = \frac{1}{2\theta^2} - \frac{2X^2}{2\theta^3} = \frac{1}{2\theta^2} - \frac{X^2}{\theta^3}$ and take the negative of its expected value to obtain $-E\left[\frac{1}{2\theta^2} - \frac{X^2}{\theta^3}\right] = -\left[\frac{1}{2\theta^2} - \frac{1}{\theta^3}E(X^2)\right] = \frac{1}{\theta^3}(\theta + 0^2) - \frac{1}{2\theta^2} = \frac{1}{\theta^2} - \frac{1}{2\theta^2} = \frac{1}{2\theta^2}$. This implies that $CRLB = \frac{2\theta^2}{n}$. We must verify that the variance of our estimator is equal to this lower bound, so we compute $Var(\hat\theta_{MLE}) = Var\left(\frac1n\sum_{i=1}^n X_i^2\right) = \frac{1}{n^2}Var\left(\sum_{i=1}^n X_i^2\right) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i^2)$. In order to compute $Var(X_i^2)$, we use the formula $Var(X_i^2) = E(X_i^4) - \left[E(X_i^2)\right]^2 = \cdots = 3\theta^2 - \theta^2 = 2\theta^2$, by finding the moments of $X_i$ using the derivatives of the Moment Generating Function at $t = 0$. Then we have that $Var(\hat\theta_{MLE}) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i^2) = \frac{1}{n^2}\sum_{i=1}^n 2\theta^2 = \frac{1}{n^2}n2\theta^2 = \frac{2\theta^2}{n} = CRLB$, which verifies that the Maximum Likelihood Estimator is a UMVUE for the parameter $\theta$.
Chapter #9 – Point Estimation
Question #31: Let $\hat{\theta}$ and $\tilde{\theta}$ be the MLE and MME estimators for the parameter $\theta$, where $X_1, \ldots, X_n$ is a random sample of size $n$ from a Uniform distribution such that $X_i \sim UNIF(0, \theta)$. Show that a) $\hat{\theta}$ is MSE consistent, and b) $\tilde{\theta}$ is MSE consistent.
a) We first derive the MLE $\hat{\theta}$ for $\theta$. Since $X \sim UNIF(0, \theta)$, we know that the density function is $f(x; \theta) = \frac{1}{\theta}$ for $x \in (0, \theta)$. This allows us to construct the likelihood function $L(\theta) = \prod_{i=1}^{n} f(x_i; \theta) = \prod_{i=1}^{n} \theta^{-1} = \theta^{-n}$ whenever $x_{1:n} \geq 0$ and $x_{n:n} \leq \theta$, and zero otherwise. Then the log likelihood function is $\ln[L(\theta)] = -n\ln(\theta)$ so that $\frac{d}{d\theta}\ln[L(\theta)] = -\frac{n}{\theta} < 0$ for all $n$ and $\theta$. This means that $L(\theta) = \theta^{-n}$ is a decreasing function of $\theta$ for $x_{n:n} \leq \theta$ since its first derivative is always negative, so we can conclude that the MLE is the largest order statistic: $\hat{\theta} = X_{n:n}$. Next, we show that this estimator is MSE consistent, which means verifying that $\lim_{n\to\infty} E[X_{n:n} - \theta]^2 = 0$. But then we can see that $\lim_{n\to\infty} E[X_{n:n} - \theta]^2 = \lim_{n\to\infty} E[X_{n:n}^2 - 2\theta X_{n:n} + \theta^2] = \lim_{n\to\infty}\left[E(X_{n:n}^2) - 2\theta E(X_{n:n}) + \theta^2\right]$. In order to compute this limit, we must find the first and second moments of the largest order statistic. But we already know that $g_n(y) = nf(y; \theta)F(y; \theta)^{n-1} = \frac{n}{\theta}\left(\frac{y}{\theta}\right)^{n-1} = \frac{ny^{n-1}}{\theta^n}$, so we can calculate $E(X_{n:n}) = \int_0^\theta y\,g_n(y)\,dy = \frac{n}{\theta^n}\int_0^\theta y^n\,dy = \frac{n}{\theta^n}\left[\frac{y^{n+1}}{n+1}\right]_0^\theta = \frac{n}{\theta^n}\left(\frac{\theta^{n+1}}{n+1} - 0\right) = \frac{n}{n+1}\theta$ and $E(X_{n:n}^2) = \int_0^\theta y^2 g_n(y)\,dy = \frac{n}{\theta^n}\int_0^\theta y^{n+1}\,dy = \frac{n}{\theta^n}\left[\frac{y^{n+2}}{n+2}\right]_0^\theta = \frac{n}{\theta^n}\left(\frac{\theta^{n+2}}{n+2}\right) = \frac{n}{n+2}\theta^2$. Thus, we have $\lim_{n\to\infty}\left[E(X_{n:n}^2) - 2\theta E(X_{n:n}) + \theta^2\right] = \lim_{n\to\infty}\left[\frac{n}{n+2}\theta^2 - \frac{2n}{n+1}\theta^2 + \theta^2\right] = \theta^2 - 2\theta^2 + \theta^2 = 0$. That this limit is zero verifies that the maximum likelihood estimator $\hat{\theta} = X_{n:n}$ is mean square error (MSE) consistent.
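The limit above can be double-checked with exact rational arithmetic: the bracketed factor multiplying $\theta^2$ collapses to $2/((n+1)(n+2))$, which visibly tends to zero. A small sketch (the helper name is ours):

```python
from fractions import Fraction

def mse_factor(n):
    """E[(X_{n:n} - theta)^2] / theta^2 from the moments derived above."""
    n = Fraction(n)
    return n / (n + 2) - 2 * n / (n + 1) + 1

# The factor simplifies to 2 / ((n+1)(n+2)), which tends to 0 as n grows,
# so the MSE of the MLE is 2*theta^2 / ((n+1)(n+2)) -> 0.
for n in [1, 2, 10, 100]:
    assert mse_factor(n) == Fraction(2, (n + 1) * (n + 2))

print(float(mse_factor(100)))  # 2/10302, roughly 0.000194
```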
b) We first derive the MME $\tilde{\theta}$ for $\theta$. Since $X \sim UNIF(0, \theta)$, we know that $E(X) = \frac{\theta}{2}$, so we can equate $\mu_1' = m_1' \rightarrow \frac{\theta}{2} = \frac{1}{n}\sum_{i=1}^{n} X_i \rightarrow \frac{\theta}{2} = \bar{X} \rightarrow \theta = 2\bar{X}$. This means that $\tilde{\theta} = 2\bar{X}$. Next, we show that this estimator is MSE consistent, which means verifying that $\lim_{n\to\infty} E[2\bar{X} - \theta]^2 = 0$. But we have $\lim_{n\to\infty} E[2\bar{X} - \theta]^2 = \lim_{n\to\infty} E[4\bar{X}^2 - 4\theta\bar{X} + \theta^2] = \lim_{n\to\infty}\left[4E(\bar{X}^2) - 4\theta E(\bar{X}) + \theta^2\right]$. We therefore compute $E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n} n \frac{\theta}{2} = \frac{\theta}{2}$ and $E(\bar{X}^2) = Var(\bar{X}) + E(\bar{X})^2 = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) + \left(\frac{\theta}{2}\right)^2 = \frac{1}{n^2}\sum_{i=1}^{n} Var(X_i) + \frac{\theta^2}{4} = \frac{1}{n^2} n \frac{\theta^2}{12} + \frac{\theta^2}{4} = \frac{\theta^2}{12n} + \frac{\theta^2}{4} = \frac{(3n+1)\theta^2}{12n}$. Thus, we can compute that $\lim_{n\to\infty}\left[4E(\bar{X}^2) - 4\theta E(\bar{X}) + \theta^2\right] = \lim_{n\to\infty}\left[\frac{4(3n+1)\theta^2}{12n} - 4\theta\frac{\theta}{2} + \theta^2\right] = \lim_{n\to\infty}\left[\frac{(3n+1)\theta^2}{3n} - 2\theta^2 + \theta^2\right] = \theta^2 - 2\theta^2 + \theta^2 = 0$. That this limit is zero verifies that the MME $\tilde{\theta} = 2\bar{X}$ is mean square error (MSE) consistent.
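The same exact-arithmetic spot check works here: dividing the expression inside the limit by $\theta^2$ leaves a factor that simplifies to $1/(3n)$, so the MSE of $2\bar{X}$ is $\theta^2/(3n) \to 0$. The helper name is ours:

```python
from fractions import Fraction

def mme_mse_factor(n):
    """E[(2*Xbar - theta)^2] / theta^2 using the moments derived above:
    4*E(Xbar^2)/theta^2 = (3n+1)/(3n), -4*theta*E(Xbar)/theta^2 = -2, plus 1."""
    n = Fraction(n)
    return 4 * (3 * n + 1) / (12 * n) - 2 + 1

# The factor equals 1/(3n), confirming MSE consistency of the MME.
for n in [1, 3, 10, 1000]:
    assert mme_mse_factor(n) == Fraction(1, 3 * n)
```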
Question #29: Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a Bernoulli distribution such that $X_i \sim BIN(1, p)$. For a Uniform prior density $p \sim UNIF(0, 1)$ and a squared error loss function $L(t; p) = (t - p)^2$, a) find the posterior distribution of the unknown parameter $p$, b) find the Bayes estimator of $p$, and c) find the Bayes risk for the Bayes estimator of $p$ above.
a) We have that the posterior density is given by $f_{p|x}(p) = \frac{f(x_1, \ldots, x_n; p)\pi(p)}{\int f(x_1, \ldots, x_n; p)\pi(p)\,dp}$, where $f(x_1, \ldots, x_n; p) = \prod_{i=1}^{n} f(x_i; p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^{n} x_i}(1-p)^{n - \sum_{i=1}^{n} x_i}$ since the random variables are independent and identically distributed, and $\pi(p) = 1$ since the prior density is uniform. We then express $\int_0^1 p^{\sum_{i=1}^{n} x_i}(1-p)^{n - \sum_{i=1}^{n} x_i}\,dp$ in terms of the beta distribution. Recall that if $Y \sim BETA(a, b)$, then its density is $f(y; a, b) = \frac{1}{B(a, b)} y^{a-1}(1-y)^{b-1}$ where $B(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$. Next, we define $a = \sum_{i=1}^{n} x_i$ and $b = n - \sum_{i=1}^{n} x_i$, so we can write $\int_0^1 p^{\sum_{i=1}^{n} x_i}(1-p)^{n - \sum_{i=1}^{n} x_i}\,dp = B(a + 1, b + 1) = B\left(\sum_{i=1}^{n} x_i + 1, n - \sum_{i=1}^{n} x_i + 1\right)$. Thus, we have $f_{p|x}(p) = \frac{p^{\sum_{i=1}^{n} x_i}(1-p)^{n - \sum_{i=1}^{n} x_i}}{B(\sum_{i=1}^{n} x_i + 1, n - \sum_{i=1}^{n} x_i + 1)} = \frac{1}{B(a+1, b+1)} p^a (1-p)^b$, which verifies that $p|x \sim BETA\left(\sum_{i=1}^{n} x_i + 1, n - \sum_{i=1}^{n} x_i + 1\right) \equiv BETA(a + 1, b + 1)$.

b) For some random variable $Y \sim BETA(a, b)$, we know that $E(Y) = \frac{a}{a+b}$. Moreover, Theorem 9.5.2 states that when we have a squared error loss function, the Bayes estimator is simply the expected value of the posterior distribution. This implies that the Bayes estimator of $p$ is given by $\hat{p}_{BE} = \frac{\sum_{i=1}^{n} x_i + 1}{\sum_{i=1}^{n} x_i + 1 + n - \sum_{i=1}^{n} x_i + 1} = \frac{\sum_{i=1}^{n} x_i + 1}{n + 2}$.

c) The risk function in this case is $R_T(p) = E[(T - p)^2]$, where $T = \frac{\sum_{i=1}^{n} X_i + 1}{n+2}$ is the Bayes estimator derived above. We would therefore substitute for $T$ in the risk function, evaluate the expected value of that expression, and then compute the Bayes risk $\int_0^1 E[(T - p)^2]\,dp$.
Question #34: Consider a random sample of size $n$ from a distribution with discrete probability mass function $f_X(x; p) = (1-p)^x p$ for $x \in \{0, 1, 2, \ldots\}$. a) Find the MLE of the unknown parameter $p$. b) Find the MLE of $\theta = \frac{1-p}{p}$. c) Find the CRLB for the variances of all unbiased estimators of the parameter $\theta$ above. d) Is the MLE of $\theta = \frac{1-p}{p}$ a UMVUE? e) Is the MLE of $\theta = \frac{1-p}{p}$ also MSE consistent? f) Compute the asymptotic distribution of the MLE of $\theta = \frac{1-p}{p}$. g) If we have the estimator $\tilde{\theta} = \frac{n}{n+1}\bar{X}$, then find the risk functions of both $\hat{\theta} = \bar{X}$ and $\tilde{\theta}$ using the loss function given by $L(t; \theta) = \frac{(t-\theta)^2}{\theta^2 + \theta}$.

a) We have $L(p) = \prod_{i=1}^{n} f(x_i; p) = \prod_{i=1}^{n}(1-p)^{x_i} p = p^n (1-p)^{\sum_{i=1}^{n} x_i}$, so that $\ln[L(p)] = n\ln(p) + \sum_{i=1}^{n} x_i \ln(1-p)$. Then we have that $\frac{d}{dp}\ln[L(p)] = \frac{n}{p} - \frac{\sum_{i=1}^{n} x_i}{1-p}$. Setting this equal to zero and solving for $p$ gives the estimator $\hat{p}_{MLE} = \frac{1}{1 + \bar{X}}$.

b) By the Invariance Property, we have that the estimator is $\hat{\theta}_{MLE} = \frac{1 - \hat{p}_{MLE}}{\hat{p}_{MLE}} = \bar{X}$.
c) Since $\tau = \tau(p) = \frac{1-p}{p} = \frac{1}{p} - 1$, then $\tau'(p) = -\frac{1}{p^2}$ and $[\tau'(p)]^2 = \frac{1}{p^4}$. Then since $f(x; p) = (1-p)^x p$, we can compute $\ln f(x; p) = \ln(p) + x\ln(1-p)$ so that $\frac{\partial}{\partial p}\ln f(x; p) = \frac{1}{p} - \frac{x}{1-p}$ and $\frac{\partial^2}{\partial p^2}\ln f(x; p) = -\frac{1}{p^2} - \frac{x}{(1-p)^2}$. We can then compute the negative of the expected value of this second derivative so that $-E\left[\frac{\partial^2}{\partial p^2}\ln f(X; p)\right] = \frac{1}{p^2} + \frac{E(X)}{(1-p)^2} = \frac{1}{p^2} + \frac{1-p}{p(1-p)^2} = \frac{(1-p)^2 + p(1-p)}{p^2(1-p)^2} = \frac{1 - 2p + p^2 + p - p^2}{p^2(1-p)^2} = \frac{1-p}{p^2(1-p)^2} = \frac{1}{p^2(1-p)}$. These results imply that $CRLB = \frac{1/p^4}{n \cdot \frac{1}{p^2(1-p)}} = \frac{p^2(1-p)}{np^4} = \frac{1-p}{np^2}$.
d) We first verify that $E(\hat{\theta}_{MLE}) = E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n} n \frac{1-p}{p} = \frac{1-p}{p}$, so that the MLE is an unbiased estimator of $\theta = \frac{1-p}{p}$. Next, we compute $Var(\hat{\theta}_{MLE}) = Var(\bar{X}) = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} Var(X_i) = \frac{1}{n^2} n \frac{1-p}{p^2} = \frac{1-p}{np^2} = CRLB$, which verifies that $\hat{\theta}_{MLE} = \bar{X}$ is the UMVUE for the parameter $\theta = \frac{1-p}{p}$.
e) To verify that $\hat{\theta}_{MLE} = \bar{X}$ is MSE consistent, we must show that $\lim_{n\to\infty} E\left[\bar{X} - \frac{1-p}{p}\right]^2 = 0$. But we can see that we have $\lim_{n\to\infty} E\left[\bar{X} - \frac{1-p}{p}\right]^2 = \lim_{n\to\infty} E\left[\bar{X}^2 - \frac{2(1-p)}{p}\bar{X} + \frac{(1-p)^2}{p^2}\right] = \lim_{n\to\infty}\left[E(\bar{X}^2) - \frac{2(1-p)}{p}E(\bar{X}) + \frac{(1-p)^2}{p^2}\right]$, so we must compute the expectation of both the mean and the mean squared. However, we already know that $E(\bar{X}) = \frac{1-p}{p} = \theta$ since $\hat{\theta}_{MLE}$ is unbiased. Then $E(\bar{X}^2) = Var(\bar{X}) + E(\bar{X})^2 = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) + \left(\frac{1-p}{p}\right)^2 = \frac{1}{n^2}\sum_{i=1}^{n} Var(X_i) + \frac{(1-p)^2}{p^2} = \frac{1}{n^2} n \frac{1-p}{p^2} + \frac{(1-p)^2}{p^2} = \frac{1-p}{np^2} + \frac{(1-p)^2}{p^2}$. Thus $\lim_{n\to\infty}\left[E(\bar{X}^2) - \frac{2(1-p)}{p}E(\bar{X}) + \frac{(1-p)^2}{p^2}\right] = \lim_{n\to\infty}\left[\frac{1-p}{np^2} + \frac{(1-p)^2}{p^2} - \frac{2(1-p)^2}{p^2} + \frac{(1-p)^2}{p^2}\right] = \frac{(1-p)^2}{p^2} - \frac{2(1-p)^2}{p^2} + \frac{(1-p)^2}{p^2} = 0$. This shows that $\hat{\theta}_{MLE} = \bar{X}$ is MSE consistent.
f) We use Definition 9.4.5, which states that for large values of $n$, the MLE estimator is distributed normal with mean $\theta = \frac{1-p}{p}$ and variance $CRLB$. Since we previously found that $CRLB = \frac{1-p}{np^2}$, we can conclude that $\hat{\theta}_{MLE} \sim N\left(\frac{1-p}{p}, \frac{1-p}{np^2}\right)$.
g) Definition 9.5.2 states that the risk function is the expected loss $R_T(\theta) = E[L(T; \theta)]$. In this case, the loss function is $L(t; \theta) = \frac{(t-\theta)^2}{\theta^2 + \theta} = \frac{t^2 - 2\theta t + \theta^2}{\theta^2 + \theta}$. Note that $Var(X_i) = \frac{1-p}{p^2} = \theta^2 + \theta$, so that $E(\bar{X}) = \theta$ and $E(\bar{X}^2) = \frac{\theta^2 + \theta}{n} + \theta^2$. Therefore, for the estimator $\hat{\theta} = \bar{X}$ we compute $R_{\hat{\theta}}(\theta) = E\left[\frac{\bar{X}^2 - 2\theta\bar{X} + \theta^2}{\theta^2 + \theta}\right] = \frac{1}{\theta^2 + \theta}\left[E(\bar{X}^2) - 2\theta E(\bar{X}) + \theta^2\right] = \frac{1}{\theta^2 + \theta}\left[\frac{\theta^2 + \theta}{n} + \theta^2 - 2\theta^2 + \theta^2\right] = \frac{1}{\theta^2 + \theta} \cdot \frac{\theta^2 + \theta}{n} = \frac{1}{n}$. Similarly, for the estimator $\tilde{\theta} = \frac{n}{n+1}\bar{X}$ we can compute $R_{\tilde{\theta}}(\theta) = E\left[\frac{\left(\frac{n}{n+1}\bar{X}\right)^2 - 2\theta\left(\frac{n}{n+1}\bar{X}\right) + \theta^2}{\theta^2 + \theta}\right] = \cdots = \frac{n(\theta + 1) + \theta}{(n+1)^2(\theta + 1)}$.
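Both risk expressions can be verified with exact rational arithmetic from the two moments of $\bar{X}$ used above; the helper below is ours, written only as a spot check of the algebra.

```python
from fractions import Fraction as F

def risks(n, theta):
    """Risk of the MLE Xbar and of tilde = n/(n+1)*Xbar under the loss
    L(t; theta) = (t - theta)^2 / (theta^2 + theta), using exact moments."""
    n, th = F(n), F(theta)
    v = th**2 + th                 # Var(X_i) for the geometric pmf above
    EX, EX2 = th, v / n + th**2    # moments of the sample mean
    risk_mle = (EX2 - 2 * th * EX + th**2) / v
    m = n / (n + 1)
    risk_tilde = (m**2 * EX2 - 2 * th * m * EX + th**2) / v
    return risk_mle, risk_tilde

# Risk of Xbar is exactly 1/n; the shrunken estimator matches the
# closed form (n*(theta+1) + theta) / ((n+1)^2 * (theta+1)).
for n in [1, 2, 5]:
    for theta in [F(1, 2), F(1), F(3)]:
        r1, r2 = risks(n, theta)
        assert r1 == F(1, n)
        assert r2 == (n * (theta + 1) + theta) / ((n + 1)**2 * (theta + 1))
```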
Question #36: Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a Normal distribution such that each $X_i \sim N(0, \theta)$. Find the asymptotic distribution of the MLE of the parameter $\theta$.

• From the previous assignment, we know that $\hat{\theta}_{MLE} = \frac{1}{n}\sum_{i=1}^{n} X_i^2$. We then use Definition 9.4.5, which states that for large values of $n$, the MLE estimator is distributed normal with mean $\theta$ and variance $CRLB$. That is, we have that $\hat{\theta}_{MLE} \sim N(\theta, CRLB)$. This means that we must compute the Cramer-Rao Lower Bound. Since $\tau(\theta) = \theta$, then $\tau'(\theta) = 1$ and $[\tau'(\theta)]^2 = 1$. Next, since we previously found that $f(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{x^2}{2\theta}}$, we have $\ln f(x; \theta) = -\frac{1}{2}\ln(2\pi\theta) - \frac{x^2}{2\theta}$ so that $\frac{\partial}{\partial\theta}\ln f(x; \theta) = -\frac{1}{2\theta} + \frac{x^2}{2\theta^2}$. We then find $\frac{\partial^2}{\partial\theta^2}\ln f(x; \theta) = \frac{1}{2\theta^2} - \frac{x^2}{\theta^3}$ and take the negative of its expected value to obtain $-E\left[\frac{1}{2\theta^2} - \frac{X^2}{\theta^3}\right] = \frac{1}{\theta^3}(\theta + 0^2) - \frac{1}{2\theta^2} = \frac{1}{\theta^2} - \frac{1}{2\theta^2} = \frac{1}{2\theta^2}$. This implies that $CRLB = \frac{2\theta^2}{n}$. Combining these facts reveals that the asymptotic distribution of the MLE is $\hat{\theta}_{MLE} \sim N\left(\theta, \frac{2\theta^2}{n}\right)$. We can transform this to get a standard normal distribution by noting that the random variable $\frac{\hat{\theta}_{MLE} - \theta}{\theta\sqrt{2/n}} \sim N(0, 1)$ for large values of $n$. We could further reduce this by multiplying through by the constant $\theta$ so that $\frac{\hat{\theta}_{MLE} - \theta}{\sqrt{2/n}} \sim N(0, \theta^2)$.
Chapter #10 – Sufficiency and Completeness
Question #6: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim BIN(m_i, p)$. Use the Factorization Criterion to show that $S = \sum_{i=1}^{n} X_i$ is sufficient for the unknown parameter $p$.

• Since each $X_i \sim BIN(m_i, p)$, we know that the probability mass function is given by $f(x; m_i, p) = \binom{m_i}{x} p^x (1-p)^{m_i - x} 1\{x = 0, \ldots, m_i\}$. We can then construct their joint probability mass function, due to the fact that they are independent, as $f(x_1, \ldots, x_n; m_i, p) = \prod_{i=1}^{n} f(x_i; m_i, p) = \prod_{i=1}^{n} \binom{m_i}{x_i} p^{x_i}(1-p)^{m_i - x_i} 1\{x_i = 0, \ldots, m_i\} = \left[\prod_{i=1}^{n} \binom{m_i}{x_i}\right] p^{\sum_{i=1}^{n} x_i}(1-p)^{\sum_{i=1}^{n} m_i - \sum_{i=1}^{n} x_i} 1\{x_i = 0, \ldots, m_i\}$. If $C = \prod_{i=1}^{n} \binom{m_i}{x_i}$ and $q = 1 - p$, then we have that $C p^{\sum_{i=1}^{n} x_i} q^{\sum_{i=1}^{n} m_i - \sum_{i=1}^{n} x_i} 1\{x_i = 0, \ldots, m_i\} = C \frac{p^{\sum_{i=1}^{n} x_i} q^{\sum_{i=1}^{n} m_i}}{q^{\sum_{i=1}^{n} x_i}} 1\{x_i = 0, \ldots, m_i\} = C\left(\frac{p}{q}\right)^{\sum_{i=1}^{n} x_i} q^{\sum_{i=1}^{n} m_i} 1\{x_i = 0, \ldots, m_i\}$. But then if we define $s = \sum_{i=1}^{n} x_i$, we have that $f(x_1, \ldots, x_n; m_i, p) = C\left(\frac{p}{q}\right)^{s} q^{\sum_{i=1}^{n} m_i} 1\{x_i = 0, \ldots, m_i\} = g(s; m_i, p)h(x_1, \ldots, x_n)$. Since $g(s; m_i, p) = \left(\frac{p}{q}\right)^{s} q^{\sum_{i=1}^{n} m_i}$ does not depend on $x_1, \ldots, x_n$ except through $s = \sum_{i=1}^{n} x_i$, and $h(x_1, \ldots, x_n) = C \cdot 1\{x_i = 0, \ldots, m_i\}$ does not involve $p$, the Factorization Criterion guarantees that $S = \sum_{i=1}^{n} X_i$ is sufficient for the unknown parameter $p$.
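The factorization can be checked numerically for a small example: the joint binomial pmf should equal $g(s; p) \cdot h(x)$ exactly. The particular $m_i$, $x_i$, and $p$ below are arbitrary illustrations.

```python
from math import comb, prod

# Check that the joint binomial pmf factors as g(s; p) * h(x), where g
# depends on the data only through s = sum(x_i) and h is free of p.
p = 0.3
ms = [2, 3, 4]   # the m_i
xs = [1, 0, 2]   # one observed x_i per component

joint = prod(comb(m, x) * p**x * (1 - p)**(m - x) for m, x in zip(ms, xs))

s, q = sum(xs), 1 - p
g = (p / q)**s * q**sum(ms)                    # depends on data only via s
h = prod(comb(m, x) for m, x in zip(ms, xs))   # does not involve p

assert abs(joint - g * h) < 1e-12
print(joint)
```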
Question #7: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim NB(r_i, p)$. This means that each $X_i$ has probability mass function $f(X_i = x) = \binom{x-1}{r_i - 1} p^{r_i}(1-p)^{x - r_i}$ for $x = r_i, r_i + 1, r_i + 2, \ldots$ Find a sufficient statistic for the unknown parameter $p$ using the Factorization Criterion.

• As in the question above, we have that $f(x_1, \ldots, x_n; r_i, p) = \prod_{i=1}^{n} f(x_i; r_i, p) = \prod_{i=1}^{n} \binom{x_i - 1}{r_i - 1} p^{r_i}(1-p)^{x_i - r_i} 1\{x_i = r_i, r_i + 1, \ldots\}$. After applying the product operator, this becomes $\left[\prod_{i=1}^{n} \binom{x_i - 1}{r_i - 1}\right] p^{\sum_{i=1}^{n} r_i} q^{\sum_{i=1}^{n} x_i - \sum_{i=1}^{n} r_i} 1\{x_i = r_i, r_i + 1, \ldots\}$, where $q = 1 - p$. Then if we define $C = \prod_{i=1}^{n} \binom{x_i - 1}{r_i - 1}$, this expression becomes $C \frac{p^{\sum_{i=1}^{n} r_i} q^{\sum_{i=1}^{n} x_i}}{q^{\sum_{i=1}^{n} r_i}} 1\{x_i = r_i, r_i + 1, \ldots\} = C\left(\frac{p}{q}\right)^{\sum_{i=1}^{n} r_i} q^{\sum_{i=1}^{n} x_i} 1\{x_i = r_i, r_i + 1, \ldots\}$. Finally, if we let $s = \sum_{i=1}^{n} x_i$, we have that the joint mass function is $f(x_1, \ldots, x_n; r_i, p) = C\left(\frac{p}{q}\right)^{\sum_{i=1}^{n} r_i} q^{s} 1\{x_i = r_i, r_i + 1, \ldots\} = g(s; r_i, p)h(x_1, \ldots, x_n)$. Since $g(s; r_i, p) = \left(\frac{p}{q}\right)^{\sum_{i=1}^{n} r_i} q^{s}$ does not depend on $x_1, \ldots, x_n$ except through $s = \sum_{i=1}^{n} x_i$, and $h(x_1, \ldots, x_n) = C \cdot 1\{x_i = r_i, r_i + 1, \ldots\}$ does not involve $p$, the Factorization Criterion guarantees that $S = \sum_{i=1}^{n} X_i$ is sufficient for $p$.
Question #16: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim NB(r_i, p)$. This means that each $X_i$ has mass function $f(X_i = x) = \binom{x-1}{r_i - 1} p^{r_i}(1-p)^{x - r_i}$ for $x = r_i, r_i + 1, r_i + 2, \ldots$ Find the Maximum Likelihood Estimator (MLE) of $p$ by maximizing the likelihood function of the sufficient statistic.

• In the previous question, we found that $L(p) = f(x_1, \ldots, x_n; r_i, p) = \left[\prod_{i=1}^{n} \binom{x_i - 1}{r_i - 1}\right] p^{\sum_{i=1}^{n} r_i}(1-p)^{\sum_{i=1}^{n} x_i}(1-p)^{-\sum_{i=1}^{n} r_i}$. Taking the natural logarithm gives $\ln[L(p)] = \sum_{i=1}^{n} \ln\binom{x_i - 1}{r_i - 1} + \sum_{i=1}^{n} r_i \ln(p) + \sum_{i=1}^{n} x_i \ln(1-p) - \sum_{i=1}^{n} r_i \ln(1-p)$. Then differentiating the log likelihood function and equating to zero implies that $\frac{d}{dp}\ln[L(p)] = \frac{\sum_{i=1}^{n} r_i}{p} - \frac{\sum_{i=1}^{n} x_i}{1-p} + \frac{\sum_{i=1}^{n} r_i}{1-p} = 0 \rightarrow (1-p)\sum_{i=1}^{n} r_i - p\sum_{i=1}^{n} x_i + p\sum_{i=1}^{n} r_i = 0$. Then we have $\sum_{i=1}^{n} r_i - p\sum_{i=1}^{n} r_i - p\sum_{i=1}^{n} x_i + p\sum_{i=1}^{n} r_i = 0 \rightarrow \sum_{i=1}^{n} r_i - p\sum_{i=1}^{n} x_i = 0$. This implies that the Maximum Likelihood Estimator of $p$ is $\hat{p}_{MLE} = \frac{\sum_{i=1}^{n} r_i}{\sum_{i=1}^{n} X_i}$.
Question #12: Let $X_1, \ldots, X_n$ be independent and identically distributed from a two parameter exponential distribution $EXP(\theta, \eta)$ such that the probability density function is $f(x; \theta, \eta) = \frac{1}{\theta} e^{-\frac{x - \eta}{\theta}} 1\{x > \eta\}$. Find jointly sufficient statistics for the parameters $\theta$ and $\eta$.

• Since the random variables are iid, their joint probability density function is given by $f(x_1, \ldots, x_n; \theta, \eta) = \prod_{i=1}^{n} f(x_i; \theta, \eta) = \prod_{i=1}^{n} \theta^{-1} e^{-\frac{x_i - \eta}{\theta}} 1\{x_i > \eta\} = \theta^{-n} e^{\frac{1}{\theta}(n\eta - \sum_{i=1}^{n} x_i)}\left[\prod_{i=1}^{n} 1\{x_i > \eta\}\right]$. Since $\prod_{i=1}^{n} 1\{x_i > \eta\} = 1\{x_{1:n} > \eta\}$, this shows that $S_1 = \sum_{i=1}^{n} X_i$ and $S_2 = X_{1:n}$ are jointly sufficient for $\theta$ and $\eta$ by the Factorization Criterion, with $h(x_1, \ldots, x_n) = 1$ being independent of the unknown parameters $\theta$ and $\eta$, and $g(s_1, s_2; \theta, \eta) = \theta^{-n} e^{\frac{1}{\theta}(n\eta - s_1)} 1\{s_2 > \eta\}$ depending on $x_1, \ldots, x_n$ only through $S_1$ and $S_2$.
Question #13: Let $X_1, \ldots, X_n$ be independent and identically distributed from a beta distribution $BETA(\theta_1, \theta_2)$ such that the probability density function of each of these random variables is given by $f(x; \theta_1, \theta_2) = \frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)} x^{\theta_1 - 1}(1-x)^{\theta_2 - 1}$ whenever $0 < x < 1$. Find jointly sufficient statistics for the unknown parameters $\theta_1$ and $\theta_2$.

• Since the random variables are iid, their joint density is given by $f(x_1, \ldots, x_n; \theta_1, \theta_2) = \prod_{i=1}^{n} f(x_i; \theta_1, \theta_2) = \prod_{i=1}^{n} \frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)} x_i^{\theta_1 - 1}(1 - x_i)^{\theta_2 - 1} 1\{0 < x_i < 1\} = \left[\frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)}\right]^n [x_1 \cdots x_n]^{\theta_1 - 1}[(1 - x_1)\cdots(1 - x_n)]^{\theta_2 - 1}\prod_{i=1}^{n} 1\{0 < x_i < 1\}$. This then shows that $S_1 = \prod_{i=1}^{n} X_i$ and $S_2 = \prod_{i=1}^{n}(1 - X_i)$ are jointly sufficient for $\theta_1$ and $\theta_2$ by the Factorization Criterion, with $h(x_1, \ldots, x_n) = \prod_{i=1}^{n} 1\{0 < x_i < 1\}$ being independent of the unknown parameters and $g(s_1, s_2; \theta_1, \theta_2) = \left[\frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)}\right]^n s_1^{\theta_1 - 1} s_2^{\theta_2 - 1}$ depending on the observations only through $S_1$ and $S_2$.
Question #18: Let $X \sim N(0, \theta)$ for $\theta > 0$. a) Show that $X^2$ is complete and sufficient for the unknown parameter $\theta$, and b) show that $N(0, \theta)$ is not a complete family.

a) Since $X \sim N(0, \theta)$, we know that $f_X(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{x^2}{2\theta}}$ for $x \in \mathbb{R}$. Therefore, by the Regular Exponential Class (REC) Theorem, $X^2$ is complete and sufficient for $\theta$.

b) Since $X \sim N(0, \theta)$, we know that $E(X) = 0$ for all $\theta > 0$. Taking $u(x) = x$ thus gives a function that is not identically zero yet satisfies $E[u(X)] = 0$ for every $\theta > 0$, so completeness fails for the family $N(0, \theta)$.
Chapter #10 – Sufficiency and Completeness
Question #21: If $X_1, \ldots, X_n$ is a random sample from a Bernoulli distribution such that each $X_i \sim BERN(p) \equiv BIN(1, p)$ where $p$ is the unknown parameter to be estimated, find the UMVUE for a) $\tau(p) = Var(X) = p(1-p)$, and b) $\tau(p) = p^2$.
a) We first verify that the Bernoulli distribution is a member of the Regular Exponential Class (REC) by noting that its density can be written as $f(x; p) = p^x(1-p)^{1-x} = \frac{p^x}{(1-p)^x}(1-p) = \left(\frac{p}{1-p}\right)^x(1-p) = \exp\left\{\ln\left[\left(\frac{p}{1-p}\right)^x(1-p)\right]\right\}$. This equality implies $f(x; p) = \exp\left\{x\ln\left(\frac{p}{1-p}\right) + \ln(1-p)\right\} = (1-p)\exp\left\{x\ln\left(\frac{p}{1-p}\right)\right\} = c(p)\exp\{t_1(x)q_1(p)\}$, so the Bernoulli distribution is a member of the REC by Definition 10.4.2. We then use Theorem 10.4.2, which guarantees the existence of complete sufficient statistics for distributions from the REC, to construct the sufficient statistic $S_1 = \sum_{i=1}^{n} t_1(X_i) = \sum_{i=1}^{n} X_i$. Next, we appeal to the Rao-Blackwell Theorem in justifying the use of $S_1$ (or any one-to-one function of it) in our search for a UMVUE for $Var(X) = p(1-p)$. Our initial guess for an estimator is $T = \bar{X}(1 - \bar{X})$, so we first compute $E(T) = E[\bar{X}(1 - \bar{X})] = E(\bar{X}) - E(\bar{X}^2) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) - \left[Var(\bar{X}) + E(\bar{X})^2\right] = p - \frac{1}{n^2}np(1-p) - p^2 = p(1-p) - \frac{p(1-p)}{n} = p(1-p)\left(\frac{n-1}{n}\right)$. This implies that $T^* = \frac{n}{n-1}T = \frac{n}{n-1}\left[\bar{X}(1 - \bar{X})\right]$ will have expected value equal to $Var(X) = p(1-p)$. The Lehmann-Scheffé Theorem finally guarantees that $T^*$ is a UMVUE for $Var(X) = p(1-p)$, since it states that any unbiased estimator which is a function of complete sufficient statistics is a UMVUE.
b) We note that for the complete sufficient statistic $S_1 = \sum_{i=1}^{n} X_i$, we have $E(S_1) = np$ and $Var(S_1) = np(1-p)$ since $S_1 \sim BIN(n, p)$, which is true because it is the sum of $n$ independent Bernoulli random variables. This implies $E(S_1^2) = Var(S_1) + E(S_1)^2 = np(1-p) + n^2p^2$. By the Lehmann-Scheffé Theorem, we know that we must use some function of the complete sufficient statistic $S_1$ to construct a UMVUE for the unknown parameter $p^2$. We note that for $T = S_1^2 - S_1$, we have $E(T) = E(S_1^2) - E(S_1) = np(1-p) + n^2p^2 - np = np - np^2 + n^2p^2 - np = p^2(n^2 - n)$. This implies that the statistic $T^* = \frac{1}{n^2 - n}T = \frac{S_1^2 - S_1}{n^2 - n} = \frac{(\sum_{i=1}^{n} X_i)^2 - \sum_{i=1}^{n} X_i}{n^2 - n}$ will have expected value equal to $p^2$, so it is a UMVUE by the Lehmann-Scheffé Theorem.
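Because the Bernoulli sample space is finite, unbiasedness of both UMVUEs can be checked exactly by enumerating all $2^n$ outcomes and weighting by the exact pmf. The values $n = 4$ and $p = 1/3$ below are arbitrary illustrations.

```python
from fractions import Fraction as F
from itertools import product

# Exhaustively average both estimators over all 2^n Bernoulli outcomes,
# weighted by the exact pmf, to confirm unbiasedness.
n, p = 4, F(1, 3)

e_var_hat = e_p2_hat = F(0)
for xs in product([0, 1], repeat=n):
    s = sum(xs)
    prob = p**s * (1 - p)**(n - s)
    xbar = F(s, n)
    e_var_hat += prob * F(n, n - 1) * xbar * (1 - xbar)  # part a) estimator
    e_p2_hat += prob * F(s**2 - s, n**2 - n)             # part b) estimator

assert e_var_hat == p * (1 - p)
assert e_p2_hat == p**2
```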
Question #23: If $X_1, \ldots, X_n$ is a random sample from a Normal distribution such that each $X_i \sim N(\mu, 9)$ where $\mu$ is unknown, find the UMVUE for a) the 95th percentile, and b) $P(X_1 \leq c)$, where $c$ is a known constant. Hint: find the conditional distribution of $X_1$ given $\bar{X} = \bar{x}$, and apply the Rao-Blackwell Theorem with $T = u(X_1)$, where we define $u(x) = 1\{x \leq c\}$.

a) The 95th percentile of a random variable $X$ from a $N(\mu, 9)$ distribution is the value of $q$ such that $P(X \leq q) = 0.95 \rightarrow P\left(\frac{X - \mu}{3} \leq \frac{q - \mu}{3}\right) = 0.95 \rightarrow P\left(Z \leq \frac{q - \mu}{3}\right) = 0.95$ where $Z \sim N(0, 1)$. From tabulations of the standard normal distribution function $\Phi(z)$, we know that $P(Z \leq 1.645) = 0.95$, so we equate $\frac{q - \mu}{3} = 1.645 \rightarrow q = 4.935 + \mu = \tau(\mu)$. This is what we wish to find a UMVUE for, but since the expectation of a constant is that constant itself, we simply need to find a UMVUE for $\mu$. We begin by verifying that the Normal distribution is a member of the Regular Exponential Class (REC) by noting that the density of $X \sim N(\mu, 9)$ can be written as $f(x; \mu) = \frac{1}{\sqrt{18\pi}}\exp\left\{-\frac{1}{18}(x - \mu)^2\right\} = \frac{1}{\sqrt{18\pi}}\exp\left\{-\frac{x^2}{18} + \frac{\mu x}{9} - \frac{\mu^2}{18}\right\}$, where we have that $t_1(x) = x^2$ and $t_2(x) = x$. Thus, the Normal distribution is a member of the REC by Definition 10.4.2. We then use Theorem 10.4.2, which guarantees the existence of sufficient statistics for distributions from the REC, to construct the sufficient statistics $S_1 = \sum_{i=1}^{n} t_1(X_i) = \sum_{i=1}^{n} X_i^2$ and $S_2 = \sum_{i=1}^{n} t_2(X_i) = \sum_{i=1}^{n} X_i$. Since the sample mean is an unbiased estimator for the population mean, we have that $E(T) = E(\bar{X}) = E\left(\frac{S_2}{n}\right) = \mu$. Thus, an unbiased estimator for $\tau(\mu) = q = 4.935 + \mu$ is given by $T^* = 4.935 + \bar{X}$, which is also a UMVUE of $\tau(\mu)$ by the Lehmann-Scheffé Theorem.
b) Note that we are trying to estimate $P(X_1 \leq c) = P\left(\frac{X_1 - \mu}{3} \leq \frac{c - \mu}{3}\right) = \Phi\left(\frac{c - \mu}{3}\right) = \tau(\mu)$, where $\Phi: \mathbb{R} \to (0, 1)$ is the cumulative distribution function of $Z \sim N(0, 1)$. Since $\tau(\mu)$ is a nonlinear function of $\mu$, we cannot simply insert $\bar{X}$ to obtain a UMVUE. To find an unbiased estimator, we note that $u(X_1) = 1\{X_1 \leq c\}$ is unbiased for $\tau(\mu)$ since we have $E[u(X_1)] = E[1\{X_1 \leq c\}] = P(X_1 \leq c) = \tau(\mu)$. But since it is not a function of the complete sufficient statistic $S_2 = \sum_{i=1}^{n} X_i$, this estimator cannot be a UMVUE. However, the Rao-Blackwell Theorem states that $E[u(X_1)|S_2] = E[1\{X_1 \leq c\}|S_2]$ will also be unbiased and will be a function of $S_2 = \sum_{i=1}^{n} X_i$. The Lehmann-Scheffé Theorem then guarantees that $E[1\{X_1 \leq c\}|S_2]$ will be a UMVUE. In order to find this, we must compute the conditional distribution of $X_1$ given $S_2$. We know that the random variable $S_2 = \sum_{i=1}^{n} X_i \sim N(n\mu, 9n)$ and that $X_1 = x, S_2 = s$ is equivalent to $X_1 = x, \sum_{i=2}^{n} X_i = s - x$. This implies that $f_{X_1|S_2}(x|s) = \cdots = \frac{1}{\sqrt{2\pi}\sigma'}\exp\left\{-\frac{(x - \mu')^2}{2(\sigma')^2}\right\}$, where $\mu' = \frac{s}{n}$ and $(\sigma')^2 = \frac{9(n-1)}{n}$. Therefore, if we let $A \sim N\left(\frac{s}{n}, \frac{9(n-1)}{n}\right)$, we have that $E[1\{X_1 \leq c\}|S_2 = s] = P(A \leq c) = \Phi\left(\frac{c - s/n}{3\sqrt{(n-1)/n}}\right)$, which is a UMVUE for $\Phi\left(\frac{c - \mu}{3}\right) = \tau(\mu)$.
Question #25: If $X_1, \ldots, X_n$ is a random sample from the probability density function $f(x; \theta) = \theta x^{\theta - 1} 1\{0 < x < 1\}$ where $\theta > 0$ is the unknown parameter, find the UMVUE for a) $\tau(\theta) = \frac{1}{\theta}$ by using the fact that $E[-\ln(X)] = \frac{1}{\theta}$, and b) the unknown parameter $\theta$.

a) We first verify that the density is a member of the REC by noting that it can be written as $f(x; \theta) = \theta x^{\theta - 1} = \exp\{\ln[\theta x^{\theta - 1}]\} = \exp\{\ln(\theta) + (\theta - 1)\ln(x)\} = \theta\exp\{(\theta - 1)\ln(x)\}$, where $t_1(x) = \ln(x)$. We then use Theorem 10.4.2, which guarantees the existence of sufficient statistics for REC distributions, to construct the sufficient statistic $S_1 = \sum_{i=1}^{n} t_1(X_i) = \sum_{i=1}^{n} \ln(X_i)$. Next, we appeal to the Rao-Blackwell Theorem in justifying the use of $S_1$ (or any one-to-one function of it) in our search for a UMVUE for $\frac{1}{\theta}$. From the hint provided, we initially guess that $T = \frac{-S_1}{n} = \frac{\sum_{i=1}^{n} -\ln(X_i)}{n}$ and check that $E(T) = E\left[\frac{\sum_{i=1}^{n} -\ln(X_i)}{n}\right] = \frac{1}{n}\sum_{i=1}^{n} E[-\ln(X_i)] = \frac{1}{n} n \frac{1}{\theta} = \frac{1}{\theta}$. The Lehmann-Scheffé Theorem finally guarantees that $T = -\frac{1}{n}\sum_{i=1}^{n} \ln(X_i)$ is a UMVUE for $\frac{1}{\theta}$, since it states that any unbiased estimator which is a function of complete sufficient statistics is a UMVUE.
b) Any UMVUE of the unknown parameter $\theta$ must be a function of the complete and sufficient statistic $S_1 = \sum_{i=1}^{n} \ln(X_i)$ by the Lehmann-Scheffé Theorem. We begin by noting that $E(S_1) = E\left[\sum_{i=1}^{n} \ln(X_i)\right] = -\sum_{i=1}^{n} E[-\ln(X_i)] = -\frac{n}{\theta}$, and in general $E\left(\frac{1}{S_1}\right) \neq \frac{1}{E(S_1)}$, so we must compute $E\left(\frac{1}{-S_1}\right)$ directly. We do this by finding the distribution of $Y = -\ln(X)$ using the CDF technique, which shows that $Y \sim EXP\left(\frac{1}{\theta}\right)$ with density $f(y; \theta) = \theta e^{-\theta y} 1\{y > 0\}$. This is equivalent to $Y \sim GAMMA\left(\frac{1}{\theta}, 1\right)$, so by the Moment Generating Function technique, we see that $-S_1 = \sum_{i=1}^{n} -\ln(X_i) \sim GAMMA\left(\frac{1}{\theta}, n\right)$. We can thus calculate $E\left(\frac{1}{-S_1}\right) = \frac{\theta^n}{\Gamma(n)}\int_0^\infty \frac{1}{x} x^{n-1} e^{-\theta x}\,dx = \cdots = \frac{\theta}{n-1}$, which implies that $T = \frac{n-1}{-S_1} = \frac{n-1}{\sum_{i=1}^{n} -\ln(X_i)}$ is an unbiased estimator of $\theta$. Then the Lehmann-Scheffé Theorem guarantees that it is also a UMVUE for the unknown parameter $\theta$.
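The fact $E[-\ln X] = 1/\theta$ that drives both parts can be confirmed by direct numerical integration of $-\ln(x)\,\theta x^{\theta - 1}$ on $(0, 1)$; the value $\theta = 2.5$ below is an arbitrary choice.

```python
from math import log

# Midpoint-rule check that E[-ln X] = 1/theta when X has density
# f(x; theta) = theta * x^(theta - 1) on (0, 1).
theta, N = 2.5, 200000

total = 0.0
for k in range(N):
    x = (k + 0.5) / N
    total += -log(x) * theta * x**(theta - 1) / N

print(total)  # close to 1/2.5 = 0.4
```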
Question #31: If $X_1, \ldots, X_n$ is a random sample from the probability density function $f(x; \theta) = \theta(1+x)^{-(1+\theta)} 1\{x > 0\}$ for unknown $\theta > 0$, find a) the MLE of $\theta$, b) a complete and sufficient statistic for $\theta$, c) the CRLB for $\tau(\theta) = \frac{1}{\theta}$, d) the UMVUE for $\tau(\theta) = \frac{1}{\theta}$, e) the mean and variance of the asymptotic normal distribution of the MLE, and f) the UMVUE for $\theta$.

a) We have $L(\theta) = \prod_{i=1}^{n} f(x_i; \theta) = \prod_{i=1}^{n} \theta(1 + x_i)^{-(1+\theta)} = \theta^n\left[\prod_{i=1}^{n}(1 + x_i)\right]^{-(1+\theta)}$ so that $\ln[L(\theta)] = n\ln(\theta) - (1 + \theta)\sum_{i=1}^{n}\ln(1 + x_i)$. Then we have that $\frac{d}{d\theta}\ln[L(\theta)] = \frac{n}{\theta} - \sum_{i=1}^{n}\ln(1 + x_i) = 0 \rightarrow \theta = \frac{n}{\sum_{i=1}^{n}\ln(1 + x_i)}$, so that $\hat{\theta}_{MLE} = \frac{n}{\sum_{i=1}^{n}\ln(1 + X_i)}$.
b) To check that it is a member of the REC, we verify that we can write the probability density function of $X$ as $f(x; \theta) = \theta(1+x)^{-(1+\theta)} = \exp\{\ln[\theta(1+x)^{-(1+\theta)}]\} = \exp\{\ln(\theta) - (1+\theta)\ln(1+x)\}$, where $t_1(x) = 1$ and $t_2(x) = \ln(1+x)$. Thus, $f(x; \theta)$ is a member of the REC and $S_2 = \sum_{i=1}^{n} t_2(X_i) = \sum_{i=1}^{n}\ln(1 + X_i)$ is a complete and sufficient statistic for the unknown parameter $\theta$ to be estimated.

c) Since $\tau(\theta) = \frac{1}{\theta}$, we have $[\tau'(\theta)]^2 = \left[-\frac{1}{\theta^2}\right]^2 = \frac{1}{\theta^4}$. Then we have that $f(x; \theta) = \theta(1+x)^{-(1+\theta)}$ so its log is $\ln f(x; \theta) = \ln(\theta) - (1+\theta)\ln(1+x)$ and $\frac{\partial}{\partial\theta}\ln f(x; \theta) = \frac{1}{\theta} - \ln(1+x)$. Finally, $\frac{\partial^2}{\partial\theta^2}\ln f(x; \theta) = -\frac{1}{\theta^2}$ so that $-E\left[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\right] = \frac{1}{\theta^2}$. These results combined allow us to conclude that $CRLB = \frac{1/\theta^4}{n(1/\theta^2)} = \frac{1}{n\theta^2}$.
d) We previously verified that this density is a member of the REC and that the statistic $S_2 = \sum_{i=1}^{n}\ln(1 + X_i)$ is complete and sufficient for $\theta$. Next, we use the Rao-Blackwell Theorem in justifying the use of $S_2$ (or any one-to-one function of it) in our search for a UMVUE for $\frac{1}{\theta}$. In order to compute $E(S_2)$, we need to find the distribution of the random variable $Y = \ln(1 + X)$, which we do using the CDF technique. We thus have that $F_Y(y) = P(Y \leq y) = P(\ln(1 + X) \leq y) = P(X \leq e^y - 1) = F_X(e^y - 1)$, so that then $f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{d}{dy}F_X(e^y - 1) = e^y f_X(e^y - 1) = e^y[\theta(1 + e^y - 1)^{-(1+\theta)}] = \theta e^y (e^y)^{-(1+\theta)} = \theta e^{y - y - \theta y} = \theta e^{-\theta y}$ whenever $y > 0$. It is immediately clear that $Y \sim EXP\left(\frac{1}{\theta}\right)$, so that $E(Y) = E[\ln(1 + X)] = \frac{1}{\theta}$. This allows us to find $E(S_2) = E\left[\sum_{i=1}^{n}\ln(1 + X_i)\right] = \sum_{i=1}^{n} E[\ln(1 + X_i)] = \frac{n}{\theta}$. Since we want an unbiased estimator for $\frac{1}{\theta}$, it is clear that $T = \frac{S_2}{n} = \frac{1}{n}\sum_{i=1}^{n}\ln(1 + X_i)$ will suffice by the Lehmann-Scheffé Theorem.
e) We previously found that the MLE for $\theta$ is $\hat{\theta}_{MLE} = \frac{n}{\sum_{i=1}^{n}\ln(1 + X_i)}$. From Chapter 9, we know that the MLE for some unknown parameter $\theta$ has an asymptotic normal distribution with $\mu = \theta$ and $\sigma^2 = CRLB$; that is, $\hat{\theta}_{MLE} \sim N(\theta, CRLB)$ for large $n$. We must therefore find the Cramer-Rao Lower Bound, which can be easily done from the work in part c) above with $\tau(\theta) = \theta$, so that $CRLB = \frac{\theta^2}{n}$. This means that we have $\hat{\theta}_{MLE} \sim N\left(\theta, \frac{\theta^2}{n}\right)$ for large $n$. We can similarly argue for the MLE of $\tau(\theta) = \frac{1}{\theta}$, where we see that $\hat{\tau}_{MLE} = \frac{1}{\hat{\theta}_{MLE}} = \frac{1}{n}\sum_{i=1}^{n}\ln(1 + X_i)$ by the Invariance Property of the Maximum Likelihood Estimator. Then using the work done in part c) above for the Cramer-Rao Lower Bound, we can conclude that $\hat{\tau}_{MLE} \sim N\left(\frac{1}{\theta}, \frac{1}{n\theta^2}\right)$ for large $n$.
f) We previously verified that this density is a member of the REC and that the statistic $S_2 = \sum_{i=1}^{n}\ln(1 + X_i)$ is complete and sufficient for $\theta$, where $E(S_2) = \frac{n}{\theta}$. As in the previous question, we have that $E\left(\frac{1}{S_2}\right) = E\left(\frac{1}{\sum_{i=1}^{n}\ln(1 + X_i)}\right) = \frac{\theta}{n-1}$, which implies that $T = \frac{n-1}{S_2} = \frac{n-1}{\sum_{i=1}^{n}\ln(1 + X_i)}$ is unbiased and a UMVUE for the unknown parameter $\theta$.
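As a numerical cross-check of the key distributional fact behind parts d) and f), the integral $E[\ln(1+X)] = \int_0^\infty \ln(1+x)\,\theta(1+x)^{-(1+\theta)}\,dx$ should equal $1/\theta$. Here $\theta = 2$ is arbitrary and the integral is truncated at a large upper limit.

```python
from math import log

# Midpoint-rule check that E[ln(1+X)] = 1/theta for the density
# f(x; theta) = theta * (1+x)^(-(1+theta)), supporting Y = ln(1+X) ~ EXP(1/theta).
theta = 2.0
N, upper = 400000, 2000.0
h = upper / N

total = 0.0
for k in range(N):
    x = (k + 0.5) * h
    total += log(1 + x) * theta * (1 + x)**(-(1 + theta)) * h

print(total)  # close to 1/theta = 0.5
```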
Chapter #11 – Interval Estimation
Question #5: If $X_1, \ldots, X_n$ is a random sample from $f_X(x; \eta) = e^{\eta - x} 1\{x > \eta\}$ with $\eta$ unknown, then a) show that $Q = X_{1:n} - \eta$ is a pivotal quantity and find its distribution, and b) derive a $100\gamma\%$ equal-tailed confidence interval for the unknown parameter $\eta$.

a) We first find the distribution of the smallest order statistic $X_{1:n}$ using the formula $g_1(y; \eta) = nf(y; \eta)[1 - F(y; \eta)]^{n-1}$. We thus need the CDF of the population, which is given by $F_X(x; \eta) = \int_\eta^x f_X(t; \eta)\,dt = \int_\eta^x e^{\eta - t}\,dt = e^\eta\left[-e^{-t}\right]_\eta^x = e^\eta(-e^{-x} + e^{-\eta}) = 1 - e^{\eta - x}$ whenever $x > \eta$. We therefore have that $g_1(y; \eta) = ne^{\eta - y}[1 - (1 - e^{\eta - y})]^{n-1} = ne^{\eta - y}[e^{\eta - y}]^{n-1} = ne^{n(\eta - y)}$ when $y > \eta$. Now that we have the density of $X_{1:n}$, we can use the CDF technique to find the density of $Q = X_{1:n} - \eta$. Thus, we have $F_Q(q) = P(Q \leq q) = P(X_{1:n} - \eta \leq q) = P(X_{1:n} \leq q + \eta) = F_1(q + \eta)$, so $f_Q(q) = \frac{d}{dq}F_1(q + \eta) = g_1(q + \eta; \eta) = ne^{-nq}$ whenever $q + \eta > \eta \rightarrow q > 0$. This reveals that $Q = X_{1:n} - \eta \sim EXP\left(\frac{1}{n}\right)$, so it is clearly a pivotal quantity since it is a function of $\eta$ but its distribution does not depend on $\eta$.
b) We have $P(x_{(1-\gamma)/2} < Q < x_{(1+\gamma)/2}) = \gamma \rightarrow P(x_{(1-\gamma)/2} < X_{1:n} - \eta < x_{(1+\gamma)/2}) = \gamma$, so after solving for the unknown parameter we obtain the $100\gamma\%$ equal-tailed confidence interval $P(X_{1:n} - x_{(1+\gamma)/2} < \eta < X_{1:n} - x_{(1-\gamma)/2}) = \gamma$. This can also be expressed as the random interval $(X_{1:n} - x_{(1+\gamma)/2}, X_{1:n} - x_{(1-\gamma)/2})$. Finally, we know that the $EXP\left(\frac{1}{n}\right)$ distribution has CDF $F_Q(q) = 1 - e^{-nq}$, so that $F_Q(x_\alpha) = \alpha$ implies $1 - e^{-nx_\alpha} = \alpha$. We solve this last equality for $x_\alpha = -\frac{1}{n}\ln(1 - \alpha)$. This means that the confidence interval becomes $\left(X_{1:n} + \frac{1}{n}\ln\left(\frac{1-\gamma}{2}\right),\ X_{1:n} + \frac{1}{n}\ln\left(\frac{1+\gamma}{2}\right)\right)$, where the two endpoints are found by substituting $\alpha = \frac{1+\gamma}{2}$ and $\alpha = \frac{1-\gamma}{2}$, respectively, into the expression $x_\alpha = -\frac{1}{n}\ln(1 - \alpha)$.
Question #7: If $X_1, \ldots, X_n$ is a random sample from $f_X(x; \theta) = \frac{2}{\theta^2} x e^{-x^2/\theta^2} 1\{x > 0\}$ with unknown parameter $\theta$, a) show that $Q = \frac{2\sum_{i=1}^{n} X_i^2}{\theta^2} \sim \chi^2(2n)$, b) use $Q = \frac{2\sum_{i=1}^{n} X_i^2}{\theta^2}$ to derive an equal-tailed $100\gamma\%$ confidence interval for $\theta$, c) find a lower $100\gamma\%$ confidence limit for $P(X > t) = e^{-t^2/\theta^2}$, d) find an upper $100\gamma\%$ confidence limit for the $p$th percentile.

a) Since $f_X(x; \theta) = \frac{2}{\theta^2} x e^{-x^2/\theta^2} 1\{x > 0\}$, we know that $X \sim WEI(\theta, 2)$. The CDF technique then reveals that $X^2 \sim EXP(\theta^2)$ so that $\sum_{i=1}^{n} X_i^2 \sim GAMMA(\theta^2, n)$. A final application of the CDF technique shows that $\frac{2}{\theta^2}\sum_{i=1}^{n} X_i^2 \sim \chi^2(2n)$, proving the desired result. This also shows that $Q = \frac{2\sum_{i=1}^{n} X_i^2}{\theta^2}$ is a pivotal quantity for the unknown parameter $\theta$.
b) We find that the confidence interval is $P\left(\chi^2_{\frac{1-\gamma}{2}}(2n) < Q < \chi^2_{\frac{1+\gamma}{2}}(2n)\right) = \gamma \rightarrow P\left(\chi^2_{\frac{1-\gamma}{2}}(2n) < \frac{2}{\theta^2}\sum_{i=1}^{n} X_i^2 < \chi^2_{\frac{1+\gamma}{2}}(2n)\right) = \gamma \rightarrow P\left(\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{\frac{1+\gamma}{2}}(2n)} < \theta^2 < \frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{\frac{1-\gamma}{2}}(2n)}\right) = \gamma$. Taking square roots gives the desired random interval $\left(\sqrt{\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{\frac{1+\gamma}{2}}(2n)}},\ \sqrt{\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{\frac{1-\gamma}{2}}(2n)}}\right)$.

c) From the work done above, a lower confidence limit for $\theta$ is $\sqrt{\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{\gamma}(2n)}}$. Since the quantity $\tau(\theta) = e^{-t^2/\theta^2}$ is a monotonically increasing function of $\theta$, we can simply substitute $\sqrt{\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{\gamma}(2n)}}$ for $\theta$ into the expression $\tau(\theta) = e^{-t^2/\theta^2}$ by Corollary 11.3.1.

d) We must solve the equation $P(X > t_p) = 1 - p$ for $t_p$. From the question above, we are given that $P(X > t) = e^{-t^2/\theta^2}$, so we must solve $e^{-t_p^2/\theta^2} = 1 - p$ for $t_p$, which gives $t_p = \theta\sqrt{-\ln(1 - p)}$. By the same reasoning as above, we substitute the upper confidence limit $\sqrt{\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{1-\gamma}(2n)}}$ in for $\theta$ into the expression $t_p = \tau(\theta) = \theta\sqrt{-\ln(1 - p)}$ to obtain $\sqrt{\frac{2\sum_{i=1}^{n} X_i^2}{\chi^2_{1-\gamma}(2n)}\left[-\ln(1 - p)\right]}$.
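The pivot in a) can be checked by simulation: drawing $X = \theta\sqrt{-\ln U}$ (inverse-CDF sampling from this Weibull density with $U \sim UNIF(0, 1)$), the quantity $Q = 2\sum X_i^2/\theta^2$ should have the $\chi^2(2n)$ mean of $2n$ regardless of $\theta$. The constants below are arbitrary.

```python
import random
from math import log, sqrt

# Monte Carlo check of the pivotal quantity Q = 2*sum(X_i^2)/theta^2.
# Since F(x) = 1 - exp(-x^2/theta^2), inverse-CDF sampling gives
# X = theta * sqrt(-ln(U)), and Q should average the chi-square mean 2n.
random.seed(1)
theta, n, reps = 1.5, 4, 20000

qs = []
for _ in range(reps):
    xs = [theta * sqrt(-log(random.random())) for _ in range(n)]
    qs.append(2 * sum(x * x for x in xs) / theta**2)

q_mean = sum(qs) / reps
print(q_mean)  # close to 2n = 8
```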
Question #8: If $X_1, \ldots, X_n$ is a random sample from $X \sim UNIF(0, \theta)$ with $\theta > 0$ unknown and $X_{n:n}$ is the largest order statistic, then a) find the probability that the random interval $(X_{n:n}, 2X_{n:n})$ contains $\theta$, and b) find the value of the constant $c$ such that the random interval $(X_{n:n}, cX_{n:n})$ is a $100(1-\alpha)\%$ confidence interval for the parameter $\theta$.

a) We have that $\theta \in (X_{n:n}, 2X_{n:n})$ if and only if $\theta < 2X_{n:n}$, since the inequality $X_{n:n} < \theta$ will always be true by the definition of the density. We must therefore compute $P(2X_{n:n} > \theta) = P\left(X_{n:n} > \frac{\theta}{2}\right) = 1 - P\left(X_{n:n} \leq \frac{\theta}{2}\right) = 1 - \left[P\left(X_i \leq \frac{\theta}{2}\right)\right]^n = 1 - 2^{-n}$.

b) As above, we have that $P[\theta \in (X_{n:n}, cX_{n:n})] = 1 - c^{-n}$, so if we set this equal to $1 - \alpha$ and solve for the value of the constant, we obtain $1 - c^{-n} = 1 - \alpha \rightarrow c = \alpha^{-1/n}$.
Question #13: Let $X_1, \ldots, X_n$ be a random sample from $X \sim GAMMA(\theta, \kappa)$ such that their common density is $f(x; \theta, \kappa) = \frac{1}{\theta^\kappa\Gamma(\kappa)} x^{\kappa - 1} e^{-x/\theta} 1\{x > 0\}$ with the parameter $\kappa$ known but $\theta$ unknown. Derive a $100(1-\alpha)\%$ equal-tailed confidence interval for $\theta$ based on the sufficient statistic for the unknown parameter $\theta$.

• We begin by noting that the given density is a member of the Regular Exponential Class (REC) since $f(x; \theta, \kappa) = \theta^{-\kappa}\Gamma(\kappa)^{-1} x^{\kappa - 1} e^{-x/\theta} = c(\theta)h(x)\exp\{q_1(\theta)t_1(x)\}$ where $t_1(x) = x$. Then we know that $S = \sum_{i=1}^{n} t_1(X_i) = \sum_{i=1}^{n} X_i$ is complete sufficient for the unknown parameter $\theta$. Next, we need to create a pivotal quantity from $S$; from the distribution in question 7, which is similar, we guess that $Q = \frac{2}{\theta}S = \frac{2}{\theta}\sum_{i=1}^{n} X_i$ might be appropriate. We now derive the distribution of $Q$ and, by showing that it is simultaneously a function of $\theta$ while its density does not depend on $\theta$, will verify that it is a pivotal quantity. Since the $X_i \sim GAMMA(\theta, \kappa)$, we know that the random variable $A = \sum_{i=1}^{n} X_i \sim GAMMA(\theta, n\kappa)$. Then $F_Q(q) = P(Q \leq q) = P\left(\frac{2}{\theta}\sum_{i=1}^{n} X_i \leq q\right) = P\left(A \leq \frac{\theta q}{2}\right) = F_A\left(\frac{\theta q}{2}\right)$, so that $f_Q(q) = \frac{d}{dq}F_A\left(\frac{\theta q}{2}\right) = f_A\left(\frac{\theta q}{2}\right)\frac{\theta}{2} = \frac{1}{\theta^{n\kappa}\Gamma(n\kappa)}\left(\frac{\theta q}{2}\right)^{n\kappa - 1} e^{-\left(\frac{\theta q}{2}\right)/\theta}\frac{\theta}{2} = \cdots = \frac{1}{2^{n\kappa}\Gamma(n\kappa)} q^{n\kappa - 1} e^{-q/2}$, which shows that the transformed random variable $Q = \frac{2}{\theta}\sum_{i=1}^{n} X_i \sim GAMMA(2, n\kappa) \equiv \chi^2(2n\kappa)$. This allows us to compute $P\left[\chi^2_{\alpha/2}(2n\kappa) < Q < \chi^2_{1-\alpha/2}(2n\kappa)\right] = 1 - \alpha$, so after substituting in for $Q$ and solving for $\theta$, we have $P\left[\frac{2\sum_{i=1}^{n} X_i}{\chi^2_{1-\alpha/2}(2n\kappa)} < \theta < \frac{2\sum_{i=1}^{n} X_i}{\chi^2_{\alpha/2}(2n\kappa)}\right] = 1 - \alpha$, which is the desired $100(1-\alpha)\%$ equal-tailed confidence interval for $\theta$ based on the sufficient statistic $S = \sum_{i=1}^{n} X_i$ for the unknown parameter $\theta$.