Final – Practice Exam
Question #1: If $X_1$ and $X_2$ denote a random sample of size $n = 2$ from the population with density function $f_X(x) = \frac{1}{x^2} 1\{x > 1\}$, then a) find the joint density of $U = X_1 X_2$ and $V = X_2$, and b) find the marginal probability density function of $U$.
a) By the independence of the two random variables $X_1$ and $X_2$, we have that their joint density is given by $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = x_1^{-2} x_2^{-2} = (x_1 x_2)^{-2}$ whenever $x_1 > 1$ and $x_2 > 1$. Then $u = x_1 x_2$ and $v = x_2$ imply that $x_2 = v$ and $x_1 = u/v$, so we can compute the Jacobian as $J = \det \begin{bmatrix} \partial x_1 / \partial u & \partial x_1 / \partial v \\ \partial x_2 / \partial u & \partial x_2 / \partial v \end{bmatrix} = \det \begin{bmatrix} 1/v & -u/v^2 \\ 0 & 1 \end{bmatrix} = \frac{1}{v}$. This implies that the joint density of $U$ and $V$ is $f_{UV}(u, v) = f_{X_1 X_2}\left(\frac{u}{v}, v\right) |J| = \left(\frac{u}{v} \cdot v\right)^{-2} \frac{1}{v} = \frac{1}{u^2 v}$ whenever $v > 1$ and $\frac{u}{v} > 1 \rightarrow u > v$. Note that although the density factors, the support $\{(u, v) : u > v > 1\}$ is not a product set, so $U$ and $V$ are not independent.
b) We have $f_U(u) = \int_1^u f_{UV}(u, v)\,dv = \int_1^u \frac{1}{u^2 v}\,dv = \frac{1}{u^2} [\ln(v)]_1^u = \frac{\ln(u)}{u^2}$ if $u > 1$.
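
As a numerical sanity check (an addition to the original solution, with an arbitrary sample size), the following Python sketch simulates $U = X_1 X_2$ by inverse-CDF sampling from $F_X(x) = 1 - 1/x$ and compares the empirical CDF of $U$ with the CDF implied by the derived marginal, $F_U(u) = 1 - (1 + \ln u)/u$:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    # Inverse-CDF sampling: F_X(x) = 1 - 1/x on (1, inf), so X = 1/(1 - U) for U ~ UNIF(0, 1).
    x1 = 1.0 / (1.0 - rng.uniform(size=n))
    x2 = 1.0 / (1.0 - rng.uniform(size=n))
    u = x1 * x2

    # CDF implied by f_U(u) = ln(u)/u^2, namely F_U(u) = 1 - (1 + ln(u))/u.
    for t in [2.0, 5.0, 20.0]:
        print(t, np.mean(u <= t), 1.0 - (1.0 + np.log(t)) / t)  # columns should agree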
Question #2: Find the Maximum Likelihood Estimators (MLE) of $\theta$ and $\alpha$ based on a random sample $X_1, \dots, X_n$ from $f_X(x; \theta, \alpha) = \frac{\alpha}{\theta^\alpha} x^{\alpha - 1} 1\{0 \le x \le \theta\}$ where $\theta > 0$ and $\alpha > 0$.
• We first note that the likelihood function is given by $L(\theta, \alpha) = \prod_{i=1}^n f_X(x_i; \theta, \alpha) = \prod_{i=1}^n \alpha \theta^{-\alpha} x_i^{\alpha - 1} 1\{0 \le x_i \le \theta\} = \alpha^n \theta^{-n\alpha} \left(\prod_{i=1}^n x_i\right)^{\alpha - 1} 1\{0 \le x_{1:n}\} 1\{x_{n:n} \le \theta\}$. Since $L$ is decreasing in $\theta$ for $\theta \ge x_{n:n}$, we see by inspection that $\hat{\theta}_{MLE} = X_{n:n}$. To find the MLE of $\alpha$, we compute $\ln[L(\theta, \alpha)] = n \ln(\alpha) - n\alpha \ln(\theta) + (\alpha - 1) \sum_{i=1}^n \ln(x_i)$, so that the derivative with respect to $\alpha$ is $\frac{\partial}{\partial \alpha} \ln[L(\theta, \alpha)] = \frac{n}{\alpha} - n \ln(\theta) + \sum_{i=1}^n \ln(x_i) = 0 \rightarrow \alpha = \frac{n}{n \ln(\theta) - \sum_{i=1}^n \ln(x_i)}$. Since the second derivative $-\frac{n}{\alpha^2}$ is negative, this critical point is a maximum, and we have found that $\hat{\alpha}_{MLE} = \frac{n}{n \ln(X_{n:n}) - \sum_{i=1}^n \ln(X_i)}$.
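
To check these closed forms numerically (an addition; the true parameter values below are hypothetical), one can simulate from the density via its inverse CDF, $F(x) = (x/\theta)^\alpha$, so that $X = \theta U^{1/\alpha}$:

    import numpy as np

    rng = np.random.default_rng(1)
    theta, alpha, n = 2.0, 3.0, 50_000  # hypothetical true values
    x = theta * rng.uniform(size=n) ** (1.0 / alpha)  # inverse-CDF sampling

    theta_mle = x.max()  # X_{n:n}
    alpha_mle = n / (n * np.log(theta_mle) - np.log(x).sum())
    print(theta_mle, alpha_mle)  # should be close to 2.0 and 3.0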
Question #3: If $X_1, \dots, X_n$ is a random sample from $N(\mu, \sigma^2)$, then a) find a UMVUE for $\mu$ when $\sigma^2$ is known, and b) find a UMVUE for $\sigma^2$ when $\mu$ is known.
a) We have previously shown that the Normal distribution is a member of the Regular Exponential Class (REC), where $S_1 = \sum_{i=1}^n X_i$ and $S_2 = \sum_{i=1}^n X_i^2$ are jointly complete sufficient statistics for the unknown parameters $\mu$ and $\sigma^2$. Since we have that $E(S_1) = E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n E(X_i) = \sum_{i=1}^n \mu = n\mu$, the statistic $T = \frac{1}{n} S_1 = \frac{1}{n} \sum_{i=1}^n X_i = \bar{X}$ will be unbiased for the unknown parameter $\mu$. The Lehmann-Scheffé Theorem then guarantees that $T = \bar{X}$ is a UMVUE for $\mu$ when $\sigma^2$ is known.
b) Since $E(S_2) = E\left(\sum_{i=1}^n X_i^2\right) = \sum_{i=1}^n E(X_i^2) = \sum_{i=1}^n \left[Var(X_i) + E(X_i)^2\right] = \sum_{i=1}^n (\sigma^2 + \mu^2) = n(\sigma^2 + \mu^2)$, the statistic $T = \frac{1}{n} S_2 - \mu^2$ will be unbiased for the unknown parameter $\sigma^2$ when $\mu$ is known. The Lehmann-Scheffé Theorem then states that $T$ is a UMVUE for $\sigma^2$.
Question #4: Consider a random sample $X_1, \dots, X_n$ from $f_X(x; \theta, \alpha) = \frac{\alpha}{\theta^\alpha} x^{\alpha - 1} 1\{0 \le x \le \theta\}$, where $\theta > 0$ is unknown but $\alpha > 0$ is known. Find the constants $1 < a < b$, depending on the values of $n$ and $\alpha$, such that $(a X_{n:n}, b X_{n:n})$ is a 90% confidence interval for $\theta$.
• We first compute the CDF of the population: $F_X(x; \theta, \alpha) = \int_0^x f_X(t; \theta, \alpha)\,dt = \frac{\alpha}{\theta^\alpha} \int_0^x t^{\alpha - 1}\,dt = \frac{\alpha}{\theta^\alpha} \left[\frac{1}{\alpha} t^\alpha\right]_0^x = \left(\frac{x}{\theta}\right)^\alpha$ whenever $0 \le x \le \theta$. Then we use this to compute the CDF of the largest order statistic: $F_{X_{n:n}}(x) = P(X_{n:n} \le x) = P(X_i \le x)^n = \left(\frac{x}{\theta}\right)^{n\alpha}$. In order for $(a X_{n:n}, b X_{n:n})$ to be a 90% confidence interval for $\theta$, it must be the case that $P(a X_{n:n} \ge \theta) = 0.05$ and $P(b X_{n:n} \le \theta) = 0.05$. We solve each of these two equations for the unknown constants: $P(a X_{n:n} \ge \theta) = 0.05 \rightarrow P\left(X_{n:n} \le \frac{\theta}{a}\right) = 0.95 \rightarrow F_{X_{n:n}}\left(\frac{\theta}{a}\right) = 0.95 \rightarrow \left(\frac{\theta/a}{\theta}\right)^{n\alpha} = 0.95 \rightarrow \frac{1}{a^{n\alpha}} = 0.95 \rightarrow a = \left(\frac{20}{19}\right)^{1/n\alpha}$. Similarly, $P(b X_{n:n} \le \theta) = 0.05 \rightarrow F_{X_{n:n}}\left(\frac{\theta}{b}\right) = 0.05 \rightarrow \frac{1}{b^{n\alpha}} = 0.05 \rightarrow b = 20^{1/n\alpha}$.
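
A coverage simulation (an addition; the values of $n$, $\alpha$, and $\theta$ below are arbitrary) can confirm that the interval covers $\theta$ about 90% of the time:

    import numpy as np

    rng = np.random.default_rng(2)
    n, alpha, theta, reps = 20, 2.0, 5.0, 100_000  # arbitrary choices
    a = (20.0 / 19.0) ** (1.0 / (n * alpha))
    b = 20.0 ** (1.0 / (n * alpha))

    # Inverse-CDF sampling: F(x) = (x/theta)^alpha, so X = theta * U^(1/alpha).
    x = theta * rng.uniform(size=(reps, n)) ** (1.0 / alpha)
    m = x.max(axis=1)  # X_{n:n} for each replication
    print(np.mean((a * m < theta) & (theta < b * m)))  # should be close to 0.90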
Question #5: If $X_1$ and $X_2$ are independent and identically distributed from $EXP(\theta)$ such that their density is $f_X(x) = \frac{1}{\theta} e^{-x/\theta} 1\{x > 0\}$, then find the joint density of $U = X_1$ and $V = \frac{X_2}{X_1}$.
• We have $u = x_1$ and $v = x_2 / x_1$, so that $x_1 = u$ and $x_2 = uv$, which gives $f_{UV}(u, v) = f_{X_1 X_2}(u, uv) \left|\det \begin{bmatrix} 1 & 0 \\ v & u \end{bmatrix}\right| = \left[\frac{1}{\theta} e^{-u/\theta}\right] \left[\frac{1}{\theta} e^{-uv/\theta}\right] [u] = \frac{u}{\theta^2} e^{-u(1+v)/\theta}$ whenever we have that $u > 0$ and $uv > 0 \rightarrow v > 0$.
Question #6: Let $X_1, \dots, X_n$ be a random sample from $WEI(\theta, \beta)$ with known $\beta$ such that their common density is $f_X(x; \theta, \beta) = \frac{\beta}{\theta^\beta} x^{\beta - 1} e^{-(x/\theta)^\beta} 1\{x > 0\}$. Find the MLE of $\theta$.

• We have $L(\theta, \beta) = \prod_{i=1}^n \frac{\beta}{\theta^\beta} x_i^{\beta - 1} e^{-(x_i/\theta)^\beta} = \left(\beta \theta^{-\beta}\right)^n \left(\prod_{i=1}^n x_i\right)^{\beta - 1} e^{-\frac{1}{\theta^\beta} \sum_{i=1}^n x_i^\beta}$, so that $\ln[L(\theta, \beta)] = n \ln(\beta) - n\beta \ln(\theta) + (\beta - 1) \sum_{i=1}^n \ln(x_i) - \frac{1}{\theta^\beta} \sum_{i=1}^n x_i^\beta$. Then we can calculate that $\frac{\partial}{\partial \theta} \ln[L(\theta, \beta)] = -\frac{n\beta}{\theta} + \frac{\beta}{\theta^{\beta+1}} \sum_{i=1}^n x_i^\beta = 0 \rightarrow \frac{n\beta}{\theta} = \frac{\beta}{\theta^{\beta+1}} \sum_{i=1}^n x_i^\beta \rightarrow \theta^\beta = \frac{1}{n} \sum_{i=1}^n x_i^\beta$. Therefore, we have found $\hat{\theta}_{MLE} = \left(\frac{1}{n} \sum_{i=1}^n X_i^\beta\right)^{1/\beta}$.
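
As a check (an addition; the parameter values are hypothetical), note that numpy's `Generator.weibull(beta)` draws from the standard Weibull with scale 1, so multiplying by $\theta$ matches the density above:

    import numpy as np

    rng = np.random.default_rng(3)
    theta, beta, n = 2.5, 1.7, 100_000  # hypothetical true values
    x = theta * rng.weibull(beta, size=n)  # scale-theta Weibull, matching f_X above

    theta_mle = (np.mean(x ** beta)) ** (1.0 / beta)
    print(theta_mle)  # should be close to 2.5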
Question #7: Use the Cramér-Rao Lower Bound to show that $\bar{X}$ is a UMVUE for $\theta$ based on a random sample of size $n$ from an $EXP(\theta)$ distribution where $f_X(x; \theta) = \frac{1}{\theta} e^{-x/\theta} 1\{x > 0\}$.
• We first find the Cramér-Rao Lower Bound; since $\tau(\theta) = \theta$, we have $[\tau'(\theta)]^2 = 1$. Then we have $\ln f(X; \theta) = -\ln(\theta) - \frac{X}{\theta}$ so that $\frac{\partial}{\partial \theta} \ln f(X; \theta) = -\frac{1}{\theta} + \frac{X}{\theta^2}$ and $\frac{\partial^2}{\partial \theta^2} \ln f(X; \theta) = \frac{1}{\theta^2} - \frac{2X}{\theta^3}$. Finally, $-E\left[\frac{\partial^2}{\partial \theta^2} \ln f(X; \theta)\right] = -\frac{1}{\theta^2} + \frac{2}{\theta^3} E(X) = \frac{2\theta}{\theta^3} - \frac{1}{\theta^2} = \frac{1}{\theta^2}$ implies that $CRLB = \frac{[\tau'(\theta)]^2}{n \cdot \frac{1}{\theta^2}} = \frac{\theta^2}{n}$. To verify that $\bar{X}$ is a UMVUE for $\theta$, we first show that $E(\bar{X}) = E\left(\frac{1}{n} \sum_{i=1}^n X_i\right) = \frac{1}{n} \sum_{i=1}^n E(X_i) = \frac{1}{n} n\theta = \theta$. Then we check that the variance achieves the Cramér-Rao Lower Bound from above, so $Var(\bar{X}) = Var\left(\frac{1}{n} \sum_{i=1}^n X_i\right) = \frac{1}{n^2} \sum_{i=1}^n Var(X_i) = \frac{1}{n^2} n\theta^2 = \frac{\theta^2}{n} = CRLB$. Thus, $\bar{X}$ is a UMVUE for $\theta$.
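
As an illustration (an addition; $\theta$ and $n$ below are hypothetical), a simulation shows the sampling variance of $\bar{X}$ sitting at the bound $\theta^2 / n$:

    import numpy as np

    rng = np.random.default_rng(4)
    theta, n, reps = 3.0, 25, 200_000  # hypothetical values
    xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
    print(xbar.var(), theta**2 / n)  # the two values should nearly match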
Question #8: Use the Lehmann-Scheffé Theorem to show that $\bar{X}$ is a UMVUE for $\theta$ based on a random sample of size $n$ from an $EXP(\theta)$ distribution where $f_X(x; \theta) = \frac{1}{\theta} e^{-x/\theta} 1\{x > 0\}$.
• We know that $EXP(\theta)$ is a member of the Regular Exponential Class (REC) with $t_1(x) = x$, so the statistic $S = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i$ is complete sufficient for the parameter $\theta$. Then $E(S) = E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n E(X_i) = n\theta$ implies that the estimator $T = \frac{1}{n} S = \frac{1}{n} \sum_{i=1}^n X_i = \bar{X}$ is unbiased for $\theta$, so the Lehmann-Scheffé Theorem guarantees that it will be the Uniform Minimum Variance Unbiased Estimator of $\theta$.
Question #9: Find a $100(1-\alpha)\%$ confidence interval for $\theta$ based on a random sample of size $n$ from an $EXP(\theta)$ distribution where $f_X(x) = \frac{1}{\theta} e^{-x/\theta} 1\{x > 0\}$. Use the facts that $EXP(\theta) \equiv GAMMA(\theta, 1)$, that $\theta$ is a scale parameter, and that $GAMMA(2, n) \equiv \chi^2(2n)$.
• Since $\theta$ is a scale parameter, we know that $Q = \frac{\hat{\theta}_{MLE}}{\theta} = \frac{\bar{X}}{\theta} = \frac{\sum_{i=1}^n X_i}{n\theta}$ will be a pivotal quantity. Note that we used the fact that $\hat{\theta}_{MLE} = \bar{X}$, which was derived in a previous exercise. We begin by noting that since each $X_i \sim EXP(\theta) \equiv GAMMA(\theta, 1)$, we can conclude that $A = \sum_{i=1}^n X_i \sim GAMMA(\theta, n)$. In order to obtain a chi-square distributed pivotal quantity, we use the modified pivot $R = 2nQ = \frac{2}{\theta} \sum_{i=1}^n X_i$. To verify its distribution, we use the CDF technique: $F_R(r) = P(R \le r) = P\left(\frac{2}{\theta} \sum_{i=1}^n X_i \le r\right) = P\left(\sum_{i=1}^n X_i \le \frac{\theta r}{2}\right) = F_A\left(\frac{\theta r}{2}\right)$ implies that $f_R(r) = \frac{d}{dr} F_A\left(\frac{\theta r}{2}\right) = f_A\left(\frac{\theta r}{2}\right) \frac{\theta}{2} = \frac{1}{\theta^n \Gamma(n)} \left(\frac{\theta r}{2}\right)^{n-1} e^{-r/2}\, \frac{\theta}{2} = \frac{1}{2^n \Gamma(n)} r^{n-1} e^{-r/2}$. This proves that $R = \frac{2}{\theta} \sum_{i=1}^n X_i = \frac{2n\bar{X}}{\theta} \sim \chi^2(2n)$, so $P\left[\chi^2_{\alpha/2}(2n) < \frac{2n\bar{X}}{\theta} < \chi^2_{1-\alpha/2}(2n)\right] = 1 - \alpha \rightarrow P\left[\frac{2n\bar{X}}{\chi^2_{1-\alpha/2}(2n)} < \theta < \frac{2n\bar{X}}{\chi^2_{\alpha/2}(2n)}\right] = 1 - \alpha$ is the desired $100(1-\alpha)\%$ confidence interval for the unknown parameter $\theta$.
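
The interval is easy to compute and check in Python (an addition; the values of $\theta$, $n$, and $\alpha$ are hypothetical), using `scipy.stats.chi2.ppf` for the quantiles:

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    theta, n, alpha, reps = 2.0, 15, 0.10, 50_000  # hypothetical values
    lo_q = chi2.ppf(alpha / 2, df=2 * n)
    hi_q = chi2.ppf(1 - alpha / 2, df=2 * n)

    s = rng.exponential(scale=theta, size=(reps, n)).sum(axis=1)  # 2s/theta ~ chi2(2n)
    print(np.mean((2 * s / hi_q < theta) & (theta < 2 * s / lo_q)))  # close to 0.90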
Question #10: Let $X_1, \dots, X_n$ be independent and identically distributed from the population with CDF given by $F_X(x) = \frac{1}{1 + e^{-x}}$. Find the limiting distribution of $Y_n = X_{1:n} + \ln(n)$.
• We have that $F_{Y_n}(y) = P(Y_n \le y) = P(X_{1:n} + \ln(n) \le y) = P(X_{1:n} \le y - \ln(n)) = 1 - P(X_{1:n} > y - \ln(n)) = 1 - P(X_i > y - \ln(n))^n = 1 - \left(1 - F_X(y - \ln(n))\right)^n = 1 - \left(1 - \frac{1}{1 + e^{-y + \ln(n)}}\right)^n = 1 - \left(\frac{e^{\ln(n) - y}}{1 + e^{\ln(n) - y}}\right)^n = 1 - \left(\frac{n e^{-y}}{1 + n e^{-y}}\right)^n = 1 - \left(1 + \frac{1}{n e^{-y}}\right)^{-n} = 1 - \left(1 + \frac{e^y}{n}\right)^{-n}$. Then $\lim_{n \to \infty} F_{Y_n}(y) = 1 - \lim_{n \to \infty} \left(1 + \frac{e^y}{n}\right)^{-n} = 1 - e^{-e^y}$ is the limiting distribution, since $\lim_{n \to \infty} \left(1 + \frac{c}{n}\right)^{nb} = e^{cb}$ for all real numbers $c$ and $b$.
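
To see the convergence numerically (an addition; $n$ and the evaluation points are arbitrary), compare the empirical CDF of simulated $Y_n$ with the limit $1 - e^{-e^y}$; standard logistic draws come from `numpy.random.Generator.logistic`:

    import numpy as np

    rng = np.random.default_rng(6)
    n, reps = 1000, 100_000  # arbitrary choices
    x = rng.logistic(size=(reps, n))  # standard logistic, CDF 1/(1 + e^{-x})
    y = x.min(axis=1) + np.log(n)     # Y_n = X_{1:n} + ln(n)

    for t in [-2.0, 0.0, 1.0]:
        print(t, np.mean(y <= t), 1.0 - np.exp(-np.exp(t)))  # empirical vs. limit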
Question #11: Let $X_1, \dots, X_n$ be independent and identically distributed from the population with a PDF given by $f_X(x) = \frac{1}{2} 1\{-1 < x < 1\}$. Approximate $P\left(\sum_{i=1}^{100} X_i \le 0\right)$ using $\Phi(z)$.
• From the given density, we know that $X \sim UNIF(-1, 1)$ so $E(X) = 0$ and $Var(X) = \frac{1}{3}$. Then if we define $Y = \sum_{i=1}^{100} X_i$, we have that $E(Y) = E\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} E(X_i) = 0$ and $Var(Y) = Var\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} Var(X_i) = 100/3$. These facts allow us to compute $P\left(\sum_{i=1}^{100} X_i \le 0\right) = P(Y \le 0) = P\left(\frac{Y - 0}{\sqrt{100/3}} \le \frac{0 - 0}{\sqrt{100/3}}\right) \approx P(Z \le 0) = \Phi(0) = 1/2$.
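
A brief simulation (an addition) agrees with the approximation; in fact, since $Y$ is symmetric about zero, the probability here is exactly $1/2$:

    import numpy as np

    rng = np.random.default_rng(7)
    y = rng.uniform(-1, 1, size=(200_000, 100)).sum(axis=1)
    print(np.mean(y <= 0))  # should be close to 0.5 = Phi(0)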
Question #12: Suppose that $X$ is a random variable with density $f_X(x) = \frac{1}{2} e^{-|x|}$ for $x \in \mathbb{R}$. Compute the Moment Generating Function (MGF) of $X$ and use it to find $E(X^2)$.
• By definition, we have $M_X(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx = \int_{-\infty}^{\infty} e^{tx} \frac{1}{2} e^{-|x|}\,dx = \frac{1}{2} \int_0^{\infty} e^{tx} e^{-x}\,dx + \frac{1}{2} \int_{-\infty}^0 e^{tx} e^{x}\,dx = \frac{1}{2} \int_0^{\infty} e^{x(t-1)}\,dx + \frac{1}{2} \int_{-\infty}^0 e^{x(t+1)}\,dx$, which converges for $|t| < 1$. After integrating and collecting like terms, we obtain $M_X(t) = \frac{1}{2} \left[(t+1)^{-1} - (t-1)^{-1}\right]$. We then compute $\frac{d}{dt} M_X(t) = \frac{1}{2} \left[-(t+1)^{-2} + (t-1)^{-2}\right]$ and $\frac{d^2}{dt^2} M_X(t) = \frac{1}{2} \left[\frac{2}{(t+1)^3} - \frac{2}{(t-1)^3}\right]$, so that $E(X^2) = \frac{d^2}{dt^2} M_X(0) = \frac{1}{2} \left[\frac{2}{(0+1)^3} - \frac{2}{(0-1)^3}\right] = \frac{1}{2}(2 + 2) = 2$.
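
A symbolic check with sympy (an addition) confirms both results; note that $M_X(t)$ simplifies to $\frac{1}{1 - t^2}$:

    import sympy as sp

    t = sp.symbols('t')
    M = sp.Rational(1, 2) * ((t + 1)**-1 - (t - 1)**-1)
    print(sp.simplify(M))               # equivalent to 1/(1 - t**2)
    print(sp.diff(M, t, 2).subs(t, 0))  # 2, i.e. E(X^2)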
Question #13: Let $X$ be a random variable with density $f_X(x) = \frac{1}{4} 1\{-2 < x < 2\}$. Compute the probability density function of the transformed random variable $Y = X^3$.
• We have $F_Y(y) = P(Y \le y) = P(X^3 \le y) = P\left(X \le y^{1/3}\right) = F_X\left(y^{1/3}\right)$ so that $f_Y(y) = \frac{d}{dy} F_X\left(y^{1/3}\right) = f_X\left(y^{1/3}\right) \frac{1}{3} y^{-2/3} = \frac{1}{4} \cdot \frac{1}{3} y^{-2/3} = \frac{1}{12 y^{2/3}}$ if $-2 < y^{1/3} < 2 \rightarrow -8 < y < 8$.
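
A quick simulation check (an addition; we compare empirical CDF values rather than a histogram, to avoid the integrable singularity of $f_Y$ at $y = 0$): integrating the derived density gives $F_Y(y) = \frac{y^{1/3} + 2}{4}$ on $(-8, 8)$:

    import numpy as np

    rng = np.random.default_rng(8)
    y = rng.uniform(-2, 2, size=200_000) ** 3  # Y = X^3

    # CDF implied by f_Y(y) = 1/(12 y^(2/3)): F_Y(y) = (cbrt(y) + 2) / 4 on (-8, 8).
    for t in [-4.0, 0.5, 6.0]:
        print(t, np.mean(y <= t), (np.cbrt(t) + 2) / 4)  # columns should agree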
Question #14: Let $X_1, \dots, X_n$ be a random sample from the density $f_X(x) = \theta x^{\theta - 1}$ whenever $0 < x < 1$ and zero otherwise. Find a complete sufficient statistic for the parameter $\theta$.
• We begin by verifying that the density is a member of the Regular Exponential Class (REC) by showing that $f_X(x) = \exp\{\ln(\theta x^{\theta - 1})\} = \exp\{\ln(\theta) + (\theta - 1)\ln(x)\} = \theta \exp\{(\theta - 1)\ln(x)\}$, where $q_1(\theta) = \theta - 1$ and $t_1(x) = \ln(x)$. Then we know that the statistic $S = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n \ln(X_i)$ is complete sufficient for the parameter $\theta$.
Question #15: Let $X_1, \dots, X_n$ be a random sample from the density $f_X(x) = \theta x^{\theta - 1}$ whenever $0 < x < 1$ and zero otherwise. Find the Maximum Likelihood Estimator (MLE) for $\theta > 0$.
• We have $L(\theta) = \prod_{i=1}^n f_X(x_i) = \prod_{i=1}^n \theta x_i^{\theta - 1} = \theta^n \left(\prod_{i=1}^n x_i\right)^{\theta - 1}$ so that $\ln[L(\theta)] = n \ln(\theta) + (\theta - 1) \sum_{i=1}^n \ln(x_i)$ and $\frac{\partial}{\partial \theta} \ln[L(\theta)] = \frac{n}{\theta} + \sum_{i=1}^n \ln(x_i) = 0 \rightarrow \theta = \frac{-n}{\sum_{i=1}^n \ln(x_i)}$. Thus, we have found that the Maximum Likelihood Estimator is $\hat{\theta}_{MLE} = \frac{-n}{\sum_{i=1}^n \ln(X_i)}$.
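
A numerical check (an addition; the true $\theta$ is hypothetical): since $F(x) = x^\theta$ on $(0, 1)$, draws are obtained as $U^{1/\theta}$ for $U \sim UNIF(0, 1)$:

    import numpy as np

    rng = np.random.default_rng(9)
    theta, n = 4.0, 100_000  # hypothetical true value
    x = rng.uniform(size=n) ** (1.0 / theta)  # inverse-CDF sampling from F(x) = x^theta
    print(-n / np.log(x).sum())  # MLE, should be close to 4.0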
Question #16: Let $X_1, \dots, X_n$ be a random sample from the density $f_X(x) = \theta x^{\theta - 1}$ whenever $0 < x < 1$ and zero otherwise. Find the Method of Moments Estimator (MME) for $\theta > 0$.
• We have $E(X) = \int_0^1 x f_X(x)\,dx = \int_0^1 x \theta x^{\theta - 1}\,dx = \theta \int_0^1 x^\theta\,dx = \frac{\theta}{\theta + 1}\left[x^{\theta + 1}\right]_0^1 = \frac{\theta}{\theta + 1}$, so that $\mu_1' = M_1' \rightarrow \frac{\theta}{\theta + 1} = \bar{X} \rightarrow \theta(1 - \bar{X}) = \bar{X}$, so the desired estimator is $\hat{\theta}_{MME} = \frac{\bar{X}}{1 - \bar{X}}$.