Midterm #1 – Practice Exam
Question #1: Let $X_1$ and $X_2$ be independent and identically distributed random variables with probability density function
$$f_X(x) = \begin{cases} e^{-x} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$$
such that they come from the distribution $X \sim \mathrm{EXP}(1)$. Compute the joint density function of $U = X_1$ and $V = X_1 X_2$.

• Since $X_1$ and $X_2$ are independent, their joint density function is given by
$$f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = e^{-x_1} e^{-x_2} = e^{-(x_1 + x_2)}$$
whenever $x_1 > 0$, $x_2 > 0$. We have the transformation $u = x_1$ and $v = x_1 x_2$, so $x_1 = u$ and $x_2 = \frac{v}{x_1} = \frac{v}{u}$, so we can compute the Jacobian as
$$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial u} & \frac{\partial x_1}{\partial v} \\ \frac{\partial x_2}{\partial u} & \frac{\partial x_2}{\partial v} \end{bmatrix} = \det \begin{bmatrix} 1 & 0 \\ -\frac{v}{u^2} & \frac{1}{u} \end{bmatrix} = \frac{1}{u}.$$
This implies that the joint density is
$$f_{UV}(u, v) = f_{X_1 X_2}\!\left(u, \frac{v}{u}\right) |J| = \frac{1}{u}\, e^{-\left(u + \frac{v}{u}\right)}$$
whenever $u > 0$ and $v > 0$.
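As a quick numerical sanity check (a sketch, assuming `numpy` and `scipy` are available), we can verify that the derived joint density integrates to 1 and that the probability of a box under it matches a simulation of $(U, V) = (X_1, X_1 X_2)$:

```python
# Check f_UV(u, v) = (1/u) * exp(-(u + v/u)) on u > 0, v > 0: total mass
# should be 1, and P(U <= 1, V <= 1) should match a Monte Carlo estimate.
import numpy as np
from scipy import integrate

f_uv = lambda v, u: (1.0 / u) * np.exp(-(u + v / u))

# Total mass of the density (integrate v innermost, then u).
total, _ = integrate.dblquad(f_uv, 0, np.inf, 0, np.inf)

# P(U <= 1, V <= 1) from the density ...
p_box, _ = integrate.dblquad(f_uv, 0, 1, 0, 1)

# ... and from simulating U = X1, V = X1 * X2 with X1, X2 iid EXP(1).
rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, 10**6)
x2 = rng.exponential(1.0, 10**6)
p_mc = np.mean((x1 <= 1) & (x1 * x2 <= 1))

print(total, p_box, p_mc)
```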
Question #2: Let $Z$ be a standard normal random variable. Compute the Moment Generating Function (MGF) of $X = |Z|$ using the standard normal distribution function $\Phi(z)$.

• For some continuous random variable $X$ with density $f_X(x)$, its moment generating function is given by $M_X(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx$. Since $Z \sim N(0,1)$, we know that its density function is given by $f_Z(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$. We can therefore compute
$$M_{|Z|}(t) = E\big(e^{t|Z|}\big) = \int_{-\infty}^{\infty} e^{t|x|} \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} e^{tx - \frac{x^2}{2}}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} e^{\frac{t^2 - (x - t)^2}{2}}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} e^{\frac{t^2}{2}}\, e^{-\frac{(x - t)^2}{2}}\,dx.$$
Separating out the factor in $t$ and making the substitution $u = x - t$ with $du = dx$ then yields
$$2 e^{\frac{t^2}{2}} \int_{-t}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{u^2}{2}}\,du = 2 e^{\frac{t^2}{2}} \big[\Phi(\infty) - \Phi(-t)\big] = 2 e^{\frac{t^2}{2}} \big[1 - \Phi(-t)\big] = 2 e^{\frac{t^2}{2}}\, \Phi(t).$$
Thus, the Moment Generating Function of $X = |Z|$ is given by $M_{|Z|}(t) = 2\Phi(t)\, e^{\frac{t^2}{2}}$.
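The closed form can be checked against simulation (a sketch assuming `numpy` and `scipy`; the value $t = 0.5$ is an arbitrary test point):

```python
# Compare M_|Z|(t) = 2 * Phi(t) * exp(t^2 / 2) with a Monte Carlo
# estimate of E[exp(t * |Z|)] for Z ~ N(0, 1).
import numpy as np
from scipy.stats import norm

t = 0.5  # arbitrary test point
closed_form = 2 * norm.cdf(t) * np.exp(t**2 / 2)

rng = np.random.default_rng(0)
z = rng.standard_normal(10**6)
mc_estimate = np.mean(np.exp(t * np.abs(z)))

print(closed_form, mc_estimate)
```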
Question #3: Let $X_1$ and $X_2$ be independent random variables with respective probability density functions
$$f_{X_1}(x) = \begin{cases} e^{-x} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases} \quad \text{and} \quad f_{X_2}(x) = \begin{cases} \frac{1}{2} e^{-\frac{x-1}{2}} & \text{if } x > 1 \\ 0 & \text{if } x \le 1 \end{cases}.$$
Calculate the probability density function of the transformed random variable $Y = X_1 + X_2$.

• Since $X_1$ and $X_2$ are independent random variables, their joint density is
$$f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \frac{1}{2} e^{-x_1} e^{-\frac{x_2 - 1}{2}} = \frac{1}{2} e^{-\frac{2x_1 + x_2 - 1}{2}}$$
if $x_1 > 0$ and $x_2 > 1$. We have $y = x_1 + x_2$ and generate $z = x_1$, so $x_1 = z$ and $x_2 = y - x_1 = y - z$, so we can compute the Jacobian as
$$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial y} & \frac{\partial x_1}{\partial z} \\ \frac{\partial x_2}{\partial y} & \frac{\partial x_2}{\partial z} \end{bmatrix} = \det \begin{bmatrix} 0 & 1 \\ 1 & -1 \end{bmatrix} = -1.$$
This implies that the joint density is
$$f_{YZ}(y, z) = f_{X_1 X_2}(z, y - z)\,|J| = \frac{1}{2} e^{-\frac{2z + y - z - 1}{2}} = \frac{1}{2} e^{-\frac{z + y - 1}{2}}$$
whenever $z > 0$ and $y - z > 1$, that is $0 < z < y - 1$. We then integrate out $z$ to find the PDF:
$$f_Y(y) = \int_{-\infty}^{\infty} f_{YZ}(y, z)\,dz = \frac{1}{2} \int_0^{y-1} e^{-\frac{z + y - 1}{2}}\,dz = \frac{1}{2} e^{-\frac{y-1}{2}} \int_0^{y-1} e^{-\frac{z}{2}}\,dz = \frac{1}{2} e^{-\frac{y-1}{2}} \Big[-2 e^{-\frac{z}{2}}\Big]_0^{y-1} = -e^{-\frac{y-1}{2}} \Big(e^{-\frac{y-1}{2}} - 1\Big) = e^{\frac{1-y}{2}} \Big(1 - e^{\frac{1-y}{2}}\Big).$$
Thus, we find that $f_Y(y) = e^{\frac{1-y}{2}}\big(1 - e^{\frac{1-y}{2}}\big)$ whenever $y > 1$ (the support requires $x_1 > 0$ and $x_2 > 1$, hence $y > 1$) and zero otherwise.
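A simulation sketch (assuming `numpy` and `scipy`): since $X_2$ is just $1$ plus an exponential with mean 2, we can draw $Y = X_1 + X_2$ directly and compare the empirical CDF with the integral of the derived density.

```python
# Check f_Y(y) = exp((1-y)/2) * (1 - exp((1-y)/2)) on y > 1: it should
# integrate to 1 and reproduce P(Y <= c) for a simulated Y = X1 + X2.
import numpy as np
from scipy import integrate

f_y = lambda y: np.exp((1 - y) / 2) * (1 - np.exp((1 - y) / 2))

total, _ = integrate.quad(f_y, 1, np.inf)      # total mass
c = 3.0                                        # arbitrary test point
cdf_at_c, _ = integrate.quad(f_y, 1, c)        # P(Y <= c) from the density

rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, 10**6)               # EXP(1)
x2 = 1 + rng.exponential(2.0, 10**6)           # density (1/2)e^{-(x-1)/2}, x > 1
mc_cdf = np.mean(x1 + x2 <= c)

print(total, cdf_at_c, mc_cdf)
```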
Question #4: Let $X_1, \ldots, X_n$ be independent and identically distributed random variables from a cumulative distribution function
$$F_X(x) = \begin{cases} 1 - \frac{1}{x} & \text{if } x > 1 \\ 0 & \text{if } x \le 1 \end{cases}.$$
Find the limiting distribution of the transformed first order statistic given by $X_{1:n}^n$.

• We can calculate that
$$F_{X_{1:n}^n}(x) = P\big(X_{1:n}^n \le x\big) = P\big(X_{1:n} \le x^{\frac{1}{n}}\big) = 1 - P\big(X_{1:n} > x^{\frac{1}{n}}\big) = 1 - \big[P\big(X_1 > x^{\frac{1}{n}}\big)\big]^n = 1 - \big[1 - F_X\big(x^{\frac{1}{n}}\big)\big]^n = 1 - \Big[1 - \Big(1 - \frac{1}{x^{1/n}}\Big)\Big]^n = 1 - \frac{1}{x}$$
when $x > 1$. Since there is no dependence on $n$, we may simply compute the desired limiting distribution as
$$\lim_{n \to \infty} F_{X_{1:n}^n}(x) = \begin{cases} \lim_{n \to \infty} \big(1 - \frac{1}{x}\big) & \text{if } x > 1 \\ \lim_{n \to \infty} (0) & \text{if } x \le 1 \end{cases} = \begin{cases} 1 - \frac{1}{x} & \text{if } x > 1 \\ 0 & \text{if } x \le 1 \end{cases} = F_X(x).$$
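This can be seen numerically (a sketch assuming `numpy`; $n = 50$ and the evaluation point $x = 2$ are arbitrary choices): by inverse transform, $X = \frac{1}{1-U}$ has CDF $1 - \frac{1}{x}$, and $P\big(X_{1:n}^n \le 2\big)$ should equal $1 - \frac{1}{2} = 0.5$ for every $n$.

```python
# For F(x) = 1 - 1/x on x > 1, simulate the transformed first order
# statistic (X_{1:n})^n and check P((X_{1:n})^n <= 2) = 0.5 at n = 50.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 10**5
x = 1.0 / (1.0 - rng.random((reps, n)))   # reps samples of size n via inverse CDF
stat = x.min(axis=1) ** n                 # transformed first order statistic

p_hat = np.mean(stat <= 2.0)
print(p_hat)
```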
Question #5: Let $X_1, \ldots, X_n$ be independent and identically distributed random variables with density function
$$f_X(x) = \begin{cases} 1 & \text{if } x \in (0,1) \\ 0 & \text{if } x \notin (0,1) \end{cases}.$$
That is, they are sampled from a random variable distributed as $X \sim \mathrm{UNIF}(0,1)$. Approximate $P\big(\sum_{i=1}^{90} X_i \le 77\big)$ using $Z \sim N(0,1)$.

• Since $X \sim \mathrm{UNIF}(0,1)$, we know that $E(X) = \frac{0+1}{2} = \frac{1}{2}$ and $\mathrm{Var}(X) = \frac{(1-0)^2}{12} = \frac{1}{12}$. Then if we let $Y = \sum_{i=1}^{90} X_i$, we have $E(Y) = E\big(\sum_{i=1}^{90} X_i\big) = \sum_{i=1}^{90} E(X_i) = \sum_{i=1}^{90} \frac{1}{2} = \frac{90}{2} = 45$ and $\mathrm{Var}(Y) = \mathrm{Var}\big(\sum_{i=1}^{90} X_i\big) = \sum_{i=1}^{90} \mathrm{Var}(X_i) = \frac{90}{12} = 7.5$, so that $sd(Y) = \sqrt{7.5}$. Thus
$$P\Big(\sum_{i=1}^{90} X_i \le 77\Big) = P(Y \le 77) = P\left(\frac{Y - E(Y)}{sd(Y)} \le \frac{77 - E(Y)}{sd(Y)}\right) = P\left(\frac{Y - 45}{\sqrt{7.5}} \le \frac{77 - 45}{\sqrt{7.5}}\right) \approx P\left(Z \le \frac{32}{\sqrt{7.5}}\right) = P(Z \le 11.68) = \Phi(11.68) \approx 1,$$
where $Z \sim N(0,1)$.
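A short check (a sketch assuming `numpy` and `scipy`): the z-score $32/\sqrt{7.5} \approx 11.68$ is so large that $\Phi(11.68)$ is 1 to double precision, and essentially every simulated sum of 90 uniforms falls below 77.

```python
# Normal approximation of P(sum of 90 UNIF(0,1) draws <= 77).
import numpy as np
from scipy.stats import norm

z = (77 - 45) / np.sqrt(7.5)   # standardized cutoff, about 11.68
phi = norm.cdf(z)              # essentially 1

rng = np.random.default_rng(0)
sums = rng.random((10**5, 90)).sum(axis=1)
frac = np.mean(sums <= 77)     # empirical probability

print(z, phi, frac)
```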
Question #6: If $X \sim \mathrm{EXP}(1)$ such that
$$f_X(x) = \begin{cases} e^{-x} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases},$$
what transformation of $X$ will yield a random variable $Y \sim \mathrm{UNIF}(0,1)$? What about a random variable $Z \sim \mathrm{UNIF}(-1,1)$?

• Let $Y = 1 - e^{-X}$, so we can compute that $F_Y(y) = P(Y \le y) = P(1 - e^{-X} \le y) = P(X \le -\ln(1 - y)) = F_X(-\ln(1 - y))$, and then by differentiating we obtain
$$f_Y(y) = \frac{d}{dy}\big[F_X(-\ln(1 - y))\big] = f_X(-\ln(1 - y)) \left(-\frac{-1}{1 - y}\right) = \frac{1 - y}{1 - y} = 1,$$
so that $Y \sim \mathrm{UNIF}(0,1)$.

• Since we need the density to come out to $\frac{1 - z}{2(1 - z)} = \frac{1}{2}$ to conclude that $Z \sim \mathrm{UNIF}(-1,1)$, it is clear that we must make the transformation $Z = 1 - 2e^{-X}$ to obtain this result. We can therefore compute $F_Z(z) = P(Z \le z) = P(1 - 2e^{-X} \le z) = P\big(X \le -\ln\big(\frac{1 - z}{2}\big)\big) = F_X\big(-\ln\big(\frac{1 - z}{2}\big)\big)$, so after differentiating we have
$$f_Z(z) = \frac{d}{dz}\Big[F_X\Big(-\ln\Big(\frac{1 - z}{2}\Big)\Big)\Big] = f_X\Big(-\ln\Big(\frac{1 - z}{2}\Big)\Big) \left(-\frac{2}{1 - z}\left(-\frac{1}{2}\right)\right) = \frac{1 - z}{2(1 - z)} = \frac{1}{2},$$
so that we conclude $Z \sim \mathrm{UNIF}(-1,1)$.
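Both transformations can be verified by simulation (a sketch assuming `numpy`), comparing empirical moments with the known uniform moments:

```python
# Y = 1 - exp(-X) should behave like UNIF(0,1): mean 1/2, variance 1/12.
# Z = 1 - 2*exp(-X) should behave like UNIF(-1,1): mean 0, variance 1/3.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 10**6)   # X ~ EXP(1)

y = 1 - np.exp(-x)
z = 1 - 2 * np.exp(-x)

print(y.mean(), y.var(), z.mean(), z.var())
```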
Question #7: Does convergence in probability imply convergence in distribution? Does convergence in distribution imply convergence in probability? Give a counterexample if not.

• Convergence in probability implies convergence in distribution, but convergence in distribution does not imply convergence in probability. As a counterexample, consider $X_n = -Z$ where $Z \sim N(0,1)$. Then $X_n \sim N(0,1)$ for every $n$, so $X_n \xrightarrow{d} Z$. However, we have
$$P(|X_n - Z| > \epsilon) = P(|-Z - Z| > \epsilon) = P(2|Z| > \epsilon) = 2\Big(1 - \Phi\Big(\frac{\epsilon}{2}\Big)\Big) > 0.$$
This expression will not converge to zero as $n \to \infty$ since it does not depend on $n$ at all. This implies that there is no convergence in probability, that is $X_n \not\xrightarrow{p} Z$.
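The counterexample can be illustrated numerically (a sketch assuming `numpy` and `scipy`; $\epsilon = 0.5$ is an arbitrary tolerance):

```python
# With X_n = -Z, the distribution of X_n is N(0,1) for every n, yet
# |X_n - Z| = 2|Z| never shrinks: P(|X_n - Z| > eps) is a positive constant.
import numpy as np
from scipy.stats import norm

eps = 0.5
rng = np.random.default_rng(0)
z = rng.standard_normal(10**6)
x_n = -z                               # the same for every n

p_mc = np.mean(np.abs(x_n - z) > eps)  # empirical P(|X_n - Z| > eps)
p_exact = 2 * (1 - norm.cdf(eps / 2))  # the constant derived above

print(p_mc, p_exact)
```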
Question #8: State the law of large numbers (LLN), including its assumptions. Then state the central limit theorem (CLT), including its assumptions.

• Law of Large Numbers: If $X_1, \ldots, X_n$ are independent and identically distributed with finite mean $\mu$ and variance $\sigma^2$, then the sample mean $\bar{X} \xrightarrow{p} \mu$.

• Central Limit Theorem: If $X_1, \ldots, X_n$ are independent and identically distributed with finite mean $\mu$ and variance $\sigma^2$, then
$$\frac{\big[\sum_{i=1}^n X_i\big] - n\mu}{\sigma\sqrt{n}} \xrightarrow{d} N(0,1) \quad \text{and} \quad \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0,1).$$
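Both statements can be illustrated with a small simulation (a sketch assuming `numpy`; the $\mathrm{EXP}(1)$ distribution with $\mu = \sigma = 1$ and the sample size $n = 400$ are arbitrary choices):

```python
# LLN: a single sample mean of n EXP(1) draws is close to mu = 1.
# CLT: standardized sums over many replications have mean ~0 and sd ~1.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 400, 10**4
samples = rng.exponential(1.0, (reps, n))

xbar = samples[0].mean()                                  # LLN illustration
zs = (samples.sum(axis=1) - n * 1.0) / (1.0 * np.sqrt(n)) # CLT standardization

print(xbar, zs.mean(), zs.std())
```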
Question #10: Let $X_1$ and $X_2$ be independent random variables with Cumulative Distribution Function (CDF) given by $F_X(x) = 1 - e^{-x^2/2}$ for $x > 0$. Find the joint probability density function $f_{Y_1 Y_2}(y_1, y_2)$ of the random variables $Y_1 = \sqrt{X_1 + X_2}$ and $Y_2 = \frac{X_2}{X_1 + X_2}$.

• We first find the density $f_X(x) = \frac{d}{dx} F_X(x) = -e^{-\frac{x^2}{2}}(-x) = x e^{-\frac{x^2}{2}}$ for $x > 0$. Then since $X_1$ and $X_2$ are independent random variables, their joint density function is
$$f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \Big[x_1 e^{-\frac{x_1^2}{2}}\Big]\Big[x_2 e^{-\frac{x_2^2}{2}}\Big] = x_1 x_2\, e^{-\frac{x_1^2 + x_2^2}{2}}$$
for $x_1 > 0$, $x_2 > 0$. We then have that $y_1 = \sqrt{x_1 + x_2}$ and $y_2 = \frac{x_2}{x_1 + x_2}$, so that $x_2 = y_2(x_1 + x_2) = y_2 y_1^2$ and $x_1 = y_1^2 - x_2 = y_1^2 - y_2 y_1^2$. These computations allow us to compute the Jacobian as
$$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} \\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} \end{bmatrix} = \det \begin{bmatrix} 2y_1 - 2y_1 y_2 & -y_1^2 \\ 2y_1 y_2 & y_1^2 \end{bmatrix} = 2y_1^3 - 2y_1^3 y_2 + 2y_1^3 y_2 = 2y_1^3.$$
This implies that the joint density function is
$$f_{Y_1 Y_2}(y_1, y_2) = f_{X_1 X_2}\big(y_1^2 - y_2 y_1^2,\; y_2 y_1^2\big)\,|J| = \big(y_1^2 - y_2 y_1^2\big)\big(y_2 y_1^2\big)\, e^{-\frac{(y_1^2 - y_2 y_1^2)^2 + (y_2 y_1^2)^2}{2}} \cdot 2y_1^3$$
whenever $y_1^2 - y_2 y_1^2 > 0$ and $y_2 y_1^2 > 0$, that is $y_1 > 0$ and $0 < y_2 < 1$. This work also reveals that $Y_1$ and $Y_2$ are not independent, since their joint probability density function $f_{Y_1 Y_2}(y_1, y_2)$ cannot be factored into a function of $y_1$ times a function of $y_2$ (the exponential term couples the two variables).
Question #11: Let $X_1, X_2, X_3$ be independent random variables and suppose that we know $X_1 \sim N(2,4)$, $X_2 \sim N(3,9)$, and $X_1 + X_2 + X_3 \sim N(4,16)$. What is the distribution of $X_3$?

• We use the Moment Generating Function (MGF) technique to note that since the three random variables are independent, we have $M_{X_1 + X_2 + X_3}(t) = M_{X_1}(t) M_{X_2}(t) M_{X_3}(t)$, so
$$M_{X_3}(t) = \frac{M_{X_1 + X_2 + X_3}(t)}{M_{X_1}(t)\, M_{X_2}(t)} = \frac{e^{4t + 16t^2/2}}{\big(e^{2t + 4t^2/2}\big)\big(e^{3t + 9t^2/2}\big)} = \frac{e^{4t + 16t^2/2}}{e^{5t + 13t^2/2}} = e^{-t + 3t^2/2}.$$
This reveals that the random variable $X_3 \sim N(-1, 3)$, since $M_A(t) = e^{\mu t + \sigma^2 t^2/2}$ when $A \sim N(\mu, \sigma^2)$.
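A consistency check by simulation (a sketch assuming `numpy`; note the second parameter in $N(\mu, \sigma^2)$ here is a variance, so we pass its square root to the sampler):

```python
# If X3 ~ N(-1, 3) independently of X1 ~ N(2, 4) and X2 ~ N(3, 9),
# then the sum should have mean 4 and variance 16, i.e. N(4, 16).
import numpy as np

rng = np.random.default_rng(0)
m = 10**6
x1 = rng.normal(2, np.sqrt(4), m)
x2 = rng.normal(3, np.sqrt(9), m)
x3 = rng.normal(-1, np.sqrt(3), m)

s = x1 + x2 + x3
print(s.mean(), s.var())
```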
Question #12: Let $X_1, \ldots, X_5$ be independent $N(\mu, \sigma^2)$ random variables. Give examples of random variables $Y_1, \ldots, Y_4$ defined in terms of the random variables $X_1, \ldots, X_5$ such that we have $Y_1 \sim N(0,1)$, $Y_2 \sim \chi^2(3)$, $Y_3 \sim F(1,3)$, and $Y_4 \sim t(3)$.

• We have that $Y_1 = \frac{X_1 - \mu}{\sigma} \sim N(0,1)$ and $Y_2 = \sum_{i=2}^{4} \big(\frac{X_i - \mu}{\sigma}\big)^2 \sim \chi^2(3)$; note that $Y_2$ is built from $X_2, X_3, X_4$ so that it is independent of $Y_1$, as the $F$ and $t$ constructions below require. Then
$$Y_3 = \frac{Y_1^2 / 1}{Y_2 / 3} = \frac{\big(\frac{X_1 - \mu}{\sigma}\big)^2 \big/ 1}{\Big[\sum_{i=2}^{4} \big(\frac{X_i - \mu}{\sigma}\big)^2\Big] \big/ 3} \sim F(1,3) \quad \text{and} \quad Y_4 = \frac{Y_1}{\sqrt{Y_2 / 3}} = \frac{\frac{X_1 - \mu}{\sigma}}{\sqrt{\Big[\sum_{i=2}^{4} \big(\frac{X_i - \mu}{\sigma}\big)^2\Big] \big/ 3}} \sim t(3).$$
These random variables are constructed as follows: if some $X \sim N(\mu, \sigma^2)$, then $Z = \frac{X - \mu}{\sigma} \sim N(0,1)$. This implies that $Z^2 \sim \chi^2(1)$, so that $V = \sum_{i=1}^{n} Z_i^2 \sim \chi^2(n)$ for independent $Z_i$. Finally, we note that $W = \frac{U/n}{V/m} \sim F(n, m)$ for independent $U \sim \chi^2(n)$ and $V \sim \chi^2(m)$, and that $T = \frac{Z}{\sqrt{V/m}} \sim t(m)$ for independent $Z \sim N(0,1)$ and $V \sim \chi^2(m)$.
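The constructions can be checked by simulation (a sketch assuming `numpy` and `scipy`; the values $\mu = 5$, $\sigma = 2$ are arbitrary). For each constructed variable we compare the fraction of draws below the target distribution's median with $\frac{1}{2}$:

```python
# Build Y1 ~ N(0,1), Y2 ~ chi2(3), Y3 ~ F(1,3), Y4 ~ t(3) from five
# iid N(mu, sigma^2) columns and check each against its median.
import numpy as np
from scipy.stats import chi2, f, t

mu, sigma = 5.0, 2.0
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, (10**5, 5))
z = (x - mu) / sigma                 # each column is N(0,1)

y1 = z[:, 0]                         # N(0,1)
y2 = (z[:, 1:4] ** 2).sum(axis=1)    # chi2(3), independent of y1
y3 = (y1**2 / 1) / (y2 / 3)          # F(1,3)
y4 = y1 / np.sqrt(y2 / 3)            # t(3)

fracs = [np.mean(y1 < 0),
         np.mean(y2 < chi2.ppf(0.5, 3)),
         np.mean(y3 < f.ppf(0.5, 1, 3)),
         np.mean(y4 < t.ppf(0.5, 3))]
print(fracs)
```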
Question #13: If $X_1$ and $X_2$ are independent $\mathrm{EXP}(1)$ random variables such that their densities are $f(x) = e^{-x} \mathbb{1}\{x > 0\}$, then find the joint density of $U = \frac{X_1}{X_2}$ and $V = X_1 + X_2$.

• By independence, $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = [e^{-x_1}][e^{-x_2}] = e^{-(x_1 + x_2)}$ if $x_1 > 0$ and $x_2 > 0$. We have $u = \frac{x_1}{x_2}$ and $v = x_1 + x_2$, so $x_1 = u x_2 = u(v - x_1)$, which implies that $x_1 + u x_1 = uv$ so $x_1 = \frac{uv}{1 + u}$. Then $x_2 = v - x_1 = v - \frac{uv}{1 + u} = \frac{v + uv - uv}{1 + u} = \frac{v}{1 + u}$. This allows us to compute the Jacobian as
$$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial u} & \frac{\partial x_1}{\partial v} \\ \frac{\partial x_2}{\partial u} & \frac{\partial x_2}{\partial v} \end{bmatrix} = \det \begin{bmatrix} \frac{v(1+u) - uv}{(1+u)^2} & \frac{u}{1+u} \\ -\frac{v}{(1+u)^2} & \frac{1}{1+u} \end{bmatrix} = \frac{v(1+u) - uv}{(1+u)^2} \Big(\frac{1}{1+u}\Big) + \frac{u}{1+u} \Big(\frac{v}{(1+u)^2}\Big) = \frac{v}{(1+u)^3} + \frac{uv}{(1+u)^3} = \frac{v(1+u)}{(1+u)^3} = \frac{v}{(1+u)^2}.$$
Thus, the joint density is
$$f_{UV}(u, v) = f_{X_1 X_2}\Big(\frac{uv}{1+u}, \frac{v}{1+u}\Big)\,|J| = \frac{v}{(1+u)^2}\, e^{-\big(\frac{uv}{1+u} + \frac{v}{1+u}\big)} = \frac{v}{(1+u)^2}\, e^{-v}$$
whenever we have that $u > 0$, $v > 0$ and zero otherwise.
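A numerical sketch (assuming `numpy` and `scipy`): the derived density should carry total mass 1, and box probabilities under it should match a simulation of $(U, V) = (X_1/X_2,\, X_1 + X_2)$. Note the density factors as $\frac{1}{(1+u)^2} \cdot v e^{-v}$, which is why the double integral separates cleanly.

```python
# Check f_UV(u, v) = v * exp(-v) / (1 + u)^2 on u > 0, v > 0.
import numpy as np
from scipy import integrate

f_uv = lambda v, u: v * np.exp(-v) / (1 + u) ** 2

total, _ = integrate.dblquad(f_uv, 0, np.inf, 0, np.inf)  # total mass
p_box, _ = integrate.dblquad(f_uv, 0, 1, 0, 2)            # P(U <= 1, V <= 2)

rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, 10**6)
x2 = rng.exponential(1.0, 10**6)
p_mc = np.mean((x1 / x2 <= 1) & (x1 + x2 <= 2))

print(total, p_box, p_mc)
```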
Question #14: If $X_1, X_2, X_3$ are a random sample of size $n = 3$ from the $\mathrm{UNIF}(0,1)$ distribution, then a) find the joint density of the order statistics $Y_1, Y_2, Y_3$, b) find the marginal densities of $Y_1$ and $Y_3$, and c) find the mean of the sample range $R = Y_3 - Y_1$.

a) We have $f_{Y_1 Y_2 Y_3}(y_1, y_2, y_3) = 3! \prod_{i=1}^{3} f_X(y_i) = 6 \prod_{i=1}^{3} 1 = 6$ if $0 < y_1 < y_2 < y_3 < 1$.

b) We have $f_{Y_1}(y_1) = 3 f_X(y_1) [1 - F_X(y_1)]^2 = 3(1 - y_1)^2$ if $0 < y_1 < 1$, and that $f_{Y_3}(y_3) = 3 f_X(y_3) [F_X(y_3)]^2 = 3 y_3^2$ whenever $0 < y_3 < 1$.

c) We have $E(R) = E(Y_3) - E(Y_1)$, so we compute
$$E(Y_3) = \int_0^1 y_3 \cdot 3 y_3^2 \,dy_3 = \frac{3}{4} \big[y_3^4\big]_0^1 = \frac{3}{4}$$
and
$$E(Y_1) = \int_0^1 y_1 \cdot 3(1 - y_1)^2 \,dy_1 = 3 \int_0^1 y_1 - 2y_1^2 + y_1^3 \,dy_1 = 3 \left[\frac{y_1^2}{2} - \frac{2y_1^3}{3} + \frac{y_1^4}{4}\right]_0^1 = \frac{1}{4}.$$
Then, we have that $E(R) = E(Y_3) - E(Y_1) = \frac{3}{4} - \frac{1}{4} = \frac{1}{2}$. Alternatively, we could have found $f_{Y_1 Y_3}(y_1, y_3)$ and computed $E(R)$ as a double integral.
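A quick Monte Carlo check of part (c), a sketch assuming `numpy`:

```python
# For samples of size 3 from UNIF(0,1): E(Y1) = 1/4, E(Y3) = 3/4,
# and the mean sample range E(R) = E(Y3) - E(Y1) = 1/2.
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((10**6, 3))   # a million samples of size 3

y1 = u.min(axis=1)           # first order statistic
y3 = u.max(axis=1)           # third order statistic
r = y3 - y1                  # sample range

print(y1.mean(), y3.mean(), r.mean())
```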