
Sample Comprehensive Exam Stat Theory

Statistics STAT:5100 (22S:193), Fall 2015
Tierney
Sample Final Exam B
Please write your answers in the exam books provided.
1. Let X, Y_1, and Y_2 be independent random variables with X ∼ N(µ_X, σ_X^2) and Y_i ∼ N(µ_Y, σ_Y^2). Let Z_i = X + Y_i for i = 1, 2.
(a) Find the means and variances of Z1 and Z2 .
(b) Find the covariance and correlation of Z1 and Z2 .
2. The conditional density of Y given X = x is xe^(−xy) for y > 0. The marginal density of X is e^(−x) for x > 0.
(a) Find the marginal density of Y.
(b) Find the conditional density of X given Y = y; if this is a standard distribution, identify it by name.
3. The random variables U and V are independent Uniform[0, 1] random variables.
Find the density of X = V /U .
4. Let X_1, ..., X_n be a random sample from an Exponential(λ) population with density
f(x|λ) = λe^(−λx) for x ≥ 0, and f(x|λ) = 0 otherwise,
for some λ > 0. Recall that E[X_i] = 1/λ and Var(X_i) = 1/λ^2.
(a) Show that X̄_n ∼ AN(1/λ, 1/(nλ^2)) as n → ∞.
(b) Show that √n (X̄_n − 1/λ)/X̄_n ∼ AN(0, 1) as n → ∞.
(c) Show that 1/X̄_n ∼ AN(λ, λ^2/n) as n → ∞.
Recall that Y_n ∼ AN(a_n, b_n^2) as n → ∞ means that (Y_n − a_n)/b_n converges in distribution to a standard normal random variable.
5. Suppose n different letters and envelopes are typed, but the letters accidentally are assigned randomly to the envelopes. Let X be the number of letters assigned to the correct envelopes. Compute the mean and variance of X. Hint: Let
Y_i = 1 if the i-th letter is assigned to the correct envelope, and Y_i = 0 otherwise,
and consider how X is related to Y_1, ..., Y_n.
Some Distributions
Bernoulli(p)
pmf: P(X = x|p) = p^x (1 − p)^(1−x); x = 0, 1; 0 ≤ p ≤ 1
mean, variance: E[X] = p, Var(X) = p(1 − p)
mgf: M_X(t) = (1 − p) + pe^t
Binomial(n, p)
pmf: P(X = x|n, p) = (n choose x) p^x (1 − p)^(n−x); x = 0, 1, ..., n; 0 ≤ p ≤ 1
mean, variance: E[X] = np, Var(X) = np(1 − p)
mgf: M_X(t) = ((1 − p) + pe^t)^n
Poisson(λ)
pmf: P(X = x|λ) = λ^x e^(−λ)/x!; x = 0, 1, ...; 0 ≤ λ < ∞
mean, variance: E[X] = λ, Var(X) = λ
mgf: M_X(t) = e^(λ(e^t − 1))
Geometric(p)
pmf: P(X = x|p) = p(1 − p)^(x−1); x = 1, 2, ...; 0 < p ≤ 1
mean, variance: E[X] = 1/p, Var(X) = (1 − p)/p^2
mgf: M_X(t) = pe^t/(1 − (1 − p)e^t)
Negative Binomial(r, p)
pmf: P(X = x|r, p) = (r + x − 1 choose x) p^r (1 − p)^x; x = 0, 1, ...; 0 < p ≤ 1
mean, variance: E[X] = r(1 − p)/p, Var(X) = r(1 − p)/p^2
mgf: M_X(t) = (p/(1 − (1 − p)e^t))^r
Beta(α, β)
pdf: f(x|α, β) = Γ(α + β)/(Γ(α)Γ(β)) x^(α−1) (1 − x)^(β−1); 0 < x < 1
mean, variance: E[X] = α/(α + β), Var(X) = αβ/((α + β)^2 (α + β + 1))
Cauchy(θ, σ)
pdf: f(x|θ, σ) = (1/(πσ)) · 1/(1 + ((x − θ)/σ)^2); −∞ < x < ∞; −∞ < θ < ∞; σ > 0
mean, variance: do not exist
mgf: does not exist
Gamma(α, β)
pdf: f(x|α, β) = (1/(Γ(α)β^α)) x^(α−1) e^(−x/β); 0 < x < ∞; α, β > 0
mean, variance: E[X] = αβ, Var(X) = αβ^2
mgf: M_X(t) = (1/(1 − βt))^α, t < 1/β
Normal(µ, σ^2)
pdf: f(x|µ, σ^2) = (1/(√(2π) σ)) exp{−(x − µ)^2/(2σ^2)}; σ^2 > 0
mean, variance: E[X] = µ, Var(X) = σ^2
mgf: M_X(t) = exp{µt + σ^2 t^2/2}
Solutions
1. (a) The means are
E[Z_i] = E[X + Y_i] = E[X] + E[Y_i] = µ_X + µ_Y,
and, since X and Y_i are independent, the variances are
Var(Z_i) = Var(X + Y_i) = Var(X) + Var(Y_i) = σ_X^2 + σ_Y^2.
(b) The covariance of Z_1 and Z_2 is
Cov(Z_1, Z_2) = Cov(X + Y_1, X + Y_2)
= Cov(X, X) + Cov(X, Y_2) + Cov(Y_1, X) + Cov(Y_1, Y_2)
= Var(X) = σ_X^2
since X, Y_1, and Y_2 are independent, so all covariances between distinct variables vanish. The correlation is therefore
ρ_{Z_1, Z_2} = Cov(Z_1, Z_2)/√(Var(Z_1) Var(Z_2)) = σ_X^2/(σ_X^2 + σ_Y^2) = 1/(1 + σ_Y^2/σ_X^2).
2. (a) The marginal density of Y is
f_Y(y) = ∫_0^∞ xe^(−xy) e^(−x) dx
= ∫_0^∞ xe^(−(1+y)x) dx
= (1/(1 + y)^2) ∫_0^∞ ue^(−u) du    (substituting u = (1 + y)x)
= 1/(1 + y)^2
for y > 0.
(b) The conditional density of X given Y = y for y > 0 is
f_{X|Y}(x|y) = f(x, y)/f_Y(y) = (1 + y)^2 xe^(−(1+y)x) for x > 0, and 0 otherwise.
This is a Gamma(2, (1 + y)^(−1)) density. The conditional density is not defined for y < 0.
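The marginal density in part (a) can be checked by simulation: its CDF is P(Y ≤ t) = ∫_0^t (1 + y)^(−2) dy = t/(1 + t), which should match the empirical CDF when Y is generated hierarchically (a sketch, not part of the exam):

```python
import random

# Monte Carlo check of problem 2(a): draw X ~ Exp(1), then Y | X = x ~ Exp(x),
# and compare the empirical CDF of Y with the one implied by
# f_Y(y) = 1/(1+y)^2, namely P(Y <= t) = t/(1+t).
random.seed(0)
n = 200_000
ys = []
for _ in range(n):
    x = random.expovariate(1.0)       # X ~ Exp(1), density e^{-x}
    ys.append(random.expovariate(x))  # Y | X = x ~ Exp(x), density x e^{-xy}

for t in (0.5, 1.0, 3.0):
    empirical = sum(y <= t for y in ys) / n
    print(t, empirical, t / (1 + t))  # the two CDF values should agree closely
```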
3. Let Y = U . The inverse transformation is
u=y
v = xy.
and the image of A = [0, 1] × [0, 1] under the transformation is
B = {(x, y) : 0 ≤ y ≤ 1, 0 ≤ xy ≤ 1}.
All x values with x ≥ 0 are possible, and for a given x value the corresponding
y value must satisfy both y ≤ 1 and xy ≤ 1, or y ≤ 1/x. So y values must
satisfy y ≤ min(1, 1/x), and B can be written as
B = {(x, y) : 0 ≤ x, 0 ≤ y ≤ min(1, 1/x)}.
The Jacobian determinant of the inverse transformation is
0 1
det
= −y
y x
The joint density of U, V is
f_{U,V}(u, v) = 1 for 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1, and 0 otherwise.
The joint density of X, Y is thus
f_{X,Y}(x, y) = f_{U,V}(y, xy)|−y| = y for 0 ≤ x and 0 ≤ y ≤ min(1, 1/x), and 0 otherwise,
and the marginal density of X is
f_X(x) = ∫_0^(min(1,1/x)) y dy = (1/2) min(1, 1/x)^2 for x ≥ 0, and f_X(x) = 0 otherwise.
That is,
f_X(x) = 1/(2x^2) for x > 1,
f_X(x) = 1/2 for 0 ≤ x ≤ 1,
f_X(x) = 0 for x < 0.
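The resulting piecewise density implies P(X ≤ 1) = 1/2 and P(X ≤ 2) = 1 − 1/(2·2) = 3/4, which a quick simulation sketch can confirm (not part of the exam):

```python
import random

# Monte Carlo check of problem 3: for X = V/U with U, V iid Uniform[0,1],
# the derived density gives P(X <= 1) = 1/2 and P(X <= 2) = 3/4.
random.seed(0)
n = 200_000
xs = [random.random() / random.random() for _ in range(n)]

print(sum(x <= 1 for x in xs) / n)  # close to 0.5
print(sum(x <= 2 for x in xs) / n)  # close to 0.75
```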
4. (a) The mean and variance of X_i are E[X_i] = 1/λ and Var(X_i) = 1/λ^2. By the central limit theorem, X̄_n ∼ AN(1/λ, 1/(nλ^2)), or
√n (X̄_n − 1/λ)λ →^D Z ∼ N(0, 1).
(b) By the law of large numbers, X̄_n →^P 1/λ, and by the continuous mapping theorem 1/(λX̄_n) →^P 1. Slutsky's theorem then implies that
√n (X̄_n − 1/λ)/X̄_n = √n (X̄_n − 1/λ)λ × 1/(λX̄_n) →^D Z ∼ N(0, 1).
(c) By the delta method with f(x) = 1/x and f'(x) = −1/x^2,
f(X̄_n) = 1/X̄_n ∼ AN(f(1/λ), f'(1/λ)^2/(nλ^2)) = AN(λ, λ^4/(nλ^2)) = AN(λ, λ^2/n).
In this problem parts (b) and (c) are algebraically equivalent, so either of the two proofs shown here works in both cases.
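The delta-method conclusion of part (c) can be illustrated by simulation: over many samples, 1/X̄_n should have mean near λ and standard deviation near λ/√n. The values of λ, n, and the number of replications below are illustrative assumptions:

```python
import random
import statistics

# Simulation sketch of problem 4(c): for Exponential(lam) samples of size n,
# 1/Xbar_n is approximately N(lam, lam^2/n).
random.seed(0)
lam, n, reps = 2.0, 400, 5_000  # illustrative choices

estimates = []
for _ in range(reps):
    xbar = sum(random.expovariate(lam) for _ in range(n)) / n
    estimates.append(1 / xbar)

print(statistics.mean(estimates))   # close to lam = 2
print(statistics.stdev(estimates))  # close to lam / sqrt(n) = 0.1
```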
5. The number of correctly assigned letters is
X = Σ_{i=1}^n Y_i.
The Y_i are Bernoulli random variables, and by symmetry E[Y_i] = 1/n for all i. So
E[X] = E[Σ_{i=1}^n Y_i] = Σ_{i=1}^n E[Y_i] = n · (1/n) = 1
and
Var(Y_i) = (1/n)(1 − 1/n) = (n − 1)/n^2.
The Y_i are not independent, so their covariances need to be considered in computing the variance of X. By symmetry, for all i ≠ j,
E[Y_i Y_j] = E[Y_1 Y_2]
= P(letters 1 and 2 both in correct envelopes)
= P(letter 1 correct) P(letter 2 correct | letter 1 correct)
= (1/n) · (1/(n − 1)).
So the covariance of Y_i and Y_j for i ≠ j is
Cov(Y_i, Y_j) = E[Y_i Y_j] − E[Y_i]E[Y_j]
= 1/(n(n − 1)) − (1/n)^2
= (1/n)(1/(n − 1) − 1/n)
= 1/(n^2(n − 1)).
and therefore
Var(X) = Var(Σ_{i=1}^n Y_i) = Σ_i Var(Y_i) + ΣΣ_{i≠j} Cov(Y_i, Y_j)
= n Var(Y_1) + n(n − 1) Cov(Y_1, Y_2)
= n · (n − 1)/n^2 + n(n − 1) · 1/(n^2(n − 1))
= (n − 1)/n + 1/n = 1.
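The result that both the mean and the variance of the match count equal 1, for any n, is easy to confirm by simulation (a sketch, not part of the exam):

```python
import random
import statistics

# Monte Carlo check of problem 5: shuffle n letters into n envelopes and count
# matches; both the mean and the variance of the count should be 1 for any n.
random.seed(0)
n, reps = 10, 100_000  # n = 10 is an illustrative choice

counts = []
for _ in range(reps):
    perm = list(range(n))
    random.shuffle(perm)  # random assignment of letters to envelopes
    counts.append(sum(perm[i] == i for i in range(n)))

print(statistics.mean(counts))      # close to 1
print(statistics.variance(counts))  # close to 1
```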