Homework solutions
Math525
Fall 2003
Textbook: Bickel-Doksum, 2nd edition
Assignment # 5
Section 3.2.
2. By the sufficiency of S,
$$\hat\theta_B = E[\theta\,|\,\vec X] = E[\theta\,|\,S] = \int_0^1 \theta\,\pi(\theta\,|\,S)\,d\theta.$$
For any k = 0, 1, …, n,
$$\pi(\theta\,|\,k) = \frac{\dbinom{n}{k}\theta^k(1-\theta)^{n-k}\,\dfrac{\theta^{r-1}(1-\theta)^{s-1}}{B(r,s)}}{\displaystyle\int_0^1 \binom{n}{k}\theta^k(1-\theta)^{n-k}\,\frac{\theta^{r-1}(1-\theta)^{s-1}}{B(r,s)}\,d\theta} = \frac{\theta^{k+r-1}(1-\theta)^{n-k+s-1}}{B(k+r,\;n-k+s)}.$$
That is, the posterior distribution is the β-distribution β(k + r, n − k + s). Hence
$$\int_0^1 \theta\,\pi(\theta\,|\,k)\,d\theta = \int_0^1 \frac{\theta^{k+r}(1-\theta)^{n-k+s-1}}{B(k+r,\;n-k+s)}\,d\theta = \frac{k+r}{n+r+s}.$$
Therefore,
$$\hat\theta_B = \frac{S+r}{n+r+s} = \frac{n\bar X+r}{n+r+s} = \frac{n}{n+r+s}\,\bar X + \Bigl(1-\frac{n}{n+r+s}\Bigr)\frac{r}{r+s},$$
where θ₀ = r/(r + s) is the mean of the distribution β(r, s).
In addition, notice that β(1, 1) is the uniform distribution on [0, 1]. Taking r = s = 1 in the above gives the Bayes rule against the uniform prior:
$$\hat\theta_B = \frac{S+1}{n+2}.$$
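As a quick numerical sanity check (an editor addition, not part of the assigned solution), the closed-form posterior mean can be compared against direct numerical integration of the posterior; the values of n, k, r, s below are arbitrary.

```python
from scipy.integrate import quad
from scipy.special import comb, beta as beta_fn

# Arbitrary illustrative values: n trials, S = k successes, Beta(r, s) prior.
n, k, r, s = 10, 3, 2.0, 5.0

# Unnormalized posterior: binomial likelihood times the Beta(r, s) prior density.
def post(t):
    return comb(n, k) * t**k * (1 - t)**(n - k) * t**(r - 1) * (1 - t)**(s - 1) / beta_fn(r, s)

Z, _ = quad(post, 0.0, 1.0)                   # normalizing constant
m, _ = quad(lambda t: t * post(t), 0.0, 1.0)  # unnormalized first moment

print(m / Z)                  # numerical posterior mean
print((k + r) / (n + r + s))  # closed form (S + r)/(n + r + s); the two agree
```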
3. By solving the likelihood equation we obtain the MLE of θ: θ̂ = X̄. Hence the MLE of q(θ) = θ(1 − θ) is q(θ̂) = X̄(1 − X̄).
Notice that q(θ̂_B) = E[q(θ)|x] if and only if q(E[θ|x]) = E[q(θ)|x]. Since q(θ) = θ(1 − θ) is strictly concave on [0, 1], Jensen's inequality gives q(E[θ|x]) ≥ E[q(θ)|x], with equality only when the posterior π(θ|x) is degenerate. By the computation given in Problem 2, π(θ|x) ∼ β(nx̄ + r, n − nx̄ + s), which is not degenerate. So we do not have equality.
Some remarks: another way to solve this problem is to compute q(θ̂_B) and E[q(θ)|x] separately. This exercise shows that, unlike the MLE, the Bayes estimate does not carry over under (nonlinear) transformations of the parameter: in general the Bayes estimate of q(θ) is not q applied to the Bayes estimate of θ.
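A small numeric illustration of this strict Jensen gap (added here for concreteness), using the exact Beta moments E[θ] = a/(a + b) and E[θ(1 − θ)] = ab/((a + b)(a + b + 1)); the posterior parameters a, b are arbitrary:

```python
# Posterior Beta(a, b): compare q(E[theta|x]) with E[q(theta)|x] for q(t) = t(1 - t).
# Arbitrary illustrative values; think a = n*xbar + r, b = n - n*xbar + s.
a, b = 5.0, 7.0

post_mean = a / (a + b)
q_of_mean = post_mean * (1 - post_mean)      # q(E[theta|x]) = ab/(a+b)^2
mean_of_q = a * b / ((a + b) * (a + b + 1))  # E[q(theta)|x], exact Beta moment

print(q_of_mean, mean_of_q)  # q_of_mean > mean_of_q: the Jensen gap is strict
```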
9. (a). π_{τ₀} = N(η₀, τ₀²). By Example 3.2.1, p.163, the posterior distribution is N(µ_{τ₀}, σ²_{τ₀}), where
$$\mu_{\tau_0} = \eta_0\,\frac{\sigma_0^2}{n\tau_0^2+\sigma_0^2} + \bar x\,\frac{n\tau_0^2}{n\tau_0^2+\sigma_0^2} \qquad\text{and}\qquad \sigma_{\tau_0}^2 = \Bigl(\frac{n}{\sigma_0^2}+\frac{1}{\tau_0^2}\Bigr)^{-1}.$$
The posterior risk is
$$r(a\,|\,x) = E\bigl[l(\theta,a)\,|\,x\bigr] = \begin{cases} E[l(\theta,0)\,|\,x], & a=0 \text{ (accept bioequivalence)},\\ E[l(\theta,1)\,|\,x], & a=1 \text{ (reject bioequivalence)}. \end{cases}$$
Hence the Bayes rule δ_{π_{τ₀}} is
$$\delta_{\pi_{\tau_0}} = \begin{cases} \text{accept bioequivalence} & \text{if } E[l(\theta,0)\,|\,x] < E[l(\theta,1)\,|\,x],\\ \text{reject bioequivalence} & \text{if } E[l(\theta,0)\,|\,x] \ge E[l(\theta,1)\,|\,x] \end{cases} = \begin{cases} \text{accept bioequivalence} & \text{if } E[\lambda(\theta)\,|\,x] < 0,\\ \text{reject bioequivalence} & \text{if } E[\lambda(\theta)\,|\,x] \ge 0, \end{cases}$$
where λ(θ) = l(θ, 0) − l(θ, 1) = r − e^{−θ²/(2c²)}. Notice that
$$\begin{aligned} E[\lambda(\theta)\,|\,x] &= r - \frac{1}{\sqrt{2\pi}\,\sigma_{\tau_0}}\int_{-\infty}^{\infty}\exp\Bigl\{-\frac{\theta^2}{2c^2}-\frac{(\theta-\mu_{\tau_0})^2}{2\sigma_{\tau_0}^2}\Bigr\}\,d\theta\\ &= r - \frac{1}{\sqrt{2\pi}\,\sigma_{\tau_0}}\exp\Bigl\{-\frac{\mu_{\tau_0}^2}{2}\Bigl[\frac{1}{\sigma_{\tau_0}^2}-\frac{1}{\sigma_{\tau_0}^4\,(c^{-2}+\sigma_{\tau_0}^{-2})}\Bigr]\Bigr\}\int_{-\infty}^{\infty}\exp\Bigl\{-\frac{1}{2}\bigl(c^{-2}+\sigma_{\tau_0}^{-2}\bigr)\Bigl(\theta-\frac{\mu_{\tau_0}}{\sigma_{\tau_0}^2\,(c^{-2}+\sigma_{\tau_0}^{-2})}\Bigr)^{\!2}\Bigr\}\,d\theta\\ &= r - \frac{1}{\sqrt{\sigma_{\tau_0}^2\,(c^{-2}+\sigma_{\tau_0}^{-2})}}\,\exp\Bigl\{-\frac{\mu_{\tau_0}^2}{2}\Bigl[\frac{1}{\sigma_{\tau_0}^2}-\frac{1}{\sigma_{\tau_0}^4\,(c^{-2}+\sigma_{\tau_0}^{-2})}\Bigr]\Bigr\}. \end{aligned}$$
By the relation log r = −ε²/(2c²), the inequality E[λ(θ)|x] < 0 is equivalent to
$$\mu_{\tau_0}^2 \;<\; \frac{\dfrac{\varepsilon^2}{c^2}-\log\bigl[\sigma_{\tau_0}^2\bigl(c^{-2}+\sigma_{\tau_0}^{-2}\bigr)\bigr]}{\dfrac{1}{\sigma_{\tau_0}^2}-\dfrac{1}{\sigma_{\tau_0}^4\bigl(c^{-2}+\sigma_{\tau_0}^{-2}\bigr)}} \;=\; \bigl(\tau^2(n)+c^2\bigr)\Bigl[\frac{\varepsilon^2}{c^2}+\log\frac{c^2}{\tau^2(n)+c^2}\Bigr],$$
where τ²(n) = σ²_{τ₀}.
Hence the conclusion follows from the fact that
$$E(\theta\,|\,x) = \mu_{\tau_0} = \eta_0\,\frac{\sigma_0^2}{n\tau_0^2+\sigma_0^2} + \bar x\,\frac{n\tau_0^2}{n\tau_0^2+\sigma_0^2} = w\eta_0 + (1-w)\bar x,$$
where w = σ₀²/(nτ₀² + σ₀²).
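The posterior formulas from Example 3.2.1 used above are easy to verify by brute force (an editor addition; all parameter values below are arbitrary), comparing the closed-form µ_{τ₀}, σ²_{τ₀} with a grid approximation of the posterior:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma0, eta0, tau0 = 12, 2.0, 1.0, 1.5  # arbitrary illustrative values
x = rng.normal(0.7, sigma0, size=n)
xbar = x.mean()

# Closed form from Example 3.2.1
w = sigma0**2 / (n * tau0**2 + sigma0**2)
mu = w * eta0 + (1 - w) * xbar
var = 1.0 / (n / sigma0**2 + 1.0 / tau0**2)

# Brute-force posterior on a dense grid (the likelihood depends on theta through xbar)
theta = np.linspace(mu - 10, mu + 10, 40001)
logpost = (-n * (xbar - theta) ** 2 / (2 * sigma0**2)
           - (theta - eta0) ** 2 / (2 * tau0**2))
p = np.exp(logpost - logpost.max())
p /= p.sum()

print(mu, (p * theta).sum())                               # posterior means agree
print(var, (p * theta**2).sum() - (p * theta).sum() ** 2)  # posterior variances agree
```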
(b). As η₀ = 0 and τ₀² → ∞, we have τ²(n) → σ₀²/n and µ_{τ₀} = (1 − σ₀²/(nτ₀² + σ₀²)) x̄ → x̄. So the limiting rule is
$$\delta^* = \begin{cases} \text{accept bioequivalence} & \text{if } \bar x^2 \le \Bigl(\dfrac{\sigma_0^2}{n}+c^2\Bigr)\Bigl[\dfrac{\varepsilon^2}{c^2}+\log\dfrac{nc^2}{\sigma_0^2+nc^2}\Bigr],\\ \text{reject bioequivalence} & \text{otherwise.} \end{cases}$$
(c). As n → ∞, τ²(n) → 0 and the threshold in (b) tends to c² · ε²/c² = ε². The Bayes rule is therefore close to the rule
$$\delta^{**} = \begin{cases} \text{accept bioequivalence} & \text{if } |\bar x| \le \varepsilon,\\ \text{reject bioequivalence} & \text{otherwise.} \end{cases}$$
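The closed form derived above for E[λ(θ)|x] under a normal posterior can be cross-checked by Monte Carlo (an editor addition; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, c2, eps = 0.3, 0.04, 0.25, 0.5  # arbitrary posterior mean/variance, c^2, epsilon
r = np.exp(-eps**2 / (2 * c2))              # calibration log r = -eps^2/(2c^2)

# Closed form: E[e^{-theta^2/(2c^2)}] = (1 + sigma^2/c^2)^{-1/2} exp{-mu^2/(2(sigma^2+c^2))}
closed = r - np.exp(-mu**2 / (2 * (sigma2 + c2))) / np.sqrt(1 + sigma2 / c2)

theta = rng.normal(mu, np.sqrt(sigma2), size=1_000_000)
mc = r - np.exp(-theta**2 / (2 * c2)).mean()

print(closed, mc)  # the two values of E[lambda(theta)|x] agree to Monte Carlo error
```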
Section 3.3.
3. (a). The posterior risk is
$$\begin{aligned} r(a\,|\,x) &= E[l(\theta,a)\,|\,x] = l(\theta_0,a)\,P\{\theta=\theta_0\,|\,x\} + l(\theta_1,a)\,P\{\theta=\theta_1\,|\,x\}\\ &= \begin{cases} w_{1,0}\,P\{\theta=\theta_1\,|\,x\}, & a=0,\\ w_{0,1}\,P\{\theta=\theta_0\,|\,x\}, & a=1 \end{cases}\\ &= \begin{cases} w_{1,0}\,\dfrac{p(x,\theta_1)P\{\theta=\theta_1\}}{p(x,\theta_0)P\{\theta=\theta_0\}+p(x,\theta_1)P\{\theta=\theta_1\}}, & a=0,\\[6pt] w_{0,1}\,\dfrac{p(x,\theta_0)P\{\theta=\theta_0\}}{p(x,\theta_0)P\{\theta=\theta_0\}+p(x,\theta_1)P\{\theta=\theta_1\}}, & a=1 \end{cases}\\ &= \begin{cases} w_{1,0}\,\dfrac{p(x,\theta_1)\pi}{p(x,\theta_0)(1-\pi)+p(x,\theta_1)\pi}, & a=0,\\[6pt] w_{0,1}\,\dfrac{p(x,\theta_0)(1-\pi)}{p(x,\theta_0)(1-\pi)+p(x,\theta_1)\pi}, & a=1. \end{cases} \end{aligned}$$
As the minimizer of r(a|x), the Bayes rule is
$$\delta_\pi(x) = \begin{cases} 1 & \text{if } w_{1,0}\,p(x,\theta_1)\,\pi \ge w_{0,1}\,p(x,\theta_0)(1-\pi),\\ 0 & \text{otherwise} \end{cases} = \begin{cases} 1 & \text{if } L_x(\theta_0,\theta_1) \ge \dfrac{(1-\pi)w_{0,1}}{\pi w_{1,0}},\\ 0 & \text{otherwise,} \end{cases}$$
where L_x(θ₀, θ₁) = p(x, θ₁)/p(x, θ₀) is the likelihood ratio.
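A minimal sketch of this rule in code (an editor addition; the function name and signature are invented for illustration, assuming the two likelihood values at the observed x are available as numbers):

```python
def bayes_test(lik0: float, lik1: float, pi: float, w01: float, w10: float) -> int:
    """Bayes rule delta_pi for testing theta0 vs theta1 with prior P(theta = theta1) = pi.

    lik0, lik1 -- the likelihoods p(x, theta0) and p(x, theta1) at the observed x
    w01        -- loss of choosing a = 1 when theta0 is true
    w10        -- loss of choosing a = 0 when theta1 is true
    Returns 1 (choose theta1) iff w10 * p(x, theta1) * pi >= w01 * p(x, theta0) * (1 - pi),
    i.e. iff the likelihood ratio L_x exceeds (1 - pi) * w01 / (pi * w10).
    """
    return int(w10 * lik1 * pi >= w01 * lik0 * (1 - pi))
```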
(b). The problem is to look for 0 < π* < 1 such that
$$r(\pi^*, \delta_{\pi^*}) = \sup_{\pi}\, r(\pi, \delta_{\pi^*}).$$
Note that
$$r(\pi^*, \delta_{\pi^*}) = (1-\pi^*)\,R(\theta_0, \delta_{\pi^*}) + \pi^*\,R(\theta_1, \delta_{\pi^*})$$
and
$$r(\pi, \delta_{\pi^*}) = (1-\pi)\,R(\theta_0, \delta_{\pi^*}) + \pi\,R(\theta_1, \delta_{\pi^*}).$$
We will have
$$r(\pi^*, \delta_{\pi^*}) = r(\pi, \delta_{\pi^*}) \quad\text{for every } \pi,$$
and therefore the desired conclusion, if we can prove that there is 0 < π* < 1 such that
$$R(\theta_0, \delta_{\pi^*}) = R(\theta_1, \delta_{\pi^*}).$$
Let 0 < π < 1 be arbitrary and let δ_π be defined as in part (a). Then
$$R(\theta, \delta_\pi) = E\,l(\theta, \delta_\pi(X)) = \begin{cases} w_{0,1}\,P_{\theta_0}\{\delta_\pi(X)=1\}, & \theta=\theta_0,\\ w_{1,0}\,P_{\theta_1}\{\delta_\pi(X)=0\}, & \theta=\theta_1 \end{cases} = \begin{cases} w_{0,1}\,P_{\theta_0}\Bigl\{L_X(\theta_0,\theta_1) \ge \dfrac{(1-\pi)w_{0,1}}{\pi w_{1,0}}\Bigr\}, & \theta=\theta_0,\\[6pt] w_{1,0}\,P_{\theta_1}\Bigl\{L_X(\theta_0,\theta_1) < \dfrac{(1-\pi)w_{0,1}}{\pi w_{1,0}}\Bigr\}, & \theta=\theta_1. \end{cases}$$
The problem now is to prove that there is 0 < π* < 1 such that
$$w_{0,1}\,P_{\theta_0}\Bigl\{L_X(\theta_0,\theta_1) \ge \frac{(1-\pi)w_{0,1}}{\pi w_{1,0}}\Bigr\} = w_{1,0}\,P_{\theta_1}\Bigl\{L_X(\theta_0,\theta_1) < \frac{(1-\pi)w_{0,1}}{\pi w_{1,0}}\Bigr\}$$
at π = π*. This follows from the intermediate value theorem for continuous functions: as π increases from 0 to 1, the threshold (1 − π)w_{0,1}/(πw_{1,0}) decreases from ∞ to 0, so the left-hand side increases from 0 to w_{0,1} while the right-hand side decreases from w_{1,0} to 0.
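A concrete illustration of this intermediate-value argument (an editor addition, assuming X ~ N(θ, 1) with θ₀ = 0 and θ₁ = 1, where both risks have closed forms and the equalizing π* can be found by root-finding):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

w01, w10 = 1.0, 2.0  # arbitrary losses

def risks(pi):
    # For X ~ N(theta, 1), theta0 = 0, theta1 = 1, the likelihood-ratio test
    # "L_X >= t" with t = (1 - pi) * w01 / (pi * w10) rejects iff x >= 0.5 + log(t).
    cut = 0.5 + np.log((1 - pi) * w01 / (pi * w10))
    R0 = w01 * norm.sf(cut)       # w01 * P_{theta0}(reject), X ~ N(0, 1)
    R1 = w10 * norm.cdf(cut - 1)  # w10 * P_{theta1}(accept), X ~ N(1, 1)
    return R0, R1

# R0 - R1 is continuous and changes sign on (0, 1); solve for the equalizing pi*.
pi_star = brentq(lambda p: risks(p)[0] - risks(p)[1], 1e-6, 1 - 1e-6)
print(pi_star, risks(pi_star))  # equalized risks: the least favorable prior
```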
4. (a). By Problem 3.2.2, δ* = δ_π for π = β(√n/2, √n/2), so we only need to show that R(θ, δ*) does not depend on θ.
$$\begin{aligned} R(\theta, \delta^*) &= E\,l\bigl(\theta, \delta^*(S)\bigr) = E\Bigl[\frac{S+\tfrac12\sqrt n}{n+\sqrt n}-\theta\Bigr]^2\\ &= \frac{1}{(n+\sqrt n)^2}\,E\Bigl[(S-n\theta) - \Bigl(\sqrt n\,\theta-\tfrac12\sqrt n\Bigr)\Bigr]^2\\ &= \frac{1}{(n+\sqrt n)^2}\Bigl[n\theta(1-\theta) + \Bigl(\tfrac12\sqrt n-\sqrt n\,\theta\Bigr)^{\!2}\Bigr] = \frac{n/4}{(n+\sqrt n)^2} = \frac{1}{4(\sqrt n+1)^2}. \end{aligned}$$
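The constant risk can be verified exactly by summing over the binomial distribution of S (an editor addition; n is arbitrary):

```python
import numpy as np
from scipy.stats import binom

n = 25  # arbitrary
s = np.arange(n + 1)
delta_star = (s + np.sqrt(n) / 2) / (n + np.sqrt(n))

for theta in (0.1, 0.5, 0.9):
    risk = np.sum(binom.pmf(s, n, theta) * (delta_star - theta) ** 2)
    print(theta, risk)                  # the same value for every theta

print(1 / (4 * (np.sqrt(n) + 1) ** 2))  # matches 1/(4(sqrt(n)+1)^2)
```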
(b). For δ(X⃗) = X̄ = S/n, R(θ, δ) = θ(1 − θ)/n, and
$$\lim_{n\to\infty}\frac{R(\theta,\delta)}{R(\theta,\delta^*)} = \lim_{n\to\infty}\frac{\theta(1-\theta)/n}{1/\bigl[4(\sqrt n+1)^2\bigr]} = 4\theta(1-\theta)\ \begin{cases} <1, & \theta\ne\tfrac12,\\ =1, & \theta=\tfrac12. \end{cases}$$
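A quick numerical look at the convergence of this risk ratio (an editor addition):

```python
import numpy as np

theta = np.array([0.1, 0.3, 0.5])
for n in (10**2, 10**4, 10**6):
    ratio = (theta * (1 - theta) / n) * 4 * (np.sqrt(n) + 1) ** 2
    print(n, ratio)              # R(theta, Xbar) / R(theta, delta*) for growing n

print(4 * theta * (1 - theta))   # the limit 4 theta(1 - theta): < 1 except at theta = 1/2
```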
7. Let δ*(X) = X. We have
$$R(\lambda, \delta^*) = E\Bigl[\frac{(\lambda-X)^2}{\lambda}\Bigr] = 1 \quad\text{(constant)}.$$
Let π_k = Γ(k^{−1}, 1), i.e., the exponential distribution with parameter k^{−1}. By Theorem 3.3.3, we need only show that
$$\lim_{k\to\infty} r(\pi_k, \delta_{\pi_k}) = 1.$$
Indeed, the posterior density is
$$\pi_k(\lambda\,|\,X) = \frac{\dfrac{\lambda^X e^{-\lambda}}{X!}\,k^{-1}e^{-k^{-1}\lambda}}{\displaystyle\int_0^\infty \frac{t^X e^{-t}}{X!}\,k^{-1}e^{-k^{-1}t}\,dt} = \frac{(1+k^{-1})^{X+1}}{\Gamma(X+1)}\,\lambda^X e^{-(1+k^{-1})\lambda}, \qquad \lambda > 0.$$
That is, π_k(·|X) = Γ((1 + k^{−1}), X + 1) (the Γ-distribution with parameters (1 + k^{−1}) and X + 1). The posterior risk is
$$r(a\,|\,X) = E\Bigl[\frac{(\lambda-a)^2}{\lambda}\,\Big|\,X\Bigr] = \frac{(1+k^{-1})^{X+1}}{\Gamma(X+1)}\int_0^\infty (\lambda-a)^2\,\lambda^{X-1}e^{-(1+k^{-1})\lambda}\,d\lambda.$$
The minimizer a = δ_{π_k}(X) is
$$\delta_{\pi_k}(X) = \text{mean of } \Gamma\bigl((1+k^{-1}),\,X\bigr) = \frac{X}{1+k^{-1}},$$
where the second equality follows from Problem 4-(c), B-2, p.526. (Indeed, λ^{−1}π_k(λ|X) is proportional to the Γ((1 + k^{−1}), X) density, and ∫(λ − a)² dF(λ) is minimized at the mean of F.)
$$\begin{aligned} R(\lambda, \delta_{\pi_k}) &= E\Bigl[\frac{(\lambda-\delta_{\pi_k}(X))^2}{\lambda}\Bigr] = \frac{1}{\lambda}\,E\Bigl[\frac{X}{1+k^{-1}}-\lambda\Bigr]^2\\ &= \frac{1}{\lambda}\,\frac{1}{(1+k^{-1})^2}\,E\bigl[(X-\lambda)-k^{-1}\lambda\bigr]^2 = \frac{1}{\lambda}\,\frac{1}{(1+k^{-1})^2}\bigl(\lambda + k^{-2}\lambda^2\bigr)\\ &= \frac{1+k^{-2}\lambda}{(1+k^{-1})^2}. \end{aligned}$$
Hence
$$r(\pi_k, \delta_{\pi_k}) = \int_0^\infty \frac{1+k^{-2}\lambda}{(1+k^{-1})^2}\,k^{-1}e^{-k^{-1}\lambda}\,d\lambda = \frac{1+k^{-1}}{(1+k^{-1})^2} = \frac{1}{1+k^{-1}} \longrightarrow 1 \quad (k\to\infty).$$
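A Monte Carlo check of the Bayes risk formula r(π_k, δ_{π_k}) = 1/(1 + k^{−1}) (an editor addition):

```python
import numpy as np

rng = np.random.default_rng(1)
for k in (1, 10, 100):
    lam = rng.exponential(scale=k, size=500_000)  # lambda ~ pi_k (rate 1/k, mean k)
    x = rng.poisson(lam)                          # X | lambda ~ Poisson(lambda)
    loss = (lam - x / (1 + 1 / k)) ** 2 / lam     # l(lambda, delta_{pi_k}(X))
    print(k, loss.mean(), 1 / (1 + 1 / k))        # Monte Carlo vs closed form
```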
8. We have
$$R(\vec\mu, \delta) = E\sum_{i=1}^k (X_i-\mu_i)^2 = k \quad\text{(constant)}.$$
So we need only to show that δ = δ_π for some distribution π on ℝ^k; that is, δ is a minimizer of
$$r(\pi, \delta') = \int R(\vec\mu, \delta')\,\pi(d\vec\mu) = \sum_{i=1}^k \int E\bigl(\delta_i'(\vec X)-\mu_i\bigr)^2\,\pi_i(d\mu_i),$$
where δ′(X⃗) = (δ₁′(X⃗), …, δ_k′(X⃗)) and π₁, …, π_k are the marginal distributions of π.
Take π to be the degenerate single point mass at (0, …, 0). By Example 3.2.1, p.163 (with η₀ = 0 and τ = 0), we conclude that for each 1 ≤ i ≤ k, δ_i(X⃗) = X_i minimizes
$$r_i(\pi, \delta_i') = \int E\bigl(\delta_i'(\vec X)-\mu_i\bigr)^2\,\pi_i(d\mu_i).$$
Hence δ is a minimizer of
$$r(\pi, \delta') = \sum_{i=1}^k \int E\bigl(\delta_i'(\vec X)-\mu_i\bigr)^2\,\pi_i(d\mu_i).$$
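A minimal Monte Carlo sketch of the constant risk (an editor addition, assuming X⃗ ~ N(µ⃗, I_k) and δ(X⃗) = X⃗ as in the problem):

```python
import numpy as np

rng = np.random.default_rng(2)
k, reps = 5, 200_000
mu = rng.normal(size=k)                       # any fixed mean vector
x = rng.normal(mu, 1.0, size=(reps, k))       # X ~ N(mu, I_k)
print(((x - mu) ** 2).sum(axis=1).mean(), k)  # risk of delta(X) = X equals k
```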