Homework solutions, Math 525, Fall 2003
Textbook: Bickel-Doksum, 2nd edition
Assignment #4

Section 2.1.

1. (a). Notice that
$$\theta = \theta^2 + \tfrac{1}{2}\cdot 2\theta(1-\theta) = p_1 + \tfrac{1}{2}p_2.$$
Substituting $N_1/n$ and $N_2/n$ for $p_1$ and $p_2$, respectively, we get
$$\hat\theta = \frac{N_1}{n} + \frac{N_2}{2n}.$$

(b). Plugging $\hat\theta$ from part (a) into $\theta/(1-\theta)$, the estimate is
$$\frac{\hat\theta}{1-\hat\theta} = \frac{\dfrac{N_1}{n}+\dfrac{N_2}{2n}}{1-\dfrac{N_1}{n}-\dfrac{N_2}{2n}} = \frac{2N_1+N_2}{2n-2N_1-N_2}.$$

(c). Notice that
$$EX = (-1)\cdot\theta^2 + 0\cdot 2\theta(1-\theta) + 1\cdot(1-\theta)^2 = 1 - 2\theta.$$
On the other hand, the moment estimate of $EX$ is
$$\frac{1}{n}\sum_{i=1}^n X_i = (-1)\frac{N_1}{n} + \frac{N_3}{n} = -\frac{N_1}{n} + 1 - \frac{N_1}{n} - \frac{N_2}{n} = 1 - 2T_3,$$
where $T_3 = N_1/n + N_2/(2n)$ is the estimate from part (a). Therefore the moment estimate is $\hat\theta = T_3$.

3. By Problem B.2.5,
$$EX = \frac{\alpha_1}{\alpha_1+\alpha_2} \quad\text{and}\quad EX^2 = \frac{\alpha_1(\alpha_1+1)}{(\alpha_1+\alpha_2)(\alpha_1+\alpha_2+1)}.$$
Write
$$\hat\mu_1 = \frac{1}{n}\sum_{i=1}^n X_i \quad\text{and}\quad \hat\mu_2 = \frac{1}{n}\sum_{i=1}^n X_i^2.$$
Solving
$$\frac{\alpha_1}{\alpha_1+\alpha_2} = \hat\mu_1 \quad\text{and}\quad \frac{\alpha_1(\alpha_1+1)}{(\alpha_1+\alpha_2)(\alpha_1+\alpha_2+1)} = \hat\mu_2,$$
we get
$$\hat\alpha_1 = \frac{\hat\mu_1(\hat\mu_1-\hat\mu_2)}{\hat\mu_2-\hat\mu_1^2} \quad\text{and}\quad \hat\alpha_2 = \frac{(1-\hat\mu_1)(\hat\mu_1-\hat\mu_2)}{\hat\mu_2-\hat\mu_1^2}.$$
(A numerical check of these formulas is sketched after Problem 15 below.)

12. Let $\hat P$ be the empirical distribution. The key point is that for any $j = 1,\dots,r$, the $j$-th sample moment $\hat\mu_j$ is the $j$-th moment of $\hat P$. Indeed,
$$\hat\mu_j = \frac{1}{n}\sum_{i=1}^n X_i^j = \frac{1}{n}\sum_{i=1}^n\sum_{l=1}^k 1_{\{X_i=v_l\}}X_i^j = \frac{1}{n}\sum_{i=1}^n\sum_{l=1}^k 1_{\{X_i=v_l\}}v_l^j$$
$$= \sum_{l=1}^k v_l^j\,\frac{1}{n}\sum_{i=1}^n 1_{\{X_i=v_l\}} = \sum_{l=1}^k v_l^j\,\hat P\{X=v_l\} = \hat E X^j.$$

Section 2.2.

1. By assumption, $g(\theta,t) = (\theta/2)t^2$. The normal equation is
$$\sum_{i=1}^n \frac{\partial g}{\partial\theta}(\hat\theta,t_i)\,Y_i = \sum_{i=1}^n \frac{\partial g}{\partial\theta}(\hat\theta,t_i)\,g(\hat\theta,t_i).$$
Hence
$$\hat\theta = 2\sum_{i=1}^n t_i^2 Y_i \Big/ \sum_{i=1}^n t_i^4.$$

6. (a). $g(\alpha,z) = \alpha z$. By the normal equation,
$$\sum_{i=1}^n z_i Y_i = \hat\alpha\sum_{i=1}^n z_i^2,$$
and therefore
$$\hat\alpha = \sum_{i=1}^n z_i Y_i \Big/ \sum_{i=1}^n z_i^2.$$

(b). By Theorem 1.4.3, p. 38, the best zero-intercept linear predictor of $Y$ is
$$\mu_L(z) = \frac{E(ZY)}{EZ^2}\,z.$$
As $E(ZY)$ and $EZ^2$ are unknown, we use their estimators
$$\frac{1}{n}\sum_{i=1}^n Z_i Y_i \quad\text{and}\quad \frac{1}{n}\sum_{i=1}^n Z_i^2$$
instead. Therefore we have $\mu_L(z) = \hat\alpha z$, with $\hat\alpha$ as in part (a).

10. Under certain conditions (such as differentiability), the main step is to solve the likelihood equation
$$\sum_{i=1}^n \nabla_\theta \log f(X_i,\hat\theta) = 0.$$

(a)
$$\sum_{i=1}^n\Big(\frac{1}{\hat\theta} - X_i\Big) = 0, \qquad \hat\theta = n\Big/\sum_{i=1}^n X_i = \bar X^{-1}.$$

(b)
$$\sum_{i=1}^n\Big(\frac{1}{\hat\theta} + \log c - \log X_i\Big) = 0, \qquad \hat\theta = n\Big/\sum_{i=1}^n \log(c^{-1}X_i).$$

(c) This case is different from the above because of non-differentiability. The sample density is
$$p(\vec X,\theta) = \begin{cases} c^n\theta^{nc}\prod_{i=1}^n X_i^{-(c+1)}, & \min\{X_1,\dots,X_n\}\ge\theta,\\ 0, & \text{otherwise,}\end{cases}$$
and the maximizer $\hat\theta$ of $p(\vec X,\theta)$ is
$$\hat\theta = \min\{X_1,\dots,X_n\}.$$

(d)
$$\sum_{i=1}^n\Big(\frac{1}{2\hat\theta} + \frac{1}{2\sqrt{\hat\theta}}\,\log X_i\Big) = 0, \qquad \hat\theta = \frac{n^2}{\big(\log\prod_{i=1}^n X_i^{-1}\big)^2}.$$

(e)
$$\sum_{i=1}^n\Big(-\frac{2}{\hat\theta} + \frac{X_i^2}{\hat\theta^3}\Big) = 0, \qquad \hat\theta = \sqrt{\frac{1}{2n}\sum_{i=1}^n X_i^2}.$$

(f)
$$\sum_{i=1}^n\Big(\frac{1}{\hat\theta} - X_i^c\Big) = 0, \qquad \hat\theta = \frac{n}{\sum_{i=1}^n X_i^c}.$$
(Two of these closed forms are checked numerically after Problem 15 below.)

11. (a)
$$f(x,\mu,\sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}.$$
We solve
$$\sum_{i=1}^n\frac{\partial}{\partial\mu}\log f(X_i,\mu,\sigma^2) = 0 \quad\text{and}\quad \sum_{i=1}^n\frac{\partial}{\partial\sigma^2}\log f(X_i,\mu,\sigma^2) = 0,$$
that is,
$$\sum_{i=1}^n\frac{X_i-\mu}{\sigma^2} = 0 \quad\text{and}\quad \sum_{i=1}^n\Big(-\frac{1}{2\sigma^2} + \frac{(X_i-\mu)^2}{2\sigma^4}\Big) = 0.$$
The solution is
$$\hat\mu = \bar X \quad\text{and}\quad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i-\bar X)^2.$$

(b) When $\bar X > 0$, one can see that
$$\hat\mu = \bar X \quad\text{and}\quad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i-\bar X)^2$$
is the maximizer of the sample density $p(\vec X,\mu,\sigma^2)$. When $\bar X \le 0$,
$$\frac{\partial}{\partial\mu}\log p(\vec X,\mu,\sigma^2) = \sum_{i=1}^n\frac{X_i-\mu}{\sigma^2} \le 0 \qquad\text{for every }\mu\ge 0,$$
so $p(\vec X,\mu,\sigma^2)$ is decreasing as a function of $\mu\in[0,\infty)$, and to maximize $p(\vec X,\mu,\sigma^2)$ we must take $\mu = 0$. To find $\sigma^2$ we solve
$$\sum_{i=1}^n\Big(-\frac{1}{2\sigma^2} + \frac{X_i^2}{2\sigma^4}\Big) = 0,$$
which gives $\sigma^2 = \frac{1}{n}\sum_{i=1}^n X_i^2$. In conclusion,
$$\hat\mu = \max\{\bar X,0\} \quad\text{and}\quad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n\big(X_i - \max\{\bar X,0\}\big)^2.$$

15. By the factorization theorem,
$$p(\vec X,\theta) = h(\vec X)\,g\big(T(\vec X),\theta\big).$$
Therefore $p(\vec X,\theta)$ and $g(T(\vec X),\theta)$ have the same maximizer(s) in $\theta$. Finally, the conclusion follows from the obvious fact that the maximizer of $g(T(\vec X),\theta)$ over $\theta$ is a function of $T(\vec X)$.
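Remark (added, not part of the original solutions). The moment estimators of Problem 3 can be sanity-checked numerically on simulated Beta data. The sketch below assumes numpy is available; the parameter values and variable names are illustrative only.

```python
# Added sanity check: simulate Beta(2, 5) data and confirm that the Problem 3
# moment estimators recover the parameters.  Assumes numpy is installed.
import numpy as np

rng = np.random.default_rng(0)
alpha1, alpha2 = 2.0, 5.0                    # "true" Beta parameters (illustrative)
X = rng.beta(alpha1, alpha2, size=100_000)

mu1 = X.mean()                               # first sample moment, mu_hat_1
mu2 = (X ** 2).mean()                        # second sample moment, mu_hat_2

# Method-of-moments estimates from Problem 3
alpha1_hat = mu1 * (mu1 - mu2) / (mu2 - mu1 ** 2)
alpha2_hat = (1 - mu1) * (mu1 - mu2) / (mu2 - mu1 ** 2)

print(alpha1_hat, alpha2_hat)                # should be close to (2.0, 5.0)
```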
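Similarly, two of the closed-form MLEs from Problem 10 can be compared with a direct numerical maximization of the log-likelihood. The score equations above for parts (a) and (e) correspond to the exponential density $\theta e^{-\theta x}$ and the Rayleigh density $(x/\theta^2)e^{-x^2/(2\theta^2)}$, which is what the sketch below uses; it is an added check, not part of the original solutions, and assumes numpy and scipy.

```python
# Added check that the closed-form MLEs from Problem 10 (a) and (e) agree with
# a numerical maximizer of the log-likelihood.  Parameter values are arbitrary.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Part (a): density theta * exp(-theta * x); closed form theta_hat = 1 / X_bar.
X = rng.exponential(scale=1 / 3.0, size=500)          # true theta = 3
theta_a_closed = 1 / X.mean()

def neg_loglik_a(t):
    # negative log-likelihood: -(n log t - t * sum(X_i))
    return -(len(X) * np.log(t) - t * X.sum())

theta_a_numeric = minimize_scalar(neg_loglik_a, bounds=(1e-6, 100.0), method="bounded").x
print(theta_a_closed, theta_a_numeric)                # the two values should agree

# Part (e): Rayleigh density (x / theta^2) exp(-x^2 / (2 theta^2));
# closed form theta_hat = sqrt(sum(X_i^2) / (2n)).
Y = rng.rayleigh(scale=2.0, size=500)                 # true theta = 2
theta_e_closed = np.sqrt((Y ** 2).sum() / (2 * len(Y)))

def neg_loglik_e(t):
    # negative log-likelihood of the Rayleigh sample
    return -(np.sum(np.log(Y) - 2 * np.log(t)) - (Y ** 2).sum() / (2 * t ** 2))

theta_e_numeric = minimize_scalar(neg_loglik_e, bounds=(1e-6, 100.0), method="bounded").x
print(theta_e_closed, theta_e_numeric)                # should also agree
```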
37. Write
$$D(\theta_0,\theta_0) = -E_{\theta_0}\log p(\vec X,\theta_0) = -E_{\theta_0}\sum_{i=1}^n\Big(-\log(\sqrt{2\pi}\,\sigma) - \frac{(X_i-\theta_0)^2}{2\sigma^2}\Big) = n\log(\sqrt{2\pi}\,\sigma) + \frac{n}{2}.$$
Then, since $E_{\theta_0}(X_i-\theta)^2 = \sigma^2 + (\theta-\theta_0)^2$,
$$K(\theta_0,\theta) = D(\theta_0,\theta) - D(\theta_0,\theta_0) = -E_{\theta_0}\sum_{i=1}^n\Big(-\log(\sqrt{2\pi}\,\sigma) - \frac{(X_i-\theta)^2}{2\sigma^2}\Big) - D(\theta_0,\theta_0)$$
$$= n\log(\sqrt{2\pi}\,\sigma) + \frac{n}{2\sigma^2}\,\sigma^2 + \frac{n}{2\sigma^2}(\theta-\theta_0)^2 - n\log(\sqrt{2\pi}\,\sigma) - \frac{n}{2} = \frac{n(\theta-\theta_0)^2}{2\sigma^2}.$$
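Remark (added, not part of the original solution). Since $K(\theta_0,\theta) = E_{\theta_0}\big[\log p(\vec X,\theta_0) - \log p(\vec X,\theta)\big]$, the closed form $n(\theta-\theta_0)^2/(2\sigma^2)$ can be verified by Monte Carlo. The sketch below assumes numpy; the sample size and parameter values are arbitrary.

```python
# Added Monte Carlo check of K(theta0, theta) = n (theta - theta0)^2 / (2 sigma^2)
# for n i.i.d. N(theta, sigma^2) observations, as derived in Problem 37.
import numpy as np

rng = np.random.default_rng(2)
n, sigma, theta0, theta = 10, 1.5, 0.0, 0.7          # arbitrary illustrative values

def log_joint(x, mean):
    # log of the joint N(mean, sigma^2) density, summed over the n coordinates
    return (-np.log(np.sqrt(2 * np.pi) * sigma)
            - (x - mean) ** 2 / (2 * sigma ** 2)).sum(axis=1)

samples = rng.normal(loc=theta0, scale=sigma, size=(100_000, n))  # draws under theta0
K_mc = np.mean(log_joint(samples, theta0) - log_joint(samples, theta))
K_exact = n * (theta - theta0) ** 2 / (2 * sigma ** 2)

print(K_mc, K_exact)     # the Monte Carlo value should be close to the exact one
```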