Study Guide – STAT 543, Exam 2

Below is a rough outline of the material covered on the second exam. This corresponds to material in
the textbook from Chapter 6 and Chapter 8 (as well as parts of Chapter 7 on finding UMVUEs). For
practice, you should work through problems from the textbook in these sections.
1. Data Reduction Principles
(a) Sufficiency: the distribution of X = (X1, X2, . . . , Xn) given S doesn’t depend on θ
i. Factorization Theorem (for determining sufficiency)
ii. 1-1 functions of sufficient statistics are sufficient
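For instance (a standard Bernoulli example, not from the text): if X1, . . . , Xn are iid Bernoulli(θ), then f(x|θ) = θ^(Σxi) (1 − θ)^(n−Σxi) · 1 = g(Σxi, θ) · h(x) with h(x) = 1, so by the Factorization Theorem in i., S = ΣXi is sufficient for θ.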
(b) Minimal sufficiency
i. Theorem (for determining minimal sufficiency): Let f(x|θ) be the joint pdf/pmf of
X1, . . . , Xn, θ ∈ Θ ⊂ R^p. Let T = (T1, . . . , Tk) be a statistic such that: for every
two sample points x and y, {θ : f(x|θ) > 0} = {θ : f(y|θ) > 0} and f(x|θ)/f(y|θ)
is constant over θ ∈ {θ : f(x|θ) > 0} if and only if T(x) = T(y). Then, T is minimal
sufficient for θ.
ii. Minimal sufficient statistics from exponential family (see below)
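For example (a standard normal-mean illustration, not from the text): for X1, . . . , Xn iid N(θ, 1), f(x|θ)/f(y|θ) = exp{nθ(x̄ − ȳ) − (Σxi² − Σyi²)/2}, which is constant in θ if and only if x̄ = ȳ; so by the theorem in i., T = X̄ is minimal sufficient for θ.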
(c) Completeness of a statistic T (or equivalently the family {f_T(t|θ) : θ ∈ Θ} of distributions)
i. Sufficiency & (bounded) completeness together imply minimal sufficiency
(d) Exponential families: definition and theorem for finding complete, sufficient statistics
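For instance (a standard Poisson example, not from the text): the Poisson(θ) pmf can be written f(x|θ) = (1/x!) e^(−θ) exp{(log θ)x}, a one-parameter exponential family with t(x) = x; since the natural parameter log θ ranges over an open interval, for an iid sample S = ΣXi is a complete, sufficient statistic.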
(e) Ancillary statistics
i. Basu’s Theorem: If S is a complete, sufficient statistic & T is an ancillary statistic, then
S, T are independent (no matter the data generating θ)
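A quick numerical check of Basu’s Theorem (my own sketch, not from the text; numpy assumed, values illustrative): for X1, . . . , Xn iid N(θ, 1), X̄ is complete and sufficient for θ while the sample variance is ancillary, so the two are independent and their empirical correlation should be near 0.

# Sketch: empirical check of Basu's Theorem for the N(theta, 1) location family.
# Xbar is complete sufficient for theta; the sample variance is ancillary
# (its distribution does not involve theta), so the two are independent.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.5, 10, 100_000            # arbitrary illustrative values
x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
xbar = x.mean(axis=1)                        # complete sufficient statistic
s2 = x.var(axis=1, ddof=1)                   # ancillary statistic
print("corr(Xbar, S^2) approx", np.corrcoef(xbar, s2)[0, 1])   # should be close to 0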
2. Sufficiency and UMVUEs
(a) Rao-Blackwell Theorem (the conditional expectation T* = E(T|S) of a UE T of γ(θ) given a
sufficient statistic S is an unbiased estimator of γ(θ) with Var_θ(T*) ≤ Var_θ(T) for all θ; and
if Var_θ0(T*) = Var_θ0(T) at some θ0, then it must be that P_θ0(T = T*) = 1.)
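A small simulation sketch of the Rao-Blackwell improvement (my own example, not from the text; numpy assumed, values illustrative): for X1, . . . , Xn iid Bernoulli(θ), T = X1 is a UE of θ, S = ΣXi is sufficient, and T* = E(T|S) = S/n = X̄; both are unbiased, but T* has much smaller variance.

# Sketch: Rao-Blackwellization of T = X1 using the sufficient statistic S = sum(X_i)
# in the Bernoulli(theta) model, where E(X1 | S) = S/n.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 20, 200_000            # arbitrary illustrative values
x = rng.binomial(1, theta, size=(reps, n))
T = x[:, 0].astype(float)                    # crude unbiased estimator of theta
T_star = x.mean(axis=1)                      # E(T | S) = S/n, the Rao-Blackwellized version
print("means    :", T.mean(), T_star.mean())   # both approximately theta
print("variances:", T.var(), T_star.var())     # Var(T*) <= Var(T)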
(b) Lehmann-Scheffé Theorem (getting the UMVUE based on a complete, sufficient statistic)
i. Method I for getting the UMVUE of γ(θ): Find a UE of γ(θ) that is a function of a
complete, sufficient statistic
ii. Method II for getting the UMVUE of γ(θ): start with a UE T of γ(θ) & then the UMVUE is
E(T|S), where S is complete, sufficient
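For example (a standard Poisson illustration, not from the text; numpy assumed, values illustrative): to estimate γ(θ) = e^(−θ) from X1, . . . , Xn iid Poisson(θ), Method II starts with the UE T = 1{X1 = 0} and conditions on the complete, sufficient statistic S = ΣXi; since X1 | S = s ~ Binomial(s, 1/n), the UMVUE is E(T|S) = ((n − 1)/n)^S. The sketch below checks this by simulation.

# Sketch: Method II (Lehmann-Scheffe) UMVUE of exp(-theta) in the Poisson(theta) model.
# T = 1{X1 = 0} is unbiased; conditioning on S = sum(X_i) gives ((n-1)/n)**S.
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 1.7, 15, 200_000                  # arbitrary illustrative values
x = rng.poisson(theta, size=(reps, n))
S = x.sum(axis=1)
T = (x[:, 0] == 0).astype(float)                   # crude unbiased estimator
umvue = ((n - 1) / n) ** S                         # E(T | S), the UMVUE
print("target exp(-theta):", np.exp(-theta))
print("mean of T, UMVUE  :", T.mean(), umvue.mean())
print("variances         :", T.var(), umvue.var())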
3. Testing of Hypotheses
(a) Definitions:
i. General notations: Null/alternative hypothesis, simple/composite hypothesis, test function/rule, simple test function, rejection region, acceptance region
ii. Probability related: Type I, Type II errors, power function (power), size or level of test
(b) Most Powerful (MP) tests of H0 : θ = θ0 vs H1 : θ = θ1
i. Definition of MP tests & Neyman-Pearson lemma for finding MP tests, i.e., an MP size
α test for H0 : θ = θ0 vs H1 : θ = θ1 is given by

    ϕ(x) = 1   if f(x|θ1) > k f(x|θ0)
           γ   if f(x|θ1) = k f(x|θ0)
           0   if f(x|θ1) < k f(x|θ0)

where γ ∈ [0, 1] and 0 ≤ k ≤ ∞ are constants satisfying E_θ0 ϕ(X) = α
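As a computational sketch (my own binomial example, not from the text; scipy assumed, and n, α, θ0, θ1 are arbitrary illustrative values): since f(x|θ1)/f(x|θ0) is increasing in x for X ~ Binomial(n, θ) with θ1 > θ0, the MP test above rejects for large x, and k and γ can be solved for numerically so that the size is exactly α.

# Sketch: cutoff and randomization probability gamma in the Neyman-Pearson MP test
# of H0: theta = 0.5 vs H1: theta = 0.7 with X ~ Binomial(n, theta).
# The likelihood ratio is increasing in x, so the test rejects when x > c and
# rejects with probability gamma when x = c.
from scipy.stats import binom

n, alpha, theta0, theta1 = 10, 0.05, 0.5, 0.7      # illustrative values
c = min(x for x in range(n + 1) if binom.sf(x, n, theta0) <= alpha)
gamma = (alpha - binom.sf(c, n, theta0)) / binom.pmf(c, n, theta0)
size = binom.sf(c, n, theta0) + gamma * binom.pmf(c, n, theta0)      # E_theta0[phi(X)]
power = binom.sf(c, n, theta1) + gamma * binom.pmf(c, n, theta1)     # E_theta1[phi(X)]
print("c =", c, " gamma =", round(gamma, 4),
      " size =", round(size, 4), " power =", round(power, 4))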
(c) Two methods for finding Uniformly Most Powerful (UMP) size α tests of H0 : θ ∈ Θ0 vs
H1 : θ ∉ Θ0
i. Based on the Neyman-Pearson lemma: Suppose you can pick some θ0 ∈ Θ0 so that 1)
the MP test ϕ(x) from the Neyman-Pearson lemma for H0 : θ = θ0 vs H1 : θ = θ1 is
the same, no matter what alternative θ1 you choose; and 2) the size of ϕ(x) is α, i.e.,
max_{θ∈Θ0} E_θ ϕ(X) = α. Then ϕ(x) is UMP for H0 : θ ∈ Θ0 vs H1 : θ ∉ Θ0
ii. Using the Monotone Likelihood Ratio property (in a real-valued statistic T = t(X)): UMP
tests for “H0 : θ ≤ θ0 vs H1 : θ > θ0” or “H0 : θ ≥ θ0 vs H1 : θ < θ0”, where θ is
real-valued, in terms of T = t(X)
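A sketch of method ii for a standard case (my own example, not from the text; numpy/scipy assumed, values illustrative): for X1, . . . , Xn iid N(θ, σ²) with σ known, the family has MLR in T = X̄, so the UMP size-α test of H0 : θ ≤ θ0 vs H1 : θ > θ0 rejects when X̄ exceeds θ0 + z_α σ/√n.

# Sketch: UMP size-alpha test of H0: theta <= theta0 vs H1: theta > theta0
# for the N(theta, sigma^2) mean with sigma known, using the MLR property in Xbar.
import numpy as np
from scipy.stats import norm

theta0, sigma, n, alpha = 0.0, 1.0, 25, 0.05       # arbitrary illustrative values
z = norm.ppf(1 - alpha)
cutoff = theta0 + z * sigma / np.sqrt(n)           # reject H0 when Xbar > cutoff

def power(theta):
    """P_theta(Xbar > cutoff); equals alpha at theta0 and increases in theta."""
    return norm.sf((cutoff - theta) / (sigma / np.sqrt(n)))

print("cutoff:", round(cutoff, 4))
print("power at theta0, theta0+0.3, theta0+0.6:",
      [round(power(t), 4) for t in (theta0, theta0 + 0.3, theta0 + 0.6)])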
(d) Likelihood ratio statistics and tests for H0 : θ ∈ Θ0 ⊂ R^p vs H1 : θ ∉ Θ0
i. Reject H0 if λ(x) = max_{θ∈Θ0} f(x|θ) / max_{θ∈Θ} f(x|θ) is too small, i.e., the likelihood
ratio test is

    ϕ(x) = 1   if λ(x) < k
           γ   if λ(x) = k
           0   if λ(x) > k

where γ ∈ [0, 1] and 0 ≤ k ≤ 1 are constants determined by max_{θ∈Θ0} E_θ ϕ(X) = α.
ii. Large sample properties of the likelihood ratio test for hypotheses “H0 : θ1 = θ10, . . . , θr = θr0”
vs “H1 : θi ≠ θi0 for some 1 ≤ i ≤ r” (where r ≤ p and the parameter θ = (θ1, . . . , θp) has p
components), i.e., −2 log λ(x) has a large sample χ²(r) distribution for calibrating tests
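A sketch of computing and calibrating −2 log λ(x) (my own Poisson example, not from the text; numpy/scipy assumed, values illustrative): for X1, . . . , Xn iid Poisson(θ) and H0 : θ = θ0 (so r = 1), the MLE is θ̂ = X̄ and −2 log λ(x) = 2n[θ0 − X̄ + X̄ log(X̄/θ0)], which is compared with the χ²(1) upper-α quantile.

# Sketch: likelihood ratio test of H0: theta = theta0 for X_1,...,X_n iid Poisson(theta).
# MLE is xbar, and -2 log lambda = 2n*(theta0 - xbar + xbar*log(xbar/theta0)),
# calibrated against the chi-square(1) distribution for large n.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
theta_true, theta0, n, alpha = 2.6, 2.0, 40, 0.05   # arbitrary illustrative values
x = rng.poisson(theta_true, size=n)
xbar = x.mean()
stat = 2 * n * (theta0 - xbar + xbar * np.log(xbar / theta0))   # -2 log lambda
crit = chi2.ppf(1 - alpha, df=1)                                # r = 1 constraint
print("-2 log lambda =", round(stat, 3), " critical value =", round(crit, 3))
print("reject H0" if stat > crit else "fail to reject H0")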
(e) Bayes tests of H0 : θ ∈ Θ0 vs H1 : θ ∉ Θ0 based on prior pdf π(θ), i.e., reject if posterior
probability P(θ ∉ Θ0 | x) = ∫_{Θ\Θ0} f_{θ|x}(θ) dθ ≥ 1/2
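A sketch of a Bayes test in a conjugate setting (my own example, not from the text; scipy assumed, values illustrative): with X | θ ~ Binomial(n, θ), a Beta(a, b) prior, and H0 : θ ≤ 1/2, the posterior is Beta(a + x, b + n − x), and H0 is rejected when P(θ > 1/2 | x) ≥ 1/2.

# Sketch: Bayes test of H0: theta <= 0.5 vs H1: theta > 0.5 with a Beta(a, b) prior
# and Binomial(n, theta) data; the posterior is Beta(a + x, b + n - x).
from scipy.stats import beta

a, b = 1.0, 1.0            # uniform prior (illustrative choice)
n, x = 20, 14              # observed data: 14 successes out of 20 (illustrative)
post_prob_H1 = beta.sf(0.5, a + x, b + n - x)   # P(theta > 0.5 | x)
print("P(theta not in Theta0 | x) =", round(post_prob_H1, 4))
print("reject H0" if post_prob_H1 >= 0.5 else "fail to reject H0")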