Study Guide – STAT 543, Exam 1

Below is a tentative outline of the material covered on the first exam. This corresponds to material in the textbook from Chapter 7 and Chapter 10.1 (subsections 10.1.1, 10.1.2, 10.1.3). For practice, you should work problems from the textbook on these sections.

1. Finding estimators
   (a) Method of Moments Estimator (MME)
   (b) Maximum Likelihood Estimator (MLE)
   (c) Bayes Estimator (see below)

2. Evaluating and comparing estimators of γ(θ)
   (a) Some desirable qualities: unbiasedness (i.e., E_θ(T) = γ(θ), ∀ θ ∈ Θ) & small variance (precision)
       i.   Uniform minimum variance unbiased estimators (UMVUEs)
       ii.  Cramér-Rao lower bound (CRLB) for finding a UMVUE (see the first worked example after this outline)
       iii. Relative efficiency & efficiency of unbiased estimators
   (b) Mean squared error (MSE) criterion: MSE_θ(T) = E_θ[T − γ(θ)]² = Var_θ(T) + [b_θ(T)]², where the bias of T is b_θ(T) = E_θ(T) − γ(θ) (a small simulation illustrating this decomposition follows the outline)
   (c) Risk comparisons
       i.   Loss function L(t, θ) ≥ 0; risk function R_T(θ) = E_θ L(T, θ), the expected loss of T in estimating γ(θ)
       ii.  Definitions of “better” & “admissible” estimators with respect to risk
       iii. Principles for choosing estimators based on risk
            A. Minimax principle (general idea only): T_0 is minimax if max_{θ∈Θ} R_{T_0}(θ) = min_T max_{θ∈Θ} R_T(θ), i.e., T_0 has minimal maximum (worst-case) risk.
            B. Bayes principle: prior π(θ) for θ ∈ Θ (usually a pdf); Bayes risk BR_T = ∫_Θ R_T(θ) π(θ) dθ, assuming θ is continuous; the Bayes estimator T_0 has minimal Bayes risk, BR_{T_0} = min_T BR_T.
            C. Finding Bayes estimators
               C.1 Posterior pdf f_{θ|x}(θ) of θ on Θ, given x = (x_1, x_2, ..., x_n)
               C.2 The Bayes estimator minimizes the “posterior risk” E_{θ|x} L(h(x), θ) = ∫_Θ L(h(x), θ) f_{θ|x}(θ) dθ over all estimators T = h(X), for each fixed value x = (x_1, x_2, ..., x_n) of X = (X_1, X_2, ..., X_n)
               C.3 For squared error loss, the Bayes estimator is the posterior mean E_{θ|x}[γ(θ)]; for absolute error loss, it is the posterior median of γ(θ) given x (see the second worked example after this outline)

3. Large sample properties of estimators
   (a) Three definitions
       i.   Consistency: T_n → γ(θ) in probability as n → ∞
       ii.  Mean squared error consistency (MSEC): MSE_θ(T_n) → 0 as n → ∞
       iii. Asymptotically unbiased: E_θ(T_n) → γ(θ) as n → ∞
   (b) Asymptotic relative efficiency & asymptotic efficiency
   (c) Large sample properties of MLEs
       i. Consistency, asymptotic normality, asymptotic efficiency
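
Worked example 1 (an illustration for item 2(a)ii, not a problem taken from the textbook): using the CRLB to verify that an unbiased estimator is the UMVUE, in the familiar case of a normal mean with known variance.

```latex
% CRLB for the mean of a normal sample with known variance (illustrative example).
\[
X_1,\dots,X_n \ \overset{\text{iid}}{\sim}\ N(\mu,\sigma^2), \quad \sigma^2 \text{ known};
\qquad
I_n(\mu) \;=\; -E_\mu\!\left[\frac{\partial^2}{\partial\mu^2}\,
\log f(X_1,\dots,X_n;\mu)\right] \;=\; \frac{n}{\sigma^2},
\qquad
\mathrm{CRLB} \;=\; \frac{1}{I_n(\mu)} \;=\; \frac{\sigma^2}{n}.
\]
\[
E_\mu(\bar X) = \mu
\quad\text{and}\quad
\mathrm{Var}_\mu(\bar X) = \frac{\sigma^2}{n} = \mathrm{CRLB}
\;\Longrightarrow\;
\bar X \text{ is an efficient unbiased estimator of } \mu \text{, hence the UMVUE.}
\]
```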
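Worked example 2 (an illustration for item 2(c)iii.C, not a problem taken from the textbook): the Bayes estimator under squared error loss is the posterior mean, shown for Bernoulli data with a Beta prior.

```latex
% Bayes estimator of theta under squared error loss: Bernoulli data, Beta prior (illustrative example).
\[
X_1,\dots,X_n \mid \theta \ \overset{\text{iid}}{\sim}\ \mathrm{Bernoulli}(\theta),
\qquad \theta \sim \mathrm{Beta}(\alpha,\beta).
\]
\[
f_{\theta\mid x}(\theta) \;\propto\;
\theta^{\sum_i x_i}(1-\theta)^{\,n-\sum_i x_i}\,
\theta^{\alpha-1}(1-\theta)^{\beta-1}
\;\Longrightarrow\;
\theta \mid x \ \sim\ \mathrm{Beta}\!\left(\alpha+\textstyle\sum_i x_i,\;
\beta+n-\textstyle\sum_i x_i\right).
\]
\[
\text{Under squared error loss: }\quad
T_0(x) \;=\; E_{\theta\mid x}[\theta] \;=\; \frac{\alpha+\sum_i x_i}{\alpha+\beta+n}.
\]
```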
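Finally, a minimal Monte Carlo sketch of the MSE criterion in item 2(b), comparing two estimators of θ for a Uniform(0, θ) sample: the MME 2·X̄ and the MLE max(X_i). The model, θ, n, and replication count are my own choices for illustration, not values from the textbook.

```python
import numpy as np

# Monte Carlo comparison of two estimators of theta for a Uniform(0, theta) sample:
# the method-of-moments estimator 2*Xbar and the MLE max(X_i).
# (Illustrative sketch only; theta, n, and the number of replications are assumptions.)

rng = np.random.default_rng(543)
theta, n, reps = 5.0, 20, 100_000

x = rng.uniform(0.0, theta, size=(reps, n))
mme = 2.0 * x.mean(axis=1)   # unbiased, but relatively large variance
mle = x.max(axis=1)          # biased downward, but much smaller variance

for name, est in [("MME 2*Xbar ", mme), ("MLE max(Xi)", mle)]:
    bias = est.mean() - theta
    var = est.var()
    mse = np.mean((est - theta) ** 2)   # equals var + bias**2 up to rounding
    print(f"{name}: bias={bias:+.4f}  var={var:.4f}  "
          f"mse={mse:.4f}  var+bias^2={var + bias**2:.4f}")
```

The printed MSE matches Var + bias² for each estimator, and for this n the biased MLE has smaller MSE than the unbiased MME, which is exactly why estimators are compared by MSE (or, more generally, by risk) rather than by unbiasedness alone.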