Stat 643 Final Exam
December 20, 2002
Prof. Vardeman
1. Suppose that $X \sim \mathrm{N}(\theta, 1)$ where $\theta \in \Theta = \mathbb{R}$. Consider first a decision problem where $\mathcal{A} = \Theta$ and

$$L(\theta, a) = (1 - \theta a)^2 .$$

a) Find a generalized Bayes rule in this problem against Lebesgue measure on $\Theta$.

b) Argue that every finite nonrandomized decision rule $\delta(x)$ is Bayes versus a point mass prior concentrated at 0.

c) Argue from first principles (you don't need a theorem from class for this) that $\delta^*(x) \equiv 0$ is minimax in this decision problem.

Consider now the squared error loss estimation of $1/\theta$, i.e. a second decision problem with $\Theta = \mathbb{R} \setminus \{0\}$, $\mathcal{A} = \Theta$ and loss $L^*(\theta, a) = \left( \frac{1}{\theta} - a \right)^2$.

d) Argue that for $\lambda \in (0,1)$ the estimator

$$\delta_\lambda(x) = \frac{x}{1 + \lambda x^2}$$

is admissible for this second problem.
••••••••••••••••••••••••••••••••••••••••••••••••••••••
Hint for part d): It is a fact that you need not prove that under the first loss function, $L$, the rule

$$\delta(x) = \frac{x}{1 + x^2 \left( \dfrac{\tau^2}{1 + \tau^2} \right)}$$

is Bayes versus a $\mathrm{N}(0, \tau^2)$ prior.
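
For orientation, here is a brief sketch (not something the exam asks you to prove) of where this rule comes from, via the standard normal-normal conjugate update. Under the $\mathrm{N}(0, \tau^2)$ prior,

$$\theta \mid X = x \;\sim\; \mathrm{N}(\rho x, \, \rho), \qquad \rho = \frac{\tau^2}{1 + \tau^2},$$

and the posterior expected loss $\mathrm{E}[(1 - \theta a)^2 \mid x] = 1 - 2a\,\mathrm{E}[\theta \mid x] + a^2\,\mathrm{E}[\theta^2 \mid x]$ is minimized at

$$a = \frac{\mathrm{E}[\theta \mid x]}{\mathrm{E}[\theta^2 \mid x]} = \frac{\rho x}{\rho + \rho^2 x^2} = \frac{x}{1 + \rho x^2} .$$

Note that $\rho$ runs through $(0,1)$ as $\tau^2$ runs through $(0, \infty)$, which is how this hint connects to the family $\delta_\lambda$ of part d).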
••••••••••••••••••••••••••••••••••••••••••••••••••••••
2. Suppose that $X \sim \text{Bernoulli}(\theta)$ where $\Theta = \left\{ \frac{1}{3}, \frac{2}{3} \right\}$. Consider a decision problem where $\mathcal{A} = (0,1)$ (the unit interval) and

$$L\left( \tfrac{1}{3}, a \right) = a^2$$
$$L\left( \tfrac{2}{3}, a \right) = 1 - a$$

a) Argue that the set of nonrandomized decision rules is essentially complete in this problem.

b) Consider a behavioral decision rule $\phi_x$ where

$\phi_0$ is the Uniform$(0,1)$ distribution
$\phi_1$ has pdf $f(a) = 2a \, I[0 < a < 1]$

Identify a nonrandomized rule that is better than it (and argue carefully that your candidate does the job).

c) Find the risk point $\left( R\left( \tfrac{1}{3}, \delta \right), R\left( \tfrac{2}{3}, \delta \right) \right)$ corresponding to an arbitrary nonrandomized decision rule $\delta$.

d) Set up but do not try to solve an equation or equations needed to identify a least favorable prior distribution in this problem (a "$\pi$" in the notation below).
••••••••••••••••••••••••••••••••••••••••••••••••••••••
Hint for part d): You may use the following without proof. For a prior distribution $G$, abbreviate $\pi = G\left( \left\{ \tfrac{1}{3} \right\} \right)$ and $1 - \pi = G\left( \left\{ \tfrac{2}{3} \right\} \right)$. For $\delta_G$ Bayes versus $G$,

$$\delta_G(0) = \begin{cases} \dfrac{1 - \pi}{4\pi} & \text{if } \pi > \frac{1}{5} \\ 1 & \text{if } \pi \le \frac{1}{5} \end{cases}$$

and

$$\delta_G(1) = \begin{cases} \dfrac{1 - \pi}{\pi} & \text{if } \pi > \frac{1}{2} \\ 1 & \text{if } \pi \le \frac{1}{2} \end{cases}$$
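
For orientation, here is a brief sketch (again, nothing you need to prove) of where the $X = 0$ case comes from; the $X = 1$ case is entirely parallel. Writing $g = P\left( \theta = \tfrac{1}{3} \mid X = 0 \right)$,

$$g = \frac{\pi \cdot \tfrac{2}{3}}{\pi \cdot \tfrac{2}{3} + (1 - \pi) \cdot \tfrac{1}{3}} = \frac{2\pi}{1 + \pi},$$

and the posterior expected loss of action $a$ is $g a^2 + (1 - g)(1 - a)$, minimized at

$$a = \frac{1 - g}{2g} = \frac{1 - \pi}{4\pi},$$

provided this value lies in the action space, i.e. provided $\pi > \frac{1}{5}$; for $\pi \le \frac{1}{5}$ the expected loss is decreasing on all of $(0,1)$ and the Bayes action is (the limit) 1.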
••••••••••••••••••••••••••••••••••••••••••••••••••••••
3. For $\theta \in \Theta = \{1, 2\}$ let $\nu_\theta$ be $\mathrm{N}(\theta, 1)$ measure and $\eta_\theta$ be Poisson$(\theta)$ measure. You may take as given that for testing $H_0\!: \theta = 1$ vs $H_a\!: \theta = 2$ based on $X \sim \nu_\theta$, most powerful level $\alpha$ tests reject for large $X$, and that the same is true for testing based on $X \sim \eta_\theta$. In fact, tables of normal and Poisson probabilities can be consulted to see that a best size $\alpha = .08$ test in the first case rejects if $X > 2.405$ and in the second case rejects if $X \ge 5$.

Suppose that $X \sim \frac{1}{2} \nu_\theta + \frac{1}{2} \eta_\theta$ and that one wishes to test $H_0\!: \theta = 1$ vs $H_a\!: \theta = 2$ based on $X$. (Notice that with $P_\theta$ probability 1, one knows after observing $X$ whether it was the normal distribution or the Poisson distribution that generated $X$.)

A possible test here is

$$\phi(x) = I[x \notin \mathbb{Z} \text{ and } x > 2.405] + I[x \in \mathbb{Z} \text{ and } x \ge 5]$$

(where $\mathbb{Z}$ denotes the set of integers).

a) Is this test of size $\alpha = .08$? Explain.

b) Is this test most powerful of its size? Argue carefully one way or the other.
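
For part a), a minimal numerical check of the two pieces of the size calculation may be helpful; the sketch below assumes the reconstruction of $\phi$ above and uses scipy (it is, of course, no substitute for the requested explanation):

```python
# Size of phi under H0 (theta = 1): the mixture puts mass 1/2 on each component;
# phi rejects for non-integer x > 2.405 and for integer x >= 5.
from scipy.stats import norm, poisson

normal_piece = norm.sf(2.405, loc=1, scale=1)  # P_{N(1,1)}(X > 2.405), about .08
poisson_piece = poisson.sf(4, mu=1)            # P_{Poisson(1)}(X >= 5)
size = 0.5 * normal_piece + 0.5 * poisson_piece
print(normal_piece, poisson_piece, size)
```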
4. Here is a problem related to "scale counting." As a means of "automated" counting of a lot of discrete objects, I might weigh the lot and try to infer the number in the lot from the observed weight. Suppose that items have weights that are normally distributed with mean $\mu$ and standard deviation 1. In a calibration phase, I weigh a group of 50 of these items (the number being known from a "hand count") and observe the weight $X$. I subsequently weigh a large number, $M$, of these items and observe the weight $Y$. I might model $X \sim \mathrm{N}(50\mu, 50)$ independent of $Y \sim \mathrm{N}(M\mu, M)$.

Suppose further that I use a prior distribution for $(\mu, M)$ of independence, $\mu \sim \mathrm{N}(0, (1000)^2)$ and $M - 1 \sim \text{Poisson}(100)$.

Describe in as much detail as possible (given the time constraints here) a simulation-based way of approximating the posterior mean of $M$.
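
One concrete scheme, sketched here for reference using numpy/scipy (one possibility among several, not the unique intended answer): given $M$, the pair $(X, Y)$ is bivariate normal with mean $(0, 0)$ once $\mu$ is integrated out against its prior, so one can draw $M$ from its prior and attach self-normalized importance weights equal to this $\mu$-marginalized likelihood. The observed weights x_obs and y_obs below are hypothetical placeholders.

```python
# Self-normalized importance sampling for E[M | X = x_obs, Y = y_obs]:
# draw M from its prior, weight by the likelihood of (x_obs, y_obs) given M,
# with mu integrated out analytically ((X, Y) | M is mean-zero bivariate normal).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
s2 = 1000.0 ** 2                      # prior variance of mu
x_obs, y_obs = 151.2, 316.8           # hypothetical observed weights
n_draws = 20_000

M = 1 + rng.poisson(100.0, size=n_draws)   # draws of M from its prior

log_w = np.empty(n_draws)
for i, m in enumerate(M):
    cov = [[50.0 ** 2 * s2 + 50.0, 50.0 * m * s2],
           [50.0 * m * s2, m ** 2 * s2 + m]]      # Cov of (X, Y) given M = m
    log_w[i] = multivariate_normal.logpdf([x_obs, y_obs], mean=[0.0, 0.0], cov=cov)

w = np.exp(log_w - log_w.max())       # stabilize before exponentiating
print(np.sum(w * M) / np.sum(w))      # approximate posterior mean of M
```

With more time one could instead run a Gibbs or Metropolis sampler on $(\mu, M)$ jointly; the marginalized weighting above is simply the shortest scheme to state.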