Stat 643 Problems

1. Let $(\Omega, \mathcal{F})$ be a measurable space. Suppose that $\mu$, $\nu$ and $\rho$ are $\sigma$-finite positive measures on this space with $\nu \ll \rho$ and $\rho \ll \mu$.
a) Show that $\nu \ll \mu$ and $\frac{d\nu}{d\mu} = \frac{d\nu}{d\rho} \cdot \frac{d\rho}{d\mu}$ a.s. $\mu$.
b) Show that if $\mu \ll \rho$, then $\frac{d\mu}{d\rho} = \left(\frac{d\rho}{d\mu}\right)^{-1}$ a.s. $\mu$.
2. (Cressie) Let $\Omega = [-c, c]$ for some $c > 0$, $\mathcal{F}$ be the set of Borel sets and $P$ be Lebesgue measure divided by $2c$. For a subset of $\Omega$, define the symmetric set for $A$ as $-A = \{-\omega \mid \omega \in A\}$ and let $\mathcal{C} = \{A \in \mathcal{F} \mid A = -A\}$.
a) Show that $\mathcal{C}$ is a sub $\sigma$-algebra of $\mathcal{F}$.
b) Let $X$ be an integrable random variable. Find $E(X \mid \mathcal{C})$.
3. Let $\Omega = [0,1]^2$, $\mathcal{F}$ be the set of Borel sets and $P = \frac{1}{2}\Delta + \frac{1}{2}\mu$ for $\Delta$ a distribution placing a unit point mass at the point $(\frac{1}{2}, 1)$ and $\mu$ 2-dimensional Lebesgue measure. Consider the variable $X(\omega) = \omega_1$ and the sub $\sigma$-algebra of $\mathcal{F}$ generated by $X$, $\mathcal{C}$.
a) For $A \in \mathcal{F}$, find $E(I_A \mid \mathcal{C}) = P(A \mid \mathcal{C})$.
b) For $Y(\omega) = \omega_2$, find $E(Y \mid \mathcal{C})$.
4. Prove Result 1.
5. Prove the fact stated in the paragraph above Result 5 in the typed notes.
6. Show directly that the following families of distributions have countable equivalent subsets:
a) $\mathcal{P}$, for $P_\theta$ uniform on the integers $\theta - 1$ through $\theta + 1$, $\Theta = \{\ldots, -2, -1, 0, 1, 2, 3, \ldots\}$.
b) $\mathcal{P}$, for $P_\theta$ uniform on the 2-dimensional disk of radius $\theta$ centered at the origin, $\Theta = (0, \infty)$.
c) The Poisson$(\theta)$ family, $\Theta = (0, \infty)$.
7. Suppose that $X = (X_1, X_2, \ldots, X_n)$ has independent components, where each $X_i$ is generated as follows. For independent random variables $W_i \sim \text{Normal}(\mu, 1)$ and $Z_i \sim \text{Poisson}(\mu)$, $X_i = W_i$ with probability $p$ and $X_i = Z_i$ with probability $1 - p$. Suppose that $\mu \in [0, \infty)$. Use the factorization theorem and find low dimensional sufficient statistics in the cases that:
a) $p$ is known to be $\frac{1}{2}$, and
b) $p \in [0, 1]$ is unknown.
(In the first case the parameter space is $\Theta = \{\frac{1}{2}\} \times [0, \infty)$, while in the second it is $[0,1] \times [0, \infty)$.)
8. (Ferguson) Consider the probability distributions on $(\mathbb{R}^1, \mathcal{B}^1)$ defined as follows. For $\boldsymbol{\theta} = (\theta, p) \in \Theta = \mathbb{R} \times (0,1)$ and $X \sim P_{\boldsymbol{\theta}}$, suppose
$$P_{\boldsymbol{\theta}}[X = x] = \begin{cases} (1-p)p^{x-\theta} & \text{for } x = \theta, \theta+1, \theta+2, \ldots \\ 0 & \text{otherwise}. \end{cases}$$
Let $X_1, X_2, \ldots, X_n$ be iid according to $P_{\boldsymbol{\theta}}$.
a) Argue that the family $\{P_{\boldsymbol{\theta}}^n\}_{\boldsymbol{\theta} \in \Theta}$ is not dominated by a $\sigma$-finite measure, so that the factorization theorem cannot be applied to identify a sufficient statistic here.
b) Argue from first principles that the statistic $T(X) = (\min X_i, \sum X_i)$ is sufficient for the parameter $\boldsymbol{\theta}$.
c) Argue that the factorization theorem can be applied if $\theta$ is known (the first factor of the parameter space $\Theta$ is replaced by a single point) and identify a sufficient statistic for this case.
d) Argue that if $p$ is known, $\min X_i$ is a sufficient statistic.
9. Suppose that $X'$ is exponential with mean $\lambda^{-1}$ (i.e. has density $f_\lambda(x) = \lambda \exp(-\lambda x) I[x > 0]$ with respect to Lebesgue measure on $\mathbb{R}^1$), but that one only observes $X = X' I[X' > 1]$. (There is interval censoring below $x = 1$.)
a) Consider the measure $\mu$ on $\mathcal{X} = \{0\} \cup (1, \infty)$ consisting of a point mass of 1 at 0 plus Lebesgue measure on $(1, \infty)$. Give a formula for the R-N derivative of $P_\lambda^X$ wrt $\mu$ on $\mathcal{X}$.
b) Suppose that $X_1, X_2, \ldots, X_n$ are iid with the marginal distribution $P_\lambda^X$. Find a 2-dimensional sufficient statistic for this problem and argue that it is indeed sufficient.
c) Argue carefully that your statistic from b) is minimal sufficient.
10. Suppose that $X_1, X_2, \ldots, X_n$ are iid with marginal density with respect to Lebesgue measure on $(0, 2)$ of the form
$$f_\theta(x) = \begin{cases} \theta x & x \in (0, 1) \\ \frac{1}{2}(2 - \theta) & x \in [1, 2). \end{cases}$$
Find a low dimensional sufficient statistic and prove that it is minimal.
11. (Lehmann TPE page 65) Let $X_1, X_2, \ldots, X_m$ and $Y_1, Y_2, \ldots, Y_n$ be independent normal random variables with $EX = \xi$, $\text{Var}\,X = \sigma^2$, $EY = \eta$ and $\text{Var}\,Y = \tau^2$. Find minimal sufficient statistics for the following situations:
a) no restrictions on the parameters,
b) $\sigma = \tau$, and
c) $\xi = \eta$.
12. (Lehmann TPE page 65) Let $f$ be a positive integrable function over $(0, \infty)$ and let $p_\theta(x) = c(\theta) f(x) I[0 < x < \theta]$. If $X_1, X_2, \ldots, X_n$ are iid with the marginal density $p_\theta(x)$, show that $\max X_i$ is sufficient for $\theta$.
13. Schervish Exercise 2.8. And show that $Z$ is ancillary.
14. Schervish Exercise 2.13. (I think you need to assume that $\{x_i\}$ spans $\mathbb{R}^k$.)
15. Schervish Exercise 2.16.
16. Schervish Exercise 2.18.
17. Schervish Exercise 2.22. What is a first order ancillary statistic here?
18. Schervish Exercise 2.27.
19. Suppose that $\mathcal{S}$ is a convex subset of $[0, \infty)^k$. Argue that there exists a finite $\Theta$ decision problem that has $\mathcal{S}$ as its set of risk vectors. (Consider problems where $X$ is degenerate and carries no information.)
20. Consider the two state decision problem with $\Theta = \{1, 2\}$, $P_1$ the Bernoulli$(\frac{1}{4})$ distribution and $P_2$ the Bernoulli$(\frac{1}{2})$ distribution, $\mathcal{A} = \Theta$ and $L(\theta, a) = I[\theta \neq a]$.
a) Find the set of risk vectors for the four nonrandomized decision rules. Plot these in the plane. Sketch $\mathcal{S}$, the risk set for this problem.
b) For this problem, show explicitly that any element of $\mathcal{D}^*$ has a corresponding element of $\Phi^*$ with identical risk vector and vice versa.
c) Identify the set of all admissible risk vectors for this problem. Is there a minimal complete class for this decision problem? If there is one, what is it?
d) For each $p \in [0, 1]$, identify those risk vectors that are Bayes versus the prior $\pi = (p, 1-p)$. For which priors are there more than one Bayes rule?
e) Verify directly that the prescription "choose an action that minimizes the posterior expected loss" produces a Bayes rule versus the prior $\pi = (\frac{1}{2}, \frac{1}{2})$.
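As a numerical companion to part a) (not part of the original assignment), the four risk vectors can be enumerated directly. The sketch below is plain Python; all names are my own.

```python
from itertools import product

# P_1 is Bernoulli(1/4), P_2 is Bernoulli(1/2); L(theta, a) = I[theta != a].
# px[theta][x] = P_theta[X = x]
px = {1: {1: 0.25, 0: 0.75}, 2: {1: 0.5, 0: 0.5}}

def risk(rule, theta):
    # rule is a dict {x: action}; the 0-1 loss risk is P_theta[rule(X) != theta]
    return sum(p for x, p in px[theta].items() if rule[x] != theta)

# the four nonrandomized rules map each x in {0, 1} to an action in {1, 2}
rules = [dict(zip((0, 1), acts)) for acts in product((1, 2), repeat=2)]
vectors = [(risk(r, 1), risk(r, 2)) for r in rules]
print(vectors)
```

The printed points are the corners from which the risk set $\mathcal{S}$ is obtained by randomization (their convex hull).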
21. Consider a two state decision problem with $\Theta = \{1, 2\}$, where the observable $X = (X_1, X_2)$ has iid Bernoulli$(\frac{1}{4})$ coordinates if $\theta = 1$ and iid Bernoulli$(\frac{1}{2})$ coordinates if $\theta = 2$. Suppose that $\mathcal{A} = \Theta$ and $L(\theta, a) = I[\theta \neq a]$. Consider the behavioral decision rule $\phi_x$ defined by
$$\phi_x(\{1\}) = 1 \ \text{ if } x_1 = 0, \quad \text{and} \quad \phi_x(\{1\}) = \tfrac{1}{2} \text{ and } \phi_x(\{2\}) = \tfrac{1}{2} \ \text{ if } x_1 = 1.$$
a) Show that $\phi_x$ is inadmissible by finding a rule with a better risk function. (It may be helpful to figure out what the risk set is for this problem, in a manner similar to what you did in problem 20.)
b) Use the construction from Result 21 and find a behavioral decision rule that is a function of the sufficient statistic $X_1 + X_2$ and is risk equivalent to $\phi_x$. (Note that this rule is inadmissible.)
22. Consider the squared error loss estimation of $p \in (0, 1)$, based on $X \sim \text{Binomial}(n, p)$, and the two nonrandomized decision rules $\delta_1(x) = \frac{x}{n}$ and $\delta_2(x) = \frac{1}{2}\left(\frac{x}{n} + \frac{1}{2}\right)$. Let $\psi$ be a randomized decision function that chooses $\delta_1$ with probability $\frac{1}{2}$ and $\delta_2$ with probability $\frac{1}{2}$.
a) Write out expressions for the risk functions of $\delta_1$, $\delta_2$ and $\psi$.
b) Find a behavioral rule $\phi$ that is risk equivalent to $\psi$.
c) Identify a nonrandomized estimator that is strictly better than $\psi$ or $\phi$.
23. Suppose that $\Theta = \Theta_1 \times \Theta_2$ and that a decision rule $\phi$ is such that for each $\theta_2$, $\phi$ is admissible when the parameter space is $\Theta_1 \times \{\theta_2\}$. Show that $\phi$ is then admissible when the parameter space is $\Theta$.
24. Suppose that $w(\theta) > 0$ $\forall \theta$. Show that $\phi$ is admissible with loss function $L(\theta, a)$ iff it is admissible with loss function $w(\theta) L(\theta, a)$.
25. Let $P_0$ and $P_1$ be two distributions on $\mathcal{X}$ and $f_0$ and $f_1$ be densities of these with respect to some dominating $\sigma$-finite measure $\mu$. Consider the parametric family of distributions with parameter $p \in [0, 1]$ and densities wrt $\mu$ of the form
$$f_p(x) = (1 - p) f_0(x) + p f_1(x),$$
and suppose that $X$ has density $f_p$.
a) If $G(\{0\}) = G(\{1\}) = \frac{1}{2}$, find the squared error loss formal Bayes estimator of $p$ versus the prior $G$.
b) Suppose now that $G$ is the uniform distribution on $[0, 1]$. Find the squared error loss Bayes estimator of $p$ versus the prior $G$.
c) Argue that your estimator from b) is admissible.
26. For the model of problem 25, suppose that $P_0$ is the Binomial$(n, \frac{1}{4})$ distribution and $P_1$ is the Binomial$(n, \frac{3}{4})$ distribution. Find an ancillary statistic. (Hint: consider the probabilities that $X$ takes the values $0$ and $n$.) Is the family of problem 25 then necessarily boundedly complete?
27. (Ferguson) Prove or give a counterexample: If $\mathcal{C}_1$ and $\mathcal{C}_2$ are complete classes of decision rules, then $\mathcal{C}_1 \cap \mathcal{C}_2$ is essentially complete.
28. Problem 9, page 209 of Schervish.
29. (Ferguson and others) Suppose that $X \sim \text{Binomial}(n, p)$ and one wishes to estimate $p \in (0, 1)$. Suppose first that $L(p, a) = p^{-1}(1-p)^{-1}(p - a)^2$.
a) Show that $\frac{X}{n}$ is Bayes versus the uniform prior on $(0, 1)$.
b) Argue that $\frac{X}{n}$ is admissible in this decision problem.
Now consider ordinary squared error loss, $L(p, a) = (p - a)^2$.
c) Apply the result of problem 24 and prove that $\frac{X}{n}$ is admissible under this loss function as well.
30. Consider a two state decision problem where $\Theta = \mathcal{A} = \{0, 1\}$, $P_0$ and $P_1$ have respective densities $f_0$ and $f_1$ with respect to a dominating $\sigma$-finite measure $\mu$, and the loss function is $L(\theta, a)$.
a) For $G$ an arbitrary prior distribution, find a formal Bayes rule versus $G$.
b) Specialize your result from a) to the case where $L(\theta, a) = I[\theta \neq a]$. What connection does the form of these rules have to the theory of simple versus simple hypothesis testing?
31. (Ferguson and others) Suppose that one observes independent binomial random variables, $X \sim \text{Binomial}(n, p_1)$ and $Y \sim \text{Binomial}(n, p_2)$, and needs to estimate $p_1 - p_2$ with squared error loss, $L(p, a) = (p_1 - p_2 - a)^2$. (We will assume that the parameter space is $(0, 1)^2$ and that the action space is $(-1, 1)$.) Find the form of a rule Bayes against a prior uniform over the parameter space.
32. Consider a "linear loss two action" decision problem, where $\Theta = \mathbb{R}^1$, $\mathcal{A} = \{0, 1\}$ and
$$L(\theta, a) = (\theta - \theta_0) I[a = 0] I[\theta > \theta_0] + (\theta_0 - \theta) I[a = 1] I[\theta < \theta_0]$$
for some $\theta_0$ of interest. Find the form of a Bayes rule for this problem. (Hint: Consider the difference $L(\theta, 0) - L(\theta, 1)$.)
33. Problem 24, page 211 Schervish.
34. Problem 34, page 142 Schervish.
35. Problem 36, page 142 Schervish.
36. Problem 2, page 339 Schervish.
37. Problem 3, page 339 Schervish.
38. Problem 4, page 339 Schervish.
39. Problem 5, page 339 Schervish.
40. Problem 7, page 339 Schervish.
41. Consider the family of bivariate observations $X = (Y, Z)$ on $\mathcal{X} = (0, 1)^2$ with densities wrt two-dimensional Lebesgue measure
$$f_\eta(x) \propto \exp(\eta_1 y + \eta_2 z)$$
for a bivariate parameter vector $\eta = (\eta_1, \eta_2)$.
a) Find the natural parameter space for this family of distributions.
b) What is the normalizing constant $K(\eta)$ for this family? Use it to find $E_\eta Y$.
c) Find a UMVUE of the quantity $E_\eta Y_1 - E_\eta Z_1$ based on iid vectors $X_1, X_2, \ldots, X_n$ with marginal (bivariate) distribution specified by $f_\eta(x)$. Argue carefully that your estimator is UMVUE.
42. Suppose that $X \sim \text{Bernoulli}(p)$ and that one wishes to estimate $p$ with loss $L(p, a) = |p - a|$. Consider the estimator $\delta$ with $\delta(0) = \frac{1}{4}$ and $\delta(1) = \frac{3}{4}$.
a) Write out the risk function for $\delta$ and show that $R(p, \delta) \leq \frac{1}{4}$.
b) Show that there is a prior distribution placing all its mass on $\{0, \frac{1}{2}, 1\}$ against which $\delta$ is Bayes.
c) Prove that $\delta$ is minimax in this problem and identify a least favorable prior.
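For part a), the risk works out to $R(p, \delta) = (1-p)|p - \frac{1}{4}| + p|p - \frac{3}{4}|$; a small grid check (a sketch of mine, not part of the assignment) confirms the bound $\frac{1}{4}$ and shows where it is attained.

```python
def risk(p):
    # delta(0) = 1/4, delta(1) = 3/4; L(p, a) = |p - a|
    # R(p, delta) = (1 - p)|p - 1/4| + p|p - 3/4|
    return (1 - p) * abs(p - 0.25) + p * abs(p - 0.75)

grid = [i / 1000 for i in range(1001)]
# the bound 1/4 holds on all of [0, 1] ...
assert max(risk(p) for p in grid) <= 0.25 + 1e-12
# ... with equality at p = 0, 1/2 and 1, the support of a candidate least
# favorable prior for parts b) and c)
print(risk(0.0), risk(0.5), risk(1.0))
```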
43. Consider a decision problem where $P_\theta$ is the Normal$(\theta, 1)$ distribution on $\mathcal{X} = \mathbb{R}^1$, $\mathcal{A} = \{0, 1\}$ and $L(\theta, a) = I[a = 0] I[\theta > 5] + I[a = 1] I[\theta \leq 5]$.
a) If $\Theta = (-\infty, 5] \cup [6, \infty)$, guess what prior distribution is least favorable, find the corresponding Bayes decision rule and prove that it is minimax.
b) If $\Theta = \mathbb{R}^1$, guess what decision rule might be minimax, find its risk function and prove that it is minimax.
44. Consider the scenario of problem 29. Show that the estimator there is minimax for
the weighted squared error loss and identify a least favorable prior.
45. (Ferguson and others) Consider (inverse mean) weighted squared error loss estimation of $\lambda$, the mean of a Poisson distribution. That is, let $\Theta = (0, \infty)$, $\mathcal{A} = \Theta$, $P_\lambda$ be the Poisson distribution on $\mathcal{X} = \{0, 1, 2, 3, \ldots\}$ and $L(\lambda, a) = \lambda^{-1}(\lambda - a)^2$. Let $\delta(X) = X$.
a) Show that $\delta$ is an equalizer rule.
b) Show that $\delta$ is generalized Bayes versus Lebesgue measure on $\Theta$.
c) Find the Bayes estimators wrt the $\Gamma(\alpha, \beta)$ priors on $\Theta$.
d) Prove that $\delta$ is minimax for this problem.
46. (Berger page 385) Suppose that $X$ is Poisson with mean $\lambda$ and that one wishes to test $H_0: \lambda = 1$ vs $H_a: \lambda = 2$ with "0-1 loss," i.e. $\Theta = \mathcal{A} = \{1, 2\}$ and $L(\lambda, a) = I[\lambda \neq a]$.
a) Find the form of the nonrandomized Bayes tests for this problem.
b) Using your answer to part a), find enough risk points for nonrandomized Bayes rules to sketch the part of the lower boundary of the risk set relevant to finding a minimax rule.
c) On the basis of your sketch from part b), find a minimax rule for this problem and identify a least favorable distribution.
47. Consider the left-truncated Poisson distribution with pmf
$$f_\theta(x) = \frac{\exp(-\theta)\,\theta^x}{x!\,(1 - \exp(-\theta))} \quad \text{for } x = 1, 2, 3, \ldots.$$
Suppose that $X_1, X_2, \ldots, X_n$ are iid variables with this distribution.
a) Show that $T = \sum_{i=1}^n X_i$ is a complete sufficient statistic for $\theta \in (0, \infty)$.
b) It turns out that the pmf of $T$ is given by $g_\theta(t) = S_t^n\, n!\, \theta^t / (t!\,(\exp(\theta) - 1)^n)$ for $t = n, n+1, n+2, \ldots$, where $S_t^n$ is the so-called "Stirling number of the second kind" (whose exact definition is not important here). Find the UMVUE of $\theta$. (It will involve the $S_t^n$'s.)
48. Consider an observable $X$ taking values in $\mathcal{X}$ and with distribution $P_\theta$ for $\theta \in \Theta$. For a statistic $T$, let $\mathcal{B}_0 = \mathcal{B}(T)$ and $P_\theta^0$ be the restriction of $P_\theta$ to $\mathcal{B}_0$. Prove that if $T$ is complete and sufficient and $\phi(T)$ is unbiased for $\gamma(\theta)$, then another random variable $U$ is an unbiased estimator of $\gamma(\theta)$ iff $E(U \mid T) = \phi(T)$ a.s. $P_\theta^0$ $\forall \theta$.
49. Suppose that under the usual set-up $T$ and $S$ are two real-valued statistics.
a) Suppose that $E_\theta(T(X))^2 < \infty$ and $E_\theta(S(X))^2 < \infty$ $\forall \theta$. Show that if $T(X)$ and $S(X)$ are both unbiased estimators of $\theta$ and $T(X)$ and $S(X) - T(X)$ are uncorrelated $\forall \theta$, then $\text{Var}_\theta T(X) \leq \text{Var}_\theta S(X)$.
b) Suppose that $S(X)$ is an unbiased estimator of $\theta$, $E_\theta(S(X))^2 < \infty$ $\forall \theta$ and that $R = E_\theta[S \mid T]$ doesn't depend upon $\theta$. Argue that $R$ is preferable to $S$ under squared error loss.
50. Suppose that $X$ is a real-valued random variable with density (wrt the $\sigma$-finite measure $\mu$) defined by $f_\eta(x) = \exp(a(\eta) + \eta x) h(x)$ for a positive function $h(\cdot)$ $\forall x \in \mathbb{R}^1$ and $\eta \in (c, d) = \Gamma \subset \mathbb{R}^1$, the natural parameter space.
a) Find $\gamma(\eta) = E_\eta X$.
b) For a fixed $\eta \in \Gamma$, let $M_\eta(s) = E_\eta \exp(sX)$. Find $M_\eta(s)$, determine for which values of $s$ it is finite, and show that it depends on $\eta$ only through $\gamma(\eta)$.
51. Problem 14, page 340 Schervish.
52. Problem 19, page 341 Schervish. Also, find the MRE estimator of $\theta$ under squared error loss for this model.
53. Show that the C-R inequality is not changed by a smooth reparameterization. That is, suppose that $\mathcal{P} = \{P_\theta\}$ is dominated by a $\sigma$-finite measure $\mu$ and satisfies
i) $f_\theta(x) > 0$ for all $\theta$ and $x$,
ii) for all $x$, $\frac{d}{d\theta} f_\theta(x)$ exists and is finite everywhere on $\Theta \subset \mathbb{R}^1$, and
iii) for any statistic $\delta$ with $E_\theta |\delta(X)| < \infty$ for all $\theta$, $E_\theta \delta(X)$ can be differentiated under the integral sign at all points of $\Theta$.
Let $h$ be a function from $\Theta$ to $\mathbb{R}^1$ such that $h'$ is continuous and nonvanishing on $\Theta$. Let $\eta = h(\theta)$ and define $Q_\eta = P_\theta$. Show that the information inequality bound obtained from $\{Q_\eta\}$ evaluated at $\eta = h(\theta)$ is the same as the bound obtained from $\mathcal{P}$.
54. Let $N = (N_1, N_2, \ldots, N_k)$ be Multinomial$(n; p_1, p_2, \ldots, p_k)$ where $\sum p_i = 1$.
a) Find the form of the Fisher information matrix based on the parameter $p$.
b) Suppose that $p_i(\theta)$ for $i = 1, 2, \ldots, k$ are differentiable functions of a real parameter $\theta \in \Theta$, an open interval, where each $p_i(\theta) \geq 0$ and $\sum p_i(\theta) = 1$. Suppose that $h(y_1, y_2, \ldots, y_k)$ is a continuous real-valued function with continuous first partial derivatives and define $q(\theta) = h(p_1(\theta), p_2(\theta), \ldots, p_k(\theta))$. Show that the information bound for unbiased estimators of $q(\theta)$ in this context is
$$\frac{(q'(\theta))^2}{n I_1(\theta)}, \quad \text{where } I_1(\theta) = \sum_{i=1}^k p_i(\theta) \left(\frac{d}{d\theta} \log p_i(\theta)\right)^2.$$
55. As an application of Lemma 61 (or 61$'$) consider the following. Suppose that $X \sim U(\theta_1, \theta_2)$ and consider unbiased estimators of $\gamma(\theta_1, \theta_2) = \frac{\theta_1 + \theta_2}{2}$. For $\theta_1 < \theta_1' < \theta_2$ and $\theta_1 < \theta_2' < \theta_2$, apply Lemma 61 with
$$g_1(x) = 1 - \frac{f_{(\theta_1', \theta_2)}(x)}{f_{(\theta_1, \theta_2)}(x)} \quad \text{and} \quad g_2(x) = 1 - \frac{f_{(\theta_1, \theta_2')}(x)}{f_{(\theta_1, \theta_2)}(x)}.$$
What does Lemma 61 then say about the variance of an unbiased estimator of $\gamma(\theta_1, \theta_2)$? (If you can do it, sup over values of $\theta_1'$ and $\theta_2'$. I've not tried this, so am not sure how it comes out.)
56. Problem 11, page 389 Schervish.
57. Take the simplest possible $n$-dimensional statistical estimation problem, $X_1, X_2, \ldots, X_n$ iid Bernoulli$(p)$, and make your way through the results on maximum likelihood, carefully seeing and saying what they assume and what they yield. (Find $D_{p_0}(p)$, the loglikelihood $L_n(x_n, p)$, verify hypotheses of the theorems and see what they say about estimation of $p$. For sake of argument, let
$$T_n = \frac{1}{n-5} \sum_{i=1}^{n-5} X_i$$
and have a look at the corresponding $\tilde{p}_n$. What do the theorems say about the behavior of $\bar{X}$ when $p = 0$ or $1$? What are approximate confidence intervals for $p$? Etc.)
58. Suppose that $X_1, X_2, \ldots$ are iid, each taking values in $\mathcal{X} = \{0, 1, 2\}$ with R-N derivative of $P_\theta$ wrt counting measure on $\mathcal{X}$
$$f_\theta(x) = \frac{\exp(x\theta)}{1 + \exp(\theta) + \exp(2\theta)}.$$
a) Find an estimator of $\theta$ based on $n_0 = \sum_{i=1}^n I[X_i = 0]$ that is $\sqrt{n}$ consistent (i.e. for which $\sqrt{n}(\hat{\theta} - \theta)$ converges in distribution).
b) Find in more or less explicit form a "one step Newton modification" of your estimator from a).
c) Prove directly that your estimator from b) is asymptotically normal with variance $1/I_1(\theta)$. (With $\hat{\theta}_n$ the estimator from a) and $\tilde{\theta}_n$ the estimator from b),
$$\tilde{\theta}_n = \hat{\theta}_n - \frac{L_n'(\hat{\theta}_n)}{L_n''(\hat{\theta}_n)},$$
and write $L_n'(\hat{\theta}_n) = L_n'(\theta) + (\hat{\theta}_n - \theta) L_n''(\theta) + \frac{1}{2}(\hat{\theta}_n - \theta)^2 L_n'''(\theta_n^*)$ for some $\theta_n^*$ between $\hat{\theta}_n$ and $\theta$.)
d) Show that provided $\bar{x} \in (0, 2)$ the loglikelihood has a maximizer
$$\hat{\theta}_n = \log\left(\frac{\bar{x} - 1 + \sqrt{-3\bar{x}^2 + 6\bar{x} + 1}}{2(2 - \bar{x})}\right).$$
Prove that an estimator defined to be $\hat{\theta}_n$ when $\bar{x} \in (0, 2)$ will be asymptotically normal with variance $1/I_1(\theta)$.
e) Show that the "observed information" and "expected information" approximations lead to the same large sample confidence intervals for $\theta$. What do these look like based on, say, $\hat{\theta}_n$?
By the way, a version of nearly everything in this problem works in any one parameter exponential family. See the "Theory II" question on last year's prelim.
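Part d)'s closed form can be checked numerically. The sketch below (my own helper names, not part of the problem) compares the closed-form maximizer against the loglikelihood on a local grid, using the counts $n_0 = 7$, $n_1 = 6$, $n_2 = 7$ that appear in problem 72 c).

```python
import math

def loglik(theta, counts):
    # counts = (n0, n1, n2) of observations equal to 0, 1, 2
    n0, n1, n2 = counts
    n = n0 + n1 + n2
    return (n1 + 2 * n2) * theta - n * math.log(1 + math.exp(theta) + math.exp(2 * theta))

def theta_hat(counts):
    # the closed form from part d), valid when xbar = (n1 + 2*n2)/n lies in (0, 2)
    n0, n1, n2 = counts
    xbar = (n1 + 2 * n2) / (n0 + n1 + n2)
    return math.log((xbar - 1 + math.sqrt(-3 * xbar**2 + 6 * xbar + 1)) / (2 * (2 - xbar)))

counts = (7, 6, 7)
th = theta_hat(counts)
# the closed form should beat every nearby theta in loglikelihood
grid = [th + d / 100 for d in range(-50, 51) if d != 0]
assert all(loglik(t, counts) <= loglik(th, counts) for t in grid)
print(th)
```

For these counts $\bar{x} = 1$ and the formula gives $\hat{\theta}_n = \log 1 = 0$, which the grid check confirms is the maximizer.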
59. Prove the following "filling-in" lemma:
Lemma. Suppose that $g_0$ and $g_1$ are two distinct, positive probability densities defined on an interval in $\mathbb{R}^1$. If the ratio $g_1/g_0$ is nondecreasing in a real-valued function $T(x)$, then the family of densities $\{g_\alpha \mid \alpha \in [0, 1]\}$ for $g_\alpha = \alpha g_1 + (1 - \alpha) g_0$ has the MLR property in $T(x)$.
60. Consider the situation of problem 9 and maximum likelihood estimation of $\lambda$.
a) Show that with $M = \#[X_i\text{'s equal to } 0]$, in the event that $M = n$ there is no MLE of $\lambda$, but that in all other cases there is a maximizer of the likelihood. Then argue that for any $\lambda > 0$, with $P_\lambda$ probability tending to 1, the MLE of $\lambda$, say $\hat{\lambda}_n$, exists.
b) Give a simple estimator of $\lambda$ based on $M$ alone. Prove that this estimator is consistent for $\lambda$. Then write down an explicit one-step Newton modification of this simple estimator.
c) Discuss what numerical methods you could use to find the MLE from a) in the event that it exists.
d) Give two forms of large sample confidence intervals for $\lambda$ based on the MLE $\hat{\lambda}_n$ and two different approximations to $I_1(\lambda)$.
61. Consider the two distributions $P_0$ and $P_1$ on $\mathcal{X} = (0, 1)$ with densities wrt Lebesgue measure
$$f_0(x) = 1 \quad \text{and} \quad f_1(x) = 3x^2.$$
a) Find a most powerful level $\alpha = .2$ test of $H_0: X \sim P_0$ vs $H_1: X \sim P_1$.
b) Plot both the 0-1 loss risk set $\mathcal{S}$ and the set $\mathcal{V} = \{(\beta_\phi(\theta_0), \beta_\phi(\theta_1))\}$ for the simple versus simple testing problem involving $f_0$ and $f_1$. What test is Bayes versus a uniform prior? This test is best of what size?
c) Take the result of Problem 59 as given. Consider the family of mixture distributions $\mathcal{P} = \{P_\theta\}$ where for $\theta \in [0, 1]$, $P_\theta = (1 - \theta) P_0 + \theta P_1$. In this family find a UMP level $\alpha = .2$ test of the hypothesis $H_0: \theta \leq .5$ vs $H_1: \theta > .5$ based on a single observation $X$. Argue carefully that your test is really UMP.
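For part a), since $f_1/f_0 = 3x^2$ is increasing in $x$, the Neyman-Pearson test rejects for large $x$; a two-line numeric check (a sketch only, not part of the assignment) of the resulting size and power:

```python
# f0 is Uniform(0,1) and f1(x) = 3x^2, so the likelihood ratio 3x^2 is
# increasing in x and the MP size-.2 test rejects when x exceeds the cutoff c
# solving P0(X > c) = 1 - c = .2.
c = 0.8
size = 1 - c          # P0(X > c)
power = 1 - c ** 3    # P1(X > c), since the cdf under f1 is F1(x) = x^3
print(size, power)    # power is 1 - .8^3 = .488
```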
62. Consider a composite versus composite testing problem in a family of distributions $\mathcal{P} = \{P_\theta\}$ dominated by the $\sigma$-finite measure $\mu$, and specifically a Bayesian decision-theoretic approach to this problem with prior distribution $G$ under 0-1 loss.
a) Show that if $G(\Theta_0) = 0$ then $\phi(x) \equiv 1$ is Bayes, while if $G(\Theta_1) = 0$ then $\phi(x) \equiv 0$ is Bayes.
b) Show that if $G(\Theta_0) > 0$ and $G(\Theta_1) > 0$ then the Bayes test against $G$ has Neyman-Pearson form for densities $g_0$ and $g_1$ on $\mathcal{X}$ defined by
$$g_0(x) = \frac{\int_{\Theta_0} f_\theta(x)\, dG(\theta)}{G(\Theta_0)} \quad \text{and} \quad g_1(x) = \frac{\int_{\Theta_1} f_\theta(x)\, dG(\theta)}{G(\Theta_1)}.$$
c) Suppose that $X$ is Normal$(\theta, 1)$. Find Bayes tests of $H_0: \theta = 0$ vs $H_1: \theta \neq 0$
i) supposing that $G$ is the Normal$(0, \sigma^2)$ distribution, and
ii) supposing that $G$ is $\frac{1}{2}(N + \Delta)$, for $N$ the Normal$(0, \sigma^2)$ distribution, and $\Delta$ a point mass distribution at 0.
63. Suppose that $X$ is Poisson with mean $\theta$. Find the UMP test of size $\alpha = .05$ of $H_0: \theta \leq 4$ or $\theta \geq 10$ vs $H_1: 4 < \theta < 10$. The following table of Poisson probabilities $P_\theta[X = x]$ will be useful in this enterprise.

 x   θ=4     θ=10       x   θ=4     θ=10
 0   .0183   .0000     13   .0002   .0729
 1   .0733   .0005     14   .0001   .0521
 2   .1465   .0023     15   .0000   .0347
 3   .1954   .0076     16           .0217
 4   .1954   .0189     17           .0128
 5   .1563   .0378     18           .0071
 6   .1042   .0631     19           .0037
 7   .0595   .0901     20           .0019
 8   .0298   .1126     21           .0009
 9   .0132   .1251     22           .0004
10   .0053   .1251     23           .0002
11   .0019   .1137     24           .0001
12   .0006   .0948     25           .0000
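The table can be regenerated (and extended) directly from the Poisson pmf; a short sketch of mine, not part of the original handout:

```python
import math

def dpois(x, lam):
    # Poisson pmf: exp(-lambda) * lambda^x / x!
    return math.exp(-lam) * lam ** x / math.factorial(x)

# reproduce the handout's entries to 4 decimal places
table = {(x, lam): round(dpois(x, lam), 4) for x in range(26) for lam in (4, 10)}
print(table[(0, 4)], table[(4, 4)], table[(10, 10)])
```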
64. Suppose that $X$ is exponential with mean $\lambda^{-1}$. Set up the two equations that will have to be solved simultaneously in order to find the UMP size $\alpha$ test of $H_0: \lambda \leq .5$ or $\lambda \geq 2$ vs $H_1: \lambda \in (.5, 2)$.
65. Suppose that for $i = 1, 2, \ldots$ the random vectors $X_i = (Y_i, Z_i)$ are iid with distribution $P_p$ described as follows. $Y_1$ and $Z_1$ are independent, $Y_1 \sim \text{Binomial}(n, p)$ and $Z_1 \sim \text{Geometric}(p)$ (i.e. with pmf $g_p(z) = p(1-p)^{z-1}$ for $z = 1, 2, \ldots$).
a) Give the C-R lower bound on the variance of an unbiased estimator of $p$ based on $X_1, X_2, \ldots, X_n$.
b) Find the maximum likelihood estimator of $p$ based on $X_1, X_2, \ldots, X_n$ and discuss its asymptotic properties.
c) Give both the "expected information" and "observed information" large sample confidence intervals for $p$ based on your estimator from b).
66. Consider again the scenario of problem 20. There you sketched the risk set $\mathcal{S}$. Here sketch the corresponding set $\mathcal{V}$.
67. Consider the estimation problem with $n$ iid $P_\theta$ observations, where $P_\theta$ is the exponential distribution with mean $\theta$. Let $\mathcal{G}$ be the group of scale transformations on $\mathcal{X} = (0, \infty)^n$, $\mathcal{G} = \{g_c \mid c > 0\}$ where $g_c(x) = cx$.
a) Show that the estimation problem with loss $L(\theta, a) = (\log(a/\theta))^2$ is invariant under $\mathcal{G}$ and say what relationship any equivariant nonrandomized decision rule must satisfy.
b) Show that the estimation problem with loss $L(\theta, a) = (\theta - a)^2/\theta^2$ is invariant under $\mathcal{G}$, and say what relationship any equivariant nonrandomized estimator must satisfy.
c) Find the generalized Bayes estimator of $\theta$ in situation b) if the "prior" has density wrt Lebesgue measure $g(\theta) = \theta^{-1} I[\theta > 0]$. Argue that this estimator is the best equivariant estimator in the situation of b).
68. (Problem 37, page 122 TSH.) Let $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_m$ be independent samples from $N(\xi, 1)$ and $N(\eta, 1)$, and consider the hypotheses $H_0: \eta \leq \xi$ and $H_1: \eta > \xi$. Show that there exists a UMP test of size $\alpha$, and it rejects $H_0$ when $\bar{Y} - \bar{X}$ is too large. (If $\xi_1 < \eta_1$ is a particular alternative, the distribution assigning probability 1 to the point with $\eta = \xi = (m\xi_1 + n\eta_1)/(m + n)$ is least favorable at size $\alpha$.)
69. Problem 57 Schervish, page 293.
70. Problem 12, parts a)-c) Schervish, page 534.
71. Fix $\alpha \in (0, .5)$ and $c \in \left(\frac{\alpha}{2 - 2\alpha}, \alpha\right)$. Let $\Theta = \{-1\} \cup [0, 1]$ and consider the discrete distributions with probability mass functions as below.

 x               −2     −1                0          1                2
 θ = −1          α/2    1/2 − α           α          1/2 − α          α/2
 θ ∈ [0, 1]      θc     (1−c)(1−α)/2      (1−c)α     (1−c)(1−α)/2     (1−θ)c
Find the size $\alpha$ likelihood ratio test of $H_0: \theta = -1$ vs $H_1: \theta \neq -1$. Show that the test $\phi(x) = I[x = 0]$ is of size $\alpha$ and is strictly more powerful than the LRT whatever be $\theta$. (This simple example shows that LRTs need not necessarily be in any sense optimal.)
72. Consider the situation of problem 58.
a) Find the form of UMP tests of $H_0: \theta \leq \theta_0$ vs $H_1: \theta > \theta_0$. Discuss how you would go about choosing an appropriate constant to produce a test of approximately size $\alpha$. Discuss how you would go about finding an optimal (approximately) 90% confidence set for $\theta$.
b) Beginning with a third order Taylor expansion of the loglikelihood at $\theta_0$ about the MLE, prove directly that the asymptotic $\chi^2$ distribution holds under $H_0$ for the likelihood ratio statistic for testing $H_0: \theta = \theta_0$ vs $H_1: \theta \neq \theta_0$.
c) Suppose that $n = 20$ and $n_0 = 7$, $n_1 = 6$ and $n_2 = 7$. Plot the corresponding loglikelihood and give the value of the MLE. Based on the MLE and its normal limiting distribution, then give an approximate 90% two-sided confidence interval for $\theta$. Finally, based on the asymptotic $\chi^2$ distribution of the likelihood ratio statistic, give a different approximate 90% confidence interval for $\theta$.
73. Consider the situation of Problems 9 and 60. Below are some data artificially generated from an exponential distribution.
.24, 3.20, .14, 1.86, .58, 1.15, .32, .66, 1.60, .34,
.61, .09, 1.18, 1.29, .23, .58, .11, 3.82, 2.53, .88
a) Plot the loglikelihood function for the uncensored data (the $x'$ values given above). Give approximate 90% two-sided confidence intervals for $\lambda$ based on the asymptotic $\chi^2$ distribution for the LRT statistic for testing $H_0: \lambda = \lambda_0$ vs $H_1: \lambda \neq \lambda_0$ and based on the asymptotic normal distribution of the MLE.
Now consider the censored data problem where any value less than 1 is reported as 0. Modify the above data accordingly and do the following.
b) Plot the loglikelihood for the censored data (the derived $x$ values). How does this function of $\lambda$ compare to the one from part a)? It might be informative to plot these on the same set of axes.
c) It turns out (you might derive this fact) that
$$I_1(\lambda) = \frac{\exp(-\lambda)}{\lambda^2} \cdot \frac{1 + \lambda^2 - \exp(-\lambda)}{1 - \exp(-\lambda)}.$$
Give two different approximate 90% confidence intervals for $\lambda$ based on the asymptotic distribution of the MLE here. Then give an approximate 90% interval based on inverting the LRTs of $H_0: \lambda = \lambda_0$ vs $H_1: \lambda \neq \lambda_0$.
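A sketch of the censored-data computations in b) and c) (helper names are my own, and a crude grid search stands in for a proper Newton iteration):

```python
import math

data = [.24, 3.20, .14, 1.86, .58, 1.15, .32, .66, 1.60, .34,
        .61, .09, 1.18, 1.29, .23, .58, .11, 3.82, 2.53, .88]
censored = [0 if x < 1 else x for x in data]  # values below 1 are reported as 0

def loglik_censored(lam, xs):
    # a value of 0 contributes P(X' <= 1) = 1 - exp(-lam);
    # a value above 1 contributes the exponential density lam * exp(-lam * x)
    ll = 0.0
    for x in xs:
        ll += math.log(1 - math.exp(-lam)) if x == 0 else math.log(lam) - lam * x
    return ll

def info1(lam):
    # the single-observation Fisher information quoted in part c)
    return (math.exp(-lam) / lam**2) * (1 + lam**2 - math.exp(-lam)) / (1 - math.exp(-lam))

# crude grid maximization of the censored loglikelihood
grid = [l / 1000 for l in range(1, 5000)]
lam_hat = max(grid, key=lambda l: loglik_censored(l, censored))
print(lam_hat, info1(lam_hat))
```

With $\hat{\lambda}_n$ in hand, a Wald interval uses $\hat{\lambda}_n \mp 1.645/\sqrt{n\, I_1(\hat{\lambda}_n)}$, and the LRT interval collects the $\lambda_0$ whose loglikelihood is within $\frac{1}{2}\chi^2_{1,.90}$ of the maximum.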
74. Suppose that $X_1, X_2, X_3$ and $X_4$ are independent binomial random variables, $X_i \sim \text{Binomial}(n, p_i)$. Consider the problem of testing $H_0: p_1 = p_2$ and $p_3 = p_4$ against the alternative that $H_0$ does not hold.
a) Find the form of the LRT of these hypotheses and show that the log of the LRT statistic is the sum of the logs of independent LRT statistics for $H_0: p_1 = p_2$ and $H_0: p_3 = p_4$ (a fact that might be useful in directly showing the $\chi^2$ limit of the LRT statistic under the null hypothesis).
b) Find the form of the Wald tests and show directly that the test statistic is asymptotically $\chi^2$ under the null hypothesis.
c) Find the form of the $\chi^2$ tests of these hypotheses and again show directly that the test statistic is asymptotically $\chi^2$ under the null hypothesis.
75. If $X_1, X_2, \ldots, X_n$ are iid Normal$(\mu, \sigma^2)$, find the form of the LRT of $H_0: \mu = \mu_0$ vs $H_1: \mu \neq \mu_0$. Show directly that the LRT statistic is asymptotically $\chi^2$ with 1 degree of freedom. Invert the LRTs to produce a large sample 90% confidence procedure for $\mu$.