# Stat 643 Final Exam December 11, 2000 Prof. Vardeman

1. Suppose the random vector $(X, Y, Z)$ has joint probability mass function $f(x, y, z)$ on $\{0, 1, 2, 3, \ldots\}^3$ with

$$f(x, y, z) \propto \exp(-xyz)\, I[x \text{ is odd}]\, I[y \text{ is even}]\, I[z \ne x \text{ and } z \ne y]$$

and I wish to approximate $P[X + Y + Z \ge 10]$. Completely describe an MCMC method that I could use to approximate this probability. (You may assume that standard software libraries are available to allow me to sample from any "standard" distribution.)
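One concrete possibility (a sketch only, not the unique acceptable answer) is single-site Gibbs sampling: sweep through the three coordinates, draw each from its full conditional, and record how often the event occurs. The truncation `cap` below is an assumption of this illustration, so that each conditional can be sampled exactly from a finite table; it is not part of the problem statement.

```python
import math
import random

def weight(x, y, z):
    # Unnormalized pmf: f(x, y, z) proportional to exp(-x*y*z),
    # times the support indicators from the problem statement.
    if x % 2 == 1 and y % 2 == 0 and z != x and z != y:
        return math.exp(-x * y * z)
    return 0.0

def estimate_prob(n_iter=20000, cap=30, seed=0):
    """Estimate P[X + Y + Z >= 10] by single-site Gibbs sampling.

    `cap` truncates each full conditional to {0, ..., cap} so it can
    be drawn exactly from a finite table (an assumption of this sketch)."""
    rng = random.Random(seed)
    state = [1, 2, 0]  # a point of positive probability to start from
    hits = 0
    for _ in range(n_iter):
        for i in range(3):  # update x, then y, then z
            w = [0.0] * (cap + 1)
            for v in range(cap + 1):
                state[i] = v
                w[v] = weight(*state)
            u = rng.random() * sum(w)  # inverse-table draw from the conditional
            acc = 0.0
            for v in range(cap + 1):
                acc += w[v]
                if u <= acc:
                    break
            state[i] = v
        if sum(state) >= 10:
            hits += 1
    return hits / n_iter
```

After a burn-in period (omitted above for brevity), the long-run fraction of sweeps landing in the event estimates the target probability.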
2. Consider squared error loss estimation of $\theta$ on the basis of $X$ where

$$\frac{X}{\theta} \sim \chi^2_\nu\,.$$

(Remember that the $\chi^2_\nu$ distribution has mean $\nu$ and variance $2\nu$.)

(a) A possible estimator of $\theta$ is $\hat{\theta} = X/\nu$. Argue carefully that $\hat{\theta}$ is inadmissible by considering estimators of the form $c\hat{\theta}$.

(b) Find a formal generalized Bayes estimator of $\theta$, using an improper prior distribution for $\theta$ with RN derivative w.r.t. 1-dimensional Lebesgue measure on $(0, \infty)$, $g(\theta) = \frac{1}{\theta}$. (If $h(\cdot)$ is the $\chi^2_\nu$ density, then $P_\theta$ has density $f_\theta(x) = \frac{1}{\theta} h\!\left(\frac{x}{\theta}\right)$. Note that the formal posterior is "Inverse Gamma." You can find this distribution in Schervish's dictionary of distributions.) Is this estimator admissible? Explain.
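To illustrate the calculation behind part (a) (a sketch using only the stated moments of the $\chi^2_\nu$ distribution, not a full solution): since $E_\theta X = \nu\theta$ and $\operatorname{Var}_\theta X = 2\nu\theta^2$, one has $E_\theta \hat{\theta}^2 = \theta^2(\nu+2)/\nu$, and so

$$E_\theta\big(c\hat{\theta} - \theta\big)^2 = \theta^2\left[\frac{c^2(\nu+2)}{\nu} - 2c + 1\right],$$

which is minimized over $c$ at $c = \nu/(\nu+2) < 1$, with resulting risk $2\theta^2/(\nu+2) < 2\theta^2/\nu$ at every $\theta$.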
3. Suppose that $X_1$ and $X_2$ are independent Poisson$(\lambda)$ random variables (for $\lambda > 0$). You may take as given the fact that conditional on $S = X_1 + X_2 = s$, $X_1 \sim$ Binomial$(s, \frac{1}{2})$.

(a) Consider the nonrandomized estimator of $\lambda$, $\delta(X) = X_1$, and squared error loss. Describe a (possibly randomized) estimator that is a function of $S = X_1 + X_2$ and has the same risk function as $\delta$.

(b) Apply the Rao-Blackwell Theorem and find a nonrandomized estimator of $\lambda$ that is a function of $S = X_1 + X_2$ and is at least as good as $\delta$ under squared error loss.
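The Rao-Blackwell improvement in (b) conditions on $S$, and since $E[X_1 \mid S] = S/2$ one expects $S/2$ to dominate $X_1$ under squared error loss (risks $\lambda/2$ versus $\lambda$). A quick Monte Carlo check of that gap (a sketch; drawing Poisson variates by counting exponential arrivals is just a standard-library workaround):

```python
import random

def poisson(rng, lam):
    # Count rate-`lam` exponential arrivals in [0, 1]:
    # that count is distributed Poisson(lam).
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > 1.0:
            return k
        k += 1

def compare_risks(lam=2.0, n=20000, seed=0):
    """Monte Carlo squared-error risks of delta = X1 and of S / 2."""
    rng = random.Random(seed)
    se_x1 = se_half_s = 0.0
    for _ in range(n):
        x1, x2 = poisson(rng, lam), poisson(rng, lam)
        se_x1 += (x1 - lam) ** 2
        se_half_s += ((x1 + x2) / 2.0 - lam) ** 2
    return se_x1 / n, se_half_s / n
```

At $\lambda = 2$ the two averages should land near the theoretical risks $2$ and $1$ respectively.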
4. Consider the estimation of a (location) parameter $\theta$, for the family of distributions with RN derivatives w.r.t. 1-dimensional Lebesgue measure given by

$$f_\theta(x) = I[x > \theta]\exp(\theta - x)$$

Find the Chapman-Robbins lower bound on the variance of an unbiased estimator of $\theta$. (You may find it useful to know that $x = 1.593$ is the only positive root of the equation $(2x - x^2)\exp(x) - 2x = 0$.)
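For orientation (a sketch of where the hint enters, not a complete solution): for $h > 0$, which is needed so that the support of $f_{\theta+h}$ sits inside that of $f_\theta$,

$$E_\theta\!\left[\left(\frac{f_{\theta+h}(X)}{f_\theta(X)}\right)^2\right] = \int_{\theta+h}^{\infty} e^{2(\theta+h-x)}\, e^{x-\theta}\, dx = e^h,$$

so the Chapman-Robbins bound is $\sup_{h>0} h^2/(e^h - 1)$, and setting the derivative of $h^2/(e^h - 1)$ to zero yields exactly the equation $(2h - h^2)e^h - 2h = 0$ from the hint.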
DO EXACTLY ONE OF PROBLEMS 5 and 6.
5. Consider a model $\mathcal{P} = \{P_\theta\}$ where $P_\theta$ is the Normal$(\theta, \theta^2)$ measure, for $\Theta = \mathbb{R} - \{0\}$, and the group of transformations $\mathcal{G} = \{g_c\}$ for $c \ne 0$, where $g_c(x) = cx$.

(a) Argue that $\mathcal{G}$ leaves the model invariant, and for each $c \ne 0$, find the transformation $\bar{g}_c$ mapping $\Theta$ one-to-one onto $\Theta$.

(b) For the action space $\mathcal{A} = \Theta$, argue that the loss function $L(\theta, a) = \left(\frac{a}{\theta} - 1\right)^2$ is invariant under $\mathcal{G}$ and, for each $c \ne 0$, find the transformation $\tilde{g}_c$ mapping $\mathcal{A}$ one-to-one onto $\mathcal{A}$.

(c) Characterize equivariant nonrandomized rules in this decision problem and find the best equivariant rule.
6. Consider a simple decision problem with $\Theta = \mathcal{A} = \{1, 2, 3\}$, $P_\theta$ the exponential distribution with mean $\theta$ (i.e. the distribution with R-N derivative w.r.t. 1-dimensional Lebesgue measure $f_\theta(x) = \frac{1}{\theta}\exp\!\left(\frac{-x}{\theta}\right) I[x > 0]$) and 0-1 loss

$$L(\theta, a) = I[\theta \ne a]$$

For $G$ a (prior) distribution on $\Theta$ and $\theta \in \Theta$, we'll use the notation $\pi_\theta = G(\{\theta\})$. $f_1$, $f_2$ and $f_3$ are plotted together on the attached page in Figure 1.

(a) Find in explicit form the Bayes rule versus $G$ uniform on $\Theta$. That is, suppose that $\pi_1 = \pi_2 = \pi_3 = \frac{1}{3}$ and find the Bayes rule. (For which $x > 0$ does one take actions $a = 1, 2$ and $3$?)

(b) Is the rule from (a) admissible? Argue carefully one way or another.
Now consider the nonrandomized decision rule

$$\delta(x) = \begin{cases} 1 & \text{if } x < .591 \\ 2 & \text{if } .591 < x < 2.421 \\ 3 & \text{if } 2.421 < x \end{cases}$$

$\delta$ is Bayes in this problem versus a prior with $\pi_1 = .251$, $\pi_2 = .374$ and $\pi_3 = .375$.
(c) Prove that $\delta$ is minimax. (You may wish to use the fact that the cdf of $P_\theta$ is $F_\theta(x) = \left(1 - \exp\!\left(\frac{-x}{\theta}\right)\right) I[x > 0]$.)
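For part (a), the Bayes rule under 0-1 loss maximizes $\pi_a f_a(x)$ over $a \in \{1, 2, 3\}$, so the cutpoints are where adjacent $\pi_a f_a$ curves cross. A small sketch that evaluates this argmax directly (the helper names and test points are mine, not the exam's):

```python
import math

def f(theta, x):
    # Exponential(mean theta) density from the problem statement.
    return math.exp(-x / theta) / theta if x > 0 else 0.0

def bayes_action(x, priors=(1 / 3, 1 / 3, 1 / 3)):
    """Action a in {1, 2, 3} maximizing pi_a * f_a(x), i.e. the Bayes rule."""
    return max((1, 2, 3), key=lambda a: priors[a - 1] * f(a, x))

# Under the uniform prior the crossings fall at x = 2*ln(2) and x = 6*ln(3/2),
# so e.g. bayes_action(1.0) == 1, bayes_action(2.0) == 2, bayes_action(3.0) == 3.
```

Passing `priors=(.251, .374, .375)` instead reproduces the rule $\delta$ with cutpoints near $.591$ and $2.421$.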
*Figure 1: the densities $f_1(x)$, $f_2(x)$ and $f_3(x)$ plotted together for $0 \le x \le 5$ (vertical scale $0$ to $0.4$).*