STAT 543 HW 1 Solution, Spring 2016

#1.
(a) From the plot of the log-likelihood functions based on Data Set #1 and Data Set #2, we can see that both curves have almost the same shape and reach their maximum at roughly the same parameter value, which provides a reasonably reliable indication that both data sets were generated from the given model with that parameter value.
(b) From the plot, the log-likelihood curve of the model with left censoring has the same shape as that of the original model, and the two curves sit at almost the same vertical position.
(c) Since P(X > x) = exp(-θx) for X ∼ Exp(θ), the likelihood contribution of an observation that is only known up to its rounding interval is the difference of two such exponential terms, one for each endpoint of the interval. The log-likelihood curves of model II with rounding to 3, 2, and 1 decimal places all follow the same pattern as that of the original model I. (Illustrative numerical sketches for this and a few later problems are collected at the end of this solution.)

#3.
① The 1st experiment, ② the 2nd experiment, and ③ the 3rd experiment each contribute their own likelihood factor. Since the three experiments are mutually independent, the joint likelihood is the product of the three factors, so the overall log-likelihood is the sum of the individual log-likelihoods:
log L(θ) = log L_1(θ) + log L_2(θ) + log L_3(θ).

#4.
① B&D 1.2.2.
(a), (b) The posterior distributions of the parameter under the two given priors, each supported on the range defined by the stated inequalities.
(c) For the prior in (a), the calculation goes through the log posterior; the same argument applies for the prior in (b).
(d) The result is expressed through the maximum of the observations.

② B&D 1.2.3.
(a) Given the prior distribution of θ, the posterior density of θ given x follows from Bayes' rule and is defined by cases according to the value of x.
(b) The most probable value of θ given x is the mode of this posterior. For the stated value of x, the most probable value takes one of three forms, depending on whether x falls below, between, or above the relevant cutoffs.
(c) Therefore, θ given x follows the distribution identified above.

③ B&D 1.2.14.
(a) Completing the square in the exponent of the product of the likelihood and the prior density shows that the posterior of θ given X_1, ..., X_n is normal. Integrating the normal density of X_{n+1} given θ against this posterior gives the posterior predictive distribution, which is again normal, with mean equal to the posterior mean and variance equal to the posterior variance plus the error variance.
Solution 2. Write X_i = θ + ε_i, where θ is normal, the ε_i are i.i.d. normal, and θ and the ε_i are independent. Then (X_1, ..., X_n, X_{n+1}) is jointly multivariate normal with a covariance matrix of the form aI + bJ, where J is the matrix of all 1's, and (X_1, ..., X_n) is multivariate normal with a covariance matrix of the same form. Applying the formulas for the conditional mean and conditional variance of a multivariate normal gives E(X_{n+1} | X_1, ..., X_n) and Var(X_{n+1} | X_1, ..., X_n); hence X_{n+1} | X_1, ..., X_n is normal with those two parameters, in agreement with the first solution.
* Notice that the inverse of a matrix of the form aI + bJ is again of the form cI + dJ, with c and d determined by a, b, and the dimension n of the (square) matrix.
(b) As n → ∞, the posterior predictive distribution approaches the distribution of a single new observation with θ treated as known (its variance shrinks to the error variance alone), while the marginal predictive distribution of a single observation is unchanged.

④ B&D 1.3.3.
(a) The risk function can be written in closed form in terms of Φ, the standard normal cdf.
(b) From the plot, over the indicated range of the parameter, the procedure with the first choice of constant has smaller risk than the procedure with the second.

⑤ B&D 1.3.9.
Starting from the stated distributional assumptions and loss, a direct calculation yields the required rule, which is defined by cases over the range of the observation.

⑥ B&D 1.3.10.
From part (a) of problem 1.3.3, the risk function is expressed through Φ, the standard normal distribution function, and the prior distribution is as specified.
(a) The Bayes risk of the first rule is obtained by averaging its risk function over the prior.
(b) The Bayes risk of the second rule is obtained in the same way.
Comparing the two values, from the Bayes point of view the rule with the smaller Bayes risk is the better decision rule.

⑨ B&D 1.3.18.
(a) The risk function is the expected loss, where the loss function is the one specified in the problem.
(b), (c) For each of the two rules, the risk is evaluated by summing the losses over the possible values of the observation, weighted by their probabilities, with the summation index i running between the stated max and min bounds of the support. From the plot, the risk of one rule is smaller than that of the other over all possible values of the parameter, which means that rule is the better decision rule.
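
Numerical sketch for problem #1. The shape comparisons claimed in parts (a)-(c) can be checked numerically. The sketch below is only illustrative: it simulates a hypothetical Exp(θ) sample (rate parameterization assumed) in place of Data Set #1/#2, and the censoring threshold and rounding precisions are made-up choices, not the ones used in the homework.

```python
import numpy as np

rng = np.random.default_rng(543)
theta_true = 2.0                                      # hypothetical rate, not the HW's value
x = rng.exponential(scale=1 / theta_true, size=100)   # stand-in for Data Set #1

thetas = np.linspace(0.5, 5.0, 400)                   # grid of candidate rate values


def loglik_exact(theta, x):
    """Model I: exact observations, density f(x) = theta * exp(-theta * x)."""
    return len(x) * np.log(theta) - theta * x.sum()


def loglik_left_censored(theta, x, c):
    """Left censoring at c: a value below c is only known to satisfy X <= c,
    so it contributes P(X <= c) = 1 - exp(-theta * c) to the likelihood."""
    cens = x <= c
    ll_obs = np.log(theta) * (~cens).sum() - theta * x[~cens].sum()
    ll_cen = cens.sum() * np.log1p(-np.exp(-theta * c))
    return ll_obs + ll_cen


def loglik_rounded(theta, x, decimals):
    """Model II: a value rounded to `decimals` places only identifies the interval
    (r - d/2, r + d/2], contributing exp(-theta * lo) - exp(-theta * hi)."""
    d = 10.0 ** (-decimals)
    r = np.round(x, decimals)
    lo = np.clip(r - d / 2, 0.0, None)
    hi = r + d / 2
    return np.log(np.exp(-theta * lo) - np.exp(-theta * hi)).sum()


ll_exact = np.array([loglik_exact(t, x) for t in thetas])
ll_cens = np.array([loglik_left_censored(t, x, c=0.1) for t in thetas])

# All maximizers should land near the same rate value, and the curves share one shape.
print("argmax, exact data   :", thetas[ll_exact.argmax()])
print("argmax, left-censored:", thetas[ll_cens.argmax()])
for k in (3, 2, 1):
    ll_k = np.array([loglik_rounded(t, x, k) for t in thetas])
    print(f"argmax, rounded to {k} d.p.:", thetas[ll_k.argmax()])
```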
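
Numerical sketch for B&D 1.2.14. Solution 2 rests on two facts that are easy to verify numerically: the inverse of a matrix of the form aI + bJ (with J the all-ones matrix) is again of that form, and multivariate-normal conditioning reproduces the conjugate normal-normal posterior predictive. The sketch below checks both; the symbols eta, tau2, sigma2 (prior mean, prior variance, error variance) and their values are my own illustrative choices, not the original solution's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                # number of observed X's
sigma2, tau2, eta = 1.3, 0.7, 2.0    # hypothetical error variance, prior variance, prior mean

# Fact 1: (a*I + b*J)^{-1} = (1/a)*I - b / (a*(a + n*b)) * J
a, b = sigma2, tau2
I, J = np.eye(n), np.ones((n, n))
inv_closed = (1 / a) * I - (b / (a * (a + n * b))) * J
assert np.allclose(inv_closed, np.linalg.inv(a * I + b * J))

# Fact 2: conditioning the joint MVN of (X_1,...,X_n, X_{n+1}) on the first n
# coordinates matches the conjugate normal-normal posterior predictive.
m = n + 1
Sigma = sigma2 * np.eye(m) + tau2 * np.ones((m, m))   # covariance, since X_i = theta + eps_i
mu = np.full(m, eta)                                  # common mean eta

x = rng.normal(eta, np.sqrt(sigma2 + tau2), size=n)   # some "observed" values

# MVN conditioning: X_{n+1} | X_1..X_n ~ N(mu_c, var_c)
S11, S12, S22 = Sigma[:n, :n], Sigma[:n, n], Sigma[n, n]
w = np.linalg.solve(S11, S12)
mu_c = mu[n] + w @ (x - mu[:n])
var_c = S22 - S12 @ w

# Conjugate route: posterior of theta is normal; the predictive adds sigma2 to its variance.
post_var = 1 / (n / sigma2 + 1 / tau2)
post_mean = post_var * (x.sum() / sigma2 + eta / tau2)
pred_mean, pred_var = post_mean, post_var + sigma2

assert np.isclose(mu_c, pred_mean) and np.isclose(var_c, pred_var)
print("posterior predictive mean:", pred_mean, " variance:", pred_var)
```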
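
Numerical sketch for B&D 1.3.10. The Bayes-risk comparison follows a generic recipe: write each rule's risk in terms of Φ, average it over the prior, and keep the rule with the smaller Bayes risk. The sketch below runs that recipe on a made-up two-point testing problem; the values theta0 and theta1, the cutoff rules, and the prior are all hypothetical and are not the setup of B&D 1.3.3/1.3.10.

```python
from scipy.stats import norm

# Hypothetical setup: decide between theta0 and theta1 from one X ~ N(theta, 1);
# rule delta_c chooses theta1 exactly when X > c.
theta0, theta1 = 0.0, 1.0

def risk(theta, c):
    """Risk of delta_c under 0-1 loss: the probability of choosing the wrong value."""
    if theta == theta0:
        return 1 - norm.cdf(c - theta0)   # P(X > c | theta0): wrongly choose theta1
    return norm.cdf(c - theta1)           # P(X <= c | theta1): wrongly choose theta0

def bayes_risk(c, prior0):
    """Bayes risk: average the risk function over the two-point prior."""
    return prior0 * risk(theta0, c) + (1 - prior0) * risk(theta1, c)

prior0 = 0.5                              # prior probability of theta0
for c in (0.3, 0.5, 0.8):                 # candidate rules to compare
    print(f"cutoff c = {c}: Bayes risk = {bayes_risk(c, prior0):.4f}")
# From the Bayes point of view, the cutoff with the smaller Bayes risk is preferred.
```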