Stat 544 – Spring 2005
Homework assignment 4 answer key
Exercise 1
(a) Let X be the number of simulations needed to produce one observation from the
truncated normal. Then X ∼ Geometric(p), where p is the acceptance rate, i.e. the
probability of obtaining one observation ≥ a, so p = 1 − Φ(a). Note that Range(X)
= {1, 2, ...}. Hence, the average number of simulations is E(X) = [1 − Φ(a)]^(−1).

(b) Note that a has been replaced by µ for the sake of clarity.

b.1 - Upper bound

    f(x) = e^(−x²/2)
    g(x) = α e^(−α(x−µ))

    ⇒ h(x) = f(x)/g(x) = (1/α) e^(α(x−µ)) e^(−x²/2)

    (∂/∂x) h(x) = (1/α) (α − x) e^(α(x−µ) − x²/2) = 0  ⇔  x = α

Thus, h(x) attains its maximum when x = α. Hence, the upper bound ignoring
truncation is

    h(α) = (1/α) e^(−α²/2) e^(α² − αµ) = (1/α) e^(α²/2 − αµ) =: K(α)

Now, if we introduce the truncation we only need to consider values of α such
that α ≥ µ. So, we consider two cases for the bound:

    i.  α > µ: the upper bound is (1/α) e^(α²/2 − αµ)
    ii. α = µ: the upper bound is (1/α) e^(−α²/2) = (1/µ) e^(−µ²/2)

b.2 - Now we need to find the α that minimizes the upper bound in each case.

    i.  (∂/∂α) K(α) = K′(α) = −(1/α²) e^(α²/2 − αµ) + (1/α) e^(α²/2 − αµ) (α − µ)
                    = (e^(α²/2 − αµ)/α²) (α² − αµ − 1)

        K′(α) = 0  ⇔  α² − αµ − 1 = 0  ⇔  α = (µ + √(µ² + 4))/2,

        since α > 0.

    ii. No choice: α = µ.
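As a quick numerical check of case i (a minimal Python sketch; the truncation point
µ = 0 and all variable names are our own illustration, not part of the assignment):

    import math

    # Optimal exponential rate and the resulting bound K(alpha) for an
    # illustrative truncation point mu (here mu = 0, truncation at the mean).
    mu = 0.0
    alpha_opt = (mu + math.sqrt(mu**2 + 4)) / 2   # root of alpha^2 - alpha*mu - 1 = 0
    K = (1.0 / alpha_opt) * math.exp(alpha_opt**2 / 2 - alpha_opt * mu)
    print(alpha_opt, K)   # 1.0  and  e^0.5 ≈ 1.6487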
b.3 - You have already found the M for each of the two cases (α > µ or α = µ). We use
the notation: p(y) is our truncated normal (our target distribution), and g(y) is
the shifted exponential that we use as the instrumental density. M is the smallest
value of the upper bound of the ratio p/g. So the steps in the algorithm are
straightforward (a small simulation sketch follows the steps):

    i.   Draw a candidate value X from g(x).
    ii.  Compute the ratio p/(Mg) evaluated at X.
    iii. Draw a uniform U ∼ U(0, 1).
    iv.  Set Y = X with probability p/(Mg), i.e. accept X as Y if U ≤ p(X)/(M g(X)).
    v.   If the candidate was rejected, go back to step (i).
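A minimal simulation sketch of these steps (our own Python code, not part of the
answer key; it works with the unnormalized target f from b.1, so the bound K(α)
found in b.2 plays the role of M, and the accepted draws follow the truncated normal):

    import math
    import random

    def rtruncnorm(mu, n=1000, seed=0):
        """Rejection sampling for the standard normal truncated to [mu, inf),
        using the shifted exponential g(x) = alpha*exp(-alpha*(x - mu)) as the
        instrumental density (a sketch of the steps in b.3)."""
        rng = random.Random(seed)
        # optimal rate and bound from b.2 (case alpha > mu)
        alpha = (mu + math.sqrt(mu**2 + 4)) / 2
        M = (1.0 / alpha) * math.exp(alpha**2 / 2 - alpha * mu)   # bound on f/g
        draws, tries = [], 0
        while len(draws) < n:
            tries += 1
            # i. candidate from the shifted exponential
            x = mu + rng.expovariate(alpha)
            # ii. ratio of the (unnormalized) target to M*g at the candidate
            f = math.exp(-x**2 / 2)
            g = alpha * math.exp(-alpha * (x - mu))
            ratio = f / (M * g)
            # iii./iv. accept with probability 'ratio'
            if rng.random() <= ratio:
                draws.append(x)
            # v. otherwise, draw again
        return draws, tries

    samples, tries = rtruncnorm(mu=1.0, n=2000)
    print(len(samples) / tries)   # empirical acceptance rate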
Exercise 2
(a) Let θ = (α, β, σ²), θ−α = (β, σ²), and so on. Sums below run over i = 1, ..., n.

• Likelihood

    p(y|θ) ∝ σ^(−n) exp( −(1/(2σ²)) Σᵢ (yᵢ − α − βxᵢ)² )

• Priors

    p(α)  ∝ σ_a^(−1) exp( −(α − α₀)²/(2σ_a²) )
    p(β)  ∝ σ_b^(−1) exp( −(β − β₀)²/(2σ_b²) )
    p(σ²) ∝ (σ²)^(−(ν₀/2 + 1)) exp( −ν₀s₀²/(2σ²) )

• Joint posterior

    p(θ|y) ∝ σ^(−(ν₀ + 2 + n)) exp( −(1/2) [ (1/σ²)(Σᵢ (yᵢ − α − βxᵢ)² + ν₀s₀²)
                                             + (α² − 2αα₀)/σ_a² + (β² − 2ββ₀)/σ_b² ] )
(b) Conditional posterior distributions

• σ²

    σ²|y, θ−σ² ∼ Inv-χ²( ν₀ + n, (Σᵢ (yᵢ − α − βxᵢ)² + ν₀s₀²)/(ν₀ + n) )

• α

    α|y, θ−α ∼ N(µ_α, σ_α²), where

    µ_α = (σ_a² n(ȳ − βx̄) + σ² α₀)/(n σ_a² + σ²),      σ_α² = σ² σ_a²/(n σ_a² + σ²)

• β

    β|y, θ−β ∼ N(µ_β, σ_β²), where

    µ_β = (σ_b² (Σᵢ xᵢyᵢ − n α x̄) + σ² β₀)/(σ_b² Σᵢ xᵢ² + σ²),      σ_β² = σ² σ_b²/(σ_b² Σᵢ xᵢ² + σ²)
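Since the scaled Inv-χ² is not a built-in distribution in most software, one way to
draw from the σ² conditional (a hedged NumPy sketch; the function name and arguments
are our own) uses the fact that a draw from Inv-χ²(ν, s²) can be obtained as νs²/X
with X ∼ χ²_ν:

    import numpy as np

    def draw_sigma2(resid, nu0, s20, rng):
        """Draw sigma^2 from its full conditional: with X ~ chi^2_(nu0 + n),
        (sum(resid^2) + nu0*s20) / X is a draw from the scaled Inv-chi^2 above."""
        n = len(resid)
        return (np.sum(resid**2) + nu0 * s20) / rng.chisquare(nu0 + n)

    # Example (illustrative numbers only):
    # rng = np.random.default_rng(0)
    # draw_sigma2(np.array([0.3, -0.1, 0.2]), nu0=2.0, s20=1.0, rng=rng)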
(c) To obtain a sample of size m using Gibbs sampling do the following (a code sketch
of these steps is given after the list):

    1 - Choose two of the three parameters, say, β and α. Assign initial values to those
        parameters, say β₀ and α₀.
    2 - Set t = 1.
    3 - Sample σ_t² from an Inv-χ²( ν₀ + n, (Σᵢ (yᵢ − α_(t−1) − β_(t−1) xᵢ)² + ν₀s₀²)/(ν₀ + n) )
        distribution.
    4 - Sample α_t from a N(µ_α, σ_α²) distribution with
        µ_α = (σ_a² n(ȳ − β_(t−1) x̄) + σ_t² α₀)/(n σ_a² + σ_t²),      σ_α² = σ_t² σ_a²/(n σ_a² + σ_t²).
    5 - Sample β_t from a N(µ_β, σ_β²) distribution with
        µ_β = (σ_b² (Σᵢ xᵢyᵢ − n α_t x̄) + σ_t² β₀)/(σ_b² Σᵢ xᵢ² + σ_t²),      σ_β² = σ_t² σ_b²/(σ_b² Σᵢ xᵢ² + σ_t²).
    6 - Set t = t + 1. Go back to step (3) and repeat m times.
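The steps above translate directly into a short sampler. The following is a minimal
NumPy sketch (our own code; argument names such as alpha0 and sigma2_a stand for the
prior hyperparameters α₀ and σ_a², and are not part of the answer key):

    import numpy as np

    def gibbs(y, x, alpha0, sigma2_a, beta0, sigma2_b, nu0, s20, m=5000, seed=1):
        """Gibbs sampler for the regression model of Exercise 2.
        y, x are NumPy arrays; returns an (m, 3) array of (alpha, beta, sigma^2) draws."""
        rng = np.random.default_rng(seed)
        n = len(y)
        xbar, ybar = x.mean(), y.mean()
        sum_x2, sum_xy = np.sum(x**2), np.sum(x * y)
        alpha, beta = alpha0, beta0            # steps 1-2: initial values
        out = np.empty((m, 3))
        for t in range(m):
            # step 3: sigma^2 | alpha, beta ~ scaled Inv-chi^2(nu0 + n, .)
            resid = y - alpha - beta * x
            sigma2 = (np.sum(resid**2) + nu0 * s20) / rng.chisquare(nu0 + n)
            # step 4: alpha | sigma^2, beta
            var_a = sigma2 * sigma2_a / (n * sigma2_a + sigma2)
            mean_a = (sigma2_a * n * (ybar - beta * xbar) + sigma2 * alpha0) / (n * sigma2_a + sigma2)
            alpha = rng.normal(mean_a, np.sqrt(var_a))
            # step 5: beta | sigma^2, alpha
            var_b = sigma2 * sigma2_b / (sigma2_b * sum_x2 + sigma2)
            mean_b = (sigma2_b * (sum_xy - n * alpha * xbar) + sigma2 * beta0) / (sigma2_b * sum_x2 + sigma2)
            beta = rng.normal(mean_b, np.sqrt(var_b))
            out[t] = alpha, beta, sigma2       # step 6: store and iterate
        return out

    # Example usage with made-up data (illustrative only):
    # x = np.arange(10.0)
    # y = 2 + 0.5 * x + np.random.default_rng(0).normal(size=10)
    # draws = gibbs(y, x, alpha0=0, sigma2_a=100, beta0=0, sigma2_b=100, nu0=1, s20=1)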
(d) WinBugs code

    model{
      for (i in 1:N){
        mu[i] <- alpha + beta*x[i]
        y[i] ~ dnorm(mu[i],tau.y)
      }
      alpha ~ dnorm(mu.a,tau.a)
      beta ~ dnorm(mu.b,tau.b)
      tau.y0 ~ dchisq(nu0)
      sigma2 <- nu0*s20/tau.y0
      tau.y <- 1/sigma2
    }

Here tau.y0 ~ dchisq(nu0) together with sigma2 <- nu0*s20/tau.y0 encodes the scaled
Inv-χ²(ν₀, s₀²) prior on σ², and tau.y is the corresponding precision passed to dnorm.

Below you will find how the data and initial values specification should look. Note
that this part was not required.

Data

    list(y = c(....), x = c(....), N = ,
         mu.a = , tau.a = , mu.b = , tau.b = , nu0 = , s20 = )

Inits

    list(alpha = , beta = , tau.y0 = )