Problem 4 (20 points)
Suppose $(Y_1, I_1), \ldots, (Y_n, I_n)$ are i.i.d. observations defined as follows. With probability $\pi$, $Y_i$ is drawn from a $N(\mu_1, 1)$ distribution, and with probability $1 - \pi$, $Y_i$ is drawn from a $N(\mu_2, 1)$. The variable $I_i$ indicates which distribution $Y_i$ is drawn from, i.e.
$$I_i = \begin{cases} 1, & \text{if } Y_i \sim N(\mu_1, 1) \\ 0, & \text{otherwise,} \end{cases}$$
where $\pi, \mu_1, \mu_2$ are unknown parameters. Define the following statistics:
$$Z = \sum_{i=1}^{n} I_i, \qquad
\bar{Y}_1 = \begin{cases} \sum_{i=1}^{n} Y_i I_i / Z, & \text{if } Z > 0 \\ 0, & \text{if } Z = 0, \end{cases} \qquad
\bar{Y}_2 = \begin{cases} \sum_{i=1}^{n} Y_i (1 - I_i) / (n - Z), & \text{if } Z < n \\ 0, & \text{if } Z = n. \end{cases}$$
1. Show that the likelihood function is proportional to
$$\pi^{Z} (1 - \pi)^{n - Z} \exp\!\left[-\frac{Z}{2}\,(\mu_1 - \bar{Y}_1)^2\right] \exp\!\left[-\frac{n - Z}{2}\,(\mu_2 - \bar{Y}_2)^2\right].$$
2. Now use the result in part 1 to identify a conjugate family of prior distributions on $\pi, \mu_1, \mu_2$ for this mixture model.
Problem 5 (20 points)
Consider the two-parameter linear model
$$Y \sim N(\theta_1 + \theta_2,\, 1),$$
with prior distributions $\theta_1 \sim N(a_1, b_1^2)$ and $\theta_2 \sim N(a_2, b_2^2)$, with $\theta_1$ and $\theta_2$ independent.
1. Clearly $\theta_1$ and $\theta_2$ are individually identified only by the prior; the likelihood provides information only on $\mu = \theta_1 + \theta_2$. Still, the full conditional distributions $p(\theta_1 \mid \theta_2, y)$ and $p(\theta_2 \mid \theta_1, y)$ are available in closed form. Derive these distributions.
2. Now derive the marginal posterior distributions $p(\theta_1 \mid y)$ and $p(\theta_2 \mid y)$. Do the data update the prior distributions for these parameters?
3. Set $a_1 = a_2 = 50$, $b_1 = b_2 = 1000$, and suppose we observe $y = 0$. Run the Gibbs sampler defined in part 1 for $t = 100$ iterations, starting your chains near the prior mean (say, between 40 and 50), and monitoring the progress of $\theta_1$, $\theta_2$, and $\mu$ (a code sketch follows after this problem). Does this algorithm "converge" in any sense? Estimate the posterior mean of $\mu$. Does your answer change using $t = 1000$ iterations?
4. Now keep the same values for $a_1$ and $a_2$, but set $b_1 = b_2 = 10$. Again run 100 iterations using the same starting values as in part 3. What is the effect on convergence? Repeat for $t = 1000$ iterations; is your estimate of $E(\mu \mid y)$ unchanged?
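Below is a minimal Gibbs-sampler sketch for parts 3 and 4, assuming the standard normal-normal full conditionals for this model; the function name gibbs_two_theta and the particular starting values are illustrative, not part of the assignment.

import numpy as np

def gibbs_two_theta(y, a1, a2, b1, b2, t=100, theta1=45.0, theta2=45.0, seed=0):
    """Gibbs sampler for y ~ N(theta1 + theta2, 1) with independent priors
    theta1 ~ N(a1, b1^2) and theta2 ~ N(a2, b2^2).

    Full conditionals (normal-normal updating, treating y - theta_other as a
    single observation of the remaining theta with variance 1):
      theta1 | theta2, y ~ N((a1/b1^2 + (y - theta2)) / (1/b1^2 + 1), 1/(1/b1^2 + 1))
      theta2 | theta1, y ~ N((a2/b2^2 + (y - theta1)) / (1/b2^2 + 1), 1/(1/b2^2 + 1))
    """
    rng = np.random.default_rng(seed)
    draws = np.empty((t, 2))
    for s in range(t):
        prec1 = 1.0 / b1**2 + 1.0
        theta1 = rng.normal((a1 / b1**2 + (y - theta2)) / prec1, np.sqrt(1.0 / prec1))
        prec2 = 1.0 / b2**2 + 1.0
        theta2 = rng.normal((a2 / b2**2 + (y - theta1)) / prec2, np.sqrt(1.0 / prec2))
        draws[s] = theta1, theta2
    return draws

# Part 3: vague priors (b1 = b2 = 1000), y = 0, t = 100 iterations.
draws = gibbs_two_theta(y=0.0, a1=50, a2=50, b1=1000, b2=1000, t=100)
mu = draws.sum(axis=1)  # mu = theta1 + theta2 at each iteration
print("estimated posterior mean of mu:", mu.mean())

# Part 4: tighter priors (b1 = b2 = 10), same starting values.
draws_tight = gibbs_two_theta(y=0.0, a1=50, a2=50, b1=10, b2=10, t=100)
print("estimated posterior mean of mu:", draws_tight.sum(axis=1).mean())

Monitoring the two theta chains alongside mu in these runs is exactly the diagnostic the problem asks about: with very diffuse priors the individual theta chains are barely identified and tend to drift, while mu typically stabilizes quickly.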
Problem 2 (15 points)
For the following data we will consider two competing models:
$H_1$: a linear regression; $H_2$: a quadratic regression.
$x_i$: XXXXXXXXXX XXXXXXXXXX XXXXXXXXXX
$y_i$: XXXXXXXXXX XXXXXXXXXX XXXXXXXXXX
Model $H_1$:
$$y_i = \beta_1 + \beta_2 x_i + \varepsilon_i, \quad i = 1, \ldots, n,$$
$$\varepsilon_i \sim N(0, 1),$$
$$\beta_1 \sim N(0, 1), \quad \beta_2 \sim N(1, 1),$$
with $\beta_1$ and $\beta_2$ a priori independent.
Model $H_2$:
$$y_i = \gamma_1 + \gamma_2 x_i + \gamma_3 x_i^2 + \varepsilon_i, \quad i = 1, \ldots, n,$$
$$\varepsilon_i \sim N(0, 1),$$
$$\gamma_1 \sim N(0, 1), \quad \gamma_2 \sim N(1, 1), \quad \gamma_3 \sim N(0, 1),$$
with $\gamma_1, \gamma_2, \gamma_3$ a priori independent.
1. Find the marginal distributions $p(y \mid H_1) = \int p(y \mid \beta)\, p(\beta)\, d\beta$ and $p(y \mid H_2) = \int p(y \mid \gamma)\, p(\gamma)\, d\gamma$. (Hint: writing the regression models in matrix form makes it easier to derive the marginal distributions.)
2. Write down the Bayes factor $B = p(y \mid H_2) / p(y \mid H_1)$ for comparing model $H_1$ vs. model $H_2$, and evaluate it for the given data set (a computational sketch follows after this problem).
3. We now replace the prior distributions by improper constant priors: $p(\beta) = c_1$ in model $H_1$ and $p(\gamma) = c_2$ in model $H_2$. We can still evaluate the integrals $\int p(y \mid \beta)\, p(\beta)\, d\beta$ and $\int p(y \mid \gamma)\, p(\gamma)\, d\gamma$ and define a Bayes factor
$$B = \frac{\int p(y \mid \gamma)\, p(\gamma)\, d\gamma}{\int p(y \mid \beta)\, p(\beta)\, d\beta}.$$
Show that the value of this Bayes factor depends on the arbitrarily chosen constants $c_1$ and $c_2$.
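With normal priors and unit-variance normal errors, each marginal likelihood is itself multivariate normal: $y \mid H \sim N(Xa,\; XBX^{\top} + I)$, where $X$ is the design matrix and $a$, $B$ are the prior mean and covariance of the coefficients. The sketch below evaluates the Bayes factor of part 2 this way; since the actual $x_i, y_i$ values are redacted above, the data here are placeholders only.

import numpy as np
from scipy.stats import multivariate_normal

def log_marginal(y, X, prior_mean, prior_cov):
    """log p(y | H) for y = X b + eps with eps ~ N(0, I) and b ~ N(prior_mean, prior_cov);
    marginally y ~ N(X prior_mean, X prior_cov X' + I)."""
    mean = X @ prior_mean
    cov = X @ prior_cov @ X.T + np.eye(len(y))
    return multivariate_normal(mean=mean, cov=cov).logpdf(y)

# Placeholder data: the actual x_i, y_i are redacted in the problem statement.
x = np.array([-1.0, 0.0, 1.0])
y = np.array([0.2, 1.1, 2.3])

X1 = np.column_stack([np.ones_like(x), x])        # design matrix under H1 (linear)
X2 = np.column_stack([np.ones_like(x), x, x**2])  # design matrix under H2 (quadratic)

log_m1 = log_marginal(y, X1, np.array([0.0, 1.0]), np.eye(2))       # beta  ~ N((0,1)', I)
log_m2 = log_marginal(y, X2, np.array([0.0, 1.0, 0.0]), np.eye(3))  # gamma ~ N((0,1,0)', I)
print("Bayes factor B = p(y|H2)/p(y|H1):", np.exp(log_m2 - log_m1))

This also illustrates part 3: replacing the proper priors with constants $c_1, c_2$ rescales the two integrals by $c_1$ and $c_2$ respectively, so the ratio carries the arbitrary factor $c_2 / c_1$.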
Answered 1 day after Nov 13, 2022

Solution

Banasree answered on Nov 14 2022
Problem 4.
Ans.
1. Likelihood. Up to constants, write the contributions of the observations with $I_i = 1$ and with $I_i = 0$ separately:
$$L_1 \propto \prod_{i:\, I_i = 1} \pi \exp\!\left[-\tfrac{1}{2}(Y_i - \mu_1)^2\right] \qquad (1)$$
$$L_2 \propto \prod_{i:\, I_i = 0} (1 - \pi) \exp\!\left[-\tfrac{1}{2}(Y_i - \mu_2)^2\right] \qquad (2)$$
Simplifying (1) and (2), using $\sum_{i:\, I_i = 1}(Y_i - \mu_1)^2 = \sum_{i:\, I_i = 1}(Y_i - \bar{Y}_1)^2 + Z(\mu_1 - \bar{Y}_1)^2$ (and the analogous identity for $\mu_2$), and dropping factors that do not involve the parameters,
$$L = L_1 L_2 \propto \pi^{Z}(1 - \pi)^{n - Z} \exp\!\left[-\tfrac{Z}{2}(\mu_1 - \bar{Y}_1)^2\right] \exp\!\left[-\tfrac{n - Z}{2}(\mu_2 - \bar{Y}_2)^2\right].$$
2. Ans.
Conjugate priors for $\mu_1, \mu_2$ and $\pi$: from the factorization above,
$$p(\pi \mid \mu_1) \propto \pi^{Z} \exp\!\left[-\tfrac{Z}{2}(\mu_1 - \bar{Y}_1)^2\right],$$
$$p((1 - \pi) \mid \mu_2) \propto (1 - \pi)^{n - Z} \ldots$$
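One way to complete the conjugate-family identification, sketched directly from the factorization in part 1 (the hyperparameters $a, b, m_j, s_j^2$ below are generic, not taken from the downloaded solution): the likelihood is a binomial-type kernel in $\pi$ times independent normal kernels in $\mu_1$ and $\mu_2$, so a conjugate family is
$$\pi \sim \mathrm{Beta}(a, b), \qquad \mu_1 \sim N(m_1, s_1^2), \qquad \mu_2 \sim N(m_2, s_2^2) \quad \text{(independent)},$$
which yields a posterior in the same family,
$$\pi \mid \text{data} \sim \mathrm{Beta}(a + Z,\; b + n - Z), \qquad
\mu_j \mid \text{data} \sim N\!\left(\frac{m_j/s_j^2 + n_j \bar{Y}_j}{1/s_j^2 + n_j},\; \frac{1}{1/s_j^2 + n_j}\right),$$
with $n_1 = Z$ and $n_2 = n - Z$.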