
Directions: Think of this exam as a game of "Jeopardy!" (the TV show). It gives you the answers to the maximization problem; you have to find the questions that lead to those answers. The tasks below require you to use maximum likelihood to estimate the regression coefficients, the standard error of the regression, and the standard errors of the estimates. Specifically, assume that the error term in the following regression model is distributed normally with mean zero and constant variance:

$$\varepsilon_i = y_i - \alpha - \beta x_i, \qquad \varepsilon \sim N(0, \sigma^2)$$

Your assignment is to maximize the likelihood function

$$\mathcal{L}(\alpha, \beta, \sigma^2) = \prod_i \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2}\right)$$

and use the first-order conditions to estimate $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$. Then you must use the second-order conditions to demonstrate that the likelihood function has been maximized (and is not at a saddle point or minimum). Finally, you must estimate the standard errors of your estimates.

1. (XXXXXXXXXX points) Take the natural logarithm of the likelihood function to obtain the log-likelihood function. Then derive the first-order conditions for a maximum of the log-likelihood function with respect to $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$.

2. (XXXXXXXXXX points) Define $\bar x = \frac{1}{N}\sum x_i$ and $\bar y = \frac{1}{N}\sum y_i$. Show that the first-order conditions imply that:

$$\hat\alpha = \bar y - \hat\beta\,\bar x, \qquad \hat\beta = \frac{\sum (x_i - \bar x)(y_i - \bar y)}{\sum (x_i - \bar x)^2}, \qquad \hat\sigma^2 = \frac{1}{N}\sum \left(y_i - \hat\alpha - \hat\beta x_i\right)^2$$

3. (XXXXXXXXXX points) Next, you need to show that the second-order conditions for a maximum are satisfied.
   - Set up the Hessian matrix of second partials at $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$.
   - Show that the own-partials are negative at $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$.
   - Show that the determinant of the Hessian is negative at $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$.

4. (XXXXXXXXXX points) Define the information matrix as the negative of the inverse of the Hessian matrix at $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$. Show that the square roots of the information matrix's diagonal elements imply that:

$$\text{std. error}(\hat\alpha) = \hat\sigma\sqrt{\frac{\frac{1}{N}\sum x_i^2}{\sum (x_i - \bar x)^2}}, \qquad \text{std. error}(\hat\beta) = \hat\sigma\sqrt{\frac{1}{\sum (x_i - \bar x)^2}}, \qquad \text{std. error}(\hat\sigma^2) = \hat\sigma^2\sqrt{\frac{2}{N}}$$

5. (XXXXXXXXXX points) Use what you have learned about optimization to explain why our estimate of a parameter's standard error is smaller when the log-likelihood surface comes to a sharp peak along its dimension.

Solution

Robert answered on Dec 21 2021
1. The regression model is given by

$$y_i = \alpha + \beta x_i + \varepsilon_i, \quad \text{i.e.} \quad \varepsilon_i = y_i - \alpha - \beta x_i,$$

where $\varepsilon_i \sim N(0, \sigma^2)$.

The likelihood function is given by

$$L(\alpha, \beta, \sigma^2) = \prod_i \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2}\right)$$
Then, the log-likelihood function is given by

$$l(\alpha, \beta, \sigma^2) = \log L(\alpha, \beta, \sigma^2) = -\frac{N}{2}\log(2\pi) - \frac{N}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_i (y_i - \alpha - \beta x_i)^2$$
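As a quick sanity check, the log-likelihood above can be written out directly in code. This is a minimal sketch (not part of the original solution), assuming NumPy is available; the function and variable names are illustrative.

```python
import numpy as np

def log_likelihood(alpha, beta, sigma2, x, y):
    """Gaussian log-likelihood l(alpha, beta, sigma^2) for the model
    y_i = alpha + beta * x_i + e_i with e_i ~ N(0, sigma2)."""
    resid = y - alpha - beta * x           # e_i = y_i - alpha - beta * x_i
    n = len(y)
    return (-0.5 * n * np.log(2 * np.pi)   # -N/2 * log(2*pi)
            - 0.5 * n * np.log(sigma2)     # -N/2 * log(sigma^2)
            - np.sum(resid**2) / (2 * sigma2))
```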
The first-order conditions are given by

$$\frac{\partial l}{\partial \alpha}(\alpha, \beta, \sigma^2) = \frac{1}{\sigma^2}\sum_i (y_i - \alpha - \beta x_i)$$

$$\frac{\partial l}{\partial \alpha}(\alpha, \beta, \sigma^2) = 0 \;\Rightarrow\; \sum_i (y_i - \alpha - \beta x_i) = 0 \;\Rightarrow\; \sum_i y_i = N\alpha + \beta\sum_i x_i \qquad (1)$$

$$\frac{\partial l}{\partial \beta}(\alpha, \beta, \sigma^2) = \frac{1}{\sigma^2}\sum_i (y_i - \alpha - \beta x_i)\,x_i$$

$$\frac{\partial l}{\partial \beta}(\alpha, \beta, \sigma^2) = 0 \;\Rightarrow\; \sum_i (y_i - \alpha - \beta x_i)\,x_i = 0 \qquad (2)$$

$$\frac{\partial l}{\partial \sigma^2}(\alpha, \beta, \sigma^2) = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_i (y_i - \alpha - \beta x_i)^2$$

$$\frac{\partial l}{\partial \sigma^2}(\alpha, \beta, \sigma^2) = 0 \;\Rightarrow\; -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_i (y_i - \alpha - \beta x_i)^2 = 0 \qquad (3)$$
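To double-check the three derivatives above, one can differentiate a single observation's log-density symbolically; summing over $i$ then recovers the first-order conditions. A hedged sketch using SymPy (an assumption on my part; any computer algebra system would do):

```python
import sympy as sp

alpha, beta, sigma2, x_i, y_i = sp.symbols('alpha beta sigma2 x_i y_i')
# Log-density of one observation (the -log(2*pi)/2 constant drops out
# under differentiation, so it is omitted here).
l_i = -sp.log(sigma2) / 2 - (y_i - alpha - beta * x_i)**2 / (2 * sigma2)

print(sp.simplify(sp.diff(l_i, alpha)))   # (y_i - alpha - beta*x_i)/sigma2
print(sp.simplify(sp.diff(l_i, beta)))    # x_i*(y_i - alpha - beta*x_i)/sigma2
print(sp.simplify(sp.diff(l_i, sigma2)))  # -1/(2*sigma2) + (...)**2/(2*sigma2**2)
```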
2. Putting the value of $\alpha$ from (1) into (2), we get

$$N\sum_i x_i y_i = \sum_i y_i \sum_i x_i - \beta\left(\sum_i x_i\right)^2 + N\beta\sum_i x_i^2$$

$$\Rightarrow\; N\sum_i x_i y_i - \sum_i y_i \sum_i x_i = \beta\left(N\sum_i x_i^2 - \left(\sum_i x_i\right)^2\right)$$

$$\Rightarrow\; \sum_i x_i y_i - N\bar x\,\bar y = \beta\left(\sum_i x_i^2 - N\bar x^2\right)$$

where $\bar x = \frac{1}{N}\sum_i x_i$ and $\bar y = \frac{1}{N}\sum_i y_i$.

$$\Rightarrow\; \sum_i (x_i - \bar x)(y_i - \bar y) = \beta\sum_i (x_i - \bar x)^2$$

$$\Rightarrow\; \hat\beta = \frac{\sum_i (x_i - \bar x)(y_i - \bar y)}{\sum_i (x_i - \bar x)^2}$$

Putting the value of $\hat\beta$ into (1), we get

$$\hat\alpha = \bar y - \hat\beta\,\bar x$$

Again, from (3),

$$-\frac{N}{2\hat\sigma^2} + \frac{1}{2\hat\sigma^4}\sum_i (y_i - \hat\alpha - \hat\beta x_i)^2 = 0 \;\Rightarrow\; \hat\sigma^2 = \frac{1}{N}\sum_i (y_i - \hat\alpha - \hat\beta x_i)^2$$
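These closed-form estimates can be verified numerically: simulate data, compute $\hat\alpha$, $\hat\beta$, $\hat\sigma^2$ from the formulas above, and compare against a generic optimizer applied to the log-likelihood. A minimal sketch, assuming NumPy and SciPy are available; the simulated parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # true alpha=1, beta=2

# Closed-form maximum-likelihood estimates from the first-order conditions
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
alpha_hat = y.mean() - beta_hat * x.mean()
sigma2_hat = np.mean((y - alpha_hat - beta_hat * x)**2)

# Numerical check: minimize the negative log-likelihood over (alpha, beta, sigma2)
def neg_loglik(theta):
    a, b, s2 = theta
    r = y - a - b * x
    return 0.5 * n * np.log(2 * np.pi * s2) + np.sum(r**2) / (2 * s2)

res = minimize(neg_loglik, x0=[0.0, 0.0, 1.0],
               bounds=[(None, None), (None, None), (1e-8, None)])
print(alpha_hat, beta_hat, sigma2_hat)  # closed form
print(res.x)                            # should match to optimizer tolerance
```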
3. The second-order conditions are

$$\frac{\partial^2 l}{\partial \alpha^2}(\alpha, \beta, \sigma^2) = -\frac{N}{\sigma^2}$$

$$\frac{\partial^2 l}{\partial \alpha\,\partial \beta}(\alpha, \beta, \sigma^2) = -\frac{\sum_i x_i}{\sigma^2} = -\frac{N\bar x}{\sigma^2}$$

...
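The second-order conditions and the standard errors from the question can also be checked numerically: a finite-difference Hessian of the log-likelihood at the MLE should have negative own-partials and a negative determinant, and the square roots of the diagonal of $-H^{-1}$ give the standard errors. A hedged sketch under the same simulated-data assumptions as above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
alpha_hat = y.mean() - beta_hat * x.mean()
sigma2_hat = np.mean((y - alpha_hat - beta_hat * x)**2)

def loglik(theta):
    a, b, s2 = theta
    r = y - a - b * x
    return -0.5 * n * np.log(2 * np.pi * s2) - np.sum(r**2) / (2 * s2)

def num_hessian(f, theta, h=1e-4):
    """Central-difference Hessian of a scalar function f at theta."""
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            def shifted(si, sj):
                t = theta.copy()
                t[i] += si * h
                t[j] += sj * h
                return f(t)
            H[i, j] = (shifted(1, 1) - shifted(1, -1)
                       - shifted(-1, 1) + shifted(-1, -1)) / (4 * h**2)
    return H

theta_hat = np.array([alpha_hat, beta_hat, sigma2_hat])
H = num_hessian(loglik, theta_hat)

print(np.diag(H))        # own-partials: all negative at the MLE
print(np.linalg.det(H))  # determinant of the 3x3 Hessian: negative
print(np.sqrt(np.diag(-np.linalg.inv(H))))  # std. errors of the estimates
```

A sharper peak in the log-likelihood along some dimension means a larger (more negative) second derivative there, hence a smaller diagonal entry of $-H^{-1}$, which is why the printed standard error shrinks as the curvature grows.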
