Solution
David answered on Dec 21 2021
1) Let z be a T×1 vector of random variables with joint density function f(z; θ) depending on a k×1 vector of unknown parameters θ = (θ_1, θ_2, ..., θ_k)^T. Let A(θ) be a real-valued function of θ. Then, under some regularity conditions, for any unbiased estimator Â of A(θ),

Var(Â) >= δ^T I(θ)^-1 δ,

where δ is the vector of derivatives of A(θ), i.e. δ = (∂A(θ)/∂θ_1, ∂A(θ)/∂θ_2, ..., ∂A(θ)/∂θ_k)^T, and I(θ) is the Fisher information matrix with (i,j)-th element

I_ij(θ) = E[S_i(z) S_j(z)] = -E[∂² log f(z; θ)/∂θ_i ∂θ_j],   i, j = 1, 2, ..., k,

and S_i(z) = ∂ log f(z; θ)/∂θ_i is the i-th score statistic, i = 1, 2, ..., k. In particular, if θ̂_r(z) is an unbiased estimator of the r-th parameter θ_r, then under the same regularity conditions

Var(θ̂_r) >= J^rr(θ),

where J^rr(θ) is the r-th diagonal element of the inverse matrix I(θ)^-1.
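As a concrete illustration of the bound (my own sketch, not part of the original question), consider an i.i.d. sample z_1, ..., z_T from N(μ, σ²) with θ = (μ, σ²)^T; the model, sample size and parameter values below are assumptions made purely for the example. For this model the Fisher information has a simple closed form, and the sample mean, which is unbiased for μ, attains the bound J^11(θ) = σ²/T.

```python
# Minimal numerical sketch (illustration only), assuming z_1,...,z_T i.i.d. N(mu, sigma^2)
# with theta = (mu, sigma^2)^T.  For this model
#   I(theta) = T * [[1/sigma^2, 0], [0, 1/(2*sigma^4)]],
# so the Cramer-Rao bound for an unbiased estimator of mu is the (1,1) element of I(theta)^-1.
import numpy as np

T, mu, sigma2 = 50, 1.0, 4.0          # assumed sample size and parameter values

info = T * np.array([[1.0 / sigma2, 0.0],
                     [0.0, 1.0 / (2.0 * sigma2 ** 2)]])   # Fisher information matrix
J = np.linalg.inv(info)               # inverse information matrix

crb_mu = J[0, 0]                      # bound for any unbiased estimator of mu
var_sample_mean = sigma2 / T          # exact variance of the sample mean

print(f"Cramer-Rao bound for mu : {crb_mu:.4f}")
print(f"Var(sample mean)        : {var_sample_mean:.4f}")  # equal: the bound is attained
```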
Proof: The proof is very similar to that of the scalar case.
1. Following exactly the same approach as in the scalar case, show that

∂A(θ)/∂θ_i = Cov(Â, S_i(z)),   i = 1, 2, ..., k.

2. Make use of the fact that for any coefficients c_1, c_2, ..., c_k,

Var(Â − Σ_{i=1}^{k} c_i S_i(z)) >= 0.   ............................ (A)

In particular, this is true for the values c_1 = c*_1, c_2 = c*_2, ..., c_k = c*_k which minimize Var(Â − Σ_{i=1}^{k} c_i S_i(z)).

3. Using 1., show that

Var(Â − Σ_{i=1}^{k} c_i S_i(z)) = Var(Â) + c^T I(θ) c − 2 c^T δ,

with c = (c_1, c_2, ..., c_k)^T, and then using calculus show that

c* = (c*_1, c*_2, ..., c*_k)^T = I(θ)^-1 δ.

Thus show that min over c of Var(Â − Σ_{i=1}^{k} c_i S_i(z)) equals Var(Â) − δ^T I(θ)^-1 δ, and because of (A) the result follows.
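Steps 1 and 3 can also be checked numerically. The sketch below is my own Monte Carlo illustration, again assuming an i.i.d. N(μ, σ²) sample with θ = (μ, σ²)^T and Â equal to the sample mean; it estimates Cov(Â, S_i(z)) and the variance of Â − c*^T S(z) by simulation.

```python
# Monte Carlo check (illustration only) of the two key identities in the proof,
# assuming z_1,...,z_T i.i.d. N(mu, sigma^2), theta = (mu, sigma^2)^T and A_hat = sample mean.
# Step 1: Cov(A_hat, S_i(z)) should equal dA/dtheta_i, i.e. (1, 0) here.
# Step 3: with c* = I(theta)^-1 delta, Var(A_hat - c*'S) = Var(A_hat) - delta' I(theta)^-1 delta.
import numpy as np

rng = np.random.default_rng(0)
T, mu, sigma2, R = 50, 1.0, 4.0, 100_000   # assumed design and number of replications

z = rng.normal(mu, np.sqrt(sigma2), size=(R, T))
A_hat = z.mean(axis=1)                                     # unbiased estimator of mu

# score statistics S_1, S_2 evaluated at the true theta
S1 = (z - mu).sum(axis=1) / sigma2
S2 = -T / (2 * sigma2) + ((z - mu) ** 2).sum(axis=1) / (2 * sigma2 ** 2)

info = T * np.diag([1 / sigma2, 1 / (2 * sigma2 ** 2)])    # Fisher information matrix
delta = np.array([1.0, 0.0])                               # derivatives of A(theta) = mu
c_star = np.linalg.solve(info, delta)                      # minimizing coefficients

print(np.cov(A_hat, S1)[0, 1], np.cov(A_hat, S2)[0, 1])    # ~ 1 and ~ 0 (step 1)
resid = A_hat - (c_star[0] * S1 + c_star[1] * S2)
print(resid.var(), A_hat.var() - delta @ np.linalg.solve(info, delta))  # both ~ 0 (step 3)
```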
2)
a) √T (β̂ − β) → N(0, σ² Q^-1).
Now it is given that T^-1 (X'X) → Q, where Q is a p.d. matrix. Hence T (X'X)^-1 → Q^-1, where Q^-1 is also p.d., so in large samples Q^-1 can be approximated by T (X'X)^-1. Hence
√T (β̂ − β) is approximately N(0, σ² T (X'X)^-1),
and therefore β̂ has an asymptotic distribution N(β, σ² (X'X)^-1).
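A quick way to see this result at work is a small simulation (my own illustration, not part of the answer). The design below, an intercept plus one standard-normal regressor together with the chosen values of T, σ² and β, is assumed purely for the example; for that design Q = I_2, so the empirical covariance matrix of √T (β̂ − β) should be close to σ² Q^-1.

```python
# Simulation sketch (illustration only) of the result in part a): for y = X*beta + u with
# u ~ N(0, sigma^2 I) and T^-1 X'X -> Q (p.d.), sqrt(T)*(beta_hat - beta) is approximately
# N(0, sigma^2 * Q^-1) in large samples.  Here X = [1, x] with x i.i.d. N(0,1), so Q = I_2.
import numpy as np

rng = np.random.default_rng(1)
T, sigma2, R = 400, 2.0, 20_000          # assumed sample size, error variance, replications
beta = np.array([1.0, 0.5])              # assumed true coefficients
Q = np.eye(2)                            # limit of X'X / T for this design

errs = np.empty((R, 2))
for r in range(R):
    x = rng.normal(size=T)
    X = np.column_stack([np.ones(T), x])
    u = rng.normal(scale=np.sqrt(sigma2), size=T)
    y = X @ beta + u
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate
    errs[r] = np.sqrt(T) * (beta_hat - beta)

print("empirical cov of sqrt(T)(beta_hat - beta):\n", np.cov(errs, rowvar=False))
print("theoretical sigma^2 * Q^-1:\n", sigma2 * np.linalg.inv(Q))
```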
Yt = β1 + β2*t + ut
This...