
1. Let z be a T×1 vector of random variables with joint density function f(z; θ), where θ is a k×1 vector of unknown parameters. Derive the Cramér-Rao lower bound for the covariance matrix of an unbiased estimator of θ.

2. Consider the fixed-X linear regression model:
y = Xβ + u, E(u) = 0, E(uu') = σ²I_T
(a) Stating clearly any conditions that you impose, derive the asymptotic distribution of the OLS estimator of β when T^-1 X'X → Q, where Q is a positive definite matrix.
(b) Suppose now that X is comprised of a constant and a time trend, i.e.:
yt = β1 + β2*t + ut, t = 1, ..., T
Explain why the T^-1 X'X → Q assumption is no longer appropriate. Derive the asymptotic distribution of the OLS estimator of β2 in this framework.

SECTION B
Answer ONE of the following

3. Let xt, t = 1, ..., T, be a sample of independent and identically distributed random variables with mean E(xt) = μ and variance V(xt) = σ².
(a) State Khinchine's weak law of large numbers and provide a proof of this result. Discuss conditions under which a law of large numbers exists for a sample of independent but heterogeneously distributed random variables. Explain how these results relate to the notion of a consistent estimator of the mean μ.
(b) State the Lindeberg-Levy central limit theorem and provide a proof of this result. Explain why central limit theorems are useful in econometric analysis.

4. Let z be a T×1 vector of independent and identically distributed random variables, with joint density function f(z; θ), where θ is a k×1 vector of unknown parameters.
(a) Stating clearly any assumptions that you make, prove that the maximum likelihood estimator of θ is consistent and asymptotically normally distributed. Use your result to explain why maximum likelihood estimators are asymptotically efficient.
(b) State the likelihood ratio, Wald and Lagrange multiplier test statistics for testing the null hypothesis H0: θ = θ0. Derive the asymptotic distributions of these three statistics under the null hypothesis.

5. Consider the fixed-X linear regression model:
y = Xβ + u, E(u) = 0, E(uu') = σ²Ω
where Ω is a positive definite matrix.
(a) Prove that the GLS estimator of β is the best linear unbiased estimator for the above model. Stating clearly any conditions that you impose, show that the GLS estimator of β is consistent.
(b) Suppose the error terms are first-order autocorrelated:
u_t = ρu_{t-1} + ε_t, t = 1, ..., T, E(ε_t) = 0, E(εε') = σ²I_T
where u0 = 0 and |ρ| < 1. Explain how to obtain a feasible GLS estimator of β, and show under suitable conditions that this estimator is consistent.

Solution

David answered on Dec 21 2021
1) Let z be a T×1 vector of random variables with joint density function f(z; θ), dependent on a k×1 vector of unknown parameters θ = (θ1, θ2, ..., θk)^T. Let A(θ) be a real-valued function of θ. Then, under some regularity conditions, for any unbiased estimator Â(z) of A(θ),

Var(Â(z)) >= δ^T I(θ)^-1 δ,

where δ is the vector of derivatives of A(θ), i.e. δ = (∂A(θ)/∂θ1, ∂A(θ)/∂θ2, ..., ∂A(θ)/∂θk)^T, and I(θ) is the Fisher information matrix with (i,j)-th element

Iij(θ) = E[Si(z)Sj(z)] = -E[∂²log f(z; θ)/∂θi∂θj], i,j = 1, 2, ..., k,

and Si(z) = ∂log f(z; θ)/∂θi is the i-th score statistic, i = 1, 2, ..., k. In particular, if θ̂r(z) is an unbiased estimator of the r-th parameter θr, then under the same regularity conditions

Var(θ̂r(z)) >= Jrr(θ),

where Jrr(θ) is the r-th diagonal element of the inverse matrix J(θ) = I(θ)^-1.

Proof: The proof is very similar to that of the scalar case.
1. Following exactly the same approach as in the scalar case, show that
∂A(θ)/∂θi = Cov(Â(z), Si(z)).
2. Make use of the fact that for any coefficients c1, c2, ..., ck,
Var(Â(z) - ∑ ciSi(z)) >= 0. ..........(A)
In particular, this is true for the values c1 = c1*, c2 = c2*, ..., ck = ck* which minimize Var(Â(z) - ∑ ciSi(z)).
3. Using 1., show that
Var(Â(z) - ∑ ciSi(z)) = Var(Â(z)) + c^T I(θ) c - 2c^T δ,
with c = (c1, c2, ..., ck)^T, and then, using calculus, show that the minimizing value is
c* = (c1*, c2*, ..., ck*)^T = I(θ)^-1 δ.
Thus show that min over c of Var(Â(z) - ∑ ciSi(z)) = Var(Â(z)) - δ^T I(θ)^-1 δ,
and because of (A) the result follows.
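As an illustration that is not part of the original solution, the bound can be checked numerically in a simple special case. The sketch below, in Python with numpy, takes z1, ..., zT i.i.d. N(θ, 1), for which the Fisher information is I(θ) = T, so the CRLB for an unbiased estimator of θ is 1/T; the sample mean is unbiased and attains it. The parameter value, sample size and replication count are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of the Cramer-Rao bound for z_1, ..., z_T ~ i.i.d. N(theta, 1).
# The sample log-density is -T/2 * log(2*pi) - 0.5 * sum((z_t - theta)^2), so the
# score is sum(z_t - theta) and the Fisher information is I(theta) = T.
# The CRLB for an unbiased estimator of theta is therefore 1/T.

rng = np.random.default_rng(0)
theta, T, reps = 2.0, 50, 20000   # arbitrary choices

zbar = rng.normal(theta, 1.0, size=(reps, T)).mean(axis=1)  # sample means

print(f"CRLB (1/T):          {1.0 / T:.5f}")
print(f"Var of sample mean:  {zbar.var():.5f}")  # should be close to 1/T
```

The sample mean attains the bound here, which is consistent with it being the maximum likelihood estimator of the mean in this model.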
2)
a) The OLS estimator is β̂ = (X'X)^-1 X'y, so that β̂ - β = (X'X)^-1 X'u and

√T(β̂ - β) = (T^-1 X'X)^-1 (T^-1/2 X'u).

Now given is that T^-1 X'X → Q, where Q is a p.d. matrix; hence T(X'X)^-1 → Q^-1. Imposing in addition that the errors satisfy E(u) = 0 and E(uu') = σ²I_T, as in the model, and enough regularity for a central limit theorem to apply, T^-1/2 X'u → N(0, σ²Q) in distribution. Hence

√T(β̂ - β) → N(0, Q^-1 (σ²Q) Q^-1) = N(0, σ²Q^-1),

so β̂ has an asymptotic distribution N(β, σ²Q^-1/T).
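As a supplementary illustration, not part of the original answer, a minimal Monte Carlo sketch in Python with numpy shows the simulated covariance of √T(β̂ - β) settling near σ²Q^-1, assuming a fixed design with a constant plus one stationary regressor and i.i.d. normal errors. The design, seed and replication count are arbitrary.

```python
import numpy as np

# Monte Carlo illustration of sqrt(T)*(beta_hat - beta) -> N(0, sigma^2 * Q^{-1})
# in the fixed-X regression y = X*beta + u with T^{-1} X'X -> Q positive definite.
rng = np.random.default_rng(1)
T, reps = 200, 5000
beta = np.array([1.0, 0.5])   # true coefficients (arbitrary)
sigma = 1.0

# Fixed design: constant plus one well-behaved regressor, held fixed across draws.
X = np.column_stack([np.ones(T), rng.normal(size=T)])
Q = X.T @ X / T               # sample analogue of Q

draws = np.empty((reps, 2))
for r in range(reps):
    u = rng.normal(0.0, sigma, size=T)
    y = X @ beta + u
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimator
    draws[r] = np.sqrt(T) * (beta_hat - beta)

print("Simulated covariance of sqrt(T)*(beta_hat - beta):")
print(np.cov(draws.T))
print("Theoretical sigma^2 * Q^{-1}:")
print(sigma**2 * np.linalg.inv(Q))
```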
b) Yt = β1 + β2*t + ut, t = 1, ..., T
This...
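The preview of the solution is truncated at this point. As a hedged aside that is not taken from the downloadable answer, the reason the T^-1 X'X → Q assumption fails for a trend regressor can be seen directly: with xt = (1, t)', the trend-squared element of T^-1 X'X is T^-1 ∑ t² = (T+1)(2T+1)/6, which diverges with T, so no fixed positive definite limit Q exists and the standard treatment rescales with a matrix such as D_T = diag(T^(1/2), T^(3/2)) instead of √T. A minimal numerical check in Python:

```python
import numpy as np

# With a time trend, T^{-1} X'X does not converge: its trend-squared element
# T^{-1} * sum(t^2) = (T+1)(2T+1)/6 grows like T^2/3, so a different scaling
# matrix is required for the asymptotic distribution of the OLS estimator.
for T in (10, 100, 1000, 10000):
    t = np.arange(1.0, T + 1)
    X = np.column_stack([np.ones(T), t])
    M = X.T @ X / T
    print(f"T = {T:6d}:  (1/T)*sum(t^2) = {M[1, 1]:.3e}  vs  T^2/3 = {T**2 / 3:.3e}")
```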