
updated: 12/15/00

Hayashi Econometrics: Answers to Selected Review Questions

Chapter 6
Section 6.1
1. Let $s_n \equiv \sum_{j=1}^{n} |\psi_j|$. Then $s_m - s_n = \sum_{j=n+1}^{m} |\psi_j|$ for $m > n$. Since $|s_m - s_n| \to 0$ as $m, n \to \infty$, the sequence $\{s_n\}$ is Cauchy, and hence is convergent.
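As a numerical aside (my addition, not part of the original answer): for the hypothetical absolutely summable sequence $\psi_j = 0.9^j$, the partial sums visibly settle down, just as the Cauchy argument predicts.

```python
import numpy as np

# Hypothetical absolutely summable sequence psi_j = 0.9**j (the 0.9 is arbitrary).
psi = 0.9 ** np.arange(1, 2001)

# Partial sums s_n = sum_{j=1}^n |psi_j|.
s = np.cumsum(np.abs(psi))

# |s_m - s_n| shrinks as n and m grow, so {s_n} is (numerically) Cauchy.
for n, m in [(10, 20), (100, 200), (1000, 2000)]:
    print(f"|s_{m} - s_{n}| = {abs(s[m - 1] - s[n - 1]):.3e}")

print("limit of s_n:", s[-1], "(geometric series gives 0.9/0.1 = 9)")
```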
3. Proof that $\delta(L) = \beta(L)^{-1}\alpha(L) \Rightarrow \beta(L)\delta(L) = \alpha(L)$:
$\beta(L)\delta(L) = \beta(L)\beta(L)^{-1}\alpha(L) = \alpha(L)$.
Proof that $\beta(L)\delta(L) = \alpha(L) \Rightarrow \delta(L) = \alpha(L)\beta(L)^{-1}$:
$\alpha(L)\beta(L)^{-1} = \beta(L)\delta(L)\beta(L)^{-1} = \delta(L)$.
Proof that $\delta(L) = \alpha(L)\beta(L)^{-1} \Rightarrow \delta(L)\beta(L) = \alpha(L)$:
$\delta(L)\beta(L) = \alpha(L)\beta(L)^{-1}\beta(L) = \alpha(L)\beta(L)\beta(L)^{-1} = \alpha(L)$.
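These identities are easy to sanity-check numerically. The sketch below (my addition) truncates the power-series expansions at order $T$ and verifies $\beta(L)\delta(L) = \delta(L)\beta(L) = \alpha(L)$ for the arbitrary illustrative choices $\alpha(L) = 1 + 0.4L$ and $\beta(L) = 1 - 0.5L$.

```python
import numpy as np

T = 50  # truncation order for the power-series expansions in L

def poly_mul(a, b):
    """Product of two lag polynomials (coefficient arrays), truncated at L^T."""
    return np.convolve(a, b)[: T + 1]

def poly_inv(b):
    """Power-series inverse of b(L) (b_0 != 0): solve b(L) * c(L) = 1 recursively."""
    c = np.zeros(T + 1)
    c[0] = 1.0 / b[0]
    for k in range(1, T + 1):
        # the coefficient on L^k in b(L)c(L) must vanish
        c[k] = -sum(b[j] * c[k - j] for j in range(1, min(k, len(b) - 1) + 1)) / b[0]
    return c

alpha = np.array([1.0, 0.4])   # alpha(L) = 1 + 0.4L  (illustrative choice)
beta = np.array([1.0, -0.5])   # beta(L)  = 1 - 0.5L  (illustrative choice)

delta = poly_mul(poly_inv(beta), alpha)          # delta(L) = beta(L)^{-1} alpha(L)
alpha_padded = np.zeros(T + 1)
alpha_padded[:2] = alpha
print("beta*delta == alpha:", np.allclose(poly_mul(beta, delta), alpha_padded))
print("delta*beta == alpha:", np.allclose(poly_mul(delta, beta), alpha_padded))
```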
4. The absolute value of the roots is 4/3, which is greater than unity. So the stability condition
is met.
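The review question's polynomial is not reproduced in this key, so as a stand-in the check below (my addition) uses $\phi(z) = 1 + \frac{9}{16}z^2$, whose roots $\pm\frac{4}{3}i$ also have modulus 4/3; stability requires every root of $\phi(z) = 0$ to lie outside the unit circle.

```python
import numpy as np

# Stand-in polynomial phi(z) = 1 + (9/16) z^2, roots +/- (4/3)i, modulus 4/3.
coeffs = [9 / 16, 0.0, 1.0]     # np.roots wants highest degree first
roots = np.roots(coeffs)
print("roots:", roots)
print("moduli:", np.abs(roots))                # both equal 4/3 > 1
print("stable:", np.all(np.abs(roots) > 1))    # True: roots outside the unit circle
```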

Section 6.2
1. By the projection formula (2.9.7), $\widehat{E}(y_t \mid 1, y_{t-1}) = c + \phi y_{t-1}$. The projection coefficients do not depend on $t$. The projection is not necessarily equal to $E(y_t \mid y_{t-1})$. Also, $\widehat{E}(y_t \mid 1, y_{t-1}, y_{t-2}) = c + \phi y_{t-1}$. If $|\phi| > 1$, then $y_{t-1}$ is no longer orthogonal to $\varepsilon_t$, so we no longer have $\widehat{E}(y_t \mid 1, y_{t-1}) = c + \phi y_{t-1}$.
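A simulation sketch (my addition; the values $c = 1$, $\phi = 0.6$ are arbitrary): regressing $y_t$ on $(1, y_{t-1})$ in a long stationary AR(1) sample recovers the projection coefficients $c$ and $\phi$.

```python
import numpy as np

rng = np.random.default_rng(0)
c, phi, n = 1.0, 0.6, 200_000      # illustrative parameter values

# Simulate a stationary AR(1): y_t = c + phi*y_{t-1} + eps_t.
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = c / (1 - phi)               # start at the unconditional mean
for t in range(1, n):
    y[t] = c + phi * y[t - 1] + eps[t]

# Least-squares projection of y_t on (1, y_{t-1}).
X = np.column_stack([np.ones(n - 1), y[:-1]])
b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print("estimated (c, phi):", b)    # close to (1.0, 0.6)
```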
3. If $\phi(1)$ were equal to 0, then $z = 1$ would be a root of $\phi(z) = 0$, i.e., a unit root, which violates the stationarity condition. To prove (b) of Proposition 6.4, take the expected value of both sides of (6.2.6) to obtain
$E(y_t) - \phi_1 E(y_{t-1}) - \cdots - \phi_p E(y_{t-p}) = c.$
Since $\{y_t\}$ is covariance-stationary, $E(y_t) = \cdots = E(y_{t-p}) = \mu$. So $\mu(1 - \phi_1 - \cdots - \phi_p) = c$, i.e., $\mu = c/\phi(1)$.
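A quick check of $\mu = c/\phi(1)$ by simulation (my addition; the stationary AR(2) parameters below are arbitrary): with $c = 2$, $\phi_1 = 0.5$, $\phi_2 = -0.3$, the implied mean is $2/(1 - 0.5 + 0.3) = 2.5$.

```python
import numpy as np

rng = np.random.default_rng(1)
c, phi1, phi2, n = 2.0, 0.5, -0.3, 500_000   # illustrative, stationary AR(2)

mu = c / (1 - phi1 - phi2)                   # = c / phi(1) = 2.5

# Simulate y_t = c + phi1*y_{t-1} + phi2*y_{t-2} + eps_t.
y = np.full(n, mu)
eps = rng.standard_normal(n)
for t in range(2, n):
    y[t] = c + phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

print("c / phi(1)  =", mu)
print("sample mean =", y[5000:].mean())      # close to 2.5 (burn-in discarded)
```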

Section 6.3
4. The proof is the same as in the answer to Review Question 3 of Section 6.1, because for inverses we can still use the commutativity $A(L)A(L)^{-1} = A(L)^{-1}A(L)$.
5. Multiplying both sides of the equation in the hint from the left by $A(L)^{-1}$, we obtain $B(L)[A(L)B(L)]^{-1} = A(L)^{-1}$. Multiplying both sides of this equation from the left by $B(L)^{-1}$, we obtain $[A(L)B(L)]^{-1} = B(L)^{-1}A(L)^{-1}$.
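Since Section 6.3 deals with vector processes, the order of multiplication matters. The sketch below (my addition, with arbitrary non-commuting coefficient matrices) confirms numerically that $[A(L)B(L)]^{-1}$ equals $B(L)^{-1}A(L)^{-1}$ but not, in general, $A(L)^{-1}B(L)^{-1}$.

```python
import numpy as np

T = 30                       # truncation order for the power series in L
rng = np.random.default_rng(2)

def mat_poly_mul(P, Q):
    """Product of matrix lag polynomials (arrays of 2x2 coefficients), truncated at L^T."""
    R = np.zeros((T + 1, 2, 2))
    for i in range(min(len(P), T + 1)):
        for j in range(min(len(Q), T + 1 - i)):
            R[i + j] += P[i] @ Q[j]
    return R

def mat_poly_inv(P):
    """Truncated series C(L) solving P(L)C(L) = I, assuming P_0 = I."""
    C = np.zeros((T + 1, 2, 2))
    C[0] = np.eye(2)
    for k in range(1, T + 1):
        for j in range(1, min(k, len(P) - 1) + 1):
            C[k] -= P[j] @ C[k - j]
    return C

# Non-commuting first-order polynomials A(L) = I - A1*L and B(L) = I - B1*L,
# with coefficients scaled to keep the inverse series well behaved.
A1 = 0.4 * rng.standard_normal((2, 2))
B1 = 0.4 * rng.standard_normal((2, 2))
A = np.array([np.eye(2), -A1])
B = np.array([np.eye(2), -B1])

lhs = mat_poly_inv(mat_poly_mul(A, B))                   # [A(L)B(L)]^{-1}
rhs = mat_poly_mul(mat_poly_inv(B), mat_poly_inv(A))     # B(L)^{-1} A(L)^{-1}
bad = mat_poly_mul(mat_poly_inv(A), mat_poly_inv(B))     # A(L)^{-1} B(L)^{-1}
print("[AB]^{-1} == B^{-1}A^{-1}:", np.allclose(lhs, rhs))   # True
print("[AB]^{-1} == A^{-1}B^{-1}:", np.allclose(lhs, bad))   # False: order matters
```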

Section 6.5

1. Let $\mathbf{y} \equiv (y_n, \ldots, y_1)'$. Then $\operatorname{Var}(\sqrt{n}\,\bar{y}) = \operatorname{Var}(\mathbf{1}'\mathbf{y}/\sqrt{n}) = \mathbf{1}'\operatorname{Var}(\mathbf{y})\mathbf{1}/n$. By covariance-stationarity, $\operatorname{Var}(\mathbf{y}) = \operatorname{Var}(y_t, \ldots, y_{t-n+1})$.
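To make the algebra concrete (my addition, assuming AR(1) autocovariances $\gamma_j = \phi^{|j|}/(1-\phi^2)$ purely for illustration): build the Toeplitz matrix $\operatorname{Var}(\mathbf{y})$ and compare $\mathbf{1}'\operatorname{Var}(\mathbf{y})\mathbf{1}/n$ with a Monte Carlo estimate of $\operatorname{Var}(\sqrt{n}\,\bar{y})$.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, n, reps = 0.5, 200, 10_000          # illustrative values

# AR(1) autocovariances gamma_j = phi^|j| / (1 - phi^2), unit innovation variance;
# Var(y) is the n x n Toeplitz matrix built from them (covariance-stationarity).
idx = np.arange(n)
V = phi ** np.abs(idx[:, None] - idx[None, :]) / (1 - phi ** 2)
ones = np.ones(n)
print("1' Var(y) 1 / n      =", ones @ V @ ones / n)

# Monte Carlo estimate of Var(sqrt(n) * ybar) for comparison.
eps = rng.standard_normal((reps, n))
y = np.empty((reps, n))
y[:, 0] = eps[:, 0] / np.sqrt(1 - phi ** 2)       # stationary start
for t in range(1, n):
    y[:, t] = phi * y[:, t - 1] + eps[:, t]
print("MC Var(sqrt(n)*ybar) =", n * y.mean(axis=1).var())
```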

3. $\lim_{j \to \infty} \gamma_j = 0$. So by Proposition 6.8, $\bar{y} \to_{\text{m.s.}} \mu$, which means that $\bar{y} \to_{p} \mu$.
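An added illustration of this convergence: for AR(1) autocovariances (so $\gamma_j \to 0$), the exact formula $E[(\bar{y}-\mu)^2] = \frac{1}{n}\big[\gamma_0 + 2\sum_{j=1}^{n-1}(1 - j/n)\gamma_j\big]$ shrinks to zero as $n$ grows; the value $\phi = 0.7$ is arbitrary.

```python
import numpy as np

phi = 0.7                              # illustrative AR(1) coefficient
gamma0 = 1 / (1 - phi ** 2)            # gamma_j = phi^j * gamma0 -> 0 as j -> infinity

# E[(ybar - mu)^2] vanishes as n grows: ybar -> mu in mean square (hence in probability).
for n in [10, 100, 1_000, 10_000]:
    j = np.arange(1, n)
    mse = (gamma0 + 2 * np.sum((1 - j / n) * phi ** j * gamma0)) / n
    print(f"n = {n:6d}   E[(ybar - mu)^2] = {mse:.6f}")
```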

Section 6.6
1. When $z_t = x_t$, the choice of $S$ doesn't matter. The efficient GMM estimator reduces to OLS.
2. The estimator is consistent because it is a GMM estimator. It is not efficient, though.
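A numeric check of why $S$ is irrelevant here (my addition): in the just-identified case the GMM estimator $(X'Z\,W\,Z'X)^{-1}X'Z\,W\,Z'y$ collapses to $(X'X)^{-1}X'y$ no matter which positive definite weighting matrix is plugged in.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 500, 3
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)   # illustrative DGP

Z = X                                            # instruments equal the regressors
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Any positive definite weighting matrix W gives the same estimator.
for _ in range(3):
    A = rng.standard_normal((k, k))
    W = A @ A.T + k * np.eye(k)                  # random positive definite W
    XZ = X.T @ Z
    b_gmm = np.linalg.solve(XZ @ W @ XZ.T, XZ @ W @ (Z.T @ y))
    print("GMM == OLS:", np.allclose(b_gmm, b_ols))   # True every time
```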

Section 6.7
2. $J = \hat{\varepsilon}' X (X'X)^{-1} X' \hat{\varepsilon}$, where $\hat{\varepsilon}$ is the vector of estimated residuals.
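Computationally the statistic is a single quadratic form. A minimal sketch (my addition; `ehat` and `X` are assumed to be the residual vector and the regressor matrix):

```python
import numpy as np

def j_stat(ehat: np.ndarray, X: np.ndarray) -> float:
    """Quadratic form ehat' X (X'X)^{-1} X' ehat: the squared length of the
    projection of the residual vector ehat onto the column space of X."""
    Xe = X.T @ ehat
    return float(Xe @ np.linalg.solve(X.T @ X, Xe))
```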
4. Let $\hat{\omega}_{ij}$ be the $(i, j)$ element of $\hat{\Omega}$. The truncated kernel-based estimator with a bandwidth of $q$ can be written as (6.7.5) with $\hat{\omega}_{ij} = \hat{\varepsilon}_i \hat{\varepsilon}_j$ for $(i, j)$ such that $|i - j| \le q$ and $\hat{\omega}_{ij} = 0$ otherwise. The Bartlett kernel-based estimator obtains if we set $\hat{\omega}_{ij} = \frac{q - |i - j|}{q}\,\hat{\varepsilon}_i \hat{\varepsilon}_j$ for $(i, j)$ such that $|i - j| < q$ and $\hat{\omega}_{ij} = 0$ otherwise.
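The two weighting schemes are easy to compare in code. In the sketch below (my addition) `ehat` is a hypothetical residual vector, and `omega_hat` returns only the kernel-weighted outer-product matrix described above.

```python
import numpy as np

def omega_hat(ehat: np.ndarray, q: int, kernel: str = "bartlett") -> np.ndarray:
    """Matrix with (i, j) element w(|i-j|) * ehat_i * ehat_j for the given kernel."""
    n = len(ehat)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    if kernel == "truncated":
        w = (lags <= q).astype(float)                 # weight 1 up to lag q, 0 beyond
    elif kernel == "bartlett":
        w = np.where(lags < q, (q - lags) / q, 0.0)   # linearly declining weights
    else:
        raise ValueError(kernel)
    return w * np.outer(ehat, ehat)

# Hypothetical residuals, just to show the call pattern.
ehat = np.random.default_rng(5).standard_normal(8)
print(np.round(omega_hat(ehat, q=3, kernel="bartlett"), 2))
```

The linearly declining Bartlett weights are what keep the implied variance estimator positive semidefinite; the truncated kernel carries no such guarantee.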
5. $\operatorname{Avar}(\hat{\beta}_{OLS}) > \operatorname{Avar}(\hat{\beta}_{GLS})$ when, for example, $\rho_j = \rho^j$ (AR(1) serial correlation in the error). This is consistent with the fact that OLS is efficient, because the orthogonality conditions exploited by GLS are different from those exploited by OLS.
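A Monte Carlo illustration (my addition; all parameter values hypothetical): with AR(1) errors, so that $\rho_j = \rho^j$, GLS based on the true error covariance has markedly smaller sampling variance than OLS.

```python
import numpy as np

rng = np.random.default_rng(6)
n, rho, reps = 100, 0.8, 5_000       # illustrative values

# Toeplitz covariance of AR(1) errors: Omega_ij = rho^|i-j| / (1 - rho^2).
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho ** 2)
Oinv = np.linalg.inv(Omega)
L = np.linalg.cholesky(Omega)        # to draw correlated errors

b_ols = np.empty(reps)
b_gls = np.empty(reps)
for r in range(reps):
    x = rng.standard_normal(n)
    eps = L @ rng.standard_normal(n)          # AR(1)-correlated errors
    y = 1.0 * x + eps                         # true slope = 1
    b_ols[r] = (x @ y) / (x @ x)
    b_gls[r] = (x @ Oinv @ y) / (x @ Oinv @ x)

print("var(OLS):", b_ols.var(), "  var(GLS):", b_gls.var())   # GLS is smaller
```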
