Econ 627: Suggested answers to Assignment 1, January 2010

(a)
\[
\|R_{2,n}\| = \Big\| \frac{1}{n}\sum_{i=1}^{n} \big((\hat\theta_n - \theta)'X_i\big)^2 Z_iZ_i' \Big\|
\le \frac{1}{n}\sum_{i=1}^{n} \Big\| \big((\hat\theta_n - \theta)'X_i\big)^2 Z_iZ_i' \Big\|
\quad \text{by the triangle inequality}
\]
\[
= \frac{1}{n}\sum_{i=1}^{n} \big((\hat\theta_n - \theta)'X_i\big)^2 \,\|Z_iZ_i'\|
\quad \text{since } \big((\hat\theta_n - \theta)'X_i\big)^2 \text{ is a scalar}
\]
\[
\le \|\hat\theta_n - \theta\|^2\, \frac{1}{n}\sum_{i=1}^{n} \|X_i\|^2 \,\|Z_iZ_i'\|
\quad \text{by the Cauchy--Schwarz inequality,}
\]
and
\[
\|Z_iZ_i'\| = \operatorname{tr}\big(Z_iZ_i'Z_iZ_i'\big)^{1/2}
= \big(Z_i'Z_i\, \operatorname{tr}(Z_iZ_i')\big)^{1/2} = Z_i'Z_i = \|Z_i\|^2,
\quad \text{since } \operatorname{tr}(Z_iZ_i') = Z_i'Z_i.
\]
Hence
\[
\|R_{2,n}\| \le \|\hat\theta_n - \theta\|^2\, \frac{1}{n}\sum_{i=1}^{n} \|X_i\|^2 \|Z_i\|^2.
\]
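The norm identity used above is easy to check numerically. The following sketch (not part of the original solution; it treats $\|\cdot\|$ as the Frobenius norm) verifies that $\|Z_iZ_i'\| = \|Z_i\|^2$ for a random vector:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=5)                     # a single instrument vector Z_i
outer = np.outer(z, z)                     # Z_i Z_i'

# Frobenius norm of the outer product: tr(Z_i Z_i' Z_i Z_i')^{1/2}
frob = np.sqrt(np.trace(outer @ outer))

# It equals the squared Euclidean norm ||Z_i||^2 = Z_i'Z_i
assert np.isclose(frob, z @ z)
assert np.isclose(np.linalg.norm(outer, "fro"), np.linalg.norm(z) ** 2)
print("||ZZ'||_F =", frob, "= ||Z||^2 =", z @ z)
```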

The assumption $EX_{i,j}^4 < \infty$ for all $j = 1, \ldots, k$ implies
\[
E\|X_i\|^4 = E\big(\operatorname{tr}(X_iX_i')\big)^2
= E\Big(\sum_{r=1}^{k} X_{i,r}^2\Big)^2
= \sum_{r=1}^{k}\sum_{s=1}^{k} E\big(X_{i,r}^2 X_{i,s}^2\big)
\le \sum_{r=1}^{k}\sum_{s=1}^{k} \big(EX_{i,r}^4\big)^{1/2}\big(EX_{i,s}^4\big)^{1/2} < \infty.
\]

Similarly, $EZ_{i,j}^4 < \infty$ for all $j$ implies $E\|Z_i\|^4 < \infty$. By the Cauchy--Schwarz inequality,
\[
E\big(\|X_i\|^2 \|Z_i\|^2\big) \le \big(E\|X_i\|^4\big)^{1/2}\big(E\|Z_i\|^4\big)^{1/2} < \infty.
\]

Then by the Weak Law of Large Numbers,
\[
\frac{1}{n}\sum_{i=1}^{n} \|X_i\|^2\|Z_i\|^2 \to_p E\big(\|X_i\|^2\|Z_i\|^2\big).
\]
Therefore $\|\hat\theta_n - \theta\| \to_p 0$ and the Slutsky theorem imply
\[
\|\hat\theta_n - \theta\|^2\, \frac{1}{n}\sum_{i=1}^{n}\|X_i\|^2\|Z_i\|^2 \to_p 0,
\]
and hence $R_{2,n} \to_p 0$.

(b) First we will show (8). Since $V_n(A_n) \to_p V(A)$, by Slutsky's theorem $(V_n(A_n))^{-1/2} \to_p V(A)^{-1/2}$. Note that $n^{1/2}\big(\hat\theta_n(A_n) - \theta\big) \to_d N(0, V(A))$; then by the Cram\'er convergence theorem
\[
(V_n(A_n))^{-1/2}\, n^{1/2}\big(\hat\theta_n(A_n) - \theta\big) \to_d V(A)^{-1/2}\, N(0, V(A)) = N(0, I_k).
\]

Consequently
\[
\frac{\sqrt{n}\big(\hat\theta_{n,j}(A_n) - \theta_j\big)}{\sqrt{[V_n(A_n)]_{jj}}} \to_d N(0,1),
\]
which implies
\[
P\left( \frac{\sqrt{n}\big(\hat\theta_{n,j}(A_n) - \theta_j\big)}{\sqrt{[V_n(A_n)]_{jj}}} \le z \right) \to P(Z \le z) \quad \text{for all } z \in \mathbb{R},
\]
where $Z$ is a $N(0,1)$ random variable. Therefore
\[
P\big(\theta_j \in CI_{n,j,1-\alpha}\big)
= P\left( \left| \frac{\sqrt{n}\big(\hat\theta_{n,j}(A_n) - \theta_j\big)}{\sqrt{[V_n(A_n)]_{jj}}} \right| \le z_{1-\alpha/2} \right)
\to P\big(|Z| \le z_{1-\alpha/2}\big) = 1 - \alpha,
\]
which establishes (8).
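The coverage statement can be illustrated with a small Monte Carlo sketch. It uses a sample mean in place of $\hat\theta_n(A_n)$ (a simplification assumed purely for illustration; the studentizing logic is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, alpha = 500, 4000, 0.05
z_crit = 1.959963984540054        # z_{1 - alpha/2} for alpha = 0.05
theta = 2.0                       # true parameter (here: a population mean)

covered = 0
for _ in range(reps):
    x = rng.exponential(theta, size=n)      # skewed data, CLT still applies
    theta_hat = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)         # estimated standard error
    if abs(theta_hat - theta) <= z_crit * se:
        covered += 1

coverage = covered / reps
print(f"empirical coverage: {coverage:.3f} (nominal {1 - alpha})")
assert abs(coverage - (1 - alpha)) < 0.03   # close to 95% for moderate n
```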

Now we want to show (9). Write
\[
W_n = \sqrt{n}\big(\hat\theta_n(A_n) - \theta_0\big)'(V_n(A_n))^{-1/2}\,(V_n(A_n))^{-1/2}\,\sqrt{n}\big(\hat\theta_n(A_n) - \theta_0\big).
\]
From above we know
\[
(V_n(A_n))^{-1/2}\,\sqrt{n}\big(\hat\theta_n(A_n) - \theta_0\big) \to_d N(0, I_k),
\]
so by the Continuous Mapping Theorem
\[
W_n \to_d \chi^2_k. \tag{9}
\]

Now we show (10) and (11). Note that under the local alternative $\theta_n = \theta_0 + n^{-1/2}\delta$, the probability limit of $\hat\theta_n$ is $\theta_0$. Observe


1

X Z 1 Z X n n n +

X Z 1 Z Y n n n
1

= 0 +

X Z 1 Z X n n n
1

X Z 1 Z U n n n

Since

X Z 1 Z U n 0 and XnZ 1 ZnX n n n p 0 by WLLN and Slutsky n p 0 . To show Vn (An ) p V (A) and (10), the crucial step is to theorem, show n (An ) p . As usual we write n

1 n (An ) = n

Ui2 Zi Zi R1,n (An ) R2,n (An ).


i=1

By arguements similar to those in page 4 of Lecture 1 R1,n (An ) and R2,n (An ) converge to zero in probability. Alternatively, we substitute Ui = Ui Xi (n (An ) 0 ) + Xi / n into
1 n (An ) = n
n

Ui2 Zi Zi .
i=1

As terms invovling

will converges to 0, manipulation yields


1 n
n

Ui2 Zi Zi p .
i=1
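A quick simulation sketch of the consistency claim $\frac{1}{n}\sum_i \hat U_i^2 Z_iZ_i' \to_p \Omega$, using a simple just-identified scalar IV design chosen purely for illustration (not the assignment's DGP):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
theta0 = 1.5

# Illustrative just-identified IV design with independent N(0,1) error
z = rng.normal(size=n)                 # scalar instrument
x = 0.8 * z + rng.normal(size=n)       # regressor correlated with z
u = rng.normal(size=n)                 # error, E[U|Z] = 0, Var(U) = 1
y = theta0 * x + u

theta_hat = (z @ y) / (z @ x)          # IV estimator
u_hat = y - theta_hat * x              # residuals

omega_hat = np.mean(u_hat**2 * z**2)   # (1/n) sum of Uhat_i^2 Z_i^2
omega_true = 1.0                       # here Omega = Var(U) * E[Z^2] = 1
print(omega_hat, omega_true)
assert abs(omega_hat - omega_true) < 0.05
```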

Note
\[
\sqrt{n}\big(\hat\theta_n - \theta_0\big) = \delta + \Big(\frac{X'Z}{n} A_n \frac{Z'X}{n}\Big)^{-1} \frac{X'Z}{n} A_n \frac{Z'U}{\sqrt{n}},
\]
and by an argument similar to the one leading to equation (4) in Lecture 1, we know
\[
\Big(\frac{X'Z}{n} A_n \frac{Z'X}{n}\Big)^{-1} \frac{X'Z}{n} A_n \frac{Z'U}{\sqrt{n}} \to_d N(0, V(A)).
\]
Thus
\[
n^{1/2}\big(\hat\theta_n - \theta_0\big) \to_d N(\delta, V(A)). \tag{10}
\]
Since $V_n(A_n) \to_p V(A)$, by the Cram\'er convergence theorem
\[
(V_n(A_n))^{-1/2}\, n^{1/2}\big(\hat\theta_n(A_n) - \theta_0\big) \to_d V(A)^{-1/2}\, N(\delta, V(A)) = N\big(V(A)^{-1/2}\delta,\, I_k\big),
\]
and therefore
\[
W_n \to_d \chi^2_k\big(\delta'V(A)^{-1}\delta\big).
\]

Claim: Suppose that $W_n/n \to_p a > 0$ as $n \to \infty$. Then, for any $b > 0$, $P(W_n > b) \to 1$.

Proof. We need to show that for all $\varepsilon > 0$ there is $n_\varepsilon$ such that for all $n \ge n_\varepsilon$, $P(W_n \le b) = P(W_n/n \le b/n) < \varepsilon$. Fix $\varepsilon \in (0, a)$. Since $W_n/n \to_p a$, we can choose $n'_\varepsilon$ such that $P(|W_n/n - a| \ge \varepsilon) < \varepsilon$ for all $n \ge n'_\varepsilon$. Also, since $a > 0$, there is $n''_\varepsilon$ such that $a - b/n \ge \varepsilon$ for all $n \ge n''_\varepsilon$. Next,
\[
P(W_n/n \le b/n) = P(W_n/n - a \le b/n - a) = P(a - W_n/n \ge a - b/n)
\le P\big(|a - W_n/n| \ge a - b/n\big) \le P\big(|a - W_n/n| \ge \varepsilon\big) < \varepsilon
\]
for all $n \ge \max\{n'_\varepsilon, n''_\varepsilon\}$. $\square$

This implies
\[
P\big(W_n > \chi^2_{k,1-\alpha}\big) \to 1. \tag{11}
\]
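The power result $P(W_n > b) \to 1$ under a fixed alternative can be illustrated with a Wald test of a mean (a simplified stand-in, assumed here in place of the GMM setting):

```python
import numpy as np

rng = np.random.default_rng(3)
chi2_1_95 = 3.841458820694124      # chi^2_{1, 0.95} critical value
theta_null = 0.0
theta_true = 0.3                   # fixed (non-local) alternative

def rejection_rate(n, reps=2000):
    """Share of Monte Carlo draws where the Wald test rejects theta = 0."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(theta_true, 1.0, size=n)
        wald = n * (x.mean() - theta_null) ** 2 / x.var(ddof=1)
        rejections += wald > chi2_1_95
    return rejections / reps

# Power increases toward 1 as n grows, matching P(W_n > b) -> 1
rates = [rejection_rate(n) for n in (20, 100, 500)]
print(rates)
assert rates[0] < rates[2] and rates[2] > 0.99
```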

2)
(a) Since $\Omega$ is positive definite, so is $\Omega^{-1}$. Then we can find a matrix square root of $\Omega^{-1}$, $C$, such that $\Omega^{-1} = CC'$. Taking the inverse of both sides, we have $\Omega = (C')^{-1}C^{-1}$.

(b)
\[
J_n(\hat\theta_n) = n\, \bar g_n(\hat\theta_n)'\, \hat\Omega_n^{-1}\, \bar g_n(\hat\theta_n)
= n\, \big(C'\bar g_n(\hat\theta_n)\big)' \big(C'\hat\Omega_n C\big)^{-1} \big(C'\bar g_n(\hat\theta_n)\big),
\]
using $\hat\Omega_n^{-1} = C\big(C'\hat\Omega_n C\big)^{-1}C'$.
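Part (a)'s construction can be checked numerically; the sketch below uses a Cholesky factor as one valid choice of $C$ (any $C$ with $\Omega^{-1} = CC'$ works):

```python
import numpy as np

rng = np.random.default_rng(4)
A_ = rng.normal(size=(4, 4))
omega = A_ @ A_.T + 4 * np.eye(4)        # a positive definite Omega

omega_inv = np.linalg.inv(omega)         # also positive definite
C = np.linalg.cholesky(omega_inv)        # one choice of C with Omega^{-1} = CC'

assert np.allclose(C @ C.T, omega_inv)
# Inverting both sides: Omega = (C')^{-1} C^{-1}
assert np.allclose(omega, np.linalg.inv(C.T) @ np.linalg.inv(C))
# and C' Omega C = I, the normalization used later in part (e)
assert np.allclose(C.T @ omega @ C, np.eye(4))
```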

(c) Use the model $Y = X\theta_0 + U$ to rewrite
\[
\bar g_n(\theta_0) = \frac{1}{n}\big(Z'X\theta_0 + Z'U - Z'X\theta_0\big) = \frac{Z'U}{n}.
\]
Then
\begin{align*}
D_n\, C'\bar g_n(\theta_0)
&= \Big[I_l - C'\frac{Z'X}{n}\Big(\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}\hat\Omega_n^{-1}(C')^{-1}\Big]\, C'\frac{Z'U}{n} \\
&= C'\frac{Z'U}{n} - C'\frac{Z'X}{n}\Big(\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'U}{n} \\
&= C'\frac{Z'U}{n} - C'\frac{Z'X}{n}\big(\hat\theta_n - \theta_0\big) \\
&= C'\frac{1}{n}Z'\big(U - X(\hat\theta_n - \theta_0)\big) \\
&= C'\frac{1}{n}Z'\big(Y - X\theta_0 - X\hat\theta_n + X\theta_0\big) \\
&= C'\frac{1}{n}Z'\big(Y - X\hat\theta_n\big) = C'\bar g_n(\hat\theta_n),
\end{align*}
where the third equality uses $\hat\theta_n - \theta_0 = \big(\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'X}{n}\big)^{-1}\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'U}{n}$.

(d) By the WLLN,
\[
\frac{Z'X}{n} \to_p E(Z_iX_i'),
\]
and by hypothesis $\hat\Omega_n^{-1} \to_p \Omega^{-1} = CC'$; therefore the Slutsky theorem implies
\[
D_n \to_p I_l - C'E(Z_iX_i')\big(E(X_iZ_i')\,CC'\,E(Z_iX_i')\big)^{-1}E(X_iZ_i')\,CC'(C')^{-1}
= I_l - R(R'R)^{-1}R',
\]
where $R = C'E(Z_iX_i')$.

(e) Observe $n^{1/2}\,C'\bar g_n(\theta_0) = C'\,n^{-1/2}Z'U$ and
\[
n^{-1/2}Z'U \to_d N(0, \Omega),
\]
then by the Continuous Mapping Theorem
\[
n^{1/2}\,C'\bar g_n(\theta_0) \to_d N(0, C'\Omega C).
\]
Since $C'\Omega C = I_l$, we have $n^{1/2}\,C'\bar g_n(\theta_0) \to_d N$, where $N \sim N(0, I_l)$.

(f) Rewrite
\[
J_n(\hat\theta_n) = n\big(C'\bar g_n(\hat\theta_n)\big)'\big(C'\hat\Omega_nC\big)^{-1}\big(C'\bar g_n(\hat\theta_n)\big)
= \big(D_n\, n^{1/2}C'\bar g_n(\theta_0)\big)'\big(C'\hat\Omega_nC\big)^{-1}\big(D_n\, n^{1/2}C'\bar g_n(\theta_0)\big).
\]
Since $D_n \to_p I_l - R(R'R)^{-1}R'$, $C'\hat\Omega_nC \to_p C'\Omega C = I_l$, and $n^{1/2}C'\bar g_n(\theta_0) \to_d N$, by the Cram\'er convergence theorem
\[
J_n(\hat\theta_n) \to_d N'\big(I_l - R(R'R)^{-1}R'\big)N,
\]
using that $I_l - R(R'R)^{-1}R'$ is symmetric and idempotent.

(g) That $I_l - R(R'R)^{-1}R'$ is symmetric and idempotent can be established by direct verification. To determine its rank, we make use of the following claim.

Claim 1: If a matrix is symmetric and idempotent, its eigenvalues are either zeros or ones; moreover, its rank equals its trace, so
\[
\operatorname{rank}\big(I_l - R(R'R)^{-1}R'\big) = \operatorname{tr}\big(I_l - R(R'R)^{-1}R'\big).
\]
Therefore
\[
\operatorname{rank}\big(I_l - R(R'R)^{-1}R'\big)
= \operatorname{tr}(I_l) - \operatorname{tr}\big(R(R'R)^{-1}R'\big)
= l - \operatorname{tr}\big(R'R(R'R)^{-1}\big)
= l - \operatorname{tr}(I_k) = l - k,
\]
where the second equality follows from $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ for conformable matrices $A$ and $B$.
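The properties claimed in (g), symmetry, idempotency, and rank $l - k$, are easy to verify numerically for a random full-column-rank $R$ (illustrative dimensions $l = 6$, $k = 2$):

```python
import numpy as np

rng = np.random.default_rng(5)
l, k = 6, 2                                  # more instruments than parameters
R = rng.normal(size=(l, k))                  # full column rank almost surely

M = np.eye(l) - R @ np.linalg.inv(R.T @ R) @ R.T   # I_l - R(R'R)^{-1}R'

assert np.allclose(M, M.T)                   # symmetric
assert np.allclose(M @ M, M)                 # idempotent
assert np.isclose(np.trace(M), l - k)        # trace = l - k
assert np.linalg.matrix_rank(M) == l - k     # rank = trace, as Claim 1 states
# eigenvalues are (numerically) zeros and ones
eig = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(eig[:k], 0) and np.allclose(eig[k:], 1)
```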

(h) Claim 2: If a matrix $A$ is symmetric and idempotent, then $N'AN \sim \chi^2_{\operatorname{rank}(A)}$, where $N \sim N(0, I_l)$. Therefore from (e) and (g) we know
\[
J_n(\hat\theta_n) \to_d \chi^2_{l-k}.
\]
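The limiting $\chi^2_{l-k}$ distribution of the $J$ statistic can be illustrated by simulation. The sketch below uses a homoskedastic linear IV model with $l = 3$ instruments and $k = 1$ parameter (an illustrative DGP, not the assignment's), in which the weight $(Z'Z/n)^{-1}$ is efficient up to scale:

```python
import numpy as np

rng = np.random.default_rng(6)
n, l, reps = 2000, 3, 2000      # l = 3 instruments, k = 1 parameter
theta0 = 1.0

j_stats = []
for _ in range(reps):
    Z = rng.normal(size=(n, l))
    x = Z @ np.full(l, 0.6) + rng.normal(size=n)   # relevant instruments
    u = rng.normal(size=n)                          # valid moment conditions
    y = theta0 * x + u

    W = np.linalg.inv(Z.T @ Z / n)                  # 2SLS weight (efficient here)
    zx, zy = Z.T @ x / n, Z.T @ y / n
    theta_hat = (zx @ W @ zy) / (zx @ W @ zx)

    g = Z.T @ (y - theta_hat * x) / n               # moments at theta_hat
    sigma2 = np.mean((y - theta_hat * x) ** 2)
    omega_inv = np.linalg.inv(sigma2 * Z.T @ Z / n) # Omega_hat^{-1}
    j_stats.append(n * g @ omega_inv @ g)

j_stats = np.array(j_stats)
print("mean J:", j_stats.mean(), " chi2_{l-k} mean =", l - 1)
assert abs(j_stats.mean() - (l - 1)) < 0.2          # df = l - k = 2
```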

Proof of Claim 1. If $A$ is an $l \times l$ real and symmetric matrix, there exists a spectral (eigenvalue) decomposition
\[
A = C\Lambda C',
\]
where $\Lambda$ is a diagonal matrix with the eigenvalues of $A$ (all real, possibly repeated) as its diagonal elements. The columns of $C$, denoted $c_1, c_2, \ldots, c_l$, are the corresponding eigenvectors, orthogonal to each other and normalized to have norm 1; in particular, $C'C = I_l$. Then idempotency $AA = A$ implies
\[
C\Lambda C'C\Lambda C' = C\Lambda^2 C' = C\Lambda C',
\]
so $\Lambda^2 = \Lambda$. Since $\Lambda$ is diagonal, its diagonal elements $\lambda_i$ are either 0 or 1. Let $a_{ii}$ denote the diagonal elements of $A$. From $A = C\Lambda C'$ we can verify
\[
a_{ii} = \sum_{j=1}^{l} c_{ij}^2 \lambda_j.
\]

Thus
\[
\operatorname{tr}(A) = \sum_{i=1}^{l} a_{ii}
= \lambda_1\sum_{i=1}^{l} c_{i1}^2 + \lambda_2\sum_{i=1}^{l} c_{i2}^2 + \cdots + \lambda_l\sum_{i=1}^{l} c_{il}^2
= \lambda_1\|c_1\|^2 + \lambda_2\|c_2\|^2 + \cdots + \lambda_l\|c_l\|^2
= \lambda_1 + \lambda_2 + \cdots + \lambda_l = \operatorname{rank}(A),
\]
where the last equality follows from the facts that the rank of $A$ equals the number of nonzero eigenvalues and that the eigenvalues are either 0 or 1. $\square$
Proof of Claim 2. By the definition of $\chi^2$ random variables, it suffices to show $N'AN = \sum_{j=1}^{\operatorname{rank}(A)} V_j^2$, where the $V_j$ are independent $N(0,1)$ random variables. Let $V = C'N$ and denote its elements by $V_i$. As a linear transformation of a Gaussian vector,
\[
V \sim N(0, C'I_lC) = N(0, I_l).
\]
Then
\[
N'AN = N'C\Lambda C'N = V'\Lambda V = \sum_{i=1}^{l} \lambda_i V_i^2 = \sum_{j=1}^{\operatorname{rank}(A)} V_j^2,
\]
where the last equality follows from the facts that the rank of $A$ equals the number of nonzero eigenvalues and that the eigenvalues are either 0 or 1. $\square$
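Claim 2 can also be checked by simulation: for a symmetric idempotent $A$ of rank $r$, the quadratic form $N'AN$ should match the first two moments of $\chi^2_r$ (a sketch with illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(7)
l, k, reps = 5, 2, 200_000

# A symmetric idempotent matrix of rank l - k (a projection complement)
R = rng.normal(size=(l, k))
A = np.eye(l) - R @ np.linalg.inv(R.T @ R) @ R.T
r = np.linalg.matrix_rank(A)                 # = l - k = 3

N = rng.normal(size=(reps, l))               # draws of N ~ N(0, I_l)
quad = np.einsum("ij,jk,ik->i", N, A, N)     # N'AN for each draw

# Mean and variance match chi^2_r: mean r, variance 2r
print(quad.mean(), quad.var())
assert r == l - k
assert abs(quad.mean() - r) < 0.05
assert abs(quad.var() - 2 * r) < 0.3
```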
