627 10 Assign1
(a)
$$
\|R_{2,n}\| = \Big\|\frac{1}{n}\sum_{i=1}^n \big((\hat\beta_n-\beta)'X_i\big)^2 Z_iZ_i'\Big\|
\le \frac{1}{n}\sum_{i=1}^n \big((\hat\beta_n-\beta)'X_i\big)^2\,\|Z_iZ_i'\|
\quad\text{since } \big((\hat\beta_n-\beta)'X_i\big)^2 \text{ is a scalar,}
$$
$$
\le \frac{1}{n}\sum_{i=1}^n \|\hat\beta_n-\beta\|^2\,\|X_i\|^2\,\|Z_i\|^2
= \|\hat\beta_n-\beta\|^2\,\frac{1}{n}\sum_{i=1}^n \|X_i\|^2\|Z_i\|^2
\quad\text{by the Cauchy-Schwarz inequality.}
$$
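The bound above can be spot-checked numerically. This sketch uses hypothetical values, with a fixed vector `gamma` standing in for $\hat\beta_n-\beta$, and verifies the averaged Frobenius-norm inequality implied by Cauchy-Schwarz:

```python
import numpy as np

# Check: || (1/n) sum (gamma'X_i)^2 Z_i Z_i' ||
#        <= ||gamma||^2 (1/n) sum ||X_i||^2 ||Z_i||^2,
# since (gamma'x)^2 <= ||gamma||^2 ||x||^2 (Cauchy-Schwarz) and ||z z'|| = ||z||^2.
rng = np.random.default_rng(0)
n = 1000
gamma = rng.normal(size=4)          # stands in for beta_hat - beta (hypothetical)
X = rng.normal(size=(n, 4))
Z = rng.normal(size=(n, 3))

lhs = np.linalg.norm(
    sum((gamma @ x) ** 2 * np.outer(z, z) for x, z in zip(X, Z)) / n)
rhs = np.linalg.norm(gamma) ** 2 * np.mean(
    np.sum(X ** 2, axis=1) * np.sum(Z ** 2, axis=1))
assert lhs <= rhs
```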
The assumption $\sum_j E X_{i,j}^4 < \infty$ implies $E\|X_i\|^4 < \infty$:
$$
E\|X_i\|^4 = E\big(\operatorname{tr}(X_iX_i')\big)^2 = E\Big(\sum_{r} X_{i,r}^2\Big)^2
= \sum_{r}\sum_{s} E\,X_{i,r}^2X_{i,s}^2
\le \sum_{r}\sum_{s} \big(E X_{i,r}^4\big)^{1/2}\big(E X_{i,s}^4\big)^{1/2} < \infty,
$$
where the inequality is Cauchy-Schwarz. And similarly $\sum_j E Z_{1,j}^4 < \infty$ implies $E\|Z_1\|^4 < \infty$.
By the Cauchy-Schwarz inequality again,
$$
E\|X_i\|^2\|Z_i\|^2 \le \big(E\|X_i\|^4\big)^{1/2}\big(E\|Z_i\|^4\big)^{1/2} < \infty,
$$
so by the WLLN
$$
\frac{1}{n}\sum_{i=1}^n \|X_i\|^2\|Z_i\|^2 \;\to_p\; E\|X_i\|^2\|Z_i\|^2.
$$
Therefore $\hat\beta_n - \beta \to_p 0$ and Slutsky's Theorem imply $\|\hat\beta_n-\beta\|^2\cdot\frac{1}{n}\sum_{i=1}^n\|X_i\|^2\|Z_i\|^2 \to_p 0$, and hence $R_{2,n}\to_p 0$.

(b) First we will show (8). Since $V_n(A_n)\to_p V(A)$, Slutsky's Theorem gives $(V_n(A_n))^{-1/2}\to_p V(A)^{-1/2}$.
Note that
$$
n^{1/2}\big(\hat\beta_n(A_n) - \beta\big) \to_d N(0, V(A)),
$$
so by the Cramér Convergence Theorem
$$
(V_n(A_n))^{-1/2}\,n^{1/2}\big(\hat\beta_n(A_n) - \beta\big) \to_d V(A)^{-1/2}\,N(0, V(A)) = N(0, I_k).
$$
Consequently
$$
\frac{\sqrt{n}\,(\hat\beta_{n,j}(A_n) - \beta_j)}{\sqrt{[V_n(A_n)]_{jj}}} \to_d N(0,1),
$$
which implies
$$
P\Big(\frac{\sqrt{n}\,(\hat\beta_{n,j}(A_n) - \beta_j)}{\sqrt{[V_n(A_n)]_{jj}}} \le z\Big) \to P(Z\le z) \quad\text{for all } z\in\mathbb{R},
$$
and since $P(|Z|\le z_{1-\alpha/2}) = 1 - \alpha$, (9) follows.
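A quick Monte Carlo sketch of the resulting confidence interval, here for the simplest case of a sample mean and with hypothetical parameter values, illustrates the coverage statement $P(|Z|\le z_{1-\alpha/2}) = 1-\alpha$:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{1 - alpha/2}, about 1.96

n, reps, mu = 200, 2000, 1.0              # hypothetical sample size and truth
covered = 0
for _ in range(reps):
    x = rng.normal(loc=mu, scale=2.0, size=n)
    center, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    covered += (center - z * se <= mu <= center + z * se)
coverage = covered / reps                  # empirical coverage, close to 0.95
print(round(coverage, 3))
```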
We have
$$
\hat\beta_n(A_n) = \Big(\frac{X'Z}{n}A_n\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}A_n\frac{Z'Y}{n}
= \beta_0 + \Big(\frac{X'Z}{n}A_n\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}A_n\frac{Z'U}{n}.
$$
Since $\frac{Z'U}{n}\to_p 0$ and $\frac{X'Z}{n}A_n\frac{Z'X}{n}$ converges in probability by the WLLN and Slutsky's Theorem, $\hat\beta_n(A_n) - \beta_0 \to_p 0$. To show $V_n(A_n)\to_p V(A)$ and (10), the crucial step is to show $\hat\Omega_n(A_n)\to_p\Omega$. As usual we write
$$
\hat\Omega_n(A_n) = \frac{1}{n}\sum_{i=1}^n \hat U_i^2\,Z_iZ_i'.
$$
By arguments similar to those on page 4 of Lecture 1, $R_{1,n}(A_n)$ and $R_{2,n}(A_n)$ converge to zero in probability. Alternatively, we substitute $\hat U_i = U_i - X_i'(\hat\beta_n(A_n)-\beta_0) + X_i'\mu/\sqrt{n}$ into the display above; the terms involving $\hat\beta_n(A_n)-\beta_0$ and $\mu/\sqrt{n}$ vanish in probability, and by the WLLN
$$
\frac{1}{n}\sum_{i=1}^n U_i^2\,Z_iZ_i' \to_p \Omega.
$$
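The WLLN step can also be illustrated by simulation. In this hypothetical design $U_i \sim N(0,1)$ is independent of $Z_i \sim N(0, I_2)$, so $\Omega = E[U_i^2Z_iZ_i'] = I_2$:

```python
import numpy as np

rng = np.random.default_rng(6)

def omega_hat(n):
    """Return the n individual terms U_i^2 Z_i Z_i' for one simulated sample."""
    Z = rng.normal(size=(n, 2))
    U = rng.normal(size=n)                     # U independent of Z in this design
    return (U ** 2)[:, None, None] * np.einsum('ni,nj->nij', Z, Z)

Omega = np.eye(2)                              # true Omega in this design
est = omega_hat(200_000).mean(axis=0)          # (1/n) sum U_i^2 Z_i Z_i'
print(np.abs(est - Omega).max())               # small for large n
```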
Note
$$
\sqrt{n}\,(\hat\beta_n - \beta_0) = \mu + \Big(\frac{X'Z}{n}A_n\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}A_n\frac{Z'U}{\sqrt{n}}.
$$
Thus
$$
n^{1/2}\big(\hat\beta_n - \beta_0\big) \to_d N(\mu, V(A)).
$$
Since $V_n(A_n)\to_p V(A)$, by the Cramér Convergence Theorem
$$
(V_n(A_n))^{-1/2}\,n^{1/2}\big(\hat\beta_n(A_n) - \beta_0\big) \to_d V(A)^{-1/2}\,N(\mu, V(A)) = N\big(V(A)^{-1/2}\mu,\; I_k\big),
$$
and therefore
$$
W_n \to_d \chi^2_k\big(\mu'V(A)^{-1}\mu\big).
$$

Claim: $P(W_n > b) \to 1$.

Proof.
We need to show that for all $\varepsilon > 0$ there is $n_\varepsilon$ such that for all $n \ge n_\varepsilon$, $P(W_n \le b) = P(W_n/n \le b/n) < \varepsilon$. Fix $\varepsilon > 0$; without loss of generality $\varepsilon < a$. Since $W_n/n \to_p a$, we can choose $n_1$ such that $P(|W_n/n - a| \ge \varepsilon) < \varepsilon$ for all $n \ge n_1$. Also, since $a > 0$, there is $n_2$ such that $a - b/n \ge \varepsilon$ for all $n \ge n_2$. Then for $n \ge n_\varepsilon = \max(n_1, n_2)$,
$$
P(W_n/n \le b/n) = P(a - W_n/n \ge a - b/n) \le P(|a - W_n/n| \ge a - b/n) \le P(|a - W_n/n| \ge \varepsilon) < \varepsilon.
$$
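The divergence argument can be mimicked numerically. In this hypothetical setup $W_n = n\cdot\frac{1}{n}\sum_i X_i^2$ with $X_i \sim N(1,1)$, so $W_n/n \to_p a = E[X_i^2] = 2 > 0$, and the probability of falling below a fixed $b$ shrinks with $n$:

```python
import numpy as np

rng = np.random.default_rng(5)
b, reps = 10.0, 2000           # fixed cutoff b and Monte Carlo replications

def prob_below(n):
    """Estimate P(W_n <= b) where W_n = n * mean(X_i^2), X_i ~ N(1, 1)."""
    W = np.array([n * np.mean(rng.normal(1.0, 1.0, size=n) ** 2)
                  for _ in range(reps)])
    return float(np.mean(W <= b))

print(prob_below(5), prob_below(50))   # the second is much smaller
```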
2)
(a) Since $\Omega$ is positive definite, so is $\Omega^{-1}$. Then we can find a matrix square root of $\Omega^{-1}$, that is, a matrix $C$ such that $\Omega^{-1} = CC'$. Taking the inverse of both sides, $\Omega = C'^{-1}C^{-1}$.
(b)
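Numerically, such a $C$ can be obtained from a Cholesky factorization of $\Omega^{-1}$. A sketch with a hypothetical positive definite $\Omega$:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(3, 3))
Omega = B @ B.T + 3 * np.eye(3)        # hypothetical positive definite Omega

Omega_inv = np.linalg.inv(Omega)
C = np.linalg.cholesky(Omega_inv)      # lower triangular, Omega^{-1} = C C'
assert np.allclose(C @ C.T, Omega_inv)

Cinv = np.linalg.inv(C)
assert np.allclose(Cinv.T @ Cinv, Omega)   # Omega = C'^{-1} C^{-1}
```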
$$
J_n(\hat\beta_n) = n\,\bar g_n(\hat\beta_n)'\,\hat\Omega_n^{-1}\,\bar g_n(\hat\beta_n)
= n\,\bar g_n(\hat\beta_n)'\,C\big(C'\hat\Omega_nC\big)^{-1}C'\,\bar g_n(\hat\beta_n)
= \big(n^{1/2}C'\bar g_n(\hat\beta_n)\big)'\big(C'\hat\Omega_nC\big)^{-1}\big(n^{1/2}C'\bar g_n(\hat\beta_n)\big),
$$
where we used the identity $\hat\Omega_n^{-1} = C(C'\hat\Omega_nC)^{-1}C'$, valid for any invertible $C$.
Then, with $\hat\beta_n - \beta_0 = \big(\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'X}{n}\big)^{-1}\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'U}{n}$ and $U = Y - X\beta_0$,
$$
D_n\,C'\bar g_n(\beta_0)
= \Big[I_l - C'\frac{Z'X}{n}\Big(\frac{X'Z}{n}\hat\Omega_n^{-1}\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}\,C\big(C'\hat\Omega_nC\big)^{-1}\Big]\,C'\frac{Z'U}{n}
$$
$$
= C'\frac{Z'U}{n} - C'\frac{Z'X}{n}\big(\hat\beta_n - \beta_0\big)
= C'\frac{1}{n}Z'\big(U - X(\hat\beta_n-\beta_0)\big)
= C'\frac{1}{n}Z'\big(Y - X\hat\beta_n\big)
= C'\bar g_n(\hat\beta_n).
$$
Hence
$$
J_n(\hat\beta_n) = \big(D_n\,n^{1/2}C'\bar g_n(\beta_0)\big)'\big(C'\hat\Omega_nC\big)^{-1}\big(D_n\,n^{1/2}C'\bar g_n(\beta_0)\big),
$$
and, writing $R = C'E(Z_iX_i')$,
$$
D_n \to_p I_l - C'E(Z_iX_i')\big(E(X_iZ_i')CC'E(Z_iX_i')\big)^{-1}E(X_iZ_i')C = I_l - R(R'R)^{-1}R'.
$$
Since $D_n \to_p I_l - R(R'R)^{-1}R'$, $C'\hat\Omega_nC \to_p C'\Omega C = I_l$, and $n^{1/2}C'\bar g_n(\beta_0) \to_d N \sim N(0, I_l)$, the Cramér Convergence Theorem and the idempotency of $I_l - R(R'R)^{-1}R'$ (established in (g) below) give
$$
J_n(\hat\beta_n) \to_d N'\big(I_l - R(R'R)^{-1}R'\big)N.
$$
(g) That $I_l - R(R'R)^{-1}R'$ is symmetric and idempotent can be established by direct verification. To determine its rank, we make use of the following claim.

Claim 1: If a matrix is idempotent, its eigenvalues are either 0 or 1, and consequently its rank equals its trace.

Therefore
$$
\operatorname{rank}\big(I_l - R(R'R)^{-1}R'\big) = \operatorname{tr}\big(I_l - R(R'R)^{-1}R'\big)
= \operatorname{tr}(I_l) - \operatorname{tr}\big(R(R'R)^{-1}R'\big)
= l - \operatorname{tr}\big(R'R(R'R)^{-1}\big)
= l - \operatorname{tr}(I_k) = l - k,
$$
where the third equality follows from $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ for conformable matrices $A$ and $B$.
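These properties are easy to confirm numerically for a hypothetical full-column-rank $l \times k$ matrix $R$:

```python
import numpy as np

rng = np.random.default_rng(3)
l, k = 6, 2                             # hypothetical dimensions
R = rng.normal(size=(l, k))             # full column rank with probability 1

M = np.eye(l) - R @ np.linalg.inv(R.T @ R) @ R.T
assert np.allclose(M, M.T)              # symmetric
assert np.allclose(M @ M, M)            # idempotent
assert np.isclose(np.trace(M), l - k)   # trace = l - k
assert np.linalg.matrix_rank(M) == l - k
```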
(h) Claim 2: If a matrix $A$ is symmetric and idempotent, then $N'AN \sim \chi^2_{\operatorname{rank}(A)}$, where $N \sim N(0, I_l)$. Therefore, from (e) and (g) we know
$$
J_n(\hat\beta_n) \to_d \chi^2_{l-k}.
$$
Proof of Claim 1: If $A$ is an $l\times l$ real symmetric matrix, there exists a spectral (eigenvalue) decomposition
$$
A = C\Lambda C',
$$
where $\Lambda$ is a diagonal matrix whose diagonal elements are the eigenvalues of $A$ (all real, possibly repeated). The columns of $C$, denoted $c_1, c_2, \ldots, c_l$, are the corresponding eigenvectors, orthogonal to one another and normalized to have norm 1; in particular $C'C = I_l$. Idempotency $AA = A$ then implies
$$
C\Lambda C'C\Lambda C' = C\Lambda^2C' = C\Lambda C',
$$
so $\Lambda^2 = \Lambda$. Since $\Lambda$ is diagonal, its diagonal elements $\lambda_i$ are either 0 or 1. Let $a_{ii}$ denote the diagonal elements of $A$. From $A = C\Lambda C'$,
$$
a_{ii} = \sum_{j=1}^{l} \lambda_j\,c_{ij}^2.
$$
Thus
$$
\sum_{i=1}^l a_{ii}
= \sum_{i=1}^l\sum_{j=1}^l \lambda_j\,c_{ij}^2
= \lambda_1\sum_{i=1}^l c_{i1}^2 + \lambda_2\sum_{i=1}^l c_{i2}^2 + \cdots + \lambda_l\sum_{i=1}^l c_{il}^2
= \lambda_1\|c_1\|^2 + \lambda_2\|c_2\|^2 + \cdots + \lambda_l\|c_l\|^2
= \lambda_1 + \lambda_2 + \cdots + \lambda_l = \operatorname{rank}(A),
$$
where the last equality follows from the facts that the rank of $A$ equals the number of nonzero eigenvalues and that the eigenvalues are either 0 or 1.
Proof of Claim 2: By the definition of $\chi^2$ random variables, it suffices to show $N'AN = \sum_{j=1}^{\operatorname{rank}(A)} V_j^2$, where the $V_j$ are independent $N(0,1)$ random variables. Let $V = C'N$ and denote its elements by $V_i$. As a linear transformation of a Gaussian vector,
$$
V \sim N(0, C'I_lC) = N(0, I_l).
$$
Then
$$
N'AN = N'C\Lambda C'N = V'\Lambda V = \sum_{i=1}^l \lambda_iV_i^2 = \sum_{j=1}^{\operatorname{rank}(A)} V_j^2
$$
(relabeling so that the unit eigenvalues come first), where the last equality follows from the facts that the rank of $A$ equals the number of nonzero eigenvalues and that the eigenvalues are either 0 or 1.
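Claim 2 can be checked by Monte Carlo with a hypothetical projection matrix $A = I_l - R(R'R)^{-1}R'$ of rank $l - k = 4$; the quadratic form $N'AN$ should then have the mean of a $\chi^2_4$ variable, namely 4:

```python
import numpy as np

rng = np.random.default_rng(4)
l, k = 6, 2                                        # hypothetical dimensions
R = rng.normal(size=(l, k))
A = np.eye(l) - R @ np.linalg.inv(R.T @ R) @ R.T   # symmetric idempotent, rank l - k

N = rng.normal(size=(20_000, l))                   # draws of N ~ N(0, I_l)
q = np.einsum('ij,jk,ik->i', N, A, N)              # N'AN for each draw
print(round(q.mean(), 2))                          # close to l - k = 4
```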