Quad Form - Normal Dist
Recall the density of the multivariate normal distribution:

f(y; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \, e^{-\frac{1}{2}(y-\mu)'\Sigma^{-1}(y-\mu)}    (1)

For a single normal variable (n = 1) this reduces to

f(y_1; \mu_1, \sigma_1^2) = \frac{1}{(2\pi)^{1/2}(\sigma_1^2)^{1/2}} \, e^{-\frac{1}{2}(y_1-\mu_1)(\sigma_1^2)^{-1}(y_1-\mu_1)}    (2)

= \frac{1}{\sqrt{2\pi\sigma_1^2}} \, e^{-\frac{(y_1-\mu_1)^2}{2\sigma_1^2}}    (3)
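As a quick numerical check of (1), here is a minimal sketch in Python (assuming NumPy and SciPy are available) that evaluates the density formula directly and compares it with scipy.stats.multivariate_normal; the particular μ, Σ, and evaluation point are illustrative choices, not part of the original text.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 3
mu = np.array([1.0, -0.5, 2.0])
T = rng.standard_normal((n, n))
Sigma = T @ T.T + n * np.eye(n)                # a positive definite covariance
y = np.array([0.5, 0.0, 1.5])
d = y - mu
# Density (1): (2 pi)^(-n/2) |Sigma|^(-1/2) exp(-d' Sigma^{-1} d / 2)
f_manual = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / \
           np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
f_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(y)
assert np.isclose(f_manual, f_scipy)           # the two agree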
Consider, as an example, the joint distribution of n independent N(μ, σ²) variables and their sample mean. Let

Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix} \sim N(\mu\iota, \sigma^2 I)    (4)

where ι is an n × 1 vector of ones. Then with A = (1/n, ..., 1/n),

\bar{Y} = AY = \left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) Y \sim N(\mu, \sigma^2/n)

since

A(\mu\iota) = \left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)(\mu, \ldots, \mu)' = \mu    (5)

and

A(\sigma^2 I)A' = \left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)\sigma^2 I \left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)' = \frac{n\sigma^2}{n^2}    (6)

= \frac{\sigma^2}{n}    (7)
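The algebra in (5)-(7) is easy to verify numerically. A minimal sketch, assuming NumPy; the values of n, μ, and σ² are arbitrary illustrative choices.

import numpy as np

n, mu, sigma2 = 5, 2.0, 3.0
A = np.full((1, n), 1.0 / n)                   # the averaging matrix (1/n, ..., 1/n)
mean_Ybar = (A @ (mu * np.ones(n))).item()     # A(mu iota) = mu
var_Ybar = (A @ (sigma2 * np.eye(n)) @ A.T).item()  # A(sigma^2 I)A' = sigma^2 / n
assert np.isclose(mean_Ybar, mu)
assert np.isclose(var_Ybar, sigma2 / n)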
Since M is symmetric there exists an orthogonal matrix Q, whose columns are the characteristic vectors of M, such that

Q'MQ = \Lambda = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}    (8)

where the λᵢ are the characteristic roots of M.
Furthermore, since M is idempotent all these roots are either zero or one. Thus we can choose Q so that Λ will look like

Q'MQ = \Lambda = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix}    (9)
The dimension of the identity matrix will be equal to the rank of M, since the number of non-zero roots is the rank of the matrix. Since the sum of the roots is equal to the trace, the dimension is also equal to the trace of M. Now let v = Q'y and compute the moments of v:

E(v) = Q'E(y) = 0
var(v) = Q'(\sigma^2 I)Q = \sigma^2 Q'Q = \sigma^2 I    (10)

since Q is orthogonal. Thus v ~ N(0, σ²I).
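A concrete illustration of (8)-(10), using the centering matrix I - (1/n)J as the idempotent M (an illustrative choice). np.linalg.eigh returns an orthonormal Q with Q'MQ diagonal; for an idempotent M the roots are zeros and ones (here in ascending order, so the identity block of (9) would appear last unless the columns of Q are reordered), and their sum equals tr M.

import numpy as np

n = 5
M = np.eye(n) - np.ones((n, n)) / n            # symmetric and idempotent
lam, Q = np.linalg.eigh(M)                     # characteristic roots and vectors
print(np.round(lam, 10))                       # one zero and n-1 ones
assert np.allclose(Q.T @ M @ Q, np.diag(lam), atol=1e-10)
assert np.isclose(lam.sum(), np.trace(M))      # number of unit roots = tr M
assert np.allclose(Q.T @ Q, np.eye(n), atol=1e-10)  # orthogonality, so var(Q'y) = sigma^2 I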
Now consider the distribution of y'My using the transformation v. Since Q is orthogonal, its inverse is equal to its transpose. This means that y = (Q')^{-1}v = Qv. Now write the quadratic form as follows

\frac{y'My}{\sigma^2} = \frac{v'Q'MQv}{\sigma^2} = \frac{1}{\sigma^2}\, v' \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} v = \frac{1}{\sigma^2} \sum_{i=1}^{\operatorname{tr} M} v_i^2    (11)
This is the sum of squares of (tr M) standard normal variables (each vᵢ/σ is standard normal) and so is a χ² variable with tr M degrees of freedom.
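A simulation sketch of this result, assuming NumPy; M is again the centering matrix, an illustrative choice with tr M = n - 1.

import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps = 6, 2.0, 200_000
M = np.eye(n) - np.ones((n, n)) / n            # idempotent with tr M = n - 1
r = round(np.trace(M))
y = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
q = np.einsum('ij,jk,ik->i', y, M, y) / sigma2 # y'My / sigma^2 for each draw
# A chi-square(r) variable has mean r and variance 2r.
print(q.mean(), r)                             # both close to 5
print(q.var(), 2 * r)                          # both close to 10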
Let r denote the dimension of the identity matrix, which is equal to the rank of M. Thus

r = tr M    (12)
Let v = Q'y and partition v as follows

v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \qquad v_1 = \begin{pmatrix} v_1 \\ \vdots \\ v_r \end{pmatrix}, \quad v_2 = \begin{pmatrix} v_{r+1} \\ \vdots \\ v_n \end{pmatrix}    (13)

Then write the quadratic form y'My in terms of v:

y'My = v'Q'MQv = v' \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} v = v_1'v_1    (14)
Now consider the product of L and Q, which we denote C. Partition C as (C₁, C₂), where C₁ has k rows and r columns and C₂ has k rows and n - r columns. Now consider the following product

C(Q'MQ) = LQQ'MQ, since C = LQ    (15)
= LMQ = 0, since LM = 0 by assumption    (16)

But this same product can be written as

C(Q'MQ) = (C_1, C_2)\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} = (C_1, 0)    (17)

so that C₁ = 0. It follows that

Ly = LQQ'y = Cv = C_1v_1 + C_2v_2 = C_2v_2    (18)
Now note that Ly depends only on v₂, and y'My depends only on v₁. But since v₁ and v₂ are independent, so are Ly and y'My.
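The classic instance of this result is the independence of the sample mean and the sum of squared deviations. A simulation sketch, assuming NumPy, with L = (1/n)ι' and M the centering matrix, so that LM = 0.

import numpy as np

rng = np.random.default_rng(2)
n, reps = 8, 200_000
L = np.full(n, 1.0 / n)
M = np.eye(n) - np.ones((n, n)) / n
assert np.allclose(L @ M, 0)                   # LM = 0 by construction
y = rng.standard_normal((reps, n))
Ly = y @ L                                     # the sample means
q = np.einsum('ij,jk,ik->i', y, M, y)          # y'My for each draw
print(np.corrcoef(Ly, q)[0, 1])                # close to 0, as independence implies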
2.5. Quadratic Form Theorem 5.
Theorem 5. Let the n × 1 vector y ~ N(0, I), let A be an n × n idempotent matrix of rank m, let B be an n × n idempotent matrix of rank s, and suppose BA = 0. Then y'Ay and y'By are independently distributed χ² variables.
Proof: By Theorem 3 both quadratic forms are distributed as chi-square variables. We need only demonstrate their independence. Define the matrix Q as before so that

Q'AQ = \Lambda = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}    (19)

where r = m is the rank of A.
Let v = Q'y and partition v as

v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \qquad v_1 = \begin{pmatrix} v_1 \\ \vdots \\ v_r \end{pmatrix}, \quad v_2 = \begin{pmatrix} v_{r+1} \\ \vdots \\ v_n \end{pmatrix}
Now form the quadratic form y'Ay and note that

y'Ay = v'Q'AQv = v' \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} v = v_1'v_1    (20)

Now define G = Q'BQ and partition it conformably as

G = \begin{pmatrix} G_1 & G_2 \\ G_2' & G_3 \end{pmatrix}    (21)
Since BA = 0 implies G(Q'AQ) = Q'BQQ'AQ = Q'BAQ = 0, we have

G(Q'AQ) = \begin{pmatrix} G_1 & G_2 \\ G_2' & G_3 \end{pmatrix}\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} G_1 & 0 \\ G_2' & 0 \end{pmatrix} = \begin{pmatrix} 0_r & 0 \\ 0 & 0 \end{pmatrix}    (22)

so that G₁ = 0 and G₂' = 0, and hence G₂ = 0 since G is symmetric. Therefore

G = Q'BQ = \begin{pmatrix} 0 & 0 \\ 0 & G_3 \end{pmatrix}    (23)

and the second quadratic form becomes

y'By = v'Q'BQv = (v_1', v_2')\begin{pmatrix} 0 & 0 \\ 0 & G_3 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = v_2'G_3v_2    (24)
It is now obvious that y'Ay can be written in terms of the first r terms of v, while y'By can be written in terms of the last n - r terms of v. Since the v's are independent the result follows.
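A simulation sketch of Theorem 5, assuming NumPy. The matrices A and B below are complementary orthogonal projections (idempotent with BA = 0), an illustrative construction.

import numpy as np

rng = np.random.default_rng(3)
n, m, reps = 6, 2, 200_000
X = rng.standard_normal((n, m))
A = X @ np.linalg.inv(X.T @ X) @ X.T           # projection onto col(X), rank m
B = np.eye(n) - A                              # complementary projection, rank n - m
assert np.allclose(B @ A, 0)
y = rng.standard_normal((reps, n))
q1 = np.einsum('ij,jk,ik->i', y, A, y)
q2 = np.einsum('ij,jk,ik->i', y, B, y)
print(q1.mean(), m)                            # chi-square(m) has mean m
print(q2.mean(), n - m)                        # chi-square(n-m) has mean n - m
print(np.corrcoef(q1, q2)[0, 1])               # close to 0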
2.6. Quadratic Form Theorem 6 (Craig's Theorem).
Theorem 6. If y ~ N(μ, Σ) where Σ is positive definite, then q₁ = y'Ay and q₂ = y'By are independently distributed if and only if AΣB = 0.
Proof of sufficiency:
This is just a generalization of Theorem 5. Since Σ is a covariance matrix of full rank it is positive definite and can be factored as Σ = TT'. Therefore the condition AΣB = 0 can be written ATT'B = 0. Now pre-multiply this expression by T' and post-multiply by T to obtain T'ATT'BT = 0. Now define C = T'AT and K = T'BT and note that if AΣB = 0, then

CK = (T'AT)(T'BT) = T'A\Sigma BT = T'0T = 0    (25)
Since C and K are symmetric matrices satisfying CK = 0 (and hence KC = (CK)' = 0), they can be diagonalized by the same orthogonal matrix Q, with their non-zero roots falling in disjoint blocks:

Q'CQ = \begin{pmatrix} D_1 & 0 \\ 0 & 0 \end{pmatrix}    (26)

Q'KQ = \begin{pmatrix} 0 & 0 \\ 0 & D_2 \end{pmatrix}    (27)

where D₁ is an n₁ × n₁ diagonal matrix and D₂ is an (n - n₁) × (n - n₁) diagonal matrix.
Now define v = Q'T^{-1}y. It is then distributed as a normal variable with expected value and variance given by

E(v) = Q'T^{-1}\mu
var(v) = Q'T^{-1}\Sigma(T^{-1})'Q = Q'T^{-1}TT'(T^{-1})'Q = Q'Q = I    (28)

Now write q₁ = y'Ay in terms of v. Since y = TQv,

q_1 = y'Ay = v'Q'T'ATQv = v'Q'CQv = v_1'D_1v_1    (29)
Similarly we can define y'By in terms of v as

q_2 = y'By = v'Q'T'BTQv = v'Q'T'\big((T')^{-1}KT^{-1}\big)TQv = v'Q'KQv = v_2'D_2v_2    (30)
Thus q₁ = y'Ay is defined in terms of the first n₁ elements of v, and q₂ = y'By is defined in terms of the last n - n₁ elements of v, and so they are independent.
The proof of necessity is difficult and has a long history [2], [3].
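The sufficiency direction can be checked numerically. In the sketch below (assuming NumPy) A = uu' and B = vv' with u'Σv = 0, so that AΣB = u(u'Σv)v' = 0; the construction of u and v is an illustrative choice, not part of the proof above.

import numpy as np

rng = np.random.default_rng(4)
n, reps = 4, 200_000
T = rng.standard_normal((n, n))
Sigma = T @ T.T + n * np.eye(n)                # positive definite covariance
u = rng.standard_normal(n)
w = rng.standard_normal(n)
v = w - u * (u @ Sigma @ w) / (u @ Sigma @ u)  # enforce u'Sigma v = 0
A, B = np.outer(u, u), np.outer(v, v)
assert np.allclose(A @ Sigma @ B, 0, atol=1e-8)
y = rng.multivariate_normal(np.zeros(n), Sigma, size=reps)
q1 = (y @ u) ** 2                              # y'Ay = (u'y)^2
q2 = (y @ v) ** 2                              # y'By = (v'y)^2
print(np.corrcoef(q1, q2)[0, 1])               # close to 0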
2.7. Quadratic Form Theorem 7.
Theorem 7. Let y ~ N(μ, Σ) where Σ is an n × n positive definite matrix. Then (y - μ)'Σ⁻¹(y - μ) is distributed as χ² with n degrees of freedom.
Proof: Since Σ is symmetric there is an orthogonal matrix Q such that

Q'\Sigma Q = \Lambda = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}    (31)

where the characteristic roots λᵢ of Σ are all positive. Inverting,

\Lambda^{-1} = \begin{pmatrix} \lambda_1^{-1} & 0 & \cdots & 0 \\ 0 & \lambda_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^{-1} \end{pmatrix}    (32)

and

\Sigma^{-1} = Q\Lambda^{-1}Q'    (33)

The last equality follows from the definition Σ = QΛQ' after taking the inverse of both sides, remembering that the inverse of an orthogonal matrix is equal to its transpose.
Now define H = QΛ^{-1/2}Q' and let ε = y - μ, so that ε ~ N(0, Σ). Furthermore it is obvious that

H\Sigma H' = Q\Lambda^{-1/2}Q' \,\Sigma\, Q\Lambda^{-1/2}Q' = Q\Lambda^{-1/2}Q' \, Q\Lambda Q' \, Q\Lambda^{-1/2}Q' = QQ' = I    (34)

Now let z = Hε and compute its moments:

E(z) = HE(\varepsilon) = 0
var(z) = H\,var(\varepsilon)H' = H\Sigma H' = I    (35)

so z ~ N(0, I). Finally, since H'H = QΛ⁻¹Q' = Σ⁻¹,

(y - \mu)'\Sigma^{-1}(y - \mu) = \varepsilon'H'H\varepsilon = z'z    (36)

which is the sum of squares of n independent standard normal variables and so is distributed as χ² with n degrees of freedom.
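A simulation sketch of Theorem 7, assuming NumPy; μ and Σ are arbitrary illustrative choices. The quadratic form should have the mean n and variance 2n of a χ² variable with n degrees of freedom.

import numpy as np

rng = np.random.default_rng(5)
n, reps = 4, 200_000
mu = rng.standard_normal(n)
T = rng.standard_normal((n, n))
Sigma = T @ T.T + n * np.eye(n)                # positive definite covariance
Sinv = np.linalg.inv(Sigma)
y = rng.multivariate_normal(mu, Sigma, size=reps)
eps = y - mu
q = np.einsum('ij,jk,ik->i', eps, Sinv, eps)   # (y-mu)' Sigma^{-1} (y-mu)
print(q.mean(), n)                             # both close to n
print(q.var(), 2 * n)                          # both close to 2n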
2.8. Quadratic Form Theorem 8. Let y ~ N(0, I). Let M be a non-random idempotent matrix of dimension n × n (rank(M) = r ≤ n). Let A be a non-random matrix such that AM = 0. Let t₁ = My and let t₂ = Ay. Then t₁ and t₂ are independent random vectors.
Proof: Since M is symmetric and idempotent it can be diagonalized using an orthonormal matrix Q as before.
Q'MQ = \Lambda = \begin{pmatrix} I_{r \times r} & 0_{r \times (n-r)} \\ 0_{(n-r) \times r} & 0_{(n-r) \times (n-r)} \end{pmatrix}    (37)
Further note that since Q is orthogonal, M = QΛQ'. Now partition Q as Q = (Q₁, Q₂) where Q₁ is n × r. Now use the fact that Q is orthonormal to obtain the following identities

QQ' = (Q_1, Q_2)\begin{pmatrix} Q_1' \\ Q_2' \end{pmatrix} = Q_1Q_1' + Q_2Q_2' = I_n    (38)

Q'Q = \begin{pmatrix} Q_1' \\ Q_2' \end{pmatrix}(Q_1, Q_2) = \begin{pmatrix} Q_1'Q_1 & Q_1'Q_2 \\ Q_2'Q_1 & Q_2'Q_2 \end{pmatrix} = \begin{pmatrix} I_r & 0 \\ 0 & I_{n-r} \end{pmatrix}

Q\Lambda = (Q_1, Q_2)\begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} = (Q_1, 0)    (39)
Now compute M as

M = Q\Lambda Q' = (Q_1, Q_2)\begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} Q_1' \\ Q_2' \end{pmatrix} = Q_1Q_1'    (40)
Now let z₁ = Q₁'y and let z₂ = Q₂'y. Note that

z = (z_1', z_2')' = Q'y

is standard normal since E(z) = 0 and var(z) = Q'IQ = I. Furthermore z₁ and z₂ are independent. Now consider t₁ = My. Rewrite this using (40) as

t_1 = My = Q_1Q_1'y = Q_1z_1
Thus t₁ depends only on z₁. Now let the matrix

N = I - M = Q_2Q_2'

from (38) and (40). Now notice that

AN = A(I - M) = A - AM = A

since AM = 0. Now consider t₂ = Ay. Replace A with AN to obtain

t_2 = Ay = ANy = A(Q_2Q_2')y = AQ_2(Q_2'y) = AQ_2z_2    (41)

Now t₁ depends only on z₁ and t₂ depends only on z₂, and since the z's are independent the t's are also independent.
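A sketch of Theorem 8 with concrete (illustrative) choices: M is the centering matrix and A = (1/n)J, so that AM = 0, t₁ = My is the vector of deviations from the mean, and t₂ = Ay repeats the sample mean; their sample cross-covariance should vanish.

import numpy as np

rng = np.random.default_rng(6)
n, reps = 5, 100_000
M = np.eye(n) - np.ones((n, n)) / n
A = np.ones((n, n)) / n
assert np.allclose(A @ M, 0)                   # AM = 0 by construction
y = rng.standard_normal((reps, n))
t1 = y @ M.T                                   # each row is M y
t2 = y @ A.T                                   # each row is A y
cross = (t1 - t1.mean(0)).T @ (t2 - t2.mean(0)) / reps
print(np.abs(cross).max())                     # close to 0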
References
[1] Cramér, H. Mathematical Methods of Statistics. Princeton: Princeton University Press, 1946.
[2] Driscoll, M. F. and W. R. Grundberg. "A History of the Development of Craig's Theorem." The American Statistician 40 (1986): 65-69.
[3] Driscoll, M. F. and B. Krasnicka. "An Accessible Proof of Craig's Theorem in the General Case." The American Statistician 49 (1995): 59-61.
[4] Goldberger, A. S. Econometric Theory. New York: Wiley, 1964.
[5] Goldberger, A. S. A Course in Econometrics. Cambridge: Harvard University Press, 1991.
[6] Hocking, R. R. The Analysis of Linear Models. Monterey: Brooks/Cole, 1985.
[7] Hocking, R. R. Methods and Applications of Linear Models. New York: Wiley, 1996.
[8] Rao, C. R. Linear Statistical Inference and Its Applications. 2nd edition. New York: Wiley, 1973.
[9] Schott, J. R. Matrix Analysis for Statistics. New York: Wiley, 1997.