The pdf of X ~ N_p(μ, Σ) is given by:

f(x) = |2πΣ|^{-1/2} exp{ -½ (x - μ)ᵀ Σ^{-1} (x - μ) }

with E(X) = μ and Var(X) = Σ.

MVA: Humboldt-Universität zu Berlin
Theorem
If X ~ N_p(μ, Σ), then the Mahalanobis transformation Y = Σ^{-1/2}(X - μ) is N_p(0, I_p) and

Yᵀ Y = (X - μ)ᵀ Σ^{-1} (X - μ) ~ χ²_p.
Corollary
Let X = (X1ᵀ, X2ᵀ)ᵀ ~ N_p(μ, Σ). Then Σ12 = 0 if and only if X1 is independent of X2.
The independence of two linear transforms of a multinormal X can be shown via the following corollary.

Corollary
If X ~ N_p(μ, Σ) and A, B are matrices, then AX and BX are independent if and only if A Σ Bᵀ = 0.
Theorem
If X ~ N_p(μ, Σ), A is a (q × p) matrix and c ∈ ℝ^q, then Y = AX + c is again normal:

Y ~ N_q(Aμ + c, A Σ Aᵀ).
Theorem
The conditional distribution of X2 given X1 = x1 is normal with mean μ2 + Σ21 Σ11^{-1} (x1 - μ1) and covariance Σ22.1, i.e.,

(X2 | X1 = x1) ~ N_{p-r}(μ2 + Σ21 Σ11^{-1} (x1 - μ1), Σ22.1),

where Σ22.1 = Σ22 - Σ21 Σ11^{-1} Σ12.
Example
Suppose p = 2, r = 1, μ = (0, 0)ᵀ and

Σ = ( 1    0.8
      0.8  2   ).

Then Σ11 = 1, Σ21 = 0.8 and

Σ22.1 = Σ22 - Σ21 Σ11^{-1} Σ12 = 2 - (0.8)² = 1.36.

The marginal density of X1 is f(x1) = (2π)^{-1/2} exp(-x1²/2), and the conditional density of X2 given X1 = x1 is

f(x2 | x1) = (2π · 1.36)^{-1/2} exp( -(x2 - 0.8 x1)² / (2 · 1.36) ).

MVAcondnorm.xpl
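The conditional moments in this example are easy to verify numerically. The slide points to the XploRe quantlet MVAcondnorm.xpl; the following is a Python sketch of the same computation, not the original quantlet:

```python
import numpy as np

# Parameters from the example: mu = (0, 0)', Sigma = [[1, 0.8], [0.8, 2]]
mu = np.zeros(2)
sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])

s11, s12, s22 = sigma[0, 0], sigma[0, 1], sigma[1, 1]

def cond_mean(x1):
    """E(X2 | X1 = x1) = mu2 + s21 s11^{-1} (x1 - mu1) = 0.8 x1 here."""
    return mu[1] + s12 / s11 * (x1 - mu[0])

cond_var = s22 - s12 ** 2 / s11   # Sigma_22.1 = 2 - 0.8^2 = 1.36

def f_cond(x2, x1):
    """Conditional density f(x2 | x1) of X2 given X1 = x1."""
    return np.exp(-(x2 - cond_mean(x1)) ** 2 / (2 * cond_var)) \
        / np.sqrt(2 * np.pi * cond_var)

print(cond_var)        # approximately 1.36
print(cond_mean(1.0))  # approximately 0.8
```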
Theorem
If X1 ~ N_r(μ1, Σ11) and (X2 | X1 = x1) ~ N_{p-r}(A x1 + b, Ω), where Ω does not depend on x1, then

X = ( X1
      X2 ) ~ N_p(μ, Σ),

where

μ = ( μ1
      Aμ1 + b )

and

Σ = ( Σ11     Σ11 Aᵀ
      A Σ11   Ω + A Σ11 Aᵀ ).
Let X1 ∈ ℝ^r and X2 ∈ ℝ^{p-r}, with

Σ = ( Σ11  Σ12
      Σ21  Σ22 ).

The conditional expectation

E(X2 | X1) = μ2 + Σ21 Σ11^{-1} (X1 - μ1)

is the best MSE approximation of X2 by a function of X1: the best approximation is linear! The covariance decomposes as

Σ22 = Σ21 Σ11^{-1} Σ12 + Σ22.1.
Consider the special case X2 ∈ ℝ, i.e., r = p - 1. Then β = Σ21 Σ11^{-1} is a row vector of dimension (1 × r) and

X2 = β0 + β X1 + U.

This means that the best MSE approximation of X2 by a function of X1 is a straight line. The variance of X2 can be decomposed as

σ22 = Σ21 Σ11^{-1} Σ12 + σ22.1.

The multiple correlation coefficient between X2 and the r variables X1 is

ρ²_{2.1...r} = Σ21 Σ11^{-1} Σ12 / σ22.
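These quantities can be computed mechanically from any partitioned covariance matrix. A minimal sketch; the 3 × 3 matrix below is invented purely for illustration, with the last component playing the role of X2:

```python
import numpy as np

# Hypothetical covariance of (X1a, X1b, X2); the last variable is X2
sigma = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 1.5],
                  [2.0, 1.5, 5.0]])

s11 = sigma[:2, :2]   # Cov(X1)
s12 = sigma[:2, 2]    # Cov(X1, X2)
s22 = sigma[2, 2]     # Var(X2)

beta = np.linalg.solve(s11, s12)   # regression coefficients Sigma_21 Sigma_11^{-1}
explained = s12 @ beta             # Sigma_21 Sigma_11^{-1} Sigma_12
rho2 = explained / s22             # multiple correlation coefficient rho^2
s22_1 = s22 - explained            # conditional variance sigma_22.1

# Variance decomposition: sigma_22 = explained part + residual part
print(beta, rho2, s22_1)
```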
Example
Consider X1 (sales), X2 (price), X3 (advertisement) and X4 (assistant hours), with sample means x̄1 = 172.7, x̄2 = 104.6 and sample variance s11 = 1037.21.
The conditional distribution of X1 given (X2, X3, X4) is univariate normal with conditional variance

σ11.2 = σ11 - σ12 Σ22^{-1} σ21 = 96.761

and multiple correlation coefficient

ρ²_{1.234} = σ12 Σ22^{-1} σ21 / σ11 = 0.907.
The conditional distribution of (X1, X2) given (X3, X4) is bivariate normal with mean

( μ1 )   ( σ13  σ14 ) ( σ33  σ34 )^{-1} ( x3 - μ3 )
( μ2 ) + ( σ23  σ24 ) ( σ43  σ44 )      ( x4 - μ4 )

and covariance

( σ11  σ12 )   ( σ13  σ14 ) ( σ33  σ34 )^{-1} ( σ31  σ32 )   (  104.006   -33.574 )
( σ21  σ22 ) - ( σ23  σ24 ) ( σ43  σ44 )      ( σ41  σ42 ) = (  -33.574   155.592 ).

The conditional mean of X1, for instance, has intercept 32.516. This conditional covariance describes the dependence between X1 and X2 after adjusting for X3 and X4.
For the Mahalanobis transformation Y = Σ^{-1/2}(X - μ) it holds that

Yᵀ Y = (X - μ)ᵀ Σ^{-1} (X - μ) ~ χ²_p.

Y is a random vector and Yᵀ Y is a scalar. Yᵀ Y can be used for testing (assuming that Σ is known). Normally, we do not know Σ.
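The χ²_p statement can be checked by simulation. A sketch, assuming Σ known (mean, covariance and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 100_000

mu = np.array([1.0, -2.0, 0.5])
sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

x = rng.multivariate_normal(mu, sigma, size=n)

# Mahalanobis statistic Y'Y = (X - mu)' Sigma^{-1} (X - mu), one value per row
sigma_inv = np.linalg.inv(sigma)
d2 = np.einsum('ij,jk,ik->i', x - mu, sigma_inv, x - mu)

# chi^2_p has mean p and variance 2p
print(d2.mean())   # close to 3
print(d2.var())    # close to 6
```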
AX and BX are independent if and only if A Σ Bᵀ = 0. The marginals of X = (X1ᵀ, X2ᵀ)ᵀ ~ N_p(μ, Σ) are again normal: X1 ~ N_r(μ1, Σ11) and X2 ~ N_{p-r}(μ2, Σ22).
X1 is independent of X2 if and only if Σ12 = 0. The conditional expectation E(X2 | X1) is a linear function of X1 for X ~ N_p(μ, Σ). The multiple correlation coefficient is defined as

ρ²_{2.1...r} = Σ21 Σ11^{-1} Σ12 / σ22.

It is the percentage of the variance of X2 explained by the linear approximation β0 + β X1.
Let X ~ N_p(μ, Σ) with μ = 0 and let X(n × p) be the corresponding data matrix. Then

M(p × p) = Xᵀ X ~ W_p(Σ, n),

the Wishart distribution with scale matrix Σ and n degrees of freedom.

Example
For p = 1 and X = (x1, ..., xn)ᵀ with xi ~ N_1(0, σ²),

M = Xᵀ X = ∑_{i=1}^n xi² ~ σ² χ²_n.
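The p = 1 case is easy to simulate; a sketch (sample sizes and σ² are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps = 5, 2.0, 200_000

# M = X'X = sum of x_i^2 with x_i ~ N(0, sigma^2): W_1(sigma^2, n) = sigma^2 chi^2_n
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
m = (x ** 2).sum(axis=1)

# sigma^2 chi^2_n has mean sigma^2 * n and variance sigma^4 * 2n
print(m.mean())   # close to sigma2 * n = 10
print(m.var())    # close to sigma2**2 * 2 * n = 40
```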
Theorem
If M ~ W_p(Σ, n) and B is a (p × q) matrix, then Bᵀ M B ~ W_q(Bᵀ Σ B, n). In particular, with B = Σ^{-1/2}, the distribution of Σ^{-1/2} M Σ^{-1/2} is W_p(I, n).
Theorem
Let X(n × p) be a data matrix from N_p(μ, Σ). Then the sample mean x̄ and the sample covariance matrix S are independent, and

nS = Xᵀ H X ~ W_p(Σ, n - 1),

where H = I_n - n^{-1} 1_n 1_nᵀ is the centering matrix.
The Wishart distribution is the multivariate generalisation of the χ²-distribution. In particular,

W1(σ², n) = σ² χ²_n.

The sample covariance matrix S has a n^{-1} W_p(Σ, n - 1) distribution.
Hotelling's T²-Distribution
Hotelling's T² is a generalisation of Student's t-distribution and is linked to the F-distribution:

T²(p, n) = ( n p / (n - p + 1) ) F_{p, n-p+1}.
Summary: Hotelling's T²-Distribution
(n - 1)(x̄ - μ)ᵀ S^{-1} (x̄ - μ) has a T²(p, n - 1) distribution.
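Via the F-relation, this statistic yields a test for H0: μ = μ0 with unknown Σ. A sketch of the one-sample case; the data are simulated under H0, and Su here denotes the unbiased (divisor n − 1) covariance, which absorbs the factor (n − 1)/n of the divisor-n convention used on the slides:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 50, 3
mu0 = np.zeros(p)

x = rng.multivariate_normal(mu0, np.eye(p), size=n)
xbar = x.mean(axis=0)
su = np.cov(x, rowvar=False)   # unbiased sample covariance (divisor n - 1)

# With S the divisor-n covariance, (n-1)(xbar-mu0)' S^{-1} (xbar-mu0)
# equals n (xbar-mu0)' Su^{-1} (xbar-mu0); both are T^2(p, n-1) under H0.
t2 = n * (xbar - mu0) @ np.linalg.solve(su, xbar - mu0)

# T^2(p, n-1) = (n-1) p / (n-p) * F_{p, n-p}
f_stat = t2 * (n - p) / (p * (n - 1))
p_value = stats.f.sf(f_stat, p, n - p)
print(t2, p_value)
```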
Definition
A (p × 1) random vector Y is said to have a spherical distribution S_p(φ) if its characteristic function ψ_Y(t) satisfies

ψ_Y(t) = φ(tᵀ t)

for some scalar function φ(·), the characteristic generator of S_p(φ). We will write Y ~ S_p(φ).
1. All marginal distributions of a spherically distributed random vector are spherical.
2. All the marginal characteristic functions have the same generator.
Theorem
Let X ~ S_p(φ). Then X has the same distribution as r u^(p), where u^(p) is a random vector distributed uniformly on the unit sphere surface in ℝ^p and r ≥ 0 is a random variable independent of u^(p). If E(r²) < ∞, then

E(X) = 0,    Cov(X) = ( E(r²)/p ) I_p.
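This representation doubles as a sampling recipe. A sketch, where the distribution of r is an arbitrary choice (r² ~ Exp(1), so E(r²) = 1 and Cov(X) should be I_p/p):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 3, 200_000

# u^(p): uniform on the unit sphere (normalize standard normal vectors)
g = rng.standard_normal((n, p))
u = g / np.linalg.norm(g, axis=1, keepdims=True)

# r >= 0 independent of u; any nonnegative radial law gives a spherical X.
# Here r^2 ~ Exp(1), so E(r^2) = 1.
r = np.sqrt(rng.exponential(1.0, size=n))
x = r[:, None] * u

cov = np.cov(x, rowvar=False)
# Expect E(X) = 0 and Cov(X) = (E(r^2)/p) I_p = (1/3) I_3
print(x.mean(axis=0))
print(np.round(cov, 3))
```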
Definition
A (p × 1) random vector X is said to have an elliptical distribution with parameters μ (p × 1) and Σ (p × p) if X has the same distribution as μ + Aᵀ Y, where Y ~ S_k(φ) and A is a (k × p) matrix such that Aᵀ A = Σ with rank(Σ) = k. We shall write X ~ EC_p(μ, Σ, φ).
Example
Let Z ~ N_p(0, I_p) and s² ~ χ²_m be independent. Then

Y = √m Z / s

has a multivariate t-distribution with m degrees of freedom. Moreover, the multivariate t-distribution belongs to the family of p-dimensional spherical distributions.
1. Any linear combination of elliptically distributed variables is elliptical.
2. Marginal distributions of elliptically distributed variables are elliptical.
Theorem
1. A scalar function φ(·) can determine an elliptical distribution EC_p(μ, Σ, φ) for every μ ∈ ℝ^p and Σ ≥ 0 with rank(Σ) = k iff φ(tᵀ t) is a p-dimensional characteristic function.
2. Assume that X is nondegenerate. If X ~ EC_p(μ, Σ, φ) and X ~ EC_p(μ*, Σ*, φ*), then there exists a constant c > 0 such that

μ* = μ,   Σ* = c Σ,   φ*(·) = φ(c^{-1} ·).

In other words Σ, φ, A are not unique, unless we impose the condition that det(Σ) = 1.
Theorem
1. X ~ EC_p(μ, Σ, φ) with rank(Σ) = k iff X has the same distribution as

μ + r Aᵀ u^(k)     (1)

where r ≥ 0 is independent of u^(k), which is a random vector distributed uniformly on the unit sphere surface in ℝ^k, and A is a (k × p) matrix such that Aᵀ A = Σ.
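This stochastic representation gives a generic sampler for elliptical laws. A sketch; μ, Σ and the radial law r² ~ χ²_p are arbitrary choices (that particular radial law reproduces the multinormal N_p(μ, Σ)):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 2, 200_000

mu = np.array([1.0, -1.0])
sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# A with A'A = Sigma: take A = L' from the Cholesky factorization Sigma = L L'
a = np.linalg.cholesky(sigma).T

# u^(k): uniform on the unit sphere in R^k (here k = p, Sigma full rank)
g = rng.standard_normal((n, p))
u = g / np.linalg.norm(g, axis=1, keepdims=True)

# radial part: r^2 ~ chi^2_p makes X exactly N_p(mu, sigma)
r = np.sqrt(rng.chisquare(p, size=n))

x = mu + r[:, None] * (u @ a)     # X = mu + r A' u^(k), row-wise

print(x.mean(axis=0))             # close to mu
print(np.cov(x, rowvar=False))    # close to sigma, since E(r^2)/k = 1
```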
Theorem
1. Assume that X ~ EC_p(μ, Σ, φ) and E(r²) < ∞. Then

E(X) = μ,   Cov(X) = ( E(r²)/rank(Σ) ) Σ = -2 φ'(0) Σ.

2. Assume that X ~ EC_p(μ, Σ, φ) with rank(Σ) = k. Then

Q(X) = (X - μ)ᵀ Σ⁻ (X - μ)

has the same distribution as r² in equation (1), where Σ⁻ is a generalized inverse of Σ.