Multivariate Gaussian Distribution

Leon Gu

CSD, CMU
Multivariate Gaussian

\[
p(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left\{ -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right\}
\]

- Moment Parameterization: \(\mu = E(X)\), \(\Sigma = \mathrm{Cov}(X) = E[(X - \mu)(X - \mu)^T]\) (a symmetric, positive semi-definite matrix).
- Mahalanobis distance: \(\Delta^2 = (x - \mu)^T \Sigma^{-1} (x - \mu)\).
- Canonical Parameterization:
  \[
  p(x \mid \eta, \Lambda) = \exp\left\{ a + \eta^T x - \frac{1}{2} x^T \Lambda x \right\}
  \]
  where \(\Lambda = \Sigma^{-1}\), \(\eta = \Sigma^{-1} \mu\), and \(a = -\frac{1}{2}\left( n \log 2\pi - \log|\Lambda| + \eta^T \Lambda^{-1} \eta \right)\).
- Tons of applications (MoG, FA, PPCA, Kalman filter, ...)
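As a numerical sanity check (a minimal NumPy sketch; the function names are my own), the density can be evaluated from either the moment or the canonical parameterization, and the two must agree:

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Density of N(mu, Sigma) at x, via the moment parameterization."""
    n = len(mu)
    diff = x - mu
    maha = diff @ np.linalg.solve(Sigma, diff)        # squared Mahalanobis distance
    norm = (2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5
    return np.exp(-0.5 * maha) / norm

def gaussian_pdf_canonical(x, mu, Sigma):
    """Same density via Lambda = Sigma^-1, eta = Sigma^-1 mu."""
    n = len(mu)
    Lam = np.linalg.inv(Sigma)
    eta = Lam @ mu
    a = -0.5 * (n * np.log(2 * np.pi) - np.log(np.linalg.det(Lam))
                + eta @ np.linalg.solve(Lam, eta))
    return np.exp(a + eta @ x - 0.5 * x @ Lam @ x)

mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
x = np.array([0.3, 0.7])
print(gaussian_pdf(x, mu, Sigma), gaussian_pdf_canonical(x, mu, Sigma))
```

At \(x = \mu\) the exponent vanishes, so the density is just the normalizer \((2\pi)^{-n/2}|\Sigma|^{-1/2}\), which gives a second easy check.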


Multivariate Gaussian \(P(X_1, X_2)\)

\(P(X_1, X_2)\) (Joint Gaussian)

\[
\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \qquad
\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}
\]

\(P(X_2)\) (Marginal Gaussian)

\[
\mu_2^m = \mu_2, \qquad \Sigma_2^m = \Sigma_{22}
\]

\(P(X_1 \mid X_2 = x_2)\) (Conditional Gaussian)

\[
\mu_{1|2} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_2), \qquad
\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}
\]
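The conditioning formulas map directly to code (a NumPy sketch following the block formulas above; `condition_gaussian` and its argument layout are my own convention):

```python
import numpy as np

def condition_gaussian(mu, Sigma, x2, d1):
    """Parameters of P(X1 | X2 = x2) for a joint Gaussian.

    mu, Sigma describe the joint; the first d1 coordinates are X1.
    """
    mu1, mu2 = mu[:d1], mu[d1:]
    S11 = Sigma[:d1, :d1]
    S12 = Sigma[:d1, d1:]
    S21 = Sigma[d1:, :d1]
    S22 = Sigma[d1:, d1:]
    # mu_{1|2} = mu1 + S12 S22^-1 (x2 - mu2)
    mu_cond = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
    # Sigma_{1|2} = S11 - S12 S22^-1 S21  (Schur complement of S22)
    Sigma_cond = S11 - S12 @ np.linalg.solve(S22, S21)
    return mu_cond, Sigma_cond
```

For two scalar variables with unit variances and correlation \(\rho\), this reduces to the familiar \(\mu_{1|2} = \mu_1 + \rho(x_2 - \mu_2)\) with variance \(1 - \rho^2\).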
Operations on Gaussian R.V.

The linear transform of a Gaussian r.v. is a Gaussian. Remember that no matter how \(X\) is distributed,

\[
E(AX + b) = A\,E(X) + b
\]
\[
\mathrm{Cov}(AX + b) = A\,\mathrm{Cov}(X)\,A^T
\]

This means that for Gaussian-distributed quantities:

\[
X \sim N(\mu, \Sigma) \implies AX + b \sim N(A\mu + b,\ A \Sigma A^T).
\]
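The parameter propagation is a one-liner (a sketch; the function name is my own):

```python
import numpy as np

def linear_transform_params(A, b, mu, Sigma):
    """Parameters of AX + b when X ~ N(mu, Sigma): N(A mu + b, A Sigma A^T)."""
    return A @ mu + b, A @ Sigma @ A.T

# Example: shear a standard 2-D Gaussian.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 0.0])
m, C = linear_transform_params(A, b, np.zeros(2), np.eye(2))
```

Note that \(A \Sigma A^T\) is automatically symmetric positive semi-definite, so the result is always a valid covariance.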

The sum of two independent Gaussian r.v.'s is a Gaussian.

\[
Y = X_1 + X_2,\quad X_1 \perp X_2 \implies \mu_Y = \mu_1 + \mu_2,\quad \Sigma_Y = \Sigma_1 + \Sigma_2
\]

The product of two Gaussian functions is another Gaussian function (although no longer normalized):

\[
N(a, A)\,N(b, B) \propto N(c, C),
\]

where \(C = (A^{-1} + B^{-1})^{-1}\) and \(c = C A^{-1} a + C B^{-1} b\).
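A short sketch of the product formulas (function name my own); in 1-D with equal variances, the product mean is simply the midpoint of the two means, which gives an easy sanity check:

```python
import numpy as np

def gaussian_product_params(a, A, b, B):
    """c and C such that N(a, A) N(b, B) is proportional to N(c, C)."""
    C = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))  # C = (A^-1 + B^-1)^-1
    c = C @ (np.linalg.solve(A, a) + np.linalg.solve(B, b)) # c = C A^-1 a + C B^-1 b
    return c, C
```

This precision-weighted combination is exactly the update step that appears in the Kalman filter mentioned above.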


Maximum Likelihood Estimate of \(\mu\) and \(\Sigma\)

Given a set of i.i.d. data \(X = \{x_1, \ldots, x_N\}\) drawn from \(N(x; \mu, \Sigma)\), we want to estimate \((\mu, \Sigma)\) by MLE. The log-likelihood function is

\[
\ln p(X \mid \mu, \Sigma) = -\frac{N}{2} \ln|\Sigma| - \frac{1}{2} \sum_{n=1}^{N} (x_n - \mu)^T \Sigma^{-1} (x_n - \mu) + \text{const}
\]

Taking its derivative w.r.t. \(\mu\) and setting it to zero, we have

\[
\hat{\mu} = \frac{1}{N} \sum_{n=1}^{N} x_n
\]

Rewrite the log-likelihood using the trace trick:

\[
\begin{aligned}
\ln p(X \mid \mu, \Sigma) &= -\frac{N}{2} \ln|\Sigma| - \frac{1}{2} \sum_{n=1}^{N} (x_n - \mu)^T \Sigma^{-1} (x_n - \mu) + \text{const} \\
&= -\frac{N}{2} \ln|\Sigma| - \frac{1}{2} \sum_{n=1}^{N} \mathrm{Tr}\!\left[ \Sigma^{-1} (x_n - \mu)(x_n - \mu)^T \right] + \text{const} \\
&= -\frac{N}{2} \ln|\Sigma| - \frac{1}{2} \mathrm{Tr}\!\left[ \Sigma^{-1} \sum_{n=1}^{N} (x_n - \mu)(x_n - \mu)^T \right] + \text{const}
\end{aligned}
\]

Taking the derivative w.r.t. \(\Sigma^{-1}\), and using 1) \(\frac{\partial}{\partial A} \log|A| = A^{-T}\); 2) \(\frac{\partial}{\partial A} \mathrm{Tr}[AB] = \frac{\partial}{\partial A} \mathrm{Tr}[BA] = B^T\), we obtain

\[
\hat{\Sigma} = \frac{1}{N} \sum_{n=1}^{N} (x_n - \hat{\mu})(x_n - \hat{\mu})^T.
\]
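The two closed-form estimators translate directly to code (a NumPy sketch; `gaussian_mle` is my own name). Note the \(1/N\) normalization, which is the MLE rather than the unbiased \(1/(N-1)\) sample covariance:

```python
import numpy as np

def gaussian_mle(X):
    """MLE of (mu, Sigma) from an (N, d) data matrix X."""
    mu_hat = X.mean(axis=0)                 # (1/N) sum_n x_n
    diff = X - mu_hat
    Sigma_hat = diff.T @ diff / len(X)      # (1/N) sum_n (x_n - mu)(x_n - mu)^T
    return mu_hat, Sigma_hat
```

This should match `np.cov(X, rowvar=False, bias=True)`, whose `bias=True` flag selects the same \(1/N\) normalization.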
