
UNIVERSIDAD DEL QUINDÍO, PROGRAMA DE INGENIERÍA ELECTRÓNICA

Marginal and Conditional Gaussians

Ramses Acosta, Dairo Quintero


April 10, 2016

1 FORMULATION OF THE PROBLEM
We have a marginal Gaussian distribution for x and a conditional Gaussian distribution for y given x of the form:

p(x) = \mathcal{N}(x \mid \mu, \Lambda^{-1})    (1)

p(y \mid x) = \mathcal{N}(y \mid Ax + b, L^{-1})    (2)

Equation (1) is the marginal distribution of x and equation (2) is the conditional distribution of y given x, where \Lambda and L are precision (inverse covariance) matrices. The goal is to find the remaining distributions, p(x \mid y) and p(y), starting from equations (1) and (2) [1].
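To make the setup concrete, the following is a minimal numerical sketch of the model in (1) and (2), written in Python with NumPy. All parameter values (the dimensions, mu, Lambda, A, b, L) are arbitrary examples chosen for illustration, not values from the text.

import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example parameters for the linear-Gaussian model (1)-(2).
mu = np.array([1.0, -0.5])                  # mean of p(x)
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])    # precision Lambda of p(x)
A = np.array([[1.0, 2.0], [0.0, 1.0]])      # linear map in p(y|x)
b = np.array([0.1, 0.2])                    # offset in p(y|x)
L = np.diag([4.0, 1.0])                     # precision of p(y|x)

# Ancestral sampling: draw x ~ p(x), then y ~ p(y|x) = N(Ax + b, L^-1).
x = rng.multivariate_normal(mu, np.linalg.inv(Lam))
y = rng.multivariate_normal(A @ x + b, np.linalg.inv(L))
print("x =", x, " y =", y)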

2 ARITHMETIC
In this section we give a procedure for finding the other distributions stated in Section 1.

2.1 JOINT PROBABILITY

The joint probability is defined as:

p(x, y) = p(y \mid x)\, p(x)    (3)

Equation (3) is known as the product rule. Since we know the distributions given by (1) and (2), we can find the joint distribution through equation (3). Then:

p(x, y) = \mathcal{N}(x \mid \mu, \Lambda^{-1})\, \mathcal{N}(y \mid Ax + b, L^{-1})    (4)

p(x, y) = \frac{|\Lambda|^{1/2}}{(2\pi)^{D/2}} \exp\left\{ -\frac{1}{2}(x - \mu)^{T} \Lambda (x - \mu) \right\} \frac{|L|^{1/2}}{(2\pi)^{D/2}} \exp\left\{ -\frac{1}{2}\bigl(y - (Ax + b)\bigr)^{T} L \bigl(y - (Ax + b)\bigr) \right\}    (5)

Now consider the log of the joint probability:

\ln p(x, y) = -\frac{1}{2}\left\{ (x - \mu)^{T} \Lambda (x - \mu) + \bigl(y - (Ax + b)\bigr)^{T} L \bigl(y - (Ax + b)\bigr) \right\} + \text{const}
= -\frac{1}{2}\bigl( x^{T}\Lambda x - x^{T}\Lambda\mu - \mu^{T}\Lambda x + \mu^{T}\Lambda\mu + y^{T}Ly - y^{T}LAx - y^{T}Lb - x^{T}A^{T}Ly + x^{T}A^{T}LAx + x^{T}A^{T}Lb - b^{T}Ly + b^{T}LAx + b^{T}Lb \bigr)    (6)
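The expansion in (6) is easy to get wrong by a sign, so here is a quick numerical sanity check (not part of the original text): for random values of all quantities, the expanded terms must reproduce the two quadratic forms.

import numpy as np

rng = np.random.default_rng(2)
x, y = rng.normal(size=2), rng.normal(size=2)
mu, b = rng.normal(size=2), rng.normal(size=2)
A = rng.normal(size=(2, 2))
M = rng.normal(size=(2, 2)); Lam = M @ M.T + np.eye(2)  # SPD precision
N = rng.normal(size=(2, 2)); L = N @ N.T + np.eye(2)    # SPD precision

# Left side: the two quadratic forms inside the braces in (6).
lhs = (x - mu) @ Lam @ (x - mu) + (y - A @ x - b) @ L @ (y - A @ x - b)
# Right side: the thirteen expanded terms of (6).
rhs = (x @ Lam @ x - x @ Lam @ mu - mu @ Lam @ x + mu @ Lam @ mu
       + y @ L @ y - y @ L @ A @ x - y @ L @ b - x @ A.T @ L @ y
       + x @ A.T @ L @ A @ x + x @ A.T @ L @ b - b @ L @ y
       + b @ L @ A @ x + b @ L @ b)
assert np.isclose(lhs, rhs)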

2.2 FINDING THE MEAN AND COVARIANCE OF THE JOINT PROBABILITY


The covariance is associated with the second-order terms and the mean with the first-order terms; the terms independent of x and y can be dropped:

second-order terms = x^{T}\Lambda x + y^{T}Ly - y^{T}LAx - x^{T}A^{T}Ly + x^{T}A^{T}LAx    (7)

first-order terms = -x^{T}\Lambda\mu - \mu^{T}\Lambda x - y^{T}Lb + x^{T}A^{T}Lb - b^{T}Ly + b^{T}LAx    (8)

Using equations (7) and (8), equation (6) can be written in partitioned matrix form as equation (9) [1]:

\ln p(x, y) = -\frac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^{T} \begin{pmatrix} \Lambda + A^{T}LA & -A^{T}L \\ -LA & L \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} x \\ y \end{pmatrix}^{T} \begin{pmatrix} \Lambda\mu - A^{T}Lb \\ Lb \end{pmatrix} + \text{const}    (9)

From the second-order terms we can identify R, the inverse of the covariance (the precision matrix of the joint distribution):

R = \begin{pmatrix} \Lambda + A^{T}LA & -A^{T}L \\ -LA & L \end{pmatrix}    (10)

Now we have to find the inverse of R. For this we use equations (11) and (12), which result in (13) [2]:

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} M & -MBD^{-1} \\ -D^{-1}CM & D^{-1} + D^{-1}CMBD^{-1} \end{pmatrix}    (11)

where:

M = (A - BD^{-1}C)^{-1}    (12)

\operatorname{cov}(x, y) = R^{-1} = \begin{pmatrix} \Lambda^{-1} & \Lambda^{-1}A^{T} \\ A\Lambda^{-1} & L^{-1} + A\Lambda^{-1}A^{T} \end{pmatrix}    (13)
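As a sanity check on (13) (not in the original text), the closed-form inverse can be compared numerically with a direct inversion of R from (10); the parameter values below are arbitrary examples.

import numpy as np

Lam = np.array([[2.0, 0.3], [0.3, 1.0]])   # precision of p(x)
A = np.array([[1.0, 2.0], [0.0, 1.0]])     # linear map in p(y|x)
L = np.diag([4.0, 1.0])                    # precision of p(y|x)

# R as in (10).
R = np.block([[Lam + A.T @ L @ A, -A.T @ L],
              [-L @ A,            L       ]])

# Closed-form inverse as in (13).
Lam_inv = np.linalg.inv(Lam)
cov = np.block([[Lam_inv,       Lam_inv @ A.T],
                [A @ Lam_inv,   np.linalg.inv(L) + A @ Lam_inv @ A.T]])

assert np.allclose(np.linalg.inv(R), cov)  # (13) agrees with direct inversion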

We know that the mean satisfies

R\, \mathbb{E}[x, y] = \text{(coefficient vector of the first-order terms)}    (14)

Using (8):

\mathbb{E}[x, y] = R^{-1} \begin{pmatrix} \Lambda\mu - A^{T}Lb \\ Lb \end{pmatrix} = \begin{pmatrix} \Lambda^{-1} & \Lambda^{-1}A^{T} \\ A\Lambda^{-1} & L^{-1} + A\Lambda^{-1}A^{T} \end{pmatrix} \begin{pmatrix} \Lambda\mu - A^{T}Lb \\ Lb \end{pmatrix}    (15)

= \begin{pmatrix} \mu \\ A\mu + b \end{pmatrix}    (16)

Then the joint probability of x and y is:

p(x, y) = \mathcal{N}\left( \begin{pmatrix} x \\ y \end{pmatrix} \,\middle|\, \begin{pmatrix} \mu \\ A\mu + b \end{pmatrix},\; \begin{pmatrix} \Lambda^{-1} & \Lambda^{-1}A^{T} \\ A\Lambda^{-1} & L^{-1} + A\Lambda^{-1}A^{T} \end{pmatrix} \right)    (17)
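A quick Monte Carlo check of (17) (again with arbitrary example parameters): sampling x from (1) and then y from (2) should reproduce, up to sampling error, the mean and covariance read off the partitioned form.

import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -0.5])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([0.1, 0.2])
L = np.diag([4.0, 1.0])

# Ancestral sampling of z = (x, y).
n = 200_000
xs = rng.multivariate_normal(mu, np.linalg.inv(Lam), size=n)
noise = rng.multivariate_normal(np.zeros(2), np.linalg.inv(L), size=n)
ys = xs @ A.T + b + noise
zs = np.hstack([xs, ys])

# Moments predicted by (17).
Lam_inv = np.linalg.inv(Lam)
mean = np.concatenate([mu, A @ mu + b])
cov = np.block([[Lam_inv,      Lam_inv @ A.T],
                [A @ Lam_inv,  np.linalg.inv(L) + A @ Lam_inv @ A.T]])

print(zs.mean(axis=0) - mean)   # approximately zero
print(np.cov(zs.T) - cov)       # approximately zero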

Now we want the marginal distribution over y. Taking the blocks of (17) that involve only y [3], we get:

p(y) = \mathcal{N}(y \mid A\mu + b,\; L^{-1} + A\Lambda^{-1}A^{T})    (18)

Finally, only p(x \mid y) is unknown. Collecting from (6) the terms that are of second order in x, and the terms of first order in x (treating y as fixed, so the cross terms in x and y now count as first order), we have:

second-order terms = x^{T}\Lambda x + x^{T}A^{T}LAx    (19)

first-order terms = -x^{T}\Lambda\mu - \mu^{T}\Lambda x - y^{T}LAx - x^{T}A^{T}Ly + x^{T}A^{T}Lb + b^{T}LAx    (20)

With (19) we can calculate the covariance in the same way as in (14). The quadratic form in x is -\frac{1}{2}x^{T}(\Lambda + A^{T}LA)x, so:

\operatorname{cov}(x \mid y) = \Sigma = (\Lambda + A^{T}LA)^{-1}    (21)

The calculation of the mean is as follows. Using the first-order terms (20), the linear-in-x part of \ln p(x, y) is

-\frac{1}{2}\bigl( -x^{T}\Lambda\mu - \mu^{T}\Lambda x - y^{T}LAx - x^{T}A^{T}Ly + x^{T}A^{T}Lb + b^{T}LAx \bigr)    (22)

= x^{T}\{\Lambda\mu + A^{T}L(y - b)\}    (23)

Equating this with the generic linear term x^{T}\Sigma^{-1}\mathbb{E}[x \mid y] of a Gaussian in x gives

\Sigma^{-1}\, \mathbb{E}[x \mid y] = \Lambda\mu + A^{T}L(y - b)    (24)

Then:

\mathbb{E}[x \mid y] = \Sigma\{\Lambda\mu + A^{T}L(y - b)\}    (25)

With this, the conditional distribution of x given y, whose covariance was defined in (21), is:

p(x \mid y) = \mathcal{N}\bigl(x \mid \Sigma\{\Lambda\mu + A^{T}L(y - b)\},\; \Sigma\bigr)    (26)
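Equations (21) and (26) can also be checked numerically (this check is not in the original text) against the standard formulas for conditioning a partitioned Gaussian, applied to the joint in (17); the parameter values and the observation y are arbitrary examples.

import numpy as np

mu = np.array([1.0, -0.5])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([0.1, 0.2])
L = np.diag([4.0, 1.0])
y = np.array([0.7, -0.2])   # an arbitrary observed value

# Conditional moments as in (21) and (26).
Sigma = np.linalg.inv(Lam + A.T @ L @ A)
mean = Sigma @ (Lam @ mu + A.T @ L @ (y - b))

# Standard Gaussian conditioning applied to the joint (17).
Lam_inv = np.linalg.inv(Lam)
Sxx, Sxy = Lam_inv, Lam_inv @ A.T
Syy = np.linalg.inv(L) + A @ Lam_inv @ A.T
mean_ref = mu + Sxy @ np.linalg.solve(Syy, y - (A @ mu + b))
cov_ref = Sxx - Sxy @ np.linalg.solve(Syy, Sxy.T)

assert np.allclose(mean, mean_ref) and np.allclose(Sigma, cov_ref)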

REFERENCES

[1] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006. [Online]. Available: http://www.library.wisc.edu/selectedtocs/bg0137.pdf
[2] K. B. Petersen and M. S. Pedersen, The Matrix Cookbook, Nov. 2012, version 20121115. [Online]. Available: http://www2.imm.dtu.dk/pubdb/p.php?3274
[3] C. Bracegirdle, "Bayes' Theorem for Gaussians," 2010. [Online]. Available: http://web4.cs.ucl.ac.uk/staff/C.Bracegirdle/bayesTheoremForGaussians.pdf
