Marginal and Conditional Gaussians: Ramses Acosta, Dairo Quintero
1 FORMULATION OF THE PROBLEM
We have a marginal Gaussian distribution for x and a conditional Gaussian distribution for y given x, of the form:

p(x) = \mathcal{N}(x \mid \mu, \Lambda^{-1})    (1)

p(y \mid x) = \mathcal{N}(y \mid A x + b, L^{-1})    (2)
where equation (1) is the marginal distribution of x and equation (2) is the conditional distribution of y given x. The goal is to find the remaining distributions needed to complete the picture, p(x|y) and p(y), from equations (1) and (2) [1].
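As a concrete numerical anchor for (1) and (2), the sketch below evaluates the factorization p(x, y) = p(x) p(y|x) with numpy, working directly in precision (natural-parameter) form. All parameter values below are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# Arbitrary example parameters (illustrative only, not from the text).
mu = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])   # Lambda: precision of p(x)
A = np.array([[1.0, 0.0], [0.5, -1.0]])
b = np.array([0.5, 0.0])
L = np.diag([1.0, 2.0])                    # L: precision of p(y|x)

def log_gauss_prec(v, mean, prec):
    """log N(v | mean, prec^-1), parameterized by the precision matrix."""
    d = v - mean
    _, logdet = np.linalg.slogdet(prec)
    return 0.5 * (logdet - len(v) * np.log(2 * np.pi) - d @ prec @ d)

# ln p(x, y) = ln p(x) + ln p(y|x), the factorization used in Section 2.
x = np.array([0.0, 0.0])
y = np.array([1.0, 1.0])
log_joint = log_gauss_prec(x, mu, Lam) + log_gauss_prec(y, A @ x + b, L)
print(log_joint)
```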
2 ARITHMETIC
In this section we give a procedure for finding the other distributions stated in Section 1.

Since p(x, y) = p(x) p(y|x), the logarithm of the joint distribution is

\ln p(x, y) = \ln p(x) + \ln p(y \mid x)    (3)

We know the distributions given by (1) and (2), so we can find the joint distribution through equation (3). Recall the general form of a Gaussian:

\mathcal{N}(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\left\{ -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right\}    (4)

Substituting (1) and (2) into (3) and using (4):

\ln p(x, y) = -\frac{1}{2} (x - \mu)^T \Lambda (x - \mu) - \frac{1}{2} (y - A x - b)^T L (y - A x - b) + \text{const}    (5)

Expanding (5) and grouping the terms by order,

\ln p(x, y) = (\text{second-order terms}) + (\text{first-order terms}) + \text{const}    (6)

where the second-order terms are

-\frac{1}{2} x^T (\Lambda + A^T L A) x - \frac{1}{2} y^T L y + \frac{1}{2} y^T L A x + \frac{1}{2} x^T A^T L y    (7)

and the first-order terms are

x^T \Lambda \mu - x^T A^T L b + y^T L b    (8)
Equation (6) can be written in partitioned-matrix form if we substitute equations (7) and (8), giving equation (9) [1]:

\ln p(x, y) = -\frac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^T \begin{pmatrix} \Lambda + A^T L A & -A^T L \\ -L A & L \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} x \\ y \end{pmatrix}^T \begin{pmatrix} \Lambda \mu - A^T L b \\ L b \end{pmatrix} + \text{const}    (9)
From the second-order terms we can read off R, the inverse of the covariance (the precision matrix of the joint distribution). Then:

R = \begin{pmatrix} \Lambda + A^T L A & -A^T L \\ -L A & L \end{pmatrix}    (10)
Now we have to find the inverse of R. For this we use the partitioned-inverse identity (11) together with the Schur complement (12), which yields (13) [2] (here A, B, C, D denote generic blocks, not the matrix A of equation (2)):

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} M & -M B D^{-1} \\ -D^{-1} C M & D^{-1} + D^{-1} C M B D^{-1} \end{pmatrix}    (11)

where:

M = (A - B D^{-1} C)^{-1}    (12)
\text{cov}(x, y) = R^{-1} = \begin{pmatrix} \Lambda^{-1} & \Lambda^{-1} A^T \\ A \Lambda^{-1} & L^{-1} + A \Lambda^{-1} A^T \end{pmatrix}    (13)
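As a quick numerical sanity check of (10)–(13), one can verify that inverting R recovers the claimed covariance blocks. The matrices below are arbitrary example values, not from the text.

```python
import numpy as np

# Arbitrary example parameters (illustrative only).
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])   # Lambda, precision of p(x)
A = np.array([[1.0, 0.0], [0.5, -1.0], [2.0, 1.0]])
L = np.diag([1.0, 2.0, 0.5])               # precision of p(y|x)
Lami = np.linalg.inv(Lam)
Li = np.linalg.inv(L)

# R from (10), built block by block.
R = np.block([[Lam + A.T @ L @ A, -A.T @ L],
              [-L @ A,            L       ]])

# Claimed inverse from (13).
cov = np.block([[Lami,     Lami @ A.T],
                [A @ Lami, Li + A @ Lami @ A.T]])

print(np.allclose(np.linalg.inv(R), cov))  # True
```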
We know that the mean of the joint distribution satisfies

R \, \mathbb{E}[x, y] = \text{first-order terms}    (14)
Using (8):

\mathbb{E}[x, y] = R^{-1} \begin{pmatrix} \Lambda \mu - A^T L b \\ L b \end{pmatrix}    (15)

= \begin{pmatrix} \Lambda^{-1} & \Lambda^{-1} A^T \\ A \Lambda^{-1} & L^{-1} + A \Lambda^{-1} A^T \end{pmatrix} \begin{pmatrix} \Lambda \mu - A^T L b \\ L b \end{pmatrix}    (16)

= \begin{pmatrix} \mu \\ A \mu + b \end{pmatrix}    (17)
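The mean computation (14)–(17) can likewise be checked numerically: solving R E[x, y] = (Λμ − AᵀLb, Lb) should return (μ, Aμ + b). The parameter values are arbitrary examples.

```python
import numpy as np

# Arbitrary example parameters (illustrative only).
mu = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 0.0], [0.5, -1.0], [2.0, 1.0]])
b = np.array([0.5, 0.0, -1.0])
L = np.diag([1.0, 2.0, 0.5])

# Joint precision (10) and first-order terms (8).
R = np.block([[Lam + A.T @ L @ A, -A.T @ L],
              [-L @ A,            L       ]])
m = np.concatenate([Lam @ mu - A.T @ L @ b, L @ b])

# Solving (14) gives the joint mean (17): (mu, A mu + b).
Ez = np.linalg.solve(R, m)
print(Ez)
print(np.concatenate([mu, A @ mu + b]))
```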
Now we want to know the distribution over y, so we take the components of the mean (17) and of the covariance (13) that correspond to y [3]; then:

p(y) = \mathcal{N}(y \mid A \mu + b, \; L^{-1} + A \Lambda^{-1} A^T)    (18)
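A Monte Carlo check of (18): ancestral samples (x drawn from (1), then y from (2)) should have sample mean and covariance close to Aμ + b and L⁻¹ + AΛ⁻¹Aᵀ. The parameter values below are arbitrary examples.

```python
import numpy as np

# Arbitrary example parameters (illustrative only).
rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 0.0], [0.5, -1.0]])
b = np.array([0.5, 0.0])
L = np.diag([1.0, 2.0])

# Ancestral sampling: x ~ p(x), then y ~ p(y | x).
n = 500_000
x = rng.multivariate_normal(mu, np.linalg.inv(Lam), size=n)
y = x @ A.T + b + rng.multivariate_normal(np.zeros(2), np.linalg.inv(L), size=n)

# Sample statistics vs. the closed form (18).
print(y.mean(axis=0), A @ mu + b)
print(np.cov(y.T), np.linalg.inv(L) + A @ np.linalg.inv(Lam) @ A.T)
```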
Finally, only p(x|y) remains. From (7), the second-order terms that depend on x are

-\frac{1}{2} x^T \Lambda x - \frac{1}{2} x^T A^T L A x = -\frac{1}{2} x^T (\Lambda + A^T L A) x    (19)

so the conditional covariance is

\Sigma = (\Lambda + A^T L A)^{-1}    (20)

From (7) and (8), the terms that are linear in x (treating y as fixed) are

x^T \{\Lambda \mu + A^T L (y - b)\}    (21)

so that

\Sigma^{-1} \, \mathbb{E}[x \mid y] = \Lambda \mu + A^T L (y - b)    (22)

Then:

\mathbb{E}[x \mid y] = \Sigma \{\Lambda \mu + A^T L (y - b)\}    (23)
With this mean and covariance, the conditional distribution over x given y is:

p(x \mid y) = \mathcal{N}(x \mid \Sigma \{\Lambda \mu + A^T L (y - b)\}, \; \Sigma)    (26)
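As a final check of (26) (again with arbitrary example values), conditioning the joint Gaussian on y via the standard Schur-complement formulas should reproduce the mean and covariance derived above.

```python
import numpy as np

# Arbitrary example parameters and an arbitrary observed y (illustrative only).
mu = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 0.0], [0.5, -1.0], [2.0, 1.0]])
b = np.array([0.5, 0.0, -1.0])
L = np.diag([1.0, 2.0, 0.5])
y = np.array([0.2, -0.7, 1.5])

# Joint covariance blocks (13) and joint mean (17).
Lami = np.linalg.inv(Lam)
Sxx, Sxy = Lami, Lami @ A.T
Syx, Syy = A @ Lami, np.linalg.inv(L) + A @ Lami @ A.T
mx, my = mu, A @ mu + b

# Reference: standard Gaussian conditioning on y (Schur complement).
cond_mean_ref = mx + Sxy @ np.linalg.solve(Syy, y - my)
cond_cov_ref = Sxx - Sxy @ np.linalg.solve(Syy, Syx)

# Closed form from (20) and (26).
Sigma = np.linalg.inv(Lam + A.T @ L @ A)
cond_mean = Sigma @ (Lam @ mu + A.T @ L @ (y - b))

print(np.allclose(cond_mean, cond_mean_ref), np.allclose(Sigma, cond_cov_ref))
```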
REFERENCES
[1] C. M. Bishop, Pattern Recognition and Machine Learning, 2006, vol. 4, no. 4. [Online].
Available: http://www.library.wisc.edu/selectedtocs/bg0137.pdf
[2] K. B. Petersen and M. S. Pedersen, The Matrix Cookbook, Nov. 2012, version 20121115. [Online]. Available: http://www2.imm.dtu.dk/pubdb/p.php?3274
[3] C. Bracegirdle. (2010) Bayes' theorem for Gaussians. [Online]. Available: http://web4.cs.ucl.ac.uk/staff/C.Bracegirdle/bayesTheoremForGaussians.pdf