EECS 126 Notes Tyler Zhu

approach, where the unknowns are RVs whose distributions have to be estimated. Each of
these will lead to a different inference method as we will see later.
The basic premise is that we have n possible exclusive causes C1 , C2 , . . . , Cn of a particular
symptom. Each cause has a prior probability pi and it has a probability qi of causing the
symptom. We call the pi our priors and the qi our likelihoods, and can illustrate the setup with the
following diagram.

Figure 13: The basic inference setup, where the Ci ’s are possible causes of a symptom S; each cause Ci has prior probability pi and produces S with probability qi.

Suppose we want the posterior probability πi of cause i given the symptom S. In other words,
the probability that cause i caused the symptoms we observed. Then we can use Bayes’ Rule
to get
\[
\pi_i = P(C_i \mid S) = \frac{P(S \mid C_i)\, P(C_i)}{\sum_{j=1}^{n} P(S \mid C_j)\, P(C_j)} = \frac{p_i q_i}{\sum_{j=1}^{n} p_j q_j}.
\]

This is important enough to be stated as a theorem.

Theorem 28 (Posterior Probability). The posterior probability πi of a cause i is given by


\[
\pi_i = \frac{p_i q_i}{\sum_j p_j q_j}.
\]
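This formula is easy to check numerically. Below is a minimal sketch in Python; the priors and likelihoods are made-up illustrative numbers, not from the notes:

```python
# Posterior probabilities via Bayes' Rule: pi_i = p_i * q_i / sum_j p_j * q_j.
def posteriors(p, q):
    """Given priors p[i] = P(C_i) and likelihoods q[i] = P(S | C_i),
    return the posteriors P(C_i | S)."""
    joint = [pi * qi for pi, qi in zip(p, q)]  # unnormalized p_i * q_i
    total = sum(joint)                         # normalizing constant P(S)
    return [j / total for j in joint]

p = [0.5, 0.3, 0.2]   # illustrative priors over causes C_1, C_2, C_3
q = [0.1, 0.4, 0.8]   # illustrative likelihoods P(S | C_i)
post = posteriors(p, q)
print(post)           # posteriors sum to 1
```

Note that the denominator is the same for every i, which is why the MAP rule below can ignore it entirely.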

19.3 Inference: MAP and MLE, and the MAP Rule


There are two main inference methods: MAP and MLE.
Definition 29 (MAP). The maximum a posteriori, or MAP, is defined to be

\[
\mathrm{MAP} = \arg\max_i \pi_i = \arg\max_i p_i q_i.
\]

In other words, it is the best estimate of the cause given a symptom.


Definition 30 (MLE). The maximum likelihood estimate, or MLE, is defined to be

\[
\mathrm{MLE} = \arg\max_i q_i,
\]

which is just the MAP estimate under a uniform prior, i.e. p1 = · · · = pn .
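Because the MAP weights each likelihood by its prior, the two estimates can disagree when the prior is far from uniform. A minimal sketch with made-up numbers (not from the notes):

```python
# MAP vs MLE for discrete causes.
# MAP maximizes p_i * q_i; MLE maximizes q_i alone (MAP under a uniform prior).
p = [0.7, 0.2, 0.1]   # illustrative priors over causes C_1, C_2, C_3
q = [0.2, 0.5, 0.9]   # illustrative likelihoods P(S | C_i)

map_est = max(range(len(p)), key=lambda i: p[i] * q[i])  # argmax of p_i * q_i
mle_est = max(range(len(q)), key=lambda i: q[i])         # argmax of q_i

print(map_est, mle_est)  # → 0 2
```

Here the MLE picks C_3 because it explains the symptom best in isolation, while the MAP picks C_1 because its large prior outweighs its small likelihood.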


More generally,
\[
\mathrm{MAP}[X \mid Y = y] = \arg\max_x P(X = x \mid Y = y),
\]
which can be interpreted as finding “which cause best explains the observed symptom,” and

\[
\mathrm{MLE}[X \mid Y = y] = \arg\max_x P(Y = y \mid X = x),
\]
which can be interpreted as finding “under which cause the observed symptom is most likely.”
