
Intuitively, we might just add the probability of these events together.

We know this works because heads and tails are the only possible outcomes,
and the probabilities of all possible outcomes must sum to 1. If the probabilities
of all possible events did not sum to 1, then we would have some outcome
that was missing. But how do we know that there would need to be a missing
outcome if the sum was less than 1?
Suppose we know that the probability of heads is P(heads) = 1/2, but
someone claimed that the probability of tails was P(tails) = 1/3. We also
know from before that the probability of not getting heads must be:
NOT P(heads) = 1 − 1/2 = 1/2
Since the probability of not getting heads is 1/2 and the claimed probability
for tails is only 1/3, either there is a missing event or our probability
for tails is incorrect.
From this we can see that, as long as events are mutually exclusive, we
can simply add up the probabilities of each event to calculate the probability
of one event OR the other. Another example of this is rolling a die. We know that
the probability of rolling a 1 is 1/6, and the same is true for rolling a 2:
P(one) = 1/6, P(two) = 1/6
So we can perform the same operation, adding the two probabilities,
and see that the combined probability of rolling either a 1 OR a 2 is 2/6,
or 1/3:
P(one) + P(two) = 1/6 + 1/6 = 2/6 = 1/3
Again, this makes intuitive sense.
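The die calculation above can be sketched in a few lines of Python, using exact fractions so that 2/6 reduces to 1/3 without floating-point rounding:

```python
from fractions import Fraction

# Exact arithmetic for the die example: P(one) and P(two) are each 1/6
p_one = Fraction(1, 6)
p_two = Fraction(1, 6)

# Because the events are mutually exclusive, P(one OR two) = P(one) + P(two)
p_one_or_two = p_one + p_two
print(p_one_or_two)  # 1/3
```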
This addition rule applies only to combinations of mutually exclusive outcomes.
In probabilistic terms, mutually exclusive means that:
P(A AND B) = 0
That is, the probability of getting both A AND B together is 0. We see
that this holds for our examples:
• It is impossible to flip one coin and get both heads and tails.
• It is impossible to roll both a 1 and a 2 on a single roll of a die.
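One way to see mutual exclusivity concretely is to enumerate the outcomes of a single die roll and count how often "rolled a 1" and "rolled a 2" hold at the same time; this is a sketch, not part of the original text:

```python
# Each of the six die outcomes is equally likely
outcomes = [1, 2, 3, 4, 5, 6]

# P(A AND B): the fraction of outcomes where the roll is both a 1 and a 2
p_one_and_two = sum(1 for o in outcomes if o == 1 and o == 2) / len(outcomes)
print(p_one_and_two)  # 0.0, so the two events are mutually exclusive
```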
To really understand combining probabilities with OR, we need to look
at the case where events are not mutually exclusive.

We can now solve our problem since we know each piece of
information:
P(male | color blind) = P(male) × P(color blind | male) / P(color blind)
= (0.5 × 0.08) / 0.0425
= 0.941
Given the calculation, we know there is a 94.1 percent chance that the
customer service rep is in fact male!
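The calculation can be checked directly, plugging in the three values given in the text:

```python
# The color blindness example, using the values from the text
p_male = 0.5             # P(male)
p_cb_given_male = 0.08   # P(color blind | male)
p_cb = 0.0425            # P(color blind)

p_male_given_cb = p_male * p_cb_given_male / p_cb
print(round(p_male_given_cb, 3))  # 0.941
```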
Introducing Bayes’ Theorem
There is nothing actually specific to our case of color blindness in the preceding
formula, so we should be able to generalize it to any given A and B
probabilities. If we do this, we get the most foundational formula in this
book, Bayes’ theorem:
P(A | B) = P(A) × P(B | A) / P(B)
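Since the formula no longer mentions anything specific to the color blindness problem, it can be captured as a small general-purpose function; the name `bayes` and its parameter names are illustrative, not from the text:

```python
def bayes(p_a: float, p_b_given_a: float, p_b: float) -> float:
    """Bayes' theorem: return P(A | B) = P(A) * P(B | A) / P(B)."""
    return p_a * p_b_given_a / p_b

# Reproduces the color blindness result with A = male, B = color blind
print(round(bayes(0.5, 0.08, 0.0425), 3))  # 0.941
```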
To understand why Bayes’ theorem is so important, let’s look at a general
form of this problem. Our beliefs describe the world we know, so when
we observe something, its conditional probability represents the likelihood of
what we’ve seen given what we believe, or:
P (observed | belief )
For example, suppose you believe in climate change, and therefore
you expect that the area where you live will have more droughts than usual
over a 10-year period. Your belief is that climate change is taking place,
and your observation is the number of droughts in your area; let’s say there
were 5 droughts in the last 10 years. Determining how likely it is that you’d
see exactly 5 droughts in the past 10 years if there were climate change during
that period may be difficult. One way to do this would be to consult an
expert in climate science and ask them the probability of droughts given
that their model assumes climate change.
At this point, all you’ve done is ask, “What is the probability of what I’ve
observed, given that I believe climate change is true?” But what you want is
some way to quantify how strongly you believe climate change is really happening,
given what you have observed. Bayes’ theorem allows you to reverse
P(observed | belief), which you asked the climate scientist for, and solve for
the likelihood of your beliefs given what you’ve observed, or:
P(belief | observed)