Probability Theory and Statistics



Lecture 4: Conditional probability and independence, part 2

23 September 2019

Office hours: Monday 12:10 (R208), Friday 17:00 (R308)

Home assignment: 2 parts – problems + test

Lecturer: Mikhail Zhitlukhin (Михаил Валентинович Житлухин)


Email: icef.statistics@gmail.com
Total probability and Bayes’ formulas for many events
Definition
Events B1 , B2 , . . . , Bn are called collectively exhaustive if
1. Bi ∩ Bj = ∅ for any i ≠ j
2. B1 ∪ B2 ∪ . . . ∪ Bn = Ω

[Diagram: Ω partitioned into B1 , B2 , B3 , B4 ]

Examples
1. B1 = B, B2 = B̄ for any event B
2. A random card:
B1 = hearts, B2 = spades, B3 = diamonds, B4 = clubs

The total probability formula
Suppose B1 , . . . , Bn are collectively exhaustive events such that
P (Bi ) ≠ 0 for any i. Then for any event A

P (A) = P (A | B1 ) · P (B1 ) + . . . + P (A | Bn ) · P (Bn ).

Bayes’ formula
Suppose B1 , . . . , Bn are collectively exhaustive events, P (Bi ) ≠ 0. Then
for any event A such that P (A) ≠ 0 and any k

P (Bk | A) = P (A | Bk ) · P (Bk ) / (P (A | B1 ) · P (B1 ) + . . . + P (A | Bn ) · P (Bn )).

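Both formulas can be sketched in a few lines of Python (a minimal illustration; the function names and the example numbers are my own, not from the lecture):

```python
# Total probability and Bayes' formula for a collectively exhaustive
# family B1, ..., Bn, given as parallel lists of numbers.

def total_probability(p_B, p_A_given_B):
    # P(A) = P(A | B1) P(B1) + ... + P(A | Bn) P(Bn)
    return sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))

def bayes(k, p_B, p_A_given_B):
    # P(Bk | A) = P(A | Bk) P(Bk) / sum_i P(A | Bi) P(Bi)
    return p_A_given_B[k] * p_B[k] / total_probability(p_B, p_A_given_B)

# hypothetical numbers for a two-event partition B1, B2
p_B = [0.5, 0.5]            # P(B1), P(B2)
p_A_given_B = [0.7, 0.5]    # P(A | B1), P(A | B2)

print(total_probability(p_B, p_A_given_B))   # 0.6
print(bayes(0, p_B, p_A_given_B))            # 0.35 / 0.6 ≈ 0.583
```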
Independence of two random events
Event A is called independent of event B if its conditional probability is
the same as the unconditional probability:
P (A | B) = P (A)    (assume P (B) ≠ 0)

Notation: A ⊥⊥ B

Example
Rolling a die: A = “5 or 6 points”, B = “even number of points”
[Diagram: die outcomes 1–6, with A = {5, 6} and B = {2, 4, 6}]
P (A) = P (A | B) = 1/3

• An equivalent definition (and more convenient):

P (A ∩ B) = P (A) · P (B)    (O.K. also when P (B) = 0)

This is the probability multiplication rule for independent events.

• If A is independent of B, then B is also independent of A:

if P (A | B) = P (A), then P (B | A) = P (B)
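The die example above can be checked with exact fractions (a small sketch; the event sets restate the slide’s A and B):

```python
from fractions import Fraction

omega = set(range(1, 7))     # one roll of a fair die
A = {5, 6}                   # "5 or 6 points"
B = {2, 4, 6}                # "even number of points"

def P(event):
    # classical probability on a finite sample space
    return Fraction(len(event & omega), len(omega))

# multiplication rule: P(A ∩ B) = P(A) · P(B)
assert P(A & B) == P(A) * P(B)
# symmetry: P(A | B) = P(A) and P(B | A) = P(B)
assert P(A & B) / P(B) == P(A)
assert P(A & B) / P(A) == P(B)
```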

Example
A football team is going to play 2 games. It will win the 1st game with
probability 0.7, win the 2nd game with probability 0.5.
If the games are independent, what is the probability of winning at least
one game?

Answer: 1 − (1 − 0.7) · (1 − 0.5) = 1 − 0.15 = 0.85

Properties of independent events

1. If A ⊥⊥ B, then B ⊥⊥ A
2. Always A ⊥⊥ ∅, A ⊥⊥ Ω
3. If A ⊥⊥ B, then Ā ⊥⊥ B, A ⊥⊥ B̄, Ā ⊥⊥ B̄.

Independence of many random events
Events A1 , A2 , . . . , An are called independent if for any collection
Ak1 , Ak2 , . . . , Akm it holds that

P (Ak1 ∩ . . . ∩ Akm ) = P (Ak1 ) · . . . · P (Akm )

(k1 , . . . , km are distinct indices).

Be careful: if Ai and Aj are independent for all i, j, it doesn’t necessarily mean that
A1 , . . . , An are mutually independent.
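The warning above can be made concrete with the classic two-coin example (my own illustration, not from the slides): for two fair coin flips, the events “first flip heads”, “second flip heads”, and “both flips equal” are pairwise independent but not mutually independent.

```python
from fractions import Fraction
from itertools import product

omega = set(product("HT", repeat=2))    # two fair coin flips
A = {w for w in omega if w[0] == "H"}   # first flip is heads
B = {w for w in omega if w[1] == "H"}   # second flip is heads
C = {w for w in omega if w[0] == w[1]}  # both flips are equal

def P(event):
    return Fraction(len(event), len(omega))

# every pair is independent ...
assert P(A & B) == P(A) * P(B)
assert P(A & C) == P(A) * P(C)
assert P(B & C) == P(B) * P(C)
# ... but the three events together are not: 1/4 on the left, 1/8 on the right
assert P(A & B & C) != P(A) * P(B) * P(C)
```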

Example
A multiple-choice test has 10 questions, each with 4 answer choices, only
one of which is correct. Suppose a student guesses all the answers.

1. What is the probability of answering all questions correctly?

Answer: 0.25^10 ≈ 1/1 000 000

2. What is the probability of answering at least one question correctly?

Answer: 1 − 0.75^10 ≈ 0.94

3. What is the probability of answering exactly 4 questions correctly?

Answer: C_10^4 · 0.25^4 · 0.75^6 (see the next slide)
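The three answers can be checked numerically (a small sketch using Python’s `math.comb` for the binomial coefficient):

```python
from math import comb

n, p = 10, 0.25   # 10 questions, probability 1/4 of guessing each one

all_correct = p ** n                          # answer 1
at_least_one = 1 - (1 - p) ** n               # answer 2
exactly_four = comb(n, 4) * p**4 * (1 - p)**6 # answer 3

print(all_correct)    # ≈ 9.5e-07, about one in a million
print(at_least_one)   # ≈ 0.944
print(exactly_four)   # ≈ 0.146
```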

Binomial probabilities (again)
Theorem
Let A1 , . . . , An be independent random events, each with probability p.
Then
P (“exactly k of them happen”) = C_n^k · p^k · (1 − p)^(n−k).

Proof
Step 1: P (Ai1 ∩ . . . ∩ Aik ∩ Āik+1 ∩ . . . ∩ Āin ) = p^k · (1 − p)^(n−k),
where i1 , . . . , in is a permutation of the indices 1, 2, . . . , n.
Step 2: the number of ways to choose the k indices of events which happen
is C_n^k.
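The theorem can be verified by brute-force enumeration for small n (a sketch following the two proof steps; the parameter values are arbitrary):

```python
from itertools import product
from math import comb

def exactly_k_by_enumeration(n, p, k):
    # Sum P(pattern) over all 0/1 outcome patterns in which exactly k of
    # the n events happen; by independence each such pattern has
    # probability p^k * (1-p)^(n-k)  (step 1 of the proof).
    total = 0.0
    for pattern in product([0, 1], repeat=n):
        if sum(pattern) == k:
            total += p ** k * (1 - p) ** (n - k)
    return total

n, p, k = 6, 0.3, 2
# agrees with the closed formula C_n^k * p^k * (1-p)^(n-k)
assert abs(exactly_k_by_enumeration(n, p, k)
           - comb(n, k) * p**k * (1 - p)**(n - k)) < 1e-12
```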

Independence of complementary events
In the proof, we used the following fact (similar to the case of two random
events):
if A1 , A2 , . . . , An are independent, then Ā1 , Ā2 , . . . , Ān are independent.

Therefore, we can take complements of independent events, and they
will remain independent.

Reading
Basic reading
• Wonnacotts: § 3.5
• Newbold, Carlson, Thorne: § 3.3 (starting from “Statistical
Independence”)

Detailed reading
• Durrett: § 1.3
• Bertsekas, Tsitsiklis: § 1.5
