Chap0: Discrete Random Variables
anis.rezguii@gmail.com
INSAT
Mathematics Department
Table of contents
1 Expected learning outcomes
2 Definitions
The Mass Distribution Function (MDF)
The cumulative distribution function CDF
The mathematical expectation
The variance and the standard deviation
3 Usual probability distributions
The uniform distribution
The Binomial distribution
The Hypergeometric distribution
The Geometric distribution
The Poisson distribution
Definition
Let (Ω, P) be a probability space describing a random experiment, and let X be a mapping

X : Ω −→ R
    ω 7−→ X(ω).

X is called a discrete random variable (d.r.v) if its range X(Ω) is a discrete subset of R.
Example of d.r.v

  ω      HH   TT   HT   TH
  X(ω)    2    0    1    1
Definition

We associate to a given d.r.v X its Mass Distribution Function (MDF), denoted by PX and defined by the mapping:

PX : X(Ω) −→ [0, 1]
       xi 7−→ PX(xi) = P{X = xi}

where {X = xi} = {ω ∈ Ω : X(ω) = xi}.
[Bar chart of the MDF for the two-coin example: PX(0) = 0.25, PX(1) = 0.5, PX(2) = 0.25.]
Example

We reconsider the same example of tossing two fair coins, where X is the number of heads. To get its MDF PX we evaluate:

PX(0) = P{X = 0} = P{TT} = P{T} × P{T} = 1/2 × 1/2 = 1/4,
(note that we have used the independence of the two tosses);

PX(1) = P{X = 1} = P{TH, HT} = P{TH} + P{HT} = 1/4 + 1/4 = 1/2;

PX(2) = P{X = 2} = P{HH} = P{H} × P{H} = 1/2 × 1/2 = 1/4.
PX is given by:

  X(Ω)   PX
    0    0.25
    1    0.5
    2    0.25
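The MDF of this example can also be computed directly from the sample space; a minimal Python sketch (the variable names are ours, not from the chapter):

```python
from collections import Counter
from itertools import product

# Sample space of two fair coin tosses; X counts the heads.
omega = list(product("HT", repeat=2))
X = {w: w.count("H") for w in omega}

# MDF: P_X(x) = (# outcomes with X = x) / |Omega|, all outcomes equally likely.
counts = Counter(X.values())
P_X = {x: counts[x] / len(omega) for x in sorted(counts)}
print(P_X)  # {0: 0.25, 1: 0.5, 2: 0.25}
```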
Notes

1. Note that ∑i PX(xi) = ∑i P{X = xi} = P(Ω) = 1.
2. Note the similarity between a discrete random variable and a discrete statistical variable:

     X(Ω)   PX            xi    fi
     x1     PX(x1)        x1    f1
     ...    ...           ...   ...
     ∑      1             ∑     100%
3. This similarity lets us model a statistical phenomenon by a theoretical d.r.v and then generate artificial data by computer simulation at no cost, which is very important for prediction and decision making.
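Point 3 can be illustrated by sampling from a theoretical MDF to produce artificial data; a minimal sketch (the seed and sample size are arbitrary choices of ours):

```python
import random
from collections import Counter

random.seed(0)
values, probs = [0, 1, 2], [0.25, 0.5, 0.25]  # the two-coin MDF as the model

# Generate artificial data from the theoretical d.r.v by simulation.
sample = random.choices(values, weights=probs, k=10_000)
freqs = {x: c / len(sample) for x, c in sorted(Counter(sample).items())}
print(freqs)  # empirical frequencies close to 0.25, 0.5, 0.25
```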
Definition

To any discrete random variable X we can associate its cumulative distribution function, defined as follows:

FX : R −→ [0, 1]
      x 7−→ FX(x) = P{X ≤ x}
Example

[Step plot of the CDF FX for the two-coin example: FX(x) = 0 for x < 0, then FX jumps to 0.25 at x = 0, to 0.75 at x = 1, and to 1 at x = 2.]
Note 1

Let X be a given d.r.v and denote by PX, FX its MDF and its CDF respectively. The CDF FX satisfies:

i.   FX is non-decreasing.
ii.  FX is càdlàg, i.e. right-continuous with left limits at every point.
iii. lim_{x→−∞} FX(x) = 0 and lim_{x→+∞} FX(x) = 1.
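For a d.r.v the CDF is just a finite sum of masses; a small Python sketch (the function name is ours):

```python
def cdf(x, mdf):
    """F_X(x) = P{X <= x}: sum the masses of all values not exceeding x."""
    return sum(p for xi, p in mdf.items() if xi <= x)

mdf = {0: 0.25, 1: 0.5, 2: 0.25}  # the two-coin example
print(cdf(-0.5, mdf), cdf(0, mdf), cdf(1.5, mdf), cdf(2, mdf))
# 0 0.25 0.75 1.0
```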
Definition

Let X be a d.r.v and X(Ω) = {xi} its set of values. The mathematical expectation of X, denoted by E(X), is the quantity:

E(X) = ∑i xi PX(xi).
Example

Reconsider the last example where X was the number of heads; we get:

E(X) = 0 × 1/4 + 1 × 1/2 + 2 × 1/4 = 1/2 + 1/2 = 1.
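The same computation with exact rational arithmetic, as a minimal sketch:

```python
from fractions import Fraction

# E(X) = sum_i x_i * P_X(x_i) for the two-coin example.
mdf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
E = sum(x * p for x, p in mdf.items())
print(E)  # 1
```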
Exercise

Let X be a d.r.v with three possible values {−1, 0, 1} such that its MDF is as follows:

  X(Ω)   PX
   −1    q
    0    1/2
    1    p

and its expectation E(X) = 0.

1. Determine the MDF of X, PX.
2. Suppose now that P{X = 0} is also unknown; could you still determine PX?
Solution

Markov's inequality:

P{X ≥ λ} ≤ E(X)/λ.
The variance

Definition

Let X be a d.r.v and X(Ω) = {xi} its range. Its variance, denoted by V(X) or σX², is the non-negative quantity:

V(X) = σX² = ∑i (xi − E(X))² PX(xi).
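The definition translates directly into code; a minimal sketch reusing the two-coin MDF (function names are ours):

```python
def expectation(mdf):
    return sum(x * p for x, p in mdf.items())

def variance(mdf):
    # V(X) = sum_i (x_i - E(X))**2 * P_X(x_i)
    m = expectation(mdf)
    return sum((x - m) ** 2 * p for x, p in mdf.items())

mdf = {0: 0.25, 1: 0.5, 2: 0.25}
print(variance(mdf))  # 0.5, so the standard deviation is sqrt(0.5)
```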
Note

Chebyshev's inequality is often stated as follows:

P{|X − E(X)| ≥ nσ} ≤ 1/n²,

which can be read

P{X ∈ [E(X) − nσ, E(X) + nσ]} ≥ 1 − 1/n².

If for example we take n = 6, we obtain, for any random variable X (that is, for any phenomenon), the estimate:

P{X ∈ [E(X) − 6σ, E(X) + 6σ]} ≥ 1 − 1/36 ≈ 97.22%.
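The bound can be checked empirically by simulation; a sketch on the two-coin d.r.v with n = 2 (seed and sample size are arbitrary choices of ours):

```python
import random

random.seed(1)
# Chebyshev on the two-coin d.r.v: E(X) = 1, sigma = sqrt(0.5).
# With n = 2 the guaranteed lower bound is 1 - 1/4 = 0.75.
mu, sigma, n = 1.0, 0.5 ** 0.5, 2
sample = random.choices([0, 1, 2], weights=[0.25, 0.5, 0.25], k=10_000)
inside = sum(mu - n * sigma <= x <= mu + n * sigma for x in sample) / len(sample)
print(inside, inside >= 1 - 1 / n**2)  # here every sampled value lies inside
```

Chebyshev only guarantees 0.75 here; the actual coverage is much higher, which illustrates that the bound is universal but often loose.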
Definition

Let X be a d.r.v with a finite range X(Ω) = {x1, · · · , xn} for some n ∈ N. We say that X follows the uniform distribution on {x1, · · · , xn}, and we denote

X ,→ U{x1, · · · , xn},

if

P{X = x1} = · · · = P{X = xn} = 1/n.
Examples
ii.

V(X) = (1/n) ∑_{i=1}^{n} xi² − (1/n²) (∑_{i=1}^{n} xi)².
Note: observe the similarity with the sample mean and the sample variance of a data set.
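For a fair die, X ,→ U{1, · · · , 6}, the formula above can be evaluated exactly; a minimal sketch with rational arithmetic:

```python
from fractions import Fraction

xs = [1, 2, 3, 4, 5, 6]  # a fair die: X follows U{1,...,6}
n = len(xs)
E = Fraction(sum(xs), n)
# V(X) = (1/n) * sum(x_i**2) - (1/n**2) * (sum(x_i))**2
V = Fraction(sum(x * x for x in xs), n) - Fraction(sum(xs), n) ** 2
print(E, V)  # 7/2 35/12
```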
Definition
Let X be a d.r.v, n ∈ N and p ∈]0, 1[. X is said to follow a
binomial distribution with n repetitions and a ”success” probability
p, or just with parameters (n, p), if:
i) X (Ω) = {0, 1, · · · , n},
ii) for any k = 0, 1, · · · , n, P{X = k} = Cnk p k (1 − p)n−k .
We denote
X ,→ β(n, p).
Typical Example
Checking

If X ,→ β(n, p), then

E(X) = n × p,
V(X) = n × p × (1 − p).
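These two formulas can be verified numerically from the MDF; a sketch with parameters n = 10, p = 0.3 chosen for illustration:

```python
from math import comb

def binom_pmf(k, n, p):
    # P{X = k} = C(n, k) * p**k * (1 - p)**(n - k)
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
E = sum(k * q for k, q in enumerate(pmf))
V = sum((k - E) ** 2 * q for k, q in enumerate(pmf))
print(round(E, 10), round(V, 10))  # 3.0 2.1, i.e. np and np(1-p)
```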
Definition

Let N1, N2 and n be three integers such that N1 + N2 > n, and let X be a discrete random variable. X is said to follow a hypergeometric distribution with parameters (N1, N2, n), denoted X ,→ HG(N1, N2, n), if

i)  X(Ω) = {0, 1, · · · , n},
ii) for any k = 0, · · · , n,

    P{X = k} = C_{N1}^k C_{N2}^{n−k} / C_{N1+N2}^n.
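A sketch of this MDF (the parameters N1 = 5, N2 = 7, n = 4 are an arbitrary illustration); the masses sum to 1 by Vandermonde's identity:

```python
from math import comb

def hypergeom_pmf(k, N1, N2, n):
    # P{X = k} = C(N1, k) * C(N2, n - k) / C(N1 + N2, n)
    return comb(N1, k) * comb(N2, n - k) / comb(N1 + N2, n)

# e.g. drawing n = 4 balls without replacement among N1 = 5 red, N2 = 7 black
probs = [hypergeom_pmf(k, 5, 7, 4) for k in range(5)]
print(round(sum(probs), 12))  # 1.0
```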
Typical example
Definition

Let p ∈ ]0, 1[ and X a discrete random variable. X is said to follow a geometric distribution with parameter p, denoted X ,→ G(p), if

i)  X(Ω) = N*,
ii) for any k ≥ 1, P{X = k} = (1 − p)^(k−1) p.
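With this mass function, the well-known mean 1/p can be checked numerically; a sketch with p = 0.2 (an arbitrary choice; the infinite series is truncated):

```python
def geom_pmf(k, p):
    # P{X = k} = (1 - p)**(k - 1) * p: first success at trial k
    return (1 - p) ** (k - 1) * p

p = 0.2
# Truncate the series far enough that the remaining tail is negligible.
E = sum(k * geom_pmf(k, p) for k in range(1, 10_000))
print(round(E, 6))  # 5.0, i.e. 1/p
```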
Typical example
Definition

Let λ ∈ R*₊ and X a d.r.v. We say that X follows a Poisson distribution with parameter λ, denoted X ,→ P(λ), if

i)  X(Ω) = N,
ii) for any k ≥ 0,

    P{X = k} = e^(−λ) λ^k / k!.
Typical example
ii. V(X) = λ.
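Both E(X) = λ and V(X) = λ can be checked from the MDF; a sketch with λ = 0.9, the value used in the exercise below (the series is truncated):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P{X = k} = exp(-lam) * lam**k / k!
    return exp(-lam) * lam**k / factorial(k)

lam = 0.9
ks = range(100)  # truncation: the tail beyond k = 100 is negligible here
E = sum(k * poisson_pmf(k, lam) for k in ks)
V = sum((k - E) ** 2 * poisson_pmf(k, lam) for k in ks)
print(round(E, 6), round(V, 6))  # 0.9 0.9
```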
Exercise

Solution

1. The average m is just the total number of accidents divided by the number of days:

   m = (1 × 18 + 2 × 7 + 3 × 3 + 4 × 1) / 50 = 0.9.
Still solution

3. The theoretical number of days with exactly i accidents is ti = PX(i) × 50:

     i     0        1        2       3       4
     ti    20.328   18.295   8.233   2.469   0.555
   It is clear that the theoretical frequency distribution is very close to the experimental (statistical) one, so we have good reason to trust the Poisson model.
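The theoretical counts in the table can be reproduced in a few lines (a sketch; 50 is the number of observed days):

```python
from math import exp, factorial

# t_i = 50 * P_X(i) for X following P(0.9), as in the table above.
lam, days = 0.9, 50
t = [days * exp(-lam) * lam**i / factorial(i) for i in range(5)]
print([round(ti, 2) for ti in t])  # [20.33, 18.3, 8.23, 2.47, 0.56]
```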
4. Without waiting for a year, we can estimate the number of days without any accident.