MSO201 Week 3 Lecture Notes


3. Week 3

Definition 3.1 (Independence of Two events). Let A, B be events in a common probability space
(Ω, F, P). We say that A and B are independent if P(A ∩ B) = P(A)P(B).

Note 3.2. If A = ∅, then A is independent of any other event B. To see this, observe that

P(A ∩ B) = P(∅) = 0, P(A)P(B) = 0 × P(B) = 0.

Again, if A = Ω, then A is independent of any other event. Observe that

P(A ∩ B) = P(B) = P(A)P(B).

Remark 3.3 (Independence and Mutual Exclusivity (Pairwise Disjointness) of two events).
Independence should not be confused with pairwise disjointness! If A and B are disjoint, P(A∩B) =
P(∅) = 0 and hence A and B can be independent if and only if at least one of P(A) or P(B) equals
0. If A and B are disjoint, then A ⊆ B c and B ⊆ Ac . If we know that A has occurred, then we
immediately conclude that B did not occur. Independence is not to be expected in such situations.
On the other hand, if P(A) > 0 and P(B) > 0 and A, B are independent, then P(A ∩ B) > 0. In
this situation, A and B cannot be mutually exclusive.

Remark 3.4 (Conditional probability and Independence of two events). If A, B are independent
and P(A) > 0, then P(B | A) = P(A ∩ B)/P(A) = P(B).

Example 3.5. Recall Example 2.27, where we considered rolling a standard six-sided die twice
at random. The sample space is Ω = {(i, j) : i, j ∈ {1, 2, 3, 4, 5, 6}} with P({(i, j)}) = 1/36, ∀(i, j) ∈ Ω.
Consider the events A = {(i, j) : i is odd}, B = {(i, j) : i + j = 4} and C = {(i, j) : j = 2}.
Observe that

P(A) = 1/2, P(B) = 1/12, P(A ∩ B) = 1/18.

Since P(A)P(B) = 1/24 ≠ 1/18, the events A and B are not independent. Again,

P(A) = 1/2, P(C) = 1/6, P(A ∩ C) = 1/12.

Since P(A)P(C) = 1/12 = P(A ∩ C), the events A and C are independent.
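The computations in Example 3.5 are easy to verify by brute-force enumeration of the 36 outcomes. The sketch below is illustrative only (the helper `prob` and the event names are our own), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Two rolls of a fair die: 36 equally likely outcomes.
omega = set(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

A = {(i, j) for (i, j) in omega if i % 2 == 1}   # first roll odd
B = {(i, j) for (i, j) in omega if i + j == 4}   # rolls sum to 4
C = {(i, j) for (i, j) in omega if j == 2}       # second roll equals 2

assert (prob(A), prob(B), prob(A & B)) == (Fraction(1, 2), Fraction(1, 12), Fraction(1, 18))
assert prob(A & B) != prob(A) * prob(B)   # A, B not independent
assert prob(A & C) == prob(A) * prob(C)   # A, C independent
print("Example 3.5 verified")
```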

Definition 3.6 (Mutual Independence of a collection of events). (a) Let {E1 , E2 , · · · , En } be
a finite collection of events in a probability space (Ω, F, P). We say that this collection of
events is mutually independent or equivalently, the events are mutually independent, if for
all k ∈ {2, 3, · · · , n} and indices 1 ≤ i1 < i2 < · · · < ik ≤ n, we have

P(E_{i_1} ∩ E_{i_2} ∩ · · · ∩ E_{i_k}) = P(E_{i_1}) P(E_{i_2}) · · · P(E_{i_k}).

(b) Let I be any indexing set and let {Ei : i ∈ I} be a collection of events in a probability space
(Ω, F, P). We say that this collection of events is mutually independent or equivalently, the
events are mutually independent, if all finite subcollections {E_{i_1} , E_{i_2} , · · · , E_{i_k} } are mutually
independent.

Definition 3.7 (Pairwise Independence of a collection of events). Let I be any indexing set and
let {Ei : i ∈ I} be a collection of events in a probability space (Ω, F, P). We say that this collection
of events is pairwise independent or equivalently, the events are pairwise independent, if for all
distinct indices i and j, the events Ei and Ej are independent.
 
Note 3.8. To check that events E1 , E2 , · · · , En are mutually independent, we need to check
C(n, 2) + C(n, 3) + · · · + C(n, n) = 2^n − n − 1 conditions, where C(n, k) denotes the binomial
coefficient. However, to check that these events are pairwise independent, we need to check only
C(n, 2) conditions.
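The identity behind this count can be checked numerically. The small script below (illustrative only; the name `mutual_conditions` is our own) verifies it for a few values of n:

```python
from math import comb

def mutual_conditions(n):
    """Number of product equalities to verify for mutual independence of n events."""
    return sum(comb(n, k) for k in range(2, n + 1))

for n in (3, 4, 10):
    assert mutual_conditions(n) == 2**n - n - 1   # the identity in Note 3.8

print(mutual_conditions(3), comb(3, 2))   # 4 conditions vs 3 pairwise checks
```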

Remark 3.9 (Mutual independence and Pairwise independence of a collection of events). If a
collection of events is mutually independent, then by definition the events are also pairwise independent.
We consider an example to see that the converse need not be true. Consider a random experiment E
with sample space Ω = {1, 2, 3, 4} and event space F = 2^Ω . If the outcomes are equally likely, then
we have the probability function/measure P determined by the information P({ω}) = 1/4, ∀ω ∈ Ω.
Consider the events A = {1, 4}, B = {2, 4}, C = {3, 4}. Then P(A) = P(B) = P(C) = 1/2. Moreover,
A ∩ B = B ∩ C = C ∩ A = {4} and hence P(A ∩ B) = P(B ∩ C) = P(C ∩ A) = 1/4. Since each of these
equals the corresponding product of probabilities (1/2 × 1/2 = 1/4), the collection of events {A, B, C}
is pairwise independent. However, A ∩ B ∩ C = {4} and hence P(A ∩ B ∩ C) = 1/4 ≠ 1/8 =
P(A)P(B)P(C). Here, the collection {A, B, C} is not mutually independent.
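This example can be checked mechanically. The sketch below (with our own hypothetical helper `prob`) confirms pairwise but not mutual independence:

```python
from fractions import Fraction
from itertools import combinations

omega = {1, 2, 3, 4}

def prob(event):
    """Uniform probability on omega."""
    return Fraction(len(event & omega), len(omega))

A, B, C = {1, 4}, {2, 4}, {3, 4}

# Every pair satisfies the product rule ...
assert all(prob(E & F) == prob(E) * prob(F) for E, F in combinations([A, B, C], 2))
# ... but the triple intersection does not: 1/4 != 1/8.
assert prob(A & B & C) != prob(A) * prob(B) * prob(C)
print(prob(A & B & C), prob(A) * prob(B) * prob(C))
```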

Notation 3.10. We say that a collection of events is independent or equivalently, some events are
independent to mean that the collection is (equivalently, the events are) mutually independent.

Remark 3.11. Consider random experiments where we roll a standard six-sided die thrice or draw
two balls from a bin/box/urn. In such experiments multiple draws/trials of the same operations
are being performed. In such situations, if the events depending purely on different trials are
independent, then we say that the trials are being performed independently. In Example 3.5, the
two throws/rolls of the die are independent. Here, the events A and C which depend on the first and
the second throws respectively, are independent. If we toss a fair coin twice independently, then the
sample space is Ω = {HH, HT, TH, TT} with P({HH}) = P({HT}) = P({TH}) = P({TT}) = 1/4.

Note 3.12 (Functions defined on sample spaces). While studying a random phenomenon within
the framework of a random experiment, in most situations we shall be interested in numerical
quantities associated with the outcomes. To elaborate, consider the following two examples.

(a) Consider the random experiment of tossing a coin once. As discussed earlier, the sample
space is Ω = {H, T }. Suppose we think of the occurrence of head as winning a rupee and
the occurrence of a tail as losing a rupee. This information may be captured by a function
X : Ω → R given by
X(H) := 1, X(T ) := −1.

(b) Consider the random experiment of tossing a coin until head appears. The sample space
may be written as Ω = {H, T H, T T H, T T T H, · · · }. If we are interested in the number
of tosses required to obtain the first head, then the information can be captured by the
function X : Ω → R given by

X(H) := 1, X(T H) := 2, X(T T H) := 3, X(T T T H) := 4, · · ·

Now, we focus on analysis of such functions X defined on the sample space Ω of some random
experiment E.

Notation 3.13 (Pre-image of a set under a function). Let Ω be a non-empty set and let X : Ω → R
be a function. Given any subset A of R, we consider the subset X −1 (A) of Ω defined by

X −1 (A) := {ω ∈ Ω : X(ω) ∈ A}.

The set X −1 (A) shall be referred to as the pre-image of A under the function X.

Remark 3.14. In Notation 3.13, we do not know whether the function X is bijective. As such, we
cannot identify X −1 as the ‘inverse’ function of X. To avoid any confusion, treat X −1 (A) as one
symbol referring to the set as defined above and do not identify it as a combination of symbols
X −1 and A.

Remark 3.15 (Shorthand notation for Pre-images). In the setting of Notation 3.13, we shall
suppress the symbol ω and use the following notation for convenience, viz.

X −1 (A) = {ω ∈ Ω : X(ω) ∈ A} = (X ∈ A).

For specific sets A, other notations, again for convenience, may be used. For example:
(a) If A = (−∞, x], then we write

X −1 (A) = (X ∈ A) = {ω ∈ Ω : X(ω) ∈ (−∞, x]} = {ω ∈ Ω : X(ω) ≤ x} = (X ≤ x).

For A = (−∞, x), (x, ∞), [x, ∞), we shall write X −1 (A) to be equal to (X < x), (X >
x), (X ≥ x) respectively.
(b) If A = {x}, then we write

X −1 (A) = (X ∈ A) = {ω ∈ Ω : X(ω) ∈ {x}} = {ω ∈ Ω : X(ω) = x} = (X = x).

Remark 3.16 (Properties of pre-images). Let X : Ω → R be a function. The following are some
properties of the pre-images under X, which follow from the fact that X is a function.
(a) X −1 (R) = Ω.
(b) X −1 (∅R ) = ∅Ω , where ∅R and ∅Ω denote the empty subsets of R and Ω, respectively. When
there is no chance of confusion, we simply write X −1 (∅) = ∅.
(c) For any two subsets A, B of R with A ∩ B = ∅, we have X −1 (A) ∩ X −1 (B) = ∅.
(d) For any subset A of R, we have X −1 (A^c ) = (X −1 (A))^c .
(e) Let I be an indexing set. For any collection {Ai : i ∈ I} of subsets of R, we have

X −1 (⋃_{i∈I} Ai ) = ⋃_{i∈I} X −1 (Ai ),    X −1 (⋂_{i∈I} Ai ) = ⋂_{i∈I} X −1 (Ai ).

The above properties shall be used frequently throughout the course.
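For a finite sample space, these properties can be verified mechanically. In the sketch below (our own illustrative choices throughout: the helper `preimage`, the function `X`, and a finite set `values` containing the range of X that stands in for R), each property is checked on an example:

```python
def preimage(X, omega, A):
    """Pre-image X^{-1}(A) = {w in omega : X(w) in A}."""
    return {w for w in omega if X(w) in A}

# A toy sample space and an arbitrary (hypothetical) function on it.
omega = {1, 2, 3, 4, 5, 6}

def X(w):
    return w % 3

values = {X(w) for w in omega} | {7}   # finite stand-in for R; contains the range of X
A, B = {0, 1}, {2}

assert preimage(X, omega, values) == omega                              # X^{-1}(R) = Omega
assert preimage(X, omega, set()) == set()                               # empty set maps back to empty set
assert preimage(X, omega, A) & preimage(X, omega, B) == set()           # disjoint A, B stay disjoint
assert preimage(X, omega, values - A) == omega - preimage(X, omega, A)  # complements
assert preimage(X, omega, A | B) == preimage(X, omega, A) | preimage(X, omega, B)  # unions
print("pre-image properties verified on this example")
```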

Note 3.17. As discussed in Note 3.12, we now look at real valued functions defined on Ω, where Ω
is the sample space of a random experiment E. We shall also assume that (Ω, F, P) is a probability
space. The event space F shall be taken as the power set 2Ω , unless stated otherwise.

Definition 3.18 (Random variable or RV). Let (Ω, F, P) be a probability space. Any real valued
function X : Ω → R shall be referred to as a random variable or simply, an RV. In this case, we
shall say that X is an RV defined on the probability space (Ω, F, P).

Note 3.19. Since F is taken to be 2Ω , we immediately have

X −1 (A) = (X ∈ A) ∈ F

for any subset A of R. If F is taken to be a smaller collection of subsets of Ω, then the above
observation may not hold for any arbitrary function X. Given such F, we then restrict our
attention to the class of functions X satisfying the above property and refer to them as RVs. It is
therefore important to specify F before we discuss RVs X. As mentioned earlier, F shall be taken
as 2Ω , unless stated otherwise.

Note 3.20. The probability function/measure P has not been used in the definition of an RV X.
We now discuss the role of P in analysis of RVs X.

Notation 3.21. We write B to denote the power set of R.

Notation 3.22. Let X be an RV defined on a probability space (Ω, F, P). Then for all A ∈ B, we
have X −1 (A) ∈ F and hence P(X −1 (A)) is well defined. We denote this in terms of a set function
P ◦ X −1 : B → [0, 1] given by P ◦ X −1 (A) := P(X −1 (A)) = P(X ∈ A), ∀A ∈ B. A shorthand
notation PX shall also be used to refer to P ◦ X −1 .

Notation 3.23. Similar to the discussion in Remark 3.15, we shall write P(X ≤ x), P(X = x) etc.
for P ◦ X −1 (A) where A = (−∞, x], {x} etc. respectively.

Proposition 3.24. Let X be an RV defined on a probability space (Ω, F, P). Then, the set function
P ◦ X −1 is a probability function/measure defined on the collection B.

Proof. We verify the axioms/properties of a probability function/measure as mentioned in Definition 1.27.
We have P ◦ X −1 (R) = P(X −1 (R)) = P(Ω) = 1. Since P is a probability measure on F, we also
have P ◦ X −1 (A) = P(X −1 (A)) ≥ 0, ∀A ∈ B.
If {An }n is a sequence of pairwise disjoint sets in B, then {X −1 (An )}n is a sequence of pairwise
disjoint events in F. Hence,

P ◦ X −1 (⋃_{n=1}^{∞} An ) = P(⋃_{n=1}^{∞} X −1 (An )) = Σ_{n=1}^{∞} P(X −1 (An )) = Σ_{n=1}^{∞} P ◦ X −1 (An ).

This proves the countable additivity property for P ◦ X −1 and the proof is complete. □

Definition 3.25 (Induced Probability Space and Induced Probability Measure). If X is an RV
defined on a probability space (Ω, F, P), then the probability function/measure P ◦ X −1 on B is
referred to as the probability function/measure induced by X. In this case, (R, B, P ◦ X −1 )
is referred to as the probability space induced by X.

Example 3.26. Recall from Remark 3.11 that if we toss a fair coin twice independently, then the
sample space is Ω = {HH, HT, TH, TT} with P({HH}) = P({HT}) = P({TH}) = P({TT}) = 1/4.
Consider the RV X : Ω → R which denotes the number of heads. Here,

X(HH) = 2, X(HT) = X(TH) = 1, X(TT) = 0.

Consider the induced probability measure P ◦ X −1 on B. We have

P ◦ X −1 ({0}) = P(X −1 ({0})) = P({TT}) = 1/4,
P ◦ X −1 ({1}) = P(X −1 ({1})) = P({HT, TH}) = 1/2,
P ◦ X −1 ({2}) = P(X −1 ({2})) = P({HH}) = 1/4.

More generally, for any A ∈ B, we have

P ◦ X −1 (A) = P({ω : X(ω) ∈ A}) = Σ_{i ∈ {0,1,2}∩A} P ◦ X −1 ({i}).
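The induced measure of this example can be computed by enumeration. In the sketch below, `law` is our own name for the map A ↦ P ◦ X −1 (A):

```python
from fractions import Fraction

# Fair coin tossed twice; X counts the heads (Example 3.26).
omega = ["HH", "HT", "TH", "TT"]
X = {"HH": 2, "HT": 1, "TH": 1, "TT": 0}

def law(A):
    """Induced measure (P o X^{-1})(A) = P(X in A), uniform P on omega."""
    return Fraction(sum(1 for w in omega if X[w] in A), len(omega))

assert law({0}) == Fraction(1, 4)
assert law({1}) == Fraction(1, 2)
assert law({2}) == Fraction(1, 4)
assert law(set(range(-3, 10))) == 1   # law is itself a probability measure
print(law({0}), law({1}), law({2}))
```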

Remark 3.27. If we know the probability function/measure PX = P ◦ X −1 for an RV X, then we
know all the probabilities P(X ∈ A), A ∈ B of events X −1 (A) = (X ∈ A) involving the RV X.
In what follows, our analysis of an RV X shall be through the understanding of the probability
function/measure PX = P ◦ X −1 on B.

Definition 3.28 (Law/Distribution of an RV). If X is an RV defined on a probability space


(Ω, F, P), then the probability function/measure PX on B is referred to as the law or distribution
of the RV X.

We now discuss some properties of a probability function/measure. To do this, we first introduce
a concept involving sequences of sets.

Definition 3.29 (Increasing and decreasing sequence of sets). Let {An }n be a sequence of subsets
of a non-empty set Ω.

(a) If An ⊆ An+1 , ∀n = 1, 2, · · · , we say that the sequence {An }n is increasing. In this case, we
say An increases to A, denoted by An ↑ A, where A = ⋃_{n=1}^{∞} An .
(b) If An ⊇ An+1 , ∀n = 1, 2, · · · , we say that the sequence {An }n is decreasing. In this case,
we say An decreases to A, denoted by An ↓ A, where A = ⋂_{n=1}^{∞} An .

Remark 3.30. An ↑ A if and only if Acn ↓ Ac .

Proposition 3.31 (Continuity of a probability measure). Let (Ω, F, P) be a probability space.

(a) (Continuity from below) Let {An }n be a sequence in F such that An ↑ A. Then

lim_{n→∞} P(An ) = P(A).

(b) (Continuity from above) Let {An }n be a sequence in F such that An ↓ A. Then

lim_{n→∞} P(An ) = P(A).

Proof. We first prove statement (a). Since {An }n is an increasing sequence of sets, we have

An ∩ (A1 ∪ A2 ∪ · · · ∪ An−1 )^c = An ∩ A^c_{n−1} , ∀n ≥ 2.

Then, using a hint from practice problem set 1, we have

⋃_{n=1}^{∞} An = A1 ∪ ⋃_{n=2}^{∞} (An ∩ A^c_{n−1} ).

Since the sets A1 , A2 ∩ A^c_1 , A3 ∩ A^c_2 , · · · are pairwise disjoint, using the countable additivity of P,
we have

P(⋃_{n=1}^{∞} An ) = P(A1 ) + Σ_{n=2}^{∞} P(An ∩ A^c_{n−1} )
                 = P(A1 ) + lim_{k→∞} Σ_{n=2}^{k} P(An ∩ A^c_{n−1} )
                 = P(A1 ) + lim_{k→∞} Σ_{n=2}^{k} [P(An ) − P(An−1 )]
                 = P(A1 ) + lim_{k→∞} [P(Ak ) − P(A1 )] = lim_{k→∞} P(Ak ).

Here, P(An ∩ A^c_{n−1} ) = P(An ) − P(An−1 ) since An−1 ⊆ An . This completes the proof of
statement (a).
To prove statement (b), first observe that A^c_n ↑ A^c with A = ⋂_{n=1}^{∞} An . Using statement (a),
we have

P(A) = 1 − P(A^c ) = 1 − lim_{n→∞} P(A^c_n ) = lim_{n→∞} [1 − P(A^c_n )] = lim_{n→∞} P(An ).

The proof is complete. □
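Continuity from below can be observed numerically. As an illustration (our own example, not from the notes), take Ω = {1, 2, · · · } with P({k}) = 2^{−k} and An = {1, · · · , n} ↑ Ω, so that P(An ) = 1 − 2^{−n} → 1 = P(Ω):

```python
from fractions import Fraction

def P(event):
    """Probability of a finite subset of {1, 2, ...} under P({k}) = 2^{-k}."""
    return sum(Fraction(1, 2**k) for k in event)

# A_n = {1, ..., n} increases to the whole space; P(A_n) = 1 - 2^{-n} -> 1.
for n in (1, 5, 20):
    assert P(range(1, n + 1)) == 1 - Fraction(1, 2**n)

print([float(P(range(1, n + 1))) for n in (1, 5, 20)])
```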

Definition 3.32 (Distribution function of an RV). Let X be an RV defined on a probability space


(Ω, F, P) with law/distribution PX . Consider the function FX : R → R defined by FX (x) :=
PX ((−∞, x]) = P(X ≤ x), ∀x ∈ R. The function FX is called the cumulative distribution function
(CDF) or simply, the distribution function (DF) of the RV X.

Remark 3.33 (RVs equal in law/distribution). Let X be an RV defined on a probability space


(Ω, F, P) and let Y be an RV defined on a probability space (Ω0 , F 0 , P0 ). If P ◦ X −1 = P0 ◦ Y −1 , i.e.
P ◦ X −1 (A) = P0 ◦ Y −1 (A), ∀A ∈ B, then we say that X and Y are equal in law/distribution. In
this case, FX = FY , i.e. FX (x) = FY (x), ∀x ∈ R.

Remark 3.34. Let X and Y be two RVs, possibly defined on different probability spaces. If FX =
FY , then it can be shown that X and Y are equal in law/distribution. The proof of this statement
is beyond the scope of this course. This statement is often restated as ‘the DF of an RV uniquely
determines the law/distribution of the RV’.

Example 3.35. Consider X as in Example 3.26. Then for all x ∈ R, we have

FX (x) = PX ((−∞, x]) = Σ_{i ∈ {0,1,2}∩(−∞,x]} PX ({i})
       = 0,                                if x < 0,
         PX ({0}),                         if 0 ≤ x < 1,
         PX ({0}) + PX ({1}),              if 1 ≤ x < 2,
         PX ({0}) + PX ({1}) + PX ({2}),   if x ≥ 2.

Therefore,

FX (x) = 0,    if x < 0,
         1/4,  if 0 ≤ x < 1,
         3/4,  if 1 ≤ x < 2,
         1,    if x ≥ 2.
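This CDF can be coded directly from the induced measure of Example 3.26; the function name `F_X` below is our own:

```python
from fractions import Fraction

# Law of X = number of heads in two fair tosses (Example 3.26).
p = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

def F_X(x):
    """CDF F_X(x) = P(X <= x), built by summing the induced measure."""
    return sum(p[i] for i in p if i <= x)

assert F_X(-0.5) == 0                # x < 0
assert F_X(0.5) == Fraction(1, 4)    # 0 <= x < 1
assert F_X(1.5) == Fraction(3, 4)    # 1 <= x < 2
assert F_X(7) == 1                   # x >= 2
print("CDF values match Example 3.35")
```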

Theorem 3.36. Let X be an RV defined on a probability space (Ω, F, P) with law PX and DF FX .
Then
(a) FX is non-decreasing, i.e. FX (x) ≤ FX (y), ∀x < y.
(b) FX is right continuous, i.e. limh↓0 FX (x + h) = FX (x), ∀x ∈ R.
(c) FX (−∞) := limx→−∞ FX (x) = 0 and FX (∞) := limx→∞ FX (x) = 1.

Proof. For all x < y, observe that (−∞, x] ( (−∞, y]. Since PX is a probability measure, we have
PX ((−∞, x]) ≤ PX ((−∞, y]). The statement (a) follows.

By definition, FX takes values in [0, 1] and hence it is bounded. Since FX is non-decreasing, the
limit FX (x+) = lim_{h↓0} FX (x + h) exists for all x ∈ R. Moreover, by a standard fact from real
analysis, FX (x+) = lim_{n→∞} FX (x + 1/n). By Proposition 3.31, we have

FX (x+) = lim_{n→∞} FX (x + 1/n) = lim_{n→∞} PX ((−∞, x + 1/n]) = PX ((−∞, x]) = FX (x).

This proves statement (b). Here, we use the fact that (−∞, x + 1/n] ↓ (−∞, x].
Similar to the proof of statement (b), we have

FX (−∞) = lim_{n→∞} FX (−n) = lim_{n→∞} PX ((−∞, −n]) = PX (∅) = 0,

and

FX (∞) = lim_{n→∞} FX (n) = lim_{n→∞} PX ((−∞, n]) = PX (R) = 1.

Here, we use the facts that (−∞, −n] ↓ ∅ and (−∞, n] ↑ R. This proves statement (c). □

The next theorem is stated without proof. The arguments required to prove this statement are
beyond the scope of this course.

Theorem 3.37. Let F : R → R be a non-decreasing and right continuous function such that
limx→−∞ F (x) = 0 and limx→∞ F (x) = 1. Then there exists an RV X defined on some probability
space (Ω, F, P) such that F = FX , i.e. F (x) = FX (x), ∀x.

Remark 3.38. Given any function F : R → R, once we verify the conditions of Theorem 3.37
(non-decreasing, right continuous, with limits 0 at −∞ and 1 at ∞), we can conclude that F is the
DF of some RV.

Example 3.39. Consider the function F : R → R defined by

F (x) = 0,  if x < 0,
        x,  if 0 ≤ x < 1,
        1,  if x ≥ 1.

The function is a constant on (−∞, 0) and on [1, ∞). Moreover, it is non-decreasing in the interval
[0, 1). Further for x < 0, y ∈ (0, 1), z > 1, we have

F (x) = F (0) < F (y) < F (1) = F (z).

Hence, F is non-decreasing over R. Again, by definition F is continuous on the intervals (−∞, 0),
(0, 1) and (1, ∞). We check for right continuity at the points 0 and 1. We have

lim_{h↓0} F (0 + h) = lim_{h↓0} h = 0 = F (0),    lim_{h↓0} F (1 + h) = lim_{h↓0} 1 = 1 = F (1).

Hence, F is right continuous on R. Finally, limx→−∞ F (x) = limx→−∞ 0 = 0 and limx→∞ F (x) =
limx→∞ 1 = 1. Hence, F is the DF of some RV. Later on, we shall identify the corresponding RV.
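The three conditions of Theorem 3.37 can also be probed numerically for this F. The sketch below is an illustrative check only: a finite grid and far-out sample points stand in for the limits.

```python
def F(x):
    """Candidate DF from Example 3.39: 0 below 0, x on [0, 1), 1 from 1 onward."""
    if x < 0:
        return 0.0
    if x < 1:
        return float(x)
    return 1.0

xs = [k / 100 - 2 for k in range(401)]                  # grid on [-2, 2]
assert all(F(a) <= F(b) for a, b in zip(xs, xs[1:]))    # non-decreasing
for x in (0, 1):                                        # right continuity at 0 and 1
    assert abs(F(x + 1e-9) - F(x)) < 1e-6
assert F(-1e9) == 0.0 and F(1e9) == 1.0                 # tail limits, sampled far out
print("numerical checks for Example 3.39 passed")
```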

Proposition 3.40 (Further properties of a DF). Let X be an RV defined on a probability space


(Ω, F, P) with law PX and DF FX .

(a) For all x ∈ R, the limit FX (x−) = limh↓0 FX (x − h) exists and equals PX ((−∞, x)) =
P(X < x).

Proof. Since FX is non-decreasing and bounded, as argued in Theorem 3.36, the limit
FX (x−) = lim_{h↓0} FX (x − h) exists and moreover, by Proposition 3.31 we have

FX (x−) = lim_{n→∞} FX (x − 1/n) = lim_{n→∞} PX ((−∞, x − 1/n]) = PX ((−∞, x)) = P(X < x).

Here, we use the fact that (−∞, x − 1/n] ↑ (−∞, x). □

(b) For all x ∈ R, P(X ≥ x) = 1 − FX (x−).

Proof. We have P(X ≥ x) = PX ([x, ∞)) = PX ((−∞, x)^c ) = 1 − PX ((−∞, x)) = 1 − FX (x−). □

(c) For any x ∈ R, FX (x−) ≤ FX (x+).

Proof. By the non-decreasing property of FX , for all x ∈ R and positive integers n, we have,
FX (x − n1 ) ≤ FX (x + n1 ). Letting n go to infinity in this inequality, we get the result. 

(d) FX is continuous at x if and only if FX (x) = FX (x−).



Proof. A real valued function is continuous at a point x if and only if the function is both
right continuous and left continuous at the point x. Now, by construction, FX is right
continuous on R. Hence, FX is continuous at x if and only if FX is left continuous at x.
The last statement is exactly the statement to be proved. 
(e) The only possible discontinuities of FX are jump discontinuities.
Proof. As discussed in Theorem 3.36 and in part (a), for any x ∈ R, both the limits
FX (x+) and FX (x−) exist and FX (x+) = FX (x). Since FX (x−) ≤ FX (x+), the only
possible discontinuity appears if and only if FX (x−) < FX (x+). These discontinuities are
jump discontinuities. This completes the proof. 
(f) For all x ∈ R, we have FX (x+) − FX (x−) = P(X = x).
Proof. By the finite additivity of PX , we have FX (x+) − FX (x−) = P(X ≤ x) − P(X <
x) = PX ((−∞, x]) − PX ((−∞, x)) = PX ({x}) = P(X = x). 
(g) If FX has a jump at x, then the jump is given by FX (x+) − FX (x−) = P(X = x).
Proof. If FX has a jump at x, then the jump is given by FX (x+) − FX (x−). The result
follows from statement (f). 
(h) FX is continuous at x if and only if P(X = x) = 0.
Proof. Recall that FX (x+) = FX (x). Then by statement (d) and (f), we have FX is
continuous at x if and only if FX (x+) = FX (x−) and hence, if and only if P(X = x) = 0. 
(i) Consider the set D := {x ∈ R : FX is discontinuous at x} = {x ∈ R : FX (x−) <
FX (x+)} = {x ∈ R : P(X = x) > 0}. Then D is either finite or countably infinite.
(Note that if FX is continuous on R, then D = ∅.)
Proof. Left as an exercise in practice problem set 3. 
(j) For all x < y, we have

P(x < X ≤ y) = FX (y) − FX (x),

P(x < X < y) = FX (y−) − FX (x),

P(x ≤ X < y) = FX (y−) − FX (x−),



P(x ≤ X ≤ y) = FX (y) − FX (x−).

Proof. We prove the first two equalities. Proofs of the last two equalities are similar.
By the finite additivity of PX , we have FX (y) − FX (x) = PX ((−∞, y]) − PX ((−∞, x]) =
PX ((x, y]) = P(x < X ≤ y).
Again, FX (y−) − FX (x) = PX ((−∞, y)) − PX ((−∞, x]) = PX ((x, y)) = P(x < X < y).
This completes the proof. 

Example 3.41. Consider the function F : R → R defined by

F (x) = 0,          if x < 0,
        1/4 + x/2,  if 0 ≤ x ≤ 1,
        1/2 + x/4,  if 1 < x < 2,
        1,          if x ≥ 2.

Assume that F is the DF of some RV X (left as an exercise in practice problem set 3). Since F is
continuous on the intervals (−∞, 0), (0, 1), (1, 2) and (2, ∞), discontinuities may arise only at the
points 0, 1, 2.
We have F (0−) = lim_{h↓0} F (0 − h) = 0 and F (0) = 1/4. Therefore F is discontinuous at 0 with jump
F (0) − F (0−) = 1/4.
We have F (1−) = lim_{h↓0} F (1 − h) = lim_{h↓0} [1/4 + (1 − h)/2] = 3/4 and F (1) = 1/4 + 1/2 = 3/4. Therefore F is
continuous at 1.
We have F (2−) = lim_{h↓0} F (2 − h) = lim_{h↓0} [1/2 + (2 − h)/4] = 1 and F (2) = 1. Therefore F is continuous
at 2.
The only discontinuity of F is at the point 0. In particular, P(X = 0) = F (0) − F (0−) = 1/4. At all
other points F is continuous and hence P(X = x) = 0, ∀x ≠ 0.
Observe that P(0 ≤ X < 1) = F (1−) − F (0−) = 3/4. Again, P(3/2 < X ≤ 2) = F (2) − F (3/2) =
1 − [1/2 + 3/8] = 1/8.
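The jump and interval probabilities in this example can be verified with exact rational arithmetic. In the sketch below, `F_minus` is our own helper that reads off the left limit F (x−) directly from the pieces of F:

```python
from fractions import Fraction

def F(x):
    """The DF of Example 3.41, in exact arithmetic."""
    x = Fraction(x)
    if x < 0:
        return Fraction(0)
    if x <= 1:
        return Fraction(1, 4) + x / 2
    if x < 2:
        return Fraction(1, 2) + x / 4
    return Fraction(1)

def F_minus(x):
    """Left limit F(x-), read off piecewise (the boundary moves to the left piece)."""
    x = Fraction(x)
    if x <= 0:
        return Fraction(0)
    if x <= 1:
        return Fraction(1, 4) + x / 2
    if x <= 2:
        return Fraction(1, 2) + x / 4
    return Fraction(1)

assert F(0) - F_minus(0) == Fraction(1, 4)          # jump at 0: P(X = 0) = 1/4
assert F(1) == F_minus(1) and F(2) == F_minus(2)    # continuous at 1 and 2
assert F_minus(1) - F_minus(0) == Fraction(3, 4)    # P(0 <= X < 1)
assert F(2) - F(Fraction(3, 2)) == Fraction(1, 8)   # P(3/2 < X <= 2)
print("Example 3.41 probabilities verified")
```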
