
Unit – III

Joint Probability and Morkov Chain


Joint Probability distribution:Joint Probability distribution for two discrete random variables
(both discrete and continuous cases), expectation, covariance, correlation coefficient.
Stochastic processes- Stochastic processes, probability vector, stochastic matrices, fixed
points, regular stochastic matrices, Markov chains, higher transition probability-simple
problems.

Joint Probability Distribution


If X and Y are two discrete random variables, we define the joint probability function of X and Y by P(X = x, Y = y) = f(x, y), where f(x, y) satisfies the conditions

(i) f(x, y) ≥ 0,  (ii) Σx Σy f(x, y) = 1.

Also, if X and Y are two continuous random variables, we define the joint probability function for the random variables X and Y, also called the joint density function, by f(x, y), where

(i) f(x, y) ≥ 0,  (ii) ∫₋∞^∞ ∫₋∞^∞ f(x, y) dx dy = 1.

Suppose X takes one of the values {x1, x2, x3, ……, xm} and Y takes one of the values {y1, y2, y3, ……, yn}; then P(X = xi, Y = yj) = f(xi, yj) is represented in the following two-way table.

      Y →   y1          y2          …    yn          Total
 X ↓
 x1         f(x1, y1)   f(x1, y2)   …    f(x1, yn)   f1(x1)
 x2         f(x2, y1)   f(x2, y2)   …    f(x2, yn)   f1(x2)
 …          …           …           …    …           …
 xm         f(xm, y1)   f(xm, y2)   …    f(xm, yn)   f1(xm)
 Total      f2(y1)      f2(y2)      …    f2(yn)      1

f1(xi) (or simply f1(x)) and f2(yj) (or simply f2(y)) are known as the marginal probability functions of X and Y respectively, since it can be observed that

Σi f1(xi) = 1 and Σj f2(yj) = 1 (i = 1, …, m; j = 1, …, n),

which can be jointly written as Σi Σj f(xi, yj) = 1.

Expectation: (One Variable)

The mean value of the distribution of a variate X is commonly known as its expectation and is denoted by E(X). If f(x) is the probability (density) function of the variate X, then

E(X) = Σi xi f(xi)   (discrete distribution)

E(X) = ∫₋∞^∞ x f(x) dx   (continuous distribution)

In general, the expectation of any function φ(x) is given by

E[φ(X)] = Σi φ(xi) f(xi)   (discrete distribution)

E[φ(X)] = ∫₋∞^∞ φ(x) f(x) dx   (continuous distribution)

Expectation: (Two Variables)

If X and Y are two discrete random variables having the joint probability function f(x, y), then the expectations of X and Y are defined as

Mean of X = μx = E(X) = Σx Σy x f(x, y) = Σi xi f1(xi)

Mean of Y = μy = E(Y) = Σx Σy y f(x, y) = Σj yj f2(yj)

and E(XY) = Σi Σj xi yj Jij, where Jij = f(xi, yj).

The covariance of X and Y is Cov(X, Y) = E(XY) − E(X)E(Y).

If X and Y are two continuous random variables having the joint density function f(x, y), then the expectations of X and Y are defined as

μx = E(X) = ∫₋∞^∞ ∫₋∞^∞ x f(x, y) dx dy

μy = E(Y) = ∫₋∞^∞ ∫₋∞^∞ y f(x, y) dx dy

Variance: V(X) = σx² = ∫₋∞^∞ ∫₋∞^∞ (x − μx)² f(x, y) dx dy

V(Y) = σy² = ∫₋∞^∞ ∫₋∞^∞ (y − μy)² f(x, y) dx dy

Cov(X, Y) = ∫₋∞^∞ ∫₋∞^∞ (x − μx)(y − μy) f(x, y) dx dy = E(XY) − E(X)E(Y)

Correlation coefficient: ρ(X, Y) = Cov(X, Y) / (σx σy)

Note 1: The probability that x ∈ [a, b] and y ∈ [c, d] is defined as

P(a ≤ x ≤ b, c ≤ y ≤ d) = ∫ₐᵇ ∫꜀ᵈ f(x, y) dy dx

Note 2:

The variables x and y are said to be independent if f1(x) f2(y) = f(x, y).

Note 3:

Marginal density function of X: f1(x) = ∫₋∞^∞ f(x, y) dy

Marginal density function of Y: f2(y) = ∫₋∞^∞ f(x, y) dx

Properties of Expectation:

(i) E(kX) = k E(X)

(ii) E(X + k) = E(X) + k

(iii) E(X + Y) = E(X) + E(Y)

(iv) E(XY) = E(X) E(Y), when X and Y are independent

Properties of Variance:

(i) V(X) = E(X²) − [E(X)]²

(ii) V(kX) = k² V(X)

(iii) V(X + k) = V(X)

(iv) V(X ± Y) = V(X) + V(Y), when X and Y are independent

Problem:

Find E(X), E(X²) and σ² for the probability function P(X) defined by the following data

xi:    1   2   3   ………   n
p(xi): k   2k  3k  ………   nk

Here k is an appropriate constant.

Solution:

The condition Σ p(xi) = 1 gives

k + 2k + 3k + 4k + ……… + nk = 1
k(1 + 2 + 3 + 4 + ……… + n) = 1
k · n(n + 1)/2 = 1
k = 2 / [n(n + 1)]

E(X) = Σ xi p(xi) = (1)(k) + (2)(2k) + (3)(3k) + ……… + (n)(nk)
     = k(1² + 2² + 3² + ……… + n²) = [2/(n(n + 1))] · [n(n + 1)(2n + 1)/6] = (2n + 1)/3

E(X²) = Σ xi² p(xi) = (1²)(k) + (2²)(2k) + (3²)(3k) + ……… + (n²)(nk)
      = k(1³ + 2³ + 3³ + ……… + n³) = [2/(n(n + 1))] · [n²(n + 1)²/4] = n(n + 1)/2

σ² = E(X²) − μ² = n(n + 1)/2 − [(2n + 1)/3]²
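These closed forms are easy to sanity-check numerically. A small Python sketch using exact fractions (the helper name `moments` is ours):

```python
from fractions import Fraction as F

# Check E(X) = (2n+1)/3 and E(X^2) = n(n+1)/2 for p(x_i) = i*k.
def moments(n):
    k = F(2, n * (n + 1))              # from the condition sum p(x_i) = 1
    p = {i: i * k for i in range(1, n + 1)}
    EX  = sum(i * p[i] for i in p)
    EX2 = sum(i * i * p[i] for i in p)
    return EX, EX2, EX2 - EX ** 2       # mean, second moment, variance

for n in (3, 10):
    EX, EX2, var = moments(n)
    assert EX == F(2 * n + 1, 3) and EX2 == F(n * (n + 1), 2)
    print(n, EX, EX2, var)
```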

Problem:

The joint distribution of two random variables X and Y is as follows

Y
-4 2 7
X

1 1/8 1/4 1/8

5 1/4 1/8 1/8

Compute the following

(i) E(X) (ii) E(Y) (iii) E(XY) (iv) x and y (v) Cov(X,Y) (vi) (X,Y)

Solution:

The marginal distribution of X and Y is as follows

Distribution of X

xi 1 5

f(xi) 1/2 1/2

Distribution of Y

yi -4 2 7

g(yi) 3/8 3/8 1/4

(i) μx = E(X) = Σ xi f(xi) = (1)(1/2) + (5)(1/2) = 3

Thus μx = E(X) = 3

(ii) μy = E(Y) = Σ yj g(yj) = (−4)(3/8) + (2)(3/8) + (7)(1/4) = 1

Thus μy = E(Y) = 1

(iii) E(XY) = Σ xi yj Jij
= (1)(−4)(1/8) + (1)(2)(1/4) + (1)(7)(1/8) + (5)(−4)(1/4) + (5)(2)(1/8) + (5)(7)(1/8)
= −1/2 + 1/2 + 7/8 − 5 + 5/4 + 35/8 = 3/2

(iv) σx² = E(X²) − μx²

Consider E(X²) = Σ xi² f(xi) = (1)(1/2) + (25)(1/2) = 13

Hence σx² = 13 − 9 = 4. Thus σx = 2.

σy² = E(Y²) − μy²

Consider E(Y²) = Σ yj² g(yj) = (16)(3/8) + (4)(3/8) + (49)(1/4) = 79/4

Hence σy² = (79/4) − 1 = 75/4. Thus σy = √(75/4) ≈ 4.33.

(v) Cov(X, Y) = E(XY) − μx μy = 3/2 − (3)(1) = −3/2

(vi) ρ(X, Y) = Cov(X, Y) / (σx σy) = (−3/2) / (2 × √(75/4)) = −3/(2√75) ≈ −0.1732

Problem:

The joint probability distribution table for two random variables X and Y is as follows

Y
-2 -1 4 5
X

1 0.1 0.2 0 0.3

2 0.2 0.1 0.1 0

Determine the marginal probability distributions of X and Y

Also compute (a) Expectations of X and Y (b) Standard deviation of X and Y (c) Covariance
of X and Y (d) Correlation of X and Y.

Solution:

The marginal distribution of X and Y is as follows

Distribution of X

xi 1 2

f(xi) 0.6 0.4

Distribution of Y

yi -2 -1 4 5

g(yi) 0.3 0.3 0.1 0.3

(i) μx = E(X) = Σ xi f(xi) = (1)(0.6) + (2)(0.4) = 1.4

Thus μx = E(X) = 1.4

(ii) μy = E(Y) = Σ yj g(yj) = (−2)(0.3) + (−1)(0.3) + (4)(0.1) + (5)(0.3) = 1

Thus μy = E(Y) = 1

(iii) σx² = E(X²) − μx²

Consider E(X²) = Σ xi² f(xi) = (1)(0.6) + (4)(0.4) = 2.2

Hence σx² = 2.2 − 1.96 = 0.24. Thus σx ≈ 0.4899.

σy² = E(Y²) − μy²

Consider E(Y²) = Σ yj² g(yj) = (4)(0.3) + (1)(0.3) + (16)(0.1) + (25)(0.3) = 10.6

Hence σy² = 10.6 − 1 = 9.6. Thus σy = √9.6 ≈ 3.0984.

(iv) E(XY) = Σ xi yj Jij
= (1)(−2)(0.1) + (1)(−1)(0.2) + (1)(4)(0) + (1)(5)(0.3)
+ (2)(−2)(0.2) + (2)(−1)(0.1) + (2)(4)(0.1) + (2)(5)(0)
= 0.9

Cov(X, Y) = E(XY) − μx μy = 0.9 − (1.4)(1) = −0.5

(v) ρ(X, Y) = Cov(X, Y) / (σx σy) = −0.5 / (0.4899 × 3.0984) ≈ −0.329

Problem:

The joint probability distribution of two random variables x and y is given below

y
1 3 9
x

2 1/8 1/24 1/12

4 1/4 1/4 0

6 1/8 1/24 1/12

Find (i) marginal distribution of x and y (ii) Cov(xy).

Solution:

(i) The marginal distribution of x and y is as follows

The marginal distribution of x

xi 2 4 6

f(xi) 1/4 1/2 1/4

The marginal distribution of y

yj 1 3 9

f(yj) 1/2 1/3 1/6

(ii) Expectation of x

E(x) = μx = Σ xi f(xi) = (2)(1/4) + (4)(1/2) + (6)(1/4) = 4

Expectation of y

E(y) = μy = Σ yj g(yj) = (1)(1/2) + (3)(1/3) + (9)(1/6) = 3

E(xy) = Σ xi yj f(xi, yj)
= (2)(1)(1/8) + (2)(3)(1/24) + (2)(9)(1/12)
+ (4)(1)(1/4) + (4)(3)(1/4) + (4)(9)(0)
+ (6)(1)(1/8) + (6)(3)(1/24) + (6)(9)(1/12) = 12

Cov(x, y) = E(xy) − μx μy = 12 − (4)(3) = 0

Problem:

The joint probability distribution of two random variables x and y is given below

y
2 3 4
x

1 0.06 0.15 0.09

2 0.14 0.35 0.21

Find (i) marginal distribution of x and y (ii) Expectation of x and y

Solution:

(i) The marginal distribution of x and y is as follows

The marginal distribution of x

xi 1 2

f(xi) 0.3 0.7

The marginal distribution of y

yj 2 3 4

f(yj) 0.2 0.5 0.3

(ii) Expectation of x

E(x) = Σ xi f(xi) = (1)(0.3) + (2)(0.7) = 1.7

Expectation of y

E(y) = Σ yj g(yj) = (2)(0.2) + (3)(0.5) + (4)(0.3) = 3.1
Problem:

The joint probability distribution of two discrete random variables x and y is given by the
table. Determine the marginal distributions of x and y. Also find whether x and y are
independent

y
1 3 6
x

1 1/9 1/6 1/18

3 1/6 1/4 1/12

6 1/18 1/12 1/36

Problem:

The joint distribution of two random variables X and Y is as follows

Y
-3 2 4
X

1 0.1 0.2 0.2

3 0.3 0.1 0.1

Find (i) marginal distribution of x and y (ii) Covariance of x and y

Problem:

Find the constant k so that

P(x, y) = { k(x + 1)e^(−y)   0 < x < 1, y > 0
          { 0                otherwise

is a joint probability density function. Are x, y independent?

Solution:

We observe that P(x, y) ≥ 0 for all x, y if k ≥ 0.

Further, the total probability must be one:

∫₋∞^∞ ∫₋∞^∞ P(x, y) dx dy = 1

∫₀^∞ ∫₀^1 k(x + 1)e^(−y) dx dy = k [∫₀^1 (x + 1) dx] [∫₀^∞ e^(−y) dy] = k · (3/2) · 1 = 1

⇒ k = 2/3

With k = 2/3 we find the marginal density functions:

P1(x) = ∫₀^∞ P(x, y) dy = (2/3)(x + 1) ∫₀^∞ e^(−y) dy = (2/3)(x + 1) [−e^(−y)]₀^∞ = (2/3)(x + 1),  0 < x < 1

P2(y) = ∫₀^1 P(x, y) dx = (2/3) e^(−y) ∫₀^1 (x + 1) dx = (2/3) e^(−y) [x²/2 + x]₀^1 = (2/3)(3/2) e^(−y) = e^(−y),  y > 0

We observe that P1(x) P2(y) = P(x, y); hence x and y are stochastically independent.
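The value k = 2/3 can be checked numerically, for example with a crude midpoint-rule double integral in plain Python (the infinite y-range is truncated at y = 30, where e^(−y) is negligible; the function names are ours):

```python
import math

# Joint density P(x, y) = k(x+1)e^(-y) on 0 < x < 1, y > 0 (zero elsewhere).
def joint(x, y, k):
    if 0 < x < 1 and y > 0:
        return k * (x + 1) * math.exp(-y)
    return 0.0

# Midpoint-rule double integral over 0 < x < 1, 0 < y < ymax.
def total_mass(k, nx=200, ny=2000, ymax=30.0):
    hx, hy = 1.0 / nx, ymax / ny
    total = 0.0
    for i in range(nx):
        xm = (i + 0.5) * hx
        for j in range(ny):
            ym = (j + 0.5) * hy
            total += joint(xm, ym, k) * hx * hy
    return total

print(round(total_mass(2.0 / 3.0), 4))   # 1.0 -> k = 2/3 normalises the density
```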

Problem:

Suppose the joint probability density function of x, y is given by


P(x, y) = { e^(−y)   x ≥ 0, y ≥ x
          { 0        otherwise

Find the marginal density function of x and y. Verify that x and y are not independent.

Problem:

The joint probability function of two continuous random variables X and Y is given by

f(x, y) = { c(2x + y)   0 ≤ x ≤ 2, 0 ≤ y ≤ 3
          { 0           otherwise

Find (i) the constant c (ii) P(x < 2, y < 1) (iii) P(x ≥ 1, y ≤ 2)
(iv) the marginal probability functions of x and y.

Problem:

If X and Y are random variables having joint probability density function

f(x, y) = { 4xy   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
          { 0     otherwise

Verify that (i) E(X + Y) = E(X) + E(Y) (ii) E(XY) = E(X)E(Y)

Problem:

If X and Y are continuous random variables having joint probability density function

f(x, y) = { c(x² + y²)   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
          { 0            otherwise
0 otherewise

Find (i) constant c (ii) P(x < 1/2, y > 1/2) (iii) P(1/4 < x < 3/4) (iv) P(y < 1/2)

Problem:

The joint density function of two continuous random variables X and Y is given by

f(x, y) = { xy/96   0 < x < 4, 1 < y < 5
          { 0       otherwise
0 otherewise

Find P(X + Y < 3)

Problem:

Verify that

f(x, y) = { e^(−(x + y))   x ≥ 0, y ≥ 0
          { 0              otherwise

is a density function of a joint probability distribution. Then evaluate the following: (i) P(1/2 < x < 2, 0 < y < 4) (ii) P(x < 1)
(iii) P(x > y) (iv) P(x + y ≤ 1)

Problem:

A fair coin is tossed thrice. The random variables X and Y are defined as follows: X = 0 or 1 according as a head or a tail occurs on the first toss; Y = number of heads in the three tosses.

(i) Determine the distribution of X and Y

(ii) Determine the joint distribution of X and Y

(iii) Obtain the expectations of X, Y and XY. Also find standard deviation of X and Y

(iv) Compute covariance and correlation of X and Y.

Solution:

The sample space S and the association of random variables X and Y is given by the
following table

S HHH HHT HTH HTT THH THT TTH TTT

X 0 0 0 0 1 1 1 1

Y 3 2 2 1 2 1 1 0

(i) The probability distribution of X and Y is found as follows

X = {xi} = {0, 1} and Y = {yj} = {0, 1, 2, 3}

P(X = 0) = 4/8 = 1/2

P(X = 1) = 4/8 = 1/2

P(Y = 0) = 1/8

P(Y = 1) = 3/8

P(Y = 2) = 3/8

P(Y = 3) = 1/8

The marginal distribution of x

xi 0 1

f(xi) 1/2 1/2

The marginal distribution of y

yj 0 1 2 3

f(yj) 1/8 3/8 3/8 1/8

(ii) The joint distribution of X and Y is found by computing

Jij = P(X = xi, Y = yj), where x1 = 0, x2 = 1 and y1 = 0, y2 = 1, y3 = 2, y4 = 3.

J11 = P(X = 0, Y = 0) = 0 (X = 0 means a head occurred on the first toss, so a total of 0 heads is impossible)

J12 = P(X = 0, Y = 1) = 1/8

J13 = P(X = 0, Y = 2) = 2/8 = 1/4

J14 = P(X = 0, Y = 3) = 1/8

J21 = P(X = 1, Y = 0) = 1/8

J22 = P(X = 1, Y = 1) = 2/8 = 1/4

J23 = P(X = 1, Y = 2) = 1/8

J24 = P(X = 1, Y = 3) = 0 (this outcome is impossible)

The joint probability distribution for X and Y is as follows

X Y 0 1 2 3 Sum

0 0 1/8 1/4 1/8 1/2

1 1/8 1/4 1/8 0 1/2

Sum 1/8 3/8 3/8 1/8 1

(iii) μx = E(X) = Σ xi f(xi) = 0 · (1/2) + 1 · (1/2) = 1/2

μy = E(Y) = Σ yj f(yj) = 0 · (1/8) + 1 · (3/8) + 2 · (3/8) + 3 · (1/8) = 3/2

E(XY) = Σ xi yj Jij = (1)(1)(1/4) + (1)(2)(1/8) = 1/2
(all the remaining terms vanish because xi = 0 or Jij = 0)

σx² = E(X²) − μx² = (0)(1/2) + (1)(1/2) − (1/4) = 1/4, so σx = 1/2

σy² = E(Y²) − μy² = (0)(1/8) + (1)(3/8) + (4)(3/8) + (9)(1/8) − (9/4) = 3 − 9/4 = 3/4, so σy = √3/2

(iv) Cov(X, Y) = E(XY) − μx μy = 1/2 − (1/2)(3/2) = −1/4

ρ(X, Y) = Cov(X, Y) / (σx σy) = (−1/4) / ((1/2)(√3/2)) = −1/√3 ≈ −0.577

Markov Chains:

Probability Vector:
A vector v = (v1, v2, v3, v4, ……, vn) is called a probability vector if each of its components is non-negative and their sum is equal to unity.
Ex: u = (1, 0), v = (1/2, 1/2), w = (1/4, 1/4, 1/2) are probability vectors.

Note:
If v is not a probability vector but each of its components vi is non-negative (and not all are zero), then λv is a probability vector when λ = 1/(Σi vi).

Ex: If v = (1, 2, 3) then λ = 1/6, and λv = (1/6, 2/6, 3/6) is a probability vector.

Stochastic Matrix:
A square matrix P = [Pij] having every row in the form of a probability vector is called a stochastic matrix.
Ex: (writing matrices row by row, with rows separated by semicolons)
(i) the identity matrix I of any order, e.g. I2 = [1 0; 0 1] and I3 = [1 0 0; 0 1 0; 0 0 1]
(ii) [1/2 1/2; 0 1]
(iii) [1/2 1/2 0; 0 1/2 1/2; 1/2 0 1/2]
Regular Stochastic Matrix:
A stochastic matrix P is said to be a regular stochastic matrix if all the entries of some power Pⁿ are positive.
Ex: A = [0 1; 1/2 1/2]

A² = [0 1; 1/2 1/2][0 1; 1/2 1/2] = [1/2 1/2; 1/4 3/4]

Since every entry of A² is positive, A is a regular stochastic matrix.
Properties of a Regular stochastic Matrix:
The following properties are associated with a regular stochastic matrix 𝑃 of order 𝑛.
1. (a) 𝑃 has a unique fixed point 𝑥 = {𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 … … … . 𝑥𝑛 } such that 𝑥𝑃 = 𝑥
(b) P has a unique fixed probability vector 𝑣 = {𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 … … … . 𝑣𝑛 } such that 𝑣𝑃 = 𝑣

2. P², P³, P⁴, …… approach the matrix V whose rows are each the fixed probability vector v.

3. If 𝑢 is any probability vector then the sequence of vectors


𝑢𝑃, 𝑢𝑃2 , 𝑢𝑃3 , 𝑢𝑃4 , … …. approaches the unique fixed probability vector 𝑣.
Problems:
1. If A = [a1 a2; b1 b2] is a stochastic matrix and v = [v1 v2] is a probability vector, show that vA is also a probability vector.

Solution: By data, a1 + a2 = 1, b1 + b2 = 1, v1 + v2 = 1.

vA = [v1 v2][a1 a2; b1 b2] = [v1·a1 + v2·b1   v1·a2 + v2·b2]

The components of vA are clearly non-negative; we have to prove that their sum is 1:

(v1·a1 + v2·b1) + (v1·a2 + v2·b2) = v1(a1 + a2) + v2(b1 + b2) = v1 + v2 = 1.

2. Find the unique fixed probability vector of the regular stochastic matrix A = [3/4 1/4; 1/2 1/2].

Solution: We have to find the unique fixed probability vector v = (x, y) such that x + y = 1 and vA = v.

vA = v ⇒ (x, y)[3/4 1/4; 1/2 1/2] = (x, y)

(3x/4 + y/2, x/4 + y/2) = (x, y)

3x + 2y = 4x ………(1)   x + 2y = 4y ……….(2)

Using x + y = 1 ⇒ y = 1 − x.
Equation (1): 3x + 2(1 − x) = 4x
3x + 2 − 2x = 4x
3x = 2 ⇒ x = 2/3
∴ y = 1 − x = 1/3

Thus the unique fixed probability vector is v = (x, y) = (2/3, 1/3).
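The fixed probability vector can also be obtained numerically: v is a left eigenvector of the matrix for eigenvalue 1, normalised to sum to 1. A sketch using NumPy (the function name is ours), solving vA = v together with Σvi = 1 by least squares:

```python
import numpy as np

# Stationary (fixed) probability vector of a stochastic matrix A:
# solve v A = v subject to sum(v) = 1.
def fixed_probability_vector(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # (A^T - I) v = 0 together with sum(v) = 1, solved by least squares.
    M = np.vstack([A.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    v, *_ = np.linalg.lstsq(M, b, rcond=None)
    return v

A = [[3/4, 1/4],
     [1/2, 1/2]]
print(fixed_probability_vector(A))   # ≈ [0.6667 0.3333]
```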
3. Find the unique fixed probability vector of the regular stochastic matrix A = [0 1 0; 1/6 1/2 1/3; 0 2/3 1/3].

Solution: We have to find the unique fixed probability vector v = (x, y, z) such that x + y + z = 1 and vA = v.

vA = v ⇒ (x, y, z)[0 1 0; 1/6 1/2 1/3; 0 2/3 1/3] = (x, y, z)

(y/6, x + y/2 + 2z/3, y/3 + z/3) = (x, y, z)

y = 6x ……..(1)   6x + 3y + 4z = 6y ……..(2)   y + z = 3z …….(3)

Using x + y + z = 1 ⇒ z = 1 − x − y = 1 − x − 6x = 1 − 7x.

Equation (2): 6x + 3(6x) + 4(1 − 7x) = 6(6x)
6x + 18x + 4 − 28x − 36x = 0
40x = 4 ⇒ x = 1/10
y = 6x = 6/10
z = 1 − x − y = 1 − 1/10 − 6/10 = 3/10

∴ The unique fixed probability vector is v = (x, y, z) = (1/10, 6/10, 3/10).

4. Find the unique fixed probability vector of the regular stochastic matrix
A = [0 1/2 1/4 1/4; 1/2 0 1/4 1/4; 1/2 1/2 0 0; 1/2 1/2 0 0].

Solution: We have to find the unique fixed probability vector v = (a, b, c, d) such that a + b + c + d = 1 and vA = v.

vA = v ⇒ (a, b, c, d)[0 1/2 1/4 1/4; 1/2 0 1/4 1/4; 1/2 1/2 0 0; 1/2 1/2 0 0] = (a, b, c, d)

(b/2 + c/2 + d/2, a/2 + c/2 + d/2, a/4 + b/4, a/4 + b/4) = (a, b, c, d)

b + c + d = 2a ……(1)   a + c + d = 2b ……(2)   a + b = 4c …(3)   a + b = 4d ………(4)

Using a + b + c + d = 1 ⇒ b + c + d = 1 − a and a + c + d = 1 − b.

Equation (1): 1 − a = 2a ⇒ a = 1/3
Equation (2): 1 − b = 2b ⇒ b = 1/3
Equation (3): 4c = a + b = 1/3 + 1/3 ⇒ c = 1/6
Equation (4): 4d = a + b = 1/3 + 1/3 ⇒ d = 1/6

∴ The unique fixed probability vector is v = (a, b, c, d) = (1/3, 1/3, 1/6, 1/6).
5. Show that P = [0 1 0; 0 0 1; 1/2 1/2 0] is a regular stochastic matrix. Also find the associated unique fixed probability vector.

Solution: A stochastic matrix P is said to be a regular stochastic matrix if all the entries of some power Pⁿ are positive.

P² = P·P = [0 1 0; 0 0 1; 1/2 1/2 0][0 1 0; 0 0 1; 1/2 1/2 0] = [0 0 1; 1/2 1/2 0; 0 1/2 1/2]

P³ = P²·P = [1/2 1/2 0; 0 1/2 1/2; 1/4 1/4 1/2]

P⁴ = P³·P = [0 1/2 1/2; 1/4 1/4 1/2; 1/4 1/2 1/4]

P⁵ = P⁴·P = [1/4 1/4 1/2; 1/4 1/2 1/4; 1/8 3/8 1/2]

All the entries of P⁵ are positive; therefore P is a regular stochastic matrix.

We have to find the unique fixed probability vector v = (a, b, c) such that a + b + c = 1 and vP = v.

vP = v ⇒ (a, b, c)[0 1 0; 0 0 1; 1/2 1/2 0] = (a, b, c)

(c/2, a + c/2, b) = (a, b, c)

c = 2a ……(1)   2a + c = 2b ……(2)   b = c …(3)

Using c = 2a and b = c, we get b = c = 2a.
Using a + b + c = 1 ⇒ a + 2a + 2a = 1 ⇒ a = 1/5
c = 2a = 2/5, b = c = 2/5

∴ The unique fixed probability vector is v = (a, b, c) = (1/5, 2/5, 2/5).
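Both claims are easy to confirm with NumPy: searching for the first all-positive power shows that P⁵ is the first one (P⁴ still contains a zero entry), and high powers of P approach the matrix whose rows are all equal to the fixed probability vector v = (1/5, 2/5, 2/5):

```python
import numpy as np

P = np.array([[0, 1, 0],
              [0, 0, 1],
              [0.5, 0.5, 0]])

# Find the first power of P with all entries strictly positive.
Q = P.copy()
n = 1
while not (Q > 0).all():
    Q = Q @ P
    n += 1
print(n)                                           # 5 -> P is regular

# Powers of a regular stochastic matrix approach the matrix V whose
# rows are each the fixed probability vector v.
print(np.linalg.matrix_power(P, 50)[0].round(4))   # ≈ [0.2 0.4 0.4]
```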
Markov Chain:
Definition:
A stochastic process in which the probability distribution of the next state depends only on the present state is called a Markov process. If the state space is discrete (finite or countably infinite), we say that the process is a discrete state process or chain; the Markov process is then known as a Markov chain.
Further if the state space is continuous the process is called a continuous state process.

We explicitly define a Markov chain as follows:

Let the outcomes 𝑋1 , 𝑋2 , 𝑋3 , 𝑋4 … … of a sequence of trials satisfy the following properties


(i) Each outcome belongs to the finite set (state space) of outcomes {a1, a2, a3, ……, am}.
(ii) The outcome of any trial depends at most upon the outcome of the immediately preceding trial.
A probability Pij is associated with every pair of states (ai, aj), namely the probability that aj occurs immediately after ai occurs. Such a stochastic process is called a finite Markov chain. These non-negative real numbers Pij are called the transition probabilities, and the matrix formed by them is called the transition probability matrix (t.p.m.), denoted by P:

P = [Pij] = [ P11  P12  …  P1m
              P21  P22  …  P2m
              …    …    …  …
              Pm1  Pm2  …  Pmm ]

It is evident that the elements of P have the following properties:
(i) 0 ≤ Pij ≤ 1   (ii) Σj Pij = 1 (j = 1, …, m; for each i = 1, 2, 3, ……, m)

The above two properties are exactly the requirements of a stochastic matrix; hence we conclude that the transition matrix of a Markov chain is a stochastic matrix.

Examples for writing transition probability matrix (t. p. m):


1. A person commutes to his office every day either by train or by bus. He never goes by train on two consecutive days; but if he goes by bus one day, then the next day he is just as likely to go by bus again as he is to travel by train.
Solution:
The state space of the system is {train(t), bus(b)}
The stochastic process is a Markov chain since the outcome of any day depends only on the
happening of the previous day. The transition probability matrix is as follows
P = [ 0    1      (states in the order t, b)
      1/2  1/2 ]

Conclusion: The first row of the matrix reflects the fact that the person never travels by train on two consecutive days, so after a train day he is sure to go by bus. The second row reflects the fact that after a bus day he is equally likely to go by bus again or to travel by train; thus both probabilities are 1/2.

2. Three boys A, B, C are throwing ball to each other A always throws the ball to B and B always
throws the ball to C, C is just as likely to throw the ball to B as to A.
Solution:
State space = {A, B, C} and the transition probability matrix (t.p.m.) is as follows (states in the order A, B, C):

P = [ 0    1    0
      0    0    1
      1/2  1/2  0 ]
3. A student’s study habits are as follows. If he studies one night he is 30% sure to study the next
night. On the other hand, if he does not study one night, he is 60% sure not to study the next
night as well. Find the transition matrix for the chain of his study.

Solution:
State space = {a1 = studying, a2 = not studying} and the transition probability matrix is as follows (states in the order a1, a2):

P = [ 0.3  0.7
      0.4  0.6 ]

(If he studies one night, he studies the next night with probability 0.3; if he does not study one night, he does not study the next night with probability 0.6, hence studies with probability 0.4.)
State Classification:
Absorbing State:

A state i is called an absorbing state if the transition probabilities pij are such that

pij = { 1   for j = i
      { 0   otherwise

In other words, a state i is absorbing if the i-th row of the transition matrix P has 1 on the main diagonal and zeros everywhere else.

Transient State:
A state i is said to be a transient state if, when the system is in this state at some step (as it has to be), there is a chance (i.e. a non-zero probability) that it will not return to that state in a subsequent step.

Recurrent State:
A state i is said to be a recurrent state if, starting from state i, the system does eventually return to the same state. (Here it is implicit that the probability of return is one.)

Example:
Consider the two-state chain for which the transition matrix is P = [0 1; 1 0].

We find that P² = [1 0; 0 1] and P³ = [0 1; 1 0] = P. Since P³ = P, the system returns to a state after two steps. Therefore both states of the chain are recurrent.

Higher Transition Probabilities:

The entry pij in the transition probability matrix P of the Markov chain is the probability that the system changes from the state ai to aj in a single step, that is, ai → aj.
The probability that the system changes from the state ai to the state aj in exactly n steps is denoted by pij(n).
The matrix formed by the probabilities pij(n) is called the n-step transition matrix, denoted by P(n); that is, P(n) = [pij(n)], which is obviously a stochastic matrix.

It can be proved that the n-step transition matrix is equal to the n-th power of P, that is, P(n) = Pⁿ.

Let P be the transition probability matrix of the Markov chain and let p = (pi) = (p1, p2, p3, ……, pm) be the probability distribution at some arbitrary time; then pP, pP², pP³, ……, pPⁿ are respectively the probability distributions of the system after one step, two steps, ……, n steps.
Let p(0) = [p1(0), p2(0), ……, pm(0)] denote the initial probability distribution at the start of the process.
Let p(n) = [p1(n), p2(n), ……, pm(n)] denote the n-th step probability distribution at the end of n steps.
We have p(1) = p(0) P,
p(2) = p(1) P = p(0) P²,
p(3) = p(2) P = p(0) P³, − − − − − − − −, p(n) = p(n−1) P = p(0) Pⁿ
Example-1. A person commutes to his office every day either by train or by bus. He never goes by train on two consecutive days; but if he goes by bus one day, then the next day he is just as likely to go by bus again as he is to travel by train.

Solution:
The state space of the system is {train(t), bus(b)}
The stochastic process is a Markov chain since the outcome of any day depends only on the
happening of the previous day. The transition probability matrix is as follows
With states in the order (t, b),

P = [0 1; 1/2 1/2] = [p_tt p_tb; p_bt p_bb]

We shall find P² and P³:

P² = [0 1; 1/2 1/2][0 1; 1/2 1/2] = [1/2 1/2; 1/4 3/4] = [p_tt(2) p_tb(2); p_bt(2) p_bb(2)]

P³ = P²·P = [1/2 1/2; 1/4 3/4][0 1; 1/2 1/2] = [1/4 3/4; 3/8 5/8] = [p_tt(3) p_tb(3); p_bt(3) p_bb(3)]

p_tb(2) = 1/2 means that the probability that the system changes from state t to state b in exactly two steps is 1/2.

p_bt(3) = 3/8 means that the probability that the system changes from state b to state t in exactly three steps is 3/8.

Next let us set up an initial probability distribution for the state of the process.
Suppose the person tosses a fair coin and decides to go by bus if a head turns up. Then p(b) = 1/2 and p(t) = 1/2,

∴ p(0) = [1/2 1/2] is the initial probability distribution.

∴ p(2) = p(0) P² = [1/2 1/2][1/2 1/2; 1/4 3/4] = [3/8 5/8]

∴ p(3) = p(0) P³ = [1/2 1/2][1/4 3/4; 3/8 5/8] = [5/16 11/16] = [p_t(3) p_b(3)]

This is the probability distribution after 3 days.
That is, the probability of travelling by train after 3 days is 5/16, and of travelling by bus is 11/16.
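The same three-day distribution can be computed directly from p(n) = p(0)Pⁿ with NumPy:

```python
import numpy as np

# Train/bus chain, state order (t, b).
P = np.array([[0, 1],
              [0.5, 0.5]])
p0 = np.array([0.5, 0.5])          # initial distribution from the coin toss

# n-step distribution: p(n) = p(0) P^n
p2 = p0 @ np.linalg.matrix_power(P, 2)
p3 = p0 @ np.linalg.matrix_power(P, 3)
print(p2)   # [0.375 0.625]   = [3/8 5/8]
print(p3)   # [0.3125 0.6875] = [5/16 11/16]
```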

Problem – 1: A student’s study habits are as follows. If he studies one night he is 70% sure not to
study the next night. On the other hand, if he does not study one night, he is 60% sure
to study the next night. In the long run how often does he study?
Solution: State space = {A = studying, B = not studying}
The transition probability matrix(t.p.m) for the problem is as follows
𝐴 𝐵
𝐴 0.3 0.7
𝑃= [ ]
𝐵 0.6 0.4
To know the student’s study habits in the long run,
we have to find the steady state probability vector 𝑣 = (𝑥, 𝑦)
such that 𝑥 + 𝑦 = 1 𝑎𝑛𝑑 𝑣𝑃 = 𝑣

∴ 𝑣𝑃 = 𝑣 ⇒ (𝑥, 𝑦) [ 0.3 0.7 ] = (𝑥, 𝑦)


0.6 0.4
[0.3𝑥 + 0.6𝑦, 0.7𝑥 + 0.4𝑦] = (𝑥, 𝑦)
0.3𝑥 + 0.6𝑦 = 𝑥, … … … (1) 0.7𝑥 + 0.4𝑦 = 𝑦 … … … . (2)
Using 𝑥 + 𝑦 = 1 ⇒ 𝑦 =1−𝑥
Equation (1): 0.3x + 0.6(1 − x) = x
0.3x + 0.6 − 0.6x = x
0.6 = x − 0.3x + 0.6x = 1.3x
x = 0.6/1.3 ≈ 0.4615
∴ y = 1 − x ≈ 0.5385
Thus the steady-state probability vector for the problem is v = (x, y) ≈ (0.4615, 0.5385).
From this we find that, in the long run, the student studies about 46% of the time.
Problem-2: A man’s smoking habits are as follows. If he smokes filter cigarettes one week, he
switches to non filter cigarettes the next week with probability 0.2. On the other hand if he
smokes non filter cigarettes one week there is a probability of 0.7 that he will smoke non
filter cigarettes the next week as well. In the long run how often does he smoke filter
cigarettes.

Solution:
State space = {A =He smokes filter cigarettes, B = He smokes non filter cigarettes}
The transition probability matrix (t.p.m) for the problem is as follows
𝐴 𝐵
𝐴 0.8 0.2
𝑃= [ ]
𝐵 0.3 0.7
To find the probability in the long run how often does he smoke filter cigarettes,
we have to find the steady state probability vector 𝑣 = (𝑥, 𝑦)
such that 𝑥 + 𝑦 = 1 𝑎𝑛𝑑 𝑣𝑃 = 𝑣

∴ 𝑣𝑃 = 𝑣 ⇒ (𝑥, 𝑦) [ 0.8 0.2 ] = (𝑥, 𝑦)


0.3 0.7
[0.8𝑥 + 0.3𝑦, 0.2𝑥 + 0.7𝑦] = (𝑥, 𝑦)
0.8𝑥 + 0.3𝑦 = 𝑥, … … … (1) 0.2𝑥 + 0.7𝑦 = 𝑦 … … … . (2)
Using 𝑥 + 𝑦 = 1 ⇒ 𝑦 =1−𝑥
Equation (1): 0.8x + 0.3(1 − x) = x
0.8x + 0.3 − 0.3x = x
0.3 = x − 0.5x = 0.5x
x = 0.6
∴ y = 1 − x = 0.4
Thus the fixed probability vector for the problem is v = (x, y) = (0.6, 0.4).
From this we find that, in the long run, he smokes filter cigarettes 60% of the time.
Problem -3: A habitual gambler is a member of two clubs A and B. He visits either of the clubs
every day for playing cards. He never visits club A on two consecutive days. But, if he visits
club B on particular day, then the next day he is as likely to visit Club B or Club A.
i) Find the transition matrix of the Markov chain.
ii) Also, show that the matrix is a regular stochastic matrix and find the unique fixed
probability vector.

Solution:
State space = {A = club A , B = club B }
i) The transition probability matrix (t.p.m) for the problem is as follows
With states in the order (A, B),

P = [ 0    1
      1/2  1/2 ]

ii) P² = [0 1; 1/2 1/2][0 1; 1/2 1/2] = [1/2 1/2; 1/4 3/4]

Since all the entries of P² are positive, P is a regular stochastic matrix.

We have to find the unique fixed probability vector v = (x, y) such that x + y = 1 and vP = v.

vP = v ⇒ (x, y)[0 1; 1/2 1/2] = (x, y)

(y/2, x + y/2) = (x, y)

y = 2x ……(1)   2x + y = 2y ……(2)

Using x + y = 1 ⇒ y = 1 − x:
2x = 1 − x ⇒ 3x = 1 ⇒ x = 1/3
y = 1 − x = 2/3

Thus the unique fixed probability vector for the problem is v = (x, y) = (1/3, 2/3).
Problem-4: Prove that the Markov chain whose t.p.m. is P = [0 2/3 1/3; 1/2 0 1/2; 1/2 1/2 0] is irreducible. Find the corresponding stationary probability vector.

Solution: We know that a Markov chain is irreducible if its t.p.m. is regular. [We need to show that P is a regular stochastic matrix: a stochastic matrix P is regular if all the entries of some power Pⁿ are positive.]

P = (1/6)[0 4 2; 3 0 3; 3 3 0]

P² = (1/36)[0 4 2; 3 0 3; 3 3 0][0 4 2; 3 0 3; 3 3 0] = (1/36)[18 6 12; 9 21 6; 9 12 15]

Since all the entries of P² are positive, the t.p.m. P is regular; hence the Markov chain with t.p.m. P is irreducible.

We have to find the unique fixed probability vector v = (x, y, z) such that x + y + z = 1 and vP = v.

vP = v ⇒ (x, y, z) · (1/6)[0 4 2; 3 0 3; 3 3 0] = (x, y, z)

(1/6)(3y + 3z, 4x + 3z, 2x + 3y) = (x, y, z)

3y + 3z = 6x ……..(1)   4x + 3z = 6y ……..(2)   2x + 3y = 6z …….(3)

Using x + y + z = 1 ⇒ y + z = 1 − x.
Equation (1): 3(1 − x) = 6x ⇒ 9x = 3 ⇒ x = 1/3

Equation (2): 4x + 3z = 6y ⇒ 4/3 = 6y − 3z ………..(4)
Equation (3): 2x + 3y = 6z ⇒ 2/3 = 6z − 3y; multiplying by 2,
⇒ 4/3 = 12z − 6y ……….(5)

Equations (4) + (5) ⇒ 4/3 + 4/3 = 9z ⇒ z = 8/27

∴ x + y + z = 1 ⇒ y = 1 − x − z = 1 − 9/27 − 8/27 = 10/27

∴ The unique fixed probability vector is v = (x, y, z) = (9/27, 10/27, 8/27).

Problem-5: Three boys A, B, C are throwing a ball to each other. A always throws the ball to B and B always throws the ball to C; C is just as likely to throw the ball to B as to A. If C was the first person to throw the ball, find the probabilities that after three throws (i) A has the ball, (ii) B has the ball, (iii) C has the ball.

Solution: State space = {A, B, C} and the transition probability matrix (t.p.m.), with states in the order (A, B, C), is

P = [0 1 0; 0 0 1; 1/2 1/2 0]

Initially C has the ball, so the associated initial probability vector is p(0) = (0, 0, 1).

Since probabilities are desired after three throws, we have to find p(3) = p(0) P³.

P² = P·P = [0 1 0; 0 0 1; 1/2 1/2 0][0 1 0; 0 0 1; 1/2 1/2 0] = [0 0 1; 1/2 1/2 0; 0 1/2 1/2]

P³ = P²·P = [0 0 1; 1/2 1/2 0; 0 1/2 1/2][0 1 0; 0 0 1; 1/2 1/2 0] = [1/2 1/2 0; 0 1/2 1/2; 1/4 1/4 1/2]

p(3) = p(0) P³ = (0, 0, 1)[1/2 1/2 0; 0 1/2 1/2; 1/4 1/4 1/2] = [1/4, 1/4, 1/2]

After 3 throws the probability that the ball is with A is 1/4, with B is 1/4, and with C is 1/2.
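A quick NumPy check of this answer:

```python
import numpy as np

# Ball-throwing chain, state order (A, B, C).
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [0.5, 0.5, 0]])
p0 = np.array([0, 0, 1])          # C throws first

p3 = p0 @ np.linalg.matrix_power(P, 3)
print(p3)    # [0.25 0.25 0.5]
```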
