Markov Processes
By:
Dr. Shailja Tripathi
Introduction to Markov Processes
System: a set of entities (both living and non-living) that interact with each other and thus influence and get influenced by each other, e.g. a classroom, a marketplace, a family.
Consider a customer who makes one shopping trip each week to one of two shops, A or B.
Note:
1. Each trip can be treated as a weekly trial.
2. In each trip (i.e., a trial), one of the two shops, A or B, is visited.
3. With regard to shopping, which is the focus, there are two possibilities (shopping at A / shopping at B). These are called the states of the system.
State 1: Shopping at A
State 2: Shopping at B
Example: Market share and customer loyalty
For example, from this data one finds that of all the customers that visited shop A, 25% switched to shop B in the next week (i.e. 75% continued to shop at shop A). Similarly, one can find that of all the customers that shopped at shop B, 20% switched to shop A the following week (i.e. 80% continued to shop at shop B).
Since the probabilities documented in the table below describe the switching behaviour from state 1 to state 2 and vice versa, they are called transition probabilities.

                    Next Week
    Current Week    Shop A    Shop B
    Shop A          0.75      0.25
    Shop B          0.20      0.80
If Pij is the probability of a transition from state i in the current period to state j in the next period, then we can represent the above as a matrix of transition probabilities:

    P = | P11  P12 | = | 0.75  0.25 |
        | P21  P22 |   | 0.20  0.80 |

Using this transition matrix, one can determine the probability that a given customer will shop at shop A or shop B in some future time period.
A transition matrix whose probabilities are all non-zero is called regular, and the corresponding Markov chain/process is called a regular Markov chain/process.
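As a quick sketch in plain Python (variable and function names are illustrative; the values come from the transition table above), one period of the process multiplies the current state-probability vector by P:

```python
# Transition matrix from the text: rows are the current shop,
# columns the shop visited next week.
P = [[0.75, 0.25],   # currently at A -> next week A, B
     [0.20, 0.80]]   # currently at B -> next week A, B

def step(pi, P):
    """One period: multiply a 1x2 probability vector by the 2x2 matrix P."""
    return [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

pi0 = [1.0, 0.0]       # initial condition: customer shopped at A last week
print(step(pi0, P))    # -> [0.75, 0.25]
```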
Let
πi(n) = probability that the system is in state i in time period n, where
i is the state, and
n is the time period (related to the number of trials/transitions).
So, π1(1) = probability that the system is in state 1 in time period 1, and
π2(1) = probability that the system is in state 2 in time period 1.
If we set π1(0) = 1 and π2(0) = 0, we are setting the initial condition that the customer shopped last week at shop A. Similarly, if we set π1(0) = 0 and π2(0) = 1, we are setting the initial condition that the customer shopped last week at shop B.
∏(0) = [ π1(0)  π2(0) ] = [ 1  0 ] is a vector (matrix) that represents the initial state of the system, i.e. the customer shopped last week at shop A.
To find the probabilities for the next period, we use the assumption made earlier, i.e.,

    ∏(n+1) = ∏(n) · P

If two matrices are equal then their corresponding elements are equal, i.e., if

    P = | a  b |  and  Q = | l  m |
        | c  d |           | n  o |

then P = Q implies a = l, b = m, c = n, d = o.
Matrix Multiplication (order 2 X 2, i.e. 2 rows and 2 columns):
Two matrices can be multiplied only when the number of columns of the first matrix equals the number of rows of the second matrix.

    | a  c |   | e  g |   | ae+cf  ag+ch |
    | b  d | X | f  h | = | be+df  bg+dh |

The product of matrices of order r1 X c1 and c1 X r2 is a matrix of order r1 X r2.
Example:

    | 1  3 |   | 5  7 |   | 23  31 |
    | 2  4 | X | 6  8 | = | 34  46 |
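A minimal Python check of the product above (the helper name matmul2 is illustrative):

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices element by element, as written out above."""
    return [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j] for j in range(2)]
            for i in range(2)]

A = [[1, 3], [2, 4]]
B = [[5, 7], [6, 8]]
print(matmul2(A, B))   # -> [[23, 31], [34, 46]]
```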
Example continued: Market share and customer loyalty
Finding the probabilities for the next period (the system state in the next period):

    ∏(1) = ∏(0) · P = [ 1  0 ] | 0.75  0.25 | = [ 0.75  0.25 ]
                               | 0.20  0.80 |

The vector ∏(1) contains the probability that the system will be in state 1 (i.e. the customer will shop from shop A) or in state 2 (i.e. the customer will shop from shop B) in time period 1.
Example: System state in period 2

    State         Time period n
    probability   0   1     2       3       4       5      6     7      8      9      10
    π1(n)         1   0.75  0.6125  0.5369  0.4953  0.472  0.46  0.453  0.449  0.447  0.446
    π2(n)         0   0.25  0.3875  0.4631  0.5047  0.528  0.54  0.547  0.551  0.553  0.554

One notices from the above that after a certain time period the probabilities do not change much from one time period to the next. So if we had started with 10,000 customers who had last shopped at shop A, then during time period 6, 4,600 customers would be retained by shop A, whereas 5,400 would have switched to shop B.
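The table can be reproduced by iterating ∏(n+1) = ∏(n) · P; a small Python sketch (values from the transition matrix in the text):

```python
# Iterate pi(n+1) = pi(n) * P for ten weekly periods, starting from a
# customer who last shopped at shop A.
P = [[0.75, 0.25], [0.20, 0.80]]
pi = [1.0, 0.0]
for n in range(1, 11):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    print(n, round(pi[0], 4), round(pi[1], 4))
# By period 6 the probabilities are roughly (0.46, 0.54): of 10,000
# customers starting at A, about 4,600 stay and 5,400 switch to B.
```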
Example: System state in 10 time periods with a different initial state
As we continue running the Markov process for a large number of time periods, we find that the probabilities of the system being in a given state change very little. This state of the system is called the steady state or the equilibrium state. The probabilities are called steady-state probabilities and are independent of the initial (beginning) state of the system. The symbol used to denote the steady-state probability of state 1 is π1 and that of state 2 is π2 (i.e. we drop the time notation, n).
With a large n, the successive probability values (for each state) get very close, i.e. the difference between the state probabilities for the nth and (n+1)th periods becomes very small. With this logic, one can compute the steady-state probabilities. Thus,
Example continued:

    ∏(n+1) = ∏(n) · P

    [ π1(n+1)  π2(n+1) ] = [ π1(n)  π2(n) ] | 0.75  0.25 |
                                            | 0.20  0.80 |

For a very large value of n, using the above logic, π1(n+1) = π1(n) = π1 (the steady-state probability of state 1), and π2(n+1) = π2(n) = π2 (the steady-state probability of state 2).
Thus,

    [ π1(n+1)  π2(n+1) ] = [ π1(n)  π2(n) ] | 0.75  0.25 |
                                            | 0.20  0.80 |

becomes, at steady state,

    [ π1  π2 ] = [ π1  π2 ] | 0.75  0.25 |
                            | 0.20  0.80 |
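Writing out the steady-state equation together with π1 + π2 = 1 gives a small linear system. A Python sketch of the hand-solved substitution:

```python
# From pi1 = 0.75*pi1 + 0.20*pi2 we get 0.25*pi1 = 0.20*pi2, i.e.
# pi1 = 0.8 * pi2; substituting into pi1 + pi2 = 1 gives pi2 = 1/1.8.
pi2 = 1 / 1.8          # = 5/9, about 0.5556
pi1 = 0.8 * pi2        # = 4/9, about 0.4444
print(round(pi1, 4), round(pi2, 4))   # -> 0.4444 0.5556
```

These match the values the table converges towards (0.446 and 0.554 by period 10).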
When, as in the current example, the prior states of the system do not have to be considered, the process is called a first-order Markov process. Higher-order Markov processes are ones in which future states of the system depend on two or more previous states.
Example of Markov Analysis
Let's analyze the market share and customer loyalty for Murphy's Foodliner and Ashley's Supermarket grocery stores. Our primary focus is the sequence of a customer's shopping trips. You can assume that a customer makes one shopping trip per week, to either Murphy's Foodliner or Ashley's Supermarket, but not both.
Generalized formula:

    ∏(n) = [ π1(n)  π2(n)  …  πr(n) ]

where π1(n), π2(n), …, πr(n) are the probabilities that the process is in each of its r states, and n denotes the time period.
Example: Market Share Analysis
■ Transition probabilities

    P = | p11  p12 | = | 0.9  0.1 |
        | p21  p22 |   | 0.2  0.8 |
■ State probabilities (two successive trips, beginning with a Murphy's customer)

    Murphy's → Murphy's → Murphy's:  P = .9(.9) = .81
    Murphy's → Murphy's → Ashley's:  P = .9(.1) = .09
    Murphy's → Ashley's → Murphy's:  P = .1(.2) = .02
    Murphy's → Ashley's → Ashley's:  P = .1(.8) = .08
■ State probabilities for future periods, beginning initially with a Murphy's customer
As the number of periods grows, the state probabilities approach the steady-state condition

    [ π1  π2 ] = [ π1  π2 ] | 0.9  0.1 |
                            | 0.2  0.8 |
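Solving that equation together with π1 + π2 = 1, by the same substitution as before (a hand-solved sketch):

```python
# From pi1 = 0.9*pi1 + 0.2*pi2 we get 0.1*pi1 = 0.2*pi2, i.e. pi1 = 2*pi2;
# with pi1 + pi2 = 1 this gives pi2 = 1/3 (Ashley's) and pi1 = 2/3 (Murphy's).
pi2 = 1 / 3
pi1 = 2 * pi2
print(round(pi1, 4), round(pi2, 4))   # -> 0.6667 0.3333
```

With a 6000-customer market this puts about 4000 weekly shoppers at Murphy's and 2000 at Ashley's, consistent with the figures quoted below.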
■ Steady-state probabilities
Suppose that the total market consists of 6000 customers per week. The new promotional strategy is expected to increase the number of customers doing their weekly shopping at Ashley's from 2000 to 2580. If the average weekly profit per customer is $10, the proposed promotional strategy can be expected to increase Ashley's profits by $5800 per week. If the cost of the promotional campaign is less than $5800 per week, Ashley's should consider implementing the strategy.
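The arithmetic behind that figure, as a sketch (the promotional transition matrix itself is not given in the text, only its effect on Ashley's steady-state customer count):

```python
market = 6000
ashleys_now = market * (1 / 3)     # current steady-state share of 1/3 -> 2000
ashleys_promo = 2580               # figure quoted in the text
profit_per_customer = 10           # dollars per week
extra_profit = (ashleys_promo - ashleys_now) * profit_per_customer
print(extra_profit)                # -> 5800.0
```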
Example 2
We want to determine the probability that a customer buys K's cereal in 2 years and in the long run.
This problem is an example of a brand switching problem that often arises in the sale of
consumer goods.
Observe that, each year, a customer can either be buying K's cereal or the
competition's. Hence we can construct a diagram as shown below where the
two circles represent the two states a customer can be in and the arcs
represent the probability that a customer makes a transition each year
between states. Note the circular arcs indicating a "transition" from one state to
the same state. This diagram is known as the state-transition diagram.
Given that diagram, we can construct the transition matrix (usually denoted by the symbol P), which tells us the probability of making a transition from one state to another state. Letting state 1 be "buying K's cereal" and state 2 be "buying the competition's cereal":

                   To state
                    1      2
    From state  1 | 0.88  0.12 |
                2 | 0.15  0.85 |
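A short Python sketch for both questions, assuming the customer currently buys K's cereal, i.e. ∏(0) = [ 1  0 ] (the initial condition is an assumption, as the problem statement above does not spell it out):

```python
# Transition matrix from the text: state 1 = K's cereal, state 2 = competition.
P = [[0.88, 0.12], [0.15, 0.85]]

def step(pi):
    """One yearly transition: multiply the state vector by P."""
    return [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

pi = [1.0, 0.0]
for _ in range(2):                 # two yearly transitions
    pi = step(pi)
print([round(x, 4) for x in pi])   # -> [0.7924, 0.2076]

# Long run: pi1 = 0.88*pi1 + 0.15*pi2 gives 0.12*pi1 = 0.15*pi2, so with
# pi1 + pi2 = 1 we get pi1 = 0.15/0.27 = 5/9, about 0.5556.
print(round(0.15 / 0.27, 4))
```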