
MARKOV PROCESSES

BY:
DR. SHAILJA TRIPATHI
Introduction to Markov Processes
SYSTEM: A SET OF ENTITIES (BOTH LIVING AND NON-LIVING) THAT INTERACT
WITH EACH OTHER AND THUS INFLUENCE AND GET INFLUENCED BY EACH
OTHER, E.G. A CLASSROOM, A MARKETPLACE, A FAMILY.

MARKOV MODEL: MATHEMATICAL MODEL THAT HELPS PREDICT THE FUTURE STATE OF A SYSTEM BASED ON ITS PRESENT STATE. NAMED AFTER THE RUSSIAN MATHEMATICIAN ANDREY ANDREYEVICH MARKOV.

MARKOV ANALYSIS CAN PROVIDE INFORMATION TO HELP DECISION MAKING IN A SCENARIO/PROCESS THAT INVOLVES A SEQUENCE OF REPEATED TRIALS WITH FINITE POSSIBLE STATES ON EACH TRIAL.
Introduction to Markov Processes
MARKOV PROCESS MODELS ARE USED TO UNDERSTAND THE EVOLUTION OF A SYSTEM OVER A PERIOD OF TIME (THE STATE OF A SYSTEM MAY CHANGE IN EACH TIME PERIOD, AS IN REPEATED TRIALS, AND THEREFORE CANNOT BE ASCERTAINED WITH CERTAINTY).

THUS, IN MARKOV PROCESS MODELS WE ARE CONCERNED WITH THE PROBABILITY THAT THE SYSTEM WILL BE IN A GIVEN STATE AT A GIVEN POINT OF TIME.
Introduction to Markov Processes
EXAMPLES: 2-STATE MARKOV PROCESSES

1. ONE CAN CALCULATE THE PROBABILITY THAT A CUSTOMER WHO PURCHASES A GIVEN PRODUCT A WILL CONTINUE TO PURCHASE THE SAME PRODUCT A OR WILL SWITCH TO THE ALTERNATIVE PRODUCT B. (WHAT CONSTITUTES THE SYSTEM AND WHAT ARE THE STATES?)

2. ONE CAN CALCULATE THE PROBABILITY THAT A PIECE OF EQUIPMENT USED IN A PARTICULAR ACTIVITY/PROCESS WILL CONTINUE TO BE FUNCTIONAL IN THE FUTURE OR WILL BREAK DOWN. (WHAT CONSTITUTES THE SYSTEM AND WHAT ARE THE STATES?)
The Markov analysis technique is named after the Russian mathematician Andrei Andreyevich Markov, who introduced the study of stochastic processes, i.e. processes that involve the operation of chance.

This analysis helps to generate a new sequence of random but related events, which will look similar to the original.

It is useful in analyzing dependent random events, i.e. events that depend only on what happened last.

Markov analysis is a probabilistic technique that helps in the process of decision-making by providing a probabilistic description of various outcomes.
Introduction to Markov Processes
ASSUMPTIONS:

1. FINITE NUMBER OF STATES.

2. CONSTANT TRANSITION PROBABILITIES OVER A PERIOD OF TIME (TRANSITION PROBABILITIES DESCRIBE HOW A SYSTEM MAKES A TRANSITION FROM ONE STATE TO ANOTHER).*

3. THE PROBABILITY OF THE SYSTEM BEING IN A SPECIFIC STATE (IN A GIVEN TIME PERIOD) DEPENDS ONLY ON ITS STATE IN THE PRECEDING TIME PERIOD. WHEN THE CURRENT STATE TOGETHER WITH THE TRANSITION PROBABILITIES IS ENOUGH TO PREDICT THE FUTURE STATE OF A SYSTEM (THE PRIOR STATES NEED NOT BE CONSIDERED), THE SYSTEM IS SAID TO HAVE THE MEMORYLESS PROPERTY.

*SUCH PROCESSES ARE CALLED MARKOV CHAINS WITH STATIONARY TRANSITION PROBABILITIES.
Introduction to Markov Processes
EXAMPLE 1: MARKET SHARE AND CUSTOMER LOYALTY
A CUSTOMER BUYS HIS GROCERIES FROM TWO NEARBY KIRANA STORES, SAY SHOP A AND SHOP B. HE MAKES ONE TRIP PER WEEK FOR THIS PURPOSE AND PURCHASES FROM ONLY ONE OF THESE STORES IN A GIVEN TRIP.

NOTE:
1. EACH TRIP CAN BE TREATED AS A WEEKLY TRIAL.
2. IN EACH TRIP (I.E., A TRIAL) ONE OF THE TWO SHOPS A, B IS VISITED.
3. WITH REGARD TO SHOPPING, WHICH IS THE FOCUS, THERE ARE TWO POSSIBILITIES (SHOPPING AT A / SHOPPING AT B). THESE ARE CALLED THE STATES OF THE SYSTEM.
STATE 1: SHOPPING AT A
STATE 2: SHOPPING AT B
Introduction to Markov Processes
EXAMPLE: MARKET SHARE AND CUSTOMER LOYALTY

IF IT IS SAID THAT THE SYSTEM IS IN STATE 2 IN WEEK 4, IT MEANS THAT THE CUSTOMER SHOPS AT STORE B IN WEEK 4.

WHILE THERE IS UNCERTAINTY REGARDING WHERE THE CUSTOMER WILL SHOP IN A GIVEN WEEK, USING THE MARKOV PROCESS MODEL ONE CAN CALCULATE THE PROBABILITY THAT S/HE WILL SHOP AT SHOP A OR SHOP B.

TO FIND THE ABOVE, ONE NEEDS INFORMATION ON THE PROBABILITY THAT THE CUSTOMER CONTINUES TO SHOP AT A GIVEN SHOP OR SWITCHES TO THE OTHER SHOP (TRANSITION).
Introduction to Markov Processes
EXAMPLE: MARKET SHARE AND CUSTOMER LOYALTY

ONE CAN COLLECT PAST DATA ON 100 CUSTOMERS' (WEEKLY) SHOPPING PATTERNS FOR THE LAST SIX MONTHS. THIS INFORMATION ON THE SEQUENCE OF VISITS CAN BE UTILIZED TO FIND OUT WHICH SHOP (A OR B) A GIVEN CUSTOMER VISITS, GIVEN THAT S/HE WENT TO A SHOP (A OR B) IN THE LAST WEEK.

FOR EXAMPLE, FROM THIS DATA ONE FINDS THAT OF ALL THE CUSTOMERS WHO VISITED SHOP A, 25% SWITCHED TO SHOP B IN THE NEXT WEEK (I.E. 75% CONTINUED TO SHOP AT SHOP A). SIMILARLY, ONE CAN FIND THAT OF ALL THE CUSTOMERS WHO SHOPPED AT SHOP B, 20% SWITCHED TO SHOP A THE FOLLOWING WEEK (I.E. 80% CONTINUED TO SHOP AT SHOP B).
Introduction to Markov Processes
EXAMPLE: MARKET SHARE AND CUSTOMER LOYALTY
SINCE THESE PROBABILITIES, DOCUMENTED IN THE TABLE BELOW, DESCRIBE THE SWITCHING BEHAVIOUR FROM STATE 1 TO STATE 2 AND VICE VERSA, THEY ARE CALLED TRANSITION PROBABILITIES.

                     Next Week
Current Week     Shop A    Shop B
Shop A            0.75      0.25
Shop B            0.20      0.80

THE PROBABILITIES 0.75 AND 0.80 CAN BE INTERPRETED AS A MEASURE OF STORE LOYALTY (RETENTION BY A STORE), WHEREAS THE PROBABILITIES 0.25 AND 0.20 CAN BE INTERPRETED AS A MEASURE OF SWITCHING BEHAVIOUR. NOTICE THAT (EXPECTEDLY) THE SUM OF PROBABILITIES IN EACH ROW IS 1.
Introduction to Markov Processes
Example: Market share and customer loyalty

If Pij is the probability of a transition from state i in the current period to state j in the next period, then we can represent the aforementioned as a matrix of transition probabilities:

P = | P11 P12 | = | 0.75 0.25 |
    | P21 P22 |   | 0.20 0.80 |

Using this transition matrix, one can determine the probability that a given customer will shop at shop A or B at some time period in the future.

A transition matrix with non-zero probabilities is called a regular (transition) matrix, and the corresponding Markov chain/process is called a regular Markov chain/process.
Introduction to Markov Processes
Example: Market share and customer loyalty
Let
πi(n) = probability that the system is in state i in time period n, where
i is the state and
n is the time period (related to the number of trials/transitions)

So, π1(1) = probability that the system is in state 1 in time period 1 and
π2(1) = probability that the system is in state 2 in time period 1

Week zero (0) is considered the starting period, so

π1(0) = probability that the system is in state 1 in the initial/starting period and
π2(0) = probability that the system is in state 2 in the initial/starting period
Introduction to Markov Processes
Example: Market share and customer loyalty
π1(0) = probability that the system is in state 1 in some initial/starting period and
π2(0) = probability that the system is in state 2 in some initial/starting period

If we set π1(0) = 1 and π2(0) = 0, we are setting the initial condition that the customer shopped last week at shop A. Similarly, if we set π1(0) = 0 and π2(0) = 1, we are setting the initial condition that the customer shopped last week at shop B.

So, [ π1(0) π2(0) ] = [1 0]

is a vector (matrix) that represents the initial state of the system, i.e. the customer
shopped last week at shop A.
Introduction to Markov Processes
Example: Market share and customer loyalty

So, initially [ π1(0) π2(0) ] = [1 0]

The general notation of this vector is ∏ (n) = [ π1(n) π2(n) ],
which represents the state probabilities in period n, i.e. the system state in period n.

Thus ∏ (1) is the vector of state probabilities in period 1, ∏ (2) is the vector of state probabilities in period 2, and ∏ (0) is the vector of state probabilities in period 0.
Introduction to Markov Processes
Example: Market share and customer loyalty

To find the probabilities for the next period, we use the assumption made earlier, i.e.,

∏ (next period) = ∏ (current period) * P

or ∏ (n+1) = ∏ (n) * P

[ π1(n+1) π2(n+1) ] = [ π1(n) π2(n) ] * P

where P is the transition probability matrix.
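As a minimal sketch of this update in Python (an illustration added here, not part of the original slides; it uses the shop A/B transition matrix from above, and matrix multiplication itself is reviewed on the next slides):

```python
# One step of the two-state process: pi(n+1) = pi(n) * P.
P = [[0.75, 0.25],   # row 1: current state shop A -> next state A, B
     [0.20, 0.80]]   # row 2: current state shop B -> next state A, B

def step(pi):
    """Row vector pi times the transition matrix P."""
    return [pi[0] * P[0][0] + pi[1] * P[1][0],
            pi[0] * P[0][1] + pi[1] * P[1][1]]

print(step([1.0, 0.0]))  # [0.75, 0.25]: state probabilities one period later
```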


Introduction to Markov Processes
Matrix (arrangement of data in row and column format) operations:

A matrix with 1 column is called a column vector, e.g.,

| x |
| y |

A matrix with 1 row is called a row vector, e.g., [ x y ]
Introduction to Markov Processes
Matrix operations:

Comparing two matrices:

P = | a c | ,  Q = | l n |
    | b d |        | m o |

If two matrices are equal then their corresponding elements are equal, i.e.,

if P = Q, then a = l; b = m; c = n; d = o
Introduction to Markov Processes
Matrix Addition (order 2 X 2, i.e. 2 rows and 2 columns):

Possible only when the order of the two matrices is the same:

| a c |   | e g |   | a+e c+g |
| b d | + | f h | = | b+f d+h |
Introduction to Markov Processes
Matrix Multiplication (order 2 X 2, i.e. 2 rows and 2 columns):

Possible only when the number of columns of the first matrix is the same as the number of rows of the second matrix:

| a c |   | e g |   | ae+cf ag+ch |
| b d | X | f h | = | be+df bg+dh |

The product of matrices of order r1 X c1 and c1 X r2 is a matrix of order r1 X r2.
Introduction to Markov Processes
Matrix Multiplication (order 2 X 2, i.e. 2 rows and 2 columns):
Example:

| 1 3 |   | 5 7 |   | 23 31 |
| 2 4 | X | 6 8 | = | 34 46 |
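The same product can be checked in a few lines of Python (a minimal sketch; the nested comprehension is just the row-by-column rule above):

```python
# 2 x 2 matrix multiplication by the row-by-column rule.
A = [[1, 3], [2, 4]]
B = [[5, 7], [6, 8]]

product = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]

print(product)  # [[23, 31], [34, 46]]
```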
Introduction to Markov Processes
Example Continued: Market share and customer loyalty

Finding the probabilities for the next period, i.e. the system state in the next period:

∏ (next period) = ∏ (current period) * P

i.e. ∏ (n+1) = ∏ (n) * P

where P is the transition probability matrix.

If the system is in state 1 (shopping at shop A) at period n = 0, then

∏ (1) = ∏ (0) * P , where P = | 0.75 0.25 |
                              | 0.20 0.80 |
Introduction to Markov Processes
Example: System state in period 1

∏ (1) = ∏ (0) * P , where P = | 0.75 0.25 |
                              | 0.20 0.80 |

and ∏ (1) = [ π1(1) π2(1) ] and ∏ (0) = [ π1(0) π2(0) ] = [1 0]

∏ (1) = [ π1(1) π2(1) ] = [1 0] X | 0.75 0.25 | = [ 0.75 0.25 ]
                                  | 0.20 0.80 |

The vector ∏ (1) contains the probability that the system will be in state 1 (i.e. the customer will shop at shop A) or in state 2 (i.e. the customer will shop at shop B) in time period 1.
Introduction to Markov Processes
Example: System state in period 2

∏ (2) = ∏ (1) * P , where P = | 0.75 0.25 |
                              | 0.20 0.80 |

and ∏ (2) = [ π1(2) π2(2) ] and ∏ (1) = [ π1(1) π2(1) ] = [0.75 0.25]

∏ (2) = [ π1(2) π2(2) ] = [0.75 0.25] X | 0.75 0.25 |
                                        | 0.20 0.80 |

      = [ 0.75*0.75 + 0.25*0.20   0.75*0.25 + 0.25*0.80 ]
      = [ 0.6125 0.3875 ]

The vector ∏ (2) contains the probability that the system will be in state 1 (i.e. the customer will shop at shop A) or in state 2 (i.e. the customer will shop at shop B) in time period 2.
Introduction to Markov Processes
EXAMPLE: SYSTEM STATE IN 10 TIME PERIODS

State          Time period n
probability    0   1     2       3       4       5      6     7      8      9      10
π1(n)          1   0.75  0.6125  0.5369  0.4953  0.472  0.46  0.453  0.449  0.447  0.446
π2(n)          0   0.25  0.3875  0.4631  0.5047  0.528  0.54  0.547  0.551  0.553  0.554

ONE NOTICES FROM THE ABOVE THAT AFTER A CERTAIN TIME PERIOD THE PROBABILITIES DO NOT CHANGE MUCH FROM ONE TIME PERIOD TO THE NEXT. SO IF WE HAD STARTED WITH 10,000 CUSTOMERS WHO HAD LAST SHOPPED AT SHOP A, THEN IN TIME PERIOD 6 ABOUT 4,600 CUSTOMERS WOULD BE RETAINED BY SHOP A, WHEREAS ABOUT 5,400 WOULD SWITCH TO SHOP B.
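The table above can be reproduced with a short loop (a sketch in plain Python, repeatedly applying ∏(n+1) = ∏(n) * P; the values match after rounding):

```python
# Iterate pi(n+1) = pi(n) * P for 10 periods,
# starting in state 1 (the customer last shopped at shop A).
P = [[0.75, 0.25],
     [0.20, 0.80]]
pi = [1.0, 0.0]

for n in range(11):
    print(n, round(pi[0], 4), round(pi[1], 4))
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],   # next pi1
          pi[0] * P[0][1] + pi[1] * P[1][1]]   # next pi2
```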
Introduction to Markov Processes
EXAMPLE: SYSTEM STATE IN 10 TIME PERIODS WITH A DIFFERENT INITIAL STATE

WE CAN DO A SIMILAR KIND OF ANALYSIS WITH A DIFFERENT INITIAL STATE, I.E. THE CUSTOMER LAST SHOPPED AT SHOP B.
Introduction to Markov Processes
Example Continued:

As we continue working the Markov process for a large number of time periods, we realize that the probabilities of the system being in a given state change very little from period to period. This condition is called the steady state or the equilibrium state of the system. The probabilities are called steady-state probabilities and are independent of the initial (beginning) state of the system. The symbol used to denote the steady-state probability for state 1 is π1 and that for state 2 is π2 (i.e. we drop the time notation, n).

For large n the successive probability values (for each state) get very close, i.e. the difference between the state probabilities for the nth and (n+1)th periods becomes very small. With this logic, one can compute the steady-state probabilities. Thus,
Introduction to Markov Processes
Example Continued:

∏ (n+1) = ∏ (n) * P

[ π1(n+1) π2(n+1) ] = [ π1(n) π2(n) ] * | 0.75 0.25 |
                                        | 0.20 0.80 |

For a very large value of n, using the above logic,
π1(n+1) = π1(n) = π1 (steady-state probability, state 1), and
π2(n+1) = π2(n) = π2 (steady-state probability, state 2)
Introduction to Markov Processes
Example Continued:

Thus, [ π1(n+1) π2(n+1) ] = [ π1(n) π2(n) ] * | 0.75 0.25 |
                                              | 0.20 0.80 |

[ π1 π2 ] = [ π1 π2 ] * | 0.75 0.25 |
                        | 0.20 0.80 |

[ π1 π2 ] = [ 0.75 π1 + 0.20 π2   0.25 π1 + 0.80 π2 ]

Introduction to Markov Processes
Example Continued:

Equating both sides, we get

π1 = 0.75 π1 + 0.20 π2 and π2 = 0.25 π1 + 0.80 π2

Additionally, we know that π1 + π2 = 1

Solving the above, we get π1 = 4/9 ≈ 0.44 and π2 = 5/9 ≈ 0.56
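The same system can be solved numerically. A minimal sketch, assuming NumPy is available: replace one of the (redundant) balance equations with the normalization π1 + π2 = 1 and solve the resulting linear system.

```python
import numpy as np

# Steady state of the shop A/B chain: solve pi = pi * P with pi1 + pi2 = 1.
P = np.array([[0.75, 0.25],
              [0.20, 0.80]])

A = P.T - np.eye(2)      # (P^T - I) pi = 0 encodes pi = pi * P
A[-1, :] = 1.0           # replace one redundant equation with pi1 + pi2 = 1
b = np.array([0.0, 1.0])

print(np.linalg.solve(A, b))  # approx. [0.4444, 0.5556]
```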


Introduction to Markov Processes
EXAMPLE CONTINUED:

AS IN THE CURRENT EXAMPLE, WHEN THE PRIOR STATES OF THE SYSTEM DO NOT HAVE TO BE CONSIDERED, THE PROCESS IS CALLED A FIRST-ORDER MARKOV PROCESS. HIGHER-ORDER MARKOV PROCESSES ARE THE ONES IN WHICH FUTURE STATES OF THE SYSTEM DEPEND ON TWO OR MORE PREVIOUS STATES.
Example of Markov Analysis

Let's analyze the market share and customer loyalty for the Murphy's Foodliner and Ashley's Supermarket grocery stores. Our primary focus is the sequence of shopping trips of a customer. You can assume that a customer makes one shopping trip per week, to either Murphy's Foodliner or Ashley's Supermarket, but not both.

Using the terminology of Markov processes, we refer to the weekly periods or shopping trips as the trials of the process. In each trial, the customer can shop at either Murphy's Foodliner or Ashley's Supermarket. The particular store chosen in a given week is known as the state of the system in that week, because the customer has two options or states for shopping in each trial. With a finite number of states, we can identify the states as follows:

State 1: The customer shops at Murphy's Foodliner.
State 2: The customer shops at Ashley's Supermarket.
Example: Market Share Analysis

Suppose we are interested in analyzing the market share and customer loyalty for Murphy's Foodliner and Ashley's Supermarket, the only two grocery stores in a small town. We focus on the sequence of shopping trips of one customer and assume that the customer makes one shopping trip each week, to either Murphy's Foodliner or Ashley's Supermarket, but not both.
Example: Market Share Analysis
We refer to the weekly periods or shopping trips as
the trials of the process. Thus, at each trial, the
customer will shop at either Murphy’s Foodliner or
Ashley’s Supermarket. The particular store selected in a
given week is referred to as the state of the system in
that period. Because the customer has two shopping
alternatives at each trial, we say the system has two
states.
State 1. The customer shops at Murphy’s Foodliner.
State 2. The customer shops at Ashley’s
Supermarket.
Example: Market Share Analysis
► Suppose that, as part of a market research study, we
collect data from 100 shoppers over a 10-week period.
► In reviewing the data, suppose that we find that of all
customers who shopped at Murphy’s in a given week,
90% shopped at Murphy’s the following week while
10% switched to Ashley’s.
► Suppose that similar data for the customers who
shopped at Ashley’s in a given week show that 80%
shopped at Ashley’s the following week while 20%
switched to Murphy’s.
Transition Probabilities
■ Transition probabilities govern
the manner in which the state
of the system changes from
one stage to the next. These
are often represented in a
transition matrix.
Transition Probabilities
■ A system is a finite Markov chain with stationary transition probabilities if:
• there are a finite number of states,
• the transition probabilities remain constant from stage to stage, and
• the probability of the process being in a particular state at stage n+1 is completely determined by the state of the process at stage n (and not the state at stage n-1).
This is referred to as the memory-less property.
[The transition probabilities and the week-by-week state calculations were given here as equation images.]

Hence, the probability of shopping at Murphy's after two weeks can be calculated by multiplying the current state-probability vector by the transition probability matrix to get the probabilities for the next state.

Generalized formula:

[ P1(n) P2(n) … Pr(n) ] = [ P1(n-1) P2(n-1) … Pr(n-1) ] * P

where P1, P2, …, Pr represent the probabilities of the process being in each of the r possible states, and n denotes the period.
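As a sketch of this generalized recursion (Python, NumPy assumed; it uses the 90%/10% and 80%/20% figures given earlier for Murphy's and Ashley's), the state after n weeks is the initial vector times the nth power of P:

```python
import numpy as np

# State probabilities after n weeks: pi(n) = pi(0) * P^n.
P = np.array([[0.9, 0.1],    # Murphy's -> Murphy's, Murphy's -> Ashley's
              [0.2, 0.8]])   # Ashley's -> Murphy's, Ashley's -> Ashley's
pi0 = np.array([1.0, 0.0])   # start with a Murphy's customer

for n in (1, 2, 3):
    print(n, pi0 @ np.linalg.matrix_power(P, n))
# Week 2 gives [0.83 0.17]: an 83% chance of shopping at Murphy's.
```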
Example: Market Share Analysis
■ Transition Probabilities

pij = probability of making a transition from state i in a given period to state j in the next period

P = | p11 p12 | = | 0.9 0.1 |
    | p21 p22 |   | 0.2 0.8 |
Example: Market Share Analysis
■ State Probabilities (tree of two-week paths, starting with a Murphy's customer)

Murphy's (.9) → Murphy's (.9):  P = .9(.9) = .81
Murphy's (.9) → Ashley's (.1):  P = .9(.1) = .09
Ashley's (.1) → Murphy's (.2):  P = .1(.2) = .02
Ashley's (.1) → Ashley's (.8):  P = .1(.8) = .08
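A small sketch that enumerates these two-week paths and their probabilities (plain Python; 'M' and 'A' abbreviate Murphy's and Ashley's):

```python
# Enumerate the two-week paths of a Murphy's customer and their probabilities.
P = {('M', 'M'): 0.9, ('M', 'A'): 0.1,
     ('A', 'M'): 0.2, ('A', 'A'): 0.8}

total_murphys = 0.0
for week1 in ('M', 'A'):
    for week2 in ('M', 'A'):
        prob = P[('M', week1)] * P[(week1, week2)]
        print(f"M -> {week1} -> {week2}: {prob:.2f}")
        if week2 == 'M':
            total_murphys += prob

print(f"P(Murphy's in week 2) = {total_murphys:.2f}")  # 0.81 + 0.02 = 0.83
```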
Example: Market Share Analysis
■ State Probabilities for Future Periods, Beginning Initially with a Murphy's Customer
■ State Probabilities for Future Periods, Beginning Initially with an Ashley's Customer
[The two period-by-period probability tables appeared here as images.]
Steady-State Probabilities
■ The state probabilities at any stage of the process can be calculated recursively: the state probabilities at stage n+1 are obtained by multiplying the state probabilities at stage n by the transition matrix.
■ The probability of the system being in a particular state after a large number of stages is called a steady-state probability.
Steady-State Probabilities
■ Steady state probabilities can be found by solving the
system of equations ΠP = Π together with the
condition for probabilities that Σπi = 1.
• Matrix P is the transition probability matrix
• Vector Π is the vector of steady state probabilities.
Example: Market Share Analysis
■ Steady-State Probabilities

Let π1 = long-run proportion of Murphy's visits and
    π2 = long-run proportion of Ashley's visits

Then,

[π1 π2] = [π1 π2] | .9 .1 |
                  | .2 .8 |

continued . . .
Example: Market Share Analysis
■ Steady-State Probabilities

.9π1 + .2π2 = π1   (1)
.1π1 + .8π2 = π2   (2)
π1 + π2 = 1        (3)

Substituting π2 = 1 - π1 into (1) gives:

π1 = .9π1 + .2(1 - π1), so π1 = 2/3 = .667

Substituting back into (3) gives:

π2 = 1/3 = .333
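A quick numerical cross-check (plain Python): iterating the process from any starting distribution converges to the same steady state, independent of the initial state.

```python
# Power iteration: repeated application of P converges to the steady state.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [0.5, 0.5]   # any valid starting distribution works

for _ in range(100):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]

print(pi)  # approx. [0.667, 0.333]
```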
Example: Market Share Analysis
■ Steady-State Probabilities
Thus, if we have 1000 customers in the system, the Markov process model tells us that in the long run, with steady-state probabilities π1 = .667 and π2 = .333, 667 customers will be Murphy's and 333 customers will be Ashley's.
Example: Market Share Analysis
Suppose Ashley’s Supermarket is contemplating an
advertising campaign to attract more of Murphy’s customers
to its store. Let us suppose further that Ashley’s believes this
promotional strategy will increase the probability of a
Murphy’s customer switching to Ashley’s from 0.10 to 0.15.
Example: Market Share Analysis
■ Revised Transition Probabilities

P = | .85 .15 |
    | .20 .80 |
Example: Market Share Analysis
■ Revised Steady-State Probabilities

.85π1 + .20π2 = π1   (1)
.15π1 + .80π2 = π2   (2)
π1 + π2 = 1          (3)

Substituting π2 = 1 - π1 into (1) gives:

π1 = .85π1 + .20(1 - π1), so π1 = 4/7 ≈ .57

Substituting back into (3) gives:

π2 ≈ .43
Example: Market Share Analysis

Suppose that the total market consists of 6000 customers per week. Using the rounded steady-state value π2 = .43, the new promotional strategy will increase the number of customers doing their weekly shopping at Ashley's from 2000 to about 2580.
If the average weekly profit per customer is $10, the proposed promotional strategy can be expected to increase Ashley's profits by about $5800 per week. If the cost of the promotional campaign is less than $5800 per week, Ashley's should consider implementing the strategy.
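A minimal sketch of the whole calculation (Python, NumPy assumed; the steady_state helper is illustrative, not from the slides), combining both steady states with the market-size arithmetic above:

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi * P together with sum(pi) = 1."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0               # replace last balance equation by normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

old = steady_state(np.array([[0.90, 0.10], [0.20, 0.80]]))  # pi2 = 1/3
new = steady_state(np.array([[0.85, 0.15], [0.20, 0.80]]))  # pi2 = 3/7

market, profit_per_customer = 6000, 10.0
extra = (new[1] - old[1]) * market
print(round(extra), round(extra * profit_per_customer))
# ~571 extra customers and ~$5714/week before rounding; the slides round
# pi2 to .43, which gives the quoted 2580 customers and $5800 figures.
```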
EXAMPLE 2

Consider the following problem: company K, the manufacturer of a breakfast cereal, currently has some 25% of the market. Data from the previous year indicate that 88% of K's customers remained loyal that year, but 12% switched to the competition. In addition, 85% of the competition's customers remained loyal to the competition, but 15% of the competition's customers switched to K. Assuming these trends continue, determine K's share of the market:

in 2 years; and
in the long run.

This problem is an example of a brand-switching problem that often arises in the sale of consumer goods.
Observe that, each year, a customer can either be buying K's cereal or the competition's. Hence we can construct a diagram in which two circles represent the two states a customer can be in and the arcs represent the probability that a customer makes a transition each year between states. Note the circular arcs indicating a "transition" from one state to the same state. This diagram is known as the state-transition diagram. [The state-transition diagram appeared here as an image.]

Given that diagram, we can construct the transition matrix (usually denoted by the symbol P), which tells us the probability of making a transition from one state to another. Letting:

state 1 = customer buying K's cereal and
state 2 = customer buying the competition's cereal,

we have the transition matrix P for this problem given by

            To state
               1      2
From state 1 | 0.88  0.12 |
           2 | 0.15  0.85 |
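Both questions can be answered with a short sketch (Python, NumPy assumed): K's initial 25% share gives ∏(0) = [0.25 0.75], the two-year share is ∏(0) times P squared, and the long-run share is the steady state of P.

```python
import numpy as np

P = np.array([[0.88, 0.12],
              [0.15, 0.85]])
pi0 = np.array([0.25, 0.75])   # K currently has ~25% of the market

# K's share in 2 years: pi(2) = pi(0) * P^2
print(pi0 @ np.linalg.matrix_power(P, 2))   # approx. [0.393, 0.607]

# K's share in the long run: solve pi = pi * P with pi1 + pi2 = 1
A = P.T - np.eye(2)
A[-1, :] = 1.0
print(np.linalg.solve(A, np.array([0.0, 1.0])))  # approx. [0.556, 0.444]
```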
