
Chapter 4
Markov Analysis

Learning Objectives

After completing this chapter, students will be able to:

1. Determine future states or conditions by using Markov analysis.
2. Compute long-term or steady-state conditions by using only the matrix of transition probabilities.
3. Understand the use of absorbing state analysis in predicting future conditions.


Chapter Outline

4.1 Introduction
4.2 States and State Probabilities
4.3 Matrix of Transition Probabilities
4.4 Predicting Future Market Shares
4.5 Markov Analysis of Machine Operations
4.6 Equilibrium Conditions
4.7 Absorbing States and the Fundamental Matrix: Accounts Receivable Application

Introduction

• Markov analysis is a technique that deals with the probabilities of future occurrences by analyzing presently known probabilities.
• It has numerous applications in business.
• Markov analysis makes the assumption that the system starts in an initial state or condition.
• The probabilities of changing from one state to another are called a matrix of transition probabilities.
• Solving Markov problems requires basic matrix manipulation.


Introduction

This discussion will be limited to Markov problems that follow four assumptions:

1. There are a limited or finite number of possible states.
2. The probability of changing states remains the same over time.
3. We can predict any future state from the previous state and the matrix of transition probabilities.
4. The size and makeup of the system do not change during the analysis.

States and State Probabilities

• States are used to identify all possible conditions of a process or system.
• It is possible to identify specific states for many processes or systems.
• In Markov analysis we assume that the states are both collectively exhaustive and mutually exclusive.
• After the states have been identified, the next step is to determine the probability that the system is in each state.


States and State Probabilities

The information is placed into a vector of state probabilities:

π(i) = vector of state probabilities for period i
     = (π1, π2, π3, …, πn)

where
n = number of states
π1, π2, …, πn = probability of being in state 1, state 2, …, state n

In some cases it is possible to know with complete certainty in which state an item is located. For example, for a machine known to be in its first state in period 1, the vector of states can be represented as:

π(1) = (1, 0)

where
π(1) = vector of states for the machine in period 1
π1 = 1 = probability of being in the first state
π2 = 0 = probability of being in the second state

The Vector of State Probabilities for the Three Grocery Stores Example

• States for people in a small town with three grocery stores.
• A total of 100,000 people shop at the three groceries during any given month:
  – Forty thousand may be shopping at American Food Store – state 1.
  – Thirty thousand may be shopping at Food Mart – state 2.
  – Thirty thousand may be shopping at Atlas Foods – state 3.

The probabilities are as follows:

State 1 – American Food Store: 40,000/100,000 = 0.40 = 40%
State 2 – Food Mart: 30,000/100,000 = 0.30 = 30%
State 3 – Atlas Foods: 30,000/100,000 = 0.30 = 30%

These probabilities can be placed in the following vector of state probabilities:

π(1) = (0.4, 0.3, 0.3)

where
π(1) = vector of state probabilities for the three grocery stores for period 1
π1 = 0.4 = probability that a person will shop at American Food Store, state 1
π2 = 0.3 = probability that a person will shop at Food Mart, state 2
π3 = 0.3 = probability that a person will shop at Atlas Foods, state 3
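
As a quick illustration (ours, not part of the original slides), this period-1 vector can be built directly from the shopper counts; NumPy and the variable names are our own choices:

```python
# A minimal sketch, assuming NumPy: the period-1 vector of state
# probabilities for the grocery example, computed from shopper counts.
import numpy as np

shoppers = np.array([40_000, 30_000, 30_000])  # American Food, Food Mart, Atlas Foods
pi_1 = shoppers / shoppers.sum()               # normalize counts to probabilities

print(pi_1)  # [0.4 0.3 0.3]
```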

The Vector of State Probabilities for the Three Grocery Stores Example

• The probabilities in the vector of states represent the market shares for the three groceries.
• Management will be interested in how their market share changes over time.
• Figure 4.1 shows a tree diagram of how the market shares change in the next month.

Tree Diagram for the Three Grocery Stores Example

Figure 4.1 (branch probabilities and joint probabilities for next month):

American Food (0.4):  to #1 with 0.8 → 0.32 = 0.4(0.8)
                      to #2 with 0.1 → 0.04 = 0.4(0.1)
                      to #3 with 0.1 → 0.04 = 0.4(0.1)
Food Mart (0.3):      to #1 with 0.1 → 0.03 = 0.3(0.1)
                      to #2 with 0.7 → 0.21 = 0.3(0.7)
                      to #3 with 0.2 → 0.06 = 0.3(0.2)
Atlas Foods (0.3):    to #1 with 0.2 → 0.06 = 0.3(0.2)
                      to #2 with 0.2 → 0.06 = 0.3(0.2)
                      to #3 with 0.6 → 0.18 = 0.3(0.6)
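
The branch arithmetic in Figure 4.1 can be reproduced in a few lines; this sketch is ours and simply multiplies each current share by the transition probabilities shown on its branches:

```python
# A small sketch, assuming NumPy: joint probabilities from Figure 4.1,
# i.e. each current market share times a branch (transition) probability.
import numpy as np

pi_0 = np.array([0.4, 0.3, 0.3])      # current shares: American Food, Food Mart, Atlas Foods
P = np.array([[0.8, 0.1, 0.1],        # branch probabilities out of American Food
              [0.1, 0.7, 0.2],        # branch probabilities out of Food Mart
              [0.2, 0.2, 0.6]])       # branch probabilities out of Atlas Foods

joint = pi_0[:, None] * P             # joint[0, 0] = 0.4 * 0.8 = 0.32, and so on
print(joint)
```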


Matrix of Transition Probabilities

The matrix of transition probabilities allows us to get from a current state to a future state.

Let Pij = conditional probability of being in state j in the future given the current state is i.

For example, P12 is the probability of being in state 2 in the future given the event was in state 1 in the period before.

Let P = the matrix of transition probabilities:

    P11  P12  P13  …  P1n
P = P21  P22  P23  …  P2n
     ⋮                 ⋮
    Pm1   …           Pmn

• Individual Pij values are determined empirically.
• The probabilities in each row will sum to 1.


Transition Probabilities for the Three Grocery Stores

We used historical data to develop the following matrix:

    0.8  0.1  0.1
P = 0.1  0.7  0.2
    0.2  0.2  0.6

Row 1
0.8 = P11 = probability of being in state 1 after being in state 1 in the preceding period
0.1 = P12 = probability of being in state 2 after being in state 1 in the preceding period
0.1 = P13 = probability of being in state 3 after being in state 1 in the preceding period

Row 2
0.1 = P21 = probability of being in state 1 after being in state 2 in the preceding period
0.7 = P22 = probability of being in state 2 after being in state 2 in the preceding period
0.2 = P23 = probability of being in state 3 after being in state 2 in the preceding period

Row 3
0.2 = P31 = probability of being in state 1 after being in state 3 in the preceding period
0.2 = P32 = probability of being in state 2 after being in state 3 in the preceding period
0.6 = P33 = probability of being in state 3 after being in state 3 in the preceding period
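
As a minimal sketch (ours, assuming NumPy), this matrix can be stored as an array and checked against the property stated earlier that each row sums to 1:

```python
# A minimal sketch: the grocery-store matrix of transition probabilities,
# with a check that every row is a valid probability distribution.
import numpy as np

P = np.array([[0.8, 0.1, 0.1],   # row 1: from American Food Store
              [0.1, 0.7, 0.2],   # row 2: from Food Mart
              [0.2, 0.2, 0.6]])  # row 3: from Atlas Foods

assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to 1
```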


Predicting Future Market Shares

• One of the purposes of Markov analysis is to predict the future.
• Given the vector of state probabilities and the matrix of transition probabilities, it is not very difficult to determine the state probabilities at a future date.
• This type of analysis allows the computation of the probability that a person will be at one of the grocery stores in the future.
• Since this probability is equal to market share, it is possible to determine the future market shares of the grocery stores.

When the current period is 0, the state probabilities for the next period 1 are determined as follows:

π(1) = π(0)P

For any period n, we can compute the state probabilities for period n + 1:

π(n + 1) = π(n)P

Predicting Future Market Shares

The computations for the next period's market shares are:

π(1) = π(0)P

     = (0.4, 0.3, 0.3) 0.8  0.1  0.1
                       0.1  0.7  0.2
                       0.2  0.2  0.6

     = [(0.4)(0.8) + (0.3)(0.1) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.7) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.2) + (0.3)(0.6)]

     = (0.41, 0.31, 0.28)

• The market shares for American Food and Food Mart have increased, and the market share for Atlas Foods has decreased.
• We can determine whether this will continue by looking at what the state probabilities will be in the future.
• For two time periods from now:

π(2) = π(1)P
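
A short sketch (ours, assuming NumPy) of the π(1) calculation above, written as a vector–matrix product:

```python
# A minimal sketch: next-period market shares via pi(1) = pi(0) P.
import numpy as np

pi_0 = np.array([0.4, 0.3, 0.3])
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

pi_1 = pi_0 @ P   # row vector times the matrix of transition probabilities
print(pi_1)       # [0.41 0.31 0.28]
```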


Predicting Future Market Shares

Since we know that:

π(1) = π(0)P

we have:

π(2) = π(1)P = [π(0)P]P = π(0)PP = π(0)P²

In general (illustrated in the sketch below):

π(n) = π(0)Pⁿ

The question of whether American Food and Food Mart will continue to gain market share and Atlas Foods will continue to lose it is best addressed in terms of equilibrium, or steady-state, conditions.

Markov Analysis of Machine Operations

• The owner of Tolsky Works has recorded the operation of his milling machine for several years.
• Over the past two years, 80% of the time the milling machine functioned correctly during the current month if it had functioned correctly during the preceding month.
• 90% of the time the machine remained incorrectly adjusted if it had been incorrectly adjusted in the preceding month.
• 10% of the time the machine operated correctly in a given month when it had been operating incorrectly the previous month.
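
Returning to the grocery example above, the relationship π(n) = π(0)Pⁿ can be checked with a matrix power; this sketch is ours, assuming NumPy:

```python
# A minimal sketch: state probabilities n periods ahead via pi(n) = pi(0) P^n.
import numpy as np

pi_0 = np.array([0.4, 0.3, 0.3])
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

for n in (1, 2, 10):
    pi_n = pi_0 @ np.linalg.matrix_power(P, n)
    print(n, pi_n)   # shares drift toward the equilibrium discussed in Section 4.6
```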

Tolsky Works

The matrix of transition probabilities for this machine is:

P = 0.8  0.2
    0.1  0.9

where
P11 = 0.8 = probability that the machine will be correctly functioning this month given it was correctly functioning last month
P12 = 0.2 = probability that the machine will not be correctly functioning this month given it was correctly functioning last month
P21 = 0.1 = probability that the machine will be correctly functioning this month given it was not correctly functioning last month
P22 = 0.9 = probability that the machine will not be correctly functioning this month given it was not correctly functioning last month

What is the probability that the machine will be functioning correctly one and two months from now?

π(1) = π(0)P

     = (1, 0) 0.8  0.2
              0.1  0.9

     = [(1)(0.8) + (0)(0.1), (1)(0.2) + (0)(0.9)]
     = (0.8, 0.2)

Tolsky Works

What is the probability that the machine will be functioning correctly one and two months from now?

π(2) = π(1)P

     = (0.8, 0.2) 0.8  0.2
                  0.1  0.9

     = [(0.8)(0.8) + (0.2)(0.1), (0.8)(0.2) + (0.2)(0.9)]
     = (0.66, 0.34)

Table 4.1: State Probabilities for the Machine Example for 15 Periods

Period   State 1    State 2
1        1.000000   0.000000
2        0.800000   0.200000
3        0.660000   0.340000
4        0.562000   0.438000
5        0.493400   0.506600
6        0.445380   0.554620
7        0.411766   0.588234
8        0.388236   0.611763
9        0.371765   0.628234
10       0.360235   0.639764
11       0.352165   0.647834
12       0.346515   0.653484
13       0.342560   0.657439
14       0.339792   0.660207
15       0.337854   0.662145
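
Table 4.1 can be reproduced by applying π(n + 1) = π(n)P repeatedly; this short sketch is ours, assuming NumPy, and its printed values match the table up to rounding in the last digit:

```python
# A minimal sketch: iterate pi(n + 1) = pi(n) P for the Tolsky Works machine.
import numpy as np

P = np.array([[0.8, 0.2],    # from "functioning correctly"
              [0.1, 0.9]])   # from "not functioning correctly"

pi = np.array([1.0, 0.0])    # period 1: machine known to be functioning correctly
for period in range(1, 16):
    print(f"{period:2d}  {pi[0]:.6f}  {pi[1]:.6f}")
    pi = pi @ P              # advance one month
```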
