
Contents

Introduction to the Technique
Where and How to use it
Practical Application
    I. Anti-Bus-Bunching Formula
    II. Elevator waiting times
How to run in Excel
    A simple Markov Chain
Bibliography



Introduction to the Technique
A Markov Chain (discrete-time Markov Chain, or DTMC) is a special kind of stochastic process in which the outcome of each experiment depends only on the outcome of the previous experiment (Kemeny, 2003). It is named after the Russian mathematician Andrey Markov; refer to Appendix 1.
A stochastic process is a sequence of random variables defined on the same sample space (Haviv, 2013). In simpler terms, it is a collection of random variables used to represent the evolution of a system over time.
A Markov Chain is often denoted by a triple such as (S, π, K), standing for the state space, the initial distribution and the transition probabilities. The first-order Markov chain is characterized by the property that the next state depends only on the current one:

P(X_{n+1} = x | X_1 = x_1, ..., X_n = x_n) = P(X_{n+1} = x | X_n = x_n)

Where and How to use it
1. Markov models are a good way to model local, overlapping sets of information, which helps us understand regions.
2. Markov chains provide a stochastic model of diffusion that applies to individual particles; this gives us a foundation for the diffusion we've studied.
3. This stochastic diffusion also provides a useful model of the spread of information (Sirl, 2005).
We can also use Markov chains to model contours, and they are used, explicitly or implicitly, in many contour-based segmentation algorithms. One of the advantages of 1D Markov models is that they lend themselves to dynamic programming solutions, as the sketch below illustrates (Win, 2006).
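To make the dynamic-programming point concrete, here is a minimal sketch in plain Python. The transition matrix and initial distribution are made-up values (the same small matrix used in the Excel example later in this document), not taken from the cited sources; best[t][j] holds the probability of the best length-t path ending in state j, and filling that table one step at a time is exactly the kind of dynamic programming a 1D Markov model allows.

# Dynamic programming on a 1D Markov chain: find the most probable state
# sequence of a fixed length (a Viterbi-style recursion without observations).
P = [[0.8, 0.1, 0.1],        # made-up transition probabilities
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]
start = [0.4, 0.3, 0.3]      # made-up initial distribution
T = 5                        # length of the state sequence
n = len(P)

best = [start[:]]            # best[t][j]: probability of the best path ending in state j at time t
back = []                    # back-pointers used to recover the path
for t in range(1, T):
    row, ptr = [], []
    for j in range(n):
        cand = [best[t - 1][i] * P[i][j] for i in range(n)]
        i_best = max(range(n), key=lambda i: cand[i])
        row.append(cand[i_best])
        ptr.append(i_best)
    best.append(row)
    back.append(ptr)

# Trace the back-pointers from the most probable final state.
state = max(range(n), key=lambda j: best[-1][j])
path = [state]
for ptr in reversed(back):
    state = ptr[state]
    path.append(state)
path.reverse()
print(path)                  # [0, 0, 0, 0, 0] for these numbers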
In a Markov chain, we use a sequence of random variables x_1, x_2, ..., x_n to describe the state of a system. What makes them Markov is a conditional independence property which tells us that each state depends only on the previous state. That is:

P(x_n | x_1, x_2, ..., x_{n-1}) = P(x_n | x_{n-1})

(Unknown, Lecture 3 Markov Chain Properties)
Diffusion of a single particle offers a simple example of this. Let x_i be the position of a particle at time i, and suppose that at each time step the particle jumps either one unit to the left or right, or stays in the same place. We can see that x_i depends on x_{i-1}, but that if we know x_{i-1} the earlier values of x are irrelevant. We can also see that this is a stochastic version of the deterministic diffusion we've studied.
If there are many particles, we can use the law of large numbers to assume that a fixed fraction of them jump to the left and to the right. Remember that the position x_i will have a Gaussian distribution for large i, because of the central limit theorem (Cambridge University Press, 2004).
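This behaviour can be checked with a short simulation. The following sketch is plain Python with illustrative step probabilities of 1/3 each for moving left, staying put and moving right (these values are assumptions, not taken from the text); it runs many independent walks and reports the mean and variance of the final positions, which behave like the Gaussian limit the central limit theorem predicts.

import random

def random_walk(steps, p_left=1/3, p_stay=1/3):
    # Simulate one particle that moves -1, 0 or +1 at every time step.
    position = 0
    for _ in range(steps):
        u = random.random()
        if u < p_left:
            position -= 1
        elif u < p_left + p_stay:
            pass          # stay in place
        else:
            position += 1
    return position

# Many independent particles: for a large number of steps the final positions
# are approximately Gaussian, as the central limit theorem predicts.
finals = [random_walk(1000) for _ in range(10000)]
mean = sum(finals) / len(finals)
var = sum((x - mean) ** 2 for x in finals) / len(finals)
print("mean ~ %.2f, variance ~ %.1f" % (mean, var))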
Markov chains have some great properties. One of them is that their steady state can be found by solving an eigenvalue problem; others include lending themselves to dynamic programming solutions.
If a Markov chain involves transitions between a discrete set of states, it is very useful to describe these transitions from state to state using vectors and matrices. Markov chains are used in physics, chemistry, medicine, music, game theory, genetics and sports (Unknown, CS UMD, 2007).
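To make the eigenvalue remark concrete, here is a minimal Python/numpy sketch. It reuses the 3-state transition matrix from the seller example later in this document and finds the steady state as the left eigenvector of the transition matrix associated with eigenvalue 1.

import numpy as np

# Transition matrix (the same one as the seller example later in this document);
# row i gives the probabilities of moving out of state i, so every row sums to 1.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

# The steady state pi satisfies pi P = pi, i.e. it is a left eigenvector of P
# with eigenvalue 1, so we take the eigenvectors of P transposed.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigenvalues - 1.0))
pi = np.real(eigenvectors[:, idx])
pi = pi / pi.sum()        # normalize so the probabilities sum to 1

print(pi)                 # approximately [0.42 0.26 0.32]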



Practical Application
Mathematics is very useful in our daily lives; it can solve problems related to household accounting, budgeting, economic forecasts, etc. Research has reported the application and usefulness of Markov Chains in a wide range of topics such as physics, chemistry, medicine, music, game theory and sports. Below we discuss some practical applications of Markov Chains:
I. Anti-Bus-Bunching Formula:
Bus-bunching is the term used when people wait for a bus for a considerable amount of time and, after boarding it, realize that another bus, which is virtually empty, is right behind them. It frustrates passengers, and this frustration caught the eye of Georgia Tech professor John Bartholdi and University of Chicago professor Donald Eisenstein. They used a discrete-time Markov chain (DTMC) model to come up with an anti-bus-bunching formula.
In short, this formula, if applied to any shuttle system that runs in a loop in which buses are
no more than about 12 to 15 minutes apart, can reduce wait time to approximately 6 minutes.
Passengers would know that at any given time a bus would arrive
within the acceptable time limit.
The DTMC equation the professors came up with has two lines. The first line describes how the headway (the space between buses) changes for the bus that is currently at the end of the route (the turnaround point). Alpha is a control parameter, a number, say 0.5, by which the bus manager chooses whether the bus should wait longer (and fix imbalances faster) or vice versa. The "v" is the average velocity of the buses.
The second line describes how the headways of the other buses change. Together, these equations describe how the headways change from one bus arrival t to the next bus arrival t+1. In other words, they predict the future behavior of all the buses (CNN Radio's Edgar Treiguts, 2012).
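The exact equations are not reproduced in the text above, so the following is only an illustrative sketch in the same spirit, not Bartholdi and Eisenstein's actual formula: at each arrival the headway of the bus at the turnaround point is nudged a fraction alpha toward the headway of the bus behind it, and repeating this around the loop is enough to see why bunching dies out.

# Illustrative only: a simple headway-balancing rule in the spirit of the idea
# described above, NOT the exact equations from Bartholdi and Eisenstein.
def balance_headways(headways, alpha=0.5):
    # Nudge the turnaround bus's headway a fraction alpha toward the headway
    # of the bus behind it; the trailing bus gives up the same amount of time.
    h = list(headways)
    correction = alpha * (h[1] - h[0])
    h[0] += correction
    h[1] -= correction
    return h[1:] + h[:1]   # rotate: the next bus reaches the turnaround point

headways = [2.0, 14.0, 8.0, 12.0]        # minutes between consecutive buses
for _ in range(20):                      # twenty successive bus arrivals
    headways = balance_headways(headways)
print([round(h, 1) for h in headways])   # headways drift toward equal spacing

With alpha = 0.5 the headways settle near their common average (9 minutes here) after a few circuits of the loop; a smaller alpha holds each bus for less time but corrects imbalances more slowly.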




II. Elevator waiting times
Mitsubishi Electric Research Laboratories presented a paper in December 2004 stating that they had developed an efficient algorithm for exact calculation and minimization of the expected waiting times of all passengers using a bank of elevators. They used a DTMC along with dynamic programming to compute measures of future system performance, such as expected waiting time, properly averaged over all possible future scenarios. Below is the description the paper gives of their model.

Simplified trellis structure for the embedded Markov Chain of a single descending car: rows signify floors; columns signify the number of recently boarded passengers; column groups signify elevator speeds. The empty descending car is about to reach the branching point for a possible stop at floor 13. It has been assigned hall calls at floors 7 and 11, each of which may increase the passenger load by one (Mitsubishi, 2004).
How to run in Excel
In Excel, it is usually straightforward to use formulas that refer to single cells. However, multiplying, inverting, or taking powers of matrices is hard if you try to calculate each matrix element separately. A set of formulas called array formulas makes this easier, but there are a few small tricks.
Type in two 4x4 matrices in Excel. For example:

Multiply the two matrices using the MMULT formula. The trick here is to first select the area where the result should appear, then type in the formula, and then press Ctrl+Shift+Enter.
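For example, assuming the two matrices were entered in the ranges A1:D4 and F1:I4 (the actual ranges depend on where you typed them), you would select an empty 4x4 range, type =MMULT(A1:D4,F1:I4) and press Ctrl+Shift+Enter; Excel then fills the whole selected range with the product.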

There are many array formulas in Excel. The ones you will find most useful in this class are MMULT
and MINVERSE (for taking the inverse of a square array).


A simple Markov Chain
Lets take a simple example of modeling Markov Chain. Suppose there are 10,000 customers of a
product of which 40% belong to A, 30%belong to B and 30% belong to C, at current year. Where A, B
and C are the Sellers of the product and the current year being = 0. Now suppose weve been asked
to find the share of each seller using the probabilities of the transition and vector matrix for next year.
Assuming this transition matrix below. These are the probabilities given (Vose, 2007).

P =
    0.8  0.1  0.1
    0.2  0.6  0.2
    0.1  0.2  0.7

Start with a brand new Excel sheet. Enter your P matrix just as you would write it. A format like the one below is usually helpful:

Fill in the matrix. Then add the vector of current-year shares (0.4, 0.3, 0.3) on the spreadsheet as well.

Now we need to select a new area in which to compute the probabilities for the next year, using the MMULT formula with the vector of current shares as array1 and the transition matrix as array2 (Australian Education, 2007).

Now we'll lock the transition-matrix range I4:K6 by selecting it in the formula and pressing F4, so that the reference becomes absolute. Then we press Ctrl+Shift+Enter to evaluate the product.
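For instance, if the current-year shares were entered in H2:J2 and the transition matrix in I4:K6 (hypothetical ranges; use whatever cells you actually chose), the completed array formula would read =MMULT(H2:J2,$I$4:$K$6), entered over a 1x3 output range with Ctrl+Shift+Enter.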

Then

So by using Markov Chains we find that next year's shares for sellers A, B and C would be 0.41, 0.28 and 0.31 respectively.
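The same calculation can be double-checked outside Excel. Here is a minimal Python/numpy sketch of the computation described above, using the initial shares 0.4, 0.3, 0.3 and the transition matrix P given earlier.

import numpy as np

# Transition matrix from the example above: rows are the current seller
# (A, B, C), columns are the seller a customer moves to next year.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

shares = np.array([0.4, 0.3, 0.3])   # current-year shares of A, B and C

next_year = shares @ P               # the same product Excel's MMULT computes
print(next_year)                     # [0.41 0.28 0.31]
# Out of 10,000 customers that is 4,100 for A, 2,800 for B and 3,100 for C.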



Bibliography
Australian Education. (2007, November). Retrieved from
http://courses.washington.edu/inde311/Lab%203%20Instructions.doc
Cambridge University Press. (2004, September). Markov Chains. Retrieved from Statslab:
http://www.statslab.cam.ac.uk/~james/Markov/
CNN Radio's Edgar Treiguts. (2012). Lightyears blog. Retrieved April 19, 2014, from
http://lightyears.blogs.cnn.com/2012/05/16/waiting-for-a-bus-math-may-help/?hpt=hp_t3
Haviv, M. (2013). Introduction to Markov Chains. In Queues (pp. 37-50). Springer.
Kemeny, J. (2003). Markov Chains. In P. Education (Ed.). Addison-Wesley.
Mitsubishi. (2004). Elevator Sequencing.
Sirl, D. (2005, April). Markov Chains: An introduction. Retrieved from
http://www.maths.uq.edu.au/MASCOS/Markov05/Sirl.pdf
Unknown. (2007). CS UMD. Retrieved from
http://www.cs.umd.edu/~djacobs/CMSC828seg/MarkovModels.pdf
Unknown. (n.d.). Lecture 3 Markov Chain Properties.
Vose. (2007). Vose Software. Retrieved from
http://www.vosesoftware.com/ModelRiskHelp/index.htm#Time_series/Markov_chain_models.htm
Win. (2006). Win.tue. Retrieved from Win: http://www.win.tue.nl/~iadan/sdp/h3.pdf
http://www.youtube.com/watch?v=jmmsXBk0X64





Appendix 1: Russian mathematician, Andrey Markov
