Markov process

From Wikipedia, the free encyclopedia

In probability theory and statistics, a Markov process, named after the Russian mathematician Andrey Markov,
is a time-varying random phenomenon for which a specific property (the Markov property) holds. In a common
description, a stochastic process with the Markov property, or memorylessness, is one for which, conditional on
the present state of the system, its future and past are independent.[1]

Markov processes arise in probability and statistics in one of two ways. A stochastic process, defined via a
separate argument, may be shown (mathematically) to have the Markov property and as a consequence to have
the properties that can be deduced from this for all Markov processes. Of more practical importance is the use of
the assumption that the Markov property holds for a certain random process in order to construct, ab initio, a
stochastic model for that process. In modelling terms, assuming that the Markov property holds is one of a limited
number of simple ways of introducing statistical dependence into a model for a stochastic process, in such a way
that the strength of dependence at different lags declines as the lag increases.
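
To make this concrete, the following Python sketch (an illustration with arbitrarily chosen numbers, not part of
the formal theory) simulates a two-state discrete-time Markov chain and estimates the correlation between X(t)
and X(t+k) for growing lags k. For this particular chain the stationary lag-k autocorrelation works out to
(1 - p - q)^k, so the dependence decays geometrically with the lag:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two-state chain: from state 0 move to 1 with probability p,
    # from state 1 move to 0 with probability q (both chosen arbitrarily).
    p, q = 0.2, 0.3
    P = np.array([[1 - p, p],
                  [q, 1 - q]])   # row-stochastic transition matrix

    # Simulate a long sample path of the chain.
    n = 200_000
    x = np.empty(n, dtype=int)
    x[0] = 0
    for t in range(1, n):
        x[t] = rng.choice(2, p=P[x[t - 1]])

    # Empirical lag-k autocorrelation vs the theoretical value (1 - p - q)**k.
    for k in (1, 2, 5, 10):
        c = np.corrcoef(x[:-k], x[k:])[0, 1]
        print(f"lag {k:2d}: empirical {c:+.3f}, theoretical {(1 - p - q) ** k:+.3f}")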

Often, the term Markov chain is used to mean a Markov process which has a discrete (finite or countable) state-
space. Usually a Markov chain would be defined for a discrete set of times (i.e. a discrete-time Markov chain),[2]
although some authors use the same terminology where "time" can take continuous values.[3] See also continuous-
time Markov process.

For certain types of stochastic processes it is simple to formulate the condition specifying whether the Markov
property holds, while for others more sophisticated mathematics is required, as described in the article Markov
property. One simple instance relates to a stochastic process whose states X can take on a discrete set of values.
The states vary with time t and hence the values are denoted by X(t). The description here is the same irrespective
of whether the time-index is a continuous or a discrete variable. Consider any set of "past times" ( ..., p_2, p_1 ),
any "present time" s, and any "future time" t, where each of these times is within the range for which the
stochastic process is defined, and

    ... < p_2 < p_1 < s < t.

Then the Markov property holds, and the process is a Markov process, if the condition

    P[ X(t) = x(t) | X(s) = x(s), X(p_1) = x(p_1), X(p_2) = x(p_2), ... ] = P[ X(t) = x(t) | X(s) = x(s) ]

holds for all sets of values ( ..., x(p_2), x(p_1), x(s), x(t) ) and for all sets of times. The interpretation of
this is that the conditional probability

    P[ X(t) = x(t) | X(s) = x(s) ]

does not depend on any of the past values ( ..., x(p_2), x(p_1) ). This captures the idea that the future state is
independent of the past states, conditionally on the present state (i.e. it depends only on the present state).
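
As a rough numerical check of this condition (an illustrative sketch with an arbitrarily chosen three-state chain,
not a formal proof), one can estimate the conditional probability of a future value given the present state alone,
and then given the present state together with one extra past value; for a genuine Markov chain the two estimates
agree up to sampling noise:

    import numpy as np

    rng = np.random.default_rng(1)

    # Arbitrary row-stochastic transition matrix on states {0, 1, 2}.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    # Simulate a long sample path.
    n = 500_000
    x = np.empty(n, dtype=int)
    x[0] = 0
    for t in range(1, n):
        x[t] = rng.choice(3, p=P[x[t - 1]])

    # Align past (p_1 = s - 1), present (s), and future (t = s + 1) values.
    past, now, future = x[:-2], x[1:-1], x[2:]

    # Estimate P[X(t)=2 | X(s)=1] ...
    p_s = np.mean(future[now == 1] == 2)
    # ... and P[X(t)=2 | X(s)=1, X(p_1)=0], conditioning on one extra past value.
    p_sp = np.mean(future[(now == 1) & (past == 0)] == 2)

    print(f"P[X(t)=2 | X(s)=1]           ~ {p_s:.3f}")
    print(f"P[X(t)=2 | X(s)=1, X(p_1)=0] ~ {p_sp:.3f}")   # both near P[1, 2] = 0.3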

Markovian representations

In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by
expanding the concept of the 'current' and 'future' states. For example, let X be a non-Markovian process. Then
define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes
the form:

    Y(t) = { X(s) : s ∈ [a(t), b(t)] }

If Y has the Markov property, then it is a Markovian representation of X. In this case, X is also called a second-
order Markov process. Higher-order Markov processes are defined analogously.

An example of a non-Markovian process with a Markovian representation is a moving average time series.[citation needed]
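
As a concrete sketch of the construction above (with an arbitrary second-order update rule chosen purely for
illustration), the snippet below builds a process X whose next value depends on its last two values, and then lifts
it to the pair process Y(t) = ( X(t-1), X(t) ); Y is an ordinary first-order Markov chain on pairs of states:

    import numpy as np

    rng = np.random.default_rng(2)

    def step(prev2, prev1):
        """Arbitrary second-order rule: the law of X(t) depends on X(t-2) and X(t-1)."""
        p_one = 0.8 if prev2 == prev1 else 0.2
        return int(rng.random() < p_one)

    # X is not Markov on its own state space {0, 1}: one value is not enough history ...
    n = 10_000
    x = [0, 1]
    for _ in range(n):
        x.append(step(x[-2], x[-1]))

    # ... but Y(t) = (X(t-1), X(t)) is a first-order Markov chain on pairs:
    # the current pair alone determines the distribution of the next pair.
    y = list(zip(x[:-1], x[1:]))
    print(y[:5])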
