Module 9: Stochastic Models
This chapter presents probability models for processes that evolve over time in
a probabilistic manner. Such processes are called stochastic processes. After briefly
introducing general stochastic processes in the first section, the remainder of the chapter
focuses on a special kind called a Markov chain. Markov chains have the special property
that probabilities involving how the process will evolve in the future depend only on the
present state of the process, and so are independent of events in the past. Many processes
fit this description, so Markov chains provide an especially important kind of probability
model.
Stochastic Processes
Let Xt be a random variable that characterizes the state of the system at discrete points in
time t = 1, 2, .... The family of random variables {Xt} forms a stochastic process. The number
of states in a stochastic process may be finite or infinite. In the gardener example discussed
later in this chapter, for instance, the state space of Xt is finite because the soil condition can
be in only one of three states: good (1), fair (2), and poor (3).
Stochastic processes are of interest for describing the behavior of a system operating over
some period of time. A stochastic process often has the following structure.
The current status of the system can fall into any one of M + 1 mutually exclusive categories
called states. For notational convenience, these states are labeled 0, 1, . . . , M. The random
variable Xt represents the state of the system at time t, so its only possible values are 0, 1, .
. . . , M. The system is observed at particular points in time, labeled t = 0, 1, 2, . . . . Thus, the
stochastic process {Xt} = {X0, X1, X2, . . .} provides a mathematical representation of how
the status of the physical system evolves over time.
This kind of process is referred to as a discrete-time stochastic process with a finite
state space.
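As a purely illustrative sketch of this representation (not part of the original material), the following Python snippet simulates one realization of a discrete-time stochastic process on the finite state space {0, 1, . . . , M}; drawing the next state uniformly at random is an assumption made only to keep the example self-contained.

```python
import numpy as np

# Illustrative sketch: one realization of a discrete-time stochastic
# process {X0, X1, ..., XT} on the finite state space {0, 1, ..., M}.
# The uniform random choice of the next state is an assumption made
# only for illustration.
rng = np.random.default_rng(seed=0)

M = 2           # states are labeled 0, 1, ..., M
T = 10          # observation times t = 1, ..., T
X = [0]         # X0: the system starts in state 0

for t in range(1, T + 1):
    X.append(int(rng.integers(0, M + 1)))   # Xt takes a value in {0, ..., M}

print(X)        # the observed evolution of the system, e.g. [0, 2, 1, ...]
```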
Markov Chains
A stochastic process is a Markov process if the occurrence of a future state depends only on
the immediately preceding state. This means that, given the chronological times t0, t1, . . . , tn,
the family of random variables {Xt} = {X0, X1, X2, . . .} is said to be a Markov process if it
possesses the following (Markovian) property:

P{X_{tn} = x_n | X_{tn-1} = x_{n-1}, . . . , X_{t0} = x_0} = P{X_{tn} = x_n | X_{tn-1} = x_{n-1}}
In a Markovian process with n exhaustive and mutually exclusive states (outcomes), the
one-step transition probabilities at a specific point in time t = 0, 1, 2, . . . are usually written as

p_ij = P{X_t = j | X_{t-1} = i}, i, j = 1, 2, . . . , n

This is known as the one-step transition probability of moving from state i at t - 1 to state j at
t. By definition, we have

sum_j p_ij = 1, i = 1, 2, . . . , n
p_ij >= 0 for all i and j
The matrix P = ||p_ij|| of these one-step transition probabilities defines the so-called Markov
chain. It has the property that all its transition probabilities p_ij are fixed (stationary) and
independent of time.
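As an illustration of these requirements, the short Python sketch below builds a transition matrix (the numeric entries are placeholders, not values from the text) and checks that every entry is nonnegative and that every row sums to 1.

```python
import numpy as np

# Illustrative 3-state transition matrix; the entries are placeholder
# values, not taken from the text.
P = np.array([
    [0.2, 0.5, 0.3],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
])

# A (stationary) transition matrix must have nonnegative entries and
# each of its rows must sum to 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)

# p_ij = P{X_t = j | X_{t-1} = i}; e.g. the one-step probability of
# moving from the first state to the second:
print(P[0, 1])   # 0.5
```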
In the gardener problem, the use of fertilizer allows improvements in the otherwise
deteriorating soil condition. There is a 10% chance that the soil condition will change from
fair to good (state 2 to state 1), a 5% chance it will change from poor to good (state 3 to
state 1), and a 40% chance that a poor condition will become fair (state 3 to state 2).
This leads to a three-state Markov chain with the following transition matrix:
Given the initial probabilities a^(0) = {a_j^(0)} of starting in state j and the transition matrix P of a
Markov chain, the absolute probabilities a^(n) = {a_j^(n)} of being in state j after n transitions
(n > 0) are computed as follows:

a^(1) = a^(0) P
a^(2) = a^(1) P = a^(0) P^2
. . .
a^(n) = a^(n-1) P = a^(0) P^n
The matrix P^n is known as the n-step transition matrix. From these calculations we can see
that

P^n = P^(n-1) P
P^n = P^(n-m) P^m, 0 < m < n

These are known as the Chapman-Kolmogorov equations.
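A minimal numeric sketch of these relations, using placeholder values for P and a^(0): it forms the n-step matrix P^n, computes a^(n) = a^(0) P^n, and checks the identity P^n = P^(n-m) P^m.

```python
import numpy as np

# Placeholder transition matrix and initial distribution (illustrative only).
P = np.array([
    [0.2, 0.5, 0.3],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
])
a0 = np.array([1.0, 0.0, 0.0])      # start in the first state with certainty

n, m = 5, 2
Pn = np.linalg.matrix_power(P, n)   # n-step transition matrix P^n
an = a0 @ Pn                        # absolute probabilities a^(n) = a^(0) P^n

# Chapman-Kolmogorov check: P^n = P^(n-m) P^m for 0 < m < n
assert np.allclose(
    Pn,
    np.linalg.matrix_power(P, n - m) @ np.linalg.matrix_power(P, m),
)

print(np.round(an, 4))
```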
Example 7
The following transition matrix applies to the gardener problem with fertilizer:

The initial condition of the soil is good; that is, a^(0) = (1, 0, 0). Determine the absolute
probabilities of the three states of the system after 1, 8, and 16 gardening seasons.
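A sketch of this computation in Python follows. Note that only the entries 0.10 (fair to good), 0.05 (poor to good), and 0.40 (poor to fair) are stated in the text; the remaining entries of the matrix used here are assumed placeholders chosen so that each row sums to 1, so the printed probabilities are illustrative rather than the official solution.

```python
import numpy as np

# Gardener problem with fertilizer: states 1 = good, 2 = fair, 3 = poor
# (row/column indices 0, 1, 2 below). Only 0.10, 0.05, and 0.40 come from
# the text; the other entries are assumed placeholders that make each row
# sum to 1.
P = np.array([
    [0.30, 0.60, 0.10],   # good  (all three entries assumed)
    [0.10, 0.60, 0.30],   # fair  (0.10 from the text, rest assumed)
    [0.05, 0.40, 0.55],   # poor  (0.05 and 0.40 from the text, rest assumed)
])

a0 = np.array([1.0, 0.0, 0.0])   # initial condition: the soil is good

for n in (1, 8, 16):
    an = a0 @ np.linalg.matrix_power(P, n)
    print(f"a({n}) =", np.round(an, 4))
```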
The rows of P^8 and the vector of absolute probabilities a^(8) are almost identical. The result is
even more pronounced for P^16. This demonstrates that, as the number of transitions increases,
the absolute probabilities become independent of the initial distribution a^(0). The resulting
probabilities are then known as the steady-state probabilities.
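Rather than raising P to a large power, the steady-state probabilities can also be obtained directly by solving pi P = pi together with the normalization sum(pi) = 1; the sketch below does this for the same assumed matrix as in the previous sketch.

```python
import numpy as np

# Same assumed matrix as in the previous sketch (placeholder entries).
P = np.array([
    [0.30, 0.60, 0.10],
    [0.10, 0.60, 0.30],
    [0.05, 0.40, 0.55],
])

k = P.shape[0]
# Solve (P^T - I) pi = 0 together with the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(k), np.ones(k)])
b = np.append(np.zeros(k), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(pi, 4))        # steady-state distribution
print(np.round(pi @ P, 4))    # unchanged by one more transition
```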