
6. DISCRETE MARKOV CHAINS

So far we have not considered the operation and the failure (repair) states together. We
have dealt with either the operation phase or the repair phase individually, where our concern was
the failure or the completion of the repair, respectively. However, electric power systems
operate on an operation-failure-operation-failure-… basis. Therefore, we have to extend our
existing models or introduce a new model that can represent the multiple states of the system.
Markov processes are a special case of stochastic processes and can be used for this type of
modeling.
A stochastic process, X(Error! Bookmark not defined.,t), is a function of two
independent variables.  denotes the states and t denotes the time in Markov processes. They
can be classified into 4 groups with respect to the type of the state and the time variables:
Continuous state, discrete time Markov processes,
Continuous state, continuous time Markov processes,
Discrete state, discrete time Markov processes, (Discrete Markov Chains)
Discrete state, continuous time Markov processes, (Continuous Markov Processes)

Since the states form a discrete (finite) set in reliability analysis, we will
concentrate on the discrete-state case.
On the other hand, we have to state that Markov modeling is used to evaluate the
probabilities of being in the various states in the future, and that these probabilities depend only
on the present state and are independent of the past history. As a consequence, the transition
rates between the states are constant.

6.1 Discrete Markov Modeling and State Tree


Assume that a unit has two possible states, represented by s1 and s2. Let us define the
following probabilities for the discrete time intervals considered:

P11: The probability of the unit remaining at s1 while it is at state s1.


P12: The probability of the unit moving to s2 while it is at state s1.
P21: The probability of the unit moving to s1 while it is at state s2.
P22: The probability of the unit remaining at s2 while it is at state s2.

(Two-state transition diagram: states s1 and s2, with self-transition probabilities P11 and P22 and transition probabilities P12 (s1 → s2) and P21 (s2 → s1), where P11 + P12 = 1 and P21 + P22 = 1.)

Figure 6.1 Discrete Markov model of a 2-state unit

Consider a system that is initially at state s1. During the first time interval the system can
remain at s1 or move to s2, with probabilities P11 and P12, respectively. The same principle
applies to the following time intervals with the known probabilities. Figure 6.2 illustrates the
state transitions and the corresponding probabilities of a unit over 4 consecutive time intervals,
where the system is initially at s1.

(Tree diagram: starting from s1, at each of the four time intervals the unit either remains at s1 or moves to s2. Every branch is labelled with the product of the transition probabilities along its path, e.g. $P_{11}^{4}$ for the path that remains at s1 throughout, $P_{11}^{3}P_{12}$ for the path that moves to s2 only at the last interval, and similar products such as $P_{11}P_{12}P_{21}P_{22}$ for the mixed paths.)

Figure 6.2 Markov Chain (Tree diagram) of a 2-state unit starting from s1
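
The branches of such a tree can also be enumerated programmatically. The following minimal Python sketch (the variable names are illustrative) lists the 2^4 paths of the 2-state unit starting from s1 together with the probability product attached to each branch of Figure 6.2:

```python
# Enumerate all 4-step paths of the 2-state unit starting from s1
# and print the product of transition probabilities along each branch.
from itertools import product

states = ("s1", "s2")
start = "s1"

for path in product(states, repeat=4):
    labels = []
    prev = start
    for nxt in path:
        labels.append(f"P{prev[1]}{nxt[1]}")   # e.g. "P12" for a move s1 -> s2
        prev = nxt
    print(" -> ".join((start,) + path), ":", " * ".join(labels))
```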

Table 6.1 lists the probabilities of being at state 1 and state 2 over 5 steps (time
intervals) for several different initial conditions, with the transition probabilities P11 = P12 = 1/2,
P21 = 1/4, P22 = 3/4. Figure 6.3 illustrates the probability of being at state 1 (P1) versus the step number.
The probability of being at state 2 is the complement of this probability, i.e. P2 = 1 − P1. It is
clear from Table 6.1 and Figure 6.3 that the limiting state probabilities approach the same
values regardless of the initial conditions. These limiting state probabilities are 1/3 and 2/3 for
P1 and P2, respectively. The initial conditions affect only the number of steps needed to reach
the limiting state probabilities, not their values.

Table 6.1 P1 and P2 probabilities for three different initial conditions.



Time       Initial state: s1         Initial state: s2          Initial state: 50% s1, 50% s2
interval   P1         P2             P1          P2             P1            P2
0          1          0              0           1              1/2           1/2
1          1/2        1/2            1/4         3/4            3/8           5/8
2          3/8        5/8            5/16        11/16          11/32         21/32
3          11/32      21/32          21/64       43/64          43/128        85/128
4          43/128     85/128         85/256      171/256        171/512       341/512
5          171/512    341/512        341/1024    683/1024       683/2048      1365/2048

(Plot of P1 versus the step number for the three initial conditions of Table 6.1; all three curves converge to the limiting value 1/3.)
Figure 6.3 P1 probabilities for different initial conditions.
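
The entries of Table 6.1 and the convergence shown in Figure 6.3 can be reproduced with a few lines of Python; the sketch below (using the standard fractions module, with the helper name step chosen for illustration) simply iterates $P^{T}(k+1) = P^{T}(k)\,P$ for the three initial conditions:

```python
from fractions import Fraction as F

# Transition probabilities of the 2-state unit: P11 = P12 = 1/2, P21 = 1/4, P22 = 3/4.
P = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

def step(p, P):
    """One time interval: p^T(k+1) = p^T(k) * P."""
    return [p[0] * P[0][0] + p[1] * P[1][0],
            p[0] * P[0][1] + p[1] * P[1][1]]

initials = {"start at s1": [F(1), F(0)],
            "start at s2": [F(0), F(1)],
            "50% s1, 50% s2": [F(1, 2), F(1, 2)]}

for name, p in initials.items():
    print(name)
    for k in range(1, 6):
        p = step(p, P)
        print(f"  step {k}: P1 = {p[0]}, P2 = {p[1]}")
# Every initial condition approaches P1 = 1/3, P2 = 2/3.
```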

In fact, in order for the limiting state probabilities to converge to non-zero values, there
should be a direct or indirect path from every state to every other state. Systems of this type are
called ergodic systems. Ergodic systems are defined as systems that do not contain
absorbing states. An absorbing state is a state from which the transition probabilities to all
other states are zero; that is, $P_{ii} = 1$ and $P_{ij} = 0$ for $j = 1, 2, \ldots, i-1, i+1, \ldots, n$.
It can be proved that the limiting probabilities of all non-absorbing
states are zero if a system contains one or more absorbing states.

6.2 Stochastic State Transitional Matrix


The state tree given in Figure 6.2 for a 2-state unit over 4 time steps is not practical for a
multi-state unit over a large number of time steps. Assume an n-state system whose states are
enumerated as s1, s2, ..., sn. Let Pij denote the probability of moving to state sj when the
system is at si. The matrix composed of the Pij probabilities is called the (stochastic) state
transition matrix:

$$P = \begin{bmatrix}
P_{11} & P_{12} & P_{13} & \cdots & P_{1n}\\
P_{21} & P_{22} & P_{23} & \cdots & P_{2n}\\
P_{31} & P_{32} & P_{33} & \cdots & P_{3n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
P_{n1} & P_{n2} & P_{n3} & \cdots & P_{nn}
\end{bmatrix},
\qquad \sum_{j=1}^{n} P_{ij} = 1, \quad i = 1, 2, \ldots, n$$

where the rows and columns are ordered as $s_1, s_2, \ldots, s_n$.

Then, for an initial state probability vector P(0),

$$P^{T}(1) = P^{T}(0)\,P$$

will give the state probabilities after the first time interval. The square of the state transition
matrix can be computed as
$$\left[P^{2}\right]_{ij} = P_{i1}P_{1j} + P_{i2}P_{2j} + \cdots + P_{in}P_{nj} = \sum_{k=1}^{n} P_{ik}P_{kj}$$
Pik is the probability of moving to sk when the system is at si at the first time step, and Pkj is the
probability of moving to sj when the system is at sk at the second time step. Then Pik Pkj is
the probability of moving from si to sj through sk over the two consecutive time
steps. Therefore, the entries of P² give the transition probabilities of the system over two
consecutive time steps (Figure 6.4).

(Diagram: from state si at the first step the system can move to any intermediate state s1, ..., sn; from the intermediate state sk it moves to sj at the second step, so the path through sk contributes the probability Pik Pkj. Summing over all intermediate states gives the entry [P²]ij.)
Figure 6.4 The paths of moving from state si to sj over two consecutive time steps.
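
This path-summation argument is easy to check numerically; the sketch below (numpy assumed) builds the entries of P² both from the sum over the intermediate states and by direct matrix multiplication, and the two results coincide:

```python
import numpy as np

# 2-state unit of Section 6.1: P11 = P12 = 1/2, P21 = 1/4, P22 = 3/4.
P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
n = P.shape[0]

# Entry (i, j) of P^2 as the sum over the intermediate state k: sum_k P_ik * P_kj.
P2_by_paths = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        P2_by_paths[i, j] = sum(P[i, k] * P[k, j] for k in range(n))

print(P2_by_paths)
print(P @ P)                               # the same matrix, computed directly
print(np.allclose(P2_by_paths, P @ P))     # True
```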

Then, for an initial state probability vector P(0),

$$P^{T}(2) = P^{T}(0)\,P^{2}$$

will give the probabilities after the second time interval. Similarly, the entries of $P^{k}$ give
the transition probabilities of the system after k consecutive time intervals. Therefore, for an
initial state probability vector P(0),

$$P^{T}(k) = P^{T}(0)\,P^{k}$$

will give the probabilities after k time steps.
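
A minimal numerical illustration of this relation (numpy assumed): raising P to the k-th power once gives the same state probabilities as applying P to the state vector k times.

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
p0 = np.array([1.0, 0.0])          # initially at s1

k = 5
p_iter = p0.copy()
for _ in range(k):
    p_iter = p_iter @ P            # repeated application of P

p_power = p0 @ np.linalg.matrix_power(P, k)    # P^T(k) = P^T(0) P^k
print(p_iter, p_power)             # both equal [171/512, 341/512]
```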

6.3 Limiting State Probabilities

P(0)T ,P(0)T P,P(0)T P2 ,P(0)T P3 ,...,P(0)T Pm ,P(0)T Pm1,...


will converge to limiting state probabilities (steady stat probabilities) for an ergodic system.
Let this limiting state probabilities represented by [ ]. It can be computed
by the simultaneous solution of the following equation.

P(0)T P m P(0)T P m 1  T
   T P (convergence criteria )
 T  1 2 ... n   P(0)T P m 
n
with  i  1
i 1

Example: Calculate the limiting state probabilities of the previous 2-state unit, where
P11=P12=1/2 , P21=1/4 and P22=3/4

$$\alpha^{T} = \alpha^{T}P \;\Rightarrow\;
[\alpha_{1}\ \ \alpha_{2}] = [\alpha_{1}\ \ \alpha_{2}]
\begin{bmatrix} 1/2 & 1/2\\ 1/4 & 3/4 \end{bmatrix}
\;\Rightarrow\;
\begin{cases}
-\tfrac{1}{2}\alpha_{1} + \tfrac{1}{4}\alpha_{2} = 0\\[2pt]
\tfrac{1}{2}\alpha_{1} - \tfrac{1}{4}\alpha_{2} = 0
\end{cases}$$

If one of these two linearly dependent equations is solved together with $\alpha_{1} + \alpha_{2} = 1$, then

$$\alpha_{1} = 1/3, \qquad \alpha_{2} = 2/3 .$$
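
The same result can be obtained numerically by solving $\alpha^{T} = \alpha^{T}P$ together with $\sum_{i}\alpha_{i} = 1$; a minimal sketch with numpy, mirroring the hand calculation (one redundant equation is absorbed by the least-squares solution):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
n = P.shape[0]

# alpha^T (P - I) = 0 together with the normalization sum(alpha) = 1.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])   # (n+1) x n coefficient matrix
b = np.zeros(n + 1)
b[-1] = 1.0

alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
print(alpha)                                     # [1/3, 2/3]
```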

6.4 Absorbing States


The states from which all the transition probabilities to the other states are zero are called
absorbing states; i.e., $P_{ii} = 1$ and $P_{ij} = 0$ for $j \neq i$. Absorbing states, and
the average number of steps before an absorbing state is reached, are important for reliability analysis. The
state transition matrix of a model including absorbing states can be decomposed into four parts
(with the states ordered so that the m non-absorbing states come first):

$$P = \begin{bmatrix} Q & R\\ 0 & I \end{bmatrix}$$
where
Q : (m×m) reduced state transition matrix containing the transition probabilities among the non-absorbing states,
R : m×(n−m) matrix containing the transition probabilities from the non-absorbing states to the absorbing states,
0 : (n−m)×m zero matrix, since no transition is possible from an absorbing state to a non-absorbing state,
I : (n−m)×(n−m) identity matrix containing the transition probabilities among the absorbing states.
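
Given the full transition matrix and the indices of the absorbing states, the Q and R blocks can be extracted as in the following sketch (numpy assumed; the helper name decompose and the 3-state example matrix are purely illustrative):

```python
import numpy as np

def decompose(P, absorbing):
    """Split P into Q (non-absorbing -> non-absorbing) and
    R (non-absorbing -> absorbing) blocks."""
    n = P.shape[0]
    non_abs = [i for i in range(n) if i not in absorbing]
    Q = P[np.ix_(non_abs, non_abs)]
    R = P[np.ix_(non_abs, sorted(absorbing))]
    return Q, R

# Illustrative 3-state unit in which the third state (index 2) is absorbing.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
Q, R = decompose(P, absorbing={2})
print(Q)   # 2x2 block among the non-absorbing states
print(R)   # 2x1 block toward the absorbing state
```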
If the powers of the transition matrix are evaluated,

$$P^{2} = \begin{bmatrix} Q^{2} & (I+Q)R\\ 0 & I \end{bmatrix},
\qquad
P^{3} = \begin{bmatrix} Q^{3} & (I+Q+Q^{2})R\\ 0 & I \end{bmatrix},
\qquad \ldots, \qquad
P^{k} = \begin{bmatrix} Q^{k} & (I+Q+\cdots+Q^{k-1})R\\ 0 & I \end{bmatrix}$$

$Q^{k}$ gives the transition probabilities among the non-absorbing states at the end of k steps.

Theorem: The probability of eventually being absorbed is one for models that include
absorbing state(s). That is, the limiting state probabilities of all non-absorbing states
will be zero. Mathematically,

$$\lim_{k\to\infty} Q^{k} = 0 \quad \text{for models comprising absorbing state(s).}$$

Proof: Each row of Q sums to at most one,

$$\sum_{j=1}^{m} Q_{ij} \le 1, \quad i = 1, 2, \ldots, m ,$$

and, because every non-absorbing state has a direct or indirect path to an absorbing state, the
largest row sum (if necessary, of a suitable power $Q^{l}$) is strictly less than one:

$$s_{\max} = \max_{i}\left(\sum_{j=1}^{m} Q_{ij}\right) < 1 .$$

Let $\mathbf{1} = [1\ 1\ \cdots\ 1]^{T}$ denote the vector whose entries are all one. Then

$$Q\,\mathbf{1} \le s_{\max}\,\mathbf{1}$$

$$Q^{k}\,\mathbf{1} = Q^{k-1}Q\,\mathbf{1} \le s_{\max}\,Q^{k-1}\,\mathbf{1} \le s_{\max}^{2}\,Q^{k-2}\,\mathbf{1} \le s_{\max}^{3}\,Q^{k-3}\,\mathbf{1} \le \cdots \le s_{\max}^{k}\,\mathbf{1}$$

$$\lim_{k\to\infty} s_{\max}^{k}\,\mathbf{1} = 0 \;\Rightarrow\; \lim_{k\to\infty} Q^{k}\,\mathbf{1} = 0 \;\Rightarrow\; \lim_{k\to\infty} Q^{k} = 0 .$$
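
The geometric decay used in the proof can be observed numerically; the sketch below (numpy assumed, with an arbitrary illustrative Q whose row sums are below one) compares the largest row sum of $Q^{k}$ with the bound $s_{\max}^{k}$:

```python
import numpy as np

# Illustrative reduced matrix Q; both row sums equal 0.8 < 1.
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])
s_max = Q.sum(axis=1).max()        # largest row sum of Q

Qk = np.eye(2)
for k in range(1, 11):
    Qk = Qk @ Q
    print(k, Qk.sum(axis=1).max(), s_max ** k)   # row sums of Q^k never exceed s_max^k
# Both quantities tend to zero, so Q^k -> 0.
```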

Let us now calculate the average number of time steps before an absorbing state is reached. This
is the average number of time steps that the system remains in the non-absorbing states. Let the
system initially be at a non-absorbing state si, and define

$$x_{ij}^{(k)} = \begin{cases} 1 & \text{if the system is at } s_{j} \text{ after } k \text{ time steps}\\ 0 & \text{otherwise} \end{cases}$$

$x_{ij}^{(k)}$ is a random variable. Then,

$$P\!\left(x_{ij}^{(k)} = 1\right) = \left[Q^{k}\right]_{ij} = q_{ij}^{(k)}, \qquad P\!\left(x_{ij}^{(k)} = 0\right) = 1 - q_{ij}^{(k)}$$

$$E\!\left(x_{ij}^{(k)}\right) = \sum x_{ij}^{(k)}\,P\!\left(x_{ij}^{(k)}\right) = 1\cdot q_{ij}^{(k)} + 0\cdot\left(1 - q_{ij}^{(k)}\right) = q_{ij}^{(k)}$$

Let $T_{ij}$ denote the total number of time steps that the system spends at $s_{j}$, given that it started at $s_{i}$, and let us calculate its expected value.

N 1
Tij 
( 0) (1) ( 2) ( N 1)
x ij  x ij  x ij ...x ij   x(ijl)
l 0
N 1 N 1  N 1 
Tij = E(Tij ) 
( 0) (1) ( N 1)
E( x ij )  E( x ij )...E( x ij )  (l)
E( x ij )   (l)
q ij 
  Ql 
 l 0 ij
l 0 l 0

Let T be the matrix whose entries are Tij, then,

T = E(T)  I  Q  Q2 ... QN 1



$\overline{T}_{ij} = E(T_{ij})$ : the average number of time steps spent at $s_{j}$ over the first $N-1$ time steps, given that the system was initially at $s_{i}$.

$\sum_{j=1}^{m} \overline{T}_{ij} = \sum_{j=1}^{m} E(T_{ij})$ : the sum of the average time steps spent at $s_{1}, s_{2}, \ldots, s_{m}$ (the non-absorbing states) over the first $N-1$ time steps, given that the system was initially at $s_{i}$.

$\lim_{N\to\infty} \sum_{j=1}^{m} \overline{T}_{ij}$ : the sum of the average time steps spent at $s_{1}, s_{2}, \ldots, s_{m}$ (the non-absorbing states) over an infinite number of time steps (that is, up to reaching an absorbing state), given that the system was initially at $s_{i}$.

N  N 

Lim T  Lim I  Q  Q  ...  Q 2 N 1
  Lim
I  QN
N  I  Q
 Lim
I
N  I  Q
 Lim I  Q1
N 
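
A short numerical check of this limit (numpy assumed), using the same Q that appears in the drunkard's-walk example below: the partial sums $I + Q + \cdots + Q^{N-1}$ approach $(I-Q)^{-1}$ as N grows.

```python
import numpy as np

Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

T_exact = np.linalg.inv(np.eye(3) - Q)       # (I - Q)^(-1)

T_partial = np.zeros((3, 3))
Qk = np.eye(3)
for _ in range(50):                          # I + Q + Q^2 + ... + Q^49
    T_partial += Qk
    Qk = Qk @ Q

print(T_exact)      # [[1.5, 1, 0.5], [1, 2, 1], [0.5, 1, 1.5]]
print(T_partial)    # essentially the same matrix
```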

Example (Drunkard's Walk): After finishing work, a man walks corner by corner; at every corner
he moves to the next corner on his left or to the next corner on his right with equal
probability, and his first move from the office is likewise to the left or to the right with
equal probability. His home is two corners to the left of his office and the bar is two corners
to the right. He stays at home or at the bar whenever he reaches there.
a) Construct the state space diagram of his walk.
b) Calculate the probabilities of being at the various locations after 4 walks for a man leaving
his office.
c) Calculate the limiting state probabilities.
d) Calculate the average number of walks before reaching home or the bar.

a)
(State space diagram: the corners are numbered s0 (home), s1, s2 (office), s3, s4 (bar). From each of s1, s2 and s3 the man moves to the neighbouring corner on either side with probability 1/2; s0 and s4 are absorbing, with self-transition probability 1.)

With the rows and columns ordered as s0, s1, s2, s3, s4,

$$P = \begin{bmatrix}
1 & 0 & 0 & 0 & 0\\
1/2 & 0 & 1/2 & 0 & 0\\
0 & 1/2 & 0 & 1/2 & 0\\
0 & 0 & 1/2 & 0 & 1/2\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}$$

b)
$$P^{T}(4) = P^{T}(0)\,P^{4} = [0\ \ 0\ \ 1\ \ 0\ \ 0]
\begin{bmatrix}
1 & 0 & 0 & 0 & 0\\
5/8 & 1/8 & 0 & 1/8 & 1/8\\
3/8 & 0 & 2/8 & 0 & 3/8\\
1/8 & 1/8 & 0 & 1/8 & 5/8\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
= [3/8\ \ 0\ \ 2/8\ \ 0\ \ 3/8]$$

c) Home and the bar are the two absorbing states. Therefore, the limiting state probabilities
of the remaining states are zero, i.e. $\alpha_{1} = \alpha_{2} = \alpha_{3} = 0$. Or,
mathematically,

1
0  0 
2
2
 1 0 0 0 0 
T T
0  0  1 
    1 / 2 0 1 / 2 0 0   2
 1  1    2 
1  3
 2    2   0 1 / 2 0 1 / 2 0   2
      2
3  3   0 0 1 / 2 0 1 / 2  3 
 4   4   0 0 0 0 1   2
 3
4   4
2
0  1   2  3   4  1
 1   2  3  0 ,0  4  1

d) s0 and s4 are absorbing states. If we decompose the transition matrix with the states reordered as s1, s2, s3, s0, s4,

$$P = \begin{bmatrix}
0 & 0.5 & 0 & 0.5 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.5 & 0 & 0 & 0.5\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix},
\qquad
Q = \begin{bmatrix}
0 & 0.5 & 0\\
0.5 & 0 & 0.5\\
0 & 0.5 & 0
\end{bmatrix}$$

1.5 1 0.5 T1  T 11  T 12  T 13  3
T  I  Q 1
  1 2 1  T2  T 21  T 22  T 23  4
 0.5 1 1.5 T3  T 31  T 32  T 33  3

$\overline{T}_{1}$, $\overline{T}_{2}$ and $\overline{T}_{3}$ denote the average total number of steps before reaching an
absorbing state (either home or the bar) for the initial locations s1, s2 and s3,
respectively.
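
The whole example can be verified with a short script (numpy assumed): part (b) follows from the fourth power of P, and part (d) from the fundamental matrix $(I-Q)^{-1}$.

```python
import numpy as np

# Full transition matrix, states ordered s0 (home), s1, s2 (office), s3, s4 (bar).
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

# (b) Probabilities after 4 walks, starting at the office (s2).
p0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(p0 @ np.linalg.matrix_power(P, 4))     # [3/8, 0, 2/8, 0, 3/8]

# (d) Average number of walks before absorption, via the reduced matrix Q.
Q = P[1:4, 1:4]                              # rows/columns of s1, s2, s3
T = np.linalg.inv(np.eye(3) - Q)             # fundamental matrix (I - Q)^(-1)
print(T.sum(axis=1))                         # [3, 4, 3] for starts at s1, s2, s3
```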
