L18
Recap
Running example: Coin with memory!
▶ In a Markovian coin with memory, the outcome of the next toss depends on the current toss.
▶ The transition probability matrix $P$ for the two cases is
$$P_s = \begin{bmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{bmatrix} \quad \text{and} \quad P_f = \begin{bmatrix} 0.1 & 0.9 \\ 0.7 & 0.3 \end{bmatrix}.$$
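The coin's dynamics are easy to simulate: draw each toss from the row of $P_s$ indexed by the current toss. A minimal sketch (the `simulate` helper, the state encoding, and the seed are illustrative, not from the slides):

```python
import random

# Transition matrix Ps from the slide: index 0 = heads, index 1 = tails.
# Row i is the distribution of the next toss given the current toss is i.
Ps = [[0.9, 0.1],
      [0.2, 0.8]]

def simulate(P, start, n, seed=0):
    """Sample a path X_0, ..., X_n by drawing each step from row P[X_k]."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        i = path[-1]
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[i]):
            cum += p
            if u < cum:
                break
        path.append(j)
    return path

path = simulate(Ps, start=0, n=100_000)
frac_heads = path.count(0) / len(path)
# For Ps the long-run fraction of heads works out to 0.2 / (0.1 + 0.2) = 2/3.
```

Running the chain for many steps gives an empirical fraction of heads close to 2/3, foreshadowing the limiting-probability discussion later in the lecture.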
Running example: Dice with memory!
▶ In a Markovian die with memory, the outcome of the next roll depends on the current roll.
▶ $X_n = i$ for $i \in S$ where $S = \{1, \ldots, 6\}$.
▶ Example transition probability matrix:
$$P = \begin{bmatrix} 0.9 & 0.1 & 0 & 0 & 0 & 0 \\ 0 & 0.9 & 0.1 & 0 & 0 & 0 \\ 0 & 0 & 0.9 & 0.1 & 0 & 0 \\ 0 & 0 & 0 & 0.9 & 0.1 & 0 \\ 0 & 0 & 0 & 0 & 0.9 & 0.1 \\ 0.1 & 0 & 0 & 0 & 0 & 0.9 \end{bmatrix}$$
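The pattern in this $P$ (each face repeats with probability 0.9 and advances cyclically with probability 0.1) can be built and validated programmatically. A small sketch, with the cyclic reading of the matrix taken from the slide:

```python
# Build the 6-state matrix from the slide: each face stays put with
# probability 0.9 and advances to the next face (6 wraps to 1) with 0.1.
n = 6
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = 0.9
    P[i][(i + 1) % n] = 0.1

# Any transition probability matrix must have nonnegative entries
# and rows that sum to 1 (each row is a conditional distribution).
assert all(p >= 0 for row in P for p in row)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```

The row-sum check is worth keeping as a habit: it catches most transcription errors when entering transition matrices by hand.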
Finite dimensional distributions
Chapman Kolmogorov Equations for DTMC
▶ Consider a Markov coin and its transition probability matrix
$$P = \begin{bmatrix} p_{1,1} & p_{1,-1} \\ p_{-1,1} & p_{-1,-1} \end{bmatrix}.$$
▶ In general, $P^{(n)} = P^n$.
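The identity $P^{(n)} = P^n$ is just repeated matrix multiplication, and the Chapman–Kolmogorov sum can be checked entry by entry. A quick numerical sketch (the `matmul`/`matpow` helpers are illustrative, and the 2×2 matrix reuses the coin chain $P_s$ from earlier):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    """n-step transition matrix P^(n) = P^n via repeated multiplication."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

P = [[0.9, 0.1],
     [0.2, 0.8]]

# Chapman-Kolmogorov for n = 2: p_ij^(2) = sum over k of p_ik * p_kj.
P2 = matpow(P, 2)
ck_01 = sum(P[0][k] * P[k][1] for k in range(2))  # p_{0,1}^{(2)} by hand
assert abs(P2[0][1] - ck_01) < 1e-12
```

Here $p_{0,1}^{(2)} = 0.9 \cdot 0.1 + 0.1 \cdot 0.8 = 0.17$, matching the matrix-power entry.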
Classification of states
▶ A state $j$ is said to be accessible from state $i$ if $p_{ij}^{(n)} > 0$ for some $n \geq 0$. This is denoted by $i \to j$.
Recurrent and Transient states
▶ We say that a state $i$ is recurrent if
$$F_{ii} = P(\text{ever returning to } i \text{ having started in } i) = 1.$$
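For a small finite chain like the coin, $F_{ii}$ can be estimated by simulation: start in $i$, run until the chain comes back (with a cap on the horizon), and report the fraction of runs that return. A sketch, with the trial count, step cap, and seed chosen arbitrarily:

```python
import random

def estimate_return_prob(P, i, trials=2000, max_steps=500, seed=0):
    """Monte Carlo estimate of F_ii = P(ever return to i | start in i)."""
    rng = random.Random(seed)

    def step(s):
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[s]):
            cum += p
            if u < cum:
                return j
        return len(P[s]) - 1

    returned = 0
    for _ in range(trials):
        s = step(i)                  # first step away from (or back to) i
        for _ in range(max_steps):
            if s == i:
                returned += 1
                break
            s = step(s)
    return returned / trials

# Every state of the (irreducible, finite) Markov coin is recurrent,
# so the estimate should be essentially 1.
Fii = estimate_return_prob([[0.9, 0.1], [0.2, 0.8]], i=0)
```

The step cap makes this a slight underestimate in general; for this chain the chance of not returning within 500 steps is negligible.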
Limiting probabilities
▶ $P = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0.6 & 0.4 \\ 0.6 & 0.4 & 0 \end{bmatrix}$, $\quad P^5 = \begin{bmatrix} 0.06 & 0.3 & 0.64 \\ 0.18 & 0.38 & 0.44 \\ 0.38 & 0.44 & 0.18 \end{bmatrix}$, $\quad P^{30} = \begin{bmatrix} 0.23 & 0.385 & 0.385 \\ 0.23 & 0.385 & 0.385 \\ 0.23 & 0.385 & 0.385 \end{bmatrix}$
▶ $P = \begin{bmatrix} 1-a & a \\ b & 1-b \end{bmatrix}$, $\quad \lim_{n \to \infty} P^n = \begin{bmatrix} \frac{b}{a+b} & \frac{a}{a+b} \\ \frac{b}{a+b} & \frac{a}{a+b} \end{bmatrix}$
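The row-by-row convergence of $P^n$ to a common distribution can be reproduced numerically. A sketch using plain-Python matrix powers (the `matmul`/`matpow` helpers are illustrative; the exact limit $(3/13, 5/13, 5/13) \approx (0.23, 0.385, 0.385)$ matches the $P^{30}$ above):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    """Compute P^n via repeated multiplication."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

P = [[0.0, 0.0, 1.0],
     [0.0, 0.6, 0.4],
     [0.6, 0.4, 0.0]]

P30 = matpow(P, 30)
P100 = matpow(P, 100)

limit = [3 / 13, 5 / 13, 5 / 13]
# At n = 30 the rows already agree to about 2-3 decimals; by n = 100
# every row matches the limiting distribution to high precision.
assert all(abs(P30[i][j] - limit[j]) < 0.02 for i in range(3) for j in range(3))
assert all(abs(P100[i][j] - limit[j]) < 1e-6 for i in range(3) for j in range(3))
```

That the limit has identical rows is exactly the point of the slide: the distribution of $X_n$ forgets the initial state.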
Stationary distribution