RWANDA
DEPARTMENT OF APPLIED STATISTICS – SCHOOL OF ECONOMICS
AST 3231 STOCHASTIC PROCESSES
STOCHASTIC PROCESS LECTURE NOTE
BY
LAWAL F. K. (Ph.D)
Module Description
• Module Code: AST 3231
• Module Title: Stochastic Processes
• Level: 3, Semester: 1
• Credits: 15
• Administering School: Economics
• Department: Applied Statistics
• Year of Presentation: 2020-2021
• Pre-requisite or co-requisite modules:
• Advanced mathematics
• Inferential statistics
• Probability
STOCHASTIC PROCESS
A stochastic process is a collection of random variables indexed by time; the set of values the process can take is called the state space, the set of all states.
Note
A Markov chain whose initial distribution is a stationary distribution π will be a stationary stochastic process. We will also see that we can find π by solving the balance equations π = πP together with the normalisation Σ_i π_i = 1.
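For illustration, the short sketch below (using a made-up 3-state transition matrix, not one from these notes) finds π numerically by solving π = πP together with Σ_i π_i = 1:

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); chosen only for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

n = P.shape[0]
# Solve pi P = pi together with sum(pi) = 1.
# Equivalently pi (P - I) = 0, so stack (P - I)^T with a row of ones.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)
print("check pi @ P:", pi @ P)   # should reproduce pi
```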
Communication classes and irreducibility for Markov chains. For a Markov chain with state space S, consider a pair of states (i, j). We say that j is reachable from i, denoted by i → j, if there exists an integer n ≥ 0 such that P^n_{ij} > 0. This means that starting in state i, there is a positive probability (but not necessarily equal to 1) that the chain will be in state j at time n (that is, n steps later); P(X_n = j | X_0 = i) > 0. If j is reachable from i, and i is reachable from j, then the states i and j are said to communicate, denoted by i ←→ j. The relation ←→ is an equivalence relation; it partitions the state space S into communication classes. If all states communicate, so that there is only one class, the chain is said to be irreducible.
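As a quick illustration (the 4-state matrix below is hypothetical and only for demonstration), reachability and the communication classes can be computed directly from P:

```python
import numpy as np

# Hypothetical 4-state chain: states 0 and 3 are absorbing, states 1 and 2 communicate.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.3, 0.0, 0.7, 0.0],
              [0.0, 0.6, 0.0, 0.4],
              [0.0, 0.0, 0.0, 1.0]])

n = P.shape[0]
# reach[i, j] is True if j is reachable from i in some number of steps n >= 0.
reach = np.eye(n, dtype=bool) | (P > 0)
for _ in range(n):
    reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

communicate = reach & reach.T   # i <-> j iff each is reachable from the other
classes = {frozenset(int(j) for j in np.flatnonzero(communicate[i])) for i in range(n)}
print(sorted(sorted(c) for c in classes))   # [[0], [1, 2], [3]]
```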
A state of a Markov chain is called absorbing if it is impossible to leave it (P_{ii} = 1), and a chain is an absorbing chain if it has at least one absorbing state and from every state it is possible to reach an absorbing state. In the Drunkard’s Walk example the chain has absorbing states 0 and 4. Hence the chain is an absorbing chain. When a process reaches an absorbing state it stays there forever; we say it has been absorbed.
The most obvious question that can be asked about such a chain is: What is the probability that the process will eventually reach an absorbing state? Other interesting questions include: (a) What is the probability that the process will end up in a given absorbing state? (b) On average, how long will it take for the process to be absorbed? (c) On average, how many times will the process be in each transient state? The answers to all these questions depend, in general, on the state from which the process starts as well as on the transition probabilities.
Canonical Form. Consider an arbitrary absorbing Markov chain. Renumber the states so that the transient states come first. If there are r absorbing states and t transient states, the transition matrix will have the following canonical form:

P = [ Q  R ]
    [ 0  I ]

Here I is an r-by-r identity matrix, 0 is an r-by-t zero matrix, R is a nonzero t-by-r matrix, and Q is a t-by-t matrix. The first t states are transient and the last r states are absorbing. In previous lectures we saw that the entry P^n_{ij} of the matrix P^n is the probability of being in state j after n steps when the chain is started in state i.
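As an added sketch (assuming the standard Drunkard’s Walk on states 0–4, absorbing at 0 and 4, with probability 1/2 of stepping left or right), the chain is put in canonical form by listing the transient states 1, 2, 3 first:

```python
import numpy as np

# Drunkard's Walk on states 0..4; 0 and 4 absorbing, otherwise move left/right with prob. 1/2.
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

transient = [1, 2, 3]      # t = 3 transient states
absorbing = [0, 4]         # r = 2 absorbing states
order = transient + absorbing

P_canon = P[np.ix_(order, order)]   # reordered rows/columns: [[Q, R], [0, I]]
t = len(transient)
Q = P_canon[:t, :t]                 # transitions among transient states
R = P_canon[:t, t:]                 # transitions from transient to absorbing states
print(P_canon)
```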
In the following, if u and v are two vectors we say that u ≤ v if all components of u are less than or equal to the corresponding components of v. Similarly, if A and B are matrices then A ≤ B if each entry of A is less than or equal to the corresponding entry of B.
Probability of Absorption
Theorem. In an absorbing Markov chain, the probability that the process will be absorbed is 1 (i.e., Q^n → 0 as n → ∞).
Proof. From each non-absorbing state j it is possible to reach an absorbing state. Let m_j be the minimum number of steps required to reach an absorbing state, starting from j. Let p_j be the probability that, starting from j, the process will not reach an absorbing state in m_j steps. Then p_j < 1. Let m be the largest of the m_j and let p be the largest of the p_j. The probability of not being absorbed in m steps is less than or equal to p, in 2m steps less than or equal to p^2, etc. Since p < 1, these probabilities tend to 0.
Since the probability of not being absorbed in n steps is monotone decreasing, these probabilities also tend to 0; hence Q^n → 0 as n → ∞.
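A quick numerical check of the theorem (an added sketch using the Q block of the Drunkard’s Walk above):

```python
import numpy as np

# Q block of the Drunkard's Walk chain (transient states 1, 2, 3).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

for n in (1, 5, 20, 50):
    Qn = np.linalg.matrix_power(Q, n)
    print(n, Qn.max())   # the largest entry of Q^n shrinks toward 0 as n grows
```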
Definition. For an absorbing Markov chain P, the matrix N = (I − Q)^{-1} is called the fundamental matrix for P. The entry n_{ij} of N gives the expected number of times that the process is in the transient state j if it is started in the transient state i.
Example. In the Drunkard’s Walk example, if the process starts in state 2, the expected numbers of times it is in the transient states 1, 2, and 3 before being absorbed are 1, 2, and 1, respectively.
Time to Absorption. We now consider the question: Given that the chain starts in state i, what is the expected number of steps before the chain is absorbed?
Theorem. Let t_i be the expected number of steps before the chain is absorbed, given that the chain starts in state i, and let t be the column vector whose ith entry is t_i. Then t = Nc, where c is a column vector all of whose entries are 1.
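The sketch below (an added illustration using the Drunkard’s Walk blocks Q and R from above) computes N and t = Nc, and also B = NR, whose entry b_{ij} is the standard expression for the probability of ending in absorbing state j starting from transient state i (this last formula is included for completeness and is not quoted in the notes above):

```python
import numpy as np

# Drunkard's Walk: transient states 1, 2, 3; absorbing states 0 and 4.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],    # from state 1 to absorbing states 0 and 4
              [0.0, 0.0],    # from state 2
              [0.0, 0.5]])   # from state 3

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
t = N @ np.ones(3)                 # expected steps to absorption from each transient state
B = N @ R                          # absorption probabilities (row: start state, column: absorbing state)

print("N =\n", N)   # middle row is [1, 2, 1], matching the example
print("t =", t)     # [3, 4, 3]
print("B =\n", B)   # from state 2: probability 0.5 of ending in 0, 0.5 of ending in 4
```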
Consider a gambler who at each play of the game has probability p of winning one unit and probability q = 1 − p of losing one unit. Assume that successive plays of the game are independent. What is the probability that, starting with i units, the gambler’s fortune will reach N before reaching 0?
Let X_n denote the player’s fortune at time n. Then the process {X_n, n = 0, 1, 2, ...} is a Markov chain with transition probabilities P_{00} = P_{NN} = 1 and P_{i,i+1} = p = 1 − P_{i,i−1} for i = 1, 2, ..., N − 1.
This chain has three classes, namely {0}, {1, 2, ..., N − 1}, and {N}. The first and third classes are recurrent; the second is transient. Since each transient state is visited only finitely often, it follows that after some finite amount of time the gambler will either attain his goal of N or go broke.
Let P_i, i = 0, 1, 2, ..., N, denote the probability that, starting from i, the gambler’s fortune will eventually reach N.
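For completeness, a standard first-step-analysis sketch (conditioning on the outcome of the first play) yields the familiar closed form for P_i:

```latex
% Condition on the first play: win (probability p) or lose (probability q = 1 - p).
P_i = p\,P_{i+1} + q\,P_{i-1}, \qquad i = 1,\dots,N-1, \qquad P_0 = 0,\ P_N = 1.
% Rearranging gives P_{i+1} - P_i = (q/p)(P_i - P_{i-1}), so the increments form a
% geometric progression; summing them and using P_N = 1 yields
P_i =
\begin{cases}
\dfrac{1 - (q/p)^{i}}{1 - (q/p)^{N}}, & p \neq \tfrac12,\\[2ex]
\dfrac{i}{N}, & p = \tfrac12.
\end{cases}
```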
Example
Suppose Max and Bill decide to flip pennies; the one coming closest to the wall wins. Bill, being the better player, has probability 0.6 of winning on each flip. If Bill starts with 5 RF and Max with 10 RF, what is the probability that Bill will wipe Max out?
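Assuming each flip is played for a stake of 1 RF (the stake is not stated in the example), Bill’s fortune follows the gambler’s ruin chain with i = 5, N = 15 and p = 0.6, so the formula above applies. A quick numerical check:

```python
# Gambler's ruin: probability that a fortune starting at i reaches N before 0.
def ruin_win_prob(i, N, p):
    if p == 0.5:
        return i / N
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)

# Bill starts with 5 RF, the combined capital is 15 RF, and Bill wins each flip
# with probability 0.6 (assuming a 1 RF stake per flip).
print(ruin_win_prob(5, 15, 0.6))   # approximately 0.87
```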
Example. The President tells person A his intention to run or not to run in the next election. Then A relays the news to B, who in turn relays the message to C, and so forth, always to some new person; at each relay there is a fixed probability that the answer is changed. The successive answers form a two-state Markov chain.
Example. A miner is trapped in a mine containing three doors. The first door leads to a tunnel which takes him to safety after 2 hours of travel. The second door leads to a tunnel which returns him to the mine after 3 hours of travel. The third door leads to a tunnel which returns him to the mine after 5 hours. Assuming that the miner is at all times equally likely to choose any one of the doors, what is the expected length of time until the miner reaches safety?
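One way to attack this (an added sketch): conditioning on the first door chosen gives E = (1/3)(2) + (1/3)(3 + E) + (1/3)(5 + E) for the expected time E, which can be solved directly. The simulation below provides a numerical check under the stated assumptions:

```python
import random

# Monte Carlo check of the miner problem: a door is chosen uniformly at random each time.
# Door 1: safety after 2 hours; door 2: back to the mine after 3 hours; door 3: back after 5 hours.
def time_to_safety():
    t = 0.0
    while True:
        door = random.choice((1, 2, 3))
        if door == 1:
            return t + 2
        t += 3 if door == 2 else 5

random.seed(0)
trials = 200_000
estimate = sum(time_to_safety() for _ in range(trials)) / trials
print(estimate)   # close to the value obtained from the first-step equation (10 hours)
```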
b. If X and Y are independent binomial random variables with identical parameters n and p.