
COLLEGE OF BUSINESS AND ECONOMICS – UNIVERSITY OF

RWANDA
DEPARTMENT OF APPLIED STATISTICS – SCHOOL OF
ECONOMICS
AST 3231 STOCHASTIC PROCESSES
STOCHASTIC PROCESS LECTURE NOTE
BY
LAWAL F. K. (Ph.D.)
Module Description
• Module Code: AST 3231
• Module Title: Stochastic Processes
• Level 3 Semester 1
• Credits: 15
• Administering School: Economics
• Department: Applied Statistics
• Year of Presentation: 2020-2021
• Pre-requisite or co-requisite modules:
• Advanced mathematics
• Inferential statistics
• Probability
STOCHASTIC PROCESS

Random variable and stochastic process

Definition: Suppose we are given a sample space Ω and a probability
measure P. A random variable X with values in the set E is a function
which assigns a value X(ω) in E to each outcome ω in Ω.
Remark. The most usual examples of E are the set of non-negative integers
N = {0, 1, 2, ...} and the set of integers Z = {..., −2, −1, 0, 1, 2, ...}.
• Example: Let an experiment consist of measuring the lifetimes of twelve
electric bulbs. An outcome ω records the observed lifetimes
ω1, ω2, ..., ω12 of the twelve bulbs. Then X(ω) = (ω1 + ω2 + ... + ω12)/12
defines a random variable on this sample space: it represents the average
lifetime of the 12 bulbs.
Let X be a random variable taking values in a set E and let f be a
real-valued function defined on E. Then for each ω, X(ω) is a point in E
and f assigns the value f(X(ω)) to that point. By f(X), we mean the random
variable whose value at ω is f(X(ω)).
A stochastic process with state space E is a collection {Xt : t ∈ T} of
random variables Xt defined on the same probability space and taking
values in E. The set T is called its parameter set. If T is countable,
especially if T = N = {0, 1, 2, ...}, the process is said to be a
discrete-parameter process. If T is not countable, the process is said to
have a continuous parameter. It is customary to think of the index t as
representing time, and then one thinks of Xt as the state or the position
of the process at time t.
• Examples of such phenomena include the prices and returns of the FTSE
All-Share Index at the London Stock Exchange, interest rates, the number
of claims of an insurance portfolio over time, and claim amounts or claim
sizes.
Based on the above two categories of the state space E and of the index
set T we can have stochastic processes with
• Discrete state space and discrete time changes
• Discrete state space and continuous time changes
• Continuous state space and discrete time changes
• Continuous state space and continuous time changes
• Mixed type
Discrete State Space and Discrete Time Changes
• Let us consider that the total number of claims of a policyholder
insured for third-party liability insurance is reported yearly. Then the
state space of this process is E = {0, 1, 2, ...} and the time set is
T = {0, 1, 2, ...}, measured in years.
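As an illustration (not part of the original notes), here is a minimal
Python sketch of such a discrete-state, discrete-time process; the
Poisson(0.3) yearly claim rate is an assumed, illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def yearly_claims(years, lam=0.3):
    # One sample path X_0, X_1, ..., X_{years-1}: the claim count reported
    # each year. State space E = {0, 1, 2, ...}, time set T = {0, 1, 2, ...}.
    # The Poisson rate lam is an illustrative assumption.
    return rng.poisson(lam, size=years)

print(yearly_claims(10))  # e.g. [0 1 0 0 0 1 0 0 0 0]
```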
• CONDITIONAL PROBABILITY AND CONDITIONAL EXPECTATION

One of the most useful concepts in probability theory is that of
conditional probabilities and conditional expectations. The reason is
twofold: first, in practice we are often interested in calculating
probabilities and expectations when some partial information is available;
hence the desired probabilities and expectations are conditional ones.
Secondly, in calculating a desired probability or expectation, it is often
extremely useful to first condition on some appropriate random variable.
Example: A coin having probability p of coming up heads is flipped
successively until the first head appears. What is the expected number of
flips required?
Let N be the number of flips required, and let Y = 1 if the first flip
results in a head and Y = 0 if the first flip results in a tail. Then
E(N) = E(N | Y=1)P(Y=1) + E(N | Y=0)P(Y=0)
     = pE(N | Y=1) + (1 − p)E(N | Y=0)
However, E(N | Y=1) = 1 and E(N | Y=0) = 1 + E(N).
To see this, consider E(N | Y=1): since Y = 1, we know that the first flip
resulted in a head, and so the number of flips required is 1. If Y = 0,
the first flip resulted in a tail. However, since the successive flips are
assumed independent, after the first flip the expected additional number
of flips until the first head is just E(N). Hence E(N | Y=0) = 1 + E(N).
Hence E(N) = p + (1 − p)(1 + E(N)), which gives E(N) = 1/p.
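A quick Monte Carlo check of E(N) = 1/p (a sketch, not part of the notes;
the value p = 0.25 is an arbitrary illustration):

```python
import random

def flips_until_head(p):
    # Flip a coin with head-probability p until the first head; return N.
    n = 1
    while random.random() >= p:   # this flip was a tail
        n += 1
    return n

p = 0.25
trials = 100_000
estimate = sum(flips_until_head(p) for _ in range(trials)) / trials
print(estimate)   # should be close to 1/p = 4
```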
Example: A miner is trapped in a mine containing 3 doors. The first door
leads to a tunnel which takes him to safety after 2 hours of travel. The
second door leads to a tunnel which returns him to the mine after 3 hours
of travel. The third door leads to a tunnel which returns him to the mine
after 5 hours. Assuming that the miner is at all times equally likely to
choose any one of the doors, what is the expected length of time until the
miner reaches safety?
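A worked sketch by the same first-step conditioning as in the coin example
(the notes do not show the computation): letting T be the time until the
miner reaches safety and conditioning on the door chosen,
E[T] = (1/3)(2) + (1/3)(3 + E[T]) + (1/3)(5 + E[T]) = 10/3 + (2/3)E[T],
hence E[T] = 10 hours.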
Example: Suppose the expected number of accidents per week at an
industrial plant is four. Suppose also that the numbers of workers injured
in each accident are independent random variables with a common mean of
two, and that they are independent of the number of accidents. What is the
expected number of injuries during a week?
Solution: Let N be the number of accidents and Xi the number injured in
the ith accident. Conditioning on N,
E(injuries) = E[E(X1 + ... + XN | N)] = E(N)E(X) = 4 × 2 = 8.
Example:
Independent trials, each resulting in a success with probability p, are
successively performed. Let N be the time of the first success. Find the
variance of N.
Solution
Let Y = 1 if the first trial results in a success and Y = 0 otherwise.
Var(N) = E(N²) − (E(N))²
To calculate E(N²) and E(N) we condition on Y:
E(N²) = E(E(N² | Y))
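Completing the conditioning argument (a standard computation, sketched
here since the notes break off):
E(N² | Y=1) = 1,  E(N² | Y=0) = E[(1 + N)²] = 1 + 2E(N) + E(N²)
so that
E(N²) = p + (1 − p)[1 + 2E(N) + E(N²)].
Using E(N) = 1/p from the previous example, this gives E(N²) = (2 − p)/p²,
and hence
Var(N) = (2 − p)/p² − (1/p)² = (1 − p)/p².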
Example: A rat is trapped in a maze. Initially he has to choose one of two
directions. If he goes to the right, he will wander around in the maze for
3 minutes and will then return to his initial position. If he goes to the
left, with probability 1/3 he will depart the maze after 2 minutes of
travelling, and with probability 2/3 he will return to his initial
position after 5 minutes of travelling. Assuming that the rat is at all
times equally likely to go to the left or to the right, what is the
expected number of minutes that he will be trapped in the maze?

Let Y be the number of minutes the rat spends in the maze. Let X = 1 if
the rat first goes to the right and X = 2 if he first goes to the left.
Then
E(Y) = E(Y | X=1)P(X=1) + E(Y | X=2)P(X=2)
E(Y | X=1) = 3 + E(Y)
E(Y | X=2) = (1/3)(2) + (2/3)[5 + E(Y)] = 4 + (2/3)E(Y)
E(Y) = ½[3 + E(Y)] + ½[4 + (2/3)E(Y)]
     = ½[7 + E(Y) + (2/3)E(Y)]
6E(Y) = 21 + 3E(Y) + 2E(Y)
6E(Y) − 5E(Y) = 21
E(Y) = 21 minutes.
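A Monte Carlo check of E(Y) = 21 (a sketch, not part of the notes):

```python
import random

def time_in_maze():
    # First-step simulation of the rat: right with prob 1/2 (3 min, return);
    # left with prob 1/2, then escape after 2 min with prob 1/3 or return
    # after 5 min with prob 2/3.
    t = 0.0
    while True:
        if random.random() < 0.5:
            t += 3                # went right, returns after 3 minutes
        elif random.random() < 1/3:
            return t + 2          # went left and escaped after 2 minutes
        else:
            t += 5                # went left, returns after 5 minutes

print(sum(time_in_maze() for _ in range(200_000)) / 200_000)  # close to 21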
Theorem:
Let Y be a random variable depending only on the random variables
Nm, Nm+1, ..., that is, Y can be written as Y = g(Nm, Nm+1, ..., Nm+n) for
some n and some function g. Then
E[Y | N0, N1, ..., Nm] = E[Y | Nm]
Example
E[N11 | N5] = E[N5 + (N11 − N5) | N5]
            = E[N5 | N5] + E[(N11 − N5) | N5] = N5 + E(N11 − N5)
            = N5 + 6p
(here the increment N11 − N5 is independent of N5 and has mean 6p, as for
a Bernoulli counting process over 6 time steps).
Example: Compute Z = E[N5 N11 | N2, N3].
Z = E[ E[N5 N11 | N0, N1, ..., N5] | N2, N3 ]
  = E[ E[N5 N11 | N5] | N2, N3 ]

Example
A Markov chain whose state space is given by the integers
i = 0, ±1, ±2, ... is said to be a random walk if, for some number
0 < p < 1,
 Pi,i+1 = p = 1 − Pi,i−1,   i = 0, ±1, ±2, ...

Example: Consider a gambler who at each play of the game either wins $1
with probability p or loses $1 with probability 1 − p. If we suppose that
the gambler quits playing either when he goes broke or when he attains a
fortune of $N, then the gambler's fortune is a Markov chain having
transition probabilities
 Pi,i+1 = p = 1 − Pi,i−1,   i = 1, 2, ..., N−1
 P00 = PNN = 1
States 0 and N are called absorbing states, since once entered they are
never left.
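A minimal Python sketch of one sample path of this chain (the values
i = 5, N = 10, p = 0.5 are illustrative assumptions, not fixed by the
notes):

```python
import random

def gamblers_path(i, N, p):
    # Simulate the fortune chain from i until absorption at 0 or N.
    path = [i]
    while 0 < path[-1] < N:
        path.append(path[-1] + (1 if random.random() < p else -1))
    return path

print(gamblers_path(i=5, N=10, p=0.5))  # ends at absorbing state 0 or 10
```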
Definitions.
1. A set of states is said to be closed if no state outside it can be
reached from any state in it.
2. A state forming a closed set by itself is called an absorbing state.
3. A closed set is irreducible if no proper subset of it is closed.
4. A Markov chain is irreducible if its only closed set is the set of all
states.
Note
A Markov chain started initially with such a (stationary) distribution π
will be a stationary stochastic process. We will also see that we can find
π by merely solving a set of linear equations.
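A sketch of that linear-algebra approach (the 2-by-2 matrix P below is an
illustrative assumption, not from the notes): solve πP = π together with
the normalisation Σ πi = 1.

```python
import numpy as np

# Illustrative two-state transition matrix (an assumption for the sketch).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

n = P.shape[0]
# pi P = pi  <=>  (P^T - I) pi^T = 0; replace one balance equation
# with the normalisation constraint sum(pi) = 1.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # [0.8333..., 0.1666...]
```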

Communication classes and irreducibility for Markov chains. For a Markov
chain with state space S, consider a pair of states (i, j). We say that j
is reachable from i, denoted by i → j, if there exists an integer n ≥ 0
such that P^n_ij > 0. This means that starting in state i, there is a
positive probability (but not necessarily equal to 1) that the chain will
be in state j at time n (that is, n steps later); P(Xn = j | X0 = i) > 0.
If j is reachable from i, and i is reachable from j, then the states i and
j are said to communicate, denoted by i ←→ j. The relation defined by
communication is an equivalence relation: it is reflexive (i ←→ i),
symmetric (i ←→ j implies j ←→ i), and transitive (i ←→ j and j ←→ k
imply i ←→ k).


Exercise: Let a = 0 and b = 1/2 (in the two-state yes/no message chain of
exercise (b) below). Find P, P², and P³. What would Pⁿ be? What happens to
Pⁿ as n tends to infinity? Interpret this result.
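A sketch for this exercise, assuming the two-state yes/no message chain
P = [[1 − a, a], [b, 1 − b]] defined in exercise (b) later in these notes:

```python
import numpy as np

a, b = 0.0, 0.5
P = np.array([[1 - a, a],
              [b, 1 - b]])   # states: 0 = "yes", 1 = "no"

for n in (1, 2, 3, 20):
    print(n)
    print(np.linalg.matrix_power(P, n))
# P^n = [[1, 0], [1 - 2**(-n), 2**(-n)]] -> [[1, 0], [1, 0]] as n grows:
# with a = 0 the answer "yes" is never altered, so once the message becomes
# "yes" it stays "yes" forever -- "yes" is an absorbing state.
```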

Definition: A state i of a Markov chain is called absorbing if it is
impossible to leave it (i.e., Pii = 1). A Markov chain is absorbing if it
has at least one absorbing state, and if from every state it is possible
to go to an absorbing state (not necessarily in one step).
Definition: In an absorbing Markov chain, a state which is not absorbing
is called transient.


The states 1, 2, and 3 are transient states, and from any of these it is
possible to reach the absorbing states 0 and 4. Hence the chain is an
absorbing chain. When a process reaches an absorbing state, we shall say
that it is absorbed.

The most obvious question that can be asked about such a chain is: What is
the probability that the process will eventually reach an absorbing state?
Other interesting questions include: (a) What is the probability that the
process will end up in a given absorbing state? (b) On the average, how
long will it take for the process to be absorbed? (c) On the average, how
many times will the process be in each transient state? The answers to all
these questions depend, in general, on the state from which the process
starts as well as on the transition probabilities.
Canonical Form. Consider an arbitrary absorbing Markov chain. Renumber the
states so that the transient states come first. If there are r absorbing
states and t transient states, the transition matrix will have the
following canonical form:

      [ Q  R ]
P  =  [      ]
      [ 0  I ]

Here I is an r-by-r identity matrix, 0 is an r-by-t zero matrix, R is a
nonzero t-by-r matrix, and Q is a t-by-t matrix. The first t states are
transient and the last r states are absorbing. In previous lectures, we
saw that the entry P^n_ij of the matrix Pⁿ gives the probability of being
in state j after n steps, when the chain is started in state i.
In the following, if u and v are two vectors we say that u ≤ v if all
components of u are less than or equal to the corresponding components of
v. Similarly, if A and B are matrices, then A ≤ B if each entry of A is
less than or equal to the corresponding entry of B.

Probability of Absorption

Theorem: In an absorbing Markov chain, the probability that the process
will be absorbed is 1 (i.e., Qⁿ → 0 as n → ∞).
Proof. From each non-absorbing state j it is possible to reach an
absorbing state. Let mj be the minimum number of steps required to reach
an absorbing state, starting from j. Let pj be the probability that,
starting from j, the process will not reach an absorbing state in mj
steps. Then pj < 1. Let m be the largest of the mj and let p be the
largest of the pj. The probability of not being absorbed in m steps is
less than or equal to p, in 2m steps less than or equal to p², etc. Since
p < 1, these probabilities tend to 0. Since the probability of not being
absorbed in n steps is monotone decreasing, these probabilities also tend
to 0; hence limn→∞ Qⁿ = 0.

Definition: For an absorbing Markov chain P, the matrix N = (I − Q)⁻¹ is
called the fundamental matrix for P. The entry nij of N gives the expected
number of times that the process is in the transient state j if it is
started in the transient state i.
Example: In the Drunkard's Walk (the chain on states 0 to 4, with 0 and 4
absorbing and steps left or right with probability 1/2 from states 1, 2,
and 3), from the middle row of N we see that if we start in state 2, then
the expected numbers of times in states 1, 2, and 3 before being absorbed
are 1, 2, and 1.

Time to Absorption
We now consider the question: Given that the chain starts in state i, what
is the expected number of steps before the chain is absorbed? The answer
is given in the next theorem.

Theorem: Let ti be the expected number of steps before the chain is
absorbed, given that the chain starts in state i, and let t be the column
vector whose ith entry is ti. Then t = Nc, where c is a column vector all
of whose entries are 1.
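A sketch computing N and t for the Drunkard's Walk (assuming, as in
exercise 10 below, states 0 to 4 with 0 and 4 absorbing and steps ±1 with
probability 1/2):

```python
import numpy as np

# Q restricted to the transient states 1, 2, 3 of the Drunkard's Walk.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix N = (I - Q)^(-1)
t = N @ np.ones(3)                # expected steps to absorption, t = Nc
print(N)  # middle row [1, 2, 1], matching the text above
print(t)  # [3, 4, 3]: e.g. 4 expected steps when starting from state 2
```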


THE GAMBLER'S RUIN PROBLEM

Consider a gambler who at each play of the game has probability p of
winning one unit and probability q = 1 − p of losing one unit. Assuming
that successive plays of the game are independent, what is the probability
that, starting with i units, the gambler's fortune will reach N before
reaching 0?
Let Xn denote the player's fortune at time n. Then the process
{Xn, n = 0, 1, 2, ...} is a Markov chain with transition probabilities
P00 = PNN = 1,  Pi,i+1 = p = 1 − Pi,i−1,  i = 1, 2, ..., N−1.

This Markov chain has three classes: {0}, {1, 2, ..., N−1}, and {N}.

The first and third classes are recurrent; the second is transient. Since
each transient state is visited only finitely often, it follows that after
some finite amount of time the gambler will either attain his goal of N or
go broke.

Let Pi, i = 0, 1, 2, ..., N, denote the probability that, starting from i,
the gambler's fortune will eventually reach N.
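First-step conditioning gives Pi = pPi+1 + qPi−1 with boundary conditions
P0 = 0 and PN = 1; solving this recursion yields the standard result,
sketched here since the notes break off:
Pi = [1 − (q/p)^i] / [1 − (q/p)^N]   if p ≠ 1/2
Pi = i/N                             if p = 1/2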
Example

Suppose Max and Bill decide to flip pennies; the one coming closest to the
wall wins. Bill, being a better player, has a probability 0.6 of winning
on each flip. If Bill starts with 5RF and Max with 10RF, what is the
probability that Bill will wipe Max out?

What if Bill starts with 10RF and Max with 20RF?
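A sketch for this example, assuming each flip transfers 1RF, so that Bill
plays gambler's ruin with p = 0.6, starting fortune i = 5 and total
N = 15 (then i = 10, N = 30):

```python
def ruin_win_prob(i, N, p):
    # Probability of reaching N before 0 from fortune i (formula above).
    if p == 0.5:
        return i / N
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)

print(ruin_win_prob(5, 15, 0.6))    # ~0.870
print(ruin_win_prob(10, 30, 0.6))   # ~0.983
```

Doubling both fortunes raises Bill's chance of wiping Max out, since his
per-flip edge compounds over the longer game.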


b. The President of Rwanda tells person A his intention to run or not to
run in the next election. Then A relays the news to B, who in turn relays
the message to C, and so forth, always to some new person. We assume that
there is a probability a that a person will change the answer from yes to
no when transmitting it to the next person, and a probability b that he or
she will change it from no to yes. We choose as states the message, either
yes or no.

(i) Determine the transition probability matrix. (ii) If a = 0 and
b = 1/2, find P, P², and P³, and interpret these results.


7.a. A miner is trapped in a mine containing 3 doors. The first door leads
to a tunnel which takes him to safety after 2 hours of travel. The second
door leads to a tunnel which returns him to the mine after 3 hours of
travel. The third door leads to a tunnel which returns him to the mine
after 5 hours. Assuming that the miner is at all times equally likely to
choose any one of the doors, what is the expected length of time until the
miner reaches safety?


b. If X and Y are independent binomial random variables with identical
parameters n and p, calculate the conditional probability mass function of
X given that X + Y = m.


10. A man walks along a four-block stretch of Park Avenue.
If he is at corner 1, 2, or 3, then he walks to the left or right
with equal probability. He continues until he reaches corner
4, which is a bar, or corner 0, which is his home. If he
reaches either home or the bar, he stays there. We form a
Markov chain with states 0, 1, 2, 3, and 4. States 0 and 4 are
absorbing states.
