
18.366 Random Walks and Diffusion (M. Z. Bazant)

Lecture 9: Kramers-Moyal Cumulant Expansion

Scribe: Jacy Bird


(Division of Engineering and Applied Sciences, Harvard)
March 3, 2005
In this lecture, we study the higher-order terms in the Kramers-Moyal expansion for the continuum
approximation, which satisfies a PDE generalizing the diffusion equation, in the long-time limit of a
random walk with IID steps with finite moments. We show that the coefficients in the PDE involve
cumulants, not moments (as commonly asserted), if the solution at various orders is to produce a
valid asymptotic expansion. This is done by an iterative method, justified a posteriori by a continuum
derivation of the correct Gram-Charlier expansion as the asymptotic expansion of the Green function
for the PDE.
1 Continuum Approximation of the Position of a Random Walk
As we saw in the previous lecture, we can rewrite the Bachelier equation as an infinite-sum partial
differential equation (PDE). Provided that the random walk has IID steps with finite moments, we
derived that

    P_{N+1}(x) - P_N(x) = \sum_{n=1}^{\infty} \frac{(-1)^n m_n}{n!} \frac{\partial^n P_N}{\partial x^n}.    (1)

We now substitute ρ(x, Nτ) for P_N(x) (where τ represents the time step) and divide both sides by τ
to get

    \frac{\rho(x, t + \tau) - \rho(x, t)}{\tau} = \sum_{n=1}^{\infty} \frac{(-1)^n m_n}{n!\,\tau} \frac{\partial^n \rho}{\partial x^n}.    (2)

Although real stochastic processes are discrete random walks, we usually care about the long-time
limit of many steps. Examples include Brownian motion (where large particles bounce around in a
gas) or financial time series. Therefore, in Equation 2 we will consider the limit τ → 0, since we are
interested in much longer time scales. As shown in the previous lecture, when dealing with discrete
random walks, strictly speaking, the right thing to do is to consider times t = Nτ = constant with
N → ∞, focusing on the appropriate length scale L(t), which is much larger than the standard
deviation of the step size, σ. Here we will get similar results by fixing x and t and letting τ, σ → 0
with D_2 = σ²/2τ fixed.
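
To make this limit concrete, here is a minimal numerical sketch (not part of the original notes; the
symmetric two-point step distribution and all parameter values are arbitrary choices for illustration).
Walks are generated for several step times τ with the step size σ = sqrt(2 D_2 τ) chosen so that D_2
stays fixed; the variance at time t remains 2 D_2 t, while the shape of the position distribution
approaches a Gaussian as τ → 0 (the excess kurtosis decays like 1/N).

    import numpy as np

    rng = np.random.default_rng(1)
    D, t, walks = 0.5, 10.0, 200_000          # fixed D_2, final time, number of sample walks

    for tau in (1.0, 0.1, 0.01):
        sigma = np.sqrt(2 * D * tau)          # keep D_2 = sigma^2 / (2 tau) fixed
        N = int(round(t / tau))               # number of steps up to time t
        # final positions: sums of N symmetric +/- sigma steps, generated through a
        # binomial count of the +sigma steps
        X = sigma * (2.0 * rng.binomial(N, 0.5, size=walks) - N)
        Z = (X - X.mean()) / X.std()
        # the variance stays at 2 D_2 t; the excess kurtosis (-2/N here) tends to 0
        print(tau, X.var(), 2 * D * t, (Z**4).mean() - 3.0)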

Recall that ρ(x, t + τ) can be expanded as

    \rho(x, t) + \tau \frac{\partial \rho}{\partial t} + \frac{\tau^2}{2} \frac{\partial^2 \rho}{\partial t^2} + \sum_{n=3}^{\infty} \frac{\tau^n}{n!} \frac{\partial^n \rho}{\partial t^n}.

Thus Equation 2 can be rewritten as

    \frac{\partial \rho}{\partial t} + \sum_{n=2}^{\infty} \frac{\tau^{n-1}}{n!} \frac{\partial^n \rho}{\partial t^n} = \sum_{n=1}^{\infty} \frac{(-1)^n m_n}{n!\,\tau} \frac{\partial^n \rho}{\partial x^n}.    (3)

In the last lecture it was shown that the leading-order behavior of Equation 3 was

    \frac{\partial \rho}{\partial t} = -\frac{m_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{m_2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2}.    (4)

In this lecture we are going to calculate the higher-order behavior, so the first step is to expand
Equation 3 to higher order. We find that

    \frac{\partial \rho}{\partial t} = -\frac{m_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{m_2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{m_3}{3!\,\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} - \frac{\tau^2}{3!} \frac{\partial^3 \rho}{\partial t^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (5)

2 Recursive Substitution in the Kramers-Moyal Expansion
Our goal is to rewrite the RHS of Equation 5 so that the lower order terms only consist of derivatives
with respect to x. To do this we will employ recursive substitution.

We begin by differentiating both sides of Equation 5 with respect to t and multiplying by −τ/2, so
that


    -\frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} = \frac{m_1}{2} \frac{\partial^2 \rho}{\partial x \partial t} - \frac{m_2}{4} \frac{\partial^3 \rho}{\partial x^2 \partial t} + \frac{\tau^2}{4} \frac{\partial^3 \rho}{\partial t^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (6)

The next step is to substitute ∂ρ/∂t from Equation 5 into Equation 6, so that Equation 6 becomes

    -\frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} = \frac{m_1}{2} \frac{\partial}{\partial x} \left[ -\frac{m_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{m_2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} \right] \ldots

        \ldots - \frac{m_2}{4} \frac{\partial^2}{\partial x^2} \left[ -\frac{m_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{m_2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} \right] + \frac{\tau^2}{4} \frac{\partial^3 \rho}{\partial t^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (7)

Equation 7 can be simplified to

    -\frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} = -\frac{m_1^2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} + \frac{m_1 m_2}{4\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{m_1 \tau}{4} \frac{\partial^3 \rho}{\partial x \partial t^2} + \frac{m_1 m_2}{4\tau} \frac{\partial^3 \rho}{\partial x^3} + \frac{\tau^2}{4} \frac{\partial^3 \rho}{\partial t^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right),    (8)

and even further reduced to
    -\frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} = -\frac{m_1^2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} + \frac{m_1 m_2}{2\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{m_1^3}{4\tau} \frac{\partial^3 \rho}{\partial x^3} + \frac{\tau^2}{4} \frac{\partial^3 \rho}{\partial t^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right),    (9)

by noting that


    -\frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} = -\frac{m_1^2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} + O\left(\frac{\partial^3 \rho}{\partial x^3}, \frac{\partial^4 \rho}{\partial t^4}\right).    (10)

What we are effectively doing is pushing the time derivatives to higher order while gaining more x
derivatives.
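
Since every coefficient in Equation 5 is constant, the operators ∂/∂x and ∂/∂t commute, and the
entire recursion can be carried out on ordinary polynomials, with a symbol X standing for ∂/∂x and
T for ∂/∂t. The sympy sketch below is not part of the original notes; it is a minimal check, under
this symbolic shorthand, that repeatedly substituting Equation 5 for the time derivatives and
discarding the fourth-order terms reproduces the cumulant combinations obtained in Equations 15
and 16 below.

    import sympy as sp

    # X stands for d/dx and T for d/dt acting on rho; with constant coefficients
    # these operators commute, so the recursion reduces to polynomial algebra.
    X, T, eps = sp.symbols('X T eps')
    tau, m1, m2, m3 = sp.symbols('tau m1 m2 m3')

    # Equation (5): rho_t = rhs, with X^n and T^n standing for n-th x- and t-derivatives
    rhs = (-m1/tau*X + m2/(2*tau)*X**2 - m3/(6*tau)*X**3
           - tau/2*T**2 - tau**2/6*T**3)

    def truncate(expr):
        """Drop every term of combined order >= 4 in X and T (the O(d^4) terms)."""
        scaled = sp.expand(expr.subs({X: eps*X, T: eps*T}))
        return scaled.series(eps, 0, 4).removeO().subs(eps, 1)

    result = truncate(rhs)
    for _ in range(3):                      # recursive substitution of Equation (5)
        result = truncate(result.subs(T, rhs))

    cumulant_form = (-m1/tau*X + (m2 - m1**2)/(2*tau)*X**2
                     - (m3 - 3*m1*m2 + 2*m1**3)/(6*tau)*X**3)
    print(sp.simplify(result - cumulant_form))   # prints 0, i.e. Equations (15)-(16)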

We now repeat this process again. We apply the operator ∂/∂t to Equation 9 to get

    \frac{\partial^3 \rho}{\partial t^3} = \frac{m_1^2}{\tau^2} \frac{\partial^3 \rho}{\partial x^2 \partial t} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (11)

Then we substitute ∂ρ/∂t from Equation 5 into Equation 11, so that Equation 11 becomes
    \frac{\partial^3 \rho}{\partial t^3} = -\frac{m_1^3}{\tau^3} \frac{\partial^3 \rho}{\partial x^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (12)



Substituting Equation 12 into Equation 9, we see that Equation 9 can be rewritten as

    -\frac{\tau}{2} \frac{\partial^2 \rho}{\partial t^2} = -\frac{m_1^2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} + \frac{m_1 m_2}{2\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{m_1^3}{4\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{m_1^3}{4\tau} \frac{\partial^3 \rho}{\partial x^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (13)

Finally the results from Equations 12 and 13 are substituted into Equation 5 to give

    \frac{\partial \rho}{\partial t} = -\frac{m_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{m_2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{m_3}{6\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{m_1^2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} \ldots

        \ldots + \frac{m_1 m_2}{2\tau} \frac{\partial^3 \rho}{\partial x^3} - \frac{m_1^3}{2\tau} \frac{\partial^3 \rho}{\partial x^3} + \frac{m_1^3}{6\tau} \frac{\partial^3 \rho}{\partial x^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (14)

Collecting terms, we see that


    \frac{\partial \rho}{\partial t} = -\frac{m_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{m_2 - m_1^2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{m_3 - 3 m_1 m_2 + 2 m_1^3}{6\tau} \frac{\partial^3 \rho}{\partial x^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right),    (15)


which is an equation that relates ∂ρ/∂t to lower-order derivatives taken only with respect to x.
The combinations of moments in Equation 15 are thus more naturally expressed as cumulants,
leading to

    \frac{\partial \rho}{\partial t} = -\frac{C_1}{\tau} \frac{\partial \rho}{\partial x} + \frac{C_2}{2\tau} \frac{\partial^2 \rho}{\partial x^2} - \frac{C_3}{3!\,\tau} \frac{\partial^3 \rho}{\partial x^3} + O\left(\frac{\partial^4 \rho}{\partial x^4}, \frac{\partial^4 \rho}{\partial t^4}\right).    (16)

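As a quick consistency check (not in the original notes), the combinations m_2 − m_1² and
m_3 − 3 m_1 m_2 + 2 m_1³ are indeed the second and third cumulants, i.e. the coefficients of
(−ik)^n/n! in the logarithm of the single-step characteristic function p̂(k) that reappears in
Section 3. A short sympy sketch, truncating p̂(k) beyond third order:

    import sympy as sp

    k, m1, m2, m3 = sp.symbols('k m1 m2 m3')

    # characteristic function of one step written with raw moments, truncated
    # beyond third order: phat(k) = 1 + sum_{n>=1} m_n (-i k)^n / n!
    phat = 1 + m1*(-sp.I*k) + m2*(-sp.I*k)**2/2 + m3*(-sp.I*k)**3/6

    # the cumulants C_n are the coefficients of (-i k)^n / n! in log phat(k)
    logp = sp.series(sp.log(phat), k, 0, 4).removeO()
    C1 = sp.simplify(logp.coeff(k, 1) / (-sp.I))
    C2 = sp.simplify(logp.coeff(k, 2) / ((-sp.I)**2 / sp.factorial(2)))
    C3 = sp.simplify(logp.coeff(k, 3) / ((-sp.I)**3 / sp.factorial(3)))
    print(C1, C2, C3)   # m1,  m2 - m1**2,  m3 - 3*m1*m2 + 2*m1**3
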
As we notice the pattern, it seems that the higher-order Kramers-Moyal coefficients from the previous
lecture should be defined in terms of cumulants, not moments:

    D_n = \frac{C_n}{n!\,\tau},    (17)

so that the continuum limit is properly described by the following PDE,

    \frac{\partial \rho}{\partial t} = \sum_{n=1}^{\infty} (-1)^n D_n \frac{\partial^n \rho}{\partial x^n}.    (18)

Unlike the usual Kramers-Moyal moment expansion (with D_n replaced by the moment-based
coefficients m_n/(n! τ)), this PDE, when truncated at various orders, provides a valid asymptotic
approximation of the position of the random walk, as we now show.
3 Gram-Charlier Expansion for the Green Function
Claim:

    \frac{\partial \rho}{\partial t} = \sum_{n=1}^{\infty} \frac{(-1)^n C_n}{n!\,\tau} \frac{\partial^n \rho}{\partial x^n},    (19)

where C_n is the n-th cumulant.
We are now going to solve Equation 19 for ρ(x, 0) = δ(x), and demonstrate that ρ(x, t) is the same
Gram-Charlier expansion we derived before by more rigorous means. First we take the Fourier
transform of Equation 19 (using the convention ρ̂(k, t) = ∫ ρ(x, t) e^{-ikx} dx) and get

    \frac{\partial \hat{\rho}}{\partial t} = \left[ \sum_{n=1}^{\infty} \frac{(-1)^n C_n (ik)^n}{n!\,\tau} \right] \hat{\rho}.    (20)

Solving the differential equation leads to

    \hat{\rho}(k, t) = \exp\left[ \frac{t}{\tau} \sum_{n=1}^{\infty} \frac{C_n (-ik)^n}{n!} \right].    (21)

Next, we note that t/τ = N and ρ(x, Nτ) = P_N(x):

    \hat{P}_N(k) = \exp\left[ N \sum_{n=1}^{\infty} \frac{C_n (-ik)^n}{n!} \right].    (22)

By the definition of cumulants, the RHS of Equation 22 is equal to p̂(k)^N. Therefore
    \hat{P}_N(k) = \hat{p}(k)^N,    (23)

which is the familiar Fourier transform of Bachelier's equation,
    P_N(x) = p^{*N}(x) = \left( P_{N-1} * p \right)(x),    (24)

so we know we have a systematic asymptotic approximation of P_N(x) from the solution to the
modified Kramers-Moyal expansion PDE, Eq. (19). As we saw in Lectures 3 and 4, the cumulant
expansion of P̂_N(k), upon inverse Fourier transform, produces the Gram-Charlier expansion of
P_N(x), valid in the central region.
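
The statement behind Equations 21-23 is that the cumulants of P_N are simply N times the
single-step cumulants, which is easy to check by simulation. The sketch below is not from the notes;
the asymmetric two-point step distribution is an arbitrary illustrative choice.

    import numpy as np

    rng = np.random.default_rng(0)
    N, walks = 50, 200_000

    # hypothetical step distribution: +1 with probability 0.7, -1 with probability 0.3
    vals, probs = np.array([1.0, -1.0]), np.array([0.7, 0.3])
    m1, m2, m3 = [(probs * vals**n).sum() for n in (1, 2, 3)]
    C1, C2, C3 = m1, m2 - m1**2, m3 - 3*m1*m2 + 2*m1**3

    steps = rng.choice(vals, size=(walks, N), p=probs)
    X = steps.sum(axis=1)                            # final position after N steps

    print(X.mean(), N * C1)                          # first cumulant  ~ N C1
    print(X.var(), N * C2)                           # second cumulant ~ N C2
    print(((X - X.mean())**3).mean(), N * C3)        # third cumulant  ~ N C3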
We have seen that the Gram-Charlier expansion is obtained from the solution to the PDE (19)
for a localized initial condition, ρ(x, 0) = δ(x). Since the PDE is linear, this is the Green function,
G(x, t) (for an infinite space), from which a general solution is easily constructed for any initial
condition, ρ(x, t) = G(x, t) ∗ ρ(x, 0).
A major advantage of the continuum approximation to the discrete random walk is that the
PDE can be solved, to the desired order, in more complicated multi-dimensional geometries, where
discrete methods, such as the Fourier transform, are not easily applied. It also provides simple
analytical insights into the final position of the random walk, which do not depend on the details of
the step distribution beyond the low-order cumulants.
