
Physics 212: Statistical mechanics II, Fall 2006

Lecture VII
Our first picture of Brownian motion will be as a stochastic (i.e., random) process. A heavy
particle in a fluid undergoes random forces as a result of collisions with the light particles of the
fluid. You may be familiar with the idealization of the resulting motion as a random walk, but
since this is a physics course, we will start from Newton's equation of motion. The assumption of
random forcing leads to a Langevin equation of motion. Our strategy is as follows: we obtain a
constraint on the random forcing from the requirement that it reproduce thermal equilibrium, then
use this to describe the behavior of the system under nonrandom forcing (e.g., an applied electric
field acting on a charged particle).
In writing equations in this lecture, the motion will be assumed to be one-dimensional so that
no vector notation is necessary. Including the drag force from Stokes' law gives:

    m du/dt = −6πηa u + F(t) ≡ −mγu + F(t).    (1)

Here we defined

    γ = 6πηa/m.    (2)
This equation will simplify depending on our assumptions about F. For now we leave it in this
general form.
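As an aside, the Langevin equation (1) is straightforward to integrate numerically. The sketch below (an illustration added here, not part of the original notes) uses a simple Euler–Maruyama step in Python with numpy; all parameter names and values are arbitrary choices, and the white-noise strength is taken so that the stationary state satisfies equipartition, m⟨u²⟩ = kT.

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equation
#   m du/dt = -m*gamma*u + F(t),
# with Gaussian white-noise forcing whose strength is chosen so that
# the stationary state obeys equipartition, m<u^2> = kT.
# All parameter values here are illustrative.
rng = np.random.default_rng(0)
m, gamma, kT = 1.0, 2.0, 1.0
dt, nsteps = 5e-3, 200_000

u = np.empty(nsteps)
u[0] = 0.0
kicks = np.sqrt(2 * gamma * kT / m * dt) * rng.standard_normal(nsteps - 1)
for n in range(nsteps - 1):
    u[n + 1] = u[n] * (1 - gamma * dt) + kicks[n]

# Discard the transient (a few relaxation times 1/gamma), then check m<u^2>.
u_eq = u[nsteps // 2:]
print(m * np.mean(u_eq**2))  # close to kT = 1.0
```

The same loop, with the noise term dropped, just gives exponential decay of u at rate γ; the noise keeps the velocity fluctuating at the thermal scale.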
The first goal of this lecture is, given statistical information about F(t), to carry it through the
Langevin equation (1) to get a statistical picture of x(t). This sort of random problem appears
frequently in statistical physics: an example of great importance in condensed matter physics is,
given a random potential V(x), to solve Schrödinger's equation in that potential and get statistical
information about its eigenstates. There are even many applications beyond physics: stochastic
calculus, the formalization of some ideas in this lecture, is of considerable importance in signal
processing and quantitative finance, to name just two fields.
Since the Langevin equation (1) is linear, a natural approach is to take the Fourier transform
of both sides. It will be useful to imagine that we are observing the system for times 0 ≤ t ≤ T, for
some very large time T. The reason for doing this rather than just going to T = ∞ is that we can
average quantities by 1/T and get finite answers in the limit T → ∞ for quantities like the power
spectrum, even though the underlying functions like u(t) do not fall off sufficiently rapidly at ±∞
for the Fourier transform to be well defined. These definitions will be familiar to any of you with
experience in signal processing. Then the expansion of u(t) is

    u(t) = Σ_n u_n e^{−iω_n t},   ω_n = 2πn/T,    (3)
and so on for the other quantities in the Langevin equation. The inverse formula is

    u_n = (1/T) ∫_0^T u(t) e^{iω_n t} dt.    (4)

The reality of u(t) then restricts u_{−n} = u_n*.
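The discrete analogue of these conventions is easy to check with an FFT. The snippet below (an illustration added here, not from the notes) verifies the reality constraint u_{−n} = u_n* for a real signal; numpy's FFT differs from the conventions above only in normalization and the sign of the exponent, which do not affect this constraint.

```python
import numpy as np

# For a real discrete signal, the Fourier coefficients obey u_{-n} = conj(u_n),
# the discrete analogue of the reality constraint below eq. (4).
# (numpy's FFT differs from eqs. (3)-(4) only by normalization and sign
# conventions, which leave this constraint unchanged.)
rng = np.random.default_rng(6)
N = 64
u = rng.standard_normal(N)      # a real "signal" sampled at N times
un = np.fft.fft(u) / N          # finite-T analogue of the coefficients u_n

# Index -n corresponds to N - n (mod N) in the FFT ordering.
mismatch = np.max(np.abs(un[(N - np.arange(N)) % N] - np.conj(un)))
print(mismatch)                 # zero up to rounding error
```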
A bit more also needs to be said about the random force F. We assume that the problem has time
translation invariance, or that the random variable F is a stationary process: ⟨F(t_1)⟩ = ⟨F(t_2)⟩.
However, we do want to allow for situations where a random variable like u or F has a memory
effect, so for instance ⟨u(0)u(t_1)⟩ ≠ ⟨u(0)u(t_2)⟩ if t_1 ≠ t_2. An example of this situation would be if
u is random but changes only slowly in time: knowing u at one time does give some information
about its future value.
After Fourier transforming, the Langevin equation becomes

    m(−iω_n u_n) = −mγu_n + F_n,    (5)

or

    u_n = F_n / (m(γ − iω_n)).    (6)
We still need one more mathematical tool to make use of this result. One important statistical
property is the autocorrelation of a quantity such as the velocity u, defined as

    φ(t) = ⟨u(t′)u(t′ + t)⟩ − ⟨u⟩².    (7)
Let the power spectrum of a quantity be defined as the averaged square of its Fourier transform,
that is,

    I(ω) dω = lim_{T→∞} Σ_{ω_n in dω} ⟨|a_n|²⟩.    (8)

We can write this in a more standard way: the number of modes ω_n that appear in the right-hand
side becomes large as T → ∞, and is given by

    Δn = dω/(2π/T) = T dω/(2π),    (9)

so

    I(ω) = lim_{T→∞} (T/(2π)) ⟨|a_n|²⟩.    (10)

Here a_n is just one of the many modes near ω.
You may be familiar from a previous course in quantum mechanics with the statement that
energy is equal in both time and frequency space:

    ∫_{−∞}^{∞} |f(t)|² dt = ∫_{−∞}^{∞} |f̂(ω)|² dω.    (11)

(Depending on your conventions for f and f̂, you may be used to seeing a factor of 1/(2π) in the
above.) You can check this easily for a Gaussian, for example. The equivalence of energy above
suggests that the autocorrelation at zero time (on the left side) should somehow be related to the
integral of the power spectrum over all frequencies.
Here we will show a generalization of this result. The Wiener–Khinchin theorem states that
the autocorrelation and the power spectrum form a Fourier-transform pair:

    I(ω) = (1/(2π)) ∫_{−∞}^{∞} φ(t) e^{iωt} dt,    (12)

and

    φ(t) = ∫_{−∞}^{∞} I(ω) e^{−iωt} dω.    (13)

Note that even for signals which do not fall off in time, the autocorrelation can fall off sufficiently
rapidly in time that the above integral is well defined.
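For a finite discrete signal the theorem has an exact analogue: the periodogram |û|²/N is the DFT of the circular autocorrelation. The sketch below (an added illustration, not from the notes) checks this identity numerically with numpy.

```python
import numpy as np

# Discrete check of the Wiener-Khinchin relation: for a length-N signal,
# the periodogram |x_hat|^2 / N equals the DFT of the circular
# autocorrelation -- the finite-N analogue of eqs. (12)-(13).
rng = np.random.default_rng(1)
N = 1024
x = rng.standard_normal(N)

xhat = np.fft.fft(x)
periodogram = np.abs(xhat)**2 / N

# Circular autocorrelation phi[k] = (1/N) sum_t x[t] * x[(t+k) mod N]
phi = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N
wk = np.fft.fft(phi).real   # its DFT; the imaginary part vanishes for real x

print(np.max(np.abs(periodogram - wk)))   # zero up to rounding error
```

The continuum statement (12)–(13) follows from this identity in the limit of long observation time, as the proof below shows.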
Proof: From the definition of a_n,

    ⟨|a_n|²⟩ = (1/T²) ∫_0^T ∫_0^T e^{iω_n(t_1 − t_2)} ⟨u(t_1)u(t_2)⟩ dt_1 dt_2.    (14)
Now switch variables in the integral for t_1 > t_2 to (t = t_1 − t_2, t_2). The Jacobian of this transforma-
tion is unity, but the integration region has a strange shape; note also that the new variables
are not orthogonal. The integral over t_2 runs from 0 to T − t (verify), so the part of the integral
with t_1 > t_2 becomes

    ∫_0^T dt (T − t) φ(t) e^{iω_n t}.    (15)
Similarly the integral for t_2 > t_1 becomes

    ∫_0^T dt (T − t) φ(t) e^{−iω_n t}.    (16)
Finally, combining the two and remembering the definition of the power spectrum, take the limit
T → ∞; then the powers of T cancel and we obtain

    I(ω) = (1/(2π)) [ ∫_0^∞ φ(t) e^{iωt} dt + ∫_0^∞ φ(t) e^{−iωt} dt ].    (17)

Then redefining the second time integral on the right side gives the desired result

    I(ω) = (1/(2π)) ∫_{−∞}^{∞} φ(t) e^{iωt} dt.    (18)
Example: white noise, where the autocorrelation function is a delta function. Then the above
theorem says that the power spectrum is constant: equal power goes into all frequencies, as ex-
pected. To say noise is "colored" means that more power goes into some frequencies than others,
which can only happen if the autocorrelation function has some extent in time.
Returning to our explicit solution of the Langevin equation in frequency space,

    u_n = F_n / (m(γ − iω_n)),    (19)

we obtain

    I_u(ω) = I_F(ω) / (m²(ω² + γ²)).    (20)
Then the Wiener–Khinchin theorem can be used to understand this result. Starting with white
noise in the forcing term, we obtain colored noise in the velocity. To be more precise, suppose
I_F(ω) = I_F, a constant. Then

    φ_u(t) = (I_F/m²) ∫_{−∞}^{∞} e^{−iωt}/(ω² + γ²) dω = (π I_F/(m²γ)) e^{−γ|t|}.    (21)
Here we have used that the Fourier transform of a Lorentzian is an exponential, which can be
verified easily by transforming back. Note that γ is the rate for the exponential decay, and that

    γ = 6πηa/m    (22)

indeed has units of reciprocal time. At equal times, this becomes

    ⟨u²⟩ = π I_F/(m²γ).    (23)
We can derive one other constraint if we require that the motion induced by the random force
be thermal at temperature T. Then, by equipartition,

    m⟨u²⟩ = kT,    (24)

which implies that the strength of the force should be

    I_F = mγkT/π.    (25)
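These relations can be checked in simulation. The sketch below (illustrative, not part of the notes; parameter values are arbitrary) evolves u with the exact one-step update of the resulting Ornstein–Uhlenbeck process and measures the autocorrelation at t = 1/γ, which by eqs. (21) and (25) should equal (kT/m)e^{−1}.

```python
import numpy as np

# With the equilibrium noise strength I_F = m*gamma*kT/pi, eqs. (21) and
# (25) give the velocity autocorrelation phi_u(t) = (kT/m) exp(-gamma*|t|).
# We evolve u with the exact one-step (Ornstein-Uhlenbeck) update and
# measure phi_u at t = 1/gamma. Parameter values are illustrative.
rng = np.random.default_rng(2)
m, gamma, kT = 1.0, 1.0, 1.0
dt, nsteps = 0.05, 400_000

a = np.exp(-gamma * dt)
s = np.sqrt(kT / m * (1 - a * a))
u = np.empty(nsteps)
u[0] = rng.normal(0.0, np.sqrt(kT / m))    # start in equilibrium
xi = rng.standard_normal(nsteps - 1)
for n in range(nsteps - 1):
    u[n + 1] = a * u[n] + s * xi[n]

lag = round(1.0 / (gamma * dt))            # lag corresponding to t = 1/gamma
phi = np.mean(u[:-lag] * u[lag:])
print(phi, (kT / m) * np.exp(-1.0))        # both near 0.368
```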
The above only required linearity; we could go back and write the Langevin equation for the
displacement x of a harmonic oscillator, adding an additional harmonic term −mω_0²x, and get

    I_x(ω) = I_F(ω) / (m²((ω² − ω_0²)² + γ²ω²)).    (26)
The Fourier transform of this now has both sinusoidal and exponential terms.
The way that we connect the single-particle equation to the diffusion equation for many particles
will be via the notion of a transition probability. The transition probability P(x_0, t_0|x, t) is defined
as the probability to find the particle at position x at time t, given that at time t_0 it was at position
x_0. Then the normalization condition is

    ∫ P(x_0, t_0|x, t) dx = 1.    (27)
The point of this quantity is that it will enable us to connect the single-particle and many-particle
pictures of the Boltzmann equation. With enough particles, the transition probability multiplied
by the initial density becomes a later density distribution: we can therefore write

    ∫ P(x_0, t_0|x, t) n(x_0, t_0) dx_0 = n(x, t).    (28)
Then the normalization condition corresponds to conservation of particle number.
The remaining physics we want to do for the Brownian motion problem is to understand the
motion with an applied force, understand the connection between the above single-particle picture
and a deterministic many-particle picture, and understand this more generally as an example of a
fluctuation-dissipation relation. We then move on to nonequilibrium correlation functions in the
quantum case toward the end of the lecture.
Adding a deterministic applied force K(t), the Langevin equation becomes

    m du/dt = −mγu + F(t) + K(t).    (29)
Assume that K(t) = K_0 cos(ω_0 t) is periodic. The Fourier transform of the Langevin equation is

    m(−iω_n u_n) = −mγu_n + F_n + K_n,    (30)

or

    u_n = (F_n + K_n) / (m(γ − iω_n)).    (31)
Now consider the average velocity as a function of time, ⟨u(t)⟩. To calculate this, take the average
of the Langevin equation over the random force,

    m (d/dt)⟨u(t)⟩ = −mγ⟨u(t)⟩ + K(t).    (32)

Then taking the Fourier transform, only the ω_0 and −ω_0 components are nonzero:

    −imω_0 ⟨u(ω_0)⟩ = −mγ⟨u(ω_0)⟩ + K_0,    (33)

so finally

    ⟨u(t)⟩ = Re(μ(ω_0) K_0 e^{−iω_0 t}),    (34)

where

    μ(ω) = 1/(m(γ − iω)).    (35)
Suppose the particles have charge e and density n: then the above predicts that the average
current in an electric field E = E_0 cos(ω_0 t), for which K_0 = eE_0, is

    ⟨j(t)⟩ = ne⟨u(t)⟩ = Re(ne² μ(ω_0) E_0 e^{−iω_0 t}),    (36)

so the complex conductivity is

    σ(ω) = e²n μ(ω) = (e²n/m) · 1/(γ − iω).    (37)
Because of linearity, different frequencies superpose, so this is essentially a complete solution of
the linear-response conductivity. Until now, we have postponed the question of what the resulting
real-space motion of the particle looks like. Integrating the Langevin equation gives

    u(t) = u(t_0) e^{−γ(t−t_0)} + ∫_{t_0}^{t} e^{−γ(t−t′)} (F(t′)/m) dt′.    (38)
This can be checked by taking the derivative of both sides, which yields the Langevin equation
(divided by m).
We now want to integrate the above equation in a statistical sense to obtain the real-space
probability distribution. Using either brute force or the method of characteristic functions, starting
from a Maxwellian distribution at time 0, one can show (details are in Statistical Physics II, by
Kubo et al.)

    P(0, 0|x, t) = [4πD(t − (1 − e^{−γt})/γ)]^{−1/2} exp(−x² [4D(t − (1 − e^{−γt})/γ)]^{−1}),    (39)

where

    D = kT/(mγ).    (40)

We see that for long enough times, this becomes a Gaussian of width σ² = 2Dt.
There are several immediate connections we can make from the above results. First, generalizing
the above to 3D gives the relationship between D and the velocity correlations in time that we
quoted in Lecture I:

    D = ∫_0^∞ (⟨u(0) · u(t)⟩/3) dt.    (41)
We also find that the DC conductivity is proportional to the diffusion constant D:

    σ = ne²/(mγ) = ne²D/(kT).    (42)

This relationship is sometimes referred to as the (classical) Einstein relation. It is our first exam-
ple of a fluctuation-dissipation theorem connecting a linear response to equilibrium dynamical
properties (the velocity autocorrelation). Note that we have been able to absorb the microscopic
damping time into D in the above formula.
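As a numerical illustration (added here; the parameter values are arbitrary), one can evolve an ensemble of Langevin particles and compare the measured mean-square displacement at long times with 2Dt, D = kT/(mγ):

```python
import numpy as np

# Check the long-time diffusion law <x^2> = 2*D*t with D = kT/(m*gamma)
# by evolving many independent Langevin particles with Euler steps.
# Parameter values are illustrative.
rng = np.random.default_rng(3)
m, gamma, kT = 1.0, 5.0, 1.0
D = kT / (m * gamma)                # = 0.2
dt, nsteps, nwalkers = 1e-2, 10_000, 5_000

u = rng.normal(0.0, np.sqrt(kT / m), nwalkers)   # equilibrium velocities
x = np.zeros(nwalkers)
for _ in range(nsteps):
    u += -gamma * u * dt + np.sqrt(2 * gamma * kT / m * dt) * rng.standard_normal(nwalkers)
    x += u * dt

t = nsteps * dt                     # t = 100, many relaxation times 1/gamma
print(np.mean(x**2) / (2 * t))      # close to D = 0.2
```

At times short compared to 1/γ the motion is instead ballistic, ⟨x²⟩ ≈ ⟨u²⟩t², consistent with the crossover contained in eq. (39).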
In the above, I postponed explaining how the method of characteristic functions can be used in
order to understand the spatial and velocity distribution of a Brownian particle. Our goal will be
to find the probability distribution P(u_0, t_0|u, t) and similarly for the position. The (optional)
remainder of this lecture gives an example of such calculations.
Now let us assume some extra properties of the random force F in the Langevin equation,
corresponding to its being a Gaussian random variable. This should be a good description if
the random force results from many random collisions, because of the central limit theorem.
Assume that the probability of its taking the n values F(t_1), F(t_2), . . . , F(t_n) is proportional to the
Gaussian form

    exp(−(1/2) Σ_{j,k} a_{jk} F(t_j) F(t_k)),    (43)

where a is some positive definite matrix. In particular, this implies that F(t) is Gaussian distributed
with mean 0.
We need a few properties of Gaussian random variables. Define the characteristic function

    c(x) ≡ ⟨e^{ixF}⟩ = (1/(√(2π)σ)) ∫ dF e^{−F²/(2σ²)} e^{ixF} = e^{−x²σ²/2}.    (44)
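This Gaussian identity is easy to verify by sampling. The following sketch (an added illustration; the values of σ and x are arbitrary) compares the sample average of e^{ixF} with e^{−x²σ²/2}:

```python
import numpy as np

# Monte Carlo check of eq. (44): for F ~ N(0, sigma^2),
# <exp(i*x*F)> = exp(-x^2 * sigma^2 / 2). Values of sigma and x are arbitrary.
rng = np.random.default_rng(4)
sigma, x = 1.5, 0.7
F = rng.normal(0.0, sigma, 1_000_000)

empirical = np.mean(np.exp(1j * x * F))
exact = np.exp(-x**2 * sigma**2 / 2)
print(abs(empirical - exact))       # small sampling error
```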
In general, we define the characteristic function of a random variable R as

    c_R(x) ≡ ⟨e^{ixR}⟩.    (45)
The generalization of the above to the characteristic function of n points is

    c(x_1, x_2, . . . , x_n) ≡ ⟨e^{ix_1 F_1} e^{ix_2 F_2} · · · e^{ix_n F_n}⟩.    (46)
Evaluating this for a Gaussian random process gives

    c(x_1, x_2, . . . , x_n) = exp(−Σ_{j,k} (1/2)(a^{−1})_{jk} x_j x_k).    (47)
Generalizing this to a random process of nonzero mean gives

    c(x_1, x_2, . . . , x_n) = exp(i Σ_j x_j m_j − Σ_{j,k} (1/2)(a^{−1})_{jk} x_j x_k).    (48)
Finally, going to the continuum limit, the characteristic functional of a random function F(t)
is defined to be

    c(ξ(t)) ≡ ⟨e^{i ∫ F(t) ξ(t) dt}⟩.    (49)
Integrating the Langevin equation led to the velocity equation

    u(t) = u(t_0) e^{−γ(t−t_0)} + ∫_{t_0}^{t} e^{−γ(t−t′)} (F(t′)/m) dt′.    (50)
The characteristic functional of u is

    c(ξ) = ⟨e^{iξu(t)}⟩ = ⟨exp(iξ u_0 e^{−γ(t−t_0)} + iξ ∫_{t_0}^{t} e^{−γ(t−t′)} (F(t′)/m) dt′)⟩.    (51)
Now recalling that F is a Gaussian random variable, we find

    c(ξ) = exp(iξ u_0 e^{−γ(t−t_0)} − (ξ²/2) ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t} dt_2 e^{−γ(t−t_1)} e^{−γ(t−t_2)} ⟨F(t_1)F(t_2)⟩/m²).    (52)
Assuming a white-noise distribution,

    ⟨F(t_1)F(t_2)⟩ = 2π I_F δ(t_1 − t_2),    (53)
the integral in the exponent is

    ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t} dt_2 e^{−γ(t−t_1)} e^{−γ(t−t_2)} ⟨F(t_1)F(t_2)⟩/m²
        = (2π I_F/m²) ∫_{t_0}^{t} e^{−2γ(t−t′)} dt′ = (π I_F/m²) (1 − e^{−2γ(t−t_0)})/γ.    (54)
Then the characteristic function is

    c(ξ) = exp(iξ u_0 e^{−γ(t−t_0)} − (ξ²/2)(π I_F/(m²γ))(1 − e^{−2γ(t−t_0)}))
         = exp(iξ u_0 e^{−γ(t−t_0)} − (ξ²/2)(kT/m)(1 − e^{−2γ(t−t_0)})),    (55)
where we have assumed in the second line the condition for equilibrium,

    I_F = mγkT/π.    (56)
Now we are done: inspecting this characteristic function, we see that it corresponds to a Gaus-
sian distribution about the mean u_0 e^{−γ(t−t_0)}. Hence

    P(u_0, t_0|u, t) = (m/(2πkT))^{1/2} (1 − e^{−2γ(t−t_0)})^{−1/2} exp(−(m/(2kT)) (u − u_0 e^{−γ(t−t_0)})² / (1 − e^{−2γ(t−t_0)})).    (57)
So after a time of order 1/γ, the velocity distribution forgets its original value and becomes
Maxwellian.
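This relaxation can be seen directly in simulation. The sketch below (illustrative, with arbitrary parameter values) starts an ensemble at a common velocity u_0 and checks that after time t the mean and variance match eq. (57):

```python
import numpy as np

# Check eq. (57): an ensemble started at the same u_0 should, after time t,
# be Gaussian with mean u_0*exp(-gamma*t) and variance
# (kT/m)*(1 - exp(-2*gamma*t)). Exact one-step updates; values illustrative.
rng = np.random.default_rng(5)
m, gamma, kT, u0 = 1.0, 1.0, 1.0, 3.0
dt, nsteps, nens = 0.01, 200, 200_000      # total time t = 2.0

a = np.exp(-gamma * dt)
s = np.sqrt(kT / m * (1 - a * a))
u = np.full(nens, u0)
for _ in range(nsteps):
    u = a * u + s * rng.standard_normal(nens)

t = nsteps * dt
print(np.mean(u), u0 * np.exp(-gamma * t))                 # mean ~ 0.406
print(np.var(u), (kT / m) * (1 - np.exp(-2 * gamma * t)))  # variance ~ 0.982
```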
Similar techniques can be used for the spatial distribution to obtain, assuming an initial
Maxwellian,

    P(0, 0|x, t) = [4πD(t − (1 − e^{−γt})/γ)]^{−1/2} exp(−x² [4D(t − (1 − e^{−γt})/γ)]^{−1}),    (58)

where

    D = kT/(mγ).    (59)
These conditional probability distributions can be related (simply by multiplying by particle den-
sity) to the evolution of a continuous density of Brownian particles. This completes the connection
between the single-particle motion under a random force and the deterministic motion of the distri-
bution of an ensemble.