98 Random Explorations
Preface
The first chapter introduces Markov chains and ideas that permeate the book. The focus is on transient chains or recurrent chains with "killing" for which there is a finite Green's function representing the expected number of visits to sites. The Green's function can be seen to be the inverse of an important operator, the Laplacian. Harmonic functions (functions whose Laplacian equals zero) and the determinant of the Laplacian figure prominently in the later chapters. We concentrate mainly on discrete time chains, but we also discuss how to get continuous time chains by putting on exponential waiting times. A probabilistic approach dominates our treatment, but much can be done purely from a linear algebra perspective. The latter approach allows measures on paths that take negative and complex values. Such path measures come up naturally in a number of models in mathematical physics, although they are not emphasized much here.
Chapter 2 introduces an object that has been a regular part of my research. I introduced the loop-erased random walk (LERW) in my doctoral dissertation with the hope of getting a better understanding of a very challenging problem, the self-avoiding random walk (SAW). While the differences between the LERW and SAW have prevented the former from being a tool to solve the latter problem, it has proved to be a very interesting model in itself. One very important application is the relation between LERW and another model, the uniform spanning tree (UST). This relationship is most easily seen in an algorithm due to David Wilson [20] to generate such trees.
Analysis of the loop-erasing procedure leads to consideration both of the loops erased and the LERW itself. Chapter 3 gives an introduction to loop measures and soups that arise from this. We view a collection of loops as a random field that is growing with time as loops are added. The distribution of the loops at time 1 corresponds to what is erased from loop-erased random walks. The loop soup at time 1/2 is related to the Gaussian free field (GFF). This chapter introduces the discrete time loop soup, which is an interesting mathematical model in itself. This discrete model has characteristics of a number of fields in statistical mechanics. In particular, the distribution of the field does not depend on how one orders the elements, but to investigate the field one can order the sites and then investigate the field one site at a time. For this model, when one visits a site, one sees all the loops that visit that site. This "growing loop" model, which depends on the order of the vertices, turns out to be equivalent to an "unrooted loop soup" that does not depend on the order.
While we have used the generality of Markov chains for our setup, one of the most important chains is the simple random walk in the integer lattice. In order to appreciate paths and fields arising from random walk, it is necessary to understand the walk. Chapter 4 discusses the simple random walk on the lattice, giving some more classical results that go beyond what one would normally see at an undergraduate level.
We return to the spanning tree in Chapter 5 and consider the infinite spanning tree in the integer lattice as a limit of spanning trees on finite subsets. Whether or not this gives an infinite tree or a forest (a collection of disconnected trees) depends on the dimension. We also give an example of duality on the integer lattice.
Another classical field is the topic of Chapter 6. The multivariate normal distribution is a well-known construction and is the model upon which much of classical mathematical statistics, such as linear regression, is based. The Gaussian free field (GFF) is an example of such a distribution where some geometry comes into the picture. Here we discuss the GFF coming from a Markov chain. The idea of exploration comes in again as one "samples" or "explores" the field at some sites and uses that to determine distributions at other sites. The global object is independent of the ordering of the vertices, but the sampling rule is not. The relation between the GFF and the growing loop defined in Chapter 3 is discussed in Section 6.6.
In Chapter 7 we introduce some of the continuous models that arise as scaling limits. A proper treatment of this material would require more mathematical background than I am assuming, so this should be viewed as an enticement to learn more. The scaling limits we discuss are: Brownian motion, the Brownian loop soup, Schramm-Loewner evolution, and the continuous GFF.

In the Appendix, we discuss a couple of topics that arise in the previous chapters but have sufficient independent interest that it seems appropriate to separate them. The first is a basic technique
Contents

§3.1. Introduction
§3.2. Growing loop at a point
§3.3. Growing loop configuration in A
§3.4. Rooted loop soup
§3.5. (Unrooted) random walk loop measure
§3.6. Local time and currents
§3.7. Negative weights
§3.8. Continuous time
Further Reading
Bibliography
Index
Chapter 4
Random Walk in Z^d

4.1. Introduction
In this chapter we will focus on the integer lattice
$$\mathbb{Z}^d = \{(z_1, \ldots, z_d) : z_j \in \mathbb{Z}\}.$$
For $A \subset \mathbb{Z}^d$ we write $\overline{A} = A \cup \partial A$, and we let $B_n$ denote the discrete ball of radius $n$ about the origin; its boundary points $w$ satisfy $n \le |w| < n+1$.

• Closure: The vertices are $\overline{A}$, and the edges are the edges of $\mathbb{Z}^d$ with at least one endpoint in $A$.
$$p_{n+1}(z) = \frac{1}{2d} \sum_{|z-w|=1} p_n(w), \tag{4.1}$$
where
$$p(x) = \prod_{j=1}^{d} \frac{1}{\sqrt{2\pi(1/d)}} \exp\left\{ -\frac{x_j^2}{2(1/d)} \right\} = \left( \frac{d}{2\pi} \right)^{d/2} \exp\left\{ -\frac{d\,|x|^2}{2} \right\}.$$
This should be familiar at least for $d = 1$. For general $d$, $p(x)$ is the joint density of $d$ independent normal random variables with mean $0$ and variance $1/d$. The variance is $1/d$ because that is the variance of one step of each component; for example, each step in the first component equals $1$ with probability $1/(2d)$, equals $-1$ with probability $1/(2d)$, and equals $0$ otherwise.
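The step-variance claim can be checked by enumerating the $2d$ unit steps exactly. A small illustrative script (not part of the text), using rational arithmetic:

```python
from fractions import Fraction

def step_component_moments(d):
    """Mean and variance of the first coordinate of a single step of
    simple random walk on Z^d, by enumerating all 2d unit steps."""
    steps = []
    for j in range(d):
        for sign in (1, -1):
            e = [0] * d
            e[j] = sign
            steps.append(e)
    p = Fraction(1, 2 * d)  # each unit step has probability 1/(2d)
    mean = sum(p * s[0] for s in steps)
    var = sum(p * s[0] ** 2 for s in steps) - mean ** 2
    return mean, var

for d in (1, 2, 3, 4):
    print(d, step_component_moments(d))  # variance is exactly 1/d
```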
We define
$$p_n(x) = n^{-d/2}\, p(x/\sqrt{n}) = \frac{1}{n^{d/2}} \left( \frac{d}{2\pi} \right)^{d/2} \exp\left\{ -\frac{d\,|x|^2}{2n} \right\}. \tag{4.2}$$
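As an illustrative numerical check (not from the text), one can compare the exact distribution of the one-dimensional walk with the density in (4.2). Note that $S_n$ is supported on a single parity class of $\mathbb{Z}$, which accounts for the factor $2$ in the approximation below:

```python
import math

def exact_prob_1d(n, x):
    """P{S_n = x} for 1D simple random walk (0 if the parity is wrong)."""
    if (n + x) % 2 != 0 or abs(x) > n:
        return 0.0
    return math.comb(n, (n + x) // 2) / 2 ** n

def lclt_approx_1d(n, x):
    """2 * p_n(x) from (4.2) with d = 1; the factor 2 reflects that S_n
    lives on the even or the odd integers only."""
    return 2.0 / math.sqrt(2 * math.pi * n) * math.exp(-x * x / (2 * n))

n = 500
for x in (0, 10, 20):
    print(x, exact_prob_1d(n, x), lclt_approx_1d(n, x))
```

For $n = 500$ the two quantities agree to well under one percent, consistent with the $O(1/n)$ relative error of the LCLT for typical $x$.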
4.2. Local central limit theorem
Proof.
$$\int_{\mathbb{T}^d} e^{-iz\cdot\theta}\, \phi(\theta)\, d\theta = \int_{\mathbb{T}^d} e^{-iz\cdot\theta} \sum_{w\in\mathbb{Z}^d} e^{iw\cdot\theta}\, \mathbb{P}\{X = w\}\, d\theta = \sum_{w\in\mathbb{Z}^d} \mathbb{P}\{X = w\} \int_{\mathbb{T}^d} e^{-iz\cdot\theta}\, e^{iw\cdot\theta}\, d\theta$$

$$\mathbb{E}\left[e^{i\theta\cdot X_1}\right] = \frac{1}{2d} \sum_{j=1}^{d} \left[ e^{i\theta_j} + e^{-i\theta_j} \right] = \frac{1}{d} \sum_{j=1}^{d} \cos\theta_j,$$

$$\mathbb{E}\left[e^{i\theta\cdot S_n}\right] = \mathbb{E}\left[\prod_{k=1}^{n} e^{i\theta\cdot X_k}\right] = \prod_{k=1}^{n} \mathbb{E}\left[e^{i\theta\cdot X_k}\right] = \left[ \frac{1}{d} \sum_{j=1}^{d} \cos\theta_j \right]^n.$$
While the theorem is valid, it is not very useful. There are other versions of the LCLT that give better estimates for these atypical values of $x$, but we will not discuss them.
Unless $\cos\theta$ is very near one, $\cos^n\theta$ will be very small for large $n$. To make this observation precise, we will use the Taylor polynomial approximation of $\cos y$. By Taylor's theorem with remainder we know that there exists $C < \infty$ such that
$$\left| \cos y - 1 + \frac{y^2}{2} \right| \le C\, y^4, \qquad |y| \le \pi/2. \tag{4.4}$$
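In fact, the Lagrange form of the remainder gives $\cos y = 1 - y^2/2 + (\cos\xi)\, y^4/24$, so (4.4) holds with $C = 1/24$. A quick grid check (illustrative only, not part of the text):

```python
import math

# Check |cos y - 1 + y^2/2| <= C y^4 with C = 1/24 on a grid of
# y in [0.01, pi/2] (by symmetry it suffices to take y > 0; tiny y
# are skipped only to avoid floating-point cancellation noise).
C = 1 / 24
worst = 0.0
for k in range(4000):
    y = 0.01 + (math.pi / 2 - 0.01) * k / 3999
    ratio = abs(math.cos(y) - 1 + y * y / 2) / y ** 4
    worst = max(worst, ratio)
print(worst)  # approaches, but does not exceed, 1/24 ~ 0.041667
```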
Indeed, we could give an explicit $C$ but we will not need it. We are letting $n$ go to infinity, so we only need consider $n$ sufficiently large that $C \le \sqrt{n}/4$. We claim that
$$p_n(x) + o(n^{-3/2}) = \frac{1}{\pi} \int_{-n^{-1/4}}^{n^{-1/4}} e^{-ix\theta}\, \cos^n\theta\, d\theta. \tag{4.5}$$
To see this, we use (4.4) to see that
$$\cos n^{-1/4} \le 1 - \frac{(n^{-1/4})^2}{2} + C\,(n^{-1/4})^2 (n^{-1/4})^2 \le 1 - \frac{1}{4\sqrt{n}},$$
and hence
$$\left| \int_{n^{-1/4} \le |y| \le \pi/2} e^{-ixy}\, \cos^n y\, dy \right| \le 2 \int_{n^{-1/4}}^{\pi/2} \cos^n y\, dy \le \pi \left[ 1 - \frac{1}{4\sqrt{n}} \right]^n \le \pi\, e^{-n^{1/2}/4} = o(n^{-3/2}).$$
If we do the change of variables $\theta = s/\sqrt{n}$, the right-hand side of (4.5) becomes
$$\frac{2}{\sqrt{2\pi n}}\, I, \qquad \text{where } I = \frac{1}{\sqrt{2\pi}} \int_{-n^{1/4}}^{n^{1/4}} e^{-ixs/\sqrt{n}}\, \cos^n(s/\sqrt{n})\, ds.$$
Note that $I = I_1 - I_2 + I_3$ where
$$I_1 = \int_{-\infty}^{\infty} e^{-ixs/\sqrt{n}}\, \frac{1}{\sqrt{2\pi}}\, e^{-s^2/2}\, ds,$$
$$I_2 = \int_{|s| \ge n^{1/4}} e^{-ixs/\sqrt{n}}\, \frac{1}{\sqrt{2\pi}}\, e^{-s^2/2}\, ds,$$
$$I_3 = \frac{1}{\sqrt{2\pi}} \int_{-n^{1/4}}^{n^{1/4}} e^{-ixs/\sqrt{n}} \left[ \cos^n(s/\sqrt{n}) - e^{-s^2/2} \right] ds.$$

$$|I_2| \le \int_{|s| \ge n^{1/4}} \frac{1}{\sqrt{2\pi}}\, e^{-s^2/2}\, ds \le O(e^{-\sqrt{n}/2}) = o(n^{-1}).$$
Similarly,
$$\sqrt{2\pi}\, |I_3| \le \int_{-n^{1/4}}^{n^{1/4}} \left| \cos^n(s/\sqrt{n}) - e^{-s^2/2} \right| ds.$$
Using the expansion for the cosine (details omitted) we see that
$$\left| \cos^n(s/\sqrt{n}) - e^{-s^2/2} \right| \le c\, \frac{s^4}{n}\, e^{-s^2/2}.$$
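The constant $c$ is not made explicit here. As an illustrative check (not from the text), the expansion $n \log\cos(s/\sqrt{n}) = -s^2/2 - s^4/(12n) + \cdots$ suggests a bound of this form, and $c = 1$ already suffices numerically for $n = 100$ over the whole range of integration:

```python
import math

# Check |cos^n(s/sqrt(n)) - exp(-s^2/2)| <= c * (s^4/n) * exp(-s^2/2)
# for n = 100 on a grid of s in (0, n^(1/4)]; c = 1 comfortably works.
n = 100
c = 1.0
ok = True
for k in range(1, 2001):
    s = n ** 0.25 * k / 2000
    diff = abs(math.cos(s / math.sqrt(n)) ** n - math.exp(-s * s / 2))
    if diff > c * s ** 4 / n * math.exp(-s * s / 2):
        ok = False
print(ok)
```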
Hence,
$$|I_3| \le \frac{c}{n} \int_{-\infty}^{\infty} s^4\, e^{-s^2/2}\, ds = O(1/n).$$
The error term $I_3$ is the largest of the error terms and indeed can be as large as $c/n$.
$$\sum_{n=0}^{\infty} p_n(x) \;\begin{cases} < \infty & \text{if } d \ge 3, \\ = \infty & \text{if } d \le 2. \end{cases}$$

4.3. Green's function

$$G(z,w) = \sum_{n=0}^{\infty} \mathbb{P}^z\{S_n = w\} = \sum_{n=0}^{\infty} p_n(w-z),$$
$$\delta(z) = \begin{cases} 1 & z = 0, \\ 0 & z \ne 0, \end{cases} \qquad L G(z) = \delta(z).$$

$$G(z) = \delta(z) + \frac{1}{2d} \sum_{|w-z|=1} \sum_{n=1}^{\infty} p_{n-1}(w) = \delta(z) + \frac{1}{2d} \sum_{|w-z|=1} G(w).$$
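On a finite set the corresponding relation reads $G_A = (I - P_A)^{-1} = \sum_n P_A^n$ (with the convention $L = I - P_A$), and this can be verified by elementary linear algebra. The following illustrative script (not from the text) computes $G_A$ for the walk on $\mathbb{Z}$ killed outside $A = \{1, \ldots, N\}$ by iterating $G \leftarrow I + P_A G$, and compares it with the classical gambler's-ruin formula $G_A(x,y) = 2\min(x,y)(N+1-\max(x,y))/(N+1)$:

```python
def green_interval(N, iters=400):
    """Approximate G_A = sum_n P_A^n for simple random walk on Z killed
    outside A = {1,...,N}, by iterating the fixed point G <- I + P_A G."""
    P = [[0.0] * N for _ in range(N)]
    for i in range(N):
        if i > 0:
            P[i][i - 1] = 0.5  # step left (walk is killed if it leaves A)
        if i < N - 1:
            P[i][i + 1] = 0.5  # step right
    G = [[float(i == j) for j in range(N)] for i in range(N)]
    for _ in range(iters):
        G = [[float(i == j) + sum(P[i][k] * G[k][j] for k in range(N))
              for j in range(N)] for i in range(N)]
    return G

N = 5
G = green_interval(N)
# Compare with 2*min(x,y)*(N+1-max(x,y))/(N+1), sites labeled 1..N.
print(G[0][0])  # 2*1*(6-1)/6 = 1.666...
```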
$$G(x) \sim \sum_{n=1}^{\infty} p_n(x) = \left( \frac{d}{2\pi} \right)^{d/2} \sum_{n=1}^{\infty} n^{-d/2}\, e^{-y/n},$$
where $y = d\,|x|^2/2$. We write the right-hand side as
$$\frac{d\,|x|^{2-d}}{2\,\pi^{d/2}} \left[ \frac{1}{y} \sum_{n=1}^{\infty} (n/y)^{-d/2}\, e^{-y/n} \right]. \tag{4.6}$$
This shows that the sum in (4.9) is absolutely convergent and we can write
$$a(x) = \sum_{j=0}^{\infty} \left[ p_j(0) - p_j(x) \right].$$
If $x_1 + x_2$ is odd, we can similarly write
$$a(x) = \sum_{j=0}^{\infty} \left[ p_j(0) - p_{j+1}(x) \right].$$
Therefore,
$$a(x) = O(1) + \frac{1}{\pi} \sum_{n \le y} \frac{1}{n} = \frac{1}{\pi} \log y + O(1) = \frac{2}{\pi} \log|x| + O(1).$$
The next proposition gives a more precise version. As in the case for the Green's function for $d \ge 3$, this can be proved from a sufficiently strong LCLT, but we will not prove it here.

Proposition 4.12. If $d = 2$, as $|x| \to \infty$,
$$a(x) = \frac{2}{\pi} \log|x| + k_0 + O(|x|^{-2}), \tag{4.10}$$
where
$$k_0 = \frac{1}{\pi} \log 8 + \frac{2}{\pi}\, \gamma$$
and $\gamma$ is Euler's constant.
Here we do not use the full force of the asymptotics of the Green's function. Although we know $G(z)$ up to an error of $|z|^{-d}$, there is an error of order $n^{1-d}$ when we replace $|z|$ with $n$, since
$$|z|^{2-d} = n^{2-d} + O(n^{1-d}), \qquad \log|z| = \log n + O(n^{-1}), \qquad z \in \partial B_n.$$
The next proposition expresses the Green's function $G_A$ on a finite set in terms of the whole-space Green's function or the potential kernel.

Proposition 4.15. Suppose $A \subset \mathbb{Z}^d$ is finite. Then for all $z, w \in A$:
• If $d \ge 3$,
$$G_A(z,w) = G(z,w) - \sum_{y\in\partial A} H_A(z,y)\, G(y,w) = G(w-z) - \sum_{y\in\partial A} H_A(z,y)\, G(w-y).$$
• If $d = 2$,
$$G_A(z,w) = -a(w-z) + \sum_{y\in\partial A} H_A(z,y)\, a(w-y).$$
Proposition 4.16.
• If $d \ge 3$,
$$G_{B_n}(0,0) = G(0) - O(n^{2-d}).$$
• If $d = 2$,
$$G_{B_n}(0,0) = \frac{2}{\pi} \log n + k_0 + O(n^{-1}),$$
where $k_0$ is as in (4.10).
• If $d = 2$ and $x \in B_n$,
$$G_{B_n}(x,0) = \frac{2}{\pi} \log\frac{n}{|x|} + O(n^{-1}) + O(|x|^{-2}). \tag{4.12}$$

Exercise 4.17. Use Proposition 4.15 to prove the last proposition.
and hence,
$$\sum_{z\in\partial A} H_A(x,z)\, G(z) = q\, \beta_d\, n^{2-d}\, [1 + O(n^{-1})].$$
Therefore,
$$q = \frac{|x|^{2-d}}{n^{2-d}}\, [1 + O(n^{-1})]. \qquad \square$$

For $z \in \partial B_n$,
$$a(z) = \frac{2}{\pi} \log n + k_0 + O(n^{-1}).$$
The probability that we want is
$$\sum_{z\in\partial B_n} H_A(x,z) = \frac{a(x)}{\frac{2}{\pi}\log n + k_0 + O(n^{-1})}.$$
Exercise 4.20. Show that if $d = 2$ and $m < |x| < n$, then the probability that a random walk starting at $x$ enters $B_m$ before leaving $B_n$ equals
$$\frac{\log n - \log|x| + O(|x|^{-1})}{\log n - \log m + O(m^{-1})}.$$
Hint: The potential kernel $a(\cdot)$ is a harmonic function in $B_n \setminus B_m$.
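A Monte Carlo sanity check of this formula (an illustrative simulation, not part of the text): with $m = 3$, $n = 30$, and $x = (9,0)$, the leading term predicts a probability near $(\log 30 - \log 9)/(\log 30 - \log 3) \approx 0.52$, and the $O(m^{-1})$ correction allows a sizable deviation at such a small $m$:

```python
import math
import random

def hits_inner_first(x, m, n, rng):
    """Run a 2D SRW from x until it enters B_m (|z| < m) or exits B_n."""
    z1, z2 = x
    while True:
        r2 = z1 * z1 + z2 * z2
        if r2 < m * m:
            return True   # entered B_m first
        if r2 >= n * n:
            return False  # left B_n first
        step = rng.randrange(4)
        if step == 0:
            z1 += 1
        elif step == 1:
            z1 -= 1
        elif step == 2:
            z2 += 1
        else:
            z2 -= 1

rng = random.Random(0)
m, n = 3, 30
trials = 2000
est = sum(hits_inner_first((9, 0), m, n, rng) for _ in range(trials)) / trials
pred = (math.log(n) - math.log(9)) / (math.log(n) - math.log(m))
print(est, pred)  # both in the vicinity of 0.5
```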
Note that
$$\mathbb{P}^x\{S_\tau = z,\ \rho = k,\ S_k = w\} = \mathbb{P}^x\{S_k = w,\ k < \tau\}\; \mathbb{P}^x\{S_\tau = z,\ \rho = k \mid S_k = w,\ k < \tau\}.$$
Using the Markov property we can see that
$$\mathbb{P}^x\{S_\tau = z,\ \rho = k \mid S_k = w,\ k < \tau\} = q(w).$$
Therefore,
$$\mathbb{P}^x\{S_\tau = z\} = \sum_{w\in V} \sum_{k=0}^{\infty} q(w)\, \mathbb{P}^x\{S_k = w,\ k < \tau\} = \sum_{w\in V} q(w)\, G_A(x,w).$$
Since $f$ is nonnegative,
$$f(x) = \sum_{z\in\partial B_n} H_{B_n}(x,z)\, f(z) \le \sum_{z\in\partial B_n} H_{B_n}(y,z) \left[ 1 + \frac{c_r}{n} \right] f(z) = \left[ 1 + \frac{c_r}{n} \right] f(y).$$
If $|x|, |y| \le rn$, then we can connect $x$ to $y$ by a path staying in $B_{rn}$ of at most $2r\sqrt{d}\, n$ steps. Therefore, by repeated application of the above inequality we get
$$f(x) \le \left[ 1 + \frac{c_r}{n} \right]^{2r\sqrt{d}\, n} f(y) \le C_r\, f(y),$$
where $C_r = \exp\{2\sqrt{d}\, c_r\, r\}$. $\square$
Proposition 4.24. There exists $c < \infty$ such that if $f : A \to [0,\infty)$ is harmonic in $A$ then the following holds. Suppose $\omega$ is a nearest neighbor path from $z$ to $w$ in $A$ of length $k$ with $\operatorname{dist}(\omega, \partial A) \ge N$. Then,
$$f(z) \le f(w)\, \exp\{c\, k/N\}.$$

Exercise 4.25.
(1) Check that the proof of Proposition 4.22 extends to prove the last proposition.
(2) Use the last proposition to show the following. There exists $c < \infty$ such that if $A = \mathbb{Z}^d \setminus B_n$ and $f : A \to [0,\infty)$ is harmonic in $A$, then for all $z, w \in \partial B_{2n}$,
$$f(z) \le c\, f(w).$$

Exercise 4.26. Show that there exists $c < \infty$ such that the following is true for every $f : B_n \to \mathbb{R}$ that is harmonic in $B_n$.
• For every $y \in B_n$,
$$|f(y) - f(0)| \le c\, \frac{|y|}{n}\, \|f\|_\infty.$$
• If $f \ge 0$ on $B_n$, then for every $y \in B_{n/2}$,
$$|f(y) - f(0)| \le c\, \frac{|y|}{n}\, f(0). \tag{4.18}$$
Exercise 4.27. Show that there exist $\alpha > 0$ and $c < \infty$ such that the following is true. Let $A = \mathbb{Z}^d \setminus B_n$ and $z \in \partial A$. Then if $r \ge 2$ and $x, y \in \mathbb{Z}^d \setminus B_{rn}$, then
$$|H_A(x,z) - H_A(y,z)| \le \frac{c}{r^\alpha}\, H_A(x,z). \tag{4.19}$$
Hint:
(1) Let $V_k = \partial B_{2^k n}$ for positive integers $k$. Explain why it suffices to prove (4.19) for $x, y \in V_k$ for all $k$.
(2) Let
$$\lambda_k = \max\left\{ \frac{|H_A(x,z) - H_A(y,z)|}{H_A(x,z)} : x, y \in V_k \right\}.$$
Show that there exists $\rho < 1$ (independent of $z, n, k$) such that if $k \ge 1$,
$$\lambda_{k+1} \le \rho\, \lambda_k.$$
Hint: Use Exercise 4.25.
We will now establish the existence of the limit. Before doing so, we
note that the limit does not exist for d = 1. If we consider the set
A = {0, 1}, then the probability that a random walk “from infinity”
first visits A at 0 depends on whether the walker is coming from the
right-hand side or the left-hand side. The proposition below shows
that in more than one dimension, the hitting probability is the same
(in the limit) regardless of the direction one is coming from.
where
$$\tau_n = \min\{j \ge 1 : S_j \in \partial B_n\}.$$
Moreover, if $A \subset B_m$ and $|w| \ge 2m$, then
$$\mathbb{P}^w\{S_T = x \mid T < \infty\} = \mathrm{hm}_A(x)\, [1 + O(\epsilon)],$$
where $\epsilon = \epsilon(m,w) = m/|w|$ for $d \ge 3$ and $\epsilon = (m/|w|)\log(|w|/m)$ for $d = 2$.
By Exercise 4.28,
$$\mathbb{P}^w\{S(T_A) = x\} = \sum_{z\in\partial B_n} G_{\mathbb{Z}^d\setminus A}(w,z)\, r_n(x,z) = \sum_{z\in\partial B_n} G_{\mathbb{Z}^d\setminus A}(w,z)\, \mathbb{P}^x\{\tau_n < T\}\, \mathbb{P}^0\{S(\tau_n) = z\}\, [1 + O(\epsilon_n)] = J_n(w,A)\, \mathbb{P}^x\{\tau_n < T\}\, [1 + O(\epsilon_n)],$$
where
$$J_n(w,A) = \sum_{z\in\partial B_n} G_{\mathbb{Z}^d\setminus A}(w,z)\, H_{B_n}(0,z).$$
Hint:
(1) It suffices to prove the result when $\|f\|_\infty = 1$.
(2) Let
$$\hat{f}(z) = \sum_{x\in\partial A} H_A(z,x)\, f(x).$$
Show that
$$\lim_{z\to\infty} \hat{f}(z) = \hat{f}(\infty).$$

4.5. Capacity for d ≥ 3
and hence
$$F_{\{0,z\}}(\mathbb{Z}^d) = G(0,0)\, G_{\mathbb{Z}^d\setminus\{0\}}(z,z) = \frac{1+p}{1-p}.$$
(4) Show that
$$\mathrm{cap}(\{0,z\}) = 2\, \frac{1-p}{1+p}.$$
Proposition 4.34. If $A \subset \mathbb{Z}^d$, $d \ge 3$, is a finite set, then
$$\sigma = \max\{k < \infty : S_k \in A\}$$
If we multiply both sides by $\beta_d^{-1}\, |x|^{d-2}$ and take the limit using (4.21), we get the result. $\square$
Proposition 4.36. If $A = B_n$,
$$\mathrm{cap}(A) = \beta_d^{-1}\, n^{d-2} + O(n^{d-3}).$$
Since a transient random walk visits each point only a finite number of times, it also visits every finite set only finitely often. What about infinite sets?

Also, Proposition 4.34 shows that $\mathrm{cap}(A_n) \asymp r_n^{d-2}$ and hence the probability of visiting $A_n$ is comparable to
$$2^{-n(d-2)}\, r_n^{d-2} \asymp 2^{n(d-2)(b-1)} = 2^{-n(d-2)/d}.$$
Therefore,
$$\sum_{n=1}^{\infty} \mathbb{P}^x\{\text{random walk visits } A_n\} < \infty.$$
Exercise 4.39. Show that there exist $0 < c_1 < c_2 < \infty$ such that for every $A$ and every $n$, if $|x| \le 2^{n-1}$ or $2^{n+2} \le |x| \le 2^{n+3}$, then
$$c_1\, \mathrm{cap}(A_n) \le 2^{n(d-2)}\, \mathbb{P}^x\{T_{A_n} < \infty\} \le c_2\, \mathrm{cap}(A_n).$$

Proposition 4.40 (Wiener's Test). Let $A \subset \mathbb{Z}^d$, $d \ge 3$, and let
$$A_n = A \cap \{z : 2^n \le |z| < 2^{n+1}\}.$$
Then the set $A$ is transient for random walk if and only if
$$\sum_{n=1}^{\infty} 2^{n(2-d)}\, \mathrm{cap}(A_n) < \infty. \tag{4.23}$$
Proof. We will only prove the result for $A$; the result for $\tilde{A}$ is similar but requires some more work. Let
$$Y = \sum_{x\in A} G(x) = \sum_{x\in\mathbb{Z}^d} 1\{x \in A\}\, G(x),$$
$$\sigma_n = \min\{j : |S_j| \ge 2^n\}.$$
We will not give all the details, but leave it as an exercise in the ideas of this chapter to fill them in. However, we will give a sketch of the facts to verify. Let $E_n$ denote the event that $A_{4n} \cap \tilde{A}_{4n} \ne \emptyset$ and let $U_n = B_{2^{4n+1}} \setminus B_{2^{4n}}$.
• Show that there exists $c_1 > 0$ such that for all $x \in U_n$,
$$\mathbb{P}\{x \in A_n\} \ge c_1\, 2^{-8n}.$$
• Show that there exists $c_2 < \infty$ such that for all $x, y \in U_n$ distinct,
$$\mathbb{P}\{x, y \in A_n\} \le c_2\, 2^{-16n}\, |x-y|^{-4}.$$
• Show that if
$$Y_n = \sum_{x\in A_{4n}\cap\tilde{A}_{4n}} 1\{x \in A \cap \tilde{A}\}.$$
Exercise 4.44. Show that if $d \ge 5$, there exists $c < \infty$ such that the following holds. Suppose $S^1, S^2$ are simple random walks starting at $0$ and $x$, respectively. Then,
$$\mathbb{P}\{S^1[0,\infty) \cap S^2[0,\infty) \ne \emptyset\} \le c\, |x|^{4-d}.$$
Hint: Let $I_y$ be the indicator function of the event that there exist $j, k$ with $S^1_j = y$ and $S^2_k = y$. Let $I = \sum_{y\in\mathbb{Z}^d} I_y$. Show that
$$\mathbb{E}[I] \le c\, |x|^{4-d}.$$
4.6. Capacity in two dimensions
Proof. We will do the case $z = 0$ and leave the general case to the reader. Note that
$$\mathbb{P}^x\{\tau_n < \tau_0\} = \mathbb{P}^x\{\tau_n < \tau_A\} + \mathbb{P}^x\{\tau_A < \tau_n < \tau_0\}, \tag{4.24}$$
and
$$\mathbb{P}^x\{\tau_A < \tau_n < \tau_0\} = \sum_{w\in A} \mathbb{P}^x\{S(\tau_A \wedge \tau_n) = w\}\, \mathbb{P}^w\{\tau_n < \tau_0\}.$$
If $0 \in A$, we can write
$$a(x) - a_A(x) = \lim_{n\to\infty} \frac{2}{\pi} (\log n) \left[ \mathbb{P}^x\{\tau_n < \tau_0\} - \mathbb{P}^x\{\tau_n < \tau_A\} \right].$$
Further reading

The classical book by Frank Spitzer [18] includes an extensive bibliography of the early work on random walk. This chapter can be considered a sampler from [10], which is a serious graduate/research-level treatment of the simple random walk.
Index

capacity
  in Z^2, 102
  in Z^d, d ≥ 3, 93
capacity parametrization, 169
Cauchy-Riemann equation, 171
Cauchy-Riemann equations, 159
central charge, 45, 167
central limit theorem, 70
Chapman-Kolmogorov equations, 2
characteristic function, 71, 193
comparable, 97
compound Poisson process, 183, 188
concatenation, 18
conformal invariance, 163, 166
conformal transformation, 160
edge
  self-edge, 7
  undirected, 7
elementary loop, 47
escape probability, 93
Euler's constant, 81
exponential distribution, 20, 190
fractal dimension, 153
free boundary, 67, 119
gambler's ruin, 149
Gamma distribution, 65, 190
Gamma function, 78, 189
Gamma process, 65, 190
Gaussian free field (GFF), 133
  conformal invariance in R^2, 176
sausage, 154
scaling limit, 153
Schramm-Loewner evolution (SLE), 167, 171
second moment method, 180
self-avoiding polygon (SAP), 122
self-avoiding walk (SAW), 24, 172
simply connected, 122, 161
spanning forest, 117
spanning tree, 117
Stirling's formula, 6, 75
stochastic matrix, 2
stopping time, 9
transient, 76
  set in Z^d, 96, 98
transition matrix (probabilities), 1
tree, 37
  spanning, 38
  wired spanning, 41