
Preface

This book is an outgrowth of lectures that I gave in the summer of
2020 as part of the Research Experiences for Undergraduates (REU)
at the University of Chicago. The REU lectures are not intended to
be standard courses but rather tastes of graduate and research level
mathematics for advanced undergraduates. The title of the book
can be interpreted in two ways. First, this is not a comprehensive
survey of an area but rather a “random” sampling of some objects
that arise in models in probability and statistical mechanics. The
second meaning refers to a prevailing theme in many of these models.
Random fields can be studied by exploration, that is, by traveling
(perhaps randomly) through the field and observing what one has
seen so far and using that to predict the parts that have not been
observed.
In order to keep the material accessible to students who have not
had graduate material, I have concentrated on discrete models where
“measure theoretic” probability is not needed. The formal prereq-
uisites for these notes are advanced calculus, linear algebra, and a
calculus-based course in probability. It is also expected that students
have sufficient mathematical maturity to understand rigorous argu-
ments. While those are the only formal prerequisites, the intent of
these lectures was to give a taste of research level mathematics and I
allow myself to venture occasionally a bit beyond these prerequisites.


The first chapter introduces Markov chains and ideas that perme-
ate the book. The focus is on transient chains or recurrent chains with
“killing” for which there is a finite Green’s function representing the
expected number of visits to sites. The Green’s function can be seen
to be the inverse of an important operator, the Laplacian. Harmonic
functions (functions whose Laplacian equals zero) and the determinant
of the Laplacian figure prominently in the later chapters. We concen-
trate mainly on discrete time chains but we also discuss how to get
continuous time chains by putting on exponential waiting times. A
probabilistic approach dominates our treatment but much can be done
purely from a linear algebra perspective. The latter approach allows
measures on paths that take negative and complex values. Such path
measures come up naturally in a number of models in mathematical
physics although they are not emphasized much here.
Chapter 2 introduces an object that has been a regular part of my
research. I introduced the loop-erased random walk (LERW) in my
doctoral dissertation with the hope of getting a better understanding
of a very challenging problem, the self-avoiding random walk (SAW).
While the differences between the LERW and SAW have prevented
the former from being a tool to solve the latter problem, it has proved
to be a very interesting model in itself. One very important applica-
tion is the relation between LERW and another model, the uniform
spanning tree (UST). This relationship is most easily seen in an al-
gorithm due to David Wilson [20] to generate such trees.
Analysis of the loop-erasing procedure leads to consideration both
of the loops erased and the LERW itself. Chapter 3 gives an intro-
duction to loop measures and soups that arise from this. We view
a collection of loops as a random field that is growing with time as
loops are added. The distribution of the loops at time 1 corresponds
to what is erased from loop-erased random walks. The loop soup at
time 1/2 is related to the Gaussian free field (GFF). This chapter
introduces the discrete time loop soup which is an interesting math-
ematical model in itself. This discrete model has characteristics of a
number of fields in statistical mechanics. In particular, the distribu-
tion of the field does not depend on how one orders the elements, but
to investigate the field one can order the sites and then investigate

the field one site at a time. For this model, when one visits a site,
one sees all the loops that visit that site. This “growing loop” model
which depends on the order of the vertices turns out to be equivalent
to an “unrooted loop soup” that does not depend on the order.
While we have used the generality of Markov chains for our set-
up, one of the most important chains is the simple random walk in
the integer lattice. In order to appreciate paths and fields arising
from random walk, it is necessary to understand the walk. Chapter
4 discusses the simple random walk on the lattice giving some more
classical results that go beyond what one would normally see at an
undergraduate level.
We return to the spanning tree in Chapter 5 and consider the
infinite spanning tree in the integer lattice as a limit of spanning
trees on finite subsets. Whether or not this gives an infinite tree or a
forest (a collection of disconnected trees) depends on the dimension.
We also give an example of duality on the integer lattice.
Another classical field is the topic of Chapter 6. The multivariate
normal distribution is a well known construction and is the model
upon which much of classical mathematical statistics, such as linear
regression, is based. The GFF is an example of such a distribution
where some geometry comes into the picture. Here we discuss the
GFF coming from a Markov chain. The idea of exploration comes
in again as one “samples” or “explores” the field at some sites and
uses that to determine distributions at other sites. The global object
is independent of the ordering of the vertices but the sampling rule
is not. There is a relation between the GFF and the growing loop
defined in Chapter 3; this relation is discussed in Section 6.6.
In Chapter 7 we introduce some of the continuous models that
arise as scaling limits. A proper treatment of this material would
require more mathematical background than I am assuming so this
should be viewed as an enticement to learn more. The scaling limits
we discuss are: Brownian motion, Brownian loop soup, Schramm-
Loewner evolution, and the continuous GFF.
In the Appendix, we discuss a couple of topics that arise in
the previous chapters but have sufficient independent interest that
it seems appropriate to separate them. The first is a basic technique

for research probabilists often called the “second moment method”.


The second, which arises for us primarily in the analysis of the loop
models, is an introduction to Lévy processes with an emphasis on the
negative binomial and Gamma processes.
There are a number of exercises scattered through the text. It
is recommended that the serious reader, that is, those who are con-
sidering doing research in this or related areas of mathematics, do as
many as possible. I also suggest to be prepared to draw pictures to
help understand some of the constructions and the arguments. Of
course, the more casual reader can do whatever they please!
I have focused on the mathematics in these lectures and have not
discussed the history of the development of these ideas. Clearly, the
mathematics in this book is the work of many researchers including
many who are active today. I have included a few references for fur-
ther reading. Many of these works also have extensive bibliographies
which can be a good source of original articles.
All of the chapters depend on the material in Chapter 1. Chapter
3 uses Chapter 2 while Chapters 4 and 6 are independent and need
only Chapter 1. Chapter 5 uses all the chapters preceding it.
I would like to thank the “random fields” group during the 2020
REU: Nixia Chen, Victor Gardner, Jinwoo Sung, Stephen Yearwood
(mentors) and Sam Craig, Nitya Gadhiwala, Jingyi Jia, Ethan Lewis,
Mishal Mrinal, Ethan Naegele, Vedant Pathak, Sivakorn Sanguamoo,
Rachit Surana, Haozhe Yu, Lingyue Yu, Stanley Zhu (participants),
as well as some participants from other groups that interacted with
our group: Jake Fiedler, Jessica Metzger, Lucca Prado, Ben Rapport.
Among others who commented and sent corrections on early drafts of
the notes are Charley Devlin, Vladislav Guskov, Seyhun Ji, Fredrik
Viklund, and Zijian Wang.
Research was supported by National Science Foundation grant
DMS-1513036.
Contents

Preface ix

Chapter 1. Markov Chains 1


§1.1. Definition 1
§1.2. Laplacian and harmonic functions 7
§1.3. Markov chain with boundary 8
§1.4. Green’s function 13
§1.5. An alternative formulation 16
§1.6. Continuous time 20
Further Reading 21

Chapter 2. Loop-Erased Random Walk 23


§2.1. Loop erasure 23
§2.2. Loop-erased random walk 25
§2.3. Determinant of the Laplacian 29
§2.4. Laplacian random walk 32
§2.5. Putting the loops back on the path 36
§2.6. Wilson’s algorithm 37
Further Reading 43

Chapter 3. Loop Soups 45


§3.1. Introduction 45
§3.2. Growing loop at a point 46
§3.3. Growing loop configuration in A 50
§3.4. Rooted loop soup 54
§3.5. (Unrooted) random walk loop measure 55
§3.6. Local time and currents 58
§3.7. Negative weights 63
§3.8. Continuous time 65
Further Reading 66

Chapter 4. Random Walk in Zd 67


§4.1. Introduction 67
§4.2. Local central limit theorem 70
§4.3. Green’s function 76
§4.4. Harmonic functions 81
§4.5. Capacity for d ≥ 3 93
§4.6. Capacity in two dimensions 101
Further reading 102

Chapter 5. LERW and Spanning Trees on Zd 103


§5.1. LERW in Zd 103
§5.2. Marginal distributions for UST in Zd 108
§5.3. Uniform spanning tree (UST) in Zd 117
§5.4. The dual lattice in Z2 122
§5.5. The uniform spanning tree (UST) in Z2 130
Further Reading 131

Chapter 6. Gaussian Free Field 133


§6.1. Introduction 133
§6.2. Multivariate normal distribution 133
§6.3. Gaussian fields coming from Markov chains 139
§6.4. A Gibbs measure perspective 143
§6.5. One-dimensional GFF 148

§6.6. Square of the field 149


Further reading 152

Chapter 7. Scaling Limits 153


§7.1. The idea of a scaling limit 153
§7.2. Brownian motion 156
§7.3. Conformal invariance in two dimensions 159
§7.4. Brownian loop soup 161
§7.5. Scaling limit for LERW 164
§7.6. Loewner differential equation 169
§7.7. Self-avoiding walk: c = 0 172
§7.8. Continuous GFF for d = 1, 2 173
Further Reading 177

Appendix A. Some Background and Extra Topics 179


§A.1. Borel-Cantelli lemma 179
§A.2. Second moment method 180
§A.3. Compound Poisson process 182
§A.4. Negative binomial process 184
§A.5. Increasing jump processes 187
§A.6. Gamma process 189
§A.7. Lévy processes 192

Bibliography 195

Index 197
Chapter 4

Random Walk in Zd

4.1. Introduction
In this chapter we will focus on the integer lattice

Zd = {(z1 , . . . , zd ) : zj ∈ Z}

viewed as an undirected graph where two vertices z, w are adjacent if
they are nearest neighbors, that is, |z − w| = 1. Here and throughout
this chapter we use | · | to denote the usual Euclidean distance. If
A ⊂ Zd , we write

∂A = {z ∈ Zd \ A : |z − w| = 1 for some w ∈ A},

Ā = A ∪ ∂A.
We let Bn denote the discrete ball of radius n about the origin

Bn = {z ∈ Zd : |z| < n},

and note that for all w ∈ ∂Bn ,

n ≤ |w| < n + 1.

There are three natural “subgraphs” of Zd associated to a subset A:

• Free boundary: The vertices are A, and the edges are the
edges of Zd with both endpoints in A.


• Closure: The vertices are Ā, and the edges are the edges
of Zd with at least one endpoint in A.

• Wired boundary: The vertices are A∪{∂A} where all the


points on the boundary are identified (“wired”) and consid-
ered as a single point. The edges are the same edges as
in the closure but now there can be multiple edges from a
vertex z ∈ A to the boundary ∂A.

Simple random walk on Zd starting at the origin can be written as

Sn = X1 + · · · + Xn

where X1, . . . , Xn are independent random variables with distribution
P{Xj = w} = 1/(2d) for all |w| = 1. We will write pn(z, w) for the
corresponding n-step transition probabilities

pn(z, w) = Pz{Sn = w},

and pn(w) = pn(0, w) = pn(z, z + w). The transition probabilities are
symmetric, pn(z, w) = pn(w, z). We write L for the Laplacian

Lf(z) = (I − P) f(z) = f(z) − (1/(2d)) ∑_{|w−z|=1} f(w).

The transition probabilities pn(z) satisfy the “discrete heat equation”

(4.1)    pn+1(z) = (1/(2d)) ∑_{|z−w|=1} pn(w),

which can also be written as

pn+1(z) − pn(z) = −L pn(z),
where the Laplacian is with respect to the variable z. Simple random
walk is a Markov chain with period 2. We can divide Zd into the
“even” points and the “odd” points where the even points are the
(z1 , . . . , zd ) with z1 + · · · + zd even. If one starts at an even point,
then after an odd number of steps one is at an odd point, and after
an even number of steps one is at an even point.
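For readers who like to experiment, here is a minimal sketch in Python (with NumPy; not from the original text, and all parameters are arbitrary choices) that evolves the exact distribution pn(z) for d = 1 by repeatedly applying (4.1), and checks the period-2 parity property just described.

import numpy as np

# Evolve the exact distribution p_n(z) of simple random walk on Z (d = 1)
# by the discrete heat equation (4.1): p_{n+1}(z) = (1/2)[p_n(z-1) + p_n(z+1)].
N = 50                              # window: sites z = -N, ..., N
p = np.zeros(2 * N + 1)
p[N] = 1.0                          # p_0(z) = delta(z): start at the origin
steps = 20
for _ in range(steps):
    q = np.zeros_like(p)
    q[1:-1] = 0.5 * (p[:-2] + p[2:])
    p = q

# Period 2: after an even number of steps, all odd sites have probability 0.
assert all(p[N + z] == 0.0 for z in range(-steps + 1, steps, 2))
print(p[N], p[N + 2])               # p_20(0) and p_20(2)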

There are other variations of simple random walk that


get rid of this periodicity. Two standard ones are:
• Lazy walker: Let 0 < p < 1. At each time step the
walker chooses with probability p to not move. If
the walker moves, then it chooses its new site as in
the simple random walk.
• Continuous time: Let St be a continuous time walk
that waits for an exponential amount of time and
then takes a step. In this model the components of
the walk are independent.
These models are the same if we view them only at the times
the walker chooses a new site. There are advantages and
disadvantages to each of these.

Our discrete heat equation is a discretization of the usual
(continuous) heat equation

∂t p(t, x) = (1/2) ∆x p(t, x).

The latter describes the evolution of the probability den-
sity function of the continuous analogue of random walk
which is called Brownian motion.

4.2. Local central limit theorem


Let Sn denote the position of a simple random walk starting at the
origin in Zd. The central limit theorem states that the distribution
of n^{−1/2} Sn converges to a normal distribution; in this case, if U is an
open ball in Rd,

lim_{n→∞} P{ Sn/√n ∈ U } = ∫_U p(x) dx,

where

p(x) = ∏_{j=1}^d (2π(1/d))^{−1/2} exp{ −xj²/(2(1/d)) } = (d/2π)^{d/2} exp{ −d|x|²/2 }.

This should be familiar at least for d = 1. For general d, p(x) is
the joint density of d independent normal random variables with mean 0
and variance 1/d. The variance is 1/d because that is the variance
of one step for each component; for example, each step in the first
component equals 1 with probability 1/(2d); −1 with probability 1/(2d);
and 0 otherwise.
We define

(4.2)    p̄n(x) = n^{−d/2} p(x/√n) = n^{−d/2} (d/2π)^{d/2} exp{ −d|x|²/(2n) }.

Using the central limit theorem as a guide, we might conjecture that
if n and x = (x1, . . . , xd) have the same “parity”, that is, if n + x1 +
x2 + · · · + xd is even, then pn(x) ∼ 2 p̄n(x). Statements of this kind
are called local central limit theorems (LCLT). These theorems are
stronger than the usual central limit theorem, which is not sufficiently
precise to estimate probabilities at individual points.
We will state one strong (although not the strongest) version of
the LCLT for simple random walk. The basic idea and proof work
for all d, but for ease we will discuss the full details of the proof only
for d = 1. Let

Td = [−π, π] × [−π, π] × · · · × [−π, π]    (d factors).

If θ = (θ1, . . . , θd) ∈ Td we write dθ for dθ1 · · · dθd. If X is a
random variable taking values in Zd, its characteristic function is the
function ϕ : Rd → C defined by

ϕ(θ) = ϕX(θ) = E[e^{iθ·X}] = ∑_{x∈Zd} e^{iθ·x} P{X = x}.

Note that since X takes values in Zd and e^{2πi} = 1, the function ϕ is
periodic of period 2π in each variable. In other words, if y ∈ Zd, then
e^{i2πy·X} = 1 and hence

ϕ(θ) = ϕ(θ + 2πy).

The next proposition shows that we can give the distribution of Sn in


terms of the characteristic function: the idea is a version of “Fourier
inversion”.

Proposition 4.1. Suppose X is a random variable taking values in
Zd with characteristic function ϕ. Then,

P{X = z} = (2π)^{−d} ∫_{Td} e^{−iz·θ} ϕ(θ) dθ.
Proof.

∫_{Td} e^{−iz·θ} ϕ(θ) dθ = ∫_{Td} e^{−iz·θ} [ ∑_{w∈Zd} e^{iw·θ} P{X = w} ] dθ
                     = ∑_{w∈Zd} P{X = w} ∫_{Td} e^{−iz·θ} e^{iw·θ} dθ
                     = (2π)^d P{X = z}.

The third equality uses the identity

∫_{Td} e^{iw·θ} dθ = (2π)^d if w = 0,   and = 0 if w ∈ Zd \ {0}.

The interchange of sum and integral in the second equality is valid
since

∑_{w∈Zd} ∫_{Td} |e^{−iz·θ} e^{iw·θ}| P{X = w} dθ = ∑_{w∈Zd} (2π)^d P{X = w} < ∞.  □

If Sn = X1 + · · · + Xn is simple random walk in Zd, then

E[e^{iθ·X1}] = (1/(2d)) ∑_{j=1}^d [e^{iθj} + e^{−iθj}] = (1/d) ∑_{j=1}^d cos θj,

and

E[e^{iθ·Sn}] = ∏_{k=1}^n E[e^{iθ·Xk}] = [ (1/d) ∑_{j=1}^d cos θj ]^n.

The second equality uses the independence of X1, . . . , Xn. Combining
this with the last proposition, we get an exact expression for the
distribution of Sn.

Proposition 4.2. The n-step transition probabilities are given by

(4.3)    pn(z) = (2π)^{−d} ∫_{Td} e^{−iz·θ} ϕ(θ)^n dθ,

where

ϕ(θ) = (1/d) ∑_{j=1}^d cos θj.
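As a sanity check, (4.3) can be evaluated numerically. The sketch below (Python with NumPy; an illustration, not part of the text) approximates the integral for d = 1 by a Riemann sum and compares it with the exact binomial value of pn(z).

import numpy as np
from math import lgamma, exp, log

# Evaluate (4.3) for d = 1: p_n(z) = (1/2pi) * integral over [-pi, pi] of
# e^{-i z theta} cos^n(theta) d(theta), approximated by a Riemann sum.
n, z = 12, 4
theta = np.linspace(-np.pi, np.pi, 100001)
integrand = np.exp(-1j * z * theta) * np.cos(theta) ** n
riemann = (integrand.sum() * (theta[1] - theta[0])).real / (2 * np.pi)

# Exact value: (n+z)/2 of the n steps are +1; computed in log space.
k = (n + z) // 2
exact = exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) - n * log(2))
print(riemann, exact)   # the two values agree to integration accuracy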

While (4.3) is an exact expression, the integrand is highly oscil-


latory for large |z| which means that there is a lot of cancellation.
Hence it takes work to estimate the integral.
Theorem 4.3 (Local Central Limit Theorem (LCLT)). For every in-
teger d ≥ 1, there exists c < ∞ such that for every positive integer n
and x = (x1, . . . , xd) ∈ Zd with n + x1 + · · · + xd even,

|pn(x) − 2 p̄n(x)| ≤ c / n^{(d/2)+1}.

Here p̄n(x) is as in (4.2).

Remark 4.4. For a “typical” x with |x| ≤ √n, p̄n(x) is of order
n^{−d/2} and hence we can write

pn(x) = 2 p̄n(x) [1 + O(n^{−1})].

However, if |x| ≫ √n, then p̄n(x) is of smaller order and the error
n^{−((d/2)+1)} can be larger than the dominant term. In this case, while
the theorem is valid, it is not very useful. There are other versions
of the LCLT that give better estimates for these atypical values of x,
but we will not discuss them.
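The rate in Theorem 4.3 can be observed numerically. The following Python sketch (an illustration only; the values of n are arbitrary) computes the exact pn(x) for d = 1 in log space and verifies that the maximal error against 2 p̄n(x) decays like n^{−3/2}.

import numpy as np
from math import lgamma, exp, log

def p_exact(n, x):                  # d = 1, n + x even: (n+x)/2 up-steps
    k = (n + x) // 2
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) - n * log(2))

def pbar(n, x):                     # Gaussian approximation (4.2) for d = 1
    return np.exp(-x * x / (2.0 * n)) / np.sqrt(2.0 * np.pi * n)

for n in (100, 400, 1600):
    err = max(abs(p_exact(n, x) - 2 * pbar(n, x)) for x in range(-n, n + 1, 2))
    print(n, err * n ** 1.5)        # roughly constant, as the theorem predicts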

The proof of Theorem 4.3 is similar in all dimensions and involves
estimating the integral in (4.3). We will only discuss it in the case
d = 1 and n, x even, for which

p̄n(x) = (2πn)^{−1/2} e^{−x²/2n},

and hence (4.3) gives

P{Sn = x} = (1/2π) ∫_{−π}^{π} e^{−ixθ} cos^n θ dθ.

Since n and x are both even integers, the function v(θ) = e^{−ixθ} cos^n θ
has period π and hence the integral from −π to π is the same as twice
the integral from −π/2 to π/2,

pn(x) = (1/π) ∫_{−π/2}^{π/2} e^{−ixθ} cos^n θ dθ.
Let us consider this integral. We expect the left-hand side to be of
order n^{−1/2}, at least if x is not too far away from the origin. We also
know that cos θ goes from 1 to 0 as |θ| goes from 0 to π/2. Unless
cos θ is very near one, cos^n θ will be very small for large n.
To make this observation precise, we will use the Taylor polynomial
approximation of cos y. By Taylor’s theorem with remainder we know
that there exists C < ∞ such that

(4.4)    |cos y − 1 + y²/2| ≤ C y⁴,    |y| ≤ π/2.

Indeed, we could give an explicit C but we will not need it. We are
letting n go to infinity, so we only need consider n sufficiently large
that C ≤ √n/4. We claim that

(4.5)    pn(x) + o(n^{−3/2}) = (1/π) ∫_{−n^{−1/4}}^{n^{−1/4}} e^{−ixθ} cos^n θ dθ.

To see this, we use (4.4) to see that

cos(n^{−1/4}) ≤ 1 − (n^{−1/4})²/2 + C (n^{−1/4})² (n^{−1/4})² ≤ 1 − 1/(4√n),

and hence

| ∫_{n^{−1/4} ≤ |y| ≤ π/2} e^{−ixy} cos^n y dy | ≤ 2 ∫_{n^{−1/4}}^{π/2} |cos y|^n dy ≤ π [1 − 1/(4√n)]^n ≤ π e^{−√n/4}.

The first inequality is immediate since |e^{−ixy}| = 1. Note that
e^{−√n/4} = o(n^{−3/2}).

If we do the change of variables θ = s/√n, the right-hand side of
(4.5) becomes

(2/√(2πn)) I,   where   I = (1/√(2π)) ∫_{−n^{1/4}}^{n^{1/4}} e^{−ixs/√n} cos^n(s/√n) ds.

Note that I = I1 − I2 + I3 where

I1 = (1/√(2π)) ∫_{−∞}^{∞} e^{−ixs/√n} e^{−s²/2} ds,

I2 = (1/√(2π)) ∫_{|s| ≥ n^{1/4}} e^{−ixs/√n} e^{−s²/2} ds,

I3 = (1/√(2π)) ∫_{−n^{1/4}}^{n^{1/4}} e^{−ixs/√n} [cos^n(s/√n) − e^{−s²/2}] ds.

The integral I1 is the characteristic function of a standard normal
random variable evaluated at −x/√n; one can compute this or look
it up to see that I1 = e^{−x²/2n}. Using |e^{−iy}| = 1, we see that

|I2| ≤ (1/√(2π)) ∫_{|s| ≥ n^{1/4}} e^{−s²/2} ds ≤ O(e^{−√n/2}) = o(n^{−1}).

Similarly,

√(2π) |I3| ≤ ∫_{−n^{1/4}}^{n^{1/4}} |cos^n(s/√n) − e^{−s²/2}| ds.

Using the expansion for the cosine (details omitted) we see that

|cos^n(s/√n) − e^{−s²/2}| ≤ c (s⁴/n) e^{−s²/2}.

Hence,

|I3| ≤ (c/n) ∫_{−∞}^{∞} s⁴ e^{−s²/2} ds = O(1/n).

The error term I3 is the largest of the error terms and indeed can be
as large as c/n.

There is another approach to the LCLT in one dimension
that uses another exact expression,

P{S2n = x} = 2^{−2n} (2n)! / [ (n + x/2)! (n − x/2)! ],   x even,

and then uses Stirling’s formula (with error terms) to eval-
uate the right-hand side. This is easier than our proof,
if one knows Stirling’s formula, but the proof we give is
easier to adapt to higher dimensions and also can be used
for random walks other than the simple walk.

The LCLT implies that

p2n(0) = Cd n^{−d/2} + O(n^{−(d/2)−1}),

where Cd = 2^{1−d} (d/π)^{d/2}. In particular,

∑_{n=0}^∞ pn(0) < ∞ if d ≥ 3 and = ∞ if d ≤ 2;

that is, the expected number of returns to the origin is infinite if d ≤ 2 and
finite for d ≥ 3.

Theorem 4.5 (Pólya). With probability one, simple random walk
in Z1 and Z2 is recurrent. If d ≥ 3, the random walk is transient.

Exercise 4.6. Use Proposition 1.2 to prove this theorem.
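One can watch Pólya’s theorem emerge in simulation. The following Monte Carlo sketch (Python; not from the text, and the horizon N and trial counts are arbitrary choices) estimates the chance of returning to the origin within N steps in d = 1, 2, 3.

import numpy as np

rng = np.random.default_rng(0)

def return_frequency(d, N=10_000, trials=1_000):
    # Fraction of walks that revisit the origin within N steps.
    count = 0
    for _ in range(trials):
        c = rng.integers(0, 2 * d, size=N)          # a uniform direction
        moves = np.zeros((N, d), dtype=np.int64)
        moves[np.arange(N), c % d] = 2 * (c // d) - 1
        path = moves.cumsum(axis=0)
        count += bool((path == 0).all(axis=1).any())
    return count / trials

for d in (1, 2, 3):
    print(d, return_frequency(d))
# d = 1 gives values near 1; d = 2 creeps toward 1 only logarithmically in N;
# d = 3 stays visibly below 1, consistent with transience.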

4.3. Green’s function


If d ≥ 3, simple random walk is transient, and the (whole space)
Green’s function

G(z, w) = ∑_{n=0}^∞ Pz{Sn = w} = ∑_{n=0}^∞ pn(w − z)

is well defined. Note that G(z, w) = G(w, z) = G(w − z) where we
write G(z) for G(0, z). Analysts think of the Green’s function as the
“fundamental solution of the Laplacian”. The discrete analogue of
this viewpoint is the statement in the next proposition. We write
δ(z) for the delta-function in Zd defined by

δ(z) = 1 if z = 0,   δ(z) = 0 if z ≠ 0.

Proposition 4.7. The Green’s function G satisfies

LG(z) = δ(z).
Proof. Using (4.1), we can see that

G(z) = ∑_{n=0}^∞ pn(z) = δ(z) + ∑_{n=1}^∞ pn(z)
     = δ(z) + ∑_{n=1}^∞ (1/(2d)) ∑_{|w−z|=1} p_{n−1}(w)
     = δ(z) + (1/(2d)) ∑_{|w−z|=1} ∑_{n=1}^∞ p_{n−1}(w)
     = δ(z) + (1/(2d)) ∑_{|w−z|=1} G(w)
     = δ(z) + G(z) − LG(z).  □
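The identity LG = δ can be checked by truncating the series defining G. Here is a minimal sketch (Python with NumPy; the box size and horizon are arbitrary choices, and the truncation leaves an error of size p_{N+1}).

import numpy as np

# Truncated Green's function G_N(z) = sum_{n <= N} p_n(z) for d = 3,
# computed by dynamic programming on a finite box with absorbing edges.
M, N = 40, 160                       # box {-M,...,M}^3, N time steps

def shift(a, axis, s):               # shift by one with zero fill
    b = np.zeros_like(a)
    src, dst = [slice(None)] * 3, [slice(None)] * 3
    if s == 1:
        dst[axis], src[axis] = slice(1, None), slice(None, -1)
    else:
        dst[axis], src[axis] = slice(None, -1), slice(1, None)
    b[tuple(dst)] = a[tuple(src)]
    return b

p = np.zeros((2 * M + 1,) * 3)
p[M, M, M] = 1.0
G = p.copy()
for _ in range(N):
    p = sum(shift(p, ax, s) for ax in range(3) for s in (1, -1)) / 6.0
    G += p

def lap(g, i, j, k):                 # LG(z) = G(z) - average over neighbors
    nb = (g[i+1,j,k] + g[i-1,j,k] + g[i,j+1,k] +
          g[i,j-1,k] + g[i,j,k+1] + g[i,j,k-1])
    return g[i, j, k] - nb / 6.0

print(lap(G, M, M, M))        # close to 1 = delta(0)
print(lap(G, M + 1, M, M))    # close to 0 (error of order p_{N+1})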

We will give the asymptotics of the Green’s function as |x| → ∞.


This can be deduced from local central limit theorems although we
would need a stronger version than we have proved here. For this
reason, we will not give a complete proof of the asymptotics, but we
will show how the leading order term arises from a computation using
the LCLT. For ease let us assume that x ∈ Zd \ {0} and that the sum
of the components of x is even. We start with

G(x) = ∑_{n=1}^∞ p2n(x) ∼ ∑_{n=1}^∞ 2 p̄2n(x) ∼ ∑_{n=1}^∞ p̄n(x) = (d/2π)^{d/2} ∑_{n=1}^∞ n^{−d/2} e^{−y/n},

where y = d|x|²/2. We write the right-hand side as

(4.6)    (d |x|^{2−d} / (2π^{d/2})) [ (1/y) ∑_{n=1}^∞ (n/y)^{−d/2} e^{−y/n} ].

We write it this way because the quantity in the brackets is a Riemann
sum approximation, using intervals of length y^{−1}, of the integral

∫_0^∞ t^{−d/2} e^{−1/t} dt.
To compute the integral we use the substitution t = 1/s, dt = −s^{−2} ds,
to make it

∫_0^∞ s^{(d/2)−2} e^{−s} ds = Γ(d/2 − 1) = (2/(d − 2)) Γ(d/2),

where

Γ(r) = ∫_0^∞ z^{r−1} e^{−z} dz

is the Gamma function, which satisfies r Γ(r) = Γ(r + 1). Combining
with (4.6) we see that as |x| → ∞,

G(x) ∼ (d Γ(d/2) / ((d − 2) π^{d/2})) |x|^{2−d}.

By more careful analysis, which we omit, one can give a sharp bound
on the error in the above asymptotics.

Proposition 4.8. If d ≥ 3, then as |x| → ∞,

G(x) ∼ βd |x|^{2−d},   where   βd = d Γ(d/2) / ((d − 2) π^{d/2}).

In fact,

(4.7)    G(x) = βd |x|^{2−d} + O(|x|^{−d}).

The statement of this proposition uses a convenient shorthand.
The conclusion can be written more precisely as: there exists c < ∞
such that for all x,

|G(x) − βd |x|^{2−d}| ≤ c / |x|^d.

Writing things like this is a bit bulky so we will use the O(·) and
o(·) notation. It is important to remember that there is an implicit
constant in this notation and that this constant is uniform over all
x ∈ Zd.

It is useful to know what is worth memorizing and what


is not so critical. In the case of the last proposition, the
exponent 2 − d is worth committing to memory but not
the value of the constant βd . The function f (x) = |x|2−d
is harmonic in Rd \ {0} and is the fundamental solution
of the continuous Laplacian.

One way to remember the exponent is to use the


following heuristic derivation. If Sn = x then we would
expect n to be of order |x|2 . So there are about |x|2 pos-
sible times that contribute to the sum. For each of these
values, the probability of being at x is of order |x|−d .
Therefore the Green’s function is of order |x|2 |x|−d .

The Green’s function as defined above does not exist if d = 2 be-
cause the simple random walk is recurrent. However, there is another
quantity that has many of the same properties, the potential kernel.
Some authors refer to this as the Green’s function.

Definition 4.9. If d = 2 the potential kernel is defined by

a(x) = lim_{n→∞} [ ∑_{j=0}^n pj(0) − ∑_{j=0}^n pj(x) ].

One has to be careful with this definition. We cannot naively
write the limit as

(4.8)    ∑_{j=0}^∞ pj(0) − ∑_{j=0}^∞ pj(x),

since both of these sums are infinite.


Let us show why the limit exists. We will do the case where
x = (x1, x2) with x1 + x2 even. We write

(4.9)    a(x) = lim_{n→∞} ∑_{j=0}^n [p2j(0) − p2j(x)].

Using the local central limit theorem, we can write

p2n(0) − p2n(x) = (C2/n) [1 − e^{−|x|²/n}] + O(n^{−2}).

If we fix x and let n → ∞, we see that

1 − e^{−|x|²/n} = |x|²/n + O(|x|⁴/n²).

Hence there exists a constant cx such that for all n,

|pn(0) − pn(x)| ≤ cx / n².
This shows that the sum in (4.9) is absolutely convergent and we can
write

a(x) = ∑_{j=0}^∞ [pj(0) − pj(x)].

If x1 + x2 is odd, we can similarly write

a(x) = ∑_{j=0}^∞ [pj(0) − p_{j+1}(x)].

The next proposition shows that this is the fundamental solution


of the Laplacian with d = 2 although we get a change in sign.
Proposition 4.10. If d = 2, a(0) = 0, and a(y) = 1 if |y| = 1.
Moreover, for all x ∈ Z2 , La(x) = −δ(x).
Exercise 4.11. Prove Proposition 4.10.

We could have defined a for d ≥ 3 using the same defini-


tion, but in that case the naive expression (4.8) is valid
and
a(x) = G(0) − G(x).
It is more convenient to use G(x) rather than a(x).

We now consider the asymptotics of the potential kernel in Z2 as
|x| → ∞. We will consider the case where x1 + x2 is even and let
y = |x|² so that

p2n(x) = (1/(πn)) e^{−y/n} + O(n^{−2}).

We will ignore the error term for the moment and consider

∑_{n=1}^∞ (1/n) [1 − e^{−y/n}].

Note that

∑_{n≥y} (1/n) [1 − e^{−y/n}] ≤ c ∑_{n≥y} y/n² ≤ c0,

∑_{n≤y} (1/n) e^{−y/n} = (1/y) ∑_{n≤y} (n/y)^{−1} e^{−y/n} ∼ ∫_0^1 (e^{−1/t}/t) dt ≤ c0.

Therefore,

a(x) = O(1) + (1/π) ∑_{n≤y} 1/n = (1/π) log y + O(1) = (2/π) log |x| + O(1).

The next proposition gives a more precise version. As in the case for
the Green’s function for d ≥ 3, this can be proved from a sufficiently
strong LCLT, but we will not prove it here.
Proposition 4.12. If d = 2, as |x| → ∞,

(4.10)    a(x) = (2/π) log |x| + k0 + O(|x|^{−2}),

where

k0 = (1/π) log 8 + (2/π) γ

and γ is Euler’s constant.

Euler’s constant is defined by

γ = lim_{n→∞} [ ∑_{j=1}^n (1/j) − log n ].

The actual value (1/π) log 8 + (2/π) γ is not so important,
just the fact that there exists k0 such that

a(x) = (2/π) log |x| + k0 + O(|x|^{−2}).

4.4. Harmonic functions


The study of simple random walk is very closely related to the study
of harmonic functions on the lattice Zd. A good starting point for
understanding harmonic functions is the sharp estimates of the Green’s
function and potential kernel, (4.7) and (4.10). We will assume these
even though we have not given complete proofs.
Suppose A ⊂ Zd with ∂A ̸= ∅. Let TA = min{n : Sn ̸∈ A}.
Recall that the Poisson kernel HA (z, w) for z ∈ A, w ∈ ∂A, is defined
by
HA (z, w) = Pz {STA = w}.

For fixed z, this gives a probability measure on ∂A provided that


Pz {TA < ∞} = 1. This will always be true if d ≤ 2 or if A is finite.
There are examples with d ≥ 3 such that Pz {TA < ∞} < 1, for
example, if Zd \ A is finite. The next proposition is a particular case
of Proposition 1.10 so we do not need to prove it again.

Proposition 4.13. Suppose A ⊂ Zd is such that for each x ∈ A,
Px{TA < ∞} = 1. Suppose F : ∂A → R is a bounded function.
Then there exists a unique bounded function f : Ā → R satisfying

Lf(x) = 0, x ∈ A,

f(x) = F(x), x ∈ ∂A.

It is given by

(4.11)    f(x) = Ex[F(S_{TA})] = ∑_{y∈∂A} F(y) HA(x, y).

Exercise 4.14. Suppose d ≥ 3 and Zd \ A is finite. Show that (4.11)
gives the unique function that is harmonic in A, equals F on ∂A,
and satisfies the extra condition

lim_{|x|→∞} f(x) = 0.

If Px {TA < ∞} < 1 for some x we can get a similar result


by adding the point “∞” to ∂A and setting
HA (z, ∞) = Pz {TA = ∞}.
In this case we must also give the boundary value F (∞).
See Exercise 4.31 for a proof of this.

Recall that Bn is the discrete ball of radius n about the origin
and that for z ∈ ∂Bn, n ≤ |z| < n + 1. Now (4.7) and (4.10) imply
that for all n and all z ∈ ∂Bn,

G(z) = βd n^{2−d} + O(n^{1−d}),   d ≥ 3,

a(z) = (2/π) log n + k0 + O(n^{−1}),   d = 2.
Here we do not use the full force of the asymptotics of the Green’s
function. Although we know G(z) up to an error of O(|z|^{−d}), there is an
error of order n^{1−d} when we replace |z| with n, since

|z|^{2−d} = n^{2−d} + O(n^{1−d}),   z ∈ ∂Bn,

log |z| = log n + O(n^{−1}),   z ∈ ∂Bn.
The next proposition expresses the Green’s function GA on a finite set
in terms of the whole space Green’s function or the potential kernel.

Proposition 4.15. Suppose A ⊂ Zd is finite. Then for all z, w ∈ A,

• If d ≥ 3,

GA(z, w) = G(z, w) − ∑_{y∈∂A} HA(z, y) G(y, w)
         = G(w − z) − ∑_{y∈∂A} HA(z, y) G(w − y).

• If d = 2,

GA(z, w) = −a(w − z) + ∑_{y∈∂A} HA(z, y) a(w − y).

Proof. Without loss of generality we may assume that z = 0 ∈ A
and let T = TA. For d ≥ 3, we write the total number of visits to w
as

∑_{j=0}^∞ 1{Sj = w} = ∑_{j=0}^{T−1} 1{Sj = w} + ∑_{j=T}^∞ 1{Sj = w}.

Taking expectations, we get

G(w) = GA(0, w) + ∑_{y∈∂A} HA(0, y) G(y, w).

A similar proof can be given for d = 2, but it takes more work
because of the recurrence of the random walk. We give a different
proof. Without loss of generality assume that w = 0 and let

g(z) = −a(−z) + ∑_{y∈∂A} HA(z, y) a(−y).

Note that if z ∈ ∂A, then g(z) = 0. Also, if z ∈ A,

Lg(z) = δ(z).

The unique function satisfying this is GA(z, 0), and hence g(z) = GA(z, 0). □

As a corollary, we estimate the expected number of returns to the
origin before leaving the ball Bn by a random walker starting at the
origin.

Proposition 4.16.
• If d ≥ 3,

GBn(0, 0) = G(0) − O(n^{2−d}).

• If d = 2,

GBn(0, 0) = (2/π) log n + k0 + O(n^{−1}),

where k0 is as in (4.10).
• If d = 2 and x ∈ Bn,

(4.12)    GBn(x, 0) = (2/π) log(n/|x|) + O(n^{−1}) + O(|x|^{−2}).
Exercise 4.17. Use Proposition 4.15 to prove the last proposition.

Proposition 4.18. If d ≥ 3 and |x| ≥ n, then the probability that a
random walk starting at x ever enters Bn equals

(n/|x|)^{d−2} [1 + O(n^{−1})].

Proof. Let A = Zd \ Bn and let q = q(x, n) be this probability. Note
that

q = ∑_{z∈∂A} HA(x, z).

The (whole space) Green’s function G(·) is a bounded function that
is harmonic in A and goes to zero as x → ∞. Therefore (see Exercise
4.14),

G(x) = ∑_{z∈∂A} HA(x, z) G(z).

We know that G(x) = βd |x|^{2−d} + O(|x|^{−d}), and since |x| ≥ n,

G(x) = βd |x|^{2−d} [1 + O(n^{−2})].

For z ∈ ∂A,

G(z) = βd n^{2−d} + O(n^{1−d}) = βd n^{2−d} [1 + O(n^{−1})],

and hence,

∑_{z∈∂A} HA(x, z) G(z) = q βd n^{2−d} [1 + O(n^{−1})].

Therefore,

q = (|x|^{2−d} / n^{2−d}) [1 + O(n^{−1})].  □

Proposition 4.19. Suppose d = 2, and let q(n, x) be the probability
that a simple random walk starting at x ∈ Z2 leaves Bn before reaching
the origin. Then for |x| < n,

q(n, x) = a(x) / [ (2/π) log n + k0 + O(n^{−1}) ].

In particular,

(4.13)    lim_{n→∞} (2/π) (log n) q(n, x) = a(x).

Proof. Let A = Bn \ {0}. The potential kernel is a harmonic function
on A with a(0) = 0, and hence

a(x) = ∑_{z∈∂A} a(z) HA(x, z) = ∑_{z∈∂Bn} a(z) HA(x, z).

For z ∈ ∂Bn,

a(z) = (2/π) log n + k0 + O(n^{−1}).

The probability that we want is

∑_{z∈∂Bn} HA(x, z) = a(x) / [ (2/π) log n + k0 + O(n^{−1}) ].  □

Exercise 4.20. Show that if d = 2 and m < |x| < n, then the
probability that a random walk starting at x enters Bm before leaving
Bn equals

[ log n − log |x| + O(|x|^{−1}) ] / [ log n − log m + O(m^{−1}) ].

Hint: The potential kernel a(·) is a harmonic function in Bn \ Bm.
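The two-radius formula in the last exercise can also be seen in simulation. The Monte Carlo sketch below (Python; all parameters are arbitrary choices) estimates the probability directly; the O(·) correction terms account for the small discrepancy.

import numpy as np

rng = np.random.default_rng(2)

# d = 2: starting from x with m < |x| < n, compare the chance of entering
# B_m before leaving B_n with (log n - log|x|)/(log n - log m).
m, n, x = 4, 40, (12, 0)
hits, trials = 0, 2_000
for _ in range(trials):
    z = list(x)
    while True:
        r2 = z[0] * z[0] + z[1] * z[1]
        if r2 < m * m:                      # entered B_m
            hits += 1
            break
        if r2 >= n * n:                     # left B_n first
            break
        z[rng.integers(2)] += rng.choice((-1, 1))
print(hits / trials,
      (np.log(n) - np.log(np.hypot(*x))) / (np.log(n) - np.log(m)))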

The next proposition gives a difference estimate for harmonic
functions. Difference estimates are discrete analogues of estimates of
derivatives. We will use the following estimates, which follow imme-
diately from (4.7) and (4.10): if x, y ∈ Zd with |x − y| = 1,
then

(4.14)    |G(x) − G(y)| ≤ c |x|^{1−d},   d ≥ 3,
          |a(x) − a(y)| ≤ c |x|^{−1},   d = 2.

If f is a function on a countable set V we write

∥f∥∞ = sup{|f(x)| : x ∈ V}.

If V is finite, the supremum is the same as the maximum of |f|. We
also write

dist(x, ∂A) = min_{y∈∂A} |x − y|.

Proposition 4.21. There exists c = c(d) < ∞ such that if f : Ā → R
is harmonic in A and x, y ∈ A with |x − y| = 1 and dist(x, ∂A) ≥ n,
then

|f(x) − f(y)| ≤ (c/n) ∥f∥∞.

It is important to note the order of quantifiers in the proposition.
There is a single constant c that works for all subsets A ⊂ Zd and all
harmonic functions on A.

Proof. Without loss of generality we will assume that x = 0, and
since Bn ⊂ A, we can assume A = Bn.
We will show that for every |y| = 1 and z ∈ ∂A,

(4.15)    HA(y, z) = HA(0, z) [1 + O(n^{−1})].

We recall that this is shorthand for the statement that there exists c
such that for every n > 0, every z ∈ ∂A, and every |y| = 1,

|HA(0, z) − HA(y, z)| ≤ (c/n) HA(0, z).

To see that (4.15) suffices, we can use (4.11) to write

|f(0) − f(y)| ≤ ∑_{z∈∂A} |HA(0, z) − HA(y, z)| |f(z)|
            ≤ (c/n) ∥f∥∞ ∑_{z∈∂A} HA(0, z) = (c/n) ∥f∥∞.
Let us fix z ∈ ∂A and write h(x) = HA(x, z). We will use a
technique known as a last-exit decomposition. Let τ = τn = min{j :
Sj ∉ Bn}, and let V = ∂B_{n/2}. For w ∈ V, let q(w) be the probability
that a random walker starting at w does not return to V before leaving
Bn and that it leaves Bn at z,

q(w) = q_{n,z}(w) = Pw{Sτ = z; Sj ∉ V for j = 1, 2, . . . , τ}.

Then we claim that for all x ∈ B_{n/2},

(4.16)    h(x) = ∑_{w∈V} GA(x, w) q(w).

To see this we focus on the last visit to V by the random walker
before leaving Bn. Note that if we start in B_{n/2}, we must visit V
before leaving Bn. Let ρ be the largest k with Sk ∈ V and k < τ.
Then using the law of total probability,

h(x) = ∑_{k=0}^∞ ∑_{w∈V} Px{Sτ = z, ρ = k, Sk = w}.

Note that

Px{Sτ = z, ρ = k, Sk = w} = Px{Sk = w, k < τ} Px{Sτ = z, ρ = k | Sk = w, k < τ}.

Using the Markov property we can see that

Px{Sτ = z, ρ = k | Sk = w, k < τ} = q(w).

Therefore,

h(x) = Px{Sτ = z} = ∑_{w∈V} q(w) ∑_{k=0}^∞ Px{Sk = w, k < τ} = ∑_{w∈V} q(w) GA(x, w).

Our next step is to claim that for all w ∈ V, we have

(4.17)    GA(0, w) = GA(y, w) [1 + O(n^{−1})].

We will show this in the case d ≥ 3; the d = 2 case is done similarly.
Proposition 4.15 gives

GA(x, w) = GA(w, x) = G(w, x) − ∑_{ζ∈∂Bn} HA(w, ζ) G(ζ, x).
Using this and (4.7), we see that for w ∈ V and x ∈ {0, y},

GA(x, w) = [2^{d−2} − 1] βd n^{2−d} + O(n^{1−d}).

Also (4.14) gives

|G(ζ, 0) − G(ζ, y)| ≤ c n^{1−d},   |ζ| ≥ n/2.

Combining these two estimates gives (4.17).
Finally we can write

h(0) = ∑_{w∈V} q(w) GA(0, w) = ∑_{w∈V} q(w) GA(y, w) [1 + O(n^{−1})] = h(y) [1 + O(n^{−1})].  □


Exercise 4.22. Suppose f : Zd → R is harmonic.

(1) Show that if f is bounded then f is constant.
(2) More generally, show that if

lim_{|x|→∞} |f(x)| / |x| = 0,

then f is constant.

For nonnegative functions we get another important result that


says that values of positive functions in the interior are comparable.
The key point in this lemma is that the constant Cr can be chosen so
that the inequality holds for all n and all positive harmonic functions
in Bn .

Proposition 4.23 (Harnack inequality). For every 0 < r < 1, there
exists Cr = Cr(d) < ∞ such that if f : B̄n → [0, ∞) is harmonic in
Bn, then for all |x|, |y| < rn, f(x) ≤ Cr f(y).

Proof. Let cr = c/(1 − r) where c is from the previous proposition.
Then if |x|, |y| < rn with |x − y| = 1, (4.15) gives

HBn(x, w) ≤ HBn(y, w) [1 + cr/n].

Since f is nonnegative,

f(x) = ∑_{z∈∂Bn} HBn(x, z) f(z) ≤ ∑_{z∈∂Bn} HBn(y, z) [1 + cr/n] f(z) = f(y) [1 + cr/n].

If |x|, |y| ≤ rn, then we can connect x to y by a path staying in Brn
of at most 2r√d n steps. Therefore, by repeated application of the
above inequality we get

f(x) ≤ [1 + cr/n]^{2r√d n} f(y) ≤ Cr f(y),

where Cr = exp{2√d cr r}. □
Proposition 4.24. There exists c < ∞ such that if f : Ā → [0, ∞)
is harmonic in A then the following holds. Suppose ω is a nearest
neighbor path from z to w in A of length k with dist(ω, ∂A) ≥ N.
Then,

f(z) ≤ f(w) exp{c k/N}.
Exercise 4.25.
(1) Check that the proof of Proposition 4.23 extends to prove
the last proposition.
(2) Use the last proposition to show the following. There exists
c < ∞ such that if A = Zd \ Bn and f : Ā → [0, ∞) is
harmonic in A, then for all z, w ∈ ∂B_{2n},

f(z) ≤ c f(w).
Exercise 4.26. Show that there exists c < ∞ such that the following
is true for every f : B̄n → R that is harmonic in Bn.
• For every y ∈ Bn,

|f(y) − f(0)| ≤ c (|y|/n) ∥f∥∞.

• If f ≥ 0 on B̄n, then for every y ∈ B_{n/2},

(4.18)    |f(y) − f(0)| ≤ c (|y|/n) f(0).

Hint: The first is a consequence of the difference estimate and the


second uses the Harnack inequality as well.

Exercise 4.27. Show that there exist α > 0 and c < ∞ such that
the following is true. Let A = Zd \ Bn and z ∈ ∂A. Then if r ≥ 2 and
x, y ∈ Zd \ B_{rn}, then

(4.19)    |HA(x, z) − HA(y, z)| ≤ (c/r^α) HA(x, z).

Hint:
(1) Let Vk = ∂B_{2^k n} for positive integers k. Explain why it
suffices to prove (4.19) for x, y ∈ Vk for all k.
(2) Let

λk = max{ |HA(x, z) − HA(y, z)| / HA(x, z) : x, y ∈ Vk }.

Show that there exists ρ < 1 (independent of z, n, k) such
that if k ≥ 1,

λ_{k+1} ≤ ρ λk.

Hint: Use Exercise 4.25.

Exercise 4.28. Suppose n ≥ 2m and A ⊂ Bm. Let τA and τn denote
the first time that a random walk hits A and ∂Bn, respectively. Let
z ∈ ∂Bn. If x ∈ ∂B_{2m}, define ϵA(x, z) by

Px{S(τn) = z | τn < τA} = P{S(τn) = z} [1 + ϵA(x, z)].

Show that there exists c = c(d) < ∞ such that for all n ≥ 2m,

|ϵA(x, z)| ≤ c (m/n),   d ≥ 3,
|ϵA(x, z)| ≤ c (m/n) log(n/m),   d = 2.

Hint: Use (4.18) to show that

Px{S(τn) = z | τn > τA} = P{S(τn) = z} [1 + O(m/n)].
We will use our work so far to show the existence of harmonic
measure from infinity. We start by giving the definition and then we
will prove a proposition that shows that the definition is valid.

Definition 4.29. Suppose A ⊂ Zd, d ≥ 2, is finite and let

T = TA = min{j ≥ 1 : Sj ∈ A}.

Then the harmonic measure (from infinity) of A is defined by

hmA(x) = lim_{w→∞} Pw{ST = x | T < ∞}.

If d = 2, then Pw{T < ∞} = 1 and we can write simply

hmA(x) = lim_{w→∞} Pw{ST = x}.

We will now establish the existence of the limit. Before doing so, we
note that the limit does not exist for d = 1. If we consider the set
A = {0, 1}, then the probability that a random walk “from infinity”
first visits A at 0 depends on whether the walker is coming from the
right-hand side or the left-hand side. The proposition below shows
that in more than one dimension, the hitting probability is the same
(in the limit) regardless of the direction one is coming from.

Proposition 4.30. If A ⊂ Zd, d ≥ 2, is finite, then for each x ∈ A,
the limit

hmA(x) = lim_{w→∞} Pw{ST = x | T < ∞}

exists and is also given by

lim_{n→∞} Px{τn < T} / ∑_{y∈A} Py{τn < T},

where

τn = min{j ≥ 1 : Sj ∈ ∂Bn}.

Moreover, if A ⊂ Bm and |w| ≥ 2m, then

Pw{ST = x | T < ∞} = hmA(x) [1 + O(ϵ)],

where ϵ = ϵ(m, w) = m/|w| for d ≥ 3 and ϵ = (m/|w|) log(|w|/m)
for d = 2.

Proof. For x ∈ A, z ∈ ∂Bn, let

rn(x, z) = Px{τn < T, S(τn) = z}.

By reversing paths (check this!) we also see that

rn(x, z) = Pz{τn > T, S(T) = x}.

By Exercise 4.28,

rn(x, z) = Px{τn < T} P0{S(τn) = z} [1 + O(ϵn)],

where ϵn = m/n if d ≥ 3 and ϵn = (m/n) log(n/m) if d = 2. We
now use a last-exit decomposition for |w| > n, focusing on the last
visit to ∂Bn before reaching the set A. More precisely, arguing as in
(4.16), we get for |w| > n,

Pw{S(TA) = x} = ∑_{z∈∂Bn} G_{Zd\A}(w, z) rn(x, z)
             = ∑_{z∈∂Bn} G_{Zd\A}(w, z) Px{τn < T} P0{S(τn) = z} [1 + O(ϵn)]
             = Jn(w, A) Px{τn < T} [1 + O(ϵn)],

where

Jn(w, A) = ∑_{z∈∂Bn} G_{Zd\A}(w, z) HBn(0, z).

The term Jn(w, A) is independent of x, and hence if we set

(4.20)    hn(x) = Px{τn < T} / ∑_{y∈A} Py{τn < T},

then we can write the above as

Pw{S(TA) = x | TA < ∞} = hn(x) [1 + O(ϵn)].  □
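The ratio (4.20) gives a practical way to estimate harmonic measure without simulating from far away. A Monte Carlo sketch (Python; the set A, the radius n, and the trial count are arbitrary choices, not from the text):

import numpy as np

rng = np.random.default_rng(3)

# Approximate hm_A(x) by P^x{tau_n < T} / sum_y P^y{tau_n < T} for a
# finite n, in d = 2, with A three collinear points; the middle point
# should carry the least harmonic measure.
A = {(0, 0), (1, 0), (2, 0)}
n, trials = 50, 2_000

def escape_estimate(start):
    esc = 0
    for _ in range(trials):
        z = list(start)
        while True:
            z[rng.integers(2)] += rng.choice((-1, 1))
            if tuple(z) in A:                       # returned to A first
                break
            if z[0] * z[0] + z[1] * z[1] >= n * n:  # reached dB_n first
                esc += 1
                break
    return esc / trials

p = {x: escape_estimate(x) for x in A}
total = sum(p.values())
print({x: round(px / total, 3) for x, px in p.items()})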

Exercise 4.31. Suppose A ⊂ Zd (d ≥ 2) with Zd \ A finite. Show
that if f : Ā → R is bounded and harmonic on A, then the limit

L = lim_{z→∞} f(z)

exists.
Hint:
(1) It suffices to prove the result when ∥f∥∞ = 1.
(2) Let

fˆ(z) = ∑_{x∈∂A} HA(z, x) f(x).

Let fˆ(∞) = 0 if d ≥ 3 and for d = 2

fˆ(∞) = ∑_{x∈∂A} hm_{∂A}(x) f(x).

Show that

lim_{z→∞} fˆ(z) = fˆ(∞).

(3) Let g = f − fˆ and note that this satisfies the hypotheses
with g ≡ 0 on ∂A.
(4) Use Exercise 4.28 to show that

g(z) = C HA(z, ∞)

for some C.

4.5. Capacity for d ≥ 3


If A is a finite subset of Zd with d ≥ 3, there are various ways to
describe the size of A. One obvious way is the number of points,
but this does not distinguish n points that are close together from n
points spread apart. We will consider another notion called capacity
which is related to the probability that a simple random walker ever
visits the set. We will start with the definition and then we will give
this interpretation. Let

TA = min{j ≥ 1 : Sj ∈ A},

where we set TA = ∞ if Sj ∉ A for all j ≥ 1. Note that TA ≥ 1 even
if we start in A since we are considering j ≥ 1. We let

EsA(x) = Px{TA = ∞}

and call EsA(x) the escape probability.

Definition 4.32. If d ≥ 3 and A ⊂ Zd is finite, the capacity of A is
defined by

cap(A) = ∑_{z∈A} EsA(z).

In this language we can write (4.20) for d ≥ 3 as

hmA(x) = EsA(x) / cap(A).
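Since EsA(z) is a probability, cap(A) is directly accessible by simulation when d ≥ 3. A sketch (Python; the cutoff and trial counts are arbitrary choices, and replacing "never returns" by "no return within N steps" biases the answer slightly upward):

import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo estimate of cap(A) in d = 3: estimate Es_A(z) for each
# z in A and sum.
A = {(0, 0, 0), (1, 0, 0)}
N, trials = 2_000, 2_000

def escape_probability(start):
    esc = 0
    for _ in range(trials):
        z = list(start)
        returned = False
        for _ in range(N):
            z[rng.integers(3)] += rng.choice((-1, 1))
            if tuple(z) in A:
                returned = True
                break
        esc += not returned
    return esc / trials

print(sum(escape_probability(z) for z in A))   # estimate of cap(A)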

We recall that the Green’s function satisfies

G(x) = βd |x|^{2−d} + O(|x|^{−d}).

Exercise 4.33. Let Sj, j ≥ 0, denote simple random walk in Zd,
d ≥ 3, starting at the origin, and let z be a nearest neighbor of the
origin. Let p denote the probability that the random walk returns to
the origin. We know that

G(0, 0) = 1/(1 − p).

(1) Show that the probability of ever visiting z is p.
(2) Let T = min{j ≥ 1 : Sj ∈ {0, z}} with T = ∞ if no such j
exists. Show that

P{ST = 0} = P{ST = z} = p/(1 + p).

(3) Show that

G_{Zd\{0}}(z, z) = 1 + p,

and hence

F_{{0,z}}(Zd) = G(0, 0) G_{Zd\{0}}(z, z) = (1 + p)/(1 − p).

(4) Show that

cap({0, z}) = 2 (1 − p)/(1 + p).
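Part (4) can be checked against the Monte Carlo estimate of cap({0, z}) in the sketch above. Another sketch (Python; parameters arbitrary; the cutoff misses very late returns and so biases p slightly downward):

import numpy as np

rng = np.random.default_rng(5)

# d = 3: estimate the return probability p by simulation and compare
# 2(1-p)/(1+p) with the capacity estimate above.
N, trials = 2_000, 3_000
returns = 0
for _ in range(trials):
    z = [0, 0, 0]
    for _ in range(N):
        z[rng.integers(3)] += rng.choice((-1, 1))
        if z == [0, 0, 0]:
            returns += 1
            break
p = returns / trials
print(p, 2 * (1 - p) / (1 + p))   # p is near 0.34 in d = 3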
Proposition 4.34. If A ⊂ Zd, d ≥ 3, is a finite set, then

(4.21)    cap(A) = lim_{x→∞} βd^{−1} |x|^{d−2} Px{TA < ∞}.

More precisely, if A ⊂ Bn and |x| ≥ 2n,

Px{TA < ∞} = βd |x|^{2−d} cap(A) [1 + O(n/|x|)].

Proof. We use a last-exit decomposition. Suppose we start a simple
random walk at x ∉ A and let

σ = max{k < ∞ : Sk ∈ A},

with σ = ∞ if there is no such k. Then

Px{TA < ∞} = Px{σ < ∞}
           = ∑_{k=1}^∞ ∑_{z∈A} Px{σ = k, Sk = z}
           = ∑_{k=1}^∞ ∑_{z∈A} Px{Sk = z} Px{σ = k | Sk = z}
           = ∑_{z∈A} ∑_{k=1}^∞ Px{Sk = z} EsA(z)
           = ∑_{z∈A} EsA(z) G(x, z)
           = ∑_{z∈A} EsA(z) βd |x|^{2−d} [1 + O(n/|x|)]
           = βd |x|^{2−d} cap(A) [1 + O(n/|x|)].  □

Proposition 4.35. If A, B ⊂ Zd, d ≥ 3, are finite, then

(4.22)    cap(A ∪ B) ≤ cap(A) + cap(B) − cap(A ∩ B).

The inequality (4.22) is what characterizes capacities as opposed
to other “measures” of size. Recall that probabilities, and more gen-
erally finite measures, satisfy

P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2),

so the capacity condition is weaker than the probability (measure)
condition.

Proof. Let x ∈ Zd, start a random walk at x, and let EV denote the
event that the random walk visits V. Note that E_{A∪B} = EA ∪ EB
and E_{A∩B} ⊂ EA ∩ EB. Since it is possible for the walker to visit
both A and B without visiting A ∩ B, it is not always true that
E_{A∩B} = EA ∩ EB. Then,

Px(E_{A∪B}) = Px(EA) + Px(EB) − Px(EA ∩ EB)
           ≤ Px(EA) + Px(EB) − Px(E_{A∩B}).

If we multiply both sides by βd^{−1} |x|^{d−2} and take the limit using (4.21),
we get the result. □

Proposition 4.36. If A = Bn,

cap(A) = βd^{−1} n^{d−2} + O(n^{d−3}).

Proof. By Proposition 4.18,

Px{TA < ∞} = (n/|x|)^{d−2} [1 + O(1/n)].

Therefore,

cap(A) = lim_{x→∞} βd^{−1} |x|^{d−2} Px{TA < ∞} = βd^{−1} n^{d−2} [1 + O(1/n)].  □

Since a transient random walk visits each point only a finite num-
ber of times, it also visits every finite set only finitely often. What
about infinite sets?

Exercise 4.37. Let A ⊂ Zd , and let


g(x) = gA (x) = Px {random walk visits A infinitely often}.
Show that g ≡ 0 or g ≡ 1.
Hint: Show that g is harmonic and use Exercise 4.22 to conclude
that g is constant. Now assume that g ≡ q ∈ (0, 1) and derive a
contradiction.

Definition 4.38. A subset A ⊂ Zd is called transient if and only


if with probability one the simple random walk visits A only finitely
many times. Otherwise A is called recurrent.

We can construct infinite transient sets by spacing points far
apart. Let {x1, x2, . . .} be a sequence of points with

∑_{k=1}^∞ |xk|^{2−d} < ∞.

Let S be a simple random walk starting at the origin and let

V = ∑_{n=0}^∞ 1{Sn ∈ A} = ∑_{n=0}^∞ ∑_{j=1}^∞ 1{Sn = xj}

be the number of times that the random walk visits A = {x1, x2, . . .}. Then,

E[V] = ∑_{j=1}^∞ ∑_{n=0}^∞ P{Sn = xj} = ∑_{j=1}^∞ G(xj) < ∞,

since G(xj) ≍ |xj|^{2−d}. Hence, P{V < ∞} = 1.


One can ask the converse: if E[V] = ∞, is it true that P{V =
∞} = 1? The answer turns out to be no. Let us do a construction.
Set b = 1 − 1/d and let A = ∪_{n=1}^∞ An where An is the discrete ball
of radius rn = 2^{nb} centered at the point zn = (2^n, 0, 0, . . . , 0). The
number of elements of An is comparable to rn^d = 2^{n(d−1)} and

∑_{x∈A} G(x) = ∑_{n=1}^∞ ∑_{x∈An} G(x) ≍ ∑_{n=1}^∞ 2^{n(2−d)} rn^d ≍ ∑_{n=1}^∞ 2^n = ∞.

Also, Proposition 4.34 shows that cap(An) ≍ rn^{d−2} and hence the
probability of visiting An is comparable to

2^{−n(d−2)} rn^{d−2} ≍ 2^{n(d−2)(b−1)} = 2^{−n(d−2)/d}.

Therefore,

∑_{n=1}^∞ Px{random walk visits An} < ∞,

so by the Borel–Cantelli lemma the walk visits only finitely many of
the An, and hence P{V < ∞} = 1.

We used the word “comparable” and the notation ≍ in


the last paragraph. If an , bn are two sequences of pos-
itive numbers, we say that an and bn are comparable,
written an ≍ bn , if there exists C < ∞ such that for all
n, C −1 an ≤ bn ≤ C an .

We will give a criterion to determine whether or not a set is
transient. We start with an exercise that we will use. We let A ⊂
Zd, d ≥ 3, and An = A ∩ {z : 2^n ≤ |z| < 2^{n+1}}.

Exercise 4.39. Show that there exist 0 < c1 < c2 < ∞ such that for
every A and every n, if |x| ≤ 2^{n−1} or 2^{n+2} ≤ |x| ≤ 2^{n+3}, then

c1 cap(An) ≤ 2^{n(d−2)} Px{T_{An} < ∞} ≤ c2 cap(An).

Proposition 4.40 (Wiener’s Test). Let A ⊂ Zd, d ≥ 3, and let

An = A ∩ {z : 2^n ≤ |z| < 2^{n+1}}.

Then the set A is transient for random walk if and only if

(4.23)    ∑_{n=1}^∞ 2^{n(2−d)} cap(An) < ∞.

Proof. Let qn = 2^{n(2−d)} cap(An) ≍ P{T_{An} < ∞}, and let

Y = ∑_{n=1}^∞ 1{T_{An} < ∞}

denote the number of sets A1, A2, . . . that the random walk visits.
The condition (4.23) is equivalent to the condition E[Y] < ∞. If
E[Y] < ∞, then with probability one Y is finite and hence the set A
is transient. This gives one direction.
To finish we need to show that if E[Y] = ∞, then P{Y = ∞} = 1.
Using Exercise 4.37, it suffices to show that P{Y = ∞} > 0. Assume
that the sum in (4.23) is infinite. Then at least one of

∑_{n=1}^∞ q_{2n},   ∑_{n=1}^∞ q_{2n−1}

is infinite. We will assume the first is infinite; essentially the same
argument holds if the second sum is infinite. Let En be the event that
the random walk visits A_{2n}. Then using the exercises immediately
above, we get the relation

P(En ∩ Em) ≤ c P(En) P(Em)

for some c. To see that this suffices, we use the second moment
method, see Proposition A.4. □
Proposition 4.41. Let Sn be a simple random walk in Zd, let
A = {Sj : j = 0, 1, 2, . . .} be the set of points visited by the path, and let Â
be the set of points visited by the loop erasure of the path. Then with
probability one,

• If d ≥ 5, A and Â are transient sets.
• If d ≤ 4, A and Â are recurrent sets.

Exercise 4.42. Show that

∑_{x∈Zd\{0}} |x|^{−r} < ∞

if and only if r > d.

Proof. We will only prove the result for A; the result for Â is similar
but requires some more work. Let

Y = ∑_{x∈A} G(x) = ∑_{x∈Zd} 1{x ∈ A} G(x),

which is a random variable since A is a random set. Note that

E[Y] = ∑_{x∈Zd} P{x ∈ A} G(x) = ∑_{x∈Zd} G(x)²/G(0).

Since G(x) ≍ |x|^{2−d}, we have G(x)² ≍ |x|^{4−2d}. Therefore the sum
converges if and only if 2d − 4 > d, that is, d > 4. Therefore, if d > 4,
E[Y] < ∞ and hence with probability one Y < ∞. As we have seen,
this implies that A is a transient set, and since Â ⊂ A, Â is also a
transient set.
The case d ≤ 4 takes more work; we will do only the case of
A with d = 4. Let S, S̃ be two independent simple random walks
starting at the origin and let

A = {Sj : j = 0, 1, . . .},   Ã = {S̃j : j = 0, 1, 2, . . .}.

We will show that with probability one, A ∩ Ã is an infinite set. Let

σn = min{j : |Sj| ≥ 2^n},

An = B_{2^{n+1}} ∩ {Sj : σ_{n−2} ≤ j ≤ σ_{n+2}},

and let σ̃n, Ãn be the analogous quantities using the walk S̃. We will
show the following:
• With probability one, there exist infinitely many n with
A_{4n} ∩ Ã_{4n} ≠ ∅.

We will not give all the details but leave it as an exercise in the ideas
of this chapter to put in all the details. However, we will give a
sketch of the facts to verify. Let En denote the event that A_{4n} ∩ Ã_{4n} ≠ ∅
and let Un = B_{2^{4n+1}} \ B_{2^{4n}}.

• Show that there exists c1 > 0 such that for all x ∈ Un,

P{x ∈ A_{4n}} ≥ c1 2^{−8n}.

• Show that there exists c2 < ∞ such that for all distinct x, y ∈ Un,

P{x, y ∈ A_{4n} ∩ Ã_{4n}} ≤ c2 2^{−16n} |x − y|^{−4}.

• Show that if

Yn = ∑_{x∈Un} 1{x ∈ A_{4n} ∩ Ã_{4n}},

then there exist c3, c4 > 0 such that

E[Yn] ≥ c3,   E[Yn²] ≤ c4 n.

• Use the second moment method (see Section A.2) to con-
clude that

P(En) = P{Yn > 0} ≥ c3² / (c4 n).

• Show that there exists c6 such that for all m < n,

P(Em ∩ En) ≤ c6 P(Em) P(En).

• Use the second moment method again to conclude that with
probability one

∑_{n=1}^∞ 1{En} = ∞.

Exercise 4.43. Put in all the details of the last proof!

Exercise 4.44. Show that if d ≥ 5, there exists c < ∞ such that the
following holds. Suppose S1, S2 are simple random walks starting at
0 and x respectively. Then,

P{S1[0, ∞) ∩ S2[0, ∞) ≠ ∅} ≤ c |x|^{4−d}.

Hint: Let Iy be the indicator function of the event that there exist
j, k with S1_j = y and S2_k = y. Let I = ∑_{y∈Zd} Iy. Show that

E[I] ≤ c |x|^{4−d}.

Many of the results about intersection of random walk


paths are reflections of the fact that a random walk path
in Zd , d ≥ 2 is a “two-dimensional set”. This is seen
by noting that for large R, the number of points of the
path that lie in the ball of radius R is of order R2 . Two
two-dimensional sets (think, for example, planes) in Rd
typically do not intersect if d > 4 and intersect if d < 4
with d = 4 being the critical dimension where it is a close
call.

4.6. Capacity in two dimensions


There is also a notion of capacity in two dimensions that relates to
the probability of hitting a set, but we cannot use the same definition
since every nonempty set is hit with probability one. Instead we will
take a limit. If A is a finite set we write τA for the first time that we
visit A and we write τn for τ_{∂Bn}. We start with (4.13), which can be
rewritten as

a(x) = lim_{n→∞} (2/π) (log n) Px{τn < τ0}.

Proposition 4.45. If A ⊂ Z2 is finite, then the limit

aA(x) = lim_{n→∞} (2/π) (log n) Px{τn < τA}

exists for every x ∈ Z2. Moreover, aA ≡ 0 on A and if z ∈ A,
x ∈ Z2 \ A, then

aA(x) = a(x − z) − ∑_{w∈A} Px{S(τA) = w} a(w − z).

Proof. We will do the case z = 0 and leave the general case to the
reader. Note that

(4.24)    Px{τn < τ0} = Px{τn < τA} + Px{τA < τn < τ0},

and

Px{τA < τn < τ0} = ∑_{w∈A} Px{S(τA ∧ τn) = w} Pw{τn < τ0}.

Using (4.13), we get

lim_{n→∞} (2/π) (log n) Px{τA < τn < τ0} = ∑_{w∈A} Px{S(τA) = w} a(w).

Using this again in (4.24), we get

lim_{n→∞} (2/π) (log n) Px{τn < τA} = a(x) − ∑_{w∈A} Px{S(τA) = w} a(w).  □

If 0 ∈ A, we can write

a(x) − aA(x) = lim_{n→∞} (2/π) (log n) [Px{τn < τ0} − Px{τn < τA}].

Definition 4.46. If x ∈ A ⊂ Z2, then the capacity of A is defined by

cap(A) = lim_{|z|→∞} [a(z) − aA(z)] = ∑_{y∈A} hmA(y) a(y − x).

The existence of this limit follows from Exercise 4.31. In some
sense this is defined up to an additive constant and we have chosen
the constant so that cap({0}) = 0. Another reasonable choice would
be to choose cap({0}) = −k0. We have the expansion

aA(z) = (2/π) log |z| + k0 − cap(A) + o(1),   z → ∞.

Further reading
The classical book by Frank Spitzer [18] includes an extensive bib-
liography on the early work on random walk. This chapter can be
considered as a sampler from [10] which is a serious graduate/research
level treatment of simple random walk.
Index

augmented NB distribution, 185
augmented NB process, 187
binary tree, 15
Borel-Cantelli Lemma, 179, 181
boundary perturbation, 167
boundary Poisson kernel, 34
boundary vertex, 8
bounded convergence theorem, 11
box dimension, 155
Brownian bridge, 158
Brownian motion, 156
capacity
  in Z2, 102
  in Zd, d ≥ 3, 93
capacity parametrization, 169
Cauchy-Riemann equation, 171
Cauchy-Riemann equations, 159
central charge, 45, 167
central limit theorem, 70
Chapman-Kolmogorov equations, 2
characteristic function, 71, 193
comparable, 97
compound Poisson process, 183, 188
concatenation, 18
conformal invariance, 163, 166
conformal transformation, 160
connective constant, 172
covariance matrix, 135
Cramer’s rule, 31
current
  directed, 59
  undirected, 59
Dirichlet problem, 12, 170
domain Markov property, 36, 166
Doob h-process, 105
dual graph, 123
dual lattice, 122
dual spanning tree, 122, 125
edge
  self-edge, 7
  undirected, 7
elementary loop, 47
escape probability, 93
Euler’s constant, 81
exponential distribution, 20, 190
fractal dimension, 153
free boundary, 67, 119
gambler’s ruin, 149
Gamma distribution, 65, 190
Gamma function, 78, 189
Gamma process, 65, 190
Gaussian free field (GFF), 133
  conformal invariance in R2, 176
  continuous, 173
  Dirichlet, 142, 146
  from Markov chains, 139
generating function, 15
geometric distribution, 5
graph
  connected, 6
  directed (digraph), 6
  Laplacian, 8
graphs, 6
Green’s function, 13, 18, 26
  Brownian motion, 174
  conformal invariance, 175
ground state, 145
growing loop
  at a point, 46
  configuration, 50
Hamiltonian, 143
harmonic function, 8
  difference estimates, 86
  on Zd, 81
harmonic measure, 91
Harnack inequality, 88
Hausdorff dimension, 156
heat equation, 69
holomorphic function, 159
indegree, 6
indicator function, 4
infinitely divisible distribution, 192
integrable weight, 17
interior vertex, 8
Kirchhoff’s theorem, 41
Lévy measure, 183, 188
Lévy process, 157, 192
Laplacian, 18, 76
  determinant, 29
  graph, 8, 40
  in Rd, 8
  Markov chain, 7
  random walk, 8
Laplacian random walk, 32
last-exit decomposition, 87, 94
lazy walker, 69
local central limit theorem (LCLT), 71, 73
local time, 59
Loewner differential equation, 169
loop erasure, 24
  backward, 25
  forward, 25
loop measure
  rooted, 54
  unrooted, 56
loop soup, 51
  Brownian, 161
  ordered, 51
  rooted, 55
loop-erased random walk (LERW), 25, 103, 168
  scaling limit, 164
Markov chain, 1
  cemetery site, 14
  continuous time, 20
  finite, 1
  irreducible, 3
  killing rate, 14
  Laplacian, 7
  recurrent, 4
  symmetric, 4
  time homogeneous, 1
  transient, 5
  with boundary, 8
Markovian field, 147
measure, 17
memoryless property, 20
Minkowski content, 154, 167
Minkowski dimension, 154, 167
moment generating function (mgf), 15
negative binomial distribution, 62, 184
negative binomial process, 48, 187
normal distribution, 133
  centered, 134
  multivariate, 133, 134, 137
outdegree, 6
Pólya’s theorem, 76
path, 16, 23
  length, 16
  nontrivial, 17
  trivial, 17
period, 69
Poisson kernel, 10, 34, 142, 171
Poisson process, 51, 182
  negative rate, 63
positive definite, 135
positive semidefinite, 135
potential kernel, 79, 81
prediction, 173
random walk
  excursion-reflected, 111
  Laplacian, 32, 106
  loop-erased, 25
  simple
    Zd, 68
    graph, 6
  with darning, 111
recurrent, 76
  set in Zd, 96, 98
restriction property, 163, 168
Riemann mapping theorem, 161
sausage, 154
scaling limit, 153
Schramm-Loewner evolution (SLE), 167, 171
second moment method, 180
self-avoiding polygon (SAP), 122
self-avoiding walk (SAW), 24, 172
simply connected, 122, 161
spanning forest, 117
spanning tree, 117
Stirling’s formula, 6, 75
stochastic matrix, 2
stopping time, 9
transient, 76
  set in Zd, 96, 98
transition matrix (probabilities), 1
tree, 37
  spanning, 38
  wired spanning, 41
uniform spanning tree (UST), 38
  in Z2, 130
  infinite in Z2, 128
  infinite in Zd, 117
unrooted loop, 55
weight
  integrable, 17
  on edges, 16
  on paths, 17
Wiener process, 156
Wiener’s test, 98
Wilson’s algorithm, 38, 119
wired boundary, 68, 119


SELECTED PUBLISHED TITLES IN THIS SERIES

98 Gregory F. Lawler, Random Explorations, 2022


97 Anthony Bonato, An Invitation to Pursuit-Evasion Games and Graph
Theory, 2022
96 Hilário Alencar, Walcy Santos, and Gregório Silva Neto,
Differential Geometry of Plane Curves, 2022
95 Jörg Bewersdorff, Galois Theory for Beginners: A Historical Perspective,
Second Edition, 2021
94 James Bisgard, Analysis and Linear Algebra: The Singular Value
Decomposition and Applications, 2021
93 Iva Stavrov, Curvature of Space and Time, with an Introduction to
Geometric Analysis, 2020
92 Roger Plymen, The Great Prime Number Race, 2020
91 Eric S. Egge, An Introduction to Symmetric Functions and Their
Combinatorics, 2019
90 Nicholas A. Scoville, Discrete Morse Theory, 2019
89 Martin Hils and François Loeser, A First Journey through Logic, 2019
88 M. Ram Murty and Brandon Fodden, Hilbert’s Tenth Problem, 2019
87 Matthew Katz and Jan Reimann, An Introduction to Ramsey Theory,
2018
86 Peter Frankl and Norihide Tokushige, Extremal Problems for Finite
Sets, 2018
85 Joel H. Shapiro, Volterra Adventures, 2018
84 Paul Pollack, A Conversational Introduction to Algebraic Number
Theory, 2017
83 Thomas R. Shemanske, Modern Cryptography and Elliptic Curves, 2017
82 A. R. Wadsworth, Problems in Abstract Algebra, 2017
81 Vaughn Climenhaga and Anatole Katok, From Groups to Geometry
and Back, 2017
80 Matt DeVos and Deborah A. Kent, Game Theory, 2016
79 Kristopher Tapp, Matrix Groups for Undergraduates, Second Edition,
2016
78 Gail S. Nelson, A User-Friendly Introduction to Lebesgue Measure and
Integration, 2015
77 Wolfgang Kühnel, Differential Geometry: Curves — Surfaces —
Manifolds, Third Edition, 2015
76 John Roe, Winding Around, 2015

For a complete list of titles in this series, visit the


AMS Bookstore at www.ams.org/bookstore/stmlseries/.
