
Available online at www.sciencedirect.com

ScienceDirect

Stochastic Processes and their Applications 163 (2023) 288–322

www.elsevier.com/locate/spa

Well-posedness of the martingale problem for super-Brownian motion with interactive branching
Lina Ji a, Jie Xiong b, Xu Yang c,∗
a Faculty of Computational Mathematics and Cybernetics, Shenzhen MSU-BIT University, Shenzhen, China
b Department of Mathematics and National Center for Applied Mathematics (Shenzhen), Southern University of Science
and Technology, Shenzhen, China
c School of Mathematics and Information Science, North Minzu University, Yinchuan, China

Received 6 April 2021; received in revised form 14 February 2023; accepted 12 June 2023
Available online 16 June 2023

Abstract
In this paper a martingale problem for super-Brownian motion with interactive branching is derived.
The uniqueness of the solution to the martingale problem is obtained by using the pathwise uniqueness
of the solution to a corresponding system of SPDEs with proper boundary conditions. The existence of
the solution to the martingale problem and the local Hölder continuity of the density process are also
studied.
© 2023 Elsevier B.V. All rights reserved.

MSC: 60H15; 60J68


Keywords: Super-Brownian motion; Interacting branching; Function-valued process; Stochastic partial differential
equation

1. Introduction and main results


Let M(R) be the collection of all finite Borel measures on R endowed with the weak convergence topology. Let C_b^n(R) be the set of all bounded continuous functions on R with continuous bounded derivatives up to the nth order. We consider a continuous M(R)-valued process (X_t)_{t≥0} satisfying the martingale problem (MP): for φ ∈ C_b^2(R), the process

    M_t(φ) ≡ ⟨X_t, φ⟩ − ⟨X_0, φ⟩ − ∫_0^t ⟨X_s, (1/2)φ''⟩ ds    (1.1)
∗ Corresponding author.
E-mail addresses: jiln@smbu.edu.cn (L. Ji), xiongj@sustech.edu.cn (J. Xiong), xuyang@mail.bnu.edu.cn
(X. Yang).

https://doi.org/10.1016/j.spa.2023.06.006
0304-4149/© 2023 Elsevier B.V. All rights reserved.

is a continuous martingale with quadratic variation process (⟨M(φ)⟩_t)_{t≥0} given by

    ⟨M(φ)⟩_t = ∫_0^t ⟨X_s, γ(μ_s, ·)φ²⟩ ds,    (1.2)
where (µt )t≥0 is the density process of (X t )t≥0 , and γ is the interacting branching rate
depending on (µt )t≥0 . The notation ⟨ν, φ⟩ denotes the integral of the function φ with respect
to the measure ν. In this paper we always assume that X 0 (d x) = µ0 (x)d x with µ0 ∈ Cc (R)+ ,
where Cc (R)+ is the collection of nonnegative continuous functions on R with compact support.
Obviously, μ_0 ∈ L¹(R)^+, with L¹(R)^+ being the set of nonnegative functions f on R with ∫_R f(x)dx < ∞. When γ is a constant, the process (X_t)_{t≥0} is a classical super-Brownian

motion. In this case the well-posedness of the MP ((1.1), (1.2)) was established using the
nonlinear partial differential equation satisfied by its log-Laplace transform, which was obtained
by Watanabe [23]. Moreover, a new approach for the well-posedness of the MP was suggested
by Xiong [26], in which a relationship between the classical super-Brownian motion and
a stochastic partial differential equation (SPDE) satisfied by its corresponding distribution-
function-valued process was established. The weak uniqueness of the solution to the MP was
also obtained by the strong uniqueness of the solution to the corresponding SPDE in [26]. See
He et al. [8] for the case of super-Lévy process.
Superprocesses with interaction are natural to consider, since for many species branching
and spatial motion depend on the population density. When the spatial motion is interactive,
the well-posedness of the martingale problem was studied by Donnelly and Kurtz [3]; see also
Perkins [17, Theorem V.5.1] and Li et al. [14]. Uniqueness for the historical superprocesses
with certain interaction was investigated by Perkins [17]. Furthermore, the superprocesses
with interactive immigration were studied in [1,7,20]; see also Li [13, Section 10]. The well-
posedness of the martingale problem for the interactive immigration process was solved by
Mytnik and Xiong [16]. See also [27] for the well-posedness of the martingale problem for
a superprocess with location-dependent branching, interactive immigration mechanism and
spatial motion.
However, the harder case of superprocesses with interactive branching has rarely been investigated. We are interested in the case γ(μ_s, x) = γ(μ_s(x)), i.e., an interactive branching mechanism depending on the density process (μ_t)_{t≥0}, for which the well-posedness of the MP
((1.1), (1.2)) is still an open problem. The weak uniqueness of the solution to the MP ((1.1),
(1.2)) is very difficult to prove. As a first step, throughout this paper we assume that γ satisfies
the following condition:

Condition 1.1. Let the integer n ≥ 0 and −∞ = a_0 < a_1 < · · · < a_{n+1} = ∞ be fixed. For any f ∈ L¹(R)^+, let

    γ(f, x) = g_i(∫_{a_i}^{a_{i+1}} f(y)dy)²,   a_i ≤ x < a_{i+1},  i = 0, . . . , n − 1,
    γ(f, x) = g_n(∫_{a_n}^∞ f(y)dy)²,           a_n ≤ x < ∞,

where g_i, i = 0, . . . , n, are bounded continuous functions from R_+ to R_+.
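To make the structure of Condition 1.1 concrete, here is a small numerical sketch (ours, not from the paper) that evaluates γ(f, x) for a discretized density: the thresholds, the functions g_i and the test density are all hypothetical choices.

```python
import numpy as np

def gamma(f_vals, grid, x, a, g):
    """Illustrative evaluation of gamma(f, x) from Condition 1.1.

    f_vals : values of a density f on a uniform grid (toy discretization)
    a      : interior thresholds a_1 < ... < a_n (a_0 = -inf, a_{n+1} = +inf)
    g      : the n+1 continuous functions g_0, ..., g_n
    """
    dx = grid[1] - grid[0]
    i = int(np.searchsorted(a, x, side="right"))   # x lies in [a_i, a_{i+1})
    lo = -np.inf if i == 0 else a[i - 1]
    hi = np.inf if i == len(a) else a[i]
    mask = (grid >= lo) & (grid < hi)
    mass = f_vals[mask].sum() * dx                 # int_{a_i}^{a_{i+1}} f(y) dy
    return g[i](mass) ** 2                         # gamma(f, x) = g_i(mass)^2

# toy example: n = 1, one threshold at 0, f = 1 on [-1, 1]
grid = np.linspace(-1.0, 1.0, 201)
f_vals = np.ones_like(grid)
g = [lambda m: 1.0, lambda m: 2.0 * m]             # bounded on the range seen here
val = gamma(f_vals, grid, 0.5, a=[0.0], g=g)       # ~ (2 * 1)^2 = 4
```

So the branching rate at x depends on f only through the mass of f on the interval containing x, which is what makes the distribution-function-valued reformulation in Section 3 natural.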


The existence and local Hölder continuity of the density process (μ_t)_{t≥0} are investigated in this paper, where (μ_t)_{t≥0} is a weak solution to the following SPDE:

    ∂μ_t(x)/∂t = (1/2)Δμ_t(x) + √(μ_t(x)γ(μ_t, x)) Ẇ_t(x),    (1.3)
where Δ denotes the one-dimensional Laplacian operator and Ẇ_t(x) is the formal derivative of a white noise random measure. Let C(R)^+ be the collection of nonnegative continuous functions on R. The notation ⟨f, g⟩ denotes ∫_R f(x)g(x)dx whenever it exists. The C(R)^+-valued process (μ_t)_{t≥0} is a weak solution of the SPDE (1.3) in the following sense: for any φ ∈ C_b^2(R), we have

    ⟨μ_t, φ⟩ = ⟨μ_0, φ⟩ + (1/2)∫_0^t ⟨μ_s, φ''⟩ ds + ∫_0^t ∫_R √(μ_s(x)γ(μ_s, x)) φ(x) W(ds, dx),  t ≥ 0    (1.4)
almost surely, where W (ds, d x) is a time–space Gaussian white noise random measure with
the Lebesgue measure dsd x as its intensity. The existence of the solution to the MP ((1.1),
(1.2)) is given by showing the existence of the solution to (1.3). Moreover, we show the weak
uniqueness of the solution to the MP ((1.1), (1.2)) in Theorem 1.4. The main idea is to relate
the MP to a system of SPDEs, which is satisfied by a sequence of corresponding distribution-
function-valued processes on intervals. The weak uniqueness of the solution to the MP follows from the pathwise uniqueness of the solution to the system of SPDEs; see Section 3. Throughout this paper we assume that all random variables are defined on the same filtered probability space (Ω, F, F_t, P) unless otherwise specified. Let E denote the corresponding expectation.
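For intuition only, the SPDE (1.3) can be simulated with a crude explicit finite-difference/Euler scheme. The grid sizes, the bounded toy rate γ and the boundary handling below are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit finite-difference / Euler-Maruyama sketch for SPDE (1.3).
half_len, n_cells, T, n_steps = 5.0, 200, 0.1, 400   # domain [-5, 5]
dx, dt = 2 * half_len / n_cells, T / n_steps         # dt < dx^2: heat part stable
x = np.linspace(-half_len, half_len, n_cells + 1)
mu = np.maximum(1.0 - np.abs(x), 0.0)                # mu_0 in C_c(R)^+: tent, mass 1

def gamma(m):
    return 1.0 / (1.0 + m)                           # bounded rate (cf. Condition 1.1)

for _ in range(n_steps):
    lap = (np.roll(mu, -1) - 2.0 * mu + np.roll(mu, 1)) / dx ** 2
    lap[0] = lap[-1] = 0.0                           # crude artificial-boundary handling
    xi = rng.standard_normal(mu.shape) * np.sqrt(dt / dx)  # discretized white noise
    mu = mu + 0.5 * dt * lap + np.sqrt(np.maximum(mu, 0.0) * gamma(mu)) * xi
    mu = np.maximum(mu, 0.0)                         # keep the density nonnegative

total_mass = mu.sum() * dx   # fluctuates around <mu_0, 1> = 1 (cf. Lemma 2.2)
```

The clipping at zero mirrors (heuristically) the nonnegativity of the density; a single simulated path conserves mass only in expectation.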

Theorem 1.2. There exists a continuous M(R)-valued process (X_t)_{t≥0} satisfying the MP ((1.1), (1.2)), where X_t(dx) is absolutely continuous with respect to dx with density μ_t(x) satisfying (1.3) for all t ≥ 0 almost surely, and

    sup_{0≤t≤T} E[∫_R μ_t(x)^{2p} e^{−|x|} dx] < ∞    (1.5)

for every T > 0 and p ≥ 1.

Theorem 1.3. Suppose that (μ_t)_{t≥0} satisfies (1.3) and (1.5). Then (μ_t)_{t>0} is locally Hölder continuous with exponent λ_1/2 in the time variable and with exponent λ_2 in the space variable for all λ_1, λ_2 ∈ (0, 1/2). Namely, for any fixed 0 < r_0 < T and L > 0, there exists a random variable ξ ≥ 0 depending on λ_1, λ_2 such that, with probability one,

    |μ_t(x) − μ_r(y)| ≤ ξ(|t − r|^{λ_1/2} + |x − y|^{λ_2}),  t, r ∈ [r_0, T], x, y ∈ [−L, L].

Theorem 1.4. Assume that g_n in Condition 1.1 is β-Hölder continuous with 1/2 ≤ β ≤ 1, i.e.,

    |g_n(x) − g_n(y)| ≤ K|x − y|^β,  x, y ≥ 0    (1.6)

for some constant K > 0. Then the weak uniqueness of the solution to the MP ((1.1), (1.2)) holds.

Remark 1.5. Taking n = 0 in Condition 1.1, the branching rate depends on the total mass process, i.e., the quadratic variation process (⟨M(φ)⟩_t)_{t≥0} of the martingale defined by (1.1) is

    ⟨M(φ)⟩_t = ∫_0^t γ(⟨X_s, 1⟩)⟨X_s, φ²⟩ ds.    (1.7)

The well-posedness of the MP ((1.1), (1.7)) is a corollary of Theorems 1.2 and 1.4.
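As a heuristic sanity check on the n = 0 case (a toy illustration with a hypothetical bounded rate γ), taking φ = 1 in (1.1) and (1.7) shows that the total mass Z_t = ⟨X_t, 1⟩ behaves like a one-dimensional branching diffusion dZ_t = √(γ(Z_t)Z_t) dB_t, whose mean is conserved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama sketch of dZ = sqrt(gamma(Z) Z) dB, Z_0 = 1; E[Z_T] = Z_0.
def total_mass_path(z0=1.0, T=1.0, n_steps=2000, gamma=lambda z: 1.0 / (1.0 + z)):
    dt = T / n_steps
    z = z0
    for _ in range(n_steps):
        z += np.sqrt(max(gamma(z) * z, 0.0) * dt) * rng.standard_normal()
        z = max(z, 0.0)        # 0 is absorbing, as for branching diffusions
    return z

paths = [total_mass_path() for _ in range(200)]
mean_mass = float(np.mean(paths))   # should be close to Z_0 = 1
```

The rate γ here is our own choice; the point is only that the martingale property of M_t(1) forces the empirical mean of Z_T to stay near Z_0.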

We now introduce some notation. Let B(R) be the collection of all bounded functions on
R. We use B(R)+ to denote the subset of B(R) of nonnegative functions. Let B[a1 , a2 ] be
the Banach space of bounded measurable functions on [a_1, a_2] furnished with the supremum norm. For f, g ∈ B[a_1, a_2] let ⟨f, g⟩ = ∫_{a_1}^{a_2} f(x)g(x)dx. Define C_b[a_1, a_2] to be the set
of bounded continuous functions on [a1 , a2 ]. For any integer n ≥ 0, let Cbn [a1 , a2 ] be the
subset of Cb [a1 , a2 ] of functions with bounded continuous derivatives up to the nth order.
Let Ccn (a1 , a2 ) denote the subset of Cbn [a1 , a2 ] of functions with compact support in (a1 , a2 ).
Define Cc∞ (R) to be the set of infinitely differentiable functions on R with compact support.
For f, g ∈ C(R)^+ and λ > 0, let |f − g|_{(−λ)} = sup_{x∈R} e^{−λ|x|}|f(x) − g(x)|. Let C_tem^+(R) be the subspace of C(R)^+ of functions f with |f|_{(−λ)} < ∞ for every λ > 0, whose topology is induced by the norms {|·|_{(−λ)} : λ > 0}. Let C([0, T] × [−L, L], R_+) be the space of continuous functions from [0, T] × [−L, L] to R_+ furnished with the supremum norm ∥·∥_{[0,T]×[−L,L]}, and C([0, T] × R, R_+) be the space of continuous functions from [0, T] × R to R_+ furnished with the metric

    d(f, g) = ∫_0^∞ e^{−L}(∥f − g∥_{[0,T]×[−L,L]} ∧ 1) dL.    (1.8)
For a Banach space Y with norm ∥·∥_Y, let C([0, T], Y) be the set of all continuous maps from [0, T] to Y with the topology induced by

    d_1(f, g) = sup_{0≤t≤T} ∥f(t) − g(t)∥_Y.    (1.9)

The rest of the paper is organized as follows. In Section 2, we present the proofs of
Theorems 1.2 and 1.3. The weak uniqueness of the solution to the MP ((1.1), (1.2)), i.e., the
proof of Theorem 1.4, is given in Section 3. Moreover, several useful lemmas and proofs
are given in the Appendix. Throughout the paper we use ∇ to denote the first-order spatial differential operator. We use K to denote a nonnegative constant whose value may change from line to line. In the integrals, we adopt the convention that, for a ≤ b ∈ R,

    ∫_a^b = ∫_{(a,b]}  and  ∫_a^∞ = ∫_{(a,∞)}.

2. Proofs of Theorems 1.2 and 1.3


In this section, we present the proofs of Theorems 1.2 and 1.3. The existence of the solution
to the MP ((1.1), (1.2)) is obtained by the existence of the corresponding density process.
Moreover, the local Hölder continuity of the density process (µt )t≥0 is given.

Lemma 2.1. The martingale defined in (1.1) induces an (F_t)-martingale measure satisfying

    M_t(φ) = ∫_0^t ∫_R φ(x) M(ds, dx),  t ≥ 0, φ ∈ C_b^2(R),

where M(ds, dx) is an orthogonal martingale measure on R_+ × R with covariance measure

    ds[∫_R γ(μ_s, z)δ_z(dx)δ_z(dy) X_s(dz)].

The definition of orthogonal martingale measure mentioned above is given in, e.g., Walsh [22,
p. 288].

Proof of Lemma 2.1. By the MP ((1.1), (1.2)), one can see that

    E[⟨X_t, 1⟩] = ⟨X_0, 1⟩ < ∞.    (2.1)

For each n ≥ 1 we define the measure Γ_n ∈ M(R) by

    Γ_n(φ) := E[∫_0^n ds ∫_R γ(μ_s, x)φ(x) X_s(dx)]    (2.2)

with φ ∈ B(R)^+. Note that γ is bounded by Condition 1.1. It then follows from (2.1) and (2.2) that for each n ≥ 1, Γ_n(φ) ≤ K∫_0^n E[⟨X_s, 1⟩]ds ≤ Kn⟨X_0, 1⟩ is bounded. The rest of the proof follows by taking c(z) = γ(μ_s, z)/2 and H(z, dν) = 0 in the proof of Li [13, Theorem 7.25]. We omit it here. □

Lemma 2.2. Suppose that (μ_t)_{t≥0} is a solution to (1.3). Then for every t ≥ 0, we have E[⟨μ_t, 1⟩] = ⟨μ_0, 1⟩.

Proof. By (1.3) we have

    ⟨μ_t, 1⟩ = ⟨μ_0, 1⟩ + ∫_0^t ∫_R √(μ_s(x)γ(μ_s, x)) W(ds, dx).

For each n ≥ 1 we define the stopping time τ_n = inf{t ≥ 0 : ⟨μ_t, 1⟩ ≥ n}. For T > 0, it follows from the continuity of t ↦ ⟨μ_t, 1⟩ that sup_{0≤t≤T} ⟨μ_t, 1⟩ < ∞ almost surely, which implies τ_n → ∞ almost surely as n → ∞. Further, t ↦ ∫_0^{t∧τ_n} ∫_R √(μ_s(x)γ(μ_s, x)) W(ds, dx) is a martingale (see, e.g., [9, p. 55]), since

    E[∫_0^{t∧τ_n} ∫_R μ_s(x)γ(μ_s, x) ds dx] ≤ K E[∫_0^{t∧τ_n} ⟨μ_s, 1⟩ ds] ≤ Ktn.

It then implies that E[⟨μ_{t∧τ_n}, 1⟩] = ⟨μ_0, 1⟩. Further,

    E[⟨μ_{t∧τ_n}, 1⟩²] ≤ 2⟨μ_0, 1⟩² + 2E[(∫_0^{t∧τ_n} ∫_R √(μ_s(x)γ(μ_s, x)) W(ds, dx))²]
        ≤ 2⟨μ_0, 1⟩² + K ∫_0^t E[⟨μ_{s∧τ_n}, 1⟩] ds
        ≤ K[⟨μ_0, 1⟩² + t⟨μ_0, 1⟩],

which implies that sup_n E[⟨μ_{t∧τ_n}, 1⟩²] < ∞. By [10, p. 67], we have E[⟨μ_t, 1⟩] = lim_{n→∞} E[⟨μ_{t∧τ_n}, 1⟩] = ⟨μ_0, 1⟩. The proof ends here. □
[

Let P_t(x, dy) be the semigroup generated by Δ/2, which is absolutely continuous with respect to Lebesgue measure dy with density p_t(x, y) given by

    p_t(x, y) := p_t(x − y) := (2πt)^{−1/2} e^{−|x−y|²/(2t)},  t > 0, x, y ∈ R.

Observe that

    p_t(x − y) ≤ 1/√t,  t > 0, x, y ∈ R.    (2.3)
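The kernel p_t and the elementary bound (2.3) are easy to check numerically; the snippet below is only a verification aid (the grid and time point are arbitrary choices of ours).

```python
import numpy as np

# Gaussian transition density of Brownian motion with generator Delta/2:
# p_t(z) = (2 pi t)^{-1/2} exp(-z^2 / (2t)).  Since 2 pi > 1, the peak value
# (2 pi t)^{-1/2} is at most t^{-1/2}, which is exactly the bound (2.3).
def p(t, z):
    return np.exp(-z ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t = 0.3
z = np.linspace(-5.0, 5.0, 10001)
dz = z[1] - z[0]
total = p(t, z).sum() * dz                       # ~1: p_t is a probability density
peak_ok = bool(np.all(p(t, z) <= 1.0 / np.sqrt(t)))  # the bound (2.3) holds pointwise
```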

Lemma 2.3. Let (X_t)_{t≥0} satisfy the MP ((1.1), (1.2)) and T > 0 be fixed. Then for any p ≥ 1 we have E[sup_{0≤t≤T} ⟨X_t, 1⟩^{2p}] < ∞.


Proof. For each n ≥ 1, we define the stopping time κ_n = inf{t ≥ 0 : ⟨X_t, 1⟩^{2p} ≥ n}. Note that sup_{0≤t≤T} ⟨X_t, 1⟩^{2p} < ∞ almost surely because of the continuity of (⟨X_t, 1⟩)_{t≥0}. We then have lim_{n→∞}(T ∧ κ_n) = T almost surely. By the MP ((1.1), (1.2)) we have

    ⟨X_t, 1⟩ = ⟨X_0, 1⟩ + M_t(1).    (2.4)

By the Burkholder–Davis–Gundy inequality, Hölder's inequality and a ≤ a² + 1 for a ∈ R, one can see that

    E[sup_{0≤t≤T∧κ_n} |M_t(1)|^{2p}] ≤ K E[|∫_0^{T∧κ_n} ⟨X_s, γ(μ_s, ·)⟩ ds|^p]
        ≤ K + K E[∫_0^{T∧κ_n} ⟨X_s, 1⟩^{2p} ds]
        ≤ K + K ∫_0^T E[sup_{0≤t≤s∧κ_n} ⟨X_t, 1⟩^{2p}] ds,

where the constant K depends on T. Therefore, by (2.4) we have

    E[sup_{0≤t≤T∧κ_n} ⟨X_t, 1⟩^{2p}] ≤ K + E[sup_{0≤t≤T∧κ_n} |M_t(1)|^{2p}]
        ≤ K + K ∫_0^T E[sup_{0≤t≤s∧κ_n} ⟨X_t, 1⟩^{2p}] ds,

which implies that E[sup_{0≤t≤T∧κ_n} ⟨X_t, 1⟩^{2p}] ≤ Ke^{KT} by Gronwall's inequality (see, e.g., [24, Theorem 1]). Letting n → ∞, the result follows from the monotone convergence theorem. □

Lemma 2.4. Let (X_t)_{t≥0} satisfy the MP ((1.1), (1.2)) and T > 0 be fixed. Then for any t ∈ [0, T) and p ≥ 1 we have E[⟨X_t, p_{T−t}(x − ·)⟩] = ⟨X_0, p_T(x − ·)⟩ and

    E[sup_{0≤t≤T} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}] ≤ K(T^{−p} + T^p)/(1 − 2^{−p}),

where ⟨X_T, p_0(x − ·)⟩ := lim_{t→T−} ⟨X_t, p_{T−t}(x − ·)⟩.

Proof. By Lemma 2.1 and the proof of [13, Theorem 7.26], we have

    ⟨X_t, p_{T−t}(x − ·)⟩ = ⟨X_0, p_T(x − ·)⟩ + ∫_0^t ∫_R p_{T−s}(x − z) M(ds, dz)    (2.5)

for t < T, where M(ds, dz) is the orthogonal martingale measure defined in Lemma 2.1. For any t ∈ [0, T), by (2.1) and (2.3) we have

    E[∫_0^t ∫_R p_{T−s}(x − z)² γ(μ_s, z) X_s(dz) ds] ≤ K ∫_0^t E[⟨X_s, 1⟩]/(T − s) ds = K⟨X_0, 1⟩ ln(T/(T − t)) < ∞    (2.6)

since t ∈ [0, T). It implies that t ↦ ∫_0^t ∫_R p_{T−s}(x − z) M(ds, dz) is a martingale for t ∈ [0, T). Then by (2.5) we have E[⟨X_t, p_{T−t}(x − ·)⟩] = ⟨X_0, p_T(x − ·)⟩ for t < T. Further, by (2.3),

    ⟨X_0, p_T(x − ·)⟩^{2p} ≤ T^{−p} ⟨X_0, 1⟩^{2p}    (2.7)

is bounded. Recall that γ is bounded. Let 0 < r_0 < T be fixed. Then, by the Burkholder–Davis–Gundy inequality and (2.3), there are constants K_1, K_2 independent of T, r_0 such that

    I_{T,r_0} := 2^{2p−1} E[sup_{0≤t≤T−r_0} |∫_0^t ∫_R p_{T−s}(x − z) M(ds, dz)|^{2p}]
        ≤ K_1 E[|∫_0^{T−r_0} ∫_R p_{T−s}(x − z)² γ(μ_s, z) X_s(dz) ds|^p]
        ≤ K_2 E[|∫_0^{T−r_0} ∫_R p_{T−s}(x − z)² X_s(dz) ds|^p]
        ≤ E[|∫_0^{T−r_0} K_2^{1/p} ⟨X_s, p_{T−s}(x − ·)⟩/√(T − s) ds|^p].    (2.8)

Notice that ab ≤ a² + b² for a, b ∈ R. We have

    K_2^{1/p} ⟨X_s, p_{T−s}(x − ·)⟩ = (2K_2^{1/p} T^{1/4})(2^{−1} T^{−1/4} ⟨X_s, p_{T−s}(x − ·)⟩)
        ≤ 4K_2^{2/p} T^{1/2} + (1/4) T^{−1/2} sup_{0≤s≤T−r_0} ⟨X_s, p_{T−s}(x − ·)⟩².

By the above inequality and (2.8) we have

    I_{T,r_0} ≤ 8^p K_2² T^p + 2^{−p} E[sup_{0≤t≤T−r_0} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}].

By the above inequality, (2.5), (2.7) and the fact that (a + b)^{2p} ≤ 2^{2p−1} a^{2p} + 2^{2p−1} b^{2p} for any a, b ≥ 0, we obtain

    E[sup_{0≤t≤T−r_0} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}] ≤ 2^{2p−1} ⟨X_0, p_T(x − ·)⟩^{2p} + I_{T,r_0}
        ≤ K(T^{−p} + T^p) + 2^{−p} E[sup_{0≤t≤T−r_0} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}]    (2.9)

with K = (2^{2p−1} ⟨X_0, 1⟩^{2p}) ∨ (8^p K_2²). By Lemma 2.3 and (2.3) one can check that

    E[sup_{0≤t≤T−r_0} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}] ≤ r_0^{−p} E[sup_{0≤t≤T−r_0} ⟨X_t, 1⟩^{2p}] < ∞.

Moving the last term of (2.9) to the left side of the inequality, we then have

    E[sup_{0≤t≤T−r_0} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}] ≤ K(T^{−p} + T^p)/(1 − 2^{−p}).


Letting r_0 → 0+, by Fatou's lemma, we have

    E[sup_{0≤t≤T} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}] ≤ lim_{r_0→0+} E[sup_{0≤t≤T−r_0} ⟨X_t, p_{T−t}(x − ·)⟩^{2p}]
        ≤ K(T^{−p} + T^p)/(1 − 2^{−p}).

The result follows. □

Lemma 2.5. Let M(ds, dz) be the martingale measure defined in Lemma 2.1. Then for all t > 0, x ∈ R the stochastic integral

    M_t(x) = ∫_0^t ∫_R p_{t−s}(x − z) M(ds, dz)    (2.10)

is well-defined. Moreover, {M_t(x) : t > 0, x ∈ R} has a modification which is almost surely continuous.

Proof. Recall that γ is bounded by Condition 1.1. By (2.3) and Lemma 2.4, for every t > 0 and x ∈ R we have

    E[∫_0^t ∫_R p_{t−s}(x − z)² γ(μ_s, z) X_s(dz) ds] ≤ K E[∫_0^t ∫_R p_{t−s}(x − z)² X_s(dz) ds]
        ≤ K ∫_0^t E[⟨X_s, p_{t−s}(x − ·)⟩]/√(t − s) ds
        ≤ K ⟨X_0, p_t(x − ·)⟩ √t ≤ K ⟨X_0, 1⟩,

which is bounded. Then M_t(x) is well-defined for all t, x (see, e.g., [9, p. 55]). Let 0 < r_0 < T be fixed. For any 0 < r_0 ≤ r ≤ t ≤ T and p ≥ 1, we have

    E[|M_t(x) − M_r(x)|^{2p}] ≤ K E[|∫_r^t ∫_R p_{t−s}(x − z) M(ds, dz)|^{2p}]
        + K E[|∫_0^r ∫_R [p_{r−s}(x − z) − p_{t−s}(x − z)] M(ds, dz)|^{2p}]
        =: I_1 + I_2.    (2.11)
Notice that a ≤ a² + 1 for a ∈ R. By Lemma 2.4, for any r_0 ≤ t ≤ T we have

    E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(x − ·)⟩^p] ≤ E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(x − ·)⟩^{2p}] + 1
        ≤ K(t^{−p} + t^p)/(1 − 2^{−p}) + 1
        ≤ K(r_0^{−p} + T^p)/(1 − 2^{−p}) + 1,    (2.12)

where the above K is independent of t. By (2.3), (2.12) and the Burkholder–Davis–Gundy inequality one can see that

    I_1 ≤ K E[|∫_r^t ds ∫_R p_{t−s}(x − z)² X_s(dz)|^p]
       ≤ K E[|∫_r^t ⟨X_s, p_{t−s}(x − ·)⟩/√(t − s) ds|^p]
       ≤ K (t − r)^{p/2} E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(x − ·)⟩^p] ≤ K (t − r)^{p/2},    (2.13)

where the above constant K only depends on r_0, T. In the rest of the proof we assume that δ ∈ (0, 1/4). By [18, Lemma III 4.5], for any 0 < s < r < t ≤ T we have

    [p_{r−s}(x − z) − p_{t−s}(x − z)]² ≤ (t − r)^δ (r − s)^{−3δ/2} [p_{r−s}(x − z)^{2−δ} + p_{t−s}(x − z)^{2−δ}].    (2.14)
It follows from (2.14) and the Burkholder–Davis–Gundy inequality that

    I_2 ≤ K E[|∫_0^r ds ∫_R [p_{r−s}(x − z) − p_{t−s}(x − z)]² X_s(dz)|^p]
       ≤ K (t − r)^{pδ} E[|∫_0^r (r − s)^{−3δ/2} ⟨X_s, p_{r−s}(x − ·)^{2−δ} + p_{t−s}(x − ·)^{2−δ}⟩ ds|^p].
Moreover, by (2.3) and (2.12) once again we have

    E[|∫_0^r (r − s)^{−3δ/2} ⟨X_s, p_{r−s}(x − ·)^{2−δ} + p_{t−s}(x − ·)^{2−δ}⟩ ds|^p]
        ≤ K E[sup_{0≤s≤r} ⟨X_s, p_{r−s}(x − ·)⟩^p] |∫_0^r (r − s)^{−(1+2δ)/2} ds|^p
          + K E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(x − ·)⟩^p] |∫_0^r (r − s)^{−3δ/2} (t − s)^{−(1−δ)/2} ds|^p
        ≤ K E[sup_{0≤s≤r} ⟨X_s, p_{r−s}(x − ·)⟩^p] r^{(1−2δ)p/2}
          + K E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(x − ·)⟩^p] (∫_0^r (r − s)^{−3δ} ds)^{p/2} (∫_0^r (t − s)^{δ−1} ds)^{p/2}
        ≤ K (K(r_0^{−p} + T^p)/(1 − 2^{−p}) + 1) T^{(1−2δ)p/2}.
By the above two inequalities, we have

    I_2 ≤ K (t − r)^{pδ},  0 < r_0 ≤ r ≤ t ≤ T    (2.15)

with the above constant K only depending on r_0, T. Combining (2.11), (2.13) and (2.15) we obtain

    E[|M_t(x) − M_r(x)|^{2p}] ≤ K [(t − r)^{p/2} + (t − r)^{pδ}]    (2.16)

for any 0 < r_0 ≤ r ≤ t ≤ T, where the above constant K only depends on r_0, T.
On the other hand, by [19, (2.4e)], for any 0 < β < 1 we have

    |p_{t−s}(x − z) − p_{t−s}(y − z)| ≤ K |x − y|^β (t − s)^{−(1+β)/2},  x, y ∈ R.

Then for all x, y ∈ R, by the Burkholder–Davis–Gundy inequality, (2.12) and Lemma 2.4 one can check that, for 0 < r_0 ≤ t ≤ T,

    E[|M_t(x) − M_t(y)|^{2p}] = E[|∫_0^t ∫_R [p_{t−s}(x − z) − p_{t−s}(y − z)] M(ds, dz)|^{2p}]
        ≤ K E[(∫_0^t ∫_R [p_{t−s}(x − z) − p_{t−s}(y − z)]² X_s(dz) ds)^p]
        ≤ K |x − y|^{βp} E[(∫_0^t (t − s)^{−(1+β)/2} ⟨X_s, p_{t−s}(x − ·) + p_{t−s}(y − ·)⟩ ds)^p]
        ≤ K |x − y|^{βp} (∫_0^t (t − s)^{−(1+β)/2} ds)^p
            · {E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(x − ·)⟩^p] + E[sup_{0≤s≤t} ⟨X_s, p_{t−s}(y − ·)⟩^p]}
        ≤ K |x − y|^{βp}    (2.17)

with β ∈ (0, 1), where the constant K depends on r_0, T. By (2.16) and (2.17),

    E[|M_t(x) − M_r(y)|^{2p}] ≤ K [(t − r)^{p/2} + (t − r)^{pδ} + |x − y|^{βp}]

for 0 < r_0 ≤ r ≤ t ≤ T and x, y ∈ R, where p can be taken greater than max(1/δ, 1/β). By Kolmogorov's continuity criterion (see, e.g., [10, Theorem 3.23]), {M_t(x) : t ∈ [r_0, T], x ∈ R} has a continuous version. The result follows by taking suitable sequences {r_0, r_1, . . .} and {T_1, T_2, . . .} such that lim_{n→∞} r_n = 0 and lim_{n→∞} T_n = +∞. □

Proposition 2.6. Suppose that (X_t)_{t≥0} is a solution to the MP ((1.1), (1.2)). Then almost surely for each t ≥ 0, the random measure X_t(dx) is absolutely continuous with respect to dx with density μ_t(x) satisfying (1.3). Conversely, assume (μ_t)_{t≥0} is a solution to (1.3). Then there exists a solution to the MP ((1.1), (1.2)).

Proof. Suppose that (X_t)_{t≥0} satisfies the MP ((1.1), (1.2)). By Lemma 2.1 and the proof of [13, Theorem 7.26], for any φ ∈ B(R) we have

    ⟨X_t, φ⟩ = ⟨X_0, P_t φ⟩ + ∫_0^t ∫_R P_{t−s}φ(x) M(ds, dx),

where M(ds, dx) is the orthogonal martingale measure defined in Lemma 2.1 and P_t is the semigroup generated by Δ/2. Taking an appropriate modification, we may and shall assume that {M_t(x) : t > 0, x ∈ R} is almost surely continuous by Lemma 2.5. Then by the stochastic Fubini theorem (see, e.g., Li [13, Theorem 7.24]), for every t ∈ (0, T] and φ ∈ C_c(R) we get ⟨X_t, φ⟩ = ∫_R μ_t(x)φ(x)dx almost surely with

    μ_t(x) = ∫_R p_t(x − z)μ_0(z)dz + M_t(x).

Note that t ↦ ⟨X_t, φ⟩ and t ↦ ∫_R μ_t(x)φ(x)dx are continuous. Considering suitable sequences {φ_1, φ_2, . . .} ⊂ C_c(R) and {t_1, t_2, . . .} ⊂ (0, ∞), by Lemma 2.5 we get almost surely for all t > 0 and φ ∈ C_b^2(R), ⟨X_t, φ⟩ = ∫_R μ_t(x)φ(x)dx, i.e., X_t(dx) is absolutely continuous with respect

to Lebesgue measure dx. Set p_t(x − z) = 0 for all t < 0. By El Karoui and Méléard [4, Theorem III-6], on some extension of the probability space one can define a white noise W(ds, dz) on R_+ × R based on dsdz such that almost surely for any u > 0, t > 0 and x ∈ R,

    ∫_0^u ∫_R p_{t−s}(x − z) M(ds, dz) = ∫_0^u ∫_R √(μ_s(z)γ(μ_s, z)) p_{t−s}(x − z) W(ds, dz),

which implies that

    μ_t(x) = ⟨μ_0, p_t(x − ·)⟩ + ∫_0^t ∫_R √(μ_s(z)γ(μ_s, z)) p_{t−s}(x − z) W(ds, dz)    (2.18)

for all t > 0, x ∈ R almost surely, and μ_t(x) is jointly continuous in (t, x) ∈ (0, ∞) × R. Notice that

    ∫_R ⟨f, p_t(x − ·)⟩² dx ≤ K ∫_R dx ∫_R f(y)² p_t(x − y) dy ≤ K ∫_R f(y)² dy < ∞

for any f ∈ C_c(R)^+. Then we have lim_{t→0} ∫_R [⟨f, p_t(x − ·)⟩ − f(x)]² dx = 0 by the dominated convergence theorem. Moreover, by Lemma 2.4 we have

    ∫_R dx E[∫_0^t ∫_R μ_s(z) p_{t−s}(x − z)² ds dz]
        ≤ K ∫_R dx ∫_0^t E[⟨μ_s, p_{t−s}(x − ·)⟩]/√(t − s) ds
        ≤ K √t ∫_R ⟨μ_0, p_t(x − ·)⟩ dx → 0,  t → 0.

It then implies that μ_t → μ_0 in L²(R) as t → 0 in the following sense:

    E[∥μ_t − μ_0∥²_{L²}] = E[∫_R |μ_t(x) − μ_0(x)|² dx]
        ≤ K ∫_R |⟨μ_0, p_t(x − ·)⟩ − μ_0(x)|² dx + K ∫_R dx E[∫_0^t ∫_R μ_s(z) p_{t−s}(x − z)² dz ds]
        → 0    (2.19)

as t → 0. Then (2.18) holds for all t ≥ 0, x ∈ R almost surely, and (1.3) holds similarly to [21, Theorem 2.1].
Conversely, suppose that (μ_t)_{t≥0} satisfies (1.3) and denote X_t(dx) = μ_t(x)dx. It then follows from Lemma 2.2 that X_t ∈ M(R) almost surely for every t ≥ 0. For any φ ∈ C_b^2(R) one can check that

    ⟨X_t, φ⟩ = ∫_R μ_t(x)φ(x)dx
        = ∫_R μ_0(x)φ(x)dx + (1/2)∫_0^t ∫_R μ_s(x)φ''(x)dx ds + ∫_0^t ∫_R φ(x)√(μ_s(x)γ(μ_s, x)) W(ds, dx)
        = ⟨X_0, φ⟩ + (1/2)∫_0^t ⟨X_s, φ''⟩ ds + M_t(φ),

where (M_t(φ))_{t≥0} is a continuous local martingale with quadratic variation process (⟨M(φ)⟩_t)_{t≥0} satisfying (1.2). Further, by Lemma 2.2, for φ ∈ C_b^2(R),

    E[⟨M(φ)⟩_t] ≤ K ∫_0^t E[⟨μ_s, 1⟩] ds = Kt⟨μ_0, 1⟩,

which implies that (M_t(φ))_{t≥0} is a martingale (see [9, p. 55]). The proof ends. □

Now we show the existence of the solution to (1.3). For any T > 0, let m ≥ 1 and t_k = kT/m with k = 0, 1, . . . , m. For x ∈ R, let ρ be the mollifier given by

    ρ(x) := C exp{−1/(1 − x²)} 1_{|x|<1},    (2.20)

where C is a constant such that ∫_R ρ(x)dx = 1. Let ρ_m(x) = mρ(mx) and let γ_m(·, x) = ∫_R ρ_m(x − y)γ(·, y)dy be the mollification of γ. Then x ↦ γ_m(·, x) is continuous and lim_{m→∞} γ_m(·, x) = γ(·, x) for almost every x ∈ R (see, e.g., [6, p. 630, Theorem 6]). Further, by Condition 1.1 one can see that

    sup_{x∈R, m≥1, f∈L¹(R)^+} γ_m(f, x) = sup_{x∈R, m≥1, f∈L¹(R)^+} ∫_R ρ_m(x − y)γ(f, y)dy ≤ sup_{x∈R, m≥1} K ∫_R ρ_m(x − y)dy = K.    (2.21)
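The mollification step can be sketched numerically; the grid, the discontinuous toy rate and the discrete convolution below are our own illustrative choices.

```python
import numpy as np

def rho_unnorm(u):
    """Unnormalized bump exp(-1/(1-u^2)) on |u| < 1, as in (2.20)."""
    out = np.zeros_like(u, dtype=float)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

xs = np.linspace(-1.0, 1.0, 2001)
C = 1.0 / (rho_unnorm(xs).sum() * (xs[1] - xs[0]))   # normalizing constant C

grid = np.linspace(-3.0, 3.0, 6001)
dg = grid[1] - grid[0]
gamma_step = (grid >= 0.0).astype(float)             # bounded, discontinuous toy rate

def mollify(m):
    """gamma_m(x) = int rho_m(x - y) gamma(y) dy with rho_m(u) = m rho(m u)."""
    rho_m = m * C * rho_unnorm(m * grid)
    return np.convolve(gamma_step, rho_m, mode="same") * dg

g5 = mollify(5.0)   # continuous, and equal to gamma away from the jump at 0
```

The mollified rate inherits the uniform bound (2.21), since each ρ_m integrates to one.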

For any f ∈ C_c^∞(R), we define a sequence of approximations by

    ⟨μ_t^m, f⟩ = ⟨μ_0, f⟩ + (1/2)∫_0^t ⟨μ_s^m, f''⟩ ds
        + Σ_{k=1}^m ∫_{t_{k−1}∧t}^{t_k∧t} ∫_R √(γ_m(μ_{t_{k−1}}^m, x)) G_m(μ_s^m(x)) f(x) W(ds, dx),    (2.22)

where

    G_m(x) = ∫_R [p_{m^{−1}}(x − y) − p_{m^{−1}}(y)] (√|y| ∧ m) dy

is a Lipschitz function for fixed m ≥ 1. Moreover, one can see that G_m(0) = 0 and lim_{m→∞} G_m(x) = √x for all x ≥ 0.
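A direct numerical quadrature (our own sketch, with arbitrary truncation parameters) shows the behavior of G_m: it vanishes at 0 and approaches √x as m grows, which is why √(γ_m) G_m(μ^m) in (2.22) approximates the coefficient √(μγ) of (1.3).

```python
import numpy as np

def G(m, x, half_width=30.0, n=60001):
    """Quadrature for G_m(x) = int [p_{1/m}(x-y) - p_{1/m}(y)] (sqrt|y| ^ m) dy."""
    y = np.linspace(-half_width, half_width, n)
    dy = y[1] - y[0]
    t = 1.0 / m
    p = lambda z: np.exp(-z ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    integrand = (p(x - y) - p(y)) * np.minimum(np.sqrt(np.abs(y)), m)
    return integrand.sum() * dy

G0 = G(50.0, 0.0)   # exactly 0: the two kernels coincide at x = 0
G4 = G(50.0, 4.0)   # close to sqrt(4) = 2, up to a slowly vanishing correction
```

The second term p_{1/m}(y) in the integrand is what pins G_m(0) = 0, at the cost of the slow (in m) convergence visible above.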
Recall that C([0, T], C_tem^+(R)) is the collection of all continuous maps from [0, T] to C_tem^+(R) with the topology induced by (1.9) with Y replaced by C_tem^+(R), and that C([0, T] × R, R_+) is the space of continuous maps from [0, T] × R to R_+ furnished with the metric (1.8). The topology of C([0, T], C_tem^+(R)) is stronger than that of C([0, T] × R, R_+). Hence C([0, T], C_tem^+(R)) is a subspace of C([0, T] × R, R_+). Then we have the following result.

Lemma 2.7. For every m ≥ 1, there is a pathwise unique continuous C_tem^+(R)-valued solution (μ_t^m)_{t∈[0,T]} to (2.22). Furthermore, {μ_t^m(x) : t ∈ [0, T], x ∈ R} ∈ C([0, T] × R, R_+), satisfying, for every λ > 0,

    sup_{0≤t≤T} sup_{x∈R} [e^{−λ|x|} μ_t^m(x)] < ∞  a.s.    (2.23)


Proof. For every m ≥ 1, there is a constant K > 0 such that

    |G_m(x)| ≤ K + ∫_R p_{m^{−1}}(x − y)√|y| dy = K + E[√|W_{m^{−1}}^x|]
        ≤ K + (E[|W_{m^{−1}}^x|²])^{1/4} ≤ K + (1 + x²)^{1/4} ≤ K(√|x| + 1),    (2.24)

where (W_t^x)_{t≥0} is a Brownian motion with initial value x. In particular, |G_m(x)| ≤ K(|x| + 1). Recall that μ_0 ∈ C_c(R)^+, that x ↦ γ_m(·, x) is continuous and bounded, and that G_m is Lipschitz with G_m(0) = 0. By [21, Theorems 2.2 and 2.3], for k = 1 there is a pathwise unique continuous C_tem^+(R)-valued solution (μ_t^m)_{t∈[t_{k−1},t_k]} to

    ⟨μ_t^m, f⟩ = ⟨μ_{t_{k−1}}^m, f⟩ + (1/2)∫_{t_{k−1}}^t ⟨μ_s^m, f''⟩ ds
        + ∫_{t_{k−1}}^t ∫_R √(γ_m(μ_{t_{k−1}}^m, x)) G_m(μ_s^m(x)) f(x) W(ds, dx)    (2.25)

for t ∈ [t_{k−1}, t_k] and any f ∈ C_c^∞(R) almost surely, which implies that

    sup_{t∈[t_{k−1},t_k]} sup_{x∈R} [e^{−λ|x|} μ_t^m(x)] < ∞    (2.26)

almost surely for each λ > 0. Notice that C([t_{k−1}, t_k], C_tem^+(R)) is a subspace of C([t_{k−1}, t_k] × R, R_+), which implies that {μ_t^m(x) : t ∈ [t_{k−1}, t_k], x ∈ R} ∈ C([t_{k−1}, t_k] × R, R_+). Conditioned on F_{t_{k−1}}, the above result still holds for k = 2, . . . , m by mathematical induction. In other words, for each k = 1, . . . , m, there is a pathwise unique continuous C_tem^+(R)-valued solution (μ_t^m)_{t∈[t_{k−1},t_k]} to (2.25). Moreover, {μ_t^m(x) : t ∈ [t_{k−1}, t_k], x ∈ R} ∈ C([t_{k−1}, t_k] × R, R_+) satisfies (2.26). The result follows. □

For x ∈ R, let J(x) = ∫_R e^{−|y|} ρ(x − y)dy, where ρ is the mollifier given by (2.20). Let J^{(n)}(x) be the nth derivative of J(x). By Mitoma [15, (2.1)], for n ≥ 0, there are constants c_n, C_n such that

    c_n e^{−|x|} ≤ J^{(n)}(x) ≤ C_n e^{−|x|},  x ∈ R.    (2.27)
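The two-sided bound (2.27) for n = 0 can be checked numerically; the quadrature below is our own verification sketch with arbitrary grid parameters.

```python
import numpy as np

def rho(u):
    """Bump exp(-1/(1-u^2)) on |u| < 1 from (2.20), normalized below."""
    out = np.zeros_like(u, dtype=float)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

us = np.linspace(-1.0, 1.0, 2001)
C = 1.0 / (rho(us).sum() * (us[1] - us[0]))      # normalizing constant of rho

def J(x, n_quad=4001):
    """J(x) = int e^{-|y|} rho(x - y) dy, quadrature over the support y in [x-1, x+1]."""
    y = np.linspace(x - 1.0, x + 1.0, n_quad)
    dy = y[1] - y[0]
    return (np.exp(-np.abs(y)) * C * rho(x - y)).sum() * dy

xs = np.linspace(-8.0, 8.0, 33)
ratios = np.array([J(x) * np.exp(abs(x)) for x in xs])
# c_0 <= J(x) e^{|x|} <= C_0 uniformly, in line with (2.27) for n = 0
```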

Lemma 2.8. For every T > 0 and p ≥ 1, we have

    sup_{0≤t≤T, m≥1} E[∫_R μ_t^m(x)^{2p} J(x)dx] < ∞.

Proof. Let 0 ≤ t ≤ T. Using the convolution form, the solution (μ_t^m)_{t≥0} to (2.22) can be represented as

    μ_t^m(x) = ⟨μ_0, p_t(x − ·)⟩
        + Σ_{k=1}^m ∫_{t_{k−1}∧t}^{t_k∧t} ∫_R √(γ_m(μ_{t_{k−1}}^m, z)) G_m(μ_s^m(z)) p_{t−s}(x − z) W(ds, dz).    (2.28)


For each n ≥ 1 we define the stopping time σ_n^m = inf{t ≥ 0 : ∫_R μ_t^m(x)^{2p} J(x)dx ≥ n}. Recall that μ_0 ∈ C_c(R)^+. For any p ≥ 1, it follows from (2.23) that

    sup_{0≤t≤T} ∫_R μ_t^m(x)^{2p} J(x)dx ≤ [sup_{0≤t≤T} sup_{x∈R} e^{−|x|/4p} μ_t^m(x)]^{2p} ∫_R e^{|x|/2} J(x)dx < ∞

almost surely for every T > 0 and m ≥ 1. We then have σ_n^m → ∞ almost surely as n → ∞.
By (2.3) and Hölder’s inequality, for any t ≤ T we have
⏐∫ t ∫ ⏐2 p
⏐ 1{s≤σ m } ds µm (z) pt−s (x − z)2 dz ⏐
⏐ ⏐
⏐ n s ⏐
0 R
⏐∫ t ⏐2 p
1{s≤σnm }

ds µs (z) pt−s (x − z)dz ⏐⏐
⏐ m

≤⏐ ⏐ √
0 t −s R
[∫ ⏐2 p ] [⏐∫ t ⏐2 p/q ]
t
⏐∫
1{s≤σn } ⏐
m 1
µs (z) pt−s (x − z)dz ⏐⏐ ds · ⏐⏐
⏐ m
⏐ ⏐ ⏐
≤K √ √ ds ⏐⏐
0 t −s R ⏐
0 t −s
∫ t
1{s≤σnm }

≤K √ ds µm 2p
s (z) pt−s (x − z)dz (2.29)
0 t −s R

with 1/2 p + 1/q = 1 and p, q ≥ 1, and the above K depending on T . By the definition of
J (·), we have J (u + z) ≤ J (z)e|u| for u, z ∈ R. Moreover,
∫ ∫ √ [ √ ]
|u|
e pt−s (u)du ≤ e T |u| p1 (u)du = E e T |W1 | ≤ K (2.30)
R R

for 0 ≤ s ≤ t ≤ T , where (Wt )t≥0 is a standard Brownian motion. By (2.29), (2.30) and a
change of variable, we obtain
    E[∫_R J(x)dx |∫_0^t 1_{s≤σ_n^m} ds ∫_R μ_s^m(z) p_{t−s}(x − z)² dz|^{2p}]
        ≤ K E[∫_0^t (1_{s≤σ_n^m}/√(t − s)) ds ∫_R J(x)dx ∫_R μ_s^m(z)^{2p} p_{t−s}(x − z) dz]
        ≤ K E[∫_0^t (1_{s≤σ_n^m}/√(t − s)) ds ∫_R μ_s^m(z)^{2p} dz ∫_R J(u + z) p_{t−s}(u) du]
        ≤ K E[∫_0^t (1_{s≤σ_n^m}/√(t − s)) ds ∫_R μ_s^m(z)^{2p} J(z) dz ∫_R e^{|u|} p_{t−s}(u) du]
        ≤ K ∫_0^t (t − s)^{−1/2} E[1_{s≤σ_n^m} ∫_R μ_s^m(z)^{2p} J(z) dz] ds,    (2.31)

where the above K depends on T only. By (2.3) again we have

    E[∫_R J(x)dx |∫_0^t 1_{s≤σ_n^m} ds ∫_R p_{t−s}(x − z)² dz|^p] ≤ ∫_R J(x)dx |∫_0^t (t − s)^{−1/2} ds|^p ≤ K,    (2.32)

where again the constant K only depends on T. Now we denote

    M_n(t, x) = Σ_{k=1}^m ∫_{t_{k−1}∧t}^{t_k∧t} ∫_R 1_{s≤σ_n^m} √(γ_m(μ_{t_{k−1}}^m, z)) G_m(μ_s^m(z)) p_{t−s}(x − z) W(ds, dz).

By the Burkholder–Davis–Gundy inequality, (2.21), (2.24), (2.31) and (2.32), we have

    ∫_R J(x) E[|M_n(t, x)|^{4p}] dx
        = ∫_R J(x)dx E[|∫_0^t ∫_R Σ_{k=1}^m 1_{(t_{k−1},t_k]}(s) 1_{s≤σ_n^m} √(γ_m(μ_{t_{k−1}}^m, z)) G_m(μ_s^m(z)) p_{t−s}(x − z) W(ds, dz)|^{4p}]
        ≤ K ∫_R J(x)dx E[|∫_0^t ∫_R 1_{s≤σ_n^m} G_m(μ_s^m(z))² p_{t−s}(x − z)² ds dz|^{2p}]
        ≤ K E[∫_R J(x)dx |∫_0^t ds ∫_R 1_{s≤σ_n^m} (μ_s^m(z) + 1) p_{t−s}(x − z)² dz|^{2p}]
        ≤ K + K ∫_0^t (t − s)^{−1/2} E[1_{s≤σ_n^m} ∫_R μ_s^m(z)^{2p} J(z) dz] ds,    (2.33)
where the above constants K only depend on T. Recall that μ_0 ∈ C_c(R)^+, which implies μ_0 is bounded. Then we have

    ∫_R ⟨μ_0, p_t(x − ·)⟩^{2p} J(x)dx = ∫_R J(x)dx [∫_R p_t(x − z)μ_0(z)dz]^{2p} ≤ K ∫_R J(x)dx,    (2.34)

which is bounded. Note that

    μ_t^m(x) 1_{t≤σ_n^m} = ⟨μ_0, p_t(x − ·)⟩ 1_{t≤σ_n^m} + 1_{t≤σ_n^m} M_n(t, x).    (2.35)
By Hölder’s inequality and a ≤ a 2 + 1 for every a ∈ R, we then have
]}1/2
E |1{t≤σnm } Mn (t, x)|2 p ≤ E |Mn (t, x)|4 p ≤ E |Mn (t, x)|4 p + 1.
[ ] { [ [ ]

Combining the above inequality, (2.33), (2.34) and (2.35), we obtain


[∫ ]
E 1{t≤σnm } µt (x) J (x)d x
m 2 p
R
∫ t [∫ ]
1
≤K+K √ E 1{s≤σn } µs (x) J (x)d x ds.
m
m 2p
0 t −s R
Iterating the above once, one can check that
[∫ ]
E 1{t≤σnm } µmt (x) 2p
J (x)d x
R
[∫ t ∫ ∫ t ]
1
≤ K + KE dr 1{r ≤σnm } µrm (x)2 p J (x)d x √ ds
(t − s)(s − r )
∫ t 0 [∫ R ] r
≤K+K E 1{r ≤σnm } µrm (x)2 p J (x)d x dr.
0 R

By Gronwall’s inequality (see, e.g., [24, Theorem 1]), we have


[∫ ]
E 1{t≤σnm } µm
t (x) 2p
J (x)d x ≤ K eK t , 0 ≤ t ≤ T,
R
where the above K is independent of t, n, m. Letting n → ∞, the result follows from the
monotone convergence theorem. □
We proceed to proving the tightness of (μ_·^m) in C([0, T] × R, R_+). Denote

    ν_t^m(x) = Σ_{k=1}^m ∫_{t_{k−1}∧t}^{t_k∧t} ∫_R √(γ_m(μ_{t_{k−1}}^m, z)) G_m(μ_s^m(z)) p_{t−s}(x − z) W(ds, dz).    (2.36)

Lemma 2.9. For any fixed 0 < δ < 1/6, p ≥ 1 and T > 0, there is a constant K > 0 depending on T such that
$$\mathrm{E}\big[|\nu^m_t(x)-\nu^m_r(x)|^{2p}\big] \le K(t-r)^{\delta+(p-1)/2}, \qquad 0 < r < t \le T.$$

Proof. By (2.36) we get
$$\begin{aligned}
|\nu^m_t(x)-\nu^m_r(x)| = \bigg|&\int_0^r\!\!\int_{\mathbb{R}}\sum_{k=1}^m 1_{[t_{k-1},t_k)}(s)\sqrt{\gamma_m(\mu^m_{t_{k-1}},z)G_m(\mu^m_s(z))}\,[p_{r-s}(x-z)-p_{t-s}(x-z)]\,W(ds,dz)\\
&- \int_r^t\!\!\int_{\mathbb{R}}\sum_{k=1}^m 1_{[t_{k-1},t_k)}(s)\sqrt{\gamma_m(\mu^m_{t_{k-1}},z)G_m(\mu^m_s(z))}\,p_{t-s}(x-z)\,W(ds,dz)\bigg|.
\end{aligned}$$

By the above, (2.21), (2.24) and the Burkholder–Davis–Gundy inequality one can check that, for 0 < r < t ≤ T,
$$\begin{aligned}
\mathrm{E}\big[|\nu^m_t(x)-\nu^m_r(x)|^{2p}\big]
&\le K\,\mathrm{E}\bigg[\Big|\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,G_m(\mu^m_s(z))^2\,ds\,dz\Big|^p\bigg]\\
&\quad + K\,\mathrm{E}\bigg[\Big|\int_r^t\!\!\int_{\mathbb{R}} G_m(\mu^m_s(z))^2\,p_{t-s}(x-z)^2\,ds\,dz\Big|^p\bigg]\\
&\le K\,\mathrm{E}\bigg[\Big|\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,[\mu^m_s(z)+1]\,ds\,dz\Big|^p\bigg]\\
&\quad + K\,\mathrm{E}\bigg[\Big|\int_r^t\!\!\int_{\mathbb{R}} [\mu^m_s(z)+1]\,p_{t-s}(x-z)^2\,ds\,dz\Big|^p\bigg]\\
&=: I^m_1(t,r) + I^m_2(t,r). \qquad(2.37)
\end{aligned}$$
Further, by Hölder's inequality, Lemma 2.8, (2.3) and (2.27), we have
$$\mathrm{E}\bigg[\int_{\mathbb{R}} p_{r-s}(x-z)^{2-\delta}|\mu^m_s(z)|^p\,dz\bigg]
\le K\,\mathrm{E}\bigg[\Big(\int_{\mathbb{R}} |\mu^m_s(z)|^{2p} J(z)\,dz\Big)^{1/2}\bigg]\Big(\int_{\mathbb{R}} p_{r-s}(x-z)^{4-2\delta} e^{|z|}\,dz\Big)^{1/2}
\le K(r-s)^{(2\delta-3)/4}. \qquad(2.38)$$

The above inequality still holds with r replaced by t. By Hölder's inequality, (2.14) and (2.38), for δ ∈ (0, 1/6) and 0 < r < t ≤ T, we have
$$\begin{aligned}
\mathrm{E}\bigg[\int_0^r ds\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\, |\mu^m_s(z)|^p\,dz\bigg]
&\le K(t-r)^{\delta}\int_0^r (r-s)^{-3\delta/2}\,\mathrm{E}\bigg[\int_{\mathbb{R}} \big[p_{r-s}(x-z)^{2-\delta}+p_{t-s}(x-z)^{2-\delta}\big]|\mu^m_s(z)|^p\,dz\bigg]\,ds\\
&\le K(t-r)^{\delta}\bigg[\int_0^r (r-s)^{-(3+4\delta)/4}\,ds + \int_0^r (r-s)^{-3\delta/2}(t-s)^{(2\delta-3)/4}\,ds\bigg]\\
&\le K(t-r)^{\delta}\bigg[T^{(1-4\delta)/4} + \Big(\int_0^r (r-s)^{-6\delta}\,ds\Big)^{1/4}\Big(\int_0^r (t-s)^{(2\delta-3)/3}\,ds\Big)^{3/4}\bigg]\\
&\le K(t-r)^{\delta}, \qquad(2.39)
\end{aligned}$$
where the K in the last inequality depends on T. By [25, Lemma 1.4.4], there exists a constant K independent of r, t, T such that
$$\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,ds\,dz \le K|t-r|^{1/2} \qquad(2.40)$$
and
$$\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,ds\,dz \le K|t-r|^{1/2}. \qquad(2.41)$$
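The Gaussian-kernel bounds (2.40)–(2.41) admit an explicit instance: since p_t(x) = (2πt)^{−1/2}e^{−x²/(2t)}, one has ∫_R p_u(z)²dz = (4πu)^{−1/2}, hence ∫_r^t∫_R p_{t−s}(x−z)²dsdz = √(t−r)/√π, of exactly the order |t−r|^{1/2}. A minimal numerical sketch of this computation (the helper names here are ours, not from the paper):

```python
import numpy as np

def p(t, x):
    # one-dimensional heat kernel p_t(x) = (2*pi*t)^{-1/2} exp(-x^2/(2t))
    return np.exp(-x**2 / (2.0*t)) / np.sqrt(2.0*np.pi*t)

# check: \int_R p_u(z)^2 dz = (4*pi*u)^{-1/2}
grid = np.linspace(-50.0, 50.0, 200001)
for u in [0.3, 0.7, 1.5]:
    numeric = np.trapz(p(u, grid)**2, grid)
    exact = 1.0 / np.sqrt(4.0*np.pi*u)
    assert abs(numeric - exact) < 1e-8

# hence \int_r^t \int_R p_{t-s}(x-z)^2 ds dz = sqrt(t-r)/sqrt(pi),
# an explicit instance of the K|t-r|^{1/2} bound in (2.41)
r, t = 0.3, 1.2
exact_double = np.sqrt(t - r) / np.sqrt(np.pi)
s = np.linspace(r, t, 400001)[:-1]   # drop s = t (integrable singularity)
numeric_double = np.trapz(1.0 / np.sqrt(4.0*np.pi*(t - s)), s)
assert abs(numeric_double - exact_double) < 1e-2
```

The s-integral has an integrable square-root singularity at s = t, which is why the last grid point is dropped and the tolerance is looser there.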
By Hölder's inequality, (2.39) and (2.40), it follows that
$$\begin{aligned}
I^m_1(t,r) &\le K\,\mathrm{E}\bigg[\int_0^r ds\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,|\mu^m_s(z)|^p\,dz\bigg]\bigg[\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,ds\,dz\bigg]^{p-1} + K(t-r)^{p/2}\\
&\le K(t-r)^{\delta+(p-1)/2} + K(t-r)^{p/2}, \qquad(2.42)
\end{aligned}$$
where the above K depends on T. Similarly, by Hölder's inequality and Lemma 2.8, one can see that
$$\mathrm{E}\bigg[\int_r^t\!\!\int_{\mathbb{R}} |\mu^m_s(z)|^p\,p_{t-s}(x-z)^2\,ds\,dz\bigg]
\le K\,\mathrm{E}\bigg[\int_r^t ds\Big[\int_{\mathbb{R}} |\mu^m_s(z)|^{2p} J(z)\,dz\Big]^{1/2}\Big[\int_{\mathbb{R}} p_{t-s}(x-z)^4 e^{|z|}\,dz\Big]^{1/2}\bigg]
\le K\int_r^t (t-s)^{-3/4}\,ds = K(t-r)^{1/4},$$
where the above K depends on T. By the above and (2.41) we obtain
$$\begin{aligned}
I^m_2(t,r) &\le K\,\mathrm{E}\bigg[\Big|\int_r^t\!\!\int_{\mathbb{R}} \mu^m_s(z)\,p_{t-s}(x-z)^2\,ds\,dz\Big|^p\bigg] + K\Big|\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,ds\,dz\Big|^p\\
&\le K\,\mathrm{E}\bigg[\int_r^t\!\!\int_{\mathbb{R}} |\mu^m_s(z)|^p\,p_{t-s}(x-z)^2\,ds\,dz\bigg]\bigg[\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,ds\,dz\bigg]^{p-1} + K(t-r)^{p/2}\\
&\le K(t-r)^{1/4+(p-1)/2} + K(t-r)^{p/2}. \qquad(2.43)
\end{aligned}$$
The result follows from (2.37), (2.42) and (2.43). □
Recall that C([0, T], C(R)+) is the space of continuous maps from [0, T] to C(R)+ with the topology induced by (1.9), with ∥f(t) − g(t)∥_Y replaced by ∥f(t) − g(t)∥_{C(R)+} = ∫_0^∞ e^{−L}(∥f(t,·) − g(t,·)∥_{[−L,L]} ∧ 1)dL, where ∥·∥_{[−L,L]} is the supremum norm on [−L, L]. One can check that C([0, T] × R, R+) is a subspace of C([0, T], C(R)+). Similar to the above, we have the following result.

Lemma 2.10. For fixed 0 < β < 1 and p ≥ 1, there is a constant K such that
$$\mathrm{E}\big[|\nu^m_t(x)-\nu^m_t(y)|^{2p}\big] \le K|x-y|^{\beta p}, \qquad x,y\in\mathbb{R}.$$

Lemma 2.11. Suppose that (µ^m_t)_{t∈[0,T]} is a solution to (2.28). Then for every m ≥ 1, we have E[⟨µ^m_t, 1⟩] = ⟨µ0, 1⟩.

Proof. Notice that γ_m, G_m are bounded for every m ≥ 1. Let t ∈ [0, T] be fixed. Recall that p_{t−s}(x − z) = 0 for all s > t. Then for any u ≥ 0, we have
$$\mathrm{E}\bigg[\int_0^u\!\!\int_{\mathbb{R}}\sum_{k=1}^m 1_{[t_{k-1},t_k)}(s)\,\gamma_m(\mu^m_{t_{k-1}},z)\,G_m(\mu^m_s(z))\,p_{t-s}(x-z)^2\,ds\,dz\bigg] \le K\int_0^u\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,ds\,dz \le K\sqrt{T} < \infty,$$
which implies that
$$u \mapsto \int_0^u\!\!\int_{\mathbb{R}}\sum_{k=1}^m 1_{[t_{k-1},t_k)}(s)\sqrt{\gamma_m(\mu^m_{t_{k-1}},z)G_m(\mu^m_s(z))}\,p_{t-s}(x-z)\,W(ds,dz)$$
is a martingale on [0, t] for any fixed t ∈ [0, T]. By (2.28), one sees that E[µ^m_t(x)] = ⟨µ0, p_t(x − ·)⟩. Then E[⟨µ^m_t, 1⟩] = ⟨µ0, 1⟩. The result follows. □

Proof of Theorem 1.2. By (2.28) and (2.36) one can check that
$$\mu^m_t(x) = \langle\mu_0, p_t(x-\cdot)\rangle + \nu^m_t(x).$$
Notice that p can be taken greater than max{2(1 − δ) + 1, 1/β} in Lemmas 2.9 and 2.10. Then the Kolmogorov–Chentsov criterion (see [10, Theorem 3.23]) holds. By [10, Corollary 16.9], one can check that, for each T, L > 0, the sequence {µ^m_t(x) : t ∈ [0, T], x ∈ [−L, L]} is tight in C([0, T] × [−L, L], R+), which implies that {µ^m_t(x) : t ∈ [0, T], x ∈ R} is tight in C([0, T] × R, R+) under the topology induced by (1.8); hence, it has a weakly convergent subsequence {µ^{m_k}_t(x) : t ∈ [0, T], x ∈ R} with limit {µ_t(x) : t ∈ [0, T], x ∈ R}. Let
$$B_t(\varphi) = \int_0^t\!\!\int_{\mathbb{R}} \varphi(x)\,W(ds,dx).$$

Let X be a suitable Banach space containing L²(R) such that (ι, L²(R), X) is an abstract Wiener space, where ι denotes the inclusion map of L²(R) into X; see [12, §I.4] for the definition of an abstract Wiener space. Similar to [11, Theorem 3.2.4], one can check that (B_t)_{t≥0} is an L²(R)-cylindrical Brownian motion taking values in X. Thus, (µ^{m_k}, B) → (µ, B) in law on C([0, T] × R, R+) × C([0, T], X) as k → ∞. Then, (µ^{m_k}_t, B_t)_{t∈[0,T]} → (µ_t, B_t)_{t∈[0,T]} in law on C([0, T], C(R)+) × C([0, T], X).
Applying Skorokhod's representation theorem (e.g. [5, p.102, Theorem 1.8]), on another probability space there are continuous processes (µ̂^{m_k}_t, B̂^{m_k}_t)_{t∈[0,T]} and (µ̂_t, B̂_t)_{t∈[0,T]} with the same distributions as (µ^{m_k}_t, B_t)_{t∈[0,T]} and (µ_t, B_t)_{t∈[0,T]}, respectively. Moreover, (µ̂^{m_k}_t, B̂^{m_k}_t)_{t∈[0,T]} → (µ̂_t, B̂_t)_{t∈[0,T]} almost surely as k → ∞. Recall that there is a pathwise unique continuous C^+_{tem}(R)-valued solution to (2.22). For any f ∈ C^∞_c(R), almost surely the following holds for each t ∈ [0, T]:
$$\langle\hat\mu^{m_k}_t, f\rangle = \langle\mu_0, f\rangle + \frac{1}{2}\int_0^t \langle\hat\mu^{m_k}_s, f''\rangle\,ds + \sum_{j=1}^{m_k}\int_{t_{j-1}\wedge t}^{t_j\wedge t}\int_{\mathbb{R}} \sqrt{\gamma_{m_k}(\hat\mu^{m_k}_{t_{j-1}},z)G_{m_k}(\hat\mu^{m_k}_s(z))}\,f(z)\,\hat W^{m_k}(ds,dz),$$

where Ŵ^{m_k}(ds, dz) is the time–space white noise corresponding to (B̂^{m_k}_t)_{t∈[0,T]}. It follows from Lemma 2.8 and Fatou's lemma that, for every p ≥ 1, we have
$$\begin{aligned}
\sup_{0\le t\le T}\mathrm{E}\bigg[\int_{\mathbb{R}} \hat\mu_t(x)^{2p} J(x)\,dx\bigg]
&= \sup_{0\le t\le T}\mathrm{E}\bigg[\int_{\mathbb{R}} \lim_{k\to\infty}\hat\mu^{m_k}_t(x)^{2p} J(x)\,dx\bigg]
\le \sup_{0\le t\le T}\liminf_{k\to\infty}\mathrm{E}\bigg[\int_{\mathbb{R}} \hat\mu^{m_k}_t(x)^{2p} J(x)\,dx\bigg]\\
&\le \sup_{0\le t\le T,\,k\ge1}\mathrm{E}\bigg[\int_{\mathbb{R}} \hat\mu^{m_k}_t(x)^{2p} J(x)\,dx\bigg]
\le K e^{KT} < \infty.
\end{aligned}$$

Then (1.5) holds for (µ̂_t)_{t≥0}. Further, for every t ∈ [0, T], by the above and Lemma 2.8 we have
$$\lim_{k\to\infty}\mathrm{E}\bigg[\int_{\mathbb{R}} |\hat\mu^{m_k}_t(x)-\hat\mu_t(x)|^2 J(x)\,dx\bigg] = 0$$
and
$$\sup_{0\le t\le T,\,k\ge1}\mathrm{E}\bigg[\int_{\mathbb{R}} [\hat\mu_t(x)+\hat\mu^{m_k}_t(x)]^2 J(x)\,dx\bigg] < \infty.$$
Then by the above we have
$$\mathrm{E}\bigg[\int_0^t \langle|\hat\mu^{m_k}_s-\hat\mu_s|, f''\rangle\,ds\bigg] \le K\,\mathrm{E}\bigg[\int_0^t ds\int_{\mathbb{R}} |\hat\mu^{m_k}_s(x)-\hat\mu_s(x)|^2 J(x)\,dx\bigg]^{1/2} \to 0$$


as k → ∞. Recall that γ satisfies Condition 1.1. Thus,
$$\mathrm{E}\bigg[\Big|\int_0^t\!\!\int_{\mathbb{R}} \sqrt{\gamma(\hat\mu_s,z)\hat\mu_s(z)}\,f(z)\,\hat W^{m_k}(ds,dz)\Big|^2\bigg]
\le K\,\mathrm{E}\bigg[\int_0^t\!\!\int_{\mathbb{R}} \hat\mu_s(z) J(z)\,ds\,dz\bigg]
\le K\bigg\{\mathrm{E}\bigg[\int_0^t\!\!\int_{\mathbb{R}} \hat\mu_s(z)^2 J(z)\,ds\,dz\bigg]\bigg\}^{1/2} < \infty.$$

By [28, Lemma 2.4], for any f ∈ C^∞_c(R), we have
$$\langle\hat\mu_t, f\rangle = \langle\mu_0, f\rangle + \frac{1}{2}\int_0^t \langle\hat\mu_s, f''\rangle\,ds + \int_0^t\!\!\int_{\mathbb{R}} \sqrt{\gamma(\hat\mu_s,z)\hat\mu_s(z)}\,f(z)\,\hat W(ds,dz), \qquad t\in[0,T], \qquad(2.44)$$
almost surely, where Ŵ(ds, dz) is the time–space white noise corresponding to (B̂_t)_{t∈[0,T]}. By Fatou's lemma and Lemma 2.11, we have
$$\mathrm{E}[\langle\hat\mu_t, 1\rangle] = \mathrm{E}\Big[\lim_{k\to\infty}\langle\hat\mu^{m_k}_t, 1\rangle\Big] \le \liminf_{k\to\infty}\mathrm{E}[\langle\hat\mu^{m_k}_t, 1\rangle] = \lim_{k\to\infty}\mathrm{E}[\langle\mu^{m_k}_t, 1\rangle] = \langle\mu_0, 1\rangle.$$

For any n ≥ m, let 1_{[m,n]}(x) be the indicator function of [m, n]. By the above and the dominated convergence theorem, we have
$$\lim_{m\to\infty}\lim_{n\to\infty}\mathrm{E}[\langle\hat\mu_t, 1_{[m,n]}\rangle] = 0. \qquad(2.45)$$

For any fixed nonnegative function f ∈ C²_b(R), we will prove below that (2.44) holds for any t ∈ [0, T] almost surely. In fact, there exists a nonnegative function f_{m,n} ∈ C^∞_c(R) such that
$$\begin{cases}
f_{m,n}(x) = f(x), & x\in[m,n];\\
0 \le f_{m,n}(x) \le f(x)\cdot 1_{[m-1,n+1]}(x);\\
|f''_{m,n}(x)| \le K\,1_{[m-1,n+1]}(x),
\end{cases} \qquad(2.46)$$
where K > 0 is a constant independent of m, n. An example in C^∞_c(R) satisfying (2.46) is
$$f_{m,n}(x) = \begin{cases}
0, & x < m-1,\\[2pt]
2f(x)\displaystyle\int_{m-1}^x \rho(2(y-m)+1)\,dy, & x\in[m-1,m),\\[2pt]
f(x), & x\in[m,n),\\[2pt]
f(x)\Big[1-2\displaystyle\int_n^x \rho(2(y-n)-1)\,dy\Big], & x\in[n,n+1),\\[2pt]
0, & x\ge n+1,
\end{cases}$$
where ρ is the mollifier defined by (2.20). It is easy to see that 0 ≤ f_{m,n}(x) ≤ f(x) · 1_{[m−1,n+1]}(x). Moreover, sup_{x∈(−1,1)} ρ(x) ≤ C and sup_{x∈(−1,1)} |ρ′(x)| ≤ 8C, where C is the

constant in (2.20). For any x ∈ [m − 1, m),
$$|f''_{m,n}(x)| = \Big|2f''(x)\int_{m-1}^x \rho(2(y-m)+1)\,dy + 4f'(x)\,\rho(2(x-m)+1) + 4f(x)\,\rho'(2(x-m)+1)\Big| \le 2\|f''\| + 4C\|f'\| + 32C\|f\| =: K,$$
where ∥·∥ is the supremum norm. Similarly, the above also holds for any x ∈ [n, n + 1). This means |f″_{m,n}(x)| ≤ K 1_{[m−1,n+1]}(x), so the above f_{m,n} satisfies (2.46). By (2.45), (2.46) and the fact that f(x) ≤ ∥f∥ for all x ∈ R, we have
$$\lim_{m\to\infty}\lim_{n\to\infty}\mathrm{E}\big[\langle\hat\mu_t, f_{m,n}\rangle + \langle\hat\mu_t, |f''_{m,n}|\rangle\big] = 0. \qquad(2.47)$$
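The cutoff construction above is easy to probe numerically. The sketch below assumes ρ is a standard bump mollifier supported in (−1, 1) with unit integral — the actual definition (2.20) lies outside this excerpt — and checks the first two conditions in (2.46) for a sample f:

```python
import numpy as np

# assumed form of the mollifier rho from (2.20): a bump on (-1, 1),
# normalized numerically so that it integrates to 1
def bump(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m_ = np.abs(x) < 1
    out[m_] = np.exp(-1.0 / (1.0 - x[m_]**2))
    return out

_zs = np.linspace(-1.0, 1.0, 400001)
_C = np.trapz(bump(_zs), _zs)

def rho(x):
    return bump(x) / _C

def cutoff(f, m, n, x):
    # piecewise definition of f_{m,n} from the proof
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for i, xi in enumerate(x):
        if m - 1 <= xi < m:
            ys = np.linspace(m - 1, xi, 2001)
            out[i] = f(xi) * 2.0 * np.trapz(rho(2*(ys - m) + 1), ys)
        elif m <= xi < n:
            out[i] = f(xi)
        elif n <= xi < n + 1:
            ys = np.linspace(n, xi, 2001)
            out[i] = f(xi) * (1.0 - 2.0*np.trapz(rho(2*(ys - n) - 1), ys))
    return out

f = lambda x: 1.0 / (1.0 + np.asarray(x, dtype=float)**2)  # a sample f in C_b^2
m, n = 2, 5
xs = np.linspace(0.0, 8.0, 801)
vals = cutoff(f, m, n, xs)
# f_{m,n} = f on [m, n) and 0 <= f_{m,n} <= f * 1_{[m-1, n+1]}
assert np.allclose(vals[(xs >= m) & (xs < n)], f(xs[(xs >= m) & (xs < n)]))
assert np.all(vals >= -1e-5) and np.all(vals <= f(xs) + 1e-5)
assert np.all(vals[(xs < m - 1) | (xs >= n + 1)] == 0)
```

The small tolerances absorb the quadrature error in the numerically normalized mollifier; with the exact ρ the inequalities hold without slack.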

Moreover, it follows from Doob's inequality that
$$\begin{aligned}
\mathrm{E}\bigg[\sup_{0\le t\le T}\Big|\int_0^t\!\!\int_{\mathbb{R}} \sqrt{\hat\mu_s(z)\gamma(\hat\mu_s,z)}\,f_{m,n}(z)\,\hat W(ds,dz)\Big|^2\bigg]
&\le K\,\mathrm{E}\bigg[\int_0^T\!\!\int_{\mathbb{R}} \hat\mu_s(z)\gamma(\hat\mu_s,z)\,f_{m,n}(z)^2\,ds\,dz\bigg]\\
&\le K\int_0^T \mathrm{E}[\langle\hat\mu_s, f_{m,n}\rangle]\,ds. \qquad(2.48)
\end{aligned}$$

By (2.44), it holds that
$$\begin{aligned}
\mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f_{m,n}\rangle\Big]
&\le \langle\mu_0, f_{m,n}\rangle + \frac{1}{2}\,\mathrm{E}\bigg[\int_0^T \langle\hat\mu_s, |f''_{m,n}|\rangle\,ds\bigg]
+ \mathrm{E}\bigg[\sup_{0\le t\le T}\Big|\int_0^t\!\!\int_{\mathbb{R}} \sqrt{\hat\mu_s(z)\gamma(\hat\mu_s,z)}\,f_{m,n}(z)\,\hat W(ds,dz)\Big|\bigg].
\end{aligned}$$
Recall that µ0 ∈ Cc(R)+. Taking n → ∞ and then m → ∞, by (2.47), (2.48) and the above we have
$$\lim_{m\to\infty}\lim_{n\to\infty}\mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f_{m,n}\rangle\Big] = 0. \qquad(2.49)$$

On the other hand, by Fatou's lemma and (2.46) we have
$$\langle\hat\mu_t, f 1_{[m,\infty)}\rangle \le \lim_{n\to\infty}\langle\hat\mu_t, f 1_{[m,n]}\rangle \le \liminf_{n\to\infty}\langle\hat\mu_t, f_{m,n}\rangle,$$
which implies that
$$\mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f 1_{[m,\infty)}\rangle\Big] \le \mathrm{E}\Big[\sup_{0\le t\le T}\liminf_{n\to\infty}\langle\hat\mu_t, f_{m,n}\rangle\Big] \le \liminf_{n\to\infty}\mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f_{m,n}\rangle\Big].$$
By the above and (2.49), one can see that
$$\lim_{m\to\infty}\mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f\cdot 1_{[m,\infty)}\rangle\Big] = 0. \qquad(2.50)$$

Similarly,
$$\lim_{m\to\infty}\mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f\cdot 1_{(-\infty,-m]}\rangle\Big] = 0. \qquad(2.51)$$
Now we take the sequence of functions {f_{−m,m}}^∞_{m=1} satisfying (2.46). It follows from (2.46), (2.50) and (2.51) that
$$\mathrm{E}\Big[\sup_{0\le t\le T}|\langle\hat\mu_t, f_{-m,m}-f\rangle|\Big] \le \mathrm{E}\Big[\sup_{0\le t\le T}\int_{\mathbb{R}} \hat\mu_t(x)\,|f_{-m,m}(x)-f(x)|\,dx\Big] \le \mathrm{E}\Big[\sup_{0\le t\le T}\langle\hat\mu_t, f\cdot 1_{[m,\infty)} + f\cdot 1_{(-\infty,-m]}\rangle\Big]$$
goes to 0 as m → ∞. Moreover, by (2.46) and the dominated convergence theorem, we have
$$\lim_{m\to\infty}\mathrm{E}\bigg[\Big|\int_0^T \langle\hat\mu_s, f''_{-m,m}-f''\rangle\,ds\Big|\bigg] \le \lim_{m\to\infty}\mathrm{E}\bigg[\int_0^T \langle\hat\mu_s, |f''_{-m,m}-f''|\rangle\,ds\bigg] = \mathrm{E}\bigg[\int_0^T \Big\langle\hat\mu_s, \lim_{m\to\infty}|f''_{-m,m}-f''|\Big\rangle\,ds\bigg] = 0.$$
Similarly, one can check that lim_{m→∞} |⟨µ0, f_{−m,m} − f⟩| = 0 and
$$\mathrm{E}\bigg[\sup_{0\le t\le T}\Big(\int_0^t\!\!\int_{\mathbb{R}} \sqrt{\gamma(\hat\mu_s,z)\hat\mu_s(z)}\,(f_{-m,m}(z)-f(z))\,\hat W(ds,dz)\Big)^2\bigg] \le K\,\mathrm{E}\bigg[\int_0^T\!\!\int_{\mathbb{R}} \hat\mu_s(z)(f_{-m,m}(z)-f(z))^2\,ds\,dz\bigg] \to 0$$
as m → ∞. Then for any nonnegative function f ∈ C²_b(R), (2.44) holds for any t ∈ [0, T] almost surely.
For any f ∈ C²_b(R), there exist nonnegative functions f₁, f₂ ∈ C²_b(R) such that f = f₁ − f₂. Then for any f ∈ C²_b(R), (2.44) holds for any t ∈ [0, T] almost surely. Letting T → ∞ in (2.44) completes the proof of the existence of the solution to (1.3). By Proposition 2.6 one can see that X_t(dx) = µ_t(x)dx satisfies the MP ((1.1), (1.2)), which implies the conclusion. □

Proof of Theorem 1.3. Assume that (µ_t)_{t≥0} is a solution to SPDE (1.3) satisfying (1.5). Then it also satisfies (2.18) by [21, Theorem 2.1]. Let r0 > 0 be fixed. For any r0 ≤ r ≤ t ≤ T and p ≥ 1, we have
$$\begin{aligned}
\mathrm{E}\big[|\mu_t(x)-\mu_r(x)|^{2p}\big]
&\le K\Big|\int_{\mathbb{R}} [p_t(x-z)-p_r(x-z)]\mu_0(z)\,dz\Big|^{2p}
+ K\,\mathrm{E}\bigg[\Big|\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)\sqrt{\gamma(\mu_s,z)\mu_s(z)}\,W(ds,dz)\Big|^{2p}\bigg]\\
&\quad + K\,\mathrm{E}\bigg[\Big|\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]\sqrt{\gamma(\mu_s,z)\mu_s(z)}\,W(ds,dz)\Big|^{2p}\bigg]\\
&=: K(I_1 + I_2 + I_3). \qquad(2.52)
\end{aligned}$$

Recall that µ0 ∈ Cc(R)+, and hence ∫_R µ0(z)²dz < ∞. Note that (2.14) holds for δ = 1 by [18, Lemma III 4.5]. By (2.3), (2.14) and Hölder's inequality, we have
$$\begin{aligned}
I_1 &= \Big|\int_{\mathbb{R}} [p_t(x-z)-p_r(x-z)]\mu_0(z)\,dz\Big|^{2p}
\le \Big|\int_{\mathbb{R}} [p_t(x-z)-p_r(x-z)]^2\,dz\int_{\mathbb{R}} \mu_0(z)^2\,dz\Big|^p\\
&\le K\Big|\int_{\mathbb{R}} [p_t(x-z)-p_r(x-z)]^2\,dz\Big|^p
\le K(t-r)^p r^{-3p/2}\Big|\int_{\mathbb{R}} [p_t(x-z)+p_r(x-z)]\,dz\Big|^p\\
&\le K(t-r)^p r_0^{-3p/2}, \qquad r_0\le r\le t\le T. \qquad(2.53)
\end{aligned}$$
The estimates of I₂ and I₃ below are similar to those of I^m_2(t, r) and I^m_1(t, r) in Lemma 2.9, respectively. By (1.5), (2.30) and Hölder's inequality, for δ ∈ (0, 1/6) one can see that
$$\mathrm{E}\bigg[\int_{\mathbb{R}} p_{t-s}(x-z)^{2-\delta}\mu_s(z)^p\,dz\bigg] \le \mathrm{E}\bigg[\Big(\int_{\mathbb{R}} |\mu_s(z)|^{2p} e^{-|z|}\,dz\Big)^{1/2}\bigg]\Big[\int_{\mathbb{R}} p_{t-s}(x-z)^{4-2\delta} e^{|z|}\,dz\Big]^{1/2} \le K(t-s)^{(2\delta-3)/4} \qquad(2.54)$$
and
$$\mathrm{E}\bigg[\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,\mu_s(z)^p\,ds\,dz\bigg] \le \mathrm{E}\bigg[\int_r^t ds\Big(\int_{\mathbb{R}} |\mu_s(z)|^{2p} e^{-|z|}\,dz\Big)^{1/2}\Big(\int_{\mathbb{R}} p_{t-s}(x-z)^4 e^{|z|}\,dz\Big)^{1/2}\bigg] \le K\int_r^t (t-s)^{-3/4}\,ds \le K(t-r)^{1/4},$$
which implies that
$$I_2 \le K\,\mathrm{E}\bigg[\Big|\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,\mu_s(z)\,ds\,dz\Big|^p\bigg]
\le K\,\mathrm{E}\bigg[\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,\mu_s(z)^p\,ds\,dz\bigg]\bigg[\int_r^t\!\!\int_{\mathbb{R}} p_{t-s}(x-z)^2\,ds\,dz\bigg]^{p-1}
\le K(t-r)^{1/4+(p-1)/2}, \qquad(2.55)$$
where the above constant K only depends on T. By (2.14), (2.54) and Hölder's inequality, for δ ∈ (0, 1/6) we have
$$\begin{aligned}
\mathrm{E}\bigg[\int_0^r ds\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,\mu_s(z)^p\,dz\bigg]
&\le K(t-r)^{\delta}\int_0^r (r-s)^{-3\delta/2}\,\mathrm{E}\bigg[\int_{\mathbb{R}} [p_{r-s}(x-z)^{2-\delta}+p_{t-s}(x-z)^{2-\delta}]\mu_s(z)^p\,dz\bigg]\,ds\\
&\le K(t-r)^{\delta}\bigg[\int_0^r (r-s)^{-(3+4\delta)/4}\,ds + \int_0^r (r-s)^{-3\delta/2}(t-s)^{(2\delta-3)/4}\,ds\bigg]\\
&\le K(t-r)^{\delta}\bigg[T^{(1-4\delta)/4} + \Big(\int_0^r (r-s)^{-6\delta}\,ds\Big)^{1/4}\Big(\int_0^r (t-s)^{(2\delta-3)/3}\,ds\Big)^{3/4}\bigg]
\le K(t-r)^{\delta}
\end{aligned}$$
with K only depending on T. Combining the above with (2.40), it follows that
$$\begin{aligned}
I_3 &\le K\,\mathrm{E}\bigg[\Big|\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,\mu_s(z)\,ds\,dz\Big|^p\bigg]\\
&\le K\,\mathrm{E}\bigg[\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,\mu_s(z)^p\,ds\,dz\bigg]\bigg[\int_0^r\!\!\int_{\mathbb{R}} [p_{r-s}(x-z)-p_{t-s}(x-z)]^2\,ds\,dz\bigg]^{p-1}\\
&\le K(t-r)^{\delta+(p-1)/2} \qquad(2.56)
\end{aligned}$$

with δ ∈ (0, 1/6) and the constant K only depending on T. By (2.52), (2.53), (2.55) and (2.56), we have
$$\mathrm{E}\big[|\mu_t(x)-\mu_r(x)|^{2p}\big] \le K(t-r)^{\delta+(p-1)/2}, \qquad(2.57)$$
where δ ∈ (0, 1/6) and the above constant K only depends on r0, T. Similarly,
$$\mathrm{E}\big[|\mu_t(x)-\mu_t(y)|^{2p}\big] \le K|x-y|^{\beta p}, \qquad \forall\, x,y\in\mathbb{R}, \qquad(2.58)$$
with β ∈ (0, 1), where the above constant K depends on r0. By (2.57) and (2.58), for t, r ∈ [r0, T] and x, y ∈ R, it is easy to see that
$$\mathrm{E}\big[|\mu_t(x)-\mu_r(y)|^{2p}\big] \le K\,\mathrm{E}\big[|\mu_t(x)-\mu_r(x)|^{2p}\big] + K\,\mathrm{E}\big[|\mu_r(x)-\mu_r(y)|^{2p}\big] \le K(t-r)^{\delta+(p-1)/2} + K|x-y|^{\beta p}.$$
Taking p large enough, the result follows from Kolmogorov's continuity criterion (see, e.g., [10, Theorem 3.23]). □

3. Proof of Theorem 1.4

In this section we show the weak uniqueness of the solution to the MP ((1.1), (1.2)) under Condition 1.1. The main idea is to relate the MP ((1.1), (1.2)) to a system of SPDEs satisfied by a sequence of corresponding distribution-function-valued processes. The weak uniqueness of the solution to the MP ((1.1), (1.2)) then follows from the pathwise uniqueness of the solution to the SPDEs.
For (X_t)_{t≥0} satisfying the MP ((1.1), (1.2)), we define the [a_i, a_{i+1})-distribution-function-valued process
$$u^i_t(x) = X_t((a_i, x]), \qquad a_i \le x < a_{i+1}, \quad i = 0,\dots,n. \qquad(3.1)$$

In Proposition 3.1 we show that the processes (u^i_t)_{t≥0}, i = 0, . . . , n, defined by (3.1) solve the following system of SPDEs:
$$\begin{cases}
u^i_t(x) = u^i_0(x) + \displaystyle\int_0^t \frac{1}{2}\Delta u^i_s(x)\,ds + \int_0^t\!\!\int_0^{u^i_s(x)} g_i(\nabla u^i_s(a_{i+1}))\,W_i(ds,dz), \quad x\in[a_i,a_{i+1}),\ i=0,\dots,n-1; & (a)\\[6pt]
u^n_t(x) = u^n_0(x) + \displaystyle\int_0^t \frac{1}{2}\Delta u^n_s(x)\,ds + \int_0^t\!\!\int_0^{u^n_s(x)} g_n(u^n_s(\infty))\,W_n(ds,dz), \quad x\in[a_n,\infty); & (b)\\[6pt]
u^i_t(a_i) = 0, \quad i=0,1,\dots,n;\qquad \nabla u^{i-1}_t(a_i) = \nabla u^i_t(a_i), \quad i=1,\dots,n, & (c)
\end{cases} \qquad(3.2)$$
where u^n_s(∞) := lim_{x→∞} u^n_s(x) and W_i(ds, dz), i = 0, . . . , n, are independent time–space Gaussian white noises on R₊ × R₊ with intensity dsdz. The pathwise uniqueness of the solution to the SPDEs is obtained in Proposition 3.3.
The system of SPDEs (3.2) will be understood in the following form: for any φ_i ∈ C²_b[a_i, a_{i+1}] with φ_i(a_i) = φ′_i(a_{i+1}) = 0, i = 0, 1, . . . , n − 1, and φ_n ∈ C²_b[a_n, ∞) with φ_n(a_n) = φ_n(∞) = 0 (given φ_n(∞) := lim_{x→∞} φ_n(x)), almost surely for each t ≥ 0 we have
$$\begin{cases}
\langle u^i_t,\varphi_i\rangle = \langle u^i_0,\varphi_i\rangle + \dfrac{1}{2}\displaystyle\int_0^t \big[\langle u^i_s,\varphi_i''\rangle + \varphi_i(a_{i+1})\nabla u^i_s(a_{i+1})\big]\,ds\\
\qquad\qquad + \displaystyle\int_0^t\!\!\int_0^\infty \Big[\int_{a_i}^{a_{i+1}} 1_{\{z\le u^i_s(x)\}}\varphi_i(x)\,dx\Big]\, g_i(\nabla u^i_s(a_{i+1}))\,W_i(ds,dz); & (a)\\[8pt]
\langle u^n_t,\varphi_n\rangle = \langle u^n_0,\varphi_n\rangle + \dfrac{1}{2}\displaystyle\int_0^t \langle u^n_s,\varphi_n''\rangle\,ds
+ \displaystyle\int_0^t\!\!\int_0^\infty \Big[\int_{a_n}^{\infty} 1_{\{z\le u^n_s(x)\}}\varphi_n(x)\,dx\Big]\, g_n\big(u^n_s(\infty)\big)\,W_n(ds,dz), & (b)
\end{cases} \qquad(3.3)$$
where ⟨u^i_0, φ_i⟩ = ∫_R u^i_0(x)φ_i(x)dx and ⟨u^n_0, φ_n⟩ = ∫_R u^n_0(x)φ_n(x)dx.

Proposition 3.1. Suppose that (X_t)_{t≥0} is a solution to the MP ((1.1), (1.2)). Then the processes {u^i_t(x) : t ≥ 0, x ∈ [a_i, a_{i+1})}, i = 0, 1, . . . , n, defined by (3.1) solve the system of SPDEs (3.2).

Proof. For any φ_i ∈ C³_c(a_i, a_{i+1}) with i = 0, 1, . . . , n − 1, by integration by parts, almost surely for each t ≥ 0 we have
$$\langle u^i_t, \varphi_i'\rangle = -\langle X_t, \varphi_i\rangle = -M_t(\varphi_i) - \langle X_0, \varphi_i\rangle - \frac{1}{2}\int_0^t \langle X_s, \varphi_i''\rangle\,ds
= -M_t(\varphi_i) + \langle u^i_0, \varphi_i'\rangle + \frac{1}{2}\int_0^t \langle u^i_s, \varphi_i'''\rangle\,ds. \qquad(3.4)$$
Thus
$$-M_t(\varphi_i) = \langle u^i_t, \varphi_i'\rangle - \langle u^i_0, \varphi_i'\rangle - \frac{1}{2}\int_0^t \langle u^i_s, (\varphi_i')''\rangle\,ds \qquad(3.5)$$
is a continuous martingale. By Lemma 2.1 we have
$$-M_t(\varphi_i) = \int_0^t\!\!\int_{\mathbb{R}} \varphi_i(x)\,M(ds,dx)$$

with quadratic variation process (⟨−M(φ_i)⟩_t)_{t≥0} given by
$$\begin{aligned}
\langle -M(\varphi_i)\rangle_t &= \int_0^t g_i(\nabla u^i_s(a_{i+1}))^2\,ds\int_{\mathbb{R}} \varphi_i(x)^2\,X_s(dx)
= \int_0^t g_i(\nabla u^i_s(a_{i+1}))^2\,ds\int_0^{u^i_s(a_{i+1})} \varphi_i(u^i_s(y)^{-1})^2\,dy\\
&= \int_0^t\!\!\int_0^\infty \Big(\int_{a_i}^{a_{i+1}} 1_{\{y\le u^i_s(x)\}}\varphi_i'(x)\,dx\Big)^2 g_i(\nabla u^i_s(a_{i+1}))^2\,ds\,dy,
\end{aligned}$$
where u^i_s(y)^{−1} denotes the generalized inverse of the nondecreasing function u^i_s, that is, u^i_s(y)^{−1} = inf{x ∈ [a_i, a_{i+1}) : u^i_s(x) ≥ y}. Moreover, for φ_n ∈ C³_c(a_n, ∞), one can see that
$$-M_t(\varphi_n) = \langle u^n_t, \varphi_n'\rangle - \langle u^n_0, \varphi_n'\rangle - \frac{1}{2}\int_0^t \langle u^n_s, \varphi_n'''\rangle\,ds \qquad(3.6)$$
is a continuous martingale with quadratic variation process (⟨−M(φ_n)⟩_t)_{t≥0} given by
$$\langle -M(\varphi_n)\rangle_t = \int_0^t g_n(u^n_s(\infty))^2\,ds\int_{\mathbb{R}} \varphi_n(x)^2\,X_s(dx)
= \int_0^t\!\!\int_0^\infty \Big(\int_{a_n}^{\infty} 1_{\{y\le u^n_s(x)\}}\varphi_n'(x)\,dx\Big)^2 g_n(u^n_s(\infty))^2\,ds\,dy.$$
As in Lemma 2.1, the family {−M_t(φ_i) : t ≥ 0, φ_i ∈ C³_c(a_i, a_{i+1})} determines a martingale measure {M_t(B) : t ≥ 0, B ∈ B(a_i, a_{i+1}), i = 0, . . . , n − 1}. Moreover, for φ_i ∈ C³_c(a_i, a_{i+1}) and φ_j ∈ C³_c(a_j, a_{j+1}) with i ≠ j, we have φ_i(x)φ_j(x) = 0 for all x, and then by Lemma 2.1,
$$\langle -M(\varphi_i), -M(\varphi_j)\rangle_t = \int_0^t ds\int_{\mathbb{R}} \gamma(\mu_s,z)\,X_s(dz)\int_{\mathbb{R}}\varphi_i(x)\,\delta_z(dx)\int_{\mathbb{R}}\varphi_j(y)\,\delta_z(dy)
= \int_0^t ds\int_{\mathbb{R}} \gamma(\mu_s,z)\varphi_i(z)\varphi_j(z)\,X_s(dz) = 0.$$
By El Karoui and Méléard [4, Theorem III-7, Corollary III-8], on some extension of the probability space one can define a sequence of independent Gaussian white noises W_i(ds, dz), i = 0, . . . , n, on (0, ∞)² based on dsdz such that, for any φ_i ∈ C³_c(a_i, a_{i+1}), i = 0, . . . , n − 1,
$$-M_t(\varphi_i) = \int_0^t\!\!\int_0^\infty \Big[\int_{a_i}^{a_{i+1}} 1_{\{z\le u^i_s(x)\}}\varphi_i'(x)\sqrt{\gamma(\mu_s,x)}\,dx\Big]\,W_i(ds,dz)
= \int_{a_i}^{a_{i+1}} \varphi_i'(x)\,dx\int_0^t\!\!\int_0^{u^i_s(x)} g_i(\nabla u^i_s(a_{i+1}))\,W_i(ds,dz) \qquad(3.7)$$

for each t ≥ 0 almost surely, and for any φ_n ∈ C³_c(a_n, ∞), we have
$$-M_t(\varphi_n) = \int_{a_n}^{\infty} \varphi_n'(x)\,dx\int_0^t\!\!\int_0^{u^n_s(x)} g_n(u^n_s(\infty))\,W_n(ds,dz) \qquad(3.8)$$
for each t ≥ 0 almost surely. By (3.4), (3.7) and (3.8), Eqs. (3.3) hold for any φ_i ∈ C²_c(a_i, a_{i+1}), i = 0, . . . , n. Moreover, for each T > 0,
$$\sup_{0\le t\le T}\sup_{x\in[a_i,a_{i+1}]} |u^i_t(x)| = \sup_{0\le t\le T} X_t([a_i,a_{i+1}]) < \infty$$

almost surely for i = 0, 1, . . . , n − 1. Therefore, for any φi ∈ Cb2 [ai , ai+1 ] with φi (ai ) =
φi′ (ai+1 ) = 0, i = 0, . . . , n − 1, (a) of (3.3) follows from Lemma A.2 in the Appendix.
Moreover, for any φn ∈ Cb2 [an , ∞) with φn (an ) = φn (∞) = 0, (b) of (3.3) follows from
Lemma A.3 in the Appendix. □

Lemma 3.2. Suppose that (u^n_t(x))_{t≥0,x≥a_n} satisfies (b) of (3.2) with u^n_t(a_n) = 0. Let u^n_t(∞) = lim_{x→∞} u^n_t(x). Then (u^n_t(∞))_{t≥0} satisfies
$$u^n_t(\infty) = u^n_0(\infty) + \int_0^t\!\!\int_0^{u^n_s(\infty)} g_n(u^n_s(\infty))\,W_n(ds,dz). \qquad(3.9)$$

Proof. Recall that p_t(x) = (2πt)^{−1/2}e^{−x²/(2t)} and let
$$q^x_t(y) := p_t(x+a_n-y) - p_t(x-a_n+y)$$
for t > 0 and x, y ≥ a_n. One can check that q^x_t(a_n) = q^x_t(∞) = 0 for any t > 0 and x ≥ a_n. Then (b) of (3.2) can be written in the following mild form, whose proof is similar to [29, Lemma 5.1], or to [13, Theorem 7.26] taking the boundary condition into consideration:
$$u^n_t(x) = \langle u^n_0, q^x_t\rangle + \int_0^t\!\!\int_0^\infty \Big[\int_{a_n}^{\infty} 1_{\{z\le u^n_s(y)\}}\,q^x_{t-s}(y)\,dy\Big]\, g_n(u^n_s(\infty))\,W_n(ds,dz) \qquad(3.10)$$
for any x ≥ a_n, where ⟨u^n_0, q^x_t⟩ = ∫_R u^n_0(y)q^x_t(y)dy. By a change of variables and the dominated convergence theorem, as x → ∞ we have
$$\langle u^n_0, q^x_t\rangle = \int_{a_n}^{\infty} u^n_0(y)\,[p_t(x+a_n-y)-p_t(x-a_n+y)]\,dy
= \int_{-\infty}^{x} u^n_0(x+a_n-z)\,p_t(z)\,dz - \int_{x}^{\infty} u^n_0(z-x+a_n)\,p_t(z)\,dz
\to u^n_0(\infty)\int_{-\infty}^{\infty} p_t(z)\,dz = u^n_0(\infty)$$
and
$$\int_{a_n}^{\infty} 1_{\{z\le u^n_s(y)\}}\,q^x_{t-s}(y)\,dy
= \int_{-\infty}^{x} 1_{\{z\le u^n_s(x+a_n-y)\}}\,p_{t-s}(y)\,dy - \int_{x}^{\infty} 1_{\{z\le u^n_s(y-x+a_n)\}}\,p_{t-s}(y)\,dy
\to 1_{\{z\le u^n_s(\infty)\}}\int_{-\infty}^{\infty} p_{t-s}(y)\,dy = 1_{\{z\le u^n_s(\infty)\}}$$
as x → ∞, which ends the proof. □

Proposition 3.3 (Pathwise Uniqueness). Suppose that (1.6) holds, and (u t )t≥0 , (ũ t )t≥0 are two
solutions with the same initial value to (3.2). Then P{u t (x) = ũ t (x) for all t ≥ 0, x ∈ R} = 1.


Proof. By Lemma 3.2, (u^n_t(∞))_{t≥0} and (ũ^n_t(∞))_{t≥0} are two solutions to (3.9) with the same initial value. Then by (1.6) and [2, Theorem 2.1], the pathwise uniqueness of the solution to (3.9) holds, i.e., P{u^n_t(∞) = ũ^n_t(∞) for all t ≥ 0} = 1. Further, the pathwise uniqueness of the solution holds for (b) of (3.2) by Lemma A.4, i.e., P{u^n_t(x) = ũ^n_t(x) for all t ≥ 0, x ≥ a_n} = 1. This implies the strong uniqueness of (∇u^n_t(a_n))_{t≥0}. By Lemma A.5 and induction, one obtains the pathwise uniqueness of the solution to (a) of (3.2) for i = 0, 1, . . . , n − 1, which completes the proof. □

Proof of Theorem 1.4. The existence of the solution to the MP follows from Theorem 1.2. Suppose that (X_t)_{t≥0} and (X̃_t)_{t≥0} are two solutions to the MP ((1.1), (1.2)). As in (3.1), one can define u^i_t(x) and ũ^i_t(x) corresponding to X_t and X̃_t, respectively. By Proposition 3.1, (u^i_t(x))_{t≥0,x∈[a_i,a_{i+1})} and (ũ^i_t(x))_{t≥0,x∈[a_i,a_{i+1})}, i = 0, 1, . . . , n, are two solutions to the system of SPDEs (3.2). Further, the strong uniqueness of the solution to (3.2) follows from Proposition 3.3, which implies the weak uniqueness of the solution to the MP ((1.1), (1.2)). □

Declaration of competing interest


The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.

Acknowledgments
We would like to express our sincere gratitude to the editors and an anonymous referee
for their very helpful comments on the paper. The research of L. Ji was supported in
part by Guangdong Basic and Applied Basic Research Foundation No. 2022A1515110986,
Guangdong Young Innovative Talents Project No. 2022KQNCX105 and NSFC grant 12271029;
the research of J. Xiong was supported in part by National Key R&D Program of China grant
2022YFA1006102, and NSFC grant 11831010; the research of X. Yang was supported in part
by NSFC grant 12061004, NSF of Ningxia grant 2021AAC02018 and the Major Research
Project for North Minzu University No. ZDZX201902.

Appendix

In this section we give some useful lemmas which are used in the proofs of Propositions 3.1 and 3.3. Let Φ ∈ C²_c(0, 1) satisfy 0 ≤ Φ ≤ 2 and ∫_0^1 Φ(x)dx = 1. For k ≥ 1 and x ∈ [0, 1] let
$$h_k(x) := \int_0^{kx} \Phi(z)\,dz \cdot \int_{x^k}^1 \Phi(z)\,dz. \qquad(A.1)$$
Then h_k ∈ C²_c(0, 1) for all k ≥ 1.

Lemma A.1. Suppose that f ∈ C[0, 1] with f′(1) and f′(0) existing. Then
$$\lim_{k\to\infty}\langle f, h_k'\rangle = f(0) - f(1), \qquad \lim_{k\to\infty}\langle f, h_k''\rangle = f'(1) - f'(0),$$
where ⟨f, g⟩ = ∫_0^1 f(x)g(x)dx for f, g ∈ C[0, 1].

Proof. Observe that for each k ≥ 1,
$$h_k'(x) = k\Phi(kx)\int_{x^k}^1 \Phi(z)\,dz - kx^{k-1}\Phi(x^k)\int_0^{kx} \Phi(z)\,dz, \qquad x\in[0,1].$$
Then by a change of variables and dominated convergence, we have
$$\langle f, h_k'\rangle = \int_0^1 f(x)\,k\Phi(kx)\Big[\int_{x^k}^1 \Phi(z)\,dz\Big]dx - \int_0^1 f(x)\,kx^{k-1}\Phi(x^k)\Big[\int_0^{kx} \Phi(z)\,dz\Big]dx
= \int_0^1 f(y/k)\Phi(y)\Big[\int_{(y/k)^k}^1 \Phi(z)\,dz\Big]dy - \int_0^1 f(y^{1/k})\Phi(y)\Big[\int_0^{ky^{1/k}} \Phi(z)\,dz\Big]dy,$$
which converges to f(0) − f(1) as k → ∞. This gives the first assertion.
In the following we prove the second assertion. Observe that
$$\begin{aligned}
h_k''(x) &= k^2\Phi'(kx)\int_{x^k}^1 \Phi(z)\,dz - 2k^2 x^{k-1}\Phi(kx)\Phi(x^k) - \big[k(k-1)x^{k-2}\Phi(x^k) + k^2 x^{2k-2}\Phi'(x^k)\big]\int_0^{kx} \Phi(z)\,dz\\
&=: M_{1,k}(x) - 2M_{2,k}(x) - M_{3,k}(x). \qquad(A.2)
\end{aligned}$$
By a change of variables and dominated convergence again, as k → ∞, we have
$$\int_0^1 [f(x)-f(0)]\,M_{1,k}(x)\,dx = \int_0^1 k[f(y/k)-f(0)]\,\Phi'(y)\Big[\int_{y^k k^{-k}}^1 \Phi(z)\,dz\Big]dy \to f'(0)\int_0^1 y\,\Phi'(y)\,dy = -f'(0) \qquad(A.3)$$
and
$$\int_0^1 [f(x)-f(1)]\,M_{2,k}(x)\,dx = \int_0^1 \frac{f(y^{1/k})-f(1)}{y^{1/k}-1}\,k(y^{1/k}-1)\,\Phi(ky^{1/k})\Phi(y)\,dy \to 0. \qquad(A.4)$$
Similarly, as k → ∞,
$$\begin{aligned}
\int_0^1 [f(x)-f(1)]\,M_{3,k}(x)\,dx
&= \int_0^1 \frac{f(y^{1/k})-f(1)}{y^{1/k}-1}\,k(y^{1/k}-1)\Big[k^{-1}(k-1)y^{-1/k}\Phi(y) + y^{(k-1)/k}\Phi'(y)\Big]\Big[\int_0^{ky^{1/k}} \Phi(z)\,dz\Big]dy\\
&\to f'(1)\int_0^1 \ln y\,\big[\Phi(y)+y\,\Phi'(y)\big]\,dy = -f'(1). \qquad(A.5)
\end{aligned}$$

Applying integration by parts and the fact 0 ≤ Φ ≤ 2 with supp(Φ) ⊂ (0, 1), one can check that
$$\int_0^1 M_{1,k}(x)\,dx = \int_0^1 \Big(\int_0^{kx}\Phi(z)\,dz\Big)''\Big(\int_{x^k}^1\Phi(z)\,dz\Big)dx
= \int_0^1 M_{2,k}(x)\,dx = k\int_0^1 \Phi(ky^{1/k})\Phi(y)\,dy = k\int_0^{k^{-k}} \Phi(ky^{1/k})\Phi(y)\,dy \le 4k^{1-k}$$
and
$$\int_0^1 M_{3,k}(x)\,dx = -\int_0^1 \Big(\int_{x^k}^1\Phi(z)\,dz\Big)''\Big(\int_0^{kx}\Phi(z)\,dz\Big)dx = -\int_0^1 M_{2,k}(x)\,dx,$$
so that |∫_0^1 M_{3,k}(x)dx| ≤ 4k^{1−k}. Then combining (A.2) with (A.3)–(A.5) one completes the proof. □
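The boundary-limit behaviour in Lemma A.1 can be probed numerically. The sketch below uses one admissible Φ (a bump on (0, 1), normalized numerically; any Φ ∈ C²_c(0, 1) with 0 ≤ Φ ≤ 2 and unit integral would do) and checks the first limit ⟨f, h′_k⟩ → f(0) − f(1) for f = cos:

```python
import numpy as np

def bump(x):
    # smooth bump supported in (-1, 1)
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = np.exp(-1.0 / (1.0 - x[m]**2))
    return out

_z = np.linspace(-1.0, 1.0, 200001)
_C = np.trapz(bump(_z), _z)

def Phi(x):
    # Phi in C_c^2(0,1), 0 <= Phi <= 2 (max ~1.66), integral 1
    return 2.0 * bump(2.0*np.asarray(x, dtype=float) - 1.0) / _C

# precompute the CDF of Phi for fast evaluation of the inner integrals
_grid = np.linspace(0.0, 1.0, 200001)
_vals = Phi(_grid)
_cdf = np.concatenate([[0.0], np.cumsum((_vals[1:] + _vals[:-1])/2 * np.diff(_grid))])

def cdf(x):
    return np.interp(np.clip(x, 0.0, 1.0), _grid, _cdf)

def hk_prime(x, k):
    # h_k'(x) = k Phi(kx) \int_{x^k}^1 Phi - k x^{k-1} Phi(x^k) \int_0^{kx} Phi
    x = np.asarray(x, dtype=float)
    xk = x**k
    return k*Phi(k*x)*(1.0 - cdf(xk)) - k*x**(k-1)*Phi(xk)*cdf(k*x)

k = 400
xs = np.linspace(0.0, 1.0, 2000001)
val = np.trapz(np.cos(xs) * hk_prime(xs, k), xs)
# Lemma A.1: <f, h_k'> -> f(0) - f(1); here cos(0) - cos(1)
assert abs(val - (1.0 - np.cos(1.0))) < 0.02
```

The convergence rate is O(1/k), which is why a fairly large k and a loose tolerance are used.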

Lemma A.2. Suppose that for each φ ∈ C²_c(0, 1), (u_t)_{t≥0} satisfies
$$\langle u_t, \varphi\rangle = \langle u_0, \varphi\rangle + \frac{1}{2}\int_0^t \langle u_s, \varphi''\rangle\,ds + \int_0^t\!\!\int_0^\infty g(\nabla u_s(1))\Big[\int_0^1 1_{\{z\le u_s(x)\}}\varphi(x)\,dx\Big]\,W(ds,dz), \qquad(A.6)$$
and for each T > 0,
$$\sup_{0\le t\le T}\sup_{x\in[0,1]} |u_t(x)| < \infty \quad a.s. \qquad(A.7)$$
Then for each φ ∈ C²_b[0, 1], almost surely for t ≥ 0 we have
$$\langle u_t, \varphi\rangle = \langle u_0, \varphi\rangle + \frac{1}{2}\int_0^t \big[\langle u_s, \varphi''\rangle + F_s(\varphi)\big]\,ds + \int_0^t\!\!\int_0^\infty g(\nabla u_s(1))\Big[\int_0^1 1_{\{z\le u_s(x)\}}\varphi(x)\,dx\Big]\,W(ds,dz),$$
where
$$F_s(\varphi) := [\varphi(1)\nabla u_s(1) - \varphi(0)\nabla u_s(0)] - [u_s(1)\varphi'(1) - u_s(0)\varphi'(0)].$$

Proof. Recall h_k in (A.1). For m ≥ 1 define the stopping time τ_m by
$$\tau_m := \inf\Big\{t\ge0 : \sup_{x\in[0,1]} |u_t(x)| \ge m\Big\}$$
with the convention inf ∅ = ∞. Then lim_{m→∞} τ_m = ∞ almost surely by (A.7). It follows from (A.6) that, for any φ ∈ C²_b[0, 1] and m, k > 1,
$$\langle u_{t\wedge\tau_m}, \varphi h_k\rangle = \langle u_0, \varphi h_k\rangle + \frac{1}{2}\int_0^{t\wedge\tau_m} \langle u_s, (\varphi h_k)''\rangle\,ds + \int_0^{t\wedge\tau_m}\!\!\int_0^\infty g(\nabla u_s(1))\Big[\int_0^1 1_{\{z\le u_s(x)\}}\varphi(x)h_k(x)\,dx\Big]\,W(ds,dz). \qquad(A.8)$$

Notice that
$$\langle u_s, (\varphi h_k)''\rangle = \langle u_s, \varphi'' h_k\rangle + 2\langle u_s, \varphi' h_k'\rangle + \langle u_s, \varphi h_k''\rangle.$$
It follows from Lemma A.1 that
$$\lim_{k\to\infty}\langle u_s, (\varphi h_k)''\rangle = \langle u_s, \varphi''\rangle + [u_s(0)\varphi'(0) - u_s(1)\varphi'(1)] - [\varphi(0)\nabla u_s(0) - \varphi(1)\nabla u_s(1)] = \langle u_s, \varphi''\rangle + F_s(\varphi).$$
Thus letting k → ∞ in (A.8) we obtain
$$\langle u_{t\wedge\tau_m}, \varphi\rangle = \langle u_0, \varphi\rangle + \frac{1}{2}\int_0^{t\wedge\tau_m} \big[\langle u_s, \varphi''\rangle + F_s(\varphi)\big]\,ds + \int_0^{t\wedge\tau_m}\!\!\int_0^\infty g(\nabla u_s(1))\Big[\int_0^1 1_{\{z\le u_s(x)\}}\varphi(x)\,dx\Big]\,W(ds,dz).$$
Letting m → ∞, the result holds. □

Lemma A.3. Suppose that for each φ ∈ C²_c(0, ∞), (u_t)_{t≥0} satisfies
$$\langle u_t, \varphi\rangle = \langle u_0, \varphi\rangle + \frac{1}{2}\int_0^t \langle u_s, \varphi''\rangle\,ds + \int_0^t\!\!\int_0^\infty g(u_s(\infty))\Big[\int_0^\infty 1_{\{z\le u_s(x)\}}\varphi(x)\,dx\Big]\,W(ds,dz),$$
and for each T > 0,
$$\sup_{0\le t\le T}\sup_{x\in[0,\infty)} |u_t(x)| < \infty \quad a.s. \qquad(A.9)$$
Then for each φ ∈ C²_b[0, ∞) with φ(∞) = 0, almost surely for t ≥ 0 we have
$$\langle u_t, \varphi\rangle = \langle u_0, \varphi\rangle + \frac{1}{2}\int_0^t \big[\langle u_s, \varphi''\rangle + u_s(0)\varphi'(0) - \varphi(0)\nabla u_s(0)\big]\,ds + \int_0^t\!\!\int_0^\infty g(u_s(\infty))\Big[\int_0^\infty 1_{\{z\le u_s(x)\}}\varphi(x)\,dx\Big]\,W(ds,dz).$$

Proof. We take h_k(x) := ∫_0^{kx} Φ(z)dz, where Φ ∈ C²_c(0, ∞), ∫_0^∞ Φ(x)dx = 1 and 0 ≤ Φ ≤ 2. Similar to the proof of Lemma A.1, it is easy to check that
$$\langle f, h_k'\rangle \to f(0), \qquad \langle f, h_k''\rangle \to -f'(0)$$
as k → ∞, for f ∈ C_b[0, ∞) with f′(0) existing and f(∞) = lim_{x→∞} f(x) = 0. The rest of the proof is similar to that of Lemma A.2, and we omit it here. □
Next we consider the following equation:
$$\begin{cases}
u_t(x) = u_0(x) + \displaystyle\int_0^t \frac{1}{2}\Delta u_s(x)\,ds + \int_0^t\!\!\int_0^{u_s(x)} G_s\,W(ds,dz), & x\ge0,\\
u_t(0) = 0,
\end{cases} \qquad(A.10)$$
where (G_t)_{t≥0} is a bounded continuous process, and W(ds, dz) is a time–space Gaussian white noise on R₊ × R₊ with intensity dsdz. Further, we assume that (A.9) holds for the above equation.

The [0, ∞)-distribution-function-valued process (u_t)_{t≥0} solving (A.10) will be understood in the following form: for each φ ∈ C²_b[0, ∞) with φ(0) = φ(∞) = 0, we have
$$\langle u_t, \varphi\rangle = \langle u_0, \varphi\rangle + \frac{1}{2}\int_0^t \langle u_s, \varphi''\rangle\,ds + \int_0^t\!\!\int_0^\infty \Big[\int_0^\infty 1_{\{z\le u_s(x)\}}\varphi(x)\,dx\Big]\,G_s\,W(ds,dz), \qquad t\ge0,$$
almost surely. It is easy to see that t ↦ ⟨u_t, φ⟩ is continuous almost surely for each φ ∈ C²_b[0, ∞). The pathwise uniqueness of the solution to the general version of (A.10) is considered in Xiong and Yang [29, Theorem 1.9]. We give a brief proof below, since that paper has not been formally published yet.

Lemma A.4. Let (u^i_t)_{t≥0}, i = 1, 2, be two [0, ∞)-distribution-function-valued solutions to (A.10) with the same initial value satisfying (A.9). Then P{u¹_t(x) = u²_t(x) for all t, x ≥ 0} = 1.

Proof. For each k ≥ 1 we define a_k = exp{−k(k + 1)/2}. Then a_k ↓ 0 and ∫_{a_k}^{a_{k−1}} z^{−1}dz = k. Let x ↦ ψ_k(x) be a positive continuous function supported on (a_k, a_{k−1}) such that ∫_{a_k}^{a_{k−1}} ψ_k(x)dx = 1 and ψ_k(x) ≤ 2(kx)^{−1} for every x > 0. For k ≥ 1 let
$$\varphi_k(z) = \int_0^{|z|} dy\int_0^y \psi_k(x)\,dx, \qquad z\in\mathbb{R}.$$
It is easy to see that |φ′_k(z)| ≤ 1 and 0 ≤ |z|φ″_k(z) = |z|ψ_k(|z|) ≤ 2k^{−1} for z ∈ R. Moreover, we have φ_k(z) → |z| increasingly as k → ∞.
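These Yamada–Watanabe-type functions are easy to verify numerically. The sketch below takes the concrete choice ψ_k(x) = (kx)^{−1} on (a_k, a_{k−1}) — merely piecewise continuous; a genuinely continuous ψ_k would taper at the endpoints — and checks the three stated properties:

```python
import numpy as np

def a(k):
    # a_k = exp(-k(k+1)/2); note \int_{a_k}^{a_{k-1}} z^{-1} dz = k
    return np.exp(-k*(k + 1)/2.0)

def psi(x, k):
    # one admissible choice: psi_k(x) = 1/(kx) on (a_k, a_{k-1});
    # it integrates to 1 there and satisfies psi_k(x) <= 2/(kx)
    x = np.asarray(x, dtype=float)
    return np.where((x > a(k)) & (x < a(k - 1)), 1.0/(k*x), 0.0)

def phi(z, k, npts=200001):
    # phi_k(z) = \int_0^{|z|} \int_0^y psi_k(x) dx dy, evaluated numerically;
    # the inner CDF is log-shaped on (a_k, a_{k-1}) and equals 1 above a_{k-1}
    ys = np.linspace(0.0, abs(z), npts)
    inner = np.clip(np.log(np.maximum(ys, a(k)) / a(k)) / k, 0.0, 1.0)
    return np.trapz(inner, ys)

k = 3
for z in [0.5, 1.0, 2.0]:
    v = phi(z, k)
    assert v <= z + 1e-9                  # phi_k(z) <= |z|
    assert z - v <= a(k - 1) + 1e-9       # phi_k -> |z| as k -> infinity
xs = np.linspace(1e-6, 2.0, 100001)
assert np.all(xs * psi(xs, k) <= 2.0/k + 1e-12)   # |z| phi_k''(z) <= 2/k
```

Since φ′_k vanishes on [0, a_k] and equals 1 above a_{k−1}, the gap |z| − φ_k(z) is at most a_{k−1}, which is what the second assertion checks.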
Recall that (u¹_t)_{t≥0} and (u²_t)_{t≥0} are two solutions to (A.10) with the same initial value. Let q^x_t(y) = p_t(x, y) − p_t(−x, y). It follows that for i = 1, 2,
$$\langle u^i_t, q^x_\delta\rangle = \langle u^i_0, q^x_\delta\rangle + \frac{1}{2}\int_0^t \Delta_x\big(\langle u^i_s, q^x_\delta\rangle\big)\,ds + \int_0^t\!\!\int_0^\infty \Big[\int_0^\infty 1_{\{z\le u^i_s(y)\}}\,q^x_\delta(y)\,dy\Big]\,G_s\,W(ds,dz), \qquad(A.11)$$
where Δ_x is the second-order spatial differential operator with respect to the variable x. For n ≥ 1, we define the stopping time
$$\tau_n := \inf\Big\{t\ge0 : \sup_{x\ge0} |u^1_t(x)| + \sup_{x\ge0} |u^2_t(x)| \ge n\Big\}.$$
Then τ_n → ∞ almost surely as n → ∞ by (A.9). Let v_t(x) = u¹_t(x) − u²_t(x) and v^δ_t(x) = ⟨v_t, q^x_δ⟩. From (A.11) it follows that
$$v^\delta_t(x) = \frac{1}{2}\int_0^t \Delta_x(v^\delta_s(x))\,ds + \int_0^t\!\!\int_0^\infty M^\delta_s(x,z)\,W(ds,dz),$$
where
$$M^\delta_s(x,z) = G_s\int_0^\infty \big(1_{\{z\le u^1_s(y)\}} - 1_{\{z\le u^2_s(y)\}}\big)\,q^x_\delta(y)\,dy.$$

It then follows from Itô’s formula that


1 t∧τn ′ δ

δ
φk (vt∧τ n
(x)) = φk (vs (x))∆x (vsδ (x))ds
2 0
∫ t∧τn ∫ ∞
2
+ φk′′ (vsδ (x))|Msδ (x, z)| dsdz
∫0 t∧τn ∫0 ∞
+ φk′ (vsδ (x))Msδ (x, z)W (ds, dz).
0 0

Recall that J (x) = R e−|y| ρ(x − y)dy satisfying (2.27). It leads to




δ
J (x)E[φk (vt∧τ n
(x))]d x
R
[∫ t∧τn ∫ ]
1 ′ δ δ
= E J (x)φk (vs (x))∆x (vs (x))d xds
2
[∫0 t∧τn ∫R∞ ∫ ]
′′ δ δ 2
+E J (x)φk (vs (x))|Ms (x, z)| dsdzd x
0 0 R
[∫ t∧τ n(1 ) ]
δ δ
=: E I (s) + I2,k (s) ds . (A.12)
0 2 1,k
By integration by parts and (2.27), we have

    I^δ_{1,k}(s) = ∫_R ∆_x(φ_k(v_s^δ(x))) J(x) dx − ∫_R φ_k″(v_s^δ(x)) |∇_x v_s^δ(x)|² J(x) dx
        ≤ ∫_R ∆_x(φ_k(v_s^δ(x))) J(x) dx = ∫_R φ_k(v_s^δ(x)) J″(x) dx
        ≤ K ∫_R φ_k(v_s^δ(x)) J(x) dx.
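The steps can be unpacked as follows (a sketch; we assume (2.27) provides |J″(x)| ≤ K J(x), and use φ_k″ = ψ_k ≥ 0):

```latex
% Chain rule for the Laplacian applied to \varphi_k(v):
\Delta_x\bigl(\varphi_k(v)\bigr)
  = \varphi_k'(v)\,\Delta_x v + \varphi_k''(v)\,|\nabla_x v|^2 ,
% so the first term of I^{\delta}_{1,k} rearranges to
\int_{\mathbb{R}} J\,\varphi_k'(v)\,\Delta_x v\,dx
  = \int_{\mathbb{R}} J\,\Delta_x\bigl(\varphi_k(v)\bigr)\,dx
    - \int_{\mathbb{R}} J\,\varphi_k''(v)\,|\nabla_x v|^2\,dx .
% The dropped term is nonnegative since \varphi_k'' = \psi_k \ge 0; two
% integrations by parts move \Delta_x onto J, and |J''| \le K\,J gives
\int_{\mathbb{R}} \Delta_x\bigl(\varphi_k(v)\bigr)\,J\,dx
  = \int_{\mathbb{R}} \varphi_k(v)\,J''\,dx
  \le K \int_{\mathbb{R}} \varphi_k(v)\,J\,dx .
```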
Recall that (G_t)_{t≥0} is bounded. Then we have

    I^δ_{2,k}(s) ≤ K ∫_R J(x) φ_k″(v_s^δ(x)) ⟨|v_s|, q_δ^x⟩ dx.
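A sketch of where this bound comes from, writing D_z(y) = 1_{z≤u^1_s(y)} − 1_{z≤u^2_s(y)} and assuming sup_s |G_s|² ≤ K, q_δ^x ≥ 0 and ∫_0^∞ q_δ^x(y) dy ≤ 1:

```latex
% |D_z| \le 1 and Cauchy–Schwarz give
|M_s^\delta(x,z)|^2
  \le |G_s|^2
      \Bigl(\int_0^\infty |D_z(y)|\,q_\delta^x(y)\,dy\Bigr)
      \Bigl(\int_0^\infty q_\delta^x(y)\,dy\Bigr)
  \le K \int_0^\infty |D_z(y)|\,q_\delta^x(y)\,dy ,
% and since \int_0^\infty |D_z(y)|\,dz = |u_s^1(y)-u_s^2(y)| = |v_s(y)|,
\int_0^\infty |M_s^\delta(x,z)|^2\,dz
  \le K \int_0^\infty |v_s(y)|\,q_\delta^x(y)\,dy
  = K\,\langle |v_s|,\,q_\delta^x\rangle .
```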
Letting δ → 0 in (A.12) and using dominated convergence we have

    ∫_R J(x) E[φ_k(v_{t∧τ_n}(x))] dx ≤ K ∫_0^t ds ∫_R E[φ_k(v_{s∧τ_n}(x))] J(x) dx
        + K ∫_0^t ds E[∫_R φ_k″(v_{s∧τ_n}(x)) |v_{s∧τ_n}(x)| J(x) dx].

Recall that 0 ≤ |y|φ_k″(y) ≤ 2/k for all y ∈ R. Letting k → ∞ in the above inequality we get

    ∫_R J(x) E[|v_{t∧τ_n}(x)|] dx ≤ K ∫_0^t ds ∫_R J(x) E[|v_{s∧τ_n}(x)|] dx,

which implies ∫_R J(x) E[|v_{t∧τ_n}(x)|] dx = 0 by Gronwall's inequality. Letting n → ∞, we obtain ∫_R J(x) E[|v_t(x)|] dx = 0 by Fatou's lemma. Note that t ↦ ∫_R J(x) u^i_t(x) dx is continuous almost surely for i = 1, 2. Then P{∫_R J(x) |v_t(x)| dx = 0 for all t ≥ 0} = 1. The result follows since (u^1_t)_{t≥0} and (u^2_t)_{t≥0} are two [0, ∞)-distribution-function-valued processes. □
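The final step applies Gronwall's inequality: a nonnegative f with f(t) ≤ K ∫_0^t f(s) ds must vanish. A numerical sketch (not part of the proof; K, T, the grid and the starting bound F_0 are arbitrary choices) iterates the integral bound and watches the suprema collapse at factorial rate:

```python
import numpy as np

# If f >= 0 and f(t) <= K * \int_0^t f(s) ds on [0, T], iterating the bound
# gives f(t) <= sup(F_0) * (K t)^n / n! -> 0, which forces f = 0.
K, T, n_grid = 3.0, 1.0, 2000
t = np.linspace(0.0, T, n_grid)
dt = t[1] - t[0]
F = np.ones_like(t)            # F_0 ≡ 1, any starting bound on f works

sups = []
for _ in range(25):
    # cumulative trapezoidal rule for \int_0^t F(s) ds
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * dt)))
    F = K * integral           # next iterate of the bound
    sups.append(float(F.max()))
# sups[n] tracks (K T)^{n+1} / (n+1)!, which collapses to 0
```

Since f is dominated by every iterate, the factorial decay of the suprema is exactly what forces ∫_R J(x) E[|v_{t∧τ_n}(x)|] dx = 0 above.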
Now we consider the following equation:

    u_t(x) = u_0(x) + ∫_0^t (1/2) ∆u_s(x) ds + ∫_0^t ∫_0^{u_s(x)} G̃_s W(ds, dz),  x ∈ [0, 1],
    u_t(0) = 0,  ∇u_t(1) = µ_t,   (A.13)

where (µ_t)_{t≥0} and (G̃_t)_{t≥0} are continuous processes, and (G̃_t)_{t≥0} is bounded. Moreover, W(ds, dz) is a time–space Gaussian white noise on R_+ × R_+ with intensity ds dz. The [0, 1]-distribution-function-valued process (u_t(x))_{t≥0,x∈[0,1]} solving (A.13) will be understood in the following form: for each φ ∈ C_b²[0, 1] with φ(0) = φ′(1) = 0, we have

    ⟨u_t, φ⟩ = ⟨u_0, φ⟩ + (1/2) ∫_0^t [⟨u_s, φ″⟩ + φ(1)µ_s] ds
        + ∫_0^t ∫_0^∞ [∫_0^1 1_{z≤u_s(x)} φ(x) dx] G̃_s W(ds, dz),  t ≥ 0

almost surely. The pathwise uniqueness of the solution to a general version of (A.13) is considered in Xiong and Yang [29, Theorem 1.4].
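The boundary term φ(1)µ_s in the weak form can be traced through two integrations by parts; a sketch, assuming u_s is smooth enough and using u_s(0) = 0, ∇u_s(1) = µ_s together with φ(0) = φ′(1) = 0:

```latex
\int_0^1 \tfrac12\,\Delta u_s(x)\,\varphi(x)\,dx
  = \tfrac12\Bigl[\,u_s'(x)\varphi(x) - u_s(x)\varphi'(x)\,\Bigr]_0^1
    + \tfrac12 \int_0^1 u_s(x)\,\varphi''(x)\,dx
  = \tfrac12\bigl(\varphi(1)\,\mu_s + \langle u_s,\varphi''\rangle\bigr),
% since u_s'(1) = \mu_s and \varphi'(1) = 0 at the right endpoint, while
% \varphi(0) = 0 and u_s(0) = 0 kill the contributions at the left endpoint.
```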

Lemma A.5. Let (u^i_t(x))_{t≥0,x∈[0,1]}, i = 1, 2, be two [0, 1]-distribution-function-valued solutions to (A.13) with the same initial value satisfying (A.7). Then

    P{u^1_t(x) = u^2_t(x) for all t ≥ 0, x ∈ [0, 1]} = 1.

Proof. The conclusion can be justified by using essentially the same argument as that in the proof of Lemma A.4, with q_t^x(y) replaced by

    q_t^x(y) = 2 Σ_{k=−∞}^{∞} [p_t(4k + x − y) − p_t(4k − x − y)]
        − Σ_{k=−∞}^{∞} [p_t(2k + x − y) − p_t(2k − x − y)].

We omit the details. □
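As a sanity check (not in the paper), the truncated image-charge sum can be evaluated numerically to confirm the Dirichlet condition at x = 0 and the Neumann condition at x = 1; the values of t and y and the truncation level are arbitrary choices:

```python
import numpy as np

def heat_kernel(z, t):
    """Standard Gaussian heat kernel p_t(z)."""
    return np.exp(-z * z / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def q(x, y, t, trunc=25):
    """Truncated version of the image-charge kernel from the proof."""
    k = np.arange(-trunc, trunc + 1)
    s4 = np.sum(heat_kernel(4 * k + x - y, t) - heat_kernel(4 * k - x - y, t))
    s2 = np.sum(heat_kernel(2 * k + x - y, t) - heat_kernel(2 * k - x - y, t))
    return 2.0 * s4 - s2

t, y, h = 0.3, 0.37, 1e-5
dirichlet = q(0.0, y, t)                                   # value at x = 0
neumann = (q(1.0 + h, y, t) - q(1.0 - h, y, t)) / (2 * h)  # d/dx at x = 1
# both should be (numerically) zero
```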

References
[1] D.A. Dawson, Z. Li, Construction of immigration superprocesses with dependent spatial motion from one-
dimensional excursions, Probab. Theory Related Fields 127 (2003) 37–61, http://dx.doi.org/10.1007/s00440-
003-0278-y.
[2] D.A. Dawson, Z. Li, Stochastic equations, flows and measure-valued processes, Ann. Probab. 40 (2) (2012)
813–857, http://dx.doi.org/10.1214/10-AOP629.
[3] P. Donnelly, T. Kurtz, Particle representations for measure-valued population models, Ann. Probab. 27 (1)
(1999) 166–205, http://dx.doi.org/10.1214/aop/1022677258.
[4] N. El Karoui, S. Méléard, Martingale measures and stochastic calculus, Probab. Theory Related Fields 84
(1990) 83–101, http://dx.doi.org/10.1007/BF01288560.
[5] S.N. Ethier, T.G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, New York, 1986.
[6] L.C. Evans, Partial Differential Equations, American Mathematical Society, 1998, http://dx.doi.org/10.1090/
gsm/019.
[7] Z. Fu, Z. Li, Measure-valued diffusions and stochastic equations with Poisson process, Osaka J. Math. 41 (3)
(2004) 727–744.
[8] H. He, Z. Li, X. Yang, Stochastic equations of super-Lévy processes with general branching mechanism,
Stoch. Process. Appl. 124 (4) (2014) 1519–1565, http://dx.doi.org/10.1016/j.spa.2013.12.007.

[9] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, second ed., North-Holland,
Amsterdam, Kodansha, Tokyo, 1989.
[10] O. Kallenberg, Foundations of Modern Probability, in: Probability and Its Applications, second ed.,
Springer-Verlag, New York, 2002.
[11] G. Kallianpur, J. Xiong, Stochastic differential equations in infinite dimensional spaces, IMS Lect. Notes
Monogr. Ser. 26 (1995) 342, http://dx.doi.org/10.1214/lnms/1215451864.
[12] H.H. Kuo, Gaussian Measures in Banach Spaces, in: Lecture Notes in Mathematics, vol. 463, Springer, Berlin,
Heidelberg, 1975.
[13] Z. Li, Measure-Valued Branching Markov Processes, Springer, Berlin, Heidelberg, 2011.
[14] Z. Li, H. Wang, J. Xiong, Conditional log-Laplace functionals of immigration superprocesses with dependent
spatial motion, Acta Appl. Math. 88 (2005) 143–175, http://dx.doi.org/10.1007/s10440-005-6696-3.
[15] I. Mitoma, An ∞-dimensional inhomogeneous Langevin’s equation, J. Funct. Anal. 61 (1985) 342–359,
http://dx.doi.org/10.1016/0022-1236(85)90027-8.
[16] L. Mytnik, J. Xiong, Well-posedness of the martingale problem for superprocess with interaction, Illinois J.
Math. 59 (2) (2015) 485–497, http://dx.doi.org/10.1215/ijm/1462450710.
[17] E. Perkins, On the martingale problem for interactive measure-valued branching diffusions, Mem. Amer. Math.
Soc. 115, 1995, http://dx.doi.org/10.1090/memo/0549.
[18] E. Perkins, Dawson–Watanabe Superprocesses and Measure-Valued Diffusions, Lectures on Probability Theory
and Statistics (Saint-Flour, 1999), Lecture Notes in Math, vol. 1781, Springer, Berlin, 2002.
[19] J. Rosen, Joint continuity of the intersection local times of Markov processes, Ann. Probab. 15 (2) (1987)
659–675, http://dx.doi.org/10.1214/aop/1176992164.
[20] T. Shiga, A stochastic equation based on a Poisson system for a class of measure valued diffusion processes,
J. Math. Kyoto Univ. 30 (1990) 245–279, http://dx.doi.org/10.1215/kjm/1250520071.
[21] T. Shiga, Two contrasting properties of solutions for one-dimensional stochastic partial differential equations,
Canad. J. Math. 46 (2) (1994) 415–437, http://dx.doi.org/10.4153/CJM-1994-022-8.
[22] J. Walsh, An Introduction to Stochastic Partial Differential Equations, in: Lecture Notes in Math., vol. 1180,
Springer, Berlin, 1986, pp. 266–439.
[23] S. Watanabe, A limit theorem of branching processes and continuous state branching processes, J. Math.
Kyoto Univ. 8 (1) (1968) 141–167, http://dx.doi.org/10.1215/kjm/1250524180.
[24] D. Willett, J.S.W. Wong, On the discrete analogues of some generalizations of Gronwall's inequality, Monatsh.
Math. 69 (1965) 362–367, http://dx.doi.org/10.1007/BF01297622.
[25] J. Xiong, Three Classes of Nonlinear Stochastic Partial Differential Equations, World Scientific, Hackensack,
NJ, 2013.
[26] J. Xiong, Super-Brownian motion as the unique strong solution to an SPDE, Ann. Probab. 41 (2) (2013)
1030–1054, http://dx.doi.org/10.1214/12-AOP789.
[27] J. Xiong, X. Yang, Superprocesses with interaction and immigration, Stoch. Process. Appl. 126 (11) (2016)
3377–3401, http://dx.doi.org/10.1016/j.spa.2016.04.032.
[28] J. Xiong, X. Yang, Existence and pathwise uniqueness to an SPDE driven by α-stable colored noise, Stoch.
Process. Appl. 129 (2019) 2681–2722, http://dx.doi.org/10.1016/j.spa.2018.08.003.
[29] J. Xiong, X. Yang, SPDEs with non-Lipschitz coefficients and nonhomogeneous boundary conditions, Bernoulli
(2023) in press.
