10.3934 dcds.2024037

You might also like

Download as pdf or txt
Download as pdf or txt
You are on page 1of 40

Discrete and Continuous Dynamical Systems

Vol. 44, No. 9, September 2024, pp. 2524-2563


doi:10.3934/dcds.2024037

PROPAGATION DYNAMICS OF THE MONOSTABLE


REACTION-DIFFUSION EQUATION WITH
A NEW FREE BOUNDARY CONDITION

Yihong Du

School of Science and Technology, University of New England, Armidale, NSW 2351, Australia

(Communicated by Masaharu Taniguchi)

Abstract. We study the reaction diffusion equation ut − duxx = f (u) with a


monostable nonlinear function f (u) over a changing interval [g(t), h(t)], viewed
as a model for the spreading of a species with population range [g(t), h(t)] and
density u(t, x). The free boundaries x = g(t) and x = h(t) are not governed
by the same Stefan condition as in Du and Lin [20] and other previous works;
instead, they satisfy a related but different set of equations obtained from a
“preferred population density” assumption at the range boundary, which allows
the population range to shrink. We obtain a rather complete understanding of
the longtime dynamics of the model, which exhibits persistent propagation with
a finite asymptotic propagation speed determined by a certain semi-wave solu-
tion, and the density function converges to the semi-wave profile as time goes
to infinity. The asymptotic propagation speed is always smaller than that of
the corresponding classical Cauchy problem where the reaction-diffusion equa-
tion is satisfied for x over the entire real line with no free boundary. Moreover,
when the preferred population density used in the free boundary condition
converges to 0, the solution u of our free boundary problem converges to the
solution of the corresponding classical Cauchy problem, and the propagation
speed also converges to that of the Cauchy problem.

1. Introduction and main results. Since the work of Du and Lin [20], the follow-
ing free boundary problem (and its many variations) has been extensively studied:

 ut − duxx = f (u),
 t > 0, g(t) < x < h(t),
 u(t, g(t)) = u(t, h(t)) = 0, t > 0,


g ′ (t) = −µux (t, g(t)) t > 0, (1.1)
 h′ (t) = −µux (t, h(t)), t > 0,



−g(0) = h(0) = h0 , u(0, x) = u0 (x), −h0 ≤ x ≤ h0 ,

where x = h(t) and x = g(t) are the moving boundaries to be determined together
with u(t, x), d and µ are given positive constants, f : [0, ∞) → R is a monostable
function, namely
(
f is C 1 , f > 0 in (0,1), f < 0 in (1, ∞)
(f ) :
f (0) = f (1) = 0, f ′ (0) > 0 > f ′ (1),

2020 Mathematics Subject Classification. Primary: 35K20, 35K55, 35R35.


Key words and phrases. Reaction-diffusion, free boundary, propagation, spreading speed.
The author is supported by the Australian Research Council.

2524
PROPAGATION DYNAMICS 2525

which includes the Fisher-KPP nonlinear function as a special case. The initial
function u0 satisfies, for some h0 > 0,
(
u0 ∈ C 2 ([−h0 , h0 ]), u0 (±h0 ) = 0, u′0 (−h0 ) > 0 > u′0 (h0 ),
(1.2)
u0 (x) > 0 for x ∈ (−h0 , h0 ).

(The smoothness requirement u0 ∈ C 2 ([−h0 , h0 ]) can be relaxed to u0 ∈ C([−h0 , h0 ]);


see [11]. However, for simplicity, we will not pursue this issue here.)
Problem (1.1) is often viewed as a population model describing the spreading of
a new or invasive species with population range [g(t), h(t)] and population density
u(t, x). The Stefan type free boundary conditions in (1.1) can be deduced from
some ecological assumptions (see [7]), which basically says that, in order to expand
the population range [g(t), h(t)], certain sacrifice is made by the species, in terms of
population loss near the fronts (the free boundaries), and the expansion rates h′ (t)
and g ′ (t) are proportional to this population loss, with the proportion constant
given by µd . It follows from [16] that as µ → ∞, the population range [g(t), h(t)] in
(1.1) converges to R in any finite time t, and the density u(t, x) converges to the
solution of the following classical reaction-diffusion model for propagation studied
in the pioneering works of Fisher, Kolmogorov-Petrovski-Piskunov and Aronson-
Weinberger [28, 34, 1, 2]:

ut − duxx = f (u), x ∈ R, t > 0,
(1.3)
u(0, x) = ũ0 (x), x ∈ R,
where ũ0 is the zero extension of u0 outside [−h0 , h0 ].
It follows from [20, 21] that (1.1) has a unique solution which is defined for all
t > 0, and as t → ∞, the population range [g(t), h(t)] converges either to a finite
interval [g∞ , h∞ ], or to R. Moreover, in the former case, the population density
u(t, x) → 0 uniformly in x, while in the latter case, u(t, x) → 1 locally uniformly in
x ∈ R. The situation that
u → 0 and (g, h) → (g∞ , h∞ )
is known as the vanishing case, and
u → 1 and (g, h) → R
is called the spreading case.
In the spreading case, it was shown in [20, 21] that there exists c∗ > 0 such that
−g(t) h(t)
lim = lim = c∗ .
t→∞ t t→∞ t

The number c∗ is therefore called the asymptotic spreading speed of (1.1), which is
determined by the following semi-wave problem:
dq ′′ − cq ′ + f (q) = 0 in (0, ∞),
(
(1.4)
q(0) = 0, q ′ (0) = c/µ, q(∞) = 1, q(z) > 0 in (0, ∞).
Theorem A. [Proposition 1.9 and Theorem 6.2 of [21]] Suppose that f satisfies
(f ). Then for any µ > 0, (1.4) has a unique solution pair (c, q(·)) = (c∗ , qc∗ (·)) with
c∗ = c∗µ > 0.
The solution qc∗ is known as a semi-wave with speed c∗ . It was further shown in
[23] that the following more accurate estimates hold for (u, g, h):
2526 YIHONG DU

Theorem B. Assume that f satisfies (f ), (u, g, h) is the unique solution to (1.1)


for which spreading happens. Let (c∗ , qc∗ ) be given by Theorem A. Then there exist
Ĥ, Ĝ ∈ R such that
limt→∞ [h(t) − c∗ t] = Ĥ, limt→∞ h′ (t) = c∗ ,



∗ ′ ∗

t→∞ [g(t) + c t] = Ĝ, limt→∞ g (t) = −c ,
 lim


 limt→∞ supx∈[0, h(t)] |u(t, x) − qc∗ (h(t) − x)| = 0,
limt→∞ supx∈[g(t), 0] |u(t, x) − qc∗ (x − g(t))| = 0.

Of the many other works on different aspects of (1.1) and its generalisations, let
us only list, as a small sample, [3, 8, 10, 12, 13, 14, 15, 18, 19, 22, 24, 25, 29, 30, 31,
32, 33, 35, 36, 37, 39, 42, 43, 44, 45, 46], where the reader can find more details.
In ecology, the movement of the range boundary of a population is a complex
issue, no widely accepted principle to govern such movement appears to be known.
While the assumption leading to the free boundary conditions in (1.1) seems reason-
able in many cases, to better represent different situations arising from the complex
real world, various other free boundary conditions have been proposed and used.
For example, in [8, 3], free boundary conditions of the form
h′ (t) = −µux (t, h(t)) − α(t), u(t, h(t)) = 0
are used, where α(t) ≥ 0 represents a force against the range expansion due to
unfavourable factors of the environment surrounding the population range. In [26],
the authors use a nonlocal free boundary condition of the following form:
Z h(t)

h (t) = µ u(t, x)w(h(t) − x)dx, u(t, h(t)) = 0,
0
where
w(x) = c1 e−α1 x − c2 e−α2 x , c1 > c2 > 0, α1 > α2 > 0.
In [41], the case µ < 0 in (1.1) was considered, where the population range is
shrinking instead of expanding.
In [40, 4], the authors propose a different free boundary model for species which
engineers the environment to create its habitat, where the free boundary stands for
the boundary of the engineered part of the environment, over which the growth of
the population is governed by a Fisher-KPP function such as f (u) = u(r − au),
r, a > 0, while in the rest of the available environment (un-engineered territory),
the growth decays according to f (u) = −mu, m > 0; we refer to [40, 4] for more
details.
In this paper, we examine yet another free boundary condition, based on the
notion of “preferred population density” at the range boundary, which was proposed
by Professor Chris Cosner to the author1 . Formulated in the simplest situation
where a species spreads into a favourable homogeneous environment with carrying
capacity 1, the preferred population density assumption says:
The species favours a certain density
δ ∈ (0, 1),
so that the population is not too crowded for the available resources in the given en-
vironment yet there is still enough chance for mating and reproducing, for instance.
The population range thus expands or shrinks via the change of its boundary caused
1 The late Professor Hans Weinberger helped to clarify an important point on this notion.
PROPAGATION DYNAMICS 2527

by members of the species near the range boundary trying to keep such a preferred
population density there.
The resulting free boundary conditions are (see Section 2.1 for a detailed deduc-
tion):

h′ (t) = − d ux (t, h(t)),
δ
u(t, h(t)) = u(t, g(t)) = δ.
g ′ (t) = − d u (t, g(t)),
δ x
Such an assumption appears reasonable for populations where no significant extra
sacrifice is needed to expand their population range; for instance, for animals that
no nest (home) is required to raise the young, such as the wildebeests whose young
can run with the mother almost immediately after birth.
Thus our new model has the form


 ut − duxx = f (u), t > 0, g(t) < x < h(t),
u(t, g(t)) = u(t, h(t)) = δ, t > 0,



g ′ (t) = − dδ ux (t, g(t)) t > 0, (1.5)
′ d
 h (t) = − δ ux (t, h(t)), t > 0,



−g(0) = h(0) = h0 , u(0, x) = u0 (x), −h0 ≤ x ≤ h0 ,

where the initial function u0 (x) is assumed to belong to X (h0 ) given by


X (h0 ) := {ϕ ∈ C 2 ([−h0 , h0 ]) : ϕ(x) > 0 in [−h0 , h0 ], ϕ(±h0 ) = δ}.
Our main results on (1.5) are listed below.
Theorem 1.1. Suppose that (f ) holds. Then for any given u0 ∈ X (h0 ) and α ∈
(0, 1), (1.5) admits a unique solution
α
h α
i2
(u, g, h) ∈ C 1+ 2 ,2+α (Ω) × C 1+ 2 ((0, ∞)) ,

where Ω := {(t, x) ∈ R2 : t ∈ (0, ∞), x ∈ [g(t), h(t)]}.


Theorem 1.2. Suppose (f ) holds, and (u, g, h) is the solution of (1.5) with u0 ∈
X (h0 ). Then, as t → ∞,
(g(t), h(t)) → (−∞, ∞), u(t, x) → 1 locally uniformly in x ∈ R.
To describe the longtime behaviour of (u, g, h) more precisely, we need the fol-
lowing result:
Proposition 1.3. Suppose that f satisfies (f ) and δ ∈ (0, 1). Then there exists a
unique pair (c, q) = (c∗ , q∗ ) with c∗ = c∗ (δ) > 0 satisfying
dq ′′ − cq ′ + f (q) = 0, q > 0 in (0, ∞),
(
(1.6)
q(0) = 0, q(∞) = 1, q ′ (xδ ) = c dδ ,
where xδ > 0 is uniquely determined by q(xδ ) = δ. Moreover, q∗′ (z) > 0 for z ≥ 0,
c∗ < c0 , and limδ→0 c∗ (δ) = c0 , where c0 is the spreading speed determined by (1.3).
It is well known that the spreading speed c0 of (1.3) is the minimal speed of the
associated traveling wave solutions ([1, 2]), namely the problem
dq ′′ − cq ′ + f (q) = 0, q > 0 in (−∞, ∞),
(
(1.7)
q(−∞) = 0, q(∞) = 1,
2528 YIHONG DU

has a solution if and only if c ≥ c0 . If f (u) satisfies additionally the KPP


p condition
f ′ (0) ≥ f (u)/u for u ∈ (0, 1), then c0 is linearly determined: c0 = 2 f ′ (0)d (see
[34]).
If c∗ is given in Theoem A with µ = dδ , then c∗ > c∗ when δ ∈ (0, 1) is close
to 0, and c∗ < c∗ when δ ∈ (0, 1) is close to 1; on the other hand, if q(x) solves
(1.7) with speed c ≥ c0 , there exists no x ∈ R satisfying q(x) = δ and q ′ (x) = dδ c
simultaneously (see Remark 3.1).
With the help of Proposition 1.3, we can state our main result on the longtime
dynamics of (1.5) now; in particular, c∗ is the asymptotic spreading speed of (1.5).
Theorem 1.4. Suppose that f satisfies (f ) and (c∗ , q∗ ) is given by Proposition 1.3.
Let (u, g, h) be the solution of (1.5) with u0 ∈ X (h0 ). Then there exist ĥ, ĝ ∈ R
such that
limt→∞ [h(t) − c∗ t] = ĥ, limt→∞ h′ (t) = c∗ ,





t→∞ [g(t) + c∗ t] = ĝ, limt→∞ g (t) = −c∗ ,
 lim


 limt→∞ supx∈[0, h(t)] |u(t, x) − q∗ (xδ + h(t) − x)| = 0,
limt→∞ supx∈[g(t), 0] |u(t, x) − q∗ (xδ + x − g(t))| = 0,

where xδ > 0 is uniquely determined by q∗ (xδ ) = δ.


Note that for the classical model (1.3) with nonnegative initial function ũ0 with
nontrivial compact support, the propagation has a finite speed which is the minimal
speed of its traveling wave solutions. However, the solution u of (1.3) may not
converge to the minimal-speed traveling wave solution directly, and a logarithmic
correction term is needed in such a convergence when f satisfies the KPP condition,
which was first shown by Bramson [6]. Our Theorem 1.4 indicates that no such
correction occurs for (1.5) (the same was proved for (1.1) in [23]).
The next result shows that when the preferred population density at the range
boundary converges to 0, our free boundary problem (1.5) is reduced to the classical
model (1.3).
Theorem 1.5. Suppose that (f ) holds. Let u0 = uδ0 ∈ X (h0 ) and (u, g, h) =
(uδ , gδ , hδ ) be the unique solution of (1.5). If uδ0 → û0 in C 1 ([−h0 , h0 ]) as δ → 0,
with û0 satisfying (1.2), then
1,2 2
(uδ , gδ , hδ ) → (U, −∞, +∞) in Cloc ((0, ∞) × R) × [Cloc ((0, ∞))] as δ → 0,
where U (t, x) is the unique solution of (1.3) with
(
0, x ̸∈ [−h0 , h0 ],
U (0, x) =
û0 (x), x ∈ [−h0 , h0 ].
Let us now briefly compare the new model (1.5) with (1.1). The biggest dif-
ference in terms of longtime behaviour is that (1.1) exhibits a spreading-vanishing
dichotomy, where the population may vanish eventually if the initial function is
small in some sense, while for (1.5), similar to the classical model (1.3), persistent
propagation holds. The free boundary conditions in (1.5) allow the range boundary
to retract, however, we can show that the population range can never shrink to one
point (Lemma 2.5), and the front always advances after some finite time Tδ ≥ 0 (see
Lemma 2.6). Although some of the techniques used to treat (1.1) can be modified
to treat our new model (1.5), many new ideas and techniques have been developed
in this paper, which pave the way for further work on (1.5) and its generalisations,
PROPAGATION DYNAMICS 2529

and some of them can even be used to considerably simplify the existing proofs for
(1.1).
In Du and Lou [21], problem (1.1) was not only considered for monostable f (u),
but for bistable and combustion type of f (u) as well. For the new model (1.5),
when f (u) is of bistable or combustion type, we believe more significant differences
than exhibited in this paper will arise between (1.1) and (1.5), and we will consider
these questions in future work.
The rest of the paper is organised as follows. In Section 2, we deduce the free
boundary conditions and prove several basic results, including existence and unique-
ness (Theorem 1.1), some comparison principles and a priori estimates, and The-
orem 1.5. Section 3 is devoted to the understanding of the longtime behaviour of
(1.5); we first prove Theorem 1.2, then Proposition 1.3, and finally Theorem 1.4.

2. Basic results.
2.1. Deduction of the free boundary conditions. Consider a population with
density u(t, x) over a changing population range [g(t), h(t)] in one space dimension,
which may expand or shrink as time t increases in order to keep the population
density at the preferred level δ ∈ (0, 1) at the range boundary x = g(t) and x = h(t).
Let ∆t > 0 be a small time increment. From time moment t to time moment
t+∆t, by Fick’s first law, the quantity of population that enters the region (through
diffusion) bounded by the old front x = h(t) and new front x = h(t + ∆t) is
approximated by d|ux (t, h(t))|∆t, with h(t + ∆t) − h(t) having the opposite sign to
ux (t, h(t)). To keep the (average) population density at δ in this region, we require
d|ux (t, h(t))|∆t
= δ.
|h(t + ∆t) − h(t)|
Therefore
h(t + ∆t) − h(t) d
= − ux (t, h(t)).
∆t δ
Letting ∆t → 0, we deduce
d
h′ (t) = − ux (t, h(t)).
δ
We can similarly deduce
d
g ′ (t) = − ux (t, g(t)).
δ
Naturally, we must have
u(t, h(t)) = u(t, g(t)) = δ.
2.2. Local existence and uniqueness. In this subsection, we prove the following
local existence and uniqueness result by the contraction mapping theorem. It holds
for f (u) satisfying more general conditions than (f ). More precisely, it only requires
f is C 1 and f (0) = 0. (2.1)
Theorem 2.1. Suppose that (2.1) holds. Then for any given u0 ∈ X (h0 ) and any
α ∈ (0, 1), there is a T > 0 such that problem (1.5) admits a unique solution
h i2
(u, h, g) ∈ C (1+α)/2,1+α (ΩT ) × C 1+α/2 ([0, T ]) ;
moreover,
∥u∥C (1+α)/2,1+α (ΩT ) + ∥g∥C 1+α/2 ([0,T ]) + ∥h∥C 1+α/2 ([0,T ]) ≤ C (2.2)
2530 YIHONG DU

and
1+α α
h, g ∈ C 1+ 2 ((0, T ]), u ∈ C 1+ 2 ,2+α (ΩT ), u > 0 in ΩT ,
where ΩT = {(t, x) ∈ R2 : t ∈ (0, T ], x ∈ [g(t), h(t)]}, C and T only depend on h0 ,
α and ∥u0 ∥C 2 ([−h0 ,h0 ]) .
Proof. We will complete the proof in four steps. Our approach is based on that of
[20] (which followed [9]) with some significant changes though. For example, the
extension trick in Step 2 fills in a gap overlooked in the proof there and many other
existing works; such a trick corrects the mistake in using the Sobolev embedding
theorem to guarantee that the embedding constant does not depend on T (one needs
to choose T small enough in order to obtain a contraction mapping).
Step 1. We straighten the boundaries of ΩT for a given pair (g(t), h(t)) which are
continuous with (g(0), h(0)) = (−h0 , h0 ).
Choose ζ ∈ C 3 (R) such that
h0 h0 5
ζ(y) = 1 if |y − h0 | < , ζ(y) = 0 if |y − h0 | > , and |ζ ′ (y)| < for all y.
4 2 h0
Then we define ξ(y) = −ζ(−y) and consider the transformation (t, y) 7→ (t, x) given
by
x = Ψ(t, y) := y + ζ(y)(h(t) − h0 ) − ξ(y)(g(t) + h0 ) for y ∈ R, (2.3)
which, for every fixed t ≥ 0, is a diffeomorphism from R to R as long as |h(t)−h0 | <
h0 h0
12 and |g(t) + h0 | < 12 , which is guaranteed for all small t > 0, say t ∈ [0, T ].
Moreover, it’s inverse maps x = h(t) and x = g(t) to the line y = h0 and y = −h0 ,
respectively.
Direct calculations yield

 ∂y 1 p
 = ′ ′
=: A(y, h(t), g(t)),
∂x 1 + ζ (y)(h(t) − h0 ) − ξ (y)(g(t) + h0 )





2 ′′ ′′

 ∂ y = − ζ (y)(h(t) − h0 ) − ξ (y)(g(t) + h0 )


=: B(y, h(t), g(t)),

∂x2 [1 + ζ ′ (y)(h(t) − h0 ) − ξ ′ (y)(g(t) + h0 )]3 (2.4)
′ ′


 ∂y −h (t)ζ(y) + g (t)ξ(y)
=



 ∂t 1 + ζ ′ (y)(h(t) − h ) − ξ ′ (y)(g(t) + h )

 0 0

′ ′

 =: −h (t)C(y, h(t), g(t)) + g (t)D(y, h(t), g(t)).

Next we define U (t, y) := u(t, x) for (t, x) ∈ ΩT . Then (1.5) for t ∈ (0, T ] is
equivalent to


 Ut − dAUyy − (h′ C − g ′ D + dB)Uy = f (U ), 0 < t ≤ T, −h0 < y < h0 ,

U (t, h0 ) = U (t, −h0 ) = δ, 0 < t ≤ T,




h′ (t) = − d U (t, h ),

0 < t ≤ T,
δ y 0
(2.5)
g ′ (t) = − dδ Uy (t, −h0 ),
 0 < t ≤ T,

h(0) = h0 , g(0) = −h0 ,





U (0, y) = u0 (y), −h0 ≤ y ≤ h0 .

Step 2. An extension trick, and application of Lp theory and Sobolev embeddings.


h0 h0
Let T1 := min{ 12(1+h 0 ) , 12(1−g 0 ) } with h
0
= − dδ u′0 (h0 ) and g 0 = − dδ u′0 (−h0 ).
Then, for T ∈ (0, T1 ] and DT := {(t, y) : 0 ≤ t ≤ T, −h0 ≤ y ≤ h0 }, we introduce
PROPAGATION DYNAMICS 2531

the function spaces


X1,T := {U ∈ C(DT ) : U (0, y) = u0 (y), ∥U − u0 ∥C(DT ) ≤ 1},
X2,T := {h ∈ C 0,1 ([0, T ]) : h(0) = h0 , h′ (0) = h0 , ∥h′ − h0 ∥L∞ ([0,T ]) ≤ 1},
X3,T := {g ∈ C 0,1 ([0, T ]) : g(0) = −h0 , g ′ (0) = g 0 , ∥g ′ − g 0 ∥L∞ ([0,T ]) ≤ 1}.
Q3
Clearly, XT := i=1 Xi,T is a complete metric space with the following metric:
d (U1 , h1 , g1 ), (U2 , h2 , g2 ) := ∥U1 −U2 ∥C(DT ) +∥h′1 −h′2 ∥L∞ ([0,T ]) +∥g1′ −g2′ ∥L∞ ([0,T ]) .

Q3
With 0 < T < T1 , we define a subspace of XT1 , denoted by XTT1 = i=1 Xi,T T
1
,
by
T
X1,T 1
:= {U ∈ X1,T1 : U (t, y) = U (T, y) for T ≤ t ≤ T1 },
T
X2,T1
:= {h ∈ X2,T1 : h(t) = h(T ) for T ≤ t ≤ T1 },
T
X3,T1
:= {g ∈ X3,T1 : g(t) = g(T ) for T ≤ t ≤ T1 }.
Clearly for each (U, h, g) ∈ XT , we can extend it to be a member of XTT1 as
indicated above. For this reason, in what follows we will always identify XT with
XTT1 .
For each (U, h, g) ∈ XT = XTT1 ⊆ XT1 , by the expressions of A, B, C and D in
(2.4), we have the following estimates:
∥h′ C − g ′ D + dB∥C(DT1 )
h′ (t)ζ(y) − g ′ (t)ξ(y)
≤ sup
(t,y)∈DT1 1+ ζ ′ (y)(h(t) − h0 ) − ξ ′ (y)(g(t) + h0 )
(2.6)
ζ ′′ (y)(h(t) − h0 ) + ξ ′′ (y)(g(t) + h0 )
+d sup
(t,y)∈DT1 [1 + ζ ′ (y)(h(t) − h0 ) − ξ ′ (y)(g(t) + h0 )]3
≤ L1 ,
where L1 depends on h0 , h0 and g 0 . Besides, it is easy to check that for (t, y) ∈ DT1 ,
d
≤ dA(y, h(t), g(t)) ≤ 36d. (2.7)
4
Next, for Y1 = (s1 , y1 ) andpY2 = (s2 , y2 ) belonging to DT1 with the parabolic
distance given by δ(Y1 , Y2 ) = (y1 − y2 )2 + |s1 − s2 |, we have
ω(R) := d max sup A(y1 , h(s1 ), g(s1 )) − A(y2 , h(s2 ), g(s2 ))
Y1 ,Y2 ∈DT1 δ(Y1 ,Y2 )≤R
2
≤ 1296 d 1 + ζ ′ (y2 )(h(s2 ) − h0 ) − ξ ′ (y2 )(g(s2 ) + h0 )
2 (2.8)
− 1 + ζ ′ (y1 )(h(s1 ) − h0 ) − ξ ′ (y1 )(g(s1 ) + h0 )
 h0 + g 0 + 2 
≤ l1 R + h0 R → 0 as R → 0,
h0
where l1 is some suitably large constant depending on h0 , h0 , g 0 .
With (U, h, g) given above, we now consider the following initial boundary value
problem for Ū :

′ ′
Ūt − dAŪyy − (h C − g D + dB)Ūy = f (U ), 0 < t ≤ T1 , −h0 < y < h0 ,

Ū (t, h0 ) = Ū (t, −h0 ) = δ, 0 < t ≤ T1 , (2.9)

Ū (0, y) = u0 (y), −h0 ≤ y ≤ h0 .

2532 YIHONG DU

In view of (2.6), (2.7), (2.8), and u0 ∈ C 2 ([−h0 , h0 ]) with u0 (±h0 ) = δ, we are able
to apply the Lp theory to (2.9) and use the Sobolev embedding theorem (see [38]),
to conclude that (2.9) has a unique solution Ū with

∥Ū ∥ 1+α ,1+α ≤ CT1 ∥Ū ∥Wp2,1 (DT ) ≤ K1 , (2.10)


C 2 (DT1 ) 1

where p > 3/(2 − α), K1 depends on p , ∥f (U )∥Lp (DT1 ) , ∥u0 ∥C 2 ([−h0 ,h0 ]) , L1 , DT1 ,
l1 and CT1 , and CT1 depends on DT1 and α ∈ (0, 1).
Define (h̄(t), ḡ(t)) by
d d
h̄′ (t) = − Ūy (t, h0 ), ḡ ′ (t) = − Ūy (t, −h0 ).
δ δ
α
Clearly h̄′ , ḡ ′ ∈ C 2 ([0, T1 ]) with

∥h̄′ ∥C α2 ([0,T ]) , ∥ḡ ′ ∥C α2 ([0,T ≤ K2 , (2.11)


1 1 ])

where K2 depends on K1 .
 2
We now define the mapping F : XT = XTT1 → C(DT1 ) × C([0, T1 ]) by

F (U, h, g) = (Ū , h̄, ḡ).

Set
F̃ (U, h, g) := F (U, g, h)|XT .
Then we see that (U, h, g) is a fixed point of F̃ if and only if it solves (2.5), which
is equivalent to (1.5) for t ∈ [0, T ].
Step 3. We show that F̃ is a contraction mapping for small enough T > 0.
−2 −2
For every fixed 0 < T < min{T1 , K11+α , K2α }, we have
1+α 1+α 1+α
∥Ū − u0 ∥C(DT ) ≤ T 2 ∥Ū ∥ 1+α ≤T 2 ∥Ū ∥ 1+α ≤ K1 T 2 ≤ 1,
C 0, 2 (DT ) C 0, 2 (DT1 )
α α α
∥h̄′ (t) − h0 ∥L∞ ([0,T ]) ≤ T ∥h̄′ ∥C α2 ([0,T ]) ≤ T ∥h̄′ ∥C α2 ([0,T
2 2 ≤ K2 T 2 ≤ 1,
1 ])
α α α
∥ḡ ′ (t) − g 0 ∥L∞ ([0,T ]) ≤ T ∥ḡ ′ ∥C α2 ([0,T ]) ≤ T ∥ḡ ′ ∥C α2 ([0,T
2 2 ≤ K2 T 2 ≤ 1,
1 ])

which implies that F̃ maps XT to itself.


Next we shall prove that F̃ is a contraction map on XT for all small T > 0. Let
(Ui , hi , gi ) ∈ XT = XTT1 for i = 1, 2, and set

W := Ū1 − Ū2 .

Then we have

′ ′
Wt − dA1 Wyy − (dB1 + h1 C1 − g1 D1 )Wy = Ψ, −h0 < y < h0 , 0 < t ≤ T1 ,

W (t, h0 ) = W (t, −h0 ) = 0, 0 < t ≤ T1 ,

W (0, y) = 0, −h0 ≤ y ≤ h0 ,

(2.12)
where
Ψ : = dB1 − dB2 + h′1 C1 − h′2 C2 − g1′ D1 + g2′ D2 Ū2,y


+ (dA1 − dA2 )Ū2,yy + f (U1 ) − f (U2 ),


PROPAGATION DYNAMICS 2533

and Ai := A(y, hi , gi ), Bi := B(y, hi , gi ), Ci := C(y, hi , gi ) and Di := D(y, hi , gi )


for i = 1, 2. Direct calculations show that
∥A1 − A2 ∥C(DT1 )
= sup A(y, h1 (t), g1 (t)) − A(y, h2 (t), g2 (t))
(t,y)∈DT1
1
= sup 2 (2.13)
(1 + ζ ′ (y)(h1 (t) − h0 ) − ξ ′ (y)(g1 (t) + h0 ))
(t,y)∈DT1
1
− 2
(1 + ζ (y)(h2 (t) − h0 ) − ξ ′ (y)(g2 (t) + h0 ))

≤ S1 (∥h1 − h2 ∥C[0,T1 ] + ∥g1 − g2 ∥C[0,T1 ] )

with S1 being some constant depending on h0 , h0 and g 0 but independent of T1 .


Similarly, we can find a positive constant S2 being dependent on h0 , h0 and g 0 but
independent of T1 such that
∥B1 − B2 ∥C(DT1 ) , ∥C1 − C2 ∥C(DT1 ) , ∥D1 − D2 ∥C(DT1 )
 (2.14)
≤ S2 ∥h1 − h2 ∥C[0,T1 ] + ∥g1 − g2 ∥C[0,T1 ] .

Moreover, by (2.4), it is clear that


∥C2 ∥C(DT1 ) + ∥D2 ∥C(DT1 ) ≤ 12 =: S3 . (2.15)

Therefore, by (2.10), (2.13), (2.14) and (2.15), we have, for any p > 1,
∥Ψ∥Lp (DT1 ) ≤ ∥dB1 − dB2 + h′1 C1 − h′2 C2 − g1′ D1 + g2′ D2 ∥L∞ (DT1 ) ∥Ū2,y ∥Lp (DT1 )
+ d∥A1 − A2 ∥L∞ (DT1 ) ∥Ū2,yy ∥Lp (DT1 ) + ∥f (U1 ) − f (U2 )∥Lp (DT1 )
≤ S1∗ ∥U1 − U2 ∥C(DT1 ) + ∥h1 − h2 ∥C 0,1 ([0,T1 ]) + ∥g1 − g2 ∥C 0,1 ([0,T1 ]) ,


(2.16)
where S1∗ depends on DT1 , K1 , f and Si for i = 1, 2.
In view of (2.6), (2.7), (2.8) and (2.16), we may apply the Lp estimate to (2.12),
and use the Sobolev embedding theorem, to obtain
∥W ∥ 1+α ,1+α ≤ CT1 ∥W ∥Wp1,2 (DT )
C 2 (DT ) 1
1
 (2.17)
≤ K3 ∥U1 − U2 ∥C(DT1 ) + ∥h1 − h2 ∥C 0,1 ([0,T1 ]) + ∥g1 − g2 ∥C 0,1 ([0,T1 ]) ,

where K3 depends on p, DT1 , S1∗ , L1 , l1 and CT1 .


By the definitions of h̄i and ḡi , we know
d
∥h̄′1 − h̄′2 ∥C α2 ([0,T ≤ ∥Ū1,y (t, h0 ) − Ū2,y (t, h0 )∥C 0, α2 (D ) ,
1 ]) δ T1
(2.18)
d
∥ḡ1′ − ḡ2′ ∥C α2 ([0,T ≤ ∥Ū1,y (t, −h0 ) − Ū2,y (t, −h0 )∥C 0, α2 (D ) .
1 ]) δ T1

It follows from (2.17) and (2.18) that


∥Ū1 − Ū2 ∥ 1+α ,1+α + ∥h̄′1 − h̄′2 ∥C α2 ([0,T
+ ∥ḡ1′ − ḡ2′ ∥C α2 ([0,T ])
C 2 (DT ) 1 ])
1
1
′ ′ ′ ′

≤ K4 ∥U1 − U2 ∥C(DT1 ) + ∥h1 − h2 ∥L∞ ([0,T1 ]) + ∥g1 − g2 ∥L∞ ([0,T1 ])

with K4 depending on d, δ and K3 .


2534 YIHONG DU

−2 −2 −2
If we take T = min{ 21 , T1 , K11+α , K2α , K4α }, then we have

∥Ū1 − Ū2 ∥C(DT ) + ∥h̄′1 − h̄′2 ∥C([0,T ]) + ∥ḡ1′ − ḡ2′ ∥C([0,T ])


1+α α α
≤T 2 ∥Ū1 − Ū2 ∥ 1+α ,1+α + T 2 |h̄′1 − h̄′2 ∥C α2 ([0,T + T 2 |ḡ1′ − ḡ2′ ∥C α2 ([0,T
C 2 (DT ) 1 ]) 1 ])
1

1
∥U1 − U2 ∥C(DT1 ) + ∥h′1 − h′2 ∥L∞ ([0,T1 ]) + ∥g1′ − g2′ ∥L∞ ([0,T1 ])


2
1
∥U1 − U2 ∥C(DT ) + ∥h′1 − h′2 ∥L∞ ([0,T ]) + ∥g1′ − g2′ ∥L∞ ([0,T ]) ,

=
2

which shows that F̃ is a contraction mapping on XT . Therefore, it has a unique


fixed point (U, h, g) in XT . The maximum principle implies that U > 0 in [0, T ] ×
[−h0 , h0 ].
Step 4. Finally, we apply the Schauder theory to obtain additional regularity for
the solution of (2.5) in [−h0 , h0 ] × (0, T ].
1+α α
From Step 3 we have U ∈ C 2 ,1+α (DT ), h, g ∈ C 1+ 2 ([0, T ]). Therefore, in
(2.5), we have
α α
dA ∈ C 2 ,α (DT ) and h′ C − g ′ D + dB ∈ C 2 ,α (DT ). (2.19)

Since u0 ∈ C 2 ([−h0 , h0 ]), we are not able to apply the Schauder theory directly
to (2.5). We use the usual trick involving a cutting-off function to get around this.
For any given small constant ε > 0, we choose ξ ∗ ∈ C ∞ ([0, T ]) such that
(
1, for t ∈ [2ε, T ],
ξ ∗ (t) =
0, for t ∈ [0, ε].

Then we define Ũ := U ξ ∗ . By (2.5), we have the following new system



′ ′
Ũt = dAŨyy + (h C − g D + dB)Ũy + F (t, y), 0 < t ≤ T, −h0 < y < h0 ,

Ũ (t, h0 ) = Ũ (t, −h0 ) = δξ ∗ (t), 0 < t ≤ T,

Ũ (0, y) = 0, −h0 ≤ y ≤ h0 ,

(2.20)
where
F (t, y) = U ξt∗ − f (U )ξ ∗ .
α
Due to F ∈ C 2 ,α ([0, T ]×[−h0 , h0 ]), and (2.19), (2.7), we may apply the Schauder
estimate (see, e.g., Theorem 5.14 in [38]) to (2.20) to obtain

∥U ∥C 1+ α2 2+α ([2ε,T ]×[−h ≤ ∥Ũ ∥C 1+ α2 ,2+α ([0,T ]×[−h


0 ,h0 ]) 0 ,h0 ])

≤ R ∥F ∥C α2 ,α ([0,T ]×[−h ,
0 ,h0 ])

where R depends on ε, h0 , α and L1 .


Since ε can be arbitrarily small, it follows that
1+α α
h, g ∈ C 1+ 2 ((0, T ]), u ∈ C 1+ 2 ,2+α (ΩT ).

Hence (u, h, g) is a classical solution of (1.5) over ΩT .


PROPAGATION DYNAMICS 2535

2.3. Comparison principles and a priori bounds.


Lemma 2.2. Suppose that (2.1) holds, T ∈ (0, ∞), g, h ∈ C 1 ([0, T ]), u ∈ C(DT ) ∩
C 1,2 (DT ) with DT = {(t, x) ∈ R2 : 0 < t ≤ T, g(t) < x < h(t)}, and

 ut − duxx ≥ f (u),


0 < t ≤ T, g(t) < x < h(t),
 u ≥ u, g(t) ≥ g(t), 0 ≤ t ≤ T, x = g(t),




u = δ, h (t) ≥ − dδ ux , 0 < t ≤ T, x = h(t),

u ≥ u if h(t) ≤ h(t) and x = h(t),





h0 < h(0), u0 (x) ≤ u(0, x), x ∈ [g(0), h0 ],

where (u, g, h) is the solution to (1.5). Then


h(t) < h(t) in (0, T ],
u(t, x) < u(t, x) for t ∈ (0, T ] and g(t) < x < h(t).

Proof. We claim that h(t) < h(t) for all t ∈ (0, T ]. Clearly this is true for small
t > 0 since h(0) = h0 < h(0). If our claim does not hold, then we can find a first
t∗ ≤ T such that h(t) < h(t) for t ∈ (0, t∗ ) and h(t∗ ) = h(t∗ ). It follows that

h′ (t∗ ) ≥ h (t∗ ). (2.21)
We now compare u and u over the region
Ωt∗ := {(t, x) ∈ R2 : 0 < t ≤ t∗ , g(t) < x < h(t)}.
The strong maximum principle yields u(t, x) < u(t, x) in Ωt∗ . Hence w(t, x) :=
u(t, x) − u(t, x) > 0 in Ωt∗ with w(t∗ , h(t∗ )) = 0. It then follows from the Hopf
boundary lemma that wx (t∗ , h(t∗ )) < 0, from which we deduce
′ dh i d
h (t∗ ) − h′ (t∗ ) ≥ − ux (t∗ , h(t∗ )) − ux (t∗ , h(t∗ )) = − wx (t∗ , h(t∗ )) > 0,
δ δ
which is a contradiction to (2.21). This proves our claim that h(t) < h(t) for all
t ∈ (0, T ]. We may now apply the usual comparison principle over ΩT to conclude
that u < u in ΩT .
The following variation of Lemma 2.2, which can be proved similarly, will also
be used later.
Lemma 2.3. Suppose that (2.1) holds, T ∈ (0, ∞), g, h ∈ C 1 ([0, T ]), u ∈ C(DT ) ∩
C 1,2 (DT ) with DT = {(t, x) ∈ R2 : 0 < t ≤ T, g(t) < x < h(t)}, and


 ut − duxx ≥ f (u), 0 < t ≤ T, g(t) < x < h(t),

′ d
 u = δ, g (t) ≤ − δ ux , 0 < t ≤ T, x = g(t),



 u = δ, h′ (t) ≥ − d u ,


δ x 0 < t ≤ T, x = h(t),


 u ≥ u if g(t) ≥ g(t) and x = g(t),

u ≥ u if h(t) ≤ h(t) and x = h(t),





[−h0 , h0 ] ⊂ (g(0), h(0)), u0 (x) ≤ u(0, x), x ∈ [−h0 , h0 ],

where (u, g, h) is the solution to (1.5). Then


[g(t), h(t)] ⊂ (g(t), h(t)) for t ∈ (0, T ],
u(t, x) < u(t, x) for t ∈ (0, T ] and g(t) < x < h(t).
2536 YIHONG DU

Proof. This is only a simple modification of the proof of Lemma 2.2. As the changes
are obvious, we omit the details.
Remark 2.4. We will call the triple (u, g, h) in Lemmas 2.2 and 2.3 an upper
solution of the problem (1.5). We can define lower solutions by reversing all the
inequalities in the obvious places, and easily prove an analogue of Lemmas 2.2
and 2.3 for lower solutions. There is a symmetric version of Lemma 2.2, where
the conditions on the left and right boundaries are interchanged. We also have
corresponding comparison results for lower solutions in each case.
The next result implies that the population range [g(t), h(t)] never shrinks to a
single point as long as the solution is defined and bounded.
Lemma 2.5. Suppose that (2.1) holds, T ∈ (0, ∞), (u, g, h) solves (1.5) for 0 <
t < T and there exists C > 0 such that u(t, x) ≤ C for t ∈ (0, T ), x ∈ [g(t), h(t)].
Then u(t, x) > 0 for t ∈ (0, T ) and x ∈ [g(t), h(t)], and lim inf t→T − [h(t) − g(t)] > 0.
Proof. Clearly u is the unique solution of the initial boundary value problem

vt − dvxx = f (v),
 t ∈ (0, T ), x ∈ (g(t), h(t)),
v(t, g(t)) = v(t, h(t)) = δ, t ∈ (0, T ),

v(0, x) = u0 (x), x ∈ [−h0 , h0 ].

By the maximum principle we have u(t, x) > 0 for t ∈ (0, T ) and x ∈ [g(t), h(t)].
This proves the first part of the lemma.
Denote DT := {(t, x) ∈ R2 : 0 < t < T, g(t) ≤ x ≤ h(t)}. By what is proved
above, and the assumption, we have
0 < u(t, x) ≤ C in DT .
By (2.1), there exists M > 0 such that f (u) ≥ −M u for u ∈ [0, C]. Define
Z h(t)
U (t) := u(t, x)dx.
g(t)

Then, for t ∈ (0, T ), we have


Z h(t)
U ′ (t) = h′ (t)u(t, h(t)) − g ′ (t)u(t, g(t)) + ut (t, x)dx
g(t)
Z h(t)
= δh′ (t) − δg ′ (t) + [duxx + f (u)]dx
g(t)
Z h(t)
′ ′
= δh (t) − δg (t) + dux (t, h(t)) − dux (t, g(t)) + f (u)dx
g(t)
Z h(t) Z h(t)
= f (u)dx ≥ −M udx = −M U (t).
g(t) g(t)

It follows that
U (t) ≥ U (0)e−M t > 0 for t ∈ (0, T ).
Hence lim inf t→T U (t) ≥ U (0)e−M T > 0. Clearly this implies lim inf t→T [h(t) −
g(t)] > 0.
The result below indicates that the population range [g(t), h(t)] will expand after
some finite time.
PROPAGATION DYNAMICS 2537

Lemma 2.6. Suppose that (f ) holds and (u, g, h) solves (1.5) for t ∈ (0, T ) for
some T ∈ (0, ∞]. Then there exists Tδ ≥ 0, depending only on u0 and f , such
that, whenever T > Tδ , u(t, x) ≥ δ for x ∈ [g(t), h(t)] and t ∈ [Tδ , T ); hence
h′ (t) ≥ 0 ≥ g ′ (t) for t ∈ [Tδ , T ).
Proof. Let m0 := minx∈[−h0 ,h0 ] u0 (x). Then δ ≥ m0 > 0. Consider the auxiliary
ODE problem
v ′ = f (v), v(0) = m0 .
By (f ) we see that v(t) > 0 for all t > 0 and v(t) → 1 as t → ∞. Moreover, v(t)
is increasing in t. Therefore there exists a unique Tδ ≥ 0 such that v(Tδ ) = δ and
m0 ≤ v(t) ≤ δ for t ∈ [0, Tδ ]. By the usual comparison principle over the region
{(t, x) : 0 ≤ t ≤ Tδ , g(t) ≤ x ≤ h(t)}, we obtain u(t, x) ≥ v(t) in this region. It
follows that u(Tδ , x) ≥ v(Tδ ) = δ for x ∈ [g(Tδ ), h(Tδ )]. We may now compare
u(t, x) with u ≡ δ over the region {(t, x) : t ∈ [Tδ , T ), g(t) ≤ x ≤ h(t)} to conclude
that u(t, x) ≥ δ here, and the desired conclusion is proved.
Lemma 2.7. Suppose that (f ) holds, (u, g, h) is a solution to (1.5) defined for
t ∈ [0, T ) for some T ∈ (Tδ , ∞). Then there exist C1 > 0 and C2 > 0, both
independent of T , such that
(
0 ≤ u(t, x) ≤ C1 for t ∈ [0, T ) and x ∈ [g(t), h(t)],
′ ′
−g (t), h (t) ∈ [0, C2 ] for t ∈ (Tδ , T ).
Proof. Let m∗ := maxx∈[g(Tδ ),h(Tδ )] u(Tδ , x). Clearly m∗ ≥ δ. Let w(t) be the
unique solution of the ODE problem
w′ = f (w), w(Tδ ) = m∗ .
Then by (f ) we know that w(t) → 1 as t → ∞ and w(t) > δ for all t > Tδ . It follows
immediately by the usual comparison principle that u(t, x) ≤ w(t) for t ∈ (Tδ , T )
and x ∈ [g(t), h(t)]. Let
C1 := max{∥u∥L∞ (DTδ ) , sup w(t)},
t≥Tδ

where
DTδ := {(t, x) : 0 < t ≤ Tδ , g(t) < x < h(t)}.
Then clearly
u(t, x) ≤ C1 for t ∈ (0, T ), g(t) ≤ x ≤ h(t).
Note that C1 ≥ 1 > δ. Since f is C 1 and f (0) = 0, there exists K > 0 depending
on C1 such that f ′ (u) ≤ K for u ∈ [0, 2C1 ].
Define, for some M > 0 to be specified later,
(
Ω := {(t, x) : 0 < t < T, h(t) − M −1 < x < h(t)},
 
u(t, x) := (2C1 − δ) 2M (h(t) − x) − M 2 (h(t) − x)2 + δ.
Note that since h(t) − g(t) ≥ h(Tδ ) − g(Tδ ) for t ∈ [Tδ , T ), if M −1 < mint∈[0,Tδ ]
[h(t) − g(t)], then Ω ⊂ {(t, x) : 0 < t < T, g(t) < x < h(t)}. We assume from now
on that
M > 1/ min [h(t) − g(t)].
t∈[0,Tδ ]
Then
u(t, x) = (2C1 − δ) 1 − [1 − M (h(t) − x)]2 + δ ≤ 2C1 for (t, x) ∈ Ω,


u(t, h(t)) = δ, u(t, h(t) − M −1 ) = 2C1 > u(t, h(t) − M −1 ).


2538 YIHONG DU

Moreover, for (t, x) ∈ Ω with t ≥ Tδ , due to h′ (t) ≥ 0, we have


ut − duxx − f (u) = (2C1 − δ)h′ (t)[2M − 2M 2 (h(t) − x)] + d(2C1 − δ)2M 2 − f (u)
≥ 2dM 2 (2C1 − δ) − 2KC1 ≥ 0
provided that
 1/2
KC1
M≥ .
d(2C1 − δ)
We show next that
u(Tδ , x) ≥ u(Tδ , x) for x ∈ [h(Tδ ) − M −1 , h(Tδ )]
if M is large enough. Indeed, for (t, x) ∈ Ω, we have
ux (t, x) = (2C1 − δ)[−2M + 2M 2 (h(t) − x)] ≤ 0,
and for x ∈ [h(t) − (2M )−1 , h(t)],
ux (t, x) = (2C1 − δ)[−2M + 2M 2 (h(t) − x)] ≤ −(2C1 − δ)M.
Since u(t, h(t)) = u(t, h(t)) = δ, we see that if
M (2C1 − δ) ≥ max |ux (Tδ , x)|,
x∈[g(Tδ ),h(Tδ )]

then
for x ∈ [h(Tδ ) − (2M )−1 , h(Tδ )],


 u(Tδ , x) ≥ u(Tδ , x)
u(T , x) ≥ u(T , h(t) − (2M )−1 )

δ δ
1


 > 2 (2C 1 − δ) + δ > C1 ≥ u(Tδ , x)
for x ∈ [h(Tδ ) − M −1 , h(Tδ ) − (2M )−1 ].

Therefore we can apply the usual comparison principle over {(t, x) : Tδ < t <
T, h(t) − M −1 < x < h(t)} to conclude that u(t, x) ≤ u(t, x) for t ∈ (Tδ , T ). Since
u(t, h(t)) = u(t, h(t)) = δ, it follows that
ux (t, h(t)) ≥ ux (t, h(t)) = −2M (2C1 − δ) for t ∈ (Tδ , T ).
Hence
d d
0 ≤ h′ (t) = − ux (t, h(t)) ≤ C2 := 2M (2C1 − δ) for t ∈ (Tδ , T ),
δ δ
where
( 1/2 )
maxx∈[g(Tδ ),h(Tδ )] |ux (Tδ , x)|

1 KC1
M := 1 + max , , .
mint∈[0,Tδ ] [h(t) − g(t)] d(2C1 − δ) 2C1 − δ

The proof is now complete.

2.4. Global existence.

Proof of Theorem 1.1. By Theorem 2.1, we know that (1.5) has a unique solution
(u, g, h) defined on some maximal time interval (0, Tm ), and
1+α α
h, g ∈ C 1+ 2 ((0, Tm )), u ∈ C 1+ 2 ,2+α (ΩTm )
with
ΩTm := {(t, x) : t ∈ (0, Tm ), x ∈ [g(t), h(t)]}.
PROPAGATION DYNAMICS 2539

It remains to show that Tm = ∞. Arguing indirectly we assume Tm < ∞. Then,


by the proof of Theorem 2.1, and Lemmas 2.5, 2.6 and 2.7, there exist C1 , C2
independent of Tm such that for t ∈ [0, Tm ) and x ∈ [g(t), h(t)],
0 ≤ u(x, t) ≤ C1 ,
|h′ (t)| + |g ′ (t)| ≤ C2 ,
|h(t)|, |g(t)| ≤ C2 t + h0 .
For any small constant ε > 0, we can infer from the proof of Theorem 2.1, and
1+α
Lemmas 2.5, 2.6 and 2.7, that u ∈ C 2 ,1+α (ΩTm −ε ). Therefore, as in Step 4 of the
proof of Theorem 2.1, by Schauder’s estimates, for fixed 0 < T < Tm − ε, we have
∥u∥C 1+ α2 ,2+α (Ω ≤ Q∗ ,
Tm −ε \ΩT )

where Q∗ depends on T , Tm and Ci for i = 1, 2, but are independent of ε. Since ε > 0


can be arbitrarily small, it follows that for any t ∈ [T, Tm ), ∥u(t, ·)∥C 2+α ([g(t),h(t)]) ≤
Q∗ .
Now we can repeat the proof of Theorem 2.1, to conclude that there exists τ > 0
depending on Q∗ and Ci (i = 1, 2) such that (1.5) with initial time Tm − τ2 has
a unique solution (u, g, h) which is defined for larger t at least up to Tm − τ2 + τ ,
which is a contradiction to the definition of Tm .

2.5. Limit as δ → 0.

Proof of Theorem 1.5. We complete the proof in two steps.


Step 1. We show that for any given small ϵ > 0,
lim inf h(t) = ∞, lim sup g(t) = −∞. (2.22)
δ→0 t≥ϵ δ→0 t≥ϵ

Let ϵ > 0 be an arbitrarily given small real number. It suffices to show that
for any M > 0 there exists δM > 0 such that the solution (uδ , gδ , hδ ) satisfies, for
δ ∈ (0, δM ],
inf hδ (t) ≥ M, sup gδ (t) ≤ −M.
t≥ϵ t≥ϵ

Since û0 satisfies (1.2), there exists ϵ0 > 0 such that


û0 (x) ≥ ϵ0 , uδ0 (x) ≥ ϵ0
for x ∈ [−h0 + ϵ, h0 − ϵ] and all small δ > 0, say δ ∈ (0, δ0 ] ⊂ (0, ϵ0 /2].
Since f ′ (0) > 0, there exists ϵ̃0 ∈ (0, ϵ0 ] such that f ′ (u) > 0 for u ∈ [0, 2ϵ̃0 ]. Then
choose a C 1 function f˜(u) such that
0 < f˜(u) ≤ f (u) for u ∈ (0, ϵ̃0 ), f˜(ϵ̃0 ) = 0,
and for some fixed α ∈ (0, 1), choose a C 2+α function v0 satisfying
v0 (±(h0 −ϵ)) = 0, ϵ̃0 /2 ≥ v0 (x) > 0 in (−h0 +ϵ, h0 −ϵ), v0′ (−h0 +ϵ) > 0 > v0′ (h0 −ϵ).
We note that
uδ0 ≥ ϵ0 ≥ v0 + δ in [−h0 + ϵ, h0 − ϵ] for any δ ∈ (0, δ0 ].
With the above given ϵ and M , we now define
M
h̃(t) := h0 − ϵ + t, (2.23)
ϵ
2540 YIHONG DU

and consider the initial boundary value problem



˜
vt − dvxx = f (v), t > 0, x ∈ (−h̃(t), h̃(t)),

v(t, ±h̃(t)) = 0, t > 0,

v(0, x) = v0 (x), x ∈ [−h0 + ϵ, h0 − ϵ].

By standard Schauder theory this problem has a unique solution v(t, x), and the
maximum principle and Hopf boundary lemma infer
ϵ̃0 > v(t, x) > 0 for t ≥ 0, x ∈ (−h̃(t), h̃(t)), vx (t, h̃(t)) < 0 < vx (t, −h̃(t)) for t > 0.
The properties of v0 and the smoothness of v further indicate that
vx (t, h̃(t)) < 0 < vx (t, −h̃(t)) for t ≥ 0 and they are continuous functions of t.
Therefore there exists σϵ > 0 such that
vx (t, h̃(t)) ≤ −σϵ , vx (t, −h̃(t)) ≥ σϵ for t ∈ [0, ϵ].
It follows that
M d d
h̃′ (t) = ≤ σϵ ≤ ∓ vx (t, ±h̃(t)) for t ∈ [0, ϵ] and all δ ∈ (0, σM
ϵd ϵ
].
ϵ δ δ
Define
ṽ(t, x) := v(t, x) + δ.
Then for δ ∈ (0, δ̃M ] with
 
σϵ dϵ
δ̃M := min , ϵ̃0 , δ0 ,
M
we have



δ ≤ ṽ(t, x) ≤ 2ϵ0 , t ∈ (0, ϵ], x ∈ (−h̃(t), h̃(t)),

 ˜
ṽt − dṽxx = f (v) ≤ f (v) ≤ f (ṽ), t ∈ (0, ϵ], x ∈ (−h̃(t), h̃(t)),

ṽ(t, ±h̃(t)) = δ, t ∈ [0, ϵ],
′ d

h̃ (t) ≤ ∓ δ ṽx (t, ±h̃(t)), t ∈ [0, ϵ],




ṽ(0, x) ≤ uδ (x),

x ∈ [−h̃(0), h̃(0)] ⊂ (−h0 , h0 ).
0

If we can show
uδ (t, x) ≥ δ for t ∈ [0, ϵ], x ∈ {−h̃(t), h̃(t)} ∩ [gδ (t), hδ (t)], (2.24)
then we can apply the lower solution version of Lemma 2.3 to conclude that
[−h̃(t), h̃(t)] ⊂ (gδ (t), hδ (t)), ṽ(t, x) ≤ uδ (t, x)
for t ∈ [0, ϵ], x ∈ [−h̃(t), h̃(t)], δ ∈ (0, δM ].
In particular, for δ ∈ (0, δM ],
  
gδ (ϵ), hδ (ϵ) ⊃ − h̃(ϵ), h̃(ϵ) ⊃ [−M, M ],
and
uδ (ϵ, x) ≥ ṽ(ϵ, x) = v(ϵ, x) + δ ≥ δ for x ∈ [−h̃(ϵ), h̃(ϵ)].
To show (2.24), it suffices to prove
uδ (t, x) ≥ δ for all t > 0, x ∈ [gδ (t), hδ (t)] and small δ > 0. (2.25)
Since uδ0 1
→ û0 in C ([−h0 , h0 ]) with û0 satisfying (1.2), we have
uδ0 (x) ≥ δ for x ∈ [−h0 , h0 ] and all small δ > 0.
PROPAGATION DYNAMICS 2541

It follows that, for all small δ > 0, u ≡ δ is a lower solution of



ut − duxx = f (u),
 t > 0, gδ (t) < x < hδ (t),
u(t, gδ (t)) = u(t, hδ (t)) = δ, t > 0,

u(0, x) = uδ0 (x), −h0 ≤ x ≤ h0 .

Since uδ solves the above system, the comparison principle infers uδ (t, x) ≥ δ for
t > 0 and x ∈ [gδ (t), hδ (t)], as desired.
Next we consider the auxiliary free boundary problem
˜

v̂t − dv̂xx = f (v̂),


t > ϵ, x ∈ (ĝ(t), ĥ(t)),
v̂(t, ĝ(t)) = v̂(t, ĥ(t)) = 0, t > ϵ,




ĝ ′ (t) = − d v̂ (t, ĝ(t)),

t > ϵ,
δ x (2.26)
′ d


 ĥ (t) = − δ v̂x (t, ĥ(t)), t > ϵ,
−ĝ(ϵ) = ĥ(ϵ) = h̃(ϵ),





v̂(ϵ, x) = v(ϵ, x), x ∈ [−h̃(ϵ), h̃(ϵ)].

It is well known (see [21]) that (2.26) has a unique solution (û, ĝ, ĥ). Moreover,
by the maximum principle and Hopf boundary lemma we have, similarly as for the
solution v(t, x) above,
ϵ̃0 > v̂(t, x) > 0 for t ≥ ϵ, x ∈ (ĝ(t), ĥ(t)), v̂x (t, ĥ(t)) < 0 < vx (t, ĝ(t)) for t ≥ ϵ.
Therefore
w(t, x) := v̂(t, x) + δ
satisfies, for every δ ∈ (0, δM ],

δ ≤ w(t, x) ≤ 2ϵ0 ,

 t ∈ [ϵ, ∞), x ∈ (ĝ(t), ĥ(t)),

 ˜
wt − dwxx = f (v̂) ≤ f (v̂) ≤ f (w), t ∈ [ϵ, ∞), x ∈ (ĝ(t), ĥ(t)),

w(t, ĝ(t)) = w(t, ĥ(t)) = δ, t ∈ [ϵ, ∞),
′ d ′ d

ĝ (t) = − δ wx (t, ĝ(t)), ĥ (t) = − δ wx (t, ĥ(t)), t ∈ [ϵ, ∞),





w(ϵ, x) = v(ϵ, x) + δ ≤ u (ϵ, x),
δ x ∈ [ĝ(ϵ), ĥ(ϵ)] ⊂ [gδ (ϵ), hδ (ϵ)].
By (2.25), we have
uδ (t, x) ≥ δ for t ≥ ϵ, x ∈ {ĝ(t), ĥ(t)} ∩ [gδ (t), hδ (t)], δ ∈ (0, δM ],
and hence we can apply the lower solution version of Lemma 2.3 to conclude that
[gδ (t), hδ (t)] ⊃ [ĝ(t), ĥ(t)] for t ≥ ϵ.
Since ĝ ′ (t) < 0 < ĥ′ (t) for t > ϵ, it follows that, for all δ ∈ (0, δM ],
[gδ (t), hδ (t)] ⊃ [ĝ(t), ĥ(t)] ⊃ [ĝ(ϵ), ĥ(ϵ)] ⊃ [−M, M ] for t ≥ ϵ,
which implies (2.22). This concludes Step 1.
Step 2. We show that for any given T > ϵ̃ > 0 and R > 0,
lim uδ (t, x) = U (t, x) in the C 1,2 ([ϵ̃, T ] × [−R, R]) norm.
δ→0

By standard parabolic regularity, it suffices to prove the above limit in the C([ϵ̃, T ]×
[−R, R]) norm. We prove this below via some comparison arguments.
For small σ > 0, let vσ be the unique solution of
vt − dvxx = f (v) for t > 0, x ∈ R; v(0, x) = U (0, x) + σ for x ∈ R.
2542 YIHONG DU

Since f (σ) > 0, it is easily seen that vσ (t, x) ≥ σ for all t ≥ 0 and x ∈ R. Hence
for all sufficiently small δ ∈ (0, σ) satisfying uδ0 (x) ≤ U (0, x) + σ in [−h0 , h0 ], we
can apply the usual comparison principle over the region t > 0, x ∈ [gδ (t), hδ (t)] to
deduce
uδ (t, x) ≤ vσ (t, x) for t > 0, x ∈ [gδ (t), hδ (t)].
It follows that
lim sup uδ (t, x) ≤ vσ (t, x) uniformly in [ϵ̃, T ] × [−R, R].
δ→0

Since limσ→0 vσ (t, x) = U (t, x) uniformly in [0, T ] × [−R, R], we thus obtain
lim sup uδ (t, x) ≤ U (t, x) uniformly in [ϵ̃, T ] × [−R, R].
δ→0

For any given small ϵ > 0, we have


mϵ := min û0 (x) > 0.
x∈[−h0 +ϵ,h0 −ϵ]

Clearly mϵ → 0 as ϵ → 0. Moreover, for all sufficiently small δ > 0, say δ ∈ (0, δϵ ],


uδ0 (x) ≥ û0 (x) − mϵ for x ∈ [−h0 + ϵ, h0 − ϵ].
Let f˜(u) and h̃(t) be as given in Step 1, and consider the problem

˜
vt − dvxx = f (v),
 t > 0, x ∈ (−h̃(t), h̃(t)),
v(t, ±h̃(t)) = 0, t > 0,

v(0, x) = û0 (x) − mϵ , x ∈ [−h0 + ϵ, h0 − ϵ].

From Step 1 we know that for all small δ > 0,


ϵ
[−h̃(t), h̃(t)] ⊂ (gδ (t), hδ (t)) for t ∈ [0, ].
2
Hence we can use the usual comparison principle to deduce
ϵ
uδ (t, x) ≥ v(t, x) for t ∈ [0, ], x ∈ [−h̃(t), h̃(t)].
2
Given any small ϵ̂ > 0, we can fix ϵ > 0 small so that
ϵ̂
mϵ < , U (0, x) ≤ ϵ̂ for x ̸∈ [−h0 + ϵ, h0 − ϵ].
2
Then we choose ϵ∗ ∈ (0, 2ϵ ] such that
v(ϵ∗ , x) ≥ v(0, x) − ϵ̂/2 ≥ û0 (x) − ϵ̂ > 0 for x ∈ [h0 − ϵ, h0 + ϵ].
Now define (
û0 − ϵ̂, x ∈ [−h̃(ϵ∗ ), h̃(ϵ∗ )],
w0 (x) :=
0, x ∈ R \ [−h̃(ϵ∗ ), h̃(ϵ∗ )],
and for L ≫ 1, consider the auxiliary problem

wt − dwxx = f (w), t > 0, x ∈ [−L, L],

w(t, ±L) = 0, t > 0,

w(0, x) = w0 (x), x ∈ [−L, L].

For all small δ > 0, by Step 1 we have


[gδ (t), hδ (t)] ⊃ [−L, L] for t ≥ ϵ∗ .
PROPAGATION DYNAMICS 2543

Since uδ (ϵ∗ , x) ≥ v(ϵ∗ , x) ≥ w0 (x) in the supporting set of w0 , we can use the
comparison principle to infer that
uδ (t + ϵ∗ , x) ≥ w(t, x) for t ≥ 0, x ∈ [−L, L].
It follows that
lim inf uδ (t + ϵ∗ , x) ≥ w(t, x) uniformly for t ≥ 0, x ∈ [−L, L].
δ→0
Let w̃ be the solution of
(
w̃t − dw̃xx = f (w̃), t > 0, x ∈ R,
w̃(0, x) = w0 (x), x ∈ R.
It is well known that
lim w(t, x) = w̃(t, x) locally uniformly in (0, T ] × R.
L→∞
Therefore
lim inf uδ (t + ϵ∗ , x) ≥ w̃(t, x) locally uniformly in (0, T ] × R.
δ→0

By our definition of w0 (x), it is easily seen that w0 (x) + ϵ̂ ≥ U (0, x). Therefore
if ŵ is the solution to
(
ŵt − dŵxx = f (ŵ), t > 0, x ∈ R,
ŵ(0, x) = w0 (x) + ϵ̂, x ∈ R,
then ŵ(t, x) ≥ U (t, x) in [0, ∞) × R.
For any given small σ > 0, the following holds for all small ϵ∗ ∈ (0, ϵ̃/2] and
ϵ̂ > 0:
U (t + ϵ∗ , x) − ϵ̂ ≤ U (t, x) ≤ ŵ(t, x) ≤ w̃(t, x) + σ for t ∈ [0, T ] × [−R, R].
It follows that
lim inf uδ (t + ϵ∗ , x) ≥ U (t + ϵ∗ , x) − ϵ̂ − σ uniformly in [ϵ̃ − ϵ∗ , T ] × [−R, R].
δ→0
Letting ϵ̂ → 0 and then σ → 0, we deduce
lim inf uδ (t, x) ≥ U (t, x) uniformly in [ϵ̃, T ] × [−R, R].
δ→0
The proof is now complete.

3. Longtime dynamics of (1.5).


3.1. Proof of Theorem 1.2.
Proof. By Lemma 2.6, h′ (t) ≥ 0 ≥ g ′ (t) for t > Tδ , and hence
h∞ := lim h(t) ∈ [h(Tδ ), ∞], g∞ := lim g(t) ∈ [−∞, g(Tδ )]
t→∞ t→∞
always exist, and
h∞ ≥ h(Tδ ) > g(Tδ ) ≥ g∞ .
We are going to show that h∞ = ∞ and g∞ = −∞.
Step 1. A contradiction arises if both h∞ and g∞ are finite.
Suppose that both h∞ and g∞ are finite. Then define
[h(t) − x]g∞ + [x − g(t)]h∞
y= , U (t, y) = u(t, x).
h(t) − g(t)
2544 YIHONG DU

By direct calculation we see that U satisfies


  2
h∞ −g∞


 U t − d h(t)−g(t) Uyy
 ′ ′ ′ ′
 h (t)g ∞ −g (t)h ∞ −[h (t)−g (t)]y
+ Uy = f (U ), t > 0, y ∈ [g∞ , h∞ ],




 h(t)−g(t)
U (t, g∞ ) = U (t, h∞ ) = δ, t > 0,

′ d h∞ −g∞
(3.1)


 h (t) = − δ h(t)−g(t) y U (t, h∞ ), t > 0,
′ d h −g

g (t) = −
 ∞ ∞
Uy (t, g∞ ), t > 0,


 δ h(t)−g(t)
 
2h0 y−(g∞ +h∞ )h0
y ∈ [g∞ , h∞ ].

 U (0, y) = u0 h∞ −g∞ ,

By standard Lp theory we see from (3.1) that, for any p > 1,


∥U ∥Wp1,2 ([n,n+2]×[g∞ ,h∞ ]) ≤ Cp
for all integers n ≥ 1 and some Cp > 0 independent of n. Taking p sufficiently
large and use the Sobolev embedding theorem, we see from the third and fourth
equations in (3.1) that h′ (t) and g ′ (t) are uniformly continuous in t for t ≥ 1. This
implies that h′ (t) → 0 and g ′ (t) → 0 as t → ∞, since otherwise, by Lemma 2.5, say
there exists sn increasing to ∞ as n → ∞ such that h′ (sn ) ≥ σ > 0 for all n, then
we can find ϵ > 0 small so that h′ (s) ≥ σ/2 for s ∈ [sn − ϵ, sn + ϵ] and all n ≥ 1,
which implies h∞ = ∞.
Let {tn } be an arbitrary sequence increasing to ∞ as n → ∞, and define
Un (t, y) := U (tn + t, y).
Then applying the Lp estimates to the equation satisfied by Un (which is a simple
variation of (3.1)) and using the Sobolev embedding theorem, we see that, subject
(1+α)/2,1+α
to a subsequence, for some α ∈ (0, 1), Un → Ũ in Cloc (R × [g∞ , h∞ ]), with
Ũ satisfying (in the weak sense and then in the classical sense)

Ũt − dŨyy = f (Ũ ),
 t ∈ R, y ∈ [g∞ , h∞ ],
Ũ (t, g∞ ) = Ũ (t, h∞ ) = δ, t ∈ R, (3.2)

Ũy (t, g∞ ) = Ũy (t, h∞ ) = 0, t ∈ R.

Lemma 2.6 implies that Ũ (t, y) ≥ δ, and since U ≡ δ is a strict lower solution
to (3.2), the strong maximum principle implies that Ũ (t, y) > δ for t ∈ R and
y ∈ (g∞ , h∞ ). The Hopf boundary lemma then infers Ũy (t, g∞ ) > 0 > Ũy (t, h∞ ),
which contradicts the third equation in (3.2).
Step 2. A contradiction arises if one of g∞ and h∞ is a finite number.
Without loss of generality we assume that h∞ is finite and g∞ = −∞. The proof
follows the idea in Step 1 with some suitable changes. Now we define, for some
M > − inf t>0 h(t),
x+M
y= , V (t, y) = u(t, x).
h(t) + M
Then V satisfies
 1 2 h′ (t)y

g(t)+M


 Vt − d h(t)+M Vyy − h(t)+M Vy = f (V ), t > 0, y ∈ [ h(t)+M , 1],
V (t, g(t)+M ) = V (t, 1) = δ,

t > 0,
h(t)+M
′ d 1
(3.3)


 h (t) = − V
δ h(t)+M y (t, 1), t > 0,
−h0
y ∈ [M
 
V (0, y) = u0 (h0 + M )y − M , M +h0 , 1].

PROPAGATION DYNAMICS 2545

Applying interior and boundary Lp estimates to (3.3), we see that, for any p > 1,
∥V ∥Wp1,2 ([n,n+2]×[L−σ,L]) ≤ Cp
g(t)+M
for all integers n ≥ 1, all real numbers L ∈ [maxt∈[n,n+2] h(t)+M , 1] and some Cp > 0
1
independent of n and L, where σ ∈ (0, inf t>0 h(t)+M ) is a fixed number. Taking
p sufficiently large and use the Sobolev embedding theorem, we see from the third
equation in (3.3) that h′ (t) is uniformly continuous in t for t ≥ 1. This implies that
h′ (t) → 0 as t → ∞ by the same reasoning as in Step 1.
Let {tn } be an arbitrary sequence increasing to ∞ as n → ∞, and define
Vn (t, y) := V (tn + t, y).
Then applying the Lp estimates to the equation satisfied by Vn (which is a simple
variation of (3.3)), and using the Sobolev embedding theorem, and a usual procedure
of selecting a diagonal subsequence, we see that subject to a subsequence, for some
(1+α)/2,1+α
α ∈ (0, 1), Vn → Ṽ in Cloc (R × (−∞, 1]), with Ṽ satisfying

d
Ṽt − (h∞ +M )2 Ṽyy = f (Ṽ ), t ∈ R, y ∈ (−∞, 1],

Ṽ (t, 1) = δ, t ∈ R, (3.4)

Ṽy (t, 1) = 0, t ∈ R.

Lemma 2.6 implies that Ṽ (t, y) ≥ δ, and since V ≡ δ is a strict lower solution
to (3.4), the strong maximum principle implies that Ṽ (t, y) > δ for t ∈ R and
y ∈ (−∞, 1). The Hopf boundary lemma then infers 0 > Ṽy (t, 1), which contradicts
the third equation in (3.4). This concludes Step 2.
Step 3. W show that u(t, x) → 1 as t → ∞ locally uniformly in x ∈ R.
By Steps 1 and 2, we know that g(t) → −∞ and h(t) → ∞ as t → ∞. Therefore,
for any given L ≫ 1, there exists TL ≫ 1 such that
g(t) < −L, h(t) > L for t ≥ TL .
By Lemma 2.6, we also have u(t, x) ≥ δ for t ≥ TL and x ∈ [g(t), h(t)].
Now let v = vL (t, x) be the unique solution of the following auxiliary problem

vt − dvxx = f (v), t > TL , x ∈ [−L, L],

v(t, ±L) = δ, t > TL , (3.5)

v(TL , x) = δ, x ∈ [−L, L].

Since v ≡ δ is a lower stationary solution of (3.5), vL (t, x) is nondecreasing in t


with δ ≤ vL (t, x) ≤ w(t) for t > TL , x ∈ [−L, L], where w(t) is the unique solution
of the ODE problem
w′ = f (w), w(TL ) = δ,
which is increasing in t and w(t) → 1 as t → ∞. It follows that

vL (x) := lim vL (t, x) exists,
t→∞

and δ ≤ vL (x) ≤ 1 for x ∈ [−L, L]. Moreover, it is easy to see by standard parabolic

regularity that the above limit holds in the C 2 ([−L, L]) norm for vL (t, ·) and vL
satisfies
∞ ′′ ∞ ∞ ∞
−d(vL ) (x) = f (vL ), vL ≥ δ in (−L, L), vL (±L) = δ.
2546 YIHONG DU

∞ ∞
We want to show that vL (x) → 1 as L → ∞. Since vL may not be mono-
tone with respect to L, we use an upper and lower solution trick to overcome this
shortcoming. A simple upper and lower solution consideration for the problem
−dv ′′ = f (v) in (−L, L), v(±L) = δ, (3.6)

shows that (3.6) has a minimal positive solution ṽL satisfying δ ≤ ṽL ≤ vL . More-
over, for L1 > L, ṽL1 restricted to the interval [−L, L] is an upper solution to (3.6),
which is no smaller than the lower solution δ. It follows that ṽL ≤ ṽL1 over [−L, L].
Therefore
ṽ∞ (x) := lim ṽL (x) exists for every x ∈ R.
L→∞
Moreover, by standard elliptic regularity, it is easily seen that the above limit holds
2
in Cloc (R) and
′′
−dṽ∞ = f (ṽ∞ ), δ ≤ ṽ∞ ≤ 1 for x ∈ R.
Since f satisfies (f ) and its behaviour for u > 1 can be modified without affecting
the validity of the above equation, we can use Theorem 1.1 of [17] to conclude that
ṽ∞ ≡ 1.
By the usual comparison principle, we can compare vL with u over [TL , ∞) ×
[−L, L] to deduce vL (t, x) ≤ u(t, x) for t > TL and x ∈ [−L, L]. It follows that

lim inf u(t, x) ≥ vL (x) for x ∈ [−L, L]. (3.7)
t→∞

Since vL ≥ ṽL and ṽL (x) → 1 as L → ∞ locally uniformly in x ∈ R, by letting
L → ∞ in (3.7), we deduce
lim inf u(t, x) ≥ 1 locally uniformly in x ∈ R.
t→∞

On the other hand, define m∗ := maxx∈[−h0 ,h0 ] u0 (x) and let w∗ (t) be the solution
of the ODE problem
w′ = f (w), w(0) = m∗ .
Then w∗ (t) ≥ δ for all t ≥ 0 and w∗ (t) → 1 as t → ∞. Comparing u with w∗ over
the region t > 0 and x ∈ [g(t), h(t)] we deduce by the usual comparison principle
that u(t, x) ≤ w∗ (t) in this region. It follows that
lim sup u(t, x) ≤ 1 uniformly in x ∈ [g(t), h(t)].
t→∞
Therefore we must have
lim u(t, x) = 1 locally uniformly in x ∈ R.
t→∞
The proof is now complete.
3.2. Proof of Proposition 1.3.
Proof. Let c0 > 0 and Pc (q) be defined as in Lemma 6.1 of [21], but with (c, f (u))
there replaced by ( d1 c, d1 f (u)), since the diffusion coefficient d in [21] was assumed
to be 1. From [21] and [2] we know that c0 is the spreading speed of (1.3), and

Pc (q) > 0 for q ∈ [0, 1), c ∈ [0, c0 ),

Pc (0) = 0, Pc (q) > 0 for q ∈ (0, 1), c ≥ c0 ,

for fixed q ∈ [0, 1), c → Pc (q) is continuous and strictly decreasing over [0, c0 ].

Moreover,
1 f (q) 
Pc′ = c− for q ∈ (0, 1). (3.8)
d Pc
PROPAGATION DYNAMICS 2547

Taking c = c0 in (3.8) and noting that Pc0 (0) = 0 and f (q) > 0 for q ∈ (0, 1), we
deduce Pc′0 (q) < c/d for q ∈ (0, 1) and hence
δ
Pc0 (δ) < c0 .
d
We now consider the function
δ
ξ(c) := Pc (δ) − c,
d
which is continuous and strictly decreasing for c ∈ [0, c0 ]. The above inequality
gives ξ(c0 ) < 0, while ξ(0) = P0 (δ) > 0. Therefore there exists a unique c∗ =
c∗ (δ) ∈ (0, c0 ) such that ξ(c∗ ) = 0, namely
δ
Pc∗ (δ) = c∗ .
d
Let q = q∗ (z) be the unique solution of
q ′ = Pc∗ (q), q(0) = 0.
Then it is easily checked that (c, q) = (c∗ , q∗ ) satisfies (1.6).
Finally by the properties of Pc (q) listed at the beginning of the proof, we see
that Pc∗ (q) > 0 for q ∈ [0, 1) which implies q∗′ (z) > 0 for z ≥ 0, and
lim c∗ (δ) = c0 .
δ→0
The proof is now complete.
Remark 3.1. From (3.8) and Pc (1) = 0 we see that for fixed c ∈ (0, c0 ), Pc (q) >
Pc (0) for q > 0 close to 0, and Pc (q) < Pc (0) for q < 1 close to 1. Hence, we see
from the above proof that if c∗ is given in Theorem A with µ = dδ , then c∗ < c∗
when δ > 0 is small, and c∗ > c∗ when δ < 1 is close to 1. If c ≥ c0 , then the
solution Pc (q) of (3.8) gives rise to a traveling wave solution q(x) (via q ′ = Pc (q))
satisfying (1.7). However, since Pc (0) = 0, (3.8) indicates Pc (q) < dc q for q ∈ (0, 1).
In particular, Pc (δ) < dc δ, which implies that if q solves (1.7) (so q ′ = Pc (q)), then
there does not exist x ∈ R satisfying q(x) = δ and q ′ (x) = dδ c simultaneously.
3.3. Bound for |g(t) + c∗ t| and |h(t) − c∗ t|.
Proposition 3.2. Under the assumptions of Theorem 1.4, there exists C > 0 such
that
|g(t) + c∗ t|, |h(t) − c∗ t| ≤ C for all t > 0. (3.9)
We will prove (3.9) for h(t) only, since the conclusion for g(t) is a consequence
of the conclusion for h(t). Indeed, (ũ(t, x), g̃(t), h̃(t)) := (u(t, −x), −h(t), −g(t)) is
the solution of (1.5) with initial function ũ0 (x) := u0 (−x).
Our proof will be based on the construction of suitable upper and lower solutions,
which is a variation of the constructions in [23] and is inspired by the method of
Fife and McLeod [27]. We will need the following result.
Lemma 3.3. Suppose that f satisfies (f ). Let (u, g, h) be the unique solution of
(1.5). Then for any c ∈ (0, c∗ ) there exist σ ∈ (0, −f ′ (1)), T ∗ > 0 and M > 0 such
that for t ≥ T ∗ ,
[g(t), h(t)] ⊃ [−ct, ct], (3.10)
−σt
u(t, x) ≥ 1 − M e for x ∈ [−ct, ct], (3.11)
−σt
u(t, x) ≤ 1 + M e for x ∈ [g(t), h(t)]. (3.12)
2548 YIHONG DU

Proof. This is a simple variation of Lemma 6.5 in [21]. Let (c∗ , q∗ ) be the unique
solution pair in Proposition 1.3. Denote ω∗ := q∗′ (xδ ) = Pc∗ (δ) > 0 and for each
c ∈ (0, c∗ ) consider the problem
1 f (q) 
P′ = c − , P (δ) = ω∗ . (3.13)
d P
By similar analysis as at the end of Section 6.1 in [21], the unique solution P c (q) of
(3.13) satisfies P c (q) > 0 in [0, Qc ) and P c (Qc ) = 0 for some Qc ∈ (δ, 1). Moreover,
Qc → 1 as c → c∗ .
Let (q c (z), pc (z)) denote the trajectory of
1
q ′ = p, p′ = [cp − f (q)],
d
represented by the curve p = P c (q), q ∈ [0, Qc ], with (q c (0), pc (0)) = (0, P c (0)) and
(q c (z c ), pc (z c )) = (Qc , 0); then clearly q c (z) solves
dq ′′ − cq ′ + f (q) = 0, z ∈ [0, z c ],

(3.14)
q(0) = 0, q ′ (z c ) = 0, q(z) > 0 in (0, z c ].
Moreover, we have
d ′ d d d
c < c∗ = q (xδ ) = Pc∗ (δ) = P c (δ) = (q c )′ (x̃δ ), (3.15)
δ ∗ δ δ δ
where x̃δ ∈ (0, z c ) is chosen such that q c (x̃δ ) = δ. Further more, we have
lim z c = +∞, lim ∥q c − q∗ ∥L∞ ([0,zc ]) = 0. (3.16)
c↗ c∗ c↗ c∗

For t ≥ 0 we define
k(t) = kc (t) := z c + ct − x̃δ
and  c
 q (k(t) − x + x̃δ ), x ∈ [ct, k(t)],
w(t, x) = wc (t, x) := q c (z c ), x ∈ [−ct, ct],
 c
q (k(t) + x + x̃δ ), x ∈ [−k(t), −ct].
We will use (w, −k, k) as a lower solution to (1.5) in the argument below.
Step 1. Fix ĉ ∈ (c, c∗ ). By Theorem 1.2, we can find T1 > 0 such that
[g(T1 ), h(T1 )] ⊃ [−kĉ (0), kĉ (0)] and u(T1 , x) > wĉ (0, x) in [−kĉ (0), kĉ (0)].
One then easily checks that (wĉ (t − T1 , x), −kĉ (t − T1 ), kĉ (t − T1 )) is a lower solution
of (1.5) for t ≥ T1 . Hence for t ≥ T2 with some T2 ≫ T1 ,
g(t) ≤ −kĉ (t − T1 ) < −ĉ(t − T1 ) < −ct, h(t) ≥ kĉ (t − T1 ) > ĉ(t − T1 ) > ct
and
u(t, x) ≥ wĉ (t − T1 , x) for x ∈ [−kĉ (t − T1 ), kĉ (t − T1 )] ⊃ [−ct, ct].
This proves (3.10).
Step 2. Since wĉ (t − T1 , x) ≡ q ĉ (z ĉ ) = Qĉ > Qc for |x| ≤ ct < ĉ(t − T1 ) for all
t ≥ T2 , we find from the above estimate for u that
u(t, x) ≥ Qc for − ct ≤ x ≤ ct, t ≥ T2 .
Since f (1) < 0, for any σ ∈ (0, −f ′ (1)) we can find ρ = ρ(σ) ∈ (0, 1) such that

f (u) ≥ σ(1 − u) for u ∈ [1 − ρ, 1], f (u) ≤ σ(1 − u) for u ∈ [1, 1 + ρ]. (3.17)
c
Recall that Q → 1 as c increases to c∗ . Without loss of generality we may assume
that c has been chosen so that Qc > 1 − ρ.
PROPAGATION DYNAMICS 2549

Fix T ≥ T2 and let ψ be the solution of



 ψt = dψxx + σ(1 − ψ), t > 0, −cT < x < cT,
ψ(t, ±cT ) ≡ Qc , t > 0, (3.18)
ψ(0, x) ≡ Qc , −cT ≤ x ≤ cT.

By the proof of Lemma 6.5 in [21], for small ϵ > 0 such that ϵ2 c2 σ < 2, we have
 ϵ2 c2  2 2 4
ψ T, x ≥ 1 − M0 e−ϵ c σT /4 with M0 := √ + 1
4 π
ϵ2 c2
for |x| ≤ (1 − ϵ)cT and T ≥ T2 . Writing t = 4 T, this is equivalent to
4(1 − ϵ) 4
ψ(t, x) ≥ 1 − M0 e−σt for |x| ≤ t, t ≥ 2 2 T2 .
ϵ2 c ϵ c
Moreover, by the estimates proved in Step 1, it is easily seen that ψ is a lower
solution for the equation satisfied by u(t + T, x) in the region (t, x) ∈ [0, ∞) ×
[−cT, cT ], and so
ψ(t, x) ≤ u(t + T, x) for − cT ≤ x ≤ cT, t ≥ 0. (3.19)
4(1−ϵ)
By selecting ϵ sufficiently small, we may assume that ϵ2 c > c. Thus
u(t, x) ≥ ψ(t − T, x) ≥ 1 − M0 eσT e−σt for x ∈ [−ct, ct] and all large t > 0.
Hence (3.11) holds.
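The estimate in Step 2 can be observed numerically. The short sketch below (illustration only, not part of the proof) solves (3.18) by an explicit finite-difference scheme with placeholder values of d, σ, c, T and Q^c, and checks that (1 − ψ(t, x))e^{σt} stays bounded well inside the interval, consistent with a bound of the form ψ ≥ 1 − M0 e^{−σt}.

# Illustration only: explicit finite differences for (3.18),
#   psi_t = d psi_xx + sigma (1 - psi),  psi = Q^c on the lateral boundary and at t = 0.
# All constants are placeholder values.
import numpy as np

d, sigma, c, T, Qc = 1.0, 0.5, 1.2, 30.0, 0.95
N = 600
x = np.linspace(-c * T, c * T, N + 1)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / d                       # stability restriction of the explicit scheme
psi = np.full_like(x, Qc)
inner = np.abs(x) <= 0.5 * c * T           # stay well away from the lateral boundary

t, worst = 0.0, 0.0
while t < 0.5 * T:
    psi_xx = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    psi[1:-1] += dt * (d * psi_xx + sigma * (1.0 - psi[1:-1]))
    # psi[0] and psi[-1] are never updated, so the boundary values remain Q^c
    t += dt
    worst = max(worst, float(np.max((1.0 - psi[inner]) * np.exp(sigma * t))))

print("sup of (1 - psi(t, x)) e^{sigma t} over the inner region:", worst)

A small bounded value of the printed quantity reflects the exponential-in-time convergence of ψ to 1 away from the boundary, which is the mechanism behind (3.11).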
Step 3. Finally we prove (3.12). Define m∗ := maxx∈[−h0 ,h0 ] u0 (x) and let w∗ (t)
be the solution of the ODE problem
w′ = f (w), w(0) = m∗ .
Then w∗ (t) ≥ δ for all t ≥ 0 and by (f ), w∗ (t) ≤ 1 + M e−σt for some positive
constants M and σ ∈ (0, −f ′ (1)). Comparing u with w∗ over the region t > 0 and
x ∈ [g(t), h(t)] we deduce by the usual comparison principle that u(t, x) ≤ w∗ (t) in
this region. Hence (3.12) holds.
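As a quick illustration of this last step (again not part of the proof), one can integrate the ODE w′ = f(w) numerically and observe the exponential approach of w∗(t) to 1 from above, with rate governed by f′(1); the Fisher-KPP choice of f and the starting value m∗ below are placeholders.

# Illustration only: w' = f(w), w(0) = m_* > 1 approaches 1 at an exponential rate.
import numpy as np
from scipy.integrate import solve_ivp

f = lambda w: w * (1.0 - w)                 # placeholder monostable nonlinearity, f'(1) = -1
m_star = 1.8                                # placeholder initial value m_* > 1
sol = solve_ivp(lambda t, w: [f(w[0])], (0.0, 12.0), [m_star],
                t_eval=np.linspace(0.0, 12.0, 121), rtol=1e-10, atol=1e-12)
w = sol.y[0]
late = sol.t > 4.0                          # fit the decay rate of w(t) - 1 at late times
rate = np.polyfit(sol.t[late], np.log(w[late] - 1.0), 1)[0]
print("observed decay rate:", rate, " (compare with f'(1) = -1)")

The fitted rate is close to f′(1), in line with the bound w∗(t) ≤ 1 + M e^{−σt} for any σ ∈ (0, −f′(1)).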
3.3.1. Upper bound. Fix c ∈ (0, c∗). From Lemma 3.3, there exist σ ∈ (0, −f′(1)),
M > 0 and T∗ > 0 such that (3.10), (3.11) and (3.12) hold for t ≥ T∗. Since
0 < σ < −f′(1) we can find some η > 0 such that
σ ≤ −f′(u) for 1 − η ≤ u ≤ 1 + η,   f(u) ≥ 0 for 1 − η ≤ u ≤ 1.                     (3.20)
Fix γ ∈ (0, η). Since q∗(∞) = 1, we can find X0 > 0 and T∗ > 0 large such that
(1 + γ)q∗(X0) ≥ 1 + M e^{−σT∗}.                                                      (3.21)
We now construct an upper solution (ū, ḡ, h̄) to (1.5) as follows:
ḡ(t) := g(t),
h̄(t) := c∗(t − T∗) + β[1 − e^{−σ(t−T∗)}] + h(T∗) + X0,                              (3.22)
ū(t, x) := [1 + γe^{−σ(t−T∗)}]q∗(h̄(t) − x + zδ(t)),
where β > 0, γ ∈ (0, η) and zδ(t) > 0 is determined by
[1 + γe^{−σ(t−T∗)}]q∗(zδ(t)) = δ.
Therefore zδ(t) is increasing in t with zδ(∞) = xδ and zδ(t) ∈ (zδ(T∗), xδ) for
t > T∗. Moreover,
−δγe^{−σ(t−T∗)} / [1 + γe^{−σ(t−T∗)}] = q∗(zδ(t)) − q∗(xδ) = q∗′(θ(t))[zδ(t) − xδ]
for some θ(t) ∈ [zδ(t), xδ]. It follows that
0 ≤ xδ − zδ(t) ≤ M0 γe^{−σ(t−T∗)}   for t ≥ T∗,                                      (3.23)
where
M0 := δ / min_{u∈[0,xδ]} q∗′(u).
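For concreteness, the following sketch (illustration only) computes zδ(t) from its defining relation by one-dimensional root finding, using a placeholder increasing profile in place of the semi-wave q∗ and placeholder values of δ, γ, σ and T∗; the printed values illustrate the monotone convergence zδ(t) ↗ xδ and the exponential gap bound of (3.23).

# Illustration only: solve [1 + gamma*exp(-sigma*(t - Tstar))] * q_star(z) = delta
# for z = z_delta(t) by bracketed root finding; q_star is a placeholder increasing
# profile with q_star(0) = 0 and q_star(+infty) = 1 standing in for the semi-wave.
import numpy as np
from scipy.optimize import brentq

delta, gamma, sigma, Tstar = 0.1, 0.3, 0.5, 0.0
q_star = lambda z: 1.0 - np.exp(-z)                          # placeholder profile
x_delta = brentq(lambda z: q_star(z) - delta, 1e-12, 50.0)   # q_star(x_delta) = delta

def z_delta(t):
    a = 1.0 + gamma * np.exp(-sigma * (t - Tstar))
    return brentq(lambda z: a * q_star(z) - delta, 1e-12, 50.0)

for t in [0.0, 2.0, 5.0, 10.0, 20.0]:
    zd = z_delta(t)
    print(f"t = {t:5.1f}   z_delta = {zd:.6f}   x_delta - z_delta = {x_delta - zd:.3e}")

The gap xδ − zδ(t) is seen to decay like e^{−σ(t−T∗)}, in line with (3.23); the constant in front depends on δ, γ and the minimum of q∗′, as in the definition of M0 above.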
Lemma 3.4. For sufficiently large β > 0, ū(t, x) and h̄(t) satisfy
u(t, x) ≤ ū(t, x)   for t ≥ T∗, x ∈ [g(t), h(t)],
h(t) ≤ h̄(t)        for t ≥ T∗.
Proof. We check that (ū, ḡ, h̄) satisfies the following inequalities:
ūt − dūxx ≥ f(ū), ū ≥ δ   for t > T∗, ḡ(t) < x < h̄(t),                              (3.24)
g(t) ≥ ḡ(t), ū ≥ u        for t ≥ T∗, x = g(t),                                      (3.25)
ū(t, h̄(t)) = δ, h̄′(t) ≥ −(d/δ)ūx(t, h̄(t))   for t ≥ T∗,                            (3.26)
h(T∗) < h̄(T∗), u(T∗, x) ≤ ū(T∗, x)   for x ∈ [g(T∗), h(T∗)].                        (3.27)
Clearly (3.25) is satisfied since ū(t, ḡ(t)) = u(t, g(t)) = δ and ū(t, x) ≥ δ for x ≤
h̄(t). We now show (3.26). It is obvious that ū satisfies ū(t, h̄(t)) = δ. Direct
computations give
h̄′(t) = c∗ + βσe^{−σ(t−T∗)}
and
−(d/δ)ūx(t, h̄(t)) = (d/δ)(1 + γe^{−σ(t−T∗)})q∗′(zδ(t)).
By (3.23),
(d/δ)q∗′(zδ(t)) ≤ (d/δ)q∗′(xδ) + M1|zδ(t) − xδ| ≤ c∗ + M1M0γe^{−σ(t−T∗)},
with
M1 := (d/δ) max_{u∈[0,xδ]} |q∗′′(u)|.
Therefore
−(d/δ)ūx(t, h̄(t)) ≤ (1 + γe^{−σ(t−T∗)})(c∗ + M1M0γe^{−σ(t−T∗)})
                   ≤ c∗ + γ(c∗ + 2M1M0)e^{−σ(t−T∗)}.
Hence (3.26) holds for β > 0 satisfying
γ(c∗ + 2M1M0) ≤ βσ.                                                                  (3.28)
Next we show (3.27). From the definition of h̄ we see that h̄(T∗) = h(T∗) + X0 >
h(T∗). By (3.12) and (3.21) we have
ū(T∗, x) = (1 + γ)q∗(h̄(T∗) − x + zδ(T∗)) = (1 + γ)q∗(h(T∗) + X0 − x + zδ(T∗))
         ≥ (1 + γ)q∗(X0) ≥ 1 + M e^{−σT∗} ≥ u(T∗, x)   for x ∈ [g(T∗), h(T∗)].
Thus (3.27) holds.
Finally we show that (3.24) holds for sufficiently large β > 0. Clearly we have
ū(t, x) ≥ δ for t ≥ 0, ḡ(t) ≤ x ≤ h̄(t).
Write z = h̄(t) − x + zδ(t). We calculate
ūt = −γσe^{−σ(t−T∗)}q∗(z) + (1 + γe^{−σ(t−T∗)})[h̄′(t) + zδ′(t)]q∗′(z).
Since zδ′(t) ≥ 0, we obtain
ūt ≥ −γσe^{−σ(t−T∗)}q∗(z) + (1 + γe^{−σ(t−T∗)})(c∗ + βσe^{−σ(t−T∗)})q∗′(z).
Clearly,
ūxx = (1 + γe^{−σ(t−T∗)})q∗′′(z).
So we have
ūt − dūxx − f(ū)
  ≥ −γσe^{−σ(t−T∗)}q∗(z) + (1 + γe^{−σ(t−T∗)})(c∗ + βσe^{−σ(t−T∗)})q∗′(z)
     − d(1 + γe^{−σ(t−T∗)})q∗′′(z) − f((1 + γe^{−σ(t−T∗)})q∗(z))
  = −γσe^{−σ(t−T∗)}q∗(z) + (1 + γe^{−σ(t−T∗)})[−dq∗′′(z) + c∗q∗′(z)]
     + βσe^{−σ(t−T∗)}(1 + γe^{−σ(t−T∗)})q∗′(z) − f((1 + γe^{−σ(t−T∗)})q∗(z))
  = −γσe^{−σ(t−T∗)}q∗(z) + βσe^{−σ(t−T∗)}(1 + γe^{−σ(t−T∗)})q∗′(z)
     + (1 + γe^{−σ(t−T∗)})f(q∗(z)) − f((1 + γe^{−σ(t−T∗)})q∗(z)),
where the last equality uses the semi-wave equation dq∗′′ − c∗q∗′ + f(q∗) = 0.
Now we consider the term (1 + γe^{−σ(t−T∗)})f(q∗(z)) − f((1 + γe^{−σ(t−T∗)})q∗(z)).
Denote
F(ξ, u) := (1 + ξ)f(u) − f((1 + ξ)u).
The mean value theorem yields
F(ξ, u) = ξf(u) + f(u) − f((1 + ξ)u) = ξf(u) − ξf′(u + θξu)u
for some θ = θ_{ξ,u} ∈ (0, 1). Therefore we have
ūt − dūxx − f(ū)
  ≥ −γσe^{−σ(t−T∗)}q∗(z) + βσe^{−σ(t−T∗)}(1 + γe^{−σ(t−T∗)})q∗′(z) + F(γe^{−σ(t−T∗)}, q∗(z))
  = βσe^{−σ(t−T∗)}(1 + γe^{−σ(t−T∗)})q∗′(z) + γe^{−σ(t−T∗)}f(q∗(z))
     + γe^{−σ(t−T∗)}q∗(z)[−σ − f′(q∗(z) + θ′γe^{−σ(t−T∗)}q∗(z))].
Since q∗(z) → 1 as z → ∞, there exists zη > 0 such that q∗(z) ≥ 1 − η for z ≥ zη.
For h̄(t) − x + zδ(t) = z ≥ zη, the above inequality yields
ūt − dūxx − f(ū) ≥ γe^{−σ(t−T∗)}q∗(z)[−σ − f′(q∗(z) + θ′γe^{−σ(t−T∗)}q∗(z))] ≥ 0,
where θ′ = θ′_{t,z} ∈ (0, 1), and we have used (3.20), 1 − η ≤ q∗(z) < 1 and
0 ≤ θ′γe^{−σ(t−T∗)} ≤ γ ≤ η.
For h̄(t) − x + zδ(t) = z ∈ [zδ(t), zη], it gives
ūt − dūxx − f(ū)
  ≥ βσe^{−σ(t−T∗)}(1 + γe^{−σ(t−T∗)})q∗′(z) + γe^{−σ(t−T∗)}f(q∗(z))
     + γe^{−σ(t−T∗)}q∗(z)[−σ − f′(q∗(z) + θ′γe^{−σ(t−T∗)}q∗(z))]
  ≥ βσe^{−σ(t−T∗)}Qη − γe^{−σ(t−T∗)}[σ + max_{0≤s≤1+η} f′(s)]
  = e^{−σ(t−T∗)}[βσQη − γ(σ + max_{0≤s≤1+η} f′(s))],
where Qη := min_{0≤z≤zη} q∗′(z) > 0. Thus ūt − dūxx − f(ū) ≥ 0 for β satisfying
βσQη ≥ γ(σ + max_{0≤s≤1+η} f′(s)).                                                   (3.29)
We may now apply Lemma 2.2 to conclude that
u(t, x) ≤ ū(t, x),  h(t) ≤ h̄(t)   for t ≥ T∗ and x ∈ [g(t), h(t)] = [ḡ(t), h(t)].
This completes the proof of the lemma.
3.3.2. Lower bound. Now we bound u and h from below by constructing a lower
solution (u̲, g̲, h̲) to (1.5).
For η given in (3.20), we define constants ζη ∈ (0, ∞) and Q′η by
q∗(ζη) = 1 − η/2,   Q′η := min_{0≤ζ≤ζη} q∗′(ζ) > 0.
With c, σ and M given in Lemma 3.3, we first take T∗∗ > 0 so that
M e^{−σt} ≤ η/2   for t ≥ T∗∗,                                                       (3.30)
then, for β > 0 to be determined, define, for t ≥ T∗∗,
g̲(t) := −ct,
h̲(t) := c∗(t − T∗∗) + cT∗∗ − βM(e^{−σT∗∗} − e^{−σt}),
u̲(t, x) := (1 − M e^{−σt})q∗(h̲(t) − x + z̃δ(t)),
where z̃δ(t) > xδ is determined by
(1 − M e^{−σt})q∗(z̃δ(t)) = δ.
Therefore z̃δ(t) is decreasing in t with z̃δ(∞) = xδ and z̃δ(t) ∈ (xδ, z̃δ(T∗∗)] for
t ≥ T∗∗. Moreover,
δM e^{−σt} / (1 − M e^{−σt}) = q∗(z̃δ(t)) − q∗(xδ) = q∗′(θ̃(t))[z̃δ(t) − xδ]
for some θ̃(t) ∈ [xδ, z̃δ(t)]. It follows that
0 ≥ xδ − z̃δ(t) ≥ −M̃0 e^{−σt}   for t ≥ T∗∗,                                        (3.31)
where
M̃0 := δM / [(1 − η/2) min_{u∈[0,z̃δ(T∗∗)]} q∗′(u)].
Lemma 3.5. For sufficiently large β > 0, u̲(t, x) and h̲(t) satisfy
u(t, x) ≥ u̲(t, x)   for t ≥ T∗∗, x ∈ [g̲(t), h̲(t)],
h(t) ≥ h̲(t)        for t ≥ T∗∗.
Proof. It suffices to check that (u̲, g̲, h̲) is a lower solution to (1.5) for t ≥ T∗∗. First,
from (3.11) we can easily see that u̲ ≤ u at x = g̲(t) since for t ≥ T∗∗,
u̲(t, g̲(t)) = u̲(t, −ct) ≤ 1 − M e^{−σt} ≤ u(t, −ct) = u(t, g̲(t)).
Next we check that h̲ and u̲ satisfy the required conditions at x = h̲(t). It is obvious
that u̲(t, h̲(t)) = δ. Since u(t, x) ≥ δ for t ≥ T∗∗ > Tδ and x ∈ [g(t), h(t)], we have
u(t, x) ≥ u̲(t, x) whenever h̲(t) ≤ h(t) and x = h̲(t). Direct computations give
h̲′(t) = c∗ − βMσe^{−σt},
and by (3.31),
−(d/δ)u̲x(t, h̲(t)) = (d/δ)(1 − M e^{−σt})q∗′(z̃δ(t))
  ≥ (d/δ)(1 − M e^{−σt})[q∗′(xδ) − max_{ξ∈[xδ,z̃δ(T∗∗)]} |q∗′′(ξ)| (z̃δ(t) − xδ)]
  ≥ (1 − M e^{−σt})(c∗ − M̃1 e^{−σt}) ≥ c∗ − (c∗M + M̃1)e^{−σt}
with
M̃1 := (d/δ) M̃0 max_{ξ∈[xδ,z̃δ(T∗∗)]} |q∗′′(ξ)|.
Therefore
h̲′(t) ≤ −(d/δ)u̲x(t, h̲(t))   for t > T∗∗
provided that β is large enough such that
βMσ ≥ c∗M + M̃1.
By (3.10) and (3.11), we obtain
h̲(T∗∗) = cT∗∗ ≤ h(T∗∗),
u̲(T∗∗, x) ≤ 1 − M e^{−σT∗∗} ≤ u(T∗∗, x)   for x ∈ [g̲(T∗∗), h̲(T∗∗)] = [−cT∗∗, cT∗∗].
Finally we prove
u̲t − du̲xx − f(u̲) ≤ 0   for t ≥ T∗∗ and x ∈ [g̲(t), h̲(t)].
Write ζ = h̲(t) − x + z̃δ(t). Since
u̲t = σM e^{−σt}q∗(ζ) + (1 − M e^{−σt})[h̲′(t) + z̃δ′(t)]q∗′(ζ)
   ≤ σM e^{−σt}q∗(ζ) + (1 − M e^{−σt})h̲′(t)q∗′(ζ)
   = σM e^{−σt}q∗(ζ) + (1 − M e^{−σt})(c∗ − βMσe^{−σt})q∗′(ζ),
u̲xx = (1 − M e^{−σt})q∗′′(ζ),
we have
u̲t − du̲xx − f(u̲)
  ≤ σM e^{−σt}q∗(ζ) − βMσe^{−σt}(1 − M e^{−σt})q∗′(ζ)
     + (1 − M e^{−σt})f(q∗(ζ)) − f((1 − M e^{−σt})q∗(ζ))
  = σM e^{−σt}q∗(ζ) − βMσe^{−σt}(1 − M e^{−σt})q∗′(ζ) + F(−M e^{−σt}, q∗(ζ)).
We can apply the mean value theorem to F(ξ, u) as before to obtain
u̲t − du̲xx − f(u̲)
  ≤ σM e^{−σt}q∗(ζ) − βMσe^{−σt}(1 − M e^{−σt})q∗′(ζ)
     − M e^{−σt}[f(q∗(ζ)) − f′(q∗(ζ) − θ′′M e^{−σt}q∗(ζ))q∗(ζ)]
  = −M e^{−σt}f(q∗(ζ)) − βMσe^{−σt}(1 − M e^{−σt})q∗′(ζ)
     + M e^{−σt}[f′(q∗(ζ) − θ′′M e^{−σt}q∗(ζ)) + σ]q∗(ζ),
where θ′′ = θ′′_{t,z} ∈ (0, 1).
For ζ ≥ ζη, due to (3.30),
1 ≥ q∗(ζ) − θ′′M e^{−σt}q∗(ζ) ≥ q∗(ζ) − M e^{−σt}q∗(ζ) ≥ 1 − η,
and hence, by (3.20), f′(q∗(ζ) − θ′′M e^{−σt}q∗(ζ)) + σ ≤ 0, from which we obtain
u̲t − du̲xx − f(u̲) ≤ 0.
For z̃δ(t) ≤ ζ ≤ ζη, we have
u̲t − du̲xx − f(u̲)
  ≤ −βMσe^{−σt}(1 − M e^{−σt})q∗′(ζ) + M e^{−σt}[max_{0≤s≤1} f′(s) + σ]
  ≤ M e^{−σt}(1 − M e^{−σt})[−βσQ′η + (max_{0≤s≤1} f′(s) + σ)/(1 − M e^{−σt})]
  ≤ M e^{−σt}(1 − M e^{−σt})[−βσQ′η + (max_{0≤s≤1} f′(s) + σ)/(1 − M e^{−σT∗∗})]
  ≤ 0,
by taking β > 0 sufficiently large. This completes the proof of the lemma.
3.3.3. Completion of the proof of Proposition 3.2. From Lemmas 3.4 and 3.5, for
t ≥ T∗∗ we have
(c − c∗)T∗∗ − βM(e^{−σT∗∗} − e^{−σt}) ≤ h(t) − c∗t ≤ −c∗T∗ + β + h(T∗) + X0.
Hence if we define
C := max{−c∗T∗ + β + h(T∗) + X0,  (c∗ − c)T∗∗ + βM e^{−σT∗∗},  max_{t∈[0,T∗∗]} |h(t) − c∗t|},
then
|h(t) − c∗t| ≤ C for all t > 0.
This completes the proof of Proposition 3.2. □
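The linear-in-time spreading described by Proposition 3.2 can also be observed numerically. The following sketch (illustration only, not part of the proof) simulates a symmetric solution of (1.5) after the front-fixing change of variables y = x/h(t), using an explicit finite-difference scheme; the nonlinearity, δ, the initial data and all parameter values are placeholder choices, and the printed ratios h(t)/t are observed to settle near a constant.

# Illustration only: front-fixing simulation of a symmetric solution of (1.5).
# With y = x/h(t), v(t, y) = u(t, y h(t)), and g(t) = -h(t) by symmetry, one gets on [0, 1]:
#   v_t = (d/h^2) v_yy + y (h'/h) v_y + f(v),  v_y(t, 0) = 0,  v(t, 1) = delta,
#   h'(t) = -(d/(delta*h)) v_y(t, 1).
import numpy as np

d, delta = 1.0, 0.1
f = lambda u: u * (1.0 - u)                    # placeholder monostable nonlinearity
N = 100
y = np.linspace(0.0, 1.0, N + 1)
dy = y[1] - y[0]
h = 2.0                                        # placeholder initial half range h_0
v = delta + (1.0 - delta) * np.cos(np.pi * y / 2) ** 2   # placeholder initial profile, v(1) = delta

t, t_end, next_report = 0.0, 60.0, 10.0
while t < t_end:
    dt = 0.2 * (h * dy) ** 2 / d               # stability restriction of the explicit scheme
    vy_1 = (v[-1] - v[-2]) / dy                # one-sided derivative at y = 1
    hp = -d / (delta * h) * vy_1               # free boundary speed h'(t)
    v_yy = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dy**2
    v_y = (v[2:] - v[:-2]) / (2.0 * dy)
    v_new = v.copy()
    v_new[1:-1] = v[1:-1] + dt * (d / h**2 * v_yy + y[1:-1] * hp / h * v_y + f(v[1:-1]))
    v_new[0] = v_new[1]                        # Neumann condition at y = 0 (symmetry)
    v_new[-1] = delta                          # u = delta at the free boundary x = h(t)
    v, h, t = v_new, h + dt * hp, t + dt
    if t >= next_report:
        print(f"t = {t:6.1f}   h(t) = {h:8.3f}   h(t)/t = {h / t:.4f}")
        next_report += 10.0

The observed stabilization of h(t)/t is consistent with h(t) − c∗t remaining bounded; of course the simulation does not identify c∗, which is characterized by the semi-wave problem.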
3.4. Proof of Theorem 1.4. With the preparation in the previous subsection,
we are ready to prove Theorem 1.4 now. Our approach from now on will differ
significantly from [23, 27], and will follow the steps of [24] with some nontrivial
modifications. The techniques here actually can be used to give a simpler, alterna-
tive proof of the corresponding results in [23] and some parts of [24].
3.4.1. Convergence along a sequence. According to Proposition 3.2, there exists
C > 0 such that
−C < h(t) − c∗ t < C, −C ≤ g(t) + c∗ t ≤ C for t > 0.
We now set
k(t) := c∗ t − 2C, l(t) := h(t) − k(t), ϕ(t, x) := u(t, k(t) + x).
Obviously,
C ≤ l(t) ≤ 3C for t > 0. (3.32)
Moreover,
ux = ϕx , uxx = ϕxx , ut = ϕt − c∗ ϕx ,
and (ϕ, l) satisfies
ϕt = dϕxx + c∗ϕx + f(ϕ),            t > 0, −k(t) < x < l(t),
ϕ(t, l(t)) = δ,                      t > 0,
l′(t) = −(d/δ)ϕx(t, l(t)) − c∗,      t > 0.
Let {tn } be a sequence satisfying


tn > 0, tn → ∞ and l(tn ) → lim inf t→+∞ l(t) as n → ∞.
Define
(kn , ln )(t) := (k, l)(t + tn ), ϕn (t, x) := ϕ(t + tn , x).
Lemma 3.6. Subject to a subsequence, as n → ∞,
ln → L in C^{1+α/2}_{loc}(R)   and   ∥ϕn − Φ∥_{C^{(1+α)/2, 1+α}_{loc}(Ω)} → 0,
where α ∈ (0, 1), Ω := {(t, x) : −∞ < x < L(t), t ∈ R}. Moreover, (Φ(t, x), L(t))
satisfies
Φt = dΦxx + c∗Φx + f(Φ), Φ > δ,                (t, x) ∈ Ω,
Φ(t, L(t)) = δ,                                 t ∈ R,                               (3.33)
L′(t) = −(d/δ)Φx(t, L(t)) − c∗,  L(t) ≥ L(0),   t ∈ R.
Proof. It follows from Lemma 2.7 that there exists C0 > 0 such that 0 ≤ h′(t) ≤ C0
for t > Tδ, which leads to
−c∗ ≤ l′n(t) ≤ C0 − c∗ for t > −tn + Tδ.
Denote
ξ = x/ln(t),   ϕ̃n(t, ξ) = ϕn(t, x).
Then (ϕ̃n(t, ξ), ln(t)) satisfies
(ϕ̃n)t = (d/ln(t)²)(ϕ̃n)ξξ + [(ξ l′n(t) + c∗)/ln(t)](ϕ̃n)ξ + f(ϕ̃n),   t > −tn, −kn(t)/ln(t) < ξ < 1,
ϕ̃n(t, 1) = δ,   l′n(t) = −(d/(δ ln(t)))(ϕ̃n)ξ(t, 1) − c∗,             t > −tn.      (3.34)
Owing to Lemma 2.7, u(t, x) is uniformly bounded for x ∈ [g(t), h(t)] and t ∈ (0, ∞),
which implies that ϕn is uniformly bounded in {(t, x) : −kn(t) < x < ln(t), t ≥ −tn}.
Hence, in view of (3.32), for any given R > 0 and T ∈ R, applying the interior-boundary
Lp estimates to (3.34) over [T − 2, T + 1] × [−R − 2, 1], for any p > 1 we have
∥ϕ̃n∥_{W^{1,2}_p([T−2,T+1]×[−R−2,1])} ≤ CR for all large n,
where CR is a constant depending on R and p but independent of n and T. Further-
more, for any α′ ∈ (0, 1), we can take p > 1 large enough and use the Sobolev
embedding theorem to obtain
∥ϕ̃n∥_{C^{(1+α′)/2, 1+α′}([T,∞)×[−R,1])} ≤ C̃R for all large n,                       (3.35)
where C̃R is a constant depending on R and α′ but independent of n and T. From
(3.34) and (3.35) we conclude
∥ln∥_{C^{1+α′/2}([T,∞))} ≤ C̃1 for all large n,
where C̃1 is a constant depending on R and α′ but independent of n and T too.
Hence by passing to a subsequence, still denoted by itself, we have, for some α ∈ (0, α′),
ϕ̃n → Φ̃ in C^{(1+α)/2, 1+α}_{loc}(R × (−∞, 1]),   ln → L in C^{1+α/2}_{loc}(R).
Now, applying standard regularity theory to (3.34), we see that (Φ̃, L) satisfies the
following equations in the classical sense:
Φ̃t = (d/L(t)²)Φ̃ξξ + [(ξL′(t) + c∗)/L(t)]Φ̃ξ + f(Φ̃),   t ∈ R, ξ ∈ (−∞, 1],
Φ̃(t, 1) = δ,   L′(t) = −(d/(δL(t)))Φ̃ξ(t, 1) − c∗,       t ∈ R.
By Lemma 2.6 we know that Φ̃ ≥ δ in R × (−∞, 1]. Since f(δ) > 0, the strong
maximum principle then implies Φ̃ > δ in R × (−∞, 1). By setting Φ(t, x) = Φ̃(t, x/L(t)),
it is easy to verify that (Φ, L) satisfies (3.33) and
lim_{n→∞} ∥Φ − ϕn∥_{C^{(1+α)/2, 1+α}_{loc}(Ω)} = 0.
Finally, since L(0) = lim_{n→∞} l(tn) = lim inf_{t→∞} l(t) and L(t) = lim_{n→∞} l(tn + t),
we obtain L(t) ≥ L(0) for any t ∈ R. This completes the proof.
3.4.2. Determine the limit pair (Φ, L). We aim to show that
L(t) ≡ L(0) and Φ(t, x) ≡ q∗ (L(0) − x + xδ ).
Our approach here is inspired by Berestycki and Hamel [5].
Due to (3.32), we have
C ≤ L(t) ≤ 3C for t ∈ R.
It follows from Lemma 3.5 that, for x ∈ [g(t + tn) − k(t + tn), h(t + tn) − k(t + tn)]
and t + tn ≥ T∗∗,
ϕn(t, x) ≥ [1 − M e^{−σ(t+tn)}]q∗(h(t + tn) − k(t + tn) − x + zδ(t + tn)).           (3.36)

It is easily seen that


h(t + tn ) + k(t + tn ) → ∞ and h(t + tn ) → +∞ as n → ∞.
Moreover, there exists R0 ≤ C such that
h(t + tn ) − k(t + tn ) ≥ R0 for all t + tn ≥ T ∗∗ .
Therefore, for x ≤ R0 ≤ L(t) and t ∈ R, by letting n → ∞ in (3.36), we obtain
Φ(t, x) ≥ q∗ (R0 − x + xδ ) for x ≤ R0 , t ∈ R. (3.37)
Now we define
R∗ := sup{R ∈ R : Φ(t, x) ≥ q∗(R − x + xδ) for (t, x) ∈ R × (−∞, R]}.
Thanks to (3.37) and Φ(t, L(t)) = δ with L(t) ∈ [C, 3C], we see that R∗ is finite.
Moreover,
Φ(t, x) ≥ q∗(R∗ − x + xδ) for (t, x) ∈ R × (−∞, R∗]
and
min_{t∈R} L(t) = L(0) ≥ R∗.

Lemma 3.7. R∗ = L(0).
Proof. On the contrary, suppose R∗ < L(0) = min_{t∈R} L(t).
Step 1. We show the strict inequality
Φ(t, x) > q∗(R∗ − x + xδ) for (t, x) ∈ R × (−∞, R∗].                                 (3.38)
Otherwise, there exists (t0, x0) ∈ R × (−∞, R∗] such that
q∗(R∗ − x0 + xδ) = Φ(t0, x0) > δ.
Note that x0 = R∗ is impossible due to R∗ < L(t0) and Φ(t, x) > δ for x < L(t).
Set ϖ(t, x) = q∗(R∗ − x + xδ) − Φ(t, x). Then ϖ(t, x) ≤ 0 in R × (−∞, R∗] and,
by the mean value theorem,
ϖt − dϖxx − c∗ϖx − f′(θ)ϖ ≤ 0,
where θ = θ(t, x) ∈ [q∗(R∗ − x + xδ), Φ(t, x)]. Since ϖ(t0, x0) = 0, the strong
maximum principle implies that ϖ(t, x) ≡ 0 for (t, x) ∈ R × (−∞, R∗). But this is
impossible since
ϖ(t0, R∗) = q∗(xδ) − Φ(t0, R∗) = δ − Φ(t0, R∗) < 0 due to L(t0) ≥ L(0) > R∗.
Thus (3.38) holds.
Step 2. We prove that, for any x ≤ R∗,
ω(x) := sup_{(t,y)∈R×[x,R∗]} [q∗(R∗ − y + xδ) − Φ(t, y)] < 0.                        (3.39)
Obviously, ω(x) ≤ 0 for x ≤ R∗. If (3.39) does not hold, then there exists x0 ∈
(−∞, R∗) such that
ω(x0) = 0.
As a consequence of Step 1, we see that in (3.39), ω(x0) is not achieved by any
(t, y) ∈ R × [x0, R∗]. Therefore, there exists a sequence {(sn, yn)} ⊂ R × [x0, R∗]
with |sn| → ∞ such that
lim_{n→∞} [Φ(sn, yn) − q∗(R∗ − yn + xδ)] = 0.
By passing to a subsequence we may assume that lim_{n→∞} yn = y0 ∈ [x0, R∗]. Set
(Φn(t, x), Ln(t)) = (Φ(t + sn, x + yn), L(t + sn) − yn).
Then repeating the same argument used in the proof of Lemma 3.6 and passing to
a subsequence if necessary, we may assume that, for a fixed α ∈ (0, 1),
(Φn, Ln) → (Φ̃, L̃) in C^{(1+α)/2, 1+α}_{loc}(Ω̃) × C^{1+α/2}_{loc}(R)
with Ω̃ = {(t, x) : t ∈ R, x < L̃(t)}, and (Φ̃, L̃) satisfies
Φ̃t = dΦ̃xx + c∗Φ̃x + f(Φ̃), Φ̃ > δ,   t ∈ R, −∞ < x < L̃(t),
Φ̃(t, L̃(t)) = δ,                      t ∈ R.                                         (3.40)
Moreover, for −∞ < x < R∗ − y0, t ∈ R,
Φ̃(t, x) ≥ q∗(R∗ − y0 − x + xδ),   L̃(t) + y0 ≥ L(0) > R∗,
and
Φ̃(0, 0) = q∗(R∗ − y0 + xδ).
Since q∗(R∗ − y0 − x + xδ) satisfies (3.40) with L̃(t) replaced by R∗ − y0, repeating
the same argument as in Step 1 and applying the strong maximum principle we can
conclude that
Φ̃(t, x) ≡ q∗(R∗ − y0 − x + xδ) for x < R∗ − y0 and t ≤ 0.
It follows that Φ̃(0, R∗ − y0) = δ, which is impossible since L̃(0) > R∗ − y0. This
contradiction proves (3.39).
Step 3. Completion of the proof.
In view of q∗ (x) → 1 as x → ∞, for any small ϵ0 > 0 we can find R0 = R0 (ϵ0 ) <
R∗ large negative such that
q∗ (R∗ − x) ≥ 1 − ϵ0 for x ≤ R0 .
Since f ′ (1) < 0, by choosing ϵ0 > 0 small enough, we also have
f ′ (u) < 0 for u ∈ [1 − ϵ0 , 1]. (3.41)
Then choose ϵ ∈ (0, ϵ0 ) such that
q∗ (R∗ − R0 + xδ + ϵ) ≤ q∗ (R∗ − R0 + xδ ) − ω(R0 ),
where ω is defined in (3.39).
Consider the following auxiliary problem
Wt = dWxx + c∗Wx + f(W),              t > 0, x < R0,
W(t, R0) = q∗(R∗ − R0 + xδ + ϵ),       t > 0,                                        (3.42)
W(0, x) = q∗(R∗ − x + xδ),             x < R0.
Obviously, 1 and q∗(R∗ − x + xδ) are a pair of upper and lower solutions of (3.42).
It follows from the comparison principle that
1 − ϵ0 ≤ q∗(R∗ − x + xδ) ≤ W(t, x) ≤ 1                                               (3.43)
for all x < R0 and t > 0. Moreover, W(t, x) is non-decreasing in t and
lim_{t→∞} W(t, x) = Φ∗(x) for x < R0,
where Φ∗ satisfies
dΦ∗xx + c∗Φ∗x + f(Φ∗) = 0,  1 − ϵ0 ≤ Φ∗ ≤ 1,   −∞ < x < R0,
Φ∗(−∞) = 1,  Φ∗(R0) = q∗(R∗ − R0 + xδ + ϵ).                                          (3.44)
Clearly
Φ̂(x) := q∗(R∗ − x + xδ + ϵ)
also satisfies (3.44), and due to q∗(R∗ − x + xδ + ϵ) ≥ q∗(R∗ − x + xδ), we can apply
the comparison principle to (3.42) to deduce
q∗(R∗ − x + xδ + ϵ) ≥ W(t, x) for x < R0, t > 0.
Letting t → ∞ we obtain
Φ̂(x) ≥ Φ∗(x) for −∞ < x ≤ R0.
Let us also note that from (3.43) we have
Φ∗(x) ≥ q∗(R∗ − x + xδ) ≥ 1 − ϵ0 for x ≤ R0.                                         (3.45)
In what follows, we prove that
Φ̂(x) ≡ Φ∗ (x) for − ∞ < x ≤ R0 . (3.46)
To this end, let us denote
Ψ̂(x) := Φ∗(x) − Φ̂(x).
Then Ψ̂ satisfies
dΨ̂xx + c∗Ψ̂x + f′(ξ)Ψ̂ = 0,   −∞ < x < R0,                                          (3.47)
with ξ = ξ(x) ∈ [Φ∗(x), Φ̂(x)] ⊂ [1 − ϵ0, 1], and
Ψ̂(−∞) = Ψ̂(R0) = 0.                                                                  (3.48)
Since Ψ̂ ≤ 0 for x < R0 and (3.48) holds true, if Ψ̂ ̸≡ 0, then there exists ζ ∈ R
such that
Ψ̂(ζ) = min_{x∈(−∞,R0]} Ψ̂(x) < 0.
Then due to ξ(x) ∈ [1 − ϵ0, 1], from (3.41) we see that f′(ξ(ζ)) < 0. Therefore, by
(3.47),
0 ≤ dΨ̂xx(ζ) + c∗Ψ̂x(ζ) = −f′(ξ(ζ))Ψ̂(ζ) < 0.
This contradiction proves (3.46).
We are now ready to reach a contradiction by considering Φ(t, x). Recall that it
satisfies (3.33), and for all t ∈ R and x ≤ R∗,
Φ(t, x) ≥ q∗(R∗ − x + xδ),
Φ(t, R0) ≥ q∗(R∗ − R0 + xδ) − ω(R0) ≥ q∗(R∗ − R0 + xδ + ϵ).
Therefore we can use the comparison principle to deduce that
Φ(t + s, x) ≥ W(t, x) for all t > 0, x < R0, s ∈ R,
which is equivalent to
Φ(t, x) ≥ W(t − s, x) for all t > s, x < R0, s ∈ R.
Letting s → −∞, due to (3.46) we obtain
Φ(t, x) ≥ Φ∗(x) = q∗(R∗ − x + xδ + ϵ) for x < R0 and t ∈ R.                          (3.49)
By Step 2,
ε := −ω(R0) > 0.
Since we assume R∗ < L(0) ≤ L(t) for all t ∈ R, by interior parabolic regularity for
the equation satisfied by Φ(t, x) we have |Φx(t, x)| ≤ C for some positive constant
C and all t ∈ R and R∗ ≤ x ≤ (R∗ + L(0))/2. Thus we can take ϵ1 ∈ (0, ϵ] small
enough such that
Φ(t, x) ≥ Φ(t, R∗) − ε/2                        for t ∈ R, x ∈ [R∗, R∗ + ϵ1],
q∗(R∗ − x + xδ + ϵ1) ≤ q∗(R∗ − x + xδ) + ε/2    for x ∈ [R0, R∗ + ϵ1].
Hence, for x ∈ [R∗, R∗ + ϵ1] and t ∈ R,
Φ(t, x) − q∗(R∗ − x + xδ + ϵ1) ≥ Φ(t, R∗) − ε/2 − q∗(xδ + ϵ1) ≥ −ε − ω(R0) = 0,
and for x ∈ [R0, R∗] and t ∈ R,
Φ(t, x) − q∗(R∗ − x + xδ + ϵ1) ≥ −ε/2 − ω(R0) > 0.
Combining this with (3.49), we obtain
Φ(t, x) − q∗(R∗ − x + xδ + ϵ1) ≥ 0 for x ≤ R∗ + ϵ1, t ∈ R
for all small ϵ1 ∈ (0, ϵ), which contradicts the definition of R∗. This completes the
proof.
Proposition 3.8. Φ(t, x) ≡ q∗(R∗ − x + xδ) and L(t) ≡ R∗.

Proof. We already know that R∗ = L(0) = min_{t∈R} L(t) and
Φ(t, x) ≥ q∗ (R∗ − x + xδ ) for x ≤ R∗ and t ∈ R
with
Φ(0, L(0)) = q∗ (R∗ − L(0) + xδ ) = q∗ (xδ ) = δ.
It follows from the strong maximum principle and the Hopf boundary lemma that
Φx (0, L(0)) < −q∗′ (xδ ) unless Φ(t, x) ≡ q∗ (R∗ − x + xδ ).
On the other hand, L′ (0) = 0 implies, by the last identity in (3.33),
Φx (0, L(0)) = −q∗′ (xδ ).
Thus we must have Φ(t, x) ≡ q∗ (R∗ − x + xδ ), which implies L(t) ≡ L(0).

The method in the proof of Proposition 3.8 can be used to greatly simplify the
proof of the corresponding result in [24].

3.4.3. Completion of the proof of Theorem 1.4. We are now ready to complete the
proof of Theorem 1.4. We achieve this goal by proving two claims.
Claim 1. Let {tn} be the sequence used in Lemma 3.6. Then
lim_{n→∞} h′(t + tn) = c∗ for every t ∈ R,
lim_{n→∞} sup_{x∈[0, h(tn)]} |u(tn, x) − q∗(xδ + h(tn) − x)| = 0,
lim_{n→∞} sup_{x∈[g(tn), 0]} |u(tn, x) − q∗(xδ + x − g(tn))| = 0.
It follows from Lemma 3.6 and Proposition 3.8 that h(t + tn) − k(t + tn) →
L(0) = R∗ in C^{1+α/2}_{loc}(R). Hence h′(t + tn) → c∗ in C^{α/2}_{loc}(R). It then follows easily
from Lemma 3.6 and Proposition 3.8 that
u(t + tn, x + h(t + tn)) → q∗(−x + xδ) in C^{(1+α)/2, 1+α}_{loc}(R × (−∞, 0]) as n → ∞.
Hence, for any L0 > 0,
lim_{n→∞} ∥u(tn, ·) − q∗(h(tn) + xδ − ·)∥_{L∞([h(tn)−L0, h(tn)])} = 0.
On the other hand, for any given small ϵ > 0, by Lemmas 3.4 and 3.5 , there
exist L1 > 0 and some large positive integer N such that
1 − ϵ ≤ u(tn , x) ≤ 1 + ϵ for x ∈ [0, h(tn ) − L1 ], n ≥ N.
Clearly for L2 > 0 large,
1 − ϵ ≤ q∗ (h(tn ) − x + xδ ) ≤ 1 for x ∈ (−∞, h(tn ) − L2 ].
Therefore, if we take L0 = max{L1 , L2 }, then for n ≥ N ,
∥u(tn , ·) − q∗ (h(tn ) + xδ − ·)∥L∞ ([0,h(tn )−L0 ]) ≤ 2ϵ.
Thus we have
lim_{n→∞} ∥u(tn, ·) − q∗(h(tn) + xδ − ·)∥_{L∞([0, h(tn)])} = 0.                      (3.50)
Considering (1.5) with initial function u0(−x), the conclusions proved above imply
that
lim_{n→∞} ∥u(tn, ·) − q∗(· − g(tn) + xδ)∥_{L∞([g(tn), 0])} = 0.                      (3.51)

Claim 2. lim_{t→∞}(h(t) − c∗t) = h∗ := R∗ − 2C = L(0) − 2C.
By Claim 1, along a sequence {tn} satisfying
lim_{n→∞} [h(tn) − c∗tn + 2C] = lim inf_{t→∞} [h(t) − c∗t + 2C] = R∗,
(3.50) holds and
lim_{n→∞} [h(tn) − c∗tn] = h∗,   lim_{n→∞} h′(tn) = c∗.                              (3.52)

Since
lim_{n→∞} [h(tn) − c∗tn] = lim inf_{t→∞} [h(t) − c∗t],
if the desired conclusion does not hold, then lim sup_{t→∞} [h(t) − c∗t] = h̃∗ > h∗.
Thus we can find a sequence {sn} increasing to +∞ as n → ∞ such that
lim_{n→∞} [h(sn) − c∗sn] = h̃∗ > h∗.

We now derive a contradiction by making use of the upper solution (ū, ḡ, h̄)
defined in (3.22) with T∗ = tn. The key point is that, for large n, (3.50) and (3.51)
allow us to choose X0 and β rather freely. So take X0 = β = (h̃∗ − h∗)/4 > 0. Then
choose γ ∈ (0, η) small enough so that (3.28) and (3.29) hold.
Now instead of (3.21), which no longer holds, we use (3.50), (3.51), the strict
monotonicity of q∗(z) and the fact that z̃δ(t) := zδ(tn + t) is independent of n and
z̃δ(0) → xδ as γ → 0, and we see that, for small enough γ > 0 and all large n,
ū(tn, x) = (1 + γ)q∗(h(tn) + X0 − x + zδ(tn)) ≥ u(tn, x) for x ∈ [g(tn), h(tn)].    (3.53)
Thus (3.27) holds. Since (3.28) and (3.29) are satisfied by our choice of γ, we may
now check the proof of Lemma 3.4 and conclude that for all large n, (ū, ḡ, h̄) is an
upper solution, and thus for t > tn,
h(t) ≤ h̄(t) = c∗(t − tn) + β(1 − e^{−σ(t−tn)}) + h(tn) + X0.
It follows that, for all large k such that sk > tn ,
h(sk ) − c∗ sk ≤ h(tn ) − c∗ tn + β(1 − e−σ(sk −tn ) ) + X0 .
Letting k → ∞ we obtain
h̃∗ ≤ h(tn ) − c∗ tn + β + X0 .
Letting n → ∞ we deduce
h̃∗ ≤ h∗ + β + X0 = h∗ + (h̃∗ − h∗ )/2,
which is impossible.
Thus we have proved Claim 2, and therefore any positive sequence {tn} converging to
+∞ can be used in Lemma 3.6; so, by what has been proved so far, {tn} has a
subsequence such that (3.50) and (3.52) hold. This clearly implies that (3.50) and
(3.52) hold with tn → ∞ replaced by t → ∞.
As before, considering (1.5) with initial function u0(−x), the conclusions proved
above imply that the desired convergence holds for g(t) and u(t, x) over [g(t), 0] as
well. The proof of Theorem 1.4 is now complete. □
Acknowledgments. The author thanks the anonymous referees for their careful
reading of the manuscript and detailed suggestions to improve the presentation. He
is very thankful to Professor Chris Cosner and the late Professor Hans Weinberger
for inspiring discussions on the notion of preferred population density, and is also
very grateful to Professor Maolin Zhou for his help on the mathematical analysis
of the model and to Professor Hiroshi Matsuzawa for the help on the correction of
some mistakes in the preprint version of the paper.
REFERENCES

[1] D. G. Aronson and H. F. Weinberger, Nonlinear diffusion in population genetics, combustion,
and nerve pulse propagation, in Partial Differential Equations and Related Topics, Lecture
Notes in Math., 446, Springer, Berlin, 1975, 5-49.
[2] D. G. Aronson and H. F. Weinberger, Multidimensional nonlinear diffusion arising in popu-
lation genetics, Adv. Math., 30 (1978), 33-76.
[3] W. Bao, Y. Du, Z. Lin and H. Zhu, Free boundary models for mosquito range movement
driven by climate warming, J. Math. Biol., 76 (2018), 841-875.
[4] M. Basiri, F. Lutscher and A. Moameni, The existence of solutions for a free boundary problem
modeling the spread of ecosystem engineers, J. Nonlinear Sci., 31 (2021), Paper No. 72, 58
pp.
[5] H. Berestycki and F. Hamel, Generalized traveling waves for reaction-diffusion equations, in
Perspectives in Nonlinear Partial Differential Equations: In Honor of H. Brezis, Contemp.
Math., Vol. 446, Amer. Math. Soc., Providence, RI, 2007, 211-237.
[6] M. Bramson, Convergence of Solutions of the Kolmogorov Equation to Travelling Waves,
Mem. Amer. Math. Soc., 44 (1983), iv+190 pp.
[7] G. Bunting, Y. Du and K. Krakowski, Spreading speed revisited: Analysis of a free boundary
model, Netw. Heterog. Media, 7 (2012), 583-603.
[8] J. Cai, B. Lou and M. Zhou, Asymptotic behavior of solutions of a reaction diffusion equation
with free boundary conditions, J. Dyn. Differ. Equ., 26 (2014), 1007-1028.
[9] X. Chen and A. Friedman, A free boundary problem arising in a model of wound healing,
SIAM J. Math. Anal., 32 (2000), 778-800.
[10] W. Ding, Y. Du and Z. Guo, The Stefan problem for the Fisher-KPP equation with unbounded
initial range, Cal. Var. PDEs, 60 (2021), Paper No. 69, 37 pp.
[11] W. Ding, Y. Du and X. Liang, Spreading in space-time periodic media governed by a monos-
table equation with free boundaries, Part 1: Continuous initial functions, J. Diff. Eqns., 262
(2017), 4988-5021.
[12] W. Ding, Y. Du and X. Liang, Spreading in space-time periodic media governed by a monos-
table equation with free boundaries, Part 2: Spreading speed, Ann. Inst. H. Poincaré Anal.
Non Linéaire, 36 (2019), 1539-1573.
[13] Y. Du, Propagation, diffusion and free boundaries, SN Partial Differ. Equ. Appl., 1 (2020),
Article No. 35, 25 pp.
[14] Y. Du, J. Fang and N. Sun, A delay induced nonlocal free boundary problem, Math. Ann.,
386 (2023), 2061-2106.
[15] Y. Du, C. Gui, K. Wang and M. Zhou, Semi-waves with Λ-shaped free boundary for nonlinear
Stefan problems: Existence, Proc. Amer. Math. Soc., 149 (2021), 2091-2104.
[16] Y. Du and Z. Guo, The Stefan problem for the Fisher-KPP equation, J. Diff. Eqns., 253
(2012), 996-1035.
[17] Y. Du and Z. Guo, Liouville type results and eventual flatness of positive solutions for p-
Laplacian equations, Adv. Diff. Eqns., 7 (2002), 1479-1512.
[18] Y. Du, Z. Guo and R. Peng, A diffusive logistic model with a free boundary in time-periodic
environment, J. Funct. Anal., 265 (2013), 2089-2142.
[19] Y. Du and X. Liang, Pulsating semi-waves in periodic media and spreading speed determined
by a free boundary model, Ann. Inst. H. Poincaré Anal. Non Linéaire, 32 (2015), 279-305.
[20] Y. Du and Z. Lin, Spreading-vanishing dichotomy in the diffusive logistic model with a free
boundary, SIAM J. Math. Anal., 42 (2010), 377-405.
[21] Y. Du and B. Lou, Spreading and vanishing in nonlinear diffusion problems with free bound-
aries, J. Eur. Math. Soc., 17 (2015), 2673-2724.
[22] Y. Du, H. Matano and K. Wang, Regularity and asymptotic behavior of nonlinear Stefan
problems, Arch. Rational Mech. Anal., 212 (2014), 957-1010.
[23] Y. Du, H. Matsuzawa and M. Zhou, Sharp estimate of the spreading speed determined by
nonlinear free boundary problems, SIAM J. Math. Anal., 46 (2014), 375-396.
[24] Y. Du, H. Matsuzawa and M. Zhou, Spreading speed and profile for nonlinear Stefan problems
in high space dimensions, J. Math. Pures Appl., 103 (2015), 741-787.
[25] Y. Du, L. Wei and L. Zhou, Spreading in a shifting environment modelled by the diffusive
logistic equation with a free boundary, J. Dyn. Diff. Equations, 30 (2018), 1389-1426.
[26] C. Feng, M. A. Lewis, C. Wang and H. Wang, A Fisher-KPP model with a nonlocal weighted
free boundary: Analysis of how habitat boundaries expand, balance or shrink, Bull. Math.
Biol., 84 (2022), Paper No. 34, 27 pp.
[27] P. C. Fife and J. B. McLeod, The approach of solutions of nonlinear diffusion equations to
travelling front solutions, Arch. Ration. Mech. Anal., 65 (1977), 335-361
[28] R. A. Fisher, The wave of advance of advantageous genes, Ann Eugenics, 7 (1937), 335-369.
[29] H. Gu, B. Lou and M. Zhou, Long time behavior of solutions of Fisher-KPP equation with
advection and free boundaries, J. Funct. Anal., 269 (2015), 1714-1768.
[30] K. P. Hadeler, Stefan problem, traveling fronts, and epidemic spread, Discrete Contin. Dyn.
Syst. B., 21 (2016), 417-436.
[31] Y. Hu, X. Hao, X. Song and Y. Du, A free boundary problem for spreading under shifting
climate, J. Diff. Equa., 269 (2020), 5931-5958.
[32] Y. Kaneko, H. Matsuzawa and Y. Yamada, Asymptotic profiles of solutions and propagating
terrace for a free boundary problem of nonlinear diffusion equation with positive bistable
nonlinearity, SIAM J. Math. Anal., 52 (2020), 65-103.
[33] Y. Kawai and Y. Yamada, Multiple spreading phenomena for a free boundary problem of a
reaction-diffusion equation with a certain class of bistable nonlinearity, J. Differential Equa-
tions, 261 (2016), 538-572.
[34] A. N. Kolmogorov, I. G. Petrovski and N. S. Piskunov, A study of the diffusion equation
with increase in the amount of substance, and its application to a biological problem, Bull.
Moscow Univ. Math. Mech., 1 (1937), 1-25.
[35] F. Li, X. Liang and W. Shen, Diffusive KPP equations with free boundaries in time almost
periodic environments: I. Spreading and vanishing dichotomy, Discrete Contin. Dyn. Syst.
Ser. A, 36 (2016), 3317-3338.
[36] F. Li, X. Liang and W. Shen, Diffusive KPP equations with free boundaries in time almost pe-
riodic environments: II. Spreading speeds and semi-wave solutions, J. Differential Equations,
261 (2016), 2403-2445.
[37] X. Liang, Semi-wave solutions of KPP-Fisher equations with free boundaries in spatially
almost periodic media, J. Math. Pures Appl., 127 (2019), 299-308.
[38] G. M. Lieberman, Second Order Parabolic Differential Equations, World Scientific Publishing
Co. Inc., River Edge, NJ, 1996.
[39] S. Liu and X. Liu, Krylov implicit integration factor method for a class of stiff reaction-
diffusion systems with moving free boundaries, Discrete Cont. Dynam. Syst. B , 25 (2020),
141-159.
[40] F. Lutscher, J. Fink and Y. Zhu, Pushing the boundaries: Models for the spatial spread of
ecosystem engineers, Bull. Math. Biol., 82 (2020), Paper No. 138, 24 pp.
[41] S. W. McCue, M. El-Hachem and M. J. Simpson, Traveling waves, blow-up, and extinction
in the Fisher-Stefan model, Stud. Appl. Math., 148 (2022), 964-986.
[42] M.-A. Piqueras, R. Company and L. Jodar, A front-fixing numerical method for a free bound-
ary nonlinear diffusion logistic population model, J. Comput. Appl. Math., 309 (2017), 473-
481.
[43] N. Sun and J. Fang, Propagation dynamics of Fisher-KPP equation with time delay and free
boundaries, Calc. Var. Partial Differential Equations, 58 (2019), Art. 148, 38 pp.
[44] N. Sun, B. Lou and M. Zhou, Fisher-KPP equation with free boundaries and time-periodic
advections, Calc. Var. PDEs, 56 (2017), Article No. 61, 36 pp.
[45] M. Wang, A diffusive logistic equation with a free boundary and sign-changing coefficient in
time-periodic environment, J. Funct. Anal., 270 (2016), 483-508.
[46] L. Wei, G. Zhang and M. Zhou, Long time behavior for solutions of the diffusive logistic
equation with advection and free boundary, Calc. Var. Partial Differential Equations, 55
(2016), Art. 95, 34 pp.

Received August 2023; 1st revision August 2023; 2nd revision February 2024;
early access March 2024.