Canadian
Mathematical Society
Société mathématique
du Canada
Arian Novruzi
A Short Introduction
to Partial Differential
Equations
CMS/CAIMS Books in Mathematics
Volume 11
Series Editors
Karl Dilcher
Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, Canada
Frithjof Lutscher
Department of Mathematics, University of Ottawa, Ottawa, ON, Canada
Nilima Nigam
Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada
Keith Taylor
Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, Canada
Associate Editors
Ben Adcock
Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada
Martin Barlow
University of British Columbia, Vancouver, BC, Canada
Heinz H. Bauschke
University of British Columbia, Kelowna, BC, Canada
Matt Davison
Department of Statistical and Actuarial Science, Western University, London, ON, Canada
Leah Keshet
Department of Mathematics, University of British Columbia, Vancouver, BC, Canada
Niky Kamran
Department of Mathematics and Statistics, McGill University, Montreal, QC, Canada
Mikhail Kotchetov
Memorial University of Newfoundland, St. John’s, Canada
Raymond J. Spiteri
Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada
CMS/CAIMS Books in Mathematics is a collection of monographs and graduate-level
textbooks published jointly with the Canadian Mathematical Society - Société
mathématique du Canada and the Canadian Applied and Industrial Mathematics
Society - Société canadienne de mathématiques appliquées et industrielles. This series offers
authors the joint advantage of publishing with two major mathematical societies and with a
leading academic publishing company. The series is edited by Karl Dilcher, Frithjof Lutscher,
Nilima Nigam, and Keith Taylor. The series publishes high-impact works across the breadth of
mathematics and its applications. Books in this series will appeal to all mathematicians,
both students and established researchers. The series replaces the CMS Books in Mathematics
series, which successfully published over 45 volumes in 20 years.
Arian Novruzi
Arian Novruzi
Department of Mathematics
University of Ottawa
Ottawa, ON, Canada
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and
retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter
developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and
regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed
to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty,
expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been
made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional
affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This book provides a short introduction to PDEs. It is primarily addressed to graduate students and researchers
who are new to PDEs. The book offers a user-friendly approach to the analysis of PDEs by combining elementary
techniques with important concepts and fundamental modern methods.
This book focuses on the analysis of four prototypes of PDEs: first-order PDEs, second-order linear elliptic PDEs,
and two linear evolution PDEs—heat and wave equations, each with a second-order linear elliptic PDE principal
term. To facilitate a smooth introduction into the analysis for the reader, two approaches are presented for each of the
PDEs. The first approach consists of the method of analytical and classical solutions. It includes the method of
characteristics, the method of separation of variables, and Perron’s method. The simplicity of this approach, its
potential to provide classical solutions, and the impossibility of providing a classical solution in a general setting are
highlighted, and used to motivate the second approach.
The second approach is the method of weak (variational) solutions. While for first-order PDEs we limit the
analysis to a short discussion of scalar conservation laws, we present a detailed analysis for the other PDEs. The main
ingredients we use are the Lax-Milgram lemma, Fredholm alternative and spectral decomposition theorems, and a
number of results from Sobolev spaces. As an application of the results from the second-order linear elliptic PDEs and
Sobolev spaces, we give a very short introduction to the solution to certain nonlinear PDEs by using a fixed point
approach.
In connection with the second approach, we give an introduction to distributions, Fourier transform, and Sobolev
spaces, which are fundamental for the study of PDEs. Though in a number of cases we deal with general fractional
Sobolev spaces Ws,p, we mainly analyze Hs Sobolev spaces by using Fourier transform. We give some fundamental
results related to the concepts of density, extension, embeddings, compactness, and boundary traces. We provide
proofs of some of these results in particular cases, with the intention to highlight the most relevant techniques and to
avoid the complexity of general cases. The book ends with an appendix chapter, which complements the previous
chapters with proofs, examples, and remarks.
Chapter 1 (introduction) provides some concepts and results from analysis, functional analysis, and topology,
which will be used as the book develops. At first reading, one only needs to get familiar with them. Chapter 2 provides
the formal definition of PDEs and a number of examples. Chapters 3 (first-order PDEs) and 4 (second-order PDEs
and maximum principle) can be read independently. Each of them depends on certain results from chapter 1.
Chapters 7 (second-order PDEs and weak solutions) and 8 (evolution PDEs) are almost independent, and each
of them depends on chapters 6 (Sobolev spaces) and 5 (distributions), and some results from chapter 1.
This book can be used for an intensive one-semester, or a regular two-semester, PDE course. The reader is expected to
have knowledge of linear algebra and differential equations, a good background in real and complex calculus, and a
modest background in analysis and topology. The book has many examples, which help to explain the concepts,
highlight the key ideas, and emphasize the sharpness of results, as well as a section of problems at the end of each
chapter.
I am grateful to Professor Michel Pierre for a number of comments and suggestions, which have helped me
improve this book.
5 Distributions . . . 67
5.1 Motivation . . . 67
5.2 Distributions . . . 68
5.2.1 Test functions . . . 68
5.2.2 Distributions . . . 69
5.2.3 Derivatives of distributions . . . 73
5.3 Convolution of distributions and fundamental solutions . . . 76
6 Sobolev spaces . . . 93
6.1 Definitions and some first properties . . . 93
6.1.1 Density of D in W^{k,p} . . . 96
6.1.2 Some applications . . . 98
6.2 H^s spaces and Fourier transform: W^{s,p} and W_0^{s,p} spaces . . . 101
6.3 Continuous, compact, and dense embedding theorems in H^s(Ω) . . . 105
6.3.1 Case Ω = R^N . . . 106
6.3.2 Case Ω ⊊ R^N . . . 108
6.4 Boundary traces in Sobolev spaces . . . 112
6.5 Poincaré inequality . . . 117
6.6 H^s(Ω) and W^{s,q}(Ω) spaces . . . 118
Problems . . . 120
9 Annex . . . 159
9.1 Notations and review (Ch. 1) . . . 159
9.1.1 Continuous differentiable functions . . . 159
9.1.2 Some results from L^p(Ω) spaces . . . 160
9.1.3 An application: Ordinary Differential Equations . . . 161
9.2 First-order PDEs: classical and weak solutions (Ch. 3) . . . 163
9.2.1 Classical local solutions to first-order PDEs . . . 163
9.2.2 Conservation laws and weak solutions . . . 166
9.3 Second-order linear elliptic PDEs: maximum principle and classical solutions (Ch. 4) . . . 169
9.3.1 Dirichlet problem in a ball . . . 169
9.3.2 Maximum principle for second-order linear elliptic PDEs . . . 170
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
1. Notations and review
The objective of this chapter is to help the reader become familiar with a number
of concepts and some important results that will be used throughout this book. It is
expected that the reader has a minimum background in analysis in RN and topology.
‖z‖_Z = max{‖x_1‖_{X_1}, . . . , ‖x_n‖_{X_n}} (or ‖z‖_Z = ‖x_1‖_{X_1} + · · · + ‖x_n‖_{X_n}), (1.1.2)
¹ In Latin id est, which means that is.
² (x_n) is a Cauchy sequence if for every ε > 0 there exists n_ε ∈ N such that ‖x_{n+m} − x_n‖_X < ε for all integers n > n_ε and m ∈ N.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Novruzi, A Short Introduction to Partial Differential Equations, CMS/CAIMS Books in Mathematics 11,
https://doi.org/10.1007/978-3-031-39524-6_1
for z = (x_1, . . . , x_n) ∈ Z, then it is easy to show that (Z, ‖·‖_Z) is another Banach space.
If X_1 = · · · = X_n = X we write X^n instead of X × · · · × X (n times).
Definition 1.1.2 (C^0(Ω; Y) spaces) Let (X, ‖·‖_X), (Y, ‖·‖_Y) be two Banach spaces,
Ω ⊂ X be an open set, and u : Ω → Y a function. We say "u is continuous at x ∈ Ω" if
i) ℓ : X^n → Y is said to be "n-linear" if
ℓ(x_1, . . . , x_{i−1}, c_i x_i + c'_i x'_i, x_{i+1}, . . . , x_n) = c_i ℓ(x_1, . . . , x_{i−1}, x_i, x_{i+1}, . . . , x_n)
+ c'_i ℓ(x_1, . . . , x_{i−1}, x'_i, x_{i+1}, . . . , x_n),
Given ℓ ∈ L(X^n; Y) and x ∈ X^n, sometimes instead of ℓ(x) we will write ⟨ℓ, x⟩_{L(X^n;Y)×X^n},
and we will remove L(X^n; Y) × X^n from ⟨ℓ, x⟩ whenever there is no ambiguity. Also, we
will omit Y when Y = R, so we will write L(X^n) instead of L(X^n; R), and furthermore
we will write X′, resp. X″, instead of L(X), resp. L(X′).
Remark 1.1.4 An n-linear map ℓ from X^n into Y is continuous if and only if
‖ℓ(x)‖_Y ≤ C‖x‖_{X^n}, ∀x ∈ X^n,
For this reason, a "continuous n-linear map" is equivalent to a "bounded n-linear map".
Furthermore, one can show that
‖ℓ‖_{L(X^n;Y)} = sup{‖ℓ(x)‖_Y, x ∈ X^n, ‖x‖_{X^n} = 1}, ℓ ∈ L(X^n; Y), (1.1.6)
is a norm in L(X^n; Y) and (L(X^n; Y), ‖·‖_{L(X^n;Y)}) is a Banach space.
Example 1.1.5 In the case when X = R^N and Y = R we have
L(R^N) = {ℓ : R^N → R, ℓ(x) = Σ_{i=1}^N A_i x_i, A = (A_i) ∈ R^N}, x = (x_i) ∈ R^N,
L(R^{2N}) = {ℓ : R^N × R^N → R, ℓ(x^1, x^2) = Σ_{i,j=1}^N A_{i,j} x^1_i x^2_j, A = (A_{i,j}) ∈ R^{N²}}, x^1 = (x^1_i), x^2 = (x^2_j) ∈ R^N,
· · ·
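Example 1.1.5 can be made concrete numerically. The following Python sketch (using NumPy; an illustration added here, not part of the text, and necessarily finite-dimensional) identifies an element of L(R^N) with a vector A and a bilinear element of L(R^{2N}) with a matrix, checks linearity in each argument, and verifies that the norm (1.1.6) of x ↦ A · x equals the Euclidean norm of A, where it is attained at x = A/|A| by Cauchy-Schwarz. Compare Problem 1.2.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# An element of L(R^N) is l(x) = sum_i A_i x_i, i.e. a dot product with A.
A = rng.normal(size=N)
l = lambda x: A @ x

# A bilinear element of L(R^{2N}) is l(x1, x2) = sum_{i,j} A_ij x1_i x2_j.
B = rng.normal(size=(N, N))
bil = lambda x1, x2: x1 @ B @ x2

# Linearity in each argument, as in the definition of n-linear maps.
x, y = rng.normal(size=N), rng.normal(size=N)
c1, c2 = 2.0, -3.0
assert np.isclose(l(c1 * x + c2 * y), c1 * l(x) + c2 * l(y))
assert np.isclose(bil(c1 * x + c2 * y, y), c1 * bil(x, y) + c2 * bil(y, y))

# By Cauchy-Schwarz, the norm (1.1.6) of l is |A|, attained at x = A/|A|.
x_star = A / np.linalg.norm(A)
assert np.isclose(abs(l(x_star)), np.linalg.norm(A))
```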
Now we define the C k spaces in general Banach spaces.
Definition 1.1.6 (C^k(Ω; Y) spaces) Let (X, ‖·‖_X), (Y, ‖·‖_Y) be two Banach spaces,
Ω ⊂ X be an open set, and u ∈ C^0(Ω; Y).
1) We say u is differentiable at x ∈ Ω if there exists u^(1)(x) ∈ L(X; Y), called
"derivative of u at x", such that
lim_{‖h‖_X → 0} ‖u(x + h) − u(x) − u^(1)(x; h)‖_Y / ‖h‖_X = 0.
Here u^(1)(x; h) denotes the value of u^(1)(x) at h. If u is differentiable at every
x ∈ Ω, then u^(1) : Ω → L(X; Y) denotes "the derivative of u". Furthermore we set
C^1(Ω; Y) = {u ∈ C^0(Ω; Y) such that u^(1) ∈ C^0(Ω; L(X; Y))}. (1.1.7)
For α ∈ N_0^N and u : R^N → R having continuous partial derivatives of order |α| in the
usual sense, we write
D^α u = ∂^{|α|}u / (∂x_1^{α_1} · · · ∂x_N^{α_N}) = ∂_{x_1}^{α_1} · · · ∂_{x_N}^{α_N} u = ∂_1^{α_1} · · · ∂_N^{α_N} u = u_{x_1 · · · x_1 ······ x_N · · · x_N} (α_1 times, . . . , α_N times), (1.1.13)
D^α u = u, if |α| = 0.
We say D^α u is an "α derivative of u", and "D^α u is a derivative of order |α|". Finally,
for m ∈ N_0 we denote by D^m u = (u_1, u_2, . . . , u_{N^m}) the vector of m-th order partial
derivatives of u, defined by recurrence as follows:
D^0 u := u,
D^1 u = (D^1_1, . . . , D^1_N) := (∂_1 u, . . . , ∂_N u),
D^2 u = (D^2_1, . . . , D^2_{N²}) := (D^1 D^1_1 u, . . . , D^1 D^1_N u),
· · ·
D^m u = (D^m_1, . . . , D^m_{N^m}) := (D^1 D^{m−1}_1 u, . . . , D^1 D^{m−1}_{N^{m−1}} u).
Proposition 1.1.10 Let Ω ⊂ R^N be an open set. Then the spaces C^k(Ω) of Definition
1.1.6 with X = R^N, Y = R, are equivalently defined as
C^k(Ω) = {u : Ω → R, D^α u ∈ C^0(Ω), ∀α ∈ N_0^N, |α| ≤ k}. (1.1.15)
For the proof the reader can see [7, 8]. The idea of the proof, in the case k = 1 to avoid
the technicalities, is to point out that if u ∈ C^1(Ω) in the sense of Definition 1.1.6, then
necessarily u ∈ C^0(Ω) and all ∂_{x_i}u(x) := u′(x; e_i), e_i = (0, . . . , 0, 1, 0, . . . , 0) with 1 in
the i-th place, exist and are continuous in Ω. Conversely, if all ∂_{x_i}u ∈ C^0(Ω) then one
defines u′(x) ∈ L(R^N) by u′(x; h) = Σ_{i=1}^N ∂_{x_i}u(x) h_i, h = (h_1, . . . , h_N), and proves that
u ∈ C^1(Ω) in the sense of Definition 1.1.6 by using the mean value theorem and the
continuity of all ∂_{x_i}u.
The space C_b^k(Ω), resp. C_b^{k,λ}(Ω), k ∈ N_0, equipped with the norm (1.1.20), resp. (1.1.21),
is a Banach space.
The spaces C k,λ are called “Hölder spaces”. They are useful when describing addi-
tional regularity of functions in Sobolev spaces; see chapter 6. We note that in connection
with C k,λ spaces there is a complete theory associated with second-order linear partial
differential equations, called “Hölder theory”; see, for example, [20].
We also note that a function in C_b^k(Ω) is not necessarily continuous up to the boundary.
For example, u(x) = 2x_1x_2/(x_1² + x_2²) ∈ C_b^0(Ω), Ω = (0, 1) × (0, 1), but u ∉ C^0(Ω̄). The
following subspaces of C_b^0(Ω) are useful when approximating, or describing additional
regularity of, Sobolev space functions, and are defined by
where Ω̄ is the closure of Ω.³,⁴ If Ω is bounded, we can remove the subscript "b" from
the spaces C_b^k(Ω̄) and C_b^{k,λ}(Ω̄), because every continuous function in a compact set⁵
is bounded. The space C_b^k(Ω̄), resp. C_b^{k,λ}(Ω̄), endowed with the norm (1.1.20), resp.
(1.1.21), is a Banach space.
When working with distributions or with weak solutions to PDEs, we are interested
in C^k(Ω) functions which vanish in a neighborhood of the boundary ∂Ω of Ω.
Definition 1.1.12 (C_0^k(Ω) spaces) Let Ω ⊂ R^N be an open set. Then we define
³ The closure Ω̄ of Ω ⊂ X, X Banach, is defined as Ω̄ = Ω ∪ ∂Ω, where ∂Ω is the boundary of Ω.
⁴ The boundary ∂Ω of Ω ⊂ X, X Banach, is defined as ∂Ω = {x ∈ X, B(x, r) ∩ Ω ≠ ∅, B(x, r) ∩ Ω^c ≠ ∅, ∀r > 0}.
⁵ In general, K ⊂ X, X Banach, is compact if every sequence (x_n) in K has a convergent subsequence with limit in K.
1.2 Domains and C^k(∂Ω) spaces
Usually, PDEs are equipped with boundary conditions, which describe the solution on
the boundary. It is then necessary to define some spaces of functions on the bound-
ary. First we define the space C 0 (∂Ω). The definition of C k (∂Ω) spaces requires more
regularity of ∂Ω and will be introduced in the next section.
Definition 1.1.14 (C^0(∂Ω) spaces) Let Ω ⊂ R^N. A function u : ∂Ω → R is said to be
"continuous at x ∈ ∂Ω" if lim_{|y−x|→0, y∈∂Ω} |u(y) − u(x)| = 0, and it is said to be "continuous on
∂Ω" if it is continuous at every x ∈ ∂Ω. We define C^0(∂Ω) by
C^0(∂Ω) = {u : ∂Ω → R, u continuous on ∂Ω}. (1.1.31)
When ∂Ω is unbounded one defines C_b^0(∂Ω) = C^0(∂Ω) ∩ {u bounded}.
We note that when Ω is bounded, we have C_b^0(∂Ω) = C^0(∂Ω). The space C_b^0(∂Ω) is a
Banach space when equipped with the norm
i) 0 ≤ ϕ_i ≤ 1, for all i = 0, 1, . . . , m,
ii) Σ_{i=0}^m ϕ_i(x) = 1 for all x ∈ R^N,
iii) supp(ϕ_i) ⊂ G_i for all i = 1, . . . , m, supp(ϕ_0) ∩ Γ = ∅.
Note that in the case Γ = ∂Ω with Ω ⊂ R^N an open bounded set, we have ϕ_0 = 0 in
a neighborhood of ∂Ω, and so Σ_{i=1}^m ϕ_i = 1 in the same neighborhood of ∂Ω.
where (ρn ) is the mollifier sequence in Example 1.1.13. Clearly ηn ∈ D(RN ), ηn ≥ 0 and
for n such that 1/n < δ we have
(i) ηn (x) = 1 for x ∈ K, because B(x, 1/n) ⊂ Uδ for all x ∈ K, and
(ii) ηn (x) = 0 for x ∈ Vδc , because B(x, 1/n) ∩ Uδ = ∅,
which proves i).
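The construction of η_n can be visualized in one dimension. The sketch below (Python with NumPy; an illustration added here, not from the text) mollifies the indicator function of a δ-neighborhood U_δ of K = [−1, 1] with a discretized bump ρ of width ε < δ, and checks that the resulting cut-off function equals 1 on K, vanishes far from K, and takes values in [0, 1].

```python
import numpy as np

# Grid on the line; K = [-1, 1] and U_delta = (-1 - delta, 1 + delta).
h = 1e-3
x = np.arange(-3.0, 3.0, h)
delta, eps = 0.2, 0.05            # eps plays the role of 1/n, with 1/n < delta

# Discretized standard mollifier rho_eps supported in (-eps, eps).
rho = np.zeros_like(x)
m = np.abs(x) < eps
rho[m] = np.exp(-1.0 / (1.0 - (x[m] / eps) ** 2))
rho /= rho.sum() * h              # normalize so that the integral of rho is 1

# eta = (indicator of U_delta) convolved with rho_eps.
chi = ((x > -1.0 - delta) & (x < 1.0 + delta)).astype(float)
eta = np.convolve(chi, rho, mode="same") * h

K = np.abs(x) <= 1.0
far = np.abs(x) >= 1.0 + 2.0 * delta
assert np.allclose(eta[K], 1.0)        # eta = 1 on K
assert np.allclose(eta[far], 0.0)      # eta = 0 away from U_delta
assert eta.min() >= -1e-12 and eta.max() <= 1.0 + 1e-12
```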
Proof of ii). For x ∈ Γ there exist i ∈ {1, . . . , m} and ε_x > 0 such that B(x, ε_x) ⊂ G_i.
As Γ is compact, from its open covering {B(x, ε_x), x ∈ Γ} there exists a finite covering
{B(x_j, ε_j), j = 1, . . . , M} of Γ with B(x_j, ε_j) ⊂ G_{i_j} for a certain i_j ∈ {1, . . . , m}. Then
U_i := ∪{B(x_j, ε_j), B(x_j, ε_j) ⊂ G_i}, i = 1, . . . , m, satisfies the claim ii.1).
Let η_i ∈ D(R^N), i = 1, . . . , m, be a {U_i, G_i} cut-off function, and set η = Σ_{i=1}^m η_i.
Clearly η ≥ 1 in ∪_{i=1}^m U_i. Set U^0 = {x ∈ R^N, η(x) > 3/4}, G^0 = {x ∈ R^N, η(x) > 1/2},
and let η_0 be a {U^0, G^0} cut-off function. Then
ϕ_0 = 1 − η_0,  ϕ_i = (η_i/η) η_0 in G^0 and ϕ_i = 0 in (G^0)^c, i = 1, . . . , m,
ii) θ_x(Q^+) = G_x^+ := Ω ∩ G_x,
iii) θ_x(Q^0) = G_x^0 := ∂Ω ∩ G_x.
This definition requires more of a C^k domain than just i) and iii), which is what one
might expect at a first attempt. The feature ii) ensures that Ω is on one side of ∂Ω.
Some properties of functions in Sobolev spaces hold for Lipschitz domains which
have a weaker regularity than C k domains.
Definition 1.2.5 (Lipschitz domains) Let Ω ⊂ R^N be an open and bounded set. We
say Ω is Lipschitz if Ω satisfies Definition 1.2.4 except i), which is replaced by
i)_L ∃L_x ≥ 0, ∀x^1, x^2 ∈ Q, L_x^{−1}|x^1 − x^2| ≤ |θ_x(x^1) − θ_x(x^2)| ≤ L_x|x^1 − x^2| (in this case
⁶ Of course, instead of 1/2, 3/4 one could choose any a, b with 0 < a < b < 1.
Remark 1.2.8 The space C k (∂Ω) is a Banach space when equipped with the norm
It can be shown that two C^k(∂Ω) norms, n(·), resp. ñ(·), corresponding to the choices
{(G_i, θ_i, ϕ_i), i = 1, . . . , m}, resp. {(G̃_i, θ̃_i, ϕ̃_i), i = 1, . . . , m̃}, are equivalent. Indeed
(uϕ_i) ∘ θ_i(·, 0) = Σ_{j=1}^{m̃} (((uϕ̃_j) ∘ θ̃_j(·, 0)) ∘ h_{i,j}(·, 0)) (ϕ_i ∘ θ_i(·, 0)),
where h_{i,j}(·, 0) = θ̃_j^{−1} ∘ θ_i(·, 0), which implies n(u) ≤ C ñ(u), with C > 0 depending only
on {(G_i, θ_i, ϕ_i), i = 1, . . . , m} and {(G̃_i, θ̃_i, ϕ̃_i), i = 1, . . . , m̃}. We proceed similarly to
prove the reverse inequality.
1.3 Review of some important results
Definition 1.3.1 (L^p(Ω) spaces) Let p ∈ [1, ∞], Ω ⊂ R^N be measurable for the
N-dimensional Lebesgue measure dx, and define
L^p(Ω) = {u : Ω → R, u dx-measurable and ∫_Ω |u(x)|^p dx < ∞}, p ∈ [1, ∞),
L^∞(Ω) = {u : Ω → R, u dx-measurable and ∃M > 0, |u(x)| ≤ M, a.a. x ∈ Ω}, (1.3.1)
L^p_loc(Ω) = {u : Ω → R, u ∈ L^p(ω), ∀ω ⊂ Ω, ω open, ω ⋐ Ω}.
The following result shows that Lp functions are not very irregular, in the sense that
they can be approximated by D functions.
⁷ In the sense that there exists an isomorphism from E to E′.
⁸ In general, the conjugate p′ of p is defined for all p ∈ [1, ∞].
∫_Ω f(x)ϕ(x)dx = 0 for every ϕ ∈ D(Ω). Then f = 0.
⁹ A Banach space E is said to be separable if it contains a dense countable set F, i.e. F ⊂ E and for every u ∈ E there exists (ϕ_n) in F such that lim_{n→∞} ‖ϕ_n − u‖_E = 0.
¹⁰ A metric space (M, d) is said to be complete if every Cauchy sequence in M converges to an element of M.
¹¹ All along this section the reader may take X = R^N.
Here ′ denotes the derivative with respect to the variable t and x is the initial condition.
The function f is sometimes called an "(autonomous) vector field". The function
α is called an "integral curve with initial condition x". Regarding the existence and
uniqueness of (1.3.4), we have the following result.
Theorem 1.3.13 Let (X, ‖·‖_X) be a Banach space, U ⊂ X open, and f : U → X
Lipschitz with a Lipschitz constant L. Let x_0 ∈ U, ρ ∈ (0, 1) such that B(x_0, 2ρ) ⊂ U
and K > 0 such that ‖f‖_{C^0(B(x_0,2ρ))} ≤ K. Finally let 0 < r < min{1/L, ρ/K} and
set I_r = [−r, r]. Then there exists α : I_r × B(x_0, ρ) → U, such that for every x ∈
B(x_0, ρ) there exists a unique solution α(·, x) ∈ C^1(I_r; X) of (1.3.4). Furthermore, if
f ∈ C^k(U; X) then α(·, x) ∈ C^{k+1}(I_r; X).
Proof. See Theorem 9.1.4 in the Annex.
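The existence part of this theorem is classically obtained via the Banach fixed point theorem applied to the Picard map α ↦ x + ∫_0^t f(α(s)) ds. The following sketch (Python with NumPy; an illustration added here, not the book's proof) iterates this map on a time grid for the scalar vector field f(a) = a, whose integral curve is known explicitly, and also checks the Lipschitz dependence on the initial condition numerically.

```python
import numpy as np

def picard_flow(f, x0, r, n_steps=256, n_iter=60):
    """Iterate a_{k+1}(t) = x0 + int_0^t f(a_k(s)) ds on [0, r] (trapezoid rule)."""
    t = np.linspace(0.0, r, n_steps)
    dt = t[1] - t[0]
    a = np.full_like(t, float(x0))
    for _ in range(n_iter):
        v = f(a)
        # cumulative trapezoidal integral of f(a_k) from 0 to each t
        integral = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) * dt / 2.0)))
        a = x0 + integral
    return t, a

# f(a) = a is Lipschitz with L = 1; the exact integral curve is a(t) = x0 e^t.
t, a = picard_flow(lambda a: a, x0=1.0, r=0.5)
assert np.allclose(a, np.exp(t), atol=1e-4)

# Lipschitz dependence on the initial condition: |a(t, x) - a(t, y)| <= e^t |x - y|.
_, b = picard_flow(lambda a: a, x0=1.1, r=0.5)
assert np.max(np.abs(a - b)) <= 0.1 * np.exp(0.5) + 1e-6
```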
The following theorem shows that the solution α(t, x) to (1.3.4) is stable with respect
to x, in the sense of (1.3.5). See Theorem 9.1.6 for the proof of this result.
Theorem 1.3.14 Assume the conditions of Theorem 1.3.13 hold. Let x ∈ B(x0 , ρ) and
α(·, x) ∈ C 1 (I r ; U ) as given by Theorem 1.3.13. Then x → α(·, x) is uniformly Lipschitz
from B(x0 , ρ) to C 0 (I r ; X), i.e. for every x, y ∈ B(x0 , ρ) we have
Theorem 1.3.13 shows that if f is of class C k then α(·, x) is of class C k+1 with respect
to t. The following theorem shows that α(·, ·) is also of class C k with respect to (t, x).
The reader can find the proof, for example, in [28, Chapter XIV, §3].
Theorem 1.3.15 Assume f ∈ C k (U ; X), k ∈ N ∪ {∞}. For x ∈ U let J(x) ⊂ R be
the interval such that a solution α(·, x) ∈ C k+1 (J(x); U ) exists, and let D = {(t, x) ∈
R × U, t ∈ J(x), x ∈ U }. Then α ∈ C k (D; X).
Remark 1.3.16 Note that often we are interested in a more general problem than
(1.3.4), namely
x ∈ U, x = (s, y),
α′(t, x) = (1, β′(s + t, y)) = (1, g(s + t, β(s + t, y))) = f(α(t, x)), t ∈ I, (1.3.7a)
α(0, x) = (s, y) = x, (1.3.7b)
Similarly for PDEs, we are interested in questions i), ii), and iii). Given that usually the
solutions to PDEs are “weak”, meaning that they do not have the classical regularity
C k (we will see what it means precisely in the following chapters), for PDEs we consider
also the following question:
We will address i) and ii) for all the PDEs we consider in this book. For ease of reading,
in section 9.6 in Annex we have addressed question iv) for second-order linear elliptic
PDEs.
Problems
Problem 1.1 Let X, Y, and Z be three Banach spaces, f ∈ C 0 (X; Y ), g ∈ C 0 (Y ; Z)
with Im(f ) ⊂ Dom(g), where, in general, for a function f : X → Y we set
Problem 1.2 Find the norm of the elements of L(R^{nN}); see Example 1.1.5 and Remark 1.1.4.
Problem 1.3 Let Ω ⊂ R^N be open, k ∈ N, and λ ∈ (0, 1]. Prove that ‖·‖_{C_b^k(Ω)}, resp.
‖·‖_{C_b^{k,λ}(Ω)}, as given by (1.1.20), resp. (1.1.21), defines a norm in C_b^k(Ω), resp. C_b^{k,λ}(Ω).
Problem 1.4 Find a C ∞ domain in RN , and another one which is C 1 but not C 2 .
Problem 1.7 Show that if p ∈ (0, 1) then ‖·‖_{L^p(Ω)} is not a norm by showing that the
triangle inequality is not satisfied.
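As a hint toward one possible counterexample (an illustration added here, not from the text): on Ω = (0, 1) with p = 1/2, take u and v to be the indicator functions of the two halves of Ω. For an indicator of a set of measure m the quantity (∫|u|^p dx)^{1/p} equals m^{1/p}, so the check reduces to arithmetic:

```python
# On Omega = (0, 1) with p = 1/2: for an indicator of a set of measure m,
# (int |u|^p dx)^(1/p) = m^(1/p).
p = 0.5

def lp_of_indicator(measure):
    return measure ** (1.0 / p)

# u = 1 on (0, 1/2), v = 1 on (1/2, 1), so u + v = 1 on (0, 1).
lhs = lp_of_indicator(1.0)            # "norm" of u + v
rhs = 2.0 * lp_of_indicator(0.5)      # ||u|| + ||v|| = 0.25 + 0.25
assert lhs > rhs                      # 1 > 0.5: the triangle inequality fails
```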
Problem 1.8 Show that Theorem 1.3.6 does not hold if only one of i), ii) is satisfied.
¹² One may use this fact: f differentiable at x is equivalent to f(x + h) = f(x) + f′(x; h) + ε_h ‖h‖_X for all h ∈ X, where ε_h ∈ Y is such that lim_{‖h‖_X→0} ‖ε_h‖_Y = 0.
2. Partial differential equations
It would be useful to classify PDEs into classes such that the PDEs of the same
class share the same qualitative properties. For example, a classification such that the
proof of the existence, uniqueness, regularity, and even numerical analysis is the same
(or similar) for all the PDEs of the same class. If restricted to linear second-order PDEs,
a complete classification can be done by using the Fourier transform. We will not enter into
details and instead refer the interested reader to [12]; see also Definition 2.0.3 for the case
of a second-order linear PDE with constant coefficients.
Here we will simply define linear and nonlinear PDEs. This definition serves as a
basis for identifying types of PDEs, but in general the PDEs of the same type under
this definition do not share the same qualitative properties.
Definition 2.0.1 Let Ω ⊂ R^N be an open set.
i) An (N-dimensional) PDE of order m ∈ N in Ω is an equation involving an unknown
function u = u(x), its derivatives up to order m, and x, of the form
E(x, u, D^1 u, . . . , D^m u) = 0, x ∈ Ω, (2.0.1)
where E : R^N × R × R^N × R^{N²} × · · · × R^{N^m} → R and D^k u is given by (1.1.14).
iv) If (2.0.1), resp. (2.0.2), is linear with respect to u and all its derivatives then
(2.0.1), resp. (2.0.2), is called “linear”. Otherwise it is called “nonlinear”.
Lu := − Σ_{i,j=1}^N a_{i,j} ∂²_{x_i x_j} u + Σ_{i=1}^N b_i ∂_{x_i} u + cu = f, with f given. (2.0.3)
= Σ_{k,l=1}^N (ᵗV · A · V)_{k,l} ∂²_{ξ_k ξ_l} v = Σ_{k,l=1}^N Λ_{k,l} ∂²_{ξ_k ξ_l} v
= Σ_{k=1}^N λ_k ∂²_{ξ_k ξ_k} v,
b · ∇u = Σ_{k=1}^N μ_k ∂_{ξ_k} v, μ_k = Σ_{i=1}^N v_{i,k} b_i; hence
Lu = − Σ_{k=1}^N λ_k ∂²_{ξ_k ξ_k} v + Σ_{k=1}^N μ_k ∂_{ξ_k} v + cv.
¹ For a matrix M ∈ R^{N×N}, ᵗM will denote its transposed matrix and M^{−1} will denote its inverse matrix.
Lu = − Σ_{k=1}^N α_k ∂²_{η_k η_k} w + Σ_{k=1}^N β_k ∂_{η_k} w + cw, (2.0.4)
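The reduction to the diagonal form above can be checked numerically for a concrete principal part. The sketch below (Python with NumPy; an illustration added here with an arbitrarily chosen matrix) computes the spectral decomposition A = V Λ ᵗV used in the change of variables ξ = ᵗV x, and verifies that the mixed second-order coefficients vanish in the new variables.

```python
import numpy as np

# Symmetric coefficient matrix A = (a_ij) of the principal part of L.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Spectral decomposition A = V diag(lam) tV with V orthogonal.
lam, V = np.linalg.eigh(A)

# In the variables xi = tV x the mixed derivatives disappear:
# tV A V = diag(lam), so the principal part becomes sum_k lam_k d^2/dxi_k^2.
assert np.allclose(V.T @ A @ V, np.diag(lam))

# L is elliptic precisely when all eigenvalues have the same (nonzero) sign.
assert np.all(lam > 0)
```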
2.1 Some prototypes of PDEs
Transport equation
The transport equation, which sometimes is referred to as the continuity equation, models
the transport of a certain quantity, for example, the transport of a cloud of particles. In
its simplest form it is
u_t + b · ∇u := u_t + Σ_{i=1}^N b_i ∂_{x_i} u = f in Ω × (0, T), b = (b_1, . . . , b_N) ∈ R^N. (2.1.1)
In chapter 3, we will consider general first-order PDEs, which include equation (2.1.1).
Laplace equation
The Laplace equation models, for example, the distribution of heat in a stationary regime,
i.e. the heat does not depend on time. This is one of the most studied PDEs and the
most representative of second-order elliptic PDEs. It has the form
We will study the classical and weak solutions to (2.1.2) in chapters 4 and 7.
Heat equation
The heat equation models the distribution of heat in a time-dependent regime, i.e. the
heat depends on time. In its simplest form it is
Wave equation
The wave equation models the oscillations of a certain quantity, for example the motion
of an elastic membrane. In its simplest form it is
utt − c2 Δu = f in Ω × (0, T ), c > 0. (2.1.4)
Equation (2.1.4) is the most significant representative of second-order PDEs of hyper-
bolic type. We will study (2.1.4) in chapter 8.
Nonlinear PDEs
There are many nonlinear PDEs, often originating from applications. For example,
the minimal surface equation in R³, see [20],
(1 + u2x )uyy − 2ux uy uxy + (1 + u2y )uxx = 0 in Ω, u = g on ∂Ω, (2.1.5)
where u = u(x), x = (x1 , x2 ) ∈ Ω ⊂ R2 , describes the surface with the least area among
all graphs of functions in Ω equaling g on ∂Ω.
Another important example of nonlinear PDEs is given by the incompressible Navier-Stokes
equations, see, for example, [52]:
ρ(∂t u + (u · ∇)u) − μΔu + ∇p = f in Ω × (0, T ), (2.1.6a)
∇ · u = 0 in Ω × (0, T ), (2.1.6b)
where Ω ⊂ RN , N ≥ 2, 0 < T ≤ ∞, μ, ρ > 0 are given physical parameters, f =
(f1 , . . . , fN ) is a given function, and u = u(x, t), u = (u1 , . . . , uN ) and p = p(x, t) ∈ R
are unknown. The variable u represents the velocity and p represents the pressure. Here
the nonlinearity is due to the term (u · ∇)u.
The analysis of nonlinear PDEs is more difficult than that of linear PDEs. However, typically
the question of the existence of solutions relies strongly on the existence of solutions
to certain associated linear PDEs, on compactness results, and on tools of Functional
Analysis such as fixed point or topological degree theorems; see, for example, [8, 20, 42].
We will discuss very briefly some nonlinear PDEs mostly in the context of examples; see
Section 7.3.
Problems
Problem 2.1 Find a C 1 solution u = u(x, y) of
a) ux = 3x2 y + y, uy = x3 + x, u(0, 0) = 0,
b) ux = y cos x + 1, uy = sin x, u(0, 0) = 0.
Problem 2.4 Let u = u(x, t), (x, t) ∈ R × (0, ∞) and consider the PDE
utt + 4utx + 4uxx + (ux − 2ut ) = 0.
Use the transformation in Example 2.0.2 to show that this equation is of parabolic type,
and in the new coordinates (ξ, τ ) the PDE is written in the form vτ − 5vξξ = 0.
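The type of the equation in Problem 2.4 can be read off the symmetric coefficient matrix of its principal part, as in Definition 2.0.3. A quick check (Python with NumPy; an illustration added here, not a full solution of the problem) computes its eigenvalues:

```python
import numpy as np

# Principal part of u_tt + 4 u_tx + 4 u_xx, with the cross term split
# symmetrically: coefficient matrix in the variables (t, x).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

lam = np.linalg.eigvalsh(A)       # eigenvalues in increasing order
# One eigenvalue vanishes and the other is positive: the principal part
# is degenerate, i.e. the equation is of parabolic type.
assert np.isclose(lam[0], 0.0) and lam[1] > 0.0
```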
3. First-order PDEs: classical and
weak solutions
where Γ is relatively open¹ in ∂Ω. Hereafter, we assume that E, g, and Ω are sufficiently
smooth, unless otherwise specified.
First, we will look for a classical solution to (3.0.1), and next we will demonstrate the
idea of weak solutions restricted to scalar conservation laws. In both cases, we will use
the method of characteristics, which consists of writing (3.0.1) in the form of a system
of ODEs, called the “characteristic equations”. These characteristic equations describe
the solution u along some curves, called “characteristic curves”.
Motivated by physical phenomena, such as the motion of a cloud of particles, the
idea for solving (3.0.1) is as follows. For a given x ∈ Ω, there is a particle moving along
a trajectory, or a characteristic curve, passing through x; see Fig. 3.0.1. This trajectory
eventually connects x with a certain point y 0 ∈ ∂Ω. If y 0 ∈ Γ then u(y 0 ) = g(y 0 ) is
known and therefore the characteristic equations associated with y 0 will provide the
value of u(x) along the characteristic curve.
¹ So Γ = G ∩ ∂Ω, where G is a certain open set in R^N.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 25
A. Novruzi, A Short Introduction to Partial Differential Equations, CMS/CAIMS Books in Mathematics 11,
https://doi.org/10.1007/978-3-031-39524-6 3
We assume that (3.0.2) has one classical solution u ∈ C 0 (RN ×[0, ∞))∩C 1 (RN ×(0, ∞)).
For a fixed (x, t) ∈ RN × (0, ∞), we set z(s) = u(x + sb, t + s) and then we obtain
Therefore, if g ∈ C^1(R^N) and f ∈ C^0(R^N × [0, ∞)) then the classical solution u is given by
u(x, t) = g(x − tb) + ∫_0^t f(x + (s − t)b, s) ds. (3.0.3)
The following is also true: if g ∈ C 1 (RN ) and f, ∂xi f ∈ C 0 (RN × [0, ∞)) for all i, then u
given by (3.0.3) is a classical solution to (3.0.2).
Note that we have solved (3.0.2) by converting it to an ordinary differential equation
of the form z′(s) = f(x + sb, t + s). Such an equation describes the solution u along
the curve {(x + sb, t + s), s ∈ (−t, ∞)}, which is called a “characteristic curve”. This
method is a particular case of the so-called “method of characteristics”; see Section 3.1
for more details.
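The formula (3.0.3) is easy to sanity-check numerically. The sketch below works in dimension N = 1, takes (3.0.2) to be the transport problem u_t + b·∇u = f with u(·, 0) = g (as the characteristic z(s) = u(x + sb, t + s) above indicates), and uses illustrative choices of b, g, and f; the integral is evaluated by the trapezoid rule and the PDE is checked by central finite differences.

```python
import numpy as np

# Illustrative choices (not from the text): b, g, f below.
b = 0.7
g = lambda x: np.sin(x)
f = lambda x, t: np.cos(x) * np.exp(-t)

def u(x, t, n=4000):
    # Trapezoid rule for the integral term in (3.0.3).
    s = np.linspace(0.0, t, n + 1)
    v = f(x + (s - t) * b, s)
    return g(x - t * b) + (t / n) * (v.sum() - 0.5 * (v[0] + v[-1]))

# Check u_t + b u_x = f at one point by central differences.
x0, t0, h = 0.3, 1.2, 1e-3
ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
ux = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
residual = ut + b * ux - f(x0, t0)
print(abs(residual) < 1e-4)       # PDE holds up to discretization error
print(u(x0, 0.0) == g(x0))        # initial condition is exact
```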
In general, g ∉ C^1(R^N) and ∂_{x_i} f ∉ C^0(R^N × [0, ∞)), and so (3.0.3) does not provide
a C^1 “classical” solution to (3.0.2). However, u given by (3.0.3) is the only reasonable
candidate for a solution to (3.0.2). To give a meaning to the solution u in this case, we
introduce the so-called “weak solution to (3.0.2)” satisfying
∫_{R^N} g(x)ϕ(x, 0) dx + ∫_0^∞ ∫_{R^N} (uϕ_t + u(b·∇_x ϕ)) dx dt + ∫_0^∞ ∫_{R^N} f(x, t)ϕ(x, t) dx dt = 0, (3.0.4)
for all ϕ ∈ D(R^N × R), where b · ∇_x ϕ = Σ_{i=1}^N b_i ∂_{x_i} ϕ. Note that (3.0.4) is obtained by
multiplying (3.0.2a) by ϕ, integrating by parts, and then using (3.0.2b). It is well-defined
for all f ∈ L1loc (RN × (0, ∞)) and g ∈ L1loc (RN ).
Now, let us differentiate p(t) in (3.1.1), which after using (3.1.2a) gives
p_i′(t) = Σ_{j=1}^N ∂_{x_i x_j} u(y(t)) y_j′(t) = Σ_{j=1}^N ∂_{x_i x_j} u(y(t)) ∂_{p_j} E(y(t), u(y(t)), ∇u(y(t)))
= Σ_{j=1}^N ∂_{p_j} E(y, u, ∇u) ∂_{x_i x_j} u, on {y(t), t ∈ I}. (3.1.4)
Then (3.1.4) combined with (3.1.5) evaluated at x = y(t) (so, u(y(t)) = z(t) and
∇u(y(t)) = p(t)) gives
p_i′(t) = −∂_{x_i} E(y(t), z(t), p(t)) − p_i(t) ∂_z E(y(t), z(t), p(t)), t ∈ I, (3.1.6)
which completes the proof.
Equations (3.1.2) are called “characteristic equations” (associated with (3.0.1a)), and
the curve {y(t), t ∈ I} is called a “characteristic curve”. Proposition 3.1.1 gives a nat-
ural method for solving (3.0.1). Namely, it suggests that if (y, z, p) solves (3.1.2) then
u(x) := z(t), with x = y(t), is a good candidate for the solution to (3.0.1a).
Before we move to the analysis of the method of characteristics, let us see how it
works through a number of typical examples. Consider first an example of first-order
linear PDEs, which have the general form
E(x, u, ∇u) = a(x) · ∇u(x) + b(x)u + c(x) = 0, x ∈ Ω, (3.1.7)
with a, b, and c being smooth functions. So, here E(x, z, p) = a(x) · p + b(x)z + c(x) and
then (3.1.2) is equivalent to
y′ = a(y), (3.1.8a)
z′ = p · a(y) = −(c(y) + b(y)z), (3.1.8b)
p′ = −∇_x E(y, z, p) − p b(y), (3.1.8c)
because on the characteristic curve we have E(y, z, p) = 0, so p · a(y) = −(c(y) + b(y)z).
Here we do not need to solve for p as the characteristic equations are closed for y and z.
Example 3.1.2 Consider
2x1 x2 ∂x1 u + ∂x2 u = u in Ω = {(x1 , x2 ), x2 > 0},
(3.1.9)
u(x1 , 0) = g(x1 ) on Γ = {(x1 , x2 ), x2 = 0}.
This equation is of the form (3.1.7) with a = (2x1 x2 , 1), b = −1, c = 0, and so
E(x, z, p) = 2x1 x2 p1 + p2 − z. According to (3.1.8), y and z solve
y_1′ = 2y_1 y_2, y_2′ = 1, z′ = z, with y(0) = (y_1^0, 0) ∈ Γ, z(0) = g(y_1^0).
The initial conditions for y and z are chosen such that the characteristic curves start
on Γ, so y(0) ∈ Γ, and that z(0) = u(y(0)). We solve this system of ODEs and get
y_1(t) = y_1^0 e^{t^2}, y_2(t) = t, z(t) = g(y_1^0) e^t.
We note that y and z depend on t and y_1^0, so y = y(t, y_1^0), z = z(t, y_1^0).
Now we “construct” the solution u at x = (x1, x2) ∈ Ω by assigning to it the value of z(t, y_1^0) with y(t, y_1^0) = x. Solving the last equation gives y_1^0 = x1 e^{−x2^2}, t = x2. Hence
u(x1, x2) = u(y(t)) = z(t) = g(y_1^0) e^t = g(x1 e^{−x2^2}) e^{x2}.
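The closed-form solution of Example 3.1.2 can be checked numerically by finite differences; the datum g below is an illustrative choice.

```python
import numpy as np

g = lambda s: np.tanh(s)                         # an illustrative C^1 datum
u = lambda x1, x2: g(x1 * np.exp(-x2**2)) * np.exp(x2)

h = 1e-5
x1, x2 = 0.8, 1.3                                # a point in Omega = {x2 > 0}
u1 = (u(x1 + h, x2) - u(x1 - h, x2)) / (2 * h)   # approx du/dx1
u2 = (u(x1, x2 + h) - u(x1, x2 - h)) / (2 * h)   # approx du/dx2
print(abs(2 * x1 * x2 * u1 + u2 - u(x1, x2)) < 1e-6)  # PDE: 2 x1 x2 u_x1 + u_x2 = u
print(u(x1, 0.0) == g(x1))                             # u = g on Gamma = {x2 = 0}
```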
This PDE system is of the form (3.1.10) with a = (z, 1), b = −1, and so E(x, z, p) =
zp1 + p2 − 1. Applying (3.1.11) and noting that ∇x E = 0 and ∂z E = p1 , we obtain
y_1′ = z, y_2′ = 1, z′ = 1, p_1′ = −p_1^2, p_2′ = −p_1 p_2.
We note that the system is closed in (y, z), so we do not need to solve for p. The initial
conditions for y and z are
y_1(0) = y_1^0, y_2(0) = y_1^0, z(0) = (1/2) y_1^0.
Then we get
y_1(t) = (1/2)t^2 + (1/2)y_1^0 t + y_1^0, y_2(t) = t + y_1^0, z(t) = t + (1/2)y_1^0, y_1^0 ∈ R.
To find u(x) we first find the characteristic curve y(t, y_1^0) passing through x ∈ Ω, so y(t, y_1^0) = x, or equivalently
(1/2)t^2 + (1/2)y_1^0 t + y_1^0 = x1,  t + y_1^0 = x2,
which implies
t = 2(x2 − x1)/(2 − x2),  y_1^0 = (2x1 − x2^2)/(2 − x2).
This implies that
u(x1, x2) = z(t, y_1^0) = t + (1/2)y_1^0 = (4x2 − x2^2 − 2x1)/(2(2 − x2)).
It is easy to check that u is a classical solution to (3.1.12).
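Indeed, here is a finite-difference check. The PDE behind (3.1.12), not reproduced in this excerpt, is taken to be u ∂_1 u + ∂_2 u = 1 with the datum u = x1/2 on the line x1 = x2, as the characteristic data above suggest; both are assumptions inferred from the computations.

```python
# Candidate solution from the example above.
u = lambda x1, x2: (4*x2 - x2**2 - 2*x1) / (2*(2 - x2))

h = 1e-5
x1, x2 = 0.4, 1.1                              # a point with x2 != 2
u1 = (u(x1 + h, x2) - u(x1 - h, x2)) / (2 * h)
u2 = (u(x1, x2 + h) - u(x1, x2 - h)) / (2 * h)
print(abs(u(x1, x2) * u1 + u2 - 1.0) < 1e-8)   # quasi-linear PDE: u u_x1 + u_x2 = 1
s = 0.7
print(abs(u(s, s) - s / 2) < 1e-12)            # datum u = x1/2 on the line x1 = x2
```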
Now we consider an example of a fully nonlinear first-order PDE, that is, a first-order
PDE that is neither linear nor quasi-linear. In this case we have to integrate
the full system of differential equations (3.1.2), which in general is not possible.
Example 3.1.4 Consider
(∂x1 u)2 + (∂x2 u)2 = 2 in Ω = {(x1 , x2 ), x1 > 0},
(3.1.13)
u(0, x2 ) = x2 on Γ = {(0, x2 ), x2 ∈ R}.
Here y_2^0 and p^0 = (p_1^0, p_2^0) are arbitrary constants. Using (3.1.13), which we assume holds in Ω, we can eliminate two of these three constants, for example p_1^0 and p_2^0, so that (y, z, p) depends only on y_2^0. Indeed, (∂_{x1} u)^2 + (∂_{x2} u)^2 = 2 in Ω and ∂_{x2}(u − x2) = 0 on Γ give
(p_1^0)^2 + (p_2^0)^2 = 2,  p_2^0 = 1,   so   p_1^0 = ±1,  p_2^0 = 1.
Now, let x = (x1, x2) ∈ Ω. We can find (t, y_2^0) such that y(t, y_2^0) = x. It follows that t = ±(1/2)x1, y_2^0 = x2 ∓ x1. Therefore, two classical solutions to (3.1.13) are given by
u(x1, x2) = u(y(t)) = z(t) = 4(±(1/2)x1) + (x2 ∓ x1) = x2 ± x1.
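Both candidates can be verified directly; a minimal check of the eikonal equation and the boundary datum:

```python
# Minimal check that both candidates solve the eikonal problem (3.1.13).
h = 1e-5
x1, x2 = 0.5, 0.8                           # a point in Omega = {x1 > 0}
for sign in (+1.0, -1.0):
    u = lambda a, b, s=sign: b + s * a      # u = x2 +- x1
    u1 = (u(x1 + h, x2) - u(x1 - h, x2)) / (2 * h)
    u2 = (u(x1, x2 + h) - u(x1, x2 - h)) / (2 * h)
    assert abs(u1**2 + u2**2 - 2.0) < 1e-8  # (u_x1)^2 + (u_x2)^2 = 2
    assert u(0.0, x2) == x2                 # u(0, x2) = x2 on Gamma
print("both u = x2 + x1 and u = x2 - x1 solve (3.1.13)")
```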
3.2. Classical local solutions to first-order PDEs
E(y^0, z_0, p^0) = 0, y^0 ∈ Γ. (3.2.8)
Note that this condition is equivalent to ∂_{p_N} E(y^0, z_0, p^0) > 0 because we can consider
−E instead of E. The condition (3.2.9) is called a “transversality condition”. An initial
G : (R^{N−1} × {0}) × R → R,
(y^0, p_N^0) ↦ E(y^0, z_0(y^0), (p_1^0(y^0), . . . , p_{N−1}^0(y^0), p_N^0)).
From Theorem 1.3.12 there exists a unique p0N ∈ C k−1 (Γ0 ), with Γ0 = B(x0 , ρ0 ) ∩ Γ
for a certain ρ0 > 0, and G(y 0 , p0N (y 0 )) = 0 on Γ0 . As ∂pN E(x0 , z0 (x0 ), p0 (x0 )) > 0, by
continuity it follows that (3.2.9) holds in Γ0 provided ρ0 is small, which proves the lemma.
Theorem 3.2.3 Assume that E and g are C k functions, k ≥ 2, and (x0 , z0 (x0 ), p0 (x0 ))
is non-characteristic for a certain x0 ∈ Γ. Then there exist ρ0 > 0 and r0 > 0 such that
the following hold.
(i) There exists y 0 → p0 (y 0 ) ∈ C k−1 (Γ0 ; RN ) such that (y 0 , z0 (y 0 ), p0 (y 0 )) is non-
characteristic for all y 0 ∈ Γ0 := Γ ∩ B(x0 , ρ0 ).
(ii) For every y 0 ∈ Γ0 , the initial value problem (3.2.10) has a unique C k ([−r0 , r0 ]; RN ×
R × RN ) solution. Furthermore, if T is defined by
T : [0, r0 ] × Γ0 =: U0 → Ω0 := T (U0 ) ⊂ Ω,
(t, y 0 ) → T (t, y 0 ) := y(t, y 0 ),
then T is C k−1 , invertible, and its inverse is C k−1 .
(iii) The function u : Ω0 → R, u(x) = z ◦ T^{−1}(x), i.e. u(y(t, y^0)) = z(t, y^0), is a C^k(Ω0) solution to
E(x, u, ∇u) = 0 in Ω0 , (3.2.11a)
u = g on Γ0 . (3.2.11b)
(iv) If ∂pN E(x0 , z0 (x0 ), (p01 (x0 ), . . . , p0N −1 (x0 ), q)) does not change sign (remains posi-
tive or negative) for all q ∈ R, then (3.2.11) has a unique C k solution.
Proof. See Theorem 9.2.1 in Annex.
Proof. The proof is based on Lemma 3.2.1 and Theorem 3.2.3. As Γ is C k+1 at x0 , there
exists θ ∈ C k+1 (Q; G) as in Lemma 3.2.1. We set x̃0 = θ−1 (x0 ), and consider the problem
(3.2.2) with Ẽ, g̃, ũ as in Lemma 3.2.1, which establishes a correspondence between the
local solutions to (3.0.1) with non-flat boundary and (3.2.2) with flat boundary.
Claim: Ẽ, g̃ and (x̃0, z̃0(x̃0), p̃0(x̃0)), where z̃0(x̃0) = z0(x0), p̃0(x̃0) = ᵗ[∇θ(x̃0)] · p0(x0),
satisfy the conditions of Theorem 3.2.3 at x̃0 . Assuming the claim holds implies that (i)–
(iii) of Theorem 3.2.3 hold, and the existence of a C k solution u follows from Lemma 3.2.1.
Now we prove the claim. From (3.2.3) and (3.2.4), clearly Ẽ and g̃ are C k . It remains
to show that (x̃0 , z̃0 (x̃0 ), p̃0 (x̃0 )) is non-characteristic. First we note that the vectors
[∇θ(x̃0 )]·,j , j = 1, . . . , N − 1, are tangent to Γ at x0 and the vector [∇θ(x̃0 )]·,N is
oriented inside Ω. Combined with [∇θ(x̃0 )]−1 · [∇θ(x̃0 )] = Id,3 it implies that
As every vector can be decomposed into its tangential and normal components, by using (3.2.15), for i = 1, . . . , N − 1 we get
p̃_i^0(x̃0) = (ᵗ[∇θ(x̃0)] · p^0(x0))_i = (ᵗ[∇θ(x̃0)] · (p_τ^0(x0) + (p^0(x0) · ν(x0))ν(x0)))_i
= (ᵗ[∇θ(x̃0)] · p_τ^0(x0))_i = (ᵗ[∇θ(x̃0)] · ∇_τ g(x0))_i
= (ᵗ[∇θ(x̃0)] · ∇g(x0))_i − (ᵗ[∇θ(x̃0)] · ν(x0))_i (∇g(x0) · ν(x0))
= (ᵗ[∇θ(x̃0)] · ∇g(x0))_i = (∇(g ◦ θ)(x̃0))_i
= ∂_i g̃(x̃0). (3.2.17)
Ẽ(x̃0, z̃0(x̃0), p̃0(x̃0)) = E(θ^{−1}(x̃0), z̃0(x̃0), ᵗ[∇θ(x̃0)]^{−1} · p̃0(x̃0))
= E(x0, z0(x0), p0(x0)) = 0. (3.2.18)
ᵗ[∇θ(x̃0)]^{−1} · (p̃_1^0(x̃0), . . . , p̃_{N−1}^0(x̃0), p̃_N^0(x̃0)) + ᵗ[∇θ(x̃0)]^{−1} · (0, . . . , 0, p̃_N − p̃_N^0(x̃0))
does not change sign. Theorem 3.2.3 implies the uniqueness of the solution ũ, and Lemma
3.2.1 proves the uniqueness of the solution u.
The following example shows that, in general, if the condition (3.2.14) fails then the
problem (3.0.1) does not have a unique solution.
Example 3.2.5 Consider problem (3.1.13), where E(x, z, p) = p_1^2 + p_2^2 − 2, g(x1, x2) = x2.
Hence for x0 = (0, x_2^0) ∈ Γ we have ν(x0) = (1, 0), z0(x0) = x_2^0, p_τ^0(x0) = ∇g(x0) − (∇g(x0) · ν(x0))ν(x0) = (0, 1), and
∇_p E(x0, z0(x0), p_τ^0(x0) + qν(x0)) · ν(x0) = ∂_{p1} E(x0, x_2^0, (q, 1)) = 2q.
Hence, condition (3.2.14) is not satisfied. On the other hand, we have seen in Example
3.1.4 that problem (3.1.13) has at least two solutions.
The following example shows that if the transversality condition in (3.2.12) (or
(3.2.9)) is not satisfied then, depending on g, (3.0.1) may not have a solution, or may
have infinitely many solutions.
Example 3.2.6 Consider
Here E = p1 + p2 − 1 and g(x1 ) is a smooth function. Note that for whatever choice of
p0 , the transversality condition is not satisfied, because
ν = (−1, 1)/√2 and so ∇_p E(y^0, z_0, p^0) · ν = 0.
So Theorem 3.2.3 does not apply. The system (3.2.10) associated with (3.2.20) is
y_1′(t, y_1^0) = 1, y_2′(t, y_1^0) = 1, z′(t, y_1^0) = 1,
y_1(0, y_1^0) = y_1^0, y_2(0, y_1^0) = y_1^0, z(0, y_1^0) = g(y_1^0),
The characteristic curves are straight lines parallel to Γ, so they do not intersect Γ unless
y_1^0 = 0, in which case the characteristic curve coincides with Γ. Therefore, we cannot
find a solution to (3.2.20) by starting on Γ and following the characteristic curves.
i) Case of no solution. Note that if a classical solution u exists and we set z(s) = u(x1 + s, x2 + s), then z′(s) = 1, and therefore by integrating z′ over (−x1, 0) we find that u must satisfy u(x1, x2) = u(0, x2 − x1) + x1. For x2 = x1 we get u(x1, x1) = u(0, 0) + x1 = g(0) + x1. Hence, if g(x1) ≠ g(0) + x1 for some x1, then (3.2.20) has no solution.
ii) Case of infinitely many solutions. We assume that g(x1 ) = C+x1 for all x1 and
a certain C ∈ R. We can find infinitely many solutions to (3.2.20) in the following way.
We consider PDE (3.2.20a) but with a different boundary condition instead of (3.2.20b).
Namely, we choose a boundary Σ ⊂ R2 intersecting Γ at a certain x0 , such that the
transversality condition on Σ is satisfied. Next we look for a solution u to (3.2.20a)
equipped with a boundary condition u = h on Σ, where h is such that h = g on Σ ∩ Γ,
i.e. h(x0 ) = g(x0 ).
For example, we can take Σ = {(y_1^0, 0), y_1^0 ∈ R} and h smooth such that h(0) = C,
because here Σ ∩ Γ = {(0, 0)}. So we consider
Problem (3.2.21) satisfies the conditions of Theorem 3.2.3. The solution to characteristic
equations of (3.2.21) is given by
For arbitrary x = (x1, x2) ∈ Ω, we look for (t, y_1^0) such that x = y(t, y_1^0). It follows
t = x2, y_1^0 = x1 − x2, and then u(x1, x2) = x2 + h(x1 − x2) is a classical solution to
(3.2.21) and to (3.2.20), provided h is C^1 and h(0) = C. So for g(x1) = C + x1, (3.2.20)
has infinitely many solutions, as we can take for example h(x1) = C + x_1^n, n ∈ N.
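This non-uniqueness is easy to confirm numerically; the constant C and the functions h below are illustrative choices.

```python
import numpy as np

C = 2.0
g = lambda s: C + s                              # the compatible datum g(x1) = C + x1
d = 1e-5
x1, x2 = 1.3, 0.4
for h_fun in (lambda s: C + s**2, lambda s: C + np.sin(s), lambda s: C + s**5):
    u = lambda a, b, h=h_fun: b + h(a - b)       # u(x1, x2) = x2 + h(x1 - x2)
    u1 = (u(x1 + d, x2) - u(x1 - d, x2)) / (2 * d)
    u2 = (u(x1, x2 + d) - u(x1, x2 - d)) / (2 * d)
    assert abs(u1 + u2 - 1.0) < 1e-7             # PDE: u_x1 + u_x2 = 1
    s = -0.6
    assert abs(u(s, s) - g(s)) < 1e-12           # u = g on Gamma = {x1 = x2}
print("three distinct classical solutions to (3.2.20)")
```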
Theorems 3.2.3 and 3.2.4 guarantee only local classical solutions, which are obtained
in the situation where the characteristic curves do not intersect. The following example
shows that depending on the behavior of characteristic curves, we may have a local or
global classical solution, or no classical solution at all.
Example 3.2.7 Consider
∂_2 u + u ∂_1 u = ∂_2 u + ∂_1 ((1/2)u^2) = 0 in Ω = {(x1, x2), x2 > 0}, (3.2.22a)
u(x1, 0) = g(x1) on Γ = ∂Ω, (3.2.22b)
referred to as “Burgers' equation”, which describes the dynamics of rarefied gases (viscous effects are neglected). The system (3.2.10) associated with (3.2.22) has the form
y_1′(t, y_1^0) = z, y_2′(t, y_1^0) = 1, z′(t, y_1^0) = 0,
y_1(0, y_1^0) = y_1^0, y_2(0, y_1^0) = 0, z(0, y_1^0) = g(y_1^0),
which implies
y_1(t, y_1^0) = g(y_1^0) t + y_1^0, y_2(t, y_1^0) = t, z(t, y_1^0) = g(y_1^0). (3.2.23)
So the characteristic curves {y(t, y_1^0) := (y_1(t, y_1^0), y_2(t, y_1^0)), t ∈ R} are straight lines
starting at (y_1^0, 0) with slope g(y_1^0), given equivalently by x1 = x2 g(y_1^0) + y_1^0.
Given x = (x1, x2), x2 > 0, we look for (t, y_1^0) such that y(t, y_1^0) = x, which gives
t = x2, y_1^0 = x1 − x2 g(y_1^0). (3.2.24)
Figure 3.2.1: Rarefaction waves. Top: an increasing continuous (left) and discontinuous
(right) initial condition. Bottom: the corresponding characteristic curves—they fill all
the domain {x2 > 0} in the case of continuous initial condition (left), while there is a
region where the characteristic curves do not cross (right).
Rarefaction waves. Assume g is an increasing function (see Fig. 3.2.1, top). In this
case the characteristic curves do not intersect. If g is continuous, see Fig. 3.2.1, top left,
(3.2.22) has a solution in the entire domain, because the equation y10 = x1 − x2 g(y10 ) has
a unique solution y10 for every (x1 , x2 ). If g is discontinuous, see Fig. 3.2.1, top right,
there is a region where there are no characteristic curves. In this region the method
of characteristics does not provide a solution. Such a solution is called a “rarefaction
wave”. Clearly, if g is only continuous then u is only continuous. Furthermore, in the
case g is discontinuous the method of characteristics does not provide a solution u defined
globally; see Fig. 3.2.1, bottom, right.
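For an increasing continuous datum, the implicit equation y_1^0 = x1 − x2 g(y_1^0) can be solved numerically (here by bisection, with an illustrative g) and the resulting u checked against Burgers' equation:

```python
import numpy as np

g = lambda s: np.tanh(s)                 # an increasing C^1 datum (illustrative)

def u(x1, x2):
    # Solve y0 + x2*g(y0) = x1 by bisection; the map is monotone since g' >= 0.
    lo, hi = x1 - x2, x1 + x2            # a valid bracket because |g| < 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid + x2 * g(mid) < x1:
            lo = mid
        else:
            hi = mid
    return g(0.5 * (lo + hi))

h = 1e-4
x1, x2 = 0.3, 1.5
u1 = (u(x1 + h, x2) - u(x1 - h, x2)) / (2 * h)
u2 = (u(x1, x2 + h) - u(x1, x2 - h)) / (2 * h)
print(abs(u2 + u(x1, x2) * u1) < 1e-6)   # Burgers: u_x2 + u u_x1 = 0
print(u(x1, 0.0) == g(x1))               # initial condition at x2 = 0
```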
Figure 3.2.2: Shock waves. Top: a decreasing continuous (left) and discontinuous (right)
initial condition. Bottom: the corresponding characteristic curves, which in both cases
intersect.
Shock waves. Assume g is a decreasing function (see Fig. 3.2.2, top). The only difference with the case when g is increasing is that for every x2 > 0, the equation y_1^0 = x1 − x2 g(y_1^0) in (3.2.24), for certain (x1, x2), has two solutions y_{1,l}^0, y_{1,r}^0, with y_{1,l}^0 < y_{1,r}^0. This implies that the two characteristic curves starting at (y_{1,l}^0, 0) and (y_{1,r}^0, 0) intersect. As z is constant along characteristic curves, z will attain the values g(y_{1,l}^0) and g(y_{1,r}^0) at the intersecting point (x1, x2). These values are in general different and therefore u cannot be continuous at the intersection point. Such solutions are called “shock waves”. They are interesting, even though they are not continuous.
For example, in the case g = 1_{(−∞,c)}, for every (x1, x2) with 0 < x1 − c < x2 we have y_{1,l}^0 = x1 − x2, y_{1,r}^0 = x1. So u cannot be continuous at such (x1, x2) because z = 1 on the characteristic curve emanating from y_{1,l}^0 and z = 0 on the characteristic curve emanating from y_{1,r}^0. In particular, there are no classical solutions near (c, 0). However, in the region {(x1, x2), x1 < c, x2 > max{0, x1 − c}}, resp. {(x1, x2), x1 > c, 0 < x2 < x1 − c}, the problem has a unique classical solution u = 1, resp. u = 0.
3.3. Conservation laws and weak solutions
where u = u(x1 , x2 ) is the unknown function, and g and f are given functions. The
equation (3.3.1a) is a (scalar) conservation law in one space dimension and (3.3.1b) is
the initial condition. Furthermore, u is the conserved quantity and f is its flux.
The name “conservation law” for (3.3.1) follows from the following argument. Set x = x1 (the space variable) and t = x2 (the time), so (3.3.1) becomes ∂_t u + ∂_x (f(u)) = 0, u(·, 0) = g. Integrating in x over an interval [a, b] gives
(d/dt) ∫_a^b u(x, t) dx = f(u(a, t)) − f(u(b, t)).
Hence, the total amount of u inside any given interval [a, b] remains constant in time if, for example, the fluxes f(u(a, t)) and f(u(b, t)) at the endpoints are zero, or more generally if f(u(a, t)) − f(u(b, t)) = 0.
Remark 3.3.1 The wave equation in one dimension
can be written as follows. Let u = (u1 , u2 ) := (∂1 w, ∂2 w). Then u solves the first-order
vector PDEs
The equation (3.3.1) with u = (u1 , . . . , uN ) and f (u) = (f1 (u), . . . , fN (u)) (the equation
(3.3.3) is an example with N = 2) is called “vector conservation laws in one space
dimension”. In order to avoid technicalities we will not consider vector conservation
laws. The reader can read more on this topic in [16, 30–32].
Note that equation (3.3.1) is of the form (3.0.1a) with E(x, z, p) = p_2 + f′(z) p_1.
Regarding the classical solution to (3.3.1), we have the following result, which is a
straightforward corollary of Theorem 3.2.3.
Proposition 3.3.2 Let x0 = (x_1^0, 0) ∈ R × {0} and assume f ∈ C^{k+1}(R) and g is C^k near x_1^0 in R, k ≥ 2. Then
(x0, z0(x0), p0(x0)) = ((x_1^0, 0), g(x_1^0), (g′(x_1^0), −f′(g(x_1^0)) g′(x_1^0)))
is a non-characteristic initial condition and ∂_{p2} E(x0, z0(x0), p0(x0)) = 1, and therefore there exists a unique C^k solution to (3.3.1) near x0.
Note that the solution to the characteristic equations of (3.3.1) is given by
y_1(t, y_1^0) = f′(g(y_1^0)) t + y_1^0, y_2(t, y_1^0) = t, z(t, y_1^0) = g(y_1^0), y_1^0 ∈ R. (3.3.4)
We have seen this solution in Example 3.2.7, which dealt with a conservation law with f(u) = (1/2)u^2. As demonstrated in this example, if g is a decreasing function then the solutions along the characteristic curves create a discontinuity at the intersection of characteristic curves. The candidate solution obtained by the method of characteristics is discontinuous. The following definition extends the concept of the classical solution to a conservation law in order to allow a discontinuous solution.
Definition 3.3.3 Assume f ∈ L^1_loc(R) with |f(z)| ≤ C|z| for all z ∈ R and some C ≥ 0, and g ∈ L^1_loc(R). A function u ∈ L^1_loc(R × R+) is said to be a “weak solution to (3.3.1)” if for
all ϕ ∈ D(R2 ) we have
(u(x)∂2 ϕ(x) + f (u)∂1 ϕ(x)) dx1 dx2 + g(x1 )ϕ(x1 , 0)dx1 = 0. (3.3.5)
R+ R R
This definition is equivalent to saying that (3.3.1) holds in the sense of distributions. It is obtained by multiplying (3.3.1a) by the test function ϕ, integrating in R × R+, and finally using (3.3.1b). Note that the condition |f(z)| ≤ C|z| in Definition 3.3.3 ensures that f(u) ∈ L^1_loc(R × R+), which implies that (3.3.5) is well-defined.
In general, a weak solution to (3.3.1) is not a (classical) solution—we will see this
in the following examples. The following proposition shows that if the weak solution is
smooth then the concepts of weak and classical solutions coincide. The proof is elemen-
tary and can be found in many textbooks; see for example [16, 30, 43]. For the reader’s
convenience, we have included the proof in Theorem 9.2.2 in Annex.
Then
Before we consider some examples, let us note that in physical situations x2 is the time variable and x1 is the space variable. So physically, χ′ represents the speed of propagation of the discontinuity curve (the shock speed). In the particular case of Burgers' equation (3.2.22), it gives
χ′ = [f(u)]/[u] = (u_l^2 − u_r^2)/(2(u_l − u_r)) = (u_l + u_r)/2.
Note that the characteristic curves on the left of the discontinuity curve have equations y1 = y2 g(y_{1,l}^0) + y_{1,l}^0 = y2 u_l + y_{1,l}^0; see (3.2.23) and Example 3.2.7. Hence, dy1/dy2 = u_l is the propagation speed of the left characteristic curve (the characteristic speed), and similarly u_r is the propagation speed of the right characteristic curve at a discontinuity point.
Therefore, the equation above for χ′ states that the shock speed of Burgers' equation is the average of the characteristic speeds.
Burgers' equation (3.2.22) is a prototype of nonlinear first-order PDEs that, depending on the initial condition (3.2.22b), exhibits a number of features such as discontinuous weak solutions and non-uniqueness, as the following examples show.
Example 3.3.6 We look for a weak solution to Burgers’ equation (3.2.22) with
Following Example 3.2.7, if a weak solution has a discontinuity curve γ, then from
Rankine-Hugoniot Theorem 3.3.5 we get γ = {x1 = (1/2)x2, x2 > 0}, because [u] = −1,
[f(u)] = −1/2, and so χ′ = [f(u)]/[u] = 1/2. It implies that
u(x1, x2) = 1 if x1 < (1/2)x2,  u(x1, x2) = 0 if x1 ≥ (1/2)x2, (3.3.8)
is a candidate for a weak solution to (3.2.22). Let us show that u is a weak solution. For ϕ ∈ D(R^2), let R > 0 such that supp(ϕ) ⊂ B_R. Then we set Ω_l = B_R ∩ {(x1, x2), x1 < (1/2)x2, x2 > 0}, Ω_r = B_R ∩ {(x1, x2), x1 > (1/2)x2, x2 > 0}, and let ν^l, resp. ν^r, be the exterior unit normal vector on ∂Ω_l, resp. ∂Ω_r. From the equation of γ, we find that ν^l = (2, −1)/√5, ν^r = −ν^l on γ. Then using the Gauss theorem⁴ we get
∫_{R+} ∫_R (u ∂_2 ϕ + f(u) ∂_1 ϕ) dx1 dx2 + ∫_{x2=0} g(x1) ϕ(x1, 0) dx1
= ∫_{Ω_l} (u_l ∂_2 ϕ + f(u_l) ∂_1 ϕ) dx + ∫_{∂Ω_l ∩ {x2=0}} g(x1) ϕ(x1, 0) dx1
+ ∫_{Ω_r} (u_r ∂_2 ϕ + f(u_r) ∂_1 ϕ) dx + ∫_{∂Ω_r ∩ {x2=0}} g(x1) ϕ(x1, 0) dx1
= ∫_{∂Ω_l ∩ {x2=0}} (u_l ν_2^l + f(u_l) ν_1^l) ϕ dx1 + ∫_{∂Ω_l ∩ {x2=0}} g(x1) ϕ(x1, 0) dx1
+ ∫_{∂Ω_r ∩ {x2=0}} (u_r ν_2^r + f(u_r) ν_1^r) ϕ dx1 + ∫_{∂Ω_r ∩ {x2=0}} g(x1) ϕ(x1, 0) dx1
− ∫_{Ω_l ∪ Ω_r} (∂_2 u + ∂_1 (f(u))) ϕ dx − ∫_γ ([u] ν_2^l + [f(u)] ν_1^l) ϕ ds
= 0.
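The shock speed predicted by Rankine-Hugoniot Theorem 3.3.5 can also be observed numerically. The sketch below evolves Burgers' equation with a first-order conservative upwind scheme (all grid parameters are illustrative; upwinding in this simple form is valid here because the data are nonnegative):

```python
import numpy as np

# Conservative upwind scheme for u_t + (u^2/2)_x = 0 (with t = x2, x = x1).
nx = 800
dx = 6.0 / nx
x = -2.0 + (np.arange(nx) + 0.5) * dx      # cell centers covering [-2, 4]
u = (x < 0.0).astype(float)                # g = 1 on (-inf, 0), 0 elsewhere
F = lambda v: 0.5 * v * v                  # Burgers flux
t, T, dt = 0.0, 2.0, 0.4 * dx              # CFL: dt * max|u| / dx = 0.4 < 1
while t < T:
    flux = F(u)
    u[1:] = u[1:] - (dt / dx) * (flux[1:] - flux[:-1])   # left cell stays at inflow value 1
    t += dt
# The discontinuity should have traveled with speed 1/2, i.e. be near x = T/2 = 1.
shock = x[np.argmin(np.abs(u - 0.5))]
print(abs(shock - 1.0) < 0.1)
```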
In general, conservation law equations (3.3.1) do not have a unique solution, as the
following example shows.
⁴ If Ω ⊂ R^N is open and Lipschitz, ν = (ν_1, . . . , ν_N) is the exterior unit normal vector on ∂Ω and f, g ∈ C^1(Ω), then ∫_Ω ∂_i f(x) g(x) dx = ∫_{∂Ω} f(x) g(x) ν_i ds − ∫_Ω f(x) ∂_i g(x) dx.
Example 3.3.9 Consider again Burgers' equation (3.2.22). If g is given by (3.3.7), see Fig. 3.2.2, right, then the solution given by (3.3.8) is an entropy solution because it satisfies the Lax entropy condition on the discontinuity curve {x1 = x2/2}:
f′(u_l) = 1 > χ′ = 1/2 > 0 = f′(u_r).
If g is given by (3.3.9), see Fig. 3.3.2, there are infinitely many solutions. The equation (3.3.11) gives a continuous solution. The discontinuous solution is not an entropy solution, because
χ′ = [f(u)]/[u] = 1/2, and f′(u_l) = 0 ≯ χ′ ≯ 1 = f′(u_r).
Problems
Problem 3.1 Let b ∈ RN , c ∈ R and f , resp. h, be a smooth function in RN × (0, ∞),
resp. on RN , u = u(x, t), and consider
ut + b · ∇u + cu = f in RN × (0, ∞),
u = h on RN × {0}.
45
3.3. Conservation laws and weak solutions Chapter 3
Find a classical solution u by considering (d/ds) u(x + sb, t + s).
Problem 3.2 Find a non-trivial classical solution u = u(x1 , x2 ) to the following PDEs,
by using the method of characteristics (here g is C 1 ).
Discuss the largest domain where u is defined and the uniqueness of the solution.
x2 ∂1 u − x1 ∂2 u = 0 in R × (0, ∞).
Verify whether the method of characteristics provides a classical solution to this PDE
when equipped with one of the following boundary conditions:
Problem 3.6 Consider the problem (3.3.1) with f and g as below. For each of them
show that there exists an entropy solution.5
Problem 3.7 Consider the problem (3.3.1) with f ∈ C^1(R) non-decreasing and f(0) = 0, and assume that u is a piecewise smooth weak solution with a discontinuity along the curve γ = {(χ(x2), x2), x2 ≥ 0}. Show that if f is convex, resp. concave, then shocks travel to the right, resp. to the left, i.e. χ is an increasing, resp. decreasing, function.
⁵ Hint: after solving the characteristic equations, you may “guess” the discontinuity curve γ by solving the differential equation χ′ = [f(u)]/[u]; next you find a solution candidate by using the method of characteristics and show that it is indeed an entropy solution.
4. Second-order linear elliptic PDEs:
maximum principle and classical
solutions
−Δu = 0 in Ω, (4.0.1a)
u = g on ∂Ω, (4.0.1b)
called “Dirichlet problem for the Laplacian”, as a prototype of second-order linear elliptic
PDEs. A classical solution to the problem (4.0.1) is any function u ∈ C 0 (Ω) ∩ C 2 (Ω)
satisfying (4.0.1) pointwise.1 We will prove the existence and uniqueness of a classical
solution to this problem by using Perron’s method. This method looks for the solution
at the intersection of subsolutions and supersolutions to (4.0.1), and strongly uses the
maximum principle, which roughly speaking states that the solution to a certain PDE in
a domain Ω attains its maximum value on the boundary ∂Ω. Note that this method applies
to more general second-order linear PDEs; see, for example, [20].
There are other methods which provide classical solutions, such as Schauder’s theory,
which in fact provides C m,α (Ω) solutions; see, for example, [20]. We note that, in general,
(4.0.1) does not have a classical solution. This is a reason why the weak/variational
solutions are introduced; see Chapter 7.
V′′ − λV = 0, W′′ + λW = 0. (4.1.3)
The differential equations (4.1.3) are of second order, linear, and homogeneous, so their
solution spaces are of dimension two and their general solutions are
V (x) = Ax + B, W (y) = Cy + D, if λ = 0, (4.1.4)
V(x) = A e^{x√λ} + B e^{−x√λ}, W(y) = C e^{y√−λ} + D e^{−y√−λ}, if λ ≠ 0, (4.1.5)
with A, B, C, D being arbitrary constants. Here, √λ and √−λ can be complex numbers, and in this case A, B, C, and D must be such that the functions V and W are real. It follows that
u(x, y) = (Ax + B)(Cy + D), λ = 0, (4.1.6)
u(x, y) = (A e^{x√λ} + B e^{−x√λ})(C e^{y√−λ} + D e^{−y√−λ}), λ ≠ 0, (4.1.7)
is always a solution to (4.1.1a). The constants A, B, C, D are chosen such that u solves
the boundary conditions (4.1.1b), (4.1.1c). Condition (4.1.1b) is written as
u(0, y) = u(a, y) = u(x, 0) = 0, ∀x ∈ (0, a), ∀y ∈ (0, b) ⇐⇒ V(0) = V(a) = W(0) = 0.
In the case λ = 0, by using (4.1.6) we get
B = Aa + B = D = 0,
solves (4.1.1a) and (4.1.1b). Now, we address (4.1.1c). Replacing (4.1.8) in (4.1.1c)
implies that the coefficients γk must satisfy
u(x, b) = Σ_{k∈N} γ_k sin(kπx/a) sinh(kπb/a) = g(x), x ∈ (0, a), (4.1.9)
which is the sine Fourier expansion of g in [0, a]. Note that we have the following for-
mulas2
∫_0^a sin(kπx/a) sin(lπx/a) dx = δ_{kl} a/2, k, l ∈ N. (4.1.10)
Then multiplying (4.1.9) by sin(lπx/a) and integrating in (0, a) gives
γ_k = (2/a) (1/sinh(kπb/a)) ∫_0^a sin(kπx/a) g(x) dx, k ∈ N.
Therefore, the solution u is given by
u(x, y) = (2/a) Σ_{k∈N} (sin(kπx/a) sinh(kπy/a) / sinh(kπb/a)) ∫_0^a sin(kπs/a) g(s) ds. (4.1.11)
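A numerical sanity check of (4.1.11): summing a truncated series, with the coefficients computed by quadrature, should reproduce the boundary values and give a numerically harmonic function (the datum g below is an illustrative choice).

```python
import numpy as np

a, b = 1.0, 1.0
g = lambda x: x * (a - x)                 # illustrative datum, zero at the corners

def u(x, y, K=20, n=4000):
    # Truncated series (4.1.11); gamma_k computed by the trapezoid rule.
    s = np.linspace(0.0, a, n + 1)
    w = np.full(n + 1, a / n); w[0] = w[-1] = a / (2 * n)   # trapezoid weights
    val = 0.0
    for k in range(1, K + 1):
        gk = (2 / a) * np.sum(w * np.sin(k*np.pi*s/a) * g(s)) / np.sinh(k*np.pi*b/a)
        val += gk * np.sin(k*np.pi*x/a) * np.sinh(k*np.pi*y/a)
    return val

print(abs(u(0.3, b) - g(0.3)) < 1e-3)     # series matches g on the side y = b
print(abs(u(0.0, 0.5)) < 1e-15)           # u vanishes on the side x = 0
h = 1e-3                                   # five-point check that Delta u ~ 0 inside
lap = (u(0.4+h, 0.5) + u(0.4-h, 0.5) + u(0.4, 0.5+h) + u(0.4, 0.5-h) - 4*u(0.4, 0.5)) / h**2
print(abs(lap) < 1e-4)
```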
Finally, we note that we can use MSV for solving problems like
−Δu = 0 in Ω = (0, a) × (0, b), u = g on ∂Ω. (4.1.12)
Indeed, if Σi , i = 1, 2, 3, 4 are the sides of ∂Ω, then one can look for u = u1 +u2 +u3 +u4 ,
−Δui = 0 in Ω = (0, a) × (0, b), ui = g1Σi on ∂Ω, (4.1.13)
where 1Σi is the characteristic function of Σi . Each ui can then be solved using MSV.
² Similarly, we have ∫_0^a cos(kπx/a) cos(lπx/a) dx = δ_{kl} a/2 and ∫_0^a sin(kπx/a) cos(lπx/a) dx = 0, k, l ∈ N.
Remark 4.1.1 Note that (4.1.11) is not found rigorously, and therefore we cannot say
that the series (4.1.11) converges, nor that u is a classical solution to (4.1.1). To empha-
size this, sometimes we will write ∼ instead of = in (4.1.11).
How can we verify whether the series in (4.1.11) provides a classical solution? The
answer relies on the convergence of the Fourier series and the maximum principle.
Indeed, if g is smooth then the sine Fourier series of g converges uniformly, which
implies that the series (4.1.11) converges in C^0(∂Ω). Assuming this convergence holds, in combination with the maximum principle it implies the convergence of the series (4.1.11) in
C 0 (Ω), which in turn implies u ∈ C 0 (Ω) and u = g on ∂Ω. Furthermore, by using the
mean value theorem one proves that u ∈ C 2 (Ω) and Δu = 0 in Ω, which shows that u
is a classical solution to (4.1.1). See Corollary 4.3.5 and examples 4.3.5, 4.3.6 for more
details.
MSV, although useful, cannot be used when the domain is not a rectangle or a polar
sector (see below). Theorem 4.4.8 provides general criteria for the solution to (4.0.1),
which for the problem (4.1.1) implies that it has a unique classical solution if and only
if g ∈ C_0^0(Σ).
instead of (4.1.1b), (4.1.1c). One can proceed as above with the only difference that we
define γk in (4.1.8) by using ∂y u(x, b) = g(x) instead of u(x, b) = g(x).
Remark 4.1.3 One can use MSV in polar coordinates, which is useful when the domain
is a disk or a polar sector. For example, it is easy to show that if (r, θ) are polar coor-
dinates, x = r cos θ, y = r sin θ, then the problem
Then one looks for u(r, θ) = R(r)Θ(θ), which after replacing in (4.1.15a) gives
(1/r)(rR′)′ Θ + (1/r^2) R Θ′′ = 0.
Assuming RΘ ≠ 0, dividing the equation above by RΘ gives
r^2 R′′ + r R′ − λR = 0, r ∈ (0, R),   Θ′′ + λΘ = 0, θ ∈ (0, 2π), Θ(0) = Θ(2π), Θ′(0) = Θ′(2π), (4.1.16)
R_0(r) = C_0 + D_0 ln r, Θ_0 = A_0,
R_k(r) = C_k r^k + D_k r^{−k}, Θ_k(θ) = A_k cos(kθ) + B_k sin(kθ), k = 1, 2, . . . ,
u(r, θ) ∼ α_0/2 + Σ_{k=1}^∞ (r/R)^k (α_k cos(kθ) + β_k sin(kθ)). (4.1.17)
Finally, the coefficients αk , βk are defined formally by replacing the series (4.1.17) in
(4.1.15b), multiplying by cos(lθ), sin(lθ), and integrating in [0, 2π]. It implies
α_0 = (1/π) ∫_0^{2π} g(s) ds, (4.1.18)
α_k = (1/π) ∫_0^{2π} g(s) cos(ks) ds, β_k = (1/π) ∫_0^{2π} g(s) sin(ks) ds, k = 1, 2, . . . . (4.1.19)
We conclude this remark by pointing out that we can use MSV to solve problems like
−Δu = 0 in Ω, u = g on ∂Ω,
with Ω = {(r, θ), r ∈ [0, R), θ ∈ (0, σ), σ ∈ (0, 2π)}, or Ω = B(0, R)c .
4.2. Dirichlet problem in a ball
Remark 4.2.2 Theorem 4.2.1 shows that if u ∈ C^0(Ω) ∩ C^2(Ω) is a classical solution to (4.0.1) then u ∈ C^0(Ω) ∩ C^∞(Ω), because if B ⋐ Ω is a ball and h solves −Δh = 0 in B, h = u on ∂B, then u = h, so u ∈ C^∞(B), which implies u ∈ C^∞(Ω) because B ⋐ Ω is arbitrary.
Remark 4.2.3 (Mean value theorem (MVT)) The formula (4.2.1) is called “Pois-
son’s formula”. In an arbitrary ball BR (z) := B(z, R), it has the form
u(x) = ((R^2 − |x − z|^2)/(N V_N R)) ∫_{∂B_R(z)} g(y)/|x − y|^N dσ(y), x ∈ B_R(z), (4.2.2)
In the case N = 2, one can easily find the formula for u(x) in (4.2.1) by using the
method of separation of variables in polar coordinates; see Remark 4.1.3. Indeed, assume
for simplicity R = 1. Then from (4.1.17), taking into account (4.1.19) and permuting
the sum with the integrals gives
$$u(r,\theta) \sim \frac{1}{\pi}\int_0^{2\pi} g(s)\Big(\frac{1}{2} + \sum_{k=1}^\infty r^k\big(\cos(k\theta)\cos(ks) + \sin(k\theta)\sin(ks)\big)\Big)ds$$
$$= \frac{1}{\pi}\int_0^{2\pi} g(s)\Big(\frac{1}{2} + \sum_{k=1}^\infty r^k\cos(k(\theta - s))\Big)ds$$
$$= \frac{1}{\pi}\int_0^{2\pi} g(s)\,\mathrm{Re}\Big(\sum_{k=0}^\infty \big(re^{i(\theta-s)}\big)^k - \frac{1}{2}\Big)ds$$
$$= \frac{1}{\pi}\int_0^{2\pi} g(s)\,\mathrm{Re}\Big(\frac{1}{1 - re^{i(\theta-s)}} - \frac{1}{2}\Big)ds$$
$$= \frac{1}{\pi}\int_0^{2\pi} g(s)\,\mathrm{Re}\Big(\frac{1}{(1 - r\cos(\theta-s)) - ir\sin(\theta-s)} - \frac{1}{2}\Big)ds$$
$$= \frac{1}{\pi}\int_0^{2\pi} g(s)\Big(\frac{1 - r\cos(\theta-s)}{(1 - r\cos(\theta-s))^2 + (r\sin(\theta-s))^2} - \frac{1}{2}\Big)ds$$
$$= \frac{1}{2\pi}\int_0^{2\pi} g(s)\,\frac{2(1 - r\cos(\theta-s)) - (1 - 2r\cos(\theta-s) + r^2)}{1 - 2r\cos(\theta-s) + r^2}\,ds$$
$$= \frac{1 - r^2}{2\pi}\int_0^{2\pi}\frac{g(s)}{1 - 2r\cos(\theta-s) + r^2}\,ds$$
$$= \frac{1 - |x|^2}{2\pi}\int_{\partial B(0,1)}\frac{g(y)}{|x - y|^2}\,d\sigma(y),$$
where $x = (r\cos\theta, r\sin\theta)$, $y = (\cos s, \sin s)$, so that $|x - y|^2 = 1 - 2r\cos(\theta - s) + r^2$.
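The final formula can be tested numerically against the exact solution. In the sketch below (our own illustration, not from the book) we take $g(s) = \cos(2s)$, for which the series (4.1.17) reduces to the single term $r^2\cos(2\theta)$, and compare it with the Poisson integral:

```python
import math

def u_poisson(r, theta, g, n=20000):
    # u(r,theta) = (1 - r^2)/(2*pi) * int_0^{2 pi} g(s) / (1 - 2 r cos(theta - s) + r^2) ds
    h = 2 * math.pi / n
    total = sum(g((j + 0.5) * h) / (1 - 2 * r * math.cos(theta - (j + 0.5) * h) + r * r)
                for j in range(n))
    return (1 - r * r) / (2 * math.pi) * total * h

g = lambda s: math.cos(2 * s)
r, theta = 0.6, 1.1
exact = r ** 2 * math.cos(2 * theta)   # the series solution for this boundary datum
print(abs(u_poisson(r, theta, g) - exact) < 1e-6)
```

The integrand is smooth and periodic, so the midpoint rule converges very fast and the tolerance above is comfortable.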
The following proposition shows that property (4.2.3), even though it involves just an integral, which is well defined for any $u \in C^0(\Omega)$, is actually very strong.
Proposition 4.2.4 Let $\Omega \subset \mathbb{R}^N$ be open and $u \in C^0(\Omega)$. The following are equivalent:
Proof. Indeed, (i) implies (ii) follows from Remark 4.2.3. Now let us prove that (ii) implies (i). Let $B = B(x,R) \subset\subset \Omega$ and $h \in C^0(\overline B) \cap C^2(B)$ be such that
$$h = u \ \text{on } \partial B \quad \text{and} \quad -\Delta h = 0 \ \text{in } B,$$
with strict inequality holding because $u - h$ is continuous and $|u - h| < M$ on $\partial B(x_M, r_M)$. The contradiction proves $M = 0$ and hence $u = h$ in $B$. From Theorem 4.2.1 and the arbitrariness of $B$, we get $u \in C^2(\Omega)$ and $\Delta u = 0$ in $\Omega$.
Lu := −Δu. (4.3.1)
4.3. Maximum principle for Laplacian Chapter 4
For the weak maximum principle, the reader can think of a function $u$ on $[a,b] \subset \mathbb{R}$ with $-u''$ negative, resp. positive. Then $u$ is concave up, resp. concave down, and therefore it reaches its maximum, resp. minimum, on the boundary of $(a,b)$. The following theorem generalizes these facts to dimension $N$.
Theorem 4.3.1 (Weak maximum principle) Let Ω ⊂ RN be open bounded and
assume Lu ≤ 0, resp. Lu ≥ 0, in Ω. Then the maximum, resp. minimum, of u in
$\overline\Omega$ is achieved on $\partial\Omega$, i.e.
$$\max_{\overline\Omega} u = \max_{\partial\Omega} u, \quad \text{resp.} \quad \min_{\overline\Omega} u = \min_{\partial\Omega} u,$$
where $\min_A f = \min\{f(x),\ x \in A\}$, $\max_A f = \max\{f(x),\ x \in A\}$ for every $f \in C^0(A)$.
Proof. We deal first with the case $Lu \le 0$. Let $x_0 \in \overline\Omega$ be such that $u(x_0) = \max\{u(x),\ x \in \overline\Omega\}$. We distinguish two sub-cases.
(i) Case $Lu < 0$ in $\Omega$. If $x_0 \in \partial\Omega$ the theorem is proved. So we assume $x_0 \in \Omega$. Hence $\nabla u(x_0) = 0$ and $\partial_{ii}u(x_0) \le 0$ for all $i$. Therefore $Lu(x_0) = -\Delta u(x_0) \ge 0$, which is a contradiction and proves that $x_0 \in \partial\Omega$.
(ii) Case $Lu \le 0$ in $\Omega$. Let $\epsilon > 0$ and consider $u_\epsilon(x) = u(x) + \epsilon e^{x_1}$, $x = (x_1,\ldots,x_N)$. Then
$$Lu_\epsilon = Lu - \epsilon e^{x_1} < 0 \ \text{in } \Omega.$$
From Case (i), it follows that there exists $x_\epsilon = (x_{\epsilon,1},\ldots,x_{\epsilon,N}) \in \partial\Omega$ such that
$$u_\epsilon(x) \le u_\epsilon(x_\epsilon), \quad \text{so} \quad u(x) \le u(x_\epsilon) + \epsilon\big(e^{x_{\epsilon,1}} - e^{x_1}\big), \quad \forall x \in \overline\Omega.$$
Letting $\epsilon \to 0$ gives $u(x) \le \max_{\partial\Omega} u$ for all $x \in \overline\Omega$, which proves the claim.
For the case Lu ≥ 0 we set v = −u, so Lv ≤ 0, and then the result follows from (ii).
Proof. Set $w = u - v$.
(i) We have $Lw \le 0$ in $\Omega$, $w \le 0$ on $\partial\Omega$. From Theorem 4.3.1 it follows that $\max_{\overline\Omega} w \le \max_{\partial\Omega} w \le 0$. So $w \le 0$, i.e. $u \le v$.
³ This is the reason for the negative sign "−" in front of $\Delta$.
The result of Theorem 4.3.1 does not exclude that u achieves the maximum also at a
certain point inside the domain. The strong maximum principle excludes this possibility.
The following lemma and theorem are classical results; see, for example, [20].
Lemma 4.3.3 (Hopf lemma) Let Ω ⊂ RN be an open set with ∂Ω a C 2 boundary
near4 x0 ∈ ∂Ω. Assume furthermore Lu(x) ≤ 0, u(x) < u(x0 ) for all x ∈ Ω near x0 and
u is differentiable at x0 . Then ∂ν u(x0 ) := ∇u(x0 ) · ν(x0 ) > 0, where ν(x0 ) is the unit
outward normal vector to ∂Ω at x0 .
Proof. See Lemma 9.3.5 for the details of the proof.
In the case of a local minimum, we have the following result, which we still refer to as
“Hopf lemma”. Let Ω ⊂ RN be an open set with ∂Ω a C 2 boundary near x0 ∈ ∂Ω. Assume
furthermore Lu ≥ 0 in Ω, u(x) > u(x0 ) for all x ∈ Ω near x0 and u is differentiable at
x0 . Then ∂ν u(x0 ) < 0. The proof follows directly from Lemma 4.3.3 applied to −u.
We note that under the assumptions of the lemma it follows easily that $\partial_\nu u(x_0) \ge 0$. The strength of the lemma is that it rules out the case of equality. This result is used crucially in the following theorem (Figure 4.3.1).
Proof. Let us consider first the case $Lu \le 0$. Let $M = \max\{u(x),\ x \in \overline\Omega\}$ and set $K = \{x \in \Omega,\ u(x) = M\}$. Note that $K$ is closed. Note also that $K \neq \emptyset$ because $u$ attains its maximum $M$ in $\Omega$.
If $K = \Omega$ then $u = M$ and the theorem is proved. Otherwise, we assume the theorem does not hold, i.e. $u$ achieves its maximum in $\Omega$. Hence $\emptyset \neq K \subsetneq \Omega$. Then there exists a ball $B_0 \subset\subset \Omega$, $B_0 \cap K = \emptyset$, and $x_0 \in \partial B_0$ such that $u$ achieves at $x_0$ a strict maximum in $\overline{B_0}$. Indeed, there exist $z \in \partial K \cap \Omega$ and $r > 0$ such that $B = B(z,r) \subset\subset \Omega$. The set $B\setminus K$ is open and nonempty. Let $B_0 = B(z_0, r_0)$, for a certain $z_0 \in B\setminus K$ and $r_0 > 0$, be the biggest open ball included in $B\setminus K$. From the maximality of $B_0$, necessarily $\partial B_0 \cap \partial K \neq \emptyset$, and let $x_0 \in \partial B_0 \cap K$. It follows that $|\nabla u(x_0)| = 0$ (see Figure 4.3.1: $B(z_0,r_0)$ and $x_0$).
⁴ Here, "∂Ω a C² boundary near x₀" means that in Definition 1.2.4 we replace "for every x ∈ ∂Ω" with "for x = x₀". Note that this implies that Ω ∩ B(x₀, r₀) lies on one side of ∂Ω, for r₀ > 0 small.
Now we can apply Lemma 4.3.3 with B0 instead of Ω. It gives ∂ν u(x0 ) > 0, where
ν = (x0 − z0 )/|x0 − z0 |. But this contradicts the fact that |∇u(x0 )| = 0. Hence the
theorem’s claim holds.
The case Lu ≥ 0 is proved by setting v = −u and using the case Lu ≤ 0.
The following corollary is related to the solution to the problem (4.0.1) by the MSV; see Section 4.1. It shows that if the series representing the boundary function $g$ converges in $C^0(\partial\Omega)$ then the MSV provides a (unique) classical solution.
Δu = 0 in Ω, u=g on ∂Ω.
$$\Delta(u_n - u_m) = 0 \ \text{in } \Omega, \qquad u_n - u_m = g_n - g_m \ \text{on } \partial\Omega,$$
and Theorem 4.3.1, we get $\|u_n - u_m\|_{C^0(\overline\Omega)} \le \|g_n - g_m\|_{C^0(\partial\Omega)}$. Therefore $\lim_{n\to\infty} u_n = u$ in $C^0(\overline\Omega)$ and $u = g$ on $\partial\Omega$, for a certain $u \in C^0(\overline\Omega)$.
As $\Delta u_n = 0$ in $\Omega$, we have $u_n(x) = \fint_{\partial B_R(x)} u_n(y)\,d\sigma(y)$ for all $B_R(x) \subset\subset \Omega$. Passing to the limit in this equality implies $u(x) = \fint_{\partial B_R(x)} u(y)\,d\sigma(y)$, which from Proposition 4.2.4 gives $u \in C^0(\Omega) \cap C^2(\Omega)$ and $\Delta u = 0$ in $\Omega$. Then Theorem 4.2.1 implies $u \in C^\infty(\Omega)$.
Chapter 4 4.4. Solution to the Dirichlet problem
u ≤ h (u ≥ h) on ∂B implies u ≤ h (u ≥ h) in B.
Theorem 4.4.6 Let B ⊂ RN be a ball and (un ) a sequence of uniformly bounded and
harmonic functions in B. Then (un ) has a subsequence converging pointwise in B to a
harmonic function u in B. Furthermore, the convergence is in C 0 (K) for every compact
K ⊂ B.
Theorem 4.4.7 (Perron) Let Ω ⊂ RN be an open bounded C 2 domain. Then for every
g ∈ C 0 (∂Ω) there exists a unique u ∈ C 0 (Ω) ∩ C 2 (Ω) solution to (4.0.1).
Comments on the proof of the theorem. The proof is classical and can be
found in several books; see, for example, [20, 43]. For the reader’s convenience, we have
included it in section 9.3.3.3 in Annex. Here we give some comments which highlight
the main steps and technique of the proof.
The proof considers separately the problem in the interior, −Δu = 0 in Ω, and the
problem on the boundary, u = g on ∂Ω.
For the problem in the interior, one looks for the solution u of the form
$$w(x) = \ln\frac{|x - y|}{R} \quad \text{for } N = 2, \qquad w(x) = R^{2-N} - |x - y|^{2-N} \quad \text{for } N \ge 3,$$
with y and R such that B(y, R) ⊂ Ωc and ∂B(y, R) ∩ ∂Ω = {ξ}. Note that w satisfies
One may wonder whether the barrier function w is just a tool used in the proof of
Theorem 4.4.7, or if it has a fundamental role in the existence of the solution to (4.0.1).
It is also important to know whether the C 2 regularity of Ω can be weakened. Note that
the C 2 regularity of Ω is used when constructing the barrier function w (namely when
choosing the ball B(y, R)) (Figure 4.4.1).
A careful look at the proof of Theorem 4.4.7 shows that instead of the definition
(4.4.1) for the barrier function, one can define a weak barrier function w (for −Δ) at ξ
relative to Ω as follows:
(i) $w$ is superharmonic in $\Omega \cap B(\xi, R_\xi)$,
(ii) $w > 0$ in $\Omega \cap B(\xi, R_\xi)\setminus\{\xi\}$, $w(\xi) = 0$, \quad (4.4.2)
for a certain Rξ > 0. Then the proof of Theorem 4.4.7 remains unchanged.
In this context, we say that a point ξ ∈ ∂Ω is regular with respect to the Laplacian
if there exists a (weak) barrier function (for −Δ) at ξ relative to Ω.
The following theorem explains the connection between barrier functions and the
problem (4.0.1); see [20].
Theorem 4.4.8 Let Ω ⊂ RN be open, bounded. The Dirichlet problem (4.0.1) has a
unique solution u ∈ C 0 (Ω) ∩ C 2 (Ω) for arbitrary g ∈ C 0 (∂Ω) if and only if all ξ ∈ ∂Ω
are regular w.r.t. the Laplacian (i.e. for every ξ ∈ ∂Ω there exists a weak barrier function
for −Δ at ξ relative to Ω).
Note that the only regularity for Ω needed in Theorem 4.4.8 is the existence of
a weak barrier function at every ξ ∈ ∂Ω. This is a much weaker condition than the
assumption “Ω is of class C 2 ”. However, characterizing the domains that have a weak
barrier function at each point of their boundary is not an easy problem, especially in
high dimension. In dimension $N = 2$ one can show that if $\xi \in \partial\Omega$ then
$$w(x) = w(r,\theta) = -\mathrm{Re}\,\frac{1}{\ln z} = -\frac{\ln r}{\theta^2 + \ln^2 r},$$
with $(r,\theta)$ the polar coordinates, $x = \xi + (r\cos\theta, r\sin\theta)$, $z = re^{i\theta}$, is a weak barrier at $\xi$ for $-\Delta$ relative to $\Omega$, provided $\ln z$ is well-defined near $\xi$. Hence, when
N = 2 the problem (4.0.1) is solvable in Ω for arbitrary g ∈ C 0 (∂Ω), if for example each
point ξ ∈ ∂Ω is the endpoint of a simple arc lying in the exterior of Ω because in this
case one can define a branch of ln z near every ξ ∈ ∂Ω.
In dimension N ≥ 3, the characterization of Ω such that (4.0.1) is solvable is more
difficult. An example given by Lebesgue, see [9], shows that in dimension N = 3 there
are domains with non-regular points, such that (4.0.1) is not solvable for all g ∈ C 0 (∂Ω).
We conclude this chapter by emphasizing that we proved the existence and uniqueness of the solution to (4.0.1) using the method of subsolutions and supersolutions due to Perron. It provides classical solutions, i.e. $u \in C^0(\overline\Omega) \cap C^2(\Omega)$. This method separates the problem in the interior from the problem on the boundary, and can be applied to general linear second-order elliptic PDEs. It is based heavily on the maximum principle.
which do not have classical solutions but have "weak solutions" (we will review this problem in Chapter 7). The method of weak solutions has been developed to provide solutions to (4.0.1) in the cases where the method of classical solutions fails; see Chapter 7.
Problems
Problem 4.1 Consider the problem
−Δu = f in Ω = B(0, 1) ⊂ R2 ,
u = g on ∂Ω.
i) Let (r, θ) ∈ (0, ∞) × [0, 2π) be the polar coordinates, x1 = r cos θ, x2 = r sin θ.
Show that $\Delta u = \frac{1}{r}\partial_r(r\partial_r u) + \frac{1}{r^2}\partial_{\theta\theta} u$.
ii) Assume u, f , and g are smooth radially symmetric functions and solve for u.
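The polar identity in part i) can be sanity-checked by finite differences. The sketch below is our own numerical illustration (not a proof): the test function $u(x_1,x_2) = x_1^2 x_2$, whose Laplacian is $2x_2$, is an arbitrary choice.

```python
import math

def lap_polar(u, r, t, h=1e-3):
    # (1/r) d/dr ( r du/dr ) + (1/r^2) d^2u/dtheta^2, by central differences
    ur = lambda rr: (u(rr + h, t) - u(rr - h, t)) / (2 * h)
    d_r = ((r + h) * ur(r + h) - (r - h) * ur(r - h)) / (2 * h)
    d_tt = (u(r, t + h) - 2 * u(r, t) + u(r, t - h)) / (h * h)
    return d_r / r + d_tt / (r * r)

# u(x1, x2) = x1^2 * x2 written in polar coordinates; its Laplacian is 2 * x2
u = lambda r, t: (r * math.cos(t)) ** 2 * (r * math.sin(t))
r, t = 1.3, 0.8
print(abs(lap_polar(u, r, t) - 2 * r * math.sin(t)) < 1e-4)
```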
Problem 4.2 Show that u(x) = ln |x| is a classical solution to Δu = 0 in R2 \{(0, 0)}.
Problem 4.3 For each of the following problems, find a series solution candidate and
analyze the existence of a classical solution as required.
Problem 4.4 For each of the following problems, find a series solution candidate and
analyze the existence of a classical solution as required.
In these problems, $\nu$ is the unit exterior normal vector on $\partial\Omega$, excluding the corners of the domain.
Problem 4.5 Let α ∈ (0, π), Ω = {(r, θ), r ∈ (0, 1), θ ∈ (0, α)}, γ0 = {(r, 0), r ∈
(0, 1)}, γ = {(1, θ), θ ∈ (0, α)} and γα = {(r, α), r ∈ (0, 1)}. Consider the problem
−Δu = 0 in Ω, u = 0 on γ0 ∪ γα , u = g on γ,
with g(1, θ) = θ(α−θ). Use the method of separation of variables to find a series classical
solution u ∈ C 0 (Ω) ∩ C 2 (Ω).
Problem 4.8 Let Ω ⊂ RN be a C 2 bounded open connected set, ν the unit outward
normal vector to ∂Ω, and u ∈ C 1 (Ω) ∩ C 2 (Ω) satisfying
−Δu = 0 in Ω, ∂ν u = 0 on ∂Ω.
Problem 4.9 Let Ω ⊂ RN be an open set and u ∈ C 0 (Ω). Prove that u is subharmonic
in Ω if and only if
$$u(x) \le \frac{1}{N V_N R^{N-1}}\int_{\partial B_R(x)} u\,d\sigma =: \fint_{\partial B_R(x)} u\,d\sigma,$$
Problem 4.10 Let Ω ⊂ RN be an open set and u ∈ C 0 (Ω)∩C 2 (Ω). Prove that −Δu ≤ 0
in Ω if and only if
$$u(x) \le \fint_{\partial B_R(x)} u\,d\sigma,$$
Problem 4.11 Let Ω ⊂ RN be an open set and u ∈ C 0 (Ω) ∩ C 2 (Ω). Prove that u is
subharmonic in Ω if and only if −Δu ≤ 0 in Ω.
Problem 4.12 Let Ω ⊂ RN be an open set. Prove that u ∈ C 0 (Ω)∩C 2 (Ω) and −Δu = 0
in Ω if and only if u ∈ C 0 (Ω) and satisfies the MVT in Ω.
Problem 4.14 Prove the so-called “Liouville’s theorem”, i.e. if u ∈ C 2 (RN ) satisfies
−Δu = λu in Ω, u = 0 on ∂Ω (λ ∈ R).
Problem 4.16 Let Ω = {(r, θ), r ∈ [0, 1), θ ∈ (0, α), α ∈ (0, π)} and u ∈ C 0 (Ω) ∩
C 2 (Ω) be the solution to
−Δu = 0 in Ω, u = g on ∂Ω,
i) Assume that $|g(1,\theta)| \le C\sin\big(\frac{\pi}{\alpha}\theta\big)$ for all $\theta \in [0,\alpha]$. Use the comparison principle to show that $u$ is differentiable at $(0,0)$ and $\nabla u(0,0) = (0,0)$.
ii) Prove that the claim in (i) holds even with the condition on $g$ replaced by $g(1,\theta) \in C^1([0,\alpha])$.
Problem 4.17 Assume Ω+ ⊂ R2+ := {(x1 , x2 ), x2 > 0} is an open bounded set with
Γ0 = [0, a] × {0} = ∂Ω+ ∩ ∂R2+ and
−Δu = 1 in Ω, u = 0 on ∂Ω.
−Δu = 1 in Ω, u = 0 on ∂Ω.
⁷ You may use Proposition 4.2.4.
⁸ This is the so-called "Schwarz reflection principle".
⁹ You may compare $u$ with $v \in C^0(\overline B) \cap C^2(B)$ solving $-\Delta v = 1$ in $B$ and $v = 0$ on $\partial B$, where $B$ is the ball inscribed in the polygon.
5. Distributions
Well before Laurent Schwartz developed the theory of distributions, see [47], physi-
cists and engineers used distributions in an informal way, for example when solving
ordinary differential equations with discontinuous right-hand sides.
Distributions generalize the concept of a function and as such they can represent
very irregular “functions”. At the same time, distributions are “very regular”, as they
do have derivatives of any order. It turns out that these features are powerful tools,
because they allow us to elegantly write and solve differential and partial differential
equations, which otherwise would not even have meaning.
The literature on distributions is rich, and we refer the reader to [18, 22, 23, 47, 50].
5.1 Motivation
Consider the initial value problem $y_h' = f_h$ in $\mathbb{R}$, $y_h(-\infty) := \lim_{t\to-\infty} y_h(t) = 0$, where
$$f_h(t) = \begin{cases} \dfrac{1}{h}, & t \in (a, a+h),\\[2pt] 0, & t \notin (a, a+h), \end{cases} \qquad \text{with } a \in \mathbb{R},\ h > 0.$$
We look for the limit of $y_h$ as $h \to 0$. Clearly, as $y_h(t) = \int_{-\infty}^t f_h(\tau)\,d\tau$, we get $y(t) = \lim_{h\to 0} y_h(t) = H(t-a)$, $t \neq a$, where the limit exists pointwise a.e. and in $L^1_{loc}(\mathbb{R})$, and $H(t)$ is the Heaviside function, $H(t) = 1_{(0,\infty)}(t)$.
One would prefer to consider the limit from a functional viewpoint: first consider the limit of $f_h$ and then, by using the differential equation, deduce the limit of $y_h$. In this case, we note that $\lim_{h\to 0} f_h(t) = 0$ for all $t$. If we naively passed to the limit in $y_h' = f_h$, we would get $y' = 0$, so $y = 0$, which is incorrect. If one accepted that $\delta_a := \lim_{h\to 0} f_h$ exists, let us say in $L^1(\mathbb{R})$, then necessarily $\delta_a = 0$. However, this is impossible because, as for arbitrary $h > 0$ we have $\int_{\mathbb{R}} f_h(t)\,dt = 1$, we would obtain $\int_{\mathbb{R}} \delta_a(t)\,dt = 1$, which contradicts $\delta_a = 0$. So, if the limit $\delta_a$ of $(f_h)$ exists, it cannot be in $L^1(\mathbb{R})$.
One notes that for $\varphi \in C^0_0(\mathbb{R})$, we have
$$\lim_{h\to 0}\int_{\mathbb{R}} f_h(t)\varphi(t)\,dt = \varphi(a).$$
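This limit behaviour is easy to observe numerically. The sketch below is our own illustration (the test function $\varphi$ and the step sizes are arbitrary choices): it checks that $\int f_h\varphi \to \varphi(a)$ and that $y_h$ approaches the Heaviside jump.

```python
import math

a = 0.5
phi = lambda t: math.exp(-t * t)   # a smooth test function

def pairing(h, n=1000):
    # int f_h * phi = average of phi over (a, a + h), by the midpoint rule
    return sum(phi(a + (j + 0.5) * h / n) for j in range(n)) / n

print(abs(pairing(1e-5) - phi(a)) < 1e-4)   # pairing tends to phi(a) as h -> 0

# y_h(t) = int_{-inf}^t f_h is a ramp; it tends to H(t - a) pointwise for t != a
y_h = lambda t, h: min(max((t - a) / h, 0.0), 1.0)
print(y_h(0.4, 1e-6), y_h(0.6, 1e-6))   # -> 0.0 1.0
```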
5.2 Distributions
In this section, we will introduce the space of test functions as well as the definition
of distributions and of their derivatives, and we will provide a number of examples.
1
All along the remainder of this book, unless otherwise specified, the functions of D(Ω) are with
values in C.
We note that D(Ω) is not empty. For example, if ρ is the function defined in Example
1.1.13, then ρ(n(x − x0 )) belongs to D(Ω) provided x0 ∈ Ω and n ∈ N is big enough.
Using the sequence (ρn ) of Example 1.1.13 and the “convolution” operator, we can
construct infinitely many D(Ω) functions.
Theorem 5.2.4 Let $p \in [1,\infty]$ and define the convolution operator $*$ by
$$*: L^1(\mathbb{R}^N) \times L^p(\mathbb{R}^N) \to L^p(\mathbb{R}^N), \quad (u,v) \mapsto u * v, \quad (u * v)(x) = \int_{\mathbb{R}^N} u(x-y)v(y)\,dy. \quad (5.2.1)$$
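The mapping property in (5.2.1) is usually accompanied by Young's inequality $\|u*v\|_{L^p} \le \|u\|_{L^1}\|v\|_{L^p}$. A discrete Riemann-sum version of it can be checked directly; the sketch below is our own (the grid, the sample functions, and the choice $p = 2$ are arbitrary):

```python
import math

h = 0.01
xs = [i * h for i in range(-300, 301)]
u = [math.exp(-abs(x)) for x in xs]            # mimics a function in L1
v = [1.0 if abs(x) < 1 else 0.0 for x in xs]   # mimics a function in L2

def conv(u, v, h):
    # Riemann-sum analogue of (u * v)(x) = int u(x - y) v(y) dy on the common grid
    n = len(u)
    out = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            k = i - j + n // 2   # grid index of x_i - x_j
            if 0 <= k < n:
                s += u[k] * v[j]
        out.append(s * h)
    return out

w = conv(u, v, h)
l1 = lambda f: sum(abs(a) for a in f) * h
l2 = lambda f: math.sqrt(sum(a * a for a in f) * h)
print(0 < l2(w) <= l1(u) * l2(v))   # discrete Young's inequality holds
```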
5.2.2 Distributions
As we will see, distributions generalize the notion of a function. In the context of PDEs, they are fundamental to giving a precise meaning to PDEs which otherwise would not have one, and to solving them.
Roughly speaking, distributions can be very irregular "functions". However, they have derivatives of any order.
Definition 5.2.6 Let Ω ⊂ RN be an open set.
The interest of distributions of finite order is that they can be identified with the dual space of $C_0^m(\Omega)$.
Example 5.2.7 shows that Definition 5.2.6 extends the notion of $L^1_{loc}(\Omega)$ functions. Identifying $f$ with $T_f$, we write $L^1_{loc}(\Omega) \subset \mathcal{D}'(\Omega)$. The following example shows that the inclusion is strict.
Example 5.2.8 Let $\delta_0: \mathcal{D}(\mathbb{R}^N) \to \mathbb{C}$, $\langle\delta_0,\varphi\rangle = \varphi(0)$, be the distribution called "Dirac measure". It is easy to show that $\delta_0$ is a distribution of order zero.
Note that for all $p \in [1,\infty]$ we have $\delta_0 \notin L^p$, i.e. there is no $\delta_0 \in L^p$ such that $\int_{\mathbb{R}^N}\delta_0(x)\varphi(x)\,dx = \varphi(0)$ for all $\varphi \in \mathcal{D}$. Indeed, let us prove this for $p = 2$ (for the case $p$ arbitrary, see [5]). Assume for the moment that $\delta_0 \in L^2$. Then there exists $\varphi_n \in \mathcal{D}$, $\lim_{n\to\infty}\varphi_n = \delta_0$ in $L^2$; see Theorem 1.3.5. Consider $u_n = \varphi_n(1 - \eta_n)$, where $\eta_n \in \mathcal{D}$ is a $\{\{0\}, B(0,1/n)\}$ cut-off function. Assuming for the moment $\lim_{n\to\infty} u_n = \delta_0$ in $L^2$, we get
$$0 < \|\delta_0\|^2_{L^2(\mathbb{R}^N)} = \lim_{n\to\infty}\int_{\mathbb{R}^N}\delta_0(x)u_n(x)\,dx = \lim_{n\to\infty}\langle\delta_0,u_n\rangle = \lim_{n\to\infty}u_n(0) = 0,$$
which is a contradiction and proves $\delta_0 \notin L^2(\mathbb{R}^N)$.
Now we prove $\lim_{n\to\infty} u_n = \delta_0$ in $L^2(\mathbb{R}^N)$. We have
$$\lim_{n\to\infty}\|\varphi_n - u_n\|^2_{L^2(\mathbb{R}^N)} = \lim_{n\to\infty}\int_{B(0,1/n)}|\varphi_n\eta_n|^2 \le \lim_{n\to\infty}\int_{B(0,1/n)}|\varphi_n|^2 \le C\lim_{n\to\infty}\int_{B(0,1/n)}|\varphi_n - \delta_0|^2 + C\lim_{n\to\infty}\int_{B(0,1/n)}|\delta_0|^2 = 0,$$
where we have used the Lebesgue dominated convergence theorem. This implies $\lim_{n\to\infty} u_n = \delta_0$ in $L^2(\mathbb{R}^N)$ because $\lim_{n\to\infty}\varphi_n = \delta_0$ in $L^2$.
n→∞
$\mathcal{D}'(\Omega)$ is a vector space, and has a topology, which for the needs of this introduction will be characterized by the convergence of sequences as given by the following definition.
1) $\alpha_1 T_1 + \alpha_2 T_2 \in \mathcal{D}'(\Omega)$ by
2) $kT \in \mathcal{D}'(\Omega)$ by
It is easy to check that 1), 2) are consistent, while the consistency² of 3) follows from Theorem 9.4.5 in Annex.
² Actually, we have the following stronger result: if $\lim_{n\to\infty}\langle T_n,\varphi\rangle$ exists for all $\varphi \in \mathcal{D}(\Omega)$ then $T$ defined by $\langle T,\varphi\rangle := \lim_{n\to\infty}\langle T_n,\varphi\rangle$ is a distribution in $\mathcal{D}'(\Omega)$ and $\lim_{n\to\infty} T_n = T$ in $\mathcal{D}'(\Omega)$.
Example 5.2.10 Let $(\rho_n)$ be a mollifier sequence; see Example 1.1.13. Then $\lim_{n\to\infty}\rho_n = \delta_0$ in $\mathcal{D}'$ because for $\varphi \in \mathcal{D}$, by using (5.2.7) we get
$$\langle\rho_n,\varphi\rangle = \int_{B(0,\frac{1}{n})}\rho_n(x)\varphi(x)\,dx = \mathrm{Re}(\varphi(\theta_n^r)) + i\,\mathrm{Im}(\varphi(\theta_n^i)) \xrightarrow[n\to\infty]{} \varphi(0),$$
where $\theta_n^r, \theta_n^i \in B\big(0,\frac{1}{n}\big)$.
n
Example 5.2.12 The convergence in $\mathcal{D}'(\Omega)$ generalizes (or is consistent with) the convergence in $L^1(\Omega)$. Indeed, let $(f_n)$ in $L^1(\Omega)$, $f \in L^1(\Omega)$, $\lim_{n\to\infty} f_n = f$ in $L^1(\Omega)$. Then, if $T_n = T_{f_n}$ and $T = T_f$, we have $\lim_{n\to\infty} T_n = T$ in $\mathcal{D}'(\Omega)$ because for $\varphi \in \mathcal{D}(\Omega)$ we have
$$|\langle T_n,\varphi\rangle - \langle T,\varphi\rangle| = \Big|\int_\Omega (f_n(x) - f(x))\varphi(x)\,dx\Big| \le \|f_n - f\|_{L^1(\Omega)}\|\varphi\|_{L^\infty(\Omega)} \xrightarrow[n\to\infty]{} 0.$$
Note that in general distributions do not have a pointwise meaning, as they are defined only on $\mathcal{D}(\Omega)$. However, it is useful to speak about the support of a distribution.
2) We set $N(T)$ to be the largest open set in $\Omega$ where $T = 0$, and call it the "null set of $T$" (or "zero set of $T$").
The support of $T \in \mathcal{D}'(\Omega)$, denoted by $\mathrm{supp}(T)$, is defined by $\mathrm{supp}(T) = \Omega\setminus N(T)$.
The following theorem shows that a distribution with compact support is of finite order, i.e. it belongs to the dual space of $C_0^m(\Omega)$ (see 3) in Definition 5.2.6) for a certain $m \in \mathbb{N}_0$.
Theorem 5.2.15 Let $\Omega \subset \mathbb{R}^N$ be open and let $T \in \mathcal{D}'(\Omega)$ have compact support. Then $T$ is of finite order, i.e. $T$ satisfies (5.2.6).
Proof. See Theorem 9.4.6 in Annex.
We note that Definition 5.2.16 is motivated by the usual derivative of smooth functions. For example, if $T = T_f$ with $f \in C^1(\Omega)$ then
$$\langle\partial_i T,\varphi\rangle = \int_\Omega \partial_i f(x)\varphi(x)\,dx = -\int_\Omega f(x)\partial_i\varphi(x)\,dx = -\langle T,\partial_i\varphi\rangle,$$
Example 5.2.18 Let $T = H(x) = 1_{(0,\infty)}(x)$, $x \in \mathbb{R}$, be the Heaviside function. Then
$$\langle T',\varphi\rangle = -\langle T,\varphi'\rangle = -\int_{\mathbb{R}} H(x)\varphi'(x)\,dx = -\int_0^\infty\varphi'(x)\,dx = \varphi(0) = \langle\delta_0,\varphi\rangle.$$
So $H' = \delta_0$ (here $'$ denotes the derivative w.r.t. $x$).
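The computation $H' = \delta_0$ can be replayed numerically: for a compactly supported smooth $\varphi$, the pairing $-\int H\varphi'$ should return $\varphi(0)$. A pure-Python sketch (our own; $\varphi$ is the standard bump function):

```python
import math

def phi(x):
    # smooth bump supported in (-1, 1)
    return math.exp(-1.0 / (1 - x * x)) if abs(x) < 1 else 0.0

def dphi(x, h=1e-5):
    return (phi(x + h) - phi(x - h)) / (2 * h)

# <H', phi> := -<H, phi'> = -int_0^1 phi'(x) dx, midpoint rule
n = 20000
pair = -sum(dphi((j + 0.5) / n) for j in range(n)) / n
print(abs(pair - phi(0)) < 1e-4)   # equals phi(0) = exp(-1)
```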
So we have proved
The definition (5.2.11) defines $pv(x^{-1}) \in \mathcal{D}'(\mathbb{R})$ because for $\varphi \in \mathcal{D}(\mathbb{R})$ we have
$$\langle pv(x^{-1}),\varphi\rangle = \lim_{\epsilon\to 0^+}\int_{\{\epsilon<|x|<M\}}\frac{\varphi(x) - \varphi(0)}{x}\,dx = \int_{\{|x|<M\}}\frac{\varphi(x) - \varphi(0)}{x}\,dx, \quad \text{so}$$
$$|\langle pv(x^{-1}),\varphi\rangle| \le 2M\|\varphi\|_{C^1(\mathbb{R})},$$
where $M = M(\varphi) > 0$ is such that $\mathrm{supp}(\varphi) \subset [-M, M]$. It follows furthermore that $pv(x^{-1})$ is of order one (so $pv(x^{-1})$ is not a measure).
The distribution $pv(x^{-1})$ is called the "principal value of $x^{-1}$". Its name is motivated by the fact that it corresponds to the Cauchy principal value of $\int_{\mathbb{R}}\frac{\varphi(x)}{x}\,dx$. Finally, note that $pv(x^{-1})$ is equivalently defined by
$$\langle pv(x^{-1}),\varphi\rangle = \int_{\{|x|>\epsilon\}}\frac{\varphi(x)}{x}\,dx + \int_{\{|x|<\epsilon\}}\frac{\varphi(x) - \varphi(0)}{x}\,dx, \quad \forall\epsilon > 0. \quad (5.2.13)$$
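Formula (5.2.13) can be exercised numerically. In the sketch below (our own; the test function $\varphi(x) = x e^{-x^2}$ is an arbitrary choice for which $\varphi(0) = 0$, so the pairing equals $\int_{\mathbb{R}} e^{-x^2}\,dx = \sqrt{\pi}$), the $\epsilon$-independent value is recovered by quadrature:

```python
import math

phi = lambda x: x * math.exp(-x * x)

def midpoint(f, a, b, n=40000):
    h = (b - a) / n
    return sum(f(a + (j + 0.5) * h) for j in range(n)) * h

eps, M = 1.0, 12.0
outer = midpoint(lambda x: phi(x) / x, -M, -eps) + midpoint(lambda x: phi(x) / x, eps, M)
inner = midpoint(lambda x: (phi(x) - phi(0)) / x, -eps, eps)
print(abs(outer + inner - math.sqrt(math.pi)) < 1e-6)
```

Note that $\varphi$ here is not compactly supported; truncating at $|x| = M = 12$ costs only an exponentially small tail.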
Example 5.2.21 Let $\Omega \subset \mathbb{R}^N$ be open, simply connected, and look for $T \in \mathcal{D}'(\Omega)$:
We show that $T$ is equal to a constant. For simplicity and without loss of generality, we assume $\Omega = (a_1,b_1) \times \cdots \times (a_N,b_N) =: I_1 \times \cdots \times I_N$.
We note that a function $\phi \in \mathcal{D}(\Omega)$ is the $\partial_i$ derivative of a certain function $\varphi \in \mathcal{D}(\Omega)$ if and only if
$$\int_{a_i}^{b_i}\phi(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_N)\,dx_i = 0.$$
One direction of this statement is clear, and the other follows by considering
$$\varphi(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_N) = \int_{a_i}^{x_i}\phi(x_1,\ldots,x_{i-1},t,x_{i+1},\ldots,x_N)\,dt.$$
Next we choose $\eta_i \in \mathcal{D}(I_i)$ with $\int_{I_i}\eta_i(x_i)\,dx_i = 1$, for all $i = 1,\ldots,N$. Then for every $\varphi =: \varphi_1 \in \mathcal{D}(\Omega)$ and $x = (x_1,\ldots,x_N) \in \Omega$ define $\varphi_2$ and $\phi_1$ by
$$\varphi_2(x_2,\ldots,x_N) := \int_{a_1}^{b_1}\varphi_1(x_1,x_2,\ldots,x_N)\,dx_1,$$
$$\varphi_1(x_1,\ldots,x_N) = \phi_1(x_1,\ldots,x_N) + \eta_1(x_1)\varphi_2(x_2,\ldots,x_N).$$
It is easy to show that $\eta_1\varphi_2 \in \mathcal{D}(\Omega)$, $\phi_1 \in \mathcal{D}(\Omega)$, and $\int_{a_1}^{b_1}\phi_1(x_1,x_2,\ldots,x_N)\,dx_1 = 0$. So $\phi_1$ is equal to the $\partial_1$ derivative of a test function in $\Omega$. Therefore $\langle T,\phi_1\rangle = 0$ and
$$\langle T,\varphi_1\rangle = \langle T,\eta_1\varphi_2\rangle.$$
We proceed with $\varphi_2$ as with $\varphi_1$. So we consider $\varphi_3$ and $\phi_2$ defined by
$$\varphi_3(x_3,\ldots,x_N) = \int_{a_2}^{b_2}\varphi_2(x_2,x_3,\ldots,x_N)\,dx_2,$$
$$\varphi_2(x_2,\ldots,x_N) = \phi_2(x_2,\ldots,x_N) + \eta_2(x_2)\varphi_3(x_3,\ldots,x_N).$$
Therefore we have
$$\eta_1\varphi_2 = \eta_1\phi_2 + \eta_1\eta_2\varphi_3.$$
5.3. Convolution of distributions and fundamental solutions Chapter 5
b
Clearly η1 η2 ϕ3 ∈ D(Ω), η1 φ2 ∈ D(Ω), and a22 η1 (x1 )φ2 (x2 , x3 , . . . , xN )dx2 = 0. There-
fore η1 φ2 is equal to the ∂2 derivative of a test function in Ω. So T, η1 φ2 = 0 and
T, η1 ϕ2 = T, η1 η2 ϕ3 .
Continuing in this way we get
$$\langle T,\varphi\rangle = \langle T,\varphi_1\rangle = \langle T,\eta_1\varphi_2\rangle = \cdots = \langle T,\eta_1\eta_2\cdots\eta_N\varphi_{N+1}\rangle$$
$$= \Big\langle T,\ \eta_1(x_1)\cdots\eta_N(x_N)\int_{a_N}^{b_N}\varphi_N(x_N)\,dx_N\Big\rangle$$
$$= \Big\langle T,\ \eta_1(x_1)\cdots\eta_N(x_N)\int_{a_N}^{b_N}\int_{a_{N-1}}^{b_{N-1}}\varphi_{N-1}(x_{N-1},x_N)\,dx_{N-1}\,dx_N\Big\rangle$$
$$\vdots$$
$$= \Big\langle T,\ \eta_1(x_1)\cdots\eta_N(x_N)\int_{a_N}^{b_N}\cdots\int_{a_1}^{b_1}\varphi_1(x_1,\ldots,x_N)\,dx_1\cdots dx_N\Big\rangle$$
$$= \big\langle T,\ \eta_1(x_1)\cdots\eta_N(x_N)\,\langle 1,\varphi\rangle\big\rangle = \langle T,\eta_1(x_1)\cdots\eta_N(x_N)\rangle\,\langle 1,\varphi\rangle = \big\langle\,\langle T,\eta_1(x_1)\cdots\eta_N(x_N)\rangle,\ \varphi\,\big\rangle,$$
i.e. $T$ is the constant $\langle T,\eta_1\cdots\eta_N\rangle$.
Definition 5.3.1 Let $T, S \in \mathcal{D}'$, with at least one of them, for example $S$, with compact support. Then we define $T * S \in \mathcal{D}'$, called the "convolution of $T$ with $S$", by
$$= \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} f(x)g(y)\varphi(x+y)\,dy\,dx = \langle f(x),\langle g(y),\varphi(x+y)\rangle\rangle.$$
Also, the definition (5.3.1) is consistent, i.e. $T * S \in \mathcal{D}'$; see Proposition 9.4.10 in Annex.
The following theorem shows the density of $\mathcal{D}$ in $\mathcal{D}'$ and that the convolution is commutative. The proof uses Lemmas 9.4.8 and 9.4.9 in Section 9.4.4, which show that differentiation and integration commute with the distribution pairing.
Theorem 5.3.3 For every $T \in \mathcal{D}'$ there exists $(T_n)$ in $\mathcal{D}$ converging to $T$ in $\mathcal{D}'$, i.e.
$$\mathcal{D} \subset \mathcal{D}' \ \text{densely}. \quad (5.3.2)$$
where we used the fact that if (ρn ) is a mollifier sequence then (ρ̌n ), with ρ̌n (x) = ρn (−x),
is also a mollifier sequence and lim ρ̌n ∗ ϕ = ϕ in D (see Theorem 5.2.5).
n→∞
For the equality (5.3.3) we note that from Remark 9.4.7 we have S = ηS, with a
certain η ∈ D. Then considering Tn = T ∗ ρn , using Lemma 9.4.8 and Lemma 9.4.9
implies
Now we use the fact that $\lim_{n\to\infty}\langle\rho_n(z),\varphi(y+x+z)\rangle = \varphi(y+x)$ in $\mathcal{D}$, see the comment after Theorem 5.2.4, which by using Lemma 9.4.8 implies $\lim_{n\to\infty}\langle T(x),\langle\rho_n(z),\eta(y)\varphi(y+x+z)\rangle\rangle = \langle T(x),\eta(y)\varphi(y+x)\rangle$ in $\mathcal{D}$, to conclude that
$$\langle T * S,\varphi\rangle = \langle S(y),\langle T(x),\eta(y)\varphi(y+x)\rangle\rangle = \langle\eta(y)S(y),\langle T(x),\varphi(y+x)\rangle\rangle = \langle S * T,\varphi\rangle.$$
Finally, (5.3.4) follows from Lemma 9.4.8 as follows:
$$\langle D^\alpha T * S,\varphi\rangle = (-1)^{|\alpha|}\langle T(x),D_x^\alpha\langle S(y),\varphi(x+y)\rangle\rangle = (-1)^{|\alpha|}\langle T(x),\langle S(y),D_x^\alpha\varphi(x+y)\rangle\rangle$$
$$= (-1)^{|\alpha|}\langle T(x),\langle S(y),D_y^\alpha\varphi(x+y)\rangle\rangle = \langle T(x),\langle D^\alpha S(y),\varphi(x+y)\rangle\rangle = \langle T * D^\alpha S,\varphi\rangle,$$
which completes the proof.
By using the convolution and the so-called “fundamental solutions”, one can solve
linear PDEs with constant coefficients as follows.
Definition 5.3.4 Let $L$ be an $m$-th order linear PDE operator with constant coefficients,
$$L = \sum_{|\alpha|\le m} c_\alpha D^\alpha, \quad c_\alpha \in \mathbb{C}.$$
$$LE := \sum_{|\alpha|\le m} c_\alpha D^\alpha E = \delta_0. \quad (5.3.5)$$
The distribution $G$ is called a "fundamental solution to the heat equation". Let $u$ be given by
$$u(\cdot,t) = G(\cdot,t) * u_0(\cdot) + \int_0^t G(\cdot,t-s) * f(\cdot,s)\,ds, \quad \text{in } \mathcal{D}'(\mathbb{R}). \quad (5.3.8)$$
Then $u(\cdot,t) \in C^1((0,\infty);\mathcal{D}'(\mathbb{R})) \cap C^0([0,\infty);\mathcal{D}'(\mathbb{R}))$ (see footnote³), provided $u_0 \in \mathcal{D}'(\mathbb{R})$ and $f(\cdot,t) \in C^0((0,\infty);\mathcal{D}'(\mathbb{R}))$ are with compact support. Furthermore, $u$ solves⁴
$$\partial_t u(\cdot,t) - \Delta u(\cdot,t) = f(\cdot,t) \ \text{for } t > 0, \qquad u(\cdot,0) = u_0(\cdot), \quad \text{in } \mathcal{D}'(\mathbb{R}). \quad (5.3.9)$$
u(·, 0) = u0 (·).
Example 5.3.8 Let $G = G(\cdot,t) \in C^2((0,\infty);\mathcal{D}'(\mathbb{R})) \cap C^1([0,\infty);\mathcal{D}'(\mathbb{R}))$ be the solution to
$$\partial_{tt}G(\cdot,t) - \Delta G(\cdot,t) = 0 \ \text{for } t > 0, \qquad G(\cdot,0) = 0, \quad \partial_t G(\cdot,0) = \delta_0(\cdot), \quad \text{in } \mathcal{D}'(\mathbb{R}). \quad (5.3.11)$$
If $u_0, u_1 \in \mathcal{D}'(\mathbb{R})$ and $f(\cdot,t) \in C^0((0,\infty);\mathcal{D}'(\mathbb{R}))$ have compact support, then $u(\cdot,t) \in C^2((0,\infty);\mathcal{D}'(\mathbb{R})) \cap C^1([0,\infty);\mathcal{D}'(\mathbb{R}))$, and $u$ solves
Chapter 5 5.4. Tempered distributions and Fourier transform
The set $\mathcal{S}$ is called the "space of test functions with fast decay (at $\infty$)".
lim sm (ϕn − ϕ) = 0.
n→∞
Definition 5.4.3 The space of tempered distributions and the convergence in it are
defined as follows.
1) T is linear
The following proposition shows that $\mathcal{S}$ is invariant with respect to derivatives of any order and multiplication by any polynomial. Furthermore, the inclusions $\mathcal{D} \subset \mathcal{S} \subset L^p \subset \mathcal{S}' \subset \mathcal{D}'$, $p \in [1,\infty)$, hold and are dense.⁶
Proposition 5.4.4 Let $\alpha, \beta \in \mathbb{N}_0^N$ and $p \in [1,\infty)$. Then
with all the inclusions dense, the third inclusion being dense even for $p = \infty$.
Proof. See Proposition 9.4.11 in Annex.
Here $i^2 = -1$ and $\xi\cdot x = \sum_{n=1}^{N}\xi_n x_n$.
The following properties in $\mathcal{S}$ are very important. In plain terms, they show that after applying $\mathcal{F}$, every term $D^\alpha u$ becomes $(i\xi)^\alpha\hat u$ in Fourier space. We will see that this property holds for tempered distributions as well, and it leads to the conclusion that a linear partial differential operator with constant coefficients becomes multiplication by a polynomial in Fourier space; see Remark 5.4.12.
⁵ This definition is consistent, because actually the following holds: assume $\lim_{n\to\infty}\langle T_n,\varphi\rangle$ exists for all $\varphi \in \mathcal{S}$ and define $T: \mathcal{S}\to\mathbb{C}$, $\langle T,\varphi\rangle := \lim_{n\to\infty}\langle T_n,\varphi\rangle$. Then $T \in \mathcal{S}'$ and $T_n \to T$ in $\mathcal{S}'$. See [22].
⁶ If $A$ and $B$ are sets and $B$ is endowed with a convergence of sequences, we say "the inclusion $A \subset B$ is dense" if $A \subset B$ and for every $b \in B$ there exists a sequence $(a_n)$ in $A$ converging to $b$ in $B$.
Therefore, $A = \frac{1}{2^{1/2}}$ and
$$\mathcal{F}[e^{-x^2}](\xi) = \frac{1}{2^{1/2}}e^{-\frac{1}{4}\xi^2}, \quad \xi \in \mathbb{R}. \quad (5.4.8)$$
In the case $N \in \mathbb{N}$, taking into account (5.4.8) we have
$$\mathcal{F}[e^{-x^2}](\xi) = \prod_{k=1}^{N}\frac{1}{(2\pi)^{1/2}}\int_{\mathbb{R}} e^{-i\xi_k x_k}e^{-x_k^2}\,dx_k = \prod_{k=1}^{N}\mathcal{F}[e^{-x_k^2}](\xi_k) = \prod_{k=1}^{N}\frac{1}{2^{1/2}}e^{-\frac{1}{4}\xi_k^2}.$$
Hence
$$\mathcal{F}[e^{-x^2}](\xi) = \frac{1}{2^{N/2}}e^{-\frac{1}{4}\xi^2}, \quad \xi \in \mathbb{R}^N. \quad (5.4.9)$$
Finally, by using (5.4.9) and the change of variable $y = \sqrt{a}\,x$, $a > 0$, we get
$$\mathcal{F}[e^{-ax^2}](\xi) = \frac{1}{(2a)^{N/2}}e^{-\frac{1}{4a}\xi^2}. \quad (5.4.10)$$
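Formula (5.4.10) is easy to confirm by direct quadrature with the normalization $\mathcal{F}[u](\xi) = (2\pi)^{-1/2}\int e^{-i\xi x}u(x)\,dx$ used in the text ($N = 1$; the code below is our own sketch, with arbitrarily chosen $a$ and $\xi$):

```python
import math, cmath

def fourier(u, xi, L=10.0, n=24000):
    # (2*pi)^(-1/2) * int_{-L}^{L} exp(-i*xi*x) u(x) dx, midpoint rule
    h = 2 * L / n
    s = sum(cmath.exp(-1j * xi * (-L + (j + 0.5) * h)) * u(-L + (j + 0.5) * h)
            for j in range(n))
    return s * h / math.sqrt(2 * math.pi)

a, xi = 0.7, 1.3
lhs = fourier(lambda x: math.exp(-a * x * x), xi)
rhs = math.exp(-xi * xi / (4 * a)) / math.sqrt(2 * a)   # formula (5.4.10), N = 1
print(abs(lhs - rhs) < 1e-8)
```

Because the integrand and all its derivatives are negligible at the truncation points, the midpoint rule here is accurate far beyond its generic order.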
One may ask whether F[u] ∈ D for any u ∈ D. By using the analyticity of F[u],
one can prove that F[u] ∈ D if and only if u = 0. The introduction of S has been
motivated, among others, by the idea of finding a non-trivial space of test functions
which is invariant under F. It turns out that F[S] = S, as the following theorem shows.
where w̌(ξ) = w(−ξ) for every function w. Furthermore, similar to (5.4.4), (5.4.5), for
all u ∈ S we have
$$D^\alpha\mathcal{F}^{-1}[u](x) = \mathcal{F}^{-1}[(i\xi)^\alpha u](x), \quad (5.4.12)$$
$$(-ix)^\beta\mathcal{F}^{-1}[u](x) = \mathcal{F}^{-1}[D^\beta u](x). \quad (5.4.13)$$
Proof. See Theorem 9.4.13. See Corollary 5.4.15 for another proof of (5.4.11).
The following theorem is important because it shows that Fourier transform preserves
the inner product in L2 . The action of F in Lp spaces is more complex; see Lemma 9.4.14
and Theorem 9.4.15 in Annex for more.
7
For the topology induced by the seminorms sm , in the sense limn→∞ un = u in S implies
limn→∞ F[un ] = F[u] in S; see Definition 5.4.1.
Theorem 5.4.9 (Parseval's theorem) Let $u, v \in \mathcal{S}$. Then $(u,v) = (\hat u,\hat v)$, where $(\cdot,\cdot)$ is the inner product in $L^2$. Furthermore, the Fourier transform $\mathcal{F}$ extends continuously to $L^2$ and defines an isometry from $L^2$ to itself, i.e. for all $u \in L^2$, $\|u\|_{L^2} = \|\hat u\|_{L^2}$.
Proof. For $u, v \in \mathcal{S}$ we have
$$(u,v)_{L^2} = \int_{\mathbb{R}^N} u(x)\overline{v(x)}\,dx = \int_{\mathbb{R}^N}\Big(\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{i(x\cdot\xi)}\hat u(\xi)\,d\xi\Big)\overline{v(x)}\,dx$$
$$= \int_{\mathbb{R}^N}\hat u(\xi)\,\overline{\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{-i(\xi\cdot x)}v(x)\,dx}\,d\xi = \int_{\mathbb{R}^N}\hat u(\xi)\overline{\hat v(\xi)}\,d\xi = (\hat u,\hat v)_{L^2}.$$
Note that Theorem 5.4.8 ensures that $\mathcal{F}[T] \in \mathcal{S}'$. Note also that in some texts $\mathcal{F}[T]$ is defined by $\langle\mathcal{F}[T],\varphi\rangle = \langle T,\mathcal{F}^{-1}[\varphi]\rangle$, which is motivated by the multiplication by the complex conjugate.
Remark 5.4.11 Similar to Theorem 5.4.8, $\mathcal{F}$ maps $\mathcal{S}'$ continuously⁸ onto itself, and formulas (5.4.4) and (5.4.5) hold for every $T \in \mathcal{S}'$. Also, $\mathcal{F}^{-1}: \mathcal{S}' \to \mathcal{S}'$, the inverse of $\mathcal{F}$ in $\mathcal{S}'$, is continuous and invertible from $\mathcal{S}'$ to itself, and is given by
Remark 5.4.12 The properties (5.4.4) and (5.4.5) are useful when solving partial differential equations. For example, consider the following PDE:
$$L(D)u := \sum_{|\alpha|\le m} c_\alpha D^\alpha u = f \quad \text{in } \mathcal{S}'.$$
Applying $\mathcal{F}$ to both sides gives
$$\hat u\sum_{|\alpha|\le m} c_\alpha(i\xi)^\alpha = \hat f, \quad \text{so} \quad \hat u(\xi) = \frac{\hat f}{\sum_{|\alpha|\le m} c_\alpha(i\xi)^\alpha} =: \hat g(\xi).$$
Example 5.4.13 Let $a \in \mathbb{R}^N$ and $\delta_a \in \mathcal{S}'$ be defined by $\langle\delta_a,\varphi\rangle = \varphi(a)$, for all $\varphi \in \mathcal{S}$. Then
$$\mathcal{F}[\delta_a] = \frac{1}{(2\pi)^{N/2}}e^{-i(a\cdot\xi)}, \qquad \mathcal{F}[\delta_0] = \frac{1}{(2\pi)^{N/2}}, \quad (5.4.17)$$
because
$$\langle\mathcal{F}[\delta_a],\varphi\rangle = \langle\delta_a,\mathcal{F}[\varphi]\rangle = \mathcal{F}[\varphi](a) = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{-i(a\cdot x)}\varphi(x)\,dx = \Big\langle\frac{e^{-i(a\cdot x)}}{(2\pi)^{N/2}},\varphi\Big\rangle.$$
Indeed, from the second equality of (5.4.17) we get $\mathcal{F}[(2\pi)^{N/2}\delta_0] = 1$, which implies
$$\langle\mathcal{F}[1],\varphi(x)\rangle = \langle 1,\mathcal{F}[\varphi(x)]\rangle = \langle 1,\mathcal{F}^{-1}[\varphi(-x)]\rangle = \langle\mathcal{F}^{-1}[1],\varphi(-x)\rangle = (2\pi)^{N/2}\langle\delta_0,\varphi(-x)\rangle = (2\pi)^{N/2}\varphi(0).$$
By using Example 5.4.14 we get another (simple) proof of the inverse Fourier transform
F−1 in S.
If we prove that $\mathcal{F}^{-1}[\hat u](z) = u(z)$ then we have proved the formula (5.4.11). Note that
$$e^{i(\xi\cdot z)}\hat u(\xi) = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{-i(\xi\cdot(x-z))}u(x)\,dx = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{-i(\xi\cdot x)}u(x+z)\,dx = \mathcal{F}[u(\cdot+z)](\xi).$$
Hence, F−1 [F[u]] = u. In a similar way one can prove F[F−1 [U ]] = U , for all U ∈ S.
$$\mathcal{F}[x] = \frac{1}{-i}\mathcal{F}[(-ix)\cdot 1] = i\frac{d}{d\xi}\mathcal{F}[1] = i\frac{d}{d\xi}\big((2\pi)^{1/2}\delta_0\big) = i\sqrt{2\pi}\,\delta_0'.$$
$$-\Delta u = \delta_0 \ \text{in } \mathcal{S}', \quad N \ge 1. \quad (5.4.19)$$
First we note that $u \in C^\infty(\mathbb{R}^N\setminus\{0\})$; see Problem 5.17. Also, $u$ must be radially symmetric in $\mathbb{R}^N\setminus\{0\}$. Indeed, taking the Fourier transform of (5.4.19) gives $|\xi|^2\hat u = \frac{1}{(2\pi)^{N/2}}$. So $\hat u$ is radially symmetric. Then for every rotation matrix $R \in \mathbb{R}^{N\cdot N}$, ${}^tR = R^{-1}$, and $\varphi \in \mathcal{D}(\mathbb{R}^N)$ with $\mathrm{supp}(\varphi)\cap\{0\} = \emptyset$ we have
$$\langle u,\varphi\rangle = \langle\hat u,\mathcal{F}^{-1}[\varphi]\rangle = \langle\hat u\circ R,\mathcal{F}^{-1}[\varphi]\rangle = \langle\hat u,(\mathcal{F}^{-1}[\varphi])\circ({}^tR)\rangle = \langle\hat u,\mathcal{F}^{-1}[\varphi\circ({}^tR)]\rangle = \langle\mathcal{F}^{-1}[\hat u],\varphi\circ({}^tR)\rangle = \langle u\circ R,\varphi\rangle,$$
which implies $u(x) = u(R\cdot x)$, $|x| \neq 0$. So $u = u(r)$, $r = |x|$. From the formula of $\Delta$ in spherical coordinates, we get
$$\Delta u(x) = \frac{1}{r^{N-1}}\partial_r\big(r^{N-1}\partial_r u(r)\big) = 0 \quad \text{in } \mathbb{R}^N\setminus\{0\}.$$
$$N = 1:\ u(x) = C|x|, \qquad N = 2:\ u(x) = C\ln|x|, \qquad N \ge 3:\ u(x) = C|x|^{2-N}, \qquad C \in \mathbb{R}.$$
The constant $C$ is found by taking a particular test function in $\langle -\Delta u,\varphi\rangle = \langle\delta_0,\varphi\rangle$, for example $\varphi(x) = e^{-|x|}$ (which is allowed because $\delta_0$ is a distribution of order zero). In the case $N \ge 3$ we get
$$1 = \langle\delta_0,\varphi\rangle = \langle -\Delta u,\varphi\rangle = -\int_{S^{N-1}}\int_0^\infty C r^{2-N}\,r^{1-N}\partial_r\big(r^{N-1}\partial_r e^{-r}\big)\,r^{N-1}\,dr\,d\sigma$$
The distribution G is called a "fundamental solution to −Δ". Using it one shows that the solution to
\[
-\Delta u = f, \qquad f \in D,
\]
is given by u = G ∗ f. For similar results for general linear operators with constant coefficients, see Definition 5.3.4 and Theorem 5.3.6.
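The normalization can be cross-checked numerically. The sketch below (ours, not from the book) uses the closed-form sphere area |S^{N−1}| = 2π^{N/2}/Γ(N/2) and checks that, with the standard choice C = 1/((N−2)|S^{N−1}|), the outward flux of ∇u through the unit sphere equals −1, consistent with ⟨−Δu, φ⟩ = φ(0):

```python
import math

def sphere_area(N):
    # Surface area of the unit sphere S^{N-1} in R^N: 2*pi^{N/2} / Gamma(N/2)
    return 2.0 * math.pi ** (N / 2) / math.gamma(N / 2)

def flux_through_unit_sphere(C, N):
    # For u(x) = C*|x|^{2-N}, the radial derivative on |x| = r is C*(2-N)*r^{1-N},
    # so the outward flux of grad(u) through the unit sphere is C*(2-N)*|S^{N-1}|.
    return C * (2 - N) * sphere_area(N)

fluxes = []
for N in range(3, 7):
    C = 1.0 / ((N - 2) * sphere_area(N))   # assumed normalization for -Delta u = delta_0
    fluxes.append(flux_through_unit_sphere(C, N))
```

The flux being exactly −1 for every N ≥ 3 is the divergence-theorem restatement of −Δu = δ_0.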
Example 5.4.18 Let us find a fundamental solution G to the heat equation solving (5.3.7); see Example 5.3.7. Taking the Fourier transform of the first equation of (5.3.7) with respect to x and then using (5.4.10) with a = t gives
\[
\partial_t\hat G + \xi^2\hat G = 0, \quad\text{so}\quad \hat G(\xi,t) = C e^{-t\xi^2} \quad\text{and}\quad G(x,t) = \frac{C}{\sqrt{2t}}\, e^{-\frac{x^2}{4t}}, \quad C \in \mathbb{R}.
\]
The constant C is chosen such that lim_{t→0} ∫_R G(x,t)φ(x)\,dx = φ(0). By changing the variable and using (5.4.7), one easily finds C = 1/√(2π), which leads to (5.3.10).
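A quick numerical sanity check (ours, not from the book) that with this normalization the kernel G(x,t) = e^{−x²/(4t)}/√(4πt) has unit mass and concentrates at 0 as t → 0:

```python
import math

def G(x, t):
    # 1D heat kernel with C = 1/sqrt(2*pi): G(x,t) = e^{-x^2/(4t)} / sqrt(4*pi*t)
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

def integrate(f, a, b, n=20000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

phi = lambda x: math.exp(-x * x)              # test function, phi(0) = 1
mass = integrate(lambda x: G(x, 0.1), -20, 20)          # should be ~1
smeared = integrate(lambda x: G(x, 1e-4) * phi(x), -1, 1)  # should be ~phi(0)
```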
Similarly, we can find the fundamental solution G to the wave equation solving (5.3.11); see Example 5.3.8. If T = ∂_t G then T satisfies
\[
\partial_{tt}T - \partial_{xx}T = 0, \qquad T(\cdot,0) = \delta_0, \qquad \partial_t T(\cdot,0) = 0.
\]
Taking the Fourier transform of the first equation above with respect to x and then using (5.4.17), we get
\[
\hat T_{tt} + \xi^2\hat T = 0; \quad\text{so}\quad \hat T(\xi,t) = A e^{i\xi t} + B e^{-i\xi t}, \quad A, B \in \mathbb{C},
\]
and therefore
\[
T(x,t) = \sqrt{2\pi}\,\big(A\,\delta_0(x+t) + B\,\delta_0(x-t)\big).
\]
We choose $A = B = \frac{1}{2\sqrt{2\pi}}$ so that $T(\cdot,0) = \delta_0$ and $T(x,t) = \frac12\big(\delta_0(x+t) + \delta_0(x-t)\big)$. From $\partial_t G(x,t) = T(x,t)$ and $G(\cdot,0) = 0$ it follows that
\[
G(x,t) = \frac{1}{2}\big(H(x+t) - H(x-t)\big) = \frac{1}{2}\,H(t-|x|).
\]
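For a smooth initial velocity g, the formula above gives the d'Alembert solution u(x,t) = (G(·,t) ∗ g)(x) = ½∫_{x−t}^{x+t} g(s)\,ds. A small numerical sanity check (ours, not from the book), with the assumed choice g(s) = e^{−s²}, so that u has a closed form via the error function:

```python
import math

def u(x, t):
    # u(x,t) = 1/2 * integral of exp(-s^2) over (x-t, x+t), written with erf
    return math.sqrt(math.pi) / 4 * (math.erf(x + t) - math.erf(x - t))

# check the wave equation u_tt = u_xx at an interior point by central differences
h = 1e-3
x0, t0 = 0.3, 0.7
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = utt - uxx                         # should be ~0

# initial conditions: u(x,0) = 0 and u_t(x,0) = g(x)
ut0 = (u(x0, h) - u(x0, -h)) / (2 * h)       # should be ~exp(-0.09)
```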
Problems
Problem 5.1 Prove that ρ ∈ D(R^N), where
\[
\forall x \in \mathbb{R}^N,\quad \rho(x) = \begin{cases} e^{\frac{1}{|x|^2-1}}, & |x| < 1,\\[2pt] 0, & |x| \ge 1. \end{cases}
\]
Problem 5.6 Let Ω ⊂ R^N be open and T ∈ D'(Ω). Assume T ≥ 0, i.e. ⟨T, φ⟩ ≥ 0 for all real-valued φ ∈ D(Ω) with φ ≥ 0. Prove that T is a distribution of order zero (a measure).
Problem 5.8 Prove that if T ∈ D'(R) and supp(T) = {0} then there exist m ∈ N_0 and c_k ∈ C, k = 0, …, m, such that⁹
\[
T = \sum_{k=0}^{m} c_k\,\delta_0^{(k)}.
\]
Problem 5.10 Let $u_n = \frac{\sin(nx)}{\sqrt{n}}$. Find the limits of $u_n$ and $u_n'$ in D'(R).
Problem 5.14 Let u ∈ C¹(R\{x_n, n ∈ Z}) with u having left and right limits at every x_n. Write the distribution u'.
⁹ Note that the result holds in dimension N: if T ∈ D'(R^N) and supp(T) = {0} then there exist m ∈ N_0 and c_α ∈ C, |α| ≤ m, such that $T = \sum_{|\alpha|\le m} c_\alpha D^\alpha \delta_0$.
Problem 5.16 Let T ∈ D'(R), h ∈ R, δ > 0 and define T_h by ⟨T_h, φ⟩ = ⟨T, φ(·+h)⟩. Show that T_h ∈ D'(R). Furthermore, show that if T_h = T for all |h| < δ, then T is constant.
Problem 5.18 Let f ∈ D'(R) and k ∈ C^∞(R). Find the (general) solution T ∈ D'(R) of T' = f and of T' + kT = f.
¹⁰ In general, an equation $\sum_{k=0}^{n} c_k y^{(k)} = 0$ is transformed to P(ξ)ŷ = 0, with P a polynomial. Using the roots ξ_i of P(ξ) = 0, each with multiplicity m_i, this leads to $\hat y = \sum_i \sum_{j=0}^{m_i-1} a_{i,j}\,\delta_0^{(j)}(\xi - \xi_i)$, which after applying F^{-1} gives y.
6. Sobolev spaces
Sobolev spaces were introduced by Sergei Lvovich Sobolev in the 1930s, and have
since been widely embraced and developed by other mathematicians. Sobolev spaces
represent a natural functional framework to describe a rich variety of real-world prob-
lems, and they provide solutions to a large number of PDEs. Mathematically, they
provide elegant tools for studying PDEs because they have rich properties in terms of
approximation, compactness, and boundary values.
In order to avoid technicalities related to the analysis of general W s,p (Ω) spaces,
s ∈ R, p ∈ [1, ∞], we will focus mainly on H s (Ω) = W s,2 (Ω) spaces, s > 0. The reason
for this is that the analysis of H s (Ω) spaces is greatly simplified by the use of Fourier
transform, as well as the fact that H s (Ω) spaces are Hilbert spaces.
For more on W s,p (Ω) spaces in connection with the development of this chapter, see
section 9.5 in Annex, and for an extensive treatment of Sobolev spaces, see classical
books such as [1, 5, 14, 33, 40, 44, 51].
2) The space W^{k,p}(Ω) is called a "W^{k,p} Sobolev space in Ω". The integer k is called the "order of W^{k,p}(Ω)", and k − N/p is called the "differential dimension of W^{k,p}(Ω)". The quantity ‖u‖_{W^{k,p}(Ω)}, resp. |u|_{W^{k,p}(Ω)}, is called the "norm, resp. seminorm, of u in W^{k,p}(Ω)".
4) For p ∈ [1, ∞), sometimes we use the other equivalent norm and seminorm:
\[
\|u\|_{W^{k,p}(\Omega)} = \sum_{|\alpha|\le k} \|D^\alpha u\|_{L^p(\Omega)}, \qquad |u|_{W^{k,p}(\Omega)} = \sum_{|\alpha|=k} \|D^\alpha u\|_{L^p(\Omega)}. \tag{6.1.6}
\]
5) If E(Ω) is any of the spaces above, we define E_loc(Ω) = {u : u ∈ E(ω), ∀ω ⋐ Ω}.
The following result is classical. The proof of completeness is simple and based on the
completeness of Lp (Ω) spaces, while the separability and reflexivity require some results
from Functional Analysis. The proof can be found in many books treating Sobolev
spaces; see, for example, [1, 5, 14].
Theorem 6.1.2 Let k ∈ N and Ω ⊂ RN be open. W k,p (Ω) equipped with the norm
· W k,p (Ω) is a Banach space for all p ∈ [1, ∞]. W k,p (Ω) is separable for all p ∈ [1, ∞),
and reflexive for all p ∈ (1, ∞).
Chapter 6 6.1. Definitions and some first properties
Hence u ∈ L^p(B) iff pq > −N. For the derivatives, note that ∂_i u = q|x|^{q−2}x_i and
\[
\int_B |\partial_i u|^p\,dx = q^p\int_{S^{N-1}}\int_0^1 r^{N-1}\,r^{p(q-2)}\,|x_i|^p\,dr\,d\theta \sim \int_0^1 r^{N-1+p(q-1)}\,dr,
\]
\[
D^\alpha v = D^\alpha(u * \rho_n) = D^\alpha u * \rho_n, \tag{6.1.7}
\]
where we used the result of Problem 5.12. The first integral tends to zero by the construction of v_n, and the second integral tends to zero by the Lebesgue dominated convergence theorem. For the third integral, one notices that D^β η_n(x) = n^{−|β|} D^β η(x/n), which implies that this integral also tends to zero.
The claim for S follows from the fact that given φ ∈ S, if φ_n = η_n φ then lim_{n→∞} φ_n = φ in W^{k,p}, which is proved similarly to the convergence of u_n to u above.
The following result, even though it only provides the density of D(Ω) in L^p(Ω) and in W^{k,p}_loc(Ω), is useful in many applications because it does not depend on the regularity of Ω.
It follows that V_n ∈ C^∞(R^N) and if 1/n < dist(supp(η), G^c) then V_n(x) = 0 for x ∈ G^c. Hence, if u_n = V_n|_Ω we have u_n ∈ D(Ω). Also, as η = 1 in K, from lim_{n→∞} V_n = V in W^{k,p}(R^N) it follows lim_{n→∞} u_n = u in W^{k,p}(ω). Therefore, we have proved that for every ω ⋐ G ⋐ Ω and for every n ∈ N, there exists u_n ∈ D(Ω),
ii) Now we will repeat i) for an increasing sequence of domains like ω and G. Namely, for n ∈ N set O_n = {x ∈ Ω : d(x, ∂Ω) > 1/n, |x| < n}. Note that O_n is open, \(\overline{O_n}\) is compact, \(\overline{O_n}\) ⊂ O_{n+1}, and ∪_{n∈N} O_n = Ω.
For every n ∈ N, from part i) with ω = O_n and G = O_{n+1}, there exist u_n ∈ D(Ω) and η_n ∈ D(Ω), a {O_n, O_{n+1}} cut-off function, such that
Then (u_n) is the desired sequence. Indeed, from Lebesgue's dominated convergence theorem we have
\[
\|u_n - u\|_{L^p(\Omega)}^p \le C\Big(\int_\Omega |u_n - \eta_n u|^p + \int_\Omega |1-\eta_n|^p\,|u|^p\Big) \to 0 \quad\text{as } n \to \infty.
\]
Finally, for a given ω ⋐ Ω there exists n_ω such that ω ⊂ O_n for all n > n_ω. Therefore for such n, we have
\[
\|u_n - u\|_{W^{k,p}(\omega)} \le \|u_n - u\|_{W^{k,p}(G_n)} < 1/n,
\]
Using a similar technique as above, one can prove the following result; see, for exam-
ple, [1, 5, 14].
Theorem 6.1.9 Let k ∈ N, p ∈ [1, ∞) and Ω ⊂ RN be open. Then C ∞ (Ω) ∩ W k,p (Ω) is
dense in W k,p (Ω).
Proof. Clearly uv, vD^α u, uD^α v ∈ L^p(Ω). So it is enough to prove (6.1.9). We assume first p ∈ [1, ∞). For φ ∈ D(Ω), there exists ω ⋐ Ω such that supp(φ) ⊂ ω. Let (u_n), resp. (v_n), be a sequence in D(Ω) satisfying Theorem 6.1.8, associated with u, resp. v, and such that furthermore u_n, resp. v_n, converges a.e. and in W^{1,p}(ω) to u, resp. v (see Theorem 1.3.7). Then for α ∈ N_0^N, |α| = 1, we get
\[
\langle D^\alpha(uv), \varphi\rangle = -\int_\Omega uv\,D^\alpha\varphi\,dx = -\lim_{n\to\infty}\int_\Omega u_n v_n\,D^\alpha\varphi = \lim_{n\to\infty}\int_\omega \varphi\,(D^\alpha u_n\, v_n + u_n\, D^\alpha v_n) = \int_\Omega (D^\alpha u\, v + u\, D^\alpha v)\,\varphi,
\]
where when passing to the limit we used Lebesgue's dominated convergence theorem.
In the case p = ∞, given φ ∈ D(Ω) we choose an open bounded set ω, supp(φ) ⊂ ω ⋐ Ω. We can apply the case p ∈ [1, ∞) in ω, which proves (6.1.9) and completes the proof.
The following two propositions show how the operation of composition holds in the
context of Sobolev spaces.
Proof. As |h(t)| ≤ ‖h'‖_{L^∞(R)}|t|, it follows that h∘u, h'(u)D^α u ∈ L^p(Ω). It is enough then to prove the equality in (6.1.11).
We consider first the case p ∈ [1, ∞). Let ω be an open set, ω ⋐ Ω. From Theorem 6.1.8, there exists u_n ∈ D(Ω) such that lim_{n→∞} u_n = u in L^p(Ω) and lim_{n→∞} ∇u_n = ∇u in L^p(ω). Also, without restriction we may assume that u_n → u pointwise in Ω, and ∇u_n → ∇u pointwise in ω; see Theorem 1.3.7.
From the assumption on h and the convergence of u_n, we have lim_{n→∞} h(u_n) = h(u) in L^p(Ω). Using the Lebesgue DCT and the convergence a.e. of u_n and of ∇u_n yields lim_{n→∞} h'(u_n) = h'(u) and lim_{n→∞} h'(u_n)D^α u_n = h'(u)D^α u in L^p(ω). Then we get
\[
\langle D^\alpha(h\circ u), \varphi\rangle = -\langle h\circ u, D^\alpha\varphi\rangle = -\lim_{n\to\infty}\langle h\circ u_n, D^\alpha\varphi\rangle = \lim_{n\to\infty}\langle D^\alpha(h\circ u_n), \varphi\rangle = \lim_{n\to\infty}\langle h'(u_n)D^\alpha u_n, \varphi\rangle = \langle h'(u)D^\alpha u, \varphi\rangle,
\]
which completes the proof in the case p ∈ [1, ∞).
In the case p = ∞, we choose an arbitrary ω ⋐ Ω. Then u ∈ W^{1,p}(ω) for all p ∈ [1, ∞), and so the equality in (6.1.11) holds in ω, hence in Ω.
We note that Proposition 6.1.12 does not hold in general if h is not C¹. For example, let Ω = (0,1), u = x^α, α ∈ (1/2, 1/√2), and h(x) = x^α. From Example 6.1.3, we have u, h ∈ H¹(Ω) because p(α−1) > −N (here p = 2 and N = 1), but h∘u(x) = x^{α²} ∉ H¹(Ω) because p(α²−1) < −N.
where $\theta^\beta = \theta_1^{\beta_1}\cdots\theta_N^{\beta_N}$.
Proof. Let u ∈ W^{1,p}(Ω). From the classical change of variable theorem for L^p(Ω) functions, we have
\[
\forall u \in L^p(\Omega),\quad u\circ\theta \in L^p(G) \quad\text{and}\quad \int_\Omega |u|^p\,dx = \int_G |u\circ\theta|^p\,|D\theta|\,dy. \tag{6.1.13}
\]
So u∘θ and all the right-hand side terms of (6.1.12) are in L^p(G). Then to prove u∘θ ∈ W^{1,p}(G), it is enough to prove (6.1.12).
Consider first the case p ∈ [1, ∞). For φ ∈ D(Ω), there exists ω ⋐ Ω such that supp(φ) ⊂ ω. Let (u_n) in D(Ω), lim_{n→∞} ‖u_n − u‖_{W^{1,p}(ω)} = 0, as in Theorem 6.1.8. Note that (6.1.13) implies that lim_{n→∞} u_n∘θ = u∘θ and lim_{n→∞} D^α u_n∘θ = D^α u∘θ in L^p(G), for all |α| = 1. Therefore we get
\[
\langle D^\alpha(u\circ\theta), \varphi\rangle = -\langle u\circ\theta, D^\alpha\varphi\rangle = -\lim_{n\to\infty}\langle u_n\circ\theta, D^\alpha\varphi\rangle = \lim_{n\to\infty}\langle D^\alpha(u_n\circ\theta), \varphi\rangle = \lim_{n\to\infty}\Big\langle \sum_{|\beta|=1} D^\alpha\theta^\beta\; D^\beta u_n\circ\theta,\ \varphi\Big\rangle = \Big\langle \sum_{|\beta|=1} D^\alpha\theta^\beta\; D^\beta u\circ\theta,\ \varphi\Big\rangle,
\]
Let G, Ω ⊂ R^N be two open sets and θ = (θ_1, …, θ_N) ∈ W^{k,∞}(G; Ω) be invertible with θ^{-1} ∈ W^{k,∞}(Ω; G). If u ∈ W^{k,p}(Ω) then u∘θ ∈ W^{k,p}(G), p ∈ [1,∞], and, furthermore, D^α(u∘θ), |α| = k, is a finite sum of terms of the form D^β u∘θ, |β| ≤ k, multiplied by D^γ θ^σ terms, with |γ| ≤ k, |σ| = 1.
The proof is carried out by induction on k. Indeed, for k = 1 the claim holds. Assume it holds for k−1, so D^α(u∘θ), |α| = k−1, is a sum of terms D^β u∘θ, |β| ≤ k−1, multiplied by D^γ θ^σ terms, with |γ| ≤ k−1, |σ| = 1. Then applying Proposition 6.1.13 to D^α(u∘θ), |α| = k−1, proves the claim.
Example 6.1.14 As a corollary of Proposition 6.1.12, let us prove the following. Let Ω ⊂ R^N be an open set, u ∈ W^{1,p}(Ω), p ∈ [1,∞], and set u^± = max(±u, 0). Then
\[
u^\pm \in W^{1,p}(\Omega), \tag{6.1.15}
\]
\[
\nabla u^\pm = (\nabla u)\,\mathbf{1}_{\{u \gtrless 0\}}, \quad\text{and} \tag{6.1.16}
\]
\[
\nabla u = 0 \ \text{a.e. in } \{u = C\} := \{x \in \Omega,\ u(x) = C\}, \quad C \in \mathbb{R}. \tag{6.1.17}
\]
Indeed, for ε > 0 let f_ε(t) = ((t² + ε²)^{1/2} − ε)H(t), and u_ε = f_ε ∘ u. From Proposition 6.1.12, we have u_ε ∈ W^{1,p}(Ω) and for φ ∈ D(Ω) we get
\[
\langle Du^+, \varphi\rangle = -\int_\Omega u^+ D\varphi\,dx = -\lim_{\varepsilon\to 0}\int_\Omega u_\varepsilon\, D\varphi\,dx = \lim_{\varepsilon\to 0}\int_\Omega Du_\varepsilon\,\varphi\,dx = \lim_{\varepsilon\to 0}\int_{\{u>0\}} \frac{u\,\nabla u}{(u^2+\varepsilon^2)^{1/2}}\,\varphi = \int_\Omega \mathbf{1}_{\{u>0\}}\,\nabla u\,\varphi,
\]
which proves the claim for u^+. The result for u^− follows from the result for u^+ by noting that u^− = −(−u)^+. For (6.1.17) set v = u − C. Since u = C + v^+ + v^−, from (6.1.16) it follows
6.2 H^s spaces and Fourier transform: W^{s,p} and W_0^{s,p} spaces
Proof. Note that from Theorem 5.4.9 and Proposition 5.4.6, (6.2.1) holds in S.
i) F : H^k → L²_k is well-defined and continuous. Indeed, for u ∈ S, from (6.2.1) we get
\[
\|F[u]\|_{L^2_k}^2 = \int_{\mathbb{R}^N} \langle\xi\rangle^{2k}|\hat u|^2\,d\xi \sim \sum_{|\alpha|\le k}\int_{\mathbb{R}^N} |(i\xi)^\alpha|^2|\hat u|^2\,d\xi = \sum_{|\alpha|\le k}\|F[D^\alpha u]\|_{L^2(\mathbb{R}^N)}^2 = \|u\|_{H^k}^2. \tag{6.2.3}
\]
As S ↪_d H^k, (6.2.3) implies that F extends to a continuous linear operator from H^k to L²_k and (6.2.1) holds for all u ∈ H^k.
ii) F : H^k → L²_k is invertible and its inverse F^{-1} is continuous. Indeed, let v ∈ L²_k. Then v ∈ L² and from Theorem 5.4.9 there exists a unique u ∈ L² such that F[u] = v. From Remark 5.4.11 and v ∈ L²_k, it follows F[D^α u] = (iξ)^α v ∈ L² for all |α| ≤ k, which
¹ A continuous linear invertible operator from a normed space X onto another normed space Y.
Definition 6.2.2 For s > 0, the "fractional order Sobolev space H^s" is defined by
\[
H^s = \{u \in L^2,\ \hat u := F[u] \in L^2_s\}, \quad\text{equipped with} \tag{6.2.4}
\]
\[
(u,v)_{H^s} = (\hat u, \hat v)_{L^2_s}, \tag{6.2.5}
\]
where
\[
L^2_s = \big\{v \in L^2,\ \|v\|_{L^2_s}^2 = (v,v)_{L^2_s} < \infty\big\}, \quad\text{and} \tag{6.2.6}
\]
\[
(v,w)_{L^2_s} = \int_{\mathbb{R}^N} (\langle\xi\rangle^s v)\,\overline{(\langle\xi\rangle^s w)}\,d\xi. \tag{6.2.7}
\]
Note that in some texts ⟨ξ⟩^{2s} = 1 + |ξ|^{2s} is used. The results are exactly the same because (1+|ξ|²)^s and 1+|ξ|^{2s} are equivalent for s > 0. This is why we will use ⟨ξ⟩^{2s} = 1 + |ξ|^{2s} whenever it is more appropriate.
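The equivalence can be checked directly: for s ∈ (0,1) one has max(1, |ξ|^{2s}) ≤ (1+|ξ|²)^s ≤ 1+|ξ|^{2s}, so the ratio of the two weights lies in [1/2, 1]. A quick numerical illustration (ours, not from the book), for the sample value s = 0.7:

```python
# Ratio (1+|xi|^2)^s / (1+|xi|^{2s}) stays between 1/2 and 1 for s in (0,1).
s = 0.7
vals = [0.0, 0.001, 0.5, 1.0, 10.0, 1e3, 1e8]
ratios = [(1 + x * x) ** s / (1 + x ** (2 * s)) for x in vals]
```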
Remark 6.2.5 There is another norm for H^s spaces which serves as motivation for defining W^{s,p}(Ω) spaces. Indeed, let s ∈ (0,1) and u ∈ H^s. Using the fact
\[
F[u(\cdot+y)](\xi) = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{-i(\xi\cdot x)}u(x+y)\,dx = e^{i(\xi\cdot y)}\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N} e^{-i(\xi\cdot z)}u(z)\,dz = e^{i(\xi\cdot y)}F[u](\xi),
\]
and the Plancherel theorem, one gets
\[
\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{|u(x+y)-u(x)|^2}{|y|^{N+2s}}\,dx\,dy = \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{|F[u(\cdot+y)](\xi)-F[u](\xi)|^2}{|y|^{N+2s}}\,d\xi\,dy
= \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{|e^{i(y\cdot\xi)}-1|^2}{|y|^{2s}}\cdot\frac{1}{|y|^N}\,|F[u](\xi)|^2\,d\xi\,dy
= \int_{\mathbb{R}^N}\Big(\int_{\mathbb{R}^N}\frac{|e^{i(y\cdot\xi)}-1|^2}{|y|^{2s}}\cdot\frac{dy}{|y|^N}\Big)|F[u](\xi)|^2\,d\xi
=: \int_{\mathbb{R}^N} h(\xi)\,|F[u](\xi)|^2\,d\xi,
\]
which implies²
\[
h(\xi) = h\Big(|\xi|\,\frac{\xi}{|\xi|}\Big) = |\xi|^{2s}\,h\Big(\frac{\xi}{|\xi|}\Big) = C_s|\xi|^{2s}, \qquad C_s = \int_{\mathbb{R}^N} \frac{|e^{i(y\cdot\zeta)}-1|^2}{|y|^{2s}}\cdot\frac{dy}{|y|^N}, \quad |\zeta| = 1.
\]
where ⌊s⌋ is the largest integer smaller than s. Hence from (6.2.8), for 0 < s ∉ N we get
\[
\|u\|_{H^s}^2 \sim \int_{\mathbb{R}^N} \langle\xi\rangle^{2\lfloor s\rfloor}|F[u]|^2\,d\xi + \sum_{|\alpha|=\lfloor s\rfloor}\int_{\mathbb{R}^N} |\xi|^{2(s-\lfloor s\rfloor)}|(i\xi)^\alpha F[u]|^2\,d\xi
\]
² Here C_s < ∞ iff s ∈ (0,1).
\[
= \|u\|_{H^{\lfloor s\rfloor}}^2 + \sum_{|\alpha|=\lfloor s\rfloor}\int_{\mathbb{R}^N} |\xi|^{2(s-\lfloor s\rfloor)}|F[D^\alpha u]|^2\,d\xi
= \|u\|_{H^{\lfloor s\rfloor}}^2 + \sum_{|\alpha|=\lfloor s\rfloor}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{|D^\alpha u(x)-D^\alpha u(y)|^2}{|x-y|^{N+2(s-\lfloor s\rfloor)}}\,dx\,dy.
\]
\[
W^{s,p}(\Omega) = \big\{u \in L^p(\Omega),\ \|u\|_{W^{s,p}(\Omega)}^p := \|u\|_{W^{\lfloor s\rfloor,p}(\Omega)}^p + |u|_{W^{s,p}(\Omega)}^p < \infty\big\}, \tag{6.2.11}
\]
\[
|u|_{W^{s,p}(\Omega)}^p = \sum_{|\alpha|=\lfloor s\rfloor}\Big\|\frac{D^\alpha u(x)-D^\alpha u(y)}{|x-y|^{\frac{N}{p}+(s-\lfloor s\rfloor)}}\Big\|_{L^p(\Omega\times\Omega)}^p,
\]
ii) For s > 0 and p ∈ [1, ∞), we denote by W_0^{s,p}(Ω) the closure of D(Ω) in the norm (6.2.11) or (6.1.2).
iii) If p = 2 we write H^s(Ω) = W^{s,2}(Ω), H_0^s(Ω) = W_0^{s,2}(Ω), and equip them with the inner product
Note that the spaces W^{s,p}(Ω) and W_0^{s,p}(Ω) are Banach spaces and for p = 2 they are Hilbert spaces; see, for example, [1, 44, 51]. Also, it follows that the space H^s of (6.2.10) is the same as W^{s,2} of (6.2.11).
Chapter 6 6.3. Continuous, compact, and dense embedding theorems in H s (Ω)
Remark 6.2.7 The case s ∉ N and p = ∞ is excluded from Definition 6.2.6. The reason is that W^{s,∞}(Ω) = C_b^{⌊s⌋, s−⌊s⌋}(Ω); see Proposition 1.1.11. Indeed, by formally letting p → ∞ in Definition 6.2.6, one would define (see, for example, [41])
\[
\|u\|_{W^{s,\infty}(\Omega)} := \|u\|_{W^{\lfloor s\rfloor,\infty}(\Omega)} + \sum_{|\alpha|=\lfloor s\rfloor}\Big\|\frac{D^\alpha u(x)-D^\alpha u(y)}{|x-y|^{s-\lfloor s\rfloor}}\Big\|_{L^\infty(\Omega\times\Omega)} < \infty.
\]
Example 6.2.8 Let Ω = (−1,1) ⊂ R and u(x) = H(x), x ∈ Ω. We wonder for which (s,p) we have u ∈ W^{s,p}(Ω). We know that u ∉ W^{1,p}(Ω) for all p ∈ [1,∞]. So we restrict the discussion to the case s ∈ (0,1), p ∈ [1,∞). Clearly u ∈ L^p(Ω). We compute
\[
|u|_{W^{s,p}(\Omega)}^p = \int_{-1}^1\int_{-1}^1 \frac{|u(x)-u(y)|^p}{|x-y|^{1+sp}}\,dx\,dy = 2\int_{-1}^0\int_0^1 \frac{dx\,dy}{(x-y)^{1+sp}}
= -\frac{2}{sp}\int_{-1}^0 \big[(x-y)^{-sp}\big]_{x=0}^{x=1}\,dy = -\frac{2}{sp}\int_{-1}^0 \big((1-y)^{-sp}-(0-y)^{-sp}\big)\,dy
\]
\[
= \frac{2}{sp}\cdot\frac{1}{sp-1}\big[(1-y)^{-sp+1}-(0-y)^{-sp+1}\big]_{-1}^0 < \infty, \quad\text{iff } sp < 1.
\]
So u ∈ W^{s,p}(Ω) for all s ∈ (0,1), p ∈ [1,∞) such that sp < 1.
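The computation above can be cross-checked numerically. The sketch below (ours, not from the book) compares a midpoint-rule evaluation of the double integral with the closed form 2(2 − 2^{1−sp})/(sp(1 − sp)) obtained by carrying out the last integration, for sp = 1/2:

```python
def seminorm_numeric(sp, n=400):
    # |H|^p_{W^{s,p}(-1,1)} = 2 * int_{-1}^{0} int_{0}^{1} (x - y)^{-1-sp} dx dy,
    # approximated with a midpoint rule in both variables
    h = 1.0 / n
    total = 0.0
    for i in range(n):          # y in (-1, 0)
        y = -1.0 + (i + 0.5) * h
        for j in range(n):      # x in (0, 1)
            x = (j + 0.5) * h
            total += (x - y) ** (-1.0 - sp) * h * h
    return 2.0 * total

def seminorm_closed(sp):
    # closed form for sp < 1: 2*(2 - 2^{1-sp}) / (sp*(1-sp))
    return 2.0 * (2.0 - 2.0 ** (1.0 - sp)) / (sp * (1.0 - sp))

approx = seminorm_numeric(0.5)
exact = seminorm_closed(0.5)
```

The closed form blows up as sp → 1, matching the divergence of the seminorm at sp = 1.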
Continuous, resp. compact, embedding theorems in Sobolev spaces show that if the differential dimension s − N/p is large enough then the Sobolev space W^{s,p} is continuously embedded in more familiar spaces, such as C^0, resp. compactly embedded in larger spaces, such as L². Compact embeddings are particularly important when studying nonlinear PDEs.
Density theorems are significant when H is a set of regular functions dense in a space E. For example, one can prove a certain property first for u ∈ H, which a priori is simpler than the proof for u ∈ E. Then, by using the density of H in E, one can prove that the same property holds for u ∈ E. This is done by writing the property for a sequence (u_n) in H, with lim_{n→∞} u_n = u in E, and then passing to the limit.
6.3.1 Case Ω = RN
First, we develop the results in H^s spaces. Here, the analysis is simplified by the fact that H^s spaces are homeomorphic⁴ with L²_s spaces. Next, the results are extended to H^s(Ω) spaces. Here, the regularity of Ω is important: it allows us to extend H^s(Ω) functions to H^s functions and use the results developed for H^s spaces.
Theorem 6.3.2 For all s > 0, the space H^s equipped with the inner product (u,v)_{H^s} is a Hilbert space. Furthermore, D ↪_d H^s and S ↪_d H^s.
We take ψ_n = φ_{m_n} with m_n such that ‖u_n − φ_{m_n}‖_{H^s(R^N)} < 1/n. So (ψ_n) is in D and converges to u in H^s.
⁴ A homeomorphism that preserves the norm.
Since s − N/2 > 0, we obtain
\[
\int_{\mathbb{R}^N} \frac{d\xi}{\langle\xi\rangle^{2s}} \sim 1 + \int_1^\infty \frac{r^{N-1}}{(1+r^2)^s}\,dr \sim 1 + \int_1^\infty r^{N-1-2s}\,dr < \infty,
\]
For the proof of the following result, see Theorem 9.5.1 in Annex.
For the proof of the following result, see Theorem 9.5.1 in Annex.
Theorem 6.3.4 Assume s − N/2 ∈ (m, m+1], m ∈ N_0, and let σ ∈ (0, s − N/2 − m). Then we have
\[
|D^\alpha u(x) - D^\alpha u(y)| \le C\,\|D^\alpha u\|_{H^{s-m}}\,|x-y|^\sigma. \tag{6.3.3}
\]
Remark 6.3.5 One may wonder whether the condition s − N/2 > 0 in Theorems 6.3.3 and 6.3.4 is due to the technique we have used. Actually, this condition is necessary, in the sense that if s − N/2 ≤ 0 the theorems do not hold. Indeed, we have seen in Example 6.2.3 that u = 1_{(0,1)} ∈ H^s(R) only for s − 1/2 < 0 (so, here N = 1 and s − N/2 = s − 1/2 < 0). Clearly u ∉ C^0. In this sense, the condition s − N/2 > 0 in Theorems 6.3.3 and 6.3.4 is optimal.
Theorems 6.3.3 and 6.3.4 do not hold in general even in the critical case s − N/2 = 0. For example, let u(x) = 1_{\{|x|<1\}}\ln|1 − \ln|x||, x ∈ R². Note that u ∈ H¹(R²), however u ∉ C^0 (here, s − N/2 = 1 − 2/2 = 0). The proof of u ∈ H¹(R²) goes as follows:
\[
\int_{\mathbb{R}^2} |u|^2\,dx = \int_{\{|x|<1\}} |u|^2\,dx \le C\int_0^1 r\,\big|\ln(1-\ln r)\big|^2\,dr < \infty,
\]
⁵ We recall that if a space symbol is not followed by a domain, then it is understood that the domain is R^N.
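These two integrals, and the unboundedness of u, can be checked numerically. In the sketch below (ours, not from the book), the gradient integral is computed after the substitution t = 1 − ln r, which turns ∫_0^1 dr/(r(1 − ln r)²) into ∫_1^∞ dt/t² = 1:

```python
import math

def integrate(f, a, b, n=200000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# L^2 mass term: int_0^1 r * ln(1 - ln r)^2 dr -- bounded integrand, finite value
mass = integrate(lambda r: r * math.log(1 - math.log(r)) ** 2, 0, 1)

# gradient term in R^2: |u'(r)| = 1/(r(1 - ln r)); after t = 1 - ln r the
# integral int_0^1 dr/(r(1 - ln r)^2) becomes int_1^oo dt/t^2 = 1
grad = integrate(lambda t: t ** -2, 1, 1000)

# yet u itself is unbounded: u(r) = ln(1 - ln r) -> infinity slowly as r -> 0
big = math.log(1 - math.log(1e-300))
```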
6.3.2 Case Ω ⊊ R^N
We start with the definition of "extension" and "restriction" operators. They are used to extend H^s(Ω) functions to H^s and then to restrict H^s functions to H^s(Ω), which makes it possible to apply the H^s embedding results to the H^s(Ω) case.
For the proof of the theorem in the case s ∈ N and p ∈ [1,∞], see [48]. For the proof in the case 0 < s ∉ N and p ∈ (1,∞), and a proof by interpolation, see [54]. For a general direct proof, see [45].
The theorem below gives a detailed, instructive, step-by-step proof of the existence of an extension operator in the case s = 1, p ∈ [1,∞], and Ω = R^N_+.
Theorem 6.3.8 There exists a linear continuous extension operator E : W^{1,p}(R^N_+) → W^{1,p}(R^N), p ∈ [1,∞].
E is the required extension operator. Indeed, clearly E is linear. Let us prove that E is well-defined and continuous.
(i) E is well-defined, i.e. if u ∈ W^{1,p}(R^N_+) then U ∈ W^{1,p}(R^N).
(i.1) Let α = (α', α_N) ∈ N_0^N, |α| = 1. Then for φ ∈ D(R^N) we have
\[
\langle D^\alpha U, \varphi\rangle = -\int_{\mathbb{R}^N} U(x)\,D^\alpha\varphi(x)\,dx
= -\int_{\mathbb{R}^N_+} u(x)\,D^\alpha\varphi(x)\,dx - \int_{\mathbb{R}^N_-} u(x',-x_N)\,D^\alpha\varphi(x',x_N)\,dx,
\]
which we rewrote using the change of variable formula (see Proposition 6.1.13)
\[
\int_{\mathbb{R}^N_\pm} u(x',-x_N)\,D^\alpha_x v(x',x_N)\,dx = (-1)^{\alpha_N}\int_{\mathbb{R}^N_\mp} u(y',y_N)\,D^\alpha_y\big(v(y',-y_N)\big)\,dy. \tag{6.3.6}
\]
We claim that the limit of the second term in (6.3.7) is zero. Indeed, if α_N = 0 then D^α η_n = 0. If α_N = 1 we get
\[
\le \|\eta\|_{C^1(\mathbb{R})}\,\|\psi\|_{C^1(\mathbb{R}^N)}\int_{\{|x_N|\le 1/n\}} |u|\,dx \ \xrightarrow[n\to\infty]{}\ 0, \tag{6.3.8}
\]
where we used the Lebesgue DCT and the fact that η′(n x_N) = 0 for x_N > 1/n.
Therefore, as η_n ψ ∈ D(R^N_+), from (6.3.7) and (6.3.8) we get
\[
\int_{\mathbb{R}^N_+} u\,D^\alpha\psi\,dx = \lim_{n\to\infty}\int_{\mathbb{R}^N_+} u\,D^\alpha(\eta_n\psi)\,dx = -\lim_{n\to\infty}\int_{\mathbb{R}^N_+} (\eta_n\psi)\,D^\alpha u\,dx = -\int_{\mathbb{R}^N_+} \psi\,D^\alpha u\,dx.
\]
(i.3) Finally, by combining (6.3.5) with the last equality and changing the variable again, we obtain
\[
\langle D^\alpha U, \varphi\rangle = \int_{\mathbb{R}^N_+} \psi\,D^\alpha u\,dx
= \int_{\mathbb{R}^N_+} \varphi\,D^\alpha u\,dx + (-1)^{\alpha_N}\int_{\mathbb{R}^N_+} \varphi(x',-x_N)\,D^\alpha u(x',x_N)\,dx
\]
\[
= \int_{\mathbb{R}^N_+} \varphi\,D^\alpha u\,dx + \int_{\mathbb{R}^N_-} \varphi(x',x_N)\,D^\alpha u(x',-x_N)\,dx
\]
\[
H^s(\Omega) \xrightarrow{\ E\ } H^s(\mathbb{R}^N) \hookrightarrow C_b^{m,\sigma}(\mathbb{R}^N) \xrightarrow{\ R\ } C_b^{m,\sigma}(\Omega).
\]
These embeddings, in combination with estimate (6.3.3), prove the embedding H^s(Ω) ↪ C_b^{m,σ}(Ω) and the estimate (6.3.9). Furthermore, if Ω is bounded then C_b^{m,σ}(Ω) ↪_c C_b^m(Ω).
Theorem 6.3.10 Let k ∈ N and assume Ω is an open bounded set that satisfies the H^{k+1} extension property. Then H^{k+1}(Ω) ↪_c H^k(Ω). In particular, H_0^{k+1}(Ω) ↪_c H^k(Ω) with Ω only open bounded.
Proof. For the proof of the first part, see Theorem 9.5.2. The second part follows from
Lemma 9.5.6 and the first part of this theorem.
Remark 6.3.11 Embedding theorems hold in a more general setting; see, for example, [1, 14, 44, 51]. Namely, with s > 0, p ∈ [1,∞), m ∈ N, we have the following embeddings, where the continuous embedding holds for Ω = R^N or Ω ⊊ R^N satisfying the W^{s,p} extension property, and the compact embedding holds for Ω Lipschitz bounded:
b) s − N/p = 0: W^{s,p}(Ω) ↪ L^q(Ω) for all q ∈ [p, ∞); W^{s,p}(Ω) ↪_c L^q(Ω) for all q ∈ [1, ∞).
d) s − N/p = m: W^{s,p}(Ω) ↪ C_b^{m−1,σ}(Ω) for all σ ∈ [0,1); W^{s,p}(Ω) ↪_c C_b^{m−1,σ}(Ω) for all σ ∈ [0,1).
e) s − N/p ≥ σ − N/q, p ≤ q: W^{s,p}(Ω) ↪ W^{σ,q}(Ω); W^{s+ε,p}(Ω) ↪_c W^{σ,q}(Ω) for all ε > 0.
Furthermore:
f) the continuous embeddings hold with W_0^{s,p}(Ω) instead of W^{s,p}(Ω), with Ω only open;
g) the compact embeddings hold with W_0^{s,p}(Ω) instead of W^{s,p}(Ω), with Ω only open bounded.
The following remark shows that Lipschitz regularity of Ω in Theorem 6.3.7 is essential.
6.4. Boundary traces in Sobolev spaces Chapter 6
Remark 6.3.12 Theorem 6.3.7 does not hold in general if Ω is not Lipschitz. For example, let Ω = {(x_1, x_2) : x_1 ∈ (0,1), 0 < x_2 < x_1^α} with α > 1. Clearly, Ω is not Lipschitz because it has a cusp at the origin.
Let u(x) = x_1^{−σ/p}, p ≥ 1, σ > 0. Given furthermore m ∈ N, we can choose α such that α − σ − mp + 1 > 0. Then u ∈ W^{m,p}(Ω) because for every integer k ≤ m we have
\[
\int_\Omega |\partial_{x_1}^k u|^p\,dx = C(k,p,\sigma)\int_0^1\int_0^{x_1^\alpha} x_1^{-\sigma-kp}\,dx_2\,dx_1 = C(k,p,\sigma)\int_0^1 x_1^{\alpha-\sigma-kp}\,dx_1 = \frac{C(k,p,\sigma)}{\alpha-\sigma-kp+1} < \infty.
\]
However, u ∉ L^∞(Ω). This means that Theorem 6.3.7 does not hold, because if it did, from the embedding H²(R²) ↪ C_b^0(R²) (see Theorem 6.3.3 with s = 2, p = 2, N = 2, so s − N/p = 1 > 0), we would have Eu ∈ C^0(R²).
Example 6.3.13 Let Ω = (0,1) ⊂ R and u(x) = x. Then u ∉ H_0^1(Ω). Indeed, if we assume that u ∈ H_0^1(Ω), then there exists u_n ∈ D(Ω) such that u_n → u in H¹(Ω). Note that we have H¹(Ω) ↪ C^0(\overline{Ω}) because here s − N/p = 1 − 1/2 = 1/2 > 0. Therefore ‖u − u_n‖_{C^0(\overline{Ω})} ≤ C‖u − u_n‖_{H¹(Ω)}, so ‖u − u_n‖_{C^0(\overline{Ω})} → 0 as n → ∞. But u(1) = 1 and u_n(1) = 0 for all n, so ‖u − u_n‖_{C^0(\overline{Ω})} ≥ |u(1) − u_n(1)| = 1, which contradicts ‖u − u_n‖_{C^0(\overline{Ω})} → 0 and proves the claim.
Theorem 6.3.14 Assume Ω ⊂ R^N is an open set that satisfies the H^s extension property, s > 0. Then C_b^∞(Ω) ↪_d H^s(Ω). More generally, if Ω ⊂ R^N is an open set that satisfies the W^{s,p} extension property, s > 0, p ∈ [1,∞), then C_b^∞(Ω) ↪_d W^{s,p}(Ω).
Proof. For the first part, let u ∈ H^s(Ω) and U = Eu ∈ H^s, where E is an H^s(Ω) extension operator. From Theorem 6.3.2, there exists a sequence (U_n) in D(R^N) converging to U in H^s(R^N). It follows that (u_n), u_n = RU_n, is in C_b^∞(Ω) and converges to u in H^s(Ω).
The proof for W^{s,p}(Ω) spaces is similar: extend to W^{s,p}, then use the embedding D(R^N) ↪_d W^{s,p}(R^N), and conclude with the restriction to W^{s,p}(Ω); see [24].
The trace theorem in Sobolev spaces essentially extends the operator γ to H^s(Ω) spaces (or, more generally, to W^{s,p}(Ω)). Typically, the boundary trace of H^s(Ω) functions is constructed as follows. First, one defines the trace of H^s(R^N) functions on {x_N = 0} by using the Fourier transform. Next, for general Ω, the boundary trace of H^s(Ω) functions is constructed by using a partition of unity of ∂Ω (see Definition 1.2.1) and the trace results on {x_N = 0}. Traces of W^{s,p}(Ω) functions are constructed similarly, but the proofs are substantially more difficult and technical; see, for example, [14, 33].
Theorem 6.4.2 Let s − 1/2 > 0. The trace operator γ : S(R^N) → S(R^{N−1}), defined by γ(u)(x') = u(x', 0), x' ∈ R^{N−1}, extends to a linear continuous operator, denoted by the same letter γ, from H^s(R^N) onto H^{s−1/2}(R^{N−1}).
Proof. For x, ξ ∈ R^N we will write x = (x', x_N), ξ = (ξ', ξ_N) with x', ξ' ∈ R^{N−1}. Let U ∈ S(R^N) and set u(x') = U(x', 0), u_N(x_N; x') = U(x', x_N) (u_N is considered as a function of x_N). Note that u ∈ S(R^{N−1}) and u_N ∈ S(R). Furthermore, if
Û(ξ) = F_x[U](ξ), resp. û(ξ') = F_{x'}[u](ξ'), û_N(ξ_N; x') = F_{x_N}[u_N(·; x')](ξ_N),
\[
\|u\|_{H^{s-\frac12}(\mathbb{R}^{N-1})}^2 \le C\int_{\mathbb{R}^{N-1}} \langle\xi'\rangle^{2(s-\frac12)}\Big(\int_{\mathbb{R}} |\hat U(\xi',\xi_N)|\,d\xi_N\Big)^2 d\xi'
\]
\[
(\text{using H\"older}) \quad \le C\int_{\mathbb{R}^{N-1}} \langle\xi'\rangle^{2(s-\frac12)}\Big(\int_{\mathbb{R}} \langle\xi\rangle^{2s}|\hat U(\xi',\xi_N)|^2\,d\xi_N\Big)\Big(\int_{\mathbb{R}} \langle\xi\rangle^{-2s}\,d\xi_N\Big)\,d\xi'. \tag{6.4.1}
\]
For s − 1/2 > 0, we have
\[
\int_{\mathbb{R}} \langle\xi\rangle^{-2s}\,d\xi_N = \int_{\mathbb{R}} \big(\langle\xi'\rangle^2 + |\xi_N|^2\big)^{-s}\,d\xi_N = \langle\xi'\rangle^{-2s}\int_{\mathbb{R}} \Big(1 + \Big(\frac{\xi_N}{\langle\xi'\rangle}\Big)^2\Big)^{-s}\,d\xi_N = \langle\xi'\rangle^{-2(s-\frac12)}\int_{\mathbb{R}} (1+r^2)^{-s}\,dr = \langle\xi'\rangle^{-2(s-\frac12)} I_s, \tag{6.4.2}
\]
where $I_s := \int_{\mathbb{R}} (1+r^2)^{-s}\,dr < \infty$. Then (6.4.1) implies
\[
\|u\|_{H^{s-\frac12}(\mathbb{R}^{N-1})}^2 \le C\int_{\mathbb{R}^{N-1}}\int_{\mathbb{R}} \langle\xi\rangle^{2s}|\hat U(\xi',\xi_N)|^2\,d\xi_N\,d\xi' \le C\|\hat U\|_{L^2_s(\mathbb{R}^N)}^2 \le C\|U\|_{H^s(\mathbb{R}^N)}^2. \tag{6.4.3}
\]
By density of S(R^N) in H^s(R^N) (see Theorem 6.3.2), the inequality (6.4.3) is valid for all U ∈ H^s(R^N), and this proves the theorem.
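The integral I_s has the closed form I_s = √π Γ(s − 1/2)/Γ(s), finite exactly when s − 1/2 > 0. A quick numerical cross-check (ours, not from the book):

```python
import math

def I_numeric(s, R=200.0, n=400000):
    # midpoint approximation of int_{-R}^{R} (1 + r^2)^{-s} dr
    h = 2 * R / n
    return sum((1 + (-R + (k + 0.5) * h) ** 2) ** (-s) for k in range(n)) * h

def I_closed(s):
    # I_s = sqrt(pi) * Gamma(s - 1/2) / Gamma(s), valid for s > 1/2
    return math.sqrt(math.pi) * math.gamma(s - 0.5) / math.gamma(s)

i1_num, i1_exact = I_numeric(1.0), I_closed(1.0)   # exact value is pi
i2_num, i2_exact = I_numeric(2.0), I_closed(2.0)   # exact value is pi/2
```

For s = 1 the slowly decaying tail makes the truncated numerical value slightly smaller than π; for s = 2 the tail is negligible.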
Remark 6.4.3 The condition s − 1/2 > 0 in Theorem 6.4.2 is used to show that I_s < ∞. One can ask whether this condition is due to the technique we used or whether it is critical. Actually, this condition is indeed critical: one can prove that for 0 < s ≤ 1/2, the trace operator cannot be extended to a continuous operator from H^s(R^N) to L²(R^{N−1}). Furthermore, for such s, the closure of D(Ω) in H^s(Ω) is H_0^s(Ω) = H^s(Ω); see [35].
Now we address the question of defining the trace of H^s(Ω) functions on ∂Ω. We expect that the range of the trace operator will be in a certain space H^s(∂Ω). The following definition addresses a more general boundary space W^{s,p}(∂Ω); see for example [17].
[Figure 6.4.1: G_i^+, Q^+, θ_i, and θ_i^{−1}.]
In the same spirit as in Remark 1.2.8, one can show that the ‖·‖_{W^{s,p}(∂Ω)} norms corresponding to two different partitions of unity of ∂Ω are equivalent; see also [17, 46]. We have the following general result.
Theorem 6.4.5 Let Ω ⊂ R^N be an open bounded set of class C^{0,1}, s > 0, and p ∈ (1,∞).
i) Case s − 1/p ∈ (0,1). There exists a linear continuous trace operator with continuous right inverse,
\[
\gamma : W^{s,p}(\Omega) \to W^{s-\frac1p,p}(\partial\Omega). \tag{6.4.5}
\]
ii) Case s − 1/p ∈ [1,∞). Assume furthermore that Ω is of class C^{m,1}, m ≥ 1. Then for s − 1/p ∈ (ℓ−1, m+1)\N if p ≠ 2, and s − 1/p ∈ (ℓ−1, m+1) if p = 2, for a certain ℓ ∈ N, there exists a linear continuous trace operator with continuous right inverse,
\[
\gamma = (\gamma_0, \gamma_1, \ldots, \gamma_{\ell-1}) : W^{s,p}(\Omega) \to \prod_{i=0}^{\ell-1} W^{s-i-\frac1p,p}(\partial\Omega), \tag{6.4.6}
\]
such that for U ∈ S(Ω), if (u_0, u_1, …, u_{ℓ−1}) = γ(U) then u_i(x) = ∂_ν^i U(x) for all x ∈ ∂Ω, i = 0, …, ℓ−1. Here, ∂_ν^i U(x) means the i-th order normal derivative of U, i.e.
\[
\partial_\nu^i U(x) = \sum_{|\alpha|=i} \frac{i!}{\alpha!}\, D^\alpha U\,\nu^\alpha,
\]
where ν is the unit outward normal vector to ∂Ω.
For the proof of this theorem in the case s ∈ N, see [40]; in the case s ∉ N, see [25, 37]. We note that the condition on s − 1/p in ii) is due to these facts: a) the Besov space B_{p,p}^s(Ω) is equal to W^{s,p}(Ω) only if (s,p) ∈ ((0,∞)\N) × (1,∞) or (s,p) ∈ N × {2} (see [24, 44]); and b) claim ii) holds for all s − 1/p ∈ (ℓ−1, m+1) with $\prod_{i=0}^{\ell-1} W^{s-i-\frac1p,p}(\partial\Omega)$ replaced by $\prod_{i=0}^{\ell-1} B_{p,p}^{s-i-\frac1p}(\partial\Omega)$; see [25, 37]. For more details on boundary traces, see [24, 25, 33, 37, 39, 40, 46, 48].
The boundary trace results allow us to integrate by parts with Sobolev space func-
tions, as in the case of smooth functions.
Theorem 6.4.6 (Integration by parts) Let Ω ⊂ R^N be an open set and α ∈ N_0^N, |α| = 1.
i) If u ∈ W_0^{1,p}(Ω) and v ∈ W^{1,q}(Ω), where p ∈ [1,∞), 1/p + 1/q = 1, then
\[
\int_\Omega u\,D^\alpha v\,dx = -\int_\Omega v\,D^\alpha u\,dx. \tag{6.4.7}
\]
ii) If Ω is furthermore a Lipschitz bounded set, then for u ∈ W^{1,p}(Ω) and v ∈ W^{1,q}(Ω), with p ∈ (1,∞), 1/p + 1/q = 1, we have
\[
\int_\Omega u\,D^\alpha v\,dx = \int_{\partial\Omega} \gamma(u)\gamma(v)\,\nu^\alpha\,d\sigma - \int_\Omega v\,D^\alpha u\,dx. \tag{6.4.8}
\]
Proof. To prove (6.4.7), let (u_n) in D(Ω) be such that lim_{n→∞} u_n = u in W^{1,p}(Ω). Then
\[
\int_\Omega u\,D^\alpha v\,dx = \lim_{n\to\infty}\int_\Omega u_n\,D^\alpha v\,dx = \lim_{n\to\infty}\langle u_n, D^\alpha v\rangle = -\lim_{n\to\infty}\langle v, D^\alpha u_n\rangle = -\lim_{n\to\infty}\int_\Omega v\,D^\alpha u_n\,dx = -\int_\Omega v\,D^\alpha u\,dx,
\]
where for the limit in the boundary integral we used the continuity of the trace operator.
6.5 Poincaré inequality
Proof. By the density of D(Ω) in W_0^{1,p}(Ω), it is enough to prove (6.5.1) only for u ∈ D(Ω). Integration by parts gives
\[
\|u\|_{L^p(\Omega)}^p = \int_\Omega \partial_i x_i\,|u(x)|^p\,dx = \lim_{\varepsilon\to 0}\int_\Omega \partial_i x_i\,(u^2+\varepsilon^2)^{p/2}\,dx
= -\lim_{\varepsilon\to 0}\int_\Omega p\,x_i\,(u^2+\varepsilon^2)^{(p-2)/2}\,u\,\partial_i u\,dx
\]
\[
= -\lim_{\varepsilon\to 0}\int_\Omega p\,x_i\,(u^2+\varepsilon^2)^{(p-1)/2}\,\frac{u}{(u^2+\varepsilon^2)^{1/2}}\,\partial_i u\,dx
\]
\[
(\text{use Lebesgue DCT}) \quad = -\int_\Omega p\,x_i\,|u|^{p-1}\operatorname{sgn}(u)\,\partial_i u\,dx
\]
\[
(\text{use H\"older inequality}) \quad \le p\,d\,\big\||u|^{p-1}\big\|_{L^{\frac{p}{p-1}}(\Omega)}\,\|\partial_i u\|_{L^p(\Omega)}; \ \text{hence}
\]
\[
\forall u \in W_0^{k,p}(\Omega),\quad \|u\|_{L^p(\Omega)} \le C\,|u|_{W^{k,p}(\Omega)}, \quad C = C(k,p,\Omega). \tag{6.5.3}
\]
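As an illustration of (6.5.1) (ours, not from the book): on Ω = (0,1) the best constant in ‖u‖_{L²} ≤ C‖u′‖_{L²} over H_0^1(0,1) is the classical value C = 1/π, attained by u(x) = sin(πx); any other function vanishing at the boundary gives a smaller ratio:

```python
import math

def l2(f, a, b, n=20000):
    # L^2(a,b) norm via the midpoint rule
    h = (b - a) / n
    return math.sqrt(sum(f(a + (k + 0.5) * h) ** 2 for k in range(n)) * h)

# extremal function u(x) = sin(pi x): ||u|| / ||u'|| = 1/pi
u = lambda x: math.sin(math.pi * x)
du = lambda x: math.pi * math.cos(math.pi * x)
ratio_sin = l2(u, 0, 1) / l2(du, 0, 1)

# another function vanishing at the boundary: v(x) = x(1-x)
v = lambda x: x * (1 - x)
dv = lambda x: 1 - 2 * x
ratio_poly = l2(v, 0, 1) / l2(dv, 0, 1)   # = sqrt(1/10), just below 1/pi
```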
6.6. H −s (Ω) and W −s,q (Ω) spaces Chapter 6
Example 6.5.3 Assume Ω ⊂ R^N is a Lipschitz open bounded connected set and let
\[
M_0^{1,p}(\Omega) = \Big\{u \in W^{1,p}(\Omega),\ \int_\Omega u\,dx = 0\Big\}, \qquad p \in [1,\infty).
\]
Then the Poincaré inequality (6.5.1) holds in M_0^{1,p}(Ω). Let us prove the claim for p = 2. Assume the claim does not hold. So
\[
\forall n \in \mathbb{N},\ \exists u_n \in M_0^{1,2}(\Omega),\quad \|u_n\|_{L^2(\Omega)} \ge n\,\|\nabla u_n\|_{L^2(\Omega)}. \tag{6.5.4}
\]
Dividing (6.5.4) by ‖u_n‖_{L²(Ω)} and setting $v_n = \frac{u_n}{\|u_n\|_{L^2(\Omega)}}$, we get for all n:
then Lu = 0. Note that (6.6.1) has nonzero solutions, for example u(x) = Re(e^{λx_1}), with λ ≠ 0 a solution to $\sum_{0\le\alpha_1\le k} (-1)^{\alpha_1}\lambda^{2\alpha_1} = 0$.
Now we can prove that D(Ω) is not dense in H^k(Ω). If we assume D(Ω) is dense in H^k(Ω) then there exists φ_n ∈ D(Ω), lim_{n→∞} φ_n = u in H^k(Ω). Therefore
\[
0 = \lim_{n\to\infty}\langle Lu, \varphi_n\rangle = \lim_{n\to\infty}\sum_{|\alpha|\le k}\int_\Omega D^\alpha u\,D^\alpha\varphi_n\,dx = \sum_{|\alpha|\le k}\int_\Omega |D^\alpha u|^2\,dx > 0,
\]
This lemma with the comments preceding it motivates the following definition.
Definition 6.6.2 Let s > 0, p ∈ [1,∞), q ∈ (1,∞], 1/p + 1/q = 1, and Ω ⊂ R^N. We define⁶ W^{−s,q}(Ω) as the dual space of W_0^{s,p}(Ω). If Ω is ∂R^N_+ or a C^{⌊s⌋+1} domain, then W^{−s,q}(∂Ω) denotes the dual space of W^{s,p}(∂Ω). When p = 2, we write H^{−s}(·) instead of W^{−s,2}(·).
Note that the spaces W^{−s,q}(Ω) and W^{−s,q}(∂Ω) equipped with their dual norms are Banach spaces. Note also that p = ∞ is excluded from the previous definition, because the closure of D(Ω) in W^{s,∞}(Ω) is a subspace of C^s(Ω).
Furthermore, D(Ω) ↪_d W_0^{s,p}(Ω) implies that W^{−s,q}(Ω) can be identified with a subspace of distributions in D'(Ω) continuous with respect to the ‖·‖_{W^{s,p}(Ω)} norm, which is not the case for the dual space of W^{s,p}(Ω).
Remark 6.6.3 Let s > 0. In analogy to the characterization of H^s(R^N) by L²_s(R^N), we have H^{−s}(R^N) = (H^s(R^N))', i.e.
\[
H^{-s}(\mathbb{R}^N) = \Big\{\ell \in S',\ \|\ell\|_{H^{-s}(\mathbb{R}^N)}^2 = \int_{\mathbb{R}^N} \langle\xi\rangle^{-2s}|F[\ell]|^2\,d\xi =: \|\ell\|_{L^2_{-s}}^2 < \infty\Big\}.
\]
⁶ See Remark 6.6.3 for the motivation of the term −s.
Problems
Problem 6.1 Let Ω ⊂ R^N be an open set, u ∈ W^{k,p}(Ω), k ∈ N, p ∈ [1,∞], and ũ a function in Ω such that the N-dimensional Lebesgue measure of the set {x ∈ Ω, ũ(x) ≠ u(x)} is zero. Prove that ũ ∈ W^{k,p}(Ω) and ‖ũ‖_{W^{k,p}(Ω)} = ‖u‖_{W^{k,p}(Ω)}.
Problem 6.2 Let H = 1_{(0,∞)} be the Heaviside function. Prove that H ∉ W^{1,p}(Ω), where p ∈ [1,∞] and Ω ⊂ R is an open set with 0 ∈ Ω.
Problem 6.3 Let B be the unit ball in RN . Prove that u(x) = ln(1 − ln |x|) ∈ W 1,p (B)
iff p < N .
Problem 6.4 Let h : R → R be a C¹ piecewise smooth function, i.e. h is C^0 in R, h(0) = 0, h' ∈ L^∞(R), and h' is continuous in R except at a finite number of points Z where side derivatives exist. Prove that h∘u ∈ H¹(Ω), Ω ⊂ R^N open, for every u ∈ H¹(Ω), and
\[
\partial_i(h\circ u) = \mathbf{1}_{\{x\in\Omega,\ u(x)\notin Z\}}\,h'(u)\,\partial_i u, \qquad i = 1, \ldots, N.
\]
Problem 6.5 Assume u ∈ H¹(R) and set u_h(x) = \frac{1}{h}(u(x+h) − u(x)). Prove that lim_{h→0} u_h = u' in L²(R).
Problem 6.6 Find where, and explain why, Ω in Lemma 6.6.1 is required to be bounded. Prove that, in general, D(Ω) is not dense in H¹(Ω) if Ω ⊊ R^N is unbounded.
Problem 6.7 Show that $H_0^1(\mathbb{R}\setminus\{0\}) \subsetneq H^1(\mathbb{R})$.
Problem 6.8 Let $N \ge 3$, $u \in H^1(\mathbb{R}^N)$, and $f(t,x) = (f_i(t,x))$, $t \in \mathbb{R}$, $x \in \mathbb{R}^N$, where $f_i(t,x) = \partial_i u + tu\dfrac{x_i}{|x|^2}$. Show that
$$0 \le \int_{\mathbb{R}^N} |f(t,x)|^2\,dx = \int_{\mathbb{R}^N} |\nabla u|^2\,dx + \big(t^2 - (N-2)t\big)\int_{\mathbb{R}^N} \frac{u^2}{|x|^2}\,dx, \qquad \forall t \in \mathbb{R}.$$
Problem 6.9 Show that $W^{s,p}(\Omega) = \widetilde W^{s,p}(\Omega)$ and the norms $\|\cdot\|_{W^{s,p}(\Omega)}$, $\|\cdot\|_{\widetilde W^{s,p}(\Omega)}$ are equivalent. Show furthermore that the statement does not hold if $\Omega$ is not bounded Lipschitz.
⁷ This statement is true even for $N = 2$, but the proof is more delicate; see [51].
Problem 6.10 Prove that there exist open bounded sets $\Omega \subset \mathbb{R}^N$ such that $C^1(\overline\Omega)$ is not dense in $H^1(\Omega)$.
Problem 6.11 Prove Theorem 6.1.9.
Problem 6.12 Show that the embedding H k+1 (Ω) −→ H k (Ω) is not compact, in gen-
eral, if Ω ⊂ RN is only an open bounded set.
Problem 6.13 Let $u \in W^{1,p}(\Omega)$, $p \in [1,\infty]$, $\Omega = (a,b) \subset \mathbb{R}$. Prove that there exists $\tilde u \in W^{1,p}(\Omega) \cap C^0(\overline\Omega)$, $\tilde u = u$ a.e. in $\Omega$, and
$$\tilde u(x) - \tilde u(y) = \int_y^x u'(s)\,ds, \qquad \forall x, y \in \Omega.$$
Problem 6.14 Prove that if u ∈ H 1 (R) then u cannot have jump discontinuities.
Problem 6.15 Prove that $H^1(\mathbb{R}^N) \not\subset C^0(\mathbb{R}^N)$ for $N \ge 2$, i.e. there exists $u \in H^1(\mathbb{R}^N)$ such that $u \notin C^0(\mathbb{R}^N)$.
Problem 6.16 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set and $s > r \ge 0$. Prove that $H^s(\Omega) \hookrightarrow^c H^r(\Omega)$.
Problem 6.17 Let k ∈ N. Prove that the embedding H k+1 (RN ) −→ H k (RN ) is not
compact.
Problem 6.18 Let Ω ⊂ RN be an open bounded set. Prove that the embedding H01 (Ω) −→
H −1 (Ω) is compact.
Problem 6.19 Prove that Poincaré inequality does not hold in H 1 (RN ).
Problem 6.20 Let Ω ⊂ R2 be an open bounded Lipschitz connected set, γ ⊂ ∂Ω with
nonzero length, and
Wγ1,p (Ω) = {u ∈ W 1,p (Ω), u = 0 on γ}, p ∈ [1, ∞).
Prove that Poincaré inequality (6.5.3) holds in Wγ1,p (Ω).
Problem 6.21 Let Ω ⊂ RN be an open bounded Lipschitz connected set, ω ⊂ Ω with
nonzero measure, and
Wω1,p (Ω) = {u ∈ W 1,p (Ω), u = 0 on ω}, p ∈ [1, ∞).
Prove that Poincaré inequality (6.5.3) holds in Wω1,p (Ω).
Problem 6.22 Let $k - \dfrac{N}{2} > 0$. Prove that $\delta_0 \in H^{-k}(\mathbb{R}^N)$.
Problem 6.23 Let $\Omega \subset \mathbb{R}^N$ be of class $C^1$, $g \in H^1(\Omega)$, and $\ell \in \mathcal{L}(H^1(\Omega); \mathbb{R})$ be defined by
$$\ell(v) = \int_{\partial\Omega} g(x)\,v(x)\,d\sigma, \qquad v \in H^1(\Omega).$$
Estimate the norm of $\ell$ seen as an element of $H^{-1}(\Omega)$ or of $\mathcal{L}(H^1(\Omega); \mathbb{R})$.
7. Second-order linear elliptic PDEs:
weak solutions
There exist several well-developed theories of second-order linear elliptic PDEs. We have already seen Perron's method for the Laplace equation in Chapter 4, which provides classical solutions. In this context, Schauder theory provides $C^{2,\alpha}$ classical solutions; see, for example, [20].
Other methods, such as Hilbert spaces, variational, singular integral, or spectral
methods provide the so-called “weak solutions”, which belong to certain Sobolev spaces;
see, for example, [5, 16, 20, 35].
In this chapter, we will only deal with the existence and uniqueness of weak solutions
obtained by the variational method. By combining these results with some compactness
results in Sobolev spaces, we will also address the solution to certain nonlinear elliptic
PDEs.
7.1 Introduction
The objective of this chapter is to study the existence and uniqueness of a weak solution $u : \Omega \to \mathbb{R}$ to the following problem:
$$Lu := -\sum_{i,j=1}^N \partial_j(a_{ij}\partial_i u) + \sum_{i=1}^N b_i\partial_i u + cu = f \ \text{ in } \Omega, \tag{7.1.1a}$$
$$u = 0 \ \text{ on } \partial\Omega. \tag{7.1.1b}$$
This problem is commonly referred to as the "Dirichlet problem", in connection with the boundary condition $u = 0$. One may consider the general Dirichlet boundary condition $u = g$; see Remark 7.2.13.
Here and throughout this chapter, unless otherwise stated, $\Omega \subset \mathbb{R}^N$ is an open bounded set, and $a_{ij}$, $b_i$, and $c$ satisfy
$$A := (a_{ij}) = (a_{ji}) \in W^{1,\infty}(\Omega;\mathbb{R}^{N\times N}), \quad b := (b_i) \in L^\infty(\Omega;\mathbb{R}^N), \quad c \in L^\infty(\Omega), \tag{7.1.2a}$$
$$\exists k, K > 0,\ \forall\, 0 \neq \xi \in \mathbb{R}^N:\quad k|\xi|^2 \le \sum_{i,j=1}^N a_{ij}(\cdot)\xi_i\xi_j \le K|\xi|^2 \quad \text{a.e. in } \Omega. \tag{7.1.2b}$$
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Novruzi, A Short Introduction to Partial Differential Equations, CMS/CAIMS Books in Mathematics 11,
https://doi.org/10.1007/978-3-031-39524-6_7
7.2 Existence and uniqueness of weak solutions
The following two theorems are the main ingredients for solving (7.1.1). While The-
orem 7.2.2 solves (7.1.1) in special cases, Theorem 7.2.3 gives a full characterization of
the weak solutions to (7.1.1).
Theorem 7.2.2 (Lax-Milgram lemma) Let $H$ be a Hilbert space and $B : H \times H \to \mathbb{R}$ be a bilinear form satisfying
$$|B(u,v)| \le K\,\|u\|_H\|v\|_H \quad\text{and}\quad B(u,u) \ge k\,\|u\|_H^2, \qquad \forall u, v \in H,$$
with certain $K, k > 0$. Then, for every $f \in H'$ there exists a unique $u \in H$ such that
$$B(u,v) = \langle f, v\rangle_{H'\times H}, \qquad \forall v \in H, \tag{7.2.3}$$
$$\|u\|_H \le \frac{1}{k}\,\|f\|_{H'}. \tag{7.2.4}$$
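In finite dimensions the Lax-Milgram lemma reduces to solving a linear system with a coercive matrix, and the bound (7.2.4) can be observed directly. A small sketch (the matrix and data below are arbitrary choices):

```python
import numpy as np

# On H = R^n with B(u, v) = v @ (A @ u) for a symmetric positive definite A,
# the coercivity constant is k = lambda_min(A); the unique u with
# B(u, v) = (f, v) for all v solves A u = f and satisfies |u| <= |f| / k.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)        # symmetric and coercive: B(u,u) >= k |u|^2
f = rng.standard_normal(5)

k = np.linalg.eigvalsh(A).min()      # coercivity constant
u = np.linalg.solve(A, f)            # the unique solution

print(np.linalg.norm(u), np.linalg.norm(f) / k)  # (7.2.4): first <= second
```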
Theorem 7.2.3 (Fredholm alternative) Let $H$ be a Hilbert space and $A : H \to H$ a compact linear operator. Then all of the following hold.
Now we give the "weak maximum principle" associated with the operator $L$ in the context of Sobolev spaces. It generalizes the classical weak maximum principle that we have seen in Chapter 4.
Theorem 7.2.5 (Weak maximum principle) Let $\Omega \subset \mathbb{R}^N$ be an open bounded set, $c \ge 0$, and $u \in H^1(\Omega)$ such that $Lu \le 0$ (resp. $Lu \ge 0$). Then
$$\sup_\Omega u \le \sup_{\partial\Omega} u^+ \quad \Big(\text{resp. } \inf_\Omega u \ge -\sup_{\partial\Omega} u^-\Big),$$
where sup and inf are taken for $x$ almost everywhere in $\Omega$ or on $\partial\Omega$.
Proof. Consider first the case $Lu \le 0$. If $M = \sup_{x\in\partial\Omega} u^+(x)$ and $m = \sup_{x\in\Omega} u(x)$, we have to prove $m \le M$. Assume the claim does not hold, i.e. $M < m$. The proof relies strongly on $\langle Lu, v\rangle \le 0$ with some particular functions $v$, and on the Sobolev embeddings in Remark 6.3.11.
Let $r \in [M, m)$. From Example 6.1.14 we have $w := (u - r)^+ \in H^1(\Omega)$. Actually, $w \in H_0^1(\Omega)$ for a certain $r \in [M, m)$. Indeed, there exists $r \in [M, m)$ such that
$$\sum_{i,j=1}^N \int_\Omega a_{ij}\,\partial_i u\,\partial_j v = \langle Lu, v\rangle^1 - \sum_{i=1}^N \int_\Omega b_i\,\partial_i u\,v - \int_\Omega cuv \le C\int_\Omega |v|\,|\nabla u|, \tag{7.2.5}$$
¹ Which is well-defined for all $u, v \in H^1(\Omega)$.
Now we use the Sobolev embedding results of Remark 6.3.11, a) for $N \ge 3$ and b) for $N = 2$. For $N \ge 3$ we obtain
$$\dots \le C_2\,|\omega_s|^{1/N}\,\|v\|_{L^{\frac{2N}{N-2}}(\omega_s)},$$
which gives $C^{-N} \le |\omega_s|$. For $N = 2$ we obtain $C^{-p} \le |\omega_s|$ for any $p > 2$. In both cases $C$ does not depend on $s$. Setting $C_0 = \min\{C^{-N}, C^{-p}\}$, we get
$$k\,|u|^2_{H_0^1(\Omega)} \le B(u,u) + \|b\|_{L^\infty(\Omega)}\Big(\tfrac14\|u\|^2_{L^2(\Omega)} + \|\nabla u\|^2_{L^2(\Omega)}\Big) + \|c^-\|_{L^\infty(\Omega)}\|u\|^2_{L^2(\Omega)}.$$
It implies
$$\big(k - \|b\|_{L^\infty(\Omega)}\big)\,|u|^2_{H_0^1(\Omega)} \le B(u,u) + \Big(\tfrac14\|b\|_{L^\infty(\Omega)} + \|c^-\|_{L^\infty(\Omega)}\Big)\|u\|^2_{L^2(\Omega)},$$
which proves the lemma.
Clearly, if $c \ge 0$, Theorem 7.2.8 implies that (7.2.11) has a unique weak solution. In fact, one can prove that there exists $\epsilon_0 > 0$ such that if $\|c^-\|_{L^\infty(\Omega)} < \epsilon_0$ then (7.2.11) still has a unique weak solution. Indeed, using the Poincaré inequality in $H_0^1(\Omega)$ we get
$$B(u,u) = \int_\Omega\big(|\nabla u|^2 + (c^+ - c^-)u^2\big)dx \ge \int_\Omega|\nabla u|^2\,dx - \|c^-\|_{L^\infty(\Omega)}\int_\Omega u^2\,dx \ge \big(1 - C^2\|c^-\|_{L^\infty(\Omega)}\big)\int_\Omega|\nabla u|^2\,dx \quad (\text{use } (6.5.1)), \tag{7.2.13}$$
where $C$ is the (Poincaré) constant in (6.5.1). Inequality (7.2.13) shows that $B$ is elliptic in $H_0^1(\Omega)$ if $\|c^-\|_{L^\infty(\Omega)} < C^{-2}$. Then the existence and uniqueness of a weak solution follows from the Lax-Milgram lemma.
In the case when $c \not\ge 0$ or $b \neq 0$, the issue of existence of solutions to (7.1.4) is more complex. We will solve it as follows. First we consider an auxiliary problem $L_0u = f$, $f \in H^{-1}(\Omega)$, $L_0u = Lu + \lambda_0 u$, which, thanks to the Lax-Milgram lemma, has a unique weak solution. The operator $L_0$ generates a map $G_0 : H_0^1(\Omega) \to H_0^1(\Omega)$, which is compact. Then the equation (7.1.4) is written equivalently in the form $(I - G_0)u = L_0^{-1}f$, which is analyzed by using the Fredholm alternative, Theorem 7.2.3.
More precisely, let $L_0 : H_0^1(\Omega) \to H^{-1}(\Omega)$ and $B_0 \in \mathcal{L}(H_0^1(\Omega) \times H_0^1(\Omega))$ be given by
$$L_0u := Lu + \lambda_0 u = -\sum_{i,j=1}^N \partial_j(a_{ij}\partial_i u) + \sum_{i=1}^N b_i\partial_i u + (c + \lambda_0)u, \tag{7.2.14}$$
The following lemma uses some properties of compact operators; the reader can consult [29] for more details about compact operators.
Lemma 7.2.11 Assume $\lambda_0$ of Lemma 7.2.7 is positive, and let $I : H_0^1(\Omega) \to H^{-1}(\Omega)$ be the embedding map, $Iu = u$, and $G_0 := \lambda_0 L_0^{-1}\circ I : H_0^1(\Omega) \to H_0^1(\Omega)$. Then $G_0$ is compact.
Proof. Clearly $G_0$ is linear. For the compactness, we write $I = I_0\circ I_1$, where $I_1 : H_0^1(\Omega) \to L^2(\Omega)$, $I_0 : L^2(\Omega) \to H^{-1}(\Omega)$, $I_1u = I_0u = u$. So $G_0 = \lambda_0 L_0^{-1}\circ I_0\circ I_1$, with $L_0^{-1}$, $I_0$, $I_1$ continuous and $I_1$ compact; see Theorem 6.3.10. Hence, $G_0$ is compact as a composition of a compact operator with continuous operators.
Theorem 7.2.12 If $c \ge 0$ then the problem (7.1.4) has a unique solution $u \in H_0^1(\Omega)$. Otherwise: if $N(L) = \{0\}$ then (7.1.4) has a unique solution, and if $N(L) \neq \{0\}$ then (7.1.4) has a solution (and then infinitely many solutions) iff $f \in R(L)$.
We can apply Theorem 7.2.12 to (7.2.18), and the conclusions of this theorem hold for
(7.2.18) with f − Lg instead of f .
Example 7.2.14 Let us consider again the problem (4.4.3). It does not have a classical solution, because the boundary data is not continuous. However, it has a unique $H^1(\Omega)$ solution. Indeed, if $G(x) = 1_{\{|x|<1\}}\ln|1 - \ln|x||$, then $u = G$ on $\partial\Omega$. As, from Remark 6.3.5, $\ln|1 - \ln|x|| \in H^1(\Omega)$, from Remark 7.2.13 the problem (4.4.3) has a unique solution in $H^1(\Omega)$.
$$-\sum_{i,j=1}^N \partial_j(a_{ij}\partial_i u) + \sum_{i=1}^N b_i\partial_i u + cu = f \in (H^1(\Omega))', \tag{7.2.19a}$$
$$\sum_{i,j=1}^N a_{ij}\,\partial_i u\,\nu_j = 0 \ \text{ on } \partial\Omega, \tag{7.2.19b}$$
$$\|u\|_{H^1(\Omega)} \le C\,\|f\|_{H^{-1}(\Omega)}.$$
Similar results to those of Theorem 7.2.12 for the Dirichlet problem (7.1.4) hold for the
problem (7.2.20). We leave the details as an exercise. For more on this topic, see, for
example, [5, 20].
Remark 7.2.16 Let us see in what sense a weak solution to (7.2.20) satisfies the boundary condition (7.2.19b). For simplicity, we assume $f \in L^2(\Omega)$ (see Problem 7.8 for $f \in (H^1(\Omega))'$). Then taking $v \in \mathcal{D}(\Omega)$ in (7.2.20) gives $T := \sum_{i,j=1}^N \partial_j(a_{ij}\partial_i u) = b\cdot\nabla u + cu - f$ in $\mathcal{D}'(\Omega)$. As $b\cdot\nabla u + cu - f \in L^2(\Omega)$, it follows that $T = b\cdot\nabla u + cu - f$ in $L^2(\Omega)$. Then from (7.2.20) with $v \in H^1(\Omega)$, we get
$$\langle T, v\rangle = \langle b\cdot\nabla u + cu - f,\ v\rangle = \int_\Omega (b\cdot\nabla u + cu)v\,dx - \langle f, v\rangle = -\sum_{i,j=1}^N \int_\Omega a_{ij}\,\partial_i u\,\partial_j v\,dx,$$
or equivalently
$$\Big\langle \sum_{i,j=1}^N \partial_j(a_{ij}\partial_i u),\ v\Big\rangle + \sum_{i,j=1}^N \int_\Omega a_{ij}\,\partial_i u\,\partial_j v\,dx = 0, \qquad \forall v \in H^1(\Omega). \tag{7.2.21}$$
The equation (7.2.21) defines how $\sum_{i,j=1}^N a_{ij}\,\partial_i u\,\nu_j = 0$ is understood. This makes sense because, if $u \in C^2(\overline\Omega)$, integrating by parts in (7.2.21) implies
$$0 = \int_{\partial\Omega} \sum_{i,j=1}^N a_{ij}\,\partial_i u\,\nu_j\,v\,d\sigma, \qquad \forall v \in H^1(\Omega),$$
which implies $\sum_{i,j=1}^N a_{ij}\,\partial_i u\,\nu_j = 0$ on $\partial\Omega$.
7.3 Nonlinear second-order elliptic PDEs
Let us cite the following theorem, which asserts the existence of fixed points of a compact continuous operator; see, for example, [20].
Theorem 7.3.1 Let $A : E \to E$ be a continuous operator from the Banach space $E$ to itself. Assume that there exists a convex closed set $K \subset E$ such that $A(K) \subset K$ and $A(K)$ is precompact² in $E$. Then $A$ has a fixed point, i.e. there exists $u \in E$ such that $u = Au$.
Now we give two examples, which demonstrate the use of the fixed point method to
solve various problems similar to (7.3.1).
Example 7.3.2 Consider the following nonlinear boundary value problem
This is the weak form equation of (7.3.2), and we look for a solution $u \in H_0^1(\Omega)$ to it. For this we consider a fixed point approach as follows. Let $A : C^0(\overline\Omega) \to C^0(\overline\Omega)$, $Av = u$, where $u$ is the solution to
$$\int_\Omega u'\varphi' + h(v)u\varphi\,dx = \langle f, \varphi\rangle, \qquad \forall \varphi \in H_0^1(\Omega). \tag{7.3.4}$$
If $u$ is a fixed point of $A$, i.e. $u = Au$, then $u \in H_0^1(\Omega)$ and solves (7.3.3).
Claim: $A$ is well-defined. Indeed, for every $v \in C^0(\overline\Omega)$ the equation (7.3.4) has a unique solution $u \in H_0^1(\Omega) \hookrightarrow C^0(\overline\Omega)$; see Theorem 7.2.8 and Theorem 6.3.9. Furthermore, the following estimate holds:
which implies
$$|Av - Av_0|^2_{H^1(\Omega)} \le \int_\Omega |h(v) - h(v_0)|\,|u_0|\,|Av - Av_0| \quad (\text{use Hölder})$$
² A set $K \subset E$, $E$ a normed space, is precompact if its closure is compact.
From $h(v) \to h(v_0)$ in $C^0(\overline\Omega)$ and $H_0^1(\Omega) \hookrightarrow C^0(\overline\Omega)$, we get $\|Av - Av_0\|_{C^0(\overline\Omega)} \to 0$ as $v \to v_0$ in $C^0(\overline\Omega)$; hence $A$ is continuous from $C^0(\overline\Omega)$ to itself.
Claim: Let $K = H_0^1(\Omega)$. Then $A(K) \subset K$ and $A(K)$ is precompact in $C^0(\overline\Omega)$. This follows from the estimate (7.3.5) and $H^1(\Omega) \hookrightarrow^c C^0(\overline\Omega)$; see Theorem 6.3.9.
Claim: $A$ has a fixed point. The conditions of Theorem 7.3.1 are satisfied, so $A$ has a fixed point $u = Au \in H_0^1(\Omega)$, which solves (7.3.3).
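The fixed-point scheme of this example can be tried numerically. The sketch below discretizes a one-dimensional model problem by finite differences; the choices $h(v) = v^2$, $f = 1$, $\Omega = (0,1)$, and the grid size are assumptions for illustration only:

```python
import numpy as np

# A maps v to the solution u of -u'' + h(v) u = f, u(0) = u(1) = 0, with
# h(v) = v^2 (my choice); we then iterate u_{j+1} = A(u_j) to a fixed point.
n = 200
x = np.linspace(0.0, 1.0, n + 1)
dx = x[1] - x[0]

def A(v, f=1.0):
    # tridiagonal finite-difference matrix of -u'' + h(v) u on interior nodes
    main = 2.0 / dx**2 + v[1:-1]**2
    T = (np.diag(main)
         - np.diag(np.ones(n - 2) / dx**2, 1)
         - np.diag(np.ones(n - 2) / dx**2, -1))
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(T, np.full(n - 1, f))
    return u

u = np.zeros(n + 1)
for _ in range(50):                      # fixed-point iteration
    u_new = A(u)
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

print(np.max(np.abs(A(u) - u)))          # residual of the fixed point
```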
Example 7.3.3 Consider the following nonlinear PDE:
Similarly to the method in Example 7.3.2, we consider a fixed point approach as follows. Let $A : L^2(\Omega) \to L^2(\Omega)$, $Av = u$, where $u$ is the solution to
$$\int_\Omega \nabla u\cdot\nabla\varphi + h(v)u\varphi\,dx = \langle f, \varphi\rangle, \qquad \forall \varphi \in H_0^1(\Omega). \tag{7.3.8}$$
Claim: $A$ is well-defined. Indeed, for $v \in L^2(\Omega)$ the equation (7.3.8) has a unique solution, see Theorem 7.2.8, and the following estimate holds:
where we used the fact that $H^1(\Omega) \hookrightarrow L^q(\Omega)$ for all $q \in [1,\infty)$; see Remark 6.3.11. Therefore, using the Poincaré inequality (6.5.1) and $|h(v) - h(v_0)| \le \|h\|_{C_b^1(\mathbb{R})}|v - v_0|$ implies
Problems
Problem 7.1 Prove that for every $\ell \in H^{-k}(\Omega)$, $k \ge 1$, there exists a unique $u \in H_0^k(\Omega)$ such that
$$\sum_{|\alpha| \le k} (-1)^{|\alpha|} D^{2\alpha}u = \ell \quad \text{in } \mathcal{D}'(\Omega).$$
Problem 7.2 Let $\Omega = (0,1)$, $c \in L^\infty(\Omega)$, and $f \in H^{-1}(\Omega)$. Consider the problem
$$-u'' + cu = f \ \text{ in } \Omega, \qquad u(0) = u(1) = 0.$$
Prove that it has a unique weak solution in $H_0^1(\Omega)$ if $\|c^-\|_{L^\infty(\Omega)} < 2$ (in fact, even if $\|c^-\|_{L^\infty(\Omega)} < \pi^2$).
Problem 7.3 Let $\Omega \subset \mathbb{R}^N$ be an open bounded set and consider the problem
$$-\Delta u + \sum_{i=1}^N b_i\partial_i u + cu = f \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega,$$
with $b_i \in W^{1,\infty}(\Omega)$, $c \in L^\infty(\Omega)$, and $2c - \sum_{i=1}^N \partial_i b_i \ge 0$. Prove that there exists a unique weak solution $u \in H_0^1(\Omega)$.
(see [51]) to show that the weak solutions to the following problems
Problem 7.5 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set, $\nu$ the unit exterior normal vector on $\partial\Omega$, and consider the problem
where $g \in H^{-1/2}(\partial\Omega)$. Write the weak form of this problem and prove that it has a unique $H^1(\Omega)$ weak solution.
Problem 7.6 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set and $\nu$ the unit exterior normal vector on $\partial\Omega$. Furthermore, let $M_0^1(\Omega) = \{u \in H^1(\Omega),\ \int_\Omega u\,dx = 0\}$ and consider the problem
$$-\Delta u = f \ \text{ in } \Omega, \quad f \in (H^1(\Omega))',\ \langle f, 1\rangle = 0, \qquad \partial_\nu u = 0 \ \text{ on } \partial\Omega.$$
i) Write the weak form of this problem.
ii) Prove that this problem has a unique weak solution in $M_0^1(\Omega)$.
Problem 7.7 Let $\Omega \subset \mathbb{R}^N$ be an open bounded connected Lipschitz set and $\gamma$ a relatively open set on $\partial\Omega$ with nonzero $\mathcal{H}^{N-1}$ measure. Furthermore, let $H_\gamma^1(\Omega)$ be the closure for the $H^1(\Omega)$ norm of $\{v \in C^1(\overline\Omega),\ v|_\gamma = 0\}$, and consider the problem: $u \in H_\gamma^1(\Omega)$ solution to
iii) Prove that this problem has a unique weak solution in $H_\gamma^1(\Omega)$.
Problem 7.8 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set and let $u$ be a solution to (7.2.20), with $b = 0$ and $c \ge c_0 > 0$. Show that $L^2(\Omega) \hookrightarrow^d (H^1(\Omega))'$ and, by approximating $f \in (H^1(\Omega))'$ by a sequence $(f_n)$ in $L^2(\Omega)$, give a meaning to (7.2.21) in the case $f \in (H^1(\Omega))'$.
ii) Prove that this problem has a weak solution for every $f \in H^{-1}(\Omega)$ (you may consider the map $A : C^0(\overline\Omega) \to C^0(\overline\Omega)$ with $u = Av$ the solution to $-(a(v)u')' + c(x)u = f$ in $H_0^1(\Omega)$, and use Theorem 7.3.1).
Problem 7.11 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set and consider the problem
$$-\Delta u = e^{-u^2} \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega.$$
ii) Prove that this problem has a unique weak solution (you may apply Theorem 7.3.1 to the map $A : L^q(\Omega) \to L^q(\Omega)$, for a certain $q$, with $u = A(v)$ solving $-\Delta u = e^{-v^2}$ in $H_0^1(\Omega)$).
Problem 7.12 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set, $N \le 5$, and consider the problem:
ii) Prove that this problem has a unique weak solution (you may apply Theorem 7.3.1 to the map $A : L^q(\Omega) \to L^q(\Omega)$, for a certain $q$, with $u = A(v)$ solving $-\Delta u + h(v)u = f$ in $H_0^1(\Omega)$).
8. Second-order parabolic and hyperbolic PDEs

8.1 Heat and wave equations and the method of separation of variables

The explicit solution of the one-dimensional heat and wave equations serves as a motivation for the method of separation of variables in the abstract case that we will consider in this chapter.
$$V' - \lambda V = 0, \qquad W'' - \lambda W = 0. \tag{8.1.2}$$
The equations (8.1.2) are respectively of first and second order, linear, and homogeneous, so their solution spaces are respectively of dimension one and two. Their general solutions are
$$V(t) = \gamma e^{\lambda t}, \qquad W(x) = \begin{cases}\alpha x + \beta, & \lambda = 0,\\ \alpha e^{x\sqrt\lambda} + \beta e^{-x\sqrt\lambda}, & \lambda \neq 0,\end{cases} \qquad \alpha, \beta, \gamma \in \mathbb{R}.$$
Therefore
$$u(x,t) = \begin{cases}e^{\lambda t}(Ax + B), & \lambda = 0,\\ e^{\lambda t}\big(Ae^{x\sqrt\lambda} + Be^{-x\sqrt\lambda}\big), & \lambda \neq 0,\end{cases} \qquad A, B \in \mathbb{R}. \tag{8.1.3}$$
Such a $u$ solves (8.1.1a) for arbitrary $A$, $B$, and $\lambda$. The constants $A$, $B$, and $\lambda$ are chosen such that $u$ solves (8.1.1b) and (8.1.1c). Let us first deal with (8.1.1b). As we must have $u(0,t) = u(\pi,t) = 0$ for all $t \in (0,T)$, in the case $\lambda = 0$ we get $B = A\pi + B = 0$, so $A = B = 0$ and $u = 0$. In the case $\lambda \neq 0$ it follows that $A + B = Ae^{\pi\sqrt\lambda} + Be^{-\pi\sqrt\lambda} = 0$. Hence
$$B = -A, \qquad e^{2\pi\sqrt\lambda} = 1, \qquad \sqrt\lambda = ik, \qquad \lambda = -k^2, \qquad k \in \mathbb{N}.$$
solves (8.1.1a) and (8.1.1b). The coefficients $C_k$ are unknown, but we can define them by imposing the condition (8.1.1c). Replacing $u$ of (8.1.4) in (8.1.1c) implies
$$u(x,0) = \sum_{k\in\mathbb{N}} C_k\sin(kx) = g(x), \qquad x \in (0,\pi). \tag{8.1.5}$$
Using the relations (4.1.10) gives $C_k = \dfrac{2}{\pi}\displaystyle\int_0^\pi \sin(ks)\,g(s)\,ds$, which implies
$$u(x,t) = \frac{2}{\pi}\sum_{k\in\mathbb{N}} e^{-tk^2}\sin(kx)\int_0^\pi \sin(ks)\,g(s)\,ds. \tag{8.1.6}$$
We note that (8.1.6) gives a candidate for a solution to (8.1.1). Ideally, one looks for a "classical solution" to (8.1.1), which is $u \in C^2(\Omega\times(0,T)) \cap C^0(\overline\Omega\times[0,T))$ satisfying (8.1.1) pointwise. Classical solutions require strong assumptions on the data $f$, $u_0$, which narrow the class of $f$ and $u_0$. In this chapter we look for weak solutions, which exist for a large range of data; see Section 8.3.
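The truncated series (8.1.6) is easy to evaluate numerically. In the sketch below, the data $g(x) = \sin(3x)$ is chosen so that the series collapses to the single mode $e^{-9t}\sin(3x)$, which serves as a check; the function `heat_series` and all discretization parameters are mine:

```python
import numpy as np

def heat_series(x, t, g, n_terms=50, n_quad=4000):
    # midpoint rule on (0, pi) for the Fourier sine coefficients C_k of (8.1.6)
    s = (np.arange(n_quad) + 0.5) * np.pi / n_quad
    ds = np.pi / n_quad
    u = np.zeros_like(x)
    for k in range(1, n_terms + 1):
        Ck = (2.0 / np.pi) * np.sum(np.sin(k * s) * g(s)) * ds
        u += Ck * np.exp(-t * k**2) * np.sin(k * x)
    return u

x = np.linspace(0.0, np.pi, 101)
approx = heat_series(x, 0.5, lambda s: np.sin(3 * s))
exact = np.exp(-9 * 0.5) * np.sin(3 * x)   # single surviving mode
print(np.max(np.abs(approx - exact)))      # small quadrature error
```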
Remark 8.1.1 Using the same method as above, we can also solve the problem
$$u_t - c^2\Delta u = 0 \ \text{ in } \Omega\times(0,T) := (0,\ell)\times(0,T), \tag{8.1.7a}$$
$$u = 0 \ \text{ on } \partial\Omega\times(0,T), \tag{8.1.7b}$$
$$u = p \ \text{ on } \Omega\times\{0\}. \tag{8.1.7c}$$
Indeed, we can change the variables as follows:
$$y = \frac{\pi}{\ell}x, \qquad \tau = \Big(\frac{\pi}{\ell}c\Big)^2 t, \qquad v(y,\tau) = u(x,t).$$
If furthermore $g(y) = p(x)$, then $v(y,\tau)$ solves
$$v_\tau - \Delta v = 0 \ \text{ in } (0,\pi)\times\big(0,(\tfrac{\pi}{\ell}c)^2T\big), \tag{8.1.8a}$$
$$v = 0 \ \text{ on } \{0,\pi\}\times\big(0,(\tfrac{\pi}{\ell}c)^2T\big), \tag{8.1.8b}$$
$$v = g \ \text{ on } (0,\pi)\times\{0\}, \tag{8.1.8c}$$
and is given by (8.1.6). Then
$$u(x,t) = \frac{2}{\ell}\sum_{k\in\mathbb{N}} e^{-t(ck\pi/\ell)^2}\sin\Big(k\frac{\pi}{\ell}x\Big)\int_0^\ell \sin\Big(k\frac{\pi}{\ell}s\Big)p(s)\,ds. \tag{8.1.9}$$
The method of separation of variables can be used to solve (8.1.1) with boundary conditions different from (8.1.1b), for example $u_x(0,t) = 0$ and $u(\ell,t) + u_x(\ell,t) = 0$. The method is the same as above, except that after applying these conditions we get different formulas for the coefficients $A$, $B$, $\lambda$, and $C_k$.
by using the method of separation of variables. We can use (8.1.9) with $\ell = \pi$, $g(x) = x(\pi - x)$, and $c = 4$. Integrating by parts,
$$\int_0^\pi \sin(ks)\,g(s)\,ds = \int_0^\pi \sin(ks)\,s(\pi - s)\,ds = 2\,\frac{1 - (-1)^k}{k^3}$$
gives
$$u(x,t) = \frac{4}{\pi}\sum_{k\in\mathbb{N}}\frac{1 - (-1)^k}{k^3}\,e^{-t(4k)^2}\sin(kx) =: \frac{4}{\pi}\sum_{k\in\mathbb{N}} u_k(x,t). \tag{8.1.11}$$
We show that $u$ is a classical solution to (8.1.10). Indeed, note first that the series (8.1.11) converges in $C^0([0,\pi]\times[0,T])$ for arbitrary $T > 0$, because $|u_k(x,t)| \le \frac{2}{k^3}$ for all $(x,t) \in [0,\pi]\times[0,\infty)$. Then the claim follows from the Weierstrass M-test. Therefore, $u \in C^0([0,\pi]\times[0,\infty))$ and $u$ satisfies (8.1.10b), (8.1.10c). As $|\partial_x u_k| + |\partial_t u_k| \le \frac{C}{k^2}$ for all $(x,t) \in [0,\pi]\times[0,\infty)$, by using again the Weierstrass M-test we get $u \in C^1([0,\pi]\times[0,\infty))$.
Furthermore, $u \in C^2((0,\pi)\times(0,\infty))$ and satisfies (8.1.10a). Indeed, all $u_k$ satisfy (8.1.10a). Next we point out that, given $\epsilon > 0$ and $i, j \in \{x, t\}$, we have
$$|\partial^2_{ij} u_k| \le C\,\frac{e^{-\epsilon(4k)^2}}{k} \ \text{ in } [0,\pi]\times[\epsilon,T], \qquad \text{and} \qquad \sum_{k\in\mathbb{N}}\frac{e^{-\epsilon(4k)^2}}{k} < \infty.$$
Then from the Weierstrass M-test the series (8.1.11) converges in $C^2([0,\pi]\times[\epsilon,T])$, and the claim follows from the arbitrariness of $\epsilon$ and $T$. Finally, we point out that $\lim_{t\to\infty} u(x,t) = 0$, because
$$\lim_{t\to\infty}|u(x,t)| \le C\lim_{t\to\infty} e^{-16t}\sum_{k\in\mathbb{N}}\frac{1}{k^3} = 0.$$
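The coefficient computed by the integration by parts above can be verified numerically (midpoint rule; the helper `coeff` and grid size are my choices):

```python
import numpy as np

def coeff(k, n=100000):
    # midpoint-rule approximation of int_0^pi sin(k s) s (pi - s) ds
    s = (np.arange(n) + 0.5) * np.pi / n
    return np.sum(np.sin(k * s) * s * (np.pi - s)) * np.pi / n

# compare with 2 (1 - (-1)^k) / k^3, i.e. 4/k^3 for odd k and 0 for even k
for k in range(1, 6):
    print(k, coeff(k), 2 * (1 - (-1) ** k) / k ** 3)
```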
Disregarding the issues of existence and uniqueness, we are interested only in finding a formula for the solution $u$. We assume that $u = u(x,t)$ can be written as $u(x,t) = V(t)W(x)$. Replacing $u$ in (8.1.12a) gives
$$V'' - \lambda V = 0, \qquad W'' - \lambda W = 0. \tag{8.1.13}$$
The equations (8.1.13) are of second order, linear, and homogeneous, so their solution spaces are of dimension two. The general solutions are of the form
$$V(t) = \begin{cases}At + B, & \lambda = 0,\\ Ae^{t\sqrt\lambda} + Be^{-t\sqrt\lambda}, & \lambda \neq 0,\end{cases} \qquad W(x) = \begin{cases}Cx + D, & \lambda = 0,\\ Ce^{x\sqrt\lambda} + De^{-x\sqrt\lambda}, & \lambda \neq 0,\end{cases}$$
and $u = VW$ is always a solution to (8.1.12a). The constants $A$, $B$, $C$, $D$, and $\lambda$ are chosen such that $u$ solves (8.1.12b), (8.1.12c). Let us first deal with (8.1.12b). The conditions $u(0,t) = u(\pi,t) = 0$, $t \in (0,T)$, are equivalent to
$$\begin{cases}D = C\pi + D = 0, & \lambda = 0,\\ C + D = Ce^{\pi\sqrt\lambda} + De^{-\pi\sqrt\lambda} = 0, & \lambda \neq 0.\end{cases}$$
Then necessarily
$$\begin{cases}C = D = 0, & \lambda = 0,\\ D = -C,\ \ e^{2\pi\sqrt\lambda} = 1,\ \ \sqrt\lambda = ik,\ \ \lambda = -k^2,\ k \in \mathbb{N}, & \lambda \neq 0.\end{cases}$$
It follows that
$$u(x,t) = \sum_{k\in\mathbb{N}} u_k(x,t) = \sum_{k\in\mathbb{N}}\big(A_k\cos(kt) + B_k\sin(kt)\big)\sin(kx) \tag{8.1.15}$$
solves (8.1.12a) and (8.1.12b). Now it remains to find $A_k$ and $B_k$. They are chosen such that $u$ given by (8.1.15) satisfies (8.1.12c), which implies
$$u(x,0) = \sum_{k\in\mathbb{N}} A_k\sin(kx) = g(x), \qquad u_t(x,0) = \sum_{k\in\mathbb{N}} B_k\,k\sin(kx) = h(x). \tag{8.1.16}$$
If $g_k$ and $h_k$ denote the coefficients of the sine expansions $g = \sum_k g_k\sin(kx)$ and $h = \sum_k h_k\sin(kx)$, then from (8.1.16) we obtain $A_k = g_k$, $B_k = \dfrac{h_k}{k}$, and therefore
$$u(x,t) = \sum_{k\in\mathbb{N}}\Big(g_k\cos(kt) + \frac{h_k}{k}\sin(kt)\Big)\sin(kx) \tag{8.1.17}$$
solves (8.1.12).
Like (8.1.6), (8.1.17) gives a candidate for the solution to (8.1.12). At first attempt, one looks for a "classical solution" to (8.1.12), which is $u \in C^2(\Omega\times(0,T)) \cap C^1(\overline\Omega\times[0,T)) \cap C^0(\overline\Omega\times[0,T))$ satisfying (8.1.12) pointwise. Classical solutions require strong assumptions on the data $f$, $u_0$, $u_1$. We will study the existence of weak solutions, which exist for a large range of data $f$, $u_0$, and $u_1$; see Section 8.4.
Remark 8.1.3 Using the same method we can solve the problem
$$u_{tt} - c^2u_{xx} = 0 \ \text{ in } \Omega\times(0,T) := (0,\ell)\times(0,T), \tag{8.1.18}$$
$$u = 0 \ \text{ on } \partial\Omega\times(0,T), \tag{8.1.19}$$
$$u = p, \quad u_t = q \ \text{ on } \Omega\times\{0\}. \tag{8.1.20}$$
Indeed, we can change the variables as follows:
$$y = \frac{\pi}{\ell}x, \qquad \tau = \frac{\pi}{\ell}c\,t, \qquad v(y,\tau) = u(x,t).$$
Note that we have
$$u_{tt}(x,t) = \Big(\frac{\pi}{\ell}c\Big)^2 v_{\tau\tau}(y,\tau), \qquad c^2u_{xx}(x,t) = \Big(\frac{\pi}{\ell}c\Big)^2 v_{yy}(y,\tau).$$
with $F$ and $G$ being two $C^2(\mathbb{R})$ functions. The functions $F(x-ct)$ and $G(x+ct)$ are called "traveling wave solutions"; more precisely, $F(x-ct)$ is called the "forward wave", while $G(x+ct)$ is called the "backward wave". The curves $x \pm ct = x_0$, $x_0 \in \mathbb{R}$, are called "characteristic curves" of the wave equation.
To identify $F$ and $G$ from the data $g$ and $h$, we replace (8.1.26) in (8.1.25b) and get
It follows that
$$F(x) + G(x) = g(x), \qquad c\big(-F(x) + G(x)\big) = c\big(-F(0) + G(0)\big) + \int_0^x h(s)\,ds.$$
Hence
$$F(x) = \frac12 g(x) + \frac12\big(F(0) - G(0)\big) - \frac{1}{2c}\int_0^x h(s)\,ds,$$
$$G(x) = \frac12 g(x) + \frac12\big(-F(0) + G(0)\big) + \frac{1}{2c}\int_0^x h(s)\,ds,$$
and
$$u(x,t) = F(x-ct) + G(x+ct) = \frac12\big(g(x-ct) + g(x+ct)\big) + \frac{1}{2c}\int_{x-ct}^{x+ct} h(s)\,ds. \tag{8.1.27}$$
The solution given by the formula (8.1.27) is called the "d'Alembert solution to the wave equation (8.1.25)".
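D'Alembert's formula (8.1.27) can be checked numerically. In the sketch below the data $g$, $h$ and the constant $c = 2$ are my choices, with $h$ picked so that its integral is explicit; centered finite differences confirm the PDE at a sample point:

```python
import numpy as np

# d'Alembert's solution with g(x) = exp(-x^2) and h(s) = s exp(-s^2), whose
# antiderivative is H(x) = -exp(-x^2)/2, so the integral in (8.1.27) is exact.
c = 2.0
g = lambda x: np.exp(-x**2)
H = lambda x: -np.exp(-x**2) / 2.0

def u(x, t):
    return 0.5 * (g(x - c*t) + g(x + c*t)) + (H(x + c*t) - H(x - c*t)) / (2*c)

x0, t0, d = 0.3, 0.7, 1e-4
utt = (u(x0, t0 + d) - 2*u(x0, t0) + u(x0, t0 - d)) / d**2
uxx = (u(x0 + d, t0) - 2*u(x0, t0) + u(x0 - d, t0)) / d**2
print(u(x0, 0.0) - g(x0), utt - c**2 * uxx)  # both ~ 0
```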
8.2 Some preliminary results

In the previous section the solution to the heat equation took the form
$$u(t) := u(\cdot,t) = \sum_{n=1}^\infty c_n(t)\sin(n\,\cdot), \qquad \text{in } \Omega = (0,\pi),$$
with appropriate functions $c_n$. Hence, $u(\cdot,t)$ belongs to $\overline{\mathrm{span}}\{u_n = \sin(nx),\ n \in \mathbb{N}\}$. Note that $u_n$ and $\lambda_n = n^2$, $n = 1, 2, \dots$, are the only eigenfunction/eigenvalue pairs of the operator $-\Delta : H_0^1(\Omega) \cap H^2(\Omega) \to L^2(\Omega)$. Note also that, if $T$ denotes the inverse of $-\Delta$, $T = (-\Delta)^{-1}$, then $T$ is a compact self-adjoint operator, and $u_n$, $n^{-2}$ are the only eigenfunction/eigenvalue pairs of the operator $T$.
The property of writing the solution as a series of eigenfunctions un of T , often
called “spectral decomposition”, holds in general for any self-adjoint (symmetric) com-
pact operator T in a separable Hilbert space. It allows one to look for the solution to a
certain PDE involving such an operator T as a series of eigenfunctions of T . We have
this general result; see, for example, [5, 13, 29].
Theorem 8.2.1 (Spectral decomposition) Let (H, (·, ·)) be a separable Hilbert space
and T : H → H be a self-adjoint compact operator, i.e.
[self-adjointness] : (T u, v) = (u, T v) for every u, v ∈ H, and (8.2.1)
[compactness] : for every bounded sequence (un ) in H, (T (un )) has
a convergent subsequence in H. (8.2.2)
Then there exists a countably infinite orthonormal basis (ϕn ) of H consisting of eigen-
vectors of T , with corresponding eigenvalues {μn } ⊂ R and limn→∞ μn = 0.
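The finite-dimensional analogue of Theorem 8.2.1 is the spectral theorem for symmetric matrices, which can be observed directly (the matrix below is an arbitrary symmetric example):

```python
import numpy as np

# A real symmetric matrix T (self-adjoint, and trivially compact on R^n) has an
# orthonormal basis of eigenvectors: the columns of Phi, with eigenvalues mu.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
T = (M + M.T) / 2.0                  # symmetric: (Tu, v) = (u, Tv)

mu, Phi = np.linalg.eigh(T)          # eigenvalues and orthonormal eigenvectors
print(np.max(np.abs(Phi.T @ Phi - np.eye(6))),   # orthonormality of the basis
      np.max(np.abs(T @ Phi - Phi * mu)))        # T phi_n = mu_n phi_n
```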
We will restrict the analysis of (8.0.1) and (8.0.2) to the cases where $L$ is given by
$$L : H_0^1(\Omega) \to H^{-1}(\Omega), \qquad u \mapsto Lu = -\sum_{i,j=1}^N \partial_j(a_{ij}\partial_i u) + cu, \tag{8.2.3}$$
$$\langle Lu, \varphi\rangle = \int_\Omega\Big(\sum_{i,j=1}^N a_{ij}\,\partial_i u\,\partial_j\varphi + cu\varphi\Big)dx, \qquad \forall \varphi \in H_0^1(\Omega).$$
Here and throughout this chapter, unless otherwise stated, $\Omega \subset \mathbb{R}^N$ is an open bounded set, and $a_{ij}$ and $c$ satisfy
$$A := (a_{ij}) = (a_{ji}) \in W^{1,\infty}(\Omega;\mathbb{R}^{N\times N}), \qquad c \in L^\infty(\Omega), \tag{8.2.4}$$
$$\exists k, K > 0,\ \forall\, 0 \neq \xi \in \mathbb{R}^N:\quad k|\xi|^2 \le \sum_{i,j=1}^N a_{ij}(\cdot)\xi_i\xi_j \le K|\xi|^2 \quad \text{a.e. in } \Omega. \tag{8.2.5}$$
c) Similarly, $\{\psi_n\} := \{\varphi_n/\sqrt{\lambda_n}\}$ is an orthonormal basis of $H_0^1(\Omega)$ for the inner product $((u,v))$ and
$$H_0^1(\Omega) = \Big\{u = \sum_{n=1}^\infty (u,\varphi_n)\varphi_n,\ \ \|u\|^2 = \sum_{n=1}^\infty \lambda_n(u,\varphi_n)^2 < \infty\Big\}. \tag{8.2.7}$$
d) Let
$$\mathrm{dom}(L) = \Big\{u = \sum_{n=1}^\infty (u,\varphi_n)\varphi_n,\ \ |u|^2_{\mathrm{dom}(L)} := \sum_{n=1}^\infty \lambda_n^2(u,\varphi_n)^2 < \infty\Big\}.$$
Then $\mathrm{dom}(L) \subset L^2(\Omega)$, $Lu = \sum_{n=1}^\infty \lambda_n(u,\varphi_n)\varphi_n \in L^2(\Omega)$, and $\|Lu\|_{L^2(\Omega)} = |u|_{\mathrm{dom}(L)}$.
Proof. The claim a) follows from Theorem 7.2.8 and the compact embedding $H_0^1(\Omega) \hookrightarrow^c L^2(\Omega)$. The claim b) follows from the classical theorem of spectral decomposition of compact self-adjoint operators; see Theorem 8.2.1. For c), clearly $\{\psi_n\}$ is orthonormal in $H_0^1(\Omega)$, because
$$((\psi_m,\psi_n)) = \langle L\psi_m, \psi_n\rangle = \lambda_m(\psi_m,\psi_n) = \sqrt{\frac{\lambda_m}{\lambda_n}}\,(\varphi_m,\varphi_n).$$
Furthermore, $\{\psi_n,\ n \in \mathbb{N}\}$ is dense in $H_0^1(\Omega)$, because if $u \in H_0^1(\Omega)$ and $((u,\psi_n)) = 0$ for all $n \in \mathbb{N}$, then
$$0 = ((u,\psi_n)) = \langle L\psi_n, u\rangle = \lambda_n(\psi_n, u) = \sqrt{\lambda_n}\,(\varphi_n, u)$$
and
$$\|u\|^2 = \sum_{n=1}^\infty ((u,\psi_n))^2 = \sum_{n=1}^\infty \langle L\psi_n, u\rangle^2 = \sum_{n=1}^\infty \lambda_n^2(\psi_n, u)^2 = \sum_{n=1}^\infty \lambda_n(u,\varphi_n)^2.$$
For d), let $u \in \mathrm{dom}(L)$, so $u = \sum_{n=1}^\infty (u,\varphi_n)\varphi_n$ and $\sum_{n=1}^\infty \lambda_n^2(u,\varphi_n)^2 < \infty$. Clearly
$$\langle Lu, \varphi\rangle = \langle u, L\varphi\rangle = \Big(\sum_{n=1}^\infty (u,\varphi_n)\varphi_n,\ L\varphi\Big) = \sum_{n=1}^\infty (u,\varphi_n)\langle L\varphi_n, \varphi\rangle = \sum_{n=1}^\infty \lambda_n(u,\varphi_n)(\varphi_n,\varphi) = \Big(\sum_{n=1}^\infty \lambda_n(u,\varphi_n)\varphi_n,\ \varphi\Big),$$
which shows that $Lu = \sum_{n=1}^\infty \lambda_n(u,\varphi_n)\varphi_n$ and $\|Lu\|_{L^2(\Omega)} = |u|_{\mathrm{dom}(L)}$.
Finally, to show that $\mathrm{dom}(L)$ is a Hilbert space it is enough to show that it is complete. Let $(u_n)$ be a Cauchy sequence in $\mathrm{dom}(L)$. Then $(u_n)$ and $(Lu_n)$ are Cauchy sequences in $L^2(\Omega)$, so they converge in $L^2(\Omega)$ to $u$ and $v$, respectively. Necessarily $v = Lu$, because for $\varphi \in \mathcal{D}(\Omega)$ we have
where $\langle f, v\rangle$ means the duality $H^{-1}(\Omega) \times H_0^1(\Omega)$ and the convergence of the series is in $H^{-1}(\Omega)$. We leave the proof as an exercise; see Problem 8.9.
We close this section by introducing the following concepts and associated results. For the proofs of these results and more details, the reader can see, for example, [15, 36]. Let $(Y, \|\cdot\|)$ be a Banach space, $a, b \in \mathbb{R}$, $a < b$, $k \in \mathbb{N}_0$, $p \in [1,\infty]$. We have introduced the spaces $C^k((a,b); Y)$; see Definition 1.1.6 with $\Omega = (a,b)$ and $X = \mathbb{R}$. The spaces $C^k([a,b]; Y)$ and $L^p(a,b; Y)$ are defined by
$$C^k([a,b]; Y) = \big\{u : [a,b] \to Y,\ u^{(i)} \text{ extends as a } C^0([a,b]; Y) \text{ function for all } i = 0,\dots,k\big\},$$
equipped with
$$\|u\|_{C^k([a,b];Y)} = \sum_{i=0}^k \sup_{t\in[a,b]} \|u^{(i)}(t)\|_Y, \tag{8.2.12}$$
$$L^p(a,b; Y) = \big\{t \in (a,b) \mapsto u(t) \in Y \text{ measurable with } \|u\|_{L^p(a,b;Y)} < \infty\big\}, \text{ where}$$
$$\|u\|^p_{L^p(a,b;Y)} = \int_a^b \|u(t)\|_Y^p\,dt \ \text{ if } p \in [1,\infty), \qquad \|u\|_{L^\infty(a,b;Y)} = \big\|\,\|u(t)\|_Y\big\|_{L^\infty(a,b)} \ \text{ if } p = \infty. \tag{8.2.13}$$
These spaces, equipped with the respective norms, are Banach spaces. If $Y$ is a separable Hilbert space with $\{\varphi_n,\ n \in \mathbb{N}\}$ an orthonormal basis, then for every $u \in L^2(a,b; Y)$ we have
$$u(t) = \sum_{n\in\mathbb{N}} (u(t),\varphi_n)_Y\,\varphi_n =: \sum_{n\in\mathbb{N}} u_n(t)\varphi_n, \tag{8.2.14}$$
$$\|u\|^2_{L^2(a,b;Y)} = \int_a^b \|u(t)\|_Y^2\,dt = \sum_{n\in\mathbb{N}} \int_a^b |u_n(t)|^2\,dt, \qquad u_n \in L^2(a,b). \tag{8.2.15}$$
8.3 Weak solution to the heat equation

Projecting the equation (8.0.1) onto the basis $(\varphi_n)$ implies
$$u_n'(t) + \lambda_n u_n(t) = f_n(t), \qquad u_n(0) = u_{0,n}, \qquad \forall n \in \mathbb{N}. \tag{8.3.2}$$
This equation can be written equivalently as $(e^{\lambda_n t}u_n)' = e^{\lambda_n t}f_n(t)$, which has the unique solution
$$u_n(t) = e^{-\lambda_n t}u_{0,n} + \int_0^t e^{\lambda_n(s-t)}f_n(s)\,ds. \tag{8.3.3}$$
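The integrating-factor solution can be checked for constant data, where the integral is explicit (the values $\lambda = 4$, $f = 3$, $u_0 = 1$ are arbitrary):

```python
import numpy as np

# For constant f the solution reduces to
# u(t) = e^{-lam t} u0 + (f0/lam)(1 - e^{-lam t}); we verify u(0) = u0 and the
# ODE residual u' + lam u - f0 ~ 0 via a centered difference quotient.
lam, f0, u0 = 4.0, 3.0, 1.0
u = lambda t: np.exp(-lam * t) * u0 + (f0 / lam) * (1.0 - np.exp(-lam * t))

t, d = 0.8, 1e-6
residual = (u(t + d) - u(t - d)) / (2 * d) + lam * u(t) - f0
print(u(0.0), residual)  # u(0) = u0, residual ~ 0
```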
Theorem 8.3.2 Let $L$ be as in (8.2.3) and assume (8.2.4), (8.2.5) hold, $u_0 \in L^2(\Omega)$, and $f \in L^2(0,T;H^{-1}(\Omega))$. Then the problem (8.0.1) has a unique weak solution $u \in C^0([0,T];L^2(\Omega)) \cap L^2(0,T;H_0^1(\Omega))$, given by
$$u(t)(x) = \sum_{n=1}^\infty \Big(e^{-\lambda_n t}u_{0,n} + \int_0^t e^{\lambda_n(s-t)}f_n(s)\,ds\Big)\varphi_n(x) =: e^{-tL}u_0 + \int_0^t e^{(s-t)L}f(s)\,ds, \tag{8.3.4}$$
Proof. Set $S_u^k(t) = \sum_{n=1}^k u_n(t)\varphi_n$, $S_0^k = \sum_{n=1}^k u_{0,n}\varphi_n$, and $S_f^k(t) = \sum_{n=1}^k f_n(t)\varphi_n$. From (8.2.6) and (8.2.7) we have
$$\|S_u^k\|^2_{C^0([0,T];L^2(\Omega))} = \sup_{t\in[0,T]} \sum_{n=1}^k |u_n(t)|^2, \tag{8.3.5}$$
$$\|S_u^k\|^2_{L^2(0,T;H_0^1(\Omega))} = \sum_{n=1}^k \lambda_n \int_0^T |u_n(t)|^2\,dt. \tag{8.3.6}$$
Claim: $u$ satisfies (8.3.1a) and (8.3.1b). Indeed, multiplying (8.3.2) by $2u_n$ and integrating in $(0,t) \subset (0,T)$ gives
$$\int_0^t (u_n^2(s))'\,ds + 2\lambda_n\int_0^t |u_n(s)|^2\,ds = 2\int_0^t f_n(s)u_n(s)\,ds, \quad \text{which implies}$$
$$u_n^2(t) + 2\lambda_n\int_0^t |u_n(s)|^2\,ds \le u_{0,n}^2 + \frac{1}{\lambda_n}\int_0^t |f_n(s)|^2\,ds + \lambda_n\int_0^t |u_n(s)|^2\,ds.$$
$$\|S_u^{k+l}(t) - S_u^k(t)\|^2_{L^2(\Omega)} + \|S_u^{k+l} - S_u^k\|^2_{L^2(0,T;H_0^1(\Omega))} \le \|S_0^{k+l} - S_0^k\|^2_{L^2(\Omega)} + \|S_f^{k+l} - S_f^k\|^2_{L^2(0,T;H^{-1}(\Omega))}, \tag{8.3.7}$$
which shows that $(S_u^k)$ is a Cauchy sequence in $C^0([0,T];L^2(\Omega)) \cap L^2(0,T;H_0^1(\Omega))$, because $(S_0^k)$ converges to $u_0$ in $L^2(\Omega)$ and $(S_f^k)$ converges to $f$ in $L^2(0,T;H^{-1}(\Omega))$. As $C^0([0,T];L^2(\Omega)) \cap L^2(0,T;H_0^1(\Omega))$ is Banach, the sequence $(S_u^k)$ converges to $u$ in this space. Also, from $S_u^k(0) = S_0^k$ and $(S_u^k)$ converging to $u$ in $C^0([0,T];L^2(\Omega))$, we get $u(0) = u_0$, which proves the claim.
Claim: $u$ satisfies (8.3.1c). Indeed, the equation (8.3.2) is equivalent to
$$\frac{d}{dt}(S_u^k,\varphi_n) + \langle LS_u^k,\varphi_n\rangle = \langle S_f^k,\varphi_n\rangle, \qquad \forall n \in \{1,2,\dots,k\}.$$
Multiplying this equation by $\psi = \psi(t) \in \mathcal{D}(0,T)$ and integrating by parts in $(0,T)$ gives
$$-\int_0^T (S_u^k,\varphi_n)\psi'\,dt + \int_0^T \langle LS_u^k,\varphi_n\rangle\psi\,dt = \int_0^T \langle S_f^k,\varphi_n\rangle\psi\,dt. \tag{8.3.8}$$
Since $\lim_{k\to\infty}\|S_u^k - u\|_{L^2(0,T;H_0^1(\Omega))} = 0$ and $\lim_{k\to\infty}\|S_f^k - f\|_{L^2(0,T;H^{-1}(\Omega))} = 0$, we get
$$\lim_{k\to\infty}\int_0^T (S_u^k,\varphi_n)\psi'\,dt = \int_0^T (u,\varphi_n)\psi'\,dt, \tag{8.3.9}$$
$$\lim_{k\to\infty}\int_0^T \langle LS_u^k,\varphi_n\rangle\psi\,dt = \lim_{k\to\infty}\int_0^T (S_u^k, L\varphi_n)\psi\,dt = \int_0^T (u, L\varphi_n)\psi\,dt = \int_0^T \langle Lu,\varphi_n\rangle\psi\,dt, \quad\text{and} \tag{8.3.10}$$
$$\lim_{k\to\infty}\int_0^T \langle S_f^k,\varphi_n\rangle\psi\,dt = \int_0^T \langle f,\varphi_n\rangle\psi\,dt. \tag{8.3.11}$$
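For $L = -d^2/dx^2$ on $(0,\pi)$ (so $\lambda_n = n^2$) and $f = 0$, the representation (8.3.4) together with Parseval's identity gives the decay $\|u(t)\|_{L^2} \le e^{-t}\|u_0\|_{L^2}$, which the sketch below illustrates (the coefficients $u_{0,n}$ are arbitrary):

```python
import numpy as np

u0 = np.array([1.0, -0.5, 0.3, 0.1])   # coefficients (u_0, phi_n), n = 1..4
lam = np.arange(1, 5) ** 2             # eigenvalues lambda_n = n^2

def l2_norm(t):
    # Parseval: ||u(t)||^2 = sum_n e^{-2 lambda_n t} (u_0, phi_n)^2
    return np.sqrt(np.sum((np.exp(-lam * t) * u0) ** 2))

print(l2_norm(0.0), l2_norm(1.0), np.exp(-1.0) * l2_norm(0.0))
```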
8.4 Weak solution to the wave equation
Definition 8.4.1 Given $u_0 \in H_0^1(\Omega)$, $u_1 \in L^2(\Omega)$, and $f \in L^2(0,T;L^2(\Omega))$, a function $u$ is called a "weak, or variational, solution to (8.0.2)" if
$$\sum_{n=1}^\infty \big(u_n''(t) + \lambda_n u_n(t) - f_n(t)\big)\varphi_n(x) = 0, \qquad t > 0,\ x \in \Omega,$$
which implies
$$u_n''(t) + \lambda_n u_n(t) = f_n(t), \qquad u_n(0) = u_{0,n}, \qquad u_n'(0) = u_{1,n}. \tag{8.4.2}$$
Theorem 8.4.2 Let $L$ be as in (8.2.3) with (8.2.4), (8.2.5) satisfied, $u_0 \in H_0^1(\Omega)$, $u_1 \in L^2(\Omega)$, and $f \in L^2(0,T;L^2(\Omega))$. Then the problem (8.4.1) has a unique solution given by
$$u(t) = \sum_{n=1}^\infty \Big(\cos(t\sqrt{\lambda_n})\,u_{0,n} + \frac{1}{\sqrt{\lambda_n}}\sin(t\sqrt{\lambda_n})\,u_{1,n} + \int_0^t \frac{1}{\sqrt{\lambda_n}}\sin\big((t-s)\sqrt{\lambda_n}\big)f_n(s)\,ds\Big)\varphi_n \tag{8.4.4}$$
$$=: \cos(t\sqrt{L})u_0 + \frac{1}{\sqrt{L}}\sin(t\sqrt{L})u_1 + \int_0^t \frac{1}{\sqrt{L}}\sin\big((t-s)\sqrt{L}\big)f(s)\,ds, \tag{8.4.5}$$
where $u_n$ are given by (8.4.3b).
Proof. Like in Theorem 8.3.2, the proof goes in several steps. Set $S_u^k(t) = \sum_{n=1}^k u_n(t)\varphi_n$, $S_0^k = \sum_{n=1}^k u_{0,n}\varphi_n$, $S_1^k = \sum_{n=1}^k u_{1,n}\varphi_n$, and $S_f^k(t) = \sum_{n=1}^k f_n(t)\varphi_n$. It is clear that $(S_0^k)$ converges to $u_0$ in $H_0^1(\Omega)$, $(S_1^k)$ converges to $u_1$ in $L^2(\Omega)$, and $(S_f^k)$ converges to $f$ in $L^2(0,T;L^2(\Omega))$. We note also that, from (8.2.6) and (8.2.7), we have
$$\|S_u^k\|^2_{C^1([0,T];L^2(\Omega))} = \sup_{t\in[0,T]}\sum_{n=1}^k |u_n(t)|^2 + \sup_{t\in[0,T]}\sum_{n=1}^k |u_n'(t)|^2, \tag{8.4.6}$$
$$\|S_u^k\|^2_{C^0([0,T];H_0^1(\Omega))} = \sup_{t\in[0,T]}\sum_{n=1}^k \lambda_n|u_n(t)|^2. \tag{8.4.7}$$
Claim: $u$ satisfies (8.4.1a) and (8.4.1b). Indeed, multiplying (8.4.2) by $2u_n'$ and integrating in $(0,t) \subset (0,T)$ gives
$$\int_0^t (|u_n'(s)|^2)'\,ds + \lambda_n\int_0^t (|u_n(s)|^2)'\,ds = 2\int_0^t f_n(s)u_n'(s)\,ds.$$
It implies
$$|u_n'(t)|^2 + \lambda_n|u_n(t)|^2 \le |u_{1,n}|^2 + \lambda_n|u_{0,n}|^2 + \int_0^t |u_n'(s)|^2\,ds + \int_0^t |f_n(s)|^2\,ds. \tag{8.4.8}$$
This implies
$$\int_0^T |u_n'(t)|^2\,dt \le C\Big(\lambda_n u_{0,n}^2 + u_{1,n}^2 + \int_0^T |f_n(t)|^2\,dt\Big),$$
which shows that (Suk ) is a Cauchy in C 1 ([0, T ]; L2 (Ω)) ∩ C 0 ([0, T ]; H01 (Ω)). As this space
is Banach, the sequence (Suk ) converges to u in C 1 ([0, T ]; L2 (Ω))∩C 0 ([0, T ]; H01 (Ω)). Also
from Suk (0) = S0k and (Suk ) (0) = S1k we get u(0) = u0 , u (0) = u1 , which proves the claim.
Claim: u satisfies (8.4.1c). The equation (8.4.2) is equivalent to
Multiplying this equation by $\psi = \psi(t) \in \mathcal{D}(0,T)$ and integrating by parts in $(0,T)$ gives
$$-\int_0^T ((S_u^k)', \varphi_n)\psi'\,dt + \int_0^T \langle LS_u^k, \varphi_n\rangle\psi\,dt = \int_0^T (S_f^k, \varphi_n)\psi\,dt. \tag{8.4.10}$$
From $\lim_{k\to\infty}\|(S_u^k)' - u'\|_{C^0([0,T];L^2(\Omega))} = 0$, $\lim_{k\to\infty}\|S_u^k - u\|_{C^0([0,T];H_0^1(\Omega))} = 0$, and $\lim_{k\to\infty}\|S_f^k - f\|_{L^2(0,T;L^2(\Omega))} = 0$, we get
$$\lim_{k\to\infty}\int_0^T ((S_u^k)', \varphi_n)\psi'\,dt = \int_0^T (u', \varphi_n)\psi'\,dt, \tag{8.4.11}$$
$$\lim_{k\to\infty}\int_0^T \langle LS_u^k, \varphi_n\rangle\psi\,dt = \lim_{k\to\infty}\int_0^T \langle S_u^k, L\varphi_n\rangle\psi\,dt = \int_0^T \langle u, L\varphi_n\rangle\psi\,dt = \int_0^T \langle Lu, \varphi_n\rangle\psi\,dt, \quad\text{and} \tag{8.4.12}$$
$$\lim_{k\to\infty}\int_0^T (S_f^k, \varphi_n)\psi\,dt = \int_0^T (f, \varphi_n)\psi\,dt. \tag{8.4.13}$$
Therefore, letting $k\to\infty$ in (8.4.10) combined with (8.4.11), (8.4.12), and (8.4.13) gives
$$-\int_0^T (u', \varphi_n)\psi'\,dt + \int_0^T \langle Lu, \varphi_n\rangle\psi\,dt = \int_0^T (f, \varphi_n)\psi\,dt, \qquad \forall n\in\mathbb{N}. \tag{8.4.14}$$
In this chapter, we considered the linear heat and wave equations (8.0.1), (8.0.2), with the main part given by the linear operator $L$ in (8.2.3). The main ingredient of the analysis was the spectral decomposition of the spaces $L^2(\Omega)$, $H_0^1(\Omega)$, and $H^{-1}(\Omega)$. There are theories associated with (8.0.1), (8.0.2) in a more abstract context. For more details on this topic we refer the reader to, for example, [11, 27, 30, 53].
Problems
Problem 8.1 Using the method of separation of variables, solve the following PDEs. For each of them, consider whether the solution is classical or not, and find $\lim_{t\to\infty}u(x,t)$.
Problem 8.3 Using the method of separation of variables, solve the following PDEs.
Problem 8.4 Let $u_0(x)$, $u_1(x)$, $f(x,t)$ be given smooth functions. Show that
$$u(x,t) = \frac{1}{2}\big(u_0(x-ct) + u_0(x+ct)\big) + \frac{1}{2c}\int_{x-ct}^{x+ct} u_1(s)\,ds + \frac{1}{2c}\int_0^t\int_{x-c(t-s)}^{x+c(t-s)} f(r,s)\,dr\,ds$$
is a classical solution to $u_{tt} - c^2u_{xx} = f$ in $\mathbb{R}\times(0,\infty)$, with $u(\cdot,0) = u_0$ and $u_t(\cdot,0) = u_1$.
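The d'Alembert formula of Problem 8.4 can be spot-checked symbolically. The sketch below makes illustrative choices ($u_0 = \sin$, $u_1 = \cos$, $f(x,t) = xt$, $c = 3$) that are not part of the problem statement:

```python
import sympy as sp

# Symbolic check of the d'Alembert formula with source term, for the
# illustrative data u0 = sin, u1 = cos, f(x, t) = x*t and speed c = 3.
x, t, s, r = sp.symbols('x t s r', real=True)
c = 3
u0, u1 = sp.sin, sp.cos
f = lambda xx, tt: xx * tt

u = (sp.Rational(1, 2) * (u0(x - c * t) + u0(x + c * t))
     + sp.integrate(u1(s), (s, x - c * t, x + c * t)) / (2 * c)
     + sp.integrate(sp.integrate(f(r, s), (r, x - c * (t - s), x + c * (t - s))),
                    (s, 0, t)) / (2 * c))

# Residual of the wave equation u_tt - c^2 u_xx = f, and the initial data.
pde_residual = sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2) - f(x, t))
print(pde_residual)                           # 0
print(sp.simplify(u.subs(t, 0)))              # sin(x)
print(sp.simplify(sp.diff(u, t).subs(t, 0)))  # cos(x)
```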
Problem 8.5 Find the d'Alembert formula of Problem 8.4 by using the formula (3.0.3) for $v_t - v_x = f$ and $u_t + u_x = v$.
(a) $u_{tt} - 4u_{xx} = 0$ in $\mathbb{R}\times(0,\infty)$, with $u(\cdot,0) = 1_{(-1,1)}$, $u_t(\cdot,0) = 1_{(0,1)}$, and $t_0$ the solution to $u(2,t_0) = \max\{u(2,t),\ t\ge 0\}$.
(b) $u_{tt} - 9u_{xx} = 0$ in $\mathbb{R}\times(0,\infty)$, with $u(\cdot,0) = 5\cdot 1_{[-1,1]}$, $u_t(\cdot,0) = 1_{[-1,1]}$. Here, for a given $x_0$, $t_0$ solves $u(x_0,t_0) = \max\{u(x_0,t),\ t\ge 0\}$.¹
Problem 8.9 Prove (8.2.8). As an application, take $Lu = -u''$ in $\Omega = (0,2\pi)$ and write $\delta_\pi(x) := \delta_0(x-\pi)$ as a series in terms of eigenfunctions of $L$.
¹ This problem has an application: if $u$ is the pressure after a shock wave generated by an explosion and a building located at the position $x_0$ supports a pressure up to 4 ($u$ is a dimensionless variable), then whether this building collapses depends on the sign of $\delta := 4 - \max\{u(x_0,t),\ t>0\}$: it collapses if $\delta < 0$ and it does not if $\delta \ge 0$.
Problem 8.11 Let $u$ be the solution to the problem (8.0.1) as given by Theorem 8.3.2.
(a) Prove that $u \in H^1(0,T;H^{-1}(\Omega))$ and therefore (8.0.1a) holds in $L^2(0,T;H^{-1}(\Omega))$.
Problem 8.12 Let $u$ be the solution to the problem (8.0.1) as given by Theorem 8.3.2. By using the density of $\{\zeta(x)\psi(t),\ \zeta\in\mathcal D(\Omega),\ \psi\in\mathcal D(0,T)\}$ in $\mathcal D(\Omega\times(0,T))$ show that $u$ satisfies
$$u' + Lu = f \quad\text{in } \mathcal D'(\Omega\times(0,T)).$$
Problem 8.13 Show that the problem (8.0.1) with (8.0.1b) replaced by $u(0) = u(T)$ has a unique solution in $C_T^0([0,T];L^2(\Omega))\cap L^2(0,T;H_0^1(\Omega))$ in the sense of Definition 8.3.1 (with $u(0) = u_0$ replaced by $u(0) = u(T)$) for every $f \in C_T^0([0,T];L^2(\Omega)) := \{g\in C^0([0,T];L^2(\Omega)),\ g(0) = g(T)\}$.
Problem 8.14 Let $u$ be the solution to (8.0.2) as given by Theorem 8.4.2. Prove that
$$E(t) := \int_\Omega |u'(x,t)|^2\,dx + \langle Lu(\cdot,t), u(\cdot,t)\rangle = \int_\Omega |u_1(x)|^2\,dx + \langle Lu_0, u_0\rangle + 2\int_0^t\!\!\int_\Omega f(x,s)u'(x,s)\,dx\,ds, \quad t\in(0,T). \tag{8.4.15}$$
Hence, if f = 0 in a certain interval (a, b) ⊂ (0, T ) then E(t) remains constant in (a, b).
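The conservation statement of Problem 8.14 can be seen numerically mode by mode. The sketch assumes the concrete case $Lu = -u''$ on $(0,\pi)$ with $f = 0$, so that by Parseval $E(t) = \sum_n |u_n'(t)|^2 + \lambda_n|u_n(t)|^2$ with $\lambda_n = n^2$; these choices are illustrative assumptions.

```python
import numpy as np

# Assumed concrete case: L u = -u'' on (0, pi), lambda_n = n^2, f = 0.
# By orthonormality of (phi_n), E(t) = sum_n |u_n'(t)|^2 + lambda_n |u_n(t)|^2,
# with u_n(t) = cos(n t) u0_n + sin(n t)/n * u1_n from (8.4.4).
rng = np.random.default_rng(0)
n = np.arange(1, 21)
u0n = rng.normal(size=n.size)   # coefficients of u0 in the basis (phi_n)
u1n = rng.normal(size=n.size)   # coefficients of u1

def energy(t):
    un = np.cos(n * t) * u0n + np.sin(n * t) / n * u1n
    dun = -n * np.sin(n * t) * u0n + np.cos(n * t) * u1n
    return np.sum(dun**2 + n**2 * un**2)

E0 = energy(0.0)   # = sum u1n^2 + n^2 u0n^2, the right side of (8.4.15) for f = 0
print(np.allclose([energy(t) for t in (0.3, 1.0, 2.5)], E0))  # True: E is constant
```

Each modal energy $|u_n'|^2 + n^2|u_n|^2 = n^2u_{0,n}^2 + u_{1,n}^2$ is constant in time, which is exactly (8.4.15) with $f = 0$.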
9. Annex
In this chapter, we will collect different results, with or without proof, which complement the material considered in the previous chapters.
which shows that $(u_n)$ converges uniformly¹ to $u$ in $\Omega$. To complete the proof of the claim, it is enough to show $u\in C^0(\Omega)$. For $x, x+h\in\Omega$, we have
$$|u(x+h) - u(x)| \le |u(x+h) - u_n(x+h)| + |u(x) - u_n(x)| + |u_n(x+h) - u_n(x)|,$$
which, combined with the uniform convergence of $(u_n)$, the uniform continuity of $u_n$ near² $x$, and (9.1.1), implies $u\in C^0(\Omega)$ and proves the claim.
¹ A sequence $(u_n)$ in $C^0(\Omega;Y)$, $\Omega\subset X$, "converges uniformly to $u:\Omega\to Y$ in $\Omega$" if for every $\varepsilon>0$ there exists $n_\varepsilon\in\mathbb{N}$ such that for all $n>n_\varepsilon$ and all $x\in\Omega$ we have $\|u_n(x) - u(x)\|_Y < \varepsilon$.
² It is easy to prove that if $v\in C^0(\Omega)$ then $v$ is uniformly continuous in $\overline B(x,r_x) = \{y\in\Omega,\ |y-x|\le r_x\}\subset\Omega$.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 159
A. Novruzi, A Short Introduction to Partial Differential Equations, CMS/CAIMS Books in Mathematics 11,
https://doi.org/10.1007/978-3-031-39524-6 9
9.1. Annex: Chapter 1 Chapter 9
ii) Claim: $C_b^k(\Omega)$ is a Banach space. Proceeding as in i) above, we show that given a Cauchy sequence $(u_n)$ in $C_b^1(\Omega)$, there exist $u, v^i\in C_b^0(\Omega)$, $i = 1,\dots,N$, with $\lim_{n\to\infty}u_n = u$, $\lim_{n\to\infty}\partial_i u_n = v^i$ in $C_b^0(\Omega)$. Necessarily, $\partial_i u = v^i$, because for $x\in\Omega$ and $|t|$ small we have
$$\frac{1}{t}\big(u(x+te_i) - u(x)\big) = \lim_{n\to\infty}\frac{1}{t}\big(u_n(x+te_i) - u_n(x)\big) = \lim_{n\to\infty}\int_0^1 \partial_i u_n(x+ste_i)\,ds = \int_0^1 v^i(x+ste_i)\,ds \xrightarrow{t\to 0} v^i(x),$$
which completes the claim for $C_b^1(\Omega)$ spaces. The proof for $C_b^k(\Omega)$ spaces follows similarly by induction.
iii) Claim: $C_b^{k,\lambda}(\Omega)$ is a Banach space. Given a Cauchy sequence $(u_n)$ in $C_b^{k,\lambda}(\Omega)$, from the cases above we get the convergence of $(u_n)$ to a certain $u$ in $C_b^k(\Omega)$. Furthermore, as $(\|u_n\|_{C_b^{k,\lambda}(\Omega)})$ is necessarily bounded, for $x,y\in\Omega$ with $x\ne y$ and $\alpha\in\mathbb{N}_0^N$ with $|\alpha| = k$, we have
and therefore $|u|_{C^{k,\lambda}(\Omega)} < \infty$, so $u\in C^{k,\lambda}(\Omega)$. Finally, for $n,m\in\mathbb{N}$, we have
which from the Cauchy property of $(u_n)$ in $C_b^{k,\lambda}(\Omega)$ implies $\lim_{n\to\infty}u_n = u$ in $C_b^{k,\lambda}(\Omega)$, and completes the proof.
Then u ∈ L1 (A × B).
Equipped with the distance $d(y,z) := \|y(\cdot) - z(\cdot)\|_{C^0(I_r;X)}$, $M$ is a complete metric space. Next, for $x\in B(x_0,\rho)$ consider the map
$$T_x: M\to M, \qquad (T_x z)(t) = x + \int_0^t f(z(s))\,ds, \quad t\in I_r,\ x\in B(x_0,\rho).$$
Clearly $T_x$ is well-defined, because $T_x z\in C^0(I_r;X)$, $(T_x z)(0) = x\in B(x_0,\rho)$, and for $t\in I_r$ we have $(T_x z)(t)\in B(x_0,2\rho)$, because
$$\|(T_x z)(t) - x_0\|_X \le \|x - x_0\|_X + \int_0^t \|f(z(s))\|_X\,ds \le \rho + rK \le 2\rho.$$
Therefore, from Theorem 1.3.10 there exists a unique fixed point $\alpha_x\in M$ of $T_x$, and $\alpha(t,x) = \alpha_x(t)$ is a solution to (9.1.3).
As $f$ is Lipschitz from $U$ to $X$, it follows that $f\in C^0(U;X)$, and therefore from (9.1.3) we obtain $\alpha(\cdot,x)\in C^1(I_r;U)$ and $\alpha'(t,x) = f(\alpha(t,x))$, $t\in I_r$. Finally, if $f\in C^k(U;X)$, from $\alpha'(t,x) = f(\alpha(t,x))$ we get $\alpha(\cdot,x)\in C^{k+1}(I_r;X)$.
When $f$ is Lipschitz, we can prove by direct calculations that the solution $\alpha$ is Lipschitz with respect to $x$.
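The Picard iteration $z \mapsto T_x z$ used above can be run numerically. The sketch below is an illustration under assumptions not in the text: $f(u) = \cos u$ (Lipschitz with constant $L = 1$), $r = 0.5$ so that $\kappa = rL < 1$, and a trapezoid rule for the integral.

```python
import numpy as np

# Numerical sketch of the contraction (T_x z)(t) = x + int_0^t f(z(s)) ds,
# with the illustrative choice f(u) = cos(u) (Lipschitz constant 1) and
# r = 0.5 < 1/L, so T_x is a contraction with kappa = 0.5.
def picard(x, f, r=0.5, m=2000, iters=40):
    t = np.linspace(0.0, r, m + 1)
    z = np.full(m + 1, x)                 # start from the constant function x
    dt = t[1] - t[0]
    for _ in range(iters):
        g = f(z)
        integral = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2) * dt))
        z = x + integral                  # z <- T_x z (trapezoid quadrature)
    return t, z

t, a = picard(0.2, np.cos)
# The fixed point satisfies alpha'(t) = cos(alpha(t)); check at an interior node.
k = 1000
deriv = (a[k + 1] - a[k - 1]) / (2 * (t[1] - t[0]))
print(abs(deriv - np.cos(a[k])) < 1e-4)   # True

# Lipschitz dependence on x (Theorem 9.1.6): sup-distance <= |x - y|/(1 - kappa).
_, b = picard(0.21, np.cos)
print(np.max(np.abs(a - b)) <= 0.01 / (1 - 0.5) + 1e-9)  # True
```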
Theorem 9.1.6 Let $x\in B(x_0,\rho)$ and $\alpha(\cdot,x)\in C^1(I_r,U)$ as given by Theorem 9.1.4. Then $x\mapsto \alpha(\cdot,x)$ is uniformly Lipschitz from $B(x_0,\rho)$ to $C^0(I_r;X)$.
Proof. Let $x,y\in B(x_0,\rho)$ and $\alpha(\cdot,x)$, $\alpha(\cdot,y)$ be the corresponding $C^1(I_r;X)$ solutions of (1.3.4a) as given by Theorem 9.1.4. We have to prove
$$\|\alpha(\cdot,x) - \alpha(\cdot,y)\|_{C^0(I_r;X)} \le C\|x - y\|_X$$
for a certain $C>0$ independent of $x$ and $y$. Note that from Theorem 1.3.10, $\lim_{n\to\infty}\|T_x^n\alpha(\cdot,y) - \alpha(\cdot,x)\|_{C^0(I_r;X)} = 0$, where $T_x: M\to M$ is defined in Theorem 9.1.4. As $T_y\alpha(\cdot,y) = \alpha(\cdot,y)$ and $\|T_x z_1 - T_x z_2\|_{C^0(I_r;X)} \le \kappa\|z_1 - z_2\|_{C^0(I_r;X)}$, $0\le\kappa = rL < 1$ (see (9.1.4)), we have
$$\|T_x\alpha(\cdot,y) - \alpha(\cdot,y)\|_{C^0(I_r;X)} = \|T_x\alpha(\cdot,y) - T_y\alpha(\cdot,y)\|_{C^0(I_r;X)} = \|x - y\|_X,$$
$$\|T_x^k\alpha(\cdot,y) - T_x^{k-1}\alpha(\cdot,y)\|_{C^0(I_r;X)} \le \kappa\|T_x^{k-1}\alpha(\cdot,y) - T_x^{k-2}\alpha(\cdot,y)\|_{C^0(I_r;X)} \le \cdots \le \kappa^{k-1}\|T_x\alpha(\cdot,y) - \alpha(\cdot,y)\|_{C^0(I_r;X)} = \kappa^{k-1}\|x - y\|_X; \quad\text{hence}$$
$$\|T_x^n\alpha(\cdot,y) - \alpha(\cdot,y)\|_{C^0(I_r;X)} \le \sum_{k=1}^n \|T_x^k\alpha(\cdot,y) - T_x^{k-1}\alpha(\cdot,y)\|_{C^0(I_r;X)} \le \frac{1}{1-\kappa}\|x - y\|_X$$
Chapter 9 9.2. Annex: Chapter 3
Clearly, $\det\dfrac{\partial T(0,x^0)}{\partial(t,y^0)} = \pm\,\partial_{p_N}E(x^0, z_0(x^0), p_0(x^0)) \ne 0$. Then from the (inverse mapping) Theorem 1.3.11, it follows that $T$ is $C^{k-1}$ and invertible near $(0,x^0)$, with $T^{-1}$ of class $C^{k-1}$, which completes the proof of (ii).
Proof of (iii). Clearly $u\in C^{k-1}(\Omega_0)$ and $u(y^0) = g(y^0)$ on $\Gamma_0$. We will complete the proof of (iii) in several steps.
Claim: For all $(t,y^0)\in U_0$ and $x = T(t,y^0)$, we have
$$E(y(t,y^0), z(t,y^0), p(t,y^0)) = 0. \tag{9.2.1}$$
Indeed, for fixed $y^0$ set $\ell(t) = E(y(t,y^0), z(t,y^0), p(t,y^0))$, $t\in[0,r_0]$. Then we have
$$\ell(0) = E(y(0,y^0), z(0,y^0), p(0,y^0)) = E(y^0, z_0(y^0), p_0(y^0)) = 0 \quad (\text{from (3.2.8)}),$$
$$\ell'(t) = \nabla_p E(y,z,p)\cdot p' + \partial_z E(y,z,p)\,z' + \nabla_x E(y,z,p)\cdot y' = 0 \quad (\text{from (3.2.10)}).$$
Then Theorem 1.3.13 implies $\ell = 0$ and proves (9.2.1).
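The conservation of $E$ along the characteristic flow (3.2.10) can be observed numerically. The sketch below picks an illustrative one-dimensional Hamiltonian $E(y,z,p) = p^2/2 - (1+y^2)/2$ (an assumption, not from the text) and integrates the characteristic system with a classical RK4 scheme:

```python
import numpy as np

# Characteristic system (3.2.10): y' = grad_p E, z' = p . grad_p E,
# p' = -grad_x E - (dE/dz) p. Illustrative choice (assumed):
#   E(y, z, p) = p^2/2 - (1 + y^2)/2, so dE/dz = 0.
def E(y, z, p):
    return p**2 / 2 - (1 + y**2) / 2

def rhs(w):
    y, z, p = w
    Ep, Ey = p, -y                       # grad_p E and grad_x E
    return np.array([Ep, p * Ep, -Ey])

# Initial data chosen on the zero level set, as in (9.2.1).
y0 = 0.3
w = np.array([y0, 0.0, np.sqrt(1 + y0**2)])

h, steps = 1e-3, 1000                    # classical RK4 integration
for _ in range(steps):
    k1 = rhs(w)
    k2 = rhs(w + h / 2 * k1)
    k3 = rhs(w + h / 2 * k2)
    k4 = rhs(w + h * k3)
    w = w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(E(*w)) < 1e-8)   # True: E stays 0 along the characteristic
```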
Claim: For all $(t,y^0)\in U_0$ and $x = T(t,y^0)$, we have (9.2.2). The key point is the identity
$$\frac{\partial z}{\partial y_j^0} = \sum_{k=1}^N p_k\frac{\partial x_k}{\partial y_j^0}, \qquad j = 1,\dots,N-1, \tag{9.2.4}$$
which proves (9.2.2). Now we focus on the proof of (9.2.4), which is quite technical and relies on (3.2.10). Set $\ell(t) = \dfrac{\partial z(t,y^0)}{\partial y_j^0} - \displaystyle\sum_{k=1}^N p_k(t,y^0)\dfrac{\partial x_k(t,y^0)}{\partial y_j^0}$. Then
$$\ell(0) = \frac{\partial z(0,y^0)}{\partial y_j^0} - \sum_{k=1}^N p_k(0,y^0)\frac{\partial x_k(0,y^0)}{\partial y_j^0} = \frac{\partial z_0}{\partial y_j^0} - p_j^0 = 0, \tag{9.2.5}$$
and using $\dfrac{\partial^2 z}{\partial t\,\partial y_j^0} = \dfrac{\partial}{\partial y_j^0}\left(\displaystyle\sum_{k=1}^N p_k\dfrac{\partial x_k}{\partial t}\right) = \displaystyle\sum_{k=1}^N\left(\dfrac{\partial p_k}{\partial y_j^0}\dfrac{\partial x_k}{\partial t} + p_k\dfrac{\partial^2 x_k}{\partial t\,\partial y_j^0}\right)$ gives
$$\ell'(t) = \frac{\partial^2 z}{\partial t\,\partial y_j^0} - \sum_{k=1}^N\left(\frac{\partial p_k}{\partial t}\frac{\partial x_k}{\partial y_j^0} + p_k\frac{\partial^2 x_k}{\partial t\,\partial y_j^0}\right)$$
$$= \sum_{k=1}^N\left(\frac{\partial p_k}{\partial y_j^0}\frac{\partial x_k}{\partial t} + p_k\frac{\partial^2 x_k}{\partial t\,\partial y_j^0}\right) - \sum_{k=1}^N\left(\frac{\partial p_k}{\partial t}\frac{\partial x_k}{\partial y_j^0} + p_k\frac{\partial^2 x_k}{\partial t\,\partial y_j^0}\right)$$
$$= \sum_{k=1}^N\left(\frac{\partial p_k}{\partial y_j^0}\frac{\partial x_k}{\partial t} - \frac{\partial p_k}{\partial t}\frac{\partial x_k}{\partial y_j^0}\right) \quad (\text{use (3.2.10)})$$
$$= \sum_{k=1}^N\left(\frac{\partial p_k}{\partial y_j^0}\frac{\partial E}{\partial p_k} - \left(-\frac{\partial E}{\partial x_k} - \frac{\partial E}{\partial z}p_k\right)\frac{\partial x_k}{\partial y_j^0}\right). \tag{9.2.6}$$
Differentiating $E(y(t,y^0), z(t,y^0), p(t,y^0)) = 0$ (which is (9.2.1)) with respect to $y_j^0$ gives
$$\sum_{k=1}^N\frac{\partial E}{\partial p_k}\frac{\partial p_k}{\partial y_j^0} = -\left(\sum_{k=1}^N\frac{\partial E}{\partial x_k}\frac{\partial x_k}{\partial y_j^0} + \frac{\partial E}{\partial z}\frac{\partial z}{\partial y_j^0}\right). \tag{9.2.7}$$
From (9.2.5), (9.2.7), and Theorem 1.3.13, we get $\ell = 0$, which proves (9.2.4) and completes the proof of (9.2.2).
Claim: $u\in C^k(\Omega_0)$ and solves (3.2.11). Indeed, from (9.2.2) we have $\nabla u(x) = p(T^{-1}(x))$, which implies $\nabla u\in C^{k-1}(\Omega_0;\mathbb{R}^N)$, so $u\in C^k(\Omega_0)$. The equations (9.2.1), (9.2.2), and $u(y^0) = z(0,y^0) = g(y^0)$, $y^0\in\Gamma_0$, prove that $u$ solves (3.2.11).
Proof of (iv): Proposition 3.1.1 and (iii) of this theorem establish a correspondence between local $C^2$ solutions $u$ to (3.0.1) and the solutions $(y,z,p)$ of (3.2.10). Therefore, from the uniqueness of solutions to ODEs (from point (ii) of this theorem), it is enough to prove that near $x^0$, the non-characteristic initial conditions are unique.
The compatibility condition (3.2.7) near $x^0$ implies that the initial conditions $y^0$, $z_0(y^0)$, and $(p_1^0(y^0),\dots,p_{N-1}^0(y^0)) = (\partial_1 g(y^0),\dots,\partial_{N-1}g(y^0))$ are uniquely defined. The condition $E(x^0, z_0(x^0), (p_1^0(x^0),\dots,p_{N-1}^0(x^0), p_N^0)) = 0$ implies that if $p_N^0$ is not unique then necessarily $\partial_{p_N}E(x^0, z_0(x^0), (p_1^0(x^0),\dots,p_{N-1}^0(x^0), q)) = 0$ for a certain $q$, which contradicts the assumption and proves that the initial condition $(x^0, z_0(x^0), p_0(x^0))$ is uniquely defined. Lemma 3.2.2 shows that $(y^0, z_0(y^0), p_0(y^0))$ are uniquely defined near $x^0$, which completes the proof.
Proof. Assume $u\in C^1(\mathbb{R}\times\mathbb{R}_+)\cap C^0(\mathbb{R}\times[0,\infty))$ is a classical solution to (3.3.1), so $u$ satisfies $0 = \partial_2 u + \partial_1(f(u))$. For $\varphi\in\mathcal D(\mathbb{R}^2)$, let $B_R^+ = \{(x_1,x_2),\ x_1^2 + x_2^2 < R^2,\ x_2 > 0\}$ with $R$ large enough so that $\mathrm{supp}(\varphi)\cap\{x_2>0\}\subset B_R^+$; see Figure 9.2.1 ($B_R^+$ and $\mathrm{supp}(\varphi)$). If $\nu^+$ is the outward unit normal on $\partial B_R^+$, using the Gauss theorem in $B_R^+$ implies
$$0 = \int_{B_R^+}(\partial_2 u + \partial_1(f(u)))\varphi\,dx = \int_{\partial B_R^+}\varphi(u\nu_2^+ + f(u)\nu_1^+)\,ds - \int_{B_R^+}(u\partial_2\varphi + f(u)\partial_1\varphi)\,dx$$
$$= \int_{\partial B_R^+\cap\{x_2=0\}}\varphi(u\nu_2^+ + f(u)\nu_1^+)\,dx_1 + \int_{\partial B_R^+\cap\{x_2>0\}}\varphi(u\nu_2^+ + f(u)\nu_1^+)\,ds - \int_{\mathbb{R}\times\mathbb{R}_+}(f(u)\partial_1\varphi + u\partial_2\varphi)\,dx_1\,dx_2$$
$$= -\int_{\mathbb{R}}\varphi(x_1,0)g(x_1)\,dx_1 - \int_{\mathbb{R}\times\mathbb{R}_+}(f(u)\partial_1\varphi + u\partial_2\varphi)\,dx_1\,dx_2,$$
Now, let $x^0\in\partial\Omega\cap\{x_2=0\}$ and $B(x^0,r)$ be a ball such that $B^{0,+} := B(x^0,r)\cap\{x_2>0\}\subset\Omega$, and denote by $\nu^{0,+}$ the exterior unit normal vector to $\partial B^{0,+}$. Taking $\varphi\in\mathcal D(\mathbb{R}^2)$ with $\mathrm{supp}(\varphi)\subset B(x^0,r)$ and using the Gauss theorem as above gives
$$0 = \int_{B^{0,+}}(u\partial_2\varphi + f(u)\partial_1\varphi)\,dx + \int_{\partial B^{0,+}\cap\{x_2=0\}}g(x_1)\varphi(x_1,0)\,dx_1$$
$$= -\int_{B^{0,+}}(\partial_2 u + \partial_1(f(u)))\varphi\,dx + \int_{\partial B^{0,+}\cap\{x_2>0\}}(u\nu_2^{0,+} + f(u)\nu_1^{0,+})\varphi\,ds + \int_{\partial B^{0,+}\cap\{x_2=0\}}\big(g(x_1) + u(x_1,0)\nu_2^{0,+} + f(u)\nu_1^{0,+}\big)\varphi(x_1,0)\,dx_1$$
$$= \int_{\partial B^{0,+}\cap\{x_2=0\}}(g(x_1) - u(x_1,0))\varphi(x_1,0)\,dx_1,$$
which from the arbitrariness of $x^0$ and $\varphi$ proves $u(\cdot,0) = g(\cdot)$ on $\partial\Omega\cap\{x_2=0\}$ and completes the proof.
Then
Proof. WLOG we may assume that both $\Omega^l$ and $\Omega^r$ are smooth, for example Lipschitz, because otherwise we consider $\Omega^l\cap B(x,\rho)$, resp. $\Omega^r\cap B(x,\rho)$, instead of $\Omega^l$, resp. $\Omega^r$, with $x\in\gamma$ and $\rho$ small enough. We will use the Gauss theorem in $\Omega^l$ and $\Omega^r$, and we denote by $\nu^l$, resp. $\nu^r$, the exterior unit normal vector to $\partial\Omega^l$, resp. $\partial\Omega^r$. As $x_1 - \chi(x_2) = 0$ is the equation of $\gamma$, it follows that $\nu^l = \dfrac{(1,-\chi')}{\sqrt{1+(\chi')^2}} = -\nu^r$ on $\gamma$.
Next we note that as $u\in C^1(\Omega^l\cup\Omega^r)$ is a weak solution to (3.3.1) in $\Omega^l\cup\Omega^r$, from Proposition 3.3.4 it follows that $u$ is a classical solution to (3.3.1) in $\Omega^l$ and $\Omega^r$. Hence
$$\partial_2 u + \partial_1(f(u)) = 0 \quad\text{in } \Omega^l\cup\Omega^r.$$
Now we use the Gauss theorem for each of these integrals. By taking into account the fact that the integrals on $\partial\Omega^l\cap\partial\Omega$ and on $\partial\Omega^r\cap\partial\Omega$ vanish, we get
$$0 = \int_{\gamma}\varphi(u\nu_2^l + f(u)\nu_1^l)\,ds + \int_{\gamma}\varphi(u\nu_2^r + f(u)\nu_1^r)\,ds - \int_{\Omega^l\cup\Omega^r}\varphi(\partial_2 u + \partial_1(f(u)))\,dx$$
Chapter 9 9.3. Annex: Chapter 4
From $g\in C^0(\partial B_R)$, there exists $\delta_1 > 0$ such that for all $y,z\in\partial B_R$ with $|y-z| < 2\delta_1$ we have $|g(y) - g(z)| < \frac{\varepsilon}{2}$. It follows that for $x\in B_R\cap B(x^0,\delta_1)$, we have
$$|u(x) - g(x^0)| = |u(x) - g(x^0)u_1(x)| = \left|\int_{\partial B_R}K(x,y)(g(y) - g(x^0))\,d\sigma(y)\right|$$
$$\le \int_{\partial B_R\cap\{|y-x^0|<2\delta_1\}}K(x,y)|g(y) - g(x^0)|\,d\sigma(y) + \int_{\partial B_R\cap\{|y-x^0|\ge 2\delta_1\}}K(x,y)|g(y) - g(x^0)|\,d\sigma(y)$$
$$\le \frac{\varepsilon}{2}\int_{\partial B_R}K(x,y)\,d\sigma(y) + 2\|g\|_{C^0(\partial B_R)}\frac{R^2 - |x|^2}{N V_N R\delta_1^N}\int_{\partial B_R}d\sigma(y)$$
$$\le \frac{\varepsilon}{2} + 2\|g\|_{C^0(\partial B_R)}\frac{|\partial B_R|}{N V_N R\delta_1^N}(R^2 - |x|^2).$$
As $0\le R^2 - |x|^2 \le 2R|x - x^0|$, there exists $\delta_2 > 0$ such that for all $x\in B_R\cap B(x^0,\delta_2)$ we have
$$2\|g\|_{C^0(\partial B_R)}\frac{|\partial B_R|}{N V_N R\delta_1^N}(R^2 - |x|^2) < \frac{\varepsilon}{2}.$$
$$Lu = -\sum_{i,j=1}^N a_{ij}\partial_{ij}u + \sum_{i=1}^N b_i\partial_i u + cu, \qquad A = (a_{ij}),\ b = (b_i), \tag{9.3.5}$$
³ If $(r,\theta)\in(0,\infty)\times S^{N-1}$ are the spherical coordinates connected to the Cartesian coordinates $x$ by $x = r\theta$, $\theta = \frac{x}{|x|}$, then the classical formula holds: $\Delta_x u(x) = \frac{1}{r^{N-1}}\partial_r\big(r^{N-1}\partial_r u(r\theta)\big) + \frac{1}{r^2}\Delta_{S^{N-1}}u\big(\frac{x}{|x|}\big)$.
Proof. We deal first with the case $Lu\le 0$. Let $x^0\in\overline\Omega$ be such that $u(x^0) = \max\{u(x),\ x\in\overline\Omega\}$. We distinguish two sub-cases.
(i) Case $Lu < 0$ in $\Omega$. If $x^0\in\partial\Omega$ the theorem is proved. So we assume $x^0\in\Omega$. We have $|\nabla u(x^0)| = 0$ and $D^2u(x^0) = [\partial_{ij}u(x^0)]\le 0$. Therefore
$$Lu(x^0) = -\sum_{i,j=1}^N a_{ij}(x^0)\partial_{ij}u(x^0) = -\mathrm{tr}\big[A(x^0)\cdot D^2u(x^0)\big] \ge 0,$$
because from $A(x^0) > 0$ and $D^2u(x^0)\le 0$ it follows $\mathrm{tr}[A(x^0)\cdot D^2u(x^0)]\le 0$,⁵ which is a contradiction and proves that $x^0\in\partial\Omega$.
(ii) Case $Lu\le 0$ in $\Omega$. Let $\varepsilon,\gamma > 0$ and consider $u_\varepsilon = u + \varepsilon e^{\gamma x_1}$. We have
$$Lu_\varepsilon = Lu + \varepsilon(-\gamma^2 a_{11} + \gamma b_1)e^{\gamma x_1}.$$
We can choose $\gamma$ large enough such that $(-\gamma^2 a_{11} + \gamma b_1)e^{\gamma x_1} < 0$ in $\Omega$. This is possible because $\Omega$ is bounded, $a_{11} > 0$ (this follows from $A > 0$), and (9.3.6b). So $Lu_\varepsilon < 0$ in $\Omega$. From case (i), it follows that there exists $x_\varepsilon\in\partial\Omega$ such that
$$u_\varepsilon(x)\le u_\varepsilon(x_\varepsilon), \quad\text{so}\quad u(x)\le u(x_\varepsilon) + \varepsilon(e^{\gamma x_{\varepsilon,1}} - e^{\gamma x_1}), \quad x\in\overline\Omega.$$
There exists a subsequence of $(x_\varepsilon)$, still denoted by $(x_\varepsilon)$, and $x^0\in\partial\Omega$ such that $\lim_{\varepsilon\to 0}x_\varepsilon = x^0$, which implies that for all $x\in\overline\Omega$ we have $u(x)\le u(x^0)$.
⁴ A matrix $M\in\mathbb{R}^{N\times N}$ is said to be positive, resp. strictly positive, and we write $M\ge 0$, resp. $M>0$, if $\xi\cdot M\cdot\xi\ge 0$, resp. $\xi\cdot M\cdot\xi > 0$, for every $0\ne\xi\in\mathbb{R}^N$.
⁵ Indeed, let $A(x^0) = U\cdot\Lambda\cdot{}^tU$ be the spectral decomposition of $A(x^0)$ with $U^{-1} = {}^tU$ and $\Lambda$ the diagonal matrix of eigenvalues of $A(x^0)$. From the cyclic property of the trace, we get $\mathrm{tr}[A(x^0)\cdot D^2u(x^0)] = \mathrm{tr}[\Lambda\cdot{}^tU\cdot D^2u(x^0)\cdot U] = \mathrm{tr}[\Lambda\cdot B]$, with $B = {}^tU\cdot D^2u(x^0)\cdot U$; clearly $B\le 0$, because $D^2u(x^0)\le 0$, and then $\mathrm{tr}[A(x^0)\cdot D^2u(x^0)] = \mathrm{tr}[\Lambda\cdot B] = \sum_i\lambda_i B_{i,i}\le 0$, because $\lambda_i > 0$ and $B_{i,i}\le 0$ for all $i$.
For the case Lu ≥ 0 we set v = −u, so Lv ≤ 0, and then the result follows from (ii).
Note that this theorem does not hold if $c\ne 0$. For example, if $\Omega = B(0,1)\subset\mathbb{R}^2$ and $Lu = -\Delta u - u$ (so $c = -1$), then for $u = 5 - |x|^2$ we have $Lu = 4 - (5 - |x|^2) = |x|^2 - 1 \le 0$ in $\Omega$, while $\max_{\overline\Omega}u = u(0) = 5$ is attained at the interior point $0$.
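The counterexample can be checked symbolically (the computation below simply verifies $Lu = |x|^2 - 1$ and the interior maximum for the stated data):

```python
import sympy as sp

# Counterexample check: Omega = B(0,1) in R^2, L u = -Delta u - u (c = -1),
# u = 5 - |x|^2.
x1, x2 = sp.symbols('x1 x2', real=True)
u = 5 - (x1**2 + x2**2)
Lu = -(sp.diff(u, x1, 2) + sp.diff(u, x2, 2)) - u
print(sp.simplify(Lu))          # x1**2 + x2**2 - 1, i.e. Lu <= 0 in B(0,1)
print(u.subs({x1: 0, x2: 0}))   # 5, while u = 4 on the boundary |x| = 1
```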
Note that if $\max_{\overline\Omega}u < 0$ or $\min_{\overline\Omega}u < 0$, the result of the previous corollary is not optimal.
$$Lw\le 0,\ c\ge 0 \text{ in } \Omega, \qquad w\le 0 \text{ on } \partial\Omega.$$
Proof. From $u(x^0) - u(x)\ge 0$ near $x^0$ in $\Omega$, we obtain $\partial_\nu u(x^0)\ge 0$. The lemma would be proven if we find a function $v\in C^1(\overline\Omega)$ such that $u(x^0) - u(x)\ge v(x^0) - v(x)$ near $x^0$ and $\partial_\nu v(x^0) > 0$.
Construction of $v$. Let $z\in\Omega$ and $r > 0$ such that if $B_r(z)$ is the ball with center $z$ and radius $r$, then $B_r(z)\subset\Omega$, $\partial B_r(z)\cap\partial\Omega = \{x^0\}$; see Figure 9.3.1. Set $V = B_r(z)\cap B_{r_0}(x^0)$, with $r_0$ small, and $v(x) = \varepsilon(e^{-k|x^0-z|^2} - e^{-k|x-z|^2})$, with $x\in V$, $\varepsilon, k > 0$. Note that
$$v(x^0) - v\ge 0 \text{ in } V,$$
$$L(v(x^0) - v) = \varepsilon\Big(\big(-4k^2(x-z)\cdot A(x)\cdot(x-z) + 2k(\mathrm{tr}(A(x)) - (x-z)\cdot b)\big)e^{-k|x-z|^2} - c\big(e^{-k|x^0-z|^2} - e^{-k|x-z|^2}\big)\Big),$$
$$L(u(x^0) - u)\ge cu(x^0).$$
With $k$ large, $\varepsilon$ small, and with one of (i)–(iii) holding, we obtain
$$v(x^0) - v\le u(x^0) - u \text{ on } \partial V, \qquad L(v(x^0) - v)\le 0\le L(u(x^0) - u) \text{ in } V,$$
Remark 9.3.6 In the case of a local minimum, the Hopf lemma is stated as follows. Let $\Omega\subset\mathbb{R}^N$ be open with $\partial\Omega$ a $C^2$ boundary near $x^0\in\partial\Omega$, $Lu(x)\ge 0$, $u(x) > u(x^0)$ for all $x\in\Omega$ near $x^0$, and $u$ differentiable at $x^0$. Assume also that one of the following holds:
(i) $c = 0$ near $x^0$,
(ii) $u(x^0)\le 0$ and $c\ge 0$ near $x^0$,
(iii) $u(x^0) = 0$.
Then $\partial_\nu u(x^0) < 0$. The proof follows from Lemma 9.3.5 applied to $-u$.
Proof. Let us prove first Claim i). Let $M = \max\{u(x) - v(x),\ x\in\overline\Omega\}$, which exists as $u,v\in C^0(\overline\Omega)$. If $M < 0$ then Claim i) holds. So it remains the case $M\ge 0$. If $u - v\equiv M$ in $\Omega$, the case $M = 0$ gives $u = v$ in $\Omega$, which again proves i), while the case $M > 0$ is impossible because it gives $u = M + v > v$ on $\partial\Omega$, which is a contradiction.
So it remains the case that $M\ge 0$ and $u - v\not\equiv M$ in $\Omega$. Set $K = \{x\in\Omega,\ (u-v)(x) = M\}$. The set $K$ is nonempty and closed and $\Omega\backslash K$ is nonempty and open; see Figure 9.3.2. Consequently, we can choose $x^0\in K$ and $r_0 > 0$ such that $B_0 := B(x^0,r_0)\subset\Omega$ and $\partial B_0\cap(\Omega\backslash K)\ne\emptyset$.
Then
$$M = u(x^0) - v(x^0)\le \frac{1}{|\partial B_0|}\int_{\partial B_0}(u-v)(y)\,d\sigma(y) < M,$$
because $u - v\not\equiv M$ on $\partial B_0$. The contradiction shows that this case is impossible and proves i). The Claim ii) follows from i). (Figure 9.3.2 shows $K$, $\Omega\backslash K$, and $B(x^0,r_0)$.)
$$-\Delta U = 0 = -\Delta h \text{ in } C\cap B,$$
$$U\le h \text{ on } \partial(C\cap B)\cap B \quad(\text{from the assumption for } h),$$
$$U\le h \text{ on } \partial(C\cap B)\cap C \quad(\text{from } U\le h \text{ in } C\backslash B).$$
Then from (ii), Corollary 4.3.2, it follows $U\le h$ in $C\cap B$, so $U\le h$ in $C$, and this completes the proof. (Figure 9.3.3: harmonic lifting.)
iv) Also (a concept independent from equicontinuity), we say $(f_n)$ is bounded in $K$ if there exists $M\ge 0$ such that $|f_n(x)|\le M$ for all $x\in K$ and all $n$.
Example 9.3.13 Let $(f_n)$ be a uniformly Lipschitz sequence in $C^0(K)$, i.e. $f_n\in C^0(K)$ for all $n$, and
As $(u_n)_{n\in\mathbb{N}}$ is uniformly bounded, this formula gives a uniform bound for $\nabla u_n$ in $K_m$. Indeed, differentiating $u_n(z)$ at $z = x$ gives
$$|\nabla u_n(x)|\le \frac{2|x - \overline x|M}{N V_N R_m^N}\int_{\partial B_m}d\sigma(y) + \frac{M R_m^2}{V_N R_m^{N+1}}\int_{\partial B_m}d\sigma(y) =: C(m), \tag{9.3.8}$$
$$\lim_{n\to\infty}u_n^m = u^m \quad\text{in } C^0(K_m).$$
We set $(u_n^0) = (u_n)$. From the reasoning above, we obtain $(u_n^1)_{n\in\mathbb{N}}$, a subsequence of $(u_n^0)_{n\in\mathbb{N}}$, and $v^1\in C^0(K_1)$ such that $\lim_{n\to\infty}u_n^1 = v^1$ in $C^0(K_1)$. Next we proceed with $(u_n^m)_{n\in\mathbb{N}}$ as with $(u_n^{m-1})_{n\in\mathbb{N}}$, $m = 1,2,\dots$. As a result of this process, we obtain the sequences $(u_n^m)_{n\in\mathbb{N}}$ and $v^m\in C^0(K_m)$, $m = 1,2,\dots$, such that $\lim_{n\to\infty}u_n^m = v^m$ in $C^0(K_m)$.
Set $\tilde u_n = u_n^n$ for all $n\in\mathbb{N}$, and $v(x) = \lim_{n\to\infty}v^n(x)$. Note that $v(x)$ is well-defined in $\Omega$, because $x\in K_m$ for a certain $m$, and from the construction we have that, up to a finite number of terms, $(\tilde u_n)_{n\in\mathbb{N}}$ is a subsequence of $(u_n^m)_{n\in\mathbb{N}}$.
The sequence $(\tilde u_n)$ and $v$ satisfy the theorem. Indeed, from the definition of $(\tilde u_n)$
From the strong maximum principle, Theorem 4.3.4, $V - W$ either attains its maximum on $\partial B$ (and not in $B$) or is constant. Therefore, as $(V - W)(x) = 0$ and $V - W\le 0$ in $B$, it follows $V - W = 0$ in $B$, which contradicts $V(y) - W(y) < 0$. So $V = u$ in $B$ and therefore $u$ is harmonic in $\Omega$. Theorem 4.2.1 implies that $u\in C^\infty(\Omega)$.
Claim: $u\in C^0(\overline\Omega)$ and $u = g$ on $\partial\Omega$.
Proof. It is enough to prove that $u$ is continuous at every $\xi\in\partial\Omega$ and $u = g$ on $\partial\Omega$. Let $\xi\in\partial\Omega$, $y\in\overline\Omega^c$, $R > 0$, and $B = B(y,R)$ such that $B\cap\Omega = \emptyset$, $\partial B\cap\partial\Omega = \{\xi\}$; see Figure 4.4.1. The choice of $\xi$ and $B$ is possible because $\partial\Omega$ is $C^2$. Now consider the function $w$ given by
$$w(x) = \ln\frac{|x-y|}{R} \quad\text{for } N = 2, \qquad w(x) = R^{2-N} - |x-y|^{2-N} \quad\text{for } N\ge 3,$$
which satisfies
Set $v^-(x) = g(\xi) - (\varepsilon + kw(x))$, resp. $v^+(x) = g(\xi) + (\varepsilon + kw(x))$, with $\varepsilon > 0$ given.
Clearly,
$$-\Delta v^- = -\Delta v^+ = 0 \quad\text{in } \Omega.$$
Assume for a moment that $v^-\le g\le v^+$ on $\partial\Omega$, i.e. $v^-(x)$, resp. $v^+(x)$, is a subsolution, resp. supersolution, to (4.0.1). Then from Corollary 4.4.3 and the fact that $u(x) = \sup\{v(x),\ v\in S_g\}$, we get
This proves $|u(x) - g(\xi)|\le \varepsilon + kw(x)$ for all $x\in\Omega$, which implies $\lim_{x\to\xi}u(x) = g(\xi)$, because $w$ is continuous and $w(\xi) = 0$.
Now we show that for an appropriate choice of $\varepsilon$, $\delta$, the function $v^-$, resp. $v^+$, is a subsolution, resp. supersolution. Let $M := \|g\|_{C^0(\partial\Omega)}$ and choose $\varepsilon$, $\delta$, $k$ positive such that
$$|g(x) - g(\xi)| < \varepsilon, \quad\text{if } x\in\partial\Omega\cap B(\xi,\delta),$$
Then
$$-\Delta v^- = 0 \text{ in } \Omega,$$
$$v^-(x)\le g(x) + |g(\xi) - g(x)| - \varepsilon - kw(x)\le\begin{cases}g(x) + \varepsilon - \varepsilon - kw(x), & \text{for } |x-\xi| < \delta,\\ g(x) + 2M - \varepsilon - 2M, & \text{for } |x-\xi|\ge\delta,\end{cases} \quad < g(x), \quad\forall x\in\partial\Omega.$$
Chapter 9 9.4. Annex: Chapter 5
Change of variable
Let $\theta\in C^\infty(\mathbb{R}^N;\mathbb{R}^N)$ with inverse $\theta^{-1}\in C^\infty(\mathbb{R}^N;\mathbb{R}^N)$. If $T\in\mathcal D'(\mathbb{R}^N)$, we can define the distribution $T\circ\theta^{-1}\in\mathcal D'(\mathbb{R}^N)$ by
$$\langle T\circ\theta^{-1}, \varphi\rangle = \langle T, (\varphi\circ\theta)\,|\det[\nabla\theta]|\rangle.$$
The motivation for this definition is the following. Assume $T = f\in L^1(\mathbb{R}^N)$. Then
$$\langle T\circ\theta^{-1},\varphi\rangle = \langle f\circ\theta^{-1},\varphi\rangle = \int_{\mathbb{R}^N}f(\theta^{-1}(x))\varphi(x)\,dx = \int_{\mathbb{R}^N}f(x)(\varphi\circ\theta)(x)\,|\det[\nabla\theta]|\,dx.$$
Example 9.4.2 Let $n\in\mathbb{N}$ and let us look for the solutions $T\in\mathcal D'(\mathbb{R})$ to
$$x^nT = 0. \tag{9.4.6}$$
First we consider $n = 1$. Then the solution is $T = C\delta_0$, with $C$ constant. Indeed, clearly $T = C\delta_0$ is a solution. Conversely, for $\varphi\in\mathcal D$ set $\psi(x) = \dfrac{\varphi(x) - \varphi(0)\eta(x)}{x}$, where $\eta$ is a $\{[-1,1],(-2,2)\}$ cut-off function. As $\psi(x) = \dfrac{\varphi(x)(1-\eta(x))}{x} + \eta(x)\int_0^1\varphi'(tx)\,dt$, we get $\psi\in\mathcal D$. Then, for general $n$ and the analogous $\psi$,
$$0 = \langle x^nT,\psi\rangle = \langle T, x^n\psi\rangle = \langle T,\varphi\rangle - \sum_{k=0}^{n-1}\frac{\varphi^{(k)}(0)}{k!}\langle T,\eta x^k\rangle; \quad\text{hence}\quad T = \sum_{k=0}^{n-1}C_k\delta_0^{(k)}, \quad C_k \text{ constants}.$$
Example 9.4.3 Let $f\in\mathcal D'(\mathbb{R})$ and let us look for the solutions $T\in\mathcal D'(\mathbb{R})$ to
$$xT = f. \tag{9.4.7}$$
Note that we have already seen the cases $f = 0$ and $f = 1$. If we know one particular solution $S$ to (9.4.7), then all the solutions are given by $T = C\delta_0 + S$, $C$ constant. Now let us find an $S$. Inspired by (5.2.13), we define $\mathrm{pv}(x^{-1}f)$ by
$$\langle\mathrm{pv}(x^{-1}f),\varphi\rangle = \left\langle f, \frac{\varphi - \varphi(0)\eta}{x}\right\rangle, \tag{9.4.8}$$
where $\eta$ is a $\{\{0\},(-\epsilon,\epsilon)\}$ cut-off function, $\epsilon > 0$. It is easy to show that $T = \mathrm{pv}(x^{-1}f)\in\mathcal D'$ and it solves (9.4.7). So, $S = \mathrm{pv}(x^{-1}f)$ is a particular solution to (9.4.7), and all solutions to (9.4.7) are of the form $T = C\delta_0 + \mathrm{pv}(x^{-1}f)$.
Note that the solution does not depend on $\epsilon$ or $\eta$, because if $\eta_i$, $i = 1,2$, is a $\{\{0\},(-\epsilon_i,\epsilon_i)\}$ cut-off function and $S_i = \mathrm{pv}(x^{-1}f)$ is defined by (9.4.8) with $\eta = \eta_i$, then $S_1 - S_2 = C\delta_0$.
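The principal value construction can be illustrated numerically in the classical case $f = 1$, where $\mathrm{pv}(x^{-1})$ is the symmetric truncation of $1/x$ and $x\cdot\mathrm{pv}(x^{-1}) = 1$. The grid, cutoff values, and Gaussian test function below are illustrative assumptions:

```python
import numpy as np

# Numerical sketch of pv(1/x) (the case f = 1): the symmetric truncation
#   <pv(1/x), phi> = lim_{eta->0} int_{|x|>eta} phi(x)/x dx
# exists, and <pv(1/x), x*phi> = <1, phi>.
def phi(x):
    return np.exp(-x**2)          # a rapidly decaying test function

def pv_pairing(psi, eta, L=30.0, m=400000):
    # midpoint rule over eta < |x| < L, folding the two half-lines
    dx = (L - eta) / m
    x = eta + dx * (np.arange(m) + 0.5)
    return np.sum((psi(x) - psi(-x)) / x) * dx

# Stability as eta -> 0 (phi is even here, so the limit is 0):
v1, v2 = pv_pairing(phi, 1e-3), pv_pairing(phi, 1e-6)
print(abs(v1 - v2) < 1e-6)        # True

# <pv(1/x), x*phi(x)> equals int phi dx = sqrt(pi):
w = pv_pairing(lambda x: x * phi(x), 1e-6)
print(abs(w - np.sqrt(np.pi)) < 1e-3)   # True
```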
Let us also look for the solutions $T\in\mathcal D'(\mathbb{R})$ to
$$T' = f. \tag{9.4.9}$$
For $\varphi\in\mathcal D$ set $\psi(x) = \int_{-\infty}^x\varphi(t)\,dt - \int_{-\infty}^x\varphi_1(t)\,dt\,\langle 1,\varphi\rangle$, where $\varphi_1\in\mathcal D$ with $\int_{\mathbb{R}}\varphi_1(t)\,dt = 1$. Then $\psi\in\mathcal D$. Assuming $T$ exists implies
$$\langle T,\varphi\rangle = \langle C,\varphi\rangle - \langle f,\psi\rangle, \quad C \text{ constant}. \tag{9.4.10}$$
Every $T$ given by (9.4.10) solves (9.4.9), so (9.4.10) gives all the solutions to (9.4.9).
As an application, let us find the solutions to
$$xT' = 0, \quad x\in\mathbb{R},\ T\in\mathcal D'(\mathbb{R}). \tag{9.4.11}$$
From Example 9.4.2 we have $T' = D\delta_0$, $D$ constant. From (9.4.10) with $f = D\delta_0$ we get
$$\langle T,\varphi\rangle = \langle C,\varphi\rangle - D\langle\delta_0,\psi\rangle = \langle C,\varphi\rangle - D\int_{-\infty}^0\varphi(t)\,dt + D\int_{-\infty}^0\varphi_1(t)\,dt\,\langle 1,\varphi\rangle = \langle\tilde C,\varphi\rangle + D\langle H,\varphi\rangle,$$
with $\tilde C$ a constant,
Proof. We refer the reader to [23] or [22] for a proof, which uses the uniform boundedness principle (Banach–Steinhaus theorem). For a proof by contradiction, the reader can see [43].
Any distribution $T\in\mathcal D'(\Omega)$ restricted to any compact $K$ is of finite order, with the order possibly tending to infinity as $K$ tends to $\Omega$; see [22, 23]. The following theorem shows that the distributions with compact support are of finite order.
Theorem 9.4.6 Let $\Omega\subset\mathbb{R}^N$ be an open set and $T\in\mathcal D'(\Omega)$ with compact support. Then $T$ is of finite order; see (5.2.6).
Proof. Let $G\Subset\Omega$ be open with $\mathrm{supp}(T)\subset G$, and $\eta\in\mathcal D$ a $\{\mathrm{supp}(T), G\}$ cut-off function. Then set
$$\langle\tilde T,\varphi\rangle := \langle T,\eta\varphi\rangle.$$
Clearly $\langle\tilde T,\varphi\rangle = \langle T,\varphi\rangle$ for every $\varphi\in\mathcal D(\Omega)$. Note also that $\tilde T$ does not depend on the choice of $\eta$, because if $\eta_i$, $i = 1,2$, are two cut-off functions as above then $\langle T,(\eta_1-\eta_2)\varphi\rangle = 0$, because $\mathrm{supp}(\eta_1-\eta_2)\subset N(T)$. Finally, it is easy to check that $\tilde T$ is continuous in the sense:
Chapter 9 9.4. Annex: Chapter 5
T (y), ϕ(·, y) ∈ C ∞ (U ),
then (9.4.12)
Dα T (y), ϕ(x, y) = T (y), Dxα ϕ(x, y), ∀x ∈ U, ∀α ∈ NN
0 .
Proof. Let us prove (9.4.12) for $|\alpha| = 1$, for example $\alpha = (1,0,\dots,0)$. For $x\in U$, let $r$ and $K$ be as in i) and ii) above. Let $e = (1,0,\dots,0)$, $0\ne h\in\mathbb{R}$, and consider
$$z(h) = \frac{\langle T(y),\varphi(x+he,y)\rangle - \langle T(y),\varphi(x,y)\rangle}{h} - \langle T(y), D_x^\alpha\varphi(x,y)\rangle = \langle T(y), \ell_h(y)\rangle, \quad\text{with}$$
$$\ell_h(y) := \frac{1}{h}\big(\varphi(x+he,y) - \varphi(x,y) - hD_x^\alpha\varphi(x,y)\big).$$
Note that $\mathrm{supp}(\ell_h)\subset K$ for $|h| < r$. Next, for every $\beta\in\mathbb{N}_0^N$ we have
$$|D_y^\beta\ell_h(y)| = \frac{1}{|h|}\big|D_y^\beta\varphi(x+he,y) - D_y^\beta\varphi(x,y) - hD_x^\alpha D_y^\beta\varphi(x,y)\big| = \frac{1}{2}\big|hD_x^{2\alpha}D_y^\beta\varphi(x+the,y)\big| \le \frac{1}{2}|h|\,\|D_x^{2\alpha}D_y^\beta\varphi\|_{C^0(B(x,r)\times\mathbb{R}^N)} \xrightarrow{h\to 0} 0 \quad (\text{here } t\in(0,1)),$$
which proves $\lim_{h\to 0}\ell_h = 0$ in $\mathcal D$. Hence, $\lim_{h\to 0}z(h) = 0$, which proves (9.4.12) for $|\alpha| = 1$. One can prove (9.4.12) for arbitrary $|\alpha|\ge 1$ by induction. This completes the proof of Claim a).
For Claim b), we note that (9.4.12) holds because $\varphi(x+y)$ satisfies the conditions of i) and ii). It remains to show that $\langle T(y),\varphi(\cdot+y)\rangle$ has compact support. Let $G\subset\mathbb{R}^N$ be open with $\mathrm{supp}(T)\subset G$ and $\eta$ a $\{\mathrm{supp}(T),G\}$ cut-off function. Then
$$\psi(x) := \langle T(y),\varphi(x+y)\rangle = \langle T(y),\eta(y)\varphi(x+y)\rangle.$$
Clearly, if $x$ is such that $\eta(\cdot)\varphi(x+\cdot)\equiv 0$ then $\psi(x) = 0$. This implies $\mathrm{supp}(\psi)\subset\{x,\ \eta(\cdot)\varphi(x+\cdot)\not\equiv 0\}\subset\mathrm{supp}(\varphi) - \mathrm{supp}(\eta)$, where in general $A - B = \{p - q,\ p\in A,\ q\in B\}$ for any $A,B\subset\mathbb{R}^N$. As $\mathrm{supp}(\eta)\subset G$, and $G$ is an arbitrary open set including $\mathrm{supp}(T)$, we obtain $\mathrm{supp}(\langle T(y),\varphi(\cdot+y)\rangle)\subset\mathrm{supp}(\varphi) - \mathrm{supp}(T)$.
ii) $\mathcal D\subset\mathcal S$, with dense inclusion, i.e. for every $u\in\mathcal S$ there exists $(u_n)$ in $\mathcal D$ such that $\lim_{n\to\infty}u_n = u$ in $\mathcal S$.
iii) $\mathcal S\subset L^p$, $p\in[1,\infty]$, with dense inclusion for $p\in[1,\infty)$, i.e. for every $u\in L^p$ there exists $(u_n)$ in $\mathcal S$ such that $\lim_{n\to\infty}u_n = u$ in $L^p$.
iv) $L^p\subset\mathcal S'$, $p\in[1,\infty]$, with dense inclusion, i.e. for every $u\in\mathcal S'$ there exists $(u_n)$ in $L^p$ such that $\lim_{n\to\infty}u_n = u$ in $\mathcal S'$.
v) $\mathcal S'\subset\mathcal D'$ with dense inclusion, i.e. for every $u\in\mathcal D'$ there exists $(u_n)$ in $\mathcal S'$ such that $\lim_{n\to\infty}u_n = u$ in $\mathcal D'$.
which proves the inclusion $\mathcal S\subset L^p$, $p\in[1,\infty]$. The density of the inclusion for $p\in[1,\infty)$ follows from ii) and Theorem 1.3.5.
Proof of iv). We have $L^p\subset\mathcal S'$ in the sense that if $u\in L^p$ then $T_u$ defined by $\langle T_u,\varphi\rangle = \int_{\mathbb{R}^N}u\varphi\,dx$, $\varphi\in\mathcal S$, defines an element of $\mathcal S'$. It is easy to verify that, indeed, $T_u\in\mathcal S'$.
For the density of $L^p\subset\mathcal S'$, one can prove that $\mathcal S\subset\mathcal S'$, with dense inclusion in the sense that for every $T\in\mathcal S'$ there exists $(T_n)$ in $\mathcal S$ such that $\lim_{n\to\infty}T_n = T$ in $\mathcal S'$. The proof of this result is technical and we will omit the details. It follows the same steps as the proof of the dense embedding $\mathcal D\subset\mathcal D'$; see Theorem 5.3.3. Namely, take $T_n(x) = \langle T(y),\rho_n(x-y)\rangle$, where $(\rho_n)$ is a regularizing sequence with $\rho_n(x) = \rho_n(-x)$. Next, like in Lemma 9.4.8 one can show that $T_n\in\mathcal S$. Furthermore, like in Lemma 9.4.9 one can prove that $\langle T_n,\varphi\rangle = \langle T,\rho_n * \varphi\rangle$, which shows that $T_n\to T$ in $\mathcal S'$.
Proof of v). We have $\mathcal S'\subset\mathcal D'$ in the sense that if $T\in\mathcal S'$, then $T$ restricted to $\mathcal D$ is an element of $\mathcal D'$. For the density of the inclusion, let $\eta_n$ be a $\{B(0,n),B(0,2n)\}$ cut-off function, $n\in\mathbb{N}$. For $T\in\mathcal D'$ consider $T_n = \eta_nT$. Then $T_n\in\mathcal S'$ and $\lim_{n\to\infty}T_n = T$ in $\mathcal D'$.
For every $T\in\mathcal S'$ satisfying (9.4.15) we say "$T$ has finite order", and $m$ is the order of $T$.
Proof. The proof is similar to the proof of Theorem 5.2.15 (Theorem 9.4.6). Indeed, assume (9.4.15) does not hold. Then for every $n\in\mathbb{N}$ there exists $\varphi_n\in\mathcal S$ such that $|\langle T,\varphi_n\rangle| > n\,s_n(\varphi_n)$. Set $\psi_n = \dfrac{1}{n}\dfrac{\varphi_n}{s_n(\varphi_n)}$. Then $|\langle T,\psi_n\rangle| > 1$ and $\lim_{n\to\infty}\psi_n = 0$ in $\mathcal S$, because $s_m(\psi_n)\le n^{-1}$ for $n > m$. Passing to the limit in $|\langle T,\psi_n\rangle| > 1$ implies $0\ge 1$, which is a contradiction and proves the proposition.
Proof.
i) Let us prove first $\mathcal F[\mathcal S]\subset\mathcal S$ and the continuity of $\mathcal F$. For $u\in\mathcal S$ let $\hat u = \mathcal F[u]$. Note that $\hat u\in C^\infty$. From (5.4.6), we get
We want to prove $v(z) = u(z)$. For this, we want to use Fubini's theorem in the previous formula, which is not allowed because $e^{i(z-x)\cdot\xi}\notin L^1$. However, we can proceed as follows.
For $\varepsilon > 0$ set
$$v_\varepsilon(z) = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}e^{iz\cdot\xi}e^{-\varepsilon|\xi|^2}\hat u(\xi)\,d\xi = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}\left(\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}u(x)e^{-i(x-z)\cdot\xi}\,dx\right)e^{-\varepsilon|\xi|^2}\,d\xi.$$
Then from Lebesgue's DCT, we have $\lim_{\varepsilon\to 0}v_\varepsilon(z) = v(z)$. Now, for $v_\varepsilon$ we can use Fubini's theorem, which together with (5.4.10) gives
$$v_\varepsilon(z) = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}u(x)\,\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}e^{-i(x-z)\cdot\xi}e^{-\varepsilon|\xi|^2}\,d\xi\,dx = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}u(x)\,\mathcal F[e^{-\varepsilon|\xi|^2}](x-z)\,dx$$
$$= \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}u(z+x)\,\mathcal F[e^{-\varepsilon|\xi|^2}](x)\,dx = \frac{1}{2^N\pi^{N/2}\varepsilon^{N/2}}\int_{\mathbb{R}^N}u(z+x)e^{-\frac{|x|^2}{4\varepsilon}}\,dx \quad (\text{using (5.4.10)})$$
$$= \frac{1}{\pi^{N/2}}\int_{\mathbb{R}^N}u(z+2\sqrt\varepsilon\,x)e^{-|x|^2}\,dx \xrightarrow{\varepsilon\to 0} u(z)\,\frac{1}{\pi^{N/2}}\int_{\mathbb{R}^N}e^{-|x|^2}\,dx \quad (\text{using (5.4.7)})$$
⁷ See Corollary 5.4.15 for another proof.
189
9.4. Annex: Chapter 5 Chapter 9
= u(z).
and $\hat u(-\xi)\in\mathcal S$, which proves that $\mathcal F$ is invertible from $\mathcal S$ to itself, and its inverse $\mathcal F^{-1}$ is given by (9.4.16).
iii) Finally, (9.4.17) and (9.4.18) follow by simple calculations.
Theorem 5.4.9 shows that F is an isometry in L2 . The following two results deal
with the action of F in Lp spaces.
Lemma 9.4.14 (Riemann–Lebesgue lemma) Let $u\in L^1$. Then
$$\hat u\in C^0(\mathbb{R}^N), \quad \|\hat u\|_{L^\infty}\le (2\pi)^{-N/2}\|u\|_{L^1}, \quad \lim_{|\xi|\to\infty}\hat u(\xi) = 0. \tag{9.4.20}$$
For the limit at infinity, assume first that $u\in\mathcal D$. Note that $D^\alpha u\in\mathcal D$ for all $\alpha\in\mathbb{N}_0^N$. Therefore, from (5.4.5) we have $\mathcal F[\Delta u] = -|\xi|^2\hat u$. Then from (9.4.20) we get
$$|\hat u(\xi)|\le\frac{\|\mathcal F[\Delta u]\|_{L^\infty}}{|\xi|^2}\le\frac{\|\Delta u\|_{L^1}}{(2\pi)^{N/2}}\cdot\frac{1}{|\xi|^2}, \quad |\xi|\ne 0. \tag{9.4.21}$$
For general $u\in L^1$ and $\varepsilon > 0$, take $\varphi\in\mathcal D$ with $\|u-\varphi\|_{L^1} < \frac{\varepsilon}{2}(2\pi)^{N/2}$. Then
$$|\hat u(\xi)|\le|(\hat u - \hat\varphi)(\xi)| + |\hat\varphi(\xi)|\le(2\pi)^{-N/2}\|u-\varphi\|_{L^1} + \frac{\|\Delta\varphi\|_{L^1(\mathbb{R}^N)}}{(2\pi)^{N/2}}\cdot\frac{1}{|\xi|^2} < \varepsilon,$$
for $|\xi|^2 > \dfrac{2\|\Delta\varphi\|_{L^1}}{\varepsilon(2\pi)^{N/2}}$, which proves the limit in (9.4.20).
The action of $\mathcal F$ in $L^p$ spaces is more complex. For $p\in[1,2]$ we have the following result.
Theorem 9.4.15 For $p\in[1,2]$, $\mathcal F$ maps $L^p$ continuously into $L^q$, where $1/p + 1/q = 1$.
Proof. For $p = 1$ and $q = \infty$ the theorem is proven in Lemma 9.4.14, and for $p = 2$ and $q = 2$ in Theorem 5.4.9. For $p\in(1,2)$ and $q\in(2,\infty)$, the theorem follows from the Riesz–Thorin interpolation theorem (Theorem 9.4.16) with $p_0 = 1$, $q_0 = \infty$, $p_1 = 2$, $q_1 = 2$, $1/p = (1-\theta)/p_0 + \theta/p_1 = (2-\theta)/2$, $1/q = (1-\theta)/q_0 + \theta/q_1 = \theta/2$, and $T = \mathcal F$.
Note that for $u\in L^p$, $p\in(2,\infty)$, in general $\mathcal F[u]$ is not a function; see [49].
$$\mathcal F[u*v](\xi) = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}v(y)\int_{\mathbb{R}^N}e^{-i\xi\cdot x}u(x-y)\,dx\,dy = \frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}e^{-i\xi\cdot y}v(y)\int_{\mathbb{R}^N}e^{-i\xi\cdot(x-y)}u(x-y)\,dx\,dy = (2\pi)^{N/2}\mathcal F[u](\xi)\mathcal F[v](\xi),$$
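A discrete analogue of this convolution identity can be checked with the DFT, where the transform of a circular convolution is the pointwise product of the transforms (the DFT normalization carries no $(2\pi)^{N/2}$ factor); the random vectors below are illustrative:

```python
import numpy as np

# Discrete analogue of F[u * v] = (2 pi)^(N/2) F[u] F[v]: for the DFT,
# DFT(circular convolution) = DFT(u) * DFT(v) pointwise.
rng = np.random.default_rng(1)
m = 64
u = rng.normal(size=m)
v = rng.normal(size=m)

# circular convolution computed directly from its definition
w = np.array([sum(u[k] * v[(j - k) % m] for k in range(m)) for j in range(m)])

lhs = np.fft.fft(w)                    # DFT of the convolution
rhs = np.fft.fft(u) * np.fft.fft(v)    # product of the DFTs
print(np.allclose(lhs, rhs))           # True
```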
For the convolution in $\mathcal S'$, we avoid the technical details and will only give the formal definition of the convolution and some properties. For the details of the proofs, the reader can consult [12, 18, 22, 23].
Definition 9.4.18 Let $T\in\mathcal S'$ and $S\in\mathcal E'$, where $\mathcal E'$ is the set of linear maps from $C^\infty(\mathbb{R}^N)$ to $\mathbb{C}$ continuous for the convergence in $C^\infty(\mathbb{R}^N)$.⁸ We define the convolution $T*S:\mathcal S\to\mathbb{C}$ by
⁸ A sequence $(\varphi_n)$ in $C^\infty(\mathbb{R}^N)$ converges to $0$ in $C^\infty(\mathbb{R}^N)$ if for every compact $K\subset\mathbb{R}^N$ and for every $\alpha\in\mathbb{N}_0^N$, $\lim_{n\to\infty}\|D^\alpha\varphi_n\|_{C^0(K)} = 0$.
Chapter 9 9.5. Annex: Chapter 6
we have
Proof. Let us consider first the case $m = 0$. From Theorem 6.3.3 we have $H^s\hookrightarrow C_b^0$. To complete the proof in this case, it is enough to prove (9.5.2). It is easy to point out that for $\sigma$ as in the theorem and arbitrary $x,y,\xi,\eta\in\mathbb{R}^N$, we have⁹
$$|e^{i\xi\cdot x} - e^{i\xi\cdot y}| = \left|\int_0^1\frac{d}{dt}e^{i\xi\cdot(y+t(x-y))}\,dt\right|\le 2|\xi|^\sigma|x-y|^\sigma. \tag{9.5.3}$$
It follows that
$$|u(x) - u(y)|\le\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}|e^{ix\cdot\xi} - e^{iy\cdot\xi}|\,|\hat u|\,d\xi\le\frac{2}{(2\pi)^{N/2}}|x-y|^\sigma\int_{\mathbb{R}^N}|\xi|^\sigma|\hat u|\,d\xi \tag{9.5.4}$$
$$\le\frac{2}{(2\pi)^{N/2}}|x-y|^\sigma\int_{\mathbb{R}^N}|\xi|^\sigma\langle\xi\rangle^{-s}\langle\xi\rangle^{s}|\hat u|\,d\xi\le\frac{2}{(2\pi)^{N/2}}|x-y|^\sigma\,\big\||\xi|^\sigma\langle\xi\rangle^{-s}\big\|_{L^2}\,\big\|\langle\xi\rangle^{s}\hat u\big\|_{L^2} = C|x-y|^\sigma\|u\|_{H^s},$$
with $C = \frac{2}{(2\pi)^{N/2}}\big\||\xi|^\sigma\langle\xi\rangle^{-s}\big\|_{L^2}$. Note that we have (as in Theorem 6.3.3)
$$\big\||\xi|^\sigma\langle\xi\rangle^{-s}\big\|_{L^2}^2\sim 1 + \int_1^\infty r^{N+2(\sigma-s)-1}\,dr < \infty,$$
Theorem 9.5.2 Let $k\in\mathbb{N}$ and assume $\Omega$ is an open bounded set satisfying the $H^{k+1}(\Omega)$-extension property. Then the embedding $H^{k+1}(\Omega)\hookrightarrow H^k(\Omega)$ is compact.
⁹ Actually, this inequality holds for every $\sigma\in(0,1)$.
Proof. Let $(u_n)$ be a bounded sequence in $H^{k+1}(\Omega)$, i.e. $\|u_n\|_{H^{k+1}(\Omega)}\le C_1$ for all $n$. From Theorem 6.3.7, there exists a sequence $U_n = Eu_n\in H^{k+1}(\mathbb{R}^N)$, with $\|U_n\|_{H^{k+1}(\mathbb{R}^N)}\le C_2 := C_1\|E\|$ and $E$ an $H^{k+1}$ extension operator. Without loss of generality we may assume that $\mathrm{supp}(U_n)\subset K$, $K$ compact, because we can multiply $(U_n)$ with $\eta(x/R)$, $\eta$ a $\{B(0,1),B(0,2)\}$ cut-off function, and $R\gg 1$.
The sequence $(\hat U_n)$, $\hat U_n = \mathcal F[U_n]$, is uniformly bounded and uniformly Lipschitz in $\mathbb{R}^N$. Indeed, as the $U_n$ have compact support we have $U_n\in L^1(\mathbb{R}^N)$, and from Lemma 9.4.14 we obtain $\hat U_n\in C^0(\mathbb{R}^N)$ and
$$|\hat U_n(\xi)|\le\frac{1}{(2\pi)^{N/2}}\int_K|U_n|\,dx\le\frac{1}{(2\pi)^{N/2}}C_2|K|^{1/2} =: C_3. \tag{9.5.5}$$
Furthermore, Ûn satisfies
1
|Ûn (ξ) − Ûn (η)| = (e−i(ξ·x) − e−i(η·x) )Un (x)dx
(2π)N/2 K
|ξ − η|
≤ |x||e−i(η+θ(ξ−η))·x ||Un (x)|dx (θ ∈ (0, 1))
(2π)N/2 K
≤ C4 |ξ − η|, (9.5.6)
1
with C4 = C2 xL2 (K) .
(2π)N/2
Therefore, (Ûn ) satisfies the conditions of Arzela-Ascoli in every ball B R = B(0, R),
R > 0. Then there exists a subsequence of (Ûn ), still denoted by (Ûn ) converging to a
certain ÛR in C 0 (B R ), for every R > 0. It follows that (Un ) is a Cauchy sequence in
H k (RN ). Indeed, first note that
un − un+m 2H k (Ω) ≤ Un − Un+m 2H k (RN ) (9.5.7)
= ξ |Ûn − Ûn+m | dξ +
2k 2
ξ2k |Ûn − Ûn+m |2 dξ
BR c
BR
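The extracted text breaks off at (9.5.7); a sketch of how the two integrals are typically estimated, under the notation above, is supplied here (this completion is not taken from the text):

```latex
% On B_R, the uniform convergence of (\hat U_n) in C^0(\overline B_R) controls the
% first integral; on B_R^c, <\xi>^{2k} \le <\xi>^{2(k+1)}/R^2 and the H^{k+1} bound
% \|U_n\|_{H^{k+1}} \le C_2 controls the tail:
\begin{align*}
  \int_{B_R} \langle\xi\rangle^{2k} |\hat U_n - \hat U_{n+m}|^2\,d\xi
    &\le \langle R\rangle^{2k}\, |B_R|\, \|\hat U_n - \hat U_{n+m}\|_{C^0(\overline B_R)}^2, \\
  \int_{B_R^c} \langle\xi\rangle^{2k} |\hat U_n - \hat U_{n+m}|^2\,d\xi
    &\le \frac{1}{R^2} \int_{\mathbb{R}^N} \langle\xi\rangle^{2(k+1)} |\hat U_n - \hat U_{n+m}|^2\,d\xi
     \le \frac{(2C_2)^2}{R^2}.
\end{align*}
% Given \epsilon > 0, first choose R with (2C_2)^2/R^2 < \epsilon/2, then n, m with
% the first term < \epsilon/2; hence (U_n) is Cauchy in H^k(\mathbb{R}^N).
```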
We also have the following embedding, which provides an example of how the Fourier transform can be used successfully to prove results in a context different from the $L^2$ or $H^s$ spaces.

Theorem 9.5.3 Let $s > \dfrac{N}{2} - \dfrac{N}{p}$ with $p > 2$. Then $H^s \hookrightarrow L^p$.
Proof. Let $u \in \mathcal{S}$, $\hat u = \mathcal{F}[u]$, and $q \in (1,2)$ the conjugate number of $p$, $\frac1p + \frac1q = 1$. From the continuity of $\mathcal{F}$ from $L^q$ to $L^p$ (see Theorem 9.4.15) and the fact that $u = \mathcal{F}[\hat u(-\xi)]$, it follows that $\|u\|_{L^p} = \|\mathcal{F}[\hat u(-\xi)]\|_{L^p} \le C \|\hat u\|_{L^q}$.

If $\|\hat u\|_{L^q} \le C\|u\|_{H^s}$, then $\|u\|_{L^p} \le C\|u\|_{H^s}$, and from $\mathcal{S} \overset{d}{\hookrightarrow} H^s$ it follows that $H^s \hookrightarrow L^p$. So the proof of the theorem reduces to proving $\|\hat u\|_{L^q} \le C\|u\|_{H^s}$. From the Hölder inequality with exponents $\alpha = 2/q$ and $\beta = 1/(1 - q/2)$, we get
\begin{align*}
  \int_{\mathbb{R}^N} |\hat u|^q\,d\xi
  &= \int_{\mathbb{R}^N} \langle\xi\rangle^{sq}\, |\hat u|^q \cdot \langle\xi\rangle^{-sq}\,d\xi \\
  &\le \left(\int_{\mathbb{R}^N} \langle\xi\rangle^{2s}\, |\hat u|^2\,d\xi\right)^{q/2}
       \left(\int_{\mathbb{R}^N} \langle\xi\rangle^{\frac{-sq}{1-q/2}}\,d\xi\right)^{1-q/2} \\
  &= \|u\|_{H^s}^q \left(\int_{\mathbb{R}^N} \langle\xi\rangle^{-\frac{2sp}{p-2}}\,d\xi\right)^{\frac{p-2}{2(p-1)}}.
\end{align*}
But $\int_{\mathbb{R}^N} \langle\xi\rangle^{-\frac{2sp}{p-2}}\,d\xi < \infty$ iff $N - \frac{2sp}{p-2} < 0$, which is equivalent to $\frac{s}{N} > \frac12 - \frac1p$ and completes the proof.
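A concrete instance of the threshold (an example added here for illustration): for $N = 3$ and $p = 4$,

```latex
% N = 3, p = 4: the condition s > N/2 - N/p reads
\[
  s > \frac{3}{2} - \frac{3}{4} = \frac{3}{4},
\]
% so in particular H^1(\mathbb{R}^3) \hookrightarrow L^4(\mathbb{R}^3), consistent with
% the classical Sobolev embedding H^1(\mathbb{R}^3) \hookrightarrow L^p for p \le 6.
```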
Proof. As $\Omega$ is bounded, by using the Hölder inequality it is easy to prove that $\|u\|'_{H^k(\Omega)} \le C_1 \|u\|_{H^k(\Omega)}$, with a certain $C_1 > 0$, where $\|\cdot\|'_{H^k(\Omega)}$ denotes the equivalent norm of the statement. It remains to prove the inverse inequality, i.e. $\|u\|_{H^k(\Omega)} \le C_2 \|u\|'_{H^k(\Omega)}$, with $C_2 > 0$.

Set $X = H^k(\Omega)$, $Y = H^{k-1}(\Omega)$, and $Z = L^1(\Omega)$. Then $X$, $Y$, and $Z$ fulfill the conditions of Lemma 9.5.5. From (9.5.11), for every $\epsilon \in (0,1)$ there exists $C(\epsilon) > 0$ such that
This implies
\[
  \|u\|_{H^k(\Omega)}
  \le \frac{1}{1-\epsilon}\, |u|_{H^k(\Omega)} + \frac{C(\epsilon)}{1-\epsilon}\, \|u\|_{L^1(\Omega)}
  \le C(\epsilon, a, b)\, \|u\|'_{H^k(\Omega)},
\]
which proves the lemma.
Then for every $\epsilon > 0$ there exists $C(\epsilon) > 0$ such that
\[
  \|x\|_Y \le \epsilon\, \|x\|_X + C(\epsilon)\, \|x\|_Z, \qquad \forall x \in X.
\]

Proof. Assume the lemma fails for a certain $\epsilon_0 > 0$. Then, there exists a sequence $(x_n)$ in $X$ such that $\|x_n\|_Y > \epsilon_0 \|x_n\|_X + n \|x_n\|_Z$. We can assume, without restriction, that $\|x_n\|_X = 1$ (otherwise take $x_n / \|x_n\|_X$ instead of $x_n$). Then
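The argument breaks off here in the extracted text; a sketch of the standard completion of the contradiction, under the usual hypotheses of such lemmas ($X$ compactly embedded in $Y$, $Y$ continuously embedded in $Z$), is:

```latex
% From \|x_n\|_X = 1 and \|x_n\|_Y > \epsilon_0 \|x_n\|_X + n \|x_n\|_Z:
From $\|x_n\|_Y \le C$ (continuity of $X \hookrightarrow Y$) we get
\[
  \|x_n\|_Z < \frac{\|x_n\|_Y}{n} \le \frac{C}{n} \longrightarrow 0.
\]
By compactness of $X \hookrightarrow Y$, a subsequence converges, $x_n \to x$ in $Y$;
by continuity of $Y \hookrightarrow Z$, also $x_n \to x$ in $Z$, so $\|x\|_Z = 0$,
i.e. $x = 0$. But $\|x\|_Y = \lim_n \|x_n\|_Y \ge \epsilon_0 > 0$, a contradiction.
```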
Lemma 9.5.7 (Truncation and extension by zero) Let $u \in W^{k,p}(\Omega)$, $p \in [1,\infty]$, and $\eta \in \mathcal{D}(\Omega)$. Then $V = E_0(\eta u) \in W^{k,p}(\mathbb{R}^N)$.
Proof. Indeed, for $\varphi \in \mathcal{D}(\mathbb{R}^N)$ and for any $\alpha \in \mathbb{N}_0^N$, $|\alpha| \le k$, by using (6.1.10) we have
\begin{align*}
  \langle D^\alpha V, \varphi\rangle
  &= (-1)^{|\alpha|} \int_{\mathbb{R}^N} V\, D^\alpha \varphi
   = (-1)^{|\alpha|} \int_\Omega u \eta\, D^\alpha \varphi
   = \int_\Omega D^\alpha(u\eta)\, \varphi \\
  &= \int_\Omega \varphi \sum_{0 \le \beta \le \alpha} C_\beta^\alpha\, D^\beta u\, D^{\alpha-\beta}\eta
   = \int_{\mathbb{R}^N} \varphi\, E_0\Big(\sum_{0 \le \beta \le \alpha} C_\beta^\alpha\, D^\beta u\, D^{\alpha-\beta}\eta\Big).
\end{align*}
So $D^\alpha V = E_0\big(\sum_{0 \le \beta \le \alpha} C_\beta^\alpha\, D^\beta u\, D^{\alpha-\beta}\eta\big) \in L^p(\mathbb{R}^N)$, which proves $V \in W^{k,p}(\mathbb{R}^N)$.
In the case when $\Omega$ is smooth, a function in $W^{k,p}(\Omega)$ can be approximated by functions smooth up to the boundary $\partial\Omega$. The following theorem is instructive, because it highlights the key steps of the method, in particular the step of regularization by convolution, which furthermore shifts the support of the function. The general case is treated by using a partition of unity; see Theorem 9.5.10.
Theorem 9.5.8 Let $k \in \mathbb{N}$, $p \in [1,\infty)$. Then $\mathcal{D}(\overline{\mathbb{R}^N_+}) \overset{d}{\hookrightarrow} W^{k,p}(\mathbb{R}^N_+)$, where $\mathcal{D}(\overline{\mathbb{R}^N_+}) := \{U|_{\mathbb{R}^N_+},\ U \in \mathcal{D}(\mathbb{R}^N)\}$.

Proof. Let $u \in W^{k,p}(\mathbb{R}^N_+)$. Without loss of generality, we may assume that $\mathrm{supp}(u)$ is bounded. If not, we can always consider $u_n = \eta_n u$, where $\eta_n(x) = \eta(x/n)$ and $\eta$ is a $\{B(0,1), B(0,2)\}$ cut-off function, and show easily that $u_n$ converges to $u$ in $W^{k,p}(\mathbb{R}^N_+)$, similarly as in Theorem 6.1.7.

Now, let $(\rho_n)$ be a mollifier sequence with $\mathrm{supp}(\rho_n) \subset \mathbb{R}^N_- := \{x_N < 0\}$, and consider the sequence $(\check\rho_n)$, where in general $\check\varphi(x) = \varphi(-x)$. Set $U = E_0(u)$, where $E_0$ is as in Lemma 9.5.6, and $U_n = U * \rho_n$.

We will show that $(u_n) = (U_n|_{\mathbb{R}^N_+})$ is the required sequence. Indeed, as $u$ has bounded support, $U$ also has bounded support, $U_n \in \mathcal{D}$, and furthermore $\lim_{n\to\infty} U_n = U$ in $L^p(\mathbb{R}^N)$, so $\lim_{n\to\infty} u_n = u$ in $L^p(\mathbb{R}^N_+)$. Next we compute $D^\alpha u_n$, $|\alpha| \le k$:
\begin{align}
  (-1)^{|\alpha|} \langle D^\alpha u_n, \varphi\rangle_{\mathcal{D}'(\mathbb{R}^N_+) \times \mathcal{D}(\mathbb{R}^N_+)}
  &= \int_{\mathbb{R}^N} U_n(x)\, D^\alpha \varphi(x)\,dx \notag\\
  &= \int_{\mathbb{R}^N} \int_{\mathbb{R}^N} U(y)\, \rho_n(x-y)\, D^\alpha \varphi(x)\,dy\,dx \notag\\
  &= \int_{\mathbb{R}^N} U(y)\, (\check\rho_n * D^\alpha \varphi)(y)\,dy \notag\\
  &= \int_{\mathbb{R}^N} U(y)\, D^\alpha(\check\rho_n * \varphi)(y)\,dy \notag\\
  &= \int_{\mathbb{R}^N_+} u(y)\, D^\alpha(\check\rho_n * \varphi)(y)\,dy, \tag{9.5.13}
\end{align}
where the last equality holds for $n$ large, because for such $n$ we have
\[
  \mathrm{supp}(\check\rho_n) \subset \mathbb{R}^N_+, \qquad
  \mathrm{supp}(\check\rho_n * \varphi) \subset \mathrm{supp}(\check\rho_n) + \mathrm{supp}(\varphi) \subset \mathbb{R}^N_+.
\]
So $D^\alpha(\check\rho_n * \varphi) \in \mathcal{D}(\mathbb{R}^N_+)$, and then we can integrate by parts in (9.5.13) with no boundary terms, which gives
\[
  \langle D^\alpha u_n, \varphi\rangle_{\mathcal{D}'(\mathbb{R}^N_+) \times \mathcal{D}(\mathbb{R}^N_+)}
  = \int_{\mathbb{R}^N_+} D^\alpha u(y)\, (\check\rho_n * \varphi)(y)\,dy.
\]
The classical construction of the $W^{k,p}$ extension operator for $p \in [1,\infty]$ is quite long and technical. We will use Theorem 9.5.8 to construct a $W^{k,p}$ extension operator, $p \ge 1$, with a simple proof.
Theorem 9.5.9 Let $k \in \mathbb{N}$, $p \in [1,\infty)$. Then there exists a linear continuous extension operator$^{10}$ $E : W^{k,p}(\mathbb{R}^N_+) \to W^{k,p}(\mathbb{R}^N)$.

Proof. Let $E : \mathcal{D}(\overline{\mathbb{R}^N_+}) \to C_0^k(\mathbb{R}^N)$ be defined by
\[
  (Eu)(x', x_N) := U(x', x_N) =
  \begin{cases}
    u(x', x_N), & x_N \ge 0, \\[4pt]
    \displaystyle\sum_{j=0}^{k} c_j\, u\Big(x', -\frac{x_N}{j+1}\Big), & x_N < 0,
  \end{cases} \tag{9.5.15}
\]
with the coefficients $c_j$ chosen so that $U \in C_0^k(\mathbb{R}^N)$. For this, it is sufficient to ensure the continuity across $\partial\mathbb{R}^N_+$ of the derivatives $\partial_{x_N}^i U$, $0 \le i \le k$. Evaluating these derivatives at $(x', 0)$ leads to
\[
  \sum_{j=0}^{k} \Big(-\frac{1}{j+1}\Big)^{i} c_j = 1, \qquad i = 0, 1, \dots, k. \tag{9.5.16}
\]
10. See Theorem 6.3.7 for the general extension result.
The matrix of this system is a Vandermonde matrix, which is nonsingular, and so the coefficients $c_j$ are uniquely determined.
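For instance, for $k = 1$ the system (9.5.16) can be solved by hand (a worked example added here for illustration):

```latex
% k = 1: unknowns c_0, c_1; equations \sum_j (-1/(j+1))^i c_j = 1 for i = 0, 1.
\[
  \begin{cases}
    c_0 + c_1 = 1, \\[2pt]
    -c_0 - \tfrac{1}{2} c_1 = 1,
  \end{cases}
  \qquad\Longrightarrow\qquad c_0 = -3, \quad c_1 = 4,
\]
% which gives the classical first-order reflection extension
\[
  (Eu)(x', x_N) = -3\, u(x', -x_N) + 4\, u\Bigl(x', -\frac{x_N}{2}\Bigr), \qquad x_N < 0.
\]
```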
$E$ provides the required extension operator. Indeed, clearly $E$ is linear. Furthermore, there exists $C > 0$ such that
\[
  \|Eu\|_{W^{k,p}(\mathbb{R}^N)} \le C\, \|u\|_{W^{k,p}(\mathbb{R}^N_+)}, \qquad \forall u \in \mathcal{D}(\overline{\mathbb{R}^N_+}). \tag{9.5.17}
\]
This estimate is proven easily by using simple classical calculus. Therefore, as $\mathcal{D}(\overline{\mathbb{R}^N_+})$ is dense in $W^{k,p}(\mathbb{R}^N_+)$, (9.5.17) implies that $E$ extends to a linear continuous operator from $W^{k,p}(\mathbb{R}^N_+)$ to $W^{k,p}(\mathbb{R}^N)$, still denoted by $E$, which satisfies (9.5.17) in $W^{k,p}(\mathbb{R}^N_+)$. Finally, $Eu = u$ a.e. in $\mathbb{R}^N_+$ for every $u \in W^{k,p}(\mathbb{R}^N_+)$, because we can choose a sequence $(u_n)$ in $\mathcal{D}(\overline{\mathbb{R}^N_+})$ with $Eu_n = u_n$ in $\mathbb{R}^N_+$, converging to $u$ in $W^{k,p}(\mathbb{R}^N_+)$ and pointwise in $\mathbb{R}^N_+$.
\[
  Eu \in W^{k,p}(\mathbb{R}^N),
\]
\[
  (Ew_i)(x) = w_i(x) = u(x)\eta_i(x), \qquad x \in \Omega,\ i = 1, \dots, M,
\]
\[
  (Eu)(x) = u(x) \sum_{i=0,\dots,M} \eta_i(x) = u(x), \qquad x \in \Omega,
\]
Theorem 9.5.11 Assume $\Omega \subset \mathbb{R}^N$ satisfies the $W^{k,p}$ extension property, $p \in [1,\infty)$. Then $\mathcal{D}(\overline\Omega) \overset{d}{\hookrightarrow} W^{k,p}(\Omega)$, where $\mathcal{D}(\overline\Omega) := \{U|_\Omega,\ U \in \mathcal{D}(\mathbb{R}^N)\}$.

Proof. Let $u \in W^{k,p}(\Omega)$ and $U = Eu$ its $W^{k,p}(\mathbb{R}^N)$ extension. From Theorem 6.1.7, there exists a sequence $U_n \in \mathcal{D}(\mathbb{R}^N)$ converging to $U$ in $W^{k,p}(\mathbb{R}^N)$. Set $u_n = U_n|_\Omega$. Then $(u_n)$ is in $\mathcal{D}(\overline\Omega)$ and converges to $u$ in $W^{k,p}(\Omega)$.
which, together with the inclusion $\mathcal{S}(\mathbb{R}^{N-1}) \hookrightarrow H^{k-1}(\mathbb{R}^{N-1})$, completes the proof of the theorem.
The following theorem shows that in the case when Ω is a bounded Lipschitz domain,
the W01,p (Ω) functions are characterized as W 1,p (Ω) functions with zero boundary trace.
The proof demonstrates typical techniques when working with Sobolev spaces, such
as truncation, extension, regularization, and shifting/translation of the support of the
function.
Corollary 9.5.13 Let $\Omega \subset \mathbb{R}^N$ be an open bounded Lipschitz set and $p \in [1,\infty)$. Then
\[
  W_0^{1,p}(\Omega) = N^{1,p}(\gamma), \qquad \text{where } N^{1,p}(\gamma) = \{u \in W^{1,p}(\Omega),\ \gamma(u) = 0\}. \tag{9.5.18}
\]

Proof. We will prove the corollary only in the case $\Omega = \mathbb{R}^N_+$. The proof of the general case is made by transforming the problem to a finite number of problems in $\mathbb{R}^N_+$ by using a partition of unity of $\partial\Omega$.

It is easy to prove $W_0^{1,p}(\Omega) \subset N^{1,p}(\gamma)$. Indeed, if $u \in \mathcal{D}(\Omega)$ then clearly $u \in N^{1,p}(\gamma)$. As $\mathcal{D}(\Omega)$ is dense in $W_0^{1,p}(\Omega)$ and $\gamma$ is continuous, it follows that $W_0^{1,p}(\Omega) \subset N^{1,p}(\gamma)$.

Now, let $u \in N^{1,p}(\gamma)$; we prove that $u \in W_0^{1,p}(\Omega)$. The proof contains several steps: first a truncation, then an extension by zero, and finally an appropriate regularization (which regularizes the function and translates/shifts its support). Without loss of generality we may assume that $u$ has bounded support, as by "truncation" (see Theorem 6.1.7) we can always consider a sequence $(u_n)$ with bounded support converging to $u$ in $W^{1,p}(\mathbb{R}^N_+)$. Next, we denote by $U$ the extension of $u$ to $\mathbb{R}^N$ by zero. Then $U \in W^{1,p}(\mathbb{R}^N)$. Indeed, for $\varphi \in \mathcal{D}(\mathbb{R}^N)$ and $|\alpha| = 1$, we have
\[
  \langle D^\alpha U, \varphi\rangle = -\int_{\mathbb{R}^N} U\, D^\alpha \varphi\,dx = -\int_{\mathbb{R}^N_+} u\, D^\alpha \varphi\,dx.
\]
There exists an open set $\omega \subset \mathbb{R}^N_+$ of class $C^1$ such that $\mathrm{supp}(\varphi) \cap \mathbb{R}^N_+ \subset \omega$. Then, from
9.6. Annex: Chapter 7
Then $u \in H^2(\mathbb{R}^N)$ because $\dfrac{\hat f}{1 + |\xi|^2} \in L_2^2(\mathbb{R}^N)$. By analogy, if $f \in L^2(\Omega)$ then one expects that $u \in H_0^1(\Omega) \cap H^2(\Omega)$. We will see that this is true provided $\Omega$ is regular.

In this section, we will address the "$L^2$-regularity theory" of the weak solutions to (9.6.1). The results remain the same even for more general second-order elliptic linear PDEs like (7.1.1). As the method of proof is the same, we will focus the analysis on (9.6.1), so as to avoid the technicalities of the general case and to highlight the main ideas.

There is also an "$L^p$-regularity theory" of the weak solutions to second-order elliptic linear PDEs, but the analysis is more difficult. We refer the interested reader to [2, 20, 38] for more on this topic. The result that we will prove in this section is the following.
Theorem 9.6.1 Let $\Omega \subset \mathbb{R}^N$ be an open bounded set and $u \in W_0^{1,p}(\Omega)$, $p \in (1,\infty)$, be a weak solution to (9.6.1). If $\Omega$ is of class $C^{m+2}$ and $f \in W^{m,p}(\Omega)$, $m = 0, 1, \dots$, then
\[
  u \in W^{m+2,p}(\Omega) \qquad \text{and} \qquad \|u\|_{W^{m+2,p}(\Omega)} \le C\, \|f\|_{W^{m,p}(\Omega)}, \tag{9.6.2}
\]
with $C$ independent of $f$ ($C$ depends only on $\Omega$).

The proof of this theorem in the case $p \ne 2$ is more difficult, and we refer the interested reader to the pioneering works on this subject [2, 20, 38]. We will prove the theorem in the case $p = 2$; see, for example, [5].

Note that with $f \in L^2(\Omega)$, the weak solution $u \in H_0^1(\Omega)$ to (9.6.1) satisfies
\[
  \|u\|_{H^1(\Omega)} \le \|f\|_{L^2(\Omega)}. \tag{9.6.3}
\]
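The estimate (9.6.3) follows by testing the weak formulation with $u$ itself; the computation below assumes, consistently with the remark about $\hat f/(1+|\xi|^2)$ above, that (9.6.1) is the model problem $-\Delta u + u = f$ (for $-\Delta u = f$ an analogous constant follows with Poincaré's inequality):

```latex
% Take v = u in the weak formulation of -\Delta u + u = f, u \in H_0^1(\Omega):
\[
  \|u\|_{H^1(\Omega)}^2
  = \int_\Omega |\nabla u|^2 + u^2\,dx
  = \int_\Omega f u\,dx
  \le \|f\|_{L^2(\Omega)}\, \|u\|_{L^2(\Omega)}
  \le \|f\|_{L^2(\Omega)}\, \|u\|_{H^1(\Omega)},
\]
% and dividing by \|u\|_{H^1(\Omega)} gives (9.6.3).
```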
The regularity analysis of $u$ will be divided in two parts. First is the regularity in the interior of $\Omega$, which depends only on the data $f$ and not on $\Omega$. Next is the regularity near the boundary $\partial\Omega$, which depends on the regularity of the boundary as well. To this end, let $G_1, \dots, G_n$, $n \in \mathbb{N}$, be a finite covering of $\partial\Omega$ with open sets. From Theorem 1.2.2, there exist $\eta_0, \eta_1, \dots, \eta_n$ in $\mathcal{D}(\mathbb{R}^N)$ with values in $[0,1]$ such that$^{12}$
\[
  \sum_{i=0}^n \eta_i = 1 \ \text{in } \mathbb{R}^N, \qquad
  \mathrm{supp}(\eta_i) \subset G_i \ \text{for all } i = 1, \dots, n, \qquad
  \mathrm{supp}(\eta_0) \cap \Sigma_\epsilon = \emptyset,
\]
where $\Sigma_\epsilon := \{x \in \mathbb{R}^N,\ \mathrm{dist}(x, \partial\Omega) < \epsilon\}$ for some small $\epsilon > 0$.

First, we will prove the regularity of $u$ in the interior of $\Omega$, namely for $u_0$. The idea is that, as $\mathrm{supp}(u_0) \Subset \Omega$, we can use in (7.2.12) as test functions the so-called "difference quotients" $D_i^h u_0$, which, upon sufficient regularity of $f$, allow one to estimate the $\|D_i^h u_0\|_{H^m(\Omega)}$ norms, $m = 1, 2, \dots$. These estimates imply the $H^{m+1}(\Omega)$ regularity of $u_0$. The regularity of $u$ near the boundary is obtained by first making a change of variables which "flattens" the boundary. This allows us to use difference quotients again, which provide estimates for the functions $u_i$ similar to those for $u_0$, and ultimately the regularity of $u$ in $\Omega$.
Lemma 9.6.2 Assume $\omega \Subset \Omega$, with $\omega$ open, and $|h| < \mathrm{dist}(\omega, \partial\Omega)$. Then for every $u \in L^p(\Omega)$, $p \in [1,\infty)$, and every $v \in C^0(\Omega)$ bounded with $\mathrm{supp}(v) \subset \omega$, we have
\[
  D_i^h(uv)(x) = u(x)\, D_i^h v(x) + v(x + he_i)\, D_i^h u(x), \qquad x \in \omega, \tag{9.6.6}
\]
\[
  \int_\Omega D_i^h u(x)\, v(x)\,dx = \int_\Omega u(x)\, D_i^{-h} v(x)\,dx. \tag{9.6.7}
\]
12. Given $A, B \subset \mathbb{R}^N$, $\mathrm{dist}(A, B) = \inf\{|a - b|,\ a \in A,\ b \in B\}$.
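The identity (9.6.6) is a one-line computation, written out here for the reader:

```latex
% Add and subtract u(x) v(x + h e_i) in the difference quotient of the product:
\begin{align*}
  D_i^h(uv)(x)
  &= \frac{u(x+he_i)\, v(x+he_i) - u(x)\, v(x)}{h} \\
  &= u(x)\, \frac{v(x+he_i) - v(x)}{h}
   + v(x+he_i)\, \frac{u(x+he_i) - u(x)}{h} \\
  &= u(x)\, D_i^h v(x) + v(x+he_i)\, D_i^h u(x).
\end{align*}
```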
\[
  \|D_i^h u\|_{L^p(\omega)} \le \|\partial_i u\|_{L^p(\Omega)}, \qquad \forall \omega \Subset \Omega,\ \omega \text{ open},\ |h| < \mathrm{dist}(\omega, \partial\Omega). \tag{9.6.8}
\]

Proof. Let $u \in C^\infty(\Omega) \cap W^{1,p}(\Omega)$. For $t \in \mathbb{R}$ set $v(t) = u(x + the_i)$. Then $v \in C^\infty(\mathbb{R})$, $v'(t) = he_i \cdot \nabla u(x + the_i)$, and
\[
  D_i^h u(x) = \frac{v(1) - v(0)}{h} = \frac1h \int_0^1 v'(t)\,dt = \int_0^1 e_i \cdot \nabla u(x + the_i)\,dt.
\]
Therefore, it follows that
\begin{align*}
  \|D_i^h u\|_{L^p(\omega)}^p
  &= \int_\omega \left|\int_0^1 e_i \cdot \nabla u(x + the_i)\,dt\right|^p dx \\
  &\le \int_0^1 \int_\omega |\nabla u(x + the_i)|^p\,dx\,dt
   \le \int_0^1 \int_\Omega |\nabla u(y)|^p\,dy\,dt
   = \|\nabla u\|_{L^p(\Omega)}^p,
\end{align*}
because from $\omega \Subset \Omega$, $|h| < \mathrm{dist}(\omega, \partial\Omega)$ we get $y = x + the_i \in \Omega$. This inequality remains true for all $u \in W^{1,p}(\Omega)$; see Theorem 6.1.8.
Lemma 9.6.4 Let $p \in (1,\infty)$ and $u \in L^p(\Omega)$. Assume that there exists $C > 0$ such that $\|D_i^h u\|_{L^p(\omega)} \le C$ for every open $\omega \Subset \Omega$ and every $|h| < \mathrm{dist}(\omega, \partial\Omega)$. Then $\partial_i u \in L^p(\Omega)$.
because $\varphi \in \mathcal{D}(\Omega)$. As $\|D_i^h u\|_{L^p(\omega)}$ is bounded, there exist a sequence $(h)$ tending to zero and $v \in L^p(\omega)$ such that $\lim_{h\to0} D_i^h u = v$ weakly in $L^p(\omega)$ (see Theorem 1.3.8), i.e.
\[
  \lim_{h\to0} \langle D_i^h u, \varphi\rangle = \int_\Omega v \varphi, \qquad \forall \varphi \in \mathcal{D}(\omega). \tag{9.6.12}
\]
From the arbitrariness of $\omega$, we have $v \in L^p(\Omega)$. Moreover, from (9.6.11) and (9.6.12) it follows that
\[
  \langle \partial_i u, \varphi\rangle = \int_\Omega v \varphi\,dx, \qquad \forall \varphi \in \mathcal{D}(\Omega),
\]
Let $i \in \{1, \dots, N\}$, $h > 0$, and take $\varphi = D^{-h} D^h u_0 := D_i^{-h} D_i^h u_0$. Note that $\varphi \in H^1(\mathbb{R}^N)$ as a linear combination of $H_0^1(\mathbb{R}^N)$ functions. From (9.6.7), the fact that $\partial_j$ and $D^h$ commute, and (9.6.15), we obtain
\[
  \int_{\mathbb{R}^N} |\nabla D^h u_0|^2 + \int_{\mathbb{R}^N} |D^h u_0|^2 = \int_{\mathbb{R}^N} g\, D^{-h} D^h u_0.
\]
This gives $\|D^h u_0\|^2_{H^1(\mathbb{R}^N)} \le \|g\|_{L^2(\mathbb{R}^N)}\, \|D^{-h} D^h u_0\|_{L^2(\mathbb{R}^N)}$. But from Lemma 9.6.3, we have $\|D^{-h}(D^h u_0)\|_{L^2(\mathbb{R}^N)} \le \|D^h u_0\|_{H^1(\mathbb{R}^N)}$. Combining the two last inequalities gives
\[
  \|\partial_i u_0\|_{H^1(\Omega)} \le \|g\|_{L^2(\mathbb{R}^N)} \le \|f\|_{L^2(\Omega)} + 2\|\eta_0\|_{H^2(\Omega)}\, \|u\|_{H^1(\Omega)} \le (1 + 2\|\eta_0\|_{H^2(\Omega)})\, \|f\|_{L^2(\Omega)},
\]
$D^\alpha g \in L^2(\mathbb{R}^N)$, and
$\varphi_k \in \mathcal{D}(\Omega)$ such that $\lim_{k\to\infty} \varphi_k = u$ in $H^1(\Omega)$; then $\eta_i \varphi_k \in \mathcal{D}(G_i^+)$ and $\lim_{k\to\infty} (\varphi_k \eta_i) = u_i$ in $H^1(G_i^+)$. Furthermore, $u_i$ satisfies
More precisely, as $\Omega$ is of class $C^{m+2}$ there exists $\theta_i \in C^{m+2}(\overline Q; \overline{G_i})$, invertible with inverse $\zeta_i \in C^{m+2}(\overline{G_i}; \overline Q)$, such that (see Fig. 6.4.1)
\[
  \theta_i(Q^+) = G_i^+, \qquad \theta_i(Q^0) = G_i^0, \qquad G_i^0 := G_i^+ \cap \partial\Omega.
\]
We change the variable and write (9.6.18) in terms of $v_i \in H_0^1(Q^+)$. Namely, we set

The strategy for $m = 0$ is as follows. After changing the variable, it turns out that $v_i$ satisfies a certain PDE in $Q^+$ (see (9.6.18)). In $Q^+$, we can make translations $D^h := D_j^h$, $j = 1, \dots, N-1$, and by using the technique of the regularity in the interior, we obtain $\partial_{ij}^2 v_i \in L^2(Q^+)$, $j = 1, \dots, N-1$, $i = 1, \dots, N$. The regularity $\partial_{NN}^2 v_i \in L^2(Q^+)$ is obtained by using the weak form of the equation for $v_i$, which concludes that $v_i \in H^2(Q^+)$, and therefore from Proposition 6.1.13 we obtain $u_i \in H^2(G_i^+)$. For $m > 0$ we will proceed by recurrence.

The following lemma is important. It shows that under the change of variables (9.6.20), the equation (9.6.18) is transformed into another elliptic PDE.

Lemma 9.6.6 Under the conditions of Theorem 9.6.1, $v_i \in H_0^1(Q^+)$ and it solves
\[
  \int_{Q^+} \sum_{k,l=1}^N a_{kl}(y)\, \partial_k v_i(y)\, \partial_l \varphi(y)\,dy = \int_{Q^+} b_i(y)\, \varphi(y)\,dy, \qquad \forall \varphi \in H_0^1(Q^+), \tag{9.6.21}
\]
\[
  \nabla u_i(x) \cdot \nabla \psi(x)
  = \bigl({}^t[\nabla\theta_i(y)]^{-1} \cdot \nabla v_i(y)\bigr) \cdot \bigl({}^t[\nabla\theta_i(y)]^{-1} \cdot \nabla \varphi(y)\bigr)
  = \nabla v_i(y) \cdot [\nabla\theta_i(y)]^{-1} \cdot {}^t[\nabla\theta_i(y)]^{-1} \cdot \nabla \varphi(y).
\]
\begin{align*}
  &= \int_{Q^+} \nabla v_i(y) \cdot [\nabla\theta_i(y)]^{-1} \cdot {}^t[\nabla\theta_i(y)]^{-1} \cdot \nabla \varphi(y)\, |\mathrm{Jac}(\theta_i(y))|\,dy \\
  &= \int_{Q^+} \sum_{k,l=1}^N a_{kl}(y)\, \partial_k v_i(y)\, \partial_l \varphi(y)\,dy.
\end{align*}
\[
  A_0 |\xi|^2 \ge \sum_{k,l=1}^N a_{kl}\, \xi_k \xi_l = |{}^t[\nabla\theta_i(y)]^{-1} \cdot \xi|^2\, |\mathrm{Jac}(\theta_i(y))| \ge a_0 |\xi|^2,
\]
for certain $A_0, a_0 > 0$, because $a_{kl} \in C^{m+1}(\overline Q)$, $[\nabla\theta_i]$ is not singular in $\overline{Q^+}$, and $|{}^t[\nabla\theta_i(y)]^{-1} \cdot \xi|$ is a norm in $\mathbb{R}^N$ equivalent to $|\xi|$ (independently of $y$).
\[
  \int_{Q^+} \sum_{k,l=1}^N D_j^h\bigl(a_{kl}(y)\, \partial_k v_i(y)\bigr)\, D_j^h\bigl(\partial_l v_i(y)\bigr)\,dy = \int_{Q^+} b_i(y)\, D_j^{-h} D_j^h v_i(y)\,dy.
\]
Using the fact that the $a_{kl}$ are $C^1$ functions, (9.6.22), and Lemma 9.6.3, from the last equation we obtain
\[
  a_0\, \|D_j^h \nabla v_i\|_{L^2(Q^+)}^2 \le \bigl(\|b_i\|_{L^2(Q^+)} + \|a_{kl}\|_{C^1(\overline{Q^+})}\, \|\nabla v_i\|_{L^2(Q^+)}\bigr)\, \|D_j^h \nabla v_i\|_{L^2(Q^+)}.
\]
Note that from $b_i$ in (9.6.23), $g_i$ in (9.6.17), and the estimates (9.6.19), (9.6.3), we obtain
\[
  \|b_i\|_{L^2(Q^+)} + \|a_{kl}\|_{C^1(\overline{Q^+})}\, \|\nabla v_i\|_{L^2(Q^+)} \le C\, \|f\|_{L^2(\Omega)}.
\]
Therefore, the last two inequalities and Lemma 9.6.3 yield
\[
  \|\partial_{jk}^2 v_i\|_{L^2(Q^+)} \le C\, \|f\|_{L^2(\Omega)}, \qquad j = 1, \dots, N-1,\ k = 1, \dots, N, \tag{9.6.25}
\]
with $C$ depending only on $\Omega$. Note that this inequality implies that for all $\varphi \in H_0^1(Q^+)$ we have
\[
  \left|\int_{Q^+} \partial_j v_i\, \partial_k \varphi\,dy\right| \le C\, \|f\|_{L^2(\Omega)}\, \|\varphi\|_{L^2(Q^+)}, \qquad j = 1, \dots, N-1,\ k = 1, \dots, N. \tag{9.6.26}
\]
Combining (9.6.28) and (9.6.26) gives (9.6.27) with $C\|f\|_{L^2(\Omega)}$ instead of $C$, and this proves $v_i \in H^2(Q^+)$.

For $m \ge 1$, we proceed by recurrence as in Theorem 9.6.5. So let us assume that the theorem holds up to $m-1$ and prove it for $m$. As in Theorem 9.6.5, $D^\alpha v_i$, $|\alpha| = m$, solves
\[
  \int_{Q^+} \sum_{k,l=1}^N a_{kl}\, (\partial_k D^\alpha v_i)(\partial_l \varphi)\,dy
  = \int_{Q^+} \Big( (D^\alpha b_i)\varphi - \sum_{k,l=1}^N (D^\alpha a_{kl})\, (\partial_k v_i\, \partial_l \varphi) \Big)\,dy, \tag{9.6.29}
\]
for all $\varphi \in H_0^1(Q^+)$. This equation is obtained from (9.6.21) by taking $D^\alpha \varphi$ instead of $\varphi$, with $\varphi \in \mathcal{D}(Q^+)$, integrating by parts, and then proceeding by density. Note that $D^\alpha b_i \in L^2(Q^+)$ and $D^\alpha a_{kl} \in C^1(\overline{Q^+})$. Then we proceed with (9.6.29) as with (9.6.21) and obtain $D^\alpha v_i \in H^2(Q^+)$, so $v_i \in H^{m+2}(Q^+)$.

Finally, the estimate (9.6.24) follows from (9.6.25) and (9.6.27), which completes the proof of the theorem.
Proof of Theorem 9.6.1 We consider only the case $p = 2$. We have $u = u_0 + \sum_{i=1}^n u_i$. From Theorem 9.6.5, we have $u_0 \in H^{m+2}(\Omega)$, and from Theorem 9.6.7 we have $u_i \in H^{m+2}(\Omega)$, so $u \in H^{m+2}(\Omega)$. The estimate (9.6.2) follows from (9.6.13) and (9.6.24).
Bibliography

[1] R. Adams and J. Fournier. Sobolev Spaces. Second edition. Elsevier, 2003.
[2] S. Agmon, A. Douglis, and L. Nirenberg. "Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions". In: Comm. Pure Appl. Math. I, II (1959).
[3] H. Amann. Vector-Valued Distributions and Fourier Multipliers. University of Zürich, 2003. URL: https://www.math.uzh.ch/amann/files/distributions.ps.
[4] S. Benzoni-Gavage and D. Serre. Multi-dimensional Hyperbolic Partial Differential Equations: First-order Systems and Applications. Oxford Mathematical Monographs. Oxford University Press, 2007.
[5] H. Brezis. Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, 2011.
[6] C. Caratheodory. Calculus of Variations and Partial Differential Equations. Part I: Partial Differential Equations of the First Order. San Francisco, London, Amsterdam: Holden-Day, Inc., 1965.
[7] H. Cartan. Differential Calculus. Paris: Hermann, 1971.
[8] Ph. Ciarlet. Linear and Nonlinear Functional Analysis with Applications. SIAM, 2013.
[9] R. Courant and D. Hilbert. Methods of Mathematical Physics. Vol. 2. Wiley-Interscience, 1989.
[10] C. M. Dafermos. Hyperbolic Conservation Laws in Continuum Physics. Springer, 2010.
[11] R. Dautray and J.-L. Lions. Mathematical Analysis and Numerical Methods for Science and Technology. Evolution Problems. Vol. V. Berlin: Springer-Verlag, 2000.
[12] R. Dautray and J.-L. Lions. Mathematical Analysis and Numerical Methods for Science and Technology. Functional and Variational Methods. Vol. II. Berlin: Springer-Verlag, 2000.
[13] R. Dautray and J.-L. Lions. Mathematical Analysis and Numerical Methods for Science and Technology. Spectral Theory and Applications. Vol. III. Berlin: Springer-Verlag, 2000.
[14] F. Demengel and G. Demengel. Functional Spaces for the Theory of Elliptic Partial Differential Equations. Fractional Sobolev Spaces. Springer, 2012.
[15] J. Diestel and J. J. Uhl, Jr. Vector Measures. Mathematical Surveys and Monographs, Vol. 15. AMS, 1977.
[16] L. C. Evans. Partial Differential Equations. Vol. 19. Graduate Studies in Mathematics. AMS, 1998.
[17] E. Fabes, O. Mendez, and M. Mitrea. "Boundary Layers on Sobolev–Besov Spaces and Poisson's Equation for the Laplacian in Lipschitz Domains". In: Journal of Functional Analysis 159 (1998), pp. 323–368.
[18] F. G. Friedlander and M. Joshi. Introduction to the Theory of Distributions. Cambridge University Press, 1999.
[19] P. Garabedian. Partial Differential Equations. Wiley, 1964.
[20] D. Gilbarg and N. S. Trudinger. Elliptic Partial Differential Equations of Second Order. Vol. 224. Springer, 2001.
[21] Ch. Heil. Introduction to Real Analysis. Graduate Texts in Mathematics. Springer, 2019.
[22] L. Hörmander. The Analysis of Linear Partial Differential Operators. Distribution Theory and Fourier Analysis. Vol. I. Springer-Verlag, 1983.
[23] G. Hörmann and R. Steinbauer. Lecture Notes on Theory of Distributions. Fakultät für Mathematik, Universität Wien, 2009. URL: https://www.mat.univie.ac.at/~stein/lehre/SoSem09/distrvo.pdf.
[24] A. Jonsson and H. Wallin. Function Spaces on Subsets of $\mathbb{R}^n$. New York: Harwood Academic, 1984.
[25] D. Kim. "Trace theorems for Sobolev–Slobodeckij spaces with or without weights". In: J. Funct. Spaces Appl. 5.3 (2007), pp. 243–268.
[26] E. Kreyszig. Introductory Functional Analysis with Applications. New York: Wiley, 1978.
[27] O. A. Ladyzhenskaja, V. A. Solonnikov, and N. N. Ural'ceva. Linear and Quasilinear Equations of Parabolic Type. Translations of Mathematical Monographs. Providence, RI: AMS, 1968.
[28] S. Lang. Real and Functional Analysis. Graduate Texts in Mathematics, Third edition. Springer, 1991.
[45] V. S. Rychkov. "On restrictions and extensions of the Besov and Triebel–Lizorkin spaces with respect to Lipschitz domains". In: J. Lond. Math. Soc. 60 (1 1999), pp. 237–257.
[46] C. Schneider. "Traces of Besov and Triebel–Lizorkin spaces on domains". In: Math. Nachr. 284.5-6 (2011), pp. 572–586.
[47] L. Schwartz. Théorie des distributions. Hermann, 1978.
[48] E. M. Stein. Singular Integrals and Differentiability Properties of Functions. Vol. 30. Princeton Mathematical Series. Princeton University Press, 1971.
[49] E. M. Stein and G. Weiss. Introduction to Fourier Analysis on Euclidean Spaces. Vol. 32. Princeton Mathematical Series. Princeton University Press, 1990.
[50] R. S. Strichartz. A Guide to Distribution Theory and Fourier Transforms. World Scientific Publishing, 2003.
[51] L. Tartar. An Introduction to Sobolev Spaces and Interpolation Spaces. Springer, 2007.
[52] R. Temam. Navier–Stokes Equations: Theory and Numerical Analysis. Vol. 343. AMS Chelsea Publishing, 1984.
[53] F. Treves. Basic Partial Differential Equations. Academic Press, 1975.
[54] H. Triebel. Interpolation Theory, Function Spaces and Differential Operators. Amsterdam, New York, Oxford: North-Holland, 1978.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Novruzi, A Short Introduction to Partial Differential Equations, CMS/CAIMS Books in Mathematics 11, https://doi.org/10.1007/978-3-031-39524-6