Frontiers in Mathematics
Jens Flemming
Variational Source Conditions, Quadratic Inverse Problems, Sparsity Promoting Regularization

New Results in Modern Theory of Inverse Problems and an Application in Laser Optics
This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG, part of Springer Nature.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This book is dedicated to Bernd Hofmann on the
occasion of his retirement.
Preface
This book grew out of the author's habilitation thesis, which was completed in January 2018. Parts II and III cover and slightly extend the material of the thesis. Part I, on the one
hand, provides an introduction to the other parts and, on the other hand, contains new
results on variational source conditions in the context of convergence rates theory for ill-
posed inverse problems.
The intention in writing this book was to demonstrate new and, to some extent, unorthodox ideas for handling ill-posed inverse problems. This book is not a comprehensive introduction to inverse problems. Instead, it focuses on a few research topics and treats them in depth.
The three topics of the book, variational source conditions, quadratic inverse problems, and ℓ¹-regularization, seem to be quite different. The first one is of great generality and
establishes the basis for several more concrete results in the book. The second one is
concerned with nonlinear mappings in a classical Hilbert space setting, whereas the third
deals with linear mappings in non-reflexive Banach spaces.
At second sight, quadratic inverse problems and linear inverse problems in a sparsity setting have similar structures and their handling shows several parallels.
Nevertheless, I decided to divide the book into three more or less independent parts and to
give hints on cross connections from time to time. The advantage of this decision is that
the reader may study the three parts in arbitrary order.
Finishing this book would not have been possible without constant support and advice
from Prof. Bernd Hofmann (TU Chemnitz). I thank him very much for his support in many respects during all the years I have been working in his research group. I also want to thank
my colleagues and coauthors, especially Steven Bürger and Daniel Gerth, for interesting
and fruitful discussions. Last but not least I have to express my thanks to the Faculty
of Mathematics at TU Chemnitz as a whole for the cordial and cooperative working
atmosphere.
1 Inverse Problems, Ill-Posedness, Regularization

Abstract
We introduce the mathematical setting as well as basic notation used throughout the
book. Different notions of ill-posedness in the context of inverse problems are discussed
and the need for regularization leads us to Tikhonov-type methods and their behavior
in Banach spaces.
1.1 Setting

Let X and Y be Banach spaces and let F : X ⊇ D(F) → Y be a mapping with domain D(F). We consider equations

F(x) = y†, x ∈ D(F), (1.1)

with exact and attainable data y† in Y. Solving such equations requires, in some sense, inversion of F. Hence the term inverse problem.
The mathematical field of inverse problems is not concerned with Eq. (1.1) in general
but only with equations that are ill-posed. Loosely speaking, an equation is ill-posed if
the inversion process is very sensitive to perturbations in the right-hand side y † . Such
perturbations cannot be avoided in practice because y † represents some measured quantity
and measurements always are corrupted by noise. We provide and discuss different precise
definitions of ill-posedness in the next section.
To analyze and overcome ill-posedness, noise has to be taken into account. In other words, the exact right-hand side y† is not available for the inversion process. Instead, we only have some noisy measurement y^δ at hand, which is assumed to belong to Y, too, and to satisfy

‖y^δ − y†‖ ≤ δ (1.2)

with known noise level δ ≥ 0.
1.2 Ill-Posedness

Definition 1.2 The mapping F in Eq. (1.1) is well-posed in the sense of Hadamard if

(i) for each y in Y there exists a solution x in D(F) to F(x) = y,
(ii) the solution is unique, and
(iii) the solution depends continuously on the right-hand side y.

Items (i) and (ii) of the definition require that F is bijective and item (iii) says that the inverse mapping has to be continuous with respect to the norm or some other topology. Items (ii) and (iii) are satisfied if and only if for each sequence (xn)n∈N in D(F) and each x in X we have

F(xn) → F(x) ⇒ xn → x.

Due to its restrictive nature Hadamard's definition only plays a minor role in the modern theory of ill-posed inverse problems.
Definition 1.3 Let F in Eq. (1.1) be linear and bounded. Then F is well-posed in the sense
of Nashed if the range of F is closed in Y and ill-posed in the sense of Nashed if the range
of F is not closed in Y .
Nashed's definition does not consider existence and uniqueness of solutions, but focuses on continuous (generalized) invertibility. If a generalized inverse exists, then it is continuous if and only if F is well-posed in the sense of Nashed, see [2, Theorem 5.6(b)]. But one should be aware of the fact that in general Banach spaces generalized inverses are not always available, because the null space of F or the closure of the range may be uncomplemented, see Proposition 1.10 and Sect. 1.2.4 below. An important example of this situation is the setting used for analyzing ℓ¹-regularization in Part III.
If F is injective, then the inverse F −1 : Y ⊇ R(F ) → X is continuous on R(F ) if and
only if R(F ) is closed. If X and Y are Hilbert spaces, then the Moore–Penrose inverse is
a generalized inverse which always exists. Thus, in Hilbert spaces well-posedness in the
sense of Nashed is equivalent to continuity of the Moore–Penrose inverse.
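The effect of a non-closed range is easy to observe numerically. As a minimal sketch (Python; not from the text, all variable names illustrative), truncate the diagonal operator (Ax)_k = x_k/k on ℓ² to n components: the infinite-dimensional operator has non-closed range and an unbounded Moore–Penrose inverse, and accordingly the naive inversion of the truncation amplifies a data perturbation by a factor growing with n:

import numpy as np

n = 1000
k = np.arange(1, n + 1)

x_exact = np.zeros(n)
x_exact[0] = 1.0                       # exact solution
y_exact = x_exact / k                  # data y = A x for (Ax)_k = x_k / k

delta = 1e-3
noise = np.zeros(n)
noise[-1] = delta                      # tiny perturbation in the n-th component
y_noisy = y_exact + noise              # ||y_noisy - y_exact|| = delta

x_naive = k * y_noisy                  # naive inversion, componentwise x_k = k * y_k
print(np.linalg.norm(y_noisy - y_exact))   # data error: 0.001
print(np.linalg.norm(x_naive - x_exact))   # solution error: n * delta = 1.0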
Nashed distinguished two types of ill-posedness in [1]. In Chap. 10 we have a closer look at this distinction in the context of ℓ¹-regularization.
Hadamard’s and Nashed’s definitions of ill-posedness are of global nature. For nonlinear
mappings F properties may vary from point to point and ill-posedness has to be understood
in a local manner. Following the ideas in [3] we have to distinguish between local ill-
posedness at a point x in X and local ill-posedness at a point y in Y .
The aim of defining precisely what is meant by ill-posedness is to describe the following
situation mathematically: Given a sequence (yn )n∈N in R(F ) approximating the unknown
exact data y † in (1.1), a sequence (xn )n∈N of corresponding solutions to F (x) = yn ,
x ∈ D(F ), does not converge to a solution of (1.1). The difficulties are to choose concrete
types of approximation and convergence and to handle the case of multiple solutions.
One possibility for defining ill-posedness locally at a point of the domain D(F ) has
been suggested in [4] by Hofmann, see also [5].
6 1 Inverse Problems, Ill-Posedness, Regularization
Definition 1.4 The mapping F is locally well-posed in the sense of Hofmann at a point x0 in D(F) if there is some positive ε such that for each sequence (xn)n∈N in Bε(x0) ∩ D(F) the implication

F(xn) → F(x0) ⇒ xn → x0

holds.
Definition 1.5 The mapping F is locally well-posed in the sense of Ivanov at a point y0 in R(F) if for each sequence (yn)n∈N in R(F) the implication

yn → y0 ⇒ d(F⁻¹(yn), F⁻¹(y0)) → 0

holds, where d(M̃, M) := sup_{x̃ ∈ M̃} inf_{x ∈ M} ‖x̃ − x‖ denotes the one-sided distance between subsets M̃ and M of X.

Note that the distance d between two subsets M̃ and M of X used in Definition 1.5 is not symmetric. It expresses the maximum distance of elements in M̃ to the set M. Since we cannot control which of possibly many approximate solutions is chosen by an inversion method, this type of distance is the right choice.
The only drawback of Definition 1.5 is that norm convergence cannot be replaced easily
by other types of convergence to define ill-posedness with respect to the weak topology,
for example. The following proposition provides an equivalent reformulation which avoids
explicit use of norms. The proposition was already mentioned briefly in [3, Remark 1].
Proposition 1.6 The mapping F is well-posed in the sense of Ivanov at a point y0 in R(F )
if and only if for each sequence (yn )n∈N in R(F ) converging to y0 and for each sequence
(x̃n)n∈N of preimages x̃n from F⁻¹(yn) there exists a sequence (xn)n∈N in F⁻¹(y0) with ‖x̃n − xn‖ → 0.
Proof Let F be well-posed in the sense of Ivanov at the point y0 and let (yn)n∈N be a sequence in R(F) converging to y0. Given a sequence (x̃n)n∈N with x̃n ∈ F⁻¹(yn) we immediately see

inf_{x ∈ F⁻¹(y0)} ‖x̃n − x‖ → 0.

For each n we may choose xn in F⁻¹(y0) with ‖x̃n − xn‖ ≤ inf_{x ∈ F⁻¹(y0)} ‖x̃n − x‖ + 1/n. Given positive ε, both summands are smaller than ε for all sufficiently large n. Thus, we obtain ‖x̃n − xn‖ ≤ 2 ε for all sufficiently large n, which implies the convergence ‖x̃n − xn‖ → 0.
Now let y0 be in R(F) and let (yn)n∈N be a sequence in R(F) converging to y0. Further, assume that for each sequence (x̃n)n∈N of preimages x̃n from F⁻¹(yn) there exists a sequence (xn)n∈N in F⁻¹(y0) with ‖x̃n − xn‖ → 0. If there were some positive fixed ε with

sup_{x̃ ∈ F⁻¹(yn_k)} inf_{x ∈ F⁻¹(y0)} ‖x̃ − x‖ ≥ ε

along a subsequence (yn_k)k∈N, then we could choose preimages x̃n_k ∈ F⁻¹(yn_k) with inf_{x ∈ F⁻¹(y0)} ‖x̃n_k − x‖ ≥ ε/2 for all k. No sequence (xn)n∈N in F⁻¹(y0) could then satisfy ‖x̃n − xn‖ → 0, contradicting the assumption. Hence, d(F⁻¹(yn), F⁻¹(y0)) → 0 and F is well-posed in the sense of Ivanov at y0.
Remark 1.7 From Proposition 1.6 we easily see that the following condition is sufficient
for local well-posedness in the sense of Ivanov at y0 : Each sequence (xn )n∈N in D(F )
with F (xn ) → y0 contains a convergent subsequence and the limits of all convergent
subsequences are solutions corresponding to the right-hand side y0 .
1.2.3 Interrelations
The definitions of Hofmann and Ivanov are closely connected, but differ in two aspects.
On the one hand, Hofmann’s definition works in X and Ivanov’s definition works in Y . On
the other hand, and also as a consequence of the first difference, in Hofmann’s definition
well-posedness is restricted to isolated solutions whereas Ivanov’s definition works for
arbitrary solution sets.
Both views have their advantages. Hofmann’s definition allows for a deeper analysis
of ill-posedness phenomena. Due to its locality in X, we can distinguish between well-posedness and ill-posedness at each element of a set of isolated solutions. That is, for one fixed data element there might simultaneously exist solutions at which the mapping is well-posed and solutions at which the mapping is ill-posed in the sense of Hofmann. Analyzing an inverse problem with Hofmann's definition allows one to identify regions of well-posedness and regions of ill-posedness. Thus, restricting the domain of the mapping F with the help of Hofmann's definition could make the inverse problem well-posed.
Ivanov’s definition does not allow for such a detailed analysis. But its advantage is that
it is closer to the issue of numerical instability. Given a data element, we want to know
whether a sequence of approximate solutions based on noisy data becomes arbitrarily close
to the set of exact solutions if the noise is reduced until it vanishes. This is exactly what
Ivanov’s definition expresses.
The interrelations between Hofmann’s definition and Ivanov’s definition are made
precise by the following two propositions. The first proposition is a slightly extended
version of [3, Proposition 2] and the second stems from oral communication with Bernd
Hofmann (Chemnitz).
Proposition 1.8 If the mapping F is locally well-posed in the sense of Ivanov at some
point y0 in R(F ), then F is locally well-posed in the sense of Hofmann at each isolated
solution corresponding to the data y0 .
Proof Let F be locally well-posed in the sense of Ivanov at y0 and let x0 be an isolated
solution to data y0 . Take a positive radius ε such that x0 is the only solution to data y0
in B_2ε(x0). For each sequence (x̃n)n∈N in Bε(x0) ∩ D(F) with F(x̃n) → F(x0) the corresponding sequence (yn)n∈N with yn := F(x̃n) converges to y0, and Proposition 1.6 yields a sequence (xn)n∈N in F⁻¹(y0) with ‖x̃n − xn‖ → 0. Since (x̃n)n∈N lies in Bε(x0) and x0 is the only solution in B_2ε(x0), we obtain xn = x0 for all sufficiently large n. Consequently, x̃n → x0, which proves local well-posedness in the sense of Hofmann at x0.
Proposition 1.9 There exist mappings F and points x0 in D(F ) such that F is locally
well-posed in the sense of Hofmann at x0 but locally ill-posed in the sense of Ivanov at
F (x0 ).
Proof Choose X := R, Y := R and

F(x) := x²/(1 + x⁴)

with D(F) = X. Then x0 := 0 is the only solution to F(x) = 0, x ∈ X. For each sequence (xn)n∈N in B1(0) with F(xn) → F(0) = 0 we have xn² ≤ (1 + xn⁴) F(xn) ≤ 2 F(xn) → 0, which implies local well-posedness in the sense of Hofmann at x0.

On the other hand, we may consider a sequence (yn)n∈N with elements yn := F(xn) such that xn → ∞. Then yn → 0, but the distance of the preimages xn to the solution set F⁻¹(0) = {0} tends to infinity. Hence, F is locally ill-posed in the sense of Ivanov at F(x0) = 0.
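This behavior is easy to observe numerically. A minimal sketch (Python; it assumes the reconstructed example F(x) = x²/(1 + x⁴) used above):

import numpy as np

F = lambda x: x**2 / (1 + x**4)

xs = np.array([10.0, 100.0, 1000.0])   # x_n -> infinity
print(F(xs))                            # y_n = F(x_n) -> 0
print(np.abs(xs - 0.0))                 # distance to F^{-1}(0) = {0} grows without bound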
Finally, we state the interrelation between Nashed’s definition and Ivanov’s definition.
The special case of Hilbert spaces, where each closed subspace is complemented, can be
found in [3, Proposition 1].
Proposition 1.10 Let F be a bounded linear operator with domain D(F ) = X between
the Banach spaces X and Y and let the null space N (F ) be complemented in X. Then F
is well-posed in the sense of Nashed if and only if F is locally well-posed in the sense of
Ivanov at every point of R(F ) and F is ill-posed in the sense of Nashed if and only if F is
locally ill-posed in the sense of Ivanov at every point of R(F ).
Proof First let F be well-posed in the sense of Nashed, that is, let R(F) be closed, and denote by U a closed complement of N(F) in X. The restriction F|U : U → R(F) is linear, bounded and bijective, and its inverse (F|U)⁻¹ is bounded by the open mapping theorem, because R(F) is a Banach space. Given y0 in R(F), a sequence (yn)n∈N in R(F) with yn → y0 and preimages x̃n ∈ F⁻¹(yn), the elements xn := x̃n − (F|U)⁻¹(yn − y0) satisfy F(xn) = y0, because

F((F|U)⁻¹(yn − y0)) = yn − y0,

and

‖x̃n − xn‖ = ‖(F|U)⁻¹F(x̃n) − (F|U)⁻¹(y0)‖ = ‖(F|U)⁻¹(yn − y0)‖ → 0.

Thus, Proposition 1.6 yields local well-posedness in the sense of Ivanov at every point y0 of R(F).
Now let (F|U)⁻¹ be unbounded. We want to show local ill-posedness in the sense of Ivanov at y0 = 0. Local ill-posedness at arbitrary y0 in R(F) then follows easily by translation.

Since (F|U)⁻¹ is unbounded, there exists a sequence (yn)n∈N in R(F) with yn → 0 but (F|U)⁻¹(yn) ↛ 0. Set x̃n := (F|U)⁻¹(yn) for all n in N. By Proposition 1.6 we have to show that there is no sequence (xn)n∈N in X with F(xn) = 0 and ‖x̃n − xn‖ → 0. Suppose such a sequence existed and denote the projection of X onto U along N(F) by PU : X → U. This projection exists and is a bounded linear operator because U and N(F) are complementary subspaces of X. Then

x̃n = PU(x̃n − xn) → 0,

because x̃n ∈ U and xn ∈ N(F). But this contradicts x̃n ↛ 0. Thus, F has to be locally ill-posed in the sense of Ivanov at y0 = 0.
In the remainder of this section we consider a bounded linear mapping and write A := F. An operator A on ℓ² with uncomplemented null space can be constructed with the help of a dense subset (z(n))n∈N of the open unit ball in ℓ²; see [9, Proof of Theorem 2.3.1] for details. We now show that if N(A) is uncomplemented, then Nashed's definition of well-posedness does not tell us anything about continuity of the inverse A_U^+: for a suitable sequence (un)n∈N in U converging to some u ≠ 0 in N(A), continuity of A_U^+ would imply

un = A_U^+ A un → A_U^+ A u = A_U^+ 0 = 0,

which contradicts un → u ≠ 0.
It remains to clarify the relation between Nashed’s definition and Ivanov’s definition.
We only have the partial result that a closed range implies well-posedness in the sense of
Ivanov, regardless of possible uncomplementedness of N (A).
Proposition 1.12 Let R(A) be closed. Then A is locally well-posed in the sense of Ivanov
at every point of R(A).
Proof Obviously, it suffices to show local well-posedness in the sense of Ivanov at zero.
Let (yn)n∈N be a sequence in R(A) with yn → 0 and take some positive ε. Then the set

Mε := { x̃ ∈ X : inf_{x ∈ N(A)} ‖x̃ − x‖ < ε }

is open and the open mapping theorem implies that the image A Mε is open, too. Since 0 ∈ A Mε and yn → 0, we have yn ∈ A Mε for all sufficiently large n. Consequently, we find preimages x̃n in Mε with A x̃n = yn. Letting ε → 0 we obtain a sequence (x̃n)n∈N with A x̃n = yn and inf_{x ∈ N(A)} ‖x̃n − x‖ → 0.
Noting that any two preimages of yn differ by an element of N(A), so that inf_{x ∈ N(A)} ‖x̂ − x‖ takes the same value for every preimage x̂ of yn, we arrive at inf_{x ∈ N(A)} ‖x̂n − x‖ → 0 for each sequence (x̂n)n∈N of preimages x̂n ∈ A⁻¹(yn). Choosing xn in N(A) with ‖x̂n − xn‖ ≤ inf_{x ∈ N(A)} ‖x̂n − x‖ + 1/n, Proposition 1.6 shows that A is locally well-posed in the sense of Ivanov at zero.
If N (A) is uncomplemented and R(A) is not closed, then the author does not have
any result on well- or ill-posedness in the sense of Ivanov. But he conjectures that both
situations are possible.
1.3 Tikhonov Regularization

To solve Eq. (1.1) numerically we have to take into account noisy data and the lack of stability
of the inversion process due to ill-posedness. Noisy data need not belong to the range of
F . Thus, there need not exist a solution. To overcome this difficulty we content ourselves
with ‘least squares’ solutions defined by
(1/p) ‖F(x) − y^δ‖^p → min over x ∈ D(F), (1.3)
where y^δ denotes noisy data with noise level δ as introduced in (1.2). The exponent p can be used to ease numerical minimization. We assume p > 1.
Without further assumptions on F there need not exist a solution to the minimization
problem (1.3). Even if we have a sequence (y δn )n∈N of noisy data with decreasing noise
levels δn , that is, δn → 0, for which solutions to (1.3) exist, we cannot guarantee that they
converge to the solution set of (1.1). This is a simple consequence of local ill-posedness in
the sense of Ivanov at the exact right-hand side y † , cf. Definition 1.5.
There exist several approaches to stabilize the minimization problem and to obtain
convergence of the stabilized problem’s solutions to the solution set of (1.1). We restrict
our attention to the very flexible class of Tikhonov-type regularization methods
Tα(x, y^δ) := (1/p) ‖F(x) − y^δ‖^p + α Ω(x) → min over x ∈ D(F). (1.4)
The penalty functional Ω : X → (−∞, ∞] has to be chosen in a way which stabilizes the
minimization problem, see Assumption 1.13 below. The positive regularization parameter
α controls the trade-off between data fitting and stabilization. To obtain convergence of
the Tikhonov minimizers to the solution set of (1.1) if the noise level decreases, we have
to choose α depending on δ or y δ or on both. Tikhonov-type methods are well established
in the field of inverse problems and detailed information can be found in almost all
monographs and textbooks in the field, see, e.g., [10–12].
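As a minimal computational sketch (not from the text), consider the classical Hilbert space case p = 2, Ω(x) = ‖x‖² with a linear, badly conditioned forward operator. The Tikhonov minimizer then solves a regularized normal equation, and the choice of α balances data fit against stability; all names below are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.diag(1.0 / np.arange(1, n + 1))      # ill-conditioned diagonal operator
x_exact = rng.standard_normal(n)
delta = 1e-4
noise = rng.standard_normal(n)
y_noisy = A @ x_exact + delta * noise / np.linalg.norm(noise)

def tikhonov(A, y, alpha):
    # minimizer of (1/2) ||A x - y||^2 + alpha ||x||^2,
    # i.e. solution of (A^T A + 2 alpha I) x = A^T y
    return np.linalg.solve(A.T @ A + 2 * alpha * np.eye(A.shape[1]), A.T @ y)

for alpha in [1e-1, 1e-4, 1e-8, 1e-12]:
    x_alpha = tikhonov(A, y_noisy, alpha)
    print(alpha, np.linalg.norm(x_alpha - x_exact))
# large alpha oversmooths, tiny alpha amplifies the noise; moderate alpha is best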
Under these assumptions one easily shows that there exist solutions to (1.1) which minimize Ω over the set of all solutions. Such solutions will be referred to as Ω minimizing solutions and are usually denoted by x†. Tikhonov minimizers can only converge to solutions of this special type.
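For linear A and Ω(x) = ‖x‖², the Ω minimizing solution is simply the minimum-norm solution, which in finite dimensions can be computed via the Moore–Penrose inverse. A tiny illustration (Python, not from the text):

import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # underdetermined: infinitely many solutions
y = np.array([1.0, 2.0])
x_dag = np.linalg.pinv(A) @ y     # Omega minimizing solution for Omega = ||.||^2
print(x_dag)                      # [1. 2. 0.]: minimal norm among all solutions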
Theorem 1.14 Let Assumption 1.13 be true. Then the following assertions on existence,
stability and convergence of Tikhonov regularized solutions are true.
(i) For each y in Y and each positive α the minimization problem (1.4) has a solution.
(ii) Let α > 0. If (yn )n∈N is a sequence in Y converging to some fixed y in Y and if
(xn )n∈N is a sequence of corresponding minimizers of Tα (·, yn ), then (xn )n∈N has a
weakly convergent subsequence and the limit of each weakly convergent subsequence
is a minimizer of Tα (·, y). Further Ω(xnk ) → Ω(x̄) for each weakly convergent
subsequence (xnk )k∈N with limit x̄.
(iii) Let (δn)n∈N be a sequence of noise levels converging to zero and let (y^{δn})n∈N be a sequence of corresponding data elements. Further assume that (αn)n∈N is a sequence of positive regularization parameters with αn → 0 and δn^p/αn → 0. Then each
sequence (xn )n∈N of corresponding minimizers of Tαn (·, y δn ) has a weakly convergent
subsequence and the limit of each weakly convergent subsequence is an Ω minimizing
solution to (1.1). Further Ω(xnk ) → Ω(x †) for each weakly convergent subsequence
(xnk )k∈N with limit x † .
In the simplest case, the error functional E† is the squared norm distance

E†(x) = ‖x − x†‖²

between regularized and exact solution. If there are multiple solutions, the point-to-set distance

E†(x) = inf_{x† ∈ S} ‖x − x†‖²

is a suitable choice if S denotes the set of norm minimizing solutions. In Banach spaces alternatives like the Bregman distance

E†(x) = Ω(x) − Ω(x†) − ⟨ξ, x − x†⟩ with a subgradient ξ ∈ ∂Ω(x†)

(see Sect. A.3 for a definition) proved to be useful. But Banach space norms can also be used. In ℓ¹(N) we could choose

E†(x) = inf_{x† ∈ S} ‖x − x†‖_{ℓ¹}

with S denoting again the set of norm minimizing solutions. All these examples will be discussed in Sect. 3.2 in more detail.
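For a finite candidate set S the point-to-set variants are straightforward to evaluate. A minimal sketch (Python; the helper name and the two-solution set are illustrative, not from the text):

import numpy as np

def point_to_set_error(x, S):
    # E(x) = inf over s in S of ||x - s||^2 (squared norm point-to-set distance)
    return min(np.linalg.norm(x - s) ** 2 for s in S)

S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(point_to_set_error(np.array([0.9, 0.1]), S))   # squared distance to the nearest solution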
Denoting by x_α^δ the minimizers of the Tikhonov minimization problem (1.4), we aim at asymptotic estimates

E†(x_α^δ) = O(ϕ(δ)) as δ → 0, (1.5)

where α may depend on δ and y^δ. The function ϕ shall be an index function in the following sense: ϕ : [0, ∞) → [0, ∞) is continuous and monotonically increasing with ϕ(0) = 0 and ϕ(t) > 0 for t > 0.
2 Variational Source Conditions Yield Convergence Rates

Abstract
We introduce variational source conditions and derive convergence rates for Tikhonov-
type regularization methods.
Different techniques have been developed to prove convergence rates (1.5). The most prominent tool is the concept of source conditions for linear ill-posed inverse problems in Hilbert
spaces. The classical concept is described in [10, Section 3.2] and general source
conditions are studied in [14]. See also the references given in [14] for the origins of
general source conditions. In both cases the norm distance between exact and regularized
solution is used as error functional E † .
For Banach spaces usage of source conditions is quite limited. But in 2007 variational
source conditions were introduced in [15] and thoroughly studied and developed during the
past 10 years, see, e.g., [11, 13, 16–18]. This type of condition makes it possible to prove convergence rates for many different settings, especially for nonlinear operators and general penalty
functionals in (1.4). In its original version Bregman distances (see Sect. A.3) were used as
error functional E † .
Variational source conditions are also known as variational inequalities, but this term
conflicts with the already existing mathematical field with the same name. A second alter-
native was introduced in the book [13]. There the term variational smoothness assumption
is used, because several kinds of smoothness (not only of the underlying exact solution
as it is the case for classical source conditions) are jointly described by one expression.
The term variational source condition rouses associations to classical source conditions. But the new concept has no similarity to classical source conditions; most notably, there is no source element involved. In the following we denote the minimal penalty value among all solutions to (1.1) by

Ω† := min{Ω(x†) : x† ∈ D(F), F(x†) = y†}.
Definition 2.1 Let β > 0 be a constant and let ϕ : [0, ∞) → [0, ∞) be an index function. A variational source condition for fixed right-hand side y† holds on a set M ⊆ D(F) if

β E†(x) ≤ Ω(x) − Ω† + ϕ(‖F(x) − y†‖) for all x ∈ M. (2.1)
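To make (2.1) concrete in the classical linear Hilbert space setting with Ω(x) = ‖x‖² and E†(x) = ‖x − x†‖²: a benchmark solution x† = A*w satisfies (2.1) with β = 1 and ϕ(t) = 2‖w‖ t, because ‖x − x†‖² = ‖x‖² − ‖x†‖² − 2⟨x†, x − x†⟩ and ⟨x†, x − x†⟩ = ⟨w, A(x − x†)⟩ ≥ −‖w‖ ‖A(x − x†)‖. A quick numerical check of this inequality (Python; a sketch with illustrative names, not from the text):

import numpy as np

rng = np.random.default_rng(2)
n = 100
A = np.diag(1.0 / np.arange(1, n + 1))
w = rng.standard_normal(n)
x_dag = A.T @ w                    # benchmark solution satisfying x = A* w

def vsc_gap(x):
    # right-hand side minus left-hand side of (2.1) with beta = 1,
    # E(x) = ||x - x_dag||^2, Omega = ||.||^2, phi(t) = 2 ||w|| t
    lhs = np.linalg.norm(x - x_dag) ** 2
    rhs = (np.linalg.norm(x) ** 2 - np.linalg.norm(x_dag) ** 2
           + 2 * np.linalg.norm(w) * np.linalg.norm(A @ (x - x_dag)))
    return rhs - lhs

print(min(vsc_gap(rng.standard_normal(n)) for _ in range(1000)))  # nonnegative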
If the set M is large enough to contain all minimizers of the Tikhonov functional
(1.4), then a variational source condition (2.1) implies the desired convergence rate
(1.5). Although our variant is slightly more general, the proofs of this fact given in
[13, Chapter 4] or in [21] still work with trivial modifications. Suitable choices of the
regularization parameter α are discussed there, too. For the sake of completeness we
provide the proof in the next section.
The constant β plays only a minor role. In principle we could hide it in the functional E†, but then E† would depend on the chosen index function ϕ and not solely on exact and regularized solutions. The implied convergence rate does not depend on β; only the O-constant contains the factor 1/β.
Variational source conditions originally were developed to obtain rates for Tikhonov-
type regularization, but can also be used in the context of other methods. See [22] for the
residual method and [18] for iteratively regularized Newton methods.
A major drawback of variational source conditions is that the best obtainable rate may
be slower than the best possible one. This is for instance the case for rates faster than O(√δ) in the classical linear Hilbert space setting, where the best one is O(δ^{2/3}). On the other hand, in ℓ¹-regularization rates up to the best possible one, O(δ), for the error norm can be obtained, see [23] and Sect. 11.3. An approach to overcome technical rate
limitations was undertaken in [24], but it is limited to linear equations.
2.2 Convergence Rates
In this section we prove that variational source conditions (2.1) imply rates (1.5). To choose the regularization parameter we consider an a priori parameter choice α = α(δ) specified below and an a posteriori parameter choice α = α(δ, y^δ) known as the discrepancy principle. The latter consists in choosing α such that

δ ≤ ‖F(x_α^δ) − y^δ‖ ≤ τ δ (2.2)

with some fixed τ ≥ 1.
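In the linear Hilbert space case the residual ‖F(x_α^δ) − y^δ‖ is monotonically increasing in α, so a parameter satisfying (2.2) can be located by bisection in log α. A sketch (Python; p = 2, Ω = ‖·‖², function names illustrative):

import numpy as np

def tikhonov_l2(A, y, alpha):
    # minimizer of (1/2) ||A x - y||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + 2 * alpha * np.eye(A.shape[1]), A.T @ y)

def alpha_discrepancy(A, y, delta, tau=2.0, lo=1e-14, hi=1e6, iters=200):
    # bisection in log(alpha): the residual grows with alpha, shrink the
    # bracket until delta <= ||A x_alpha - y|| <= tau * delta holds
    for _ in range(iters):
        alpha = np.sqrt(lo * hi)               # geometric midpoint
        r = np.linalg.norm(A @ tikhonov_l2(A, y, alpha) - y)
        if r < delta:
            lo = alpha                         # residual too small: increase alpha
        elif r > tau * delta:
            hi = alpha                         # residual too large: decrease alpha
        else:
            return alpha
    return alpha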
Proposition 2.2 Let the variational source condition (2.1) be satisfied with a concave index function ϕ and choose α in (1.4) such that

c1 δ^p/ϕ(δ) ≤ α ≤ c2 δ^p/ϕ(δ)

with constants 0 < c1 ≤ c2. Then

E†(x_α^δ) ≤ (1/β) (1/(p c1) + 1 + (1 + 2 p c2)^{1/(p−1)}) ϕ(δ).

Proof Since x_α^δ minimizes the Tikhonov functional (1.4), since ‖F(x†) − y^δ‖ ≤ δ for each Ω minimizing solution x†, and since x_α^δ lies in M, the variational source condition (2.1) yields

β E†(x_α^δ) ≤ (1/(p α)) (δ^p − ‖F(x_α^δ) − y^δ‖^p) + ϕ(‖F(x_α^δ) − F(x†)‖). (2.3)

If ‖F(x_α^δ) − y^δ‖ ≤ δ, then (2.3), the triangle inequality, the properties of ϕ and the parameter choice imply

‖F(x_α^δ) − y^δ‖^p ≤ δ^p + p α ϕ(2 δ) ≤ δ^p + 2 p c2 δ^p,

that is,

‖F(x_α^δ) − y^δ‖ ≤ (1 + 2 p c2)^{1/p} δ ≤ (1 + 2 p c2)^{1/(p−1)} δ.

If ‖F(x_α^δ) − y^δ‖ > δ, then (2.3), the nonnegativity of E† and the concavity of ϕ yield

‖F(x_α^δ) − y^δ‖^p ≤ δ^p + p α (‖F(x_α^δ) − y^δ‖ + δ) ϕ(δ)/δ ≤ δ^{p−1} ‖F(x_α^δ) − y^δ‖ + 2 p α (ϕ(δ)/δ) ‖F(x_α^δ) − y^δ‖

and thus,

‖F(x_α^δ) − y^δ‖ ≤ (δ^{p−1} + 2 p α ϕ(δ)/δ)^{1/(p−1)} ≤ (1 + 2 p c2)^{1/(p−1)} δ.

In both cases, (2.3), the triangle inequality and the monotonicity and concavity of ϕ imply

β E†(x_α^δ) ≤ (1/(p α)) (δ^p − ‖F(x_α^δ) − y^δ‖^p) + ϕ(‖F(x_α^δ) − y^δ‖ + δ)
≤ δ^p/(p α) + ϕ((1 + (1 + 2 p c2)^{1/(p−1)}) δ)
≤ δ^p/(p α) + (1 + (1 + 2 p c2)^{1/(p−1)}) ϕ(δ),

and the lower bound in the parameter choice gives

β E†(x_α^δ) ≤ ϕ(δ)/(p c1) + (1 + (1 + 2 p c2)^{1/(p−1)}) ϕ(δ).
Note that in the proof we used arguments similar to the ones in [21], but made changes in the details leading to a better constant in the obtained error estimate. The corresponding estimates in [21, Theorem 1] lead to

E†(x_α^δ) ≤ (1/β) (1 + 2 (2 + p)^{1/(p−1)}) ϕ(δ),

which has a greater constant factor than our estimate. Our estimate with the parameter choice from [21], that is, c1 = c2 = 1, reads

E†(x_α^δ) ≤ (1/β) (1 + 1/p + (1 + 2 p)^{1/(p−1)}) ϕ(δ).
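The a priori choice of Proposition 2.2 can be checked empirically in the classical Hilbert space setting: for p = 2, a benchmark solution x† = A*w satisfies (2.1) with ϕ(t) proportional to t (see the sketch after Definition 2.1), so α ~ δ^p/ϕ(δ) ~ δ, and the quotient E†(x_α^δ)/ϕ(δ) = ‖x_α^δ − x†‖²/δ should stay bounded as δ → 0. A small experiment (Python; a sketch with illustrative names, not from the text):

import numpy as np

rng = np.random.default_rng(1)
n = 200
A = np.diag(1.0 / np.arange(1, n + 1))
w = rng.standard_normal(n)
x_dag = A.T @ w                        # benchmark solution x = A* w
y_exact = A @ x_dag

for delta in [1e-2, 1e-3, 1e-4, 1e-5]:
    noise = rng.standard_normal(n)
    y_noisy = y_exact + delta * noise / np.linalg.norm(noise)
    alpha = delta                      # a priori choice alpha ~ delta^p / phi(delta)
    x_alpha = np.linalg.solve(A.T @ A + 2 * alpha * np.eye(n), A.T @ y_noisy)
    print(delta, np.linalg.norm(x_alpha - x_dag) ** 2 / delta)   # stays bounded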
Proposition 2.3 Let the variational source condition (2.1) be satisfied with a concave index function ϕ and choose α in (1.4) according to the discrepancy principle (2.2). Then

E†(x_α^δ) ≤ ((1 + τ)/β) ϕ(δ).
Proof Because x_α^δ is a minimizer of (1.4), for an arbitrary Ω minimizing solution x† to (1.1) we have

Ω(x_α^δ) − Ω† = (1/α) (Tα(x_α^δ, y^δ) − α Ω(x†) − (1/p) ‖F(x_α^δ) − y^δ‖^p)
≤ (1/α) ((1/p) ‖F(x†) − y^δ‖^p − (1/p) ‖F(x_α^δ) − y^δ‖^p)
≤ (1/α) ((1/p) δ^p − (1/p) ‖F(x_α^δ) − y^δ‖^p),

and the lower bound in the discrepancy principle (2.2) shows

Ω(x_α^δ) − Ω(x†) ≤ 0.

Together with the variational source condition (2.1), the upper bound in (2.2) and the concavity of ϕ we obtain

β E†(x_α^δ) ≤ Ω(x_α^δ) − Ω† + ϕ(‖F(x_α^δ) − F(x†)‖) ≤ ϕ(‖F(x_α^δ) − y^δ‖ + δ) ≤ ϕ((1 + τ) δ) ≤ (1 + τ) ϕ(δ).