

REGULARIZATION IN IMAGE RECONSTRUCTION

CHARLES L. BYRNE

Abstract. When we use iterative algorithms such as the SMART, the EMML algorithm, or the projected Landweber algorithm to reconstruct an image from measured data, only one of two things can happen in the limit, neither of them good: either we overfit noisy data to a theoretical model, or we get a night-sky image that has fewer positive pixels than there are data values. The problem is not with the algorithms, but with the objective functions these algorithms are designed to minimize. We improve the situation by modifying the objective function; this is regularization. We find that we can still use the SMART and the EMML algorithm to obtain regularized solutions, simply by changing the underlying system of linear equations.

1. Overview
To reconstruct a digitized image from data we often use more pixels than we have data values, in the hope of improving resolution. When we use iterative algorithms such as the SMART (simultaneous multiplicative algebraic reconstruction technique), the EMML (expectation maximization maximum likelihood) algorithm, or the projected Landweber algorithm for this purpose, only one of two things can happen in the limit, neither of them good: either we overfit noisy data to a theoretical model, or we get a night-sky image that has fewer positive pixels than there are data values. Of course we can halt the iteration before the images we get begin to resemble the useless limiting image. But this just treats a symptom and does not address the real problem. The problem, as we discussed in [4], lies not with the iterative algorithms themselves, but with the objective functions these algorithms minimize. What we need to do is to change the objective function; that is, we need to do regularization. In this note I present iterative algorithms that minimize the new objective functions in such a way that each new iterate can be obtained in closed form from the previous one.

2. Examples of Modified Objective Functions


In this section we consider three examples of modified objective functions.
2.1. Least-squares Solutions. Suppose we want to find an exact or least-
squares solution of a linear system Ax = b. If the ratio of the largest to the smallest eigenvalue of A^T A is large, then the system is ill-conditioned: it is overly sensitive to noise in the data vector b, and small changes in b can lead to large changes in the solution. In such cases it often happens that the

computed solution has an unrealistically large norm. One way to combat this is to minimize f(x) = (1/2)||Ax − b||^2 + (ε/2)||x||^2, for some small positive ε. This has several effects: it increases the cost of a large norm; it guarantees that no eigenvalue of the matrix (A^T A + εI) is smaller than ε; and it therefore reduces the ill-conditioning of the problem.
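To see the effect numerically, here is a minimal NumPy sketch; the matrix A, the noise level, and the value of ε below are illustrative choices of mine, not taken from this note.

```python
import numpy as np

rng = np.random.default_rng(0)

# An ill-conditioned system: the two columns of A are nearly identical.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([1.0, 2.0])
b = A @ x_true + 0.01 * rng.standard_normal(3)   # noisy data vector b

# Plain least squares: solve the normal equations A^T A x = A^T b.
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Regularized: minimize (1/2)||Ax - b||^2 + (eps/2)||x||^2,
# whose minimizer solves (A^T A + eps I) x = A^T b.
eps = 1e-3
x_reg = np.linalg.solve(A.T @ A + eps * np.eye(2), A.T @ b)

print("norm of least-squares solution:", np.linalg.norm(x_ls))
print("norm of regularized solution:  ", np.linalg.norm(x_reg))
```

Comparing the two printed norms for such nearly dependent columns illustrates how the ε term penalizes the large-norm solutions that the noisy, ill-conditioned problem would otherwise produce.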

2.2. The SMART and the EMML. Let y ∈ R^I be a positive vector and P an I by J matrix with positive entries, whose columns each sum to one.
Finding an exact or approximate nonnegative solution of the linear system
y = P x is the goal when we use the SMART, as it is also when we use the
EMML algorithm.
As we saw in [6], the SMART iterates converge to a nonnegative minimizer
of the Kullback–Leibler distance KL(P x, y). For the benefit of the reader,
for a > 0 and b > 0 the Kullback–Leibler distance from a to b is KL(a, b) =
a log(a/b) + b − a, with KL(0, b) = b and KL(a, 0) = +∞. The KL distance
is then extended to vectors component-wise [7]. To avoid unpleasant limiting
images we minimize αKL(P x, y) + (1 − α)KL(x, p), for some α in the interval (0, 1). Here p is a positive vector with p_+ = Σ_{j=1}^{J} p_j = y_+ = Σ_{i=1}^{I} y_i.
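As a concrete illustration of these quantities, here is a small NumPy sketch of the KL distance (extended to nonnegative vectors by summing component-wise) and of the regularized SMART objective; the function names are mine, not the paper's.

```python
import numpy as np

def kl(a, b):
    """Component-wise Kullback-Leibler distance, summed over the entries:
    KL(a, b) = sum_n [ a_n log(a_n / b_n) + b_n - a_n ], with KL(0, b_n) = b_n.
    Assumes b has positive entries and a has nonnegative entries."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    terms = b.copy()                       # entries with a_n = 0 contribute b_n
    pos = a > 0
    terms[pos] = a[pos] * np.log(a[pos] / b[pos]) + b[pos] - a[pos]
    return terms.sum()

def smart_reg_objective(x, P, y, p, alpha):
    """The regularized SMART objective: alpha*KL(Px, y) + (1 - alpha)*KL(x, p)."""
    return alpha * kl(P @ x, y) + (1 - alpha) * kl(x, p)
```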

As we saw in [5], the EMML iterates converge to a nonnegative minimizer


of the Kullback–Leibler distance KL(y, P x). We can regularize the problem
by minimizing the new objective function αKL(y, P x) + (1 − α)KL(p, x).
In the next section we shall show that these new objective functions can be minimized using the same SMART and EMML algorithms, if only we apply them to a new linear system of equations.
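The regularized EMML objective is handled the same way, with the KL arguments reversed; this short sketch reuses the kl function above (again, the name is mine).

```python
def emml_reg_objective(x, P, y, p, alpha):
    """The regularized EMML objective: alpha*KL(y, Px) + (1 - alpha)*KL(p, x)."""
    return alpha * kl(y, P @ x) + (1 - alpha) * kl(p, x)
```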

3. Iterative Algorithms for Regularization


In [1] I proved convergence of iterative algorithms for the regularized
SMART and EMML algorithm by mimicking the proofs given there for the
SMART and the EMML algorithm. One of the nice things that happens
when I revisit earlier work is that I see things I missed before. That happened
here just this morning.

3.1. The New Linear System of Equations. Instead of the system y =


P x, consider the system T x = w, with

T = [ αP ; (1 − α)I ]

and

w = [ αy ; (1 − α)p ],

where the semicolons indicate vertical stacking: T is the (I + J) by J matrix with αP on top of the J by J matrix (1 − α)I, and w is the vector with αy on top of (1 − α)p. Note that the columns of T, like those of P, sum to one, since α + (1 − α) = 1.
All we have to do now is to apply the usual SMART and EMML algorithm
to the system T x = w instead of y = P x.
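As a sketch of what applying the usual algorithms to T x = w means computationally, the stacked system can be assembled as follows (NumPy, with names of my choosing):

```python
import numpy as np

def build_regularized_system(P, y, p, alpha):
    """Stack alpha*P on top of (1 - alpha)*I, and alpha*y on top of (1 - alpha)*p.
    If the columns of P sum to one, so do the columns of T."""
    J = P.shape[1]
    T = np.vstack([alpha * P, (1 - alpha) * np.eye(J)])
    w = np.concatenate([alpha * np.asarray(y, dtype=float),
                        (1 - alpha) * np.asarray(p, dtype=float)])
    return T, w
```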

3.2. The New Iterative Algorithms. The iterative step of the SMART,
applied to y = P x, is x^{k+1} = Sx^k, where S is the operator

(3.1)    (Sx)_j = x_j exp( Σ_{i=1}^{I} P_{i,j} log( y_i / (P x)_i ) ).
The iterative step of the EMML algorithm is x^{k+1} = M x^k, where M is the operator

(3.2)    (M x)_j = x_j Σ_{i=1}^{I} P_{i,j} ( y_i / (P x)_i ).
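In code, the operators (3.1) and (3.2) are one line each; this NumPy sketch assumes, as in the text, that P has positive entries with columns summing to one and that x and y are positive vectors.

```python
import numpy as np

def smart_step(x, P, y):
    """One SMART step (3.1): (Sx)_j = x_j * exp( sum_i P_ij * log(y_i / (Px)_i) )."""
    Px = P @ x
    return x * np.exp(P.T @ np.log(y / Px))

def emml_step(x, P, y):
    """One EMML step (3.2): (Mx)_j = x_j * sum_i P_ij * (y_i / (Px)_i)."""
    Px = P @ x
    return x * (P.T @ (y / Px))
```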
When we apply the SMART to the new system T x = w we have the regularized SMART, with the iterative step x^{k+1} = S_r(x^k), where S_r is the regularized SMART operator

(3.3)    S_r x = (Sx)^α (p)^{1−α},

the powers and the product being taken entry-wise.
Similarly, when we apply the EMML algorithm to the new linear system we have the regularized EMML algorithm, with the iterative step x^{k+1} = M_r(x^k), where M_r is the operator

(3.4)    M_r x = αM x + (1 − α)p.
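Below is a sketch of the regularized steps (3.3) and (3.4), together with a small numerical check that they coincide with running the unregularized steps on the stacked system T x = w. It reuses smart_step, emml_step, and build_regularized_system from the sketches above, and the toy data are my own.

```python
import numpy as np

def smart_reg_step(x, P, y, p, alpha):
    """One regularized SMART step (3.3): S_r x = (Sx)^alpha * p^(1 - alpha), entry-wise."""
    return smart_step(x, P, y) ** alpha * p ** (1 - alpha)

def emml_reg_step(x, P, y, p, alpha):
    """One regularized EMML step (3.4): M_r x = alpha * Mx + (1 - alpha) * p."""
    return alpha * emml_step(x, P, y) + (1 - alpha) * p

# Toy check of the equivalence claimed in the text.
rng = np.random.default_rng(1)
I_rows, J = 4, 6
P = rng.random((I_rows, J)) + 0.1
P = P / P.sum(axis=0)                 # columns sum to one
y = rng.random(I_rows) + 0.1
p = rng.random(J) + 0.1
p = p * (y.sum() / p.sum())           # p_+ = y_+
x0 = np.ones(J)
alpha = 0.7

T, w = build_regularized_system(P, y, p, alpha)
print(np.allclose(smart_reg_step(x0, P, y, p, alpha), smart_step(x0, T, w)))  # True
print(np.allclose(emml_reg_step(x0, P, y, p, alpha), emml_step(x0, T, w)))    # True
```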
Note that there will be an exact nonnegative solution of y = P x if there is an exact nonnegative solution of T x = w; indeed, an exact nonnegative solution of T x = w must satisfy both P x = y and x = p. If there is no exact nonnegative solution of T x = w, we do not get another night-sky limiting image, because the number of pixels, J, is smaller than the number of rows of T, which is I + J, even when J > I.

References
[1] Byrne, C. (1993) “Iterative image reconstruction algorithms based on cross-entropy minimization.” IEEE Transactions on Image Processing IP-2, pp. 96–103.
[2] Byrne, C. (1995) “Erratum and addendum to ‘Iterative image reconstruction algorithms based on cross-entropy minimization’.” IEEE Transactions on Image Processing IP-4, pp. 225–226.
[3] Byrne, C. (1996) “Iterative reconstruction algorithms based on cross-entropy minimization.” In Image Models (and their Speech Model Cousins), S.E. Levinson and L. Shepp, editors, IMA Volumes in Mathematics and its Applications, Volume 80, pp. 1–11. New York: Springer-Verlag.
[4] Byrne, C. (2020) “Three night-sky theorems.” Posted on ResearchGate, February 18, 2020.
[5] Byrne, C. (2020) “An elementary convergence proof for imaging in SPECT.” Posted on ResearchGate, October 16, 2020.
[6] Byrne, C. (2020) “Shannon Entropy Maximization and the Simultaneous MART.” Posted on ResearchGate, October 18, 2020.
[7] Kullback, S. and Leibler, R. (1951) “On information and sufficiency.” Annals of Mathematical Statistics 22, pp. 79–86.

(C. Byrne) Department of Mathematical Sciences, University of Massachusetts Lowell, Lowell, MA, USA
E-mail address: Charles_Byrne@uml.edu
