Regularisation in Image Reconstruction
CHARLES L. BYRNE
University of Massachusetts Lowell
Posted on ResearchGate, 19 October 2020
1. Overview
To reconstruct a digitized image from data we often use more pixels than we have data values, in the hope of improving resolution. When we use iterative algorithms such as the SMART, the EMML algorithm, or the projected Landweber algorithm for this purpose, only one of two things can happen in the limit, neither of them good: either we overfit noisy data to a theoretical model, or we get a night-sky image that has fewer positive pixels than there are data values. Of course we can halt the iteration before the images begin to resemble the useless limiting image, but this merely treats a symptom and does not address the real problem. The problem, as we discussed in [4], lies not with the iterative algorithms themselves, but with the objective functions these algorithms minimize. What we need to do is change the objective function; that is, we need to regularize. In this note I present iterative algorithms that minimize the new objective functions in such a way that each new iterate can be obtained in closed form from the previous one.
2.2. The SMART and the EMML. Let y ∈ R^I be a positive vector and P an I by J matrix with positive entries, whose columns each sum to one. Finding an exact or approximate nonnegative solution of the linear system y = P x is the goal when we use the SMART, as it is also when we use the EMML algorithm.
As we saw in [6], the SMART iterates converge to a nonnegative minimizer
of the Kullback–Leibler distance KL(P x, y). For the benefit of the reader,
for a > 0 and b > 0 the Kullback–Leibler distance from a to b is KL(a, b) =
a log(a/b) + b − a, with KL(0, b) = b and KL(a, 0) = +∞. The KL distance
is then extended to vectors component-wise [7]. To avoid unpleasant limiting
images we minimize αKL(P x, y) + (1 − α)KL(x, p), for some α in the interval (0, 1). Here p is a positive vector with p_+ = Σ_{j=1}^J p_j = y_+ = Σ_{i=1}^I y_i. We stack the two terms into a single KL distance: let T be the (I + J) by J matrix and w the positive vector in R^{I+J} given by

    T = [ αP ; (1 − α)I ],    w = [ αy ; (1 − α)p ].

Since KL(ca, cb) = c KL(a, b) for c > 0, we have KL(T x, w) = αKL(P x, y) + (1 − α)KL(x, p).
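The stacking identity can be checked numerically. The sketch below is my own NumPy illustration, not from the text; the names kl, T, and w are chosen to match the discussion. It builds T and w from a small random example with p_+ = y_+ and verifies that KL(T x, w) = αKL(P x, y) + (1 − α)KL(x, p):

```python
import numpy as np

def kl(a, b):
    """Kullback-Leibler distance between positive vectors, summed component-wise."""
    return float(np.sum(a * np.log(a / b) + b - a))

rng = np.random.default_rng(0)
I, J, alpha = 4, 6, 0.3
P = rng.random((I, J)) + 0.1
P /= P.sum(axis=0)                  # columns of P sum to one
y = rng.random(I) + 0.1
p = rng.random(J) + 0.1
p *= y.sum() / p.sum()              # enforce p_+ = y_+

# stacked matrix and data vector
T = np.vstack([alpha * P, (1 - alpha) * np.eye(J)])
w = np.concatenate([alpha * y, (1 - alpha) * p])

x = rng.random(J) + 0.1             # any positive x
lhs = kl(T @ x, w)
rhs = alpha * kl(P @ x, y) + (1 - alpha) * kl(x, p)
```

Note that the columns of T also sum to one, since each column of P sums to one: α·1 + (1 − α)·1 = 1.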
All we have to do now is apply the usual SMART and EMML algorithms to the system T x = w instead of y = P x.
3.2. The New Iterative Algorithms. The iterative step of the SMART, applied to y = P x, is x^{k+1} = S x^k, where S is the operator

(3.1)    (Sx)_j = x_j exp( Σ_{i=1}^I P_{i,j} log( y_i / (P x)_i ) ).
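As an illustration only (not code from the text), one SMART step (3.1) for a nonnegative system with unit column sums can be sketched as below; applied to the stacked system T x = w, whose columns also sum to one, the iterates stay positive and drive the regularized objective down:

```python
import numpy as np

def kl(a, b):
    """Kullback-Leibler distance between positive vectors, summed component-wise."""
    return float(np.sum(a * np.log(a / b) + b - a))

def smart_step(x, A, b):
    """One SMART update: (Sx)_j = x_j * exp( sum_i A_ij * log(b_i / (Ax)_i) )."""
    return x * np.exp(A.T @ np.log(b / (A @ x)))

rng = np.random.default_rng(1)
I, J, alpha = 4, 6, 0.3
P = rng.random((I, J)) + 0.1
P /= P.sum(axis=0)                  # unit column sums
y = rng.random(I) + 0.1
p = rng.random(J) + 0.1
p *= y.sum() / p.sum()              # p_+ = y_+

T = np.vstack([alpha * P, (1 - alpha) * np.eye(J)])
w = np.concatenate([alpha * y, (1 - alpha) * p])

x = np.full(J, y.sum() / J)         # positive starting vector
start = kl(T @ x, w)
for _ in range(200):
    x = smart_step(x, T, w)
end = kl(T @ x, w)
```

Each iterate is obtained in closed form from the previous one, as promised in the Overview; only elementwise logs, exponentials, and one matrix-vector product per step are needed.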
References
[1] Byrne, C. (1993) “Iterative image reconstruction algorithms based on cross-entropy minimization.” IEEE Transactions on Image Processing IP-2, pp. 96–103.
[2] Byrne, C. (1995) “Erratum and addendum to ‘Iterative image reconstruction algorithms based on cross-entropy minimization’.” IEEE Transactions on Image Processing IP-4, pp. 225–226.
[3] Byrne, C. (1996) “Iterative reconstruction algorithms based on cross-entropy minimization.” In Image Models (and their Speech Model Cousins), S.E. Levinson and L. Shepp, editors, IMA Volumes in Mathematics and its Applications, Volume 80, pp. 1–11. New York: Springer-Verlag.
[4] Byrne, C. (2020) “Three night-sky theorems.” Posted on ResearchGate, February 18, 2020.
[5] Byrne, C. (2020) “An elementary convergence proof for imaging in SPECT.” Posted on ResearchGate, October 16, 2020.
[6] Byrne, C. (2020) “Shannon Entropy Maximization and the Simultaneous MART.” Posted on ResearchGate, October 18, 2020.
[7] Kullback, S. and Leibler, R. (1951) “On information and sufficiency.” Annals of Mathematical Statistics 22, pp. 79–86.