
2011 IEEE Nuclear Science Symposium Conference Record MIC21.S-183

Generalization of the Image Space Reconstruction Algorithm

Andrew J. Reader, Etienne Letourneau and Jeroen Verhaeghe

Abstract— The image space reconstruction algorithm (ISRA) has been shown to be a non-negative least squares estimator, and was introduced as an alternative iterative image reconstruction method for positron emission tomography (PET) data. The implementation of ISRA is straightforward: the ratio of the backprojected measured data to that of the backprojected expected data is used to multiplicatively update the current image estimate. This work starts with a modified weighted least squares objective function to derive a more general form of the ISRA algorithm, which importantly accommodates weighting of the backprojection. Simply by changing the choice of backprojection weighting factors at a given iteration, both the well-known ML-EM (maximum likelihood expectation maximization) algorithm and the standard ISRA are obtained as special cases. ML-EM corresponds to using the current estimate of the expected data as the weights for backprojection, and ISRA corresponds to the case of unit weighting during backprojection. Of particular interest, however, is that the framework naturally suggests the existence of many alternative reconstruction algorithms through alternative data weighting choices. By changing the weighting factors, a performance improvement over ISRA is obtained, as well as a slight performance improvement compared to ML-EM (for the task of accurate region quantification which is considered in this work). Specifically, these improvements are obtained, for example, by using a spatially-smoothed copy of the measured data as weighting factors during backprojection.

Manuscript received November 15, 2011. A. J. Reader, E. Letourneau and J. Verhaeghe are with the Montreal Neurological Institute, McGill University, Montreal, Quebec, H3A 2B4, Canada (e-mail: andrew.reader@mcgill.ca). Part of this work was supported by NSERC grant number 387067-10.

I. INTRODUCTION

ITERATIVE image reconstruction methods for positron emission tomography (PET) data have two main advantages over methods such as filtered backprojection (FBP): i) iterative methods allow easy incorporation of more accurate models of the imaging process, and ii) an appropriate statistical model for the noise in the data can be used. The most popular method has been the maximum likelihood expectation maximization (ML-EM) algorithm [1; 2], based on a Poisson noise model of the acquired PET data. ML-EM is simple to implement (only needing a forward and back projection algorithm) and demonstrates robustness and non-erratic behaviour [3] as it converges towards the ML estimate. Alternative methods, such as ISRA [4], also allow any system model to be used, but use a least-squares objective rather than a maximum likelihood objective. This work looks into the link between ML-EM and ISRA, and proposes a more general iterative image reconstruction algorithm.

II. THEORY

Consider the following weighted least-squares (WLS) objective function, dependent on the I-dimensional PET measured data vector m and the J-dimensional image estimate vector x^(k):

\Phi_{\mathrm{WLS}}(\mathbf{x}^{(k)}) = \frac{1}{2} \sum_{i=1}^{I} \frac{(m_i - q_i^{(k)})^2}{w_i}    (1)

where the expected data from a given image estimate x^(k) are given by

q_i^{(k)} = \sum_{j=1}^{J} a_{ij} x_j^{(k)} + \eta_i    (2)

with the imaging system model being given by the matrix A = {a_ij}_{I×J}, where a_ij is the probability that a positron emitted from voxel j results in an event being registered in sinogram bin i. The term η_i accounts for the scatter and randoms events encountered in PET, and the weights w in (1) will be regarded for the moment as a fixed vector.

Consider now the case of seeking a better estimate of x, one which reduces the evaluation of the objective function (1). This can be achieved by considering the gradient of the objective function:

\frac{\partial}{\partial x_j} \Phi_{\mathrm{WLS}}(\mathbf{x}^{(k)}) = -\sum_{i=1}^{I} \frac{a_{ij} (m_i - q_i^{(k)})}{w_i}    (3)

The right-hand side of equation (3) can be recognized as an image which is the backprojection of weighted expected data minus the backprojection of the weighted measured data:

\frac{\partial}{\partial x_j} \Phi_{\mathrm{WLS}}(\mathbf{x}^{(k)}) = \sum_{i=1}^{I} a_{ij} \frac{q_i^{(k)}}{w_i} - \sum_{i=1}^{I} a_{ij} \frac{m_i}{w_i}    (4)

Knowing the derivative, a simple iterative algorithm to minimize the WLS objective (1) can be derived by just subtracting a variably-scaled amount of this gradient image:

x_j^{(k+1)} = x_j^{(k)} - \tau_j^{(k)} \left( \sum_{i=1}^{I} a_{ij} \frac{q_i^{(k)}}{w_i} - \sum_{i=1}^{I} a_{ij} \frac{m_i}{w_i} \right)    (5)

If for a given iteration k the following scaling (step size for each voxel) is chosen:

\tau_j^{(k)} = \frac{x_j^{(k)}}{\sum_{i=1}^{I} a_{ij} q_i^{(k)} / w_i}    (6)

the following iterative update is obtained.

978-1-4673-0120-6/11/$26.00 ©2011 IEEE
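The derivation from the objective (1) to the gradient in (4) can be sketched numerically. The following is a minimal illustration with a small dense matrix standing in for the system model; the function names are invented for illustration and this is not the authors' implementation.

```python
import numpy as np

def expected_data(A, x, eta=0.0):
    """Expected data q_i of equation (2): forward projection plus scatter/randoms."""
    return A @ x + eta

def wls_objective(A, x, m, w, eta=0.0):
    """Weighted least-squares objective of equation (1)."""
    q = expected_data(A, x, eta)
    return 0.5 * np.sum((m - q) ** 2 / w)

def wls_gradient(A, x, m, w, eta=0.0):
    """Gradient of equations (3)/(4): the backprojection of the weighted
    expected data minus the backprojection of the weighted measured data."""
    q = expected_data(A, x, eta)
    return A.T @ (q / w) - A.T @ (m / w)
```

A finite-difference check of `wls_gradient` against `wls_objective` confirms that (4) is indeed the gradient of (1).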
x_j^{(k+1)} = x_j^{(k)} \frac{\sum_{i=1}^{I} a_{ij} m_i / w_i}{\sum_{i=1}^{I} a_{ij} q_i^{(k)} / w_i}    (7)

This update equation, derived from the WLS objective (1), shows that an update of x^(k) is obtained just by using a ratio of weighted backprojections. This updating method is in fact a generalization of a number of algorithms, merely through the choice of the weighting factors w. Two special cases are considered below.

A. Case 1: the ML-EM Algorithm

If at a given update iteration k the weights are set as w_i = q_i^(k) (i.e. iteration-dependent weights), then the well-known ML-EM algorithm [1; 2] is obtained:

x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_{i=1}^{I} a_{ij}} \sum_{i=1}^{I} a_{ij} \frac{m_i}{q_i^{(k)}}    (8)

which can be regarded as a least-squares update with iteration-dependent weights. Note the perspective this gives: the ML-EM update is just a multiplication by a ratio of weighted backprojected images. It is well established that ML-EM converges to a maximum of a Poisson likelihood objective, and therefore equation (7) also converges for this particular choice of iteration-dependent w^(k).

B. Case 2: the Image Space Reconstruction Algorithm

If the weights are chosen to be w_i^(k) = 1 then ISRA [4] is obtained:

x_j^{(k+1)} = x_j^{(k)} \frac{\sum_{i=1}^{I} a_{ij} m_i}{\sum_{i=1}^{I} a_{ij} q_i^{(k)}}    (9)

It has already been established in the literature that this algorithm (9) converges to a non-negative least squares estimate [5; 6; 7; 8], and so again equation (7) also converges for this particular choice of w.

C. New Cases

Hence two special cases have been considered, merely through the choice of weighting factors (at a given iteration k) used during backprojection. In addition to this, this work considers the case of using the measured sinogram data (with optional spatial smoothing) as weights during backprojection in the general updating formula (7). The case of using a (possibly smoothed) copy of the expected data q as the weights is also considered here.

First, if w_i = m_i is used in (7) then what might be viewed as a "reciprocal EM" algorithm is obtained, which uses the noisy measured data as an approximation for the variance of the Poisson distributed measured data:

x_j^{(k+1)} = x_j^{(k)} \frac{\sum_{i=1}^{I} a_{ij}}{\sum_{i=1}^{I} a_{ij} q_i^{(k)} / m_i}    (10)

This algorithm will be labelled as M-ISRA (measured data weighted ISRA). Alternatively, one can use weights which are a smoothed copy of the measured data m, optionally also including a smoothed copy of q^(k):

x_j^{(k+1)} = x_j^{(k)} \frac{\sum_{i=1}^{I} a_{ij} m_i / w_i^{(k)}}{\sum_{i=1}^{I} a_{ij} q_i^{(k)} / w_i^{(k)}}, \quad w_i^{(k)} = \sum_{d=1}^{I} \alpha_{id} q_d^{(k)} + \sum_{d=1}^{I} \beta_{id} m_d    (11)

where the matrix elements {α_id} and {β_id} apply (optional) smoothing to q^(k) and m respectively. So when α=0, M-ISRA (with or without smoothed weights) is obtained, and when β=0, Q-ISRA (with or without smoothed weights) is obtained. Equation (11) allows various combinations of m and q, to give MQ-ISRA.

Just as an aside, w could also include a vector of offsets γ to provide another new algorithm, very similar to ML-EM, if w = q^(k) + γ is chosen. The rationale for such a choice is that the resulting algorithm is guaranteed to avoid divisions by zero in sinogram space, which is a problem that has to be artificially rectified with conventional implementations of ML-EM.

Finally, it is worth noting the work of Anderson et al. [9] and also that of Teng and Zhang [10], who have developed similar methods. The key distinction in the present work is the consideration of different data weighting schemes and their impact.

III. METHODS

2D simulations of a 256×256 pixel slice from a realistic 3D brain phantom [11] (shown later in figure 7) were used to assess the different reconstruction methods. A system matrix based on a line-integral model was calculated and stored, and used for all forward and backprojection operations. The ground truth brain phantom (which was first scaled according to the desired mean count level) was forward projected to obtain a noise-free 2D sinogram for the given scaling (mean count) level. The various scaling levels of the ground truth image were used to control the noise level: the noise-free sinogram for a given scaling level was used as the mean from which 100 different Poisson noise realizations of the sinogram were generated. In this work ten different scaling levels were considered to allow consideration of ten different noise levels (mean count levels ranging from 1.75k mean counts in a sinogram through to 350k mean counts).
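Equations (7)-(10) differ only in the weight vector supplied to a single multiplicative update. The unifying view can be sketched as follows, with invented function names and a dense matrix standing in for the system model (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def generalized_update(A, x, m, w, eta=0.0):
    """One iteration of equation (7):
    x_j <- x_j * [sum_i a_ij m_i / w_i] / [sum_i a_ij q_i / w_i]."""
    q = A @ x + eta                          # expected data, equation (2)
    return x * (A.T @ (m / w)) / (A.T @ (q / w))

def reconstruct(A, m, weighting, n_iter=200, eta=0.0):
    """Special cases of (7) through the weight choice:
    'mlem'  -> w = q^(k)  (equation (8), iteration-dependent weights)
    'isra'  -> w = 1      (equation (9))
    'misra' -> w = m      (equation (10), measured-data weighted ISRA)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        q = A @ x + eta
        w = {"mlem": q, "isra": np.ones_like(m), "misra": m}[weighting]
        x = generalized_update(A, x, m, w, eta)
    return x
```

On noise-free, consistent data all three weightings drive the estimate towards the same solution; they differ in their behaviour on noisy data, which is what Sections III and IV investigate.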

Furthermore, prior to forward projection of the scaled ground truth brain phantom, a Gaussian convolution kernel (σ=1.5 pixels, ~3.5 pixels full width at half maximum) was applied to simulate limited system resolution. To compensate for this during image reconstruction, most reconstruction methods implemented for this work used a PSF (point spread function) convolution as the first component of the factorized system matrix (see e.g. [12]), such that the tomographic reconstruction also implicitly included recovery of resolution. Hence the system matrix was taken to be A = XH, where X performs the line integrals through an image estimate to deliver sinogram bin values, and H performs a convolution. To reduce Gibbs ringing, a Gaussian of σ=1.2 pixels was used for H.

Each noisy sinogram for each scaling level was reconstructed by ten different image reconstruction methods:

1) ML-EM (w^(k)=q^(k))
2) ML-EM with PSF (w^(k)=q^(k))
3) ISRA with PSF (w^(k)=1)
4) M-ISRA with PSF (w^(k)=m)
5) Q-ISRA with PSF (w^(k)=q^(k)) (should match method 2)
6) (0.5M+0.5Q)-ISRA with PSF (w^(k)=0.5m + 0.5q^(k))
7) (0.5M+0.5Q, 2S)-ISRA with PSF, using the same sinogram weights as method 6, but with the weights smoothed by a Gaussian convolution kernel (σ=2 sinogram bins)
8) (M, 2S)-ISRA with PSF, using the same sinogram weights as method 4, but with the weights smoothed by a Gaussian convolution kernel (σ=2 bins) (i.e. w_i^(k) = Σ_h β_ih m_h, where the matrix elements {β_ih} correspond to applying a Gaussian convolution kernel with σ=2 bins)
9) (Q, 2S)-ISRA with PSF, using the same sinogram weights as method 5, but with the weights smoothed by a Gaussian convolution kernel (σ=2 bins) (i.e. w_i^(k) = Σ_h α_ih q_h^(k), where the matrix elements {α_ih} correspond to applying a Gaussian convolution kernel with σ=2 bins)
10) Filtered backprojection (FBP), ramp filtered, with post-reconstruction smoothing applied iteratively, to provide 120 increasingly smoothed image estimates (an image-space Gaussian convolution kernel (σ=0.6 pixels), defined on a 15×15 pixel neighbourhood, was applied for each iteration)

For the iterative methods (methods 1 to 9), 120 iterations were carried out. As indicated, for FBP, iterative post-smoothing was applied to provide a wide range of images with steadily decreasing levels of image noise (and corresponding resolution degradation).

As mentioned, each image reconstruction method was carried out on 100 realizations of data for 10 different count levels (ranging from 1.75k mean counts up to 350k mean counts in the sinogram). Knowing the ground truth in each case, the mean absolute error (MAE) (calculated across realizations for a pixel or region of interest (ROI)) for each method for each count level was found, along with measures of bias and variance. All these measures were carried out at the pixel level, as well as at the ROI level (regions including and excluding edges). Since parametric imaging is of interest to this work, the MAE for a region was calculated as the average pixel-level error for the region:

\mathrm{MAE}_{\mathrm{ROI}}^{(k)} = \frac{1}{P} \sum_{p \in \mathrm{ROI}} \frac{1}{N} \sum_{n=1}^{N} \left| m_{np}^{(k)} - t_p \right|    (12)

where m_np^(k) is the reconstructed ("measured") value for a pixel p (for all P pixels within an ROI) for realisation n at iteration k, and t_p is the true value for pixel p. It is very important to note that for this work, the pixel-level MAE is regarded as the main metric of interest, which is geared towards the task of accurate quantification of radiotracer concentration in each pixel, with a view to producing accurate parametric images of function.

Also of interest, but not directly related to the task of interest, are bias and variance:

\mathrm{PixelBias}_{\mathrm{ROI}}^{(k)} = \frac{1}{P} \sum_{p \in \mathrm{ROI}} \frac{1}{N} \sum_{n=1}^{N} \left( m_{np}^{(k)} - t_p \right)    (13)

where PixelBias_ROI is the average pixel-level bias in the ROI. Finally, the average pixel-level variance in the ROI is given by:

\mathrm{PixelVar}_{\mathrm{ROI}}^{(k)} = \frac{1}{P} \sum_{p \in \mathrm{ROI}} \frac{1}{N-1} \sum_{n=1}^{N} \left( m_{np}^{(k)} - \bar{m}_p^{(k)} \right)^2    (14)

where m̄_p^(k) is the mean over the N realisations for pixel p.

A best (i.e. lowest achieved) MAE analysis as a function of count level was carried out for each reconstruction algorithm (i.e. finding the iteration number which delivers the lowest MAE, thus showing each method at its best performance for the task of accurate pixel-level quantification). In addition to MAE calculations, bias-variance curves were generated using the aforementioned pixel-level equations. The bias-variance curves for each reconstruction algorithm were found for the conventional case of showing bias and variance of selected iterates to form a bias-variance curve of iterates for each algorithm.

IV. RESULTS

Figure 1 shows the average pixel-level MAE as a function of iteration for all reconstruction methods, for three example ROIs. From these examples it can be noted that different methods give the lowest MAE, according to the count level or ROI (no single algorithm proves to be the best for the cases considered). FBP performs very well indeed for the task of delivering the lowest MAE, thanks to the very large number of incremental post-smoothing levels applied, allowing FBP to be shown at its best in terms of delivering its lowest MAE for a given uniform region at a given count level. Furthermore, it is important to note that amongst the iterative methods, ML-EM does not always deliver the lowest MAE.
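Methods 7 to 9 above replace the raw sinogram weights with Gaussian-smoothed copies (σ=2 sinogram bins). The paper does not spell out the smoothing implementation; the sketch below assumes a 1D Gaussian convolution along each sinogram row, with invented helper names:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Discrete Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = int(3 * sigma + 0.5)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def smoothed_weights(m, sigma=2.0):
    """Weights for (M, 2S)-ISRA: w_i = sum_h beta_ih m_h, with {beta_ih}
    taken here to be a Gaussian convolution (sigma = 2 sinogram bins)
    applied along each sinogram row (an assumed implementation detail)."""
    k = gaussian_kernel(sigma)
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, m)
```

For the (Q, 2S) variant the same smoothing would simply be applied to the current expected data q^(k) instead of the measured data m.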

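The ROI-averaged metrics of equations (12)-(14) can be sketched directly. This is an illustrative reading of those equations (with the variance taken as the unbiased sample variance over realisations, an assumption where the original is ambiguous), using invented names:

```python
import numpy as np

def roi_metrics(recon, truth, roi):
    """Pixel-level MAE (12), bias (13) and variance (14), averaged over an ROI.
    recon: (N, npix) array of N noise realisations at one iteration;
    truth: (npix,) ground truth; roi: integer index array of the ROI pixels."""
    r = recon[:, roi]                     # shape (N, P): realisations x ROI pixels
    t = truth[roi]                        # shape (P,)
    N = r.shape[0]
    mae = np.mean(np.mean(np.abs(r - t), axis=0))                        # eq. (12)
    bias = np.mean(np.mean(r - t, axis=0))                               # eq. (13)
    var = np.mean(np.sum((r - r.mean(axis=0)) ** 2, axis=0) / (N - 1))   # eq. (14)
    return mae, bias, var
```

Sweeping such metrics over iteration number and count level is what yields the best-MAE and bias-variance analyses reported below.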
[Figure 1 appears here: three sub-figures plotting MAE (%) against iterations for all ten methods, for the ROIs "Occipital & Frontal lobes (Full ROI), mean counts: 10515", "White matter (Full ROI), mean counts: 113908" and "Temporal lobe + Thalamus (Full ROI), mean counts: 113908".]

Fig 1. Example graphs of mean absolute error based on 100 realisations (given as a percentage %) in three different ROIs. FBP is shown for the case of iteratively re-applying a post-smooth with a Gaussian convolution kernel of σ=0.6 pixels. Note the graphs are zoomed into the main region of interest, so not all data are shown.

Figures 2 and 3 show similar information to figure 1, but for when the ROI is reduced in size and for the case of a central single pixel within each ROI. The trends are similar to figure 1. The good performance of FBP can be attributed to post-reconstruction smoothing, and the fact that the regions considered were uniform. The results strongly suggest the need to consider post-reconstruction smoothing (as well as iteration number) for the iterative methods.

[Figure 2 appears here: MAE (%) against iterations for the "Occipital & Frontal lobes (Sub ROI), mean counts: 10515" case.]

Fig 2. Similar to the first sub-figure in figure 1, but now for the case of the ROI being reduced in size (clear of the region edges through use of an erosion).

[Figure 3 appears here: MAE (%) against iterations for the "Temporal lobe + Thalamus (Central pixel ROI), mean counts: 113908" case.]

Fig 3. Similar to the third sub-figure in figure 1, but for a single central pixel in the ROI.

Figure 4 shows the best (i.e. the lowest achieved) MAE as a function of count level, for two example ROIs. Of particular interest is the ROI for which ML-EM is clearly outperformed by the smoothed-data weighted ISRA method ((M,2S)-ISRA), for a range of count levels. Figure 5 considers the same case for one of the ROIs used in figure 4, but with a sub-ROI. For this case, FBP now performs as well as, and sometimes better than, (M,2S)-ISRA. This can be understood by the fact that the sub-ROI stays clear of the resolution losses at the edges of the region, allowing FBP to perform well despite its lack of resolution modeling.

Figure 6 shows the bias-variance performance for different iterates of the algorithms (for the same ROI as previously considered). Figure 7 compares the visual quality of the reconstructed images which deliver the lowest MAE for each count level for each algorithm, for the ROI consisting of the temporal lobe and thalamus. It is important to note that the MAE quality metric, for the task of accurate quantification of a region from a single realization of data, does not necessarily correlate with a visual image quality metric.
[Figure 4 appears here: lowest-achieved MAE (%) against mean counts for the "Occipital & Frontal lobes (Full ROI)" and "Temporal lobe + Thalamus (Full ROI)" cases.]

Fig 4. The lowest-achieved MAE (%) as a function of count level, for two different ROIs. Full ROI size was used in both cases. For each reconstruction method at each count level, the iteration at which the method delivers the lowest MAE (from 100 data realisations) was selected to form these graphs.

[Figure 5 appears here: lowest-achieved MAE (%) against mean counts for the "Temporal lobe + Thalamus (Sub ROI)" case.]

Fig 5. Lowest achieved MAE (%) as a function of count level, for one of the ROIs considered in figure 4 (sub-ROI size used).

[Figure 6 appears here: variance against bias for the "Temporal lobe + Thalamus (Full ROI), mean counts: 113908" case.]

Fig 6. Bias-variance performance for one of the ROIs, based on 100 data realisations. The points on each curve for each method correspond to increments of 10 iterations (increasing iterations giving increasing variance). Note the lower variance achieved by the (M,2S)-ISRA method in its early iterations. (Q-ISRA is not shown, but its performance, as in previous graphs, closely matches that of MLEM with PSF.)

[Figure 7 appears here: reconstructed images of the temporal lobe and thalamus region for five different methods at four count levels.]

Fig 7. Qualitative image comparison: reconstructed images of single realisations of data are shown for five different methods at four of the ten different count levels considered. The iterates which deliver the lowest achieved MAE for the temporal lobe and thalamus were selected in each case, to show each method at its best performance for that metric. (For FBP, this corresponds to the number of post-smoothing operations applied, using a 2D Gaussian kernel with σ=0.6 pixels each time.)
V. DISCUSSION AND CONCLUSION

Starting from a weighted least-squares objective function, and using a specially scaled step size for the gradient of this objective function (to obtain a purely multiplicative update), it has been shown that a ratio of weighted backprojections explains the form of ML-EM as well as that of ISRA. By choosing lower-noise weighting factors (e.g. by spatial smoothing of a copy of the measured data), slight improvements in the task of achieving the lowest MAE can be obtained compared to ML-EM. In particular, for the phantom studied in this work, using smoothed measured data as weighting factors delivered a good all-round performance, sometimes delivering results superior to ML-EM.

ACKNOWLEDGMENT

Part of this work was supported by NSERC grant number 387067-10.

REFERENCES
[1] L.A. Shepp and Y. Vardi, Maximum likelihood reconstruction for emission tomography. IEEE Transactions on Medical Imaging 1 (1982) 113-122.
[2] K. Lange and R. Carson, EM reconstruction algorithms for emission and transmission tomography. J Comput Assist Tomogr 8 (1984) 306-316.
[3] G.I. Angelis, A.J. Reader, F.A. Kotasidis, W.R. Lionheart, and J.C. Matthews, The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging. Physics in Medicine and Biology 56 (2011) 3895-3917.
[4] M.E. Daube-Witherspoon and G. Muehllehner, An iterative image space reconstruction algorithm suitable for volume ECT. IEEE Transactions on Medical Imaging 5 (1986) 61-66.
[5] A.R. De Pierro, On the convergence of the iterative image space reconstruction algorithm for volume ECT. IEEE Transactions on Medical Imaging 6 (1987) 174-175.
[6] D.M. Titterington, On the iterative image space reconstruction algorithm for ECT. IEEE Transactions on Medical Imaging 6 (1987) 52-56.
[7] G.E.B. Archer and D.M. Titterington, The iterative image space reconstruction algorithm (ISRA) as an alternative to the EM algorithm for solving positive linear inverse problems. Statistica Sinica 5 (1995) 77-96.
[8] J. Han, L.X. Han, M. Neumann, and U. Prasad, On the rate of convergence of the image space reconstruction algorithm. Operators and Matrices 3 (2009) 41-58.
[9] J.M.M. Anderson, B.A. Mair, M. Rao, and C.H. Wu, Weighted least-squares reconstruction methods for positron emission tomography. IEEE Transactions on Medical Imaging 16 (1997) 159-165.
[10] Y.Y. Teng and T. Zhang, Iterative reconstruction algorithms with alpha-divergence for PET imaging. Computerized Medical Imaging and Graphics 35 (2011) 294-301.
[11] A. Rahmim, K. Dinelle, J.C. Cheng, M.A. Shilov, W.P. Segars, S.C. Lidstone, S. Blinder, O.G. Rousset, H. Vajihollahi, B.M.W. Tsui, D.F. Wong, and V. Sossi, Accurate event-driven motion compensation in high-resolution PET incorporating scattered and random events. IEEE Transactions on Medical Imaging 27 (2008) 1018-1033.
[12] A.J. Reader, P.J. Julyan, H. Williams, D.L. Hastings, and J. Zweit, EM algorithm system modeling by image-space techniques for PET reconstruction. IEEE Transactions on Nuclear Science 50 (2003) 1392-1397.
