Fast Iterative Reconstruction of DPCI
[Fig. 1: schematic of the imaging setup — source, phase grating, and absorption grating (separated by distance d) in front of the CCD; the interferometer forms a fringe pattern sampled at grating positions x_g, whose intensity curve is displaced by Δx_g(y, θ).]
The phase Φ(y, θ) accumulated by the illuminating wave is proportional to the Radon transform of the object function:

Φ(y, θ) = (2π/λ) ∫_{ℝ²} f(x) δ(y − ⟨x, θ⟩) dx = (2π/λ) Rf(x)(y, θ),  (1)

where f(x) = n(x) − 1, with n(x) the real part of the refractive index of the object, θ is the angle of view of the monochromatic wave illuminating the object, and Rf(x)(y, θ) is the Radon transform of f(x).
Definition 1. [19] The (2-dimensional) Radon transform R: L²(ℝ²) → L²(ℝ × [0, π)) maps a function on ℝ² into the set of its integrals over the lines of ℝ². Specifically, if θ = (cos θ, sin θ) and y ∈ ℝ, then

Rf(x)(y, θ) = ∫_{ℝ²} f(x) δ(y − ⟨x, θ⟩) dx = ∫_ℝ f(yθ + tθ⊥) dt,

where θ⊥ is perpendicular to θ.
The object introduces changes in the propagation direction of the illuminating wave. The refraction angle is proportional to the derivative of the phase of the output wave with respect to y:

α(y, θ) = (λ/(2π)) ∂Φ(y, θ)/∂y,  (2)

where α(y, θ) is the refraction angle in radians, λ is the wavelength, and Φ(y, θ) is the phase of the wave field.
After passing through the object, the wave field reaches the phase grating, which imposes periodic phase modulations onto the X-ray wave field. Through the Talbot effect [20-22], the phase modulation is transformed into an intensity modulation in the detector plane, and a fringe pattern perpendicular to the optical axis is formed [6, 17]. An absorption grating with the same periodicity and orientation as the fringe pattern is placed in front of the detector. One then uses a phase-stepping technique (PST) [6] to measure the diffraction pattern with high resolution. In this technique, the absorption grating is scanned along the transverse direction and the intensity signal in each detector pixel is recorded as a function of the grating position x_g. The fundamental idea of this technique is to analyze the changes in the resulting oscillating curve when an object is placed in the path of the illuminating wave.
The refraction angle introduced by the object manifests itself as a shift of the corresponding curve. For a small refraction angle, the shift is given by

Δx_g(y, θ) = d α(y, θ),

where d is the distance between the two gratings and Δx_g(y, θ) is the displacement of the curve. Therefore, by substituting Eq. (2) into the latter equation, we have

g(y, θ) = (1/d) Δx_g(y, θ) = ∂Rf/∂y (y, θ),  (3)

where g(y, θ) denotes the measurements. In summary, the physical model of DPCI is based on the first derivative of the Radon transform.
3. Review of the Derivative Variants of the Radon Transform and Generalized Filtered Back-Projection
Since the measurements are related to the Radon transform and its derivatives, we start by establishing some higher-level mathematical properties of these operators. The nth derivative of the Radon transform of a function f(x) is denoted by

R⁽ⁿ⁾f(y, θ) = ∂ⁿ/∂yⁿ Rf(y, θ).
The derivatives of the Radon transform are linear operators with the following properties:

- Scale invariance:
  R⁽ⁿ⁾{f(α·)}(y, θ) = α^{n−1} R⁽ⁿ⁾f(αy, θ),  α ∈ ℝ₊.

- Pseudo-distributivity with respect to convolution:
  R⁽ⁿ⁾(f ∗ g)(x)(y, θ) = (R⁽ⁿ⁾f(·, θ) ∗ Rg(·, θ))(y) = (Rf(·, θ) ∗ R⁽ⁿ⁾g(·, θ))(y).

- Projected translation invariance:
  R⁽ⁿ⁾{f(· − x₀)}(y, θ) = R⁽ⁿ⁾f(y − ⟨x₀, θ⟩, θ).
To derive the relations needed for the rest of the paper, we define a new operator: the Hilbert transform along the second coordinate.

Definition 2. The Hilbert transform along the x₂ axis, H₂: L²(ℝ²) → L²(ℝ²), is defined in the Fourier domain as

(H₂f)^(ω₁, ω₂) = −i sgn(ω₂) f̂(ω₁, ω₂),

where (ω₁, ω₂) are the spatial-frequency coordinates corresponding to (x₁, x₂).
Proposition 1. The sequential application of the Radon transform, the nth derivative operator, and the adjoint of the Radon transform to a function f(x) ∈ L²(ℝ²) gives

R* ∂ⁿ/∂yⁿ Rf(y, θ)(x) = 2π (−1)ⁿ H₂ⁿ (−Δ)^{(n−1)/2} f(x),  (4)

where (−Δ)^{1/2} is the fractional Laplace operator with transfer function ‖ω‖ and (−Δ)^{n/2} denotes the n-fold application of this operator.
Proof. Let g(y, θ) = Rf(y, θ). The Fourier-slice theorem states that

ĝ(ω, θ) = f̂(ω cos θ, ω sin θ).

The Fourier transform of the nth derivative of g(y, θ) with respect to y is (iω)ⁿ ĝ(ω, θ). Thus,

(iω)ⁿ ĝ(ω, θ) = iⁿ sgnⁿ(ω) |ω|ⁿ f̂(ω cos θ, ω sin θ),  (5)

so that, for θ ∈ (0, π),

∂ⁿ/∂yⁿ Rf(y, θ) = R((−1)ⁿ (H₂)ⁿ (−Δ)^{n/2} f)(x)(y, θ).  (6)

Therefore, we have

R* ∂ⁿ/∂yⁿ Rf(y, θ)(x) = R* R((−1)ⁿ (H₂)ⁿ (−Δ)^{n/2} f)(x).

Since R*R is the filtering operator with transfer function 2π/‖ω‖, this yields the desired result.
Eq. (4) is a key equation. It implies that, if one is interested in solving the inverse problem

R⁽ⁿ⁾f(y) = g(y),

the direct solution is to first apply the Radon adjoint to the measured data and then apply the inverse of the operator 2π(−1)ⁿ H₂ⁿ (−Δ)^{(n−1)/2}. The transfer function of the inverse of this operator is

1 / ( 2π iⁿ sgnⁿ(ω₂) ‖ω‖^{n−1} ).
An equivalent form of Eq. (4) is

f(x) = R*( q ∗ R⁽ⁿ⁾f(·, θ) )(x),  (7)

where q is the 1-D filter (acting along y) whose transfer function is q̂(ω_y) = |ω_y| / (2π (iω_y)ⁿ). Eq. (7) specifies the generalized filtered back-projection (GFBP). The full procedure is described in Algorithm 1.
Algorithm 1 GENERALIZED FILTERED BACK-PROJECTION (GFBP) FOR THE INVERSE PROBLEM R⁽ⁿ⁾f(y) = g(y)
Input: g(y) as data.
Output: Reconstructed image f(x).
1: for each projection angle θ_i, i = 1, …, N, do
2:   Filter the input data with the 1-D filter whose transfer function is q̂(ω_y) = |ω_y| / (2π (iω_y)ⁿ).
3: end for
4: Apply the adjoint of the Radon transform to the output of the previous stage.
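For concreteness, the per-projection filtering step of Algorithm 1 can be sketched with FFTs. This is an illustrative sketch (not the authors' code), assuming the transfer function q̂(ω_y) = |ω_y| / (2π (iω_y)ⁿ) with the undefined zero-frequency value set to 0:

```python
import numpy as np

def gfbp_filter_projection(proj, n=1):
    """Filter one projection g(., theta_i) with the 1-D filter whose
    transfer function is |w| / (2*pi*(1j*w)**n); the zero-frequency
    value is set to 0 (derivative data carry no DC information)."""
    N = proj.size
    w = 2 * np.pi * np.fft.fftfreq(N)      # radians per sample
    q = np.zeros(N, dtype=complex)
    nz = w != 0
    q[nz] = np.abs(w[nz]) / (2 * np.pi * (1j * w[nz]) ** n)
    return np.real(np.fft.ifft(np.fft.fft(proj) * q))
```

For n = 1 this reduces to a scaled Hilbert transform, since q̂(ω) = −i sgn(ω)/(2π).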
4. Discretization Scheme and Implementation of the Forward Model
In this section, we define discrete versions of the Radon transform and its derivatives that are consistent with the continuous-domain operators. This is the basis for a fast and accurate implementation of the DPCI forward model.
4.1. Discrete Model of the Radon Transform and Its Derivatives
We propose a formulation that relies on a generalized sampling scheme. Let φ(x) ∈ L²(ℝ²) denote a generating function whose integer shifts form a Riesz basis of the approximation space

V(φ) = { f(x) = Σ_{k∈ℤ²} c_k φ(x − k) : c ∈ ℓ²(ℤ²) }.

The orthogonal projection of the object f(x) onto the reconstruction space V(φ) is

f(x) = Σ_{k∈ℤ²} c_k φ_k(x),

where φ_k(x) = φ(x − k) and c_k = ⟨f, φ̊(· − k)⟩, in which φ̊(x) is the dual of φ(x), with Fourier transform

φ̊^(ω) = φ̂(ω) / Σ_{k∈ℤ²} |φ̂(ω + 2πk)|².
Using the linearity and translation invariance of the Radon transform, the Radon transform of f(x) is

Rf(y, θ) = Σ_{k∈ℤ²} c_k Rφ_k(y, θ),  (8)

where

Rφ_k(y, θ) = Rφ(y − ⟨k, θ⟩, θ),  (9)

and Rφ(y, θ) = F⁻¹{φ̂(ωθ)}(y). Likewise, we obtain the derivatives of the Radon transform:

R⁽ⁿ⁾f(y, θ) = Σ_{k∈ℤ²} c_k R⁽ⁿ⁾φ_k(y, θ),  (10)

where

R⁽ⁿ⁾φ_k(y, θ) = R⁽ⁿ⁾φ(y − ⟨k, θ⟩, θ).  (11)
Based on Eq. (8) and Eq. (10), we need to choose an appropriate generating function and determine its Radon transform analytically. In this work, we use a tensor-product B-spline, which provides a good compromise between support size and approximation order [23]:

φ(x) = β^m(x) = β^m(x₁) β^m(x₂).

Here, β^m is a centered univariate B-spline of degree m:

β^m(x) = Δ₁^{m+1} ( x₊^m / m! ),

where Δ_h^n f(y) is the n-fold iteration of the finite-difference operator Δ_h f(y) = (f(y + h/2) − f(y − h/2)) / h.
Proposition 2. The nth derivative of the Radon transform of the tensor-product B-spline is given by

R⁽ⁿ⁾β^m(x)(y, θ) = Δ_{cos θ}^{m+1} Δ_{sin θ}^{m+1} ( y₊^{2m−n+1} / (2m−n+1)! )

= Σ_{k₁=0}^{m+1} Σ_{k₂=0}^{m+1} (−1)^{k₁+k₂} (m+1 choose k₁) (m+1 choose k₂) ( y + ((m+1)/2 − k₁) cos θ + ((m+1)/2 − k₂) sin θ )₊^{2m−n+1} / ( (2m−n+1)! cos^{m+1}θ sin^{m+1}θ ).  (12)
Proof. We define g(y, θ) = R⁽ⁿ⁾β^m(y, θ). By an application of the Fourier-slice theorem, we have

g(y, θ) = F⁻¹{ (jω)ⁿ β̂^m(ω cos θ) β̂^m(ω sin θ) }(y),

where β̂^m(ω) = ( sin(ω/2) / (ω/2) )^{m+1} is the Fourier transform of β^m(y). We further have

(jω)ⁿ β̂^m(ω cos θ) β̂^m(ω sin θ)
= (jω)ⁿ ( sin(ω cos θ / 2) / (ω cos θ / 2) )^{m+1} ( sin(ω sin θ / 2) / (ω sin θ / 2) )^{m+1}
= (jω)ⁿ ( (e^{jω cos θ / 2} − e^{−jω cos θ / 2}) / (jω cos θ) )^{m+1} ( (e^{jω sin θ / 2} − e^{−jω sin θ / 2}) / (jω sin θ) )^{m+1}
= ( (e^{jω cos θ / 2} − e^{−jω cos θ / 2}) / cos θ )^{m+1} ( (e^{jω sin θ / 2} − e^{−jω sin θ / 2}) / sin θ )^{m+1} · 1 / (jω)^{2m−n+2}.  (13)

Taking the inverse Fourier transform of this expression yields the desired result.
To discretize our forward model, we need to compute the derivatives of the Radon transform of f at the locations (y_j, θ_i), where y_j = jΔy and θ_i = iΔθ:

R⁽ⁿ⁾f(y_j, θ_i) = Σ_{k∈ℤ²} c_k R⁽ⁿ⁾φ_k(y_j, θ_i),  n = 0, 1, 2.  (14)

The values R⁽ⁿ⁾φ_k(y_j, θ_i) can be calculated using Proposition 2.
4.2. Matrix Formulation of the Forward Model
To describe the forward model more concisely, we now introduce an equivalent matrix formulation of Eq. (14):

g = Hc,  (15)

where c is a vector whose entries are the B-spline coefficients c_k, g is the output vector, and H is the system matrix with

[H]_{(i,j),k} = R⁽¹⁾φ_k(y_j, θ_i).  (16)

The corresponding adjoint operator is represented by the conjugate transpose of the system matrix.
In our implementation, we use cubic B-spline basis functions and take advantage of the closed-form formulas of Eq. (12). In the next section, we present an efficient algorithm for calculating the forward model and its adjoint.
4.3. Fast Implementation
The calculation of the Radon transform (or its variants) involves the application of the system matrix to the B-spline coefficients of the object. We now present an algorithm for applying the discrete version of the derivatives of the Radon transform and their adjoints.
The key observation is that the system matrix H does not need to be stored explicitly. By the translation-invariance property of the Radon transform, it is sufficient to know the derivatives of the Radon transform of the generating function, R⁽ⁿ⁾φ(y, θ), in order to calculate the derivatives of the Radon transform of the object. To improve speed, we oversample R⁽ⁿ⁾φ(y, θ) and store the values in a lookup table.
Note that the algorithm can easily be parallelized, since projections corresponding to different angles are completely independent of each other. We designed a multithreaded implementation for an 8-core workstation, which allows for 8 simultaneous projection computations. Similarly, the adjoint of the forward model can be parallelized with respect to the object points: the values of the Radon adjoint at each object point are computed in parallel.
In summary, our implementation is based on an accurate continuous-to-discrete model. Moreover, it is fast thanks to the use of lookup tables and multithreading.
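The lookup-table strategy can be sketched as follows. Here kernel is a stand-in for R⁽ⁿ⁾φ (any compactly supported profile works for the illustration); the helper names and the choice of linear interpolation are ours:

```python
import numpy as np

def build_lut(kernel, ymax, step):
    """Tabulate kernel(y) once, ahead of time, on a fine grid."""
    ys = np.arange(-ymax, ymax + step, step)
    return ys, kernel(ys)

def forward_projection(c, theta, y_samples, ys, table):
    """g(y_j, theta) = sum_k c_k * K(y_j - <k, theta>), with K read from
    the lookup table by linear interpolation (zero outside its support)."""
    g = np.zeros_like(y_samples, dtype=float)
    ct, st = np.cos(theta), np.sin(theta)
    for (k1, k2), ck in np.ndenumerate(c):
        if ck != 0.0:
            shift = k1 * ct + k2 * st
            g += ck * np.interp(y_samples - shift, ys, table,
                                left=0.0, right=0.0)
    return g
```

With a single nonzero coefficient at k = (0, 0), the projection is just the tabulated kernel, which makes the scheme easy to verify.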
5. Image Reconstruction
We have already presented a direct method for inverting Eq. 15, namely the generalized ltered
back projection (GFBP) described in Algorithm 1. To apply the Radon adjoint in step 2 of this
algorithm, we use our proposed method.However, for large images with a limited number of
measurements, the direct methods such as FBP are not accurate enough. In this section, we
formulate the reconstruction as a penalized least-squares optimization problem.
The cost function that we want to minimize is

J(c) = ½ ‖Hc − g‖² + Ψ(c),

where g is the measurement vector and the regularization term Ψ(c) is chosen based on the following considerations. First, we observe that the null spaces of the derivatives of the Radon transform contain the zero frequency. To compensate for this, we incorporate the energy of the coefficients into the regularization term. Second, to enhance the edges in the reconstructed image, we impose a sparsity constraint in the gradient domain. This leads to

Ψ(c) = (λ₁/2) ‖c‖² + λ₂ Σ_k ‖[Lc]_k‖₁,  (17)

where [Lc]_k ∈ ℝ² denotes the gradient of the image at position k, and λ₁ and λ₂ are regularization parameters. More precisely, the gradient is computed based on our continuous-domain image model using the following property.
Proposition 3. Let f(x) = Σ_{k∈ℤ²} c_k βⁿ(x − k). The gradient of f on the Cartesian grid is

∂f/∂x₁ [k₁, k₂] = ((c ∗₁ h₁) ∗₂ b₂)[k₁, k₂]
∂f/∂x₂ [k₁, k₂] = ((c ∗₂ h₂) ∗₁ b₁)[k₁, k₂],  (18)

where k₁, k₂ ∈ ℤ, ∗ᵢ denotes 1-D convolution along the ith coordinate, hᵢ[k] = β^{n−1}(k + 1/2) − β^{n−1}(k − 1/2), and bᵢ[k] = βⁿ(k) for i = 1, 2.
High-dimensional sparsity-regularized problems are typically solved using the family of iterative shrinkage/thresholding algorithms (ISTA). The convergence speed of these methods depends on the spectral radius of HᵀH. With regard to (21), the maximum eigenvalue of HᵀH is significantly large (λ_max ≈ 10⁷). To overcome this difficulty, we use a variable-splitting scheme so as to map our general optimization problem into simpler ones. Specifically, we introduce the auxiliary variable u [25-29] and reformulate the TV problem of Eq. (17) as the linear equality-constrained problem

c = argmin_{c,u} { ½ ‖Hc − g‖² + (λ₁/2) ‖c‖² + λ₂ Σ_i ‖u_i‖₁ }  subject to u = Lc.  (19)
We can map this into an unconstrained problem by considering the augmented-Lagrangian functional

L_μ(c, u, α) = ½ ‖Hc − g‖² + (λ₁/2) ‖c‖² + λ₂ Σ_i ‖u_i‖₁ + αᵀ(Lc − u) + (μ/2) ‖Lc − u‖²,  (20)
where α is a vector of Lagrange multipliers. To minimize (20), we adopt a cyclic update scheme, also known as the alternating-direction method of multipliers (ADMM), which consists of the iterations

c^{k+1} ← argmin_c L_μ(c, u^k, α^k)
u^{k+1} ← argmin_u L_μ(c^{k+1}, u, α^k)
α^{k+1} ← α^k + μ (Lc^{k+1} − u^{k+1}).
L_μ(c, u^k, α^k) is a quadratic function of c whose gradient is

∇L_μ(c, u^k, α^k) = (HᵀH + μLᵀL + λ₁I) c − (Hᵀg + Lᵀ(μu^k − α^k)) = Ac − b.

We minimize L_μ(c, u^k, α^k) iteratively by using the conjugate-gradient (CG) algorithm to solve the equation Ac = b. Since A has a large condition number, it is helpful to introduce a preconditioning matrix M. This matrix is chosen such that M⁻¹(HᵀH + μLᵀL + λ₁I) has a condition number close to 1. To design a problem-specific preconditioner, we use the following property of the forward model.
Proposition 4. The successive application of the derivatives of the Radon transform and their adjoint is a high-pass filtering:

R⁽ⁿ⁾* R⁽ⁿ⁾ f(x) = 2π (−Δ)^{(2n−1)/2} f(x),  (21)

where (−Δ)^{(2n−1)/2} has frequency response ‖ω‖^{2n−1}.

Proof. Based on the fact that (∂/∂y)* = −∂/∂y, Proposition 1 yields the result.
Therefore, R⁽¹⁾*R⁽¹⁾ has a frequency response proportional to ‖ω‖; moreover, LᵀL is a discretized Laplace operator, with frequency response proportional to ‖ω‖². Thus, the preconditioner M⁻¹ that we use in the discrete domain is the discrete filter whose frequency response is

1 / ( ‖ω‖ + μ‖ω‖² + λ₁ ).
A standard result for ISTA-type algorithms is that the minimizer of L_μ(c^{k+1}, u, α^k) with respect to u is given by the shrinkage function [30, 31]

u^{k+1} = max( |Lc^{k+1} + α^k/μ| − λ₂/μ, 0 ) · sgn( Lc^{k+1} + α^k/μ ),  (22)

applied componentwise.
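The shrinkage (soft-threshold) step is one line; the brute-force check in our test scaffolding confirms numerically that it minimizes t·|u| + ½(u − v)², which is the u-subproblem up to a factor μ:

```python
import numpy as np

def shrink(v, t):
    """Soft-threshold: the exact minimizer of t*|u| + 0.5*(u - v)^2."""
    return np.maximum(np.abs(v) - t, 0.0) * np.sign(v)
```

For example, with threshold 0.4, the value 1.3 is pulled to 0.9 and the value −0.2 is set to 0.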
We warm-start the preconditioned CG method, using the solution of the previous iteration as the initial estimate. The reconstruction method is described in Algorithm 2.
Algorithm 2 ADMM-PCG WITH WARM INITIALIZATION RECONSTRUCTION METHOD
Input: g(y_j, θ_i) = R⁽ⁿ⁾f(y_j, θ_i), ∀i, j, the phase measurements.
Output: f(x), the reconstructed image.
1: Set λ₁, λ₂, μ, and the B-spline degree m.
2: Initialize c⁽⁰⁾ and u⁽⁰⁾.
3: repeat
4:   c^{k+1} ← argmin_c L_μ(c, u^k, α^k), using the preconditioned CG method with initial estimate c⁽⁰⁾.
5:   u^{k+1} ← max( |Lc^{k+1} + α^k/μ| − λ₂/μ, 0 ) · sgn( Lc^{k+1} + α^k/μ ).
6:   α^{k+1} ← α^k + μ (Lc^{k+1} − u^{k+1}).
7:   c⁽⁰⁾ ← c^{k+1}.
8: until convergence
9: f(x) = Σ_k c_k β^m(x − k).
6. Experimental Analysis
6.1. Performance Metrics
In this paper, we use the structural similarity measure (SSIM) [32, 33] and the signal-to-noise ratio (SNR) to quantify the quality of the reconstructed image. SSIM is a similarity measure proposed by Z. Wang et al. that compares the luminance, contrast, and structure of images. SSIM is computed over a window of size R × R around each image pixel. The SSIM measure for two images x and x̃ over the specified window is

SSIM(x, x̃) = (2 μ_x μ_x̃ + C₁)(2 σ_{x x̃} + C₂) / ( (μ_x² + μ_x̃² + C₁)(σ_x² + σ_x̃² + C₂) ),  (23)

where C₁ and C₂ are small constants that avoid instability, μ_x and μ_x̃ denote the empirical means of the images x and x̃ over the specified window, σ_x² and σ_x̃² are the corresponding empirical variances, and σ_{x x̃} denotes the covariance of the two images over the window. In our experiments, we choose C₁ = C₂ = (0.001 L)², where L is the dynamic range of the image pixel values. The SSIM of the total image is obtained as the average of the SSIM over all pixels. It takes values between 0 and 1, with 1 corresponding to the highest similarity.
Our other similarity measure is the SNR. If x is the oracle and x̃ is the reconstructed image, we have

SNR(x, x̃) = max_{a,b ∈ ℝ} 20 log₁₀ ( ‖x‖₂ / ‖x − a x̃ + b‖₂ ).  (24)
Higher values of the SNR correspond to a better match between the oracle and the reconstructed
image.
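The maximization over a and b in Eq. (24) is an affine least-squares fit, so the regressed SNR can be computed in closed form (the function name is ours):

```python
import numpy as np

def regressed_snr(x, x_rec):
    """SNR maximized over an affine fit a*x_rec + b to the oracle x,
    via ordinary least squares on the flattened images."""
    A = np.stack([x_rec.ravel(), np.ones(x_rec.size)], axis=1)
    coef, *_ = np.linalg.lstsq(A, x.ravel(), rcond=None)
    residual = x.ravel() - A @ coef
    return 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(residual))
```

Because of the affine fit, the measure is invariant to any global scaling and offset of the reconstruction, which makes it suitable for comparing reconstructions in arbitrary units.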
6.2. Experimental Result
To validate our reconstruction method, we conducted experiments on real data acquired at the TOMCAT beamline of the Swiss Light Source at the Paul Scherrer Institute in Villigen, Switzerland. The synchrotron light is delivered by a 2.9 T super-bending magnet, and the energy of the X-ray beam is 25 keV [34]. We used nine phase steps over two periods to measure the displacement of the diffraction pattern described in Section 2. For each step, a complete tomogram was acquired over 180 degrees; we used 721 uniformly distributed projection angles. Image acquisition was performed with a CCD camera whose pixel size is 7.4 μm.
For our experiments, we used a rat-brain sample. The importance of the rat brain is twofold: first, owing to its position as the mammal of choice in genetic research, it is becoming increasingly important for neuroscience; second, lessons learned from this small animal are expected to apply to the diseased human brain. The hippocampus is a major component of the brain of humans and other vertebrates that plays important roles in spatial navigation and memory. The two main interlocking parts of the hippocampus are Ammon's horn and the dentate gyrus (DG). One important issue is the density and distribution of the immunoreactive neurons in the DG region. Another part of the brain is the thalamus, which is located between the cerebral cortex and the midbrain. The thalamus is connected to the hippocampus in multiple ways via the mammillothalamic tract, which comprises the mammillary body and fornix. Here, we consider the distribution of the nuclei in the DG and thalamus regions, and we clarify the position of the mammillothalamic tract in one coronal section of the rat brain.
The sample was embedded in liquid paraffin at room temperature. This is necessary to match the refractive index of the sample with its environment, so that the small-refraction-angle approximation holds. Finally, the projections were post-processed, including flat-field and dark-field corrections, for the extraction of the phase gradient.
[Fig. 2 plot: cost-function value versus time (min) for FISTA; ADMM-CG without warm initialization (20 inner iterations); ADMM-PCG without warm initialization (10 inner iterations); and ADMM-PCG with warm initialization (1, 2, 3, or 4 inner iterations).]
Fig. 2. Convergence-speed comparison of different iterative techniques for solving our regularized problem. The standard FISTA algorithm performs poorly in this case. Observe that the number of inner iterations in the ADMM-PCG method does not significantly influence its convergence.
To apply the proposed iterative technique, it is first necessary to choose the regularization and ADMM parameters. The Tikhonov parameter is chosen as small as possible and is set to λ₁ = 10⁻⁵. From a statistical point of view, the TV parameter is proportional to the norm of the measurements, with a proportionality ratio related to their signal-to-noise ratio. Here we use λ₂ = ‖g‖₂ × 10⁻³, where ‖g‖₂ is the ℓ₂-norm of the measurements. Based on our experience, the parameter μ can be chosen ten times larger than λ₂. The number of inner iterations for solving the linear step plays no role in the convergence of the proposed technique, but affects its speed; we suggest choosing it as small as possible, e.g., 2 or 3. We use cubic B-splines as basis functions.
Fig. 2 compares the proposed algorithm against FISTA. It demonstrates the benefits of using a warm initialization as well as a problem-specific preconditioner for the linear optimization step.
We further investigated the performance of the direct filtered back-projection and of the proposed iterative reconstruction technique on a coronal section of the rat brain. The images reconstructed from 721 angles using GFBP and our method are shown in Figure 3. The GFBP reconstruction contains artifacts at the boundary of the image. Zooming in, the bottom-right inset of each image in Figure 3 shows the mammillothalamic tract in this coronal section; there are curve-shaped artifacts in the image reconstructed with the direct method. These could confuse a biologist, or an automated detection system, when determining the nuclei and immunoreactive neurons in that region. The middle-right and top-right insets display a part of the thalamus and the dentate gyrus region of the hippocampus, respectively. It is visually clear that the image reconstructed from 721 angles using our proposed technique is more faithful than the GFBP reconstruction: it visualizes more details without artifacts. We therefore consider it as our gold standard for investigating the dependence of our algorithm on the number of views, as shown in Fig. 4. The SNR and SSIM values are computed for the main region of the sample, which includes the brain. They suggest that our method achieves, with one order of magnitude fewer directions, the same quality as the direct method.
The reconstructions using 181 directions are provided in Figure 5. The SSIM and regressed SNR are computed and noted below each figure.
[Fig. 3 inset metrics: SSIM = .96, SNR = 37.5 dB; SSIM = .96, SNR = 39.2 dB; SSIM = .91, SNR = 36.9 dB; SSIM = .15, SNR = 9.2 dB.]
Fig. 3. (a) The image reconstructed using the iterative ADMM approach with 721 directions; (b) the image obtained with GFBP. The dentate gyrus, thalamus, and fornix regions of the brain are shown in more detail at the top, middle, and bottom right of each image. Notice the oscillatory artifacts produced by GFBP.
[Fig. 4 plots: (a) SSIM and (b) SNR (dB) versus the number of directions (361, 181, 91, 46, 23) for ADMM-PCG and FBP.]
Fig. 4. The SNR and SSIM metrics for images reconstructed from a subset of projections.
7. Conclusion
We presented an exact B-spline formulation for the discretization of the derivative variants of the Radon transform that avoids any numerical differentiation in 2-D. This result can easily be extended to higher dimensions. To reduce the total scanning time, we formulated the reconstruction problem as an inverse problem with a combination of regularization terms. We introduced a new iterative algorithm that uses the alternating-direction method of multipliers to solve our TV-regularized reconstruction problem. An important practical twist is the introduction of a problem-specific preconditioner, which significantly speeds up the quadratic optimization step of the algorithm. Finally, we presented an experiment on real data to validate the proposed discretization method and the corresponding iterative technique. It confirms that ADMM is the current state of the art for solving TV-regularized problems. Moreover, our method can achieve the same image-reconstruction quality as a filtered back-projection algorithm using three times fewer viewing angles.
[Fig. 5 panel metrics: SSIM = .79, SNR = 31.4 dB; SSIM = .82, SNR = 32.6 dB; SSIM = .89, SNR = 37.0 dB; SSIM = .49, SNR = 29.1 dB; SSIM = .64, SNR = 28.3 dB; SSIM = .83, SNR = 30.2 dB; SSIM = .43, SNR = 26.1 dB; SSIM = .15, SNR = 7.1 dB.]
Fig. 5. (a) and (b) show the images reconstructed from 181 directions using ADMM and GFBP, respectively. The dentate gyrus, thalamus, and fornix regions of the brain are shown in more detail at the top, middle, and bottom right of each image.