
The Chan-Vese Model with Elastica and Landmark Constraints for Image Segmentation

JINTAO SONG1, HUIZHU PAN2, WANQUAN LIU3, ZISEN XU4, AND ZHENKUAN PAN5
1 College of Computer Science and Technology, Qingdao University, Qingdao, 266071, China (e-mail: 2017021234@qdu.edu.cn)
2 School of Electrical Engineering, Mathematical Science and Computing, Curtin University, Perth, WA 6102, Australia (e-mail: huizhu.pan@postgrad.curtin.edu.au)
3 School of Electrical Engineering, Mathematical Science and Computing, Curtin University, Perth, WA 6102, Australia (e-mail: w.liu@curtin.edu.au)
4 The Affiliated Hospital of Qingdao University, Qingdao, 266003, China (e-mail: zisen_xu@126.com)
5 College of Computer Science and Technology, Qingdao University, Qingdao, 266071, China (e-mail: zkpan@126.com)
Corresponding author: Zhenkuan Pan (e-mail: zkpan@126.com).

ABSTRACT In order to completely separate objects with large sections of occluded boundaries in an image, we devise a new variational level set model for image segmentation that combines the Chan-Vese model with elastica and landmark constraints. For computational efficiency, we design its Augmented Lagrangian Method (ALM), i.e., Alternating Direction Method of Multipliers (ADMM), solution by introducing some auxiliary variables, Lagrange multipliers, and penalty parameters. In each loop of the alternating iterative optimization, the minimization sub-problems can be easily solved via the Gauss-Seidel iterative method and generalized soft thresholding formulas with projection, respectively. Numerical experiments show that the proposed model can not only recover larger broken boundaries but can also improve segmentation efficiency, as well as decrease the dependence of the segmentation on parameter tuning and initialization.

INDEX TERMS Image segmentation, Chan-Vese model, Elastica, Landmarks, Variational level set
method, ADMM method.

I. INTRODUCTION
In recent years, deep learning methods have been widely used in areas of image processing such as image segmentation. However, these methods require considerable training data and are generally limited by the properties of the data. In addition, when the amount of training data is limited, deep learning methods may lead to over-fitting problems and poor results. On the other hand, model-based approaches for image segmentation are more cost efficient, computationally efficient, and memory efficient. Variational level set methods [1], as classical model-based methods, have been widely applied to image segmentation problems based on image features such as edges, regions, texture, and motion [2]-[5]. For images containing occluded objects, variational models using shape priors can inpaint missing boundaries based on pre-defined shapes [6]-[9]. However, obtaining shape priors is often not easy. In certain scenarios, using landmarks is a good alternative to using shape priors in the segmentation of occluded objects. Landmarks are key points that denote important features in an image, so they can be used to improve segmentation contours. The latest works in landmark localization [10] use deep learning methods to generate more accurate landmarks, which makes it possible to utilize landmarks in an end-to-end fashion. Therefore, the subject of landmarks in image segmentation is now more relevant than ever.

Motivated by image registration with landmarks [11]-[13], Pan et al. [14] proposed a Chan-Vese model [15] with landmark constraints (CVL) under the variational level set framework. The model not only enforces the segmentation contour to pass through some pre-selected feature points but also improves computational efficiency and weakens the dependence of the segmentation result on initialization. However, since the Chan-Vese model uses the total variation (TV) [16] of the Heaviside function of the level set function to approximate the length of contours, sometimes the details of objects are not well segmented. Instead, the CVL model performs better for recovering smaller boundaries. On the other hand, the elastica regularizers proposed in the early 1990s in depth segmentation [17] have been successively applied to image inpainting with larger broken regions [18], image restoration with smooth components [19], [20], and image segmentation with larger damaged areas or occlusions [21]-[24].
In [21], Zhu et al. propose a modified Chan-Vese model with elastica (CVE), combining the classic Chan-Vese model (CV) and the elastica regularizer to inpaint, or interpolate, segmentation curves. Due to the ill-posed nature of this model, the segmentation results rely heavily on the involved penalty parameters, which makes it hard for curves to pass through desired points in the occluded regions. In this paper, we propose a CVE model with landmark constraints (CVEL) that combines the CVL and CVE to more accurately and robustly complete missing curves. Different from the CVE proposed in [21], which uses piecewise constant level set functions or binary label functions, we use the Lipschitz smooth level set function defined as a signed distance function to describe curve evolution. In order to solve the proposed model with the signed distance property and landmark constraints, we devise its Augmented Lagrangian Method (ALM), i.e., Alternating Direction Method of Multipliers (ADMM) [25]-[28], solution by dividing the original problem into several simple sub-problems and optimizing them alternately. The sub-problems can be solved respectively by the Gauss-Seidel iterative method and the generalized soft thresholding formula with projection [29].

The paper is organized as follows. In Section II, we present the classical Chan-Vese model, the Chan-Vese model with elastica, and the Chan-Vese model with landmark constraints for comparison. In Section III, we give the CVE model with landmark constraints under the variational level set framework, design its ADMM method, and use the Gauss-Seidel iterative method and generalized soft thresholding formulas to solve the sub-problems. Section IV covers the discrete implementations of the sub-problems derived in Section III. Numerical examples are presented in Section V to show the performance of the proposed model and algorithm. Finally, concluding remarks are drawn in Section VI.

II. THE PREVIOUS WORKS

A. THE CHAN-VESE MODEL FOR IMAGE SEGMENTATION
The task of two-phase segmentation of a gray value image f(x): Ω → R is to divide Ω into two regions Ω1 and Ω2 such that Ω = Ω1 ∪ Ω2 and Ω1 ∩ Ω2 = ∅. The classical Chan-Vese model [15] is a reduced piecewise constant Mumford-Shah model [3] under the variational level set framework. The original image is denoted as f(x) = c1 χ1(φ(x)) + c2 χ2(φ(x)), where c1 and c2 are the average image intensities in Ω1 and Ω2, and χ1(φ(x)) = H(φ(x)) ∈ [0, 1] and χ2(φ(x)) = 1 − H(φ(x)) ∈ [0, 1] are the characteristic functions of Ω1 and Ω2, respectively. φ(x) is a level set function defined as the signed distance from point x to the curve Γ, i.e.,

    φ(x) = d(x, Γ) if x ∈ Ω1,   φ(x) = 0 if x ∈ Γ,   φ(x) = −d(x, Γ) if x ∈ Ω2,   (1)

with the property

    |∇φ(x)| = 1.   (2)

(2) is the Eikonal equation, i.e., a kind of Hamilton-Jacobi equation. H(φ(x)) is the Heaviside function of φ(x), stated as

    H(φ(x)) = 1 if φ(x) ≥ 0, and H(φ(x)) = 0 otherwise,   (3)

and its partial derivative with respect to φ(x) is the Dirac function

    δ(φ) = ∂H(φ)/∂φ,   (4)

which is a generalized function. Usually, H(φ) and δ(φ) are replaced by their mollified versions by introducing a small positive constant parameter ε, for instance [15]

    Hε(φ) = (1/2)(1 + (2/π) arctan(φ/ε)),   (5)

    δε(φ) = ∂Hε(φ)/∂φ = (1/π) · ε/(ε² + φ²).   (6)

The well-known Chan-Vese model [15] for two-phase image segmentation is an energy minimization problem on c1, c2 and φ, such that

    min E(c1, c2, φ) = ∫Ω (f − c1)² Hε(φ) dx + ∫Ω (f − c2)² (1 − Hε(φ)) dx + γ ∫Ω |∇Hε(φ)| dx,
    s.t. |∇φ| = 1,   (7)

where γ is a penalty parameter for the length term of the curve. c1 and c2 are estimated as

    c1 = ∫Ω f(x) Hε(φ(x)) dx / ∫Ω Hε(φ(x)) dx,   (8)

    c2 = ∫Ω f(x)(1 − Hε(φ(x))) dx / ∫Ω (1 − Hε(φ(x))) dx.   (9)

By introducing Q(c1, c2) = α1(c1 − f)² − α2(c2 − f)², (7) can be rewritten as

    min E(c1, c2, φ) = ∫Ω Q(c1, c2) Hε(φ) dx + γ ∫Ω |∇Hε(φ)| dx,
    s.t. |∇φ| = 1.   (10)

Therefore, the evolution equation of φ(x) can be derived via variational methods and gradient descent as

    ∂φ(x, t)/∂t = [k − Q(c1, c2)] δε(φ(x, t)),   t > 0, x ∈ Ω,
    ∂φ(x, t)/∂N = 0,   t > 0, x ∈ ∂Ω,
    φ(x, 0) = φ0(x),   t = 0, x ∈ Ω,   (11)

where k = ∇ · (∇φ(x, t)/|∇φ(x, t)|).
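To make (5), (6), (8), and (9) concrete, the following sketch (a minimal NumPy illustration under assumed array conventions, not the authors' code) evaluates the mollified Heaviside and Dirac functions and the region averages c1, c2 for a given level set function:

```python
import numpy as np

def heaviside_eps(phi, eps=1.0):
    """Mollified Heaviside function of (5)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac_eps(phi, eps=1.0):
    """Mollified Dirac delta of (6), the derivative of (5) with respect to phi."""
    return (1.0 / np.pi) * eps / (eps ** 2 + phi ** 2)

def region_means(f, phi, eps=1.0):
    """Closed-form region averages c1 and c2 of (8) and (9)."""
    H = heaviside_eps(phi, eps)
    c1 = np.sum(f * H) / (np.sum(H) + 1e-12)
    c2 = np.sum(f * (1.0 - H)) / (np.sum(1.0 - H) + 1e-12)
    return c1, c2
```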
B. THE CHAN-VESE MODEL WITH LANDMARK CONSTRAINTS
Here, we expand the first model to enforce landmark constraints, so that the segmentation contour passes through the landmarks. Let xL = {x1, x2, ..., xl} be the given landmark points, represented through a mask function

    η(x) = 1 if x ∈ xL, and η(x) = 0 otherwise.   (12)

Since the zero level set describes the boundary curve and the landmarks are positioned on the boundary, the landmark constraint is

    φ(x) = 0 if η(x) = 1.   (13)

Thus, the Chan-Vese model (10) can be transformed into the following constrained optimization problem

    min E(c1, c2, φ) = ∫Ω Q(c1, c2) Hε(φ) dx + γ ∫Ω |∇Hε(φ)| dx,
    s.t. φ(x) = 0 if x ∈ xL,
         |∇φ| = 1.   (14)

To incorporate the landmark constraints, we can frame them as an additional penalty term regulated by a parameter µ > 0. The problem then becomes

    min E(c1, c2, φ) = ∫Ω Q(c1, c2) Hε(φ) dx + γ ∫Ω |∇Hε(φ)| dx + (µ/2) ∫Ω η(x) φ² dx,
    s.t. |∇φ| = 1,   (15)

where µ is the penalty parameter and η is defined in (12).
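To make the landmark penalty in (15) concrete, the sketch below (a non-authoritative NumPy illustration; the function names and pixel-coordinate convention are assumptions) builds the mask η from a list of landmark pixels and evaluates the penalty (µ/2) ∫ η φ² dx on a discrete grid:

```python
import numpy as np

def landmark_mask(shape, landmarks):
    """Indicator eta(x) of (12): 1 at landmark pixels, 0 elsewhere."""
    eta = np.zeros(shape)
    for (i, j) in landmarks:          # landmarks given as (row, col) pixel indices
        eta[i, j] = 1.0
    return eta

def landmark_penalty(phi, eta, mu):
    """Discrete version of (mu/2) * integral of eta * phi^2, the last term in (15)."""
    return 0.5 * mu * np.sum(eta * phi ** 2)

# usage sketch: two hypothetical landmarks on a 128x128 grid
# eta = landmark_mask((128, 128), [(40, 60), (80, 90)])
# penalty = landmark_penalty(phi, eta, mu=1.0)
```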
C. THE CHAN-VESE MODEL WITH ELASTICA
In order to recover curves which are not determined by image features, for instance the boundary of an occluded object, [21] proposed the CVE model by combining the Chan-Vese model and the elastica term [3]:

    min E(c1, c2, φ) = ∫Ω Q(c1, c2) Hε(φ) dx + γ ∫Ω [a + b |∇·(∇φ/|∇φ|)|²] |∇Hε(φ)| dx,
    s.t. |∇φ| = 1,   (16)

where ∇·(∇φ/|∇φ|) is the curvature of the level curves, so the weight a + b |∇·(∇φ/|∇φ|)|² contains the squared curvature of the elastica regularizer. The contours obtained by this method tend to be curved rather than straight. Later, Zhu et al. [21] considered the relation

    ∇Hε(φ)/|∇Hε(φ)| = (∇φ · δ(φ))/(|∇φ| · δ(φ)) = ∇φ/|∇φ|   (17)

and studied the following convex optimization problem instead of (16):

    min E(c1, c2, φ) = ∫Ω Q(c1, c2) φ dx + γ ∫Ω [a + b |∇·(∇φ/|∇φ|)|²] |∇φ| dx,
    s.t. |∇φ| = 1.   (18)

However, the investigations in this paper are based solely on (16). The reason for this simplification is to facilitate the calculations, and it does not affect the experimental results.

III. THE CVE MODEL WITH LANDMARK CONSTRAINTS AND ITS ADMM ALGORITHM
Combining (14) and (16), we propose the Chan-Vese model with elastica and landmarks as

    min E(c1, c2, φ) = ∫Ω Q(c1, c2) Hε(φ) dx + (µ/2) ∫Ω η φ² dx + γ ∫Ω [a + b |∇·(∇φ/|∇φ|)|²] |∇Hε(φ)| dx,
    s.t. |∇φ| = 1.   (19)

By adding landmark points, we can force the contour to pass through some feature points to get good results. This is the design idea of the CVE model with landmarks (CVEL). In order to simplify the implementation of (19), we introduce the auxiliary variables p, m, n, and q. The main reason for adding this many intermediate quantities is to avoid the curvature term appearing directly in the calculation and to simplify the computations. The intermediate variables are

    p = ∇φ,   (20)
    m = p/|p|,   (21)
    q = ∇ · n.   (22)

Considering that |m| ≤ 1, (21) can be substituted by a more relaxed set of constraints, |p| − p · m = 0 and |m| ≤ 1. Since p = ∇φ, the constraint |∇φ| = 1 can be rewritten as |p| = 1. Additionally, we introduce a new variable n = m [3] for splitting. Thus, the constraints (20)-(22) can be summarized as

    p = ∇φ,   (23)
    |p| − p · m = 0,   |p| = 1,   (24)
    n = m,   |m| ≤ 1,   (25)
    q = ∇ · n.   (26)
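To make the role of the splitting variables concrete, here is a minimal sketch (assuming NumPy arrays and periodic boundary handling via np.roll; it is an illustration, not the authors' implementation) that evaluates the elastica weight appearing in (16) and (19) and the auxiliary quantities of (20)-(22):

```python
import numpy as np

def grad(u):
    """Forward differences with periodic wrap-around (illustrative choice only)."""
    gx = np.roll(u, -1, axis=0) - u
    gy = np.roll(u, -1, axis=1) - u
    return gx, gy

def div(vx, vy):
    """Backward-difference divergence, the adjoint of the forward gradient above."""
    dx = vx - np.roll(vx, 1, axis=0)
    dy = vy - np.roll(vy, 1, axis=1)
    return dx + dy

def elastica_weight(phi, a, b, eps=1e-8):
    """a + b * |div(grad(phi)/|grad(phi)|)|^2, the curvature-dependent weight in (16), (19)."""
    gx, gy = grad(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    kappa = div(gx / norm, gy / norm)      # curvature of the level curves
    return a + b * kappa ** 2

# splitting variables (20)-(22), given some vector field n = (n1, n2):
# p = grad(phi)                  -> (20)
# m = (p[0]/(|p|+eps), p[1]/(|p|+eps))   -> (21)
# q = div(n1, n2)                -> (22)
```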

Next, to design the ADMM algorithm for the problem, we introduce the Lagrange multipliers λ1, λ2, λ3, λ4 and penalty parameters γ1, γ2, γ3, γ4 and rewrite the energy in (19) as the following augmented Lagrangian functional

    E(c1, c2, φ, p, n, m, q) = ∫Ω Q(c1, c2) H(φ) dx + (µ/2) ∫Ω η φ² dx
        + γ ∫Ω (a + b q²) |p| δε(φ) dx
        + ∫Ω λ1 (|p| − p · m) dx + γ1 ∫Ω (|p| − p · m) dx
        + ∫Ω λ2 · (p − ∇φ) dx + (γ2/2) ∫Ω |p − ∇φ|² dx
        + ∫Ω λ3 · (n − m) dx + (γ3/2) ∫Ω (n − m)² dx + δR(m)
        + ∫Ω λ4 (q − ∇ · n) dx + γ4 ∫Ω (q − ∇ · n)² dx,   (27)

where |p| = 1, R = {m ∈ L²(Ω) : |m| ≤ 1 a.e. in Ω}, and δR(m) is the characteristic function on the convex set R, given by

    δR(m) = 0 if m ∈ R, and δR(m) = +∞ otherwise.

Under the framework of ADMM, the Lagrange multipliers are updated for iterations k = 0, 1, 2, ..., K as

    λ1^{k+1} = λ1^k + γ1 (|p^{k+1}| − p^{k+1} · m^{k+1}),
    λ2^{k+1} = λ2^k + γ2 (p^{k+1} − ∇φ^{k+1}),
    λ3^{k+1} = λ3^k + γ3 (n^{k+1} − m^{k+1}),
    λ4^{k+1} = λ4^k + γ4 (q^{k+1} − ∇ · n^{k+1}),   (28)

and the original minimization problem is split into the following sub-problems

    c1^{k+1} = arg min_{c1} E(c1, c2^k, φ^k, p^k, n^k, m^k, q^k),   (29)
    c2^{k+1} = arg min_{c2} E(c1^{k+1}, c2, φ^k, p^k, n^k, m^k, q^k),   (30)
    φ^{k+1} = arg min_{φ} E(c1^{k+1}, c2^{k+1}, φ, p^k, n^k, m^k, q^k),   (31)
    p^{k+1} = arg min_{p} E(c1^{k+1}, c2^{k+1}, φ^{k+1}, p, n^k, m^k, q^k),   (32)
    n^{k+1} = arg min_{n} E(c1^{k+1}, c2^{k+1}, φ^{k+1}, p^{k+1}, n, m^k, q^k),   (33)
    m^{k+1} = arg min_{m} E(c1^{k+1}, c2^{k+1}, φ^{k+1}, p^{k+1}, n^{k+1}, m, q^k),   (34)
    q^{k+1} = arg min_{q} E(c1^{k+1}, c2^{k+1}, φ^{k+1}, p^{k+1}, n^{k+1}, m^{k+1}, q).   (35)

The solutions to the sub-problems are presented below. Using standard variational methods, we solve (29) and (30) respectively and get

    c1^{k+1} = ∫Ω f(x) H(φ^k(x)) dx / ∫Ω H(φ^k(x)) dx,   (36)

    c2^{k+1} = ∫Ω f(x)(1 − H(φ^k(x))) dx / ∫Ω (1 − H(φ^k(x))) dx.   (37)

For the sub-problem (31), the Euler-Lagrange equations on φ are

    F^{k+1} + µ η φ^{k+1} − γ2 Δφ^{k+1} = 0,   x ∈ Ω,
    (−λ2^k + γ2 (∇φ^{k+1} − p^k)) · N = 0,   x ∈ ∂Ω,   (38)

where

    F^{k+1} = Q^{k+1} δε(φ^k) + (a + b (q^k)²) |p^k| ∇δε(φ^k) + ∇ · λ2^k + γ2 ∇ · p^k.

To solve (32), we can derive p^{k+1} via a generalized soft thresholding formula and a projection formula as below

    A^{k+1} = ∇φ^{k+1} + ((λ1^k + γ1) m^k − λ2^k)/γ2,
    B^{k+1} = (a + b (q^k)²) ∇δε(φ^k) + λ1^k + γ1,
    p̃^{k+1} = max(|A^{k+1}| − B^{k+1}/γ2, 0) · A^{k+1}/(|A^{k+1}| + 10^{-6}),
    p^{k+1} = p̃^{k+1}/|p̃^{k+1}|, with the convention 0/|0| = 0.   (39)

For (33), the Euler-Lagrange equation on n is

    λ3^k + γ3 (n^{k+1} − m^k) + γ4 ∇(q^k − ∇ · n^k) + ∇λ4^k = 0.   (40)

m in (34) can be obtained as an exact solution. Considering the constraint in (25), the projection formula should be augmented as follows

    m̃^{k+1} = n^{k+1} + ((λ1^k + γ1) p^{k+1} + λ3^k)/γ3,
    m^{k+1} = m̃^{k+1}/max(1, |m̃^{k+1}|).   (41)

Lastly, q in (35) also has an analytical solution, given by

    γ4 (q^{k+1} − ∇ · n^{k+1}) + λ4^k + 2 b q^{k+1} |p^{k+1}| δε(φ^{k+1}) = 0.   (42)
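After all primal variables have been refreshed, the multipliers are updated according to (28). A minimal sketch of this dual-ascent step is given below (assuming NumPy arrays, with p, n, m and λ2, λ3 stored as two-component vector fields, and with discrete grad/div helpers such as those sketched in Section IV; the signature is an assumption, not the authors' code):

```python
import numpy as np

def update_multipliers(l1, l2, l3, l4, phi, p, n, m, q, g1, g2, g3, g4, grad, div):
    """Dual ascent steps of (28); grad/div are the discrete operators used elsewhere."""
    gphi = np.stack(grad(phi))                      # grad returns (d/dx1 u, d/dx2 u)
    abs_p = np.sqrt(p[0] ** 2 + p[1] ** 2)
    l1 = l1 + g1 * (abs_p - (p[0] * m[0] + p[1] * m[1]))   # |p| - p.m
    l2 = l2 + g2 * (p - gphi)                              # p - grad(phi)
    l3 = l3 + g3 * (n - m)                                 # n - m
    l4 = l4 + g4 * (q - div(n[0], n[1]))                   # q - div(n)
    return l1, l2, l3, l4
```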


 
In this part we introduced the solution to each variable in the CVEL model and the iterative process of the algorithm. In the next section we will present the discretization schemes and the complete algorithm.

IV. IMPLEMENTATIONS OF THE RELEVANT SUB-PROBLEMS OF MINIMIZATION
To compute (36)-(42) and (28) numerically, we need to design discrete algorithms for the sub-problems. For the sake of simplicity, we discretize the image domain pixel by pixel with the row and column numbers as indices. Then, the gradients can be represented approximately by forward, backward, and central finite differences

    ∇+ u_{i,j} = (∂x1+ u_{i,j}, ∂x2+ u_{i,j}),   ∇− u_{i,j} = (∂x1− u_{i,j}, ∂x2− u_{i,j}),   ∇o u_{i,j} = (∂x1o u_{i,j}, ∂x2o u_{i,j}),

where

    ∂x1+ u_{i,j} = u_{i+1,j} − u_{i,j},   ∂x2+ u_{i,j} = u_{i,j+1} − u_{i,j},
    ∂x1− u_{i,j} = u_{i,j} − u_{i−1,j},   ∂x2− u_{i,j} = u_{i,j} − u_{i,j−1},
    ∂x1o u_{i,j} = (u_{i+1,j} − u_{i−1,j})/2,   ∂x2o u_{i,j} = (u_{i,j+1} − u_{i,j−1})/2.

The discretized Laplacian of φ can be stated as

    Δφ_{i,j} = ∇− · ∇+ φ_{i,j} = φ_{i−1,j} + φ_{i,j−1} + φ_{i+1,j} + φ_{i,j+1} − 4 φ_{i,j}.   (43)
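The finite-difference operators and the five-point Laplacian of (43) translate directly into array operations. The sketch below is one possible NumPy realization (the replicated-border handling is an assumption, since the boundary treatment is not spelled out here):

```python
import numpy as np

def forward_diff(u):
    """Forward differences with zero difference at the last row/column (Neumann-type)."""
    dx = np.vstack([u[1:, :] - u[:-1, :], np.zeros((1, u.shape[1]))])
    dy = np.hstack([u[:, 1:] - u[:, :-1], np.zeros((u.shape[0], 1))])
    return dx, dy

def backward_diff(u):
    """Backward differences with zero difference at the first row/column."""
    dx = np.vstack([np.zeros((1, u.shape[1])), u[1:, :] - u[:-1, :]])
    dy = np.hstack([np.zeros((u.shape[0], 1)), u[:, 1:] - u[:, :-1]])
    return dx, dy

def laplacian(phi):
    """Five-point Laplacian of (43): sum of the four neighbours minus 4*phi."""
    padded = np.pad(phi, 1, mode="edge")          # replicate border values
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * phi)
```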
The other variables can be expressed in similar ways. (36) and (37) can be calculated directly as

    c1^{k+1} = Σ_{i=1}^{M} Σ_{j=1}^{N} f_{i,j} H(φ_{i,j}^k) / Σ_{i=1}^{M} Σ_{j=1}^{N} H(φ_{i,j}^k),   (44)

    c2^{k+1} = Σ_{i=1}^{M} Σ_{j=1}^{N} f_{i,j} (1 − H(φ_{i,j}^k)) / Σ_{i=1}^{M} Σ_{j=1}^{N} (1 − H(φ_{i,j}^k)),   (45)

where M and N are the numbers of rows and columns of the image f.

Next, to discretize the formula of φ obtained in (38), we introduce the following intermediate variables

    F^{k+1} = (a + b (q^k)²) |p^k| ∇δε(φ^k) + Q^{k+1} δε(φ^k) + γ2 ∇ · p^k + ∇ · λ2^k,   x ∈ Ω,
    G^{k+1} = p^k + λ2^k/γ2,   x ∈ ∂Ω,   (46)

and write the original Euler-Lagrange equations in the more concise form below

    F^{k+1} + µ η φ^{k+1} − γ2 Δφ^{k+1} = 0,   x ∈ Ω,
    ∇φ^{k+1} · N = G^{k+1} · N,   x ∈ ∂Ω.   (47)

Based on (43) and (47), we can easily design the Gauss-Seidel iterative scheme of φ as

    (µ η + 4 γ2) φ_{i,j}^{k+1,l+1} = γ2 U(φ^{k+1,l})_{i,j} − F_{i,j}^{k+1},
    U(φ^{k+1,l})_{i,j} = φ_{i+1,j}^{k+1,l} + φ_{i−1,j}^{k+1,l} + φ_{i,j+1}^{k+1,l} + φ_{i,j−1}^{k+1,l}.   (48)

Alternatively, φ can be solved by the Fast Fourier Transform (FFT) [28].
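The sweep in (48) is simple to code. The loop below is a minimal, unoptimized sketch (variable names and the replicated-border treatment are assumptions; a vectorized sweep or the FFT route mentioned above would be used in practice):

```python
import numpy as np

def gauss_seidel_phi(phi, F, eta, mu, gamma2, sweeps=10):
    """In-place Gauss-Seidel sweeps for (48); pixels visited later already see
    updated neighbours because phi is modified as we traverse the grid."""
    M, N = phi.shape
    for _ in range(sweeps):
        for i in range(M):
            for j in range(N):
                # replicate out-of-range neighbours (Neumann-type boundary)
                up    = phi[max(i - 1, 0), j]
                down  = phi[min(i + 1, M - 1), j]
                left  = phi[i, max(j - 1, 0)]
                right = phi[i, min(j + 1, N - 1)]
                U = up + down + left + right
                phi[i, j] = (gamma2 * U - F[i, j]) / (mu * eta[i, j] + 4.0 * gamma2)
    return phi
```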
The discretized solution of p as obtained from (39) is

    A_{i,j}^{k+1} = ∇+ φ_{i,j}^{k+1} + ((λ1_{i,j}^k + γ1) m_{i,j}^k − λ2_{i,j}^k)/γ2,
    B_{i,j}^{k+1} = (a + b (q_{i,j}^k)²) ∇+ δε(φ_{i,j}^{k+1}) + λ1_{i,j}^k + γ1,
    p̃_{i,j}^{k+1} = max(|A_{i,j}^{k+1}| − B_{i,j}^{k+1}/γ2, 0) · A_{i,j}^{k+1}/(|A_{i,j}^{k+1}| + 10^{-6}),
    p_{i,j}^{k+1} = p̃_{i,j}^{k+1}/|p̃_{i,j}^{k+1}|, with the convention 0/|0| = 0.   (49)
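The shrinkage-with-projection step of (49) can be written compactly in NumPy. The sketch below assumes A's ingredients and the scalar shrinkage weight B have already been assembled pixel-wise as in (49); it is an illustration under those assumptions, not the authors' implementation:

```python
import numpy as np

def update_p(grad_phi, m, lmbd1, lmbd2, B, gamma1, gamma2):
    """Generalized soft thresholding with projection, following (49).
    grad_phi, m, lmbd2 are (2, M, N) vector fields; lmbd1 and B are (M, N) arrays."""
    A = grad_phi + ((lmbd1 + gamma1) * m - lmbd2) / gamma2
    abs_A = np.sqrt(A[0] ** 2 + A[1] ** 2)
    shrink = np.maximum(abs_A - B / gamma2, 0.0) / (abs_A + 1e-6)
    p_tilde = shrink * A                               # soft-thresholding step
    abs_pt = np.sqrt(p_tilde[0] ** 2 + p_tilde[1] ** 2)
    scale = np.where(abs_pt > 0, 1.0 / np.maximum(abs_pt, 1e-12), 0.0)
    return p_tilde * scale                             # projection onto |p| = 1, with 0 -> 0
```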
Since the form of n in (40) is similar to that of φ, the solution of n can also be written similarly. Again, to simplify the equation, we introduce

    F_{i,j}^{k+1} = λ3_{i,j}^k + γ4 ∇q_{i,j}^k + ∇λ4_{i,j}^k − γ3 m_{i,j}^k,   x ∈ Ω,
    G^k = q_{i,j}^k + λ4_{i,j}^k/γ4,   x ∈ ∂Ω,   (50)

and (40) becomes

    F^{k+1} + γ3 n^{k+1} − γ4 ∇ · ∇ n^{k+1} = 0,   x ∈ Ω,
    ∇n^{k+1} · N = G^k · N,   x ∈ ∂Ω.   (51)

Introducing the discretized form of n = (n1, n2), its Gauss-Seidel iterative scheme can be easily designed as

    (γ3 + 4 γ4) n1_{i,j}^{k+1,l+1} = γ4 (U(n1^{k+1,l})_{i,j} − 4 n2_{i,j}^{k+1,l}) − F1_{i,j}^{k+1},   n1^{k+1,0} = n1^k,
    (γ3 + 4 γ4) n2_{i,j}^{k+1,l+1} = γ4 (U(n2^{k+1,l})_{i,j} − 4 n1_{i,j}^{k+1,l+1}) − F2_{i,j}^{k+1},   n2^{k+1,0} = n2^k.   (52)

Here, n can be solved with the FFT as well.

For m in (41), its discretized analytical solution with the projection formula is

    m̃_{i,j}^{k+1} = n_{i,j}^{k+1} + ((λ1_{i,j}^k + γ1) p_{i,j}^{k+1} + λ3_{i,j}^k)/γ3,
    m_{i,j}^{k+1} = m̃_{i,j}^{k+1}/max(1, |m̃_{i,j}^{k+1}|).   (53)


The q obtained in (42) can also be reduced to a simple analytical solution

    (γ4 + 2 b |p_{i,j}^{k+1}| δε(φ_{i,j}^{k+1})) q_{i,j}^{k+1} = γ4 ∇ · n_{i,j}^{k+1} − λ4_{i,j}^k.   (54)

After calculating (36)-(42), the Lagrange multipliers are updated as in (28).

In each iteration, the following error tolerances should be checked to determine convergence, i.e.,

    T_s^{k+1} ≤ Tol (s = 1, 2, 3, 4),   Φ^{k+1} ≤ Tol,   Σ^{k+1} ≤ Tol,   (55)

where Tol = 0.01. T_s^{k+1}, Φ^{k+1}, and Σ^{k+1} are defined as

    (T_1^{k+1}, T_2^{k+1}, T_3^{k+1}, T_4^{k+1}) = (‖λ1^{k+1} − λ1^k‖_{L1}/‖λ1^k‖_{L1}, ‖λ2^{k+1} − λ2^k‖_{L1}/‖λ2^k‖_{L1}, ‖λ3^{k+1} − λ3^k‖_{L1}/‖λ3^k‖_{L1}, ‖λ4^{k+1} − λ4^k‖_{L1}/‖λ4^k‖_{L1}),   (56)

    Φ^{k+1} = ‖φ^{k+1} − φ^k‖_{L1}/‖φ^k‖_{L1},   Σ^{k+1} = ‖E^{k+1} − E^k‖/‖E^k‖.   (57)
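The relative-change tests in (55)-(57) amount to a few array reductions. A minimal sketch (assuming the multipliers are kept as a list of arrays; names are hypothetical):

```python
import numpy as np

def relative_change(new, old, eps=1e-12):
    """Relative L1 change used in (56) and (57)."""
    return np.sum(np.abs(new - old)) / (np.sum(np.abs(old)) + eps)

def converged(lmbd_new, lmbd_old, phi_new, phi_old, E_new, E_old, tol=0.01):
    """Stopping tests of (55): multipliers, level set function, and energy."""
    T = [relative_change(lmbd_new[s], lmbd_old[s]) for s in range(4)]
    Phi = relative_change(phi_new, phi_old)
    Sigma = abs(E_new - E_old) / (abs(E_old) + 1e-12)
    return all(t <= tol for t in T) and Phi <= tol and Sigma <= tol
```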
The complete algorithm is summarized in Algorithm 1.

Algorithm 1: ADMM
1: Initialization: Set α1, α2, µ, a, b.
2: while any stopping criterion is not satisfied do
      Calculate c1^{k+1}, c2^{k+1} from (29) and (30)
      Calculate φ^{k+1} from (31)
      Calculate p^{k+1} from (32)
      Calculate n^{k+1} from (33)
      Calculate m^{k+1} from (34)
      Calculate q^{k+1} from (35)
      Calculate λ1^{k+1}, λ2^{k+1}, λ3^{k+1}, λ4^{k+1} from (28)
3: end while
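The loop structure of Algorithm 1 maps directly onto code. The skeleton below is a non-authoritative sketch: the per-equation sub-solvers are supplied by the caller as callables (their names and the simplified stopping test are assumptions, not the authors' implementation):

```python
import numpy as np

def cvel_admm(f, eta, solvers, params, max_iter=200, tol=0.01):
    """Skeleton of Algorithm 1. `solvers` is a dict of user-supplied callables,
    one per sub-problem: 'init', 'c', 'phi', 'p', 'n', 'm', 'q', 'multipliers', 'energy'."""
    state = solvers["init"](f, eta, params)            # phi, p, n, m, q, multipliers
    energy_old = np.inf
    for k in range(max_iter):
        state = solvers["c"](state, f, params)         # (29)-(30) via (36)-(37)
        state = solvers["phi"](state, f, eta, params)  # (31) via Gauss-Seidel (48) or FFT
        state = solvers["p"](state, params)            # (32) via shrinkage + projection (49)
        state = solvers["n"](state, params)            # (33) via Gauss-Seidel (52) or FFT
        state = solvers["m"](state, params)            # (34) via projection (53)
        state = solvers["q"](state, params)            # (35) via closed form (54)
        state = solvers["multipliers"](state, params)  # (28)
        energy = solvers["energy"](state, f, eta, params)
        if abs(energy - energy_old) <= tol * max(abs(energy_old), 1e-12):
            break                                      # simplified stand-in for (55)-(57)
        energy_old = energy
    return state
```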
V. NUMERICAL EXPERIMENTS
We devised four sets of numerical experiments with distinct goals in mind. The first set compared the performance of the CV, the CVL, the CVE, and the CVEL models in contour inpainting or interpolation, in cases where smaller and larger regions are missing from the original images. The second set of experiments examined the dependence of the segmentation result on the number of landmark points. The third set demonstrated how landmark points improve the segmentation efficiency of the CVEL model, and the final set presented two applications in medical image segmentation.

A. COMPARISONS WITH PREVIOUS MODELS
Since the CVE, CVL, and CVEL models are different extensions of the CV model for the purpose of missing contour recovery, we designed some experiments to compare their performance. First, we segmented an image of the letters 'UCLA' with small damaged regions [21]. We show the original broken image in Fig.1(a) and the initialization of the zero level set for all of the models in Fig.1(b). Segmentation results obtained via the CV, CVL, CVE, and CVEL are shown in Fig.1(c), Fig.1(d), Fig.1(e), and Fig.1(f), respectively. The parameters used in the CVEL model are γ1 = 1, γ2 = 3, γ3 = 5, γ4 = 10, α1 = 0.5, α2 = 0.5. One landmark was placed in the middle of each piece of missing contour. The results show that the CVL and the CVEL can both recover small sections of the missing contours. The parameters used for the CVE were γ1 = 1, γ2 = 10, γ3 = 5, γ4 = 5, α1 = 0.5, α2 = 0.5. The segmentation contours can potentially be improved by different sets of parameters, especially in the case of the CVE, which is highly dependent on parameters. However, the CVEL not only produces smoother curves due to the elastica regularizer but also uses landmarks to ease the dependence of the results on parameter tuning.

FIGURE 1: Results of four different methods to repair the broken letters 'UCLA'.

Next, to compare the performance of the CVL and CVEL in recovering larger missing contours, we conducted the second experiment shown in Fig.2. Fig.2(a) shows a triangle with a missing corner, Fig.2(b) shows the initial contour and landmark points, and Fig.2(c) and Fig.2(d) are the segmentation results via the CVL and CVEL models respectively; the parameters are γ1 = 1, γ2 = 3, γ3 = 5, γ4 = 10, α1 = 1.1, α2 = 0.9 for both models. Although they obtained similar results, the CVL needed 26 landmark points whereas the CVEL needed only 20.

FIGURE 2: Broken triangle repair experiment.

Furthermore, we set up an experiment to compare the performance of the CVL and CVEL, as shown in Fig.3. Fig.3(a) shows the original broken image, Fig.3(b) presents the initial zero level set and landmark points, and Fig.3(c) and Fig.3(d) give the segmented results via the CVL and the CVEL models respectively. The parameters for the CVEL model are γ1 = 7, γ2 = 20, γ3 = 5, γ4 = 2, α1 = 1.1, α2 = 0.9. The conclusion is the same as in [21], i.e., it is hard to inpaint the external missing curve via the CVL, but the CVEL works well.

FIGURE 3: Broken circle repair experiment.

Fig.4 presents another comparison between the CVE and the CVEL. The parameters for the CVEL model are γ1 = 7, γ2 = 20, γ3 = 5, γ4 = 2, α1 = 1.1, α2 = 1.2. Fig.4(a) shows the original broken circle image with added noise, Fig.4(b) shows the initial level set and the given landmark points, and Fig.4(c) and Fig.4(d) are the segmentation results via the CVE model and the CVEL model respectively. Though this image includes noise, the same conclusion as in [21] can be drawn, i.e., the CVE model fails at inpainting long contours while the CVEL does a good job.

FIGURE 4: Broken circle with added noise repair experiment.

B. THE DEPENDENCE ON TUNING PARAMETERS AND LANDMARK POINTS
In this set of experiments, we studied the effect of the number of landmark points and their positions on the segmentation result. Setting different numbers of landmark points led to different results. The more landmarks we set within a certain limit, the more accurate the result turned out to be. However, increasing the number of landmarks beyond that limit did not increase segmentation accuracy, as shown in Fig.5 and Fig.6. In Fig.5, panels (a) to (d) and the corresponding results in (e) to (h) use 2, 10, 18, and 24 landmark points respectively; in Fig.6, panels (a) to (d) and the results in (e) to (h) use 3, 9, 11, and 15 landmark points respectively. As we can see, the results became increasingly better with more landmarks. However, setting over 24 landmarks did not improve the result further.

FIGURE 5: Different numbers of landmark points affect the results of the triangle segmentation experiment differently.

FIGURE 6: Different numbers of landmark points affect the results of the rectangle segmentation experiment.

The placement of landmark points is also essential, especially when the total number of landmarks is small. Using Fig.5(d) as an example of a well-segmented image, we proceeded to take away landmarks from different locations. In Fig.7(a), Fig.7(b), and Fig.7(c), we removed two landmarks from the bottom, top, and middle of the set of landmarks, respectively. As a result, the recovered contour in Fig.7(d) had distortions around the base section, the sharpness of the tip was not well maintained in Fig.7(e), and the result in Fig.7(f) did not change significantly. Therefore, we observe that it is more effective to place landmarks at the vertices or corners of an object. The better the landmarks capture the key features, the fewer landmarks we need.

FIGURE 7: Experiments on the effect of the location of landmark points on the results.

C. EFFICIENCY
Next, we examined the efficiency of the CVEL in terms of convergence time. We first considered whether the CVEL could speed up the segmentation process compared to the CVE by constructing the experiment in Fig.8, where we marked the entire contour of the palm with landmarks for the CVEL and compared its performance with the CVE. Fig.8(a) is the original image, and Fig.8(b) is the initial zero level set for both models. We obtained the segmentation results shown in Fig.8(c) and Fig.8(d) via the CVE in five steps and the CVEL in two steps, respectively. This shows that using landmarks in the CVEL model can increase the efficiency of segmentation.

FIGURE 8: Hand image segmentation efficiency test.

FIGURE 9: The plots of the relative errors in the Lagrange multipliers, the relative error in the level set functions, and the energies for the two examples 'hand' and 'UCLA'. The first row lists the plots for 'hand' and the second one for 'UCLA'.

We then checked the convergence behaviour of the Lagrange multipliers, the level set function, and the total energy (the quantities defined in (56) and (57)) of the CVEL algorithm, where Fig.9(a), Fig.9(c), and Fig.9(e) are recorded from the experiment in Fig.8, and Fig.9(b), Fig.9(d), and Fig.9(f) are recorded from the experiment in Fig.1. On the one hand, we can see that convergence was reached quickly in both experiments. On the other hand, we observe that the total energy increased towards the end in Fig.9(f). This phenomenon is due to the formation of the illusory contour. In the initial stage, the CV and elastica terms play a major role in moving the curve towards the natural object boundary. As the curve approaches the landmarks, it moves away from the boundary and onto the landmarks. This process inevitably raises the total energy.

D. APPLICATIONS IN REAL LIFE
In this section, we present some applications of the CVEL in segmenting CT images. The problem is often difficult to solve due to the presence of fine details.

FIGURE 10: Classical brain CT image segmentation experiments.

Fig.10(a) is the original brain CT image, Fig.10(b) shows the initial level set function for segmentation, and Fig.10(c) shows the result via the CV model, which separates the white matter from the gray matter in the brain. However, with the help of good landmarks, the CVEL model can segment the entire brain using the same level set initialization, as shown by the result in Fig.10(e). The results in Fig.10(d)-(h) are obtained by selecting landmark points at intensities of 20, 40, 60, 80, and 100, respectively. As shown by the experiments above, the segmentation result can be easily adjusted by choosing different landmarks.

FIGURE 11: Segmentation experiments of a brain CT image with noise.

Fig.11 gives an example of the segmentation of a CT image with noise. The main difficulty in this experiment is to separate the adjacent tissues in the image. For the original image in Fig.11(a), we initialize the level set function as in Fig.11(b). Fig.11(c) and Fig.11(d) show the results obtained via the CV model and the CVEL model, respectively. In the case of the CVEL, we used landmark points to successfully separate the adhering sections.

VI. CONCLUDING REMARKS
In this paper, our contributions are twofold. First, we presented a Chan-Vese model with elastica and landmarks (CVEL) under the variational level set framework. The new model combines the classical Chan-Vese model (CV), the Chan-Vese model with landmarks (CVL), and the Chan-Vese model with elastica (CVE). Second, we designed an ADMM algorithm to solve the new model. A variety of numerical experiments show that the CVEL performs better than the CVE in segmentation accuracy and can recover larger missing boundaries than the CVL.

For future work, we wish to design more efficient ways to solve the sub-problems of the CVEL model as well as integrate automatic landmark detection methods. Ultimately, the aim is to achieve automatic image segmentation with landmarks.

VII. ACKNOWLEDGEMENT
The authors thank the editor and the anonymous reviewers for their helpful comments and valuable suggestions.

REFERENCES
[1] H.-K. Zhao, T. Chan, B. Merriman, S. Osher, A variational level set approach to multiphase motion, Journal of Computational Physics 127 (1) (1996) 179-195.
[2] S. Osher, N. Paragios, Geometric Level Set Methods in Imaging, Vision, and Graphics, Springer Science & Business Media, 2003.
[3] D. Mumford, J. Shah, Optimal approximations by piecewise smooth functions and associated variational problems, Communications on Pure and Applied Mathematics 42 (5) (1989) 577-685.
[4] T. F. Chan, J. J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, Vol. 94, SIAM, 2005.
[5] A. Mitiche, I. B. Ayed, Variational and Level Set Methods in Image Segmentation, Vol. 5, Springer Science & Business Media, 2010.
[6] Y. Chen, H. D. Tagare, S. Thiruvenkadam, F. Huang, D. Wilson, K. S. Gopinath, R. W. Briggs, E. A. Geiser, Using prior shapes in geometric active contours in a variational framework, International Journal of Computer Vision 50 (3) (2002) 315-328.
[7] D. Cremers, Nonlinear dynamical shape priors for level set segmentation, Journal of Scientific Computing 35 (2-3) (2008) 132-143.
[8] S. Chen, D. Cremers, R. J. Radke, Image segmentation with one shape prior - a template-based formulation, Image and Vision Computing 30 (12) (2012) 1032-1042.
[9] S. R. Thiruvenkadam, T. F. Chan, B.-W. Hong, Segmentation under occlusions using selective shape prior, SIAM Journal on Imaging Sciences 1 (1) (2008) 115-142.
[10] J. Duan, G. Bello, J. Schlemper, W. Bai, T. J. Dawes, C. Biffi, A. de Marvao, G. Doumoud, D. P. O'Regan, D. Rueckert, Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach, IEEE Transactions on Medical Imaging 38 (9) (2019) 2151-2164.
[11] J. Modersitzki, Numerical Methods for Image Registration, Oxford University Press on Demand, 2004.
[12] A. A. Goshtasby, Image Registration: Principles, Tools and Methods, Springer Science & Business Media, 2012.
[13] K. C. Lam, L. M. Lui, Landmark- and intensity-based registration with large deformations via quasi-conformal maps, SIAM Journal on Imaging Sciences 7 (4) (2014) 2364-2392.
[14] H. Pan, W. Liu, L. Li, G. Zhou, A novel level set approach for image segmentation with landmark constraints, Optik 182 (2019) 257-268.
[15] T. F. Chan, L. A. Vese, Active contours without edges, IEEE Transactions on Image Processing 10 (2) (2001) 266-277.
[16] L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena 60 (1-4) (1992) 259-268.
[17] D. Mumford, Elastica and computer vision, Algebraic Geometry and Its Applications (1994).
[18] J. Shen, S. H. Kang, T. F. Chan, Euler's elastica and curvature-based inpainting, SIAM Journal on Applied Mathematics 63 (2) (2003) 564-592.
[19] W. Zhu, T. Chan, Image denoising using mean curvature of image surface, SIAM Journal on Imaging Sciences 5 (1) (2012) 1-32.
[20] W. Zhu, X.-C. Tai, T. Chan, Augmented Lagrangian method for a mean curvature based image denoising model, Inverse Problems and Imaging 7 (4) (2013) 1409-1432.
[21] W. Zhu, X.-C. Tai, T. Chan, Image segmentation using Euler's elastica as the regularization, Journal of Scientific Computing 57 (2) (2013) 414-438.
[22] W. Zhu, T. Chan, S. Esedoglu, Segmentation with depth: A level set approach, SIAM Journal on Scientific Computing 28 (5) (2006) 1957-1973.
[23] J. Zhang, K. Chen, A new augmented Lagrangian primal dual algorithm for elastica regularization, Journal of Algorithms & Computational Technology 10 (4) (2016) 325-338.
[24] L. Tan, Z. Pan, W. Liu, J. Duan, W. Wei, G. Wang, Image segmentation with depth information via simplified variational level set formulation, Journal of Mathematical Imaging and Vision 60 (1) (2018) 1-17.
[25] C. Wu, X.-C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM Journal on Imaging Sciences 3 (3) (2010) 300-339.
[26] X.-C. Tai, J. Hahn, G. J. Chung, A fast algorithm for Euler's elastica model using augmented Lagrangian method, SIAM Journal on Imaging Sciences 4 (1) (2011) 313-344.
[27] T. Goldstein, B. O'Donoghue, S. Setzer, R. Baraniuk, Fast alternating direction optimization methods, SIAM Journal on Imaging Sciences 7 (3) (2014) 1588-1623.
[28] J. Duan, W. O. C. Ward, L. Sibbett, et al., Introducing diffusion tensor to high order variational model for image reconstruction, Digital Signal Processing 69 (2017) 323-336.
[29] J. Duan, Z. Pan, X. Yin, W. Wei, G. Wang, Some fast projection methods based on Chan-Vese model for image segmentation, EURASIP Journal on Image and Video Processing 2014 (1) (2014) 7.

JINTAO SONG received the bachelor's degree from the College of Computer Science and Technology, Qingdao University, Qingdao, China, in 2017. He is currently pursuing the master's degree at Qingdao University. His current research interests include variational models of image and geometry processing.

HUIZHU PAN received the bachelor's degree from Mount Holyoke College, USA, in 2017. She is currently pursuing the master's degree in the Department of Computing, Curtin University. Her current research interests include variational models of image and geometry processing, and machine learning.

WANQUAN LIU received the B.Sc. degree in applied mathematics from Qufu Normal University, China, in 1985, the M.Sc. degree in control theory and operations research from the Chinese Academy of Sciences, in 1988, and the Ph.D. degree in electrical engineering from Shanghai Jiao Tong University, in 1993. He held the ARC Fellowship, the U2000 Fellowship, and the JSPS Fellowship, and has attracted research funds of over 2.4 million dollars from different sources. He is currently an Associate Professor with the Department of Computing, Curtin University. His current research interests include large-scale pattern recognition, signal processing, machine learning, and control systems. He is on the editorial board of nine international journals.

ZISEN XU is the Secretary of the second general Party branch and Deputy Director of the Security and Logistics Department of the West Coast Affiliated Hospital of Qingdao University, where he is a senior engineer. He received a bachelor's degree in medical imaging in 1992 and a master's degree in software engineering from Qingdao University, where he studied from September 2004 to June 2007. From March 2006 to March 2019, he served as Deputy Director of the Medical Equipment Department, Director of the Medical Engineering Department, and senior engineer at the Affiliated Hospital of Qingdao University.

ZHENKUAN PAN received the B.E. degree from Northwestern Polytechnical University, Xi'an, China, in 1987, and the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China, in 1992. He is currently a Professor with the College of Computer Science and Technology, Qingdao University, Qingdao, China. He has authored or coauthored more than 300 academic papers in the areas of computer vision and dynamics and control. His research interests include variational models of image and geometry processing, and multibody system dynamics.
