
Optik 127 (2016) 161–167

Contents lists available at ScienceDirect

Optik
journal homepage: www.elsevier.de/ijleo

A novel spectral clustering method with superpixels for image segmentation

Yifang Yang a,c,∗, Yuping Wang b, Xingsi Xue b

a School of Mathematics and Statistics, Xidian University, Xi'an 710071, China
b School of Computer Science and Technology, Xidian University, Xi'an 710071, China
c College of Science, Xi'an Shiyou University, Xi'an 710065, China

Article history: Received 23 December 2014; Accepted 11 October 2015

Keywords: Spectral clustering; Kernel fuzzy-clustering; Image segmentation; Superpixels

Abstract

The similarity measure is critical to the performance of spectral clustering. The most commonly used similarity measure for spectral clustering is the Gaussian kernel similarity measure; however, selecting an accurate scaling parameter for the Gaussian kernel function is difficult. To reduce the sensitivity to the scaling parameter, a novel spectral clustering method with superpixels for image segmentation (SCS) is proposed in this paper. In particular, a novel kernel fuzzy similarity measure is presented, which uses the membership distribution in the partition matrix obtained by kernel fuzzy C-means clustering (KFCM). In addition, the superpixel is introduced into image segmentation to alleviate the computational burden of the affinity matrix. The experimental results show that our approach performs steadily under different parameters and obtains good clustering results on various natural images. Moreover, the evaluation comparisons also indicate that our method achieves comparable accuracy and significantly outperforms most state-of-the-art algorithms.

© 2015 Elsevier GmbH. All rights reserved.

1. Introduction

In the past few decades, spectral clustering algorithms [1–6,22–24], combined with spectral graph theory, have shown great promise in data clustering and image segmentation and have been successfully used to solve data clustering and graph partitioning problems. Owing to their high performance in data clustering and simplicity of implementation, spectral clustering algorithms have attracted more and more interest. However, there still exist some open problems in traditional spectral clustering algorithms: (1) the commonly used Gaussian kernel function based similarity measure cannot fully reflect the complex spatial distribution of a dataset, and it is undesirable when clusters develop a complicated manifold structure [7]; (2) the overall time complexity and space complexity can reach O(n^3) and O(n^2), respectively [4], when the scale n of the dataset is relatively large, which makes it difficult to store and decompose a large affinity matrix, especially for an image.

To overcome the influence of the scale parameter, Zelnik-Manor and Perona [8] proposed a self-tuning spectral clustering algorithm (STSC) that utilizes a local scale for each data point to replace the single scale parameter; however, the local scale parameter in STSC, the distance to a nearby neighbor, is still a Euclidean distance factor and cannot contribute to clustering any better than the scale parameter of the Gaussian kernel function [14]. Fischer and Buhmann [15] proposed the path-based similarity; it reflects the idea that no matter how far apart two points are physically, they should be considered as belonging to one cluster if they are connected by a set of successive points in dense regions. This is intuitively reasonable; however, it is not robust enough against noise and outliers [16]. Zhao et al. [17] proposed a fuzzy similarity measure utilizing the partition matrix obtained by the fuzzy c-means clustering algorithm (FCM). Moreover, several recent papers report that the kernel fuzzy-clustering algorithm has better performance than the standard FCM. Zeyu et al. [18] reported the good performance of the kernel fuzzy c-means algorithm (KFCM) on a 2-dimensional non-linearly separable synthetic dataset and compared the obtained results with those produced by the standard FCM; the classification rate for KFCM is much higher than for the standard FCM. Kernel-based clustering algorithms can cluster specific nonspherical clusters, such as the ring cluster, and considerably outperform FCM for the same number of clusters [19].

As for the second problem, in order to reduce the huge computation of the affinity matrix, several recent approaches [29,31–33,38] have started to introduce superpixels into segmentation to reduce the computational cost, i.e., to define the affinity matrix by the nodes of superpixels instead of pixels.

∗ Corresponding author at: School of Mathematics and Statistics, Xidian University, Xi'an 710071, China.
E-mail address: yangyifang@xsyu.edu.cn (Y. Yang).


Segmenting images into superpixels, as supporting regions for feature vectors and primitives that reduce the computational complexity, has become a common fundamental step in various image analysis and computer vision tasks. Mori [27] demonstrated using superpixels to improve the efficiency and accuracy of model search in an image. The image segmentation methods of [31,32] use superpixels to initialize the segmentation and achieve significantly better performance.

Motivated by the aforementioned methods, a novel spectral clustering method with superpixels for image segmentation (SCS) is proposed in this paper. In particular, a novel kernel fuzzy similarity measure is presented, which uses the membership distribution in the partition matrix obtained by KFCM to reduce the sensitivity to the scaling parameter. In addition, the superpixel is introduced into image segmentation to reduce the computational cost of the affinity matrix, and a new measure is proposed to construct the affinity matrix for spectral clustering.

The rest of this paper is organized as follows. In Section 2, we present a short overview of the Ng–Jordan–Weiss (NJW) method [21]. A new kernel fuzzy similarity measure, which is used to construct the affinity matrix, and the proposed SCS method for image segmentation are described in detail in Section 3. Experimental results, analysis, discussion and parameter settings are presented in Section 4. Finally, conclusions are given in Section 5.

2. Spectral clustering algorithm and the NJW method

Spectral clustering methods widely adopt graph-based approaches for data clustering. Given a dataset X = {x_1, x_2, ..., x_n} in R^d with k clusters, we represent the dataset X as a weighted graph G(V, E), in which V = {x_i} is the set of n vertices representing the n data points and E = {W_ij} is the set of weighted edges indicating the pairwise similarity between data points x_i and x_j. The element W_ij of the affinity matrix is measured by a typical Gaussian function:

$$W_{ij} = \begin{cases} e^{-d(x_i, x_j)^2 / 2\sigma^2}, & i \neq j \\ 0, & i = j \end{cases} \qquad (1)$$

Furthermore, the degree matrix D is a diagonal matrix whose element $D_{ii} = \sum_{j=1}^{n} W_{ij}$ is the degree of the point x_i. In this framework, the clustering problem can be seen as a graph partitioning problem.

As a spectral approach to the graph partitioning problem, the NJW method [21] is one of the most widely used spectral clustering algorithms. It uses the normalized affinity matrix as the Laplacian matrix and solves the optimization of the normalized cut criterion by considering the eigenvectors associated with the largest eigenvalues. The idea of the NJW method is to find a new representation of the patterns on the first k eigenvectors of the Laplacian matrix. The details of the NJW method are as follows.

(1) Form the affinity matrix $W \in R^{n \times n}$ defined by formula (1).
(2) Compute the degree matrix D and the normalized Laplacian matrix $L = D^{-1/2} W D^{-1/2}$.
(3) Let $\lambda_1 = 1 \geq \lambda_2 \geq \cdots \geq \lambda_k$ be the k largest eigenvalues of L and $p_1, p_2, \ldots, p_k$ the corresponding eigenvectors. Form the matrix $P = [p_1, p_2, \ldots, p_k] \in R^{n \times k}$, where $p_i$ is the ith column vector.
(4) Form the matrix Y from P by renormalizing each row of P to unit length, i.e., $Y_{ij} = P_{ij} / (\sum_j P_{ij}^2)^{1/2}$.
(5) Treat each row of Y as a point in $R^k$, and cluster the rows into k clusters via the k-means algorithm to obtain the final clustering of the original dataset.
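For concreteness, the following minimal Python/NumPy sketch implements steps (1)–(5). It is our illustration, not the authors' implementation (the experiments in Section 4 were run in MATLAB), and the fixed scaling parameter sigma is exactly the quantity whose selection this paper argues is difficult:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def njw_clustering(X, k, sigma=1.0):
    """Sketch of NJW steps (1)-(5) for an (n, d) data array X."""
    # Step (1): Gaussian affinity of Eq. (1), with a zero diagonal.
    W = np.exp(-cdist(X, X, 'sqeuclidean') / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Step (2): degree matrix and L = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    L = W * np.outer(d_inv_sqrt, d_inv_sqrt)
    # Step (3): eigenvectors of the k largest eigenvalues (L is
    # symmetric, so np.linalg.eigh returns them in ascending order).
    _, eigvecs = np.linalg.eigh(L)
    P = eigvecs[:, -k:]
    # Step (4): renormalize each row of P to unit length.
    Y = P / np.maximum(np.linalg.norm(P, axis=1, keepdims=True), 1e-12)
    # Step (5): k-means on the rows of Y.
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)
```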
However, the similarity measure in the NJW algorithm, i.e. the Gaussian kernel similarity measure, is sensitive to the scaling parameter. To overcome this defect, the new similarity measure proposed in this paper is used to resolve the sensitivity problem of the NJW method.

3. A novel spectral clustering method with superpixels for image segmentation

In this section, the details of the proposed SCS method are given. Firstly, the KFCM algorithm is presented to obtain the partition matrix with its membership distribution; then a kernel fuzzy similarity measure is proposed which can reduce the sensitivity to the scaling parameter of NJW; finally, a superpixel technology is introduced into image segmentation for the purpose of reducing the computational cost.

3.1. KFCM algorithm

KFCM is the kernel version of FCM, exploiting a kernel function for calculating the distance of data points from the cluster centers. In KFCM, the data points are mapped from the input space to a high-dimensional space H (a Hilbert space, usually called the kernel space) where the data show simpler structures or patterns. For clustering algorithms, the data in the new space are more spherical and can therefore be clustered more easily by FCM algorithms [9–12].

Given a dataset X = {x_1, x_2, ..., x_n} in the p-dimensional space R^p, in KFCM a nonlinear map is defined as $\Phi: R^p \to H$, $x \mapsto \Phi(x)$, where $x \in X$ and $\Phi$ is a nonlinear mapping function from the input space to a high-dimensional feature space H. The key notion in kernel-based learning is that the mapping function $\Phi$ need not be explicitly specified; the dot product in the high-dimensional feature space can be calculated through the kernel function $K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j)$. Based on the above, the KFCM algorithm partitions X into c fuzzy subsets by minimizing the following objective function:

$$J_m(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} (u_{ik})^m \left\| \Phi(x_k) - \Phi(v_i) \right\|^2, \qquad (2)$$

where:

• c is the number of clusters;
• n is the number of data points;
• $v_i$ (1 ≤ i ≤ c) is the centroid of the ith cluster;
• $u_{ik}$ (1 ≤ i ≤ c, 1 ≤ k ≤ n) represents the fuzzy membership of the kth data point in the ith cluster, satisfying $\sum_{i=1}^{c} u_{ik} = 1$, where:
  – $U = (u_{ik} \mid i = 1, 2, \ldots, c;\ k = 1, 2, \ldots, n)$ is the partition matrix,
  – $V = \{v_1, v_2, \ldots, v_c\} \subset R^p$ is the set of cluster centers,
  – m is a constant, known as the fuzzifier (or fuzziness index), which controls the fuzziness of the resulting partition; in particular, we set m = 2 in this paper;
• $\|\Phi(x_k) - \Phi(v_i)\|^2$ is the squared distance between $\Phi(x_k)$ and $\Phi(v_i)$;
• the distance in the feature space is calculated through the kernel in the input space as follows:

$$\|\Phi(x_k) - \Phi(v_i)\|^2 = \Phi(x_k)\cdot\Phi(x_k) - 2\,\Phi(x_k)\cdot\Phi(v_i) + \Phi(v_i)\cdot\Phi(v_i) = K(x_k, x_k) - 2K(x_k, v_i) + K(v_i, v_i). \qquad (3)$$

In KFCM, the Gaussian function is taken as the kernel function, namely $K(x, y) = e^{-\|x-y\|^2/\sigma^2}$. If $\sigma$, the kernel width, is a positive number,

then $K(x, x) = 1$ and, according to Eq. (3), Eq. (2) can be rewritten as

$$J_m(U, V) = 2 \sum_{i=1}^{c} \sum_{k=1}^{n} (u_{ik})^m \left(1 - K(x_k, v_i)\right), \qquad (4)$$

where $1 - K(x_k, v_i)$ can be considered a robust distance measurement derived in the kernel space [13].

Finally, solving Eq. (4) for the minimum value of $J_m$ yields the partition matrix U and the cluster centers V as follows:

$$u_{ik} = \frac{\{1/(1 - K(x_k, v_i))\}^{1/(m-1)}}{\sum_{j=1}^{c} \{1/(1 - K(x_k, v_j))\}^{1/(m-1)}}, \qquad (5)$$

$$v_i = \frac{\sum_{k=1}^{n} (u_{ik})^m K(x_k, v_i)\, x_k}{\sum_{k=1}^{n} (u_{ik})^m K(x_k, v_i)}. \qquad (6)$$

With the partition matrix with membership distribution obtained by KFCM, we propose in this work a kernel fuzzy similarity measure, which is presented in the next section.
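As an illustration of the alternating updates of Eqs. (5) and (6), here is a minimal Python/NumPy sketch (ours, not the authors' code); the initialization by random sampling, the convergence test, and the small constant guarding the 1/(1 − K) terms are our assumptions:

```python
import numpy as np

def gaussian_kernel(X, V, sigma):
    """K(x, v) = exp(-||x - v||^2 / sigma^2) for all pairs, shape (n, c)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

def kfcm(X, c, sigma=1.0, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Sketch of kernel fuzzy c-means: alternate Eq. (5) and Eq. (6)."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]   # initial centers (assumed)
    eps = 1e-10
    for _ in range(n_iter):
        K = gaussian_kernel(X, V, sigma)          # (n, c)
        # Eq. (5): u_ik proportional to (1/(1 - K(x_k, v_i)))^(1/(m-1)),
        # normalized so memberships of each point sum to 1 over clusters.
        inv = (1.0 / np.maximum(1.0 - K, eps)) ** (1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Eq. (6): centers are means weighted by (u_ik)^m K(x_k, v_i).
        w = (U ** m) * K                          # (n, c)
        V_new = (w.T @ X) / np.maximum(w.sum(axis=0)[:, None], eps)
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return U.T, V   # partition matrix U is c x n, as in Eq. (7)
```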
3.2. A kernel fuzzy similarity measure

The membership value $u_{ij}$ in the partition matrix U (see Eq. (7)) obtained by KFCM denotes the probability that the jth point belongs to the ith cluster. Generally speaking, we can assume that two data points belonging to the same cluster have a higher similarity and, conversely, that two points from different clusters have a lower one.

$$U = \begin{pmatrix}
u_{11} & u_{12} & \cdots & u_{1j} & \cdots & u_{1n} \\
u_{21} & u_{22} & \cdots & u_{2j} & \cdots & u_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
u_{i1} & u_{i2} & \cdots & u_{ij} & \cdots & u_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
u_{c1} & u_{c2} & \cdots & u_{cj} & \cdots & u_{cn}
\end{pmatrix} \qquad (7)$$

Let $U = \{u_1, \ldots, u_i, \ldots, u_n\}$, where $u_i$ is the ith column vector of matrix U; it consists of the membership values of data point $x_i$ with respect to the c clusters. We can thus assess the similarity between two data points through their membership distributions: the greater the inner product of $u_i$ and $u_j$, the higher the similarity of $x_i$ and $x_j$; conversely, the smaller the inner product of $u_i$ and $u_j$, the lower the similarity of $x_i$ and $x_j$. Based on this idea, a new kernel fuzzy similarity measure is proposed.

Algorithm 1 (a new kernel fuzzy similarity measure).
Input: Dataset X to be clustered and parameter t (nearest neighbor number).
Output: The affinity matrix S of the dataset.

Step 1. Cluster the dataset X into c clusters by KFCM to get the partition matrix U.
Step 2. Let $U = \{u_1, \ldots, u_i, \ldots, u_n\}$, where $u_i$ is the ith column vector of matrix U, which consists of the membership values of data point $x_i$ with respect to the c clusters.
Step 3. For each $x_i$ and $x_j$:
  if $x_i$ and $x_j$ are not t-nearest neighbors of each other, $s_{ij} = 0$;
  if $x_i$ and $x_j$ belong to the same cluster and are t-nearest neighbors of each other, $s_{ij} = 1$;
  otherwise, $s_{ij} = e^{(\ln 2)(u_i \cdot u_j)} - 1$.
Step 4. Finally, the affinity matrix S is obtained.
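The following Python sketch of Algorithm 1 is our illustration; it reuses the kfcm sketch above and assumes that "t-nearest neighbors of each other" means mutual t-nearest neighbors under the Euclidean distance, which the paper does not spell out:

```python
import numpy as np
from scipy.spatial.distance import cdist

def kernel_fuzzy_affinity(X, c, t, **kfcm_kwargs):
    """Sketch of Algorithm 1: affinity S from KFCM memberships."""
    # Step 1: partition matrix U (c x n); columns u_i are memberships.
    U, _ = kfcm(X, c, **kfcm_kwargs)
    labels = U.argmax(axis=0)            # hard cluster per point
    # Mutual t-nearest neighbors (assumed interpretation).
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :t]
    is_nn = np.zeros_like(D, dtype=bool)
    is_nn[np.repeat(np.arange(len(X)), t), nn.ravel()] = True
    mutual = is_nn & is_nn.T
    # Step 3: s_ij = 1 for mutual neighbors in the same cluster,
    # exp((ln 2) <u_i, u_j>) - 1 otherwise, and 0 for non-neighbors.
    inner = U.T @ U                      # pairwise <u_i, u_j>, in [0, 1]
    S = np.where(labels[:, None] == labels[None, :],
                 1.0, np.exp(np.log(2.0) * inner) - 1.0)
    S[~mutual] = 0.0
    return S
```

Note that since the inner product of two membership vectors lies in [0, 1], the "otherwise" branch also yields values in [0, 1], so all three cases produce affinities on a common scale.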
3.3. A superpixel-based preprocessing technology

The superpixel, originally proposed by Ren and Malik [25], represents a local, coherent region which preserves most of the characteristics necessary for image information mining. With superpixels, the computational cost decreases significantly, especially for probabilistic, combinatorial or discriminative approaches, since the underlying graph is greatly simplified in terms of graph nodes and edges [29]. In this paper, a superpixel-based preprocessing technology is adopted to segment the image into several superpixels, and then each superpixel is treated as a pixel for further segmentation.

In our work, the texture feature is extracted from the image through the Non-subsampled Contourlet Transform (NSCT), and the subband energy information of the NSCT decomposition is then used to describe the image features. In this way, a ten-dimensional energy feature using a 16×16 window can be extracted by a three-level NSCT, where the energy of each subband is

$$E = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} |coef(i, j)|, \qquad (8)$$

where M × N is the subband size and |coef(i, j)| is the coefficient in the ith row and jth column of the subband.
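As a small illustration of Eq. (8) (ours): given the coefficient arrays of an NSCT decomposition, which we assume come from an external implementation since NumPy/SciPy provide no NSCT routine, each subband energy is simply the mean absolute coefficient:

```python
import numpy as np

def subband_energy(subbands):
    """Eq. (8): E = (1/MN) * sum |coef(i, j)| for each M x N subband.

    `subbands` is a list of 2-D coefficient arrays, e.g. the ten
    subbands of a three-level NSCT decomposition of a 16x16 window
    (the decomposition itself is assumed to be supplied externally).
    Returns the ten-dimensional energy feature vector.
    """
    return np.array([np.abs(coefs).mean() for coefs in subbands])
```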
Based on all the above, the outline of SCS is as follows:

Algorithm 2 (SCS).
Input: An M × N image I to be segmented; the number k of segments; the t-nearest-neighbor parameter t in SCS.
Output: Segmented image.

Step 1. Extract features through NSCT and compute the texture features of each pixel in the image by Eq. (8) to obtain the ten-dimensional energy feature dataset.
Step 2. Segment the image into several superpixels using the TurboPixels method of Levinshtein et al. [28].
Step 3. Extract color histograms with 64 color attributes in HSV color space for every superpixel.
Step 4. Compute the mean texture features of the pixels in each superpixel region by Eq. (8) to obtain the texture feature dataset F with 10 attributes.
Step 5. Combine the color features with the texture features to obtain the feature dataset X with 74 attributes.
Step 6. Construct the affinity matrix S using the proposed kernel fuzzy similarity measure.
Step 7. Let $\lambda_1 = 1 \geq \lambda_2 \geq \cdots \geq \lambda_k$ be the k largest eigenvalues of S and $p_1, p_2, \ldots, p_k$ the corresponding eigenvectors, and form the matrix $P = [p_1, p_2, \ldots, p_k] \in R^{n \times k}$, where $p_i$ is the ith column vector.
Step 8. Get the matrix Y from P by normalizing each of its rows to unit length (i.e., $Y_{ij} = P_{ij} / (\sum_j P_{ij}^2)^{1/2}$).
Step 9. Consider each row of Y as a point in $R^k$, and cluster the rows into k clusters by the k-means algorithm to obtain the final segmentation of image I.
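To make Steps 3–5 concrete, the sketch below (ours) assembles the 74-dimensional feature of a single superpixel; the 4 × 4 × 4 HSV binning is an assumption, as the paper only states that the color histogram has 64 attributes:

```python
import numpy as np

def superpixel_feature(hsv_pixels, texture_pixels):
    """Steps 3-5 of SCS for one superpixel (illustrative sketch).

    hsv_pixels:     (n_pix, 3) HSV values scaled to [0, 1].
    texture_pixels: (n_pix, 10) per-pixel energy features from Eq. (8).
    Returns a 74-dimensional feature: a 64-bin color histogram
    (4 bins per H, S, V channel -> 4*4*4 = 64, assumed binning)
    plus the mean 10-dimensional texture feature.
    """
    hist, _ = np.histogramdd(hsv_pixels, bins=(4, 4, 4),
                             range=((0, 1), (0, 1), (0, 1)))
    color = hist.ravel() / max(len(hsv_pixels), 1)   # 64 attributes
    texture = texture_pixels.mean(axis=0)            # 10 attributes
    return np.concatenate([color, texture])          # 74 attributes
```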

4. Experimental results and analysis

In order to investigate the quality of the SCS algorithm visually, we tested SCS on three benchmark synthetic datasets and on the Berkeley Segmentation Database (BSD) [30], analyzing its parameter sensitivity and scaling performance by comparison with other clustering algorithms. In particular, the parameter t (nearest neighbor number) of SCS is randomly selected in the interval [5, 50] with step length 5 in the first experiment, and fixed at 9 in the second one. Our experiments are implemented in MATLAB 7.10 (R2010a) and performed on a computer with an Intel(R) Xeon(R) 2.53 GHz CPU running Windows XP Professional.

4.1. Experiments on three benchmark synthetic datasets

The three benchmark synthetic datasets are Threecircles, LineBlobs and Twomoon, shown in Fig. 1(a)-(c) respectively. In this experiment, two algorithms, Nyström and STSC, are selected for comparison with ours. In Nyström, the scale parameter σ is varied in the interval [0.05, 1] with step length 0.01, while in STSC the setting of the parameter t is the same as in SCS. In particular, to reduce instability in the initialization, the centroids obtained by k-means are taken as the initial centroids of SCS. For all these methods, we performed 10 independent runs under their own parameters.

In our work, the widely used measure Accuracy is utilized to evaluate the clustering performance. Its calculation needs to build a permutation mapping function that maps each cluster index to a true class label. According to [20], Accuracy can be defined as

$$\text{Accuracy} = \frac{\sum_{i=1}^{n} \delta(y_i, \text{map}(c_i))}{n}, \qquad (9)$$

where n is the number of samples, $y_i$ and $c_i$ denote the true label and the algorithm's clustering label respectively, and $\delta(y, c)$ equals one if y = c and zero otherwise. The function map(·) maps each cluster label to a category label; the higher the Accuracy, i.e., the smaller the clustering error, the better the performance.
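The permutation mapping map(·) in Eq. (9) can be computed with the Hungarian algorithm; the following minimal sketch (ours) uses SciPy's linear_sum_assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Eq. (9): best-match accuracy between cluster and class labels."""
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    # Contingency table: counts[i, j] = #points in cluster i with class j.
    counts = np.zeros((len(clusters), len(classes)), dtype=int)
    for i, cl in enumerate(clusters):
        for j, cs in enumerate(classes):
            counts[i, j] = np.sum((y_pred == cl) & (y_true == cs))
    # Hungarian algorithm finds the permutation map(.) maximizing matches
    # (negate because linear_sum_assignment minimizes cost).
    row, col = linear_sum_assignment(-counts)
    return counts[row, col].sum() / len(y_true)
```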
The average maximum accuracies of the clustering results of Nyström, STSC and SCS on the Threecircles, LineBlobs and Twomoon datasets are given in Table 1, and their clustering accuracy under varying parameters is shown in Figs. 2–4.

Fig. 1. Three original datasets: (a) Threecircles, (b) LineBlobs, (c) Twomoon.

Fig. 2. A clustering quality comparison between Nyström, STSC and SCS using the Threecircles dataset (accuracy vs. σ for Nyström in panel (a); accuracy vs. t in panels (b) and (c)).

Table 1
Comparison of maximum average accuracy rate (%).

Algorithm   Line-blobs   Threecircles   Twomoon   σ             t
Nyström     86.43        61.20          81.60     0.05:0.01:1   –
STSC        86.20        62.31          98.71     –             5:5:50
SCS         97.37        79.60          98.75     –             5:5:50

It can be seen from Table 1 that the average maximum accuracy of SCS is better than that of the Nyström and STSC algorithms. In Figs. 2–4, for the Threecircles dataset, the clustering accuracy of the Nyström and STSC algorithms changes greatly with the scaling parameter, whereas the clustering results of SCS are extremely steady and hardly affected by the parameter t (nearest neighbor number). For the LineBlobs and Twomoon datasets, likewise, the clustering results of the SCS algorithm are more stable than those of the Nyström and STSC algorithms. Moreover, the average accuracy rate of SCS is much higher than that of Nyström and STSC on the three datasets. To sum up, our method is robust to the scaling parameter on the three synthetic datasets.

4.2. Experiments on Berkeley Segmentation Database datasets

In this section, our experiments are carried out on the BSD, which consists of 300 natural images of diverse scene categories. In the BSD, each image is manually segmented by a number of different human subjects, and on average five ground truths are available per image. In our work, we compare SCS with five clustering algorithms, namely Ncut [26], Mean Shift [36], FH [37], Nyström [4], and STSC [8]. Each method is evaluated following common practice (e.g. [31,32]) with four criteria: (1) Probabilistic Rand Index (PRI) [33], measuring the likelihood of a pair of pixels being grouped consistently in two segmentations; (2) Variation of Information (VoI) [34], computing the amount of information of one result not contained in the other; (3) Global Consistency Error (GCE) [30], measuring the extent to which one segmentation is a refinement of the other;

and (4) Boundary Displacement Error (BDE) [35], computing the average displacement between the boundaries of two segmentations. A segmentation is better if the PRI is larger and the other three criteria are smaller, when compared to the ground truths.

Fig. 3. A clustering quality comparison between Nyström, STSC and SCS using the LineBlobs dataset.

Fig. 4. A clustering quality comparison between Nyström, STSC and SCS using the Twomoon dataset.

Table 2
Average performance on the Berkeley Database.

Methods      PRI      VoI      GCE      BDE
Human        0.8574   1.1040   0.0797   4.994
Ncut         0.7242   2.9061   0.2232   17.15
Mean Shift   0.7958   1.9725   0.1888   14.41
FH           0.7139   3.3949   0.1746   16.67
Nyström      0.7639   2.273    0.2776   11.67
STSC         0.6779   2.3962   0.2462   13.85
SCS          0.8414   1.6573   0.1795   10.8783

For the STSC algorithm and our algorithm, the parameter t (nearest neighbor number) is set to 9 in the following experiments, while in Nyström the scale parameter σ is set to 0.3 and the number of randomly sampled pixels in the image is set to 100. The feature extraction method in the Nyström algorithm is the same as in our method. Among the several methods, Nyström and SCS adopt the superpixel method. Like other graph partitioning methods, the number of segments is manually set for each image (e.g. [31,32]). For Nyström and SCS, we choose 1000 as the number of superpixels in one image.

The scores are shown in Table 2, where the three best results are highlighted in bold for each criterion. SCS ranks first in PRI and VoI by a large margin compared to the previous methods, apart from the human segmentations. Considering all four criteria, SCS appears to work best overall.
The visual comparison is shown in Fig. 5. From the visual results, we can see that our method performs better than all the other classic methods. As seen in Fig. 5 and Table 2, SCS behaves more robustly than all the other classic methods. Moreover, the segmentation results of Ncut lose some particular information of the sources in the image, and the segmentation results of Nyström exhibit over-segmentation. In contrast, the proposed method obtains much better segmentation results. We can visually see that the Nyström algorithm cannot obtain good segmentation results due to the sensitivity of spectral clustering to the scaling parameter, and that the STSC algorithm cannot obtain good segmentation results due to the influence of the nearest neighbor parameter. The proposed SCS scales well and is able to better eliminate the impact of these parameters.

Fig. 5. Segmentation examples on the Berkeley Segmentation Database. (a) Original, (b) Ncut, (c) STSC, (d) Nyström, and (e) SCS.

5. Conclusion

In this paper, we propose a novel spectral clustering method with superpixels for image segmentation. In particular, a novel similarity measure for constructing the affinity matrix of spectral clustering is presented to effectively reduce the sensitivity to the scaling parameter of spectral clustering, and a superpixel-based preprocessing technology is applied to reduce the computational cost. Comparison with other classic clustering algorithms in the experiments shows that SCS is extremely steady and hardly affected by the parameter t (nearest neighbor number), and that it achieves comparable accuracy and performs significantly better than most current classical algorithms. In future work, we will try to improve the efficiency of the proposed method.

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants No. 61472297, U1404622 and 61503082.

References

[1] C. Fowlkes, S. Belongie, F. Chung, J. Malik, Spectral grouping using the Nyström method, IEEE Trans. Pattern Anal. Mach. Intell. 26 (2) (2004) 214–225.
[2] H. Liu, F. Zhao, L. Jiao, Fuzzy spectral clustering with robust spatial information for image segmentation, Appl. Soft Comput. 12 (11) (2012) 3636–3647.
[3] H.Q. Liu, L.C. Jiao, F. Zhao, Non-local spatial spectral clustering for image segmentation, Neurocomputing 74 (1–3) (2010) 461–471.
[4] S.P. Gou, X. Zhuang, L.C. Jiao, Quantum immune fast spectral clustering for SAR image segmentation, IEEE Geosci. Remote Sens. Lett. 9 (1) (2012) 8–12.
[5] Y. Yang, Y. Wang, Y.-M. Cheung, Kernel fuzzy similarity measure-based spectral clustering for image segmentation, in: HCI International 2013, Proceedings: LNCS 8008, 2013, pp. 246–253.
[6] N. Rebagliati, A. Verri, Spectral clustering with more than K eigenvectors, Neurocomputing 74 (9) (2011) 1391–1401.
[7] M.C. Su, C.H. Chou, A modified version of the K-means algorithm with a distance based on cluster symmetry, IEEE Trans. Pattern Anal. Mach. Intell. 23 (6) (2001) 674–680.
[8] L. Zelnik-Manor, P. Perona, Self-tuning spectral clustering, Adv. Neural Inf. Process. Syst. (2004) 1601–1608.
[9] D.W. Kim, K.Y. Lee, D. Lee, K.H. Lee, Evaluation of the performance of clustering algorithms in kernel-induced feature space, Pattern Recognit. 38 (4) (2005) 607–611.
[10] D. Graves, W. Pedrycz, Performance of kernel-based fuzzy clustering, Electron. Lett. 43 (25) (2007) 1445–1446.
[11] D. Graves, W. Pedrycz, Kernel-based fuzzy clustering and fuzzy clustering: a comparative experimental study, Fuzzy Sets Syst. 161 (4) (2010) 522–543.
[12] C.L. Chen, C.P. Chen, M. Lu, A multiple-kernel fuzzy C-means algorithm for image segmentation, IEEE Trans. Syst. Man Cybern. B: Cybern. 41 (5) (2011) 1263–1274.
[13] D.Q. Zhang, S.C. Chen, A novel kernelized fuzzy C-means algorithm with application in medical image segmentation, Artif. Intell. Med. 32 (1) (2004) 37–50.
[14] X. Zhang, J. Li, H. Yu, Local density adaptive similarity measurement for spectral clustering, Pattern Recognit. Lett. 32 (2011) 352–358.
[15] B. Fischer, J.M. Buhmann, Path-based clustering for grouping of smooth curves and texture segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 25 (4) (2003) 513–518.
[16] H. Chang, D.-Y. Yeung, Robust path-based spectral clustering, Pattern Recognit. 41 (1) (2008) 191–203.
[17] F. Zhao, H. Liu, L. Jiao, Spectral clustering with fuzzy similarity measure, Digital Signal Process. 21 (2011) 701–709.
[18] L. Zeyu, T. Shiwei, X. Jing, J. Jun, Modified FCM clustering based on kernel mapping, in: Proceedings of the International Society for Optical Engineering, vol. 4554, 2001, pp. 241–245.
[19] D. Graves, W. Pedrycz, Kernel-based fuzzy clustering and fuzzy clustering: a comparative experimental study, Fuzzy Sets Syst. 161 (2010) 522–543.
[20] M. Wu, B. Schölkopf, A local learning approach for clustering, in: Proceedings of Neural Information Processing Systems (NIPS), 2007, pp. 1529–1536.

[21] A.Y. Ng, M.I. Jordan, Y. Weiss, On spectral clustering: analysis and an algorithm, Adv. Neural Inf. Process. Syst. (2002) 849–856.
[22] U. von Luxburg, A tutorial on spectral clustering, Stat. Comput. 17 (4) (2007) 395–416.
[23] F.R. Bach, M.I. Jordan, Learning spectral clustering, in: Proceedings of Neural Information Processing Systems (NIPS), 2003.
[24] W.-Y. Chen, Y. Song, H. Bai, et al., Parallel spectral clustering in distributed systems, IEEE Trans. Pattern Anal. Mach. Intell. 33 (3) (2011) 568–586.
[25] X. Ren, J. Malik, Learning a classification model for segmentation, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2003, pp. 10–17.
[26] J. Shi, J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 22 (8) (2000) 888–905.
[27] G. Mori, Guiding model search using segmentation, in: Proceedings of the 10th International Conference on Computer Vision (ICCV), vol. 2, Beijing, China, 2005, pp. 1417–1423.
[28] A. Levinshtein, A. Stere, K.N. Kutulakos, D.J. Fleet, S.J. Dickinson, K. Siddiqi, TurboPixels: fast superpixels using geometric flows, IEEE Trans. Pattern Anal. Mach. Intell. 31 (12) (2009) 2290–2297.
[29] P. Wang, G. Zeng, R. Gan, J. Wang, H. Zha, Structure-sensitive superpixels via geodesic distance, Int. J. Comput. Vis. 103 (1) (2013) 1–21.
[30] D.R. Martin, C. Fowlkes, D. Tal, J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2001, pp. 416–425.
[31] T. Kim, K. Lee, S. Lee, Learning full pairwise affinities for spectral segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 35 (7) (2013) 1690–1703.
[32] Z. Li, X.-M. Wu, S.-F. Chang, Segmentation using superpixels: a bipartite graph partitioning approach, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 789–796.
[33] R. Unnikrishnan, C. Pantofaru, M. Hebert, Toward objective evaluation of image segmentation algorithms, IEEE Trans. Pattern Anal. Mach. Intell. 29 (6) (2007) 929–944.
[34] M. Meila, Comparing clusterings: an axiomatic view, in: International Conference on Machine Learning (ICML), ACM, 2005, pp. 577–584.
[35] J. Freixenet, X. Muñoz, D. Raba, J. Martí, X. Cufí, Yet another survey on image segmentation: region and boundary information integration, in: European Conference on Computer Vision (ECCV), 2002.
[36] D. Comaniciu, P. Meer, Mean shift: a robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell. (2002) 603–619.
[37] P. Felzenszwalb, D. Huttenlocher, Efficient graph-based image segmentation, Int. J. Comput. Vis. 59 (2) (2004) 167–181.
[38] H. Lu, R. Zhang, S. Li, X. Li, Spectral segmentation via midlevel cues integrating geodesic and intensity, IEEE Trans. Cybern. 43 (6) (2013) 2170–2178.
