

2017 25th Iranian Conference on Electrical Engineering (ICEE)

Texture Classification Using Shearlet Transform and GLCM

Khatere Meshkini
Faculty of Electrical and Computer Engineering
Science and Research Branch, Islamic Azad University
Tehran, Iran
khatere.meshkini@srbiau.ac.ir

Hassan Ghassemian
Image Processing and Information Analysis Lab
Faculty of Electrical and Computer Engineering
Tarbiat Modares University
Tehran, Iran
ghassemi@modares.ac.ir

Abstract— Texture is one of the most important and effective elements in image recognition and image processing. There are many procedures for texture classification; recent research is based on different transforms such as the Ripplet transform. In this paper, textured images are classified using the Shearlet transform. Shearlet transforms provide a general framework for analyzing and representing data with anisotropic information at multiple scales. As a consequence, signal singularities, such as edges, can be precisely detected and located in images. In the present research we use the GLCM and the Shearlet transform to extract texture features in order to classify textured images. In this method, Shearlet coefficients and co-occurrence features are first extracted from the textured images; textures are then classified using the inner product of the Shearlet coefficients and the co-occurrence features. The performance of the proposed feature set is evaluated on the Brodatz texture album. Experimental results demonstrate the superiority of the proposed descriptor compared to the other methods considered in this paper.

Keywords- Shearlet transform, Ripplet transform, texture classification, co-occurrence features

I. INTRODUCTION

Feature extraction is the first stage of image texture analysis. Numerous feature extraction techniques have been proposed for texture processing, such as linear feature extraction [1]. The results obtained from this stage are used for texture classification. Texture classification is the process of distinguishing the different textures present in given images. Texture classification techniques are generally grouped into four main categories, namely 1) structural, 2) statistical, 3) model based, and 4) transform based.

There has been abundant interest in wavelet methods for texture classification. Some spatial wavelet transforms, such as the nondecimated complex wavelet transform, are used for detecting boundary regions [2]. The wavelet transform provides an optimal representation of functions with one-dimensional discontinuities. Unfortunately, this is not the case in two dimensions. To overcome the limitations of wavelets in higher dimensions, the Ridgelet transform, based on the Radon transform, was introduced. This method maps line singularities into point singularities and provides information about the direction of linear edges [3].

The Ridgelet transform was a good step forward, but it still had problems resolving two-dimensional singularities. To solve these problems the Curvelet transform was proposed; it can resolve two-dimensional discontinuities along smooth curves [4]. However, there is no particular reason why parabolic scaling was selected in the Curvelet transform to achieve non-isotropic directionality. By generalizing the scaling law, a new transform called the Ripplet transform was obtained, in which two parameters are added to the traditional Curvelet transform [5].

In general, the Curvelet structure is not constructed directly in the discrete domain and does not provide a geometric representation of multidimensional data; its implementation is more involved, less efficient, and requires sophisticated mathematical analysis. Recently, a new representation scheme, called the Shearlet transform, has been introduced [6]. The mathematical implementation of this new method is simple and very precise, and it succeeds in representing multiresolution information. The Shearlet transform provides a more flexible theoretical tool for approximating two-dimensional smooth functions with discontinuities and properly extracts the singularities of an image. The Shearlet transform can set up a different number of directions at different decomposition scales. As a result, the Shearlet approach can be associated with a multiresolution analysis (MRA) and behaves uniformly in both the discrete and continuous domains.

The gray level co-occurrence matrix (GLCM) was suggested by Haralick [7]. It is one of the most widely used texture analysis algorithms because it can be implemented easily. The GLCM contains information about the positions of pixels having similar gray level values and has great potential for increasing the classification rate of textured images. Because co-occurrence features are good at extracting the spatial information of textures, in this paper we first apply the Shearlet transform and the GLCM separately to the images and then use the inner product of the coefficient matrices for classification. Experimental results show a significant increase in classification rate.

II. FEATURE EXTRACTION METHODS

There are many different feature extraction methods that have been introduced and used for texture classification problems. Most of the methods popularly used in recent years are statistical and transform based.

A. Ripplet Transform

The Ripplet transform is a higher-dimensional generalization of the Curvelet transform and has all the properties of the Curvelet transform except parabolic scaling; it represents images or two-dimensional signals at different scales and in different directions. The Ripplet transform has two additional parameters, c and d, so the Curvelet transform is just a special case of the Ripplet transform with c = 1 and d = 2 [5]. Here we introduce the discrete Ripplet transform, which is well suited to images.

By discretizing the parameters of the continuous Ripplet transform we obtain the discrete Ripplet transform. The scale parameter $a$ is sampled at dyadic intervals, while the position parameter $\vec{b}$ and the rotation parameter $\theta$ are sampled at equally spaced intervals. $a$, $\vec{b}$ and $\theta$ are substituted with the discrete parameters $a_j$, $\vec{b}_k$ and $\theta_l$, which satisfy

$$a_j = 2^{-j}, \qquad \vec{b}_k = \left[c \cdot 2^{-j} \cdot k_1,\; 2^{-j/d} \cdot k_2\right]^T, \qquad \theta_l = \frac{2\pi}{c} \cdot 2^{-\lfloor j(1-1/d)\rfloor} \cdot l,$$

where $\vec{k} = [k_1, k_2]^T$, $(\cdot)^T$ denotes the transpose of a vector and $j, k_1, k_2, l \in \mathbb{Z}$. The degree $d$ of the Ripplets can be chosen from $\mathbb{R}$; it is represented in the form of a rational number, $d = n/m$, $n, m \neq 0 \in \mathbb{Z}$, since every real number can be approximated by rationals. The preferred values of $c$ and $d$ are chosen from $\mathbb{N}$, with $n$ and $m$ both prime. In the frequency domain, the corresponding frequency response of the Ripplet function has the form

$$\hat{\rho}_j(r, \omega) = \frac{1}{\sqrt{c}}\, 2^{-j\frac{1+1/d}{2}}\, W(2^{-j} \cdot r)\, V\!\left(\frac{1}{c} \cdot 2^{-\lfloor j(1-1/d)\rfloor} \cdot \omega - l\right) \qquad (1)$$

where W and V satisfy the admissibility conditions

$$\sum_{j=0}^{+\infty} \left|W(2^{-j} \cdot r)\right|^2 = 1 \qquad (2)$$

$$\sum_{l=-\infty}^{+\infty} \left|V\!\left(\frac{1}{c} \cdot 2^{-\lfloor j(1-1/d)\rfloor} \cdot \omega - l\right)\right|^2 = 1 \qquad (3)$$

The 'wedge' corresponding to the Ripplet function in the frequency domain is

$$H_{j,l}(r, \theta) = \left\{ 2^{j} \le |r| \le 2^{2j},\; \left|\theta - \frac{\pi}{c} \cdot 2^{-\lfloor j(1-1/d)\rfloor} \cdot l\right| \le \frac{\pi}{2}\, 2^{-j} \right\} \qquad (4)$$

The behavior of the Ripplet transform is easier to grasp in the discrete domain, where the roles of the parameters c and d become clear. Changing the parameter c changes the directions obtained in the high-pass bands: c controls the number of directions in each high-pass band, while d controls how the number of directions varies across bands. For d > 1 the Ripplet transform has anisotropic behavior, which provides the capability of extracting singularities along arbitrary curves.

The discrete Ripplet transform of an M × N image $f(n_1, n_2)$ takes the form

$$R_{j,\vec{k},l} = \sum_{n_1=0}^{M-1} \sum_{n_2=0}^{N-1} f(n_1, n_2)\, \overline{\rho_{j,\vec{k},l}(n_1, n_2)} \qquad (5)$$

where $R_{j,\vec{k},l}$ are the Ripplet coefficients. The image can be reconstructed through the inverse discrete Ripplet transform, $\tilde{f}(n_1, n_2) = \sum_{j} \sum_{\vec{k}} \sum_{l} R_{j,\vec{k},l}\, \rho_{j,\vec{k},l}(n_1, n_2)$.
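To make the effect of c and d concrete, the short sketch below is a minimal illustration of our own (Python is our choice here, and the helper name num_directions is hypothetical, not from the paper or any library): it counts how many angular wedges the discretization $\theta_l = \frac{2\pi}{c}\,2^{-\lfloor j(1-1/d)\rfloor}\,l$ produces per scale, so one can see how the direction count grows with j and how c and d change that growth.

```python
import math

def num_directions(j, c=1, d=2):
    """Approximate number of angular wedges at scale j for the grid
    theta_l = (2*pi/c) * 2**(-floor(j*(1 - 1/d))) * l; counting l over one
    full turn gives c * 2**floor(j*(1 - 1/d)) wedges."""
    return int(c * 2 ** math.floor(j * (1 - 1 / d)))

# c rescales the angular step; d sets how fast the number of directions
# grows with scale (d = 2 reproduces the parabolic, Curvelet-like behavior).
for c, d in [(1, 2), (2, 2), (1, 4)]:
    print(f"c={c}, d={d}:", [num_directions(j, c, d) for j in range(6)])
```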
B. Shearlet Transform

Shearlets are very precise in representing the edges of images. In fact, shearlets provide a well-localized representation of images at different locations, scales and orientations, so they constitute a useful development in multiscale geometric analysis with a simple mathematical structure. Shearlet properties such as anisotropy make this transform especially useful in texture classification, and it opens new directions for solving difficult problems in image processing. The construction of the Shearlet transform generated by separable functions is described below [8].

A set of vectors $\{e_i\}_{i \in \Gamma}$ constitutes a frame for a Hilbert space $\mathcal{H}$ if there are two positive constants $A, B$ such that for each $f \in \mathcal{H}$ we have

$$A\|f\|^2 \le \sum_{i \in \Gamma} |\langle f, e_i \rangle|^2 \le B\|f\|^2 \qquad (6)$$

Also, for $y \in \mathbb{R}^2$ and $f \in L^2(\mathbb{R}^2)$ we define the translation and dilation operators

$$(T_y f)(x) = f(x - y), \qquad (D_M f)(x) = |\det M|^{-1/2}\, f(M^{-1} x) \qquad (7)$$

where $M$ is an invertible 2 × 2 matrix. Finally, for $a > 1$ we define the anisotropic dilation matrices and the shear matrices

$$A_1 = \begin{pmatrix} a & 0 \\ 0 & a^{1/2} \end{pmatrix}, \qquad B_1 = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \qquad (8)$$

and

$$A_2 = \begin{pmatrix} a^{1/2} & 0 \\ 0 & a \end{pmatrix}, \qquad B_2 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \qquad (9)$$

We are now ready to define a shearlet frame as follows. Let $c \in \mathbb{R}_+$ be the sampling constant. For $\psi^1, \dots, \psi^L,\ \tilde{\psi}^1, \dots, \tilde{\psi}^L \in L^2(\mathbb{R}^2)$ and $\varphi \in L^2(\mathbb{R}^2)$, define the family of directional elements

$$\Psi_\psi = \left\{ \psi^i_{j,l,k} :\ j \ge 0,\ |l| \le \lceil a^{j/2} \rceil,\ k \in \mathbb{Z}^2,\ i = 1, \dots, L \right\} \qquad (10)$$
and the complete shearlet system

$$\Psi = \left\{ T_{ck}\,\varphi : k \in \mathbb{Z}^2 \right\} \cup \left\{ \psi^i_{j,l,k} :\ j \ge 0,\ -\lceil a^{j/2}\rceil \le l \le \lceil a^{j/2}\rceil,\ k \in \mathbb{Z}^2,\ i = 1, \dots, L \right\} \cup \left\{ \tilde{\psi}^i_{j,l,k} :\ j \ge 0,\ -\lceil a^{j/2}\rceil \le l \le \lceil a^{j/2}\rceil,\ k \in \mathbb{Z}^2,\ i = 1, \dots, L \right\} \qquad (11)$$

where

$$\psi^i_{j,l,k} = D_{A_1^{j}} D_{B_1^{l}} T_{ck}\, \psi^i, \qquad \tilde{\psi}^i_{j,l,k} = D_{A_2^{j}} D_{B_2^{l}} T_{ck}\, \tilde{\psi}^i \qquad (12)$$

If Ψ is a frame for $L^2(\mathbb{R}^2)$, then we call the functions $\psi^i_{j,l,k}$ and $\tilde{\psi}^i_{j,l,k}$ in the system Ψ shearlets.

Given the sampling constant c, the shearlets in Ψ are constructed by applying the two families of matrices $A_\ell$ and $B_\ell$, which are anisotropic dilation matrices and shear matrices respectively, to the fixed generating functions $\psi$ and $\tilde{\psi}$. Extending these $A_\ell$ and $B_\ell$ over arbitrary orientations and different scales yields an efficient representation of the singularities and edges in an image. In fact, these matrices produce the different windows of multiresolution information that will be analyzed in the classification system. It has been shown that two-dimensional piecewise smooth functions with singularities can be approximated at a nearly optimal rate using shearlets. Furthermore, one can show that shearlets completely analyze the singular structures of piecewise smooth images. Singularities and irregular structures carry important and special information about an image, and the shearlet representation is able to capture all of this information. For example, discontinuities in the intensity of an image indicate the presence of edges. Fig. 1 shows how shearlets localize the frequency domain [6].

Fig. 1. Tiling of the frequency plane $\mathbb{R}^2$ induced by the shearlets in Ψ [6].
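To illustrate how the dilation and shear matrices of Eqs. (8) and (9) steer a single generating function over scales and orientations, the following minimal sketch (our own Python illustration; the function name and the default a = 2 are assumptions, not taken from the paper) forms the composite matrix $B^l A^j$ that warps the generator at scale j and shear index l.

```python
import numpy as np

def shearlet_matrix(j, l, a=2.0, horizontal=True):
    """Composite matrix B^l A^j used to warp the generating function,
    following Eqs. (8)-(9): anisotropic dilation A and shear B; the second
    cone uses the transposed pair A2, B2."""
    if horizontal:
        A = np.array([[a, 0.0], [0.0, a ** 0.5]])
        B = np.array([[1.0, 1.0], [0.0, 1.0]])
    else:
        A = np.array([[a ** 0.5, 0.0], [0.0, a]])
        B = np.array([[1.0, 0.0], [1.0, 1.0]])
    return np.linalg.matrix_power(B, l) @ np.linalg.matrix_power(A, j)

# Scale j controls the anisotropic stretching, shear index l the orientation.
print(shearlet_matrix(j=2, l=1))
```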
C. First and Second Order Statistics Features

Texture is a feature described by the pixels of a region, and these features are calculated by mathematical statistical operations. Let the variable I represent the gray levels of an image region. The first-order histogram P(I) is defined as

$$P(I) = \frac{\text{number of pixels with gray level } I}{\text{total number of pixels in the region}} \qquad (13)$$

Based on the definition of P(I), the mean and the most important central moments, such as the variance, skewness and kurtosis, are used to measure the deviation of the gray levels from the mean, the degree of histogram asymmetry around the mean, and the histogram sharpness, respectively [9].

Second-order statistics describe how often pairs of pixels occur together in a region; they are therefore more informative than first-order statistics, which consider individual pixels in isolation. The occurrence of a gray-level configuration can be described by a matrix of relative frequencies P(I, J), which records how frequently two pixels with gray levels (I, J) appear in the window. P(I, J) is defined as

$$P(I, J) = \frac{\text{number of pixel pairs with gray levels } (I, J)}{\text{total number of pixel pairs in the region}} \qquad (14)$$

Angular Second Moment (ASM), contrast, correlation, homogeneity and entropy are some of the features extracted from this second-order histogram.
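As a small illustration of the first-order features built on Eq. (13), here is a minimal sketch (our own Python example; SciPy is assumed to be available and the function name is illustrative) that computes the mean, variance, skewness and kurtosis of the gray levels in a region.

```python
import numpy as np
from scipy import stats

def first_order_features(region):
    """First-order statistics of the gray levels of a region: mean,
    variance, skewness and kurtosis, as described in Section II.C."""
    g = np.asarray(region, dtype=float).ravel()
    return {
        "mean": g.mean(),
        "variance": g.var(),
        "skewness": stats.skew(g),
        "kurtosis": stats.kurtosis(g),
    }

# Example on a random 128x128 gray-level patch.
print(first_order_features(np.random.randint(0, 256, (128, 128))))
```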
D. Gray Level Co-occurrence Matrices (GLCM)

The gray level co-occurrence matrix (GLCM) is an old and widely used feature extraction method for texture classification, proposed by Haralick et al. in 1973. It is one of the most important methods in texture classification since it captures the relationship between pixel pairs in the image. Several textural features can be extracted from the generated GLCMs, such as contrast, correlation, energy, entropy and homogeneity [10].

A GLCM is extracted from a window and assigned to its centre pixel. If G is the number of gray levels in a 2D image, the co-occurrence matrix will be a G × G matrix [11]. Since the calculation of co-occurrence matrices is time and resource consuming, the gray levels can be quantized to a smaller number to reduce the matrix dimensions, so that less time is needed; usually the offset (d, θ) = (1, 0) is used [12]. We first define pixel pairs, each by a displacement vector d = (dx, dy), which represents a distance and a direction; dx is the number of displacement pixels along the x axis and dy the number along the y axis. The distance usually takes a value from one to five pixels and the direction takes the angles 0, 45, 90 and 135 degrees. Each element c(i, j) of the matrix counts the number of pixel pairs that have intensities i and j, respectively.
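A minimal sketch of the GLCM feature extraction described above, assuming scikit-image is available: graycomatrix and graycoprops are that library's functions (spelled greycomatrix/greycoprops in older releases); the quantization to 32 levels, the offset (d, θ) = (1, 0) and the hand-computed entropy (which graycoprops does not provide) follow the choices discussed in this section.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(image, levels=32):
    """GLCM features for the offset (d, theta) = (1, 0) after quantizing the
    image to a reduced number of gray levels, as suggested in Section II.D."""
    q = (image.astype(float) / 256.0 * levels).astype(np.uint8)   # quantize to `levels` bins
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p)[0, 0]
             for p in ("contrast", "correlation", "energy", "homogeneity")}
    p = glcm[:, :, 0, 0]
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # not provided by graycoprops
    return feats

print(glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8)))
```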
Fig. 2. Textures used in the classification process, groups 1-4.

Fig. 3. Textures used in the classification process, groups 5-8.

E. Principal Component Analysis (PCA)

Principal Component Analysis (PCA) is a method for selecting the most significant features and for unsupervised dimensionality reduction; it is also known as the discrete Karhunen-Loève transform (KLT) or Hotelling transform and is closely related to singular value decomposition (SVD) and empirical orthogonal functions (EOF) [13]. PCA reduces the dimension of the data by producing a new set of features that are orthogonal to one another and have the largest variance, so that no redundant information remains [14]. The first few PCs explain most of the variance, and by choosing three PCs the feature vectors become three dimensional. For many datasets, PCA is therefore used to reduce the dimensionality of large data while retaining as much of the information in the original dataset as possible.
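A minimal sketch of the PCA step (our own Python illustration with scikit-learn and placeholder data, not the paper's actual feature matrix): the feature vectors are stacked row-wise and projected onto the three leading principal components, as described above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stack one feature vector per image (rows = samples, columns = features);
# the random matrix below is only a placeholder for the real feature matrix.
features = np.random.rand(184, 64)
pca = PCA(n_components=3)              # keep the three leading PCs
reduced = pca.fit_transform(features)  # shape (184, 3)
print(reduced.shape, pca.explained_variance_ratio_)
```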
III. CLASSIFICATION METHOD

Several classifiers are widely used in texture classification. The nearest neighbor classifier is one of the most popular; it has a simple implementation and gives good results.
A. Nearest Neighbors

One of the simplest classification methods is the nearest neighbor classifier, which selects the training sample with the closest distance to the query sample. k-Nearest Neighbor (k-NN) is a popular implementation in which the k closest neighbors are selected and the winning class is decided by majority vote among those k neighbors; that is, the test sample is assigned to the class that receives the most votes among its k nearest neighbors, i.e., the class whose members have the shortest distance to the test sample [10]. The nearest neighbor classifier does not need a training process, which makes the method very simple to implement. It is useful when only a small dataset is available, one that cannot be trained effectively with other machine learning methods whose training process might lead to incorrect classification. However, the major drawback of nearest neighbor algorithms is that the cost of computing distances grows with the number of available training samples.
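A minimal sketch of the classifier and evaluation protocol used in Section IV (our own Python illustration with scikit-learn and placeholder arrays, not the paper's features): a 1-NN classifier evaluated with leave-one-out cross-validation, whose mean accuracy corresponds to the classification gain reported in the experiments.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# X: one feature vector per sub-image, y: its group label (placeholders here:
# 8 groups of 23 sub-images, three-dimensional feature vectors).
X = np.random.rand(184, 3)
y = np.repeat(np.arange(8), 23)

knn = KNeighborsClassifier(n_neighbors=1)               # 1-NN, as in the experiments
scores = cross_val_score(knn, X, y, cv=LeaveOneOut())   # "leave one out" evaluation
print("classification gain: %.2f%%" % (100 * scores.mean()))
```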
IV. EXPERIMENTAL RESULTS

An experimental test has been carried out on textured images in order to test the generality and efficiency of the proposed method. Images are chosen from the Brodatz database [15]. Each textured image is divided into 23 sub-images of 128×128 pixels, 22 of which are used for training and 1 for testing. To make a challenging dataset, the experiment has been carried out on 8 groups of images (Figs. 2 and 3). Some of these images have similar textures and some have textures that differ from one another.

Classification is performed with the nearest neighbor classifier (k-NN) with k = 1. We use the "leave one out" method for evaluation: if there are n samples in a group, n-1 samples are used for training and one for testing; in the next step another sample is used for testing and the remaining n-1 samples for training. This process continues until all the samples have been examined, i.e., it is repeated for each sample in a class.

The method is as follows. First we evaluate the FOF and SOF features explained in the previous section; then we apply the Shearlet transform to the image and use the matrix of extracted coefficients directly in the classification algorithm. After that we use 6 features extracted from the co-occurrence matrices. These six features are known to be among the most discriminative: contrast, correlation, energy, homogeneity, entropy and variance [16].

In the second experiment we use the Ripplet transform and the statistical features obtained from each image's Ripplet coefficients for comparison, and repeat the steps mentioned above. The Ripplet transform and the statistical features extracted from it follow procedures similar to those cited in reference [17].

In the third experiment we apply the Shearlet transform to the image, which yields a series of coefficients, and we use the gray level co-occurrence matrix algorithm to extract GLCM features from the images. We thus have two coefficient matrices, and we obtain a new feature vector by computing the inner product of the coefficient matrices. This new vector is then used in the classification algorithm.
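The paper does not spell out the exact shapes involved in the inner product step, so the sketch below shows only one plausible reading (our own Python illustration with placeholder arrays): each row of a Shearlet-derived coefficient matrix is dotted with the GLCM feature vector, so the fused descriptor is a vector of inner products, one per row.

```python
import numpy as np

def fused_features(shearlet_coeffs, glcm_feats):
    """One plausible reading of the fusion step: dot every row of the
    Shearlet coefficient matrix (e.g., one row per subband) with the GLCM
    feature vector, producing one fused value per row."""
    S = np.asarray(shearlet_coeffs, dtype=float)   # shape (n_rows, n_features)
    g = np.asarray(glcm_feats, dtype=float)        # shape (n_features,)
    return S @ g                                   # vector of inner products

# Placeholder arrays standing in for real Shearlet-derived and GLCM features.
print(fused_features(np.random.rand(8, 6), np.random.rand(6)))
```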
The classification results obtained from the experiments are presented in Table 1. The first column indicates the group number, and the next seven columns give the classification gain using First Order Features (FOF), Second Order Features (SOF), Ripplet coefficients, statistical and co-occurrence features extracted from the Ripplet coefficients, Shearlet coefficients, GLCM, and the inner product of Shearlet coefficients and GLCM features, respectively. Classification gain is calculated by dividing the number of correctly classified images by the total number of images in a group.

Table 1. Results for texture classification experiments (classification gain, %)

| Group number | FOF | SOF | Ripplet | Ripplet + co-occurrence | Shearlet | GLCM | Proposed method |
|---|---|---|---|---|---|---|---|
| 1 | 85.17 | 89.31 | 66.21 | 72.85 | 99.61 | 99.02 | 100 |
| 2 | 68.47 | 71.25 | 70.31 | 73.05 | 75.39 | 80.08 | 83.79 |
| 3 | 79.30 | 75.39 | 73.05 | 77.73 | 79.69 | 83.98 | 89.26 |
| 4 | 69.24 | 72.66 | 64.84 | 69.53 | 76.17 | 76.17 | 78.91 |
| 5 | 84.38 | 80.86 | 72.54 | 82.42 | 89.06 | 88.28 | 89.11 |
| 6 | 78.71 | 81.05 | 75.19 | 84.18 | 88.28 | 91.02 | 92.62 |
| 7 | 83.40 | 83.20 | 75.86 | 82.62 | 79.49 | 88.87 | 87.30 |
| 8 | 79.69 | 80.86 | 73.78 | 70.90 | 79.30 | 81.64 | 82.25 |
| Mean gain (%) | 78.54 | 79.32 | 71.47 | 76.66 | 83.37 | 86.13 | 87.90 |

From Table 1 it can be seen that the mean gains for FOF and SOF are 78.54% and 79.32%, which indicates that the second order statistics features give more information about the relative positions of the various gray levels within the image; these features can measure whether all low-value gray levels are positioned together or are interleaved with the high-value gray levels. Using the Ripplet coefficients directly as a feature vector, the mean classification rate is 71.47%. Using co-occurrence features extracted from the Ripplet coefficients, the mean classification gain rises to 76.66%, which is 5.19% higher than with the Ripplet coefficients alone. The next experiment concerns the Shearlet transform investigated in this paper; its mean classification rate is 83.37%, a substantial improvement over the Ripplet transform. The gray level co-occurrence matrix, which has shown appropriate performance in classifying different types of textures, obtains a mean classification rate of 86.13%. To reach better classification performance we apply our proposed method, the inner product of the Shearlet and GLCM coefficients; experimental results show the good performance of this method, which attains a mean success rate of 87.90%.
Table 2. Best classification gain comparison

| Method | Best classification rate (%) |
|---|---|
| FOF | 85.17 |
| SOF | 89.31 |
| Ripplet transform | 75.86 |
| Statistical features extracted from Ripplet | 84.18 |
| Shearlet | 99.61 |
| GLCM | 99.02 |
| Proposed method | 100 |

The results show improvements of 9.36%, 8.58%, 16.43%, 11.24%, 4.53% and 1.77% compared to using FOF, SOF, Ripplet coefficients, co-occurrence features extracted from the Ripplet transform, the Shearlet transform and the GLCM, respectively. Table 2 shows the best classification gain for each method.

V. CONCLUSION

In this paper we used two statistical feature sets, first order features (FOF) and second order features (SOF), and two transforms, Ripplet and Shearlet, for extracting texture features; we also used statistical features extracted from the Ripplet transform for classification. Experimental results show that these methods improve the mean classification rate, but the performance of the proposed Shearlet-plus-GLCM method and of the GLCM alone was about 8.58% and 6.81% better than SOF, and the mean classification rate of the statistical features extracted from the Ripplet transform was 6.71% and 9.47% lower than that of the Shearlet transform and the GLCM, respectively.

To obtain a higher classification gain, motivated by the good performance of the GLCM in the classification of texture images and by the special properties of the Shearlet transform, such as multiresolution analysis and its edge detection ability, we proposed a new method based on the Shearlet transform and the GLCM.

The results indicate that in only one of the experimental texture groups was the classification gain of the proposed method not the best, while in the other seven groups it performed well and significantly increased classification accuracy. In addition, the best classification rate of the proposed method was 100%, meaning that all of the textures were classified correctly; no other method reached this rate.

REFERENCES

[1] M. Kamandar, H. Ghassemian, "Linear Feature Extraction for Hyperspectral Images Using Information Theoretic Learning," Iranian Conference on Electrical Engineering (ICEE), May 2012.
[2] N. Chaji, H. Ghassemian, "Texture-Gradient-Based Contour Detection," EURASIP Journal on Applied Signal Processing, Vol. 2006, pp. 1-8, 2006.
[3] M. N. Do, M. Vetterli, "The finite Ridgelet transform for image representation," IEEE Transactions on Image Processing, Vol. 12, pp. 16-28, 2003.
[4] J. L. Starck, E. J. Candès, "The Curvelet Transform for Image Denoising," IEEE Transactions on Image Processing, Vol. 11, pp. 670-684, 2002.
[5] J. Xu, L. Yang and D. Wu, "Ripplet: A New Transform for Image Processing," J. Vis. Commun. Image R., Vol. 21, pp. 627-639, 2010.
[6] W.-Q. Lim, "The Discrete Shearlet Transform: A New Directional Transform and Compactly Supported Shearlet Frames," IEEE Transactions on Image Processing, Vol. 19, No. 5, May 2010.
[7] R. Haralick, "Statistical and structural approaches to texture," Proc. IEEE, Vol. 67, No. 5, pp. 786-804, 1979.
[8] G. R. Easley, D. Labate, "Image Processing Using Shearlets," in Shearlets, Applied and Numerical Harmonic Analysis, pp. 283-325, 2012.
[9] N. Aggarwal, R. K. Agrawal, "First and Second Order Statistics Features for Classification of Magnetic Resonance Brain Images," Journal of Signal and Information Processing, Vol. 3, No. 2, pp. 146-153, 2012.
[10] J. Y. Tou, Y. H. Tay and P. Y. Lau, "Recent Trends in Texture Classification: A Review," Proceedings Symposium on Progress in Information and Communication Technology, pp. 63-68, 2009.
[11] M. Imani, H. Ghassemian, "GLCM, Gabor, and morphology profiles fusion for hyperspectral image classification," 24th Iranian Conference on Electrical Engineering (ICEE), May 2016.
[12] F. Mirzapour, H. Ghassemian, "Improving hyperspectral image classification by combining spectral, texture, and shape features," International Journal of Remote Sensing, Vol. 36, pp. 1070-1096, 2015.
[13] T. Archana, D. Sachin, "Dimensionality reduction and classification through PCA and LDA," International Journal of Computer Applications, Vol. 122, No. 17, 2017.
[14] M. Imani, H. Ghassemian, "Band Clustering-Based Feature Extraction for Classification of Hyperspectral Images Using Limited Training Samples," IEEE Geoscience and Remote Sensing Letters, Vol. 11, No. 8, 2014.
[15] http://www.ux.uis.no/~tranden/brodatz.html
[16] F. Mirzapour, H. Ghassemian, "Using GLCM and Gabor Filters for Classification of PAN Images," Iranian Conference on Electrical Engineering (ICEE), pp. 1-6, May 2013.
[17] T. Muhammady, H. Ghassemian, F. Razzazi, "Using Co-occurrence Features Extracted From Ripplet I Transform in Texture Classification," Iranian Conference on Electrical Engineering (ICEE), May 2012.
