
Received: 27 October 2018 Revised: 20 January 2019 Accepted: 28 January 2019

DOI: 10.1002/jemt.23238

RESEARCH ARTICLE

Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection
Muhammad A. Khan1 | Ikram U. Lali2 | Amjad Rehman3 | Mubashar Ishaq2 |
Muhammad Sharif4 | Tanzila Saba5 | Saliha Zahoor2 | Tallha Akram6

1 Department of Computer Science and Engineering, HITEC University Museum Road, Taxila, Pakistan
2 Department of Computer Science, University of Gujrat, Gujrat, Pakistan
3 College of Business Administration, Al Yamamah University, Riyadh 11512, Saudi Arabia
4 Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
5 College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
6 Department of EE, COMSATS University Islamabad, Wah Cantt, Pakistan

Correspondence
Amjad Rehman, MIS Department, COBA Al Yamamah University, Riyadh, Saudi Arabia.
Email: rkamjad@gmail.com; attique.khan440@gmail.com; tallha@ciitwah.edu.pk

Review Editor: Peter Saggau

Funding information
Prince Sultan University Riyadh, Saudi Arabia

Abstract
Brain tumor identification using magnetic resonance images (MRI) is an important research domain in the field of medical imaging. The use of computerized techniques helps doctors in the diagnosis and treatment of brain cancer. In this article, an automated system is developed for tumor extraction and classification from MRI. It is based on marker-based watershed segmentation and features selection. Five primary steps are involved in the proposed system: tumor contrast, tumor extraction, multimodel features extraction, features selection, and classification. A gamma contrast stretching approach is implemented to improve the contrast of a tumor. Then, segmentation is done using the marker-based watershed algorithm. Shape, texture, and point features are extracted in the next step, and only the top-ranked 70% of features are selected through the chi-square max conditional priority features approach. In the later step, selected features are fused using a serial-based concatenation method before classification with a support vector machine. All the experiments are performed on three data sets: Harvard, BRATS 2013, and a privately collected MR images data set. Simulation results clearly reveal that the proposed system outperforms existing methods with greater precision and accuracy.

KEYWORDS
classification, features extraction, preprocessing, reduction, segmentation

1 | INTRODUCTION

The brain tumor is a major type of cancer caused by the growth of abnormal cells (Abbasi & Tajeripour, 2017; Gondal & Khan, 2013; Iqbal, Ghani, Saba, & Rehman, 2018; Iqbal, Khan, Saba, & Rehman, 2017). New developments of computer vision in medical imaging give significantly better solutions to doctors and experts (Abbas, Saba, Mohamad, et al., 2018; Abbas, Saba, Rehman, et al., 2018; Rajesh, Malar, & Geetha, 2018; Saba, Bokhari, Sharif, Yasmin, & Raza, 2018; Saba, Rehman, Mehmood, Kolivand, & Sharif, 2018; Sadad, Munir, Saba, & Hussain, 2018; Sharif, Tanvir, Munir, Khan, & Yasmin, 2018). The major areas of medical imaging are brain tumor segmentation, tumor classification, skin cancer, breast tumor, stomach infection, and a few more (Akram, Khan, Sharif, & Yasmin, 2018; Khan et al., 2018; Liaqat et al., 2018; Nasir et al., 2018; Norouzi et al., 2014; Rehman et al., 2018). In America, 700,000 people are living with a brain tumor, and during 2018 an estimated 79,000 new cases of brain tumor were expected, including 25,000 malignant and approximately 54,000 nonmalignant. Furthermore, it is predicted that in 2018 almost 17,000 people will lose their fight with a malignant tumor (Bahadure, Ray, & Thethi, 2018). The tumor depends on several factors such as size and location, where a tumor is developed and grown. Gliomas are common types of brain tumor; they may be less aggressive in low grade with a high survival rate, whereas high-grade glioma is more aggressive and its survival rate is only 2 years (Angelini et al., 2007). MR images are used by radiologists to examine the anatomical structure of the body, including the skull and what lies beneath it. Magnetic resonance imaging (MRI) is a more suitable imaging method for the detection of brain tumors (DeAngelis, 2001; Liang & Lauterbur, 2000).

Microsc Res Tech. 2019;1–14. wileyonlinelibrary.com/journal/jemt © 2019 Wiley Periodicals, Inc. 1


In clinical practices, segmentation of MR images takes much time due to manual analysis, which is a tiring and time-consuming process for doctors. Therefore, computerized methods are needed for the diagnosis of a tumor in a short time span and with improved accuracy. However, the automated process is an even more critical task due to variation in tumor texture, appearance, and shape. Recently, medical data analysis through image processing and machine learning methods has been a major focus of several researchers (Amin, Sharif, Yasmin, & Fernandes, 2018; Rahim, Norouzi, Rehman, & Saba, 2017; Wang et al., 2015; Zhang & Wang, 2015). Many techniques have been introduced by researchers, such as clustering-based segmentation with support vector machine (SVM; Samanta & Khan, 2018), multifractal features (Lahmiri, 2017), salient structural features with an RBF SVM kernel (David & Jayachandran, 2018), superpixels-based brain tumor segmentation (Soltaninejad et al., 2017), and so on. In medical imaging, after segmentation of a region of interest, extraction of useful features and then prioritizing them plays an important role in improving the classification results.

Therefore, features fusion and selection are essential steps for classification tasks in the domain of computer vision. Several fusion techniques, such as serial based, parallel based, and canonical correlation based, to name a few (Gao, Qi, Chen, & Guan, 2018), are illustrated in the literature by various researchers. The literature shows that these techniques are effective because a fused score of features gives better precision than the original score (Rashid et al., 2018). The major advantages of features fusion are: (a) it determines the most discriminative information from the various types of original features included in the fusion process; and (b) it is capable of removing redundant information from the various sets of features, so that only the most correlated features are selected for the next step (Conti, Militello, Sorbello, & Vitabile, 2010; Yang, Yang, Zhang, & Lu, 2003). In addition, features selection is another major step for the classification phase. The selection of the best features gives significant performance gains in both classification accuracy and computational time. The major advantages of features selection techniques are the elimination of overfitting, faster execution, and selection of the most relevant features (Saeys, Inza, & Larrañaga, 2007).

Recently, several techniques have been introduced for features extraction, fusion, selection, and classification, such as a combination of K-Means and neural network (Sharma, Purohit, & Mukherjee, 2018), texture features (Soltaninejad et al., 2018), genetic algorithm (Szenkovits et al., 2018), particle swarm optimization (Lahmiri, 2017), LBP and histogram features (Abbasi & Tajeripour, 2017), and a few more as in (Jamal, Hazim Alkawaz, Rehman, & Saba, 2017; Khan et al., 2017, 2018; Sharif, Khan, Faisal, Yasmin, & Fernandes, 2018; Vidyarthi & Mittal, 2017). However, it is still an open research problem, and improvement in accuracy is required for early detection of brain tumor, which can enable early medication for its cure.

1.1 | Problem statement and contributions

The computer-based diagnosis system in healthcare plays a vital role in the detection of infection regions from the given images. However, several challenges still exist in differentiating the infected region from the healthy part of the images. First, infection stretching is an important problem in this application for improving the contrast of infections in the given images; good stretching improves the quality of the input image and the segmentation accuracy. Second, segmentation is another challenging task, and although many works on it exist in the literature, several challenges remain, such as tumor border, shape, irregularity, and texture. In the features extraction step, a significant number of features are extracted for classification; however, the extracted features contain several noisy factors, such as irrelevant information, which degrade the recognition accuracy. The focus of this research is: (a) poor contrast of infections as compared to the healthy region; (b) association among the texture pattern of tumor and healthy part; shape and size of the tumor in the MR images; location and irregularity of the tumor in the given images; variation of images through lighting and illumination effects; irrelevant features reduction; and optimal features selection.

A new automated technique is proposed in this work for tumor extraction and classification using MR images. Our major contributions in this article are given below:

1. Initially, preprocessing is performed for tumor refinement based on gamma contrast stretching (GCS). The proposed GCS approach highlights the tumor region in MR images, which is helpful for better segmentation through the marker-based watershed algorithm.
2. Multimodel features such as geometric, EFTA, and MSER features are extracted from the extracted tumor pixels. Then, the chi-square max conditional priority features (CMcPF) approach is proposed for the selection of the best features. In the CMcPF approach, a weight equation is defined which selects the desired output features.

The rest of the article is organized into four sections: Section 2 explores related work, Section 3 presents the proposed methodology, Section 4 exhibits experimental results, and Section 5 concludes the research. Table 1 presents the abbreviations of all terms used in the research.

2 | RELATED WORK

In the domain of computer vision, medical imaging has gained major attention over the last few years. In medical imaging, brain tumor images are used as data sets by researchers to improve segmentation and classification accuracy, and the automated systems they have introduced give significant performance. The major steps essential for any automated system are contrast enhancement of tumor versus healthy pixels, separation of the tumor from the given image, and extraction of useful features for classification. Adjei, Nunoo-Mensah, Agbesi, and Ndjanzoue (2018) presented a brain tumor segmentation method based on an iterative clustering technique; the mean and variance are computed in the presented approach, and their ratio value is estimated as a threshold mark. The BRATS 2013 and BRATS 2015 data sets are utilized for evaluating the presented method, which provides improved performance. Bennaceur, Saouli, Akil, and Kachouri (2018) introduced a deep learning model for brain
TABLE 1 Abbreviations

Abbreviation   Description
MRI            Magnetic resonance images
GCS            Gamma contrast stretching
CMcPF          Chi-square max conditional priority features
SVM            Support vector machine
RBF            Radial basis function
LBP            Local binary patterns
EFTA           Extended features texture analysis
MSER           Maximally stable extremal regions
KNN            K-nearest neighbor
GLCM           Gray level co-occurrence matrix
PSO            Particle swarm optimization
DSD            Directional spectral distribution
FCNN           Fully convolutional neural networks
CRF            Conditional random fields
EM             Expectation maximization
ROI            Region of interest
HOG            Histogram of oriented gradients
SIFT           Scale invariant features transform
FNR            False negative rate
AUC            Area under the curve
CM             Confusion matrix
BP             Back propagation
DT             Decision tree

tumor segmentation from MRI images. The introduced model differs from other CNN models in terms of its trial-and-error procedures. In addition, they designed an ensemble learning algorithm for best segmentation accuracy. Pinto, Pereira, Rasteiro, and Silva (2018) described a hierarchical framework for brain tumor segmentation through extremely randomized trees and context features; they used the BRATS 2013 challenge data set and achieved notable performance.

Havaei, Larochelle, Poulin, and Jodoin (2016) presented an approach for interactive tumor extraction and classification. They described a semiautomatic method which segments a brain tumor by training and generalization. When spatial features are added, the accuracy of different classifiers such as SVM, KNN, and random forests is increased. The results were validated on the MICCAI-BRATS 2013 database and outperformed recent methods. Shenbagarajan, Ramalingam, Balasubramanian, and Palanivel (2016) introduced an approach which diagnoses the tumor with an efficient MRI imaging system. The introduced method classifies the images into two classes: (a) noncancer-afflicted brain tumor; and (b) cancer-afflicted brain tumor. Four steps are then performed: preprocessing, segmentation, features extraction in which textural and shape descriptors are extracted, and classification. The results of the introduced method depict the accuracy and robustness of the approach. Singh, Dixit, Akshaya, and Khodanpur (2017) described an automated approach for brain tumor extraction and classification using MR images. A watershed segmentation method is utilized for segmentation, and postprocessing is then applied to remove the unwanted part of the image. Later, the well-known GLCM features are extracted and classified by SVM. Lahmiri (2017) introduced an automated diagnosis system for the detection of glioma in brain MRIs. In this method, particle swarm optimization is used for segmentation, and a directional spectral distribution (DSD) is estimated for the segmented tumors. Generalized Hurst exponents are then used to evaluate the multifractals of the estimated DSD. Finally, SVM is used as a classifier to obtain a high precision rate. Havaei et al. (2017) introduced a deep learning model for brain tumor segmentation using MR images. In MR images, the tumor appears in different sizes and shapes and with low contrast. Both local and global features are extracted using deep learning: the convolutional layers provide local features, whereas the FC layers give global features. The FC layer is used in this work for features extraction, which is later utilized for validation. The BRATS 2013 data set is used in the presented work for validation and gives significant results. Sehgal, Goel, Mangipudi, Mehra, and Tyagi (2016) presented a fully automatic method for the segmentation of brain tumor which is capable of detecting tumors with significant accuracy. The introduced method performs segmentation using Fuzzy C-means techniques after preprocessing. Circularity and area features were used for the extraction of tumor-segmented images. In the end, results were compared with the ground truth for inspecting accuracy. Zhao et al. (2018) introduced an approach for the segmentation of the brain tumor. In this method, fully convolutional neural networks (FCNNs) and conditional random fields (CRFs) are integrated for improved segmentation results. They utilized two-dimensional patches and slices of images to train a deep learning model on both FCNNs and CRF as a recurrent neural network (NN). Thereafter, fine-tuning is performed on the NN to obtain axial, coronal, and sagittal views. Finally, the obtained views are combined by a voting fusion approach that is later utilized for tumor prediction. The experiments are done on the BRATS (2013, 2015, and 2016) data sets to check the overall accuracy of the introduced system. Goel, Sehgal, Mangipudi, and Mehra (2017) presented a multimodal approach for segmentation of the tumor region in brain MRI with T1 contrast-enhanced, T2-weighted, and FLAIR imaging modalities. The presented method is able to merge these modalities containing different images and convert them into a single image. Padlia and Sharma (2019) presented a brain tumor segmentation approach for T1 and FLAIR images. A fractional Sobel filter is utilized in this work for tumor enhancement, and statistical features are extracted for tumor segmentation. The validation of the presented approach was conducted on the BRATS 2013 data set and showed improved performance. Srinivas and Rao (2019) used a Fuzzy C-Means approach for tumor segmentation and extracted features such as DWT. Later, a principal component analysis (PCA) is performed for the reduction of irrelevant features, with classification through SVM. The experiments are conducted on 105 MR images and achieve an accuracy of 98.82%. A few other clustering approaches are also used in the literature for brain tumor extraction and classification, such as kernel clustering (Tong, Zhao, Zhang, Chen, & Jiang, 2019).

3 | PROPOSED METHOD

The proposed method comprises four primary components: tumor contrast stretching, tumor extraction, features extraction and useful features selection, and finally classification. The detailed flow
FIGURE 1 Proposed flow architecture of an automated system for brain tumor detection and classification using MR images

architecture of the proposed system is shown in Figure 1. A detailed description of each step is given below.

3.1 | Tumor contrast stretching

In the domain of computer vision, medical imaging has been a famous research area for the last two decades. Advancement in this domain improves the overall performance of computerized methods. The performance of these methods depends on good contrast stretching, which gives clear visibility of the infection region in the image (Rundo et al., 2019). Automatic brain tumor extraction and recognition is essential for survival against brain cancer. Therefore, in any automated system, it is essential to improve the tumor contrast as compared to healthy pixels. Several challenges exist in any automated system, including low tumor contrast and noise in the given images. To resolve these problems, in this article, we use a GCS method which is based on a Gaussian function and a stretching formulation defined as follows.

Suppose O(i, j) denotes an original MR image of dimension N × M, where the maximum length of N and M is 256. Initially, the Gaussian function is applied to the original image, and the Gaussian value is later utilized in the Gamma function of contrast stretching. The Gaussian expression is defined through Equation 1:

G(o) = (1 / (σ√(2π))) exp(−(O(i, j) − μ)² / (2σ²))   (1)

where μ denotes the mean value of image O(i, j) and σ represents the standard deviation of O(i, j). In the very next step, the value of the Gaussian is put into a Gamma function F(i, j), which improves the contrast of the tumor region as compared to background pixels. The function F(i, j) is represented by Equations 2 and 3:

F(i, j) = ((1 + μ) / δ) × C   (2)

δ = (1 + G(o)) / σ   (3)

where δ denotes a helping function and σ is fixed in this function at a value of 0.2. C denotes a constant parameter with an initialized value of 2; increasing the value of C improves the brightness of the whole image. The effects of contrast stretching after applying the Gamma function are presented in Figure 2.

3.2 | Tumor segmentation

Image segmentation is an important step in the domain of computer vision due to its emerging applications, including medical imaging, video surveillance, and many more. However, in medical imaging, segmentation of the diseased region is even more important. The major medical imaging domains in which segmentation is most important are skin lesion segmentation, breast tumor segmentation, ulcer segmentation, brain tumor segmentation, and a few more. Among these, brain tumor segmentation using MR images has been an important research area for the last few years. It includes several challenges, such as tumor irregularity, tumor shape, and texture. These challenges make the segmentation process more complex and degrade the segmentation accuracy. To resolve them, several methods have been presented recently, such as Otsu thresholding, active contour, the EM method, and a few clustering methods. However, problems still exist which need to be resolved. In this work, we use the marker-based watershed segmentation algorithm.
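As a rough illustration, Equations 1–3 can be sketched in NumPy. This is a minimal sketch, not the authors' code: σ = 0.2 and C = 2 come from the text, but applying F(i, j) as a per-pixel multiplicative gain is our assumption, since the paper does not state how the stretched value recombines with the image.

```python
import numpy as np

def gamma_contrast_stretch(img, sigma=0.2, C=2.0):
    """Literal sketch of Equations 1-3. sigma = 0.2 and C = 2 are the
    constants given in the text; using F(i, j) as a per-pixel
    multiplicative gain is an assumption."""
    o = np.asarray(img, dtype=float) / 255.0
    mu, sd = o.mean(), o.std() + 1e-9

    # Equation 1: Gaussian weighting of each pixel intensity.
    g = np.exp(-((o - mu) ** 2) / (2.0 * sd ** 2)) / (sd * np.sqrt(2.0 * np.pi))

    # Equation 3: helping function delta.
    delta = (1.0 + g) / sigma

    # Equation 2: gamma stretching term, applied as a gain (assumed).
    f = (1.0 + mu) / delta * C

    return np.clip(o * f * 255.0, 0, 255).astype(np.uint8)
```

Pixels near the image mean receive a large Gaussian weight and hence a smaller gain, while outlying (tumor-bright) pixels are attenuated less, which is one plausible reading of how the formulation stretches tumor contrast.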
FIGURE 2 Proposed gamma contrast stretching (GCS) effects using MR images. The top row shows the original images, whereas the bottom row shows the GCS images

The watershed method is applied to the gradient image, whose size is the same as the original image, with pixel values represented as integer gray-level values. The gradient image is considered as a topological surface, and gray-level values are depicted as height. A shaft is then pierced at each minimum of the surface, and water starts flooding from these shafts to fill the catchment basins. The water floods toward the lower catchment basin first, and then toward the higher catchment basin. When two neighboring catchment basins are about to merge, a dam is created to keep them mutually exclusive, and it marks the watershed line (Li, Zhang, Wu, & Yi, 2010). If the water reaches the maximum gray-level values and only the top of the dam is visible, then the edges of equal level are combined to make the watershed segmentation. A simple watershed segmentation method generates an over-segmented image; to overcome this, one or more internal markers are set into the region of interest and one or more external markers are set into other areas. The areas of catchment basins with markers are then filled to make a binary segmented image.

Suppose R1, R2, …, RN represent the coordinates of the regional minima of the image F(i, j), where F(i, j) is the pixel value at coordinates (i, j). Suppose S(Ri) is the set of coordinates of the catchment basin that corresponds to the regional minimum Ri, and X[z] is the set of coordinates (i, j) for which g(i, j) < z, represented by Equation 4:

X[z] = {(i, j) | g(i, j) < z}   (4)

In the first step of the watershed algorithm, the boundary values of the image pixels of F(i, j) are evaluated and minimum values are allocated to Ri; flooding is then initialized by setting z = min + 1. In the second step, the catchment basin is evaluated through Equations 5 and 6:

Sz(Ri) = S(Ri) ∩ X[z]   (5)

Sz(Ri)(i, j) = 1 if (i, j) ∈ S(Ri) and (i, j) ∈ X[z]; 0 otherwise   (6)

Suppose S[z] represents the conjunction of the flooded catchment basins when the flooding reaches level z, defined by Equations 7 and 8:

S[z] = Σ_{i=1}^{N} Sz(Ri)   (7)

set z = z + 1   (8)

In the third step, the set of connected components is evaluated. In the fourth step, a dam is constructed for all available catchment basins S[z] using the values taken for Sz(Ri) and S[z]; then set z = z + 1. Finally, the third and fourth steps are repeated until z reaches max + 1. Thereafter, a morphological operation such as erosion is applied, and the marker-based watershed segmented image is refined through Equation 9:

A ⊖ B = {z | B_z ⊆ A}   (9)

where B_z is the translation of B by the vector z, B_z = {b + z | b ∈ B}, for all z ∈ E. The marker-based watershed segmentation results after the erosion process are shown in Figure 3 and represented by G(A ⊖ B). In Figure 3e, the comparison of the ground truth and the proposed segmented image using the marker-based watershed algorithm is presented. The comparison shows that the overall average segmentation accuracy is above 90%. The segmentation results are compared using ground truth images, and a few sample ground truths are shown in Figure 3d.

FIGURE 3 Tumor segmentation results using MR images through marker-based watershed algorithm. (a) Original image, (b) proposed segmented,
(c) mapped segmented, (d) ground truth, and (e) comparison of proposed segmented and ground truth image [Color figure can be viewed at
wileyonlinelibrary.com]

FIGURE 4 Flow architecture of proposed features extraction and selection [Color figure can be viewed at wileyonlinelibrary.com]

FIGURE 5 Sample magnetic resonance images of all selected data sets

3.3 | Features extraction and reduction

Feature extraction and reduction play a vital role in the discrimination of the tumor region into relevant categories in the field of computer vision and machine learning. The major issue in feature extraction is to compute the most active or robust features for classification, which produce efficient performance. Recently, in the computer vision domain, feature extraction has been utilized for several applications such as plant diseases (Sharif, Khan, et al., 2018), bio-medical imaging (Liaqat et al., 2018; Nasir et al., 2018), video surveillance (Khan et al., 2018; Raza et al., 2018; Sharif et al., 2017), and a few more (Khan et al., 2017; Sharif, Amin, Yasmin, & Rehman, 2018). In this article, we extract two types of descriptors, local and global. As local features, maximally stable extremal region (MSER) features are extracted, whereas as global features, histogram of oriented gradients (HOG) and extended features texture analysis (EFTA) features are computed. These features are extracted through segmented and original images as presented in Figure 4. Moreover, the detailed description of the extracted features is given below.

3.3.1 | MSER features

In machine vision, MSER features have attracted much interest due to good matching and efficient image retrieval. The computational cost of MSER features is also low as compared to other point descriptors such as SIFT. These features extract the local information of objects in the given image; therefore, in this work, we extract MSER features. The enhanced MRI images are utilized for MSER features extraction. The mathematical formulation of MSER features extraction is given below.

Given an enhanced image F(i, j) of dimension 256 × 256, let ξ denote the thresholded image of F(i, j), represented by ξ(t), where t ∈ (i, j). The growth function over MSER features is q(t), which is formulated through Equation 10:

TABLE 2 Proposed segmentation results on Private data set

Image no. Accuracy (%) Sensitivity (%) Image no. Accuracy (%) Sensitivity (%)
1 92.21 91.17 11 94.51 92.99
2 92.79 91.31 12 94.59 93.13
3 92.97 91.53 13 94.81 93.29
4 93.89 91.89 14 94.87 93.46
5 93.93 92.04 15 95.01 93.52
6 93.99 92.09 16 95.66 93.58
7 94.01 92.46 17 95.72 93.66
8 94.17 92.79 18 95.80 93.81
9 94.49 92.91 19 96.11 94.21
10 94.51 92.99 20 96.29 94.40
Average 92.26 91.01

The bold values show the best accuracy.



TABLE 3 Proposed segmentation results on BRATS 2013 data set

Image no. Accuracy (%) Sensitivity (%) Image no. Accuracy (%) Sensitivity (%)
1 94.07 92.17 17 95.24 93.13
2 94.19 92.21 18 95.29 93.13
3 94.22 92.29 19 95.33 93.19
4 94.26 92.31 20 95.37 93.22
5 94.58 92.37 21 95.37 93.26
6 94.58 92.40 22 95.49 93.31
7 94.60 92.46 23 95.51 93.37
8 94.71 92.51 24 95.57 93.39
9 94.71 92.51 25 95.63 93.48
10 94.77 92.67 26 95.64 93.59
11 94.79 92.69 27 95.64 93.66
12 94.89 92.88 28 95.98 93.81
13 94.91 92.89 29 95.98 93.99
14 94.99 92.93 30 96.04 94.00
15 95.09 92.99 31 96.17 94.04
16 95.19 93.11 32 96.67 94.41
Average 93.29 92.11
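The per-image accuracy and sensitivity reported in Tables 2 and 3 come from comparing each segmented mask against its ground truth. The paper does not print the formulas, so the standard pixel-wise definitions are assumed in this sketch:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel-wise accuracy and sensitivity of a predicted binary
    mask against its ground truth (standard definitions assumed)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)      # tumor pixels correctly found
    tn = np.sum(~pred & ~gt)    # healthy pixels correctly kept
    fn = np.sum(~pred & gt)     # tumor pixels missed
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return accuracy, sensitivity
```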

q(t) = D_t ‖ξ(t)‖ / ‖ξ(t)‖   (10)

where D_t denotes the derivative of the thresholded image and ‖·‖ represents the region area. The above Equation 10 is modified by its discrete approximation as defined in Equation 11:

FIGURE 6 Proposed segmentation results using BRATS 2013 data set. (a) The original image, (b) proposed tumor stretching, (c) segmented
image, (d) mapped on the original image, and (e) corresponding ground truth image

TABLE 4 Proposed classification results on selected data sets

Method             Data set   Sensitivity (%)   AUC      FNR (%)   Accuracy (%)
F-KNN              Harvard    91.19             0.9556   7.77      92.23
F-KNN              Private    92.22             0.9667   6.75      93.25
Decision tree      Harvard    89.19             0.9267   9.82      90.18
Decision tree      Private    90.49             0.9332   7.10      92.90
Back propagation   Harvard    92.20             0.9489   6.81      93.19
Back propagation   Private    93.21             0.9679   4.31      95.69
SVM                Harvard    97.55             0.9819   1.83      98.17
SVM                Private    97.78             0.9802   1.12      98.88

q(t_i) = ‖ξ(t_i) − ξ(t_{i−1})‖ / ‖ξ(t_i)‖   (11)

The difference of t_i and t_{i−1} represents the increment of the threshold value. The above formulation gives a resultant vector of different length for each image. The distinct-length vectors cannot be fused into one vector using simple concatenation. Therefore, we used mean padding and obtained a vector of size N × φ, where N denotes the total number of images utilized for features extraction and φ represents the maximum vector length, which is 642 in our case.

3.3.2 | HOG features

In the second step, HOG features are extracted from binary tumor samples, which are obtained after refinement with morphological operations. HOG features are naturally known as shape features and classify an object into its relevant category based on its shape. HOG features are extracted in four steps: gradient computation, blocks and cells creation, block normalization, and vector calculation.

Initially, the input tumor image is resized to 64 × 128, and vertical and horizontal gradients are calculated using [−1 0 1]T and [−1 0 1] masks, respectively. After that, orientation and magnitude are calculated using the following Equations 12 and 13:

θ = arctan(gradient of y / gradient of x)   (12)

m = √((gradient of x)² + (gradient of y)²)   (13)

Thereafter, the image is divided into blocks, and each block is converted into 8 × 8 cells. Each 8 × 8 cell consists of 192 values. In the next step, these values are represented on a histogram with 9 bin values and then stored in an array. In the third step, normalization is performed over a 16 × 16 block cell as defined through Equations 14–16:

vector = [x1, x2, x3, …, xq]   (14)

Norm = length of vector = √(x1² + x2² + x3² + … + xq²) = L   (15)

Normalization = [x1/L, x2/L, x3/L, …, xq/L]   (16)

The size of the obtained vector after normalization is N × 3,780.

3.3.3 | EFTA features

Finally, EFTA descriptors (Akram et al., 2018) are extracted from segmented images. The EFTA features are computed from the boundary regions of the given segmented images and give a resultant texture vector of size N × 42. The major purpose of EFTA features extraction is to obtain significant information about malignant and benign lesions. Mathematically, EFTA features are defined as follows.

As we have a binary segmented image, represented by G(A ⊖ B), we compute the membership functions u_i and v_i through Equations 17 and 18:

u_ki = 1 / Σ_{i=1}^{c} (d_ki)^{2/(m−1)}   (17)

v_i = Σ_{i=1}^{n} (u_ki)^m x_i / Σ_{i=1}^{n} (u_ki)^m   (18)

where d_ki denotes the mean of the distance d(x_i, v_k). After that, the value of v_i is updated through Equation 18, and the process is repeated until ‖u^(l+1) − u^(l)‖ < ϵ.

In the later step, the proposed CMcPF approach is implemented on all extracted features, that is, MSER, HOG, and EFTA. The proposed CMcPF approach removes irrelevant features and provides only strong features that are related to the classification problem.

3.4 | CMcPF reduction

The extracted features contain a few redundant and irrelevant features that degrade the classification accuracy. Therefore, in this
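Two small vector operations from Section 3.3 — the L2 block normalization of Equations 14–16 and the mean padding used to equalize variable-length MSER vectors — can be sketched as follows (a simplified illustration; the paper's φ = 642 would be the `target_len`):

```python
import numpy as np

def l2_normalize_block(vector):
    """Equations 14-16: divide each element of a block's histogram
    vector by the vector length L."""
    v = np.asarray(vector, dtype=float)
    L = np.sqrt(np.sum(v ** 2))      # Equation 15
    return v / L if L > 0 else v     # Equation 16

def mean_pad(vectors, target_len):
    """Pad variable-length vectors with each vector's mean so they
    stack into one N x phi matrix, as described for the MSER vectors."""
    out = np.zeros((len(vectors), target_len))
    for r, v in enumerate(vectors):
        v = np.asarray(v, dtype=float)
        out[r, :len(v)] = v
        out[r, len(v):] = v.mean()   # mean padding
    return out
```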

TABLE 5 Confusion matrix for Harvard data set

Classification class   Benign   Malignant   Healthy
Benign                 97%      1%          2%
Malignant              2%       98%         –
Healthy                2%       –           98%

TABLE 6 Confusion matrix for Private data set

Classification class   Benign (%)   Malignant (%)   Healthy (%)
Benign                 98           –               2
Malignant              0.2          98.5            1.3
Healthy                0.4          1               98.6

TABLE 7 Proposed classification results on BRATS 2013 data set

Classifier         Sensitivity (%)   Specificity (%)   Precision (%)   Accuracy (%)   FNR (%)
F-KNN              92.0              93.0              92.0            92.3           7.7
Back propagation   92.5              91.0              92.5            92.5           7.5
Decision tree      95.0              95.0              95.0            95.2           4.8
SVM                98.3              98.2              98.0            98.5           1.5
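The measures reported in Table 7 follow from a binary confusion matrix in the usual way; a small sketch with hypothetical counts (the specific numbers below are illustrative, not taken from the experiments):

```python
def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, precision, accuracy, and FNR (all in %)."""
    sensitivity = 100.0 * tp / (tp + fn)              # true positive rate
    specificity = 100.0 * tn / (tn + fp)              # true negative rate
    precision   = 100.0 * tp / (tp + fp)
    accuracy    = 100.0 * (tp + tn) / (tp + fn + fp + tn)
    fnr         = 100.0 * fn / (tp + fn)              # FNR = 100 - sensitivity
    return sensitivity, specificity, precision, accuracy, fnr

# Hypothetical counts for a benign/malignant split
sen, spe, pre, acc, fnr = binary_metrics(tp=98, fn=2, fp=2, tn=98)
print(acc)  # 98.0
```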

TABLE 8 Confusion matrix of support vector machine using BRATS 2013 data set

Class classification   Benign (%)   Malignant (%)
Benign                 98.2         1.8
Malignant              1.4          98.6

K(X_j, X_k) = exp(−r ‖X_j − X_k‖²)   (23)

Here, a_j, j = 1, 2, 3, …, n, are the n parameters.
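The RBF kernel of Equation 23 can be written directly; a minimal sketch, where the kernel width r is a free parameter chosen here for illustration:

```python
import numpy as np

def rbf_kernel(xj, xk, r=0.5):
    """K(X_j, X_k) = exp(-r * ||X_j - X_k||^2), as in Equation 23."""
    diff = np.asarray(xj, dtype=float) - np.asarray(xk, dtype=float)
    return float(np.exp(-r * np.dot(diff, diff)))

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0 for identical vectors
```

In an SVM, this kernel value replaces the inner product ⟨x, x_j⟩ of Equation 22.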
work, we proposed a new CMcPF approach for irrelevant feature reduction. The proposed technique consists of two main steps. In the first step, the chi-square distance is computed from the extracted features, and a conditional function is then used to select the priority features. The chi-square distance is computed for each feature vector as follows. Let λ1, λ2, and λ3 denote the three feature vectors of MSER, HOG, and EFTA, respectively, with sizes N × 642, N × 3,780, and N × 42. Then, the chi-square is defined through Equation 19:

χ²(i) = (1/d) Σ_{i=1}^{n} (λ_i − E_i)²/E_i,  χ²(j) = (1/d) Σ_{j=1}^{n} (λ_j − E_j)²/E_j,  χ²(k) = (1/d) Σ_{k=1}^{n} (λ_k − E_k)²/E_k   (19)

where E_{i,j,k} = Σ_{l∈(i,j,k)} E_l / N represents the mean values of the extracted vectors. After that, a condition is used to select the top priority features through Equation 20:

(χ²(l) × log(N)) / (Σ χ²(l) × N) × 100 < δ, where δ = 0.3   (20)

The above function is applied to all feature vectors, and up to 40% of the features are reduced. The remaining 60% of the features are fused into one matrix by a simple concatenation method. Finally, all features are sorted in ascending order and a new function is implemented which selects only the top 50% of features, as defined through Equation 21:

χ²max(λ_l) = max_{i=1}^{|N|} {λ_l − N_l}   (21)

The resultant features are finally classified by SVM using the RBF kernel function, as formulated through Equations 22 and 23:

g(x) = b_0 + Σ_{j=1}^{n} a_j ⟨x, x_j⟩   (22)

4 | EXPERIMENTAL RESULTS

In this section, the proposed experimental results are explained in numerical terms. The proposed method is validated on three data sets: Harvard, BRATS 2013, and Private. The Harvard data set contains a total of 120 brain MRI, comprising 41 healthy images and 79 unhealthy/tumor images. The Private data set consists of 40 healthy and 40 unhealthy images. In the BRATS data set, we select malignant and benign tumors for segmentation and classification. Sample images of the selected data sets are shown in Figure 5. Four classification methods are used for evaluation of the proposed system: back propagation, decision tree, KNN, and SVM. The performance of these methods is measured by four metrics: accuracy, FNR, AUC, and sensitivity rate.

4.1 | Segmentation accuracy

In this section, the proposed segmentation results are presented in terms of accuracy and sensitivity rate. The segmentation results are computed on the Private and BRATS 2013 data sets because ground truth images are provided for both. In Table 2, a few numerical segmentation results are presented for the Private data set. The overall average segmentation accuracy is 92.26% and the average sensitivity rate is 91.01%. These results indicate that the proposed system performs well on this data set. The best individual accuracy and sensitivity rate are 96.29% and 91.01%, respectively. The visual segmentation results are also shown in Figure 3, which confirms the authenticity of this work. A few of the best numerical segmentation results on the BRATS 2013 data set are given in Table 3. In Table 3, the overall average accuracy for the complete data set is 93.29% and the average sensitivity rate is 92.11%. From the given image results, the maximum achieved accuracy is 96.67% and the sensitivity rate is 92.11%. The overall segmentation results of the proposed method are significantly better than those of existing techniques. The proposed

TABLE 9 Comparison of different numbers of features through support vector machine (SVM) using all data sets

                                  Accuracy per data set
Method   Features         Harvard   Private   BRATS 2013
SVM      All features     89.17     90.13     87.42
SVM      80% selected     92.19     93.72     94.56
SVM      50% selected     98.17     98.88     98.50
SVM      40% selected     93.13     91.91     92.50
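The feature-subset behaviour compared in Table 9 can be imitated with a generic chi-square-style ranking. This is an illustrative sketch only, not the exact CMcPF procedure: the per-column score against the column mean is in the spirit of Equation 19, and the fixed keep-fraction is a simplification of the conditional selection:

```python
import numpy as np

def chi_square_rank_select(X, keep_frac=0.5):
    """Rank features by a chi-square-style deviation from the column mean
    and keep the top fraction of columns."""
    X = np.asarray(X, dtype=float)
    E = X.mean(axis=0)                               # per-feature mean, as E in Eq. 19
    score = ((X - E) ** 2 / (E + 1e-12)).sum(axis=0)
    n_keep = max(1, int(keep_frac * X.shape[1]))
    top = np.sort(np.argsort(score)[::-1][:n_keep])  # highest-scoring feature indices
    return top, X[:, top]

X = np.array([[1.0, 5.0, 2.0, 9.0],
              [1.1, 1.0, 2.0, 0.5],
              [0.9, 8.0, 2.1, 7.5]])
idx, Xsel = chi_square_rank_select(X, keep_frac=0.5)
print(Xsel.shape)  # (3, 2): half of the 4 features retained
```

Varying `keep_frac` (1.0, 0.8, 0.5, 0.4) mirrors the column groups of Table 9.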

FIGURE 7 Comparison of classification accuracy using different classification methods on selected data sets [Color figure can be viewed at
wileyonlinelibrary.com]

visual results are also shown in Figure 6, which confirms the authenticity of our method. The proposed segmentation results are considered good compared to existing techniques: in (Le & Pham, 2018), an accuracy of 85% and a sensitivity of 87% were achieved on BRATS 2013, whereas our method gives results above 90%.

4.2 | Classification accuracy

In this section, the proposed classification results are presented in numerical and graphical form. The classification process is performed with a 50:50 strategy, which means that 50% of the samples are utilized for training and the remaining 50% for testing. For the testing results, 10-fold cross validation is performed five times and the average classification accuracy is calculated for the selected classification methods.

In Table 4, the proposed classification results are presented for the Harvard and Private data sets. The results are reported in terms of accuracy, FNR, AUC, and sensitivity rate. The maximum accuracy on the Harvard data set is achieved by SVM, whereas the other methods (F-KNN, DT, and back propagation) give accuracies above 90%. The SVM gives the maximum accuracy of 98.17% with an AUC of 0.9817, a sensitivity of 97.55%, and an FNR of 1.83%. In Table 5, the proposed SVM results are verified by a confusion matrix (CM).

The proposed method is also evaluated on the Private data set, achieving the highest accuracy of 98.88% with an AUC of 0.9802, a sensitivity of 97.78%, and an FNR of 1.12% using SVM. The other classification methods (F-KNN, DT, and back propagation) also perform well, achieving accuracies of 93.25, 92.90, and 95.69%, respectively. The proposed SVM accuracy is also verified by the CM in Table 6.

Finally, the proposed system is validated on the BRATS 2013 data set. The classification results are computed on two classes, benign and malignant. The results are presented in Table 7 in terms of sensitivity rate, specificity, precision, accuracy, and FN rate. The best results are achieved by SVM with an accuracy of 98.5%, a sensitivity of 98.3%, a specificity of 98.2%, a precision of 98%, and an FN rate of

TABLE 10 Comparison with existing techniques

Method                                                               Year   Data set     Accuracy (%)
Reza, Mays, and Iftekharuddin (2015)                                 2015   BRATS 2013   86.70
Gupta, Manocha, Gandhi, Gupta, and Panigrahi (2017)                  2017   BRATS 2013   95.0
Vishnuvarthanan, Rajasekaran, Subbaraj, and Vishnuvarthanan (2016)   2016   Harvard      96.18
Sharif et al. (2018)                                                 2018   Private      98.13
Proposed                                                             2018   Harvard      98.17
                                                                            Private      98.88
                                                                            BRATS 2013   98.50

FIGURE 8 Time comparison through line plots of all data sets [Color figure can be viewed at wileyonlinelibrary.com]

1.5%, respectively. The SVM results are also confirmed by the CM in Table 8, which shows that the negative rate for malignant is 1.4% and 1.8% for benign. The other classification methods (F-KNN, back propagation, and DT) achieved accuracies of 92.3, 92.5, and 95.2%, respectively, which are also good.

4.3 | Analysis and comparison

In this section, a detailed discussion and comparison of the proposed system are presented. The proposed system has two major modules, namely tumor segmentation and tumor classification. The tumor segmentation is performed by contrast stretching and marker-based watershed segmentation, and the visual results are presented in Figures 2, 3, and 6. The numerical results are also presented in Tables 2 and 3. All results are obtained on images of size 256 × 256, but the results should improve for larger image sizes, for example 512 × 512. The classification is performed by extraction of local and global features, and later a novel reduction method is proposed. The classification results are given in Tables 4–8, including their CMs. The results show that the proposed system performs significantly well with SVM on the selected data sets. Further, we select different numbers of features to measure the effectiveness of our system. As presented in Table 9, the 50% selected features give prominent accuracy. Additionally, in Figure 7, the comparison of classification accuracy on the selected data sets (Harvard, Private, and BRATS 2013) is shown. The bar plot shows that SVM gives sufficient accuracy on all data sets.

Finally, a comparison of the proposed method with existing approaches in terms of accuracy is presented in Table 10. Reza et al. (2015) presented a tumor classification approach based on texture features and reported an accuracy of 86.70% on the BRATS 2013 data set. Gupta et al. (2017) introduced a simple approach for tumor classification and achieved an accuracy of 95% on the BRATS 2013 data set. Vishnuvarthanan et al. (2016) described an unsupervised learning method for tumor classification and achieved a maximum accuracy of 96.18% on the Harvard data set. In (Sharif, Tanvir, et al., 2018), the authors presented a multi-features selection approach evaluated on a privately collected data set and reported an accuracy of 98.13%. In this work, we achieved 98.17% accuracy on the Harvard data set, 98.88% on the Private data set, and 98.50% on the BRATS 2013 data set. Moreover, we also consider the computational time of our system on all data sets in Figure 8. The best classification times are 36.19 s on the Harvard data set, 21.21 s on the Private data set, and 47.03 s on the BRATS 2013 data set. The overall results show that our system gives a significant performance compared to existing state-of-the-art techniques. Lastly, we perform a Monte Carlo test, executing the system up to 100 times and obtaining different accuracies. The minimum, average, and maximum accuracies are plotted in the box plot given in Figure 9. The minor change in accuracy after each iteration shows the authenticity of our system. In addition, the minimum, average, and maximum segmentation accuracies of both the Private and BRATS 2013 data sets are shown in Figure 10. P-Acc, P-Sen, BRATS-Acc, and BRATS-Sen denote Private data set accuracy, Private data set sensitivity, BRATS 2013 data set accuracy, and BRATS 2013 data set sensitivity, respectively.

FIGURE 9 Classification accuracy of support vector machine on selected data sets in terms of minimum, average, and high performance [Color figure can be viewed at wileyonlinelibrary.com]
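The Monte Carlo protocol used above (repeating the full evaluation many times, then summarizing minimum, average, and maximum accuracy) can be sketched generically. The `run_once` callable below is a hypothetical placeholder for one complete 50:50 split, training, and testing cycle, not the authors' code:

```python
import random

def monte_carlo_accuracies(run_once, n_runs=100, seed=0):
    """Repeat an evaluation run and summarize min/average/max accuracy."""
    rng = random.Random(seed)
    accs = [run_once(rng) for _ in range(n_runs)]
    return min(accs), sum(accs) / len(accs), max(accs)

# Placeholder run: a real run would re-split the data 50:50, train, and test.
def fake_run(rng):
    return 98.0 + rng.uniform(-0.5, 0.5)  # hypothetical accuracy in %

lo, avg, hi = monte_carlo_accuracies(fake_run, n_runs=100)
print(lo <= avg <= hi)  # True
```

The three summary values are exactly what the box plots in Figures 9 and 10 display.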

5 | CONCLUSION

In this article, we proposed a new method for tumor segmentation and classification using MR images. In the very first step, a tumor contrast stretching technique is proposed, and a marker-based watershed algorithm is used for segmentation. Thereafter, local and global features are extracted and a new reduction approach is applied, which removes irrelevant features from the original feature set. After reduction, the resultant features are combined by a simple concatenation approach. Finally, the best features are selected and an SVM classifier is utilized for the final classification.

FIGURE 10 Presentation of segmentation accuracy in terms of minimum, average, and maximum through boxplots [Color figure can be viewed at wileyonlinelibrary.com]

The experiments are conducted on three data sets: BRATS 2013, Harvard, and Private. In this work, we conclude that contrast stretching gives better segmentation performance. Moreover, sufficient features always

produced better classification accuracy. We also conclude that the size of the image affects the segmentation performance.

In future work, the proposed method will be implemented with deep learning to improve the system accuracy and efficiency. The deep learning concept has attracted much attention in the last few years, and in medical imaging it is very important for building fully automated systems with high accuracy and efficiency (Mohsen, El-Dahshan, El-Horbaty, & Salem, 2018; Zhao et al., 2018).

ACKNOWLEDGMENTS

This work was supported by AI and Data Analytics (AIDA) Lab, Prince Sultan University, Riyadh, Saudi Arabia.

ORCID

Muhammad A. Khan https://orcid.org/0000-0002-6347-4890
Amjad Rehman https://orcid.org/0000-0002-3817-2655
Tanzila Saba https://orcid.org/0000-0003-3138-3801

REFERENCES

Abbas, N., Saba, T., Mohamad, D., Rehman, A., Almazyad, A. S., & Al-Ghamdi, J. S. (2018). Machine aided malaria parasitemia detection in Giemsa-stained thin blood smears. Neural Computing and Applications, 29(3), 803–818.
Abbas, N., Saba, T., Rehman, A., Mehmood, Z., Kolivand, H., Uddin, M., & Anjum, A. (2018). Plasmodium life cycle stage classification based quantification of malaria parasitaemia in thin blood smears. Microscopy Research and Technique.
Abbasi, S., & Tajeripour, F. (2017). Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation gradient. Neurocomputing, 219, 526–535.
Adjei, P. E., Nunoo-Mensah, H., Agbesi, R. J. A., & Ndjanzoue, J. R. Y. (2018). Brain tumor segmentation using SLIC superpixels and optimized thresholding algorithm. Brain, 181(20).
Akram, T., Khan, M. A., Sharif, M., & Yasmin, M. (2018). Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features. Journal of Ambient Intelligence and Humanized Computing, 1–20.
Amin, J., Sharif, M., Yasmin, M., & Fernandes, S. L. (2018). Big data analysis for brain tumor detection: Deep convolutional neural networks. Future Generation Computer Systems, 87, 290–297.
Angelini, E. D., Clatz, O., Mandonnet, E., Konukoglu, E., Capelle, L., & Duffau, H. (2007). Glioma dynamics and computational models: A review of segmentation, registration, and in silico growth algorithms and their clinical applications. Current Medical Imaging Reviews, 3(4), 262–276.
Bahadure, N. B., Ray, A. K., & Thethi, H. P. (2018). Comparative approach of MRI-based brain tumor segmentation and classification using genetic algorithm. Journal of Digital Imaging, 1–13.
Conti, V., Militello, C., Sorbello, F., & Vitabile, S. (2010). A frequency-based approach for features fusion in fingerprint and iris multimodal biometric identification systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(4), 384–395.
David, M. D. S., & Jayachandran, A. (2018). Robust classification of brain tumor in MRI images using salient structure descriptor and RBF kernel-SVM.
DeAngelis, L. M. (2001). Brain tumors. New England Journal of Medicine, 344(2), 114–123.
Gao, L., Qi, L., Chen, E., & Guan, L. (2018). Discriminative multiple canonical correlation analysis for information fusion. IEEE Transactions on Image Processing, 27(4), 1951–1965.
Goel, S., Sehgal, A., Mangipudi, P., & Mehra, A. (2017). Brain tumor segmentation in glioma images using multimodal MR imagery. Paper presented at the Proceedings of the International Conference on Intelligent Communication, Control and Devices, Singapore.
Gondal, A. H., & Khan, M. N. A. (2013). A review of fully automated techniques for brain tumor detection from MR images. International Journal of Modern Education and Computer Science, 5(2), 55–61.
Gupta, T., Manocha, P., Gandhi, T. K., Gupta, R., & Panigrahi, B. K. (2017). Tumor classification and segmentation of MR brain images. arXiv.
Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., … Larochelle, H. (2017). Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35, 18–31.
Havaei, M., Larochelle, H., Poulin, P., & Jodoin, P.-M. (2016). Within-brain classification for brain tumor segmentation. International Journal of Computer Assisted Radiology and Surgery, 11(5), 777–788.
Iqbal, S., Ghani, M. U., Saba, T., & Rehman, A. (2018). Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN). Microscopy Research and Technique, 81(4), 419–427. https://doi.org/10.1002/jemt.22994
Iqbal, S., Khan, M. U. G., Saba, T., & Rehman, A. (2017). Computer assisted brain tumor type discrimination using magnetic resonance imaging features. Biomedical Engineering Letters, 8(1), 5–28. https://doi.org/10.1007/s13534-017-0050-3
Jamal, A., Hazim Alkawaz, M., Rehman, A., & Saba, T. (2017). Retinal imaging analysis based on vessel detection. Microscopy Research and Technique, 80(17), 799–811. https://doi.org/10.1002/jemt
Khan, M. A., Akram, T., Sharif, M., Javed, M. Y., Muhammad, N., & Yasmin, M. (2018). An implementation of optimized framework for action classification using multilayers neural network on selected fused features. Pattern Analysis and Applications, 1–21.
Khan, M. A., Akram, T., Sharif, M., Shahzad, A., Aurangzeb, K., Alhussein, M., … Altamrah, A. (2018). An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer, 18(1), 638.
Khan, M. A., Sharif, M., Javed, M. Y., Akram, T., Yasmin, M., & Saba, T. (2017). License number plate recognition system using entropy-based features selection approach with SVM. IET Image Processing, 12(2), 200–209.
Lahmiri, S. (2017). Glioma detection based on multi-fractal features of segmented brain MRI by particle swarm optimization techniques. Biomedical Signal Processing and Control, 31, 148–155.
Le, H. T., & Pham, H. T.-T. (2018). Brain tumour segmentation using U-Net based fully convolutional networks and extremely randomized trees. Vietnam Journal of Science, Technology and Engineering, 60(3), 19–25.
Li, D., Zhang, G., Wu, Z., & Yi, L. (2010). An edge embedded marker-based watershed algorithm for high spatial resolution remote sensing image segmentation. IEEE Transactions on Image Processing, 19(10), 2781–2787.
Liang, Z.-P., & Lauterbur, P. C. (2000). Principles of magnetic resonance imaging: A signal processing perspective. Washington, USA: SPIE Optical Engineering Press.
Liaqat, A., Khan, M. A., Shah, J. H., Sharif, M., Yasmin, M., & Fernandes, S. L. (2018). Automated ulcer and bleeding classification from WCE images using multiple features fusion and selection. Journal of Mechanics in Medicine and Biology, 18, 1850038.
Mohsen, H., El-Dahshan, E.-S. A., El-Horbaty, E.-S. M., & Salem, A.-B. M. (2018). Classification using deep learning neural networks for brain tumors. Future Computing and Informatics Journal, 3(1), 68–71.
Nasir, M., Attique Khan, M., Sharif, M., Lali, I. U., Saba, T., & Iqbal, T. (2018). An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach. Microscopy Research and Technique, 81, 528–543.
Norouzi, A., Rahim, M. S. M., Altameem, A., Saba, T., Rad, A. E., Rehman, A., & Uddin, M. (2014). Medical image segmentation methods, algorithms, and applications. IETE Technical Review, 31(3), 199–213. https://doi.org/10.1080/02564602.2014.906861
Padlia, M., & Sharma, J. (2019). Fractional Sobel filter based brain tumor detection and segmentation using statistical features and SVM. In Nanoelectronics, circuits and communication systems (pp. 161–175). Singapore: Springer.
Pinto, A., Pereira, S., Rasteiro, D., & Silva, C. A. (2018). Hierarchical brain tumour segmentation using extremely randomized trees. Pattern Recognition, 82, 105–117.

Rahim, M. S. M., Norouzi, A., Rehman, A., & Saba, T. (2017). 3D bones segmentation based on CT images visualization. Biomedical Research, 28(8), 3641–3644.
Rajesh, T., Malar, R. S. M., & Geetha, M. (2018). Brain tumor detection using optimisation classification based on rough set theory. Cluster Computing, 1–7.
Rashid, M., Khan, M. A., Sharif, M., Raza, M., Sarfraz, M. M., & Afza, F. (2018). Object detection and classification: A joint selection and fusion strategy of deep convolutional neural network and SIFT point features. Multimedia Tools and Applications, 1–27.
Raza, M., Sharif, M., Yasmin, M., Khan, M. A., Saba, T., & Fernandes, S. L. (2018). Appearance based pedestrians' gender recognition by employing stacked auto encoders in deep learning. Future Generation Computer Systems, 88, 28–39.
Rehman, A., Abbas, N., Saba, T., Rahman, S. I. u., Mehmood, Z., & Kolivand, H. (2018). Classification of acute lymphoblastic leukemia using deep learning. Microscopy Research and Technique, 81(11), 1310–1317.
Reza, S. M., Mays, R., & Iftekharuddin, K. M. (2015). Multi-fractal detrended texture feature for brain tumor classification. Paper presented at Medical Imaging 2015: Computer-Aided Diagnosis.
Rundo, L., Tangherloni, A., Nobile, M. S., Militello, C., Besozzi, D., Mauri, G., & Cazzaniga, P. (2019). MedGA: A novel evolutionary method for image enhancement in medical imaging systems. Expert Systems with Applications, 119, 387–399.
Saba, T., Bokhari, S. T. F., Sharif, M., Yasmin, M., & Raza, M. (2018). Fundus image classification methods for the detection of glaucoma: A review. Microscopy Research and Technique, 81, 1105–1121. https://doi.org/10.1002/jemt.23094
Saba, T., Rehman, A., Mehmood, Z., Kolivand, H., & Sharif, M. (2018). Image enhancement and segmentation techniques for detection of knee joint diseases: A survey. Current Medical Imaging Reviews, 14(5), 704–715. https://doi.org/10.2174/1573405613666170912164546
Sadad, T., Munir, A., Saba, T., & Hussain, A. (2018). Fuzzy C-means and region growing based classification of tumor from mammograms using hybrid texture feature. Journal of Computational Science, 29, 34–45.
Saeys, Y., Inza, I., & Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19), 2507–2517.
Samanta, A. K., & Khan, A. A. (2018). Computer aided diagnostic system for automatic detection of brain tumor through MRI using clustering based segmentation technique and SVM classifier. Paper presented at the International Conference on Advanced Machine Learning Technologies and Applications.
Bennaceur, M., Saouli, R., Akil, M., & Kachouri, R. (2018). Fully automatic brain tumor segmentation using end-to-end incremental deep neural networks in MRI images. Computer Methods and Programs in Biomedicine, 166, 39–49.
Sehgal, A., Goel, S., Mangipudi, P., Mehra, A., & Tyagi, D. (2016). Automatic brain tumor segmentation and extraction in MR images. Paper presented at the Conference on Advances in Signal Processing (CASP).
Sharif, M., Amin, J., Yasmin, M., & Rehman, A. (2018). Efficient hybrid approach to segment and classify exudates for DR prediction. Multimedia Tools and Applications, 1–17.
Sharif, M., Khan, M. A., Akram, T., Javed, M. Y., Saba, T., & Rehman, A. (2017). A framework of human detection and action recognition based on uniform segmentation and combination of Euclidean distance and joint entropy-based features selection. EURASIP Journal on Image and Video Processing, 2017(1), 89.
Sharif, M., Khan, M. A., Faisal, M., Yasmin, M., & Fernandes, S. L. (2018). A framework for offline signature verification system: Best features selection approach. Pattern Recognition Letters.
Sharif, M., Tanvir, U., Munir, E. U., Khan, M. A., & Yasmin, M. (2018). Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. Journal of Ambient Intelligence and Humanized Computing, 1–20.
Sharma, M., Purohit, G., & Mukherjee, S. (2018). Information retrieves from brain MRI images for tumor detection using hybrid technique K-means and artificial neural network (KMANN). In Networking communication and data knowledge engineering (pp. 145–157). Springer.
Shenbagarajan, A., Ramalingam, V., Balasubramanian, C., & Palanivel, S. (2016). Tumor diagnosis in MRI brain image using ACM segmentation and ANN-LM classification techniques. Indian Journal of Science and Technology, 9(1).
Singh, N. P., Dixit, S., Akshaya, A., & Khodanpur, B. (2017). Gradient magnitude based watershed segmentation for brain tumor segmentation and classification. Paper presented at the Proceedings of the 5th International Conference on Frontiers in Intelligent Computing: Theory and Applications, Singapore.
Soltaninejad, M., Yang, G., Lambrou, T., Allinson, N., Jones, T. L., Barrick, T. R., … Ye, X. (2017). Automated brain tumour detection and segmentation using superpixel-based extremely randomized trees in FLAIR MRI. International Journal of Computer Assisted Radiology and Surgery, 12(2), 183–203.
Soltaninejad, M., Yang, G., Lambrou, T., Allinson, N., Jones, T. L., Barrick, T. R., … Ye, X. (2018). Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels. Computer Methods and Programs in Biomedicine, 157, 69–84.
Srinivas, B., & Rao, G. S. (2019). Performance evaluation of fuzzy C means segmentation and support vector machine classification for MRI brain tumor. In Soft computing for problem solving (pp. 355–367). Singapore: Springer.
Szenkovits, A., Meszlényi, R., Buza, K., Gaskó, N., Lung, R. I., & Suciu, M. (2018). Feature selection with a genetic algorithm for classification of brain imaging data. In Advances in feature selection for data and pattern recognition (p. 185). Singapore: Springer.
Tong, J., Zhao, Y., Zhang, P., Chen, L., & Jiang, L. (2019). MRI brain tumor segmentation based on texture features and kernel sparse coding. Biomedical Signal Processing and Control, 47, 387–392.
Vidyarthi, A., & Mittal, N. (2017). Texture based feature extraction method for classification of brain tumor MRI. Journal of Intelligent & Fuzzy Systems, 32(4), 2807–2818.
Vishnuvarthanan, G., Rajasekaran, M. P., Subbaraj, P., & Vishnuvarthanan, A. (2016). An unsupervised learning method with a clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images. Applied Soft Computing, 38, 190–212.
Wang, S., Chen, M., Li, Y., Zhang, Y., Han, L., Wu, J., & Du, S. (2015). Detection of dendritic spines using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks. Computational and Mathematical Methods in Medicine, 2015, 1–12.
Yang, J., Yang, J.-Y., Zhang, D., & Lu, J.-F. (2003). Feature fusion: Parallel strategy vs. serial strategy. Pattern Recognition, 36(6), 1369–1381.
Zhang, Y., & Wang, S. (2015). Detection of Alzheimer's disease by displacement field and machine learning. PeerJ, 3, e1251.
Zhao, X., Wu, Y., Song, G., Li, Z., Zhang, Y., & Fan, Y. (2018). A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Medical Image Analysis, 43, 98–111.

How to cite this article: Khan MA, Lali IU, Rehman A, et al. Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection. Microsc Res Tech. 2019;1–14. https://doi.org/10.1002/jemt.23238
