
APPLIED PROBLEMS

Predictive Diagnosis of Glaucoma Based on Analysis of Focal Notching along the Neuro-Retinal Rim Using Machine Learning
Rishav Mukherjee^{a,*}, Shamik Kundu^{a}, Kaushik Dutta^{a}, Anindya Sen^{a}, and Somnath Majumdar^{b}

^{a} Department of Electronics and Communication Engineering, Heritage Institute of Technology, Kolkata, India
^{b} Department of Ophthalmology, Apollo Gleneagles Hospital, Kolkata, India
* e-mail: rishavmukherjee17@gmail.com

Abstract—Automatic evaluation of the retinal fundus image is regarded as one of the most important future tools for early detection and treatment of progressive eye diseases like glaucoma. Glaucoma leads to progressive degeneration of vision and is characterized by shape deformation of the optic cup associated with focal notching, wherein the degeneration of the blood vessels results in the formation of a notch along the neuroretinal rim. In this study, we have developed a methodology for automated prediction of glaucoma based on feature analysis of the focal notching along the neuroretinal rim and cup-to-disc ratio values. This procedure has three phases: the first phase segments the optic disc and cup by suppressing the blood vessels with dynamic thresholding; the second phase computes the neuroretinal rim width to detect the presence and direction of notching by the conventional ISNT rule, apart from calculating the cup-to-disc ratio from the color fundus image (CFI); the third phase uses a linear support vector based machine learning algorithm, integrating the extracted parameters as features, for classification of CFIs into glaucomatous or normal. The algorithm outputs have been evaluated on a freely available database of 101 images, each marked with the decisions of five glaucoma expert ophthalmologists, returning an accuracy rate of 87.128%.

Keywords: glaucoma, optic-disc segmentation, cup segmentation, blood vessel suppression, cup-to-disc ratio, focal notching, retinal fundus image, computer aided diagnosis, ISNT rule, machine learning, support vector machine
DOI: 10.1134/S1054661819030155

1. INTRODUCTION

Glaucoma has emerged as one of the leading causes of blindness in recent times, causing about 3 million cases worldwide [1, 2]. Currently more than 60 million people are suffering from glaucoma, and this is likely to affect more than 80 million by 2020 as projected by the World Health Organization (WHO) [3]. Recent data indicate that in India more than 11.2 million people are affected by glaucoma, accounting for 5.9% of the country's total blindness [4]. Glaucoma results from the degeneration of optic nerve fibres over a period of time, leading to structural change of the optic nerve head (ONH) and subsequently causing permanent visual field loss [5].

Glaucoma can be broadly classified into two types: open angle glaucoma and angle closure glaucoma. Open angle glaucoma, and some cases with components of angle closure, is asymptomatic in its early phases, making glaucoma detection and treatment difficult; hence most cases go undetected and undiagnosed until the late advanced stages. The analysis of the CFI is emerging as a preferred modality of glaucoma assessment due to its non-invasive and economical nature, making it a potential tool for large scale mass screening programs. Though research has been done on detection of glaucoma from multi-modal images obtained from the Heidelberg Retina Tomograph (HRT), the HRT is very costly and hence impractical for primary health care centers. The aim of our study is to develop an automatic computer aided diagnosis (CAD) algorithm to analyze the retinal fundus image which computes the cup-to-disc ratio (CDR) and the notching factor along with the direction of notching. These parameters are then integrated such that an intelligent decision is made about the existence of glaucoma, which can be used effectively for primary mass screening.

The qualitative and quantitative analysis of the shape deformation of the optic disc is used to predict the progress of glaucoma. The optic disc is divided into two parts, i.e., (i) the central bright depression called the cup and (ii) the peripheral region where the nerve fibres bend into the cup region, called the neuroretinal rim, as shown in Fig. 1. Larger discs have large neuroretinal rim areas. These two are positively

Received November 14, 2018; revised April 11, 2019; accepted April 17, 2019
ISSN 1054-6618, Pattern Recognition and Image Analysis, 2019, Vol. 29, No. 3, pp. 523–532. © Pleiades Publishing, Ltd., 2019.

Fig. 1. An optic disc centric 2-D retinal image (labels: neuroretinal rim, main blood vessel, optic disc, pallor region, cup).

Fig. 2. ISNT representation for a right eye.

correlated. The contour of the cup is influenced by this correlation. Along with this, intraocular pressure, retinal nerve fiber layer (RNFL) characteristics, focal notching, neuroretinal rim width and peripapillary atrophy (PPA) serve as important indicators for the progression of glaucoma. The neural rim area appears to decline with age and with increase in intraocular pressure. As a note of caution, it has been observed that patients with diabetes mellitus may have an increase in the neural rim over time, which could be secondary to nerve swelling.

Our algorithm uses the cup-to-disc ratio and the neuroretinal rim width based ISNT rule as classification parameters for the detection of glaucoma. The most common clinical indicator used by ophthalmologists is the cup-to-disc ratio, found by visual assessment of the optic disc through ophthalmoscopy. Higher CDR values signify greater chances of glaucoma. However, the cup-to-disc ratio often fails to address cases with genetically large optic cups and myopic eyes (where the optic cup is inherently large). To address this fallacy, we additionally introduce notching established by the ISNT rule, a technique used to measure the width of the neuroretinal rim.

ISNT stands for the four regions into which the optic nerve head can be divided, as shown in Fig. 2.
• I = Inferior, i.e., the bottommost region.
• S = Superior, i.e., the topmost region.
• N = Nasal, i.e., the region nearest the nose.
• T = Temporal, i.e., the region opposite the nasal one.

For a normal eye, the inferior region of the neuroretinal rim has the maximum width, followed by the superior, nasal and temporal regions, following the order

I > S > N > T.

Any retinal image which violates this order is subjected to scrutiny, as the probability of glaucoma increases due to abnormal elongation of the cup in the inferior (I) or superior (S) direction. The difference between the CDR and the ISNT rule is the area of focus: while the CDR focuses on the optic cup size with respect to the disc, the ISNT rule focuses on the neuroretinal rim width, i.e., the area between the boundaries of the optic cup and disc. Our work is focused on developing an automated pre-screening system that analyzes the CFI to compute the classification parameters, which on integration with a machine learning algorithm will help provide ophthalmologists an insight to classify CFIs as glaucomatous or normal.

This paper proposes a pixel based feature classification which analyzes multiple pixel features, including numeric properties of a pixel and its surroundings. Our proposed algorithm has two phases: a training phase and a test phase. During the training phase, the algorithm learns to classify pixels from known classifications, and in the test phase the algorithm is tested on unknown images using machine learning which utilizes the classification factors as parameters. The optic disc is segmented using a statistical model, where the disc pixels are separated based on their intensity values. A repetitive edge detection and correction technique is employed to suppress the blood vessels. Cup segmentation is based on an average distance vector calculation on the masked disc region with respect to the maximum energy pixels, for intensities greater than the RMS value of the pixels. The segmentation of the optic disc and cup is the primary requirement for extracting the different parameters from the fundus images. The segmentation process has been elaborately discussed in our previous work [6]. The ISNT rule is used to determine the presence and direction of the focal notch from the width of the neuroretinal rim. The cup-to-disc ratio and the notching characteristics are the learning parameters which are integrated with
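The ISNT ordering check described above reduces to a one-line comparison. The sketch below is a hypothetical illustration (the paper measures the rim widths from the segmented disc and cup, which is not shown here; names are ours):

```python
# Check the ISNT rule: a normal neuroretinal rim satisfies I > S > N > T.
# Any violation flags the image for further scrutiny.
def violates_isnt(rim_widths):
    """rim_widths: dict with keys 'I', 'S', 'N', 'T' (rim widths, e.g. in pixels)."""
    i, s, n, t = (rim_widths[k] for k in "ISNT")
    return not (i > s > n > t)

# A rim that thins inferiorly (a possible focal notch) breaks the ordering.
print(violates_isnt({"I": 9.0, "S": 7.5, "N": 6.0, "T": 4.5}))  # normal -> False
print(violates_isnt({"I": 5.0, "S": 7.5, "N": 6.0, "T": 4.5}))  # suspect -> True
```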


the machine learning algorithm to classify the CFIs for prediction of glaucoma.

The paper is organized as follows. Section 2 reviews previous related work on retinal feature extraction from CFIs and classification of the same into glaucomatous and normal. Section 3 presents our proposed methods of segmentation, followed by extraction of different retinal image parameters and the classification method based on them. In Section 4, the experimental results and the segmentation and classification accuracy are presented. Section 5 presents the discussion of our work, and the conclusions and future scope of the paper are presented in the final section.

2. BACKGROUND

Optic disc segmentation is the primary step in the automatic processing of the fundus image, hence a considerable amount of research work is available. The earliest attempts at optic disc segmentation were made by shape based template matching, in which the disc is modeled as either circular [7–9] or elliptical [10]. This method was unable to characterize the irregularities in shape arising from pathological changes. For this problem, the approach shifted to thresholding based techniques. Nakagawa et al. [11] developed a P-tile thresholding technique for gray scale images. Gradient based active contour models have been developed, in which Mendels [12] approaches contour initialization manually while Lee [13] approaches it automatically. Recent works on active contours are more focused on region based approaches [14, 15] inspired by the Mumford-Shah model [16]. Disc segmentation has been followed by blood vessel extraction and removal [11], because the presence of blood vessels makes detection of the optic cup difficult. A fuzzy algorithm was developed by Tolias [17] to identify the blood vessel regions. Chaudhari et al. [18] presented a method for detection of blood vessels by designing 12 different templates intended to be used in searching for blood vessel segments in all possible directions. Vermeer et al. [19] employed a thresholding segmentation based on the Laplace transform to segment retinal vessels.

Optic cup segmentation poses more of a challenge than disc segmentation because of the decreased visibility of the boundary and the high density of blood vessels at the boundary of the optic cup and disc region. Ophthalmologists mainly use three visual aids to detect the cup, i.e., (i) change in color, that is, the cup appears much paler compared to the neural rim, (ii) bends in small blood vessels, and (iii) notching of the inner margin of the neuroretinal rim. Compared to disc segmentation, very few methods have been proposed for cup segmentation. Liu et al. [20] proposed a method in which a potential set of pixels from the cup region is derived based on the reference color obtained from a manually selected point. An ellipse is then fit to this set of pixels to obtain the cup boundary, but the boundary thus obtained is coarse. A variant of this method obtained the cup pixels via thresholding of the green color plane.

The small bends in the blood vessels, i.e., the kinks, automatically mark the cup boundary. Wong et al. [21] provided a way of detecting the cup boundary [22] from the kinks in blood vessels by extracting image patches from the estimated cup boundary obtained in [20]; the vessel pixels are identified using edge and wavelet transform information. The vessel bends, identified by points of direction change in vessel pixels, are detected and used to obtain the cup boundary. Edge detection and the Hough transform combined with a statistical deformable model can be used to extract the optic cup from the vessel-removed disc [23]. Gopal et al. [24] proposed a method of cup segmentation using r-bend information and producing a local interpolating spline to naturally approximate cup boundaries in regions where r-bends are absent. The ISNT rule is widely used by ophthalmologists to measure the neuroretinal rim width and deduce the diagnosis and progression of glaucoma. Tan et al. [25] proposed an algorithm to classify notching based on the ISNT rule and also detect the feature points from the vessel bends and local image gradients.

3. MATERIALS AND METHODS

3.1. Data

The proposed algorithm has been trained and tested on a free database of colored fundus images provided by Aravind Eye Hospital in Madurai, India in collaboration with IIIT Hyderabad [26]. The dataset comprises both normal and glaucomatous images from a single demographic population, as all the subjects whose eye images are a part of this database are Indians. In this dataset, the images are classified primarily by four expert ophthalmologists along with an additional expert. The glaucomatous images are denoted by +1 and the normal images by −1. The final decision was taken on the basis of majority count.

3.2. Preprocessing

The preprocessing involves three steps: disc segmentation, blood vessel detection and removal, and cup segmentation.

3.2.1. Optic disc segmentation. The optic disc region appears brighter than the other parts of the retina in the retinal fundus image and can be segmented with a threshold obtained from a selectively chosen histogram of a single row profile across the fundus disc image. Our algorithm is based on a statistical RGB model using pixel based feature classification, examining relevant pixels and their surroundings to develop an automatic threshold based on the image characteristics. The row with maximum mean pixel intensity value Rmax will pass through the optic disc.
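The ground-truth labelling scheme described for the dataset (five expert votes of +1/−1 resolved by majority count) can be sketched as follows; the function name is illustrative, not from the dataset's tooling:

```python
# Majority vote over five expert opinions: +1 glaucomatous, -1 normal.
def majority_label(expert_marks):
    s = sum(expert_marks)  # five odd-valued votes can never tie
    return 1 if s > 0 else -1

print(majority_label([+1, +1, -1, +1, -1]))  # -> 1 (glaucomatous)
print(majority_label([-1, -1, -1, +1, +1]))  # -> -1 (normal)
```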


Fig. 3. (a) Binary disc mask, (b) red channel for disc, (c) disc mask in RGB.

Fig. 4. (a) Blood vessel profile in RGB, (b) blood vessel suppressed in red layer, (c) blood vessel suppressed disc mask.

\[ R_{\max} = \max_{1 \le i \le m} \left[ \frac{\sum_{j=1}^{n} X(i,j)}{n} \right], \tag{1} \]

where Rmax returns the maximum average row pixel value of the image with dimensions m × n, and X(i, j) denotes the pixel intensity value at the coordinates (i, j). The pixel intensities that are greater than Rmax are then stored in another array Y. The dynamic row threshold Rth is computed from the set of pixels stored in array Y as

\[ R_{th} = \frac{\sum_{k=1}^{n(Y)} Y_k}{n(Y)}. \tag{2} \]

Rth acts as the RGB threshold for the row specific operation; a similar process is used to calculate the threshold for the column specific operation (Cth). The mean of the row and column thresholds is computed to obtain the DiscThreshold. The disc is most prominent in the red layer of the image, hence the red layer is used over the green and blue layers for disc segmentation:

\[ P_{BWD} = \begin{cases} 1 & \text{if } X(i,j) \ge \mathit{DiscThreshold} \\ 0 & \text{if } X(i,j) < \mathit{DiscThreshold}, \end{cases} \tag{3} \]

where PBWD represents the binary image after implementing the auto-thresholding method, which is used for further analysis. Figure 3a constitutes the optic disc mask, which when integrated on the original image results in Fig. 3c; the corresponding red layer is shown in Fig. 3b.

3.2.2. Blood vessel detection and suppression. The masked disc shows that irregularities appear on the optic cup periphery due to the presence of blood vessels, which makes cup segmentation difficult; hence they are suppressed by a morphological approach. The primary step involves the detection of the edges of the blood vessels by implementing the Sobel edge detection algorithm [27] on the green channel of the masked image, as shown in Fig. 4a. A γ × γ neighborhood is extracted around each edge pixel and the mean of this neighborhood is found.

For the kth iteration, µlocal is calculated over the γ × γ neighborhood as

\[ \mu_{local} = \frac{\sum_{i=M-\frac{\gamma-1}{2}}^{M+\frac{\gamma-1}{2}} \sum_{j=N-\frac{\gamma-1}{2}}^{N+\frac{\gamma-1}{2}} X(i,j,k-1)}{\gamma^2} \quad \forall\, 1 \le M \le m,\ 1 \le N \le n, \tag{4} \]

where M, N are the coordinates of the edge pixels. The pixel values expected to lie in the cup region have intensity greater than µlocal and are extracted from X into the set Y:

\[ Y = \{\, y \mid y = X(i,j,k-1),\ X(i,j,k-1) \ge \mu_{local} \,\}. \]

The estimated pixel value µlocal for this neighborhood is then calculated from the set Y as

\[ \mu_{local} = \sum_{i=1}^{n(Y)} \frac{Y(i)}{n(Y)} \quad \forall\, 1 \le k \le L. \tag{5} \]

The µlocal value is assigned to the X(M, N) pixel, and this process is iterated L times in all of the RGB layers. This enables the blood vessels to be suppressed, as represented in Fig. 4c; the corresponding red channel is shown in Fig. 4b.

3.2.3. Optic cup segmentation. The blood vessel suppressed disc mask serves as the precursor image for optic cup segmentation. The maximum valued pixel of the mask, which lies in the cup region, is obtained, and the root mean square value of all the pixel intensities in the disc is calculated:

\[ P_{rms} = \sqrt{\frac{\sum_{i=1}^{n(X)} X(i)^2}{n(X)}}. \tag{6} \]

The pixel intensity values greater than the RMS value are subtracted from the maximum valued pixel and stored in the vector Y:

\[ Y = \{\, y \mid y = P_{\max} - X(i)\ \forall\, X(i) \ge P_{rms} \,\}. \]

The mean µdiff is calculated for the vector Y to obtain the threshold for cup segmentation, i.e., the CupThreshold.
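A minimal single-channel sketch of the dynamic disc thresholding of Eqs. (1)–(3) might look as follows. This is a simplified illustration: the paper applies the row- and column-wise procedure to the full RGB image and segments on the red layer, and the function names here are ours:

```python
import numpy as np

def disc_threshold(channel):
    """Sketch of Eqs. (1)-(3) on one image channel (2-D float array)."""
    # Eq. (1): the row with the maximum mean intensity passes through the disc.
    r_max = channel.mean(axis=1).max()
    # Pixels brighter than Rmax form the set Y; Eq. (2): their mean is Rth.
    r_th = channel[channel > r_max].mean()
    # The column-specific threshold Cth is computed the same way on columns.
    c_max = channel.mean(axis=0).max()
    c_th = channel[channel > c_max].mean()
    # DiscThreshold is the mean of the row and column thresholds.
    return (r_th + c_th) / 2.0

def segment_disc(channel):
    # Eq. (3): binary disc mask by auto-thresholding.
    return (channel >= disc_threshold(channel)).astype(np.uint8)

# Synthetic example: a bright square "disc" on a dark background.
img = np.full((50, 50), 10.0)
img[20:30, 20:30] = 210.0
print(segment_disc(img).sum())  # -> 100 (the 10x10 bright region)
```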


\[ \mu_{diff} = \frac{\sum_{i=1}^{n(Y)} Y(i)}{n(Y)}, \tag{7} \]

\[ \mathit{CupThreshold} = P_{\max} - \mu_{diff}. \tag{8} \]

The thresholding technique gives the best results in the red and green layers of the image for cup extraction, hence the logical union of these components is used to obtain the cup pixels PBWC from the disc mask. Figure 5a represents the binary image of the optic cup, and the corresponding cup in RGB is shown in Fig. 5b.

\[ P_{BWC} = \begin{cases} 1 & \text{if } X(i,j) \ge \mathit{CupThreshold} \\ 0 & \text{if } X(i,j) < \mathit{CupThreshold}. \end{cases} \tag{9} \]

Fig. 5. (a) Binary mask of segmented cup, (b) cup marked in RGB.

3.3. Feature Extraction

A collection of features is used for the classification of the retinal images into glaucomatous and non-glaucomatous.

3.3.1. Cup-to-disc ratio calculation. The segmentation of the optic cup and the optic disc is followed by regional property analysis of the segmented binary images. The two primary features used for classification of retinal images for glaucoma detection are the area cup-to-disc ratio (ACDR) and the diameter cup-to-disc ratio (DCDR). The areas of the disc and the cup are calculated by considering the maximum number of connected components. The ACDR is computed as the ratio of the area of the optic cup to that of the optic disc. Similarly, the DCDR is computed as the ratio of the length of the major axis of the optic cup to that of the optic disc. The integrated result of ACDR and DCDR is used as a decision making parameter for the primary classification of the retinal images.

3.3.2. Notching detection from neuroretinal rim width. After calculating the ACDR and DCDR values for all the images in the database, they are further analyzed for better prediction results by detecting and predicting the presence of a notch in the optic cup boundary and the direction of the notching, caused by the irregular distribution of the optic nerve fibers falling into the cup.

Selective loss of neuroretinal rim tissue in glaucoma occurs primarily in the infero-temporal region (more so inferiorly), and to a lesser extent in the supero-temporal region (more so superiorly). As glaucoma progresses, the temporal neural rim is typically involved after the vertical poles, and the nasal quadrant is the last to be destroyed.

The ISNT rule (I > S > N > T) [28] has been the basis of the detection of this notching feature, where any abnormality in this pre-established hypothesis leads to the identification of the notching criterion. The notching feature extraction is done in a three-step process, expunging the different cases in each step. First, the major axis and the minor axis lengths of the cup are computed. Their difference is calculated, which gives an estimation of the extent of deformation of the optic cup boundary. This is termed the notch factor, which yields low values for normal optic cups and considerably high values for cups which have been deformed due to notching.

In the next step, the orientation of the cup with respect to the optic disc is considered for the detection of notching. The detailed manifestation of the ISNT rule shows that a normal optic cup with no notch is somewhat elliptical in the horizontal direction, while a cup with a notch, violating the aforesaid rule, becomes elongated in the vertical or inferior-superior direction. Thus, it has been empirically concluded by ophthalmologists that the effect of notching is initially and vividly visible in the I and S directions, followed by effects in the temporal quadrant and much later in the nasal neural rim. If a notch is present, it has to be present initially in the I or S direction.

In order to find the orientation of the cup, the image is divided into four quadrants from the centroid of the optic cup, as shown in Fig. 6. The quadrant with angle (90° ± 45°) forms the superior quadrant and the one with angle (−90° ± 45°) forms the inferior quadrant.

If the major axis of the optic cup, which must include the notch, forms an angle (with respect to the horizontal) lying within the inferior or superior quadrant, the presence of the notch is ascertained. The images where the notch is present are extracted and then processed to find the direction of the notch. In the final step, to estimate the direction of the notch in either the I or S direction, the major axis of the optic cup carrying the notch is extended on either side to meet the optic disc boundary, as shown in Fig. 7a. The distances from the optic cup boundary to the optic disc boundary along both directions of the major axis are calculated for all the dataset images. The distance from the cup boundary to the disc boundary along the superior direction is termed the S-distance, and the same along the inferior direction is termed the I-distance, represented in Fig. 7b.


Fig. 6. The quadrant division of ISNT for a right eye.

Fig. 7. The S-distance and I-distance represented in (a) binary image and (b) RGB image.

In CFIs where the I-distance value is considerably less than the S-distance, it can be concluded that the notching is in the I-direction, and vice versa, thus giving rise to a parameter called notch prediction, which represents the predicted direction of the notch present in the neuroretinal rim. In this way, the presence of the notch in the optic cup and its direction are found by processing the retinal images, which corroborates the presence of glaucoma.

3.4. Decision Making with Machine Learning

Machine learning is a well known prediction approach which learns characteristics from a training dataset and applies that learning to predict results for a test dataset. The resultant model from the machine learning algorithm can work independently on any unclassified image. The features extracted from the segmented images are used to train the classification model. In this paper we have used a support vector machine (SVM) [29] for binary classification between glaucomatous and nonglaucomatous images. Though the SVM algorithm takes greater time to train on a training data set and has fairly high memory usage, it comes with a very fast prediction speed, especially for binary classified data. Moreover, the training data set being small, it takes less time to train with high prediction accuracy when SVM is used.

SVM supports several kinds of kernel functions with different characteristics. The input features and the time required for the training process serve as important factors for selecting the kernel function. A linear kernel function is used for our screening system, as the linear SVM has the lowest training time and memory usage when compared to other kernel functions, without much variation in results. This algorithm classifies the images in the database on the basis of the features extracted above. In the first step, it classifies the images based on the two input features area CDR and diameter CDR. In the second step, the six input features used for classification are area CDR, diameter CDR, notch factor, S-distance, I-distance and notch prediction (using the ISNT rule). The generated classifier is verified by 5-fold cross validation. Randomly selected data from the data set is used as test data to verify the classifier, and the results are verified against the 5 expert markings in the data set. A prediction matching at least 2 of the 5 experts is considered true, because the expert decisions show large variations for certain CFIs across the dataset.

4. RESULTS

4.1. Segmentation

Five colored fundus images are selected arbitrarily from the dataset, and the segmentation results for the different preprocessing steps are shown in Fig. 8. In the 4 × 5 matrix, the first column represents the original images as provided in the dataset. The second column consists of the segmented optic disc mask in RGB. The third column represents the blood vessel suppressed optic disc mask that has been used for cup segmentation in the final step. The fourth column is formed by images where the disc is demarcated by the algorithm in blue and the optic cup in green. The rows in the matrix represent the successive preprocessing steps for the corresponding images.

The segmentation results are validated against the ground truths of the experts provided in the database. The experts have marked their estimated areas of cup and disc, and those were used as a standard to validate our segmentation results, as shown in Figs. 9 and 10. The area of the retina that has been marked as optic disc both by the ophthalmologists and by our method is denoted as true positive (TP). The area that has been left out by the experts as well as by our method is termed true negative (TN). The area that has been identified as optic disc by our method but not by the experts is denoted as false positive (FP). The area that falls under the disc region according to the experts but is undetected by our method is represented by false negative (FN). The corresponding TP, TN, FP, FN
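A sketch of this classification stage using scikit-learn (an assumption, as the paper does not name its SVM implementation). The feature matrix and labels below are random placeholders standing in for the six extracted features of the 101 images:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(101, 6))                # placeholder feature matrix
y = np.where(X[:, 0] + X[:, 2] > 0, 1, -1)   # placeholder +1/-1 labels

# Linear-kernel SVM, verified by 5-fold cross validation as described above.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

Feature scaling is not mentioned in the paper; it is included here only because it is the idiomatic default when mixing features with different ranges (ratios versus pixel distances).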


values for the optic cup have been found in a similar way as those of the disc.

Thus, a quantitative measure of the segmentation accuracy of both the disc and the cup has been found using the above mentioned terms, represented by the parameters precision factor, recall factor, and Fscore, where

\[ \text{Precision Factor} = \frac{TP}{TP + FP}, \]

\[ \text{Recall Factor} = \frac{TP}{TP + FN}, \]

\[ F\text{score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}. \]

The parameters for five images are presented in Table 1. The mean, range and standard deviation for each parameter are given below each column of the table.

Fig. 8. Qualitative measure of segmentation accuracy in optic disc (expert marking vs. proposed method).

Fig. 9. Qualitative measure of segmentation accuracy in optic cup (expert marking vs. proposed method).

Fig. 10. Segmentation results for image nos. 5, 58, 72, 82, and 97. First column: original image; second column: segmented optic disc in RGB; third column: blood vessel suppressed optic disc; fourth column: marked optic cup and disc (cup marked in green and disc marked in blue).

4.2. Classification

The SVM based machine learning algorithm is used for the classification of the CFIs into glaucomatous and normal. The prediction accuracy of our algorithm is represented by a confusion matrix and validated by the calculation of specificity, sensitivity, precision factor and negative predictive value. In the confusion matrix, the horizontal axis represents the true condition and the vertical axis represents the predicted condition.

In Tables 2 and 3, TP stands for true positive, TN for true negative, FP for false positive and FN for false negative. The mathematical expressions for the parameters depicted in the tables are shown below.

\[ \text{Precision Factor (PREC)} = \frac{TP}{TP + FP}, \]

\[ \text{Negative Predictive Value (NPV)} = \frac{TN}{TN + FN}, \]

\[ \text{Specificity (SPEC)} = \frac{TN}{TN + FP}, \]

\[ \text{Sensitivity (SENS)} = \frac{TP}{TP + FN}, \]

\[ \text{Accuracy (ACC)} = \frac{TP + TN}{TP + FP + TN + FN}. \]

Table 2 represents the confusion matrix where the dataset is classified using two input parameters, area CDR and diameter CDR, achieving a sensitivity of 76.54% and a specificity of 65%, with an accuracy of 74.25%. Table 3 represents the confusion matrix where the dataset is classified using six input parameters:
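The overlap-based precision, recall and Fscore defined above can be computed directly from the binary masks; this is a straightforward sketch with illustrative names:

```python
import numpy as np

def overlap_scores(pred_mask, expert_mask):
    """Precision, recall and Fscore between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    truth = expert_mask.astype(bool)
    tp = np.sum(pred & truth)    # marked by both method and experts
    fp = np.sum(pred & ~truth)   # marked by method only
    fn = np.sum(~pred & truth)   # marked by experts only
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fscore = 2 * precision * recall / (precision + recall)
    return precision, recall, fscore

# Two offset 6x6 squares overlap in a 5x5 region (25 of 36 pixels each).
pred = np.zeros((10, 10)); pred[2:8, 2:8] = 1
truth = np.zeros((10, 10)); truth[3:9, 3:9] = 1
print(overlap_scores(pred, truth))  # all three scores equal 25/36
```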


Table 1. Precision factor, recall factor and Fscore of the images shown in Fig. 8

Image no. | Disc: precision | Disc: recall | Disc: Fscore | Cup: precision | Cup: recall | Cup: Fscore
05        | 0.9556 | 0.9904 | 0.9727 | 0.9306 | 0.9359 | 0.9332
58        | 0.8799 | 0.9960 | 0.9343 | 0.8492 | 0.9349 | 0.8900
72        | 0.9728 | 0.9726 | 0.9727 | 0.9613 | 0.6988 | 0.8099
82        | 0.9162 | 0.9983 | 0.9555 | 0.9069 | 0.7366 | 0.8129
97        | 0.9495 | 0.9548 | 0.9521 | 0.9662 | 0.6726 | 0.7931
Mean      | 0.9348 | 0.9824 | 0.9575 | 0.9228 | 0.7957 | 0.8478
Range     | 0.0929 | 0.0435 | 0.0384 | 0.1170 | 0.2637 | 0.1401
Std. Dev. | 0.0369 | 0.0184 | 0.0161 | 0.0477 | 0.1295 | 0.0607

area CDR, diameter CDR, notch factor, S-distance, eters along with the traditional CDR parameters helps
I-distance and notch prediction, achieving sensitivity to improve the accuracy of the algorithm for detection
of 86.42% and specificity of 90%, with an accuracy of of glaucoma.
87.128%. The integration of the focal notching param-
5. DISCUSSIONS
Table 2. Confusion matrix representation for images with
only CDR The dataset contains certain images where the
True
glaucoma phenomenon is not so vivid, and the deci-
sion of whether that CFI can be called as glaucoma-
condition
tous or normal varies among the experts. In such
total cases, the algorithms decision is validated against the
glaucoma normal opinions of at least two of the five expert ophthalmol-
population
ogists, so that the algorithms classification can fetch a
Predicted Predicted TP FP PREC six opinion going in line with those two varying expert
condition condition 62 7 89.85% decisions. Thus, our algorithm successfully evaluates
Glaucoma 76.54% 35% 88 of 101 images present in the database, hereby etch-
ing out a cumulative success rate of 87.128%. Also,
Predicted FN TN NPV there are observer variations in evaluation of CDR.
condition 19 13 40.62% The difference between the estimated cup to disc ratio
Normal 23.46% 65% for the same eye at different times seldom exceeds 0.2.
SENS. SPEC. ACC Hence, any difference greater than 0.2 over time
should be viewed with suspicion.
76.54% 65% 74.25%
Table 3. Confusion matrix representation for images with CDR and notching (total population: 101)

                        True glaucoma       True normal
Predicted glaucoma      TP = 70 (86.4%)     FP = 2  (10%)     PREC = 97.22%
Predicted normal        FN = 11 (13.6%)     TN = 18 (90%)     NPV  = 62.06%
                        SENS = 86.42%       SPEC = 90%        ACC  = 87.128%

The technique developed in our work has the following advantages over the existing techniques. The proposed method considers all the RGB layers when processing CFIs, instead of the widely used grayscale approximation, which involves a loss of information. Furthermore, we have applied an image-property-based thresholding approach to extract the retinal features correctly, whereas other works have used shape-based template matching, which tends to be database specific and even image specific. We have also used a linear-SVM-based machine learning algorithm, which makes our method adaptive rather than database specific.

Finally, we have used a unique method of detecting a focal notch to corroborate the presence of glaucoma, alongside the traditional CDR evaluation techniques. This addition of the notching parameters increases the classification accuracy of our algorithm, as is clearly shown by Tables 2 and 3.
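The summary statistics in Tables 2 and 3 follow from the raw TP/FP/FN/TN counts by the standard confusion-matrix definitions. The sketch below recomputes them (the table values are these quantities rounded to two decimal places):

```python
def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard confusion-matrix summary statistics, in percent."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": 100 * tp / (tp + fn),   # true-positive rate
        "specificity": 100 * tn / (tn + fp),   # true-negative rate
        "precision":   100 * tp / (tp + fp),   # positive predictive value
        "npv":         100 * tn / (tn + fn),   # negative predictive value
        "accuracy":    100 * (tp + tn) / total,
    }

# Table 2 (CDR only): TP=62, FP=7, FN=19, TN=13 -> accuracy ~74.25%
print(confusion_metrics(62, 7, 19, 13))
# Table 3 (CDR + notching): TP=70, FP=2, FN=11, TN=18 -> accuracy ~87.13%
print(confusion_metrics(70, 2, 11, 18))
```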
PATTERN RECOGNITION AND IMAGE ANALYSIS Vol. 29 No. 3 2019
6. CONCLUSIONS

In this paper, we aimed at designing a predictive analysis for estimating glaucoma through an intelligent decision-making algorithm. Although our method is not foolproof, incorporating a few more classifying parameters, such as the age, sex, and race of the patient, which are not available in our current database, can help in better prediction of glaucoma. As glaucoma is a progressive phenomenon, a database that contains the fundus images of a patient over a period of time will surely help to better predict whether a case is tending towards the glaucomatous or not. We also intend to extend our work to the practical field by using a portable fundus camera for easy collection of CFIs, so that this technology can be established as a key step towards the primary assessment of glaucoma.
Rishav Mukherjee received the B.Tech degree in Electronics and Communication Engineering from Heritage Institute of Technology, affiliated to Maulana Abul Kalam Azad University of Technology, in 2018. He has qualified in the GATE examination held by the IITs. He was a joint first author of a previous research paper on glaucoma detection from CFIs. His research interests include image processing, biomedical image analysis, networking, and cognitive radio networks (CRN).

Shamik Kundu received his B.Tech degree in Electronics and Communication Engineering from Heritage Institute of Technology, affiliated to Maulana Abul Kalam Azad University of Technology, in 2018. He was a joint first author of a previous research paper on glaucoma detection from CFIs. His research interests include signal and image processing, communications, and networks.

Kaushik Dutta received his B.Tech degree in Electronics and Communication Engineering from Heritage Institute of Technology, affiliated to Maulana Abul Kalam Azad University of Technology, in 2018. He was a joint first author of a previous research paper on glaucoma detection from CFIs. His research interests include biomedical image processing, computer vision, and cognitive radio networks.

Anindya Sen is a Professor in the Department of Electronics and Communication Engineering at Heritage Institute of Technology, a private autonomous engineering college in Anandapur, Kolkata, India. He received his B.E. from Jadavpur University, India, in 1980 and his PhD from the University of Minnesota, Twin Cities, in 1996, and undertook post-doctoral training at the University of Chicago from 1996 to 2000. He holds one US patent and has thirty research publications. His research interests include medical image processing, the Internet of Things, artificial intelligence, and VLSI design.

Somnath Majumdar has been a senior consultant ophthalmologist in Kolkata, India, for the last 20 years. He is currently a consultant ophthalmologist at Apollo Hospitals, Kolkata, and Fortis Hospitals. He completed his postgraduate training at the Dr. R.P. Centre for Ophthalmic Sciences, AIIMS, New Delhi, and his FRCS (Edinburgh and Glasgow) in 2000. He is an expert in the fields of glaucoma, cataract, and retinal surgery.