
Journal of Digital Imaging (2022) 35:302–319

https://doi.org/10.1007/s10278-021-00566-8

Detection of Optic Disc Localization from Retinal Fundus Image Using Optimized Color Space

Buket Toptaş1 · Murat Toptaş2 · Davut Hanbay3

Received: 5 April 2021 / Revised: 25 November 2021 / Accepted: 6 December 2021 / Published online: 11 January 2022
© The Author(s) under exclusive licence to Society for Imaging Informatics in Medicine 2021

Abstract
Optic disc localization offers an important clue in detecting other retinal components such as the macula, fovea, and retinal vessels. With the correct detection of this area, sudden vision loss caused by diseases such as age-related macular degeneration and diabetic retinopathy can be prevented. Therefore, there is an increase in computer-aided diagnosis systems in this field. In this paper, an automated method for detecting optic disc localization is proposed. In the proposed method, the fundus images are moved from the RGB color space to a new color space by using an artificial bee colony algorithm. In the new color space, the localization of the optic disc is clearer than in the RGB color space. In this method, a matrix called the feature matrix is created. This matrix is obtained from the color pixel values of the image patches containing the optic disc and the image patches not containing the optic disc. Then, the conversion matrix is created. The initial values of this matrix are randomly determined. These two matrices are processed in the artificial bee colony algorithm. Ultimately, the conversion matrix becomes optimal and is applied over the original fundus images. Thus, the images are moved to the new color space. Thresholding is applied to these images, and the optic disc localization is obtained. The success rate of the proposed method has been tested on three general datasets. The accuracy success rate for the DRIVE, DRIONS, and MESSIDOR datasets, respectively, is 100%, 96.37%, and 94.42% for the proposed method.

Keywords  Artificial bee colony · Fundus image · Optic disc localization · Eigenvalue

* Buket Toptaş
btoptas@bandirma.edu.tr

Murat Toptaş
mtoptas@bandirma.edu.tr

Davut Hanbay
davut.hanbay@inonu.edu.tr

1 Computer Eng. Dept., Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
2 Software Eng. Dept., Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
3 Computer Eng. Dept., Engineering Faculty, Inonu University, 44280 Malatya, Turkey

Introduction

Glaucoma, diabetic retinopathy (DR), and age-related macular degeneration (ARMD) are important retinal diseases that can be leading causes of blindness. Screening of digital fundus images helps to detect the symptoms of these pathological changes. Early detection of these diseases by computer-aided diagnostic systems (CADs) can prevent vision loss and impairments of fundus structures. CADs help physicians and reduce the error rate; they also speed up the identification of the parameters needed to diagnose a disease.

The location of the optic disc (OD) has an important role in retinal image analysis. Therefore, OD segmentation using CADs is the basic step in the diagnosis of retinal diseases [1]. Many approaches have been presented by researchers for OD segmentation, but this topic is still an active area of research, and it continues to develop toward better results in terms of robustness and accuracy. Especially in recent years, automatic detection of OD localization in retinal images has become popular in CADs.

For instance, Pathan et al. [2] proposed an automated decision tree classifier system for the detection of OD contours. This proposed system is based on two main steps. The first step was developed using a directional filter. An effective retinal blood vessel detection and extraction algorithm is obtained by this filter.


Then, an adaptive threshold based on a decision tree classifier was used to obtain the OD contour. Kumar et al. [3] proposed an improved optic disc and retinal blood vessel segmentation method. The proposed method consists of five stages. The first stage uses mathematical morphology operations for pre-processing and retinal blood vessel detection. Then, a watershed transform is used for OD segmentation. Also, a radial basis function neural network is used for the classification of the diseases. Uribe-Valencia et al. [4] proposed a method for OD location in fundus images based on high-intensity information. The proposed method has been created from four main stages. These stages are as follows: (i) OD pixel candidate generation, (ii) promising OD region detection, (iii) feature extraction, and (iv) feature set and classification. Reza [5] proposed the idea of a circle operator that is based on the properties of the optic disc. The operator was used to reveal the circular image intensity variation associated with the OD. Firstly, the proposed method predicts the image variation using this operator. Then, this method estimates the OD based on the maximum-minimum variation of this value. Thakur and Juneja [6] presented a hybrid approach using optic cup and optic disc segmentation for glaucoma disease diagnosis. This approach uses a level set approach. The initial contour for the level set approach is the boundary of the segmented image. Thereby, the final segmented image was achieved. Gui et al. [7] proposed an algorithm for optic disc localization. This proposed algorithm is based on an improved corner detection algorithm. This corner detection algorithm is based on the feature that the corner point distribution is intensive where the disc is located. The Harris corner detection algorithm is used to segment the corner points correctly. A method based on histogram matching for localization detection of the OD region is presented in [8]. An average filter is used in this method. The effect of noise is reduced by using this filter. Then, an 80 × 80 pixel window was designed. The histogram of each color component in this window is calculated. The mean of this histogram was used as a template, and the center of the OD localization was marked. One of the methods used to detect the OD region in fundus images is the Radon transform (RT). The bright information of the OD region is used with this transformation in the method proposed in [9]. In this method, the fundus image is divided into sub-images. RT was applied to each sub-image. High intensity differences in the fundus image were associated with the peaks of the OD in the Radon space. These operations were performed on the blue channel of the fundus image. All sub-images with a peak value higher than a threshold were considered candidate points containing the OD. Harangi and Hajdu [10] proposed a method based on member algorithms for automatic detection of the OD. In the method, seven different member algorithms were used, and candidate OD regions were created. Then, probability maps were created by the member algorithms. A suitable combination of these maps is then provided to find the correct OD region. These probability maps were then combined to find the OD region. The purpose of using member functions is to obtain as much information as possible about the location of the OD. Wang et al. [11] presented an HSI color space-based approach for automatic segmentation of the OD region. Here, all fundus images have been converted to HSI space. Then, morphological closing and Fourier correlation operations were performed on the I channel. The process up to here was the template matching process. The template matching method was used to approach the OD center. Then, the OD region was segmented using the level clustering method. Ahmed and Amin [12] proposed a candidate-based method on the green channel of the RGB fundus image to detect the central position of the OD. In the method, the intensities of the pixels in the OD region are extracted. Then, the candidate approaches with the highest mean were found. For each candidate point, circular patches of different radii are extracted. Then, the average intensity was calculated for each radius. The point whose average intensity value is calculated as the maximum among all candidate points is labeled as the OD center. Dashtbozorg et al. [13] proposed an automated method based on a sliding band filter to segment the OD region. The method was initiated with the pre-processing step. In the preprocessing step, the segmented vessels were removed from the image. Then, the sliding band filter was applied to these images. Firstly, this filter was applied to the low-resolution images. Thus, the approximate center position of the OD on the subsampled images was obtained. Then, it was applied to high-resolution images, and the OD region was obtained. Mary et al. [14] designed a system to segment the OD region. The system consists of three basic steps. The first step is the preprocessing step. The second step is the circular Hough transform to determine the OD position. The last step is the Active Contour Model (ACM). Nine different ACM algorithms are applied to the Hough transform result for segmentation of the OD region. The Hausdorff distance criterion was used for the similarity between the segmentation results and the ground truth. Bharkad [15] presented an algorithm for automatic segmentation of the OD region. In the proposed method, an equiripple low pass finite impulse response (FIR) filter is used. This filter is designed to suppress the dominance of blood vessels. It is also used to enhance the OD region in the retinal image. The filter offers low computational complexity. Moreover, this filter is designed to have the desired frequency response with the most suitable parameter values. Segmentation of the OD region was performed using grayscale morphological dilation and median filtering. Kamble et al. [16] developed a new method to detect the localization of the OD and fovea. This method is based on one-dimensional scanned intensity profile analysis. Firstly, fundus images are pre-processed. Then, 19 scan lines are created on these images. The aim is to find the "x" and "y" coordinates of the OD by using the horizontal and vertical scan lines. These scan lines have been extracted from the intensity profile analyses. Finally, the localization of the OD region is marked by interpreting the intensity profile analyses. Zhou et al. [17] presented a new locally statistical active contour model with a structure prior to robustly detect the OD and optic cup (OC) localization. Firstly, fundus images are pre-processed.


Then, the region of interest (ROI) of the OD region was cropped on the pre-processed images. The effect of intensity inhomogeneity is observed on these images. To deal with this, the locally statistical active contour model is introduced. Also, a structure prior model is proposed to combine the OD and OC segmentation results. Naqvi et al. [18] developed a system that uses gradient-independent active contour estimation to define the boundaries of the OD region. This system performs localization, homogenization, and boundary estimation of the OD. OD homogenization was performed by inpainting the retinal vascular network, followed by gradient-independent contour estimation using the variational active contour model. The contour estimation process has become independent of the gradient information. This has increased the success. Yu et al. [19] proposed an automated system to detect localization and perform segmentation of the OD. Firstly, template matching was used on fundus images in CIELab space. Thus, candidate OD location regions are found. A directional matched filter is used to remove false-positive candidate regions resulting from template matching. In order to segment the OD, the fundus image was preprocessed. In pre-processing, the saturation value of the red channel of the image is checked. Then, blood vessels and bright regions are removed from the image. A hybrid level clustering approach is applied to the final image. Finally, the OD region is segmented. The methods mentioned so far are studies using machine learning and image processing techniques. In recent years, there has been an increase in deep learning-based methods for optic disc segmentation. Tan et al. [20] used a 7-layer convolutional neural network (CNN) architecture to segment the optic disc, vasculature, and fovea. This architecture classified every pixel into one of four classes: optic disc, background, fovea, and blood vessels. This proposed method consists of two parts: a 7-layer CNN and background normalization. Yu et al. [21] used a modified U-Net architecture. This architecture combines a pre-trained ResNet-34 model as encoding layers with classical U-Net decoding layers. The purpose of the architecture is to detect glaucoma by segmenting the optic disc and optic cup. Liu et al. [22] used Generative Adversarial Nets (GANs) for optic disc and optic cup segmentation. The GAN model used consists of a segmentation net, a discriminator, and a generator. This model has the purpose of learning a mapping between the fundus image and the corresponding segmentation maps. Lim et al. [23] proposed a method for glaucoma detection. This method consists of four stages: (i) localization of a square region around the optic disc, (ii) the conversion of this region to extend relevant visual features, (iii) the classification of the converted image with a CNN, and (iv) dividing this map into sections to produce the predicted disc and cup boundaries. Jana et al. [24] proposed a method for automatic OD detection and segmentation. Firstly, fundus images are converted to gray-scale images. These images are preprocessed, thus eliminating noise on the images. Then, edge detection methods are applied, and the edges of the OD are determined. Afterwards, the Hough transform is used to find the center of the optic disc. Finally, among all candidate optic disc regions, the most ideal optic disc region is searched for. A supervised machine learning algorithm is used for this [24]. Tulsani et al. [25] proposed a new method to perform OD and OC segmentation. This method is designed for medical images with small datasets. The method uses UNet++, a deep learning model. UNet++ segments the OD and OC separately. Finally, information about glaucoma is obtained from fundus images using the OD and OC segmentation results. Veena et al. [26] used two different convolutional neural network (CNN) architectures for OD and OC segmentation. First, the input images are pre-processed. In pre-processing, Gaussian filtering and image normalization methods are used. Then, a color-texture descriptor-based morphological approach is applied to these images. Later, region-based semantic segmentation methods are applied. Thus, the regions of interest are extracted from the images. Lastly, the CNN architecture is applied separately for both the OD region and the OC region. As a result of the CNN, the OD and OC are segmented.

When the above studies are examined, it is seen that there are some terms frequently mentioned together with a retinal image. These terms seem confusing. For this reason, the terms used together with a retinal fundus image are briefly defined in Table 1 for the convenience of researchers. The terms given in Table 1 are shown on the retina image in Fig. 1.

The OD usually has a circular and bright shape in fundus images. However, factors such as retinal diseases, age, and gender diversify the size of the OD. Therefore, localization and segmentation of the OD become difficult. Numerous approaches have been presented for the localization and segmentation of the OD. But the location of the OD has an important role in retinal images. Therefore, although many techniques have been proposed in this field, it is still an active field of research. A new OD localization method is proposed in this article. The motivation behind the proposed methodology is to develop a fast, efficient, and robust algorithm for OD localization as a prerequisite in fundus image analysis.

In this paper, we propose an automated system to detect OD localization. The proposed system is based on the artificial bee colony (ABC) algorithm. Firstly, image patches are taken from the OD regions and non-OD regions of the color retinal images. The size of these image patches is 25 × 25 pixels. These patches are in the RGB color space. Therefore, each patch is actually represented by a 25 × 25 × 3 matrix, where the third dimension represents the three color channels of the color space. Secondly, a matrix is created from the color pixel values of the image patches obtained from the OD and non-OD regions. This matrix is called the feature matrix. This matrix is obtained with a total of 500 image patches. Thirdly, a 3 × 3 matrix is created. This matrix is called the conversion matrix. The initial values of this matrix are randomly generated. The conversion matrix is optimized using the ABC algorithm (more detailed explanations are given in the rest of the paper).


Table 1  Frequently used terms associated with retina images

Fundus image: The fundus image of the retina exposes anatomical structures. These structures are the optic disc, fovea, blood vessels, macula, and lesions.
Optic Disc: The optic disc is a circular area. This area is located where the optic nerve connects to the retina. Its other name is the optic nerve head.
Optic Cup: The optic cup is a central area inside the optic disc, which is usually brighter than the rest of the disc.
Fovea: The fovea is located at the center of the macula. The fovea region is the darkest, with a lack of vasculature [16].
Macula: The macula is the area of distinct and detailed vision. Thus, any abnormality in this area may affect the quality of vision. The macula is the central region of the retina, approximately 3–4 mm in diameter [28].
Exudates: Exudates are the most famous symptom of diabetic retinopathy. These symptoms belong to a class of lipid fundus lesions visible through retinal imaging. They vary in color from white to yellow, with different patterns, shapes, sizes, and contrast [29].
Microaneurysms (MAs): Microaneurysms are small dilations of the retinal capillaries that appear as tiny red dots and are among the earliest signs of diabetic retinopathy [30].
Hemorrhages: Hemorrhages are one of the earliest symptoms of diabetic retinopathy [31]. If these symptoms are severe, they may damage retinal tissue and impair vision [32].
Cotton wool spots (CWS): CWS are lesions present in the inner retinal layers. They are white in color with feathery edges. They can occur in any disease that affects the arteriolar circulation to the inner retinal layers [33].
Non-proliferative diabetic retinopathy (NPDR): NPDR is an early stage of diabetic retinopathy.
Proliferative diabetic retinopathy (PDR): PDR is an advanced stage of diabetic retinopathy.
Glaucoma: Glaucoma is one of the major causes of vision loss. The increase of inner pressure inside the eye is considered the main cause of glaucoma [34].
Age-Related Macular Degeneration (ARMD): ARMD is an eye disease.
Neuroretinal Rim: The neuroretinal rim is used in defining the field of the optic disc. It contains the neural elements and is found between the edge of the optic disc and the optic cup.

In the ABC algorithm, the K-means clustering algorithm is used as an error function. Fourthly, the color fundus images are multiplied by the optimized conversion matrix. Thus, the color fundus images are moved from the RGB space to a new space. Fifthly, a thresholding method is applied to the images in the new space. An eigenvalue-based method is used as the thresholding method. Morphological operations are applied to the result of the thresholding process. Ultimately, the OD localization is determined. The proposed algorithm was tested on three publicly available datasets and obtains a competitive performance compared to other state-of-the-art methods. These datasets are DRIVE, MESSIDOR, and DRIONS.

The main contributions of the proposed study are:

• Promising success has also been achieved with the MESSIDOR dataset, although all image patches were created and imported from the DRIVE and DRIONS datasets.
• In this study, the OD localization region was found. This
region has critical importance in retinal image analysis.
If the OD region is known, other retinal components such
as the macula, fovea, and exudates are found more easily.
• The ABC algorithm introduces a new enhanced color
space for retinal images.
• Contrary to most approaches in the literature, it is not an
application that only operates on the green channel.

The rest of the paper is organized as follows. Firstly, the material and method are explained. In this section, the datasets used in the proposed method, related work, and the proposed method are presented. Then, the experimental setup of the proposed approach is given. In this section, the main stages of the study are explained. Later, results and discussion are given. In this section, a comparison and discussion of the experimental results with other literature studies are made.
Fig. 1  Some terms given in Table 1 [27]


Lastly, the conclusion is given. In this section, the shortcomings and contributions of the method are explained. In addition, solid research directions for the future are mentioned in this section.

Materials and Methods

This section includes the dataset used and the related work behind the proposed method. Each related work is detailed in the subsequent subsections.

Dataset Used

The proposed method has been tested on three public datasets. The first of these datasets is the DRIVE dataset. The DRIVE (Digital Retinal Images for Vessel Extraction) dataset is publicly available. This dataset includes 40 fundus images. Each image is 8 bits per RGB channel at a resolution of 768 × 564 pixels. Also, all images have been JPEG compressed and were captured by a Canon CR5 non-mydriatic 3CCD camera with a 45-degree field of view (FOV). This dataset includes two sets, a test set and a training set. Each set includes 20 images. Both sets have blood vessel ground-truth images [35]. The MESSIDOR dataset is publicly available. This database includes 1200 fundus images. Each image was captured using 24-bit RGB at a resolution of 1440 × 960, 2240 × 1488, or 2304 × 1536 pixels. Also, these images were acquired by three ophthalmology departments using a color video 3CCD camera mounted on a Topcon TRC NW6 non-mydriatic retinograph [36]. The DRIONS dataset is publicly available. This dataset includes 110 retinal fundus images. Each image has a resolution of 600 × 400 pixels [37].

ABC Algorithm

The artificial bee colony (ABC) is an algorithm modeled on the behavior of a bee swarm in its environment [38]. The ABC algorithm is an iterative algorithm, just like other swarm-based algorithms. In each iteration, both the employed bees and the onlooker bees update potential solutions and calculate the fitness of the new solution. If the fitness value of this new solution is better than the original one, then the employed bee uses the new solution. This algorithm contains three base groups of bees. The first group is the employed bees, which find food sources. The second group is the onlooker bees, which wait in the dance area and make the decision to choose a food source. The last group is the scout bees, which carry out the stochastic search. The calculation of a food source mentioned in the algorithm is given in Eq. (1):

x_{i,j} = x_j^{\min} + \mathrm{rand}(0,1)\,(x_j^{\max} - x_j^{\min})   (1)

where i = 1, 2, …, SN and the SN parameter represents the number of food sources; j = 1, 2, …, D and the D parameter represents the dimensionality of the search space. Also, rand(0,1) is a random variable uniformly distributed between (0,1). x_j^{max} and x_j^{min} are the predefined minimum and maximum values of parameter j. The ABC algorithm is given in Algorithm 1 [39].

Algorithm 1 ABC Algorithm


Initialization:
    Initialize a group of food sources.
    Assign values to the control parameters.
Repeat
    Employed Bee Step:
        Produce new food sources and assign all employed bees.
        Apply the greedy selection process.
    Onlooker Bee Step:
        Choose a solution according to the probability values and assign onlooker bees.
        Apply the greedy selection process between food sources.
    Scout Bee Step:
        Memorize the best food source found so far.
        Find an abandoned solution for the scout bee.
Until the maximum cycle number is reached
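As a quick illustration of Eq. (1), the sketch below shows one way the random food-source initialization could be written in NumPy. The function name and the example bounds are ours, not part of the paper; the values only mirror the parameter ranges reported later in Table 2.

    import numpy as np

    def init_food_sources(sn, d, x_min, x_max, seed=None):
        """Place SN food sources at random in a D-dimensional search space, as in Eq. (1)."""
        rng = np.random.default_rng(seed)
        # x_ij = x_j_min + rand(0, 1) * (x_j_max - x_j_min)
        return x_min + rng.random((sn, d)) * (x_max - x_min)

    # Example: a colony of 5 food sources over 9 decision variables bounded in [-10, 10]
    sources = init_food_sources(sn=5, d=9, x_min=-10.0, x_max=10.0)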

K-Means Clustering Algorithm

The K-means algorithm is a clustering algorithm. This algorithm is an unsupervised algorithm and one of the classical and most popular clustering algorithms. The aim is to find k clusters in the data based on the objective function Distance(x, y) given in Eq. (2):

\mathrm{Distance}(x, y) = \sqrt{\sum_{i=1}^{N} (x_i - y_i)^2}   (2)

where Distance(x, y) is the Euclidean distance between a data point x and a cluster centroid y, and N represents the total number of dimensions. Depending on the distance, the points are assigned to the cluster whose center is at the minimum distance. Then, the data points are clustered. In the next step, the mean of all points belonging to a cluster is found. This mean value is set as the new cluster centroid for the subsequent iteration. This process is repeated until the centroid achieved is the same as that of the previous iteration. The error function decreases as a result of the K-means clustering algorithm. This algorithm is known for handling big datasets and for its speedy convergence to a local optimum. The K-means algorithm is given in Algorithm 2.


Algorithm 2 K-Means algorithm


Input: D, training data; k, the number of clusters
Output: {c1, c2, …, ck}, the k centroids
Initialization:
Step 1: Randomly determine the k cluster centroids.
Step 2: Choose k points at random from D as the initial centroids.
Step 3: Assign each point to the group with the closest cluster centroid.
Step 4: Using the Euclidean distance, measure the minimum distance between each data object and each cluster centroid. Then, calculate the mean value of all points in each cluster and update the centroid to the mean value of that cluster.
Repeat Steps 3 and 4 until the centroids no longer change or a maximum number of iterations has been reached.
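For concreteness, a minimal NumPy sketch of the clustering loop in Algorithm 2 is given below. The function signature is ours, and only the two-cluster case that is actually needed later in the error calculation is exercised in the example.

    import numpy as np

    def kmeans(data, k=2, max_iter=100, seed=None):
        """Minimal K-means: assign each point to its nearest centroid (Eq. 2), then update the means."""
        data = np.asarray(data, dtype=float)
        rng = np.random.default_rng(seed)
        centroids = data[rng.choice(len(data), size=k, replace=False)]
        labels = np.zeros(len(data), dtype=int)
        for _ in range(max_iter):
            # Euclidean distance of every point to every centroid
            dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            new_centroids = np.array([
                data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids

    # Example: split a one-dimensional set of pixel values into two clusters
    labels, centroids = kmeans(np.random.rand(100, 1), k=2)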

Proposed Methodology

The proposed method has been designed with inspiration from previous studies [40, 41]. However, this method based on the ABC algorithm is applied for the first time to fundus images. A flow chart of the proposed method is given in Fig. 2. The algorithm of the method given in the second step of Fig. 2 is as given in Algorithm 3. The parameter values used in the ABC algorithm are available in Table 2.

Obtaining a Feature Matrix

Fundus images are in the RGB color space. Image patches are obtained from the OD and non-OD regions of the fundus images. The size of these patches is 25 × 25 × 3. Here, the third dimension represents the R, G, and B channels of the RGB color space. In the fundus images, random image patches are selected from the non-OD regions. With this random selection, different variations of image patches are obtained, which contributes to the enlargement of the sampling space. Image patches of both OD and non-OD regions are resized and made two-dimensional. The size of the two-dimensional matrix is 625 × 3. The purpose of reducing from three dimensions to two dimensions is that the conversion matrix is two-dimensional. The color pixel values of these image patches obtained from the two different regions are merged.

Algorithm 3 Pseudo Code Of Proposed ABC Algorithm [40]


Initialization:
Generate the first population randomly
Start the employed bee phase
In this phase, produce new food sources based on neighborhood relations
Calculate the cost function for the new food source
If the new food source is better than the current food source, update the food source position (updated W)
Calculate the selection probabilities
Start the onlooker bee phase
Determine the food sources according to the selection probabilities
Head for the best food sources based on neighborhood relationships
Calculate the cost function for the new food source
If the food source found is better than the previous food source, update the food source position (updated W)
Increase the abandonment counter by 1 if the food source is not better
Start the scout bee phase
If the abandonment counter is greater than the limit value
Forget everything and create a new population
Reset the abandonment counter
Memorize the best food source position (solution) and exit
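The sketch below shows, under our own naming and with the settings from Table 2 (nine decision variables bounded in [-10, 10], a colony of five food sources, and 50 cycles), how an ABC loop of the kind outlined in Algorithm 3 could drive the search for the 3 × 3 conversion matrix. The cost argument stands for the K-means-based misclassification count described under "Error Calculation" below; the neighbourhood update and the abandonment limit follow the standard ABC recipe rather than any detail confirmed by the paper.

    import numpy as np

    def neighbour(x, population, rng, low=-10.0, high=10.0):
        """Perturb one randomly chosen dimension of x toward a random peer: v_j = x_j + phi*(x_j - x_kj)."""
        j = rng.integers(x.size)
        k = rng.integers(len(population))
        phi = rng.uniform(-1.0, 1.0)
        v = x.copy()
        v[j] = np.clip(x[j] + phi * (x[j] - population[k][j]), low, high)
        return v

    def abc_optimize(cost, n_sources=5, dim=9, cycles=50, limit=20, seed=None):
        """ABC-style search over the 9 entries of the 3 x 3 conversion matrix."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-10.0, 10.0, size=(n_sources, dim))   # random initialization, Eq. (1)
        fit = np.array([cost(x) for x in pop], dtype=float)
        trials = np.zeros(n_sources, dtype=int)
        best, best_fit = pop[fit.argmin()].copy(), fit.min()
        for _ in range(cycles):
            # Employed bee phase: greedy selection on a neighbour of every source
            for i in range(n_sources):
                v = neighbour(pop[i], pop, rng)
                fv = cost(v)
                if fv < fit[i]:
                    pop[i], fit[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1
            # Onlooker bee phase: better sources are revisited with higher probability
            prob = (fit.max() - fit) + 1e-12
            prob /= prob.sum()
            for _ in range(n_sources):
                i = rng.choice(n_sources, p=prob)
                v = neighbour(pop[i], pop, rng)
                fv = cost(v)
                if fv < fit[i]:
                    pop[i], fit[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1
            # Scout bee phase: abandon an exhausted source and re-initialize it at random
            worst = int(trials.argmax())
            if trials[worst] > limit:
                pop[worst] = rng.uniform(-10.0, 10.0, size=dim)
                fit[worst] = cost(pop[worst])
                trials[worst] = 0
            if fit.min() < best_fit:
                best, best_fit = pop[fit.argmin()].copy(), fit.min()
        return best.reshape(3, 3)   # the optimized conversion matrix W

A call such as W = abc_optimize(cost=lambda v: count_misclassified(v.reshape(3, 3), feature_matrix)) would tie this loop to the fitness sketched later; count_misclassified and feature_matrix are hypothetical names used only for illustration.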


Fig. 2  Flow chart

Thus, a 1250 × 3 matrix is created. This matrix is called the feature matrix. Merging is the process of adding the color pixel values of the image patches one after the other. That is, if the first half of the matrix belongs to the OD region, the other half belongs to the non-OD region. The image patches obtained to create the feature matrix are shown in Fig. 3. The feature matrix is built with 500 image patches.
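A minimal NumPy sketch of this stacking step is given below, assuming the patches have already been cropped to 25 × 25 × 3; the function name is ours. With one patch per class, the stacked block has the 1250 × 3 layout described above; with more patches, the OD block simply grows to fill the first half of the rows.

    import numpy as np

    def build_feature_matrix(od_patches, non_od_patches):
        """Stack the RGB pixel values of OD patches on top of non-OD patches.

        od_patches and non_od_patches are arrays of shape (n, 25, 25, 3). Each patch is
        flattened to 625 x 3; the OD block forms the first half of the rows and the
        non-OD block the second half.
        """
        od_block = np.asarray(od_patches).reshape(-1, 3)
        non_od_block = np.asarray(non_od_patches).reshape(-1, 3)
        return np.vstack([od_block, non_od_block])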


Table 2  Parameters used for the ABC algorithm

Values of decision variables: 9
Number of colonies: 5
Lower limit: −10
Upper limit: 10
Maximum cycle number of iterations: 50
Upper limit of the acceleration coefficient: 0.5
Obtaining a Conversion Matrix

The initial values of the conversion matrix are generated randomly. This matrix is a 3 × 3 matrix. The reason for choosing this dimension is that the original fundus images are of size m × n × 3. The conversion matrix was chosen as 3 × 3 so that the size of the original fundus image does not change as a result of the transformation. This matrix, whose initial values are randomly generated, is updated with the ABC algorithm and becomes optimal when it reaches the stopping criteria. In the proposed method, the conversion matrix corresponds to the food source position updated by the bee (see Algorithm 3). The optimal conversion matrix obtained is given in Eq. (3):

W = \begin{pmatrix} -5.2802 & -5.9726 & -1.1628 \\ 6.4346 & -4.6705 & -1.0061 \\ 10.5515 & -7.9098 & -3.7971 \end{pmatrix}   (3)
Error Calculation

The purpose of this stage is to calculate the error value that occurs when the pixel values belonging to both OD and non-OD patches are clustered. In this stage, the K-means clustering algorithm is used for the three color channels of the feature matrix. The Euclidean distance is used to calculate the distance between data points and cluster centers. For each color channel, the error values e1 and e2 are calculated by Eq. (4):

e_1 = \sum_{i=1}^{625} e_i, \qquad e_2 = \sum_{i=626}^{1250} e_i   (4)

where the parameters e1 and e2 represent the number of misclassified pixels; e1 occurs in the first half and e2 in the second half. These pixels are taken from the converted feature matrix. The e1 and e2 values evaluated for each channel are summed as in Eq. (5):

E = \sum_{i=1}^{3} (e_{1i} + e_{2i})   (5)

where E is the total number of misclassified pixels.
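The cost evaluated inside the ABC loop can be sketched as follows. The paper does not spell out exactly how a misclassified pixel is identified, so the version below takes one plausible reading: after moving the feature matrix into the candidate space, each channel is split into two K-means clusters, and pixels falling outside the dominant cluster of their half (OD rows first, non-OD rows second) are counted as errors, per Eqs. (4) and (5). The function name and the direction of the matrix product are our assumptions; kmeans is the sketch given with Algorithm 2.

    import numpy as np

    def count_misclassified(w, feature_matrix):
        """Cost for the ABC search: total misclassified pixels E over the three channels."""
        converted = feature_matrix @ w.T          # move the pixel values to the candidate space
        half = len(converted) // 2                # first half: OD rows, second half: non-OD rows
        total_error = 0
        for ch in range(3):
            values = converted[:, ch:ch + 1]      # one colour channel as an (N, 1) data set
            labels, _ = kmeans(values, k=2)       # two-cluster split of this channel
            od_labels, bg_labels = labels[:half], labels[half:]
            e1 = np.sum(od_labels != np.bincount(od_labels).argmax())
            e2 = np.sum(bg_labels != np.bincount(bg_labels).argmax())
            total_error += e1 + e2
        return total_error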

Fig. 3  The first row shows OD patches; the second row shows non-OD patches


Fig. 4  Pre-processed image. a Original fundus image, CLAHE applied to R, G and B channels respectively (b, c, d)

Moving to New Space

To move an image into the new space, the preprocessed dataset image is linearly multiplied by the optimal conversion matrix. The contrast limited adaptive histogram equalization (CLAHE) method is used in the pre-processing phase. This method has been used as a pre-processing method in many studies [4, 42, 43]. This method is applied to increase the contrast of the images. The CLAHE method works on small image regions. These small regions are called tiles. The CLAHE method works on these small regions instead of the whole image. The size of the tiles, the limit on the contrast increase, and the number of bins used for the histogram are set to 8 × 8, 0.001, and 256, respectively. The histogram shape used for the image tiles is "uniform" [43]. In this study, the CLAHE method was applied to each channel of the color fundus image separately. Since each channel has a different contrast, the CLAHE method is applied to each channel separately. The outputs of this operation on the three channels are shown in Fig. 4.

Each preprocessed channel is vectorized. By combining these channels, the enhanced image vector is obtained. The image vector is MN × 3 in size (i.e., MN = m × n). The conversion matrix is then linearly multiplied by this enhanced image vector. As a result of the multiplication, the images move to a new color space. The images in the new space are then normalized. This transformation is visually given in Fig. 5. The purpose of normalization is to express the image pixel values in a certain range. Normalization was carried out between 0 and 255. The mathematical expression of the normalization process for one channel is given in Eq. (6). The image in the new normalized space is reshaped to the original size (MN × 3 → m × n × 3). These operations are given in Algorithm 4:

\mathrm{NormY\_ch}_i = \frac{ch_i - \min_{norm}}{\max_{norm} - \min_{norm}}   (6)

where the min_norm parameter represents the minimum pixel value of the corresponding channel and the max_norm parameter represents the maximum pixel value of the corresponding channel. The parameter NormY_ch_i represents a normalized channel. The parameter ch_i represents the i'th channel. The parameter i can take the values 1, 2, and 3, respectively.

Fig. 5  Channel reshape


Fig. 6  The first row shows the original images. The second row shows the images in the new color space, the third row shows the first channel images in the new space, the fourth row shows the second channel images in the new space, and the fifth row shows the third channel images in the new space. a, b Images of the DRIVE dataset. c, d Images of the MESSIDOR database. e, f Images of the DRIONS dataset

The normalized image matrix in the new space is given in Eq. (7):

\mathrm{NormY\_2D} = \begin{bmatrix} \mathrm{NormY\_ch}_1 & \mathrm{NormY\_ch}_2 & \mathrm{NormY\_ch}_3 \end{bmatrix}_{MN \times 3} \times 255   (7)

where the NormY_2D parameter represents the MN × 3 dimensional image in the normalized new color space.
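Putting the pre-processing, the linear mapping, and the normalization of Eqs. (6) and (7) together, one possible NumPy/scikit-image sketch is shown below. The mapping of the stated CLAHE settings (8 × 8 tiles, 0.001 clip limit, 256 bins) onto the equalize_adapthist arguments, the function name, and the use of vec @ w.T for the linear multiplication are our assumptions.

    import numpy as np
    from skimage import exposure

    def to_new_colour_space(rgb, w):
        """Apply CLAHE per channel, multiply the vectorized image by the 3 x 3 matrix w,
        and rescale each output channel to 0-255 as in Eqs. (6)-(7)."""
        h, wd, _ = rgb.shape
        # CLAHE channel by channel; an 8 x 8 tile grid and a 0.001 clip limit are
        # approximated here with scikit-image's equalize_adapthist
        enhanced = np.stack(
            [exposure.equalize_adapthist(rgb[..., c],
                                         kernel_size=(h // 8, wd // 8),
                                         clip_limit=0.001, nbins=256)
             for c in range(3)], axis=-1)
        vec = enhanced.reshape(-1, 3)             # MN x 3 enhanced image vector
        new = vec @ w.T                           # linear move to the new space
        # per-channel min-max normalization to [0, 255] (Eq. 6), then reshape back
        mins, maxs = new.min(axis=0), new.max(axis=0)
        norm = (new - mins) / (maxs - mins + 1e-12) * 255.0
        return norm.reshape(h, wd, 3).astype(np.uint8)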
In Fig. 6, the results of the transformation into the new color space of two randomly selected images from each dataset [35–37] are shown. In addition, this figure shows all three channels of the dataset images in the new space. When the images are examined carefully, it is seen that the OD region is very clear and the vascular structures are clearly visible. Another point to note in this figure is that the color pixel values of the fundus image are very close to each other in the RGB color space. The OD region has bright color pixels. However, it contains blood vessels. For this reason, it has taken its share of the red hue throughout the image. The color pixel values of the OD region are separated from the rest of the image with the ABC algorithm. Therefore, the OD region shows itself with a dominant hue in the new color space. Here, the ABC algorithm aims to reach the optimal result by using the color pixel values (i.e., the feature matrix) of the image patches taken from two different regions. At the optimal result, the conversion matrix that will separate the color pixels in the new color space has been reached.


Experimental Setup

Thresholding is performed on the fundus images in the new space to obtain the OD localization. This thresholding process is based on eigenvalues. After this process, a morphological procedure is applied, and candidate OD regions are found. In the thresholding process, the eigenvalues of the two-dimensional image in the new space are obtained. The two-dimensional image in the new space is represented by the parameter Y_2D. This image has three eigenvalues (λ_1, λ_2, λ_3). A maximum eigenvalue is calculated for each image. The threshold image is obtained by multiplying the current channel of the image by the maximum eigenvalue. With this thresholding process, OD candidate regions are found. Figure 7 shows the thresholding results of various images of the datasets [35–37]. In fact, what this paper tries to present here is to show that the OD region is segmented by applying a threshold method to an image in the new color space. Then, the thresholded images are converted into black-and-white images, and morphological operations are applied. The morphological operations consist of four steps. All objects smaller than 300 pixels are first deleted from the image. Then a circular structuring element with a radius of 15 pixels is created. In the third step, the dilation operator is applied. The dilation and erosion processes are given in Eq. (8). In the last step of the morphological operations, the circularity value is checked. Finally, the OD region is marked on the morphologically processed images. The operations given at this stage are given in Algorithm 5.

I\_bw2 = [\,I\_bw1 \oplus se\,] \;\Theta\; se   (8)

where the parameter se represents the structuring element. The parameter I_bw1 represents the latest black-and-white image. The dilation operation is denoted by ⊕. The erosion operation is denoted by Θ. The parameter I_bw2 represents the image after the morphological operations are applied.

Fig. 7  Thresholding results of images in the new color space. a, b represent images of the DRIVE dataset. c, d represent images of the DRIONS dataset. e, f represent images of the MESSIDOR database


Fig. 8  DRIVE dataset. The first row shows the first five images belonging to the test set. The second row shows the first five images belonging to the training set

The circularity value of each candidate is examined in order to obtain the estimated result among the candidate OD regions. Considering the geometric structure of the OD, it actually has a circular structure. Candidate regions are therefore expected to be circular or close to circular in shape. For this, objects that do not fit the circular structure, with a circularity smaller than 0.35, are removed from the candidate regions. Here, 0.35 is a parameter found through experimental results and represents circularity. In other words, it was observed on approximately 100 images that objects with a circularity smaller than 0.35 are not OD regions. For the remaining candidate regions, the object meeting the maximum roundness criterion is selected as the OD region. The circularity value of an object is calculated as given in Eq. (9):

CV = (4 \times Area \times \pi) / Perimeter^2   (9)

where the CV parameter represents the circularity value.
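Eq. (9) maps directly onto the region properties available in scikit-image; a small sketch that keeps the roundest candidate above the 0.35 cut-off mentioned above is shown below (the function name is ours).

    import math
    from skimage import measure

    def pick_most_circular(mask, min_cv=0.35):
        """Keep the candidate region with the highest circularity CV = 4*pi*Area / Perimeter^2 (Eq. 9)."""
        best_region, best_cv = None, min_cv
        for region in measure.regionprops(measure.label(mask)):
            if region.perimeter == 0:
                continue
            cv = 4.0 * math.pi * region.area / region.perimeter ** 2
            if cv > best_cv:                      # regions below the 0.35 threshold are discarded
                best_region, best_cv = region, cv
        return best_region, best_cv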
Result and Discussion

Three publicly available datasets were used: DRIVE [35], MESSIDOR [36], and DRIONS [37]. Figures 8, 9 and 10 show some examples of the detected OD from each dataset. The blue sign presents the detected optic disc location for each image. When the blue sign intersects the marked OD region (green circle), it is accepted as a correct location. This proposed method has moved the fundus images from the RGB color space to a new color space. The advantage of this new color space is that the OD region can be easily obtained with a threshold value. In this section, we have compared the proposed method with other state-of-the-art methods examined in the literature to validate its accuracy and robustness.

Fig. 9  The first ten images belonging to the Base31 folder of the MESSIDOR dataset


Fig. 10  DRIONS dataset images 11, 21, 31, 41, 51, 61, 71, 81, 91, and 101, respectively

Table 3 presents the performance of the localization compared with other state-of-the-art methods examined in the literature. In this table, only the detection of OD localization is emphasized. The success rate of the method is measured to evaluate the algorithm. The success rate is the ratio of the number of OD localizations marked correctly in the fundus images to the number of images in the whole dataset. In other words, if the OD region is marked correctly in an image, it counts as a success. For example, the OD localization of all 40 fundus images of the DRIVE dataset is correctly marked. Therefore, the success rate is 100%. Since we examined the success of localization detection and not the segmentation of the OD, we only considered the success rate.

There are four images that fail in the DRIONS database. These are the images named image_065, image_084, image_102, and image_110. Image_084 failed because its OD candidate regions fall below the circularity value (see Algorithm 5).

At least two candidate regions emerged in the images image_065, image_102, and image_110. Of these candidate regions, the region with the maximum circularity value is marked as the OD region. In these images, the candidate OD region was chosen incorrectly in the morphological processing step (Fig. 11). The OD candidate regions in the image named image_065 are shown in white. These results show that the morphological processing steps affect the success rate.

There are unsuccessful results in the MESSIDOR dataset. This is due to two main problems. The MESSIDOR dataset contains images with many different color tones. Therefore, there are images in this dataset with almost no visible OD region. This is the first problem. This problem negatively affects the success rate because the OD region is not distinct from the rest of the fundus image. Figure 12 shows some of these images. The second problem is that the images in the dataset have different colors and intensities. This proposed method is created independently of the dataset. However, the feature matrix is limited to the color pixel values of 500 image patches. Therefore, the color pixel separation of the OD region of the MESSIDOR dataset images in the new color space has become difficult. However, the proposed method achieved satisfactory success even with this varied dataset. The color variation of the MESSIDOR dataset is presented in Fig. 13 with a few images.

Table 3  Comparison of the performance of the proposed method against other state-of-the-art studies

Author | Method | Dataset(s) used | Performance measure
Toman et al. [44] | Weighted majority voting in OD detection | MESSIDOR | 98%
Yu et al. [19] | Directional matched filtering and level sets | MESSIDOR | 99.1%
Lupascu et al. [45] | Texture descriptors and a regression-based method | DRIVE | 95%
Rodrigues and Marengoni [46] | Wavelets, mathematical morphology and Hessian-based multi-scale filtering | DRIVE | 94.65%
Naqvi et al. [18] | Variational active contour model | DRIONS | 96.72%
Naqvi et al. [18] | Variational active contour model | MESSIDOR | 98.60%
Rangayyan et al. [47] | Gabor filters and phase portrait analysis | DRIVE | 100%
Zhu et al. [48] | Hough transform | DRIVE | 90%–95%
Proposed method | – | DRIVE | 100%
Proposed method | – | MESSIDOR | 94.42%
Proposed method | – | DRIONS | 96.37%


Fig. 11  An OD region eliminated as a result of morphological processes

Fig. 12  OD region unclear images

Fig. 13  Fundus images with different intensity and color

Due to the problems caused by the images presented in Figs. 12 and 13, the MESSIDOR dataset showed a competitive performance compared to the state-of-the-art methods. Experimental results were obtained from fundus images in a new color space that was created with a limited number of images, independently of the dataset. What these results show is the performance of the new color space. Therefore, unlike the state-of-the-art methods, the fundus images have been moved out of the RGB color space. The new space can change when the number and variety of images used for the feature matrix are increased. A limited number of image patches are used in this article. Under these conditions, it is promising that the method performs competitively. When the result for the DRIONS dataset is compared with the results of the state-of-the-art methods, there is hard competition. This hard competition has arisen because the morphological correction process is applied as a standard procedure to every image of each dataset, and this morphological post-processing reduces the performance of the proposed method on the DRIONS dataset.

Conclusion

In this article, a method that detects the OD localization is proposed. Firstly, patches are taken from OD and non-OD regions. These patches are of size 25 × 25 × 3 pixels. Here, the expression ×3 represents the three channels of the RGB color space. Therefore, the patches can also be expressed with a size of 625 × 3 pixels. The color pixels of the patches are combined one after another (OD at the top and non-OD at the bottom, or vice versa). The combined matrix is called the feature matrix. This matrix has a size of 1250 × 3 pixels. Then, a 3 × 3 matrix of randomly generated initial values is created. This matrix is called the conversion matrix. This matrix is optimized using the ABC algorithm with the feature matrix. In the ABC algorithm, K-means clustering is used as the error function. Images are moved to the new color space when the optimized conversion matrix is applied to the fundus images. A threshold value is applied to the images moved to the new space. The eigenvalue is used during the thresholding phase.


OD localization is obtained by performing morphological processing on the thresholded images. The success rate obtained with the proposed method for OD localization determination is 100% for DRIVE, 96.37% for DRIONS, and 94.42% for MESSIDOR. This algorithm has been tested on three public databases.

The limitation of this proposed method is that it cannot always accurately detect the OD localization in retinal images in which the OD region is dark. In this case, variable results can be obtained on dark dataset images. Therefore, we can focus on improving the proposed algorithm to overcome this shortcoming in the future.

The scope of the proposed method is to pioneer computer-aided diagnostic systems on retinal images. These systems may be systems that offer OD segmentation and detection of the macula, fovea, and retinal lesions. These systems make the diagnosis of various retinal diseases easier. This proposed method for the detection of OD localization presents a preliminary stage for computer-assisted diagnosis of retinal diseases. Thus, when the OD localization is marked on fundus images, the retinal blood vessels spreading from there can be used. This information provides segmentation of the retinal blood vessels. The macula and fovea area are located at a certain distance from the OD localization. Knowing the OD localization makes the detection of these retinal components easier. The mentioned retinal components enable the detection of diseases such as diabetic retinopathy, glaucoma, and high blood pressure. Moreover, this proposed system can be used to mark the OD localization in the treatment and methods of retinal examinations. The average results of the proposed method show a competitive performance when compared to the state-of-the-art studies in all databases. The proposed method is promising in terms of quantitative measurements. The reliable results of the new color space proposed in this method encourage the application of analysis of retinal components, which is the subject of our future research.

In the future, segmentation of the most important retinal component, the OD, is our first goal. Therefore, we aim to detect the OD in the proposed new color space as robustly and reliably as possible. Since we know the OD localization, it will save time in reaching the exact shape of the OD. The OD region is important in detecting glaucoma disease. In other words, knowing the localization of the OD is a preliminary study for the glaucoma work that we will carry out in the future.

Funding  This study was funded by the Inonu University scientific research and coordination unit with the project number FDK-2020–2109.

Declarations

Ethical Approval  This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to Participate  Not applicable.

Consent for Publication  Not applicable.

Conflict of Interest  The authors declare no competing interests.

References

1. Osareh, A., Mirmehdi, M., Thomas, B., Markham, R.: Automated identification of diabetic retinal exudates in digital colour images. Br. J. Ophthalmol. 87:1220–1223, 2003. https://doi.org/10.1136/bjo.87.10.1220
2. Pathan, S., Kumar, P., Pai, R., Bhandary, S.V.: Automated detection of optic disc contours in fundus images using decision tree classifier. Biocybern. Biomed. Eng. 40:52–64, 2020. https://doi.org/10.1016/j.bbe.2019.11.003
3. Kumar, S., Adarsh, A., Kumar, B., Singh, A.K.: An automated early diabetic retinopathy detection through improved blood vessel and optic disc segmentation. Opt. Laser Technol. 121, 2020. https://doi.org/10.1016/j.optlastec.2019.105815
4. Uribe-Valencia, L.J., Martínez-Carballido, J.F.: Automated Optic Disc region location from fundus images: Using local multi-level thresholding, best channel selection, and an Intensity Profile Model. Biomed. Signal Process. Control. 51:148–161, 2019. https://doi.org/10.1016/j.bspc.2019.02.006
5. Reza, M.N.: Automatic detection of optic disc in color fundus retinal images using circle operator. Biomed. Signal Process. Control. 45:274–283, 2018. https://doi.org/10.1016/j.bspc.2018.05.027
6. Thakur, N., Juneja, M.: Optic disc and optic cup segmentation from retinal images using hybrid approach. Expert Syst. Appl. 127:308–322, 2019. https://doi.org/10.1016/j.eswa.2019.03.009
7. Gui, B., Shuai, R.J., Chen, P.: Optic disc localization algorithm based on improved corner detection. Procedia Comput. Sci. 131:311–319, 2018. https://doi.org/10.1016/j.procs.2018.04.169
8. Dehghani, A., Moghaddam, H.A., Moin, M.S.: Optic disc localization in retinal images using histogram matching. Eurasip J. Image Video Process. 2012. https://doi.org/10.1186/1687-5281-2012-19
9. Pourreza-Shahri, R., Tavakoli, M., Kehtarnavaz, N.: Computationally efficient optic nerve head detection in retinal fundus images. Biomed. Signal Process. Control. 11:63–73, 2014. https://doi.org/10.1016/j.bspc.2014.02.011
10. Harangi, B., Hajdu, A.: Detection of the optic disc in fundus images by combining probability models. Comput. Biol. Med. 65:10–24, 2015. https://doi.org/10.1016/j.compbiomed.2015.07.002
11. Wang, C., Kaba, D., Li, Y.: Level Set Segmentation of Optic Discs from Retinal Images. J. Med. Bioeng. 4:213–220, 2015. https://doi.org/10.12720/jomb.4.3.213-220
12. Ahmed, M.I., Amin, M.A.: High speed detection of optical disc in retinal fundus image. Signal, Image Video Process. 9:77–85, 2015. https://doi.org/10.1007/s11760-012-0412-3
13. Dashtbozorg, B., Mendonça, A.M., Campilho, A.: Optic disc segmentation using the sliding band filter. Comput. Biol. Med. 56:1–12, 2015. https://doi.org/10.1016/j.compbiomed.2014.10.009
14. Mary, M.C.V.S., Rajsingh, E.B., Jacob, J.K.K., Anandhi, D., Amato, U., Selvan, S.E.: An empirical study on optic disc segmentation using an active contour model. Biomed. Signal Process. Control. 18:19–29, 2015. https://doi.org/10.1016/j.bspc.2014.11.003
15. Bharkad, S.: Automatic segmentation of optic disk in retinal images. Biomed. Signal Process. Control. 31:483–498, 2017. https://doi.org/10.1016/j.bspc.2016.09.009


16. Kamble, R., Kokare, M., Deshmukh, G., Hussin, F.A., Mériaudeau, F.: Localization of optic disc and fovea in retinal images using intensity based line scanning analysis. Comput. Biol. Med. 87:382–396, 2017. https://doi.org/10.1016/j.compbiomed.2017.04.016
17. Zhou, W., Yi, Y., Gao, Y., Dai, J.: Optic Disc and Cup Segmentation in Retinal Images for Glaucoma Diagnosis by Locally Statistical Active Contour Model with Structure Prior. Comput. Math. Methods Med. 2019. https://doi.org/10.1155/2019/8973287
18. Naqvi, S.S., Fatima, N., Khan, T.M., Rehman, Z.U., Khan, M.A.: Automatic optic disk detection and segmentation by variational active contour estimation in retinal fundus images. Signal, Image Video Process. 13:1191–1198, 2019. https://doi.org/10.1007/s11760-019-01463-y
19. Yu, H., Barriga, E.S., Agurto, C., Echegaray, S., Pattichis, M.S., Bauman, W., Soliz, P.: Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets. IEEE Trans. Inf. Technol. Biomed. 16:644–657, 2012. https://doi.org/10.1109/TITB.2012.2198668
20. Tan, J.H., Acharya, U.R., Bhandary, S.V., Chua, K.C., Sivaprasad, S.: Segmentation of optic disc, fovea and retinal vasculature using a single convolutional neural network. J. Comput. Sci. 20:70–79, 2017. https://doi.org/10.1016/j.jocs.2017.02.006
21. Yu, S., Xiao, D., Frost, S., Kanagasingam, Y.: Robust optic disc and cup segmentation with deep learning for glaucoma detection. Comput. Med. Imaging Graph. 74:61–71, 2019. https://doi.org/10.1016/j.compmedimag.2019.02.005
22. Liu, S., Hong, J., Lu, X., Jia, X., Lin, Z., Zhou, Y., Liu, Y., Zhang, H.: Joint optic disc and cup segmentation using semi-supervised conditional GANs. Comput. Biol. Med. 115, 2019. https://doi.org/10.1016/j.compbiomed.2019.103485
23. Lim, G., Cheng, Y., Hsu, W., Lee, M.L.: Integrated optic disc and cup segmentation with deep learning. Proc. - Int. Conf. Tools with Artif. Intell. ICTAI. 2016-Janua, 162–169, 2016. https://doi.org/10.1109/ICTAI.2015.36
24. Jana, S., Parekh, R., Sarkar, B.: A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image. Handb. Comput. Intell. Biomed. Eng. Healthc. 65–91, 2021. https://doi.org/10.1016/b978-0-12-822260-7.00012-1
25. Tulsani, A., Kumar, P., Pathan, S.: Automated segmentation of optic disc and optic cup for glaucoma assessment using improved UNET++ architecture. Biocybern. Biomed. Eng. 41:819–832, 2021. https://doi.org/10.1016/j.bbe.2021.05.011
26. Veena, H.N., Muruganandham, A., Senthil Kumaran, T.: A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images. J. King Saud Univ. - Comput. Inf. Sci. 2021. https://doi.org/10.1016/j.jksuci.2021.02.003
27. Sengupta, S., Singh, A., Leopold, H.A., Gulati, T., Lakshminarayanan, V.: Ophthalmic diagnosis using deep learning with fundus images – A critical review. Artif. Intell. Med. 102, 2020. https://doi.org/10.1016/j.artmed.2019.101758
28. GeethaRamani, R., Balasubramanian, L.: Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening. Comput. Methods Programs Biomed. 160:153–163, 2018. https://doi.org/10.1016/j.cmpb.2018.03.020
29. Joshi, S., Karule, P.T.: A review on exudates detection methods for diabetic retinopathy. Biomed. Pharmacother. 97:1454–1460, 2018. https://doi.org/10.1016/j.biopha.2017.11.009
30. Pereira, C., Veiga, D., Mahdjoub, J., Guessoum, Z., Gonçalves, L., Ferreira, M., Monteiro, J.: Using a multi-agent system approach for microaneurysm detection in fundus images. Artif. Intell. Med. 60:179–188, 2014. https://doi.org/10.1016/j.artmed.2013.12.005
31. Wu, J., Zhang, S., Xiao, Z., Zhang, F., Geng, L., Lou, S., Liu, M.: Hemorrhage detection in fundus image based on 2D Gaussian fitting and human visual characteristics. Opt. Laser Technol. 110:69–77, 2019. https://doi.org/10.1016/j.optlastec.2018.07.049
32. Umesawa, M., Kitamura, A., Kiyama, M., Okada, T., Imano, H., Ohira, T., Yamagishi, K., Saito, I., Iso, H.: Relationship between HbA1c and risk of retinal hemorrhage in the Japanese general population: The Circulatory Risk in Communities Study (CIRCS). J. Diabetes Complications. 30:834–838, 2016. https://doi.org/10.1016/j.jdiacomp.2016.03.023
33. Savino, P., Wall, M.: Optic disk edema with cotton-wool spots. Surv. Ophthalmol. 39:502–508, 1995. https://doi.org/10.1016/S0039-6257(05)80057-8
34. Hagiwara, Y., Koh, J.E.W., Tan, J.H., Bhandary, S.V., Laude, A., Ciaccio, E.J., Tong, L., Acharya, U.R.: Computer-aided diagnosis of glaucoma using fundus images: A review. Comput. Methods Programs Biomed. 165:1–12, 2018. https://doi.org/10.1016/j.cmpb.2018.07.012
35. Park, M., Jin, J.S., Luo, S.: Locating the optic disc in retinal images. Proc. - Comput. Graph. Imaging Vis. Tech. Appl. CGIV'06. 141–145, 2006. https://doi.org/10.1109/CGIV.2006.63
36. Decencière, E., Zhang, X., Cazuguel, G., Laÿ, B., Cochener, B., Trone, C., Gain, P., Ordóñez-Varela, J.R., Massin, P., Erginay, A., Charton, B., Klein, J.C.: Feedback on a publicly distributed image database: The Messidor database. Image Anal. Stereol. 33:231–234, 2014. https://doi.org/10.5566/ias.1155
37. Carmona, E.J., Rincón, M., García-Feijoó, J., Martínez-de-la-Casa, J.M.: Identification of the optic nerve head with genetic algorithms. Artif. Intell. Med. 43:243–259, 2008. https://doi.org/10.1016/j.artmed.2008.04.005
38. Karaboga, D.: An idea based on Honey Bee Swarm for Numerical Optimization. Tech. Rep. TR06, Erciyes Univ. 10 (2005)
39. Aslan, S.: A comparative study between artificial bee colony (ABC) algorithm and its variants on big data optimization. Memetic Comput. 12:129–150, 2020. https://doi.org/10.1007/s12293-020-00298-2
40. Toptaş, B., Hanbay, D.: A new artificial bee colony algorithm-based color space for fire/flame detection. Soft Comput. 24:10481–10492, 2020. https://doi.org/10.1007/s00500-019-04557-4
41. Khatami, A., Mirghasemi, S., Khosravi, A., Lim, C.P., Nahavandi, S.: A new PSO-based approach to fire flame detection using K-Medoids clustering. Expert Syst. Appl. 68:69–80, 2017. https://doi.org/10.1016/j.eswa.2016.09.021
42. Jebaseeli, T.J., Deva Durai, C.A., Peter, J.D.: Retinal blood vessel segmentation from diabetic retinopathy images using tandem PCNN model and deep learning based SVM. Optik (Stuttg). 199, 2019. https://doi.org/10.1016/j.ijleo.2019.163328
43. Hashemzadeh, M., Adlpour Azar, B.: Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods. Artif. Intell. Med. 95:1–15, 2019. https://doi.org/10.1016/j.artmed.2019.03.001
44. Toman, H., Kovacs, L., Jonas, A., Hajdu, L., Hajdu, A.: Generalized weighted majority voting with an application to algorithms having spatial output. Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics). 7209 LNAI, 56–67, 2012. https://doi.org/10.1007/978-3-642-28931-6_6
45. Lupaşcu, C.A., Di Rosa, L., Tegolo, D.: Automated detection of optic disc location in retinal images. Proc. - IEEE Symp. Comput. Med. Syst. 17–22, 2008. https://doi.org/10.1109/CBMS.2008.15
46. Rodrigues, L.C., Marengoni, M.: Segmentation of optic disc and blood vessels in retinal images using wavelets, mathematical morphology and Hessian-based multi-scale filtering. Biomed. Signal Process. Control. 36:39–49, 2017. https://doi.org/10.1016/j.bspc.2017.03.014


47. Rangayyan, R.M., Zhu, X., Ayres, F.J., Ells, A.L.: Detection of the optic nerve head in fundus images of the retina with gabor filters and phase portrait analysis. J. Digit. Imaging. 23:438–453, 2010. https://doi.org/10.1007/s10278-009-9261-1
48. Zhu, X., Rangayyan, R.M., Ells, A.L.: Detection of the optic nerve head in fundus images of the retina using the hough transform for circles. J. Digit. Imaging. 23:332–341, 2010. https://doi.org/10.1007/s10278-009-9189-5

Publisher's Note  Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

13
