

ABSTRACT

Machine learning, a recent development in state-of-the-art technology, plays a vital role in image processing applications such as biomedical imaging, satellite image processing, and artificial-intelligence tasks such as object identification and recognition. Globally, the number of patients suffering from diabetic retinopathy is growing rapidly, and its earliest stage cannot be diagnosed by normal eye examination. Detecting diabetic retinopathy as early as possible is increasingly necessary, since it can prevent vision loss in patients with prolonged diabetes, including those affected at a young age. The severity of diabetic retinopathy is based on the presence of microaneurysms, exudates, neovascularization, and haemorrhages. Experts categorize diabetic retinopathy into five stages: normal, mild, moderate, or severe non-proliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR). The proposed deep learning approach, a Deep Convolutional Neural Network (DCNN), gives high accuracy in classifying these stages through spatial analysis. A DCNN is a more complex architecture that draws much of its inference from the human visual perspective. Among the supervised algorithms considered, the proposed solution aims to find a better and optimized way of classifying fundus images with little pre-processing.

1. INTRODUCTION:

1.1 MEDICAL IMAGING:

Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention. Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology, making it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology rather than medical imaging.

As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology, which uses the imaging technologies of X-ray radiography, magnetic resonance imaging, medical ultrasonography (ultrasound), endoscopy, elastography, tactile imaging, thermography, medical photography, and functional imaging techniques such as positron emission tomography.

In the clinical context, "invisible light" medical imaging is generally equated with radiology or "medical imaging", and the medical practitioner responsible for interpreting (and sometimes acquiring) the images is a radiologist. "Visible light" medical imaging involves digital video or still pictures that can be seen without special equipment. Dermatology and wound care are two modalities that use visible-light imagery. Diagnostic radiography designates the technical aspects of medical imaging and in particular the acquisition of medical images. The radiographer or radiologic technologist is usually responsible for acquiring medical images of diagnostic quality, although some radiological interventions are performed by radiologists.

As a field of scientific investigation, medical imaging constitutes a sub-discipline of engineering, physics or medicine depending on the context: research and development in the area of instrumentation, image acquisition (e.g. radiography), modeling and quantification are usually the preserve of biomedical engineering, medical physics, and computer science; research into the application and interpretation of medical images is usually the preserve of radiology and the medical sub-discipline relevant to the medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the techniques developed for medical imaging also have scientific and industrial applications.

Medical imaging is often taken to designate the set of techniques that noninvasively produce images of the internal aspect of the body. In this restricted sense, medical imaging can be seen as the solution of mathematical inverse problems: the cause (the properties of living tissue) is inferred from the effect (the observed signal). In the case of medical ultrasonography, the probe sends ultrasonic pressure waves into the tissue, and the echoes reveal the internal structure. In the case of projection radiography, the probe uses X-ray radiation, which is absorbed at different rates by different tissue types such as bone, muscle and fat. The term noninvasive denotes a procedure in which no instrument is introduced into the patient's body, which is the case for most imaging techniques used. The MIPAV (Medical Image Processing, Analysis, and Visualization) application enables quantitative analysis and visualization of medical images of numerous modalities such as PET, MRI, CT, or microscopy. Using MIPAV's standard user interface and analysis tools, researchers at remote sites (via the internet) can easily share research data and analyses, thereby enhancing their ability to research, diagnose, monitor, and treat medical disorders. MIPAV is a Java application and can be run on any Java-enabled platform such as Windows, UNIX, or Mac OS X.

1.1.1 Goals for MIPAV

MIPAV is designed to meet the following goals:

 To develop computational methods and algorithms to analyze and quantify biomedical data;
 To collaborate with NIH researchers and colleagues at other research centers in applying
information analysis and visualization to biomedical research problems;
 To develop tools (in both hardware and software) to give our collaborators the ability to
analyze biomedical data to support the discovery and advancement of biomedical
knowledge.

1.1.2 Need for MIPAV

Imaging has become an essential component in many fields of bio-medical research and
clinical practice. Biologists study cells and generate 3D confocal microscopy data sets,

virologists generate 3D reconstructions of viruses from micrographs, radiologists identify and
quantify tumors from MRI and CT scans, and neuroscientists detect regional metabolic brain
activity from PET and functional MRI scans. Analysis of these diverse types of images requires
sophisticated computerized quantification and visualization tools. To support scientific research in the NIH intramural program, CIT has made major progress in the development of a platform-independent, n-dimensional, general-purpose, extensible image processing and visualization program.

1.2 RETINAL IMAGING:

Retinal image processing is essential in the diagnosis and treatment of many diseases affecting the retina and the choroid behind it. Diabetic retinopathy is one of the complications of diabetes mellitus affecting the retina and the choroid. Retinal imaging is a recent technological advancement in eye care. It enables optometrists to capture a digital image of the retina, blood vessels and optic nerve located at the back of the eye. This aids in the early detection and management of diseases that can affect both the eyes and overall health, including glaucoma, macular degeneration, diabetes and hypertension. With retinal imaging technology, the most subtle changes to the structures at the back of the eye can be detected. In one such condition, a network of small blood vessels, called choroidal neovascularization (CNV), arises in the choroid and diverts a portion of the blood supplying the retina. As the amount of blood supplying the retina decreases, sight may be degraded, and in severe cases blindness may occur. Physicians try to treat this dangerous disorder by applying optical energy to photocoagulate the neovascularization. An argon laser is used for photocoagulation to cauterize the small vessels, which increases the amount of blood supplying the retina and thus maintains sight. This treatment is carried out over many sessions. The physician asks the patient to fixate his/her eye so that the laser beam can be directed to the affected area. The current success rate of this procedure is below 50% for eradication of CNV following one treatment session, with a recurrence and/or persistence rate of about 50%. The latter condition requires repeating the treatment, and each treatment repetition in turn has a 50% failure rate. Moreover, several studies indicate that incomplete treatment is associated with a poorer prognosis than no treatment. Consequently, the need to develop an automated laser system that treats the whole retina in one session has become apparent. Such a system is intended to scan the retina and track it, applying the laser energy to the whole area except the sensitive objects that may be damaged by the laser energy. The system is assumed to do this by capturing retinal images using a fundus camera. These images are to be accurately segmented to extract the sensitive objects in the retina, such as the blood vessel tree, the optic disk, the macula and the region between the optic disk and the macula. The positions of laser shots are then distributed over the rest of the retina. A robust registration technique is also to be applied to detect the motion parameters of the retina and update the positions of the laser shots accordingly. Moreover, the fundus camera can only provide an image of a portion of the retina, not the whole retina. The physician sometimes needs a complete image of the retina to make a reliable diagnosis and hence plan good treatment. This problem may be overcome by image processing algorithms that build a complete map of the retina.

The retina is a layered structure with several layers of neurons interconnected by synapses. The only neurons that are directly sensitive to light are the photoreceptor cells. These are mainly of two types: rods and cones. Rods function mainly in dim light and provide black-and-white vision, while cones support daytime vision and the perception of color. A third, much rarer type of photoreceptor, the intrinsically photosensitive ganglion cell, is important for reflexive responses to bright daylight. A basic retinal image is shown in Fig 1.

Fig 1. Retinal image

1.3 RETINAL IMAGE ANALYSIS:

Retinal images obtained using Adaptive Optics (AO) have the potential to facilitate early detection of retinal pathologies. Many researchers have been working on retinal images to perform various image processing tasks for the benefit of the health sector. The result of image analysis relies on a preliminary phase of identifying good-quality images, which have high-contrast photoreceptors and vasculature, and on accurate registration of these images. Many researchers have demonstrated automatic assessment of the quality of retinal images taken by a fundus camera against a reference image. Recently, AO has been combined with the scanning laser ophthalmoscope and optical coherence tomography (OCT) to obtain images of the retinal microvasculature and blood flow, and three-dimensional images of living cone photoreceptors, respectively. The photoreceptors are one of the key components of retinal images, acting as an indicator to detect or monitor retinal diseases. In addition, digital imaging has the advantages of easier storage on media that do not deteriorate in quality with time; transmission over short distances throughout a clinic or over large distances via electronic transfer (allowing expert "at-a-distance" opinion in large rural communities); processing to improve image quality; and image analysis to perform objective quantitative analysis of fundus images, with the potential for automated diagnosis. In a research or screening setting, large databases of fundus images may be automatically classified and managed more readily than with labor-intensive observer-driven techniques. Automated diagnosis may also aid decision-making for optometrists. The analysis of a retinal image is illustrated in Fig 2.

Fig 2. Retinal image analysis

Diabetic retinopathy, also known as diabetic eye disease, is damage that occurs to the retina due to diabetes. It is a systemic disease, which affects up to 80 percent of all patients who have had diabetes for 20 years or more. Despite these intimidating statistics, research indicates that at least 90% of these new cases could be reduced with proper and vigilant treatment and monitoring of the eyes. The longer a person has diabetes, the higher his or her chances of developing diabetic retinopathy. According to the International Diabetes Federation, the number of adults with diabetes in the world was estimated at 366 million in 2011, and by 2030 this is expected to rise to 552 million. The number of people with type 2 diabetes is increasing in every country; 80% of people with diabetes live in low- and middle-income countries. India stands first with a 195% increase (18 million in 1995 to a projected 54 million in 2025). Previously, diabetes mellitus (DM) was considered to be present largely among the urban population in India, but recent studies clearly show an increasing prevalence in rural areas as well. Indian studies show a 3-fold increase in the presence of diabetes among the rural population over the last decade or so (2.2% in 1989 to 6.3% in 2003). Studies of the estimated prevalence of type 2 diabetes mellitus and diabetic retinopathy in a rural population of south India show that nearly 1 in 10 individuals above the age of 40 years has evidence of type 2 diabetes mellitus.

Fig 3. Diabetic retinopathy stages

There are five major levels of clinical DR severity. Many
patients have no clinically observable DR early after DM diagnosis, yet there are known
structural and physiologic changes in the retina including slowing of retinal blood flow,
increased leukocyte adhesion, thickening of basement membranes, and loss of retinal pericytes.
The earliest clinically apparent stage of DR is mild non-proliferative diabetic retinopathy(NPDR)
characterized by the development of microaneurysms. The disease can progress to moderate
NPDR where additional DR lesions develop, including venous calibre changes and intraretinal
microvascular abnormalities. The severity and extent of these lesions increase in severe
NPDR, and retinal blood supply becomes increasingly compromised. As a consequence, the non-
perfused areas of the retina send signals stimulating new blood vessel growth, leading to
proliferative diabetic retinopathy(PDR). The new blood vessels are abnormal, friable, and can
bleed easily often causing severe visual loss. Diabetic macular edema(DME) occurs when there
is swelling of the retina due to leaking of fluid from blood vessels within the macula, and can
occur during any stage of DR. The progression from no retinopathy to PDR can take 2 decades
or more, and this slow rate enables DR to be identified and treated at an early stage.
Development and progression of DR is related to duration and control of diabetes. DR in its early
form is often asymptomatic, but amenable to treatment. The Diabetic Retinopathy Study and the
Early Treatment Diabetic Retinopathy Study (ETDRS) showed that treatment with laser
photocoagulation can more than halve the risk of developing visual loss from PDR.

Diabetic eye disease refers to a group of eye problems that patients may face as a consequence of diabetes. All of them can cause severe vision loss or even blindness. The group includes:

Diabetic retinopathy : Damage to the blood vessels in the retina.

Diabetic macular edema (DME) : A consequence of diabetic retinopathy, DME is swelling in an area of the retina called the macula.

Cataract : Clouding of the eye’s lens. Cataracts develop at an earlier age in people with diabetes.

Glaucoma : Increase in fluid pressure inside the eye that leads to optic nerve damage.

In the initial stages of this disease, patients are generally asymptomatic. But in the more
advanced stages, patients may face symptoms that include floaters, blurred vision, distortion, and
visual acuity loss. DR is mainly classified as nonproliferative DR (NPDR) and proliferative DR
(PDR). Depending on the presence of specific DR features, the stages can be identified. The
following list describes the three subclasses of NPDR as well as PDR:

[1] Mild NPDR : At this earliest stage, microaneurysms occur which are small areas of balloon-
like swelling in the retina’s tiny blood vessels.

[2] Moderate NPDR : As the disease progresses, some blood vessels that nourish the retina are
blocked. At this stage numerous microaneurysms and retinal haemorrhages are present.

[3] Severe NPDR : Many more blood vessels are blocked, depriving several areas of the retina
of their blood supply. These areas of the retina send signals to the body to grow new blood
vessels for nourishment.

[4] PDR : At this advanced stage, the signals sent by the retina for nourishment trigger the
growth of new blood vessels. They grow along the retina and along the surface of the clear,
vitreous gel that fills the inside of the eye. They have thin, fragile walls. If they leak blood,
severe vision loss and even blindness can result.

At the present time, the detection of DR is a time-consuming and manual process that requires a trained clinician to examine and evaluate the diagnostic images. The diagnosis can take several days depending on the doctor's availability and the number of patients. Experience has shown that there is a need for an automatic method of DR screening. Recently, image processing and analysis has made progress in detecting DR, but this is still a new field, with room for improvement in both processing time and accuracy.
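The grading task described above — assigning a fundus image to one of the five DR stages — can be sketched with a small convolutional network. This is only an illustrative stand-in, not the architecture proposed in this report: the input size, layer widths and the random input are placeholder assumptions.

```python
import torch
import torch.nn as nn

class TinyDRNet(nn.Module):
    """Toy 5-class DR grader: two conv blocks, then a linear classifier."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),   # regularization in the spirit of dropout
            nn.Linear(32 * 16 * 16, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyDRNet().eval()            # eval() disables dropout for this demo pass
fundus = torch.randn(1, 3, 64, 64)    # stand-in for one preprocessed fundus image
logits = model(fundus)                # one score per DR stage
```

A softmax over `logits` yields the per-stage probabilities; training on labelled fundus images would follow the usual cross-entropy recipe.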

2. LITERATURE SURVEY

S. Wang et al. [1] use a convolutional neural network as a trainable hierarchical feature extractor and a Random Forest (RF) as a trainable classifier. The network has 6 stacked convolutional layers, each followed by subsampling layers, for feature extraction. The Random Forest algorithm is used as an ensemble classification method and applied to retinal blood vessel segmentation. This architecture was evaluated on the DRIVE and STARE databases and achieved accuracies of around 0.98 and 0.97 respectively.
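The division of labour in [1] — convolutional feature extraction feeding a Random Forest — can be illustrated in miniature. The sketch below substitutes a fixed edge-filter bank for the trained convolutional layers and uses synthetic vessel/background patches, so it shows only the pipeline shape, not the paper's method or its DRIVE/STARE results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D correlation; enough for this demo."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def make_patch(vessel, size=9):
    """Synthetic grey patch; vessel patches carry a dark vertical stripe."""
    p = rng.normal(0.6, 0.05, (size, size))
    if vessel:
        p[:, size // 2] -= 0.4
    return p

# Fixed edge filters stand in for the trained convolutional layers of [1].
filters = [np.array([[-1.0, 0.0, 1.0]] * 3),     # vertical edges
           np.array([[-1.0, 0.0, 1.0]] * 3).T]   # horizontal edges

def features(patch):
    responses = [np.abs(conv2d_valid(patch, f)).max() for f in filters]
    return np.array(responses + [patch.mean(), patch.std()])

X = np.array([features(make_patch(v)) for v in [True, False] * 100])
y = np.array([1, 0] * 100)            # 1 = vessel patch, 0 = background

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = rf.score(X, y)                  # training accuracy on separable toy data
```

In the real system the filter responses would come from a trained CNN and the patches from DRIVE/STARE images; the RF stage, however, is used exactly as shown.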
Mrinal Haloi et al. [2] propose a new deep-learning-based computer-aided system for microaneurysm detection. Compared with other deep neural networks, it requires less preprocessing and no vessel extraction, and uses deeper layers for training and testing on the fundus image dataset. It consists of five layers, including convolutional, max-pooling and softmax layers, with additional dropout training to improve accuracy. It achieved a low false-positive rate, with performance measured as 0.96 accuracy, 0.96 specificity and 0.97 sensitivity.
M. Melinscak et al. [3] present automatic segmentation of blood vessels in fundus images using deep max-pooling convolutional neural networks. A 10-layer architecture is deployed to achieve maximum accuracy, working on small image patches. Preprocessing resizes and reshapes the fundus images. The network carries around 4 convolutional and 4 max-pooling layers with 2 additional fully connected layers for vessel segmentation, and achieves an accuracy of around 0.94.
Gardner et al. [4] describe a pioneering diabetic retinopathy screening tool using an artificial neural network with preprocessing techniques. The method learns features from sub-images and relies heavily on a back-propagation neural network. It takes a set of diabetic features in fundus images and compares them against an ophthalmologist's screening set of fundus images. In this holistic approach, recognition rates for vessels, exudates and haemorrhages were 91.7%, 93.1% and 73.8% respectively.
Sohini Roychowdhury et al. [5] propose a novel two-stage hierarchical classification algorithm for automatic DR detection and classification. For automated detection, a novel two-step hierarchical binary classification is used. For classifying lesions from non-lesions, GMM, SVM, KNN and AdaBoost methods are used. They take the 30 top features, such as area, variance of the red channel, green channel and saturation of the object, major and minor axis length, mean pixel values for the green and red channels and intensity, solidity, etc. The DREAM system achieved 100 percent sensitivity and 0.5316 specificity, and reduced the average computation time for DR severity per image from 59.54 s to 3.46 s; overall, feature reduction affects the average computation time.
Jayakumar Lachure et al. [6] detect retinal micro-aneurysms, haemorrhages, exudates and cotton-wool spots as the abnormalities found in fundus images, targeting red and bright lesions in digital fundus photographs. After preprocessing, morphological operations are performed to find microaneurysms, and features such as GLCM and structural features are extracted for classification. The SVM classifier was optimized to 100 percent specificity and 90 percent sensitivity.
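GLCM texture features of the kind used in [6] can be computed directly. The sketch below builds a co-occurrence matrix for a single horizontal offset and derives three classic statistics (contrast, homogeneity, energy); the "smooth" and "rough" textures are synthetic stand-ins for lesion-free and lesion-bearing regions, not fundus data.

```python
import numpy as np
from sklearn.svm import SVC

def glcm(img, levels=8):
    """Normalized grey-level co-occurrence matrix for the (0, 1) offset."""
    q = (img * (levels - 1)).astype(int)      # quantize to `levels` grey levels
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def glcm_features(img):
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, homogeneity, energy])

rng = np.random.default_rng(1)
smooth = [rng.uniform(0.4, 0.6, (16, 16)) for _ in range(40)]  # lesion-free stand-in
rough = [rng.uniform(0.0, 1.0, (16, 16)) for _ in range(40)]   # lesion-like stand-in
X = np.array([glcm_features(im) for im in smooth + rough])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)
```

Contrast is low for smooth textures (neighbouring grey levels are close) and high for rough ones, which is what makes the two classes separable here.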
R. Priya and P. Aruna [7] diagnose diabetic retinopathy using two models, a Probabilistic Neural Network (PNN) and Support Vector Machines. The input color retinal images are pre-processed using grayscale conversion, adaptive histogram equalization, discrete wavelet transform, matched filtering and fuzzy C-means segmentation. Features are then extracted from the preprocessed images for classification. The PNN achieved an accuracy of 89.6 percent and the SVM around 97.608 percent.
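Two of the preprocessing steps listed for [7] — grayscale conversion and histogram equalization — can be written in a few lines of numpy. For brevity this sketch uses global equalization rather than the adaptive variant the paper names, and the low-contrast input image is synthetic.

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted grayscale conversion."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def hist_equalize(gray):
    """Map intensities through the normalized CDF to spread the histogram."""
    g = gray.astype(np.uint8)
    hist = np.bincount(g.ravel(), minlength=256)
    cdf = hist.cumsum() / g.size
    return (cdf[g] * 255).astype(np.uint8)

rng = np.random.default_rng(2)
# Low-contrast synthetic stand-in: intensities bunched into [100, 140).
img = rng.uniform(100, 140, (32, 32, 3))
eq = hist_equalize(to_gray(img))      # contrast now spans nearly 0..255
```

Adaptive histogram equalization applies the same CDF mapping per local tile (with clipping, in CLAHE) instead of once globally.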
Giraddi et al. [8] detect exudates in retinal images of varying color and contrast. A comparative analysis of SVM and KNN classifiers was made for early detection. They use GLCM texture feature extraction to reduce the number of false positives. The true positive rate was around 83.4% for the SVM classifier and around 92% for the KNN classifier; as a result, KNN outperforms SVM with color as well as texture features.
Srivastava et al. [9] introduce the key idea of randomly dropping units along with their connections during training. This work significantly reduces overfitting and gives improvements over other regularization techniques, and it improves the performance of neural networks in vision, document classification, speech recognition, etc. Across the other methods, identifying microaneurysms, exudates and vessel segmentation while maximizing the accuracy rate is the key objective; complexity increases as more preprocessing stages are added, such as deblurring prior to detection, segmentation of blood vessels, rotating cross-sections, mathematical modeling of enhanced light intensity, and morphological reconstruction.
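The dropout idea from [9] is a one-liner in practice. The sketch below uses the "inverted" formulation, where surviving activations are scaled at training time so that the expected activation is unchanged and no rescaling is needed at inference.

```python
import numpy as np

def dropout(x, keep_p, rng):
    """Inverted dropout: zero units with prob 1 - keep_p, scale survivors."""
    mask = rng.random(x.shape) < keep_p
    return x * mask / keep_p

rng = np.random.default_rng(3)
activations = np.ones((1000, 100))          # a dummy hidden-layer activation
dropped = dropout(activations, keep_p=0.5, rng=rng)
# Roughly half the units are zeroed; survivors are scaled to 2.0, so the
# expected activation (1.0) is preserved and test time needs no rescaling.
```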
Meysam Tavakoli et al. [11] present a novel set of computational rules for automatic detection of MAs in fluorescein angiography (FA) fundus images, based on the Radon transform (RT) and multi-overlapping windows. The technique is used to detect retinal landmarks and lesions in order to diagnose DR. Top-hat transformation and an averaging filter are applied during pre-processing to extract the background. After pre-processing, the whole image is divided into sub-images. The optic nerve head (ONH) and vessel tree are then detected and masked by applying the RT in every sub-image. After detecting and masking the retinal vessels and ONH, MAs are detected and counted using the RT and thresholding. The proposed method is evaluated on three different retinal image databases: the Mashhad Database with 120 FA fundus images, the Second Local Database from Tehran with 50 FA retinal images, and part of the Retinopathy Online Challenge (ROC) database with 22 images. The results achieved a sensitivity and specificity of 94% and 75% for the Mashhad database and 100% and 70% for the Second Local Database respectively.
Marwan D. Saleh and C. Eswaran [12] present an automated decision support system for non-proliferative diabetic retinopathy based on MA and haemorrhage (HA) detection. The proposed framework extracts foreground objects, such as the optic disc, fovea and blood vessels, for accurate segmentation of dark-spot lesions in fundus images. A dark-object segmentation method is used to locate abnormal regions such as MAs and HAs. Based on the number and location of MAs and HAs, the system evaluates the severity level of DR. A database of 98 color images is used to evaluate the performance of the developed system. Experimental results show that the proposed system achieves 84.31% and 87.53% sensitivity for the detection of MAs and HAs respectively; in terms of specificity, it achieves 93.63% and 95.08% for the detection of MAs and HAs respectively.
Istvan Lazar and Andras Hajdu [13] propose a strategy for retinal MA detection through local rotating cross-section profile analysis. The approach performs MA detection through the examination of directional cross-section profiles centred on the local maximum pixels of the pre-processed image. Peak detection is applied to every profile, and a set of attributes regarding the size, height and shape of the peak is calculated. Attribute values are recorded as the orientation of the cross-section changes. These values constitute the feature set used in a naive Bayes classification to eliminate spurious candidates. The final score of the remaining candidates can be thresholded further for a binary output. The method has been tested on the Retinopathy Online Challenge and proved competitive with existing approaches, achieving higher sensitivity at low false positive rates, i.e., at 1/8 and 1/4 false positives per image.
Balint Antal et al. [14] propose a two-stage decision support framework for the automatic screening of digital fundus images. Prescreening is the first phase, in which images are either classified as severely diseased (highly abnormal) or sent for further processing. The second step identifies regions of interest with possible lesions in the images that passed the pre-screening step. These regions then serve as input to the specific lesion detectors for detailed analysis. The computational performance of a screening system is increased by the pre-screening procedure; experimental results demonstrate a decrease in the computational burden of the automatic screening system.
Balint Antal et al. [15] develop an ensemble-based system for microaneurysm detection and diabetic retinopathy grading, which demonstrated its high efficiency by taking first position in an open online challenge. The framework is based on a set of (preprocessing method, candidate extractor) pairs, and a search algorithm is used to select an optimal combination. Since the proposed approach is modular, further improvements are possible by integrating more preprocessing methods and candidate extractors. The DR/non-DR grading performance of this detector on the 1200 images of the Messidor database achieved a 0.90 ± 0.01 AUC value, which is competitive with other existing methods.
Anderson Rocha et al. [16] introduce a unified approach for detecting both red and bright lesions in DR images without requiring specific pre- or post-processing. The approach pinpoints the location of each lesion, allowing the expert to assess the image for diagnosis. It builds a visual word dictionary representing points of interest (PoIs) located within regions marked by specialists; fundus images are classified as normal or DR-related pathology based on the presence or absence of these PoIs. Areas under the curve (AUC) of 95.3% and 93.3% are achieved for white and red lesion detection using fivefold cross-validation. The visual dictionary is robust for DR screening of very large, diverse populations with varying cameras, settings and levels of expertise in image capture. Reliable detection of retinal haemorrhages is fundamental to the development of automated screening systems.

Li Tang et al. [17] propose a novel splat feature classification method with application to retinal haemorrhage detection in fundus images. Retinal color images are partitioned into non-overlapping segments (splats) covering the whole image; every splat contains pixels with homogeneous color and spatial location. Features are extracted from every splat relative to its surroundings, using responses from a variety of filter banks, interactions with neighbouring splats, and shape and texture information. An optimal subset of splat features is selected by a filter approach followed by a wrapper approach. Given splats with their associated feature vectors and reference-standard labels, a classifier can then be trained to identify target objects. The classifier is evaluated on the publicly available Messidor dataset; an area under the receiver operating characteristic curve of 0.96 is achieved at the splat level and 0.87 at the image level.
Haniza Yazid, Hamzah Arof and Hazlita Mohd Isa [18] present an early approach to detect exudates and the optic disc from color fundus images based on inverse surface thresholding. The proposed methodology combines several strategies, including fuzzy c-means clustering, edge detection, Otsu thresholding and inverse surface thresholding, and it does not depend on manually selected parameters. The method achieved 98.2% sensitivity and 97.4% specificity on the DIARETDB1 database, and 90.4% sensitivity and 99.2% specificity on the National University Hospital of Malaysia (NUHM) dataset, respectively. This strategy outperforms systems based on watershed segmentation and morphological reconstruction.
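Otsu thresholding, one of the steps combined in [18], picks the grey level that maximizes the between-class variance of the resulting foreground/background split. The sketch below implements it directly and applies it to a synthetic image with a bright "exudate" patch; it illustrates only this single step, not the paper's inverse surface thresholding.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grey level maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

rng = np.random.default_rng(4)
img = rng.normal(60, 10, (64, 64))                    # dark background
img[20:30, 20:30] = rng.normal(200, 10, (10, 10))     # bright "exudate" patch
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t                                        # candidate exudate pixels
```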
Akara Sopharak et al. [19] propose an automatic method to detect exudates from low-contrast digital images of retinopathy patients with non-dilated pupils using a fuzzy c-means (FCM) clustering technique. Contrast-enhancement preprocessing was applied in order to improve the quality of the input image before four features, namely intensity, standard deviation of intensity, hue, and number of edge pixels, were selected as input to the FCM method. The number of required clusters was optimally selected from a quantitative analysis in which it was varied from two to eight clusters. The cluster-number optimization was based on sensitivity and specificity, computed by comparison of the detected results with those of expert ophthalmologists. The positive predictive value and positive likelihood ratio were also used to evaluate the overall performance of the method. From the result of the subtracted cluster with the number of clusters equal to 2, it was found that the proposed method detected exudates with 92.18% sensitivity and 91.52% specificity.
Muhammad Salman Haleem et al. [20] propose a novel approach to automatically extract the true retinal area from a Scanning Laser Ophthalmoscope (SLO) image based on image processing and machine learning approaches. SLOs can be used for early detection of retinal diseases. With the advent of the latest screening technology, the advantage of using SLO is its wide field of view (FOV), which can image a large part of the retina for better diagnosis of retinal diseases. However, during the imaging procedure, artefacts such as eyelashes and eyelids are also imaged along with the retinal area, which poses a major challenge for excluding them. To reduce the complexity of the image processing tasks and provide a useful primitive image pattern, pixels are grouped into different regions, called superpixels, based on regional size and compactness. The framework then computes image-based features reflecting textural and structural information and classifies between retinal area and artefacts. The experimental evaluation has shown good performance, with an overall accuracy of 92%.
Keith A. Goatman et al. [21] have presented the selection of suitable image features for the automatic detection of new vessels on the optic disc. The features were chosen on the basis of their discrimination ability (tested using the nonparametric Wilcoxon rank-sum and Ansari-Bradley dispersion tests) and absence of correlation with other features (tested using the Kendall tau coefficient). Classification was performed using a support vector machine. The system was trained and tested by cross-validation using 38 images with new vessels and 71 normal images without new vessels. Fourteen features were selected, giving an area under the receiver operating characteristic curve of 0.911 for detecting images with new vessels on the disc; that is, the system will rank the abnormal image as the more abnormal 91.1% of the time. The technique could play a useful role as part of an automated retinopathy analysis system.
Akara Sopharak et al. [22] have presented a novel method for the detection of exudates from non-dilated retinal images using mathematical morphology methods. The work examines and proposes a set of optimally adjusted morphological operators to be used for exudate detection on diabetic retinopathy patients' non-dilated-pupil, low-contrast images. These automatically detected exudates are validated by comparison with expert ophthalmologists' hand-drawn ground truths. The results are successful, and the sensitivity and specificity of the exudate detection are 80% and 99.5%, respectively.
Cemal Kose et al. [23] developed an approach called the inverse segmentation method to detect DR. Direct segmentation techniques give poor results in a percentage of cases. The proposed framework exploits the homogeneity of healthy areas rather than dealing with the varying structure of unhealthy regions when segmenting bright lesions (hard exudates and cotton-wool spots). The framework first generates a reference, or extended background, image from a retinal image. Healthy parts of the retinal image, excluding the vessel and OD areas, are used in the estimation of this reference image. Next, the retinal image is divided into two parts, low- and high-intensity regions, based on the intensities of the background image. The background image is used as the dynamic threshold value for segmenting high-intensity and low-intensity degenerations in the image. Both degenerations are segmented separately by using the inverse segmentation method and dynamic thresholding. The performance of the framework is over 95% in detection of the optic disc (OD) and 90% in segmentation of the DR lesions; hence, the method provides high segmentation and evaluation accuracy. In some cases, image-lighting artifacts may affect segmentation performance adversely, which should also be considered as an issue.
C. JayaKumari and R. Maruthi [24] have introduced a contextual clustering algorithm to detect the presence of hard exudates in fundus images. After the preprocessing stage, the proposed algorithm is applied to segment the exudates. The features extracted from the segmented regions are the standard deviation, mean intensity, edge energy, and compactness. These extracted features are given as inputs to an Echo State Neural Network (ESNN) to discriminate between normal and pathological images. A dataset consisting of a total of 50 images has been used to detect the exudates. Out of the 50, 35 images comprising both normal and abnormal cases are used to train the ESNN and the remaining 15 images are used to test the neural network. The proposed algorithm achieves 93% sensitivity and 100% specificity on the exudate-detection task.

Existing System
• Machine learning approaches are applied to extracted features to classify the severity level.
• A Back-Propagation algorithm to train a three-layered Artificial Neural Network (ANN) as the classifier, and Random Forests (RF) for classification.
• A multilayer feed-forward neural network (NN), consisting of an input layer, three hidden layers, and an output layer.
• A Minimum Distance Classifier (MDC), one of the earliest but still effective classification tools.
• A Support Vector Machine (SVM) for classification.
• A supervised Local Fisher Discriminant Analysis (LFDA).
• Modeling of the lesion classes (MAs and non-MAs) using an ensemble of several classifiers, including a Gaussian Mixture Model (GMM), an SVM, and an extension of multimodal mediods.

Disadvantages
• Detection at an early stage, which is crucial for saving a patient's vision, is not possible.
• The image needs to be of higher and better quality than the actual image.
• Current screening systems are additionally unable to image the peripheral retina and require pharmacological pupil dilation.
• The performance of these approaches plateaus, which makes them harder to improve.

Proposed System
In this project we propose a deep learning-based CNN method for the problem of classifying DR in fundus imagery, a medical imaging task with increasing diagnostic relevance. CNNs are a special type of neural network; because of their superior performance on image-oriented tasks, they are now the mainstream model for image-related work. Generally, a CNN contains three basic components: the convolutional layers, the in-place activation operation, and the pooling layers. For classification tasks, there may be several fully connected layers and a classification layer at the end.
We also propose a new model that is better adapted to the small lesions in fundus images.

Advantages
• Correct localization of the optic disc in 999 of 1,000 retinal images.
• Segmentation of retinal vessels at an accuracy of 94.2% in 20 retinal images.
• Detection of hemorrhages, microaneurysms, and vascular abnormalities with a sensitivity of 100% and specificity of 87% in 100 retinal images.
• Detection and separation of exudates, cotton-wool spots, and drusen with a lesion sensitivity of 95% and specificity of 88% in 300 retinal images.
• Detection of retinal images with insufficient image quality with an accuracy of 97.4% in 1,000 retinal images.

Problem Statement
In recent years, many image-processing researchers have engaged in the development of machine learning, and especially deep learning approaches, in fields such as handwritten-digit recognition (the MNIST dataset) and image classification (ImageNet). Our proposed methodology emerged from these key aspects of disease-severity classification from fundus images. In general, classification of diseases with the proposed DCNN architecture[add citation] follows these basic steps to achieve maximum accuracy on an image dataset: i) data augmentation, ii) pre-processing, iii) network initialization, iv) training, v) activation-function selection, vi) regularization, and vii) ensembling of multiple methods. In our proposed diabetic retinopathy classification, the architecture is condensed and its building blocks are: a) data augmentation, b) preprocessing, and c) deep convolutional neural network classification.

Block Diagram

[Block diagram: in both the training phase and the testing phase, input images pass through preprocessing (normalization factors: size, shape, and color) followed by augmentation; the trained network then outputs a severity prediction of mild, moderate, severe, or very severe.]

Retinal image acquisition:

Retinal images of humans play an important role in the detection and diagnosis of diseases including stroke, diabetes, arteriosclerosis, cardiovascular disease and hypertension, to name only the most obvious. Vascular diseases are often life-critical for
individuals, and present a challenging public health problem for society. Therefore, the detection
for retinal images is necessary, and among them the detection of blood vessels is most important.
The alterations about blood vessels, such as length, width and branching pattern, can not only
provide information on pathological changes but can also help to grade diseases severity or
automatically diagnose the diseases. In this module, we upload the retinal images. The fundus of
the eye is the interior surface of the eye, opposite the lens, and includes the retina, optic
disc, macula and fovea, and posterior pole. The fundus can be examined by ophthalmoscopy
and/or fundus photography. The retina is a layered structure with several layers
of neurons interconnected by synapses. In retina we can identify the vessels. Blood vessels show
abnormalities and alterations at early stages. Generalized arteriolar and venular narrowing, which is related to higher blood pressure levels, is generally expressed by the arteriolar-to-venular diameter ratio. In this work, we have constructed a dataset of images
for the training and evaluation of our proposed method. This image dataset was acquired from
publicly available datasets such as DRIVE and STARE. Each image was captured using 24 bits per pixel (standard RGB) at 760 x 570 pixels. First, the proposed method has only been tested
against normal images which are easier to distinguish. Second, some level of success with
abnormal vessel appearances must be established to recommend clinical usage. As can be seen, a
normal image consists of blood vessels, optic disc, fovea and the background, but the abnormal
image also has multiple artifacts of distinct shapes and colors caused by different diseases. In the figure, (a) denotes a normal image and (b) a diseased image.
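As an illustrative sketch in Python (the directory layout, file extensions, helper names, and the 80/20 split are assumptions for this example, not properties of the DRIVE or STARE datasets), assembling training and evaluation file lists might look like:

```python
from pathlib import Path
import random

def collect_images(root, exts=(".tif", ".ppm", ".jpg", ".png")):
    """Gather fundus-image paths under `root`, sorted for reproducibility.
    The extension list is an illustrative assumption."""
    return sorted(p for p in Path(root).rglob("*") if p.suffix.lower() in exts)

def train_test_split(paths, train_frac=0.8, seed=42):
    """Shuffle deterministically and split into training and testing lists."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_frac)
    return paths[:cut], paths[cut:]
```

Sorting before the seeded shuffle makes the split reproducible across runs, which matters when the same partition must be reused for training and evaluation.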

Preprocessing

In this module, we perform a grayscale conversion operation to identify the black and white illumination. Noise in a colored retinal image is normally due to noise pixels and pixels whose color is distorted, so a sharpening filter is implemented to enhance and sharpen the vascular pattern; this performs well for the preprocessing, enhancement, and blood-vessel segmentation of retinal images.

Human perception is highly sensitive to edges and fine details of an image, and since they
are composed primarily by high frequency components, the visual quality of an image can be
enormously degraded if the high frequencies are attenuated or completely removed. In contrast,
enhancing the high-frequency components of an image leads to an improvement in the visual
quality. Image sharpening refers to any enhancement technique that highlights edges and fine
details in an image. Image sharpening is widely used in printing and photographic industries for
increasing the local contrast and sharpening the images. In principle, image sharpening consists
of adding to the original image a signal that is proportional to a high-pass filtered version of the
original image. In this filter, the original image is first filtered by a high-pass filter that extracts
the high-frequency components, and then a scaled version of the high-pass filter output is added
to the original image, thus producing a sharpened image of the original. Note that the
homogeneous regions of the signal, i.e., where the signal is constant, remain unchanged.
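A minimal pure-Python sketch of this sharpening step (the 3 x 3 Laplacian kernel and unit scale factor are illustrative assumptions, not values specified in this work):

```python
# Unsharp-mask style sharpening: out = original + k * highpass(original).
# A 3x3 Laplacian kernel acts as the high-pass filter; border pixels are
# left unchanged for simplicity. Pure Python, single channel (0-255).
LAPLACIAN = [[0, -1, 0],
             [-1, 4, -1],
             [0, -1, 0]]

def sharpen(img, k=1.0):
    """img: 2-D list of gray levels. Returns a sharpened copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # High-pass response at (y, x); zero in homogeneous regions.
            hp = sum(LAPLACIAN[dy + 1][dx + 1] * img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            # Add the scaled high-pass output back and clamp to range.
            out[y][x] = max(0, min(255, int(img[y][x] + k * hp)))
    return out
```

In a constant region the Laplacian response is zero, so homogeneous areas pass through unchanged, matching the description above.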

The primary step involved in the preprocessing is resizing the images. Before being fed into the architecture for classification, the images are converted to grayscale and then to the 'L' model, a monochrome representation that highlights the microaneurysms and vessels in the fundus images. Finally, the images are flattened into a single dimension for further processing.
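The grayscale ('L'-model) conversion and flattening steps can be sketched as follows; the ITU-R 601 luma weights used here are the formula behind PIL's 'L' mode, assumed as the conversion rule:

```python
def to_gray(rgb_img):
    """Convert an H x W x 3 nested-list RGB image to 2-D grayscale using
    the ITU-R 601 luma weights (the formula behind PIL's 'L' mode)."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_img]

def flatten(gray_img):
    """Flatten a 2-D grayscale image into a 1-D feature vector."""
    return [px for row in gray_img for px in row]
```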

Normalization

SIZE NORMALIZATION

The first step is to resize different images into a uniform scale so that all fundus areas in
different images have the same diameter. The black borders on each side of the fundus image are
removed at the outset by summing the images horizontally and vertically and discarding regions
that correspond to values under a selected threshold. Then, the images are resized to fixed
dimensions.
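A hedged sketch of this size-normalization step (the intensity threshold and the nearest-neighbour resampling are simplifying assumptions):

```python
def crop_borders(gray, thresh=10):
    """Drop rows/columns whose mean intensity is below `thresh`, removing
    the black borders around the fundus. `thresh` is an assumed value."""
    h, w = len(gray), len(gray[0])
    rows = [y for y in range(h) if sum(gray[y]) / w > thresh]
    cols = [x for x in range(w) if sum(gray[y][x] for y in range(h)) / h > thresh]
    return [[gray[y][x] for x in cols] for y in rows]

def resize_nearest(gray, new_h, new_w):
    """Nearest-neighbour resize to the fixed target dimensions."""
    h, w = len(gray), len(gray[0])
    return [[gray[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]
```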

SHAPE NORMALIZATION

Some fundus images are complete circles, whereas others may lack the top and bottom
margins. In addition, many devices capture a small notch on the edge of the circle. To unify the
shapes of these images, we use a mask that contains the largest common area of all images from
different sources to obscure unwanted parts of the image.
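As a simplified stand-in for the "largest common area" mask (a centred circle is assumed here rather than a mask computed from all image sources):

```python
def apply_circular_mask(gray, margin=0):
    """Zero out everything outside a centred circle. The radius (half the
    smaller dimension minus `margin`) is a simplifying assumption standing
    in for the pre-computed largest-common-area mask."""
    h, w = len(gray), len(gray[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = min(h, w) / 2 - margin
    return [[gray[y][x] if (y - cy) ** 2 + (x - cx) ** 2 <= r * r else 0
             for x in range(w)]
            for y in range(h)]
```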

COLOR NORMALIZATION

After the shape of each image is normalized, its color must be tuned because different
devices may produce images with different color temperatures, and the illumination conditions
can vary. Our method of color tuning is simple: we shift each of the RGB channels of a fundus
image to a pre-calculated mean and truncate the values above 255.
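A small sketch of this color-tuning rule (the target channel means are illustrative assumptions; real values would be pre-calculated from the training set):

```python
def normalize_color(img, target_means=(120.0, 65.0, 40.0)):
    """Shift each RGB channel of `img` (H x W lists of (r, g, b) tuples)
    so its mean matches a pre-calculated target, truncating to [0, 255].
    The target means here are illustrative assumptions."""
    n = len(img) * len(img[0])
    means = [sum(px[c] for row in img for px in row) / n for c in range(3)]
    shifts = [target_means[c] - means[c] for c in range(3)]
    return [[tuple(max(0, min(255, int(px[c] + shifts[c]))) for c in range(3))
             for px in row]
            for row in img]
```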

Augmentation

In this block, we used the Augmentor software package. Specifically, we augmented our data through the following means:

• flip the image horizontally
• flip the image vertically
• randomly rotate the image in the range of [−25, 25] degrees
• randomly zoom in or out in the range of [0.85, 1.15]
• randomly distort the image

All of these methods were combined for augmenting each image, and a probability of 0.5 was used to determine whether or not to perform each of them.
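The per-transform coin flip can be sketched as below; the flips are implemented directly, while the rotation angle and zoom factor are only sampled, since applying them requires an interpolating library such as Augmentor:

```python
import random

def flip_h(img):   # mirror left-right
    return [row[::-1] for row in img]

def flip_v(img):   # mirror top-bottom
    return img[::-1]

def augment(img, rng=random):
    """Apply each augmentation independently with probability 0.5. The
    flips are performed directly; the rotation angle and zoom factor are
    only sampled here, since applying them needs an interpolating
    library such as Augmentor."""
    if rng.random() < 0.5:
        img = flip_h(img)
    if rng.random() < 0.5:
        img = flip_v(img)
    angle = rng.uniform(-25, 25) if rng.random() < 0.5 else 0.0
    zoom = rng.uniform(0.85, 1.15) if rng.random() < 0.5 else 1.0
    return img, angle, zoom
```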

Deep Neural Network

In deep learning, a deep neural network uses a complex architecture composed of stacked layers that is particularly well adapted to classifying images. For multi-class classification, this architecture is robust and sensitive to each feature present in the images.

• Convolutional Layer
• Pooling Layer
• ReLU Layer
• Dropout layer
• Fully connected Layer
• Classification Layer
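As a pure-Python sketch of the first three building blocks (not the project's actual implementation), a "valid" convolution, ReLU activation, and 2 x 2 max pooling can be written as:

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution of a 2-D list with a rectangular kernel
    (really cross-correlation, as in most deep-learning frameworks)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(w)]
            for y in range(h)]

def relu(fmap):
    """In-place activation: clamp negative responses to zero."""
    return [[max(0, v) for v in row] for row in fmap]

def maxpool2(fmap):
    """2 x 2 max pooling with stride 2."""
    return [[max(fmap[y][x], fmap[y][x + 1],
                 fmap[y + 1][x], fmap[y + 1][x + 1])
             for x in range(0, len(fmap[0]) - 1, 2)]
            for y in range(0, len(fmap) - 1, 2)]
```

Dropout and the fully connected/classification layers are omitted; in practice all of these would come from a deep-learning framework rather than hand-written loops.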

System Requirements

Hardware Requirements:

• Processor : Dual-core processor, 2.60 GHz


• RAM : 4 GB
• Hard disk : 160 GB

Software Requirements:

• Operating system : Windows OS ( 2007, 2008) 64 bit


• Front End : MATLAB 15
• About MATLAB:
• MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where
problems and solutions are expressed in familiar mathematical notation. Typical uses
include:
• Math and computation
• Algorithm development
• Modeling, simulation, and prototyping
• Data analysis, exploration, and visualization
• Scientific and engineering graphics
• Application development, including graphical user interface building
• MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems,
especially those with matrix and vector formulations, in a fraction of the time it would
take to write a program in a scalar noninteractive language such as C or Fortran. The
name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK
projects. Today, MATLAB uses software developed by the LAPACK and ARPACK
projects, which together represent the state-of-the-art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses
in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for
high-productivity research, development, and analysis.
• Toolboxes
• MATLAB features a family of application-specific solutions called toolboxes. Very
important to most users of MATLAB, toolboxes allow you to learn and apply specialized
technology. Toolboxes are comprehensive collections of MATLAB functions (M-files)
that extend the MATLAB environment to solve particular classes of problems. Areas in
which toolboxes are available include signal processing, control systems, neural
networks, fuzzy logic, wavelets, simulation, and many others.

• The MATLAB System
• The MATLAB system consists of five main parts:
• Development Environment. This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, and browsers for viewing help, the workspace, files, and the search path.
• The MATLAB Mathematical Function Library. This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
• The MATLAB language. This is a high-level matrix/array language with control
flow statements, functions, data structures, input/output, and object-oriented
programming features. It allows both “programming in the small” to rapidly create quick
and dirty throw-away programs, and “programming in the large” to create complete large
and complex application programs.
• Handle Graphics
• This is the MATLAB graphics system. It includes high-level commands for two-
dimensional and three-dimensional data visualization, image processing, animation, and
presentation graphics. It also includes low-level commands that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.
• MATLAB functions:
• A MATLAB “function” is a MATLAB program that performs a sequence of operations
specified in a text file (called an m-file because it must be saved with a file extension of
*.m). A function accepts one or more MATLAB variables as inputs, operates on them in
some way, and then returns one or more MATLAB variables as outputs and may also
generate plots, etc.
• Some functions are
• imread() – Read an image from a graphics file.
• A = imread(filename, fmt) reads a grayscale or color image from the file specified
by the string filename. If the file is not in the current folder, or in a folder on the
MATLAB path, specify the full pathname.
• The text string fmt specifies the format of the file by its standard file extension. For
example, specify 'gif' for Graphics Interchange Format files. To see a list of supported
formats, with their file extensions, use the imformats function. If imread cannot find a file
named filename, it looks for a file named filename.fmt.
• The return value A is an array containing the image data. If the file contains a grayscale
image, A is an M-by-N array. If the file contains a truecolor image, A is an M-by-N-by-3
array. For TIFF files containing color images that use the CMYK color space, A is an M-
by-N-by-4 array. The class of A depends on the bits-per-sample of the image data,
rounded to the next byte boundary. For example, imread returns 24-bit color data as an
array of uint8 data because the sample size for each color component is 8 bits.
• [X, map] = imread(...) reads the indexed image in filename into X and its associated
colormap into map. Colormap values in the image file are automatically rescaled into the
range [0,1].
• imwrite() – Write an image to a graphics file.
• imwrite(A,filename,fmt) writes the image A to the file specified by filename in the
format specified by fmt.
• A can be an M-by-N (grayscale image) or M-by-N-by-3 (truecolor image) array, but it
cannot be an empty array. For TIFF files, A can be an M-by-N-by-4 array containing color data that uses the CMYK color space. For GIF files, A can be an M-by-N-by-1-by-
P array containing grayscale or indexed images — RGB images are not supported. For
information about the class of the input array and the output image. filename is a string
that specifies the name of the output file. fmt can be any of the text strings listed. This list
of supported formats is determined by the MATLAB image file format registry.
See imformats for more information about this
registry.
• imwrite(X,map,filename,fmt) writes the indexed image in X and its associated colormap map to filename in the format specified by fmt. If X is of class uint8 or uint16, imwrite writes the actual values in the array to the file. If X is of class double, imwrite offsets the values in the array before writing, using uint8(X–1). map must be a valid MATLAB colormap. Note that most image file formats do not support colormaps with more than 256 entries. When writing multiframe GIF images, X should be a 4-dimensional M-by-N-by-1-by-P array, where P is the number of frames to write.
• imwrite(...,filename) writes the image to filename, inferring the format to use from the filename's extension.
• imwrite(...,Param1,Val1,Param2,Val2...) specifies parameters that control various
characteristics of the output file for HDF, JPEG, PBM, PGM, PNG, PPM, and TIFF files.
For example, if you are writing a JPEG file, you can specify the quality of the output
image. The parameters available for each format are listed in the imwrite documentation.
• MATLAB applications:
• The MATLAB Application Program Interface (API). This is a library that allows
you to write C and Fortran programs that interact with MATLAB. It include facilities for
calling routines from MATLAB (dynamic linking), calling MATLAB as a computational
engine, and for reading and writing MAT-files.
• What Is Simulink?
• Simulink, a companion program to MATLAB, is an interactive system for
simulating nonlinear dynamic systems. It is a graphical mouse-driven program that
allows you to model a system by drawing a block diagram on the screen and
manipulating it dynamically. It can work with linear, nonlinear, continuous-time,
discrete-time, multirate, and hybrid systems. Blocksets are add-ons to Simulink that provide additional libraries of blocks for specialized applications like communications,
signal processing, and power systems.
• Real-Time Workshop is a program that allows you to generate C code from your block
diagrams and to run it on a variety of real-time systems.
• Stateflow is an interactive design tool for modeling and simulating complex
reactive systems. Tightly integrated with Simulink and MATLAB, Stateflow provides
Simulink users with an elegant solution for designing embedded systems by giving them
an efficient way to incorporate complex control and supervisory logic within their
Simulink models. With Stateflow, you can quickly develop graphical models of event-
driven systems using finite state machine theory, statechart formalisms, and flow diagram
notation. Together, Stateflow and Simulink serve as an executable specification and
virtual prototype of your system design.

Conclusion

In our proposed solution, the deep neural network is a holistic approach covering all stages of diabetic retinopathy. No manual feature-extraction stage is needed. Our network architecture with dropout techniques yielded significant classification accuracy. The architecture has some drawbacks, such as requiring an additional augmentation stage for images taken from a different camera with a different field of view.

Future View

We can also implement our whole model as an application on mobile phones, so as to make diabetic retinopathy detection easier and less time-consuming.

REFERENCES
[1] S.Wang, et al, “Hierarchical retinal blood vessel segmentation based on feature and ensemble
learning”, Neurocomputing(2014), http://dx.doi.org/10.1016/j.neucom.2014.07.059.
[2] Mrinal Haloi, “Improved Microaneurysm detection using Deep Neural Networks”, Cornell University Library (2015), arXiv:1505.04424.
[3] M. Melinscak, P. Prentasic, S. Loncaric, “Retinal Vessel Segmentation using Deep Neural Networks”, VISAPP (1), (2015): 577-582.
[4] G.Gardner,D.Keating, T.H.Willamson, A.T.Elliott, “Automatic detection of diabetic
retinopathy using an artificial neural network: a screening tool”, British Journal of Ophthalmology, (1996); 80:940-944.
[5] S. Roychowdhury, D. D. Koozekanani, Keshab K. Parhi, “DREAM: Diabetic Retinopathy Analysis Using Machine Learning”, IEEE Journal of Biomedical and Health Informatics, Vol. 18, No. 5, September (2014).
[6] J. Lachure, A. V. Deorankar, S. Lachure, S. Gupta, R. Jadhav, “Diabetic Retinopathy using Morphological Operations and Machine Learning”, IEEE International Advance Computing Conference (IACC), (2015).
[7] R. Priya, P. Aruna, “SVM and Neural Network based Diagnosis of Diabetic Retinopathy”, International Journal of Computer Applications (0975-8887), Volume 41, No. 1, (March 2012).
[8] S.Giraddi, J Pujari, S.Seeri, “Identifying Abnormalities in the Retinal Images using SVM
Classifiers”, International Journal of Computer Applications(0975-8887), Volume 111 – No.6,
(2015).
[9] N.Srivastava, G.Hinton, A.Krizhevsky, I Sutskever, R Salakhutdinov, “Dropout: A simple
way to prevent Neural networks from overfitting”, Journal of Machine learning research(2014)
1929-1958.
[11] Meysam Tavakoli, Reza Pourreza Shahri, Hamidreza Pourreza, Alireza Mehdizadeh, Touka
Banaee, Mohammad Hosein Bahreini Toosi, (2013), ‘A complementary method for automated
detection of microaneurysms in fluorescein angiography fundus images to assess diabetic
retinopathy’, Pattern Recognition, Vol. 46, No. 10, 2013, pp. 2740-2753.

[12] Marwan D. Saleh, C. Eswaran, (2012), ‘An automated decision-support system for non-
proliferative diabetic retinopathy disease based on MAs and Has Detection’, Computer Methods
and Programs in Biomedicine, Vol. 108, No. 1, pp. 186–196.
[13] Istvan Lazar and Andras Hajdu, (2013), ‘Retinal Microaneurysm Detection through Local
Rotating Cross-Section Profile Analysis’, IEEE Transactions on Medical Imaging, Vol. 32, No.
2, pp. 400 - 407.
[14] Balint Antal, Andras Hajdu, Zsuzsanna Maros-Szabo, Zsolt Torok, Adrienne Csutak, Tunde
Peto, (2012), ‘A two-phase decision support framework for the automatic screening of digital
fundus images’, Journal of Computational Science, Vol. 3, No. 5, pp. 262–268.
[15] Balint Antal, and Andras Hajdu, (2012), ‘An Ensemble-Based System for Microaneurysm
Detection and Diabetic Retinopathy Grading’, IEEE Transactions on Biomedical Engineering,
Vol. 59, No. 6, pp. 1720 - 1726.
[16] Anderson Rocha, Tiago Carvalho, Herbert F. Jelinek, Siome Goldenstein, and Jacques
Wainer, (2012), ‘Points of Interest and Visual Dictionaries for Automatic Retinal Lesion
Detection’, IEEE Transactions on Biomedical Engineering, Vol. 59, No. 8, pp. 2244 - 2253.
[17] Li Tang, Meindert Niemeijer, Joseph M. Reinhardt, Mona K. Garvin, and Michael D.
Abràmoff, (2013), ‘Splat Feature Classification with Application to Retinal Hemorrhage
Detection in Fundus Images’, IEEE Transactions on Medical Imaging, Vol. 32, No. 2, pp. 364 -
375.
[18] Haniza Yazid, Hamzah Arof, Hazlita Mohd Isa, (2012), ‘Exudates segmentation using
inverse surface adaptive thresholding’, Measurement, Vol. 45, No. 6, 2012, pp. 1599–1608.
[19] Akara Sopharaka, Bunyarit Uyyanonvaraa, Sarah Barman (2009), ‘Automatic exudate
detection for diabetic retinopathy screening’, doi: 10.2306/scienceasia1513-1874.2009.35.080,
pp. 80–88.
[20] Muhammad Salman Haleem, Liangxiu Han, Jano van Hemert, Baihua Li and Alan Fleming
(2014), Retinal Area Detector from Scanning Laser Ophthalmoscope (SLO) Images for
Diagnosing Retinal Diseases, DOI 10.1109/JBHI.2014.2352271, IEEE Journal of Biomedical
and Health Informatics.
[21] Keith A. Goatman, Alan D. Fleming, John A. Olson, Peter F. Sharp (2010), ‘Feature
selection for detection of new vessels on the optic disc’, Proceedings of Medical Image
Understanding and Analysis 2010, pp. 215-219

[22] Akara Sopharak, Bunyarit Uyyanonvara, Sarah Barman, Thomas H. Williamson (2008),
‘Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using
mathematical morphology methods’, Elsevier Computerized Medical Imaging and Graphics
2008, PP. 720–727. doi:10.1016/j.compmedimag.2008.08.009
[23] Cemal Kose, Ugur Sevik, Cevat Ikibas, Hidayet Erdol,(2012), ‘Simple methods for
segmentation and measurement of diabetic retinopathy lesions in retinal fundus images’,
Computer Methods and Programs in Biomedicine, Vol. 107, No. 2, pp. 274–293.
[24] C. JayaKumari, R. Maruthi, (2012), ‘Detection of Hard Exudates in Color Fundus Images of
the Human Retina’, Procedia Engineering, Vol. 30, pp. 297 – 302.

System Architecture:

[System architecture diagram omitted.]
