A Novel Retinal Segmentation Based On DNNR Model
By
LINU JEBA MELBA L
960417621315
Of
C.S.I INSTITUTE OF TECHNOLOGY, THOVALAI
A PROJECT REPORT
Submitted to the
FACULTY OF INFORMATION AND COMMUNICATION ENGINEERING
of
ANNA UNIVERSITY
CHENNAI
May 2020
ANNA UNIVERSITY, CHENNAI
BONAFIDE CERTIFICATE
Mrs. Julie Emerald Jiju, MCA, M.Tech., M.Phil.
Department of M.C.A
C.S.I Institute of Technology
Thovalai - 629302

Mrs. V. Merin Shobi, MCA, M.Phil.
Department of M.C.A
C.S.I Institute of Technology
Thovalai - 629302
ABSTRACT
Segmenting retinal images helps in the early detection of retinal diseases. A layered, deep learning based approach is used, in which the intensity of each sub-region, the gradient, and the normalized intensity score (ANIS) make the segmentation both accurate and fast. In an evaluation of the method using 114 images, the processing speed for detecting eight boundaries was about 10.596 seconds per image, and training was carried out for each boundary line.
ACKNOWLEDGEMENT
First and foremost, I thank God Almighty for His grace in enabling me to complete this project work successfully.
Words cannot express my gratitude to my respected principal, Dr. K. DHINESH KUMAR, M.E., Ph.D., for his support and the freedom given throughout my course.
I would like to thank Mrs. M. JULIE EMERALD JIJU, MCA, M.Phil., M.Tech., Head of the Department, Department of Computer Applications, for her valuable help and constant encouragement towards my project.
It is my proud privilege to express my sincere thanks to my staff-in-charge, Mrs. V. MERIN SHOBI, MCA, M.Phil., Department of Computer Applications, for her valuable suggestions and continuous guidance throughout the development of this project.
My sincere thanks to all the technical and non-technical faculty members of the M.C.A. department for their valuable suggestions and support in all my moves towards the successful completion of this project.
It is my great pleasure to acknowledge my parents and family members for their prayers and the generous support they extended towards the successful completion of the project.
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 INTRODUCTION
1.1 Retinal image
1.2 Supervised methods
1.3 Line Set Based Feature
1.4 Segmentation
1.5 Image Segmentation
1.6 Retinal Image Segmentation
2 LITERATURE REVIEW
3 SYSTEM DESIGN
4 SYSTEM IMPLEMENTATION
4.1 MODULES
4.1.1 Preprocessing
5 SYSTEM ARCHITECTURE
6 LANGUAGE DESCRIPTION
6.1 Introduction
Characters
6.6 Commands
6.7 M Files
7 SYSTEM TESTING
7.1 Introduction
8 SCREEN SHOTS
9 CONCLUSION
REFERENCE
LIST OF TABLES
LIST OF FIGURES
6.5.4 Workspace
LIST OF ABBREVIATIONS
CHAPTER 1
INTRODUCTION
In recent years, medical image analysis approaches have gradually shifted to deep learning techniques, and diverse deep learning networks are now employed in many medical image analysis tasks. For example, a 3D convolutional neural network (CNN) has been applied to diagnose attention deficit hyperactivity disorder from functional and structural magnetic resonance imaging (MRI).
1.1 Retinal image
Retinal fundus images play an important role in the diagnosis and treatment of cardiovascular and ophthalmologic diseases. However, manual analysis of retinal fundus images is time-consuming and requires empirical knowledge, so it is necessary to develop automatic analysis of retinal fundus images. Retinal blood vessel segmentation is the fundamental step of retinal fundus image analysis because attributes of the retinal blood vessels, such as width, tortuosity, and branching pattern, are important symptoms of diseases. Besides, retinal blood vessel segmentation is also useful for other applications such as optic disk detection: based on the position of the vessels, the optic disk and fovea in the fundus image can be detected through their relative location to the blood vessels. Various methods have been proposed for retinal blood vessel segmentation. They can be divided into two categories: supervised methods and unsupervised methods. Supervised methods obtain segmentation results from labelled images, whereas unsupervised methods do not need labelled images.
1.2 Supervised methods
Unsupervised methods mainly fall into four categories: matched filtering, vessel tracking, morphology processing, and model-based algorithms. Matched filtering methods generally apply filters, such as Gaussian filters or their variants, to fit the shape and gray-level distribution of vessels, and use the corresponding responses to detect vessels. Tracking algorithms based on the connectivity of vessels trace a vessel by starting from an initial vessel point and finding the next vessel point according to designed rules. Morphology-processing-based methods are usually combined with other vessel properties to extract vessel-like structures from retinal images.
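As an illustration of the matched-filtering idea, the sketch below (in Python rather than the MATLAB used later in this report) builds a zero-mean Gaussian cross-section kernel, rotates it over a set of directions, and keeps the maximum response at each pixel. The kernel size, sigma, and number of angles are illustrative choices, not values from any cited method.

```python
import numpy as np
from scipy import ndimage

def matched_filter_response(img, sigma=1.5, length=9, n_angles=12):
    """Max response over rotated zero-mean Gaussian vessel kernels."""
    x = np.arange(-6, 7)                        # cross-section coordinate
    profile = -np.exp(-x**2 / (2 * sigma**2))   # dark vessel on a bright background
    kernel = np.tile(profile, (length, 1))      # repeat the profile along the vessel axis
    kernel -= kernel.mean()                     # zero mean: flat regions give ~0 response
    responses = [
        ndimage.convolve(img.astype(float),
                         ndimage.rotate(kernel, angle, reshape=False, order=1))
        for angle in np.linspace(0, 180, n_angles, endpoint=False)
    ]
    return np.max(responses, axis=0)            # best direction wins at each pixel
```

A pixel on a vessel then stands out against the near-zero background response, and a simple threshold on the response map yields a vessel mask.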
The supervised segmentation methods first regard each pixel as an instance and extract features for it. Training instances are then selected to train a segmentation model. A test image is segmented by the trained model, which assigns a label to each pixel.
For these methods, feature extraction and segmentation-model construction are the two key factors, and several feature-classifier combinations have been proposed: image-ridge-based features with KNN, 2D Gabor wavelets with a GMM classifier, line operators with support vector classification, virtual-template-expansion-based features with cellular neural networks, integrated features with an AdaBoost classifier, gray-level and moment-invariant-based features with a feed-forward neural network, and features considering pathological and vessel structure with boosted decision trees. These features are designed manually, whereas deep learning learns features automatically. Although deep learning can achieve better performance, its parameter tuning is complicated. Line-operator-based features have also been proposed to capture the local shape information of vessels.
1.3 Line Set Based Feature
The shape of a blood vessel can be seen as a rectangle because of its line-segment structure, so a local stretch of blood vessel can be approximated by a local rectangular area. To represent this local rectangular characteristic, we propose line-set-based features; the line sets are used to search for blood-vessel pixels within the local rectangle.
Line-set-based features for each pixel are designed to represent the shape characteristics of the local area containing that pixel. First, two line sets are extracted for a pixel by a searching step; each line set contains many line segments obtained by that step. Line segments are extracted in many directions to represent the shape of the local area, and features are then computed from all the line segments. In the searching step, a search line starts from the instance pixel and proceeds along a given direction; initially, only the starting pixel is in the current search line. If the intensity difference between a pixel on the searching direction and its neighbouring pixel is less than S, the pixel satisfies the searching condition and is added to the current line; otherwise, the search stops and a line segment is obtained. Twenty-four line segments, one every 7.5 degrees, are obtained for line set one and for line set two, which is enough to fill the local area.
The searching lines are numbered from 1 to 24 starting from the x-axis, and 24 line segments are enough to cover the local area. The length of each searching line for line set one is 21, because 21 pixels is close to the largest width of blood vessels, while the length for line set two is 41, because 41 pixels is certainly more than the width of blood vessels, so the shape of blood vessels can be better represented.
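The searching step above can be sketched as follows. This is an illustrative Python reading of the rule (the report's implementation language is MATLAB): a walk from the instance pixel continues while consecutive pixels along the direction differ by less than S, and the default values of S and the direction spacing are placeholders.

```python
import numpy as np

def line_segment_length(img, y, x, angle_deg, S=10, max_len=21):
    """Walk from (y, x) along one direction, adding pixels while the intensity
    difference between consecutive pixels stays below S; return the segment length."""
    dy, dx = np.sin(np.deg2rad(angle_deg)), np.cos(np.deg2rad(angle_deg))
    prev, length = float(img[y, x]), 1          # the starting pixel is always included
    for step in range(1, max_len):
        yy, xx = int(round(y + dy * step)), int(round(x + dx * step))
        if not (0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]):
            break                               # left the image
        cur = float(img[yy, xx])
        if abs(cur - prev) >= S:
            break                               # searching condition fails: stop
        length += 1
        prev = cur
    return length

def line_set_lengths(img, y, x, n_dirs=24, **kw):
    """Lengths of the n_dirs directional segments forming one line set."""
    return [line_segment_length(img, y, x, 360.0 * i / n_dirs, **kw)
            for i in range(n_dirs)]
```

For a pixel on a one-pixel-wide bright stripe, the segments along the stripe reach the maximum length while the perpendicular ones stop immediately, which is exactly the rectangle signature the features encode.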
1.4 Segmentation
We use an SVM as the segmentation model because of its solid theoretical foundation and good generalization ability. We first choose some images as training images, whose blood-vessel segmentations are obtained from the manual labels. We then select pixels from the training images as training data, extract the reinforcement local descriptors of each training pixel, and label these pixels as blood vessel or background according to the manually labelled segmentation image. Finally, these training pixels are used to train the SVM. In the experiments, an RBF kernel is used.
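A minimal sketch of this training step, using scikit-learn's SVC with the RBF kernel; the two-dimensional Gaussian blobs below are hypothetical stand-ins for the reinforcement local descriptors extracted from training pixels.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in features: vessel and background pixels as two separable blobs.
vessel = rng.normal(1.0, 0.3, size=(200, 2))
background = rng.normal(-1.0, 0.3, size=(200, 2))
X = np.vstack([vessel, background])
y = np.array([1] * 200 + [0] * 200)               # 1 = blood vessel, 0 = background

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # RBF kernel, as in the report
```

Segmenting a test image then amounts to extracting the same descriptors for each of its pixels and calling `clf.predict`.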
1.5 Image Segmentation
An image conveys information, and it often contains a great deal of useful information. Understanding an image and extracting information from it to accomplish some task is an important application area of digital image technology, and the first step in understanding an image is image segmentation. In practice, we are usually not interested in all parts of an image, but only in certain areas that share the same characteristics. Image segmentation is one of the hotspots of image processing and computer vision, and it is also an important basis for image recognition. Based on certain criteria, it divides an input image into a number of regions of the same nature in order to extract the areas people are interested in, and it is the basis of image analysis, understanding, feature extraction, and recognition. Many commonly used image segmentation algorithms exist, such as threshold segmentation, edge-detection segmentation, and segmentation based on clustering. At present, the specific operation of the segmentation process is very diverse and complex, and there is no recognized unified standard. Image segmentation is the process of partitioning an image into multiple segments that can be used in applications such as image understanding, robotics, image analysis, and medical diagnosis; it changes the representation of an image into something more meaningful and easier to analyse.
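Threshold segmentation, the first of the classic methods named above, can be made concrete with Otsu's criterion, which picks the gray level that maximizes the between-class variance. The following is a plain NumPy sketch for 8-bit gray levels.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level (0-255) maximizing Otsu's between-class variance."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()                       # gray-level probabilities
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class weights below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

The binary segmentation is then simply `img > otsu_threshold(img)`.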
1.6 Retinal Image Segmentation
The retina is a light-sensitive tissue lining the interior surface of the eye; it is a layered structure with many layers of neurons interconnected by synapses. The central retinal vein and artery appear close to each other at the nasal side of the center of the optic disk. Information about the structure of the blood vessels can help categorize the severity of diseases and can also serve as a landmark during segmentation. Manual segmentation of retinal blood vessels is a difficult task because the vascular structure is a complex tree-like structure; it may take hours to segment the vessels manually. Manual segmentation may also not capture the actual size of a vessel, so there may be a difference between the actual and segmented vessels, and two different observers examining the same retinal image may produce biased results. Automatic segmentation, by contrast, takes only a few minutes or seconds. It is difficult to reach 100% accuracy with automatic segmentation, but it is better to have results close to 100% than to wait for hours.
1.7 Retinal Blood Vessel
The retinal blood vessels are the only deep microvasculature in the human body that can be directly observed without any trauma, so checking their physiological status is extremely important for the diagnosis and treatment of diseases such as high blood pressure, diabetes, atherosclerosis, and other cardiovascular diseases. More significantly, the shape and structure of the retina cannot be observed by the naked eye; it is very well hidden, and the possibility of forgery is very low. Compared with face, iris, and fingerprint identification systems, a retina identification system is harder to cheat and is therefore more secure, with good application prospects in identification and security. The retinal blood vessels are important structures in retinal images, and the information obtained from examining them offers many helpful parameters for the diagnosis or evaluation of ocular or systemic diseases. For example, retinal blood vessels show morphological changes in diameter, length, branching angles, or tortuosity under vascular or nonvascular pathologies such as hypertension, diabetes, and cardiovascular diseases.
1.7.1 Blood Vessel Detection
The objective of computer-based blood vessel mapping is to extract blood-vessel pixels from a digital image. As in several other medical applications, blood vessel mapping is a crucial basis for various computer-based analyses of retinal images; it aims at accurate extraction of the vascular structure at different generation levels. Automated vessel detection has been an open problem studied for many years. A blood-vessel network has a self-similar geometric structure among its branches in the two-dimensional image. Morphological profiles of blood vessels, i.e., diameters, length, tortuosity, and branching angles, provide basic measurements for anatomic and pathological studies. Growth, death, and deformation of blood-vessel segments derived from such studies can be used to assess the general condition of the retina and diseases such as diabetic retinopathy, hypertension, and other cardiovascular complications. Early detection of diabetic retinopathy (DR) increases the chance of early intervention. The blood vessels are also used as landmarks for registration of retinal images of the same patient gathered from different sources. Sometimes, retinal blood vessels must be excluded for simpler detection of pathological lesions such as exudates or microaneurysms. In all cases, proper segmentation of the retinal blood vessels is important; hence, blood vessel detection and segmentation are crucial for diagnosing diabetic retinopathy at an earlier stage.
Public databases provide ground-truth images indicating where each vessel pixel is located. With these databases, researchers can design their algorithms and compare their performance under the same criterion. Presently there exist nine publicly available retinal blood-vessel databases: CHASE DB1, DRIVE, HRF, and STARE contain both retinal colour images and retinal blood-vessel ground-truth images, while DiaRetDB1 V2.1, Messidor, REVIEW, ROC, and VICAVR provide retinal colour images without labelled images. Although all of these databases are of suitable quality and contain both normal and abnormal retinal images, the study of vessel segmentation requires the vessel ground truth as a gold standard, so most retinal blood-vessel segmentation methodologies are evaluated on the DRIVE and STARE databases.
1.8.1 Drive Database
The name of the DRIVE (Digital Retinal Images for Vessel Extraction) database expresses its purpose: to enable comparative studies on blood-vessel segmentation in retinal images. The DRIVE database consists of 40 colour retinal images, randomly selected from 400 diabetic subjects between 25 and 90 years of age; 33 show no sign of diabetic retinopathy, whereas 7 show signs of mild early diabetic retinopathy. Every image is JPEG compressed. The 40 images are divided into a training set and a test set, each containing 20 images. For the training images, a single manual segmentation of the vasculature is available. For the test cases, two manual segmentations are available: one is used as the gold standard, and the other can be used to compare computer-generated segmentations with those of an independent human observer. All human observers who manually segmented the vasculature were instructed and trained by an experienced ophthalmologist and were asked to mark all pixels for which they were at least 70% certain that they belonged to a vessel. The image quality within the DRIVE database is good, and it contains just 7 abnormal retinal images with mild disease, so it can represent the retinal conditions of the majority of people.
1.8.2 Stare Database
The STARE database belongs to the STARE (Structured Analysis of the Retina) Project, which was created and initiated at the University of California, San Diego, and funded by the U.S. National Institutes of Health. The STARE database contains 400 retinal colour images, acquired using a Topcon TRV-50 fundus (bottom of the eyeball) camera with a 35-degree field of view. Each image was captured using 8 bits per colour plane at 605 by 700 pixels, and the approximate diameter of the field of view is 650 by 500 pixels. Twenty of the images can be used for blood-vessel segmentation because they come with vessel ground-truth images; these 20 images were manually segmented by two different experts. The segmented results of the second expert show many more of the thinner vessels than those of the first expert, and performance is usually computed with the first expert's segmentation as the ground truth. Among those 20 images, only 9 are healthy retinal images, while the other 11 show signs of 8 kinds of retinal diseases, mild or severe, and 3 of the images even suffer from decreased sharpness. The STARE database is therefore the most complicated of these databases, and it always tests the noise resistance of an algorithm.
CHAPTER 2
LITERATURE REVIEW
2.1 Blood Vessel Segmentation in Retinal Fundus Images
This project focuses on blood-vessel segmentation in retinal fundus images for the potential application of automatic diabetic retinopathy diagnosis. Five algorithms were implemented based on methods from the relevant literature and then combined using two different approaches in order to exploit their individual advantages. Several performance measures were computed to evaluate each developed method, including accuracy, precision, sensitivity, and specificity. Each of the developed algorithms offers trade-offs between these metrics; however, the modified hybrid algorithm tends to have superior performance when averaging all of them. Diabetic retinopathy is the leading cause of blindness among adults aged 20-74 years in the United States. According to the World Health Organization (WHO), screening the retina for diabetic retinopathy is essential for diabetic patients and will reduce the burden of disease. However, retinal images can be difficult to interpret, and computational image analysis offers the potential to increase the efficiency and diagnostic accuracy of the screening process. Automatic blood-vessel segmentation can help speed diagnosis and improve the diagnostic performance of less specialized physicians. An essential step in feature extraction is blood-vessel segmentation of the original image, and many algorithms have been developed to segment vessels accurately from images with a variety of underlying pathologies and across a variety of ophthalmic imaging systems. This work focuses on implementing existing retinal blood-vessel segmentation algorithms, comparing their performance, and combining them to achieve superior results. For this project, the Digital Retinal Images for Vessel Extraction (DRIVE) database was used; it contains 40 images, 20 for training and 20 for testing, manually segmented by two trained researchers. The algorithms were applied to the original images, and the hand segmentations were used to evaluate the performance of the developed algorithms.
The next section of this report explains five distinct vessel segmentation algorithms developed and applied to the DRIVE database. This section is followed by the pipeline developed for combining these algorithms for superior performance. The performance results of all these algorithms are then presented and compared.
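The four performance measures can be computed directly from the confusion counts of a binary segmentation against its manual ground truth, as in this sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, precision, sensitivity, and specificity from binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)       # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)     # background correctly rejected
    fp = np.sum(pred & ~truth)      # background marked as vessel
    fn = np.sum(~pred & truth)      # missed vessel pixels
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp) if tp + fp else 0.0,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```

Sensitivity is recall on the vessel pixels and specificity is the corresponding rate on the background, which is why the two trade off against each other as the text describes.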
To take advantage of the strengths of each of these methods, we developed a hybrid algorithm for retinal blood-vessel segmentation. Our hybrid algorithm combines the results of the five methods above to achieve an improved segmentation.
Four of the five methods (all but the neural-network-based method, which was essentially binary in practice) return a continuous value between 0 and 1 for each pixel. While thresholding or other post-processing can be used to obtain a binary output from each method, these values can also be thought of as a confidence or probability that a given pixel is part of a blood vessel. Our set of five basic segmentation methods therefore gives five different (and possibly conflicting) confidence estimates for each pixel. The hybrid algorithm maps the confidence estimates returned by each basic algorithm into a single binary classification in two steps. The first combines the five confidence estimates at each pixel into a single value between 0 and 1. The second applies a thresholding technique, based on a-priori knowledge of blood-vessel characteristics, to obtain the final binary classification.
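The two steps can be sketched as follows; plain averaging and a fixed 0.5 threshold are simplifying stand-ins for the report's combination rule and its a-priori thresholding technique.

```python
import numpy as np

def hybrid_segmentation(confidence_maps, threshold=0.5):
    """Fuse per-method confidence maps (step 1) and binarize them (step 2)."""
    fused = np.mean(confidence_maps, axis=0)   # one confidence in [0, 1] per pixel
    return fused, fused >= threshold           # final binary vessel mask
```

Averaging has the advantage that a pixel flagged confidently by most methods survives even if one method disagrees, which is the point of combining conflicting estimates.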
2.2 A Level Set Method Based on Local Approximation of Taylor Expansion for
Segmenting Intensity Inhomogeneous Images
In this paper, we propose a LATE (local approximation of Taylor expansion) method to segment images with intensity inhomogeneity based on the first-order Taylor expansion. Unlike existing models, the first-order Taylor expansion is first used to approximate and describe intensity-inhomogeneous images. The local statistical intensity information and the variation degree of the intensity inhomogeneity are jointly incorporated into the proposed model, so it is a nonlinear description method and can better approximate images with severe intensity inhomogeneity. The LATE method can be used to solve the non-convex optimization of the fitting function. Meanwhile, the local contrast of images is also enhanced by introducing the variation degree of the intensity inhomogeneity. Many experiments on synthetic and real images clearly show the effectiveness of the proposed LATE method.
In medical image analysis, intensity inhomogeneity usually refers to slow, non-anatomic intensity variations of the same tissue over the image domain; it occurs in natural images as well. In general, intensity inhomogeneity can be caused by factors such as imperfections in the imaging device, uneven illumination, and subject-induced susceptibility effects. For image segmentation, intensity inhomogeneity inevitably brings many difficulties because the intensity ranges of the regions to be partitioned overlap. It therefore remains a challenging problem to accurately segment images with severe intensity inhomogeneity.
For image segmentation, level-set methods are well known for capturing dynamic interfaces and shapes and have achieved state-of-the-art performance. In these methods, the zero level set represents the contours or surfaces, and it controls the curve as it evolves freely. A classic level-set method is the Chan-Vese (CV) model, which seeks an approximation of a given image with a binary piecewise-constant representation through a level-set formulation. However, because it assumes that images consist of statistically homogeneous regions, the CV model cannot segment images with intensity inhomogeneity well. To solve the intensity-inhomogeneity problem in level-set methods, many new models have recently been proposed whose main idea is to fully utilize local intensity information to constrain the intensity inhomogeneity. Generally, these models can be classified into three types, introduced in the following paragraphs.
The first type of method assumes that each local region can be described by its statistical intensity information; in this way, local regions can be further divided into object and background regions. In the second type of method, global intensity information and local intensity information are combined to jointly segment images with intensity inhomogeneity. In this manner, global information is used to capture global intensity statistics to protect the contour evolution from being trapped in local minima,
while local information derived from the mutual influences of neighboring pixels can eliminate the impact of image noise and intensity inhomogeneity. Duan et al. proposed a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity, in which an L0 gradient regularizer and a smooth regularizer are separately used to model the true intensity and the intensity inhomogeneity. It should be noted that the aforementioned models of the third type use local statistical information to fit or approximate the intensity distributions of object and background.
Unsupervised schemes group pixels without labels, for example by clustering. Due to the lack of prior knowledge of each class, such approaches lack universality and are often used for specific tasks or as preprocessing steps of segmentation, such as the generation of superpixels. Fully supervised schemes, such as convolutional networks, utilize a training set of images for semantic segmentation, and the segmentation results are tied to the training samples of objects. However, for the same image, different users may not be interested in the same target, which makes these approaches inflexible. Semi-supervised schemes, such as the graph-cut approach, allow the user to provide simple interactions that supply label information during segmentation. Compared with the other two schemes, semi-supervised schemes can incorporate the user's intentions to obtain results meeting their demands; furthermore, the prior label information provided by the user helps improve segmentation performance. This paper considers semi-supervised schemes (also called interactive approaches) for foreground-background segmentation. Given an input image I, we aim to classify its pixels into one of two mutually exclusive classes, F and B, corresponding to the foreground and background objects, respectively. Over the last decades, many interactive segmentation approaches have been proposed, such as graph cut and random walk.
In these approaches, unary and pairwise potentials, corresponding to region and boundary information respectively, are generally constructed for segmentation. A unary potential measures the similarity of a pixel to the labels F and B, while a pairwise potential quantifies the similarity between pairs of pixels. Unary and pairwise relationships can be represented via graphs, and graph-theory-based optimization algorithms can be used to produce segmentations. Since the prior information of each class is provided by the user, the unary potential is usually quantified as the distance between unseeded and seeded pixels via a clustering algorithm such as the Gaussian mixture model (GMM). If enough seeds are given, a GMM can accurately estimate the potential distribution of each label. However, due to the limitations of pixel-level features, it is hard to capture accurate label information when the user's interaction is limited; in this case, the user has to work harder to obtain satisfactory results. Effectively computing the unary potential from the seeds is thus a key problem of the interactive segmentation method.
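A sketch of the GMM-based unary potential: one mixture is fitted per label to the colours of the user's seed pixels, and the unary cost of assigning a pixel to a label is the negative log-likelihood under that label's mixture. The seed data below are synthetic placeholders, not colours from any real interaction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical seeds: RGB samples the user scribbled for foreground (F) and
# background (B); one GMM per label models that label's colour distribution.
seeds_F = rng.normal(0.8, 0.05, size=(100, 3))   # bright foreground colours
seeds_B = rng.normal(0.2, 0.05, size=(100, 3))   # dark background colours
gmm_F = GaussianMixture(n_components=2, random_state=0).fit(seeds_F)
gmm_B = GaussianMixture(n_components=2, random_state=0).fit(seeds_B)

def unary_potentials(pixels):
    """Negative log-likelihood per label: a lower cost means more seed-like."""
    return -gmm_F.score_samples(pixels), -gmm_B.score_samples(pixels)
```

A graph-cut solver would combine these unaries with pairwise terms; here, a bright pixel is cheaper to label F than B, and vice versa for a dark one.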
In fundus images, retinal vessels appear as red or orange-red filaments that originate within the optic disc and are of progressively diminishing width. The most essential step in the diagnosis of retinal blood vessels is to identify the vessels in retinal images. Compared with manual diagnosis, automatic machine diagnosis on retinal images can reduce the probability of medical misdiagnosis. Machine diagnosis of retinal vessels, known as segmentation of retinal blood vessels, is typically treated as a semantic segmentation task and has been studied with both unsupervised and supervised machine learning models. In conventional unsupervised approaches, features are manually extracted from images and fed into a statistical model to detect blood vessels with a threshold. Ricci et al. proposed a line detector to detect blood vessels at each pixel. Unsupervised approaches do not require labelled data, which can be considered an advantage; however, their performance is far from satisfactory in real application scenarios, and most of them are time consuming. Supervised segmentation approaches have been proposed to overcome these limitations. The main idea of these supervised models is to discriminate vessel pixels from non-vessel pixels with a binary classifier trained on features extracted from annotated images. Recent progress in deep neural networks, which extract features automatically, has led to a boom of supervised deep-learning approaches for segmenting retinal blood vessels. Ronneberger et al. proposed U-Net, which first uses a convolutional neural network (CNN) to extract features from the original medical image and then upsamples these features into a segmentation image.
Digital image processing plays an important role in remote sensing, medical science, air traffic control, radar systems, forensic science, and more. Digital image-processing techniques are mainly used to obtain an image with clearly defined characteristics and to extract features. Today, a large number of digital image-processing techniques exist, e.g., image acquisition, enhancement, restoration, compression, segmentation, and recognition. Image segmentation is a digital image-processing technique that subdivides an image into different regions based on certain criteria. Digital images are widely used nowadays, and their use has changed everyday life and many sciences; the main advantage of digital imaging is the possibility of relatively easy and complex image processing.
The method currently produces desirable results in about ten minutes, a significant improvement over past machine-learning applications to volumetric segmentation.
Advancements in medical imaging are facilitating the extraction of accurate
information from volumetric data, making three-dimensional (3D) imaging an
increasingly useful tool for clinical diagnosis and medical research. This development
makes non-invasive examination and analysis of diseases possible by giving
clinicians insight into the morphology of disease within the body and how it changes
over time and through treatment. Common non-invasive imaging modalities are
magnetic resonance imaging (MRI) and computed tomography (CT). Our efforts focus
on the analysis and visualization of volumetric OCT retinal data. OCT is
an acquisition system based on back-scattering of coherent light that produces a stack of
images similar to MRI and CT. A light beam is directed into a patient's eye, where
reflected light is merged with a reference beam, eliciting an interference pattern that is
used to gauge reflectance at various depths along the beam path. Quickly sweeping the
beam across the retinal surface in a structured pattern produces the image stack.
The ophthalmology field historically identified diseases by examining fundus images
(captured using an ophthalmoscope, showing the retina, macula, and optic disc) and,
more recently, by 2D thickness maps of retinal layers. OCT has drastically improved
the type of information available to vision scientists, allowing for a more intuitive view
as well as analysis of retinal layer information. Recently, 3D OCT imaging has gained
popularity, giving practitioners more information for their evaluations thanks to
advancements in OCT technology. As a result, we have built software that turns what
is otherwise a qualitative evaluation into a quantitative form. An automatic approach
that segments, classifies, and analyzes retinal layers from 3D OCT would be ideal.
However, the morphology of retinal layers depends on the patient and the disease in
question, which has caused problems for existing automatic retinal layer extraction
methods.
To address this problem, we have developed a semi-automatic segmentation
system in which the morphology of retinal structures can be discovered and refined by
a clinician. The clinician interactively specifies the location of a retinal layer on a few
select slices of the volume. This selection is then extrapolated throughout the entire
volume.
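The step above, carrying a boundary annotated on a few slices across the whole volume, can be sketched as a simple interpolation. This Python fragment is a simplified stand-in, not the report's actual extrapolation scheme; the function name and the choice of linear interpolation are assumptions.

```python
import numpy as np

def extrapolate_boundary(annotated, n_slices):
    """Fill in a boundary height for every slice of the volume by
    linear interpolation between clinician-annotated slices.
    `annotated` maps slice index -> boundary height."""
    slices = sorted(annotated)
    heights = [annotated[s] for s in slices]
    return np.interp(np.arange(n_slices), slices, heights)
```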
CHAPTER 3
SYSTEM DESIGN
3.1 EXISTING SYSTEM
• A variant of U-Net customized for semantic segmentation with tailored indexed
pooling, in which pooling indices are stored in the encoder and then restored to
recover the information in the unpooling layer of the decoder block.
• The computational complexity could be exponentially increased in the hidden
layers by adjusting the number of kernels, filter size, strides, etc.
• This computational complexity means the task is prohibitively time-consuming
for a computer without a sophisticated graphics card because of the training time
required.
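The indexed pooling and unpooling mechanism described in the first bullet can be sketched as follows: record the argmax position in each pooling window in the encoder, then scatter values back to those positions in the decoder. This is a hedged NumPy illustration of the idea, not the actual U-Net variant's implementation.

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """2x2 max pooling that also records the flat argmax position
    of each window (the 'pooling index' kept by the encoder)."""
    h, w = x.shape
    pooled = np.zeros((h // size, w // size))
    idx = np.zeros((h // size, w // size), dtype=int)
    for i in range(0, h, size):
        for j in range(0, w, size):
            win = x[i:i + size, j:j + size]
            k = np.argmax(win)
            pooled[i // size, j // size] = win.flat[k]
            idx[i // size, j // size] = (i + k // size) * w + (j + k % size)
    return pooled, idx

def unpool(pooled, idx, shape):
    """Decoder-side unpooling: restore pooled values to their stored
    positions; all other positions become zero."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```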
DISADVANTAGES
• The drawback of the regression is that the precision of the learner is reduced
when it returns a real number as the output instead of a class as in the
classification problem, especially for noisy inputs such as OCT images.
• As the technique employs boundary pixel prediction in each A-scan and each
image includes hundreds of A-scans, the scheme is designed to be appropriate for
studies with limited sources of data, such as in academia.
ADVANTAGES
CHAPTER 4
SYSTEM IMPLEMENTATION
4.1 MODULES
• Preprocessing
• Features Extraction
• Neural Network Regression for Boundary Pixel Prediction
• Segmentation
4.1.1 Preprocessing
• To locate the layer boundaries accurately, a preprocessing procedure is necessary
to reduce the effect of noise.
• A nonlinear complex diffusion filter is applied for speckle noise reduction.
• Such filters are known to diffuse freely in the smooth areas of OCT images and
to be attenuated at the edges of the images.
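The edge-preserving behaviour described above can be illustrated with one explicit step of Perona-Malik-style nonlinear diffusion. This is a real-valued simplification of the complex diffusion filter the report uses, written in Python rather than the project's MATLAB; the parameter values are illustrative.

```python
import numpy as np

def nonlinear_diffusion_step(img, dt=0.1, k=20.0):
    """One explicit step of Perona-Malik-style nonlinear diffusion.
    The conductance c(d) falls toward zero across strong edges, so the
    filter smooths homogeneous (speckle-noisy) areas while being
    attenuated at layer boundaries."""
    dn = np.roll(img, -1, axis=0) - img   # difference to neighbour below
    ds = np.roll(img, 1, axis=0) - img    # above
    de = np.roll(img, -1, axis=1) - img   # right
    dw = np.roll(img, 1, axis=1) - img    # left
    c = lambda d: 1.0 / (1.0 + (d / k) ** 2)  # edge-stopping conductance
    return img + dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
```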
4.1.2 Features Extraction
• Three features are extracted for the regression.
• The first is a statistical feature, called ANIS, of the A-scan segment relative to
the corresponding standard boundary line: an intensity normalization using the
empirical mean and standard deviation from this benchmark line.
• The normalization moves the intensity distribution toward the standard normal
distribution adaptively, using the benchmark pixel intensity as reference
information.
• The two other features used are the intensity and the corresponding gradient of
the segment. The gradient is used in addition to the intensity information to
capture the change of the segment at the edge's position.
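A rough Python sketch of the three features for one A-scan segment follows. The window size, function name, and the exact normalization are illustrative assumptions, not taken from the report's MATLAB code; the ANIS score here simply standardizes intensities around the benchmark boundary pixel.

```python
import numpy as np

def extract_features(ascan, benchmark_idx, half=4):
    """Build the feature matrix for the segment of one A-scan centred
    on the benchmark boundary pixel: ANIS-style normalized score,
    raw intensity, and intensity gradient (one row per pixel)."""
    seg = ascan[benchmark_idx - half:benchmark_idx + half]
    anis = (seg - seg.mean()) / (seg.std() + 1e-8)  # normalized score
    grad = np.gradient(seg)                         # change at edges
    return np.stack([anis, seg, grad], axis=1)      # m x n, with n = 3
```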
4.1.3 Neural Network Regression for Boundary Pixel Prediction
• The extracted features are fed into the DNNR for prediction of the boundary
pixel as a regression task, which returns the boundary pixel’s vertical position for
each A-scan.
• The input features form an m × n matrix, where m is the number of scores
extracted from the benchmark pixel and n is the number of features extracted.
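The regression step can be sketched as a small fully connected network trained with squared-error loss, which returns a real-valued vertical position rather than a class label. This NumPy toy stands in for the DNNR; the layer sizes, learning rate, and synthetic data are all illustrative, not the report's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row plays the role of a flattened
# m x n feature matrix for one A-scan; the target is the boundary
# pixel's vertical position (a real number, i.e. a regression target).
X = rng.normal(size=(200, 24))
true_w = rng.normal(size=24)
y = X @ true_w + 0.1 * rng.normal(size=200)

# One hidden ReLU layer trained by gradient descent on squared error.
W1 = rng.normal(scale=0.1, size=(24, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16);       b2 = 0.0
lr = 0.05

for _ in range(1000):
    h = np.maximum(X @ W1 + b1, 0.0)       # hidden activations
    err = h @ W2 + b2 - y                  # prediction error
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (h > 0)       # backprop through ReLU
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2
mse = float(np.mean((pred - y) ** 2))
```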
CHAPTER 5
SYSTEM ARCHITECTURE
5.1 BLOCK DIAGRAM
CHAPTER 6
LANGUAGE DESCRIPTION
6.1 INTRODUCTION
The name MATLAB stands for MATrix LABoratory. MATLAB was originally
written to provide easy access to matrix software developed by the LINPACK
(linear system package) and EISPACK (eigensystem package) projects. MATLAB
is a high-performance language for technical computing. It integrates
computation, visualization, and a programming environment. MATLAB has many
advantages compared to conventional computer languages (e.g., C, FORTRAN) for
solving technical problems. MATLAB is an interactive system whose basic data
element is an array that does not require dimensioning. Specific applications
are collected in packages referred to as toolboxes. There are toolboxes for signal
processing, symbolic computation, control theory, simulation, optimization, and
several other fields of applied science and engineering.
• Curve fitting
• Control systems
• Computational finance
The MATLAB development IDE can be launched from the icon created on the
desktop. The main working window in MATLAB is called the desktop. When
MATLAB is started, the desktop appears in its default layout.
Current Folder - This panel allows you to access the project folders
and files.
Fig. 6.5.2 Current Folder
Command Window - This is the main area where commands can be entered at the
command line. It is indicated by the command prompt (>>).
Workspace - The workspace shows all the variables created and/or imported from
files.
Fig. 6.5.4 Workspace
Command History - This panel shows or reruns commands that are entered at the
command line.
Operator Purpose
+ Plus; addition operator.
- Minus; subtraction operator.
\ Left-division operator.
/ Right-division operator.
.\ Array left-division operator.
./ Array right-division operator.
Table 6.6.1 MATLAB operators and special characters.
6.7 COMMANDS
Command Purpose
disp Displays the contents of an array or string.
6.9 M FILES
Scripts:
Script files are program files with a .m extension. In these files, you write a series
of commands that you want to execute together. Scripts do not accept inputs and
do not return any outputs. They operate on data in the workspace.
Functions:
Function files are also program files with a .m extension. Functions can accept
inputs and return outputs. Internal variables are local to the function.
To create script files, you need to use a text editor. You can open the
MATLAB editor by typing edit, or edit followed by the file name (with the .m
extension):
edit
or
edit <filename>
MATLAB provides 15 fundamental data types. Every data type stores data
in the form of a matrix or array. The size of this matrix or array is a minimum of
0-by-0, and it can grow up to a matrix or array of any size.
The following table shows the most commonly used data types in MATLAB:
Datatype Description
char Character data (strings are stored as vectors of characters).
CHAPTER 7
SYSTEM TESTING
7.1 INTRODUCTION
White Box Testing is testing in which the software tester has
knowledge of the inner workings, structure, and language of the software, or at
least its purpose. It is used to test areas that cannot be reached
from a black-box level.
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
CHAPTER 8
SCREEN SHOTS
CHAPTER 9
CONCLUSION
REFERENCE