A Novel Retinal Segmentation Based On DNNR Model


A NOVEL RETINAL SEGMENTATION BASED ON

DNNR MODEL
By
LINU JEBA MELBA L
960417621315
Of
C.S.I INSTITUTE OF TECHNOLOGY, THOVALAI
A PROJECT REPORT

Submitted to the
FACULTY OF INFORMATION AND COMMUNICATION ENGINEERING

In partial fulfillment of the requirements

for the award of the degree

of

MASTER OF COMPUTER APPLICATIONS

ANNA UNIVERSITY

CHENNAI
May 2020
ANNA UNIVERSITY, CHENNAI

BONAFIDE CERTIFICATE

Certified that this Report titled “A NOVEL RETINAL SEGMENTATION BASED ON
DNNR MODEL” is the bona fide work of L. LINU JEBA MELBA (960417621315),
who carried out the work under my supervision. Certified further that to the best of my
knowledge the work reported herein does not form part of any other thesis or dissertation
on the basis of which a degree or award was conferred on an earlier occasion on this or
any other candidate.

Head Of The Department Supervisor

Mrs. Julie Emerald Jiju, MCA, M.Tech., M.Phil. Mrs. V. Merin Shobi, MCA, M.Phil.
Department of M.C.A Department of M.C.A
C.S.I Institute of Technology C.S.I Institute of Technology
Thovalai - 629302 Thovalai - 629302

Submitted for the project Viva-Voce examination held on

Internal Examiner External Examiner



ABSTRACT

Segmenting the retinal layers in optical coherence tomography (OCT) images


helps to quantify the layer information in early diagnosis of retinal diseases, which are
the main cause of permanent blindness. Thus, the segmentation process plays a critical
role in preventing vision impairment. However, because there is a lack of practical
automated techniques, expert ophthalmologists still have to manually segment the
retinal layers. In this study, we propose an automated segmentation method for OCT
images based on a feature-learning regression network without human bias. The
proposed deep neural network regression takes the intensity, gradient, and adaptive
normalized intensity score (ANIS) of an image segment as features for learning, and
then predicts the corresponding retinal boundary pixel. Reformulating the segmentation
as a regression problem obviates the need for a huge dataset and reduces the complexity
significantly, as shown in the analysis of computational complexity given here. In
addition, assisted by ANIS, the method operates robustly on OCT images containing
intensity variances, low-contrast regions, speckle noise, and blood vessels, yet remains
accurate and time-efficient. In an evaluation of the method conducted using 114 images,
the processing time was approximately 10.596 s per image for identifying eight
boundaries, and the training phase for each boundary line took only 30 s. Further, the
Dice similarity coefficient used for assessing accuracy gave a computed value of
approximately 0.966. The absolute pixel distance between manual and automatic
segmentation using the proposed scheme was 0.612, which is less than a one-pixel
difference on average.

ABSTRACT (TRANSLATED FROM TAMIL)

Segmenting the retinal layers in optical coherence tomography (OCT) images
helps to quantify layer information for the early detection of retinal diseases, which are
a major cause of permanent blindness. Thus, the segmentation process plays an
important role in preventing vision impairment. However, because practical automated
techniques are lacking, expert ophthalmologists still have to segment the retinal layers
manually. In this study, we propose an automated segmentation method for OCT
images based on a feature-learning regression network without human bias. The
proposed deep neural network regression takes the intensity, gradient, and adaptive
normalized intensity score (ANIS) of an image segment as features for learning, and
then predicts the corresponding retinal boundary pixel. Reformulating the segmentation
as a regression problem removes the need for a huge dataset and significantly reduces
the complexity, as shown in the analysis of computational complexity given here. In
addition, with the help of ANIS, the method operates robustly on OCT images
containing intensity variations, low-contrast regions, speckle noise, and blood vessels,
while remaining accurate and time-efficient. In an evaluation of the method using 114
images, the processing time was approximately 10.596 s per image for identifying eight
boundaries, and the training phase for each boundary line took only 30 seconds.
Further, the Dice similarity coefficient used for assessing accuracy gave a value of
approximately 0.966. The absolute pixel distance between manual and automatic
segmentation using the proposed scheme was 0.612, which is less than a one-pixel
difference on average.

ACKNOWLEDGEMENT

First and foremost, I thank God Almighty for His grace in enabling me to
complete this project work successfully.
Words cannot express my gratitude to my respected Principal,
Dr. K. DHINESH KUMAR, M.E., Ph.D., for his support and the freedom he gave me
throughout my course.
I would like to thank Mrs. M. JULIE EMERALD JIJU, MCA, M.Phil., M.Tech.,
Head of the Department, Department of Computer Applications, for her valuable help
and constant encouragement towards my project.
It is my proud privilege to express my sincere thanks to my staff-in-charge,
Mrs. V. Merin Shobi, MCA, M.Phil., Department of Computer Applications, for
providing me with her valuable suggestions and constant guidance throughout the
development of this project.
My sincere thanks to all the faculty members, technical and non-technical, of the
M.C.A. department for their valuable suggestions and support in all my efforts towards
the successful completion of this project.
It is my great pleasure to acknowledge my parents and family members for their
prayers and the generous support they have extended towards the successful completion
of this project.

TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO


ABSTRACT III

LIST OF TABLES IX

LIST OF FIGURES X

LIST OF ABBREVIATIONS XI
1 INTRODUCTION 1
1.1 Retinal image 2
1.2 Supervised methods 2

1.3 Line Set Based Feature 3

1.4 Segmentation 4
1.5 Image Segmentation 4
1.6 Retinal Image Segmentation 5

1.7 Retinal Blood Vessel 5

1.7.1 Blood Vessel Detection 6

1.8 Retinal image databases 6


1.8.1 Drive Database 7
1.8.2 Stare Database 7
2 LITERATURE REVIEW 9
2.1 Blood Vessel Segmentation in Retinal Fundus Images 9
2.2 A Level Set Method Based on Local Approximation of Taylor Expansion for Segmenting Intensity Inhomogeneous Images 10
2.3 Probabilistic Diffusion for Interactive Image Segmentation 13
2.4 Review of Retinal Blood Vessel Segmentation 15
2.5 Segmentation of Three-dimensional Retinal Image Data 17
3 SYSTEM DESIGN 20

3.1 Existing System 20

3.2 Proposed System 20

3.3 System Specification 21

3.3.1 Software Requirement 21

3.3.2 Hardware Requirement 21

4 SYSTEM IMPLEMENTATION 22

4.1 MODULES 22

4.1.1 Preprocessing 22

4.1.2 Features Extraction 22

4.1.3 Neural Network Regression for Boundary Pixel Prediction 22
4.1.4 Segmentation 23

5 SYSTEM ARCHITECTURE 24

5.1 Block Diagram 24

5.2 Data Flow Diagram 24

6 LANGUAGE DESCRIPTION 27

6.1 Introduction 27

6.2 MATLAB's Power of Computational Mathematics 27

6.3 Features of MATLAB 28

6.4 Uses of MATLAB 28

6.5 MATLAB Operators and Special Characters 29

6.6 Commands 31

6.7 M Files 32

7 SYSTEM TESTING 36

7.1 Introduction 36

7.2 Types of Tests 36

8 SCREEN SHOTS 40

9 CONCLUSION 41

REFERENCE XII

LIST OF TABLES

TABLE NO TABLE NAME PAGE NO


6.6 Operators and Special Characters 31
6.7.1 Commands for Managing a Session 32
6.8 Input and Output Commands 33
6.10 Data Types in MATLAB 34

LIST OF FIGURES

FIGURE NO FIGURE NAME PAGE NO

5.1 Block Diagram 24

5.2.1 Level 0- Preprocessing 24

5.2.2 Level 1- Features Extraction 25

5.2.3 Level 2- Boundary Pixel Prediction 25

5.2.4 Level 3- Segmentation 26

6.5.1 MATLAB desktop environment 29

6.5.2 Current folder 29

6.5.3 Command window 30

6.5.4 Workspace 30

6.5.5 Command history 31



LIST OF ABBREVIATIONS

S.NO ABBREVIATIONS DESCRIPTION


1 WHO World Health Organization
2 OCT Optical Coherence Tomography
3 MRI Magnetic Resonance Imaging
4 STARE Structured Analysis of the Retina
5 DRIVE Digital Retinal Images for Vessel Extraction
6 CNN Convolutional Neural Network

CHAPTER 1
INTRODUCTION

Retinal disease is a leading cause of vision loss and permanent
blindness, especially in developing countries. According to the World Health
Organization (WHO), 36 million people worldwide have permanent blindness, and a
total of 253 million people suffer from vision impairment. More than 80 percent of
vision impairment cases can be prevented or cured by appropriate retinal screening and
treatment planning in the early stages. However, for early diagnosis, the retinal images
obtained by optical coherence tomography (OCT) must be evaluated by an expert
ophthalmologist. Early diagnosis of retinal diseases such as glaucoma, diabetic
retinopathy, age-related macular degeneration, and even cognitive degenerative diseases
such as dementia has been shown to be feasible by measuring changes in retinal layer
thickness. In the early stages, no visual sign appears in the retinal structure; however, by
detecting specific changes in the retinal layers, early diagnosis is possible. Hence,
quantifying the retinal layer thickness is used for early diagnosis and for determining
disease progression so that a proper treatment plan can be made. The layer thickness is
quantified via a segmentation process; thus, layer segmentation plays a critical role in
diagnosis and treatment planning. Whereas manual segmentation is time-consuming,
an automatic process enables rapid diagnosis of a large volume of images and
overcomes human bias.

OCT segmentation algorithms, and segmentation algorithms in general, can be
divided into two main categories: algorithm-driven approaches and data-driven
approaches. The first category includes early algorithms such as those based on edge
detection, edge thresholding, and computational geometry; later algorithms include
graph search, active contours, and kernel-based methods. Although these algorithms
can partially segment retinal images, they fail in most practical cases because of their
prior assumptions and fixed mathematical models. The second category employs
artificial intelligence techniques, which have recently attracted considerable attention in
both industry and academia; consequently, studies on the development of segmentation
approaches have gradually shifted to deep learning techniques. Diverse deep learning
networks are employed in many medical image analysis tasks; for example, a 3D
convolutional neural network (CNN) has been applied to diagnose attention deficit
hyperactivity disorder in functional and structural magnetic resonance imaging (MRI).
1.1 Retinal image
Retinal fundus images play an important role in the diagnosis and treatment of
cardiovascular and ophthalmologic diseases. However, manual analysis of retinal
fundus images is time-consuming and requires empirical knowledge. Therefore, it is
necessary to develop automatic analysis of retinal fundus images. Retinal blood vessel
segmentation is the fundamental step in retinal fundus image analysis because
attributes of retinal blood vessels, such as width, tortuosity, and branching pattern, are
important symptoms of disease. Besides, retinal blood vessel segmentation is also
useful for other applications such as optic disk detection: based on the position of the
vessels, the optic disk and fovea in the fundus image can be detected through their
location relative to the blood vessels. Several methods have been proposed for retinal
blood vessel segmentation. They can be divided into two categories: supervised
methods, which obtain segmentation results from labelled images, and unsupervised
methods, which do not need labelled images.
1.2 Supervised methods
Supervised methods obtain segmentation results from labelled images, whereas
unsupervised methods do not need labelled images. Unsupervised methods mainly fall
into four categories: matched filtering, vessel tracking, morphology processing, and
model-based algorithms. Matched filtering methods generally apply filters, such as
Gaussian filters or their variations, to fit the shape and gray-level distribution of
vessels, and use the corresponding responses to detect vessels. Tracking algorithms
based on the connectivity of vessels trace a vessel by starting from an initial vessel
point and finding the next vessel point according to designed rules. Morphology-based
methods are usually combined with other vessel properties to extract vessel-like
structures from retinal images.
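Although the work in this report is implemented in MATLAB, the matched-filtering idea can be made concrete with a short illustrative Python sketch (not the report's code): a zero-mean Gaussian-profile kernel is oriented along a bank of candidate vessel directions, and the maximum response over orientations highlights vessel pixels. The kernel length, sigma, and number of orientations below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_kernel(sigma=1.0, length=9, angle_deg=0.0):
    """Gaussian-profile kernel oriented along a vessel direction.

    A vessel cross-section is modelled as an inverted Gaussian; the kernel
    is made zero-mean so that a flat background produces no response.
    """
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(angle_deg)
    # Rotate coordinates: u runs along the vessel, v across it.
    u = xs * np.cos(theta) + ys * np.sin(theta)
    v = -xs * np.sin(theta) + ys * np.cos(theta)
    kernel = np.where(np.abs(u) <= half,
                      -np.exp(-(v ** 2) / (2 * sigma ** 2)), 0.0)
    kernel -= kernel.mean()  # zero-mean, as in classic matched filtering
    return kernel

def vessel_response(image, n_angles=12):
    """Maximum matched-filter response over a bank of orientations."""
    responses = [convolve(image.astype(float),
                          matched_filter_kernel(angle_deg=a))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(responses, axis=0)
```

Thresholding the resulting response map then yields a binary vessel mask.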
Supervised segmentation methods first regard each pixel as an instance and
extract features for it. Then, training instances are selected for training the
segmentation model. A test image is segmented by the trained segmentation model,
which assigns a label to each pixel.
For these methods, feature extraction and segmentation model construction are
the two key factors; various features and classifiers have been proposed, such as
image-ridge-based features with KNN, 2-D Gabor wavelets with a GMM classifier,
line operators with support vector classification, virtual-template-expansion-based
features with cellular neural networks, integrated features with an AdaBoost classifier,
gray-level and moment-invariant-based features with a feed-forward neural network,
and features considering pathology and vessel structure with boosted decision trees.
These features are designed manually, whereas deep learning learns features
automatically. Although deep learning can achieve better performance, its parameter
tuning is complicated. Line-operator-based features have also been proposed to capture
the local shape information of vessels.
1.3 Line Set Based Feature
The shape of a blood vessel can locally be seen as a rectangle, owing to its
line-segment structure; therefore, a local patch of a blood vessel can be regarded
approximately as a rectangular area. To represent this local rectangular characteristic,
we propose line-set-based features. The line sets are used to search for blood vessel
pixels in the local rectangle.
Line-set-based features are designed for each pixel to represent the shape
characteristics of the local area containing that pixel. First, two line sets are extracted
for a pixel by a searching step. A line set contains many line segments obtained by this
search: line segments are extracted in many directions in order to represent the shape of
the local area, and features are then extracted from all the line segments. In the
searching step, the search line starts from the instance pixel and proceeds along a
certain direction. Initially, only the starting pixel is in the current search line. In this
work, if the intensity difference between the current pixel and its neighbour pixel along
the search direction is less than a threshold S, the pixel satisfies the search condition
and is added to the current line; otherwise, the search stops, and a line segment is
obtained. Twenty-four line segments are obtained, one every 7.5 degrees, which is
enough to fill a local area, for both line set one and line set two.
The search lines are numbered from 1 to 24 starting from the x-axis, and 24 line
segments are enough to cover the local area. The length of each search line for line set
one is 21, because 21 pixels is close to the largest width of a blood vessel, while the
length of each search line for line set two is 41, because 41 pixels is certainly more
than the width of any blood vessel, so that the shape of the blood vessels can be better
represented.
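The search procedure above can be sketched briefly. The Python snippet below (illustrative only; the project itself uses MATLAB) traces one line segment per direction and stops when the neighbour-to-neighbour intensity difference reaches the threshold S; the concrete value of S and the previous-pixel comparison rule are assumptions made for illustration.

```python
import numpy as np

def trace_line(image, start, angle_deg, max_len, s_threshold):
    """Grow one line segment from `start` along a single direction.

    Pixels are appended while the intensity difference from the previous
    pixel stays below `s_threshold`, mirroring the stopping rule above.
    """
    r0, c0 = start
    dr, dc = np.sin(np.deg2rad(angle_deg)), np.cos(np.deg2rad(angle_deg))
    length = 1                      # the starting pixel itself
    prev = image[r0, c0]
    for step in range(1, max_len):
        r = int(round(r0 + step * dr))
        c = int(round(c0 + step * dc))
        if not (0 <= r < image.shape[0] and 0 <= c < image.shape[1]):
            break                   # left the image
        if abs(float(image[r, c]) - float(prev)) >= s_threshold:
            break                   # intensity jump: segment ends here
        prev = image[r, c]
        length += 1
    return length

def line_set_lengths(image, pixel, max_len=21, s_threshold=10, n_dirs=24):
    """Lengths of the 24 line segments (one every 7.5 degrees) at a pixel."""
    return [trace_line(image, pixel, k * 7.5, max_len, s_threshold)
            for k in range(n_dirs)]
```

On a vessel pixel, segments traced along the vessel grow long while segments traced across it stop almost immediately, so the resulting length profile encodes the local rectangular shape.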
1.4 Segmentation
We use an SVM as the segmentation model owing to its solid theoretical
foundation and good generalization ability. We first choose some images as training
images, for which manually labelled blood vessel segmentation results are available.
We then select pixels from the training images as training data, extract the local
descriptors of each training pixel, and label these pixels as blood vessel or background
according to the manually labelled segmentation result. Finally, these training pixels
are used to train the SVM. In the experiments, an RBF kernel is used for the SVM.
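Using scikit-learn as a stand-in (the report's experiments use MATLAB), the pixel-wise SVM step could look like the following sketch; the feature vectors and training-pixel selection are simplified for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def train_pixel_classifier(features, labels):
    """Train an RBF-kernel SVM on per-pixel feature vectors.

    `features` is an (n_pixels, n_features) array of local descriptors
    (for example, the line-set lengths described above); `labels` marks
    vessel (1) vs. background (0) from the manually labelled images.
    """
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(features, labels)
    return clf

def segment(clf, features, shape):
    """Label every pixel of a test image and reshape to the image grid."""
    return clf.predict(features).reshape(shape)
```

At test time, the same descriptors are extracted for every pixel of the unseen image and passed through `segment` to produce the binary vessel map.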
1.5 Image Segmentation
An image is a medium for conveying information, and it contains a great deal of
useful information. Understanding an image and extracting information from it to
accomplish some task is an important application area of digital image technology, and
the first step in understanding an image is image segmentation. In practice, one is
usually not interested in all parts of the image, but only in certain areas that share the
same characteristics. Image segmentation is one of the hotspots in image processing
and computer vision, and it is also an important basis for image recognition. It divides
an input image, based on certain criteria, into a number of regions of the same nature in
order to extract the area of interest, and it is the basis for image analysis and for the
extraction and recognition of image features. Commonly used image segmentation
algorithms include threshold-based segmentation, edge-detection-based segmentation,
and clustering-based segmentation. At present, the specific operation of segmentation
methods is very diverse and complex, and there is no recognized unified standard.
Image segmentation is the process of segmenting an image into various segments that
can be used in applications such as image understanding, robotics, image analysis, and
medical diagnosis. Hence, image segmentation partitions an image into multiple
segments so as to change the representation of the image into something more
meaningful and easier to analyse.
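As a concrete instance of the threshold-based family mentioned above, here is a minimal Python sketch of Otsu's method, which picks the threshold that maximises the between-class variance of the resulting foreground and background; this is a generic illustration, not a method the report itself commits to.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Pick the threshold that maximises between-class variance."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()        # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0   # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

# Segmenting is then a single comparison:
# mask = image > otsu_threshold(image)
```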
1.6 Retinal Image Segmentation
The retina is a light-sensitive tissue lining the interior surface of the eye; it is a
layered structure with many layers of neurons interconnected by synapses. The central
retinal vein and artery appear close to each other at the nasal side of the center of the
optic disk. Information about the structure of the blood vessels can help categorize the
severity of diseases and can also serve as a landmark throughout the segmentation
operation. Manual segmentation of the retinal blood vessels is a difficult task, as the
vascular structure is a complex, tree-like structure, and it may take hours to segment
the blood vessels manually. Manual segmentation may also not capture the actual size
of the blood vessels; there may be a difference between the actual and the segmented
vessels. Moreover, if two different observers examine the same retinal image, there
may be some bias between their results. Automatic segmentation, by contrast, takes
only a few minutes or seconds. It is difficult to reach 100% accuracy with automatic
segmentation, but it is better to have results close to 100% than to wait for hours.
1.7 Retinal Blood Vessel
The retinal blood vessels are the only deep microvasculature in the human body
that can be directly observed without any trauma, so checking their physiological
status is extremely important for the diagnosis and treatment of diseases such as high
blood pressure, diabetes, atherosclerosis, and other cardiovascular diseases. More
significantly, the shape and structure of the retina cannot be observed by the naked
eye, so it is very well hidden, and at the same time the possibility of forgery is very
low. Compared with face, iris, and fingerprint identification systems, a retina
identification system is harder to cheat; it is a more secure recognition system and has
promising applications in identification and security. The retinal blood vessels are
important structures in retinal images. The information obtained from the examination
of retinal blood vessels offers many helpful parameters for the diagnosis or evaluation
of ocular or systemic diseases. For example, the retinal blood vessels show
morphological changes in diameter, length, branching angles, or tortuosity under
vascular or nonvascular pathologies such as hypertension, diabetes, and cardiovascular
diseases.
1.7.1 Blood Vessel Detection
The objective of computer-based blood vessel mapping is to extract blood
vessel pixels from a digital image. As in several other medical applications, blood
vessel mapping is a crucial basis for various computer-based analyses of retinal
images. Computer-aided blood vessel mapping aims at accurate extraction of the
vascular structure at different generation levels. Automated vessel detection has been
an open problem and has been studied for many years. A blood vessel network has a
self-similar geometric structure among its branches in the two-dimensional image.
Morphological profiles of blood vessels, i.e., diameters, lengths, tortuosity, and branch
angles, provide basic measurements for anatomic and pathological studies. The
growth, death, and deformation of blood vessel segments derived from such studies
can be used to assess the general condition of the retina and diseases such as diabetic
retinopathy, hypertension, or other cardiovascular complications. Early detection of
diabetic retinopathy (DR) increases the chance of early intervention and treatment. The
blood vessels are also used as landmarks for the registration of retinal images of the
same patient gathered from different sources. Sometimes, the retinal blood vessels
must be excluded to simplify the detection of pathological lesions such as exudates or
microaneurysms. In all cases, proper segmentation of the retinal blood vessels is
important; hence, blood vessel detection and segmentation are crucial for diagnosing
diabetic retinopathy at an early stage.

1.8 Retinal image databases
Public datasets are the most popular basis for developing and testing the
performance of image segmentation methods. The study of retinal blood vessel
segmentation normally starts by importing a public retinal blood vessel database,
which provides researchers with retinal colour images and the corresponding
information. A number of the databases provide vessel ground-truth images that show
precisely where each vessel pixel is located. With those databases, researchers are able
to design their algorithms and compare their performance under the same criterion.
Presently there exist nine publicly available retinal blood vessel databases, among
which the CHASE DB1, DRIVE, HRF, and STARE databases contain both retinal
colour images and retinal blood vessel ground-truth images, while the DiaRetDB1
V2.1, Messidor, REVIEW, ROC, and VICAVR databases provide only retinal colour
images without labelled images. Although the above databases are all of suitable
quality and contain both normal and abnormal retinal images, the study of vessel
segmentation requires the vessel ground truth as a gold standard; therefore, most retinal
blood vessel segmentation methodologies are evaluated on the DRIVE and STARE
databases.
1.8.1 Drive Database
The name of the DRIVE (Digital Retinal Images for Vessel Extraction) database
expresses its purpose well: to enable comparative studies on the segmentation of blood
vessels in retinal images. The DRIVE database consists of 40 colour retinal images
randomly selected from 400 diabetic subjects between 25 and 90 years of age; 33 show
no sign of diabetic retinopathy, whereas 7 show signs of mild early diabetic
retinopathy. Every image is JPEG compressed. The set of 40 images has been divided
into a training set and a test set, each containing 20 images. For the training images, a
single manual segmentation of the vasculature is available. For the test cases, two
manual segmentations are available: one is used as the gold standard, and the other can
be used to compare computer-generated segmentations with those of an independent
human observer. All human observers who manually segmented the vasculature were
instructed and trained by an experienced ophthalmologist, and they were asked to mark
all pixels that they were at least 70% certain were vessel pixels. The image quality of
the DRIVE database is good, and it contains just 7 abnormal retinal images with mild
disease, so it can represent the retinal conditions of the majority of people.
1.8.2 Stare Database
The STARE database belongs to the STARE (Structured Analysis of the Retina)
Project, which was created and initiated at the University of California, San Diego,
and funded by the U.S. National Institutes of Health. The STARE database contains
400 retinal colour images. The images were acquired using a Topcon TRV-50 fundus
(bottom of the eyeball) camera with a 35-degree field of view. Each image was
captured using 8 bits per colour plane at 605 by 700 pixels, and the approximate
diameter of the field of view is 650 by 500 pixels. Twenty of the images can be used
for blood vessel segmentation because vessel ground-truth images are available for
them. These 20 images have been manually segmented by two different experts; the
segmentations of the second expert show many more of the thinner vessels than those
of the first expert. Usually, performance is computed with the segmentation of the first
expert as the ground truth. Among those 20 images with ground truth, only 9 are
healthy retinal images, while the other 11 show signs of 8 kinds of retinal disease, mild
or severe; 3 of the images even suffer from decreased sharpness. Therefore, the
STARE database is the most complicated of all these databases, and it is a good test of
an algorithm's noise resistance.

CHAPTER 2
LITERATURE REVIEW
2.1 Blood Vessel Segmentation in Retinal Fundus Images
This project focuses on blood vessel segmentation in retinal fundus images for
the potential application of automatic diabetic retinopathy diagnosis. In this project,
five algorithms were implemented based on methods from the relevant literature.
These five algorithms were then combined using two different approaches in order to
exploit their individual advantages. Several performance measures were computed in
order to evaluate the performance of each of the developed methods, including
accuracy, precision, sensitivity, and specificity. Each of the developed algorithms
offers trade-offs between these performance metrics; however, the modified hybrid
algorithm tends to have superior performance when averaging all the metrics. Diabetic
retinopathy is the leading cause of blindness among adults aged 20-74 years in the
United States. According to the World Health Organization (WHO), retinal screening
for diabetic retinopathy is essential for diabetic patients and will reduce the burden of
the disease. However, retinal images can be difficult to interpret, and computational
image analysis offers the potential to increase the efficiency and diagnostic accuracy
of the screening process. Automatic blood vessel segmentation in the images can help
speed up diagnosis and improve the diagnostic performance of less specialized
physicians. An essential step in feature extraction is blood vessel segmentation of the
original image. Many algorithms have been developed to accurately segment blood
vessels from images with a variety of underlying pathologies and across a variety of
ophthalmic imaging systems. This work focuses on developing existing retinal blood
vessel segmentation algorithms, comparing their performance, and combining them to
achieve superior performance. For this project, the Digital Retinal Images for Vessel
Extraction (DRIVE) database of retinal images was used. This database contains 40
images, 20 for training and 20 for testing. These images were manually segmented by
two trained researchers. The algorithms were applied to the original images, and the
hand segmentations were used to evaluate the performance of the developed
algorithms.
The next section of this report explains the five distinct vessel segmentation
algorithms developed and applied to the DRIVE database. This is followed by the
pipeline developed for combining these algorithms for superior performance. The
performance results of all these algorithms are then presented and compared.
In order to take advantage of the strengths of each of these methods, we have
developed a hybrid algorithm for retinal blood vessel segmentation. Our hybrid
algorithm combines the results of the five methods above to achieve an improved
segmentation.
Four of the five methods above (the exception being the neural-network-based
method, which was essentially binary in practice) return a continuous value between 0
and 1 for each pixel. While thresholding or other post-processing can be used to obtain
a binary output for each method, these values can also be thought of as a confidence,
or probability, that a given pixel is part of a blood vessel. Therefore, our set of five
basic segmentation methods gives us five different (and possibly conflicting)
confidence estimates for each pixel. Our hybrid algorithm maps the confidence
estimates returned by each basic algorithm into a single binary classification. This is
done in two steps. The first is to combine the five confidence estimates at each pixel
into a single value between 0 and 1. The second step is to apply a sophisticated
thresholding technique, based on a priori knowledge of blood vessel characteristics, to
obtain a final binary classification.
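The two-step fusion can be sketched in Python; this snippet uses a plain weighted average and a fixed global threshold purely for illustration, whereas the reviewed work's actual combination and thresholding rules are more sophisticated.

```python
import numpy as np

def combine_confidences(conf_maps, weights=None):
    """Step 1: fuse per-method confidence maps into one map in [0, 1].

    `conf_maps` is a list of equally shaped arrays, one per basic
    segmentation method, each giving a per-pixel vessel probability.
    A weighted average is one simple instance of this fusion step.
    """
    stack = np.stack(conf_maps)
    if weights is None:
        weights = np.full(len(conf_maps), 1.0 / len(conf_maps))
    return np.tensordot(np.asarray(weights), stack, axes=1)

def hybrid_segment(conf_maps, threshold=0.5):
    """Step 2: binarise the fused confidence map."""
    return combine_confidences(conf_maps) >= threshold
```

Disagreements between methods average out in step 1, so a pixel is kept only when the methods collectively lean towards "vessel".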

2.2 A Level Set Method Based on Local Approximation of Taylor Expansion for
Segmenting Intensity Inhomogeneous Images
In this paper, we proposed a LATE method to segment images with intensity in
homogeneity that is based on first-order Taylor expansion. Unlike existing models, the
first-order Taylor expansion is first utilized to approximate and describe intensity in
homogeneity images. The local statistical intensity information and the variation degree
information of intensity in homogeneity are jointly incorporated into the proposed
model, so it is a nonlinear description method and can better approximate images with
11

severe intensity in homogeneity. The LATE method can be utilized to solve non convex
optimization of the fitting function. Meanwhile, the local contrast of images is also
enhanced due to the introduction of the variation degree of intensity in homogeneity.
Many experiments on synthetic and real images have been conducted that clearly show
the effectiveness of the proposed LATE method.
In medical image analysis, intensity in homogeneity usually refers to the slow,
non anatomic intensity variations of the same tissue over the image domain, which
occurs in natural images as well. In general, intensity in homogeneity can be caused by
different factors such as imperfection in the imaging device, uneven illumination, and
the subject-induced susceptibility effect. For image segmentation, intensity in
homogeneity inevitably brings many difficulties due to the overlaps among the ranges
of the intensities in the regions to be partitioned. Therefore, it remains a challenging
problem to accurately segment images with severe intensity in homogeneities.
For image segmentation, level set methods are well known for capturing
dynamic interfaces and shapes and have achieved state-of-the-art performance. In
these methods, the zero level set is used to represent the contours or surfaces, and it
allows the curve to evolve freely. A classic level set method is the Chan-Vese (CV)
model, which seeks an approximation of a given image with a binary piecewise
constant representation through a level set formulation. However, due to the
assumption that the images consist of statistically homogeneous regions, the CV model
cannot segment images with intensity inhomogeneity well. To solve the intensity
inhomogeneity problem in level set methods, many new models have recently been
proposed whose main idea is to fully utilize local intensity information to constrain the
intensity inhomogeneity. Generally, these models can be classified into three types,
which will be introduced in the following three paragraphs.
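The CV model's binary piecewise-constant fitting term can be illustrated in a few lines. The following is a minimal Python sketch on a toy one-dimensional "image" (the project itself is implemented in MATLAB, and the data here is hypothetical): the optimal constants are simply the two region means, and the fitting energy is the summed squared deviation from them.

```python
# Illustrative sketch of the Chan-Vese piecewise-constant fitting term.
# Given an image and a binary region mask (1 = inside the contour,
# 0 = outside), the optimal constants c1, c2 are the region means, and
# the fitting energy is the total squared deviation from them.

def cv_fitting_energy(image, mask):
    inside = [p for p, m in zip(image, mask) if m == 1]
    outside = [p for p, m in zip(image, mask) if m == 0]
    c1 = sum(inside) / len(inside)
    c2 = sum(outside) / len(outside)
    energy = sum((p - c1) ** 2 for p in inside) + \
             sum((p - c2) ** 2 for p in outside)
    return c1, c2, energy

# A homogeneous two-region "image": the fitting energy is zero when the
# contour separates the two constant regions exactly.
image = [10, 10, 10, 50, 50, 50]
mask = [1, 1, 1, 0, 0, 0]
c1, c2, e = cv_fitting_energy(image, mask)
# c1 = 10.0, c2 = 50.0, e = 0.0
```

When the intensity inside a region is not constant (inhomogeneous), this energy stays large for every contour, which is exactly the failure mode the local models above address.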
The first type of method assumes that each local region can be described by its
statistical intensity information. In this way, local regions can be further divided into
object and background regions. In the second type of method, global intensity
information and local intensity information are combined to jointly segment images
with intensity inhomogeneity. In this manner, global information is used to capture
global intensity statistics to protect the contour evolution from being trapped in local
minima, and local information is explored to eliminate the influences of intensity
inhomogeneity. Specifically, Sum and Cheung proposed the Global and Local (GL)
model, in which the global term is derived from the CV model and the local term
captures the intensity variation of the weak boundary by enhancing local intensity
contrast. It should be noted that the GL model utilizes only the intensity information.
Different from the GL model, Wang et al. proposed the United Tensor Level Set Model
(UTLSM), in which multiple features of pixels, e.g., gray value and local geometrical
features such as orientation and gradient, are incorporated into a three-order tensor
level set model. Then, by defining a weighted distance, the final energy function is
constructed in the UTLSM model. In another work, the Local Chan-Vese (LCV) model
was proposed, which incorporates a local term into a global term to segment intensity-
inhomogeneous images. In LCV, the global term is directly derived from the CV model,
and the local term is the mean information of intensity differences. Specifically, in
LCV, by subtracting the original image from the averaging convolution image, the
intensity differences are obtained, and the contrast between foreground intensities and
background intensities can be significantly enhanced. In a related study, an energy
function is proposed that incorporates local statistical analysis and a global similarity
measurement, in which the local energy term is based on the local intensity statistical
analysis and the global energy term is used to minimize the Bhattacharyya distance
inside and outside of the contour. In another approach, the inhomogeneity of each pixel
is described by its spatially nearby pixels; then, the local inhomogeneity term and the
global CV term are combined to segment images with intensity inhomogeneity. By
further considering the global region intensity inhomogeneity, a novel level set model
based on a global division term and an adaptive-scale local variation term has been
proposed to segment natural images, in which the global intensity information and
local variation information are combined to drive the contour evolution to accurately
segment natural images.
The third type of method models the pixel intensity with a multiplicative
distribution structure, and the maximum-a-posteriori (MAP) principle applied to the
corresponding pixel density functions generates the segmentation model. Li et al. also
proposed a segmentation method based on global and local image statistical
information, in which the global energy, based on a Gaussian model, estimates the
intensity distribution of the target object and background, and the local energy, derived
from the mutual influences of neighboring pixels, can eliminate the impact of image
noise and intensity inhomogeneity. Duan et al. proposed a new variant of the
Mumford-Shah model for simultaneous bias correction and segmentation of images
with intensity inhomogeneity, in which the L0 gradient regularizer and a smooth
regularizer are separately used to model the true intensity and the intensity
inhomogeneity. It should be noted that the aforementioned models belonging to the
third type utilize local statistical information to fit or approximate the intensity
distributions of the object and background.

2.3 Probabilistic Diffusion for Interactive Image Segmentation


In this work, we presented an interactive approach for foreground/background
image segmentation. The classification is formulated using a probabilistic framework
consisting of unary potential estimation and likelihood learning. To improve the
robustness to the seeds, the distances between pixel pairs and label pairs are measured
to obtain prior label information. To improve segmentation accuracy, the region and
boundary information are combined by a likelihood learning framework. An
equivalence relation between likelihood learning and likelihood diffusion is also
established, and an iterative diffusion-based optimization strategy is proposed to
maintain computational efficiency. The qualitative and quantitative comparisons with
state-of-the-art interactive approaches demonstrate the superior performance of the
proposed method.
Image segmentation can be described as the partitioning of an image into several
connected homogeneous regions based on similarity criteria using low-level visual
features and extracting one or more objects that are of interest to the user from the
background environment. Segmented semantic regions or contours associated with
real-world entities or scenes are the basis for further advanced image processing. Therefore,
image segmentation is a key step from image processing to image analysis, which is a
fundamental problem in computer vision.
Many image segmentation methods have been proposed in the literature. Segmentation
schemes can be classified into unsupervised, semi-supervised and fully supervised
approaches. Unsupervised schemes can automatically segment images based on feature
clustering. Due to the lack of prior knowledge of each class, such approaches lack
universality and thus are often used for specific tasks or preprocessing steps of
segmentation, such as the generation of superpixels. Fully supervised schemes, such as
convolutional networks, utilize a training set of images for semantic segmentation. The
segmentation results are associated with the training samples of objects. However, for
the same image, different users may not be interested in the same target, which causes
these approaches to lack flexibility. Semi-supervised schemes, such as the graph cut
approach, allow the user to provide simple interactions to represent label information
during segmentation. Compared with the other two segmentation schemes, semi-
supervised schemes can add the users’ intentions to obtain results meeting their
demands. Furthermore, the prior label information provided by the user helps improve
segmentation performance. This paper considers semi-supervised schemes (also called
interactive approaches) for foreground-background segmentation. Given input image I,
we aim to classify its pixels as one of two mutually exclusive classes, F and B,
corresponding to foreground and background objects, respectively. During the last
decades, many interactive segmentation approaches have been proposed, such as graph
cut and random walk.
In these approaches, unary and pairwise potentials, which correspond to region and
boundary information, respectively, are generally constructed for segmentation. A
unary potential measures the similarity of a pixel to the labels F and B, while a pairwise
potential quantifies the similarity between pairs of pixels. Unary and pairwise
relationships can be represented via graphs, and graph theory-based optimization
algorithms can be used to produce segmentations. Since the prior information of each
class is provided by the user, the unary potential is usually quantified as the distance
between unseeded pixels and seeded pixels via some clustering algorithm, such as the
Gaussian mixture model (GMM). If enough seeds are given, GMM can accurately
estimate the potential distribution of each label. However, due to the defects of pixel-
level features, it is hard to capture the accurate label information when the user’s
interaction is limited. In this case, the user has to work harder to obtain satisfactory
results. Thus, effectively computing unary potential based on seeds is a key problem of
the interactive segmentation method.
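The seed-based unary potential described above can be sketched compactly. The following Python sketch uses a single Gaussian per label as a one-component stand-in for the GMM mentioned in the text (the intensities and seed values are hypothetical):

```python
# Sketch of a unary potential from user seeds: each label's seed
# intensities are modeled by a single Gaussian (a one-component
# stand-in for the GMM mentioned above), and an unseeded pixel's
# potential for a label is its negative log-likelihood under that model.
import math

def fit_gaussian(samples):
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, max(var, 1e-6)      # floor the variance for stability

def unary_potential(pixel, model):
    mu, var = model
    return 0.5 * math.log(2 * math.pi * var) + (pixel - mu) ** 2 / (2 * var)

fg = fit_gaussian([200, 210, 190])   # foreground seed intensities
bg = fit_gaussian([30, 40, 20])      # background seed intensities

# A bright pixel should cost less to label foreground than background.
label = "F" if unary_potential(205, fg) < unary_potential(205, bg) else "B"
```

With only a few seeds, such a per-label density is exactly the estimate that becomes unreliable, which is why the diffusion-based refinement above is needed.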

2.4 Review of Retinal Blood Vessel Segmentation


The retinal image is reviewed here along with several methodologies and processing
steps. It is important to segment the blood vessels in the image so that they can be used
to identify the correct disease. Although a great deal of research has been conducted in
the segmentation field, the techniques still have limitations and there is a need to
enhance them. No single method can be concluded to be the best in general: each
method has advantages and disadvantages, which depend on the conditions and needs
of researchers. There are many ways in which these techniques are classified, and
categorizing them into supervised and unsupervised methods is the most common way.
The comparison of these techniques on the basis of efficiency largely depends on the
type of data they are applied to.
Retinal images are affected by all the factors that influence the body's
vasculature in general. The human eye is a unique organ of the human body in which
the vascular condition can be directly observed. In addition to the fovea and the optic
disc, the blood vessels constitute one of the most important features of a retinal fundus
image, and several of their properties are noticeably affected by major worldwide
diseases such as diabetes and hypertension. Further, some eye diseases such as
choroidal neovascularization and retinal artery occlusion also cause changes within the
retinal vasculature. As we know, the segmentation of blood vessels in retinal images
can be of valuable help in the detection of diabetic retinopathy. In general, vessel
segmentation occupies an important place in the medical image segmentation field;
retinal vessel segmentation is well suited to this category, and a broad variety of
algorithms and methodologies have been developed and implemented for the automatic
identification, localization and extraction of retinal vasculature structures. Hence,
retinal vessel segmentation can simplify screening for retinopathy by reducing the
number of false-positive results in microaneurysm detection and may serve as a means
of registering images from the same patient taken at different times by delineating the
location of the optic disc and fovea. However, the manual detection of blood vessels is
not simple because the vessels in a retinal image are complex and have low contrast. In
order to use these useful characteristics of retinal blood vessels, it is vital to obtain their
locations and shapes accurately. Blood vessels appear as networks of either deep red
or orange-red filaments that originate within the optic disc and are of progressively
diminishing width. The most essential step in the diagnosis of retinal blood vessel
disease is to identify the vessels in retinal images. Compared to manual diagnosis,
automatic machine diagnosis in retinal images can reduce the probability of medical
misdiagnosis. Machine vessel diagnosis, known as the segmentation of retinal blood
vessels, typically falls under the semantic segmentation task and has been studied with
both unsupervised and supervised machine learning models. In conventional
unsupervised approaches, features are manually extracted from images and are fed into
a statistical model to detect blood vessels based on a threshold. Ricci et al. proposed a
line detector to detect blood vessels at each pixel. Unsupervised approaches do not
require labeled data, which can be considered an advantage. However, the performance
of such approaches is far from satisfactory in real application scenarios, and most of
them are time consuming. Supervised segmentation approaches have since been
proposed to overcome these limitations. The main idea of these supervised models is to
discriminate vessel pixels from non-vessel ones with a binary classifier that is trained
with features extracted from annotated images. Recent studies of deep neural networks,
which can extract features automatically, have led to the boom of supervised deep
learning approaches for the segmentation of retinal blood vessels. Ronneberger et al.
proposed U-Net, which first uses a Convolutional Neural Network (CNN) to extract
features from the original medical image and then upsamples these features into a
segmentation image.
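The unsupervised thresholding idea above can be sketched very simply. The following Python sketch is a deliberately simplified stand-in (not Ricci et al.'s actual line detector): vessels are darker than the fundus background, so the intensities are inverted and a global threshold is applied. The row of pixel values is a toy example.

```python
# Simplified stand-in for unsupervised, threshold-based vessel detection
# (not Ricci et al.'s actual line detector): vessels are darker than the
# background, so invert the intensities and apply a global threshold.

def detect_vessels(row, threshold):
    inverted = [255 - p for p in row]
    return [1 if v > threshold else 0 for v in inverted]

# One row of a toy fundus image: bright background, two dark vessels.
row = [200, 205, 60, 55, 210, 208, 70, 200]
mask = detect_vessels(row, threshold=150)
# mask marks the two dark runs: [0, 0, 1, 1, 0, 0, 1, 0]
```

The weakness noted in the text is visible even here: a single global threshold cannot adapt to low contrast or uneven illumination, which is what motivates the supervised approaches.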
Digital image processing plays an important role in remote sensing, medical
science, air traffic control systems, radar systems, forensic science, etc. Digital image
processing techniques are mainly utilized to obtain an image with clearly defined
characteristics and to extract features. Today, a large number of digital image
processing techniques exist, e.g., image acquisition, enhancement, restoration,
compression, segmentation, recognition, etc. Image segmentation is a digital image
processing technique that subdivides an image into different regions based on certain
criteria. Digital images are widely used nowadays, and their use has changed everyday
life and many sciences. The main advantage of digital imaging is the possibility of
relatively easy yet complex image processing. Many new
algorithms and improvements of existing ones have been proposed for digital image
processing. In addition, different applications require different adaptations of existing
methods. The applicability of digital image processing to a wide range of problems
makes this research field very active. Medicine is one of the sciences much improved
by digital images. Standard techniques for medical imaging include X-ray radiography,
magnetic resonance imaging, computed tomography, thermography, dermoscopy, etc.
Retina photography is often conducted via an optical apparatus called a fundus
camera. The fundus camera is often viewed as a low-power microscope that
specializes in retina fundus imaging, where the retina is illuminated and imaged via the
attached camera. In particular, the fundus camera is designed to capture an image of the
interior surface of the human eye, which consists of major parts including the macula,
optic disc, retina and posterior pole. Therefore, fundus photography can be viewed as a
sort of documentation process for the retinal interior structure and retinal neurosensory
tissues. Retinal photography can also be conducted based on the idea that the eye pupil
is utilized as both an entrance and an exit for the illuminating and imaging light rays
that are used by the fundus camera.
2.5 Segmentation of Three-dimensional Retinal Image Data
We used a combination of volume visualization and data analysis techniques
to better diagnose and subsequently treat retinal diseases. We have found that applying
volume visualization techniques to 3D retina image data collected in a clinical setting
has achieved success by revealing subtle features that standard diagnosis procedures
miss as well as providing accurate quantitative measurements of retinal structures. This
tool is currently being used in a clinical environment and is continually providing
insight into challenging retinal visualization and analysis problems. We found that
handling noise is a difficult task when training and using any type of machine learning
algorithm for segmentation. We plan to investigate image filtering techniques to reduce
speckle noise prior to SVM processing. We also plan to create training data that can be
applied to multiple volumes eliminating the need to retrain the SVM for new patients.
Our method shows reproducibility among different clinicians and yields good accuracy.
The primary differentiating factor between clinicians is their interpretation as to where
a feature begins and ends, which is better controlled through a clinical protocol. Our
method currently produces desirable results in about ten minutes. This is a significant
improvement over past machine learning applications to volumetric segmentation.
Advancements in medical imaging are facilitating the extraction of accurate
information from volumetric data, making three-dimensional (3D) imaging an
increasingly useful tool for clinical diagnosis and medical research. This development
makes possible the non-invasive examination and analysis of diseases by providing
clinicians insight into the morphology of disease within the body and how it changes
over time and through treatment. Common non-invasive imaging modalities are
magnetic resonance imaging (MRI) and computed tomography (CT). Our efforts focus
on the analysis and visualization of volumetric OCT retinal data. OCT is an acquisition
system based on the back-scattering of coherent light, producing a stack of images
similar to MRI and CT. A light beam is directed into a patient's eye, where the
reflected light is merged with a reference beam, eliciting an interference pattern that is
used to gauge reflectance at various depths along the beam path. Quickly sweeping the
beam across the retinal surface in a structured pattern produces the image stack. The
ophthalmology field historically identified diseases by examining fundus images
(captured using an ophthalmoscope showing the retina, macula, and optic disc) and
more recently by 2D thickness maps of retinal layers. OCT has drastically improved
the type of information
available to vision scientists allowing for a more intuitive view as well as analysis of
retinal layer information. Recently, 3D OCT imaging has gained popularity by giving
practitioners more information for their evaluations due to advancements in OCT
technology. As a result, we have built software that turns what is an otherwise
qualitative evaluation into a quantitative form. An automatic approach that segments,
classifies, and analyzes retinal layers from 3D OCT would be ideal. However, the
morphology of retinal layers depends on the patient and the disease in question, which
has caused problems for existing automatic retinal layer extraction methods.
To address this problem, we have developed a semi-automatic segmentation
system in which the morphology of retinal structures can be discovered and refined by
a clinician. The clinician interactively specifies the location of a retinal layer on a few
select slices of the volume. This selection is then extrapolated throughout the entire
volume using an SVM classifier in order to create a segmentation. Once segmented, we


provide visualizations and measurements of the resulting segmentation to aid in disease
diagnosis. The main visualization interface is an interactive 3D volume rendering of the
segmented portions of the volume. We also provide more familiar visualizations such
as a thickness map, currently a common diagnosis tool, and a 2D summed-intensity
projection of the data resembling a fundus image (feature included for completeness,
but we show no images of it in this paper). Additionally, the user can compute volume
and thickness from layer segmentations, which have proven useful in retinal disease
diagnosis.
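The thickness measurement mentioned above reduces to simple arithmetic once the layer boundaries are segmented. This is an illustrative Python sketch with hypothetical boundary positions (the actual tool computes these from the SVM segmentation):

```python
# Sketch of the thickness measurement mentioned above: given the upper
# and lower boundary rows of a segmented layer for each A-scan (column),
# the layer thickness profile is their per-column difference, and the
# mean thickness summarizes the layer.

def thickness_profile(upper, lower):
    return [b - a for a, b in zip(upper, lower)]

upper = [12, 12, 13, 14, 14]     # boundary row indices per A-scan
lower = [30, 31, 31, 32, 30]
profile = thickness_profile(upper, lower)
mean_thickness = sum(profile) / len(profile)
# profile = [18, 19, 18, 18, 16], mean_thickness = 17.8
```

Summing such a profile over all A-scans of all slices likewise yields the layer volume used in diagnosis.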
Speckle noise is a normally distributed noise component introduced by the data
acquisition process. Our SVM approach, based on the original work, considers a
voxel's mean value and variance across multiple resolutions in order to gracefully
handle this noise and to give the SVM a global perspective over feature shapes.
Additionally, this SVM is more tolerant of misclassification by the user and of variation
between patients and diseases, and adapts well to the data variation constituting retinal
layer morphology.
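The multiresolution mean/variance features described above can be sketched as follows (an illustrative Python sketch on a 1D signal; the window radii and data are hypothetical):

```python
# Sketch of the multiresolution mean/variance features described above:
# for each scale, take a progressively larger window around the voxel
# and record the window mean and variance. Feature vectors like these
# give an SVM some robustness to speckle noise.

def window_stats(signal, center, radius):
    lo, hi = max(0, center - radius), min(len(signal), center + radius + 1)
    window = signal[lo:hi]
    mu = sum(window) / len(window)
    var = sum((s - mu) ** 2 for s in window) / len(window)
    return mu, var

def multires_features(signal, center, radii=(1, 2, 4)):
    feats = []
    for r in radii:
        feats.extend(window_stats(signal, center, r))
    return feats

signal = [10, 12, 11, 60, 62, 61, 13, 11]
feats = multires_features(signal, center=4)
# six numbers: (mean, variance) at radii 1, 2 and 4
```

The small-radius statistics track the layer locally while the large-radius ones average out speckle, which is the "global perspective" mentioned above.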
Our main goals were to provide a tool that could (i) be used in a clinical setting; (ii)
operate on noisy OCT data; and (iii) isolate individual or multiple retinal layers. Our
main contributions to achieve these goals are (i) integration of a hierarchical data
representation into SVM computations to counter noise and to better find retinal layers
and (ii) several speedups for improving SVM performance allowing its practical use
within a clinical setting.
CHAPTER 3
SYSTEM DESIGN
3.1 EXISTING SYSTEM
• A variant of U-Net customized for semantic segmentation with tailored indexed
pooling, in which pooling indices are stored in the encoder and then restored to
recover the information in the unpooling layer of the decoder block.
• The computational complexity could be exponentially increased in the hidden
layers by adjusting the number of kernels, filter size, strides, etc.
• This computational complexity means the task is prohibitively time-consuming
for a computer without a sophisticated graphics card because of the training time
required.
DISADVANTAGE

• The drawback of the regression is that the precision of the learner is reduced
when it returns a real number as the output instead of a class as in the
classification problem, especially for noisy inputs such as OCT images.

3.2 PROPOSED SYSTEM

• A novel technique is proposed for the segmentation of OCT images, in which


DNN is applied as a regression model. To the best of our knowledge, this is the
first OCT image segmentation scheme to formulate the problem as a regression
task.

• A statistical feature, called the adaptive normalized intensity score (ANIS), is


deployed in addition to intensity and vertical gradient to diminish the effects of
intensity distribution imbalances, speckle noise, and low contrast.

• As the technique employs boundary pixel prediction in each A-scan and each
image includes hundreds of A-scans, the scheme is designed to be appropriate for
studies with limited sources of data, such as in academia.
• A computational complexity analysis is provided to compare the complexity of


DNNR and other OCT image segmentation schemes that apply CNN and shows
that the complexity is significantly lower in DNNR.
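The complexity argument above can be made concrete with a rough operation count. The layer sizes in this Python sketch are illustrative, not the paper's actual architecture; it only shows why a small fully connected regression network can be far cheaper per sample than even one convolutional layer:

```python
# Toy comparison of per-sample multiply-accumulate (MAC) counts, to
# illustrate why a small fully connected regression network can be far
# cheaper than a CNN. Layer sizes here are illustrative, not the paper's.

def dense_macs(layer_sizes):
    # MACs of a fully connected net: sum of in*out over consecutive layers
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def conv_macs(h, w, kernel, channels_in, channels_out):
    # MACs of one convolution layer over an h-by-w feature map
    return h * w * kernel * kernel * channels_in * channels_out

dnnr = dense_macs([150, 64, 32, 1])          # 150 inputs = 50 pixels x 3 features
cnn_layer = conv_macs(496, 64, 3, 1, 32)     # a single small conv layer
# even one conv layer over a modest image costs far more MACs
```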

ADVANTAGES

• Reduces the neural network complexity.
• Has lower computational complexity than OCT segmentation schemes that apply
CNN.

3.3 SYSTEM SPECIFICATION


3.3.1 SOFTWARE REQUIREMENT
➢ Operating system : Windows XP/7.
➢ Coding Language : Matlab
➢ Tool : Matlab 2018a

3.3.2 HARDWARE REQUIREMENT


➢ System : core i3.
➢ Hard Disk : 540 GB.
➢ Floppy Drive : 1.44 Mb.
➢ Monitor : 22 VGA Colour.
➢ Mouse : Logitech.
➢ Ram : 2 GB.

CHAPTER 4
SYSTEM IMPLEMENTATION
4.1 MODULES
• Preprocessing
• Features Extraction
• Neural Network Regression for Boundary Pixel Prediction
• Segmentation
4.1.1 Preprocessing
• To locate the layer boundaries accurately, a preprocessing procedure is necessary
to reduce the effect of noise.
• A nonlinear complex diffusion filter is applied for speckle noise reduction.
• Such filters diffuse strongly in the smooth areas of OCT images and are
attenuated at the edges of the images.
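The edge-preserving behavior of such a filter can be demonstrated with a simplified, real-valued analogue (the paper's filter is complex-valued; this Python sketch uses a Perona-Malik-style conductance on a toy 1D signal):

```python
# Simplified, real-valued analogue of the nonlinear diffusion filtering
# step (the actual filter is complex-valued): diffusion is strong where
# the local gradient is small (smooth areas) and attenuated at edges,
# via the Perona-Malik conductance g = 1 / (1 + (grad/k)^2).

def diffuse(signal, k=10.0, step=0.2, iterations=20):
    s = list(signal)
    for _ in range(iterations):
        nxt = s[:]
        for i in range(1, len(s) - 1):
            right = s[i + 1] - s[i]
            left = s[i - 1] - s[i]
            g_r = 1.0 / (1.0 + (right / k) ** 2)
            g_l = 1.0 / (1.0 + (left / k) ** 2)
            nxt[i] = s[i] + step * (g_r * right + g_l * left)
        s = nxt
    return s

noisy = [0, 2, 0, 1, 0, 100, 101, 99, 100, 101]
smooth = diffuse(noisy)
# small speckle-like fluctuations are flattened while the large
# step edge between the two plateaus is preserved
```

The large gradient at the step makes the conductance tiny there, so the edge survives, while the small fluctuations diffuse away; this is the property that keeps layer boundaries sharp after denoising.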
4.1.2 Features Extraction
• Three features are extracted for the regression.
• The first feature is a statistical feature called the ANIS of the A-scan segment,
computed from the corresponding standard boundary line: an intensity
normalization using the empirical mean and standard deviation from this
benchmark line.
• The normalization adaptively moves the intensity distribution toward the standard
normal distribution, using the benchmark pixel intensity as reference information.
• The two other features used are the intensity and the corresponding gradient of
the segment. The gradient is used in addition to the intensity information to
capture the change of the segment at the edge's position.
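The adaptive normalization idea behind ANIS can be sketched as a plain z-scoring of an A-scan segment (the exact ANIS formula is the paper's; the benchmark statistics and intensities here are hypothetical):

```python
# Sketch of the adaptive normalization idea behind ANIS: z-score the
# intensities of an A-scan segment with a benchmark mean and standard
# deviation, mapping the distribution toward a standard normal.

def normalize_segment(segment, benchmark_mean, benchmark_std):
    return [(p - benchmark_mean) / benchmark_std for p in segment]

segment = [100, 110, 120, 130, 140]
scores = normalize_segment(segment, benchmark_mean=120, benchmark_std=10)
# scores = [-2.0, -1.0, 0.0, 1.0, 2.0]
```

Because the mean and standard deviation come from the benchmark line rather than global image statistics, the scores adapt to each A-scan's local intensity level, which is what counters intensity imbalance and low contrast.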
4.1.3 Neural Network Regression for Boundary Pixel Prediction
• The extracted features are fed into the DNNR for prediction of the boundary
pixel as a regression task, which returns the boundary pixel’s vertical position for
each A-scan.
• The input features form an m × n matrix, where m is the number of scores
extracted from the benchmark pixel and n is the number of features extracted.
• In this case, n is three, which includes the ANISs, corresponding intensities, and
gradients, and m is fifty, for the fifty pixels extracted from the benchmark.
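Assembling the m × n input described above can be sketched as follows (an illustrative Python sketch with a short hypothetical A-scan; the real input uses m = 50 pixels and the paper's ANIS formula):

```python
# Sketch of assembling the m-by-n input described above: for each of the
# m pixels extracted from the benchmark, n = 3 features are kept (an
# ANIS-like normalized score, the raw intensity, and the vertical
# gradient). The toy data and benchmark statistics are illustrative.

def feature_matrix(intensities, mean, std):
    rows = []
    for i, p in enumerate(intensities):
        score = (p - mean) / std                      # ANIS-like score
        gradient = intensities[i + 1] - p if i + 1 < len(intensities) else 0
        rows.append([score, p, gradient])
    return rows

ascan = [100, 102, 105, 160, 162]                     # a short A-scan segment
m = feature_matrix(ascan, mean=120, std=20)
# one row per pixel, three features per row: len(m) == 5, len(m[0]) == 3
```

Each such matrix (flattened or fed row-wise) is one regression sample, and the DNNR's scalar output is the boundary pixel's vertical position for that A-scan.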
4.1.4 Segmentation
• Manual (yellow) and automatic (red) retinal layer segmentation is shown for the
left (a) and right (b) segments of the retina from the fovea.
• The lines are overlapped to provide a visual demonstration of the accuracy of the
automatic segmentation.
• We can observe only small differences between the yellow and red lines.
• The regions of overlap between automatic and manual segmentation are shown
for each layer.
CHAPTER 5
SYSTEM ARCHITECTURE
5.1 BLOCK DIAGRAM

5.2 DATA FLOW DIAGRAM


5.2.1 PREPROCESSING (level 0)
5.2.2 FEATURES EXTRACTION (level 1)

5.2.3 Boundary Pixel Prediction (level 2)


5.2.4 Segmentation (level 3)


CHAPTER 6
LANGUAGE DESCRIPTION
6.1 INTRODUCTION
The name MATLAB stands for MATrix LABoratory. MATLAB was written
originally to provide easy access to matrix software developed by the LINPACK
(linear system package) and EISPACK (eigensystem package) projects. MATLAB
is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment.
MATLAB has many advantages compared to conventional computer languages
(e.g., C, FORTRAN) for solving technical problems. MATLAB is an interactive
system whose basic data element is an array that does not require dimensioning.
Specific applications are collected in packages referred to as toolboxes. There are
toolboxes for signal processing, symbolic computation, control theory, simulation,
optimization, and several other fields of applied science and engineering.

6.2 MATLAB's POWER OF COMPUTATIONAL MATHEMATICS

MATLAB is used in every facet of computational mathematics. Following are
some commonly used mathematical calculations where it is applied:

• Dealing with Matrices and Arrays


• 2-D and 3-D Plotting and graphics
• Linear Algebra
• Algebraic Equations
• Non-linear Functions
• Statistics
• Data Analysis
• Calculus and Differential Equations
• Numerical Calculations
• Integration
• Transforms

• Curve Fitting
• Various other special functions


6.3 FEATURES OF MATLAB

Following are the basic features of MATLAB:

• It is a high-level language for numerical computation, visualization and
application development.

• It also provides an interactive environment for iterative exploration, design


and problem solving.
• It provides a vast library of mathematical functions for linear algebra, statistics,
Fourier analysis, filtering, optimization, numerical integration and solving
ordinary differential equations.
• It provides built-in graphics for visualizing data and tools for creating custom
plots.
• MATLAB's programming interface gives development tools for improving
code quality and maintainability, and for maximizing performance.

• It provides tools for building applications with custom graphical interfaces.

• It provides functions for integrating MATLAB-based algorithms with external
applications and languages such as C, Java, .NET and Microsoft Excel.

6.4 USES OF MATLAB

MATLAB is widely used as a computational tool in science and engineering


encompassing the fields of physics, chemistry, math and all engineering streams.
It is used in a range of applications including:

• Signal processing and communications
• Image and video processing
• Control systems
• Test and measurement
• Computational finance
6.5 UNDERSTANDING THE MATLAB ENVIRONMENT

MATLAB development IDE can be launched from the icon created on the
desktop. The main working window in MATLAB is called the desktop. When
MATLAB is started, the desktop appears in its default layout.

Fig.6.5.1. MATLAB desktop environment

The desktop has the following panels:

Current Folder - This panel allows you to access the project folders
and files.

Fig. 6.5.2. Current folder
Command Window - This is the main area where commands can be entered at the
command line. It is indicated by the command prompt (>>).

Fig. 6.5.3. Command window

Workspace - The workspace shows all the variables created and/or imported from
files.

Fig. 6.5.4. Workspace

Command History - This panel shows or reruns commands that are entered at the
command line.

Fig. 6.5.5. Command history

6.6 COMMONLY USED OPERATORS AND SPECIAL CHARACTERS

MATLAB supports the following commonly used operators and special


characters:

Operator    Purpose
+           Plus; addition operator.
-           Minus; subtraction operator.
*           Scalar and matrix multiplication operator.
.*          Array multiplication operator.
^           Scalar and matrix exponentiation operator.
.^          Array exponentiation operator.
\           Left-division operator.
/           Right-division operator.
.\          Array left-division operator.
./          Array right-division operator.

Table 6.6.1. Commonly used MATLAB operators and special characters.

6.7 COMMANDS

MATLAB is an interactive program for numerical computation and data


visualization. You can enter a command by typing it at the MATLAB prompt '>>'on
the Command Window.

6.7.1 Commands for managing a session

MATLAB provides various commands for managing a session. The following
table lists them:

Command     Purpose
clc         Clears the command window.
clear       Removes variables from memory.
exist       Checks for the existence of a file or variable.
global      Declares variables to be global.
help        Searches for a help topic.
lookfor     Searches help entries for a keyword.
quit        Stops MATLAB.
who         Lists current variables.
whos        Lists current variables (long display).

Table 6.7.1. Commands for managing a session
6.8 INPUT AND OUTPUT COMMANDS

MATLAB provides the following input and output related commands:

Command     Purpose
disp        Displays the contents of an array or string.
fscanf      Reads formatted data from a file.
format      Controls the screen-display format.
fprintf     Performs formatted writes to the screen or a file.
input       Displays a prompt and waits for input.
;           Suppresses screen printing.

Table 6.8. Input and output commands

6.9 M FILES

MATLAB allows writing two kinds of program files:

Scripts:

Script files are program files with the .m extension. In these files, you write a
series of commands that you want to execute together. Scripts do not accept inputs
and do not return any outputs. They operate on data in the workspace.

Functions:

Function files are also program files with the .m extension. Functions can accept
inputs and return outputs. Internal variables are local to the function.

Creating and Running Script File:

To create script files, you need to use a text editor. You can open the
MATLAB editor in two ways:

• Using the command prompt


• Using the IDE

You can directly type edit, optionally followed by the file name (with the .m
extension):

edit
edit <filename>

6.10 DATATYPES AVAILABLE IN MATLAB

MATLAB provides 15 fundamental data types. Every data type store data that
is in the form of a matrix or array. The size of this matrix or array is a minimum of
0-by-0 and this can grow up to a matrix or array of any size.

The following table shows the most commonly used data types in MATLAB:

Datatype          Description

int8              8-bit signed integer

uint8             8-bit unsigned integer

int16             16-bit signed integer

uint16            16-bit unsigned integer

int32             32-bit signed integer

uint32            32-bit unsigned integer

int64             64-bit signed integer

uint64            64-bit unsigned integer

single            Single-precision numerical data

double            Double-precision numerical data

logical           Logical values 1 or 0, representing true and false respectively

char              Character data (strings are stored as a vector of characters)

cell array        Array of indexed cells, each capable of storing an array of a
                  different dimension

structure         C-like structures, each having named fields capable of storing
                  an array of a different dimension and data type

function handle   Pointer to a function

user classes      Objects constructed from a user-defined class

Java classes      Objects constructed from a Java class
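As a brief illustration of the types in the table, the built-in class function reports the data type of any value:

```matlab
% Querying MATLAB data types with class()
x = int8(-5);                  % 8-bit signed integer
u = uint16(300);               % 16-bit unsigned integer
s = single(3.14);              % single-precision number
flag = true;                   % logical
name = 'retina';               % char vector
c = {1, 'two', [3 4]};         % cell array of mixed contents
p = struct('layer', 'RNFL');   % structure with a named field
f = @sin;                      % function handle
disp(class(x))                 % int8
disp(class(f))                 % function_handle
```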

CHAPTER 7

SYSTEM TESTING

7.1 INTRODUCTION

The purpose of testing is to discover errors. Testing is the process of
trying to discover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies,
assemblies, and/or a finished product. It is the process of exercising software
with the intent of ensuring that the software system meets its requirements and
user expectations and does not fail in an unacceptable manner. There are
various types of tests; each test type addresses a specific testing requirement.

7.2 TYPES OF TESTS

7.2.1 Unit testing


Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application; it is done after the
completion of an individual unit and before integration. This is structural testing
that relies on knowledge of the unit's construction and is invasive. Unit tests perform
basic tests at the component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a
business process performs accurately to the documented specifications and
contains clearly defined inputs and expected results.
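In a MATLAB project such as this one, a unit test can be sketched as a script-based test run with runtests; the file name test_normalize.m and the min-max normalization step it checks are illustrative assumptions, not code from the report:

```matlab
% test_normalize.m -- a script-based unit test; run with:
%   results = runtests('test_normalize')

%% Test 1: min-max normalization maps values into [0, 1]
img = double(magic(5));
out = (img - min(img(:))) / (max(img(:)) - min(img(:)));
assert(all(out(:) >= 0) && all(out(:) <= 1))

%% Test 2: the extremes map exactly to 0 and 1
assert(min(out(:)) == 0 && max(out(:)) == 1)
```

Each %% section is reported as a separate test case, so a failing assertion pinpoints the exact path through the unit that broke.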

7.2.2 Integration testing


Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is
more concerned with the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually satisfactory, as
shown by successful unit testing, the combination of components is correct
and consistent. Integration testing is specifically aimed at exposing the
problems that arise from the combination of components.

7.2.3 Functional Testing


Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage of
identified business process flows, data fields, predefined processes, and
successive processes must be considered for testing. Before functional testing is
complete, additional tests are identified and the effective value of current tests is
determined.

7.2.4 System Testing


System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results.
An example of system testing is the configuration-oriented system integration
test. System testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.

7.2.5 White Box Testing

White box testing is testing in which the software tester has
knowledge of the inner workings, structure, and language of the software, or at
least its purpose. It is used to test areas that cannot be reached
from a black-box level.

7.2.6 Black Box Testing


Black box testing is testing the software without any knowledge of the
inner workings, structure, or language of the module being tested. Black box
tests, like most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is testing in
which the software under test is treated as a black box: you cannot "see" into
it. The test provides inputs and responds to outputs without considering how
the software works.

7.2.7 Unit Testing

Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be
written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed

• All links should take the user to the correct page.


7.2.8 Integration Testing
Software integration testing is the incremental integration testing of two
or more integrated software components on a single platform to expose failures
caused by interface defects.

The task of the integration test is to check that components or software


applications, e.g. components in a software system or – one step up – software
applications at the company level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No
defects encountered.

7.2.9 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets
the functional requirements.

CHAPTER 8

SCREEN SHOTS

CHAPTER 9

CONCLUSION

In this paper, we proposed a novel retinal layer segmentation scheme


based on a DNNR model that predicts the boundary pixel using three features
extracted for each boundary line: ANIS, intensity, and vertical gradient in the
axial direction. A theoretical analysis of DNNR based on the MSE loss function
and SGD with momentum for the optimization was also discussed in detail. In
each A-scan, the features of the segment from the benchmark line are extracted
to predict the boundary pixel as a regression problem; the predicted pixels are
then tracked by sliding windows applying Otsu’s clustering method. A different
way of formulating the problem into a regression task helps reduce the neural
network complexity dramatically as shown in the complexity analysis.
Reformulating this way helps avoid the need for a huge dataset and long-
duration training for the learning process. Additionally, as the regression
performs the prediction in each A-scan, we have the evaluation for hundreds of
A-scans in each image; hence, the scheme is appropriate for studies with limited
data. We demonstrated that our approach enables successful segmentation of
intra-retinal layers—even with low-quality images containing speckle noise,
low contrast, and different intensity ranges throughout—with the assistance of
the ANIS feature.

REFERENCES

[1] R. R. A. Bourne, S. R. Flaxman, T. Braithwaite, M. V. Cicinelli, A. Das, J.


B. Jonas, J. Keeffe, J. H. Kempen, J. Leasher, H. Limburg, K. Naidoo, K.
Pesudovs, S. Resnikoff, A. Silvester, G. A. Stevens, N. Tahhan, T. Y. Wong, H.
R. Taylor, “Magnitude, temporal trends, and projections of the global
prevalence of blindness and distance and near vision impairment: a systematic
review and meta-analysis,” Lancet Glob. Heal., vol. 5, no. 9, pp. e888–e897,
Sep. 2017.
[2] K. Attebo, P. Mitchell, R. Cumming, and W. Smith, “Knowledge and beliefs
about common eye diseases,” Aust. N. Z. J. Ophthalmol., vol. 25, no. 3, pp.
283–287, Aug. 1997.
[3] D. Yorston, “Retinal Diseases and VISION 2020,” Community eye Heal.,
vol. 16, no. 46, pp. 19–20, 2003.
[4] A. Al-Mujaini, U. K. Wali, and S. Azeem, “Optical coherence tomography:
clinical applications in medical practice,” Oman Med. J., vol. 28, no. 2, pp. 86–
91, Mar. 2013.
[5] P. Malamos, G. Tsolkas, M. Kanakis, G. Mylonas, D. Karatzenis, N.
Oikonomopoulos, J. Lakoumentas, and I. Georgalas, “OCT-Angiography for
monitoring and managing neovascular age-related macular degeneration,” Curr.
Eye Res., vol. 42, no. 12, pp. 1689–1697, Dec. 2017.
[6] J. Y. Won, S. E. Kim, and Y.-H. Park, “Effect of age and sex on retinal
layer thickness and volume in normal eyes,” Medicine (Baltimore)., vol. 95, no.
46, p. e5441, Nov. 2016.
[7] G. J. Jaffe and J. Caprioli, “Optical coherence tomography to detect and
manage retinal disease and glaucoma,” Am. J. Ophthalmol., vol. 137, no. 1, pp.
156–169, Jan. 2004.
[8] Y. Kita, R. Kita, A. Takeyama, S. Takagi, C. Nishimura, and G. Tomita,
“Ability of optical coherence tomography–determined ganglion cell complex
thickness to total retinal thickness ratio to diagnose glaucoma,” J. Glaucoma,
vol. 22, no. 9, pp. 757–762, Dec. 2013.
[9] B. Bhaduri, R. M. Nolan, R. L. Shelton, L. A. Pilutti, R. W. Motl, and S. A.
Boppart, “Ratiometric analysis of in vivo retinal layer thicknesses in multiple
sclerosis,” J. Biomed. Opt., vol. 21, no. 9, p. 95001, 2016.
[10] D. C. DeBuc and G. M. Somfai, “Early detection of retinal thickness
changes in diabetes using optical coherence tomography,” Med. Sci. Monit.,
vol. 16, no. 3, pp. MT15-21, Mar. 2010.
