Localization of Optic Disc in Fluorescein Angiography Images

Rubiel Vargas Cañas


Information Engineering and Medical Imaging Group, City University London, UK; Cauca University, Popayán, Colombia. rubiel.vargas-canas.1@city.ac.uk
Panos Liatsis

Information Engineering and Medical Imaging Group, City University London, UK. p.liatsis@city.ac.uk

Abstract—Optic disc detection is an important task in retinal imaging due to its significance in both clinical and image understanding terms. In the clinical setting, the optic disc represents the entrance and exit site of vascular and nervous structures, and its size and shape can be used in the diagnosis and treatment of diseases such as glaucoma. In automated retinal analysis systems, it is used as a landmark point in retinal image registration, as the starting point for vessel tracking, and in determining the geometry of the retina. Several approaches have been proposed, the majority of which use intensity- or shape-based techniques. Recently, approaches that combine intensity, shape and information regarding the vascular structures have been used with good results. In this paper, a method that combines information from the major blood vessels is investigated and compared with intensity- and shape-based techniques used on their own. The image set employed to evaluate the proposed technique consists of both healthy and unhealthy retinas. The combination of the techniques discussed in this contribution results in a robust, fast and accurate method for detection of the optic disc.

Keywords—optic disc detection; retinal image segmentation; evidence gathering; Bayes consensus-based fusion

I. INTRODUCTION

The optic disc (OD) is one of the most important features within the retina, as it is the entrance and exit site of the retinal vessels and nerve fibres. It is also used as an area of reference during image capture and analysis. Figure 1 shows a typical fundus image, depicting the main constituent parts of the retina: the optic disc, the fovea and the vascular network.

Figure 1. Typical fluorescein image of the human retina. It depicts the main constituent parts of the retina: optic disc, fovea and vascular network.

Supported by the Programme AlBan, the European Union Programme of High Level Scholarships for Latin America (scholarship No. E07D403130CO), Cauca University (Colombia) and Colombia's National Department of Science, Technology and Innovation, COLCIENCIAS.

In a normal retina, the OD appears as a high intensity, near-circular rim, crossed by a significant, vertically oriented band of low intensity blood vessels. The OD typically occupies approximately one seventh of the entire image [1]. It is divided into two zones, i.e., the nasal and the temporal areas; the nasal side is usually less bright than the temporal side. The majority of the vessels originate from the nasal area, depart vertically, and then follow a roughly parabolic trajectory. Automatic OD localisation approaches are typically based on one or two OD characteristics, namely shape [2],[3] and intensity [1],[4] information. However, in some diseases, such as diabetes or age-related macular degeneration (AMD), which are diagnosed through retinal angiography, the images show lesions, i.e., drusen, hard exudates and microaneurysms, which introduce features similar to the ones exhibited by the constituent parts of the retina. Drusen and exudates are characterised by intensities greater than or equal to the OD intensity, and some hard exudates have an elliptical shape similar to that of the OD. Therefore, these characteristics do not suffice, on their own, for OD detection in fluorescein angiographies. Recently, approaches that incorporate information about the vascular tree have been introduced for this task. Some estimate the OD location using a geometric model [5], while others examine the region where the blood vessels converge [6],[7],[8]. Intensity-based methods are the best option for detecting the OD in images of normal retinas; however, they fail when exudates or drusen are present. Vessel convergence techniques are robust to illumination changes, but they are computationally intensive and their detection rate on normal retinas is lower than that of intensity-based methods. In this work, we propose an OD localisation technique which incorporates information about the image intensity, shape features and the geometry of the vascular tree, using a probabilistic model. The data set consists of angiogram images from both healthy and unhealthy retinas. Furthermore, a comparison between different approaches for OD localisation is also presented.

The remainder of the paper is organised as follows: Section II describes the proposed approach, Section III presents a comparison between some of the techniques mentioned above and the results achieved with the proposed one, and Section IV concludes the paper.

II. METHODS

Shape, brightness, size and vessel convergence are properties that contribute to the identification of the OD. We therefore exploit the advantages of the individual features and develop our method for OD detection upon their complementarity. The proposed framework for OD detection is shown in Figure 2.

Figure 2. Framework for OD detection using intensity and vessel convergence consensus.

The framework is composed of four procedures for OD localisation; two of them are based on the image intensity properties and on the shape and size of the OD, while the remaining two are based on the convergence of the major blood vessels. Each procedure detects a number of candidate regions and computes the probability that each of them belongs to the OD. The outputs of the intensity processes, in terms of candidate regions, are combined, resulting in aggregate candidates characterised by confidence levels. The same process is performed for the vessel convergence module. Following this, the algorithm establishes a consensus between the information provided by the two modules and determines the location of the OD.

A. Detection Procedures

1) Intensity Based

Illumination in retinal images is uneven due to an optical aberration called vignetting [6]. In order to correct this distortion, illumination equalisation is applied. Each pixel within the image is adjusted as follows:

$I_{eq}(x, y) = I(x, y) + m - A_W(x, y)$    (1)

where m is the desired mean (128) and $A_W$ is the local mean intensity within a window of size N×N, with N = 9 pixels.
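As an illustration, the equalisation step of (1) can be sketched as follows. This is a minimal NumPy/SciPy sketch; the function name and the clipping to an 8-bit range are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def equalise_illumination(image, window=9, desired_mean=128.0):
    """Correct vignetting, following (1): I_eq = I + m - A_W,
    where A_W is the local mean intensity in a window x window neighbourhood."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)        # A_W(x, y)
    equalised = img + desired_mean - local_mean          # I_eq(x, y)
    return np.clip(equalised, 0, 255).astype(np.uint8)   # keep an 8-bit grey-level range
```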

After equalisation, the OD is located as follows:

a) Brightest intensity: In simple terms, the p% brightest pixels are thresholded and clustered, and a confidence measure is associated with each cluster. The OD typically has a diameter of one seventh of the retinal image field diameter [1]. Thus, the percentage of pixels that belongs to the OD can be obtained using (2), where M×N is the retinal image size and R is the radius of the image field:

$p = \pi (R/14)^2 (MN)^{-1} \times 100\%$    (2)

This percentage is used to calculate the thresholding level, T, as the (100 − p) percentile of the intensity values. After thresholding, a morphological closing operation is performed in order to connect parts that were separated by vessels, and the area and circularity of each cluster are measured to assign individual confidence levels. Two fuzzy sets are established according to the degree of circularity and the ratio of the cluster area to the expected OD area (see Figure 3), and the probability that a cluster corresponds to the OD is then calculated using the following linguistic rule: "A cluster corresponds to the OD if it is circular AND its size is equal to the expected OD area."

Figure 3. Membership levels for a) circularity and b) estimated area ratio.

b) Largest intensity variation: To identify the area with the highest intensity variations, a window equal to the size of the OD is defined, and a mean intensity variation operator is applied to transform the original image into a mean intensity variation image. The image is mapped using (3), where the operator $\{\cdot\}_W$ represents the mean value within a window W:

$I_{mean}(i, j) = \{I^2\}_W - (\{I\}_W)^2$    (3)

Next, the areas with the highest intensities are classified as regions of interest (ROIs). For each ROI, the circularity and size are measured and confidence levels are assigned as in the previous case. Figure 4 is an example of the mean intensity variation image and its clustered region.

Figure 4. a) Mean intensity variation image of Figure 1. Grey levels are inverted for visualisation purposes; the darkest area represents the highest variation in intensity. b) Cluster obtained after thresholding.
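The two intensity-based detectors can be sketched as follows. This is a simplified illustration of (2) and (3), not the authors' code; the 5×5 closing element and the boundary-count circularity estimate are our own assumptions.

```python
import numpy as np
from scipy import ndimage

def od_percentage(image_shape, field_radius):
    """Expected percentage of image pixels covered by the OD, as in (2)."""
    m, n = image_shape
    return np.pi * (field_radius / 14.0) ** 2 / (m * n) * 100.0

def brightest_intensity_candidates(image, field_radius):
    """Threshold the p% brightest pixels, close vessel gaps and return the
    labelled clusters together with their area and a circularity estimate."""
    p = od_percentage(image.shape, field_radius)
    threshold = np.percentile(image, 100.0 - p)                      # T = (100 - p) percentile
    mask = image >= threshold
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))   # reconnect parts split by vessels
    labels, count = ndimage.label(mask)
    candidates = []
    for lab in range(1, count + 1):
        region = labels == lab
        area = int(region.sum())
        boundary = region & ~ndimage.binary_erosion(region)
        perimeter = max(int(boundary.sum()), 1)
        circularity = 4.0 * np.pi * area / perimeter ** 2            # 1 for a perfect circle
        candidates.append((lab, area, circularity))
    return labels, candidates

def mean_intensity_variation(image, window):
    """Mean intensity variation image, {I^2}_W - ({I}_W)^2, as in (3)."""
    img = image.astype(np.float64)
    local_mean = ndimage.uniform_filter(img, size=window)
    local_mean_sq = ndimage.uniform_filter(img ** 2, size=window)
    return local_mean_sq - local_mean ** 2
```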

2) Vessel Convergence

To determine the region of convergence of the blood vessels, the major blood vessels are extracted using morphological operators. Their centrelines are detected through a skeletonisation process, and their convergence region is located using two approaches.

a) Number of starting/ending vessels: The skeletonisation procedure results in line segments of one pixel width. Branching points, i.e., pixels with more than two neighbours, are deleted, and the vessel segments are approximated by straight lines using least squares fitting, as in (4):

$\min \sum_{i=0}^{n} d_i^2 = \min \sum_{i=0}^{n} [y_i - (a x_i + b)]^2$    (4)

After straight line fitting, each line segment is represented by its starting and ending points, i.e., $p_{strt} = (x_{strt}, y_{strt})$ and $p_{end} = (x_{end}, y_{end})$. Segments shorter than a predefined value, L = 20, are erased. This ensures that noise as well as possible lesions are removed. Each starting/ending point then votes in a circular region of radius equal to the OD radius; that is, for each starting/ending point, a circular neighbourhood centred at that point receives a vote, as shown in Figure 5.

Figure 5. Convergence of the major blood vessels. a) Vessel segments approximated by straight lines; the grey circles represent votes cast by the start/end points of a particular vessel segment. b) Vote density; the area enclosed in the red circle is the area with the highest vote density.

Then, a probability function is assigned in the voting space as:

$p(x, y) = \frac{v(x, y)}{\max(v)}$    (5)

where p(x, y) represents the probability of pixel (x, y) belonging to the OD, v(x, y) is the number of votes at pixel (x, y) and max(v) is the maximum value in the voting space. The regions with the highest probabilities are considered and the cumulative probability of the regions belonging to the OD is updated. The procedure terminates when the cumulative probability exceeds 0.5, as in (6), where $p_i$ denotes the i-th highest region probability in the voting space:

$\sum_{i=1} p_i \geq 0.5$    (6)
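A minimal sketch of the vote-casting and normalisation step of (5) is given below; the data layout (a list of (x, y) endpoints) and the function name are our own assumptions.

```python
import numpy as np

def convergence_probability_map(image_shape, segment_endpoints, od_radius):
    """Accumulate one vote per endpoint over a circular neighbourhood of
    radius od_radius, then normalise as in (5): p(x, y) = v(x, y) / max(v)."""
    votes = np.zeros(image_shape, dtype=np.float64)
    rows, cols = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    for (x0, y0) in segment_endpoints:
        disc = (cols - x0) ** 2 + (rows - y0) ** 2 <= od_radius ** 2  # pixels within one OD radius
        votes[disc] += 1.0
    return votes / votes.max() if votes.max() > 0 else votes
```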

b) Geometric model: Having reduced the influence of noise and lesions, the major vessels are modelled as a parabola. Every point of the skeleton belonging to the parabola satisfies the following condition:

$x = a y^2 + b y + c$    (7)

Using least squares estimation, the values of a, b and c can be obtained by minimising:

$\min \sum_{i=0}^{n} [x_i - (a y_i^2 + b y_i + c)]^2$    (8)

Having obtained the parameters of the parabola, its vertex can be estimated by (9); Figure 6 shows an example of this process:

$y_c = -\frac{b}{2a}; \quad x_c = a y_c^2 + b y_c + c$    (9)

Figure 6. Vessel fitting to a parabolic model. The vertex of the parabola represents the location of the OD.

The confidence for this region is given as:

$p = 1 - e_f$    (10)

where $e_f$ is the fitting error, $e_f = x - (a y^2 + b y + c)$.
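A minimal sketch of the parabola fit (8) and the vertex estimate (9), assuming the skeleton points are available as (x, y) coordinates (the function name is ours):

```python
import numpy as np

def fit_parabola_vertex(skeleton_points):
    """Least squares fit of x = a*y**2 + b*y + c, as in (8), returning the
    coefficients and the vertex (x_c, y_c) of (9)."""
    pts = np.asarray(skeleton_points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    design = np.column_stack([y ** 2, y, np.ones_like(y)])  # model is linear in (a, b, c)
    (a, b, c), *_ = np.linalg.lstsq(design, x, rcond=None)
    y_c = -b / (2.0 * a)                    # vertex ordinate
    x_c = a * y_c ** 2 + b * y_c + c        # vertex abscissa
    return (a, b, c), (x_c, y_c)
```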

B. Aggregation process

Each of the techniques discussed above results in candidate regions, which are filtered through an evidence aggregation process. The procedure determines the correspondence between candidate regions and recalculates their probabilities. Two candidate regions detected by different methods are considered to correspond to the same region if the distance between their centroids is less than a specified threshold, e.g., half of the OD radius. In this case, they are fused into a single region (the set union of the constituent regions), with its probability being the sum of the prior probabilities. Candidate regions that do not correspond to the same region are not processed further. Finally, the probabilities of the refined regions are normalised as:

$rp_i = p_i \Big/ \sum_{i=1}^{N} p_i$    (11)

where N is the number of refined regions and $p_i$ is their modified probability.
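A minimal sketch of this aggregation step, assuming each candidate is reduced to a centroid and a probability (the representation and function name are ours):

```python
import numpy as np

def aggregate_candidates(candidates, max_distance):
    """Fuse candidates whose centroids are closer than max_distance
    (e.g. half the OD radius), summing their probabilities, then
    normalise the refined probabilities as in (11)."""
    refined = []
    for centre, prob in candidates:
        for idx, (ref_centre, ref_prob) in enumerate(refined):
            if np.hypot(centre[0] - ref_centre[0], centre[1] - ref_centre[1]) < max_distance:
                refined[idx] = (ref_centre, ref_prob + prob)   # correspondence: add probabilities
                break
        else:
            refined.append((centre, prob))                     # no correspondence: new refined region
    total = sum(p for _, p in refined)
    return [(c, p / total) for c, p in refined] if total > 0 else refined
```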

C. Consensus

Assuming that a correspondence exists between at least two candidate regions, a consensus process based on Bayes' rule is applied in order to establish whether a region corresponds to the location of the OD. The following formula evaluates the probability that a bright region with high intensity variation belongs to the OD, given that it is also a region of convergence:

$P(I \mid C) = \frac{P(I \cap C)}{P(C)}$    (12)

Finally, the region with the highest posterior probability is considered to be the location of the OD.
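The paper does not state how $P(I \cap C)$ is estimated from the aggregated scores, so the following is only one possible reading: corresponding regions are matched by centroid distance and the intersection is approximated by a fuzzy minimum of the two module probabilities. This approximation and the function name are ours, not the authors'.

```python
import numpy as np

def bayes_consensus(intensity_regions, convergence_regions, max_distance):
    """Return the intensity candidate with the highest estimated
    P(I|C) = P(I and C) / P(C), cf. (12).

    Both inputs are lists of ((x, y), probability) tuples from the
    aggregation step; P(I and C) is approximated by min(p_i, p_c),
    which is an assumption rather than the paper's stated estimator."""
    best_centre, best_posterior = None, 0.0
    for (ci, p_i) in intensity_regions:
        for (cc, p_c) in convergence_regions:
            if p_c > 0 and np.hypot(ci[0] - cc[0], ci[1] - cc[1]) < max_distance:
                posterior = min(p_i, p_c) / p_c
                if posterior > best_posterior:
                    best_centre, best_posterior = ci, posterior
    return best_centre, best_posterior
```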

III. RESULTS AND COMPARISON

In order to test the proposed method, a set of 99 images from several fluorescein angiographies was used. The set contains 26 images of normal retinas, 34 images of retinas with AMD and 39 images of retinas with vein occlusion. The results are compared with those of intensity- and shape-based methods. Table I shows the results obtained for each implementation on the test data set.
TABLE I. COMPARISON BETWEEN OD LOCALISATION RATES

Method                     Normal    AMD      Vein Occlusion
High Intensity             88.46%    2.94%    61.40%
High Intensity Variation   92.31%    2.94%    74.36%
Gerig Hough Transform      73.08%    55.88%   64.10%
Vessel Convergence         83.61%    64.07%   62.10%
Proposed Method            96.15%    73.52%   76.92%

To detect circular shapes, the Gerig Hough transform with gradient information was implemented. The results achieved with this technique are shown in the third row of Table I. Even though its detection rate is acceptable, the large number of false positives makes this technique unsuitable for detection purposes; however, it can be used to delineate the OD once its location has been revealed. According to the results in Table I, the proposed OD localisation method is more robust on both normal and unhealthy retinal images.

IV. CONCLUSIONS

We presented an automated methodology to accurately identify the location of the OD in images of both healthy and unhealthy retinas. The proposed system gathers complementary evidence regarding the characteristics of the OD, i.e., brightness, mean intensity variation, shape, and convergence of the major blood vessels. This information is combined in two stages: first, an aggregation procedure, which uses fuzzy sets, collects information according to a specific attribute, i.e., intensity or region of convergence of the major blood vessels; second, a probabilistic consensus approach, based on Bayes' rule, fuses the information resulting from the aggregation process and determines the OD location. The presented technique is robust, fast and does not require information about the complete vascular network, which provides a computational advantage. Furthermore, our technique can successfully deal with the presence of lesions whose characteristics resemble those of the OD, e.g., hard exudates. This is reflected in higher detection rates in images of both healthy and unhealthy retinas, compared with the results obtained by previous approaches utilising individual features. In order to assist clinicians in quantifying the damage caused by conditions such as glaucoma, the OD boundaries have to be detected. Future work will therefore focus on the application of the Gerig Hough transform and deformable contour models for OD border detection.

REFERENCES

[1] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated localisation of the optic disc, fovea and retinal blood vessels from colour fundus images," British Journal of Ophthalmology, vol. 83, pp. 902-910, 1999.
[2] M. S. Mabrouk, N. H. Solouma, and Y. M. Kadah, "Survey of retinal image segmentation and registration," GVIP Journal, vol. 6, no. 2, 2006.
[3] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, "Classification and localisation of diabetic-related eye disease," ECCV, Springer-Verlag Berlin Heidelberg, pp. 502-516, 2002.
[4] L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher, "Procedure to detect anatomical structures in optical fundus images," Computer Research Institute of Montreal.
[5] M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of optic disc in retinal images by means of a geometrical model of vessel structure," IEEE Transactions on Medical Imaging, vol. 23, no. 10, pp. 1189-1195, 2004.
[6] A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels," IEEE Transactions on Medical Imaging, vol. 22, no. 8, 2003.
[7] H. Li and O. Chutatape, "Automated feature extraction in color retinal images by a model based approach," IEEE Transactions on Biomedical Engineering, vol. 51, no. 2, 2004.
[8] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Transactions on Medical Imaging, vol. 23, no. 2, 2004.