

CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION TO DIGITAL IMAGE PROCESSING

Image processing is the science of manipulating images. With the advent of digital cameras and their easy interoperability with computers, the process of digital image processing has acquired an entirely new dimension and meaning. Image processing works with digital images to enhance, distort, accentuate or highlight inherent details in the image. The goal of each operation is to bring out some detail or, more generally, to extract information from the image.

Image analysis is concerned with the recognition of individual regions or objects in an image. Digital image processing encompasses processes whose inputs and outputs are images, as well as processes that extract attributes from images, up to and including the recognition of individual objects. Interest in digital image processing methods stems from two principal applications:

 Improvement of pictorial information for human interpretation.

 Processing of scene data for autonomous machine perception.

Digital image processing has a broad spectrum of applications such as


remote sensing, image storage and transmission for business applications, medical
imaging, acoustic imaging and automated inspection of industrial parts. Images acquired
by satellites are useful in tracking of earth resources, geographical mapping, and
prediction of agricultural crops, urban growth, weather, flood and fire control.

Space imaging applications include the recognition and analysis of objects contained in images obtained from deep space-probe missions. There are also medical applications such as the processing of X-rays, ultrasonic scans and magnetic resonance imaging. In addition to the above-mentioned applications, digital image processing is now being used to solve a wide variety of problems. Though unrelated, these problems commonly require methods capable of enhancing information for human interpretation and analysis.

Image enhancement and restoration procedures are used to process degraded images of otherwise unrecoverable objects. Successful applications of image processing concepts are found in astronomy, defence, biology and industry. The images may be used in the detection of tumours or for screening patients. A major current area of application of digital image processing techniques is machine vision. The ultimate goal of any image processing technique is to help an observer interpret the content of an image.

The field of image processing continues, as it has since the early 1970s, on a path of dynamic growth in terms of popular and scientific interest and the number of commercial applications. Considerable advances over the past thirty years have resulted in routine application of image processing to problems in medicine, manufacturing, entertainment, law enforcement and many other fields. Examples include mapping internal organs in medicine using various scanning technologies (image reconstruction from projections), automatic fingerprint recognition (pattern recognition and image coding) and HDTV (video coding), to name a few.

The discipline of image processing covers a vast area of scientific and engineering knowledge. It is built on a foundation of one- and two-dimensional signal processing theory and overlaps with such disciplines as artificial intelligence (scene understanding), information theory (image coding), statistical pattern recognition (image classification), communication theory (image coding and transmission) and microelectronics (image sensors, image processing hardware).

Broadly, image processing may be subdivided into the following categories: enhancement, restoration, coding and understanding. The goal in the first three categories is to improve the pictorial information, either in quality (for purposes of human interpretation) or in transmission efficiency. In the last category, the objective is to obtain a symbolic description of the scene, leading to autonomous machine reasoning and perception. Image processing and analysis can be defined as the “act of examining images for the purpose of identifying objects and judging their significance”. A major attraction of digital imaging is the ability to manipulate image and video information with a computer. Digital image processing is now a very important component of many industrial and commercial applications and a core component of computer vision applications.

Image processing techniques also provide the basic functional support for document image analysis and many medical applications. The field of digital image processing is continually evolving. Transform theory plays a key role in image processing, and image and signal compression is one of the most important applications of wavelets. A key idea behind wavelets is the concept of “scale”: the discrete wavelet transform decomposes an image into an “approximation” and a set of “details”. Image processing deals with the processing and display of images of real objects; its emphasis is on the modification of the image, taking in a digital image and producing another image, some other information, a decision, etc.
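The decomposition into an approximation and details can be sketched with the simplest wavelet, the Haar transform. The sketch below works in one dimension (an image would apply it along rows and then columns); the function names are illustrative, not from any particular library:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Splits an even-length signal into a low-pass "approximation"
    and a high-pass "detail" half-band, each half the length of x.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one level of the Haar transform exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x
```

Because the transform is orthogonal, the inverse reconstructs the signal exactly; compression schemes exploit the fact that many detail coefficients are near zero and can be discarded.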

1.2 IMAGE PROCESSING TECHNIQUES

An image processing system may handle a number of problems and applications, but it mostly involves the following processes, known as the basic classes of image processing:

1. Image Representation and Description


2. Image Enhancement
3. Image Restoration
4. Image Recognition and Interpretation
5. Image Segmentation
6. Image Reconstruction
7. Image Data Compression

1.2.1 Image Representation and Description

Any processed image must be represented and described in a form suitable


for further computer processing. Basically, representing a region involves two choices

 In terms of its external characteristics (its boundary) and


 In terms of its internal characteristics (the pixels in the region)

The next task is to describe the region based on the chosen representation.
Generally an external representation is chosen when the primary focus is on shape
characteristics. An internal representation is selected when the primary focus is on
reflectivity characteristics such as color and texture.
Some of the various representation approaches are:
a. Chain codes
b. Polygonal approximations
c. Signatures
d. Boundary segments

1.2.2 Image Enhancement

The principal objective of enhancement techniques is to process an image so that the result is more suitable than the original for a specific application. Most enhancement techniques are highly problem oriented, and hence an enhancement for one problem may turn out to be a degradation for another. Enhancement approaches may be classified into two broad categories:

i. Spatial Domain enhancement techniques


ii. Frequency Domain enhancement techniques

The former refers to processing the image in the image plane (on the pixels themselves), while the latter techniques are based on modifying the Fourier (or another) transform of the image. In practice, enhancement techniques often involve combinations of methods from both categories.

Some examples of enhancement operations are edge enhancement, pseudo-colouring, histogram equalization, noise filtering, unsharp masking, sharpening and magnification. The enhancement process does not increase the inherent information content of the image but only tries to present it in a more suitable manner. Enhancement operations may be either local or global; global operations operate on the entire image at a time.
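As a minimal sketch of one such global operation, histogram equalization maps each grey level through the normalized cumulative histogram, spreading the occupied levels over the full intensity range. This is the standard textbook formulation; the small test image is illustrative:

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Global histogram equalization for an integer grayscale image.

    Each grey level is mapped through the normalized cumulative
    histogram, stretching the occupied levels over [0, levels-1].
    """
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]      # CDF at first occupied level
    denom = max(cdf[-1] - cdf_min, 1)
    lut = np.round((cdf - cdf_min) / denom * (levels - 1))
    return np.clip(lut, 0, levels - 1).astype(np.uint8)[img]

# A low-contrast 2x3 image: grey values huddled in [50, 52]
low = np.array([[50, 50, 51], [51, 52, 52]], dtype=np.uint8)
high = equalize_hist(low)
```

After equalization the three occupied grey levels are stretched to span 0 to 255, illustrating how the operation re-presents, rather than adds to, the information in the image.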

1.2.3 Image Restoration

The ultimate goal of restoration techniques, as with enhancement, is to improve the image in some sense. Unlike enhancement, however, restoration attempts to recover an image that has been degraded, using some knowledge of the degradation phenomenon. Thus restoration techniques are oriented towards modelling the degradation and applying the inverse process in order to recover the original image. This approach usually involves formulating a criterion of goodness that yields an optimal estimate of the desired result.

Early techniques for digital image restoration were derived mostly from frequency domain concepts. Modern methods, however, take advantage of the algebraic approach. Although a direct solution by algebraic methods generally involves the manipulation of large systems of simultaneous equations, under certain conditions the computational complexity can be reduced to the level required by traditional frequency domain restoration techniques. Restoration techniques may be either linear or non-linear.

Image restoration may be classified into three major types:

a. Restoration models: Image formation, detector and recorder, noise model,


sampled observation.
b. Linear filtering: Inverse / pseudo-inverse filter, Wiener filter, FIR filter, Kalman
filter, semi recursive filter.
c. Other methods: speckle noise reduction, maximum entropy restoration, Bayesian
methods, blind deconvolution, etc.
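A brief sketch of the linear-filtering route (category b): a Wiener-type restoration in the frequency domain. It assumes the degradation is circular convolution with a known blur kernel, and the constant k stands in for the noise-to-signal ratio; the one-dimensional signal and kernel here are illustrative:

```python
import numpy as np

def wiener_restore(blurred, psf, k=1e-3):
    """Frequency-domain Wiener-type restoration of a 1-D signal.

    Models the degradation as circular convolution with a known
    point spread function (psf); k regularizes against division by
    small spectral values, playing the role of a noise-to-signal
    ratio. k -> 0 recovers the plain inverse filter.
    """
    H = np.fft.fft(psf, n=len(blurred))    # blur spectrum
    G = np.fft.fft(blurred)                # degraded-image spectrum
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(F_hat))

# Degrade an impulse with a small blur kernel, then restore it
original = np.zeros(16)
original[5] = 1.0
psf = np.array([0.5, 0.3, 0.2])
blurred = np.real(np.fft.ifft(np.fft.fft(psf, 16) * np.fft.fft(original)))
restored = wiener_restore(blurred, psf, k=1e-6)
```

With negligible noise the restoration is nearly exact; with real noisy data a larger k trades residual blur for noise suppression, which is exactly the "criterion of goodness" compromise described above.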

1.2.4 Image Recognition and Interpretation

Image recognition, or analysis, is the process of discovering, identifying and understanding patterns that are relevant to the performance of an image-based task. One of the principal goals of image analysis is to endow a machine with a capability that approximates that of human beings. An automated image analysis system is capable of exhibiting various degrees of intelligence. Some of the associated characteristics are:

a. The ability to extract pertinent information from a background of irrelevant


details.
b. The capability to learn from examples and to generalize this knowledge.
c. The ability to make inferences from incomplete information.
Image analysis can be divided into three basic areas:

i. Low level processing, which deals with functions requiring no intelligence
ii. Intermediate level processing, which deals with the task of extracting and characterizing components in an image resulting from low level processing
iii. High level processing, which involves recognition and interpretation and is generally termed intelligent cognition.

1.2.5 Image Segmentation


Image segmentation is a technique for extracting information from an image, and is generally the first step in image analysis. Segmentation subdivides an image into its constituent regions or objects. The level to which this subdivision is carried depends on the problem being solved: segmentation is stopped once the objects of interest in an application have been isolated.

In general, autonomous segmentation is one of the most difficult tasks in image processing, and this step determines the eventual success or failure of the analysis. Effective segmentation rarely fails to lead to a successful solution.

Segmentation algorithms for monochrome images are generally based on one of two basic properties of grey level values:

1. Discontinuity
2. Similarity

In the first category, the approach is to partition an image based on abrupt changes in grey level; the principal areas of interest here are the detection of isolated points and the detection of lines and edges in an image. The principal approaches in the second category are based on thresholding, region growing, and region splitting and merging. The concept of segmenting an image based on discontinuity or similarity of the grey level values of its pixels is applicable to both static and dynamic images. In the latter case, motion can be used as a powerful cue to improve the performance of segmentation algorithms.

1.2.6 Image Reconstruction

An important problem in image processing is reconstructing a cross section of an object from several images of its transaxial projections. A projection is a shadowgram obtained by illuminating an object with penetrating radiation; each horizontal line shown in the figure is a one-dimensional projection of a horizontal slice of the object. Each pixel in the projected image represents the total absorption of the radiation along its path from the source to the detector. By rotating the source-detector assembly around the object, projection views for several different angles can be obtained. Imaging systems that generate such slice views are called CT scanners. These reconstructions are of several types:

1. Transmission Tomography
2. Reflection Tomography
3. Emission Tomography
4. Magnetic Resonance Imaging
5. Nuclear Magnetic Resonance Imaging
If a 3-D object is scanned by a parallel beam, the entire 3-D object can be reconstructed from a set of two-dimensional slices, each of which can be reconstructed using several available algorithms.

1.2.7 Image Data Compression

An enormous amount of data is produced when a 2D light intensity function


is sampled and quantized to create a digital image. The amount of data generated may
be so great that it results in impractical storage, processing and communication
requirements.
Image compression addresses the problem of reducing the amount of data
required to represent a digital image. The underlying basis of the reduction process is
the removal of redundant data. This amounts to transforming a 2D pixel array into a
statistically uncorrelated data set. The transformation is applied prior to storage or
transmission of image. Later the compressed image is decompressed to reconstruct the
original image or an approximation to it. Initial focus in this field was on the
development of methods for reducing video transmission bandwidth, a process called
bandwidth compression.
Image compression is the natural technology for handling the increased spatial resolution of today's imaging sensors and evolving broadcast television standards. Applications of data compression include broadcast television, remote sensing via satellite, communications via aircraft, radar and sonar, teleconferencing, computer communications, facsimile transmission, document and medical imaging, and hazardous waste control, among others. Beyond these, its importance is felt across all fields of technology, especially wherever large amounts of data must be transmitted.
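The redundancy-removal idea can be illustrated with the simplest lossless scheme, run-length encoding, which exploits the fact that neighbouring pixels along a row often share the same value. The row of pixels below is illustrative:

```python
def rle_encode(row):
    """Run-length encode a sequence of pixel values as
    (value, run-length) pairs, exploiting inter-pixel redundancy."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1             # extend the current run
        else:
            runs.append([v, 1])          # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run-length) pairs back into the original row."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# One row of a binary image: long runs compress well
row = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
encoded = rle_encode(row)
```

Decoding reconstructs the row exactly, so the scheme is lossless: here 12 pixels are stored as 3 pairs. Practical image coders combine such redundancy removal with a decorrelating transform, as described above.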

1.2.8 Digital Image Representation


A digital image is an array, or matrix, of square pixels arranged in columns and rows. Image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video; the output can be either an image or a set of characteristics or parameters related to the image. Image processing usually refers to digital images, but optical and analog image processing are also possible.

An image can be defined as a 2-D signal that varies over the spatial coordinates x and y, written mathematically as f(x,y). In general, the image f(x,y) is divided into X rows and Y columns, and a pixel lies at each intersection of a row and a column. “Pixel” means “picture element”; pixels are the building blocks of a digital image, combining to form the whole. The value of the function f(x,y) at a point indexed by a row and a column is called the grey value, or intensity, of the image at that point. The number of rows in a digital image is called the vertical resolution, and the number of columns the horizontal resolution.
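These terms can be made concrete with a small hypothetical image held as a matrix; the grey values below are illustrative:

```python
import numpy as np

# A hypothetical 3x4 8-bit digital image f(x, y):
# 3 rows (vertical resolution) by 4 columns (horizontal resolution),
# each entry being one pixel's grey value (intensity).
img = np.array([[  0,  64, 128, 255],
                [ 10,  70, 140, 250],
                [ 20,  80, 150, 245]], dtype=np.uint8)

rows, cols = img.shape
vertical_resolution = rows        # number of rows
horizontal_resolution = cols      # number of columns
grey_value = img[1, 2]            # intensity f(1, 2): row 1, column 2
```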
Digital image processing has become very popular, as digital images have many advantages over analog images. The advantages are as follows:

1. It is easy to post-process the image. Small corrections can be made in the


captured image using software.
2. It is easy to store the image in the digital memory.
3. It is possible to transmit the image over networks. So sharing an image is quite
easy.
4. A digital image does not require any chemical processing, so it is environmentally friendly: harmful film chemicals are not required or used.
5. It is easy to operate a digital camera.

1.3 DIABETIC RETINOPATHY


Diabetes is one of the most common diseases worldwide: the pancreas fails to secrete an adequate amount of insulin, the hormone that regulates the blood sugar level, leading to uncontrolled sugar levels in the body. Patients suffering from this disease over a long period of time are more likely to develop the eye condition known as diabetic retinopathy (DR). Diabetic retinopathy is the leading cause of blindness among adults aged 20-74 years in the United States. According to the World Health Organization (WHO), screening the retina for diabetic retinopathy is essential for diabetic patients and will reduce the burden of the disease. However, retinal images can be difficult to interpret, and computational image analysis offers the potential to increase the efficiency and diagnostic accuracy of the screening process. Diabetic retinopathy is primarily concerned with the changes occurring in the blood vessels of the retina brought about by an increased blood glucose level. In ophthalmology, the detection of blood vessels is indispensable, being required in numerous image processing applications. The various components of the human retina, such as the macula (the central portion of the retina), the fovea (the central part of the macula), the optic disc (OD) and the blood vessels, are shown in Figure 1.1. Grading the severity of ophthalmic diseases, which implies automated diagnosis, requires appropriate information about the blood vessels. Retinal vessel segmentation has contributed to the automatic generation of retinal maps for treating age-related macular degeneration, retinal image mosaic synthesis, the extraction of characteristic points of the retinal vasculature for temporal or multimodal image registration, recognition of the optic disc position, and localization of the fovea.

1.3.1 AUTOMATIC DETECTION OF DIABETIC RETINOPATHY


In ophthalmology, the detection of blood vessels in the retina is a widely accepted initial step in most image analysis applications, and automatic evaluation of the retinal blood vessels is a prerequisite for the automatic detection of diabetic retinopathy (DR). Several approaches have been reported. A bank of directionally sensitive Gabor filters, explored over several values of the scale and elongation parameters, can be used to highlight the blood vessels in the retina. Since a Gaussian-shaped curve is a fair approximation of the grey-level profile of a blood vessel, matched filtering can be used to detect piecewise linear segments of blood vessels precisely. A mathematical morphology technique for quantifying the retinal vasculature in ocular fundus images uses a multidirectional top-hat operator with rotating structuring elements that highlights vessels in a specific direction, with further information extracted by bit plane slicing. Another technique for enhancing, detecting and segmenting blood vessels in retinal images uses a 2-D Gabor wavelet and a sharpening filter, which enhance and sharpen the vascular pattern respectively; it then detects and segments the blood vessels using morphological operators and edge detection algorithms. Saleh and Eswaran presented a method based on h-maxima transformation and multilevel thresholding for blood vessel detection. Another method uses edge enhancement and object classification for vessel detection, while a three-phase algorithm combines a number of image processing techniques for the same purpose. Matched filtering and entropy-based filtering have been used to automatically identify and extract blood vessels. A novel technique for detecting vessels in retinal images is based on the Non-Subsampled Contourlet Transform and morphological operators: it combines information from two contourlet scales and the grey scale image to extract the vessel map, with shapes like microaneurysms eliminated by morphological operators. Multilevel thresholding based on the particle swarm optimization algorithm has also been employed for detecting vessels in the retinal image: the image is first preprocessed using adaptive histogram equalization, and the blood vessels are then segmented using Tsallis multilevel thresholding.

Figure 1.1: Main components of the retina

In this work, an automated method for blood vessel detection based on mathematical morphology and KCN clustering is proposed. First, the image is preprocessed and morphological operations such as the top-hat and bottom-hat transformations are applied; finally, the blood vessels are detected using KCN clustering. The method is applied to the retinal images of the DRIVE database and the results are compared with three other methods; the comparison shows that the proposed method outperforms them.
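The matched-filter idea mentioned above, that a vessel's grey-level cross-section is roughly an inverted Gaussian, can be sketched in one dimension. The kernel width, sigma and the synthetic profile below are illustrative choices, not the tuned values used in the literature:

```python
import numpy as np

def vessel_matched_filter(profile, sigma=2.0, half_width=6):
    """Matched filtering of a 1-D grey-level profile.

    Vessels appear as roughly Gaussian-shaped dark dips, so an
    inverted, zero-mean Gaussian kernel responds most strongly
    where the profile matches that shape; the zero mean makes a
    flat background produce (near) zero response.
    """
    x = np.arange(-half_width, half_width + 1, dtype=float)
    kernel = -np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel -= kernel.mean()
    # The kernel is symmetric, so convolution equals correlation here
    return np.convolve(profile, kernel, mode="same")

# Synthetic cross-section: bright background with a dark vessel at x = 20
x = np.arange(40, dtype=float)
profile = 100.0 - 30.0 * np.exp(-(x - 20.0) ** 2 / 8.0)
response = vessel_matched_filter(profile, sigma=2.0)
vessel_position = int(np.argmax(response))
```

In 2-D, the same kernel is extended along the vessel direction and rotated over a set of orientations, keeping the maximum response per pixel; thresholding that response map yields the vessel segmentation.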
