Imp QB 1-5
(AUTONOMOUS)
Dundigal, Hyderabad - 500 043
COMPUTER SCIENCE AND ENGINEERING
QUESTION BANK
Programme: B.Tech    Semester: V (CSE)
Theory: 3    Practical: 3
COURSE OBJECTIVES:
The course should enable the students to:
I Understand the concepts of digital image processing methods and techniques.
II Study the image enhancement techniques in spatial and frequency domain for image quality improvement.
III Learn the image restoration and compression techniques for optimization.
IV Explore color image features and transformation techniques.
V Illustrate the techniques of image segmentation to identify the objects in the image.
UNIT- I
INTRODUCTION TO DIGITAL IMAGE PROCESSING
Part - A (Short Answer Questions)
S.No | Question | Bloom's Taxonomy level | Course Outcomes | Course Learning Outcomes (CLOs)
1 Define digital image processing (Understand) CO 1 ACS511.01
ans
You can see in this image that the signal has been quantized into three different levels. That means that when we sample an image, we actually gather a lot of values, and in quantization, we set levels to these values. This is made clearer in the image below.
3 To create a digital image, describe the steps involved in converting continuous sensed data into digital form. (Remember) CO 1 ACS511.03
ans
To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes: sampling and quantization. An image may be continuous with respect to the x- and y-coordinates and also in amplitude. To convert it into digital form, we have to sample the function in both coordinates and in amplitude. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization. Consider the continuous gray levels of an image along a line segment AB. To sample this function, we take equally spaced samples along line AB. The location of each sample is given by a vertical tick mark in the bottom part of the figure. The samples are shown as small squares superimposed on the function; the set of these discrete locations gives the sampled function. In order to form a digital image, the gray-level values must also be converted (quantized) into discrete quantities. So we divide the gray-level scale into eight discrete levels, ranging from black to white. The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample. The assignment is made depending on the vertical proximity of a sample to a vertical tick mark. Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.
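The two steps above can be sketched in Python for a one-dimensional signal along a line segment; the sine signal, sixteen samples, and eight levels are illustrative assumptions:

```python
import numpy as np

# Sketch of sampling and quantization for a hypothetical continuous
# signal f(t) along a line segment (the sine function is an assumption).
def sample_and_quantize(f, t_start, t_end, n_samples, n_levels):
    # Sampling: take equally spaced samples along the coordinate axis.
    t = np.linspace(t_start, t_end, n_samples)
    samples = f(t)
    # Quantization: assign each amplitude to the nearest of n_levels
    # discrete gray levels spanning the sample range.
    lo, hi = samples.min(), samples.max()
    levels = np.linspace(lo, hi, n_levels)
    idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    return t, levels[idx], idx

t, quantized, idx = sample_and_quantize(np.sin, 0.0, np.pi, 16, 8)
# idx holds integers in [0, 7] -- one of the eight discrete gray levels
# assigned to each sample by vertical proximity.
```
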
4 Write the basic relationships among the pixels in the image: a) Neighbors of a pixel b) Adjacency (Understand) CO 1 ACS511.04
ans
5 Illustrate the process of image sampling and quantization and discuss spatial and gray-level resolution in image sampling. (Remember) CO 1 ACS511.05
ans Basic concepts in sampling and quantization
To create an image which is digital, we need to convert continuous data into digital form. This is done in two steps:
1. Sampling
2. Quantization
6 What are the elements required for visual perception, and explain how an image is formed in the human eye. (Understand) CO 1 ACS511.01
ans
Muscles controlling the eye rotate the eyeball until the image of
an object of interest falls on the fovea. Cone vision is called
photopic or bright-light vision. The number of rods is much
larger: Some 75 to 150 million are distributed over the retinal
surface.
2. Image formation in the eye:
7 Explain the methods used for zooming and shrinking digital images.
Zooming Methods
1. Nearest Neighbor interpolation
2. Bilinear interpolation
3. K-times zooming
Shrinking: it may be viewed as undersampling. It is performed by row-column deletion.
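Nearest-neighbor zooming and row-column-deletion shrinking can be sketched as follows; the 2 x 2 test image and the function names are illustrative:

```python
import numpy as np

# Minimal sketch of nearest-neighbor interpolation zoom and
# row-column-deletion shrinking.
def zoom_nearest(img, factor):
    # Each output pixel takes the value of the nearest input pixel.
    rows = np.arange(img.shape[0] * factor) // factor
    cols = np.arange(img.shape[1] * factor) // factor
    return img[np.ix_(rows, cols)]

def shrink(img, factor):
    # Undersampling: keep every factor-th row and column.
    return img[::factor, ::factor]

a = np.array([[1, 2], [3, 4]])
z = zoom_nearest(a, 2)   # 4x4 image, each pixel duplicated into a 2x2 block
s = shrink(z, 2)         # recovers the original 2x2 image
```
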
8 Define digital image. Discuss how digital images are represented, with neat diagrams. (Understand) CO 1 ACS511.01
ans
Digital image processing deals with manipulation of digital images through a digital computer. It is a subfield of signals and systems but focuses particularly on images. DIP focuses on developing a computer system that is able to perform processing on an image. The input of that system is a digital image; the system processes that image using efficient algorithms and gives an image as output. The most common example is Adobe Photoshop, one of the most widely used applications for processing digital images.
How it works.
In the above figure, an image has been captured by a camera and has
been sent to a digital system to remove all the other details, and just
focus on the water drop by zooming it in such a way that the quality
of the image remains the same.
2 Find the relation among p and q in the given image segment with V = {1, 2, 3}. (Remember) CO 1 ACS511.01
ans
UNIT-II
IMAGE ENHANCEMENT IN THE SPATIAL AND FREQUENCY DOMAIN
Part - A (Short Answer Questions)
1 Define image enhancement?
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or
further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to
identify key features.
2 How to get an image negative?
In negative transformation, each value of the input image is subtracted from the L-1 and mapped onto the output
image.
In this case the following transition has been done.
s = (L – 1) – r
Since the input image is an 8 bpp image, the number of levels in this image is 256. Putting L = 256 in the equation, we get
s = 255 - r
So each value is subtracted from 255, giving the resulting image. What happens is that the lighter pixels become dark and the darker pixels become light, and the result is the image negative.
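The transformation s = 255 - r can be sketched in one line of NumPy; the 2 x 2 test image is an illustrative assumption:

```python
import numpy as np

# Negative transformation s = (L - 1) - r for an 8 bpp image (L = 256).
r = np.array([[0, 100], [200, 255]], dtype=np.uint8)
s = 255 - r
# Dark pixels become light and light pixels become dark:
# 0 -> 255, 100 -> 155, 200 -> 55, 255 -> 0
```
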
3 Define contrast stretching
Contrast stretching (also called normalization) attempts to improve an image by stretching the range of intensity values it contains to make full use of the possible values. Contrast stretching is restricted to a linear mapping of input to output values.
4 Give the Bit-Plane slicing representation
Bit plane slicing is a method of representing an image with one or more bits of the byte used for each pixel. One can use only the MSB to represent the pixel, which reduces the original gray levels to a binary image.
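Extracting a single bit plane of an 8 bpp image can be sketched with a shift and a mask; the test values are illustrative:

```python
import numpy as np

# Bit-plane slicing: extract the k-th bit of every pixel of an 8 bpp image.
def bit_plane(img, k):
    return (img >> k) & 1   # binary image for plane k

img = np.array([[0b10000000, 0b00000001],
                [0b11111111, 0b00000000]], dtype=np.uint8)
msb = bit_plane(img, 7)   # using only the MSB reduces the image to binary
lsb = bit_plane(img, 0)
```
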
5 Define the histogram of an image
The histogram of an image normally refers to a histogram of the pixel intensity values: a graph showing the number of pixels in the image at each intensity value found in that image.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
6 List the uses of the histogram for image enhancement
● It works well for various noise types, with less blurring than linear filters of similar size
● Odd-sized neighborhoods and efficient sorts yield a computationally efficient implementation
● Max filters are useful for finding the brightest points in an image
● They also tend to reduce pepper noise (i.e., dark pixel values)
11 Define the frequency domain
Frequency domain filtering of digital images involves conversion of digital images from the spatial domain to the frequency domain. Frequency domain image filtering is the process of image enhancement other than enhancement in the spatial domain, and is an approach to enhance images for specific applications.
12 Write any two steps of filtering in the frequency domain
● Spatial domain deals with the image plane itself, whereas frequency domain deals with the rate of pixel change.
● Spatial domain works by direct manipulation of pixels, whereas frequency domain works by modifying the Fourier transform.
● The spatial domain can be easier to understand, whereas the frequency domain can be harder to understand.
● Spatial domain processing is cheaper, whereas frequency domain processing is more expensive.
● Spatial domain takes less time to compute, whereas frequency domain takes more time to compute.
14 Define the ideal low pass filter.
The ideal low-pass filter is defined as “the filter that strictly allows the signals with frequency less than and
attenuates the signals with frequency more than the specified cutoff frequency.”
15 GLPF stands for
GAUSSIAN LOW PASS FILTERS
16 BLPF stands for
BUTTERWORTH LOW PASS FILTERS
Part – B (Long Answer Questions)
1 What is image enhancement? List and explain the advantages of image enhancement
Image Enhancement
Image enhancement is the process of adjusting digital images so that the results are more suitable for
display or further image analysis. For example, you can remove noise, sharpen, or brighten an image,
making it easier to identify key features.
• The principal objective of enhancement is to process an image so that the result is more suitable than the original for a specific application.
• Image enhancement techniques fall into two categories: enhancement in the spatial domain and in the frequency domain.
• The term spatial domain refers to the image plane itself, which involves DIRECT manipulation of pixels.
• Frequency domain processing techniques are based on modifying the Fourier transform of an image.
Common enhancement techniques include:
o Histogram equalization
o Noise removal using a Wiener filter
o Linear contrast adjustment
o Median filtering
o Unsharp mask filtering
o Contrast-limited adaptive histogram equalization (CLAHE)
o Decorrelation stretch
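As a sketch of the first technique in the list, histogram equalization maps each gray level through the scaled cumulative distribution, sk = (L - 1) * CDF(rk); the 2 x 2 test image is an illustrative assumption:

```python
import numpy as np

# Histogram equalization sketch for an 8 bpp image: map each gray level
# r_k through the cumulative distribution, s_k = (L - 1) * CDF(r_k).
def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist) / img.size
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[img]

img = np.array([[50, 50], [100, 200]], dtype=np.uint8)
out = equalize(img)
# The transformation spreads the occupied gray levels over the full range.
```
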
2 Discuss about the basic intensity transformation functions and plot the intensity transformations i) Linear
ii) log iii) power- law
***********(OR)************
3 Explain about the image histogram matching (specification) method.
Histogram matching is a process where a time series, image, or higher dimension scalar data is modified such that
its histogram matches that of another (reference) dataset. A common application of this is to match the images
from two sensors with slightly different responses, or from a sensor whose response changes over time.
The example given here is a synthetically generated time series. The reference series is just a uniform random
distribution, the second series is a Gaussian random distribution.
In practice for discrete valued data one does not step through data values but rather creates a mapping to the output
state for each possible input state. In the case of an image this would be a mapping for each of the 256 different
states.
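The per-state mapping described above can be sketched as follows; the tiny source and reference images, and the nearest-CDF matching rule, are illustrative assumptions:

```python
import numpy as np

# Histogram matching sketch: build a mapping for each of the 256 possible
# input states so that the source CDF tracks the reference CDF.
def match_histograms(source, reference, L=256):
    src_cdf = np.cumsum(np.bincount(source.ravel(), minlength=L)) / source.size
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=L)) / reference.size
    # For each input state, pick the first reference level whose CDF
    # reaches the source CDF value.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, L - 1).astype(np.uint8)
    return mapping[source]

src = np.array([[10, 10], [20, 30]], dtype=np.uint8)
ref = np.array([[100, 150], [200, 250]], dtype=np.uint8)
out = match_histograms(src, ref)
# Output values are drawn from the reference image's gray levels.
```
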
4 What is the importance of image enhancement? Explain with arithmetic/logic operations.
Logical operators are often used to combine two (mostly binary) images. In the case of integer images, the logical
operator is normally applied in a bitwise way.
The output (response) of a smoothing, linear spatial filter is simply the average of the pixels contained in the
neighborhood of the filter mask. These filters sometimes are called averaging filters. The idea behind smoothing
filters is straightforward. By replacing the value of every pixel in an image by the average of the gray levels in
the neighborhood defined by the filter mask, this process results in an image with reduced "sharp" transitions
in gray levels. Because random noise typically consists of sharp transitions in gray levels, the most obvious
application of smoothing is noise reduction. However, edges (which almost always are desirable features of an
image) also are characterized by sharp transitions in gray levels, so averaging filters have the undesirable side
effect that they blur edges. Another application of this type of process includes the smoothing of false contours
that result from using an insufficient number of gray levels.
• Averaging
• Gaussian
• Median filtering (non-linear)
Fig. Two 3 x 3 smoothing (averaging) filter masks. The constant multiplier in front of each mask is equal to 1 divided by the sum of the values of its coefficients, as is required to compute an average.
6 Discuss about the filtering in the frequency domain with properties of frequency and filters
Frequency domain filtering of digital images involves conversion of digital images from the spatial domain to the frequency domain. Frequency domain image filtering is the process of image enhancement other than enhancement in the spatial domain, and is an approach to enhance images for specific applications. As images are normally in the spatial domain, they need conversion to the frequency domain; the fast Fourier transform is a specific tool for spatial-domain to frequency-domain conversion.
● For image smoothing we implement low pass filters, while for image sharpening we implement high pass filters. Both low pass and high pass filtering are implemented and analyzed for the ideal filter, the Butterworth filter and the Gaussian filter. Filtering in the frequency domain is based on modifying the Fourier transform of the image of interest to achieve frequency domain filtering, and then getting back the spatial domain image by taking the inverse Fourier transform of the resultant filtered image.
7 Write the smoothing frequency-domain filters of Butterworth low pass and Gaussian low pass filters.
A commonly used discrete approximation to the Gaussian (next section) is the Butterworth filter.
Applying this filter in the frequency domain shows a similar result to the Gaussian smoothing in the
spatial domain. The transfer function of a Butterworth lowpass filter (BLPF) of order n, and with cutoff
frequency at a distance r0 from the origin, is defined as
H(u, v) = 1 / (1 + [D(u, v)/r0]^(2n))    (3.2.1)
where D(u, v) is defined in Equation 3.1.1.
As we can see from Figure 5, the frequency response of the BLPF does not have a sharp transition as in the ideal LPF, and as the filter order increases, the transition from the pass band to the stop band gets steeper. This means that as the order of the BLPF increases, it exhibits the characteristics of the ILPF. See the example below for the difference between two images filtered with different orders but the same cutoff frequency. In fact, an order of 20 already shows the ILPF characteristic.
Gaussian lowpass filter:
Gaussian filters are important in many signal processing, image processing and communication
applications. These filters are characterized by narrow bandwidths, sharp cutoffs, and low overshoots. A
key feature of Gaussian filters is that the Fourier transform of a Gaussian is also a Gaussian, so the filter
has the same response shape in both the spatial and frequency domains.
The form of a Gaussian lowpass filter in two dimensions is given by
H(u, v) = exp(-D²(u, v) / (2σ²))    (3.3.1)
where D(u, v) is the distance from the origin in the frequency plane as defined in Equation 3.1.1. The parameter σ measures the spread or dispersion of the Gaussian curve (see Figure 9). The larger the value of σ, the larger the cutoff frequency and the milder the filtering. See the example at the end of this section.
Letting σ = r0 leads to a more familiar form, as in the previous discussion. Equation 3.3.1 becomes:
H(u, v) = exp(-D²(u, v) / (2r0²))    (3.3.2)
When D(u, v) = r0, the filter is down to 0.607 of its maximum value of 1.
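The two transfer functions above can be sketched directly from Equations 3.2.1 and 3.3.2; the values of M, N, r0 and n are illustrative assumptions:

```python
import numpy as np

# Frequency-domain transfer functions for the BLPF and GLPF described above.
M, N, r0, n = 64, 64, 16, 2
u = np.arange(M) - M // 2
v = np.arange(N) - N // 2
D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from the center

H_blpf = 1.0 / (1.0 + (D / r0) ** (2 * n))       # Butterworth lowpass, order n
H_glpf = np.exp(-D ** 2 / (2.0 * r0 ** 2))       # Gaussian lowpass, sigma = r0

# At D(u, v) = r0 the GLPF is down to exp(-1/2) ~ 0.607 of its maximum of 1,
# while the BLPF is at exactly 0.5 for any order n.
```

To smooth an image, each transfer function would be multiplied with the centered Fourier transform of the image and the result inverse-transformed, as described in the text.
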
8 What are sharpening frequency domain filters and explain with Gaussian High pass filters and Laplacian
in the frequency domain.
H(u, v) = 1 - exp(-D²(u, v) / (2σ²))    (4.3.1)
where D(u, v) is defined in Equation 3.1.1, and r0 is the cutoff distance from the origin in the frequency plane. Again, Equation 4.3.1 follows Equation 4.1.1.
The parameter σ measures the spread or dispersion of the Gaussian curve. The larger the value of σ, the larger the cutoff frequency and the milder the filtering.
Figure 14: Perspective plot, image representation, and cross section of a GHPF.
Example 8: Results of highpass filtering the image using a GHPF of order 2 at several cutoff frequencies r0. [Figure omitted]
Since edges consist of mainly high frequencies, we can, in theory, detect edges by applying a highpass
frequency filter in the Fourier domain or by convolving the image with an appropriate kernel in the
spatial domain. In practice, edge detection is performed in the spatial domain, because it is
computationally less expensive and often yields better results. As we can see later, we also can detect
edges very efficiently using Laplacian filter in the frequency domain.
The Laplacian is a very useful and common tool in image processing. This is a second derivative
operator designed to measure changes in intensity without being overly sensitive to noise. The
function produces a peak at the start of the change in intensity and then at the end of the change.
As we know, the mathematical definition of derivative is the rate of change in a continuous
function. But in digital image processing, image is a discrete function f(x, y) of integer spatial
coordinates. As a result the algorithms will only be seen as approximations to the true spatial
derivatives of the original spatial-continuous image. The Laplacian of an image will highlight
regions of rapid intensity change and is therefore often used for edge detection (usually called
the Laplacian edge detector). Figure 15 shows a 3-D plot of Laplacian in the frequency domain.
F{∇²f(x, y)} = -(u² + v²) F(u, v)    (4.4.2)
Combining Equations 4.4.1 and 4.4.2,
∇²f(x, y) ⇔ -(u² + v²) F(u, v)    (4.4.3)
So, from Equation 4.4.3, we know that the Laplacian can be implemented in the frequency domain by using the filter
H(u, v) = -(u² + v²).
For an image of size M x N, the filter function with the origin at the center of the frequency rectangle is
H(u, v) = -[(u - M/2)² + (v - N/2)²]    (4.4.4)
Using Equation 4.4.4 for the filter function, the Laplacian-filtered image in the spatial domain can be obtained by
∇²f(x, y) = F⁻¹{ -[(u - M/2)² + (v - N/2)²] F(u, v) }    (4.4.5)
So, how do we use the Laplacian for image enhancement in the spatial domain? The basic way, where g(x, y) is the enhanced image, is
g(x, y) = f(x, y) - ∇²f(x, y)
In the frequency domain, the enhanced image g(x, y) can also be obtained by taking the inverse Fourier transform with a single mask (filter):
g(x, y) = F⁻¹{ [1 + (u - M/2)² + (v - N/2)²] F(u, v) }    (4.4.6)
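The centered Laplacian filter and the enhancement g = f - ∇²f can be sketched as follows; the test image and the scaling step before combining are illustrative assumptions:

```python
import numpy as np

# Sketch of Laplacian filtering in the frequency domain using the centered
# filter H(u, v) = -[(u - M/2)^2 + (v - N/2)^2].
def laplacian_enhance(f):
    M, N = f.shape
    F = np.fft.fftshift(np.fft.fft2(f))       # center the transform
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    H = -(u[:, None] ** 2 + v[None, :] ** 2)  # centered Laplacian filter
    lap = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    lap = lap / np.abs(lap).max()             # scale before combining
    return f - lap                            # g(x, y) = f(x, y) - laplacian

f = np.zeros((32, 32))
f[12:20, 12:20] = 1.0                         # bright square on dark ground
g = laplacian_enhance(f)
```
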
UNIT-III
IMAGE RESTORATION
Part - A (Short Answer Questions)
2 Give the probability density function for the Rayleigh noise model (Remember) CO 3 ACS511.05
3 Give the probability density function for the Erlang noise model (Remember) CO 3 ACS511.05
4 Give the probability density function for the Gaussian noise model (Remember) CO 3 ACS511.05
5 Give the probability density function for the salt-and-pepper noise model (Remember) CO 3 ACS511.05
6 Explain the adaptive median filter and its advantages. (Understand) CO 3 ACS511.05
7 How do you reduce periodic noise using frequency domain filters? (Understand) CO 3 ACS511.05
8 What is image restoration? (Remember) CO 3 ACS511.05
Modeling of degradation: no for enhancement, yes for restoration.
1.
Image enhancement is largely a subjective process, which means that it is a heuristic procedure designed to manipulate an image in order to achieve results that are pleasing to a viewer. On the other hand, image restoration involves formulating a criterion of goodness that will yield an optimal estimate of the desired result.
The degradation function can be estimated by:
■ Observation
■ Experimentation
■ Mathematical modeling
5 Discuss different types of filters used to restore an image (Understand) CO 3 ACS511.05
same as 3rd ans
6 Write about the noise probability density functions for all noise models (Understand) CO 3 ACS511.05
7 Specify any four important noise probability density functions. (Remember) CO 3 ACS511.05
same as 6th
8 Discuss the importance of adaptive filters in an image restoration system. Highlight the working of adaptive median filters. (Understand) CO 3 ACS511.05
Adaptive filters play an important role in modern digital
signal processing (DSP) products in areas such as
telephone echo cancellation, noise cancellation,
equalization of communications channels, biomedical
signal enhancement, active noise control (ANC), and
adaptive control systems. Adaptive filters generally handle changing signal environments, spectral overlap between noise and signal, and unknown or time-varying noise. For example, when the interference noise is strong and its spectrum overlaps that of the desired signal, removing the interference using a traditional filter such as a notch filter with fixed filter coefficients will fail to preserve the desired signal spectrum.
Inverse filtering
Constrained least squares filtering
10 What is the image filtering process? List out various image filtering techniques (Remember) CO 3 ACS511.05
Filtering is a technique for modifying or enhancing an image.
For example, you can filter an image to emphasize certain
features or remove other features. Image processing
operations implemented with filtering include smoothing,
sharpening, and edge enhancement.
Filtering is a neighborhood operation, in which the value of
any given pixel in the output image is determined by
applying some algorithm to the values of the pixels in the
neighborhood of the corresponding input pixel. A pixel's
neighborhood is some set of pixels, defined by their
locations relative to that pixel. (See Neighborhood or Block
Processing: An Overview for a general discussion of
neighborhood operations.) Linear filtering is filtering in which
the value of an output pixel is a linear combination of the
values of the pixels in the input pixel's neighborhood.
Convolution
Linear filtering of an image is accomplished through an
operation called convolution. Convolution is a neighborhood
operation in which each output pixel is the weighted sum of
neighboring input pixels. The matrix of weights is called the
convolution kernel, also known as the filter. A convolution
kernel is a correlation kernel that has been rotated 180
degrees.
For example, suppose the image is
A = [17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9]
Correlation
The operation called correlation is closely related to
convolution. In correlation, the value of an output pixel is
also computed as a weighted sum of neighboring pixels. The
difference is that the matrix of weights, in this case called the
correlation kernel, is not rotated during the computation. The
Image Processing Toolbox™ filter design functions return
correlation kernels.
The following figure shows how to compute the (2,4) output
pixel of the correlation of A, assuming h is a correlation
kernel instead of a convolution kernel, using these steps:
1. Slide the center element of the correlation kernel so that it lies on top of the (2,4) element of A.
2. Multiply each weight in the correlation kernel by
the pixel of A underneath.
3. Sum the individual products.
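The steps above can be sketched with the image A; the kernel h is an assumed, deliberately asymmetric example (the toolbox's actual h is not shown in this document), chosen so that correlation and convolution give visibly different results:

```python
import numpy as np

# Correlation vs. convolution on the 5x5 image A from the text, with an
# assumed 3x3 kernel h.
A = np.array([[17, 24,  1,  8, 15],
              [23,  5,  7, 14, 16],
              [ 4,  6, 13, 20, 22],
              [10, 12, 19, 21,  3],
              [11, 18, 25,  2,  9]])
h = np.array([[1, 0, 0],
              [0, 2, 0],
              [0, 0, 3]])   # deliberately asymmetric

def correlate(img, k):
    # Slide the kernel, multiply each weight by the pixel underneath,
    # and sum the individual products (zero padding at the borders).
    pad = k.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

corr = correlate(A, h)
# Convolution is correlation with the kernel rotated 180 degrees.
conv = correlate(A, np.rot90(h, 2))
```
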
11 Explain notch reject filters. How can we obtain a notch filter that passes rather than suppresses the frequencies in the notch area? (Understand) CO 3 ACS511.05
The Band Stop Filter, (BSF) is another type of frequency
selective circuit that functions in exactly the opposite way to
the Band Pass Filter we looked at before. The band stop
filter, also known as a band reject filter, passes all
frequencies with the exception of those within a specified
stop band which are greatly attenuated.
If this stop band is very narrow and highly attenuated over a
few hertz, then the band stop filter is more commonly
referred to as a notch filter, as its frequency response shows
that of a deep notch with high selectivity (a steep-side curve)
rather than a flattened wider band.
Command
Image > Filters > Arithmetic Mean Filter
Description
Applies an arithmetic mean filter to an image.
An arithmetic mean filter operation on an image removes
short tailed noise such as uniform and Gaussian type
noise from the image at the cost of blurring the image. The
arithmetic mean filter is defined as the average of all pixels
within a local region of an image.
The arithmetic mean is defined as: f̂(x, y) = (1/mn) Σ g(s, t), summed over all (s, t) in the local region S_xy.
Dialog box
Filter Size: size of the filter mask; a larger filter yields
stronger effect.
Click Preview to judge the results.
Click OK to proceed.
Examples
Command
Image > Filters > Geometric Mean Filter
Description
Applies a geometric mean filter to an image.
In the geometric mean method, the color value of each
pixel is replaced with the geometric mean of color values
of the pixels in a surrounding region. A larger region (filter
size) yields a stronger filter effect with the drawback of
some blurring.
The geometric mean is defined as: f̂(x, y) = [ Π g(s, t) ]^(1/(mn)), with the product taken over all (s, t) in the local region S_xy.
Dialog box
Filter Size: larger filter yields stronger effect.
Click Preview to judge the results.
Click OK to proceed.
Median Filter
Command
Image > Filters > Median Filter
Description
Applies a median filter to an image. The median filter is
defined as the median of all pixels within a local region of
an image.
The median filter is normally used to reduce salt and
pepper noise in an image, somewhat like the mean filter.
However, it often does a better job than the mean filter of
preserving useful detail in the image.
Dialog box
Filter Size: larger filter yields stronger effect.
Click Preview to judge the results.
Click OK to proceed.
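A minimal sketch of the median filter described above, in plain NumPy; the 5 x 5 test image with a single salt pixel is an illustrative assumption:

```python
import numpy as np

# 3x3 median filter: each output pixel is the median of all pixels
# within the local region (edge replication at the borders).
def median_filter(img, size=3):
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(p[i:i + size, j:j + size])
    return out

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255          # a single salt pixel
clean = median_filter(noisy)
# The isolated impulse is removed while the flat region is preserved,
# which is why the median filter handles salt-and-pepper noise well.
```
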
2 Obtain equations for Butterworth and Gaussian band reject filters (Understand) CO 3 ACS511.05
5 Derive the expression for the observed image when the degradations are linear position invariant. (Remember) CO 3 ACS511.05
same as Part B 13
UNIT-IV
COLOR IMAGE PROCESSING
Part - A (Short Answer Questions)
1 Define color image. Explain the need for color image processing. (Understand) CO 4 ACS511.05
4 List out different color models in color image processing and discuss pseudo color image processing. (Remember) CO 4 ACS511.05
10 Compare image pyramids and subband coding techniques (Remember) CO 4 ACS511.05
11 Discuss the Haar transform in wavelets and multiresolution processing (Understand) CO 4 ACS511.05
12 Give some applications of color image processing (Remember) CO 4 ACS511.05
The result is that full-color image processing
techniques are now used in a broad range of
applications, including publishing, visualization,
and the Internet.
13 Distinguish between wavelet transforms in one and two dimensions. (Understand) CO 4 ACS511.05
The discrete wavelet transform (DWT) of a one-dimensional signal can be calculated by passing it through a high pass and a low pass filter simultaneously. ... To compute the DWT of a two-dimensional image, the original image is convolved along the row and column directions by low pass and high pass filters.
14 What is a wavelet packet and the fast wavelet transform? (Understand) CO 4 ACS511.05
Wavelet packet:
.
Fast Wavelet Transform (FWT)
• a computationally efficient implementation of the DWT
• exploits the relationship between the coefficients of the DWT at adjacent scales
• also called Mallat's herringbone algorithm
• resembles the two-band subband coding scheme
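One level of the FWT's two-band subband scheme can be sketched with Haar filters; the 4-sample signal is an illustrative assumption:

```python
import numpy as np

# One level of the fast wavelet transform with Haar filters: lowpass and
# highpass branches followed by downsampling by 2.
def haar_dwt_1d(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass (approximation)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass (detail)
    return approx, detail

def haar_idwt_1d(approx, detail):
    # Inverse step: upsample and apply the synthesis filters.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

a, d = haar_dwt_1d([4, 2, 6, 6])
x = haar_idwt_1d(a, d)   # perfect reconstruction of the input signal
```

For a two-dimensional image, the same pair of filters would be applied along the rows and then along the columns, as noted in question 13 above.
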
15 What is the difference between an encoder and a decoder? Give the names of different encoders and decoders (Understand) CO 4 ACS511.05
i)
ii)
iii)
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm
21 Expand JPEG. What are the basic steps in JPEG? (Remember) CO 4 ACS511.05
JPEG stands for "Joint Photographic Experts Group."
22 Define coding redundancy and inter-pixel redundancy (Understand) CO 4 ACS511.05
Sharpening
Image enhancement process consists of a
collection of techniques whose purpose is to
improve image visual appearance and to
highlight or recover certain details of the
image for conducting an appropriate analysis
by a human or a machine.
WAVELET CODING
15 Discuss about wavelet transforms in one and two dimensions (Understand) CO 4 ACS511.05
4 Explain the JPEG compression standard and the steps involved in JPEG compression (Understand) CO 4 ACS511.05
JPEG. JPEG (Joint Photographic Experts
Group) is an image format that uses lossy
compression to create smaller file sizes.
One of JPEG's big advantages is that it
allows the designer to fine-tune the
amount of compression used.
UNIT-V
MORPHOLOGICAL IMAGE PROCESSING
Part - A (Short Answer Questions)
1 What is morphological image processing? (Understand) CO 5 ACS511.05
1. Convex hull
2. Thinning
3. Thickening
4. Skeletons
5. Pruning
13 Give the summary of morphological operations on binary images. (Remember) CO 5 ACS511.05
Morphological operations combine an
image with a structuring element, often a 3 ×
3 matrix. They process an image pixel by
pixel according to the neighbourhood pixel
values. Morphological operations are often
applied to binary images, although
techniques are available for grey level
images.
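The pixel-by-pixel neighbourhood processing summarised above can be sketched with NumPy for the two basic operations, erosion and dilation; the 3 x 3 structuring element and the test square are illustrative assumptions:

```python
import numpy as np

# Binary erosion and dilation with a structuring element (SE), processing
# the image pixel by pixel according to the neighbourhood pixel values.
def erode(img, se):
    pad = se.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.all(win[se == 1] == 1)   # SE must fit entirely
    return out

def dilate(img, se):
    pad = se.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.any(win[se == 1] == 1)   # SE touches the object
    return out

se = np.ones((3, 3), dtype=int)
img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1                 # 3x3 square of object pixels
er = erode(img, se)               # shrinks the square to its centre pixel
di = dilate(img, se)              # grows the square to fill the 5x5 image
```
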
14 Write the differences between any two morphological algorithms (Understand) CO 5 ACS511.05
15 Write algorithms for i) Convex Hull ii) Thinning (Understand) CO 5 ACS511.05
16 Explain how to detect discontinuities in image segmentation. (Understand) CO 5 ACS511.05
23 What is edge linking and boundary detection in image segmentation? (Understand) CO 5 ACS511.05
Part – B (Long Answer Questions)
1 What is the need for morphological image processing? Mention and explain applications of morphological image processing. (Understand) CO 5 ACS511.05
APPLICATIONS:
2 Discuss different morphological algorithms (Understand) CO 5 ACS511.05
12 Explain the global processing via the Hough transform for edge linking (Understand) CO 5 ACS511.05
13 Write in detail about global processing via graph-theoretic techniques for edge linking (Understand) CO 5 ACS511.05
Global processing via graph-theoretic techniques: in this process we have a global approach for edge detection and linking based on representing edge
segments in the form of a graph and searching the
graph for low-cost paths that correspond to
significant edges. This representation provides a
rugged approach that performs well in the presence of
noise.
Prepared by
Ms. S J Sowjanya, Assistant Professor HOD,
CSE