
INSTITUTE OF AERONAUTICAL ENGINEERING

(AUTONOMOUS)
Dundigal, Hyderabad - 500 043
COMPUTER SCIENCE AND ENGINEERING

TUTORIAL QUESTION BANK

Course Title IMAGE PROCESSING

Course Code ACS511

Programme B.Tech

Semester V CSE

Course Type Elective

Regulation IARE - R16

Course Structure  Theory: Lectures 3, Tutorials -, Credits 3; Practical: Laboratory -, Credits -

Chief Coordinator Ms. S J Sowjanya, Assistant Professor


Course Faculty Ms. B Tejaswi, Assistant Professor

COURSE OBJECTIVES:
The course should enable the students to:
I Understand the concepts of digital image processing methods and techniques.
II Study the image enhancement techniques in spatial and frequency domain for image quality
improvement
III Learn the image restoration and compression techniques for optimization.
IV Explore on color image features and transformation techniques.
V Illustrate the techniques of image segmentation to identify the objects in the image.

COURSE OUTCOMES (COs):


CO 1 Understand the need for image transforms, different types of image transforms, and their properties.
CO 2 Learn different techniques employed for the enhancement of images.
CO 3 Learn different causes for image degradation and overview of image restoration techniques.
CO 4 Understand the need for image compression and to learn the spatial and frequency domain
techniques of image compression.
CO 5 Learn different morphological algorithms for image analysis and recognition.
COURSE LEARNING OUTCOMES (CLOs):
ACS511.01 Understand the key concepts of Image Processing.
ACS511.02 Identify the origins of digital image processing
ACS511.03 Demonstrate the scope of the digital image processing in multiple fields
ACS511.04 Explore an overview of the components contained in a general purpose image processing
system and its use in real time applications
ACS511.05 Describe the concept of elements of visual perception.
ACS511.06 Use the concept of sampling and quantization in generating digital images
ACS511.07 Explore the basic relationships that exist between the pixels in an image
ACS511.08 Illustrate different mathematical tools used in image intensity transformations for quality
enhancement
ACS511.09 Use histogram processing techniques in image enhancement and noise reduction
ACS511.10 Understand the impact of smoothing and sharpening filters in spatial domain.
ACS511.11 Apply the Fourier transform concepts on image function in frequency domain filters(low
pass/high pass).
ACS511.12 Describe the concept of image degradation or restoration of images.
ACS511.13 Understand the various kind of noise present in the image and how to restore the noisy image.
ACS511.14 Understand the differences of inverse, least square and Wiener filtering in restoration process
of images
ACS511.15 Understand the color fundamentals and models in image processing
ACS511.16 Memorize the transformation techniques in pseudo color image processing.
ACS511.17 Use wavelet concepts in multi-resolution processing.
ACS511.18 Understand the basic multi-resolution techniques and segmentation methods
ACS511.19 Explore on lossy/lossless compression models using wavelets
ACS511.20 Use morphological operations like dilation and erosion to represent and describe regions,
boundaries etc. in identification of the components in images.
TUTORIAL QUESTION BANK

UNIT- I
INTRODUCTION TO DIGITAL IMAGE PROCESSING
Part - A (Short Answer Questions)
S.No  Question  Blooms Taxonomy Level  Course Outcomes  Course Learning Outcomes (CLOs)
1 Define digital image processing Understand CO 1 ACS511.01
ans

2 Write any two origins of image processing? Understand CO 1 ACS511.02
ans

3 Mention different types of digital images Remember CO 1 ACS511.03
ans ● TIFF (also known as TIF)
● JPEG (also known as JPG)
● GIF
● PNG, file types ending in .png.
● Raw image files.

4 Mention different bands in electromagnetic spectrum Remember CO 1 ACS511.04
ans
The types of electromagnetic radiation are broadly
classified into the following classes (regions, bands or
types):
1. Gamma radiation
2. X-ray radiation
3. Ultraviolet radiation
4. Visible radiation
5. Infrared radiation
6. Terahertz radiation
7. Microwave radiation
8. Radio waves

5 Which step is the objective of digital image processing Understand CO 1 ACS511.05
ans
6 Write one example of hardware of image processing components Understand CO 1 ACS511.01
ans Specialized image processing hardware: It consists of the
digitizer just mentioned, plus hardware that performs other
primitive operations, such as an arithmetic logic unit, which
performs arithmetic operations such as addition and subtraction
and logical operations in parallel on images.

7 What is meant by pel? Understand CO 1 ACS511.02
ans
PEL

A pixel is also known as a PEL (picture element).

Pixel is the smallest element of an image. Each
pixel corresponds to one value. In an 8-bit gray
scale image, the value of the pixel lies between 0 and
255.
8 What are the different fields in which Digital Image Processing is used? Understand CO 1 ACS511.03
ans

9 What is the need of image processing? Understand CO 1 ACS511.04
ans Image processing is a method to perform some
operations on an image, in order to get an enhanced
image or to extract some useful information from it. It
is a type of signal processing in which the input is an
image and the output may be an image or
characteristics/features associated with that image.
10 Define connectivity and path in relationship between pixels Understand CO 1 ACS511.05
ans

11 Define N4, N8, ND Understand CO 1 ACS511.01
ans
12 Define region and boundary in the image Understand CO 1 ACS511.02
ans

13 Observe the changes in sizes of different resolution images. Remember CO 1 ACS511.03
ans

14 What is meant by illumination and reflectance in image function? Understand CO 1 ACS511.04
ans

15 What are the applications of image processing? Understand CO 1 ACS511.05
ans

Applications of Digital Image Processing
Some of the major fields in which digital image processing is
widely used are mentioned below
● Image sharpening and restoration
● Medical field
● Remote sensing
● Transmission and encoding
● Machine/Robot vision
● Color processing
● Pattern recognition
● Video processing
● Microscopic Imaging

16 List the different components in a simple Image formation model Remember CO 1 ACS511.01
ans

17 Write about sampling role in digitization process Understand CO 1 ACS511.02
ans

18 Write about quantization in digitization process Understand CO 1 ACS511.03
ans
Quantization is complementary to sampling. It is done on the y axis. When you
quantize an image, you are actually dividing a signal into
quanta (partitions).
On the x axis of the signal are the coordinate values, and on the y
axis we have amplitudes. So digitizing the amplitudes is known as
quantization.

For example, a continuous signal may be quantized into
three different levels. That means that when we sample an image, we
actually gather a lot of values, and in quantization, we set levels to
these values.
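As a concrete illustration (not part of the original bank), here is a minimal Python/NumPy sketch of uniform quantization of an 8-bit image to a small number of gray levels; the function name and the midpoint mapping are illustrative assumptions.

import numpy as np

def quantize(image: np.ndarray, levels: int = 8) -> np.ndarray:
    """Uniformly quantize an 8-bit image to the given number of gray levels."""
    step = 256 // levels                         # width of each quantum (32 for 8 levels)
    return ((image // step) * step + step // 2).astype(np.uint8)

img = np.arange(0, 256, 17, dtype=np.uint8)      # a small gray ramp
print(quantize(img, levels=8))                   # every value snapped to one of 8 levels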

19 List the basic steps involved in image processing Remember CO 1 ACS511.04
ans

20 Define distance measure and give the different distance measures Remember CO 1 ACS511.05
ans
21 Define resolution of image Understand CO 1 ACS511.01
ans Image resolution is the detail an image holds.
The term applies to raster digital images, film
images, and other types of images. Higher
resolution means more image detail. Image
resolution can be measured in various ways.
Resolution quantifies how close lines can be to
each other and still be visibly resolved.
Part – B (Long Answer Questions)
1 Explain the steps involved in digital image processing with neat diagram Understand CO 1 ACS511.01
ans Fundamental steps in Digital Image Processing :
1. Image Acquisition
This is the first step or process of the fundamental
steps of digital image processing. Image acquisition
could be as simple as being given an image that is
already in digital form. Generally, the image
acquisition stage involves preprocessing, such as
scaling etc.
2. Image Enhancement
Image enhancement is among the simplest and
most appealing areas of digital image processing.
Basically, the idea behind enhancement techniques
is to bring out detail that is obscured, or simply to
highlight certain features of interest in an image.
Such as, changing brightness & contrast etc.
3. Image Restoration
Image restoration is an area that also deals with
improving the appearance of an image. However,
unlike enhancement, which is subjective, image
restoration is objective, in the sense that restoration
techniques tend to be based on mathematical or
probabilistic models of image degradation.

4. Color Image Processing


Color image processing is an area that has been
gaining its importance because of the significant
increase in the use of digital images over the
Internet. This may include color modeling and
processing in a digital domain etc.
5. Wavelets and Multiresolution Processing
Wavelets are the foundation for representing
images in various degrees of resolution. Images are
subdivided successively into smaller regions for
data compression and for pyramidal representation.
6. Compression
Compression deals with techniques for reducing the
storage required to save an image or the bandwidth
to transmit it. Particularly for transmission over the
internet, compressing data is very often necessary.
7. Morphological Processing
Morphological processing deals with tools for
extracting image components that are useful in the
representation and description of shape.
8. Segmentation
Segmentation procedures partition an image into its
constituent parts or objects. In general, autonomous
segmentation is one of the most difficult tasks in
digital image processing. A rugged segmentation
procedure brings the process a long way toward
successful solution of imaging problems that require
objects to be identified individually.
9. Representation and Description
Representation and description almost always
follow the output of a segmentation stage, which
usually is raw pixel data, constituting either the
boundary of a region or all the points in the region
itself. Choosing a representation is only part of the
solution for transforming raw data into a form
suitable for subsequent computer processing.
Description deals with extracting attributes that
result in some quantitative information of interest or
are basic for differentiating one class of objects
from another.
10. Object recognition
Recognition is the process that assigns a label,
such as, “vehicle” to an object based on its
descriptors.
11. Knowledge Base:
Knowledge may be as simple as detailing regions of
an image where the information of interest is known
to be located, thus limiting the search that has to be
conducted in seeking that information. The
knowledge base also can be quite complex, such as
an interrelated list of all major possible defects in a
materials inspection problem or an image database
containing high-resolution satellite images of a
region in connection with change-detection
applications.

2 Discuss about image sensing and acquisition process in detail. Understand CO 1 ACS511.02
ans

Image Acquisition using a Single Sensor

The figure shows the components of a single sensor. The most common
sensor of this type is the photodiode, which is
constructed of silicon materials and whose output
voltage waveform is proportional to light. The
use of a filter in front of a sensor improves selectivity.
For example, a green (pass) filter in front of a light
sensor favors light in the green band of the color
spectrum.

Image Acquisition using Sensor Strips
A geometry that is used much more frequently than
single sensors consists of an in-line arrangement of
sensors in the form of a sensor strip. The strip provides
imaging elements in one direction. Motion
perpendicular to the strip provides imaging in the other
direction. This is the type of arrangement used in most
flat bed scanners. Sensing devices with 4000 or more
in-line sensors are possible. In-line sensors are used
routinely in airborne imaging applications, in which the
imaging system is mounted on an aircraft that flies at
a constant altitude and speed over the geographical
area to be imaged.

Image Acquisition using Sensor Arrays
Individual sensors can be arranged in the form of a 2-
D array. Numerous electromagnetic and some
ultrasonic sensing devices are arranged frequently in
an array format. This is also the predominant
arrangement found in digital cameras. A typical sensor
for these cameras is a CCD array, which can be
manufactured with a broad range of sensing properties
and can be packaged in rugged arrays of 4000 * 4000
elements or more. CCD sensors are used widely in
digital cameras and other light sensing instruments.
The response of each sensor is proportional to the
integral of the light energy projected onto the surface
of the sensor, a property that is used in astronomical
and other applications requiring low noise images.

3 To create a digital image, describe the steps involved in converting continuous sensed data into digital form. Remember CO 1 ACS511.03
ans
To create a digital image, we need to convert the continuous
sensed data into digital form. This involves two processes:
sampling and quantization. An image may be continuous with
respect to the x and y coordinates and also in amplitude. To
convert it into digital form we have to sample the function in both
coordinates and in amplitude. Digitizing the coordinate values
is called sampling. Digitizing the amplitude values is called
quantization. Consider a continuous image and the gray-level
profile along a line segment AB in it. To sample this function, we
take equally spaced samples along line AB. The location of each
sample is given by a vertical tick mark in the bottom part of the
figure. The samples are shown as small squares superimposed on the
function; the set of these discrete locations gives the sampled
function. In order to form a digital function, the gray-level values
must also be converted (quantized) into discrete quantities. So we
divide the gray-level scale into eight discrete levels, ranging from
black to white. The continuous gray levels are quantized simply by
assigning one of the eight discrete gray levels to each sample. The
assignment is made depending on the vertical proximity of a sample
to a vertical tick mark. Starting at the top of the image and carrying
out this procedure line by line produces a two-dimensional digital image.
4 Write the basic relationships among the pixels in the image a) Neighbor of a pixel b) Adjacency Understand CO 1 ACS511.04
ans
5 Illustrate the process of image sampling and quantization and discuss about spatial and gray-level resolution in image sampling. Remember CO 1 ACS511.05
ans Basic concepts in Sampling and Quantization
To create an image which is digital, we need to covert
continuous data into digital form. There are two steps in
which it is done.

1. Sampling

2. Quantization

Since an image is continuous not just in its coordinates
(x axis) but also in its amplitude (y axis), the part that
deals with the digitizing of coordinates is known as
sampling and the part that deals with digitizing the
amplitude is known as quantization.

The one-dimensional function shown in Figure 1.9 (b) is
a plot of amplitude (gray level) values of the continuous
image along the line segment AB in Figure 1.9 (a). The
random variations are due to image noise. To sample this
random variations are due to image noise. To sample this
function, we take equally spaced samples along line AB,
as shown in Figure 1.9 (c).The location of each sample is
given by a vertical tick mark in the bottom part of the
figure. The samples are shown as small white squares
superimposed on the function. The set of these discrete
locations gives the sampled function. However, the values
of the samples still span (vertically) a continuous range of
gray-level values. In order to form a digital function, the
gray-level values also must be converted (quantized) into
discrete quantities. The right side of Figure 1.9 (c) shows
the gray-level scale divided into eight discrete levels,
ranging from black to white. The vertical tick marks
indicate the specific value assigned to each of
the eight gray levels. The continuous gray levels are quantized
simply by assigning one of the eight discrete gray levels to each
sample. The assignment is made depending on the vertical proximity
of a sample to a vertical tick mark. The digital samples resulting
from both sampling and quantization are shown in Figure 1.9 (d).
Starting at the top of the image and carrying out this procedure line
by line produces a two-dimensional digital image. Sampling in the
manner just described assumes that we have a continuous image in
both coordinate directions as well as in amplitude. In practice, the
method of sampling is determined by the sensor arrangement used to
generate the image. When an image is generated by a single
Sensing element combined with mechanical motion, as in
Fig below the output of the sensor is quantized in the
manner described above. However, sampling is
accomplished by selecting the number of individual
mechanical increments at which we activate the sensor to
collect data. Mechanical motion can be made very exact
so, in principle; there is almost no limit as to how fine we
can sample an image.

Figure 1.9: Generating a digital image (a) Continuous


image (b) A scan line from A to B in the continuous
image, used to illustrate the concepts of sampling and
quantization (c) Sampling and quantization. (d) Digital
scan line

6 What are the elements required for visual perception and explain how image can be formed in human eye. Understand CO 1 ACS511.01
ans

Elements of visual perception

Although the digital image processing field is built on a


foundation of mathematical and probabilistic formulations,
human intuition and analysis play a central role in the choice of
one technique versus another, and this choice often is made
based on subjective, visual judgments.

1. Structure of the Human Eye:


The eye is nearly a sphere, with an average diameter of
approximately 20mm.Three membranes enclose the eye:

Cornea: The cornea is a tough, transparent tissue that covers the
anterior surface of the eye.
Sclera: The sclera is an opaque membrane that encloses the
remainder of the optic globe.
Choroid: The choroid lies directly below the sclera. This
membrane contains a net- work of blood vessels that serve as
the major source of nutrition to the eye. The choroid coat is
heavily pigmented and hence helps to reduce the amount of
extraneous light entering the eye and the backscatter within the
optical globe.

Figure 1.3:Simplified diagram of a cross section of the human


eye.

The lens is made up of concentric layers of fibrous cells and is


suspended by fibers that attach to the ciliary body. It contains
60 to 70% water, about 6% fat, and more protein than any other
tissue in the eye. The innermost membrane of the eye is the
retina, which lines the inside of the wall's entire posterior portion.
When the eye is properly focused, light from an object outside
the eye is imaged on the retina. Pattern vision is afforded by the
distribution of discrete light receptors over the surface of the
retina.
There are two classes of receptors: cones and rods.
• The cones in each eye number between 6 and 7 million
• They are located primarily in the central portion of the
retina, called the fovea, and are highly sensitive to color.

Muscles controlling the eye rotate the eyeball until the image of
an object of interest falls on the fovea. Cone vision is called
photopic or bright-light vision. The number of rods is much
larger: Some 75 to 150 million are distributed over the retinal
surface.
2. Image formation in the eye:

The principal difference between the lens of the eye and an


ordinary optical lens is that the former is flexible. As illustrated
in Fig. 3.1, the radius of curvature of the anterior surface of the
lens is greater than the radius of its posterior surface. The shape
of the lens is controlled by tension in the fibers of the ciliary
body. To focus on distant objects, the controlling muscles cause
the lens to be relatively flattened. Similarly, these muscles
allow the lens to become thicker in order to focus on objects
near the eye. The distance between the center of the lens and
the retina (called the focal length) varies from approximately
17 mm to about 14 mm, as the refractive power of the lens
increases from its minimum to its maximum. When the eye

Figure 1.4: Graphical representation of the eye looking at a


palm tree Point C is the optical center of the lens.

7 Discuss about zooming and shrinking of digital images with examples Understand CO 1 ACS511.01
ans Zooming and Shrinking of Digital images

Zooming: It may be viewed as over sampling and increasing


number of pixels in an image so that image appears bigger.
It requires two steps
1. Creation of new pixels
2. Assignment of gray levels to those new locations.

Zooming Methods
1. Nearest Neighbor interpolation
2. Bilinear interpolation
3. K-times zooming
Shrinking: It may be viewed as under sampling. It is performed by
row-column deletion
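A minimal Python/NumPy sketch of both operations, assuming an integer factor k; the helper names are illustrative, not from the text.

import numpy as np

def zoom_nearest(image: np.ndarray, k: int) -> np.ndarray:
    """Zoom by pixel replication (nearest-neighbour): each pixel becomes a k x k block."""
    return np.repeat(np.repeat(image, k, axis=0), k, axis=1)

def shrink(image: np.ndarray, k: int) -> np.ndarray:
    """Shrink by row-column deletion: keep every k-th row and column."""
    return image[::k, ::k]

img = np.array([[1, 2], [3, 4]], dtype=np.uint8)
print(zoom_nearest(img, 2).shape)        # (4, 4)
print(shrink(zoom_nearest(img, 2), 2))   # recovers the original 2 x 2 image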
8 Define digital image. Discuss how digital images are represented with neat diagrams Understand CO 1 ACS511.01
ans
Digital image processing deals with manipulation of digital images
through a digital computer. It is a subfield of signals and systems but
focuses particularly on images. DIP focuses on developing a computer
system that is able to perform processing on an image. The input of
that system is a digital image; the system processes that image using
efficient algorithms and gives an image as an output. The most
common example is Adobe Photoshop, one of the most widely used
applications for processing digital images.

How it works.

In the above figure, an image has been captured by a camera and has
been sent to a digital system to remove all the other details, and just
focus on the water drop by zooming it in such a way that the quality
of the image remains the same.

9 List and explain applications of image processing. Remember CO 1 ACS511.02
ans APPLICATIONS OF DIGITAL IMAGE PROCESSING
Since digital image processing has very wide applications and
almost all of the technical fields are impacted by DIP, we will just
discuss some of the major applications of DIP.

Digital image processing has a broad spectrum of applications,


such as :

1.Remote sensing via satellites and other spacecrafts


2.Image transmission and storage for business applications
3.Medical processing,
4.RADAR (Radio Detection and Ranging)
5.SONAR(Sound Navigation and Ranging) and
6.Acoustic image processing (The study of underwater sound is
known as underwater acoustics or hydro acoustics.)
7.Robotics and automated inspection of industrial parts.

Images acquired by satellites are useful in tracking of


Earth resources;
1.Geographical mapping;
2.Prediction of agricultural crops,
3.Urban growth and weather monitoring
4.Flood and fire control and many other environmental
applications.

Space image applications include:


1. Recognition and analysis of objects contained in images
obtained from deep space-probe missions.
2. Image transmission and storage applications occur in broadcast
television.
3. Teleconferencing.
4. Transmission of facsimile images (printed documents and
graphics) for office automation.
Communication over computer networks
1.Closed-circuit television based security monitoring systems and
In military communications.
Medical applications:
1.Processing of chest X- rays
2.Cineangiograms
3.Projection images of transaxial tomography and
4.Medical images that occur in radiology nuclear magnetic
resonance(NMR)
5.Ultrasonic scanning
IMAGE PROCESSING TOOLBOX (IPT) is a collection of
functions that extend the capability of the MATLAB numeric
computing environment. These functions, and the
expressiveness of the MATLAB language, make many image-
processing operations easy to write in a compact, clear manner,
thus providing an ideal software prototyping environment for the
solution of image processing problems.
10 List different components present in image processing and explain each component in detail with diagram Remember CO 1 ACS511.03
ans
Components of Image processing System:

Figure : Components of Image processing System

Image Sensors: With reference to sensing, two elements are required to


acquire digital image. The first is a physical device that is sensitive to the
energy radiated by the object we wish to image and second is specialized
image processing hardware.

Specialized image processing hardware: It consists of the digitizer just
mentioned, plus hardware that performs other primitive operations such
as an arithmetic logic unit, which performs arithmetic operations such as
addition and subtraction and logical operations in parallel on images.

Computer: It is a general purpose computer and can range from a PC to
a supercomputer depending on the application. In dedicated applications,
sometimes specially designed computers are used to achieve a required
level of performance.

Software: It consists of specialized modules that perform specific tasks. A
well designed package also includes the capability for the user to write
code that, as a minimum, utilizes the specialized modules. More
sophisticated software packages allow the integration of those modules.

Mass storage: This capability is a must in image processing applications.
An image of size 1024 x 1024 pixels, in which the intensity of each pixel
is an 8-bit quantity, requires one megabyte of storage space if the image
is not compressed. Image processing applications
fall into three principal categories of storage:

i) Short term storage for use during processing


ii) On line storage for relatively fast retrieval
iii) Archival storage such as magnetic tapes and disks

Image display: Image displays in use today are mainly color TV


monitors. These monitors are driven by the outputs of image and graphics
displays cards that are an integral part of computer system.

Hardcopy devices: The devices for recording image includes laser


printers, film cameras, heat sensitive devices inkjet units and digital units
such as optical and CD ROM disk. Films provide the highest possible
resolution, but paper is the obvious medium of choice for written
applications.

Networking: It is almost a default function in any computer system in
use today because of the large amount of data inherent in image
processing applications. The key consideration in image transmission
is bandwidth.

Part – C (Problem Solving and Critical Thinking Questions)


1 Compare the linear and nonlinear operations with the help of mathematical tools. Understand CO 1 ACS511.05
ans

2 Find the relation among p and q in the given image segment and V={1,2,3} Remember CO 1 ACS511.01
ans
UNIT-II
IMAGE ENHANCEMENT IN THE SPATIAL AND FREQUENCY DOMAIN
Part - A (Short Answer Questions)
1 Define image enhancement?

Image enhancement is the process of adjusting digital images so that the results are more suitable for display or
further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to
identify key features.
2 How to get an image negative?

In negative transformation, each value of the input image is subtracted from L-1 and mapped onto the output
image.
In this case the following transformation has been done:
s = (L – 1) – r
Since the input image is an 8 bpp image, the number of levels in this image is 256. Putting L = 256 in the equation, we
get
s = 255 – r
So each value is subtracted from 255. The result is that the lighter
pixels become dark and the darker pixels become light, producing the image negative.
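A minimal Python/NumPy sketch of this transformation, assuming an 8-bit input (L = 256); the function name is illustrative.

import numpy as np

def negative(image: np.ndarray, L: int = 256) -> np.ndarray:
    """Apply the negative transformation s = (L - 1) - r to each pixel."""
    return (L - 1) - image.astype(np.int32)   # compute in a wide type first

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img).astype(np.uint8))         # [[255 191] [127 0]]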

3 Describe about contrast stretching?

Contrast stretching (also called Normalization) attempts to improve an image by stretching the range of intensity
values it contains to make full use of possible values. Contrast stretching is restricted to a linear mapping of input
to output values.
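A minimal Python/NumPy sketch of such a linear mapping, assuming an 8-bit image whose minimum and maximum are taken from the data; the helper name is illustrative.

import numpy as np

def contrast_stretch(image: np.ndarray) -> np.ndarray:
    """Linearly map the input range [r_min, r_max] onto the full [0, 255] range."""
    r_min, r_max = float(image.min()), float(image.max())
    if r_max == r_min:                        # flat image: nothing to stretch
        return image.copy()
    out = (image.astype(np.float64) - r_min) * 255.0 / (r_max - r_min)
    return out.astype(np.uint8)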
4 Give the Bit-Plane slicing representation

Bit plane slicing is a method of representing an image with one or more bits of the byte used for each pixel. One
can use only MSB to represent the pixel, which reduces the original gray level to a binary image.

The three main goals of bit plane slicing are:

● Converting a gray level image to a binary image.


● Representing an image with fewer bits and compressing the image to a smaller size
● Enhancing the image by focusing on the bit planes of interest.
5 How to construct a histogram?

Histogram of an image normally refers to a histogram of the pixel intensity values. This histogram is a graph
showing the number of pixels in an image at each different intensity value found in that image.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h (rk) = nk, where rk is
the kth gray level and nk is the number of pixels in the image having gray level rk.
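A minimal Python/NumPy sketch of computing h(r_k) = n_k for an 8-bit image; np.bincount does the per-level counting.

import numpy as np

def gray_histogram(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Return h(r_k) = n_k: the count of pixels at each gray level r_k."""
    return np.bincount(image.ravel(), minlength=levels)

img = np.array([[0, 1, 1], [2, 2, 2]], dtype=np.uint8)
print(gray_histogram(img)[:4])   # [1 2 3 0]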
6 List the uses of histograms in image enhancement

- Histograms are used to analyze images
- Histograms are used to adjust brightness
- Histograms are used to adjust the contrast of an image
- Histograms are used for image equalization
- Histograms are also used in thresholding
- Histograms improve the visual appearance of an image
7 Define the mask mode radiography

Image Enhancement in Spatial Domain


Image Subtraction
•Image subtraction is used in medical imaging called mask mode radiography.
• The initial image is captured and used as the mask image, h(x,y). Then, after
injecting a contrast material into the bloodstream, the mask image is subtracted
from the resulting image f(x,y) to give an enhanced output image g(x,y).
8 Mention any two basics of spatial filtering
Basics of Spatial Filtering:
The concept of filtering has its roots in the use of the
Fourier transform for signal processing in the so-called frequency domain.
Spatial filtering term is the filtering operations that are performed directly on the pixels of an image
9 List the different types of spatial filters.
● Low pass filter
● Highpass filter
● Rank order filter
● Derivative filter

10 Write the uses of the order-statistics filters

● It works well for various noise types, with less blurring than linear filters of similar size
● Odd sized neighborhoods and efficient sorts yield a computationally efficient implementation
● Max filters are useful for finding the brightest points in an image
● They also tend to reduce pepper noise (i.e., dark pixel values)
11 Define the frequency domain

Frequency domain filtering of digital images involves conversion of digital images from spatial domain to
frequency domain. Frequency domain image filtering is the process of image enhancement other than
enhancement in spatial domain, which is an approach to enhance images for specific application.
12 Write any two steps of filtering in the frequency domain

a) Find the Fourier transform of the input signal

b) Find the Fourier transform of the impulse response of the filter

c) Multiply the two FTs to compute the FT of output signal

d) Find the inverse FT of the FT of the output signal.
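A minimal Python/NumPy sketch of these steps, under the assumption that the filter is supplied directly as a centered transfer function H (step b) with the same shape as the grayscale image f.

import numpy as np

def filter_frequency_domain(f: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Filter image f with transfer function H in the frequency domain."""
    F = np.fft.fftshift(np.fft.fft2(f))       # (a) Fourier transform, centered
    G = H * F                                 # (c) multiply the two transforms
    g = np.fft.ifft2(np.fft.ifftshift(G))     # (d) inverse Fourier transform
    return np.real(g)                         # discard the tiny imaginary residue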


13 What is the difference between spatial and frequency
domains in filtering.

● Spatial domain deals with the image plane itself, whereas frequency domain deals with the rate of pixel change.
● Spatial domain works by direct manipulation of pixels, whereas frequency domain works by
modifying the Fourier transform.
● The spatial domain is easier to understand, whereas the frequency domain is harder to understand.
● Spatial domain processing is computationally cheaper, whereas frequency domain processing is more expensive.
● Spatial domain filtering takes less time to compute, whereas frequency domain filtering takes more time to compute.
14 Define the ideal low pass filter.

The ideal low-pass filter is defined as “the filter that strictly allows the signals with frequency less than the
specified cutoff frequency and attenuates the signals with frequency more than that cutoff.”
15 GLPF stands for
GAUSSIAN LOW PASS FILTERS
16 BLPF stands for
BUTTERWORTH LOW PASS FILTERS
Part – B (Long Answer Questions)
1 What is image enhancement? List and explain the advantages of image enhancement

Image Enhancement
Image enhancement is the process of adjusting digital images so that the results are more suitable for
display or further image analysis. For example, you can remove noise, sharpen, or brighten an image,
making it easier to identify key features.
• The principal objective of enhancement is to process an image so that the result is more
suitable than the original image for a specific application.
• Image Enhancement Fall into two categories: Enhancement in spatial domain and
Frequency domain.
• The term spatial domain refers to the Image Plane itself which is DIRECT manipulation
of pixels
• Frequency domain processing techniques are based on modifying the Fourier transform
of an image.

❖ Methods and advantages of image enhancement:

o Histogram equalization
o Noise removal using a Wiener filter
o Linear contrast adjustment
o Median filtering
o Unsharp mask filtering
o Contrast-limited adaptive histogram equalization (CLAHE)
o Decorrelation stretch
2 Discuss about the basic intensity transformation functions and plot the intensity transformations i) Linear
ii) log iii) power-law

(OR)
3 Explain about image histogram matching specification
method.

Histogram Matching (Specification):

Histogram matching is a process where a time series, image, or higher dimension scalar data is modified such that
its histogram matches that of another (reference) dataset. A common application of this is to match the images
from two sensors with slightly different responses, or from a sensor whose response changes over time.

The example given here is a synthetically generated time series. The reference series is just a uniform random
distribution, the second series is a Gaussian random distribution.

Histogram of the adjusted time series


The algorithm is as follows. The cumulative histogram is computed for each dataset. Any
particular value (xi) in the data to be adjusted has a cumulative histogram value given by G(xi). This is matched
to the equal cumulative distribution value in the reference dataset, namely H(xj). The input data value xi is replaced by
xj.

In practice for discrete valued data one does not step through data values but rather creates a mapping to the output
state for each possible input state. In the case of an image this would be a mapping for each of the 256 different
states.
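A minimal Python/NumPy sketch of this mapping for 8-bit images; the lookup-table construction via searchsorted is one reasonable way to pair the two cumulative histograms, not the only one.

import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map each of the 256 input states to an output state via the two CDFs."""
    g = np.cumsum(np.bincount(source.ravel(), minlength=256)) / source.size
    h = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size
    # For each input level x_i, find the reference level x_j with H(x_j) ~ G(x_i).
    lut = np.searchsorted(h, g).clip(0, 255).astype(np.uint8)
    return lut[source]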
4 What is the importance of image enhancement and
explain with arithmetic/logic operations.

Importance and necessity of image enhancement:


● The first is the improvement of pictorial information for human interpretation and the second is the
processing of scene data for autonomous machine perception.
● Image enhancement has a broad range of applications such as remote sensing, image and data storage for
transmission in business applications, medical imaging, acoustic imaging, Forensic sciences and industrial
automation.
● Image enhancement has a major role in medical applications such as processing of X-Rays, Ultrasonic
scanning Electron micrographs, Magnetic Resonance Imaging, Nuclear Magnetic Resonance

Arithmetic and logic operations


Image arithmetic applies one of the standard arithmetic operations or a logical operator to two or more images. The
operators are applied in a pixel-by-pixel way, i.e. the value of a pixel in the output image depends only on the
values of the corresponding pixels in the input images. Hence, the images must be of the same size. Although
image arithmetic is the most simple form of image processing, there is a wide range of applications. A main
advantage of arithmetic operators is that the process is very simple and therefore fast.

Logical operators are often used to combine two (mostly binary) images. In the case of integer images, the logical
operator is normally applied in a bitwise way.

5 What are smoothing spatial filters and explain in detailed

Smoothing Spatial Filters:


Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as
removal of small details from an image prior to (large) object extraction, and bridging of small gaps in lines or
curves. Noise reduction can be accomplished by blurring with a linear filter and also by non-linear filtering.

Smoothing Linear Filters:

The output (response) of a smoothing, linear spatial filter is simply the average of the pixels contained in the
neighborhood of the filter mask. These filters sometimes are called averaging filters. The idea behind smoothing
filters is straightforward. By replacing the value of every pixel in an image by the average of the gray levels in
the neighborhood defined by the filter mask, this process results in an image with reduced "sharp" transitions
in gray levels. Because random noise typically consists of sharp transitions in gray levels, the most obvious
application of smoothing is noise reduction. However, edges (which almost always are desirable features of an
image) also are characterized by sharp transitions in gray levels, so averaging filters have the undesirable side
effect that they blur edges. Another application of this type of process includes the smoothing of false contours
that result from using an insufficient number of gray levels.
• Averaging
• Gaussian
• Median filtering (non-linear)
Fig. Two 3 x 3 smoothing (averaging) filter masks. The constant multiplier in front of each mask is equal to one
divided by the sum of the values of its coefficients, as is required to compute an average.
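A minimal Python sketch of the 3 x 3 averaging mask, assuming SciPy is available.

import numpy as np
from scipy.ndimage import convolve

def box_filter_3x3(image: np.ndarray) -> np.ndarray:
    """Replace every pixel by the average of its 3 x 3 neighborhood."""
    mask = np.ones((3, 3)) / 9.0               # coefficients 1, multiplier 1/9
    return convolve(image.astype(np.float64), mask, mode='nearest')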
6 Discuss about the filtering in the frequency domain with properties of frequency and filters

Frequency domain filtering of digital images involves conversion of digital images from spatial domain to
frequency domain. Frequency domain image filtering is the process of image enhancement other than
enhancement in spatial domain, which is an approach to enhance images for specific applications. Here we
discuss the frequency domain approach to image filtering. Since images are normally in the spatial domain,
they first need conversion to the frequency domain. The fast Fourier transform is the standard tool for spatial
domain to frequency domain conversion.
● For image smoothing we implement low pass filters while for image sharpening we implement high pass
filters. Both low pass and high pass filtering implemented and analyze for Ideal filter, Butterworth filter and
Gaussian filter. Filtering in frequency domain is based on modifying the Fourier transform of image of
interest to achieve frequency domain filtering and then get back spatial domain image by taking inverse
Fourier transform of resultant filtered image
7 Write the smoothing frequency-domain filters of Butterworth low pass and Gaussian low pass filters.

Butterworth lowpass filter:

A commonly used discrete approximation to the Gaussian (next section) is the Butterworth filter.
Applying this filter in the frequency domain shows a similar result to the Gaussian smoothing in the
spatial domain. The transfer function of a Butterworth lowpass filter (BLPF) of order n, and with cutoff
frequency at a distance r0 from the origin, is defined as

H(u,v) = \frac{1}{1 + [D(u,v)/r_0]^{2n}}    (3.2.1)

where D(u,v) is defined in 3.1.1.

As we can see from Figure 5, the frequency response of the BLPF does not have a sharp transition as in the
ideal LPF, and as the filter order increases, the transition from the pass band to the stop band gets
steeper. This means that as the order of the BLPF increases, it exhibits the characteristics of the ILPF.
See the example below for the difference between two images filtered with different orders but the same
cutoff frequency. In fact, an order of 20 already shows the ILPF characteristic.
Gaussian lowpass filter:

Gaussian filters are important in many signal processing, image processing and communication
applications. These filters are characterized by narrow bandwidths, sharp cutoffs, and low overshoots. A
key feature of Gaussian filters is that the Fourier transform of a Gaussian is also a Gaussian, so the filter
has the same response shape in both the spatial and frequency domains.
The form of a Gaussian lowpass filter in two dimensions is given by

H(u,v) = e^{-D^2(u,v)/2\sigma^2}    (3.3.1)

where D(u,v) is the distance from the origin in the frequency plane as defined in Equation 3.1.1. The
parameter σ measures the spread or dispersion of the Gaussian curve (see Figure 9). The larger the value
of σ, the larger the cutoff frequency and the milder the filtering. See the example at the end of this section.

Figure 9: 1-D Gaussian distribution with mean 0 and σ = 1

Letting σ = r0 leads to a more familiar form, consistent with the previous discussion. Equation 3.3.1
becomes:

H(u,v) = e^{-D^2(u,v)/2r_0^2}    (3.3.2)

When D(u,v) = r0, the filter is down to 0.607 of its maximum value of 1.
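A minimal Python/NumPy sketch that builds the ideal, Butterworth, and Gaussian lowpass transfer functions on a centered frequency grid; the parameter names (r0 for the cutoff D0, n for the order) follow the discussion above.

import numpy as np

def lowpass_transfer(M: int, N: int, r0: float, kind: str = "gaussian", n: int = 2) -> np.ndarray:
    """Build a centered M x N lowpass transfer function H(u, v)."""
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # D(u, v): distance from center
    if kind == "ideal":
        return (D <= r0).astype(float)               # pass everything inside r0
    if kind == "butterworth":
        return 1.0 / (1.0 + (D / r0) ** (2 * n))     # Equation 3.2.1
    return np.exp(-(D ** 2) / (2.0 * r0 ** 2))       # Equation 3.3.2, sigma = r0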
8 What are sharpening frequency domain filters and explain with Gaussian High pass filters and Laplacian
in the frequency domain.

Gaussian highpass filter:

The transfer function of a Gaussian highpass filter (GHPF) with cutoff frequency r0 is given by:

H(u,v) = 1 - e^{-D^2(u,v)/2r_0^2}    (4.3.1)

where D(u,v) is defined in Equation 3.1.1 and r0 is the cutoff distance measured from the origin of the
frequency plane. Again, Equation 4.3.1 follows Equation 4.1.1.

The parameter σ measures the spread or dispersion of the Gaussian curve. The larger the value of σ, the
larger the cutoff frequency and the milder the filtering.

Figure 14: Perspective plot, image representation, and cross section of a GHPF.
Example 8: Results of highpass filtering the image using GHPF of order 2:

Original; r0 = 15; r0 = 30; r0 = 80

4.4 The Laplacian in the frequency domain:

Since edges consist of mainly high frequencies, we can, in theory, detect edges by applying a highpass
frequency filter in the Fourier domain or by convolving the image with an appropriate kernel in the
spatial domain. In practice, edge detection is performed in the spatial domain, because it is
computationally less expensive and often yields better results. As we can see later, we also can detect
edges very efficiently using Laplacian filter in the frequency domain.
The Laplacian is a very useful and common tool in image processing. This is a second derivative
operator designed to measure changes in intensity without being overly sensitive to noise. The
function produces a peak at the start of the change in intensity and then at the end of the change.
As we know, the mathematical definition of derivative is the rate of change in a continuous
function. But in digital image processing, an image is a discrete function f(x, y) of integer spatial
coordinates. As a result the algorithms will only be seen as approximations to the true spatial
derivatives of the original spatial-continuous image. The Laplacian of an image will highlight
regions of rapid intensity change and is therefore often used for edge detection (usually called
the Laplacian edge detector). Figure 15 shows a 3-D plot of Laplacian in the frequency domain.

Figure 15: 3-D plot of Laplacian in the frequency domain.


The Laplacian is often applied to an image that has first been smoothed with something approximating a
Gaussian smoothing filter in order to reduce its sensitivity to noise, and hence the two variants will be
described together here. The operator normally takes a single gray level image as input and produces
another gray level image as output.
The Laplacian of an image with pixel intensity values f(x, y) (original image) is given by:

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}    (4.4.1)

Since

\mathcal{F}\left[\frac{d^2 f(x)}{dx^2}\right] = (ju)^2 F(u) = -u^2 F(u)    (4.4.2)

combining Equations 4.4.1 and 4.4.2 gives

\mathcal{F}[\nabla^2 f(x,y)] = -(u^2 + v^2)F(u,v)    (4.4.3)

So, from Equation 4.4.3, we know that the Laplacian can be implemented in the frequency domain by using
the filter:

H(u,v) = -(u^2 + v^2)

For an M x N image, the filter function centered on the frequency rectangle is:

H(u,v) = -[(u - M/2)^2 + (v - N/2)^2]    (4.4.4)

Using Equation 4.4.4 for the filter function, the Laplacian-filtered image in the spatial domain can be
obtained by:

\nabla^2 f(x,y) = \mathcal{F}^{-1}\{H(u,v)F(u,v)\}    (4.4.5)

So, how do we use the Laplacian for image enhancement in the spatial domain? Here are the basic ways,
where g(x, y) is the enhanced image:

g(x,y) = f(x,y) - \nabla^2 f(x,y)    if the center coefficient of the mask is negative

g(x,y) = f(x,y) + \nabla^2 f(x,y)    if the center coefficient of the mask is positive

In the frequency domain, the enhanced image g(x, y) can also be obtained by applying a single mask
(filter) to the original image f(x, y) and taking the inverse Fourier transform:

g(x,y) = \mathcal{F}^{-1}\{[1 + (u - M/2)^2 + (v - N/2)^2]F(u,v)\}    (4.4.6)

9 Discuss about Homomorphic filtering with implementation
10 What is meant by Gradient and Laplacian? Discuss these in image enhancement
Ans
11 Explain the method of using second derivative for
image sharpening by using Laplacian operator
Ans
12 What is meant by image enhancement using point
processing? Discuss any two methods in it.
Ans
13 What is high boost filtering? Compare it with high pass
filtering
Ans
Part – C (Problem Solving and Critical Thinking Questions)
1 Describe the correlation and convolution process with an
example.
ans

2 Apply the low pass filters to perform the smoothing.


3 Apply full correlation on the image data using the
specified weighted filter for contrast enhancement.
ans
UNIT-III
IMAGE RESTORATION AND FILTERING
Part - A (Short Answer Questions)
1 Compare image enhancement and restoration techniques? Understand CO 3 ACS511.05

2 Give the probability density functions for Rayleigh noise models Remember CO 3 ACS511.05

3 Give the probability density functions for the Erlang noise models Remember CO 3 ACS511.05

4 Give the probability density functions for Gaussian noise models Remember CO 3 ACS511.05

5 Give the probability density functions for Salt and Pepper noise models Remember CO 3 ACS511.05
6 Explain adaptive median filter and its advantages. Understand CO 3 ACS511.05

7 How do you reduce the periodic noise using frequency domain filters? Understand CO 3 ACS511.05
8 What is image restoration? Remember CO 3 ACS511.05

9 Image restoration and image enhancement differences Understand CO 3 ACS511.05

1. Image restoration and image
enhancement techniques aim at improving
the image quality and both the techniques
can be performed in both spatial and
frequency domains.

2. The difference between image


enhancement and image restoration is
given below:

Criterion                Enhancement   Restoration
Result Evaluation        Subjective    Objective
Modeling of Degradation  No            Yes
Use of Prior Knowledge   No            Yes

3. Image enhancement is largely a subjective
process which means that it is a heuristic
procedure designed to manipulate an
image in order to achieve the pleasing
aspects of a viewer. On the other hand
image restoration involves formulating a
criterion of goodness that will yield an
optimal estimate of the desired result.

4. In image enhancement the degradation is
not usually modeled. Image restoration
attempts to reconstruct or recover an image
that has been degraded by using the prior
knowledge of the degradation. That is
restoration techniques try to model the
degradation and apply the inverse process
in order to recover the original image.

10 List out all the image observation models. Remember CO 3 ACS511.05
11 Explain Noise models Understand CO 3 ACS511.05

12 Explain a general model of a simplified digital image degradation process Understand CO 3 ACS511.05

13 Mention the possible classification of restoration methods Remember CO 3 ACS511.05
There are two types of image restoration
methods, namely Classical Linear Image
Restoration and Blind Image Restoration. In
classical linear restoration the blurring function is
given and the degradation process is inverted
using one of the many known restoration
algorithms; in blind restoration the blurring
function must be estimated from the image itself.
14 What are band reject filters? Remember CO 3 ACS511.05
In signal processing, a band-stop filter or band-
rejection filter is a filter that passes most frequencies
unaltered, but attenuates those in a specific range to
very low levels. It is the opposite of a band-pass filter.
15 What is the difference between notch filters and band pass filters? Understand CO 3 ACS511.05

Band Pass Filters can be used to isolate or filter out certain


frequencies that lie within a particular band or range of
frequencies.

A notch filter is a very narrow bandwidth bandstop


filter – it has a gain which drops and then rises very
steeply with increasing frequency and so the magnitude
response has the shape of a notch.

Part – B (Long Answer Questions)


1 Discuss about periodic noise reduction by frequency domain filtering. Understand CO 3 ACS511.05
Band Pass Filters can be used to isolate or filter out certain
frequencies that lie within a particular band or range of
frequencies.

A notch filter is a very narrow bandwidth bandstop


filter – it has a gain which drops and then rises very
steeply with increasing frequency and so the magnitude
response has the shape of a notch.

2 Explain model of image degradation/restoration process with a block diagram Remember CO 3 ACS511.05
3 Discuss about image restoration in the presence of noise reduction using spatial filtering Understand CO 3 ACS511.05
4 Explain three principal ways to estimate the degradation function for use in image restoration. Understand CO 3 ACS511.05
Estimation of degradation function-

The degradation function can be estimated in three
principal ways in image restoration, and these
are:

■ Observation
■ Experimentation
■ Mathematical modeling
5 Discuss different types of filters used to restore an image Understand CO 3 ACS511.05
same as 3rd ans
6 Write about Noise Probability Density Functions for all noise models Understand CO 3 ACS511.05
7 Specify any four important noise probability density functions. Remember CO 3 ACS511.05
same as 6th
8 Discuss the importance of adaptive filters in image restoration system. Highlight the working of adaptive median filters. Understand CO 3 ACS511.05
Adaptive filters play an important role in modern digital
signal processing (DSP) products in areas such as
telephone echo cancellation, noise cancellation,
equalization of communications channels, biomedical
signal enhancement, active noise control (ANC), and
adaptive control systems. Adaptive filters work
generally for the adaptation of signal-changing
environments, spectral overlap between noise and
signal, and unknown or time-varying noise. For
example, when the interference noise is strong and its
spectrum overlaps that of the desired signal, removing
the interference using a traditional filter such as a
notch filter with the fixed filter coefficients will fail to
preserve the desired signal spectrum, as shown

The Adaptive Median Filter performs spatial


processing to determine which pixels in an image have
been affected by impulse noise. The Adaptive Median
Filter classifies pixels as noise by comparing each pixel
in the image to its surrounding neighbor pixels.
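A minimal Python/NumPy sketch of the stage A / stage B logic described above, assuming an odd window that grows up to a maximum size s_max; this is a straightforward (slow) per-pixel loop, not an optimized implementation.

import numpy as np

def adaptive_median(image: np.ndarray, s_max: int = 7) -> np.ndarray:
    """Adaptive median filter: grow the window until the median is not an impulse."""
    pad = s_max // 2
    padded = np.pad(image, pad, mode='edge')
    out = image.copy()
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            s = 3
            while s <= s_max:
                k = s // 2
                win = padded[i + pad - k:i + pad + k + 1,
                             j + pad - k:j + pad + k + 1]
                z_min, z_med, z_max = win.min(), np.median(win), win.max()
                if z_min < z_med < z_max:             # stage A: median is not noise
                    z = image[i, j]                   # stage B: keep z unless it is an impulse
                    out[i, j] = z if z_min < z < z_max else z_med
                    break
                s += 2                                # median looked like noise: grow window
            else:
                out[i, j] = z_med                     # reached s_max: output the last median
    return out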
9 Compare wiener filter with inverse filtering and constrained least square filtering Remember CO 3 ACS511.05

inverse filtering
Constrained least square filtering

10 What is image filtering process? List out various image filtering techniques Remember CO 3 ACS511.05
Filtering is a technique for modifying or enhancing an image.
For example, you can filter an image to emphasize certain
features or remove other features. Image processing
operations implemented with filtering include smoothing,
sharpening, and edge enhancement.
Filtering is a neighborhood operation, in which the value of
any given pixel in the output image is determined by
applying some algorithm to the values of the pixels in the
neighborhood of the corresponding input pixel. A pixel's
neighborhood is some set of pixels, defined by their
locations relative to that pixel. (See Neighborhood or Block
Processing: An Overview for a general discussion of
neighborhood operations.) Linear filtering is filtering in which
the value of an output pixel is a linear combination of the
values of the pixels in the input pixel's neighborhood.

Convolution
Linear filtering of an image is accomplished through an
operation called convolution. Convolution is a neighborhood
operation in which each output pixel is the weighted sum of
neighboring input pixels. The matrix of weights is called the
convolution kernel, also known as the filter. A convolution
kernel is a correlation kernel that has been rotated 180
degrees.
For example, suppose the image is
A = [17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9]

and the correlation kernel is


h = [8 1 6
3 5 7
4 9 2]

You would use the following steps to compute the output


pixel at position (2,4):
1. Rotate the correlation kernel 180 degrees about
its center element to create a convolution
kernel.
2. Slide the center element of the convolution
kernel so that it lies on top of the (2,4) element
of A.
3. Multiply each weight in the rotated convolution
kernel by the pixel of A underneath.
4. Sum the individual products from step 3.

Hence the (2,4) output pixel of the convolution is 575, as
shown in the following figure.


Computing the (2,4) Output of Convolution

Correlation
The operation called correlation is closely related to
convolution. In correlation, the value of an output pixel is
also computed as a weighted sum of neighboring pixels. The
difference is that the matrix of weights, in this case called the
correlation kernel, is not rotated during the computation. The
Image Processing Toolbox™ filter design functions return
correlation kernels.
The following figure shows how to compute the (2,4) output
pixel of the correlation of A, assuming h is a correlation
kernel instead of a convolution kernel, using these steps:
1. Slide the center element of the correlation
kernel so that it lies on top of the (2,4) element of
A.
2. Multiply each weight in the correlation kernel by
the pixel of A underneath.
3. Sum the individual products.

The (2,4) output pixel from the correlation is 585.


Computing the (2,4) Output of Correlation
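Both worked examples can be checked with a short Python sketch, assuming SciPy is available; scipy.ndimage.convolve rotates the kernel while correlate does not, matching the two procedures above.

import numpy as np
from scipy.ndimage import convolve, correlate

A = np.array([[17, 24,  1,  8, 15],
              [23,  5,  7, 14, 16],
              [ 4,  6, 13, 20, 22],
              [10, 12, 19, 21,  3],
              [11, 18, 25,  2,  9]])
h = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])

# Row 2, column 4 in the 1-based text is index [1, 3] with 0-based indexing.
print(convolve(A, h)[1, 3])    # 575: kernel rotated 180 degrees before the weighted sum
print(correlate(A, h)[1, 3])   # 585: kernel used as-is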

11 Explain notch reject filters. How can we obtain the notch filter that passes rather than suppresses the frequency in the notch area? Understand CO 3 ACS511.05
The Band Stop Filter, (BSF) is another type of frequency
selective circuit that functions in exactly the opposite way to
the Band Pass Filter we looked at before. The band stop
filter, also known as a band reject filter, passes all
frequencies with the exception of those within a specified
stop band which are greatly attenuated.
If this stop band is very narrow and highly attenuated over a
few hertz, then the band stop filter is more commonly
referred to as a notch filter, as its frequency response shows
that of a deep notch with high selectivity (a steep-side curve)
rather than a flattened wider band.

12 Compare different filters in the frequency domain for noise reduction Remember CO 3 ACS511.05
same as part A 7th ans
13 Discuss about linear and position-invariant degradations Understand CO 3 ACS511.05
Part – C (Problem Solving and Critical Thinking Questions)
1 Apply Arithmetic, geometric, median filters of various sizes on image. Analyze the result. Remember CO 3 ACS511.05

Arithmetic Mean Filter

Command: Image > Filters > Arithmetic Mean Filter

Description
Applies an arithmetic mean filter to an image.
An arithmetic mean filter operation on an image removes
short tailed noise such as uniform and Gaussian type
noise from the image at the cost of blurring the image. The
arithmetic mean filter is defined as the average of all pixels
within a local region of an image.
The arithmetic mean is defined as:

\hat{f}(x,y) = \frac{1}{mn}\sum_{(s,t)\in S_{xy}} g(s,t)

where S_{xy} is the m x n neighborhood (mask) centered at (x, y).
Pixels that are included in the averaging operation are


specified by a mask. The larger the filtering mask becomes
the more predominant the blurring becomes and less high
spatial frequency detail that remains in the image.

Dialog box
Filter Size: size of the filter mask; a larger filter yields
stronger effect.
Click Preview to judge the results.
Click OK to proceed.

Examples

Geometric Mean Filter

Command: Image > Filters > Geometric Mean Filter

Description
Applies a geometric mean filter to an image.
In the geometric mean method, the color value of each
pixel is replaced with the geometric mean of color values
of the pixels in a surrounding region. A larger region (filter
size) yields a stronger filter effect with the drawback of
some blurring.
The geometric mean is defined as:

\hat{f}(x,y) = \left[\prod_{(s,t)\in S_{xy}} g(s,t)\right]^{1/mn}

The geometric mean filter is better at removing Gaussian


type noise and preserving edge features than the
arithmetic mean filter. The geometric mean filter is very
susceptible to negative outliers.

Dialog box
Filter Size: larger filter yields stronger effect.
Click Preview to judge the results.
Click OK to proceed.

Median Filter

Command: Image > Filters > Median Filter

Description
Applies a median filter to an image. The median filter is
defined as the median of all pixels within a local region of
an image.
The median filter is normally used to reduce salt and
pepper noise in an image, somewhat like the mean filter.
However, it often does a better job than the mean filter of
preserving useful detail in the image.

Dialog box
Filter Size: a larger filter yields a stronger effect.
Click Preview to judge the results.
Click OK to proceed.
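The three filters can be compared with a short Python sketch (assuming NumPy and SciPy are available; the test image and filter sizes are illustrative):

import numpy as np
from scipy import ndimage

def apply_filters(img, size):
    # Arithmetic mean: local average over a size x size neighbourhood
    arith = ndimage.uniform_filter(img, size=size)
    # Geometric mean = exp(mean(log(pixel))); the epsilon guards log(0)
    geo = np.exp(ndimage.uniform_filter(np.log(img + 1e-8), size=size))
    # Median: good at salt-and-pepper noise while preserving detail
    med = ndimage.median_filter(img, size=size)
    return arith, geo, med

img = np.random.rand(64, 64)          # illustrative noisy image
for size in (3, 5, 7):                # larger masks blur more
    arith, geo, med = apply_filters(img, size)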
2 Obtain equations for Butterworth, Gaussian band reject Understand CO 3 ACS511.05
filters
Band-reject and band-pass filters are used less in
image processing than low-pass and high-pass
filters.

Band-reject filters (also called band-stop filters)
suppress frequency content within a range
between a lower and higher cutoff frequency. The
parameter D0 here is the center frequency of the
reject band, W is the width of the band, and D(u,v)
is the distance from the centre of the (centred)
frequency rectangle.

Ideal band-reject filter:
H(u,v) = 0 if D0 - W/2 <= D(u,v) <= D0 + W/2,
H(u,v) = 1 otherwise.

Gaussian band-reject filter:
H(u,v) = 1 - exp( -(1/2) * [ (D(u,v)^2 - D0^2) / (D(u,v) W) ]^2 ).

Butterworth band-reject filter (order n):
H(u,v) = 1 / ( 1 + [ D(u,v) W / (D(u,v)^2 - D0^2) ]^(2n) ).
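As a sketch of how these transfer functions could be evaluated on a grid (Python with NumPy assumed; the shape, parameters and the small epsilons guarding division by zero are illustrative choices):

import numpy as np

def band_reject(shape, d0, w, kind="butterworth", n=2):
    # Build H(u, v) for a centred spectrum of the given shape
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2) + 1e-8          # distance from the centre
    if kind == "ideal":
        return np.where(np.abs(D - d0) <= w / 2, 0.0, 1.0)
    if kind == "butterworth":
        return 1.0 / (1.0 + ((D * w) / (D**2 - d0**2 + 1e-8)) ** (2 * n))
    # Gaussian band-reject
    return 1.0 - np.exp(-0.5 * ((D**2 - d0**2) / (D * w)) ** 2)

H = band_reject((256, 256), d0=40, w=10, kind="gaussian")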
3 Obtain equations for Butterworth, Gaussian band pass Understand CO 3 ACS511.05
filters
A band-pass filter is the opposite of a band-
reject filter. It only passes frequency content
within a range. A band-pass filter is obtained
from a band-reject filter as:
H_bp(u,v) = 1 - H_br(u,v).
The band-pass expressions follow by subtracting each of
the band-reject equations in the previous question from 1.
4 Explain Iterative deterministic approaches to restoration Remember CO 3 ACS511.05
The main iterative deterministic approaches are least
squares iteration and constrained least squares iteration.
5 Derive the expression for observed image when the Remember CO 3 ACS511.05
degradations are linear position invariant.
Same as Part B, answer 13.
UNIT-IV
COLOR IMAGE PROCESSING
Part - A (Short Answer Questions)
1 Define color image. Explain the need of color image Understand CO 4 ACS511.05
processing.
The use of color is important in image processing because:
•Color is a powerful descriptor that simplifies object
identification and extraction.
•Humans can discern thousands of color shades and
intensities, compared to about only two dozen shades of
gray.
Color image processing is divided into two major areas:
•Full-color processing: images are acquired with a full-
color sensor, such as a color TV camera or color scanner.
•Pseudo-color processing: the problem is one of assigning
a color to a particular monochrome intensity or range of
intensities.
2 Distinguish between different color models Understand CO 4 ACS511.05
A color model (also called color space or color system)
specifies colors in a standard way: it is a coordinate
system in which each color is represented by a single
point. Common models include:
•RGB model
•CMY model
•CMYK model
•HSI model
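The RGB-to-CMY(K) relationship is simple enough to state in code (a minimal NumPy sketch; channel values are assumed normalised to [0, 1] and the helper name is illustrative):

import numpy as np

def rgb_to_cmyk(rgb):
    cmy = 1.0 - rgb                      # CMY is the subtractive complement of RGB
    k = cmy.min(axis=-1, keepdims=True)  # black (key) component
    denom = np.where(k < 1.0, 1.0 - k, 1.0)
    return np.concatenate(((cmy - k) / denom, k), axis=-1)

print(rgb_to_cmyk(np.array([[1.0, 0.0, 0.0]])))  # pure red -> C=0, M=1, Y=1, K=0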
3 Mention the process of transforming gray-level image to Remember CO 4 ACS511.05
color image
A gray-level image can be transformed to a color image by
pseudo-coloring: either intensity slicing (assigning a color
to each range of intensities) or applying three independent
transformations to the intensity and feeding the results to
the red, green and blue channels.
4 List out different color models in color image processing Remember CO 4 ACS511.05
and discuss about pseudo color image processing.
•A coordinate system in which each color is represented
by a single point
•RGB model
•CMY model
•CMYK model
•HSI model
Pseudo color image processing assigns colors to
monochrome intensities or ranges of intensities.
5 Give the names of different color transformations Remember CO 4 ACS511.05
HSV (hue, saturation, value), also known as HSB
(hue, saturation, brightness) is often used by artists
because it is often more natural to think about a color
in terms of hue and saturation than in terms of additive
or subtractive color components. HSV is a
transformation of an RGB color space, and its
components and colorimetry are relative to the RGB
color space from which it was derived.
HSL (hue, saturation, lightness/luminance), also
known as HLS or HSI (hue, saturation, intensity) is
quite similar to HSV, with "lightness" replacing
"brightness". The difference is that the brightness of a
pure color is equal to the brightness of white, while the
lightness of a pure color is equal to the lightness of a
medium gray.
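Python's standard library already provides these transformations, so the HSV/HLS values for any RGB triple can be inspected directly (the triple below is illustrative):

import colorsys

r, g, b = 0.2, 0.6, 0.4                     # illustrative RGB triple in [0, 1]
h, s, v = colorsys.rgb_to_hsv(r, g, b)      # hue, saturation, value (brightness)
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)    # note: HLS order is (hue, lightness, saturation)
print(h, s, v, l)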
6 What is color image smoothing and sharpening? Understand CO 4 ACS511.05
Sharpness is a combination of two factors:
resolution and acutance. Resolution is
straightforward and not subjective. It's just the size,
in pixels, of the image file. All other factors equal, the
higher the resolution of the image—the more pixels it
has—the sharper it can be. Acutance is a little more
complicated. It’s a subjective measure of the
contrast at an edge.
7 What is color segmentation? Explain how segmentation Understand CO 4 ACS511.05
done in HSB color space.
Color image segmentation that is based on the
color feature of image pixels assumes that
homogeneous colors in the image correspond to
separate clusters and hence meaningful objects in
the image. In other words, each cluster defines a
class of pixels that share similar color properties.

HSB and HLS were developed to specify numerical Hue,
Saturation and Brightness (or Hue, Lightness and
Saturation) in an age when users had to specify colors
numerically. The usual formulations of HSB and HLS are
flawed with respect to the properties of color vision. Now
that users can choose colors visually, choose colors related
to other media (such as PANTONE), or use perceptually
based systems like L*u*v* and L*a*b*, these formulations
are rarely needed.
8 Define noise in color images. What is color image Understand CO 4 ACS511.05
compression?
Image noise is a random variation of brightness or
color information in captured images. It is a
degradation of the image signal caused by external
sources. Images containing multiplicative noise have
the characteristic that the brighter the area, the
noisier it is; mostly, however, noise is additive.

The color image is compressed by a standard lossy
compression method. The difference between the lossy
image and the original image is computed and is called
the residue. The residue in the background area is
dropped and the rest is compressed by a standard
lossless compression method.
9 List out different multi resolution expansions in wavelets Understand CO 4 ACS511.05
and multiresolution processing.
The principal expansions are series expansions of a
function in terms of scaling functions (approximation) and
wavelet functions (detail); image pyramids and subband
coding are closely related multiresolution techniques.
10 Compare image pyramids and sub band coding techniques Remember CO 4 ACS511.05
11 Discuss about Haar transform in wavelets and Understand CO 4 ACS511.05
multiresolution processing
12 Give some applications of color image processing Remember CO 4 ACS511.05
The result is that full-color image processing
techniques are now used in a broad range of
applications, including publishing, visualization,
and the Internet.
13 Distinguish between wavelet transforms in one and two Understand CO 4 ACS511.05
dimensions.
The discrete wavelet transform (DWT) of a one-
dimensional signal can be calculated by passing
it through a high-pass and a low-pass filter
simultaneously. ... To compute the DWT of a two-
dimensional image, the original image is
convolved along the row and column directions
with low-pass and high-pass filters.
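One level of this row/column filtering can be sketched for the Haar wavelet (a minimal NumPy illustration; haar_dwt2 is a hypothetical helper name and the image dimensions must be even):

import numpy as np

def haar_dwt2(img):
    def split(x):  # 1-D Haar filter bank along the last axis, with downsampling
        lo = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)   # low-pass
        hi = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)   # high-pass
        return lo, hi
    L, H = split(img)                       # filter the rows
    LL, LH = split(L.swapaxes(0, 1))        # filter the columns of L
    HL, HH = split(H.swapaxes(0, 1))        # filter the columns of H
    return (LL.swapaxes(0, 1), LH.swapaxes(0, 1),
            HL.swapaxes(0, 1), HH.swapaxes(0, 1))

img = np.random.rand(8, 8)
LL, LH, HL, HH = haar_dwt2(img)             # four 4x4 subbands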
14 What is wavelet packet and fast wavelet transform? Understand CO 4 ACS511.05
Wavelet packet: a generalisation of the DWT in which the
detail (high-pass) subbands are decomposed further as
well as the approximation, producing a full binary tree of
subbands.
Fast Wavelet Transform (FWT):
•a computationally efficient implementation of the
DWT
•exploits the relationship between the coefficients of
the DWT at adjacent scales
•also called Mallat's herringbone algorithm
•resembles the two-band subband coding
scheme
15 What is the difference between encoder and decoder? Understand CO 4 ACS511.05
Give the names of different encoders and decoders
In image compression, the encoder produces a
compressed representation of the image and the decoder
reconstructs the image from it. The source encoder
removes coding, interpixel and psychovisual
redundancies; the source decoder inverts these steps
(except for information discarded by lossy quantization).
Examples include Huffman, arithmetic, LZW and run-
length encoders/decoders, and the JPEG
encoder/decoder.
16 List out the types of encoders. What are the three Remember CO 4 ACS511.05
operations performed by source encoder.
Encoders may be error-free (lossless) or lossy. The source
encoder performs three operations in sequence:
● Mapper: transforms the image to reduce interpixel
redundancy.
● Quantizer: reduces psychovisual redundancy by
limiting the accuracy of the mapper's output
(omitted in error-free compression).
● Symbol encoder: assigns (e.g. variable-length)
codes to the quantizer output, reducing coding
redundancy.
17 What is the need for Compression? What are different Understand CO 4 ACS511.05
Compression Methods?
Compression is needed to reduce the storage space and
the transmission bandwidth/time required for images, by
removing coding, interpixel and psychovisual redundancy.
Methods are either error-free/lossless (Huffman, arithmetic,
LZW, run-length, bit-plane and lossless predictive coding)
or lossy (lossy predictive, transform and wavelet coding).
18 Discuss about Understand CO 4 ACS511.05
i)Bit-plane coding ii)Lossless predictive coding
i) Bit-plane coding decomposes a multilevel image into a
series of binary images (bit planes) and compresses each
binary image with a binary compression method (e.g.
run-length coding).
ii) Lossless predictive coding predicts each pixel from its
previously coded neighbours and encodes only the
prediction error, which has a much lower entropy than
the original pixel values.
19 Write short notes on Understand CO 4 ACS511.05
i)Variable-length coding ii)LZW coding
i) Variable-length coding assigns the shortest code words
to the most probable gray levels (Huffman coding is the
classic example), reducing coding redundancy.
ii) Lempel–Ziv–Welch (LZW) is a universal lossless data
compression algorithm that builds a dictionary of
recurring sequences on the fly and replaces each
sequence with a fixed-length code.
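A toy LZW encoder makes the dictionary-building idea concrete (a minimal Python sketch; real implementations add code-width management and a matching decoder):

def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}   # dictionary seeded with all bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                       # keep growing the current phrase
        else:
            out.append(table[w])         # emit code for the longest known phrase
            table[wc] = len(table)       # add the new phrase to the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))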
20 What are the differences between error-free compression Understand CO 4 ACS511.05
and lossy compression?
Error-free (lossless) compression allows the original
image to be reconstructed exactly but achieves modest
compression ratios; lossy compression accepts some
reconstruction error (often imperceptible) in exchange for
much higher compression ratios.
21 Expand JPEG. What are the basic steps in JPEG? Remember CO 4 ACS511.05
JPEG stands for "Joint Photographic Experts Group."
The basic steps are: color space conversion and level
shifting, partitioning into 8x8 blocks, 2-D DCT of each
block, quantization of the DCT coefficients, and zigzag
ordering followed by run-length and Huffman (entropy)
coding.
22 Define coding redundancy and inter pixel redundancy Understand CO 4 ACS511.05
Coding redundancy is present when the code words used
are longer than necessary for the gray-level probabilities
(e.g. fixed-length codes for all levels). Interpixel
redundancy arises because neighbouring pixels are highly
correlated, so any pixel can be largely predicted from its
neighbours.
23 What are the operations performed by error free Understand CO 4 ACS511.05
compression?
Error-free compression typically combines variable-length
coding with run-length, bit-plane or lossless predictive
coding; no quantization is performed, so the decoder can
reconstruct the image exactly.
Part – B (Long Answer Questions)
1 Discuss about fundamentals required for color image Understand CO 4 ACS511.05
processing
2 Write short notes on Understand CO 4 ACS511.05
i)RGB color model ii)HSI and CMYK color models
3 Compare pseudo color and full-color image processing Remember CO 4 ACS511.05
techniques.
4 What is the need of color transformation? Explain Understand CO 4 ACS511.05
different color transformation techniques.
5 What is multiresolution expansion? Discuss about series Understand CO 4 ACS511.05
expansions and scaling functions.
6 Define smoothing and sharpening process. Explain the Understand CO 4 ACS511.05
process in color images.
Sharpening
Image enhancement process consists of a
collection of techniques whose purpose is to
improve image visual appearance and to
highlight or recover certain details of the
image for conducting an appropriate analysis
by a human or a machine.

There are several methods for image
sharpening in the spatial domain. One of
the most well-known is Histogram
Equalization (HE). It is based on an
adjustment of the contrast using the
histogram of the input image, which is
manipulated in order to separate
intensity levels of higher probability
from their neighbouring levels.
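Histogram equalization itself is only a few lines (a minimal NumPy sketch for an 8-bit greyscale image; a color image would typically be equalized on its intensity channel only):

import numpy as np

def histogram_equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)   # gray-level histogram
    cdf = hist.cumsum() / img.size                   # normalised cumulative histogram
    lut = np.round(255 * cdf).astype(np.uint8)       # mapping s = T(r)
    return lut[img]

img = (np.random.rand(32, 32) * 128).astype(np.uint8)  # low-contrast test image
print(histogram_equalize(img).max())                    # levels stretched toward 255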
7 Explain how noise can be reduced in color images. Understand CO 4 ACS511.05
Noise reduction is the process of removing noise
from a signal.
All signal processing devices, both analog and
digital, have traits that make them susceptible to
noise. Noise can be random or white noise with an
even frequency distribution, or frequency dependent
noise introduced by a device's mechanism or signal
processing algorithms.
8 Compare segmentation in HSI color space and RGB Remember CO 4 ACS511.05
vector space

Same as Part B, question 2.
9 Give the summary on wavelets and multiresolution Remember CO 4 ACS511.05
processing

See Part A, question 9.
10 Describe the process involved in discrete and continuous Understand CO 4 ACS511.05
wavelet transform
11 Explain in detail about image compression models. Understand CO 4 ACS511.05
12 Specify the characteristics of lossless and lossy Remember CO 4 ACS511.05
compression.
13 Determine the method of generating variable length codes Understand CO 4 ACS511.05
with an example
14 Describe the following related to lossy compression Understand CO 4 ACS511.05
i)Transform coding ii)Wavelet coding
Transform coding: the image is divided into blocks, each
block is mapped by a reversible transform (e.g. the DCT)
into coefficients that are quantized and coded; most of the
compression comes from discarding low-energy
coefficients.
Wavelet coding: the image is decomposed by a discrete
wavelet transform into subbands whose coefficients are
quantized and entropy coded; no block partitioning is
needed, so blocking artifacts are avoided.
15 Discuss about wavelet transforms in one and two Understand CO 4 ACS511.05
dimensions
Part – C (Problem Solving and Critical Thinking Questions)
1 The color intensity of each pixel in the image depends on Remember CO 4 ACS511.05
the spectral properties of two components apart from the
object itself. What are these?
2 What are the applications of wavelet transforms in image Understand CO 4 ACS511.05
processing?Discuss.
3 On which type of images can we expect that run-length Remember CO 4 ACS511.05
coding gives high compression? Discuss the degree of
compression when using JPEG.
Run-length encoding (RLE) is a form of lossless
data compression in which runs of data are
stored as a single data value and count, rather
than as the original run. It is most useful on
data that contains many such runs: consider, for
example, simple graphic images such as icons and
line drawings.
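A toy run-length encoder makes the idea concrete (a minimal Python sketch emitting value/count pairs; the scan line is illustrative):

def rle_encode(seq):
    out = []
    for x in seq:
        if out and out[-1][0] == x:
            out[-1][1] += 1            # extend the current run
        else:
            out.append([x, 1])         # start a new run
    return [tuple(p) for p in out]

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]   # one scan line of a binary icon
print(rle_encode(row))                   # [(0, 4), (255, 2), (0, 3)]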
The degree of compression can be
adjusted, allowing a selectable tradeoff
between storage size and image quality.
JPEG typically achieves 10:1 compression
with little perceptible loss in image quality.
Type of format: lossy image compression format
Uniform Type Identifier (UTI): public.jpeg
Developed by: Joint Photographic Experts Group
4 Explain about JPEG compression standard and the steps Understand CO 4 ACS511.05
involved in JPEG compression
JPEG (Joint Photographic Experts Group) is an image
format that uses lossy compression to create smaller file
sizes. One of JPEG's big advantages is that it allows the
designer to fine-tune the amount of compression used.
The main steps are: color space conversion and level
shifting, 8x8 block partitioning, 2-D DCT of each block,
quantization, and zigzag ordering with run-length and
Huffman coding.
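The DCT/quantization core of these steps can be sketched as follows (Python with SciPy assumed; a single flat quantization step q stands in for JPEG's 8x8 quantization table, and the block values are illustrative):

import numpy as np
from scipy.fft import dctn, idctn

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted 8x8 block
coeffs = dctn(block, norm="ortho")             # 2-D forward DCT
q = 16                                         # flat quantization step (illustrative)
quantised = np.round(coeffs / q)               # most compression happens here
recon = idctn(quantised * q, norm="ortho") + 128  # decoder: dequantise + inverse DCT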
UNIT-V
MORPHOLOGICAL IMAGE PROCESSING
Part - A (Short Answer Questions)
1 What is morphological image processing? Understand CO 5 ACS511.05
2 Describe dilation morphological transformations on a Understand CO 5 ACS511.05
binary image.
3 Describe erosion morphological transformations on a Understand CO 5 ACS511.05
binary image .
4 Write short notes on structuring elements in image Understand CO 5 ACS511.05
morphological transformations.
When a structuring element is placed in a
binary image, each of its pixels is
associated with the corresponding pixel of
the neighbourhood under the structuring
element. The structuring element is said
to fit the image if, for each of its pixels set
to 1, the corresponding image pixel is also
1.
5 Write short notes on digital image watermarking. Understand CO 5 ACS511.05
Digital watermarking is the act of hiding a message
related to a digital signal (i.e. an image, song,
video) within the signal itself. It is a concept closely
related to steganography, in that they both hide a
message inside a digital signal. However, what
separates them is their goal.
6 List out the Applications of morphology. Remember CO 5 ACS511.05
Morphological operations, e.g. dilation, erosion,
closing and opening, are useful tools in surface
metrology and dimensional metrology.
7 Give the applications of digital watermarking? Remember CO 5 ACS511.05
Digital watermarking may be used for a wide range
of applications, such as: Copyright protection.
Source tracking (different recipients get differently
watermarked content) Broadcast monitoring
(television news often contains watermarked video
from international agencies)
8 What is the encoding technique in digital watermarking? Understand CO 5 ACS511.05
One approach is a digital image watermarking method
based on DCT and fractal encoding. ... The image is
encoded by fractal encoding as the first encryption, and
then the encoded parameters are used in the DCT method
as the second encryption. First, the fractal encoding
method is adopted to encode a private image with private
scales.
9 What is the decoding technique in digital watermarking? Understand CO 5 ACS511.05
The decoder loads the watermarked, normal or
corrupted image CW and extracts the hidden signature
W. Using nonblind and blind watermarking techniques,
the decoder D loads an additional image CO, which is
often the original image, to extract the watermarking
information by correlation.
10 What is "bridging gaps", and how is it achieved with Understand CO 5 ACS511.05
the help of dilation?
11 What is Hit-or-Miss Transformation? Understand CO 5 ACS511.05
12 Mention some basic morphological algorithms Remember CO 5 ACS511.05
1.convex hull
2.Thinning
3.Thickening
4.Skeletons
5.Pruning
13 Give the summary of morphological operations on Remember CO 5 ACS511.05
binary images.
Morphological operations combine an
image with a structuring element, often a 3 ×
3 matrix. They process an image pixel by
pixel according to the neighbourhood pixel
values. Morphological operations are often
applied to binary images, although
techniques are available for grey level
images.
14 Write the differences between any two morphological Understand CO 5 ACS511.05
algorithms
15 Write algorithm for Understand CO 5 ACS511.05
i) Convex Hull ii)Thinning
16 Explain how to detect discontinuities in image Understand CO 5 ACS511.05
segmentation?
Discontinuities (isolated points, lines and edges) can be
detected by running derivative-based masks over the
image, i.e. using gradient operators.
17 Differentiate between optimal and Global thresholding. Remember CO 5 ACS511.05
18 What are the things included in Region Based Understand CO 5 ACS511.05
segmentation?
19 What is the use of boundary characteristics in image Understand CO 5 ACS511.05
segmentation?
Image segmentation is typically used to
locate objects and boundaries (lines,
curves, etc.) in images. More precisely,
image segmentation is the process of
assigning a label to every pixel in an image
such that pixels with the same label share
certain characteristics.
20 List out the differences between point, edge and line Remember CO 5 ACS511.05
detection in image segmentation.
Point detection uses a mask that responds strongly to an
isolated point whose gray level differs from its
neighbourhood; line detection uses masks tuned to one-
pixel-thick lines in a given direction; edge detection uses
first- and second-order derivatives to find the boundaries
between two regions.
21 Define thresholding. What is the role of illumination in Understand CO 5 ACS511.05
thresholding?
22 Discuss about thresholds based on several variables Understand CO 5 ACS511.05
23 What is edge linking and boundary detection in image Understand CO 5 ACS511.05
segmentation?
Part – B (Long Answer Questions)
1 What is the need of morphological image processing? Understand CO 5 ACS511.05
Mention and explain applications of morphological
image processing.

Morphological image processing is a collection of non-
linear operations related to the shape or morphology of
features in an image. Morphological operations rely only on
the relative ordering of pixel values, not on their numerical
values, and therefore are especially suited to the processing
of binary images. Morphological operations can also be
applied to greyscale images such that their light transfer
functions are unknown and therefore their absolute pixel
values are of no or minor interest.

Morphological techniques probe an image with a small
shape or template called a structuring element. The
structuring element is positioned at all possible locations in
the image and it is compared with the corresponding
neighbourhood of pixels. Some operations test whether the
element "fits" within the neighbourhood, while others test
whether it "hits" or intersects the neighbourhood:

A morphological operation on a binary image creates a


new binary image in which the pixel has a non-zero value
only if the test is successful at that location in the input
image.
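A minimal illustration of "fit" (erosion) and "hit" (dilation) on a binary image, assuming NumPy and SciPy (the 5x5 square and 3x3 structuring element are illustrative):

import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True                     # a 5x5 square as the binary object
se = np.ones((3, 3), dtype=bool)         # 3x3 structuring element

eroded  = ndimage.binary_erosion(img, structure=se)   # keeps pixels where the SE fits
dilated = ndimage.binary_dilation(img, structure=se)  # keeps pixels where the SE hits
opened  = ndimage.binary_opening(img, structure=se)   # erosion followed by dilation
closed  = ndimage.binary_closing(img, structure=se)   # dilation followed by erosion
print(eroded.sum(), img.sum(), dilated.sum())         # 9, 25, 49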
APPLICATIONS: noise filtering of binary images,
boundary extraction, region filling, extraction of
connected components, thinning, thickening,
skeletonisation and shape analysis (granulometry).
2 Discuss about different morphological algorithms Understand CO 5 ACS511.05
The basic morphological algorithms include boundary
extraction, region filling, extraction of connected
components, convex hull, thinning, thickening, skeletons
and pruning. For boundary extraction, the boundary is the
set difference between A and the erosion of A by a
structuring element B; using a larger structuring element
will yield a thicker boundary.
3 Determine the importance of Hit-or-Miss morphological Remember CO 5 ACS511.05
transformation operation on a digital binary image
4 Explain the opening operation in image morphology Understand CO 5 ACS511.05
with examples?
5 Explain the closing operation in image morphology Understand CO 5 ACS511.05
with examples?
6 Mention algorithms for Remember CO 5 ACS511.05
i)Extraction of connected components ii)skeleton
iii)pruning

See the answer to question 2, Part B.
7 What are the preliminaries for morphological image Understand CO 5 ACS511.05
processing? Explain.
8 Write short notes on image segmentation and edge Understand CO 5 ACS511.05
detection.
9 Discuss region oriented segmentation in detail. Understand CO 5 ACS511.05
10 What are the derivative operators useful in image Understand CO 5 ACS511.05
segmentation? Explain their role in segmentation
The main derivative operators are the gradient (first
derivative, e.g. the Sobel and Prewitt masks) and the
Laplacian (second derivative). The Laplacian of Gaussian
(LoG) is computed by smoothing the image with the
Gaussian smoothing mask, followed by application of the
Laplacian mask; edges are then located at the zero
crossings of the response.
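A minimal sketch of the LoG approach (assuming NumPy and SciPy; sigma, the test image and the crude one-directional zero-crossing test are illustrative):

import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)
log_response = ndimage.gaussian_laplace(img, sigma=2.0)  # Gaussian smoothing, then Laplacian
# Edges lie at the zero crossings of the response; a crude horizontal test:
zero_cross = np.signbit(log_response[:, :-1]) != np.signbit(log_response[:, 1:])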
11 What is thresholding? Explain about global thresholding Understand CO 5 ACS511.05
12 Explain about the Global processing via the Hough Understand CO 5 ACS511.05
Transform for Edge linking
13 Write in detail about the Global processing via graph- Understand CO 5 ACS511.05
theoretic techniques for edge linking
Global processing via graph-theoretic techniques In
this process we have a global approach for edge
detection and linking based on representing edge
segments in the form of a graph and searching the
graph for low-cost paths that correspond to
significant edges. This representation provides a
rugged approach that performs well in the presence of
noise.
14 Discuss in detail the threshold selection based on Understand CO 5 ACS511.05
boundary characteristics
The gradient at any point (x, y) in an image can be
found; similarly, the Laplacian (del^2)f can also be
found. These two quantities may be used to form a
three-level image, as follows:
s(x,y) = 0 if grad(f) < T,
s(x,y) = + if grad(f) >= T and (del^2)f >= 0,
s(x,y) = - if grad(f) >= T and (del^2)f < 0,
where the symbols 0, +, and - represent any three
distinct gray levels, T is a threshold, and the
gradient and Laplacian are computed at every point
(x, y). For a dark object on a light background, this
produces an image s(x, y) in which (1) all pixels that
are not on an edge (as determined by the gradient
being less than T) are labeled 0; (2) all pixels on the
dark side of an edge are labeled +; and (3) all pixels
on the light side of an edge are labeled -. The
symbols + and - are reversed for a light object on a
dark background. Figure 6.1 shows the labeling
produced by this equation for an image of a dark,
underlined stroke written on a light background.
The pixels selected by a gradient/Laplacian criterion
can be expected to be sparsely populated; this
property produces the highly desirable deep valleys
in the histogram used for threshold selection.
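The three-level labeling can be sketched directly from this definition (a minimal NumPy/SciPy illustration; the sign convention follows the dark-object-on-light-background case above):

import numpy as np
from scipy import ndimage

def three_level_label(img, T):
    gy, gx = np.gradient(img.astype(float))      # derivatives along rows and columns
    grad = np.hypot(gx, gy)                      # gradient magnitude
    lap = ndimage.laplace(img.astype(float))     # Laplacian
    s = np.full(img.shape, "0", dtype="<U1")
    s[(grad >= T) & (lap >= 0)] = "+"            # dark side of an edge
    s[(grad >= T) & (lap < 0)] = "-"             # light side of an edge
    return s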
Part – C (Problem Solving and Critical Thinking Questions)
1 Describe at least three different morphological set Understand CO 5 ACS511.05
operations (except erosion and dilation). What in the
image disappears when it is eroded and dilated,
respectively
Answer: opening, closing and the hit-or-miss transform
(explanations in Part B). Under erosion, small objects and
thin protrusions disappear and object boundaries shrink;
under dilation, small holes and narrow gaps disappear as
boundaries grow.
2 Consider two structuring elements s1 and s2, where s1 Analyze CO 5 ACS511.05
is a disc of radius r and s2 is a circle with radius r. The
center of the disc and circle respectively is the origin.
Will dilation and erosion using s1 or s2 yield the same
results with any set? Justify your answers.
In general, no: dilating a single point by the disc s1 gives
a filled disc, while dilating it by the circle s2 gives only a
ring; erosion by s1 requires the whole disc to fit inside the
set, whereas erosion by s2 only requires the ring to fit.
As an illustration of erosion by a disc:
The hole in the middle of the image increases in size as
the border shrinks. Note that the shape of the region has
been quite well preserved due to the use of a disk shaped
structuring element. In general, erosion using a disk
shaped structuring element will tend to round concave
boundaries, but will preserve the shape of convex
boundaries.
3 One category of image segmentation is referred to as Remember CO 5 ACS511.05
edge-based segmentation. Describe how the first and
second order derivatives can be used to detect edges,
how they differ from each other, how they are affected
by noise, and which filter masks can be used.
See Part B, answer 10.
4 Explain region growing by pixel aggregation for image Remember CO 5 ACS511.05
segmentation.
Prepared by:
Ms. S J Sowjanya, Assistant Professor

HOD, CSE