IMG Board Exam Suggestion Solve
Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and
N columns.
The values of the coordinates (x, y) now become discrete quantities. We use integer values
for these discrete coordinates. Thus, the values of the coordinates at the origin are (x, y) =
(0, 0).
The next coordinate values along the first row of the image are represented as (x, y) = (0,1).
3) Explain the structure of the human eye.
Ans:-
▪ Sclera: It is the outer covering, a protective tough white layer called the sclera (white
part of the eye).
▪ Cornea: The front transparent part of the sclera is called the cornea. Light enters the
eye through the cornea.
▪ Iris: A dark, muscular, ring-like tissue behind the cornea is known as the iris. The
colour of the iris determines the colour of the eye. The iris regulates the amount of
light entering the eye by adjusting the size of the pupil.
▪ Pupil: A small opening in the iris is known as the pupil. Its size is controlled by the
iris. It controls the amount of light that enters the eye.
▪ Lens: Behind the pupil there is a transparent structure called the lens. By the action of
ciliary muscles, it changes its shape to focus light on the retina: it becomes thinner to
focus distant objects and thicker to focus nearby objects.
▪ Retina: It is a light-sensitive layer that consists of numerous nerve cells. It converts
images formed by the lens into electrical impulses. These electrical impulses are then
transmitted to the brain through the optic nerve.
▪ Photoreceptors: The retina contains two types of light-sensitive nerve cells: cones and rods.
1. Cones: Cones are the nerve cells that are more sensitive to bright light. They help in
detailed central and colour vision.
2. Rods: Rods are the nerve cells that are more sensitive to dim light. They help in
peripheral vision.
▪ To create a digital image, we need to convert the continuous sensed data into digital
form.
This involves two processes:
• Sampling
• Quantization
▪ Fig (a) shows a continuous image f(x, y) that is to be converted to digital form. An image
may be continuous with respect to the x- and y-coordinates, and also in amplitude.
To convert it to digital form, we have to sample the function in both coordinates and in
amplitude.
Digitizing the coordinate values is called Sampling. Digitizing the amplitude values is called
Quantization.
(b) A scan line from A to B in the continuous image, used to illustrate the concepts of
sampling & quantization.
(c) Sampling and quantization (d) Digital scan line.
▪ Sampling
The one-dimensional function shown in fig (b) is a plot of amplitude (gray level) values of
the continuous image along the line segment AB in fig (a).
To sample this function, we take equally spaced samples along line AB, as shown in fig (c).
The location of each sample is given by a vertical tick mark in the bottom part of the figure.
The samples are shown as small white squares superimposed on the function. The set of
these discrete locations gives the sampled function. However, the values of the samples still
span (vertically) a continuous range of gray-level values.
▪ Quantization
In order to form a digital function, the gray-level values also must be converted (quantized)
into discrete quantities.
The right side of fig (c) shows the gray-level scale divided into eight discrete levels, ranging
from black to white.
The vertical tick marks indicate the specific value assigned to each of the eight gray levels.
The continuous gray levels are quantized simply by assigning one of the eight discrete gray
levels to each sample.
The assignment is made depending on the vertical proximity of a sample to a vertical tick
mark. The digital samples resulting from both sampling and quantization are shown in fig
(d).
Starting at the top of the image and carrying out this procedure line by line produces a two-
dimensional digital image.
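The sampling-then-quantization procedure above can be sketched in Python (a minimal illustration, assuming NumPy and a made-up continuous function standing in for the gray-level profile along the scan line AB):

```python
import numpy as np

# A sketch of sampling and quantization. The function
# f(t) = 0.5 + 0.5*sin(2*pi*t) is an assumed stand-in for the
# continuous gray-level profile along the scan line AB.

def sample_and_quantize(f, n_samples=16, n_levels=8):
    # Sampling: evaluate f at equally spaced locations along [0, 1].
    t = np.linspace(0.0, 1.0, n_samples)
    samples = f(t)                      # still continuous in amplitude
    # Quantization: map each sample to the nearest of n_levels
    # discrete gray levels spanning [0, 1].
    levels = np.linspace(0.0, 1.0, n_levels)
    idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    return levels[idx]                  # the digital scan line

digital = sample_and_quantize(lambda t: 0.5 + 0.5 * np.sin(2 * np.pi * t))
print(digital)   # every value is one of the 8 allowed gray levels
```

Repeating this line by line over the whole image yields the two-dimensional digital image described above.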
7) Define the following terms with necessary figures :-
(i) Adjacency (ii) Connectivity (iii) Region (iv) Boundary (v) Mask (vi) Path
Ans:- (i) Adjacency
(ii) Connectivity : Connectivity refers to the way in which we define an object. For
example, after we have segmented an image, which segments should be connected to form
an object? Or, at a lower level, when searching an image for homogeneous regions, how do
we define which pixels are connected?
▪ A pixel has 8 possible neighbors :
i. Two horizontal neighbors
ii. Two vertical neighbors
iii. Four diagonal neighbors
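The three neighbor sets can be sketched as follows (a minimal illustration; the function names N4/ND/N8 follow the usual textbook notation, and out-of-image coordinates would normally be discarded by the caller):

```python
# Neighborhoods of a pixel p = (x, y):
# N4 = horizontal + vertical neighbors, ND = diagonal neighbors,
# N8 = N4 united with ND (all 8 possible neighbors).

def n4(x, y):
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    return n4(x, y) + nd(x, y)

print(len(n8(2, 2)))  # 8
```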
(iv) Mask : A mask is a filter. The concept of masking is also known as spatial filtering,
in which the filtering operation is performed directly on the image.
A sample mask has been shown below
-1 0 1
-1 0 1
-1 0 1
13) What is masking? Discuss the mechanism of a linear spatial filter for smoothing an
image.
Or, Calculate the response of a linear spatial filter and derive the expression for its
smoothing.
Ans:- Masking: Masking is an image processing method in which we define a small ‘image
piece’ and use it to modify a large image. Masking is the process that is underneath many
types of image processing, including edge detection, motion detection, and noise reduction.
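The mechanism of linear spatial filtering can be sketched as follows (a minimal illustration, assuming NumPy; the response at each pixel is the sum of the mask coefficients multiplied by the pixels under the mask, and a 3x3 averaging mask gives smoothing):

```python
import numpy as np

# Linear spatial filtering: slide a mask over the image and, at each
# position, compute the sum of products of mask weights and pixels.
# A 3x3 averaging (box) mask with all weights 1/9 smooths the image.

def linear_filter(image, mask):
    m, n = mask.shape
    pad_y, pad_x = m // 2, n // 2
    padded = np.pad(image, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            region = padded[y:y + m, x:x + n]
            out[y, x] = np.sum(region * mask)   # response of the filter
    return out

avg_mask = np.full((3, 3), 1.0 / 9.0)           # smoothing mask
img = np.array([[10, 10, 10], [10, 100, 10], [10, 10, 10]], dtype=float)
smoothed = linear_filter(img, avg_mask)
print(smoothed[1, 1])   # the spike 100 is averaged toward its neighbors: 20.0
```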
14) Write down the steps for filtering in frequency domain.
Ans:-
1. Given an input image f(x, y) of size M x N, pad it to size P x Q (typically P = 2M, Q = 2N).
2. Multiply the padded image by (-1)^(x+y) to center its transform.
3. Compute the DFT, F(u, v), of the image from step 2.
4. Multiply F(u, v) by a filter function H(u, v).
5. Compute the inverse DFT of the result and take its real part.
6. Multiply the result by (-1)^(x+y) to undo the centering.
7. Crop the result to the original size M x N to obtain the filtered image.
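Frequency-domain filtering can be sketched in Python as follows (a minimal illustration, assuming NumPy and a Gaussian low-pass transfer function H(u, v); `np.fft.fftshift` plays the role of the (-1)^(x+y) centering trick, and padding is omitted for brevity):

```python
import numpy as np

# Frequency-domain filtering: transform, multiply by H(u, v), invert.

def freq_filter(image, d0=10.0):
    M, N = image.shape
    # 1. DFT of the image, with the spectrum shifted so that the
    #    zero frequency sits at the center of the array.
    F = np.fft.fftshift(np.fft.fft2(image))
    # 2. Build H(u, v) from the distance D(u, v) to the center.
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = np.exp(-(D ** 2) / (2 * d0 ** 2))       # Gaussian low pass
    # 3. Multiply element-wise, undo the shift, invert, keep real part.
    G = F * H
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
out = freq_filter(img)
print(out.shape)   # same size as the input, but smoothed
```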
15) Why is filtering necessary in image processing? Describe the high pass and low pass
filters in image processing.
Ans:-
Necessity of filtering in image processing : Filtering is a technique for modifying or enhancing an
image. For example, you can filter an image to emphasize certain features or remove other
features. Image processing operations implemented with filtering include smoothing,
sharpening, and edge enhancement.
Image filtering is useful for many applications, including smoothing, sharpening, removing
noise, and edge detection. A filter is defined by a kernel, which is a small array applied to
each pixel and its neighbors within an image. In most applications, the center of the kernel
is aligned with the current pixel, and is a square with an odd number (3, 5, 7, etc.) of
elements in each dimension. The process used to apply filters to an image is known as
convolution, and may be applied in either the spatial or frequency domain.
The ideal low pass filter passes all frequencies within a circle of radius D0 from the center
of the frequency rectangle and cuts off all frequencies outside it:
H(u, v) = 1 if D(u, v) <= D0, and H(u, v) = 0 if D(u, v) > D0.
The ideal high pass filter is its complement:
H(u, v) = 0 if D(u, v) <= D0, and H(u, v) = 1 if D(u, v) > D0,
where D(u, v) is the distance from point (u, v) to the center of the frequency rectangle.
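The ideal low-pass and high-pass transfer functions can be sketched as follows (a minimal illustration, assuming NumPy and a centered frequency grid; D0 is the cutoff distance):

```python
import numpy as np

# Ideal low-pass and high-pass filter masks on a centered grid.

def ideal_filters(M, N, d0):
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance to center
    H_lp = (D <= d0).astype(float)   # low pass: keep low frequencies
    H_hp = 1.0 - H_lp                # high pass: the complement
    return H_lp, H_hp

H_lp, H_hp = ideal_filters(8, 8, d0=2)
print(H_lp[4, 4], H_hp[4, 4])   # at the center (lowest frequency): 1.0 0.0
```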
18) What is white noise? Mention the spatial and frequency properties of a noise.
Or, Write down spatial and frequency properties of noise.
Ans:- White Noise: When the Fourier spectrum of noise is constant, the noise is usually
called White Noise. The term is a carry-over from the physical properties of white light,
which contains nearly all frequencies in the visible spectrum in equal proportions.
Spatial and Frequency Properties of Noise -
▪ White Noise:
✓ When the Fourier spectrum of noise is constant, the noise is usually called White
Noise.
✓ The term is a carry-over from the physical properties of white light, which contains
nearly all frequencies in the visible spectrum in equal proportions.
▪ If two signals are similar, they are correlated.
▪ If two different signals contain only noise, there is no correlation between them.
▪ In the spatial domain, two noisy signals cannot be correlated.
19) Explain the probability distribution function of different noise models.
Ans:- PDF (Probability Density Function) : A PDF is a function of a continuous random
variable whose integral across an interval gives the probability that the value of the
variable lies within that interval.
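Two common noise models can be sketched as follows (a minimal illustration, assuming NumPy and an 8-bit image: Gaussian noise with a bell-shaped PDF, and salt-and-pepper noise, whose PDF is an impulse at 0 and 255):

```python
import numpy as np

# Gaussian noise and salt-and-pepper (impulse) noise on a flat image.

rng = np.random.default_rng(42)

def add_gaussian_noise(image, mean=0.0, sigma=10.0):
    noisy = image + rng.normal(mean, sigma, image.shape)
    return np.clip(noisy, 0, 255)      # keep the 8-bit range

def add_salt_and_pepper(image, p=0.05):
    noisy = image.copy()
    r = rng.uniform(size=image.shape)
    noisy[r < p / 2] = 0               # pepper: black impulses
    noisy[r > 1 - p / 2] = 255         # salt: white impulses
    return noisy

img = np.full((100, 100), 128.0)
g = add_gaussian_noise(img)            # mean stays near 128
sp = add_salt_and_pepper(img)          # only the values 0, 128, 255 occur
```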
Color Fundamentals
Color is a powerful descriptor that often simplifies object identification and extraction
from a scene. Color perception is a psychological effect in human beings. Color image
processing is divided into two major areas.
i) Full color processing
✓ process in RGB Model.
ii) Pseudo-color processing (Or False color processing)
✓ Processes gray-scale images by assigning colors to their intensities.
RGB scheme:
✓ has a wider range of colors
✓ file formats: JPEG, PNG, GIF etc.
CMYK scheme:
✓ has a lesser range of colors than RGB
✓ file formats: PDF, EPS etc.
24) Describe the algorithms for converting a color image to a grayscale image and then to
a binary image.
Or, Describe the gray level to color conversion process.
Ans:- Gray level to color conversion process:
There are three methods to convert a color image into a grayscale image. The methods
are:
▪ The lightness method
▪ Average method
▪ Weighted method or luminosity method
▪ The lightness method : The lightness method averages the most prominent and least
prominent colors: (max(R, G, B) + min(R, G, B)) / 2.
▪ Average method : The average method is the simplest one. You just take the average of
the three colors. Since it is an RGB image, you add R, G and B and then divide by 3 to get
the desired grayscale value:
Grayscale = (R + G + B) / 3
▪ Weighted method or luminosity method: The luminosity method is a more
sophisticated version of the average method. It also averages the values, but it forms a
weighted average to account for human perception. We’re more sensitive to green than
other colors, so green is weighted most heavily. The formula for luminosity is 0.21 R +
0.72 G + 0.07 B.
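The three methods above can be sketched for a single pixel (a minimal illustration; the pixel (R, G, B) = (200, 100, 50) is an arbitrary example value):

```python
# The three RGB-to-grayscale conversion methods described above.

def lightness(r, g, b):
    return (max(r, g, b) + min(r, g, b)) / 2

def average(r, g, b):
    return (r + g + b) / 3

def luminosity(r, g, b):
    return 0.21 * r + 0.72 * g + 0.07 * b

print(lightness(200, 100, 50))   # (200 + 50) / 2 = 125.0
print(average(200, 100, 50))     # 350 / 3, about 116.67
print(luminosity(200, 100, 50))  # 42 + 72 + 3.5 = 117.5
```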
Converting a grayscale image to binary image using Thresholding :
Thresholding is the simplest method of image segmentation and the most common way to
convert a grayscale image to a binary image.
In thresholding, we select a threshold value T; every gray-level value below T is classified
as 0 (black, i.e. background), and every gray-level value equal to or greater than T is
classified as 1 (white, i.e. foreground):
g(x, y) = 1 if f(x, y) >= T, and g(x, y) = 0 otherwise.
Here g(x, y) represents the thresholded image pixel at (x, y) and f(x, y) represents the
grayscale image pixel at (x, y).
Algorithm:
1. Read the target image into the MATLAB environment.
2. Convert it to a grayscale image if the image read is an RGB image.
3. Calculate a threshold value, T
4. Create a new Image Array (say ‘binary’) with the same number of rows and columns
as original image array, containing all elements as 0 (zero).
5. Assign 1 to binary(i, j), if gray level pixel at (i, j) is greater than or equal to the
threshold value, T ; else assign 0 to binary(i, j).
Do the same for all gray level pixels.
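The algorithm above can be sketched in Python rather than MATLAB (a minimal illustration, assuming NumPy; choosing the mean gray level as the threshold T is an assumption for the example, not part of the original algorithm):

```python
import numpy as np

# Thresholding a grayscale image to a binary image, following the
# steps above: choose T, create an all-zero array, set 1 where >= T.

def to_binary(gray, T=None):
    if T is None:
        T = gray.mean()                 # step 3: one simple choice of T
    binary = np.zeros_like(gray)        # step 4: all-zero image array
    binary[gray >= T] = 1               # step 5: 1 where pixel >= T
    return binary

gray = np.array([[10, 200], [90, 250]], dtype=float)
print(to_binary(gray))   # T = 137.5, so the result is [[0. 1.] [0. 1.]]
```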
25) What is chromatic and achromatic light? Describe the light and electromagnetic
spectrum with necessary figure.
Ans:-
Achromatic Light :
✓ Achromatic Light is what viewers see on a black and white television set.
✓ If the light is achromatic (void of color), its only attribute is its intensity, or
amount, which ranges from black, through grays, to white.
Chromatic Light :
✓ Chromatic Light spans the electromagnetic spectrum from approximately 400 to 700
nm.
✓ Three basic quantities are used to describe the quality of a chromatic light source:
i) Radiance
ii) Luminance
iii) Brightness
▪ Radiance :
✓ Radiance is the total amount of energy that flows from the light source.
✓ It is measured in watts (W).
▪ Luminance
✓ Luminance is a measure of the amount of energy an observer perceives from a light
source.
✓ Luminance is measured in lumens (lm).
▪ Brightness
✓ Brightness is the psychological sensation of light intensity.
✓ It is a subjective descriptor that is practically impossible to measure.
The visible portion of the electromagnetic spectrum is called light. It occurs between
wavelengths of approximately 400 to 700 nm (nanometers).
29) If R= 200, G = 100, B =50 then what will be the value of L and H in HSL color model?
Ans:-
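As a sketch of the computation, Python's standard `colorsys` module converts normalized RGB to HLS (hue in degrees, lightness and saturation as fractions). For R = 200, G = 100, B = 50 this gives H = 20°, L = 125/255 (which is (max + min)/2 = (200 + 50)/2 on the 0–255 scale), and S = 0.6:

```python
import colorsys

# colorsys.rgb_to_hls expects RGB in [0, 1] and returns (h, l, s),
# each also in [0, 1]; multiply h by 360 for degrees.
h, l, s = colorsys.rgb_to_hls(200 / 255, 100 / 255, 50 / 255)

print(round(h * 360))   # 20 degrees
print(round(l * 255))   # 125, i.e. (200 + 50) / 2
print(round(s, 2))      # 0.6
```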
30) A 5-bit grayscale image of resolution 7x6 is shown below. Draw its histogram and CDF.
Also comment on the quality of the image :-
0 15 15 10 10 10 0
20 15 15 20 30 5 0
25 0 20 20 20 30 30
30 0 20 20 20 10 25
5 0 0 0 0 5 5
25 10 0 5 20 20 0
Ans:-
1.5 set – (31-41)
31) What is edge detection? How can you detect edge by first order and second order
derivatives?
Or, What is the importance of first order and second order derivatives for detecting an
edge?
Ans:-
32) Describe the edge detection techniques of the Sobel and Laplacian operators.
Ans:-
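The Sobel (first-derivative) and Laplacian (second-derivative) masks and their responses can be sketched as follows (a minimal illustration, assuming NumPy and a synthetic vertical step edge):

```python
import numpy as np

# Standard Sobel and Laplacian masks.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])

def response(image, mask):
    # Sum-of-products of the 3x3 mask over every interior pixel.
    out = np.zeros((image.shape[0] - 2, image.shape[1] - 2))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + 3, x:x + 3] * mask)
    return out

# A vertical step edge: dark (0) on the left, bright (10) on the right.
img = np.array([[0, 0, 10, 10]] * 4, dtype=float)
print(response(img, SOBEL_X))    # strong positive response at the edge
print(response(img, LAPLACIAN))  # +10 then -10: a zero crossing at the edge
```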
33) What do you mean by edge detection? How can you detect lines in an image?
Or, Write down line detection algorithms for vertical, horizontal, +45° and -45° lines.
Ans:-
Edge detection: An Edge is a set of connected pixels that lie on the boundary between two
regions.
34) Discuss the criteria & method/procedures of edge linking and boundary.
Ans:- Edge Linking: Edge linking is the process of forming an ordered list of edges from an
unordered list. By convention, edges are ordered by traversal in a clockwise direction.
In general, edge linking methods can be classified into two categories:
▪ Local Edge Linkers
-- where edge points are grouped to form edges by considering each point's relationship to
any neighboring edge points.
▪ Global Edge Linkers
-- where all edge points in the image plane are considered at the same time and sets of
edge points are sought according to some similarity constraint, such as points which share
the same edge equation.
Boundary : The set of pixels produced by edge-detection algorithms seldom defines a
boundary completely, because of noise, breaks in the boundary, etc. Therefore, edge-detection
algorithms are typically followed by linking and other detection procedures designed to
assemble edge pixels into meaningful boundaries. Linking methods are of 2 types: local and global.
Local Processing: Analyze the characteristics of pixels in a small neighborhood (3x3, or 5x5)
about every point that has undergone edge detection. All points that are similar are linked,
forming a boundary of pixels that share some common properties.
2 principal properties for establishing similarity of edge pixels:-
▪ strength of the response of the gradient operator used to produce the edge pixel
▪ direction of the gradient.
A point in the predefined neighborhood of (x,y) is linked to the pixel at (x,y) if both
magnitude and direction criteria are satisfied. This process is repeated for every location in
the image.
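The local similarity test can be sketched as follows (a minimal illustration; the magnitude and angle thresholds are made-up example values, not prescribed by the method):

```python
# Local edge linking: a neighboring edge pixel is linked to the pixel
# at (x, y) if both its gradient magnitude and its gradient direction
# are close enough to those at (x, y).

def similar(mag1, ang1, mag2, ang2, mag_thresh=25.0, ang_thresh=15.0):
    return (abs(mag1 - mag2) <= mag_thresh
            and abs(ang1 - ang2) <= ang_thresh)

print(similar(100.0, 90.0, 110.0, 95.0))   # True: both criteria satisfied
print(similar(100.0, 90.0, 110.0, 140.0))  # False: directions differ too much
```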
Method: Region based segmentation can be carried out in four different ways:
1) Region Growing
2) Region Splitting
3) Region merging
4) Split and merge
36) Define psycho-visual redundancy. Describe inter-pixel redundancy and variable length coding.
Ans:-
Psycho-visual redundancy: The brightness of a region, as perceived by the eye, depends on
factors other than simply the light reflected by the region; e.g., intensity variations (Mach
bands) can be perceived in an area of constant intensity.
Inter-Pixel Redundancy
▪ Inter-pixel redundancy is due to the correlation between the neighboring pixels in an
image.
▪ The value of any given pixel can be predicted from the values of its neighbors (highly
correlated).
▪ The information carried by individual pixel is relatively small.
▪ To reduce inter-pixel redundancy the difference between adjacent pixels can be used
to represent an image.
Variable-length Coding
▪ The coding redundancy can be minimized by using a variable-length coding method
where the shortest codes are assigned to most probable gray levels.
▪ The most popular variable-length coding method is the Huffman Coding. Huffman
Coding: The Huffman coding involves the following 2 steps -
1) Create a series of source reductions by ordering the probabilities of the symbols and
combining the two lowest-probability symbols into a single symbol that replaces them in
the next source reduction.
2) Code each reduced source, starting with the smallest source and working back to the
original source.
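Huffman coding can be sketched as follows (a minimal illustration using Python's standard `heapq`; the symbol string "aaaabbbccd" is a made-up example source):

```python
import heapq
from collections import Counter

# Huffman coding: repeatedly merge the two least probable symbol
# groups; the more probable symbols end up with shorter codes.

def huffman_codes(freqs):
    # each heap entry: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two lowest-probability groups
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes(Counter("aaaabbbccd"))   # a:4 b:3 c:2 d:1
print(codes)   # the most frequent symbol 'a' gets the shortest code
```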
37) Discuss the properties of Z-transformation.
Ans:-
38) Discuss the properties of 2D discrete Fourier transform and prove the circular
convolution theorem.
Ans:-
39) Define and derive the discrete Fourier Transformation.
Ans:-
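The 1-D DFT can be sketched straight from its definition, F(u) = Σ f(x) · exp(-j2πux/N) for u = 0, …, N-1, and checked against NumPy's FFT (a minimal illustration with a made-up 4-sample signal):

```python
import numpy as np

# Naive DFT from the definition, verified against np.fft.fft.

def dft(f):
    N = len(f)
    x = np.arange(N)
    u = x.reshape(-1, 1)                 # one row of the sum per u
    return np.sum(f * np.exp(-2j * np.pi * u * x / N), axis=1)

f = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft(f), np.fft.fft(f)))   # True
```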
40) What is Image compression? Describe a general image compression system model.
Or, Draw an image compression system explain how it works.
Ans:- Image compression : Image compression addresses the problem of reducing the
amount of data required to represent a digital image.
The image compression model is explained below with a diagram:
The figure shows that an image compression system is composed of two distinct functional
components: an encoder and a decoder. The encoder performs compression, and the
decoder performs the complementary operation of decompression.
The input image f(x, y) is fed into the encoder, which creates a compressed representation of
the input. This representation is stored for later use, or transmitted for storage and use at a
remote location. When the compressed representation is presented to its complementary
decoder, a reconstructed output image f^(x, y) is generated. In general, f^(x, y) may or may
not be an exact replica of f(x, y). If it is, the compression system is called error free,
lossless, or information preserving. If not, the reconstructed output image is distorted and
the compression system is referred to as lossy.
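The encoder/decoder pair can be sketched with a toy run-length encoder, one simple lossless scheme (a minimal illustration, not the general model's mapper/quantizer/symbol-coder stages); decoding exactly restores the input, so the system is error free:

```python
# A toy lossless compression system: run-length encode a scan line
# into (value, run-length) pairs, then decode it back exactly.

def encode(pixels):
    runs, prev, count = [], pixels[0], 0
    for p in pixels:
        if p == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = p, 1
    runs.append((prev, count))
    return runs

def decode(runs):
    return [value for value, count in runs for _ in range(count)]

line = [0, 0, 0, 255, 255, 0, 0, 0, 0]
compressed = encode(line)
print(compressed)                  # [(0, 3), (255, 2), (0, 4)]
print(decode(compressed) == line)  # True: the reconstruction is exact
```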