
IMG Board Exam

Suggested Solutions

4.5 set – (1-30)


▪ Theory
1) Define Digital image processing. Write down the application areas of digital image
processing.
Ans:- Digital image processing: Processing digital images by means of a digital computer is referred to as digital image processing.

Application areas of Digital Image Processing


i) Gamma Ray Imaging
ii) X-ray Imaging
iii) Imaging in the Ultraviolet Band
iv) Imaging in the visible and Infrared Bands
v) Imaging in the Microwave Band
vi) Imaging in the Radio Band
2) How are digital images represented? Explain in brief.
Or, Explain the representation of a digital image.
Ans:- Representing Digital Image

Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and
N columns.
The values of the coordinates (x, y) now become discrete quantities. We use integer values
for these discrete coordinates. Thus, the values of the coordinates at the origin are (x, y) =
(0, 0).
The next coordinate values along the first row of the image are represented as (x, y) = (0,1).
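As a concrete illustration (not part of the original answer), a digital image of M rows and N columns can be held as an M × N matrix whose element at (x, y) is the sampled intensity; a minimal NumPy sketch with a made-up 3 × 4 array:

```python
import numpy as np

# A hypothetical 3x4 (M = 3 rows, N = 4 columns) 8-bit grayscale image.
# Element f[x, y] holds the intensity sampled at discrete coordinates (x, y);
# the origin f[0, 0] is the top-left sample of the image.
f = np.array([[ 12,  50,  50,  30],
              [200, 255,  90,  60],
              [  0,  10, 100, 255]], dtype=np.uint8)

M, N = f.shape
print(f"M = {M} rows, N = {N} columns")
print("value at the origin (x, y) = (0, 0):", f[0, 0])
print("next sample along the first row (x, y) = (0, 1):", f[0, 1])
```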
3) Explain the structure of human eye.
Ans:-
▪ Sclera: It is the outer covering, a protective tough white layer called the sclera (white
part of the eye).
▪ Cornea: The front transparent part of the sclera is called cornea, Light enters the eye
through the cornea.
▪ Iris: A dark muscular tissue and ring-like structure behind the cornea is known as the
iris. The colour of the iris indicates the colour of the eye. The iris also helps
regulate exposure by adjusting the size of the pupil.
▪ Pupil: A small opening in the iris is known as the pupil. Its size is controlled by the
iris. It controls the amount of light that enters the eye.
▪ Lens: Behind the pupil, there is a transparent structure called the lens. By the action of
the ciliary muscles, it changes its shape to focus light on the retina. It becomes thinner to
focus on distant objects and thicker to focus on nearby objects.
▪ Retina: It is a light-sensitive layer that consists of numerous nerve cells. It converts
images formed by the lens into electrical impulses. These electrical impulses are then
transmitted to the brain through the optic nerve.
▪ Photoreceptors: The retina contains two types of photoreceptor cells: cones and rods.
1. Cones: Cones are the cells that are more sensitive to bright light. They provide
detailed central and colour vision.
2. Rods: Rods are the cells that are more sensitive to dim light. They provide
peripheral vision.

4) What are the differences between photopic and scotopic vision?


Ans:-
▪ Element: Photopic vision is associated with cones; scotopic vision is associated with rods.
▪ Name: Photopic vision is known as bright-light vision; scotopic vision is known as dim-light vision.
▪ Domain: Daytime is the domain of photopic vision; night-time is the domain of scotopic vision.
▪ Eye: Photopic vision is the vision of the eye under well-lit conditions; scotopic vision is the vision of the eye under low-light conditions.
▪ Adaptation level: Photopic vision covers adaptation levels of 3 cd/m² and higher; scotopic vision operates at adaptation levels below 0.01 cd/m².

5) What is image enhancement? Briefly explain image transformation functions.


Ans:-
6) Explain image sampling and quantization.
Ans:- Image sampling and Quantization

▪ To create a digital image, we need to convert the continuous sensed data into digital
form.
This involves two processes:
• Sampling
• Quantization

▪ Fig (a) shows a continuous image f(x, y) that is to be converted to digital form. An image may
be continuous with respect to the x- and y-coordinates, and also in amplitude.
To convert it to digital form, we have to sample the function in both coordinates and in
amplitude.
Digitizing the coordinate values is called sampling; digitizing the amplitude values is called
quantization.
(b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling
and quantization.
(c) Sampling and quantization. (d) Digital scan line.

▪ Sampling
The one-dimensional function shown in fig (b) is a plot of amplitude (gray-level) values of
the continuous image along the line segment AB in fig (a).
To sample this function, we take equally spaced samples along line AB, as shown in fig (c).
The location of each sample is given by a vertical tick mark in the bottom part of the figure.
The samples are shown as small white squares superimposed on the function. The set of
these discrete locations gives the sampled function. However, the values of the samples still
span (vertically) a continuous range of gray-level values.

▪ Quantization
In order to form a digital function, the gray-level values also must be converted (quantized)
into discrete quantities.
The right side of fig (c) shows the gray levels divided into eight discrete levels, ranging from
black to white.
The vertical tick marks indicate the specific value assigned to each of the eight gray levels.
The continuous gray levels are quantized simply by assigning one of the eight discrete gray
levels to each sample.
The assignment is made depending on the vertical proximity of a sample to a vertical tick
mark. The digital samples resulting from both sampling and quantization are shown in fig
(d).
Starting at the top of the image and carrying out this procedure line by line produces a two-
dimensional digital image.
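A minimal sketch of the two steps, modeling the continuous scan line from A to B with an arbitrary smooth function (the function and the number of samples are illustrative assumptions, not taken from the figure):

```python
import numpy as np

# Continuous gray-level profile along the scan line, modeled by a smooth function.
def f(t):
    return 0.5 + 0.4 * np.sin(2 * np.pi * t)   # amplitude stays within [0.1, 0.9]

# Sampling: digitize the coordinate values by taking equally spaced samples.
num_samples = 16
t = np.linspace(0.0, 1.0, num_samples)
samples = f(t)                                 # amplitudes are still continuous

# Quantization: digitize the amplitude values by assigning each sample
# to the nearest of eight discrete gray levels {0, 1, ..., 7}.
levels = 8
quantized = np.round(samples * (levels - 1)).astype(int)

for ti, s, q in zip(t, samples, quantized):
    print(f"t = {ti:.2f}   sampled = {s:.3f}   quantized level = {q}")
```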
7) Define the following terms with necessary figures :-
(i) Adjacency (ii) Connectivity (iii) Region (iv) Boundary (v) Mask (vi) Path
Ans:- (i) Adjacency: Let V be the set of intensity values used to define adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p), and 8-adjacent if q is in the set N8(p). They are m-adjacent (mixed adjacency) if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
(ii) Connectivity: Connectivity refers to the way in which we define an object. For
example, after we have segmented an image, which segments should be connected to form
an object? Or, at a lower level, when searching an image for homogeneous regions, how do we
define which pixels are connected?
▪ Pixel has 8 possible neighbors :
i. Two horizontal neighbors
ii. Two vertical neighbors
iii. Four diagonal neighbors

▪ Connectivity can be defined in 3 different ways:


i. Four-connectivity
ii. Eight-connectivity
iii. Mixed (m-) connectivity

(v) Mask: A mask is a filter; the concept of masking is also known as spatial filtering. In
this concept we deal with a filtering operation that is performed directly on the image.
A sample 3 × 3 mask is shown below:

-1 0 1

-1 0 1

-1 0 1

8) What do you mean by image restoration? Describe the image degradation/restoration process.
Ans:-
Image Restoration
▪ To reconstruct or recover an image that has been degraded by using a priori knowledge
of the degradation phenomenon, is known as Image Restoration.
▪ Restoration techniques are oriented toward modeling the degradation and applying the
inverse process in order to recover the original image.
▪ Image Restoration approaches fall into two broad categories.
i) Spatial Domain
ii) Frequency Domain
9) Explain the following terms as applicable to image processing, with necessary graphs:
(i) Cones & Rods
(ii) Brightness adaptation
(iii) Isopreference
Ans:- (i) Rods :
10) What do you mean by histogram processing? Explain histogram equalization.
Ans:-
11) What are the formulas for the negative and log transformations?
Ans:-

Negative transformation: The second linear transformation is the negative transformation,
which is the inverse of the identity transformation. In a negative transformation, each value of
the input image is subtracted from L − 1 and mapped onto the output image, i.e. the following
transformation is applied:
s = (L − 1) − r
Log transformation: The log transformation is defined by the formula
s = c log(r + 1),
where s and r are the pixel values of the output and input images and c is a constant.
The value 1 is added to each pixel value of the input image because if there is a pixel
intensity of 0 in the image, then log(0) is undefined; adding 1 makes the minimum value at
least 1.
During log transformation, the dark pixels in an image are expanded compared with the
higher pixel values, and the higher pixel values are compressed. This results in image
enhancement.
The value of c in the log transform adjusts the kind of enhancement you are looking for.
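A short sketch applying both point transformations to an 8-bit image (L = 256); the pixel values below are made up, and c is chosen so the maximum input maps to L − 1, a common convention:

```python
import numpy as np

L = 256
# Hypothetical 8-bit grayscale pixel values.
r = np.array([[0, 64, 128],
              [192, 255, 30]], dtype=np.float64)

# Negative transformation: s = (L - 1) - r
negative = (L - 1) - r

# Log transformation: s = c * log(1 + r); this c maps r = 255 to s = 255,
# expanding dark values and compressing bright ones.
c = (L - 1) / np.log(L)
log_transformed = c * np.log(1 + r)

print(negative.astype(np.uint8))
print(log_transformed.astype(np.uint8))
```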
12) Define Euclidean distance, D4 distance and D8 distance.
Ans:- For pixels p = (x, y) and q = (s, t):
▪ Euclidean distance: De(p, q) = [(x − s)² + (y − t)²]^(1/2), the straight-line distance between p and q.
▪ D4 (city-block) distance: D4(p, q) = |x − s| + |y − t|; the pixels with D4 ≤ r from (x, y) form a diamond centred on (x, y).
▪ D8 (chessboard) distance: D8(p, q) = max(|x − s|, |y − t|); the pixels with D8 ≤ r form a square centred on (x, y).

13) What is masking? Discuss the mechanism of linear spatial filtering for smoothing an
image.
Or, Calculate the response of a linear spatial filter and derive the expression for its
smoothing.
Ans:- Masking: Masking is an image processing method in which we define a small "image
piece" and use it to modify a larger image. Masking is the process that underlies many
types of image processing, including edge detection, motion detection, and noise reduction.
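The mechanism in question is the linear filter response g(x, y) = Σs Σt w(s, t) f(x + s, y + t): the mask w is slid over the image and a weighted sum of each neighborhood is computed. A minimal sketch with a 3 × 3 averaging (smoothing) mask; zero-padding at the borders is an assumption (other border policies exist):

```python
import numpy as np

def spatial_filter(f, w):
    """Linear spatial filtering: the response at each pixel is
    g(x, y) = sum_s sum_t w(s, t) * f(x + s, y + t), with zero-padded borders."""
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))   # zero padding
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return g

# 3x3 box (averaging) mask: every coefficient is 1/9, so each response is
# the mean of the 3x3 neighborhood -- this is what smooths the image.
w_avg = np.ones((3, 3)) / 9.0

f = np.random.default_rng(0).integers(0, 256, size=(5, 5))
print(spatial_filter(f, w_avg))
```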
14) Write down the steps for filtering in the frequency domain.
Ans:- The standard steps (following the textbook procedure) are:
1. Given an input image f(x, y) of size M × N, pad it to size P × Q, typically with P = 2M and Q = 2N.
2. Multiply the padded image by (−1)^(x+y) to center its transform.
3. Compute the DFT, F(u, v), of the image from step 2.
4. Generate a filter function H(u, v) of size P × Q and form the product H(u, v)F(u, v).
5. Compute the inverse DFT of the result and take its real part.
6. Multiply the result by (−1)^(x+y) to undo the centering.
7. Extract the M × N region from the top-left quadrant as the final filtered image.
15) Why is filtering necessary in image processing? Describe high-pass and low-pass
filters in image processing.
Ans:-
Why filtering is necessary in image processing: Filtering is a technique for modifying or
enhancing an image. For example, you can filter an image to emphasize certain features or
remove others. Image processing operations implemented with filtering include smoothing,
sharpening, and edge enhancement.
Image filtering is useful for many applications, including smoothing, sharpening, noise
removal, and edge detection. A filter is defined by a kernel, which is a small array applied to
each pixel and its neighbors within an image. In most applications, the center of the kernel
is aligned with the current pixel, and the kernel is a square with an odd number (3, 5, 7, etc.)
of elements in each dimension. The process used to apply filters to an image is known as
convolution, and it may be applied in either the spatial or the frequency domain.
The ideal high-pass filter is given as:
H(u, v) = 0 if D(u, v) ≤ D0, and H(u, v) = 1 if D(u, v) > D0,
where D0 is the cutoff distance as before and D(u, v) is the distance from the origin of the (centered) transform.

Ideal Low-Pass Filters

Simply cut off all high-frequency components that are at a distance greater than a specified distance D0 from the
origin of the transform:
H(u, v) = 1 if D(u, v) ≤ D0, and H(u, v) = 0 if D(u, v) > D0.
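A sketch of an ideal low-pass filter applied in the frequency domain, assuming a centered spectrum via fftshift and a made-up image; the ideal high-pass filter is simply 1 − H:

```python
import numpy as np

def ideal_lowpass(f, D0):
    """Ideal LPF: H(u, v) = 1 where the distance D(u, v) from the center of
    the (shifted) spectrum is <= D0, and 0 elsewhere."""
    M, N = f.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)              # distance from the spectrum center
    H = (D <= D0).astype(np.float64)      # ideal LPF; the HPF would be 1 - H
    F = np.fft.fftshift(np.fft.fft2(f))   # centered transform of the image
    g = np.fft.ifft2(np.fft.ifftshift(H * F))
    return np.real(g)

f = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
smoothed = ideal_lowpass(f, D0=10)        # keeps only low-frequency content
print(smoothed.shape, round(smoothed.min(), 1), round(smoothed.max(), 1))
```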
16) What is image enhancement? Distinguish between image enhancement and image
restoration techniques.
Ans:- Image enhancement: Image enhancement refers to the accentuation or sharpening of
image features, such as edges or contrast, to make a graphic display more useful for display
and analysis.
Distinction: Enhancement is largely a subjective process; techniques are chosen because they
produce visually pleasing results, with no model of how the image was degraded. Restoration,
in contrast, is an objective process: it uses a priori knowledge of the degradation phenomenon
and applies the inverse process to recover the original image.
17) Discuss about inverse filtering.
Ans:- Inverse Filtering: In signal processing, an inverse filter h is one such that applying g
and then h to a signal yields the original signal. Software or electronic inverse
filters are often used to compensate for the effect of unwanted environmental filtering of
signals. (A sketch of inverse filtering for image restoration follows the assumptions list below.)
There are two techniques of inverse:
▪ For the first technique, the data are recorded using a reference-quality condenser
microphone with a flat frequency response beginning at a very low frequency (even 0 Hz)
and extending up to 5 or 8 kHz. The advantage of this technique is its wide frequency
response, which facilitates a detailed representation of the glottal flow signal. Its
disadvantage lies in the fact that when this procedure is used, the DC component is not
captured.
▪ For the second technique, the airflow is registered through a face mask
(Rothenberg, 1973), which allows the recording of a DC flow component and the
calibration of the measurements in physical units. In this technique the useful
frequency response is flat (within ±3 dB) from 0 Hz to about 1000-1500 Hz,
which limits the accuracy with which the glottal pulse can be recovered.
In particular, information about the abruptness of the vocal fold closure is lost.

Assumptions of inverse filtering:


1) Speech is produced by a linear system in which a source signal is modified by a vocal
tract filter.
2) The system is stationary during an analysis interval.
3) The glottal pulse spectrum is flat.
4) The all-pole model of vocal tract characteristics is correct.
5) The estimates of the bandwidths of spectral poles are correct.
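For the image-restoration case referenced above, inverse filtering estimates the original spectrum as F̂(u, v) = G(u, v) / H(u, v), where G is the transform of the degraded image and H is the known degradation function. A minimal sketch; the Gaussian H, the test image, and the near-zero guard eps are all illustrative assumptions:

```python
import numpy as np

def inverse_filter(g, H, eps=1e-3):
    """Restore an image degraded by a known transfer function H:
    F_hat(u, v) = G(u, v) / H(u, v), guarding against division by
    near-zero values of H (where plain inverse filtering blows up)."""
    G = np.fft.fft2(g)
    H_safe = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(G / H_safe))

# Simulate a degradation: blur a made-up image with a mild Gaussian H.
rng = np.random.default_rng(2)
f = rng.integers(0, 256, size=(64, 64)).astype(float)
u = np.fft.fftfreq(64)
H = np.exp(-(u[:, None]**2 + u[None, :]**2) / (2 * 0.3**2))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))

restored = inverse_filter(g, H)
print(np.max(np.abs(restored - f)))   # tiny residual, since H is never near zero
```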

18) What is white noise? Mention the spatial and frequency properties of noise.
Or, Write down the spatial and frequency properties of noise.
Ans:- White Noise: When the Fourier spectrum of noise is constant, the noise is usually
called white noise. The term is a carry-over from the physical properties of white light, which
contains nearly all frequencies in the visible spectrum in equal proportions.
Spatial and Frequency Properties of Noise -
▪ White Noise:
✓ When the Fourier spectrum of noise is constant, the noise is usually called white
noise.
✓ The term is a carry-over from the physical properties of white light, which contains
nearly all frequencies in the visible spectrum in equal proportions.
▪ If two signals are similar, they are correlated.
▪ Noise is assumed to be uncorrelated with the image: a noisy signal and a different signal show no correlation.
▪ In the spatial domain, the noise values at different pixels are assumed to be uncorrelated.
19) Explain the probability density function of different noise models.
Ans:- PDF (Probability Density Function): A PDF is a function of a continuous random
variable whose integral across an interval gives the probability that the value of the
variable lies within that interval.

PDFs of the Noise Models (a generation sketch follows this list) -


1. Gaussian Noise
2. Rayleigh Noise
3. Erlang (Gamma) Noise
4. Exponential Noise
5. Uniform Noise
6. Impulse (Salt-and-pepper) noise.
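To make two of the listed models concrete: Gaussian noise has PDF p(z) = (1/(√(2π)σ)) e^(−(z − μ)²/(2σ²)), and impulse (salt-and-pepper) noise drives random pixels to the extreme gray levels. A brief generation sketch; the flat test image and the parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.full((64, 64), 128.0)              # flat gray test image

# Gaussian noise: samples drawn from p(z) with mean 0 and sigma = 15.
g_gauss = f + rng.normal(loc=0.0, scale=15.0, size=f.shape)

# Impulse (salt-and-pepper) noise: with probability 0.05 a pixel becomes
# pepper (0), and with probability 0.05 it becomes salt (255).
g_sp = f.copy()
u = rng.random(f.shape)
g_sp[u < 0.05] = 0
g_sp[u > 0.95] = 255

print(round(g_gauss.mean(), 1), round(g_gauss.std(), 1))     # about 128 and 15
print((g_sp == 0).mean(), (g_sp == 255).mean())              # about 0.05 each
```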
20) What is a color model? Explain the relationship between the RGB and HSI color models.
Or, Convert colors from RGB to HSI (see the sketch after the answer below).
Ans:-
Color model: A color model is a specification of a 3-D coordinate system and a subspace
within that system where each color is represented by a single point.
Most color models in use today are oriented either toward hardware or toward applications
where color manipulation is a goal. The most used color models are:
1) RGB Model: for color monitors and a broad class of color video cameras.
2) CMY (Cyan, Magenta, Yellow) Model: for use in color printers.
3) YIQ Model: the standard for color TV broadcast.
4) HSI/HSV (Hue, Saturation, Intensity / Hue, Saturation, Value) Model: frequently used
for color image manipulation.
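The conversion the question asks for follows the standard RGB-to-HSI formulas: I = (R + G + B)/3, S = 1 − 3·min(R, G, B)/(R + G + B), and H is obtained from the angle θ = cos⁻¹{½[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^½}, with H = 360° − θ when B > G. A sketch for one normalized pixel:

```python
import math

def rgb_to_hsi(R, G, B):
    """Convert one RGB pixel (components in [0, 1]) to HSI.
    Returns hue in degrees, with saturation and intensity in [0, 1]."""
    I = (R + G + B) / 3.0
    S = 0.0 if I == 0 else 1.0 - 3.0 * min(R, G, B) / (R + G + B)
    num = 0.5 * ((R - G) + (R - B))
    den = math.sqrt((R - G) ** 2 + (R - B) * (G - B))
    if den == 0:                            # gray pixel: hue is undefined
        theta = 0.0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    H = 360.0 - theta if B > G else theta
    return H, S, I

print(rgb_to_hsi(200 / 255, 100 / 255, 50 / 255))
```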
21) What do you mean by a color model? Write about color fundamentals.
Ans:- Color model: A color model is a specification of a 3-D coordinate system and a
subspace within that system where each color is represented by a single point.
Most color models in use today are oriented either toward hardware or toward applications
where color manipulation is a goal.
The most used color models are:
1) RGB Model: for color monitors and a broad class of color video cameras.
2) CMY (Cyan, Magenta, Yellow) Model: for use in color printers.
3) YIQ Model: the standard for color TV broadcast.
4) HSI/HSV (Hue, Saturation, Intensity / Hue, Saturation, Value) Model: frequently used
for color image manipulation.

Color Fundamentals
Color is a powerful descriptor that often simplifies object identification and extraction
from a scene. Color perception is a psychophysical effect in human beings. Color image
processing is divided into two major areas:
i) Full-color processing
✓ processes images in the RGB model.
ii) Pseudo-color (false color) processing
✓ operates on gray-scale images, assigning colors to intensity ranges.

22) Explain RGB color model.


Ans:-
RGB color Model: In the RGB model, each color appears in its primary spectral components
of red, green, and blue. This model is based on the Cartesian coordinate system. The color
subspace is the cube shown in the figure below, in which the primary RGB values are at three
corners and cyan, magenta, and yellow are at the other three corners.
Black is at the origin, and white is at the corner farthest from the origin. In this model, the
gray scale extends from black to white along the line joining those two points, and colors
are points on or inside the cube, defined by vectors extending from the origin. All values
of R, G, B are assumed to be in the range [0, 1].
Images in the RGB color model consist of three independent image planes, one for each
primary color.
When these are fed into an RGB monitor, the three images combine on the phosphor screen
to produce a composite color image.
Thus the use of the RGB model for image processing makes sense when the images themselves
are naturally expressed in terms of three color planes.

CMY color Model


Cyan, magenta, and yellow are the secondary colors of light, or alternatively, the primary
colors of pigment. For example, when a surface coated with cyan pigment is illuminated with
white light, no red light is reflected from the surface (cyan subtracts red from reflected white light).
Most devices that deposit color pigments on paper, such as color printers and copiers,
require CMY data input or perform an RGB-to-CMY conversion internally. This conversion is
performed using the simple operation
[C, M, Y] = [1, 1, 1] − [R, G, B],
assuming all color values have been normalized to the range [0, 1].
23) Differentiate between RGB and CMY color model.
Ans:-

RGB color scheme vs CMYK color scheme:
▪ RGB is used for digital work; CMYK is used for print work.
▪ RGB primary colors: Red, Green, Blue; CMYK primary colors: Cyan, Magenta, Yellow, Black.
▪ RGB uses additive mixing; CMYK uses subtractive mixing.
▪ Colors of RGB images are more vibrant; CMYK colors are less vibrant.
▪ The RGB scheme has a wider range of colors than CMYK; CMYK has a lesser range of colors than RGB.
▪ RGB file formats: JPEG, PNG, GIF, etc.; CMYK file formats: PDF, EPS, etc.

24) Describe the algorithms for converting a color image to a grayscale image and then to
a binary image.
Or, Describe the gray level to color conversion process.
Ans:- Gray-level to color conversion process:
There are three methods to convert a color image into a grayscale image. The methods
are:
▪ The lightness method
▪ The average method
▪ The weighted or luminosity method

▪ The lightness method: The lightness method averages the most prominent and least
prominent colors: (max(R, G, B) + min(R, G, B)) / 2.
▪ Average method: The average method is the simplest one: you just take the average of
the three colors. Since it is an RGB image, you add R, G, and B and then divide by 3 to
get the desired grayscale value:
Grayscale = (R + G + B) / 3
▪ Weighted or luminosity method: The luminosity method is a more sophisticated version
of the average method. It also averages the values, but it forms a weighted average to
account for human perception. We are more sensitive to green than to other colors, so
green is weighted most heavily. The formula for luminosity is 0.21 R + 0.72 G + 0.07 B.
Converting a grayscale image to binary image using Thresholding :
Thresholding is the simplest method of image segmentation and the most common way to
convert a grayscale image to a binary image.
In thresholding, we select a threshold value T; every gray-level value below T is classified
as 0 (black, i.e. background), and every gray-level value equal to or greater than T is
classified as 1 (white, i.e. foreground):
g(x, y) = 1 if f(x, y) ≥ T, and g(x, y) = 0 if f(x, y) < T.
Here g(x, y) represents the thresholded image pixel at (x, y) and f(x, y) represents the
grayscale image pixel at (x, y).
Algorithm:
1. Read target image into MATLAB environment.
2. Convert it to a grayscale Image if read image is an RGB Image.
3. Calculate a threshold value, T
4. Create a new Image Array (say ‘binary’) with the same number of rows and columns
as original image array, containing all elements as 0 (zero).
5. Assign 1 to binary(i, j), if gray level pixel at (i, j) is greater than or equal to the
threshold value, T ; else assign 0 to binary(i, j).
Do the same for all gray level pixels.
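The steps above are phrased for MATLAB; an equivalent sketch in Python/NumPy, using the luminosity weights quoted earlier and a hypothetical global threshold T = 127:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity method: weighted average 0.21 R + 0.72 G + 0.07 B."""
    return rgb @ np.array([0.21, 0.72, 0.07])

def to_binary(gray, T=127):
    """Thresholding: binary(i, j) = 1 if gray(i, j) >= T, else 0."""
    return (gray >= T).astype(np.uint8)

# Hypothetical 2x2 RGB image with channel values in [0, 255].
rgb = np.array([[[200, 100,  50], [ 10,  10,  10]],
                [[255, 255, 255], [ 90, 180,  30]]], dtype=np.float64)

gray = to_grayscale(rgb)
binary = to_binary(gray, T=127)
print(gray)
print(binary)
```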

25) What are chromatic and achromatic light? Describe light and the electromagnetic
spectrum with a necessary figure.
Ans:-
Achromatic Light:
✓ Achromatic light is what viewers see on a black-and-white television set, and it has
been an implicit component of our discussion of image processing so far.
✓ If the light is achromatic (void of color), its only attribute is its intensity, or amount,
which ranges from black to grays and finally to white.
Chromatic Light:
✓ Chromatic light spans the electromagnetic spectrum from approximately 400 to 700
nm.
✓ Three basic quantities are used to describe the quality of a chromatic light source:
i) Radiance
ii) Luminance
iii) Brightness
▪ Radiance:
✓ Radiance is the total amount of energy that flows from the light source.
✓ It is measured in watts (W).
▪ Luminance:
✓ Luminance is a measure of the amount of energy an observer perceives from a light
source.
✓ Luminance is measured in lumens (lm).
▪ Brightness:
✓ Brightness is a psychological (subjective) effect.
✓ Brightness is a subjective descriptor that is practically impossible to measure.

Spectrum/Wavelength/ Electromagnetic Analysis of Light

The visible portion of the electromagnetic spectrum is called light. It occurs between
wavelengths of approximately 400 and 700 nm (nanometers).

Figure: Wavelength comprising the visible range of the electromagnetic spectrum.


26) Write down the procedure of ultrasound image generation.
Ans:- Ultrasound image generation :
✓ Ultrasound waves are usually both generated and detected by a
piezoelectric crystal.
✓ The crystal deforms under the influence of an electric field and vice-versa.
✓ When an alternating voltage is applied over the crystal, a compression wave with the
same frequency is generated.
✓ Commonly used piezoelectric materials are PZT and PVDF (polyvinylidene fluoride).

The procedure of ultrasound image generation:


▪ Ultrasound imaging is based on the same principles involved in the sonar used by bats,
ships, fishermen and the weather service.
✓ When a sound wave strikes an object, it bounces back, or echoes.
By measuring these echo waves, it is possible to determine how far away the object is and
its size, shape and consistency (whether the object is solid, filled with fluid, or both).
✓ In medicine, ultrasound is used to detect changes in appearance of organs, tissues,
and vessels or detect abnormal masses, such as tumors.
27) Write down the algorithm of face detection using YIQ model.
Ans:-

28) Explain the transform coding technique in detail.


Ans:- Transform coding is used to convert spatial image pixel values into transform
coefficient values. Since this is a linear process and no information is lost, the number of
coefficients produced is equal to the number of pixels transformed.
The desired effect is that most of the energy in the image will be contained in a few large
transform coefficients. If it is generally the same few coefficients that contain most of the
energy in most pictures, then the coefficients may be further coded by lossless entropy
coding. In addition, it is likely that the smaller coefficients can be coarsely quantized or
deleted (lossy coding) without doing visible damage to the reproduced image.
Features of Transform coding :
✓ It's a LOSSY COMPRESSION technique. Generally used for converting natural data
such as audio signals and photographic images.
✓ Removes REDUNDANCY from the data.
✓ Lowers the BANDWIDTH of data.
✓ Forms an image with fewer colors.
✓ JPEG (Joint Photographic Experts Group) compression is an example of transform coding.
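A toy illustration of the idea (assuming 8 × 8 blocks and the 2-D DCT, as JPEG uses): transform a block, keep only the largest coefficients, and reconstruct. For smooth blocks most of the energy sits in a few coefficients, so the error stays small. This is a sketch of the principle, not the full JPEG pipeline:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (the transform used on JPEG blocks)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

n = 8
C = dct_matrix(n)
block = np.outer(np.arange(n, dtype=float), np.arange(n, dtype=float)) * 4.0  # smooth block

coeffs = C @ block @ C.T                  # forward 2-D DCT of the block
# Lossy step: keep only the ~16 largest-magnitude coefficients, delete the rest.
thresh = np.sort(np.abs(coeffs).ravel())[-16]
coeffs_kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

recon = C.T @ coeffs_kept @ C             # inverse 2-D DCT
print("kept", int((coeffs_kept != 0).sum()), "of", n * n, "coefficients")
print("max reconstruction error:", round(float(np.max(np.abs(recon - block))), 3))
```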

29) If R = 200, G = 100, B = 50, what will be the values of L and H in the HSL color model?
Ans:- Using the standard RGB-to-HSL conversion with R' = 200/255 ≈ 0.784, G' = 100/255 ≈ 0.392, B' = 50/255 ≈ 0.196:
max = 0.784, min = 0.196, Δ = max − min = 0.588.
L = (max + min) / 2 = (0.784 + 0.196) / 2 ≈ 0.49 (about 49%, i.e. 125 on a 0-255 scale).
Since max = R', H = 60° × ((G' − B') / Δ) = 60° × (0.196 / 0.588) = 60° × 1/3 = 20°.
30) A 5-bit grayscale image of resolution 7 × 6 is shown below. Draw its histogram and CDF.
Also comment on the quality of the image:
0 15 15 10 10 10 0
20 15 15 20 30 5 0
25 0 20 20 20 30 30
30 0 20 20 20 10 25
5 0 0 0 0 5 5
25 10 0 5 20 20 0

Ans:- Counting the 42 pixels gives the histogram (gray level: count): 0: 11, 5: 5, 10: 5, 15: 4, 20: 10, 25: 3, 30: 4.
The CDF (running sum, normalized by 42): 0 → 11 (0.262), 5 → 16 (0.381), 10 → 21 (0.500), 15 → 25 (0.595), 20 → 35 (0.833), 25 → 38 (0.905), 30 → 42 (1.000).
Comment: although the values span almost the full 5-bit range (0-31), the histogram is concentrated at the low end (25 of the 42 pixels are ≤ 15, and level 0 alone accounts for 11 pixels), so the image is predominantly dark with relatively low contrast; histogram equalization would improve it.
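A quick way to verify the counts and the CDF (the array below transcribes the image from the question):

```python
import numpy as np

img = np.array([[ 0, 15, 15, 10, 10, 10,  0],
                [20, 15, 15, 20, 30,  5,  0],
                [25,  0, 20, 20, 20, 30, 30],
                [30,  0, 20, 20, 20, 10, 25],
                [ 5,  0,  0,  0,  0,  5,  5],
                [25, 10,  0,  5, 20, 20,  0]])

levels, counts = np.unique(img, return_counts=True)   # histogram
cdf = np.cumsum(counts)                               # cumulative counts
for lev, c, s in zip(levels, counts, cdf):
    print(f"gray level {lev:2d}: count {c:2d}, CDF {s:2d} ({s / img.size:.3f})")
```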
1.5 set – (31-41)
31) What is edge detection? How can you detect edges using first-order and second-order
derivatives?
Or, What is the importance of first-order and second-order derivatives for detecting an
edge?
Ans:- Edge detection is the process of locating boundaries between regions by finding abrupt local changes in image intensity. First-order derivatives (the gradient) respond wherever there is an intensity transition, so thresholding the gradient magnitude detects the presence of an edge; they generally produce thicker edges. Second-order derivatives (the Laplacian) respond more strongly to fine detail and produce a double response at step edges; the zero crossing of the second derivative gives the precise location of the edge center. In practice the two are complementary: the first derivative detects the edge, and the second localizes it.
32) Describe the edge detection technique of the Sobel and Laplacian operators.
Ans:-
33) What do you mean by edge detection? How can you detect lines in an image?
Or, Write down the line detection masks for vertical, horizontal, +45° and −45° lines (see the sketch after the definition below).
Ans:-

Edge detection: An edge is a set of connected pixels that lie on the boundary between two
regions.
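The standard 3 × 3 line-detection masks (as given in Gonzalez & Woods) respond most strongly to one-pixel-thick lines in their orientation; the sketch below applies each mask at the center of a test image containing a vertical line, so the vertical mask gives the largest response:

```python
import numpy as np

# Standard 3x3 line-detection masks for the four orientations.
masks = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "+45 deg":    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "-45 deg":    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

# Test image with a one-pixel-thick vertical line through column 2.
f = np.zeros((5, 5))
f[:, 2] = 100.0

# Response at the center pixel: R = sum of (mask * 3x3 neighborhood).
for name, w in masks.items():
    R = np.sum(w * f[1:4, 1:4])
    print(f"{name:10s} response at center: {R:7.1f}")
```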
34) Discuss the criteria and methods/procedures of edge linking and boundary detection.
Ans:- Edge Linking: Edge linking is the process of forming an ordered list of edges from an
unordered list. By convention, edges are ordered by traversal in a clockwise direction.
In general, edge linking methods can be classified into two categories:
▪ Local Edge Linkers
-- where edge points are grouped to form edges by considering each point's relationship to
any neighboring edge points.
▪ Global Edge Linkers
-- where all edge points in the image plane are considered at the same time and sets of
edge points are sought according to some similarity constraint, such as points which share
the same edge equation.

Boundary: The sets of pixels produced by edge-detection algorithms seldom define a boundary
completely, because of noise, breaks in the boundary, etc. Therefore, edge-detection
algorithms are typically followed by linking and other boundary detection procedures designed to
assemble edge pixels into meaningful boundaries. There are 2 types: local and global processing.
Local Processing: Analyze the characteristics of pixels in a small neighborhood (3x3, or 5x5)
about every point that has undergone edge detection. All points that are similar are linked,
forming a boundary of pixels that share some common properties.
Two principal properties are used for establishing the similarity of edge pixels:
▪ the strength (magnitude) of the response of the gradient operator used to produce the edge pixel, and
▪ the direction of the gradient.
A point in the predefined neighborhood of (x, y) is linked to the pixel at (x, y) if both the
magnitude and direction criteria are satisfied (stated as formulas below). This process is repeated
for every location in the image.
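Stated as formulas (following the standard local-linking formulation; E and A are user-chosen thresholds): an edge pixel at (s, t) in the neighborhood of (x, y) is linked to the pixel at (x, y) if
|M(s, t) − M(x, y)| ≤ E (similar gradient magnitude), and
|α(s, t) − α(x, y)| ≤ A (similar gradient direction),
where M and α denote the gradient magnitude and gradient angle, respectively.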

35) Explain briefly –


(i) Region based segmentation
(ii) Use of motion in segmentation
Or, Explain the region growing segmentation with example.
Or, Discuss about region based segmentation.
Ans:- (i) Region-based segmentation: Segmentation carried out based on
similarities in the given image is known as region-based segmentation.
Properties: The regions that are formed using this method have the following properties:

✓ The sum of all the regions is equal to the whole image.

✓ Each region is contiguous and connected


✓ A pixel belongs to a single region only, hence there is no overlap of pixels.

✓ Each region must satisfy some uniformity condition

✓ Two adjacent regions do not have anything in common

Method: Region based segmentation can be carried out in four different ways:
1) Region Growing
2) Region Splitting
3) Region merging
4) Split and merge

Each of them is explained below:


1) Region Growing: The procedure in which pixels are grouped into larger regions based on
some predefined conditions is known as region growing (see the sketch after this list).
2) Region Splitting: In region splitting, we try to satisfy the homogeneity property where
pixels that are similar are grouped together.
3) Region Merging: The region merging method is exactly the opposite of the region splitting
method. In this method, we start at the pixel level and consider each pixel as a
homogeneous region. At any level of merging, we check whether four adjacent homogeneous
regions arranged in a 2 × 2 manner together satisfy the homogeneity property.
4) Split and Merge: (i) Region splitting and region merging were explained above; in region
splitting we start with the whole image and split it into four quadrants.
We continue splitting each quadrant further until all the sub-regions satisfy the predefined
homogeneity property.
(ii) In region merging, each pixel is taken as a small region; we merge small regions into
larger regions if they satisfy the homogeneity property.
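A minimal region-growing sketch, assuming a single seed pixel, 4-adjacency, and a fixed intensity-difference threshold as the predefined condition (real implementations vary in all three choices):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    """Grow a region from `seed`: a pixel joins if it is 4-adjacent to a
    region pixel and its intensity differs from the seed's by <= thresh."""
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    seed_val = float(img[seed])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbors
            nx, ny = x + dx, y + dy
            if (0 <= nx < img.shape[0] and 0 <= ny < img.shape[1]
                    and not region[nx, ny]
                    and abs(float(img[nx, ny]) - seed_val) <= thresh):
                region[nx, ny] = True
                queue.append((nx, ny))
    return region

img = np.array([[10, 10, 80],
                [12, 11, 82],
                [13, 79, 81]])
print(region_grow(img, seed=(0, 0), thresh=5).astype(int))
```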

(ii) Use of motion in segmentation:


Motion is a powerful cue used by humans and many animals to extract objects of interest
from a background of irrelevant detail.
In imaging applications, motion arises from relative displacement between the sensing
system and the scene being viewed, such as
- in robotic applications,
- autonomous navigation, and
- dynamic scene analysis.

36) Define psycho-visual redundancy. Describe inter-pixel and variable-length coding.
Ans:-
Psycho-visual redundancy: The brightness of a region, as perceived by the eye, depends on
factors other than simply the light reflected by the region; e.g., intensity variations (Mach
bands) can be perceived in an area of constant intensity.
Inter-Pixel Redundancy
▪ Inter-pixel redundancy is due to the correlation between the neighboring pixels in an
image.
▪ The value of any given pixel can be predicted from the values of its neighbors (highly
correlated).
▪ The information carried by an individual pixel is relatively small.
▪ To reduce inter-pixel redundancy the difference between adjacent pixels can be used
to represent an image.
Variable-length Coding
▪ The coding redundancy can be minimized by using a variable-length coding method
where the shortest codes are assigned to most probable gray levels.
▪ The most popular variable-length coding method is Huffman coding.
Huffman Coding: Huffman coding involves the following 2 steps (see the sketch below):
1) Create a series of source reductions by ordering the probabilities of the symbols,
combining the two lowest-probability symbols into a single symbol, and replacing it in the
next source reduction.
2) Code each reduced source, starting with the smallest source and working back to the
original source.
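A compact sketch of the two steps using Python's heapq, with made-up symbol probabilities; ties are broken with a counter so the heap never compares dictionaries:

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Step 1: repeatedly combine the two lowest-probability symbols.
    Step 2: assign bits walking back from the reduced source, so the
    shortest codes end up on the most probable symbols."""
    tick = count()
    heap = [(p, next(tick), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)      # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

# Hypothetical gray-level probabilities (they sum to 1).
probs = {"a1": 0.4, "a2": 0.3, "a3": 0.1, "a4": 0.1, "a5": 0.06, "a6": 0.04}
for sym, code in sorted(huffman_codes(probs).items()):
    print(sym, code)
```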
37) Discuss the properties of Z-transformation.
Ans:-
38) Discuss the properties of 2D discrete Fourier transform and prove the circular
convolution theorem.
Ans:-
39) Define and derive the discrete Fourier Transformation.
Ans:-
40) What is Image compression? Describe a general image compression system model.
Or, Draw an image compression system and explain how it works.
Ans:- Image compression: Image compression addresses the problem of reducing the
amount of data required to represent a digital image.
The image compression model is explained below with a diagram:
As the figure shows, an image compression system is composed of two distinct functional
components: an encoder and a decoder. The encoder performs compression, and the
decoder performs the complementary operation of decompression.

Fig: Functional block diagram of a general image compression system.

The input image f(x, y) is fed into the encoder, which creates a compressed representation of
the input. This representation is stored for later use, or transmitted for storage and use at a
remote location. When the compressed representation is presented to its complementary
decoder, a reconstructed output image f̂(x, y) is generated. In general, f̂(x, y) may or may
not be an exact replica of f(x, y). If it is, the compression system is called error free,
lossless, or information preserving. If not, the reconstructed output image is distorted and
the compression system is referred to as lossy.

▪ The Encoding/Compression Process:


The encoder in the figure is designed to remove the redundancies.
➢ A mapper transforms f(x,y) into a format designed to reduce spatial and temporal
redundancy. This operation generally is reversible and may or may not reduce
directly the amount of data required to represent the image.
➢ The quantizer in Figure reduces the accuracy of the mapper's output in
accordance with a pre-established fidelity criterion. The goal is to keep irrelevant
information out of the compressed representation.
➢ The symbol coder of Figure generates a fixed- or variable-length code to represent
the quantizer output and maps the output in accordance with the code. In many
cases, a variable-length code is used.

▪ The Decoding/Decompression Process:


The decoder in the figure contains only two components: a symbol decoder and an inverse
mapper. They perform, in reverse order, the inverse operations of the encoder's symbol
coder and mapper. Because quantization results in irreversible information loss, an
inverse quantizer block is not included in the general decoder model.
inverse quantizer block is not included in the general decoder model. In video applications,
decoded output frames are maintained in an internal frame store and used to
reinsert the temporal redundancy that was removed at the encoder.

41) Explain the lossy predictive coding model.


Ans:-
