
EC8093 - DIGITAL IMAGE PROCESSING UNIT II - IMAGE ENHANCEMENT

UNIT II - IMAGE ENHANCEMENT


Spatial Domain: Gray level transformations – Histogram processing – Basics of Spatial
Filtering– Smoothing and Sharpening Spatial Filtering, Frequency Domain: Introduction to
Fourier Transform– Smoothing and Sharpening frequency domain filters – Ideal,
Butterworth and Gaussian filters, Homomorphic filtering, Color image enhancement.

TWO MARKS
1. Specify the objective of image enhancement technique.
The objective of enhancement technique is to process an image so that the result is more
suitable than the original image for a particular application.

2. What are the types of spatial domain processing?


Spatial domain processing is classified into three types.
1. Point Processing
2. Mask processing
3. Global operation
(a) Point Processing is an image enhancement technique in which enhancement at any
point in an image depends only on the gray level at that point
(b) In mask processing each pixel is modified according to the values in a predefined
neighborhood.
(c) In global operation, all pixel values in the image are taken into consideration for the
enhancement process.

3. What is meant by gray level transformation? What are its types?


Gray level transformation is the simplest of all image enhancement techniques. It is a point
processing method. In this method each pixel value in the original image is mapped to a
new pixel value to obtain the enhanced image. In its general form, a gray level transformation
is represented as,
S=T(r)
Where 'r' denotes the pixel value before processing, 's' denotes the pixel value after processing
and T represents the transformation that maps a pixel value 'r' to a pixel value 's'.

4. Types of Gray Level Transformation


(a) Image Negative
(b) Log Transformations
(c) Power Law Transformations
(d) Piece wise linear Transformations


5. What is an image negative?


The negative of an image with gray levels in the range [0, L-1] is obtained by using the
negative transformation, which is given by the expression
s = (L - 1) - r
 Where s is output pixel
 r is input pixel
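The negative transformation above can be sketched in NumPy (the pixel values below are illustrative, not from the question paper):

```python
import numpy as np

# Negative transformation s = (L - 1) - r for an 8-bit image (L = 256)
def image_negative(img, L=256):
    return (L - 1) - img.astype(int)

img = np.array([[0, 128, 255]])      # hypothetical sample row of pixels
neg = image_negative(img)            # black maps to white and vice versa
```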

6. What is contrast stretching?


Contrast stretching is an enhancement technique used to increase the dynamic range of the
gray levels in an image.

7. What is thresholding?
Thresholding is an image enhancement technique that creates a binary image. All gray level
values above a threshold T are mapped to (L - 1) and all gray level values below the threshold
are mapped to 0.
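A minimal NumPy sketch of this mapping (the pixel values and threshold below are hypothetical):

```python
import numpy as np

# Thresholding: gray levels above T map to L - 1, the rest to 0
def threshold(img, T, L=256):
    return np.where(img > T, L - 1, 0)

img = np.array([[10, 100, 200], [50, 150, 250]])   # hypothetical 8-bit pixels
binary = threshold(img, T=128)                     # a binary (two-level) image
```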

8. What is grey level slicing?


Highlighting a specific range of grey levels in an image often is desired. It displays a high
value for all gray levels in the range of interest. Applications include enhancing features such
as masses of water in satellite imagery and enhancing flaws in x-ray images.

9. Define image subtraction.


The difference between two images f(x, y) and h(x, y) is expressed as g(x, y) = f(x, y) - h(x, y).
The difference image g(x, y) is obtained by computing the difference between all pairs of
corresponding pixels from f and h.

10. What is the purpose of image averaging? Give its application. (N/D-18)


 It is a process of adding a set of noisy images and then averaging. Image averaging is
done to reduce the noise content in the image.
 An important application of image averaging is in the field of astronomy, where
imaging with very low light levels is routine, causing sensor noise frequently to render
single images virtually useless for analysis.
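The noise reduction can be demonstrated with a small sketch (the "scene" and noise level below are made up; averaging K frames reduces the noise standard deviation by roughly sqrt(K)):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)                 # hypothetical noise-free image
# K = 50 noisy observations of the same scene
noisy = [clean + rng.normal(0.0, 20.0, clean.shape) for _ in range(50)]
averaged = np.mean(noisy, axis=0)

# residual noise after averaging is much smaller than in any single frame
noise_single = np.std(noisy[0] - clean)
noise_avg = np.std(averaged - clean)
```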


11. What is meant by mask?(N/D-18)


Mask is the small 2-D array in which the values of mask co-efficient determines the nature of
process. The enhancement technique based on this type of approach is referred to as mask
processing.

12. What is meant by bit-plane slicing?(N/D- 16)


 Highlighting the contribution made to total image appearance by specific bits might be
desired. Suppose that each pixel in an image is represented by 8 bits.
 Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0 for
the LSB to bit-plane 7 for the MSB. Highlighting the higher order bit planes to achieve
enhancement is called bit plane slicing.
13. Define histogram. How is it generated for an image? (A/M-2019)
Histograms are the basis for numerous spatial domain processing techniques. Histogram
manipulation can be used for image enhancement.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image
having gray level rk. The histogram is a plot of rk vs nk.
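The counting operation can be sketched directly (the tiny image below is illustrative):

```python
import numpy as np

# h(rk) = nk: number of pixels having gray level rk
def histogram(img, L=256):
    h = np.zeros(L, dtype=int)
    for r in img.ravel():
        h[r] += 1
    return h

img = np.array([[0, 0, 1], [1, 1, 255]])   # hypothetical 8-bit pixels
h = histogram(img)
```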

14. What is meant by histogram equalization or histogram linearization?


Histogram equalization is an image enhancement process that attempts to spread out the gray
levels in an image so that they are evenly distributed across their range.
Histogram equalization produces an output image that has a uniform histogram. The transform
is given by,
sk = T(rk) = (L - 1) Σ (j = 0 to k) pr(rj) = ((L - 1)/n) Σ (j = 0 to k) nj
Thus a processed (output) image is obtained by mapping each pixel with level rk in the input
image to a corresponding pixel with level Sk in the output image.
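This mapping through the scaled cumulative distribution can be sketched as follows (the four-pixel image is illustrative):

```python
import numpy as np

# s_k = round((L - 1) * sum_{j <= k} n_j / n): map each level through the scaled CDF
def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist) / img.size
    mapping = np.round((L - 1) * cdf).astype(int)
    return mapping[img]

img = np.array([[50, 50], [51, 52]])   # a low-contrast cluster of gray levels
eq = equalize(img)                     # levels spread toward the full range
```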

15. What are the advantages of histogram equalization?


(a) Histogram Equalization produces image with gray level values that cover the entire
gray scale.
(b) Histogram Equalization is fully automatic i.e. histogram equalization is only based on
information that can be directly extracted from the given image.
(c) Very simple computation.

16. What is meant by histogram matching (or) histogram specification?


The method used to generate a processed image that has a specified histogram is called
histogram matching or histogram specification.
It allows the user to specify the shape of the histogram that the processed image is supposed to
have.

17. What is a smoothing filter and what are its types?


Smoothing filters are used for noise reduction and blurring. Smoothing filters remove the high
frequency components; hence they are also called low pass filters.
Smoothing filters are broadly classified into two types:
a) Linear smoothing spatial filters and
b) Non linear smoothing spatial filters

18. Give some examples of linear smoothing spatial filters.


(a) Mean filters

o Box filter
o Weighted Average filter
(b) Geometric mean filter
(c) Harmonic mean filter
(d) Contra-harmonic mean filter

19. What is a non linear smoothing spatial filter or order statistics filter? What are its
types?
These are spatial filters whose response is based on ordering the pixels contained in the image
area encompassed by the filter.
Types of order statistics filters
• Median filter
• Max and Min filter
• Midpoint filter
• Alpha trimmed mean filter
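The median filter, the most common order statistics filter, can be sketched as below (the 3 x 3 window and the single "salt" pixel are illustrative):

```python
import numpy as np

# Median filter: replace each interior pixel by the median of its 3 x 3 neighborhood
def median_filter(img):
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

img = np.full((5, 5), 10)
img[2, 2] = 255                 # a single "salt" noise pixel
filtered = median_filter(img)   # the impulse is removed, neighbors are untouched
```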

20. What are the applications of smoothing filters?


Smoothing filters are used for
(a) Removal of random noise
(b) Smoothing of false contours
(c) Reduction of irrelevant details in an image.


21. Explain spatial filtering.


 Spatial filtering is the process of moving the filter mask from point to point in an image.
 For a linear spatial filter, the response is given by a sum of products of the filter
coefficients and the corresponding image pixels spanned by the filter mask.

22. What is a sharpening filter?


 Sharpening filters are used to highlight fine details in an image or to enhance details
that have been blurred. Sharpening filters are also called high pass filters.
 Derivative filters are used for image sharpening.
 First order derivative filter produces thicker edges in an image. Second order derivative
filter produces thin edges in an image.

23. Name the different types of derivative filters in DIP.[A/M-18]


Derivative filters are of two types: First Order Derivative Filters and Second Order Derivative
Filters.
1. First Order Derivative Filters (Gradient Operators)
a) Roberts cross gradient operators
b) Prewitt operators
c) Sobel operator
2. Second Order Derivative Filters
Laplacian Filters

24. Define first order derivative filter or gradient filter.


For a function f(x, y) the gradient of f at coordinates (x, y) is defined as the two-dimensional
column vector
∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T
The magnitude of this vector is given by
|∇f| = sqrt(Gx² + Gy²) ≈ |Gx| + |Gy|

25. Define the second order derivative filter or Laplacian operators.


Second order derivative filters are commonly referred to as Laplacian operators. The Laplacian
is a linear isotropic filter. A simple Laplacian operator for a function (image) f(x, y) is defined as,
∇²f = ∂²f/∂x² + ∂²f/∂y²
Where,
∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

The digital implementation of the two-dimensional Laplacian is obtained by summing these
two components:
∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

Simple Laplacian masks are given by,
 0  1  0        1  1  1
 1 -4  1   and  1 -8  1
 0  1  0        1  1  1
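Applying the 4-neighbour Laplacian mask can be sketched as a sum of products over each interior pixel (the constant test image is illustrative; the Laplacian of a flat region is zero):

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])

# Sum-of-products response of the Laplacian mask at every interior pixel
def apply_laplacian(img):
    out = np.zeros(img.shape, dtype=int)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(LAPLACIAN * img[i - 1:i + 2, j - 1:j + 2])
    return out

flat = np.full((5, 5), 7)        # constant region: second derivative is zero
resp = apply_laplacian(flat)
```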

26. Define Robert's cross, Prewitt's and Sobel's Operators.


Robert's Cross operator: Robert's cross gradient operators are defined using 2 x 2 masks as,

Gx:  -1  0      Gy:   0 -1
      0  1            1  0

Prewitt's Operator
Prewitt's operator is defined using a 3 x 3 mask and the digital approximation of the Prewitt's
operator is defined as,

Gx:  -1 -1 -1      Gy:  -1  0  1
      0  0  0           -1  0  1
      1  1  1           -1  0  1

Sobel's operator
Sobel's operator is defined using 3 x 3 masks as,

Gx:  -1 -2 -1      Gy:  -1  0  1
      0  0  0           -2  0  2
      1  2  1           -1  0  1
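The Sobel masks and the |Gx| + |Gy| magnitude approximation can be sketched as below (the step-edge test image is illustrative):

```python
import numpy as np

SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # responds to vertical change
SOBEL_Y = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # responds to horizontal change

# Gradient magnitude approximated by |Gx| + |Gy| at interior pixels
def sobel_magnitude(img):
    out = np.zeros(img.shape)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            region = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = abs(np.sum(SOBEL_X * region)) + abs(np.sum(SOBEL_Y * region))
    return out

img = np.zeros((5, 6))
img[:, 3:] = 100.0              # a vertical step edge
mag = sobel_magnitude(img)      # response concentrates at the edge columns
```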

27. Write the application of sharpening filters?


Important applications of sharpening filters are in the fields of
i Electronic printing
ii Medical imaging
iii Industrial application
iv Autonomous target detection in smart weapons.

28. What is an isotropic filter?


Isotropic filters are rotation invariant filters i.e., rotating the image and then applying the filter
gives the same result as applying the filter to the image first and then rotating the result.
Example: Laplacian filter

29. What are the applications of spatial enhancement filters? (or) Sharpening Filters?
a) Printing industry
b) Image based product inspection
c) Forensics
d) Microscopy
e) Surveillance etc.

30. What is unsharp masking?(Nov/Dec 16)


Unsharp masking is the process of subtracting a blurred version of an image from the original
image itself to sharpen it. Unsharp masking is defined as,
fs(x, y) = f(x, y) - 𝑓̂(x, y)
Where f(x, y) refers to the original image, 𝑓̂(x, y) refers to the blurred version of f(x, y) and
fs(x, y) refers to the sharpened image obtained. Unsharp masking is used in the publishing
industry to sharpen images.
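The common variant adds the mask f - 𝑓̂ back to the original to produce the sharpened result; a sketch assuming a 3 x 3 box blur and an illustrative step-edge image (the overshoot on either side of the edge is what makes it look sharper):

```python
import numpy as np

# Box blur with a 3 x 3 averaging mask (interior pixels only)
def blur3x3(img):
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out

# Unsharp masking: the mask f - f_blurred is added back to f to sharpen edges
def unsharp_mask(img, k=1.0):
    return img + k * (img - blur3x3(img))

img = np.zeros((5, 6))
img[:, 3:] = 90.0               # a step edge
sharp = unsharp_mask(img)       # values overshoot on both sides of the edge
```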

31. What is high boost filtering?


High boost filtering is a slight modification of unsharp masking. A high boost filtered image
fhb is defined as,
fhb(x, y) = A f(x, y) - 𝑓̂(x, y)
Where A ≥ 1, f(x, y) refers to the original image, 𝑓̂(x, y) refers to the blurred version of f(x, y)
and fhb(x, y) refers to the sharpened image obtained.

32. Write the steps involved in frequency domain filtering.


a) Multiply the input image by (-1)^(x+y) to center the transform.
b) Compute F(u, v), the DFT of the image from (a).
c) Multiply F(u, v) by a filter function H(u, v).
d) Compute the inverse DFT of the result in (c).
e) Obtain the real part of the result in (d).
f) Multiply the result in (e) by (-1)^(x+y).
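The six steps above can be sketched as a single function (the all-pass filter H = 1 is used only to check that the round trip leaves the image unchanged):

```python
import numpy as np

# The textbook recipe: center with (-1)^(x+y), DFT, multiply by H,
# inverse DFT, take the real part, undo the centering
def frequency_filter(img, H):
    M, N = img.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    sign = (-1.0) ** (x + y)
    F = np.fft.fft2(img * sign)          # steps (a) and (b)
    g = np.real(np.fft.ifft2(F * H))     # steps (c), (d), (e)
    return g * sign                      # step (f)

img = np.random.default_rng(1).random((8, 8))
H_allpass = np.ones((8, 8))              # H = 1 everywhere: image passes unchanged
out = frequency_filter(img, H_allpass)
```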

33. What are the types of frequency domain filters?


Frequency domain filters are classified into two types: i) Low Pass Filters ii) High Pass Filters
Type of Low Pass Filters (Smoothing Filters)
Ideal Low Pass Filters


Butterworth Low Pass Filters


Gaussian Low Pass Filters
Types of High Pass Filters (Sharpening Filters)
Ideal High Pass Filters
Butterworth High Pass Filters
Gaussian High Pass Filters

34. Give the filter function of ideal low pass filter and high pass filter.
The filter function of an ideal low pass filter is given as,
H(u, v) = 1 if D(u, v) ≤ D0
H(u, v) = 0 if D(u, v) > D0
The filter function of an ideal high pass filter is given as,
H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0
Where D0 is the cutoff distance and D(u, v) is the distance from the point (u, v) to the center
of the frequency rectangle.

35. Give the filter function of Butterworth low pass filter and high pass filter.
A Butterworth low pass filter of order n is defined as,
H(u, v) = 1 / [1 + (D(u, v)/D0)^(2n)]
A Butterworth high pass filter of order n is defined as,
H(u, v) = 1 / [1 + (D0/D(u, v))^(2n)]
Where D0 is the cutoff distance and D(u, v) is the distance from the point (u, v) to the center
of the frequency rectangle.

36. Give the filter function of Gaussian low pass filter and high pass filter.
The filter function of a Gaussian low pass filter is given by,
H(u, v) = e^(-D²(u, v) / 2D0²)
The filter function of a Gaussian high pass filter is given by,
H(u, v) = 1 - e^(-D²(u, v) / 2D0²)
Where D0 is the cutoff distance and D(u, v) is the distance from the point (u, v) to the center
of the frequency rectangle.
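The three low pass transfer functions can be sketched from the distance grid D(u, v) (the sizes and cutoff below are arbitrary; each high pass counterpart is simply 1 minus the low pass):

```python
import numpy as np

# D(u, v): distance from each frequency point to the center of the M x N rectangle
def distance_grid(M, N):
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)

def ideal_lpf(M, N, D0):
    return (distance_grid(M, N) <= D0).astype(float)

def butterworth_lpf(M, N, D0, n=2):
    return 1.0 / (1.0 + (distance_grid(M, N) / D0) ** (2 * n))

def gaussian_lpf(M, N, D0):
    D = distance_grid(M, N)
    return np.exp(-(D ** 2) / (2 * D0 ** 2))

H = gaussian_lpf(64, 64, D0=16)   # H = 1 at the center, decays toward the corners
```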

37. What is homomorphic filtering?


It is a frequency domain procedure for improving the appearance of an image by simultaneous
gray level range compression and contrast enhancement using illumination-reflectance model.

38. List out any five properties of 2D – Fourier transforms?


The properties of 2D - DFT are as follows:
1) Symmetric unitary
2) Periodic extension
3) Sampled Fourier transform
4) Fast Fourier transform
5) Conjugate symmetry
6) 2D circular convolution theorem
7) Block circulant operation

39. What are image transforms and their applications?


 Image transforms refer to a class of unitary matrices used for representing images.
 An image can be expanded in terms of a discrete set of basis images.
 These can be generated by unitary matrices. A given N x N image can be viewed as an
N² x 1 vector.

Applications of transforms are:-


1) To reduce bandwidth
2) To reduce redundancy
3) To extract feature

40. Write the formula for DFT:


The discrete Fourier transform of a function (image) f(x, y) of size M x N is given by
F(u, v) = (1/MN) Σ (x = 0 to M-1) Σ (y = 0 to N-1) f(x, y) e^(-j2π(ux/M + vy/N))
for u = 0, 1, 2, ..., M-1; v = 0, 1, 2, ..., N-1

If F(u, v) is given, f(x, y) can be obtained via the inverse DFT, given by the expression
f(x, y) = Σ (u = 0 to M-1) Σ (v = 0 to N-1) F(u, v) e^(j2π(ux/M + vy/N))
for x = 0, 1, 2, ..., M-1; y = 0, 1, 2, ..., N-1

41. Write the formula for unitary DFT pair?

The discrete Fourier transform of a function (image) f(x, y) of size M x N is given by
F(u, v) = Σ (x = 0 to M-1) Σ (y = 0 to N-1) f(x, y) e^(-j2π(ux/M + vy/N))
for u = 0, 1, 2, ..., M-1; v = 0, 1, 2, ..., N-1

If F(u, v) is given, f(x, y) can be obtained via the inverse DFT, given by the expression
f(x, y) = (1/MN) Σ (u = 0 to M-1) Σ (v = 0 to N-1) F(u, v) e^(j2π(ux/M + vy/N))
for x = 0, 1, 2, ..., M-1; y = 0, 1, 2, ..., N-1
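The forward transform above can be evaluated directly from the double sum and checked against a library FFT (the 4 x 4 random input is arbitrary; this convention matches `numpy.fft.fft2`, which puts no 1/MN factor on the forward transform):

```python
import numpy as np

# Direct evaluation of F(u, v) = sum_x sum_y f(x, y) exp(-j 2 pi (ux/M + vy/N))
def dft2(f):
    M, N = f.shape
    F = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            for x in range(M):
                for y in range(N):
                    F[u, v] += f[x, y] * np.exp(-2j * np.pi * (u * x / M + v * y / N))
    return F

f = np.random.default_rng(2).random((4, 4))
F = dft2(f)   # agrees with np.fft.fft2; the inverse DFT recovers f
```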

42. Write the properties of 2D – DFT? (MAY/JUNE-2010)


The properties are
Separability, Convolution, Linearity, Translation, Modulation, Rotation, Periodicity,
Conjugate symmetry, Distributivity, Scaling, Average value, Correlation, Sampling and
Laplacian.

43. State 2D sampling theorem [Nov 2010]


A signal can be reconstructed from its samples if the original signal contains no frequencies
above half the sampling frequency: fs ≥ 2fm.
44. What are the basic geometric transformations that can be applied on images? [Nov 2012]
i) DCT ii) DFT iii) KL transform iv) SVD transform v) Walsh transform

45. State the two important properties of unitary transforms [NOV 2013]
i) Energy compaction – few transform coefficients have large magnitude.
ii) Energy conservation – preserve the norm of input vectors.

46. Give some examples of image enhancement process.


Some of the important examples are:
(1) Contrast enhancement
(2) Edge enhancement
(3) Noise filtering
(4) Sharpening
(5) Magnifying
47. Mention the basic approaches of image enhancement.
The two basic approaches are
(i) Spatial domain methods
- These methods are based on direct manipulation of the pixels in an image.
(ii) Frequency domain methods
- These methods are based on modifying the Fourier transform of an image.


48. What is Mask processing?


 A mask is a two-dimensional array or subimage, which is also known as a filter,
kernel, template or window.
 The center of this mask is moved from pixel to pixel, starting at the top left corner of
an image and at each point the response of an operator 'T' is calculated.
 Processing an image in this way is known as 'mask processing'.

49. Give some of the applications of power-law transformation?


Three important applications are,
(i) Gamma correction
(ii) General-purpose contrast manipulation
(iii) Enhancing images with a washed-out appearance.

50. What are the applications of histogram processing?


Histogram processing can be applied for
(i) Image enhancement
(ii) Image compression and
(iii) Image segmentation.
51. What is meant by image averaging?
Image averaging is the process used to reduce the noise content in an image by
adding a set of noisy images.

52. What is blurring? State the uses.


Blurring is the process of reducing "sharp" transitions in an image. It is also known
as smoothing.
It is used to,
(i) Remove small details in an image
(ii) Bridge small gaps in lines or curves
(iii) Reduce noise.

53. Mention some applications of image sharpening.


Some important applications are
(i) Electronic printing
(ii) Medical imaging
(iii) Industrial inspection
(iv) Autonomous guidance in military systems.

54. Write down the equation used to obtain the enhanced image using Laplacian filters?

g(x, y) = f(x, y) - ∇²f(x, y)   (for a Laplacian mask with a negative center coefficient)

Where, f(x, y) - original image

∇²f(x, y) - Laplacian filtered image

55. Whether two different images can have same histogram? Justify your
answer.(NOV/DEC-17)
 Yes. A histogram records only how many pixels occur at each gray level, not where those
pixels are located in the image.
 Two images containing the same pixel values in different spatial arrangements (for
example, an image and any permutation of its pixels) therefore have identical histograms.
56. For an eight bit image write the expression for obtain the negative of the input
images.(NOV/DEC-17)
 The negative of an image with gray levels in the range [0, L-1] is obtained by using the
negative transformation shown in Fig. 3.3, which is given by the expression
s = L - 1 - r.
Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative.
For an eight bit image, L = 2^8 = 256.
Therefore, the negative of the image is s = (256 - 1) - r = 255 - r.

57. Necessitate the need for transform? (APR/MAY-18)


Most signals or images are time (or spatial) domain signals, i.e., signals measured as a
function of time or position. This representation is not always the best. For many image
processing applications, a mathematical transformation is applied to the signal or image to
obtain further information that is not readily available in the raw form.

58. How negative of an image is obtained?(APR/MAY-19)


The negative of an image with gray level in the range [0, L-1] is obtained by using the
negative transformation.
The expression of the transformation is,


S = L-1-r
The value of pixel before processing can be represented as r and after processing can be
represented as S.

59. Compute the image negative values for the following 3 * 3 gray scale 8 bit
image.(A/M 2021)

Each pixel value r is subtracted from the maximum intensity value (255): s = 255 - r.
For the given pixel values 204, 14, 74, 84, 94, 104, 189, 159 and 0 this gives:


51 241 181
171 161 151
66 96 255

The above image is the negative image.


60. How to achieve the blurring of an image in the frequency domain? (A/M 2021)
First the image is imported and its Fourier transform is computed. The transform is then
multiplied by a low pass filter function H(u, v) (ideal, Butterworth or Gaussian), and the
inverse Fourier transform of the product gives the blurred image.


PART-B [16 marks]

1. Explain Some Basic Gray Level Transformations.[NOV/DEC-17](APR/MAY-19)

The basic types of functions used frequently for image enhancement are:


1. linear (negative and identity transformations),
2. logarithmic (log and inverse-log transformations),
3. power-law (nth power and nth root transformations), and
4. piecewise-linear transformation functions.

The identity function is the trivial case in which output intensities are identical to input
intensities. It is included in the graph only for completeness.

1.1 Image Negatives


 The negative of an image with gray levels in the range [0, L-1] is obtained by using the
negative transformation shown in Fig. 3.3, which is given by the expression
s = L - 1 - r.
 Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative.
 This type of processing is particularly suited for enhancing white or gray detail
embedded in dark regions of an image, especially when the black areas are dominant in
size.
1.2 Log Transformations
The general form of the log transformation shown in Fig. 3.3 is

s = c log(1 + r)    (3.2-2)
where c is a constant, and it is assumed that r ≥ 0.


 The shape of the log curve in Fig. 3.3 shows that this transformation maps a narrow
range of low gray-level values in the input image into a wider range of output levels.
 The opposite is true of higher values of input levels.
 We would use a transformation of this type to expand the values of dark pixels in an
image while compressing the higher-level values.
 The opposite is true of the inverse log transformation.

1.3 Power-Law Transformations
Power-law transformations have the basic form
s = c r^γ    (3.2-3)
where c and γ are positive constants.
 Sometimes Eq. (3.2-3) is written as s = c (r + ε)^γ to account for an offset (that
is, a measurable output when the input is zero). However, offsets typically are an issue
of display calibration and as a result they are normally ignored in Eq. (3.2-3).

 Plots of s versus r for various values of γ are shown in Fig. 3.6. As in the case of the log
transformation, power-law curves with fractional values of γ map a narrow range of
dark input values into a wider range of output values, with the opposite being true for
higher values of input levels.

 Unlike the log function, however, we notice here a family of possible transformation
curves obtained simply by varying γ.
 A variety of devices used for image capture, printing, and display respond according to
a power law.
 By convention, the exponent in the power-law equation is referred to as gamma [hence
our use of this symbol in Eq. (3.2-3)].


The process used to correct these power-law response phenomena is called gamma
correction.
 For example, cathode ray tube (CRT) devices have an intensity-to-voltage response that
is a power function, with exponents varying from approximately 1.8 to 2.5.
 As expected, the output of the monitor appears darker than the input, as shown in Fig.
3.7(b).
 Gamma correction in this case is straightforward. All we need to do is preprocess the
input image before inputting it into the monitor by performing the transformation s = r^(1/γ).
 The result is shown in Fig. 3.7(c).
 When input into the same monitor, this gamma-corrected input produces an output that
is close in appearance to the original image, as shown in Fig. 3.7(d).
 Gamma correction is important if displaying an image accurately on a computer
screen is of concern.
 Images that are not corrected properly can look either bleached out, or, what is more
likely, too dark.
 Trying to reproduce colors accurately also requires some knowledge of gamma
correction because varying the value of gamma correction changes not only the
brightness, but also the ratios of red to green to blue.
 Gamma correction has become increasingly important in the past few years, as use of
digital images for commercial purposes over the Internet has increased.
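Gamma correction is just the power-law transformation applied to normalized gray levels; a sketch with illustrative pixel values:

```python
import numpy as np

# Power-law transformation s = c * r^gamma applied to a normalized 8-bit image
def gamma_correct(img, gamma, c=1.0, L=256):
    r = img / (L - 1)                      # normalize gray levels to [0, 1]
    s = c * r ** gamma
    return np.round(s * (L - 1)).astype(int)

dark = np.array([16, 64, 128, 255])           # hypothetical pixel values
brightened = gamma_correct(dark, gamma=0.5)   # gamma < 1 expands dark values
```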

1.4. Piecewise-Linear Transformation Functions

 The principal advantage of piecewise-linear functions over the types of functions we
have discussed thus far is that the form of piecewise functions can be arbitrarily complex.
 The principal disadvantage of piecewise-linear functions is that their specification
requires considerably more user input.

(i) Contrast stretching(APR/MAY-19)


 One of the simplest piecewise linear functions is a contrast-stretching transformation.
 Low-contrast images can result from poor illumination, lack of dynamic range in the
imaging sensor, or even a wrong setting of a lens aperture during image acquisition.
 The idea behind contrast stretching is to increase the dynamic range of the gray levels in
the image being processed.


(ii) Gray-level slicing(APR/MAY-19)


Two basic themes
 One approach is to display a high value for all gray levels in the range of interest and a
low value for all other gray levels.
 The second approach brightens the desired range of gray levels but preserves the
background and gray-level tonalities in the image.

(iii) Bit-plane slicing(APR/MAY-19)


Separating a digital image into its bit planes is useful for analyzing the relative importance
played by each bit of the image, a process that aids in determining the adequacy of the number
of bits used to quantize each pixel.

2. What is histogram? Explain histogram processing in detail? (Nov- 2012) (or) Write the
salient features of image histogram. What do you infer? (Nov 2014)(Nov/Dec 16) [OR]
What is histogram equalization? Discuss in details about the procedure involved in
histogram matching[APR/MAY-18]

Definition: Histogram process: the histogram of an image is a plot or graph drawn between
gray level values (0-255) in the x axis and the number of pixels having the corresponding grey
levels in the y axis.

The histogram of a digital image with gray levels ranging from 0 to L-1 is represented as,
h(rk) = nk
rk -- the kth gray level
nk -- the number of pixels in the image having gray level rk

Histogram normalization:
It is done by dividing each of the histogram value by the total number of pixels in the image.
It is given by,
P ( rk ) = nk /n
P ( rk )= Probability of occurrence of gray level rk

Advantages of histogram:
i. It is used in image processing applications such as image compression and segmentation.
ii. Histograms are simple to calculate in software and are also easy to implement in hardware.


Histogram processing in images:

[Figure: histogram plots for four image types - 1. dark image, 2. bright image,
3. low contrast, 4. high contrast.]

(i) Histogram equalization: (Nov/Dec 16)


This technique is used to obtain a uniform histogram. Let r be a continuous variable
representing the gray levels of the input image, with r = 0 representing black and r = 1
representing white.
The histogram equalization is given by,
S = T(r); 0 ≤ r ≤ 1
which produces the output level S for every pixel level r in the original image.

T(r) satisfies the following two conditions:


i. T(r) is single valued and monotonically increasing in the interval (0, 1)
ii. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1

If it is not monotonically increased a section of intensity ranges will be inverted.


ps(s) = 1; 0 ≤ s ≤ 1

 This states that ps(s) is a uniform PDF because it has value 1 over the interval (0, 1)
and value 0 outside the interval (0, 1).
 Thus ps(s) is always uniform, independent of pr(r).
The processed output image is obtained by mapping each pixel with gray level rk to the
corresponding level sk. This transformation mapping is called histogram equalization or
linearization.
It is interesting to note that even when the histograms of a set of images are all
different, their histogram equalized versions can look visually very similar, because
the differences between the images lie simply in contrast rather than in content.
In histogram equalization the content of the image stays the same; only the
contrast increases.

Salient features:
 Image quality assessment.
 Manipulating the contrast and brightness of an image.
 The histogram for a good image will have a flat profile or distribution of pixels, that is,
the pixel count is roughly the same for all intensities.
 Easy mapping function of pixels that preserves the local structure of the image.
 The quality of the image is controlled indirectly by controlling its histogram, by
normalizing it to a flat profile.

The histogram transformation spreads the histogram to cover the entire dynamic range
instead of changing the shape of the histogram. This operation is known as image scaling:
S = ((Smax - Smin)/(rmax - rmin)) (r - rmin) + Smin
Smax & Smin -- maximum and minimum values of image pixels
rmax & rmin -- maximum and minimum grey scale values in the original image. This
transformation improves contrast.

Histogram sliding:
This operation makes an image either darker or lighter but retains the relationship between the
grey level values. S = Slide = r + offset


Offset - the extent by which the histogram is slid.

Histogram equalization:
It tries to flatten the histogram to create a better quality image. It treats an image as a
probability distribution.

(ii) Histogram matching or Histogram specification (April-2011):

 For automatic enhancement, a uniform histogram is good. If we want to specify the shape


of the enhancement, a uniform histogram is not good.
 The method of specifying the histogram or matching the histogram for a particular image is
called "histogram matching" or "histogram specification". Histogram specification
technique:
We know that

pr(r) is the probability of input gray level r. We can write the output gray level z using the
following:
G(z) = S = T(r)
z = G⁻¹(s)

The image with the specified probability density function can be obtained by the following
procedure:
(i). Obtain the transformation function T(r).
(ii). Obtain the inverse transformation function G⁻¹(s).
(iii). Finally obtain the output image by applying G⁻¹ to all the pixels in the input image.
The discrete version of z is expressed as zk = G⁻¹(sk).

Implementation of Histogram matching:


Let us consider the gray levels rk, sk, zk where k is the location of the element in the array
and r, s, z are the values at that location.
E.g. in case of an 8-bit image the number of gray levels is given by L = 2^8 = 256
(levels 0-255).

[Plot: transformation s = T(r), mapping rk on the r-axis to sk on the s-axis.]


The first gray level r1 in the image maps to s1, the second gray level r2 maps to s2, and
likewise the kth gray level rk maps to sk.
E.g., in the mapping of r to s, pixels with value 128 map to position 127 in s; the
output image is then the histogram equalized image.

Histogram matching diagram:

[Plot: transformation G(z), mapping zk on the z-axis to values in (0, 1).]

For any zp, this transformation gives the corresponding value vp = sk.

Procedure for histogram matching is:-

i. Obtain the histogram for the given image.


ii. Obtain the transformation function G.
For each pixel in the original image, the value rk maps to sk, and applying G⁻¹(s) maps sk
to zk. This is known as the "histogram matching process".

(iii) Local histogram processing:


 The procedure is to define a square or rectangular neighborhood and move the center of
this area from pixel to pixel.
 At each location, the histogram of the points in the neighborhood is computed and
either a histogram equalization or histogram specification transformation function is
obtained.
 This function is finally used to map the gray level of the pixel centered in the
neighborhood.
 The center of the neighborhood region is then moved to an adjacent pixel location and
the procedure is repeated.

3. Explain spatial filtering in image enhancement? (OR) Explain clearly how spatial
integration and differentiation can be used for image enhancement in spatial domain.
(June-2010)(A/M 2021)
(i). Image averaging:
The original image f(x, y) is added with noise η(x, y) to form a noisy image. It is given by,
g(x, y) = f(x, y) + η(x, y)

The noise is always uncorrelated with the image. The objective is to reduce the noise
content by averaging a set of noisy images. The average of K different noisy images is given by,
ḡ(x, y) = (1/K) Σ (i = 1 to K) gi(x, y)
(ii). Spatial filtering:


 Processing each pixel of an image together with a small neighbourhood subimage is called “spatial filtering”.
 The subimage is also called a filter mask, kernel, template or window.

For example, consider a 3x3 neighbourhood of the image centred at (x, y):

f (x-1,y-1) f (x-1,y) f (x-1,y+1)

f (x,y-1) f (x,y) f (x,y+1)

f (x+1,y-1) f (x+1,y) f (x+1,y+1)

and the corresponding 3x3 filter mask of coefficients:

w(-1,-1) w(-1,0) w(-1,1)

w(0,-1) w(0,0) w(0,1)

w(1,-1) w(1,0) w(1,1)

 The values of the subimage are the coefficients of the mask; the mask is much smaller than the image itself. Filtering consists of moving the filter mask from point to point over the image.
 At each point the response of the filter is computed. For a linear filter the response is given by the sum of products of the filter coefficients and the corresponding image pixels:

w(-1,-1) f(x-1,y-1) + w(-1,0) f(x-1,y) + w(-1,+1) f(x-1,y+1) + w(0,-1) f(x,y-1) + w(0,0) f(x,y)
+ w(0,1) f(x,y+1) + w(1,-1) f(x+1,y-1) + w(1,0) f(x+1,y) + w(1,1) f(x+1,y+1)
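The sum-of-products response above can be computed for every interior pixel. A small sketch (NumPy assumed; the 4x4 test image and the box mask are illustrative):

```python
import numpy as np

def correlate3x3(img, w):
    """Slide a 3x3 mask over the image; at each interior point the
    response is the sum of products of the mask coefficients and the
    pixels under the mask (no padding, so the output shrinks by 2)."""
    M, N = img.shape
    out = np.zeros((M - 2, N - 2))
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            out[x - 1, y - 1] = np.sum(w * img[x - 1:x + 2, y - 1:y + 2])
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((3, 3)) / 9.0             # 3x3 averaging mask
print(correlate3x3(img, box))           # [[5. 6.] [9. 10.]]
```

Each output value equals the mean of the 3x3 neighbourhood around the corresponding interior pixel, which is exactly the linear response defined above with all coefficients equal to 1/9.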
Types of filtering:

1. Linear filtering:
Linear spatial filtering corresponds to the frequency-domain concept of convolution (convolving the mask with the image).

Condition for linear spatial filtering:


 If a square n×n mask is placed so that its centre is near the border of the image, part of the mask falls outside the image.
 To avoid this, the centre of the mask must stay at least (n-1)/2 pixels away from the border of the image.
 Under this restriction the filtered image is smaller than the original image.
 To obtain a filtered image of the same size as the original, the approach known as padding (adding rows and columns, e.g. of zeros) is used.

2. Non linear filtering:


Non-linear filtering includes, for example, computing the median gray level of the neighbourhood in which the mask is located.

Applications:
1. Smoothing
2. Sharpening

4. Explain in detail about the image smoothing filters in the spatial domain?(June-2010)
Smoothing filters: or How do you perform directional smoothing in images? Why it is
required: (Nov. 2014) (May/June 17)(NOV/DEC-18)
The averaging of an image is also known as smoothing. Smoothing filters are used for blurring and for noise reduction. The process yields an image with reduced sharp transitions in gray levels.

(i) Smoothing by linear filters:


Smoothing filters are also known as averaging filters, because smoothing is done by replacing the value of every pixel by the average of the gray levels in the neighbourhood defined by the mask.

Filter types:
1. Box filter
2. Weighted average filter

1. Box filter: A spatial averaging filter in which all coefficients are equal is called a “box filter”.
Let us consider 3x3 smoothing filters.
1 1 1
1 1 1
1 1 1

In the above filter all coefficients are 1, and the sum of the coefficients is 9. The general weighted average of the gray levels over an m-coefficient mask is

R = (Σ_{i=1}^{m} w_i z_i) / (Σ_{i=1}^{m} w_i)

Where, m — total number of coefficients in the mask, w_i — mask coefficients, z_i — gray levels of the image pixels under the mask. For the 3x3 box filter this reduces to the plain average (1/9) Σ z_i, i.e. the normalized mask:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

2. Weighted average filter: The pixel at the centre of the mask is given a higher coefficient than the other pixels, so more importance is attached to the centre pixel.

Consider 3x3 images:


1 2 1
2 4 2
1 2 1
 The other pixels are weighted as a function of their distance from the centre pixel.
 The diagonal neighbours are farther from the centre than the orthogonal neighbours and therefore receive smaller weights. Normalizing by the sum of the coefficients (16), the weighted-average mask becomes:
1/16 1/8 1/16
1/8 1/4 1/8
1/16 1/8 1/16

(ii) Non – linear spatial filter (or) Order statistics filters:


 These are also known as non-linear spatial filters. Their response is based on ranking (ordering) the pixels contained in the image area covered by the mask and replacing the value of the centre pixel with the value determined by the ranking result. The best-known example of a non-linear spatial filter is the median filter.
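A minimal median-filter sketch (NumPy assumed; border pixels are left unchanged for simplicity) shows how an isolated impulse is removed while a flat region is preserved:

```python
import numpy as np

def median3x3(img):
    """Order-statistics filtering: replace each interior pixel by the
    median of its 3x3 neighbourhood (effective against impulse noise)."""
    out = img.astype(float).copy()
    M, N = img.shape
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            out[x, y] = np.median(img[x - 1:x + 2, y - 1:y + 2])
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                       # a single salt (impulse) pixel
filtered = median3x3(img)
print(filtered[2, 2])                   # 10.0 -> impulse removed
```

Because the impulse is a minority in every neighbourhood it appears in, the median ignores it; a linear averaging filter would instead smear it across the neighbourhood.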


Directional smoothing:
 To protect edges from blurring while smoothing, a directional averaging filter can be used; this is the idea behind directional smoothing.

Averaging filters are isotropic, since their effect is the same in all directions. In image processing applications it may sometimes be necessary to select only features in particular directions, such as horizontal, vertical or diagonal edges. Such filters are called anisotropic filters.

Directional smoothing is useful in reducing the blurring of edges caused by excessive smoothing. The procedure is:
1. Calculate the spatial average along several directions.
2. Replace the centre pixel (a convolution step) using the directional average closest to its original value, so that averaging is performed along edges rather than across them.

The mask is then moved and the process repeated until the entire image has been processed. This algorithm retains most of the important visual information. Directional smoothing is one type of spatial filtering.

5. Explain Sharpening of spatial filters(Nov/Dec 16)(NOV/DEC-18)

The principle objective of sharpening is to


 highlight fine detail in an image
 enhance detail that has been blurred, either in error or as a natural effect
 Since averaging is analogous to integration, it is logical to conclude that sharpening could be accomplished by spatial differentiation.
 Thus, image differentiation enhances edges and other discontinuities (such as noise) and de-emphasizes areas with slowly varying gray-level values.

Sharpening operators:
1. The Laplacian operator, based on second-order derivatives
2. The gradient operator, based on first-order derivatives


5.1 Image enhancement using second-order derivatives: the Laplacian filter


Basic steps:
i) Define a discrete formulation of the second order derivatives
ii) Construct a filter mask based on the defined formulation.
5.1.1 Second order derivatives

Requirements:
i) Must be zero in flat areas
ii) Must be non zero at the onset and end of a gray level step or ramp
iii) Must be zero along ramps of constant slope.

A second-order derivative can be defined as the difference

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
5.1.2 Development of the method

Isotropic filters
Isotropic filters are rotation invariant, in the sense that rotating the image and then applying the filter gives the same result as applying the filter to the image first and then rotating the result.

The Laplacian:
The simplest isotropic derivative operator is the Laplacian, which, for a function (image) f(x, y) of two variables, is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

Digital implementation:
In discrete form the two partial second derivatives are

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

Summing these two components gives the digital Laplacian:

∇²f = [f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1)] - 4f(x, y)

Thus, an enhanced image g(x, y) is obtained by

g(x, y) = f(x, y) - ∇²f(x, y)

(subtraction is used because the digital Laplacian above has a negative centre coefficient, -4).

Laplacian mask:
The coefficients of the single composite mask follow from

g(x, y) = 5f(x, y) - [f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1)]

which corresponds to the mask

 0  -1   0
-1   5  -1
 0  -1   0

5.1.3 Uses of laplacian:

Adding the image to the Laplacian restores the overall gray-level variations in the image, with the Laplacian increasing the contrast at the locations of gray-level discontinuities.
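A sketch of Laplacian sharpening on a vertical step edge (NumPy assumed; borders are left unfiltered, and in practice the output would be clipped back to the valid gray range):

```python
import numpy as np

def laplacian_sharpen(img):
    """g = f - lap(f), i.e. the composite mask [[0,-1,0],[-1,5,-1],[0,-1,0]]
    applied at every interior pixel."""
    M, N = img.shape
    out = img.astype(float).copy()
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            lap = (img[x + 1, y] + img[x - 1, y] + img[x, y + 1]
                   + img[x, y - 1] - 4.0 * img[x, y])
            out[x, y] = img[x, y] - lap
    return out

img = np.full((5, 5), 10.0)
img[:, 2:] = 50.0                       # vertical step edge
sharp = laplacian_sharpen(img)
print(sharp[2, 1], sharp[2, 2])         # -30.0 90.0
```

The undershoot (-30) and overshoot (90) on either side of the step are exactly the contrast increase at a gray-level discontinuity described above.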


5.1.4 Unsharp masking:

A process used for many years in the publishing industry to sharpen images consists of subtracting a blurred version of an image from the image itself. This process, called unsharp masking, is expressed as

fs(x, y) = f(x, y) - f̄(x, y)

where f̄(x, y) denotes a blurred version of f(x, y). The origin of unsharp masking is darkroom photography.

5.2 Unsharp masking and high-boost filtering

A slight further generalization of unsharp masking is called high-boost filtering. A high-boost filtered image, fhb, is defined at any point (x, y) as

fhb(x, y) = A·f(x, y) - f̄(x, y),  with A ≥ 1

Adding and subtracting f(x, y):

fhb(x, y) = (A - 1)·f(x, y) + f(x, y) - f̄(x, y) = (A - 1)·f(x, y) + fs(x, y)


The 3×3 mask used for Laplacian-based high-boost filtering is

 0   -1    0
-1  A+4   -1
 0   -1    0

 When A=1, high-boost filtering becomes “standard” Laplacian sharpening.


 As the value of A increases past 1, the contribution of the sharpening process becomes
less and less important.
 Eventually, if A is large enough, the high-boost image will be approximately equal to
the original image multiplied by a constant.
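A sketch of high-boost filtering using fhb = A·f - f̄ (NumPy assumed; here f̄ is a 3x3 box blur computed only at interior pixels, an illustrative choice):

```python
import numpy as np

def high_boost(img, A=1.0):
    """f_hb = A*f - f_blur, where f_blur is a 3x3 box average
    (border pixels of the blur are simply copied from the input)."""
    M, N = img.shape
    blur = img.astype(float).copy()
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            blur[x, y] = img[x - 1:x + 2, y - 1:y + 2].mean()
    return A * img - blur

img = np.full((5, 5), 10.0)
img[2, 2] = 20.0                        # small bright detail
hb1 = high_boost(img, 1.0)              # A=1: plain highpass response
hb2 = high_boost(img, 2.0)              # A=2: detail boosted above 20
print(hb2[2, 2] > img[2, 2])
```

With A = 1 only the highpass (detail) component survives; raising A adds back a multiple of the original image, so the bright detail ends up boosted above its original value.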

5.3 Use of First Derivatives for Enhancement—the Gradient(Nov/Dec 16)

First derivatives in image processing are implemented using the magnitude of the gradient. For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

The magnitude of this vector, commonly referred to simply as the gradient, is

mag(∇f) = [Gx² + Gy²]^(1/2) ≈ |Gx| + |Gy|

5.3.1 Implementation of the method:

The simplest approximations to a first-order derivative that satisfy the required conditions are Gx = z8 - z5 and Gy = z6 - z5, where z1…z9 denote the pixels of a 3×3 region with z5 at the centre.

Two important operators to compute the gradient.


1. Roberts cross-gradient operators.
2. Sobel operators

1. Roberts cross-gradient operators

These use the cross differences Gx = z9 - z5 and Gy = z8 - z6, implemented with the 2×2 masks

-1  0       0  -1
 0  1       1   0

2. Sobel operators

These use 3×3 masks, which also provide a smoothing effect:

Gx: -1 -2 -1     Gy: -1  0  1
     0  0  0         -2  0  2
     1  2  1         -1  0  1
6. Explain the operation of SMOOTHING BY FREQUENCY - DOMAIN FILTERS.


(Nov/Dec 16)(May/June 17)(NOV/DEC-18)(APR/MAY-19)

 Smoothing in frequency domain is the process of attenuating a specified range of high-


frequency components in the transform of a given image. It is also known as blurring.


Need for Smoothing:

Sharp transitions in the gray levels of an image, such as edges and noise, are present as high-frequency components in its Fourier transform. Smoothing is needed to remove these unwanted contents for some applications.
Concept
The basic process of frequency domain filtering is expressed as, G (u,v) = H(u,v).F(u,v)

Where F(u,v)- Fourier transform of the input image.


H(u,v) - Filter transfer function.
Here, the main aim is to select a filter function H (u,v) which is suitable to attenuate the
high-frequency components and results in G(u,v).

Filters Used
Lowpass filters are used for blurring in frequency domain. Three types of lowpass filters
are given importance. They are,
 Ideal Lowpass Filters (ILPF)
 Butterworth Lowpass Filters (B LPF)
 Gaussian Lowpass Filters (GLPF)

These filters cover the entire range of filter functions from very sharp to very smooth.

6.1 Ideal Lowpass Filter (ILPF)


Ideal lowpass filter is the simplest lowpass filter. It "cuts off' all the high frequency
components of the Fourier transform which are located at a distance greater than a
specified distance Do from the origin of the centered transform.

Transfer Function

The transfer function of the two-dimensional ILPF is given by

H(u, v) = 1  if D(u, v) ≤ D0
H(u, v) = 0  if D(u, v) > D0

Where, D0 — a specified positive quantity (the cutoff distance)

D(u, v) — distance between the point (u, v) and the origin of the frequency rectangle.


Distance, D(u,v):

The distance from any point (u, v) to the center of the frequency rectangle is given by

D(u, v) = √(u² + v²)

If the image size is M x N, the center of the frequency rectangle is at (u, v) = (M/2, N/2).
Now the distance from any point (u, v) to the center (origin) of the Fourier transform is

D(u, v) = √[(u - M/2)² + (v - N/2)²]

Cutoff Frequency
The cutoff frequency of ILPF is defined as the point at which the transition between the
filter function H(u, v) = 1 and H(u, v)=0 takes place.

Image Power
In three dimensional view, the function of the filter is to pass all the frequencies which
are inside a circle of radius Do and attenuate all the frequencies outside the circle.
Here, to set up standard cutoff frequencies, circles enclosing a specified fraction of the total image power PT can be used. The total image power is found by summing over all frequencies:

PT = Σ_u Σ_v P(u, v)

Where, P(u, v) = |F(u, v)|² = R²(u, v) + I²(u, v)
R, I — real and imaginary parts of F(u, v)

Also, a circle with radius r and origin at the center of the frequency rectangle encloses α percent of the total power, where

α = 100 · [Σ_u Σ_v P(u, v)] / PT

with the summation taken over the points (u, v) inside the circle.
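A sketch of ideal lowpass filtering in the frequency domain (NumPy's FFT assumed; the centred transform is obtained with fftshift rather than the (-1)^(x+y) multiplication described later, and the test image is illustrative):

```python
import numpy as np

def ideal_lowpass(img, D0):
    """Zero every DFT component with D(u,v) > D0, measured from the
    centre of the (shifted) frequency rectangle, then invert."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = (D <= D0).astype(float)                     # ILPF transfer function
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                               # bright square
out = ideal_lowpass(img, 8.0)
print(round(out.mean(), 4), out.var() < img.var())  # DC kept, detail removed
```

The mean (DC component) is preserved because H = 1 at the centre, while the variance drops since the high-frequency components carrying the sharp edges have been cut off; the residual oscillations near the edges are the ringing discussed below.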

Ringing Effect:

Ringing is an unwanted effect caused by the blurring of ILPF. The basic model of
blurring in frequency domain is,


G(u, v) = H(u, v) · F(u, v)

Using the convolution theorem, this is represented in the spatial domain as

g(x, y) = h(x, y) * f(x, y)   (2.72)

Where f(x, y) — original image
g(x, y) — blurred image
h(x, y) — spatial filter function

Spatial Filter Function, h(x, y):

h(x, y) can be obtained from H(u, v) by the following steps:


i) Multiply H(u, v) by (-1)u+v
ii) Take inverse DFT of the result in (i)
iii) Obtain the real part
iv) Multiply the result by (-1)x+y

Here, the nature of h( x, y) affects both blurring and ringing effects. Because,
 the center component of h(x, y) is responsible for blurring and
 the concentric components of h(x, y) are responsible for the ringing characteristics of
ILPF.

6.2 Butterworth Lowpass Filter (BLPF)

In the Butterworth lowpass filter there is no sharp cutoff that decides exactly which frequencies are passed and which are filtered. As the amount of high-frequency content removed decreases, the image becomes finer in texture.

Transfer Function:
The transfer function of a BLPF of order n, with cutoff frequency at a distance D0 from the origin, is given by

H(u, v) = 1 / [1 + (D(u, v)/D0)^(2n)]

At D(u, v) = D0, H(u, v) = 0.5; this implies that at the cutoff, frequencies are attenuated to 50% of the maximum value.


Ringing Effect:
In BLPF, when the filter order 'n' increases, the ringing also increases. Therefore, ringing
effect should be taken into account for higher order filters and it can be ignored for lower
orders. The effect of ringing is shown in fig. 2.22.

6.3 Gaussian Low pass Filter (GLPF)


 Does not have sharp discontinuity
 No clear cut off between passed and filtered frequencies
 H(u,v)= 0.607 when D (u,v) = Do
The two-dimensional Gaussian lowpass filter is given by

H(u, v) = e^(-D²(u, v) / 2D0²)

where D0 is the cutoff frequency and D(u, v) is the distance from the center of the frequency rectangle.
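The cutoff behaviour of the two transfer functions can be checked directly (NumPy assumed; D0 = 30 is an arbitrary illustrative value): the BLPF passes exactly half the maximum at D = D0, the GLPF about 0.607 of it.

```python
import numpy as np

def H_butterworth(D, D0, n):
    """BLPF of order n: 1 / (1 + (D/D0)^(2n))."""
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

def H_gaussian(D, D0):
    """GLPF: exp(-D^2 / (2 D0^2))."""
    return np.exp(-D ** 2 / (2.0 * D0 ** 2))

D0 = 30.0
print(H_butterworth(D0, D0, n=2))       # 0.5 at the cutoff
print(round(H_gaussian(D0, D0), 3))     # ~0.607 (= e^-0.5) at the cutoff
```

Neither function has the sharp discontinuity of the ILPF, which is why both produce little or no ringing.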


7. Sketch the perspective plot of ideal high pass, Butterworth and Gaussian filter and
how these filters are used for blurring and sharpening the images in frequency domain.
(OR) Explain Sharpening of frequency domain filter(NOV/DEC-18)(A/M-19, 21)

The three important filters used for sharpening are,


• Ideal Highpass Filters (IHPF)
• Butterworth Highpass Filters (BHPF)
• Gaussian Highpass Filter (GHPF)

7.1 Ideal Highpass Filter (IHPF)

Ideal highpass filter is the opposite of ideal lowpass filter

Transfer Function:

The transfer function of the two-dimensional IHPF is given by

H(u, v) = 0  if D(u, v) ≤ D0
H(u, v) = 1  if D(u, v) > D0

Where, D0 — cutoff distance measured from the center of the frequency rectangle
D(u, v) — distance from any point (u, v) to the center of the frequency rectangle.


This transfer function forces all the frequencies inside a circle with radius D0 to zero and passes all the frequencies outside the circle without attenuation.

Representation of IHPF:

The cross section and spatial representation of an ideal highpass filter are shown in fig. 2.23.

Ringing effect:
Ringing results in images with distorted and thickened object boundaries in sharpening
process.
In ideal highpass filtering, when the cutoff distance Do increases, the effect of ringing
reduces. Anyhow, it should be considered for low Do values.

7.2 Butterworth Highpass Filter (BHPF)

The behavior of Butterworth highpass filters is smoother than that of ideal highpass filters. This means that the images produced by a BHPF are better than those produced by an IHPF.

Transfer Function: The transfer function of a BHPF of order n is given by

H(u, v) = 1 / [1 + (D0/D(u, v))^(2n)]


Representation of BHPF:
The cross section and spatial representation of BHPF are shown in fig. 2.34

Ringing Effect:
Even for small values of cutoff frequency Do with small order 'n' of the filter, the ringing
effect is very less in Butterworth highpass filtering. Therefore, boundaries are much less
distorted than in IHPF.

7.3 Gaussian Highpass Filter (GHPF)


The results produced by a Gaussian Highpass Filter are smoother than the results
produced by IHPF and BHPF.
Transfer Function:
The transfer function of the GHPF with cutoff frequency at a distance D0 from the origin is expressed as

H(u, v) = 1 - e^(-D²(u, v) / 2D0²)


Representation of GHPF:

Thus, smaller objects and thin bars look cleaner in the result of GHPF than other filter results.
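Each highpass transfer function is the complement of the corresponding lowpass one, H_hp(u, v) = 1 - H_lp(u, v). A quick sketch (NumPy assumed; the sample distances and D0 = 30 are illustrative):

```python
import numpy as np

D = np.array([0.0, 15.0, 30.0, 100.0])              # sample distances
D0 = 30.0

H_ilp = (D <= D0).astype(float)                     # ideal lowpass
H_ihp = 1.0 - H_ilp                                 # ideal highpass
H_ghp = 1.0 - np.exp(-D ** 2 / (2.0 * D0 ** 2))     # Gaussian highpass

print(H_ihp)                                        # [0. 0. 0. 1.]
print(round(float(H_ghp[2]), 3))                    # 1 - e^-0.5 ~ 0.393
```

The IHPF blocks everything up to and including D0 and passes everything beyond it, while the GHPF rises smoothly, which is why its results show the least ringing of the three.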

8. Illustrate the steps in Histogram Equalization of the image. [NOV 2013] (Nov/Dec 16)

Solution: Maximum value = 5.

Binary value of 5 = 101 (needs 3 bits), so 2^3 = 8 gray levels (0 to 7).

Step 1: Find running sum of histogram values:


Gray level 0 1 2 3 4 5 6 7
No of Pixel 0 0 0 6 14 5 0 0
Running sum 0 0 0 6 20 25 25 25

Step 2: Divide the running sum in Step 1 by the total number of pixels (25).

Gray level    0     1     2     3     4      5      6      7
No of Pixel   0     0     0     6     14     5      0      0
Running sum   0     0     0     6     20     25     25     25
C4            0/25  0/25  0/25  6/25  20/25  25/25  25/25  25/25

Step 3.Multiply C4 by maximum gray level:


C5 0 0 0 ( 6/25)*7 (20/25)*7 7 7 7

Step 4. Round C5 to closest integer:


C6 0 0 0 2 6 7 7 7

Step 5.Mapping:
0 1 2 3 4 5 6 7
0 0 0 2 6 7 7 7

Answer:

------Histogram equalization----- >
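The five steps reduce to mapping gray level k to round((L-1) · cdf(k)); the worked example can be checked with a short sketch (NumPy assumed):

```python
import numpy as np

def equalize_map(hist, L):
    """Steps 1-4: running sum, normalise by the total pixel count,
    scale by the maximum gray level L-1, round to the nearest integer."""
    csum = np.cumsum(hist)                                 # Step 1
    return np.rint((L - 1) * csum / csum[-1]).astype(int)  # Steps 2-4

hist = [0, 0, 0, 6, 14, 5, 0, 0]        # histogram from the example
print(equalize_map(hist, 8))            # [0 0 0 2 6 7 7 7]
```

The result reproduces the C6 row of the solution, so Step 5's mapping table follows directly from this array.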

9. Describe histogram equalization .obtain histogram equalization for the following 8 bit
image segment of size 5x5 .write the interference on image segment before and after
equalization. [May 2015]
200 200 200 180 240
𝖥180 180 180 180 190 1
⌈ ⌉
190 190 190 190 180
⌈ ⌉
⌈ 190 200 220 220 240 ⌉
⌈ 230 180 190 210 230 ⌉
⌈ ⌉
Solution: Maximum value = 240. Binary value of 240 = 11110000 (needs 8 bits), so 2^8 = 256 gray levels (0 to 255).
Step 1: Find running sum of histogram values:
Gray level 0…… 180 190 200 210 220 230 240…. 255
No of Pixel 0 …… 7 7 4 1 2 2 2…… 0
Running sum 0…. 7 14 18 19 21 23 25…… 25


Step 2: Divide the running sum in step1 by total no.of. pixels.


Gray level 0…… 180 190 200 210 220 230 240…. 255
No of Pixel 0 …… 7 7 4 1 2 2 2…… 0
Running sum 0…. 7 14 18 19 21 23 25… 25
0/25 7/25 14/25 18/25 19/25 21/25 23/25 25/25... 25/25

Step 3: Multiply C4 by the maximum gray level (255):

C5: (0/25)·255 (7/25)·255 (14/25)·255 (18/25)·255 (19/25)·255 (21/25)·255 (23/25)·255 (25/25)·255

Step 4: Round C5 to the closest integer:

Gray level  0…  180  190  200  210  220  230  240…  255
C6          0   71   143  184  194  214  235  255   255


200 200 200 180 240        184 184 184  71 255
180 180 180 180 190         71  71  71  71 143
190 190 190 190 180   →    143 143 143 143  71
190 200 220 220 240        143 184 214 214 255
230 180 190 210 230        235  71 143 194 235
   (original image)           (equalized image)

10. Obtain Discrete Fourier Transform for the given vectors. Input image matrix [0
0;255 255][2×2] matrix. Also analyze how the Fourier transform is used if the image is
rotated or translated.[Apr/May 15].
Solution:
Find the 2×2 DFT kernel; then F(u, v) = kernel × input image × (kernel)ᵀ

F = [0.5   0.5] × [  0     0 ] × [0.5   0.5]
    [0.5  -0.5]   [255   255 ]   [0.5  -0.5]

  = [ 127.5   127.5] × [0.5   0.5]
    [-127.5  -127.5]   [0.5  -0.5]

  = [ 127.5   0]
    [-127.5   0]

Rotation and translation: translating the image, f(x - x0, y - y0), multiplies its Fourier transform by the phase factor e^(-j2π(ux0/M + vy0/N)), so the magnitude spectrum |F(u, v)| is unchanged; rotating the image by an angle θ0 rotates its Fourier transform by the same angle θ0.
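The hand computation can be checked against NumPy's FFT; `np.fft.fft2` omits the 1/(MN) factor carried by the 0.5-valued kernels, so we divide by the number of samples:

```python
import numpy as np

f = np.array([[0.0, 0.0],
              [255.0, 255.0]])

F = np.fft.fft2(f) / f.size             # match the 0.5-kernel scaling
print(F.real)                           # [[ 127.5  0.] [-127.5  0.]]
print(np.allclose(F.imag, 0.0))         # True: purely real for this input
```

The single nonzero off-centre coefficient reflects the one vertical step in the image; the columns are identical, so all horizontal-frequency terms vanish.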

11. If all the pixels in an image are shuffled, will there be any change in the histogram?
Justify your answer.(May/June 17)
If all the pixels in an image are shuffled, there will not be any change in the histogram of the image. A histogram records only the frequency of occurrence of each gray level, not the positions of the pixels. Consider two images, 1 and 2, where image 2 is obtained by shuffling the rows of image 1: their histograms are identical.
12 Why histogram equalization is consider as an “idempotent operation”?. Perform


histogram equalization of the image (NOV/DEC-17)

3 2 4 5 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 5 3 4 4

Histogram equalization is (approximately) idempotent because the cumulative distribution of an already-equalized image is nearly linear, so applying the equalization transformation a second time reproduces essentially the same mapping and leaves the image unchanged.

Solution: Maximum value = 5. Binary value of 5 = 101 (needs 3 bits), so 2^3 = 8 gray levels (0 to 7).

Step 1: Find running sum of histogram values:


Gray level 0 1 2 3 4 5 6 7
No of Pixel 0 0 1 8 9 7 0 0
Running sum 0 0 1 9 18 25 25 25

Step 2: Divide the running sum in step1 by total no.of. pixels.


1 Gray level 0 1 2 3 4 5 6 7
2 No of Pixel 0 0 1 8 9 7 0 0
3 Running sum 0 0 1 9 18 25 25 25

4 0 / 25 0/25 1/25 9 / 25 18 /25 25 /25 25/25 25 / 25

Step 3: Multiply C4 by the maximum gray level (7):

C5: (0/25)·7  (0/25)·7  (1/25)·7  (9/25)·7  (18/25)·7  (25/25)·7  (25/25)·7  (25/25)·7
  =  0         0         0.28      2.52      5.04       7          7          7

Step 4: Round C5 to the closest integer:

C6: 0 0 0 3 5 7 7 7

Step 5: Mapping:

0 1 2 3 4 5 6 7
0 0 0 3 5 7 7 7

Answer:

3 2 4 5 4        3 0 5 7 5
3 4 5 4 3        3 5 7 5 3
3 5 5 5 3   →    3 7 7 7 3
3 4 5 4 3        3 5 7 5 3
4 5 3 4 4        5 7 3 5 5
(original image)   (histogram-equalized image)

13. Perform histogram equalization of an image (APR/MAY-19)


Gray level 0 1 2 3 4 5 6 7
No of Pixel 6 8 11 12 3 5 15 6

SOLUTION:
Step 1: Find running sum of histogram values:
Gray level 0 1 2 3 4 5 6 7
No of Pixel 6 8 11 12 3 5 15 6
Running sum 6 14 25 37 40 45 60 66

Step 2: Divide the running sum in Step 1 by the total number of pixels (66).

Gray level    0     1      2      3      4      5      6      7
No of Pixel   6     8      11     12     3      5      15     6
Running sum   6     14     25     37     40     45     60     66
C4            6/66  14/66  25/66  37/66  40/66  45/66  60/66  66/66

Step 3: Multiply C4 by the maximum gray level (7):

C5: (6/66)·7  (14/66)·7  (25/66)·7  (37/66)·7  (40/66)·7  (45/66)·7  (60/66)·7  (66/66)·7

Step 4: Round C5 to the closest integer:

C6: 1 1 3 4 4 5 6 7

Step 5: Mapping:

0 1 2 3 4 5 6 7
1 1 3 4 4 5 6 7

14. Explain in detail about color image enhancement?(Nov-2011) (or)Explain any two
techniques for color image enhancement.(Nov 2014)(or) How color image is enhanced
and compare it with gray scale processing.(May 2015)

Color image enhancement:


Color image enhancement requires improvement of the color balance or color contrast of a color image. Enhancement of color images is a more difficult task owing to the added complexity of color perception.

Block diagram

The input color coordinates of each pixel are independently transformed into another set of color coordinates, and the image in each coordinate is enhanced by its own image enhancement algorithm.

The enhanced image coordinates T˚₁, T˚₂, T˚₃ are inverse transformed to R˚, G˚, B˚ for display.
Smoothing and sharpening are the two main techniques used for color image enhancement.

Gray-scale image smoothing can be viewed as a spatial filtering operation in which the coefficients of the filtering mask all have the same value. The average of the RGB component vectors over a neighbourhood Sxy containing K pixels is

c̄(x, y) = (1/K) Σ_{(s,t)∈Sxy} c(s, t)

which, written out per component, is

c̄(x, y) = [ (1/K) Σ_{(s,t)∈Sxy} R(s, t) ,
            (1/K) Σ_{(s,t)∈Sxy} G(s, t) ,
            (1/K) Σ_{(s,t)∈Sxy} B(s, t) ]ᵀ

*Smoothing by neighborhood averaging can be carried out on a per-color-plane basis. The result is the same as when the averaging is performed using the RGB color vectors.
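That equivalence is easy to verify numerically (NumPy assumed; one random 3x3 RGB neighbourhood used for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
rgb = rng.uniform(0.0, 255.0, size=(3, 3, 3))   # a 3x3 RGB neighbourhood

# Average of the RGB vectors over the neighbourhood...
vector_avg = rgb.reshape(-1, 3).mean(axis=0)
# ...equals averaging each colour plane independently.
per_plane = np.array([rgb[:, :, c].mean() for c in range(3)])

print(np.allclose(vector_avg, per_plane))       # True
```

Averaging is linear and acts on each component separately, which is exactly why per-plane and vector smoothing coincide (this does not hold for non-linear vector filters such as the vector median).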

*Image sharpening using the Laplacian is also used for color image enhancement. The Laplacian of a vector is defined as a vector whose components are the Laplacians of the individual scalar components:

∇²[c(x, y)] = [ ∇²R(x, y), ∇²G(x, y), ∇²B(x, y) ]ᵀ

15. Describe how homo morphic filtering is used to separate illumination and reflectance
component.(May 2015)(or)Explain Homomorphic filtering in detail? (Nov-2013, May-
2013, nov2011)

Homomorphic filtering:

Homomorphic filtering brings out minute details of the original image.

Block diagram of the homomorphic filter:

f(x, y) → ln → DFT → H(u, v) → IDFT → exp → g(x, y)

The image is separated into illumination and reflectance component.

H(u,v) filter function is multiplied and operates separately on the components.

The illumination component of an image is characterized by slow spatial variations, while the reflectance component varies abruptly, particularly at the junctions of dissimilar objects.

This characteristic leads to associating the low frequencies with illumination and the high frequencies with reflectance, which is used in enhancement of the image.

f(x, y) = i(x, y) · r(x, y)

i — illumination component

r — reflectance component

The Fourier transform of a product of two functions is not separable:

F{f(x, y)} ≠ F{i(x, y)} · F{r(x, y)}

Step 1:

Define a new function by taking the natural logarithm, which converts the product into a sum:

z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)

Step 2:

Compute the DFT of z(x, y):

F{z(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}

Z(u, v) = Fi(u, v) + Fr(u, v)

Step 3:

Multiply Z(u, v) by a filter function H(u, v):

S(u, v) = Z(u, v) · H(u, v) = Fi(u, v) H(u, v) + Fr(u, v) H(u, v)
Step 4:
Compute the inverse DFT. The spatial-domain result is obtained by taking the inverse transform of S(u, v):

s(x, y) = F⁻¹{S(u, v)} = F⁻¹{Fi(u, v) H(u, v)} + F⁻¹{Fr(u, v) H(u, v)} = i′(x, y) + r′(x, y)

Step 5:

Obtain the enhanced image by applying the exponential function:

g(x, y) = e^{s(x, y)} = e^{i′(x, y)} · e^{r′(x, y)} = i₀(x, y) · r₀(x, y)

where i₀(x, y) and r₀(x, y) are the illumination and reflectance components of the enhanced image.
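The five steps can be sketched end to end (NumPy assumed; the Gaussian-shaped H(u, v) with γL < 1 < γH, the parameter values, and the synthetic test image are illustrative assumptions, not values from the text):

```python
import numpy as np

def homomorphic(img, D0=15.0, gL=0.5, gH=2.0):
    """ln -> DFT -> H(u,v) -> IDFT -> exp.  H suppresses low
    frequencies (illumination) and boosts high ones (reflectance)."""
    M, N = img.shape
    z = np.log1p(img)                               # Step 1 (ln, avoiding ln 0)
    Z = np.fft.fftshift(np.fft.fft2(z))             # Step 2 (centred DFT)
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = gL + (gH - gL) * (1.0 - np.exp(-D2 / (2.0 * D0 ** 2)))
    S = Z * H                                       # Step 3 (filter)
    s = np.fft.ifft2(np.fft.ifftshift(S)).real      # Step 4 (IDFT)
    return np.expm1(s)                              # Step 5 (exp)

img = np.ones((32, 32)) * 50.0
img[:, 16:] *= 4.0                                  # uneven "illumination"
out = homomorphic(img)
print(out.shape, bool(np.all(np.isfinite(out))))
```

Because the multiplication H(u, v)·Z(u, v) happens in the log domain, the low-frequency illumination term is attenuated (γL < 1) and the high-frequency reflectance term amplified (γH > 1) simultaneously, giving the control over both components noted below.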


Advantages:

Good control over illumination & reflectance.
