Updated DIP UNIT2-2022
TWO MARKS
1. Specify the objective of image enhancement technique.
The objective of an enhancement technique is to process an image so that the result is more
suitable than the original image for a particular application.
EC8093 - DIGITAL IMAGE PROCESSING UNIT II - IMAGE ENHANCEMENT
7. What is thresholding?
Thresholding is an image enhancement technique that creates a binary image. All gray-level
values above a threshold T are mapped to (L − 1), and all gray-level values below the threshold
are mapped to 0.
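This mapping can be sketched in a few lines — a minimal illustration, assuming an 8-bit image so that L − 1 = 255:

```python
import numpy as np

def threshold(image, T, L=256):
    """Map gray levels above T to L-1 and all others to 0,
    producing a binary image."""
    return np.where(image > T, L - 1, 0).astype(np.uint8)

img = np.array([[10, 200],
                [90, 150]], dtype=np.uint8)
binary = threshold(img, T=100)   # pixels above 100 become 255
```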
Thus a processed (output) image is obtained by mapping each pixel with level rk in the input
image to a corresponding pixel with level sk in the output image.
(a) Arithmetic mean filters
o Box filter
o Weighted average filter
(b) Geometric mean filter
(c) Harmonic mean filter
(d) Contraharmonic mean filter
19. What is a nonlinear smoothing spatial filter or order statistics filter? What are its
types?
These are spatial filters whose response is based on ordering the pixels contained in the image
area encompassed by the filter.
Types of order statistics filters
• Median filter
• Max and Min filter
• Midpoint filter
• Alpha trimmed mean filter
Where,
the digital implementation of the two-dimensional Laplacian is obtained by summing
these two components.
Prewitt's Operator
Prewitt's operator is defined using a 3 x 3 mask and the digital approximation of the Prewitt's
operator is defined as,
Sobel’s operator
29. What are the applications of spatial enhancement filters? (or) Sharpening Filters?
a) Printing industry
b) Image based product inspection
c) Forensics
d) Microscopy
e) Surveillance etc.
Where f (x, y) refers to the original image, 𝑓̂(x,y) refers to the blurred version of f (x, y) and
fs(x, y) refers to the sharpened image obtained. Unsharp masking is used in the publishing
industry to sharpen images.
Where A ≥ 1, f(x, y) refers to the original image, 𝑓̂(x, y) refers to the blurred version of f(x, y),
and fhb(x, y) refers to the sharpened image obtained.
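As a sketch of these definitions (assuming a simple 3×3 box blur as the blurring stage, which is one possible choice; A = 1 reduces high-boost filtering to plain unsharp masking):

```python
import numpy as np

def box_blur3(f):
    """3x3 box blur with edge replication (one possible blur stage)."""
    p = np.pad(f.astype(float), 1, mode="edge")
    return sum(p[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def high_boost(f, A=1.5):
    """High-boost sharpening: f_hb = A*f - blurred(f)."""
    return A * f.astype(float) - box_blur3(f)

f = np.array([[10., 10., 10.],
              [10., 90., 10.],
              [10., 10., 10.]])
sharp = high_boost(f, A=1.0)   # A = 1: unsharp masking
```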
34. Give the filter function of ideal low pass filter and high pass filter?
The filter function of ideal low pass filter is given as,
Where D0 is the cutoff distance and D(u, v) is the distance from the point (u, v) to the
origin (center) of the frequency rectangle.
35. Give the filter function of Butterworth low pass filter and high pass filter.
A Butterworth Low Pass filter of order n is defined as,
36. Give the filter function of Gaussian low pass filter and high pass filter?
The filter function of Gaussian low pass filter is given by,
where D(u, v) is the distance from the point (u, v) to the origin (center) of the frequency
rectangle.
The two-dimensional DFT of an image f(x, y) of size M × N is given by
F(u, v) = Σx=0..M−1 Σy=0..N−1 f(x, y) e^(−j2π(ux/M + vy/N))
for u = 0, 1, 2, ..., M−1; v = 0, 1, 2, ..., N−1
If F(u, v) is given, f(x, y) can be obtained via the inverse DFT, given by the expression
f(x, y) = (1/MN) Σu=0..M−1 Σv=0..N−1 F(u, v) e^(j2π(ux/M + vy/N))
for x = 0, 1, 2, ..., M−1; y = 0, 1, 2, ..., N−1 ------------------> (4)
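This transform pair can be checked numerically; NumPy's fft2/ifft2 implement the same sums, with ifft2 supplying the 1/MN factor of the inverse:

```python
import numpy as np

# Round-trip check of the 2-D DFT pair: forward transform followed
# by the inverse transform should recover the original image.
rng = np.random.default_rng(0)
f = rng.random((4, 4))          # small test "image"
F = np.fft.fft2(f)              # forward 2-D DFT
f_back = np.fft.ifft2(F)        # inverse 2-D DFT (applies 1/MN)
round_trip_ok = np.allclose(f, f_back.real)
```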
45. State the two important properties of unitary transforms [NOV 2013]
i) Energy compaction – few transform coefficients have large magnitude.
ii) Energy conservation – preserve the norm of input vectors.
54. Write down the equation used to obtain the enhanced image using Laplacian filters?
55. Whether two different images can have the same histogram? Justify your
answer. (NOV/DEC-17)
Yes. The histogram records only how many pixels occur at each gray level, not where those
pixels occur. Rearranging the pixels of an image produces a different image whose histogram
is unchanged, so two different images can share the same histogram.
The method of specifying or matching the histogram for a particular image is called
"histogram matching" or "histogram specification".
56. For an eight-bit image, write the expression for obtaining the negative of the input
image. (NOV/DEC-17)
The negative of an image with gray levels in the range [0, L−1] is obtained by using the
negative transformation shown in Fig. 3.3, which is given by the expression
s = L − 1 − r.
Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative.
For an eight-bit image, L = 2⁸ = 256.
Therefore, the negative of the image is s = (256 − 1) − r = 255 − r.
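A minimal numerical check of s = 255 − r:

```python
import numpy as np

# Image negative for an 8-bit image: L = 2**8 = 256, so s = 255 - r.
r = np.array([[0, 128],
              [200, 255]], dtype=np.uint8)
s = 255 - r
```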
s = L − 1 − r
where r represents the value of a pixel before processing and s its value after processing.
59. Compute the image negative values for the following 3 × 3 grayscale 8-bit
image. (A/M 2021)
The identity function is the trivial case in which output intensities are identical to input
intensities. It is included in the graph only for completeness.
s = c log(1 + r)    (3.2-2)
where c is a constant, and it is assumed that r ≥ 0.
The shape of the log curve in Fig. 3.3 shows that this transformation maps a narrow
range of low gray-level values in the input image into a wider range of output levels.
The opposite is true of higher values of input levels.
We would use a transformation of this type to expand the values of dark pixels in an
image while compressing the higher-level values.
The opposite is true of the inverse log transformation.
1.3 Power-Law Transformations
Power-law transformations have the basic form
s = c r^γ    (3.2-3)
where c and γ are positive constants.
Sometimes Eq. (3.2-3) is written as s = c(r + ε)^γ to account for an offset (that
is, a measurable output when the input is zero). However, offsets typically are an issue
of display calibration and as a result they are normally ignored in Eq. (3.2-3).
Plots of s versus r for various values of γ are shown in Fig. 3.6. As in the case of the log
transformation, power-law curves with fractional values of γ map a narrow range of
dark input values into a wider range of output values, with the opposite being true for
higher values of input levels.
Unlike the log function, however, we notice here a family of possible transformation
curves obtained simply by varying γ.
A variety of devices used for image capture, printing, and display respond according to
a power law.
By convention, the exponent in the power-law equation is referred to as gamma [hence
our use of this symbol in Eq. (3.2-3)].
The process used to correct these power-law response phenomena is called gamma
correction.
For example, cathode ray tube (CRT) devices have an intensity-to-voltage response that
is a power function, with exponents varying from approximately 1.8 to 2.5.
As expected, the output of the monitor appears darker than the input, as shown in Fig.
3.7(b).
Gamma correction in this case is straightforward. All we need to do is preprocess the
input image before inputting it into the monitor by performing the transformation
s = r^(1/2.5) = r^0.4.
The result is shown in Fig. 3.7(c).
When input into the same monitor, this gamma-corrected input produces an output that
is close in appearance to the original image, as shown in Fig. 3.7(d).
Gamma correction is important if displaying an image accurately on a computer
screen is of concern.
Images that are not corrected properly can look either bleached out, or, what is more
likely, too dark.
Trying to reproduce colors accurately also requires some knowledge of gamma
correction because varying the value of gamma correction changes not only the
brightness, but also the ratios of red to green to blue.
Gamma correction has become increasingly important in the past few years, as use of
digital images for commercial purposes over the Internet has increased.
2. What is a histogram? Explain histogram processing in detail. (Nov-2012) (or) Write the
salient features of an image histogram. What do you infer? (Nov 2014)(Nov/Dec 16) [OR]
What is histogram equalization? Discuss in detail the procedure involved in
histogram matching. [APR/MAY-18]
Definition: The histogram of an image is a plot or graph drawn between the gray-level
values (0–255) on the x-axis and the number of pixels having the corresponding gray level
on the y-axis.
The histogram of a digital image with gray levels ranging from 0 to L−1 is represented as
h(rk) = nk
where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
Histogram normalization:
It is done by dividing each histogram value by the total number of pixels n in the image.
It is given by,
p(rk) = nk / n
where p(rk) is the probability of occurrence of gray level rk.
Advantages of the histogram:
i. It is used in image processing applications such as image compression and segmentation.
ii. Histograms are simple to calculate in software and also lend themselves to simple
hardware implementations.
Ps(s) = 1, 0 ≤ s ≤ 1    (3)
Equation (3) states that Ps(s) is a uniform PDF, because it has value 1 over the interval (0, 1)
and value 0 outside that interval.
Thus Ps(s) is always uniform, independent of Pr(r).
The processed output image is obtained by mapping each pixel with gray level rk through
this transformation. The transformation mapping is called histogram equalization or
linearization.
It is interesting to note that even when all the histograms are different, the
histogram-equalized images are visually very similar, because the differences between
the images lie in contrast, not in content.
In histogram equalization the content of the image stays the same; there is only an
increase in contrast.
Salient features:
• Image quality assessment.
• Manipulating the contrast and brightness of an image.
• The histogram of a good image has a flat profile or distribution of pixels; that is, the
pixel count is roughly the same for all intensities.
• Easy mapping of pixel values that preserves the local structure of the image.
• The quality of the image can be controlled indirectly by controlling its histogram, for
example by normalizing it to a flat profile.
Histogram stretching (image scaling):
The histogram transformation spreads the histogram to cover the entire dynamic range
instead of changing the shape of the histogram. This operation is known as image scaling,
with scaling factor
(Smax − Smin) / (rmax − rmin)
where Smax and Smin are the maximum and minimum values of the image pixels, and
rmax and rmin are the maximum and minimum gray-scale values in the original image.
This transformation improves contrast.
Histogram sliding:
This operation makes an image either darker or lighter but retains the relationship between
the gray-level values: s = r + offset.
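A small sketch of histogram sliding; the clipping to the valid range is an added practical detail, not part of the definition above:

```python
import numpy as np

def slide(image, offset, L=256):
    """Histogram sliding: add a constant offset to every pixel,
    clipping so values stay inside [0, L-1]."""
    return np.clip(image.astype(int) + offset, 0, L - 1).astype(np.uint8)

img = np.array([[10, 240],
                [100, 50]], dtype=np.uint8)
lighter = slide(img, 30)   # positive offset makes the image lighter
```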
Histogram equalization:
It tries to flatten the histogram to create a better-quality image. It treats an image as a
probability distribution.
Let Pr(r) be the probability of input gray level r. The output gray level z can be written
using the following relations:
G(z) = s = T(r)
z = G⁻¹(s)
The image with the specified probability density function can be obtained by the following
procedure:
(i) Obtain the transformation function T(r).
(ii) Obtain the inverse transformation function G⁻¹(s).
(iii) Obtain the output image by applying G⁻¹(T(r)) to all the pixels in the input image.
The discrete version of z is expressed as zk = G⁻¹(sk).
The first gray level r1 in the image maps to s1, the second gray level r2 maps to s2, and
likewise the kth gray level rk maps to sk.
For example, in the mapping of r to s, pixels with value 128 may end up at position 127
in s; the resulting output image is the histogram-equalized image.
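The discrete equalization mapping sk = round((L−1) · Σj≤k nj / n) can be sketched as:

```python
import numpy as np

def equalize(image, L=256):
    """Discrete histogram equalization:
    s_k = round((L-1) * cumulative histogram / total pixels)."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(hist) / image.size          # running sum / n
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[image]                        # apply the mapping

img = np.array([[52, 55],
                [61, 59]], dtype=np.uint8)
eq = equalize(img)
```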
For any zp, this transformation gives the corresponding value vp = sk.
3. Explain spatial filtering in image enhancement? (OR) Explain clearly how spatial
integration and differentiation can be used for image enhancement in spatial domain.
(June-2010)(A/M 2021)
(i) Image averaging:
The original image f(x, y) is corrupted by additive noise η(x, y) to form a noisy image,
given by
g(x, y) = f(x, y) + η(x, y)
The noise is always uncorrelated with the image. The objective here is to reduce the noise
content by averaging a set of noisy images. The average of K different noisy images is given by,
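A quick numerical illustration of why averaging K noisy frames suppresses the noise (the data and the Gaussian noise model are illustrative assumptions):

```python
import numpy as np

# Averaging K independently noisy copies of the same image reduces
# the noise standard deviation by a factor of sqrt(K).
rng = np.random.default_rng(1)
f = np.full((16, 16), 100.0)                       # clean image
K = 50
noisy = [f + rng.normal(0, 10, f.shape) for _ in range(K)]
g_bar = sum(noisy) / K                             # averaged image
err_single = np.abs(noisy[0] - f).mean()           # error of one frame
err_avg = np.abs(g_bar - f).mean()                 # much smaller
```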
The values of the subimage under the mask are weighted by the filter coefficients, and
the filtered output has the same dimensions as the original image. The process consists of
moving the filter mask from point to point.
At each point the response of the filter is found, and it is calculated in a linear manner.
Linearity means the response is given by the sum of products of the filter coefficients and the
corresponding pixels:
w(-1,-1) f(x-1,y-1) + w(-1,0) f(x-1,y) + w(-1,+1) f(x-1,y+1) + w(0,-1) f(x,y-1) + w(0,0) f(x,y)
+ w(0,1) f(x,y+1) + w(1,-1) f(x+1,y-1) + w(1,0) f(x+1,y) + w(1,1) f(x+1,y+1)
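That sum of products is exactly what the following sketch computes for a 3×3 mask centered on an interior pixel (the averaging mask is just an example choice):

```python
import numpy as np

def mask_response(f, w, x, y):
    """Response of a 3x3 mask w centered at (x, y): the sum of
    products of mask coefficients and the underlying pixels."""
    region = f[x - 1:x + 2, y - 1:y + 2]
    return float(np.sum(w * region))

f = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)
w = np.ones((3, 3)) / 9.0          # averaging mask as an example
r = mask_response(f, w, 1, 1)      # response at the center pixel
```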
Types of filtering:
1. Linear filtering:
It is the spatial-domain counterpart of the frequency-domain concept called convolution
(convolving the mask with the image).
Applications:
1. Smoothing
2. Sharpening
4. Explain in detail the image smoothing filters in the spatial domain. (June-2010)
Smoothing filters: (or) How do you perform directional smoothing in images? Why is it
required? (Nov 2014) (May/June 17)(NOV/DEC-18)
The averaging of an image is also known as smoothing. Smoothing filters are used for
blurring and for noise reduction. This process results in an image with reduced sharp
transitions in gray level.
Filter types:
1. Box filter
2. Weighted average filter
1. Box filter: A spatial averaging filter in which all the coefficients are equal is called a
"box filter".
Let us consider 3x3 smoothing filters.
1 1 1
1 1 1
1 1 1
In the above filter the coefficients are all one, so the sum of the coefficients is 9. The
average of the gray levels of the pixels in the 3×3 neighborhood is given by,
where m is the total number of coefficients, wi is the ith filter coefficient and zi is the gray
level of the corresponding image pixel.
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
2. Weighted average filter: The pixel at the center of the mask is given a higher weight
than any other pixel. This gives more importance to the center pixel.
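As a sketch, one commonly used weighted-average mask (the specific 1-2-4 weights are an assumed example; the text does not fix particular values) is:

```python
import numpy as np

# Weighted-average smoothing mask: the center pixel gets the largest
# weight, and the coefficients sum to 16.
w = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0

def smooth_at(f, x, y):
    """Weighted-average response at interior pixel (x, y)."""
    return float(np.sum(w * f[x - 1:x + 2, y - 1:y + 2]))

f = np.full((3, 3), 100.0)
f[1, 1] = 180.0                # one bright pixel
v = smooth_at(f, 1, 1)         # pulled toward the neighborhood mean
```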
Directional smoothing:
To protect the edges from blurring while smoothing, a directional averaging filter can be
used.
The filters above are isotropic, since their effect is the same in all directions. In image
processing applications it may sometimes be necessary to select only certain features in
particular directions, such as horizontal, vertical or diagonal edges. Such filters are called
anisotropic filters.
Directional smoothing is useful in reducing the blurring of edges caused by excessive
smoothing. The procedure is:
1. Calculate the spatial average in several directions.
2. Perform the convolution to replace the center pixel.
Then the mask is moved and the process is repeated until the entire image is processed.
This algorithm retains most of the important visual information. It is one type of spatial
filtering.
Sharpening operators:
1. The Laplacian operator, based on second-order derivatives
2. The gradient operator, based on first-order derivatives
Requirements:
i) Must be zero in flat areas
ii) Must be non zero at the onset and end of a gray level step or ramp
iii) Must be zero along ramps of constant slope.
Isotropic filters
Isotropic filters are rotation invariant; in the sense that rotating the image and then applying
the filter gives the same result as applying the filter to the image first and then rotating the
result.
The Laplacian:
The simplest isotropic derivative operator is the Laplacian, which, for a function (image)
f(x, y) of two variables, is defined as
∇²f = ∂²f/∂x² + ∂²f/∂y²
Digital implementation:
Laplacian mask:
The coefficients of this single composite mask are obtained from
g(x, y) = 5f(x, y) − [f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1)]
Adding the image to the Laplacian restores the overall gray-level variations in the image,
with the Laplacian increasing the contrast at the locations of gray-level discontinuities.
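A minimal sketch of composite-Laplacian sharpening at interior pixels, using g = 5f − (sum of the four neighbors) as one standard form:

```python
import numpy as np

def laplacian_sharpen(f):
    """Composite Laplacian sharpening at interior pixels:
    g = 5*f - (up + down + left + right); borders left unchanged."""
    g = f.astype(float).copy()
    g[1:-1, 1:-1] = (5 * f[1:-1, 1:-1]
                     - f[:-2, 1:-1] - f[2:, 1:-1]
                     - f[1:-1, :-2] - f[1:-1, 2:])
    return g

f = np.array([[10., 10., 10.],
              [10., 20., 10.],
              [10., 10., 10.]])
g = laplacian_sharpen(f)   # the center discontinuity is boosted
```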
A process used for many years in the publishing industry to sharpen images consists of
subtracting a blurred version of an image from the image itself.
This process, called unsharp masking, is expressed as fs(x, y) = f(x, y) − 𝑓̂(x, y), where
𝑓̂(x, y) denotes a blurred version of f(x, y).
First derivatives in image processing are implemented using the magnitude of the gradient. For
a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional
column vector.
The simplest approximations to a first-order derivative that satisfy the conditions stated in that
section are Gx=z8-z5 and Gy=z6-z5.
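Those difference approximations, combined with the common |Gx| + |Gy| magnitude approximation, can be sketched as:

```python
import numpy as np

# Simplest first-derivative approximations from the text:
# Gx = z8 - z5 and Gy = z6 - z5, where z5 is the center pixel,
# z6 its right neighbor and z8 the pixel directly below it.
def gradient_mag(f, x, y):
    gx = f[x + 1, y] - f[x, y]     # z8 - z5
    gy = f[x, y + 1] - f[x, y]     # z6 - z5
    return abs(gx) + abs(gy)       # |Gx| + |Gy| approximation

f = np.array([[10., 10., 10.],
              [10., 10., 40.],
              [10., 50., 10.]])
g = gradient_mag(f, 1, 1)          # strong response at the edge
```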
2. Sobel operators
Sharp transitions in the gray levels of an image, such as edges and noise, appear as
high-frequency components in its Fourier transform. To remove these unwanted contents for
some applications, smoothing is needed.
Concept
The basic process of frequency domain filtering is expressed as G(u, v) = H(u, v) F(u, v)
Filters Used
Lowpass filters are used for blurring in frequency domain. Three types of lowpass filters
are given importance. They are,
Ideal Lowpass Filters (ILPF)
Butterworth Lowpass Filters (BLPF)
Gaussian Lowpass Filters (GLPF)
These filters cover the entire range of filter functions from very sharp to very smooth.
Transfer Function
Distance, D(u,v):
The distance from any point (u,v) to the center of thefrequency rectangle is given by
D(u,v)=√𝑢2 + 𝑣2
Cutoff Frequency
The cutoff frequency of ILPF is defined as the point at which the transition between the
filter function H(u, v) = 1 and H(u, v)=0 takes place.
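A sketch of the ILPF transfer function on a centered frequency rectangle (the grid size and D0 are example values):

```python
import numpy as np

def ilpf(M, N, D0):
    """Ideal lowpass filter: H = 1 inside the circle of radius D0
    around the center of the frequency rectangle, 0 outside."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return (D <= D0).astype(float)

H = ilpf(8, 8, D0=2)   # passes only points within distance 2 of center
```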
Image Power
In three dimensional view, the function of the filter is to pass all the frequencies which
are inside a circle of radius Do and attenuate all the frequencies outside the circle.
Here, to establish standard cutoff frequencies, circles enclosing a specified fraction of the
total image power PT are computed. The total image power is found by,
where P(u, v) = |F(u, v)|² = R²(u, v) + I²(u, v)
R, I — real and imaginary parts of F(u, v)
Also, a circle with radius 'r' and origin at the center of the frequency rectangle would have a
percent of the total power given by,
Ringing Effect:
Ringing is an unwanted effect caused by the blurring of ILPF. The basic model of
blurring in frequency domain is,
Here, the nature of h( x, y) affects both blurring and ringing effects. Because,
the center component of h(x, y) is responsible for blurring and
the concentric components of h(x, y) are responsible for the ringing characteristics of
ILPF.
In the Butterworth lowpass filter there is no sharp cutoff frequency that decides which
frequencies are passed and which are filtered out.
When the amount of high frequency content removed decreases, the image becomes
finer in texture.
Transfer Function:
The transfer function of a BLPF with cutoff frequency at a distance D 0 from the origin is
given by,
This implies that at the cutoff frequency D(u, v) = D0, the filter response H(u, v) drops to
50% of its maximum value.
Ringing Effect:
In BLPF, when the filter order 'n' increases, the ringing also increases. Therefore, ringing
effect should be taken into account for higher order filters and it can be ignored for lower
orders. The effect of ringing is shown in fig. 2.22.
7. Sketch the perspective plot of ideal high pass, Butterworth and Gaussian filter and
how these filters are used for blurring and sharpening the images in frequency domain.
(OR) Explain Sharpening of frequency domain filter(NOV/DEC-18)(A/M-19, 21)
Transfer Function:
This transfer function forces all the frequencies inside a circle of radius D0 to zero and
passes all the frequencies outside the circle without attenuation.
Representation of IHPF:
The cross section and spatial representation of an ideal highpass filter are shown in fig. 2.23.
Ringing effect:
Ringing results in images with distorted and thickened object boundaries in sharpening
process.
In ideal highpass filtering, when the cutoff distance Do increases, the effect of ringing
reduces. Anyhow, it should be considered for low Do values.
The behavior of Butterworth highpass filters is smoother than that of ideal highpass filters.
This means that the images produced by a BHPF are better than those produced by an IHPF.
Representation of BHPF:
The cross section and spatial representation of BHPF are shown in fig. 2.34
Ringing Effect:
Even for small values of cutoff frequency Do with small order 'n' of the filter, the ringing
effect is very less in Butterworth highpass filtering. Therefore, boundaries are much less
distorted than in IHPF.
Representation of GHPF:
Thus, smaller objects and thin bars look cleaner in the result of GHPF than other filter results.
8. Illustrate the steps in Histogram Equalization of the image. [NOV 2013] (Nov/Dec 16)
Step 3. Running sum: 0 0 0 6 20 25 25 25
Step 5. Mapping:
Gray level: 0 1 2 3 4 5 6 7
New level:  0 0 0 2 6 7 7 7
Answer:
9. Describe histogram equalization. Obtain histogram equalization for the following 8-bit
image segment of size 5 × 5. Write the inference on the image segment before and after
equalization. [May 2015]
200 200 200 180 240
180 180 180 180 190
190 190 190 190 180
190 200 220 220 240
230 180 190 210 230
Solution: The maximum value is 240, and the binary value of 240 = 11110000 needs 8 bits,
so there are 2⁸ = 256 gray levels (0 to 255).
Step 1: Find running sum of histogram values:
Gray level:    0 … 180 190 200 210 220 230 240 … 255
No. of pixels: 0 …   7   7   4   1   2   2   2 …   0
Running sum:   0 …   7  14  18  19  21  23  25 …  25
10. Obtain the Discrete Fourier Transform for the given vectors: input image matrix
[0 0; 255 255] (a 2 × 2 matrix). Also analyze how the Fourier transform behaves if the
image is rotated or translated. [Apr/May 15]
Solution:
Find the kernel using the 2 × 2 DFT, then
F(u, v) = Kernel × input image × (Kernel)ᵀ
= [0.5  0.5 ; 0.5  −0.5] × [0  0 ; 255  255] × [0.5  0.5 ; 0.5  −0.5]
= [127.5  127.5 ; −127.5  −127.5] × [0.5  0.5 ; 0.5  −0.5]
= [127.5  0 ; −127.5  0]
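The chain of matrix products in this worked example can be verified numerically:

```python
import numpy as np

# Check the worked 2x2 example: F = K @ X @ K.T with the
# kernel of +/-0.5 entries used above.
K = np.array([[0.5, 0.5],
              [0.5, -0.5]])
X = np.array([[0.0, 0.0],
              [255.0, 255.0]])
F = K @ X @ K.T
```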
11. If all the pixels in an image are shuffled, will there be any change in the histogram?
Justify your answer. (May/June 17)
If all the pixels in an image are shuffled, there will not be any change in the histogram of the
image. A histogram gives only the frequency of occurrence of each gray level, not the
positions of the pixels. Consider two images 1 and 2, where image 2 is obtained by shuffling
the rows of image 1: their corresponding histograms are identical. For example, image 1:
3 2 4 5 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 5 3 4 4
Step 5. Mapping:
Gray level: 0 1 2 3 4 5 6 7
New level:  0 0 2 5 7 7 7 7
Answer:
3 2 4 5 4                               5 2 7 7 7
3 4 5 4 3                               5 7 7 7 5
3 5 5 5 3  --HISTOGRAM EQUALIZATION-->  5 7 7 7 5
3 4 5 4 3                               5 7 7 7 5
4 5 3 4 4                               7 7 5 7 7
Gray level:    0  1  2  3  4  5  6  7
No. of pixels: 6  8 11 12  3  5 15  6
SOLUTION:
Step 1: Find running sum of histogram values:
Gray level:    0  1  2  3  4  5  6  7
No. of pixels: 6  8 11 12  3  5 15  6
Running sum:   6 14 25 37 40 45 60 66
Step 5. Mapping:
Gray level: 0 1 2 3 4 5 6 7
New level:  1 1 4 4 4 5 6 7
14. Explain in detail about color image enhancement?(Nov-2011) (or)Explain any two
techniques for color image enhancement.(Nov 2014)(or) How color image is enhanced
and compare it with gray scale processing.(May 2015)
Color image enhancement requires improvement of the color balance or color contrast of a
color image. Enhancement of color images is a more difficult task owing to the added
complexity of color perception.
Block diagram
The input color coordinates of each pixel are independently transformed into another set of
color coordinates, where the image in each coordinate is enhanced by its own image
enhancement algorithm.
The enhanced image coordinates T′₁, T′₂, T′₃ are inverse transformed to R′, G′, B′ for
display.
Smoothing and sharpening are the two techniques used for color image enhancement.
Gray-scale image smoothing can be viewed as a spatial filtering operation in which the
coefficients of the filtering mask all have the same value. The average of the RGB component
vectors in a neighborhood Sxy is
c̄(x, y) = (1/K) Σ(s,t)∈Sxy c(s, t)
which, written out per component, is
c̄(x, y) = [ (1/K) Σ(s,t)∈Sxy R(s, t),
            (1/K) Σ(s,t)∈Sxy G(s, t),
            (1/K) Σ(s,t)∈Sxy B(s, t) ]ᵀ
* Smoothing by neighborhood averaging can be carried out on a per-color-plane basis. The
result is the same as when the averaging is performed using RGB color vectors.
* Image sharpening using the Laplacian is used mainly for color image enhancement. The
Laplacian of a vector is defined as a vector whose components are equal to the Laplacians
of the individual scalar components of the input vector.
∇²[c(x, y)] = [ ∇²R(x, y), ∇²G(x, y), ∇²B(x, y) ]ᵀ
15. Describe how homomorphic filtering is used to separate the illumination and reflectance
components. (May 2015) (or) Explain homomorphic filtering in detail. (Nov-2013, May-2013,
Nov-2011)
Homomorphic filtering:
An image can be modeled as f(x, y) = i(x, y) r(x, y). The illumination component is
characterized by slow spatial variations, while the reflectance component varies abruptly,
particularly at the junctions of dissimilar objects.
This characteristic leads to associating the low frequencies with the illumination component
and the high frequencies with the reflectance component, which is exploited in enhancement
of the image.
i — illumination component
r — reflectance component
Step 1:
z(x, y) = ln f(x, y)
= ln i(x, y) + ln r(x, y)
Step 2:
Compute the DFT of z(x, y):
Z(u, v) = Fi(u, v) + Fr(u, v)
Apply the filter: S(u, v) = H(u, v) Z(u, v) = Fi(u, v) H(u, v) + Fr(u, v) H(u, v)
Step 3:
The spatial-domain result is obtained by taking the inverse transform of S(u, v):
s(x, y) = ℑ⁻¹{S(u, v)}
= ℑ⁻¹{Fi(u, v) H(u, v)} + ℑ⁻¹{Fr(u, v) H(u, v)}
Step 5:
Finally, the enhanced image is obtained by taking the exponential of the filtered result:
g(x, y) = exp(s(x, y)).
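The five steps above can be sketched end to end; the Gaussian-style high-frequency-emphasis filter H and all parameter values below are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def homomorphic(f, gamma_L=0.5, gamma_H=2.0, D0=2.0, c=1.0):
    """Homomorphic filtering sketch: log, DFT, high-frequency
    emphasis H, inverse DFT, exponentiate. gamma_L < 1 suppresses
    illumination (low frequencies); gamma_H > 1 boosts reflectance."""
    M, N = f.shape
    z = np.log1p(f.astype(float))                    # step 1: ln(1+f)
    Z = np.fft.fftshift(np.fft.fft2(z))              # step 2: centered DFT
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_H - gamma_L) * (1 - np.exp(-c * D2 / D0 ** 2)) + gamma_L
    S = np.fft.ifft2(np.fft.ifftshift(H * Z)).real   # steps 3-4: filter, IDFT
    return np.expm1(S)                               # step 5: exp - 1

img = np.outer(np.linspace(1, 100, 8), np.ones(8))  # toy gradient "image"
out = homomorphic(img)
```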