
UNIT 2 (PART 1)

IMAGE ENHANCEMENT IN
SPATIAL DOMAIN

• The term spatial domain refers to the image
plane itself, and image processing methods in
this category are based on direct manipulation
of pixels in an image.
• In the frequency domain (the topic of Chapter 4),
operations are performed on the Fourier transform
of an image, rather than on the image itself.

There are many differences between the spatial domain and the
frequency domain in image enhancement:

• The spatial domain deals with the image plane itself, whereas the
frequency domain deals with the rate at which pixel values change.
• Spatial-domain methods work by direct manipulation of pixels,
whereas frequency-domain methods work by modifying the Fourier
transform of the image.
• The spatial domain is generally easier to understand than the
frequency domain.
• Spatial-domain processing generally takes less time to compute,
whereas frequency-domain processing takes more time.

The Basics of Intensity Transformations
and Spatial Filtering

• Spatial-domain processes can be denoted by the expression
g(x,y) = T[f(x,y)]
where f(x,y) is the input image, g(x,y) is the output image, and
T is an operator on f defined over a neighbourhood of point (x, y).
• The point (x, y) shown is an arbitrary location in the image,
and the small region shown containing the point is a
neighborhood of (x, y).
• Typically, the neighborhood is rectangular, centered on (x, y),
and much smaller in size than the image.
• The process consists of moving the origin of the neighbourhood
from pixel to pixel and applying the operator T to the pixels in
the neighbourhood to yield the output at that location.
• Thus, for any specific location (x, y), the value of the output
image g at those coordinates is equal to the result of
applying T to the neighbourhood with origin at (x, y) in f.
• For example, suppose that the neighbourhood is a square of
size 3 × 3 and that operator T is defined as “compute the average
intensity of the neighbourhood.” Consider an arbitrary
location in an image, say (100, 150).

• Assuming that the origin of the
neighbourhood is at its centre, the result, g(100, 150), at
that location is computed
• as the sum of f(100, 150) and its 8-neighbours, divided by 9
(i.e., the average intensity of the pixels
encompassed by the neighbourhood).
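A minimal sketch of this averaging operator in Python/NumPy (the function
name average_3x3 and the choice to leave border pixels unchanged are
illustrative assumptions, not part of the original material):

import numpy as np

def average_3x3(f):
    # T = "compute the average intensity of the 3x3 neighbourhood",
    # applied at every interior pixel; borders are left unchanged
    # in this sketch (a real implementation would pad the image).
    g = f.astype(float).copy()
    rows, cols = f.shape
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            # sum of f(x, y) and its 8-neighbours, divided by 9
            g[x, y] = f[x - 1:x + 2, y - 1:y + 2].sum() / 9.0
    return g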

• The origin of the neighbourhood is then
moved to the next location and the procedure
is repeated to generate the next value of the
output image g.
• Typically, the process starts at the top left of
the input image and proceeds pixel by pixel in
a horizontal scan, one row at a time

TYPES OF TRANSFORMATIONS IN
SPATIAL DOMAIN

LINEAR TRANSFORMATIONS

Linear transformations include the simple identity and negative transformations.

1. IDENTITY TRANSFORMATION:-
The identity transformation is represented by a straight line: each intensity
value of the input image is mapped directly to the same value in the output
image. The result is an output image identical to the input image, and hence
it is called the identity transformation.
For each pixel or intensity value of the input image, there is the same
intensity value in the output image; the output image is an exact replica of
the input image.
It can be mathematically represented as:
g(x,y) = f(x,y)

2. Negative transformation
• The second linear transformation is the negative transformation, which is
the inverse of the identity transformation. In the negative transformation,
each value of the input image is subtracted from L – 1 and mapped onto the
output image.
• In this case the following transition is applied:
s = (L – 1) – r
• Since the input image in the example (the Einstein image) is an 8-bpp
image, the number of levels in the image is 256. Putting 256 in the
equation, we get
s = 255 – r
• So each pixel value is subtracted from 255. The lighter pixels become dark
and the darker pixels become light, resulting in the image negative.
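As a sketch, the negative transformation is a one-liner in Python/NumPy
(the function name is illustrative):

import numpy as np

def negative(f, L=256):
    # s = (L - 1) - r for every pixel; for an 8-bpp image, s = 255 - r.
    return ((L - 1) - f.astype(np.int32)).astype(np.uint8)

Casting before the subtraction keeps the arithmetic safe for bit depths
other than 8.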

• The log transformation can be defined by the formula
• s = c log(r + 1)
• where s and r are the pixel values of the output and input images and c is
a constant. The value 1 is added to each pixel value of the input
image because if there is a pixel intensity of 0 in the image, then log(0)
is undefined. So 1 is added to make the minimum value at least 1.
• During log transformation, the dark pixel values in an image are expanded
compared with the higher pixel values, which are compressed. This results
in the enhancement shown in the example images.
• The value of c in the log transform adjusts the kind of enhancement you
are looking for.
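A minimal sketch in Python/NumPy; here c is chosen so that the output spans
the full range [0, L-1], which is one common choice (the function name and
this normalisation are assumptions):

import numpy as np

def log_transform(f, L=256):
    r = f.astype(float)
    c = (L - 1) / np.log(L)        # maps r = L-1 to s = L-1
    s = c * np.log(1.0 + r)        # s = c log(r + 1)
    return s.astype(np.uint8)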

• Variation in the value of γ varies the enhancement of the image.
Different display devices/monitors have their own gamma correction,
which is why they display images at different intensities.

• This type of transformation is used for enhancing images for
different types of display devices. The gamma of different display
devices is different; for example, the gamma of a CRT lies between
1.8 and 2.5, which means an uncorrected image displayed on a CRT
appears dark. Gamma correction compensates by applying
• s = c r^γ
• s = c r^(1/2.5)
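A sketch of power-law (gamma) correction in Python/NumPy, normalising
intensities to [0, 1] before applying the power (the names and the
normalisation are illustrative assumptions):

import numpy as np

def gamma_correct(f, gamma, c=1.0, L=256):
    r = f.astype(float) / (L - 1)      # normalise to [0, 1]
    s = c * np.power(r, gamma)         # s = c * r^gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

# Correcting for a CRT with gamma 2.5:
# corrected = gamma_correct(image, gamma=1/2.5)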

Piecewise-Linear Transformation
Functions
• Contrast stretching
– One of the simplest piecewise linear functions is a
contrast-stretching transformation.
– Contrast stretching, in general, increases the contrast of an
image by making the dark portions darker and the bright
portions brighter.
– Low-contrast images can result from poor illumination, lack of
dynamic range in the imaging sensor, or even the wrong setting
of a lens aperture during image acquisition.
– Contrast stretching is a process that expands the range of
intensity levels in an image so that it spans the full intensity
range of the recording medium or display device.
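A sketch of a piecewise-linear stretch through two control points (r1, s1)
and (r2, s2), assuming 0 <= r1 <= r2 <= L-1 (the function name and the use
of np.interp are illustrative):

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    # Three line segments: [0, r1] -> [0, s1], [r1, r2] -> [s1, s2],
    # [r2, L-1] -> [s2, L-1]; np.interp evaluates this piecewise-linear
    # function at every pixel value.
    s = np.interp(f.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

# Hypothetical settings that darken darks and brighten brights:
# g = contrast_stretch(image, r1=80, s1=30, r2=170, s2=220)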

• Bit-plane slicing:-
▪ The gray level of each pixel in a digital image is stored as one or more
bytes in a computer.
▪ For an 8-bit image, 0 is encoded as 00000000 and 255 is encoded as
11111111.
▪ The bit on the far left is referred to as the most significant bit (MSB),
because a change in that bit would significantly change the value encoded
by the byte.
▪ The bit on the far right is referred to as the least significant bit (LSB),
because a change in this bit does not change the encoded gray value much.
▪ The bit-plane representation decomposes an eight-bit digital image into
eight one-bit planes, from bit plane 0 (LSB) to bit plane 7 (MSB).

▪ In bit-plane slicing, we get 8 different images, all binary, showing
the significance of each bit in the 8-bit image.
• Decomposing an image into its bit planes is useful for analyzing the
relative importance of each bit in the image.
• This type of decomposition is also useful for image compression, in
which fewer than all of the planes are used in reconstructing the
image.
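A sketch of bit-plane extraction in Python/NumPy, demonstrated on a small
hypothetical 2 × 2 matrix (the values are illustrative only, since the
exercise matrix is not reproduced here):

import numpy as np

def bit_planes(f):
    # Plane k holds bit k of every pixel; planes[0] is the LSB,
    # planes[7] the MSB.
    return [(f >> k) & 1 for k in range(8)]

img = np.array([[0, 255],
                [129, 64]], dtype=np.uint8)   # hypothetical values
planes = bit_planes(img)
print(planes[7])   # MSB plane: [[0 1], [1 0]] (255 and 129 have bit 7 set)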

APPLY BIT-PLANE SLICING ON THE GIVEN IMAGE MATRIX

NEIGHBOURHOOD PROCESSING
TECHNIQUE

Histogram Processing
• The histogram of a digital image with intensity levels in the
range [0, L-1] is a discrete function h(rk) = nk,
where rk is the kth intensity value and nk is the number of pixels
in the image with intensity rk.
• It is common practice to normalize a histogram by dividing
each of its components by the total number of pixels in the
image, denoted by the product MN, where, as usual, M and N
are the row and column dimensions of the image.
• Thus, a normalized histogram is given by p(rk) = nk/MN, for
k = 0, 1, ..., L-1. Loosely speaking, p(rk) is an estimate of the
probability of occurrence of intensity level rk in an image. The sum
of all components of a normalized histogram is equal to 1.
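A sketch of the normalized histogram in Python/NumPy (the function name is
illustrative):

import numpy as np

def normalized_histogram(f, L=256):
    M, N = f.shape
    nk = np.bincount(f.ravel(), minlength=L)   # pixel count per level rk
    p = nk / (M * N)                           # p(rk) = nk / MN
    # The components of a normalized histogram sum to 1.
    assert np.isclose(p.sum(), 1.0)
    return p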

Histogram Equalization
• Consider for a moment continuous intensity
values, and let the variable r denote the
intensities of an image to be processed.
• As usual, we assume that r is in the range
[0, L-1], with r = 0 representing black and
r = L-1 representing white.
• For r satisfying these conditions, we focus
attention on transformations (intensity
mappings) of the form
s = T(r)

that produce an output intensity level s for every pixel in the input image having
intensity r. We assume that:
(a) T(r) is a monotonically increasing function in the interval 0 ≤ r ≤ L-1; and
(b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1.

• Thus, we see that the PDF of the output
intensity variable, s, is determined by the PDF
of the input intensities and the transformation
function used [recall that r and s are related by
s = T(r)].
• A transformation function of particular
importance in image processing has the form
s = T(r) = (L - 1) ∫ pr(w) dw,
with the integral taken from 0 to r, where w is a dummy variable of
integration; T(r) is the cumulative distribution function of r, scaled
by L - 1.
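In the discrete case the integral becomes a running sum, sk = (L-1) times
the sum of p(rj) for j = 0..k, i.e. the scaled cumulative histogram. A
minimal sketch (the function name is illustrative):

import numpy as np

def equalize(f, L=256):
    p = np.bincount(f.ravel(), minlength=L) / f.size   # p(rk)
    cdf = np.cumsum(p)                                 # running sum of the PDF
    s = np.round((L - 1) * cdf).astype(np.uint8)       # lookup table r -> s
    return s[f]                                        # map every pixel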

Example
• Suppose that a 3-bit image (L = 8) of size 64 × 64 pixels
(MN = 4096) has the intensity distribution shown in the table,
where the intensity levels are integers in the
range [0,7]
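The table itself is not reproduced here, but the mapping can be verified
numerically; the counts below are the classic 64 × 64, 3-bit textbook
distribution (treat them as illustrative):

import numpy as np

nk = np.array([790, 1023, 850, 656, 329, 245, 122, 81])  # pixels per level
p = nk / nk.sum()                      # p(rk), total MN = 4096
s = np.round(7 * np.cumsum(p)).astype(int)
print(s)   # [1 3 5 6 6 7 7 7]: equalized level for each r = 0..7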

Histogram Matching (Specification)
• While the goal of histogram equalization is to produce an output
image that has a flattened histogram, the goal of histogram
matching is to take an input image and generate an output image
that is based upon the shape of a specific (or reference) histogram.
• Histogram matching is also known as histogram specification.
• Let us suppose we have two images, an input image and a specified
image. We want to use histogram matching to force the input image
to have a histogram that is the shape of the histogram of the
specified image.
• The first few steps are similar to histogram equalization, except we
are performing histogram equalization on two images (the original
image and the specified image).

• The method used to generate a processed
image that has a specified histogram is called
histogram matching or histogram
specification.

• Let pz(z) be the specified PDF, and define
G(z) = (L - 1) ∫ pz(t) dt = s,
with the integral taken from 0 to z, where t is a dummy variable of
integration. It then follows from these two equations that
G(z) = T(r) = s and, therefore, that z must satisfy the condition
z = G⁻¹(s) = G⁻¹[T(r)]

Procedure
• 1. Obtain pr(r) from the input image and use the
histogram-equalization transformation to obtain the values of s.
• 2. Use the specified PDF pz(z) to obtain the transformation
function G(z).
• 3. Obtain the inverse transformation z = G⁻¹(s); because z is
obtained from s, this process is a mapping from s to z, the latter
being the desired values.
• 4. Obtain the output image by first equalizing the input image;
the pixel values in this image are the s values.
• For each pixel with value s in the equalized image, perform the
inverse mapping z = G⁻¹(s) to obtain the corresponding pixel in the
output image.
• When all pixels have been thus processed, the PDF of the output
image will be equal to the specified PDF. (A sketch of this
procedure in code follows.)
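A sketch of the whole procedure in Python/NumPy, with the inverse G⁻¹
realised as "smallest z with G(z) >= s" (the function names and this
discrete inverse are illustrative assumptions):

import numpy as np

def match_histogram(f, p_z, L=256):
    p_r = np.bincount(f.ravel(), minlength=L) / f.size
    T = np.round((L - 1) * np.cumsum(p_r))    # step 1: s = T(r)
    G = np.round((L - 1) * np.cumsum(p_z))    # step 2: G(z) from specified PDF
    # Step 3: invert G -- for each s, the smallest z with G(z) >= s.
    z = np.searchsorted(G, T, side="left").clip(0, L - 1)
    # Step 4: map every input pixel r directly to z = G^-1(T(r)).
    return z[f].astype(np.uint8)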

Local Enhancements
• We have studied global approaches, in the
sense that pixels are modified by a
transformation function based on the intensity
distribution of an entire image.
• Although this global approach is suitable for
overall enhancement, there are cases in which
it is necessary to enhance details over small
areas in an image.

• The number of pixels in these areas may have
negligible influence on the computation of a
global transformation whose shape does not
necessarily guarantee the desired local
enhancement.
• The solution is to devise transformation
functions based on the intensity distribution in
a neighbourhood of every pixel in the image.

• The histogram processing techniques previously
described are easily adapted to local
enhancement.
• The procedure is to define a neighbourhood and
move its centre from pixel to pixel.
• At each location, the histogram of the points in
the neighbourhood is computed and either a
histogram equalization or histogram specification
transformation function is obtained.

• This function is then used to map the intensity of
the pixel centred in the neighbourhood.
• The centre of the neighbourhood region is then
moved to an adjacent pixel location and the
procedure is repeated.
• Because only one row or column of the
neighbourhood changes during a pixel-to-pixel
translation of the neighbourhood, updating the
histogram obtained in the previous location with
the new data introduced at each motion step is
possible.
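A naive sketch of local histogram equalization (recomputing the
neighbourhood histogram at every pixel; the incremental update described
above would be much faster, and the names here are illustrative):

import numpy as np

def local_equalize(f, half=1, L=256):
    # (2*half+1) x (2*half+1) neighbourhood; borders left unchanged.
    g = f.copy()
    M, N = f.shape
    for x in range(half, M - half):
        for y in range(half, N - half):
            w = f[x - half:x + half + 1, y - half:y + half + 1]
            cdf = np.cumsum(np.bincount(w.ravel(), minlength=L)) / w.size
            g[x, y] = np.uint8(round((L - 1) * cdf[f[x, y]]))
    return g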

Fundamentals of Spatial Filtering
• The Mechanics of Spatial Filtering
– Spatial filter consists of
• (1) a neighbourhood, (typically a small rectangle), and
• (2) a predefined operation that is performed on the
image pixels encompassed by the neighbourhood
– Filtering creates a new pixel with coordinates
equal to the coordinates of the centre of the
neighbourhood, and whose value is the result of
the filtering operation

• A processed (filtered) image is generated as the centre of the filter visits
each pixel in the input image.
• If the operation performed on the image pixels is linear, then the filter is
called a linear spatial filter. Otherwise, the filter is nonlinear.
• A mask is a filter. The concept of masking is also known as spatial
filtering. In this concept we deal with the filtering operation that is
performed directly on the image.
• Filters are applied to an image for multiple purposes. The two most
common uses are as follows:
Filters are used for blurring and noise reduction
Filters are used for edge detection and sharpening

Spatial Correlation and Convolution
• There are two closely related concepts that must
be understood clearly when performing linear
spatial filtering.
• One is correlation and the other is convolution.
Correlation is the process of moving a filter mask
over the image and computing the sum of
products at each location, exactly as explained in
the previous section.
• The mechanics of convolution are the same,
except that the filter is first rotated by 180°
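The difference is easy to see by filtering a unit impulse: correlation
deposits a 180°-rotated copy of the mask, while convolution deposits the
mask itself. A sketch using SciPy (assuming scipy.ndimage is available):

import numpy as np
from scipy.ndimage import correlate, convolve

f = np.zeros((5, 5)); f[2, 2] = 1.0     # unit impulse image
w = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

print(correlate(f, w))   # copy of w rotated by 180 degrees
print(convolve(f, w))    # copy of w itself
# Equivalently: convolve(f, w) == correlate(f, np.flip(w))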

Smoothing Spatial Filters
• Smoothing filters are used for blurring and for
noise reduction.
• Blurring is used in preprocessing tasks, such
as removal of small details from an image
prior to (large) object extraction, and bridging
of small gaps in lines or curves.
• Noise reduction can be accomplished by
blurring with a linear filter and also by
nonlinear filtering.

Smoothing Linear Filters
• The output (response) of a smoothing, linear
spatial filter is simply the average of the pixels
contained in the neighbourhood of the filter
mask.
• These filters are sometimes called averaging
filters.
• They are also referred to as lowpass filters.

• The idea behind smoothing filters is
straightforward.
• Replacing the value of every pixel in an
image by the average of the intensity levels in
the filter mask results in an image
with reduced “sharp” transitions in intensities.

• The coefficients of the filter are all 1s.
• The idea here is that it is computationally
more efficient to have coefficients valued 1.
• At the end of the filtering process the entire
image is divided by 9.
• An m × n mask would have a normalizing constant
equal to 1/mn.
• A spatial averaging filter in which all
coefficients are equal is sometimes called a
box filter.

• The second mask in the figure is a little more
interesting.
• This mask yields a so-called weighted average,
terminology used to indicate that pixels are
multiplied by different coefficients, thus giving
more importance (weight) to some pixels at the
expense of others.
• In the mask shown in the figure, the pixel at the centre
of the mask is multiplied by a higher value than
any other, thus giving this pixel more importance
in the calculation of the average.
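A sketch of both masks in Python/NumPy; the 1-2-1 weighted mask with centre
weight 4 (normalising constant 16) is a commonly used example of this idea:

import numpy as np
from scipy.ndimage import correlate

box = np.ones((3, 3)) / 9.0              # all coefficients 1, divided by 9

weighted = np.array([[1., 2., 1.],       # centre pixel weighted most,
                     [2., 4., 2.],       # diagonal neighbours least;
                     [1., 2., 1.]]) / 16.0   # divide by sum of coefficients

def smooth(f, mask):
    return correlate(f.astype(float), mask)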

Order-Statistic (Nonlinear) Filters
• Order-statistic filters are nonlinear spatial filters whose
response is based on ordering (ranking) the pixels
contained in the image area encompassed by the filter,
and then replacing the value of the centre pixel with
the value determined by the ranking result.
• The best-known filter in this category is the median
filter, which, as its name implies, replaces the value of
a pixel by the median of the intensity values in the
neighbourhood of that pixel (the original value of the
pixel is included in the computation of the median).

• Median filters are quite popular because, for
certain types of random noise, they provide
excellent noise-reduction capabilities, with
considerably less blurring than linear
smoothing filters of similar size.
• Median filters are particularly effective in the
presence of impulse noise, also called
salt-and-pepper noise because of its
appearance as white and black dots
superimposed on an image

• The median, m, of a set of values is such that
half the values in the set are less than or equal
to m and half are greater than or equal to m.
• In order to perform median filtering at a point
in an image, we first sort the values of the
pixels in the neighbourhood, determine their
median, and assign that value to the
corresponding pixel in the filtered image.

• For example, in a 3 × 3 neighbourhood the median
is the 5th largest value, in a 5 × 5 neighbourhood it
is the 13th largest value, and so on.
• When several values in a neighbourhood are
the same, all equal values are grouped. For
example, suppose that a 3 × 3 neighbourhood has
values (10, 20, 20, 20, 15, 20, 20, 25, 100).
• These values are sorted as (10, 15, 20, 20, 20,
20, 20, 25, 100), which results in a median of
20
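A sketch of a median filter and a check of the worked example above (the
function name and border handling are illustrative):

import numpy as np

def median_filter(f, half=1):
    # (2*half+1) x (2*half+1) median; borders left unchanged in this sketch.
    g = f.copy()
    M, N = f.shape
    for x in range(half, M - half):
        for y in range(half, N - half):
            g[x, y] = np.median(f[x - half:x + half + 1,
                                  y - half:y + half + 1])
    return g

print(np.median([10, 20, 20, 20, 15, 20, 20, 25, 100]))   # 20.0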

Sharpening Spatial Filters
• The principal objective of sharpening is to
highlight transitions in intensity.
• Uses of image sharpening vary and include
applications ranging from electronic printing
and medical imaging to industrial inspection
and autonomous guidance in military systems

• There are various ways of defining and implementing
operators for sharpening by digital differentiation.
• Fundamentally, the strength of the response of a
derivative operator is proportional to the degree
of intensity discontinuity of the image at the point
at which the operator is applied.
• Thus, image differentiation enhances edges and
other discontinuities (such as noise) and
deemphasizes areas with slowly varying
intensities.

• We consider sharpening filters that are based on first- and
second-order derivatives, respectively.
• Before proceeding with that discussion, however,
we stop to look at some of the fundamental
properties of these derivatives in a digital context.
• To simplify the explanation, we focus attention
initially on one-dimensional derivatives.
• In particular, we are interested in the behaviour of
these derivatives in areas of constant intensity, and at
the onset and end of discontinuities.

• The behavior of derivatives during transitions
into and out of these image features also is of
interest.

• The derivatives of a digital function are defined in terms of differences.
• There are various ways to define these differences. However, we require
that any definition we use for a first derivative
– (1) must be zero in areas of constant intensity;
– (2) must be nonzero at the onset of an intensity step or ramp; and
– (3) must be nonzero along ramps.
• Similarly, any definition of a second derivative
– (1) must be zero in constant areas;
– (2) must be nonzero at the onset and end of an intensity step or
ramp; and
– (3) must be zero along ramps of constant slope.
• Because we are dealing with digital quantities whose values are finite,
the maximum possible intensity change also is finite, and the shortest
distance over which that change can occur is between adjacent
pixels. (A small numeric check of these properties follows.)
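These requirements can be checked numerically on a hypothetical 1-D scan
line containing a flat region, a downward ramp, and an upward step:

import numpy as np

f = np.array([6, 6, 6, 5, 4, 3, 2, 2, 2, 6, 6])   # illustrative values

d1 = f[1:] - f[:-1]                  # first derivative: f(x+1) - f(x)
d2 = f[2:] + f[:-2] - 2 * f[1:-1]    # second derivative

print(d1)   # zero on flat runs, constant (nonzero) along the ramp,
            # nonzero at the step
print(d2)   # nonzero only at the onset/end of the ramp and at the step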

• A basic definition of the first-order derivative of a
one-dimensional function f(x) is the difference
∂f/∂x = f(x + 1) − f(x)
• We use a partial derivative here in order to keep
the notation the same as when we consider an
image function of two variables, f(x, y), at which
time we will be dealing with partial derivatives along
the two spatial axes.

• We define the second-order derivative of f(x) as
the difference
∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)

Using the Second Derivative for Image
Sharpening—The Laplacian
• We now consider the implementation of 2-D, second-order
derivatives and their use for image sharpening.
• It can be shown that the simplest isotropic derivative
operator is the Laplacian, which, for a function
(image) f(x, y) of two variables, is defined as
∇²f = ∂²f/∂x² + ∂²f/∂y²

• Because derivatives of any order are linear
operations, the Laplacian is a linear operator.
• To express this equation in discrete form, we use
the second-derivative definition above, keeping in mind
that we have to carry a second variable.
• In the x-direction, we have
∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)

• Similarly, in the y-direction we have
∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
• Therefore, it follows from the preceding three
equations that the discrete Laplacian of two
variables is
∇²f(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)
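The discrete Laplacian corresponds to a 3 × 3 mask with 1s at the four
neighbours and −4 at the centre. A sketch that applies it and sharpens via
g = f − c∇²f (the subtraction is used because the mask's centre coefficient
is negative; the names are illustrative):

import numpy as np
from scipy.ndimage import correlate

laplacian_mask = np.array([[0.,  1., 0.],
                           [1., -4., 1.],
                           [0.,  1., 0.]])

def sharpen(f, c=1.0):
    lap = correlate(f.astype(float), laplacian_mask)   # discrete Laplacian
    g = f.astype(float) - c * lap                      # add detail back
    return np.clip(g, 0, 255).astype(np.uint8)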

