
10-10-2020

Unit- III

18CSE353T - Digital Image Processing


Dr. R. Rajkamal


Image Restoration and Reconstruction


 Restoration attempts to recover an image that has been degraded by using a priori knowledge of the degradation phenomenon.

g(x, y) = h(x, y) * f(x, y) + η(x, y)

h(x, y) is the spatial representation of the degradation function, and η(x, y) is the additive noise term.
* denotes convolution.
Convolution in the spatial domain is analogous to multiplication in the frequency domain:

G(u, v) = H(u, v)F(u, v) + N(u, v)

where the terms in capital letters are the Fourier transforms of the corresponding spatial domain terms.
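A minimal sketch of this degradation model, assuming a hypothetical blur kernel h of the same size as the image and additive Gaussian noise (the noise level is an illustrative parameter, not from the slides):

```python
import numpy as np

def degrade(f, h, noise_sigma=10.0):
    """Simulate g = h * f + eta via the frequency-domain relation G = H F + N.
    f is the image; h is a kernel of the same size, centered on the array."""
    F = np.fft.fft2(f)
    H = np.fft.fft2(np.fft.ifftshift(h))      # move the kernel origin to the corner
    eta = np.random.normal(0.0, noise_sigma, f.shape)
    return np.real(np.fft.ifft2(H * F)) + eta
```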

Noise Models
 The principal sources of noise in digital images arise during image acquisition and/or transmission.
 The performance of imaging sensors is affected by a variety of factors, such as environmental conditions during image acquisition, and by the quality of the sensing elements themselves.
 Images are corrupted during transmission principally due to interference in the channel used for transmission.
 Frequency properties refer to the frequency content of noise in the Fourier sense.

We assume that noise is independent of spatial coordinates and uncorrelated with respect to the image itself
(there is no correlation between pixel values and the values of the noise components).


Noise Probability Density Functions

z represents intensity, and the standard deviation squared, σ², is called the variance of z.
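As a brief, hedged illustration of two of these noise models (Gaussian and salt-and-pepper impulse noise), noisy images can be generated as follows; the function and parameter names are illustrative, not from the slides:

```python
import numpy as np

def add_gaussian_noise(img, mean=0.0, sigma=20.0):
    # Gaussian (normal) noise with the given mean and standard deviation
    noisy = img.astype(np.float64) + np.random.normal(mean, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, pa=0.05, pb=0.05):
    # pa: probability of pepper (0), pb: probability of salt (255)
    out = img.copy()
    r = np.random.rand(*img.shape)
    out[r < pa] = 0
    out[r > 1 - pb] = 255
    return out
```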


Noise Probability Density Functions

Gamma Noise


Noise Probability Density Functions

Salt and Pepper Noise


Mean Filters
Noise-reduction capabilities of smoothing filters in the spatial and frequency domains:
 Arithmetic mean filter
 Geometric mean filter
 Harmonic mean filter
 Contraharmonic mean filter

Arithmetic mean filter
 This operation can be implemented using a spatial filter of size m x n in which all coefficients have value 1/mn.
 A mean filter smooths local variations in an image, and noise is reduced as a result of blurring.

Geometric mean filter
 Each restored pixel is given by the product of the pixels in the subimage window, raised to the power 1/mn.
 A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.
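A minimal sketch of these two filters, assuming a grayscale image and an m x n window (the window size is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def arithmetic_mean(g, m=3, n=3):
    # average of the pixels in each m x n neighborhood
    return uniform_filter(g.astype(np.float64), size=(m, n))

def geometric_mean(g, m=3, n=3, eps=1e-6):
    # product of the pixels raised to the power 1/(m*n),
    # computed as exp(mean(log(g))) for numerical stability
    logg = np.log(g.astype(np.float64) + eps)
    return np.exp(uniform_filter(logg, size=(m, n))) - eps
```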

Mean Filters
Harmonic mean filter
 The harmonic mean filter works well for salt noise, but fails for pepper noise.
 It does well also with other types of noise, like Gaussian noise.

Contraharmonic mean filter
 Q is called the order of the filter.
 This filter is well suited for reducing or virtually eliminating the effects of salt-and-pepper noise.
 For positive values of Q, the filter eliminates pepper noise; for negative values of Q, it eliminates salt noise. It cannot do both simultaneously.
 Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the harmonic mean filter if Q = -1.
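A sketch of the contraharmonic mean filter (the harmonic mean filter is the Q = -1 special case); the window size is illustrative, and a small epsilon guards against division by zero:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(g, m=3, n=3, Q=1.5, eps=1e-8):
    # ratio of local sums of g^(Q+1) and g^Q over each m x n window
    g = g.astype(np.float64) + eps
    num = uniform_filter(g ** (Q + 1), size=(m, n))
    den = uniform_filter(g ** Q, size=(m, n))
    return num / den
```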

Order-Statistic Filters
Order-statistic filters are spatial filters whose response is based on ordering (ranking) the
values of the pixels contained in the image area encompassed by the filter. The ranking result
determines the response of the filter.

Median filter
 Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.

Max and min filters
 The max filter is useful for finding the brightest points in an image. Also, because pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area Sxy.
 The min filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.
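A sketch of these order-statistic filters using SciPy's ranked filters (the window size is an assumption):

```python
from scipy.ndimage import median_filter, maximum_filter, minimum_filter

def order_statistic_filters(g, m=3, n=3):
    med = median_filter(g, size=(m, n))    # good for certain random noise, e.g. salt-and-pepper
    mx  = maximum_filter(g, size=(m, n))   # brightest points; reduces pepper noise
    mn  = minimum_filter(g, size=(m, n))   # darkest points; reduces salt noise
    return med, mx, mn
```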

Order-Statistic Filters
Midpoint filter

 The midpoint filter simply computes the midpoint between the maximum and minimum
values in the area encompassed by the filter.
 This filter combines order statistics and averaging. It works best for randomly distributed
noise, like Gaussian or uniform noise.

Alpha-trimmed mean filter

 The alpha-trimmed mean filter deletes the d/2 lowest and the d/2 highest intensity values in the neighborhood Sxy and averages the remaining pixels.
 The alpha-trimmed filter is useful in situations involving multiple types of noise, such as a combination of salt-and-pepper and Gaussian noise.
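A sketch of both filters using a generic sliding window; the window size and the trimming parameter d are assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, generic_filter

def midpoint_filter(g, m=3, n=3):
    # midpoint between the maximum and minimum in each window
    g = g.astype(np.float64)
    return 0.5 * (maximum_filter(g, size=(m, n)) + minimum_filter(g, size=(m, n)))

def alpha_trimmed_mean(g, m=3, n=3, d=2):
    def trimmed(window):
        w = np.sort(window)
        return w[d // 2 : w.size - d // 2].mean()   # drop d/2 lowest and d/2 highest values
    return generic_filter(g.astype(np.float64), trimmed, size=(m, n))
```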

Adaptive Filters
The mean gives a measure of average intensity in the region over which the
mean is computed, and the variance gives a measure of contrast in that region.

Adaptive, local noise reduction filter

The adaptive filter operates on a local region Sxy. The response of the filter at any point (x, y) on which the region is centered is based on four quantities:
1) g(x, y), the value of the noisy image at (x, y);
2) the variance of the noise corrupting f(x, y) to form g(x, y);
3) the local mean of the pixels in Sxy; and
4) the local variance of the pixels in Sxy.
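A sketch of this filter, assuming the standard expression f̂(x, y) = g(x, y) − (σ_η²/σ_L²)[g(x, y) − m_L], with the ratio clipped at 1 when the noise variance exceeds the local variance; the window size is illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_noise_reduction(g, noise_var, m=7, n=7):
    g = g.astype(np.float64)
    local_mean = uniform_filter(g, size=(m, n))                      # m_L over Sxy
    local_var = uniform_filter(g ** 2, size=(m, n)) - local_mean ** 2
    ratio = np.minimum(noise_var / np.maximum(local_var, 1e-8), 1.0)  # clip at 1
    return g - ratio * (g - local_mean)
```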


Bandreject Filters & Bandpass Filters


 Periodic noise can be analyzed and filtered quite effectively using frequency domain techniques.

 The basic idea is that periodic noise appears as concentrated bursts of energy in the Fourier transform, at locations corresponding to the frequencies of the periodic interference.

 The approach is to use a selective filter to isolate the noise.

 The three types of selective filters (bandreject, bandpass, and notch) are used for basic periodic noise reduction.
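A sketch of a Butterworth bandreject filter applied in the frequency domain; the band center D0, band width W, and filter order are illustrative assumptions (the corresponding bandpass filter is simply 1 minus the bandreject filter):

```python
import numpy as np

def butterworth_bandreject(shape, D0=50.0, W=10.0, order=2):
    M, N = shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    U, V = np.meshgrid(u, v, indexing="ij")
    D = np.sqrt(U ** 2 + V ** 2)                       # distance from the center
    return 1.0 / (1.0 + ((D * W) / (D ** 2 - D0 ** 2 + 1e-8)) ** (2 * order))

def apply_frequency_filter(g, H):
    G = np.fft.fftshift(np.fft.fft2(g))                # centered spectrum
    return np.real(np.fft.ifft2(np.fft.ifftshift(G * H)))
```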

Bandreject Filters
(a) Image corrupted by sinusoidal noise.
(b) Spectrum of (a).
(c) Butterworth bandreject filter (white represents 1).
(d) Result of filtering.

Performing straight bandpass filtering on an image is not a common procedure because it generally removes too much image detail.

However, bandpass filtering is quite useful in isolating the effects on an image caused by selected frequency bands.

Inverse Filtering
The material in this section is our first step in studying restoration of images degraded by a degradation function H. There are three principal ways to estimate the degradation function: (1) observation, (2) experimentation, and (3) mathematical modeling.

The simplest approach to restoration is direct inverse filtering, where we compute an estimate of the transform of the original image simply by dividing the transform of the degraded image by the degradation function:

F̂(u, v) = G(u, v) / H(u, v)

Even when H(u, v) has no zero values, the noise term N(u, v)/H(u, v) can dominate the estimate wherever H(u, v) is small. H(0, 0) is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to frequencies near the origin, we reduce the probability of encountering zero (or very small) values.
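A sketch of inverse filtering restricted to frequencies near the origin, as described above; the cutoff radius is an illustrative parameter, and H is assumed to be the centered degradation function of the same size as the image:

```python
import numpy as np

def inverse_filter(g, H, radius=40.0, eps=1e-6):
    """Estimate F_hat = G / H, limited to frequencies within `radius` of the origin."""
    M, N = g.shape
    G = np.fft.fftshift(np.fft.fft2(g))
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    U, V = np.meshgrid(u, v, indexing="ij")
    D = np.sqrt(U ** 2 + V ** 2)
    lowpass = (D <= radius).astype(np.float64)     # ideal cutoff around the origin
    F_hat = (G / (H + eps)) * lowpass
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))
```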

Minimum Mean Square Error Filtering


Wiener Filtering
 The inverse filtering approach makes no explicit provision for handling noise.

 This approach incorporates both the degradation function and statistical characteristics of
noise into the restoration process.

 The method is founded on considering images and noise as random variables, and the
objective is to find an estimate of the uncorrupted image f such that the mean square error
between them is minimized.

 This error measure is given by e² = E{(f − f̂)²}, where E{·} denotes the expected value of the argument.

It is assumed that the noise and the image are uncorrelated; that one or the other has zero mean; and that the
intensity levels in the estimate are a linear function of the levels in the degraded image.

Minimum Mean Square Error Filtering


The minimum of the error function is given in the frequency domain by the expression

F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + Sη(u, v)/Sf(u, v) ) ] G(u, v)

where Sη(u, v) and Sf(u, v) are the power spectra of the noise and of the undegraded image. The filter, which consists of the terms inside the brackets, is commonly referred to as the minimum mean square error filter or the least square error filter.

The Wiener filter does not have the same problem as the inverse filter with zeros in the degradation function, unless the entire denominator is zero for the same value(s) of u and v.
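A sketch of the parametric form of this filter, in which the unknown ratio Sη/Sf is replaced by a constant K chosen interactively; K and the assumption of a centered H of the same size as the image are illustrative:

```python
import numpy as np

def wiener_filter(g, H, K=0.01):
    """Parametric Wiener filter: the ratio of power spectra is replaced by K."""
    G = np.fft.fftshift(np.fft.fft2(g))
    H2 = np.abs(H) ** 2
    W = np.conj(H) / (H2 + K)        # equivalent to (1/H) * |H|^2 / (|H|^2 + K)
    return np.real(np.fft.ifft2(np.fft.ifftshift(W * G)))
```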

A number of useful measures are based on the power spectra of noise and of the undegraded image.

One of the most important is the signal-to-noise ratio, approximated using frequency domain quantities as

SNR = Σ_u Σ_v |F(u, v)|² / Σ_u Σ_v |N(u, v)|²


Point, Line, and Edge Detection


 The three types of image features are isolated points, lines, and edges. Edge pixels are
pixels at which the intensity of an image function changes abruptly.

 Edge detectors are local image processing methods designed to detect edge pixels.

 A line may be viewed as an edge segment in which the intensity of the background on
either side of the line is either much higher or much lower than the intensity of the line
pixels.

Lines refer to thin structures, typically just a few pixels thick.



Point, Line, and Edge Detection


Recap

 Transitions in intensity between the solid objects and the background along the scan line show two types of edges: ramp edges (on the left) and step edges (on the right).
 Intensity transitions involving thin objects such as lines often are referred to as roof edges.


Point, Line, and Edge Detection


Recap
 First-order derivatives generally produce thicker edges in an image.
 Second-order derivatives have a stronger response to fine detail, such as thin lines,
isolated points, and noise.
 Second-order derivatives produce a double-edge response at ramp and step transitions
in intensity.
 The sign of the second derivative can be used to determine whether a transition into an
edge is from light to dark or dark to light.


Detection of Isolated Points


Using the Laplacian mask, a point has been detected at the location (x, y) on which the mask is centered if the absolute value of the response of the mask at that point exceeds a specified threshold.

Such points are labeled 1 in the output image and all others are labeled 0, thus producing a binary image.

These features are usually defined as regions in the image where there is significant edge strength in two
or more directions.

Detection of Isolated Points


This formulation simply measures the weighted differences between a pixel and its 8-neighbors. Intuitively, the idea is that the intensity of an isolated point will be quite different from its surroundings and thus will be easily detectable by this type of mask.

The only differences in intensity that are considered of interest are those large enough (as determined by the threshold) to be considered isolated points. Note that, as usual for a derivative mask, the coefficients sum to zero, indicating that the mask response will be zero in areas of constant intensity.

This type of detection process is rather specialized, because it is based on abrupt intensity changes at single-pixel locations that are surrounded by a homogeneous background in the area of the detector mask.
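A sketch of this point detector using the 8-neighbor Laplacian-type mask; the threshold value is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

def detect_isolated_points(img, threshold=200):
    # weighted difference between a pixel and its 8-neighbors (coefficients sum to zero)
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=np.float64)
    response = convolve(img.astype(np.float64), mask)
    return (np.abs(response) >= threshold).astype(np.uint8)   # 1 = point, 0 = background
```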


Line Detection
For line detection, second derivatives are expected to result in a stronger response and to produce thinner lines than first derivatives. Thus, we can use the Laplacian mask for line detection also, keeping in mind that the double-line effect of the second derivative must be handled properly.

(a) Binary image portion of a wire-bond mask for an electronic circuit.
(b) Laplacian image; because it contains negative values, scaling is necessary for display. As the magnified section shows, mid gray represents zero, darker shades of gray represent negative values, and lighter shades are positive. The double-line effect is clearly visible in the magnified region.
(c) Absolute value of the Laplacian.
(d) Positive values of the Laplacian.


Line Detection

 Suppose that an image with a constant background and containing various lines (oriented at 0°, +45°, −45°, and 90°) is filtered with the first (horizontal) line-detection mask. The maximum responses would occur at image locations in which a horizontal line passed through the middle row of the mask.
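A sketch of directional line detection with the four standard 3 x 3 line masks; the threshold is an assumption, and the labeling of the two diagonal orientations follows one common convention:

```python
import numpy as np
from scipy.ndimage import convolve

LINE_MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    "+45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    "-45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
}

def detect_lines(img, direction="horizontal", threshold=100):
    response = convolve(img.astype(np.float64), LINE_MASKS[direction])
    return (response >= threshold).astype(np.uint8)   # keep strong positive responses
```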

Edge detection
Edge detection is the approach used most frequently for segmenting images based on abrupt (local) changes in
intensity.

1. Image smoothing for noise reduction


2. Detection of edge points.
This is a local operation that extracts from an image all points that are potential
candidates to become edge points.
3. Edge localization.
The objective of this step is to select from the candidate edge points only the points that
are true members of the set of points comprising an edge.

Detecting changes in intensity for the purpose of finding edges can be accomplished using
first- or second-order derivatives.

Edge detection

Intensity profiles of step, ramp, and roof edge


The Marr-Hildreth edge detector


Marr and Hildreth argued
 that intensity changes are not independent of image scale and
so their detection requires the use of operators of different
sizes; and
 that a sudden intensity change will give rise to a peak or trough in the first derivative or, equivalently, to a zero crossing in the second derivative.
These ideas suggest that an operator used for edge detection should have two salient features.
 First, it should be a differential operator capable of computing a digital approximation of the first
or second derivative at every point in the image.
 Second, it should be capable of being “tuned” to act at any desired scale, so that large operators
can be used to detect blurry edges and small operators to detect sharply focused fine detail.


The Marr-Hildreth edge detector

Laplacian of Gaussian (LoG), also called the Mexican hat operator.

The Marr-Hildreth edge detector


The Marr-Hildreth edge-detection algorithm may be summarized as follows:
1. Filter the input image with an n x n Gaussian lowpass filter obtained by sampling the Gaussian function.

2. Compute the Laplacian of the image resulting from Step 1 using, for example, the 3 x 3 Laplacian mask.

3. Find the zero crossings of the image from Step 2.

 A zero crossing at a pixel p implies that the signs of at least two of its opposing neighboring pixels must differ. There are four cases to test: left/right, up/down, and the two diagonals.

 If the values of g(x, y) are being compared against a threshold, then not only must the signs of opposing neighbors be different, but the absolute value of their numerical difference must also exceed the threshold before we can call p a zero-crossing pixel.
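A sketch of these three steps; the Gaussian sigma and the zero-crossing threshold are illustrative assumptions (the first two steps could equivalently be done by filtering with a sampled LoG kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def marr_hildreth(img, sigma=2.0, threshold=4.0):
    # Step 1: Gaussian lowpass filtering
    smoothed = gaussian_filter(img.astype(np.float64), sigma)
    # Step 2: Laplacian of the smoothed image (3 x 3 mask)
    lap = convolve(smoothed, np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float))
    # Step 3: zero crossings where opposing neighbors differ in sign
    # and their absolute difference exceeds the threshold
    edges = np.zeros_like(lap, dtype=np.uint8)
    for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:   # left/right, up/down, two diagonals
        a = np.roll(lap, (di, dj), axis=(0, 1))
        b = np.roll(lap, (-di, -dj), axis=(0, 1))
        edges |= ((np.sign(a) != np.sign(b)) & (np.abs(a - b) > threshold)).astype(np.uint8)
    return edges
```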

Canny edge detector


Canny’s approach is based on three basic objectives:
 Low error rate.
 Edge points should be well localized.
 Single edge point response.

Summarizing, the Canny edge detection algorithm consists of the following basic steps:

1. Smooth the input image with a Gaussian filter.

2. Compute the gradient magnitude and angle images.

3. Apply nonmaxima suppression to the gradient magnitude image.

4. Use double thresholding and connectivity analysis to detect and link edges.
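A sketch using OpenCV's built-in Canny implementation, which performs the gradient, nonmaxima suppression, and double-thresholding steps internally; the threshold values and the Gaussian smoothing parameters are assumptions:

```python
import cv2

def canny_edges(path, low=50, high=150):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (5, 5), 1.4)   # Step 1: Gaussian smoothing
    # Steps 2-4: gradient, nonmaxima suppression, double thresholding + edge linking
    return cv2.Canny(blurred, low, high)
```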


Canny edge detector

Double thresholding classifies candidate pixels as strong edges (above the high threshold) or weak edges (between the low and high thresholds); weak edge pixels are kept only if they are connected to strong edge pixels.


Edge Linking and Boundary Detection


Local Processing

All points that are similar according to predefined criteria are linked, forming an edge of pixels that share common properties.

Similarity is established according to:
1. Strength (magnitude) of the gradient vector
2. Direction of the gradient vector
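A sketch of this local-processing idea: gradient magnitude and angle are computed with Sobel operators, and a pixel is marked as a linked edge candidate when both its magnitude and its direction are close to specified values; the thresholds TM, A, and TA are assumptions:

```python
import numpy as np
from scipy.ndimage import sobel

def local_edge_linking(img, TM=100.0, A=90.0, TA=45.0):
    img = img.astype(np.float64)
    gx = sobel(img, axis=1)                        # horizontal derivative
    gy = sobel(img, axis=0)                        # vertical derivative
    mag = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx))
    strong = mag > TM                              # strength criterion
    diff = np.abs(((angle - A + 180) % 360) - 180) # wrapped angular difference from A
    similar_dir = diff <= TA                       # direction criterion
    return (strong & similar_dir).astype(np.uint8)
```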


Local Processing

1. An image of the rear of a vehicle.
2. Gradient magnitude image.
3. Horizontally connected edge pixels.
4. Vertically connected edge pixels.
5. The logical OR of the two preceding images.
6. Final result obtained using morphological thinning.


Regional Processing


Segmentation Using Morphological Watersheds

A drainage basin or watershed is an extent or area of land where surface water from rain, melting snow, or ice converges to a single point at a lower elevation, usually the exit of the basin, where the waters join another waterbody, such as a river, lake, wetland, sea, or ocean.


Segmentation Using Morphological Watersheds

The watershed transformation is a powerful tool for image segmentation; it uses the region-based approach and searches for pixel and region similarities.

The watershed concept was first applied by Beucher and Lantuejoul in 1979; they used it to segment images of bubbles and SEM metallographic pictures.


Watershed Algorithm
An image gradient is a directional change in the intensity or color in an image. Image gradients may be used to extract information from images.

(Left) an intensity image; (middle) a gradient image in the x direction, measuring horizontal change in intensity; (right) a gradient image in the y direction, measuring vertical change in intensity.

Watershed Algorithm
The concept of watersheds is based on visualizing an image in three dimensions: two spatial
coordinates versus intensity

The set of points in the function 𝑓 can be seen as a topographic surface 𝑆: the lighter the gray value of the function at a point 𝑥, the higher the altitude of the corresponding point on the surface.

In the “topographic” interpretation, we consider three types of points:


(a) points belonging to a regional minimum;
(b) points at which a drop of water, if placed at the location of any of those points, would fall
with certainty to a single minimum; and
(c) points at which water would be equally likely to fall to more than one such minimum

Watershed Algorithm

 Suppose that a hole is punched in each regional minimum and that the entire topography
is flooded from below by letting water rise through the holes at a uniform rate.

 When the rising water in distinct catchment basins is about to merge, a dam is built to
prevent the merging.

 The flooding will eventually reach a stage when only the tops of the dams are visible above
the water line.

 These dam boundaries correspond to the divide lines of the watersheds.

 Therefore, they are the (connected) boundaries extracted by a watershed segmentation algorithm.

Watershed Algorithm

 One of the principal applications of watershed segmentation is in the extraction of nearly uniform objects from the background.

 Regions characterized by small variations in intensity have small gradient values.

 Thus, in practice, we often see watershed segmentation applied to the gradient of an image, rather than to the image itself.

https://docs.opencv.org/master/d7/d1c/tutorial_js_watershed.html
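A sketch of marker-based watershed segmentation with OpenCV, following the general structure of the tutorial linked above; the preprocessing thresholds and kernel size are assumptions:

```python
import cv2
import numpy as np

def watershed_segment(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    opening = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
    sure_bg = cv2.dilate(opening, kernel, iterations=3)            # sure background
    dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)     # sure foreground
    sure_fg = np.uint8(sure_fg)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)                  # label each catchment basin
    markers = markers + 1                                          # background becomes 1, not 0
    markers[unknown == 255] = 0                                     # unknown region gets label 0
    markers = cv2.watershed(img, markers)                          # dam (divide) pixels become -1
    img[markers == -1] = [0, 0, 255]                               # mark watershed lines in red
    return img, markers
```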
