Image Processing Lecture 6

(c) Highpass filter function (d) Result of highpass filtering


Figure 6.4 Low-pass and High-pass Filtering

Low-pass filtering results in blurring effects, while high-pass filtering results in sharper edges.
In the last example, the highpass filtered image has little smooth gray-
level detail as a result of setting F(0,0) to 0. This can be improved by
adding a constant to the filter, for example we add 0.75 to the previous
highpass filter to obtain the following sharp image.

Figure 6.5 Result of highpass filter modified by adding 0.75 to the filter



Image Processing Lecture 7

Smoothing frequency domain filters


Ideal Lowpass Filter (ILPF)
ILPF is the simplest lowpass filter that “cuts off” all high frequency
components of the DFT that are at a distance greater than a specified
distance D0 from the origin of the (centered) transform.
The transfer function of this filter is:

H(u,v) = 1   if D(u,v) ≤ D0
H(u,v) = 0   if D(u,v) > D0

where D0 is the cutoff frequency, and D(u,v) = [(u − M/2)² + (v − N/2)²]^(1/2) is the distance from point (u,v) to the center of the M×N (centered) transform.

(a) (b) (c)


Figure 7.1 (a) Ideal lowpass filter. (b) ILPF as an image. (c) ILPF radial cross section

The ILPF indicates that all frequencies inside a circle of radius D0 are
passed with no attenuation, whereas all frequencies outside this circle are
completely attenuated.
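As a minimal illustration of how such a filter can be applied, the following Python/NumPy sketch (array shape and cutoff value are arbitrary choices, not part of the lecture) builds the ILPF transfer function and multiplies it with the centered DFT of an image:

import numpy as np

def ideal_lowpass(shape, D0):
    """Centered ideal lowpass transfer function: 1 inside radius D0, 0 outside."""
    M, N = shape
    U, V = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2, indexing="ij")
    D = np.sqrt(U**2 + V**2)          # distance from the center of the transform
    return (D <= D0).astype(float)

def apply_frequency_filter(img, H):
    """Multiply the centered DFT of img by H and transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))

img = np.random.rand(128, 128)        # stand-in for a grayscale image
smoothed = apply_frequency_filter(img, ideal_lowpass(img.shape, D0=30))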

The next figure shows a gray image with its Fourier spectrum. The circles
superimposed on the spectrum represent cutoff frequencies 5, 15, 30, 80
and 230.


(a) (b)
Figure 7.2 (a) Original image. (b) its Fourier spectrum
The figure below shows the results of applying ILPF with the previous
cutoff frequencies.

(a) (b)

(c) (d)


(e) (f)
Figure 7.3 (a) Original image. (b) - (f) Results of ILPF with cutoff frequencies 5, 15, 30, 80,
and 230 respectively.

We can see the following effects of ILPF:


1. Blurring effect which decreases as the cutoff frequency increases.
2. Ringing effect which becomes finer (i.e. decreases) as the cutoff
frequency increases.

Gaussian Lowpass Filter (GLPF)


The GLPF with cutoff frequency D0 is defined as:

H(u,v) = e^( −D²(u,v) / (2·D0²) )

where D(u,v) is the distance from point (u,v) to the center of the transform.

(a) (b) (c)


Figure 7.4 (a) Gaussian lowpass filter. (b) GLPF as an image. (c) GLPF radial cross section


Unlike ILPF, the GLPF transfer function does not have a sharp transition
that establishes a clear cutoff between passed and filtered frequencies.
Instead, GLPF has a smooth transition between low and high frequencies.
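A corresponding sketch of the GLPF transfer function (again only a hedged illustration; it assumes the same centered-DFT convention as the lowpass helper above):

import numpy as np

def gaussian_lowpass(shape, D0):
    """Centered GLPF: H(u,v) = exp(-D^2(u,v) / (2*D0^2)), a smooth roll-off."""
    M, N = shape
    U, V = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2, indexing="ij")
    D2 = U.astype(float)**2 + V.astype(float)**2
    return np.exp(-D2 / (2.0 * D0**2))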

The figure below shows the results of applying GLPF on the image in
Figure 7.2(a) with the same previous cutoff frequencies.

(a) (b)

(c) (d)


(e) (f)
Figure 7.5 (a) Original image. (b) - (f) Results of GLPF with cutoff frequencies 5, 15, 30, 80,
and 230 respectively.

We can see the following effects of GLPF compared to ILPF:


1. Smooth transition in blurring as a function of increasing cutoff
frequency.
2. No ringing effect.
Smoothing (lowpass) filtering is useful in many applications. For
example, a GLPF can be used to bridge small gaps in broken characters by blurring them, as shown below. This is useful for character recognition.

(a) (b)
Figure 7.6 (a) Text of poor resolution. (b) Result of applying GLPF with cutoff=80 on (a)


GLPF can also be used for cosmetic processing prior to printing and
publishing as shown below.

(a) (b)
Figure 7.7 (a) Original image. (b) Result of filtering with GLPF with cutoff=80


Sharpening frequency domain filters


Edges and sudden changes in gray levels are associated with high
frequencies. Thus to enhance and sharpen significant details we need to
use highpass filters in the frequency domain
For any lowpass filter there is a corresponding highpass filter:

Hhp(u,v) = 1 − Hlp(u,v)

Ideal Highpass Filter (IHPF)


The IHPF cuts off all low-frequency components of the DFT and passes, without attenuation, the high frequencies that lie beyond a specified distance D0 from the center of the (centered) DFT. Its transfer function is:

H(u,v) = 0   if D(u,v) ≤ D0
H(u,v) = 1   if D(u,v) > D0

where D0 is the cutoff frequency, and D(u,v) is the distance from (u,v) to the center of the transform.

(a) (b) (c)


Figure 7.8 (a) Ideal highpass filter. (b) IHPF as an image. (c) IHPF radial cross section

The IHPF sets to zero all frequencies inside a circle of radius D0 while
passing, without attenuation, all frequencies outside the circle.
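A one-line way to build the IHPF from the ILPF, shown here as a small hedged sketch in the same style as the lowpass helpers above:

import numpy as np

def ideal_highpass(shape, D0):
    """IHPF as the complement of the ideal lowpass: Hhp(u,v) = 1 - Hlp(u,v)."""
    M, N = shape
    U, V = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2, indexing="ij")
    D = np.sqrt(U**2 + V**2)
    return 1.0 - (D <= D0).astype(float)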
The figure below shows the results of applying IHPF with cutoff
frequencies 15, 30, and 80.


(a) (b)

(c) (d)
Figure 7.9 (a) Original image. (b) - (d) Results of IHPF with cutoff frequencies 15, 30, and 80
respectively.

We can see the following effects of IHPF:


1. Ringing effect.
2. Edge distortion (i.e. distorted, thickened object boundaries).
Both effects are decreased as the cutoff frequency increases.


Gaussian Highpass Filter (GHPF)


The Gaussian Highpass Filter (GHPF) with cutoff frequency at distance D0 is defined as:

H(u,v) = 1 − e^( −D²(u,v) / (2·D0²) )

(a) (b) (c)


Figure 7.10 (a) Gaussian highpass filter. (b) GHPF as an image. (c) GHPF radial cross section

The figure below shows the results of applying GHPF with cutoff
frequencies 15, 30 and 80.

(a) (b)


(c) (d)

Figure 7.11 (a) Original image. (b) - (d) Results of GHPF with cutoff frequencies 15, 30, and
80 respectively.

The effects of GHPF in comparison with IHPF are:


1. No ringing effect.
2. Less edge distortion.
3. The results are smoother than those obtained by IHPF.



Image Processing Lecture 8

Wavelets and Multiresolution Processing


Wavelet Transform (WT)
Wavelets (i.e. small waves) are mathematical functions that represent
scaled and translated (shifted) copies of a finite-length waveform called
the mother wavelet.
A wavelet transform (WT) is based on wavelets. It is used to
analyze a signal (image) into different frequency components at different
resolution scales (i.e. multiresolution). This reveals an image's spatial and frequency attributes simultaneously.

2D-Discrete Wavelet Transform (2D-DWT)


The DWT provides a compact representation of a signal’s frequency
components with strong spatial support. DWT decomposes a signal into
frequency subbands at different scales from which it can be perfectly
reconstructed.
2D-signals such as images can be decomposed using many wavelet
decomposition filters in many different ways. We only study the Haar
wavelet filter and the pyramid decomposition method.

The Haar Wavelet Transform (HWT)


The Haar wavelet is discontinuous and resembles a step function.
For a function f, the HWT is defined as:


where L is the decomposition level, a is the approximation subband and d


is the detail subband.

For example, if f = {f1, f2, f3, f4, f5, f6, f7, f8} is a time signal of length 8, then the HWT decomposes f into an approximation subband a containing the low frequencies (weighted sums of neighbouring pairs of samples) and a detail subband d containing the high frequencies (weighted differences of neighbouring pairs).

To apply HWT on images, we first apply a one level Haar wavelet to


each row and secondly to each column of the resulting "image" of the
first operation. The resulting image is decomposed into four subbands: LL, HL, LH, and HH (L = Low, H = High). The LL-subband contains
an approximation of the original image while the other subbands contain
the missing details. The LL-subband output from any stage can be
decomposed further.
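The following NumPy sketch performs one decomposition level on an image whose sides are even; it assumes the orthonormal 1/√2 Haar weighting, which may differ from the normalization used in the lecture:

import numpy as np

def haar2d_level1(img):
    """One-level 2D Haar transform: transform the rows, then the columns."""
    s = np.sqrt(2.0)
    lo = (img[:, 0::2] + img[:, 1::2]) / s          # row lowpass (sums of pairs)
    hi = (img[:, 0::2] - img[:, 1::2]) / s          # row highpass (differences)
    rows = np.hstack([lo, hi])
    lo2 = (rows[0::2, :] + rows[1::2, :]) / s       # column lowpass
    hi2 = (rows[0::2, :] - rows[1::2, :]) / s       # column highpass
    return np.vstack([lo2, hi2])                    # top-left quadrant is the LL subband

coeffs = haar2d_level1(np.arange(64, dtype=float).reshape(8, 8))
LL = coeffs[:4, :4]    # approximation subband; can be decomposed again for level 2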
The figure below shows the result of one and two level HWT based
on the pyramid decomposition. Figure 8.2 shows an image decomposed
with 3-level Haar wavelet transform.


(a) Decomposition Level 1 (b) Decomposition Level 2


Figure 8.1 Pyramid decomposition using Haar wavelet filter

(a) Original image


(b) Level 1

(c) Level 2


(d) Level 3
Figure 8.2 Example of a Haar wavelet transformed image
Wavelet transformed images can be perfectly reconstructed from the four subbands using the inverse wavelet transform.

Inverse Haar Wavelet Transform (IHWT)


The inverse of the Haar wavelet transform is computed in the reverse
order as follows:

To apply IHWT on images, we first apply a one level inverse Haar


wavelet to each column and secondly to each row of the resulting
"image" of the first operation.


Statistical Properties of Wavelet subbands


The distribution of the LL-subband approximates that of the original
image but all non-LL subbands have a Laplacian distribution. This
remains valid at all depths (i.e. decomposition levels).

(a) (b)

(c) (d)
Figure 8.3 Histogram of (a) LL-subband (b) HL-subband (c) LH-subband (d) HH-subband of
subbands in Figure 8.2 (b)


Wavelet Transforms in image processing


Any wavelet-based image processing approach has the following steps:
1. Compute the 2D-DWT of an image
2. Alter the transform coefficients (i.e. subbands)
3. Compute the inverse transform
Wavelet transforms are used in a wide range of image applications. These
include:
• Image and video compression
• Feature detection and recognition
• Image denoising
• Face recognition
Most applications benefit from the statistical properties of the non-LL
subbands (The Laplacian distribution of the wavelet coefficients in these
subbands).

Wavelet-based edge detection


Figure below shows a gray image and its wavelet transform for one-level
of decomposition.

(a)


(b)
Figure 8.4 (a) Gray image. (b) Its one-level wavelet transform

Note the horizontal edges of the original image are present in the HL
subband of the upper-right quadrant of the Figure above. The vertical
edges of the image can be similarly identified in the LH subband of the
lower-left quadrant.
To combine this information into a single edge image, we simply zero the
LL subband of the transform, compute the inverse transform, and take the
absolute value.
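A compact sketch of this edge-detection recipe using the PyWavelets package (pywt), which is assumed to be available; the lecture itself does not prescribe a particular library:

import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_edges(img):
    """Zero the LL subband of a one-level Haar transform, invert, take the absolute value."""
    LL, details = pywt.dwt2(img, "haar")
    edges = pywt.idwt2((np.zeros_like(LL), details), "haar")
    return np.abs(edges)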
The next Figure shows the modified transform and resulting edge image.


(a)

(b)
Figure 8.5 (a) Transform modified by zeroing the LL subband. (b) Resulting edge image


Wavelet-based image denoising


The general wavelet-based procedure for denoising the image is as
follows:
1. Choose a wavelet filter (e.g. Haar, symlet, etc…) and number of
levels for the decomposition. Then compute the 2D-DWT of the
noisy image.
2. Threshold the non-LL subbands.
3. Perform the inverse wavelet transform on the original
approximation LL-subband and the modified non-LL subbands.
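A hedged sketch of these three steps using PyWavelets (assumed available); the threshold value 85 is the one used in the example that follows:

import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_denoise(noisy, wavelet="haar", levels=2, thresh=85):
    """Hard-threshold every non-LL subband, keep the LL subband, reconstruct."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    kept = [coeffs[0]]                                   # LL approximation unchanged
    for detail in coeffs[1:]:                            # three detail subbands per level
        kept.append(tuple(pywt.threshold(c, thresh, mode="hard") for c in detail))
    return pywt.waverec2(kept, wavelet)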

The figure below shows a noisy image and its wavelet transform for two-
levels of decomposition.

(a)


(b)
Figure 8.6 (a) Noisy image. (b) Its two-level wavelet transform

Now we threshold all the non-LL subbands at both decomposition levels


by 85. Then we perform the inverse wavelet transform on the LL-subband
and the modified (i.e. thresholded) non-LL subbands to obtain the
denoised image shown in the next figure.


Figure 8.7 Denoised image generated by thresholding all non-LL subbands by 85

In the image above, we can see the following:


• Noise Reduction.
• Loss of quality at the image edges.
The loss of edge detail can be reduced by zeroing the non-LL subbands at
the first decomposition level and only the HH-subband at the second
level. Then we apply the inverse transform to obtain the denoised image
in the figure below.

Figure 8.8 Denoised image generated by zeroing selected non-LL subbands



Image Processing Lecture 9

Image Restoration
Image restoration attempts to reconstruct or recover an image that has
been degraded by a degradation phenomenon. As in image enhancement,
the ultimate goal of restoration techniques is to improve an image in some
predefined sense.

Image restoration vs. image enhancement


Image restoration                              Image enhancement

1. Is an objective process.                    Is a subjective process.

2. Formulates a criterion of goodness          Involves heuristic procedures designed
   that will yield an optimal estimate         to manipulate an image in order to
   of the desired result.                      satisfy the human visual system.

3. Techniques include noise removal and        Techniques include contrast
   deblurring (removal of image blur).         stretching.

Like enhancement techniques, restoration techniques can be performed in


the spatial domain and frequency domain. For example, noise removal is
applicable using spatial domain filters whereas deblurring is performed
using frequency domain filters.

A Model of Image Degradation & Restoration


As shown in the next figure, image degradation is a process that acts on
an input image f(x,y) through a degradation function H and an additive
noise η(x,y). It results in a degraded image g(x,y) such that:

g(x,y) = h(x,y) * f(x,y) + η(x,y)


where h(x,y) is the spatial representation of the degradation function and


the symbol “ * ” indicates convolution.
Note that we only have the degraded image g(x,y). The objective of
restoration is to obtain an estimate f̂(x,y) of the original image. We want the estimate to be as close as possible to the original input image and, in general, the more we know about H and η, the closer f̂(x,y) will be to f(x,y).

Figure 9.1 A model of the image degradation/restoration process

In the frequency domain, this model is equivalent to:

G(u,v) = H(u,v) F(u,v) + N(u,v)

The approach that we will study is based on various types of image


restoration filters. We assume that H is the identity operator, and we deal
only with degradations due to noise.


Noise and its characteristics


Noise in digital images arises during:
• Acquisition: environmental conditions (light level & sensor
temperature), and type of cameras
• Transmission: interference in the transmission channel
To remove noise we need to understand the spatial characteristics of
noise and its frequency characteristics (Fourier spectrum).
Generally, spatial noise is assumed to be independent of position in
an image and uncorrelated to the image itself (i.e. there is no correlation
between pixel values and the values of noise components). Frequency
properties refer to the frequency content of noise in the Fourier sense.

Noise Models
Spatial noise is described by the statistical behavior of the gray-level
values in the noise component of the degraded image. Noise can be
modeled as a random variable with a specific probability distribution
function (PDF). Important examples of noise models include:
1. Gaussian Noise
2. Rayleigh Noise
3. Gamma Noise
4. Exponential Noise
5. Uniform Noise
6. Impulse (Salt & Pepper) Noise


Gaussian Noise
The PDF of Gaussian noise is given by

p(z) = (1 / (√(2π)·σ)) e^( −(z − μ)² / (2σ²) )

Figure 9.2 Gaussian noise PDF

where z is the gray value, μ is the mean and σ is the standard deviation.

Rayleigh Noise
The PDF of Rayleigh noise is given by

p(z) = (2/b)(z − a) e^( −(z − a)² / b )   for z ≥ a
p(z) = 0                                  for z < a

Figure 9.3 Rayleigh noise PDF


Impulse (Salt & Pepper) Noise


The PDF of impulse noise is given by

p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    otherwise

Figure 9.4 Impulse noise PDF

If b > a, then gray level b appears as a light dot (salt), otherwise the gray
level a appears as a dark dot (pepper).
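For illustration, the two most commonly used of these models can be simulated as follows (a NumPy sketch; the parameter values are arbitrary examples, not values from the lecture):

import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, mu=0.0, sigma=20.0):
    """Additive Gaussian noise with mean mu and standard deviation sigma."""
    return np.clip(img + rng.normal(mu, sigma, img.shape), 0, 255)

def add_salt_and_pepper(img, Pa=0.05, Pb=0.05, a=0, b=255):
    """Impulse noise: pepper (value a) with probability Pa, salt (value b) with Pb."""
    out = img.copy()
    r = rng.random(img.shape)
    out[r < Pa] = a
    out[(r >= Pa) & (r < Pa + Pb)] = b
    return out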

Determining noise models


The simple image below is a well-suited test pattern for illustrating the
effect of adding various noise models.

Figure 9.5 Test pattern used to illustrate


the characteristics of the noise models

The next figure shows degraded (noisy) images resulted from adding the
previous noise models to the above test pattern image.


Gaussian Rayleigh Gamma

Exponential Uniform Salt & Pepper

Figure 9.6 Images and histograms from adding Gaussian, Rayleigh, Gamma, Exponential,
Uniform, and Salt & Pepper noise.


To determine the noise model in a noisy image, one may select a


relatively small rectangular sub-image of a relatively smooth region. The
histogram of the sub-image approximates the probability distribution of
the corrupting model of noise. This is illustrated in the figure below.

(a) (b) (c)

(d) (e) (f)


Figure 9.10 (a) Gaussian noisy image. (b) sub-image extracted from a. (c) histogram of b
(d) Rayleigh noisy image. (e) sub-image extracted from d. (f) histogram of e

Image restoration in the presence of Noise Only


When the only degradation present in an image is noise, the degradation
is modeled as:

g(x,y) = f(x,y) + η(x,y)

and, in the frequency domain,

G(u,v) = F(u,v) + N(u,v)


Spatial filtering is the method of choice in situations when only additive


noise is present. Spatial filters designed to remove noise include:
1. Order Statistics Filters: e.g. Min, Max, & Median
2. Adaptive Filters: e.g. adaptive median filter

Order-Statistics Filters
We have used one of these filters (the median filter) in image enhancement.
We now use additional filters (min and max) in image restoration.

Min filter
This filter is useful for finding the darkest points in an image. Also, it
reduces salt noise as a result of the min operation.

(a) (b)
Figure 9.11 (a) image corrupted by salt noise. (b) Result of filtering (a) with a 3×3 min filter.

Max filter
This filter is useful for finding the brightest points in an image. Also,
because pepper noise has very low values, it is reduced by this filter as a
result of the max operation.
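A short sketch of these order-statistics filters using scipy.ndimage (assumed available here; the random array stands in for a noisy image):

import numpy as np
from scipy.ndimage import maximum_filter, median_filter, minimum_filter  # assumed available

noisy = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
salt_reduced    = minimum_filter(noisy, size=3)   # 3x3 min filter attenuates salt noise
pepper_reduced  = maximum_filter(noisy, size=3)   # 3x3 max filter attenuates pepper noise
median_smoothed = median_filter(noisy, size=3)    # 3x3 median filter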


(a) (b)
Figure 9.12 (a) image corrupted by pepper noise. (b) Result of filtering (a) with a 3×3 max
filter.

Adaptive Filters
The previous spatial filters are applied regardless of local image
variation. Adaptive filters change their behavior using local statistical
parameters in the mask region. Consequently, adaptive filters outperform
the non-adaptive ones.

Adaptive median filter


The median filter performs well as long as the spatial density of the
impulse noise is not large (i.e. Pa and Pb less than 0.2). Adaptive median
filtering can handle impulse noise with probabilities even larger than
these. Moreover the adaptive median filter seeks to preserve detail while
smoothing non-impulse noise, which the traditional median filter does not do.
The adaptive median filter aims to replace f(x,y) with the median of a neighborhood of adaptively increasing size (up to a specified maximum), as long as that median differs from the local max and min values while f(x,y) equals the local min or max. Otherwise, f(x,y) is not changed.


Consider the following notation:


Sxy = mask region (neighborhood sub-image)
zmin = minimum gray level value in Sxy
zmax = maximum gray level value in Sxy
zmed = median of gray levels in Sxy
zxy = gray level at coordinates (x, y)
Smax = maximum allowed size of Sxy
The adaptive median filtering algorithm works in two levels A and B as
follows:
Level A: A1 = zmed - zmin
A2 = zmed - zmax
If A1 > 0 AND A2 < 0, Go to level B
Else increase the window size
If window size <= Smax repeat level A
Else output zmed.
Level B: B1 = zxy - zmin
B2 = zxy - zmax
If B1 > 0 AND B2 < 0, output zxy
Else output zmed.
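A direct, unoptimized implementation of the two-level algorithm above (a sketch; border handling by edge replication is my own choice):

import numpy as np

def adaptive_median(img, s_max=7):
    """Adaptive median filter following levels A and B described above."""
    pad = s_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            size = 3
            while True:
                k = size // 2
                S = padded[x + pad - k:x + pad + k + 1, y + pad - k:y + pad + k + 1]
                zmin, zmax, zmed = S.min(), S.max(), np.median(S)
                if zmin < zmed < zmax:                              # level A satisfied
                    zxy = img[x, y]
                    out[x, y] = zxy if zmin < zxy < zmax else zmed  # level B
                    break
                size += 2                                           # enlarge the window
                if size > s_max:
                    out[x, y] = zmed
                    break
    return out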

The next figure shows an example of filtering an image corrupted by salt-


and-pepper noise with density 0.25 using 7×7 median filter and the
adaptive median filter with Smax = 7.


(a)

(b) (c)
Figure 9.13 (a) Image corrupted by salt&pepper noise with density 0.25. (b) Result obtained
using a 7×7 median filter. (c) Result obtained using adaptive median filter with Smax = 7.

From this example, we find that the adaptive median filter has three main
purposes:
1. to remove salt-and-pepper (impulse) noise.
2. to provide smoothing of other noise that may not be impulsive.
3. to reduce distortion, such as excessive thinning or thickening of
object boundaries.



Image Processing Lecture 10

Morphological Image Processing


• Morphology is concerned with image analysis methods whose
outputs describe image content (i.e. extract “meaning” from an
image).
• Mathematical morphology is a tool for extracting image
components that can be used to represent and describe region
shapes such as boundaries and skeletons.
• Morphological methods include filtering, thinning and pruning.
These techniques are based on set theory. All morphology
functions are defined for binary images, but most have a natural extension to grayscale images.

Basic Concepts of Set Theory


A set is specified by the elements between two braces: { }. The elements
of the sets are the coordinates (x,y) of pixels representing objects or other
features in an image.
Let A be a set in the 2D image space Z²:
• If a = (a1, a2) is an element of A, we write a ∈ A
• If a is not an element of A, we write a ∉ A
• The empty set is a set with no elements and is denoted by ∅
• If every element of a set A is also an element of another set B, then A is said to be a subset of B, denoted A ⊆ B
• The union of two sets A and B is denoted by A ∪ B
• The intersection of two sets A and B is denoted by A ∩ B
• Two sets A and B are said to be disjoint if they have no common elements. This is denoted by A ∩ B = ∅


• The complement of a set A is the set of elements not contained in A. This is denoted by Aᶜ = { w | w ∉ A }
• The difference of two sets A and B, denoted A – B, is defined as A – B = { w | w ∈ A, w ∉ B } = A ∩ Bᶜ
• The reflection of set B, denoted B̂, is defined as B̂ = { w | w = −b, for b ∈ B }
• The translation of set A by point z = (z1, z2), denoted (A)z, is defined as (A)z = { c | c = a + z, for a ∈ A }

The figure below illustrates the preceding concepts.

Figure 10.1 Basic concepts of Set Theory


Logic Operations Involving Binary Images


A binary image is an image whose pixel values are 0 (representing black)
or 1 (representing white, i.e. 255). The usual set operations of
complement, union, intersection, and difference can be defined easily in
terms of the corresponding logic operations NOT, OR and AND. For
example:
• Intersection operation is implemented by AND operation
• Union operation is implemented by OR operation
The figure below shows an example of using logic operations to perform
set operations on two binary images.

(a) (b)

a&b a|b a – b = a & bc


Figure 10.2 Using logic operations for applying set operations on two binary images


Structuring Element
A morphological operation is based on the use of a filter-like binary
pattern called the structuring element of the operation. Structuring
element is represented by a matrix of 0s and 1s; for simplicity, the zero
entries are often omitted.
Symmetric with respect to its origin, for example:

Line (a 5×5 diagonal line of 1s):
0 0 0 0 1
0 0 0 1 0
0 0 1 0 0
0 1 0 0 0
1 0 0 0 0

Diamond:
0 1 0
1 1 1
0 1 0

Non-symmetric: an element that is not symmetric about its origin differs from its reflection, which is obtained by rotating the element by 180° about the origin.

Dilation
Dilation is an operation used to grow or thicken objects in binary images.
The dilation of a binary image A by a structuring element B is defined as:

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }

This equation is based on obtaining the reflection of B about its origin


and translating (shifting) this reflection by z. Then, the dilation of A by B
is the set of all structuring element origin locations where the reflected
and translated B overlaps with A by at least one element.

Example: Use the following structuring element to dilate the binary


image below.
Structuring element B (a 5-pixel diagonal line, origin at its center):
0 0 0 0 1
0 0 0 1 0
0 0 1 0 0
0 1 0 0 0
1 0 0 0 0

Binary image A:
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0

Solution:
We first find the reflection of B. Since B is symmetric about its origin, B̂ = B. Marking every origin position z at which the translated B̂ overlaps A by at least one element gives the dilation A ⊕ B:

0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1 1 0
0 0 0 0 1 1 1 1 1 1 1 0
0 0 0 1 1 1 1 1 1 1 1 0
0 0 1 1 1 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 1 0 0 0
0 1 1 1 1 1 1 1 0 0 0 0
0 1 1 1 1 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0

Dilation can be used for bridging gaps, for example, in broken/unclear


characters as shown in the figure below.

(a)

(b)

Figure 10.3 (a) Broken-text binary image. (b) Dilated image.


Erosion
Erosion is used to shrink or thin objects in binary images. The erosion of
a binary image A by a structuring element B is defined as:

A ⊖ B = { z | (B)z ⊆ A }

The erosion of A by B is the set of all structuring element origin locations


where the translated B does not overlap with the background of A.
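Both operations are available in scipy.ndimage (assumed here); the small sketch below uses an object and structuring elements similar to those in the worked examples:

import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion  # assumed available

A = np.zeros((9, 12), bool)
A[3:6, 3:9] = True                                  # a 3x6 rectangular object
B_diag = np.eye(5, dtype=bool)[::-1]                # 5-pixel diagonal line
B_vert = np.ones((3, 1), bool)                      # vertical line of three 1s

dilated = binary_dilation(A, structure=B_diag)      # thickens / grows the object
eroded  = binary_erosion(A, structure=B_vert)       # shrinks it to its middle row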

Example: Use the following structuring element to erode the binary


image below.
Structuring element B (a vertical line of three 1s, origin at its center):
1
1
1

Binary image A:
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0

Solution:
The translated B fits entirely inside the object only when its origin lies on the middle row of the 3-row block, so the erosion A ⊖ B is:

0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0


Erosion can be used to remove isolated features (i.e. irrelevant detail)


which may include noise or thin edges as shown in the figure below.

(a) (b)
Figure 10.4 (a) Binary image. (b) Eroded image.

Combining Dilation & Erosion - Opening Morphology


The opening operation erodes an image and then dilates the eroded image
using the same structuring element for both operations, i.e.

A ∘ B = (A ⊖ B) ⊕ B

where A is the original image and B is the structuring element.


The opening operation is used to remove regions of an object that cannot
contain the structuring element, smooth object contours, and break thin connections, as shown in the figure below.

(a)


(b)

(c)
Figure 10.5 (a) Original binary image. (b) Result of opening with square structuring element
of size 10 pixels. (c) Result of opening with square structuring element of size 20 pixels.

The opening operation can also be used to remove small objects in an


image while preserving the shape and size of larger objects as illustrated
in the figure below.

(a) (b)
Figure 10.6 (a) Original binary image. (b) Result of opening with square structuring element
of size 13 pixels.


Combining Dilation & Erosion - Closing Morphology


The closing operation dilates an image and then erodes the dilated image
using the same structuring element for both operations, i.e.

A • B = (A ⊕ B) ⊖ B

where A is the original image and B is the structuring element.


The closing operation fills holes that are smaller than the structuring
element, joins narrow breaks, fills gaps in contours, and smooths object contours, as shown in the figure below.

(a)

(b)
Figure 10.7 (a) Result of closing with square structuring element of size 10 pixels. (b) Result of closing with square structuring element of size 20 pixels.

Combining Opening & Closing Morphology


Combining opening and closing can be quite effective in removing noise
as illustrated in the next figure.


(a)

(b) (c)
Figure 10.8 (a) Noisy fingerprint. (b) Result of opening (a) with square structuring element of
size 3 pixels. (c) Result of closing (b) with the same structuring element.

Note that the noise was removed by opening the image, but this process
introduced numerous gaps in the ridges of the fingerprint. These gaps can
be filled by following the opening with a closing operation.
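A sketch of this opening-then-closing cleanup with scipy.ndimage (assumed available); the random array stands in for a noisy binary image:

import numpy as np
from scipy.ndimage import binary_closing, binary_opening  # assumed available

se = np.ones((3, 3), bool)                      # square structuring element of size 3
noisy = np.random.rand(128, 128) > 0.5          # stand-in for a noisy binary image
opened = binary_opening(noisy, structure=se)    # removes small specks (and opens gaps)
cleaned = binary_closing(opened, structure=se)  # closes the gaps the opening introduced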



Image Processing Lecture 11

The Hit-or-Miss Transformation


The hit-or-miss transformation of an image A by B is denoted by A ⊛ B.
B is a pair of structuring elements B = (B1, B2) rather than a single element:
B1: set of elements of B associated with an object (foreground)
B2: set of elements of B associated with the background
The hit-or-miss transform is defined as follows:

A ⊛ B = (A ⊖ B1) ∩ (Aᶜ ⊖ B2)

This transform is useful for locating all pixel configurations that match the B1 structure (i.e. a hit) but do not match that of B2 (i.e. a miss). Thus, the hit-or-miss transform is used for shape detection.
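The definition translates directly into two erosions and an intersection; a minimal sketch using scipy.ndimage.binary_erosion (assumed available):

import numpy as np
from scipy.ndimage import binary_erosion  # assumed available

def hit_or_miss(A, B1, B2):
    """A hit-or-miss B = (A eroded by B1) AND (complement of A eroded by B2)."""
    A = A.astype(bool)
    return binary_erosion(A, structure=B1) & binary_erosion(~A, structure=B2)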

Example: Use the hit-or-miss transform to identify the locations of the


following shape pixel configuration in the image below using the two
structuring elements B1 and B2.

Shape (the pixel configuration to be located):
0 1 0
1 1 1
0 1 0

B1 (elements associated with the object):
0 1 0
1 1 1
0 1 0

B2 (elements associated with the background, the four corners):
1 0 1
0 0 0
1 0 1

Image A:
0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0
0 0 1 0 0 1 1 1 1 0 0
0 1 1 1 0 0 0 0 0 0 0
0 0 1 0 0 0 0 1 1 0 0
0 0 0 0 1 0 0 1 1 1 0
0 0 0 1 1 1 0 0 1 0 0
0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0

Solution:

A ⊖ B1:
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0

Aᶜ:
1 1 1 1 1 1 1 1 1 1 1
1 1 0 1 1 1 1 1 1 1 1
1 1 0 1 1 0 0 0 0 1 1
1 0 0 0 1 1 1 1 1 1 1
1 1 0 1 1 1 1 0 0 1 1
1 1 1 1 0 1 1 0 0 0 1
1 1 1 0 0 0 1 1 0 1 1
1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1

Aᶜ ⊖ B2:
1 0 1 0 1 1 1 1 1 1 1
1 0 1 0 0 0 0 0 0 0 1
0 0 0 0 0 1 1 1 1 1 1
1 0 1 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 1
1 1 1 0 1 0 0 0 0 0 0
1 1 0 0 0 0 0 0 1 0 1
1 1 1 0 1 0 1 1 1 1 1

A ⊛ B = (A ⊖ B1) ∩ (Aᶜ ⊖ B2):
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0

The figure below shows an example of applying the hit-or-miss transform


on the image in the previous example.

(a) (b)
Figure 11.1 (a) Binary image. (b) Result of applying hit-or-miss transform.


Basic Morphological Algorithms (Applications)


The principal application of morphology is extracting image components that are useful in the representation and description of shape. Morphological algorithms are used for boundary extraction, skeletonization (i.e. extracting the skeleton of an object), and thinning.

Boundary Extraction
The boundary of a set A, denoted by β(A), can be obtained by eroding A by a structuring element B and then taking the set difference between A and its erosion:

β(A) = A − (A ⊖ B)

where B is the structuring element.
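A one-function sketch of boundary extraction (scipy.ndimage assumed available; a 3×3 square element by default):

import numpy as np
from scipy.ndimage import binary_erosion  # assumed available

def boundary(A, B=np.ones((3, 3), bool)):
    """beta(A) = A minus the erosion of A by B: a one-pixel-thick boundary for a 3x3 B."""
    A = A.astype(bool)
    return A & ~binary_erosion(A, structure=B)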


The figure below shows an example of extracting the boundary of an
object in a binary image.

(a) (b)
Figure 11.2 (a) Binary image. (b) Object boundary extracted
using the previous equation and 3×3 square structuring element.

Note that, because the structuring element is 3×3 pixels, the resulting boundary is one pixel thick. Thus, using a 5×5 structuring element
will produce a boundary between 2 and 3 pixels thick as shown in the
next figure.


Figure 11.3 Object boundary extracted


using 5×5 square structuring element

Thinning
Thinning means reducing binary objects or shapes in an image to strokes
that are a single pixel wide. The thinning of a set A by a structuring
element B is defined in terms of the hit-or-miss transform as:

A ⊗ B = A − (A ⊛ B)

Since we only match the pattern (shape) with the structuring element, no background operation is required in the hit-or-miss transform here.
In practice, B denotes a sequence of structuring elements:

{B} = {B¹, B², B³, ..., Bⁿ}

where Bⁱ is a rotated version of Bⁱ⁻¹. Thus, the thinning equation can be written as:

A ⊗ {B} = ((...((A ⊗ B¹) ⊗ B²) ...) ⊗ Bⁿ)
The entire process is repeated until no further changes occur. The next
figure shows an example of thinning the fingerprint ridges so that each is
one pixel thick.


(a) (b)

(c) (d)
Figure 11.4 (a) Original fingerprint image. (b) Image thinned once. (c) Image thinned twice.
(d) Image thinned until stability (no changes occur).

Skeletonization (Skeleton Extraction)


is another way to reduce binary objects to thin strokes that retain important structural information about the shapes of the original objects.
The skeleton of A can be expressed in terms of erosions and openings as follows:

S(A) = ∪k Sk(A),   k = 0, 1, ..., K

with

Sk(A) = (A ⊖ kB) − (A ⊖ kB) ∘ B

where B is a structuring element, (A ⊖ kB) indicates k successive erosions of A:

(A ⊖ kB) = ( ... ((A ⊖ B) ⊖ B) ⊖ ... ) ⊖ B     (k times)

and K is the last erosion step before A erodes to the empty set.


The figure below illustrates an example of extracting a skeleton of an


object in a binary image.

(a) (b)
Figure 11.5 (a) Bone image. (b) Skeleton extracted from (a).

Gray-scale Morphology
The basic morphological operations of dilation, erosion, opening and
closing can also be applied to gray images.

Gray-scale Dilation
The gray-scale dilation of a gray-scale image f by a structuring element b is defined as:

(f ⊕ b)(x, y) = max { f(x − s, y − t) + b(s, t) | (s, t) ∈ Db }

where Db is the domain of the structuring element b. This process


operates in the same way as the spatial convolution.


The figure below shows the result of dilating a gray image using a 3×3
square structuring element.

(a) (b)
Figure 11.6 (a) Original gray image. (b) Dilated image.

We can see that gray-scale dilation produces the following:


1. Bright and slightly blurred image.
2. Small, dark details have been reduced.

Gray-scale Erosion
The gray-scale erosion of a gray-scale image f by a structuring element b is defined as:

(f ⊖ b)(x, y) = min { f(x + s, y + t) − b(s, t) | (s, t) ∈ Db }
The next figure shows the result of eroding a gray image using a 3×3
square structuring element.


(a) (b)
Figure 11.7 (a) Original gray image. (b) Eroded image.
We can see that gray-scale erosion produces the following:
1. Dark image. 2. Small, bright details were reduced.

Gray-scale Opening and Closing


The opening and closing of gray-scale images have the same form as in
the binary images:
Opening:  f ∘ b = (f ⊖ b) ⊕ b
Closing:  f • b = (f ⊕ b) ⊖ b
The figure below shows the result of opening a gray image.

(a) (b)
Figure 11.8 (a) Original gray image. (b) Opened image.


Note the decreased sizes of the small, bright details, with no appreciable
effect on the darker gray levels.
The figure below shows the result of closing a gray image.

(a) (b)
Figure 11.9 (a) Original gray image. (b) Closed image.
Note the decreased sizes of the small, dark details, with relatively little
effect on the bright features.

Gray-Scale Morphology Applications


Morphological smoothing
Smoothing is obtained by performing a morphological opening followed
by a closing as shown in the figure below.

(a) (b)
Figure 11.10 (a) Original gray image. (b) Morphological smoothed image.


Morphological gradient
is produced by subtracting an eroded image from its dilated version. It is defined as:

g = (f ⊕ b) − (f ⊖ b)

The resulting image has edge-enhancement characteristics, thus


morphological gradient can be used for edge detection as shown in the
figure below.

(a) (b)
Figure 11.11 (a) Original gray image. (b) Morphological gradient.



Image Processing Lecture 12

Image Segmentation
is an image analysis method used to subdivide an image into its regions or objects, depending on the type of shapes and objects searched for in the image. Image segmentation is an essential first step in most automatic pictorial pattern recognition and scene analysis tasks.

Applications of image segmentation


• Inspecting images of electronic boards for missing components or
broken connections.
• Detecting faces, facial features and other objects for surveillance.
• Detecting certain cellular objects in biomedical images.

Segmentation Approaches
Image segmentation algorithms are based on one of two basic properties
of gray-level values: discontinuity and similarity.
• In the first category, the approach is to partition an image based on
abrupt discontinuity (i.e. change) in gray level, such as edges in an
image.
• In the second category, the approaches are based on partitioning an
image into regions that are similar according to a set of predefined
criteria.

We shall focus on segmentation algorithms to detect discontinuities such


as points, lines and edges. Segmentation methods, studied here, rely on
two steps:


1. Choosing appropriate filters that help highlight the required


feature(s).
2. Thresholding.

Point Detection
This is concerned with detecting isolated image points in relation to their neighborhood, which is an area of nearly constant gray level.

1. Simple method
The simplest point detection method works in two steps:
1. Filter the image with the mask:
-1 -1 -1
-1 8 -1
-1 -1 -1

Then, we take the absolute values of the filtered image.


2. On the filtered image apply an appropriate threshold (e.g. the
maximum pixel value).
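A sketch of the two steps (scipy.ndimage assumed for the filtering; the threshold defaults to the maximum response, as suggested above):

import numpy as np
from scipy.ndimage import convolve  # assumed available

POINT_MASK = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], float)

def detect_points(img, T=None):
    """Filter with the mask above, take absolute values, then threshold."""
    response = np.abs(convolve(img.astype(float), POINT_MASK))
    T = response.max() if T is None else T
    return response >= T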

The next figure shows an example of point detection in a face image


using the simple method.


(a)

(b) Result with (c) Result with (d) Result with


threshold=max threshold=220 threshold=168

(e) Result with (f) Result with (g) Result with


threshold=118 threshold=68 threshold=55

Figure 12.1 Example of point detection using simple method. (a) Original face image.
(b)-(g) Results with different Thresholds


2. Alternative method
An alternative approach to the simple method is to locate the points in a
window of a given size where the difference between the max and the
min value in the window exceeds a given threshold. This can be done
again in two steps:
1. Obtain the difference between the max value (obtained with the
order statistics max filter) and the min value (obtained with the
order statistics min filter) in the given size mask.
2. On the output image apply an appropriate threshold (e.g. the
maximum pixel value).
The figure below shows an example of point detection in a face image
using the alternative method.

(b) Threshold=max (c) Threshold=90

(a)

(d) Threshold=40 (e) Threshold=30

Figure 12.2 Example of point detection using alternative method. (a) Original face image.
(b)-(e) Results with different Thresholds


Line Detection
Detecting a line in a certain direction requires detecting adjacent points in the image along that direction. This can be done using filters that yield a significant response at points aligned in the given direction.
For example, the following filters

Vertical:        Horizontal:
-1  2 -1         -1 -1 -1
-1  2 -1          2  2  2
-1  2 -1         -1 -1 -1

+45°:            −45°:
-1 -1  2          2 -1 -1
-1  2 -1         -1  2 -1
 2 -1 -1         -1 -1  2

highlight lines in the vertical, horizontal, +45° and −45° directions, respectively.
This can be done again in two steps:
1. Filter the image using an appropriate filter.
2. Apply an appropriate threshold (e.g. max value).
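The same two steps can be written directly in code; for instance, with the horizontal mask (a hedged sketch, scipy.ndimage assumed available):

import numpy as np
from scipy.ndimage import convolve  # assumed available

HORIZONTAL = np.array([[-1, -1, -1],
                       [ 2,  2,  2],
                       [-1, -1, -1]], float)

img = np.random.rand(64, 64) * 255               # stand-in for a grayscale image
response = np.abs(convolve(img, HORIZONTAL))
horizontal_lines = response >= response.max()    # threshold at the maximum value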

The next figure illustrates an example of line detection using the filters
above.


(a)

(b) (c)

(d) (e)
Figure 12.3 Example of line detection. (a) Original image. (b)-(e) Detected lines in the
vertical, horizontal, +45° direction , and – 45° direction, respectively.


Edge detection
Edge detection in images aims to extract meaningful discontinuities in pixel gray-level values. Such discontinuities can be detected using first- and second-order derivatives (the gradient and the Laplacian).
The 1st-order derivative of an image f(x,y) is its gradient, defined as the vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

Its magnitude is defined as:

∇f = [Gx² + Gy²]^(1/2)

or, using the absolute values,

∇f ≈ |Gx| + |Gy|

The 2nd-order derivative is computed using the Laplacian as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²
However, the Laplacian filter is not used for edge detection because, as a second-order derivative:
• it is sensitive to noise.
• its magnitude produces double edges.
• it is unable to detect edge direction.

There are 1st-order derivative estimators in which we can specify whether


the edge detector is sensitive to horizontal or vertical edges or both. We
study only two edge detectors namely Sobel and Prewitt edge detectors.


Sobel edge detector


This detector uses the following masks to digitally approximate the 1st-order derivatives Gx and Gy:

Gx (left, horizontal edges):    Gy (right, vertical edges):
-1 -2 -1                        -1  0  1
 0  0  0                        -2  0  2
 1  2  1                        -1  0  1

To detect:
• Horizontal edges, we filter the image f using the left mask above.
• Vertical edges, we filter the image f using the right mask above.
• Edges in both directions, we do the following:
1. Filter the image f with the left mask to obtain Gx
2. Filter the image f again with the right mask to obtain Gy
3. Compute [Gx² + Gy²]^(1/2) or |Gx| + |Gy|
In all cases, we then take the absolute values of the filtered image, then
apply an appropriate threshold.
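A compact sketch of the Sobel procedure for both directions (scipy.ndimage assumed available):

import numpy as np
from scipy.ndimage import convolve  # assumed available

GX = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)   # horizontal edges
GY = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # vertical edges

def sobel_edges(img, T):
    """Filter with both masks, combine the magnitudes, then threshold."""
    gx = convolve(img.astype(float), GX)
    gy = convolve(img.astype(float), GY)
    magnitude = np.abs(gx) + np.abs(gy)       # or np.sqrt(gx**2 + gy**2)
    return magnitude >= T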

The next figure shows an example of edge detection using the Sobel
detector.


(a) (b)

(c) (d)
Figure 12.4 Example of Sobel edge detection. (a) Original image.
(b)-(d) Edges detected in vertical, horizontal, and both directions, respectively.

Prewitt edge detector.


This detector uses the following masks to approximate Gx and Gy:

Gx (left, horizontal edges):    Gy (right, vertical edges):
-1 -1 -1                        -1  0  1
 0  0  0                        -1  0  1
 1  1  1                        -1  0  1
The steps of applying this detector are the same as that of the Sobel
detector.
The next figure shows an example of edge detection using the Prewitt
detector.


(a) (b)

(c) (d)
Figure 12.5 Example of Prewitt edge detection. (a) Original image.
(b)-(d) Edges detected in vertical, horizontal, and both directions, respectively.

We can see that the Prewitt detector produces noisier results than the
Sobel detector. This is because the coefficient with value 2 in the Sobel
detector provides smoothing.



Image Processing Lecture 13

Image Compression
• Image compression means the reduction of the amount of data
required to represent a digital image by removing the redundant data.
It involves reducing the size of image data files, while retaining
necessary information.
• Mathematically, this means transforming a 2D pixel array (i.e. image)
into a statistically uncorrelated data set. The transformation is applied
prior to storage or transmission of the image. At later time, the
compressed image is decompressed to reconstruct the original
(uncompressed) image or an approximation of it.
• The ratio of the original (uncompressed) image to the compressed
image is referred to as the Compression Ratio CR:

CR = Usize / Csize

where Usize is the size (in bytes) of the original uncompressed image and Csize is the size of the compressed file.

Example:
Consider an 8-bit image of 256×256 pixels. After compression, the image
size is 6,554 bytes. Find the compression ratio.
Solution:
Usize = (256 × 256 × 8) / 8 = 65,536 bytes
Compression Ratio = 65536 / 6554 = 9.999 ≈ 10 (also written 10:1)
This means that the original image has 10 bytes for every 1 byte in the
compressed image.


Image Data Redundancies


There are three basic data redundancies that can be exploited by image
compression techniques:
• Coding redundancy: occurs when the data used to represent the image
are not utilized in an optimal manner. For example, we have an 8-bit
image that allows 256 gray levels, but the actual image contains only
16 gray levels (i.e. only 4-bits are needed).
• Interpixel redundancy: occurs because adjacent pixels tend to be
highly correlated. In most images, the gray levels do not change
rapidly, but change gradually so that adjacent pixel values tend to be
relatively close to each other in value.
• Psychovisual redundancy: means that some information is less
important to the human visual system than other types of information.
This information is said to be psychovisually redundant and can be
eliminated without impairing the image quality.

Image compression is achieved when one or more of these redundancies


are reduced or eliminated.

Fidelity Criteria
These criteria are used to assess (measure) image fidelity. They quantify
the nature and extent of information loss in image compression. Fidelity
criteria can be divided into two classes:
1. Objective fidelity criteria
2. Subjective fidelity criteria


Objective fidelity criteria


These are metrics that can be used to measure the amount of information
loss (i.e. error) in the reconstructed (decompressed) image.
Commonly used objective fidelity criteria include:
• root-mean-square error, e_rms, between an input image f(x,y) and the output (decompressed) image f̂(x,y):

e_rms = [ (1/MN) Σx Σy ( f̂(x,y) − f(x,y) )² ]^(1/2)

where the images are of size M × N. The smaller the value of e_rms,


the better the compressed image represents the original image.

• mean-square signal-to-noise ratio, SNRms:

SNRms = Σx Σy f̂(x,y)² / Σx Σy ( f̂(x,y) − f(x,y) )²

• peak signal-to-noise ratio, SNRpeak:

SNRpeak = (L − 1)² / [ (1/MN) Σx Σy ( f̂(x,y) − f(x,y) )² ]

where L is the number of gray levels.


A larger SNR value implies a better reconstructed image.


Subjective fidelity criteria


These criteria measure image quality by the subjective evaluations of a
human observer. This can be done by showing a decompressed image to a
group of viewers and averaging their evaluations. The evaluations may be
made using an absolute rating scale, for example {Excellent, Fine,
Passable, Marginal, Inferior, and Unusable}.

Image Compression System


As shown in the figure below, the image compression system consists of
two distinct structural blocks: an encoder and a decoder.

Figure 13.1 A general image compression system

The encoder is responsible for reducing or eliminating any coding,


interpixel, or psychovisual redundancies in the input image. It consists of:
• Mapper: it transforms the input image into a nonvisual format
designed to reduce interpixel redundancies in the input image. This
operation is reversible and may or may not reduce directly the
amount of data required.


• Quantizer: it reduces the psychovisual redundancies of the input


image. This operation is irreversible.
• Symbol coder: it creates a fixed- or variable-length code to
represent the quantizer output. In a variable-length code, the
shortest code words are assigned to the most frequently occurring
output values, and thus reduce coding redundancy. This operation
is reversible.

The decoder contains only two components: symbol decoder and an


inverse mapper.

Image Compression Types


Compression techniques are classified into two primary types:
• Lossless compression
• Lossy compression

Lossless compression
• It allows an image to be compressed and decompressed without losing
information (i.e. the original image can be recreated exactly from the
compressed image).
• This is useful in image archiving (as in the storage of legal or medical
records).
• For complex images, the compression ratio is limited (2:1 to 3:1). For
simple images (e.g. text-only images) lossless methods may achieve
much higher compression.
• An example of lossless compression techniques is Huffman coding.


Huffman Coding
is a popular technique for removing coding redundancy. The result of Huffman coding is a variable-length code, in which the code words have unequal lengths. Huffman coding yields the smallest possible number of bits per gray-level value.

Example:
Consider the 8-bit gray image shown below. Use Huffman coding
technique for eliminating coding redundancy in this image.
119 123 168 119
123 119 168 168
119 119 107 119
107 107 119 119

Solution:
Gray level Histogram Probability
119 8 0.5
168 3 0.1875
107 3 0.1875
123 2 0.125

Huffman source reduction and code assignment (the two smallest probabilities are combined at each step, and code words are assigned working back from the final reduction):

Original              Reduction 1           Reduction 2
0.5     → 1           0.5    → 1            0.5 → 1
0.1875  → 00          0.3125 → 01           0.5 → 0
0.1875  → 011         0.1875 → 00
0.125   → 010

We build a lookup table:


Lookup table:
Gray level Probability Code
119 0.5 1
168 0.1875 00
107 0.1875 011
123 0.125 010

We use this code to represent the gray level values of the compressed
image:
1 010 00 1
010 1 00 00
1 1 011 1
011 011 1 1

Hence, the total number of bits required to represent the gray levels of the compressed image is 29 bits (reading the code words column by column): 10101011010110110000011110011.
Whereas the original (uncompressed) image requires 4*4*8 = 128 bits.
Compression ratio = 128 / 29 ≈ 4.4
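A short Python sketch that builds a Huffman code for the same 16 pixel values (individual code words may differ from the table above because ties between equal probabilities can be broken either way, but the total coded length is still 29 bits):

import heapq
from collections import Counter

def huffman_code(values):
    """Return a {symbol: code word} table by repeatedly merging the two least probable groups."""
    freq = Counter(values)
    heap = [(n / len(values), i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)               # two least probable groups
        p2, j, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}  # prepend one bit on each merge
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, j, merged))
    return heap[0][2]

pixels = [119, 123, 168, 119, 123, 119, 168, 168,
          119, 119, 107, 119, 107, 107, 119, 119]
codes = huffman_code(pixels)
total_bits = sum(len(codes[p]) for p in pixels)       # 29
print(codes, total_bits, 4 * 4 * 8 / total_bits)      # compression ratio ~ 4.4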

Lossy compression
• It allows a loss in the actual image data, so the original uncompressed
image cannot be recreated exactly from the compressed image.
• Lossy compression techniques provide higher levels of data reduction
but result in a less than perfect reproduction of the original image.
• This is useful in applications such as broadcast television and
videoconferencing. These techniques can achieve compression ratios
of 10 or 20 for complex images, and 100 to 200 for simple images.
• An example of lossy compression techniques is JPEG compression
and JPEG2000 compression.



Image Processing Lecture 14

Color Image Processing


The use of color is important in image processing because:
• Color is a powerful descriptor that simplifies object identification
and extraction.
• Humans can discern thousands of color shades and intensities,
compared to about only two dozen shades of gray.
Color image processing is divided into two major areas:
• Full-color processing: images are acquired with a full-color sensor,
such as a color TV camera or color scanner.
• Pseudocolor processing: The problem is one of assigning a color to
a particular monochrome intensity or range of intensities.

Color Fundamentals
Colors are seen as variable combinations of the primary colors of light:
red (R), green (G), and blue (B). The primary colors can be mixed to
produce the secondary colors: magenta (red+blue), cyan (green+blue),
and yellow (red+green). Mixing the three primaries, or a secondary with
its opposite primary color, produces white light.

Figure 14.1 Primary and secondary colors of light

RGB colors are used for color TV, monitors, and video cameras.


However, the primary colors of pigments are cyan (C), magenta (M), and
yellow (Y), and the secondary colors are red, green, and blue. A proper
combination of the three pigment primaries, or a secondary with its
opposite primary, produces black.

Figure 14.2 Primary and secondary colors of pigments

CMY colors are used for color printing.

Color characteristics
The characteristics used to distinguish one color from another are:
• Brightness: means the amount of intensity (i.e. color level).
• Hue: represents dominant color as perceived by an observer.
• Saturation: refers to the amount of white light mixed with a hue.

Color Models
The purpose of a color model is to facilitate the specification of colors in
some standard way. A color model is a specification of a coordinate
system and a subspace within that system where each color is represented
by a single point. Color models most commonly used in image processing
are:


• RGB model for color monitors and video cameras


• CMY and CMYK (cyan, magenta, yellow, black) models for color
printing
• HSI (hue, saturation, intensity) model

The RGB color model


In this model, each color appears in its primary colors red, green, and
blue. This model is based on a Cartesian coordinate system. The color
subspace is the cube shown in the figure below. The different colors in
this model are points on or inside the cube, and are defined by vectors
extending from the origin.

Figure 14.3 RGB color model

All color values R, G, and B have been normalized in the range [0, 1].
However, we can represent each of R, G, and B from 0 to 255.
Each RGB color image consists of three component images, one for each
primary color as shown in the figure below. These three images are
combined on the screen to produce a color image.


Figure 14.4 Scheme of RGB color image

The total number of bits used to represent each pixel in RGB image is
called pixel depth. For example, in an RGB image if each of the red,
green, and blue images is an 8-bit image, the pixel depth of the RGB
image is 24-bits. The figure below shows the component images of an
RGB image.

Full color

Red Green Blue


Figure 14.5 A full-color image and its RGB component images


The CMY and CMYK color model


Cyan, magenta, and yellow are the primary colors of pigments. Most
printing devices such as color printers and copiers require CMY data
input or perform an RGB to CMY conversion internally. This conversion
is performed using the equation

[C, M, Y]ᵀ = [1, 1, 1]ᵀ − [R, G, B]ᵀ

where all color values have been normalized to the range [0, 1].
In printing, combining equal amounts of cyan, magenta, and yellow
produces a muddy-looking black. In order to produce true black, a fourth
color, black, is added, giving rise to the CMYK color model.

The figure below shows the CMYK component images of an RGB image.

Full color Cyan Magenta

Yellow Black
Figure 14.6 A full-color image and its CMYK component images


The HSI color model


The RGB and CMY color models are not suited for describing colors in
terms of human interpretation. When we view a color object, we describe
it by its hue, saturation, and brightness (intensity). Hence the HSI color
model has been presented. The HSI model decouples the intensity
component from the color-carrying information (hue and saturation) in a
color image. As a result, this model is an ideal tool for developing color
image processing algorithms.
The hue, saturation, and intensity values can be obtained from the RGB
color cube. That is, we can convert any RGB point to a corresponding
point in the HSI color model by working out the geometrical formulas.

Converting colors from RGB to HSI


The hue H is given by

H = θ           if B ≤ G
H = 360° − θ    if B > G

where

θ = cos⁻¹ { ½ [(R − G) + (R − B)] / [ (R − G)² + (R − B)(G − B) ]^(1/2) }

The saturation S is given by

S = 1 − 3·min(R, G, B) / (R + G + B)

The intensity I is given by

I = (R + G + B) / 3

All RGB values are normalized to the range [0, 1].
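A NumPy sketch of the RGB-to-HSI conversion above (inputs assumed normalized to [0, 1]; the small epsilon is added here only to guard against division by zero):

import numpy as np

def rgb_to_hsi(R, G, B):
    """Convert normalized RGB values (arrays or scalars) to H (degrees), S, I."""
    eps = 1e-10
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)
    I = (R + G + B) / 3.0
    return H, S, I

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> H = 0 degrees, S = 1, I = 1/3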


Converting colors from HSI to RGB


The applicable equations depend on the value of H (in degrees):

If 0° ≤ H < 120° (RG sector):
B = I (1 − S)
R = I [ 1 + S·cos H / cos(60° − H) ]
G = 3I − (R + B)

If 120° ≤ H < 240° (GB sector), let H' = H − 120°; then:
R = I (1 − S)
G = I [ 1 + S·cos H' / cos(60° − H') ]
B = 3I − (R + G)

If 240° ≤ H ≤ 360° (BR sector), let H' = H − 240°; then:
G = I (1 − S)
B = I [ 1 + S·cos H' / cos(60° − H') ]
R = 3I − (G + B)
The next figure shows the HSI component images of an RGB image.


Full color

Hue Saturation Intensity

Figure 14.7 A full-color image and its HSI component images

Basics of Full-Color Image Processing


Full-color image processing approaches fall into two major categories:
• Approaches that process each component image individually and
then form a composite processed color image from the individually
processed components.
• Approaches that work with color pixels directly.
In full-color images, color pixels really are vectors. For example, in the RGB system, each color pixel can be expressed as

c(x, y) = [ R(x, y), G(x, y), B(x, y) ]ᵀ

For an image of size M× N, there are MN such vectors, c(x, y), for x = 0,1,
2,...,M-1; y = 0,1,2,...,N-1.


Color Transformation
As with the gray-level transformation, we model color transformations using the expression

g(x, y) = T[ f(x, y) ]

where f(x, y) is a color input image, g(x, y) is the transformed color output image, and T is the color transform.
This color transform can also be written componentwise as

si = Ti(r1, r2, ..., rn),   i = 1, 2, ..., n

where ri and si are the color components of f(x, y) and g(x, y) at a pixel, and n is the number of color components.
For example, suppose we wish to modify the intensity of the image shown in Figure 14.8(a) using

g(x, y) = k f(x, y),   0 < k < 1

• In the RGB color space, three components must be transformed: si = k ri, i = 1, 2, 3
• In CMY space, also three component images must be transformed: si = k ri + (1 − k), i = 1, 2, 3
• In HSI space, only the intensity component is transformed: s3 = k r3

(a) (b)
Figure 14.8 (a) Original image. (b) Result of decreasing its intensity
