
Image Processing

Semester VII
Department of Computer Engineering
Dharmsinh Desai University

23-Jul-18 1
Digital Image Fundamentals

 Elements of visual perception
 Light and the electromagnetic spectrum
 Image sensing and acquisition
 Image sampling and quantization
 Basic relationships between pixels

23-Jul-18 2
Elements of Visual perception

23-Jul-18 3
Elements of Visual perception

 Iris: controls the size of the pupil, which in turn controls the amount of light that enters the eye; it forms the colored portion of the eye.
 Cornea: the transparent, dome-shaped anterior portion of the outer covering of the eye; it covers the iris and pupil and is continuous with the sclera.

23-Jul-18 4
Elements of Visual perception

 Receptors: organs having nerve endings (in the skin, viscera, eye, ear, nose, or mouth) that respond to stimulation.
Two classes of receptors: cones and rods

23-Jul-18 5
Cones

 Number between 6 and 7 million.
 Located primarily in the central portion of the retina, called the fovea.
 Highly sensitive to color.
 Humans can resolve fine detail with cones largely because each one is connected to its own nerve end.
 Cone vision is called "photopic" or "bright-light" vision.
23-Jul-18 6
Rods
Number between 75 and 150 million.
Distributed over the retinal surface.
Rods serve to give a general, overall picture of the field of view.
Not involved in color vision; sensitive to low levels of illumination.
Example: objects that appear brightly colored in daylight appear as colorless forms in moonlight, because only the rods are stimulated.
This phenomenon is known as "scotopic" or dim-light vision.
23-Jul-18 7
Distribution of rods and cones in the retina

23-Jul-18 8
Blind spot
Blind spot: the area of the retina where the optic nerve emerges contains no receptors; this receptor-free area is called the blind spot.

23-Jul-18 9
Image formation in the eye

23-Jul-18 10
Image formation in the eye
Example: a person looks at an object 15 m high at a distance of 100 m.
The distance between the center of the lens and the retina along the visual axis is approximately 17 mm.
Let h denote the height of the object in the retinal image.
By similar triangles, 15/100 = h/17, so h = 2.55 mm.

23-Jul-18 11
Brightness Adaptation and Discrimination

23-Jul-18 12
Brightness Adaptation and Discrimination

Subjective brightness is a logarithmic function of the light intensity incident on the eye.
For any given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level, for example level Ba in the figure.
The short intersecting curve represents the range of subjective brightness that the eye can perceive when adapted to this level.
This range is rather restricted, having a level Bb at and below which all stimuli are perceived as indistinguishable blacks.

23-Jul-18 13
Brightness Adaptation and Discrimination
The ability of the eye to discriminate between changes in light intensity at any specific adaptation level is also of considerable interest.
A classic experiment is used to determine the capability of the human visual system for brightness discrimination.
ΔI is a short-duration flash that appears as a circle in the center of a uniformly illuminated field, as shown in the figure.

23-Jul-18 14
Brightness Adaptation and Discrimination
If ∆I is not bright enough, the subject says "no," indicating no perceivable change.
As ∆I gets stronger, the subject may give a positive response of "yes," indicating a perceivable change.
Weber ratio: ∆Ic/I, where ∆Ic is the increment of illumination discriminable 50% of the time with background illumination I.
A small value of ∆Ic/I means that a small percentage change in intensity is discriminable; this represents "good" brightness discrimination.
Conversely, a large value of ∆Ic/I means that a large percentage change in intensity is required; this represents "poor" brightness discrimination.

23-Jul-18 15
Brightness Adaptation and Discrimination

The curve shows that brightness discrimination is poor (the Weber ratio is large) at low levels of illumination.
It improves significantly (the Weber ratio decreases) as background illumination increases.
The two branches in the curve reflect the fact that at low levels of illumination vision is carried out by the rods, whereas at high levels it is the function of the cones.

23-Jul-18 16
Brightness Adaptation and Discrimination

Perceived brightness is not a simple function of intensity.
Two phenomena clearly demonstrate this:
1) Mach band effect
2) Simultaneous contrast

23-Jul-18 17
Mach band effect

23-Jul-18 18
Simultaneous contrast

 Region’s perceived brightness does not depend simply on its intensity

23-Jul-18 19
Optical illusions
The eye fills in non-existing information or wrongly perceives geometrical properties of objects.
In figure (c), the two line segments are of the same length, yet one appears shorter than the other.
In figure (d), the lines oriented at 45 degrees are equidistant and parallel, yet the crosshatching creates the illusion that they are far from parallel.

23-Jul-18 20
Optical illusions

23-Jul-18 21
Optical illusions

23-Jul-18 22
Electromagnetic spectrum

23-Jul-18 23
Electromagnetic spectrum
Light that is void of color is called monochromatic (or achromatic) light.
This is colorless light, described by the notions black, gray, and white.
Its only attribute is its intensity, or amount.
Because the intensity of monochromatic light is perceived to vary from black to grays and finally to white, the term gray level is commonly used to denote monochromatic intensity.
Monochromatic images are frequently referred to as grayscale images.
Three basic quantities are used to describe the quality of a chromatic light source:
radiance, luminance, and brightness.
23-Jul-18 24
Electromagnetic spectrum
Radiance is the total amount of energy that flows from the light source; it is usually measured in watts (W).
Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.
Brightness is a subjective descriptor of light perception that is practically impossible to measure.

23-Jul-18 25
Image sensing and acquisition

23-Jul-18 26
Image sensing and acquisition
Most of the images in which we are interested are generated by the combination of an "illumination" source and the reflection or absorption of energy from that source by the elements of the "scene" being imaged.
An example of the first category is light reflected from a planar surface.
An example of the second category is X-rays passing through a patient's body for the purpose of generating a diagnostic X-ray film.
The figure shows the three principal sensor arrangements used to transform illumination energy into digital images.

23-Jul-18 27
Image sensing and acquisition

23-Jul-18 28
Image sensing and acquisition
Incoming energy is transformed into a voltage by
the combination of input electrical power and
sensor material that is responsive to the particular
type of energy being detected.
The output voltage waveform is the response of
the sensor(s), and a digital quantity is obtained from
each sensor by digitizing its response.

23-Jul-18 29
Image acquisition using a single sensor

 The most familiar sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light.
 The figure shows an arrangement used in high-precision scanning.
23-Jul-18 30
Image sensing and acquisition
A film negative is mounted onto a drum whose mechanical
rotation provides displacement in one dimension.
The single sensor is mounted on a lead screw that provides
motion in the perpendicular direction.

23-Jul-18 31
Image acquisition using sensor strips

23-Jul-18 32
Image acquisition using sensor strips
This type of arrangement is used in flatbed scanners.
Sensing devices with 4000 or more in-line sensors are possible.
The imaging strip gives one line of an image at a time; the motion of the strip relative to the scene completes the other dimension of a two-dimensional image.
Lenses or other focusing schemes are used to project the area to be scanned onto the sensors.

23-Jul-18 33
Image acquisition using sensor strips
Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects.
A rotating X-ray source provides illumination, and the sensors opposite the source collect the X-ray energy that passes through the object.
This is the basis of medical and industrial computerized axial tomography (CAT) imaging, MRI (magnetic resonance imaging), and PET (positron emission tomography).
It is important to note that the output of the sensors must be processed by reconstruction algorithms whose objective is to transform the sensed data into meaningful cross-sectional images.

23-Jul-18 34
Image acquisition using sensor strips
In other words, images are not obtained directly from the
sensors by motion alone.
They require extensive processing

23-Jul-18 35
Image acquisition using sensor arrays
Figure 2.12(c) shows individual sensors arranged in the
form of a 2-D array.
Numerous electromagnetic and some ultrasonic sensing
devices frequently are arranged in an array format.
This is also the predominant arrangement found in digital
cameras.
A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more.
CCD sensors are used widely in digital cameras and other
light sensing instruments.
23-Jul-18 36
Image acquisition using sensor arrays
The response of each sensor is proportional to the integral
of the light energy projected onto the surface of the sensor.
The key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array.

23-Jul-18 37
Image acquisition using sensor arrays

23-Jul-18 38
Image acquisition using sensor arrays
The figure shows the energy from an illumination source being reflected from a scene element.
The first function performed by the imaging system in fig. 2.15(c) is to collect the incoming energy and focus it onto an image plane.
If the illumination is light, the front end of the imaging system is an optical lens that projects the viewed scene onto the lens focal plane.
The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor, as fig. 2.15(d) shows.
Fig. 2.15(e) shows the conversion of the image into digital form.

23-Jul-18 39
Image formation model

23-Jul-18 40
Image formation model
When an image is generated from a physical process, its intensity values are proportional to the energy radiated by a physical source (e.g. electromagnetic waves).
As a consequence, f(x,y) must be nonzero and finite, that is,
0 < f(x,y) < ∞
The function f(x,y) may be characterized by two components:
 The amount of source illumination incident on the scene being viewed.
 The amount of illumination reflected by the objects in the scene.
Appropriately, these are called the illumination and reflectance components, denoted by i(x,y) and r(x,y).

23-Jul-18 41
Image formation model
The two functions combine as a product to form f(x,y):
f(x,y) = i(x,y) · r(x,y)
where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1.
Reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
Let the intensity (gray level) of a monochrome image at any coordinates (x₀, y₀) be denoted by L:
Lmin ≤ L ≤ Lmax
where Lmin = imin · rmin and Lmax = imax · rmax.
The interval [Lmin, Lmax] is called the gray (or intensity) scale.
Common practice is to shift this interval numerically to the interval [0, L−1].
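
A minimal MATLAB sketch of this model (the illumination and reflectance arrays below are made-up illustrations, not measured data):

% f = i .* r, then shift the intensities to the display range [0, L-1]
i = 500 * ones(4);               % illumination incident on the scene
r = rand(4);                     % reflectance in (0,1); 0 = total absorption
f = i .* r;                      % formed image: Lmin <= f <= Lmax
L = 256;
g = (f - min(f(:))) / (max(f(:)) - min(f(:)));   % normalize to [0,1]
g = uint8((L - 1) * g)           % gray scale shifted to [0, L-1]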

23-Jul-18 42
Image sampling and quantization

23-Jul-18 43
Image sampling and quantization

23-Jul-18 44
Image sampling and quantization

23-Jul-18 45
Image sampling and quantization
Digitizing the coordinate values is called sampling .
Digitizing the amplitude values is called quantization.
Fig. 2.16(a) shows an image that is continuous with respect to the x- and y-coordinates, and also in amplitude.
The one-dimensional function in fig. 2.16(b) is a plot of amplitude (intensity level) values of the continuous image along a line segment.
The random variations are due to image noise.
The spatial location of each sample is indicated by a
vertical tick mark in the bottom part of the figure.
The set of these discrete locations gives a sampled
function.

23-Jul-18 46
Image sampling and quantization
However, the values of the samples still span (vertically) a continuous range of intensity values.
In order to form a digital function, the intensity values also must be converted (quantized) into discrete quantities.
The right side of fig. 2.16(c) shows the intensity scale divided into eight discrete intervals, ranging from black to white.
The vertical tick marks indicate the specific value assigned to each of the eight intensity intervals.
The continuous intensity levels are quantized by assigning one of the eight values to each sample.
Fig. 2.16(d) shows the result of both sampling and quantization.
23-Jul-18 47
Representation of digital images

23-Jul-18 48
Representation of digital images

23-Jul-18 49
Representation of digital images
This digitization process requires that decisions be made regarding the values of M and N and the number L of discrete intensity levels.
There are no restrictions placed on M and N other than that they have to be positive integers.
The number of intensity levels typically is an integer power of 2:
L = 2^k

23-Jul-18 50
Representation of digital images
The number of bits required to store a digitized image is
b = M · N · k
If M = N, this equation becomes
b = N²k
When an image can have 2^k intensity levels, it is common practice to refer to the image as a "k-bit image".
Example: b = 32² · 1 = 1024 bits, where N = 32 and k = 1.
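
A quick check of the formula in MATLAB (the 1024 × 1024 case is an extra illustration, not from the slide):

N = 32;  k = 1;
b = N^2 * k                % 1024 bits, matching the example above
N = 1024;  k = 8;
kB = N^2 * k / 8 / 1024    % a 1024x1024, 8-bit image occupies 1024 kB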

23-Jul-18 51
Representation of digital images

23-Jul-18 52
Saturation vs. Noise

23-Jul-18 53
Saturation vs. Noise
Image noise is random variation of brightness or color
information in images.
It can be produced by the sensor and circuitry of a scanner
or digital camera.

23-Jul-18 54
Spatial resolution
Spatial Resolution: measure of the smallest discernible
detail in an image.
Spatial resolution can be stated in a number of ways, with
line pairs per unit distance, and dots (pixels) per unit distance
being among the most common measures.
In the U.S. this measure usually is expressed as dots per
inch (dpi).
Newspapers are printed with a resolution of 75 dpi, magazines at 133 dpi, glossy brochures at 175 dpi, and the DIP book (by Gonzalez) is printed at 2400 dpi.

23-Jul-18 55
Spatial resolution

23-Jul-18 56
Intensity resolution
Intensity Resolution: measure of the smallest discernible
change in intensity level.
Based on hardware considerations, the number of intensity levels usually is an integer power of two, as mentioned in previous slides.
The most common number is 8 bits, with 16 bits being
used in some applications in which enhancement of specific
intensity ranges is necessary.

23-Jul-18 57
Intensity resolution

23-Jul-18 58
Intensity resolution
Fig. 2.21(d), however, has an almost imperceptible set of very fine ridge-like structures in areas of constant or nearly constant intensity (particularly in the skull).
This effect, caused by the use of an insufficient number of intensity levels in smooth areas of a digital image, is called false contouring.
False contouring generally is quite visible in images displayed using 16 or fewer uniformly spaced intensity levels, as in figs. 2.21(e) through (h).

23-Jul-18 59
Intensity resolution

23-Jul-18 60
Intensity Resolution
% Display an image at 2^n1 gray levels, for n1 = 1..8 (assumes a grayscale input)
I = uigetfile('*.*','Select the Image');
I = imread(I);
if size(I,3) == 3, I = rgb2gray(I); end   % guard against RGB input
[r, c] = size(I);
for n1 = 1:8
    L1 = (2^n1) - 1;        % number of quantization intervals (levels - 1)
    y1 = uint8(255/L1);     % gray-level step between output levels
    x1 = 0:y1:255;          % the output gray levels
    I1 = zeros(r,c);
23-Jul-18 Prof. Tushar V. Ratanpara 61
Intensity Resolution
    for i = 1:r
        for j = 1:c
            k = uint8(I(i,j)/y1);    % index of the nearest output level
            k = k + 1;               % shift to 1-based indexing
            if k > L1 + 1
                k = k - 1;           % clamp to the last level
            end
            I1(i,j) = x1(k);         % assign the quantized gray level
        end
    end
    figure, imshow(uint8(I1)); title([num2str(n1), '-bit image']);
end
23-Jul-18 Prof. Tushar V. Ratanpara 62
Intensity Resolution

23-Jul-18 Prof. Tushar V. Ratanpara 63


Intensity Resolution

23-Jul-18 Prof. Tushar V. Ratanpara 64


Intensity Resolution

23-Jul-18 Prof. Tushar V. Ratanpara 65


Isopreference curve

23-Jul-18 66
Isopreference curve

23-Jul-18 67
Isopreference curve
Each point in the Nk-plane represents an image having values of N and k equal to the coordinates of that point.
Points lying on an isopreference curve correspond to images of equal subjective quality.
It was found in the course of the experiments that the isopreference curves tended to shift right and upward, but their shapes in each of the three image categories were similar to those in the figure.
A shift up and right in the curves simply means larger values for N and k, which implies better picture quality.
The key point of interest in the present discussion is that isopreference curves tend to become more vertical as the detail in the image increases.

23-Jul-18 68
Image interpolation
Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating and geometric corrections.
In this section our principal objective is image resizing (shrinking and zooming), which is basically an image resampling method.
Interpolation is the process of using known data to estimate values at unknown locations.
Suppose an image of size 500 × 500 pixels has to be enlarged 1.5 times to 750 × 750 pixels.

23-Jul-18 69
Image Zooming (Bilinear Interpolation)

Known 2 × 2 block of pixels (coordinates and values):
(x1,y1) = (1,1), f(Q11) = 1
(x2,y1) = (2,1), f(Q21) = 2
(x1,y2) = (1,2), f(Q12) = 3
(x2,y2) = (2,2), f(Q22) = 4

The block is resampled on the grid x, y ∈ {1, 1.33, 1.66, 2}.
Since (x2 − x1) = (y2 − y1) = 1, the denominators of the interpolation formula are omitted below.

Interpolating along x on the row y = 1:
f(1.33,1) = (2 − 1.33)·1 + (1.33 − 1)·2 = 0.67 + 0.66 = 1.33
f(1.66,1) = (2 − 1.66)·1 + (1.66 − 1)·2 = 0.34 + 1.32 = 1.66

Along x on the row y = 2:
f(1.33,2) = (2 − 1.33)·3 + (1.33 − 1)·4 = 2.01 + 1.32 = 3.33
f(1.66,2) = (2 − 1.66)·3 + (1.66 − 1)·4 = 1.02 + 2.64 = 3.66

Along y on the column x = 1:
f(1,1.33) = (2 − 1.33)·1 + (1.33 − 1)·3 = 0.67 + 0.99 = 1.66
f(1,1.66) = (2 − 1.66)·1 + (1.66 − 1)·3 = 0.34 + 1.98 ≈ 2.33

Along y on the column x = 2:
f(2,1.33) = (2 − 1.33)·2 + (1.33 − 1)·4 = 1.34 + 1.32 = 2.66
f(2,1.66) = (2 − 1.66)·2 + (1.66 − 1)·4 = 0.68 + 2.64 ≈ 3.33

Interior points, interpolating along x between the column values just computed:
f(1.33,1.33) = (2 − 1.33)·f(1,1.33) + (1.33 − 1)·f(2,1.33) = 0.67·1.66 + 0.33·2.66 = 1.11 + 0.88 = 1.99
f(1.66,1.66) = (2 − 1.66)·f(1,1.66) + (1.66 − 1)·f(2,1.66) = 0.34·2.33 + 0.66·3.33 = 0.79 + 2.20 = 2.99
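
A minimal MATLAB sketch that reproduces the computation above (the variable names and the 4 × 4 target grid are illustrative choices):

% Bilinear zoom of the 2x2 block to a 4x4 grid; rows index y, columns index x
f  = [1 2; 3 4];                 % f(Q11)=1, f(Q21)=2, f(Q12)=3, f(Q22)=4
xq = [1 1.33 1.66 2];            % resampling positions along x
yq = [1 1.33 1.66 2];            % resampling positions along y
g  = zeros(4,4);
for r = 1:4
    for c = 1:4
        wx = 2 - xq(c);                     % weight of the x = 1 column
        wy = 2 - yq(r);                     % weight of the y = 1 row
        bot = wx*f(1,1) + (1-wx)*f(1,2);    % along x at y = 1
        top = wx*f(2,1) + (1-wx)*f(2,2);    % along x at y = 2
        g(r,c) = wy*bot + (1-wy)*top;       % then along y
    end
end
disp(g)   % corners are 1..4; interior values match 1.99 and 2.99 up to rounding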
Image interpolation
Nearest neighbor interpolation: assigns to each new location the intensity of its nearest neighbor in the original image.
Bilinear interpolation: uses the four nearest neighbors to estimate the intensity at a given location:
v(x,y) = ax + by + cxy + d
where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x,y).
Bicubic interpolation: uses the sixteen nearest neighbors:
v(x,y) = ΣΣ a_ij x^i y^j, where i and j range from 0 to 3.
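
A quick illustration of the three kernels in MATLAB (assumes the Image Processing Toolbox, which supplies imresize and the cameraman.tif sample image):

I  = imread('cameraman.tif');
In = imresize(I, 1.5, 'nearest');    % nearest neighbor
Ib = imresize(I, 1.5, 'bilinear');   % bilinear (4 nearest neighbors)
Ic = imresize(I, 1.5, 'bicubic');    % bicubic (16 nearest neighbors)
subplot(1,3,1), imshow(In), title('nearest')
subplot(1,3,2), imshow(Ib), title('bilinear')
subplot(1,3,3), imshow(Ic), title('bicubic')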

23-Jul-18 75
Image Interpolation

23-Jul-18 76
4-neighbors of a pixel

• The 4-neighbors of a pixel p at (x,y) are denoted by N4(p).
• It is the set of horizontal and vertical neighbors:
(x−1, y), (x+1, y), (x, y−1), (x, y+1)
Diagonal neighbors of a pixel

• The diagonal neighbors of a pixel p are denoted by ND(p).
• It is the set of diagonal neighbors:
(x−1, y−1), (x+1, y−1), (x−1, y+1), (x+1, y+1)
8-neighbors of a pixel

• The 8-neighbors of a pixel p are denoted by N8(p).
• N8(p) is the union of the 4-neighbors and the diagonal neighbors:
(x−1,y−1), (x,y−1), (x+1,y−1), (x−1,y), (x+1,y), (x−1,y+1), (x,y+1), (x+1,y+1)
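
A small MATLAB sketch of these neighborhoods expressed as coordinate offsets (the variable names are illustrative):

% Offsets that generate N4, ND and N8 of a pixel p = (x, y)
n4 = [-1 0; 1 0; 0 -1; 0 1];     % horizontal and vertical offsets
nd = [-1 -1; 1 -1; -1 1; 1 1];   % diagonal offsets
n8 = [n4; nd];                   % N8(p) = N4(p) union ND(p)
x = 5;  y = 7;                   % an example pixel
N8 = [x + n8(:,1), y + n8(:,2)]  % coordinates of the eight neighbors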
Connectivity

• Used in establishing boundaries of objects and components of regions in an image.
• Pixels are grouped into the same region under the assumption that pixels of the same color or equal intensity belong to the same region.
Connectivity

• Let C be the set of intensity values used to define connectivity.
• There are three types of connectivity:
 4-connectivity: two pixels p and q with values in C are 4-connected if q is in the set N4(p).
 8-connectivity: two pixels p and q with values in C are 8-connected if q is in the set N8(p).
 m-connectivity (mixed connectivity): two pixels p and q with values in C are m-connected if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are in C.
Example 4-Connectivity

• The set of values defining connectivity: C = {1}

0 1 1 0

1 1 0 1

0 0 1 1
Example 8-Connectivity

• The set of values defining connectivity: C = {1}

0 1 1 0

1 1 0 1

0 0 1 1
Example M-Connectivity
• The set of values defining connectivity: C = {1}

0 1 1 0

1 1 0 1

0 0 1 1
Pixel adjacencies and paths
Pixel p is adjacent to pixel q if they are connected.
We can define 4-, 8-, or m-adjacency depending on the specified type of connectivity.
Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.
A path from p at (x,y) to q at (s,t) is a sequence of distinct pixels with coordinates (x0,y0), (x1,y1), ..., (xn,yn),
where (x0,y0) = (x,y), (xn,yn) = (s,t), and (xi,yi) is adjacent to (xi−1,yi−1) for 1 ≤ i ≤ n; n is the length of the path.
If p and q are pixels of an image subset S, then p is connected to q in S if there is a path from p to q consisting entirely of pixels in S.

23-Jul-18 85
4- Path, 8- Path and m-path

• Path: 4-, 8-, and m-paths
– A sequence of distinct pixels from pixel p to pixel q.

0 1 1 (p)
0 1 0
0 0 1 (q)

A 4-path from p to q does not exist; an 8-path and an m-path both exist.
Connectivity

• Connectivity
– Let S represent a subset of pixels in an image.
Two pixels p and q are said to be connected in S if there
exists a path between them consisting entirely of pixels in S.

– There are 4-, 8- and m-connectivity.

– Connected set: a set having only one connected component.


Labeling of connected Components

• Scan the image pixel by pixel from left to right
and top to bottom.
• Resolve equivalent labels in a second pass.
Labeling of connected Components
• Let p be the pixel currently being scanned.
• If pixel p has value 0, move on to the next scanning position.
• If pixel p has value 1, examine its top and left neighbors:
– If top and left are both 0, assign a new label to p.
– If only one of them is 1, assign its label to p.
– If both of them are 1 and they have
• the same label, assign that label to p;
• different labels, assign the label of the top neighbor to p and note that the two labels are equivalent.
• Sort all pairs of equivalent labels and assign each equivalence class a single label.
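
The scan-and-resolve procedure above is what MATLAB's bwlabel implements; a sketch on the example grid used in the following slides (assumes the Image Processing Toolbox):

BW = [0 1 1 0;
      1 1 0 1;
      0 0 1 1];
L4 = bwlabel(BW, 4)   % two components under 4-connectivity
L8 = bwlabel(BW, 8)   % one component under 8-connectivity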
Example: connected component labeling using 4-connectivity

Input image:
0 1 1 0
1 1 0 1
0 0 1 1

Labels assigned during the scan (the top neighbor's label wins on a conflict):
0 1 1 0
2 1 0 3
0 0 4 3

Equivalent table: 1 ≡ 2 and 3 ≡ 4.
Resolving the equivalences gives two connected components:
0 1 1 0
1 1 0 2
0 0 2 2
Connected components Labeling using 8-connectivity

• Steps are the same as for 4-connected components.
• But the pixels that are examined are the 4 previously
scanned neighbors (top-left, top, top-right and left).
Example: component labeling using 8-connectivity

0 1 1 0
1 1 0 1
0 0 1 1

With 8-connectivity, all the 1-valued pixels form a single connected component (label 1).
Find the connected components of the given image using 8-connectivity:

1 1 0 1 1 1 0 1
1 1 0 1 0 1 0 1
1 1 1 1 0 0 0 1
0 0 0 0 0 0 0 1
1 1 1 1 0 1 0 1
0 0 0 1 0 1 0 1
1 1 0 1 0 0 0 1
1 1 0 1 0 1 1 1

Labeling (answer):

1 1 0 1 1 1 0 2
1 1 0 1 0 1 0 2
1 1 1 1 0 0 0 2
0 0 0 0 0 0 0 2
3 3 3 3 0 4 0 2
0 0 0 3 0 4 0 2
5 5 0 3 0 0 0 2
5 5 0 3 0 2 2 2
23-Jul-18 96
Region

• Region
– A region is a connected set.

R1:
1 1 1
1 0 1
0 1 0

R2:
0 0 1
1 1 1
1 1 1

Only an 8-path exists between the two regions, so they are adjacent under 8-adjacency but not under 4-adjacency.
Boundary

• Boundary
– The border of a region is the set of pixels in the region
that have at least one background neighbor.

0 0 0 0
0 0 1 0
0 1 1 0
0 1 1 0
0 0 0 0
Distance Measures


• Applications of distance measures:
 Arithmetic/logical operations on images
 Neighborhood operations on images
 Shape matching

• Types of distance measures:
 Euclidean distance
 City-block distance
 Chessboard distance

23-Jul-18 99
Euclidean Distance


• Given pixels p, q and z with coordinates (x,y), (s,t) and (v,w) respectively,
• D is a distance function (or metric) if:
 D(p,q) ≥ 0 (D(p,q) = 0 iff p = q),
 D(p,q) = D(q,p), and
 D(p,z) ≤ D(p,q) + D(q,z).
• The Euclidean distance between p and q is
De(p,q) = [(x − s)² + (y − t)²]^(1/2)
• The pixels having a distance De ≤ r from (x,y) form a disk of radius r centered at (x,y).

23-Jul-18 100
City block distance

• The D4 distance (also called the city-block distance) between p and q is given by:
D4(p,q) = |x − s| + |y − t|
• The pixels having a D4 distance less than or equal to some value r from (x,y) form a diamond centered at (x,y).
• Example: pixels where D4 ≤ 2:

    2
  2 1 2
2 1 0 1 2
  2 1 2
    2

Note: pixels with D4 = 1 are the 4-neighbors of (x,y).

23-Jul-18 101
Chessboard distance

• The D8 distance (also called the chessboard distance) between p and q is given by:
D8(p,q) = max(|x − s|, |y − t|)
• The pixels having a D8 distance less than or equal to some value r from (x,y) form a square centered at (x,y).
• Example: pixels where D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

Note: pixels with D8 = 1 are the 8-neighbors of (x,y).
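
A sketch of the three measures in MATLAB (the points p and q are made-up examples):

% Euclidean, city-block and chessboard distances between two pixels
p = [3 4];  q = [7 1];         % p = (x,y), q = (s,t)
De = sqrt(sum((p - q).^2))     % Euclidean distance: 5
D4 = sum(abs(p - q))           % city-block distance: 7
D8 = max(abs(p - q))           % chessboard distance: 4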

23-Jul-18 102
23-Jul-18 103
Distance Measures

The D4 and D8 distances between p and q are independent of
any paths that might exist between the points, because these
distances involve only the coordinates of the points.

If we elect to consider m-adjacency, the situation changes:
the Dm distance between two pixels (the length of the shortest
m-path between them) depends on the values of the pixels along
the path, as well as the values of their neighbors.
Consider the arrangement:
p3 p4
p1 p2
p

23-Jul-18 104
Distance Measures

Let p, p2 and p4 have value 1, and let p1 and p3 take value 0 or 1:
p3 p4
p1 p2
p

0 1 (q)      0 1 (q)      1 1 (q)
0 1          1 1          1 1
1 (p)        1 (p)        1 (p)

Case 1       Case 2       Case 3

Case 1 (p1 = p3 = 0): the shortest m-path is p → p2 → p4, so Dm = 2.
Case 2 (p1 = 1, p3 = 0): p and p2 are no longer m-adjacent, so the path is p → p1 → p2 → p4 and Dm = 3.
Case 3 (p1 = p3 = 1): the path is p → p1 → p2 → p3 → p4, so Dm = 4.


23-Jul-18 105
An Introduction to the Mathematical Tools Used
in Digital Image Processing
• Array versus matrix operations
• Array (element-by-element) product:
[a11 a12; a21 a22] .* [b11 b12; b21 b22] =
[a11·b11 a12·b12; a21·b21 a22·b22]
• Matrix product:
[a11 a12; a21 a22] * [b11 b12; b21 b22] =
[a11·b11 + a12·b21   a11·b12 + a12·b22;
 a21·b11 + a22·b21   a21·b12 + a22·b22]
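
In MATLAB the two products are written with the .* and * operators respectively; a short sketch:

A = [1 2; 3 4];
B = [5 6; 7 8];
A .* B   % array product:  [ 5 12; 21 32]
A * B    % matrix product: [19 22; 43 50]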
An Introduction to the Mathematical Tools Used
in Digital Image Processing
• Linear operations vs. nonlinear operations
– H is said to be a linear operator if, for any two
images f and g and any two scalars a and b,
H(af + bg) = aH(f) + bH(g)

• f = [0 2; 2 3], g = [6 5; 4 7], a = 1, b = −1
• Example: max operation (nonlinear)
• Example: summation (linear)
An Introduction to the Mathematical Tools Used
in Digital Image Processing
• f = [0 2; 2 3], g = [6 5; 4 7], a = 1, b = −1
• Example: summation (linear). Test H(af + bg) = aH(f) + bH(g):
• L.H.S.: af + bg = [−6 −3; −2 −4]
Sum(af + bg) = −15

• R.H.S.: aH(f) + bH(g) = 1·(7) + (−1)·(22)
= −15
• L.H.S. = R.H.S., so summation is linear.
An Introduction to the Mathematical Tools Used
in Digital Image Processing
• f = [0 2; 2 3], g = [6 5; 4 7], a = 1, b = −1
• Example: max (nonlinear). Test H(af + bg) = aH(f) + bH(g):
• L.H.S.: af + bg = [−6 −3; −2 −4]
Max(af + bg) = −2

• R.H.S.: aH(f) + bH(g) = 1·(3) + (−1)·(7)
= −4
• L.H.S. ≠ R.H.S., so the max operation is nonlinear.
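
Both checks can be reproduced directly in MATLAB; a minimal sketch using the same f, g, a and b:

f = [0 2; 2 3];  g = [6 5; 4 7];  a = 1;  b = -1;
h = a*f + b*g;                            % [-6 -3; -2 -4]
[sum(h(:)),  a*sum(f(:)) + b*sum(g(:))]   % -15 and -15: summation is linear
[max(h(:)),  a*max(f(:)) + b*max(g(:))]   % -2 and -4: max is nonlinear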
Arithmetic operations

23-Jul-18 110
Arithmetic operations

23-Jul-18 111
Arithmetic operations

23-Jul-18 112
Geometric spatial transformations

23-Jul-18 113
Geometric spatial transformations
• Rotation
• (old coordinates are (x, y); new coordinates are (x', y'))
• θ = initial angle, φ = angle of rotation.
• x = r cos θ
y = r sin θ
• x' = r cos(θ + φ) = r cos θ cos φ − r sin θ sin φ
y' = r sin(θ + φ) = r sin θ cos φ + r cos θ sin φ
• Hence:
x' = x cos φ − y sin φ
y' = y cos φ + x sin φ
• Example: φ = 30°, x = 10, y = 5
• x' = x cos 30° − y sin 30° = 6.16
• y' = y cos 30° + x sin 30° = 9.33
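
The same example checked in MATLAB (cosd and sind take their arguments in degrees):

x = 10;  y = 5;  phi = 30;          % rotation angle in degrees
xp = x*cosd(phi) - y*sind(phi)      % 6.16
yp = y*cosd(phi) + x*sind(phi)      % 9.33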
Geometric spatial transformations

23-Jul-18 115
Geometric spatial transformations
 Nearest neighbor interpolation produces the most jagged (sharply uneven) edges, and bilinear interpolation yields significantly improved results.

 Bicubic interpolation produces slightly sharper results still.

23-Jul-18 116
Thank you

23-Jul-18 117
