
Array versus Matrix Operations
• Consider two 2 x 2 images

      [a11 a12]        [b11 b12]
      [a21 a22]  and   [b21 b22]

• Array Product (carried out element by element) is:

      [a11·b11  a12·b12]
      [a21·b21  a22·b22]

• Matrix Product is:

      [a11·b11 + a12·b21   a11·b12 + a12·b22]
      [a21·b11 + a22·b21   a21·b12 + a22·b22]

• An array operation involving one or more images is carried out
on a pixel-by-pixel basis.

• Ex. i) Raising an image to a power:
each individual pixel is raised to that power.

ii) Dividing an image by another:
the division is between corresponding pixel pairs.
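For illustration (not part of the slides), a minimal NumPy sketch with example 2 x 2 matrices showing array (element-wise) operations versus the matrix product:

# Array (element-wise) operations versus the matrix product; a and b are example images.
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

array_product = a * b      # element-wise: [[5, 12], [21, 32]]
matrix_product = a @ b     # matrix multiply: [[19, 22], [43, 50]]

power = a ** 2             # each individual pixel raised to the power 2
division = a / b           # division between corresponding pixel pairs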
Linear versus Nonlinear Operations
• General operator, H, that produces an output image, g(x, y),
for a given input image, f (x, y):
H[f(x, y)] = g(x, y)

• H is said to be a linear operator if


H[ai fi(x,y) + aj fj(x,y)] = ai H[fi(x,y)] + aj H[fj(x,y)]
                           = ai gi(x,y) + aj gj(x,y)    …eq.(i)
where, ai, aj – arbitrary constants
fi(x,y), fj(x,y) – images of the same size.
• Suppose H is the sum operator, Σ. Then

Σ[ai fi(x,y) + aj fj(x,y)] = ai Σ fi(x,y) + aj Σ fj(x,y)
                           = ai gi(x,y) + aj gj(x,y)

Thus, the sum operator is linear.


• Consider the max operation, which gives the maximum intensity of an image.
• Let f1 and f2 be two 2 x 2 images, and let a1 = 1, a2 = -1.

• To test linearity:
• LHS of eq.(i): max[a1 f1 + a2 f2] = max[f1 - f2]
• RHS of eq.(i): a1 max[f1] + a2 max[f2] = max[f1] - max[f2] = -4
• LHS ≠ RHS
• So, max is a non-linear operation.
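A small NumPy check of eq.(i), using illustrative 2 x 2 matrices chosen so that the right-hand side of the max test evaluates to -4 as above:

# Linearity test of eq.(i): the sum operator passes, the max operator does not.
# f1 and f2 are illustrative example images, not values from the slides.
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a1, a2 = 1, -1

# Sum operator: H[f] = sum of all pixels
lhs_sum = np.sum(a1 * f1 + a2 * f2)
rhs_sum = a1 * np.sum(f1) + a2 * np.sum(f2)
print(lhs_sum == rhs_sum)        # True -> sum is linear

# Max operator: H[f] = maximum pixel value
lhs_max = np.max(a1 * f1 + a2 * f2)
rhs_max = a1 * np.max(f1) + a2 * np.max(f2)
print(lhs_max, rhs_max)          # -2 and -4 -> LHS != RHS, max is non-linear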
Arithmetic Operations
• Arithmetic operations are array operations that are carried
out between corresponding pixel pairs.

• Four arithmetic operations:


s(x, y) = f(x, y) + g(x, y)
d(x, y) = f(x, y) - g(x, y)
p(x, y) = f(x, y) * g(x, y)
v(x, y) = f(x, y) / g(x, y)
Where, x = 0,1,2,…,M-1, y = 0,1,2,….N-1.
All images are of size M (rows) x N (columns).
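A hedged NumPy sketch of the four arithmetic operations; f and g are example arrays, and the cast to float avoids the wrap-around that 8-bit integer arithmetic would cause:

# Pixel-wise arithmetic between two images of equal size M x N.
import numpy as np

rng = np.random.default_rng(0)
f = rng.integers(0, 256, size=(4, 4)).astype(np.float64)
g = rng.integers(1, 256, size=(4, 4)).astype(np.float64)   # no zeros, so division is safe

s = f + g      # sum image s(x, y)
d = f - g      # difference image d(x, y)
p = f * g      # product image p(x, y)
v = f / g      # quotient image v(x, y)

# Rescale a result back to the displayable range [0, 255] if needed
d_display = np.uint8(255 * (d - d.min()) / (d.max() - d.min() + 1e-12))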
Set and Logical Operations
• Basic set operations
• Let A be a set composed of ordered pairs of real numbers (pixel coordinates).
• If pixel a = (x, y) is an element of A, we write a ∈ A.

• If a is not an element of A, we write a ∉ A.

• The set with no elements is called the null or empty set, denoted ∅.

• If every element of a set A is also an element of a set B, then
A is said to be a subset of B, written A ⊆ B.

• Union of two sets A and B: C = A ∪ B

• Intersection of two sets A and B: D = A ∩ B

• Two sets A and B are disjoint or mutually exclusive if they have
no common elements, i.e., A ∩ B = ∅.
• The set universe, U, is the set of all elements in a given
application.
• The complement of a set A is the set of elements that are not in A:
Aᶜ = {w | w ∉ A}

• The difference of two sets A and B:
A - B = {w | w ∈ A, w ∉ B} = A ∩ Bᶜ
Illustration of Set Concept
Illustration of Logical Operators
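For binary images, the set operations correspond directly to the logical operators (union → OR, intersection → AND, complement → NOT). A small NumPy sketch with example 3 x 3 binary images:

# Set operations on binary images expressed as logical operators.
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=bool)
B = np.array([[0, 1, 1],
              [0, 1, 1],
              [0, 0, 0]], dtype=bool)

union        = A | B               # A ∪ B
intersection = A & B               # A ∩ B
complement_A = ~A                  # A^c (the image support acts as the universe U)
difference   = A & ~B              # A - B
disjoint     = not np.any(A & B)   # True only if A and B share no elements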
Spatial Operations
• Spatial operations are performed directly on the pixels of a
given image.
o (1) single-pixel operations,
o (2) neighborhood operations, &
o (3) geometric spatial transformations.

• Single-pixel operations
s = T(z)
o z - intensity of a pixel in the original image
o s - (mapped) intensity of the corresponding pixel in the processed image.
• Neighborhood operations
• Let Sxy denote the set of coordinates of a neighborhood centered on an
arbitrary point (x, y) in an image. For example, computing the average
value of the pixels in that neighborhood can be expressed in equation form as

g(x, y) = (1 / mn) Σ(r,c)∈Sxy f(r, c)

• where r and c are the row and column coordinates of the
pixels whose coordinates are members of the set Sxy, and the
neighborhood is of size m x n.
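A minimal NumPy sketch of this neighborhood operation, assuming an m x n averaging neighborhood and edge padding at the borders (the padding choice is an assumption, not from the slides):

# Neighborhood averaging: each output pixel is the mean of the m x n block Sxy.
import numpy as np

def neighborhood_average(f, m=3, n=3):
    M, N = f.shape
    pad_r, pad_c = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((pad_r, pad_r), (pad_c, pad_c)), mode='edge')
    g = np.zeros((M, N), dtype=np.float64)
    for x in range(M):
        for y in range(N):
            # Sxy: the m x n block of the padded image centered on (x, y)
            g[x, y] = fp[x:x + m, y:y + n].mean()
    return g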
• Geometric spatial transformations
• They modify the spatial relationship between pixels in an
image.
• a.k.a. rubber-sheet transformations.
• They consist of two basic operations:
o (1) spatial transformation of coordinates and
o (2) intensity interpolation that assigns intensity values to the spatially transformed
pixels.
• The transformation of coordinates may be expressed as
(x, y, 1) = T(v, w, 1)
o (v, w) - pixel coordinates in the original input image
o (x, y) - the corresponding pixel coordinates in the transformed output image.
• One of the most commonly used spatial coordinate
transformations is the affine transform.
• Its general form is

[x]   [t11  t12  t13] [v]
[y] = [t21  t22  t23] [w]
[1]   [  0    0    1] [1]

• This transformation can scale, rotate, translate, or shear a set
of coordinate points, depending on the values chosen for the
elements of the matrix T.
Examples
• Rotation by an angle θ about the origin uses

    [cos θ   -sin θ   0]
T = [sin θ    cos θ   0]
    [  0        0     1]
Vector and Matrix Operations
• Color images are formed in RGB color space by using red,
green, and blue component images.
• Each pixel of an RGB image has 3 components, which can be
organized in the form of a column vector.
• forward mapping, consists of scanning the pixels of the input
image and, at each location, (v, w), computing the spatial
location, (x, y), of the corresponding pixel in the output image.

• Inverse mapping scans the output pixel locations and, at
each location (x, y), computes the corresponding location in
the input image using
(v, w, 1) = T⁻¹(x, y, 1)
• It then interpolates the intensity using nearest neighbor, bilinear,
or bicubic interpolation techniques.
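A minimal NumPy sketch of inverse mapping with nearest-neighbor interpolation, using a rotation about the image center as the example transformation (the centering and the angle handling are illustrative choices):

# Inverse mapping: scan output locations (x, y), map back to (v, w), sample nearest pixel.
import numpy as np

def rotate_nearest(f, theta):
    """Rotate grayscale image f by theta radians about its center."""
    M, N = f.shape
    c, s = np.cos(theta), np.sin(theta)
    cx, cy = (M - 1) / 2.0, (N - 1) / 2.0
    g = np.zeros_like(f)
    for x in range(M):
        for y in range(N):
            # Inverse rotation (rotation matrices are orthogonal: inverse = transpose)
            v = c * (x - cx) + s * (y - cy) + cx
            w = -s * (x - cx) + c * (y - cy) + cy
            vi, wi = int(round(v)), int(round(w))    # nearest-neighbor interpolation
            if 0 <= vi < M and 0 <= wi < N:
                g[x, y] = f[vi, wi]
    return g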
• The Euclidean distance, D, between a pixel vector z and an
arbitrary point a in n-dimensional space is defined using the
inner (vector) product as

D(z, a) = ||z - a|| = [(z - a)ᵀ(z - a)]^(1/2)
        = [(z1 - a1)² + (z2 - a2)² + … + (zn - an)²]^(1/2)

• This is a generalization of the 2-D Euclidean distance.

• D(z, a) is sometimes referred to as a vector norm, denoted by ||z - a||.
• An image of size M X N can be represented as a vector of
dimension MN X 1.
• A broad range of linear processes can be applied to such
images by using notation
g = Hf + n
o f – MN X 1 vector representing Input image
o n - MN X 1 vector representing M X N noise pattern
o g - MN X 1 vector representing affected image
o H – MN X MN matrix representing linear process applied to input image
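A small NumPy sketch of this vector formulation, using an identity H and Gaussian noise purely as placeholders for a real linear process and noise pattern:

# g = Hf + n applied to an M x N image flattened into an MN x 1 vector.
import numpy as np

M, N = 4, 4
image = np.arange(M * N, dtype=np.float64).reshape(M, N)

f = image.reshape(M * N, 1)                                   # MN x 1 input vector
H = np.eye(M * N)                                             # MN x MN linear process (identity placeholder)
n = np.random.default_rng(0).normal(0, 1, size=(M * N, 1))    # MN x 1 noise vector

g = H @ f + n              # resulting image as an MN x 1 vector
g_image = g.reshape(M, N)  # back to M x N form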
Image Registration
• Transform an image to
align its pixels with those
in another image of same
scene.
• Transformation is
unknown.
• Transformation can be
linear or nonlinear.
• Example : Align two images and combine them to produce a
larger one
Image Transforms
• Approaches discussed till now work directly on spatial domain.
• Some tasks are best formulated by transforming the input
images, carrying the specified task in a transform domain, and
applying the inverse transform to return to the spatial domain.
• General form of 2-D linear transforms is given by:

T(u, v) = Σ(x=0 to M-1) Σ(y=0 to N-1) f(x, y) r(x, y, u, v)    …..eq.(1)
o f(x, y) - is the input image
o r(x, y, u, v) - is called the forward transformation kernel
o u – 0,1,2,…..,M-1
o v – 0,1,2,…..,N-1
• General approach for operating in the linear transform
domain: transform the input image f(x, y), carry out the
specified task on T(u, v) in the transform domain, and
apply the inverse transform to obtain the output g(x, y).
o x, y - spatial variables
o u, v – transform variables
o M, N - row and column dimensions of f.
o T(u, v) - is called the forward transform of f(x, y).
• Given T(u, v), we can recover f(x, y) using the inverse
transform of T(u, v):

f(x, y) = Σ(u=0 to M-1) Σ(v=0 to N-1) T(u, v) s(x, y, u, v)    ….eq.(2)

o x – 0,1,2,…..,M-1
o y – 0,1,2,…..,N-1
o s(x, y, u, v) - is called the inverse transformation kernel.
• (a) Image corrupted by sinusoidal interference.
• (b) Magnitude of the Fourier transform.
• (c) Mask (filter) used to eliminate the energy bursts.
• (d) Result of computing the inverse of the modified Fourier transform.
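A sketch (NumPy) of the same general approach; the burst locations and notch radius are placeholders, not values from the figure:

# Forward transform -> mask the interference bursts -> inverse transform.
import numpy as np

def notch_filter(f, burst_centers, radius=3):
    F = np.fft.fftshift(np.fft.fft2(f))            # forward DFT, centered
    mask = np.ones_like(F, dtype=np.float64)
    M, N = f.shape
    U, V = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    for (u0, v0) in burst_centers:                 # zero a small disk at each burst
        mask[(U - u0) ** 2 + (V - v0) ** 2 <= radius ** 2] = 0.0
    G = F * mask                                   # modified Fourier transform
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))   # back to the spatial domain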
• The forward transformation kernel is said to be separable if

r(x, y, u, v) = r1(x, u) r2(y, v)

• The kernel is said to be symmetric if r1 is functionally
equal to r2, so that

r(x, y, u, v) = r1(x, u) r1(y, v)

• Identical comments apply to the inverse kernel by replacing r
with s in the preceding equations.
• Thus, forward & inverse kernels are given by:
• Substituting these kernels into the general transform
formulations, we get the Discrete Fourier transform pair:
• Fourier (forward and inverse) kernels are
o separable and symmetric
o allow 2-D transforms to be computed using 1-D transforms
&
• If the kernel is separable and symmetric, and f(x, y) is a square image of size M x M,

• then eq.(1) & eq.(2) can be expressed in matrix form as

T = A F A

o F – M x M matrix containing the elements of the input image f(x, y)
o A – M x M transformation matrix with elements aij = r1(i, j)
o T – resulting M x M matrix with values T(u, v), u, v = 0,1,2,….., M-1
• To obtain the inverse transform, we pre- and post-multiply the
above equation by an inverse transformation matrix B:

B T B = B A F A B

• If B = A⁻¹, then

F = B T B

and F can be recovered completely from its forward transform.

• If B ≠ A⁻¹, the approximation is

F̂ = B A F A B
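A small NumPy sketch of this matrix formulation for the Fourier kernel, checking T = A F A against np.fft.fft2 and recovering F with B = A⁻¹ (the image is a random example):

# Matrix form of the 2-D DFT for a square M x M image.
import numpy as np

M = 8
F = np.random.default_rng(0).random((M, M))    # example image matrix

x = np.arange(M)
A = np.exp(-2j * np.pi * np.outer(x, x) / M)   # symmetric, separable Fourier kernel matrix

T = A @ F @ A                                  # forward transform: T = A F A
print(np.allclose(T, np.fft.fft2(F)))          # True: matches the 2-D DFT

B = np.linalg.inv(A)                           # B = A^-1 (equals conj(A)/M)
F_rec = B @ T @ B                              # inverse: F = B T B
print(np.allclose(F_rec, F))                   # True: F recovered completely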
Intensity Transformation
 Very first step in Digital Image Processing.

 It is purely subjective.

 It is a cosmetic procedure.

 It improves subjective qualities of images.

 It has two domains:

 Spatial domain

 Frequency domain
Spatial Domain

 Spatial means working in the image space itself, i.e., directly on the given image.

 It means working with pixel values or raw data.

 Let g(x, y) be the original image, where g gives the gray-level
values and (x, y) the coordinates.

 For an 8-bit image, g can take values from 0 – 255,

where 0 – black,
255 – white &
others – shades of gray
Spatial Domain
 In an image with size 256 x 256, (x, y) can assume any value
from (0 , 0) to (255 , 255).

(0 , 0) Y

X (255, 255)
Spatial Domain
 Applying a transform T modifies the image:

f(x,y) = T[g(x,y)]
where,
g(x,y) is the original image
T is the transformation applied on g(x,y)
f(x,y) is the new, modified image

 In spatial domain techniques, only the transformation T changes.

 Spatial domain enhancement is carried out in two ways:

Point processing
Neighborhood processing
Point Processing
 Here, we work on a single pixel, i.e., T is a 1 x 1 operator.

The new image depends on the transform T and the original image.

 Some important examples of point processing are:

 Digital Negative
 Contrast Stretching
 Thresholding
 Gray level slicing
 Bit plane slicing
 Dynamic range compression
Point Processing
 Identity Transformation:
(Figure: the identity transformation T, a 45° line mapping the original gray
level r to the modified gray level s; e.g., 10 → 10, 125 → 125, 255 → 255.)

• It does not modify the input image at all.

• In general, s = r
Point Processing
1) Digital Image Negative:

 Useful in a large number of applications, e.g., X-ray images.

 Negative means inverting the gray levels.

(Figure: the negative transformation, a line of slope -1 from (0, 255) to
(255, 0) mapping input image intensity r to output gray level s; e.g.,
55 → 200 and 200 → 55.)
Point Processing
A digital negative can be obtained by:

s = 255 – r (where L – 1 = 255)

when r = 0, s = 255;
and if r = 255, s = 0.

Generally, s = (L-1) – r

where, L – total number of gray levels (e.g. 256 for an 8-bit image)
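A one-line NumPy sketch of the digital negative for an 8-bit image:

# Digital negative: s = (L-1) - r, computed in a wider type to avoid uint8 surprises.
import numpy as np

def negative(img, L=256):
    return ((L - 1) - img.astype(np.int32)).astype(np.uint8)

r = np.array([[0, 55, 200, 255]], dtype=np.uint8)
print(negative(r))      # [[255 200  55   0]]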
Point Processing
2) Contrast Stretching:

(Figure: piecewise-linear contrast-stretching transformation of input image
intensity r to output image intensity s, with breakpoints (r1, s1) = (a, v) and
(r2, s2) = (b, w) and segment slopes l, m and n over [0, a], [a, b] and
[b, 255] respectively; the dotted line is the identity transformation.)
Point Processing
Reasons for poor contrast:
 Poor illumination
 Wrong setting of the lens aperture

 The idea behind contrast stretching is to make the dark portion darker and the bright
portion brighter.

 In the figure above, the dotted line indicates the identity transformation and the solid line
indicates contrast stretching.

 The dark portion is made darker by assigning it a slope of < 1.

 The bright portion is made brighter by assigning it a slope of > 1.
 No single set of slopes can be generalized to all kinds of images.

The formulation is given below:

s = l·r           ; for 0 ≤ r ≤ a
  = m(r-a) + v    ; for a ≤ r ≤ b
  = n(r-b) + w    ; for b ≤ r ≤ L-1
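A NumPy sketch of this piecewise formulation; the breakpoints a, b and the slopes l, m, n are illustrative choices, with v = l·a and w = m(b - a) + v as in the figure:

# Piecewise-linear contrast stretching.
import numpy as np

def contrast_stretch(r, a=100, b=200, l=0.3, m=2.0, n=0.2, L=256):
    r = r.astype(np.float64)
    v = l * a                  # output at the first breakpoint
    w = m * (b - a) + v        # output at the second breakpoint
    s = np.where(r <= a, l * r,
        np.where(r <= b, m * (r - a) + v,
                          n * (r - b) + w))
    return np.clip(s, 0, L - 1).astype(np.uint8)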
Point Processing
3) Thresholding:

(Figure: the thresholding transformation, a step function at r = a mapping
input image intensity r to output image intensity s: 0 below the threshold
and 255 above it.)
Point Processing
 Extreme Contrast Stretching yields Thresholding.

 In the contrast stretching figure, if the slopes l and n are made ZERO and the slope m is
increased, then we get the thresholding transformation.

 If r1 = r2, s1 = 0 & s2 = L-1,


then we get the thresholding function.

 The expression goes as under:

s = 0     ; if r ≤ a
s = L – 1 ; if r > a
where, L is the number of gray levels.

Note: It is a subjective phenomenon.


Thresholded image has maximum contrast as it has only BLACK & WHITE
gray values.
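A minimal NumPy sketch of the thresholding function (the threshold a = 128 is an illustrative choice):

# Thresholding: s = 0 for r <= a, s = L-1 for r > a.
import numpy as np

def threshold(r, a=128, L=256):
    return np.where(r > a, L - 1, 0).astype(np.uint8)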
Point Processing
4) Gray Level Slicing (Intensity Slicing):

(Figures: transformations of r to s that select the band [a, b] of gray levels.
fig. (1) Slicing w/o background; fig. (2) Slicing with background.)
Point Processing
 Thresholding splits the image into 2 parts.

 At times, we need to highlight a specific range of gray levels,
e.g. X-ray scan, CT scan.

 It looks similar to thresholding except that we select a band of gray
levels.

 Formulation of gray level slicing w/o background (fig. 1):

s = L-1 ; for a ≤ r ≤ b
  = 0   ; otherwise
 No background at all.

 Sometimes we may need to retain the background.

 Formulation of gray level slicing with background (fig. 2):

s = L-1 ; for a ≤ r ≤ b
  = r   ; otherwise
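A NumPy sketch of both slicing variants; the band [a, b] is an illustrative choice:

# Gray-level slicing: fig. (1) discards the background, fig. (2) retains it.
import numpy as np

def slice_without_background(r, a=100, b=150, L=256):
    return np.where((r >= a) & (r <= b), L - 1, 0).astype(np.uint8)

def slice_with_background(r, a=100, b=150, L=256):
    return np.where((r >= a) & (r <= b), L - 1, r).astype(np.uint8)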
Point Processing
5) Bit Plane Slicing:
 Here, we find the contribution made by each bit to the final image.
 Consider a 256 x 256 image with 256 gray levels, i.e., an 8-bit representation
for each pixel. E.g. BLACK is represented as 0000_0000 & WHITE by
1111_1111.
 Consider the LSB value of each pixel & plot the image. Continue till the MSB is
reached.
All 8 images will be binary.
 Observing the images, we conclude that
higher-order bit planes contain the visually significant data, and
lower-order bit planes contain the subtle details of the image.
 Hence, bit plane slicing can be used in image compression:
we can transmit only the higher-order bits & remove the lower-order bits.
E.g. Steganography
Point Processing
Ex. Plot the bit planes of the given 3 x 3 image.

Image:          Binary (max. intensity is 7, thus 3 bits per pixel):
1 2 0           001 010 000
4 3 2           100 011 010
7 5 2           111 101 010

LSB plane       Middle plane    MSB plane
1 0 0           0 1 0           0 0 0
0 1 0           0 1 1           1 0 0
1 1 0           1 0 1           1 1 0
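A small NumPy sketch that reproduces these planes by shifting and masking each pixel value:

# Bit plane slicing of the 3 x 3 example image.
import numpy as np

img = np.array([[1, 2, 0],
                [4, 3, 2],
                [7, 5, 2]], dtype=np.uint8)

def bit_plane(image, k):
    return (image >> k) & 1      # plane k: k = 0 is the LSB, higher k is more significant

for k in range(3):               # 3 bits suffice since the maximum value is 7
    print(f"plane {k}:\n{bit_plane(img, k)}")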


Point Processing
6) Dynamic Range Compression (Log transformation):

 At times, dynamic range of image exceeds the capability of display device.

 Some pixel values are so large that the other low value pixel gets obscured.
E.g. stars in day time are not visible though present due to large intensity of
sun.

 Thus dynamic range needs to be compressed.


Point Processing
 Log operator is an excellent compression function.

 Thus, Dynamic range compression is achieved using log operator.

Formulation:
s = C.ln(1 + |r|)
where, C – normalization constant
r – input intensity
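A NumPy sketch of the log transformation; here C is chosen to map the maximum input value to L - 1 = 255 (an assumption about normalization, not stated on the slide):

# Dynamic range compression with the log operator: s = C * ln(1 + |r|).
import numpy as np

def log_transform(r, L=256):
    r = r.astype(np.float64)
    # Assumes the image is not all zeros, so the normalization constant is finite.
    C = (L - 1) / np.log(1 + np.abs(r).max())
    return (C * np.log(1 + np.abs(r))).astype(np.uint8)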
Point Processing
7) Power law Transform:

(Figure: family of power-law curves of s versus r for γ < 1, γ = 1 and γ > 1.)

f(x, y) = C·g(x, y)^γ

s = C·r^γ    where, C & γ are positive constants

Point Processing
The transformation is shown for different values of γ, which is also called the
gamma correction factor.

 By changing γ, we obtain a family of transformation curves.

 Nonlinearity encountered during image capture, storage & display can
be corrected using gamma correction.

 The power law transform can be used to increase the dynamic range of an image.
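A NumPy sketch of the power-law transform, normalizing an 8-bit input to [0, 1] before applying s = C·r^γ (the normalization step is an assumption for display purposes):

# Power-law (gamma) transform: s = C * r**gamma.
import numpy as np

def gamma_correct(r, gamma, C=1.0, L=256):
    rn = r.astype(np.float64) / (L - 1)     # normalize intensities to [0, 1]
    s = C * np.power(rn, gamma)             # gamma < 1 brightens, gamma > 1 darkens
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)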
