
Digital Image Processing


Image Filtering

O. Le Meur
olemeur@irisa.fr

Univ. of Rennes 1
http://www.irisa.fr/temics/staff/lemeur/

January 2, 2011

Table of Contents

1 Introduction

2 Point-to-point transformation

3 Linear filtering (neighborhood operator)

4 Non Linear filtering

5 Conclusion

Introduction

1 Introduction

2 Point-to-point transformation

3 Linear filtering (neighborhood operator)

4 Non Linear filtering

5 Conclusion

Introduction: image transformation

There exist three types of transformations:

Point-to-point transformation:
The output value at a specific coordinate depends on only one input value, but not necessarily at the same coordinate;

Local-to-point transformation:
The output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate;

Global-to-point transformation:
The output value at a specific coordinate depends on all the values in the input image.

Note that the complexity increases with the size of the considered neighborhood...

Point-to-point transformation

1 Introduction

2 Point-to-point transformation
  Spatial coordinates-based transformations
  Pixel values-based transformations

3 Linear filtering (neighborhood operator)

4 Non Linear filtering

5 Conclusion

Spatial coordinates-based transformations

Remark:
This section is composed of several pictures extracted from
http://eeweb.poly.edu/~onur/lectures/lectures.html.

Let im[x, y] be an input image of size N × N.
A spatial coordinates-based transformation, also called warping, aims at providing an image IM[k, l] from the input image im[x, y]:

IM[k, l] = im[x(k, l), y(k, l)]

x(k, l) and y(k, l) are the transformations, or pixel warping functions. These functions only modify the spatial coordinates of a pixel, not its value.

Special cases to take into consideration:

if the new coordinates [x(k, l), y(k, l)] fall outside the input image, then IM[k, l] = 0;
the new coordinates must be integers (rounding operation, nearest integer...).
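As an illustration, a minimal inverse-warping sketch in NumPy (the helper names warp, x_fn and y_fn are ours, not from the slides; nearest-integer rounding and zero padding are used as described above):

import numpy as np

def warp(im, x_fn, y_fn):
    """Apply IM[k, l] = im[x(k, l), y(k, l)] with nearest-integer rounding.

    Source coordinates falling outside the input image give IM[k, l] = 0.
    x_fn and y_fn take (k, l) index grids and return source coordinates.
    """
    N, M = im.shape
    k, l = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    x = np.rint(x_fn(k, l)).astype(int)   # nearest-integer source row
    y = np.rint(y_fn(k, l)).astype(int)   # nearest-integer source column
    inside = (x >= 0) & (x < N) & (y >= 0) & (y < M)
    out = np.zeros_like(im)
    out[inside] = im[x[inside], y[inside]]
    return out

# Example: translation with (Tk, Tl) = (0, -50), as on a later slide
im = np.random.randint(0, 256, (256, 256)).astype(np.uint8)
translated = warp(im, lambda k, l: k, lambda k, l: l - 50)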

Transpose

The transpose transformation is given by

x(k, l) = l
y(k, l) = k

Vertical Flip
The vertical flip transformation is given by

x(k, l) = N − k
y(k, l) = l

The horizontal flip transformation is given by

x(k, l) = k
y(k, l) = N − l

Translation
The translation transformation is given by

x(k, l) = k + Tk
y(k, l) = l + Tl

where Tk and Tl are the translation values for the x-axis and the y-axis respectively.
In the example below, we have Tl = −50:

IM[k, l] = im[x(k, l), y(k, l)]
IM[k, l] = im[k, l + Tl]

Different translations can be obtained depending on the Tk and Tl values.


Rotation

The rotation transformation is given by

x(k, l) = (k − x0) cos θ − (l − y0) sin θ + x0
y(k, l) = (k − x0) sin θ + (l − y0) cos θ + y0

where [x0, y0] are the spatial coordinates of the center of rotation and θ is the angle.

Extracted from http://eeweb.poly.edu/~onur/lectures/lectures.html.

Wave

First variant:
x(k, l) = k
y(k, l) = l + α × sin(β × l)

Second variant:
x(k, l) = k + α × sin(β × k)
y(k, l) = l

α and β can be used to strengthen the effect.

Warp and swirl

The warp transformation is given by

x(k, l) = k
y(k, l) = y0 + sign(l − y0) × (l − y0)² / y0

The swirl effect is a rotation, but the angle of rotation θ varies with the pixel distance from the center of the image [x0, y0]:

r = sqrt( (k − x0)² + (l − y0)² )
θ = π × r / 512

If r → 0, θ is small...
Glass effect

A glass effect is obtained by adding a small random displacement to each pixel:

x(k, l) = k + (RAND(1, 1) − 1/2) × 10
y(k, l) = l + (RAND(1, 1) − 1/2) × 10

Extracted from http://eeweb.poly.edu/~onur/lectures/lectures.html.

Summary
All 2D linear transformations:

[ x ]   [ a  b ] [ k ]
[ y ] = [ c  d ] [ l ]

Scale, rotation, mirror...
Properties:
Origin maps to origin;
Lines map to lines;
Ratios are preserved...

Affine transformations (linear transf. + translation):

[ x ]   [ a  b  c ] [ k ]
[ y ] = [ d  e  f ] [ l ]
[ w ]   [ 0  0  1 ] [ w ]

Properties:
Origin does not necessarily map to origin;
Lines map to lines;
Ratios are preserved...

Summary

Affine transformations (linear transf. + translation):

Translation:
[ x ]   [ 1  0  tx ] [ k ]
[ y ] = [ 0  1  ty ] [ l ]
[ 1 ]   [ 0  0   1 ] [ 1 ]

Scale:
[ x ]   [ sx  0  0 ] [ k ]
[ y ] = [ 0  sy  0 ] [ l ]
[ 1 ]   [ 0   0  1 ] [ 1 ]

2D in-plane rotation:
[ x ]   [ cos θ  −sin θ  0 ] [ k ]
[ y ] = [ sin θ   cos θ  0 ] [ l ]
[ 1 ]   [   0       0    1 ] [ 1 ]
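For illustration, these homogeneous 3x3 matrices can be built and composed as in the following NumPy sketch (the helper names translation, scale and rotation are ours):

import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def scale(sx, sy):
    return np.array([[sx, 0.0, 0.0],
                     [0.0, sy, 0.0],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Rotation about an arbitrary center [x0, y0] is a composition:
# translate the center to the origin, rotate, translate back.
x0, y0 = 128, 128
A = translation(x0, y0) @ rotation(np.pi / 6) @ translation(-x0, -y0)

# Map an output pixel (k, l) to its source coordinates (x, y):
k, l = 10, 20
x, y, _ = A @ np.array([k, l, 1.0])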

Pixel values-based transformations

Let im[x, y] be an input image of size N × N.

A pixel values-based transformation aims at providing an image IM[k, l] from the input image im[x, y]:

IM[k, l] = f(im[k, l])

Notice that the spatial coordinates of the pixels are not modified. The function f is used to modify the pixel values.

The simplest one is the identity function f(p) = p:

IM[k, l] = f(im[k, l])
IM[k, l] = im[k, l]

Histogram
An image histogram is a graphical representation of the tonal distribution in a digital image: it plots the number of pixels for each tonal value.
The histogram gives information about the global distribution of the image intensities.
It shows the number of pixels in the image (vertical axis) with a particular brightness value (horizontal axis).

(a) Original (b) Histogram (intensity)

Histogram

(a) Original (b) Histogram (intensity)

(c) Original (d) Histogram (intensity)

Histogram

The high dynamic range in the last case provides the best quality.

From R.C. Gonzalez, R.E. Woods, Digital image processing.

Histogram equalization

The goal is to increase the global contrast of images, especially when the usable data of the image is represented by close contrast values.
Consider a discrete grayscale image {x} and let ni be the number of occurrences of gray level i.
The probability of occurrence of a pixel of level i in the image is px(i) = p(x = i) = ni / n, 0 ≤ i < L, L being the total number of gray levels in the image and n the total number of pixels in the image.
Let us also define the cumulative distribution function: cdfx(i) = Σ_{j=0..i} px(j).

We want to produce an image {y} such that its cumulative distribution function is linear: cdfy(i) = i × K, for some constant K.

y = cdfx(x) × (max{x} − min{x}) + min{x}
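A minimal NumPy sketch of the equalization above, assuming an 8-bit grayscale image (the function name equalize and the test image are ours):

import numpy as np

def equalize(im, L=256):
    """Histogram equalization of an 8-bit grayscale image (2-D uint8 array)."""
    n = im.size
    hist = np.bincount(im.ravel(), minlength=L)   # n_i for each gray level i
    px = hist / n                                  # p_x(i) = n_i / n
    cdf = np.cumsum(px)                            # cdf_x(i)
    # y = cdf_x(x) * (max{x} - min{x}) + min{x}, rounded back to integers
    lut = cdf * (im.max() - im.min()) + im.min()
    return np.rint(lut[im]).astype(np.uint8)

im = (np.random.rand(128, 128) * 64 + 96).astype(np.uint8)  # low-contrast test image
eq = equalize(im)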

Negative image

The negative image is obtained by f (p ) = 255 − p (pixel values are coded on 8 bits).
IM [k , l ] = f (im[k , l ])
IM [k , l ] = 255 − im[k , l ]

Piecewise continuous transformation

The objective of such a transformation is:

to compress pre-determined ranges of values: range compression;
to accentuate pre-determined ranges of values: range stretching.

f(p) = α1 × p,                                                    0 ≤ p < a1
f(p) = α2 × (p − a1) + α1 a1,                                     a1 ≤ p < a2
...
f(p) = αi × (p − a(i−1)) + Σ_{j=1..i−1} αj × (aj − a(j−1)),       a(i−1) ≤ p < ai
...

Obviously, we have:
αi < 1, compression;
αi > 1, stretching.

Piecewise continuous transformation

Example of contrast stretching:

Let f be a function defined on three pieces:

f(p) = α1 p,                                   0 ≤ p < a1
f(p) = α2 (p − a1) + α1 a1,                    a1 ≤ p < a2
f(p) = α3 (p − a2) + (α2 (a2 − a1) + α1 a1),   a2 ≤ p ≤ 255

Piecewise continuous transformation

Example of contrast stretching:

Imagine that α1 and α3 are null. The filtered image then only shows grey levels coming from [a1, a2]: we just keep a slice of the image.

Piecewise continuous transformation

Binary thresholding:

IM[x, y] = 0    if im[x, y] < T
IM[x, y] = 255  otherwise

Gamma correction:

IM[x, y] = im[x, y]^γ
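A small NumPy sketch of both operations (the helper names are ours; gamma correction is applied on values normalized to [0, 1], a common convention, although the slide writes the power directly on the pixel value):

import numpy as np

def threshold(im, T):
    """Binary thresholding: 0 below T, 255 otherwise."""
    return np.where(im < T, 0, 255).astype(np.uint8)

def gamma_correction(im, gamma):
    """Gamma correction on an 8-bit image, working on normalized values in [0, 1]."""
    norm = im.astype(np.float64) / 255.0
    return np.rint(255.0 * norm ** gamma).astype(np.uint8)

im = (np.random.rand(64, 64) * 255).astype(np.uint8)
binary = threshold(im, T=128)
bright = gamma_correction(im, gamma=0.5)   # gamma < 1 brightens mid-tones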

Linear filtering with neighborhood operators

1 Introduction

2 Point-to-point transformation

3 Linear filtering (neighborhood operator)
  Definition
  Low-pass filters in spatial domain
  High-pass filters in spatial domain
  Differentiation filter in spatial domain
  Frequency domain filtering

4 Non Linear filtering

5 Conclusion

Definition

Definition of a neighborhood around a pixel of spatial coordinates (x, y). The neighborhood is called V(x, y).

Two examples:

Definition

2D Finite Impulse Response (FIR) filter:

IM[x, y] = (im ∗ h)[x, y]
IM[x, y] = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l) × im[x − k, y − l]
  (output)        (filter coefficients)         (input)

where h is the 2D impulse response, also called the Point Spread Function (PSF) or the kernel of the transform. It is composed of the filter coefficients (finite length).

The gain of the filter is equal to

g = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l)

Definition

IM[x, y] = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l) × im[x − k, y − l]

where H is the convolution kernel.

Example: the filter support is (3 × 5). The convolution kernel is:

H = [ h(−2, −1)  h(−1, −1)  h(0, −1)  h(1, −1)  h(2, −1) ]
    [ h(−2, 0)   h(−1, 0)   h(0, 0)   h(1, 0)   h(2, 0)  ]
    [ h(−2, 1)   h(−1, 1)   h(0, 1)   h(1, 1)   h(2, 1)  ]

Definition

Example for a neighborhood of size (2N + 1) × (2N + 1), with N = 2:

IM[x, y] = Σ_{k=−N..N} Σ_{l=−N..N} h(k, l) × im[x − k, y − l]

Number of multiplications per output point: O(N²)
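A direct NumPy implementation of this sum could look like the following sketch (fir_filter is our name; borders are handled by replication, one possible choice mentioned on a later slide):

import numpy as np

def fir_filter(im, H):
    """Direct 2D FIR filtering: IM[x, y] = sum_{k,l} H[k+N, l+N] * im[x - k, y - l]."""
    n = H.shape[0] // 2                      # assumes a square (2N+1) x (2N+1) kernel
    padded = np.pad(im.astype(np.float64), n, mode="edge")
    out = np.zeros_like(im, dtype=np.float64)
    for k in range(-n, n + 1):
        for l in range(-n, n + 1):
            # shift the padded image by (k, l) and accumulate
            out += H[k + n, l + n] * padded[n - k: n - k + im.shape[0],
                                            n - l: n - l + im.shape[1]]
    return out

# 3x3 average (box) filter, as on the next slide
H = np.ones((3, 3)) / 9.0
im = np.random.rand(100, 100)
smoothed = fir_filter(im, H)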

Average filter

The simplest low-pass filter is the local averaging operation. The main effect of a low-pass filter is a blurring. The size of the kernel is (2N + 1) × (2N + 1):

h(k, l) = 1 / (2N + 1)²   if −N ≤ k, l ≤ N
h(k, l) = 0               otherwise

For N = 1, the convolution kernel is given by:

H = (1/9) [ 1 1 1 ]
          [ 1 1 1 ]
          [ 1 1 1 ]

Average filter

Three examples of averaging for different kernel sizes. From left to right, N = {1, 3, 8}:

The amount of blur increases with the size of the kernel (and so does the number of operations, O(N²)). In order to filter pixels located near the edges of the image, edge pixel values can be replicated to give sufficient data (this is not the case here).

Gaussian filter

The kernel h is given by the following function:

h(x, y) = (1 / (2πσ²)) × exp( −(x² + y²) / (2σ²) )

Each pixel's new value is set to a weighted average of that pixel's neighborhood. The original pixel's value receives the heaviest weight (having the highest Gaussian value) and neighboring pixels receive smaller weights as their distance to the original pixel increases. This results in a blur that preserves boundaries and edges better than other, more uniform blurring filters.
Note that the filter support is truncated...

Gaussian filter

h(x, y) = (1 / (2πσ²)) × exp( −(x² + y²) / (2σ²) )

Example of kernel, σ = 0.84089642, N = 3.
Note that the center element (at [4, 4]) has the largest value, decreasing symmetrically as the distance from the center increases.

H =
0.00000067 0.00002292 0.00019117 0.00038771 0.00019117 0.00002292 0.00000067
0.00002292 0.00078633 0.00655965 0.01330373 0.00655965 0.00078633 0.00002292
0.00019117 0.00655965 0.05472157 0.11098164 0.05472157 0.00655965 0.00019117
0.00038771 0.01330373 0.11098164 0.22508352 0.11098164 0.01330373 0.00038771
0.00019117 0.00655965 0.05472157 0.11098164 0.05472157 0.00655965 0.00019117
0.00002292 0.00078633 0.00655965 0.01330373 0.00655965 0.00078633 0.00002292
0.00000067 0.00002292 0.00019117 0.00038771 0.00019117 0.00002292 0.00000067

Note that 0.22508352 (the central value) is 1177 times larger than 0.00019117, which is just outside 3σ.
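As a quick check, this kernel can be regenerated by sampling the Gaussian on an integer grid (a small NumPy sketch; gaussian_kernel is our helper name, and the truncated kernel is not re-normalized here):

import numpy as np

def gaussian_kernel(sigma, N):
    """Sampled, truncated 2D Gaussian kernel of size (2N+1) x (2N+1)."""
    x, y = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

H = gaussian_kernel(sigma=0.84089642, N=3)
print(H[3, 3])            # central coefficient, close to 0.22508 as on the slide
print(H[3, 3] / H[0, 2])  # ratio of about 1177, as noted above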

Gaussian filter

h(x, y) = (1 / (2πσ²)) × exp( −(x² + y²) / (2σ²) )

Example of kernel, σ = 0.6, N = 1:

H = (1/16) [ 1 2 1 ]
           [ 2 4 2 ]
           [ 1 2 1 ]

σ = 1, N = 2:

H = (1/1444) [  1   9  18   9   1 ]
             [  9  81 162  81   9 ]
             [ 18 162 324 162  18 ]
             [  9  81 162  81   9 ]
             [  1   9  18   9   1 ]

Gaussian filter

Example of Gaussian filtering with σ = 2:

Gaussian filter

Increasing σ increases the smoothing.

Other low pass filters

2D Pyramidal filter:

H = (1/81) [ 1 2 3 2 1 ]
           [ 2 4 6 4 2 ]
           [ 3 6 9 6 3 ]
           [ 2 4 6 4 2 ]
           [ 1 2 3 2 1 ]

Conic filter:

H = (1/25) [ 0 0 1 0 0 ]
           [ 0 2 2 2 0 ]
           [ 1 2 5 2 1 ]
           [ 0 2 2 2 0 ]
           [ 0 0 1 0 0 ]

High-pass filter

The high-pass filtered image can be thought of as the original image minus the low-pass filtered image:

IM[x, y] = im[x, y] − Σ_{k=−N..N} Σ_{l=−N..N} h(k, l) × im[x − k, y − l]

which can be rewritten as a single convolution with an equivalent high-pass kernel h:

IM[x, y] = Σ_{k=−N..N} Σ_{l=−N..N} h(k, l) × im[x − k, y − l]

Examples of such high-pass kernels:

H = [  0 −1  0 ]     H = [ −1 −1 −1 ]     H = [  1 −2  1 ]
    [ −1  5 −1 ]         [ −1  9 −1 ]         [ −2  5 −2 ]
    [  0 −1  0 ]         [ −1 −1 −1 ]         [  1 −2  1 ]

High-pass filter

Three examples of high-pass filtering for different kernel sizes. From left to right, N = {1, 3, 8}:

When the kernel size increases, the filtering is stronger and the result is therefore less noisy.

Differentiation filter

f'(x) = lim_{Δx→0} ( f(x + Δx) − f(x) ) / Δx

Local variations of intensity are an important source of information in image processing. These local variations are measured by the gradient (the rate of change of the function):

∇im[k, l] = ( ∂im/∂x [k, l] , ∂im/∂y [k, l] )

In the illustration below: GX = ∂im/∂x [k, l] and GY = ∂im/∂y [k, l].

Differentiation filter

GX = ∂im/∂x [k, l]:   kernel hx = [ −1  1 ]

∂im/∂x [k, l] ≈ im[k + 1, l] − im[k, l]

GY = ∂im/∂y [k, l]:   kernel hy = [ −1  1 ]^T

∂im/∂y [k, l] ≈ im[k, l + 1] − im[k, l]

However, most of the time we use the kernels [ −1  0  1 ] and [ −1  0  1 ]^T (zero phase). Below, from left to right: original, [ −1  1 ], [ −1  0  1 ].

Differentiation filter

However, these filters are very sensitive to noise. In order to improve robustness, they are combined with a blurring filter:

hx = [ −1  0  1 ]        hy = [ −1 −2 −1 ]
     [ −2  0  2 ]             [  0  0  0 ]
     [ −1  0  1 ]             [  1  2  1 ]

These kernels are the Sobel kernels. The blurring filter is [ 1  2  1 ].

IMx[k, l] = (im ∗ hx)[k, l]
IMy[k, l] = (im ∗ hy)[k, l]

In the same vein: the Prewitt and Roberts filters.

Differentiation filter

Norm of the gradient: ||∇IM[k, l]||₂ = sqrt( IMx[k, l]² + IMy[k, l]² );

Its orientation: arg(∇IM[k, l]) = arctan( IMy[k, l] / IMx[k, l] ).

From left to right: IMx, IMy and the norm.
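A short sketch of Sobel filtering and of the gradient norm and orientation, assuming SciPy is available for the 2D convolution (otherwise the fir_filter sketch above could be reused; np.arctan2 is used as a robust variant of the arctangent):

import numpy as np
from scipy.ndimage import convolve   # 2D convolution with border handling

# Sobel kernels as given above
hx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
hy = hx.T

im = np.random.rand(128, 128)
imx = convolve(im, hx, mode="nearest")   # gradient along one axis
imy = convolve(im, hy, mode="nearest")   # gradient along the other axis

norm = np.sqrt(imx**2 + imy**2)          # gradient magnitude
orientation = np.arctan2(imy, imx)       # gradient orientation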

Differentiation filter

The Laplacian of a picture is the second derivative:

∂²im/∂x² [k, l] ≈ im[k + 1, l] + im[k − 1, l] − 2 × im[k, l]
∂²im/∂y² [k, l] ≈ im[k, l + 1] + im[k, l − 1] − 2 × im[k, l]

∇²im[k, l] = ∂²im/∂x² [k, l] + ∂²im/∂y² [k, l]
∇²im[k, l] ≈ im[k + 1, l] + im[k − 1, l] + im[k, l + 1] + im[k, l − 1] − 4 × im[k, l]

For a 4-neighborhood, the kernel is given by

h = [ 0  1  0 ]
    [ 1 −4  1 ]
    [ 0  1  0 ]

We can extend this kernel to compute the Laplacian in all directions (8-neighborhood):

h = [ 1  1  1 ]
    [ 1 −8  1 ]
    [ 1  1  1 ]

Differentiation filter

The second-order derivatives have a stronger response to fine details (e.g. thin lines) than the first-order derivatives.

Frequency domain filtering

IM[x, y] = (im ∗ h)[x, y], where h is the convolution kernel.

The convolution theorem states:

im1[x, y] ∗ im2[x, y]  →F  IM1[u, v] × IM2[u, v]
im1[x, y] × im2[x, y]  →F  IM1[u, v] ∗ IM2[u, v]

When the size of the kernel is large, it is better to apply the filter in the frequency domain.

For more information:
Digital Image Processing, by R. C. Gonzalez and R. E. Woods, 3rd edition, Pearson Prentice Hall, 2008.

Frequency domain filtering

We can spatially filter an image by Fourier transforming it and applying a frequency filter:

IM[x, y] = (im ∗ h)[x, y]
F{IM}[u, v] = F{im}[u, v] × H[u, v]

where H[u, v] is the filter in the frequency domain.

Ideal low pass filter

From left to right: ideal low pass filter transfer function, filter displayed as an image, filter radial cross section.

H(u, v) = 1   if D(u, v) ≤ D0
H(u, v) = 0   if D(u, v) > D0

with D(u, v) the Euclidean distance from the spectrum center (N/2, N/2).
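A minimal NumPy sketch of ideal low pass filtering in the frequency domain (ideal_lowpass is our helper name; the image is assumed square and grayscale):

import numpy as np

def ideal_lowpass(im, D0):
    """Ideal low-pass filtering in the frequency domain."""
    N = im.shape[0]
    IM = np.fft.fftshift(np.fft.fft2(im))              # spectrum with DC at the center
    u, v = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    D = np.sqrt((u - N / 2) ** 2 + (v - N / 2) ** 2)   # distance to the spectrum center
    H = (D <= D0).astype(float)                         # 1 inside the cut-off radius, 0 outside
    out = np.fft.ifft2(np.fft.ifftshift(IM * H))
    return np.real(out)

im = np.random.rand(256, 256)
blurred = ideal_lowpass(im, D0=30)   # expect blurring plus ringing, as shown next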

Ideal low pass filter

Low pass filtering:

Ringing and blurring

Butterworth low pass filter

H(u, v) = 1 / ( 1 + (D(u, v) / D0)^{2n} )

Butterworth low pass filter

Top: spatial representation of the filter for different orders;
Bottom: intensity profiles through the center of the filters.

Butterworth low pass filtering:

Smooth transition in blurring, no ringing is present.

Gaussian low pass filter

H(u, v) = exp( −D²(u, v) / (2 D0²) )

with D0 = σ. Gaussian low pass filtering:

Smooth transition in blurring, no ringing is present.


Low pass filter

Comparison between the ideal, the Butterworth and the Gaussian low pass filters:
Ideal

Butterworth

Gaussian

High pass filtering in the frequency domain

HHP(u, v) = 1 − HLP(u, v)

High pass filtering in the frequency domain

Ideal high-pass filters enhance edges but suffer from ringing artefacts, just like the ideal LPF;
the two other filters give smoother results.

Frequency domain filtering

Example of two band-pass filters:

Laplacian in the frequency domain
Recall (see the previous lecture):

∇²im[k, l] = ∂²im/∂x² [k, l] + ∂²im/∂y² [k, l]

dⁿx(t)/dtⁿ  →F  (j2πf)ⁿ X(f)

From this, it follows that

∂²im/∂x² [k, l] + ∂²im/∂y² [k, l]  →F  −(u² + v²) IM[u, v]

The Laplacian filter is then implemented in the frequency domain by

H(u, v) = −(u² + v²)

Laplacian in the frequency domain

Finally, to compute the Laplacian, we need:

1 to compute the Fourier transform of the picture;
2 to multiply the spectrum by −(u² + v²);
3 to compute the inverse Fourier transform.
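These three steps translate directly into NumPy (laplacian_fft is our helper name; frequencies are taken relative to the centered spectrum, and the result is defined up to a constant scale factor since the slides omit the 4π² term):

import numpy as np

def laplacian_fft(im):
    """Laplacian filtering in the frequency domain, following the three steps above."""
    N, M = im.shape
    IM = np.fft.fftshift(np.fft.fft2(im))              # step 1: Fourier transform
    u, v = np.meshgrid(np.arange(N) - N / 2,
                       np.arange(M) - M / 2, indexing="ij")
    H = -(u ** 2 + v ** 2)                              # step 2: multiply by -(u^2 + v^2)
    lap = np.fft.ifft2(np.fft.ifftshift(IM * H))        # step 3: inverse transform
    return np.real(lap)

im = np.random.rand(256, 256)
lap = laplacian_fft(im)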

The cortex transform

The cortex transform was first described by A. Watson as a model of the neural response of retinal cells to visual stimuli.
The cortex filter in the frequency domain is

cortex_{bi,θi}(ρ, θ) = dom_{bi}(ρ) × fan_{θi}(θ)

where
bi and θi represent the frequency band and the index of orientation, respectively;
(ρ, θ) are polar coordinates.

The cortex transform decomposes the input image im[x, y] into a set of subband images B_{bi,θi}[k, l]:

B_{bi,θi}[k, l] = F⁻¹{ cortex_{bi,θi}(ρ, θ) × F{im[x, y]} }

The cortex transform

Frequency responses of several cortex filters (brightness represents gain for the given spatial frequency).

Complete set of cortex filters

Introduction

1 Introduction

2 Point-to-point transformation

3 Linear filtering (neighborhood operator)

4 Non Linear filtering
  Definition
  Rank filtering
  Homomorphic filtering
  Adaptive filtering
  Conditional mean
  Anisotropic Kuwahara filtering
  Bilateral filtering

5 Conclusion

Definition

The most important drawback of linear filtering is that all pixels in the image are modified by the filtering process.
To overcome this problem, non linear filtering is used. It aims, for instance, to protect some parts of the picture having particular features (edges...) or to remove data without blurring the whole image (impulse noise).

Three kinds of filters:

Rank filtering;
Adaptive linear filtering;
Morphological operators.

Rank filtering

Rank filters are based on three steps:

1 Data selection, also called windowing;
2 Data ranking (in ascending order);
3 1D linear weighting of the ordered data list.

Special case of a generalized rank filter

If all weights of the linear filter are null except the one in the median position, the filter is called a median filter.

IM[x, y] = MED( im[xi, yi] | [xi, yi] ∈ V[x, y] )

V[x, y] is the neighborhood (a set of N samples):
if the size of the neighborhood is odd, the output value is the median value;
if the size is even, the output value is the average of the two middle values.

The median filter is very efficient at filtering signals corrupted by impulsive noise, but it is not very efficient in a Gaussian noise environment.
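A naive NumPy sketch of the median filter (median_filter is our name; borders are replicated):

import numpy as np

def median_filter(im, N=1):
    """Median filter with a (2N+1) x (2N+1) square neighborhood."""
    padded = np.pad(im, N, mode="edge")
    rows, cols = im.shape
    # stack all shifted versions of the image, then take the median along the stack
    windows = [padded[k:k + rows, l:l + cols]
               for k in range(2 * N + 1) for l in range(2 * N + 1)]
    return np.median(np.stack(windows, axis=0), axis=0)

# Salt-and-pepper (impulsive) noise is exactly what the median filter handles well
im = np.full((64, 64), 128.0)
noisy = im.copy()
noisy[np.random.rand(64, 64) < 0.05] = 255.0
clean = median_filter(noisy, N=1)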
Special case of a generalized rank filter

However, when the number of samples is large, the ordering procedure becomes cumbersome.
Idea: take the median over the outputs of several FIR substructures, the number of substructures being much smaller than the number of data samples inside the filter window.

IM[x, y] = MED( y(1), . . . , y(m) )

where m is the number of linear FIR filters.

Homomorphic filtering

Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filtering techniques are applied, followed by mapping back to the original domain.
In many cases, we want to remove shading effects from an image. The objective is then:
to enhance high frequencies;
to attenuate low frequencies (but fine details have to be preserved).

Consider the following model of image formation:

im[x, y] = i(x, y) × r(x, y)

where i is the illumination component and r the reflection component.

The illumination component varies slowly and therefore mostly affects low frequencies;
The reflection component varies faster and therefore mostly affects high frequencies.

Homomorphic filtering

What is the solution to separate LF and HF?

im[x, y] = i[x, y] × r[x, y]

Fourier transform:

IM[u, v] = I[u, v] ∗ R[u, v]

Because of the convolution in the frequency domain, the LF and HF contents of i[x, y] and r[x, y] are mixed together...
Too complex to filter LF and HF in such conditions.

Homomorphic filtering

What is the solution to separate LF and HF? We can take the log!

im[x, y] = i[x, y] × r[x, y]
log(im[x, y]) = log(i[x, y] × r[x, y])
log(im[x, y]) = log(i[x, y]) + log(r[x, y])

1 Take the log and apply the Fourier transform to the new signal:

F( log(im[x, y]) ) = F( log(i[x, y]) ) + F( log(r[x, y]) )
Z[u, v] = Ilog[u, v] + Rlog[u, v]

2 Filter in the frequency domain with H[u, v]:

Z[u, v] × H[u, v] = Ilog[u, v] × H[u, v] + Rlog[u, v] × H[u, v]

3 Take the inverse Fourier transform and apply the exponential function:

F⁻¹( Z[u, v] × H[u, v] ) = F⁻¹( Ilog[u, v] × H[u, v] ) + F⁻¹( Rlog[u, v] × H[u, v] )
z[x, y] = ĩ[x, y] + r̃[x, y]
exp( z[x, y] ) = exp( ĩ[x, y] ) × exp( r̃[x, y] )
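A compact NumPy sketch of the three steps, using a Butterworth-like high-pass emphasis filter of the form given on the next slide (the function name homomorphic and the parameter values are ours; log1p/expm1 are used to avoid log(0)):

import numpy as np

def homomorphic(im, D0=30.0, gamma_l=0.5, gamma_h=2.0, n=1):
    """Homomorphic filtering following the three steps above."""
    rows, cols = im.shape
    z = np.log1p(im.astype(np.float64))                  # step 1: log (1 + im avoids log(0))
    Z = np.fft.fftshift(np.fft.fft2(z))
    u, v = np.meshgrid(np.arange(rows) - rows / 2,
                       np.arange(cols) - cols / 2, indexing="ij")
    D = np.sqrt(u ** 2 + v ** 2) + 1e-8
    H = gamma_l + gamma_h / (1.0 + (D0 / D) ** (2 * n))  # step 2: high-pass emphasis filter
    out = np.fft.ifft2(np.fft.ifftshift(Z * H))
    return np.expm1(np.real(out))                         # step 3: inverse transform + exp

im = np.random.rand(256, 256) * 255
corrected = homomorphic(im)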

Homomorphic filtering

H[u, v] = γL + γH / ( 1 + ( D0 / sqrt(u² + v²) )^{2n} )

with:
γL a parameter that affects low frequencies;
γH a parameter that affects high frequencies.

Adaptive filtering

An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm.
The goal is still to smooth the signal; however, we want to preserve edges...

Filtering by pixel grouping;
Conditional mean, bilateral filtering and mean shift filter;
Diffusion.

Conditional mean

IM[x, y] = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l) × im[x − k, y − l]

Principle: pixels in a neighbourhood are averaged only if they differ from the central pixel by less than a given threshold.

h(k, l) = 1   if |im[x − k, y − l] − im[x, y]| < TH
h(k, l) = 0   otherwise

(to obtain a mean, the weights are then normalized by their sum). Example with a neighbourhood of size (2 × 3 + 1) × (2 × 3 + 1), TH = 32:

Anisotropic Kuwahara filtering

Method proposed by Kuwahara and adapted by Nagao in 1980.

Example for a 5 × 5 window:

Select the sub-domain that has the minimum variance (9 windows for Nagao);
Replace the value of the central pixel by the average value of the sub-domain having the minimum variance.
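A minimal sketch of the classic four-window Kuwahara filter (this is the simpler four-quadrant variant, not Nagao's nine-window version described above; the function name kuwahara is ours):

import numpy as np

def kuwahara(im, N=2):
    """Classic Kuwahara filter: four overlapping (N+1) x (N+1) sub-windows per pixel."""
    padded = np.pad(im.astype(np.float64), N, mode="edge")
    out = np.empty_like(im, dtype=np.float64)
    rows, cols = im.shape
    for x in range(rows):
        for y in range(cols):
            cx, cy = x + N, y + N                                # center in padded coordinates
            quadrants = [padded[cx - N:cx + 1, cy - N:cy + 1],   # top-left
                         padded[cx - N:cx + 1, cy:cy + N + 1],   # top-right
                         padded[cx:cx + N + 1, cy - N:cy + 1],   # bottom-left
                         padded[cx:cx + N + 1, cy:cy + N + 1]]   # bottom-right
            variances = [q.var() for q in quadrants]
            # keep the mean of the sub-window with the minimum variance
            out[x, y] = quadrants[int(np.argmin(variances))].mean()
    return out

im = np.random.rand(64, 64)
filtered = kuwahara(im, N=2)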

Anisotropic Kuwahara filtering

Bilateral filtering

The idea is to use a weighted filtering but with an outlier rejection.
Pixels that are very different in intensity from the central pixel are weighted less, even though they may be in close proximity to the central pixel.
This is applied as two Gaussian filters on a localized pixel neighborhood:

IM[x, y] = ( Σ_{k,l} im[k, l] c_{[x,y]}[k, l] s_{[x,y]}[k, l] ) / ( Σ_{k,l} c_{[x,y]}[k, l] s_{[x,y]}[k, l] )

One in the spatial domain, named the domain filter:

c_{[x,y]}[k, l] = exp( −d([x, y], [k, l]) / (2σd) )

where d is the Euclidean distance.
One in the intensity space, named the range filter:

s_{[x,y]}[k, l] = exp( −φ(im[x, y], im[k, l]) / (2σr) )

where φ is a suitable measure of distance in intensity space.
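A naive NumPy sketch of the bilateral filter (bilateral is our name; squared distances are used inside the exponentials, the most common convention, whereas the slides write plain distances):

import numpy as np

def bilateral(im, N=3, sigma_d=2.0, sigma_r=0.1):
    """Naive bilateral filter on a grayscale image with values in [0, 1]."""
    rows, cols = im.shape
    padded = np.pad(im.astype(np.float64), N, mode="edge")
    k, l = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1), indexing="ij")
    c = np.exp(-(k ** 2 + l ** 2) / (2 * sigma_d ** 2))               # domain (spatial) weights
    out = np.zeros_like(im, dtype=np.float64)
    for x in range(rows):
        for y in range(cols):
            window = padded[x:x + 2 * N + 1, y:y + 2 * N + 1]
            s = np.exp(-((window - im[x, y]) ** 2) / (2 * sigma_r ** 2))  # range weights
            w = c * s
            out[x, y] = np.sum(window * w) / np.sum(w)
    return out

im = np.random.rand(64, 64)
smoothed = bilateral(im)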

Bilateral filtering

Bilateral filtering

Low contrast texture has been removed and edges are well preserved.

Conclusion

1 Introduction

2 Point-to-point transformation

3 Linear filtering (neighborhood operator)

4 Non Linear filtering

5 Conclusion

Conclusion

Image transformations: global to point; point to point; local to point;
Histogram;
Linear filtering;
Non linear filtering.

