Computer Vision
T3: Segmentation and Feature Extraction
Computer Vision Stages
Scene → Image acquisition → Digital image processing → Segmentation → Descriptors and feature extraction → Recognition and interpretation → Detection, classification, recognition (supported by object models)
Introduction: Examples of Feature Extraction
[Figure: original image → segmentation → feature extraction]
Index
- Contour detection
- Region based segmentation
- Connectivity analysis and labeling
- Basic edge and region feature extraction
Contour Detection: High Pass Filter
High-pass filter and the Laplacian:

\[ \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \]

Over the 4-connected neighbourhood the Laplacian is expressed as follows,

\[ \nabla^2 f = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4\,f(x,y) \]

We can also perform this operation by convolving with one of the standard Laplacian masks.
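As a sketch, the discrete Laplacian above maps directly onto NumPy array slicing; `laplacian` is a hypothetical helper name, and border pixels are simply left at zero:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)
    - 4 f(x,y), computed on interior pixels (borders stay 0)."""
    f = img.astype(float)
    out = np.zeros_like(f)
    out[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1]
                       + f[1:-1, 2:] + f[1:-1, :-2]
                       - 4.0 * f[1:-1, 1:-1])
    return out
```

Convolving with the mask [[0,1,0],[1,-4,1],[0,1,0]] would give the same result on the interior.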
Contour Detection: Gradient Technique
Gradient technique:

[Figure: pixel neighbourhood f(x,y), f(x+1,y), f(x,y+1), f(x+1,y+1)]

The gradient of an image is defined as:

\[ \nabla F = \begin{pmatrix} G_x \\ G_y \end{pmatrix} = \begin{pmatrix} \partial f / \partial x \\ \partial f / \partial y \end{pmatrix} \approx \begin{pmatrix} f(x+1,y) - f(x,y) \\ f(x,y+1) - f(x,y) \end{pmatrix} \]

where the magnitude is:

\[ \nabla f = \mathrm{mag}(\nabla F) = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y| \]
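The forward differences above can be sketched the same way (`gradient_edges` is a hypothetical name; the first array axis plays the role of x):

```python
import numpy as np

def gradient_edges(img):
    """Forward-difference gradient: Gx = f(x+1,y) - f(x,y) along the
    first axis, Gy = f(x,y+1) - f(x,y) along the second axis."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:-1, :] = img[1:, :] - img[:-1, :]
    gy[:, :-1] = img[:, 1:] - img[:, :-1]
    mag = np.sqrt(gx ** 2 + gy ** 2)   # often approximated by |Gx| + |Gy|
    return gx, gy, mag
```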
Region Based Segmentation: K-means
Given an initial set of k means \(m_1^{(1)}, \ldots, m_k^{(1)}\) (see below), the algorithm proceeds by alternating between two steps:

Step 1 (Assignment step): assign each observation to the cluster with the closest mean,

\[ S_i^{(t)} = \left\{ x_p : \left\| x_p - m_i^{(t)} \right\| \le \left\| x_p - m_j^{(t)} \right\|, \; 1 \le j \le k \right\} \]

where each \(x_p\) goes into exactly one \(S_i^{(t)}\), even if it could go into two of them.

Step 2 (Update step): calculate the new means as the centroids of the observations in each cluster,

\[ m_i^{(t+1)} = \frac{1}{\left| S_i^{(t)} \right|} \sum_{x_j \in S_i^{(t)}} x_j \]

The algorithm is deemed to have converged when the assignments no longer change.
1) k initial "means" (in this case k = 3) are randomly generated within the data domain (shown in color).
2) k clusters are created by associating every observation with the nearest mean. The partitions here represent the Voronoi diagram generated by the means.
3) The centroid of each of the k clusters becomes the new mean.
4) Steps 2 and 3 are repeated until convergence has been reached.
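The two alternating steps can be sketched as follows, assuming NumPy and initializing the means from randomly chosen data points (one common choice; the slide only requires means inside the data domain):

```python
import numpy as np

def kmeans(X, k, seed=0, max_iter=100):
    """Plain k-means sketch: alternate the assignment and update steps
    until the cluster assignments no longer change."""
    rng = np.random.default_rng(seed)
    # k initial means drawn from the data points themselves
    means = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # Assignment step: each observation goes to the closest mean.
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)   # ties resolve to the lowest index
        if labels is not None and np.array_equal(new_labels, labels):
            break                           # converged: assignments unchanged
        labels = new_labels
        # Update step: each mean becomes the centroid of its cluster.
        for i in range(k):
            if np.any(labels == i):
                means[i] = X[labels == i].mean(axis=0)
    return means, labels
```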
Region Based Segmentation: Mean-shift
Mean-shift is a procedure for locating the maxima of a density function given discrete data sampled from that function. It is useful for detecting the modes of this density.
This is an iterative method, and we start with an initial estimate x. Let a kernel function \(K(x_i - x)\) be given. This function determines the weight of nearby points for re-estimation of the mean. Typically we use the Gaussian kernel on the distance to the current estimate,

\[ K(x_i - x) = e^{-c \left\| x_i - x \right\|^2} \]

The weighted mean of the density in the window determined by K is

\[ m(x) = \frac{\sum_{x_i \in N(x)} K(x_i - x)\, x_i}{\sum_{x_i \in N(x)} K(x_i - x)} \]

where \(N(x)\) is the neighbourhood of x, the set of points for which \(K(x_i - x) \neq 0\). The mean-shift algorithm now sets \(x \leftarrow m(x)\), and repeats the estimation until \(m(x)\) converges.
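A minimal sketch of one mean-shift trajectory, assuming NumPy; `c` is the bandwidth parameter of the Gaussian kernel above, and the neighbourhood N(x) is taken to be all points, since the Gaussian never reaches exactly zero:

```python
import numpy as np

def mean_shift(x, X, c=1.0, tol=1e-6, max_iter=500):
    """Mean-shift with a Gaussian kernel K(xi - x) = exp(-c ||xi - x||^2):
    repeatedly replace x by the weighted mean m(x) until it stops
    moving, ending at a mode of the density."""
    x = np.asarray(x, dtype=float)
    X = np.asarray(X, dtype=float)
    for _ in range(max_iter):
        w = np.exp(-c * np.sum((X - x) ** 2, axis=1))   # kernel weights
        m = (w[:, None] * X).sum(axis=0) / w.sum()      # weighted mean m(x)
        if np.linalg.norm(m - x) < tol:
            return m
        x = m
    return x
```

Starting the iteration from every data point and grouping the points that converge to the same mode gives the segmentation.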
Region Based Segmentation: Mean-shift
Mean-shift: Examples
Connectivity Analysis and Labeling
Connectivity analysis and labeling: once an image is binarized, we have to find the image regions and give a label to each region. The method follows a two-stage procedure. In the first stage, all the pixels are labeled scanning from top to bottom and from left to right. In the second stage, the mislabeled pixels are corrected. Example:

[Figure: binarized image and its contour]
0 1 0 0 0 0 0 0
0 1 0 0 1 0 0 0
0 1 0 0 1 0 0 0
0 1 0 0 1 0 1 0
0 0 1 1 1 0 1 0
0 0 0 0 0 1 1 1
0 0 1 1 0 0 0 0
0 0 0 1 0 0 0 0
↓
0 4 0 0 0 0 0 0
0 1 0 0 1 0 0 0
0 1 0 0 1 0 0 0
0 1 0 0 1 0 1 0
0 0 1 1 1 0 1 0
0 0 0 0 0 1 1 1
0 0 1 1 0 0 0 0
0 0 0 1 0 0 0 0
↓
0 4 0 0 0 0 0 0
0 4 0 0 5 0 0 0
0 4 0 0 5 0 0 0
0 4 0 0 5 0 6 0
0 0 1 1 1 0 1 0
0 0 0 0 0 1 1 1
0 0 1 1 0 0 0 0
0 0 0 1 0 0 0 0
↓
0 4 0 0 0 0 0 0
0 4 0 0 5 0 0 0
0 4 0 0 5 0 0 0
0 4 0 0 5 0 6 0
0 0 4 4 ? 0 6 0
0 0 0 0 0 6 6 6
0 0 7 7 0 0 0 0
0 0 0 7 0 0 0 0
↓
0 4 0 0 0 0 0 0
0 4 0 0 4 0 0 0
0 4 0 0 4 0 0 0
0 4 0 0 4 0 4 0
0 0 4 4 4 0 4 0
0 0 0 0 0 4 4 4
0 0 7 7 0 0 0 0
0 0 0 7 0 0 0 0
Connectivity Analysis and Labeling
Example:
The connectivity result is two regions, one with label 4 and the other with label 7. (The "?" above marks the pixel where provisional labels 4 and 5 collide; the second stage merges the equivalent labels.)
The region with label 4 has the following pixels: (0,1), (1,1), (2,1), (3,1), (4,2), (4,3), (4,4), (3,4), (2,4), (1,4), (5,5), (5,6), (5,7), (4,6), (3,6).
The region with label 7 has the following pixels: (6,2), (6,3), (7,3).
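The two-stage procedure can be sketched with a small union-find over provisional labels, a standard way to record the equivalences that the second pass resolves (`label_regions` is a hypothetical name):

```python
import numpy as np

def label_regions(img):
    """Two-pass connected-component labeling sketch (8-connectivity).
    First pass: scan top-to-bottom, left-to-right, give provisional
    labels and record equivalences. Second pass: fix mislabeled pixels
    by replacing every label with its representative."""
    rows, cols = img.shape
    labels = np.zeros((rows, cols), dtype=int)
    parent = {}                                   # label equivalence forest

    def find(a):                                  # representative of a label
        while parent[a] != a:
            a = parent[a]
        return a

    nxt = 1
    for r in range(rows):
        for c in range(cols):
            if not img[r, c]:
                continue
            # labels of the already-scanned 8-neighbours
            neigh = [labels[r + dr, c + dc]
                     for dr, dc in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                     if 0 <= r + dr and 0 <= c + dc < cols
                     and labels[r + dr, c + dc] > 0]
            if not neigh:
                labels[r, c] = nxt                # new provisional label
                parent[nxt] = nxt
                nxt += 1
            else:
                m = min(neigh)
                labels[r, c] = m
                for n in neigh:                   # merge equivalent labels
                    ra, rb = find(n), find(m)
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    for r in range(rows):                         # second pass
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```

Run on the binarized image above, this yields exactly two distinct labels, matching the example.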
Contour Extraction
Contour extraction: once an image is binarized, we can extract the pixels of the object contour and obtain its perimeter.
Algorithm:
Step 1: Scan the image looking for the first 1. Put pixel c = (x_c, y_c) at the position of that first pixel with value 1. Put pixel d at the position d = (x_c, y_c − 1).

[Figure: scan order and the first 1 pixel]
Step 2: Change the value of c to c = 3 and the value of d to d = 2.
Step 3: With center on c and starting from d, turn clockwise and assign the label e_k to the first pixel whose value is 1, 4 or 3. Then:
If c = 3, e_k = 4, and e_h = 2 for some h < k, then change the value 3 to 4 and 2 to 0 and STOP (the algorithm has arrived back at the first contour pixel).
Otherwise, change c to c = 4 (if its value was c = 1). Then take e_k as the new c (c = e_k) and e_{k−1} as the new d (d = e_{k−1}). Finally, return to Step 3.
Contour Extraction
Example:
0 0 0 0 0 0
0 1 1 0 0 0
0 1 1 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 2 0 0 0 0
0 3 1 0 0 0
0 1 1 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 2 0 0 0 0
0 3 4 0 0 0
0 1 1 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 2 0 0 0 0
0 3 4 0 0 0
0 1 4 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 2 0 0 0 0
0 3 4 0 0 0
0 1 4 0 0 0
0 0 0 4 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 2 0 0 0 0
0 3 4 0 0 0
0 1 4 0 0 0
0 0 0 4 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 2 0 0 0 0
0 3 4 0 0 0
0 4 4 0 0 0
0 0 0 4 0 0
0 0 0 0 0 0
0 0 0 0 0 0
↓
0 0 0 0 0 0
0 4 4 0 0 0
0 4 4 0 0 0
0 0 0 4 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Final contour (x, y) coordinates: (1,1), (1,2), (2,2), (3,3), (2,2), (2,1)
Perimeter: 6
(The pixel (2,2) appears twice because the trace passes through it again on the way back from (3,3).)
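The trace above can be reproduced with a border-following sketch that keeps the visited path in a list instead of relabeling pixels to 2/3/4 (a simplification of the slides' algorithm; the stopping test here, returning to the start pixel, is weaker than the 3/4/2 condition but suffices for this example):

```python
import numpy as np

def trace_contour(img):
    """Clockwise border following on a binary image (row, col coords)."""
    # 8-neighbour offsets in clockwise order starting from "up"
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    rows, cols = img.shape
    start = None
    for r in range(rows):              # Step 1: first 1 in raster order
        for c in range(cols):
            if img[r, c]:
                start = (r, c)
                break
        if start is not None:
            break
    if start is None:
        return []
    contour = [start]
    cur = start
    prev = (start[0] - 1, start[1])    # d = (x_c, y_c - 1): pixel "above" c
    while True:
        i = offs.index((prev[0] - cur[0], prev[1] - cur[1]))
        found = None
        for k in range(1, 9):          # turn clockwise starting from d
            j = (i + k) % 8
            nr, nc = cur[0] + offs[j][0], cur[1] + offs[j][1]
            if 0 <= nr < rows and 0 <= nc < cols and img[nr, nc]:
                found = (nr, nc)
                prev = (cur[0] + offs[(j - 1) % 8][0],
                        cur[1] + offs[(j - 1) % 8][1])
                break
        if found is None:              # isolated pixel: no neighbours
            break
        cur = found
        if cur == start:               # simplified stop: back at the start
            break
        contour.append(cur)
    return contour
```

On the 6×6 example image this returns the six coordinates listed above, so the perimeter is the length of the returned list.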
Basic Feature Extraction
Basic features:
- Rectangle which includes the labeled pixels: diagonal coordinates
- Rectangle area of the region: total number of pixels of the rectangle
- Region area: total number of pixels of the region
- Perimeter: number of pixels from the initial pixel to the final pixel
- Compactness: C = P^2 / A (where P is the perimeter and A is the area)
- Eccentricity: e = ((M_20 − M_02)^2 + 4 M_11^2) / A (where M_ij is the second-order moment ij)
- Euler number: E = (number of connected regions) − (number of holes)
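A sketch of some of these features for a single region given as a boolean mask, assuming NumPy; note that the perimeter here counts region pixels with at least one background 4-neighbour, a common approximation of the traced-path perimeter:

```python
import numpy as np

def basic_features(mask):
    """Bounding rectangle, areas, and compactness C = P^2 / A for one
    labeled region. Perimeter is approximated as the number of region
    pixels touching the background (4-connectivity)."""
    mask = mask.astype(bool)
    xs, ys = np.nonzero(mask)
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    rect_area = int((x1 - x0 + 1) * (y1 - y0 + 1))
    area = int(mask.sum())
    p = np.pad(mask, 1)          # pad so image-border pixels count as boundary
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int((mask & ~interior).sum())
    compactness = perimeter ** 2 / area
    return (int(x0), int(y0), int(x1), int(y1)), rect_area, area, perimeter, compactness
```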
Basic Feature Extraction: Second Order Moments
Features extracted from the second-order moments of an image.

Area:

\[ M_{00} = \sum_{x=0}^{N} \sum_{y=0}^{N} f(x,y) \]

Coordinates of the geometric center:

\[ M_{01} = \frac{\sum_{x=0}^{N} \sum_{y=0}^{N} f(x,y)\, x}{M_{00}}, \qquad M_{10} = \frac{\sum_{x=0}^{N} \sum_{y=0}^{N} f(x,y)\, y}{M_{00}} \]

Magnitude and orientation:

\[ G = \frac{1}{2} \left( M_{02} + M_{20} \pm \sqrt{(M_{20} - M_{02})^2 + 4 M_{11}^2} \right) \]

\[ \theta = \frac{1}{2} \tan^{-1} \left( \frac{2 M_{11}}{M_{20} - M_{02}} \right) \]

with

\[ M_{02} = \frac{\sum_{x=0}^{N} \sum_{y=0}^{N} f(x,y)\, y^2}{M_{00}} - M_{10}^2, \qquad M_{20} = \frac{\sum_{x=0}^{N} \sum_{y=0}^{N} f(x,y)\, x^2}{M_{00}} - M_{01}^2 \]

\[ M_{11} = \frac{\sum_{x=0}^{N} \sum_{y=0}^{N} f(x,y)\, x\, y}{M_{00}} - M_{10} M_{01} \]
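These formulas translate directly to NumPy; `moment_features` is a hypothetical name, the first array axis plays the role of x, and arctan2 is used instead of a bare arctan so the orientation quadrant is handled:

```python
import numpy as np

def moment_features(img):
    """Second-order moment features as defined above: area M00,
    geometric center (M01, M10), central second moments, the larger
    magnitude G (the '+' branch), and the orientation theta."""
    f = img.astype(float)
    x, y = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    M00 = f.sum()                                  # area
    M01 = (f * x).sum() / M00                      # geometric center, x
    M10 = (f * y).sum() / M00                      # geometric center, y
    M20 = (f * x ** 2).sum() / M00 - M01 ** 2      # central second moments
    M02 = (f * y ** 2).sum() / M00 - M10 ** 2
    M11 = (f * x * y).sum() / M00 - M10 * M01
    G = 0.5 * (M02 + M20 + np.sqrt((M20 - M02) ** 2 + 4 * M11 ** 2))
    theta = 0.5 * np.arctan2(2 * M11, M20 - M02)
    return M00, (M01, M10), (M20, M02, M11), G, theta
```

For a short horizontal segment of pixels the orientation comes out as 90° from the first axis, i.e. the major axis lies along the segment.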