Lec 10 Image Segmentation
Image Segmentation
Introduction
Subdivides an image into its constituent regions or objects.
Segmentation should stop when the objects of interest in
an application have been isolated.
Segmentation accuracy determines the eventual success
or failure of computerized analysis procedures.
Image segmentation algorithms generally are based on one
of two basic properties of intensity values:
Discontinuity
Partitioning an image based on abrupt changes in intensity.
Similarity
Partitioning an image into regions that are similar according to a
set of predefined criteria.
Detection of discontinuities
There are three basic types of gray-level discontinuities
Points, lines, and edges.
The most common way to look for discontinuities is to run a
mask through the image in the manner described previously
(sum of product)
R = w1z1 + w2z2 + … + w9z9 = Σ (i = 1..9) wi zi    (10.1-1)
Detection of discontinuities
Point detection
An isolated point will be quite different from its surroundings.
The mask measures the weighted difference between the center point and
its neighbors.
A point has been detected at the location on which the mask is
centered if
|R| ≥ T
where T is a nonnegative threshold.
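As a concrete sketch, the mask response of Eq. (10.1-1) and the threshold test |R| ≥ T can be combined into a small point detector. The mask below is the standard point-detection mask (8 at the center, −1 elsewhere); the test image and threshold are illustrative, not from the lecture.

```python
# Point detection: 3x3 mask response R (sum of products, Eq. 10.1-1)
# thresholded by |R| >= T. Mask: 8 at center, -1 elsewhere.

MASK = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]

def detect_points(image, T):
    """Return (row, col) positions where |R| >= T."""
    rows, cols = len(image), len(image[0])
    hits = []
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            R = sum(MASK[i][j] * image[y - 1 + i][x - 1 + j]
                    for i in range(3) for j in range(3))
            if abs(R) >= T:
                hits.append((y, x))
    return hits

# A flat region with one isolated bright point:
img = [[10, 10, 10, 10, 10],
       [10, 10, 10, 10, 10],
       [10, 10, 90, 10, 10],
       [10, 10, 10, 10, 10],
       [10, 10, 10, 10, 10]]
print(detect_points(img, T=300))  # only the isolated point responds
```

Only the isolated pixel produces a large |R|; at neighboring positions the bright pixel falls on a −1 coefficient, so the response stays small.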
Detection of discontinuities
Line detection
Let R1, R2, R3, and R4 denote the responses of the masks shown in the
following figure.
Suppose that the four masks are run individually through an
image.
If, at a certain point in the image, |Ri|>|Rj|, for all j≠i, that point is
said to be more likely associated with a line in the direction of
mask i.
If we are interested in detecting all the lines in an image in the
direction defined by a given mask, we simply run the mask
through the image and threshold the absolute value of the result.
The coefficients in each mask sum to zero.
Detection of discontinuities
We are interested in finding all the lines that are one pixel
thick and oriented at −45°.
Use the last mask shown in Fig. 10.3.
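The "largest |Ri| wins" rule above can be sketched as follows, using the four standard 3×3 line-detection masks. The test image, a one-pixel-thick horizontal line, is illustrative.

```python
# Line detection: run the four standard line masks at a pixel and
# pick the direction whose mask gives the largest |Ri|.

LINE_MASKS = {
    "horizontal": [[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]],
    "+45":        [[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]],
    "vertical":   [[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]],
    "-45":        [[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]],
}

def response(image, y, x, mask):
    """Mask response (sum of products) centered at (y, x)."""
    return sum(mask[i][j] * image[y - 1 + i][x - 1 + j]
               for i in range(3) for j in range(3))

def likely_direction(image, y, x):
    """Direction whose mask gives the largest |Ri| at (y, x)."""
    return max(LINE_MASKS,
               key=lambda d: abs(response(image, y, x, LINE_MASKS[d])))

# One-pixel-thick horizontal line on a dark background:
img = [[0, 0, 0],
       [9, 9, 9],
       [0, 0, 0]]
print(likely_direction(img, 1, 1))
```

Note that each mask's coefficients sum to zero, so the response is zero over constant-intensity regions.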
Detection of discontinuities
Edge detection
An edge is a set of connected pixels that lie on
the boundary between two regions.
The “thickness” of the edge is determined by
the length of the ramp.
Blurred edges tend to be thick and sharp edges tend
to be thin.
Detection of discontinuities
The magnitude of the first derivative can be used to detect the presence
of an edge.
The sign of the second derivative can be used to determine whether
an edge pixel lies on the dark or light side of an edge.
Edge detection (cont.)
Detection of discontinuities
The entire transition from black to white is a single edge.
Edge Linking and Boundary
Detection
Ideally, edge detection should yield pixels lying only on
edges.
In practice, this set of pixels seldom characterizes an
edge completely because of noise, breaks in the edge
from nonuniform illumination, and other effects that
introduce spurious intensity discontinuities.
Thus, edge detection algorithms are followed by linking
procedures to assemble edge pixels into meaningful
edges.
Local Processing
To analyze the characteristics of pixels in a small neighborhood
about every point (x,y) in an image that has been labeled an
edge point.
All points that are similar according to a set of predefined criteria
are linked.
Edge Linking and Boundary
Detection
The two principal properties used for
establishing similarity of edge pixels:
The strength of the response of the gradient
operator used to produce the edge pixel. As
defined in Eq.(10.1-4)
The direction of the gradient vector. As defined in
Eq. (10.1-5)
Edge Linking and Boundary
Detection
An edge pixel with coordinates (x0, y0) in a predefined neighborhood of
(x, y) is similar in magnitude to the pixel at (x, y) if
|∇f(x, y) − ∇f(x0, y0)| ≤ E
where E is a nonnegative threshold.
An edge pixel at (x0, y0) in the predefined neighborhood of (x, y) has an
angle similar to the pixel at (x, y) if
|α(x, y) − α(x0, y0)| < A
where A is a nonnegative angle threshold.
A point in the predefined neighborhood of (x, y) is linked to the pixel at
(x, y) if both magnitude and direction criteria are satisfied.
This process is repeated at every location in the image.
A record must be kept of linked points as the center of the
neighborhood is moved from pixel to pixel.
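The two similarity tests can be sketched as a single predicate; the magnitude and angle thresholds E and A in the usage lines are illustrative assumptions.

```python
import math

# Local edge-linking test: two edge pixels are linked when both the
# gradient-magnitude difference (threshold E) and the gradient-angle
# difference (threshold A) are small enough.

def similar(mag1, ang1, mag2, ang2, E=25.0, A=math.radians(15)):
    """True when both magnitude and direction criteria are met."""
    return abs(mag1 - mag2) <= E and abs(ang1 - ang2) <= A

print(similar(120.0, 0.10, 110.0, 0.15))  # True: both criteria met
print(similar(120.0, 0.10, 30.0, 0.15))   # False: magnitudes differ too much
```

In a full implementation this predicate would be evaluated between the center pixel and every edge pixel in its neighborhood, with linked points recorded as the neighborhood slides over the image.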
Edge Linking and Boundary
Detection
Example 10-6: the objective is to find rectangles whose size
makes them suitable candidates for license plates.
The formation of these rectangles can be accomplished by
detecting strong horizontal and vertical edges.
Link all points that have a gradient value greater than 25 and
whose gradient directions do not differ by more than 15°.
Thresholding
Select a threshold T that separates the objects from
the background.
Then any point (x, y) for which f(x, y) > T is called an object point;
otherwise, the point is called a background point.
Multilevel thresholding
Classifies a point (x, y) as belonging to one object class if
T1 < f(x, y) ≤ T2, to the other object class if f(x, y) > T2,
and to the background if f(x, y) ≤ T1.
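A minimal sketch of this multilevel rule; the threshold values T1 and T2 in the usage lines are illustrative assumptions.

```python
# Multilevel thresholding: two thresholds T1 < T2 partition the gray
# scale into background and two object classes.

def classify(f, T1, T2):
    """Classify a gray level f using two thresholds."""
    if f > T2:
        return "object class 2"
    if f > T1:            # T1 < f <= T2
        return "object class 1"
    return "background"   # f <= T1

print(classify(200, T1=80, T2=160))
print(classify(120, T1=80, T2=160))
print(classify(40,  T1=80, T2=160))
```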
Thresholding
In general, segmentation problems requiring multiple thresholds are
best solved using region growing methods.
The thresholding may be viewed as an operation that involves tests
against a function T of the form
T = T[x, y, p(x, y), f(x, y)]
where f(x,y) is the gray-level of point (x,y) and p(x,y) denotes some
local property of this point.
A thresholded image g(x, y) is defined as
g(x, y) = 1 if f(x, y) > T, and 0 if f(x, y) ≤ T
Thus, pixels labeled 1 correspond to objects, whereas pixels labeled
0 correspond to the background.
When T depends only on f(x,y) the threshold is called global. If T
depends on both f(x,y) and p(x,y), the threshold is called local.
If T depends on the spatial coordinates x and y, the threshold is
called dynamic or adaptive.
The role of illumination
An image f(x,y) is formed as the product of a
reflectance component r(x,y) and an illumination
component i(x,y).
f(x, y) = i(x, y) r(x, y)    (10.3-4)
Under ideal illumination, the reflective nature of objects
and background can be easily separated.
However, an image resulting from poor illumination
can be quite difficult to segment.
Taking the natural logarithm of Eq. (10.3-4):
z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)    (10.3-5)
Thresholding
Basic global thresholding
Select an initial estimate for T.
Segment the image using T. This produces two groups of pixels:
G1, consisting of all pixels with gray-level values > T, and
G2, consisting of pixels with values ≤ T.
Compute the average gray-level values μ1 and μ2 for the pixels in
G1 and G2, respectively.
Compute a new threshold value: T = (μ1 + μ2)/2.
Repeat the segmentation and averaging steps until the difference in T
between successive iterations is smaller than a predefined parameter T0.
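The iterative procedure can be sketched as follows, operating on a flat list of gray levels for simplicity; the sample data are illustrative, and both groups are assumed nonempty at every iteration.

```python
# Basic global thresholding: start from the mean gray level, split
# pixels into G1 (> T) and G2 (<= T), and update T to the midpoint of
# the two group means until T changes by less than T0.

def basic_global_threshold(pixels, T0=0.5):
    T = sum(pixels) / len(pixels)           # initial estimate: mean gray level
    while True:
        g1 = [p for p in pixels if p > T]   # candidate object pixels
        g2 = [p for p in pixels if p <= T]  # candidate background pixels
        m1 = sum(g1) / len(g1)              # assumes g1 is nonempty
        m2 = sum(g2) / len(g2)              # assumes g2 is nonempty
        T_new = 0.5 * (m1 + m2)
        if abs(T_new - T) < T0:             # converged
            return T_new
        T = T_new

# Two well-separated populations:
pixels = [10, 12, 11, 13, 200, 198, 205, 202]
print(basic_global_threshold(pixels))
```

On bimodal data like this the threshold settles in the valley between the two populations after very few iterations.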
Basic Global Thresholding
Fig. (a) is the original image; (b) is the image histogram, which
shows a clear valley.
The threshold converged after three iterations, starting with the
average gray level and T0 = 0.
The result obtained using T = 125 to segment the original image is
shown in Fig. (c).
Thresholding
Basic adaptive thresholding
Imaging factors such as uneven illumination can transform a
perfectly segmentable histogram into a histogram that cannot be
partitioned effectively by a single global threshold.
To divide the original image into subimages and then utilize a
different threshold to segment each subimage.
The key issues are
How to subdivide the image?
How to estimate the threshold for each resulting subimage?
Thresholding (cont.)
Optimal Global and Adaptive Thresholding
Estimate the threshold that produces the minimum average
segmentation error.
Suppose that an image contains only two principal gray-
level regions.
Let z denote gray-level values.
We can view these values as random quantities, and their
histogram may be considered an estimate of their
probability density function, p(z).
The overall density function is the sum or mixture of two
densities in the image.
If the form of the densities is known or assumed, it is
possible to determine an optimal threshold for segmenting
the image into two distinct regions.
Thresholding (cont.)
Optimal Global and Adaptive Thresholding
The mixture probability density function describing the overall
gray-level variation in the image is
p(z) = P1 p1(z) + P2 p2(z)    (10.3-5)
where p1(z) and p2(z) are the densities of the object and background
gray levels, respectively, and P1 and P2 are their prior probabilities.
Thresholding (cont.)
An image is segmented by classifying as background all pixels with
gray levels greater than a threshold T. All other pixels are called
object pixels.
The main objective is to select the value of T that minimizes the
average error in making the decisions that a given pixel belongs to
an object or to the background.
The probability of erroneously classifying a background point as an
object point is
E1(T) = ∫−∞..T p2(z) dz    (10.3-7)
The probability of erroneously classifying an object point as a
background point is
E2(T) = ∫T..∞ p1(z) dz    (10.3-8)
The overall probability of error is
E(T) = P2 E1(T) + P1 E2(T)    (10.3-9)
Thresholding (cont.)
To find the threshold value for which this error is minimal,
differentiate E(T) with respect to T and set the result equal to 0.
The result is
P1 p1(T) = P2 p2(T)    (10.3-10)
This equation is solved for T to find the optimum threshold. If
P1=P2, the optimum threshold is where the curves for p1(z) and
p2(z) intersect.
Obtaining an analytical expression for T requires that we know the
equations for the two PDFs.
Estimating these densities in practice is not always feasible. One of
the principal densities used is the Gaussian density, which is
completely characterized by two parameters: the mean and the
variance. Approximating p(z) by a mixture of Gaussian densities gives
p(z) = [P1 / (√(2π) σ1)] e^(−(z − μ1)² / 2σ1²) + [P2 / (√(2π) σ2)] e^(−(z − μ2)² / 2σ2²)    (10.3-11)
where μ1 and σ1² are the mean and variance of the Gaussian density of
one class of pixels, and μ2 and σ2² are those of the other class.
Thresholding (cont.)
Using this equation in the general solution of Eq. (10.3-10) results
in the following quadratic equation for the threshold T:
AT² + BT + C = 0    (10.3-12)
where
A = σ1² − σ2²
B = 2(μ1σ2² − μ2σ1²)    (10.3-13)
C = σ1²μ2² − σ2²μ1² + 2σ1²σ2² ln(σ2P1 / σ1P2)
Since a quadratic equation has two possible solutions, two
threshold values may be required to obtain the optimal solution.
If the variances are equal (σ² = σ1² = σ2²), a single threshold is
sufficient:
T = (μ1 + μ2)/2 + [σ² / (μ1 − μ2)] ln(P2/P1)    (10.3-14)
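A small sketch of solving Eqs. (10.3-12)–(10.3-14) for the optimal threshold; the Gaussian parameters in the usage line are illustrative values, not data from the lecture.

```python
import math

# Optimal threshold for a two-Gaussian mixture: solve the quadratic
# AT^2 + BT + C = 0 (Eq. 10.3-12/13), or use the single-threshold
# formula (Eq. 10.3-14) when the variances are equal.

def optimal_thresholds(m1, s1, m2, s2, P1, P2):
    A = s1**2 - s2**2
    B = 2 * (m1 * s2**2 - m2 * s1**2)
    C = (s1**2 * m2**2 - s2**2 * m1**2
         + 2 * s1**2 * s2**2 * math.log(s2 * P1 / (s1 * P2)))
    if A == 0:  # equal variances: single threshold, Eq. (10.3-14)
        return [(m1 + m2) / 2 + s1**2 / (m1 - m2) * math.log(P2 / P1)]
    disc = math.sqrt(B**2 - 4 * A * C)  # assumes real roots
    return sorted([(-B + disc) / (2 * A), (-B - disc) / (2 * A)])

# Equal variances and equal priors: T lies midway between the means.
print(optimal_thresholds(m1=50, s1=10, m2=150, s2=10, P1=0.5, P2=0.5))
```

With equal variances and P1 = P2 the logarithm vanishes and the threshold reduces to the midpoint of the two means, matching the remark that the optimum then sits where the two density curves intersect.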
Thresholding (cont.)
Example 10.13
The general problem is to outline automatically the boundaries of
heart ventricles in cardioangiograms.
Preprocessing:
Each pixel was mapped with a log function to counter exponential
effects caused by radioactive absorption.
An image obtained before application of the contrast medium was
subtracted from each image captured after the medium was injected
in order to remove the spinal column present in both images.
Several angiograms were summed to reduce random noise.
Thresholding (cont.)
Each preprocessed image was subdivided into 49 regions by placing a 7x7
grid with 50% overlap over each image.
Each of the 49 resulting overlapped regions contained 64x64 pixels.
The histogram for region A is bimodal, indicating the presence of a boundary.
The histogram for region B is unimodal, indicating the absence of two
markedly distinct regions.
After all 49 histograms were computed, a test of bimodality was performed to
reject the unimodal histograms.
The remaining histograms were then fitted by bimodal Gaussian density
curves [see Eq. (10.3-11)] using a conjugate gradient hill-climbing
method to minimize the error function given in Eq. (10.3-15).
Thresholding (cont.)
The x’s and o’s in Fig. 10.34 (a) are two fits to the histogram shown in
black dots.
The optimal thresholds were then obtained by using Eqs. (10.3-12)
and (10.3-13).
For non-bimodal regions, the thresholds were obtained by interpolating
these thresholds.
Then a second interpolation was carried out point by point using
neighboring threshold values so that, at the end of the procedure, every
point in the image had been assigned a threshold.
Finally, a binary decision was carried out for each pixel using the rule
g(x, y) = 1 if f(x, y) ≥ Txy, and 0 otherwise
where Txy is the threshold assigned to location (x, y).
Thresholding
Use of boundary characteristics for histogram improvement and
local thresholding
The chances of selecting a “good” threshold are enhanced
considerably if the histogram peaks are tall, narrow, symmetric,
and separated by deep valleys.
One approach for improving the shape of histograms is to consider
only those pixels that lie on or near the edges between objects and
the background.
If only the pixels on or near the edge between object and the
background were used, the resulting histogram would have peaks
of approximately the same height.
In practice the valleys of histograms formed from the pixels
selected by a gradient/Laplacian criterion can be expected to be
sparsely populated.
s(x, y) = 0 if ∇f < T (non-edge point)
s(x, y) = + if ∇f ≥ T and ∇²f ≥ 0 (edge point on the dark side of the edge)
s(x, y) = − if ∇f ≥ T and ∇²f < 0 (edge point on the light side of the edge)
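The three-level labeling of Eq. (10.3-16) can be sketched as a function of precomputed gradient-magnitude and Laplacian values at a pixel; the threshold and sample values below are illustrative.

```python
# Three-level labeling (Eq. 10.3-16): label a pixel from its gradient
# magnitude (grad), its Laplacian (lap), and a gradient threshold T.

def label(grad, lap, T):
    if grad < T:
        return "0"                    # non-edge point
    return "+" if lap >= 0 else "-"   # dark / light side of the edge

print(label(grad=5,  lap=3,  T=10))  # "0": below gradient threshold
print(label(grad=20, lap=3,  T=10))  # "+": dark side of the edge
print(label(grad=20, lap=-3, T=10))  # "-": light side of the edge
```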
Thresholding (cont.)
Figure 10.36 shows the labeling produced by Eq. (10.3-16) for an
image of a dark, underlined stroke written on a light background.
The information obtained with this procedure can be used to
generate a segmented, binary image in which 1’s correspond to
object of interest and 0’s correspond to the background.
The transition from a light background to a dark object must be
characterized by the occurrence of a – followed by a + in s(x,y).
The interior of the object is composed of pixels that are labeled
either 0 or +.
A horizontal or vertical scan line containing a section of an object
has the following structure:
(…)(−, +)(0 or +)(+, −)(…)
Thresholding (cont.)
Example 10.14
A monochrome picture of a color photograph of an ordinary scenic
bank check.
The histogram, shown as a function of gradient values for pixels with
gradients greater than 5, has two dominant modes that are symmetric,
nearly of the same height, and separated by a distinct valley.
Let R represent the entire image region, partitioned into n subregions
R1, R2, …, Rn such that:
(a) the union of all Ri equals R
(b) Ri is a connected region, for i = 1, 2, …, n
(c) Ri ∩ Rj = ∅ for all i ≠ j
(d) P(Ri) = TRUE for i = 1, 2, …, n
(e) P(Ri ∪ Rj) = FALSE for adjacent regions Ri and Rj
where P(Ri) is a logical predicate that must be satisfied by the pixels
of each region.
Region-Based Segmentation
Region Growing
Group pixels or subregions into regions based on predefined
criteria.
Start with a set of “seed” points and from these grow regions
by appending to each seed those neighboring pixels that have
properties similar to the seed.
Problems with region growing:
Selecting a set of seed points.
Selecting suitable similarity criteria and formulating a stopping rule.
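A minimal region-growing sketch under an assumed similarity criterion: a 4-connected neighbor joins the region when its gray level is within a tolerance of the pixel it is grown from. The seed, tolerance, and image are illustrative.

```python
from collections import deque

# Region growing: starting from seed points, repeatedly append
# 4-connected neighbors whose gray level differs from the pixel
# being grown from by at most `tol`.

def grow(image, seeds, tol):
    rows, cols = len(image), len(image[0])
    region = set(seeds)
    queue = deque(seeds)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < rows and 0 <= nx < cols
                    and (ny, nx) not in region
                    and abs(image[ny][nx] - image[y][x]) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [[10, 10, 80],
       [10, 80, 80],
       [10, 10, 10]]
print(sorted(grow(img, seeds=[(0, 2)], tol=5)))  # the bright blob
```

Comparing against the neighboring region pixel (rather than the original seed value) lets the region follow gradual intensity drifts, which is one of several reasonable similarity-criterion choices.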
Region-Based Segmentation
Example 10-16
To use region growing to segment the regions of the weld
failures.
In an X-ray image of a weld, the pixels of defective welds tend to
have values of 255. We selected as starting points all pixels having
values of 255.
Region-Based Segmentation
Region Splitting and Merging
Subdivide an image initially into a set of arbitrary, disjoint
regions, and then merge and/or split the regions in an attempt to
satisfy the conditions of segmentation.
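The splitting half of the procedure can be sketched as a quadtree recursion. The predicate used here (gray-level range within a tolerance) is an illustrative assumption, and the merging of adjacent leaves is omitted for brevity.

```python
# Quadtree splitting: recursively split any square block where the
# predicate fails; here P(R) is TRUE when the block's gray-level
# range is at most `tol`.

def split(image, y, x, size, tol, leaves):
    block = [image[r][x:x + size] for r in range(y, y + size)]
    vals = [v for row in block for v in row]
    if size == 1 or max(vals) - min(vals) <= tol:   # predicate TRUE
        leaves.append((y, x, size))                 # keep as a leaf region
        return
    h = size // 2
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)): # four quadrants
        split(image, y + dy, x + dx, h, tol, leaves)

img = [[10, 10, 90, 90],
       [10, 10, 90, 90],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
leaves = []
split(img, 0, 0, 4, tol=5, leaves=leaves)
print(leaves)  # four uniform quadrants
```

A full split-and-merge pass would follow this with a merge step that joins adjacent leaves whose union still satisfies the predicate.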
The Use of Motion in Segmentation
(cont.)
Accumulative differences
Consider a sequence of image frames f(x, y, t1), f(x, y, t2), …, f(x,
y, tn).
Let f(x, y, t1) be the reference image.
An accumulative difference image (ADI) is formed by comparing
this reference image with every subsequent image in the
sequence.
A counter for each pixel location in the accumulative image is
incremented every time a difference occurs at that pixel location
between the reference and an image in the sequence.
When the kth frame is being compared with the reference, the
entry in a given pixel of the accumulative image gives the
number of times the gray level at that position was different from
the corresponding pixel value in the reference image.
Three types of accumulative difference images (ADI)
Absolute, Positive, and Negative ADI
Assuming that the gray-level values of the moving objects are larger
than the background, these three types of ADIs are defined as follows.
The Use of Motion in Segmentation
(cont.)
Let R(x, y) denote the reference image and f(x, y, k) the kth frame in
the sequence. With all counters initialized to zero, the ADIs are
updated for each frame as follows:
Absolute: Ak(x, y) = Ak−1(x, y) + 1 if |R(x, y) − f(x, y, k)| > T,
and Ak−1(x, y) otherwise
Positive: Pk(x, y) = Pk−1(x, y) + 1 if R(x, y) − f(x, y, k) > T,
and Pk−1(x, y) otherwise
Negative: Nk(x, y) = Nk−1(x, y) + 1 if R(x, y) − f(x, y, k) < −T,
and Nk−1(x, y) otherwise
where T is a specified difference threshold.
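The three ADI update rules can be sketched over a sequence of 1-D "frames" for brevity; the threshold and frames are illustrative, and the moving object here is darker than the reference, so its differences are positive.

```python
# Accumulative difference images: count, per pixel, how often the
# difference between the reference R and each frame exceeds the
# threshold T (absolute, positive, and negative variants).

def update_adis(R, frames, T):
    n = len(R)
    A = [0] * n  # absolute ADI
    P = [0] * n  # positive ADI
    N = [0] * n  # negative ADI
    for f in frames:
        for i in range(n):
            d = R[i] - f[i]
            if abs(d) > T:
                A[i] += 1
            if d > T:
                P[i] += 1
            if d < -T:
                N[i] += 1
    return A, P, N

R = [100, 100, 100]                        # reference frame
frames = [[100, 40, 100], [100, 100, 40]]  # a dark object moving right
print(update_adis(R, frames, T=20))
```

The counters grow only where the object has passed, so nonzero ADI entries trace out the object's positions relative to the reference.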
The Use of Motion in Segmentation
(cont.)
Example 10.19
Image size: 256x256, object szie:75x50, moving in a southeasterly direction
at a speed of 5 2 pixels per frame.
The figures show the resulting positive, absolute, and negative ADIs.
The Use of Motion in Segmentation
Establishing a reference image
The difference between two images in a dynamic imaging problem has
the tendency to cancel all stationary components, leaving only image
elements that correspond to noise and to the moving objects.
In practice, obtaining a reference image with only stationary elements is
not always possible, and building a reference from a set of images
containing one or more moving objects becomes necessary.
One procedure for generating a reference image is as follows
Consider the first image in a sequence to be the reference image.
When a nonstationary component has moved completely out of its position in
the reference frame, the corresponding background in the present frame can
be duplicated in the location originally occupied by the object in the reference
frame.
When all moving objects have moved completely out of their original positions,
a reference image containing only stationary components will have been
created.
Object displacement can be established by monitoring the changes in the
positive ADI.