Weighting-Adjacent-Region Segmentation and Application to Image Vectorisation
Master’s Thesis
of
Xu Xiang Dong
January 2007
The use of vector drawing has long been practiced in the field of
active topic for both engineering drawing and colour images. However, the
need to identify regions of interest while vectorising the image is also
Image Segmentation.
descriptions of the objects, and often also of the relations among them, must
structures.
results demonstrate that this technique performs better and shows improved
1. Introduction 1
2. Background Studies 4
• Region Growing 8
• Data Clustering 10
Image Segmentation 17
3. Methodology 21
based Segmentation 53
BitmapTrace 55
5. Conclusion 57
6. Bibliography 59
7. Appendices 64
List of Tables
List of Figures
List of Algorithms
both engineering design and graphics design. Unlike a raster image, or bitmap,
a vector image uses geometrical objects such as lines, curves, and polygons
with editable attributes such as colour, fill, and outline to represent an image.
It is worth noting that choosing vector images over raster images has
raster images rather than vector images; therefore, the conversion between
raster images and vector images is very significant. This conversion is believed
perspective.
vector image, has been a research topic for both engineering drawing (most
of the drawings cover binary or grey-scale images) and colour images. This can
be seen from the amount of research work devoted to devising algorithms and
software packages for solving the problem of Vectorisation, such as Dori and Liu [1],
Nieuwenhuizen et al. [2], Jimenez and Navalon [3], Tombre and Tabbone [4],
Ju and Hong [5], and Valiente et al. [6]. The works from [1, 2, 3, 4] are
mainly concerned with binary or grey-scale images. On the other hand, [5, 6] focus on artificial colour images
1. Introduction
involves Image Segmentation) that can be applied not only to artificial images
but also to real images used in computer graphic design. Moreover, the
and colour of objects. Hence, it is understood that the need to identify
regions of interest while vectorising the image is also undeniably important,
segmentation techniques.
overview flow of the colour Image Vectorisation application and the processes
elaborates the newly devised distinct set of evaluation criteria and the
outlines future directions. Finally, chapter 6 lists the references used
2. Background Studies
studies, empirical work and the state of the art in the domains of Image
detail.
there are numerous different studies that propose approaches to
raster-to-vector conversion. Liu and Dori [7] have identified that the basic
method needs to preserve the original shapes of the graphic objects in the
raster image. Moreover, they have classified Crude Vectorisation methods into
six categories: Hough Transform based, Thinning based, Contour based, Run-
graph based, Mesh-pattern based, and Sparse-pixel based. Other works such
as Tombre et al. [8] have also devised the steps involved in the Crude
Vectorisation: to approximate the lines found into a set of vectors; to not only
perform some post-processing but also find better positions for the junction
points; to merge some vectors and remove others; and to find the circular arcs.
engineering drawing. Therefore, they might not be suitable for very complex
2.2 Image Segmentation
examples of computer graphic design may range from batik pattern design,
believed that colour image segmentation can be one of the optimal choices, as
critical objects and their colours in a raster image can be easily preserved
image processing technique, since there has been a number of different image
in object recognition and object motion tracking areas [36, 37, 38].
The outcome of this research work shows promising results, such that the
author strongly believes that any good image segmentation method can be
needed.
The field of image segmentation has been an active research topic for
years. This can be seen from the availability of numerous different image
review the segmentation techniques that are aimed at complex colour images.
And finally, the review covers a technique known as Boundary Tracing that is used
regions, is one of the most important steps in image analysis and processing.
According to Sonka et al. [12], segmentation methods can be divided into
the image and help find the threshold. Sezgin and Sankur [14] have
histogram.
3. Entropy-based methods, which divide the histogram of the image into the
as edge coincidence, fuzzy shape, etc., between the grey-level and the
binarised image.
that using the thresholding technique alone may not create satisfactory results in
Furthermore, Sox et al. [32] believe that threshold-based approaches often
edges found in a digital image [12]. Prager [17] has proposed a set of
method is needed to remove any noise in the image, or to smooth the
detector to the image. Edges are the abrupt changes in the intensity function
found in an image. The edge detectors locate edges in the image relying on
earlier years; to name a few: the Sobel operator, the Laplace operator, and the
Prewitt operator. Thirdly, the edges produced in the second stage are joined into
line segments and their features are computed. The features include length,
contrast, frequency, mean, variance and location of each line segment. Lastly,
the borders of significant objects in an image, and generally the shapes of the
objects are usually preserved. Nevertheless, the two most common and
[12].
• Region Growing
which basically take one or more pixels, called seeds, and grow the regions
logical statement that is true only if the pixels in the regions are sufficiently
texture, shape, or some other property. According to Efford [31], one main
For instance, the results obtained by 4-connected region growing may differ
from those obtained by 8-connected region growing. In addition, the results obtained can
selection of the seeds can be problematic, because the user will not know whether
the seeds defined are sufficient to create a region for every pixel.
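The seeded growing described above can be sketched as follows — a minimal grey-scale version, assuming a 2-D list image, 4-connectivity, and a simple intensity-difference predicate (all function and parameter names are illustrative, not from the thesis):

```python
from collections import deque

def grow_region(image, seed, tolerance):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    differs from the seed intensity by at most `tolerance` (illustrative
    predicate; real methods may use colour, texture, or shape instead)."""
    h, w = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - seed_val) <= tolerance:
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region
```

With 8-connectivity the offset list would also include the four diagonals, which — as noted above — can change the resulting region.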
considered initially to be a single region. The region will be divided into many
graph of the underlying mesh, which takes every pixel in the image as a
random number drawn from the (0, 1) uniform distribution. To generate the
next level of the hierarchy, only a subset of the vertices is retained. The
retained vertices are called survivors, and the rest are non-survivors. This
satisfies the condition that no two survivors can be neighbours and every
Associating the co-occurrence probability with the initial RAG, a new graph
probability fields in the weighted RAG. When the value of a probability is high,
supports can be merged. It has been observed that building the RAG map is an
• Data Clustering
one of data clustering; to name a few, the works from [10, 11, 16, 17,
towards the average of the data points within, was first introduced by
(value) domains of grey-scale and colour images for discontinuity-preserving
filtering and image segmentation by Comaniciu and Meer [10, 11, 17].
According to [10, 17], the local mean, which is the average of the data
runs the mean shift procedure to find the local maxima of the sequences
discontinuity-preserving smoothing respectively.
mean shift is run until convergence and it maintains the structure of the
Tomasi and Manduchi [21], Saint-Marc et al. [26], and Perona and Malik
also proposed by Comaniciu and Meer [17]. The segmentation results are
Section 3.2) is used to represent the colour information for both spatial-
range filtering and segmentation. By applying the mean shift filtering, the
Subsequently, a RAG is built directly from the clusters. Eventually, all
adjacent regions that are closer than a threshold are merged to create
regions that are typically smaller than the minimum region size to the
closest joint regions, are usually helpful in the aim of removing non-
adjacent pixels will be separated into two different regions if the difference
between the two pixels is more than the threshold in the decomposition phase;
and two adjacent regions will be merged if the difference between them is
few chapters.
Luo and Khoshgoftaar [20] further improve the work of Comaniciu and
Meer [17] for colour and texture image segmentation by combining mean-
means the feature palette is rich enough such that the image is
decomposed into many small regions from which any sought information
preserving salient boundaries. The major idea behind their method is that
minimising the description length, which is the sum of the coding length of the
data given the model and the coding length of the model itself, to be
will lead to a decrease in the total coding length. From the author's
only the most significant regions are retained. However, not all the region
another problem.
technique based on mean shift clustering. They claim that the method
[17] by replacing the L*u*v* colour space with the hue and intensity
computes the local homogeneity value of a source image and retains the pixels
only with high homogeneity values before applying the mean shift algorithm
repeated peaks, the peaks found by the mean shift clustering which are
very close to each other, and small peaks, the peaks that have smaller values
been studied that, while validating the peaks, the process of choosing and
removing one of two peaks is not a stable operation; for instance,
different sets of validated peaks than the ones chosen from the right-to-left
direction.
image. Then each region will be labelled and the corresponding region
label will replace all pixels in the region to create a class-map of the image. In
on statistics of the colour classes was defined. Calculating the J-value and
applying it to a local area of the class-map can indicate whether the area
is within a region or near region boundaries. The higher the local J-value
is, the more likely that the corresponding pixel is near a region boundary.
Finally, in the spatial segmentation phase, those pixels with lower J-values
are considered region centres and used as seed pixels. A seed growing
all adjacent regions whose colour difference is less than a maximum
instance, the colour of a sunset sky can vary from red to orange to dark in
study that the cause of its limitation is the regions with smooth
the separated regions back into one region through the following
phases.
Liew et al. [16] point out that a conventional fuzzy c-means clustering
feature space. Furthermore, the conventional FCM does not consider the
algorithm for both synthetic and real colour image segmentation. This
regions. In this case, the adaptive measure implies the centre pixel is
window is in a homogeneous region. Liew et al. [16] also find that by
transitions, and merge adjacent regions that do not have boundaries with
detect the number of objects automatically from the image. ACIS uses
The saturation and intensity planes are utilised for colour image
segmentation, by assuming that, for a given colour object, these are the
two parameters that may vary while the hue value remains the same. Hence,
the histograms of the given image for the saturation and intensity planes are
image. Once the threshold and target values are calculated, the neural
network with a multi-sigmoid function labels and colours the objects with
their mean colour. From their experimental results, the credibility of their
studied that the images are also distorted such that the clarity of the
Martin et al. [29] believe that although different human segmentations of
the same image are not identical, they are highly consistent. Therefore
takes two segmentations (usually the second is a refinement of the first) as
input, and calculates the difference of the pixel sets in a region by using a
standard formula, where zero signifies no error. This local error measure
supports only one direction; for instance, the difference is zero when the
second is a refinement of the first, but not vice versa. The two different error
values are combined into an error measure for the entire image: Global
two segmentations for both measures is that the two segmentations must
Jiang et al. [40] categorise various methods for performance evaluation
GT-based evaluation. Jiang et al. [40] adopt the GT-based evaluation
measures for comparing clusterings are introduced and classified into three
theoretic distance of clusterings. Jiang et al. [40] realise that the
Zhang et al. [30] believe that manually generating a reference image, or
ground truth, is a difficult, subjective, and time-consuming job. For most
using a machine learning approach to coalesce the results from the constituent
approach. However, it has been studied in this research work that this
these evaluation measures work well for all cases. Furthermore, it has been
relatively subjective and involves a lot of manual work, which falls short
deals with arranging all nodes of the border so that they are sequentially chained. This
borders: the inner border, the outer border, and the extended border. An inner border is
a subset of the region; in contrast, the outer border is never a subset of
the region [12]. Both inner and outer borders cause difficulties in region
description, because two adjacent regions could never have a common border.
situations that may happen during extended border tracing, and it moves the
Moreover, the complexity can be reduced because each border between two
adjacent regions need be traced only once. The only problem discovered was that
the original look-up table could separate a region into different regions if, for
example, the region is a one-pixel-wide diagonal line. This is
because the look-up table applied supports four-connectivity only and the
vertical pixel links to that pixel. However, the problem could be easily
complex and rich colour images used in computer graphic design. Hence,
that contributes to vectorising the image into a high-quality and reliable result
3. Methodology
Regions) Segmentation, applicable to both colour and grey-scale images, are
techniques; WAR, on the other hand, can be used with or without any pre-
processing techniques.
nonlinear colour space, the RGB model (referring to red, green, and blue), to a
the image, and enhancing significant image features for further processing.
are identified and extracted in the source image. Finally, step 5 converts
3.1 Overview of BATIK
The predicate specifies that two adjacent regions can be merged only if the two
difference. The most common colour space, RGB, used in virtually every
one of the linear colour spaces, for example CIE L*a*b*, CIE L*u*v*, etc.
3.3 Discontinuity-Preserving Smoothing
because it gives better visual quality than CIE L*u*v* for image
formula, known as CIE 1976, is used to calculate the difference between
two colours: $\Delta E^*_{ab} = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2}$,
where L1, a1, b1 and L2, a2, b2 are the three components of the two colours. For
extremely small colour differences, the two modified versions of CIE 1976
are CIE 1994 and CIE 2000. Another formula for extremely small colour
differences is CMC, which is similar to the CIE versions but includes weighting
functions for different areas. CIE 1994, CIE 2000 and CMC may give better
conducted in this research work, it is found that CIE 2000 produces better
processing time.
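A minimal sketch of the CIE 1976 difference — the Euclidean distance between two colours in L*a*b* space (the function name is illustrative):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance in L*a*b* space."""
    (L1, a1, b1), (L2, a2, b2) = lab1, lab2
    return math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
```

CIE 1994, CIE 2000 and CMC refine this distance with weighting terms for lightness, chroma, and hue, at a higher computational cost.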
Filter are generally used to smooth image through replacement of every pixel
image and removes not only noise but also salient information. It is believed
vectorisation application.
different techniques have been proposed, and a brief discussion on them can
application. It is discovered that both the Mean Shift Filter [17] and the Bilateral Filter
[21] apply a similar kernel density estimation technique working in the joint
traditional domain filtering, such as the Gaussian Filter, weights pixel values
using only the spatial closeness between the centre and the neighbours. A
traditional domain filtering formula is shown below:
$$g_s(i,j) = \sum_{x=-w}^{w}\sum_{y=-h}^{h} f(i+x,\,j+y)\,G_s(x,y) \qquad (3)$$
neighbourhood centre (i, j ) and all nearby points within the small
neighbourhood that has a size of (2w + 1) × (2h + 1). f is the intensity and
$$G_s(x,y) = e^{-\frac{x^2 + y^2}{2\sigma_s^2}} \qquad (4)$$
Boyle [12].
similarity (range domain) of the centre and its neighbourhood samples. The
$$g_r(i,j) = \frac{\sum_{x=-w}^{w}\sum_{y=-h}^{h} f(i+x,\,j+y)\,G_r\big(f(i,j) - f(i+x,\,j+y)\big)}{\sum_{x=-w}^{w}\sum_{y=-h}^{h} G_r\big(f(i,j) - f(i+x,\,j+y)\big)} \qquad (5)$$
between the neighbourhood centre (i, j ) and the nearby points. Let
$$G_r(R) = e^{-\frac{R^2}{2\sigma_r^2}} \qquad (6)$$
and the difference greater than σr is not mixed together [21]. Unlike in the
$$g(i,j) = \frac{\sum_{x=-w}^{w}\sum_{y=-h}^{h} f(i+x,\,j+y)\,G_s(x,y)\,G_r\big(f(i,j) - f(i+x,\,j+y)\big)}{\sum_{x=-w}^{w}\sum_{y=-h}^{h} G_s(x,y)\,G_r\big(f(i,j) - f(i+x,\,j+y)\big)} \qquad (7)$$
Every pixel value in a window is hence replaced by the average of similar and
are usually similar to each other. In other words, if a pixel is very different to
averages away the small, weakly correlated differences between pixel values
and blue bands). Tomasi and Manduchi [21] find that edge-preserving
smoothing can be applied to the red, green, and blue components of the
image separately. However, the intensity profiles across the edge in the three
not only appears blurred but also exhibits odd-looking coloured auras around
objects. The uniform CIE L*a*b* colour space can therefore be applied to
convolution: once in the horizontal direction and once in the vertical direction,
the cost of calculating with the Gaussian kernel can be reduced from the square of its size to its size
$$g_s(i) = \sum_{x=-w}^{w} f(i+x)\,G_s(x) \qquad (8)$$
$$G_s(x) = e^{-\frac{x^2}{2\sigma_s^2}} \qquad (9)$$
$$g_r(i) = \frac{\sum_{x=-w}^{w} f(i+x)\,G_r\big(f(i) - f(i+x)\big)}{\sum_{x=-w}^{w} G_r\big(f(i) - f(i+x)\big)} \qquad (10)$$
$$g(i) = \frac{\sum_{x=-w}^{w} f(i+x)\,G_s(x)\,G_r\big(f(i) - f(i+x)\big)}{\sum_{x=-w}^{w} G_s(x)\,G_r\big(f(i) - f(i+x)\big)} \qquad (11)$$
Let {I_{index=1…n}} and {O_{index=1…n}} be the d-dimensional original and
the radius of the kernel; σ_s and σ_r be the standard deviations of the spatial
2. Assign X = h and Y = v.
3. For each j = 1 … Y:
   a. Assign oi = j.
   b. Assign io = j × X.
   c. For each i = 1 … X:
      i. For each x = −w … w:
      ii. Calculate g(io + i) using expression (11).
      iii. Assign O_oi = g(io + i).
      iv. Assign oi = oi + Y.
4. Assign I = O, X = v and Y = h.
5. Repeat step 3.
taking data from the filtered image space O, and using the original image's width
as height and the original image's height as width. Eventually, step 5 takes all of
the re-initialised values into step 3 and computes the final filtered image.
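The two-pass scheme above — a horizontal pass of expression (11) over every row, followed by a vertical pass over the intermediate result — can be sketched as follows (a single-channel sketch; clamping the window at the image borders is an assumed boundary handling, and all names are illustrative):

```python
import math

def bilateral_1d(signal, w, sigma_s, sigma_r):
    """One 1-D pass of expression (11): each sample is replaced by a
    spatially (G_s) and photometrically (G_r) weighted average of its
    neighbours within radius w."""
    out, n = [], len(signal)
    for i in range(n):
        num = den = 0.0
        for x in range(-w, w + 1):
            if 0 <= i + x < n:                       # clamp at the borders
                gs = math.exp(-(x * x) / (2 * sigma_s ** 2))
                diff = signal[i] - signal[i + x]
                gr = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                num += signal[i + x] * gs * gr
                den += gs * gr
        out.append(num / den)
    return out

def separable_bilateral(image, w, sigma_s, sigma_r):
    """Two-pass approximation: filter every row, then every column."""
    rows = [bilateral_1d(row, w, sigma_s, sigma_r) for row in image]
    cols = [bilateral_1d(list(col), w, sigma_s, sigma_r) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]          # transpose back
```

Because G_r assigns near-zero weight across large intensity differences, a sharp step survives the smoothing, which is the discontinuity-preserving behaviour described above.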
$$m(x) = \frac{\sum_{i=1}^{n} x_i\,K(x - x_i)}{\sum_{i=1}^{n} K(x - x_i)} \qquad (12)$$
called the mean shift. The repeated movement of data points to the sample
Mean Shift was initially proposed by Cheng [9] as a mode-seeking and
range (value) domains of grey-scale and colour images for discontinuity-
preserving filtering. One fact worth noting, observed by Comaniciu and Meer
[10], is that Bilateral Filtering and Mean Shift Filtering are based on the same
principle, which is the simultaneous processing of both the spatial and range
whereas the Mean Shift window is dynamic, moving in the direction of the
$$\hat{f}(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right) \qquad (13)$$
$$\hat{f}_{h,K}(x) = \frac{c_{k,d}}{nh^d}\sum_{i=1}^{n} k\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right) \qquad (14)$$
and assumed strictly positive in d-dimensional space. They also apply the
$$m_{h,G}(x) = \frac{\sum_{i=1}^{n} x_i\,g\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n} g\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right)} - x \qquad (15)$$
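The iteration of expression (15) can be sketched as follows — a 1-D version with a Gaussian profile for g; the convergence test and all names are illustrative assumptions:

```python
import math

def mean_shift_mode(points, start, h, max_iter=100, eps=1e-6):
    """Repeatedly move x by the mean-shift vector m_{h,G}(x) of
    expression (15) until the shift is negligible, i.e. until x is
    (approximately) at a zero-gradient point -- a local mode."""
    x = start
    for _ in range(max_iter):
        num = den = 0.0
        for xi in points:
            g = math.exp(-((x - xi) / h) ** 2 / 2)   # Gaussian profile g
            num += xi * g
            den += g
        shift = num / den - x                        # m_{h,G}(x)
        x += shift
        if abs(shift) < eps:
            break
    return x
```

Started inside the basin of a cluster, the iteration climbs towards that cluster's density mode, which is the behaviour the paragraph above describes.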
The local mean is shifted toward the region where the majority of the
points reside and stops when it gets fairly close to a local maximum, which is
the unique stationary point that has zero gradient calculated from the
Another issue worth noting, introduced by Sochen, Kimmel and
in which a 2D surface is embedded in the 3D (x, y, g) space for grey-level images,
A similar representation is used here: (x, y, L*) for grey-level
images and (x, y, L*, a*, b*) for colour images. As explained in [17], the joint
domain employs both the spatial domain, the space of the kernel, and the range
$$K_{h_s,h_r}(x) = C \cdot k\!\left(\left\|\frac{x^s}{h_s}\right\|^2\right) k\!\left(\left\|\frac{x^r}{h_r}\right\|^2\right)$$
where $x^s$ is the spatial part and $x^r$ is the range part of a feature vector, $k(x)$ is
the common profile used in both domains, and $h_s$ and $h_r$ are the employed
To sum up, Bilateral filtering and Mean Shift filtering are somewhat
estimation of each pixel is computed only once. On the contrary, Mean Shift
filtering will not generally perform the computation only once, but it will
Regions, and finally merging the closest regions. The processes automatically end
3.4 WAR (Weighting Adjacent Region) Segmentation
which regions and their adjacencies are depicted in a region map. It can be
to have the same interpretation are merged into one region [12]. By
perceptually linear colour space, for instance CIE L*a*b*, the RAG map can
be easily built using the colour properties as the homogeneity criterion. Initially,
the RAG map can be built by taking every pixel in the input image as a
Based on this RAG map approach, a customised RAG map that conforms
identification for every pixel and their adjacent regions in the neighbourhood.
follows.
Let {I_{i=1…n}} and {L_{i=1…n}} be the original and labelled image with length
1. If L_i == 0, assign L_i = ++l, push i to S.
   i. If I_t == I_{t+h_j} and L_{t+h_j} == 0, assign L_{t+h_j} = L_i.
   ii. Push t + h_j to S.
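The labelling step above, followed by a scan that records which labels touch, can be sketched as follows — a 4-connectivity sketch of building the initial label map and the adjacency part of the RAG (structures and names are illustrative):

```python
def build_rag(image):
    """Label 4-connected runs of equal-valued pixels with a stack (the
    labelling procedure above), then record which labels touch, giving
    the adjacency relations of the initial RAG."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]          # 0 means "unlabelled"
    next_label = 0
    for r in range(h):
        for c in range(w):
            if labels[r][c] == 0:                 # start a new region
                next_label += 1
                labels[r][c] = next_label
                stack = [(r, c)]
                while stack:
                    pr, pc = stack.pop()
                    for nr, nc in ((pr-1, pc), (pr+1, pc), (pr, pc-1), (pr, pc+1)):
                        if 0 <= nr < h and 0 <= nc < w and labels[nr][nc] == 0 \
                                and image[nr][nc] == image[pr][pc]:
                            labels[nr][nc] = next_label
                            stack.append((nr, nc))
    adjacency = {l: set() for l in range(1, next_label + 1)}
    for r in range(h):
        for c in range(w):
            for nr, nc in ((r + 1, c), (r, c + 1)):   # right/down suffices
                if nr < h and nc < w and labels[nr][nc] != labels[r][c]:
                    adjacency[labels[r][c]].add(labels[nr][nc])
                    adjacency[labels[nr][nc]].add(labels[r][c])
    return labels, adjacency
```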
neighbourhood will eventually have their own labels or identities. Once all the
same labels are performed. This whole procedure will finally generate a RAG
map, which contains all the inter-related regions. Thus, the follow-up
Weighting Adjacent Region process will directly manipulate this RAG map. An
Figure 2. An illustration of building the initial RAG map. (a) Every pixel in
(b) All connected regions that have the same property and are grouped
with the same label are merged, and the connections among them are revised.
One point worth noting in the implementation of building the RAG map is that
each region defined can be considered as one object, which contains four
and a list of references to the adjacent regions. These four properties are
regions. A region might be removed from the RAG map if, for example, the region
region with modified properties may remain in the RAG map only if the label
revised when the merging process is performed. The algorithm to build the RAG map is
given as follows.
Let {I_{i=1…n}} and {L_{i=1…n}} be the original and labelled image obtained
1. If L_i ≠ L_{i+h_j}
properties, which are the label, colour property, region size and neighbourhood
list. Therefore, in step (a) and step (b), region properties including the label,
colour property and region size can be set if the region is newly defined.
These three properties can easily be obtained from Algorithm 2. In step (c)
and step (d), it basically adds a region to the neighbourhood list of its
filtering, both range and spatial domains are taken into consideration when
and replaces the centre pixel by the weighted value, whereas WAR
weights all pixels in both the centre region and its neighbourhood samples.
Obviously, in a pixel map, the connections between a centre pixel and its
each region need to be identified individually from the previously built RAG
map.
Let {R_{i=1…n}} be the set of region objects, where n is the total number of
regions; let C and S be two variables used to sum up the colour property and size respectively
1. Assign S = R_{i,S}, C = R_{i,S} × R_{i,C}.
2. For each adjacent region R_{N,I} of R_i:
   a. If (R_{i,C} − R_{R_{N,I},C}) ≤ ∆:
      i. Assign C = C + R_{R_{N,I},C} × R_{R_{N,I},S};
      ii. Assign S = S + R_{R_{N,I},S}.
3. Set R_{i,C} = C ÷ S.
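For one region, the steps above amount to a size-weighted colour average over the region and its sufficiently similar neighbours; a minimal sketch with scalar colours and an illustrative `regions` map of label → (colour, size):

```python
def weight_region_colour(regions, adjacency, i, delta):
    """Sketch of the weighting above for one region: average the region's
    colour with every adjacent region whose colour difference is within
    `delta`, weighting each contribution by region size."""
    colour_i, size_i = regions[i]
    C = colour_i * size_i              # running colour-mass sum
    S = size_i                         # running size sum
    for n in adjacency[i]:
        colour_n, size_n = regions[n]
        if abs(colour_i - colour_n) <= delta:
            C += colour_n * size_n
            S += size_n
    return C / S
```

Because each neighbour contributes in proportion to its size, large similar neighbours pull the centre region's colour more strongly than small ones.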
The threshold ∆ bounds the desired range between the centre region and its
regions.
connections among regions and hence, enables the next merging process to
two adjacent regions will be merged if the difference between them is less
than the threshold in the composition phase. However, this raises the
sequence.
regions. (B’) and (C’) are the maps produced from the sequential merging
approach. (B”) and (C”) are the maps produced from the closest-regions-
merging approach.
(C’), the sequence of regions is mapped top-down and left-to-right.
due to the fact that they are also legitimately closer regions. However, since
region 4 has already been merged with region 1 and the final intensity of region 1
approach merges all joint regions with the closest intensities until no joint
merged with region 4 in the first place. Since the intensity of the merged region 2
has been updated, this has consequently resulted in the differences among the
remaining regions being higher than the set threshold. As a result, the merging
Unlike the two different methods depicted in Figure 3, this research work
devises a new merging strategy that is able to maximise the merging process.
It is believed and proven that this newly developed strategy will eventually
(B”’), (C”’), (D”’) and (E”’) are the maps produced from the sequential merging
approach.
that region 2 is the possible region to merge with, then region 2 is labelled as
sets of region 1 with no intensity value updated. Hence, the referenced region
initialising the referenced region, discovering its possible merging sets in the
Eventually, in (E”’), all the sets of regions found will be merged together and
To conclude, regions with similar density are merged together into each
respective region, and each region’s connectivity to its adjacent neighbours is
depicted in the RAG map. For a more detailed explanation of the merging
Let {R_{i=1…n}} be a set of regions obtained from Algorithm 4; R_{i,I}, R_{i,S},
R_{i,C} and R_{i,{R_N=1…m}} be the label, size, colour property and neighbourhood
of R_i; σ be a pre-selected threshold.
I. If |R_{L,C} − R_{R_{N,I},C}| ≤ σ:
   i. Push R_{N,I} to S.
3. Compute R_{i,C} and R_{i,S}.
Step 1 evaluates the availability of the selected region, since the
respective region may already have been merged into another region. Step 2
iteratively finds all adjacent regions that could be merged with the selected
region. Essentially, step 2 (II) revises the connections among the potential
merging sets. The revising operation is used to keep the relations among the
regions even though certain region(s) have been “removed” from the region
map, as explained in Figure 2 (c). Finally, Step 3 updates the colour property
and size of the region.
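One reading of the strategy above can be sketched as follows: starting from a reference region, all transitively adjacent regions whose colour lies within σ of the reference colour are collected first, with no intermediate updates, and the whole set is then merged at once. Scalar colours and the `regions` map (label → (colour, size)) are illustrative, and comparing against the reference colour is an assumption about Step 2:

```python
def merge_from_reference(regions, adjacency, ref, sigma):
    """Collect the merging set of `ref` without updating any colours,
    then merge the whole set in one go, size-weighting the final colour."""
    ref_colour, _ = regions[ref]
    merge_set = {ref}
    frontier = [ref]
    while frontier:                      # discovery phase: no updates yet
        r = frontier.pop()
        for n in adjacency[r]:
            if n not in merge_set and abs(regions[n][0] - ref_colour) <= sigma:
                merge_set.add(n)
                frontier.append(n)
    total_size = sum(regions[r][1] for r in merge_set)
    new_colour = sum(regions[r][0] * regions[r][1] for r in merge_set) / total_size
    return merge_set, new_colour, total_size
```

Deferring the colour update until the whole set is known is what makes the result independent of the visiting sequence, unlike the two approaches of Figure 3.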
small regions that are usually not significant in further processing and can be
3.5 Region Pruning
applied in the previous steps. The crude resolution can simply merge a small
region, which is smaller than a pre-selected size, into its adjacent region that is
most similar to the smaller region according to the homogeneity criteria used.
However, for image vectorisation, some small regions are significant and
cannot be removed. For example, in Figure 8 (a), the regions of the eyes and
the lips on the Ethnic-Lady’s face are small yet significant. Removing those
pruning process needs to preserve these small yet significant regions in the
image.
every smaller region with its most similar adjacent region. Whenever the
Let {R_{i=1…n}} be a set of regions obtained from Algorithm 5; R_{i,I}, R_{i,S},
R_{i,C} and R_{i,{R_N=1…m}} be the label, size, colour property and neighbourhood
property.
c. If d > σ, break.
d. Merge R_i and R_L.
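The pruning step can be sketched as follows, assuming scalar colours and an illustrative `regions` map (label → (colour, size)): a region smaller than the pre-selected size is merged into its most similar neighbour, but only when the colour difference stays within σ, so small but distinctive regions survive:

```python
def prune_small_region(regions, adjacency, i, min_size, sigma):
    """Merge region i into its most similar adjacent region when i is
    smaller than `min_size` and the difference is within `sigma`.
    Returns the label merged into, or None when the region is kept."""
    colour_i, size_i = regions[i]
    if size_i >= min_size or not adjacency[i]:
        return None                               # large enough: keep
    best = min(adjacency[i], key=lambda n: abs(regions[n][0] - colour_i))
    if abs(regions[best][0] - colour_i) > sigma:
        return None                               # distinctive: keep
    colour_b, size_b = regions[best]
    merged_size = size_i + size_b
    regions[best] = ((colour_i * size_i + colour_b * size_b) / merged_size,
                     merged_size)
    del regions[i]
    return best
```

The early exit when the best neighbour is still too different is what preserves the small but significant regions (the eyes and lips of the Ethnic-Lady example above).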
Step 1 not only searches for the smaller regions but also evaluates the
availability of the selected region, since the region may already have been merged into
the formula used in step (b). In addition, step (c) indicates that no merging is
performed if the difference between the two regions is greater than the
pre-selected range. Lastly, step (d) merges one region (usually the smaller
With the given region map, the follow-up work will concentrate on tracing and
capturing all regions’ border nodes, which is widely known as the Region Border
Tracing process.
As a result of the previous stage, a finalised region map (regions have
the image, one has to consider the need to capture all of the regions’
3.6 Region Border Tracing
border, outer border, and extended border. Only the extended border defines a
that the original look-up table is not adaptable to the current study, as it could
look-up table. The region shown is a one-pixel-wide diagonal line. When a
starting pixel is found, the first move along the traced boundary from the
starting pixel is always down (Figure 5a). The next move successfully
goes right based on the original look-up table (Figure 5b). After that, the
next move along the traced boundary ignores the next diagonally linked pixel
and inappropriately goes up (Figure 5c), and finally closes the traced
boundary as it meets the starting pixel (Figure 5d). Hence, the rest of the pixels
in the region could be separated from the traced boundary and be considered
as other regions.
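The failure case above — four-connectivity splitting a one-pixel-wide diagonal line — can be reproduced by counting connected components under each connectivity (a small sketch, not the thesis's look-up table itself):

```python
def count_components(pixels, connectivity):
    """Count connected components of a pixel set under 4- or 8-connectivity."""
    if connectivity == 4:
        offsets = ((-1, 0), (1, 0), (0, -1), (0, 1))
    else:  # 8-connectivity adds the diagonal links
        offsets = tuple((dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
    remaining = set(pixels)
    components = 0
    while remaining:
        components += 1
        stack = [remaining.pop()]
        while stack:
            r, c = stack.pop()
            for dr, dc in offsets:
                if (r + dr, c + dc) in remaining:
                    remaining.remove((r + dr, c + dc))
                    stack.append((r + dr, c + dc))
    return components

# A one-pixel-wide diagonal line: one region under 8-connectivity,
# but three separate regions when only 4-connectivity is supported.
diagonal = {(0, 0), (1, 1), (2, 2)}
```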
The new look-up table devised in the extended border tracing (as shown in Figure 6)
guarantees that any region can be closed. The anticipated tracing result from
Figure 6: The look-up table defining all 12 possible situations that can
appear during extended border tracing. Note that the newly devised (a),
(c), (e), (f), (h), (i), (k), and (l) are different from the tables proposed in
[13].
Algorithm 7.
Let {I_{i=0…n}} be a region map of an image with index from 0 to n, and
each I_i contains the label number j. Initialise {R_{j=0…m}} to be a set of regions
1. If R_{I_i} == 0
   b. Assign R_{I_i} = −1,
Essentially, steps (a), (b), and (c) locate every starting pixel of all regions
to begin a new boundary trace. Algorithm 7 assists the trace of each
are produced after the region border tracing procedure. Using these region border
nodes together with the colour information of each region preserved during the
4. Experimentation and Evaluation
from the original colour image to be used in the computer graphic design area. From
and evaluation of segmentation done by Jiang et al. [40]. They adopt the
between the machine segmentation result and the ground truth (expected
consuming job [30]. Furthermore, for most images, especially natural images,
shows a set of distinct evaluation criteria that are adopted while comparing
a. Result Resemblance
b. Edge Preservation
Most of the critical edges found in the original image should be preserved
practicality of WAR segmentation, these results are compared with the results
these results are done based on the defined distinct evaluation criteria. A
4.1 Result Evaluation
Boundaries (c) WAR segmented with range resolution = 10 (c’) WAR
segmented region boundaries. For post-processing, both (b) and (c) use
• Result Resemblance
details such as the shapes of the lips and eyebrows are retained, and
segmented into only a few distinct regions. On the other hand, Mean-Shift
• Edge Preservation
The result generated by WAR segmentation shows that the critical edges of the image are
generated result includes both significant and insignificant edges, and they
are complexly depicted.
BATIK application).
technique produces reliable and accurate results, and this also means that
computational module and can extend its use to other applications in other
domains.
5. Conclusion
insignificant regions are disregarded. The results also show high resemblance
to the original image, and most of the critical edges are preserved properly.
the domain of computer graphic design. It can also be applied to any domain
that requires region and feature extraction. Both Schek [40] and Jing et al.
[41] propose image similarity search systems that explicitly compare the
similarity between the regions of the query image and the regions of the
6. Bibliography
[1] D. Dori and W. Liu, “Sparse pixel vectorisation: an algorithm and its
integrated line tracking and vectorisation algorithm”, Euro Assoc. vol. 13,
num. 3, 1994.
vectorisation”, IBM J. Res. Develop, vol. 26, NO. 6, pp. 724-734, Nov.
1982.
(CGI’01), 2001.
processing tool for the purpose of textile fabric modeling”, XII ADM
[7] W. Liu and D. Dori, “From raster to vectors: extracting visual information
from line drawing”, Springer London, vol. 2, num. 1, pp. 10-21, Apr.
1999.
and robust vectorisation: how to make the right choices”, 3rd Inter.
[9] Y. Cheng, “Mean shift, mode seeking, and clustering”, IEEE Trans.
Pattern Anal. Machine Intel, vol. 17, no. 8, pp. 790-799, Aug. 1995.
[10] D. Comaniciu and P. Meer, “Mean shift and applications”, IEEE Computer
313-321, 1991.
scenes”, IEEE Trans. Pattern Anal. Machine Intel, vol. 2, no. 1, pp. 16-
27, 1980.
feature space analysis”, IEEE Trans. Pattern Anal. Machine Intel. Vol. 24,
2004.
[21] C. Tomasi and R. Manduchi, “Bilateral filtering for grey and colour
images”, in Proc. 6th Int. Conf. Comp. Vision, New Delhi, India, pp. 839-
846, 1998.
[22] M. Elad, “On the origin of the bilateral filter and ways to improve it”,
IEEE Trans. Image Processing, Vol. 11, No. 10, pp 1141-1151, Oct.
2002.
different colour transformations for JPEG 2000”, PICS 2000: Image Proc.
Image Qua. Image Capt. Sys. Conference, pp. 259-263, Portland, Mar.
2000.
Wesley, 1992.
general tool for early vision”, IEEE Trans. Pattern Anal. Machine Intel.
texture regions in images and video”, IEEE Trans. Pattern Anal. Machine
diffusion”, IEEE Trans. Pattern Anal. Machine Intel. Vol. 12, pp 629-639,
1990.
Symposium – Signal Proc. Sensor Fusion, Target Recon. XIV, pp. 420-
[34] K.S. Deshmukh and G.N. Shinde, “An Adaptive Color Image
[35] M. P. Wand and M. Jones, “Kernel Smoothing”, Chapman and Hall, 1995.
Objects using Mean Shift”, IEEE Comp. Vision and Pattern Recognition,
Vision and Pattern Recognition, San Francisco, CA, pp. 403-410, 1996.
[40] H. -J. Schek, “Region based Image Similarity Search” Diploma Thesis,
[41] F. Jing, M. J. Li, H. J. Zhang and B. Zhang, “An Efficient and Effective
7. Appendix
boundaries superposed are shown in Figures 10, 11, 12, 13, and 14.
Figure 10. Landscape images (a) Original (b) WAR Segmented (c) Region
Boundaries
Figure 11. House images (a) Original (b) WAR Segmented (c) Region
Boundaries
Figure 12. Fruit images (a) Original (b) WAR Segmented (c) Region Boundaries
Figure 13. Flower images (a) Original (b) WAR Segmented (c) Region
Boundaries
Figure 14. Flamingo images (a) Original (b) WAR Segmented (c) Region
Boundaries