
Fast Polygonal Approximation to a Continuous Path

Hugo Leonardo Rosano Matchain


Master of Science
Artificial Intelligence
School of Informatics
University of Edinburgh
2003
Abstract
The main task is to compress, at high speed, the information contained in an image or in a sequence
of position coordinate pairs, in order to allow fast subsequent processing. This is done by
means of a polygonal approximation algorithm; an improved version of the algorithm known
as Malcolm's Corner Filter (Malcolm, 1983) is proposed for this task. The original algorithm
was optimised to run rapidly and was computationally cheap enough to be implemented in a
microprocessor. When calibrated appropriately for certain tasks, its results are satisfactory. Nevertheless,
the calibration process is hard to perform because its parameters affect each other and
some of them are not as intuitive as required. Furthermore, the algorithm has some misleading
implementation details which make it unstable on certain types of images.
The aim of this dissertation is to analyse the behaviour of the algorithm in depth and to find
mathematical support that helps improve its performance. A calibration process which is as
easy as possible for any user also has to be proposed, perhaps by relating parameters to more
intuitive quantities, e.g. the maximum error or the desired compression ratio. The original
algorithm also needs to be tested against other methods, with results reported using the measures
adopted by most of the scientific community: for instance, the compression ratio (CR), the integral
squared error (ISE) and the figure of merit (FOM). Images used for comparison between the
original algorithm and the improved version were taken from other articles, which are now also
a popular source of silhouettes. In order to test both algorithms presented herein, assessments
based on the techniques suggested by (Rosin, 1997) were carried out. Unfortunately, enough data
could be found for only one image.
I found that the performance of the improved version is more accurate and reliable
than that of its original counterpart. The results are based on statistical measures used by the
scientific community, and some visual results are presented to support this conclusion. Nevertheless,
a better comparison based on the processing time consumed by the improved version is still
necessary. Proposed future work covers finishing the calibration and
code optimisation proposed herein, as well as possible real implementations on which to test this
algorithm.
Acknowledgements
Many thanks to my supervisor Chris Malcolm for invaluable feedback and support ideas, which
have helped me work in the right direction. Thanks to Bob Fisher for his guidance concern-
ing the evaluation process and related algorithms to those analysed herein. Thanks also to the
Mexican council CONACyT for the scholarship that allowed me to pursue this degree.
Thanks also to the University of Surrey, UK. For providing me with silhouette images used
in this thesis for testing and comparison. Thanks also to D.L. Kreher and D.R. Stinson for
their very useful L
A
T
E
Xpackage Pseudocode used to describe the algorithm proposed in this
thesis. Also many thanks to Michael Downes for its invaluable Short Math Guide for L
A
T
E
X:
(http://www.ams.org/tex/short-math-guide.html). Matlab version 6, from MathWorks was used
for running all algorithms and experiments. This document was written using L
A
T
E
X.
Last but not least, many thanks to my wife, Fatme, for her priceless support, encouragement
and patience whilst I was working on this project.
Declaration
I declare that this thesis was composed by myself, that the work contained herein is my own
except where explicitly stated otherwise in the text, and that this work has not been submitted
for any other degree or professional qualification except as specified.
(Hugo Leonardo Rosano Matchain)
Table of Contents

1 Introduction
1.1 Thesis overview
1.2 Notation

2 Related Work
2.1 Greyscale
2.2 Contour Representation
2.3 Corner Detectors
2.3.1 Region of Support
2.4 Polygonal Approximation
2.4.1 Popular methods
2.4.2 Optimal algorithms
2.5 Assessing Polygonal Approximation
2.5.1 Self-performance
2.5.2 Performance comparison
2.5.3 Levels of compression
2.6 Test Images
2.7 Summary

3 Malcolm's Algorithm Review
3.1 Operational Description
3.1.1 Corner Detector
3.1.2 Curve Detector
3.1.3 Deviation Control
3.2 Parameters Survey
3.2.1 Worm Length
3.2.2 Corner Detector Threshold
3.2.3 Curvature Threshold
3.2.4 Deviation Threshold
3.2.5 Parameter Interactions
3.3 Algorithm Development
3.3.1 Corner Detector Improved
3.3.2 Curve Detector Improved
3.3.3 Deviation Improved and Removal of Redundant Vertices Proposal
3.4 Compression by calibration
3.5 Summary

4 Results
4.1 Calibration for experiment
4.2 Comparison between original algorithm and improved version
4.2.1 Analytical Comparison
4.2.2 Visual comparison versus statistics
4.2.3 Time Performance
4.3 Performance compared with other algorithms
4.4 Summary

5 Summary and Conclusions
5.1 Malcolm's Algorithm Review
5.2 Results
5.3 What I learned from this project
5.4 Future Work
5.5 Conclusions

A Analytical analysis
A.1 Sharp detector
A.2 Curve Detector
A.3 Calibration of curvature sharpness in terms of angle
A.4 Equation for calculating number of segments in a curve

B Images used for testing

C Tables of Results
C.1 Tables
C.2 Complete Results
C.2.1 Original Algorithm
C.2.2 Improved Algorithm

D Additional Visual Results

Bibliography
List of Figures

3.1 CS plots of the image shown on the top left. Top right: L = 5; bottom left: L = 10; bottom right: L = 25. Features labelled on the image are also indicated on the plots.
3.2 Corner detection case, definition of angles.
3.3 Variations caused by image rotation. The x-axis and y-axis correspond to the two angles defined in figure 3.2, each ranging from 0 to π/2. Contour level values increase by 0.2 each step from left to right.
3.4 Analytical result for equation 3.10 on the left, versus the real digital result on the right, with θ = 30° and L = 25. The x-axis represents the position of the head while dealing with the curve, k = 0 to 2L; the y-axis represents the angle, from 0 to π/2.
3.5 By finding only one maximum, close features are treated as one by the original algorithm. The object under analysis is shown on the left hand side; its curvature sharpness is shown on the right hand side. The size of the worm is L = 10. Main features of the object are labelled and indicated in the plot according to their position.
3.6 Variable definitions for the accumulator parameter.
3.7 The accumulator only detects variation in angular motion. Top left is the object under analysis. Top right is the accumulator plot for worm size L = 5. Bottom left is the Acc plot for L = 10. Bottom right is the Acc plot for L = 15.
3.8 Left: accumulator plotted versus angle, from 0 to 2π, with R = 100 and L = 10. Right: the complete analysis but varying the initial position, from 0 to π/2, indicated on the x-axis; the accumulator value is on the y-axis.
3.9 If the deviation is below the deviation threshold ThresC, it is never going to be detected, because at P_k there is a corner and no further deviation checking is done in that zone.
3.10 Smoothing done by convolution versus smoothing done by worm perception. Top left shows the original CS plot of the object on the bottom left. Top right shows smoothing done by convolution with a mask of ones of size L/2. Bottom right shows the CS function obtained by changing the size of the worm.
3.11 Implementation results of the local maximum filter, L = 10. Top left shows the original object, an airplane seen from above. Top right shows the smooth version of the CS function along with the corners considered, shown with circles. Bottom left is a section of the left wing. Bottom right shows the plot of the plane section shown to the left, along with the corners considered, also shown with circles.
3.12 New proposal for finding curve segments. Top left is the test image. Top right is the new plot indicating the slopes found, with initial coordinates indicated with circles and inflection change points indicated with stars. Bottom right is the plot used by the original algorithm. Bottom left shows key points found by the new proposal: stars are corners and circles indicate the start and end of curves.
3.13 Result of using the improved version of the curve detector. Stars indicate vertices located by the corner detector, circles indicate curve vertices and the triangle was set by the straight line detector.
4.1 Visual results obtained by the original algorithm, both to the left, compared with visual results obtained by the improved version, both to the right. Both images on the top are kk745, analysed with L = 7, and both at the bottom are comm1, analysed with L = 5.
4.2 Visual results obtained by the original algorithm, both to the left, compared with visual results obtained by the improved version, both to the right. Both images on the top are kk1087, analysed with L = 7, and both at the bottom are kk788, analysed with L = 10.
B.1 Images taken from (Chetverikov and Szabó, 1999). These do not represent their actual size.
B.2 Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
B.3 Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
B.4 Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
B.5 Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
B.6 Images taken from (Teh and Chin, 1989). Each black square represents a pixel.
D.1 Visual comparison. Original algorithm on the top, improved version on the bottom. L = 10 applied to image kk1051.
D.2 Visual comparison. Original algorithm on the top, improved version on the bottom. L = 14 applied to image kk418.
D.3 Visual comparison. Original algorithm on the top, improved version on the bottom. L = 10 applied to image kk450.
D.4 Visual comparison. Original algorithm on the top, improved version on the bottom. L = 8 applied to image comm6.
D.5 Results obtained with the Malcolm algorithm at different worm sizes. The original image is shown on the left hand side of figure B.6.
D.6 Results obtained with the improved version of the Malcolm algorithm at different worm sizes. The original image is shown on the left hand side of figure B.6.
D.7 Visual comparison proposed by (Rosin, 1997). Data used for this plot are shown in table 4.5. Performance of the algorithm depends on how close, vertically, it is to the optimal line. The object under analysis is shown on the left hand side of figure B.6.
D.8 Results obtained with the Malcolm algorithm at different worm sizes. The original image is shown on the right hand side of figure B.6.
D.9 Results obtained with the improved version of the Malcolm algorithm at different worm sizes. The original image is shown on the right hand side of figure B.6.
Chapter 1
Introduction
Polygonal approximation is an important technique used in image processing and pattern
recognition. Its main purpose is to detect the dominant points along the contour of an image that
best describe the object. This can provide higher-level programs with the necessary information
about the object in a compact and highly efficient manner. Furthermore, these points play
an important role in the shape classification performed by the human vision system (Chetverikov and
Szabó, 1999). However, polygonal approximation is not only concerned with salient points
along the contour; it is also concerned with the difference in shape between the original image
and the proposed polygon. This distinguishes such algorithms from corner detector algorithms, which
focus only on the detection of high curvature points.
Applications for this algorithm are highly varied, from silhouette classification and object
tracking to multimedia applications and complex object recognition. Image processing is
an area that requires fast and highly efficient algorithms, because the
amount of initial information is normally of the order of 10^5 or 10^6 pixels. Consequently, these
tasks cannot be solved simply by increasing the speed of the computer where the process takes
place; it is also necessary to have efficient algorithms that can process a high number of pixels and
return reliably compressed data in the least possible time. Besides, there are many applications
where computing power is restricted to small microprocessors that are nevertheless required to process
complex images at a rate of at least 25 Hz. This situation implies that the algorithm does not have time to
compute gradients or convolutions, or to rely on iterative techniques.
Chris Malcolm in (Malcolm, 1983) proposed a polygonal approximation algorithm based
on simple mathematical operations and an uncomplicated logical sequence to solve this problem
at high speed. The algorithm works on a sequence of coordinate pairs describing the
contour of an object. Each coordinate pair is used at most twice, and vertices are proposed
while the algorithm scans each of the pixels. The whole task is divided into three main
feature detectors: a sharp corner detector, a curve detector and a deviation error detector. However,
the algorithm has severe difficulties during the calibration process and performs awkwardly
and unstably under certain circumstances. The task is to analyse the performance of the algorithm
in order to propose a calibration method based on parameters that are more intuitive, for
instance, a compression ratio or the error between the original contour and the polygonal approximation.
Furthermore, where required, improvements to the original algorithm have to be made in
order to obtain more stable and useful results. Besides, by the time (Malcolm, 1983) was published,
none of the most popular assessment techniques had yet been proposed; therefore, there
is no evidence of how Malcolm's algorithm performs compared to other methods. Consequently,
a series of comparison experiments, along with visual results on images used by the
scientific community, have to be carried out.
1.1 Thesis overview
Chapter 2 presents information concerning different types of feature extraction algorithms based
on corner detection, and polygonal approximation algorithms. In addition, relevant techniques
for assessing algorithms are presented along with popular methods for reporting results and
comparison measures. Finally, the origins of the images used in chapter 4 for
comparison and testing are explained in more detail.
Chapter 3 contains a full analysis of all the parameters comprising the original algorithm,
along with an analysis of the interactions between them. Section 3.1 presents the operation
of the original algorithm as described in (Malcolm, 1983). In sections 3.2 and 3.2.5,
the analysis of all parameters is shown, emphasising flaws that cause unstable results. Based on
the analysis made in the former sections, improvements to the algorithm are proposed in section
3.3. In addition, some insights about results obtained before and after the improvements are
presented. Using conclusions gathered throughout this chapter, a calibration process based on
a possible qualitative error is proposed in section 3.4.
The results of comparing the original algorithm with the improved version are reported
in chapter 4. For comparison, images with the features proposed in section 2.5.1 were used, along with
the measures described in section 2.5.2. Results are also reported for visual analysis, relying mostly
on what the reader believes about the performance of both algorithms. Because of the enormous
number of images used for comparison, only a few images with salient features were selected.
Finally, chapter 5 presents a summary of this thesis along with some interesting conclusions.
Proposed future work covers finishing the calibration and code optimisation
proposed herein, as well as possible real implementations on which to test this algorithm.
Calculations presented in chapter 3 are demonstrated in Appendix A for each of the
feature detectors of the original algorithm. Appendix C presents all results obtained in chapter
4 with the intention of stimulating further comparison with other methods; this appendix
also contains tables that complement the extracted information presented in that chapter. Finally,
Appendix B has a compendium of all images used within this dissertation.
1.2 Notation
In mathematical equations, scalars are presented in italics, e.g. ThresA, whereas vectors
are indicated with an arrow on top of the variable, e.g. AccV. If a variable depends on
other variables, this is indicated as usual, with the variables it depends on between parentheses,
e.g. f(θ, L). The dot product between vectors is indicated as follows:

R = A · B = A_x B_x + A_y B_y

where A_x is the component along the x-axis. Unit vectors are indicated with a triangular
hat on top of the vector, e.g. x̂. Therefore, A_x = A · x̂.
When referring to common terms used by other polygonal approximation algorithms, the
total number of pixels composing the contour is indicated by the variable n, and the total
number of vertices proposed is indicated by M.
Chapter 2
Related Work
This chapter gives some background information on polygonal approximation of a thin discrete
planar digital curve extracted from a contour of an image. Different techniques for corner
detection and polygonal approximation are presented, emphasising their approach for detecting
the most important contour features. Techniques for comparing algorithm performance are
then described, and the test images proposed by different researchers, chosen to keep the
evaluation process consistent, are presented.
2.1 Greyscale
There are many corner detector algorithms that work directly on greyscale images. These do
not require rst nding the contour of the object; therefore nding corners or salient features
inside an object under study is also feasible in these cases. The algorithms proposed include the
use of Hough Transform (Davies, 1988), neural network based algorithm (Dias et al., 1995),
based on mathematical morphology (Laganiere, 1998), using wavelet transform (Lee et al.,
1993b) or (Quddus and Fahmy, 1999), etc. Some of these methods rely on directional deriva-
tives, resulting in error bias due to noisy information (Laganiere, 1998), for instance using
vector potential for locating corners (Luo et al., 1998). Active contour algorithms, also called
snakes, are relatively common methods. Since these associate an energy function to the objects
contour which tends to minimize, they are considered as optimal. For instance, (Kass et al.,
1998) is of frequent use. Nevertheless, its process relies on repetitive energy minimization,
which makes it slow for the purposes of this thesis.
A simpler and more popular method proposed is the SUSAN algorithm, (Smith and Brady,
1995), which relies on a circular mask that compares similar brightness around certain point.
Avoiding the use of derivative and an easy parameter calibration, makes it a suitable algorithm
for the problem of corner detection (Torres-Huitzil and Arias-Estrada, 2000). Nevertheless, its
intense computational algorithm and scan through the entire image makes it a slower algorithm
4
Chapter 2. Related Work 5
compare to the ones that work only with the contour of the object.
Algorithms working on greyscale images will not be addressed further in this thesis be-
cause of their intrinsic timeconsuming image scan. Furthermore, techniques used by this
algorithm do not share any similarity with the current algorithm under study.
2.2 Contour Representation
For most of the contour representation techniques, points have to be next to each other and
without ambiguity of sequence. The contour of an image is usually encoded using Freeman
chaincode representation by using eightdirectional encoding scheme (Freeman, 1961)
1
. At
each point, the next coordinate in the sequence is indicated using eight or four possible direc-
tions as reference. For instance 2222000066664444 represent a four pixels size square. It
can be also used relative to the current orientation of the sequence. The previous chain-code
will be 2222422242224222, indicating only right-turns and straight movements.
Encoding of contour can also be calculated by coordinate pairs. For each point found in the
contour, its coordinate (x, y) relative to a pre-specied origin is calculated and stored, usually
one of the corners of the canvas. It facilitates analysing the relationships of distant points, such
as the middle point between coordinate pairs or the distance between them. Therefore this
technique is more suitable for the purposes of this thesis.
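To make the two encodings concrete, the following minimal Python sketch (an illustration added here, not code from the thesis) decodes an eight-directional Freeman chain code into the coordinate-pair representation; the square example from the text is used as input.

# Eight-directional Freeman chain code, assuming 0 = east and counting
# counter-clockwise in 45-degree steps (2 = north, 4 = west, 6 = south).
STEPS = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
         4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def chain_to_coords(code, origin=(0, 0)):
    # Decode a chain-code string into the list of (x, y) coordinate pairs it traces.
    x, y = origin
    coords = [(x, y)]
    for c in code:
        dx, dy = STEPS[int(c)]
        x, y = x + dx, y + dy
        coords.append((x, y))
    return coords

# The square from the text: four steps north, east, south and west.
print(chain_to_coords("2222000066664444"))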
2.3 Corner Detectors
Corner detectors focus on dominant points along the contour depending on the curvature
associated with each section of the boundary. (Arrebola et al., 1997), (Yuan and Suen, 1992) and
others base their approach on histograms of the contour, built on the Freeman chain-code representation
(Freeman, 1961). (Kaneko and Okudaira, 1985) and (Lu and Dunham, 1991) work by
finding patterns in the chain code, though the results reported are susceptible to noise. Of the
algorithms reviewed in this chapter, those based on the chain-code representation have the
simplest computational process for detecting corners. Despite that, some of them report better
results than methods using more sophisticated machinery, for instance genetic algorithms
(Huang and Sun, 1998) or directional templates (Backnak and Celenk, 1989). This
could lead to the optimistic conclusion that even simple algorithms, like the one analysed here,
can provide satisfactory results if they are implemented correctly. Methods reporting more
reliable results, for example those based on the wavelet transform (Lee et al., 1995), (Quddus and
Gabbouj, 2000) and (Lee et al., 1993a), rely on more complex and time-consuming algorithms,
whose cost is enormous compared to the expectations for this thesis.

[1] This notation is frequently used also with a four-directional encoding scheme.
Although corner detectors do not consider minimising the error between the original and the
approximated boundary, and are therefore by themselves not suitable for our purposes, one of
the feature detectors on which Malcolm's algorithm presented in chapter 3 is based deals
only with the detection of corners. Consequently, this section could contain useful references and
ideas for detecting salient points. These could later be incorporated into the proposed algorithm in
order to improve its performance.
2.3.1 Region of Support
Curvature can be associated to a particular point, nevertheless, it is only calculated by the
behaviour of the curve shape to each side of the point. The collection of points at a certain
distance from this middle point is known as the region of support. Therefore, in order to obtain
satisfactory results, it is not only important to have an accurate or novel measurement method
for the curvature throughout the contour, but also a good estimate for the region of support.
Algorithms like (Urriales et al., 2003), (Bandera et al., 2000) and (Teh and Chin, 1989) have a
exible region of support implemented in the algorithm, i.e. the region of support is automati-
cally adjusted while the algorithm is running, making them more reliable algorithms for corner
detection, even at different object scales and rotations. Other corner detection algorithms like:
(Rosenfeld and Johnston, 1973), (Freeman and Davis, 1977), (Reche et al., 2002) (Chetverikov
and Szab o, 1999) or (Beus and Tiu, 1987), leave the region of support as a parameter, perhaps
because it is directly related to resolution of the object contour. Too large a region of support
will smooth out features of a curve, and a small region will lead to additional and sometimes
redundant points (Teh and Chin, 1989). Evidently for each algorithm, the level of resolution
will be affected in a different manner depending on the technique used by detecting salient
features.
I have decided not to implement an automatic adjustable region of support, instead this
parameter will be available for the user to set previously
2
. It was decided not to implement
this automatic feature, because it has the disadvantage that one would have to increase the
number and complexity of calculations for each of the points through the object contour. In
some algorithms like (Teh and Chin, 1989), the procedure suggested is iterative. In our case,
considering the tools and base algorithm available for the task, we would have to divide the
whole task into many phases, each part done in a different cycle around the contour. As an
alternative, it is suggested here that a relationship between the desired approximation of the
object and the parameter related to the region of support can be established. This will aid an
easy parameter calibration while avoiding adding computation to the algorithm.
2
See section 3.2.1 for further information concerning the relationship between the region of support and the
current parameters in Malcolms algorithm.
2.4 Polygonal Approximation
Polygonal curve fitting involves approximating a discrete planar curve by a sequence of connected
straight-line segments (Zhu and Seneviratne, 1997), capturing the essence of the boundary
shape with the fewest possible line segments and vertices (Huang and Sun, 1998). Therefore,
the main difference compared with corner detection algorithms is the concern for also
fitting smooth curves or semicircular sections. Of course, it is more important for our algorithm
to place polygonal vertices accurately at corners than to place accurately those vertices fitting a curve. This
is because the former have a well-defined location, while the latter serve only to diminish the error
between areas, so no exact location for these vertices is required.
There is a variety of approaches for implementing polygonal approximation of an
object that are not related to our work; for instance, genetic algorithms (Huang and Sun, 1996) or
neural network based methods (Sanchiz et al., 1996), whose decision on whether or not to consider a vertex
depends on their training rather than on parameters similar to the ones discussed herein; and cubic B-splines
(Medioni and Yasumoto, 1986) or curvature based polygonal approximation (Pinheiro
et al., 2000), (Kartzir et al., 1994), (Pikaz and Dinstein, 1994b), because their approximation takes
place using curve segments instead of straight-line segments.
2.4.1 Popular methods
Iterative refinement (IRM) (Pitas, 1993) is one of the most popular published methods (Mikheev
et al., 2001). It is based on iterative split and merge steps, apparently originated by (Ramer,
1972) or (Pavlidis and Horowitz, 1974).[3] During the splitting process, for each segment a vector
going from one extreme to the other is calculated, and then the perpendicular distance to it is
calculated for all points of the contour within that segment. If the maximum distance found
is above a certain threshold, the segment is split at that point and the process is repeated
iteratively. Splitting a segment at its most distant point is a good approach to reduce error.
Nevertheless, basing the entire process solely on this approach makes it computationally inefficient,
because each coordinate pair is evaluated many times. This technique is also used in a
similar way by the algorithm under study, but splitting only once if the segment has an error
above threshold.[4]

[3] It is not clear who was the first to propose this technique. These references are the ones most used in other papers, and the oldest I could find.
[4] Further information can be found in section 3.2.4.
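The splitting step can be sketched as follows (a hedged Python illustration of the Ramer-style recursion under the definitions above, not code from any of the cited papers); note how coordinate pairs are revisited at every level of the recursion, which is the inefficiency just mentioned.

import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the chord through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    chord = math.hypot(bx - ax, by - ay)
    if chord == 0:
        return math.hypot(px - ax, py - ay)
    return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / chord

def split(points, eps):
    # Recursively split the segment at its most distant point until the
    # maximum deviation of every segment is below eps.
    a, b = points[0], points[-1]
    dmax, imax = max((perp_dist(p, a, b), i) for i, p in enumerate(points))
    if dmax <= eps or len(points) <= 2:
        return [a, b]
    return split(points[:imax + 1], eps)[:-1] + split(points[imax:], eps)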
Another common proposal is the progressive scheme (Pitas, 1993). The curve is traversed
starting from an initial point until the longest possible edge that lies within
a maximum distance from the curve is obtained. This process is repeated until the last point
becomes the starting point; nevertheless, it may fail to obtain the longest possible edge when
there is more than one inflection point (Hosur and Ma, 1999). This approach is the most similar
in method to the algorithm analysed here,[5] and consequently it also shares the same disadvantage
when many inflection points appear within one segment.
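A minimal sketch of the progressive scheme as just described (again an added illustration, not code from the cited papers): each accepted vertex starts a new edge, which is extended for as long as every skipped contour point stays within eps of the chord.

import math

def perp_dist(p, a, b):
    # Chord-distance helper, identical to the previous sketch.
    (px, py), (ax, ay), (bx, by) = p, a, b
    chord = math.hypot(bx - ax, by - ay)
    if chord == 0:
        return math.hypot(px - ax, py - ay)
    return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / chord

def progressive(points, eps):
    # Greedily take the longest edge whose skipped points all lie within eps.
    verts = [points[0]]
    start = 0
    while start < len(points) - 1:
        end = start + 1
        while end + 1 < len(points) and all(
                perp_dist(points[i], points[start], points[end + 1]) <= eps
                for i in range(start + 1, end + 1)):
            end += 1
        verts.append(points[end])
        start = end
    return verts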
Curvature scale-space (CSS) based algorithms are also in frequent use, like (Rattarangsi
and Chin, 1990), (Mokhtarian and Suomela, 1999) and (Pinheiro et al., 2000). These are very
robust to noise, even though some of them lack error control between the polygonal approximation
and the original contour. The procedure consists of convolving the curve with a Gaussian
function at different variance values σ²; then, for each smoothed curve, inflection points are found and
plotted in a σ–x graph. The procedure is therefore heavy and repetitive for each object, and its
computational cost rules it out for this study. Nevertheless, the conclusions that arise from these
papers concerning the representation of an object at different scales are essential for the algorithm
analysis when proposing a calibration system.[6]
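The CSS idea can be sketched in a few lines (an illustration assuming the closed contour is given as numpy coordinate arrays; this is not the implementation of the cited papers). The repeated smoothing for every sigma is exactly the computational cost referred to above.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_inflections(x, y, sigmas):
    # For each smoothing level sigma, return the contour indices where the
    # curvature of the Gaussian-smoothed closed contour changes sign.
    rows = {}
    for s in sigmas:
        xs = gaussian_filter1d(np.asarray(x, float), s, mode='wrap')
        ys = gaussian_filter1d(np.asarray(y, float), s, mode='wrap')
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        numer = dx * ddy - dy * ddx   # curvature numerator; only its sign matters
        rows[s] = np.where(np.diff(np.sign(numer)) != 0)[0]
    return rows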
2.4.2 Optimal algorithms
An optimal algorithm works by continuously checking the error between the original curve and
the polygonal approximation whilst adding or removing vertices. Nevertheless, the concept
of optimality is not treated in the same way in all articles. For instance, (Yuan and Suen,
1992) is not concerned about the number of vertices used by the approximation, as long as
well-defined straight lines are detected in all curve sections; therefore, extra vertices can
be detected because of noise. (Huang and Sun, 1998) proposed a fitness function related to
the compression of the curve, which they tend to minimise while maintaining the deviation error
from the curve below a threshold ε. One of the key concepts about these optimal algorithms[7]
is that they do not rely only on their first polygonal approximation proposal. It is less likely for
Malcolm's algorithm to be able to propose a correct polygon on its first try, because it depends
on lower computational power to run and its basic approach is to use each coordinate pair at
most twice. Optimality for the algorithm proposed by Malcolm agrees with that of (Pikaz
and Dinstein, 1994a) and (Hosur and Ma, 1999), based on minimising the number of vertices
that are distant from the original curve by no more than a pre-specified value ε. Perhaps the most
accurate optimal algorithms are based on dynamic programming, for instance those reported
by (Perez and Vidal, 1994). These algorithms, however, are extremely costly computationally
compared to the one proposed by Malcolm.

[5] The overall operational description of the algorithm can be found in section 3.1.
[6] More detail can be found in section 4.1.
[7] Each of them optimal under its own considerations.
2.5 Assessing Polygonal Approximation
Little work has been done in this area, and many algorithm proposals employ different approaches
to compare and evaluate their algorithms. Some of them present only the time expended
and visual results, leaving to the reader the decision whether the algorithm behaves appropriately.
Perhaps this is because no generally accepted mathematical definition of a corner exists, at
least for digital curves (Chetverikov and Szabó, 1999). In addition, if the algorithm is to be
used in specific applications, its performance is better assessed by analysing direct results,
instead of evaluating it by some other technique. As a result, in many cases no direct evaluation
of the algorithm is presented. In the case presented here, there is no definite task where the
algorithm is to be applied, therefore comparison criteria must be decided on independently.
2.5.1 Self-performance
(Rajan and Davidson, 1989) proposed a systematic evaluation process for corner detection. It
is not suitable for comparison with other algorithms, because the true corners have to be
given and this is not always the case. Nevertheless, I believe it is useful for the algorithm calibration
process because it covers the main features which a polygonal approximation algorithm
has to detect. Six attributes are presented, along with the figure proposed for each:
1. corner angle: isosceles triangle with variable angle
2. corner arm length: isosceles triangle with variable length
3. adjacency: quadrilateral with length l and width w
4. sharpness: triangle with a rounded vertex, with variable curvature radius
5. grey-level distribution: square with variable grey levels
6. noise level: zero-mean Gaussian noise added to a corner considered here
It is clear that the grey-level distribution is out of scope for these algorithms. Furthermore, at
least one more attribute, related to curve fitting, has to be added. I have decided to include this
attribute within the testing for sharpness, expecting the algorithm to assign vertices within that
curve section as well.
2.5.2 Performance comparison
Perhaps the most common measures used to evaluate polygonal approximation algorithms
are the integral square error (ISE) and the figure of merit (FOM). The integral square error
is defined as

ISE = Σ_{i=1}^{n} d_i²    (2.1)

where d_i is the distance between the approximation and the real contour at the i-th boundary
point, and n is the number of boundary points. The FOM is the quotient of the compression ratio (CR)
and the ISE. The CR is defined as follows:

CR = n / M    (2.2)
where M is the total number of vertices proposed for the polygonal approximation. Unfortunately,
CR and ISE are not well balanced when calculating the FOM (Rosin, 1997) and are therefore
unsuitable for images with different numbers of lines and for images at different scales.
Furthermore, analysing the ISE or CR by itself does not represent the entire approximation
performance. A good (low) value of ISE can easily be achieved by simply increasing the number of
vertices. On the other hand, by removing any of the vertices one can improve the CR.
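In code, the three measures are straightforward (a sketch; it assumes d_i is taken as the distance from each contour point to the nearest edge of the closed polygon, which is one common reading of equation 2.1).

import math

def point_segment_dist(p, a, b):
    # Euclidean distance from point p to the segment a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    L2 = vx * vx + vy * vy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / L2))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def ise_cr_fom(contour, vertices):
    # Equations 2.1 and 2.2: ISE, CR = n/M and FOM = CR/ISE.
    n, M = len(contour), len(vertices)
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))   # closed polygon
    ise = sum(min(point_segment_dist(p, a, b) for a, b in edges) ** 2 for p in contour)
    cr = n / M
    return ise, cr, (cr / ise if ise else float('inf'))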
(Rosin, 1997) makes a complete analysis concerning polygonal approximation assessment.
The results show that by combining two components, fidelity and efficiency, one can avoid the
problem of natural scales, and the measure is suitable for use when comparing images with different
numbers of lines. The merit of the algorithm is calculated as

Fidelity = (E_opt / E_approx) x 100%
Efficiency = (M_opt / M_approx) x 100%
Merit = sqrt(Fidelity x Efficiency) = sqrt((E_opt M_opt) / (E_approx M_approx)) x 100%    (2.3)

Here, E can take the form of the integral squared error E_2, the summation of absolute errors E_1,
the maximum deviation error E_∞, etc. The square root is a heuristic proposal by (Rosin, 1997).
Another method for comparing polygonal approximation performance is also reported by (Rosin,
1997): for several different algorithms the ISE is plotted against the number of vertices found, and
in order to have an optimal reference, results obtained from optimal algorithms are used, for
instance those obtained by (Perez and Vidal, 1994) based on dynamic programming run
for many values of M.
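Equation 2.3 in code (a trivial helper; E_opt and M_opt must come from an optimal algorithm, such as the dynamic programming of (Perez and Vidal, 1994), run at comparable settings).

import math

def rosin_merit(e_approx, m_approx, e_opt, m_opt):
    # Fidelity, efficiency and merit of equation 2.3, each as a percentage.
    fidelity = 100.0 * e_opt / e_approx
    efficiency = 100.0 * m_opt / m_approx
    return fidelity, efficiency, math.sqrt(fidelity * efficiency)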
2.5.3 Levels of compression
(Deguchi and Aoki, 1990) relate fitness and simplicity in a single criterion J:

J = Σ_i s_i + α (Σ_i l_i²) / (Σ_i l_i)²    (2.4)

where s_i is the area error in the i-th segment, l_i is the length of the i-th segment and α is a
parameter related to the form of the approximation polygon. The first term is related to the fitness
of the approximation and the second term evaluates its simplicity. Although this evaluation
technique is not popular, (Deguchi and Aoki, 1990) shows, as do the results of our algorithm, a
remarkable conclusion concerning the compression of an image: even if the parameters related to
compression are changed smoothly from high resolution to low resolution, one can detect in the results
only a few discrete changes. One can also appreciate this behaviour by analysing the CSS[8] of an
image (Mokhtarian and Suomela, 1999), in which at least three regions of different resolution
levels can be detected.

[8] See section 2.4.1 for further information.
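A sketch of the criterion, assuming the form of equation 2.4 as given above (the exact placement of the weighting parameter α in the original formula is this author's reading and should be treated as an assumption):

def deguchi_aoki_J(area_errors, lengths, alpha):
    # Criterion J as reconstructed above: sum(s_i) for fitness, plus
    # alpha * sum(l_i^2) / (sum(l_i))^2 for simplicity.
    total = sum(lengths)
    return sum(area_errors) + alpha * sum(l * l for l in lengths) / (total * total)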
2.6 Test Images
In order to be consistent in comparing results obtained with the proposed algorithm, a series
of test images was selected for use during testing and calibration. Most of these images were
taken from previous works on curve detection and polygonal approximation, due to the lack of
digital test images specified for this task. (Chetverikov and Szabó, 1999) based their comparison
with other well-known algorithms, (Rosenfeld and Johnston, 1973) and (Freeman and Davis,
1977), on test images also used previously by (Liu and Srinath, 1990). The images that have
remained in use the longest, more than 10 years, are those used by (Teh and Chin, 1989), for instance in (Huang
and Sun, 1998), (Rosin, 1997) and (Quddus and Fahmy, 1999). There are also many papers interested
in approximating letters and OCR,[9] for instance (Mikheev et al., 2001), (Reche et al.,
2002) and (Arrebola et al., 1997).
There are also popular images like the ones used by (Smith and Brady, 1995) and (Mokhtarian
and Suomela, 1999). Although these images can also be used for evaluation, they rely
on pre-processing phases, like the Canny edge detector (Canny, 1986) and thinning algorithms
(Kwok, 1992). Furthermore, several objects are obtained in one image, which might blur the results
when assessing performance across different algorithms. For this reason, I will avoid using
grey-scale images during calibration and comparison with other algorithms. Note, however,
that algorithms which work fast, like Malcolm's, can easily be applied to greyscale
images. Given that these types of images are composed of many objects, it is common
to require processing all of them within the same time as a single small image, therefore
requiring fast processing of many sub-images.
I am not interested in analysing the algorithm using only the criteria shown in section
2.5.2, but also those proposed by (Rajan and Davidson, 1989). Therefore, a number of additional
test images were included, those with the salient characteristics described in section 2.5.1.
Because of its enormous variety of contours, some of these images were taken from the squid-silhouette
database developed at the University of Surrey, UK. The fact that other papers, (Pinheiro
et al., 2000) and (Quddus and Gabbouj, 2000), show results applied to images taken from this
database allows us to have some insight into the performance of the proposed algorithm. Note
that these images are not exactly those proposed in section 2.5.1; nevertheless, some sections of the
chosen images contain the primitive contour shapes proposed by (Rajan and Davidson, 1989). It is
more reliable to use, during testing and calibration, images already employed by the scientific
community rather than images created by ourselves. However, for clarity, this thesis will use
images made by myself for demonstration purposes within some sections. These images will not
be used for testing or comparison.

[9] OCR: Optical Character Recognition.
2.7 Summary
Two approaches to approximating an object contour are most closely related to the
algorithm analysed here, namely corner detector algorithms and polygonal
approximation algorithms. Corner detectors focus on dominant points along the contour
depending on the curvature associated with each section of the boundary. Curvature is assigned
to a point by analysing only the pixels surrounding it on both sides. These pixels form what is
called the region of support. In many algorithms this is automatically adjusted while the
algorithm is running, and in others it is left as a pre-established parameter because it is related to
the compression of the object.
Polygonal approximation consists of capturing the essence of the boundary with a series of
straight lines while trying to keep the error between the original curve and the
polygon to a minimum. The main difference compared with corner detectors is the concern for also approximating
smooth curves or semicircular sections. The most popular methods share approaches
similar to the algorithm under study. IRM consists of recursively splitting a segment until
the maximum perpendicular error distance is below a threshold. In general, this is a good approach;
nevertheless, basing the entire process solely on it makes the method inefficient in
terms of time consumption. Another common proposal is the progressive scheme: the curve
is traversed starting from an initial point until the longest possible edge that lies within
a distance ε from the curve is obtained. Optimal algorithms consist of a continuous
error check between the original curve and the polygonal approximation whilst adding
or removing vertices. For the algorithm proposed by Malcolm, this is based on minimising the
number of vertices that are distant from the original curve by no more than a pre-specified value ε.
Two techniques for assessing polygonal approximation will be used. For the calibration
process and the comparison between the improved version and the original algorithm, the integral
square error (ISE), the compression ratio (CR) and the figure of merit (FOM) are the most
common measures. However, the most complete analysis is that proposed by (Rosin, 1997), which
combines two main components, fidelity and efficiency, into a single value
called merit, defined in equation 2.3. This approach is used for comparison with other
algorithms.
Images for testing were taken from previous works on curve detection and polygonal approximation,
due to the lack of digital test images specified for this task. By using the same images
as other papers, one can take advantage of their reported results, allowing a faster comparison
between many algorithms. For testing the criteria set out by (Rajan and Davidson, 1989),
most of the images were taken from the squid-silhouette database developed at the University
of Surrey, UK. However, for clarity, this thesis will use images made by myself for
demonstration purposes within some sections.
Chapter 3
Malcolm's Algorithm Review
Chris Malcolm proposed a simple polygonal approximation algorithm in (Malcolm, 1983),
which is also known as the Malcolm corner filter. It is not in fact a corner filter as defined
in section 2.3, because it is designed to maintain a deviation from the original curve of less than
a specific distance ε; therefore, it is better classified as a polygonal approximation algorithm,
as defined in section 2.4.[1] The main attribute of this algorithm, which makes it extremely fast, is
the low computational power required to run it. The most time-consuming mathematical computations
are additions and subtractions; the rest of the operations are register shifts and conditional
jumps. Furthermore, it is not an IRM[2] algorithm, and consequently each point within the curve is
considered at most twice.[3]
However, at the time this algorithm was proposed, none of the test techniques shown in
section 2.5 existed, therefore the algorithm had not been appropriately tested or
compared with similar algorithms. In addition, there is no clear evidence concerning the use
of parameters or the calibration proposed for the algorithm. While assessing the algorithm, it is
also important to consider how flexible the algorithm is when calibrating it for different tasks.
This consideration is important because even if the algorithm can perform well in certain
applications, it has to be easily adaptable to other types of images and tasks.
Section 3.1 presents how all the feature detectors of the original algorithm work. Next, sections
3.2 and 3.2.5 demonstrate deficiencies found in the behaviour of these feature detectors.
Solutions to the problems encountered are then proposed in section 3.3, and finally, section
3.4 proposes a calibration method with more intuitive parameters.

[1] It was called a corner filter because its original purpose was to be only a corner detector. Although new feature detectors were later added, the original project name remained.
[2] See section 2.4.1 for further information on iterative refinement methods (IRM).
[3] Further information can be found in section 3.2.4.
3.1 Operational Description
The algorithm has different feature detectors that can be identified separately, each of which
deals with certain aspects of polygonal approximation. The first is concerned with the accurate
detection of salient features on the contour; this is the part of the algorithm dedicated to corner
detection. The second is designed to detect smooth corners and curves; this is the part that
makes the algorithm a polygonal approximation. The third part of the code verifies that the
maximum deviation from the original curve does not grow beyond a pre-specified value
ε, which is also an important consideration for this type of algorithm.
3.1.1 Corner Detector
The algorithm operates on a chain of n coordinate pairs describing the contour. The k-th
coordinate pair will be referred to as point P_k, and the j-th vertex proposed by the algorithm
will be referred to as V_j, of a total of M vertices forming the polygon. The entire process is
based on a digital worm[4] of pre-specified pixel length L crawling around the contour just once.
The head and tail of the worm represent a directional vector, which indicates how the contour
is evolving as the worm travels around the boundary. If k is the number of coordinate pairs analysed so
far, then the head can be identified as P_k and the tail as P_{k−L}, and the directional vector w_k is defined
as

w_k = P_k − P_{k−L}    (3.1)

In order to detect changes in the contour pattern, a curvature sharpness vector CSV is defined as
the difference between the current directional vector w_k and the directional vector r samples
earlier, w_{k−r}. Malcolm proposed setting r = L in order to assign a single maximum value
to each corner.[5] A scalar value CS is then assigned to the curvature vector CSV by adding the
absolute values of its rectangular components:

CSV_k = w_k − w_{k−L}    (3.2)

CS_k = |CSV_x| + |CSV_y|    (3.3)

If CS_k rises above the pre-specified parameter ThresA, a maximum value is searched for until
CS_k < ThresA again; then M = M + 1 and V_M = P_{kmax−L}, where kmax is the index where the
maximum was found.

[4] The expression worm was first used in the original article on this algorithm (Malcolm, 1983).
[5] Note that by setting r = L, we are in fact working with a worm of size 2L. The middle of the worm can be considered as the reference from which both side directions are measured, almost identically to algorithms based on (Rosenfeld and Johnston, 1973). A mathematical proof can be found in appendix A.1.
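The corner detector can be written down directly from equations 3.1 to 3.3 (a Python sketch assuming the contour is a closed list of (x, y) pairs; for clarity the scan is shown as a plain loop with wrap-around indexing rather than the single-pass streaming form described above).

def corner_vertices(points, L, thres_a):
    # CS_k of equations 3.1-3.3 for every point of a closed contour, placing a
    # vertex at P_{kmax-L} for each run where CS exceeds ThresA.
    n = len(points)
    def w(k):                                  # directional vector, equation 3.1
        (hx, hy), (tx, ty) = points[k % n], points[(k - L) % n]
        return hx - tx, hy - ty
    cs = []
    for k in range(n):                         # equations 3.2 and 3.3
        (wx, wy), (ux, uy) = w(k), w(k - L)
        cs.append(abs(wx - ux) + abs(wy - uy))
    vertices, k = [], 0
    while k < n:
        if cs[k] > thres_a:                    # search the maximum of this run
            run = []
            while k < n and cs[k] > thres_a:
                run.append((cs[k], k))
                k += 1
            kmax = max(run)[1]
            vertices.append(points[(kmax - L) % n])
        else:
            k += 1
    return vertices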
3.1.2 Curve Detector
If the worm is moving over a smooth corner or a curve section, the value CS might not necessarily
become greater than the threshold ThresA. Therefore, curve sections cannot be detected
by using CS by itself. (Malcolm, 1983) proposed keeping an accumulator vector AccV for the
curvature, which indicates whether the worm has been turning for a certain amount of distance.
This accumulated value is defined as

AccV_k = AccV_{k−1} + CSV_k    (3.4)

Acc_k = |AccV_x| + |AccV_y|    (3.5)

If Acc_k is bigger than the pre-specified threshold ThresB, a new vertex is added to the polygon:
M is increased and V_M = P_k. The vector AccV_k is reset to (0, 0) after any vertex is added to the
polygon, including when a vertex is added according to section 3.1.1.
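The accumulator of equations 3.4 and 3.5 adds only a running vector sum on top of the same worm (a sketch under the same assumptions as the previous block, shown in isolation from the corner detector for clarity).

def curve_vertices(points, L, thres_b):
    # Accumulate CSV_k (equation 3.4); emit a vertex and reset whenever
    # Acc_k = |AccV_x| + |AccV_y| (equation 3.5) exceeds ThresB.
    n = len(points)
    def w(k):
        (hx, hy), (tx, ty) = points[k % n], points[(k - L) % n]
        return hx - tx, hy - ty
    acc_x = acc_y = 0
    vertices = []
    for k in range(n):
        (wx, wy), (ux, uy) = w(k), w(k - L)
        acc_x += wx - ux                       # AccV_k = AccV_{k-1} + CSV_k
        acc_y += wy - uy
        if abs(acc_x) + abs(acc_y) > thres_b:
            vertices.append(points[k])         # V_M = P_k
            acc_x = acc_y = 0                  # reset after any vertex
    return vertices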
3.1.3 Deviation Control
The Malcolm algorithm performed well enough to be considered a polygonal approximation
by considering only sections 3.1.1 and 3.1.2. Nevertheless, curve detection fails
when the curve is too smooth and long, because it detects only how much the worm has been
turning since the last vertex and does not consider the distance travelled so far.[6] Therefore, big
deviations from the original curve are missed by the criterion proposed in section 3.1.2, and
evidently by those proposed in section 3.1.1.
The solution proposed by Malcolm for deviation control is to add a new evaluation criterion
measuring how far the approximation is separated from the original curve. If P_q = V_M,
then P_{(k+q)/2} is the point on the original curve halfway between the current worm head position
P_k and the last vertex set, V_M. The exact geometric midpoint of the chord between these two
is (V_M + P_k)/2. The deviation for this segment and its representative scalar are defined
as

DevV = P_{(k+q)/2} − (V_M + P_k)/2    (3.6)

Dev = |DevV_x| + |DevV_y|    (3.7)

If Dev is above the pre-specified threshold ThresC, a new vertex is considered at P_{(k+q)/2}, M is
increased and AccV_k is reset.

[6] Proof of this can be found in section 3.2.3.
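The deviation check of equations 3.6 and 3.7 costs one lookup and a handful of additions per test (a sketch; it assumes q < k with no wrap-around, and the halfway index (k + q)/2 follows the reconstruction of equation 3.6 above).

def deviation(points, k, q):
    # Dev of equations 3.6 and 3.7 for the segment from the last vertex
    # V_M = points[q] to the current worm head points[k].
    mx, my = points[(k + q) // 2]                      # contour point halfway along
    vx, vy = points[q]
    hx, hy = points[k]
    cx, cy = (vx + hx) / 2, (vy + hy) / 2              # midpoint of the chord
    return abs(mx - cx) + abs(my - cy)

# If deviation(points, k, q) > ThresC, a new vertex is set at points[(k + q) // 2]
# and the accumulator AccV is reset.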
3.2 Parameters Survey
Simplicity and low mathematical complexity in this algorithm produce substantial disadvantages.
Malcolm did not propose any specific method or default values for calibrating the algorithm,
nor did he make references to the behaviour of the algorithm for different threshold
values. Its difficult calibration procedure is its most critical disadvantage: although for
almost all kinds of images the algorithm can perform highly efficiently and accurately, once one
takes into account the time expended calibrating the algorithm, it becomes highly inefficient.
Furthermore, the original algorithm also has some design omissions, which evidently have
to be attended to. For instance, the corner detector tends to amalgamate close corners into one.
Also, its calibration process has to incorporate rotational dependency, which currently causes
different results depending on the image orientation. The curve detector does not distinguish
between arcs with different radii, and it is highly dependent on the direction of the worm along
the contour. The deviation error could also be improved in some respects, such as reducing the
number of times error control is required.
3.2.1 Worm Length
It is possible to find some reasonable relationship between the parameters and obtain acceptable
results if the optimum is not required. The first attempt to assist calibration of the algorithm was
to relate all parameters to one intuitive parameter with a more easily understandable function. I
decided to use the length of the worm, L, as the base for the calibration process. Heuristic results
obtained after several runs on different images showed that, by setting the following thresholds
with respect to the length of the worm, one can obtain roughly good results most of the time:

ThresA = L/2
ThresB = L²/2
ThresC = 3L/2    (3.8)

ThresC need not depend on L, although more precision is expected to be needed as the size of the worm
decreases; it is also possible to leave that threshold fixed. Calibrating the original algorithm by
relating the rest of the parameters to the length of the worm might partially solve the difficulty
of calibration. Nevertheless, it is far from solving the polygonal approximation task, even if the
calibration is focused on a different parameter. This is because we are restricting the efficiency
E(θ) of the algorithm to values that lie only on a hyperline parameterised by the components θ_i.[7] It is
highly improbable that good results lie only along one specific direction without taking into account
other aspects. Results showed instability for images under motion and scale changes, i.e. small
variations of the object could lead to completely different results.
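The heuristic of equation 3.8 is trivial to encode (the values are the rough defaults reported above, not a definitive calibration).

def heuristic_thresholds(L):
    # Rough default thresholds of equation 3.8, all tied to the worm length L.
    return {'ThresA': L / 2, 'ThresB': L ** 2 / 2, 'ThresC': 3 * L / 2}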
The length of the worm is related to the region of support discussed in section 2.3.1, because
the sharpness or curvature of a section depends on how many pixels are used to determine it;
in the case of this algorithm, the region of support corresponds to 2L.[8]

[7] Here all parameters are expressed by the vector θ, whose i-th component could be any parameter.
[8] Proof of this can be found in appendix A.2.

Features of the image are detected
depending on their relationship to the length of the worm. Similar to the CSS discussed in section 2.4.1 on page 7, the worm identifies different features at different scales. Results are sometimes wide-ranging and unpredictable. For small worms, one can find sharp features as well as noise. For big worms, the contour is relatively smooth; nevertheless, soft curves are sometimes detected as corners because of scale effects. In addition, corners close to each other are treated as one, while others are ignored. Trying to avoid this behaviour would be useless, because it is an inherent property of many polygonal approximation algorithms and of the human vision system (HVS).
Figure 3.1: CS plots of the image shown on the top left. Top right: L = 5; bottom left: L = 10; bottom right: L = 25. Features labelled on the image are also indicated on the plots.
In order to demonstrate how the length of the worm affects its perception of the image, the object shown in the top left corner of figure 3.1 was analysed using different length values. This image was created because it contains most of the basic elements we are interested in detecting. To the left of the object, the regions of support corresponding to each of the worm lengths are shown; in addition, each of the curvature sharpness CS plots indicates which worm length was used. These plots show how some features are enhanced while others are diminished depending on the worm size. For instance, the curve sections indicated as 'a' are almost confused with a straight line, represented by section 'b', when the length of the worm is L = 5, whereas
when L = 25 the worm has to twist in order to deal with these curves, very much as at a corner. Note, in the bottom right plot, how corner 'c', which is at 100 degrees, has almost the same reading as these curve sections, labelled as 'a'.
The feature labelled as 'd' represents an enormous change in direction for a worm of length L = 5, in this example almost equal to any corner within the image. However, when L = 25 this same feature is detected almost as a single bump, clearly reduced to less than 20% of its reading at L = 5. Finally, it is also important to note how noise is increasingly ignored as the size of the worm grows. This is illustrated by section 'b', which is almost 12% of the highest peak when L = 5, and only 2% when L = 25.
I believe the length of the worm should be considered a parameter for controlling feature detection, not a parameter for controlling compression and efficiency. Therefore, I recommend not adjusting this parameter in order to obtain the desired compression. I consider that the length of the worm has to be set once the size of the image and some insight concerning the required resolution are known, with the rest of the parameters then used for setting resolution and compression, instead of trying to control everything by changing the size of the worm. Leaving this parameter constrained to a specific range of values, and knowing its effects on the worm algorithm, will be enough to aid calibration.
3.2.2 Corner Detector Threshold
In order to be able to calibrate the algorithm to detect corners correctly, and to deal with problems such as the amalgamation of close features into one, it is first necessary to know the relation between the curvature sharpness CS seen in section 3.1.1 and the features of the image. Furthermore, it is also necessary to establish why the algorithm behaves inappropriately with some images. Once this is done, it is easier to propose solutions for the problems this feature detector has, and to propose a calibration method based on more intuitive parameters.
Curvature sharpness CS for corner detection depends on the size of the worm, as seen in equations 3.1 and 3.2. Therefore, threshold ThresA also has to be adjusted when the size of the worm is changed. I show in appendix A.1 that the curvature sharpness is proportional to the size of the worm at the intersection of two straight lines with an angle α between them, as illustrated in figure 3.2. The relationship is as follows:

Γ(α, β) ≡ |sin(β) − sin(α + β)| + |cos(α + β) − cos(β)|   (3.9)

CS = kΓ(α, β);   k = 1 … L   (3.10)

where k is the position of the head while crawling through the section; the complete reading of a single corner takes k = 2L steps, the first half increasing with slope Γ(α, β) and the second half decreasing with slope −Γ(α, β). The angles α and β are fixed for each feature of the image, therefore CS ∝ L.
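To make the orientation dependence concrete, the following Python sketch evaluates equations 3.9 and 3.10 numerically (illustrative code only; the names are not from the thesis):

    import numpy as np

    def gamma(alpha, beta):
        # Slope of the CS ramp at a corner of opening angle alpha, approached
        # with initial worm orientation beta (equation 3.9).
        return (np.abs(np.sin(beta) - np.sin(alpha + beta))
                + np.abs(np.cos(alpha + beta) - np.cos(beta)))

    alpha = np.radians(60)                   # a 60-degree corner
    beta = np.linspace(0.0, np.pi / 2, 500)  # all initial worm orientations
    g = gamma(alpha, beta)
    L = 25
    print("peak CS, best case:", L * g.max())    # CS max = L * Gamma (eq. 3.10)
    print("peak CS, worst case:", L * g.min())
    print("max/min ratio:", g.max() / g.min())   # about sqrt(2): not rotation invariant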
Figure 3.2: Corner detection case, definition of angles.
According to section 3.1.1, the vertex is considered to be at P_{kmax − L} when CS > ThresA. One has to be careful with this approach for detecting vertices, because equal angles are not always detected in similar ways, owing to their orientation dependency. For instance, a corner with angle α = 50 degrees could have a maximum value 20% lower when β is close to 70 degrees. Therefore, if this decrease happened to fall below the threshold, a corner would be missed.
Equation 3.10 indicates that the original algorithm is not rotationally invariant, ∂Γ/∂β ≠ 0, i.e. Γ depends on β, which is the orientation of the initial worm direction before the corner. The function Γ(α, β) is plotted in figure 3.3. The x-axis corresponds to variations in β and the y-axis to variations in α. In order to estimate the variations caused by rotation, one also has to consider the size of the worm. For instance, α = 60 degrees corresponds to 1.04 radians, or 104 centiradians in figure 3.3. The minimum is reported when β ≈ 57 centiradians; therefore the minimum threshold has to be set to ThresA = 1.2L.

Figure 3.3: Variations caused by image rotation. The x-axis corresponds to angle β = 0 … π/2 and the y-axis to angle α = 0 … π/2. Angles correspond to figure 3.2. Values of the contour levels increase by 0.2 at each step from left to right.
However, the calculations shown so far are only analytical. The worm moves in a discrete digital world and there are other considerations which one has to take into account when analysing its behaviour. In order to compare the analytical results shown in appendix A.1 with the actual values of the worm in the digital domain, I ran an equal experiment for both methods, using an angle α = 30°. Figure 3.4 shows these results. One can see that the main behaviour is predicted, although in this case the real measure reports values four units above the analytical prediction. Both approaches agree that responses to the corner are most prominent when the angle β is at 30 degrees for α = 30°, reaching a maximum value of 21 for the digital result and 18 for the analytical prediction. When β = 75°, the maximum value is CS ≈ 11, which is almost 50% of the maximum reached at β = 30°. It is also important to note that the digital result resembles a circle near the maximum value, whereas the analytical result is more elliptical and smooth. As a result, digital measurements react more abruptly to changes in the angle β. For big angles, this rotation dependency does not represent a serious threat. The problem is with small angle values, and with calibration done for images that are not meant to be static all the time. However, this problem can be solved if an appropriate calibration process is done carefully.

Figure 3.4: The analytical result for equation 3.10 on the left, versus the real digital result on the right; α = 30° and L = 25. The x-axis represents the position of the head while dealing with the curve, k = 0 … 2L; the y-axis represents the angle β = 0 … π/2.
Nevertheless, the algorithm fails to respond correctly in more complex situations, such as when two or more corners are close together and the size of the worm allows it to perceive them as distinct features. Peaks in the plot are then likely to appear as bimodal or polymodal crests. If the algorithm is programmed to find only one maximum value, then close features are going to be treated as one. This is related to a perception problem caused by the size of the worm, because the two features share their regions of support. Therefore, it is not a problem with the algorithm, but with calibration. On the other hand, it is a situation
likely to happen, and therefore has to be attended to.
Figure 3.5: By finding only one maximum, close features are treated as one by the original algorithm. The object under analysis is shown on the left hand side; its curvature sharpness is shown on the right hand side. The size of the worm is L = 10. Main features of the object are labelled and indicated in the plot according to their position.
Figure 3.5 shows some cases where corners that are close together are represented as a main crest with many inner peaks. All four corners corresponding to feature 'b' can end up being treated as if they formed a triangle, because the two forming the top of the rectangle are too close to be detected as separate corners. If the threshold is increased in order to detect them as two, then the two corners labelled with 'd' are not going to be considered. Furthermore, if the threshold is ThresA = 5, then the small triangle labelled with letter 'a' will be detected just as if it were only one corner, even though it is clearly composed of three. The size of the worm does affect how peaks are combined; for instance, even if the algorithm detects all local maximum values, it is not going to detect seven corners in region 'e'. It is also acceptable that, for this size of worm, feature 'c' is considered to be formed by only three corners, because its behaviour is very similar to that caused by noise, and also the platform on the top of the peak is shorter than the size of the worm. However, features 'a', 'b', 'c' and 'g' are clearly errors. Note also that peaks on feature 'd' have flat tops; therefore, the original algorithm will locate the corner to the left or to the right of that section. According to this example, the correct position is in the middle of that zone; however, for other shapes, this consideration varies.
The analytical result presented in equation 3.10 and the result shown in figure 3.4 indicate that the maximum of the function is always at k = L, i.e. just when the tail of the worm leaves the corner. Even when Γ(α, β) is not fixed,

∂CS(α, β, k)/∂k |_{k=L} = 0

Nevertheless, further work is necessary in order to include as many local maximum peaks as possible corners, because proximity to other features, represented here as polymodal peaks, is not considered by the original algorithm. Furthermore, now that an approximate relation between image features and curvature sharpness has been detected, it is easier to solve the problems related to calibration and rotational dependency. A possible solution for these problems is presented in section 3.3.
3.2.3 Curvature Threshold
The Malcolm algorithm (Malcolm, 1983) was designed to divide circle arcs considering only the amount of angular motion, ignoring effects caused by different radii. This leads us to treat all circles as n-gons, if n can be approximately related to the curve accumulator threshold. However, based on the definitions presented in section 2.4, which consider a polygonal approximation as the essence of the boundary shape with the fewest possible vertices (Huang and Sun, 1998), the approach presented in (Malcolm, 1983) is not entirely suitable for this task. Furthermore, it is also demonstrated within this chapter that it is highly dependent on the direction which the worm takes along the contour. As with the corner detector, it is necessary to find a relation between the features one wants to detect and the variable responsible for that task. Consequently, the cause of each problem encountered has to be exposed and analysed in order to propose solutions in later sections.
The relation between the length of the worm L and the accumulator value Acc is not linear, unlike the corner detector variable in section 3.2.2. Appendix A.2 shows that the accumulator is expressed as

Γ(α, β) ≡ |sin(β) − sin(α + β)| + |cos(α + β) − cos(β)|   (3.11)

Acc_k = L² Γ(α, β);   L ≪ Arc   (3.12)

where Arc is the total arc length, β ≡ k_ini/R, and α ≡ (k_end − k_ini)/R. The index k_ini is where the accumulator starts and k_end is where the accumulator finishes with respect to the arc of a circle. Other variables used are based on figure 3.6. Equation 3.12 not only proves that the accumulator does not depend linearly on the length; it also demonstrates that, whenever L ≪ Arc holds, the accumulator only detects the range of angular movement regardless of the size of the arc. It also demonstrates that, for each curve, it varies at different rates depending on the orientation of the arc with respect to the main axis, which can be related to k_ini, or β.
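A quick numerical check of equation 3.12, under the small-worm assumption L ≪ Arc, shows the radius dropping out (illustrative Python, not thesis code):

    import numpy as np

    def acc_estimate(L, R, k_ini, k_end):
        # Accumulator reading predicted by equation 3.12 for a circular arc of
        # radius R entered at contour index k_ini and left at k_end.
        beta = k_ini / R
        alpha = (k_end - k_ini) / R
        gamma = (abs(np.sin(beta) - np.sin(alpha + beta))
                 + abs(np.cos(alpha + beta) - np.cos(beta)))
        return L ** 2 * gamma

    L = 10
    for R in (50, 120, 200, 320):        # the radii used for figure 3.7
        arc = (np.pi / 2) * R            # a quarter-circle arc
        print(R, round(acc_estimate(L, R, 0.0, arc), 2))
    # every radius prints the same value: Acc sees angular motion only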
Figure 3.6: Variable definitions for the accumulator parameter.
The plot of the variable Γ(α, β) is the same in shape and values as figure 3.3 for angles in 0 … π/2. The x-axis represents the radial distance travelled by the worm, i.e. (k_end − k_ini)/R, and the y-axis represents the position, measured in radians, at which the worm starts crawling along the curve section, i.e. k_ini/R. The difference in this case is that the error caused by the orientation is proportional to the square of the size of the worm. In order to prove these results, the object shown in the top left corner of figure 3.7 was created. Basically, the analysis is focused on four circular arcs, three of them spanning α = π/2, while the one at the bottom spans α = π/4. The radii of the circles are 50, 120, 200 and 320. Arcs are separated by straight line segments with the intention of finding them more easily when analysing the accumulator plot for this object. Also, if the corner detector finds a corner where an arc begins, the accumulator resets, aiding visual localisation. Unfortunately, in real objects there is not always a corner just before or just after the curve. With the purpose of showing what might happen in these cases, section 'd' has a smooth entrance for the worm, so it does not detect a corner when L ≥ 10.
Figure 3.7: The accumulator only detects variation in angular motion. Top left: the object under analysis. Top right: the accumulator plot for worm size L = 5. Bottom left: the Acc plot for L = 10. Bottom right: the Acc plot for L = 15.

Equation 3.12 indicates that the accumulator Acc does not depend on the radius of the circular section. In agreement with this analytical prediction, one can see in figure 3.7 that Acc reaches very similar values for the section labelled 'a' and for the section labelled 'b', even though their radii differ by a factor of 4. If a circle arc with angle α and radius R is divided into σ segments, then the maximum perpendicular distance δ_max from the approximation to the original circle can be calculated as follows (proof of this can be found in appendix A.4):

δ_max = R [1 − cos(α / (2σ))]   (3.13)

In order to keep δ_max within limits, it is simple to prove that the number of segments σ has to increase as the radius of the circumference increases. As a consequence, if one of the tasks of a polygonal approximation algorithm is to keep the distance δ below a maximum value, then the algorithm has to assign more segments to big circular sections than to small ones. The curve accumulator of the original algorithm does not provide the necessary information to distinguish between different radii; therefore, this error is compensated by the deviation error. However, the next section demonstrates that it is more convenient to detect fewer vertices by means of the deviation error. Consequently, I believe that proposing as many curve vertices as possible with the curve detector itself is the appropriate approach to follow. With a fixed threshold, as in the original algorithm, features like 'a', 'b' or 'c' in figure 3.7 are going to be divided into the same number of segments. This compromises two main aspects of a polygonal approximation algorithm: the use of few points, which in the case of section 'b' might not be achieved; and the closeness of the approximation to the original contour, which in the case of section 'a' may not be possible. (This result might be desired for some tasks. However, note that by calculating the absolute value, problems with inflection points are missed by the original algorithm, because opposite turns cancel each other.)
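Equation 3.13 can be inverted numerically to show how quickly the required number of segments grows with the radius; the helper below is an illustrative sketch only:

    import math

    def max_deviation(R, alpha, sigma):
        # Equation 3.13: worst-case perpendicular distance when an arc of radius
        # R and angle alpha is replaced by sigma equal chords.
        return R * (1 - math.cos(alpha / (2 * sigma)))

    def segments_needed(R, alpha, delta_max):
        # Smallest sigma keeping the deviation below delta_max.
        sigma = 1
        while max_deviation(R, alpha, sigma) > delta_max:
            sigma += 1
        return sigma

    for R in (50, 120, 200, 320):                    # radii from figure 3.7
        print(R, segments_needed(R, math.pi / 2, 1.0))
    # prints 4, 7, 8, 10: larger radii need more chords for the same delta_max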
Figure 3.8: Left: the accumulator plotted versus angle α = 0 … 2π, with R = 100 and L = 10. Right: complete analysis of Γ(α, β), varying the initial position β = 0 … π/2 (indicated on the x-axis); α is on the y-axis.

It is also very important to note the shape that each of the semicircles has in the curve accumulator plot: it starts decreasing before leaving the curve, even though the worm is still turning in the same direction. This severe distortion occurs when α > 3π/4 and β ≈ hπ/2, h ∈ ℤ. The mathematical prediction for this behaviour is shown on the left hand side of figure 3.8, represented by the first half of the graph, α < π, and is very similar to peaks 'a', 'b' and 'c' in figure 3.7. On the right hand side is the complete plot for α = 0 … 2π and β = 0 … π/2. It is unlikely that the current algorithm would have to wait until α > 3π/4 when R ≫ 1 before placing a vertex; therefore, this distortion is practically irrelevant for the original algorithm.
So far, negative aspects concerning the accumulator have been sustained by mathematical proofs. Nonetheless, there are behaviours that have to be analysed directly from the results obtained while testing the algorithm. The feature labelled as 'd' in figure 3.7 does not have a mathematically predictable shape when L > 10, at least at first sight. The accumulator plot for L = 5 shows feature 'd' as an upward triangle, whereas in fact it is a downward triangle because of the direction of the turn; the absolute value flips it to the positive direction. It starts at zero because of the reset caused by the corner detector, at the star symbol '*'. However, when L = 10 there is no reset at that point, and the worm detects an abrupt left turn before the smooth right turn caused by the circle arc. This is represented in the plot as a sudden increase of the accumulator in the positive direction. When the worm finally gets within the arc, the accumulator starts decreasing just as in the case of L = 5, but now starting from the peak. At that point the same downward triangle is plotted as in the previous case; nevertheless, now only the second half has been transformed by the absolute value. This behaviour seen in section 'd' reveals another important disadvantage of this approach: the absolute value of the accumulator does not represent how much the worm has been turning, but only the absolute value of the slopes in the plot. Slope information is already available in the CS function; unfortunately, CS has a noisier behaviour than the accumulator Acc, so it is more reliable to keep using the accumulator.
We have seen in this section that the curve accumulator in the original algorithm does not consider the radius of the circle along which the worm is crawling. Therefore, if one wants to constrain error with fewer vertices, it is not possible to propose a calibration method based only on the information provided so far. Moreover, if one wants circles approximated as n-gons, it is also necessary to avoid calculating the absolute value of Acc, because in most cases there is no accumulator reset at the beginning of the curve, and opposite curves obscure some of the results extracted from the curve accumulator. By considering how the algorithm locates vertices as it goes, it is evident that results depend on the direction of the worm along the contour. Properties found in this section, such as the relation between the total angular motion and Acc, are going to be used for proposing modifications to the present algorithm in section 3.3.
3.2.4 Deviation Threshold
One can easily verify in equation 3.6 that the deviation control error does not depend on the size of the worm. Consequently, one can choose threshold ThresC to constrain the deviation error to be lower than a perpendicular distance δ, and calibrate the rest of the parameters, including L. As already shown in previous sections, calculating the rectangular distance of a vector, instead of the Euclidean distance, makes the scalar variable rotationally dependent. If the deviation threshold is ThresC = δ, then for deviations at slopes of hπ/4 radians (h ∈ ℤ) it is in fact going to allow only δ/√2. I would recommend a mean value, approximated as

ThresC = 6δ/5   (3.14)

This approximation is relevant only for big worm lengths, for instance above ten, because for smaller values it is almost the same, i.e. ThresC ≈ δ. However, it is important to remember that a low value for the deviation control leads to the detection of more vertices by this means, and this interferes with the development of the other feature detectors. Perhaps the most affected could be the curve detector, because frequent resets of the curve accumulator inhibit its performance. Therefore, it is good practice to set values even as high as ThresC ≈ 3δ/2. It is of course a decision for the user which value to set according to the implementation of the algorithm.
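The effect of the rectangular distance is easy to verify numerically (an illustrative sketch; names are mine):

    import math

    def rect_dist(dx, dy):
        # Rectangular (city-block) distance used in place of the Euclidean
        # distance: cheap to compute, but rotation dependent.
        return abs(dx) + abs(dy)

    # A deviation of true (Euclidean) size 1 measured at different slopes:
    for deg in (0, 22.5, 45):
        a = math.radians(deg)
        print(deg, round(rect_dist(math.cos(a), math.sin(a)), 3))
    # 0 deg -> 1.0, 45 deg -> 1.414: with ThresC = delta the detector fires at
    # a Euclidean deviation anywhere between delta/sqrt(2) and delta, hence
    # the mean-value correction ThresC = 6*delta/5 in equation 3.14.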
Figure 3.9: If δ is below the deviation threshold ThresC, it is never going to be detected, because at P_k there is a corner and no further deviation checking is done in that zone.
Failure of deviation error detection happens when the difference is located in the second half of a section with inflection points. Figure 3.9 shows a corner about to be detected a couple of pixels after the current position P_k; nevertheless δ < ThresC, and no vertex is proposed. Therefore the last vertex will be moved close to P_k. As a consequence, the deviation error δ is not going to be noticed, and a huge error between the original contour and the polygonal approximation will be allowed.
Deviation error offers an excellent alternative for constraining the error between the polygonal approximation and the original contour. Nonetheless, it increases the amount of time required to process the entire image from O(n) by up to 50%, because coordinate pairs are used twice during the first section between vertices. If this time could be reduced or removed, then other improvements could be made without compromising the original computational economy of the algorithm.
3.2.5 Parameter Interactions
So far, algorithm performance has been analysed only by treating its parameters separately. It is also important to consider how these parameters interact with each other while processing a contour, because once a vertex is proposed, some of them change their reference for measurements. For instance, deviation δ depends on V_M to calculate the error distance, and the accumulator Acc is reset whenever a vertex is added to the polygon. Therefore, the sequence in which each parameter proposes a vertex affects the results of the approximation. This also implies that, if the contour is analysed in the opposite direction, i.e. from end to beginning, the result might not be the same.
It is not possible for the algorithm to know which vertex is better in each situation, because it does not have information about further points. It is quite possible that the accumulator sets a vertex just before a corner, and that a corner vertex is subsequently set almost in the same place. In this situation, we believe that the corner vertex is more important than the accumulator vertex, as explained in section 2.4. Nevertheless, it is not clear what the algorithm should do with the vertex proposed by the accumulator. The simplest solution is to leave both vertices just as proposed; the consequence is redundant points and ineffectiveness. An alternative option could be to delete the accumulator vertex, but then the ISE will increase because the algorithm is ignoring the error accumulated so far. Relocation of vertices could be more appropriate; in fact, some of my first attempts included keeping a buffer of previous points for further examination. Nevertheless, the algorithm also has to consider moving previous vertices related to the one it is moving or deleting, otherwise the same problem will simply reappear in another zone without being solved. Furthermore, if the algorithm is going to re-analyse curve or deviation vertices every time a corner has been set, then perhaps it is more appropriate to gather information between corners, and then propose vertices just once.
Section 3.2.2 explained how a corner is proposed when CS falls back below ThresA, not when CS ≈ 0. This means that the accumulator will still be detecting twist in the worm caused by a feature that has already been considered. (Malcolm, 1983) realised this problem, and proposed a period of grace after the reporting of a point, based on a priority system. Although this period is not specified, it has to be long enough not to include the feature within the region of support; to be sure of this, it has to be set to at least L pixels after the corner. However, as seen in section 3.2.2, corners are not always found at the same distance with respect to the head. This could also lead to ignoring accumulated error after the corner, which is almost as harmful as ignoring it before. An alternative method that I tried consisted of setting vertices before and after a corner if the peak above threshold was wide enough; the distances between the maximum value of CS and the first and last crossings of threshold ThresA were considered for this purpose. Nevertheless, this requires adding more thresholds that are not directly related to image features. I realised that this kind of threshold has in fact more inconveniences than benefits, most of the disadvantages being related to the time required by calibration. Consequently, most of these alternatives were abandoned when their thresholds were not intuitive for polygonal approximation tasks.
Accumulative curve and deviation error are parameters that depend on the direction of the worm along the contour. Whenever all the parameters behave properly and the image is not too complex, the final global assessment of the polygonal approximation is very similar, for instance its ISE or FOM. Nevertheless, the polygons themselves are different, and almost all recognition techniques would fail to classify the desired object correctly. Corner detectors are not as affected as accumulative curve and deviation error; however, vertices proposed by this approach are located on the other side of the corner at almost the same incorrect distance. Consequently, not only does the algorithm depend on the sequence and interaction between its parameters, it also depends on the direction of the worm.
According to what was shown in this section, although each of the feature detectors within the algorithm works on the detection of different characteristics, all of them have to be logically related in order to provide a coherent result; for instance, avoiding setting irrelevant vertices close to those located by the corner detector. The main aspect to keep in mind is the priority each of these feature detectors has: the most important is the corner detector, then the curve detector, and finally the deviation control error. Furthermore, if calibration is required to be controllable, then thresholds must have some relation to image features.
3.3 Algorithm Development
As already demonstrated in sections 3.2 and 3.2.5, the Malcolm algorithm not only has problems with calibration, it also has misleading defects. These are the problems I consider important to solve, provided that their solutions are feasible and can be constrained to the main philosophy of the algorithm, which is low computational power and high speed.

1. Corner detector
   - No rotation-invariant relationship between the corner angle α and the threshold.
   - No relationship between ThresA and the length of the worm L.

2. Curve detector
   - Results are rotationally dependent, ∂Acc/∂k_ini ≠ 0.
   - Results depend on worm direction.
   - The accumulator does not provide enough information to constrain error with an approximately optimal number of points, since ∂Acc(α, β)/∂R = 0.

3. Deviation error
   - Results are rotationally dependent.
   - The extra number of calculations caused by this parameter is ≈ n/2.

4. No calibration proposal available.

These problems could easily be solved by including operations such as multiplication and division. This opens the possibility of calculating all types of mathematical functions: square roots, trigonometric functions, polynomial approximations, etc. As a result, calculations could be precise and results might be improved. Nevertheless, the algorithm's main, or possibly only, processing advantage over the other algorithms would then be severely reduced or even completely lost. Therefore, it is important to keep using simple computational operations. It is unfeasible to solve these problems without investing more time, but I expect to compensate for this extra processing time by reducing the extra computational operations used by deviation, which, as explained in section 3.2.4, currently takes almost 33% of the total time.
3.3.1 Corner Detector Improved
The corner detector as proposed in (Malcolm, 1983) already gives good results. However, it can be improved to deal more appropriately with noise and with the amalgamation of close features explained in section 3.2.2. Furthermore, a more intuitive calibration process, which considers rotation of the object, is needed. This section presents an improved version of the corner detector, based on the results obtained in section 3.2.2 and on the test images shown within this section.

A threshold for the corner detector can be proposed in many ways, for instance using test images to find useful values and then scaling them using the results shown in section 3.2.2. However, equation 3.10 gives a good estimate of the relation between the maximum of the CS value and the angle α of the corner one might want to find. In order for the algorithm to find this corner, it has to ignore the initial direction of the worm, i.e. the angle β. This can be done by finding the minimum possible value of Γ depending only on the angle α, because for the rest of the angles this value will be larger. In appendix A.3 it is shown that, for this to happen, the angle β has a linear relationship with respect to the angle α, expressed as

β_min(α) = (π − α)/2   (3.15)

The corresponding minimum of the curvature sharpness slope is then

Γ_min(α) = 2 sin(α/2)   (3.16)

Once having this equation, it is straightforward to propose a value for ThresA: one just has to decide which angle α_min one is interested in detecting, and decide an appropriate length for the worm based on image features (see section 3.2.1 for further information). The threshold is calculated as

ThresA = 2L sin(α_min/2)   (3.17)
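As a sketch of how equation 3.17 would be used (the optional μ scaling follows the smoothed-CS adjustment discussed later in this section; the helper name is mine, not from the thesis):

    import math

    def corner_threshold(L, alpha_min_deg, mu=None):
        # ThresA from equation 3.17: the smallest CS peak produced by a corner
        # of opening angle alpha_min, over all worm orientations. If a
        # smoothing mask of size mu is used, the threshold scales by mu.
        thres = 2 * L * math.sin(math.radians(alpha_min_deg) / 2)
        return mu * thres if mu else thres

    print(corner_threshold(10, 60))        # plain threshold, L = 10
    print(corner_threshold(10, 60, mu=5))  # smoothed CS, mask size mu = L/2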
However, there is still the problem of missing features when they are too close to each other. Corner detector problems can be solved theoretically (ignoring noise effects and discrete pixel movements) in a simple way: finding all local maximum peaks in the curvature sharpness function. I have already shown mathematical reasons and examples that support this approach in section 3.2.2. However, practical results are not as straightforward as the theory alleges. As seen in figure 3.1, the curvature sharpness has a highly noisy behaviour, inversely proportional to the worm size; one can find at least a hundred maximum values within a plot for worm length L = 5.

The search for a maximum starts once the curvature sharpness is above threshold, and terminates when the sharpness is below threshold. This partially solves the problem by ignoring noise caused by straight lines and curves, which most of the time have low absolute values of sharpness. Nonetheless, there is still a problem with noise above the threshold, which appears on curved corners or when the length of the worm is small. Applying a Gaussian convolution to the function would be against the principles on which the algorithm is based: simple computational power and high speed. Nevertheless, I believe that in order to find reliable maxima in each peak, a smoothing process has to be considered for noisy images or small worms. If the data has to be smoothed, I suggest convolving it with a mask of ones of size μ = L/2, which amounts to calculating the average of the surrounding values, scaled up by a factor of μ. It is not necessary to convolve the entire data in each step, only the last pixels visited. The smoothed curvature sharpness is calculated as

SCS_k = Σ_{i=k−μ/2}^{k+μ/2} CS_i   (3.18)

This procedure needs to readjust the position of the corner with respect to the maximum value found by μ/2 pixels, because the smoothed version of the sharpness cannot be calculated for position P_k until position P_{k+μ/2} has been reached. (Note that the CS function is scaled up by μ; therefore the threshold also has to be adjusted, ThresA = 2μL sin(α_min/2).)
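A minimal Python sketch of the running-window smoothing in equation 3.18, using the two-additions-per-pixel update mentioned below (illustrative code, not the thesis implementation):

    def smooth_cs(cs, mu):
        # Running sum of CS over a centred window (equation 3.18). Each step
        # adds the newest sample and subtracts the oldest, so the cost stays
        # at two additions per pixel regardless of the window size.
        half = mu // 2
        width = 2 * half + 1
        n = len(cs)
        scs = [0.0] * n
        if n < width:
            return scs
        window = sum(cs[:width])            # window centred on index `half`
        for k in range(half, n - half):
            scs[k] = window
            if k + half + 1 < n:
                window += cs[k + half + 1]  # add newest
                window -= cs[k - half]      # drop oldest
        return scs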
The final result is a smooth version of the curvature sharpness, obtained with low computational power and without compromising processing time.

Figure 3.10: Smoothing done by convolution versus smoothing done by worm perception. Top left shows the original CS plot of the object on the bottom left. Top right shows the smoothing done by convolution with a mask of ones of size L/2. Bottom right shows the CS function obtained by changing the size of the worm.

The plot on the top right of figure 3.10 shows the smoothed version of the top left sharpness plot. Note how smoothing by the mask behaves differently from the smoothing caused by perception when incrementing the length of the worm, shown on the bottom right plot for L = 25. There, noise on feature 'b' is still detected, and feature 'a' is diminished compared to 'b' and 'c' by almost 40%. It is clear that functions smoothed with the mask better approximate the features found in the original function. The number of supplementary computational operations, all of them additions, can be estimated as μn (this can be reduced to only two operations per pixel, by adding the new value and subtracting the oldest one).

Even if the curvature sharpness function is smoothed, there is still the problem presented in section 3.2.2. Figure 3.10 shows a trimodal peak indicated by label 'a'. Each of these peaks represents one of the three corners of the triangle, feature 'a', on the top of the object
shown in the bottom left. These three corners have to be considered in order to reconstruct an appropriate polygonal approximation. Again, with the aim of solving this situation, it is necessary to increase the amount of time required to process the entire contour; consequently, it is important to constrain the computational power to a minimum.
In order to be able to propose several convincing maximum values within a main polymodal crest, it is necessary to consider each peak's value relative to the inner troughs at both sides, not only its absolute value. If this is so, then it is not possible to decide which local maximum is representative of a corner until all local maxima have been found within the main polymodal crest, because neighbouring peaks are also candidates for being disregarded. It is also compulsory for each local maximum to have information concerning its relative size with respect to both troughs. Therefore, it is also necessary to have information concerning all local minima within the main crest.
The procedure I propose for finding corners starts once the sharpness is above threshold, CS_k > ThresA. This value of CS is recorded as the first local minimum. While the worm continues its movement along the corner, the maximum value is updated until no further increase is reported in CS. Once CS starts decreasing, the first local maximum is recorded. The algorithm then looks for minimum values until no further decrease is detected. This alternating process continues until CS is below threshold and all local maxima and minima are recorded. The total number of slope-zero values is represented by Q. Each of these positions has an additional index that indicates whether that coordinate pair will be considered a true corner. The result is a series of local curvature sharpness values identified as

LCS(ω, τ) = ρ   (3.19)

where ω = 1 … Q is an index identifying them in order of appearance, and τ indicates whether the value represents a local minimum, τ = 0, or a local maximum, τ = 1. The value ρ = {0, 1} has to be set by the algorithm, indicating whether that position represents a true corner or not. None of the local minima represents a corner, therefore LCS(ω, τ = 0) = 0. The survey is based on analysing each local curvature sharpness LCS(ω, τ) compared with both LCS(ω ± 1, τ̄), for ω = 2 … Q − 1. Note that, because of the alternating process, the local sharpness values at both sides of ω have the opposite value of τ, which is represented as τ̄. The code I propose for filtering out wrong local maxima is shown in pseudocode 3.3.1.
At most, for each LCS(ω, τ), three conditions and three comparisons with other values are performed. In order to avoid complicated mathematical calculations, I decided to base the code on logical conditions and an empirical value, ThresA/2, which appears to work well in almost all cases. Algorithm 3.3.1 considers not only the absolute value of local maximum peaks, but also their value relative to the troughs on the left and on the right. If two peaks are analysed from a valley, the algorithm keeps the one with the higher absolute value. The algorithm is in fact analysing the right hand side of the previous peak, but in this case using information from the following peak as well; this also solves the problem introduced by symmetry. Smoothing the graph before looking for peaks also solves the problem with flat peaks: when the top is flat and not too wide, it becomes a curve, allowing the middle point to be found without extra computation involved. (This problem is not avoided when the size of the worm is small; however, simply taking the middle of the flat section solves it.)
Algorithm 3.3.1: PEAK FILTER([LCS(2, ·), LCS(3, ·), …, LCS(Q − 1, ·)])

comment: Remove local maxima that do not represent real corners.

for ω ← 2 to Q − 1
    do if |LCS(ω, τ) − LCS(ω − 1, τ̄)| < ThresA/2
        then if τ = 1
            then if LCS(ω, τ) > LCS(ω + 2, τ)
                then LCS(ω + 2, ρ) ← 0
                else LCS(ω, ρ) ← 0
            else if LCS(ω + 1, τ̄) > LCS(ω − 1, τ̄)
                then LCS(ω − 1, ρ) ← 0
                else LCS(ω + 1, ρ) ← 0
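For reference, a runnable Python sketch of pseudocode 3.3.1, assuming LCS is held as a list of [value, is_max, is_corner] records in order of appearance (the data layout is my choice, not the thesis's):

    def peak_filter(lcs, thres_a):
        # Filter out local maxima that do not represent real corners
        # (pseudocode 3.3.1). Entries alternate minimum/maximum; is_corner
        # starts True for maxima and False for minima.
        Q = len(lcs)
        for w in range(1, Q - 1):          # omega = 2 .. Q-1 (1-based)
            val, is_max, _ = lcs[w]
            if abs(val - lcs[w - 1][0]) >= thres_a / 2:
                continue                   # peak is salient enough; keep it
            if is_max:
                # shallow rise from the previous trough: keep the taller of
                # this maximum and the next one (two positions ahead)
                if w + 2 < Q and val > lcs[w + 2][0]:
                    lcs[w + 2][2] = False
                else:
                    lcs[w][2] = False
            else:
                # shallow trough: keep the taller of the two flanking maxima
                if lcs[w + 1][0] > lcs[w - 1][0]:
                    lcs[w - 1][2] = False
                else:
                    lcs[w + 1][2] = False
        return lcs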
Figure 3.11 shows the result of applying the filter defined in algorithm 3.3.1. The silhouette represents an airplane seen from above. The graphic on the bottom right is a zoom taken from the graph above it; it represents only the section shown on the bottom left. The threshold is set to ThresA = 17. With the original algorithm the extreme of the wing, shown by label 'a', is detected only as a single corner, possibly detected later by the deviation or curve accumulator, but not in the correct place. The turbine labelled with 'b' is correctly approximated by the improved version, because all of its peaks are well defined. In this example the original algorithm could also fail to detect the real positions of corners accurately. Nevertheless, two important corners are missed: the one just before the second turbine, and the following corner where the wing joins the fuselage. This is because their local maxima are not as salient as the others, and they are therefore ignored. This can be easily solved by setting the size of the worm to L = 5. Alternatively, one can consider all local maxima as corners, which is not always a good idea because it can allow the detection of noise. However, if the image is not too noisy, then the algorithm can easily be modified to allow this, by only changing the expression ThresA/2, used in the first conditional check in algorithm 3.3.1, into an additional parameter. What is important to note is that the algorithm presented in 3.3.1 can be used as a platform to control the search for peaks, possibly by only considering another parameter to control the filtering of noise. I recommend relating this new threshold to ThresA, because both change in the same proportion with respect to L. The default value could be set to ThresA/2.
Figure 3.11: Implementation results of the local maximum filter, L = 10. Top left shows the original object, an airplane seen from above. Top right shows the smooth version of the CS function along with the corners considered, shown with circles. Bottom left is a section of the left wing. Bottom right shows the plot of the plane section shown to the left, along with the corners considered, also shown with circles.
The improved version of the corner detector presented herein detects close features more accurately while ignoring noise within the curvature sharpness CS function. Moreover, an important relation between the image features one wants to detect and ThresA has been established: features are now represented as the angle formed by two straight lines.
3.3.2 Curve Detector Improved
Besides other imperfections, section 3.2.3 demonstrates that the curve detector as proposed in (Malcolm, 1983) does not consider the radius of the arc along which the algorithm's worm is crawling. Furthermore, there is no clear relation between image features and the threshold ThresB, which makes the calibration process difficult. This section presents an improved version of the curve detector, based on the results obtained in section 3.2.3, such as the information contained in the slopes of the curve accumulator function.
Because this thesis had a fixed submission deadline, most of the algorithms, assumptions and predictions for the curve detector behaviour could not be fully refined. It therefore remains necessary to test the algorithm with many contour shapes in order to cover all possible parameter situations and threshold values. In this chapter I focus on the problems encountered in figure 3.7. This object exhibits all the problems mentioned at the beginning of section 3.3; therefore, I believe it is an excellent initial test for the improved version of the curve detector.
Even if the accumulator considered both the angular motion and the radius of the curve, it would still depend on the direction the worm takes along the contour. Furthermore, the possibility of proposing a vertex just before a true corner would still be present, which is not desirable. I believe this is not an appropriate approach for dealing with curve segments. For example, suppose that an arc segment with radius R = 50, spanning from θ = 0° to θ = 45°, is going to be analysed, and suppose the algorithm could be calibrated to divide this arc into exactly three segments. That implies that if another arc segment within the same image spans θ = 0° … 22.5°, its approximation will be inefficient, because the most appropriate location for the single vertex is now at θ = 11.25°, which of course is impossible to know until both extremes of the arc have been set. Therefore, my proposal is based on
the assumption that a curve segment cannot be divided appropriately until further information
is known. If the algorithm waits until both known vertices have been set and then divides
according to information gathered from the accumulator and curvature sharpness, then the
accumulator will be independent of the direction of the worm along the contour. In addition,
if calculations are done correctly, it is impossible to locate a curve vertex close to a corner
vertex. The disadvantage of following this method alone is that straight line segments could end up in the middle of two curve segments and be considered part of the whole, dividing them unnecessarily into more segments. Two possible options to solve this problem could be to analyse each of the curvature segments within the main curve, associating with each of them its particular number of segments, or to detect straight line segments at the same time as the algorithm gathers information from the curve. The former is out of the scope of this thesis because of the complexity of analysing inflection points. Therefore, I will focus on the latter.
In order to be able to analyse a curve, it is necessary to know the radius of the arc besides the amount of angular movement the worm detects along the curve. This information is straightforward to find if the arc belongs to a circle with constant radius. Equation 3.12 shows that, for small values of α, the accumulator is proportional to the angular movement along the curve. Therefore, the radius can be estimated as R ∝ k_tot/Acc, where k_tot is the total number of pixels crawled along the arc section. However, it is more useful to use the information from k_tot directly, because it is easier to find that value, and consequently to find a direct relationship between the angular motion crawled and the number of segments needed.
I believe there is no need to calculate the absolute value of the accumulator, because the important information to extract is the absolute value of the slope, not the absolute value of the accumulator itself. This also avoids the problem of unnecessarily dividing the same segment in two, as with feature 'd' in figure 3.7 in section 3.2.3. In order to keep track of slopes within the curve accumulator function, it is necessary to keep a record of all maximum and minimum values. Fortunately, the accumulator function is less sensitive to noise disturbances than the corner detector. Therefore, it can rely on an alternating search for maximum and minimum values, waiting just a couple of pixels to confirm a true change of slope direction. However, so far this approach by itself fails to detect straight line segments in between.
While the algorithm is looking for local maxima or minima, it also keeps track of how many pixels have been detected without leaving a certain restriction window. Once the length of this straight line is big enough, for instance twice the size of the worm, there is no need to keep looking for local maxima or minima. In fact, it is probable that this zone is more susceptible to noise than the others, so this extra information has to be avoided. Note that once a straight line segment has been found, the algorithm already has enough information to propose vertices for the former curve section, i.e. information concerning local maxima and minima, along with information about straight lines. These records are enough for the algorithm to calculate the number of vertices and their positions. However, it is also important not to mix information belonging to corners into the curve accumulator. That is why, once the curvature sharpness value is above threshold, all accumulator related data has to be ignored. This diminishes intervention from corners into curves. However, by the time CS is above threshold, the accumulator has already detected the initial phase of that corner. In order to avoid this, the accumulator works L pixels behind the curvature sharpness, and waits L pixels after the corner before updating again. Waiting this amount of time without updating the accumulator does not affect the final result, because normally the segments about to be divided are much bigger than 2L and no immediate response is required from the curve detector. Therefore, it is only necessary to adjust the positions found by the accumulator by Adj = L + 2, which is increased heuristically by two to avoid small bumps.
Pseudocode for finding all local maximum and minimum values is shown in algorithm 3.3.2. The first condition indicates that a straight line has been found and there is no further need to keep looking for local maximum or minimum values. The pseudocode shown in 3.3.3 detects straight lines as the algorithm runs. The threshold represented as SmallValue works with values from 5 to 15 when L > 10, and values of 2 or even zero for smaller sizes. One can use these values without having to calculate complicated equations, because in any case the approximations needed to solve those equations introduce error proportional to the solution, and are therefore unreliable. The variable responsible for measuring the straight line length is indicated as CounterSL. Variables MinAcc and MaxAcc hold the minimum and maximum values of the accumulator since the last possible initial position for the straight line. Note that vertices are located once the whole straight line segment has been detected. This could be done from the first moment the algorithm detects a straight line, i.e. CounterSL > 2L; instead, the algorithm waits until the straight line finishes, in order to have information concerning the other extreme of the straight line segment and to locate all vertices at once.
Algorithm 3.3.2: FIND LOCAL MAXIMUM AND MINIMUM VALUES(Acc_k)

comment: Find all maximum and minimum values, ignoring small bumps. LookFor is initialised to 1 (looking for a maximum).

if CounterSL < 2L
    then if LookFor = 1
        then if Acc_k ≥ AccL(end)
            then AccL(end) ← Acc_k
                 AccLP(end) ← k − Adj
            else if |k − Adj − AccLP(end − 1)| > 5
                then LookFor ← 0
                     AccL ← [AccL, Acc_k]
                     AccLP ← [AccLP, k − Adj]
        else if Acc_k ≤ AccL(end)
            then AccL(end) ← Acc_k
                 AccLP(end) ← k − Adj
            else if |k − Adj − AccLP(end − 1)| > 5
                then LookFor ← 1
                     AccL ← [AccL, Acc_k]
                     AccLP ← [AccLP, k − Adj]
This approach has the disadvantage that if a corner is detected in the interim, a verification of the straight line has to be done in order not to miss this information. The function named LOCATE CURVE VERTICES is responsible for using the information from all local maximum and minimum values contained in AccL, along with their positions of occurrence in AccLP, in order to divide the arc curve into an appropriate number of segments. In order to locate the last vertex, it is necessary to know which action was the last taken; this information is contained in the variable WhoWas. In this case, because the straight line is supposed to be in between two curve segments, both extremes of the line will be marked as vertices. The reference point for the next curve will be at the end of the straight line, unless the last vertex before the line is a corner and no local maximum or minimum values were found; in that case, only the last vertex is located. Evidently, straight lines between corners are ignored.
Algorithm 3.3.3: STRAIGHT LINE DETECTOR(Acc_k)

comment: Report a straight line if the accumulator stays within a small range of values for more than 2L pixels.

if MaxAcc ≤ Acc
    then MaxAcc ← Acc
if MinAcc ≥ Acc
    then MinAcc ← Acc
if |MaxAcc − MinAcc| < SmallValue
    then CounterSL ← CounterSL + 1
    else if CounterSL > 2L
        then IniSL ← k − Adj
             AccL ← [AccL, PossibleAccL, Acc]
             AccLP ← [AccLP, PossibleAccLP, k − Adj]
             WhoWas ← StraightLine
             LOCATE CURVE VERTICES(AccL, AccLP, WhoWas)
        else PossibleAccL ← Acc
             PossibleAccLP ← k
             MinAcc ← Acc
             MaxAcc ← Acc
             CounterSL ← 0
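A compact Python sketch of the straight-line detector state machine in algorithm 3.3.3; the names mirror the pseudocode, and the call to LOCATE CURVE VERTICES is replaced by returning the line extremes (an illustration under my own assumptions, not the thesis implementation):

    class StraightLineDetector:
        # Counts how long the accumulator stays inside a small band; a band
        # held for more than 2L pixels is reported as a straight line.

        def __init__(self, L, small_value=5, adj=None):
            self.L = L
            self.small_value = small_value
            self.adj = adj if adj is not None else L + 2   # Adj = L + 2
            self.reset(acc=0.0, k=0)

        def reset(self, acc, k):
            self.min_acc = self.max_acc = acc
            self.possible_pos = k
            self.counter_sl = 0

        def step(self, acc, k):
            # Feed one accumulator sample; returns the (start, end) contour
            # indices of a finished straight line, or None.
            self.max_acc = max(self.max_acc, acc)
            self.min_acc = min(self.min_acc, acc)
            if abs(self.max_acc - self.min_acc) < self.small_value:
                self.counter_sl += 1
                return None
            if self.counter_sl > 2 * self.L:   # long straight run just ended
                line = (self.possible_pos, k - self.adj)
                self.reset(acc, k)
                return line
            self.reset(acc, k)                 # band broken too early: restart
            return None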
It is important to relate the curve detector behaviour to the corner detector in order to improve results. However, it is also important to remember that the corner detector has priority over the rest of the parameters. As mentioned previously, finding a straight line stops the process of finding curve features. Evidently, finding a corner also stops processing the curve, and stops the straight line finding as well. When a curvature sharpness value is above threshold, the algorithm first checks whether a straight line was already in progress. If so, it performs the same procedure as when a straight line is finished by a curve; nevertheless, no last vertex is located. If there was no straight line in progress, then the last coordinate for the curve segment is identified at the same position as the corner. In general, in this proposal, vertices are located as follows:

1. Corners, if detected, are located according to their local maximum position;

2. Straight line vertices, if found, are located when appropriate;

3. The beginning and end of a curve segment are identified, as well as the arc length and angular distance associated with it;

4. Curve vertices are located properly.
So far, I have considered that each curve segment has no inflection points, which is not always the case. Nevertheless, the information gathered for the whole segment can provide the necessary data to deal with this problem: when there is a change of sign in the curvature, i.e. an inflection point, the sign of the accumulator slope also changes.
Figure 3.12: New proposal for finding curve segments. Top left is the test image. Top right is the new plot indicating the slopes found, with initial coordinates indicated by circles and inflection change points indicated by stars. Bottom right is the plot used by the original algorithm. Bottom left shows the key points found by the new proposal: stars are corners and circles indicate the start and end of curves.
In figure 3.12, on the top right, are the slopes found by the new algorithm. Circles indicate where each curve starts and each star indicates an inflection change point on the contour. Besides, some curve regions found, especially those from large curves, might have a local maximum or local minimum due to the effects of equation 3.12 explained in section 3.2.3. Results on the actual object are shown on the bottom left of figure 3.12. Because the parameters are now meant to interact with each other, the initial and final points of a curve can be identified with vertices already found by the corner or straight line detector, improving precision even more. Feature 'd' is correctly isolated from both straight line segments, and features 'a', 'b' and 'c' start at one corner and finish at the other. The total curve information gathered by the algorithm from the object shown in figure 3.12
is available in table 3.1. Note that the first segment found is ignored because its arc length is
below threshold ThresB = 15.
|Acc|   k_end − k_ini   k_ini   k_end   Feature
  10          20            0      20
  10          20            0      20   Ignored
 272         199          160     359
  70          63          359     422
 342         278          144     422   a
 219          43          467     510
  10          12          510     522
 229          68          454     522   b
  67          33          619     652
 260         108          652     760
 327         154          606     760   c
  11           6          919     925
 162         202          925    1127
  71          16         1127    1143
 244         234          909    1143   d

Table 3.1: Results gathered by the improved algorithm whilst analysing the object in figure 3.12, using L = 10.
The next step is to decide from where the first and last points along the curve are going to be located. By analysing how the algorithm behaves, I found that no vertex has to be set at the very beginning of any curve. In addition, the end of the curve is always set more accurately by the straight line or corner detector. Only when the curve segment is finished by the end of the image contour does the last vertex have to be located.
Although each curve segment may itself be composed of different curvature segments, for simplicity I am only considering them as a single arc segment. Consequently, for each curve segment found, all changes of the absolute accumulator value are added, as well as the total arc length. Mathematically, it is easy to calculate into how many segments the arc has to be divided in order to constrain the error to a maximum distance δ_max. Difficulties start when considering computational power restrictions. Appendix A.4 demonstrates that the following equation can be used for this purpose:

σ ≈ √(Arc · α / (8 δ_max)) ≈ √(Arc · Acc / λ(L, δ_max))   (3.20)
where Arc is the arc length and α is the angular distance for that segment. The variable α is calculated from the accumulator value Acc via equation 3.12; however, it is difficult to calculate at run time, so it is easier to use Acc, which is the value already available. The proportionality factor between them, as well as the other constants in equation 3.20, can be represented by a single scalar λ(L, δ_max). This variable controls how much error is allowed according to the size of the worm. Considering the computational power and time available for this task, the precise result of equation 3.20 is completely out of scope. Furthermore, even if the number of segments could be calculated, it is not possible to divide the arc segment accurately enough to assure invariance with respect to the direction of the worm. What is uncomplicated to perform, however, is to divide arc segments by factors that are powers of two, 2^n; this can be done by simply shifting the byte representation of the arc length to the right. Furthermore, it is unlikely that an arc will have to be divided into more than 32 segments. Therefore, I believe that in many cases it is only necessary to calculate a rough number of segments which gives some clue about which power of two can be used, i.e. σ̂ ∈ {2, 4, 8, 16, 32}. If the variable σ̂ can only take five values, then equation 3.20 can be approximated by the computation of just a couple of multiplications and divisions. This equation is computed only a few times during the complete contour analysis; therefore, it does not greatly affect the total time consumed. However, in order to constrain time consumption to a minimum, a logarithm lookup table is added to the algorithm in order to calculate fast multiplications and divisions. I propose to solve the square root with an iterative approximation, which in this case gave good results at the first iteration. Because low values of the variable, σ ≈ 3, are more likely, this will be the initial value for the square root. This approach allows the algorithm to calculate equation 3.21 very fast. Appendix A.4 contains the approach followed to get σ, which is later related to the five possible values of σ̂.

σ ≈ [Root_ini + exp(log(Acc) + log(Arc) − log(Root_ini) − log(λ))] / 2   (3.21)
where Root_ini is an estimated initial value for the number of segments; heuristic results show that Root_ini = 3 worked appropriately in many cases. The exponential operation represents only the inverse operation in the lookup table. The pseudocode for dividing the segment into fragments is shown in algorithm 3.3.4. Threshold ThresB, presented in section 3.1.2, is now related to curve segments which are not relevant to the approximation, instead of representing the accumulator limit above which a curve is considered. Variable Θ is a pre-specified new parameter which has to be set according to a possible range of values obtained from equation 3.12.
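To make this cheap evaluation concrete, the following is a minimal sketch in Python of how equation 3.21 can be computed (all names are illustrative, and math.log and math.exp stand in for the integer logarithm lookup table a microprocessor implementation would use):

    import math

    def fast_sqrt_segments(d_acc, arc, theta, root_ini=3):
        # One Newton step for sqrt(d_acc * arc / theta), i.e. equation 3.21;
        # the quotient is obtained with additions and subtractions of logarithms.
        quotient = math.exp(math.log(d_acc) + math.log(arc)
                            - math.log(root_ini) - math.log(theta))
        return 0.5 * (root_ini + quotient)

    def round_to_power_of_two(sigma):
        # Pick rho in {2, 4, 8, 16, 32}; dividing by rho then reduces to
        # shifting the byte representation of the arc length to the right.
        return min((2, 4, 8, 16, 32), key=lambda p: abs(p - sigma))

For feature a of table 3.1 (|ΔAcc| = 342, arc length 278) with Θ = 2000, the exact value of equation 3.20 is σ ≈ 6.9; the single Newton step started at Root_ini = 3 returns approximately 9.4, and both values round to ρ = 8, the division reported for this feature in section 4.1.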
Algorithm 3.3.4: LOCATE CURVE VERTICES(AccL, AccLP, WhoWas)

comment: Divide the arc segment according to the information gathered

Angular_Total ← Σ_i AccL(i)
Arc_Total ← Σ_i AccLP(i)
if Arc_Total ≥ ThresB and Angular_Total ≥ ThresB
    then  σ ← CALCULATE NUMBER OF SEGMENTS(Angular_Total, Arc_Total)
          ρ ← CLOSEST TO(σ, 2^n)
          SegArch ← Arc_Total / ρ
          for i ← AccLP(1) + SegArch to AccLP(end) − SegArch/2 step SegArch
              do INCLUDE IN POLYGONAL APPROXIMATION(P_i)
if WhoWas = End of Contour
    then INCLUDE IN POLYGONAL APPROXIMATION(P_AccLP(end))
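A compact Python rendering of this procedure may also help. It reuses the two helpers sketched above, the variable names are illustrative, and the loop bounds follow one plausible reading of the pseudocode (interior vertices only, since the segment endpoints are supplied by the corner detector):

    def locate_curve_vertices(acc_changes, arc_positions, who_was,
                              thres_b, theta, polygon):
        # acc_changes: |dAcc| values of the pieces forming the arc segment;
        # arc_positions: contour indices covered by the segment, in order.
        angular_total = sum(acc_changes)
        arc_total = arc_positions[-1] - arc_positions[0]
        if arc_total >= thres_b and angular_total >= thres_b:
            sigma = fast_sqrt_segments(angular_total, arc_total, theta)
            rho = round_to_power_of_two(sigma)
            seg = arc_total // rho                 # arc length of each fragment
            k = arc_positions[0] + seg             # first interior division point
            while k <= arc_positions[-1] - seg // 2:
                polygon.append(k)                  # contour index of a new vertex
                k += seg
        if who_was == "end_of_contour":
            polygon.append(arc_positions[-1])      # only when the contour ends here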
The final result for the object in figure 3.12 is shown in figure 3.13. The control segmentation parameter was set to Θ = 2000, for worm length L = 10. Theoretically, the maximum error distance in each of the curve segments is supposed to be very similar. Nevertheless, it now depends on how much difference exists between the estimated number of segments σ and its closest power of two ρ; therefore, the former assumptions no longer stand. What is important to notice is the symmetry of the vertices located in each of the curve sections. This guarantees that results are independent of the worm direction. Furthermore, no redundant vertices are located close to other vertices, for instance those located by the corner or the straight line detector. In agreement with the philosophy of optimal polygonal approximation algorithms shown in section 2.4.2, if some gaps are left because of a bad approximation of the number of segments during the first vertex proposal, then the deviation error control will include some extra vertices later.
The improved version of the curve detector presented in this section precisely divides the arc sections found by the worm, depending on an estimated deviation error between the original curve and the polygonal approximation. As a result, no close vertices proposed by the curve accumulator are set near corners, and results are more symmetric with respect to the direction of the worm along the contour. Moreover, an important relation between the threshold controlling the curve detector and the deviation error has been established. I believe it is very important to have a threshold for the curve detector which depends on an approximate maximum distance allowance δ_max, instead of having to propose a value for the accumulator itself, as happened with the original algorithm.
Figure 3.13: Result of using the improved version of the curve detector. Stars indicate vertices located by the corner detector, circles indicate curve vertices and the triangle was set by the straight line detector.
3.3.3 Improved Deviation Control and Redundant Vertex Removal Proposal
Sections 3.3.1 and 3.3.2 deliberately ignored any calculation of deviation error while the worm is crawling along the contour, because deviation normally has a low value which interferes with the other two feature extractors. Furthermore, it consumes almost 33% of the total time spent analysing the whole contour. The change of strategy I propose for this parameter is to wait until the corner detector and the curve detector have set all vertices, and then make just a few calculations to check deviation errors between vertices. This reduces the number of deviation calculations from n/2 to M − 1. It also allows the option of checking continuously until no further errors above ThresC are detected, because by the time the algorithm gets to this point the object is nearly completed; therefore, only a couple of checking cycles are expected. In addition, dependency on the direction of the worm is avoided, and the time saved can be used to attend to other problems, for instance those in sections 3.3.1 and 3.3.2. The proposal for deviation error is expressed in algorithm 3.3.5. The deviation error shown in equation 3.6 is indicated as DevV(V_j, V_{j+1}).
As well as it being important to check the polygonal approximation in order to detect deviation errors bigger than a certain threshold, it is also important to verify whether some vertices have become redundant. For instance, if some vertices are located in sections with many inflection points, it is possible that some of those vertices lie exactly on the imaginary line formed by the vertices on both sides. This improvement is only concerned with the compression ratio of the image, because if the distance is less than a pixel, then the error between the original image and the approximation is not affected. The procedure for detecting redundant points is very similar to that shown in algorithm 3.3.5; however, it now checks DevV(V_j, V_{j+2}) < ThresR, where ThresR is always a small value and, if required, the vertex removed is V_{j+1}.
Algorithm 3.3.5: DEVIATION ERROR CONTROL(V_j; j = 1..M)

comment: Control deviation error between original image and polygonal approximation

change ← 1
while change = 1
    do  change ← 0
        for j ← 1 to M
            do  if DevV(V_j, V_{j+1}) > ThresC
                    then  add P_{(k+q)/2} to the result
                          change ← 1
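To make the two passes concrete, here is a minimal Python sketch of this post-processing stage, covering both algorithm 3.3.5 and the redundant-vertex test. The vertex list holds contour indices, the names are illustrative, and dev implements the chord-midpoint versus contour-midpoint distance assumed for equation 3.6:

    import math

    def dev(contour, k, q):
        # Distance between the midpoint of the chord P_k-P_q and the contour
        # point halfway between them (the deviation error of equation 3.6).
        (x1, y1), (x2, y2) = contour[k], contour[q]
        cx, cy = contour[(k + q) // 2]
        return math.hypot(cx - (x1 + x2) / 2.0, cy - (y1 + y2) / 2.0)

    def deviation_error_control(verts, contour, thres_c, thres_r):
        changed = True
        while changed:                  # usually only a couple of passes
            changed = False
            for j in range(len(verts) - 1):
                k, q = verts[j], verts[j + 1]
                if dev(contour, k, q) > thres_c:
                    verts.insert(j + 1, (k + q) // 2)   # add P_(k+q)/2
                    changed = True
                    break               # rescan after inserting a vertex
        # Second pass: drop vertices lying on the line formed by their
        # neighbours, as they barely affect the error.
        j = 0
        while j + 2 < len(verts):
            if dev(contour, verts[j], verts[j + 2]) < thres_r:
                del verts[j + 1]
            else:
                j += 1
        return verts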
3.4 Compression by calibration
When a polygonal approximation algorithm is applied to an image, there are two main aspects to take into account: the compression ratio and the error between the original image and the final result. When considering optimal results, these two parameters are inversely related: for most algorithms, a high compression ratio implies a large total error, and vice versa. Therefore, if one can control one of these parameters, the other will follow. As explained in section 3.2.1, it is tempting to try to control the performance of the algorithm proposed herein by relating everything to the size of the worm, expecting to control the compression ratio by changing the length of the worm depending on the image features. This is supported by the fact that different worm sizes perceive or ignore different features on the image (see section 3.2.1 for further information), for instance ignoring noise or certain smooth corners. However, the result can be described as giving a different image to the same algorithm depending on the compression one needs, instead of detecting the required features with the same level of perception. I believe this is not always adequate for controlling the compression ratio, because the image is different at every worm size and it is possible to obtain other features that are not required. In addition, it is sometimes more difficult to predict the final result, as this depends on the scale the algorithm now perceives. For instance, in image 3.1 on page 18, the curve section indicated with label a is detected as a corner, whereas it is more appropriate to locate vertices there by means of the curve detector.
If the correct method to control compression is to calibrate thresholds related to curvature sharpness, the curvature accumulator and deviation error, then the task is to find an adequate method to relate them to the desired compression ratio. However, according to section 2.5.3, only a few discrete changes can be detected in the final result if parameters are moved gradually, e.g. low, normal or high compression. This makes the task of relating algorithm thresholds to compression ratio more feasible, because if we assume that the compression can only take a few values, then the thresholds can also be related to these discrete values.
Such a direct relation between compression ratio and algorithm thresholds is not possible with the analysis made so far. However, in section 3.3.1 a method is presented to relate threshold ThresA to the minimum angle α_min the algorithm can detect. Section 3.3.2 presents an improved version of the curve detector, which can be controlled with a variable Θ(δ_max) by only specifying the maximum distance error δ_max desired on a curve segment. The deviation error presented in section 3.3.3 can now control smaller deviation errors without interfering with the other feature detectors of the algorithm (theoretically, this value could be the same as the one used by the curvature detector). Consequently, even though the algorithm cannot control the compression ratio directly, it can be controlled indirectly by specifying the maximum error between the original image and the polygonal approximation. Furthermore, it is sufficient to specify some fixed values for all thresholds according to some standard errors. For instance, angle α_min could take the following values:

α_min   Error      ThresA
15°     Low        0.2611 L
30°     Moderate   0.51 L
45°     High       0.76 L
60°     Huge       L

Table 3.2: Possible calibration values for ThresA according to global predictions.
Evidently, the values of Θ used by the curve detector and the deviation error depend on the size of the image under analysis. Nevertheless, it is straightforward to propose a value for Θ according to the qualitative errors labelled in table 3.2 and the size of the image, because these parameters are now more intuitive.
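As a rough illustration of such a calibration, the qualitative levels of table 3.2 could be mapped to concrete thresholds as in the following Python sketch. This is only a sketch under stated assumptions: the ThresA column of table 3.2 is consistent with ThresA = 2L sin(α_min/2), which is assumed here; the expression for Θ anticipates equation 4.2 of section 4.1; and tying ThresC to δ_max is purely an illustrative choice.

    import math

    ALPHA_MIN_DEG = {"low": 15, "moderate": 30, "high": 45, "huge": 60}

    def calibrate(level, L, delta_max):
        alpha = math.radians(ALPHA_MIN_DEG[level])
        return {"ThresA": 2.0 * L * math.sin(alpha / 2.0),  # corner detector
                "Theta": 8.52 * L ** 2 * delta_max,         # curve detector
                "ThresC": delta_max}                        # deviation control

For example, calibrate("moderate", 10, 2.35) gives ThresA ≈ 5.18 (about 0.51 L, matching table 3.2) and Θ ≈ 2002, close to the value 2000 used for figure 3.13.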
3.5 Summary
The first section in this chapter presents how the original algorithm proposed in (Malcolm, 1983) detects three main features in images. The corner detector works by detecting high curvature values along the contour. The curve detector works by accumulating the turns the worm performs while crawling along the contour: if it has been turning for a certain amount of angular motion, then a vertex is located. The last feature detector identifies deviation error between the original image and the polygonal approximation: if the deviation error between the exact middle point of two adjacent vertices and the middle point along the contour between them is above threshold, then a vertex is located at that point on the contour.
Each of these feature detectors has flaws when analysing certain types of cases, as shown in section 3.2. For instance, the corner detector fails to detect close features; consequently, those features are amalgamated into one. Furthermore, calibration is not related directly to features, and therefore it is hard to propose a value for the threshold responsible for detecting corners. The curve accumulator only tries to divide curve arcs by angular distance, and does not consider the radius of the curve along which the worm is crawling. Therefore, it is not possible to optimally divide arc segments in order to constrain errors below a certain threshold δ_max without depending on the deviation error, which is time consuming and places vertices less efficiently than the curve detector. Moreover, it is highly dependent on the direction the worm takes along the contour, and sometimes places redundant vertices close to those proposed by the corner detector. The calibration process is also complicated to perform, because there is no simple relation between image features and calibration variables (in order to properly relate the curve accumulator to the threshold angle it tries to detect, it is necessary not to calculate the absolute value of the curve accumulator, but to focus instead on the difference between local maxima and minima; section 3.2.3 contains ideas that support this assumption). The deviation detector consumes almost 33% of the total time the algorithm needs to propose a polygonal approximation and, furthermore, interferes with the performance of the curve accumulator when set too low.
Using the analysis found so far, section 3.3 proposes a new version of the Malcolm algorithm (Malcolm, 1983). The corner detector can now detect close features and deal more appropriately with noisy images and small worm values. It is based on the detection of multiple maximum values within a major polymodal peak, after smoothing the curvature sharpness function with a mask of ones. Features of the image are now related to a minimum angle α_min one wants to detect, which considers all possible orientations of the image. The curve detector in the improved version first detects the complete arc segment and then divides it according to an approximate angular motion and arc length associated with that section. This is done by calculating the number of segments into which the circle arc has to be divided in order to have a distance error below threshold. Information about the arc segment is extracted by considering the slope in the curve accumulator. As a result, the threshold is now related to a more intuitive parameter and results are more stable. The deviation error control is left to the end of the contour survey. By doing this, the algorithm saves time and allows the threshold controlling this feature detector to have smaller values without interfering with the performance of the others. In addition, a final check is performed on the polygonal approximation in order to remove redundant vertices, most probably located at segments with many inflection points.
The calibration process in the new version is now easier to perform compared to the original version, because parameters are directly related to image features and the size of the worm one is interested in using. In order to control the compression of the image, I propose to control it indirectly by specifying the maximum error between the original image and the polygonal approximation. Furthermore, it is shown that calibration can be divided into just a few qualitative cases. I propose to calibrate the feature detector parameters once the size of the worm has been selected, according to the features one is interested in detecting and the size of the image.
Chapter 4
Results
Although many papers report results on common images, it was not possible within the time given to obtain the original images at their original size. This makes comparison with the results obtained by Malcolm's algorithm (Malcolm, 1983) and the improved version difficult, because the difference in scale enormously affects the results obtained for the ISE and, therefore, the figure of merit (FOM). The only images which could be recreated accurately in size and shape were those reported by (Teh and Chin, 1989). One of these images was also used by (Rosin, 1997) in the comparison technique explained in section 2.5.2, where the merit of an algorithm applied to an image is based on two important parameters: efficiency and fidelity. In order to calculate these values it is necessary to know the optimal results for that particular image. Therefore, I had to rely on results published in (Rosin, 1997) and then insert the results obtained by Malcolm's algorithm and the improved version. The results of comparing both algorithms with silhouette images at different worm lengths are shown in section 4.3. Evidently, before any of the experiments in this chapter could be performed, a calibration process for all parameters had to be developed. Section 4.1 shows a calibration proposal based on the size of the worm, which allowed me to perform more experiments and present more understandable results.

Most of the analysis was based on comparison between the original algorithm and its improved version, using images taken from (Chetverikov and Szabó, 1999) and the SQUID silhouette database developed at the University of Surrey, UK. In total, I used 44 images for this comparison, including the object shown in figure 3.12, on the top right. These images can be found in appendix B, each with a particular key name that is used in this chapter. In order to have more reliable results, I had to try each algorithm with different calibration parameters. Nevertheless, there were at least four parameters per algorithm, and therefore it was not possible to try all possible combinations of parameters. Besides, comparison is not as straightforward as required. For extensive comparison reasons, I relate all thresholds to the size of the worm in order to be able to present more results at different image scales. Each image was therefore analysed with eleven different worm sizes, L = 5–15. Results obtained while comparing the original algorithm versus the improved version are shown in section 4.2.
4.1 Calibration for the experiment
For the original algorithm I decided to use the calibration shown by equations 3.8 in section 3.2.1, which corresponds to a moderate error according to table 3.2. The original algorithm also has an additional unnamed parameter, related to the amount of time the worm waits before gathering information again after a corner (see section 3.2.5 for a deeper explanation). The whole experiment was repeated for three different waiting values and the best was kept for comparison; these values were T_wait = {0.5L, L, 1.5L}. Calibration of the improved version also had to be related to the size of the worm in order to be consistent with the comparison experiment between algorithms. For the corner detector it is straightforward to prove that the threshold ThresA has to be scaled up by a factor of L/2, because of the ones-mask applied to the curvature sharpness CS explained in section 3.3.1. By analysing equation 3.17, this corresponds to detecting angles on the image at least above α_min ≈ 29°. Of course, because of its rotation dependency, it can also sometimes detect smaller angles.
The curve detector in the improved version is very different from the original algorithm, because it is now necessary to control at least two parameters: one parameter controls the minimum value of the accumulator which will be considered as a possible curve, known as ThresB; threshold Θ controls the maximum distance δ_max a curve is supposed to have, depending on the size of the worm. The image in figure 3.12 was used in order to find an empirical value for Θ. Data collected from its curves, for different worm sizes, was used to find the following relation:

\theta = \frac{\Delta Acc}{1.065\,L^{2}} \qquad (4.1)
For the object in figure 3.12 this represents an angle of θ = 183° for feature a; for the same image and feature scaled down by 50%, the angle is estimated at θ = 165°. Using this result it is straightforward to show that σ can be represented as

\sigma^{2} = \frac{Arc\,\theta}{8\,\delta_{max}} = \frac{Arc\,\Delta Acc}{8 \cdot 1.065\,L^{2}\,\delta_{max}}, \qquad \Theta(L,\,\delta_{max}) = 8.52\,L^{2}\,\delta_{max} \qquad (4.2)
where δ_max was set empirically to 2.22 for this experiment. For the example shown in figure 3.13, Θ was empirically set to 2000, which corresponds to δ_max = 2.35. This segmentation results in σ ≈ 7 for feature a, and in a final division by 8 because of its proximity to 2³.
For the deviation control, the threshold ThresC was fixed for both algorithms throughout the experiment. Nevertheless, for the original algorithm the deviation threshold has to be set higher than in the improved version, because it works at the same time as the corner detector and the curve detector and, as a consequence, could obstruct their performance if set too low. For the original algorithm the deviation threshold was set to ThresC = 13, and for the improved version ThresC = 5. The new parameter for removing redundant points in the improved version was set to ThresR = 1.5. The automatic calibration process for the experiment is shown in table 4.1.
Parameter          Original   Improved
ThresA             L/2        L²/4
ThresB             L²/2       5L
ThresC             13         5
Θ (δ_max = 2.22)   -          18.9 L²
ThresR             -          1.5

Table 4.1: Calibration used for comparison between algorithms, L = 5–15.
Note that ThresB for the improved version varies proportionally to L. This means that the threshold angle for detecting a curve varies with the size of the worm as θ_thres = 4.6948/L radians. Nevertheless, it could also be more practical to propose a different value of δ_max depending on the size of the worm, leaving the whole decision of whether or not to place a vertex to the number of vertices calculated, and removing this extra parameter. Unfortunately, I could not find an appropriate relation in time; therefore, calibration was left as shown in table 4.1. It could also be very helpful for the algorithm's performance to vary threshold ThresR according to the size of the worm, because smaller worm sizes are more likely to locate points close to each other, and this threshold could erase necessary vertices. However, for simplicity this parameter was also fixed for all experiments.
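For reference, the calibration of table 4.1 can be transcribed directly as a function of the worm length L; this Python sketch only restates the table and the fixed values given in the text:

    def experiment_calibration(L, delta_max=2.22):
        original = {"ThresA": L / 2.0, "ThresB": L ** 2 / 2.0, "ThresC": 13}
        improved = {"ThresA": L ** 2 / 4.0, "ThresB": 5 * L,
                    "Theta": 8.52 * L ** 2 * delta_max,  # = 18.9 L^2 here
                    "ThresC": 5, "ThresR": 1.5}
        return original, improved

    # the experiments in this chapter sweep the eleven worm sizes L = 5..15
    settings = [experiment_calibration(L) for L in range(5, 16)]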
4.2 Comparison between original algorithm and improved version
Although the improved Malcolm's algorithm is based on the original version, its behaviour is very different. The first version is more reactive, and therefore does not miss features as easily as the improved version. Besides, the improved version relies more on the interaction between its feature detectors; for instance, the initial and final vertices of curve segments are taken from the corner detector. Furthermore, the improved version is supported more by logical analysis after the region has been scanned, i.e. it waits until all information has been gathered before proposing vertices. As a result, both algorithms treat each image quite differently, and results are diverse and difficult to analyse as a whole. In section 4.2.1 the analysis is based on the parameters explained in section 2.5.2. Because these parameters might not describe results as the human vision system would, section 4.2.2 reports some images with salient features that are important for a polygonal approximation to detect according to section 2.5.1. Finally, section 4.2.3 reports some results based on the time expended by both algorithms.
4.2.1 Analytical Comparison
Most of the results obtained by both algorithms could not be analysed in detail because of lack of time. Only global results were obtained for all images; for instance, the number of vertices proposed M, the integral squared error (ISE), the compression ratio (CR) and the figure of merit (FOM) could be analysed in more detail. Because the FOM relates the ISE and the CR, some of the conclusions are based on this parameter alone, even though it is not an appropriately balanced relation between the ISE and the CR. The complete data results are shown in appendix C, for the 44 images, each scanned with 11 different worm sizes, i.e. 484 experiments per algorithm. These results do not represent a global performance of these or any algorithms, because all results presented depend on the image scale. Therefore, they are only useful if the comparison is made with the same set of images. For instance, the original algorithm with L = 5 obtained FOM = 17976 on image comm1. On the other hand, the improved version obtained FOM = 12320 with the same size but on image comm2. This information is useless in terms of performance comparison, because the values refer to different images, and ISE results therefore lie in different ranges depending on the size of the image. This problem also prevents me from using many of the results published in other articles, because there is no direct access to exactly the same image. Furthermore, even if the algorithm is compared with the same image, it is not clear how much better the algorithm performed with respect to the other images, or with the same image but at different worm sizes, because the compression ratio and the integral squared error are not well balanced (Rosin, 1997). Unfortunately, I could not implement a balanced version of the figure of merit in time, and most of the results are based on its simple form (further information can be found in (Rosin, 1997) and (Rosin, 2003), which are very useful references for comparing and assessing polygonal approximations). However, even if we cannot rely completely on what the figure of merit reports, we can be fairly sure that if the compression ratio is higher and the integral squared error is lower, then the performance of the algorithm should be considered better, because its polygonal approximation uses fewer points and produces less error.
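For concreteness, the global parameters used below can be computed as in the following Python sketch; I assume the usual definitions from the literature (CR = n/M for n contour points and M vertices, ISE as the sum of squared distances from every contour point to its approximating segment, and the simple figure of merit FOM = CR/ISE), which is my reading of the definitions of section 2.5.2:

    def point_segment_dist2(p, a, b):
        # squared distance from point p to the segment a-b
        (px, py), (ax, ay), (bx, by) = p, a, b
        vx, vy = bx - ax, by - ay
        den = float(vx * vx + vy * vy)
        t = 0.0 if den == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / den))
        dx, dy = px - (ax + t * vx), py - (ay + t * vy)
        return dx * dx + dy * dy

    def global_parameters(contour, verts):
        # CR, ISE and simple FOM for a polygon given by contour indices verts
        n, M = len(contour), len(verts)
        ise = 0.0
        for j in range(M - 1):
            a, b = contour[verts[j]], contour[verts[j + 1]]
            for k in range(verts[j], verts[j + 1] + 1):
                ise += point_segment_dist2(contour[k], a, b)
        cr = n / float(M)
        return cr, ise, (cr / ise if ise > 0 else float("inf"))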
One can see in appendix C that the results gathered for the complete experiment are vast and difficult to analyse. Therefore, instead of dealing with the absolute values of results, I refer most of the time to which algorithm performed better than which. For instance, in table 4.2, the number to the right represents how many times, over the eleven worm sizes, the improved algorithm reported a better FOM value than the original algorithm.
Figure    #    Figure    #    Figure   #    Figure   #
comm1     8    kk1012   11    kk121   10    kk418    6
comm2    11    kk102    11    kk139    9    kk450   11
comm3     8    kk103    11    kk147   11    kk457   11
comm4    11    kk1032   11    kk157   11    kk469   11
comm5    10    kk104    11    kk158   10    kk554   11
comm6    10    kk1051   11    kk197   11    kk577   11
comm7    11    kk1058   11    kk3     11    kk581   11
comm8    11    kk1069   11    kk342   11    kk745    9
kk100    11    kk1086   11    kk373   10    kk778   11
kk1006   11    kk1087   11    kk402    9    kk788   11
kk1008   11    kk113    11    kk408   11    test31  11

Table 4.2: Number of times the improved algorithm performed better using the figure of merit as a comparison parameter.
The results demonstrate that in 95.45% of the experiments the improved algorithm performed better according to the FOM. For the compression ratio it is expected that the improved algorithm would not perform better than the original algorithm, because it is designed not to do so. As explained in section 3.3.1, the improved algorithm tries to locate more vertices when the curvature sharpness has a polymodal shape. Furthermore, because the algorithm divides an arc segment into numbers corresponding to 2^n, segments that only need 5 or 6 divisions are likely to end up with 8 segments. Only in 35.33% of the experiments did the improved algorithm obtain a bigger compression ratio, which represents the expected behaviour; appendix C.1 shows a table with these results in more detail. On the other hand, the improved algorithm is also designed to avoid redundant vertices and locate vertices only where required. Therefore, it is expected to compensate for these extra vertices with a considerably improved ISE of the polygonal approximation. The results report that in 92.98% of the experiments the improved algorithm did perform better; appendix C.1 also shows these results in more detail.
Table 4.3 shows how many times the improved algorithm had a bigger compression ratio and a smaller integral squared error. The global result shows that in 28.93% of the experiments the improved algorithm almost certainly performed better. On the other hand, only three times did the original algorithm perform better in both CR and ISE, which represents only 0.62% of the total experiments. Given that the improved algorithm almost certainly performed better in nearly a third of the experiments, while the original algorithm did so in only three experiments out of 484, most of the results shown in table 4.2 for the FOM predict well which algorithm performed better.
Figure    #    Figure    #    Figure   #    Figure   #
comm1     1    kk1012    2    kk121    2    kk418    0
comm2     4    kk102     1    kk139    5    kk450    3
comm3     1    kk103     1    kk147    2    kk457    4
comm4     7    kk1032    1    kk157    7    kk469    3
comm5     9    kk104     2    kk158    5    kk554    6
comm6     1    kk1051    0    kk197    2    kk577    4
comm7     3    kk1058    7    kk3      1    kk581    9
comm8     1    kk1069    3    kk342    6    kk745    1
kk100     3    kk1086    2    kk373    4    kk778    1
kk1006    5    kk1087    1    kk402    1    kk788    3
kk1008    6    kk113     2    kk408    1    test31   7

Table 4.3: Number of times the improved algorithm had a smaller ISE and a bigger CR.
I showed that, although the figure of merit is not well balanced, the improved algorithm performed better than (Malcolm, 1983) based on separate analyses of the compression ratio and the integral squared error. However, these results are based on global statistical parameters that do not tell us anything about the actual polygonal approximation. For instance, none of the results reported in appendix C indicates how accurately a vertex was located with respect to its ideal position. Furthermore, even if the ISE is small, this does not indicate how well the polygonal approximation resembles the original object. These considerations are hard to express in mathematical terms; therefore, for some images the polygonal approximation is shown, leaving most of the conclusions about performance to the reader.
4.2.2 Visual comparison versus statistics
It was not possible to visually analyse the 484 results obtained in the experiment. Therefore I had to choose some representative results. Those presented in section 4.2.2.1 introduce the problem of balancing the compression ratio and the integral squared error; the experiments chosen for that section have a better FOM for the original algorithm, although in both cases the compression ratio is in support of the improved version. This research was also intended to establish the relation between the algorithm developed here and the theoretical criteria on which it is based; for instance, accurately locating vertices that are close to each other, and dividing curve segments as symmetrically as possible while avoiding problems with inflection points and depending less on the deviation detector. Section 4.2.2.2 presents a couple of images with these characteristics, both of them reporting a better FOM for the improved version but a better compression ratio for the original algorithm. Furthermore, I believe that the percentage difference between the ISE or CR obtained by each algorithm is more useful for comparison than the absolute value each algorithm obtained, because for each image the shape and scale are very different. Therefore, the results presented herein are based only on the relative difference between them.
4.2.2.1 Integral squared error and compression ratio balance
The figure of merit relates the integral squared error and the compression ratio in one single value. However, these parameters are not well balanced (Rosin, 1997), and therefore in some cases the FOM is biased toward only one of them. Although both experiments chosen for this section reported a better performance by the original algorithm, it was not possible to decide which algorithm actually performed better. For one of the images chosen for this section one can identify approximately the correct position of most of the optimal vertices; for the other image this is more subjective. However, I believe that in both cases the difference in performance is based on complementary vertices. The final decision on how favourable these are for the ISE and how harmful for the CR is left to the reader.

The test image comm1 at L = 5 is reported to have a bigger FOM for the original algorithm. The CR is in support of the improved version, with 32.2% more compression; nevertheless, the ISE is in support of the original algorithm, with a 110% difference between them. The visual result for this particular case is shown at the bottom of figure 4.1. The original algorithm locates at least 8 vertices too close together and none of them in a correct place, whereas the improved algorithm failed to detect a large error on one of the peaks to the left, this error triggering an enormous ISE. Although it is not clear statistically or mathematically which algorithm did a better job, I believe that the improved version performed better because its solution is within calibration control, for instance by reducing the deviation error, whereas the error of the original algorithm is intrinsic to its sequence and logic. Note that one of the extreme left corners has two misplaced identical vertices in both algorithms. This is caused because that is where both algorithms started crawling along the contour. This error could easily be solved by ignoring the first corner or first sequence of pixels and analysing them later. However, I decided not to attend to that problem in these experiments because of the time given for the project. Nevertheless, both algorithms share the same mistake, and for comparison between them this is fair enough. Note also that the polygonal approximations shown in figures 4.1 and 4.2 are distorted because the plot axes use different scales. This is a plot effect caused by MATLAB; the actual proportions of these images can be found in appendix B.
Figure 4.1: Visual results obtained by the original algorithm (both on the left), compared with visual results obtained by the improved version (both on the right). The two images at the top are kk745, analysed with L = 7, and the two at the bottom are comm1, analysed with L = 5.

For the test image kk745 at L = 7 it is more difficult to analyse differences, because both performances are very similar. Besides, because it does not have many precise corners, it is complicated to evaluate the number and position of the vertices. For this image the FOM was slightly in favour of the original algorithm, by 3%; the CR was in support of the improved algorithm, by 26.1%; but the ISE was 29.6% bigger than that of the original algorithm. Again the original algorithm locates some close vertices referring to the same feature. The improved version failed to detect some salient errors along the contour, but considering the number of vertices, perhaps the positions chosen for them were appropriate. In this case it is also difficult to decide which algorithm performed better. Nevertheless, I believe that the same conclusion as with image comm1 applies to this image: the deviation error control can improve the ISE of the improved version, whereas the original algorithm needs deeper work on calibration or on the algorithm itself.
The experiments presented in this section are in support of the original algorithm based on the figure of merit. It is difficult to decide which algorithm performed better than the other if no visual results are presented, and even then, it is not an easy task to evaluate their performance, because it now depends on subjective and aesthetic opinions about the results. In my opinion, the original algorithm placed too many curve vertices close to corners; this is not only inefficient, but hard to solve with calibration alone. On the other hand, the results of the improved algorithm can easily be improved by changing the feature detector thresholds.
4.2.2.2 Improved version feature detectors test
In order to test how the algorithm improved with the curve and corner detectors, I picked some images with many close features and curve segments. The original algorithm can only detect close features when the length of the worm is small; it therefore also becomes susceptible to noise. That is why I decided to analyse these images with worm lengths that are not necessarily small, for instance L = 10 and L = 7, which are the kind of worm lengths that ignore noise but are still able to detect small close features. Both experiments presented in this section report a better performance by the improved version, with values very similar to those presented in section 4.2.2.1. However, a better performance by the improved version is to be expected, because it is designed to deal with the type of image features chosen. Images shown in this section are also distorted by the MATLAB plotting display, which tends to present all results on a square canvas. The real proportions of these images are shown in Appendix B.
Figure 4.2 shows results obtained by the original algorithm on the left and by the improved version on the right. Both polygonal approximations shown at the bottom of figure 4.2 are based on kk788. Although the CR is 15.7% bigger with the original algorithm, the ISE is also 74.5% bigger. This percentage difference is not as distinct as those shown in figure 4.1; nevertheless, in this case the visual results are quite contrasting. For the test image kk1087, shown at the top of figure 4.2, the FOM is almost three times bigger for the improved algorithm, most probably because of the section at the bottom that was left without vertices. The CR is 19% bigger for the original algorithm; however, the ISE is also 80.2% bigger. Consequently, in both cases the analytical results support a better performance by the improved version, with values very similar to those supporting the original algorithm in section 4.2.2.1. However, I consider the visual results less ambiguous in these experiments, supporting a better performance by the improved version.
Figure 4.2: Visual results obtained by the original algorithm (both on the left), compared with visual results obtained by the improved version (both on the right). The two images at the top are kk1087, analysed with L = 7, and the two at the bottom are kk788, analysed with L = 10.

The original algorithm detects corners where the peaked zones are in both test images, but in neither of them does it succeed in detecting where the actual peak is, whereas the improved algorithm correctly detects all peaks on image kk788, shown at the bottom of figure 4.2, and almost all peaks on image kk1087, shown at the top of the same figure. Regarding the curve detector, I was not expecting the original algorithm to fail to detect a contour section on image kk788. This happened because the worm in that section detects an 's' shape; the accumulator in the second part of that segment therefore cancels out the accumulator value obtained so far. As a result, no vertex is assumed, and a straight line is proposed where there should be at least three vertices. This example also supports my belief in the necessity of changing the curve detector, independently of whether one is interested in considering the radius of the curve or not (see section 3.2.3 for further information on radius curve dependency). Note that this problem was also detected while analysing the curve accumulator plot at the bottom right of figure 3.7 on page 25.
This section shows the type of problems the original algorithm has while detecting close features with a worm length that also allows it to ignore noise. The compression ratio in these examples supported the original algorithm because it did not properly approximate all peaks; however, the ISE and the FOM statistically supported the improved version. In addition, I believe that the visual results support a better performance by the improved version, because in both images it located almost all peaks correctly, and all curves were segmented apparently without redundant vertices.
Evidently, the FOM is just a parameter that helps to identify which algorithm performed better based on information from the CR and the ISE. Nevertheless, it is also important to consider which results are the most adequate according to our visual perception and the features one is interested in detecting. For instance, if the purpose of a program is to distinguish between test image kk373 (this image can be found in Appendix B) and test image kk1087, it is not necessary to know the exact position of every peak on the top of the fish; it suffices to detect that in one image there are peaks and in the other there are not. The same argument could be applied to most of the images shown in appendix B: one can detect particular features that can be identified solely with the original algorithm. Nevertheless, if one is interested in classifying fishes belonging to the same species, or in obtaining more reliable results, then the improved version could perform better, for example by counting how many peaks test image kk788 has on its body without increasing noisy readings on different contour sections.
4.2.3 Time Performance
The most important advantage of (Malcolm, 1983) is that it processes the whole image really fast. This is because it is based only on simple mathematical operations and it is not an IRM (iterative refinement method; more information can be found in section 2.4.1); therefore, each coordinate pair is analysed at most twice. Improvements to the algorithm followed the same philosophy, using logarithm tables to avoid the use of multiplications and divisions. Furthermore, the logarithm table is consulted once for each curve segmentation, which happens only occasionally and is therefore not time consuming (see section 3.3.2 for further information). The rest of the improvements are only based on logical sequence and some extra computations, such as bit shifts and conditional jumps. The extra time expended by the improved algorithm is supposed to be balanced by the time saved from removing the exhaustive search for deviation error (see section 3.2.4 for a deeper explanation) performed by the original algorithm.
The new improved version increased significantly in size, because it now relies more on logical sequence. However, because most of the additional code is based on conditional operations, it is harder to predict the time consumed. In order to test how much slower the new version is compared with the original algorithm, it is necessary to translate the code into a low-level programming language; otherwise, time measures could be biased by higher-level procedures. Unfortunately, I could not implement the final version in a different language, for instance C or assembler. Therefore, these preliminary results are based on the algorithms running in MATLAB, which is a scripting language, and only relative results are shown. Table 4.4 shows the average increase in time of the proposed improved version relative to the time expended by the original algorithm (times were measured on a Pentium III @ 750 MHz). The standard deviation for this dataset was 4.36%.
Figure     %     Figure     %     Figure    %     Figure    %
comm1    15.06   kk1012   12.58   kk121   14.59   kk418   19.11
comm2    13.15   kk102    15.77   kk139   13.58   kk450   14.05
comm3    15.03   kk103     9.19   kk147   18.75   kk457   15.16
comm4    14.24   kk1032   13.29   kk157   16.32   kk469   13.54
comm5     6.79   kk104    15.30   kk158   15.24   kk554   13.52
comm6    12.12   kk1051   16.75   kk197   10.32   kk577   13.63
comm7    14.27   kk1058   14.11   kk3     12.75   kk581   13.81
comm8    14.52   kk1069   15.16   kk342   16.65   kk745   14.11
kk100    17.87   kk1086   14.83   kk373   15.77   kk778   17.04
kk1006   14.84   kk1087   16.19   kk402   14.74   kk788   15.14
kk1008   14.93   kk113    10.44   kk408   14.23   test31   9.67

Table 4.4: Average time percentage increase made with the improved algorithm.
As I already mentioned, these results do not represent an accurate proportion between the performance of both versions of Malcolm's algorithm, because they were implemented in a scripted, interpreted language. This means that most of the time expended by the program is consumed in communication between inner programs, calls to sub-functions and interpretation of user functions. Consequently, even if an exact square root is 100 times slower than a simple addition, both operations vanish compared to the time consumed by the control and interpretation overheads. Comparison by this means with other algorithms, such as IRM algorithms, is inadequate. However, because MATLAB is an interpreted language, the total time consumed can be related to the number of lines processed, and because both algorithms rely on the same basic mathematical operations, the times shown in table 4.4 give an indication of the global behaviour in lower-level languages.
4.3 Performance compared with other algorithms
Images used for comparison with other methods were taken from (Teh and Chin, 1989) and are shown in figure B.6. These images are easy to work with, because one can create them by copying pixel by pixel; the result is exactly the same image as the one used in other articles. Nevertheless, both of them are very small, and that caused some problems for algorithms based on (Malcolm, 1983). This happens because these algorithms rely on statistics the worm collects along the contour; with these images the algorithm is too reactive and the results are not consistent. Basically, under these circumstances only the corner detector works. Furthermore, the algorithm has to operate with small values of the worm length, which creates noisier readings for the CS.
Only for the object on the left hand side of figure B.6 could I find enough information to calculate the efficiency, fidelity and finally the merit of the algorithm (see section 2.5.2 for a further explanation of these terms). In order to calculate the merit of a polygonal approximation it is necessary to compare results with those reported by an optimal algorithm (section 2.4.2 mentions some of these algorithms; the one used by (Rosin, 1997) is (Perez and Vidal, 1994)). Nevertheless, it was not possible for me to program or obtain the code of any of these algorithms within the time given. The necessary information to be extracted from the optimal algorithm is the minimum ISE for each of the possible values of M. Once this information is collected, (Rosin, 1997) also proposes a visual method for comparison. This method consists of plotting, on a semi-logarithmic graph, the integral squared error versus the number of points proposed by the algorithm. Subsequently, results for other methods can be plotted on the same graph; the closer they are vertically to the optimal line, the better their performance. Results obtained by using this technique are shown in figure D.7 in Appendix D. An analytical proposal for comparison between algorithms is to calculate the merit, according to equation 2.3 shown in section 2.5.2. In order to solve this equation it is necessary to know the optimal ISE for a particular value of M, and the optimal number of points necessary to produce a particular value of ISE.
Because I did not have access to the data obtained by the optimal algorithm used in (Rosin, 1997), I had to extract which values of ISE the optimal algorithm most probably gave for each M in order to reproduce the merit results shown by Rosin. However, I could accurately calculate the ISE for only 28 of the 52 values of M. Eight further values of M were calculated by interpolation, assuming an exponential decay, because the graph of ISE versus M appeared to have a constant negative slope. As a result, I could extract ISE values for M = 2–35. In order to calculate the number of vertices necessary to produce a particular value of ISE, I also assumed an exponential behaviour between values in the database: first I found which value of the ISE was closest to the one I wanted, and then interpolated between data points to obtain an accurate approximation for M. The maximum number of vertices proposed by (Malcolm, 1983) and the improved version was 13; therefore it was not necessary to obtain results for further values of M.
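The interpolation just described can be sketched as follows; known holds the (M, ISE) pairs read off Rosin's graph, the interpolation is linear in log(ISE) (the assumed exponential decay), and the merit function uses my reading of equation 2.3, i.e. fidelity = ISE_opt/ISE, efficiency = M_opt/M and merit = 100 sqrt(fidelity × efficiency):

    import math

    def interp_ise(known, M):
        # optimal ISE for M vertices, interpolating linearly in log(ISE)
        if M in known:
            return known[M]
        lo = max(m for m in known if m < M)
        hi = min(m for m in known if m > M)
        t = (M - lo) / float(hi - lo)
        return math.exp((1 - t) * math.log(known[lo]) + t * math.log(known[hi]))

    def interp_m(known, ise):
        # inverse lookup: vertices needed for a target ISE (ISE falls as M grows)
        pts = sorted(known.items())
        for (m1, e1), (m2, e2) in zip(pts, pts[1:]):
            if e2 <= ise <= e1:
                t = (math.log(e1) - math.log(ise)) / (math.log(e1) - math.log(e2))
                return m1 + t * (m2 - m1)
        raise ValueError("target ISE outside the known range")

    def merit(ise, M, known):
        fidelity = interp_ise(known, M) / ise
        efficiency = interp_m(known, ise) / float(M)
        return 100.0 * math.sqrt(fidelity * efficiency)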
The size of the worm for both algorithms varied over L = 2–7. The only modification made to the improved algorithm was setting to zero the threshold for noise in the local maximum detector (see section 3.3.1 for further explanation). In both algorithms, because the size of the object demanded detection of all possible features, the initial fixed vertex located by both algorithms was removed, and a certain amount of pixels was then considered again in order to treat those pixels appropriately (this technique did not work with all images, therefore it was only used in the experiments within this section).
Results and comparison with other algorithms are shown in table 4.5.
METHOD                           Merit  R     METHOD                           Merit  R
(Lowe, 1987)                      97.1   1    (Malcolm, 1983) L=3               46.3  21
(Banerjee et al., 1996)           96.0   2    Ray & Ray [27] (2)                46.0  22
(Ramer, 1972)                     84.4   3    (Teh and Chin, 1989)              44.9  23
Sarkar [35] 2 point method        75.2   4    (Rosenfeld and Johnston, 1973)    44.7  24
Proposal L=2                      74.5   5    Rosenfeld & Weszka [30]           43.8  25
Chun et al [7]                    66.0   6    Anderson & Bezdek [2]             42.4  26
Sarkar [35] 1 point method        65.3   7    Williams [41]                     41.5  27
Arcelli & Ramella [5]             64.5   8    (Rosenfeld and Johnston, 1973)    40.6  28
(Banerjee et al., 1996)           63.6   9    Ray & Ray [28] (1)                39.0  29
Anderson & Bezdek [2]             60.4  10    Proposal L=6                      34.2  30
(Malcolm, 1983) L=7 & L=6         59.3  11    (Deguchi and Aoki, 1990)          33.3  31
Rattarangsi & Chin [26]           57.7  12    Proposal L=7                      32.8  32
Proposal L=3                      56.1  13    Proposal L=5                      32.2  33
Proposal L=4                      56.0  14    (Banerjee et al., 1996)           30.8  34
Held, Abe & Arcelli [15]          54.1  15    Ansari & Huang [3]                30.5  35
(Freeman and Davis, 1977)         53.3  16    (Melen and Ozanian, 1993)         28.8  36
(Malcolm, 1983) L=5               52.4  17    (Freeman and Davis, 1977)         26.5  37
Chun et al [7]                    51.4  18    Rosenfeld & Weszka [30]           23.2  38
Chun et al [7]                    48.9  19    (Phillips and Rosenfeld, 1987)    19.6  39
Douglas & Peuker [9]              48.2  20    (Malcolm, 1983) L=4               19.4  40

Table 4.5: Values of merit for other methods were taken from (Rosin, 1997), page 662; references indicated in square brackets belong to the bibliography of that article.
The best performance obtained by algorithms based on (Malcolm, 1983) was achieved by the improved version using L = 2, with a global rank of 5. The best rank obtained by the original Malcolm's algorithm was 11, achieved by both L = 6 and L = 7, which gave an equal result. Comparing the values of merit obtained by both algorithms at different worm sizes, the results are widely varied, with ranks from five to forty. Perhaps this is because the range of worm lengths represents a large change relative to the image size; for instance, L = 2 represents almost 6% of the image, whereas L = 7 represents 25%. For low values of worm size I was expecting a similar behaviour between algorithms, because there are no important advantages the improved version supplies when L is small: most of the peaks are detected as single peaks because they are within the worm's perception range; smoothness of the CS is no longer significant, because the mask used is smaller; furthermore, the accumulator is no longer used and deviation can be neglected. Nevertheless, the image also demands more precision in corner location, and even a difference in vertex position of one pixel changes the overall performance enormously. Possibly this high sensitivity to detail could also explain the wide variation in results. Table C.1 contains the complete results gathered by the original algorithm and the improved version proposed herein.
The merit of the polygonal approximation based on (Rosin, 1997) represents a superior balance between the number of points and the error between the actual image and its approximation. However, I believe that for some results, the performance reported by the merit might not correspond with the visual evaluation one might make of the polygonal approximation. All the results obtained by the Malcolm algorithm are shown in figure D.5, and the visual results of the improved version are shown in figure D.6. I believe that the Malcolm algorithm with L = {6, 7} does not describe the actual image properly, because its result is more symmetric about the x-axis than about the y-axis, as it should be. Furthermore, I believe the number of pixels necessary to describe an object must have a minimum limit in order to be functional. I consider, based on visual analysis, that the improved version performed better than the Malcolm algorithm, because all its results better represent the actual object with the necessary number of vertices and locations.
For the figure shown on the right hand side of figure B.6, which represents an X chromosome, I could not find any optimal result, and therefore I was not able to calculate the merit of the algorithm. However, (Huang and Sun, 1996) reported results based on the FOM applied to the same image, compared also with some methods used by (Rosin, 1997). Here the procedure was straightforward because no information from external sources was needed. Results and new ranks are shown in table 4.6.
Method                                  FOM   Rank   Method                  FOM   Rank
(Teh and Chin, 1989) k-curvature b24   63.45    1    Ansari-Huang b24       18.52   11
(Huang and Sun, 1996)                  56.42    2    Proposal, L=2          18.43   12
(Teh and Chin, 1989) k-cosine b24      55.56    3    (Malcolm, 1983), L=2   17.36   13
(Huang and Sun, 1996)                  53.65    4    Proposal, L=6          16.68   14
(Rosenfeld and Johnston, 1973)         34.18    5    Proposal, L=3          15.64   15
(Freeman and Davis, 1977)              33.24    6    Proposal, L=7           8.66   16
Proposal, L=5                          30.89    7    (Malcolm, 1983), L=4    6.21   17
Proposal, L=4                          28.58    8    (Malcolm, 1983), L=5    5.46   18
Anderson & Bezdek                      25.16    9    (Malcolm, 1983), L=7    4.84   19
Rosenfeld & Weszka                     22.11   10    (Malcolm, 1983), L=3    4.58   20

Table 4.6: Values of the FOM for other methods were taken from (Huang and Sun, 1996), page 473; references to other articles can be found in the same article or in (Rosin, 1997).
Here, the best performance was obtained by the improved version using L = 5, with a global rank of 7. The best result obtained by the original algorithm used L = 2, which obtained a global rank of 13. With this image the original algorithm did not perform as well as with the previous image, perhaps because this image is much smaller than the previous one and the algorithm could not handle it properly. For these experiments, on average, the improved version proposed one extra vertex compared to the original algorithm. However, the ISE of the improved version was, except for one experiment, less than half of the value reported by the original algorithm. The results obtained by other algorithms are hard to compare because those algorithms used more than twice the total vertices used by the original or the improved version of Malcolm's algorithm. Nevertheless, according to the figure of merit almost all of them performed better than those based on (Malcolm, 1983).
Visual results obtained for this image are shown in figure D.8 for the original algorithm and figure D.9 for the improved version. If the task were to discriminate between X and Y chromosomes, then the original algorithm would have succeeded with only one of the worm lengths, whereas the improved version correctly detected an 'x' shape with all of the worm lengths. In agreement with the results reported by the figure of merit, and considering the visual results, I consider that the improved algorithm performed better with this image than the original algorithm. Algorithms based on (Malcolm, 1983) degrade as the length of the worm gets smaller and parameters become more difficult to calibrate. Malcolm thinks that there will be a small size and detail of image they cannot quite catch, and that the improved algorithm is more stable as these limits are approached.
I believe that the comparison with other methods is still incomplete, because the images presented in this chapter are very small; therefore, the improved version could not prove its new features against other algorithms. Furthermore, I believe that these images are more related to corner detection algorithms than to polygonal approximation methods. It is important to include images with the features described in section 2.5.1, for instance those used in section 4.2. Besides, in agreement with (Rosin, 1997), the simple version of the figure of merit is not a good estimate for comparing algorithms, at least those that report large differences in CR values. Consequently, the results shown in table 4.6 are not definitive. Table C.1 contains the complete information gathered by both algorithms, with the intention of providing enough data for other people to compare their algorithms with (Malcolm, 1983) and the improved proposal presented herein.
4.4 Summary
The calibration process for the experiments was related to the size of the worm in order to obtain clear results and aid comparison between the original algorithm and the improved version proposed herein. This does not affect the comparison of the two algorithms, because both are based on the same digital worm; therefore, for each image and size, both algorithms perceived the same image features. It was also shown how algorithms based on (Malcolm, 1983) can be calibrated for different worm sizes according to the mathematical calculations shown in previous sections.
Section 4.2 provided enough information to support claims of better performance by the improved version. The results agree with the features proposed in section 2.5.1, and performance based on the parameters shown in section 2.5.2 also indicates better performance by the improved version: based on the FOM, the improved version performed better than the original algorithm in 95.45% of the experiments. The difference in processing time could not be measured adequately; however, with the tests done so far there is no evidence that the improved version takes more than 1.5 times the time the original algorithm consumes.
The results of applying the method proposed by (Rosin, 1997) to images taken from (Teh and Chin, 1989) are scale invariant and relate the CR and the ISE appropriately. Among the algorithms based only on (Malcolm, 1983), i.e. the original and the improved version, the first and third positions were obtained by the improved version, with global ranks of 5 and 14; the best global rank achieved by the original algorithm was 11, although this is perhaps not entirely supported by visual results. Merit values are widely distributed among all merit ranges; however, I consider that the visual results support a better performance by the improved version because all its results better represent the actual object with the necessary number and location of vertices. Comparison with the second image was not as clear as with the first image, because no data from optimal algorithms was found. Therefore, results are only reported based on the figure of merit. I consider that the visual results again support a better performance by the improved algorithm because the original algorithm fails, excluding one case, to detect the actual shape of the image. Furthermore, the improved version showed better stability at the low size limits compared to (Malcolm, 1983).
Chapter 5
Summary and Conclusions
This chapter presents summaries of the previous chapters and conclusions gathered through testing and experimentation. Section 5.1 gives a general idea of the work done while analysing and improving the Malcolm algorithm (Malcolm, 1983); in addition, a proposal concerning the calibration of both algorithms is shown. Section 5.2 presents an overview of the results obtained while comparing the original and the improved version, together with some insights concerning comparison with other algorithms. Section 5.3 contains things that I learned whilst working on this project, with ideas useful for someone willing to work on similar projects. Section 5.4 shows some ideas which, because of lack of time, I could not pursue soon enough, and briefly proposes methods to accomplish them. Finally, section 5.5 contains my conclusions about the project.
5.1 Malcolm's Algorithm Review
Section 3.2.1 shows how the algorithm perceives the contour of the image in a different manner depending on the size of the worm. In addition, I showed reasons for not using the worm length as the sole parameter to control the compression of an image. Instead, the size of the worm is the first parameter to fix, according to the features required to be detected and the image scale. Consequently, the other thresholds are calibrated according to a tolerance error fixed beforehand. Moreover, in section 3.4 an easy method to calibrate the algorithm, based on assumptions reported in section 2.5.3, is presented. As a result, the calibration process is reduced to a minimum.
The corner detector threshold is proved to be affected proportionally by the length of the worm and by rotation of the object, up to 20% in some cases. Features omitted because of rotation of the image can be recovered with a proper calibration; however, the original algorithm has an intrinsic problem that it is not programmed to deal with: multiple peaks above threshold are created by close features and have to be detected in order to consider all possible corners.
Section 3.3.1 proposed a method to relate the minimum angle $\varphi_{min}$ to be detected along the contour to the size of the worm $L$. In addition, pseudocode is proposed which can detect many local maxima within a major main peak. However, this technique requires that the algorithm wait until all local maxima are found before processing the results.
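The following C sketch illustrates the idea of that pseudocode; it is not the exact pseudocode of section 3.3.1, and the signal name cur, the threshold thres_a and the fixed-size buffer are assumptions made for the example:

    #define MAX_PEAKS 8

    /* Sketch: while the curvature reading stays above the corner
     * threshold, record every local maximum instead of only the
     * first one; the candidates can only be processed once the
     * whole peak has been read, i.e. when the reading drops
     * below the threshold again.                                */
    int scan_peaks(const int *cur, int len, int thres_a, int *peak_idx)
    {
        int n_peaks = 0, i = 0;
        while (i < len) {
            if (cur[i] <= thres_a) { i++; continue; }
            /* inside a major peak: collect all local maxima */
            while (i < len && cur[i] > thres_a) {
                if ((i == 0 || cur[i] >= cur[i - 1]) &&
                    (i + 1 == len || cur[i] > cur[i + 1]) &&
                    n_peaks < MAX_PEAKS)
                    peak_idx[n_peaks++] = i;
                i++;
            }
            /* only here may the candidate corners be processed */
        }
        return n_peaks;
    }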
Section 3.2.3 demonstrates that the curve detector fails to segment curves according to the definitions presented in chapter 2, which propose to maintain a maximum deviation error from the real contour with the optimal number of points. The original algorithm was not designed to discriminate between curves with different curvature radii. Moreover, on certain occasions flaws also appear when dividing curve sections according only to angular motion; these are the result of taking the absolute value of the curve accumulator, because opposing curves sometimes cancel each other out. Considering these facts, minor adjustments could be made to improve performance, even though the main task is to divide arc segments based only on angular motion.¹ Because of the way the algorithm is implemented, it is highly dependent on the direction of the worm along the object perimeter; consequently, results become unstable and not suitable for classification. In section 3.3.2 a new method is presented in order to deal with the problems of the original algorithm. The new proposal bases segmentation on a maximum deviation error $\delta_{max}$ whilst considering the angular motion of the worm and the arc length of that segment. Because this calculation turns out to be more complicated, it was necessary to introduce a logarithm look-up table in order to calculate a possible segmentation value. Then the closest value of the form $2^n$ is chosen in order to perform an accurate division without compromising time (a sketch of this rounding is shown below). In addition, there could be some situations in which the algorithm may fail because of sequence bugs within the algorithm. This is because most of the proposals for the algorithm improvements are based on appropriately mixing the behaviour of all parameters into one single task, such as taking the first and last curve vertex from the corner detector or straight-line detector, or checking for curve vertices after the correct information has been gathered.

¹ These flaws are also controlled with the deviation error control; however, I consider it more appropriate to detect curve vertices with less intervention of the deviation detector.
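The rounding to the closest power of two can be done with shifts and comparisons alone; a minimal C sketch (the function name is an illustrative assumption):

    /* Sketch: round a proposed segment count eta to the closest
     * value of the form 2^n, so that the subsequent division of
     * the arc can be done by byte shifting alone.               */
    unsigned nearest_pow2(unsigned eta)
    {
        unsigned p = 1;
        if (eta <= 1)
            return 1;
        while ((p << 1) <= eta)   /* largest 2^n not above eta */
            p <<= 1;
        /* choose between 2^n and 2^(n+1), whichever is closer */
        return (eta - p < (p << 1) - eta) ? p : (p << 1);
    }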
The deviation error control in the original algorithm consumed almost 33% of the total processing time. Furthermore, when the deviation threshold is set to low values, it obstructs the performance of the curve detector, which is of higher priority because of its precision in locating vertices. In section 3.3.3, I proposed to leave the deviation error control to the end of the algorithm's survey. Therefore, it is only necessary to check a smaller amount of data, proportional to the number of vertices, instead of proportional to half of the total number of contour pixels. Furthermore, an extra check is made after the deviation error control locates vertices, in order to remove redundant vertices that do not contribute sufficiently to the polygonal approximation.
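A sketch of this end-of-survey check, assuming floating-point contour points, distinct segment endpoints and the usual perpendicular point-to-chord distance (all names are illustrative):

    #include <math.h>

    typedef struct { double x, y; } Pt;

    /* Perpendicular distance from point p to the chord a-b. */
    static double chord_dist(Pt p, Pt a, Pt b)
    {
        double dx = b.x - a.x, dy = b.y - a.y;
        return fabs(dy * (p.x - a.x) - dx * (p.y - a.y))
               / sqrt(dx * dx + dy * dy);
    }

    /* Check only the contour stretch between two consecutive
     * proposed vertices i0 and i1; return the index of the worst
     * offender if the tolerance d_max is exceeded, else -1.  This
     * runs once at the end of the survey, instead of continuously
     * at every worm step as in the original algorithm.           */
    int worst_point(const Pt *contour, int i0, int i1, double d_max)
    {
        int worst = -1;
        double d_seen = d_max;
        for (int i = i0 + 1; i < i1; i++) {
            double d = chord_dist(contour[i], contour[i0], contour[i1]);
            if (d > d_seen) { d_seen = d; worst = i; }
        }
        return worst;
    }

A vertex returned by this check can be inserted into the polygon and the two resulting sub-segments re-checked; a segment returning -1 is within tolerance.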
The improved version is easier to calibrate because all its parameters are related to intuitive references: the minimum angle between straight lines $\varphi_{min}$ in the corner detector, and the maximum deviation distance $\delta_{max}$ implemented in the curve detector and the deviation error control. Furthermore, by removing the unnecessary continuous deviation checking done by the original algorithm, it was possible to increase the amount of logical power without compromising the time required by the improved version, and so give better results.
5.2 Results
Although it was not possible to use better comparison parameters for assessing both algorithms, for instance a balanced version of the figure of merit, section 4.2 provided enough information to support a better performance of the improved version. The results agree with the features proposed in section 2.5.1, which are the basic features a corner detector algorithm has to detect. In addition, performance based on the parameters shown in section 2.5.2 also indicates a better performance by the improved version. These parameters are the compression ratio (CR), the integral squared error (ISE), and the figure of merit (FOM). The visual results presented are subjective; however, most of them clearly support a better performance by the improved version. The difference in processing time could not be measured adequately. However, with the tests done so far there is no proof that the improved version takes more than 150% of the time the original algorithm consumes. Considering the increase in precision and feature detection, I believe this increase in processing time is acceptable.
Comparison with other algorithms was based only on two images first reported in (Teh and Chin, 1989). One of these images was used by (Rosin, 1997) in order to compare many algorithms with an optimal result obtained by dynamic programming. Results by this method are scale invariant and relate the CR and the ISE appropriately. Considering the algorithms based on (Malcolm, 1983), the first and third position ranks were obtained by the improved version, with ranks of 5 and 14; the best ranks obtained by the original algorithm were 11 and 18. Merit values are widely distributed among all merit ranges; however, I consider that the visual results support a better performance by the improved version because all its results better represent the actual object with the necessary number and location of vertices. Comparison with the second image was not as clear as with the first image, because no data from optimal algorithms was found. Therefore, the results are only reported based on the figure of merit, which is not accurate enough to reach a certain conclusion. Here the original algorithm performed relatively worse than on the former image, with an ISE twice as big as that of the improved version, whereas the improved version only used one extra vertex in almost all experiments performed on this image. Furthermore, the improved version showed better stability at the low size limits compared to (Malcolm, 1983).
5.3 What I learned from this project
Searching through different sources on polygonal approximation and corner detection algorithms turned out to be the key to most of the ideas implemented in the improved algorithm, mainly those related to the definition of polygonal approximation, because almost everything is related to it, such as evaluation and comparison with other algorithms, which are based on the kind of features one wants to detect and the features that are not important. For instance, it is not clear how a curve detector should proceed: dividing segments regardless of the scale, i.e. just considering angular motion, or being concerned with the error between the approximation and the real contour. The evaluation process, in most cases, rules how the algorithm is created or transformed, because that is the global parameter on which solutions are focused. This project is based on the technical information shown in the literature review in chapter 2; however, future research or a different selection of bibliographical references could have led to the development of a very different algorithm.
The original algorithm is simple and fast; nonetheless, it gives results within the same average performance as other, more complex algorithms. In addition, it was designed to be programmed in a microprocessor using simple mathematical operations. I consider that the most important contribution of this thesis is the investigation of the behaviour of all the feature detectors of the original algorithm, supported by the mathematical analysis shown in Appendix A. This analysis allows an easier calibration of the original algorithm; furthermore, it could be considered an essential tool for addressing the same problem in a different and innovative manner, whilst maintaining the original algorithm as the initial platform. There are infinitely many different ways to analyse this problem; the approach presented herein is only one example based on this analysis and the literature review in chapter 2. In addition, the main philosophy of the original algorithm, which is simple mathematical operations, was preserved as much as possible.
I found that algorithm assessment and comparison with other methods was the most complicated part of the dissertation. Most of the other articles do not report consistent results based on the same test parameters, for example ISE or CR. This makes the comparison difficult because visual results are very subjective. What I realized is that, besides the inconsistency in the evaluation process, no clear distinction between polygonal approximation and corner detection algorithms has been established in testing and comparison. Therefore, polygonal approximation algorithms are sometimes compared against corner detector algorithms, so their performance is evidently superior when the ISE is used for that purpose on images with curves. (Rosin, 1997) proposed a better method for assessing algorithms; however, it requires results obtained by an optimal algorithm, which are not always easy to obtain. In addition, it depends on what the algorithm considers as optimal.
5.4 Future Work
The calibration of both algorithms has to be studied in more detail in order to find a better relation between the features one wants to detect while avoiding noisy readings. The equation for ThresA provides a good insight into features and thresholds; nevertheless, it is based only on mathematical assumptions and has not been properly tested with real digital data. Proofs about this relation were only performed on controlled segmentations without noise. Furthermore, the threshold related to the noise peak filter has to be calibrated properly and related more adequately to the threshold ThresA. The calibration of the variable $\Psi$ and its relation with Acc can be improved by using more data and different curve segments. Furthermore, instead of calculating $\eta$, the algorithm could find $Arc/\eta$ directly, allowing the algorithm to divide curves into more sections than those based only on $2^n$. Perhaps the parameter that needs more attention is the straight-line detector used by the curve detector. Currently this parameter is calibrated empirically; however, it is probable that its performance affects the results of the curve accumulator. Finally, a complete code optimisation of the improved version has to be performed. For instance, it is not necessary to add all the CS values in the mask at each step; it is only necessary to always add one and subtract another (see the sketch below). Similar deficiencies can be improved, facilitating an implementation as fast as the original version.
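A minimal sketch of this running-sum optimisation (the names are illustrative): rather than summing the L CS values inside the mask at every step, the sum is updated incrementally.

    /* Sketch: O(1) update of the mask sum at each worm step
     * (k >= L assumed), adding the CS value that enters the
     * window and subtracting the one that leaves it, instead
     * of re-adding all L values.                              */
    long mask_sum_step(long sum, const int *cs, int k, int L)
    {
        return sum + cs[k] - cs[k - L];
    }

The same incremental pattern applies to the other sliding quantities the algorithm maintains along the contour.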
Inevitably, a better comparison process has to be performed in order to evaluate the work presented herein. The FOM proved not to be a feasible parameter for comparing the algorithms; nevertheless, along with the CR and the ISE, it is the most used comparison variable, and therefore it is very useful to report results based on similar parameters. Besides, it is also necessary to have an optimal algorithm against which to compare results and to report results based on (Rosin, 1997). Furthermore, both algorithms have to be implemented in a low-level programming language in order to make an adequate comparison of time performance. This will also allow these algorithms to be compared with other methods on their main quality, which is fast polygonal approximation.
The most important test yet to be implemented is a real application of the algorithm to a task that requires high precision in a short time. One option could be an implementation for processing large greyscale images. Here, high speed is needed because the search for the actual contours is already time demanding; therefore it is necessary to process many objects as fast as possible in order not to lose more time. On the other hand, the implementation could be applied to a simple object but several times within a second, for instance a moving camera approaching an object. In this situation, it is necessary to produce results before hitting the object.
5.5 Conclusions
The original algorithm works remarkably well with certain images and situations. It is definitely one of the fastest algorithms so far which gives useful results while permitting a flexible calibration process. Furthermore, it is easy to program the complete algorithm and it can be easily incorporated into any microprocessor. However, among other sometimes awkward behaviours, its calibration process was considered one of its foremost disadvantages, because there was no easy relation between the parameters and the image features required to be detected.
For each of the parameters of the original algorithm, it is demonstrated how the characteristics of the contour are detected by each of the feature detectors of the algorithm: for the curvature sharpness, this relation is by means of the angle between straight-line segments; also, a relation between the angular motion and the curve accumulator is demonstrated. Furthermore, these are also related to the size of the worm one decides to use depending on the image size and features. Consequently, the calibration of (Malcolm, 1983) is now more feasible without expending unnecessary time.
Whilst analysing the relationship between features and algorithm parameters, I found that some of these were not properly used to obtain optimal results. For each of the feature detectors proposals were presented, and a new technique to relate them all was presented. The calibration process is slightly more intuitive than in the original algorithm, because now the curve detector can also be set by specifying a maximum error distance, and the deviation error can admit lower values without interfering with the rest of the feature detectors. As a result, the improved version can now deal with a bigger variety of images with more precision and without compromising processing time. Also, it can now work on smaller images in terms of pixels.
However, the most important disadvantages of the improved algorithm are not directly related to the polygonal approximation. Because it relies more on logical sequence to compensate for the lack of complex mathematical operations, the size of the algorithm and its implementation process are more complicated. Furthermore, it needs more free memory space in order to include logarithm tables² and the information collected for each section. These make the algorithm more susceptible to bugs within the algorithm sequence and not as easy to implement in any microprocessor.
This dissertation presents, based on a more intuitive calibration process, a procedure for implementing more successfully the algorithm proposed by Chris Malcolm in (Malcolm, 1983). Moreover, based on this algorithm, I propose an improved version that can deal with more complicated images without losing computation speed. It is the user's decision, depending on application requirements and expectations of results, which of these versions of the Malcolm algorithm to use.
² Precision used for chapter 4 was 1:100, requiring memory space of approximately 4K bytes.
Appendix A
Analytical analysis
A.1 Sharp detector
The following analysis is based on figure 3.2 and its variables are defined in section 3.1.1. The head of the worm $\vec{P}_k$ and its tail $\vec{P}_{k-L}$ are expressed as

$\vec{P}_{k-L} = -k\sin(\theta)\,\hat{x} + k\cos(\theta)\,\hat{y}$   (A.1)
$\vec{P}_{k} = (L\sin(\theta) - k\sin(\theta+\varphi))\,\hat{x} + (L\cos(\theta) + k\cos(\theta+\varphi))\,\hat{y}$   (A.2)

where $k$ represents the position of the head. Consequently, the directional vector is defined as equation A.3. The complete reading of the corner takes $k = 1..2L$.

$\vec{w}_k = \vec{P}_k - \vec{P}_{k-L} = (L\sin(\theta) - k\sin(\theta+\varphi) + k\sin(\theta))\,\hat{x} + (L\cos(\theta) + k\cos(\theta+\varphi) - k\cos(\theta))\,\hat{y}$   (A.3)

$\vec{CurV}_k$ is calculated as $\vec{w}_k - \vec{w}_{k-L}$. Note that the worm takes $L$ steps to exit the corner. Therefore, for $k \le L$, $\vec{w}_{k-L}$ does not change, and remains at the original orientation before getting into the corner. When $k > L$ the worm does not change its shape, and remains with the orientation after exiting the corner. Consequently,

$\vec{w}_{before} = \vec{w}_{k=0} = L\sin(\theta)\,\hat{x} + L\cos(\theta)\,\hat{y}$   (A.4)
$\vec{w}_{after} = \vec{w}_{k=L} = L\sin(\theta+\varphi)\,\hat{x} + L\cos(\theta+\varphi)\,\hat{y}$   (A.5)

For $k \le L$, $\vec{CurV}$ is calculated with respect to $\vec{w}_{before}$, and with respect to $\vec{w}_{after}$ when $k > L$. It is straightforward to prove that

$\vec{CurV}^{bef}_k = k(\sin(\theta) - \sin(\theta+\varphi))\,\hat{x} + k(\cos(\theta+\varphi) - \cos(\theta))\,\hat{y}$   (A.6)
$\vec{CurV}^{aft}_k = (L-k)(\sin(\theta) - \sin(\theta+\varphi))\,\hat{x} + (L-k)(\cos(\theta+\varphi) - \cos(\theta))\,\hat{y}$   (A.7)

and therefore

$|\vec{CurV}^{bef}_{k}| = |\vec{CurV}^{aft}_{L-k}|$   (A.8)

As a result, it is only necessary to analyse the case $k < L$. Next, we define a new variable as

$\Gamma(\theta, \varphi) \equiv |\sin(\theta) - \sin(\theta+\varphi)| + |\cos(\theta+\varphi) - \cos(\theta)|$   (A.9)

where $\Gamma(\theta, \varphi)$ represents the image feature of a corner. The scalar $CS$, which is the one we are looking for, is finally expressed as

$Cur = |\vec{CurV}\cdot\hat{x}| + |\vec{CurV}\cdot\hat{y}|$   (A.10)
$Cur = k\,\Gamma(\theta, \varphi); \quad k = 1..L$   (A.11)

Hence, I have proved equation 3.10 in section 3.2.2 and demonstrated that $Cur \propto k$ for this simple case.
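As a quick numerical illustration of equation A.11 (the angle values are arbitrary choices for the example):

$\Gamma(30°, 60°) = |\sin 30° - \sin 90°| + |\cos 90° - \cos 30°| = 0.5 + 0.866 \approx 1.37$

so the scalar reading grows linearly while the head is inside the corner: $Cur \approx 1.37$ at $k = 1$ and $Cur \approx 4.10$ at $k = 3$.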
A.2 Curve Detector
For the curve detector I will assume that the whole worm is always completely inside the curve; this implies that $L \ll R$ and also $L \ll Arc$, where $Arc$ is the total arc length and $R$ is the radius. In this case I will change references to provide a clearer notation. In section 3.1.1 vector $\vec{CurV}$ is

$\vec{CurV} = \vec{w}_k - \vec{w}_{k-L} = \vec{P}_k - 2\vec{P}_{k-L} + \vec{P}_{k-2L}$   (A.12)

It is straightforward to change references to the middle point. As a result we can consider that the algorithm works with just one worm sizing $2L$, using its middle point as reference, instead of working with two different instances. This also provides evidence that the region of support could be estimated as $2L$. Therefore, the curvature vector and the accumulator variable related to the measure of the curve, introduced in section 3.1.2, are defined as

$\vec{CurV}_k = \vec{P}_{k+L} - 2\vec{P}_k + \vec{P}_{k-L}$   (A.13)
$\vec{AccV}_k = \sum_{i=q}^{k} \vec{CurV}_i$   (A.14)

where $q$ is the index of the last vertex proposed. The image we are considering for this analysis is the arc of a circle of radius $R$, shown in figure 3.2. The middle point of this double-size worm is defined as

$\vec{P}_k = R\cos\left(\frac{k}{R}\right)\hat{x} + R\sin\left(\frac{k}{R}\right)\hat{y}$   (A.15)
therefore, the curvature value is expressed as

$\vec{CurV} = R\left[\cos\left(\frac{k+L}{R}\right) - 2\cos\left(\frac{k}{R}\right) + \cos\left(\frac{k-L}{R}\right)\right]\hat{x} + R\left[\sin\left(\frac{k+L}{R}\right) - 2\sin\left(\frac{k}{R}\right) + \sin\left(\frac{k-L}{R}\right)\right]\hat{y}$   (A.16)

Note that $L/R$ is constant once the worm is moving inside the curve. Consequently, it is convenient to separate that term from the rest in order to leave the parameter $L$ alone. Using trigonometric identities, and after some simple algebra, the expression becomes

$\vec{CurV} = -2R\left(1 - \cos\frac{L}{R}\right)\left[\cos\left(\frac{k}{R}\right)\hat{x} + \sin\left(\frac{k}{R}\right)\hat{y}\right]$   (A.17)

The scalar accumulator for the curve is expressed as

$Acc_k = 2R\left(1 - \cos\frac{L}{R}\right)\sum_{i=q}^{k}\left[\left|\cos\frac{i}{R}\right| + \left|\sin\frac{i}{R}\right|\right]$   (A.18)

where $q$ is again the index of the last vertex. When $L \ll R$, the worm takes many steps to deal with the curve. This resolution allows the summation in equation A.18 to be expressed as an integral. Solving the integral from index $k_{ini}$ to index $k_{end}$, the accumulator becomes

$Acc_k = 2R^2\left(1 - \cos\frac{L}{R}\right)\left[\left|\sin\left(\frac{k_{end}}{R}\right) - \sin\left(\frac{k_{ini}}{R}\right)\right| + \left|\cos\left(\frac{k_{end}}{R}\right) - \cos\left(\frac{k_{ini}}{R}\right)\right|\right]$   (A.19)

Since we are assuming that $L \ll R$ we can approximate the former equation further by applying the MacLaurin series to the expression containing $L$. The cosine term can be approximated by

$\cos\left(\frac{L}{R}\right) = 1 - \frac{L^2}{2R^2} + \frac{L^4}{4!\,R^4} - \cdots$   (A.20)

I decided to use only the first two terms because those are enough to assure a good result without losing precision. For a clearer result, one can see that $k/R$ is indeed the angle between $\hat{x}$ and the current index. By defining $\alpha \equiv k_{ini}/R$ and $\beta \equiv (k_{end} - k_{ini})/R$, I created a new variable as

$\Lambda(\alpha, \beta) \equiv |\sin(\alpha) - \sin(\alpha+\beta)| + |\cos(\alpha+\beta) - \cos(\alpha)|$   (A.21)

and the accumulator is finally expressed as

$Acc_k = L^2\,\Lambda(\alpha, \beta); \quad L \ll R \;\Rightarrow\; Acc_k \propto L^2$   (A.22)

Note that this result is equivalent in behaviour to equation A.11. However, the angles inside the trigonometric functions are not equivalent. Nevertheless, $\Gamma(\theta, \varphi)$ and $\Lambda(\alpha, \beta)$ do represent features of the object, and the most important aspect of these equations is how the length is related to those features. It is also important to note that this quadratic dependency is also true for $\vec{CurV}$ while crawling along a curve.
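A quick numerical check of this quadratic behaviour (values arbitrary): for $L = 10$ and $R = 100$,

$2R^2\left(1 - \cos\frac{L}{R}\right) = 2 \cdot 10^4 \cdot (1 - \cos 0.1) \approx 2 \cdot 10^4 \cdot 0.0049958 \approx 99.9 \approx L^2$

confirming that, for $L \ll R$, the prefactor of equation A.19 behaves as $L^2$.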
A.3 Calibration of curvature sharpness in terms of angle
I am going to prove equation 3.16 from section 3.3.1 based on equation 3.10. Therefore, first it is necessary to calculate

$\frac{\partial\,\Gamma(\theta, \varphi)}{\partial\theta} = 0$   (A.23)

This computation is not as straightforward as it may appear, because it contains absolute-value calculations, which are not differentiable. However, as seen in figure 3.3, the minimum value is precisely where the discontinuity is presented. Consequently, it is only necessary to find where the absolute value changes one of the terms of equation 3.10 from negative to positive. These two terms are

$T_1 = \sin(\theta) - \sin(\theta+\varphi)$
$T_2 = \cos(\theta) - \cos(\theta+\varphi)$
$\Gamma(\theta, \varphi) = |T_1| + |T_2|$   (A.24)

If $T_2$ is expanded using trigonometric identities as

$T_2 = \cos(\theta)(1 - \cos(\varphi)) + \sin(\theta)\sin(\varphi)$   (A.25)

it is easy to verify that each of the terms composing $T_2$ is always positive for $\theta = 0..\pi/2$ and $\varphi = 0..\pi/2$. Therefore, $T_1$ is the only term which changes sign; this threshold is when

$\sin(\theta_{min}) = \sin(\theta_{min} + \varphi)$   (A.26)

Because I am analysing $\theta = 0..\pi/2$, the only solution is that

$\sin(\theta_{min} + \varphi) = \sin(\pi - \theta_{min}) \;\Rightarrow\; \theta_{min} = \frac{\pi - \varphi}{2}$   (A.27)

By substituting this value, it is obvious that $T_1 = 0$ and $T_2 > 0$. The final result after applying trigonometric identities is

$\Gamma_{min}(\varphi) = \cos\left(\frac{\pi - \varphi}{2}\right) - \cos\left(\frac{\pi + \varphi}{2}\right) = 2\sin(\varphi/2)$   (A.28)

proving equation 3.16 in section 3.3.1.
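As a worked example of this result (the numbers are arbitrary): combining equations A.11 and A.28, a corner of angle $\varphi$ produces, in its worst orientation, a peak reading of $L\,\Gamma_{min}(\varphi) = 2L\sin(\varphi/2)$. Hence, assuming the corner threshold is compared against this peak, detecting corners of $\varphi_{min} = 60°$ or sharper with a worm of length $L = 10$ requires

$ThresA \le 2L\sin\left(\frac{\varphi_{min}}{2}\right) = 2 \cdot 10 \cdot \sin(30°) = 10$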
A.4 Equation for calculating the number of segments in a curve
If an arc segment of length $Arc$ has a curvature radius $R$, then the following relation stands:

$Arc = R\,\beta$   (A.29)

where $\beta$ is the angle formed by the two extremes of the circle arc. If there is a chord joining these two extremes, then the maximum distance between the chord and the arc is at $\beta/2$. Furthermore, the distance from the centre to the chord is $R\cos(\beta/2)$. Therefore, the maximum distance is expressed as

$\delta_{max} = R\left(1 - \cos\left(\frac{\beta}{2}\right)\right)$   (A.30)

By using the Taylor series expansion shown in section A.2, the maximum distance can be approximated to

$\delta_{max} \approx R\left(\frac{\beta^2}{8}\right)$   (A.31)

Angle $\beta$ is divided into $\eta$ segments in order to reduce the maximum error to $\delta_{max}$. Therefore $\beta' = \beta/\eta$, and because $R = Arc/\beta$, the final expression for $\eta$ is as follows:

$\eta \approx \sqrt{\frac{Arc\,\beta}{8\,\delta_{max}}} = \sqrt{\frac{Arc \cdot Acc}{\Psi(L, \delta_{max})}}$   (A.32)

Variable $\Psi(L, \delta_{max})$ is introduced in order to relate the accumulator with the angle $\beta$, absorbing also the constant value $8\,\delta_{max}$. This proves equation 3.20. The task now is to simplify it in order to compute it without expending too much time and without using direct multiplications or divisions. The most common approach to this problem is to use a logarithm look-up table. For the square root approximation I propose to use an iterative version for $\sqrt{x}$, which is expressed as follows:

$new = \left(old + \frac{x}{old}\right)/2$   (A.33)

Equation A.33 has some advantages compared to other approximations to the square root. The division by two can be done by shifting the byte representation of the answer to the right by one. An initial estimate can be inserted which, in case of being near the answer, assures fast convergence. The initial value is represented by $Root_{ini}$. Substituting equation 3.20 into equation A.33, the result is expressed as

$\eta \approx \left(Root_{ini} + \frac{Arc \cdot Acc}{Root_{ini}\,\Psi}\right)/2$   (A.34)

The inverse operation of the logarithm look-up table is represented by the exponential. This implies starting the look-up from the answer towards the possible value which created it. The final expression is as follows:

$\eta \approx \left(Root_{ini} + \exp\left(\log(Acc) + \log(Arc) - \log(Root_{ini}) - \log(\Psi)\right)\right)/2$   (A.35)

This approximation only needs access to a logarithm look-up table, simple mathematical operations and a right byte shift. If more precision is required, then $Root_{ini} = \eta$ and equation A.35 is repeated. I believe that, based on the range of data values and precision required, no more than two iterations are needed.
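A minimal C sketch of equations A.33-A.35, assuming the 1:100 table precision mentioned in chapter 4; the table size and the inverse (exponential) search are illustrative assumptions, and a real microprocessor implementation would precompute the table off-line:

    #include <math.h>

    #define LOG_SCALE 100        /* table precision 1:100        */
    #define LOG_SIZE  4096       /* illustrative table size      */
    static int log_tab[LOG_SIZE];

    void init_log_tab(void)      /* built once, off-line         */
    {
        for (int x = 1; x < LOG_SIZE; x++)
            log_tab[x] = (int)(LOG_SCALE * log((double)x) + 0.5);
    }

    /* Inverse of the table: smallest x with log_tab[x] >= y.
     * A linear scan is shown for clarity; a second table or a
     * binary search would be used in practice.                  */
    static int exp_tab(int y)
    {
        int x = 1;
        while (x + 1 < LOG_SIZE && log_tab[x] < y)
            x++;
        return x;
    }

    /* One iteration of equation A.35: eta ~ sqrt(arc*acc/psi),
     * using only table look-ups, additions/subtractions and a
     * right shift (all inputs must lie in 1..LOG_SIZE-1).       */
    int eta_estimate(int arc, int acc, int psi, int root_ini)
    {
        int y = log_tab[acc] + log_tab[arc]
              - log_tab[root_ini] - log_tab[psi];
        return (root_ini + exp_tab(y)) >> 1;
    }

If more precision is needed, the returned value can be fed back as root_ini for a second iteration, as stated above.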
Appendix B
Images used for testing
Figure B.1: Images taken from (Chetverikov and Szabó, 1999). These do not represent their actual size.
Figure B.2: Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
Figure B.3: Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
Figure B.4: Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
Figure B.5: Images taken from the squid-silhouette database developed at the University of Surrey, UK. These do not represent their actual size.
Figure B.6: Images taken from (Teh and Chin, 1989). Each black square represents a pixel.
Appendix C
Tables of Results
C.1 Tables
Figure # Figure # Figure # Figure #
comm1 3 kk1012 2 kk121 4 kk418 0
comm2 4 kk102 1 kk139 9 kk450 3
comm3 3 kk103 1 kk147 2 kk457 4
comm4 7 kk1032 1 kk157 8 kk469 3
comm5 11 kk104 2 kk158 7 kk554 6
comm6 6 kk1051 0 kk197 2 kk577 4
comm7 4 kk1058 7 kk3 1 kk581 9
comm8 1 kk1069 3 kk342 6 kk745 5
kk100 3 kk1086 3 kk373 6 kk778 1
kk1006 5 kk1087 1 kk402 3 kk788 3
kk1008 6 kk113 2 kk408 1 test31 8
Table C.1: Number of times the improved algorithm had a bigger compression ratio.
Figure # Figure # Figure # Figure #
comm1 9 kk1012 11 kk121 9 kk418 10
comm2 11 kk102 11 kk139 7 kk450 11
comm3 7 kk103 11 kk147 11 kk457 11
comm4 11 kk1032 11 kk157 10 kk469 11
comm5 9 kk104 11 kk158 9 kk554 11
comm6 6 kk1051 11 kk197 11 kk577 11
comm7 10 kk1058 11 kk3 11 kk581 11
comm8 11 kk1069 11 kk342 11 kk745 7
kk100 11 kk1086 10 kk373 9 kk778 11
kk1006 11 kk1087 11 kk402 9 kk788 11
kk1008 11 kk113 11 kk408 11 test31 10
Table C.2: Number of times the improved algorithm had a smaller integral squared error.
Method M ISE Efficiency Fidelity Merit
(Malcolm, 1983) L=2 13 285.40 38.46 7.25 16.70
(Malcolm, 1983) L=3 13 66.83 69.23 30.96 46.30
(Malcolm, 1983) L=4 8 750.90 37.50 10.00 19.37
(Malcolm, 1983) L=5 9 126.69 55.56 49.41 52.39
(Malcolm, 1983) L=6 6 332.42 83.33 42.25 59.34
(Malcolm, 1983) L=7 6 332.42 83.33 42.25 59.34
Proposal L=2 13 31.83 85.39 65.02 74.51
Proposal L=3 10 88.43 71.03 44.38 56.15
Proposal L=4 10 88.76 70.83 44.22 55.96
Proposal L=5 8 399.81 55.05 18.78 32.16
Proposal L=6 9 291.11 54.31 21.50 34.17
Proposal L=7 11 137.19 46.26 23.26 32.81
Table C.3: Results obtained while analysing the object on the left-hand side of figure B.6.
Method M ISE CR FOM
(Teh and Chin, 1989) k-curvature b24 16 5.91 3.8 63.45
(Huang and Sun, 1996) 15 7.09 4.0 56.42
(Teh and Chin, 1989) k-cosine b24 15 7.20 4.0 55.56
(Huang and Sun, 1996) 16 6.99 3.8 53.65
(Rosenfeld and Johnston, 1973) 8 21.94 7.5 34.18
(Freeman and Davis, 1977) 8 22.56 7.5 33.24
Proposal, L=5 6 32.37 10.0 30.89
Proposal, L=4 7 29.99 8.6 28.58
Anderson-Bezdek 9 26.50 6.7 25.16
Rosenfeld-Weszka 12 22.61 5.0 22.11
Ansari-Huang 16 20.25 3.8 18.52
Proposal, L=2 7 46.50 8.6 18.43
(Malcolm, 1983), L=2 7 49.37 8.6 17.36
Proposal, L=6 6 59.97 10.0 16.68
Proposal, L=3 7 54.79 8.6 15.64
Proposal, L=7 6 115.43 10.0 8.66
(Malcolm, 1983), L=4 5 193.19 12.0 6.21
(Malcolm, 1983), L=5 5 219.88 12.0 5.46
(Malcolm, 1983), L=7 5 248.00 12.0 4.84
(Malcolm, 1983), L=3 8 163.83 7.5 4.58
(Malcolm, 1983), L=6 5 334.66 12.0 3.59
Table C.4: Results obtained while analysing the object on the right-hand side of figure B.6.
C.2 Complete Results
C.2.1 Original Algorithm
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
comm1 n 926 927 928 929 930 931
M 41 34 33 32 31 29
ISE 1256.4 1206.5 1932.5 2069 2439.1 3070.7
CR 22.6 27.3 28.1 29 30 32.1
FOM 17976.1 22598.9 14551.8 14031.7 12299.6 10454.7
comm2 n 717 718 719 720 721 722
M 24 27 21 22 19 19
ISE 8762.8 1949.6 8209.4 6185.3 7636.3 5934
CR 29.9 26.6 34.2 32.7 37.9 38
FOM 3409.3 13640.3 4170.6 5291.1 4969.3 6403.8
comm3 n 403 404 405 406 407 408
M 31 24 16 16 12 10
ISE 471.6 745.9 887.6 632.9 890.1 1302
CR 13 16.8 25.3 25.4 33.9 40.8
FOM 27567.5 22566.9 28519 40092.8 38105.1 31336.3
comm4 n 643 644 645 646 647 648
M 28 26 23 23 23 24
ISE 3921.2 2303 5177.2 6272.8 6754.4 6594.5
CR 23 24.8 28 28.1 28.1 27
FOM 5856.4 10755.1 5416.7 4477.6 4164.8 4094.3
comm5 n 564 565 566 567 568 569
M 24 25 22 23 22 21
ISE 1498.3 2396.8 2189.6 2401.8 2808.5 3781.9
CR 23.5 22.6 25.7 24.7 25.8 27.1
FOM 15684.1 9429.2 11749.5 10264.2 9192.8 7164.4
comm6 n 852 853 854 855 856 857
M 36 33 29 28 27 29
ISE 1297.2 1165.8 1623 1757.1 1307.5 2278.2
CR 23.7 25.8 29.4 30.5 31.7 29.6
FOM 18244.7 22172 18144.6 17378.4 24247.6 12971.5
comm7 n 851 852 853 854 855 856
M 38 41 38 33 31 29
ISE 2078.2 2394.2 3197 5600.1 7859.8 9254.2
CR 22.4 20.8 22.4 25.9 27.6 29.5
FOM 10776.1 8679.6 7021.4 4621.1 3509.1 3189.6
Table C.5: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
comm1 n 932 933 934 935 936
M 29 28 27 27 26
ISE 2662 3320.8 5138.3 5485.7 5125.1
CR 32.1 33.3 34.6 34.6 36
FOM 12073.1 10034.2 6732.3 6312.7 7024.2
comm2 n 723 724 725 726 727
M 19 21 20 18 17
ISE 6626.8 13121.7 16941.7 20446.1 21222.2
CR 38.1 34.5 36.3 40.3 42.8
FOM 5742.3 2627.4 2139.7 1972.7 2015.1
comm3 n 409 410 411 412 413
M 8 9 7 7 6
ISE 1907 2706.7 2684.5 9869.5 10284.6
CR 51.1 45.6 58.7 58.9 68.8
FOM 26808.9 16830.6 21871.9 5963.5 6692.8
comm4 n 649 650 651 652 653
M 23 18 19 17 16
ISE 9016.9 9466.9 9515.2 13775.4 20156.6
CR 28.2 36.1 34.3 38.4 40.8
FOM 3129.4 3814.5 3600.9 2784.2 2024.8
comm5 n 570 571 572 573 574
M 20 21 20 20 19
ISE 5154 2874.2 3425.8 4721.1 10581.4
CR 28.5 27.2 28.6 28.7 30.2
FOM 5529.7 9460.4 8348.3 6068.5 2855.1
comm6 n 858 859 860 861 862
M 27 24 24 23 21
ISE 9766.1 11350.1 9918.3 10823.7 12227.4
CR 31.8 35.8 35.8 37.4 41
FOM 3253.9 3153.4 3612.8 3458.6 3357
comm7 n 857 858 859 860 861
M 28 26 25 22 22
ISE 12020.5 13764.1 16692.6 27819.1 28518.5
CR 30.6 33 34.4 39.1 39.1
FOM 2546.2 2397.5 2058.4 1405.2 1372.3
Table C.6: Results gathered by the original algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
comm8 n 1116 1117 1118 1119 1120 1121
M 44 35 31 30 29 29
ISE 4206.3 7673.1 10518.7 11346.9 9489.3 8853.1
CR 25.4 31.9 36.1 37.3 38.6 38.7
FOM 6029.9 4159.2 3428.6 3287.2 4069.9 4366.3
kk100 n 923 924 925 926 927 928
M 67 63 55 48 40 31
ISE 2073.1 3337.8 5297 4046.4 4817.2 16598.2
CR 13.8 14.7 16.8 19.3 23.2 29.9
FOM 6645.2 4394.1 3175 4767.7 4810.9 1803.5
kk1006 n 638 639 640 641 642 643
M 31 27 26 25 24 22
ISE 1213.1 997 2529 3288.1 3033.7 3358.6
CR 20.6 23.7 24.6 25.6 26.8 29.2
FOM 16965.5 23738.8 9733.2 7797.9 8817.6 8702.3
kk1008 n 794 795 796 797 798 799
M 36 35 33 28 27 28
ISE 7371.4 5394.9 5186.1 7418.9 7107.4 6555.9
CR 22.1 22.7 24.1 28.5 29.6 28.5
FOM 2992 4210.3 4651.1 3836.7 4158.4 4352.7
kk1012 n 629 630 631 632 633 634
M 26 24 19 16 15 14
ISE 4527.3 2863.8 6152.7 10043.1 15272.7 16936.9
CR 24.2 26.3 33.2 39.5 42.2 45.3
FOM 5343.6 9166 5397.7 3933 2763.1 2673.8
kk102 n 856 857 858 859 860 861
M 55 43 38 35 31 33
ISE 2024.1 3899.6 5489.9 7783.2 8630.3 10103.3
CR 15.6 19.9 22.6 24.5 27.7 26.1
FOM 7689.3 5110.9 4112.8 3153.3 3214.5 2582.4
kk103 n 926 927 928 929 930 931
M 26 24 22 21 21 21
ISE 23260.5 23310.8 25451.1 22273.3 22693.6 23071.4
CR 35.6 38.6 42.2 44.2 44.3 44.3
FOM 1531.2 1657 1657.4 1986.1 1951.5 1921.6
Table C.7: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
comm8 n 1122 1123 1124 1125 1126
M 27 28 28 27 26
ISE 10051.9 14344.8 17755.3 18970.5 23781.1
CR 41.6 40.1 40.1 41.7 43.3
FOM 4134.1 2795.9 2260.9 2196.4 1821.1
kk100 n 929 930 931 932 933
M 30 28 29 25 24
ISE 16790.1 54687.7 23382.2 54756.1 11802.5
CR 31 33.2 32.1 37.3 38.9
FOM 1844.3 607.3 1373 680.8 3293.8
kk1006 n 644 645 646 647 648
M 21 19 18 17 17
ISE 4750.8 8311.8 10431.3 10308.5 8390
CR 30.7 33.9 35.9 38.1 38.1
FOM 6455 4084.3 3440.5 3692 4543.2
kk1008 n 800 801 802 803 804
M 27 25 24 23 21
ISE 7470 9711.4 10062.3 9978.9 11190.9
CR 29.6 32 33.4 34.9 38.3
FOM 3966.5 3299.2 3321 3498.7 3421.2
kk1012 n 635 636 637 638 639
M 14 14 14 14 13
ISE 16501.9 17885.6 17397.5 16394 28115.3
CR 45.4 45.4 45.5 45.6 49.2
FOM 2748.6 2540 2615.3 2779.8 1748.3
kk102 n 862 863 864 865 866
M 28 27 26 25 24
ISE 13405.8 12737.9 14852.3 17008.4 17867.5
CR 30.8 32 33.2 34.6 36.1
FOM 2296.4 2509.3 2237.4 2034.3 2019.5
kk103 n 932 933 934 935 936
M 21 19 18 20 19
ISE 28046.2 31933.2 37227.9 29632.9 29730.8
CR 44.4 49.1 51.9 46.8 49.3
FOM 1582.4 1537.8 1393.8 1577.6 1657
Table C.8: Results gathered by the original algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk1032 n 845 846 847 848 849 850
M 32 27 23 26 24 24
ISE 7352.5 7980 8332.7 7977.2 9351.7 9000.8
CR 26.4 31.3 36.8 32.6 35.4 35.4
FOM 3591.5 3926.5 4419.5 4088.6 3782.7 3934.8
kk104 n 772 773 774 775 776 777
M 37 34 29 25 24 23
ISE 3147.2 8153.4 9567.8 9947.6 10054.8 8326.9
CR 20.9 22.7 26.7 31 32.3 33.8
FOM 6629.6 2788.5 2789.5 3116.3 3215.7 4057
kk1051 n 1241 1242 1243 1244 1245 1246
M 91 78 71 65 59 54
ISE 3700 4619.5 6504.1 6971.5 6708.3 8809.5
CR 13.6 15.9 17.5 19.1 21.1 23.1
FOM 3685.8 3446.9 2691.7 2745.3 3145.6 2619.2
kk1058 n 734 735 736 737 738 739
M 45 41 33 31 28 29
ISE 3427.6 3889.5 8654 6801.8 8937.1 10709.5
CR 16.3 17.9 22.3 23.8 26.4 25.5
FOM 4758.8 4609.1 2577.2 3495.3 2949.2 2379.5
kk1069 n 721 722 723 724 725 726
M 41 35 32 29 28 27
ISE 3035.5 2894.9 2598.6 3439.2 3629 5718.2
CR 17.6 20.6 22.6 25 25.9 26.9
FOM 5793.2 7126 8694.5 7259 7135.1 4702.3
kk1086 n 1128 1129 1130 1131 1132 1133
M 57 56 42 43 35 36
ISE 1767.9 3066.1 3645.2 4600 13499.9 11501.2
CR 19.8 20.2 26.9 26.3 32.3 31.5
FOM 11193.7 6575.4 7380.8 5717.9 2395.8 2736.4
kk1087 n 935 936 937 938 939 940
M 68 52 47 40 36 35
ISE 2354.8 3331.9 6297.4 9398.2 11227.9 12268.6
CR 13.8 18 19.9 23.5 26.1 26.9
FOM 5839.1 5402.3 3165.8 2495.2 2323.1 2189.1
Table C.9: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk1032 n 851 852 853 854 855
M 23 22 22 21 21
ISE 9354.2 9885.3 10604.6 10757 12014.1
CR 37 38.7 38.8 40.7 40.7
FOM 3955.4 3917.7 3656.2 3780.5 3388.9
kk104 n 778 779 780 781 782
M 22 21 22 19 18
ISE 11379.3 11142.4 11650.7 12589.3 13936.2
CR 35.4 37.1 35.5 41.1 43.4
FOM 3107.7 3329.2 3043.1 3265.1 3117.4
kk1051 n 1247 1248 1249 1250 1251
M 52 48 44 39 39
ISE 10392.9 15219.8 15466.3 16277.4 18732.3
CR 24 26 28.4 32.1 32.1
FOM 2307.4 1708.3 1835.4 1969.1 1712.4
kk1058 n 740 741 742 743 744
M 27 22 19 17 17
ISE 8314 10909.4 12693.6 13872.4 15282.3
CR 27.4 33.7 39.1 43.7 43.8
FOM 3296.5 3087.4 3076.6 3150.6 2863.7
kk1069 n 727 728 729 730 731
M 25 24 23 21 19
ISE 6596.5 12705.5 10514 18586 19983
CR 29.1 30.3 31.7 34.8 38.5
FOM 4408.4 2387.4 3014.6 1870.3 1925.3
kk1086 n 1134 1135 1136 1137 1138
M 31 28 26 23 23
ISE 11880 12240.2 12827.3 16246.2 7785.8
CR 36.6 40.5 43.7 49.4 49.5
FOM 3079.2 3311.7 3406.2 3042.9 6354.9
kk1087 n 941 942 943 944 945
M 32 31 28 25 24
ISE 9278.7 22170.1 22447.6 34706.2 45236.8
CR 29.4 30.4 33.7 37.8 39.4
FOM 3169.2 1370.6 1500.3 1088 870.4
Table C.10: Results gathered by the original algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk113 n 1161 1162 1163 1164 1165 1166
M 40 37 30 31 28 26
ISE 22509.8 22579.3 20433.3 23679 23351.9 25492.8
CR 29 31.4 38.8 37.5 41.6 44.8
FOM 1289.4 1390.9 1897.2 1585.7 1781.7 1759.2
kk121 n 705 706 707 708 709 710
M 37 33 28 26 24 24
ISE 1389.5 1699.3 4259.6 3987.5 5452.3 8053.6
CR 19.1 21.4 25.3 27.2 29.5 29.6
FOM 13712.4 12589.7 5927.8 6829 5418.2 3673.3
kk139 n 688 689 690 691 692 693
M 36 33 28 28 21 21
ISE 1563.9 1861.3 1774 2017.5 2011.1 2038.4
CR 19.1 20.9 24.6 24.7 33 33
FOM 12220.2 11217.2 13891 12232.4 16385.4 16189.3
kk147 n 732 733 734 735 736 737
M 49 47 38 36 32 30
ISE 1854.1 1377.4 1718.6 2174.9 3762.3 2843.5
CR 14.9 15.6 19.3 20.4 23 24.6
FOM 8057 11323 11239.1 9387.4 6113.2 8639.4
kk157 n 791 792 793 794 795 796
M 67 59 55 47 46 40
ISE 1495 1770.4 1755.1 2620.9 2609.3 2567.5
CR 11.8 13.4 14.4 16.9 17.3 19.9
FOM 7896.9 7582.3 8214.8 6445.6 6623.5 7750.6
kk158 n 607 608 609 610 611 612
M 40 34 29 29 25 23
ISE 782.4 758.2 1389.4 7186 9322.7 10256.9
CR 15.2 17.9 21 21 24.4 26.6
FOM 19396.2 23583.8 15114.3 2927.1 2621.6 2594.2
kk197 n 897 898 899 900 901 902
M 35 34 29 30 27 24
ISE 17131.4 20653.1 6258.3 21862 10093.8 18081.5
CR 25.6 26.4 31 30 33.4 37.6
FOM 1496 1278.8 4953.4 1372.2 3306 2078.5
Table C.11: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk113 n 1167 1168 1169 1170 1171
M 26 25 22 24 24
ISE 23005.7 26333.7 30650.5 32535.7 30521.9
CR 44.9 46.7 53.1 48.8 48.8
FOM 1951 1774.2 1733.6 1498.4 1598.6
kk121 n 711 712 713 714 715
M 21 19 18 18 16
ISE 10201.2 12332.3 8711.7 8379.9 9008.1
CR 33.9 37.5 39.6 39.7 44.7
FOM 3318.9 3038.7 4546.9 4733.5 4960.8
kk139 n 694 695 696 697 698
M 18 17 19 16 16
ISE 4149.7 3712.3 2916.4 2761.1 3857.5
CR 38.6 40.9 36.6 43.6 43.6
FOM 9291.2 11012.7 12560.6 15777.4 11309
kk147 n 738 739 740 741 742
M 28 24 24 24 24
ISE 4216.4 4485.4 5231.3 7464 9215.1
CR 26.4 30.8 30.8 30.9 30.9
FOM 6251 6864.9 5894 4136.5 3355
kk157 n 797 798 799 800 801
M 39 32 30 29 24
ISE 3424.8 3129.8 3686.7 3692.9 3935.9
CR 20.4 24.9 26.6 27.6 33.4
FOM 5967.1 7967.7 7224.2 7470.1 8479.6
kk158 n 613 614 615 616 617
M 23 21 19 17 17
ISE 11215.3 12912.6 5563.8 11935.4 11716
CR 26.7 29.2 32.4 36.2 36.3
FOM 2376.4 2264.3 5817.7 3035.9 3097.8
kk197 n 903 904 905 906 907
M 22 20 19 18 16
ISE 20736.6 20989.7 22016.7 42329 11748.9
CR 41 45.2 47.6 50.3 56.7
FOM 1979.4 2153.4 2163.4 1189.1 4824.9
Table C.12: Results gathered by the original algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk3 n 1083 1084 1085 1086 1087 1088
M 44 39 34 31 29 25
ISE 6159 10018.4 10916.7 9524.7 13511.2 16409.9
CR 24.6 27.8 31.9 35 37.5 43.5
FOM 3996.4 2774.4 2923.2 3678 2774.2 2652.1
kk342 n 740 741 742 743 744 745
M 47 42 35 30 31 31
ISE 1913.2 2038.8 2584 2585.1 2888.4 2889.7
CR 15.7 17.6 21.2 24.8 24 24
FOM 8229.5 8653.4 8204.4 9580.4 8309.2 8316.5
kk373 n 464 465 466 467 468 469
M 26 25 21 19 17 17
ISE 1239.7 765.6 1271.4 1317 5250.2 5158.5
CR 17.8 18.6 22.2 24.6 27.5 27.6
FOM 14395.6 24295.2 17453.6 18663.3 5243.5 5348.1
kk402 n 851 852 853 854 855 856
M 46 42 32 34 27 27
ISE 1903.9 2058.9 3083.7 2228.3 3826.4 3932.9
CR 18.5 20.3 26.7 25.1 31.7 31.7
FOM 9716.7 9852.7 8644.3 11272.4 8275.9 8061.2
kk408 n 764 765 766 767 768 769
M 36 30 25 25 23 22
ISE 4759.1 5123.4 5484.8 5959.8 13041.7 13319.7
CR 21.2 25.5 30.6 30.7 33.4 35
FOM 4459.3 4977.1 5586.4 5147.8 2560.3 2624.3
kk418 n 649 650 651 652 653 654
M 45 41 37 32 31 25
ISE 1295 1456.9 2498.3 2557.1 2618.1 3855.6
CR 14.4 15.9 17.6 20.4 21.1 26.2
FOM 11136.7 10882.1 7042.7 7968.2 8045.6 6784.9
kk450 n 1041 1042 1043 1044 1045 1046
M 52 47 39 39 34 34
ISE 1724.9 2725.5 4664.3 9867.2 10431.7 11519.3
CR 20 22.2 26.7 26.8 30.7 30.8
FOM 11606.3 8134.4 5733.7 2712.9 2946.3 2670.7
Table C.13: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk3 n 1089 1090 1091 1092 1093
M 25 23 22 22 22
ISE 17107.2 18420.9 17995.5 18807.7 18983.1
CR 43.6 47.4 49.6 49.6 49.7
FOM 2546.3 2572.7 2755.7 2639.2 2617.2
kk342 n 746 747 748 749 750
M 28 26 24 23 22
ISE 9445.5 10733.8 3288.9 3640 4360.5
CR 26.6 28.7 31.2 32.6 34.1
FOM 2820.7 2676.7 9476.3 8946.4 7818
kk373 n 470 471 472 473 474
M 16 16 16 15 15
ISE 5294.1 5322.6 5718.6 6084.9 6769.9
CR 29.4 29.4 29.5 31.5 31.6
FOM 5548.6 5530.6 5158.6 5182.3 4667.7
kk402 n 857 858 859 860 861
M 26 22 22 22 21
ISE 3088.1 6737.2 9799.7 8200 10051.4
CR 33 39 39 39.1 41
FOM 10673.9 5788.7 3984.3 4767.2 4079
kk408 n 770 771 772 773 774
M 20 20 19 19 19
ISE 14306.1 14174.7 18333.9 18874.9 18078.7
CR 38.5 38.6 40.6 40.7 40.7
FOM 2691.2 2719.6 2216.2 2155.5 2253.3
kk418 n 655 656 657 658 659
M 25 23 21 20 17
ISE 3564.8 3643.2 3609.9 2815.3 3329.1
CR 26.2 28.5 31.3 32.9 38.8
FOM 7349.6 7828.9 8666.6 11686.2 11644.2
kk450 n 1047 1048 1049 1050 1051
M 29 28 28 27 26
ISE 12670.7 14532.5 18399.4 17975 18327.3
CR 36.1 37.4 37.5 38.9 40.4
FOM 2849.4 2575.5 2036.2 2163.5 2205.6
Table C.14: Results gathered by the original algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk457 n 1043 1044 1045 1046 1047 1048
M 64 58 52 51 41 42
ISE 3836.3 2694.8 3777 4047 4181.6 4574.6
CR 16.3 18 20.1 20.5 25.5 25
FOM 4248.1 6679.5 5320.6 5067.9 6106.8 5454.6
kk469 n 905 906 907 908 909 910
M 50 47 36 36 32 30
ISE 4117.1 4917.3 5701.4 5526.2 10174.1 10874.1
CR 18.1 19.3 25.2 25.2 28.4 30.3
FOM 4396.3 3920.1 4419 4564.1 2792 2789.5
kk554 n 801 802 803 804 805 806
M 36 35 31 29 29 28
ISE 4784.3 6223.8 5969.3 5784.2 6311.1 7513.6
CR 22.3 22.9 25.9 27.7 27.8 28.8
FOM 4650.7 3681.7 4339.4 4793.1 4398.4 3831.1
kk577 n 1661 1662 1663 1664 1665 1666
M 100 96 84 76 70 66
ISE 7764.1 6706.2 8769.8 9622.5 12113.7 12937.2
CR 16.6 17.3 19.8 21.9 23.8 25.2
FOM 2139.3 2581.5 2257.5 2275.4 1963.5 1951.2
kk581 n 615 616 617 618 619 620
M 50 43 37 35 30 27
ISE 1019.7 1407.1 1489.5 1580.9 1949.2 1655.2
CR 12.3 14.3 16.7 17.7 20.6 23
FOM 12062 10180.7 11195.1 11169.2 10585.7 13873.2
kk745 n 619 620 621 622 623 624
M 39 33 29 29 27 22
ISE 529.4 824.1 999.8 1490.1 1421.2 2720.3
CR 15.9 18.8 21.4 21.4 23.1 28.4
FOM 29983.2 22796.8 21418.2 14393.8 16236 10426.7
kk778 n 1572 1573 1574 1575 1576 1577
M 125 104 94 82 76 70
ISE 3846.8 9206.4 5610.2 7847.8 12816.7 20614.1
CR 12.6 15.1 16.7 19.2 20.7 22.5
FOM 3269.2 1642.9 2984.7 2447.5 1618 1092.9
Table C.15: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk457 n 1049 1050 1051 1052 1053
M 37 36 33 30 30
ISE 5515.8 6511.5 27599.8 33789.6 11596.7
CR 28.4 29.2 31.8 35.1 35.1
FOM 5140 4479.2 1153.9 1037.8 3026.7
kk469 n 911 912 913 914 915
M 25 22 22 22 19
ISE 12022.3 14040 12180.6 9584.3 12851.9
CR 36.4 41.5 41.5 41.5 48.2
FOM 3031 2952.6 3407.1 4334.7 3747.2
kk554 n 807 808 809 810 811
M 27 23 22 20 20
ISE 7380 9112.5 9371.9 8706.8 9382.5
CR 29.9 35.1 36.8 40.5 40.6
FOM 4050 3855.2 3923.7 4651.5 4321.9
kk577 n 1667 1668 1669 1670 1671
M 61 56 48 50 47
ISE 17196.3 19528.1 23195.8 21521.4 34909.1
CR 27.3 29.8 34.8 33.4 35.6
FOM 1589.2 1525.3 1499 1551.9 1018.5
kk581 n 621 622 623 624 625
M 24 22 18 18 15
ISE 1883.9 2334.2 3977.9 3263.5 4583.3
CR 25.9 28.3 34.6 34.7 41.7
FOM 13735 12112.6 8700.9 10622.6 9090.9
kk745 n 625 626 627 628 629
M 20 16 17 14 14
ISE 4660.6 10571.3 32033.1 37509.6 39149.5
CR 31.3 39.1 36.9 44.9 44.9
FOM 6705.2 3701.1 1151.4 1195.9 1147.6
kk778 n 1578 1579 1580 1581 1582
M 69 65 64 55 57
ISE 18545.5 25047 19005.8 32016.2 29510
CR 22.9 24.3 24.7 28.7 27.8
FOM 1233.2 969.9 1298.9 897.8 940.5
Table C.16: Results gathered by the original algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk788 n 843 844 845 846 847 848
M 46 45 39 36 35 29
ISE 1675.7 3750.5 7288.6 6696.2 7484.1 7677.1
CR 18.3 18.8 21.7 23.5 24.2 29.2
FOM 10936.6 5000.8 2972.7 3509.4 3233.5 3808.9
test31 n 1333 1334 1335 1336 1337 1338
M 35 36 35 33 33 33
ISE 3919.6 3145.5 4583.9 4349.2 6032.9 5892.4
CR 38.1 37.1 38.1 40.5 40.5 40.5
FOM 9716.8 11780.6 8321 9308.6 6715.7 6881
Table C.17: Results gathered by the original algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk788 n 849 850 851 852 853
M 28 26 27 25 25
ISE 9764.3 10585.9 10974.3 10711.8 12729.5
CR 30.3 32.7 31.5 34.1 34.1
FOM 3105.3 3088.3 2872 3181.5 2680.4
test31 n 1339 1340 1341 1342 1343
M 32 30 29 29 29
ISE 6845.4 11054.6 12460.3 12478.1 13168.3
CR 41.8 44.7 46.2 46.3 46.3
FOM 6112.6 4040.6 3711.1 3708.6 3516.8
Table C.18: Results gathered by the original algorithm.
C.2.2 Improved Algorithm
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
comm1 n 926 927 928 929 930 931
M 31 33 32 34 32 31
ISE 2647 1603.9 1473.2 1433.6 2035.4 2274
CR 29.9 28.1 29 27.3 29.1 30
FOM 11284.8 17514.3 19685.1 19059.4 14278.5 13206.8
comm2 n 717 718 719 720 721 722
M 22 23 19 21 19 20
ISE 2645.4 1707.8 2268.3 2721.3 3159.2 2584.6
CR 32.6 31.2 37.8 34.3 37.9 36.1
FOM 12319.9 18279.6 16682.7 12599.2 12011.6 13967.4
comm3 n 403 404 405 406 407 408
M 24 25 14 14 12 11
ISE 576 350.6 870.4 994.8 906.6 1470.9
CR 16.8 16.2 28.9 29 33.9 37.1
FOM 29152.1 46092.8 33234.6 29152.2 37409.4 25215.9
comm4 n 643 644 645 646 647 648
M 18 20 19 20 19 19
ISE 2853.7 2262.9 2254.6 2094.1 2125.7 2032
CR 35.7 32.2 33.9 32.3 34.1 34.1
FOM 12518 14229.5 15056.8 15424.6 16019.4 16783.8
comm5 n 564 565 566 567 568 569
M 20 20 17 18 17 18
ISE 2013.8 2271.6 2462.3 2263.8 2287.4 1927.8
CR 28.2 28.3 33.3 31.5 33.4 31.6
FOM 14003.4 12436.3 13521.5 13914.4 14606.9 16397.2
comm6 n 852 853 854 855 856 857
M 24 23 23 22 22 24
ISE 1322.5 1495.3 1153.4 1962.4 1935.4 2553.2
CR 35.5 37.1 37.1 38.9 38.9 35.7
FOM 26843.3 24802.3 32191.5 19803.9 20103.7 13985.5
comm7 n 851 852 853 854 855 856
M 29 32 30 35 30 31
ISE 2659.5 1634.4 2400 2073.4 2398.9 2214.6
CR 29.3 26.6 28.4 24.4 28.5 27.6
FOM 11033.9 16290.2 11847.4 11768.4 11880.7 12468.3
Table C.19: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
comm1 n 932 933 934 935 936
M 31 32 33 33 33
ISE 2370.8 2938.2 2687.5 3331.2 3015.5
CR 30.1 29.2 28.3 28.3 28.4
FOM 12681 9923.3 10531.2 8505.5 9406
comm2 n 723 724 725 726 727
M 20 21 21 21 21
ISE 2703.3 3956.8 2958.2 3070.6 2628.4
CR 36.2 34.5 34.5 34.6 34.6
FOM 13372.6 8713.1 11670.6 11258.7 13171.2
comm3 n 409 410 411 412 413
M 10 11 11 13 11
ISE 1265.3 999.3 1043.4 804.3 877
CR 40.9 37.3 37.4 31.7 37.5
FOM 32324.4 37298.5 35810.2 39404.1 42810.2
comm4 n 649 650 651 652 653
M 18 19 20 20 20
ISE 2521.4 2072.5 1480.7 1528.7 1600.3
CR 36.1 34.2 32.6 32.6 32.7
FOM 14299.8 16507.1 21982.8 21325.1 20401.9
comm5 n 570 571 572 573 574
M 17 17 18 16 17
ISE 2116.2 2373.3 1872.9 3583.9 3142.8
CR 33.5 33.6 31.8 35.8 33.8
FOM 15844.3 14152.3 16967.2 9992.6 10743.6
comm6 n 858 859 860 861 862
M 27 25 26 25 23
ISE 2575.2 2823.9 1783.1 2288.9 2892.6
CR 31.8 34.4 33.1 34.4 37.5
FOM 12339.9 12167.4 18550.5 15046.5 12956.5
comm7 n 857 858 859 860 861
M 29 31 31 29 29
ISE 2162 2370.2 2613.7 3295.1 3151.9
CR 29.6 27.7 27.7 29.7 29.7
FOM 13668.6 11677.1 10601.6 8999.9 9419.5
Table C.20: Results gathered by the improved algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
comm8 n 1116 1117 1118 1119 1120 1121
M 39 38 38 39 43 43
ISE 2904.8 3199.2 3106.5 3644.4 2024.5 2132.8
CR 28.6 29.4 29.4 28.7 26 26.1
FOM 9851 9188.2 9470.7 7872.9 12865.7 12223.5
kk100 n 923 924 925 926 927 928
M 64 58 54 48 41 44
ISE 1725 1502.5 1628.9 2407.6 2429.2 2501.5
CR 14.4 15.9 17.1 19.3 22.6 21.1
FOM 8360.7 10603.3 10515.9 8012.7 9307.5 8431.2
kk1006 n 638 639 640 641 642 643
M 28 27 22 21 22 21
ISE 1086.1 952 1486.6 1384.7 1747.2 2302.1
CR 22.8 23.7 29.1 30.5 29.2 30.6
FOM 20979.2 24860.1 19568.7 22044.1 16702.5 13300.5
kk1008 n 794 795 796 797 798 799
M 35 32 31 29 27 27
ISE 1665.3 1791.2 1833.7 1836.2 3968.5 2696.2
CR 22.7 24.8 25.7 27.5 29.6 29.6
FOM 13622.3 13870.3 14003.2 14967.2 7447.6 10975.7
kk1012 n 629 630 631 632 633 634
M 22 23 22 21 20 20
ISE 1976.2 1035.9 1207.8 1089.1 1627.3 1947.8
CR 28.6 27.4 28.7 30.1 31.7 31.7
FOM 14468 26440.9 23747.6 27633.3 19449.1 16274.5
kk102 n 856 857 858 859 860 861
M 52 50 46 42 40 36
ISE 1993.7 1947.8 1995.4 2119.5 2327.9 3114
CR 16.5 17.1 18.7 20.5 21.5 23.9
FOM 8256.9 8799.5 9347.8 9649.7 9235.6 7680.4
kk103 n 926 927 928 929 930 931
M 26 22 22 24 25 25
ISE 3217.2 3049.7 2548.1 3275.3 2756.4 2447.3
CR 35.6 42.1 42.2 38.7 37.2 37.2
FOM 11070.4 13816.5 16554 11818.3 13495.8 15216.7
Table C.21: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
comm8 n 1122 1123 1124 1125 1126
M 41 38 42 40 39
ISE 2328.8 4140.4 2735.2 2857.9 2950.3
CR 27.4 29.6 26.8 28.1 28.9
FOM 11751.1 7137.6 9784.4 9841.2 9786.2
kk100 n 929 930 931 932 933
M 42 41 41 34 34
ISE 2817 2865 3157.3 3516.7 3643.8
CR 22.1 22.7 22.7 27.4 27.4
FOM 7852 7917.3 7191.9 7794.7 7530.8
kk1006 n 644 645 646 647 648
M 21 23 21 22 23
ISE 1446.2 1844 2851.3 1854 1627.6
CR 30.7 28 30.8 29.4 28.2
FOM 21205.5 15208.4 10788.6 15862.4 17309.9
kk1008 n 800 801 802 803 804
M 25 24 26 29 30
ISE 3036.5 3536 2946.7 2794.6 2105.2
CR 32 33.4 30.8 27.7 26.8
FOM 10538.5 9438.6 10468 9908.3 12730.1
kk1012 n 635 636 637 638 639
M 20 21 21 21 19
ISE 1983.2 1973.4 1986.7 1942.6 2986.1
CR 31.8 30.3 30.3 30.4 33.6
FOM 16009.1 15347.2 15268 15639 11262.8
kk102 n 862 863 864 865 866
M 35 33 34 34 32
ISE 3788.9 3924.4 3396 3496.4 3735.7
CR 24.6 26.2 25.4 25.4 27.1
FOM 6500.2 6663.8 7482.8 7276.3 7244.2
kk103 n 932 933 934 935 936
M 24 26 24 21 24
ISE 1814.6 2135 2238.8 2561 2091.2
CR 38.8 35.9 38.9 44.5 39
FOM 21401.1 16808 17383.2 17385.6 18649.6
Table C.22: Results gathered by the improved algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk1032 n 845 846 847 848 849 850
M 31 29 27 26 25 24
ISE 2247.2 1576.3 1700.2 2303.5 1747.5 1941
CR 27.3 29.2 31.4 32.6 34 35.4
FOM 12130 18506.6 18451.5 14158.9 19433.3 18246.7
kk104 n 772 773 774 775 776 777
M 35 32 30 29 28 25
ISE 1807.3 1430.2 1809.5 2220.6 1982 2408
CR 22.1 24.2 25.8 26.7 27.7 31.1
FOM 12204.7 16890.3 14258.2 12034.9 13982.7 12907.1
kk1051 n 1241 1242 1243 1244 1245 1246
M 93 84 78 69 71 63
ISE 3542 1950.7 2379.5 3164.5 2788 3983.1
CR 13.3 14.8 15.9 18 17.5 19.8
FOM 3767.4 7579.5 6697.1 5697.2 6289.5 4965.4
kk1058 n 734 735 736 737 738 739
M 39 36 31 26 26 26
ISE 1688.2 1722.5 1789.4 2157.5 2069.7 1677.9
CR 18.8 20.4 23.7 28.3 28.4 28.4
FOM 11148.5 11852.8 13268 13138.2 13714.3 16939.2
kk1069 n 721 722 723 724 725 726
M 31 34 28 32 30 30
ISE 1598.7 1108.2 2178.8 1424.9 1664.9 1664.7
CR 23.3 21.2 25.8 22.6 24.2 24.2
FOM 14548.3 19162.6 11851.3 15878 14515.7 14537.5
kk1086 n 1128 1129 1130 1131 1132 1133
M 48 44 43 37 38 37
ISE 1950.2 2786.4 2896.6 3534 2873 3087.9
CR 23.5 25.7 26.3 30.6 29.8 30.6
FOM 12049.9 9208.7 9072.5 8649.7 10368.8 9916.8
kk1087 n 935 936 937 938 939 940
M 67 61 58 54 49 47
ISE 915 1110.4 1247.3 2043.5 2498.9 2563.3
CR 14 15.3 16.2 17.4 19.2 20
FOM 15252.4 13818.3 12951.6 8500.2 7668.8 7802.4
Table C.23: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk1032 n 851 852 853 854 855
M 24 23 23 25 24
ISE 2542 2180.3 2587.8 1971.5 2502.3
CR 35.5 37 37.1 34.2 35.6
FOM 13948.9 16990.4 14331.6 17326.9 14237.1
kk104 n 778 779 780 781 782
M 27 25 26 27 30
ISE 2102.4 2710.1 2528.4 2890.5 2507.7
CR 28.8 31.2 30 28.9 26.1
FOM 13705.7 11497.7 11865.4 10007.1 10394.5
kk1051 n 1247 1248 1249 1250 1251
M 62 54 53 52 55
ISE 3805 5887.1 5407.3 5178.2 5000.3
CR 20.1 23.1 23.6 24 22.7
FOM 5286 3925.7 4358.2 4642.3 4548.8
kk1058 n 740 741 742 743 744
M 26 26 25 25 23
ISE 1596.2 2121.9 2351.3 2272.8 3312.8
CR 28.5 28.5 29.7 29.7 32.3
FOM 17831.3 13431.2 12622.6 13076.4 9764.4
kk1069 n 727 728 729 730 731
M 28 26 26 26 26
ISE 2252 2630.6 2561.6 2613.9 2562.1
CR 26 28 28 28.1 28.1
FOM 11529.5 10643.9 10945.6 10741.5 10973.7
kk1086 n 1134 1135 1136 1137 1138
M 36 38 39 37 37
ISE 3358.2 3718 3713.1 4050.7 3723.4
CR 31.5 29.9 29.1 30.7 30.8
FOM 9380 8033.6 7844.7 7586.3 8260.5
kk1087 n 941 942 943 944 945
M 42 37 40 34 30
ISE 3349.1 3380.9 3182.8 3476.5 4586.9
CR 22.4 25.5 23.6 27.8 31.5
FOM 6689.7 7530.3 7407.1 7986.4 6867.4
Table C.24: Results gathered by the improved algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk113 n 1161 1162 1163 1164 1165 1166
M 37 35 36 35 33 30
ISE 2817.7 2429 2882.5 1976.9 2390 3793
CR 31.4 33.2 32.3 33.3 35.3 38.9
FOM 11136.1 13668.1 11207.5 16822.7 14771.2 10246.8
kk121 n 705 706 707 708 709 710
M 25 25 26 25 24 25
ISE 2224.4 2129.8 1740.3 2233 2128 2137.9
CR 28.2 28.2 27.2 28.3 29.5 28.4
FOM 12677.6 13259.3 15625.1 12682.4 13882.3 13283.9
kk139 n 688 689 690 691 692 693
M 27 25 21 20 20 17
ISE 1470.4 1722.5 2067.5 1815.5 1584.6 2702.7
CR 25.5 27.6 32.9 34.6 34.6 40.8
FOM 17329.9 16000 15892.4 19030.3 21834.7 15083
kk147 n 732 733 734 735 736 737
M 53 40 37 36 33 32
ISE 998.3 1082.3 1533.2 1585.7 1460.3 1902.5
CR 13.8 18.3 19.8 20.4 22.3 23
FOM 13834.9 16932.2 12938.7 12875.3 15273.4 12105.9
kk157 n 791 792 793 794 795 796
M 66 62 51 40 33 33
ISE 1018.4 883.9 1307.2 1762.6 2511.2 2635.1
CR 12 12.8 15.5 19.9 24.1 24.1
FOM 11768.7 14452.1 11894.8 11261.7 9593.5 9153.7
kk158 n 607 608 609 610 611 612
M 26 27 23 24 24 22
ISE 1170.9 1269.9 1016.7 1074.1 941.8 1137.1
CR 23.3 22.5 26.5 25.4 25.5 27.8
FOM 19939.2 17732.6 26044.1 23662.9 27032.9 24463.6
kk197 n 897 898 899 900 901 902
M 30 35 29 30 26 26
ISE 1513.5 1128.1 1139.9 1100.4 1727.4 1533
CR 29.9 25.7 31 30 34.7 34.7
FOM 19755.3 22744 27196.4 27262.1 20061.2 22630.9
Table C.25: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk113 n 1167 1168 1169 1170 1171
M 31 32 31 29 29
ISE 2799.1 2735.4 3250.6 3176.8 3067.5
CR 37.6 36.5 37.7 40.3 40.4
FOM 13449 13343.7 11600.9 12699.7 13163.6
kk121 n 711 712 713 714 715
M 24 26 25 25 24
ISE 2204.6 2125.6 1961.8 2103.9 2217.5
CR 29.6 27.4 28.5 28.6 29.8
FOM 13437.6 12882.9 14537.8 13574.5 13434.5
kk139 n 694 695 696 697 698
M 18 15 16 15 16
ISE 2113.4 3048 2961.7 3181.5 2638
CR 38.6 46.3 43.5 46.5 43.6
FOM 18243 15201 14687.5 14605.4 16537.3
kk147 n 738 739 740 741 742
M 31 29 30 31 34
ISE 2047.8 2910.9 2806.1 2330.1 1978.5
CR 23.8 25.5 24.7 23.9 21.8
FOM 11625.3 8754.2 8790.3 10258.6 11030.3
kk157 n 797 798 799 800 801
M 33 32 26 27 26
ISE 2861.3 3122.2 3418.6 3599.3 3481.9
CR 24.2 24.9 30.7 29.6 30.8
FOM 8440.8 7987.3 8989.3 8232.1 8847.9
kk158 n 613 614 615 616 617
M 21 22 21 20 21
ISE 1274.8 1484 1517.1 2009.9 1825.7
CR 29.2 27.9 29.3 30.8 29.4
FOM 22898.2 18806.4 19303.3 15324.1 16092.5
kk197 n 903 904 905 906 907
M 27 26 20 23 20
ISE 1417.4 2146 2231.3 1802.2 2341.3
CR 33.4 34.8 45.3 39.4 45.4
FOM 23595.1 16201.6 20279.3 21857.3 19369.2
Table C.26: Results gathered by the improved algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk3 n 1083 1084 1085 1086 1087 1088
M 39 40 34 34 34 32
ISE 3576.5 2086.6 2754.6 3755.9 3544.4 3502.5
CR 27.8 27.1 31.9 31.9 32 34
FOM 7764.3 12987.9 11585 8504.3 9020.1 9707.3
kk342 n 740 741 742 743 744 745
M 41 37 30 30 27 30
ISE 1612.3 1140.1 2295.2 1576.9 2428.2 1470.8
CR 18 20 24.7 24.8 27.6 24.8
FOM 11194.5 17565.6 10776.3 15705.8 11348.1 16884.4
kk373 n 464 465 466 467 468 469
M 20 21 18 18 16 17
ISE 1064.9 892.7 1251.8 1705.6 1858.2 1526.9
CR 23.2 22.1 25.9 25.9 29.3 27.6
FOM 21785.9 24804.7 20681 15211.4 15740.9 18068
kk402 n 851 852 853 854 855 856
M 41 41 32 31 29 31
ISE 2296.1 1794.6 3026.1 2944.6 1912.5 2110.9
CR 20.8 20.8 26.7 27.5 29.5 27.6
FOM 9039.7 11579.2 8808.7 9355.6 15415.8 13081.2
kk408 n 764 765 766 767 768 769
M 29 33 28 28 28 28
ISE 1229 876.6 966.9 1042.4 1198.2 1422.6
CR 26.3 23.2 27.4 27.4 27.4 27.5
FOM 21435.3 26444.2 28294.4 26278.9 22890.8 19306.3
kk418 n 649 650 651 652 653 654
M 49 45 41 37 33 29
ISE 541.2 825 1237.3 2389.1 2472.2 2972.3
CR 13.2 14.4 15.9 17.6 19.8 22.6
FOM 24473.1 17507.9 12833.2 7375.7 8004 7587.3
kk450 n 1041 1042 1043 1044 1045 1046
M 39 38 33 39 35 35
ISE 1476.2 1389.3 2409.3 1922.3 2669.9 2805.4
CR 26.7 27.4 31.6 26.8 29.9 29.9
FOM 18081.2 19737.2 13118.1 13925.6 11182.7 10652.9
Table C.27: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk3 n 1089 1090 1091 1092 1093
M 32 29 30 32 33
ISE 3352.8 6294.7 5222 4031.7 4453.8
CR 34 37.6 36.4 34.1 33.1
FOM 10150.1 5971.1 6964.1 8464.2 7436.7
kk342 n 746 747 748 749 750
M 27 26 26 25 25
ISE 1893.3 2309.4 2477.1 2520 2592.5
CR 27.6 28.7 28.8 30 30
FOM 14593.5 12440.7 11614.1 11889 11571.7
kk373 n 470 471 472 473 474
M 15 16 18 17 17
ISE 1814.1 2283.3 1397.8 1601.1 1506.8
CR 31.3 29.4 26.2 27.8 27.9
FOM 17272.5 12892.5 18759.1 17378.2 18504.7
kk402 n 857 858 859 860 861
M 30 30 30 32 28
ISE 2389.1 2501.3 2391.7 2426.2 3461.8
CR 28.6 28.6 28.6 26.9 30.8
FOM 11957 11433.9 11972.1 11077 8882.6
kk408 n 770 771 772 773 774
M 28 24 24 26 26
ISE 1176.2 1859.4 1570.1 1784.3 1591.7
CR 27.5 32.1 32.2 29.7 29.8
FOM 23380 17277.1 20486.5 16662.9 18702.3
kk418 n 655 656 657 658 659
M 27 28 25 27 29
ISE 3178.2 2762.8 3117.5 3009 2854
CR 24.3 23.4 26.3 24.4 22.7
FOM 7632.9 8479.9 8429.9 8099.1 7962.2
kk450 n 1047 1048 1049 1050 1051
M 34 33 33 34 32
ISE 2776.9 3161.2 3167.6 2906.3 4386.1
CR 30.8 31.8 31.8 30.9 32.8
FOM 11089.3 10045.9 10035.2 10626.2 7488.2
Table C.28: Results gathered by the improved algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk457 n 1043 1044 1045 1046 1047 1048
M 51 48 50 44 46 42
ISE 1634.3 2192.9 2096.8 1997.2 2238.3 2964
CR 20.5 21.8 20.9 23.8 22.8 25
FOM 12513.4 9918.4 9967.6 11903.3 10168.9 8418.4
kk469 n 905 906 907 908 909 910
M 47 40 39 35 34 34
ISE 1542.3 1306.9 1818.3 2833.5 2990 2634.5
CR 19.3 22.7 23.3 25.9 26.7 26.8
FOM 12485.2 17331 12790 9155.7 8941.6 10159.3
kk554 n 801 802 803 804 805 806
M 34 32 31 26 27 27
ISE 1550.2 1427.5 1733.9 2906.7 3035.9 2684.9
CR 23.6 25.1 25.9 30.9 29.8 29.9
FOM 15197 17556.5 14939.1 10638.7 9820.7 11118.3
kk577 n 1661 1662 1663 1664 1665 1666
M 97 88 78 76 68 66
ISE 3105.8 3425.3 3829.9 4311.1 4563.2 5074.8
CR 17.1 18.9 21.3 21.9 24.5 25.2
FOM 5513.5 5513.8 5566.8 5078.7 5365.8 4974.1
kk581 n 615 616 617 618 619 620
M 35 39 26 27 23 25
ISE 901.7 824.7 809 1560.4 1071.9 1059.1
CR 17.6 15.8 23.7 22.9 26.9 24.8
FOM 19487.5 19152.3 29332 14668.5 25107.8 23417.1
kk745 n 619 620 621 622 623 624
M 26 27 23 22 22 22
ISE 1305.4 944.9 1295.3 1412 1660.9 1785.8
CR 23.8 23 27 28.3 28.3 28.4
FOM 18237.2 24300.9 20843.8 20023.2 17050 15882.9
kk778 n 1572 1573 1574 1575 1576 1577
M 119 108 99 101 90 83
ISE 2379.8 1851.5 2773.5 3352.9 3720.7 5254.4
CR 13.2 14.6 15.9 15.6 17.5 19
FOM 5550.9 7866.3 5732.5 4651 4706.4 3616
Table C.29: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk457 n 1049 1050 1051 1052 1053
M 40 40 37 42 41
ISE 2868.9 3892.4 4081.1 2974.2 3819.9
CR 26.2 26.3 28.4 25 25.7
FOM 9141.3 6743.9 6960.2 8421.6 6723.5
kk469 n 911 912 913 914 915
M 33 32 31 28 30
ISE 2449.9 3195.2 3851.8 4913.5 4060.5
CR 27.6 28.5 29.5 32.6 30.5
FOM 11268.4 8919.5 7646.2 6643.5 7511.5
kk554 n 807 808 809 810 811
M 26 26 24 24 25
ISE 3231.6 3028.4 3694.2 3710.5 4037.8
CR 31 31.1 33.7 33.8 32.4
FOM 9604.6 10261.9 9124.6 9095.8 8034.1
kk577 n 1667 1668 1669 1670 1671
M 61 63 63 63 60
ISE 6537.2 6231.6 5703.2 6406.2 6967.9
CR 27.3 26.5 26.5 26.5 27.9
FOM 4180.4 4248.7 4645.2 4137.9 3996.9
kk581 n 621 622 623 624 625
M 18 18 17 18 18
ISE 1065.4 1409 2016.4 1532.3 1623.7
CR 34.5 34.6 36.6 34.7 34.7
FOM 32382.1 24525.7 18174.8 22623.7 21384.8
kk745 n 625 626 627 628 629
M 22 22 21 21 21
ISE 2071.5 1909.9 1852.5 4925.1 4629
CR 28.4 28.5 29.9 29.9 30
FOM 13714.2 14898.2 16117.3 6071.9 6470.5
kk778 n 1578 1579 1580 1581 1582
M 83 75 75 77 78
ISE 4814.9 7214.3 5177.2 5994.1 5696.4
CR 19 21.1 21.1 20.5 20.3
FOM 3948.6 2918.3 4069.1 3425.4 3560.5
Table C.30: Results gathered by the improved algorithm.
Figure Variable L=5 L=6 L=7 L=8 L=9 L=10
kk788 n 843 844 845 846 847 848
M 41 39 38 38 39 39
ISE 1227.2 1796.4 1699 1856.9 1725.2 1964.3
CR 20.6 21.6 22.2 22.3 21.7 21.7
FOM 16754.4 12046.8 13088.2 11989.4 12588.8 11069.6
test31 n 1333 1334 1335 1336 1337 1338
M 31 29 29 30 30 30
ISE 2454.5 3258.4 3270.4 2657.4 2750.7 2880.5
CR 43 46 46 44.5 44.6 44.6
FOM 17519.1 14117.2 14076.1 16758.4 16201.7 15483.4
Table C.31: Results gathered by the improved algorithm.
Figure Variable L=11 L=12 L=13 L=14 L=15
kk788 n 849 850 851 852 853
M 40 39 35 29 27
ISE 1748.5 2092.8 2289.2 4016.4 5115.3
CR 21.2 21.8 24.3 29.4 31.6
FOM 12139.1 10414 10621.1 7314.8 6176.1
test31 n 1339 1340 1341 1342 1343
M 30 29 29 30 31
ISE 2924.5 3300.7 3664.6 3675.3 3268.9
CR 44.6 46.2 46.2 44.7 43.3
FOM 15262 13999.3 12618.4 12171.4 13253.1
Table C.32: Results gathered by the improved algorithm.
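A note on how the statistics in these tables relate to one another: with n contour points and M polygon segments, CR = n/M, ISE is the sum of squared distances from the contour points to their approximating segments, and FOM = CR/ISE. The FOM values tabulated here appear to be CR/ISE scaled by 10^6; for example, for kk1069 at L = 5, (721/31)/1598.7 x 10^6 is approximately 14548.3, matching the table. The Matlab sketch below computes the three measures under those assumptions; the function and helper names are illustrative, not taken from the thesis code.

function [CR, ISE, FOM] = approx_metrics(curve, vertices, segidx)
% APPROX_METRICS  Evaluation measures for a polygonal approximation.
%   curve    - n-by-2 list of (x,y) contour points
%   vertices - (M+1)-by-2 polygon vertices, in contour order
%   segidx   - n-by-1 index of the segment approximating each point
n  = size(curve, 1);
M  = size(vertices, 1) - 1;       % number of line segments
CR = n / M;                       % compression ratio
ISE = 0;
for i = 1:n
    a = vertices(segidx(i), :);
    b = vertices(segidx(i) + 1, :);
    ISE = ISE + point_seg_dist2(curve(i, :), a, b);
end
FOM = 1e6 * CR / ISE;             % scaled by 10^6, as the tables appear to be

function d2 = point_seg_dist2(p, a, b)
% Squared distance from point p to the segment from a to b.
ab = b - a;
t  = dot(p - a, ab) / max(dot(ab, ab), eps);
t  = min(max(t, 0), 1);           % clamp the projection to the segment
q  = a + t * ab;                  % closest point on the segment
d2 = sum((p - q).^2);

Given a contour and its approximation, [CR, ISE, FOM] = approx_metrics(curve, vertices, segidx) then returns the three statistics reported throughout this appendix.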
Appendix D
Additional Visual Results
Figure D.1: Visual comparison. Original algorithm on the top, improved version on the bottom. L = 10 applied to image kk1051.
Figure D.2: Visual comparison. Original algorithm on the top, improved version on the bottom. L = 14 applied to image kk418.
Figure D.3: Visual comparison. Original algorithm on the top, improved version on the bottom. L = 10 applied to image kk450.
Figure D.4: Visual comparison. Original algorithm on the top, improved version on the bottom. L = 8 applied to image comm6.
Figure D.5: Results obtained with the Malcolm algorithm at different worm sizes. The original image is shown on the left-hand side of figure B.6.
Figure D.6: Results obtained with the improved version of the Malcolm algorithm at different worm sizes. The original image is shown on the left-hand side of figure B.6.
Figure D.7: Visual comparison proposed by Rosin (1997). Data used for this plot are shown in table 4.5. The closer an algorithm lies vertically to the optimal line, the better its performance. The object under analysis is shown on the left-hand side of figure B.6.
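The comparison in figure D.7 can also be summarised numerically with the measures defined by Rosin (1997): fidelity compares the obtained error with the error of the optimal algorithm using the same number of segments, efficiency compares the number of segments used with the number the optimal algorithm needs to reach the same error, and merit combines the two as their geometric mean. The Matlab sketch below is an illustrative rendering of those formulas (the function and argument names are assumptions), not the code used for the experiments.

function [fid, eff, merit] = rosin_measures(E, E_opt, M, M_opt)
% ROSIN_MEASURES  Assessment measures from Rosin (1997), sketched.
%   E     - ISE of the tested algorithm using M segments
%   E_opt - ISE of the optimal algorithm using the same M segments
%   M_opt - segments the optimal algorithm needs to reach error E
fid   = 100 * E_opt / E;          % fidelity: 100 means optimal error
eff   = 100 * M_opt / M;          % efficiency: 100 means optimal size
merit = sqrt(fid * eff);          % combined measure (geometric mean)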
Figure D.8: Results obtained with the Malcolm algorithm at different worm sizes. The original image is shown on the right-hand side of figure B.6.
Figure D.9: Results obtained with the improved version of the Malcolm algorithm at different worm sizes. The original image is shown on the right-hand side of figure B.6.
Bibliography
Arrebola, F., Bandera, A., Camacho, P., and Sandoval, F. (1997). Corner detection by local histogram of contour code. Electronics Letters, 33(21):1769–1771.
Backnak, R. and Celenk, M. (1989). A corner detection-based object representation technique for 2-D images. Intelligent Control, 1989. Proceedings., IEEE International Symposium on, pages 186–190.
Bandera, A., Urdiales, C., Arrebola, F., and Sandoval, F. (2000). Corner detection by means of adaptively estimated curvature function. Electronics Letters, 36(2):124–126.
Banerjee, S., Niblack, W., and Flickner, M. (1996). A minimum description length polygonal approximation method. Technical Report RJ 10007 (89096), IBM Research Division.
Beus, H. and Tiu, S. (1987). An improved corner detection algorithm based on chain-coded plane curves. Pattern Recognition, 20:291–296.
Canny, J. (1986). A computational approach to edge detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 8(6):679–698.
Chetverikov, D. and Szabó, Z. (1999). A simple and efficient algorithm for detection of high curvature points in planar curves. Electronics Letters.
Davies, E. (1988). Application of the generalised Hough transform to corner detection. Computers and Digital Techniques, IEE Proceedings E, 135(1):49–54.
Deguchi, K. and Aoki, S. (1990). Regularized polygonal approximation for analysis and interpretation of planar contour figures. Pattern Recognition, 1990. Proceedings., 10th International Conference on, 1:865–869.
Dias, P., Kassim, A., and Srinivasan, V. (1995). A neural network based corner detection method. Neural Networks, 1995. Proceedings., IEEE International Conference on, 4:2116–2120.
Freeman, H. (1961). On the encoding of arbitrary geometric configurations. IRE Trans. on Electronic Computers, EC-10:260–268.
Freeman, H. and Davis, L. S. (1977). A corner-finding algorithm for chain-coded curves. IEEE Trans. Computers, 26:297–303.
Hosur, P. and Ma, K.-K. (1999). Optimal algorithm for progressive polygon approximation of discrete planar curves. Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on, 1:16–20.
Huang, S.-C. and Sun, Y.-N. (1996). Polygonal approximation using genetic algorithm. Evolutionary Computation, 1996., Proceedings of IEEE International Conference on, pages 469–474.
Huang, S.-C. and Sun, Y.-N. (1998). Determination of optimal polygonal approximation using genetic algorithms. Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, pages 124–129.
Kaneko, T. and Okudaira, M. (1985). Encoding of arbitrary curves based on the chain code representation. Communications, IEEE Transactions on, 33(7):697–707.
Katzir, N., Lindenbaum, M., and Porat, M. (1994). Curve segmentation under partial occlusion. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 16(5):513–519.
Kass, M., Witkin, A., and Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4):321–331.
Kwok, P. (1992). Non-recursive thinning algorithms using chain codes. Pattern Recognition, 1992. Vol. III. Conference C: Image, Speech and Signal Analysis, Proceedings., 11th IAPR International Conference on, pages 369–372.
Laganiere, R. (1998). Morphological corner detection. Computer Vision, 1998. Sixth International Conference on, pages 280–285.
Lee, J.-S., Sun, Y.-N., and Chen, C.-H. (1993a). Boundary-based corner detection using wavelet transform. Systems, Man and Cybernetics, 1993. Systems Engineering in the Service of Humans, Conference Proceedings., International Conference on, pages 513–516.
Lee, J.-S., Sun, Y.-N., and Chen, C.-H. (1993b). Gray-level-based corner detection by using wavelet transform. TENCON 93. Proceedings. Computer, Communication, Control and Power Engineering. 1993 IEEE Region 10 Conference on, pages 970–973.
Lee, J.-S., Sun, Y.-N., and Chen, C.-H. (1995). Multiscale corner detection by using wavelet transform. Image Processing, IEEE Transactions on, 4(1):100–104.
Liu, H. and Srinath, M. (1990). Corner detection from chain-code. Pattern Recognition, 23:51–68.
Lowe, D. G. (1987). Three-dimensional object recognition from single two-dimensional images. Artificial Intelligence, 31(3):355–395.
Lu, C.-C. and Dunham, J. (1991). Highly efficient coding schemes for contour lines based on chain code representations. Communications, IEEE Transactions on, 39(10):1511–1514.
Luo, B., Cross, A., and Hancock, E. (1998). Corner detection using vector potential. Pattern Recognition, 1998. Proceedings. Fourteenth International Conference on, 2:1018–1021.
Malcolm, C. (1983). Polygonal approximation to a continuous path. Proceedings of the 3rd International Conference on Robot Vision and Sensory Controls, pages 61–68.
Medioni, G. and Yasumoto, Y. (1986). Corner detection and curve representation using cubic b-splines. Robotics and Automation. Proceedings. 1986 IEEE International Conference on, 3:764–769.
Melen, T. and Ozanian, T. (1993). A fast algorithm for dominant point detection on chain-coded contours. Proc. Fifth Int'l Conf. Computer Analysis of Images and Patterns.
Mikheev, A., Vincent, L., and Faber, V. (2001). High-quality polygonal contour approximation based on relaxation. Document Analysis and Recognition, 2001. Proceedings. Sixth International Conference on, pages 361–365.
Mokhtarian, F. and Suomela, R. (1999). Curvature scale space for image point feature detection. Image Processing And Its Applications, 1999. Seventh International Conference on (Conf. Publ. No. 465), 1:206–210.
Pavlidis, T. and Horowitz, S. (1974). Segmentation of plane curves. IEEE Transactions on Computers, pages 860–870.
Perez, J. and Vidal, E. (1994). Optimum polygonal approximation of digitized curves. Pattern Recognition Letters, 15(8):743–750.
Phillips, T. and Rosenfeld, A. (1987). A method of curve partitioning using arc-chord distance. Pattern Recognition Letters, 5:285–288.
Pikaz, A. and Dinstein, I. (1994a). Optimal polygonal approximation of digital curves. Computer Vision & Image Processing., Proceedings of the 12th IAPR International Conference on, 1:619–621.
Pikaz, A. and Dinstein, I. (1994b). Using simple decomposition for smoothing and feature point detection of noisy digital curves. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 16(8):808–813.
Pinheiro, A., Izquierdo, E., and Ghanbari, M. (2000). Shape matching using a curvature based polygonal approximation in scale-space. Image Processing, 2000. Proceedings. 2000 International Conference on, 2:538–541.
Pitas, I. (1993). Digital Image Processing Algorithms. Prentice Hall.
Quddus, A. and Fahmy, M. (1999). Fast wavelet-based corner detection technique. Electronics Letters, 35(4):287–288.
Quddus, A. and Gabbouj, M. (2000). Wavelet based corner detection using singular value decomposition. Acoustics, Speech, and Signal Processing, 2000. ICASSP 00. Proceedings. 2000 IEEE International Conference on, 6:2227–2230.
Rajan, P. and Davidson, J. (1989). Evaluation of corner detection algorithms. System Theory, 1989. Proceedings., Twenty-First Southeastern Symposium on, pages 29–33.
Ramer, U. (1972). An iterative procedure for the polygonal approximation of plane curves. Computer Graphics and Image Processing, 1:244–256.
Rattarangsi, A. and Chin, R. (1990). Scale-based detection of corners of planar curves. Pattern Recognition, 1990. Proceedings., 10th International Conference on, 1:923–930.
Reche, P., Urdiales, C., Bandera, A., Trazegnies, C., and Sandoval, F. (2002). Corner detection by means of contour local vectors. Electronics Letters, 38(14):699–701.
Rosenfeld, A. and Johnston, E. (1973). Angle detection on digital curves. IEEE Trans. Computers, 22:875–878.
Rosin, P. (1997). Techniques for assessing polygonal approximations of curves. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 19(6):659–666.
Rosin, P. (2003). Assessing the behaviour of polygonal approximation algorithms. Pattern Recognition, 36(2):505–518.
Sanchiz, J., Inesta, J., and Pla, F. (1996). A neural network-based algorithm to detect dominant points from the chain-code of a contour. Pattern Recognition, 1996., Proceedings of the 13th International Conference on, 3:325–329.
Smith, S. M. and Brady, J. M. (1995). SUSAN: A new approach to low level image processing. Technical Report TR95SMS1c, Oxford University, Chertsey, Surrey, UK.
Teh, C.-H. and Chin, R. T. (1989). On the detection of dominant points on digital curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(8):859–872.
Torres-Huitzil, C. and Arias-Estrada, M. (2000). An FPGA architecture for high speed edge and corner detection. Computer Architectures for Machine Perception, 2000. Proceedings. Fifth IEEE International Workshop on, pages 112–116.
Urdiales, C., Trazegnies, C., Bandera, A., and Sandoval, F. (2003). Corner detection based on adaptively filtered curvature function. Electronics Letters, 39(5):426–428.
Yuan, J. and Suen, C. (1992). An optimal algorithm for detecting straight lines in chain codes. Image, Speech and Signal Analysis, Proceedings., 11th IAPR International Conference on, 3:692–695.
Zhu, Y. and Seneviratne, L. (1997). Optimal polygonal approximation of digitised curves. Vision, Image and Signal Processing, IEE Proceedings-, 144(1):8–14.