Intelligent Systems Reference Library 117

Diego Oliva
Erik Cuevas

Advances and Applications of Optimised Algorithms in Image Processing
Intelligent Systems Reference Library

Volume 117

Series editors
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
Lakhmi C. Jain, University of Canberra, Canberra, Australia;
Bournemouth University, UK;
KES International, UK
e-mails: jainlc2002@yahoo.co.uk; Lakhmi.Jain@canberra.edu.au
URL: http://www.kesinternational.org/organisation.php
About this Series

The aim of this series is to publish a Reference Library, including novel advances
and developments in all aspects of Intelligent Systems in an easily accessible and
well structured form. The series includes reference works, handbooks, compendia,
textbooks, well-structured monographs, dictionaries, and encyclopedias. It contains
well integrated knowledge and current information in the field of Intelligent
Systems. The series covers the theory, applications, and design methods of
Intelligent Systems. Virtually all disciplines such as engineering, computer science,
avionics, business, e-commerce, environment, healthcare, physics and life science
are included.

More information about this series at http://www.springer.com/series/8578


Diego Oliva Erik Cuevas

Advances and Applications of Optimised Algorithms in Image Processing

Diego Oliva
Departamento de Electrónica, CUCEI
Universidad de Guadalajara
Guadalajara, Jalisco
Mexico

and

Tecnológico de Monterrey, Campus Guadalajara
Zapopan, Jalisco
Mexico

Erik Cuevas
Departamento de Electrónica, CUCEI
Universidad de Guadalajara
Guadalajara, Jalisco
Mexico

ISSN 1868-4394 ISSN 1868-4408 (electronic)


Intelligent Systems Reference Library
ISBN 978-3-319-48549-2 ISBN 978-3-319-48550-8 (eBook)
DOI 10.1007/978-3-319-48550-8
Library of Congress Control Number: 2016955427

© Springer International Publishing AG 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my family and Gosia Kijak, you are
always my support
Foreword

This book brings together and explores the possibilities of combining image
processing and artificial intelligence, with a focus on machine learning and
optimization, two relevant fields in computer science. Most existing books treat
these major topics separately rather than in conjunction, which gives this work a
special interest. The problems addressed and described in the different chapters
were selected to demonstrate the capabilities of optimization and machine learning
for solving different issues in image processing. They were chosen for their
relevance in the field and because they provide important cues on particular
application domains. The topics include the study of different methods for image
segmentation, and more specifically the detection of geometrical shapes and object
recognition, whose applications in medical image processing, based on the
modification of optimization algorithms with machine learning techniques, provide
a new point of view. In short, the purpose and motivation of the book are to show
that optimization and machine learning are attractive alternatives for image
processing techniques, offering advantages over other existing strategies. Complex
tasks can be addressed under these approaches, providing new solutions or
improving existing ones thanks to the foundation they supply for solving problems
in specific areas and applications.
Unlike other existing books in similar areas, the book proposed here introduces
the reader to the new trends in the use of optimization and machine learning
techniques applied to image processing. Moreover, each chapter includes
comparisons and updated references that support the results obtained by the
proposed approaches, while also providing the reader with a practical guide to the
reference sources.
The book was designed for graduate and postgraduate education, where students
can find support for reinforcing, consolidating or deepening their knowledge, and
for researchers. Teachers can also find support for the teaching process in areas
involving machine vision, or examples related to the main techniques addressed.
Additionally, professionals who want to learn and explore the advances in the
concepts and implementation of optimization and learning-based algorithms
applied to image processing will find in this book an excellent guide for that
purpose.
The content of this book has been organized as an introduction to machine
learning and optimization, after which each chapter addresses and solves selected
problems in image processing. In this regard, Chaps. 1 and 2 provide, respectively,
introductions to machine learning and optimization, where the basic and main
concepts related to image processing are addressed. Chapter 3 describes the
electromagnetism-like optimization (EMO) algorithm, along with the appropriate
modifications required for it to work properly in image processing. Moreover, its
advantages and shortcomings are also explored. Chapter 4 addresses digital image
segmentation as an optimization problem. It explains how image segmentation is
treated as an optimization problem using different objective functions. Template
matching using a physically inspired algorithm is addressed in Chap. 5, where
template matching is likewise considered as an optimization problem, based on a
modification of EMO and on the use of a memory to reduce the number of
function calls. Chapter 6 addresses the detection of circular shapes in digital
images, again formulated as an optimization problem. A practical medical
application is proposed in Chap. 7, where the problem to be solved is blood cell
segmentation by circle detection. This chapter introduces a new objective function
to measure the match between the candidate solutions and the blood cells
contained in the images. Finally, Chap. 8 proposes an improvement of EMO that
applies the concept of opposition-based electromagnetism-like optimization. This
chapter analyzes a modification of EMO in which a machine learning technique is
used to improve its performance. An important advantage of this structure is that
each chapter can be read separately. Although all chapters are interconnected,
Chap. 3 serves as the basis for some of them.
Its concise yet comprehensive treatment of the topics addressed makes this work
an important reference in image processing, an area in which a significant number
of technologies are continuously emerging, sometimes unconsolidated and
scattered across the literature. Therefore, congratulations to the authors for their
diligence, oversight and dedication in assembling the topics addressed in the book.
The computer vision community will be very grateful for this well-done work.

July 2016 Gonzalo Pajares


Universidad Complutense de Madrid
Preface

The use of cameras to obtain images or videos from the environment has become
widespread in recent years. These sensors are now present in our lives, from cell
phones to industrial, surveillance and medical applications. The tendency is to have
automatic applications that can analyze the images obtained with the cameras. Such
applications involve the use of image processing algorithms.
Image processing is a field in which the environment is analyzed using samples
taken with a camera. The idea is to extract features that permit the identification
of the objects contained in the image. To achieve this goal, it is necessary to apply
different operators that allow a correct analysis of a scene. Most of these operations
are computationally expensive. On the other hand, optimization approaches are
extensively used in different areas of engineering. They are used to explore complex
search spaces and obtain the most appropriate solutions using an objective function.
This book presents a study of the use of optimization algorithms in complex
image processing problems. The selected problems explore areas from the theory of
image segmentation to the detection of complex objects in medical images. The
concepts of machine learning and optimization are analyzed to provide an overview
of the application of these tools in image processing.
The aim of this book is to present a study of the use of these new tendencies to solve
image processing problems. When we started working on these topics almost ten
years ago, the related information was sparse. Now we realize that researchers
were divided and closed within their own fields, and the use of cameras was
not as popular then. This book presents in a practical way the task of adapting the
traditional methods of a specific field so that they can be solved using modern
optimization algorithms. Moreover, in our study we noticed that optimization
algorithms can also be modified and hybridized with machine learning techniques.
Such modifications are included in some chapters. The reader will see that our goal
is to show that a natural link exists between image processing and optimization. To
achieve this objective, the first three chapters introduce the concepts of machine
learning, optimization and the optimization technique used to solve the problems.
The remaining chapters first present an introduction to the problem to be solved
and then explain the basic ideas and concepts behind the implementations.


The book was planned considering that the readers could be students, researchers
expert in the field, or practitioners not yet completely involved with the topics.
It has been structured so that each chapter can be read independently
from the others. Chapter 1 describes machine learning (ML), concentrating on its
elementary concepts. Chapter 2 explains the theory related to global optimization
(GO). Readers who are familiar with these topics may wish to skip these chapters.
In Chap. 3 the electromagnetism-like optimization (EMO) algorithm is intro-
duced as a tool to solve complex optimization problems. The theory of physics
behind the EMO operators is explained. Moreover, their pros and cons are widely
analyzed, including some of the most significant modifications.
Chapter 4 presents three alternative methodologies for image segmentation
considering different objective functions. The EMO algorithm is used to find the
best thresholds that can segment the histogram of a digital image.
In Chap. 5 the template matching problem is introduced, which consists of
detecting objects in an image using a template. Here the EMO algorithm optimizes
an objective function. Moreover, improvements to reduce the number of
evaluations and to improve the convergence velocity are also explained.
Continuing with object detection, Chap. 6 shows how the EMO algorithm can be
applied to detect circular shapes embedded in digital images. Meanwhile, in
Chap. 7 a modified objective function is used to identify white blood cells in
medical images using EMO.
Chapter 8 shows how a machine learning technique could improve the perfor-
mance of an optimization algorithm without affecting its main features such as
accuracy or convergence.
Writing this book was a very rewarding experience where many people were
involved. We acknowledge Dr. Gonzalo Pajares for always being available to help
us. We express our gratitude to Prof. Lakhmi Jain, who so warmly sustained this
project. Acknowledgements also go to Dr. Thomas Ditzinger, who so kindly agreed
to its appearance.
Finally, it is necessary to mention that this book is a small piece in the puzzle of
image processing and optimization. We would like to encourage the reader to
explore and expand this knowledge in order to create their own implementations
according to their own needs.

Zapopan, Mexico Diego Oliva


Guadalajara, Mexico Erik Cuevas
July 2016
Contents

1 An Introduction to Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Types of Machine Learning Strategies . . . . . . . . . . . . . . . . . . . . 2
1.3 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Parametric and Non-parametric Models . . . . . . . . . . . . . . . . . . . . . 4
1.5 Overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 The Curse of Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.7 Bias-Variance Trade-Off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 Data into Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Definition of an Optimization Problem . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Classical Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Evolutionary Computation Methods . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Structure of an Evolutionary Computation Algorithm . . . . . 18
2.4 Optimization and Image Processing and Pattern Recognition . . . . . 19
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3 Electromagnetism-Like Optimization Algorithm: An
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1 Introduction to Electromagnetism-Like Optimization (EMO) . . . . . 23
3.2 Optimization Inspired in Electromagnetism . . . . . . . . . . . . . . . . . . 24
3.2.1 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.2 Local Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.3 Total Force Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.4 Movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 A Numerical Example Using EMO . . . . . . . . . . . . . . . . . . . . . . . . 30


3.4 EMO Modifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


3.4.1 Hybridizing EMO with Descent Search (HEMO) . . . . . . . . 34
3.4.2 EMO with Fixed Search Pattern (FEMO) . . . . . . . . . . . . . . 37
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4 Digital Image Segmentation as an Optimization Problem . . . . . . . . . 43
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Image Multilevel Thresholding (MT) . . . . . . . . . . . . . . . . . . . . . . . 45
4.2.1 Between-Class Variance (Otsu’s Method) . . . . . . . . . . . 46
4.2.2 Entropy Criterion Method (Kapur’s Method) . . . . . . . . . . . 48
4.2.3 Tsallis Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 Multilevel Thresholding Using EMO (MTEMO) . . . . . . . . . . . . . . 51
4.3.1 Particle Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.2 EMO Implementation with Otsu’s and Kapur’s
Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3 EMO Implementation with Tsallis Entropy . . . . . . . . . . . . . 52
4.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4.1 Otsu and Kapur Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4.2 Tsallis Entropy Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5 Template Matching Using a Physical Inspired Algorithm . . . . . . . . . 93
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Template Matching Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.3 New Local Search Procedure in EMO for Template Matching . . . . 98
5.3.1 The New Local Search Procedure for EMO . . . . . . . . . . . . 98
5.3.2 Fitness Estimation for Velocity Enhancement in TM . . . . . 102
5.3.3 Template Matching Using EMO . . . . . . . . . . . . . . . . . . . . . 102
5.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6 Detection of Circular Shapes in Digital Images. . . . . . . . . . . . . . . . . . 113
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.2 Circle Detection Using EMO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.2.1 Particle Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.2.2 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.2.3 EMO Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.3.1 Circle Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.3.2 Test on Shape Discrimination . . . . . . . . . . . . . . . . . . . . . . . 121
6.3.3 Multiple Circle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.3.4 Circular Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

6.3.5 Circle Localization from Occluded or Imperfect


Circles and Arc Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 124
6.3.6 Accuracy and Computational Time . . . . . . . . . . . . . . . . . . . 125
6.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7 A Medical Application: Blood Cell Segmentation
by Circle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.2 Circle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2.1 Data Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2.2 Particle Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2.3 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2.4 EMO Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.2.5 White Blood Cell Detection . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3 A Numerical Example of White Blood Cells Detection . . . . . . . . . 146
7.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7.5 Comparisons to Other Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.5.1 Detection Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.5.2 Robustness Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.5.3 Stability Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8 An EMO Improvement: Opposition-Based
Electromagnetism-Like for Global Optimization . . . . . . . . . . . . . . . . . 159
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.2 Opposition-Based Learning (OBL) . . . . . . . . . . . . . . . . . . . . . . 161
8.2.1 Opposite Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8.2.2 Opposite Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
8.2.3 Opposition-Based Optimization . . . . . . . . . . . . . . . . . . . . . . 162
8.3 Opposition-Based Electromagnetism-Like Optimization
(OBEMO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.3.1 Opposition-Based Population Initialization . . . . . . . . . . . . . 163
8.3.2 Opposition-Based Production for New Generation . . . . . . . 165
8.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.4.1 Test Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.4.2 Parameter Settings for the Involved EMO Algorithms . . . . 166
8.4.3 Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Chapter 1
An Introduction to Machine Learning

1.1 Introduction

We are already in the era of big data. The overall amount of data is steadily
growing. There are about one trillion web pages; one hour of video is uploaded
to YouTube every second, amounting to 10 years of content every day. Banks
handle more than 1 M transactions per hour and have databases containing more than
2.5 petabytes ($2.5 \times 10^{15}$ bytes) of information; and so on [1].
In general, we define machine learning as a set of methods that can automatically
detect patterns in data, and then use the uncovered patterns to predict future data, or
to perform other kinds of decision making under uncertainty. Learning means that
novel knowledge is generated from observations and that this knowledge is used to
achieve defined objectives. Data itself is already knowledge. But for certain
applications and for human understanding, large data sets cannot directly be applied
in their raw form. Learning from data means that new condensed knowledge is
extracted from the large amount of information [2].
Some typical machine learning problems include, for example in bioinformatics,
the analysis of large genome data sets to detect illnesses and for the development of
drugs. In economics, the study of large data sets of market data can improve the
behavior of decision makers. Prediction and inference can help to improve planning
strategies for efficient market behavior. The analysis of share markets and stock
time series can be used to learn models that allow the prediction of future devel-
opments. There are thousands of further examples that require the development of
efficient data mining and machine learning techniques. Machine learning tasks vary
in various kinds of ways, e.g., the type of learning task, the number of patterns, and
their size [2].


1.2 Types of Machine Learning Strategies

Machine learning methods are usually divided into three main types: supervised,
unsupervised and reinforcement learning [3]. In the predictive or supervised
learning approach, the goal is to learn a mapping from inputs $x$ to outputs $y$, given a
labeled set of input-output pairs $D = \{(x_i, y_i)\}_{i=1}^{N}$, $x_i = (x_i^1, \ldots, x_i^d)$. Here $D$ is
called the training data set, and $N$ represents the number of training examples.
In the simplest formulation, each training vector $x_i$ is a $d$-dimensional vector,
where each dimension represents a feature or attribute of $x_i$. Similarly, $y_i$ symbolizes
the category assigned to $x_i$. Such categories form a set defined as
$y_i \in \{1, \ldots, C\}$. When $y_i$ is categorical, the problem is known as classification, and
when $y_i$ is real-valued, the problem is known as regression. Figure 1.1 shows a
schematic representation of supervised learning.
The second main method of machine learning is unsupervised learning. In
unsupervised learning, it is only necessary to provide the data $D = \{x_i\}_{i=1}^{N}$.
Therefore, the objective of an unsupervised algorithm is to automatically find
patterns in the data which are not initially apparent. This process is sometimes
called knowledge discovery. Under such conditions, it is a much less
well-defined problem, since we are not told what kinds of patterns to look for, and
there is no obvious error metric to use (unlike supervised learning, where we can
compare our prediction of $y_i$ for a given $x_i$ to the observed value). Figure 1.2
illustrates the process of unsupervised learning. In the figure, data are automatically
classified into two categories according to their distances, as in clustering
algorithms.
Reinforcement learning is the third method of machine learning. It is less
popular compared with the supervised and unsupervised methods. Under
reinforcement learning, an agent learns to behave in an unknown scenario through
the signals of reward and punishment provided by a critic. In contrast to supervised
learning, the reward and punishment signals give less information, in most
cases only failure or success. The final objective of the agent is to maximize the
total reward obtained in a complete learning episode. Figure 1.3 illustrates the
process of reinforcement learning.

Fig. 1.1 Schematic representation of supervised learning (inputs $x_i$ are fed to the supervised learning algorithm, whose actual output is compared with the desired output $y_i$ to produce an error signal)

Fig. 1.2 Process of unsupervised learning. Data are automatically classified into two categories according to their distances, as in clustering algorithms

Fig. 1.3 Process of reinforcement learning (the agent acts on an unknown scenario and receives its state, while a critic provides reward/punishment values)

1.3 Classification

Classification considers the problem of determining categorical labels for unlabeled
patterns based on observations. Let $(x_1, y_1), \ldots, (x_N, y_N)$ be observations of
$d$-dimensional continuous patterns, i.e., $x_i \in \mathbb{R}^d$, with discrete labels $y_1, \ldots, y_N$. The
objective in classification is to obtain a functional model $f$ that allows a reasonable
prediction of the unknown class label $y'$ for a new pattern $x'$. Patterns without labels
should be assigned the labels of patterns that are sufficiently similar, e.g., that are close
to the target pattern in data space, that come from the same distribution, or that lie
on the same side of a separating decision function. But learning from observed
patterns can be difficult. Training sets can be noisy, important features may be
unknown, similarities between patterns may not be easy to define, and observations
may not be sufficiently described by simple distributions. Further, learning functional
models can be a tedious task, as classes may not be linearly separable or may
be difficult to separate with simple rules or mathematical equations.

1.3.1 Nearest Neighbors

The nearest neighbor (NN) method is the most popular method used in machine
learning for classification. Its best characteristic is its simplicity. It is based on the
idea that the patterns closest to a target pattern $x'$, for which we seek the label,
deliver useful information about its description. Based on this idea, NN assigns the
class label of the majority of the $k$ nearest patterns in data space. Figure 1.4 shows
the classification process under the NN method, considering a 4-nearest approach.
Analyzing Fig. 1.4, it is clear that the novel pattern $x'$ will be classified as an element
of class A, since most of the nearest elements are of the A category.
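To make the rule concrete, the following minimal Python sketch (our illustration; the toy data set and the Euclidean distance metric are assumptions, not the data of Fig. 1.4) implements the k-nearest-neighbor classification just described:

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x_new, k=4):
    """Assign to x_new the majority label among its k nearest training patterns."""
    # Euclidean distances from the new pattern to every training pattern
    distances = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(distances)[:k]      # indices of the k closest patterns
    votes = Counter(y_train[nearest])        # count the class labels among them
    return votes.most_common(1)[0][0]        # majority class

# Toy example: two classes, A and B, in a 2-dimensional feature space
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [3.0, 3.2], [3.1, 2.9]])
y_train = np.array(['A', 'A', 'A', 'B', 'B'])
print(knn_classify(X_train, y_train, np.array([1.1, 1.0]), k=4))  # -> 'A'
```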

1.4 Parametric and Non-parametric Models

The objective of a machine learning algorithm is to obtain a functional model f that


allows a reasonable prediction or description of a data set. There are many ways to
define such models, but the most important distinction is this: does the model have
a fixed number of parameters, or does the number of parameters grow with the
amount of training data? The former is called a parametric model, and the latter is
called a nonparametric model. Parametric models have the advantage of often being
faster to use, but the disadvantage of making stronger assumptions about the nature
of the data distributions. Nonparametric models are more flexible, but often com-
putationally intractable for large datasets. We will give examples of both kinds of
models in the sections below. We focus on supervised learning for simplicity,
although much of our discussion also applies to unsupervised learning. Figure 1.5
graphically represents the architectures of both approaches.

Fig. 1.4 Classification process under the NN method, considering a 4-nearest approach (classes A and B)

Fig. 1.5 Graphical representation of the learning process in parametric and non-parametric models (a an adaptable system with fixed parameters; b an adaptable system with variable parameters and calibration)

1.5 Overfitting

The objective of learning is to obtain better predictions as outputs, whether they are class
labels or continuous regression values. The way to know how successfully the
algorithm has learnt is to compare the actual predictions with known target labels,
which in fact is how the training is done in supervised learning. If we want to
generalize the performance of the learning algorithm to examples that were not seen
during the training process, we obviously can't test it by using the same data set used
in the learning stage. Therefore, a different data set, a test set, is necessary to prove
the generalization ability of the learning method. This test set is presented to the
trained algorithm, and its predicted outputs are compared with the known targets.
In this test, the parameters obtained during the learning process are not modified.
In fact, during the learning process, there is at least as much danger in
over-training as there is in under-training. The number of degrees of variability in
most machine learning algorithms is huge—for a neural network there are lots of
weights, and each of them can vary. This is undoubtedly more variation than there
is in the function we are learning, so we need to be careful: if we train for too long,
then we will overfit the data, which means that we have learnt about the noise and
inaccuracies in the data as well as the actual function. Therefore, the model that we
learn will be much too complicated, and won’t be able to generalize.
Figure 1.6 illustrates this problem by plotting the predictions of some algorithm
(as the curve) at two different points in the learning process. In Fig. 1.6a the
curve fits the overall trend of the data well (it has generalized to the underlying
general function), but the training error would still not be that close to zero since it
passes near, but not through, the training data. As the network continues to learn, it
will eventually produce a much more complex model that has a lower training error
(close to zero), meaning that it has memorized the training examples, including any
noise component of them, so that it has overfitted the training data (see Fig. 1.6b).

Fig. 1.6 Examples of a generalization and b overfitting

We want to stop the learning process before the algorithm overfits, which means
that we need to know how well it is generalizing at each iteration. We can’t use the
training data for this, because we wouldn’t detect overfitting, but we can’t use the
testing data either, because we’re saving that for the final tests. So we need a third
set of data to use for this purpose, which is called the validation set because we’re
using it to validate the learning so far. This is known as cross-validation in statistics.
It is part of model selection: choosing the right parameters for the model so that it
generalizes as well as possible.
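In practice, the training, validation and test sets are often obtained by randomly partitioning the available examples. The following Python sketch (an illustrative assumption of ours, with arbitrary 60/20/20 proportions and numpy arrays for the data) shows one simple way to produce such a split.

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.2, test_frac=0.2, seed=0):
    """Randomly partition (X, y) into training, validation and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))                 # shuffled sample indices
    n_test = int(test_frac * len(X))
    n_val = int(val_frac * len(X))
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return ((X[train_idx], y[train_idx]),         # used to fit the parameters
            (X[val_idx], y[val_idx]),             # used to detect overfitting / select the model
            (X[test_idx], y[test_idx]))           # kept untouched for the final evaluation
```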

1.6 The Curse of Dimensionality

The NN classifier is simple and can work quite well when it is given a representative
distance metric and enough training data. In fact, it can be shown that the
NN classifier can come within a factor of 2 of the best possible performance as
$N \to \infty$.
However, the main problem with NN classifiers is that they do not work well
with high-dimensional data $x$. The poor performance in high-dimensional settings is
due to the curse of dimensionality.
To explain the curse, we give a simple example. Consider applying a NN
classifier to data where the inputs are uniformly distributed in the $d$-dimensional
unit cube. Suppose we estimate the density of class labels around a test point $x'$ by
“growing” a hyper-cube around $x'$ until it contains a desired fraction $F$ of the data
points. The expected edge length of this cube will be $e_d(F) = F^{1/d}$. If $d = 10$ and
we want to compute our estimate on 10 % of the data, we have $e_{10}(0.1) \approx 0.8$, so
we need to extend the cube 80 % along each dimension around $x'$. Even if we only
use 1 % of the data, we find $e_{10}(0.01) \approx 0.63$; see Fig. 1.7. Since the entire range
of the data is only 1 along each dimension, we see that the method is no longer very
local, despite the name “nearest neighbor”. The trouble with looking at neighbors
that are so far away is that they may not be good predictors of the behavior of
the input-output function at a given point.
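The edge-length formula $e_d(F) = F^{1/d}$ can be evaluated directly; the short Python sketch below reproduces (approximately) the numbers quoted above.

```python
# Edge length of the hyper-cube needed to capture a fraction F of data that is
# uniformly distributed in the d-dimensional unit cube: e_d(F) = F**(1/d).
def edge_length(F, d):
    return F ** (1.0 / d)

print(round(edge_length(0.10, 10), 2))  # 0.79 -> about 80% of each dimension for 10% of the data
print(round(edge_length(0.01, 10), 2))  # 0.63 -> still 63% of each dimension for only 1% of the data
```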

Fig. 1.7 Illustration of the curse of dimensionality. a We embed a small cube of side s inside a larger unit cube. b We plot the edge length of a cube needed to cover a given volume of the unit cube as a function of the number of dimensions (fraction of data in neighborhood vs. edge length of cube, with curves for d = 1, 2, 3, 5, 7)

1.7 Bias-Variance Trade-Off

An inflexible model is defined as a mathematical formulation that involves few
parameters. Due to the few parameters, such models have difficulty fitting the data well.
On the other hand, flexible models integrate several parameters with better modeling
capacities [4]. The sensitivity of inflexible models to variations during the
learning process is relatively moderate in comparison to flexible models. Similarly,
inflexible models have comparatively limited modeling capacities, but are often easy to
interpret. For example, linear models formulate linear relationships between their
parameters, which are easy to describe with their coefficients, e.g.,

$f(x_i) = a_0 + a_1 x_i^1 + \cdots + a_d x_i^d \qquad (1.1)$

The coefficients $a$ represent the adaptable parameters and formulate relationships
that are easy to interpret. Optimizing the coefficients of a linear model is easier than
fitting an arbitrary function with several adapting parameters.
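As a concrete illustration (a sketch of ours, not an experiment from this book), the coefficients $a_0, \ldots, a_d$ of the linear model in Eq. (1.1) can be estimated from data by ordinary least squares; the synthetic data and true coefficients below are arbitrary choices.

```python
import numpy as np

# Fit the linear model f(x_i) = a0 + a1*x_i^1 + ... + ad*x_i^d by least squares.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 3))                 # 50 patterns, d = 3 features
a_true = np.array([0.5, 2.0, -1.0, 3.0])             # [a0, a1, a2, a3] used to generate the data
y = a_true[0] + X @ a_true[1:] + rng.normal(0, 0.1, 50)

X_design = np.hstack([np.ones((50, 1)), X])          # prepend a column of ones for a0
a_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None) # estimated coefficients
print(np.round(a_hat, 2))                            # close to a_true
```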
Linear models do not tend to suffer from overfitting, as they are less dependent on slight
changes in the training set. They have low variance, which is the amount by which the
model changes when different training data sets are used. Such models have large errors
when approximating a complex problem, corresponding to a high bias. Bias is a
measure of the inability of the model to fit the training patterns. In contrast,
flexible methods have high variance, i.e., they vary a lot when the training
set changes, but have low bias, i.e., they adapt better to the observations.

Fig. 1.8 Illustration of the bias-variance compromise (error versus model flexibility, showing the expected error, variance and bias curves)

Figure 1.8 illustrates the bias-variance trade-off. On the x-axis the model com-
plexity increases from left to right. While a method with low flexibility has a low
variance, it usually suffers from high bias. The variance increases while the bias
decreases with increasing model flexibility. The effect changes in the middle of the
plot, where variance and bias cross. The expected error is minimal in the middle of
the plot, where bias and variance reach a similar level. For practical problems and
data sets, the bias-variance trade-off has to be considered when the decision for a
particular method is made.

1.8 Data into Probabilities

Figure 1.9 shows the measurements of a feature $x$ for two different classes, $C_1$ and
$C_2$. Members of class $C_2$ tend to have larger values of feature $x$ than members of
class $C_1$, but there is some overlap between the two classes. Under such conditions, the
correct class is easy to predict at the extremes of the range of each class, but what to
do in the middle is unclear [3].

Fig. 1.9 Histograms of feature $x$ values against their probability $p(x)$ for two classes, $C_1$ and $C_2$

Assume that we are trying to classify the written letters ‘a’ and ‘b’ based on
their height (as shown in Fig. 1.10). Most people write the letter ‘a’
smaller than their ‘b’, but not everybody. However, in this example, another kind of
information can be used to solve this classification problem. We know that in
English texts the letter ‘a’ is much more common than the letter ‘b’. If we see a
letter that is either an ‘a’ or a ‘b’ in normal writing, then there is a 75 % chance that
it is an ‘a’. We are using prior knowledge to estimate the probability that the letter is
an ‘a’: in this example, $p(C_1) = 0.75$, $p(C_2) = 0.25$. If we weren’t allowed to see
the letter at all, and just had to classify it, then if we picked ‘a’ every time, we’d be
right 75 % of the time.
In order to give a prediction, it is necessary to know the value $x$ of the discriminant
feature. It would be a mistake to use only the occurrence (a priori)
probabilities $p(C_1)$ and $p(C_2)$. Normally, a classification problem is formulated
through the definition of a data set which contains a set of values of $x$ and the class
of each exemplar. Under such conditions, it is easy to calculate the value of $p(C_1)$
(we just count how many times out of the total the class was $C_1$ and divide by the
total number of examples), and also another useful measurement: the conditional
probability of $C_1$ given that $x$ has value $X$: $p(C_1|X)$. The conditional probability
tells us how likely it is that the class is $C_1$ given that the value of $x$ is $X$. So in
Fig. 1.9 the value of $p(C_1|X)$ will be much larger for small values of $X$ than for
large values. Clearly, this is exactly what we want to calculate in order to perform
classification. The question is how to get to this conditional probability, since we
can’t read it directly from the histogram. The first thing that we need to do to get
these values is to quantize the measurement $x$, which just means that we put it into
one of a discrete set of values $\{X\}$, such as the bins in a histogram. This is exactly
what is plotted in Fig. 1.9. Now, if we have lots of examples of the two classes, and
the histogram bins that their measurements fall into, we can compute $p(C_i, X_j)$,
which is the joint probability, and tells us how often a measurement of $C_i$ fell into
histogram bin $X_j$. We do this by looking in histogram bin $X_j$, counting the number
of elements of $C_i$, and dividing by the total number of examples of any class.
We can also define $p(X_j | C_i)$, which is a different conditional probability, and
tells us how often (in the training set) there is a measurement of $X_j$ given that the
example is a member of class $C_i$. Again, we can just get this information from the
histogram by counting the number of examples of class $C_i$ in histogram bin $X_j$ and
dividing by the number of examples of that class there are (in any bin).

Fig. 1.10 Letters “a” and “b” in the pixel context
So we have now worked out two things from our training data: the joint probability
$p(C_i, X_j)$ and the conditional probability $p(X_j | C_i)$. Since we actually want to
compute $p(C_i | X_j)$, we need to know how to link these things together. As some of
you may already know, the answer is Bayes’ rule, which is what we are now going
to derive. There is a link between the joint probability and the conditional probability.
It is:

$p(C_i, X_j) = p(X_j | C_i) \cdot p(C_i) \qquad (1.2)$

or equivalently:

$p(C_i, X_j) = p(C_i | X_j) \cdot p(X_j) \qquad (1.3)$

Clearly, the right-hand sides of these two equations must be equal to each other,
since they are both equal to $p(C_i, X_j)$, and so with one division we can write:

$p(C_i | X_j) = \dfrac{p(X_j | C_i)\, p(C_i)}{p(X_j)} \qquad (1.4)$

This is Bayes’ rule. If you don’t already know it, learn it: it is the most important
equation in machine learning. It relates the posterior probability $p(C_i | X_j)$ with the
prior probability $p(C_i)$ and the class-conditional probability $p(X_j | C_i)$. The denominator
(the term on the bottom of the fraction) acts to normalize everything, so that all the
probabilities sum to 1. It might not be clear how to compute this term. However, if
we notice that any observation $X_k$ has to belong to some class $C_i$, then we can
marginalize over the classes to compute:

$p(X_k) = \sum_i p(X_k | C_i) \cdot p(C_i) \qquad (1.5)$

The reason why Bayes’ rule is so important is that it lets us obtain the posterior
probability, which is what we actually want, by calculating things that are much
easier to compute. We can estimate the prior probabilities by looking at how often
each class appears in our training set, and we can get the class-conditional
probabilities from the histogram of the values of the feature for the training set. We can
use the posterior probability to assign each new observation to one of the classes by
picking the class $C_i$ where:

$p(C_i | x) > p(C_j | x) \quad \forall i \neq j \qquad (1.6)$

where $x$ is a vector of feature values instead of just one feature. This is known as the
maximum a posteriori or MAP hypothesis, and it gives us a way to choose which
class to output. The question is whether this is the right thing to
do. There has been quite a lot of research in both the statistical and machine
learning literatures into what is the right question to ask about our data to perform
classification, but we are going to skate over it very lightly.
The MAP question is: what is the most likely class given the training data?
Suppose that there are three possible output classes, and for a particular input the
posterior probabilities of the classes are $p(C_1|x) = 0.35$, $p(C_2|x) = 0.45$,
$p(C_3|x) = 0.2$. The MAP hypothesis therefore tells us that this input is in class $C_2$,
because that is the class with the highest posterior probability. Now suppose that,
based on the class that the data is in, we want to do something. If the class is $C_1$ or
$C_3$ then we do action 1, and if the class is $C_2$ then we do action 2. As an example,
suppose that the inputs are the results of a blood test, the three classes are different
possible diseases, and the output is whether or not to treat with a particular
antibiotic. The MAP method has told us that the output is $C_2$, and so we
will not treat the disease. But what is the probability that the input does not belong to class
$C_2$, and so should have been treated with the antibiotic? It is $1 - p(C_2|x) = 0.55$. So
the MAP prediction seems to be wrong: we should treat with the antibiotic, because
overall it is more likely. This method, where we take into account the final outcomes
of all of the classes, is called the Bayes’ optimal classification. It minimizes the
probability of misclassification, rather than maximizing the posterior probability.
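All of the quantities involved, the priors, the class-conditional probabilities and the posteriors obtained through Bayes’ rule of Eq. (1.4), can be estimated by simple counting. The following Python sketch (with a made-up training set of ours, not data from the book) puts the pieces together and applies the MAP rule of Eq. (1.6).

```python
import numpy as np

# Made-up training data: class label and quantized histogram bin X_j of each example.
classes = np.array([1, 1, 1, 2, 2, 1, 2, 1, 1, 2])   # C_1 and C_2
bins    = np.array([0, 0, 1, 2, 2, 1, 1, 0, 1, 2])   # quantized feature value X_j

def posterior(c, j):
    """p(C_c | X_j) via Bayes' rule, with every term estimated by counting."""
    prior = np.mean(classes == c)                              # p(C_c)
    likelihood = np.mean(bins[classes == c] == j)              # p(X_j | C_c)
    evidence = sum(np.mean(classes == k) * np.mean(bins[classes == k] == j)
                   for k in np.unique(classes))                # p(X_j), as in Eq. (1.5)
    return prior * likelihood / evidence

# MAP decision for a new observation that falls into bin X_1
post = {c: posterior(c, 1) for c in (1, 2)}
print(post, max(post, key=post.get))   # pick the class with the largest posterior
```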

References

1. Kramer, O.: Machine Learning for Evolution Strategies. Springer, Switzerland (2016)
2. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge (2012)
3. Marsland, S.: Machine Learning: An Algorithmic Perspective. CRC Press, Boca Raton (2015)
4. Smola, A., Vishwanathan, S.V.N.: Introduction to Machine Learning. Cambridge University Press,
Cambridge (2008)
Chapter 2
Optimization

2.1 Definition of an Optimization Problem

The vast majority of image processing and pattern recognition algorithms use some
form of optimization, as they intend to find some solution which is “best” according
to some criterion. From a general perspective, an optimization problem is a situation
that requires to decide for a choice from a set of possible alternatives in order to
reach a predefined/required benefit at minimal costs [1].
Consider a public transportation system of a city, for example. Here the system
has to find the “best” route to a destination location. In order to rate alternative
solutions and eventually find out which solution is “best,” a suitable criterion has to
be applied. A reasonable criterion could be the distance of the routes. We then
would expect the optimization algorithm to select the route of shortest distance as a
solution. Observe, however, that other criteria are possible, which might lead to
different “optimal” solutions, e.g., the number of transfers, the ticket price, or the time it
takes to travel the route, the latter leading to the fastest route as a solution.
Mathematically speaking, optimization can be described as follows: given a
function $f : S \to \mathbb{R}$, which is called the objective function, find the argument which
minimizes $f$:

$x^* = \arg\min_{x \in S} f(x) \qquad (2.1)$

$S$ defines the so-called solution set, which is the set of all possible solutions for the
optimization problem. Sometimes, the unknown(s) $x$ are referred to as design
variables. The function $f$ describes the optimization criterion, i.e., it enables us to
calculate a quantity which indicates the “quality” of a particular $x$.
In our example, $S$ is composed of the subway trajectories, bus lines, etc.,
stored in the database of the system, $x$ is the route the system has to find, and the
optimization criterion $f(x)$ (which measures the quality of a possible solution) could
calculate the ticket price or the distance to the destination (or a combination of both),
depending on our preferences.
Sometimes there also exist one or more additional constraints which the solution
$x$ has to satisfy. In that case we talk about constrained optimization (as opposed to
unconstrained optimization if no such constraint exists). In summary, an opti-
mization problem has the following components:
• One or more design variables x for which a solution has to be found
• An objective function f(x) describing the optimization criterion
• A solution set S specifying the set of possible solutions x
• (optional) One or more constraints on x
In order to be of practical use, an optimization algorithm has to find a solution in
a reasonable amount of time with reasonable accuracy. Apart from the performance
of the algorithm employed, this also depends on the problem at hand itself. If we
can hope for a numerical solution, we say that the problem is well-posed. For
assessing whether an optimization problem is well-posed, the following conditions
must be fulfilled:
1. A solution exists.
2. There is only one solution to the problem, i.e., the solution is unique.
3. The relationship between the solution and the initial conditions is such that
small perturbations of the initial conditions result in only small variations of $x^*$.
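In the simplest discrete case, Eq. (2.1) can be solved by exhaustively evaluating the objective function over S. The Python sketch below (with fictitious routes and costs standing in for the transportation example above) illustrates the idea.

```python
# Exhaustive search for x* = argmin_{x in S} f(x) over a small discrete solution set.
routes = {                       # S: candidate routes (fictitious data)
    "subway + bus": 7.5,         # f(x): e.g. travel time in some unit
    "direct bus": 9.0,
    "two transfers": 6.8,
}

def f(x):
    return routes[x]             # objective function: look up the cost of route x

x_star = min(routes, key=f)      # argmin over the solution set S
print(x_star, f(x_star))         # -> 'two transfers' 6.8
```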

2.2 Classical Optimization

Once a task has been transformed into an objective function minimization problem,
the next step is to choose an appropriate optimizer. Optimization algorithms can be
divided into two groups: derivative-based and derivative-free [2].
In general, $f(x)$ may have a nonlinear form with respect to the adjustable parameter
$x$. Due to the complexity of $f(\cdot)$, classical methods often use an iterative
algorithm to explore the input space effectively. In iterative descent methods, the
next point $x_{k+1}$ is determined by a step down from the current point $x_k$ in a
direction vector $d$:

$x_{k+1} = x_k + \alpha d, \qquad (2.2)$

where $\alpha$ is a positive step size regulating to what extent to proceed in that direction.
When the direction $d$ in Eq. 2.2 is determined on the basis of the gradient ($g$) of the
objective function $f(\cdot)$, such methods are known as gradient-based techniques.
The method of steepest descent is one of the oldest techniques for optimizing a
given function. This technique represents the basis for many derivative-based
methods. Under such a method, Eq. 2.2 becomes the well-known gradient formula:

$x_{k+1} = x_k - \alpha\, g(f(x)), \qquad (2.3)$

However, classical derivative-based optimization can be effective as long as the
objective function fulfills two requirements:
– The objective function must be two-times differentiable.
– The objective function must be uni-modal, i.e., have a single minimum.
A simple example of a differentiable and uni-modal objective function is

$f(x_1, x_2) = 10 - e^{-(x_1^2 + 3x_2^2)} \qquad (2.4)$

Figure 2.1 shows the function defined in Eq. 2.4.
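As an illustration (a sketch of ours, not part of the original chapter), the steepest-descent iteration of Eq. (2.3) can be applied to the uni-modal function of Eq. (2.4); the step size α = 0.1, the starting point and the iteration count below are arbitrary choices.

```python
import numpy as np

def f(x):
    """Uni-modal objective of Eq. (2.4): f(x1, x2) = 10 - exp(-(x1^2 + 3*x2^2))."""
    return 10.0 - np.exp(-(x[0]**2 + 3.0 * x[1]**2))

def grad_f(x):
    """Analytical gradient of f."""
    e = np.exp(-(x[0]**2 + 3.0 * x[1]**2))
    return np.array([2.0 * x[0] * e, 6.0 * x[1] * e])

x = np.array([0.8, -0.6])            # initial point (illustrative)
alpha = 0.1                          # step size
for _ in range(200):                 # x_{k+1} = x_k - alpha * g(f(x_k)), Eq. (2.3)
    x = x - alpha * grad_f(x)

print(np.round(x, 4), round(f(x), 4))   # converges near the minimum (0, 0), where f = 9
```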


Unfortunately, under such circumstances, classical methods are only applicable
for a few types of optimization problems. For combinatorial optimization, there is
no dentition of differentiation.
Furthermore, there are many reasons why an objective function might not be
differentiable. For example, the “floor” operation in Eq. 2.5 quantizes the function
in Eq. 2.4, transforming Fig. 2.1 into the stepped shape seen in Fig. 2.2. At each
step’s edge, the objective function is non-differentiable:
 
f ðx1 ; x2 Þ ¼ floor 10  eðx1 þ 3x2 Þ
2 2
ð2:5Þ

Even for differentiable objective functions, gradient-based methods might not


work. Let us consider the minimization of the Griewank function as an example.

Fig. 2.1 Uni-modal objective function



Fig. 2.2 A non-differentiable, quantized, uni-modal function

 
minimize $\;f(x_1, x_2) = \dfrac{x_1^2 + x_2^2}{4000} - \cos(x_1)\cos\!\left(\dfrac{x_2}{\sqrt{2}}\right) + 1$
subject to $\;-30 \le x_1 \le 30, \;\; -30 \le x_2 \le 30 \qquad (2.6)$

From the optimization problem formulated in Eq. 2.6, it is quite easy to
see that the global optimal solution is $x_1 = x_2 = 0$. Figure 2.3 visualizes the
function defined in Eq. 2.6. According to Fig. 2.3, the objective function has many
local optimal solutions (it is multimodal), so that gradient methods starting from a
randomly generated initial solution will converge to one of them with a large probability.
Considering these limitations of gradient-based methods, image processing and
pattern recognition problems are difficult to integrate with classical optimization
methods. Instead, other techniques which do not make such assumptions
and which can be applied to a wide range of problems are required [3].
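This behavior can be verified numerically: running the same steepest-descent iteration from several random starting points on the Griewank function of Eq. (2.6) typically ends in different local minima. The following sketch (step size, iteration count and random seed are arbitrary choices of ours) is one way to reproduce the effect.

```python
import numpy as np

def griewank(x):
    """Griewank function of Eq. (2.6)."""
    return (x[0]**2 + x[1]**2) / 4000.0 - np.cos(x[0]) * np.cos(x[1] / np.sqrt(2)) + 1.0

def griewank_grad(x):
    """Analytical gradient of the Griewank function."""
    g1 = x[0] / 2000.0 + np.sin(x[0]) * np.cos(x[1] / np.sqrt(2))
    g2 = x[1] / 2000.0 + np.cos(x[0]) * np.sin(x[1] / np.sqrt(2)) / np.sqrt(2)
    return np.array([g1, g2])

rng = np.random.default_rng(0)
for run in range(5):
    x = rng.uniform(-30, 30, size=2)     # random initial solution in the feasible range
    for _ in range(2000):                # plain gradient descent with alpha = 0.05
        x = x - 0.05 * griewank_grad(x)
    print(run, np.round(x, 3), round(griewank(x), 4))  # runs end in different local minima
```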
Fig. 2.3 The Griewank multi-modal function



2.3 Evolutionary Computation Methods

Evolutionary computation (EC) [4] methods are derivative-free procedures, which
do not require the objective function to be either two-times differentiable
or uni-modal. Therefore, EC methods, as global optimization algorithms, can deal
with non-convex, nonlinear, and multimodal problems subject to linear or nonlinear
constraints with continuous or discrete decision variables.
The field of EC has a rich history. With the development of computational
devices and demands of industrial processes, the necessity to solve some opti-
mization problems arose despite the fact that there was not sufficient prior
knowledge (hypotheses) about the optimization problem for the application of a
classical method. In fact, in the majority of image processing and pattern recog-
nition cases, the problems are highly nonlinear, or characterized by a noisy fitness,
or without an explicit analytical expression as the objective function might be the
result of an experimental or simulation process. In this context, the EC methods
have been proposed as optimization alternatives.
An EC technique is a general method for solving optimization problems. It uses
an objective function in an abstract and efficient manner, typically without utilizing
deeper insights into its mathematical properties. EC methods do not require
hypotheses on the optimization problem nor any kind of prior knowledge on the
objective function. The treatment of objective functions as “black boxes” [5] is the
most prominent and attractive feature of EC methods.
EC methods obtain knowledge about the structure of an optimization problem by
utilizing information obtained from the possible solutions (i.e., candidate solutions)
evaluated in the past. This knowledge is used to construct new candidate solutions
which are likely to have a better quality.
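This use of past evaluations can be summarized as a generic population-based loop. The following skeleton is only a schematic sketch (the sampling rule and parameter values are illustrative assumptions, not a specific algorithm from the book); the essential point is that the objective is queried strictly as a black box:

import numpy as np

def ec_black_box(objective, low, high, pop_size=30, generations=100, seed=0):
    """Schematic skeleton of a population-based EC method.

    The objective is treated as a black box: the loop only uses the
    fitness values of candidate solutions evaluated in the past.
    """
    rng = np.random.default_rng(seed)
    dim = low.shape[0]
    # Random initial population inside the search space
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.array([objective(ind) for ind in pop])
    for _ in range(generations):
        # Knowledge from past evaluations: here, simply the current best solution
        best = pop[np.argmin(fitness)]
        # Construct new candidates biased towards the promising region
        trials = np.clip(best + rng.normal(0.0, 1.0, size=(pop_size, dim)), low, high)
        trial_fitness = np.array([objective(ind) for ind in trials])
        # Keep a new candidate only if its quality improves (greedy replacement)
        improved = trial_fitness < fitness
        pop[improved] = trials[improved]
        fitness[improved] = trial_fitness[improved]
    best_idx = np.argmin(fitness)
    return pop[best_idx], fitness[best_idx]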
Recently, several EC methods have been proposed with interesting results. Such
approaches use as inspiration our scientific understanding of biological, natural or
social systems, which at some level of abstraction can be represented as opti-
mization processes [6]. These methods include the social behavior of bird flocking
and fish schooling such as the Particle Swarm Optimization (PSO) algorithm [7],
the cooperative behavior of bee colonies such as the Artificial Bee Colony
(ABC) technique [8], the improvisation process that occurs when a musician
searches for a better state of harmony such as the Harmony Search (HS) [9], the
emulation of the bat behavior such as the Bat Algorithm (BA) method [10], the
mating behavior of firefly insects such as the Firefly (FF) method [11], the
social-spider behavior such as the Social Spider Optimization (SSO) [12], the
simulation of the animal behavior in a group such as the Collective Animal
Behavior [13], the emulation of immunological systems as the clonal selection
algorithm (CSA) [14], the simulation of the electromagnetism phenomenon as the
Electromagnetism-Like algorithm [15], and the emulation of the differential and
conventional evolution in species such as the Differential Evolution (DE) [16] and
Genetic Algorithms (GA) [17], respectively.
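As a concrete instance of one of the methods listed above, the sketch below implements a minimal Differential Evolution (DE/rand/1/bin) loop and applies it to the Griewank problem of Eq. 2.6. The control parameters (F = 0.8, CR = 0.9, population size, and generation count) are common illustrative defaults rather than settings taken from this book:

import numpy as np

def griewank(x):
    # Eq. 2.6, as defined earlier in this chapter
    return ((x[0]**2 + x[1]**2) / 4000.0
            - np.cos(x[0]) * np.cos(x[1] / np.sqrt(2.0)) + 1.0)

def differential_evolution(objective, low, high, pop_size=40, generations=300,
                           F=0.8, CR=0.9, seed=1):
    # Minimal DE/rand/1/bin sketch with illustrative parameter settings
    rng = np.random.default_rng(seed)
    dim = low.shape[0]
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: difference of two random individuals added to a third one
            candidates = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(candidates, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)
            # Binomial crossover between the target vector and the mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True    # guarantee at least one mutant component
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: the trial replaces the target if it is at least as good
            trial_fit = objective(trial)
            if trial_fit <= fit[i]:
                pop[i], fit[i] = trial, trial_fit
    best = np.argmin(fit)
    return pop[best], fit[best]

x_best, f_best = differential_evolution(griewank,
                                        np.array([-30.0, -30.0]),
                                        np.array([30.0, 30.0]))
print(x_best, f_best)   # typically close to the global optimum x_1 = x_2 = 0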