Deep Learning in
Computer Vision
Digital Imaging and Computer Vision Series
Series Editor
Rastislav Lukac
Foveon, Inc./Sigma Corporation, San Jose, California, U.S.A.
Dermoscopy Image Analysis
by M. Emre Celebi, Teresa Mendonça, and Jorge S. Marques
Semantic Multimedia Analysis and Processing
by Evaggelos Spyrou, Dimitris Iakovidis, and Phivos Mylonas
Microarray Image and Data Analysis: Theory and Practice
by Luis Rueda
Perceptual Digital Imaging: Methods and Applications
by Rastislav Lukac
Image Restoration: Fundamentals and Advances
by Bahadir Kursat Gunturk and Xin Li
Image Processing and Analysis with Graphs: Theory and Practice
by Olivier Lézoray and Leo Grady
Visual Cryptography and Secret Image Sharing
by Stelvio Cimato and Ching-Nung Yang
Digital Imaging for Cultural Heritage Preservation: Analysis,
Restoration, and Reconstruction of Ancient Artworks
by Filippo Stanco, Sebastiano Battiato, and Giovanni Gallo
Computational Photography: Methods and Applications
by Rastislav Lukac
Super-Resolution Imaging
by Peyman Milanfar
Deep Learning in
Computer Vision
Principles and Applications

Edited by
Mahmoud Hassaballah and Ali Ismail Awad
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2020 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 978-1-138-54442-0 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have
been made to publish reliable data and information, but the author and publisher cannot assume responsibility
for the validity of all materials or the consequences of their use. The authors and publishers have attempted to
trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if
permission to publish in this form has not been obtained. If any copyright material has not been acknowledged
please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging‑in‑Publication Data

Names: Hassaballah, Mahmoud, editor. | Awad, Ali Ismail, editor.
Title: Deep learning in computer vision : principles and applications / edited by M. Hassaballah and Ali Ismail Awad.
Description: First edition. | Boca Raton, FL : CRC Press/Taylor and Francis, 2020. | Series: Digital imaging and computer vision | Includes bibliographical references and index.
Identifiers: LCCN 2019057832 (print) | LCCN 2019057833 (ebook) | ISBN 9781138544420 (hardback ; acid-free paper) | ISBN 9781351003827 (ebook)
Subjects: LCSH: Computer vision. | Machine learning.
Classification: LCC TA1634 .D437 2020 (print) | LCC TA1634 (ebook) | DDC 006.3/7--dc23
LC record available at https://lccn.loc.gov/2019057832
LC ebook record available at https://lccn.loc.gov/2019057833

Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at
http://www.crcpress.com
Contents

Foreword .............................................................................vii
Preface ................................................................................ix
Editors Bio .........................................................................xiii
Contributors ........................................................................ xv

Chapter 1 Accelerating the CNN Inference on FPGAs ........................................ 1
Kamel Abdelouahab, Maxime Pelcat, and François Berry

Chapter 2 Object Detection with Convolutional Neural Networks ..................... 41
Kaidong Li, Wenchi Ma, Usman Sajid, Yuanwei Wu, and Guanghui Wang

Chapter 3 Efficient Convolutional Neural Networks for Fire Detection in Surveillance Applications .................................................................. 63
Khan Muhammad, Salman Khan, and Sung Wook Baik

Chapter 4 A Multi-biometric Face Recognition System Based on Multimodal Deep Learning Representations ..................................................... 89
Alaa S. Al-Waisy, Shumoos Al-Fahdawi, and Rami Qahwaji

Chapter 5 Deep LSTM-Based Sequence Learning Approaches for Action and Activity Recognition .................................................................. 127
Amin Ullah, Khan Muhammad, Tanveer Hussain, Miyoung Lee, and Sung Wook Baik

Chapter 6 Deep Semantic Segmentation in Autonomous Driving ................... 151
Hazem Rashed, Senthil Yogamani, Ahmad El-Sallab, Mahmoud Hassaballah, and Mohamed ElHelw

Chapter 7 Aerial Imagery Registration Using Deep Learning for UAV Geolocalization ....................................................................... 183
Ahmed Nassar and Mohamed ElHelw

Chapter 8 Applications of Deep Learning in Robot Vision .............................. 211
Javier Ruiz-del-Solar and Patricio Loncomilla

Chapter 9 Deep Convolutional Neural Networks: Foundations and Applications in Medical Imaging ..................................................... 233
Mahmoud Khaled Abd-Ellah, Ali Ismail Awad, Ashraf A. M. Khalaf, and Hesham F. A. Hamed

Chapter 10 Lossless Full-Resolution Deep Learning Convolutional Networks for Skin Lesion Boundary Segmentation ......................... 261
Mohammed A. Al-masni, Mugahed A. Al-antari, and Tae-Seong Kim

Chapter 11 Skin Melanoma Classification Using Deep Convolutional Neural Networks .............................................................................. 291
Khalid M. Hosny, Mohamed A. Kassem, and Mohamed M. Foaud

Index...................................................................................................................... 315
Foreword
Deep learning, while it has multiple definitions in the literature, can be defined as "inference of model parameters for decision making in a process mimicking the understanding process in the human brain"; or, in short: "brain-like model identification". We can say that deep learning is a way of data inference in machine learning, and the two together are among the main tools of modern artificial intelligence. Novel technologies away from traditional academic research have fueled R&D in convolutional neural networks (CNNs); companies like Google, Microsoft, and Facebook ignited the "art" of data manipulation, and the term "deep learning" became almost synonymous with decision making.
Various CNN structures have been introduced and invoked in many computer vision-related applications, with the greatest success in face recognition, autonomous driving, and text processing. The reality is: deep learning is an art, not a science. This state of affairs will remain until its developers develop the theory behind its functionality, which would lead to "cracking its code" and explaining why it works, and how it can be structured as a function of the information gained with data. In fact, with deep learning, there is good and bad news. The good news is that the industry—not necessarily academia—has adopted it and is pushing its envelope. The bad news is that the industry does not share its secrets. Indeed, industries are never interested in procedural and textbook-style descriptions of knowledge.
This book, Deep Learning in Computer Vision: Principles and Applications—as a journey in the progress made through deep learning by academia—confines itself to deep learning for computer vision, a domain that studies sensory information used by computers for decision making, and has had its impacts and drawbacks for nearly 60 years. Computer vision has been and continues to be a system: sensors, computer, analysis, decision making, and action. This system takes various forms and the flow of information within its components, not necessarily in tandem. The linkages between computer vision and machine learning, and between it and artificial intelligence, are very fuzzy, as is the linkage between computer vision and deep learning. Computer vision has moved forward, showing amazing progress in its short history. During the sixties and seventies, computer vision dealt mainly with capturing and interpreting optical data. In the eighties and nineties, geometric computer vision added science (geometry plus algorithms) to computer vision. During the first decade of the new millennium, modern computing contributed to the evolution of object modeling using multimodality and multiple imaging. By the end of that decade, a lot of data became available, and so the term "deep learning" crept into computer vision, as it did into machine learning, artificial intelligence, and other domains.
This book shows that traditional applications in computer vision can be solved through invoking deep learning. The applications addressed and described in the eleven different chapters have been selected in order to demonstrate the capabilities of deep learning algorithms to solve various issues in computer vision. The content of this book has been organized such that each chapter can be read independently of the others. Chapters of the book cover the following topics: accelerating the CNN inference on field-programmable gate arrays, fire detection in surveillance applications, face recognition, action and activity recognition, semantic segmentation for autonomous driving, aerial imagery registration, robot vision, tumor detection, and skin lesion segmentation as well as skin melanoma classification.
From the assortment of approaches and applications in the eleven chapters, the common thread is that deep learning for identification of CNNs provides accuracy beyond traditional approaches. This accuracy is attributed to the flexibility of CNNs and the availability of large data to enable identification through the deep learning strategy. I would expect the content of this book to be welcomed worldwide by graduate and postgraduate students and workers in computer vision, including practitioners in academia and industry. Additionally, professionals who want to explore the advances in concepts and implementation of deep learning algorithms applied to computer vision may find in this book an excellent guide for such purpose. Finally, I hope that readers will find the presented chapters in the book interesting and inspiring to future research, from both theoretical and practical viewpoints, to spur further advances in discovering the secrets of deep learning.

Prof Aly Farag, PhD, Life Fellow, IEEE, Fellow, IAPR


Professor of Electrical and Computer Engineering
University of Louisville, Kentucky
Preface
Simply put, computer vision is an interdisciplinary field of artificial intelligence that aims to guide computers and machines toward understanding the contents of digital data (i.e., images or video). According to computer vision achievements, the future generation of computers may understand human actions, behaviors, and languages similarly to humans, carry out some missions on their behalf, or even communicate with them in an intelligent manner. One aspect of computer vision that makes it such an interesting topic of study and active research field is the amazing diversity of daily-life applications such as pedestrian protection systems, autonomous driving, biometric systems, the movie industry, driver assistance systems, video surveillance, and robotics as well as medical diagnostics and other healthcare applications. For instance, in healthcare, computer vision algorithms may assist healthcare professionals to precisely classify illnesses and cases; this can potentially save patients' lives through excluding inaccurate medical diagnoses and avoiding erroneous treatment. With this wide variety of applications, there is a significant overlap between computer vision and other fields such as machine vision and image processing. Scarcely a month passes where we do not hear from the research and industry communities with an announcement of some new technological breakthrough in the areas of intelligent systems related to the computer vision field.
With the recent rapid progress on deep convolutional neural networks, deep learning has achieved remarkable performance in various fields. In particular, it has brought a revolution to the computer vision community, introducing non-traditional and efficient solutions to several problems that had long remained unsolved. Due to this promising performance, it is gaining more and more attention and is being applied widely in computer vision for several tasks such as object detection and recognition, object segmentation, pedestrian detection, aerial imagery registration, video processing, scene classification, autonomous driving, and robot localization as well as medical image-related applications. If the phrase "deep learning for computer vision" is searched in Google, millions of search results will be obtained. Under these circumstances, a book entitled Deep Learning in Computer Vision that covers recent progress and achievements in utilizing deep learning for computer vision tasks will be extremely useful.
The purpose of this contributed volume is to fill the existing gap in the literature for the applications of deep learning in computer vision and to provide a bird's eye view of recent state-of-the-art models designed for practical problems in computer vision. The book presents a collection of eleven high-quality chapters written by renowned experts in the field. Each chapter provides the principles and fundamentals of a specific topic, introduces reviews of up-to-date techniques, presents outcomes, and points out challenges and future directions. In each chapter, figures, tables, and examples are used to improve the presentation and analysis of covered topics. Furthermore, bibliographic references are included in each chapter, providing a good starting point for deeper research and further exploration of the topics considered in this book. Further, this book is structured such that each chapter can be read independently from the others as follows:


Chapter 1 presents the state of the art in CNN inference accelerators on FPGAs. Computational workloads, parallelism opportunities, and the involved memory accesses are analyzed. At the level of neurons, optimizations of the convolutional and fully connected layers are explained and the performances of the different methods are compared, while at the network level, approximate computing and data-path optimization methods are covered and state-of-the-art approaches are compared. The methods and tools investigated in this chapter represent the recent trends in FPGA CNN inference accelerators and will fuel future advances in efficient hardware deep learning.
Chapter 2 concentrates on the object detection problem using deep CNNs (DCNNs): the recent developments of several classical CNN-based object detectors are discussed. These detectors significantly improve detection performance either through employing new architectures or through solving practical issues like degradation, gradient vanishing, and class imbalance. Detailed background information is provided to show the progress and improvements of different models. Some evaluation results and comparisons are reported on three datasets with distinctive characteristics.
Chapter 3 proposes three methods for fire detection using CNNs. The first method focuses on early fire detection with an adaptive prioritization mechanism for surveillance cameras. The second CNN-assisted method improves fire detection accuracy with a main focus on reducing false alarms. The third method uses an efficient deep CNN for fire detection. For localization of fire regions, a feature map selection algorithm that intelligently selects appropriate feature maps sensitive to fire areas is proposed.
Chapter 4 presents an accurate and real-time multi-biometric system for identifying a person's identity using a combination of two discriminative deep learning approaches to address the problem of unconstrained face recognition: CNN and deep belief network (DBN). The proposed system is tested on four large-scale challenging datasets with high diversity in the facial expressions—SDUMLA-HMT, FRGC V 2.0, UFI, and LFW—and new state-of-the-art recognition rates on all the employed datasets are achieved.
Chapter 5 introduces a study of the concept of sequence learning using RNN, LSTM, and its variants such as multilayer LSTM and bidirectional LSTM for action and activity recognition problems. The chapter concludes with major issues of sequence learning for action and activity recognition and highlights recommendations for future research.
Chapter 6 discusses semantic segmentation in autonomous driving applications, where it focuses on constructing efficient and simple architectures to demonstrate the benefit of flow and depth augmentation to CNN-based semantic segmentation networks. The impact of both motion and depth information on semantic segmentation is experimentally studied using four simple network architectures. Results of experiments on two public datasets—Virtual-KITTI and CityScapes—show reasonable improvement in overall accuracy.
Chapter 7 presents a method based on deep learning for geolocalizing drones using only onboard cameras. A pipeline has been implemented that makes use of the availability of satellite imagery and traditional computer vision feature detectors and descriptors, along with renowned deep learning methods (semantic segmentation), to be able to locate the aerial image captured from the drone within the satellite imagery. The method enables the drone to be autonomously aware of its surroundings and navigate without using GPS.

Chapter 8 is intended to be a guide for the developers of robot vision systems, focusing on the practical aspects of the use of deep neural networks rather than on theoretical issues.
The last three chapters are devoted to deep learning in medical applications. Chapter 9 covers basic information about CNNs in medical applications. CNN developments are discussed from different perspectives, specifically, CNN design, activation function, loss function, regularization, optimization, normalization, and network depth. Also, a deep convolutional neural network (DCNN) is designed for brain tumor detection using MRI images. The proposed DCNN architecture is evaluated on the RIDER dataset, achieving accurate detection within 0.24 seconds per MRI image.
Chapter 10 discusses automatic segmentation of skin lesion boundaries from surrounding tissue and presents a novel deep learning segmentation methodology via full-resolution convolutional network (FrCN). Experimental results show the great promise of the FrCN method in overall segmentation performance compared to state-of-the-art deep learning segmentation approaches such as fully convolutional networks (FCN), U-Net, and SegNet.
Chapter 11 is about the automatic classification of color skin images, where a highly accurate method is proposed for skin melanoma classification utilizing two modified deep convolutional neural networks and consisting of three main steps. The proposed method is tested using the well-known MED-NODE and DermIS & DermQuest datasets.
It is worth mentioning here that the book is a small piece in the puzzle of computer vision and its applications. We hope that our readers find the presented chapters in the book interesting and that the chapters will inspire future research, both from theoretical and practical viewpoints, to spur further advances in the computer vision field.
The editors would like to take this opportunity to express their sincere gratitude to the contributors for extending their wholehearted support in sharing some of their latest results and findings. Without their significant contribution, this book could not have fulfilled its mission. The reviewers deserve our thanks for their constructive and timely input. Special profound thanks go to Prof Aly Farag, Professor of Electrical and Computer Engineering, University of Louisville, Kentucky, for writing the Foreword for this book. Finally, the editors acknowledge the efforts of CRC Press/Taylor & Francis for giving us the opportunity to edit a book on deep learning for computer vision. In particular, we would like to thank Dr Rastislav Lukac, the editor of the Digital Imaging and Computer Vision book series, and Nora Konopka for initiating this project. Indeed, the editorial staff at CRC Press has done a meticulous job, and working with them was a pleasant experience.

Mahmoud Hassaballah
Qena, Egypt

Ali Ismail Awad


Luleå, Sweden
Editors Bio
Mahmoud Hassaballah was born in 1974, Qena, Egypt. He received his BSc degree in Mathematics in 1997 and his MSc degree in Computer Science in 2003, both from South Valley University, Egypt, and his Doctor of Engineering (D Eng) in computer science from Ehime University, Japan in 2011. He was a visiting scholar with the department of computer & communication science, Wakayama University, Japan in 2013 and GREAH laboratory, Le Havre Normandie University, France in 2019. He is currently an associate professor of computer science at the faculty of computers and information, South Valley University, Egypt. He served as a reviewer for several journals such as IEEE Transactions on Image Processing, IEEE Transactions on Fuzzy Systems, Pattern Recognition, Pattern Recognition Letters, IET Image Processing, IET Computer Vision, IET Biometrics, Journal of Real-Time Image Processing, and Journal of Electronic Imaging. He has published over 50 research papers in refereed international journals and conferences. His research interests include feature extraction, object detection/recognition, artificial intelligence, biometrics, image processing, computer vision, machine learning, and data hiding.

Ali Ismail Awad (SMIEEE, PhD, PhD, MSc, BSc) is currently an Associate Professor (Docent) with the Department of Computer Science, Electrical, and Space Engineering, Luleå University of Technology, Luleå, Sweden, where he also serves as a Coordinator of the Master Programme in Information Security. He is a Visiting Researcher with the University of Plymouth, United Kingdom. He is also an Associate Professor with the Electrical Engineering Department, Faculty of Engineering, Al-Azhar University at Qena, Qena, Egypt. His research interests include information security, Internet-of-Things security, image analysis with applications in biometrics and medical imaging, and network security. He has edited or co-edited five books and authored or co-authored several journal articles and conference papers in these areas. He is an Editorial Board Member of the following journals: Future Generation Computer Systems, Computers & Security, Internet of Things: Engineering Cyber Physical Human Systems, and Health Information Science and Systems. Dr Awad is currently an IEEE senior member.

Contributors

Ahmad El Sallab, Valeo Company, Cairo, Egypt
Ahmed Nassar, IRISA Institute, Rennes, France
Alaa S. Al-Waisy, University of Bradford, Bradford, UK
Ali Ismail Awad, Luleå University of Technology, Luleå, Sweden; and Al-Azhar University, Qena, Egypt
Amin Ullah, Sejong University, Seoul, South Korea
Ashraf A. M. Khalaf, Minia University, Minia, Egypt
François Berry, University Clermont Auvergne, Clermont-Ferrand, France
Guanghui Wang, University of Kansas, Kansas City, Kansas
Hazem Rashed, Valeo Company, Cairo, Egypt
Hesham F. A. Hamed, Egyptian Russian University, Cairo, Egypt; and Minia University, Minia, Egypt
Javier Ruiz-Del-Solar, University of Chile, Santiago, Chile
Kaidong Li, University of Kansas, Kansas City, Kansas
Kamel Abdelouahab, Clermont Auvergne University, Clermont-Ferrand, France
Khalid M. Hosny, Zagazig University, Zagazig, Egypt
Khan Muhammad, Sejong University, Seoul, South Korea
Mahmoud Hassaballah, South Valley University, Qena, Egypt
Mahmoud Khaled Abd-Ellah, Al-Madina Higher Institute for Engineering and Technology, Giza, Egypt
Maxime Pelcat, University of Rennes, Rennes, France
Miyoung Lee, Sejong University, Seoul, South Korea
Mohamed A. Kassem, Kafr El Sheikh University, Kafr El Sheikh, Egypt
Mohamed Elhelw, Nile University, Giza, Egypt
Mohamed M. Foaud, Zagazig University, Zagazig, Egypt
Mohammed A. Al-Masni, Kyung Hee University, Seoul, South Korea; and Yonsei University, Seoul, South Korea
Mugahed A. Al-Antari, Kyung Hee University, Seoul, South Korea; and Sana'a Community College, Sana'a, Republic of Yemen
Patricio Loncomilla, University of Chile, Santiago, Chile
Rami Qahwaji, University of Bradford, Bradford, UK
Salman Khan, Sejong University, Seoul, South Korea
Senthil Yogamani, Valeo Company, Galway, Ireland
Shumoos Al-Fahdawi, University of Bradford, Bradford, UK
Sung Wook Baik, Sejong University, Seoul, South Korea
Tae-Seong Kim, Kyung Hee University, Seoul, South Korea
Tanveer Hussain, Sejong University, Seoul, South Korea
Usman Sajid, University of Kansas, Kansas City, Kansas
Wenchi Ma, University of Kansas, Kansas City, Kansas
Yuanwei Wu, University of Kansas, Kansas City, Kansas
1 Accelerating the CNN Inference on FPGAs

Kamel Abdelouahab, Maxime Pelcat, and François Berry

CONTENTS
1.1 Introduction ......................................................................................................2
1.2 Background on CNNs and Their Computational Workload ............................3
1.2.1 General Overview.................................................................................3
1.2.2 Inference versus Training ..................................................................... 3
1.2.3 Inference, Layers, and CNN Models ....................................................3
1.2.4 Workloads and Computations...............................................................6
1.2.4.1 Computational Workload .......................................................6
1.2.4.2 Parallelism in CNNs ..............................................................8
1.2.4.3 Memory Accesses ..................................................................9
1.2.4.4 Hardware, Libraries, and Frameworks ................................ 10
1.3 FPGA-Based Deep Learning.......................................................................... 11
1.4 Computational Transforms ............................................................................. 12
1.4.1 The im2col Transformation ................................................................ 13
1.4.2 Winograd Transform .......................................................................... 14
1.4.3 Fast Fourier Transform ....................................................................... 16
1.5 Data-Path Optimizations ................................................................................ 16
1.5.1 Systolic Arrays.................................................................................... 16
1.5.2 Loop Optimization in Spatial Architectures ...................................... 18
Loop Unrolling ................................................................................... 19
Loop Tiling .........................................................................................20
1.5.3 Design Space Exploration................................................................... 21
1.5.4 FPGA Implementations ...................................................................... 22
1.6 Approximate Computing of CNN Models ..................................................... 23
1.6.1 Approximate Arithmetic for CNNs.................................................... 23
1.6.1.1 Fixed-Point Arithmetic ........................................................ 23
1.6.1.2 Dynamic Fixed Point for CNNs...........................................28
1.6.1.3 FPGA Implementations ....................................................... 29
1.6.1.4 Extreme Quantization and Binary Networks....................... 29
1.6.2 Reduced Computations....................................................................... 30
1.6.2.1 Weight Pruning .................................................................... 31
1.6.2.2 Low Rank Approximation ................................................... 31
1.6.2.3 FPGA Implementations ....................................................... 32


1.7 Conclusions..................................................................................................... 32
Bibliography ............................................................................................................ 33

1.1 INTRODUCTION
The exponential growth of big data during the last decade motivates innovative
methods to extract high-level semantic information from raw sensor data such as
videos, images, and speech sequences. Among the proposed methods, convolutional
neural networks (CNNs) [1] have become the de facto standard by delivering
near-human accuracy in many applications related to machine vision (e.g.,
classification [2], detection [3], segmentation [4]) and speech recognition [5].
This performance comes at the price of a large computational cost, as CNNs
require up to 38 GOPs to classify a single frame [6]. As a result, dedicated
hardware is required to accelerate their execution. Graphics processing units
(GPUs) are the most widely used platform to implement CNNs, as they offer the
best performance in terms of pure computational throughput, reaching up to
11 TFLOPs [7]. Nevertheless, in terms of power consumption, field-programmable
gate array (FPGA) solutions are known to be more energy efficient than GPUs.
While GPU implementations have demonstrated state-of-the-art computational
performance, CNN acceleration will soon be moving towards FPGAs for two reasons.
First, recent improvements in FPGA technology put FPGA performance within
striking distance of GPUs, with a reported FPGA performance of 9.2 TFLOPs [8].
Second, recent trends in CNN development increase the sparsity of CNNs and
use extremely compact data types. These trends favor FPGA devices, which are
designed to handle irregular parallelism and custom data types. As a result,
next-generation CNN accelerators are expected to deliver up to 5.4× better
computational throughput than GPUs [7].
As an inflection point in the development of CNN accelerators might be near, we
conduct a survey on FPGA-based CNN accelerators. While a similar survey can be
found in [9], we focus in this chapter on the recent techniques that were not
covered in previous works. In addition to this chapter, we refer the reader to
the work of Venieris et al. [10], which reviews the toolflows automating the CNN
mapping process, and to the works of Sze et al., which focus on ASICs for deep
learning acceleration.
The amount and diversity of research on the subject of CNN FPGA acceleration
within the last 3 years demonstrate the tremendous industrial and academic
interest. This chapter presents a state-of-the-art review of CNN inference
accelerators on FPGAs. The computational workloads, their parallelism, and the
involved memory accesses are analyzed. At the level of neurons, optimizations of
the convolutional and fully connected (FC) layers are explained and the
performances of the different methods compared. At the network level,
approximate computing and data-path optimization methods are covered and
state-of-the-art approaches compared. The methods and tools investigated in this
survey represent the recent trends in FPGA CNN inference accelerators and will
fuel the future advances in efficient hardware deep learning.
Accelerating the CNN Inference on FPGAs 3

1.2 BACKGROUND ON CNNS AND THEIR COMPUTATIONAL WORKLOAD
In this first section, we overview the main features of CNNs, mainly focusing on
the computations and parallelism patterns involved during their inference.

1.2.1 GENERAL OVERVIEW


Deep* CNNs are feed-forward†, sparsely connected‡ neural networks. A typical
CNN structure consists of a pipeline of layers. Each layer inputs a set of data, known
as a feature map (FM), and produces a new set of FMs with higher-level semantics.

1.2.2 INFERENCE VERSUS TRAINING


As typical machine learning algorithms, CNNs are deployed in two phases. First,
the training stage works on a known set of annotated data samples to create a
model with predictive power whose semantics extrapolate to natural data outside
the training set. This phase implements the back-propagation algorithm [11],
which iteratively updates the CNN parameters, such as convolution weights, to
improve the predictive power of the model. A special case of CNN training is
fine-tuning: when fine-tuning a model, the weights of a previously trained
network are used to initialize the parameters of a new training. These weights
are then adjusted for a new constraint, such as a different dataset or a reduced
precision.
The second phase, known as inference, uses the learned model to classify new
data samples (i.e., inputs that were not previously seen by the model). In a
typical setup, CNNs are trained/fine-tuned only once, on large clusters of GPUs.
By contrast, the inference is implemented each time a new data sample has to be
classified. As a consequence, the literature mostly focuses on accelerating the
inference phase, and our discussion accordingly overviews the main methods
employed to accelerate it. Moreover, since most CNN accelerators benchmark their
performance on models trained for image classification, we focus this chapter on
that application. Nonetheless, the methods detailed in this survey can be
employed to accelerate CNNs for other applications such as object detection,
image segmentation, and speech recognition.

1.2.3 INFERENCE, LAYERS, AND CNN MODELS


CNN inference refers to the feed-forward propagation of B input images across L
layers. This section details the computations involved in the major types of
these layers. A common practice is to manipulate layers, parameters, and FMs as
multidimensional arrays, as listed in Table 1.1. Note that, when relevant, the
type of the layer will be denoted with a superscript, and the position of the
layer with a subscript.

* Includes a large number of layers, typically above three.
† The information flows from the neurons of a layer ℓ towards the neurons of a layer ℓ + 1.
‡ CNNs implement the weight sharing technique, applying a small number of weights across all the
input pixels (i.e., image convolution).



TABLE 1.1
Tensors Involved in the Inference of a Given Layer ℓ with Their Dimensions

X  Input FMs        B × C × H × W      B      Batch size (number of input frames)
Y  Output FMs       B × N × V × U      W/H/C  Width/Height/Depth of input FMs
Θ  Learned filters  N × C × J × K      U/V/N  Width/Height/Depth of output FMs
β  Learned biases   N                  K/J    Horizontal/Vertical kernel size

A convolutional layer (conv) carries out the feature extraction process by
applying, as illustrated in Figure 1.1, a set of three-dimensional convolution
filters Θconv to a set of B input volumes Xconv. Each input volume has a depth C
and can be a color image (in the case of the first conv layer) or an output
generated by previous layers in the network. Applying a three-dimensional filter
to a three-dimensional input results in a two-dimensional FM. Thus, applying N
three-dimensional filters in a layer results in a three-dimensional output with
a depth N.
In some CNN models, a learned offset βconv, called a bias, is added to the
processed feature maps. However, this practice has been discarded in recent
models [6]. The computations involved in the feed-forward propagation of conv
layers are detailed in Equation 1.1.

\forall \{b, n, v, u\} \in [1, B] \times [1, N] \times [1, V] \times [1, U]:

Y^{conv}[b, n, v, u] = \beta^{conv}[n] + \sum_{c=1}^{C} \sum_{j=1}^{J} \sum_{k=1}^{K} X^{conv}[b, c, v + j, u + k] \times \Theta^{conv}[n, c, j, k] \quad (1.1)

One may note that applying a depth convolution to a 3D input boils down to applying
a mainstream 2D convolution to each of the 2D channels of the input, then, at each
point, summing the results across all the channels, as shown in Equation 1.2.

FIGURE 1.1 Feed-forward propagation in conv, act, and pool layers (batch size B = 1, bias
β omitted).

\forall n \in [1, N]:

Y^{conv}[n] = \beta^{conv}[n] + \sum_{c=1}^{C} \mathrm{conv2D}\left(X^{conv}[c], \Theta^{conv}[n, c]\right) \quad (1.2)
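As a sanity check, Equations 1.1 and 1.2 can be evaluated directly with a short sketch (plain NumPy, hypothetical dimensions; unit stride and no padding are assumed, so V = H − J + 1 and U = W − K + 1):

```python
import numpy as np

def conv_layer(X, theta, beta):
    """Direct six-loop evaluation of Eq. 1.1:
    Y[b,n,v,u] = beta[n] + sum_{c,j,k} X[b,c,v+j,u+k] * theta[n,c,j,k]."""
    B, C, H, W = X.shape
    N, _, J, K = theta.shape
    V, U = H - J + 1, W - K + 1          # valid padding, stride 1
    Y = np.zeros((B, N, V, U))
    for b in range(B):
        for n in range(N):
            for v in range(V):
                for u in range(U):
                    acc = beta[n]
                    for c in range(C):
                        for j in range(J):
                            for k in range(K):
                                acc += X[b, c, v + j, u + k] * theta[n, c, j, k]
                    Y[b, n, v, u] = acc
    return Y

def conv2d(x, w):
    """Mainstream 2D convolution of one H x W channel with one J x K kernel."""
    J, K = w.shape
    V, U = x.shape[0] - J + 1, x.shape[1] - K + 1
    return np.array([[np.sum(x[v:v + J, u:u + K] * w) for u in range(U)]
                     for v in range(V)])

rng = np.random.default_rng(0)
X = rng.standard_normal((1, 3, 6, 6))        # B=1, C=3, H=W=6
theta = rng.standard_normal((4, 3, 3, 3))    # N=4 filters, J=K=3
beta = rng.standard_normal(4)

Y = conv_layer(X, theta, beta)
# Eq. 1.2: the depth convolution equals the channel-wise 2D convolutions summed over c
Y2 = np.stack([beta[n] + sum(conv2d(X[0, c], theta[n, c]) for c in range(3))
               for n in range(4)])[None]
assert np.allclose(Y, Y2)
```

The six nested loops mirror Equation 1.1 term by term, while the stacked per-channel sum mirrors Equation 1.2; the final assertion checks that the two formulations agree.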

Each conv layer of a CNN is usually followed by an activation layer that applies
a nonlinear function to all the values of the FMs. Early CNNs were trained with
TanH or Sigmoid functions, but recent models employ the rectified linear unit
(ReLU) function, which grants faster training times and lower computational
complexity, as highlighted in Krizhevsky et al. [12].

\forall \{b, n, h, w\} \in [1, B] \times [1, N] \times [1, H] \times [1, W]:

Y^{act}[b, n, h, w] = \mathrm{act}\left(X^{act}[b, n, h, w]\right), \quad \mathrm{act} \in \{\mathrm{TanH}, \mathrm{Sigmoid}, \mathrm{ReLU}, \ldots\} \quad (1.3)

The convolutional and activation parts of a CNN are directly inspired by the
cells of the visual cortex in neuroscience [13]. This is also the case with
pooling layers, which are periodically inserted in between successive conv
layers. As shown in Equation 1.4, pooling sub-samples each channel of the input
FM by selecting either the average or, more commonly, the maximum of a given
K × K neighborhood. As a result, the dimensionality of an FM is reduced, as
illustrated in Figure 1.1.

\forall \{b, n, v, u\} \in [1, B] \times [1, N] \times [1, V] \times [1, U]:

Y^{pool}[b, n, v, u] = \max_{p, q \in [1:K]} X^{pool}[b, n, v + p, u + q] \quad (1.4)
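A minimal sketch of Equation 1.4, assuming the common non-overlapping case where the stride equals the neighborhood size K:

```python
import numpy as np

def max_pool(X, K):
    """Eq. 1.4 with the common choice of non-overlapping K x K windows (stride = K)."""
    B, N, H, W = X.shape
    V, U = H // K, W // K
    Y = np.empty((B, N, V, U))
    for v in range(V):
        for u in range(U):
            window = X[:, :, v * K:(v + 1) * K, u * K:(u + 1) * K]
            Y[:, :, v, u] = window.max(axis=(2, 3))  # max over each K x K neighborhood
    return Y

X = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
Y = max_pool(X, 2)   # each 2x2 window collapses to its maximum
```

On this toy 4 × 4 input, each channel is reduced to a 2 × 2 map, illustrating the dimensionality reduction mentioned above.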

When deployed for classification purposes, the CNN pipeline is often terminated
by FC layers. In contrast with convolutional layers, FC layers do not implement
weight sharing and involve as many weights as input data elements (i.e., W = K,
H = J, U = V = 1). Moreover, as in conv layers, a nonlinear function is applied
to the outputs of FC layers.

\forall \{b, n\} \in [1, B] \times [1, N]:

Y^{fc}[b, n] = \beta^{fc}[n] + \sum_{c=1}^{C} \sum_{h=1}^{H} \sum_{w=1}^{W} X^{fc}[b, c, h, w] \times \Theta^{fc}[n, c, h, w] \quad (1.5)

The Softmax function is a generalization of the Sigmoid function; it "squashes"
an N-dimensional vector X into a vector Softmax(X) whose entries lie in the
range [0,1] and sum to 1. The Softmax function is used in various multi-class
classification methods, especially in CNNs. In this case, the Softmax layer is
placed at the end of the network, and the dimension of the vector it operates on
(i.e., N) represents the number of classes in the considered dataset. Thus, the
input of the Softmax is the data generated by the last fully connected layer,
and the output is the probability predicted for each class.

\forall \{b, n\} \in [1, B] \times [1, N]:

\mathrm{Softmax}(X[b, n]) = \frac{\exp(X[b, n])}{\sum_{c=1}^{N} \exp(X[b, c])} \quad (1.6)
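Equation 1.6 can be sketched as follows; subtracting the per-row maximum before exponentiation is a standard numerical-stability trick (not part of the equation itself) that leaves the result unchanged:

```python
import numpy as np

def softmax(X):
    """Eq. 1.6 over the class axis. Subtracting the max first cancels out in the
    ratio but prevents overflow in exp() for large logits."""
    Z = X - X.max(axis=-1, keepdims=True)
    e = np.exp(Z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[1.0, 2.0, 3.0]])   # B=1, N=3 hypothetical class scores
probs = softmax(logits)                # rows sum to 1; largest logit wins
```

As expected, each output row is a probability distribution over the N classes, and the predicted class is the one with the largest logit.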

Batch normalization was introduced [14] to speed up training by linearly
shifting and scaling the distribution of a given batch of inputs B to have zero
mean and unit variance. These layers are also of interest when implementing
binary neural networks (BNNs), as they reduce the quantization error compared to
an arbitrary input distribution, as highlighted in Hubara et al. [15]. Equation
1.7 details the processing of batch norm layers, where the mean μ and the
variance σ² are statistics collected during training, α and γ are parameters
learned during training, and ϵ is a hyper-parameter set empirically for
numerical stability purposes (i.e., avoiding division by zero).

\forall \{b, n, v, u\} \in [1, B] \times [1, N] \times [1, V] \times [1, U]:

Y^{BN}[b, n, v, u] = \alpha \, \frac{X^{BN}[b, n, v, u] - \mu}{\sqrt{\sigma^2 + \epsilon}} + \gamma \quad (1.7)
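At inference time, Equation 1.7 reduces to a per-channel shift and scale with frozen statistics; a minimal sketch, assuming one (μ, σ², α, γ) tuple per output channel:

```python
import numpy as np

def batch_norm(X, mu, var, alpha, gamma, eps=1e-5):
    """Eq. 1.7 at inference time: per-channel normalization with frozen statistics.
    mu, var, alpha, gamma each hold one entry per channel n."""
    c = mu.reshape(1, -1, 1, 1)
    v = var.reshape(1, -1, 1, 1)
    a = alpha.reshape(1, -1, 1, 1)
    g = gamma.reshape(1, -1, 1, 1)
    return a * (X - c) / np.sqrt(v + eps) + g

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 3, 4, 4))                    # B=2, N=3, 4x4 FMs
mu, var = X.mean(axis=(0, 2, 3)), X.var(axis=(0, 2, 3))  # per-channel statistics
Y = batch_norm(X, mu, var, np.ones(3), np.zeros(3))
# With alpha=1, gamma=0 and the batch's own statistics, each channel of Y has
# (near) zero mean and unit variance, as the layer was designed to enforce
```

With learned α and γ, the layer can then restore an arbitrary per-channel scale and offset on top of the normalized activations.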

1.2.4 WORKLOADS AND COMPUTATIONS


The accuracy of CNN models has been increasing since their breakthrough in 2012
[12]. However, this accuracy comes at a high computational cost. The main
challenge facing CNN developers is to improve classification accuracy while
maintaining a tolerable computational workload. As shown in Table 1.2, this
challenge was successfully addressed by Inception [16] and ResNet models [17],
with their use of bottleneck 1 × 1 convolutions that reduce both model size and
computations while increasing depth and accuracy.

1.2.4.1 Computational Workload


As shown in Equations 1.1 and 1.5, the processing of a CNN involves an intensive
use of multiply-accumulate (MAC) operations. All these MAC operations take place
in conv and FC layers, while the remaining parts of the network are element-wise
transformations that can generally be implemented with low-complexity
computational requirements.

TABLE 1.2
Popular CNN Models with Their Computational Workload*

                      AlexNet   GoogleNet  VGG16    VGG19    ResNet101  ResNet-152
Model                 [12]      [16]       [6]      [6]      [17]       [17]
Top1 err (%)          42.9      31.3       28.1     27.3     23.6       23.0
Top5 err (%)          19.80     10.07      9.90     9.00     7.1        6.7
L_c                   5         57         13       16       104        155
Σ C_ℓ^conv (MACs)     666 M     1.58 G     15.3 G   19.5 G   7.57 G     11.3 G
Σ W_ℓ^conv (weights)  2.33 M    5.97 M     14.7 M   20 M     42.4 M     58 M
Act                   ReLU      ReLU       ReLU     ReLU     ReLU       ReLU
Pool                  3         14         5        5        2          2
L_f                   3         1          3        3        1          1
Σ C_ℓ^fc (MACs)       58.6 M    1.02 M     124 M    124 M    2.05 M     2.05 M
Σ W_ℓ^fc (weights)    58.6 M    1.02 M     124 M    124 M    2.05 M     2.05 M
C (total MACs)        724 M     1.58 G     15.5 G   19.6 G   7.57 G     11.3 G
W (total weights)     61 M      6.99 M     138 M    144 M    44.4 M     60 M

* Accuracy measured on single-crops of ImageNet test-set

In this chapter, the computational workload C of a given CNN corresponds to the
number of MACs it involves during inference*. The number of these MACs mainly
depends on the topology of the network, and more particularly on the number of
conv and FC layers and their dimensions. Thus, the computational workload can be
expressed as in Equation 1.8, where L_c (L_f) is the number of conv (fully
connected) layers, and C_ℓ^conv (C_ℓ^fc) is the number of MACs occurring in a
given convolutional (fully connected) layer ℓ.

C = \sum_{\ell=1}^{L_c} C_\ell^{conv} + \sum_{\ell=1}^{L_f} C_\ell^{fc} \quad (1.8)

C_\ell^{conv} = N_\ell \times C_\ell \times J_\ell \times K_\ell \times U_\ell \times V_\ell \quad (1.9)

C_\ell^{fc} = N_\ell \times C_\ell \times W_\ell \times H_\ell \quad (1.10)

* Batch size is set to 1 for clarity purposes.



In a similar way, the number of weights, and consequently the size of a given CNN
model, can be expressed as follows:

W = \sum_{\ell=1}^{L_c} W_\ell^{conv} + \sum_{\ell=1}^{L_f} W_\ell^{fc} \quad (1.11)

W_\ell^{conv} = N_\ell \times C_\ell \times J_\ell \times K_\ell \quad (1.12)

W_\ell^{fc} = N_\ell \times C_\ell \times W_\ell \times H_\ell \quad (1.13)
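Equations 1.8 through 1.13 amount to simple products and sums; a small sketch, using the well-known dimensions of AlexNet's first conv layer (96 filters of size 11 × 11 × 3 producing 55 × 55 output maps) as a hypothetical check:

```python
def conv_macs(N, C, J, K, U, V):
    """Eq. 1.9: MAC count of one conv layer."""
    return N * C * J * K * U * V

def conv_weights(N, C, J, K):
    """Eq. 1.12: weight count of one conv layer."""
    return N * C * J * K

def fc_macs(N, C, H, W):
    """Eqs. 1.10 and 1.13: for FC layers, MAC count and weight count coincide."""
    return N * C * H * W

# Hypothetical check on AlexNet's first conv layer:
# N=96 filters, C=3 input channels, 11x11 kernels, 55x55 output maps
macs = conv_macs(96, 3, 11, 11, 55, 55)   # ~105 M MACs
weights = conv_weights(96, 3, 11, 11)     # ~35 K weights
```

Note that Equation 1.9 is just Equation 1.12 multiplied by the U × V output positions at which each filter is applied, which the assertion below makes explicit.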

For state-of-the-art CNN models, L_c, N_ℓ, and C_ℓ can be quite large. This
makes CNNs computationally and memory intensive; for instance, the
classification of a single frame using the VGG19 network requires 19.5 billion
MAC operations.
It can be observed in the same table that most of the MACs occur in the
convolutional parts, and consequently, 90% of the execution time of a typical
inference is spent on conv layers [18]. By contrast, FC layers contain most of
the weights, and thus dominate the size of a given CNN model.

1.2.4.2 Parallelism in CNNs


The high computational workload of CNNs makes their inference a challenging
task, especially on low-energy embedded devices. The key solution to this
challenge is to leverage the extensive concurrency they exhibit. These
parallelism opportunities can be formalized as follows:

• Batch Parallelism: CNN implementations can simultaneously classify
multiple frames grouped as a batch B in order to reuse the filters in each
layer, minimizing the number of memory accesses. However, as shown in [10],
batch parallelism quickly reaches its limits. This is due to the fact that
most of the memory transactions result from storing intermediate results and
not from loading CNN parameters. Consequently, reusing the filters only
slightly impacts the overall processing time per image.
• Inter-layer Pipeline Parallelism: CNNs have a feed-forward hierarchical
structure consisting of a succession of data-dependent layers. These layers
can be executed in a pipelined fashion by launching layer ℓ before ending
the execution of layer ℓ − 1. This pipelining costs latency but increases
throughput.

Moreover, the execution of the most computationally intensive parts (i.e., conv
layers) exhibits the four following types of concurrency:

• Inter-FM Parallelism: Each two-dimensional plane of an FM can be
processed separately from the others, meaning that P_N elements of Y^conv can be
computed in parallel (0 < P_N < N).

• Intra-FM Parallelism: In a similar way, pixels of a single output FM plane
are data-independent and can thus be processed concurrently by evaluating
P_V × P_U values of Y^conv[n] (0 < P_V × P_U < V × U).
• Inter-convolution Parallelism: Depth convolutions occurring in conv
layers can be expressed as a sum of 2D convolutions, as shown in Equation
1.2. These 2D convolutions can be evaluated simultaneously by computing
P_C elements concurrently (0 < P_C < C).
• Intra-convolution Parallelism: The 2D convolutions involved in the
processing of conv layers can be implemented in a pipelined fashion such as
in [76]. In this case, P_J × P_K multiplications are implemented concurrently
(0 < P_J × P_K < J × K).

1.2.4.3 Memory Accesses


As a consequence of the previous discussion, the inference of a CNN shows large
vectorization opportunities that can be exploited by allocating multiple
computational resources to concurrently process multiple features. However, this
parallelization cannot accelerate the execution of a CNN if no data-caching
strategy is implemented. In fact, memory bandwidth is often the bottleneck when
processing CNNs.
In the FC parts, the execution can be memory-bound because of the high number
of weights that these layers contain, and consequently, the high number of
memory reads required.
This is expressed in Equation 1.14, where M_ℓ^fc refers to the number of memory
accesses occurring in an FC layer ℓ. This number can be written as the sum of
the memory accesses reading the inputs X_ℓ^fc, the memory accesses reading the
weights Θ_ℓ^fc, and the memory accesses writing the results Y_ℓ^fc.

M_\ell^{fc} = \mathrm{MemRd}(X_\ell^{fc}) + \mathrm{MemRd}(\Theta_\ell^{fc}) + \mathrm{MemWr}(Y_\ell^{fc}) \quad (1.14)

= C_\ell H_\ell W_\ell + N_\ell C_\ell H_\ell W_\ell + N_\ell \quad (1.15)

\approx N_\ell C_\ell H_\ell W_\ell \quad (1.16)

Note that the fully connected parts of state-of-the-art models involve large
values of N_ℓ and C_ℓ, making the memory reading of weights the most impacting
factor, as formulated in Equation 1.16. In this context, batch parallelism can
significantly accelerate the execution of CNNs with a large number of FC layers.
In the conv parts, the high number of MAC operations results in a high number
of memory accesses, as each MAC requires at least 2 memory reads and 1 memory
write*. This number of memory accesses accumulates with the high dimensions of
data manipulated by conv layers, as shown in Equation 1.18. If all these accesses are
towards external memory (for instance, DRAM), throughput and energy consumption
will be highly impacted, because DRAM accesses engender high latency and energy
consumption, even more than the computation itself [21].

* This is the best-case scenario of a fully pipelined MAC, where intermediate results do not need to be
loaded.

M_\ell^{conv} = \mathrm{MemRd}(X_\ell^{conv}) + \mathrm{MemRd}(\Theta_\ell^{conv}) + \mathrm{MemWr}(Y_\ell^{conv}) \quad (1.17)

= C_\ell H_\ell W_\ell + N_\ell C_\ell J_\ell K_\ell + N_\ell U_\ell V_\ell \quad (1.18)
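Equations 1.17 and 1.18 count each tensor element once (i.e., they assume fetched data is perfectly reused on chip); a sketch with hypothetical AlexNet-like dimensions illustrates why FC layers are dominated by weight reads (Equation 1.16) while conv layers spread their accesses across inputs, weights, and outputs:

```python
def conv_mem_accesses(N, C, H, W, J, K, U, V):
    """Eq. 1.18: reads of X and Theta plus writes of Y, counting each tensor
    element once (perfect on-chip reuse of fetched data assumed)."""
    x_reads = C * H * W
    w_reads = N * C * J * K
    y_writes = N * U * V
    return x_reads, w_reads, y_writes

def fc_mem_accesses(N, C, H, W):
    """Eq. 1.15: for FC layers, the N*C*H*W weight reads dominate (Eq. 1.16)."""
    x_reads = C * H * W
    w_reads = N * C * H * W
    y_writes = N
    return x_reads, w_reads, y_writes

# Hypothetical 227x227x3 input, 96 filters of 11x11, 55x55 output (conv),
# and a 4096 -> 1000 classifier layer (FC)
conv_acc = conv_mem_accesses(96, 3, 227, 227, 11, 11, 55, 55)
fc_acc = fc_mem_accesses(1000, 4096, 1, 1)
```

In the FC case, the weight reads outnumber the input reads by a factor of N, which is why Equation 1.16 keeps only the N·C·H·W term.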

The number of these DRAM accesses, and thus latency and energy consumption, can
be reduced by implementing a memory-caching hierarchy using on-chip memories.
As discussed in the next sections, state-of-the-art CNN accelerators employ
register files as well as several levels of caches. The former, being the
fastest, are implemented nearest to the computational capabilities. The latency
and energy consumption resulting from these caches are lower by several orders
of magnitude than those of external memory accesses, as pointed out in Sze et
al. [22].

1.2.4.4 Hardware, Libraries, and Frameworks


In order to catch the parallelism of CNNs, dedicated hardware accelerators are
developed. Most of them are based on GPUs, which are known to perform well
on regular parallelism patterns thanks to SIMD and SIMT execution models, a
dense collection of floating-point computing elements that peak at 12 TFLOPs,
and high-capacity/bandwidth on/off-chip memories [23]. To support these hardware
accelerators, specialized libraries for deep learning are developed to provide
the necessary programming abstraction, such as cuDNN on Nvidia GPUs [24]. Built
upon these libraries, dedicated frameworks for deep learning are proposed to
improve the productivity of conceiving, training, and deploying CNNs, such as
Caffe [25] and TensorFlow [26].
Beside GPU implementations, numerous FPGA accelerators for CNNs have been
proposed. FPGAs are fine-grained programmable devices that can catch the CNN
parallelism patterns with no memory bottleneck, thanks to the following:

1. A high density of hard-wired digital signal processor (DSP) blocks that are
able to achieve up to 20 TMACs (8 TFLOPs) [8].
2. A collection of in situ on-chip memories, located next to the DSPs, that can
be exploited to significantly reduce the number of external memory accesses.

As a consequence, CNNs can benefit from a significant acceleration when running
on reconfigurable hardware. This has caused numerous research efforts to study
FPGA-based CNN acceleration, targeting both high-performance computing (HPC)
applications [27] and embedded devices [28].
In the remaining parts of this chapter, we conduct a survey on methods and
hardware architectures to accelerate the execution of CNNs on FPGAs. The next
section lists the evaluation metrics used; then Sections 1.4 and 1.5
respectively study the computational transforms and the data-path optimizations
involved in recent CNN accelerators. Finally, the last section of this chapter
details how approximate computing is a key enabler of FPGA-based deep learning,
and overviews the main contributions implementing these techniques.

1.3 FPGA-BASED DEEP LEARNING


Accelerating a CNN on an FPGA-powered platform can be seen as an optimization
effort that focuses on one or several of the following criteria:

• Computational Throughput (T): A large number of the works studied
in this chapter focus on reducing the CNN execution times on the FPGA
(i.e., the computation latency) by improving the computational throughput
of the accelerator. This throughput is usually expressed as the number of
MACs an accelerator performs per second. While this metric is relevant in
the case of HPC workloads, we prefer to report the throughput as the number
of frames an accelerator processes per second (fps), which better suits
the embedded vision context. The two metrics can be directly related using
Equation 1.19, where C is defined in Equation 1.8 and refers to the number
of computations a CNN involves in order to process a single frame:

T(\mathrm{FPS}) = \frac{T(\mathrm{MACS})}{C(\mathrm{MAC})} \quad (1.19)

• Classification/Detection Performance (A): Another way to reduce CNN execution
times is to trade some of their modeling performance in favor of faster
execution timings. For this reason, the classification and detection metrics
are reported, especially when dealing with approximate computing methods.
Classification performance is usually reported as top-1 and top-5 accuracies,
and detection performance is reported using the mAP50 and mAP75 metrics.
• Energy and Power Consumption (P): Numerous FPGA-based acceleration
methods can be categorized as either latency-driven or energy-driven.
While the former focus on improving the computational throughput, the
latter consider the power consumption of the accelerator, reported in watts.
Alternatively, numerous latency-driven accelerators can be ported to low-
power-range FPGAs and perform well under strict power consumption
requirements.
• Resource Utilization (R): When it comes to FPGA acceleration, the
utilization of the available resources (LUTs, DSP blocks, SRAM blocks) is
always considered. Note that the resource utilization can be correlated with
the power consumption*, but improving the ratio between the two is a
technological problem that clearly exceeds the scope of this chapter. For this
reason, both power consumption and resource utilization metrics will be
reported when available.
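The throughput conversion in Equation 1.19 is plain arithmetic; for instance, a hypothetical accelerator sustaining 1 TMAC/s would run VGG19 (19.5 GMACs per frame, Table 1.2) at roughly 51 fps:

```python
def fps(macs_per_second, macs_per_frame):
    """Eq. 1.19: frame rate equals raw MAC throughput divided by the
    per-frame workload C of the network."""
    return macs_per_second / macs_per_frame

# Hypothetical accelerator sustaining 1 TMAC/s on VGG19 (19.5 GMACs per frame)
rate = fps(1e12, 19.5e9)
```

The same conversion makes accelerators comparable across networks: the heavier the model's C, the lower the frame rate at a fixed MAC throughput.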

An FPGA implementation of a CNN has to satisfy these requirements. In this
perspective, the literature provides three main approaches to address the
problem of FPGA-based deep learning. These approaches mainly consist of
computational transforms, data-path optimization, and approximate computing
techniques, as illustrated in Figure 1.2.

* At a similar number of memory accesses. These accesses typically play the most dominant role in the
power consumption of an accelerator.

FIGURE 1.2 Main approaches to accelerate CNN inference on FPGAs.
trated in Figure 1.2.

1.4 COMPUTATIONAL TRANSFORMS


In order to accelerate the execution of conv and FC layers, numerous
implementations rely on computational transforms. These transforms, which
operate on the FM and weight arrays, aim at vectorizing the implementations and
reducing the number of operations occurring during inference.
Three main transforms can be distinguished. The im2col method reshapes the
feature and weight arrays in a way that transforms depth convolutions into
matrix multiplications. The FFT method operates in the frequency domain,
transforming convolutions into multiplications. Finally, in Winograd filtering,
convolutions boil down to element-wise matrix multiplications thanks to a tiling
and a linear transformation of the data.
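The im2col lowering can be sketched in a few lines: each receptive field of the input is unfolded into a column, so the whole depth convolution of Equation 1.1 (bias omitted) becomes a single matrix multiplication; dimensions here are hypothetical:

```python
import numpy as np

def im2col(X, J, K):
    """Unfold each C x J x K receptive field of X (shape C x H x W) into one
    column of the output matrix (shape C*J*K x V*U)."""
    C, H, W = X.shape
    V, U = H - J + 1, W - K + 1
    cols = np.empty((C * J * K, V * U))
    for v in range(V):
        for u in range(U):
            cols[:, v * U + u] = X[:, v:v + J, u:u + K].ravel()
    return cols

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 5, 5))           # C=3, H=W=5
theta = rng.standard_normal((4, 3, 3, 3))    # N=4 filters of size 3x3x3

cols = im2col(X, 3, 3)                       # (C*J*K, V*U) = (27, 9)
Y = (theta.reshape(4, -1) @ cols).reshape(4, 3, 3)   # one GEMM replaces the conv

# Reference: direct evaluation of the depth convolution, as in Eq. 1.1
ref = np.array([[[np.sum(X[:, v:v + 3, u:u + 3] * theta[n]) for u in range(3)]
                 for v in range(3)] for n in range(4)])
assert np.allclose(Y, ref)
```

The price of this lowering is data duplication (overlapping receptive fields are copied into multiple columns), which is why it trades memory footprint for the regularity of a GEMM call.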
These computational transforms mainly appear in temporal architectures and are
implemented by means of a variety of linear algebra libraries, such as OpenBLAS
for CPUs* or cuBLAS for GPUs†. Besides this, various implementations make use of
these transforms to efficiently map CNNs on FPGAs.
This section discusses these three methods, highlighting their use cases and
computational improvements. For a better understanding, we recall that for each
layer ℓ:

• The input feature map is represented as a four-dimensional array X, in which
the dimensions B × C × H × W respectively refer to the batch size, the number
of input channels, the height, and the width.

* https://www.openblas.net/
† https://developer.nvidia.com/cublas
Another random document with
no related content on Scribd:
were unsatisfactory to several of the concerted Powers, and
were sharply criticised in the British and German press. The
German government, especially, was disposed to insist upon
stern and strenuous measures in dealing with that of China,
and it addressed the following circular note, on the 18th of
September, to all the Powers:

"The Government of the Emperor holds as preliminary to


entering upon diplomatic relations with the Chinese Government
that those persons must be delivered up who have been proved
to be the original and real instigators of the outrages
against international law which have occurred at Peking. The
number of those who were merely instruments in carrying out
the outrages is too great. Wholesale executions would be
contrary to the civilized conscience, and the circumstances of
such a group of leaders cannot be completely ascertained. But
a few whose guilt is notorious should be delivered up and
punished. The representatives of the powers at Peking are in a
position to give or bring forward convincing evidence. Less
importance attaches to the number punished than to their
character as chief instigators or leaders. The Government
believes it can count on the unanimity of all the Cabinets in
regard to this point, insomuch as indifference to the idea of
just atonement would be equivalent to indifference to a
repetition of the crime. The Government proposes, therefore,
that the Cabinets concerned should instruct their
representatives at Peking to indicate those leading Chinese
personages from whose guilt in instigating or perpetrating
outrages all doubt is excluded."

The British government was understood to be not unwilling to


support this demand from Germany, but little encouragement
seems to have been officially given to it from other quarters,
and the government of the United States was most emphatic in
declining to approve it. The reply of the latter to the German
circular note was promptly given, September 21, as follows:
"The government of the United States has, from the outset,
proclaimed its purpose to hold to the uttermost accountability
the responsible authors of any wrongs done in China to
citizens of the United States and their interests, as was
stated in the Government's circular communication to the
Powers of July 3 last. These wrongs have been committed not
alone in Peking, but in many parts of the Empire, and their
punishment is believed to be an essential element of any
effective settlement which shall prevent a recurrence of such
outrages and bring about permanent safety and peace in China.
It is thought, however, that no punitive measures can be so
effective by way of reparation for wrongs suffered and as
deterrent examples for the future as the degradation and
punishment of the responsible authors by the supreme Imperial
authority itself, and it seems only just to China that she
should be afforded in the first instance an opportunity to do
this and thus rehabilitate herself before the world.

"Believing thus, and without abating in anywise its deliberate


purpose to exact the fullest accountability from the
responsible authors of the wrongs we have suffered in China,
the Government of the United States is not disposed, as a
preliminary condition to entering into diplomatic negotiations
with the Chinese Government, to join in a demand that said
Government surrender to the Powers such persons as, according
to the determination of the Powers themselves, may be held to
be the first and real perpetrators of those wrongs. On the
other hand, this Government is disposed to hold that the
punishment of the high responsible authors of these wrongs,
not only in Peking, but throughout China, is essentially a
condition to be embraced and provided for in the negotiations
for a final settlement.
{139}
It is the purpose of this Government, at the earliest
practicable moment, to name its plenipotentiaries for
negotiating a settlement with China, and in the mean time to
authorize its Minister in Peking to enter forthwith into
conference with the duly authorized representatives of the
Chinese Government, with a view of bringing about a
preliminary agreement whereby the full exercise of the
Imperial power for the preservation of order and the
protection of foreign life and property throughout China,
pending final negotiations with the Powers, shall be assured."

On the same day on which the above note was written the
American government announced its recognition of Prince Ching
and Li Hung-chang, as plenipotentiaries appointed to represent
the Emperor of China, in preliminary negotiations for the
restoration of the imperial authority at Peking and for a
settlement with the foreign Powers.

Differences between the Powers acting together in China, as to


the preliminary conditions of negotiation with the Chinese
government, and as to the nature and range of the demands to
be made upon it, were finally adjusted on the lines of a
proposal advanced by the French Foreign Office, in a note
dated October 4, addressed to the several governments, as
follows:

"The intention of the Powers in sending their forces to China


was, above all, to deliver the Legations. Thanks to their
union and the valour of their troops this object has been
attained. The question now is to obtain from the Chinese
Government, which has given Prince Ching and Li Hung-chang
full powers to negotiate and to treat in its name, suitable
reparation for the past and serious guarantees for the future.
Penetrated with the spirit which has evoked the previous
declarations of the different Governments, the Government of
the Republic has summarized its own sentiments in the
following points, which it submits as a basis for the
forthcoming negotiations after the customary verification of
powers:

(1) The punishment of the chief culprits, who will be


designated by the representatives of the Powers in Peking.
(2) The maintenance of the embargo on the importation of arms.

(3) Equitable indemnity for the States and for private


persons.

(4) The establishment in Peking of a permanent guard for the


Legations.

(5) The dismantling of the Ta-ku forts.

(6) The military occupation of two or three points on the


Tien-tsin-Peking route, thus assuring complete liberty of
access for the Legations should they wish to go to the coast
and to forces from the sea-board which might have to go up to
the capital.

It appears impossible to the Government of the Republic that
these so legitimate conditions, if collectively presented by
the representatives of the Powers and supported by the
presence of the international troops, will not shortly be
accepted by the Chinese Government."

On the 17th of October, the French Embassy at Washington
announced to the American government that "all the interested
powers have adhered to the essential principles of the French
note," and added: "The essential thing now is to show the
Chinese Government, which has declared itself ready to
negotiate, that the powers are animated by the same spirit;
that they are decided to respect the integrity of China and
the independence of its Government, but that they are none the
less resolved to obtain the satisfaction to which they have a
right. In this regard it would seem that if the proposition
which has been accepted as the basis of negotiations were
communicated to the Chinese plenipotentiaries by the Ministers
of the powers at Peking, or in their name by their Dean, this
step would be of a nature to have a happy influence upon the
determinations of the Emperor of China and of his Government."
The government of the United States approved of this
suggestion from France, and announced that it had "instructed
its Minister in Peking to concur in presenting to the Chinese
plenipotentiaries the points upon which we are agreed." Other
governments, however, seem to have given different
instructions, and some weeks were spent by the foreign
Ministers at Peking in formulating the joint note in which
their requirements were to be presented to Prince Ching and
Earl Li.

The latter, meantime, had submitted, on their own part, to the
allied plenipotentiaries, a draft of what they conceived to be
the just preliminaries of a definitive treaty. They prefaced
it with a brief review of what had occurred, and some remarks,
confessing that "the throne now realizes that all these
calamities have been caused by the fact that Princes and high
Ministers of State screened the Boxer desperados, and is
accordingly determined to punish severely the Princes and
Ministers concerned in accordance with precedent by handing
them over to their respective Yamêns for the determination of
a penalty." The "draft clauses" then submitted were as
follows:

"The siege of the Legations was a flagrant violation of the


usages of international law and an utterly unpermissible act.
China admits the gravity of her error and undertakes that
there shall be no repetition of the occurrence. China admits
her liability to pay an indemnity, and leaves it to the Powers
to appoint officers who shall investigate the details and make
out a general statement of claims to be dealt with
accordingly.

"With regard to the subsequent trade relations between China


and the foreign Powers, it will be for the latter to make
their own arrangements as to whether former treaties shall be
adhered to in their entirety, modified in details, or
exchanged for new ones. China will take steps to put the
respective proposals into operation accordingly.

"Before drawing up a definitive treaty it will be necessary


for China and the Powers to be agreed as to general
principles. Upon this agreement being arrived at, the
Ministers of the Powers will remove the seals which have been
affixed to the various departments of the Tsung-li-Yamên and
proceed to the Yamên for the despatch of business in matters
relating to international questions exactly as before.

"So soon as a settlement of matters of detail shall have been


agreed upon between China and the various nations concerned in
accordance with the requirements of each particular nation,
and so soon as the question of the payment of an indemnity
shall have been satisfactorily settled, the Powers will
respectively withdraw their troops. The despatch of troops to
China by the Powers was undertaken with the sole object of
protecting the Ministers, and so soon as peace negotiations
between China and the Powers shall have been opened there
shall be a cessation of hostilities.

"The statement that treaties will be made with each of the


Powers in no way prejudices the fact that with regard to the
trade conventions mentioned the conditions vary in accordance
with the respective powers concerned. With regard to the
headings of a definitive treaty, questions of nomenclature and
precedence affecting each of the Powers which may arise in
framing the treaty can be adjusted at personal conferences."

Great Britain and Germany were now acting in close accord,
having, apparently, been drawn together by a common distrust
of the intentions of Russia. On the 16th of October, Lord
Salisbury and Count Hatzfeldt signed the following agreement,
which was made known at once to the other governments
concerned, and its principles assented to by all:

"Her Britannic Majesty's Government and the Imperial German


Government, being desirous to maintain their interests in
China and their rights under existing treaties, have agreed to
observe the following principles in regard to their mutual policy
in China:—

"1. It is a matter of joint and permanent international


interest that the ports on the rivers and littoral of China
should remain free and open to trade and to every other
legitimate form of economic activity for the nationals of all
countries without distinction; and the two Governments agree
on their part to uphold the same for all Chinese territory as
far as they can exercise influence.

"2. The Imperial German Government and her Britannic Majesty's


Government will not, on their part, make use of the present
complication to obtain for themselves any territorial
advantages in Chinese dominions, and will direct their policy
towards maintaining undiminished the territorial condition of
the Chinese Empire.

"3. In case of another Power making use of the complications


in China in order to obtain under any form whatever such
territorial advantages, the two Contracting Parties reserve to
themselves to come to a preliminary understanding as to the
eventual steps to be taken for the protection of their own
interests in China.

"4. The two Governments will communicate this Agreement to the


other Powers interested, and especially to Austria-Hungary,
France, Italy, Japan, Russia, and the United States of
America, and will invite them to accept the principles
recorded in it."

The assent of Russia was no less positive than that of the
other Powers. It was conveyed in the following words: "The
first point of this Agreement, stipulating that the ports
situated on the rivers and littoral of China, wherever the two
Governments exercise their influence, should remain free and
open to commerce, can be favorably entertained by Russia, as
this stipulation does not infringe in any way the 'status quo'
established in China by existing treaties. The second point
corresponds all the more with the intentions of Russia, seeing
that, from the commencement of the present complications, she
was the first to lay down the maintenance of the integrity of
the Chinese Empire as a fundamental principle of her policy in
China. As regards the third point relating to the eventuality
of an infringement of this fundamental principle, the Imperial
Government, while referring to their Circular of the 12th
(25th) August, can only renew the declaration that such an
infringement would oblige Russia to modify her attitude
according to circumstances."

On the 13th of November, while the foreign plenipotentiaries
at Peking were trying to agree in formulating the demands they
should make, the Chinese imperial government issued a decree
for the punishment of officials held responsible for the Boxer
outrages. As given to the Press by the Japanese Legation at
Washington, in translation from the text received there, it
was as follows:

"Orders have been already issued for the punishment of the


officials responsible for opening hostilities upon friendly
Powers and bringing the country into the present critical
condition by neglecting to suppress and even by encouraging
the Boxers. But as Peking and its neighborhood have not yet
been entirely cleared of the Boxers, the innocent people are
still suffering terribly through the devastation of their
fields and the destruction of their houses, a state of affairs
which cannot fail to fill one with the bitterest feelings
against these officials. And if they are not severely
punished, how can the anger of the people be appeased and the
indignation of the foreign Powers allayed?

"Accordingly, Prince Tuan is hereby deprived of his title and


rank, and shall, together with Prince Chwang, who has already
been deprived of his title, be delivered to the Clan Court to
be kept in prison until the restoration of peace, when they
shall be banished to Sheng-King, to be imprisoned for life.
Princes Yi and Tsai Yung, who have both been already deprived
of their titles, are also to be delivered to the Clan Court
for imprisonment, while Prince Tsai Lien, also already
deprived of title and rank, is to be kept confined in his own
house. Duke Tsai Lan shall forfeit his ducal salary, but may
be transferred with the degradation of one rank. Chief Censor
Ying Nien shall be degraded two ranks and transferred. As to
Kang Yi, Minister of the Board of Civil Appointment, upon his
return from the commission on which he had been sent for the
purpose of making inquiries into the Boxer affair he
memorialized the Throne in an audience strongly in their
favor. He should have been severely punished but for his death
from illness, and all penalties are accordingly remitted. Chao
Shuy Yao, Minister of the Board of Punishment, who had been
sent on a mission similar to that of Kang Yi, returned almost
immediately. Though such conduct was a flagrant neglect of his
duties, still he did not make a distorted report to the
Throne, and therefore he shall be deprived of his rank, but
allowed to retain his present office. Finally, Yu Hsien,
ex-Governor of Shan-Se, allowed, while in office, the Boxers
freely to massacre the Christian missionaries and converts.
For this he deserves the severest punishment, and therefore he
is to be banished to the furthermost border of the country, and
there to be kept at hard labor for life.

"We have a full knowledge of the present trouble from the very
beginning, and therefore, though no impeachment has been brought
by Chinese officials at home or abroad against Princes Yi,
Tsai Lien and Tsai Yung, we order them to be punished in the
same manner as those who have been impeached. All who see this
edict will thus perceive our justice and impartiality in
inflicting condign penalties upon these officials."

It was not
until the 20th of December that the joint note of the
plenipotentiaries of the Powers, after having been submitted
in November to the several governments represented, and
amended to remove critical objections, was finally signed and
delivered to the Chinese plenipotentiaries. The following is a
precis of the requirements set forth in it:

"(1) An Imperial Prince is to convey to Berlin the Emperor's


regret for the assassination of Baron von Ketteler, and a
monument is to be erected on the site of the murder, with an
inscription, in Latin, German, and Chinese, expressing the
regret of the Emperor for the murder.

"(2) The most severe punishment fitting their crimes is to be


inflicted on the personages designated in the Imperial decree
of September 21, whose names—not mentioned—are Princes Tuan
and Chuang and two other princes, Duke Lan, Chao Shu-chiao,
Yang-yi, Ying-hien, also others whom the foreign Ministers
shall hereafter designate. Official examinations are to be
suspended for five years in those cities where foreigners have
been assassinated or cruelly treated.

"(3) Honourable reparation is to be made to Japan for the


murder of M. Sugiyama.

"(4) Expiatory monuments are to be erected in all foreign


cemeteries where tombs have been desecrated.

"(5) The importation of arms or 'materiel' and their


manufacture are to be prohibited.

"(6) An equitable indemnity is to be paid to States,


societies, and individuals, also to Chinese who have suffered
injury because of their employment by foreigners. China will
adopt financial measures acceptable to the Powers to guarantee
the payment of the indemnity and the service of the loans.

"(7) Permanent Legation guards are to be maintained, and the


diplomatic quarter is to be fortified.

"(8) The Ta-ku forts and those between Peking and the sea are
to be razed.

"(9) There is to be a military occupation of points necessary


to ensure the safety of the communications between Peking and
the sea.

"(10) Proclamations are to be posted during two years


throughout the Empire threatening death to any person joining
an anti-foreign society and enumerating the punishment
inflicted by China upon the guilty ringleaders of the recent
outrages. An Imperial edict is to be promulgated ordering
Viceroys, Governors, and Provincial officials to be held
responsible for anti-foreign outbreaks or violations of
treaties within their jurisdiction, failure to suppress the
same being visited by the immediate cashiering of the
officials responsible, who shall never hold office again.

"(11) China undertakes to negotiate a revision of the


commercial treaties in order to facilitate commercial
relations.

"(12) The Tsung-li-Yamên is to be reformed, and the Court


ceremonial for the reception of foreign Ministers modified in
the sense indicated by the Powers.

"Until the foregoing conditions are complied with ('se


conformer à') the Powers can hold out no expectation of a
limit of time for the removal of the foreign troops now
occupying Peking and the provinces."

CHINA: A. D. 1900 (November).
Russo-Chinese agreement relating to Manchuria.

See (in this volume)
MANCHURIA.

CHINA: A. D. 1900 (December).
Russo-Chinese agreement concerning the Manchurian
province of Fêng-tien.

See (in this volume)
MANCHURIA: A. D. 1900.

CHINA: A. D. 1900-1901 (November-February).
Seizure of grounds at Peking for a large Legation Quarter.
Extensive plans of fortification.

In February, 1901, the following from a despatch written in
the previous November by Mr. Conger, the American Minister at
Peking, was given to the Press by the State Department at
Washington: "I have the honor to report that in view of the
probability of keeping large legation grounds in the future,
and because of the general desire on the part of all the
European representatives to have extensive legations, all of
the Ministers are taking possession of considerable areas
adjoining their legations—property belonging either to the
Chinese Government or to private citizens, and having been
abandoned by the owners during the siege—with the intention to
claim them as conquest, or possibly credit something for them
on their account for indemnity. I have as yet not taken formal
possession of any ground for this purpose, nor shall I without
instructions, but I shall not for the present permit any of the
owners or other persons to reoccupy any of the property
between this legation and the canal to the east of it. While
this area will be very small in comparison with the other
legations, yet it will be sufficient to make both the legation
personnel and the guard very comfortable, and will better
comport with our traditional simplicity vis-à-vis the usual
magnificence of other representatives.

"It is proposed to designate the boundaries of a legation


quarter, which shall include all the legations, and then
demand the right to put that in a state of defence when
necessary, and to prohibit the residence of Chinese there,
except by permission of the Ministers. If, therefore, these
ideas as to guards, defence, etc., are to be carried out, a
larger legation will be an absolute necessity. In fact, it is
impossible now to accommodate the legation and staff in our
present quarters without most inconvenient crowding.

"There are no public properties inside the legation quarter


which we could take as a legation. All the proposed property
to be added, as above mentioned, to our legation, is private
ground, except a very small temple in the southeast corner,
and I presume, under our policy, if taken, will be paid for
either to the Chinese owners or credited upon account against
the Chinese Government for indemnity, although I suspect most
of the other Governments will take theirs as a species of
conquest. The plot of ground adjoining and lying to the east
of the legation to which I have made reference is about the
size of the premises now occupied by us."

Before its adjournment on the 4th of March, 1901, the Congress
of the United States made an appropriation for the purchase of
grounds for its Legation at Peking, and instructions were sent
to make the purchase.

By telegram from Peking on the 14th of February it was
announced that a formidable plan of fortification for this
Legation Quarter had been drawn up by the Military Council of
the Powers at Peking, and that work upon it was to begin at
once. The correspondent of the "London Times" described the
plan and wrote satirically of it, as follows: "From supreme
contempt for the weakness of China armed we have swayed to
exaggerated fear of the strength of China disarmed. The
international military experts have devised a scheme for
putting the Legation quarter in a state of defence which is
equivalent to the construction of an International fortress
alongside the Imperial Palace. The plan requires the breaching
of the city wall at the Water-gate, the levelling of the Ha-ta
Mên and Chien Mên towers, the demolition of the ramparts
giving access to them, the sweeping clear of a space 150 to
300 yards wide round the entire Legation area, and the
construction of walls, glacis, moats, barbed wire defences,
with siege guns, Maxims, and barracks capable of holding 2,000
troops, with military stores and equipment sufficient to
withstand a siege of three months. All public buildings,
boards, and civil offices between the Legations and the
Imperial walls are to be levelled, while 11,000 foreign troops
are to hold the communications between Peking and the sea, so
that no Chinese can travel to Peking from the sea without the
knowledge of the foreign military authorities.

"The erection of the defences is to begin at once, before the


return of the Court to Peking. They are no doubt devised to
encourage the Court to return to Peking, it being apparently
the belief of the foreign Ministers that an Imperial Court
governing an independent empire are eager to place themselves
under the tutelage of foreign soldiers and within the reach of
foreign Maxims.

"Within the large new Legation area all the private property
of Chinese owners who years before sought the advantages of
vicinity to the Legations has been seized by the foreign
Legations. France and Germany, with a view to subsequent
commercial transactions, have annexed many acres of valuable
private property for which no compensation is contemplated,
while the Italian Legation, which boasts a staff of two
persons, carrying out the scheme of appropriation to a logical
absurdity, has, in addition to other property, grabbed the
Imperial Maritime Customs gardens and buildings occupied for
so many years by Sir Robert Hart and his staff."

CHINA: A. D. 1901 (January-February).
Famine in Shensi.

A Press telegram from Peking, late in January, announced a
fearful famine prevailing in the province of Shensi, where
thousands of natives were dying. The Chinese government was
distributing rice, and there was reported to be discrimination
against native Christians in the distribution. Mr. Conger, Sir
E. Satow, and M. Pichon protested to Prince Ching and Li
Hung-chang against such discrimination. A Court edict was
therefore issued on the 26th instant ordering all relief
officials and Chinese soldiers to treat Christians in exactly
the same way as all other Chinese throughout the Empire, under
penalty of decapitation. Another despatch, early in February,
stated: "Trustworthy reports received here from Singan-fu [the
temporary residence of the fugitive Chinese court] all agree
that the famine in the provinces of Shen-si and Shan-si is one
of the worst in the history of China. It is estimated that
two-thirds of the people are without sufficient food or the
means of obtaining it. They are also suffering from the bitter
cold. As there is little fuel in either province the woodwork
of the houses is being used to supply the want. Oxen, horses,
and dogs have been practically all sacrificed to allay hunger.
Three years of crop failures in both provinces and more or less
of famine in previous seasons had brought the people to
poverty when winter began. This year their condition has
rapidly grown worse. Prince Ching stated to Mr. Conger, the
United States Minister, that the people were reduced to eating
human flesh and to selling their women and children.
Infanticide is alarmingly common."

CHINA: A. D. 1901 (January-February).
Submission to the demands of the Powers
by the Imperial Government.
Punishments inflicted and promised.
A new Reform Edict.

With no great delay, the Chinese plenipotentiaries at Peking
were authorized by the Emperor and Empress to agree to the
demands of the Powers, which they did by formally signing the
Joint Note. Prince Ching gave his signature on the 12th of
January, 1901, and Li Hung-chang, who was seriously ill,
signed on the following day. Discussion of the punishments to
be inflicted on guilty officials was then opened, and went on
for some time. On the 5th of February, the foreign Ministers
submitted the names of twelve leading officials, against whom
formal indictments were framed, and who were considered to be
deserving of death. Three of them, however (Kang Yi, Hsu Tung,
and Li Ping Heng), were found to be already deceased. The
remaining nine were the following: Prince Chuang,
commander-in-chief of the Boxers; Prince Tuan, who was held to
be the principal instigator of the attack on foreigners; Duke
Lan, the Vice-President of Police, who admitted the Boxers to
the city; Yu Hsien, who was the governor of Shan-Si Province,
promoter of the Boxer movement there, and director of the
massacres in that province; General Tung Fu Siang, who led the
attacks on the Legations; Ying Nien, Chao Hsu Kiao, Hsu Cheng
Yu, and Chih Siu, who were variously prominent in the
murderous work. In the cases of Prince Tuan and Duke Lan, who
were related to the Imperial family, and in the case of
General Tung Fu Siang, whose military command gave him power
to be troublesome, the Chinese court pleaded such difficulties
in the way of executing a decree of death that the Ministers
at Peking were persuaded to be satisfied with sentences of
exile, or degradation in rank, or both. On the 21st of
February the Ministers received notice that an imperial edict
had been issued, condemning General Tung Fu Siang to be
degraded and deprived of his rank; Prince Tuan and Duke Lan to
be disgraced and exiled; Prince Chuang, Ying Nien and Chao Hsu
Kiao to commit suicide; Hsu Cheng Yu, Yu Hsien and Chih Siu to
be beheaded. Hsu Cheng Yu and Chih Siu were then prisoners in
the hands of the foreign military authorities at Peking, and
the sentence was executed upon them there, on the 26th of
February, in the presence of Japanese, French, German and
American troops. A despatch from Peking reporting the
execution stated that, while it was being carried out, "the
ministers held a meeting and determined on the part of the
majority to draw a curtain over further demands for blood.
United States Special Commissioner Rockhill sided strongly
with those favoring humane methods, who are Sir Ernest Satow
and MM. Komura, De Cologan and De Giers, respectively British,
Japanese, Spanish and Russian ministers. Others believe that
China has not been sufficiently punished, and that men should
be executed in every city, town and village where foreigners
were injured."

While the subject of punishments was pending, and with a view,
it was said, of quickening the action of the Chinese
government, Count von Waldersee, the German Field-Marshal
commanding the allied forces in China, ordered preparations to
be made for an extensive military expedition into the
interior. The government of the United States gave prompt
directions that its forces at Peking should not take part in
this movement, and the remonstrances of other Powers more
pacifically inclined than the Germans caused the project to be
given up.

Meantime, three Imperial edicts of importance, if faithfully
carried out, had been issued. One, on the 5th of February,
commanded new undertakings of reform, accounting for the
abandonment of the reform movement of 1898 by declaring that
it was seditionary and would have resulted in anarchy, and
that it was entered upon when the Emperor was in bad health;
for all which reasons he had requested the Empress Dowager to
resume the reins of government. Now, it was declared, since
peace negotiations were in progress, the government should be
formed on a basis for future prosperity. Established good
methods of foreign countries should be introduced to supply
China's deficiencies. "China's greatest difficulty," said the
edict, "is her old customs, which have resulted in the
insincere dispatch of business and the promoting of private
gain. Up to the present time those who have followed the
Western methods have had only superficial knowledge, knowing
only a little of foreign languages and foreign inventions,
without knowing the real basis of the strength of foreign
nations. Such methods are insufficient for real reform."

In order to obtain a true basis, the Emperor commanded a
consultation between the ministers of the privy council, the
six boards, nine officers, the Chinese ministers to foreign
countries and all the viceroys and governors. Those were
instructed to recommend reforms in the seven branches of
government, namely, the central government, ceremonies,
taxation, schools, civil-service examinations, military
affairs and public economies. They were also to recommend what
part of the old system can be used and what part needs changing.
Two months were given them in which to prepare their report.

On the following day, two edicts, in fulfilment of demands
made in the Joint Note of the Powers, were promulgated. The
first provided, in accordance with article 3 of the Joint
Note, for the suspension of official examinations for five
years in places where foreigners are killed. The second edict
forbade anti-foreign societies, recited the punishment of
guilty parties and declared that local officials will be held
responsible for the maintenance of order. If trouble occurs
the officials would be removed without delay and never again
allowed to hold office.

CHINA: A. D. 1901 (March).
The murdered Christian missionaries and native converts.
Varying statements and estimates of their number.

To the time of this writing (March, 1901), no complete
enumeration of the foreign Christian missionaries and members
of missionary families who were killed during the Boxer
outbreak of the past year has been made. Varying estimates
have appeared, from time to time, and it is possible that one
of the latest among these, communicated from Shanghai on the
1st of March, may approach to accuracy. It was published in
the "North China Daily News," and said to be founded on the
missionary records, according to which, said the "News," "a
total of 134 adults and 52 children were killed or died of
injuries in the Boxer rising of 1899 and 1900."

On the 13th of March, the "Lokal Anzeiger," of Berlin,
published a statistical report from its Peking correspondent
of "foreign Christians killed during the troubles, exclusive
of the Peking siege," which enumerated 118 Englishmen, 79
Americans, Swedes and Norwegians, 26 Frenchmen, 11 Belgians,
10 Italians and Swiss, and 1 German. The total of these
figures is largely in excess of those given by the "North
China Daily News," but they cover, not missionaries alone, but
all foreign Christians. It is impossible, however, not to
doubt the accuracy of both these accounts. Of native
Christians, the German writer estimated that 30,000 had
perished. In September, 1900, the United States Consul-General
at Shanghai, Mr. Goodnow, "after making inquiries from every
possible source," placed the number of British and American
missionaries who had probably been killed at 93, taking no
account of a larger number in Chih-li and Shan-si whose fate
was entirely unknown. Of those whose deaths he believed to be
absolutely proved at that time, 34 were British, including 9
men, 15 women and 10 children, and 22 were American, 8 of
these being men, 8 women and 6 children.

In December, 1900, a private letter from the "Association for
the Propagation of the Faith, St. Mary's Seminary," Baltimore,
Maryland, stated that up to the end of September 48 Catholic
missionaries were known to have been murdered. A pastoral
letter issued in December by Cardinal Vaughan, in London,
without stating the numbers killed, declared that all work of
the Catholic church, throughout the most of China, where 942
European and 445 native priests had been engaged, was
practically swept away.

A private letter, written early in January, 1901, by the
Reverend Dr. Judson Smith, one of the corresponding
secretaries of the American Board of Commissioners for Foreign
Missions, contains the following statement: "The American
Board has lost in the recent disturbances in China 13
missionaries, 6 men and 7 women, and 5 children belonging to
the families who perished. The number of native converts
connected with the mission churches of the American Board who
have suffered death during these troubles cannot be stated
with accuracy. It undoubtedly exceeds 1,000; it may reach a
much larger figure; but some facts that have come to light of
late imply that more of those who were supposed to be lost
have been in hiding than was known. If we should reckon along
with native converts members of their families who have
suffered death, the number would probably be doubled."

There seems to be absolutely no basis of real information for
any estimate that has been made of the extent of massacre
among the native Christian converts. Thousands perished,
without doubt, but how many thousands is yet to be learned. As
intimated by Dr. Smith, larger numbers than have been supposed
may have escaped, and it will probably be long before the true
facts are gathered from all parts of the country.

In any view, the massacre of missionaries and their families
was hideous enough; but fictions of horror were shamefully
added, it seems, in some of the stories which came from the
