Table of Contents
Cover
Title Page
Copyright
Preface
Part 1: Deep Learning and Its Models
1 CNN: A Review of Models, Application of IVD Segmentation
1.1 Introduction
1.2 Various CNN Models
1.3 Application of CNN to IVD Detection
1.4 Comparison With State-of-the-Art Segmentation
Approaches for Spine T2W Images
1.5 Conclusion
References
2 Location-Aware Keyword Query Suggestion Techniques
With Artificial Intelligence Perspective
2.1 Introduction
2.2 Related Work
2.3 Artificial Intelligence Perspective
2.4 Architecture
2.5 Conclusion
References
3 Identification of a Suitable Transfer Learning Architecture
for Classification: A Case Study with Liver Tumors
3.1 Introduction
3.2 Related Works
3.3 Convolutional Neural Networks
3.4 Transfer Learning
3.5 System Model
3.6 Results and Discussions
3.7 Conclusion
References
4 Optimization and Deep Learning-Based Content Retrieval,
Indexing, and Metric Learning Approach for Medical Images
4.1 Introduction
4.2 Related Works
4.3 Proposed Method
4.4 Results and Discussion
4.5 Conclusion
References
Part 2: Applications of Deep Learning
5 Deep Learning for Clinical and Health Informatics
5.1 Introduction
5.2 Related Work
5.3 Motivation
5.4 Scope of the Work in Past, Present, and Future
5.5 Deep Learning Tools, Methods Available for Clinical,
and Health Informatics
5.6 Deep Learning: Not-So-Near Future in Biomedical
Imaging
5.7 Challenges Faced Toward Deep Learning Using in
Biomedical Imaging
5.8 Open Research Issues and Future Research Directions
in Biomedical Imaging (Healthcare Informatics)
5.9 Conclusion
References
6 Biomedical Image Segmentation by Deep Learning Methods
6.1 Introduction
6.2 Overview of Deep Learning Algorithms
6.3 Other Deep Learning Architecture
6.4 Biomedical Image Segmentation
6.5 Conclusion
References
7 Multi-Lingual Handwritten Character Recognition Using
Deep Learning
7.1 Introduction
7.2 Related Works
7.3 Materials and Methods
7.4 Experiments and Results
7.5 Conclusion
References
8 Disease Detection Platform Using Image Processing Through
OpenCV
8.1 Introduction
8.2 Problem Statement
8.3 Conclusion
8.4 Summary
References
9 Computer-Aided Diagnosis of Liver Fibrosis in Hepatitis
Patients Using Convolutional Neural Network
9.1 Introduction
9.2 Overview of System
9.3 Methodology
9.4 Performance and Analysis
9.5 Experimental Results
9.6 Conclusion and Future Scope
References
Part 3: Future Deep Learning Models
10 Lung Cancer Prediction in Deep Learning Perspective
10.1 Introduction
10.2 Machine Learning and Its Application
10.3 Related Work
10.4 Why Deep Learning on Top of Machine Learning?
10.5 How is Deep Learning Used for Prediction of Lung Cancer?
10.6 Conclusion
References
11 Lesion Detection and Classification for Breast Cancer
Diagnosis Based on Deep CNNs from Digital Mammographic
Data
11.1 Introduction
11.2 Background
11.3 Methods
11.4 Application of Deep CNN for Mammography
11.5 System Model and Results
11.6 Research Challenges and Discussion on Future
Directions
11.7 Conclusion
References
12 Health Prediction Analytics Using Deep Learning Methods
and Applications
12.1 Introduction
12.2 Background
12.3 Predictive Analytics
12.4 Deep Learning Predictive Analysis Applications
12.5 Discussion
12.6 Conclusion
References
13 Ambient-Assisted Living of Disabled Elderly in an
Intelligent Home Using Behavior Prediction—A Reliable Deep
Learning Prediction System
13.1 Introduction
13.2 Activities of Daily Living and Behavior Analysis
13.3 Intelligent Home Architecture
13.4 Methodology
13.5 Senior Analytics Care Model
13.6 Results and Discussions
13.7 Conclusion
Nomenclature
References
14 Early Diagnosis Tool for Alzheimer’s Disease Using 3D
Slicer
14.1 Introduction
14.2 Related Work
14.3 Existing System
14.4 Proposed System
14.5 Results and Discussion
14.6 Conclusion
References
Part 4: Deep Learning - Importance and Challenges for Other
Sectors
15 Deep Learning for Medical Healthcare: Issues, Challenges,
and Opportunities
15.1 Introduction
15.2 Related Work
15.3 Development of Personalized Medicine Using Deep
Learning: A New Revolution in Healthcare Industry
15.4 Deep Learning Applications in Precision Medicine
15.5 Deep Learning for Medical Imaging
15.6 Drug Discovery and Development: A Promise
Fulfilled by Deep Learning Technology
15.7 Application Areas of Deep Learning in Healthcare
15.8 Privacy Issues Arising With the Usage of Deep
Learning in Healthcare
15.9 Challenges and Opportunities in Healthcare Using
Deep Learning
15.10 Conclusion and Future Scope
References
16 A Perspective Analysis of Regularization and Optimization
Techniques in Machine Learning
16.1 Introduction
16.2 Regularization in Machine Learning
16.3 Convexity Principles
16.4 Conclusion and Discussion
References
17 Deep Learning-Based Prediction Techniques for Medical
Care: Opportunities and Challenges
17.1 Introduction
17.2 Machine Learning and Deep Learning Framework
17.3 Challenges and Opportunities
17.4 Clinical Databases—Electronic Health Records
17.5 Data Analytics Models—Classifiers and Clusters
17.6 Deep Learning Approaches and Association
Predictions
17.7 Conclusion
17.8 Applications
References
18 Machine Learning and Deep Learning: Open Issues and
Future Research Directions for the Next 10 Years
18.1 Introduction
18.2 Evolution of Machine Learning and Deep Learning
18.3 The Forefront of Machine Learning Technology
18.4 The Challenges Facing Machine Learning and Deep
Learning
18.5 Possibilities With Machine Learning and Deep
Learning
18.6 Potential Limitations of Machine Learning and Deep
Learning
18.7 Conclusion
Acknowledgement
Contribution/Disclosure
References
Index

List of Illustrations
Chapter 1
Figure 1.1 Architecture of LeNet-5.
Figure 1.2 Architecture of AlexNet.
Figure 1.3 Architecture of ZFNet.
Figure 1.4 Architecture of VGG-16.
Figure 1.5 Inception module.
Figure 1.6 Architecture of GoogleNet.
Figure 1.7 (a) A residual block.
Figure 1.8 Architecture of ResNeXt.
Figure 1.9 Architecture of SE-ResNet.
Figure 1.10 Architecture of DenseNet.
Figure 1.11 Architecture of MobileNets.
Chapter 2
Figure 2.1 General architecture of a search engine.
Figure 2.2 The increased mobile users.
Figure 2.3 AI-powered location-based system.
Figure 2.4 Architecture diagram for querying.
Chapter 3
Figure 3.1 Phases of CECT images (1: normal liver; 2: tumor
within liver; 3: sto...
Figure 3.2 Architecture of convolutional neural network.
Figure 3.3 AlexNet architecture.
Figure 3.4 GoogLeNet architecture.
Figure 3.5 Residual learning—building block.
Figure 3.6 Architecture of ResNet-18.
Figure 3.7 System model for case study on liver tumor
diagnosis.
Figure 3.8 Output of bidirectional region growing
segmentation algorithm: (a) in...
Figure 3.9 HA Phase Liver CT images: (a) normal liver; (b)
HCC; (c) hemangioma; ...
Figure 3.10 Training progress for AlexNet.
Figure 3.11 Training progress for GoogLeNet.
Figure 3.12 Training progress for ResNet-18.
Figure 3.13 Training progress for ResNet-50.
Chapter 4
Figure 4.1 Proposed system for image retrieval.
Figure 4.2 Schematic of the deep convolutional neural
networks.
Figure 4.3 Proposed feature extraction system.
Figure 4.4 Proposed model for the localization of the
abnormalities.
Figure 4.5 Graph for the retrieval performance of the metric
learning for VGG19.
Figure 4.6 PR values for state of art ConvNet model for CT
images.
Figure 4.7 PR values for state of art CNN model for CT images.
Figure 4.8 Proposed system—PR values for the CT images.
Figure 4.9 PR values for proposed content-based image
retrieval.
Figure 4.10 Graph for loss function of proposed deep
regression networks for tra...
Figure 4.11 Graph for loss function of proposed deep
regression networks for val...
Chapter 5
Figure 5.1 Different informatics in healthcare [28].
Chapter 6
Figure 6.1 CT image reconstruction (past, present, and future)
[3].
Figure 6.2 (a) Classic machine learning algorithm, (b) Deep
learning algorithm.
Figure 6.3 Traditional neural network.
Figure 6.4 Convolutional Neural Network.
Figure 6.5 Psoriasis images [2].
Figure 6.6 Restricted Boltzmann Machine.
Figure 6.7 Autoencoder architecture with vector and image
inputs [1].
Figure 6.8 Image of chest x-ray [60].
Figure 6.9 Regular thoracic disease identified in chest x-rays
[23].
Figure 6.10 MRI of human brain [4].
Chapter 7
Figure 7.1 Architecture of the proposed approach.
Figure 7.2 Sample Math dataset (including English
characters).
Figure 7.3 Sample Bangla dataset (including Bangla numeric).
Figure 7.4 Sample Devanagari dataset (including Hindi
numeric).
Figure 7.5 Dataset distribution for English dataset.
Figure 7.6 Dataset distribution for Hindi dataset.
Figure 7.7 Dataset distribution for Bangla dataset.
Figure 7.8 Dataset distribution for Math Symbol dataset.
Figure 7.9 Dataset distribution.
Figure 7.10 Precision-recall curve on English dataset.
Figure 7.11 ROC curve on English dataset.
Figure 7.12 Precision-recall curve on Hindi dataset.
Figure 7.13 ROC curve on Hindi dataset.
Figure 7.14 Precision-recall curve on Bangla dataset.
Figure 7.15 ROC curve on Bangla dataset.
Figure 7.16 Precision-recall curve on Math Symbol dataset.
Figure 7.17 ROC curve on Math symbol dataset.
Figure 7.18 Precision-recall curve of the proposed model.
Figure 7.19 ROC curve of the proposed model.
Chapter 8
Figure 8.1 Eye image dissection [34].
Figure 8.2 Cataract algorithm [10].
Figure 8.3 Pre-processing algorithm [48].
Figure 8.4 Pre-processing analysis [39].
Figure 8.5 Morphologically opened [39].
Figure 8.6 Finding circles [40].
Figure 8.7 Iris contour separation [40].
Figure 8.8 Image inversion [41].
Figure 8.9 Iris detection [41].
Figure 8.10 Cataract detection [41].
Figure 8.11 Healthy eye vs. retinoblastoma [33].
Figure 8.12 Unilateral retinoblastoma [18].
Figure 8.13 Bilateral retinoblastoma [19].
Figure 8.14 Classification of stages of skin cancer [20].
Figure 8.15 Eye cancer detection algorithm.
Figure 8.16 Sample test cases.
Figure 8.17 Actual working of the eye cancer detection
algorithm.
Figure 8.18 Melanoma example [27].
Figure 8.19 Melanoma detection algorithm.
Figure 8.20 Asymmetry analysis.
Figure 8.21 Border analysis.
Figure 8.22 Color analysis.
Figure 8.23 Diameter analysis.
Figure 8.24 Completed detailed algorithm.
Chapter 9
Figure 9.1 Basic overview of a proposed computer-aided
system.
Figure 9.2 Block diagram of the proposed system for finding
out liver fibrosis.
Figure 9.3 Block diagram representing different pre-
processing stages in liver f...
Figure 9.4 Flow chart showing student’s t test.
Figure 9.5 Diagram showing SegNet architecture for
convolutional encoder and dec...
Figure 9.6 Basic block diagram of VGG-16 architecture.
Figure 9.7 Flow chart showing SegNet working process for
classifying liver fibro...
Figure 9.8 Overall process of the CNN of the system.
Figure 9.9 The stages in identifying liver fibrosis by using
Conventional Neural...
Figure 9.10 Multi-layer neural network architecture for a CAD
system for diagnos...
Figure 9.11 Graphical representation of Support Vector
Machine.
Figure 9.12 Experimental analysis graph for different
classifier in terms of acc...
Chapter 10
Figure 10.1 Block diagram of machine learning.
Figure 10.2 Machine learning algorithm.
Figure 10.3 Structure of deep learning.
Figure 10.4 Architecture of DNN.
Figure 10.5 Architecture of CNN.
Figure 10.6 System architecture.
Figure 10.7 Image before histogram equalization.
Figure 10.8 Image after histogram equalization.
Figure 10.9 Edge detection.
Figure 10.10 Edge segmented image.
Figure 10.11 Total cases.
Figure 10.12 Result comparison.
Chapter 11
Figure 11.1 Breast cancer incidence rates worldwide (source:
International Agenc...
Figure 11.2 Images from MIAS database showing normal,
benign, malignant mammogra...
Figure 11.3 Image depicting noise in a mammogram.
Figure 11.4 Architecture of CNN.
Figure 11.5 A complete representation of all the operation
that take place at va...
Figure 11.6 An image depicting Pouter, Plesion, and Pbreast in
a mammogram.
Figure 11.7 The figure depicts two images: (a) mammogram
with a malignant mass a...
Figure 11.8 A figure depicting the various components of a
breast as identified ...
Figure 11.9 An illustration of how a mammogram image
having tumor is segmented t...
Figure 11.10 A schematic representation of classification
procedure of CNN.
Figure 11.11 A schematic representation of classification
procedure of CNN durin...
Figure 11.12 Proposed system model.
Figure 11.13 Flowchart for MIAS database and unannotated
labeled images.
Figure 11.14 Image distribution for training model.
Figure 11.15 The graph shows the loss for the trained model
on train and test da...
Figure 11.16 The graph shows the accuracy of the trained
model for both test and...
Figure 11.17 Depiction of the confusion matrix for the trained
CNN model.
Figure 11.18 Receiver operating characteristics of the trained
model.
Figure 11.19 The image shows the summary of the CNN
model.
Figure 11.20 Performance parameters of the trained model.
Figure 11.21 Prediction of one of the image collected from
diagnostic center.
Chapter 12
Figure 12.1 Deep learning [14]. (a) A simple, multilayer deep
neural network tha...
Figure 12.2 Flowchart of the model [25]. The orange icon
indicates the dataset, ...
Figure 12.3 Evaluation result [25].
Figure 12.4 Deep learning techniques evaluation results [25].
Figure 12.5 Deep transfer learning–based screening system
[38].
Figure 12.6 Classification result.
Figure 12.7 Regression result [45].
Figure 12.8 AE model of deep learning [47].
Figure 12.9 DBN for induction motor fault diagnosis [68].
Figure 12.10 CNN model for health monitoring [80].
Figure 12.11 RNN model for health monitoring [87].
Figure 12.12 Deep learning models usage.
Chapter 13
Figure 13.1 Intelligent home layout model.
Figure 13.2 Deep learning model in predicting behavior
analysis.
Figure 13.3 Lifestyle-oriented context aware model.
Figure 13.4 Components for the identification, simulation, and
detection of acti...
Figure 13.5 Prediction stages.
Figure 13.6 Analytics of event.
Figure 13.7 Prediction of activity duration.
Chapter 14
Figure 14.1 Comparison of normal and Alzheimer brain.
Figure 14.2 Proposed AD prediction system.
Figure 14.3 KNN classification.
Figure 14.4 SVM classification.
Figure 14.5 Load data in 3D slicer.
Figure 14.6 3D slicer visualization.
Figure 14.7 Normal patient MRI.
Figure 14.8 Alzheimer patient MRI.
Figure 14.9 Comparison of hippocampus region.
Figure 14.10 Accuracy of algorithms with baseline records.
Figure 14.11 Accuracy of algorithms with current records.
Figure 14.12 Comparison of without and with dice coefficient.
Chapter 15
Figure 15.1 U-Net architecture [19].
Figure 15.2 Architecture of the 3D-DCSRN model [29].
Figure 15.3 SMILES code for Cyclohexane and Acetaminophen
[32].
Figure 15.4 Medical chatbot architecture [36].
Chapter 16
Figure 16.1 A classical perceptron.
Figure 16.2 Forward and backward paths on an ANN
architecture.
Figure 16.3 A DNN architecture.
Figure 16.4 A DNN architecture for digit classification.
Figure 16.5 Underfit and overfit.
Figure 16.6 Functional mapping.
Figure 16.7 A generalized Tikhonov functional.
Figure 16.8 (a) With hidden layers (b) Dropping h2 and h5.
Figure 16.9 Image cropping as one of the features of data
augmentation.
Figure 16.10 Early stopping criteria based on errors.
Figure 16.11 (a) Convex, (b) Non-convex.
Figure 16.12 (a) Affine (b) Convex function.
Figure 16.13 Workflow and an optimizer.
Figure 16.14 (a) Error (cost) function (b) Elliptical: Horizontal
cross section.
Figure 16.15 Contour plot for a quadratic cost function with
elliptical contours...
Figure 16.16 Gradients when steps are varying.
Figure 16.17 Local minima. (When the gradient ∇ of the
partial derivatives is po...
Figure 16.18 Contour plot showing basins of attraction.
Figure 16.19 (a) Saddle point S. (b) Saddle point over a two-
dimensional error s...
Figure 16.20 Local information encoded by the gradient
usually does not support ...
Figure 16.21 Direction of gradient change.
Figure 16.22 Rolling ball and its trajectory.
Chapter 17
Figure 17.1 Artificial Neural Networks vs. Architecture of
Deep Learning Model [...
Figure 17.2 Machine learning and deep learning techniques
[4, 5].
Figure 17.3 Model of reinforcement learning
(https://www.kdnuggets.com).
Figure 17.4 Data analytical model [5].
Figure 17.5 Support Vector Machine—classification approach
[1].
Figure 17.6 Expected output of K-means clustering [1].
Figure 17.7 Output of mean shift clustering [2].
Figure 17.8 Genetic Signature–based Hierarchical Random
Forest Cluster (G-HR Clu...
Figure 17.9 Artificial Neural Networks vs. Deep Learning
Neural Networks.
Figure 17.10 Architecture of Convolution Neural Network.
Figure 17.11 Architecture of the Human Diseases Pattern
Prediction Technique (EC...
Figure 17.12 Comparative analysis: processing time vs.
classifiers.
Figure 17.13 Comparative analysis: memory usage vs.
classifiers.
Figure 17.14 Comparative analysis: classification accuracy vs.
classifiers.
Figure 17.15 Comparative analysis: sensitivity vs. classifiers.
Figure 17.16 Comparative analysis: specificity vs. classifiers.
Figure 17.17 Comparative analysis: FScore vs. classifiers.
Chapter 18
Figure 18.1 Deep Neural Network (DNN).
Figure 18.2 The evolution of machine learning techniques
(year-wise).

List of Tables
Chapter 1
Table 1.1 Various parameters of the layers of LeNet.
Table 1.2 Every column indicates which feature map in S2 are
combined by the uni...
Table 1.3 AlexNet layer details.
Table 1.4 Various parameters of ZFNet.
Table 1.5 Various parameters of VGG-16.
Table 1.6 Various parameters of GoogleNet.
Table 1.7 Various parameters of ResNet.
Table 1.8 Comparison of ResNet-50 and ResNext-50 (32 × 4d).
Table 1.9 Comparison of ResNet-50 and ResNext-50 and SE-
ResNeXt-50 (32 × 4d).
Table 1.10 Comparison of DenseNet.
Table 1.11 Various parameters of MobileNets.
Table 1.12 State-of-art of spine segmentation approaches.
Chapter 2
Table 2.1 History of search engines.
Table 2.2 Three types of user refinement of queries.
Table 2.3 Different approaches for the query suggestion
techniques.
Chapter 3
Table 3.1 Types of liver lesions.
Table 3.2 Dataset count.
Table 3.3 Hyperparameter settings for training.
Table 3.4 Confusion matrix for AlexNet.
Table 3.5 Confusion matrix for GoogLeNet.
Table 3.6 Confusion matrix for ResNet-18.
Table 3.7 Confusion matrix for ResNet-50.
Table 3.8 Comparison of classification accuracies.
Chapter 4
Table 4.1 Retrieval performance of metric learning for VGG19.
Table 4.2 Performance of retrieval techniques of the trained
VGG19 among fine-tu...
Table 4.3 PR values of various models—a comparison for CT
image retrieval.
Table 4.4 Recall vs. precision for proposed content-based
image retrieval.
Table 4.5 Loss function of proposed deep regression networks
for training datase...
Table 4.6 Loss function of proposed deep regression networks
for validation data...
Table 4.7 Land mark details (identification rates vs. distance
error) for the pr...
Table 4.8 Accuracy value of the proposed system.
Table 4.9 Accuracy of the retrieval methods compared with
the metric learning–ba...
Chapter 6
Table 6.1 Definition of the abbreviations.
Chapter 7
Table 7.1 Performance of proposed models on English dataset.
Table 7.2 Performance of proposed model on Bangla dataset.
Table 7.3 Performance of proposed model on Math Symbol
dataset.
Chapter 8
Table 8.1 ABCD factor for TDS value.
Table 8.2 Classify mole according to TDS value.
Chapter 9
Table 9.1 The confusion matrix for different classifier.
Table 9.2 Performance analysis of different classifiers:
Random Forest, SVM, Naï...
Chapter 10
Table 10.1 Result analysis.
Chapter 11
Table 11.1 Comparison of different techniques and tumor.
Chapter 13
Table 13.1 Cognitive functions related with routine activities.
Table 13.2 Situation and design features.
Table 13.3 Accuracy of prediction.
Chapter 14
Table 14.1 Accuracy comparison and mean of algorithms with
baseline records.
Table 14.2 Accuracy comparison and mean of algorithms with
current records.
Chapter 15
Table 15.1 Variances of Convolutional Neural Network (CNN).
Table 15.2 Various issues challenges faced by researchers for
using deep learnin...
Chapter 17
Table 17.1 Comparative analysis: classification accuracy for 10
datasets—analysi...
Chapter 18
Table 18.1 Comparison among data mining, machine learning,
and deep learning.
Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106
Publishers at Scrivener
Martin Scrivener (martin@scrivenerpublishing.com)
Phillip Carmical (pcarmical@scrivenerpublishing.com)
Computational Analysis and Deep Learning for Medical Care

Principles, Methods, and Applications

Edited by
Amit Kumar Tyagi
This edition first published 2021 by John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite
541J, Beverly, MA 01915, USA
© 2021 Scrivener Publishing LLC
For more information about Scrivener publications please visit
www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, except as permitted by law.
Advice on how to obtain permission to reuse material from this title is available at
http://www.wiley.com/go/permissions.
Wiley Global Headquarters
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information
about Wiley products visit us at www.wiley.com.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work,
they make no representations or warranties with respect to the accuracy or
completeness of the contents of this work and specifically disclaim all warranties,
including without limitation any implied warranties of merchantability or fitness for
a particular purpose. No warranty may be created or extended by sales
representatives, written sales materials, or promotional statements for this work.
The fact that an organization, website, or product is referred to in this work as a
citation and/or potential source of further information does not mean that the
publisher and authors endorse the information or services the organization, website,
or product may provide or recommendations it may make. This work is sold with
the understanding that the publisher is not engaged in rendering professional
services. The advice and strategies contained herein may not be suitable for your
situation. You should consult with a specialist where appropriate. Neither the
publisher nor authors shall be liable for any loss of profit or any other commercial
damages, including but not limited to special, incidental, consequential, or other
damages. Further, readers should be aware that websites listed in this work may have
changed or disappeared between when this work was written and when it is read.
Library of Congress Cataloging-in-Publication Data
ISBN 9781119785729
Cover image: Pixabay.Com
Cover design by Russell Richardson
Set in size of 11pt and Minion Pro by Manila Typesetting Company, Makati,
Philippines
Printed in the USA
10 9 8 7 6 5 4 3 2 1
Preface
Due to recent technological developments and the integration of
millions of Internet of Things (IoT)-connected devices, a large volume
of data is being generated every day. This data, known as big data, is
summed up by the 7 V’s—Volume, Velocity, Variety, Variability, Veracity,
Visualization, and Value. Efficient tools, models and algorithms are
required to analyze this data in order to advance the development of
applications in several sectors, including e-healthcare (i.e., for disease
prediction) and satellites (i.e., for weather prediction) among others. In
the case of data related to biomedical imaging, this analyzed data is
very useful to doctors and their patients in making predictive and
effective decisions when treating disease. The healthcare sector needs
to rely on smart machines/devices to collect data; however, nowadays,
these smart machines/devices are facing several critical issues,
including security breaches, data leaks of private information, loss of
trust, etc.
We are currently entering the era of smart world devices, where robots
or machines are being used in most applications to solve real-world
problems. These smart machines/devices reduce the burden on
doctors, which in turn make their lives easier and the lives of their
patients better, thereby increasing patient longevity, which is the
ultimate goal of computer vision. Therefore, our goal in writing this
book is to attempt to provide complete information on reliable deep
learning models required for e-healthcare applications. Ways in which
deep learning can enhance healthcare images or text data for making
useful decisions will be discussed. Also presented are reliable deep
learning models, such as neural networks, convolutional neural
networks, backpropagation, and recurrent neural networks, which are
increasingly being used in medical image processing, including for
colorization of black and white X-ray images, automatic machine
translation images, object classification in photographs/images (CT
scans), character or useful generation (ECG), image caption generation,
etc. Hence, reliable deep learning methods for the perception or
production of better results are a necessity for highly effective e-
healthcare applications. Currently, the most difficult data-related
problem that needs to be solved concerns the rapid increase of data
occurring each day via billions of smart devices. To address the growing
amount of data in healthcare applications, challenges such as not
having standard tools, efficient algorithms, and a sufficient number of
skilled data scientists need to be faced. Hence, there is growing interest
in investigating deep learning models and their use in e-healthcare
applications.
Based on the above facts, some reliable deep learning and deep neural
network models for healthcare applications are contained in this book
on computational analysis and deep learning for medical care. These
chapters are contributed by reputed authors; the importance of deep
learning models is discussed along with the issues and challenges
facing available current deep learning models. Also included are
innovative deep learning algorithms/models for treating disease in the
Medicare population. Finally, several research gaps are revealed in deep
learning models for healthcare applications that will provide
opportunities for several research communities.
In conclusion, we want to thank our God, family members, teachers,
friends and last but not least, all our authors from the bottom of our
hearts (including publisher) for helping us complete this book before
the deadline. Really, kudos to all.
Amit Kumar Tyagi
Part 1
DEEP LEARNING AND ITS MODELS
1
CNN: A Review of Models, Application of IVD Segmentation
Leena Silvoster M.1* and R. Mathusoothana S. Kumar2
1Department of Computer Science Engg, College of Engg, Attingal, Thiruvananthapuram, Kerala, India
2Department of Information Technology, Noorul Islam University, Tamilnadu, India

Abstract
The widespread success of the Convolutional Neural Network (CNN) in domains such as image
classification, object recognition, and scene classification has revolutionized research in machine
learning, especially on medical images. Magnetic Resonance Images (MRIs) suffer from severe noise,
weak edges, low contrast, and intensity inhomogeneity. Recent advances in deep learning, with fewer
connections and parameters, have made such networks easier to train. This chapter presents an in-depth
review of various deep architectures as well as their application to segmenting the intervertebral disc
(IVD) from 3D spine images, together with an evaluation. The first section surveys traditional deep CNN
architectures such as LeNet, AlexNet, ZFNet, GoogLeNet, VGGNet, ResNet, the Inception model, ResNeXt,
SENet, MobileNet V1/V2, and DenseNet, and examines in detail the parameters and components associated
with these models. The second section discusses the application of these models to segmenting the IVD
from spine images. Finally, theoretical analysis and experimental results from the state of the art show
that the 2.5D multi-scale FCN performs best, with a Dice Similarity Coefficient (DSC) of 90.64%.
Keywords: CNN, deep learning, intervertebral disc degeneration, MRI segmentation

1.1 Introduction
The concept of the Convolutional Neural Network (CNN) was introduced by Fukushima. The underlying
principle of the CNN is that the human visual mechanism is hierarchical in structure. CNNs have been
successfully applied in various image domains such as image classification, object recognition, and scene
classification. A CNN is defined as a series of convolution and pooling layers. In the convolution layer,
the image is convolved with a filter, i.e., the filter slides over the image spatially, computing dot
products. The pooling layer provides a smaller feature set.
One major cause of low back pain is disc degeneration. Manually detecting lumbar abnormalities in
clinical scans is a burden for radiologists. Researchers have focused on automating the segmentation of
large sets of MRI data due to the huge size of such images. The success of CNNs in various fields of
object detection has encouraged researchers to apply various models to the detection of the
Intervertebral Disc (IVD), which, in turn, helps in the diagnosis of diseases.
The remainder of this chapter is structured as follows. The next section studies the various CNN models.
Section 1.3 presents applications of CNN to the detection of the IVD. Section 1.4 compares
state-of-the-art segmentation approaches for spine T2W images, and Section 1.5 concludes.

1.2 Various CNN Models


1.2.1 LeNet-5
The LeNet architecture was proposed by LeCun et al. [1], and it successfully classified the images in the
MNIST dataset. LeNet uses a grayscale image of 32×32 pixels as the input image. As a pre-processing step,
the input pixel values are normalized so that white (background) pixels map to a value of −0.1 and black
(foreground) pixels to a value of 1.175, which, in turn, speeds up the learning task. The LeNet-5
architecture consists of an input layer, two sets of convolutional and average pooling layers, followed
by a flattening convolutional layer, then two fully connected layers, and finally a softmax classifier.
The first convolutional layer filters the 32×32 input image with six filters. All filter kernels are of
size 5×5 (the receptive field) with a stride of 1 pixel (this is the distance between the receptive field
centers of neighboring neurons in a kernel map) and no padding. Given the 32×32 input image, applying six
convolutional kernels of size 5×5 with stride 1 in C1 yields feature maps of size 28×28.
Figure 1.1 shows the architecture of LeNet-5, and Table 1.1 shows the various parameter details of LeNet-5.
Let Wc be the number of weights in the layer, Bc the number of biases in the layer, Pc the number of
parameters in the layer, K the size (width) of the kernels in the layer, N the number of kernels, and C
the number of channels in the input image. Then,

Wc = K × K × C × N    (1.1)

Pc = Wc + Bc, where Bc = N    (1.2)
In the first convolutional layer, the number of learning parameters is (5×5 + 1) × 6 = 156, where 6 is
the number of filters, 5 × 5 is the filter size, and 1 is the bias; there are 28×28×156 = 122,304 connections.
The feature map size is calculated as follows:

Wout = (W − Fw + 2P)/S + 1    (1.3)

Hout = (H − Fh + 2P)/S + 1    (1.4)

With W = 32, H = 32, Fw = Fh = 5, P = 0, and S = 1, the feature map size is 28 × 28.


First pooling layer: W = 28; H = 28; P = 0; S = 2; window size 2 × 2.

Wout = (W − Fw)/S + 1 = (28 − 2)/2 + 1 = 14    (1.5)

Figure 1.1 Architecture of LeNet-5.


Table 1.1 Various parameters of the layers of LeNet.
Sl no.  Layer  Feature maps  Feature map size  Kernel size  Stride  Activation  Trainable params  # Connections
1       Image  1             32 × 32           -            -       -           -                 -
2       C1     6             28 × 28           5 × 5        1       tanh        156               122,304
3       S1     6             14 × 14           2 × 2        2       tanh        12                5,880
4       C2     16            10 × 10           5 × 5        1       tanh        1,516             151,600
5       S2     16            5 × 5             2 × 2        2       tanh        32                2,000
6       Dense  120           1 × 1             5 × 5        1       tanh        48,120            48,120
7       Dense  -             84                -            -       tanh        10,164            10,164
8       Dense  -             10                -            -       softmax     -                 -
Total trainable parameters: approximately 60,000

Hout = (H − Fh)/S + 1 = (28 − 2)/2 + 1 = 14    (1.6)

The feature map size is therefore 14×14, the number of learning parameters is (coefficient + bias) × no.
of filters = (1+1) × 6 = 12, and the number of connections is 30×14×14 = 5,880.
Layer 3: In this layer, each of the 16 feature maps of C3 is connected to only a subset of the six
feature maps of the previous layer (S2), as shown in Table 1.2. Each unit in C3 is connected to several
5 × 5 receptive fields at identical locations in S2. Total number of trainable parameters =
(3×5×5+1)×6 + (4×5×5+1)×9 + (6×5×5+1) = 1,516. Total number of connections =
(3×5×5+1)×6×10×10 + (4×5×5+1)×9×10×10 + (6×5×5+1)×10×10 = 151,600. The total number of parameters
of the network is about 60K.
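
To make the arithmetic above concrete, the short Python sketch below (not from the chapter; the function
names are illustrative) encodes Equations (1.1) to (1.5) and checks the C1 and S1 figures just derived.

# A minimal sketch that checks the LeNet-5 parameter and output-size
# arithmetic above, using Equations (1.1) to (1.5).

def conv_output_size(w, f, p=0, s=1):
    """Output width of a convolution/pooling layer: (W - F + 2P) / S + 1."""
    return (w - f + 2 * p) // s + 1

def conv_params(k, c, n):
    """Trainable parameters of a conv layer: Wc = K*K*C*N weights plus N biases."""
    return k * k * c * n + n

# C1: 32x32 input, six 5x5 filters, stride 1, no padding.
assert conv_output_size(32, 5) == 28           # 28 x 28 feature maps
assert conv_params(5, 1, 6) == 156             # (5*5 + 1) * 6 parameters
print("C1 connections:", 156 * 28 * 28)        # 122,304 connections

# S1: 2x2 average pooling with stride 2 on the 28x28 maps.
assert conv_output_size(28, 2, p=0, s=2) == 14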

1.2.2 AlexNet
Alex Krizhevsky et al. [2] presented a new architecture, "AlexNet", trained on the ImageNet dataset,
which consists of 1.2 million high-resolution images in 1,000 different classes. In the original
implementation, the layers are divided into two groups trained on separate GPUs (GTX 580 3GB), which
takes around 5–6 days. The network contains five convolutional layers with interleaved max-pooling
layers, followed by three fully connected layers and finally a 1,000-way softmax classifier. The network
uses the ReLU activation function, data augmentation, dropout, local response normalization, and
overlapping pooling. AlexNet has 60M parameters. Figure 1.2 shows the architecture of AlexNet and
Table 1.3 shows the various parameters of AlexNet.
Table 1.2 Every column indicates which feature maps in S2 are combined by the units in a particular
feature map of C3 [1].
    0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
0   X           X  X  X        X  X  X  X     X  X
1   X  X           X  X  X        X  X  X  X     X
2   X  X  X           X  X  X        X     X  X  X
3      X  X  X        X  X  X  X        X     X  X
4         X  X  X        X  X  X  X     X  X     X
5            X  X  X        X  X  X  X     X  X  X
Figure 1.2 Architecture of AlexNet.
First Layer: AlexNet accepts a 227 × 227 × 3 RGB image as input, which is fed to the first convolutional
layer with 96 kernels (feature maps or filters) of size 11 × 11 × 3 and a stride of 4; the output is 96
feature maps of size 55 × 55. The next layer is a max-pooling (sub-sampling) layer with a window size of
3 × 3 and a stride of 2, producing an output of size 27 × 27 × 96.
Second Layer: The second convolutional layer filters the 27 × 27 × 96 input with 256 kernels of size 5 × 5
and a stride of 1 pixel. It is followed by a max-pooling layer with filter size 3 × 3 and a stride of 2,
and the output is 256 feature maps of size 13 × 13.
Third, Fourth, and Fifth Layers: The third, fourth, and fifth convolutional layers use a filter size of
3 × 3 and a stride of 1. The third and fourth convolutional layers have 384 feature maps each, and the
fifth layer uses 256 filters. These layers are followed by a max-pooling layer with filter size 3 × 3 and
a stride of 2, giving 256 feature maps.
Sixth Layer: The 6 × 6 × 256 output is flattened into a fully connected layer with 9,216 neurons.
Seventh and Eighth Layers: The seventh and eighth layers are fully connected layers with 4,096 neurons each.
Output Layer: The activation used in the output layer is softmax and consists of 1,000 classes.
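
The layer sequence just described can be sketched in a few lines of PyTorch. This is an illustrative
single-branch variant (the original was split across two GPUs), with dropout and local response
normalization omitted; it is not the authors' implementation.

import torch
import torch.nn as nn

# Single-branch AlexNet-style sketch following the layer walkthrough above.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4),    # 227x227x3 -> 55x55x96
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),         # -> 27x27x96
    nn.Conv2d(96, 256, kernel_size=5, padding=2),  # -> 27x27x256
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),         # -> 13x13x256
    nn.Conv2d(256, 384, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),         # -> 6x6x256
    nn.Flatten(),                                  # -> 9,216
    nn.Linear(9216, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                         # 1,000-way classifier
)

print(alexnet(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])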

1.2.3 ZFNet
The architecture of ZFNet, introduced by Zeiler [3], is the same as that of AlexNet, but the first
convolutional layer uses a reduced kernel size of 7 × 7 with stride 2. This reduction enables the network
to retain more features at a lower computational cost and leads to better hyperparameters. The numbers of
filters in the third, fourth, and fifth convolutional layers are increased to 512, 1,024, and 512,
respectively. A new visualization technique, deconvolution (mapping features back to pixels), is used to
analyze the feature maps of the first and second layers.
Table 1.3 AlexNet layer details.
Sl. no.  Layer  Kernel size  Stride  Activation shape  Weights     Bias   # Parameters  Activation  # Connections
1        Input  -            -       (227,227,3)       0           0      0             -           -
2        CONV1  11 × 11      4       (55,55,96)        34,848      96     34,944        relu        105,415,200
3        POOL1  3 × 3        2       (27,27,96)        0           0      0             relu        -
4        CONV2  5 × 5        1       (27,27,256)       614,400     256    614,656       relu        111,974,400
5        POOL2  3 × 3        2       (13,13,256)       0           0      0             relu        -
6        CONV3  3 × 3        1       (13,13,384)       884,736     384    885,120       relu        149,520,384
7        CONV4  3 × 3        1       (13,13,384)       1,327,104   384    1,327,488     relu        112,140,288
8        CONV5  3 × 3        1       (13,13,256)       884,736     256    884,992       relu        74,760,192
9        POOL3  3 × 3        2       (6,6,256)         0           0      0             relu        -
10       FC     -            -       (9,216)           37,748,736  4,096  37,752,832    relu        37,748,736
11       FC     -            -       (4,096)           16,777,216  4,096  16,781,312    relu        16,777,216
12       FC     -            -       (4,096)           4,096,000   1,000  4,097,000     relu        4,096,000
Output   FC     -            -       (1,000)           -           -      0             softmax     -
Total parameters: 62,378,344

Figure 1.3 Architecture of ZFNet.


ZFNet uses the cross-entropy loss function, the ReLU activation function, and batch stochastic gradient
descent. Training on 1.3 million images using a GTX 580 GPU takes 12 days. The ZFNet architecture
consists of five convolutional layers interleaved with three max-pooling layers, followed by three fully
connected layers and a softmax layer, as shown in Figure 1.3. Table 1.4 traces a 224 × 224 × 3 input
image through each layer, showing the filter size, window size, stride, and padding values at each layer.
The ImageNet top-5 error improved from 16.4% to 11.7%.

1.2.4 VGGNet
Simonyan and Zisserman [4] introduced VGGNet for the ImageNet Challenge in 2014. VGGNet-16 consists of 16
layers and accepts a 224 × 224 × 3 RGB image as input, after subtracting the global mean from each pixel.
The image is then fed through a series of convolutional layers (13 layers) that use a small receptive
field of 3 × 3 with "same" padding and a stride of 1. Whereas AlexNet and ZFNet place a max-pooling layer
after each convolutional layer, VGGNet stacks several 3 × 3 convolutional layers between pooling layers;
a stack of three such layers is more effective than a single layer with a larger receptive field such as
5 × 5, and as the spatial size decreases, the depth increases. The max-pooling layers use a window of
size 2 × 2 pixels and a stride of 2. These are followed by three fully connected layers, the first two
with 4,096 neurons and the third being the output layer with 1,000 neurons, since the ILSVRC
classification task contains 1,000 classes. The final layer is a softmax layer. Training is carried out
on 4 Nvidia Titan Black GPUs for 2–3 weeks with the ReLU nonlinearity as activation function. The network
has 138 million parameters (522 MB). The test set top-5 error rate during the competition was 7.1%.
Figure 1.4 shows the architecture of VGG-16, and Table 1.5 shows its parameters.
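
A minimal PyTorch sketch of one VGG-style stage follows: stacked 3 × 3 convolutions with "same" padding,
closed by a 2 × 2 max pooling. The function name and channel widths are illustrative, not taken from the
original implementation.

import torch
import torch.nn as nn

# One VGG-style stage: two stacked 3x3 convolutions see an effective 5x5
# receptive field (three see 7x7) with fewer parameters and extra
# nonlinearities in between, then 2x2 max pooling halves the spatial size.
def vgg_stage(in_ch, out_ch, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),  # "same" padding
                   nn.ReLU()]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

stage = vgg_stage(64, 128, num_convs=2)            # e.g., Conv 3 + Conv 4 + pool
print(stage(torch.randn(1, 64, 112, 112)).shape)   # torch.Size([1, 128, 56, 56])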
Table 1.4 Various parameters of ZFNet.
Layer name     Input size  Filter size  Window size  # Filters  Stride  Padding  Output size  # Feature maps  # Parameters
Conv 1         224 × 224   7 × 7        -            96         2       0        110 × 110    96              14,208
Max-pooling 1  110 × 110   -            3 × 3        -          2       0        55 × 55      96              0
Conv 2         55 × 55     5 × 5        -            256        2       0        26 × 26      256             614,656
Max-pooling 2  26 × 26     -            3 × 3        -          2       0        13 × 13      256             0
Conv 3         13 × 13     3 × 3        -            384        1       1        13 × 13      384             885,120
Conv 4         13 × 13     3 × 3        -            384        1       1        13 × 13      384             1,327,488
Conv 5         13 × 13     3 × 3        -            256        1       1        13 × 13      256             884,992
Max-pooling 3  13 × 13     -            3 × 3        -          2       0        6 × 6        256             0
Fully connected 1  (4,096 neurons)                                                            37,752,832
Fully connected 2  (4,096 neurons)                                                            16,781,312
Fully connected 3  (1,000 neurons)                                                            4,097,000
Softmax        (1,000 classes)
Total parameters: 62,357,608

Figure 1.4 Architecture of VGG-16.

1.2.5 GoogLeNet
In 2014, Google [5] proposed the Inception network for the detection and classification tasks of the
ImageNet Challenge. The basic unit of this model is called an "Inception cell": parallel convolutional
layers with different filter sizes, which perform a series of convolutions at different scales and
concatenate the results; different filter sizes extract feature maps at different scales. To reduce the
computational cost and the input channel depth, 1 × 1 convolutions are used. In order to concatenate
properly, max pooling with "same" padding is used, which also preserves the dimensions. In the state of
the art, three later versions of Inception (v2, v3, and v4) and Inception-ResNet are defined. Figure 1.5
shows the inception module and Figure 1.6 shows the architecture of GoogLeNet.
Each image is resized so that the input to the network is a 224 × 224 × 3 image, and the mean is
subtracted before feeding the training image to the network. The dataset contains 1,000 categories, with
1.2 million images for training, 100,000 for testing, and 50,000 for validation. GoogLeNet is 22 layers
deep, uses nine inception modules, and uses global average pooling instead of fully connected layers to
go from 7 × 7 × 1,024 to 1 × 1 × 1,024, which, in turn, saves a huge number of parameters. It includes
several auxiliary softmax output units to enforce regularization. It was trained on high-end GPUs within
a week and achieved a top-5 error rate of 6.67%. GoogLeNet trains faster than VGG, and the size of a
pre-trained GoogLeNet is comparatively smaller than that of VGG.
Table 1.5 Various parameters of VGG-16.
Layer name     Input size  Filter size  Window size  # Filters  Stride/Padding  Output size  # Feature maps  # Parameters
Conv 1         224 × 224   3 × 3        -            64         1/1             224 × 224    64              1,792
Conv 2         224 × 224   3 × 3        -            64         1/1             224 × 224    64              36,928
Max-pooling 1  224 × 224   -            2 × 2        -          2/0             112 × 112    64              0
Conv 3         112 × 112   3 × 3        -            128        1/1             112 × 112    128             73,856
Conv 4         112 × 112   3 × 3        -            128        1/1             112 × 112    128             147,584
Max-pooling 2  112 × 112   -            2 × 2        -          2/0             56 × 56      128             0
Conv 5         56 × 56     3 × 3        -            256        1/1             56 × 56      256             295,168
Conv 6         56 × 56     3 × 3        -            256        1/1             56 × 56      256             590,080
Conv 7         56 × 56     3 × 3        -            256        1/1             56 × 56      256             590,080
Max-pooling 3  56 × 56     -            2 × 2        -          2/0             28 × 28      256             0
Conv 8         28 × 28     3 × 3        -            512        1/1             28 × 28      512             1,180,160
Conv 9         28 × 28     3 × 3        -            512        1/1             28 × 28      512             2,359,808
Conv 10        28 × 28     3 × 3        -            512        1/1             28 × 28      512             2,359,808
Max-pooling 4  28 × 28     -            2 × 2        -          2/0             14 × 14      512             0
Conv 11        14 × 14     3 × 3        -            512        1/1             14 × 14      512             2,359,808
Conv 12        14 × 14     3 × 3        -            512        1/1             14 × 14      512             2,359,808
Conv 13        14 × 14     3 × 3        -            512        1/1             14 × 14      512             2,359,808
Max-pooling 5  14 × 14     -            2 × 2        -          2/0             7 × 7        512             0
Fully connected 1  (4,096 neurons)                                                           102,764,544
Fully connected 2  (4,096 neurons)                                                           16,781,312
Fully connected 3  (1,000 neurons)                                                           4,097,000
Softmax        (1,000 classes)
Figure 1.5 Inception module.

Figure 1.6 Architecture of GoogleNet.


First layer: The input image is 224 × 224 × 3 and the output feature map is 112 × 112 × 64. The
convolutional layer uses a kernel of size 7 × 7 × 3 with stride 2. It is followed by ReLU and 3 × 3 max
pooling with stride 2, after which the output feature map size is 56 × 56 × 64; local response
normalization is then applied.
Second layer: This is a simplified inception module. A 1 × 1 convolution with 64 filters generates
feature maps from the previous layer's output before a 3 × 3 convolution with 192 filters is performed;
ReLU and local response normalization follow. Finally, a 3 × 3 max pooling with stride 2 yields an output
of 28 × 28 with 192 feature maps.
Third layer: This is a complete inception module, inception (3a). The previous layer's output is 28 × 28
with 192 filters, and four branches originate from it. The first branch uses 1 × 1 convolution kernels
with 64 filters and ReLU, generating 64 feature maps of 28 × 28; the second branch uses a 1 × 1
convolution with 96 kernels (ReLU) before a 3 × 3 convolution with 128 filters, generating a 128 × 28 × 28
feature map; the third branch uses a 1 × 1 convolution with 16 filters (ReLU) before a 5 × 5 convolution
with 32 filters, generating a 32 × 28 × 28 feature map; the fourth branch contains a 3 × 3 max pooling
layer and a 1 × 1 convolution with 32 filters, generating 32 × 28 × 28 feature maps. Concatenating the
generated feature maps provides an output of 28 × 28 with 64 + 128 + 32 + 32 = 256 filters.
The fourth layer, inception (3b), is also an inception module. The input is 28 × 28 × 256. The branches
comprise 1 × 1 × 128 with ReLU; 1 × 1 × 128 as a reduction before a 3 × 3 × 192 convolution; 1 × 1 × 32
as a reduction before a 5 × 5 × 96 convolution; and 3 × 3 max pooling with padding 1 before 1 × 1 × 64.
The branch outputs are 28 × 28 × 128, 28 × 28 × 192, 28 × 28 × 96, and 28 × 28 × 64, respectively, and
the concatenated output is 28 × 28 × 480. Table 1.6 shows the parameters of GoogLeNet.
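
The four-branch structure described above can be sketched as follows in PyTorch; the branch widths are
those of inception (3a) from Table 1.6 (64 + 128 + 32 + 32 = 256 output maps). This is an illustrative
module, not the original implementation.

import torch
import torch.nn as nn

# Inception module: four parallel branches whose outputs are concatenated
# channel-wise; "same" padding in every branch keeps the 28x28 spatial size.
class Inception(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(),
                                nn.Conv2d(c3_reduce, c3, 3, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(),
                                nn.Conv2d(c5_reduce, c5, 5, padding=2), nn.ReLU())
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

inc3a = Inception(192, c1=64, c3_reduce=96, c3=128, c5_reduce=16, c5=32, pool_proj=32)
print(inc3a(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])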

1.2.6 ResNet
Usually, an input feature map is fed through a series of convolutional layers, a non-linear activation
function (ReLU), and a pooling layer to provide the output for the next layer, and training is done by
the back-propagation algorithm. The accuracy of a network can be improved by increasing its depth, but
once the network converges, its accuracy saturates; if still more layers are added, the performance
degrades rapidly, resulting in higher training error. To solve the problem of the vanishing/exploding
gradient, ResNet [6] was proposed with a residual learning framework that allows new layers to fit a
residual mapping: when the desired mapping is close to identity, it is easier to push the residual to
zero than to fit the mapping directly with a stack of nonlinear layers. The principles of ResNet are
residual learning, identity mapping, and skip connections: the input of a block is carried forward and
added to the output of its convolutional layers, after which non-linear activation (ReLU) and pooling
are performed.
Table 1.6 Various parameters of GoogLeNet.
Layer name      Patch size/Stride  Output size     Depth  # 1×1  # 3×3 reduce  # 3×3  # 5×5 reduce  # 5×5  Pool proj
Convolution     7 × 7 / 2          112 × 112 × 64  1      -      -             -      -             -      -
Max pool        3 × 3 / 2          56 × 56 × 64    0      -      -             -      -             -      -
Convolution     3 × 3 / 1          56 × 56 × 192   2      -      64            192    -             -      -
Max pool        3 × 3 / 2          28 × 28 × 192   0      -      -             -      -             -      -
Inception (3a)  -                  28 × 28 × 256   2      64     96            128    16            32     32
Inception (3b)  -                  28 × 28 × 480   2      128    128           192    32            96     64
Max pool        3 × 3 / 2          14 × 14 × 480   0      -      -             -      -             -      -
Inception (4a)  -                  14 × 14 × 512   2      192    96            208    16            48     64
Inception (4b)  -                  14 × 14 × 512   2      160    112           224    24            64     64
Inception (4c)  -                  14 × 14 × 512   2      128    128           256    24            64     64
Inception (4d)  -                  14 × 14 × 528   2      112    144           288    32            64     64
Inception (4e)  -                  14 × 14 × 832   2      256    160           320    32            128    128
Max pool        3 × 3 / 2          7 × 7 × 832     0      -      -             -      -             -      -
Inception (5a)  -                  7 × 7 × 832     2      256    160           320    32            128    128
Inception (5b)  -                  7 × 7 × 1,024   2      384    192           384    48            128    128
Avg pool        7 × 7 / 1          1 × 1 × 1,024   0      -      -             -      -             -      -
Dropout (40%)   -                  1 × 1 × 1,024   0      -      -             -      -             -      -
Linear          -                  1 × 1 × 1,000   1      -      -             -      -             -      -
Softmax         -                  1 × 1 × 1,000   0      -      -             -      -             -      -

The architecture inserts shortcut connections into a VGG-style plain network (consisting of 3 × 3
filters) to form a residual network, as shown in Figure 1.7(b): the 34-layer network converted into a
residual network has lower training error than its 18-layer residual counterpart. As in GoogLeNet, it
utilizes a global average pooling layer before the classification layer. ResNets were capable of learning
networks with a maximum depth of 152 layers. Compared to GoogLeNet and VGGNet, the accuracy is better,
and it is computationally more efficient than VGGNet. ResNet-152 achieves a top-5 accuracy of 95.51%.
Figure 1.7(a) shows a residual block, Figure 1.7(b) shows the architecture of ResNet, and Table 1.7 shows
the parameters of ResNet.
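
A minimal PyTorch sketch of the basic residual block follows: two 3 × 3 convolutions whose output is
added to the block input through an identity skip connection before the final ReLU. Batch normalization
placement follows [6]; the class name is illustrative.

import torch
import torch.nn as nn

# Basic residual block: the layers learn F(x) = H(x) - x, and the skip
# connection adds x back so the block outputs H(x) = F(x) + x.
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # identity skip connection

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])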

1.2.7 ResNeXt
The ResNeXt [7] architecture builds on the advantages of ResNet (residual networks) and GoogLeNet
(multi-branch architecture) and requires fewer hyperparameters than the traditional ResNet. ResNeXt
introduces a new dimension, "cardinality", on top of the depth and width of ResNet: the input is split
channel-wise into groups, and the standard residual block is replaced with a "split-transform-merge"
procedure. The architecture uses a series of residual blocks and follows two rules: (1) blocks that
produce spatial maps of the same size share the same hyperparameters; (2) each time the spatial map is
downsampled by a factor of two, the width of the blocks is doubled. ResNeXt was first runner-up in the
ILSVRC 2016 classification task and produces better results than ResNet. Figure 1.8 shows the
architecture of ResNeXt, and the comparison with ResNet is shown in Table 1.8.
Figure 1.7 (a) A residual block.
Table 1.7 Various parameters of ResNet.

Figure 1.8 Architecture of ResNeXt.

1.2.8 SE-ResNet
Hu et al. [8] proposed the Squeeze-and-Excitation Network (SENet), which took first place in the ILSVRC
2017 classification task, with a lightweight gating mechanism. This architecture explicitly models
interdependencies between the channels of convolutional features to achieve dynamic channel-wise feature
recalibration. In the squeeze phase, the SE block uses a global average pooling operation, and in the
excitation phase it uses channel-wise scaling. For an input image of size 224 × 224, the running time of
ResNet-50 is 164 ms, whereas it is 167 ms for SE-ResNet-50. Also, SE-ResNet-50 requires ∼3.87 GFLOPs, a
0.26% relative increase over the original ResNet-50, and the top-5 error is reduced to 2.251%. Figure 1.9
shows the architecture of SE-ResNet, and Table 1.9 compares ResNet-50 with SE-ResNet-50 and
SE-ResNeXt-50.
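
The squeeze-and-excitation operation can be sketched as below in PyTorch: global average pooling
(squeeze), a two-layer bottleneck gate, and channel-wise rescaling (excitation). The reduction ratio
r = 16 is the default reported in [8]; the class name is illustrative.

import torch
import torch.nn as nn

# SE block: compress each channel to a scalar, learn per-channel gates,
# then rescale the feature maps channel-wise.
class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):
        n, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))              # squeeze: N x C channel descriptor
        w = self.gate(s).view(n, c, 1, 1)   # excitation: per-channel weights
        return x * w                        # recalibrate the feature maps

se = SEBlock(256)
print(se(torch.randn(2, 256, 14, 14)).shape)  # torch.Size([2, 256, 14, 14])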
1.2.9 DenseNet
The architecture was proposed by [9]; every layer connects directly with every other layer so as to
ensure maximum information (and gradient) flow, so a model with L layers has L(L+1)/2 direct connections.
A number of dense blocks (groups of layers connected to all previous layers) and transition layers
control the complexity of the model. Each layer within a dense block adds a fixed number of channels (the
growth rate) to the model. The transition layer reduces the number of channels using a 1 × 1
convolutional layer and halves the width and height using an average pooling layer with a stride of 2.
Each layer concatenates the output feature maps of all previous layers with the incoming feature maps,
i.e., each layer has direct access to the gradients from the loss function and to the original input
image. Further, DenseNets need a small set of parameters compared to a traditional CNN and reduce the
vanishing gradient problem. Figure 1.10 shows the architecture of DenseNet, and Table 1.10 shows various
DenseNet architectures.
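
A minimal PyTorch sketch of a dense block and transition layer follows; the growth rate k and the layer
count are illustrative, not a specific DenseNet configuration from Table 1.10.

import torch
import torch.nn as nn

# Each layer in the dense block consumes all earlier feature maps and
# contributes k new channels; the transition layer then shrinks channels
# with a 1x1 convolution and halves the spatial size with average pooling.
def conv_layer(in_ch, k):
    return nn.Sequential(nn.BatchNorm2d(in_ch), nn.ReLU(),
                         nn.Conv2d(in_ch, k, 3, padding=1))

class DenseBlock(nn.Module):
    def __init__(self, in_ch, k, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            conv_layer(in_ch + i * k, k) for i in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # reuse all earlier features
        return x

def transition(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 1),
                         nn.AvgPool2d(2, stride=2))

block = DenseBlock(64, k=32, num_layers=4)        # 64 + 4*32 = 192 channels out
out = transition(192, 96)(block(torch.randn(1, 64, 28, 28)))
print(out.shape)                                  # torch.Size([1, 96, 14, 14])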
Table 1.8 Comparison of ResNet-50 and ResNext-50 (32 × 4d).

1.2.10 MobileNets
Google proposed MobileNets V1 [10], which uses depthwise separable convolutions instead of normal
convolutions, reducing the model size and complexity. A depthwise separable convolution is defined as a
depthwise convolution followed by a pointwise convolution, i.e., a single convolution is performed on
each input channel, and a pointwise 1 × 1 convolution then combines the outputs of the depthwise
convolution; after each convolution, batch normalization (BN) and ReLU are applied. The whole
architecture consists of 30 layers built from (1) a convolutional layer with stride 2, (2) depthwise
layers, (3) pointwise layers, (4) depthwise layers with stride 2, and (5) pointwise layers. The advantage
of MobileNets is that they require fewer parameters and the model is less complex (a small number of
multiplications and additions). Figure 1.11 shows the architecture of MobileNets. Table 1.11 shows the
various parameters of MobileNets.
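
The depthwise separable convolution can be sketched as below in PyTorch: a per-channel 3 × 3 depthwise
convolution (groups equal to the channel count) followed by a 1 × 1 pointwise convolution, each with BN
and ReLU. The channel widths are illustrative.

import torch
import torch.nn as nn

# Depthwise separable convolution: per-channel spatial filtering followed by
# 1x1 channel mixing, far cheaper than a standard convolution of the same shape.
def depthwise_separable(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),      # depthwise: one filter per channel
        nn.BatchNorm2d(in_ch), nn.ReLU(),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),  # pointwise: combine channels
        nn.BatchNorm2d(out_ch), nn.ReLU())

layer = depthwise_separable(32, 64)
n_params = sum(p.numel() for p in layer.parameters())
# Depthwise 3*3*32 plus pointwise 32*64 weights, far fewer than the
# 3*3*32*64 weights of a standard 3x3 convolution with the same shapes.
print(layer(torch.randn(1, 32, 112, 112)).shape, n_params)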
Figure 1.9 Architecture of SE-ResNet.

1.3 Application of CNN to IVD Detection


Mader [11] proposed V-Net for the detection of the IVD. Bateson [12] proposes a method that embeds
domain-invariant prior knowledge and employs ENet to segment the IVD. Other works that deserve special
mention for the detection and segmentation of the IVD from 3D spine MRI include: Zeng [13], who uses a
CNN; Chang Liu [14], who utilizes a 2.5D multi-scale FCN; Gao [15], who presents a 2D CNN and DenseNet;
Jose [17], who presents an HD-UNet asym model; and Claudia Iriondo [16], who uses a VNet-based 3D
connected component analysis algorithm.
Table 1.9 Comparison of ResNet-50 and ResNext-50 and SE-ResNeXt-50 (32 × 4d).
Figure 1.10 Architecture of DenseNet.

1.4 Comparison With State-of-the-Art Segmentation Approaches for Spine T2W Images

This section discusses the various CNN architectures that have been employed for the segmentation of
spine MRI. The differences between the architectures depend on several factors, such as the number of
layers, the number of filters, whether padding is used, and the presence or absence of striding. The
performance of segmentation is evaluated using the Dice Similarity Coefficient (DSC), Mean Absolute
Surface Distance (MASD), etc., and the experimental results are shown in Table 1.12. In the first three
literature works, DSC is computed, and the CNN developed by Zeng et al. achieves 90.64%. DenseNet
produces approximately similar segmentations based on MASD, Mean Localisation Distance (MLD), and Mean
Dice Similarity Coefficient (MDSC). The comparison results are shown in Table 1.12.
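
For reference, the Dice Similarity Coefficient used in Table 1.12 can be computed as below; this is a
minimal NumPy sketch on toy binary masks, not the evaluation code of the cited works.

import numpy as np

# DSC: twice the overlap of the predicted and ground-truth masks divided by
# the sum of their sizes; 1.0 means perfect agreement, 0.0 means no overlap.
def dice(pred, truth, eps=1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64)); a[16:48, 16:48] = 1   # toy predicted disc mask
b = np.zeros((64, 64)); b[20:52, 20:52] = 1   # toy ground-truth mask
print(round(dice(a, b), 4))                    # 28x28 overlap -> DSC ~ 0.7656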

1.5 Conclusion
In this chapter, we discussed the various CNN architectural models and their parameters. In the first
phase, architectures such as LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, ResNeXt, SENet, DenseNet, and
MobileNet were studied. In the second phase, the application of CNN to the segmentation of the IVD was
presented, along with a comparison of state-of-the-art segmentation approaches for spine T2W images. From
the experimental results, it is clear that the 2.5D multi-scale FCN outperforms all other models. As
future work, the current models can be modified to obtain optimized results.
little cupboard of the wall and produced a packet of papers. They were offered to
me as documents of great value. And they were strange documents—letters
from people of the country who had read in newspapers of Youkeoma’s visit to
Washington, and his defiance of the Government. I suppose such persons have
nothing better to do, and write letters of sympathy to the members of every
Indian delegation that parades itself eastward in feathers and war-paint to
present a fancied grievance. I recall the words of one of these papers, from
some weak-minded woman:—

Chief Youkeoma: you are a noble man. Do not let the Government have your children.
Their schools are not the place for your Indian lads who know only the hunt and the
open spaces. Resist to the last gasp. Die rather than submit.
Very like, she is now writing scenarios. Of course this correspondent had read
Fenimore Cooper, and was filled to the neck with the storybook idea of Indians—
lithe, clean, untouched by disease, and painted by romance. The Southwest has
no such Indians; and Indians, whether lithe or not, are seldom clean and never
romantic. She knew nothing of filth and trachoma and child-prostitution,
[175]while the Hopi had brought such things to a fine degree of perfection. And
she lived in Indiana.

Now there is a wide difference between demanding the rights of Indians, rights
that should be sacred under agreements,—and perhaps foreign treaties, such
as those of the Pueblo Indians of New Mexico,—and inciting them to warfare
and rebellion when teachers and physicians are striving to recover them from
ignorance and disease. There is a vast difference between the argument that a
title confirmed by three sovereign Governments be not attacked for the sake of
political loot—as in the case of the Pueblo Indians of New Mexico—and
denouncing the educational system of the United States and advising a group of
benighted savages to kill in a distant and lonely desert. That writer from Indiana
should have been a field matron for a little!

I have no sympathy with this type of sentimentalist. I deported some of them


from the Hopi desert country when they appeared with their box of theoretical
tricks.

I handed back the documents, and asked where the children were.
Accompanied by my Tewa policeman, I entered a small room off the main house
and found these three mentioned surrounded by relatives. The room filled up to
its capacity and a harangue began. At Hotevilla we had not listened to argument,
but here I thought it best to placate them, to explain things, rather more in line
with the moral-suasion programme outlined from Washington. All talk led to one
definite answer, growing sullenly louder and louder: “You cannot take the
children.”

We had to make an end. When I proceeded to lift one from the floor, in a twinkle
two lusty Indians were at my [176]throat. The Tewa (Indian police) came to my
assistance, his face expanding in a cheerful grin as he recognized the
opportunity of battle, and three or four others draped themselves around his
form. The sound of the struggle did not at once get outside. The Tewa began to
thresh out with his arms and let his voice be heard. An employee peered inside
and set up a shout. Then in plunged several very earnest fellows in uniform, and
out went the protestants, scrambling, dragging, and hitting the door jambs. The
Tewa followed to see that these things were properly managed, he being the
local and ranking officer in such affairs. I remained behind to counsel against
this attitude, but did not remain long enough, for on going outside the house I
spoiled a little comedy.

Sackaletztewa, the head man, a sinewy fellow of about fifty years, when
unceremoniously booted forth, had challenged the Tewa policeman to mortal
combat. He declaimed that no Indian policeman could whip him. The soldiers
had greeted this as the first worthy incident of a very dull campaign.

“You have on a Washington uniform and wear guns,” said Sackaletztewa, “but
without them you are not a match for me. If you did not have those things, I
would show you how a real Hopi fights.”

Now this Tewa always rejoiced in a chance for battle. The fact that no one at
Hotevilla had been arrested had filled him with gloom. Unbuckling his belt and
guns, he handed them to the nearest trooper; then he promptly shucked himself
out of his uniform. Twenty or thirty of the soldiers made a ring, their rifles
extended from hand to hand, and into this arena Nelson was conducting
Sackaletztewa for the beating of his life. It was a pity to issue an injunction. If I
had remained only five minutes [177]longer in the house, those patient soldiers
would have had something for their pains, and the grudge of the Indian police,
who had suffered in esteem at Chimopovi five years earlier, would have been
wiped from the slate.

Sackaletztewa was a good man physically; he had courage; but he was a Hopi,
and knew nothing of striking blows with his fists. He would have relied on the
ancient grapple method of combat, and the proficient art of scalp-tearing.
Perhaps he would have tried to jerk Nelson’s ears off by dragging at his
turquoise earrings. He would have scratched and gouged, and, if fortunate
enough to get a twist in the neckerchief, would have choked his man to a finish.
All this is permitted by the desert Indian rules of the game. But unless Nelson
had been tied to a post, he would have accomplished none of these things; for
the first rush would have carried him against a terrific right smash, accompanied
by a wicked left hook. Behind these two taps would have lunged one hundred
and sixty pounds of pure muscle. And a very bewildered Hopi would have spent
the remainder of the day holding a damaged head, and wondering how he would
manage a flint-corn diet without his teeth.

That night, blaming myself for the necessary interference, I joined Colonel Scott
at the Agency.

Now you will please not strive to conjure up a harrowing scene of terrified
children, removed from their parents, lonely and unconsoled. They were not
babies. They were nude, and hungry, and covered with vermin, and most of
them afflicted with trachoma, a very unpleasant and messy disease. Some of
them had attended this Cañon school in the past, that time before their parents’
last defiance, and they knew what was in store for them—baths, good food,
warm clothing, clean beds and [178]blankets, entertainment and music, the care
of kindly people. There would be no more packing of firewood and water up
steep mesa-trails, and living for weeks at a time on flint corn, beans, and
decaying melons. There would be meat,—not cut from hapless burros,—and
excellent bread of wheat flour, gingerbread even; and toys and candy at that
wonderful time the Bohannas call Christmas. There would be games for both
boys and girls, and no one at this school would interfere with their innocent
Indian pleasures. Their parents would visit them, and bring piki bread—and the
parents very promptly availed themselves of the privilege.
A HOPI SCHOOLGIRL

This same girl is shown in native dress opposite page


358
A HOPI YOUTH WHO IS PREPARING FOR COLLEGE

His ambition is to be a physician

So there was nothing of exile or punishment involved in this matter; and if you
have any true regard for childhood and defenceless children, there will be seen
a great deal of protection and happiness in it. I fancy that many of the girls—
especially those who had reached that age when the maternal uncles, the ogres
of the family, assign them in marriage and as the old men pleased—had been
counting the days since the first news of the troop’s coming.

It was a busy time for the corps of school employees when the wagons arrived.
Seventy-two children had to be recovered from the dirt and vermin that had
accumulated during their long holiday. The less said about this the better; but I
would have been amused to see the critics at the job of hair-cutting!

Those children spent four years at the Cañon school, and without vacations.
When the school departments were closed in 1915, because certain buildings
showed weaknesses and I feared their collapse, the Hotevilla children, having
reached eighteen years, might decide for themselves whether or not they
wished further education. With few exceptions, they elected to attend the
Phœnix [179]Indian school. They had no wish to visit Hotevilla, and very frankly
told me so. To illustrate their standpoint, Youkeoma’s granddaughter, an orphan,
was not of age so to elect. She feared that I would consult the old man about the
matter, and she knew that he would insist upon her return to the pueblo life. So
she secreted herself in one of the wagons that would carry the older pupils to
the railroad, and went away without my knowledge.

I had advised against the immediate recall of the troop of soldiers, and had
expected that a sergeant’s squad would remain for some months to return
runaways and to preserve discipline among those who might risk the power of
my army of three policemen. It was not improbable that a band of Hotevillans
would come to the Cañon to demand their children, once the soldiers were
withdrawn. They had staged this play before, and in 1913 certain Navajo did not
hesitate to make off with pupils. But trouble on the Border called. It was then I
sought the Colonel’s counsel. For a time he evaded a direct statement of his
views, but I was insistent, and he said:—

“I would never permit an Indian to remove his child from the school against my
orders to the contrary. They would find me sitting on the dormitory steps. Other
methods of prevention you must devise for yourself.”

He concluded with the words I have quoted before: “Young man! you have an
empire to control. Either rule it, or pack your trunk.”
Very early the next morning the troop departed. There was a light fall of snow, to
be followed by more and more, until the stark Cañon cliffs were frozen and white
in the drifts. The little campaign in the hills had closed just in time.

Twice thereafter Colonel Scott, accompanied by the [180]cavalry, came to the


Desert; once to pacify the truculent Navajo at Beautiful Mountain, after they had
threatened the San Juan Agency at Shiprock, New Mexico, and once to quiet
the Ute on our northern borders. But the Moqui Reservation was left entirely to
my ruling. The Department read the Colonel’s report through a reducing glass,
and gave me eight policemen instead of the twenty he advised. With these and
a few determined employees I contrived to have peace and order within the
Hopi-Navajo country—not always easily or pleasantly, but without actual war.
And I did not pack the proverbial trunk until the latter part of 1919, eight years
later, when ordered to take charge of the Pueblo Indians of New Mexico. [181]
[Contents]
XV
AN ECHO OF THE DAWN-MEN
“According to the law of the Medes and
Persians.”—Daniel, vi, 12

The sending of a small army to one’s home, and the imposing of


rigid Governmental regulations, would seem to be sufficient to give
any rebel pause. But not so Youkeoma. He stood faithfully by the
traditions; and unfortunately for him, the traditions obstructed or
became entangled with everything that a white official proposed for
the best interests of his community. No doubt the old man had been
amazed, and I think somewhat disappointed, when he was not sent
away as a prisoner. He could have made capital of another entry in
an already lengthy record as a political martyr. But he did not
propose to soften in consideration of this amnesty. He very likely
thought it an exhibition of the white man’s weakness, and gave his
ancient oracles the credit.

Nothing was heard of him until the next early summer, when came
time for the dipping of sheep on the range. The Hotevilla flocks were
the poorest of all the Hopi stock, which is saying a good deal, since
the Hopi is a disgraceful shepherd at any pueblo. But whatever their
condition, the head man of Hotevilla did not intend to recognize the
sanitary live-stock regulations issued by the peculiar Bohannas.
They paid no attention to the Indian crier who announced the order,
and they did not move their sheep toward the vats. It was necessary
to send police, hire herders, drive the animals to the dip about
twenty-five [182]miles from their village, and return them to the sullen
owners. Naturally, in such a movement, there are losses. Youkeoma
came to the Agency, at the head of a delegation, to file protest
against this action and to present claims for damages. He came
modestly clad in one garment, a union suit, and without other
indication of his rank.

During the hearing a few of the Hotevilla children came in to greet


their relatives. It was a satisfied little group of clean and well-fed
youngsters, having no resemblance to the filthy, trachomatous
urchins we had gathered at the pueblo.

“Your people’s children are happy here,” said a clerk.

Youkeoma looked at the girls in their fresh frocks, and noticed their
well-dressed hair, which had not been weeded with a Hopi broom.

“They should be dirty like the sheep,” he answered, “as dirty as I am.
That is the old Hopi way.”

His claims for damage were disallowed, and for much angry
disputing he spent a few days in the jail; then, very much to my
surprise, he promised that he would not counsel resistance to future
Governmental orders.

“I will attend to my affairs hereafter,” he agreed. “For myself, I do not


promise to obey Washington; but the people may choose for
themselves which way to go—with me, or with Washington.”

This was all that was asked of him, and he departed.

A year passed without incident. When the pupils were not returned in
vacation time, the parents filed regular complaints. They very
truthfully admitted that, were their requests granted, they had no
intention of permitting the children to return, so it seemed best to
deny them.

And now the other children of the village were growing up. At the
time of the first gathering, only those above [183]ten years of age
were taken; and given a few years among the Hopi, without
epidemic, children spring up and expand like weeds. A census was
taken, not without acrid dispute and a few blows, which showed that
the pueblo held about one hundred children of age to attend primary
grades. So I proposed to build a complete school-plant close to their
homes. This was another terrible blow to the traditions.

When selecting a site, great care was taken not to appropriate


tillable land or to invade fields. The school stands on a rock-ledge.
For a water-supply it was necessary to develop an old spring, one
that the Hopi had long since abandoned and lost. It is the only Hopi
school on the top of a mesa, and the children do not have to use
dangerous trails.

The villagers watched us very suspiciously as we surveyed the lines


for seven buildings, and they respected the flags marking the site-
limits. But when materials and workmen arrived, and the buildings
began to go up, they uttered a violent protest.

“We do not wish to see a white man’s roof from our pueblo!”

They declared that all such buildings would be burned. Guards were
necessary whenever the workmen left the camp. The school was
built, however, and the smaller children rounded up and into it. Two
dozen men managed what had required a troop of cavalry; but do
not think that we approached it in a spirit of indifference. The town
held about one hundred husky men, and one never knew what might
happen. Once again I had to crawl through the corn-cellars of the
place.

The old Chief was not to the front, and his body-guard of elders was
conspicuous by its absence. Great credit [184]was given them for
keeping their word. I flattered myself that the contentious Hopi spirit
and the backbone of rebellion had cracked together. But he was
simply waiting for a more propitious date, in strict accord with
prophecy, perhaps. The fire in the kiva had not burned with a flame
of promise; the cornmeal had not fallen in a certain sign; the
auguries were not auspicious. A little later and these things must
have strengthened him, for one night he appeared at the door of the
field matron’s quarters, accompanied by his cohort, the whole band
evidencing an angry mood.

“It is time,” he said, wrathfully. “You have been here long enough. We
will not drive you away to-night, but in the morning do not let us find
you here. There will be trouble, and we may have to cut off your
head.”

The field matron was alarmed, but she did not leave as directed. She
waited until they had gone away, and then slipped across the half-
cleared desert space to the school principal’s home. He promptly
saddled a horse and came into the Agency that night. There were no
telephones across the Desert then. Next day he returned with
definite instructions.

It is not wise to permit Indians of an isolated place to indulge


themselves in temper of this kind. One bluff succeeds another, until
finally a mistake in handling causes a flare-up that is not easy to
control, and one is not thanked in Washington for fiascos. I have
pointed out how quickly Washington moves itself to aid when there is
revolt.

A capable field-matron or field-nurse is a good angel among such


people. She supplements daily the work of the visiting physician,
dispensing simple remedies according to his direction; she is foster-
mother to the little children of the camps and to the girls who return
from the [185]schools. All social ills have her attention. She maintains
a bathhouse and laundry for the village people, and a sewing-room
for the women. In times of epidemic, these field matrons perform
extraordinary labors, and have been like soldiers when facing
contagious disease. With one other, Miss Mary Y. Rodger at the First
Mesa, Miss Abbott of Hotevilla ranked as the best in the Service; and
having ordered her to remain on that station, I determined that she
should live at the pueblo of Hotevilla in peace, if every one of the
ten-thousand sacred traditions reaching straight back to the
Underworld went by the board.

It is necessary first to catch your rabbit.

Whenever wanted and diligently sought for, Youkeoma was


somewhere else, and an unknown somewhere. While it was said
that he and the other old men spent their time in the kivas, I had
failed to find them there. Like the coyote that scents gun-oil, he smelt
business from afar; and this time it was business, and I wanted him.

Summoning the Indian police, I dispatched them under two white


officers to attend a Navajo dance in a distant cañon, forty miles east
of the Agency. Hotevilla was directly west from the Agency and
about the same distance removed. Having placed eighty miles
between my police and the scene of action, I informed my office
force that I intended visiting the railroad town on business. This
would take me eighty miles to the south. Others of the white men
were sent to work at different range points. No one suspected a
Hotevilla mission. We went our several ways.

But I did not go to the railroad town. A messenger, sent from the
Desert, recalled the two officers and the Indian police from the
Navajo encampment and, going roundabout the trails, they joined
me at the Indian Wells [186]trading-post on the south line of the
Reserve. After dark on the second night we hiked across the
southern Desert, avoiding all Indian camps and settlements, to reach
the Second Mesa about midnight. There we halted for a pot of
coffee, and rested an hour or two. Then on again, crossing the
Second Mesa in the wee sma’ hours, we avoided alarming Oraibi,
that always suspicious pueblo. The rangemen were collected from
their different stations. In the black, before the stars had begun to
pale, we arrived at Hotevilla and, without disturbing a soul, strung out
around the town.

With the first streak of red in the east, the Hopi became aware that
strangers were present. A perfect bedlam of noise arose. It seemed
that thousands of dogs came into vociferous action, and made the
morning ring with their challenges. But no man got out of the place.

We found our slippery friend Youkeoma and his supporters. They


were taken to the school and identified as those who had threatened
the matron. And once again the wagons started for the Agency
guardhouse. This time friend Youkeoma joined our Cañon
community permanently, for I had no idea of releasing him while in
charge of the post. This occurred in the summer of 1916 and he
remained at the Agency until the autumn of 1919.

He did not complain. In fact he seemed quite contented in his


quarters. He was not imprisoned in the sense of being locked-up, but
was given the work of mess-cook for the other prisoners. This in no
way offended his dignity. The more able of the men were required to
work at odd jobs—the cutting of weeds, the herding of sheep, the
tilling of small fields, and an occasional bit of road-mending.

Life as prisoners was not very irksome for these old [187]men. The
guardhouse was very like their home kiva. Instead of cold stone
benches, they slept on good beds; for rabbit-skin quilts and
sheepskins, they had good blankets; and in place of a central smoky
fire there was an excellent egg-shaped stove. Aside from being
clean, with walls freshly painted and floors scrubbed, it was very like
their kiva indeed. No one disturbed them in it. I fancy their
discussions were the same, and the ceremonies conducted
according to the calendar. Certainly they occupied themselves in
weaving belts and other talismanic articles.

And as prisoners they developed fully some very peculiar tastes.


Required to bathe regularly, they came to like soap and water very
much. I recall the first time Youkeoma found himself under a shower.
He had soap and towels, things considered entirely unessential at
home, and he looked for a tub and water. Suddenly the ceiling
opened and the water came down from Lodore. He was scared
speechless at first, and then began chattering as if this were some
rare form of white man’s magic. And he liked it!

They received new clothing, sufficient for the different seasons, but
they would refuse to don these garments until ordered to do so by
Moungwi. A clerk would make the issue from commissary, and would
succeed in getting them to pack the articles to the guardhouse. Next
morning they would appear in their old rags. When a solemn
Governmental pronunciamento was hurled at them, something
smacking of excommunication, the traditions were satisfied, and
forthwith they would array themselves.

They very diligently prepared and sowed certain fields—small


patches of corn, beans, and melons, such as they used at home.
They weeded and cultivated and watched the plants, until told that
the harvest would be theirs to [188]supplement the guardhouse ration
of staples. They refused to work at once. It was against the
traditions. They would not willingly raise a crop, to accept it as a
reward from Washington. Their work must be wholly in the nature of
punishment.

“So be it,” I said, washing my hands of them; and they continued


working those fields faithfully, once they knew that others would
possess the fruits thereof.
One by one, the men were released for good conduct, until only
Youkeoma remained. I told him plainly that he would not return to
foment trouble until I was relieved of authority. Often in the long,
drowsy, summer afternoons I would talk with him. He would sit on my
porch-floor, hugging his knees in his skinny arms, and amaze me by
his observations.

“You see,” he would say, “I am doing this as much for you as for my
own people. Suppose I should not protest your orders—suppose I
should willingly accept the ways of the Bohannas. Immediately the
Great Snake would turn over, and the Sea would rush in, and we
would all be drowned. You too. I am therefore protecting you.”

He stated such things as an infallible prophet. There was no malice


in the old chap, and I did not bear him any grudge for his pertinent
reflections.

“Yes; I shall go home sometime. I am not unhappy here, for I am an


old man, little use, and my chief work is ceremonies. But I shall go
back sometime. Washington may send another Agent to replace you,
or you may return to your own people, as all men do. Or you may be
dismissed by the Government. Those things have happened before.
White men come to the Desert, and white men leave the Desert; but
the Hopi, who came up from the Underworld, remain. You have been
here a long time now[189]—seven winters—much longer than the
others. And, too—you may die.”

He had many probable strings to his bow of the future. I had to admit
the soundness of his remarks, but I did not relish his last sentence.
There was a little too much of hope in it.

And it came to pass that I was sent to another post. My last official
act as a Moungwi was the dismissing of Youkeoma. Our differences
would not affect the success of a newcomer. We shook hands this
time, pleasantly, and he smiled. I asked him for no promises, and
preached him no sermon. He departed down the Cañon afoot, for his
hike of forty-odd miles. Quite likely he would stop that night with his
married daughter at the settlement of the Five Houses, a Christian
family, and the next night with Sackaletztewa on the Chimopovi cliffs.
He was too old to make the journey in true Hopi fashion, jogging
tirelessly. I venture that he did not visit his hereditary rival,
Tewaquaptewa, at the original stronghold of his people—Oraibi had
slipped too far from the traditions. But I would like to have witnessed
his entry into Hotevilla in the sunset, a tired old man, but steadfast in
spirit and unconquered, and to have heard the talk at that first all-
night conference of the ancients in the kiva.

In 1921 I visited the Agency; and lo! he was in the guardhouse


again. He was squatted on the floor, sifting a pan of flour for the
prison-mess, his old trade. He looked up, to recognize me with a
whimsical, not unwelcoming smile.

“Hello!” he said, “You back?”

When I saw him last, he was talking to Major-General Hugh L. Scott,


who had spent ten days listening to him ten years before. Youkeoma
was again reciting the legend [190]of the Hopi people. Many things
had happened in those wild and unreasonable ten years. The world
had suffered discord and upheaval; merciless war had lived abroad
and bitter pestilence at home. Nations had quite lost identity, and
individuals had become as chaff blown to bits in the terrible winds.
Scott had heard the great guns roar out across Flanders. Nearly
everything had changed except the Desert—and Youkeoma.

He was the same unwavering fanatic, “something nearly complete,”


a gnome-like creature that would have better fitted dim times in the
cavern cities of the Utah border, where his cliff-dwelling forbears built
and defended Betatakin, and Scaffold House, and the Swallow’s

You might also like