HANDBOOK OF MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION
THE ELSEVIER AND MICCAI SOCIETY BOOK SERIES
Advisory Board
Nicholas Ayache
James S. Duncan
Alex Frangi
Hayit Greenspan
Pierre Jannin
Anne Martel
Xavier Pennec
Terry Peters
Daniel Rueckert
Milan Sonka
Jay Tian
S. Kevin Zhou
Titles:
Balocco, A., et al., Computing and Visualization for Intravascular Imaging and
Computer Assisted Stenting, 9780128110188
Dalca, A.V., et al., Imaging Genetics, 9780128139684
Depeursinge, A., et al., Biomedical Texture Analysis, 9780128121337
Munsell, B., et al., Connectomics, 9780128138380
Pennec, X., et al., Riemannian Geometric Statistics in Medical Image Analysis,
9780128147252
Wu, G., and Sabuncu, M., Machine Learning and Medical Imaging, 9780128040768
Zhou S.K., Medical Image Recognition, Segmentation and Parsing, 9780128025819
Zhou, S.K., et al., Deep Learning for Medical Image Analysis, 9780128104088
Zhou, S.K., et al., Handbook of Medical Image Computing and Computer Assisted
Intervention, 9780128161760
Edited by
S. KEVIN ZHOU
Institute of Computing Technology
Chinese Academy of Sciences
Beijing, China
DANIEL RUECKERT
Imperial College London
London, United Kingdom
GABOR FICHTINGER
Queen’s University
Kingston, ON, Canada
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
Copyright © 2020 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or any information storage and retrieval system, without
permission in writing from the publisher. Details on how to seek permission, further information about the
Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center
and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other
than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our
understanding, changes in research methods, professional practices, or medical treatment may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any
information, methods, compounds, or experiments described herein. In using such information or methods they
should be mindful of their own safety and the safety of others, including parties for whom they have a professional
responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability
for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or
from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
ISBN: 978-0-12-816176-0
Contributors xvii
Acknowledgment xxvii
4. CAD in lung 91
Kensaku Mori
4.1. Overview 91
4.2. Origin of lung CAD 92
4.3. Lung CAD systems 92
4.4. Localized disease 93
10. Deep multilevel contextual networks for biomedical image segmentation 231
Hao Chen, Qi Dou, Xiaojuan Qi, Jie-Zhi Cheng, Pheng-Ann Heng
10.1. Introduction 231
10.2. Related work 233
10.3. Method 235
10.4. Experiments and results 237
10.5. Discussion and conclusion 244
Acknowledgment 244
References 245
12. Deformable models, sparsity and learning-based segmentation for cardiac MRI
based analytics 273
Dimitris N. Metaxas, Zhennan Yan
12.1. Introduction 273
16. Machine learning based imaging biomarkers in large scale population studies:
A neuroimaging perspective 379
Guray Erus, Mohamad Habes, Christos Davatzikos
16.1. Introduction 379
16.2. Large scale population studies in neuroimage analysis: steps towards dimensional
neuroimaging; harmonization challenges 382
16.3. Unsupervised pattern learning for dimensionality reduction of neuroimaging data 385
16.4. Supervised classification based imaging biomarkers for disease diagnosis 387
16.5. Multivariate pattern regression for brain age prediction 389
23. Deep learning: Generative adversarial networks and adversarial methods 547
Jelmer M. Wolterink, Konstantinos Kamnitsas, Christian Ledig, Ivana Išgum
23.1. Introduction 547
23.2. Generative adversarial networks 549
23.3. Adversarial methods for image domain translation 554
23.4. Domain adaptation via adversarial training 558
23.5. Applications in biomedical image analysis 559
23.6. Discussion and conclusion 568
References 570
33. Human–machine interfaces for medical imaging and clinical interventions 817
Roy Eagleson, Sandrine de Ribaupierre
33.1. HCI for medical imaging vs clinical interventions 817
33.2. Human–computer interfaces: design and evaluation 820
33.3. What is an interface? 821
33.4. Human outputs are computer inputs 822
33.5. Position inputs (free-space pointing and navigation interactions) 822
33.6. Direct manipulation vs proxy-based interactions (cursors) 823
33.7. Control of viewpoint 824
33.8. Selection (object-based interactions) 824
33.9. Quantification (object-based position setting) 825
33.10. User interactions: selection vs position, object-based vs free-space 826
33.11. Text inputs (strings encoded/parsed as formal and informal language) 828
33.12. Language-based control (text commands or spoken language) 828
33.13. Image-based and workspace-based interactions: movement and selection events 830
33.14. Task representations for image-based and intervention-based interfaces 833
33.15. Design and evaluation guidelines for human–computer interfaces: human inputs
are computer outputs – the system design must respect perceptual capacities and
constraints 834
33.16. Objective evaluation of performance on a task mediated by an interface 836
References 838
Acknowledgments 972
References 972
Index 1013
Contributors
Purang Abolmaesumi
University of British Columbia, Vancouver, BC, Canada
Sahar Ahmad
Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United
States
William S. Anderson
Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
Andrew J. Asman
Pandora Media Inc., Oakland, CA, United States
Avi Ben-Cohen
Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Medical
Image Processing Laboratory, Tel Aviv, Israel
Emad Boctor
Johns Hopkins University, Baltimore, MD, United States
Sebastian Bodenstedt
National Center for Tumor Diseases (NCT), Partner Site Dresden, Division of Translational
Surgical Oncology (TSO), Dresden, Germany
Elodie Breton
ICube UMR7357, University of Strasbourg, CNRS, Strasbourg, France
Xiaohuan Cao
Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
Aaron Carass
Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD,
United States
Dept. of Computer Science, The Johns Hopkins University, Baltimore, MD, United States
M. Jorge Cardoso
School of Biomedical Engineering & Imaging Sciences, King’s College London, London,
United Kingdom
Jose M. Castillo Tovar
Erasmus MC, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
Erasmus MC, Department of Medical Informatics, Rotterdam, the Netherlands
Guray Erus
Artificial Intelligence in Biomedical Imaging Lab (AIBIL), Center for Biomedical Image
Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania,
Philadelphia, PA, United States
M. Esposito
Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
Caroline Essert
ICube / Université de Strasbourg, Illkirch, France
Jingfan Fan
Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United
States
Aaron Fenster
Robarts Research Institute, Western University, London, ON, Canada
Gabor Fichtinger
Queen’s University, Kingston, ON, Canada
School of Computing, Queen’s University, Kingston, ON, Canada
Afshin Gangi
Department of Interventional Imaging, Strasbourg University Hospital (HUS), Strasbourg,
France
ICube UMR7357, University of Strasbourg, CNRS, Strasbourg, France
Julien Garnon
Department of Interventional Imaging, Strasbourg University Hospital (HUS), Strasbourg,
France
ICube UMR7357, University of Strasbourg, CNRS, Strasbourg, France
Kathleen Gilbert
Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New
Zealand
Ben Glocker
Department of Computing, Imperial College London, London, United Kingdom
Hayit Greenspan
Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Medical
Image Processing Laboratory, Tel Aviv, Israel
Mohamad Habes
Artificial Intelligence in Biomedical Imaging Lab (AIBIL), Center for Biomedical Image
Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania,
Philadelphia, PA, United States
Ilker Hacihaliloglu
Rutgers University, Piscataway, NJ, United States
Gregory D. Hager
Johns Hopkins University, Department of Computer Science, Baltimore, MD, United States
Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, MD,
United States
Kerstin Hammernik
Graz University of Technology, Institute of Computer Graphics and Vision, Graz, Austria
Department of Computing, Imperial College London, London, United Kingdom
Mattias P. Heinrich
Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
Pheng-Ann Heng
The Chinese University of Hong Kong, Department of Computer Science and Engineering,
Hong Kong, China
C. Hennersperger
Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
Bennet Hensen
Department of Diagnostical and Interventional Radiology, Hannover Medical School,
Hannover, Germany
Matthew Holden
School of Computing, Queen’s University, Kingston, ON, Canada
Yuankai Huo
Vanderbilt University, Electrical Engineering, Nashville, TN, United States
Maximilian Ilse
University of Amsterdam, Amsterdam Machine Learning Lab, Amsterdam, the Netherlands
Ivana Išgum
Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
Leo Joskowicz
The Hebrew University of Jerusalem, Jerusalem, Israel
Brendan F. Judy
Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
Urte Kägebein
Department of Diagnostical and Interventional Radiology, Hannover Medical School,
Hannover, Germany
Konstantinos Kamnitsas
Imperial College London, London, United Kingdom
Satyananda Kashyap
The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, IA, United
States
Peter Kazanzides
Computer Science, Johns Hopkins University, Baltimore, MD, United States
Stefan Klein
Erasmus MC, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
Erasmus MC, Department of Medical Informatics, Rotterdam, the Netherlands
Florian Knoll
New York University, Radiology, Center for Biomedical Imaging and Center for Advanced
Imaging Innovation and Research, New York, NY, United States
Ender Konukoglu
Computer Vision Laboratory, ETH Zürich, Zürich, Switzerland
Bennett A. Landman
Vanderbilt University, Electrical Engineering, Nashville, TN, United States
Andras Lasso
Queen’s University, Kingston, ON, Canada
School of Computing, Queen’s University, Kingston, Ontario, Canada
Christian Ledig
Imagen Technologies, New York, NY, United States
Kyungmoo Lee
The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, IA, United
States
Cristian A. Linte
Rochester Institute of Technology, Rochester, NY, United States
Le Lu
Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health,
Bethesda, MD, United States
Ilwoo Lyu
Vanderbilt University, Electrical Engineering, Nashville, TN, United States
Lena Maier-Hein
German Cancer Research Center (DKFZ), Div. Computer Assisted Medical Interventions
(CAMI), Heidelberg, Germany
Dimitris N. Metaxas
Rutgers University, Department of Computer Science, Piscataway, NJ, United States
Karol Miller
Intelligent Systems for Medicine Laboratory, Department of Mechanical Engineering, The
University of Western Australia, Perth, WA, Australia
Marc Modat
School of Biomedical Engineering & Imaging Sciences, King’s College London, London,
United Kingdom
Kensaku Mori
Nagoya University, Nagoya, Japan
Nikita Moriakov
Radboud University Medical Center, Department of Radiology and Nuclear Medicine,
Nijmegen, the Netherlands
Parvin Mousavi
Queen’s University, Kingston, ON, Canada
N. Navab
Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, MD, United States
Wiro J. Niessen
Erasmus MC, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
Erasmus MC, Department of Medical Informatics, Rotterdam, the Netherlands
Delft University of Technology, Faculty of Applied Sciences, Delft, the Netherlands
Sebastien Ourselin
School of Biomedical Engineering & Imaging Sciences, King’s College London, London,
United Kingdom
Bartłomiej W. Papież
Big Data Institute, University of Oxford, Oxford, United Kingdom
Yifan Peng
National Center for Biotechnology Information, National Library of Medicine, National
Institutes of Health, Bethesda, MD, United States
Dzung L. Pham
CNRM, Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda,
MD, United States
Beau Pontre
Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New
Zealand
Jerry L. Prince
Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD,
United States
Dept. of Computer Science, The Johns Hopkins University, Baltimore, MD, United States
Xiaojuan Qi
The Chinese University of Hong Kong, Department of Computer Science and Engineering,
Hong Kong, China
Eva Rothgang
Department of Medical Engineering, Technical University of Applied Sciences
Amberg-Weiden, Weiden, Germany
Snehashis Roy
CNRM, Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda,
MD, United States
Sebastian Schafer
Advanced Therapies, Siemens Healthineers, Forchheim, Germany
Dinggang Shen
Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United
States
Jeffrey H. Siewerdsen
Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United
States
Sang-Eun Song
Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United
States
Milan Sonka
The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, IA, United
States
Stefanie Speidel
National Center for Tumor Diseases (NCT), Partner Site Dresden, Division of Translational
Surgical Oncology (TSO), Dresden, Germany
Martijn P.A. Starmans
Erasmus MC, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
Erasmus MC, Department of Medical Informatics, Rotterdam, the Netherlands
P. Stefan
Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
Danail Stoyanov
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), UCL Computer
Sciences, London, United Kingdom
Carole H. Sudre
School of Biomedical Engineering & Imaging Sciences, King’s College London, London,
United Kingdom
Avan Suinesiaputra
Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New
Zealand
Russell H. Taylor
Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
Jonas Teuwen
Radboud University Medical Center, Department of Radiology and Nuclear Medicine,
Nijmegen, the Netherlands
Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, the
Netherlands
Jakub M. Tomczak
University of Amsterdam, Amsterdam Machine Learning Lab, Amsterdam, the Netherlands
J. Traub
Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
SurgicEye GmbH, Munich, Germany
Tamas Ungi
School of Computing, Queen’s University, Kingston, ON, Canada
Sebastian R. van der Voort
Erasmus MC, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
Erasmus MC, Department of Medical Informatics, Rotterdam, the Netherlands
Francisco Vasconcelos
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), UCL Computer
Sciences, London, United Kingdom
S. Swaroop Vedula
Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, MD,
United States
Jifke F. Veenland
Erasmus MC, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
Erasmus MC, Department of Medical Informatics, Rotterdam, the Netherlands
Frank K. Wacker
Department of Diagnostical and Interventional Radiology, Hannover Medical School,
Hannover, Germany
Xiaosong Wang
Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health,
Bethesda, MD, United States
Max Welling
University of Amsterdam, Amsterdam Machine Learning Lab, Amsterdam, the Netherlands
Adam Wittek
Intelligent Systems for Medicine Laboratory, Department of Mechanical Engineering, The
University of Western Australia, Perth, WA, Australia
Jelmer M. Wolterink
Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
Tao Xiong
Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
Daguang Xu
NVIDIA, Santa Clara, CA, United States
Zhoubing Xu
Siemens Healthineers, Princeton, NJ, United States
Zhennan Yan
SenseBrain, Princeton, NJ, United States
Dong Yang
Computer Science, Rutgers University, Piscataway, NJ, United States
Lin Yang
Department of Computer Information and Science Engineering (CISE), University of Florida,
Gainesville, FL, United States
University of Florida, Biomedical Engineering, Gainesville, FL, United States
Pew-Thian Yap
Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United
States
Alistair A. Young
Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New
Zealand
Boris Zevin
Department of Surgery, Queen’s University, Kingston, ON, Canada
Honghai Zhang
The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, IA, United
States
Zizhao Zhang
Department of Computer Information and Science Engineering (CISE), University of Florida,
Gainesville, FL, United States
Can Zhao
Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD,
United States
Yefeng Zheng
Tencent
S. Kevin Zhou
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Acknowledgment
We are deeply indebted to numerous people who made this book possible.
We thank all chapter authors who diligently finished their chapters and provided reviews on time and with quality. We extend our gratitude to the Elsevier team, especially the editor, Tim Pitts, and editorial project manager, Leticia M. Lima, who exhibited patience and provided help when needed while keeping the book production on schedule.
Kevin is grateful to his colleagues and friends, especially at the Institute of Computing Technology, Chinese Academy of Sciences (in particular, those at his MIRACLE group), including Dr. Xilin Chen, Ms. Xiaohong Wang, Dr. Hu Han, Dr. Li Xiao, Dr. Ruiping Wang, Dr. Shiguang Shan, Han Li, Ruocheng Gao, Qingsong Yao, Jun Li, Jiuwen Zhu, Zhiying Ke, Chao Huang, Zheju Li, etc., and at Z2 Sky Technologies, Inc., including Dr. Yuanyuan Lv, Zhiwei Cheng, Dr. Jingdan Zhang, Mr. Bo Wu, Haofu Liao, Wei-An Lin, Xiaohang Sun, Cheng Peng, Jiarui Zhang, Dr. Hai Su, Kara Luo, Yefeng Jiang, Xinwei Yu, etc., for their support and help. Kevin really appreciates Rama's mentoring and support for almost 20 years.
Finally, thanks to our families who shared the burden of this work.
CHAPTER 1
Image synthesis and superresolution in medical imaging
Contents
1.1. Introduction 1
1.2. Image synthesis 2
1.2.1 Physics-based image synthesis 2
1.2.2 Classification-based synthesis 3
1.2.3 Registration-based synthesis 5
1.2.4 Example-based synthesis 6
1.2.5 Scan normalization in MRI 9
1.3. Superresolution 12
1.3.1 Superresolution reconstruction 12
1.3.2 Single-image deconvolution 14
1.3.3 Example-based superresolution 15
1.4. Conclusion 18
References 19
1.1. Introduction
Two of the most common classes of image processing algorithms are image restoration and image enhancement. Image restoration is a process that seeks to recover an image that has been corrupted in some way. Unlike image reconstruction, which recovers images from observations in a different space, image restoration is an image-to-image operation. Image enhancement is an image-to-image process that creates new characteristics in images that were not present in the original or the observed image. Image enhancement is generally used to generate more visually pleasing images. In the context of medical imaging, both of these general processes are used either to improve images for visualization or to facilitate further analysis for clinical or scientific applications.
Image synthesis and superresolution are special cases of image restoration and enhancement. Image synthesis, where one creates new images whose acquisition contrast
or modality had not been acquired, can be thought of as belonging to either category, restoration or enhancement, depending on its use. Superresolution, where one creates images whose resolution is better than that which was observed, falls into the category of image enhancement and can be implemented using image synthesis techniques. Since image synthesis has become a very general approach with broad applications in medical imaging, including superresolution, we begin this chapter by describing image synthesis. With this background in hand, we move on to describe superresolution.
Figure 1.1 Sagittal images of a normal knee. Top left: Standard intermediate-weighted fast SE MR
image. Top middle: Fat-saturated T2-weighted fast SE MR image. Four additional images were acquired
with variable T2-weighted image contrasts to generate a T2 map of the tissue. Synthetic images with
arbitrary TE can then be created and four examples are shown here: top right, TE = 0 ms; bottom left,
TE = 6 ms; bottom middle, TE = 30 ms; bottom right, TE = 60 ms. (Credit: Andreisek et al. [3].)
this way is that because the synthetic images are produced mathematically, they need not correspond to physically realizable pulse sequences. This early work was extended to pulse sequence extrapolation [6,52], synthesis in which multiple acquired spin-echo images were used to generate inversion-recovery images with arbitrary inversion times. With this approach, one could carry out retrospective optimization of the MR image contrast between one or more tissues. An illustration of this approach using sagittal images of a normal male knee is shown in Fig. 1.1.
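The arbitrary-TE synthesis described above follows directly from the mono-exponential T2 decay model, S(TE) = S0 · exp(−TE/T2). A minimal sketch of this idea (the function name and the background guard are illustrative, not from the original work; all times share one unit, e.g., milliseconds):

```python
import numpy as np

def synthesize_at_te(s0, t2, te):
    """Synthesize a T2-weighted image at an arbitrary echo time `te`
    from a proton-density-like map `s0` and a per-voxel T2 map,
    using S(TE) = S0 * exp(-TE / T2)."""
    t2 = np.maximum(t2, 1e-6)  # guard against divide-by-zero in background voxels
    return s0 * np.exp(-te / t2)

# At TE = 0 the synthetic image reduces to the S0 map, matching the
# "TE = 0 ms" panel described in Fig. 1.1.
```

Because TE here is just a parameter of the model rather than of an acquisition, it can be set to values no scanner would use, which is exactly the flexibility noted above.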
Figure 1.2 Segmentation of a 2-point Dixon pulse sequence with derived (A) water and (B) fat images
producing the (C) linear attenuation map; (D) the linear attenuation map derived from CT shows bone
that cannot be observed from the Dixon pulse sequence. (From Martinez-Moller A., Souvatzoglou M.,
Delso G., et al., Tissue classification as a potential approach for attenuation correction in whole-body
PET/MRI: evaluation with PET/CT data, J. Nucl. Med. 50 (4) (2009) 520–526, Figure 3, © SNMMI.)
membership functions—to describe the partial volume contributions of the tissue types that comprise each voxel, and (iv) images are synthesized from the classification using physics-based models, which include the application of random noise and intensity inhomogeneities (if directed to do so) in order to produce very realistic neuroimages. The BrainWeb approach is limited in that images can only be synthesized on the available anatomies (around two dozen) that have been carefully prepared for this purpose.
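Steps (iii)–(iv) can be sketched as a weighted sum of tissue intensities under the membership (partial-volume) maps, with optional bias field and noise. This is a deliberately simplified illustration, not the actual BrainWeb simulator:

```python
import numpy as np

def synthesize_from_memberships(memberships, tissue_means,
                                noise_sigma=0.0, bias_field=None, seed=0):
    """memberships: (T, H, W) partial-volume fractions per tissue class,
    summing to 1 at each voxel; tissue_means: (T,) mean intensity per class.
    Returns a synthetic image with optional inhomogeneity and noise."""
    # weighted sum over tissue classes: sum_t mean_t * membership_t
    img = np.tensordot(tissue_means, memberships, axes=1)
    if bias_field is not None:   # multiplicative intensity inhomogeneity
        img = img * bias_field
    if noise_sigma > 0:          # additive Gaussian noise
        img = img + np.random.default_rng(seed).normal(0.0, noise_sigma, img.shape)
    return img
```

A voxel that is half gray matter and half white matter thus receives the average of the two class means, which is the partial-volume behavior the membership functions are meant to capture.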
Figure 1.3 Top left to right: the acquired CT, the pseudo-CT obtained using multiple atlases, the pseudo-CT obtained using the best atlas, and the pseudo-CT obtained using a pair of ultrashort echo-time (UTE) MR images. Bottom left to right: the acquired T1-weighted image and three differences between the pseudo-CTs (top row) and the true acquired CT. (Credit: Burgos et al. [8].)
vision—the concept of image analogies, which looks for an image B′ that is analogous to image B in the same way that image A′ is analogous to image A. Here, A = {A, A′} is an atlas (or textbook) in the same way that registration-based synthesis had an atlas. But rather than using deformable registration to register A (and A′) to B, this method finds a mapping (or regression) between features in A and intensities in A′ and then applies that same mapping to features in B in order to generate B′. Here, the atlas features provide examples that are used to determine the mapping used for synthesis, which is the reason for calling this basic approach example-based synthesis. Emerging from this fundamental notion, a host of methods have been proposed for image synthesis in medical imaging, differing in the nature of the atlas images, how the features are defined, and how the regression is learned.
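A toy version of this scheme, using brute-force nearest-neighbor patch regression as a stand-in for the learned regressions surveyed here (the patch features and function names are illustrative assumptions):

```python
import numpy as np

def extract_patches(img, radius=1):
    """Flattened (2r+1)x(2r+1) patches around each interior pixel of a 2D image."""
    h, w = img.shape
    k = 2 * radius + 1
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

def example_based_synthesis(atlas_a, atlas_a_prime, subject_b, radius=1):
    """For each subject patch (feature from B), find the nearest atlas patch
    (feature from A) and copy the corresponding center intensity from A'."""
    feats_a = extract_patches(atlas_a, radius)
    feats_b = extract_patches(subject_b, radius)
    centers = atlas_a_prime[radius:-radius, radius:-radius].ravel()
    out = np.empty(len(feats_b))
    for n, p in enumerate(feats_b):
        # nearest neighbor in patch space; real methods replace this
        # lookup with a learned regression
        out[n] = centers[np.argmin(np.sum((feats_a - p) ** 2, axis=1))]
    h, w = subject_b.shape
    return out.reshape(h - 2 * radius, w - 2 * radius)
```

When the subject image happens to match the atlas contrast A exactly, the output reproduces the interior of A′, which is the behavior the analogy A : A′ :: B : B′ demands.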
Random forest regression synthesis. Rather than defining an explicit mapping between features in the atlas, the previous approach defines an implicit mapping that is computed for each subject patch "on the fly." Machine learning methods, on the other
Figure 1.4 (A) Atlas T1-weighted SPGR and (B) its corresponding T1-weighted MPRAGE; (C) subject T1-weighted SPGR scan and (D) its T1-weighted MPRAGE image (only used for comparison). The atlas SPGR is deformably registered to the subject SPGR. This deformation is applied to the atlas MPRAGE to obtain (E) a synthetic subject MPRAGE; (F) synthetic MPRAGE image generated by MIMECS. (Credit: Roy et al. [73].)
Figure 1.5 Top left to right: the input MPRAGE image and the real T2-weighted image (shown only for
comparison). Bottom left to right: a registration fusion result (similar to Burgos et al. [8]), the MIMECS
result (see [73]), and the REPLICA synthetic T2-weighted image. (Credit: Jog et al. [45].)
showed that just three paired subject 3D images provided sufficient training data. It was also shown that patches with an additional "constellation" of average image values taken from three image scales provided improved performance when data from outside the brain (skin, fat, muscle) were synthesized. Jog et al. [45] also demonstrated how data from multiple acquisitions—e.g., T1-w, T2-w, and PD-w—could be used within the same framework to lead to improved synthesis of fluid-attenuated inversion recovery (FLAIR) images, readily revealing the white matter lesions that are usually present in multiple sclerosis (MS) patients. An example result from Jog et al. [45] (the method called REPLICA) is shown in Fig. 1.5.
ent brain and lesion volume measures were found to be significantly larger than the intrascanner variation. One common approach to address this issue is to perform harmonized acquisitions by restricting the field strength, scanner manufacturer, and pulse sequence protocol. However, Shinohara et al. [82] imaged a multiple sclerosis patient at seven different sites across North America, and even with careful matching of the MRI pulse sequences, brain volume variations across sites derived from several different segmentation algorithms were still significant.
A number of postprocessing approaches have been proposed for dealing with scan intensity differences, including statistical modeling [11,47] and alignment of image histograms using linear or piecewise-linear transformations [31,59,60,79,83]. Because these transformations alter the global histogram, local contrast differences are not addressed. Furthermore, there is an inherent assumption that the global histogram is similar in the two images being matched. When one image possesses pathology and the other does not, or when the extent of pathology differs markedly, this assumption is violated and can lead to inaccurate results. Example-based image synthesis approaches have also been proposed for contrast normalization [15,45,73]. Most approaches require paired training data that include examples of both the input acquisition and the output acquisition.
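A global piecewise-linear histogram alignment of the kind critiqued above can be sketched as quantile mapping (a generic illustration, not any one of the cited methods); note that it acts on the whole-image histogram, so it cannot repair local contrast differences and is biased when pathology loads differ between the two images:

```python
import numpy as np

def match_histogram(source, reference, n_landmarks=101):
    """Map `source` intensities so their quantiles align with those of
    `reference`, via a piecewise-linear transfer through quantile landmarks."""
    qs = np.linspace(0.0, 1.0, n_landmarks)
    src_landmarks = np.quantile(source, qs)   # monotone landmark intensities
    ref_landmarks = np.quantile(reference, qs)
    # piecewise-linear interpolation between corresponding landmarks
    return np.interp(source, src_landmarks, ref_landmarks)
```

With only two landmarks this reduces to a linear intensity rescaling; more landmarks give the piecewise-linear transformations referenced above.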
Fig. 1.6 shows an example of an example-based deep learning technique for paired harmonization [15]. In this work, scans from the same set of patients before and after a scanner hardware and software upgrade were acquired and used as training data. Because the scans took place within a short time span, anatomical changes were assumed to be negligible. The differing appearance of ventricular size is primarily due to partial volume averaging caused by thicker sections in the prior FLAIR sequence. The synthesis was based on a modified 2D U-net architecture [69] applied to three orthogonal views. Overall, the harmonized images show remarkable consistency, which was further corroborated by significantly improved stability in segmentation volumes [15].
In performing scan normalization, the requirement that the atlas images have the same scan properties as the input data can be difficult to fulfill, particularly in retrospective analyses where such paired data may never have been acquired. The PSI-CLONE (Pulse Sequence Information-based Contrast Learning On Neighborhood Ensemble) synthesis method [43] addresses this problem by first performing a pulse sequence regression step that adjusts for these differences. This is accomplished by combining physics-based synthesis with example-based synthesis. PSI-CLONE uses imaging equations to model the sequences and aligns the image intensities in the parameter space of the equations. A flow chart depicting the steps of PSI-CLONE is provided in Fig. 1.7. Rather than including a specific type of MRI scan, quantitative maps (denoted Aq in the figure) of the underlying nuclear magnetic resonance (NMR) parameters T1, T2, and PD are included within its atlas images, along with a set of desired output atlases. By estimating the pulse sequence parameters P of the input image, the NMR
parameters can be used to synthesize an atlas contrast that is similar to the subject image, denoted Anew. A patch-based regression [43,45] is then applied to map the original image to the desired output contrast.
Image synthesis and superresolution in medical imaging 11
Figure 1.6 Paired harmonization: (A) scan before acquisition upgrade, (B) scan after upgrade, (C) harmonized scan before upgrade, (D) harmonized scan after upgrade. (Credit: Dewey et al. [15].)
Figure 1.7 Flow chart for PSI-CLONE synthesis. An input subject image has its pulse sequence parameters estimated, which are used to create a customized atlas, Anew, from the atlas quantitative maps. This Anew is used to train a regression between the pulse sequence of the input subject image and the desired contrast. See the text for complete details. (Credit: Jog et al. [44].)
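The physics-based half of a PSI-CLONE-style pipeline rests on imaging equations that predict signal from quantitative maps. A common idealized spin-echo equation serves to illustrate how an atlas contrast can be synthesized for given pulse sequence parameters; the estimation of those parameters from the subject image, which the actual method performs, is omitted here:

```python
import numpy as np

def spin_echo_signal(pd_map, t1_map, t2_map, tr, te):
    """Idealized spin-echo imaging equation: synthesize an image with
    pulse sequence parameters (TR, TE) from quantitative PD/T1/T2
    maps.  A simplified sketch of the physics-based synthesis step;
    real sequences deviate from this textbook model."""
    # Saturation recovery term (T1) times transverse decay term (T2).
    return pd_map * (1.0 - np.exp(-tr / t1_map)) * np.exp(-te / t2_map)
```

Sweeping (TR, TE) moves the synthesized contrast between PD-, T1-, and T2-weighted appearances, which is what allows a customized atlas Anew to be matched to the subject's acquisition.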
1.3. Superresolution
Superresolution can be thought of as resolution enhancement (when image acquisition limits the resolution) or resolution restoration (when image transmission has led to a loss of data and consequently degraded resolution). To be clear, it refers to improvement of the underlying (physics-based) resolution, not the digital resolution of the image voxels. Zero-padding Fourier space (as is often done on the MR scanner itself), for example, is not superresolution since it only affects the spacing between the voxels that represent the image. Superresolution can be an approach that combines acquisition strategies with postprocessing, as in superresolution reconstruction [63], or an approach that is strictly limited to postprocessing, as in image deconvolution [50]. Computer vision researchers tend to refer to these processes as multiimage superresolution [30] and single-image superresolution, respectively. In this section, we consider superresolution developments from both historical and technical points of view and provide a few examples of the most important developments related to medical imaging.
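The distinction between interpolation and superresolution can be demonstrated directly: zero-padding k-space changes the voxel grid but leaves the spectrum empty beyond the original extent. A minimal numpy sketch:

```python
import numpy as np

def zero_pad_upsample(img, factor):
    """Sinc-interpolate a 2D image by zero-padding its Fourier
    spectrum.  Voxel spacing shrinks, but no new Fourier content is
    created, so this is interpolation, not superresolution."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    H, W = h * factor, w * factor
    Fpad = np.zeros((H, W), dtype=complex)
    y0, x0 = (H - h) // 2, (W - w) // 2
    Fpad[y0:y0 + h, x0:x0 + w] = F
    # Scale to preserve intensity after the larger inverse transform.
    return np.real(np.fft.ifft2(np.fft.ifftshift(Fpad))) * factor ** 2
```

Inspecting the spectrum of the upsampled image confirms that no frequency content exists beyond the original band: the physics-based resolution is unchanged.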
Figure 1.8 Illustration of the Fourier content for three hypothetical scans acquired in the axial, sagittal,
and coronal directions.
Figure 1.9 Example of superresolution reconstruction from multiple orientations: (A)–(C) axial, coronal, and sagittal views of the superresolution result (voxel size 0.5 × 0.5 × 0.5 mm) that used six rotated multislice image stacks and algebraic reconstruction technique (ART) reconstruction; (D)–(F) axial, coronal, and sagittal views of one sagittal multislice stack (voxel size 3 × 0.46875 × 0.46875 mm); (G)–(I) axial, coronal, and sagittal views of one coronal multislice stack (voxel size 0.46875 × 3 × 0.46875 mm). (Credit: Shilling et al. [81].)
Superresolution reconstruction (SRR) generally proceeds in three steps [63]: (i) registration of the acquired images; (ii) interpolation of these images
onto an HR grid; and (iii) deconvolution to remove blur and noise. Because of the size
of the problems, iterative algorithms have generally been required, and several variants
have been reported [19,38,64,65,84,97]. Plenge et al. [64] carried out an extensive comparison and evaluation of these methods, extending a previous comparison in [27]. They demonstrated that, given equal image acquisition time and signal-to-noise ratio, SRR can indeed improve resolution over direct HR acquisition. They also noted that slightly regularized solutions achieved the best results.
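The three-step SRR pipeline is usually solved iteratively. The toy sketch below implements plain gradient descent (iterative back-projection) for a forward model of integer shift, box blur, and decimation; the forward model, step size, and function name are all our own simplifications rather than any published variant:

```python
import numpy as np

def srr_backprojection(lr_images, shifts, factor, iters=50, step=0.5):
    """Toy iterative back-projection SRR: the forward model is an
    integer circular shift followed by box-blur decimation, and the HR
    estimate is refined by pushing back the LR residuals.  A sketch of
    the idea behind [30,63], not a published implementation."""
    h, w = lr_images[0].shape
    # Initialize with a nearest-neighbor upsampling of one LR image.
    hr = np.kron(lr_images[0], np.ones((factor, factor)))
    for _ in range(iters):
        grad = np.zeros_like(hr)
        for lr, (dy, dx) in zip(lr_images, shifts):
            sim = np.roll(hr, (-dy, -dx), axis=(0, 1))
            # Box blur + decimation in one step: average each block.
            sim = sim.reshape(h, factor, w, factor).mean(axis=(1, 3))
            resid = lr - sim
            # Adjoint of the forward model: spread residual back.
            up = np.kron(resid, np.ones((factor, factor))) / factor ** 2
            grad += np.roll(up, (dy, dx), axis=(0, 1))
        hr += step * grad
    return hr
```

Regularization, subpixel motion, and realistic PSFs are exactly where the published variants [19,38,64,65,84,97] differ from this sketch.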
Delbracio and Sapiro [14] developed a method to "side-step" the requirement to know the PSF and therefore avoid the ill-posed deconvolution step. Their method, called Fourier burst accumulation (FBA), exploits the fact that acquisitions by hand-held cameras tend to be blurred by hand "jitters" that leave high resolution in one direction (orthogonal to the jitter direction) and lower resolution in the others. When multiple snapshots are acquired, each acquired image will preserve the Fourier coefficients in one dominant direction and degrade them in the other directions (by reducing their Fourier magnitudes). Combining all acquired images is therefore conceptually simple: take the Fourier transform of all images, pick the maximum value at each Fourier frequency, and inverse transform the result. Two recent superresolution approaches for magnetic resonance images acquired with higher in-plane resolution and lower through-plane resolution have exploited this idea in a hybrid scheme that also uses example-based superresolution (see descriptions below) [42,95,96].
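The per-frequency selection at the heart of FBA is easy to state in code. The sketch below uses hard argmax selection; the published method [14] uses a smooth magnitude-weighted average, of which this is the limiting case:

```python
import numpy as np

def fourier_burst_accumulation(images):
    """Simplified Fourier burst accumulation [14]: at every frequency,
    keep the coefficient from whichever input image has the largest
    Fourier magnitude there, then inverse transform."""
    specs = np.stack([np.fft.fft2(im) for im in images])
    best = np.argmax(np.abs(specs), axis=0)
    fused = np.take_along_axis(specs, best[None], axis=0)[0]
    return np.real(np.fft.ifft2(fused))
```

If two inputs carry complementary halves of a spectrum, the fused image recovers both, which is the behavior exploited for multiorientation MR stacks.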
When the PSF is not known precisely, deconvolution efforts can produce severe artifacts, most notably so-called ringing artifacts.
In MR imaging applications, the PSF is generally assumed to be known, since it can be inferred in the in-plane directions from the Fourier resolutions implied by the readout and phase-encode extents, and in the through-plane direction from the slice-selection pulse in 2D MRI or from the three Fourier extents in 3D MRI. Note, however, that the Fourier extents of phase encodes and readouts cause a complete absence of Fourier content outside of these ranges. In this case, deconvolution as it is conventionally thought of is impossible, as Fourier content is not just attenuated but entirely suppressed. As a result, while deconvolution might be theoretically possible in MRI (because of some apodization due to T2* effects), it is truly applicable only to restore through-plane resolution in 2D MRI—and even then aliasing is the major problem, not resolution [27,96].
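The impossibility of conventional deconvolution beyond the sampled Fourier extent can be seen with a Wiener-style inverse filter: wherever the transfer function is exactly zero, no choice of regularization recovers the lost content. A small sketch (the transfer function and test signal are illustrative):

```python
import numpy as np

def wiener_deconvolve(blurred, transfer, eps=1e-3):
    """Wiener-style inverse filter for a known transfer function.
    Where the transfer function is exactly zero (outside the sampled
    Fourier extent), the regularized inverse returns zero, so the
    missing content cannot be restored."""
    B = np.fft.fft2(blurred)
    H = transfer
    return np.real(np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + eps)))
```

Frequencies inside the acquired band come back essentially intact, while components beyond it are gone regardless of the regularization weight.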
Figure 1.10 Results of isotropic voxel upsampling. LR data (voxel size 2 × 2 × 2 mm) was upsampled to a 1 × 1 × 1 mm voxel size using nearest neighbor (NN) interpolation (upper right), B-splines (lower left), and nonlocal means superresolution (lower right). (Credit: Manjon et al. [56].)
This learnt mapping is then applied to the original subject image features—serving as image B here—creating features and an image B′ at (presumably) even higher resolution than the original image. The nonlocal MRI upsampling method of Manjon et al. [56] uses, at its core, the same philosophy, though it exploits a nonlocal approach to identifying the features that are used to map the original image to a higher-resolution version. An example of the results of this approach is shown in Fig. 1.10. The problem with these methods is that the learnt mappings do not truly describe the relationship between the acquired image and a higher-resolution image; they simply assume that the same relationship between features exists at multiple scales, which may not be true in general.
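The scale-invariance philosophy criticized here can be made concrete: training pairs are built from the image and a downsampled copy of itself, and the learnt patch mapping is then applied one scale up. The sketch below uses nearest-neighbor patch lookup in place of the regressions used in practice, and all names are ours:

```python
import numpy as np

def self_example_sr(img, factor=2):
    """Sketch of example-based self-superresolution: learn a patch
    mapping between a downsampled copy of the image and the image
    itself, then apply it to the image to produce a (hallucinated)
    higher-resolution version.  Nearest-neighbor lookup stands in for
    the regressions used in practice."""
    h, w = img.shape
    lr = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    lr_up = np.kron(lr, np.ones((factor, factor)))  # back on the original grid
    # Training pairs: interpolated-LR patch -> true HR center pixel.
    feats = np.array([lr_up[i - 1:i + 2, j - 1:j + 2].ravel()
                      for i in range(1, h - 1) for j in range(1, w - 1)])
    targets = np.array([img[i, j]
                        for i in range(1, h - 1) for j in range(1, w - 1)])
    # Apply the learnt mapping one scale up: img now plays the LR role.
    up = np.kron(img, np.ones((factor, factor)))
    out = up.astype(float)
    H, W = out.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            q = up[i - 1:i + 2, j - 1:j + 2].ravel()
            out[i, j] = targets[np.argmin(((feats - q) ** 2).sum(axis=1))]
    return out
```

The method's only knowledge of high-frequency content comes from the image itself, which is precisely the cross-scale assumption that may fail in general.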
A method called brain hallucination has been proposed as an alternative to the methods that assume scale–space invariance [70,71]. This approach exploits the fact that it is common in MR imaging protocols to acquire both an LR T2w image and an HR T1w image. T2-weighted images have a tissue contrast that depends largely on the transverse magnetization, while T1w images have a tissue contrast that depends largely on the longitudinal magnetization. This dichotomy in resolutions exists because T1w images can, in general, be acquired at higher resolution, higher SNR, and in shorter time than either PDw or T2w images.
Figure 1.11 Example of "brain hallucination" reconstruction results. Left to right: ground truth, trilinear interpolation, cubic B-spline, brain hallucination. (Credit: Rousseau [71].)
Brain hallucination runs the HR T1w image through a low-pass filter and learns a mapping—also using a nonlocal approach—between the LR features and the original image; this forms the A and A′ image pair in the notation of image analogies. The learnt mapping is then applied to the T2w image to find an HR version of it; these two form the B and B′ images in the notation of image analogies. An example showing a close-up of a superresolution T2w image is shown in Fig. 1.11. This approach is arguably better in that it uses an actual HR image, but it might not be accurate because it assumes that the relationship between T1w features across scales is the same as that for T2w features. Thus, it may introduce features that are intrinsic to one modality and not to the other.
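The image-analogies structure of brain hallucination (A: low-passed T1w, A′: sharp T1w, B: LR T2w, B′: hallucinated HR T2w) can be sketched with a patch-lookup idiom. Nearest-neighbor search stands in for the nonlocal weighting of [70,71], and the block-average low-pass filter is an arbitrary illustrative choice:

```python
import numpy as np

def hallucinate(hr_t1, lr_t2, patch=3):
    """Brain-hallucination-style sketch: low-pass the HR T1w image,
    learn a patch mapping from the blurred version back to the sharp
    original (A -> A'), then apply that mapping to the LR T2w image
    (B -> B').  Illustrative only."""
    half = patch // 2
    h, w = hr_t1.shape
    # Simple low-pass: 2x2 block averaging replicated back to the grid.
    lp = np.kron(hr_t1.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)),
                 np.ones((2, 2)))
    feats = np.array([lp[i - half:i + half + 1, j - half:j + half + 1].ravel()
                      for i in range(half, h - half)
                      for j in range(half, w - half)])
    targets = np.array([hr_t1[i, j]
                        for i in range(half, h - half)
                        for j in range(half, w - half)])
    out = lr_t2.astype(float).copy()
    for i in range(half, lr_t2.shape[0] - half):
        for j in range(half, lr_t2.shape[1] - half):
            q = lr_t2[i - half:i + half + 1, j - half:j + half + 1].ravel()
            out[i, j] = targets[np.argmin(((feats - q) ** 2).sum(axis=1))]
    return out
```

Any T1w-specific detail retrieved this way is transplanted into the T2w output, which is the cross-modality risk described above.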
The final approach we consider here is the synthetic multiorientation resolution enhancement (SMORE) approach described in [42,95,96]. This approach exploits the fact that many scans are acquired with elongated voxels having better resolution in the in-plane than the through-plane direction. The method first applies a low-pass filter in one of the in-plane directions and learns a regression between that LR image and the original HR image. Anchored adaptive neighborhood (patch-based) regression [85] was used in [42], while the deep network EDSR [54] was used in [95,96], to carry out this regression. The regression is then applied to the original image in the through-plane direction to enhance its resolution. Multiple orientations—i.e., rotations about the through-plane axis—are used to establish additional training data for the regressions as well as more complete Fourier coverage, and the resulting Fourier volumes are combined using Fourier burst accumulation [14]. When 2D MR images are acquired and the slice thickness is equal to the slice separation, aliasing is also present [27].
Figure 1.12 Experiment on a 0.15 × 0.15 × 1 mm LR marmoset PD MRI, showing axial views of (A) cubic B-spline interpolated image, (B) SMORE without antialiasing, (C) SMORE with antialiasing, and (D)–(F) sagittal views of the same. (Credit: Zhao et al. [96].)
The
SMORE method is also able to learn a relationship between aliased and nonaliased versions of the image and can thus perform antialiasing as well as superresolution [96]. An example demonstrating the performance of SMORE is shown in Fig. 1.12. One limitation of this method is that the final resolution is limited by the in-plane resolution.
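SMORE's central trick, degrading the sharp in-plane axis to build training pairs and then applying the learnt restoration along the blurry through-plane axis, can be sketched in 2D. A ridge regression stands in for the anchored regression [85] or EDSR network [54] of the actual method; the blur kernel and patch size are illustrative assumptions:

```python
import numpy as np

def smore_sketch(img, kernel=(0.25, 0.5, 0.25), patch=5, lam=1e-3):
    """SMORE-style self-training sketch on a 2D slice: the horizontal
    axis stands in for the sharp in-plane direction and the vertical
    axis for the blurred through-plane direction.  A 1-D ridge
    regression is trained to undo a modeled blur applied along the
    sharp axis, then applied along the blurry axis."""
    k = np.asarray(kernel)
    half = patch // 2
    # Degrade the sharp (horizontal) axis to create training pairs.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    X, y = [], []
    for i in range(img.shape[0]):
        for j in range(half, img.shape[1] - half):
            X.append(blurred[i, j - half:j + half + 1])
            y.append(img[i, j])
    X, y = np.asarray(X), np.asarray(y)
    # Ridge-regularized least squares for the restoration filter.
    w = np.linalg.solve(X.T @ X + lam * np.eye(patch), X.T @ y)
    # Apply the learnt restoration along the blurry (vertical) axis.
    out = np.asarray(img, dtype=float).copy()
    for j in range(img.shape[1]):
        for i in range(half, img.shape[0] - half):
            out[i, j] = img[i - half:i + half + 1, j] @ w
    return out
```

Rotated reorientations and Fourier burst accumulation, as described above, would then combine several such restored volumes into one.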
1.4. Conclusion
Both image synthesis and superresolution have improved dramatically since the incorporation of machine learning methods and the use of examples to drive the methods. Significant questions remain, however, about the limits of their ultimate capability and their validity in any given untested scenario, particularly in the case of pathological data, where representative training examples may be sparse or nonexistent. Methods to compute and display measures of reliability are being developed, and these developments will help to provide additional confidence in these approaches for both scientific and clinical use. It is truly an exciting time to join the medical imaging research community, as new methods are emerging and capabilities previously thought impossible are being realized.
and although work has been begun, yet we have but little expectation of its accomplishment in a short time; [Sidenote: School at Robeson]
That there is a ... school at Maiden Creek kept by Thomas
Pearson, a Friend, who is at present engaged for a year, has
15 scholars entered for that time and 8 quarterly ditto scholars
at the rate of 40/ per annum for each, which is under the
direction of three overseers chosen by the employers. The
school house built on a piece of ground belonging to a Friend
which contains about five acres. There is likewise a school at
Reading kept by Benjamin Parks and wife in their own house;
they are members of the society and have about 50 scholars;
such as spell at 7/6 and others at 10/ per quarter but is not
under the direction of the meeting, nor are there any
overseers chosen to superintend the same, yet we are of the
mind a school established there under proper regulations and
care of the monthly meeting, might be useful and deserves
encouragement.
The schools within the verge of Robeson Monthly Meeting
are kept by a person who inclines to go to our meetings, has
about 20 scholars, amounting to about £34 per annum.
Endeavors are also used to get a school established there
upon a better plan and near the direction of the yearly
meeting, but how far they may be successful is at present
unknown. We do therefore recommend the whole to the notice of the Monthly Meeting as a matter wherein friends are deeply interested.
Which we submit to the Meeting.
Amos Lee, Thomas Lightfoot, Samuel Hughes, Fannie
Ambree, Owen Hughes, (which was approved by the Monthly
Meeting, and decided that the substance be made a report to
the Quarterly Meeting—The Committee to be continued to the
service of Schools and report in the future).[339]
Maiden Creek was at this time (1784) making earnest efforts to meet the standards set by the general meeting. [Sidenote: Maiden Creek secures land for school] In the eleventh month they requested a number of persons to be named to whom they might give a deed of trust for the ground agreed upon for the use of their school.[340] [Sidenote: Attempt to establish school at Reading]
Three were suggested and the deed and
declaration of trust accordingly drawn up. Efforts in the meantime
had been made towards establishing a school at Reading and a
committee to conduct a subscription for that purpose named.[341]
Help was solicited from the yearly meeting, but James Pemberton
answered for that body that there was no money to be spared at the
time, so Reading was advised to build such a house as their
circumstances would permit.[342] Near the close of 1787 those
having direct charge thereof made the following report of their
progress:
[Sidenote: Committee report on Reading school]
We the committee appointed to have the school education of youth under care, have given close attention to a school proposed to be opened in a short time at Reading by Caleb Johnson, in a
house now in building by Friends there, and nearly finished,
which we are of the mind should be under particular care and
direction of the monthly meeting; and that it may be well that a
committee be thereby appointed to superintend and monthly
to visit said school; we have also drawn up and agreed on
certain rules to be observed and attended to by the
employers, master and scholars concerned therein for the
regulation and well ordering thereof: which we have ready for
the examination and inspection of the monthly meeting if
thought necessary. All which we submit thereto. Signed on
behalf of the committee, Francis Parvin.... Which minute
being read was allowed of and it was directed that a copy
thereof be kept in open view in said school and that the
original be lodged among the meeting papers; Benjamin
Pearson, Samuel Jackson, John Mears, Francis Parvin,
Johannes Lee, Jr., and James Iddings are appointed to have
the said school under care and visit it once a month or oftener
as necessity may require and report of their care. The former
committee is continued.[343]
After the school had been in progress two years, Samuel Jackson reported that it "appeared to be in an increasing way,"[344] but its prosperity was not to be long continued. [Sidenote: School discontinued] In 1795 it was reported "discontinued,"[345] and
no reason assigned for it excepting “the situation of the Friends
there” which, taking into consideration the shortage of funds when it
was begun, we may infer, had reference to the financial situation.
The action of the monthly meeting in regard to it was left entirely to
their own judgment.[346]
SUMMARY
[Sidenote: Scope of chapter]
In this chapter we have considered the schools of Philadelphia (city and county), and also those at Exeter Monthly Meeting, which belonged to the Philadelphia Quarter.
[Sidenote: Education to be function of government]
Education in the Quaker colony was initially provided for in the instrument of government, drawn up before the Proprietary left England; in accord with said provisions the first school (Flower's) was set up by the Council in 1683. [Sidenote: First school] Thereafter, however, the initiative was usually taken by the Quaker meeting, which in 1689 set up a school and in 1697 applied for a charter under the laws of the province. [Sidenote: School established by monthly meeting] This petition was granted and Penn gave the first charter in 1701. Later charters, in 1708 and 1711, granted extended privileges; by the last one the body of overseers was made self-perpetuating, and thus as independent of the meeting as they wished to be. [Sidenote: Overseers made independent] The letter said to have been written to Thomas Lloyd, which credits Penn with suggesting the school of 1689, has not yet been discovered.
[Sidenote: Earliest masters and mistresses]
The earliest masters were Keith, Makin, Pastorius, and Cadwalader. Mistresses were mentioned in connection with the schools from about 1699, Olive Songhurst being the first one named. Salaries were not high and seem in some cases to have hardly sufficed for the family of the master; increases were made upon complaint. Extra duties for the teacher included keeping charge of the boys and girls in meeting. [Sidenote: Growth of system] From 1689 to 1779 the system grew from employing one teacher to requiring nine. In 1784 ten were reported.
[Sidenote: Means of support]
Philadelphia Friends' schools were first supported by (1) rates and (2) subscriptions, while (3) legacies and special gifts soon came to form a considerable item in their support. Bequests were also a factor in the support of the Negro School. Funds were occasionally raised by bond issues, and derived from tenements built on school property.
[Sidenote: Place of first schools]
Schools were first held in rented property and in the meeting house, but in 1698 steps were taken to purchase property of Lionell Brittain for the use of schools. [Sidenote: Property by purchase and gift] Property was received as a gift from Samuel Carpenter in 1701. The first record of a schoolhouse was the one to be begun in 1701. [Sidenote: Overseers more independent] In accord with their charter rights the power and independence of the overseers increased. In 1725 the monthly meeting conveyed to them all money and the titles for all school property. The Negro School was provided with a building in 1771. The end of the century is marked by the establishment by the yearly meeting of a Boarding School at Westtown in Chester County.
[Sidenote: Byberry]
The exact date of Byberry's first school is not determined; but it must have been early, since Richard Brockden is reported to have been schoolmaster there in 1711. School activity, however, seems to have increased greatly near the middle of the century. The school was under the care of a standing committee, which was to visit schools every six weeks and make two reports thereon each year. Poor children were schooled by the trustees of the school funds.
[Sidenote: Germantown]
Germantown school began in 1702, though perhaps an evening school existed before that date. Pastorius continued in this school as master, at least until 1718. The official language used in the school was probably English. The names of the first patrons were all German; a large number of English names among them in 1708 is an indication of how the school and its master were regarded.
[Sidenote: Exeter Monthly]
In 1758 youths' meetings were established by Exeter, but no school committee was appointed until 1778. This committee accomplished nothing and made no report of value. [Sidenote: Maidencreek, Reading, Robeson] By a report of 1784, Maidencreek, Reading, and Robeson were credited with one school each, which measured up in some ways to the desired standards. Exeter had none. The Reading School was discontinued in 1795.
The total number of schools reported at Philadelphia,
Germantown, Byberry, and Exeter monthly meeting, was fifteen.
CHAPTER V
SCHOOLS OF BUCKS COUNTY
And in 1780:
to inspect into the state of such schools as are now kept and
where it may be necessary, to promote others,