Marleen de Bruijne · Philippe C. Cattin ·
Stéphane Cotin · Nicolas Padoy ·
Stefanie Speidel · Yefeng Zheng ·
Caroline Essert (Eds.)

LNCS 12901

Medical Image Computing
and Computer Assisted
Intervention – MICCAI 2021
24th International Conference
Strasbourg, France, September 27 – October 1, 2021
Proceedings, Part I
Lecture Notes in Computer Science 12901

Founding Editors
Gerhard Goos
Karlsruhe Institute of Technology, Karlsruhe, Germany
Juris Hartmanis
Cornell University, Ithaca, NY, USA

Editorial Board Members


Elisa Bertino
Purdue University, West Lafayette, IN, USA
Wen Gao
Peking University, Beijing, China
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Gerhard Woeginger
RWTH Aachen, Aachen, Germany
Moti Yung
Columbia University, New York, NY, USA
More information about this subseries at http://www.springer.com/series/7412
Marleen de Bruijne · Philippe C. Cattin ·
Stéphane Cotin · Nicolas Padoy ·
Stefanie Speidel · Yefeng Zheng ·
Caroline Essert (Eds.)

Medical Image Computing
and Computer Assisted
Intervention – MICCAI 2021
24th International Conference
Strasbourg, France, September 27 – October 1, 2021
Proceedings, Part I

Editors

Marleen de Bruijne
Erasmus MC - University Medical Center Rotterdam
Rotterdam, The Netherlands
University of Copenhagen
Copenhagen, Denmark

Philippe C. Cattin
University of Basel
Allschwil, Switzerland

Stéphane Cotin
Inria Nancy Grand Est
Villers-lès-Nancy, France

Nicolas Padoy
ICube, Université de Strasbourg, CNRS
Strasbourg, France

Stefanie Speidel
National Center for Tumor Diseases (NCT/UCC)
Dresden, Germany

Yefeng Zheng
Tencent Jarvis Lab
Shenzhen, China

Caroline Essert
ICube, Université de Strasbourg, CNRS
Strasbourg, France

ISSN 0302-9743 ISSN 1611-3349 (electronic)


Lecture Notes in Computer Science
ISBN 978-3-030-87192-5 ISBN 978-3-030-87193-2 (eBook)
https://doi.org/10.1007/978-3-030-87193-2
LNCS Sublibrary: SL6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics

© Springer Nature Switzerland AG 2021


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, expressed or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The 24th edition of the International Conference on Medical Image Computing and
Computer Assisted Intervention (MICCAI 2021) has for the second time been placed
under the shadow of COVID-19. Complicated situations due to the pandemic and
multiple lockdowns have affected our lives during the past year, sometimes perturbing
the researchers' work, but also motivating an extraordinary dedication from many of our
colleagues, and significant scientific advances in the fight against the virus. After
another difficult year, most of us were hoping to be able to travel and finally meet in
person at MICCAI 2021, which was supposed to be held in Strasbourg, France.
Unfortunately, due to the uncertainty of the global situation, MICCAI 2021 had to be
moved again to a virtual event that was held over five days from September 27 to
October 1, 2021. Taking advantage of the experience gained last year and of the
fast-evolving platforms, the organizers of MICCAI 2021 redesigned the schedule and
the format. To offer the attendees both a strong scientific content and an engaging
experience, two virtual platforms were used: Pathable for the oral and plenary sessions
and SpatialChat for lively poster sessions, industrial booths, and networking events in
the form of interactive group video chats.
These proceedings of MICCAI 2021 showcase all 531 papers that were presented at
the main conference, organized into eight volumes in the Lecture Notes in Computer
Science (LNCS) series as follows:
• Part I, LNCS Volume 12901: Image Segmentation
• Part II, LNCS Volume 12902: Machine Learning 1
• Part III, LNCS Volume 12903: Machine Learning 2
• Part IV, LNCS Volume 12904: Image Registration and Computer Assisted
Intervention
• Part V, LNCS Volume 12905: Computer Aided Diagnosis
• Part VI, LNCS Volume 12906: Image Reconstruction and Cardiovascular Imaging
• Part VII, LNCS Volume 12907: Clinical Applications
• Part VIII, LNCS Volume 12908: Microscopic, Ophthalmic, and Ultrasound
Imaging
These papers were selected after a thorough double-blind peer review process. We
followed the example set by past MICCAI meetings, using Microsoft’s Conference
Management Toolkit (CMT) for paper submission and peer reviews, with support from
the Toronto Paper Matching System (TPMS), to partially automate paper assignment to
area chairs and reviewers, and from iThenticate to detect possible cases of plagiarism.
Following a broad call to the community we received 270 applications to become an
area chair for MICCAI 2021. From this group, the program chairs selected a total of 96
area chairs, aiming for diversity — MIC versus CAI, gender, geographical region, and
a mix of experienced and new area chairs. Reviewers were also recruited via an open
call for volunteers from the community (288 applications, of which 149 were selected
by the program chairs) as well as by re-inviting past reviewers, leading to a total of
1340 registered reviewers.
We received 1630 full paper submissions after an original 2667 intentions to submit.
Four papers were rejected without review because of concerns of (self-)plagiarism and
dual submission and one additional paper was rejected for not adhering to the MICCAI
page restrictions; two further cases of dual submission were discovered and rejected
during the review process. Five papers were withdrawn by the authors during review
and after acceptance.
The review process kicked off with a reviewer tutorial and an area chair meeting to
discuss the review process, criteria for MICCAI acceptance, how to write a good
(meta-)review, and expectations for reviewers and area chairs. Each area chair was
assigned 16–18 manuscripts for which they suggested potential reviewers using TPMS
scores, self-declared research area(s), and the area chair’s knowledge of the reviewers’
expertise in relation to the paper, while conflicts of interest were automatically avoided
by CMT. Reviewers were invited to bid for the papers for which they had been
suggested by an area chair or which were close to their expertise according to TPMS.
Final reviewer allocations via CMT took account of reviewer bidding, prioritization of
area chairs, and TPMS scores, leading to on average four reviews performed per person
by a total of 1217 reviewers.
Following the initial double-blind review phase, area chairs provided a meta-review
summarizing key points of reviews and a recommendation for each paper. The program
chairs then evaluated the reviews and their scores, along with the recommendation from
the area chairs, to directly accept 208 papers (13%) and reject 793 papers (49%); the
remainder of the papers were sent for rebuttal by the authors. During the rebuttal phase,
two additional area chairs were assigned to each paper. The three area chairs then
independently ranked their papers, wrote meta-reviews, and voted to accept or reject
the paper, based on the reviews, rebuttal, and manuscript. The program chairs checked
all meta-reviews, and in some cases where the difference between rankings was high or
comments were conflicting, they also assessed the original reviews, rebuttal, and
submission. In all other cases a majority voting scheme was used to make the final
decision. This process resulted in the acceptance of a further 325 papers for an overall
acceptance rate of 33%.
Acceptance rates were the same between medical image computing (MIC) and
computer assisted interventions (CAI) papers, and slightly lower where authors
classified their paper as both MIC and CAI. Distribution of the geographical region of
the first author as indicated in the optional demographic survey was similar among
submitted and accepted papers.

New this year was the requirement to fill out a reproducibility checklist when
submitting an intention to submit to MICCAI, in order to stimulate authors to think
about what aspects of their method and experiments they should include to allow others
to reproduce their results. Papers that included an anonymous code repository and/or
indicated that the code would be made available were more likely to be accepted. From
all accepted papers, 273 (51%) included a link to a code repository with the
camera-ready submission.
Another novelty this year was that we decided to make the reviews, meta-reviews, and
author responses for accepted papers available on the website. We hope the community
will find this a useful resource.
The outstanding program of MICCAI 2021 was enriched by four exceptional
keynote talks given by Alyson McGregor, Richard Satava, Fei-Fei Li, and Pierre
Jannin, on hot topics such as gender bias in medical research, clinical translation to
industry, intelligent medicine, and sustainable research. This year, as in previous years,
high-quality satellite events completed the program of the main conference: 28
workshops, 23 challenges, and 14 tutorials, not to mention the increasingly
successful plenary events, such as the Women in MICCAI (WiM) meeting, the MICCAI
Student Board (MSB) events, the 2nd Startup Village, the MICCAI-RSNA panel, and
the first “Reinforcing Inclusiveness & diverSity and Empowering MICCAI” (or
RISE-MICCAI) event.
MICCAI 2021 has also seen the first edition of CLINICCAI, the clinical day of
MICCAI. Organized by Nicolas Padoy and Lee Swanstrom, this new event will
hopefully help bring the scientific and clinical communities closer together, and foster
collaborations and interaction. A common keynote connected the two events. We hope
this effort will be pursued in the next editions.
We would like to thank everyone who has contributed to making MICCAI 2021 a
success. First of all, we sincerely thank the authors, area chairs, reviewers, and session
chairs for their dedication and for offering the participants and readers of these
proceedings content of exceptional quality. Special thanks go to our fantastic submission
platform manager Kitty Wong, who has been a tremendous help throughout the entire
process, from reviewer and area chair selection and paper submission through the
review process to the preparation of these proceedings. We also thank our very
efficient team of satellite
events chairs and coordinators, led by Cristian Linte and Matthieu Chabanas: the
workshop chairs, Amber Simpson, Denis Fortun, Marta Kersten-Oertel, and Sandrine
Voros; the challenges chairs, Annika Reinke, Spyridon Bakas, Nicolas Passat, and
Ingerid Reinertsen; and the tutorial chairs, Sonia Pujol and Vincent Noblet, as well as
all the satellite event organizers for the valuable content added to MICCAI. Our special
thanks also go to John Baxter and his team who worked hard on setting up and
populating the virtual platforms, to Alejandro Granados for his valuable help and
efficient communication on social media, and to Shelley Wallace and Anna Van Vliet
for marketing and communication. We are also very grateful to Anirban Mukhopadhyay
for his management of the sponsorship, and of course many thanks to the numerous
sponsors who supported the conference, often with continuous engagement over many
years. This year again, our thanks go to Marius Linguraru and his team who supervised
a range of actions to help and promote career development, among which were the
mentorship program and the Startup Village. And last but not least, our wholehearted
thanks go to Mehmet and the wonderful team at Dekon Congress and Tourism for their
great professionalism and reactivity in the management of all logistical aspects of the
event.
Finally, we thank the MICCAI society and the Board of Directors for their support
throughout the years, starting with the first discussions about bringing MICCAI to
Strasbourg in 2017.
We look forward to seeing you at MICCAI 2022.

September 2021 Marleen de Bruijne


Philippe Cattin
Stéphane Cotin
Nicolas Padoy
Stefanie Speidel
Yefeng Zheng
Caroline Essert
Organization

General Chair
Caroline Essert Université de Strasbourg, CNRS, ICube, France

Program Chairs
Marleen de Bruijne Erasmus MC Rotterdam, The Netherlands,
and University of Copenhagen, Denmark
Philippe C. Cattin University of Basel, Switzerland
Stéphane Cotin Inria, France
Nicolas Padoy Université de Strasbourg, CNRS, ICube, IHU, France
Stefanie Speidel National Center for Tumor Diseases, Dresden, Germany
Yefeng Zheng Tencent Jarvis Lab, China

Satellite Events Coordinators


Cristian Linte Rochester Institute of Technology, USA
Matthieu Chabanas Université Grenoble Alpes, France

Workshop Team
Amber Simpson Queen’s University, Canada
Denis Fortun Université de Strasbourg, CNRS, ICube, France
Marta Kersten-Oertel Concordia University, Canada
Sandrine Voros TIMC-IMAG, INSERM, France

Challenges Team
Annika Reinke German Cancer Research Center, Germany
Spyridon Bakas University of Pennsylvania, USA
Nicolas Passat Université de Reims Champagne-Ardenne, France
Ingerid Reinertsen SINTEF, NTNU, Norway

Tutorial Team
Vincent Noblet Université de Strasbourg, CNRS, ICube, France
Sonia Pujol Harvard Medical School, Brigham and Women’s
Hospital, USA

Clinical Day Chairs


Nicolas Padoy Université de Strasbourg, CNRS, ICube, IHU, France
Lee Swanström IHU Strasbourg, France

Sponsorship Chairs
Anirban Mukhopadhyay Technische Universität Darmstadt, Germany
Yanwu Xu Baidu Inc., China

Young Investigators and Early Career Development


Program Chairs
Marius Linguraru Children’s National Institute, USA
Antonio Porras Children’s National Institute, USA
Daniel Racoceanu Sorbonne Université/Brain Institute, France
Nicola Rieke NVIDIA, Germany
Renee Yao NVIDIA, USA

Social Media Chairs


Alejandro Granados Martinez King’s College London, UK
Shuwei Xing Robarts Research Institute, Canada
Maxence Boels King’s College London, UK

Green Team
Pierre Jannin INSERM, Université de Rennes 1, France
Étienne Baudrier Université de Strasbourg, CNRS, ICube, France

Student Board Liaison


Éléonore Dufresne Université de Strasbourg, CNRS, ICube, France
Étienne Le Quentrec Université de Strasbourg, CNRS, ICube, France
Vinkle Srivastav Université de Strasbourg, CNRS, ICube, France

Submission Platform Manager


Kitty Wong The MICCAI Society, Canada

Virtual Platform Manager


John Baxter INSERM, Université de Rennes 1, France

Program Committee
Ehsan Adeli Stanford University, USA
Iman Aganj Massachusetts General Hospital, Harvard Medical
School, USA
Pablo Arbelaez Universidad de los Andes, Colombia
John Ashburner University College London, UK
Meritxell Bach Cuadra University of Lausanne, Switzerland
Sophia Bano University College London, UK
Adrien Bartoli Université Clermont Auvergne, France
Christian Baumgartner ETH Zürich, Switzerland
Hrvoje Bogunovic Medical University of Vienna, Austria
Weidong Cai University of Sydney, Australia
Gustavo Carneiro University of Adelaide, Australia
Chao Chen Stony Brook University, USA
Elvis Chen Robarts Research Institute, Canada
Hao Chen Hong Kong University of Science and Technology,
Hong Kong SAR
Albert Chung Hong Kong University of Science and Technology,
Hong Kong SAR
Adrian Dalca Massachusetts Institute of Technology, USA
Adrien Depeursinge HES-SO Valais-Wallis, Switzerland
Jose Dolz ÉTS Montréal, Canada
Ruogu Fang University of Florida, USA
Dagan Feng University of Sydney, Australia
Huazhu Fu Inception Institute of Artificial Intelligence,
United Arab Emirates
Mingchen Gao University at Buffalo, The State University
of New York, USA
Guido Gerig New York University, USA
Orcun Goksel Uppsala University, Sweden
Alberto Gomez King’s College London, UK
Ilker Hacihaliloglu Rutgers University, USA
Adam Harrison PAII Inc., USA
Mattias Heinrich University of Lübeck, Germany
Yi Hong Shanghai Jiao Tong University, China
Yipeng Hu University College London, UK
Junzhou Huang University of Texas at Arlington, USA
Xiaolei Huang The Pennsylvania State University, USA
Jana Hutter King’s College London, UK
Madhura Ingalhalikar Symbiosis Center for Medical Image Analysis, India
Shantanu Joshi University of California, Los Angeles, USA
Samuel Kadoury Polytechnique Montréal, Canada
Fahmi Khalifa Mansoura University, Egypt
Hosung Kim University of Southern California, USA
Minjeong Kim University of North Carolina at Greensboro, USA

Ender Konukoglu ETH Zürich, Switzerland


Bennett Landman Vanderbilt University, USA
Ignacio Larrabide CONICET, Argentina
Baiying Lei Shenzhen University, China
Gang Li University of North Carolina at Chapel Hill, USA
Mingxia Liu University of North Carolina at Chapel Hill, USA
Herve Lombaert ÉTS Montréal, Canada, and Inria, France
Marco Lorenzi Inria, France
Le Lu PAII Inc., USA
Xiongbiao Luo Xiamen University, China
Dwarikanath Mahapatra Inception Institute of Artificial Intelligence,
United Arab Emirates
Andreas Maier FAU Erlangen-Nuremberg, Germany
Erik Meijering University of New South Wales, Australia
Hien Nguyen University of Houston, USA
Marc Niethammer University of North Carolina at Chapel Hill, USA
Tingying Peng Technische Universität München, Germany
Caroline Petitjean Université de Rouen, France
Dzung Pham Henry M. Jackson Foundation, USA
Hedyeh Rafii-Tari Auris Health Inc, USA
Islem Rekik Istanbul Technical University, Turkey
Nicola Rieke NVIDIA, Germany
Su Ruan Laboratoire LITIS, France
Thomas Schultz University of Bonn, Germany
Sharmishtaa Seshamani Allen Institute, USA
Yonggang Shi University of Southern California, USA
Darko Stern Technical University of Graz, Austria
Carole Sudre King’s College London, UK
Heung-Il Suk Korea University, South Korea
Jian Sun Xi’an Jiaotong University, China
Raphael Sznitman University of Bern, Switzerland
Amir Tahmasebi Enlitic, USA
Qian Tao Delft University of Technology, The Netherlands
Tolga Tasdizen University of Utah, USA
Martin Urschler University of Auckland, New Zealand
Archana Venkataraman Johns Hopkins University, USA
Guotai Wang University of Electronic Science and Technology
of China, China
Hongzhi Wang IBM Almaden Research Center, USA
Hua Wang Colorado School of Mines, USA
Qian Wang Shanghai Jiao Tong University, China
Yalin Wang Arizona State University, USA
Fuyong Xing University of Colorado Denver, USA
Daguang Xu NVIDIA, USA
Yanwu Xu Baidu, China
Ziyue Xu NVIDIA, USA

Zhong Xue Shanghai United Imaging Intelligence, China


Xin Yang Huazhong University of Science and Technology,
China
Jianhua Yao National Institutes of Health, USA
Zhaozheng Yin Stony Brook University, USA
Yixuan Yuan City University of Hong Kong, Hong Kong SAR
Liang Zhan University of Pittsburgh, USA
Tuo Zhang Northwestern Polytechnical University, China
Yitian Zhao Chinese Academy of Sciences, China
Luping Zhou University of Sydney, Australia
S. Kevin Zhou Chinese Academy of Sciences, China
Dajiang Zhu University of Texas at Arlington, USA
Xiahai Zhuang Fudan University, China
Maria A. Zuluaga EURECOM, France

Reviewers

Alaa Eldin Abdelaal Chloé Audigier


Khalid Abdul Jabbar Kamran Avanaki
Purang Abolmaesumi Angelica Aviles-Rivero
Mazdak Abulnaga Suyash Awate
Maryam Afzali Dogu Baran Aydogan
Priya Aggarwal Qinle Ba
Ola Ahmad Morteza Babaie
Sahar Ahmad Hyeon-Min Bae
Euijoon Ahn Woong Bae
Alireza Akhondi-Asl Junjie Bai
Saad Ullah Akram Wenjia Bai
Dawood Al Chanti Ujjwal Baid
Daniel Alexander Spyridon Bakas
Sharib Ali Yaël Balbastre
Lejla Alic Marcin Balicki
Omar Al-Kadi Fabian Balsiger
Maximilian Allan Abhirup Banerjee
Pierre Ambrosini Sreya Banerjee
Sameer Antani Shunxing Bao
Michela Antonelli Adrian Barbu
Jacob Antunes Sumana Basu
Syed Anwar Mathilde Bateson
Ignacio Arganda-Carreras Deepti Bathula
Mohammad Ali Armin John Baxter
Md Ashikuzzaman Bahareh Behboodi
Mehdi Astaraki Delaram Behnami
Angélica Atehortúa Mikhail Belyaev
Gowtham Atluri Aicha BenTaieb

Camilo Bermudez Ahmad Chaddad


Gabriel Bernardino Jayasree Chakraborty
Hadrien Bertrand Sylvie Chambon
Alaa Bessadok Yi Hao Chan
Michael Beyeler Ming-Ching Chang
Indrani Bhattacharya Peng Chang
Chetan Bhole Violeta Chang
Lei Bi Sudhanya Chatterjee
Gui-Bin Bian Christos Chatzichristos
Ryoma Bise Antong Chen
Stefano B. Blumberg Chang Chen
Ester Bonmati Cheng Chen
Bhushan Borotikar Dongdong Chen
Jiri Borovec Geng Chen
Ilaria Boscolo Galazzo Hanbo Chen
Alexandre Bousse Jianan Chen
Nicolas Boutry Jianxu Chen
Behzad Bozorgtabar Jie Chen
Nathaniel Braman Junxiang Chen
Nadia Brancati Lei Chen
Katharina Breininger Li Chen
Christopher Bridge Liangjun Chen
Esther Bron Min Chen
Rupert Brooks Pingjun Chen
Qirong Bu Qiang Chen
Duc Toan Bui Shuai Chen
Ninon Burgos Tianhua Chen
Nikolay Burlutskiy Tingting Chen
Hendrik Burwinkel Xi Chen
Russell Butler Xiaoran Chen
Michał Byra Xin Chen
Ryan Cabeen Xuejin Chen
Mariano Cabezas Yuhua Chen
Hongmin Cai Yukun Chen
Jinzheng Cai Zhaolin Chen
Yunliang Cai Zhineng Chen
Sema Candemir Zhixiang Chen
Bing Cao Erkang Cheng
Qing Cao Jun Cheng
Shilei Cao Li Cheng
Tian Cao Yuan Cheng
Weiguo Cao Farida Cheriet
Aaron Carass Minqi Chong
M. Jorge Cardoso Jaegul Choo
Adrià Casamitjana Aritra Chowdhury
Matthieu Chabanas Gary Christensen

Daan Christiaens Mengjin Dong


Stergios Christodoulidis Nanqing Dong
Ai Wern Chung Reuben Dorent
Pietro Antonio Cicalese Sven Dorkenwald
Özgün Çiçek Qi Dou
Celia Cintas Simon Drouin
Matthew Clarkson Niharika D’Souza
Jaume Coll-Font Lei Du
Toby Collins Hongyi Duanmu
Olivier Commowick Nicolas Duchateau
Pierre-Henri Conze James Duncan
Timothy Cootes Luc Duong
Luca Corinzia Nicha Dvornek
Teresa Correia Dmitry V. Dylov
Hadrien Courtecuisse Oleh Dzyubachyk
Jeffrey Craley Roy Eagleson
Hui Cui Mehran Ebrahimi
Jianan Cui Jan Egger
Zhiming Cui Alma Eguizabal
Kathleen Curran Gudmundur Einarsson
Claire Cury Ahmed Elazab
Tobias Czempiel Mohammed S. M. Elbaz
Vedrana Dahl Shireen Elhabian
Haixing Dai Mohammed Elmogy
Rafat Damseh Amr Elsawy
Bilel Daoud Ahmed Eltanboly
Neda Davoudi Sandy Engelhardt
Laura Daza Ertunc Erdil
Sandro De Zanet Marius Erdt
Charles Delahunt Floris Ernst
Yang Deng Boris Escalante-Ramírez
Cem Deniz Maria Escobar
Felix Denzinger Mohammad Eslami
Hrishikesh Deshpande Nazila Esmaeili
Christian Desrosiers Marco Esposito
Blake Dewey Oscar Esteban
Neel Dey Théo Estienne
Raunak Dey Ivan Ezhov
Jwala Dhamala Deng-Ping Fan
Yashin Dicente Cid Jingfan Fan
Li Ding Xin Fan
Xinghao Ding Yonghui Fan
Zhipeng Ding Xi Fang
Konstantin Dmitriev Zhenghan Fang
Ines Domingues Aly Farag
Liang Dong Mohsen Farzi

Lina Felsner Sandesh Ghimire


Jun Feng Ali Gholipour
Ruibin Feng Sayan Ghosal
Xinyang Feng Andrea Giovannini
Yuan Feng Gabriel Girard
Aaron Fenster Ben Glocker
Aasa Feragen Arnold Gomez
Henrique Fernandes Mingming Gong
Enzo Ferrante Cristina González
Jean Feydy German Gonzalez
Lukas Fischer Sharath Gopal
Peter Fischer Karthik Gopinath
Antonio Foncubierta-Rodríguez Pietro Gori
Germain Forestier Michael Götz
Nils Daniel Forkert Shuiping Gou
Jean-Rassaire Fouefack Maged Goubran
Moti Freiman Sobhan Goudarzi
Wolfgang Freysinger Dushyant Goyal
Xueyang Fu Mark Graham
Yunguan Fu Bertrand Granado
Wolfgang Fuhl Alejandro Granados
Isabel Funke Vicente Grau
Philipp Fürnstahl Lin Gu
Pedro Furtado Shi Gu
Ryo Furukawa Xianfeng Gu
Jin Kyu Gahm Yun Gu
Laurent Gajny Zaiwang Gu
Adrian Galdran Hao Guan
Yu Gan Ricardo Guerrero
Melanie Ganz Houssem-Eddine Gueziri
Cong Gao Dazhou Guo
Dongxu Gao Hengtao Guo
Linlin Gao Jixiang Guo
Siyuan Gao Pengfei Guo
Yixin Gao Xiaoqing Guo
Yue Gao Yi Guo
Zhifan Gao Yulan Guo
Alfonso Gastelum-Strozzi Yuyu Guo
Srishti Gautam Krati Gupta
Bao Ge Vikash Gupta
Rongjun Ge Praveen Gurunath Bharathi
Zongyuan Ge Boris Gutman
Sairam Geethanath Prashnna Gyawali
Shiv Gehlot Stathis Hadjidemetriou
Nils Gessert Mohammad Hamghalam
Olivier Gevaert Hu Han

Liang Han Yue Huang


Xiaoguang Han Yufang Huang
Xu Han Arnaud Huaulmé
Zhi Han Henkjan Huisman
Zhongyi Han Yuankai Huo
Jonny Hancox Andreas Husch
Xiaoke Hao Mohammad Hussain
Nandinee Haq Raabid Hussain
Ali Hatamizadeh Sarfaraz Hussein
Charles Hatt Khoi Huynh
Andreas Hauptmann Seong Jae Hwang
Mohammad Havaei Emmanuel Iarussi
Kelei He Kay Igwe
Nanjun He Abdullah-Al-Zubaer Imran
Tiancheng He Ismail Irmakci
Xuming He Mobarakol Islam
Yuting He Mohammad Shafkat Islam
Nicholas Heller Vamsi Ithapu
Alessa Hering Koichi Ito
Monica Hernandez Hayato Itoh
Carlos Hernandez-Matas Oleksandra Ivashchenko
Kilian Hett Yuji Iwahori
Jacob Hinkle Shruti Jadon
David Ho Mohammad Jafari
Nico Hoffmann Mostafa Jahanifar
Matthew Holden Amir Jamaludin
Sungmin Hong Mirek Janatka
Yoonmi Hong Won-Dong Jang
Antal Horváth Uditha Jarayathne
Md Belayat Hossain Ronnachai Jaroensri
Benjamin Hou Golara Javadi
William Hsu Rohit Jena
Tai-Chiu Hsung Rachid Jennane
Kai Hu Todd Jensen
Shi Hu Won-Ki Jeong
Shunbo Hu Yuanfeng Ji
Wenxing Hu Zhanghexuan Ji
Xiaoling Hu Haozhe Jia
Xiaowei Hu Jue Jiang
Yan Hu Tingting Jiang
Zhenhong Hu Xiang Jiang
Heng Huang Jianbo Jiao
Qiaoying Huang Zhicheng Jiao
Yi-Jie Huang Amelia Jiménez-Sánchez
Yixing Huang Dakai Jin
Yongxiang Huang Yueming Jin

Bin Jing Bongjin Koo


Anand Joshi Ivica Kopriva
Yohan Jun Kivanc Kose
Kyu-Hwan Jung Mateusz Kozinski
Alain Jungo Anna Kreshuk
Manjunath K N Anithapriya Krishnan
Ali Kafaei Zad Tehrani Pavitra Krishnaswamy
Bernhard Kainz Egor Krivov
John Kalafut Frithjof Kruggel
Michael C. Kampffmeyer Alexander Krull
Qingbo Kang Elizabeth Krupinski
Po-Yu Kao Serife Kucur
Neerav Karani David Kügler
Turkay Kart Hugo Kuijf
Satyananda Kashyap Abhay Kumar
Amin Katouzian Ashnil Kumar
Alexander Katzmann Kuldeep Kumar
Prabhjot Kaur Nitin Kumar
Erwan Kerrien Holger Kunze
Hoel Kervadec Tahsin Kurc
Ashkan Khakzar Anvar Kurmukov
Nadieh Khalili Yoshihiro Kuroda
Siavash Khallaghi Jin Tae Kwak
Farzad Khalvati Yongchan Kwon
Bishesh Khanal Francesco La Rosa
Pulkit Khandelwal Aymen Laadhari
Maksim Kholiavchenko Dmitrii Lachinov
Naji Khosravan Alain Lalande
Seyed Mostafa Kia Tryphon Lambrou
Daeseung Kim Carole Lartizien
Hak Gu Kim Bianca Lassen-Schmidt
Hyo-Eun Kim Ngan Le
Jae-Hun Kim Leo Lebrat
Jaeil Kim Christian Ledig
Jinman Kim Eung-Joo Lee
Mansu Kim Hyekyoung Lee
Namkug Kim Jong-Hwan Lee
Seong Tae Kim Matthew Lee
Won Hwa Kim Sangmin Lee
Andrew King Soochahn Lee
Atilla Kiraly Étienne Léger
Yoshiro Kitamura Stefan Leger
Tobias Klinder Andreas Leibetseder
Bin Kong Rogers Jeffrey Leo John
Jun Kong Juan Leon
Tomasz Konopczynski Bo Li

Chongyi Li Bin Liu


Fuhai Li Chi Liu
Hongming Li Daochang Liu
Hongwei Li Dong Liu
Jian Li Dongnan Liu
Jianning Li Feng Liu
Jiayun Li Hangfan Liu
Junhua Li Hong Liu
Kang Li Huafeng Liu
Mengzhang Li Jianfei Liu
Ming Li Jingya Liu
Qing Li Kai Liu
Shaohua Li Kefei Liu
Shuyu Li Lihao Liu
Weijian Li Mengting Liu
Weikai Li Peng Liu
Wenqi Li Qin Liu
Wenyuan Li Quande Liu
Xiang Li Shengfeng Liu
Xiaomeng Li Shenghua Liu
Xiaoxiao Li Shuangjun Liu
Xin Li Sidong Liu
Xiuli Li Siqi Liu
Yang Li Tianrui Liu
Yi Li Xiao Liu
Yuexiang Li Xinyang Liu
Zeju Li Xinyu Liu
Zhang Li Yan Liu
Zhiyuan Li Yikang Liu
Zhjin Li Yong Liu
Gongbo Liang Yuan Liu
Jianming Liang Yue Liu
Libin Liang Yuhang Liu
Yuan Liang Andrea Loddo
Haofu Liao Nicolas Loménie
Ruizhi Liao Daniel Lopes
Wei Liao Bin Lou
Xiangyun Liao Jian Lou
Roxane Licandro Nicolas Loy Rodas
Gilbert Lim Donghuan Lu
Baihan Lin Huanxiang Lu
Hongxiang Lin Weijia Lu
Jianyu Lin Xiankai Lu
Yi Lin Yongyi Lu
Claudia Lindner Yueh-Hsun Lu
Geert Litjens Yuhang Lu

Imanol Luengo Liang Mi


Jie Luo Stijn Michielse
Jiebo Luo Abhishek Midya
Luyang Luo Fausto Milletari
Ma Luo Hyun-Seok Min
Bin Lv Zhe Min
Jinglei Lv Tadashi Miyamoto
Junyan Lyu Sara Moccia
Qing Lyu Hassan Mohy-ud-Din
Yuanyuan Lyu Tony C. W. Mok
Andy J. Ma Rafael Molina
Chunwei Ma Mehdi Moradi
Da Ma Rodrigo Moreno
Hua Ma Kensaku Mori
Kai Ma Lia Morra
Lei Ma Linda Moy
Anderson Maciel Mohammad Hamed Mozaffari
Amirreza Mahbod Sovanlal Mukherjee
S. Sara Mahdavi Anirban Mukhopadhyay
Mohammed Mahmoud Henning Müller
Saïd Mahmoudi Balamurali Murugesan
Klaus H. Maier-Hein Cosmas Mwikirize
Bilal Malik Andriy Myronenko
Ilja Manakov Saad Nadeem
Matteo Mancini Vishwesh Nath
Tommaso Mansi Rodrigo Nava
Yunxiang Mao Fernando Navarro
Brett Marinelli Amin Nejatbakhsh
Pablo Márquez Neila Dong Ni
Carsten Marr Hannes Nickisch
Yassine Marrakchi Dong Nie
Fabio Martinez Jingxin Nie
Andre Mastmeyer Aditya Nigam
Tejas Sudharshan Mathai Lipeng Ning
Dimitrios Mavroeidis Xia Ning
Jamie McClelland Tianye Niu
Pau Medrano-Gracia Jack Noble
Raghav Mehta Vincent Noblet
Sachin Mehta Alexey Novikov
Raphael Meier Jorge Novo
Qier Meng Mohammad Obeid
Qingjie Meng Masahiro Oda
Yanda Meng Benjamin Odry
Martin Menten Steffen Oeltze-Jafra
Odyssée Merveille Hugo Oliveira
Islem Mhiri Sara Oliveira

Arnau Oliver Matthias Perkonigg


Emanuele Olivetti Mehran Pesteie
Jimena Olveres Jorg Peters
John Onofrey Jens Petersen
Felipe Orihuela-Espina Kersten Petersen
José Orlando Renzo Phellan Aro
Marcos Ortega Ashish Phophalia
Yoshito Otake Tomasz Pieciak
Sebastian Otálora Antonio Pinheiro
Cheng Ouyang Pramod Pisharady
Jiahong Ouyang Kilian Pohl
Xi Ouyang Sebastian Pölsterl
Michal Ozery-Flato Iulia A. Popescu
Danielle Pace Alison Pouch
Krittin Pachtrachai Prateek Prasanna
J. Blas Pagador Raphael Prevost
Akshay Pai Juan Prieto
Viswanath Pamulakanty Sudarshan Sergi Pujades
Jin Pan Elodie Puybareau
Yongsheng Pan Esther Puyol-Antón
Pankaj Pandey Haikun Qi
Prashant Pandey Huan Qi
Egor Panfilov Buyue Qian
Shumao Pang Yan Qiang
Joao Papa Yuchuan Qiao
Constantin Pape Chen Qin
Bartlomiej Papiez Wenjian Qin
Hyunjin Park Yulei Qin
Jongchan Park Wu Qiu
Sanghyun Park Hui Qu
Seung-Jong Park Liangqiong Qu
Seyoun Park Kha Gia Quach
Magdalini Paschali Prashanth R.
Diego Patiño Cortés Pradeep Reddy Raamana
Angshuman Paul Mehdi Rahim
Christian Payer Jagath Rajapakse
Yuru Pei Kashif Rajpoot
Chengtao Peng Jhonata Ramos
Yige Peng Lingyan Ran
Antonio Pepe Hatem Rashwan
Oscar Perdomo Daniele Ravì
Sérgio Pereira Keerthi Sravan Ravi
Jose-Antonio Pérez-Carrasco Nishant Ravikumar
Fernando Pérez-García Harish RaviPrakash
Jorge Perez-Gonzalez Samuel Remedios
Skand Peri Yinhao Ren

Yudan Ren Youngho Seo


Mauricio Reyes Lama Seoud
Constantino Reyes-Aldasoro Ana Sequeira
Jonas Richiardi Maxime Sermesant
David Richmond Carmen Serrano
Anne-Marie Rickmann Muhammad Shaban
Leticia Rittner Ahmed Shaffie
Dominik Rivoir Sobhan Shafiei
Emma Robinson Mohammad Abuzar Shaikh
Jessica Rodgers Reuben Shamir
Rafael Rodrigues Shayan Shams
Robert Rohling Hongming Shan
Michal Rosen-Zvi Harshita Sharma
Lukasz Roszkowiak Gregory Sharp
Karsten Roth Mohamed Shehata
José Rouco Haocheng Shen
Daniel Rueckert Li Shen
Jaime S. Cardoso Liyue Shen
Mohammad Sabokrou Mali Shen
Ario Sadafi Yiqing Shen
Monjoy Saha Yiqiu Shen
Pramit Saha Zhengyang Shen
Dushyant Sahoo Kuangyu Shi
Pranjal Sahu Luyao Shi
Maria Sainz de Cea Xiaoshuang Shi
Olivier Salvado Xueying Shi
Robin Sandkuehler Yemin Shi
Gianmarco Santini Yiyu Shi
Duygu Sarikaya Yonghong Shi
Imari Sato Jitae Shin
Olivier Saut Boris Shirokikh
Dustin Scheinost Suprosanna Shit
Nico Scherf Suzanne Shontz
Markus Schirmer Yucheng Shu
Alexander Schlaefer Alberto Signoroni
Jerome Schmid Wilson Silva
Julia Schnabel Margarida Silveira
Klaus Schoeffmann Matthew Sinclair
Andreas Schuh Rohit Singla
Ernst Schwartz Sumedha Singla
Christina Schwarz-Gsaxner Ayushi Sinha
Michaël Sdika Kevin Smith
Suman Sedai Rajath Soans
Anjany Sekuboyina Ahmed Soliman
Raghavendra Selvan Stefan Sommer
Sourya Sengupta Yang Song

Youyi Song Aleksei Tiulpin


Aristeidis Sotiras Hamid Tizhoosh
Arcot Sowmya Matthew Toews
Rachel Sparks Oguzhan Topsakal
William Speier Antonio Torteya
Ziga Spiclin Sylvie Treuillet
Dominik Spinczyk Jocelyne Troccaz
Jon Sporring Roger Trullo
Chetan Srinidhi Chialing Tsai
Anuroop Sriram Sudhakar Tummala
Vinkle Srivastav Verena Uslar
Lawrence Staib Hristina Uzunova
Marius Staring Régis Vaillant
Johannes Stegmaier Maria Vakalopoulou
Joshua Stough Jeya Maria Jose Valanarasu
Robin Strand Tom van Sonsbeek
Martin Styner Gijs van Tulder
Hai Su Marta Varela
Yun-Hsuan Su Thomas Varsavsky
Vaishnavi Subramanian Francisco Vasconcelos
Gérard Subsol Liset Vazquez Romaguera
Yao Sui S. Swaroop Vedula
Avan Suinesiaputra Sanketh Vedula
Jeremias Sulam Harini Veeraraghavan
Shipra Suman Miguel Vega
Li Sun Gonzalo Vegas Sanchez-Ferrero
Wenqing Sun Anant Vemuri
Chiranjib Sur Gopalkrishna Veni
Yannick Suter Mitko Veta
Tanveer Syeda-Mahmood Thomas Vetter
Fatemeh Taheri Dezaki Pedro Vieira
Roger Tam Juan Pedro Vigueras Guillén
José Tamez-Peña Barbara Villarini
Chaowei Tan Satish Viswanath
Hao Tang Athanasios Vlontzos
Thomas Tang Wolf-Dieter Vogl
Yucheng Tang Bo Wang
Zihao Tang Cheng Wang
Mickael Tardy Chengjia Wang
Giacomo Tarroni Chunliang Wang
Jonas Teuwen Clinton Wang
Paul Thienphrapa Congcong Wang
Stephen Thompson Dadong Wang
Jiang Tian Dongang Wang
Yu Tian Haifeng Wang
Yun Tian Hongyu Wang

Hu Wang Ji Wu
Huan Wang Jian Wu
Kun Wang Jie Ying Wu
Li Wang Pengxiang Wu
Liansheng Wang Xiyin Wu
Linwei Wang Ye Wu
Manning Wang Yicheng Wu
Renzhen Wang Yifan Wu
Ruixuan Wang Tobias Wuerfl
Sheng Wang Pengcheng Xi
Shujun Wang James Xia
Shuo Wang Siyu Xia
Tianchen Wang Wenfeng Xia
Tongxin Wang Yingda Xia
Wenzhe Wang Yong Xia
Xi Wang Lei Xiang
Xiaosong Wang Deqiang Xiao
Yan Wang Li Xiao
Yaping Wang Yiming Xiao
Yi Wang Hongtao Xie
Yirui Wang Lingxi Xie
Zeyi Wang Long Xie
Zhangyang Wang Weidi Xie
Zihao Wang Yiting Xie
Zuhui Wang Yutong Xie
Simon Warfield Xiaohan Xing
Jonathan Weber Chang Xu
Jürgen Weese Chenchu Xu
Dong Wei Hongming Xu
Donglai Wei Kele Xu
Dongming Wei Min Xu
Martin Weigert Rui Xu
Wolfgang Wein Xiaowei Xu
Michael Wels Xuanang Xu
Cédric Wemmert Yongchao Xu
Junhao Wen Zhenghua Xu
Travis Williams Zhoubing Xu
Matthias Wilms Kai Xuan
Stefan Winzeck Cheng Xue
James Wiskin Jie Xue
Adam Wittek Wufeng Xue
Marek Wodzinski Yuan Xue
Jelmer Wolterink Faridah Yahya
Ken C. L. Wong Ke Yan
Chongruo Wu Yuguang Yan
Guoqing Wu Zhennan Yan

Changchun Yang Daoqiang Zhang


Chao-Han Huck Yang Fan Zhang
Dong Yang Guangming Zhang
Erkun Yang Hang Zhang
Fan Yang Huahong Zhang
Ge Yang Jianpeng Zhang
Guang Yang Jiong Zhang
Guanyu Yang Jun Zhang
Heran Yang Lei Zhang
Hongxu Yang Lichi Zhang
Huijuan Yang Lin Zhang
Jiancheng Yang Ling Zhang
Jie Yang Lu Zhang
Junlin Yang Miaomiao Zhang
Lin Yang Ning Zhang
Peng Yang Qiang Zhang
Xin Yang Rongzhao Zhang
Yan Yang Ru-Yuan Zhang
Yujiu Yang Shihao Zhang
Dongren Yao Shu Zhang
Jiawen Yao Tong Zhang
Li Yao Wei Zhang
Qingsong Yao Weiwei Zhang
Chuyang Ye Wen Zhang
Dong Hye Ye Wenlu Zhang
Menglong Ye Xin Zhang
Xujiong Ye Ya Zhang
Jingru Yi Yanbo Zhang
Jirong Yi Yanfu Zhang
Xin Yi Yi Zhang
Youngjin Yoo Yishuo Zhang
Chenyu You Yong Zhang
Haichao Yu Yongqin Zhang
Hanchao Yu You Zhang
Lequan Yu Youshan Zhang
Qi Yu Yu Zhang
Yang Yu Yue Zhang
Pengyu Yuan Yueyi Zhang
Fatemeh Zabihollahy Yulun Zhang
Ghada Zamzmi Yunyan Zhang
Marco Zenati Yuyao Zhang
Guodong Zeng Can Zhao
Rui Zeng Changchen Zhao
Oliver Zettinig Chongyue Zhao
Zhiwei Zhai Fenqiang Zhao
Chaoyi Zhang Gangming Zhao

He Zhao Bo Zhou
Jun Zhao Haoyin Zhou
Li Zhao Hong-Yu Zhou
Qingyu Zhao Kang Zhou
Rongchang Zhao Sanping Zhou
Shen Zhao Sihang Zhou
Shijie Zhao Tao Zhou
Tengda Zhao Xiao-Yun Zhou
Tianyi Zhao Yanning Zhou
Wei Zhao Yuyin Zhou
Xuandong Zhao Zongwei Zhou
Yiyuan Zhao Dongxiao Zhu
Yuan-Xing Zhao Hancan Zhu
Yue Zhao Lei Zhu
Zixu Zhao Qikui Zhu
Ziyuan Zhao Xinliang Zhu
Xingjian Zhen Yuemin Zhu
Guoyan Zheng Zhe Zhu
Hao Zheng Zhuotun Zhu
Jiannan Zheng Aneeq Zia
Kang Zheng Veronika Zimmer
Shenhai Zheng David Zimmerer
Yalin Zheng Lilla Zöllei
Yinqiang Zheng Yukai Zou
Yushan Zheng Lianrui Zuo
Jia-Xing Zhong Gerald Zwettler
Zichun Zhong Reyer Zwiggelaar

Outstanding Reviewers
Neel Dey New York University, USA
Monica Hernandez University of Zaragoza, Spain
Ivica Kopriva Rudjer Boskovich Institute, Croatia
Sebastian Otálora University of Applied Sciences and Arts Western
Switzerland, Switzerland
Danielle Pace Massachusetts General Hospital, USA
Sérgio Pereira Lunit Inc., South Korea
David Richmond IBM Watson Health, USA
Rohit Singla University of British Columbia, Canada
Yan Wang Sichuan University, China

Honorable Mentions (Reviewers)


Mazdak Abulnaga Massachusetts Institute of Technology, USA
Pierre Ambrosini Erasmus University Medical Center, The Netherlands
Hyeon-Min Bae Korea Advanced Institute of Science and Technology,
South Korea
Mikhail Belyaev Skolkovo Institute of Science and Technology, Russia
Bhushan Borotikar Symbiosis International University, India
Katharina Breininger Friedrich-Alexander-Universität Erlangen-Nürnberg,
Germany
Ninon Burgos CNRS, Paris Brain Institute, France
Mariano Cabezas The University of Sydney, Australia
Aaron Carass Johns Hopkins University, USA
Pierre-Henri Conze IMT Atlantique, France
Christian Desrosiers École de technologie supérieure, Canada
Reuben Dorent King’s College London, UK
Nicha Dvornek Yale University, USA
Dmitry V. Dylov Skolkovo Institute of Science and Technology, Russia
Marius Erdt Fraunhofer Singapore, Singapore
Ruibin Feng Stanford University, USA
Enzo Ferrante CONICET/Universidad Nacional del Litoral, Argentina
Antonio Foncubierta-Rodríguez IBM Research, Switzerland
Isabel Funke National Center for Tumor Diseases Dresden, Germany
Adrian Galdran University of Bournemouth, UK
Ben Glocker Imperial College London, UK
Cristina González Universidad de los Andes, Colombia
Maged Goubran Sunnybrook Research Institute, Canada
Sobhan Goudarzi Concordia University, Canada
Vicente Grau University of Oxford, UK
Andreas Hauptmann University of Oulu, Finland
Nico Hoffmann Technische Universität Dresden, Germany
Sungmin Hong Massachusetts General Hospital, Harvard Medical
School, USA
Won-Dong Jang Harvard University, USA
Zhanghexuan Ji University at Buffalo, SUNY, USA
Neerav Karani ETH Zurich, Switzerland
Alexander Katzmann Siemens Healthineers, Germany
Erwan Kerrien Inria, France
Anitha Priya Krishnan Genentech, USA
Tahsin Kurc Stony Brook University, USA
Francesco La Rosa École polytechnique fédérale de Lausanne, Switzerland
Dmitrii Lachinov Medical University of Vienna, Austria
Mengzhang Li Peking University, China
Gilbert Lim National University of Singapore, Singapore
Dongnan Liu University of Sydney, Australia

Bin Lou Siemens Healthineers, USA


Kai Ma Tencent, China
Klaus H. Maier-Hein German Cancer Research Center (DKFZ), Germany
Raphael Meier University Hospital Bern, Switzerland
Tony C. W. Mok Hong Kong University of Science and Technology,
Hong Kong SAR
Lia Morra Politecnico di Torino, Italy
Cosmas Mwikirize Rutgers University, USA
Felipe Orihuela-Espina Instituto Nacional de Astrofísica, Óptica y Electrónica,
Mexico
Egor Panfilov University of Oulu, Finland
Christian Payer Graz University of Technology, Austria
Sebastian Pölsterl Ludwig-Maximilians-Universität, Germany
José Rouco University of A Coruña, Spain
Daniel Rueckert Imperial College London, UK
Julia Schnabel King’s College London, UK
Christina Schwarz-Gsaxner Graz University of Technology, Austria
Boris Shirokikh Skolkovo Institute of Science and Technology, Russia
Yang Song University of New South Wales, Australia
Gérard Subsol Université de Montpellier, France
Tanveer Syeda-Mahmood IBM Research, USA
Mickael Tardy Hera-MI, France
Paul Thienphrapa Atlas5D, USA
Gijs van Tulder Radboud University, The Netherlands
Tongxin Wang Indiana University, USA
Yirui Wang PAII Inc., USA
Jelmer Wolterink University of Twente, The Netherlands
Lei Xiang Subtle Medical Inc., USA
Fatemeh Zabihollahy Johns Hopkins University, USA
Wei Zhang University of Georgia, USA
Ya Zhang Shanghai Jiao Tong University, China
Qingyu Zhao Stanford University, USA
Yushan Zheng Beihang University, China

Mentorship Program (Mentors)


Shadi Albarqouni Helmholtz AI, Helmholtz Center Munich, Germany
Hao Chen Hong Kong University of Science and Technology,
Hong Kong SAR
Nadim Daher NVIDIA, France
Marleen de Bruijne Erasmus MC/University of Copenhagen,
The Netherlands
Qi Dou The Chinese University of Hong Kong,
Hong Kong SAR
Gabor Fichtinger Queen’s University, Canada
Jonny Hancox NVIDIA, UK

Nobuhiko Hata Harvard Medical School, USA


Sharon Xiaolei Huang Pennsylvania State University, USA
Jana Hutter King’s College London, UK
Dakai Jin PAII Inc., China
Samuel Kadoury Polytechnique Montréal, Canada
Minjeong Kim University of North Carolina at Greensboro, USA
Hans Lamecker 1000shapes GmbH, Germany
Andrea Lara Galileo University, Guatemala
Ngan Le University of Arkansas, USA
Baiying Lei Shenzhen University, China
Karim Lekadir Universitat de Barcelona, Spain
Marius George Linguraru Children’s National Health System/George
Washington University, USA
Herve Lombaert ETS Montreal, Canada
Marco Lorenzi Inria, France
Le Lu PAII Inc., China
Xiongbiao Luo Xiamen University, China
Dzung Pham Henry M. Jackson Foundation/Uniformed Services
University/National Institutes of Health/Johns
Hopkins University, USA
Josien Pluim Eindhoven University of Technology/University
Medical Center Utrecht, The Netherlands
Antonio Porras University of Colorado Anschutz Medical
Campus/Children’s Hospital Colorado, USA
Islem Rekik Istanbul Technical University, Turkey
Nicola Rieke NVIDIA, Germany
Julia Schnabel TU Munich/Helmholtz Center Munich, Germany,
and King’s College London, UK
Debdoot Sheet Indian Institute of Technology Kharagpur, India
Pallavi Tiwari Case Western Reserve University, USA
Jocelyne Troccaz CNRS, TIMC, Grenoble Alpes University, France
Sandrine Voros TIMC-IMAG, INSERM, France
Linwei Wang Rochester Institute of Technology, USA
Yalin Wang Arizona State University, USA
Zhong Xue United Imaging Intelligence Co. Ltd, USA
Renee Yao NVIDIA, USA
Mohammad Yaqub Mohamed Bin Zayed University of Artificial
Intelligence, United Arab Emirates, and University
of Oxford, UK
S. Kevin Zhou University of Science and Technology of China, China
Lilla Zollei Massachusetts General Hospital, Harvard Medical
School, USA
Maria A. Zuluaga EURECOM, France
Contents – Part I

Image Segmentation

Noisy Labels are Treasure: Mean-Teacher-Assisted Confident Learning


for Hepatic Vessel Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Zhe Xu, Donghuan Lu, Yixin Wang, Jie Luo, Jagadeesan Jayender,
Kai Ma, Yefeng Zheng, and Xiu Li

TransFuse: Fusing Transformers and CNNs for Medical Image


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Yundong Zhang, Huiye Liu, and Qiang Hu

Pancreas CT Segmentation by Predictive Phenotyping . . . . . . . . . . . . . . . . . 25


Yucheng Tang, Riqiang Gao, Hohin Lee, Qi Yang, Xin Yu, Yuyin Zhou,
Shunxing Bao, Yuankai Huo, Jeffrey Spraggins, Jack Virostko,
Zhoubing Xu, and Bennett A. Landman

Medical Transformer: Gated Axial-Attention for Medical Image


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Jeya Maria Jose Valanarasu, Poojan Oza, Ilker Hacihaliloglu,
and Vishal M. Patel

Anatomy-Constrained Contrastive Learning for Synthetic Segmentation


Without Ground-Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Bo Zhou, Chi Liu, and James S. Duncan

Study Group Learning: Improving Retinal Vessel Segmentation Trained


with Noisy Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Yuqian Zhou, Hanchao Yu, and Humphrey Shi

Multi-phase Liver Tumor Segmentation with Spatial Aggregation


and Uncertain Region Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Yue Zhang, Chengtao Peng, Liying Peng, Huimin Huang, Ruofeng Tong,
Lanfen Lin, Jingsong Li, Yen-Wei Chen, Qingqing Chen, Hongjie Hu,
and Zhiyi Peng

Convolution-Free Medical Image Segmentation Using Transformers . . . . . . . 78


Davood Karimi, Serge Didenko Vasylechko, and Ali Gholipour

Consistent Segmentation of Longitudinal Brain MR Images with


Spatio-Temporal Constrained Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Jie Wei, Feng Shi, Zhiming Cui, Yongsheng Pan, Yong Xia,
and Dinggang Shen

A Multi-branch Hybrid Transformer Network for Corneal Endothelial Cell


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Yinglin Zhang, Risa Higashita, Huazhu Fu, Yanwu Xu, Yang Zhang,
Haofeng Liu, Jian Zhang, and Jiang Liu

TransBTS: Multimodal Brain Tumor Segmentation Using Transformer . . . . . 109


Wenxuan Wang, Chen Chen, Meng Ding, Hong Yu, Sen Zha,
and Jiangyun Li

Automatic Polyp Segmentation via Multi-scale Subtraction Network . . . . . . . 120


Xiaoqi Zhao, Lihe Zhang, and Huchuan Lu

Patch-Free 3D Medical Image Segmentation Driven by Super-Resolution


Technique and Self-Supervised Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Hongyi Wang, Lanfen Lin, Hongjie Hu, Qingqing Chen, Yinhao Li,
Yutaro Iwamoto, Xian-Hua Han, Yen-Wei Chen, and Ruofeng Tong

Progressively Normalized Self-Attention Network for Video


Polyp Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Ge-Peng Ji, Yu-Cheng Chou, Deng-Ping Fan, Geng Chen, Huazhu Fu,
Debesh Jha, and Ling Shao

SGNet: Structure-Aware Graph-Based Network for Airway Semantic


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Zimeng Tan, Jianjiang Feng, and Jie Zhou

NucMM Dataset: 3D Neuronal Nuclei Instance Segmentation at Sub-Cubic


Millimeter Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Zudi Lin, Donglai Wei, Mariela D. Petkova, Yuelong Wu,
Zergham Ahmed, Krishna Swaroop K, Silin Zou, Nils Wendt,
Jonathan Boulanger-Weill, Xueying Wang, Nagaraju Dhanyasi,
Ignacio Arganda-Carreras, Florian Engert, Jeff Lichtman,
and Hanspeter Pfister

AxonEM Dataset: 3D Axon Instance Segmentation of Brain Cortical


Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Donglai Wei, Kisuk Lee, Hanyu Li, Ran Lu, J. Alexander Bae,
Zequan Liu, Lifu Zhang, Márcia dos Santos, Zudi Lin, Thomas Uram,
Xueying Wang, Ignacio Arganda-Carreras, Brian Matejek,
Narayanan Kasthuri, Jeff Lichtman, and Hanspeter Pfister

Improved Brain Lesion Segmentation with Anatomical Priors from


Healthy Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Chenghao Liu, Xiangzhu Zeng, Kongming Liang, Yizhou Yu,
and Chuyang Ye

CarveMix: A Simple Data Augmentation Method for Brain


Lesion Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Xinru Zhang, Chenghao Liu, Ni Ou, Xiangzhu Zeng, Xiaoliang Xiong,
Yizhou Yu, Zhiwen Liu, and Chuyang Ye

Boundary-Aware Transformers for Skin Lesion Segmentation. . . . . . . . . . . . 206


Jiacheng Wang, Lan Wei, Liansheng Wang, Qichao Zhou, Lei Zhu,
and Jing Qin

A Topological-Attention ConvLSTM Network and Its Application


to EM Images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Jiaqi Yang, Xiaoling Hu, Chao Chen, and Chialing Tsai

BiX-NAS: Searching Efficient Bi-directional Architecture for Medical


Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Xinyi Wang, Tiange Xiang, Chaoyi Zhang, Yang Song, Dongnan Liu,
Heng Huang, and Weidong Cai

Multi-task, Multi-domain Deep Segmentation with Shared Representations


and Contrastive Regularization for Sparse Pediatric Datasets. . . . . . . . . . . . . 239
Arnaud Boutillon, Pierre-Henri Conze, Christelle Pons, Valérie Burdin,
and Bhushan Borotikar

TEDS-Net: Enforcing Diffeomorphisms in Spatial Transformers


to Guarantee Topology Preservation in Segmentations . . . . . . . . . . . . . . . . . 250
Madeleine K. Wyburd, Nicola K. Dinsdale, Ana I. L. Namburete,
and Mark Jenkinson

Learning Consistency- and Discrepancy-Context for 2D Organ


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Lei Li, Sheng Lian, Zhiming Luo, Shaozi Li, Beizhan Wang, and Shuo Li

Partially-Supervised Learning for Vessel Segmentation in Ocular Images . . . . 271


Yanyu Xu, Xinxing Xu, Lei Jin, Shenghua Gao, Rick Siow Mong Goh,
Daniel S. W. Ting, and Yong Liu

Unsupervised Network Learning for Cell Segmentation . . . . . . . . . . . . . . . . 282


Liang Han and Zhaozheng Yin

MT-UDA: Towards Unsupervised Cross-modality Medical Image


Segmentation with Limited Source Labels . . . . . . . . . . . . . . . . . . . . . . . . . 293
Ziyuan Zhao, Kaixin Xu, Shumeng Li, Zeng Zeng, and Cuntai Guan

Context-Aware Virtual Adversarial Training for


Anatomically-Plausible Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Ping Wang, Jizong Peng, Marco Pedersoli, Yuanfeng Zhou,
Caiming Zhang, and Christian Desrosiers

Interactive Segmentation via Deep Learning and B-Spline Explicit


Active Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Helena Williams, João Pedrosa, Laura Cattani, Susanne Housmans,
Tom Vercauteren, Jan Deprest, and Jan D’hooge

Multi-compound Transformer for Accurate Biomedical Image


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Yuanfeng Ji, Ruimao Zhang, Huijie Wang, Zhen Li, Lingyun Wu,
Shaoting Zhang, and Ping Luo

kCBAC-Net: Deeply Supervised Complete Bipartite Networks with


Asymmetric Convolutions for Medical Image Segmentation . . . . . . . . . . . . . 337
Pengfei Gu, Hao Zheng, Yizhe Zhang, Chaoli Wang, and Danny Z. Chen

Multi-frame Attention Network for Left Ventricle Segmentation


in 3D Echocardiography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Shawn S. Ahn, Kevinminh Ta, Stephanie Thorn, Jonathan Langdon,
Albert J. Sinusas, and James S. Duncan

Coarse-To-Fine Segmentation of Organs at Risk in Nasopharyngeal


Carcinoma Radiotherapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Qiankun Ma, Chen Zu, Xi Wu, Jiliu Zhou, and Yan Wang

Joint Segmentation and Quantification of Main Coronary Vessels Using


Dual-Branch Multi-scale Attention Network . . . . . . . . . . . . . . . . . . . . . . . . 369
Hongwei Zhang, Dong Zhang, Zhifan Gao, and Heye Zhang

A Spatial Guided Self-supervised Clustering Network for Medical Image


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Euijoon Ahn, Dagan Feng, and Jinman Kim

Comprehensive Importance-Based Selective Regularization for Continual


Segmentation Across Multiple Sites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Jingyang Zhang, Ran Gu, Guotai Wang, and Lixu Gu

ReSGAN: Intracranial Hemorrhage Segmentation with Residuals


of Synthetic Brain CT Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Miika Toikkanen, Doyoung Kwon, and Minho Lee

Refined Local-imbalance-based Weight for Airway Segmentation in CT . . . . 410


Hao Zheng, Yulei Qin, Yun Gu, Fangfang Xie, Jiayuan Sun, Jie Yang,
and Guang-Zhong Yang

Selective Learning from External Data for CT Image Segmentation. . . . . . . . 420


Youyi Song, Lequan Yu, Baiying Lei, Kup-Sze Choi, and Jing Qin

Projective Skip-Connections for Segmentation Along a Subset of Dimensions in Retinal OCT . . . . . . . . . . 431
Dmitrii Lachinov, Philipp Seeböck, Julia Mai, Felix Goldbach, Ursula Schmidt-Erfurth, and Hrvoje Bogunovic

MouseGAN: GAN-Based Multiple MRI Modalities Synthesis and Segmentation for Mouse Brain Structures . . . . . . . . . . 442
Ziqi Yu, Yuting Zhai, Xiaoyang Han, Tingying Peng, and Xiao-Yong Zhang

Style Curriculum Learning for Robust Medical Image Segmentation . . . . . . . . . . 451
Zhendong Liu, Van Manh, Xin Yang, Xiaoqiong Huang, Karim Lekadir, Víctor Campello, Nishant Ravikumar, Alejandro F. Frangi, and Dong Ni

Towards Efficient Human-Machine Collaboration: Real-Time Correction Effort Prediction for Ultrasound Data Acquisition . . . . . . . . . . 461
Yukun Ding, Dewen Zeng, Mingqi Li, Hongwen Fei, Haiyun Yuan, Meiping Huang, Jian Zhuang, and Yiyu Shi

Residual Feedback Network for Breast Lesion Segmentation in Ultrasound Image . . . . . . . . . . 471
Ke Wang, Shujun Liang, and Yu Zhang

Learning to Address Intra-segment Misclassification in Retinal Imaging . . . . . . . . . . 482
Yukun Zhou, Moucheng Xu, Yipeng Hu, Hongxiang Lin, Joseph Jacob, Pearse A. Keane, and Daniel C. Alexander

Flip Learning: Erase to Segment . . . . . . . . . . 493
Yuhao Huang, Xin Yang, Yuxin Zou, Chaoyu Chen, Jian Wang, Haoran Dou, Nishant Ravikumar, Alejandro F. Frangi, Jianqiao Zhou, and Dong Ni

DC-Net: Dual Context Network for 2D Medical Image Segmentation . . . . . . . . . . 503
Rongtao Xu, Changwei Wang, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang

LIFE: A Generalizable Autodidactic Pipeline for 3D OCT-A Vessel Segmentation . . . . . . . . . . 514
Dewei Hu, Can Cui, Hao Li, Kathleen E. Larson, Yuankai K. Tao, and Ipek Oguz

Superpixel-Guided Iterative Learning from Noisy Labels for Medical Image Segmentation . . . . . . . . . . 525
Shuailin Li, Zhitong Gao, and Xuming He

A Hybrid Attention Ensemble Framework for Zonal Prostate Segmentation . . . . . . . . . . 536
Mingyan Qiu, Chenxi Zhang, and Zhijian Song

3D-UCaps: 3D Capsules Unet for Volumetric Image Segmentation . . . . . . . . . . 548
Tan Nguyen, Binh-Son Hua, and Ngan Le

HRENet: A Hard Region Enhancement Network for Polyp Segmentation . . . . . . . . . . 559
Yutian Shen, Xiao Jia, and Max Q.-H. Meng

A Novel Hybrid Convolutional Neural Network for Accurate Organ Segmentation in 3D Head and Neck CT Images . . . . . . . . . . 569
Zijie Chen, Cheng Li, Junjun He, Jin Ye, Diping Song, Shanshan Wang, Lixu Gu, and Yu Qiao

TumorCP: A Simple but Effective Object-Level Data Augmentation for Tumor Segmentation . . . . . . . . . . 579
Jiawei Yang, Yao Zhang, Yuan Liang, Yang Zhang, Lei He, and Zhiqiang He

Modality-Aware Mutual Learning for Multi-modal Medical Image Segmentation . . . . . . . . . . 589
Yao Zhang, Jiawei Yang, Jiang Tian, Zhongchao Shi, Cheng Zhong, Yang Zhang, and Zhiqiang He

Hybrid Graph Convolutional Neural Networks for Landmark-Based Anatomical Segmentation . . . . . . . . . . 600
Nicolás Gaggion, Lucas Mansilla, Diego H. Milone, and Enzo Ferrante

RibSeg Dataset and Strong Point Cloud Baselines for Rib Segmentation from CT Scans . . . . . . . . . . 611
Jiancheng Yang, Shixuan Gu, Donglai Wei, Hanspeter Pfister, and Bingbing Ni

Hierarchical Self-supervised Learning for Medical Image Segmentation Based on Multi-domain Data Aggregation . . . . . . . . . . 622
Hao Zheng, Jun Han, Hongxiao Wang, Lin Yang, Zhuo Zhao, Chaoli Wang, and Danny Z. Chen

CCBANet: Cascading Context and Balancing Attention for Polyp Segmentation . . . . . . . . . . 633
Tan-Cong Nguyen, Tien-Phat Nguyen, Gia-Han Diep, Anh-Huy Tran-Dinh, Tam V. Nguyen, and Minh-Triet Tran

Point-Unet: A Context-Aware Point-Based Neural Network for Volumetric Segmentation . . . . . . . . . . 644
Ngoc-Vuong Ho, Tan Nguyen, Gia-Han Diep, Ngan Le, and Binh-Son Hua

TUN-Det: A Novel Network for Thyroid Ultrasound Nodule Detection . . . . . . . . . . 656
Atefeh Shahroudnejad, Xuebin Qin, Sharanya Balachandran, Masood Dehghan, Dornoosh Zonoobi, Jacob Jaremko, Jeevesh Kapur, Martin Jagersand, Michelle Noga, and Kumaradevan Punithakumar

Distilling Effective Supervision for Robust Medical Image Segmentation with Noisy Labels . . . . . . . . . . 668
Jialin Shi and Ji Wu

On the Relationship Between Calibrated Predictors and Unbiased Volume Estimation . . . . . . . . . . 678
Teodora Popordanoska, Jeroen Bertels, Dirk Vandermeulen, Frederik Maes, and Matthew B. Blaschko

High-Resolution Segmentation of Lumbar Vertebrae from Conventional Thick Slice MRI . . . . . . . . . . 689
Federico Turella, Gustav Bredell, Alexander Okupnik, Sebastiano Caprara, Dimitri Graf, Reto Sutter, and Ender Konukoglu

Shallow Attention Network for Polyp Segmentation . . . . . . . . . . 699
Jun Wei, Yiwen Hu, Ruimao Zhang, Zhen Li, S. Kevin Zhou, and Shuguang Cui

A Line to Align: Deep Dynamic Time Warping for Retinal OCT Segmentation . . . . . . . . . . 709
Heiko Maier, Shahrooz Faghihroohi, and Nassir Navab

Learnable Oriented-Derivative Network for Polyp Segmentation . . . . . . . . . . 720
Mengjun Cheng, Zishang Kong, Guoli Song, Yonghong Tian, Yongsheng Liang, and Jie Chen

LambdaUNet: 2.5D Stroke Lesion Segmentation of Diffusion-Weighted MR Images . . . . . . . . . . 731
Yanglan Ou, Ye Yuan, Xiaolei Huang, Kelvin Wong, John Volpi, James Z. Wang, and Stephen T. C. Wong

Author Index . . . . . . . . . . 743


Image Segmentation
Noisy Labels are Treasure: Mean-Teacher-Assisted Confident Learning for Hepatic Vessel Segmentation

Zhe Xu1,3, Donghuan Lu2(B), Yixin Wang4, Jie Luo3, Jagadeesan Jayender3, Kai Ma2, Yefeng Zheng2, and Xiu Li1(B)

1 Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
li.xiu@sz.tsinghua.edu.cn
2 Tencent Jarvis Lab, Shenzhen, China
caleblu@tencent.com
3 Brigham and Women’s Hospital, Harvard Medical School, Boston, USA
4 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Abstract. Manually segmenting the hepatic vessels from Computed Tomography (CT) is far more expertise-demanding and laborious than for other structures because of the low contrast and complex morphology of vessels, resulting in an extreme lack of high-quality labeled data. Without sufficient high-quality annotations, the usual data-driven learning-based approaches struggle with deficient training. On the other hand, directly introducing additional data with low-quality annotations may confuse the network, leading to undesirable performance degradation. To address this issue, we propose a novel mean-teacher-assisted confident learning framework to robustly exploit noisy labeled data for the challenging hepatic vessel segmentation task. Specifically, with the adapted confident learning assisted by a third party, i.e., the weight-averaged teacher model, the noisy labels in the additional low-quality dataset can be transformed from ‘encumbrance’ to ‘treasure’ via progressive pixel-wise soft correction, thus providing productive guidance. Extensive experiments using two public datasets demonstrate the superiority of the proposed framework as well as the effectiveness of each component.

Keywords: Hepatic vessel · Noisy label · Confident learning

1 Introduction

Segmenting hepatic vessels from Computed Tomography (CT) is essential to many hepatic surgeries such as liver resection and transplantation. Benefiting from large amounts of high-quality (HQ) pixel-wise labeled data, deep learning has greatly advanced automatic abdominal segmentation for various structures, such as the liver, kidney and spleen [5,9,13,16].

Z. Xu—This work was done as a research intern at Tencent Jarvis Lab.



Fig. 1. 2D and 3D visualization of the processed example cases of (a) the 3DIRCADb dataset [1] with high-quality annotations (Set-HQ), and (b) the MSD8 dataset [21] with numerous mislabeled and unlabeled pixels (Set-LQ). Red represents the labeled vessels, while the yellow arrows in (b) point to some unlabeled pixels. (Color figure online)

Unfortunately, owing to the noise in CT images, pathological variations, and the poor contrast and complex morphology of vessels, manually delineating the hepatic vessels is far more expertise-demanding, laborious and error-prone than for other structures. Thus, only a limited amount of data with HQ pixel-wise hepatic vessel annotations, as exemplified in Fig. 1(a), is available. Most data, as exemplified in Fig. 1(b), have considerable unlabeled or mislabeled pixels, also known as “noises”.
For a typical fully-supervised segmentation method, training with a tiny HQ labeled dataset often results in overfitting and inferior performance. However, additionally introducing data with low-quality (LQ) annotations may provide undesirable guidance and offset the efficacy of the HQ labeled data. Experimentally, the considerable noise becomes an ‘encumbrance’ for training, leading to substantial performance degradation, as shown in Fig. 3 and Table 1. Therefore, how to robustly exploit the additional information in the abundant LQ noisy labeled data remains an open challenge.

Related Work. Owing to the lack of HQ labeled data and the complex vessel morphology, few efforts have been made on hepatic vessel segmentation. Huang et al. applied a U-Net with a new variant of the Dice loss to balance the foreground (vessel) and background (liver) classes [8]. Kitrungrotsakul et al. used three deep networks to extract vessel features from different planes of hepatic CT images [11]. Because data with LQ annotations were discarded on account of their potentially misleading guidance, only 10 and 5 HQ labeled volumes were used for training in [8] and [11], respectively, resulting in unsatisfactory performance. To introduce auxiliary image information from additional datasets, Semi-Supervised Learning (SSL) [4,22,24] is a promising technique. However, standard SSL-based methods fail to exploit the potentially useful information in the noisy labels. To make full use of LQ labeled data, several efforts have been made to alleviate the negative effects of noisy labels, such as assigning lower weights to noisy labeled samples [18,28], modeling the label-corrupting process [7] and confident learning [17]. However, these studies focused on image-level noise identification, whereas the localization of pixel-wise noise is necessary for the segmentation task.

In this paper, we propose a novel Mean-Teacher-assisted Confident Learning (MTCL) framework for hepatic vessel segmentation to leverage the additional ‘cumbrous’ noisy labels in LQ labeled data. Specifically, our framework shares the same architecture as the mean-teacher model [22]. By encouraging consistent segmentation under different perturbations for the same input, the network can additionally exploit the image information of the LQ labeled data. Then, assisted by the weight-averaged teacher model, we adapt the Confident Learning (CL) technique [17], which was initially proposed for removing noisy labels in image-level classification, to characterize the pixel-wise label noises based on the Classification Noise Process (CNP) assumption [3]. With the guidance of the identified noise map, the proposed Smoothly Self-Denoising Module (SSDM) progressively transforms the LQ labels from ‘encumbrance’ to ‘treasure’, allowing the network to robustly leverage the additional noisy labels towards superior segmentation performance. We conduct extensive experiments on two public datasets with hepatic vessel annotations [1,21]. The results demonstrate the superiority of the proposed framework as well as the effectiveness of each component.

2 Methods
The experimental materials, the hepatic CT preprocessing approach, and the proposed Mean-Teacher-assisted Confident Learning (MTCL) framework are described in the following three subsections, respectively.

2.1 Materials
Two public datasets, 3DIRCADb [1] and MSD8 [21], with markedly different annotation quality (shown in Fig. 1) are used in this study; they are referred to as Set-HQ (i.e., high quality) and Set-LQ (i.e., low quality), respectively.
1) Set-HQ: 3DIRCADb [1]. The first dataset, 3DIRCADb, maintained by the French Institute of Digestive Cancer Treatment, serves as Set-HQ. It consists of only 20 contrast-enhanced hepatic CT scans with high-quality liver and vessel annotations. In this dataset, all volumes share the same axial slice size (512 × 512 pixels), while the pixel spacing varies from 0.57 to 0.87 mm, the slice thickness from 1 to 4 mm, and the slice number from 74 to 260.
2) Set-LQ: MSD8 [21]. The second dataset, MSD8, provides 443 hepatic CT scans collected from the Memorial Sloan Kettering Cancer Center and serves as Set-LQ. The properties of its CT scans are similar to those of 3DIRCADb, but the annotations are of low quality. According to the statistics in [15], around 65.5% of the vessel pixels are unlabeled and approximately 8.5% are mislabeled as vessels, which made laborious manual refinement necessary in previous work [15].
In our experiments, the images in Set-HQ are randomly divided into two groups: 10 cases for training and the remaining 10 cases for testing. All samples in Set-LQ are used only for training, since their original low-quality noisy labels are not appropriate for unbiased evaluation [15].

Fig. 2. Illustration of the proposed Mean-Teacher-assisted Confident Learning (MTCL) framework for hepatic vessel segmentation.

2.2 Hepatic CT Preprocessing

A standard preprocessing strategy is first applied to all CT images: (1) the images are masked and cropped to the liver region based on the liver segmentation masks. Note that for the MSD8 dataset, the liver masks are obtained with the trained H-DenseUNet model [13] because no manual annotation of the liver is provided. All cropped images are adjusted to 320 × 320 × D, where D denotes the slice number. Since the slice thickness varies greatly, we do not perform any resampling, in order to avoid the potential artifacts [23] caused by interpolation. (2) The intensity of each pixel is truncated to the range [−100, 250] HU, followed by Min-Max normalization.
However, we observe that many cases have different intensity ranges (shown in Fig. 1) and intrinsic image noise [6], which could drive the model to be over-sensitive to high-intensity regions, as demonstrated in Table 1 and Fig. 3. Therefore, a vessel probability map based on the Sato tubeness filter [20] is introduced to provide auxiliary information. By analyzing the eigenvalues of the Hessian matrix, the similarity of local image structures to tubes can be measured, so that potential vessel regions are enhanced with high probability (illustrated in Fig. 2). Following the input-level fusion strategy used in other multimodal segmentation tasks [27], we regard the vessel probability map as an auxiliary modality and directly concatenate it with the processed CT image in the original input space. By jointly considering the information in both the images and the probability maps, the network can perceive more robust vessel signals and achieve better segmentation performance (demonstrated in Table 1 and Fig. 3).
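As a rough illustration of this two-channel input construction, the sketch below uses NumPy and the `sato` filter from scikit-image; the function name, the Sato sigma range, and the slice-wise filtering are our assumptions rather than details given in the paper.

```python
import numpy as np
from skimage.filters import sato

def build_two_channel_input(ct_volume, liver_mask):
    """Hedged sketch of Sec. 2.2: crop to the liver, truncate to [-100, 250] HU,
    Min-Max normalize, and stack a Sato-tubeness vessel probability map."""
    # (1) Mask out non-liver voxels and crop to the liver bounding box.
    masked = np.where(liver_mask > 0, ct_volume, ct_volume.min())
    zs, ys, xs = np.nonzero(liver_mask)
    crop = masked[zs.min():zs.max() + 1, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # (The paper then adjusts each slice to 320 x 320; omitted here for brevity.)

    # (2) Truncate intensities and Min-Max normalize to [0, 1].
    crop = np.clip(crop, -100.0, 250.0)
    image = (crop - crop.min()) / (crop.max() - crop.min() + 1e-8)

    # Vessel probability map from the Sato tubeness filter, applied slice-wise here
    # (sigma range is an assumed choice).
    prob = np.stack([sato(s, sigmas=range(1, 4), black_ridges=False) for s in image])
    prob = (prob - prob.min()) / (prob.max() - prob.min() + 1e-8)

    # Input-level fusion: the probability map is treated as an auxiliary channel.
    return np.stack([image, prob], axis=0)  # shape: (2, D', H', W')
```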

2.3 Mean-Teacher-assisted Confident Learning Framework

Learn from Images of Set-LQ. To additionally exploit the image information of Set-LQ, the mean-teacher model (MT) [22] is adopted as our basic architecture with U-Net [19] as the backbone, as shown in Fig. 2. Denoting the weights of the student model at training step t as θ_t, an Exponential Moving Average (EMA) is applied to update the teacher model’s weights θ′_t, formulated as θ′_t = αθ′_{t−1} + (1 − α)θ_t, where α is the EMA decay rate, set to 0.99 as recommended by [22]. By encouraging the teacher model’s temporal-ensemble prediction to be consistent with that of the student model under different perturbations of the same inputs (e.g., adding random noise ξ to the input samples), superior prediction performance can be achieved, as demonstrated in previous studies [22,24,25]. As shown in Fig. 2, the student model is optimized by minimizing the supervised loss Ls on Set-HQ, together with the (unsupervised) consistency loss Lc between the predictions of the student and teacher models on both datasets.
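A minimal PyTorch-style sketch of the EMA teacher update and the perturbation-consistency term described above; the function names and the Gaussian-noise perturbation details are illustrative assumptions, not the authors' released code.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(alpha).add_(sp, alpha=1.0 - alpha)

def consistency_loss(student, teacher, images, noise_std=0.1):
    """Voxel-wise MSE between student and teacher predictions under
    independent input perturbations (additive Gaussian noise xi, assumed form)."""
    s_prob = torch.softmax(student(images + noise_std * torch.randn_like(images)), dim=1)
    with torch.no_grad():
        t_prob = torch.softmax(teacher(images + noise_std * torch.randn_like(images)), dim=1)
    return torch.mean((s_prob - t_prob) ** 2)
```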

Learn from Progressively Self-Denoised Soft Labels of Set-LQ. The above MT model can only leverage the image information, while the potentially useful information in the noisy labels remains unexploited. To further leverage the LQ annotations without being affected by label noise, we propose a progressive self-denoising process to alleviate the potentially misleading guidance. Inspired by the arbitration-based manual annotation procedure, in which a third party, e.g., a senior radiologist, is consulted for disputed cases, the teacher model serves as the ‘third party’ here and provides guidance for identifying label noise. With its assistance, we adapt Confident Learning [17], which was initially proposed for pruning mislabeled samples in image-level classification, to characterize pixel-wise label noise based on the Classification Noise Process (CNP) assumption [3]. The self-denoising process can be formulated as follows:
(1) Characterize the pixel-wise label errors via adapted CL. First, we estimate the joint distribution Q_ỹ,y∗ between the noisy (observed) labels ỹ and the true (latent) labels y∗. Given a dataset X := (x, ỹ)^n consisting of n samples x with m-class noisy labels ỹ, the out-of-sample predicted probabilities P̂ can be obtained via the ‘third party’, i.e., our teacher model. Ideally, such a third party is also jointly enhanced during training. If a sample x with label ỹ = i has a sufficiently large p̂_j(x) ≥ t_j, the true latent label y∗ of x can be suspected to be j instead of i. Here, the threshold t_j is the average (expected) predicted probability p̂_j(x) over the samples labeled ỹ = j, i.e., t_j := (1 / |X_ỹ=j|) Σ_{x∈X_ỹ=j} p̂_j(x). Based on the predicted labels, we further introduce the confusion matrix C_ỹ,y∗, where C_ỹ,y∗[i][j] is the number of samples x labeled as i (ỹ = i) whose true latent label may be j (y∗ = j). Formally, C_ỹ,y∗ is defined as

C_ỹ,y∗[i][j] := |X̂_ỹ=i,y∗=j|,   where   X̂_ỹ=i,y∗=j := { x ∈ X_ỹ=i : p̂_j(x) ≥ t_j,  j = argmax_{l∈M: p̂_l(x)≥t_l} p̂_l(x) }.    (1)

With the constructed confusion matrix C_ỹ,y∗, we can further estimate the m × m joint distribution matrix Q_ỹ,y∗ for p(ỹ, y∗):

Q_ỹ,y∗[i][j] = ( (C_ỹ,y∗[i][j] / Σ_{j′∈M} C_ỹ,y∗[i][j′]) · |X_ỹ=i| ) / ( Σ_{i∈M, j∈M} (C_ỹ,y∗[i][j] / Σ_{j′∈M} C_ỹ,y∗[i][j′]) · |X_ỹ=i| ).    (2)

Then, we utilize the Prune by Class (PBC) method [17] recommended by [26] to identify the label noise. Specifically, for each class i ∈ M, PBC selects the n · Σ_{j∈M: j≠i} Q_ỹ,y∗[i][j] samples with the lowest self-confidence p̂(ỹ = i; x ∈ X_i) as wrongly labeled, thereby obtaining the binary noise identification map X_n, in which “1” indicates that the pixel has a wrong label and “0” that it does not. It is worth noting that the adapted CL module is computationally efficient and does not require any extra hyper-parameters.
(2) Smoothly refine the noisy labels of Set-LQ to provide rewarding supervision. Experimentally, CL still carries uncertainty in distinguishing label noise. Therefore, instead of directly imposing a hard correction, we introduce the Smoothly Self-Denoising Module (SSDM) to impose a soft correction [2] on the given noisy segmentation masks ỹ. Based on the binary noise identification map X_n, the smoothly self-denoising operation is formulated as

ẏ(x) = ỹ(x) + I(x ∈ X_n) · (−1)^ỹ · τ,    (3)

where I(·) is the indicator function and τ ∈ [0, 1] is the smoothing factor, empirically set to 0.8. After that, the updated soft-corrected LQ labels of Set-LQ are used to provide the auxiliary CL guidance Lcl to the student model.
(3) Self-loop updating. With the proposed SSDM, we construct a self-loop updating process that substitutes the denoised labels for the noisy labels of Set-LQ in the next training epoch, so that the framework progressively refines the noisy vessel labels during training.
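The self-denoising pass can be sketched as follows for the binary background/vessel case, treating pixels as samples; this is our illustrative reading of Eqs. (1)–(3) with hypothetical array names, not the authors' implementation, and it omits engineering details such as how the pixels are batched.

```python
import numpy as np

def estimate_noise_map(probs, noisy_labels):
    """Adapted CL (Eqs. 1-2) plus Prune by Class.
    probs: (n, m) teacher out-of-sample probabilities; noisy_labels: (n,) in {0..m-1}.
    Returns a boolean map where True marks a suspected wrong label."""
    n, m = probs.shape
    # Per-class thresholds t_j: mean predicted probability over pixels labeled j.
    t = np.array([probs[noisy_labels == j, j].mean() for j in range(m)])

    # Confusion counts C[i][j]: labeled i, confidently predicted j (Eq. 1).
    passed = probs >= t
    y_star = np.where(passed, probs, -np.inf).argmax(axis=1)
    valid = passed.any(axis=1)
    C = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            C[i, j] = np.count_nonzero(valid & (noisy_labels == i) & (y_star == j))

    # Joint distribution Q (Eq. 2): row-normalize, rescale by class counts, normalize.
    row = C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)
    counts = np.array([(noisy_labels == i).sum() for i in range(m)], dtype=float)
    Q = row * counts[:, None]
    Q = Q / max(Q.sum(), 1e-8)

    # Prune by Class: per class i, flag the n * sum_{j != i} Q[i][j] pixels
    # with the lowest self-confidence p_hat(y~ = i).
    noise = np.zeros(n, dtype=bool)
    for i in range(m):
        k = int(n * (Q[i].sum() - Q[i, i]))
        idx = np.where(noisy_labels == i)[0]
        if k > 0 and idx.size > 0:
            worst = idx[np.argsort(probs[idx, i])][:min(k, idx.size)]
            noise[worst] = True
    return noise

def smooth_self_denoise(noisy_labels, noise_map, tau=0.8):
    """SSDM soft correction (Eq. 3) for binary labels: flagged pixels move
    tau of the way toward the opposite class (0 -> 0.8, 1 -> 0.2)."""
    soft = noisy_labels.astype(np.float32)
    correction = np.where(soft == 0, tau, -tau)
    return np.where(noise_map, soft + correction, soft)
```

In this reading, the returned soft labels would replace the stored Set-LQ labels for the next training epoch, realizing the self-loop update.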

2.4 Loss Function

The total loss is a weighted combination of the supervised loss Ls on Set-HQ, the perturbation consistency loss Lc on both datasets, and the auxiliary self-denoised CL loss Lcl on Set-LQ:

L = Ls + λc Lc + λcl Lcl,    (4)

where λc and λcl are the trade-off weights for Lc and Lcl, respectively. We adopt a time-dependent Gaussian function [4] to schedule the ramp-up weight λc. Meanwhile, the teacher model needs to be “warmed up” to provide reliable out-of-sample predicted probabilities; therefore, λcl is set to 0 for the first 4,000 iterations and to 0.5 for the remaining iterations. Note that the supervised loss Ls is a combination of cross-entropy loss, Dice loss, focal loss [14] and boundary loss [10] with weights of 0.5, 0.5, 1 and 0.5, respectively, as this combination provided better performance in our exploratory experiments with the fully supervised baseline. The consistency loss Lc is the voxel-wise Mean Squared Error (MSE), and the CL loss Lcl is composed of cross-entropy loss and focal loss with equal weights.
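To make the scheduling concrete, here is a small sketch of Eq. (4) with the ramp-up and warm-up described above; the maximum consistency weight (0.1) and the exact Gaussian form exp(-5(1-t)^2) are common choices from the cited semi-supervised literature rather than values stated in this excerpt, and all names are placeholders.

```python
import numpy as np

def gaussian_rampup(step, rampup_steps=4000, lam_max=0.1):
    """Time-dependent Gaussian ramp-up used to schedule lambda_c (assumed form)."""
    t = np.clip(step / rampup_steps, 0.0, 1.0)
    return lam_max * float(np.exp(-5.0 * (1.0 - t) ** 2))

def combine_losses(loss_sup, loss_consistency, loss_cl, step, warmup_iters=4000):
    """Eq. (4): L = Ls + lambda_c * Lc + lambda_cl * Lcl.
    loss_sup is itself the weighted sum 0.5*CE + 0.5*Dice + 1.0*focal + 0.5*boundary
    on Set-HQ; loss_cl is CE + focal against the soft-corrected Set-LQ labels."""
    lam_c = gaussian_rampup(step)
    lam_cl = 0.0 if step < warmup_iters else 0.5  # teacher warm-up before using CL guidance
    return loss_sup + lam_c * loss_consistency + lam_cl * loss_cl
```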

Table 1. Quantitative results of different methods. Best results are shown in bold.

Method              Dice ↑   PRE ↑    ASD ↓    HD ↓
Huang et al. [8]    0.5991   0.6352   2.5477   10.5088
U-Net(i)            0.6093   0.5601   2.7209   10.3103
U-Net(p)            0.6082   0.5553   2.3574   10.2864
U-Net(c)            0.6685   0.6699   2.0463   9.2078
U-Net(c, Mix)       0.6338   0.6322   1.6040   9.2038
MT(c)               0.6963   0.6931   1.4860   7.5912
MT(c)+NL w/o CL     0.6807   0.7270   1.3205   8.0893
MTCL(c) w/o SSDM    0.7046   0.7472   1.2252   8.3667
MTCL(c)             0.7245   0.7570   1.1718   7.2111

Fig. 3. Visualization of the fused segmentation results of different methods (columns, left to right: Img, Prob Map, GT, U-Net(i), U-Net(p), U-Net(c), U-Net(c, Mix), MTCL(c) (ours)). The red voxels represent the ground truth, while the green voxels denote the difference between the ground truth and the segmented vessels of each method. (Color figure online)

3 Experiments and Results

Evaluation Metrics and Implementation. For inference, the student model segments each volume slice-by-slice, and the slice-wise segmentations are concatenated back into a 3D volume. Then, a post-processing step that removes very small regions (less than 0.1% of the volume size) is performed. We adopt four metrics for a comprehensive evaluation: Dice score, Precision (PRE), Average Surface Distance (ASD) and Hausdorff Distance (HD). The framework is based on the PyTorch implementation of [25] and trained on an NVIDIA Titan X GPU. The SGD optimizer is adopted and the batch size is set to 4. Standard data augmentation, including random flipping and rotation, is applied. Our implementation is publicly available at https://github.com/lemoshu/MTCL.
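The small-region removal step could look like the following sketch, using connected-component labeling from SciPy; the 26-connectivity choice is an assumption, and only the 0.1% volume threshold comes from the text.

```python
import numpy as np
from scipy import ndimage

def remove_small_regions(binary_seg, min_fraction=0.001):
    """Drop connected components smaller than 0.1% of the volume size."""
    structure = np.ones((3, 3, 3), dtype=bool)  # 26-connectivity (assumed)
    labeled, num = ndimage.label(binary_seg, structure=structure)
    if num == 0:
        return binary_seg
    sizes = ndimage.sum(binary_seg, labeled, index=range(1, num + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_fraction * binary_seg.size]
    return np.isin(labeled, keep).astype(binary_seg.dtype)
```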

Comparison Study. A comprehensive qualitative and quantitative comparison study is performed on the hold-out test set of Set-HQ, as shown in Fig. 3 and Table 1. Succinctly, “i”, “p” and “c” denote different input types: the processed image, the vessel probability map, and their concatenation, respectively.
You might also like